Tag Archives: copyright

IPR systems around the world; report from the Office of the United States Trade Representative (USTR)

I first saw this item via the AgIP news agency.  It’s aggregated from the Office of the United States Trade Representative: “USTR Releases Annual Special 301 Report on Intellectual Property Rights”.

The annual report looks into how well trading partners protect IPRs.  To compile the report, the US Trade Representative’s office reviewed 77 trading partners.  Of particular interest is the following section, which can be viewed online:

It includes details of action plans and initiatives that the US government is undertaking to strengthen IPR regimes in the countries it trades with, including capacity-building efforts.

Avoiding plagiarism; ethics, attribution and authorship

SciDev.net recently posted an item entitled “Plagiarised scientific papers plague India”, dealing with some recent plagiarism controversies, and bemoaning a lack of intervention at government level.

“Nandula Raghuram, secretary of the Delhi-based Society for Scientific Values, an independent watchdog, told SciDev.Net that the Indian government has not heeded calls for an independent ethics body in the country.”

Attribution and authorship are critical issues for any research organisation anywhere in the world, but in practical terms how can we make sure people are properly credited for their work?  High-profile plagiarism cases certainly bring the issue out into the open, but nobody wants to see those (apart from the original authors, perhaps!).

In August 2010 the SGRP published a “Booklet of CGIAR Centre Policy Instruments, Guidelines and Statements on Genetic Resources, Biotechnology and Intellectual Property Rights”, of which ethics forms a part.  However, these statements don’t deal with issues at the level of plagiarism or copyright.

An old post on the Scientific Misconduct blog, ““We promise to be honest” at the University of Toronto – is it enough?”, discusses the University of Toronto’s use of an “honest oath” – an interesting way of making sure people are aware of their responsibilities.  Dealing with this issue at the time contracts are signed certainly confirms awareness, but how effective is it?

We need to pay attention to these issues.  As the role of social media and knowledge sharing increases, we will need to pay even more attention to make sure individual creators are not forgotten.

Open Rights Group Workshop; 24th July 2010

On Saturday 24th July (despite it being my sixth wedding anniversary!) I attended a workshop on the current challenges of Copyright and Digital Rights. The workshop was organised by the Open Rights Group, a not-for-profit organisation which promotes and defends freedom of expression, privacy, innovation, consumers’ rights, and creativity on the net. You can read more about them here: http://www.openrightsgroup.org/about

The day was structured as a series of presentations and small group discussions. Topics of debate included the new UK Digital Economy Act, the Anti-Counterfeiting Trade Agreement (ACTA), and the open access data movement.

From the programme I believe that our readers will be most interested in Professor Boyle’s presentation on the shrinking of the public domain, and the discussion on open data and access to information led by the Open Knowledge Foundation Network (OKFN).  I think the discussion on ACTA could have been very interesting if more information and facts had been given – much of the discussion centred on the claim that the agreement needs to be changed, even though nobody is fully certain of its precise content, which is still (more or less) secret!

Professor Boyle rightly argued that we currently live in a paradox: although we generate more and more information and knowledge that enables us to be more creative and innovative, we are moving towards stricter controls to protect that information and knowledge.  Instead of setting it free, we are locking it away. Most jurisdictions have seen an ongoing extension of the term of copyright, from 28 years (renewable for a further 28) to the author’s life plus 70 years, which seems to go against the original principle of promoting culture through innovation.

The decision not to deal seriously and practically with the problem of “orphan works” is another symptom of this general over-protection of authors to the detriment of users. In this atmosphere of vagueness and uncertainty, Professor Boyle praised Creative Commons for providing a simple solution to the complexities of copyright, and called for a real harmonisation of the exceptions, namely the situations which do not amount to breach of copyright (e.g. fair use).

Professor Boyle gave a very similar presentation at UC Davis, which can be seen on YouTube: http://www.youtube.com/watch?v=_OInGKMc1Wo

The open data discussion focused on the importance of keeping data freely and easily accessible and usable. The reasons given included: that such data is usually generated through public funding; that information which is already freely available cannot be locked away; and that the more openly and freely information circulates, the more additional information and knowledge is generated.

I felt that the session was quite good, but that a lot of collateral issues ought to have been discussed, or at least touched on if time was a constraint. For example, what happens if data has been generated using funding from the private sector as well? Is it really easy to distinguish such data from confidential and/or sensitive data? Should a debate on releasing, as opposed to sharing, data also be tabled? What do data releasers actually want users to be able to do? (Copy in part? In full? Download and modify?)  Perhaps the OKFN does not have enough experience in the area of genetic material and access & benefit sharing, where there are many concerns (possibly unfounded), such as biopiracy, which can be real obstacles to the openness and sharing of data. I hope there will be other occasions in the future to address all these questions with the OKFN.

Many thanks to Joe McNamee (Advisor to European Digital Rights), who invited me to the event!

Post written by Francesca Re Manning of CAS-IP

Innovation in the absence of copyright protection

“Who Owns the Korean Taco?”  This story isn’t agriculture, but it is food, and it illustrates an interesting dynamic with regard to innovation in the absence of copyright protection.

Whilst using the example of a lucrative innovation in the LA fast food industry, the authors point out that “recipes are unprotected by copyright, and so anyone can copy another’s recipe” and they therefore ask:

“Why do chefs continue to invent new dishes when others are free to copy them?”

Why indeed?  They go on to note:

 “…the conventional wisdom says that in a system like this no one should innovate. Copyright’s raison d’etre is to promote creativity by protecting creators from pirates. But in the food world, pirates are everywhere. By this logic, we ought to be consigned to uninspired and traditional food choices…but the real world does not follow this logic. In fact, we live in a golden age of cuisine”

What I find interesting is that, in the absence of a formal system, informal norms step in to provide regulation.  In addition, as we have often discussed in our work, the creation of added value is what differentiates products in an otherwise unprotected market; i.e. the building of a brand.  The authors outline what they see as the reasons why chefs are not deterred from innovating despite the lack of formal protection:

“…There is no such thing as an exact copy of a dish. Indeed, the same restaurant will turn out differing versions of a signature recipe depending on who’s behind the stove…Copies are inherently imperfect.

Second, food is enjoyed in a context…we are purchasing more than just the cuisine: the ambience, the scene, the service and so forth all combine to make the experience. Copies of a dish, no matter how good, cannot reproduce that overall bundle of goods. (And the law of “trade dress,” a version of trademark, protects the distinctive appearance of a restaurant’s décor.)…

Third, chefs, particularly at the high-end, appear to have certain norms about what kinds of copies are acceptable. In a fascinating paper, two professors looked at top chefs in Paris. They found that a system of social norms existed that constrained copying and enforced rules about attribution.”

Examples such as these are useful for us to take into consideration when exploring innovation in a non/under-protected environment.  (Fast) food for thought!

(Thanks to Victoria Henson-Apollonio for sending me the link.)

The article references a related story on the fashion industry which we blogged about a while back: “Necessary piracy?”

IP issues in social media networks

I listened to and watched a very interesting presentation from Venable LLP’s Jeff Tenenbaum and A.J. Zottola – a webinar entitled “The Legal Aspects of Social Media: What Every Association Needs to Know“.

This is relatively new territory, and the legal world is applying existing rules to new platforms, so things are only now starting to become clear.  One area of particular relevance to us is copyright.  In the presentation slides they say:

“Be mindful of copyright ownership.  Social media is primarily about the content… who owns work on social media?”

They went on to talk about the need for a written assignment of rights to clarify this.  But as we know, the copyright issue goes further than this.  Social media platforms are created to share content; it is therefore imperative for those participating in social networks to be clear about who owns the content they are disseminating, and about the permissible uses of that content by others. The rules of the third-party platform also need to be considered.

Again, this isn’t new per se, but it’s common for the use of these tools to grow organically.  Perhaps when starting with social media it wasn’t clear how much the platform would or could be used, and therefore IP considerations may not have seemed important.  However, if use of a social media platform has grown and become part of a larger function of your project or organisation, it is probably time to revisit that tool and consider terms of use, policies and disclaimers (where relevant).

Venable include a useful Checklist for Social Media Legal Notices and Policies, which you might like to consult.

(Thanks to Victoria for sending me the link.)

U.S. refocusing on IP enforcement — including “enforcement across borders”

I first read on IP-Watch about the recent release of a national intellectual property enforcement strategy from the US government.

IP-Watch highlight that:

“The strategy  encompasses 33 enforcement strategy action items that fall within six categories of focus for the United States: (1) leading by example; (2) increasing transparency; (3) ensuring efficiency and coordination; (4) enforcing our rights internationally; (5) securing our supply chain; and (6) building a data-driven government.”

A complete copy of the strategic plan can be viewed here.
The “Enforcing Our Rights Internationally” section is particularly interesting, as the strategy seeks to influence enforcement outside of US government jurisdiction.

“..Federal agencies, in coordination with the IPEC, will expeditiously assess current efforts to combat such sites and will develop a coordinated and comprehensive plan to address them that includes: (1) U.S. law enforcement agencies vigorously enforcing intellectual property laws; (2) U.S. diplomatic and economic agencies working with foreign governments and international organizations; and (3) the U.S. Government working with the private sector.”

This includes (from IP-Watch site):

“Cracking down on foreign-based and foreign-controlled websites that infringe on American intellectual property rights and having federal law enforcement agencies encourage cooperation with their foreign counterparts on enforcement investigations, particularly in China.”

PCmag.com picked up on that part in their items:
“DOJ, FBI to Monitor Foreign Web Sites for IP Piracy” and “Biden: U.S. to Target Pirate Web Sites”.
In the former, PCmag includes one of the voices not applauding the stance, the Computer & Communications Industry Association (CCIA), which:

“…warned against imposing too broad an enforcement strategy. “We are surprised that no one appears to be recognizing the broader economic debate on this issue. A proper enforcement strategy would ensure that legitimate innovation is not being squashed by an overly broad, overly zealous crackdown,” CCIA president and CEO Ed Black said in a statement. “Balanced intellectual property will promote innovation, investment, and civic discourse, while ensuring that intellectual property rights holders are fairly treated.”

The CCIA website publishes some questions they raised around the issues of the new strategy.  They quote their own report “Fair Use in the U.S. Economy“:

“…companies benefiting from limitations on copyright-holders’ exclusive rights, such as “fair use” – generated revenue of $4.7 trillion in 2007 – a 36 percent increase over 2002 revenue of $3.4 trillion. The most significant growth over this period was in Internet publishing and broadcasting, web search portals, electronic shopping, electronic auctions and other financial investment activity.”

In the preface, Ed Black, President & CEO of the CCIA, says:

“we are only beginning to fully understand in the 21st century that what copyright leaves unregulated—the ‘fair use economy’—is as economically significant as what it regulates.”


“IP Litigation in Africa”

“IP Litigation in Africa” was an item in the WIPO Magazine from February this year.  It includes highlights from IP disputes that have recently taken place in Africa.  In the introduction, Darren Olivier (co-founder of the Afro-IP blog) says:

“IP dispute resolution is alive and well in most economically vibrant economies on the continent”

Examples come from Ethiopia, South Africa, Namibia, Kenya, Uganda and Nigeria, and deal with patent, trademark and copyright cases.  We often read about a lack of enforcement in the region, so it’s very welcome to see an article of this kind!

ACTA draft leaked

Last Sunday the first draft of the digital enforcement chapter of the Anti-Counterfeiting Trade Agreement (ACTA), which governments have been secretly working on for the past year, was leaked. And nobody knows who did it (or whom to blame)!

The agreement goes far beyond counterfeiting by including sanctions for copyright infringement and increased liability for Internet Service Providers (ISPs); these provisions are designed to force ISPs to filter and block websites considered to aid copyright infringement.

Without entering once again into the debate of intellectual property rights v freedom of expression and data protection (see my blog of Friday 24 February), it is interesting to note that the measures proposed by developed countries’ decision makers can be a threat to those countries which do not have measures in place to protect consumers. We at least have the Data Protection Act and the European Human Rights Act, even though they can be bent to justify other agendas! Will more leaks happen? Or will more transparent and open dialogues eventually start? I am sure that the Chinese government will see these legislative proposals as supporting its current policy of monitoring and blocking “unacceptable” websites – with China using intermediaries to carry out the censorship on its behalf…

An interesting and informative document written by European Digital Rights answers some of these questions here.

Post written by Francesca Re Manning, consultant to CAS-IP

Copyright infringement and filtering

A recent IPKat blog post on the latest judgment in Sabam v Scarlet made me realise how complex the matter is, and how easy it can be to give misleading and inaccurate analysis. The case, on which the European Court of Justice will give its assessment, asks whether it is legally and technically possible to require an Internet Service Provider (“ISP”) to filter and monitor internet users by checking what files they share via peer-to-peer networks, in particular files which are somehow identified as being unauthorised.

In deciding whether ISPs should be obliged to operate filters, the Court needs to weigh a series of elements, including data protection and fundamental rights. The problem with filters is that the ISP may have to collect and analyse IP addresses which, especially if static, may enable an IP address to be connected to a user’s personal data. This could identify him/her and thus, if disclosed, breach the requirements of the Data Protection Act. This is therefore a clear legal impediment for ISPs, even if they wanted to monitor users’ internet traffic.

The blocking of websites is even more problematic. For example, if you go to www.anonymizer.com and download the software, the ISP “sees” nothing. Likewise, if you use Google DNS and route your connections via their DNS servers, the ISP sees nothing. The difficulties an ISP faces in relation to “filtering” are therefore also of a technical nature, rather than just legal and ethical (copyright protection v the right to privacy).
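The point about routing around an ISP’s resolver is worth making concrete: a DNS lookup is just a small UDP packet, and nothing stops a user from addressing it to any resolver they like instead of the ISP’s own. As a rough, stdlib-only Python sketch (the hostname and the public resolver address in the comment are purely illustrative examples, not anything from the case), the snippet below hand-builds a minimal RFC 1035 query packet:

```python
import struct

def build_dns_query(hostname, query_id=0x1234):
    """Build a minimal DNS query packet (RFC 1035) asking for an A record.

    The result is an ordinary UDP payload: it can be sent to *any*
    resolver listening on port 53, not just the ISP's default one.
    """
    # Header: id, flags (recursion desired), 1 question, 0 answer/authority/additional
    header = struct.pack(">HHHHHH", query_id, 0x0100, 1, 0, 0, 0)
    # Question name: each label is length-prefixed, terminated by a zero byte
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN
    return header + question

query = build_dns_query("example.com")
# To actually resolve the name, this payload would simply be sent over
# UDP to port 53 of any resolver the user chooses, e.g. Google's 8.8.8.8:
#   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   sock.sendto(query, ("8.8.8.8", 53))
```

Because nothing in this packet passes through the ISP’s own resolver, any blocking implemented there simply never sees the request – which is exactly the technical fragility described above; catching it would require inspecting the contents of all traffic instead.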

Joe McNamee, Advocate to European Digital Rights, rightly summarised the issue:

“the bottom line is that, if you are happy to ban encryption and happy for Internet access providers to check the contents of all of your communications (whenever the technology is available to do this) and happy to pay for the capital outlay to achieve this… then filtering may be possible (expensively) some day in the not too distant future. Otherwise, you’re opposed to filtering – unless you’re happy with filtering that doesn’t actually work. That’s really all there is to the issue. Things only get complicated when you start explaining why this is the case”.

This debate comes at an interesting time, when Google (and the US) are battling China’s ongoing monitoring and censorship regime, exposing the topic to the full complexity of its legal, technical, political and moral issues. Interesting!

Post written by Francesca Re Manning, consultant to CAS-IP

Is the Google books saga nearing its end?! The DoJ, the forthcoming fairness hearing and the chances of settlement…

The much-awaited Google Books settlement may be more difficult to achieve following the recent (February 4, 2010) DoJ filing, which reaffirms legal concerns.

It has now been six years since the GBS (Google Book Search) project was launched in 2004. The fairness hearing before Judge Chin, scheduled for February 18th, could yield further surprises, following the most recent DoJ filing, which remains critical of the project.

The original class action lawsuit was brought by the Authors Guild in 2005 on the grounds of copyright infringement by Google, which was accused of having created and made available digital copies of copyrighted works without seeking the rights holders’ permission. The outcome of the litigation is eagerly awaited by those (mostly the library community and publishers) who are interested in pursuing mass digitization projects or who envisage Google as a viable commercial partner. The settlement is expected to clarify the legal status of digitized (or to-be-digitized) collections and the potential application of the fair use doctrine, and to shed light on the controversial issue of “orphan works” (a category of works whose authors cannot be located after a diligent search).

In October 2008, Google and the Authors Guild entered into a complex agreement which, among other things, created a scheme (the Books Rights Registry) enabling Google to compensate rights holders for displaying parts of their books covered by copyright. The same settlement established a highly criticized opt-out mechanism (with a deadline), to be used by rightsholders who did not wish to be party to the settlement.

In September 2009, the DoJ first stepped in to comment on the agreement. Whilst labeling it “one of the most far-reaching class action settlements of which the United States is aware”, it severely criticized this “ambitious undertaking” and urged the court to reject the deal. According to the DoJ, the settlement violated Federal Rule of Civil Procedure 23, copyright law and antitrust law. The main accusation against Google was its effort to “implement a forward-looking business arrangement” by establishing a dominant position in the digitization market, mainly through exploitation of the “orphan works”. Concerns were also raised with regard to the class action mechanism (as governed by Rule 23): the DoJ affirmed that the named class representatives did not adequately represent the absent class members (i.e. the unknown rightsholders of the orphan works). The DoJ suggested the following modifications to the agreement to address the existing concerns:

“imposing limitations on the most open-ended provisions for future licensing, eliminating potential conflicts among class members, providing additional protections for unknown rights holders, addressing the concerns of foreign authors and publishers, eliminating the joint-pricing mechanisms among publishers and authors, and providing a mechanism by which Google’s competitors can gain comparable access” (http://www.justice.gov/opa/pr/2010/February/10-opa-128.html).

To address the DoJ’s criticisms listed above, the parties came up with an amended settlement agreement, which was preliminarily approved by the Court on November 19, 2009. The amended settlement appears to have made significant progress so far as the “orphan works” and anti-competitive behavior issues are concerned. It is proposed that a “fiduciary” be created to effectively represent the rightsholders who cannot be located, and that part of the revenue gained from the orphan works be split between further activities to locate unknown rightsholders and grants to “literacy-based charities”. Also, the provisions that would have given Google the status of “most favored nation” have been eliminated.

So far so good!  But, yet again, the DoJ has stepped in at the last minute, ahead of the fairness hearing, to point its finger at what it considers the most problematic issue.  Though recognizing the overall progress made by the parties in making the settlement healthier, the DoJ reaffirms that the settlement:

“suffers from the same core problem as the original agreement: it is an attempt to use the class-action mechanism to implement forward-looking business arrangements that go far beyond the dispute before the court in this litigation.”

Orphan works are, once again, the most targeted issue, and further limitations are suggested in order to address it, namely: 1) shortening the period of exploitation of the orphan works, and 2) increasing the waiting period between when an orphan work is entered in the database and when it is made publicly available. More generally, the legal scope of the agreement is restricted to books printed in the US.  Please check this link for the extended version of the filing: http://graphics8.nytimes.com/packages/pdf/technology/20100205_googlebooks.pdf

A final question: why do we need GBS? Or rather, who will benefit from it? In its earlier filing (September 2009, mentioned above), the DoJ recognized that the settlement would “breathe new life” into currently inaccessible books by opening up new research opportunities, by making millions of books accessible through institutional subscription, and by clarifying (thanks to the creation of the Books Rights Registry) the copyright status of out-of-print books.  Also, it is expected that print-disabled readers would greatly benefit from the settlement, as indicated by the strong support demonstrated by the US National Federation of the Blind.

To sum up, it seems that should the amended settlement come to fruition, it would bring huge social value in terms of widening access to knowledge. So, good luck for February 18th!

Post written by Irina Curca of CAS-IP