Reports from the USPTO Meeting – Updated

Saturday, February 18 2006 @ 12:44 EST

Groklaw sent several people to the February 16th meeting at the USPTO on the prior art database and community review patent projects. Here are their reports, which I have not edited at all. I want you to enjoy them the same way I did, as if we were there, experiencing it without editorialization.

Bruce Perens made a statement at the meeting at the end, when they asked for ideas, so I asked him if I could publish his statement, and he has graciously agreed. You will find it at the end.

UPDATE: I have added a fourth report, from Chuck Moss.

*******************************

1. ElvishArtisan's report:

I arrived at the USPTO Open Source Meeting right on the dot of 10 AM. I'd estimate that the room was about 90% full, with one- or two-dozen empty seats out of a seating capacity of around 250. About half of those attending looked to be “suits”, whereas the other half were dressed more casually.

The meeting was opened by Mr. Jack Harvey, Director of the USPTO's Technology Center 2100. One of Mr. Harvey's very first statements after welcoming the attendees was a reminder that the topic of the meeting was not the issue of software patents per se, but rather ways in which the USPTO and the FOSS community could work together to improve the quality of the software patents issued. This point was reiterated several times by other participants from the USPTO. It's clear that the folks at the Patent Office have gotten the message that the FOSS community is unhappy about software patents! Mr. Harvey's role through the rest of the meeting was primarily that of 'majordomo', introducing the various panel speakers. I got the sense as the meeting went on that Mr. Harvey has been one of the prime advocates within the USPTO for initiating the discussions with the FOSS community.

Next up was Mr. John Doll, the Commissioner of Patents. He spoke only briefly, his two major points being that patent quality was the “number one focus” of the Office and that he was asking the FOSS community's help in reaching that goal.

Next came Mr. Manny Schecter of International Business Machines (IBM). Mr. Schecter started by taking an informal survey of the meeting participants, asking for a show of hands from those who were there “because they are involved with or representing open source” and then from those who were there “in a traditional IP role”. I'd say that perhaps 15% of the attendees claimed some affiliation with Open Source, while slightly more (20%?) claimed a “general IP” role.

Mr. Schecter then reviewed IBM's goals in being involved with the proposed FOSS/USPTO initiatives, those goals being:

Ensure patent quality

Be as inclusive as possible

Help drive patent reform

Mr. Schecter (and many subsequent speakers) referenced an earlier meeting between USPTO and FOSS representatives that took place on December 9, 2005. Apparently, not many people on the FOSS side were aware of this event (although it was open to the public). The purpose of this meeting was to gauge the interest of the FOSS community in some sort of cooperative effort with the PTO, and to have a discussion about what could be done. Today's meeting was a “follow-up” to that one, and was called to set a framework for further action.

Next came Mr. Jay Lucas, the Acting Deputy Commissioner of Patent Quality for the USPTO. Mr. Lucas basically put forward a 'vision statement', saying that the overall goals of the two groups (the USPTO and the FOSS community) were broadly similar, namely the sharing and dissemination of ideas, and that it was logical that the two groups should help each other.

Next came Mr. Tariq Hafiz, a patent examiner who gave a fascinating and illuminating account of the day-to-day life of an examiner. The basic steps involved in processing a patent application are:

Review and understand the application

Define and execute a search for prior art

Compose and send an “office action”

Await a response

Repeat the above as necessary

An “office action” can be either an approval of a patent or an objection notifying the applicant that a patent cannot be granted for some reason (“overbroad”, “too vague”, etc.). If an objection is issued, the applicant then gets a chance to remedy whatever it is that made the examiner object. The process then repeats until either a patent is granted or the application is finally denied or abandoned.

Mr. Hafiz focused particularly on the “define and execute a search” portion of the process, which itself can be broken down into the following steps:

Build a search strategy

Execute the search

Review and understand the results

Select the best prior art

The most important of these steps is often the first, “build a search strategy”. Mr. Hafiz classified the sources of data used for this search into three categories: “structured”, “semi-structured” and “unstructured”. “Structured” sources refer to databases maintained by the USPTO itself (the EAST/DIALOG system), those maintained by foreign patent organizations (primarily the EU and Japan) and, most surprisingly for me, a database maintained by IBM listing their prior art. Searches of the Internet fell into the “unstructured” category, meaning that the data was hard to get hold of without having to deal with a large amount of “noise” in the process.

One aspect of the life of a patent examiner that came into sharp relief through all of this was the extreme premium placed on time. The USPTO has a huge backlog of pending applications and limited resources, so the amount of time an examiner spends on each application is carefully tracked, and measures (described by one participant as possibly “punitive”) are taken against those examiners who don't meet the norm. What all this means is that any new database of prior art must be organized in such a way that examiners can easily access and navigate it – they just don't have time to conduct searches involving unstructured, “noisy” sources.

Next up was Mr. Kees Cook, Senior Network Administrator for OSDL. Most of Mr. Cook's presentation was a review of the conclusions of the earlier December meeting. These were:

Categorization and searching are the primary problems

We need to define what qualifies as “prior art”

We need to develop some sort of “social tagging” system (as with e.g. flickr.com)

There are three category “layers” in software patents: system, component and algorithmic

We need to better understand the USPTO's requirements

Why not use existing prior art?

Why can't representatives from the USPTO directly participate on public mailing lists?

We need to develop a central “tagging” tool.

More input is needed from other, “traditional” software suppliers (e.g. Oracle, Microsoft).

Mr. Cook also mentioned that it is important to get a commitment from the USPTO that, were a FOSS-based search system developed, it would actually be used by the examiners.

Next up was Mr. Ross Turk from SourceForge. Mr. Turk mentioned that this initiative came at a good time for SF, as they are in the middle of overhauling many of their back-end systems so as to provide a better user interface as well as improved sorting and filtering capabilities, all of which would aid in supporting a prior art database. An attendee asked if projects not hosted on SourceForge could be included in this initiative too. Mr. Cook's take on that was that some sort of “federated” system would eventually emerge that would embrace all FOSS sites and repositories wishing to participate.

The meeting then turned to the suggestion for a “Community Patent Review” system, whereby patent applications could essentially be “peer reviewed” by the public before being granted. The first speaker on this topic was Mr. Marc Ehrlich, an attorney with the Patent Portfolio Management/Intellectual Property Licensing department at IBM. Mr Ehrlich had a number of slides showing how such an online system might actually look in practice. His proposed system consisted of three basic parts:

Access – Make it easier to access applications. Add a subscription-based alert system to notify potential reviewers when a new application in their area of expertise becomes available.

Review – The basic platform for public review. Components include education, indexing, links, discussion, and some sort of reputation scoring system so that examiners would have some idea of the track record of folks doing the public review (similar to eBay's scoring system).

Feedback – Find a simpler way to submit data to examiners.

Mr. Ehrlich added that the system must also support easy ways to identify the responder (the one submitting the data to the examiner), to make structured comments (in addition to just indicating any prior art found), and to determine whether the examiner in a given case actually used the data that was provided. He then identified some of the potential challenges facing the implementation of this kind of system, the main ones being:

Flooding – the danger of the USPTO being overwhelmed by submissions from the public

Gaming – bad actors using the system to intentionally obstruct otherwise valid applications

Willful infringement dangers – overcoming the reluctance many developers feel about looking at any patent data whatever for fear of becoming liable to later charges of “willful infringement” against a patent (more on this topic later)

At this point, one of the meeting attendees, an actual patent examiner, admitted that he sometimes did use Slashdot in trying to locate prior art, but that it was difficult and time-consuming due to “all the anti-patent noise”.

Next up was Professor Beth Noveck of the New York Law School, the organizer of the Peer to Patent Project, an effort to design and deploy a pilot system to test the idea of community patent review. The PPP is actually part of a larger effort known as the Democracy Design Workshop. The basic idea of the Design Workshop is that current government structures rely on “bureaucratic expertise based on outdated technological assumptions”, meaning that they cannot effectively capitalize on the benefits that modern communications technologies offer. Professor Noveck brought up what was perhaps one of the most important reminders of the entire meeting: that the constitutional goal of the patent system is ultimately to spread information, not restrict its dissemination, and that any system must put those priorities first.

Professor Noveck broke down the basic problem elements involved in designing a community patent review system as follows:

Identifying those with the expertise to do the reviews

Managing the information provided (e.g. don't flood the examiners!)

Providing incentives for people to do reviews

Adapting the information provided to the requirements of the examination process

Professor Noveck is also in the course of organizing a series of meetings geared toward designing and implementing a pilot community patent review system towards the end of 2006, a goal which she described as “ambitious, but doable”. Further information can be found at the Peer to Patent web site.

The final panelist to speak was Mr. Rob Clark, Deputy Director of the Office of Patent Legal Administration for the USPTO. Mr. Clark's job is to actually draft the rules which govern the day-to-day operation of the USPTO, based on the relevant Federal statutes and governmental procedures. He described in some detail a method in place today by which members of the public can submit prior art for consideration in a patent application. This so-called “Rule 99” procedure has several limitations: material submitted can only include actual prior art – i.e., no “commentary” is allowed – and only “published” data is permitted (“published” in this context means any material that was “reasonably available” to the public – e.g., material from a public web site would be acceptable). Additionally, the submission must be made within two months of the original publication of the patent application, and a fee of US $180 is required with the submission. Many of these limits are due to statutory requirements, specifically 35 U.S.C. 122(c).

Mr. Clark mentioned that some of the drawbacks of this process are:

Some patent applications opt out of the “publishing” requirement, so the public has no way of knowing that these are upcoming.

Would-be reviewers' fear of looking at patent data, lest they become liable for later “willful infringement” damages.

There was mention that some of the patent reform legislation currently being debated in Congress would address both of these issues.

Next, the floor was thrown open to any who might have suggestions for future initiatives. At this point Mr. Bruce Perens, one of the founders of the Open Source Initiative, got up and delivered a short prepared speech detailing what he felt were four critical areas of patent law needing attention with regard to FOSS. Mr. Perens was quite eloquent, so I will try to quote him where possible here:

PERJURY A party who files a patent application swears to the truth of the claims made within it, similarly to how a person testifying in court swears to tell “the truth, the whole truth and nothing but the truth”. Mr. Perens claimed that after speaking to “many people” at the USPTO, he was able to locate only one person who knew of an actual case where an applicant was prosecuted for perjury after having been discovered making willfully false statements in a patent application, and that this case took place in – 1974. As a result of this lack of enforcement, bad actors have “no sense of peril” when making knowingly false claims in an application, knowing that the worst possible outcome is a simple denial of the application. There are those who are “eavesdropping on open source” and filing spurious patent applications accordingly, thus seeking effectively to kill the original FOSS projects. Thus, “perjury creates intellectual poverty”, and Mr. Perens insisted that the “peril should be real” to those committing perjury, demanding that perjurers undergo “active prosecution”.