Memo to JC from TG 9/10 Regarding Submittal for November 2014 Meeting

1) Recommendation to JC regarding Recyclability Rate criteria

From TG 9&10

Drafted by Wayne Rifer and Mark Schaffer

Approved by TG 10/07/14.

The JC reviewed the recyclability rate criteria (9.1.3 and 9.4.1) at its 9/16 meeting and recommended that the TG restructure the two criteria into three, as follows:

  • A required criterion that calculates the recyclability rate using the IEC TR 62635 methodology (more detail could be added, if desired) but with no threshold (i.e., dropping the 95% requirement)
  • An optional criterion specifying a threshold of 90% or 95%, referencing the IEC TR 62635 methodology in the required criterion.
  • Another optional criterion incorporating cost, time, economics, etc., and including the options from the second bullet (the JC was not in agreement about whether to keep or remove the second bullet).

One of the main purposes of this restructuring was to remove the recyclability rate threshold (the rate has gone back and forth between 90% and 95%) from the required criterion.

Mark and Wayne discussed this request and do not agree with it. They believe that moving the required recyclability rate into an optional criterion seriously weakens the standard, and that the 90% rate is easily and routinely met by servers. Note the following data:

Material / % of server weight (from the Primer) / % recyclable (per IEC 62635)
Steel / 62.7% / 95%
Aluminum / 15.9% / 95%
Copper / 3.5% / 98%
Circuit board substrate / 9.6% / 100% (when sent to a smelter per our criterion)

These four materials alone comprise 91.7% of the server weight; if only they were recycled, the recyclability rate under the IEC assumptions would be 87.7%. The only other material that comprises more than 1% of the server is plastic, at 4.8%. Note that non-FR (non-flame-retardant) plastics make up over half of the plastics, and much of that could be recycled. Several other materials (e.g., tin, nickel, brass, silver, and gold) occur in small quantities but have high recycling rates.
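
For reference, the 87.7% figure is simply the weighted sum of the table values above (each material's share of server weight multiplied by its recyclability rate). The minimal sketch below, in Python, reproduces that arithmetic; it is illustrative only and is not a full implementation of the IEC TR 62635 methodology, which covers end-of-life treatment scenarios in more detail.

    # Weighted-sum recyclability estimate using the four materials in the table above.
    # Illustrative sketch only; not a full IEC TR 62635 implementation.
    materials = {
        # material: (fraction of server weight, recyclability rate)
        "steel":                   (0.627, 0.95),
        "aluminum":                (0.159, 0.95),
        "copper":                  (0.035, 0.98),
        "circuit board substrate": (0.096, 1.00),  # when sent to a smelter
    }

    covered = sum(frac for frac, _ in materials.values())
    recyclable = sum(frac * rate for frac, rate in materials.values())

    print(f"Share of server weight covered: {covered:.1%}")               # 91.7%
    print(f"Recyclability rate from these four alone: {recyclable:.1%}")  # 87.7%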

Moreover, note that the EPEAT registry for computers (based on IEEE 1680.1) shows 122 out of 128 registered desktops (the product most similar to servers) meeting an optional criterion for 90% recyclability. The printer and TV standards require a recycling rate based on WEEE (lower percentages, reflecting products that are more difficult to recycle) and also include an optional 90% rate.

Their conclusion from this was that servers can easily be recycled at 90%, and with some intentional material choices could reach 95%, and that not including a reasonable recycling rate requirement would be a step backward in standard development.

The TG agreed and reports back to the JC that:

  1. We do not want to lose the required recycling rate for servers – we believe this would be an unjustified weakening of the standard from an environmental perspective.
  2. We propose that the required criterion cite the IEC 62635 methodology and that it call for a 90% recyclability rate.
  3. We propose that one optional criterion call for a higher recyclability rate of 95% per the same methodology, with the calculation specific to the countries into which the product is declared.
  4. We propose that a second optional criterion call for incorporation of economic feasibility into the calculation methodology.

Proposed redrafted criteria are included in the TG 9/10 submittal.

2) Regarding: Actions needed to develop a criterion for Section 10 on Product Longevity to test product reliability

The TG spent months wrestling with how to incentivize manufacturers to design and build more reliable and durable products for the purpose of keeping them out of the waste stream prematurely. The main activity was searching for a rigorous and accepted methodology for measuring the reliability and durability of either individual components or the whole product. We examined different systems and communicated with several experts in the reliability testing field. There are several proprietary systems in use, and it is common for manufacturers to make use of a reliability testing system. However, two problems prevented us from referencing such a system in the standard. First, there does not appear to be a single broadly accepted measure. Second, we are told that the measurements, due to their complexity, are easily manipulated by manufacturers to show the results they desire. Thus even offering multiple alternative systems does not seem to provide the required incentive.

In addition, manufacturers are highly reluctant to make reliability data about their products public. We concluded that no system, or combination of systems, was acceptable for referencing in the server standard or would provide the desired incentive.

Moreover, it was agreed that the most useful incentives for longer-life products had already been incorporated into section 9, through the criteria that make repair and refurbishment more practical.

We returned to consideration of the idea of a product warranty. We developed a criterion that seemed to avoid specifically requiring a warranty, which is a commercial term. However, upon further consideration, the TG did not agree that mandating product warranties would be productive, even if the commercial-terms issue could be resolved.

We have concluded that attempting to incentivize manufacturers to design and build more reliable/durable products is not something the NSF standard can do without encountering major, and possibly insurmountable, practical limits. In sum:

  1. There are no current, widely accepted standards for how reliability/durability is defined, no standards for which information should be used to make such calculations, and no agreement as to which metrics are the most useful. Further, all current standards and testing bodies are commercial enterprises; if the NSF standard were to select from the existing standards, none of which is specific to servers, it would be requiring the use of a proprietary system.
  2. Manufacturers will most likely be unwilling to share information that they will view as proprietary. There is very little upside for the manufacturer to put reliability/durability information in the public eye.
  3. Finally, it is important to note that manufacturers cannot in reality know the reliability of their products ahead of their use, which is when they would be declared to the standard. Whatever information they may provide to the registry would be projections.

However, the development of a reliability criterion in the future is definitely possible. The remainder of this memo describes the actions that we believe could be implemented to make it possible to create the methodology and reporting system for reliability criteria that would provide the desired incentive.

The JC would first need to determine an entity that would host the work and how it would be funded. Such a project would need to be sustained over many years. It would have powerful impacts on how electronic equipment is designed and built.

Following are actions that would lead to making reliability/durability criteria feasible:

  1. Collection of practical failure data:
     a. Identify various categories of electronic products, such as servers.
     b. Work with a large purchaser, such as the DOE, EPA, or similar, that is willing to track the repair and replacement data for these electronics:
        i. Produce reports on failure rates.
        ii. Identify what fails on products and the root cause of each failure.
        iii. Identify whether the product was able to be repaired or was replaced in whole with the same or another product.
     c. This would allow initial metrics to be developed on lifespan requirements and parts availability, and would identify those components that are the least reliable in practical usage environments (a minimal example of such a metric is sketched after this list).
  2. Select one or more existing reliability/durability testing standards and/or best practices, for example:
     a. Internal company testing procedures that OEMs use to validate the reliability and durability of their own or competitors' products;
     b. Proprietary and non-proprietary standards and procedures developed by testing houses, standards bodies, and other third parties.
  3. Using the failure rate data and employing the selected testing standards/best practices, assess their ability to predict the actual failure rate.
  4. Determine whether whole-product reliability testing can be done/specified.
  5. Determine whether key-component reliability testing can be done/specified in a way that testing of those components could be specified in lieu of whole-product testing.
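
To illustrate item 1(c), the sketch below computes one candidate metric, an annualized failure rate (failures per unit-year of operation), from tracked repair/replacement records. All record values and field choices here are hypothetical and are shown only to indicate the kind of reporting a purchaser-tracking program could produce.

    # Hypothetical sketch: annualized failure rate (AFR) from tracked fleet records.
    # AFR = failures observed / total unit-years of operation.
    # The records below are invented for illustration only.
    records = [
        # (units deployed, years tracked, failures observed)
        (200, 1.0, 6),
        (150, 2.0, 11),
        (100, 0.5, 2),
    ]

    unit_years = sum(units * years for units, years, _ in records)
    failures = sum(f for _, _, f in records)
    afr = failures / unit_years

    print(f"Unit-years tracked: {unit_years:.0f}")   # 550
    print(f"Failures observed: {failures}")          # 19
    print(f"Annualized failure rate: {afr:.1%}")     # 3.5%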

Based on this research, the TG believes that a practical measure of product reliability, and a criterion to incentivize its use, could be developed.

3) Rationale for Criterion 10.1.2 Product User and Reuse Operator Access to Enabling Code

If enabling code is licensed to the user and not the machine, the machine cannot be reused in any way without the OEM having the right to approve, including adding non-OEM-approved peripherals, adding components from the used market, or selling the machine to a secondary user. In practice, IBM, HP, and Oracle have recently changed their policies to shift their defect support obligation from the serial number (SN) to the user through licensing. This entirely ends the option of reusing a functional machine. Such policies do nothing to protect the code itself from proliferation and exist only to block secondary market uses.

As reassurance, we can remind OEMs that in the auto industry, all enabling software and associated defect support (recalls) is tied to the VIN and not to the buyer; otherwise cars could not be resold.

A bill recently introduced in Congress, the “You Own Devices Act,” would make this a requirement in copyright law. EPEAT cannot rely on Congress to get this done anytime soon.