Meeting Minutes, 2015

Asset Health Focus Community

Table of Contents

Meeting on 6 January 2015

Meeting on 13 January 2015

Meeting on 27 January 2015

Meeting on 10 February 2015

Meeting on 24 February 2015

Meeting on 3 March 2015

Meeting on 10 March 2015

Meeting on 17 March 2015

Meeting on 24 March 2015

Meeting on 31 March 2015

Meeting on 7 April 2015

Meeting on 14 April 2015

Meeting on 21 April 2015

Meeting on 28 April 2015

Meeting on 5 May 2015

Meeting on 12 May 2015

Meeting on 26 May 2015

Meeting on 9 June 2015

Meeting on 23 June 2015

Meeting on 7 July 2015

Meeting on 21 July 2015

Meeting on 22 July 2015

Meeting on 4 August 2015

Meeting on 11 August 2015

Meeting on 18 August 2015

Meeting on 1 September 2015

Meeting on 6 October 2015

Meeting on 20 October 2015

Meeting on 3 November 2015

Meeting on 17 November 2015

Meeting on 1 December 2015

Meeting on 6 January 2015

Meeting Attendees:

· Pat Brown

· Svein Olsen

· Gowri Rajappan

· Greg Robinson

Meeting Agenda:

· Interlaken Agenda.

· Circuit breaker and bushings nameplate.

· Test result modeling.

Discussion Summary:

· C37.04 for circuit breaker nameplate.

· Change modeling applies to assets as well. The lifecycle activities that happen with the assets result in changes to the model around the asset. A good representation of this history is very important to asset health analytics. The means to achieve this modeling needs to be considered together with the ongoing change modeling effort. This will be the main topic at Interlaken for asset health.

· Jay’s NWIP and document on Real network vs modeled network. These are the starting point for discussion.

· Integration strategy on how information flows between asset and network sides.

Meeting on 13 January 2015

Meeting Attendees:

· Pat Brown

· Svein Olsen

· Gowri Rajappan

· Greg Robinson

· Tomasz Rogowski

Meeting Agenda:

· Interlaken Agenda.

Discussion Summary:

· What are the priority topics to pursue?

· Items: Asset templates, lifecycle activities, planning/catalog, testing, criticality/AHI/risk (includes one particular analytics output – AHI).

· Svein’s priorities:

o Highest priority: Lifecycle activities/process.

o Priority 2: Catalog data & testing.

· IRM primer for WG13? Eric and Gerald are reporting on Wednesday on the use cases. This could be extended to discuss the IRM as well. Greg recommended adding a session to review the IRM and actors list with WG13.

Meeting on 27 January 2015

Meeting Attendees:

· Pat Brown

· Mark Easley

· Jim Hortsman

· Remy Younes

· Svein Olsen

· Gowri Rajappan

· Greg Robinson

· Tomasz Rogowski

· Roger Sarkinen

· Luc Vouligny

· Xiaofeng Wang

· Frank Wilhoit

Meeting Agenda:

· Planning this year’s work.

Discussion Summary:

· Working with TC8 on coordinating their business use cases with the TC57 system use cases.

· Expanding IRM to cover WG13 and possibly WG16 (once we coordinate with them).

· Condition Based Maintenance (from AHFC work) and asset aspects of DER are going to be subjects for the next edition 61968-4.

· What is the sequence?

o Finish breaker modeling first, then procedure and test results.

o What about analytics? This is an important topic, but running this in parallel might be too complicated.

o Should we first try 45 minutes on the first bullet and 15 minutes on analytics, with analytics being a report-out of the requirements investigation?

· Need to be able to mash up CIM classes on the fly as required by analytics. This is complicated by the limitations of the XML data exchange.

· Need a flexible model to give power back to the developer/business analyst. Change of paradigm. With NOSQL and like technologies, there is less emphasis on schema and more emphasis on doing clever things with the data. In such an environment, self-description by analytics systems is very important.
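The schema-light, key-value style argued for in the last bullet can be contrasted with a fixed-schema class in a short sketch. This is only an illustration of the two styles; the class and attribute names below are hypothetical and are not CIM classes.

```python
# Sketch: fixed-schema vs. schema-light (key-value) representations of
# asset measurement data. All names here are illustrative, not CIM.
from dataclasses import dataclass, field


@dataclass
class FixedDgaResult:
    # Fixed-schema style: each gas is an explicitly modeled attribute,
    # so adding a new attribute requires a schema change.
    asset_mrid: str
    hydrogen_ppm: float
    methane_ppm: float
    acetylene_ppm: float


@dataclass
class KeyValueResult:
    # Schema-light style: attributes live in a self-describing map, so
    # analytics can introduce new measurement kinds on the fly.
    asset_mrid: str
    values: dict = field(default_factory=dict)  # name -> (value, unit)


kv = KeyValueResult(asset_mrid="T-1234")
kv.values["hydrogen"] = (35.0, "ppm")
kv.values["methanol"] = (0.2, "ppm")  # new attribute, no schema change
```

The trade-off discussed in the meeting shows up directly: the fixed class is precise and easy to validate, while the key-value class relies on self-description by the systems exchanging it.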

Meeting on 10 February 2015

Meeting Attendees:

· Pat Brown

· Mark Easley

· Fook-Luen Heng

· Jim Hortsman

· Chris Kardos

· Gowri Rajappan

· Tomasz Rogowski

· Luc Vouligny

· Xiaofeng Wang

Meeting Agenda:

· Circuit breaker templates and test results modeling.

Discussion Summary:

· Walk through of test results modeling.

· DGA attributes: O2 + Argon? What is this for? Ask Doble oil lab guys.

· Are the DGA and Oil Quality test results complete?

· There are some things on the online monitoring side that don’t have an equivalent on the lab test side; and vice versa.

· Create a table that lists each attribute, its details, whether it applies to lab tests, and whether it applies to online monitors.

· Pat mentioned an example (Southern Company?) where the online monitoring DGA results vs. lab test results were completely different. Hunch is higher confidence attributed to lab results. Online monitoring value is mainly in the alarm.

· Audit trail on the measurement is really important. What device took the measurement, what the calibration history is, etc.

Meeting on 24 February 2015

Meeting Attendees:

· Mark Easley

· Jim Hortsman

· Svein Olsen

· Gowri Rajappan

· Tomasz Rogowski

· Luc Vouligny

Meeting Agenda:

· DGA and Oil Quality test results modeling.

o The attributes and how to group them together.

o Oil Quality attributes are from multiple tests. How to capture this?

o Quote from survey response by TVA: “DGA indicates current-day problem; Oil Quality indicates future problem (e.g., sludge gets in to paper, causes problems in a few years).”

· The relationship between ProcedureDataSet and Asset.

o Currently tenuous due to the many-to-many relationship of Asset and Procedure.

· Attributes that are available from Procedure (lab & field tests) as well as Meas (online monitoring). How to synchronize them?

o Field/lab test and online are related but treated differently.

o Quote from survey response by Southern Company: “Because Online DGA is not a certified test via a Gas Chromatograph we treat this data as an alarm point which triggers further investigation via syringe and Lab testing.”

o In the case of online monitoring of bushing PF, the field tests are the baseline.

Discussion Summary:

· How best to categorize the test results?

o Mirroring TOA would be a good idea.

· How to capture test details along with the test results in cases where a result set is coming from multiple tests.

o What is value of capturing test details?

o If everyone is using the same set of tests, there may not be any.

o First step is making sure we have the means to capture test results in CIM.

· The association between Asset and ProcedureDataSet.

o Need to have a direct association. The current linkage through Procedure is not sufficient.

· Is the current path of sub-classing ProcedureDataSet the best way, given that it will result in many, many new classes? (The alternative is key-value pairs, as in Meas.)

o The advantage of the current approach is that the data are explicitly modeled and very useful for data exchanges.

o From the perspective of building profiles, this would make profile-building easy. In backend design, efficient representations are possible.

§ Is a profile with many optional attributes a good solution? It is not precise, and it is difficult to validate and thus to implement.

§ But this is the current practice.

o The question is whether we want generic or more explicit profiles. Having an explicit model such as this allows more flexibility in profiling and implementation.

· Lab/field test currently modeled as Procedure/ProcedureDataSet, while online results as Measurement/MeasurementValue. Is this a good dichotomy?

o Just because this is how we do things today isn't necessarily a reason to model it that way. Need to think ahead about how things could evolve. Having two different ways of modeling the same or similar results leads to complications.

o Do the test results need to inherit from ProcedureDataSet? The test results can stand alone at the top and can be related to Procedure or Measurement, whichever is the source of the data.

o A new IdentifiedObject for AssetData or some such from which to inherit the datasets. The “AssetData” can then be associated with Procedure and Measurement.
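The stand-alone "AssetData" idea in the last bullet can be sketched as follows. The class names and attributes are hypothetical illustrations of the proposal, not actual CIM classes: results stand alone and are linked to whichever source produced them, a Procedure for lab/field tests or a Measurement for online monitoring.

```python
# Hypothetical sketch of the proposed "AssetData" base: test results
# stand alone and associate with their source (Procedure or Measurement).
# Names are illustrative only, not the actual CIM model.
from dataclasses import dataclass
from typing import Optional


@dataclass
class AssetData:  # in CIM terms, this would inherit from IdentifiedObject
    mrid: str
    asset_mrid: str
    procedure_mrid: Optional[str] = None    # set when from a lab/field test
    measurement_mrid: Optional[str] = None  # set when from online monitoring


@dataclass
class DgaData(AssetData):
    hydrogen_ppm: float = 0.0
    acetylene_ppm: float = 0.0


# Same result class, two different sources:
lab_result = DgaData(mrid="d1", asset_mrid="T-1", procedure_mrid="p1",
                     hydrogen_ppm=35.0)
online_result = DgaData(mrid="d2", asset_mrid="T-1", measurement_mrid="m1",
                        hydrogen_ppm=41.0)
```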

Meeting on 3 March 2015

Meeting Attendees:

· Pat Brown

· Gowri Rajappan

· Greg Robinson

· Tomasz Rogowski

Meeting Agenda:

· The relationship between ProcedureDataSet and Asset.

o Currently tenuous due to the many-to-many relationship of Asset and Procedure.

· Use of ProcedureDataset children.

o Need to keep the data separate from the means of obtaining the data, since same data can come from multiple means.

· Description of general asset component

o A way to specify that a "manufacturer X model Y" pump is used on a breaker mechanism, for instance, without creating an Asset instance for each pump.

· Modeling analytics results.

Discussion Summary:

· ProcedureDataSet and Asset used to have an association, but it was removed during a previous cleanup. It can be restored if that makes sense.

· In order to keep data separate from the means of obtaining the data, there are two possible modeling directions.

o Option 1: A top level class and child classes for measurement results. These can then associate with the means of obtaining the data – i.e., Procedure and/or Measurement.

o Option 2: Do what Part 9 did, but inherit from Measurement/MeasurementValue, and thus be able to use its mechanisms for keeping data and means separate. As in Part 9, enumerate the measurements – e.g., DGA attributes table, fluid test attributes table, particle content attributes table, etc.

· Invite Dave Haynes to the next TC57 asset health meeting to discuss Part 9 approach.
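Option 2 above can be sketched briefly: instead of one subclass per result type, the measurement kinds are enumerated and values carry a kind tag. The enum entries and class names below are hypothetical placeholders, not the Part 9 enumerations.

```python
# Hypothetical sketch of Option 2: a Measurement/MeasurementValue-style
# mechanism with enumerated measurement kinds instead of subclasses.
# Enum entries and names are illustrative, not the Part 9 tables.
from dataclasses import dataclass
from enum import Enum


class DgaAttributeKind(Enum):  # stand-in for an enumerated attribute table
    HYDROGEN = "hydrogen"
    METHANE = "methane"
    ACETYLENE = "acetylene"


@dataclass
class AssetMeasurementValue:
    asset_mrid: str
    kind: DgaAttributeKind
    value: float
    unit: str = "ppm"


readings = [
    AssetMeasurementValue("T-1", DgaAttributeKind.HYDROGEN, 35.0),
    AssetMeasurementValue("T-1", DgaAttributeKind.ACETYLENE, 1.2),
]
```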

Meeting on 10 March 2015

Meeting Attendees:

· Elizabeth Bray

· Pat Brown

· Mark Easley

· Herb Falk

· David Haynes

· Fook-Luen Heng

· Jim Hortsman

· Svein Olsen

· Gowri Rajappan

· Luc Vouligny

Meeting Agenda:

· DGA and oil results modeling.

Discussion Summary:

· Model the dataset independently of the means through which it was obtained, and then just associate it with the means/context/source from which it was obtained.

· DGA values.

o Should we put the inputs and outputs in separate classes?

o Sounds like a good idea.

o In metering, derived values are exchanged all the time – like estimated value. The way you tell the difference is through quality code that is attached to the reading to mark up if it is derived or not.

· Other uses for quality codes: if there is a problem with the measurement because a register overflowed, or the number is suspect for whatever reason, indicate it with a quality code. Quality is enumerated and allows a multiplicity of zero to many.

· Chat comment from Luc Vouligny: In the section "Unusual Attributes" of the "DGA and Oil Quality Attributes" document, Jocelyn Jalbert from our company says that the oil temperature when the sampling is taken is missing. Also missing are the methanol and ethanol concentration.

· The metering approach is flexible and versatile in being able to attach quality indicators, annotations, and measurement methods/standards for each attribute. On the other hand, the explicitly modeled classes are clearer.

· Are there other places from which we should be capturing oil attributes?

· As for grouping, different labs/scientists may do this differently. Should look to see if there are other groupings.

· Action Items:

o Check with the Doble oil lab guys on whether there is any supplementary source we should be looking at other than TOA4.

o Identify a contact within Delta-X/TOA to engage on modeling and whether they have looked at CIM.
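The metering-style quality flags discussed above (derived values, suspect values, zero-to-many codes per reading) can be sketched as follows. The code strings are illustrative placeholders, not the 61968-9 ReadingQuality enumeration.

```python
# Sketch of metering-style quality flags: each value carries zero or
# many quality codes. Codes here are placeholders, not the 61968-9 enum.
from dataclasses import dataclass, field
from typing import List


@dataclass
class ReadingValue:
    value: float
    qualities: List[str] = field(default_factory=list)  # 0..many codes


r = ReadingValue(value=35.0)
r.qualities.append("DERIVED")  # mark the value as estimated/derived
r.qualities.append("SUSPECT")  # flag it for further investigation
```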

Meeting on 17 March 2015

Meeting Attendees:

· Pat Brown

· Henry Dotson

· David Haynes

· Chris Kardos

· Svein Olsen

· Gowri Rajappan

· Greg Robinson

· Tomasz Rogowski

Meeting Agenda:

· Meter reading model.

Discussion Summary:

· For meter, started by collecting use cases.

· What has to be exchanged (CIM classes)?

· The readings are exchanged as MeterReadings payload. MeterReading contains Readings, which consist of value, ReadingQualities, ReadingType, etc.

· ReadingQuality concept could be used to distinguish between online vs laboratory etc.

· ReadingType is typecast as a string; it has 18 attributes and many constructs around time, such as time intervals and statistical qualifiers like average. Annex C of 61968-9 has more description. This 18-part string becomes the Names.name for the ReadingType.

· The ReadingType within Reading in MeterReadings is included by reference.

· Did the units enumeration in 61970 get harmonized with 61968-9? 61968-9 added to the 61970 units enumeration, and the two may have diverged over time.

· Is it possible to inherit from enumerated classes? If so, could be a good way of, once harmonized, maintaining things going forward.

· Metering time constructs: interested in both the current value (e.g., power) and the value over time (e.g., energy in kWh). Able to express intervals as well as averages over time (sliding window).

· Grouping of Readings is not there yet; we'll have to come up with it as a layer on top, similar to EnvironmentalValueSet.

· Limits and alarms associated with measurements are modeled as EndDeviceEvents.

· EndDeviceEventType has enumerations similar to ReadingType.

· isCoincidentTrigger relates Reading to EndDeviceEvent.

· MeterConfig profile with root class Meter has all the attributes corresponding to the meter.

· Real meter vs. virtual meter: ServicePoint (the demarcation point where the homeowner owns everything downstream and the utility owns everything upstream) vs. UsagePoint. But we want to be able to measure things at points other than the demarcation point, such as power at a feeder, which is why UsagePoint was created. UsagePoint has a ServicePoint flag that indicates whether it is also a service point.

· Don’t need a Location for UsagePoint.
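The 18-part ReadingType name string discussed above can be sketched as a dot-delimited composition. The field list and ordering below are an assumption for illustration; the normative attribute order and code values are defined in Annex C of 61968-9.

```python
# Sketch: composing the 18-field dot-delimited ReadingType Names.name.
# Field names/order here are assumed for illustration; see 61968-9
# Annex C for the normative definition and code values.

READING_TYPE_FIELDS = [
    "macroPeriod", "aggregate", "measuringPeriod", "accumulation",
    "flowDirection", "commodity", "measurementKind",
    "interharmonicNumerator", "interharmonicDenominator",
    "argumentNumerator", "argumentDenominator",
    "tou", "cpp", "consumptionTier", "phases",
    "multiplier", "uom", "currency",
]


def compose_reading_type_name(values: dict) -> str:
    """Build the dotted Names.name string; unspecified fields become 0."""
    return ".".join(str(values.get(f, 0)) for f in READING_TYPE_FIELDS)


# Hypothetical interval energy reading (code values are made up):
name = compose_reading_type_name({"measuringPeriod": 2, "flowDirection": 1,
                                  "commodity": 1, "measurementKind": 12,
                                  "multiplier": 3, "uom": 72})
assert len(name.split(".")) == 18
```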

Meeting on 24 March 2015

Meeting Attendees:

· Pat Brown

· Henry Dotson

· Chris Kardos

· Svein Olsen

· Gowri Rajappan

· Tomasz Rogowski

· Luc Vouligny

Meeting Agenda:

· Test results modeling.

Discussion Summary:

· Since the exact modeling strategy is not known yet, for now we are capturing the attributes in a spreadsheet.

· Would be a good idea to disseminate it to the companies from which the attributes came in order to get comments/feedback.

· As a first step, provide this to Doble oil lab guys & get their feedback; invite them to participate in the next meeting as well.

· Questions to ask the Doble oil lab guys:

o Should we model the outputs as well (e.g., the gas ratios)? Does someone care about just the outputs and not the inputs and so they need to be exchanged?

o Different tests for the same attribute, such as the dielectricBreakdown?

o Additional sources besides TOA4?

o How relevant are the attributes that are not in TOA4? For instance, there are many Fluid Test Variables collected over time in the informative class that are not in TOA4.

· Should do the same for insulation testing.

· SFRA and PD etc. are waveforms.

· Thermography & inspections.

· Action Items:

o Provide spreadsheet to Doble oil lab guys for feedback & invite them to attend next meeting.

o Get a contact for DeltaX Research.

o Think about non-scalar test results such as SFRA, PD, Thermography, and inspections & suggest a strategy for modeling them.
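One possible strategy for the non-scalar results named in the last action item is to model a result such as an SFRA sweep as an ordered series of (x, y) points plus axis units, rather than as scalar attributes. This is a hypothetical sketch of that idea; the names are illustrative, not CIM classes.

```python
# Hypothetical sketch: a non-scalar test result (e.g., an SFRA sweep)
# stored as an ordered point series with axis units. Illustrative only.
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class WaveformResult:
    asset_mrid: str
    x_unit: str  # e.g., "Hz" for the SFRA frequency axis
    y_unit: str  # e.g., "dB" for the SFRA magnitude axis
    points: List[Tuple[float, float]] = field(default_factory=list)


sfra = WaveformResult(asset_mrid="T-1", x_unit="Hz", y_unit="dB")
sfra.points.append((20.0, -55.1))
sfra.points.append((100.0, -48.7))
```

A similar shape could cover PD patterns or thermography profiles, with inspections likely needing a different (observation-style) structure.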