ORA initial questions regarding Utility Demo A final reports

Issued to MTS Jan. 11, 2017

The following questions resulted from ORA’s preliminary review of the utility Demo A final reports. These questions are based primarily on a review of SCE’s report, as to date only SCE has provided its complete results in graphical and tabular format. ORA anticipates raising additional questions regarding SDG&E and PG&E once results are available and reviewed.

  • Section 3 – DPAs: Are the 82 circuits included in SCE’s Demo A representative of all of SCE’s circuits, such that results can be accurately extrapolated to the general population?
  • SCE provides comparative data on four criteria; is this sufficient?

Criterion / SCE / PG&E / SDG&E
Length / X / X / X
Resistance / X / X / X
Short Circuit Current (SCC) / X / X
Load profile / X / X
Voltage Regs / X
Cap / X / X
Reclosers / X / X
DG / X
  • Are resistance and end of line SCC functions of length, such that SCE is really only comparing length and load profile?
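The dependence asked about above can be sketched as follows. This is a minimal illustration of the physics behind the question, not SCE's model: the voltage, source impedance, and per-mile impedance are assumed values chosen only to show that when conductor impedance grows linearly with length, end-of-line SCC is largely a function of length as well.

```python
# Illustrative sketch only: all parameters below are assumptions, not SCE data.
V_LL = 12_000.0        # assumed nominal line-to-line voltage, volts
Z_SOURCE = 0.5         # assumed source (substation) impedance, ohms
Z_PER_MILE = 0.6       # assumed conductor impedance, ohms per mile

def end_of_line_scc(length_miles: float) -> float:
    """Approximate three-phase end-of-line fault current in amps."""
    # Total impedance grows linearly with length, so resistance and
    # end-of-line SCC both track feeder length.
    z_total = Z_SOURCE + Z_PER_MILE * length_miles
    return V_LL / (3 ** 0.5) / z_total   # I = V_LN / Z

for miles in (2, 5, 10):
    print(f"{miles:>3} mi -> {end_of_line_scc(miles):,.0f} A")
```

Under this simple model, a longer feeder implies both higher resistance and lower end-of-line SCC, which is the collinearity ORA's question raises.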
  • Feeder profiles as provided in the Excel spreadsheet should provide data for each circuit for each criterion used by SCE, and those used by SDG&E and PG&E if warranted.
  • What is the data behind Figures 6 and 7?
  • Are these plots the dispersion of 24 hours on one “typical” feeder, or 24 hours on all feeders in the DPA?
  • What is the typical feeder?
  • Most feeders in the Rural DPA appear to have significant DER existing, per ORA figure below. How does this impact ICA?

  • Most feeders in the Rural DPA appear to have significant DER queued, per ORA figure above. Is this DER included in the baseline 2-year analysis?
  • What is the capacity of each circuit such that the % DER penetration can be determined?
  • Section 4 – Methodology – Are the methodologies fully transparent and “common enough”?
  • 4.1 - Overview
  • Why aren’t development and application of nodal load and DER profiles included in the Figure 8 diagram when they have a significant impact on the ICA results?
  • 4.1 - Layered abstraction approach
  • How is this process actually implemented for the streamlined method?
  • How is this process actually implemented for the iterative method?
  • Are station limits actually bank limits?
  • Where are results for line device, feeder, and substation? (they do not appear to be included in the Excel data files)
  • Net load profiles
  • Is Figure 12 using the same data as Figure 11? It seems to have a lower spread in winter vs. summer max.
  • SCE Load Profile development and use are critical steps in the ICA process. The following is ORA’s rough understanding of the process. Please correct and clarify the outline in the following box.

  • To what degree are 8760 profiles discussed in the box above available for review?
  • Circuit modeling
  • What is the level of effort and cost required to input and validate circuit models statewide?
  • How will routine and ongoing changes to equipment, settings, and circuit configurations be incorporated into circuit models?
  • Analysis – General
  • What types of devices are the SCADA switch points in the Op/flex criteria, and what is their level of penetration?
  • Analysis – Iterative
  • How are initial flags removed so that violations of all types (Th, V, PQ, Prot, SR), and all locations (node, device, feeder, bank) can be determined?
  • VR devices are “locked” during analysis of flicker. Are capacitor and other device settings also locked?
  • What are minimum trip values: equipment ratings or SCE derived?
  • How can parties review equipment ratings used in the iterative evaluation?
  • Analysis – Streamlined
  • Thermal criteria: “generation[t]” is for existing generation, such that {Load[t] – Gen[t]} is the existing load net of any DER, correct?
  • What is the difference between “VLL^2” in the equation on pages 26 and 28 vs. “VLLnom^2” on page 27?
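ORA's reading of the streamlined thermal criterion above can be sketched as follows. The rating, hourly load, and DER output values are illustrative assumptions, not figures from SCE's report; the sketch only shows the arithmetic of {Load[t] – Gen[t]} as existing net load.

```python
# Illustrative sketch of ICA_thermal[t] = Rating - (Load[t] - Gen[t]);
# all numbers are hypothetical, not SCE data.
RATING_KW = 5_000.0                      # assumed equipment thermal rating

load_kw = [3_200, 4_100, 4_600, 3_800]   # hypothetical hourly gross load
gen_kw = [400, 900, 1_200, 300]          # hypothetical existing DER output

# Hourly headroom above existing net load; the binding (minimum) hour
# would set the thermal ICA value for new load.
headroom = [RATING_KW - (load - gen) for load, gen in zip(load_kw, gen_kw)]
ica_thermal = min(headroom)
print(headroom, ica_thermal)
```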
  • Section 5 – Results – What is the best data, and presentation of that data, to demonstrate compliance with the ACR requirements, including ORA success criteria?
  • 5.1 – Summary of results from Demo A
  • What is the naming convention of Section_ID numbers in the results files?
  • If analysis is performed for nodes, is the value applied to the line section between them based on the end with the lower value, typically the downstream end?
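The node-to-section mapping asked about above can be sketched as follows. The node names, kW values, and the min() rule are all ORA-style hypotheticals for illustrating the question, not a confirmed SCE method.

```python
# Hypothetical node-level ICA values (kW); names and numbers are invented.
node_ica_kw = {"N1": 2_400, "N2": 1_800, "N3": 2_100}
sections = [("N1", "N2"), ("N2", "N3")]

# Assumed mapping under question: each line section takes the lower
# (more limiting) of its two endpoint node values.
section_ica = {s: min(node_ica_kw[s[0]], node_ica_kw[s[1]]) for s in sections}
print(section_ica)
```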
  • The summary would be more effective if it referenced particular rows of the results Excel files for each feeder.
  • Why not provide results for all five criteria for uniform load and PV?
  • Are results for PV for streamlined or iterative?
  • Is the “common bus” in item 1, p. 36 the distribution (low side) bus?
  • Where are results for line device, feeder, and substation (station or bank)?
  • 5.2 – Data on representative circuits
  • If data is displayed with Ohms on the x-axis, it would be helpful to have Ohm values for each node in the results spreadsheets.
  • What kind of statistical analysis was done to show that these circuits have typical results?
  • Why isn’t there a chart for Op/Flex and protection criteria?
  • 5.3 – Comparison of two methods for each criterion
  • How are DPA level average values sufficiently granular to highlight differences between the methods? Are there similar comparisons at a feeder level?
  • How should the color-coded figures be interpreted? Was this explained at the Jan. 6 meeting?
  • What conclusions about the relative strengths and weaknesses of each method can be drawn from these figures?
  • Why does comparing the average of 38 urban circuits to 44 rural circuits allow conclusive statements about one method vs. the other?
  • Where is this resistance on the x-axis summed and provided in the results spreadsheets?
  • What are the grey bars mentioned on page 41?
  • Section 6 – Comparative assessment – Do these provide a conclusive evaluation of a) streamlined vs. iterative; b) comparable results between IOUs?
  • 6.2 Comparison of ICA values between methods
  • Results are given as a function of resistance. Is this the only independent variable?
  • Protection schemes?
  • Voltage control devices?
  • Nominal voltage 4kv, 12 kV, etc?
  • Why are results provided for only 3 of the 5 criteria?
  • Were there exceptions to general trends for particular circuits?
  • A comparison is provided for only one circuit, which SCE labels as typical; how was “typical” determined?
  • Summary Figure 38 is hard to interpret: are the stacked bars intended to show the difference? Also, not all criteria are shown.
  • Summary table 5 is hard to interpret:
  • Isn’t there a wide range of ICA values for each criterion?
  • What is “Total” and how is this relevant?
  • 6.3 Computation time
  • Can more data be provided to show differences that appear small in Figure 39?
  • What about Feeder 41 vs. feeder 67?
  • What is required to provide iterative analysis system-wide?
  • Staff?
  • Consultants?
  • Calculation platform?
  • Software?
  • Time to code?
  • Time to calculate and process?
  • Servers to support stakeholder use of maps and data?
  • 6.4 – IEEE 123 comparison
  • How did the IOUs conclude that the differences were not significant? Based on a conversation with the IEEE test feeder WG chairman Jason Fuller, CYME and Synergi were part of development of the 123 circuit, and there should be no difference for voltage.
  • Only 4 criteria are shown; where is the fifth?
  • Need tabular data of all results
  • Fig 53 – SCE’s final ICA appears approximately 16% higher for all x-axis values. Why?
  • What is the assumed source impedance in each evaluation?
  • Section 7 – Display
  • What are the breaks in the colored lines on the online ICA map, for example on Curtis?
  • Section 8 – Other Studies
  • 8.1 Smart inverters
  • Only one Phase 1 function (VVAR) was tested, on one long feeder (goldenbear) that has low loading and is limited by steady-state voltage due to length. How does this affect the general applicability of this test?
  • Few details are provided; much is missing:
  • How much VR and caps now?
  • How much DER now, type and size?
  • VVAR applied to every node?
  • How realistic is it to apply VVAR to every node?
  • 8.2 DER portfolios
  • 8.3 Transmission penetration
  • Where are results for this test?
  • Isn’t this scenario covered by the “abstraction technique”?
  • 9 – Learnings
  • Regarding locational load shapes, how did incorporation of AMI data as briefly discussed on page 19 impact this learning?
  • 9.1 Computational tools
  • Can all three techniques be applied in both iterative and streamlined approaches?
  • Limitations to application on specific types of circuits?
  • Load profile reduction – used
  • Reduces the number of iterations because load is similar most of the time – 288 reduces to 56
  • Was this the same for each feeder and node?
  • What is load profile sweep, p. 76?
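The profile-reduction technique discussed above can be sketched as follows. The synthetic 288-point profile and the 0.25 MW similarity tolerance are illustrative assumptions; the report's 288-to-56 reduction presumably uses SCE's own similarity criterion.

```python
# Illustrative sketch: merge the 288 month-hour load points (12 months x
# 24 hours) that are similar, so fewer power flows need to be run.
import random

random.seed(1)
profile_288 = [round(random.uniform(2.0, 6.0), 2) for _ in range(288)]  # MW

TOL_MW = 0.25  # assumed similarity tolerance

# Keep one representative load level per TOL_MW-wide bucket.
reduced = sorted({round(mw / TOL_MW) * TOL_MW for mw in profile_288})
print(len(profile_288), "->", len(reduced))
```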
  • Node reduction – not used due to timing
  • When can this be implemented?
  • Reduction of criteria – used
  • What is the technical definition of a strong circuit?
  • 9.2 – Comparison between methods
  • Protection is not shown in the graphs; why, when it had the most significant differences per p. 80?
  • Appendix B - Criteria Matrix
  • Is this table intended to satisfy ORA’s request for a joint comparison exhibit to demonstrate that the IOUs are using a common methodology?
  • Were there any differences in how SCE prepares input data for use in Demo A compared to SDG&E and PG&E?
  • Were there any differences in assumptions made in Demo A compared to SDG&E and PG&E?
  • Did SCE find any limitations in using CYME compared to SDG&E’s experience with Synergi?
  • How did SCE include the substation transformer in the iterative method via post-processing?
