3980 (XXXX-OT-XX)

Ser XXX/XXX

DD Mmm YY

From: Commander, Operational Test and Evaluation Force

To: Chief of Naval Operations

Subj: XXX SYSTEM OPERATIONAL TEST AGENCY MILESTONE B ASSESSMENT REPORT

Ref: (a) Test and Evaluation Master Plan No. 999 of (date)

(b) COMOPTEVFOR ltr 3980 Ser XXX/XXX of DD Mmm YY (Original Test Plan)

(c) COMOPTEVFOR memo 3980 Ser 00TD/XXX of DD Mmm YY (Test Report Appendix)

(d) Other references as needed

Encl: (1) XXX System OT-B1 Operational Test Agency Milestone B Assessment Report

(2) Continue adding enclosures as needed

(3)

1. Executive Summary. Per references (a) and (b), Commander, Operational Test and Evaluation Force (COMOPTEVFOR) conducted the OT-B1 Operational Assessment (OA) of the [SUT Name], Chief of Naval Operations (CNO) Project Number 9999. The purpose of this test was to assess the [SUT Name] by identifying system enhancements and significant areas of risk to the program's successful completion of Initial Operational Test and Evaluation (IOT&E). The [SUT Name] [Effectiveness COI name] Critical Operational Issue (COI) was assessed as (high/moderate/low) risk, as was the [Suitability COI name] COI. The [COI names] COIs were not assessed. Within the scope of this assessment, I recommend continuing program development for the [SUT Name].

2. Operational Effectiveness. Provide a high-level summary of the effectiveness assessments.

3. Operational Suitability. Provide a high-level summary of the suitability assessments.

4. Cyber Survivability. Provide a high-level summary of the cyber assessments.

5. Critical Operational Issues (COI) Assessments. COI assessments are summarized in the table below. See enclosure (1) for COI assessment rationale [and previous COI assessments]. The data supporting COI assessments and test conduct details are in reference (c), the test appendix, available upon request.

COI Assessments
COIs / OT-B1 (OA) / SUT Risks / SoS Risks
ASW / Green / None / None
SUW / Yellow / Minor (7) / Minor (1)
MOB / SUT: Red; SoS: Yellow / Severe (1), Major 1 (1), Major 3 (2), Minor (5) / Major 3 (1), Minor (2)
Reliability / Red / Major 2 (3), Major 3 (2) / None
Maintainability / Green / None / None
Availability / Green / None / None
Logistic Supportability / Green / None / None
ASW – Antisubmarine Warfare
MOB – Mobility
SoS – System of Systems
SUT – System Under Test
SUW – Surface Warfare

6. Conclusions. Within the scope of this assessment, I recommend continuing program development for the [SUT Name].

P. A. SOHL

Copy to:

Add recipients as needed


XXX SYSTEM

OPERATIONAL TEST AGENCY MILESTONE B ASSESSMENT REPORT

OT-B1 TEST REPORT


SECTION 1 - Test Results

All effectiveness and suitability tests were accomplished using the procedures and data analysis described in reference (b).

All effectiveness and suitability tests, except test E-1, were accomplished using the procedures and data analysis described in reference (a). For deviations, see the report appendix.

1.1 MAJOR TEST RESULTS

Table 1-1 contains the major quantitative test results from OT-B1.

Table 1-1. Major Quantitative Test Results
Characteristic / Parameter / Result / Threshold
Deep Water Target / TEFF (KPP) Deep Water Target / 0.89 / ≥0.50
Shallow Water Target / TEFF (KPP) Shallow Water Target / 0.44 / ≥0.50
Arctic Target / TEFF (KPP) Arctic Target / 0.83 / >0.50
Reliability / RXXX (KPP) / 0.93 / ≥0.90
Reliability / MTBOMFTIC / 234 hr / ≥300 hr
Maintainability / MCMTOMFTIC / 2.5 hr / ≤4 hr
Maintainability / MaxCMTOMFTIC / 3.5 hr / ≤7 hr
Maintainability / MRTTIC / 4.6 min / ≤5 min
BIT / PCD (TIC) / 0.98 / ≥0.95
BIT / PCFI (TIC) / 0.92 / ≥0.90
BIT / FA (TIC) / 0.24 / ≤0.25
Availability / AO (TIC) / 0.96 / ≥0.93
General Note: All formulas are defined in the test report appendix.
AO – Operational Availability
BIT – Built-in-Test
FA – False Alarm
KPP – Key Performance Parameter
MaxCMTOMFTIC – Maximum Corrective Maintenance Time for Operational Mission Failures/Faults for TIC
MCMTOMFTIC – Mean Corrective Maintenance Time for Operational Mission Failures/Faults for TIC
MRTTIC – Mean Reboot Time for TIC
MTBOMFTIC – Mean Time Between Operational Mission Failures/Faults for TIC
PCD – Probability of Correct Detection
PCFI – Probability of Correct Fault Isolation
RXXX – Torpedo Mission Reliability
TEFF – Torpedo Effectiveness
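The formulas in the test report appendix are authoritative. As a point of reference only, the operational availability figure above is conventionally computed as the fraction of total time the system is up; the expression below is a sketch of that conventional definition, not the appendix formula:

\[ A_O = \frac{\text{uptime}}{\text{uptime} + \text{downtime}} \]

The appendix definitions of uptime and downtime (for example, how standby time is counted) govern the reported value.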

Table 1-2 contains the major qualitative test results from OT-B1.

Table 1-2. Major Qualitative Test Results
Characteristic / Parameter / Result / Threshold
Detect, ID, and Decontamination / Chemical Biological and Radiological Defense / SAT / N/A
Continue with E/S major qualitative test results prior to listing the CS results
Prevent / Control Access / UNSAT / N/A
Prevent / Reduce System's Cyber Detectability / SAT / N/A
Prevent / Secure Transmissions and Communications / SAT / N/A
Prevent / Protect System's Information from Exploitation / SAT / N/A
Prevent / Partition and Ensure Critical Functions at Mission Completion Performance Levels / SAT / N/A
Prevent / Minimize and Harden Attack Surfaces / UNSAT / N/A
Prevent / Actively Manage System Configurations to Counter Vulnerabilities at Tactically Relevant Speeds / SAT / N/A
Prevent / Cyber Threat Environments / SAT / N/A
Mitigate / Baseline and Monitor Systems and Detect Anomalies / SAT / N/A
Mitigate / Manage System Performance if Degraded by Cyber Events / SAT / N/A
Mitigate / Actively Manage System Configurations to Counter Vulnerabilities at Tactically Relevant Speeds / SAT / N/A
Mitigate / Cyber Threat Environments / SAT / N/A
Recover / Recover System Capabilities / UNSAT / N/A
Recover / Actively Manage System Configurations to Counter Vulnerabilities at Tactically Relevant Speeds / SAT / N/A
Recover / Cyber Threat Environments / SAT / N/A
N/A – Not Applicable
1.2 TEST E-1, ASW

1.2.1

Will the installed sensor systems in air, surface ship, and submarine ASW platforms adequately support detection, classification, and localization of threat targets with the accuracy necessary for XXX delivery and target acquisition?

1.2.2 Results (select either Green, Yellow or Red)

The [SUT Name]'s capability to successfully accomplish the Air Warfare (AW) mission was assessed by reviewing system design and developmental testing against simulated AW threats throughout OT-B1. The design and testing data reviewed have not demonstrated that weapon system employment will be sufficient to meet AW mission requirements. The primary areas of concern were the risks documented in classified appendix D of this report and radar detection performance. Current radar performance, coupled with other system risks, made the weapon system less effective than the legacy xxxx System in the AW mission. The [SUT Name] System's capability to successfully accomplish the AW mission was therefore assessed as moderate risk. Additional risk areas were identified and are listed in table 1-4. The data supporting the AW COI assessment and test conduct details are in reference (c), the test appendix, and are available upon request.

1.2.3 Operational Considerations (OPCON)

1.2.3.1

Unlike a standard network device, the Automated Digital Network System Increment II aircraft system requires approximately 30 minutes to reboot. Recommend operational commanders account for this when planning and executing operations.

1.2.3.2

Firebird X signals use a 12-bit format that is not correctly interpreted by Firebird Y 5-bit systems. Firebird X codes are truncated by the Firebird Y platforms. The truncated codes are then forwarded through the Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance networks, Tactical Data Links, and Digital Data Systems of the Firebird Y platforms, causing numerous designation conflicts. Recommend operational commanders configure Firebird X-equipped platforms for 5-bit operation when operating with Firebird Y-equipped platforms to avoid designation conflicts.
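To illustrate the conflict mechanism only, the Python sketch below shows how two distinct 12-bit designation codes can collide once reduced to a 5-bit field. The code values and the assumption that truncation retains the low-order bits are hypothetical; the actual Firebird formats are not described here.

```python
# Hypothetical illustration of 12-bit-to-5-bit code truncation collisions.
# The code values and the low-order-bit truncation are assumptions for
# illustration; actual Firebird X/Y formats are not defined in this report.

def truncate_to_5_bits(code_12bit: int) -> int:
    """Return only the low-order 5 bits, as a 5-bit-only system would retain."""
    return code_12bit & 0b11111

# Two distinct (hypothetical) 12-bit Firebird X designation codes.
codes = [0b1010_0001_0011, 0b0110_1101_0011]

truncated = [truncate_to_5_bits(c) for c in codes]
print(truncated)  # [19, 19] -- two distinct tracks now share one designation
assert truncated[0] == truncated[1], "collision expected in this example"
```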

1.3 TEST E-2, SUW

1.3.1

Will the [SUT Name] System Advanced Hawkeye (AHE) successfully accomplish the SUW mission?

1.3.2 Results (Yellow)

The [SUT Name] System AHE capability to successfully accomplish the SUW mission was assessed during field carrier landing practice approaches during OT-B1. The [SUT Name] System was assessed to be marginally effective in the SUW mission due to the immaturity of the surface tracker. During testing, the [SUT Name] equipment did not demonstrate the minimum recovery headwind Key System Attribute (criterion: ≤21 knots) due to test limitations. Shore-based carrier suitability testing was not scheduled to be completed prior to the end of OT-B1. As such, this Measure of Effectiveness (MOE) was not assessed, but field carrier landing practice approaches were conducted without any adverse indications regarding recovery headwind. This MOE is expected to be tested during future IT events prior to IOT&E. Additionally, this MOE and other aircraft performance issues will be addressed in the Mobility COI as appropriate in future test plans and reports. One SoS risk was identified and is listed in table 1-5. The data supporting the SUW COI assessment and test conduct details are in reference (c), the test appendix, and are available upon request.

1.3.3 OPCONs

1.4 TEST S-1, RELIABILITY

1.4.1

Will the reliability aspects of the [SUT Name] System support the completion of its mission?

1.4.2 Results (Red)

The reliability of the [SUT Name] System to support completion of its mission was assessed throughout OT-B1, which included design reviews and observed operations of the prototype weapon system and radar. The reliability assessment was based on the entire weapon system, with specific concern for the radar system. The demonstrated radar Mean Time Between Operational Mission Failures, System (MTBOMFSYS) for the [SUT Name] System was 3.08 hours (criterion: >25 hours), based on 121 Operational Mission Failures (OMF) in 372.6 hours of system operating time during IT-B2 and OT-B1. However, using a subset of data taken from September 2018, an MTBOMF of 1.45 hours, based on 78.2 hours of operation and 54 failures, was observed. Due to the developmental nature of flight test, many OMFs and failures during the initial months of testing were not recorded (see figure 1-X). The high false alarm rate for both the radar and the weapon system severely impacted the aircraft's reliability performance. At this stage in the program, the maturity level of the radar was not expected to meet the threshold value; however, Commander, Naval Air Systems Command, and the prime contractor have agreed on a reliability growth curve that adjusts the reliability metric for program maturity, and the [SUT Name] System is not meeting the established growth goals. Based on poor demonstrated reliability relative to the established growth curves, the reliability of the [SUT Name] System was assessed as high risk to the capability to support completion of mission requirements. The data supporting the Reliability COI assessment and test conduct details are in reference (c), the test appendix, and are available upon request.
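For reference, the MTBOMF point estimates quoted above follow from the standard ratio of operating time to the number of OMFs; the appendix formulas remain authoritative:

\[ \text{MTBOMF}_{\text{SYS}} = \frac{T_{\text{operating}}}{N_{\text{OMF}}} = \frac{372.6\ \text{hr}}{121} \approx 3.08\ \text{hr}, \qquad \frac{78.2\ \text{hr}}{54} \approx 1.45\ \text{hr (September 2018 subset)} \]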

1.4.3 OPCONs

1.5 TEST S-2, MAINTAINABILITY

1.5.1

Will the [SUT Name] System be maintainable in its intended operating environment?

1.5.2 Results (Yellow)

The [SUT Name] System's capability to be maintained by Fleet personnel was assessed during three maintenance demonstrations conducted during OT-B1. Although test data showed the [SUT Name] System repair times were acceptable to support operations, the [SUT Name] System onboard fault detection and isolation systems did not support aircraft maintenance. Both the radar and the weapon system Fault Isolation (PCFI) rates missed the threshold requirements by a significant margin. The demonstrated radar PCD was 0.65 (criterion: >0.85), based on Reliability, Maintainability, and Availability analysis of IT-B2 and OT-B1 postflight data downloads of the Radar Fault Detection System. Without adequate onboard fault detection and isolation systems, maintenance personnel would spend excessive repair time troubleshooting erroneous or undetermined faults, resulting in unnecessary system downtime and a significant impact on the [SUT Name] System mission. Based on poor fault detection and isolation, the capability of the [SUT Name] System to be maintained by Fleet personnel was assessed as moderate risk. The data supporting the Maintainability COI assessment and test conduct details are in reference (c), the test appendix, and are available upon request.
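As conventionally defined (the appendix formulas are authoritative), the BIT figures of merit cited above are ratios of correctly handled faults to the applicable fault population; the expressions below are a sketch of those conventional definitions, not the appendix formulas:

\[ P_{CD} = \frac{\text{faults correctly detected by BIT}}{\text{total faults}}, \qquad P_{CFI} = \frac{\text{faults correctly isolated by BIT}}{\text{faults detected by BIT}} \]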

1.5.3 OPCONs

1.6 CYBER SURVIVABILITY

1.6.1

Assessment of a system's capability to survive and operate after exposure to cyber threats that attempt to prevent the completion of operational missions through destruction, corruption, denial, or exposure of data transmitted, processed, or stored.

1.6.2 Results (Red/Yellow/Green)

Insert a discussion detailing how the cyber survivability attributes supported or did not support mission accomplishment.

1.6.3 OPCONs

1.7 CONSOLIDATED RISK LISTING

A consolidated listing of all [SUT Name] risks is provided in table 1-4 (SUT risks) and table 1-5 (SoS risks).

Table 1-4. [SUT Name] Consolidated SUT Risk Listing
Risk Number (SUT XXXX-) / Description / Severity / Affected COI (Primary) / Affected COI (Others)
001 / This is paragraph one of blue sheet 001. / Major 1 / STK / AW
003 / This is paragraph one of blue sheet 003. / Major 3 / AW / None
Continue adding all SUT blue sheets.
AW – Air Warfare
STK – Strike Warfare

Table 1-5. [SUT Name] Consolidated SoS Risk Listing
Risk Number (SoS XXXX-) / Description / Severity / Affected COI (Primary) / Affected COI (Others)
005 / This is paragraph one of gold sheet 005. / Major 2 / STK / AW
006 / This is paragraph one of gold sheet 006. / Minor / CCC / None
Continue adding all SoS gold sheets.
CCC – Command, Control, and Communications
