Final Report

Validation of DESSAC Winter Wheat Fungicide Module

Defra project: CE0152

ADAS project: VAAHN

July 2003

Dr S Parker

EXECUTIVE SUMMARY

DESSAC (Decision Support System for Arable Crops) is a computer-based system to aid crop management decisions in a range of arable crops. The Wheat Disease Manager module of DESSAC was validated in this project. This module provides crop managers with information to support decisions on control of the important fungal diseases of wheat. The aim is to ensure that fungicide inputs are no more than those required for effective control. Key findings from the project, and the actions taken as a result, are summarised below.

  • Field experiments early in the project identified specific components of DESSAC WDM that required further development work. Consequently, significant improvements in system performance were achieved during the period of this project.
  • The wheat canopy model developed originally for WDM was amended following observations made in the first year of field experiments. This amended model uses a novel method to predict canopy growth and development, which is more appropriate for use in decision support systems to aid disease management.
  • The revised canopy model predicted the emergence of the final three yield-forming leaves, and the date of anthesis, to within one week of their observed dates.
  • User observations of disease severity are crucial to optimise the benefit from WDM. The testing done in the experiments reported here used disease severity scored on a continuous scale 0-100%. Users have the option of scoring disease on a categorical basis, and this method was preferred by site managers using the system for the HGCA/DEFRA funded ‘Trials4U2C’ demonstration. Hence, the suitability of the categorical scale, for adjusting predictions of epidemic magnitude by the disease models, will be evaluated before commercialisation of WDM.
  • Early observations of septoria leaf blotch had an inappropriate influence in reducing initial inoculum modelled by the system at GS30-32. Hence, a probabilistic method was introduced to weight early observations. More recent research in Defra project CE0532 suggests that the disease model for septoria leaf blotch could be improved further by incorporating an adjustment of initial inoculum controlled by a function of accumulated cold winter temperatures.
  • The choice of future weather scenario affects the level of fungicide input which is appropriate. Future weather was originally selected by the system according to projected margin, rather than the likelihood of disease development. A more appropriate method of selection, based directly on epidemic magnitude, was developed and implemented.
  • WDM suggested fungicide active ingredients which were appropriate to the pathogens posing a risk to the crop.
  • The original fungicide dose response parameters were based on mean values and did not contain information about the greater variation in dose efficiency likely at lower dose rates. Greater account of this variation has been incorporated into revised parameters describing fungicide efficacy.
  • Spray timings suggested by WDM are now appropriate to ensure high dose-efficiency for control of disease on the important upper culm leaves.
  • The DESSAC Shell contains important farm records. It is crucial that users are encouraged to back up relevant data files. These need to be identified for routine back-up. Preferably, the system ought to prompt for, and provide a mechanism to create, appropriate back-ups.
  • The current level of performance of DESSAC WDM would improve fungicide decisions for the majority of UK crops.
  • Appropriate user training will be important if the full benefits for environmental and economic sustainability are to be exploited from DESSAC.

The probity of DESSAC WDM has been tested at the scales of both the component models and the integrated system. Substantial developments have been implemented at both scales during the period of this project. Therefore, the version now considered for on-farm use must undergo carefully organised commercial testing.

CONTENTS

1. INTRODUCTION

1.1 The Need for Decision Support

1.2 DESSAC and Wheat Disease Manager

1.3 Wheat Canopy Model

1.4 Disease Model

1.5 Decision Model

1.6 Objectives

2. METHOD

2.1 Field Experiments

2.2 Retrospective Testing

2.2.1 Canopy simulation

2.2.2 Disease simulation

2.2.3 Fungicide decisions

2.3 Trials 4U2C

2.4 DESSAC 2002

3. RESULTS & DISCUSSION

3.1 Field Experiments, Harvest 1999

3.1.1 The season

3.1.2 Software reliability

3.1.3 Product choice

3.1.4 Spray timing

3.1.5 Fungicide dose

3.1.6 Yield

3.1.7 Gross margin

3.2 Field Experiments, Harvest 2000

3.2.1 The season

3.2.2 Software reliability

3.2.3 Product choice

3.2.4 Spray timing

3.2.5 Fungicide dose

3.2.6 Yield

3.2.7 Gross margin

3.3 Retrospective Testing

3.3.1 Canopy model

3.3.2 Disease model

3.3.3 Fungicide decisions

3.4 Trials 4U2C, Harvest 2001

3.5 Field Tests, Harvest 2002

4. CONCLUSIONS

5. REFERENCES

APPENDICES

1. INTRODUCTION

1.1 The Need for Decision Support Systems

Decision-making in agricultural production is increasingly complex. The market has set increasingly challenging product specifications, economic pressures have demanded reductions in production cost and environmental concerns have had a major effect on product specification and regulation. This is particularly so in relation to pesticide use, where sustainability demands that treatments are minimised. In order for farmers and their advisers to respond to these challenges, the knowledge base to inform decisions has had to grow rapidly and the quantity of information to be considered can be overwhelming. Filtering, assimilating and using this information appropriately is a formidable task. The UK, in common with many countries, no longer funds a national agricultural extension service, so the traditional route for providing impartial frameworks for decision-making has been lost. Thus the competitive pressures arising from the globalisation of agriculture, and the need to develop sustainable farming practices, have increased the need to develop appropriate decision support systems for use by farmers and crop consultants. Agriculture has been slow to take up advances in information technology (Parker, 1999), but recent surveys suggest that the use of computers is increasing rapidly, though mainly for business management applications. For example, a recent DEFRA survey found high levels of computer ownership by arable farms (67% of larger holdings and 85% of very large holdings).

Over the past decade, substantial public funding has been invested in the development of computer-based decision support systems for agriculture. These can assist users to make better decisions by:

  • Integrating information into useable forms
  • Informing decisions about agrochemical use
  • Enhancing management decisions on resource use

As a consequence, sustainable farming practices would be better supported.

1.2 DESSAC and Wheat Disease Manager (WDM)

The Decision Support System for Arable Crops (DESSAC) was developed, through joint funding from DEFRA, BBSRC and HGCA, to provide an expandable system of linked decision support modules to assist farmers and consultants in making crop production decisions. The system provides tools for data acquisition and storage and, through the decision support modules, provides validated frameworks to exploit these data effectively. The system has been designed according to the principles of user-centred design (Parker, 1999) so that the needs of growers and advisers are accommodated implicitly in the system. However, this approach to problem solving is new in the industry and depends on principles that sometimes contradict widely used ‘rules of thumb’ (Paveley, 1999). Wheat Disease Manager (WDM) will be the first decision support module operational in the DESSAC shell. It will provide estimates of the optimum fungicide products, timings and doses for disease control in winter wheat. The module comprises models that simulate crop growth and development, disease progress and its impact on yield. Maximisation of margin over fungicide programme costs is achieved using a genetic algorithm to optimise treatment selection.

1.3 Wheat Canopy Model

Epidemic development is constrained by the emergence of successive ‘layers’ of culm leaves, because leaves are seldom infected by foliar pathogens before emergence. In the absence of significant abiotic or biotic stress, the onset of senescence of the upper culm leaves, which are responsible for intercepting most incident photosynthetically active radiation, is delayed until some weeks after anthesis (Sylvester-Bradley et al., 1997). However, when leaves become infected by foliar pathogens shortly after they emerge, the subsequent expression of symptoms causes early onset of canopy senescence and reduced resource capture (Bryson et al., 1997). As the phyllochron (Porter, 1984; Kirby et al., 1985) is shorter than the incubation period, disease-induced senescence usually starts to reduce green area shortly after a leaf reaches its maximum size, and primarily has its effect by reducing canopy duration post-anthesis.
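For illustration, the prediction of successive leaf emergence in thermal time can be sketched as follows. This is a minimal sketch assuming a constant phyllochron, a simple mean-temperature calculation of thermal time and arbitrary parameter values; it is not the WDM canopy model, and the function and parameter names are illustrative only.

    # Illustrative sketch only (not the WDM canopy model): predicting the full
    # emergence of successive culm leaves from accumulated thermal time,
    # assuming a constant phyllochron.
    from datetime import date, timedelta

    PHYLLOCHRON = 100.0   # assumed thermal time interval between leaves (degree-days)
    BASE_TEMP = 0.0       # assumed base temperature for thermal time accumulation

    def daily_thermal_time(t_min, t_max, base=BASE_TEMP):
        """Thermal time contribution of one day from daily min/max air temperature."""
        return max(0.0, (t_min + t_max) / 2.0 - base)

    def predict_leaf_emergence(start, daily_temps, n_leaves):
        """Predicted full-emergence dates for a number of successive leaf layers.

        start       -- date from which thermal time is accumulated
        daily_temps -- sequence of (t_min, t_max) pairs, one per day from start
        n_leaves    -- number of successive leaves to predict
        """
        emergence_dates = []
        accumulated = 0.0
        target = PHYLLOCHRON
        for day_offset, (t_min, t_max) in enumerate(daily_temps):
            accumulated += daily_thermal_time(t_min, t_max)
            while accumulated >= target and len(emergence_dates) < n_leaves:
                emergence_dates.append(start + timedelta(days=day_offset))
                target += PHYLLOCHRON
            if len(emergence_dates) == n_leaves:
                break
        return emergence_dates

    # Example: the next three leaves under a constant 10/18 C temperature regime.
    temps = [(10.0, 18.0)] * 60
    print(predict_leaf_emergence(date(1999, 4, 1), temps, n_leaves=3))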

Fungicide efficacy against disease on a given culm leaf has been shown to relate consistently to the time that elapses between full emergence and fungicide application (Paveley et al., 2000). Typically, any given dose is most effective if it is applied around fourteen days after full leaf emergence. Hence, although the effects of disease on resource capture occur after anthesis, efficient control of disease induced canopy loss can only be obtained if fungicide treatments are applied as the upper culm leaves emerge, prior to anthesis.

The impact of disease on yield depends on the way in which the crop is partitioning carbon at the time of attack (Boote et al., 1983). In wheat, anthesis marks the start of direct assimilate partitioning to grain. Theoretical analysis based on the wheat / Septoria tritici (anamorph of Mycosphaerella graminicola) pathosystem, has suggested that the factors important in determining the relationship between disease and yield loss differ from those that are important in determining yield per se (Paveley et al., 2001). Hence, progress with predicting disease-induced yield loss should not be entirely dependent on progress in predicting yield (Landau et al., 1998).

Coupling models of crop growth with models of disease has been advocated (Boote et al., 1983; Rouse, 1988), to account for the effects of host growth on epidemic development and disease on host growth. One threat to the simplicity of coupled models is the temptation to include in the crop model all those parameters which might be affected by the disease. These ‘coupling points’ represent the mechanisms by which the disease and crop (and hence the models thereof) might interact. Boote et al. (1983) defined seven such mechanisms, ranging from stand reduction, through acceleration of leaf senescence, to reduction in plant turgor. Several additional mechanisms were listed by Rickman and Klepper (1991). In many cases the mechanisms are inadequately understood and quantified, so the incorporation of too many coupling points increases the total error of the model by adding additional parameters, each of which has been imprecisely estimated (Reynolds & Acock, 1985). For foliar diseases on wheat, green canopy area provides an appropriate coupling point (Paveley et al., 2001).
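As an illustration of a single coupling point, the sketch below reduces green canopy area in proportion to disease severity and uses the remaining green area to estimate light interception via a Beer's law approximation. The extinction coefficient and the functional form are assumptions made for illustration, not parameters taken from WDM.

    # Minimal sketch of the green-area coupling point (assumed form, not WDM code):
    # the disease model reduces green canopy area, and the crop model uses the
    # remaining green area to estimate light interception.
    import math

    EXTINCTION_COEFF = 0.45   # assumed canopy light extinction coefficient

    def green_area_index(total_gai, disease_severity):
        """Healthy green area index after removing the diseased fraction (0-1)."""
        return total_gai * (1.0 - disease_severity)

    def fraction_light_intercepted(green_gai, k=EXTINCTION_COEFF):
        """Beer's law approximation of light interception by the green canopy."""
        return 1.0 - math.exp(-k * green_gai)

    # Example: a canopy of green area index 6 with 20% of its area diseased.
    healthy_gai = green_area_index(6.0, 0.20)
    print(round(fraction_light_intercepted(healthy_gai), 3))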

1.4 Disease Model

A generic disease model was designed to be coupled with the canopy simulation model summarised above, and described more fully by Milne et al. (2003). Disease progress is modelled on the six final leaves, because its negative impact on grain yield is primarily attributable to the associated green area loss of the upper canopy leaves. A leaf is assumed to be susceptible to infection once it has grown to 5% of its maximum area. Disease levels are simulated by building up a series of daily infections that are dependent on the timing of plant development and weather conditions. The structure of the model is similar for all foliar diseases. The growth (as percentage disease) of each individual leaf infection is modelled in thermal time using a logistic function. This model has been used previously to describe disease progress at the field and plant scale (cf. Waggoner, 1986). The novelty here is that it is used to describe a single infection event. Because the detailed development of a lesion is not modelled, it is necessary to make approximations about the effects of pathogen growth and development on the leaf. Hence it is assumed that the latent period of an infection is the time taken to reach 5% of its potential size; this is the point at which disease expression (i.e. green area loss) occurs. The maximum leaf damage caused by a lesion occurs at 90% of its potential size. A unit infection event is defined as one that potentially covers 1% of the fully grown leaf. The disease level on a given leaf, on a particular day, is the sum of the disease expressed by all infection events that have occurred up to that point in time.
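The description above can be illustrated with the following sketch, which grows each unit infection logistically in thermal time, withholds expression during the latent period (below 5% of potential size), caps expression at 90% of potential size and sums expression over all infection events on a leaf. The logistic rate and mid-point values are illustrative assumptions; the fitted parameters used in WDM are not reproduced here.

    # Minimal sketch of the generic single-infection logistic model described
    # above. The rate and mid-point parameters are illustrative assumptions.
    import math

    UNIT_LESION = 1.0            # a unit infection can cover 1% of the fully grown leaf
    EXPRESSION_THRESHOLD = 0.05  # latent period ends at 5% of potential size
    MAX_EXPRESSION = 0.90        # damage saturates at 90% of potential size
    GROWTH_RATE = 0.03           # assumed logistic rate (per degree-day)
    MIDPOINT_TT = 250.0          # assumed thermal time to half potential size

    def lesion_fraction(tt_since_infection, r=GROWTH_RATE, mid=MIDPOINT_TT):
        """Fraction of potential lesion size reached, grown logistically in thermal time."""
        return 1.0 / (1.0 + math.exp(-r * (tt_since_infection - mid)))

    def expressed_severity(tt_since_infection):
        """Visible disease (% leaf area) caused by one unit infection event."""
        f = lesion_fraction(tt_since_infection)
        if f < EXPRESSION_THRESHOLD:      # still within the latent period
            return 0.0
        return UNIT_LESION * min(f, MAX_EXPRESSION)

    def leaf_severity(current_tt, infection_events):
        """Sum visible disease over all infection events (each given as the
        thermal time at which it occurred), capped at 100% of the leaf."""
        total = sum(expressed_severity(current_tt - tt0)
                    for tt0 in infection_events if current_tt >= tt0)
        return min(total, 100.0)

    # Example: severity at 600 degree-days from infections on three earlier days.
    print(round(leaf_severity(600.0, [100.0, 150.0, 220.0]), 2))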

1.5 Decision Model

The decision model aims to identify the optimum fungicide treatment (product, timing and dose) for the prevailing disease risk. Treatment plans are evaluated for value (financial margin of crop value over treatment costs) through the coupled canopy and disease models (the process model).

The decision model uses a genetic algorithm to locate an optimal treatment. A treatment plan is considered as a chromosome, which is divided into genes coding the number of sprays, the fungicide products and their timing and doses. At the start of an optimisation, a population of treatment plans (chromosomes) is created. These are evaluated through the process model, and the most profitable treatments are retained to reproduce using a random selection procedure related to performance. Evolution through this process continues for up to 100 generations.
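The optimisation procedure can be sketched as follows. The chromosome encoding, parameter ranges and the stand-in evaluation function are illustrative assumptions; in WDM each plan is evaluated through the coupled canopy and disease models (the process model), which is not reproduced here.

    # Minimal sketch of a genetic algorithm over treatment plans, following the
    # chromosome/gene description above. Encoding and parameters are illustrative.
    import random

    PRODUCTS = ["Amistar", "Opus", "Unix"]   # example subset of permitted products
    MAX_SPRAYS = 3
    POP_SIZE = 40
    GENERATIONS = 100                        # the decision model runs for up to 100 generations

    def random_plan():
        """A chromosome: a list of (product, timing, dose) spray genes."""
        n_sprays = random.randint(1, MAX_SPRAYS)
        return [(random.choice(PRODUCTS),                    # product
                 random.randint(0, 120),                     # timing, days after a reference stage
                 random.choice([0.25, 0.5, 0.75, 1.0]))      # dose, fraction of label rate
                for _ in range(n_sprays)]

    def evaluate_margin(plan):
        """Stand-in for the process model, which in WDM evaluates a plan through the
        coupled canopy and disease models and returns margin over programme cost.
        Here a toy score rewards total dose applied and penalises cost."""
        total_dose = sum(dose for _, _, dose in plan)
        cost = 15.0 * len(plan) + 20.0 * total_dose          # hypothetical cost terms
        return min(100.0, 60.0 * total_dose) - cost

    def crossover(a, b):
        """Single-point crossover between two treatment plans."""
        cut = random.randint(0, min(len(a), len(b)))
        return (a[:cut] + b[cut:])[:MAX_SPRAYS]

    def mutate(plan):
        """Occasionally replace one spray gene with a random alternative."""
        if plan and random.random() < 0.2:
            i = random.randrange(len(plan))
            plan[i] = random_plan()[0]
        return plan

    def optimise():
        population = [random_plan() for _ in range(POP_SIZE)]
        for _ in range(GENERATIONS):
            ranked = sorted(population, key=evaluate_margin, reverse=True)
            parents = ranked[:POP_SIZE // 2]                 # retain the most profitable plans
            children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                        for _ in range(POP_SIZE - len(parents))]
            population = parents + children
        return max(population, key=evaluate_margin)

    print(optimise())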

The decision model can be operated for long-term planning to devise a basic strategy for the entire season. Such plans can then be refined during the season using short-term optimisation to explore product, dose and timing over the immediate fortnight.

1.6 Objectives

In their review of crop growth models for DSSs, Jame and Cutforth (1996) emphasized the importance of calibration and rigorous validation against independent data. Here we describe the development, calibration and testing of models of wheat canopy growth and disease progress, based on the principles described above, for use in a decision support system for disease management.

The project activities were designed to assess the usability of the software, to identify software bugs and evaluate the technical probity of the system. Four main activities were used to achieve this:

(1) Trials to test the usability of the software for growers and consultants.

(2) Field experiments in two seasons (1999, 2000) at locations representing the climatic variation of the UK.

(3) Analysis of performance of an updated version of the Shell and WDM, using data collected in experiments (2) above.

(4) User tests by industry representatives, for ‘live’ decision-making, of the software approved and released for further commercial development by the DESSAC consortium.

Results of the work in (1) are detailed in a previous report to DEFRA, reproduced here in Annex 1. This document reports the work in activities (2)-(4), summarised in Figure 1.


2. METHOD

2.1 Field Experiments

Over two crop seasons, the technical performance of DESSAC-WDM (henceforth WDM) was tested across geographically and climatically diverse locations of the United Kingdom (Table 1). All sites were first wheats (site details are given in Appendix 1). Plots (minimum 2 × 18 m) were drilled in early October, except where unfavourable weather caused delay, with seed rates adjusted to achieve a plant population of 200-250 plants m⁻². Nitrogen was managed to obtain crops with a peak green area index of 6 (Appendix 2).

Table 1. Summary of site details for field experiments. Detail is given in Appendix 1.

Site / Target disease / Cultivar: Resistant / Cultivar: Susceptible / Cultivar: Intermediate / Drilled

1999
ADAS Rosemaund / Leaf blotch* / Abbot / Consort / - / 8 October
ADAS Mamhead / Brown rust / Abbot / Buster / - / 20 October
Morley Research Centre / Leaf blotch / Abbot / Consort / - / 12 October
SAC Aberdeen / Mildew / Claire / Consort / - / 2 November
ADAS High Mowthorpe / Leaf blotch / Abbot / Consort / - / 12 October
ADAS Terrington / Yellow rust / Abbot / Brigadier / Harrier / 22 October
DANI Hillsborough / Leaf blotch / - / Consort / - / -
DANI Limavady / Leaf blotch / Abbot / - / - / -

2000
ADAS Rosemaund / Leaf blotch / Claire / Consort / - / 27 October
ADAS Mamhead / Brown rust / Claire / Buster / - / 14 October
Morley Research Centre / Leaf blotch / Claire / Consort / - / 14 October
SAC Aberdeen / Mildew / Claire / Consort / - / 2 October
ADAS High Mowthorpe / Leaf blotch / Claire / Consort / - / 5 October
ADAS Terrington / Yellow rust / Claire / Brigadier / Harrier / 12 October
DANI Hillsborough / Leaf blotch / Claire / Consort / - / 13 October

* Septoria tritici

Two varieties were grown at each site: one susceptible to the foliar pathogen expected to be predominant at the site, and therefore likely to be responsive to fungicide treatment; the other with good quantitative resistance to foliar pathogens, and therefore less responsive to fungicide (Table 1).

A randomised split block design (main plot variety, split plot fungicide) with three replicates was used.

Baseline comparisons were provided by treatments where crops were:

(i) not treated with fungicides (untreated),

(ii) treated by a robust prophylactic programme to provide a ‘healthy’ canopy,

(iii) treated by combinations of dose × timing for a specified product, to quantify a dose response surface.

WDM was tested by three treatments:

(i) an unrestricted treatment, where the system could select dose, product and timing. Fungicide choice was limited to 13 and 21 products in 1999 and 2000 respectively (Table 2),

Table 2. Fungicides that could be selected by the WDM optimisation process in the field experiments done in 1999 and 2000

Experiment / Active ingredient / Dose / Product / Price per label dose
1999, 2000 / Azoxystrobin / 250 g/l / Amistar / -
1999, 2000 / Chlorothalonil / 500 g/l / Bravo 500 / -
1999, 2000 / Cyproconazole / 100 g/l / Alto 100 SL / -
1999, 2000 / Cyprodinil / 75% w/w / Unix / -
1999, 2000 / Difenoconazole / 250 g/l / Plover / -
1999, 2000 / Epoxiconazole / 125 g/l / Opus / -
1999, 2000 / Fenpropidin / 750 g/l / Patrol / -
1999, 2000 / Fenpropimorph / 750 g/l / Corbel / -
1999, 2000 / Prochloraz / 450 g/l / Sportak 45 / -
1999, 2000 / Propiconazole / 250 g/l / Tilt 250 / -
1999, 2000 / Quinoxyfen / 500 g/l / Fortress / -
1999, 2000 / Tebuconazole / 250 g/l / Folicur / -
2000 / Epoxiconazole + Fenpropimorph / 84:250 g/l / Opus Team / -
1999, 2000 / Epoxiconazole + Kresoxim-methyl / 125:125 g/l / Landmark / -
2000 / Flusilazole / 400 g/l / Sanction / -
2000 / Flutriafol / 125 g/l / Pointer / -
2000 / Spiroxamine / 500 g/l / Neon / -
2000 / Carbendazim + Flusilazole / 125:250 g/l / Punch C / -
2000 / Chlorothalonil + Flutriafol / 300:47 g/l / Impact Excel / -
2000 / Tebuconazole + Triadimenol / 250:125 g/l / Silvacur / -
2000 / Epoxiconazole + Kresoxim-methyl + Fenpropimorph / 125:150:125 g/l / Mantra / -

(ii) a restricted treatment, where the system could select dose and timing for a specified product for which the dose response curve was quantified at the site,