Title

Author 1, Author 2, Author 3

University of Sheffield
Department of Computer Science
Regent Court, 211 Portobello Street, Sheffield, S1 DP, U.K.
+44(0) 111 111 1111

{a.1, a.2, a.3}@dcs.shef.ac.uk

ABSTRACT

Write a short abstract of your paper here.

Categories and Subject Descriptors

I.3.7 [Three-Dimensional Graphics and Realism]: Computer Facial Animation

General Terms

Algorithms, Human Factors.

Keywords

Computer facial animation.

1. INTRODUCTION

There are a number of approaches to producing visual speech and general facial movements, such as pose-based interpolation, concatenation of dynamic units, and physically-based modeling (see [13] for a review).

2. CONSTRAINT-BASED VISUAL SPEECH

A posture (viseme) for a phoneme is variable within and between speakers. It is affected by context (the so-called coarticulation effect), as well as by such things as mood and tiredness.

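Because a viseme is not a single fixed posture, it is natural to constrain each articulation parameter to lie within a measured range at each viseme time, and to select the smoothest trajectory that satisfies those ranges. The following is only a sketch of one common way to pose such a range-constrained formulation, with illustrative notation rather than the paper's own: y(t) is an articulation parameter trajectory, [l_i, u_i] is the measured range for the i-th viseme at time t_i, and c is a global acceleration bound.

\[
\begin{aligned}
\min_{y(\cdot)} \quad & \int_{t_1}^{t_N} \ddot{y}(t)^{2}\,dt \\
\text{subject to} \quad & l_i \le y(t_i) \le u_i, \qquad i = 1,\dots,N, \\
& \lvert \ddot{y}(t) \rvert \le c \quad \text{for all } t.
\end{aligned}
\]

Under this reading, tightening the global bound c forces flatter trajectories, which is the effect examined in Section 4.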

3. INPUT DATA FOR THE RANGE CONSTRAINTS

In order to produce specific values for the range constraints described in the previous section, we need to define the visemes that are to be used and measure their visual shapes on real speakers.
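As a minimal sketch of this step (the data layout, parameter name, and viseme labels below are illustrative assumptions, not the measurement protocol used here), per-viseme range constraints for a single mouth-shape parameter can be derived from repeated measurements by taking the span of the observed values:

from collections import defaultdict

# Each measurement: (viseme label, mouth-shape parameter value), e.g. a
# normalised lip-opening value measured from images of a real speaker.
measurements = [
    ("a", 0.62), ("a", 0.58), ("a", 0.66),   # several repetitions per viseme
    ("m", 0.02), ("m", 0.05), ("m", 0.03),
    ("o", 0.40), ("o", 0.44), ("o", 0.38),
]

def range_constraints(samples):
    """Collect a per-viseme [lower, upper] bound for one shape parameter."""
    by_viseme = defaultdict(list)
    for viseme, value in samples:
        by_viseme[viseme].append(value)
    # The range constraint for a viseme is the span of the observed values;
    # a margin could be added to allow for unseen speaker variation.
    return {v: (min(vals), max(vals)) for v, vals in by_viseme.items()}

for viseme, (lo, hi) in sorted(range_constraints(measurements).items()):
    print(f"viseme /{viseme}/: parameter range [{lo:.2f}, {hi:.2f}]")

In practice one such range would be collected for every articulation parameter and every viseme in Table 1, and these ranges then serve as the range constraints of the previous section.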

4. RESULTS

The Mexican-Spanish talking head was tested with the sentence “hola, ¿cómo estás?”. Figure 9 shows the results when the global acceleration constraint is varied. In the left column the global constraint is set to 0.03, whereas in the right column it is set to 0.004. Differences in mouth opening can be observed between the two columns.

Table 1. Mexican-Spanish viseme definition (columns: Phoneme, Viseme name)

Figure 1. Front and side view of the viseme

5. CONCLUSIONS

We have produced a Mexican-Spanish talking head that uses a constraint-based approach to create realistic-looking speech trajectories.

6. ACKNOWLEDGMENTS

Thanks to XYZ, who helped us with the production of the real mouth pictures.

7. REFERENCES

[1] Black, A., Taylor, P., and Caley, R., 2007. Festival Speech Synthesis System.

[2] Benguerel, A. and Cowan, H., 1974. Coarticulation of upper lip protrusion in French. Phonetica, 30:41–55.

[3] Cohen, M. and Massaro, D., 1993. Modeling coarticulation in synthetic visual speech. In Proceedings of Computer Animation '93, pp. 139–156.

[4] Dodd, B. and Campbell, R. (Eds.), 1987. Hearing by Eye: The Psychology of Lipreading. London: Lawrence Erlbaum.