THE ACCESS GRID IN COLLABORATIVE ARTS AND HUMANITIES RESEARCH
An AHRC e-Science Workshop Series
REPORT ON WORKSHOP 2: SOUND AND MOVING IMAGE
WEDNESDAY 17 JANUARY 2007, 08.00–10.00 GMT
Workshop Leader: Professor Andrew Prescott, Humanities Research Institute, University of Sheffield
Feedback from participants at Bristol
From Ale Fernandez on the live music component:
1) No mixer/microphones were set up previously, so this has to be taken into account in the results... Perhaps a future test should have input from university staff with experience of broadcasting, but also from artists who use this kind of thing day to day for telematic work, people from commercial environments, and technologists working with sound and streaming protocols.
2) It was interesting that Dorothy, when conducting, instinctively asked people to participate in order of perceived sound quality, thus showing that there is always a hierarchy, in this case technical; but it wasn’t the same order that we would have chosen in Bristol, as we could hear a different level of quality for each!
3) The latency experiment came from a classical music background of conducting, and it’s interesting how quickly this solved a lot of problems we found when doing the Locating Grid Technologies workshops - all we did then was try to clap or play to a beat, and it was much harder then to work with the latency issue. It would be very interesting to explore conducting over the AG, perhaps even with electronic cues, as we did in a very rudimentary way in the LGT workshops with PowerPoint.
4) As noted by Neal Farwell, when the musicians were playing the single notes, we heard one note almost a semitone lower than the rest (I think it was Australia; it was the second sound played).
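A pitch offset of that size is easy to quantify: in equal temperament a semitone corresponds to a frequency ratio of 2^(1/12), and pitch differences are conventionally measured in cents (100 cents per semitone). A minimal Python sketch; the 440 Hz / 416 Hz figures are hypothetical, chosen only to illustrate an offset of roughly a semitone:

```python
import math

def cents(f_observed: float, f_reference: float) -> float:
    """Pitch difference in cents (100 cents = one equal-tempered semitone)."""
    return 1200 * math.log2(f_observed / f_reference)

# Hypothetical example: an A4 (440 Hz) arriving flattened to 416 Hz.
offset = cents(416.0, 440.0)
print(round(offset, 1))  # -97.1, i.e. almost a full semitone low
```

A shift this large points to a sample-rate or CODEC mismatch somewhere in the chain rather than to the player, which is one reason the observation is worth logging.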
5) Interesting to hear that Dorothy was looking for a visual metaphor or representation for "the grid" - as a programmer I’d relate this to a need to research more appropriate interfaces for performance. Parip Explorer could give clues to this:
From Neal Farwell, Department of Music, University of Bristol, on the live music component:
Sound quality via the Access Grid - a quick response
In conversation after this morning’s session, Pam noted the trend towards lots of locally-conceived AG and e-Science ventures in the Humanities, and the tendency inadvertently to reinvent the wheel. Sound quality is a case in point, and I think there are some ready improvements we can make by combining AG knowledge with the sound engineering know-how that most or all of our institutions have. We’re planning some local experiments in Bristol in the next few weeks that should help this along. I’ll report back.
A quick outline for those who might be interested but don’t work habitually with music:
There is a huge body of knowledge relating to recording engineering and broadcast. Any system for sound recording/reproduction or relay has multiple elements in the chain. A general principle is that each element can potentially contribute noise or distortion, and these artefacts are very hard to get rid of again further along the chain. Equipment designers and users in professional audio therefore take great care to match the relative configuration of the elements so that each is well suited to its task and is operating at its best. A corollary is that it is worth finding out which is the weakest link and strengthening that first, then repeating iteratively (HiFi enthusiasts know this syndrome!).
A simplified model of the AG audio chain - one to one, and unidirectional:
(1) musician behaviour
(2) microphone type and placement, room acoustics, ambient noise
(3) analogue conditioning, noise gating, CODEC
(4) network transport and clients
(5) CODEC
(6) loudspeaker type and placement, room acoustics, ambient noise
(7) listener / musician behaviour
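Neal’s weakest-link principle can be sketched as a toy loop over a chain like the one above; the element names echo the list, and the quality scores are entirely hypothetical:

```python
# Hypothetical quality scores (higher is better) for elements of the
# audio chain above; the numbers are illustrative only.
chain = {
    "microphone": 6,
    "analogue conditioning": 7,
    "network": 4,
    "loudspeaker": 3,
}

def improve_weakest(chain: dict, rounds: int) -> list:
    """Repeatedly find the weakest element and strengthen it one step."""
    upgraded = []
    for _ in range(rounds):
        weakest = min(chain, key=chain.get)  # current weakest link
        chain[weakest] += 1                  # strengthen it
        upgraded.append(weakest)
    return upgraded

print(improve_weakest(chain, rounds=2))  # ['loudspeaker', 'network']
```

Each pass targets whichever element is currently worst, which is exactly the "find the weakest link, strengthen it, repeat" advice: effort spent on an element that is already better than the weakest one is largely wasted.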
Item (4) is for network specialists (not me!) but deals with the many-to-many potential of AG meetings, and with the trade-off of latency (delay) versus dropouts, bandwidth per data stream, and so on. This has a bearing on the choice of CODECs and conditioning, and leads to the new aesthetic positions that Dorothy outlined.
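The latency side of that trade-off can be made concrete with a back-of-envelope budget: each stage of the chain adds delay, and larger buffers reduce dropouts at the cost of total latency. A sketch in Python; every stage name and millisecond figure here is a hypothetical assumption for illustration, not a measurement from the workshop:

```python
# Back-of-envelope one-way latency budget for a single audio stream.
# All millisecond figures are hypothetical, illustrative assumptions.
budget_ms = {
    "capture buffer": 10,
    "encode (CODEC)": 5,
    "packetisation": 5,
    "network transit": 30,
    "jitter buffer": 40,  # larger -> fewer dropouts, but more delay
    "decode (CODEC)": 5,
    "playback buffer": 10,
}

total = sum(budget_ms.values())
for stage, ms in budget_ms.items():
    print(f"{stage:16} {ms:>3} ms")
print(f"{'total':16} {total:>3} ms")  # 105 ms one way in this sketch
```

Because the delays simply add up, shrinking the single largest contributor does more good than shaving a little off every stage - the weakest-link advice applies to the latency budget just as it does to sound quality.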
What I’m hoping to do is a quick review of info on (4) and of the recent musical work done under PARIP, then do some pragmatic optimisation experiments on (2) and (6) especially, and their interaction with (3) and (5). We’ll probably take a mobile AG node into our recording studios, where we can readily try out variants on microphones etc.
I’ve heard the topic of echo cancellation raised several times. My hunch is that this is a red herring in relation to music, rather like the old studio fallacy that you can take a poor recording and "fix it in the mix". It’s usually much more productive to get the source materials right.
A topic that might merit further research, re (1) and (7): are there differences to be observed, or things to learn, from a comparison of working with two different kinds of musician? On the one hand, pop/rock/classical "session" players who are comfortable with playing to microphones, wearing headphones (yes, why not?), responding to talk-back, etc.; on the other, musicians whose work does not involve the studio.
I hope this is useful, and your comments and suggestions are welcome, plus pointers to existing smooth-rolling wheels.
From Ale again on the video component:
Speaking about partnerships: an ex-colleague of ours in ILRT, Libby Miller, is now working with a company that is among many trying to become the next platform for "television", i.e. they have very good software for viewing, annotating and distributing clips with real-time chat etc., all for use over the internet with desktop computers. I wonder whether partnering with this kind of business at this stage would be important, so as to widen the horizons from the TV metaphor into one similar to what we’re starting to explore with the AG? The company was yesterday renamed and relaunched and can be found at
Finally from Pam King:
As I said during the summing-up period, I think we are all learning how to work with the virtual space, but its ‘shape’ is confusing. In particular, we cannot make eye contact, and some people clearly speak to the screens they are watching rather than to camera. The conventional layout of rooms is, generally, unhelpful. Regarding the streamed video material we watched: when everything is on-screen, it becomes doubly important to distinguish between the previously edited and the directly experienced in real time. The performativity specialists who have been working with the AG have already explored these issues. There are, theoretically, a number of different audience experiences of performance in the AG medium that I can think of, all different, including:
- sharing pre-edited recordings
- watching in real time from fixed web-cams
- watching in real time, following the gaze of a participant or physically present audience member with a camera at a ‘live’ performance
These are all legitimate and useful, but they need to be distinguished. I felt that in this session we were focusing on the quality of transmission over the AG ‘for practical purposes’, and so tended to elide our experience of live performance with our viewing of pre-processed material.