I welcome Dr Hugh Bradlow, Chief Scientist of Telstra, to come forward. Thank you, sir. An electrical engineer by background. I have to mention you've got a doctorate in experimental nuclear physics from Oxford, which I think haunts you for the rest of your life. He's a leading thinker in telecommunications. Please make him welcome.

(APPLAUSE)

HUGH BRADLOW: Thank you. I've never thought of myself as being "haunted" by it, but I have recently delved into quantum computing, and it's turned out to be moderately useful in trying to understand that - which I still don't. Anyhow, today I want to talk to you about artificial intelligence. I will cover some of the things that came up in the previous conversation, but let me start off by saying that the question about whether the autonomous vehicle kills the busload of kids or the old person is a completely ridiculous question, because it's a total edge case - the thing will just stop. Let's get it clear - human beings kill, machines don't. Having said that, let me tell you what I want to talk to you about today. I want to cover three things. The first is digital platforms - because that's why we're having this conversation. The second is big data and AI, which is the main topic. Then I thought I'd pick up a few examples of how AI is being applied today in the real world. I'll have to skim through those, because I do have a hard stop at 3:00.

Starting off with digital platforms - you're all familiar with today's digital platforms: the internet, and broadband, both fixed and mobile. Together those create the opportunity for cloud computing, and cloud computing gives you abundant computing - that's part of the AI puzzle. Of course, it's led to all sorts of disruptions of the economy. I won't go through those - you're familiar with them. Then you're also familiar with the Internet of Things, which is the coming thing now - why is it an emerging technology? Because today, very little is actually connected up. By the way, it's interesting to note the trigger for the Internet of Things is these things - mobile phones - which have four key sensors in them: image, in other words a camera; audio - a microphone; location; and movement.
Once you create them at the scale of billions, you get cheap communicating sensors, and that spawns a whole industry of people making low-cost, low-power communicating sensors, which is what triggers the Internet of Things. You need a platform for the Internet of Things that consists of networks and big data platforms, or cloud computing platforms. The networks, being from Telstra, I could spend a lot of time on. Let me summarise by saying there'll be a lot of messing around while people try to find arbitrage niches in the market. Eventually, the cellular industry will roll over all those solutions like a tank, 5G will come along, and that will be our network platform. The data platform is where life gets interesting, because you've got this ability to collect huge amounts of data in so-called big data platforms, which I've called a data lake there. As far as I'm concerned, that's simply software plumbing - it's not that interesting. You go and collect a whole lot of open-source software like Apache Spark and the like, you collect your data together, and you apply new analytics. The new analytics are the interesting part. Basically, there are two types - there's search, and there's machine learning. Machine learning is interesting because of how it differs from the past - and I'm going to use speech recognition as the example, like that little Tapia robot. By the way, I have an Amazon Alexa, and have had one for the last two years. It does everything that thing does, except it doesn't follow me around the house, which would be intensely irritating. But it does all those things, and the speech recognition is outstanding, and I'll tell you why. The way machine learning works is this. Instead of trying to model speech - I worked on this topic in the '80s and '90s; we used to try and create a model of the human vocal tract, then fiddle around with the parameters and see if the speech waveform on the left matched what people were actually saying.
If it did, we knew we had a match and we could identify what they were saying. It didn't work. I'll show you the curve in a moment. Along came Google, and they've got abundant data. They've collected it all in their cloud big data systems. They've got abundant computing. And they throw all that data at an artificial neural network, and they recognise patterns. That's all they're doing - recognising patterns. Those patterns translate into words. If you actually look at the graph - between the 1970s, when people started trying to do speech recognition, and 2010, absolutely nothing happened. We went from 50% accuracy to 70% accuracy, both equally unusable. Then Google came along in 2010 with artificial neural networks and their abundant data. By 2015/2016, we'd reached human-level accuracy in speech recognition. You can see the transcript over there, which I presume is being done automatically, is actually recognising a whole range of different accents, different words, me babbling at an incredible speed. All these things can be recognised today. That, of course, transforms the household in particular, because you can literally talk to your house. In terms of commercial channels, people talk about multi-channel contact centres, meaning the web, the app, or speech - and we're starting to think about the kitchen channel, because people - and Amazon - are thinking about this very deeply. People in their kitchen want to order stuff, their hands are dirty because they're cooking, and they can do it all through voice in a very convenient way, and you can ask it to play music for you, and the like. However, probably a more interesting indication of this pattern capability is machine vision. I took my photo stream, which has about 30,000 photos in it, and said to Google, "Recognise all the photos of me." That's not because I'm a massive egotist - I am, actually, but don't worry about that - it's because I own the rights to these pictures.
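For anyone curious what "recognising patterns" means in code, here is a minimal sketch of the idea: a single artificial neuron learning to separate two made-up clusters of feature points by gradient descent. All of the numbers are invented for illustration - a real speech recogniser uses deep networks over acoustic features, not two-dimensional toy data.

```python
import math
import random

random.seed(0)

def make_data(n=200):
    """Two made-up classes of 2-D 'feature' points: class 0 clustered
    around (0, 0), class 1 around (2, 2)."""
    data = []
    for _ in range(n):
        label = random.randint(0, 1)
        centre = 2.0 * label
        point = (centre + random.gauss(0, 0.5), centre + random.gauss(0, 0.5))
        data.append((point, label))
    return data

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(data, epochs=200, lr=0.1):
    """One logistic unit trained by per-sample gradient descent -
    no hand-built model of the data, just pattern fitting."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in data:
            p = sigmoid(w[0] * x1 + w[1] * x2 + b)
            err = p - y          # gradient of the log-loss wrt the pre-activation
            w[0] -= lr * err * x1
            w[1] -= lr * err * x2
            b -= lr * err
    return w, b

data = make_data()
w, b = train(data)
predict = lambda x1, x2: sigmoid(w[0] * x1 + w[1] * x2 + b) > 0.5
accuracy = sum(predict(*x) == y for x, y in data) / len(data)
print(f"training accuracy: {accuracy:.2f}")
```

The point of the toy is the one he makes on stage: nobody told the neuron what the two classes look like - it found the separating pattern from examples alone, and with abundant data and computing the same trick scales up to speech.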
You can see it's recognised me in all sorts of situations. I'm wearing bicycle helmets and sunglasses, I'm sitting in the cockpit of an aeroplane, I'm in profile - it will recognise all of that as being me. And it's doing that using machine vision. And it turns out, last year, or the year before, we reached an inflection point where machines could recognise objects better than human beings. They could distinguish cats from dogs better than human beings can actually do it. Machine vision has become very important. That feeds back into the autonomous vehicles, which I'll get to in a moment. However, it's important to realise there's a huge amount of hype around artificial intelligence. Let me give you four very simple examples of why the hype outruns the reality. The first thing is - human beings are rather amazing in their brain power, in the sense that we can do what's called one-shot learning. You can show a toddler a picture of a cat - or you can show it an actual cat - and thereafter, it will recognise all felines as being cats. Even big things like lions. What's more, our brain can do multidomain - we can do speech recognition, we can do visual recognition, all these things together - whereas machine learning algorithms can only do it in a single domain. Of course, our brain is divided into segments, but it's doing all of that in one instrument. The third thing is there's bias in the training data sets. That causes all sorts of confusion. A Stanford academic recently said he can use machine vision to determine the sexuality of an individual from their photograph. He also claims he can determine their IQ and their political persuasion, which is an interesting claim, but he hasn't presented any data on that. He has presented data on the sexuality claim. What you've got to remember is that his training data set was people who've declared their sexuality, so it immediately has a bias. It's not necessarily a valid result.
The last thing is that it does take time to retrain a machine-learning algorithm - it can take quite a long time, like weeks - whereas our brains can retrain almost in real time. We've got a long way to go before we get to the Elon Musk Skynet scenario - I just don't give it any credence. A few examples, as I said, of AI in the real world:

Obviously I'm in the telecommunications industry. We're heading towards this SDN/NFV world of sliced networks, of SD-WANs, where you've got a need to highly optimise a very large number of end points and routes - that is an area where AI will have a big impact. The biggest impact we're hoping it'll have, though, is on customer service - that we actually get to the nirvana of proactive customer service. I've been in this industry for 22 years - about to leave it next week, by the way - and in those 22 years, we've talked about proactive customer service, and never quite got there. But this type of machine learning creates that promise. The big opportunity in field service is to avoid truck rolls, and that's where augmented reality has a role to play. This is an example I got from Microsoft which shows the opportunity for remote customer service. It happens to be a father helping his daughter do some plumbing. She's wearing the HoloLens - it's integrating both the digital world, which he's presenting to her, and the real world, which it's digitising, and merging them into a mixed reality. That is starting to appear in real-world situations. The picture on the right I took at CeBIT in Hanover this year. It will allow technicians to work on the latest high-tech vehicle without having to be retrained, because you can actually do the retraining by overlaying that digital information onto the engine they're working on. Let's get to the autonomous vehicles. This is the world of the future we're looking at. That is an intersection...

(CROWD GROANS)

There are no traffic lights. No-one is stopping. And there are no accidents. And people are platooning, and everything's moving fine. Now, this creates a huge opportunity for society...

(LAUGHTER)

It will take a little bit of user retraining...

(LAUGHTER)

..unless you come from Hanoi, of course, in which case you'd be totally used to this. But the fact is, what we've got to remember here is that it's not just about autonomy - it's about connected autonomous vehicles. Watch this guy - that's where you'd need a bit of retraining...

(CROWD GROANS)

Remember, he is communicating with all those vehicles in his vicinity, because he's got communications in his phone which are talking to their vehicles, and they are all coordinating with each other. The example I always give everyone is this - I was driving in the middle of Africa last year. I got into one of those thunderstorms where you couldn't see beyond the bonnet of the car. It was absolutely pelting down. I knew there was a ditch on my left, so I couldn't pull over. I was way out in the middle of nowhere, so I had no coverage. I knew it was Africa and that the guy behind me was not going to stop, so I couldn't stop, and I just kept driving blindly. In this connected, autonomous world, the car will be communicating with the vehicles around me, so they won't drive into me, and the car will know where it is with centimetre accuracy, so it won't drive off the road even though it can see nothing in front of it. You've got to remember that, even though there will be accidents with autonomous vehicles, and there have been - like the Tesla case - you can reprogram the car, and they have done that. You can't reprogram human beings to stop making the same stupid mistakes every time. By the way, I'm afraid to say that the research claiming connected autonomous vehicles mixed with human drivers will be a danger is wrong, because we are already seeing driver-assist systems actually reduce accident rates. This world is going to have a huge impact on our future. It's going to change about 20 industries and radically change the way we think of the urban environment. A couple of other things are going to change - and this is going to cause some concern. Law enforcement - we'll have police officers hopefully wearing their body cams and not turning them off when they're beating people up - I forgot, that's the US. We're in Australia. It's OK.

The fact is, though, machine recognition will help them recognise who they are dealing with, and then we get into - with all the surveillance technology - what I think of as pre-crime. I don't know if any of you are old enough to have seen Minority Report, but we are heading towards pre-crime. In Washington DC, where they have quite a lot of gun crime, they have taken all the data feeds from their surveillance cameras, their surveillance microphones, Twitter feeds, transport, weather, and they put them into a big data system, and then they can predict where the crime hot spots will be for a given day, and they concentrate their policing resources on those particular crime hot spots. Now, it's got a whole lot of ethical issues around it, but the fact is it does make policing more effective if it's used properly. Health - simple scenario - we will all soon be wearing a single-lead ECG like the one on the left there. It is a genuine ECG lead, not just a heart-rate measurement. It will be continuously measuring our heart waveform, feeding it through our phone into a big data centre, where we'll have algorithms that are looking for anomalies, and when it discovers an anomaly, it will alert a human carer, who will make a decision, and you will be sitting quite happily at your desk one day, and an ambulance will come and cart you off to hospital before you have the heart attack. If that sounds invasive - remember that your chances of survival decrease by 10% for every minute after a heart attack that you don't get care. So it will actually save lives.
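The anomaly-flagging step described there can be sketched very simply: compare each new reading with a rolling baseline, and raise anything far outside it for a human carer to review. The window size, threshold, and data below are all invented for illustration - real cardiac monitoring analyses the full ECG waveform with trained models, not a simple z-score.

```python
import statistics

def detect_anomalies(intervals, window=20, threshold=4.0):
    """Flag any inter-beat interval more than `threshold` standard
    deviations away from the rolling mean of the previous `window`
    intervals. Returns the indices of flagged readings."""
    alerts = []
    for i in range(window, len(intervals)):
        baseline = intervals[i - window:i]
        mean = statistics.mean(baseline)
        sd = statistics.pstdev(baseline) or 1.0  # guard against zero spread
        if abs(intervals[i] - mean) / sd > threshold:
            alerts.append(i)
    return alerts

# A steady rhythm of roughly 800 ms between beats, with one sudden
# 1600 ms pause injected at position 30 - a "skipped beat".
stream = [800 + (i % 5) * 4 for i in range(40)]
stream[30] = 1600
print(detect_anomalies(stream))  # -> [30]
```

In the scenario he describes, the flagged index would not page an ambulance directly - it would go to a human carer who makes the call, which is exactly where the algorithm hands over.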

I don't know why I have that one there - I'll skip it. Our lives will be managed by virtual personal assistants - Google already have one in the phone today. The thing about the virtual personal assistant is it can join up all aspects of your life. It's like having a human personal assistant looking over your shoulder 24/7. For example, if you are travelling - as I am today - and you've got a whole sequence of events tied into that travel, and Qantas do what they've done to me today - cancel my flight - you get a whole sequence of events that need to be changed. A virtual personal assistant has access to all the data that it needs - it knows the Qantas schedules, it knows who you're meeting, it knows your bookings - and it can join it all together to reorganise your life around you. So, let's end with cybersecurity, because this is something we do have to worry about. It's not insuperable, but we've been neglecting it. Most organisations will need access to a security operations centre. This is not just a plug because we've just released one. The fact is that big organisations like ourselves can protect other organisations at the margin much more easily than you can protect yourself. Those types of operations centres use AI algorithms to look for anomalies and to look for attacks, particularly those zero-day things that cause all the problems. As individuals, we need a set of golden rules - if you're not doing two-factor authentication on your email and social media, you should be out there doing it right now. You need password managers, you need to keep your systems patched, and you do need virus protection. Those are the four golden rules. If you're not doing them, do them today, because you will get attacked. But in the Internet of Things, we've got a whole lot of other problems which need to be addressed, and we're heading into a world where new problems are going to emerge.
For example, there's been some research done recently where someone squiggled on a Stop sign in a street and caused the machine vision system of the car to recognise it as a right-turn sign, so the car didn't stop. So we're going to have to worry about new forms of attack that we've never seen before and that are incredibly ingenious but which can be protected against.
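The trick behind that stop-sign result can be shown in miniature: nudge an input in exactly the direction that most changes a classifier's output - a simplified version of the "fast gradient sign" idea - and the prediction flips even though every feature moved by the same small step. The classifier weights and inputs below are made up purely for illustration; a real attack targets a deep vision network, not a three-feature linear model.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# A fixed, made-up linear classifier: score > 0.5 means "stop sign".
w = [1.5, -2.0, 0.8]
b = 0.2

def predict(x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

x = [0.9, 0.1, 0.5]          # an input the model confidently calls a stop sign
print(round(predict(x), 3))   # well above 0.5

# Adversarial nudge: step every feature by eps against the sign of its
# weight - the direction that drives the score down fastest.
eps = 0.5
x_adv = [xi - eps * (1 if wi > 0 else -1) for xi, wi in zip(x, w)]
print(round(predict(x_adv), 3))  # now below 0.5 - the prediction flipped
```

The squiggles on the physical stop sign play the role of `x_adv` here: a perturbation engineered against the model's own gradients rather than random vandalism, which is why it fools the car and not the human eye.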

That was a very quick whiz-through. I can take a couple of questions. Let me emphasise - this technology is progressing faster than the time scales of normal businesses, and therefore you can't just react to the change. You've got to be proactive about identifying it and getting ahead of it. Secondly, AI is massively overhyped. We're absolutely at the peak of the Gartner hype cycle. That isn't to say, though, that we won't reach what Gartner calls the plateau of enlightenment. We'll go through a trough of disillusionment, like we are with blockchain, but we will get to the plateau of enlightenment, and it will have a huge impact on various aspects of our lives. I will finish there.