Educause Boulder -- Surviving the Aftershocks of a Cyberattack:

Coordination, Communication, and Where to Get Help

Welcome to EDUCAUSE Live, everyone. I'm Joanna Grama, EDUCAUSE's Director of the Cybersecurity and IT GRC Program; and I'll be your moderator for today's e-Live webinar.

You are probably familiar with the interface for our webinar, but here are a few quick reminders. We hope that you'll make today's session interactive. Use the Chat box on the left to submit questions and share resources and comments. If you're tweeting, please use the hashtag EDULIVE; that's E-D-U-L-I-V-E. If you're having any audio issues today, click on the link in the lower left-hand corner; and at any time, you can direct a private message to technical help for support if you need it. The session recording and slides from today will be archived later on the EDUCAUSE Live website.

Our webinar today is "Surviving the Aftershocks of a Cyberattack: Coordination, Communication, and Where to Get Help." Everyone is at risk of a cyberattack. Events such as breaches of sensitive information, denial-of-service attacks, and ransomware make front-page news, disrupt services, steal time from overworked staffers, and damage reputations. From widespread, indiscriminate exploit techniques to targeted attacks, many IT leaders say it's no longer a question of if a cyberattack happens but when.

Protection and planning are the keys to prevention, but what if it happens anyway?

This session will focus on equipping key individuals...such as general counsel, university communications personnel, CIOs, and CFOs...with the tools to plan for and respond to a cyberattack. These tools include communication plans, incident checklists, decision trees for when to obtain external experts, and a directory of participation roles to ensure the impact of an attack is minimized.

We are delighted to be joined today by Kim Milford, the Executive Director of the REN-ISAC at Indiana University, and Steven Wallace, Enterprise Network Architect, Indiana University Bloomington.

Kim Milford began serving as the Executive Director of the REN-ISAC in April 2014. She works with members, partners, sponsors, and advisory committees to direct strategic objectives in support of members, providing services and information that allow higher education institutions to better defend their local technical environments, and she is responsible for overseeing administration and operations. Prior to joining the REN-ISAC, Kim served in various roles at Indiana University, including Chief Privacy Officer, and worked as Information Security Officer at the University of Rochester and Information Security Manager at the University of Wisconsin-Madison. Kim has a BS in accounting and received her law degree from John Marshall Law School in Chicago, Illinois.

Steven Wallace brings more than 25 years of experience in network design, research, and deployment to his role as IU Enterprise Network Architect and Technical Advisor. Notable accomplishments in his career include 10 years leading IU's engineering support for Internet2's first high-speed backbone and directing the university's Advanced Network Management Lab. Steven currently works closely with Internet2, serving as a technical advisor to its chief innovation officer, chairing the Security Working Group, and co-chairing the Internet of Things Working Group.

With that, let's begin. I'll turn it over to you, Kim.

Thank you, Joanna.

Thanks, everyone, for joining us. We're happy to be here and look forward to a lively discussion. This discussion came out of an idea from Steve. I want to say it was based on what was going on with Spectre and Meltdown; it was around that time anyway. And he was like, "Listen, do people have a good feel for this? How can we help them more?"

So our discussion grew out of that...of him questioning that, and then he brought a lot of great ideas to the table here.So I just wanted to give you a little background.

We are going right now into our material because we have a lot to cover, and we want to make sure we have plenty of time at the end for Q&A. We'll watch the Chat window for questions; and for the most part, we'll hold them until the end and do a Q&A then. If one comes up that's really germane and might aid understanding of something we're talking about currently, we will go ahead and address it at the time; but that's our planned flow...to hold questions and answers until the end.

Steve is going to be walking us through a multifaceted case study today. We're going to go through the scenario, and these are the learning outcomes we hope you gain from this exercise. We hope at the end of the session you'll have some good ideas and momentum to develop a directory of key individuals; an excellent understanding of the importance of planning, and maybe the ability to put some priority around that; knowledge of and practice with checklists for preparing for an attack and determining your institution's readiness; and a greater awareness of external expertise and plans for establishing the relationships you need in advance.

Again, welcome, everyone. Thanks for attending. I'm going to describe a scenario that reflects the facts of a real incident, but I'm not going to mention the names of the institutions, and I'll leave some other details out. I think we can still provide enough information to give you a good sense of what was going on and the challenges involved in this particular cyberattack. We thought it was important to provide something realistic so that folks could imagine this is the kind of situation where we'd want to know this stuff. We can sort of imagine being there; although it's not total reality, you get a sense.

Our story starts on a Friday afternoon. There is a visitor to campus, and they're authorized to use the campus wireless network; they went through some mechanism to get that authorization. Their computer was infected, and that infection spread to other computers on campus and to sites interconnected with the campus. You can imagine many campuses will focus on providing, let's say, a firewall between the campus and the rest of the world, but the campus might have less restrictive connectivity to partners or institutions that are somehow related to the campus.

So in this particular case, the campus had connectivity to another organization that, among other things, provided clinical healthcare services; and that visitor's computer infected both the campus and resources at that partner healthcare institution. The infections were spreading pretty rapidly, ping-ponging back and forth between the institutions.

Now, what's interesting is patches for the vulnerability had been available for weeks. It was a known, critical vulnerability, so this wasn't news. There was good information available both to patch and to observe this vulnerability spreading through a network, and the vulnerability was a pretty nasty one. Ultimately, it would allow full control of an infected computer. And again, on Friday afternoon this thing starts spreading very rapidly throughout both the campus and our partner institution.

So the good news –

Hang on a second, Steve; didn't we have a poll on that second slide?

Yeah, sorry, so we had a poll...bingo, thank you.

So it's interesting. In this particular scenario, the patch was available. People knew it was a critical patch, and it had been available for many weeks.

[Pause for responses]

And interestingly, the poll roughly reflects the reality at this particular institution...interesting...okay, so we'll go ahead and go on to the next slide.

So the good news: in near real time, the campus detected the infection and could track it. This is something that I think a lot of campuses, universities, or enterprises in general may not realize the value of. You may have invested a lot in a firewall, for example; but what is equally important, and in some cases more important, is an intrusion detection system that has the ability to see most, if not all, of the traffic. In this case, the campus could see the infection; in fact, they could tell essentially exactly when and which hosts were infected. That was extraordinarily useful both during the attack, in terms of informing mitigation, and afterwards, in understanding what happened.

There are different approaches to this technically. In this campus's case, they have a Bro cluster that is capable of and instrumented to see all traffic in and out of the campus, as well as some intra-campus traffic. The bad news: the campus needs to start its mitigation, and it was seeing this spread rapidly.
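To make that detection capability concrete, here is a minimal sketch in Python of correlating IDS alerts into a per-host infection timeline. The alert format and signature name are hypothetical illustrations, not this campus's actual Bro configuration:

```python
def build_infection_timeline(alerts, exploit_signature):
    """Given IDS alert records as (timestamp, source_ip, signature) tuples,
    return {ip: first_seen_timestamp} for hosts matching the exploit.

    This answers "exactly when and which hosts were infected":
    the earliest matching alert per IP marks the infection time.
    """
    first_seen = {}
    for ts, src_ip, signature in sorted(alerts):  # process in time order
        if signature == exploit_signature and src_ip not in first_seen:
            first_seen[src_ip] = ts
    return first_seen
```

The same structure works whether the alerts come from a Bro/Zeek cluster's logs or any other sensor; the key operational point from the scenario is that per-host, timestamped visibility is what made both mitigation and the after-action timeline possible.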

First step: quarantine the infected computers. We have good information on which machines are infected. In this particular scenario, we even have pretty good information on machines that have not yet been infected but are not completely patched. That information is pretty good, and we have very good information on which machines have been infected, are infected, or are trying to infect other machines.

Another good technical capability: the campus has the ability to remove any infected computer from the network. So in this scenario, if you see a machine that's been infected, the campus has the ability to effectively quarantine that machine. In fact, that capability existed at the partner that provides clinical services as well. If they knew the IP address of an infected computer, they would essentially be able to boot that computer off the network.

The bad news...and in this particular scenario, it turned out to be really bad news, remarkably bad news...is that the campus doesn't know which infected computers are providing critical services or are serving important stakeholders. So here's the scenario again. We have a vulnerability being exploited rapidly. The vulnerability has the capability of taking a machine over completely. We haven't seen any of that take place yet; but in the worst case, all of the system's information could be deleted or exfiltrated. It's really a bad back door.

The campus needs to do something, but the information the campus has is the IP address of an infected device and, in many cases, not much more than that. And not only does the campus have the ability to remove machines from the network, it can automate that ability. So in this scenario, if the intrusion detection system detects something that's been infected, it can just go ahead and automatically block it.
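As an illustrative sketch of that automation (the function names and the block-by-IP hook are assumptions for this example, not the campus's actual tooling), the logic is simply: for every IP the IDS flags, call the network's block function, optionally sparing hosts on an explicit allowlist of machines that must stay up:

```python
def auto_quarantine(detected_ips, block_ip, allowlist=frozenset()):
    """Automatically quarantine IDS-flagged hosts.

    detected_ips: iterable of IP strings flagged as infected.
    block_ip:     callable that removes one IP from the network
                  (the hook into whatever the NAC/router actually does).
    allowlist:    IPs that must never be auto-blocked (known-critical hosts).

    Returns the list of IPs actually blocked, each blocked only once.
    """
    blocked = []
    for ip in detected_ips:
        if ip in allowlist or ip in blocked:
            continue
        block_ip(ip)
        blocked.append(ip)
    return blocked
```

The allowlist parameter is exactly the piece this scenario shows was missing: without knowing which IPs correspond to critical services, there is nothing sensible to put in it, and the automation blocks blindly.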

Now, this is a perfect storm. Imagine that on Friday evening, all of a sudden, from their perspective, random users around campus are having their machines taken off the network. And while the campus had mechanisms to communicate with folks, those are never ideal. There are always holes in them. And if your method of determining that you've been taken off the network is to look at something on the network, that's kind of a problem.

So the campus had to take action. They had clear authority to take action, and that's one of the things we'll talk about later...decisions about what you should do in this situation. That guidance, at least, needs to be established ahead of time. You don't want to have a long debate at this point. Now, you can't predict every scenario, but you at least need to have a framework, a policy, that's going to drive what your response is going to be to something like this.

Next slide.

Remember, this spread to a site that provides clinical services. Again, that site also had the ability to track the infection and quarantine infected computers. Same as the campus, though, they didn't have reliable information identifying the service a particular infected computer might be providing; and many medical devices today are essentially computers. So you have, for example, imaging devices that are critical to providing clinical healthcare...in fact, maybe acutely critical in their use. Maybe they're in an environment where it's urgent that these devices be available. So the decision about what you quarantine is a different one. Now you have the element of safety introduced into this, and you have this great telemetry showing you what's happening...how the malware is spreading, what malware is out there.

But you don't have a good, reliable, trusted source of information about these devices. Again, in the campus case, this vulnerability was spreading and had the potential to delete data, to exfiltrate data, whatever; but nothing had been exercised yet...just the vulnerability was spreading. So the landscape for this decision is a little bit different than if you have an x-ray machine or an MRI or something like that that's been infected. Nothing has happened yet; and on top of that, of these many devices that have been infected, the people who have the technical ability to do the quarantine can't tell whether a given device is an x-ray machine or an assistant's desktop.

So what do they do?

It gets much more challenging. Ultimately, the campus and the clinic must act on imperfect information. They have good information about the exploit, but they don't know the intention of whoever started it...it could be bad, or it could be just to spread the infection; they don't know. They also have pretty poor information concerning what devices will be affected; and in fact, having spoken to folks who experienced this particular scenario directly, that drove them to change things afterwards.

Both institutions, I would say, were well-prepared for these events. They had the capabilities in place. In the case of the campus, they actually had leadership and well-developed policies in place. But what was missing for both...and this event made it crystal clear...was knowing what the devices do, so what effect you are having when you quarantine a device. That was less of an issue for the campus, although it may have been one; it was an absolutely dire issue for the clinical healthcare provider.
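A hedged sketch of the missing piece: an inventory keyed by IP that records each device's owner, role, and criticality, so the quarantine decision can be triaged rather than blind. All names, fields, and the "escalate life-safety devices" rule below are illustrative assumptions, not either institution's actual system:

```python
# Illustrative inventory: IP -> (owner, role, criticality)
INVENTORY = {
    "10.0.0.5": ("Radiology", "MRI console", "life-safety"),
    "10.0.0.7": ("Admin", "desktop", "low"),
}

def triage_quarantine(ip, inventory):
    """Decide how to handle an infected host given inventory data.

    Life-safety devices get escalated for a human decision before removal;
    everything else, including unknown devices, is quarantined automatically.
    Returns (action, owner, role).
    """
    owner, role, criticality = inventory.get(ip, ("unknown", "unknown", "unknown"))
    if criticality == "life-safety":
        return ("escalate", owner, role)
    return ("quarantine", owner, role)
```

Even a crude table like this changes the Friday-night decision: the responder who only has an IP address can now tell the MRI console from the assistant's desktop before pulling the plug.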

So it's Friday when this starts. All of this is happening rapidly. It's getting late; fewer folks are around, so your staff is getting ready for the weekend or already gone. In fact, the folks who were tracking this are getting tired; ultimately they would spend a lot of time with very little sleep as this scenario unfolded. And you need to communicate to the affected parties what's going on. The communications folks who might help with that...it's a late Friday for them too.

So what do you do?

So the gift of this presentation is to describe something like this and to say: you get your Groundhog Day. Now that you've lived this vicariously through this presentation, what would you do? It will happen to some degree or another...this kind of thing is happening all over the place. What do you do?

So we have our survey.

"Implement incident response plan," yep.In fact in this case, both institutions were able to do that and did do that.Pretty much they did all of this stuff to a lesser or greater degree.Nobody called Chuck; but in these two cases, these two institutions did have incident response plans and they provided excellent guidance.But the twist that was missed and was most critical during the incident management was who owns the devices and what do they do in terms of their criticality to the enterprise.

And then afterwards, we looked at the timeline of events...in fact, that's how I got involved in this. I was asked to put together sort of an unofficial timeline of events. When I put the timeline together, the most striking thing about it was the delta between these vulnerabilities being disclosed to the public and patches being available, and then the very good period of time that elapsed while systems...in many cases, very critical systems...were not patched.

And it's complicated. Clinical systems and scientific instruments in the campus environment share qualities. They are typically computers, and in many cases they are computers where the campus or clinical support staff have limited ability to keep them up-to-date, because they'll break. They rely on the vendors to do that, because the devices might have special drivers or other requirements.

So it gets even more complicated in those cases where, during the heat of the moment, you initially don't have good information about which ones are critical. And since the exploit had not yet been used to do anything other than spread itself, you might have some latitude about decision-making in certain cases. But then, as a more comprehensive follow-up, when you're trying to fix these devices, the electron microscope and the MRI machine have something in common: the user and the institution are likely unable to patch them. You've got to have the vendor do that. So that was a wake-up call to the vendors.

So that's the kind of scenario that we thought would be useful to bring forward.And now you have sort of the Groundhog Day moment where you can sort of feel and think through what you wish you had in place before this happened.