Protecting against phishing, malware, and other cyber threats is a difficult challenge for any organization — but when your business has over 20,000 employees and runs a service used by almost a billion people, the challenge is even tougher.
But that’s precisely the challenge facing LinkedIn: the world’s largest professional network has over 875 million members, ranging from entry-level employees all the way up to high-level executives, all of whom use it to network with colleagues and peers, discuss ideas, and find new jobs.
With hundreds of millions of users, LinkedIn needs to ensure its systems are secure against a range of ever-evolving cyber threats, a task that falls to LinkedIn’s Threat Detection and Incident Response team.
Heading up the operation is Jeff Bollinger, the company’s director of incident response and detection engineering, and he’s under no illusions about the significance of the challenge the company faces from cyber threats.
It’s well known that highly sophisticated hacking groups have high-profile companies like LinkedIn in their sights, whether that’s trying to trick users into clicking phishing links or installing malware via manipulative social-engineering attacks.
“Well-funded attackers are definitely challenging because they can just keep coming — we have to be right every single time, and they’ve only got to be right once,” says Bollinger.
“That’s one of the challenges — we always have to be watching. We always have to be ready — whether it’s an opportunistic attacker or if it’s a dedicated, persistent attacker, we need to have our sensors and our signals collection in place to do it, no matter who it is.”
Building a significantly more mature cybersecurity operation for the business was no small task, something Bollinger describes as "akin to shooting for the moon", so the program was named Moonbase.
Moonbase set out to improve threat detection and incident response, and it aimed to do so while improving quality of life for LinkedIn’s security analysts and engineers with the aid of automation, reducing the need to manually examine files and server logs.
It was with this goal in mind that, over a period of six months between March 2022 and September 2022, LinkedIn rebuilt its threat-detection and monitoring capabilities, along with its security operations center (SOC). That process started with reevaluating how potential threats are analyzed and detected in the first place.
“Every good team and program begins with a proper threat model. We have to understand what are the actual threats that are facing our company,” Bollinger explains.
That awareness begins with analyzing what data most urgently needs protecting: things like intellectual property, customer information, and information regulated by laws or standards. Then comes thinking about the potential risks to that data.
For LinkedIn and Bollinger, a threat is “anything that harms or interferes with the confidentiality, integrity, and availability of a system or data”.
Examining patterns and data from real-world incidents provides information on what a range of cyberattacks look like, what counts as malicious activity, and what type of unusual behavior should set off alerts. But relying solely on people to do this work is time-consuming.
By using automation as part of this analysis process, Moonbase shifted the SOC towards a new model: a software-defined and cloud-centric security operation. The goal of the software-defined SOC is that much of the initial threat detection is left to automation, which flags potential threats for investigators to examine.
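What "detection as code" in a software-defined SOC can look like is sketched below. This is an illustrative assumption, not a description of LinkedIn's actual pipeline: the event fields, rule names, and thresholds are invented, and the point is simply that automation makes the first pass and only matches reach a human analyst.

```python
# Illustrative sketch of detection-as-code in a software-defined SOC.
# Event fields, rule names, and thresholds are hypothetical, not LinkedIn's.
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class Alert:
    rule: str
    severity: str
    event: dict

# A detection rule is just a function: event -> bool.
DetectionRule = Callable[[dict], bool]

def suspicious_oauth_grant(event: dict) -> bool:
    # Flag OAuth consents granted to apps the organization has never seen before.
    return event.get("type") == "oauth_grant" and event.get("app_first_seen", False)

def impossible_travel(event: dict) -> bool:
    # Flag logins whose implied travel speed between locations is implausible.
    return event.get("type") == "login" and event.get("km_per_hour", 0) > 1000

RULES: dict[str, tuple[DetectionRule, str]] = {
    "suspicious_oauth_grant": (suspicious_oauth_grant, "medium"),
    "impossible_travel": (impossible_travel, "high"),
}

def run_detections(events: Iterable[dict]) -> list[Alert]:
    """Automation does the first pass; only matches reach a human analyst."""
    alerts = []
    for event in events:
        for name, (rule, severity) in RULES.items():
            if rule(event):
                alerts.append(Alert(rule=name, severity=severity, event=event))
    return alerts

sample_events = [
    {"type": "login", "user": "jdoe", "km_per_hour": 4200},
    {"type": "oauth_grant", "user": "asmith", "app_first_seen": True},
]
for alert in run_detections(sample_events):
    print(alert.severity, alert.rule, alert.event["user"])
```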
But that’s not to say humans aren’t involved in the detection process at all. While many cyberattacks are based on common, tried-and-tested techniques that malicious hackers rely on throughout the attack chain, the evolving nature of cyber threats means there are always new, unknown threats being deployed in efforts to breach the network, and it’s vital that this activity can also be detected.
“When it comes to what we don’t know, it really depends on us just looking for strange signals in our threat hunting. And that’s really the way to get it — by dedicating time to looking for unusual signals that could eventually be rolled into a permanent detection,” says Bollinger.
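Bollinger's point about rolling hunting results into a permanent detection can be sketched as code: a one-off hunting query that keeps turning up real activity gets promoted into a scheduled detection. The log schema, field names, and promotion criteria below are assumptions for illustration only.

```python
# Hypothetical sketch: a threat hunt starts as an ad-hoc query; once it proves
# itself against real data, it is "promoted" into a permanent, scheduled detection.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class HuntQuery:
    name: str
    predicate: Callable[[dict], bool]  # predicate over a single log record

    def run(self, logs: list[dict]) -> list[dict]:
        return [rec for rec in logs if self.predicate(rec)]

@dataclass
class DetectionCatalog:
    scheduled: dict = field(default_factory=dict)

    def promote(self, hunt: HuntQuery, interval_minutes: int = 15) -> None:
        """Turn a proven hunt into a detection that runs automatically on a schedule."""
        self.scheduled[hunt.name] = {"predicate": hunt.predicate,
                                     "interval_minutes": interval_minutes}

# Example hunt: Office apps spawning shells on endpoints (field names are invented).
hunt = HuntQuery(
    name="office_app_spawning_shell",
    predicate=lambda rec: rec.get("parent") == "winword.exe"
                          and rec.get("child") in {"cmd.exe", "powershell.exe"},
)

sample_logs = [{"parent": "winword.exe", "child": "powershell.exe", "host": "laptop-7"}]
if hunt.run(sample_logs):          # the hunt found something worth watching for
    catalog = DetectionCatalog()
    catalog.promote(hunt)          # ...so it becomes a permanent detection
    print(sorted(catalog.scheduled))
```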
However, one of the challenges surrounding this effort is that cyber attackers often use legitimate tools and services to conduct malicious activity. So, while it might be possible to detect that malware has been installed on a system, finding malicious behavior that could also realistically be legitimate user behavior is much harder, and it's something LinkedIn's rebuild has focused on.
“Normal, legitimate administration activity often looks exactly like hacking because attackers are going for the highest level of privileges — they want to be domain admin or they want to obtain root access, so they can have all persistence and do whatever they want to do. But normal administration activities look similar,” Bollinger explains.
But by using the SOC to analyze unusual behavior flagged by automation, it’s possible either to confirm it was legitimate activity or to catch malicious activity before it becomes a problem.
The SOC also does so without requiring information security personnel to methodically oversee what each user at the company is doing, only getting hands-on with individual accounts if strange or potentially malicious behavior is detected.
And by using this strategy, the threat-hunting team can quickly examine more data in greater detail and, if necessary, take action against real threats, rather than taking time to manually examine every single alert, especially when many of those alerts are false positives.
“I think that gives us a lot more people power to work on these problems,” says Bollinger.
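Because legitimate administration and privilege-escalation attacks look so much alike, one common way to make the automation useful is to score privileged actions against each user's own baseline and send only the outliers to the SOC. The following is a minimal sketch of that idea with invented field names; the article doesn't describe LinkedIn's actual model.

```python
# Hedged sketch: score privileged actions against an admin's historical baseline,
# so routine admin work stays quiet and only outliers go to the SOC for review.
from collections import Counter

def baseline(history: list[dict]) -> Counter:
    """Count how often each (action, host) pair appears in a user's history."""
    return Counter((e["action"], e["host"]) for e in history)

def anomaly_score(event: dict, seen: Counter) -> float:
    """1.0 for a never-before-seen action/host pair, approaching 0 for routine ones."""
    freq = seen[(event["action"], event["host"])]
    return 1.0 / (1.0 + freq)

history = [{"action": "sudo", "host": "web-01"}] * 50
seen = baseline(history)

routine = {"action": "sudo", "host": "web-01"}             # looks like normal admin work
unusual = {"action": "add_domain_admin", "host": "dc-01"}  # never seen for this user

for event in (routine, unusual):
    if anomaly_score(event, seen) > 0.5:
        print("escalate to SOC:", event)  # a human confirms legitimate vs malicious
```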
But threat detection is only part of the battle. Like any organization, when a threat is detected, LinkedIn must be able to act against it as quickly and smoothly as possible to avoid disruption and prevent a full-blown incident.
This is where the incident-response team comes in, actively looking for and filtering out threats, based on what’s been detailed by the threat-hunting team.
“We give our people the most context and data upfront, so that they can minimize their time spent gathering data, digging around, looking for things, and they can maximize their time on actually using the critical-thinking capacities of the human brain to understand what’s actually happening,” Bollinger explains.
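Giving responders "the most context and data upfront" typically means enriching an alert automatically before a human ever sees it. A rough sketch follows, with hypothetical lookups standing in for whatever internal sources LinkedIn actually uses.

```python
# Hypothetical enrichment step: attach the context a responder would otherwise
# have to dig up by hand (asset owner, user role, recent related alerts, ...).
def enrich_alert(alert: dict,
                 asset_inventory: dict,
                 hr_directory: dict,
                 recent_alerts: list[dict]) -> dict:
    host = alert.get("host")
    user = alert.get("user")
    alert["context"] = {
        "asset_owner": asset_inventory.get(host, {}).get("owner", "unknown"),
        "asset_criticality": asset_inventory.get(host, {}).get("criticality", "unknown"),
        "user_role": hr_directory.get(user, {}).get("role", "unknown"),
        "related_alerts_7d": [a for a in recent_alerts
                              if a.get("user") == user or a.get("host") == host],
    }
    return alert

# With the context pre-attached, the analyst starts at "what is happening?"
# instead of "where do I even look?".
enriched = enrich_alert(
    {"rule": "impossible_travel", "user": "jdoe", "host": "laptop-42"},
    asset_inventory={"laptop-42": {"owner": "jdoe", "criticality": "low"}},
    hr_directory={"jdoe": {"role": "site reliability engineer"}},
    recent_alerts=[],
)
print(enriched["context"])
```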
The operation of incident response hasn’t changed drastically, but the way it’s approached, with the additional context of data and analysis, has been revised, and that shift has helped LinkedIn become much more efficient at detecting and protecting against potential threats. According to Bollinger, investigations are now much faster, all the way from detecting threats to dealing with them.
“The time to detect is the time from when activity first occurs until when you first see it — and speeding that up, it’s been dramatic for us. We went from it being several days to being minutes,” he says.
“We’ve dramatically reduced our time to detect and time to contain as well. Because once we’ve lowered that threshold for time to detect, we also have more time to actually contain the incident itself.
“Now that we’re faster and better at seeing things, that reduces the opportunities for attackers to cause damage — but the quicker that we detect something is happening, the quicker we can shut it down, and that minimizes the window that an attacker has to actually cause damage to employees, members, the platform, or the public,” says Bollinger.
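The two numbers Bollinger cites, time to detect and time to contain, fall straight out of an incident's timestamps. A minimal illustration, with invented timestamps, is below.

```python
# Minimal sketch of the two metrics Bollinger describes; timestamps are invented.
from datetime import datetime

incident = {
    "activity_first_occurred": datetime(2022, 9, 1, 10, 0),
    "first_detected":          datetime(2022, 9, 1, 10, 7),
    "contained":               datetime(2022, 9, 1, 11, 30),
}

# Time to detect: from when the activity first occurs to when it is first seen.
time_to_detect = incident["first_detected"] - incident["activity_first_occurred"]
# Time to contain: from first detection to containment of the incident.
time_to_contain = incident["contained"] - incident["first_detected"]

print(f"time to detect:  {time_to_detect}")
print(f"time to contain: {time_to_contain}")
```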
Keeping the company secure is a big part of LinkedIn’s overhaul of threat-detection capabilities, but there’s also another key element to the work: designing the process so it’s helpful and effective for staff in the SOC, helping them avoid the stress and burnout that can accompany working in cybersecurity, particularly when responding to live incidents.
“One of the key pieces here was preserving our human capital — we want them to have a fulfilling job here, but we also want them to be effective and not worn out,” says Bollinger.
The approach is also designed to encourage collaboration between detection engineers and incident responders, who — while divided into two different teams — are ultimately working towards the same goal.
This joined-up approach has also trickled down to LinkedIn employees, who have become part of the process of helping to identify and disrupt threats.
Users are informed about potentially suspicious activity around their accounts, with additional context and explanation as to why the threat-hunting team believes something is suspicious, and they are asked whether they themselves think the activity is suspicious.
Depending on the reply and the context, a workflow is triggered, which could lead to an investigation into the potential incident — and a remediation.
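That employee-in-the-loop step can be thought of as a small state machine: notify the user with context, branch on their reply, and open an investigation or remediation workflow as needed. The states and function names below are assumptions for illustration, not a description of LinkedIn's internal tooling.

```python
# Hypothetical workflow: ask the affected user about suspicious activity,
# then branch on their answer. Names and states are illustrative only.
from enum import Enum, auto

class Outcome(Enum):
    CLOSED_BENIGN = auto()
    INVESTIGATE = auto()
    REMEDIATE = auto()

def notify_user(user: str, activity: str, reason: str) -> bool:
    """Stand-in for a chat/email prompt; returns True if the user recognizes the activity."""
    print(f"Hi {user}: we saw '{activity}'. Why it looks odd: {reason}. Was this you?")
    return False  # simulate the user answering "no, that wasn't me"

def handle_suspicious_activity(user: str, activity: str, reason: str,
                               high_risk: bool) -> Outcome:
    recognized = notify_user(user, activity, reason)
    if recognized and not high_risk:
        return Outcome.CLOSED_BENIGN   # user confirms it, low risk: close out
    if recognized and high_risk:
        return Outcome.INVESTIGATE     # verify independently despite the reply
    return Outcome.REMEDIATE           # user denies it: contain and investigate

print(handle_suspicious_activity("jdoe", "new OAuth app granted mailbox access",
                                 "the app has never been used at the company before",
                                 high_risk=True))
```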
“Instead of having people working harder, we’re having them working smarter — that was really one of the big pieces for us in all this,” says Bollinger.
“A big part of the job is just staying on top of things. We can’t just hope for the best and hope that our tools will find everything. We need to be constantly researching — that’s a really big part of what keeps us on our toes,” he concludes.