ISMG Editors: OpenAI's Response to The New York Times Case
Also: Addressing Scotland's Cybercrime Surge; NOC and SOC Convergence
Anna Delaney (annamadeline) • March 1, 2024
In the latest weekly update, four Information Security Media Group editors discussed the convergence of the NOC and SOC functions, Scottish police efforts to address the escalating challenge of cybercrime in Scotland, and why OpenAI is pushing to dismiss certain aspects of The New York Times lawsuit.
The panelists - Anna Delaney, director, productions; Mathew Schwartz, executive editor, DataBreachToday and Europe; Rashmi Ramesh, assistant editor, global news desk; and Tom Field, senior vice president, editorial - discussed:
- Highlights from a recent ISMG roundtable discussion that explored the convergence of network operations center and security operations center functions;
- How Scottish Police and cybersecurity experts at the FutureScot conference in Glasgow addressed the escalating challenge of cybercrime, highlighting the surge in cases and the need for innovative solutions;
- The latest on The Times's lawsuit against OpenAI and its primary financial supporter, Microsoft, which alleges unauthorized use of millions of Times articles to train chatbots that now serve up information to users.
The ISMG Editors' Panel runs weekly. Don't miss our previous installments, including the Feb. 16 edition on the cyberwar in Israel and the Feb. 23 edition on the "new frontier" of AI and identity security.
Transcript
This transcript has been edited and refined for clarity.
Anna Delaney: Hello and welcome to the ISMG Editors' Panel. I'm Anna Delaney. And in this episode we'll tackle the concerning rise of cybercrime in Scotland and innovative solutions. Plus, we'll discuss the latest in a legal battle involving The New York Times, OpenAI and Microsoft, as well as the merging of NOC and SOC operations. To do so, I'm joined by my excellent teammates Tom Field, senior vice president of editorial; Mathew Schwartz, executive editor of DataBreachToday and Europe; and Rashmi Ramesh, assistant editor for the global news desk. Wonderful to see you all.
Delaney: Well, Tom, this week you moderated a roundtable exploring the convergence of network operations center and security operations center functions, or at the very least, the improved collaboration between them. So can you share some insights from the discussion?
Tom Field: Okay, let's pretend that you're interested in the topic. I'm very interested in the topic, mind you. It's not a one-off. This is a series of roundtable discussions that we're doing with Broadcom - in fact, coming soon to a neighborhood near you. I believe there's one coming up in London in the next several weeks. It's a topic that we posed to our audience in Washington, D.C., last fall as sort of a trial. This was a hypothesis that we had, and we wanted to play it out. And the point of this is, since you have had digital transformation and cloud migration, organizations have a vastly different protection landscape now. So given that, is it time to think about taking your network operations center and your security operations center and bringing them together? Or at least encouraging greater collaboration between the two? When you think about it, the missions are complementary. They leverage much of the same data that's coming in and use many of the same tools. So should they be separate and equal? Or should they be brought together? Now, what we found when we went to Washington, D.C., particularly among some of the federal agencies, is that they were really ahead of this conversation. They either had merged the SOC and NOC or had a hybrid, but they were thinking along these lines - that there needed to be some convergence, that they were bringing them together in ways that they could leverage greater automation, even AI. Taking this conversation to Tampa this week, it was a little less aggressive there, but they were definitely thinking about the possibilities. And so we're going to do this again in New York next week. What I see shaping up, as you talk to security and technology leaders in the public and private sectors alike, is that you do see a lot of collaboration happening between the NOC and the SOC, some convergence, and an increasing attitude of "I wouldn't hire a network engineer that didn't have some security expertise" and "I wouldn't hire a security professional that didn't have some networking expertise." So there really is a convergence, even among the minds of those who are hiring people. And the reality is, in today's environment, where everything is at the edge and so much has migrated to the cloud, what is a network operations center today? It's a vastly different thing. And where this inspires people to go is to the conversation about the role of automation in being able to bring all of these alerts and all this data together. And it encourages them to think of gen AI, in particular, as a way to synthesize this information, get it to the right people, and have them make sense of it. And at a time when both the SOC and the NOC are challenged to get the expertise they need in house, there are real opportunities for gen AI here. Now, there were concerns. I've heard it in both the discussions we've had so far: the concern about insider risk and the separation of duties - a combined team is not going to tell on itself if it discovers there's been a security breach. And then there's the concern about where you find someone that has got the diversity of expertise needed; the comment was made by one of our attendees that we need a Jedi Master. But I think that there's something growing here. And as we're having this conversation, you can hear some of the attendees come in with ideas they've already pursued, and you can see the lights coming on with others. So we're going to take this to New York, we're going to take this to LA, to London and elsewhere.
I'm very excited to see what the reactions are as we bring this conversation forward and get some real-world insights into the future of the NOC and the SOC. Wasn't that interesting?
Delaney: Very interesting. And did you pick up on any surprises or unexpected trends in these conversations?
Field: You know, I think what surprised me was hearing, in both conversations so far, this concern about separation of duties. And it goes back to the fundamentals of cybersecurity, where you want to make sure that you've got people watching one another and that you don't have people responsible for so much that insider risk can be overlooked. I was a little surprised to see that come up in both conversations from entirely different individuals. But this is an encouraging conversation. I'm really enjoying this one and can't wait till we do it again in New York City next week, because, of course, that is always a lively community. And here's the advertisement: if this is something you're interested in and you're going to be in the New York area, check out our events and the registration page. I know it's going to be a full house, but there might be a couple spots open. I think you will enjoy this conversation, folks.
Delaney: Excellent. I'm looking forward to hearing how the cities compare, especially with London up next. So I look forward to that.
Field: Thank you. Don't be surprised if that ends up on your doorstep.
Delaney: Hopefully! So Matt, moving on. Earlier this week you were, as you said earlier, at a cybersecurity conference in Glasgow - or maybe you didn't say that - but you were in Glasgow, Scotland, and your post-event report underscores a concerning trend: cybercrime is escalating in Scotland. So tell us more, and share any other takeaways.
Schwartz: Yeah, so if I didn't do that big reveal, I meant to, certainly. I was at a very interesting cybersecurity conference at Strathclyde University - the sixth annual one - that's in Glasgow, and what I love about conferences like this, including this one, is the regional nature of it. Cybercrime, cybersecurity - it's a global issue, global challenge, global requirement, but it's always fascinating to get a flavor for what people are dealing with at a regional level. That includes Scotland. As you say, cybercrime is up. I mean, it's up everywhere, isn't it? But at this event, it was fascinating to hear from multiple law enforcement and government officials giving a scope to the challenge. So Police Scotland, which is the law enforcement agency here in Scotland, says that the amount of cybercrime that it has seen since the start of the COVID pandemic has doubled. So obviously, that is a lot of crime, and they are attempting to get their head around it. They are looking at what they call cyber-enabled crime and also cyber-dependent crime. Dependent is things like ransomware - if you didn't have cyber, you couldn't do this kind of crime - whereas cyber-enabled crime is looking at things that used to be in the physical realm but which have moved online, fraud for example; 95% of the fraud being reported to police has a cyber component these days - huge amounts of fraud. So without getting too deep into the weeds, every different region has its own way of approaching these sorts of things. Here in Scotland, they have a Cyber and Fraud Centre, which is connected to the government and which helps organizations respond to attacks. So if they get hit by ransomware, it might be involved with that. There's a blanket or a coalition, I guess, that comes together depending on the sorts of things that organizations are attempting to recover from. But the head of that Cyber and Fraud Centre said that police are getting 18,000 calls a year about cyber-enabled fraud. Now, a lot of this is individuals, of course, and the sheer scale is not something that police can handle. So they're looking at ways of trying to bring together private and public organizations - banks, police, government-funded anti-fraud centers - to try to better deal with this problem. We report on cybercrime all the time, and sometimes we don't always remember that there are thousands - or, if you look globally, millions - of individuals affected by this every year, and attempting to police it and respond to it is really difficult. We heard from cybersecurity officials at this event as well about how challenging it is from an organizational standpoint. So for example, the guy who's in charge of the cybersecurity and information security team at the Scottish National Health Service - you have the NHS, obviously, all over the U.K., but you have your different regional variations, and there are multiple health boards here in Scotland, 23 health boards, plus an agency that provides services to those boards. The guy in charge of the cybersecurity component there has got 60 people working for him, and I think he said his team is safeguarding 150,000 endpoints, 170,000 employees and 250,000 digital identities, and they're a major target. He said this is a cash-rich business - the Scottish NHS is handling billions of pounds every year - and the attackers are coming calling, at the same time that you've got public services stretched really thin from years and years of austerity.
This creates a challenging environment. And a lot of the government officials there were talking about doing more with less - which governments love to say. Can you really deliver more with less? As a taxpayer, I'm skeptical. There are strategies for bringing people together, using collaboration to try to get better results. And not to give the final word to an American, but the U.S. Cybersecurity and Infrastructure Security Agency now has a legal attache at the U.S. Embassy in London, Julie Johnson, and she came up to Scotland to share some lessons learned from attempting to coordinate on critical infrastructure security issues. She used to be based in the U.S. and is obviously now based in London. But what I loved is she said that CISA has a superpower, and what it is, is they're the guy who knows a guy. So if you've got a problem, they pride themselves on knowing the guy who has a solution and bringing you together. And I love this sort of MacGyver-ish, getting-your-hands-dirty, getting-down-in-the-muck approach of figuring out how we're going to improve things, and actually figuring out the scale of the problem. And the scale is massive. Trying to bring people together in a trustworthy - I don't want to say polite - but, you know, polite, nice, collaborative way to try to get solutions to these problems, not just top down, not just bottom up, but trying to get the job done. And I thought that was a really great note to sound, given the scale of the challenges, given the fact that crime continues to escalate, given the fact that so many different people are responsible for so many different aspects of this. So I think there was a hopeful, optimistic note sounded about how we go forward, how we do better.
Delaney: That's fantastic. But what about current trends? We see criminals exploiting AI and machine learning, and deepfakes. Did they discuss the evolving tactics that criminals are using and how they're keeping ahead of those?
Schwartz: Definitely. And we've seen this attack recently in Hong Kong where someone used deepfake videos to steal millions. That was a case specifically referenced during the conference. And one of the police officials there just said, "We're keeping a close eye on this. It's a real growing concern." They see fraud cases like that, they know this is coming. And they're probably - I don't want to say freaking out about how they're going to handle it - but what do you do? I mean, you try to get the word out. But it seems like criminals have the latest and greatest sorts of tactics at their disposal, and you're not exactly sure how they're going to be throwing them at you. So definitely, we were hearing about stuff that's coming down the pipe, and police talking about how they're aware of this and they're trying to stay ahead of it. One of the other great things I heard from the conference, totally random, was: did you know that they have dogs that can sniff out digital SD cards as part of investigations? I love that. I had no idea that when they are busting down the doors of these, probably largely, teenagers engaged in cybercrime, they're bringing the hounds and trying to find all the places where digital evidence might be stored.
Delaney: Our furry friends helping to combat cybercrime. I love it.
Field: Can't wait till we have the AI sniffing dogs. I'm struck by this image of CISA as Liam Neeson, an agency with a particular set of skills.
Schwartz: That wasn't how Julie Johnson, who's a fellow midwesterner, came across - just to offer that. But yeah, it is a very particular set of skills.
Delaney: We can give her an Irish accent, I'm sure. Great stuff, Matt. Thank you for sharing that. Rashmi, in one of the episodes of the Editors' Panel - I think it was in January - we discussed the story where The New York Times filed a lawsuit in December 2023 against OpenAI and Microsoft, accusing the companies of training chatbots using millions of its articles without permission. So given recent developments in the case, why don't you bring us up to speed?
Rashmi Ramesh: So like you mentioned, it sued them in December for using millions of articles without permission. Its complaint showed several examples as well where, if a user asked a question that was written about in NYT, both the chatbots generated results that were near-verbatim copies of The Times' articles. So NYT said that OpenAI and Microsoft are getting a free ride on The Times' investment in its journalism and creating a substitute for its newspaper. At the time, The Times had said that it had analyzed ChatGPT responses, as well as the web crawlers that AI companies use to gather data for the LLMs, to build its copyright infringement case. So that's the background. This week, OpenAI released a statement. It accused NYT of paying a hired gun to hack ChatGPT and said that's how the newspaper got all of those comparative examples to build its case. It also asked the judge in the Manhattan federal court to dismiss parts of the case. Now, there are two key things in this statement. One, OpenAI does not say if NYT broke any anti-hacking laws, and neither does it give proof of the hack. The second is that it said one could not really use ChatGPT to serve up Times articles at will, but that the newspaper could not really prevent AI models from taking facts from its stories to train themselves. It said that that process was similar to how the newspaper itself would not refrain from reporting a story just because another media house had investigated it first. It said that the way NYT had got those highly anomalous results was by making thousands of attempts and by using deceptive prompts that violated OpenAI's terms of use. Of course, NYT responded, and its lawyer said that all they did was use OpenAI's products to see how the LLMs reproduced The Times' copyrighted work. So this is an ongoing case. Now, there are more media outlets suing Microsoft and OpenAI. On Thursday, in fact, three news platforms - Raw Story, the AlterNet site it owns, and The Intercept - sued them for copyright infringement. These companies are seeking about $2,500 in damages per violation and asking the companies to remove all their copyrighted articles from their training datasets. Now, this is interesting because, in its response, or defense, to NYT's lawsuit, OpenAI had said that it hadn't used only Times articles. Neither OpenAI nor Microsoft had responded to this new lawsuit as of when we're recording this episode. But Microsoft had said in September that it would cover legal costs if its customers used its AI products in a way that caused copyright concerns. I don't know how they're going to implement that, but we'll see. OpenAI is also getting into partnerships with publishers like AP and Business Insider. In fact, the NYT case is a case because those two couldn't hammer out a licensing deal. So that's what's going on at that intersection of, you know, journalism and AI.
Delaney: Messy ties. You've explained that well. So where are we right now with the case, Rashmi? And what's your take on how it will play out? How do you anticipate the lawsuit unfolding?
Ramesh: So the result of these lawsuits will have a huge impact on journalism, of course, but also massively on the AI industry and how the industry players train LLMs. So for that, we can circle back to a point I mentioned earlier about fair use. OpenAI and other AI companies are of the opinion that they will win these cases because how they are using the news articles comes under fair use of copyrighted material. Now the thing is, copyright law still hinges on the idea of copying and basically says what type of copying is legal and what isn't. Computers copy everything, so copyright law issues are very common in technology history. And the thing about the U.S. Copyright Act is that fair use is already written into it, making some types of copying legal. Now, it has a test that courts can use to figure out whether something is fair use or not, but it is up to the court to decide how to run that test and interpret its results. And what one court rules as fair use does not necessarily set precedent for any other court. So these types of cases can pan out in a million different ways, and it is largely subject to the court's perception. Add to this a truckload of money, media attention and hype, and we have the recipe for the perfect chaos.
Delaney: Perfect chaos indeed. We'll be watching this closely. Thank you, Rashmi. So finally, and just for fun, if you could give a TED talk on any AI or cybersecurity topic, what would it be about and why?
Field: Going into exactly what Rashmi just spoke about: AI and the future of journalism. And I say that because - I'm old enough, and I like to say this publicly - I came into the industry at a time when I was given a choice. When I started my first newspaper job, I could have a typewriter to work with, or I could have a computer. I opted for the computer. The computer changed everything in journalism. As a writer, it gave you the ability to cut and paste, to do drafts, to go back and revisit things, and to be on the scene of a story and transmit that story - via phone lines at the time - back to the office to be used. It changed everything. AI has got the ability to do exactly that as well. I've said for years that the barrier to entry in what we do has become lower and is now nil. Anybody can package news these days, and with AI, they're going to do that. Our differentiation is on this panel right here. It's when we invite guests to the panel and do shows with them, to be able to tap into what we know and who we know, and offer perspectives on why the news stories matter and what you can do about them in your day-to-day roles, in your lives and your jobs. We're on the cusp of seeing something change dramatically, and I think it's exciting. And I feel bad for some of the traditional news organizations. It's not that they missed the boat on AI; they missed the boat on a lot of things generations ago and are paying the price for it today. I think it's a terrific topic, and I can't wait for the opportunity to discuss it.
Delaney: I look forward to watching it. Rashmi?
Ramesh: I cannot do enough of this. AI and climate change: two sides of the same coin. AI poses massive sustainability challenges, but it's also possibly one of the most promising tools we have to help slow down global warming.
Delaney: Love that. Very timely. Matt?
Schwartz: Yeah. So my TED Talk would be channeling something I heard at the Scottish cybersecurity conference that I was at, which was that some of the officials were saying they were delivering advice that hasn't much changed from a decade ago. The guy in charge of the Wales Cybersecurity Center said, "I was urging people to keep everything patched and to do proper backups." A representative from the National Cyber Security Centre here in the U.K. was saying, "It's not the new vulnerabilities we keep seeing at scale. It's the old ones, and they keep racking up victims." So with that in mind, I will do a TED Talk on the 3-2-1 backup rule. Sometimes you see it as the 3-2-1-1-0 backup rule. But the gist is: you've always got three copies of your data, you store your backups on at least two different types of media, and you keep at least one copy off site. And if you go with the zero, you make sure there are zero errors. The point of all this is that if something really bad happens - like you get hit by a hurricane, or a tornado, or a ransomware group - you can restore. You may not be able to restore right away, and it won't be free, but at least you're not having to consider paying criminals. That's my TED Talk. Thank you for coming.
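[Editor's note: As a rough, purely illustrative sketch of the 3-2-1-1-0 rule Mathew describes - not anything shown or discussed on the panel - the following Python snippet checks a hypothetical inventory of backup copies against those criteria. The BackupCopy fields and example data are invented for illustration.]

# Illustrative sketch only: a toy check of the 3-2-1-1-0 backup rule.
# The BackupCopy fields and the example inventory below are hypothetical.
from dataclasses import dataclass

@dataclass
class BackupCopy:
    media_type: str      # e.g., "disk", "tape", "cloud"
    offsite: bool        # stored away from the primary site?
    offline: bool        # air-gapped or immutable copy?
    verified_ok: bool    # last restore test passed with zero errors?

def satisfies_32110(copies: list[BackupCopy]) -> bool:
    """True if the copies meet 3 copies / 2 media types / 1 offsite,
    plus the stricter 1 offline copy and 0 verification errors."""
    enough_copies = len(copies) >= 3
    enough_media = len({c.media_type for c in copies}) >= 2
    one_offsite = any(c.offsite for c in copies)
    one_offline = any(c.offline for c in copies)
    zero_errors = all(c.verified_ok for c in copies)
    return enough_copies and enough_media and one_offsite and one_offline and zero_errors

if __name__ == "__main__":
    inventory = [
        BackupCopy("disk", offsite=False, offline=False, verified_ok=True),
        BackupCopy("tape", offsite=True, offline=True, verified_ok=True),
        BackupCopy("cloud", offsite=True, offline=False, verified_ok=True),
    ]
    print("3-2-1-1-0 satisfied:", satisfies_32110(inventory))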
Delaney: Well, I like this: we've started in the future and gone to the present, and my TED Talk looks at the past - what lessons can we learn from ancient spycraft in order to challenge perceptions of cybersecurity? So we'll be comparing ancient espionage to modern cybersecurity tactics, looking at examples from ancient civilizations like Egypt, China and Rome and how they used deception, infiltration and surveillance, and looking at intelligence gathering. What can we learn from Sun Tzu and Julius Caesar, and how can we apply it to cybersecurity solutions? Thank you so much. It's always fun, always informative - excellent discussion. And thank you so much for watching. Until next time.