After a string of open-source flaws, federal agencies could soon require vendors to supply a “software bill of materials.” But there’s a lot to do before SBOMs will be capable of significantly reducing cyber risk.
Often compared to an ingredients list on a package of food, an SBOM is a text file that details the components used to build a piece of software.
A major U.S. initiative aimed at improving transparency into the security of software components has a long way to go before it will be able to reach its full potential.
According to industry analysts and the federal official leading the “software bill of materials” (SBOM) effort for the government, the next phase of the initiative is ready to begin, with more vendors expected to start offering federal customers a detailed peek at the components used inside their software.
But while SBOM will need time to fully mature, the important thing is to get started with what’s ready now and build from here, said Allan Friedman, who heads the SBOM effort at the Cybersecurity and Infrastructure Security Agency.
“To go from security [where software] is a black box to thinking about the broader supply chain — that takes a while, especially in the federal government,” said Friedman, a senior adviser and strategist at CISA, in an interview with Protocol. “But it is a priority.”
At a basic level, an SBOM is just a text file that lists the components used to build a piece of software. The usual analogy that gets drawn is with the ingredients list on a package of food; some security professionals have even suggested just referring to it as a “software ingredient list.”
The software bill of materials could have a range of applications for reducing cyber risk, proponents say, though the most commonly cited use is enabling a customer to quickly pinpoint where they’re running vulnerable components.
The effort has gained traction in part due to rising concerns, in the wake of vulnerabilities such as Log4Shell and attacks such as the SolarWinds breach, about the security of open-source software components and the software supply chain overall. At the same time, not everyone in the cybersecurity community believes SBOM deserves to be a top focus, given all of the initiatives an organization could undertake to improve its security.
SBOM is only a starting point and does not solve any problems on its own, Friedman acknowledged.
“The important thing to remember about SBOM is it’s a data layer. And that’s all it is,” he said. “The goal is to take that data and turn it into intelligence, which can then drive action.”
In truth, the software tools needed to analyze SBOMs in bulk and glean insights from the data largely do not exist yet.
Even the much-touted use case of checking an SBOM for a flaw like Log4Shell is not something a skilled developer would want to do manually, and it’s beyond the reach of anyone non-technical, said Gareth Rushgrove, vice president of products at Snyk, which offers developer security tools including SBOM generation. Notably, in the initial stage, an SBOM won’t be automatically correlated with vulnerability information.
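To make the scale of that task concrete, here is a minimal sketch of the kind of automation involved, assuming a hypothetical SBOM file in the CycloneDX JSON format with standard name and version fields; it is an illustration, not any vendor’s actual tooling.

```python
import json

# Hypothetical path to an SBOM exported in CycloneDX JSON format.
SBOM_PATH = "product-sbom.cdx.json"

# Log4j 2.x releases at or above 2.17.1 are generally treated as patched
# against Log4Shell and its follow-on CVEs.
PATCHED = (2, 17, 1)


def parse_version(version):
    """Best-effort conversion of a dotted version string into a tuple of ints."""
    parts = []
    for piece in version.split("."):
        digits = "".join(ch for ch in piece if ch.isdigit())
        parts.append(int(digits) if digits else 0)
    return tuple(parts)


with open(SBOM_PATH) as f:
    sbom = json.load(f)

# CycloneDX keeps the component inventory under the "components" key.
for component in sbom.get("components", []):
    name = component.get("name", "")
    version = component.get("version", "")
    if "log4j-core" in name and parse_version(version) < PATCHED:
        print(f"Flagged: {name} {version} (upgrade to 2.17.1 or later)")
```

Even this toy version has to make judgment calls, such as how to parse version strings and which component names count as Log4j, which is exactly the sort of work tooling vendors are now racing to productize.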
But many in the industry told Protocol they expect people and companies will be able to solve these problems as soon as more SBOMs are being produced. That will likely be spurred, at least in the beginning, by the federal government and its tens of billions of dollars in annual IT spending.
The U.S. government has been working on various elements of the software bill of materials equation for more than a year now, ever since President Biden’s executive order in May 2021 established SBOM as an important initiative for national cybersecurity. Many software companies have interpreted the efforts as the basis for the eventual inclusion of SBOMs as a requirement in federal contracts.
The White House’s Office of Management and Budget is expected to soon issue a memo to federal agencies detailing how to go about including SBOMs in the contracting process, cybersecurity policy watchers told Protocol. OMB declined to comment.
In the meantime, some federal agencies have already begun to ask for SBOMs.
In July, the State Department issued a draft request for proposals on technology contracts worth $8 billion to $10 billion, which included a requirement for a software bill of materials. The National Defense Authorization Act for Fiscal Year 2023 mentions a requirement for procured products — with individual items listed in a “submitted bill of materials” — to either be free from software vulnerabilities or include a plan for remediating issues.
If the White House does ultimately direct federal agencies to require SBOMs from software suppliers, it would represent the most specific technical requirement for cybersecurity ever placed on private-sector contractors by the U.S., said David Brumley, a computer science professor at Carnegie Mellon University and co-founder of cybersecurity startup ForAllSecure, which serves federal customers including the Department of Defense.
In short, in the event this happens, “it’s going to be a big change,” Brumley said.
But given the seriousness of the problem around the security of software — particularly open-source components — it may be exactly the type of ambitious change that the tech industry needs, a number of executives in the software and cybersecurity industries told Protocol.
“I think there is very significant inherent value in this, and we will see adoption across the industry,” said Yogesh Badwe, chief security officer at data protection vendor Druva. “It will take time, of course.”
Standard data fields in an SBOM include the names and versions of components, as well as the relationships between component “dependencies” — the pre-built, third-party software components that are heavily used in software development and are often distributed under open-source licenses.
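For a concrete picture of those fields, here is a minimal, hand-written example in the spirit of the CycloneDX JSON format, expressed as a small Python script. The application and component names are invented, and a real generator would emit far more detail, such as licenses, hashes, suppliers and unique identifiers.

```python
import json

# A toy SBOM in the spirit of CycloneDX: one fictional application, one of its
# direct dependencies, and the relationship between them.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "components": [
        {
            "type": "application",
            "bom-ref": "acme-payments-service@2.3.0",
            "name": "acme-payments-service",
            "version": "2.3.0",
        },
        {
            "type": "library",
            "bom-ref": "log4j-core@2.17.1",
            "name": "log4j-core",
            "version": "2.17.1",
        },
    ],
    # The dependency graph: the application depends on the library.
    "dependencies": [
        {
            "ref": "acme-payments-service@2.3.0",
            "dependsOn": ["log4j-core@2.17.1"],
        }
    ],
}

print(json.dumps(sbom, indent=2))
```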
The lack of visibility into software dependencies has been a major factor behind the push for SBOMs, particularly in the wake of Log4Shell, a critical vulnerability in the widely used Apache Log4j logging component that was discovered last December.
In the ensuing rush to patch affected systems, many software vendors struggled to figure out if their products were vulnerable to Log4Shell due to lack of visibility into their own code, a much more common problem than one might think, said MongoDB CISO Lena Smart. But the data platform company’s work with Snyk allowed it to “see every instance of Log4j so quickly,” Smart said. “That’s why we were able to tell our customers within two days, ‘This is where we are [with Log4j].'”
Notably, the U.S. government’s list of minimum elements needed for an SBOM includes that the documents are written in a machine-readable format to allow for automated usage. The two leading formats, SPDX and CycloneDX, will appeal to different customers based on which type of compliance or standards their industry is focused on, said Tim Mackey, principal security strategist with the Cybersecurity Research Center at Synopsys, which will generate SBOMs for customers.
At this point, the basics of SBOM are “reasonably well understood,” and numerous commercial and open-source tools now exist for generating the documents, Friedman said. “There’s no reason that an organization cannot start generating SBOMs and asking for SBOMs from their suppliers.”
Friedman, who has been the federal government’s most prominent SBOM champion for years, previously worked on the issue as director of cybersecurity initiatives at the National Telecommunications and Information Administration, before continuing the effort at CISA.
Going forward, he said, the focus will be on scaling up the production of SBOMs, achieving interoperability between the different vendors that generate them and “operationalizing” the concept — in other words, making SBOM into an everyday part of corporate life, like tax reporting.
“Most people should not be thinking about SBOM” within three to five years, according to Friedman. “It should just be a natural part of the landscape, the way that the other parts of our vulnerability ecosystem are,” he said.
Even in its limited initial phase, the SBOM approach is useful for helping to better inform purchase decisions on software, according to Dan Lorenc, a former Google software engineer who is now founder and CEO of Chainguard. The startup offers tools that aim to help software developers more accurately generate an SBOM and more efficiently remediate vulnerabilities in their own code with the help of the document.
However, because SBOMs aren’t automatically correlated with the National Vulnerability Database, making vulnerabilities transparent in an SBOM will be difficult “until a lot of work gets done on matching the vulnerability database to the software database,” Lorenc said.
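A toy sketch of that correlation step illustrates the problem. It assumes a hypothetical, locally maintained vulnerability feed keyed by plain package names; in practice, names in SBOMs and names in vulnerability databases rarely line up this neatly, which is the matching work Lorenc is referring to.

```python
# Hypothetical vulnerability feed: package name -> (first fixed version, CVE).
# Real feeds (NVD, OSV, vendor advisories) use richer identifiers and version
# ranges, and reconciling their naming with SBOM naming is the hard part.
VULN_FEED = {
    "log4j-core": [("2.17.1", "CVE-2021-44228")],
    "openssl": [("3.0.7", "CVE-2022-3602")],
}


def version_key(version):
    """Crude numeric sort key for dotted version strings."""
    return tuple(int("".join(filter(str.isdigit, p)) or 0) for p in version.split("."))


def match_components(components):
    """Return (name, version, cve) for SBOM components with a known issue."""
    findings = []
    for comp in components:
        for fixed_in, cve in VULN_FEED.get(comp["name"], []):
            if version_key(comp["version"]) < version_key(fixed_in):
                findings.append((comp["name"], comp["version"], cve))
    return findings


# Components as they might appear after parsing an SBOM.
print(match_components([
    {"name": "log4j-core", "version": "2.14.1"},
    {"name": "pcre2", "version": "10.40"},
]))
```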
“If I had to guess, the first year or two of this SBOM journey is going to be focused on just building up the muscle to ask for them, to provide them, to create them, to keep them up to date,” he said.
At Mend, which offers SBOM generation capabilities, vice president of product Jeff Martin said the federal efforts have also opened the door for private industry customers to begin requesting SBOMs from their software suppliers. Ultimately, “that’s what will actually move the needle,” he said.
Security teams across both the public and private sectors are tired of the mad scramble that occurs every time a new critical vulnerability is disclosed, said Dale Gardner, senior director and analyst at Gartner. Greater software transparency is a top priority for many organizations.
“I think there’s a lot of pressure and demand within the marketplace for these kinds of solutions,” Gardner said. “So I’m pretty confident [SBOM] is going to happen.”
Vendors looking to enable “dynamic SBOM” could be another key piece of the puzzle, according to Katie Norton, a senior research analyst at IDC. Such tools “can help prioritize what to deal with first, by telling you that these are the things that are internet-exposed and exploitable,” she said.
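Conceptually, that prioritization is a ranking over SBOM-derived findings enriched with runtime context. The sketch below is purely illustrative; the field names and data are invented, not any particular vendor’s schema.

```python
# Illustrative findings enriched with the runtime context a "dynamic SBOM"
# tool might supply. The field names and values here are invented.
findings = [
    {"component": "log4j-core 2.14.1", "cvss": 10.0,
     "internet_exposed": True, "known_exploited": True},
    {"component": "openssl 3.0.5", "cvss": 7.5,
     "internet_exposed": True, "known_exploited": False},
    {"component": "lodash 4.17.20", "cvss": 5.3,
     "internet_exposed": False, "known_exploited": False},
]

# Deal first with what is both reachable from the internet and being actively
# exploited, then fall back to exposure alone, then raw severity.
findings.sort(
    key=lambda f: (f["internet_exposed"] and f["known_exploited"],
                   f["internet_exposed"], f["cvss"]),
    reverse=True,
)

for rank, finding in enumerate(findings, start=1):
    print(rank, finding["component"])
```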
The need for tools that can make sense of SBOMs has long been recognized as a chicken-and-egg problem that would eventually have to be addressed, and that work is now beginning, Friedman said.
“Our consumption of SBOM data has lagged — and that’s OK,” he said. “Until recently, we didn’t have a lot of SBOM data sitting around, so no one would have sought out an SBOM consumption tool. We’re now at a level where that’s starting to emerge.”
Without Biden’s executive order, it’s unclear whether the SBOM movement would have gained the attention and momentum needed to go mainstream.
“When the White House announced that part of the basic level of software security was including an SBOM, that did have a huge impact on how people saw this,” Friedman said. “It was an emerging idea. And now it was going to be expected as part of the basic software model.”
The SBOM initiative has prompted significant debate in the cybersecurity community in recent years, and continues to do so. While most say they support the push for greater software transparency, not all agree that focusing on SBOM is the best use of time for shorthanded security teams.
For the amount of effort that will be required to use SBOMs, the actual risk reduction is not likely to be worth it right now, said Wim Remes, managing director of the security services unit at Damovo.
“SBOM is a nice idea,” Remes said. “But I think it shouldn’t be a priority at this moment.”
Jonathan Reiber, vice president of cybersecurity strategy and policy at AttackIQ, and a former cyber policy official in the Obama administration, agreed.
SBOMs are “a great thing. They should happen. They’re not ‘the’ thing,” he said. “Start with what we know the adversary is going to do, and defend your high-value data [against that].”
Meanwhile, the federal effort around SBOM has also been questioned by some in the broader tech industry, including representatives of the Information Technology Industry Council, a trade group whose members include many of the industry’s largest companies.
“We’re not saying that SBOMs can’t be useful,” said Courtney Lang, senior director of policy at the group. “But I think there does remain a lot of work to be done in order to ensure that, if there is going to be some kind of future requirement, it actually yields useful information to the federal government.”
When asked about the readiness of federal agencies to use SBOMs, Friedman said that “there are definitely organizations in the U.S. government today that are ready to embark on that journey.”
Just like in the private sector, the government “has organizations that have spent a lot of time and money and staff thinking about the broader security landscape,” he said. “And there are also much smaller organizations that comply with federal rules, but don’t necessarily have abundant resources to take on new roles and responsibilities.”
Likewise, smaller software vendors that sell to the federal government could also be affected differently by any forthcoming SBOM requirements, said Rick Holland, CISO at ReliaQuest-owned cybersecurity vendor Digital Shadows.
Smaller vendors may have a steeper challenge with finding the resources needed to supply an SBOM, and may have to decide whether a federal contract is valuable enough to do so, Holland said.
Whatever the federal government ends up doing in terms of SBOM requirements for contractors, “I’d like to see a gentle approach to SBOM initially,” said Marc Rogers, executive director of cybersecurity at Okta.
For the first phase of SBOM, companies should just be asked to make their best effort, “and then they can improve on it,” Rogers said. “I’d like to see that go through some cycles before anyone starts sort of waving a stick.”
At data management software vendor AvePoint, cybersecurity chief Dana Simberkoff also wants to see answers for some of the other open questions about the practicalities of implementing SBOMs — from how to automate their usage to a mechanism for ensuring they don’t end up in the hands of attackers — before any SBOM requirements for contractors roll out.
Given that AvePoint’s software is used broadly across the U.S. government, she has good reason to pose such questions.
“Conceptually, this is absolutely the right direction for the government to take and for industry to take, as well,” said Simberkoff, who is chief risk, privacy and information security officer at the company. “But there are key things that need to be fleshed out.”
Still, the current lack of visibility into the security of software is just too serious of a problem to do nothing, she said, counting herself among the strong supporters of the SBOM initiative. “I’m a big believer in not letting the perfect be the enemy of the good.”
Kyle Alspach ( @KyleAlspach) is a senior reporter at Protocol, focused on cybersecurity. He has covered the tech industry since 2010 for outlets including VentureBeat, CRN and the Boston Globe. He lives in Portland, Oregon, and can be reached at kalspach@protocol.com.
The bugs feature a “high” severity rating, down from the initial “critical” rating, and estimates suggest just 1.5% of OpenSSL instances are impacted.
A pre-announcement last week of a new vulnerability had generated significant attention in the cybersecurity community due to the ubiquity of OpenSSL and the massive impact of the Heartbleed vulnerability of 2014.
The team that maintains OpenSSL, a key piece of widely used open-source software that’s used to provide encryption for internet communications, disclosed a pair of vulnerabilities on Tuesday that affect the most recent version of the software.
However, after the project initially rated the issue as “critical” in a heads-up advisory last week, the vulnerabilities have been downgraded to a severity rating of “high,” though administrators are still being urged to patch systems quickly.
The OpenSSL project team disclosed last week that a new vulnerability would be announced on Nov. 1 but did not provide specifics. The announcement had generated significant attention in the cybersecurity community due to the ubiquity of OpenSSL and the massive impact of a previously disclosed critical vulnerability in the software, the Heartbleed vulnerability of 2014.
OpenSSL enables secure internet communications by providing the underlying technology for the HTTPS protocol, now used on 82% of page loads worldwide, according to Firefox. The Heartbleed vulnerability had affected a significant number of major websites and led to attacks including the theft of hundreds of social insurance numbers in Canada, which prompted the shutdown of a tax filing website for the Canada Revenue Agency.
The vulnerabilities only impact OpenSSL versions 3.0 and above. Data from cybersecurity vendor Wiz suggests that just 1.5% of OpenSSL instances are affected.
That’s due at least in part to the relatively recent arrival of OpenSSL 3.0, which was released in September 2021.
“[Given] the fact the vulnerability is primarily client-side, requires the malicious certificate to be signed by a trusted CA (or the user to ignore the warning), and is complex to exploit, I estimate a low chance of seeing in-the-wild exploitation,” security researcher Marcus Hutchins wrote in a post.
The new version of OpenSSL featuring the patch for the vulnerability is OpenSSL 3.0.7.
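For administrators doing their own triage, a quick first pass is simply to check which OpenSSL versions are installed. The sketch below shells out to the openssl binary (assuming it is on the PATH) and flags anything in the affected 3.0.0 through 3.0.6 range; note that OpenSSL is also frequently bundled inside containers and statically linked applications, which a single system-level check won’t catch.

```python
import re
import subprocess

# Ask the local OpenSSL binary for its version string, e.g.
# "OpenSSL 3.0.5 5 Jul 2022". Assumes `openssl` is on the PATH.
output = subprocess.run(
    ["openssl", "version"], capture_output=True, text=True
).stdout

match = re.search(r"OpenSSL\s+(\d+)\.(\d+)\.(\d+)", output)
if match:
    version = tuple(int(part) for part in match.groups())
    label = ".".join(str(part) for part in version)
    if (3, 0, 0) <= version < (3, 0, 7):
        print(f"OpenSSL {label}: in the affected range, upgrade to 3.0.7")
    else:
        print(f"OpenSSL {label}: not in the affected 3.0.0-3.0.6 range")
else:
    print("Could not determine the OpenSSL version:", output.strip())
```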
The pre-announcement of the new version last week was presumably meant to give organizations time to determine whether their applications would be impacted before the full details on the vulnerabilities were disclosed, said Brian Fox, co-founder and CTO of software supply chain security vendor Sonatype.
Given the tendency for malicious actors to quickly utilize major vulnerabilities in cyberattacks, many expected that attackers would begin seeking to exploit the issue shortly after the disclosure.
The new vulnerabilities both involve buffer overflow issues, a common bug in software code that can enable an attacker to gain unauthorized access to parts of memory.
In the first vulnerability disclosed on Tuesday, tracked as CVE-2022-3602, “An attacker can craft a malicious email address to overflow four attacker-controlled bytes on the stack,” the OpenSSL team wrote in the advisory on the issue.
The resulting buffer overflow could lead to a crash or, potentially, remote execution of code, the advisory says.
The severity rating for the vulnerability was downgraded to “high” after analysis determined that certain mitigating factors make it a less severe issue, according to the OpenSSL advisory.
“Many platforms implement stack overflow protections which would mitigate against the risk of remote code execution,” the OpenSSL team wrote in the advisory.
One initial analysis suggests that exploiting the vulnerability is more difficult than it could be since the issue occurs after the validation of an encryption certificate.
For the second vulnerability, tracked as CVE-2022-3786, a malicious email address can be used to cause a buffer overflow and crash the system, but remote code execution is not mentioned as a potential concern.
Kyle Alspach ( @KyleAlspach) is a senior reporter at Protocol, focused on cybersecurity. He has covered the tech industry since 2010 for outlets including VentureBeat, CRN and the Boston Globe. He lives in Portland, Oregon, and can be reached at kalspach@protocol.com.
The flow of capital and talent into Web3 startups continues, pulled through this crypto winter by conviction in the generational technology transition it represents. Capital is in place and looking for an early-stage home. Valuations and expectations have normalized, and that is facilitating rational, purposeful engagement with Web3 startups. We believe the Web3 investment environment is riper than ever.
At SkyBridge, we have invested over $400 million in leading crypto and fintech startups since 2020. We expect to accelerate our efforts following our partnership with FTX Ventures, which recently bought a 30% stake in SkyBridge. Our collective goal is to grow the ecosystem, and we’re here for the long term.
SkyBridge Capital’s Anthony Scaramucci and FTX’s Sam Bankman-Fried at Crypto Bahamas
To founders and operators, now is the time to invest in Web3 builders who are focusing on real-world impact. Investors are looking for tangible use cases, including in the physical world. The recent SALT New York conference, for instance, featured two projects that are of particular interest to investors at the moment.
As an investor at SkyBridge, I have seen countless pitches, read my fair share of term sheets, and developed a good sense for what makes Web3 founders more likely to succeed — and more likely to fail.
If you are a Web3 entrepreneur, here is our advice for you:
1. Focus on the product.
Demonstrate economic value. The crypto winter is proving once again that token price is the last thing we should care about. The VC correction is proving once again that valuations are not an indicator of success. While money continues to flow, the crypto winter and VC slowdown have forced even the most committed Web3 venture capitalists (and their investors) to proceed with more caution.
Valuations have become less hype-driven and more realistic; the amount of time spent on due diligence has increased substantially; and every founder needs to directly, clearly, and concisely answer the question, “Does this project have any real-world utility, and does it create economic value?”
Just as you would with any other tech product, focus on the fundamentals: user growth, customer acquisition cost, burn rate, and all the rest of that really boring stuff that drives return on investment and really matters.
2. Embrace transparency.
Our LPs want to know that their money is safe with us — and we need to know it is safe with the companies we invest in. That means a couple things for you.
Be as transparent as you can be about custody and security, especially if tokens are part of the deal structure. Where are the assets held? What measures are in place to protect them? We have a long history of operational due diligence, and we place a premium on careful control over the assets.
Don’t underestimate the business impact of regulation. Incorporate its advent into your thinking. We believe, as many investors do, that regulation is coming — it’s just a matter of time — and that it will have a positive impact on the industry. Embrace it; don’t try to hide or operate in the gray area.
3. Play the long game.
Believe it or not, we’re still early in the age of Web3. That has several implications for founders.
Keep your nose clean. Good character is hard to find and sells at a premium in this space (see: 3AC). The majority of Web3 founders are unfamiliar to most investors. That means a clean track record, references, and the ability to demonstrate trustworthiness are more important than ever.
Play nice. Whether it’s an investor who rejects you or a competitor you feel like you’re racing against, don’t sling mud or burn bridges. The landscape is constantly shifting, people move around in this industry all the time, and your paths will almost certainly cross again. The borderless economy isn’t a zero-sum game. Don’t treat it like one.
Protect your culture. Make sure your employees share the same values and standards of conduct. The talent pool is deep right now, but remember that, for startups, every single hire has an outsize impact on the culture (and chances of survival). If you make one bad hire in a company with 10,000 employees, you won’t feel it. But make one bad hire in a company with 10, and it’ll probably kill you.
*****
Projects built on financial engineering are a thing of the past. The excess and easy capital has left the system. This is a good thing. Focus on building great products or protocols, and the valuation will take care of itself over time. Obsess over valuation, and you may find yourself a zombie without access to capital.
We want you to succeed, whether that translates to capital investment or not. Because every win in this space, no matter where it comes from, pushes the tide a little higher.
Jason Zins is a Partner at SkyBridge Capital where he leads the firm’s venture and growth equity investing with a focus on crypto and fintech companies. Prior to joining SkyBridge in 2014, Mr. Zins worked at Bloomberg L.P. Mr. Zins received his B.A. in Government from Dartmouth College.
Even as climate change increases the risks of floods, fires, and droughts, there are steps that data centers large and small can take to minimize their future vulnerability.
Increasingly extreme weather threatens data centers and one of the things cloud computing customers prioritize most: reliability.
Data center operators have long planned for some climate risks, but climate change is increasing the odds of extreme events and throwing new ones into the mix. That’s creating a reckoning for operators, who could have to reevaluate everything from where to site new data centers to physically hardening infrastructure and spreading workloads across multiple regions.
A 2019 survey by the Uptime Institute, which advises business infrastructure companies on reliability, shows that a significant share of the cloud computing sector is being reactive to the threats that climate change poses or, even worse, doing nothing at all. Nearly a third of the roughly 500 data center operators that responded said they had not recently reviewed their risks and had no plans to do so. Meanwhile, just 22% said they are “preparing for increased severe weather events.”
Jay Dietrich, the Uptime Institute’s sustainability research director, said that large data center companies generally have the resources to undertake more regular risk assessments and prepare for how climate change will impact operations, from storms that could increase the risk of outages to drought that could complicate access to water for cooling. Meanwhile, smaller companies tend to be more reactive, though they stand to lose the most.
“If I’m a smaller company that doesn’t have a big data center infrastructure, but it’s integral to my operation,” Dietrich said, “I’d better be proactive because if that goes down, it’s my business that goes down with it.”
Amazon Web Services, Google, and Microsoft — dubbed the Big Three in the data center world — have the world’s biggest cloud computing footprints, and all three have robust risk assessment processes that take into account potential disasters.
AWS says it selects center locations to minimize the risks posed by flooding and extreme weather and relies on technology like automatic sensors, responsive equipment, and both water- and fire-detecting devices to protect them once they’re built. Similarly, Microsoft uses a complex threat assessment process, and Google assures customers that it automatically moves workloads between data centers in different regions in the event of a fire or other disaster.
However, none of the Big Three explicitly call out climate change in their public-facing risk assessment processes, much less the mounting threat it poses. (None of the three responded to Protocol’s specific questions and instead provided links to previous statements and webpages.)
A 2020 Uptime report warns that data center operators may have become complacent in their climate risk assessments, even though all evidence points to the fact that “the past is no longer a predictor of the future.” For instance, sea-level rise could overwhelm cables and other data transmission infrastructure, while the rise in large wildfires could directly threaten dozens of centers located in the West.
Meanwhile, storms are expected to intensify as well. A recent assessment found that roughly 3.3 gigawatts of data center capacity is in the federally recognized risk zone for hurricanes, and 6 gigawatts of capacity that is either planned or already under construction falls in the zone as well. And even when a data center itself is out of harm’s way, climate impacts have made power outages more likely, requiring centers to rely more on backup systems.
Given that data centers are designed to operate for 20 years — but are generally in use for much longer — the need to plan for how climate change is shifting baseline conditions is vital to ensuring operators aren’t caught off guard. This isn’t necessarily a future problem either. In 2017, wildfires got within three blocks of Sonoma County’s data center, and also scattered the team responsible for operating it across Northern California. And just this year, Google and Oracle’s data centers experienced cooling system failures amid record heat in the U.K.
To account for these risks, Uptime encourages companies to spread workloads between data centers and regions; if a storm hits Florida, a provider should have infrastructure out of state so service can continue without pause, as happened during Hurricane Ian last month. While this redundancy is easier for a large company with widespread data centers, even smaller companies can benefit from using secondary and out-of-region sites for backup and recovery in case a climate-related disaster causes data loss at the original site.
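As a toy illustration of that out-of-region redundancy, the sketch below health-checks a primary endpoint and falls back to a standby in another region when it stops responding. The URLs and timeout are placeholders, and real deployments typically handle this with DNS failover, load balancers or replicated infrastructure rather than application-level polling.

```python
import urllib.request

# Placeholder endpoints: a primary site in one region and a standby replica
# hosted out of region, per the redundancy guidance described above.
PRIMARY = "https://primary.us-east.example.com/healthz"
SECONDARY = "https://standby.us-west.example.com/healthz"
TIMEOUT_SECONDS = 3


def healthy(url):
    """Return True if the endpoint answers with HTTP 200 within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=TIMEOUT_SECONDS) as response:
            return response.status == 200
    except OSError:
        # Covers connection failures, HTTP errors, and timeouts.
        return False


def active_endpoint():
    """Prefer the primary region; fail over to the out-of-region standby."""
    return PRIMARY if healthy(PRIMARY) else SECONDARY


if __name__ == "__main__":
    print("Routing traffic to:", active_endpoint())
```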
Smaller fixes could have a big climate resiliency payoff as well. Uptime recommends investing in disaster prediction resources, such as those developed by insurance companies, to pinpoint the likelihood of disasters at any given site and use that information to take steps to prepare data centers for disaster, from moving generators and pumps to higher ground to installing flood barriers. These steps can improve a center’s reliability when disaster hits. At least some companies are already taking these steps, including Equinix, which has a global footprint of more than 240 data centers.
“We have undertaken a climate risk and resilience review of all our sites with our insurers,” Stephen Donohoe, the company’s vice president of global data center design, and Andrew Higgins, director of engineering development and master planning, told Protocol in a joint statement. “Climate risks are an integral part of our due diligence process during site selection, with flood risk, wind risk, water stress and extreme temperatures considered prior to acquiring the site. Mitigation measures are considered during the design process.”
Major enterprise operations may have no choice but to take some of these steps, given policy changes underway in Europe and the U.S.
The EU’s corporate sustainability reporting directive, which will come into effect in 2023, requires large companies operating on the continent to disclose their exposure to various risks, including climate change. In the U.S., the Securities and Exchange Commission is considering a similar set of rules that would also require that companies disclose climate risk information, though a final rule is still months away.
If the rule, which is still in flux, comes into force, even the most reactive data center companies will have to change their ways.
“In our publications and discussions with clients and members, we’ve been really emphasizing that this is coming,” said Dietrich. “You’re better off being in front of it than behind it.”
Lisa Martine Jenkins is a senior reporter at Protocol covering climate. Lisa previously wrote for Morning Consult, Chemical Watch and the Associated Press. Lisa is currently based in Brooklyn, and is originally from the Bay Area. Find her on Twitter ( @l_m_j_) or reach out via email (ljenkins@protocol.com).
Twitter could turn into an even bigger medium for crypto messages — if it survives. Meanwhile, Binance is advising Twitter on how to embrace Web3.
The ongoing health of Twitter and its direction under Musk could have a significant impact on a service where crypto promoters tout tokens, developers coordinate software updates, and investors seek information.
Twitter’s future looks fuzzy under Elon Musk. But could things be coming into focus for crypto Twitter?
Musk now owns a social network used by a large and dynamic online community of crypto supporters, in which he himself has been one of the loudest and quirkiest voices. The ongoing health of Twitter and its direction under Musk could have a significant impact on a service where crypto promoters tout tokens, developers coordinate software updates, and investors seek information.
The self-appointed “chief twit,” who has more than 112 million followers on Twitter, is known for triggering wild movements in the price of dogecoin by endorsing — or even just mentioning — the token. He triggered a sell-off by jokingly dismissing it as “a hustle” on “Saturday Night Live.”
At his direction, Tesla purchased $1.5 billion worth of bitcoin and announced that it would take the crypto token as payment before selling a huge chunk of that investment and saying bitcoin payments had been halted due to environmental worries.
Despite Musk’s idiosyncratic posturing, crypto fans on Twitter seem excited by the notion of someone they view as one of their own running the place. Dogecoin, for example, has been rallying again, its price boosted not by any tweets by Musk but simply by the idea that one of its leading cheerleaders is in charge.
There could be more concrete changes to Twitter’s business from the crypto world, though. The deal itself was made possible in part by backing from a crypto powerhouse, Binance, giving the world’s biggest crypto marketplace a say in reshaping a major social network.
CEO Changpeng Zhao said in a statement that Binance’s hope is to “play a role in bringing social media and Web3 together in order to broaden the use and adoption of crypto and blockchain technology.”
Patrick Hillman, the company’s chief strategy officer, called the investment “a great R&D opportunity.”
“We see this as a once-in-a-lifetime opportunity to use what is one of the most prestigious platforms from the Web 2.0 era as a laboratory or a sandbox to be able to test out whether Web3 solutions might be able to solve some of the problems that are plaguing Web 2.0 platforms today,” he told Protocol.
He said Binance hopes to play a role in solving problems plaguing crypto Twitter, led by the proliferation of AI-driven bots that have “completely debased the entire conversation.” Musk himself has said spam bots — many of them pushing crypto scams — were a motivation to take over Twitter, and at one point vowed to “defeat the spam bots or die trying!”
Some ideas are already being considered, such as using a microtransaction system that “would result in unimaginable costs for these bot farms,” Hillman said. Another proposal is to attach an NFT to an account or a cluster of accounts to “ensure there was an actual user behind that account,” he said.
These potential fixes will take time, though Musk has shown he wants to move quickly on the product front, rapidly launching plans to charge verified users and explore a relaunch of Twitter’s defunct short-video service, Vine.
Musk is currently focused on reorganizing Twitter, “doing all that work right now that you would expect any new executive who’s just taken over a prestigious company that’s been in existence for over a decade,” Hillman said. “Once that starts to come around, then we’ll start to talk about, OK, how do we begin to launch some of these projects?” he said.
Rob Siegel, a management lecturer at the Stanford Graduate School of Business, said Twitter under Musk could mean that “Web3 technology finally gets a commercially interesting application at scale that is more than financial speculation.”
“I think that is the most interesting thing that I see right now” in the potential impact of a Musk-led Twitter on crypto, he told Protocol.
Then there are the downbeat scenarios, he said.
One is the “potential risk for more volatility [and] meme exploitation. Depending on what happens with Twitter, it could devolve into more chaos, which would encourage bad actors,” he said.
Another risk factor is Musk himself.
Marc Fagel, a former SEC regional director for San Francisco, said “Musk’s promises of a barely moderated free-for-all” could easily attract “racist and anti-Semitic” tweets as well as “unfounded crypto evangelism and pitches for NFT and crypto scams, particularly given Musk’s own predilection for doge-touting and the like.”
Melody Brue of Moor Insights & Strategy agreed. Twitter “will have to figure out how to balance Musk’s ‘free speech absolutist’ stance and human responsibility around hate and misinformation, or it will lose users and more advertisers,” she told Protocol.
Musk tried to reassure advertisers that the service would not become a “hellscape.” But he did not help his case when he shared a baseless conspiracy theory about the attack on Paul Pelosi, the husband of Speaker of the House Nancy Pelosi.
Musk later deleted the tweet, which “probably means he thought it was a mistake,” said Binance’s Hillman.
“Everybody says stupid things on social media, things they regret, things they delete,” he added. “People should be allowed to do that. And it’s not going to go into the calculus of our business and what our objective is right now.”
And that objective is turning around Twitter’s stagnant product development, slow-growing user base, and weak financials. Though the members of crypto Twitter obviously want to know how the Musk regime will benefit them, their needs are likely on the back burner as Twitter reels from the turmoil caused by the takeover.
Siegel said Musk “has bigger problems,” including building “the right internal and online culture,” as well as “navigating political minefields and also paying back his financial supporters.”
“Dealing with crypto Twitter might be a low priority,” he said.
Benjamin Pimentel ( @benpimentel) covers crypto and fintech from San Francisco. He has reported on many of the biggest tech stories over the past 20 years for the San Francisco Chronicle, Dow Jones MarketWatch and Business Insider, from the dot-com crash, the rise of cloud computing, social networking and AI to the impact of the Great Recession and the COVID crisis on Silicon Valley and beyond. He can be reached at bpimentel@protocol.com or via Google Voice at (925) 307-9342.
The largest game makers in the industry are pinning their growth dreams on the mobile market.
Mobile gaming accounts for roughly $100 billion — more than half of all spending on gaming globally.
Speaking last week at The Wall Street Journal’s Tech Live conference, Microsoft Gaming CEO Phil Spencer made a proclamation that has over the last couple of years become a common belief among the biggest names in the game industry.
“There’s no way that you succeed as a gaming company without access to mobile players,” Spencer said in defending the company’s proposed acquisition of Activision Blizzard. In its last fiscal quarter, Activision Blizzard made more revenue from its mobile games like Candy Crush and Call of Duty Mobile than it did on console and PC gaming combined.
Spencer said it was “imperative” Microsoft improve its position in the mobile gaming market to better compete with rivals and expand its audience. “This opportunity is really about mobile for us,” Spencer said of the Activision deal. “When you think about 3 billion people playing video games, there’s only about 200 million households on console.”
That notion — that the console gaming audience has hit a ceiling — is not a new development, though it is rarely so bluntly said aloud. The combined install bases of Microsoft, Nintendo, and Sony amount to roughly 330 million. Yet, to Spencer’s point, many of those console owners own more than one device, while many new buyers of the PS5 and Xbox Series consoles are not fresh customers but returning ones replacing old hardware.
Mobile gaming, on the other hand, accounts for roughly $100 billion — more than half of all spending on gaming globally, according to market researcher Newzoo. This year, as other parts of the business have started to contract following the pandemic-era gaming boom, mobile is still expected to grow by more than 5%, Newzoo estimates.
Now, as the biggest names in gaming seek new revenue streams and consumers, they’re quickly realizing the biggest and most lucrative untapped market is the smartphone. Microsoft is far from alone here. FIFA publisher Electronic Arts, Grand Theft Auto maker Take-Two Interactive, and PlayStation owner Sony have all laid out ambitious plans on mobile over the past two years, often through strategic acquisitions and investments in the business models that work best on Apple and Google’s platforms.
“Mobile phones are becoming more powerful and mobile games are becoming more sophisticated,” said Dennis Yeh, the gaming insights lead at mobile analytics firm Sensor Tower. Yeh cited two other major developments that have made mobile now impossible to ignore. “Cross-platform or multiplatform play is becoming more viable and desirable, so mobile is important to reach the largest audience, especially in developing markets,” he said, while “free-to-play monetization and live [operations] are largely where the industry is moving, and mobile gaming was the original pioneer of those.”
Yeh pointed to the success of Genshin Impact, a live service game available on consoles, mobile, and PC where the experience is “largely the same” across platforms. “The game is also free to play and relies on optional in-game purchases and a ‘gacha’ system. While this itself isn’t necessarily new in Asian markets, Genshin demonstrated the viability and appetite for this in Western markets,” Yeh said.
In two years, Genshin Impact, developed by Chinese studio miHoYo, has earned more than $3.7 billion in lifetime revenue, making it one of the fastest-growing games of all time. It is so successful in both Asian and Western markets that Microsoft is using it as a template to court China-based mobile developers to build games for its Game Pass subscriptions platform, Reuters reported last week. Microsoft passed on the chance to publish Genshin Impact on Xbox, a decision Reuters says Xbox executives “regretted.”
“In developed markets like the U.S. and Western Europe, overall mobile spend is growing, and consumers are increasingly willing to spend on mobile games,” Yeh said. “In developing markets like Latin America and Southeast Asia, mobile represents access to a wide audience, especially consumers who don’t have the ability to buy a console or PC or don’t have access to stable bandwidth.”
Microsoft’s interest in finding the next Genshin Impact is part of a broader industry transition to live service gaming — a model that, as Yeh points out, is dominant and thriving on mobile. Electronic Arts spent close to $4 billion last year acquiring mobile studios to strengthen its position in the free-to-play and live service sectors. Take-Two spent close to $13 billion to buy FarmVille developer Zynga in the second-largest ever gaming acquisition behind only Microsoft’s proposed purchase of Activision Blizzard for $69 billion.
“We’re excited that there are 3.5 billion players in our addressable market. It brings accessibility to our brand,” said EA mobile chief Jeff Karp in an interview with Protocol earlier this year. “It’s really an opportunity to expand our overall ecosystem for the brand, and it creates practicable recurring revenues. It also brings the opportunity to bring our games across platforms.”
Take-Two CEO Strauss Zelnick echoed those comments in a recent interview with The Wrap. “We were already a leader in the console and PC space, and we believe we had already the best collection of intellectual property in the space,” Zelnick said. “However, mobile is the fastest-growing part of the interactive entertainment business.” Take-Two plans to use Zynga’s expertise and resources to help develop mobile versions of its biggest games, including Grand Theft Auto.
In August, Sony acquired its first ever mobile studio, Savage Game Studios, and created an all-new PlayStation Studios Mobile division separate from its console game development unit. PlayStation Studios chief Hermen Hulst described the move as “additive,” saying it will help Sony provide “more ways for more people to engage with our content.” The goal, Hulst added, will be to “reach new audiences unfamiliar with PlayStation and our games.”
PlayStation head Jim Ryan has also cited an expansion to mobile as central to its growth strategy, including a plan to release 20% of all titles by 2025 on smartphone platforms. “By expanding to PC and mobile, and it must be said … also to live services, we have the opportunity to move from a situation of being present in a very narrow segment of the overall gaming software market to being present pretty much everywhere,” Ryan said during an investor presentation in May.
“Whether it be League of Legends or Fortnite, mainstream gaming has already demonstrated the lucrativity, viability, and longevity of free-to-play, live service games,” Yeh said. “Meanwhile, mobile is just a different avenue to access gamers and gain more audience attention share in different settings, such as on commutes.”
Mobile also presents opportunities for all-new business models like cloud gaming and subscriptions, something Microsoft has invested considerable resources into exploring with its Game Pass service. While native mobile games have become more sophisticated, so, too, have streaming platforms that can let you beam console and PC titles from a remote server to your phone screen.
When combined, as Microsoft does with Game Pass and its Xbox Cloud Gaming add-on, mobile presents an opportunity to tap new customer bases. Those include people who have no intention of ever buying a console, but might be interested in streaming console games on their phone — as well as people who might not be able to afford everything required of console or PC gaming, like TVs, monitors, and accessories.
“You’re faced with two larger trends. One of them is macroeconomic — inflation, to put it simply. People are going to start cutting their entertainment budget, which is not essential compared to food and heat. That’s a big transition for the industry,” Joost van Dreunen, an assistant professor at New York University and former game analyst, told Protocol.
“The second piece is that gaming has gone through this moment of transitioning from the fringes. This is not the core gamer that wants to shell out $60 to $70 [per game],” van Dreunen said. “[Game companies] have to necessarily lower the price point to reach average consumers, in the same way Spotify and Netflix do that.”
“Even with the recent shutdown of Google Stadia, accessibility in developing markets will be a key aspect for potential viability [of cloud gaming],” Yeh said. He pointed to accessory makers like Backbone, which produces controllers for smartphones suited to playing both ported and cloud games without relying on the touchscreen, as evidence the mobile market is now accommodating a far wider breadth of players.
Netflix, a relatively new entrant in the gaming industry, has found success by focusing not on costly console or PC game development — as streaming rival Amazon did to mixed results — but instead exclusively targeting the mobile market. The streaming platform, which has 55 games in the pipeline and now offers 35 titles on smartphones, said this month it was now exploring cloud gaming as a way to reach even more customers.
“We’ll approach this the same way as we did with mobile — start small, be humble, be thoughtful — but it is a step we think we should take,” Netflix’s gaming chief Mike Verdu said onstage at TechCrunch Disrupt. “The extension into the cloud is really about reaching the other devices where people experience Netflix.”
Mobile isn’t just a money-printing machine. Companies need expertise and teams willing to move fast, update at breakneck speeds, and maneuver the increasingly byzantine platform structures of Apple and Google, which make the bulk of their app store revenues by collecting fees from mobile games.
Cloud gaming, for instance, is not native to mobile, and instead must be accessed through browsers — a less-than-ideal compromise of working around app store restrictions. But the opportunities, and the existential necessity of diversifying how games make money and survive in an ever-changing industry, have made mobile key to survival.
“We have to break that duopoly of only two storefronts on the largest platforms. We’ve also invested a lot in our cloud streaming,” Spencer said at WSJ Live. “But if you take a long-term bet, which we’re doing, that we will be able to get access to players on the largest platforms that people play on … we want to be in a position with content and players and storefront capability to take advantage of it.”
“Gaming is the largest form of monetization on mobile,” Spencer added, “and we’re a gaming company.”
Nick Statt is Protocol’s video game reporter. Prior to joining Protocol, he was news editor at The Verge covering the gaming industry, mobile apps and antitrust out of San Francisco, in addition to managing coverage of Silicon Valley tech giants and startups. He now resides in Rochester, New York, home of the garbage plate and, completely coincidentally, the World Video Game Hall of Fame. He can be reached at nstatt@protocol.com.