News
Digital India, January 2, 2026 – 9:00 am (ET)
https://www.youtube.com/watch?app=desktop&v=qOnwFuiJKnY
As India steps into 2026, cybersecurity becomes everyone’s responsibility. In episode 37 of Digital India – Ask Our Experts, the Director General of CERT-In answers key citizen questions, busts common cyber myths, and shares simple, actionable cybersecurity resolutions for individuals, families, and businesses. Watch to stay safe, smart, and secure in the digital world.
IIT Delhi's Entrepreneurship Development Cell and the cybersecurity think tank CyberPeace announced the launch of the E-Raksha Hackathon 2026 on Tuesday. This national-level event will focus on cybersecurity, defence AI, and digital safety.
The 36-hour hackathon is scheduled to take place from January 16 to 18 at the Indian Institute of Technology (IIT) Delhi. It will be part of BECon’26, the institute’s annual entrepreneurship conclave, and will serve as a pre-summit event for the India AI Impact Summit 2026.
The hackathon aims to bring together student innovators from across the country to develop deployable solutions for emerging digital threats. Participants will work in areas such as AI and machine learning, threat detection, blockchain, and secure software development. The problem statements will focus on using agentic AI to secure home Internet of Things (IoT) devices and detect deepfakes.
“We are proud to team up with CyberPeace to develop practical, scalable solutions that address defence-related problems and enhance national security,” said Lakshmi Narayan Ramasubramanian, head of the Entrepreneurship Development Cell at IIT Delhi.
Enable AI. Reduce cybercrime. Unleash abundance.
Perhaps the biggest near-term AI opportunity is reducing cybercrime costs. With serious attacks unfolding almost daily, the economic weight of digital insecurity has grown out of control. Per the European Commission, global cybercrime costs in 2020 were estimated at 5.5 trillion euros (around $6.43 trillion). Since then, costs have only spiraled. For 2025, Cybersecurity Ventures estimates annual costs will hit $10 trillion, a showstopping 9 percent of global GDP. As Bloomberg notes, global cybercrime would now rank as the world's third-largest economy. This is truly an unrivaled crisis.
Thankfully, it is also an unrivaled opportunity. Given the problem's sheer scale, any technology, process, or policy that shaves off even a sliver of these cyber costs could add percentage points to global growth. Reduce cyber threats, and abundance will follow.
The immense potential of software translation is far from the only near-term AI opportunity. Studies have already shown that AI can automate vulnerability detection; that is, AI can discover serious security issues without human involvement. Soon, software could be proactively secured before it even ships. Likewise, advances in AI task completion suggest that software patching could soon be automated: within a few years, fixes could be generated and shipped moments after vulnerabilities are discovered. Beyond that lie countless other possibilities in advanced cyber intelligence, threat detection, real-time response, and more.
Marcus on AI – December 20, 2025
2025 turned out pretty much as I anticipated. What comes next?
AGI didn’t materialize (contra predictions from Elon Musk and others); GPT-5 was underwhelming and didn’t solve hallucinations. LLMs still aren’t reliable, and the economics look dubious: few AI companies aside from Nvidia are making a profit, and nobody has much of a technical moat. OpenAI has lost a lot of its lead. Many would agree we have reached a point of diminishing returns for scaling, and faith in scaling as a route to AGI has dissipated. Neurosymbolic AI (a hybrid of neural networks and classical approaches) is starting to rise. No system solved more than 4 of the Marcus-Brundage tasks (and perhaps none solved any). Despite all the hype, agents didn’t turn out to be reliable. Overall, by my count, sixteen of my seventeen “high confidence” predictions about 2025 proved to be correct.
Here are seven predictions for 2026; the first is a holdover from last year that will no longer surprise many people.
- We won’t get to AGI in 2026 (or 2027). At this point I doubt many people would publicly disagree, but just a few months ago the world was rather different. It is astonishing how much the vibe has shifted, especially with people like Sutskever and Sutton coming out with their own concerns.
- Humanoid domestic robots like Optimus and Figure will be all demo and very little product. Reviews by Joanna Stern and Marques Brownlee of one early prototype were damning; there will be tons of lab demos, but getting these robots to work in people’s homes will be very, very hard, as Rodney Brooks has said many times.
- No country will take a decisive lead in the GenAI “race”.
- Work on new approaches such as world models and neurosymbolic will escalate.
- 2025 will be known as the year of the peak bubble, and also the moment at which Wall Street began to lose confidence in generative AI. Valuations may go up before they fall, but the Oracle craze early in September and what has happened since will in hindsight be seen as the beginning of the end.
- Backlash to generative AI and radical deregulation will escalate. In the midterms, AI will be an election issue for the first time. Trump may eventually distance himself from AI because of this backlash.
And lastly, the seventh: a metaprediction, which is a prediction about predictions. I don’t expect my predictions to be as on target this year as last, for a happy reason: across the field, the intellectual situation has gone from one that was stagnant (all LLMs all the time) and unrealistic (“AGI is nigh”) to one that is more fluid, more realistic, and more open-minded. If anything would lead to genuine progress, it would be that.
