HBM And Emerging Memory Technologies For AI

Introduction to AI and Mobile Networks

During a congressional hearing of the House of Representatives’ Energy & Commerce Committee’s Subcommittee on Communications and Technology, Ronnie Vasishta, Senior VP of Telecom at Nvidia, said that mobile networks will be called upon to support a new kind of traffic: AI traffic. This includes the delivery of AI services to the edge and inferencing at the edge. Such growth in AI data could reverse the general trend toward slower traffic growth on mobile networks.

The Rise of AI Traffic

Many AI-enabled applications will require mobile connectivity, including autonomous vehicles, smart glasses, generative AI services, and more. Vasishta said that the transmission of this massive increase in data needs to be resilient, fit for purpose, and secure. Supporting this AI-generated data will require large amounts of memory, particularly very high-bandwidth memory such as HBM, resulting in strong demand for memory that supports AI applications.

Micron’s HBM4 Memory

Micron announced that it is now shipping HBM4 memory to key customers for early qualification efforts. Micron’s HBM4 provides up to 2.0 TB/s of bandwidth and 24 GB of capacity per 12-high die stack. The company says its HBM4 is built on its 1-beta DRAM node, uses advanced through-silicon via technology, and includes a highly capable built-in self-test.
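
As a rough cross-check (not a Micron-published derivation), the short Python sketch below backs out the per-pin data rate implied by those figures. It assumes the 2,048-bit per-stack interface defined for HBM4; treat the interface width and the derived rate as illustrative assumptions.

```python
# Rough cross-check of Micron's quoted HBM4 figures.
# Assumption: a 2,048-bit-per-stack interface (per the JEDEC HBM4 spec);
# the per-pin rate below is derived, not a Micron-published number.

stack_bandwidth_bytes = 2.0e12   # 2.0 TB/s per stack (Micron's figure)
interface_width_bits = 2048      # assumed HBM4 interface width

per_pin_gbps = stack_bandwidth_bytes * 8 / interface_width_bits / 1e9
print(f"Implied per-pin data rate: {per_pin_gbps:.1f} Gb/s")  # ~7.8 Gb/s

# Aggregate bandwidth if a GPU package carries several stacks:
for stacks in (4, 6, 8):
    total_tb_s = stacks * stack_bandwidth_bytes / 1e12
    print(f"{stacks} stacks -> {total_tb_s:.0f} TB/s")
```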

HBM Memory and AI Applications

HBM memory, consisting of stacks of DRAM die with massively parallel interconnects that provide high bandwidth, is combined with GPUs such as those from Nvidia. Placing this memory close to the processor enables training and inference of various AI models. The current generation of GPUs uses HBM3E memory. At the March 2025 GTC in San Jose, Jensen Huang said that Micron HBM memory was being used in some of Nvidia’s GPU platforms.

HBM Memory Manufacturers

The manufacturers of HBM are SK hynix, Samsung, and Micron, with SK hynix and Samsung providing the majority of supply and Micron coming in third. SK hynix was the first to announce HBM memory in 2013, and it was adopted as an industry standard by JEDEC that same year. Samsung followed in 2016, and in 2020 Micron said that it would create its own HBM memory. All of these companies expect to be shipping HBM4 in volume sometime in 2026.

Emerging Memory Technologies

Numem, a company involved in magnetic random access memory (MRAM) applications, recently discussed how traditional memories used in AI applications, such as DRAM and SRAM, have limitations in power, bandwidth, and storage density. The company said that processing performance has skyrocketed by 60,000X over the past 20 years while DRAM bandwidth has improved only 100X, creating a “memory wall.”
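
The arithmetic behind that “memory wall” is straightforward; the sketch below uses only the two growth figures Numem cites to show the widening gap and the equivalent annual growth rates.

```python
# The "memory wall" in numbers, using only the two figures Numem cites:
# compute grew 60,000X over 20 years while DRAM bandwidth grew 100X.

compute_growth = 60_000
bandwidth_growth = 100
years = 20

gap = compute_growth / bandwidth_growth
print(f"Relative gap: {gap:.0f}X")  # 600X fewer bytes/s per operation

# Equivalent compound annual growth rates:
print(f"Compute CAGR:   {compute_growth ** (1 / years) - 1:.1%}")    # ~73%
print(f"Bandwidth CAGR: {bandwidth_growth ** (1 / years) - 1:.1%}")  # ~26%
```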

AI Memory Engine

The company says that its AI Memory Engine is a highly configurable memory subsystem IP that enables significant improvements in power efficiency, performance, intelligence, and endurance, not only for Numem’s MRAM-based architecture but also for third-party MRAM, RRAM, PCRAM, and flash memory.

Future of Memory Technologies

Numem said that it has developed next-generation MRAM supporting die densities up to 1 GB that can deliver SRAM-class performance with up to 2.5X higher memory density in embedded applications and 100X lower standby power consumption. The company says its solutions are foundry-ready and production-capable today.

Projections for Emerging Memories

In their Deep Look at New Memories report, Coughlin Associates and Objective Analysis predict that AI and other memory-intensive applications will increase production of emerging memories and drive down their costs. Embedded devices that use AI inference, such as smart watches and hearing aids, are already adopting MRAM, RRAM, and other emerging memory technologies.

Conclusion

AI will generate increased demand for memory to support training and inference, and it will also increase the demand for data over mobile networks. This will drive demand not only for HBM but also for new and emerging memory technologies.

FAQs

Q: What is AI traffic?
A: AI traffic refers to the delivery of AI services to the edge, or inferencing at the edge, over mobile networks.
Q: What is HBM memory?
A: HBM (High-Bandwidth Memory) is a type of memory that provides high bandwidth and is used in applications such as AI and machine learning.
Q: Who are the manufacturers of HBM memory?
A: The principal manufacturers of HBM are SK hynix, Samsung, and Micron.
Q: What are emerging memory technologies?
A: Emerging memory technologies include MRAM, RRAM, and PCRAM, which offer improvements in power efficiency, performance, and storage density compared to traditional memories.
Q: What is the projected market size for emerging memories?
A: The projected market size for emerging memories is $100B, with NOR and SRAM expected to be replaced by new memories within the next decade.

The Future Of Healthcare Is Collaborative—And AI Is The Catalyst

Introduction to AI in Indian Healthcare

A quiet revolution is underway in the heart of a radiology lab at Apollo Hospitals in Chennai, India. Artificial intelligence is scanning high-resolution images, flagging anomalies, reducing the time for diagnosis, and improving accuracy. But what makes this advancement so powerful isn’t just the algorithm behind it. It’s the collaboration between a hospital, a tech company, and a university that makes AI innovation sustainable, scalable, and relevant to India’s complex healthcare landscape.

Across India, a new model of digital health transformation is emerging, one where partnerships are as crucial as platforms. For a country grappling with massive disparities in healthcare access and delivery, this shift couldn’t be more timely. These are the observations and conclusions from my peer, Dr. Priyanka Shrivastava, who is a Professor of Marketing & Analytics at Hult International Business School and an Executive Fellow at The Digital Economist.

The Double Burden of AI Innovation and Inequity

India’s healthcare system faces deep challenges: a rapidly growing population, stark urban-rural divides, a chronic shortage of medical professionals, and overstretched public infrastructure. While the proliferation of health-tech startups has brought promise, much of the innovation remains confined to urban pockets or pilot projects.

AI detects disease, streamlines diagnosis, and personalizes treatment. Tools like AI-powered nutrition coaches (HealthifyMe’s Ria) and automated diagnostic assistants (such as those used by Aindra or Columbia Asia Hospital) are transforming the delivery of healthcare.

Yet, these tools often encounter barriers due to a lack of interoperability, fragmented data systems, regulatory uncertainty, and resistance from overworked staff who fear that AI might be more of a disruption than an aid.

Technology alone cannot fix healthcare. But technology plus collaboration just might.

Why Collaboration Is the Real Innovation

In a recent study, Dr. Shrivastava and her colleagues surveyed 300 healthcare professionals across 50 institutions and held in-depth interviews with doctors, technologists, and policymakers. The results were striking: institutions with strong cross-sector collaborations consistently showed higher and more sustained AI adoption.

Three core insights emerged:

1. Shared Resources Bridge Structural Gaps

Urban hospitals often have access to advanced technology and data, whereas rural clinics often lack even basic diagnostic capabilities. But when these entities partner via telemedicine links, shared platforms, or co-funding arrangements, AI can extend its reach. For example, Apollo’s AI systems, when linked with satellite clinics, enable faster referrals and better triage in underserved regions.

2. Knowledge Exchange Builds Trust

Resistance to AI isn’t irrational—it often stems from a lack of understanding. The study found that joint workshops, where doctors and engineers co-learned and co-created, built buy-in from healthcare workers. When staff are trained with the tools and understand how they were developed, they are far more likely to embrace them.

3. Collaborative Culture Drives Continuity

AI isn’t plug-and-play. It requires regular updates, feedback loops, and cultural alignment. Institutions that formalized collaboration through MOUs, shared R&D labs, or co-published studies were more likely to sustain AI programs over the long term.

Case Study: Apollo Hospitals’ Triple-Helix Success With AI and Collaboration

Apollo’s AI-driven radiology initiative in Chennai is a textbook example. Faced with long diagnosis times and overburdened radiologists, the hospital sought a solution. Instead of simply buying an off-the-shelf AI tool, Apollo co-developed one with a university, which provided the algorithm expertise, and a startup, which delivered the technical infrastructure.

Doctors and developers worked side by side. The result? Diagnosis time dropped by 30%, and accuracy improved by 15%. Radiologists weren’t replaced—they were enhanced, with AI acting as a second pair of eyes. Continuous training and feedback ensured the system evolved with practice.

This wasn’t a one-off deployment. It was an ecosystem. And that made all the difference.

Policy in Action: eSanjeevani and the Public Sector Push

While Apollo represents a private success, the public sector isn’t far behind. India’s eSanjeevani platform, which added AI-supported teleconsultation features during the pandemic, saw a 40% increase in rural usage. This shows that with the right support and scale, AI can democratize access to care.

The National Digital Health Mission is another promising initiative. If executed well—with strong data privacy frameworks and open APIs—it can offer a common layer for innovation. Startups can plug into public records; government hospitals can access AI-enabled diagnostics; researchers can draw insights from anonymized data.

But for this to happen, policymakers must prioritize collaboration frameworks just as much as digital infrastructure.

What Policymakers and Leaders Must Do

As India enters a defining decade for health innovation, here are four actionable takeaways from the research:

1. Create Incentives for Public-Private Partnerships

Tax breaks, innovation grants, and pilot funding for joint ventures in AI health can catalyze adoption. Startups gain credibility and scale; public hospitals get access to frontier tech.

2. Invest in Capacity Building

Set up AI literacy programs for frontline health workers. Encourage interdisciplinary training so doctors, nurses, and tech teams speak a common language.

3. Standardize Data Sharing Protocols

A national framework on health data interoperability is overdue. Without this, AI solutions cannot scale beyond one institution. Build trust through consent-driven, encrypted data-sharing norms.

4. Measure What Matters

Mandate impact audits for all health AI deployments—measuring not just tech efficiency, but patient outcomes, staff satisfaction, and system-level equity.

The Bigger Picture: AI as an Asset for Collaboration

The most inspiring part of this story? AI in Indian healthcare isn’t being driven solely by top-down mandates or Silicon Valley imports. It’s being shaped organically by Indian doctors, engineers, policy thinkers, and entrepreneurs who are joining forces.

This pluralistic model, with many voices but one mission, could well become a template for emerging economies around the world. In a landscape where access to a doctor can mean the difference between life and death, AI’s potential is undeniable. But its success will depend on something far more human: our ability to collaborate. The most transformative technology for healthcare is not an algorithm. It is the alignment of purpose, people, vision, and AI through collaboration.

Conclusion

The integration of AI in Indian healthcare is not just about technology; it’s about collaboration and partnership. By understanding the importance of shared resources, knowledge exchange, and collaborative culture, India can successfully implement AI in its healthcare system, leading to better patient outcomes and more efficient healthcare services. Policymakers and leaders have a crucial role to play in creating an environment that fosters collaboration and supports the development of AI in healthcare.

FAQs

  • Q: What are the main challenges faced by India’s healthcare system?
    A: India’s healthcare system faces challenges such as a rapidly growing population, urban-rural divides, a shortage of medical professionals, and overstretched public infrastructure.
  • Q: How can AI improve healthcare in India?
    A: AI can detect disease, streamline diagnosis, and personalize treatment, thereby improving patient outcomes and healthcare efficiency.
  • Q: What is the importance of collaboration in AI adoption in healthcare?
    A: Collaboration between hospitals, tech companies, and universities is crucial for sustainable, scalable, and relevant AI innovation in healthcare.
  • Q: What are the key takeaways for policymakers and leaders to collaborate with AI in healthcare?
    A: Policymakers and leaders must create incentives for public-private partnerships, invest in capacity building, standardize data sharing protocols, and measure what matters in terms of patient outcomes and system-level equity.
The AI Revolution Is Coming

Introduction to the AI Revolution

The AI revolution is coming, and it’s going to change everything. From the way we work to the way we live, artificial intelligence is set to have a profound impact on our daily lives. But are we prepared for the changes that are coming? The answer, unfortunately, is no. Most of us are not prepared for the AI revolution, and it’s going to take some getting used to.

What is the AI Revolution?

The AI revolution refers to the rapid development and deployment of artificial intelligence technologies across various industries and aspects of life. This includes machine learning, natural language processing, computer vision, and other forms of AI that are being used to automate tasks, make decisions, and interact with humans.

Key Technologies Driving the AI Revolution

Several key technologies are driving the AI revolution, including:

  • Machine learning: This is a type of AI that allows systems to learn from data and improve their performance over time (see the sketch after this list).
  • Natural language processing: This is a type of AI that allows systems to understand and generate human language.
  • Computer vision: This is a type of AI that allows systems to interpret and understand visual data from images and videos.
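
As a concrete illustration of the fit-then-predict pattern referenced above, here is a minimal sketch using scikit-learn’s LogisticRegression; the toy dataset and model choice are assumptions for illustration, not a prescription.

```python
# Minimal sketch of "learning from data": fit a model, then predict.
# The toy dataset (hours studied -> passed exam) is invented for illustration.
from sklearn.linear_model import LogisticRegression

X = [[1.0], [2.0], [3.0], [4.0], [5.0], [6.0]]  # hours studied
y = [0, 0, 0, 1, 1, 1]                          # 0 = failed, 1 = passed

model = LogisticRegression()
model.fit(X, y)                       # the system "learns" from the data

print(model.predict([[2.5], [4.5]]))  # e.g. [0 1]
print(model.predict_proba([[3.5]]))   # class probabilities near the boundary
```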

Impact of the AI Revolution

The AI revolution is going to have a significant impact on our lives, and it’s not all positive. While AI has the potential to bring about many benefits, such as increased efficiency and productivity, it also poses significant risks, such as job displacement and bias.

Positive Impacts of the AI Revolution

Some of the positive impacts of the AI revolution include:

  • Increased efficiency: AI can automate many tasks, freeing up humans to focus on more creative and high-value work.
  • Improved decision-making: AI can analyze large amounts of data and provide insights that humans may miss.
  • Enhanced customer experience: AI can be used to personalize customer interactions and provide 24/7 support.

Negative Impacts of the AI Revolution

Some of the negative impacts of the AI revolution include:

  • Job displacement: AI has the potential to automate many jobs, leaving millions of people without work.
  • Bias: AI systems can perpetuate existing biases and discriminate against certain groups of people.
  • Loss of privacy: AI can be used to collect and analyze large amounts of personal data, threatening our privacy and security.

Preparing for the AI Revolution

So, how can we prepare for the AI revolution? The answer is not simple, but there are several steps we can take to get ready.

  • Educate yourself: Learn about AI and its applications, and stay up to date with the latest developments.
  • Develop new skills: As AI automates many tasks, it’s essential to develop new skills that are complementary to AI.
  • Support AI research and development: Encourage and support research and development in AI, and advocate for responsible AI development.

Strategies for Businesses

Businesses can also take several steps to prepare for the AI revolution, including:

  • Investing in AI research and development: Businesses can invest in AI research and development to stay ahead of the competition.
  • Upskilling and reskilling: Businesses can provide training and development programs to help their employees develop new skills.
  • Implementing AI responsibly: Businesses can implement AI in a responsible and transparent way, ensuring that it is fair, secure, and respectful of human rights.

Conclusion

The AI revolution is coming, and it’s going to change everything. While there are many benefits to AI, there are also significant risks. To prepare for the AI revolution, we need to educate ourselves, develop new skills, and support responsible AI research and development. By working together, we can ensure that the AI revolution benefits everyone, and that we are all prepared for the changes that are coming.

FAQs

Q: What is the AI revolution?
A: The AI revolution refers to the rapid development and deployment of artificial intelligence technologies across various industries and aspects of life.
Q: What are the key technologies driving the AI revolution?
A: The key technologies driving the AI revolution include machine learning, natural language processing, and computer vision.
Q: What are the positive impacts of the AI revolution?
A: The positive impacts of the AI revolution include increased efficiency, improved decision-making, and enhanced customer experience.
Q: What are the negative impacts of the AI revolution?
A: The negative impacts of the AI revolution include job displacement, bias, and loss of privacy.
Q: How can we prepare for the AI revolution?
A: We can prepare for the AI revolution by educating ourselves, developing new skills, and supporting responsible AI research and development.

Quantum Computing Threatens Bitcoin

Introduction to the Quantum Threat

Bitcoin and other cryptocurrencies are now embedded in the global financial system. Countries are creating strategic reserves, and institutional investors, from hedge funds to pension schemes, are allocating capital to digital assets.

Many individuals, businesses, and even governments are exposed to price fluctuations in this notoriously volatile market. But could it all collapse overnight if quantum computing renders the technology behind cryptocurrencies obsolete, potentially causing trillions of dollars in value to vanish?

That’s the risk some experts associate with quantum computing. These futuristic machines harness the strange properties of quantum mechanics to perform specific types of calculations exponentially faster than even the most powerful supercomputers. Given enough power, quantum computers could one day break the cryptographic foundations of blockchain systems like Bitcoin.

The Threat of Quantum Computing

At the start of 2024, an estimated 500 million people globally held Bitcoin or other cryptocurrencies, a 34% increase from the year before. The majority of holders reside in Asia and North America. In many cases, these assets represent a substantial portion of personal wealth or national reserves.

If a technological advance were to render these assets insecure, the consequences could be severe.

Cryptocurrencies function by ensuring that only authorized parties can modify the blockchain ledger. In Bitcoin’s case, this means that only someone with the correct private key can spend a given amount of Bitcoin.

Bitcoin currently uses cryptographic schemes such as the Elliptic Curve Digital Signature Algorithm (ECDSA) and Schnorr signatures to verify ownership and authorize transactions. These systems rely on the difficulty of deriving a private key from a public key, a task that is computationally infeasible for classical computers.

This infeasibility is what makes “brute-force” attacks (trying every possible key) impractical. Classical computers must test each possibility one by one, which could take millions of years.
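
A back-of-envelope calculation shows the scale involved. The sketch below assumes Bitcoin’s 256-bit ECDSA keys offer roughly 128 bits of effective security, and the guess rate of one trillion keys per second is an invented figure for illustration.

```python
# Why classical brute force is hopeless, in rough numbers.
# Assumptions: ~128-bit effective security for 256-bit ECDSA keys, and an
# (invented, generous) classical guess rate of 1e12 keys per second.

candidates = 2 ** 128          # ~3.4e38 possible keys to try
guesses_per_second = 1e12      # assumed guess rate

seconds = candidates / guesses_per_second
years = seconds / (60 * 60 * 24 * 365)
print(f"~{years:.1e} years")   # ~1.1e19 years, dwarfing "millions of years"
```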

Quantum computers, however, operate on different principles. Thanks to phenomena like superposition and entanglement, they can perform many calculations in parallel. In 1994, mathematician Peter Shor developed a quantum algorithm capable of factoring large numbers exponentially faster than classical methods. This algorithm, if run on a sufficiently powerful quantum computer, could undermine encryption systems like ECDSA.

Understanding Quantum Computers

The core difference lies in how quantum and classical computers handle data. Classical computers process data as binary digits (bits), either 0s or 1s. Quantum computers use qubits, which can exist in multiple states simultaneously.
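
A short classical simulation makes that scale difference concrete; the qubit count below is an arbitrary illustration, and note that this is a classical model of a quantum state, which is exactly why such simulation becomes intractable as qubits are added.

```python
# n classical bits hold one of 2**n values at a time; n qubits are
# described by 2**n complex amplitudes at once. Simulating that state
# classically needs memory exponential in n, illustrated below.
import numpy as np

n = 20  # arbitrary illustrative qubit count
state = np.full(2 ** n, 1 / np.sqrt(2 ** n), dtype=complex)  # uniform superposition

print(f"{n} qubits -> {2 ** n:,} amplitudes")               # 1,048,576
print(f"State vector memory: {state.nbytes / 1e6:.0f} MB")  # ~17 MB
# At n = 50, the same vector would need roughly 18 petabytes.
```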

As of 2024, the most advanced quantum computers can process around 1,000 qubits, but estimates suggest that breaking Bitcoin’s ECDSA encryption would require a machine with 10 million to 300 million fault-tolerant qubits, a goal that remains years or even decades away.

Nonetheless, technology often advances unpredictably, especially now that AI tools are accelerating research and development across fields, including quantum computing.

Counter-Measures and Preparations

This is why work on quantum-safe (or post-quantum) cryptography is already well underway. The U.S. National Institute of Standards and Technology (NIST) is leading efforts to standardize cryptographic algorithms that are secure against quantum attacks, not just to protect cryptocurrencies but to safeguard the entire digital ecosystem, from banking systems to classified government data.

Once quantum-safe standards are finalized, Bitcoin and other blockchains could adapt accordingly. Bitcoin’s open-source software is managed by a global community of developers with clear governance protocols for implementing updates. In other words, Bitcoin is not static; it can evolve to meet new threats.

The Future of Bitcoin and Quantum Computing

Could quantum computing kill Bitcoin? In theory, yes: if Bitcoin failed to adapt and quantum computers suddenly became powerful enough to break its encryption, its value would plummet.

But this scenario assumes crypto stands still while quantum computing advances, which is highly unlikely. The cryptographic community is already preparing, and the financial incentives to preserve the integrity of Bitcoin are enormous.

Moreover, if quantum computers become capable of breaking current encryption methods, the consequences would extend far beyond Bitcoin. Secure communications, financial transactions, digital identities, and national security all depend on encryption. In such a world, the collapse of Bitcoin would be just one of many crises.

The quantum threat is real, but so is the work being done to prevent it.

So, if you’re among the millions with a bit of Bitcoin tucked away in the hope it will one day make you rich, well, I can’t guarantee that will happen. But I don’t think you need to worry that quantum computing is going to make it worthless any time soon.

Conclusion

In conclusion, while the threat of quantum computing to Bitcoin and other cryptocurrencies is real, it is not imminent. The development of quantum computers capable of breaking current encryption methods is still in its early stages, and the cryptographic community is already working on counter-measures. Bitcoin and other blockchains have the potential to adapt and evolve to meet new threats, ensuring their continued security and integrity.

Frequently Asked Questions

Q: Can quantum computers break Bitcoin’s encryption?

A: Theoretically, yes, but it would require a quantum computer with a large number of fault-tolerant qubits, which is still years or decades away.

Q: What is being done to prevent quantum computers from breaking Bitcoin’s encryption?

A: The cryptographic community is working on developing quantum-safe (or post-quantum) cryptography, and the U.S. National Institute of Standards and Technology (NIST) is leading efforts to standardize cryptographic algorithms that are secure against quantum attacks.

Q: Will quantum computing kill Bitcoin?

A: It’s unlikely, as Bitcoin and other blockchains have the potential to adapt and evolve to meet new threats, and the financial incentives to preserve their integrity are enormous.
