
What Companies Miss About Customer Lifetime Value


The Flaw in Customer Lifetime Value

For managers and marketers alike, the power to calculate what customers might be worth is alluring. That’s what makes customer lifetime value (CLV) so popular in so many industries. CLV brings both quantitative rigor and long-term perspective to customer acquisition and relationships.
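It helps to see what a textbook CLV calculation actually contains. A common simplified form is the discounted sum of the margin a customer is expected to contribute each year, weighted by the probability that they are still a customer. The short Python sketch below implements that formula; the dollar figures, retention rate, and discount rate are illustrative assumptions, not data from any real company.

```python
def simple_clv(annual_margin: float, retention_rate: float,
               discount_rate: float, years: int = 10) -> float:
    """Discounted sum of expected yearly margin, where retention_rate ** year
    is the probability the customer is still around in a given year."""
    return sum(
        annual_margin * (retention_rate ** year) / ((1 + discount_rate) ** year)
        for year in range(years)
    )

# Illustrative inputs only: $500 yearly margin, 80% retention, 10% discount rate.
print(f"CLV ~ ${simple_clv(500, 0.80, 0.10):,.0f}")
```

Notice what the formula contains: purchases, retention, and a discount rate, and nothing else. Referrals, product ideas, and other ways a customer's contribution can grow simply have no term in it.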

But despite its name, this popular metric is inherently flawed: it doesn’t account for how customers can become more valuable to you over time, in particular how your own innovations can increase customers’ capabilities and, with them, the value those customers create for you.

Rethinking Customer Value

A simple exercise can help your team rethink how customer value should be measured and see customers more as “value-creating partners” than as “value-extraction targets.” Ask your team to complete a sentence that begins “our customers become much more valuable when…”, then push past immediate responses like “when they buy products” to answers like “when they give us good ideas” or “when they introduce us to new customers.”

A New Perspective

This exercise can help your team move away from a narrow focus on short-term transactions and towards a more strategic understanding of the long-term relationships you build with your customers. By recognizing that customers become more valuable over time, you can begin to design initiatives that nurture those relationships and create more value for both your business and your customers.

Conclusion

While customer lifetime value may have its limitations, it can still be a powerful tool for businesses looking to make data-driven decisions about their customer relationships. By taking a step back and rethinking how we measure customer value, we can start to see our customers not just as a source of revenue, but as valued partners in our business.

Frequently Asked Questions
Q: What is customer lifetime value?

A: Customer lifetime value, or CLV, is a metric that calculates the total value of a customer to a business over their lifetime.

Q: Why is customer lifetime value flawed?

A: CLV is flawed because it doesn’t account for how customers become more valuable over time through innovations that increase their capabilities.

Q: How can I rethink how I measure customer value?

A: Try asking your team to complete the sentence “our customers become much more valuable when…” and explore responses that go beyond immediate transactions.


Quantum Computing Threatens Bitcoin


Introduction to the Quantum Threat

Bitcoin and other cryptocurrencies are now embedded in the global financial system. Countries are creating strategic reserves, and institutional investors, from hedge funds to pension schemes, are allocating capital to digital assets.

Many individuals, businesses, and even governments are exposed to price fluctuations in this notoriously volatile market. But could it all collapse overnight if quantum computing renders the technology behind cryptocurrencies obsolete, potentially causing trillions of dollars in value to vanish?

That’s the risk some experts associate with quantum computing. These futuristic machines harness the strange properties of quantum mechanics to perform specific types of calculations exponentially faster than even the most powerful supercomputers. Given enough power, quantum computers could one day break the cryptographic foundations of blockchain systems like Bitcoin.

The Threat of Quantum Computing

At the start of 2024, an estimated 500 million people globally held Bitcoin or other cryptocurrencies, a 34% increase from the year before. The majority of holders reside in Asia and North America. In many cases, these assets represent a substantial portion of personal wealth or national reserves.

If a technological advance were to render these assets insecure, the consequences could be severe.

Cryptocurrencies function by ensuring that only authorized parties can modify the blockchain ledger. In Bitcoin’s case, this means that only someone with the correct private key can spend a given amount of Bitcoin.

Bitcoin currently uses cryptographic schemes such as the Elliptic Curve Digital Signature Algorithm (ECDSA) and Schnorr signatures to verify ownership and authorize transactions. These systems rely on the difficulty of deriving a private key from a public key, a task that is computationally infeasible for classical computers.

This infeasibility is what makes “brute-force” attacks, trying every possible key, impractical. Classical computers must test each possibility one by one, which could take millions of years.
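To get a feel for why brute force is hopeless, the hedged sketch below solves a toy discrete-logarithm problem by exhaustive search (a simplified stand-in for the elliptic-curve problem that actually protects Bitcoin keys) and then extrapolates the same approach to ECDSA’s roughly 128-bit security level. The toy parameters and the guess rate used for the estimate are assumed figures for illustration only.

```python
# Toy discrete log: given g, p and public = g**secret mod p, recover secret.
# A stand-in for the elliptic-curve discrete-log problem behind Bitcoin keys;
# the point is how the search space explodes with key size.
def brute_force_dlog(g: int, public: int, p: int) -> int:
    candidate = 1
    for secret in range(1, p):
        candidate = (candidate * g) % p
        if candidate == public:
            return secret
    raise ValueError("no solution found")

p, g, secret = 65537, 3, 54321        # tiny toy parameters; 3 generates the group mod 65537
public = pow(g, secret, p)
print(brute_force_dlog(g, public, p))  # prints 54321 almost instantly

# secp256k1 offers roughly 128 bits of security against the best classical attacks.
# Assume, very generously, 10**12 guesses per second:
guesses, rate = 2 ** 128, 10 ** 12
print(f"~{guesses / rate / (3600 * 24 * 365):.1e} years")  # on the order of 10**19 years
```

Even absurdly optimistic hardware assumptions leave exhaustive search hopeless, which is why the real threat comes from a different kind of algorithm entirely.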

Quantum computers, however, operate on different principles. Thanks to phenomena like superposition and entanglement, they can perform many calculations in parallel. In 1994, mathematician Peter Shor developed a quantum algorithm capable of factoring large numbers exponentially faster than classical methods. This algorithm, if run on a sufficiently powerful quantum computer, could undermine encryption systems like ECDSA.
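The speedup comes from period finding: Shor’s algorithm finds the period of the function a^x mod N efficiently on a quantum computer, and once the period is known, the factors follow from ordinary classical arithmetic. The sketch below finds the period by brute force, the very step a quantum computer would do exponentially faster, just to show the classical post-processing; N = 15 is the usual textbook toy example.

```python
from math import gcd

def find_period(a: int, n: int) -> int:
    """Smallest r > 0 with a**r = 1 (mod n). Classically this is the slow step;
    Shor's algorithm performs it efficiently on a quantum computer."""
    value, r = a % n, 1
    while value != 1:
        value = (value * a) % n
        r += 1
    return r

def factor_from_period(n: int, a: int) -> tuple:
    """Classical post-processing: derive factors of n from the period of a mod n."""
    r = find_period(a, n)
    if r % 2 == 1:
        raise ValueError("odd period; retry with a different a")
    x = pow(a, r // 2, n)
    return gcd(x - 1, n), gcd(x + 1, n)

print(factor_from_period(15, 7))  # (3, 5), the familiar textbook example
```

A variant of the same period-finding machinery solves the elliptic-curve discrete-log problem, which is why a sufficiently large quantum computer would threaten ECDSA signatures specifically, not just RSA-style factoring.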

Understanding Quantum Computers

The core difference lies in how quantum and classical computers handle data. Classical computers process data as binary digits (bits), either 0s or 1s. Quantum computers use qubits, which can exist in multiple states simultaneously.
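One way to appreciate the difference: describing the state of n qubits classically requires a vector of 2^n complex amplitudes, which is why simulating even modestly sized quantum machines overwhelms classical hardware. The numpy sketch below builds that state vector for a handful of qubits and places each one in an equal superposition; it is a toy illustration of the bookkeeping, not a model of any real quantum processor.

```python
import numpy as np

def uniform_superposition(num_qubits: int) -> np.ndarray:
    """State vector after putting every qubit into equal superposition:
    2**num_qubits amplitudes, all equal, so every bit string is equally likely."""
    dim = 2 ** num_qubits
    return np.full(dim, 1 / np.sqrt(dim))

state = uniform_superposition(3)
print(state)                  # 8 amplitudes, each 1/sqrt(8)
print(np.sum(state ** 2))     # measurement probabilities sum to 1.0

# The classical description doubles with every qubit added:
for n in (10, 30, 50):
    print(f"{n} qubits -> {2 ** n:,} amplitudes to track")
```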

As of 2024, the most advanced quantum computers can process around 1,000 qubits, but estimates suggest that breaking Bitcoin’s ECDSA encryption would require a machine with 10 million to 300 million fault-tolerant qubits, a goal that remains years or even decades away.
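A quick back-of-the-envelope calculation puts that gap in perspective: going from roughly a thousand qubits to the tens or hundreds of millions of fault-tolerant qubits in those estimates means the usable machine size has to double well over a dozen times, and each fault-tolerant qubit is itself built from many physical qubits. The short sketch below simply restates the numbers already quoted.

```python
from math import ceil, log2

current_qubits = 1_000
for target in (10_000_000, 300_000_000):
    doublings = ceil(log2(target / current_qubits))
    print(f"{current_qubits:,} -> {target:,} qubits: about {doublings} doublings in machine size")
```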

Nonetheless, technology often advances unpredictably, especially now that AI tools are accelerating research and development across fields, including quantum computing.

Counter-Measures and Preparations

This is why work on quantum-safe (or post-quantum) cryptography is already well underway. The U.S. National Institute of Standards and Technology (NIST) is leading efforts to standardize cryptographic algorithms that are secure against quantum attacks, not just to protect cryptocurrencies but to safeguard the entire digital ecosystem, from banking systems to classified government data.

Once quantum-safe standards are finalized, Bitcoin and other blockchains could adapt accordingly. Bitcoin’s open-source software is managed by a global community of developers with clear governance protocols for implementing updates. In other words, Bitcoin is not static; it can evolve to meet new threats.

The Future of Bitcoin and Quantum Computing

Could quantum computing kill Bitcoin? In theory, yes: if Bitcoin failed to adapt and quantum computers suddenly became powerful enough to break its encryption, its value would plummet.

But this scenario assumes crypto stands still while quantum computing advances, which is highly unlikely. The cryptographic community is already preparing, and the financial incentives to preserve the integrity of Bitcoin are enormous.

Moreover, if quantum computers become capable of breaking current encryption methods, the consequences would extend far beyond Bitcoin. Secure communications, financial transactions, digital identities, and national security all depend on encryption. In such a world, the collapse of Bitcoin would be just one of many crises.

The quantum threat is real, but so is the work being done to prevent it.

So, if you’re among the millions with a bit of Bitcoin tucked away in the hope it will one day make you rich, well, I can’t guarantee that will happen. But I don’t think you need to worry that quantum computing is going to make it worthless any time soon.

Conclusion

In conclusion, while the threat of quantum computing to Bitcoin and other cryptocurrencies is real, it is not imminent. The development of quantum computers capable of breaking current encryption methods is still in its early stages, and the cryptographic community is already working on counter-measures. Bitcoin and other blockchains have the potential to adapt and evolve to meet new threats, ensuring their continued security and integrity.

Frequently Asked Questions

Q: Can quantum computers break Bitcoin’s encryption?

A: Theoretically, yes, but it would require a quantum computer with a large number of fault-tolerant qubits, which is still years or decades away.

Q: What is being done to prevent quantum computers from breaking Bitcoin’s encryption?

A: The cryptographic community is working on developing quantum-safe (or post-quantum) cryptography, and the U.S. National Institute of Standards and Technology (NIST) is leading efforts to standardize cryptographic algorithms that are secure against quantum attacks.

Q: Will quantum computing kill Bitcoin?

A: It’s unlikely, as Bitcoin and other blockchains have the potential to adapt and evolve to meet new threats, and the financial incentives to preserve their integrity are enormous.


AMD Unveils MI350 GPU And Roadmap


Introduction to AMD’s Advancing AI Event

AMD held its now-annual Advancing AI event today in Silicon Valley, with new GPUs, new networking, new software, and even a rack-scale architecture for 2026/27 to better compete with the Nvidia NVL72 that is taking the AI world by storm. The event was kicked off by Dr. Lisa Su, Chairman and CEO of AMD.

Net-Net Conclusions: AMD Is Catching Up

While AMD has yet to meet investor expectations, and its products remain a distant second to Nvidia, the company continues to deliver on its commitment to an annual accelerator roadmap, with the MI350 offering nearly four times better performance generation over generation. That pace could help it catch up to Nvidia on GPU performance, and it keeps AMD ahead of Nvidia on memory capacity and bandwidth, although Nvidia’s lead in networking, system design, AI software, and ecosystem remains intact.

However, AMD has stepped up its networking game with support for UltraEthernet this year and UALink next year, for scale-out and scale-up respectively. And, for the first time, AMD showed a 2026/27 roadmap with the “Helios” rack-scale AI system, which goes some way toward answering the Nvidia NVL72 and the upcoming Kyber rack-scale system. At least AMD is now on the playing field.

Oracle said it is standing up a 27,000-GPU cluster using AMD Instinct GPUs on Oracle Cloud Infrastructure, so AMD is definitely gaining traction. AMD also unveiled ROCm 7.0 and the AMD Developer Cloud Access Program, helping it build a larger and stronger AI ecosystem.

The AMD MI350 Series GPUs

The AMD Instinct GPU portfolio has struggled to catch up with Nvidia, but customers value the price/performance and openness of AMD. In fact, AMD claims to offer 40% more tokens per dollar, and that 7 of the 10 largest AI companies have adopted AMD GPUs, among over 60 named customers.

The biggest claim to fame AMD touts is the larger memory footprint it supports, now at 288 GB of HBM3E memory with the MI350. That’s enough memory to hold today’s larger models, up to 520B parameters, on a single node, and 60% more than the competition. That translates to lower TCO for many models. The MI350 also has twice the 64-bit floating-point performance of Nvidia’s competing GPU, which is important for HPC workloads.
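A rough sanity check on the 520B-parameter claim: the memory needed just to hold a model’s weights is the parameter count times the bytes used per parameter, which depends on the numeric precision chosen for inference. The sketch below runs that arithmetic against the MI350’s 288 GB of per-GPU memory; the set of precisions is an illustrative assumption, and activations and KV cache would add to these totals.

```python
HBM_CAPACITY_GB = 288        # MI350 series per-GPU memory capacity
PARAMS_BILLIONS = 520

# Bytes per weight at a few common inference precisions (illustrative set).
precisions = {"FP16": 2.0, "FP8": 1.0, "FP4": 0.5}

for name, bytes_per_param in precisions.items():
    weights_gb = PARAMS_BILLIONS * bytes_per_param   # billions of params * bytes per param = GB
    verdict = "fits in" if weights_gb <= HBM_CAPACITY_GB else "exceeds"
    print(f"{name}: ~{weights_gb:,.0f} GB of weights {verdict} a single {HBM_CAPACITY_GB} GB GPU")
```

Only at 4-bit precision do the weights alone squeeze into one GPU, while an eight-GPU UBB8 node offers eight times that capacity, which is the context for the single-node claim and why memory capacity features so prominently in AMD’s positioning.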

The MI355 is the same silicon as the MI350 but is binned to run faster and hotter, and it is AMD’s flagship data center GPU. Both GPUs are available on industry-standard UBB8 boards in air-cooled and liquid-cooled versions.

AMD claims, and has now demonstrated through MLPerf benchmarks, that the MI355 is roughly three times faster than the MI300, and even on par with Nvidia’s B200 GPU. But keep in mind that NVLink, InfiniBand, system design, ecosystem, and software keep Nvidia in a leadership position for AI, and the B300 will begin shipping soon.

AMD’s GPU Roadmap Becomes Clearer

AMD added some detail on next year’s MI400 series as well. Sam Altman himself appeared on stage and gave the MI450 some serious love. His company has been instrumental in laying out the market requirements to the AMD engineering teams.

The MI400 will use HBM4 at 423GB per GPU and will support 300GB/s UltraEthernet through Pensando NICs.

To put the MI400’s expected performance into perspective, AMD showed a chart projecting hockey-stick gains, reminiscent of a similar slide Jensen Huang used at GTC. Clearly, AMD is on the right path.

Networking: AMD’s Missing Link

While much of the attention at the AMD Advancing AI event went to the MI350/355 GPUs and the roadmap, the networking section was arguably more exciting and more important.

More important for large-scale AI, AMD is an original member of the UALink consortium and will support UALink with the MI400 series. While AMD’s positioning looks impressive, keep in mind that Nvidia will likely be shipping NVLink 6.0 in the same timeframe, or earlier.

AMD ROCm Might Actually Start to Rock!

Finally, let’s give ROCm some credit. The development team has been hard at work since SemiAnalysis published its scathing assessment of AMD’s AI software stack late last year, and it has some good performance results to show for it, as well as growing ecosystem adoption.

To demonstrate the performance point, AMD showed more than three times the inference performance using ROCm 7. This is due in part to the ever-improving state of the open AI software stack, including components such as OpenAI’s Triton, and it is a developing trend that will keep Nvidia on its toes.

Conclusion

In conclusion, AMD’s Advancing AI event showed that the company is committed to catching up with Nvidia in the AI space. With its new GPUs, improved networking, and enhanced software, AMD is making significant strides in the industry. While Nvidia still maintains a leadership position, AMD’s efforts are helping to close the gap.

FAQs

Q: What was the main focus of AMD’s Advancing AI event?
A: The main focus of AMD’s Advancing AI event was to showcase the company’s new GPUs, improved networking, and enhanced software, as well as its commitment to catching up with Nvidia in the AI space.

Q: What is the MI350 and how does it compare to Nvidia’s GPUs?
A: The MI350 is AMD’s new GPU that offers 288 GB of HBM3E memory and twice the 64-bit floating-point performance of Nvidia’s competing GPU. While it still lags behind Nvidia’s GPUs in some areas, it provides a competitive alternative with its larger memory footprint and lower TCO.

Q: What is AMD’s GPU roadmap for the future?
A: AMD’s GPU roadmap includes the MI400 series, which will use HBM4 at 423GB per GPU and support 300GB/s UltraEthernet through Pensando NICs. The company is also working on a rack-scale AI system called "Helios" for 2026/27.

Q: How does AMD’s ROCm software stack compare to Nvidia’s?
A: AMD’s ROCm software stack has improved significantly over the last two years and has seen broad ecosystem collaboration. While Nvidia’s software stack is still more comprehensive, AMD’s ROCm is becoming a more viable alternative with its improved performance and openness.


Digital Storage and AI


Introduction

In this article we will look at some recent announcements on digital storage and its use in AI training and inference. But first, an example of digital storage technology used to save humanity.

Data Storage Saves the Day

Digital archiving startup SPhotonix’s 5D memory crystal was an important element in the plot of the latest Mission: Impossible movie. The 360TB memory crystal was used to stop a rogue AI from destroying the world. In practice, SPhotonix stores data using its FemtoEtch nano-etching technology on a 5-inch glass substrate. Note that I am an advisor for SPhotonix.
[Image: SPhotonix 5D memory crystal]
Digital storage technologies have appeared in many movies and TV shows over the years, such as the StorageTek tape library featured in the 1994 film “Clear and Present Danger.”

Hybrid AI Data Centers

In practice, data centers generally use SSDs as primary storage, including for AI training applications. SSDs provide fast storage for refreshing the high-bandwidth memory located close to the GPUs that do the actual processing. However, the cost of storing data on SSDs in data centers is about 6X higher than storing it on HDDs.
This leads data centers to use HDDs for colder but still useful data in a hierarchical storage environment. Data is moved back and forth between the various storage tiers to optimize the balance of cost and performance. Archived information that is not frequently used is ultimately kept on magnetic tape cartridges or optical storage.
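The economics behind that tiering are easy to sketch: if SSD capacity costs roughly six times as much as HDD capacity, the blended cost of a hybrid system depends almost entirely on how much of the data has to live on the hot tier. The Python below runs that arithmetic for a few hot-data fractions; the absolute dollars-per-terabyte figure is an illustrative assumption, and only the roughly 6X ratio comes from the discussion above.

```python
HDD_COST_PER_TB = 15.0                   # illustrative $/TB, not a quoted price
SSD_COST_PER_TB = 6 * HDD_COST_PER_TB    # the roughly 6X premium noted above

def blended_cost_per_tb(hot_fraction: float) -> float:
    """Capacity-weighted $/TB when hot_fraction of the data lives on SSD
    and the remainder sits on HDD."""
    return hot_fraction * SSD_COST_PER_TB + (1 - hot_fraction) * HDD_COST_PER_TB

for hot in (1.0, 0.5, 0.2, 0.1):
    print(f"{hot:>4.0%} on flash -> ${blended_cost_per_tb(hot):.2f} per TB")
```

Keeping only the actively used data on flash, which is exactly what the hierarchical approach described above aims to do, turns the 6X media premium into a much smaller blended premium.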

Recent Developments in Hybrid Storage

Vdura, formerly the veteran storage company Panasas, recently published a white paper on digital storage for AI workloads and announced changes to its hybrid SSD and HDD storage offering to support HPC and AI workloads. The company now offers QLC NAND flash SSDs combined with high-capacity HDDs under its global-namespace parallel file system with object storage, multi-level erasure coding, and fast key-value storage. The image below shows the layout of this hybrid SSD and HDD storage system.
[Image: Vdura global namespace storage layout]
The Vdura Data Platform V11.2 includes a preview of V-ScaleFlow, which enables data movement across QLC flash and high-capacity hard drives. This improves resource utilization, maximizes system throughput, and supports AI-scale workloads efficiently. In particular, the company is combining Phison Pascari 128TB QLC NVMe SSDs with 30+TB HDDs to reduce flash capacity requirements by over 50% and lower power consumption. Overall total cost of ownership is said to be reduced by up to 60%.

AI Data Pipeline and Storage Requirements

The Vdura white paper goes into detail on data storage and memory utilization in an AI application. The figure below shows an AI data pipeline in which the storage system should minimize GPU idle time.
[Image: AI data pipeline]
The table below details the read, write, performance, and data-size requirements for the various elements of an AI workload. These elements can require anywhere from gigabytes to petabytes of storage, with widely varying performance requirements, which favors combining storage technologies to support the different stages of the workload.
[Table: element characteristics in an AI workflow]
The image below shows a sample storage node that can provide all-flash or hybrid SSD and HDD storage to support AI and HPC workloads, with a global namespace and a common control and data plane.
[Image: Vdura storage node]

Conclusion

Digital storage technology saved the world from a rogue AI in the latest Mission: Impossible movie. Back in the real world, combining SSDs and HDDs can support modern AI workloads while optimizing the balance of cost and performance.

FAQs

Q: What is the main challenge in using digital storage for AI workloads?
A: The main challenge is balancing cost and performance, as storing data on SSDs can be expensive, while using HDDs may not provide the necessary performance.
Q: What is the role of hybrid storage in AI data centers?
A: Hybrid storage combines the benefits of SSDs and HDDs to provide a balance between cost and performance, enabling efficient AI-scale workloads.
Q: What is the significance of Vdura’s recent announcement?
A: Vdura’s announcement introduces a new hybrid SSD and HDD storage offering that supports HPC and AI workloads, providing a global namespace parallel file system and object storage with multi-level erasure coding and fast key value storage.
