Innovation and Technology
Human-Like AI Scaling

Introduction to Agentic AI
Since the beginning of the year, I’ve been participating in discussions about the promise and limits of agentic AI, which is generally defined as a system that enables AI to make independent analyses and decisions without much human input. It has created a second wave of public interest in AI, following the launch of ChatGPT in late 2022, which introduced much of the world to GenAI.
The Promise of Agentic AI
Why all the attention? Agentic AI is a big leap forward in realizing our dreams of a world where AI can not only do things faster and better, but, with our guidance, reason independently on our behalf. If GenAI is about productivity, agentic AI is about agency, a power we typically attribute to humans.
Human-Like Agency in AI
But if AI has human-like agency, should we expect it to reason like humans?
Abstract: A Pioneer in Agentic AI
Last week, I spoke with two founders of Abstract, an AI startup that provides “real-time, contextualized policy intelligence.” They’re looking to tackle a longstanding problem for businesses: making sense of the accelerating volume of policy changes resulting from the plethora of legislation at the federal, state, and local levels.
The Challenge of Policy Changes
What sets them apart is that the agents apply context for interpreting and predicting the impact of these changes, the way a human policy analyst might do, but at scale. The end user — a human being — needs context, so agents need to be capable of providing it.
Here Comes The Flood
Over the past five decades, the task of monitoring and responding to policy changes has become nearly impossible. The volume of federal restrictions alone has grown from 400,000 restrictive words in the 1970s to more than one million today, according to the Office of the Federal Register. There are more than 145,000 federal, state, county, and city government entities that pass 3,000 to 4,500 final rules annually. Adding to the complexity is a wave of federal deregulation under the current administration, which is shifting regulatory responsibilities to state and local governments. On top of that, legislative documents are hard to read. To the average human, they make little sense.
Abstract’s Solution
To keep up with the deluge, Abstract tracks all the aforementioned data to provide insights into risks and opportunities in context. By providing this level of context at scale, it has positioned itself for the “policy intelligence” market in several ways.
Expanding the Market
First, it expands the market beyond compliance, the primary focus of legacy Government, Risk, and Compliance (GRC). “Compliance is reactive. It kicks in once a regulation changes,” said Utz. “Abstract is focused on everything before that. We abstract the noise so we can identify risks and opportunities early, before compliance is even necessary. There is the proactive piece that provides an early warning system on how legal and regulatory changes may pose a risk to the organization.”
Verticalization
Second, context enables Abstract to verticalize, giving businesses the high-level counsel they expect, including an analyst’s ability to see around the corners of a subject and make thoughtful recommendations. In addition to its work for large businesses, Abstract has made inroads with large national law firms in the Am Law 200.
Expansion and Growth
Finally, Abstract is expanding its user base beyond in-house legal and regulatory departments to departments like HR, product, finance, knowledge management, innovation, and business development, which use Abstract to personalize outreach and insights for their clients.
The Founders’ Vision
Abstract’s sweeping POV on current and future users hearkens back to the founders’ original mandate: to democratize access to government data. Founders Utz and Mohammed Hayat, who conceived the company while undergrads at Loyola Marymount University in Los Angeles, and their co-founder Matthew Chang, a UCLA alum, had something in common: they each came from immigrant families frustrated with the lack of transparency and accessibility of government records in their home countries.
Conclusion
Abstract isn’t alone in the U.S. market. Companies such as FiscalNote and Quorum also offer proactive policy tools, but according to Utz, they don’t deliver the context that sets Abstract apart. With its unique approach to providing context and its expanding user base, Abstract is poised to make a significant impact in the policy intelligence market.
FAQs
- What is agentic AI?
Agentic AI refers to a system that enables AI to make independent analyses and decisions without much human input.
- What is Abstract?
Abstract is an AI startup that provides “real-time, contextualized policy intelligence” to help businesses navigate the complex landscape of policy changes.
- How does Abstract differ from other policy tools?
Abstract sets itself apart by providing context for interpreting and predicting the impact of policy changes, allowing it to verticalize for businesses and provide high-level counsel.
- What is the goal of Abstract’s founders?
The founders of Abstract aim to democratize access to government data and provide transparency and accessibility to policy information.
Innovation and Technology
The AI Dilemma: How Machines are Raising Questions of Morality and Ethics

With the rapid advance of AI and automation, the world is witnessing a significant transformation in the way we live and work. As machines become increasingly intelligent and autonomous, they are raising complex questions about morality and ethics that challenge our understanding of right and wrong. The AI dilemma is a pressing concern that requires careful consideration and nuanced discussion. In this article, we will delve into the intricacies of this dilemma and explore its implications for humanity.
Understanding the AI Dilemma
The AI dilemma refers to the ethical and moral concerns that arise from the development and deployment of artificial intelligence systems. As machines become more advanced, they are capable of making decisions that can have significant consequences for individuals and society as a whole. The dilemma arises when we consider whether machines should be programmed to prioritize human well-being, efficiency, or other values.
Autonomy and Decision-Making
One of the primary concerns surrounding the AI dilemma is the issue of autonomy and decision-making. As machines become more autonomous, they are able to make decisions without human input or oversight. This raises questions about accountability and responsibility, as well as the potential for machines to make decisions that are detrimental to human well-being.
Value Alignment
Another key aspect of the AI dilemma is the issue of value alignment. As machines are programmed to optimize certain objectives, they may prioritize values that are not aligned with human values. For example, a machine designed to optimize efficiency may prioritize productivity over human safety or well-being. This highlights the need for careful consideration of the values that are embedded in AI systems.
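To make the value-alignment point concrete, here is a minimal, purely illustrative Python sketch, not drawn from any real system: a toy objective that rewards only throughput picks the riskiest setting, while adding an explicit safety penalty changes the chosen action. All names and numbers are invented for illustration.

```python
# Illustrative toy example only: not from any real system.
# A scheduler picks a machine speed. One objective rewards only throughput;
# the other subtracts a penalty for expected incidents, standing in for a safety value.

def throughput(speed: float) -> float:
    return 10.0 * speed  # toy model: units produced per hour

def incident_risk(speed: float) -> float:
    return 0.2 * speed ** 2  # toy model: expected harm grows faster than output

def throughput_only(speed: float) -> float:
    return throughput(speed)

def safety_aware(speed: float, safety_weight: float = 5.0) -> float:
    return throughput(speed) - safety_weight * incident_risk(speed)

speeds = [s / 10 for s in range(1, 101)]  # candidate speeds 0.1 .. 10.0

print("throughput-only objective picks speed:", max(speeds, key=throughput_only))  # 10.0, the maximum
print("safety-aware objective picks speed:", max(speeds, key=safety_aware))        # 5.0, backing off to limit risk
```

The point of the sketch is simply that whichever values are left out of the objective are the ones the system will trade away.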
The Impact of AI on Society
The AI dilemma has significant implications for society, from the way we work and live to the way we interact with each other. As machines become more advanced, they are likely to have a profound impact on the job market, the economy, and our social structures.
Job Displacement and Economic Impact
One of the most significant concerns surrounding the AI dilemma is the potential for job displacement. As machines become more advanced, they are likely to automate many tasks currently performed by humans, leading to significant job losses. This raises questions about the economic impact of AI and the need for strategies to mitigate the effects of job displacement.
Social and Cultural Implications
The AI dilemma also has significant social and cultural implications. As machines become more integrated into our lives, they are likely to change the way we interact with each other and the way we understand ourselves. This raises questions about the potential for AI to exacerbate existing social inequalities and the need for careful consideration of the social and cultural implications of AI.
Addressing the AI Dilemma
Addressing the AI dilemma requires a multifaceted approach that involves governments, corporations, and individuals. It is essential to develop strategies that prioritize human well-being, safety, and dignity, while also promoting the development of AI that is aligned with human values.
Regulation and Governance
One of the key strategies for addressing the AI dilemma is the development of regulatory frameworks that prioritize human well-being and safety. Governments and corporations must work together to establish standards and guidelines for the development and deployment of AI systems.
Education and Awareness
Another essential strategy is education and awareness. It is crucial to raise awareness about the potential risks and benefits of AI and to educate individuals about the importance of prioritizing human values in AI development.
Conclusion
The AI dilemma is a complex and multifaceted issue that requires careful consideration and nuanced discussion. As machines become increasingly intelligent and autonomous, they are raising significant questions about morality and ethics that challenge our understanding of right and wrong. Addressing the AI dilemma requires a multifaceted approach that involves governments, corporations, and individuals, and prioritizes human well-being, safety, and dignity. By working together, we can ensure that the development of AI is aligned with human values and promotes a future that is beneficial for all.
Frequently Asked Questions
What is the AI dilemma?
The AI dilemma refers to the ethical and moral concerns that arise from the development and deployment of artificial intelligence systems.
What are the implications of the AI dilemma for society?
The AI dilemma has significant implications for society, from the way we work and live to the way we interact with each other. It raises questions about job displacement, economic impact, social and cultural implications, and the need for careful consideration of the values that are embedded in AI systems.
How can we address the AI dilemma?
Addressing the AI dilemma requires a multifaceted approach that involves governments, corporations, and individuals. It is essential to develop strategies that prioritize human well-being, safety, and dignity, while also promoting the development of AI that is aligned with human values. This includes regulation and governance, education and awareness, and the development of standards and guidelines for AI development and deployment.
What is the future of AI?
The future of AI is uncertain, but it is likely to be shaped by the decisions we make today. By prioritizing human values and promoting the development of AI that is aligned with human well-being, safety, and dignity, we can ensure that the future of AI is beneficial for all.
How can I get involved in addressing the AI dilemma?
There are many ways to get involved in addressing the AI dilemma, from participating in public discussions and debates to supporting organizations that prioritize human values in AI development. Individuals can also make a difference by educating themselves about the potential risks and benefits of AI and by promoting awareness about the importance of prioritizing human values in AI development.
Innovation and Technology
AI Agents Deliver Productivity, But That’s Only Part Of The Story

Introduction to AI Agents in the Workplace
The word on agentic AI’s ability to deliver on its promises is: so far, so good. With caveats. In a recent PwC survey, 66% of 300 senior executives adopting AI agents say the agents are delivering positive results in terms of productivity. But, let’s face it — all systems deliver some degree of productivity. What executives need is that extra edge that delivers extreme competitive differentiation.
Current State of AI Agents
At this point, few AI agents are “transforming how work gets done,” the PwC report’s authors state. “Many employees are using agentic features built into enterprise apps to speed up routine tasks — surfacing insights, updating records, answering questions. It’s a meaningful boost in productivity, but it stops short of transformation.” The biggest barrier isn’t the technology; “it’s mindset, change readiness and workforce engagement,” the PwC authors conclude.
Challenges and Limitations
Mahe Bayireddi, CEO and co-founder of Phenom, which offers agents for HR tasks, agrees this is where the challenge lies. “I think there’s a lot of learning in this whole process,” he said. “There are no experts dynamically saying how they can handle AI agents effectively.” “Agents can bring up productivity almost by 20-30%, if they use it in the right format, do the change management effectively, and use the data in an engagement format,” Bayireddi continued. “The point is how do they make it fly, how do they manage the change management.”
Importance of Context and Personalization
AI agents and the data they consume need to be domain-specific, and will vary industry to industry, company to company. “The data at the universal level is actually complex,” he said. “The nuance of a context and that nuance of personalization is very critical for AI to work. It can’t be too general.” The rise of agents advances generative AI to more practical levels. Once put into place, agents can be “baked into the workflows,” he said. “Up to now, everybody has had to go to ChatGPT and ask a question and get an answer. It’s not the way how people work.”
Future of Work with AI Agents
The emphasis needs to be on addressing the nuances of functions and processes to be automated with agents. “That has to manifest in an effective format with a context,” he said. “That can only happen with an agent being effective in a department.” Bayireddi doesn’t see agents as a threat to jobs, but they will change the nature of jobs. “There are new jobs which are going to come up because of agents. There is a new work which is also going to pop up because of agents. Skills is one thing, but also the work will change and the jobs will change.”
Conclusion
Don’t settle for too little when it comes to AI agents, the PwC authors advised. “Companies that stop at pilot projects will soon find themselves outpaced by competitors willing to redesign how work gets done. We see few companies moving early to define the future, building new operating models that integrate and orchestrate multiple AI agents. Fewer than half are fundamentally rethinking operating models and how work gets done (45%) or redesigning processes around AI agents (42%).”
FAQs
Q: What percentage of senior executives say AI agents are delivering positive results in terms of productivity?
A: 66%
Q: What is the biggest barrier to AI agent adoption?
A: Mindset, change readiness, and workforce engagement
Q: Can AI agents bring up productivity?
A: Yes, by 20-30% if used in the right format and with effective change management
Q: Will AI agents replace jobs?
A: No, but they will change the nature of jobs and create new ones.
Innovation and Technology
Generative AI Tools For Lawyers

Introduction to AI in Law
For seasoned lawyers, as well as laypeople simply trying to make sense of a tricky contract, new tools powered by generative AI promise to transform the way we engage with the law. Legal professionals often devote large chunks of their time to drafting contracts, researching previous cases, preparing documents for submission to court, or reviewing case law. Even for non-lawyers, many everyday tasks can require diving into legal concepts and principles—wading through corporate T&Cs, tenancy agreements, consumer rights advice or business compliance.
The Rise of GenAI Tools in Law
Fortunately, lawyers—professional and armchair varieties—are finding that there’s a wealth of genAI tools out there that can make their lives easier. So here’s my rundown of some of the leading apps, tools, and services that help with legal tasks. Some can help law firms and professionals automate dull and repetitive jobs, while others aim to make the legal systems and courts more accessible to laypeople.
Leading AI Platforms for Legal Tasks
Harvey AI is among the market-leading legal AI platforms. Like many of the tools here, it’s built on LLM technology (in this case, OpenAI’s GPT models that also power ChatGPT). However, it’s been fine-tuned to be particularly efficient when it comes to legal tasks such as research, contract analysis and compliance. As well as the vast amounts of training data at its disposal, it is further fine-tuned on domain-specific legal knowledge to ensure firms get assistance that’s tailored to their own way of working. Harvey now also offers agentic genAI capabilities, allowing it to work autonomously on carrying out longer, multi-step tasks.
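For readers curious about the general pattern these tools follow, the sketch below shows one common way a legal assistant can layer a domain-specific system prompt on a general-purpose LLM. This is a hypothetical illustration, not Harvey’s actual architecture; the model name, prompt text, and sample clause are placeholders.

```python
# Hypothetical sketch of a generic legal-review call built on a general-purpose LLM.
# Not Harvey's implementation; model name, prompts, and clause are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

LEGAL_SYSTEM_PROMPT = (
    "You are a legal research assistant. Identify obligations, risks, and "
    "ambiguous terms in the provided contract clause, quoting the relevant text."
)

clause = "The Supplier may revise fees at any time upon written notice."  # placeholder example

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; a production legal tool would use a tuned or specialized model
    messages=[
        {"role": "system", "content": LEGAL_SYSTEM_PROMPT},
        {"role": "user", "content": f"Review this clause:\n{clause}"},
    ],
)

print(response.choices[0].message.content)
```

Commercial platforms add much more on top of this pattern, such as retrieval over case law, citation checking, and agentic multi-step workflows, but the basic shape of prompting a foundation model with domain context is the same.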
LexisNexis has existed for more than 50 years as a database of legal information, including court decisions, judgments and case law. Today, it’s been given a generative AI upgrade in the form of Lexis+, which is designed to act as an AI legal assistant. Users—generally legal professionals—can engage through a conversational search interface to create tailored legal documents and correspondence, as well as identify relevant case law, statutes and legal commentary. It also provides instant summaries of complex legal texts and integrates with Shepard’s Citations Service, ensuring citations are correct and up-to-date.
Consumer-Focused AI Tools
This is a web-based platform offering generative AI tools for a wide variety of legal tasks often faced by consumers and laypeople, including fighting parking charges, reclaiming debts and disputing bank fees. It isn’t purely AI-based—there is a strong community element to the service, too, and plenty of articles giving useful advice on a number of consumer rights, data protection and privacy issues. Billed as the “consumer AI champion,” it can be used to generate dispute letters, file claims and navigate complex court processes no matter how inexperienced a user is in legal matters.
AI Assistants for Legal Professionals
Operated by Thomson Reuters, CoCounsel is an AI legal assistant that crunches through repetitive digital workloads like reviewing documents, researching case law, and identifying critical questions. It has developed a reputation as trustworthy among law firms due to a focus on robust data protection and privacy safeguards. CoCounsel has also added what it calls agentic functionality to its platform. Although it isn’t yet clear exactly what these AI agents will do, the legal industry is expected to be heavily impacted by the adoption of next-generation AI tools that promise even greater levels of automation.
Streamlining Legal Document Creation
This genAI-powered lawtech platform was built to streamline the creation of legal documents and the negotiation of contracts and agreements. Specifically tailored for startups and small businesses without full in-house legal teams, it offers AI-customized legal templates, from employment contracts to NDAs, covering any standard documentation that smaller organizations might require. Designed for non-professionals as well as professionals, it focuses on reducing complexity and providing easy-to-digest explanations of contract clauses.
More Great Tools
That’s far from all of the tools out there designed for lawyers and armchair lawyers. If none of the above do exactly what you need, take a look at one of these:
- Blue J: AI-powered research assistant designed to help accountants and other professional services firms understand tax laws.
- ContractPod AI: Users of this enterprise legal AI platform can access Leah, a virtual legal assistant.