LangChain Integration
NimbleSearchRetriever
NimbleSearchRetriever enables developers to build RAG applications and AI Agents that can search, access, and retrieve online information from anywhere on the web.
NimbleSearchRetriever harnesses Nimble's Data APIs to execute search queries and retrieve web data in an efficient, scalable, and effective fashion. It has two modes:
Search & Retrieve: Execute a search query, get the top result URLs, and retrieve the text from those URLs.
Retrieve: Provide a list of URLs, and retrieve the text/data from those URLs.
If you'd like to learn more about the underlying Nimble APIs, visit the documentation here.
Setup
To begin using NimbleSearchRetriever, you'll first need to open an account with Nimble and subscribe to a plan. Nimble offers free trials, which you can register for here.
For more information about available plans, see our Pricing page.
Once you have registered, you'll receive your API credentials, which you can use to get an authentication credential string in one of two ways:
After signing in to the Dashboard, visit the pipelines section. Either click on an existing Web API pipeline or create a new one by clicking "Add Pipeline".
Once inside a Pipeline page, the relevant username, password, and Base64 token will be at the top of the page. The Base64 token is your credential string.
You can generate the authentication credential string by Base64 encoding your API credentials in the following fashion:
base64(username:password)
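As a sketch, the encoding above can be done with Python's standard base64 module (the username and password below are placeholders, not real credentials):

```python
import base64

def nimble_credential(username: str, password: str) -> str:
    # Base64-encode "username:password" to form the credential string.
    return base64.b64encode(f"{username}:{password}".encode()).decode()

print(nimble_credential("user", "pass"))  # dXNlcjpwYXNz
```

The same string can be produced in a shell with `echo -n 'username:password' | base64`.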
You can set your credential string as an environment variable so NimbleSearchRetriever will capture it automatically, without you having to pass it inline each time.
import getpass
import os
os.environ["NIMBLE_API_KEY"] = getpass.getpass()
For more information about the Authentication process, see Nimble APIs Authentication Documentation.
With your encoded credential string, you'll now be able to access NimbleSearchRetriever.
If you want to get automated tracing for individual queries, you can set your LangSmith API key by uncommenting below:
# os.environ["LANGSMITH_API_KEY"] = getpass.getpass("Enter your LangSmith API key: ")
# os.environ["LANGSMITH_TRACING"] = "true"
Installation
This retriever lives in the langchain-nimble package.
%pip install -U langchain-nimble langchain-openai
Instantiation
Now we can instantiate our retriever:
from langchain_nimble import NimbleSearchRetriever
retriever = NimbleSearchRetriever(k=3)
Usage
Arguments
NimbleSearchRetriever has these arguments:
- k (optional, integer): Number of results to return (maximum 20).
- api_key (required, string): Nimble's API key. It can be passed directly when instantiating the retriever or set via the NIMBLE_API_KEY environment variable.
- search_engine (optional, enum): The search engine your query will be executed through. Choose from:
  - google_search (default): Google's search engine
  - bing_search: Bing's search engine
  - yandex_search: Yandex's search engine
- render (optional, boolean): Enables or disables JavaScript rendering on the target page (if enabled, results may return more slowly).
- locale (optional, string): LCID-standard locale used for the URL request. Alternatively, use auto for automatic locale selection based on country targeting.
- country (optional, string): Country used to access the target URL; use ISO Alpha-2 country codes, e.g. US, DE, GB.
- parsing_type (optional, enum): The text structure of the returned page_content. Choose from:
  - markdown: Markdown format
  - simplified_html (default): Compressed version of the original HTML document (~8% of the original HTML size)
  - plain_text: Extracts just the text from the HTML
- links (optional, array of strings): Array of links to the websites you want to scrape. If provided, the retriever returns the raw HTML content from these URLs (this activates the second mode, Retrieve).
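To illustrate how these arguments fit together, here is a hypothetical instantiation; the parameter values are examples only, and you'll still need NIMBLE_API_KEY set (or api_key passed) for requests to succeed:

```python
from langchain_nimble import NimbleSearchRetriever

# Example values only; see the argument descriptions above.
retriever = NimbleSearchRetriever(
    k=5,                          # return up to 5 results (max 20)
    search_engine="bing_search",  # default is google_search
    render=True,                  # enable JavaScript rendering (slower)
    country="DE",                 # ISO Alpha-2 country code
    parsing_type="plain_text",    # default is simplified_html
)
```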
You can read more about each argument in Nimble's docs.
Example of Search & Retrieve Mode with a search query string
query = "Latest trends in artificial intelligence"
retriever.invoke(query)
Result
[Document(metadata={'title': '8 AI and machine learning trends to watch in 2025', 'snippet': 'Jan 3, 2025 — 1. Hype gives way to more pragmatic approaches · 2. Generative AI moves beyond chatbots · 3. AI agents are the next frontier · 4. Generative AI\xa0...', 'url': '', 'position': 1, 'entity_type': 'OrganicResult'}, page_content='8 AI and machine learning trends to watch in 2025 | TechTarget\nSearch Enterprise AI\nSearch the TechTarget Network\nLogin\nRegister\nExplore the Network\nTechTarget Network\nBusiness Analytics\nCIO\nData Management\nERP\nSearch Enterprise AI\nAI Business Strategies\nAI Careers\nAI Infrastructure\nAI Platforms\nAI Technologies\nMore Topics\nApplications of AI\nML Platforms\nOther Content\nNews\nFeatures\nTips\nWebinars\n2024 IT Salary Survey Results\nSponsored Sites\nMore\nAnswers\nConference Guides\nDefinitions\nOpinions\nPodcasts\nQuizzes\nTech Accelerators\nTutorials\nVideos\nFollow:\nHome\nAI business strategies\nTech Accelerator\nWhat is enterprise AI? A complete guide for businesses\nPrev\nNext\n8 jobs that AI can\'t replace and why\n10 top artificial intelligence certifications and courses for 2025\nDownload this guide1\nX\nFree Download\nA guide to artificial intelligence in the enterprise\nThis wide-ranging guide to artificial intelligence in the enterprise provides the building blocks for becoming successful business consumers of AI technologies. It starts with introductory explanations of AI\'s history, how AI works and the main types of AI. The importance and impact of AI is covered next, followed by information on AI\'s key benefits and risks, current and potential AI use cases, building a successful AI strategy, steps for implementing AI tools in the enterprise and technological breakthroughs that are driving the field forward. 
Throughout the guide, we include hyperlinks to TechTarget articles that provide more detail and insights on the topics discussed.\nFeature\n8 AI and machine learning trends to watch in 2025\nAI agents, multimodal models, an emphasis on real-world results -- learn about the top AI and machine learning trends and what they mean for businesses in 2025.\nShare this item with your network:\nBy\nLev Craig,\nSite Editor\nPublished: 03 Jan 2025\nGenerative AI is at a crossroads. It\'s now more than two years since ChatGPT\'s launch, and the initial optimism about AI\'s potential is decidedly tempered by an awareness of its limitations and costs.\nThe 2025 AI landscape reflects that complexity. While excitement still abounds -- particularly for emerging areas, like agentic AI and multimodal models -- it\'s also poised to be a year of growing pains.\nCompanies are increasingly looking for proven results from generative AI, rather than early-stage prototypes. That\'s no easy feat for a technology that\'s often expensive, error-prone and vulnerable to misuse. And regulators will need to balance innovation and safety, while keeping up with a fast-moving tech environment.\nHere are eight of the top AI trends to prepare for in 2025.\n1. Hype gives way to more pragmatic approaches\nSince 2022, there\'s been an explosion of interest and innovation in generative AI, but actual adoption remains inconsistent. Companies often struggle to move generative AI projects, whether internal productivity tools or customer-facing applications, from pilot to production.\nThis article is part of\nWhat is enterprise AI? A complete guide for businesses\nWhich also includes:\nHow can AI drive revenue? Here are 10 approaches\n8 jobs that AI can\'t replace and why\n8 AI and machine learning trends to watch in 2025\nAlthough many businesses have explored generative AI through proofs of concept, fewer have fully integrated it into their operations. 
In a September 2024 research report, Informa TechTarget\'s Enterprise Strategy Group found that, although over 90% of organizations had increased their generative AI use over the previous year, only 8% considered their initiatives mature.\n"The most surprising thing for me [in 2024] is actually the lack of adoption that we\'re seeing," said Jen Stave, launch director for the Digital Data Design Institute at Harvard University. "When you look across businesses, companies are investing in AI. They\'re building their own custom tools. They\'re buying off-the-shelf enterprise versions of the large language models (LLMs). But there really hasn\'t been this groundswell of adoption within companies."\nOne reason for this is AI\'s uneven impact across roles and job functions. Organizations are discovering what Stave termed the "jagged technological frontier," where AI enhances productivity for some tasks or employees, while diminishing it for others. A junior analyst, for example, might significantly increase their output by using a tool that only bogs down a more experienced counterpart.\n"Managers don\'t know where that line is, and employees don\'t know where that line is," Stave said. "So, there\'s a lot of uncertainty and experimentation."\nDespite the sky-high levels of generative AI hype, the reality of slow adoption is hardly a surprise to anyone with experience in enterprise tech. In 2025, expect businesses to push harder for measurable outcomes from generative AI: reduced costs, demonstrable ROI and efficiency gains.\n2. Generative AI moves beyond chatbots\nWhen most laypeople hear the term generative AI, they think of tools like ChatGPT and Claude powered by LLMs. Early explorations from businesses, too, have tended to involve incorporating LLMs into products and services via chat interfaces. 
But, as the technology matures, AI developers, end users and business customers alike are looking beyond chatbots.\n"People need to think more creatively about how to use these base tools and not just try to plop a chat window into everything," said Eric Sydell, founder and CEO of Vero AI, an AI and analytics platform.\nThis transition aligns with a broader trend: building software atop LLMs rather than deploying chatbots as standalone tools. Moving from chatbot interfaces to applications that use LLMs on the back end to summarize or parse unstructured data can help mitigate some of the issues that make generative AI difficult to scale.\n"[A chatbot] can help an individual be more effective ... but it\'s very one on one," Sydell said. "So, how do you scale that in an enterprise-grade way?"\nHeading into 2025, some areas of AI development are starting to move away from text-based interfaces entirely. Increasingly, the future of AI looks to center around multimodal models, like OpenAI\'s text-to-video Sora and ElevenLabs\' AI voice generator, which can handle nontext data types, such as audio, video and images.\n"AI has become synonymous with large language models, but that\'s just one type of AI," Stave said. "It\'s this multimodal approach to AI [where] we\'re going to start seeing some major technological advancements."\nRobotics is another avenue for developing AI that goes beyond textual conversations -- in this case, to interact with the physical world. Stave anticipates that foundation models for robotics could be even more transformative than the arrival of generative AI.\n"Think about all of the different ways we interact with the physical world," she said. "I mean, the applications are just infinite."\n3. AI agents are the next frontier\nThe second half of 2024 has seen growing interest in agentic AI models capable of independent action. 
Tools like Salesforce\'s Agentforce are designed to autonomously handle tasks for business users, managing workflows and taking care of routine actions, like scheduling and data analysis.\nAgentic AI is in its early stages. Human direction and oversight remain critical, and the scope of actions that can be taken is usually narrowly defined. But, even with those limitations, AI agents are attractive for a wide range of sectors.\nAutonomous functionality isn\'t totally new, of course; by now, it\'s a well-established cornerstone of enterprise software. The difference with AI agents lies in their adaptability: Unlike simple automation software, agents can adapt to new information in real time, respond to unexpected obstacles and make independent decisions.\nYet, that same independence also entails new risks. Grace Yee, senior director of ethical innovation at Adobe, warned of "the harm that can come ... as agents can start, in some cases, acting upon your behalf to help with scheduling or do other tasks." Generative AI tools are notoriously prone to hallucinations, or generating false information -- what happens if an autonomous agent makes similar mistakes with immediate, real-world consequences?\nSydell cited similar concerns, noting that some use cases will raise more ethical issues than others. "When you start to get into high-risk applications -- things that have the potential to harm or help individuals -- the standards have to be way higher," he said.\nCompared with generative AI, agentic AI offers greater autonomy and adaptability.\n4. Generative AI models become commodities\nThe generative AI landscape is evolving rapidly, with foundation models seemingly now a dime a dozen. 
As 2025 begins, the competitive edge is moving away from which company has the best model to which businesses excel at fine-tuning pretrained models or developing specialized tools to layer on top of them.\nIn a recent newsletter, analyst Benedict Evans compared the boom in generative AI models to the PC industry of the late 1980s and 1990s. In that era, performance comparisons focused on incremental improvements in specs like CPU speed or memory, similar to how today\'s generative AI models are evaluated on niche technical benchmarks.\nOver time, however, these distinctions faded as the market reached a good-enough baseline, with differentiation shifting to factors such as cost, UX and ease of integration. Foundation models seem to be on a similar trajectory: As performance converges, advanced models are becoming more or less interchangeable for many use cases.\nIn a commoditized model landscape, the focus is no longer number of parameters or slightly better performance on a certain benchmark, but instead usability, trust and interoperability with legacy systems. In that environment, AI companies with established ecosystems, user-friendly tools and competitive pricing are likely to take the lead.\n5. AI applications and data sets become more domain-specific\nLeading AI labs, like OpenAI and Anthropic, claim to be pursuing the ambitious goal of creating artificial general intelligence (AGI), commonly defined as AI that can perform any task a human can. But AGI -- or even the comparatively limited capabilities of today\'s foundation models -- is far from necessary for most business applications.\nFor enterprises, interest in narrow, highly customized models started almost as soon as the generative AI hype cycle began. A narrowly tailored business application simply doesn\'t require the degree of versatility necessary for a consumer-facing chatbot.\n"There\'s a lot of focus on the general-purpose AI models," Yee said. 
"But I think what is more important is really thinking through: How are we using that technology ... and is that use case a high-risk use case?"\nIn short, businesses should consider more than what technology is being deployed and instead think more deeply about who will ultimately be using it and how. "Who\'s the audience?" Yee said. "What\'s the intended use case? What\'s the domain it\'s being used in?"\nAlthough, historically, larger data sets have driven model performance improvements, researchers and practitioners are debating whether this trend can hold. Some have suggested that, for certain tasks and populations, model performance plateaus -- or even worsens -- as algorithms are fed more data.\n"The motivation for scraping ever-larger data sets may be based on fundamentally flawed assumptions about model performance," authors Fernando Diaz and Michael Madaio wrote in their paper "Scaling Laws Do Not Scale." "That is, models may not, in fact, continue to improve as the data sets get larger -- at least not for all people or communities impacted by those models."\n6. AI literacy becomes essential\nGenerative AI\'s ubiquity has made AI literacy an in-demand skill for everyone from executives to developers to everyday employees. That means knowing how to use these tools, assess their outputs and -- perhaps most importantly -- navigate their limitations.\nNotably, although AI and machine learning talent remains in demand, developing AI literacy doesn\'t need to mean learning to code or train models. "You don\'t necessarily have to be an AI engineer to understand these tools and how to use them and whether to use them," Sydell said. "Experimenting, exploring, using the tools is massively helpful."\nAmid the persistent generative AI hype, it can be easy to forget that the technology is still relatively new. 
Many people either haven\'t used it at all or don\'t use it regularly: A recent research paper found that, as of August 2024, less than half of Americans aged 18 to 64 use generative AI, and just over a quarter use it at work.\nThat\'s a faster pace of adoption compared with the PC or the internet, as the paper\'s authors pointed out, but it\'s still not a majority. There\'s also a gap between businesses\' official stances on generative AI and how real workers are using it in their day-to-day tasks.\n"If you look at how many companies say they\'re using it, it\'s actually a pretty low share who are formally incorporating it into their operations," David Deming, professor at Harvard University and one of the paper\'s authors, told The Harvard Gazette. "People are using it informally for a lot of different purposes, to help write emails, using it to look up things, using it to obtain documentation on how to do something."\nStave sees a role for both companies and educational institutions in closing the AI skills gap. "When you look at companies, they understand the on-the-job training that workers need," she said. "They always have because that\'s where the work takes place."\nUniversities, in contrast, are increasingly offering skill-based, rather than role-based, education that\'s available on an ongoing basis and applicable across multiple jobs. "The business landscape is changing so fast. You can\'t just quit and go back and get a master\'s and learn everything new," Stave said. "We have to figure out how to modularize the learning and get it out to people in real time."\n7. Businesses adjust to an evolving regulatory environment\nAs 2024 progressed, companies were faced with a fragmented and rapidly changing regulatory landscape. Whereas the EU set new compliance standards with the passage of the AI Act in 2024, the U.S. 
remains comparatively unregulated -- a trend likely to continue in 2025 under the Trump administration.\n"One thing that I think is pretty inadequate right now is legislation [and] regulation around these tools," Sydell said. "It seems like that\'s not going to happen anytime soon at this point." Stave likewise said she\'s "not expecting significant regulation from the new administration."\nThat light-touch approach could promote AI development and innovation, but the lack of accountability also raises concerns about safety and fairness. Yee sees a need for regulation that protects the integrity of online speech, such as giving users access to provenance information about internet content, as well as anti-impersonation laws to protect creators.\nTo minimize harm without stifling innovation, Yee said she\'d like to see regulation that can be responsive to the risk level of a specific AI application. Under a tiered risk framework, she said, "low-risk AI applications can go to market faster, [while] high-risk AI applications go through a more diligent process."\nStave also pointed out that minimal oversight in the U.S. doesn\'t necessarily mean that companies will operate in a fully unregulated environment. In the absence of a cohesive global standard, large incumbents operating in multiple regions typically end up adhering to the most stringent regulations by default. In this way, the EU\'s AI Act could end up functioning similarly to GDPR, setting de facto standards for companies building or deploying AI worldwide.\n8. AI-related security concerns escalate\nThe widespread availability of generative AI, often at low or no cost, gives threat actors unprecedented access to tools for facilitating cyberattacks. That risk is poised to increase in 2025 as multimodal models become more sophisticated and readily accessible.\nIn a recent public warning, the FBI described several ways cybercriminals are using generative AI for phishing scams and financial fraud. 
For example, an attacker targeting victims via a deceptive social media profile might write convincing bio text and direct messages with an LLM, while using AI-generated fake photos to lend credibility to the false identity.\nAI video and audio pose a growing threat, too. Historically, models have been limited by telltale signs of inauthenticity, like robotic-sounding voices or lagging, glitchy video. While today\'s versions aren\'t perfect, they\'re significantly better, especially if an anxious or time-pressured victim isn\'t looking or listening too closely.\nAudio generators can enable hackers to impersonate a victim\'s trusted contacts, such as a spouse or colleague. Video generation has so far been less common, as it\'s more expensive and offers more opportunities for error. But, in a highly publicized incident earlier this year, scammers successfully impersonated a company\'s CFO and other staff members on a video call using deepfakes, leading a finance worker to send $25 million to fraudulent accounts.\nOther security risks are tied to vulnerabilities within models themselves, rather than social engineering. Adversarial machine learning and data poisoning, where inputs and training data are intentionally designed to mislead or corrupt models, can damage AI systems themselves. To account for these risks, businesses will need to treat AI security as a core part of their overall cybersecurity strategies.\nLev Craig covers AI and machine learning as site editor for TechTarget\'s Enterprise AI site. 
Craig graduated from Harvard University with a bachelor\'s degree in English and has previously written about enterprise IT, software development and cybersecurity.\nNext Steps\nThe year in AI: Catch up on the top AI news of 2024\nWays enterprise AI will transform IT infrastructure this year\nRelated Resources\nAI business strategies for successful transformation\n–Video\nRedesigning Productivity in the Age of Cognitive Acceleration\n–Replay\nDig Deeper on AI business strategies\nNvidia\'s new model aims to move GenAI to physical world\nBy: Esther\xa0Shittu\nNot-so-obvious AI predictions for 2025\nWhat are autonomous AI agents and which vendors offer them?\nBy: Alexander\xa0Gillis\nWhat are AI agents?\nBy: Kinza\xa0Yasar\nSponsored News\nPower Your Generative AI Initiatives With High-Performance, Reliable, ...\n–Dell Technologies and Intel\nPrivate AI Demystified\n–Equinix\nSustainability, AI and Dell PowerEdge Servers\n–Dell Technologies and Intel\nSee More\nRelated Content\nNvidia\'s new model aims to move GenAI to physical ...\n– Search Enterprise AI\nOracle boosts generative AI service and intros new ...\n– Search Enterprise AI\nNew Google Gemini AI tie-ins dig into local codebases\n– Search Software Quality\nLatest TechTarget resources\nBusiness Analytics\nCIO\nData Management\nERP\nSearch Business Analytics\nDevelop data literacy skills to advance your career\nData literacy skills are the foundation of data-driven decision-making. Identify your current skill level and learn what you must...\nThoughtSpot adds data preparation with Analyst Studio launch\nLong focused largely on analytics, the vendor\'s new data preparation environment marks a foray into data management so users can ...\n8 top predictive analytics tools for 2025\nPredictive analytics tools are evolving. Enhanced with AI, easier to use and geared to both data scientists and business users, ...\nSearch CIO\nU.S. TikTok ban will affect small businesses\nThe Supreme Court upholds the U.S. 
TikTok ban, which means businesses that have used the app to reach and grow audiences will no ...\n10 trends shaping the future of BPM in 2025\nBusiness process management is evolving rapidly as advanced automation, software integration, process simulation and generative ...\nTrump\'s tech policy appointments ready to unleash AI\nPresident-elect Donald Trump\'s tech policy team at the White House Office of Science and Technology Policy will strongly ...\nSearch Data Management\nSnowflake takes aim at lowering GenAI development costs\nBy integrating its recently developed SwiftKV capabilities with LLMs, the vendor aims to make models more efficient so that ...\nDBT Labs acquires SDF Labs to boost data transformation\nBy adding SQL comprehension capabilities, users will be able to validate code as it\'s written, speeding the transformation ...\nOracle Exadata update boosts performance to meet AI needs\nWith database workloads growing due to the demands of AI development and real-time analytics, the tech giant\'s latest database ...\nSearch ERP\n7 benefits of using a 3PL provider for reverse logistics\nA 3PL with experience working with supply chain partners and expertise in returns can help simplify a company\'s operations. 
Learn...\n9 ERP trends for 2025 and beyond\nMulti-tenant SaaS, AI and automation are reshaping an ERP market that continues its long march to the cloud, as buyers seek ...\nChallenges for manufacturing\'s digital shift in 2025\nManufacturers will continue digital transformation initiatives in 2025, although some will struggle to make those moves pay off.\nAbout Us\nEditorial Ethics Policy\nMeet The Editors\nContact Us\nAdvertisers\nPartner with Us\nMedia Kit\nCorporate Site\nContributors\nReprints\nAnswers\nDefinitions\nE-Products\nEvents\nFeatures\nGuides\nOpinions\nPhoto Stories\nQuizzes\nTips\nTutorials\nVideos\nAll Rights Reserved,\nCopyright 2018 - 2025, TechTarget\nPrivacy Policy\nCookie Preferences\nCookie Preferences\nDo Not Sell or Share My Personal Information\nClose'),
Document(metadata={'title': 'Five Trends in AI and Data Science for 2025', 'snippet': 'Jan 8, 2025 — 1. Leaders will grapple with both the promise and hype around agentic AI. · 2. The time has come to measure results from generative AI\xa0...', 'url': '', 'position': 2, 'entity_type': 'OrganicResult'}, page_content='Five Trends in AI and Data Science for 2025\nMobile Menu\nMenu\nSearch\nTopics\n< Back to Menu\nData, AI, & Machine Learning\nInnovation\nLeadership\nManaging Technology\nMarketing\nOperations\nSocial Responsibility\nStrategy\nWorkplace, Teams, & Culture\nAll Topics\nTrending\nAI & Machine Learning\nOrganizational Culture\nHybrid Work\nOur Research\n< Back to Menu\nBig ideas Research Projects\nArtificial Intelligence and Business Strategy\nResponsible AI\nFuture of the Workforce\nFuture of Leadership\nAll Research Projects\nSpotlight\n< Back to Menu\nMost Popular\nAI in Action\nHybrid Work\nCoaching for the Future-Forward Leader\nCulture Champions\nMeasuring Culture\nMagazine\n< Back to Menu\nWinter 2025 Issue\nOur winter 2025 issue focuses on improving work design, implementing AI, increasing employee engagement, and more.\nPast Issues\nWebinars & Podcasts\n< Back to Menu\nUpcoming Events\nVideo Archive\nPodcasts\nMe, Myself, and AI\nSubscribe Now\nSave 22% on Unlimited Access.\nSubscribe\nTopics\nData, AI, & Machine Learning\nInnovation\nLeadership\nManaging Technology\nMarketing\nOperations\nSocial Responsibility\nStrategy\nWorkplace, Teams, & Culture\nAll Topics\nTrending\nAI & Machine Learning\nOrganizational Culture\nHybrid Work\nOur Research\nBig ideas Research Projects\nArtificial Intelligence and Business Strategy\nResponsible AI\nFuture of the Workforce\nFuture of Leadership\nAll Research Projects\nSpotlight\nMost Popular\nAI in Action\nHybrid Work\nCoaching for the Future-Forward Leader\nCulture Champions\nMeasuring Culture\nMagazine\nWinter 2025 Issue\nOur winter 2025 issue focuses on improving work design, implementing AI, increasing 
employee engagement, and more.\nPast Issues\nWebinars & Podcasts\nUpcoming Events\nVideo Archive\nPodcasts\nMe, Myself, and AI\nSearch\nStore\nSign In\nSubscribe —\xa0 22% off\nColumn Five Trends in AI and Data Science for 2025\nFrom agentic AI to unstructured data, these 2025 AI trends deserve close attention from leaders. Get fresh data and advice from two experts.\nThomas H. Davenport and Randy Bean\nJanuary 08, 2025\nReading Time: 10 min\nTopics\nData, AI, & Machine Learning\nManaging Technology\nAI & Machine Learning\nData & Data Culture\nIT Governance & Leadership\nTechnology Implementation\nColumn\nOur expert columnists offer opinion and analysis on important issues facing modern businesses and managers.\nMore in this series\nsubscribe-icon\nSubscribe\nShare\nTwitter\nFacebook\nLinkedin\nCarolyn Geason-Beissel/MIT SMR | Getty Images\nThis is the time of year for predictions and trend analyses, and as data science and artificial intelligence become increasingly important to the global economy, it’s vital that leaders watch emerging AI trends.\nNobody seems to use AI to make these predictions, and we won’t either, as we share our list of AI trends that will matter in 2025. But we will incorporate the latest research whenever possible. Randy has just completed his annual survey of data, analytics, and AI executives, the 2025 AI & Data Leadership Executive Benchmark Survey, conducted by his educational firm, Data & AI Leadership Exchange; and Tom has worked on several surveys on generative AI and data, technology leadership structures, and, most recently, agentic AI.\nHere are the 2025 AI trends on our radar screens that leaders should understand and monitor.\nGet Updates on Leading With AI and Data\nGet monthly insights on how artificial intelligence impacts your organization and what it means for your company and customers.\nsign\xa0up\nPlease enter a valid email address\nThank you for signing up\nPrivacy Policy\n1. 
Leaders will grapple with both the promise and hype around agentic AI.\nLet’s get agentic AI — the kind of AI that does tasks independently — out of the way first: It’s a sure bet for 2025’s “most trending AI trend.” Agentic AI seems to be on an inevitable rise: Everybody in the tech vendor and analyst worlds is excited about the prospect of having AI programs collaborate to do real work instead of just generating content, even though nobody is entirely sure how it will all work. Some IT leaders think they already have it (37%, in a forthcoming UiPath-sponsored survey of 252 U.S. IT leaders); most expect it soon and are ready to spend money on it (68% within six months or less); and a few skeptics (primarily encountered by us in interviews) think it’s mostly vendor hype.\nMost technology executives believe that these autonomous and collaborative AI programs will be primarily based on focused generative AI bots that will perform specific tasks. Most people believe that there will be a network of these agents, and many are hoping that the agent ecosystems will need less human intervention than AI has required in the past. Some believe that the technology will all be orchestrated by robotic process automation tools; some propose that agents will be fetched by enterprise transaction systems; and some posit the emergence of an “uber agent” that will control everything.\nThe earliest agentic AI tools will be those for small, structured internal tasks with little money involved.\nHere’s what we think: There will be (and in some cases, already are) generative AI bots that will do people’s bidding on specific content creation tasks. It will require more than one of these agentic AI tools to do something significant, such as make a travel reservation or conduct a banking transaction. But these systems still work by predicting the next word, and sometimes that will lead to errors or inaccuracies. 
So there will still be a need for humans to check in on them every now and then.\nThe earliest agents will be those for small, structured internal tasks with little money involved — for instance, helping change your password on the IT side, or reserving time off for vacations in HR systems. We don’t see much likelihood of companies turning these agents loose on real customers spending real money anytime soon, unless there’s the opportunity for human review or the reversal of a transaction. As a result, we don’t foresee a major impact on the human workforce from this technology in 2025, except for new jobs writing blog posts about agentic AI. (Wait, can agents do that?)\n2. The time has come to measure results from generative AI experiments.\nOne of the reasons why everybody is excited about agents is that as of 2024, it has still proved difficult to demonstrate economic value from generative AI. We argued in last year’s AI trends article that the value of GenAI still needed to be demonstrated. Data and AI leaders in Randy’s 2025 AI & Data Leadership Executive Benchmark Survey said they are confident that GenAI value is being generated: Fifty-eight percent said that their organization has achieved exponential productivity or efficiency gains from AI, presumably mostly from generative AI. Another 16% said that they have “liberated knowledge workers from mundane tasks” through the use of GenAI tools. Let’s hope that these highly positive beliefs are correct.\nBut companies shouldn’t take such confidence on faith. Very few companies are actually measuring productivity gains carefully or figuring out what the liberated knowledge workers are doing with their freed-up time. Only a few academic studies have measured GenAI productivity gains, and when they have, they’ve generally found some improvements, but not exponential ones. Goldman Sachs is one of the rare companies that has measured productivity gains in the area of programming. 
Developers there reported that their productivity increased by about 20%. Most similar studies have found contingent factors in productivity, where either inexperienced workers gain more (as in customer service and consulting) or experienced workers do better (as in code generation).\nIn many cases, the best way to measure productivity gains will be to establish controlled experiments. For example, a company could have one group of marketers use generative AI to create content without human review, one use it with human review, and a control group not use it at all. Again, few companies are doing this, and this will need to change. Given that GenAI is primarily about content generation for many companies right now, if we want to really understand the benefits, we’ll also have to start measuring content quality. That’s notoriously difficult to do with knowledge work output. However, if GenAI helps write blog posts much faster but the posts are boring and inaccurate, that’s important to measure: There will be little benefit in that particular use case.\nThe sad fact is that if many organizations are actually to achieve exponential productivity gains, those improvements may be measured in large-scale layoffs. But there is no sign of mass layoffs in the employment statistics. Additionally, a Nobel Prize winner in economics this year, MIT’s Daron Acemoglu, has commented that we haven’t seen real productivity gains from AI thus far, and he doesn’t expect to see anything dramatic over the next several years — perhaps a 0.5% increase over the next decade. In any case, if companies are really going to see and profit from GenAI, they’re going to need to measure and experiment to see the benefits.\n3. Reality about data-driven culture sets in.\nWe seem to be realizing that generative AI is very cool but doesn’t change everything, specifically long-term cultural attributes. 
In our trend article last year, we noted that Randy’s survey found that the percentage of company respondents who said that their organization had “created a data and AI-driven organization” and “established a data and AI-driven organizational culture” both doubled over the prior year (from 24% to 48% for creating data- and AI-driven organizations, and from 21% to 43% for establishing data-driven cultures). We were both somewhat astonished at this dramatic reported improvement, and we attributed the changes to generative AI, since it was very widely publicized and adopted rapidly by organizations.\nOur long-term prediction is that generative AI alone is not enough to make organizations and cultures data-driven.\nThis year, the numbers have settled back to Earth a bit. Thirty-seven percent of those surveyed said they work in a data- and AI-driven organization, and 33% said they have a data- and AI-driven culture. It’s still a good thing that data and AI leaders feel that their organizations have improved in this regard over the distant past, but our long-term prediction is that generative AI alone is not enough to make organizations and cultures data-driven.\nIn the same survey, 92% of the respondents said they feel that cultural and change management challenges are the primary barrier to becoming data- and AI-driven. This suggests that any technology alone is insufficient. It’s worth noting that most of the surveyed employees were from legacy organizations that were founded over a generation ago and have a history of transforming gradually. Many of these companies did more to execute on their digital strategies during the pandemic than they had in the previous two decades.\n4. Unstructured data is important again.\nGenerative AI has had another impact on organizations: It’s making unstructured data important again. In the 2025 AI & Data Leadership Executive Benchmark Survey, 94% of data and AI leaders said that interest in AI is leading to a greater focus on data. 
Since traditional analytical AI has been around for several decades, we think they were referring to GenAI’s impact. In another survey that we mentioned in last year’s AI trends article, there was substantial evidence that most companies hadn’t yet started to really manage data to get ready for generative AI.\nThe great majority of the data that GenAI works with is relatively unstructured, in forms such as text, images, video, and the like. A leader at one large insurance organization recently shared with Randy that 97% of the company’s data was unstructured. Many companies are interested in using GenAI to help manage and provide access to their own data and documents, typically using an approach called retrieval-augmented generation, or RAG. But some companies haven’t worked on their unstructured data much since the days of knowledge management 20 or more years ago. They’ve been focused on structured data — typically rows and columns of numbers from transactional systems.\nTo get unstructured data into shape, organizations need to pick the best examples of each document type, tag or graph the content, and get it loaded into the system. (Welcome to the arcane world of embeddings, vector databases, and similarity search algorithms.) These approaches do provide considerable knowledge-access benefits for employees, which is why many organizations are pursuing them. But this work is still human-intensive. At some point, perhaps, we’ll be able to just load tons of our internal documents into a GenAI prompt window, but 2025 is unlikely to be that time. Even when that’s possible, there will still be a need for considerable human curation of the data — because ChatGPT can’t tell which is the best of 20 different sales proposals.\nRelated Articles Three Nonnegotiable Leadership Skills for 2025 | Melissa Swift Five Hybrid Work Trends to Watch in 2025 | Brian Elliott How Scotiabank Built an Ethical, Engaged AI Culture | Thomas H. 
Davenport and Randy Bean Analytical AI: A Better Way to Identify the Right AI Projects\n5. Who should run data and AI? Expect continued struggle.\nIt should perhaps come as no surprise that while data and attempts to exploit it with AI are receiving increasing amounts of organizational attention and investment, the data leadership function itself is continuing to struggle. The role is still relatively nascent — just 12% of organizations in Randy’s first annual executive survey back in 2012 had appointed a chief data officer. Progress is being made: Eighty-five percent of organizations in Randy’s newest survey have named a chief data officer, and increasing percentages of those data leaders are primarily focused on growth, innovation, and transformation (as opposed to avoiding risk or regulatory problems). More organizations have also named chief AI officers — a surprising 33%.\nWhile these roles continue to evolve, organizations continue to wrestle with their mandates, responsibilities, and reporting structures. Fewer than half of data leaders (mostly chief data officers) who responded to Randy’s AI & Data Leadership Executive Benchmark Survey said their function is very successful and well established, and only 51% said they feel that the job is well understood within their organizations. We are still not sure that the responsibilities of a chief AI officer and a chief data (and analytics/AI) officer demand separate roles, though some organizations, including Capital One and Cleveland Clinic, have established the chief AI officer role as a peer to the chief data officer.\nThe one thing that we can say with confidence is that the demand for data and AI leadership will only grow, under whatever shape, form, and structure this demand entails.\nWe’re of two minds about the broader future of the chief data and AI officer. Randy firmly believes that the role of CDAO should be a business role reporting into business leadership. 
He notes that 36% of data and AI leaders in his survey this year reported to either the CEO, president, or COO. Randy strongly believes that data and AI leaders need to deliver measurable business value, and to understand and speak the language of the business.\nTom agrees that tech leaders need to be more focused on business value. But as we argued in last year’s trend report, he feels that there are too many “tech chiefs,” including CDAOs, in most organizations. Many of those CDAOs themselves feel that their internal customers are confused by all of the C-level tech executives and that the proliferation of such roles makes it both difficult to collaborate and unlikely that they will report to the CEO. Tom would prefer to see “supertech leaders,” with all of the tech roles reporting to them, as is the case in a growing number of companies that have promoted transformation-minded CIOs to fill the role. Whatever the right answer is, it’s clear that organizations must make some interventions and make those who lead data as respected as the data itself.\nTopics\nData, AI, & Machine Learning\nManaging Technology\nAI & Machine Learning\nData & Data Culture\nIT Governance & Leadership\nTechnology Implementation\nColumn\nOur expert columnists offer opinion and analysis on important issues facing modern businesses and managers.\nMore in this series\nAbout the Authors\nThomas H. Davenport (@tdav) is the President’s Distinguished Professor of Information Technology and Management at Babson College, the Bodily Bicentennial Professor of Analytics at the University of Virginia Darden School of Business, a fellow of the MIT Initiative on the Digital Economy, and senior adviser to the Deloitte Chief Data and Analytics Officer Program. His latest book is All Hands on Tech: The AI-Powered Citizen Revolution (Wiley, 2024). Randy Bean (@RandyBeanNVP) is an adviser to Fortune 1000 organizations on data and AI leadership. 
He is the author of\u202fFail Fast, Learn Faster: Lessons in Data-Driven Leadership in an Age of Disruption, Big Data, and AI\u202f(Wiley, 2021).\nTags:\nAnalytics & Organizational Culture\nArtificial Intelligence\nData Strategy\nGenerative AI\nProductivity\nMore Like This\nHow GenAI Helps USAA Innovate | Thomas H. Davenport and Randy Bean Will AI Help or Hurt Sustainability? Yes | Andrew Winston The GenAI App Step You’re Skimping On: Evaluations | Rama Ramakrishnan Four Leadership Loads That Keep Getting Heavier | Melissa Swift\nAdd a comment Cancel replyYou must sign in to post a comment.First time here? Sign up for a free account: Comment on articles and get access to many more articles.\nCopyright © Massachusetts Institute of Technology, 1977–2025. All rights reserved.\nHome\nOrganization Subscriptions\nAbout Us\nNewsletters\nStore\nAdvertise With Us\nContact Us\nRepublishing\nHelp\nAuthor Guidelines\nGet free, timely updates from MIT SMR with new ideas, research, frameworks, and more.\nsign\xa0up\nPlease enter a valid email address\nThank you for signing up\nPrivacy Policy\nFollow Us\nFacebook\nX\nLinkedin\nYoutube\nInstagram\nLogin\nCreate an Account\nBusiness Access\n✓Thanks for sharing!AddToAnyMore…\nclose search by queryly Advanced Search\nundefined')]
Each item in the result is a LangChain `Document`. A single `Document` from the result above looks like the following:
import json

# Inspect the first Document returned by the retriever
example_doc = retriever.invoke(query)[0]

# json.dumps escapes newlines and other special characters,
# which makes the raw scraped text easier to scan
print("Page Content: \n", json.dumps(example_doc.page_content, indent=2))
print("Metadata: \n", json.dumps(example_doc.metadata, indent=2))
Page Content:
"8 AI and machine learning trends to watch in 2025 | TechTarget\nSearch Enterprise AI\nSearch the TechTarget Network\nLogin\nRegister\nExplore the Network\nTechTarget Network\nBusiness Analytics\nCIO\nData Management\nERP\nSearch Enterprise AI\nAI Business Strategies\nAI Careers\nAI Infrastructure\nAI Platforms\nAI Technologies\nMore Topics\nApplications of AI\nML Platforms\nOther Content\nNews\nFeatures\nTips\nWebinars\n2024 IT Salary Survey Results\nSponsored Sites\nMore\nAnswers\nConference Guides\nDefinitions\nOpinions\nPodcasts\nQuizzes\nTech Accelerators\nTutorials\nVideos\nFollow:\nHome\nAI business strategies\nTech Accelerator\nWhat is enterprise AI? A complete guide for businesses\nPrev\nNext\n8 jobs that AI can't replace and why\n10 top artificial intelligence certifications and courses for 2025\nDownload this guide1\nX\nFree Download\nA guide to artificial intelligence in the enterprise\nThis wide-ranging guide to artificial intelligence in the enterprise provides the building blocks for becoming successful business consumers of AI technologies. It starts with introductory explanations of AI's history, how AI works and the main types of AI. The importance and impact of AI is covered next, followed by information on AI's key benefits and risks, current and potential AI use cases, building a successful AI strategy, steps for implementing AI tools in the enterprise and technological breakthroughs that are driving the field forward. Throughout the guide, we include hyperlinks to TechTarget articles that provide more detail and insights on the topics discussed.\nFeature\n8 AI and machine learning trends to watch in 2025\nAI agents, multimodal models, an emphasis on real-world results -- learn about the top AI and machine learning trends and what they mean for businesses in 2025.\nShare this item with your network:\nBy\nLev Craig,\nSite Editor\nPublished: 03 Jan 2025\nGenerative AI is at a crossroads. 
It's now more than two years since ChatGPT's launch, and the initial optimism about AI's potential is decidedly tempered by an awareness of its limitations and costs.\nThe 2025 AI landscape reflects that complexity. While excitement still abounds -- particularly for emerging areas, like agentic AI and multimodal models -- it's also poised to be a year of growing pains.\nCompanies are increasingly looking for proven results from generative AI, rather than early-stage prototypes. That's no easy feat for a technology that's often expensive, error-prone and vulnerable to misuse. And regulators will need to balance innovation and safety, while keeping up with a fast-moving tech environment.\nHere are eight of the top AI trends to prepare for in 2025.\n1. Hype gives way to more pragmatic approaches\nSince 2022, there's been an explosion of interest and innovation in generative AI, but actual adoption remains inconsistent. Companies often struggle to move generative AI projects, whether internal productivity tools or customer-facing applications, from pilot to production.\nThis article is part of\nWhat is enterprise AI? A complete guide for businesses\nWhich also includes:\nHow can AI drive revenue? Here are 10 approaches\n8 jobs that AI can't replace and why\n8 AI and machine learning trends to watch in 2025\nAlthough many businesses have explored generative AI through proofs of concept, fewer have fully integrated it into their operations. In a September 2024 research report, Informa TechTarget's Enterprise Strategy Group found that, although over 90% of organizations had increased their generative AI use over the previous year, only 8% considered their initiatives mature.\n\"The most surprising thing for me [in 2024] is actually the lack of adoption that we're seeing,\" said Jen Stave, launch director for the Digital Data Design Institute at Harvard University. \"When you look across businesses, companies are investing in AI. They're building their own custom tools. 
They're buying off-the-shelf enterprise versions of the large language models (LLMs). But there really hasn't been this groundswell of adoption within companies.\"\nOne reason for this is AI's uneven impact across roles and job functions. Organizations are discovering what Stave termed the \"jagged technological frontier,\" where AI enhances productivity for some tasks or employees, while diminishing it for others. A junior analyst, for example, might significantly increase their output by using a tool that only bogs down a more experienced counterpart.\n\"Managers don't know where that line is, and employees don't know where that line is,\" Stave said. \"So, there's a lot of uncertainty and experimentation.\"\nDespite the sky-high levels of generative AI hype, the reality of slow adoption is hardly a surprise to anyone with experience in enterprise tech. In 2025, expect businesses to push harder for measurable outcomes from generative AI: reduced costs, demonstrable ROI and efficiency gains.\n2. Generative AI moves beyond chatbots\nWhen most laypeople hear the term generative AI, they think of tools like ChatGPT and Claude powered by LLMs. Early explorations from businesses, too, have tended to involve incorporating LLMs into products and services via chat interfaces. But, as the technology matures, AI developers, end users and business customers alike are looking beyond chatbots.\n\"People need to think more creatively about how to use these base tools and not just try to plop a chat window into everything,\" said Eric Sydell, founder and CEO of Vero AI, an AI and analytics platform.\nThis transition aligns with a broader trend: building software atop LLMs rather than deploying chatbots as standalone tools. Moving from chatbot interfaces to applications that use LLMs on the back end to summarize or parse unstructured data can help mitigate some of the issues that make generative AI difficult to scale.\n\"[A chatbot] can help an individual be more effective ... 
but it's very one on one,\" Sydell said. \"So, how do you scale that in an enterprise-grade way?\"\nHeading into 2025, some areas of AI development are starting to move away from text-based interfaces entirely. Increasingly, the future of AI looks to center around multimodal models, like OpenAI's text-to-video Sora and ElevenLabs' AI voice generator, which can handle nontext data types, such as audio, video and images.\n\"AI has become synonymous with large language models, but that's just one type of AI,\" Stave said. \"It's this multimodal approach to AI [where] we're going to start seeing some major technological advancements.\"\nRobotics is another avenue for developing AI that goes beyond textual conversations -- in this case, to interact with the physical world. Stave anticipates that foundation models for robotics could be even more transformative than the arrival of generative AI.\n\"Think about all of the different ways we interact with the physical world,\" she said. \"I mean, the applications are just infinite.\"\n3. AI agents are the next frontier\nThe second half of 2024 has seen growing interest in agentic AI models capable of independent action. Tools like Salesforce's Agentforce are designed to autonomously handle tasks for business users, managing workflows and taking care of routine actions, like scheduling and data analysis.\nAgentic AI is in its early stages. Human direction and oversight remain critical, and the scope of actions that can be taken is usually narrowly defined. But, even with those limitations, AI agents are attractive for a wide range of sectors.\nAutonomous functionality isn't totally new, of course; by now, it's a well-established cornerstone of enterprise software. The difference with AI agents lies in their adaptability: Unlike simple automation software, agents can adapt to new information in real time, respond to unexpected obstacles and make independent decisions.\nYet, that same independence also entails new risks. 
Grace Yee, senior director of ethical innovation at Adobe, warned of \"the harm that can come ... as agents can start, in some cases, acting upon your behalf to help with scheduling or do other tasks.\" Generative AI tools are notoriously prone to hallucinations, or generating false information -- what happens if an autonomous agent makes similar mistakes with immediate, real-world consequences?\nSydell cited similar concerns, noting that some use cases will raise more ethical issues than others. \"When you start to get into high-risk applications -- things that have the potential to harm or help individuals -- the standards have to be way higher,\" he said.\nCompared with generative AI, agentic AI offers greater autonomy and adaptability.\n4. Generative AI models become commodities\nThe generative AI landscape is evolving rapidly, with foundation models seemingly now a dime a dozen. As 2025 begins, the competitive edge is moving away from which company has the best model to which businesses excel at fine-tuning pretrained models or developing specialized tools to layer on top of them.\nIn a recent newsletter, analyst Benedict Evans compared the boom in generative AI models to the PC industry of the late 1980s and 1990s. In that era, performance comparisons focused on incremental improvements in specs like CPU speed or memory, similar to how today's generative AI models are evaluated on niche technical benchmarks.\nOver time, however, these distinctions faded as the market reached a good-enough baseline, with differentiation shifting to factors such as cost, UX and ease of integration. Foundation models seem to be on a similar trajectory: As performance converges, advanced models are becoming more or less interchangeable for many use cases.\nIn a commoditized model landscape, the focus is no longer number of parameters or slightly better performance on a certain benchmark, but instead usability, trust and interoperability with legacy systems. 
In that environment, AI companies with established ecosystems, user-friendly tools and competitive pricing are likely to take the lead.\n5. AI applications and data sets become more domain-specific\nLeading AI labs, like OpenAI and Anthropic, claim to be pursuing the ambitious goal of creating artificial general intelligence (AGI), commonly defined as AI that can perform any task a human can. But AGI -- or even the comparatively limited capabilities of today's foundation models -- is far from necessary for most business applications.\nFor enterprises, interest in narrow, highly customized models started almost as soon as the generative AI hype cycle began. A narrowly tailored business application simply doesn't require the degree of versatility necessary for a consumer-facing chatbot.\n\"There's a lot of focus on the general-purpose AI models,\" Yee said. \"But I think what is more important is really thinking through: How are we using that technology ... and is that use case a high-risk use case?\"\nIn short, businesses should consider more than what technology is being deployed and instead think more deeply about who will ultimately be using it and how. \"Who's the audience?\" Yee said. \"What's the intended use case? What's the domain it's being used in?\"\nAlthough, historically, larger data sets have driven model performance improvements, researchers and practitioners are debating whether this trend can hold. Some have suggested that, for certain tasks and populations, model performance plateaus -- or even worsens -- as algorithms are fed more data.\n\"The motivation for scraping ever-larger data sets may be based on fundamentally flawed assumptions about model performance,\" authors Fernando Diaz and Michael Madaio wrote in their paper \"Scaling Laws Do Not Scale.\" \"That is, models may not, in fact, continue to improve as the data sets get larger -- at least not for all people or communities impacted by those models.\"\n6. 
AI literacy becomes essential\nGenerative AI's ubiquity has made AI literacy an in-demand skill for everyone from executives to developers to everyday employees. That means knowing how to use these tools, assess their outputs and -- perhaps most importantly -- navigate their limitations.\nNotably, although AI and machine learning talent remains in demand, developing AI literacy doesn't need to mean learning to code or train models. \"You don't necessarily have to be an AI engineer to understand these tools and how to use them and whether to use them,\" Sydell said. \"Experimenting, exploring, using the tools is massively helpful.\"\nAmid the persistent generative AI hype, it can be easy to forget that the technology is still relatively new. Many people either haven't used it at all or don't use it regularly: A recent research paper found that, as of August 2024, less than half of Americans aged 18 to 64 use generative AI, and just over a quarter use it at work.\nThat's a faster pace of adoption compared with the PC or the internet, as the paper's authors pointed out, but it's still not a majority. There's also a gap between businesses' official stances on generative AI and how real workers are using it in their day-to-day tasks.\n\"If you look at how many companies say they're using it, it's actually a pretty low share who are formally incorporating it into their operations,\" David Deming, professor at Harvard University and one of the paper's authors, told The Harvard Gazette. \"People are using it informally for a lot of different purposes, to help write emails, using it to look up things, using it to obtain documentation on how to do something.\"\nStave sees a role for both companies and educational institutions in closing the AI skills gap. \"When you look at companies, they understand the on-the-job training that workers need,\" she said. 
\"They always have because that's where the work takes place.\"\nUniversities, in contrast, are increasingly offering skill-based, rather than role-based, education that's available on an ongoing basis and applicable across multiple jobs. \"The business landscape is changing so fast. You can't just quit and go back and get a master's and learn everything new,\" Stave said. \"We have to figure out how to modularize the learning and get it out to people in real time.\"\n7. Businesses adjust to an evolving regulatory environment\nAs 2024 progressed, companies were faced with a fragmented and rapidly changing regulatory landscape. Whereas the EU set new compliance standards with the passage of the AI Act in 2024, the U.S. remains comparatively unregulated -- a trend likely to continue in 2025 under the Trump administration.\n\"One thing that I think is pretty inadequate right now is legislation [and] regulation around these tools,\" Sydell said. \"It seems like that's not going to happen anytime soon at this point.\" Stave likewise said she's \"not expecting significant regulation from the new administration.\"\nThat light-touch approach could promote AI development and innovation, but the lack of accountability also raises concerns about safety and fairness. Yee sees a need for regulation that protects the integrity of online speech, such as giving users access to provenance information about internet content, as well as anti-impersonation laws to protect creators.\nTo minimize harm without stifling innovation, Yee said she'd like to see regulation that can be responsive to the risk level of a specific AI application. Under a tiered risk framework, she said, \"low-risk AI applications can go to market faster, [while] high-risk AI applications go through a more diligent process.\"\nStave also pointed out that minimal oversight in the U.S. doesn't necessarily mean that companies will operate in a fully unregulated environment. 
In the absence of a cohesive global standard, large incumbents operating in multiple regions typically end up adhering to the most stringent regulations by default. In this way, the EU's AI Act could end up functioning similarly to GDPR, setting de facto standards for companies building or deploying AI worldwide.\n8. AI-related security concerns escalate\nThe widespread availability of generative AI, often at low or no cost, gives threat actors unprecedented access to tools for facilitating cyberattacks. That risk is poised to increase in 2025 as multimodal models become more sophisticated and readily accessible.\nIn a recent public warning, the FBI described several ways cybercriminals are using generative AI for phishing scams and financial fraud. For example, an attacker targeting victims via a deceptive social media profile might write convincing bio text and direct messages with an LLM, while using AI-generated fake photos to lend credibility to the false identity.\nAI video and audio pose a growing threat, too. Historically, models have been limited by telltale signs of inauthenticity, like robotic-sounding voices or lagging, glitchy video. While today's versions aren't perfect, they're significantly better, especially if an anxious or time-pressured victim isn't looking or listening too closely.\nAudio generators can enable hackers to impersonate a victim's trusted contacts, such as a spouse or colleague. Video generation has so far been less common, as it's more expensive and offers more opportunities for error. But, in a highly publicized incident earlier this year, scammers successfully impersonated a company's CFO and other staff members on a video call using deepfakes, leading a finance worker to send $25 million to fraudulent accounts.\nOther security risks are tied to vulnerabilities within models themselves, rather than social engineering. 
Adversarial machine learning and data poisoning, where inputs and training data are intentionally designed to mislead or corrupt models, can damage AI systems themselves. To account for these risks, businesses will need to treat AI security as a core part of their overall cybersecurity strategies.\nLev Craig covers AI and machine learning as site editor for TechTarget's Enterprise AI site. Craig graduated from Harvard University with a bachelor's degree in English and has previously written about enterprise IT, software development and cybersecurity.\nNext Steps\nThe year in AI: Catch up on the top AI news of 2024\nWays enterprise AI will transform IT infrastructure this year\nRelated Resources\nAI business strategies for successful transformation\n\u2013Video\nRedesigning Productivity in the Age of Cognitive Acceleration\n\u2013Replay\nDig Deeper on AI business strategies\nNvidia's new model aims to move GenAI to physical world\nBy: Esther\u00a0Shittu\nNot-so-obvious AI predictions for 2025\nWhat are autonomous AI agents and which vendors offer them?\nBy: Alexander\u00a0Gillis\nWhat are AI agents?\nBy: Kinza\u00a0Yasar\nSponsored News\nPower Your Generative AI Initiatives With High-Performance, Reliable, ...\n\u2013Dell Technologies and Intel\nPrivate AI Demystified\n\u2013Equinix\nSustainability, AI and Dell PowerEdge Servers\n\u2013Dell Technologies and Intel\nSee More\nRelated Content\nNvidia's new model aims to move GenAI to physical ...\n\u2013 Search Enterprise AI\nOracle boosts generative AI service and intros new ...\n\u2013 Search Enterprise AI\nNew Google Gemini AI tie-ins dig into local codebases\n\u2013 Search Software Quality\nLatest TechTarget resources\nBusiness Analytics\nCIO\nData Management\nERP\nSearch Business Analytics\nDevelop data literacy skills to advance your career\nData literacy skills are the foundation of data-driven decision-making. 
Identify your current skill level and learn what you must...\nThoughtSpot adds data preparation with Analyst Studio launch\nLong focused largely on analytics, the vendor's new data preparation environment marks a foray into data management so users can ...\n8 top predictive analytics tools for 2025\nPredictive analytics tools are evolving. Enhanced with AI, easier to use and geared to both data scientists and business users, ...\nSearch CIO\nU.S. TikTok ban will affect small businesses\nThe Supreme Court upholds the U.S. TikTok ban, which means businesses that have used the app to reach and grow audiences will no ...\n10 trends shaping the future of BPM in 2025\nBusiness process management is evolving rapidly as advanced automation, software integration, process simulation and generative ...\nTrump's tech policy appointments ready to unleash AI\nPresident-elect Donald Trump's tech policy team at the White House Office of Science and Technology Policy will strongly ...\nSearch Data Management\nSnowflake takes aim at lowering GenAI development costs\nBy integrating its recently developed SwiftKV capabilities with LLMs, the vendor aims to make models more efficient so that ...\nDBT Labs acquires SDF Labs to boost data transformation\nBy adding SQL comprehension capabilities, users will be able to validate code as it's written, speeding the transformation ...\nOracle Exadata update boosts performance to meet AI needs\nWith database workloads growing due to the demands of AI development and real-time analytics, the tech giant's latest database ...\nSearch ERP\n7 benefits of using a 3PL provider for reverse logistics\nA 3PL with experience working with supply chain partners and expertise in returns can help simplify a company's operations. 
Learn...\n9 ERP trends for 2025 and beyond\nMulti-tenant SaaS, AI and automation are reshaping an ERP market that continues its long march to the cloud, as buyers seek ...\nChallenges for manufacturing's digital shift in 2025\nManufacturers will continue digital transformation initiatives in 2025, although some will struggle to make those moves pay off.\nAbout Us\nEditorial Ethics Policy\nMeet The Editors\nContact Us\nAdvertisers\nPartner with Us\nMedia Kit\nCorporate Site\nContributors\nReprints\nAnswers\nDefinitions\nE-Products\nEvents\nFeatures\nGuides\nOpinions\nPhoto Stories\nQuizzes\nTips\nTutorials\nVideos\nAll Rights Reserved,\nCopyright 2018 - 2025, TechTarget\nPrivacy Policy\nCookie Preferences\nCookie Preferences\nDo Not Sell or Share My Personal Information\nClose"
Metadata:
{
"title": "8 AI and machine learning trends to watch in 2025",
"snippet": "Jan 3, 2025 \u2014 1. Hype gives way to more pragmatic approaches \u00b7 2. Generative AI moves beyond chatbots \u00b7 3. AI agents are the next frontier \u00b7 4. Generative AI\u00a0...",
"url": "https://www.techtarget.com/searchenterpriseai/tip/9-top-AI-and-machine-learning-trends",
"position": 1,
"entity_type": "OrganicResult"
}
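This metadata travels with each retrieved Document, which makes it straightforward to cite sources alongside an answer. A minimal sketch using only the standard library (the Doc class below is a stand-in for LangChain's Document, and cite_sources is a hypothetical helper, not part of the Nimble API):

```python
from dataclasses import dataclass, field

@dataclass
class Doc:
    """Stand-in for langchain_core.documents.Document."""
    page_content: str
    metadata: dict = field(default_factory=dict)

def cite_sources(docs):
    """Build '[position] title - url' citation lines from document metadata."""
    return [
        f"[{d.metadata.get('position')}] {d.metadata.get('title')} - {d.metadata.get('url')}"
        for d in docs
    ]

docs = [Doc(
    "article text",
    {"title": "8 AI and machine learning trends to watch in 2025",
     "url": "https://www.techtarget.com/searchenterpriseai/tip/9-top-AI-and-machine-learning-trends",
     "position": 1},
)]
for line in cite_sources(docs):
    print(line)
```
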
Example of Retrieve mode, passing a list of URLs
# Retrieve mode: pass the target URLs via `links`; the query input is left empty
retriever = NimbleSearchRetriever(links=["example.com"])
retriever.invoke(input="")
Result
[Document(metadata={'title': None, 'snippet': None, 'url': 'https://example.com', 'position': None, 'entity_type': 'HtmlContent'}, page_content='<!doctype html>\n<html>\n<head>\n <title>Example Domain</title>\n\n <meta charset="utf-8" />\n <meta http-equiv="Content-type" content="text/html; charset=utf-8" />\n <meta name="viewport" content="width=device-width, initial-scale=1" />\n <style type="text/css">\n body {\n background-color: #f0f0f2;\n margin: 0;\n padding: 0;\n font-family: -apple-system, system-ui, BlinkMacSystemFont, "Segoe UI", "Open Sans", "Helvetica Neue", Helvetica, Arial, sans-serif;\n \n }\n div {\n width: 600px;\n margin: 5em auto;\n padding: 2em;\n background-color: #fdfdff;\n border-radius: 0.5em;\n box-shadow: 2px 3px 7px 2px rgba(0,0,0,0.02);\n }\n a:link, a:visited {\n color: #38488f;\n text-decoration: none;\n }\n @media (max-width: 700px) {\n div {\n margin: 0 auto;\n width: auto;\n }\n }\n </style> \n</head>\n\n<body>\n<div>\n <h1>Example Domain</h1>\n <p>This domain is for use in illustrative examples in documents. You may use this\n domain in literature without prior coordination or asking for permission.</p>\n <p><a href="https://www.iana.org/domains/example">More information...</a></p>\n</div>\n</body>\n</html>\n')]
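Note that in Retrieve mode the page_content is raw HTML (entity_type "HtmlContent"), so you will usually want to strip markup before passing the text to an LLM. A minimal stdlib-only sketch (TextExtractor and html_to_text are our own illustrative helpers, not part of the Nimble or LangChain APIs):

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects visible text, skipping <style> and <script> blocks."""
    def __init__(self):
        super().__init__()
        self._skip = False
        self.parts = []

    def handle_starttag(self, tag, attrs):
        if tag in ("style", "script"):
            self._skip = True

    def handle_endtag(self, tag):
        if tag in ("style", "script"):
            self._skip = False

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.parts.append(data.strip())

def html_to_text(html: str) -> str:
    """Flatten an HTML page into newline-separated visible text."""
    parser = TextExtractor()
    parser.feed(html)
    return "\n".join(parser.parts)

page = (
    "<html><head><style>div {margin: 0;}</style>"
    "<title>Example Domain</title></head>"
    "<body><h1>Example Domain</h1>"
    "<p>This domain is for use in illustrative examples.</p></body></html>"
)
print(html_to_text(page))
```

In a real pipeline you would apply html_to_text to each Document's page_content before formatting the context.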
Chaining NimbleSearchRetriever
Like other retrievers, NimbleSearchRetriever can be incorporated into LLM applications via chains.
We will need an LLM or chat model:
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
prompt = ChatPromptTemplate.from_template(
"""Answer the question based only on the context provided.
Context: {context}
Question: {question}"""
)
def format_docs(docs):
return "\n\n".join(doc.page_content for doc in docs)
chain = (
{"context": retriever | format_docs, "question": RunnablePassthrough()}
| prompt
| llm
| StrOutputParser()
)
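To make the data flow of the chain above concrete, here is an equivalent sketch written with plain functions, where retrieve and llm_call are stand-ins for the retriever and the chat model (both stubbed below for illustration):

```python
from dataclasses import dataclass

@dataclass
class Doc:
    """Stand-in for langchain_core.documents.Document."""
    page_content: str

def run_chain(question, retrieve, llm_call):
    """Mirror the LCEL pipeline: retrieve -> format_docs -> prompt -> llm."""
    docs = retrieve(question)                             # retriever
    context = "\n\n".join(d.page_content for d in docs)   # format_docs
    prompt = (                                            # ChatPromptTemplate
        "Answer the question based only on the context provided.\n"
        f"Context: {context}\n"
        f"Question: {question}"
    )
    return llm_call(prompt)                               # llm | StrOutputParser

# Demo with stubbed components: the "LLM" just echoes the context line.
answer = run_chain(
    "Who founded Example Corp?",
    retrieve=lambda q: [Doc("Example Corp was founded by Jane Doe.")],
    llm_call=lambda p: p.splitlines()[1],
)
print(answer)
```

The dict at the top of the LCEL chain fans the input question out to both branches: the retriever builds the context while RunnablePassthrough forwards the question unchanged into the prompt.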
Executing the request
chain.invoke("Who is the CEO of Nimbleway?")
Result
'The CEO of Nimble Way is Uriel Knorovich.'