Anthropic claims its new AI chatbot models beat OpenAI’s GPT-4

AI startup Anthropic, backed by Google and hundreds of millions in venture capital (and perhaps soon hundreds of millions more), today announced the latest version of its GenAI tech, Claude. And the company claims that the AI chatbot beats OpenAI’s GPT-4 in terms of performance.

Claude 3, as Anthropic’s new GenAI is called, is a family of models — Claude 3 Haiku, Claude 3 Sonnet, and Claude 3 Opus, Opus being the most powerful. All show “increased capabilities” in analysis and forecasting, Anthropic claims, as well as enhanced performance on specific benchmarks versus models like ChatGPT and GPT-4 (but not GPT-4 Turbo) and Google’s Gemini 1.0 Ultra (but not Gemini 1.5 Pro).

Notably, Claude 3 is Anthropic’s first multimodal GenAI, meaning that it can analyze text as well as images — similar to some flavors of GPT-4 and Gemini. Claude 3 can process photos, charts, graphs and technical diagrams, drawing from PDFs, slideshows and other document types.

In a step up from some GenAI rivals, Claude 3 can analyze multiple images in a single request (up to a maximum of 20). This allows it to compare and contrast images, Anthropic notes.

But there are limits to Claude 3’s image processing.

Anthropic has disabled the models from identifying people — no doubt wary of the ethical and legal implications. And the company admits that Claude 3 is prone to making mistakes with “low-quality” images (under 200 pixels) and struggles with tasks involving spatial reasoning (e.g. reading an analog clock face) and object counting (Claude 3 can’t give exact counts of objects in images).

[Image: Anthropic Claude 3. Image Credits: Anthropic]

Claude 3 also won’t generate artwork. The models are strictly image-analyzing — at least for now.

Whether fielding text or images, Anthropic says that customers can generally expect Claude 3 to better follow multi-step instructions, produce structured output in formats like JSON and converse in languages other than English compared to its predecessors. Claude 3 should also refuse to answer questions less often thanks to a “more nuanced understanding of requests,” Anthropic says. And soon, the models will cite the source of their answers to questions so users can verify them.

“Claude 3 tends to generate more expressive and engaging responses,” Anthropic writes in a support article. “[It’s] easier to prompt and steer compared to our legacy models. Users should find that they can achieve the desired results with shorter and more concise prompts.”

Some of those improvements stem from Claude 3’s expanded context.

A model’s context, or context window, refers to the input data (e.g. text) that the model considers before generating output. Models with small context windows tend to “forget” the content of even very recent conversations, leading them to veer off topic — often in problematic ways. Large-context models, by contrast, can better grasp the narrative flow of the data they take in and generate more contextually rich responses (hypothetically, at least).

Anthropic says that Claude 3 will initially support a 200,000-token context window, equivalent to about 150,000 words, with select customers getting up to a 1-million-token context window (~700,000 words). That’s on par with Google’s newest GenAI model, the above-mentioned Gemini 1.5 Pro, which also offers up to a million-token context window.

Now, just because Claude 3 is an upgrade over what came before it doesn’t mean it’s perfect.

In a technical whitepaper, Anthropic admits that Claude 3 isn’t immune to the issues plaguing other GenAI models, namely bias and hallucinations (i.e. making stuff up). Unlike some GenAI models, Claude 3 can’t search the web; the models can only answer questions using data from before August 2023. And while Claude is multilingual, it’s not as fluent in certain “low-resource” languages as it is in English.

But Anthropic is promising frequent updates to Claude 3 in the months to come.

“We don’t believe that model intelligence is anywhere near its limits, and we plan to release [enhancements] to the Claude 3 model family over the next few months,” the company writes in a blog post.

Opus and Sonnet are available now on the web and via Anthropic’s dev console and API, Amazon’s Bedrock platform and Google’s Vertex AI. Haiku will follow later this year.

Here’s the pricing breakdown:

  • Opus: $15 per million input tokens, $75 per million output tokens
  • Sonnet: $3 per million input tokens, $15 per million output tokens
  • Haiku: $0.25 per million input tokens, $1.25 per million output tokens
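To see how those per-token rates translate into dollar costs, here’s a quick sketch of the arithmetic. The helper function and model labels below are illustrative, not an official Anthropic billing tool; the rates are taken from the list above.

```python
# Rough cost estimator for the Claude 3 per-token prices listed above.
# Rates are in USD per million tokens. Illustrative only — not an
# official Anthropic calculator.

PRICES = {
    "opus":   {"input": 15.00, "output": 75.00},
    "sonnet": {"input": 3.00,  "output": 15.00},
    "haiku":  {"input": 0.25,  "output": 1.25},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one request at the listed rates."""
    rates = PRICES[model]
    return (input_tokens * rates["input"] + output_tokens * rates["output"]) / 1_000_000

# Example: a million tokens in and a million tokens out on each tier.
print(estimate_cost("opus", 1_000_000, 1_000_000))   # → 90.0
print(estimate_cost("haiku", 1_000_000, 1_000_000))  # → 1.5
```

In other words, the same workload costs sixty times more on Opus than on Haiku, which is presumably why Anthropic is positioning the three models at different capability and price points.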

So that’s Claude 3. But what’s the 30,000-foot view of all this?

Well, as we’ve reported previously, Anthropic’s ambition is to create a next-gen algorithm for “AI self-teaching.” Such an algorithm could be used to build virtual assistants that can answer emails, perform research and generate art, books and more — some of which we’ve already gotten a taste of with the likes of GPT-4 and other large language models.

Anthropic hints at this in the aforementioned blog post, saying that it plans to add features to Claude 3 that enhance its out-of-the-gate capabilities by allowing Claude to interact with other systems, code “interactively” and deliver “advanced agentic capabilities.”

That last bit calls to mind OpenAI’s reported ambitions to build a software agent to automate complex tasks, like transferring data from a document to a spreadsheet or automatically filling out expense reports and entering them in accounting software. OpenAI already offers an API that allows developers to build “agent-like experiences” into their apps, and Anthropic, it seems, is intent on delivering functionality that’s comparable.

Could we see an image generator from Anthropic next? It’d surprise me, frankly. Image generators are the subject of much controversy these days, mainly for copyright- and bias-related reasons. Google was recently forced to disable its image generator after it injected diversity into pictures with a farcical disregard for historical context. And a number of image generator vendors are in legal battles with artists who accuse them of profiting off of their work by training GenAI on that work without providing compensation or even credit.

I’m curious to see the evolution of Anthropic’s technique for training GenAI, “constitutional AI,” which the company claims makes the behavior of its GenAI easier to understand, more predictable and simpler to adjust as needed. Constitutional AI aims to provide a way to align AI with human intentions, having models respond to questions and perform tasks using a simple set of guiding principles. For example, for Claude 3, Anthropic said that it added a principle — informed by crowdsourced feedback — that instructs the models to be understanding of and accessible to people with disabilities.

Whatever Anthropic’s endgame, it’s in it for the long haul. According to a pitch deck leaked in May of last year, the company aims to raise as much as $5 billion over the next 12 months or so — which might just be the baseline it needs to remain competitive with OpenAI. (Training models isn’t cheap, after all.) It’s well on its way, with $2 billion and $4 billion in committed capital and pledges from Google and Amazon, respectively, and well over a billion combined from other backers.
