2024-04-09
-
Posted by msmash on Tuesday April 09, 2024 @12:41PM from the up-next dept. Meta Platforms is planning to launch two small versions of its forthcoming Llama 3 large-language model next week, _The Information_ [has reported](https://www.theinformation.com/articles/meta-platforms-to-launch-small-versions-of-llama-3-next-week) _\[[non-paywalled link](https://www.theverge.com/2024/4/9/24125217/meta-llama-smaller-lightweight-model-ai)\]_. From the report: _The models will serve as a precursor to the launch of the biggest version of Llama 3, expected this summer. Release of the two small models will likely help spark excitement for the forthcoming Llama 3, which will be coming out roughly a year after Llama 2 launched last July. It comes as several companies, including Google, Elon Musk's xAI and Mistral, have released open-source LLMs. Meta hopes Llama 3 will catch up with OpenAI's GPT-4, which can answer questions based on images users upload to the chatbot. The biggest version will be multimodal, which means it will be capable of understanding and generating both text and images. In contrast, the two small models to be released next week won't be multimodal, the employee said._
2024-04-18
-
Meta Platforms on Thursday released early versions of its latest large language model, Llama 3, and an image generator that updates pictures in real time while users type prompts, as it races to catch up to generative AI market leader [OpenAI](https://www.theguardian.com/technology/openai). The models will be integrated into virtual assistant Meta AI, which the company is pitching as the most sophisticated of its free-to-use peers. The assistant will be given more prominent billing within Meta’s Facebook, Instagram, WhatsApp and Messenger apps as well as a new standalone website that positions it to compete more directly with Microsoft-backed OpenAI’s breakout hit [ChatGPT](https://www.theguardian.com/technology/chatgpt). The announcement comes as [Meta](https://www.theguardian.com/technology/meta) has been scrambling to push generative AI products out to its billions of users to challenge OpenAI’s leading position on the technology, involving an overhaul of computing infrastructure and the consolidation of previously distinct research and product teams. The social media giant equipped Llama 3 with new computer coding capabilities and fed it images as well as text this time, though for now the model will output only text, Chris Cox, Meta’s chief product officer, said in an interview. More advanced reasoning, like the ability to craft longer multi-step plans, will follow in subsequent versions, he added. Versions planned for release in the coming months will also be capable of “multimodality”, meaning they can generate both text and images, Meta said in blog posts. “The goal eventually is to help take things off your plate, just help make your life easier, whether it’s interacting with businesses, whether it’s writing something, whether it’s planning a trip,” Cox said. 
Cox said the inclusion of images in the training of Llama 3 would enhance an update rolling out this year to the Ray-Ban Meta smart glasses, a partnership with glasses maker EssilorLuxottica, enabling Meta AI to identify objects seen by the wearer and answer questions about them. Meta also announced a new partnership with Alphabet’s Google to include real-time search results in the assistant’s responses, supplementing an existing arrangement with Microsoft’s Bing. The Meta AI assistant is expanding to more than a dozen markets outside the US with the update, including Australia, Canada, Singapore, Nigeria and Pakistan. Meta is “still working on the right way to do this in Europe”, Cox said, where privacy rules are more stringent and the forthcoming AI Act is poised to impose requirements like disclosure of models’ training data. Generative AI models’ voracious need for data has emerged as a major source of tension in the technology’s development. Meta has been releasing models like Llama 3 for free commercial use by developers as part of its catch-up effort, as the success of a powerful free option could stymie rivals’ plans to earn revenue off their proprietary technology. The strategy has also elicited safety concerns from critics wary of what unscrupulous developers may use the model to build. Mark Zuckerberg, Meta CEO, nodded at that competition in a video accompanying the announcement, in which he called Meta AI “the most intelligent AI assistant that you can freely use”. Zuckerberg said the biggest version of Llama 3 is currently being trained with 400bn parameters and is already scoring 85 MMLU, citing metrics used to convey the strength and performance quality of AI models. The two smaller versions rolling out now have 8bn parameters and 70bn parameters, and the latter scored around 82 MMLU, or Massive Multitask Language Understanding, he said. 
Developers have complained that the previous Llama 2 version of the model failed to understand basic context, confusing queries on how to “kill” a computer program with requests for instructions on committing murder. Rival Google has run into similar problems and recently paused use of its Gemini AI image generation tool after it drew criticism for churning out inaccurate depictions of historical figures. Meta said it cut down on those problems in Llama 3 by using “high quality data” to get the model to recognize nuance. It did not elaborate on the datasets used, although it said it fed seven times the amount of data into Llama 3 than it used for Llama 2 and leveraged “synthetic”, or AI-created, data to strengthen areas like coding and reasoning. Cox said there was “not a major change in posture” in terms of how the company sourced its training data.
2024-04-19
-
Meta is ready to put its AI assistant in the ring with ChatGPT in the chatbot fight. The tech giant said on Thursday that it is bringing [Meta AI to all of its platforms, including Facebook and Instagram](https://about.fb.com/news/2024/04/meta-ai-assistant-built-with-llama-3/), calling it “the most intelligent AI assistant you can use for free.” The AI assistant can be used in platform feeds, chats, and search. Meta also said the AI assistant is faster at generating high-quality images, and can “change with every few letters typed,” so users can see it generating their image. Meta AI, which was [introduced in September](https://about.fb.com/news/2023/09/introducing-ai-powered-assistants-characters-and-creative-tools/), is now available in English in over a dozen countries outside of the U.S., including Australia, Nigeria, and Singapore. The company also provides Meta AI as a website. The assistant was built with models from [Meta’s latest generative AI model family, Llama 3](https://ai.meta.com/blog/meta-llama-3/), which it also introduced on Thursday. Llama 3 8B has 8 billion parameters — the [variables models learn during training](https://ai-event.ted.com/glossary/parameters), used to make predictions — while Llama 3 70B has 70 billion parameters. Meta said Llama 3 is “a major leap” over its predecessor, Llama 2. The Llama 3 models will be available on the Google Cloud and Microsoft Azure platforms, among others, and supported by hardware from companies including Intel and Nvidia. Llama 3 “demonstrates state-of-the-art performance on a wide range of industry benchmarks,” Meta said, claiming it [outperforms other models, including Google’s Gemini and Anthropic’s Claude 3](https://llama.meta.com/llama3/), in a series of benchmarks. “We believe these are the best open source models of their class, period,” Meta said. The two newest Llama 3 models are only the beginning of Meta’s plans for the family.
The company said it is still training models with over 400 billion parameters. It plans to release multiple models over the coming months with more advancements including multimodality, which is when a model can [understand and generate different types of content](https://www.turing.com/resources/multimodal-llms), including photo and video. Meta has spent billions on chips to build on its AI ambitions, making itself [one of Nvidia’s top customers](https://qz.com/nvidia-generative-ai-google-microsoft-meta-1851206854). In March, Tom Alison, head of Facebook, said at a tech conference that Meta is [developing an AI model to power recommendations for its platforms](https://qz.com/meta-facebook-instagram-ai-video-feed-reels-1851315404) as part of its “technology roadmap” running through 2026.
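To put the parameter counts above in perspective, a rough back-of-the-envelope sketch (illustrative arithmetic only, not figures from Meta) converts a model's parameter count into the memory needed just to store its weights:

```python
# Rough memory-footprint estimates for the Llama 3 model sizes mentioned above.
# Assumes every parameter is stored at 16-bit (2-byte) precision and ignores
# activation memory, KV cache, and runtime overhead -- illustrative only.

BYTES_PER_PARAM_FP16 = 2

def weight_memory_gb(num_params: int, bytes_per_param: int = BYTES_PER_PARAM_FP16) -> float:
    """Approximate memory needed to hold the model weights alone, in gigabytes."""
    return num_params * bytes_per_param / 1e9

for name, params in [
    ("Llama 3 8B", 8_000_000_000),
    ("Llama 3 70B", 70_000_000_000),
    ("Llama 3 400B (still in training)", 400_000_000_000),
]:
    print(f"{name}: ~{weight_memory_gb(params):.0f} GB of weights at fp16")
```

Under these assumptions the 8B model's weights come to roughly 16 GB, which is why a model that size can plausibly run on a single high-end laptop or consumer GPU, while the 70B and 400B models need multiple accelerators.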
2024-04-24
-
Meta stock plummeted more than 16% in after-hours trading Wednesday, even as [the Facebook parent company reported better-than-anticipated sales](https://qz.com/meta-metaverse-facebook-earnings-mark-zuckerberg-1851433524). Meta reported revenues of $36.5 billion for the three months ended March 31, almost 30% higher than the same period last year and ahead of the expectations of Wall Street analysts surveyed by FactSet. And its profits more than doubled to $12 billion. “It’s been a good start to the year,” Meta CEO Mark Zuckerberg said in the company’s earnings release Wednesday. “The new version of Meta AI with Llama 3 is another step towards building the world’s leading AI. We’re seeing healthy growth across our apps and we continue making steady progress building the metaverse as well.” Investors didn’t seem to care about those good fortunes, though, as Meta’s share price sank 16.5% to about $412 in after-hours trading. They cared, instead, about [its lukewarm second-quarter outlook](https://qz.com/meta-more-than-doubles-q1-profit-but-revenue-guidance-p-1851433481), which pegged expected revenues for the three months ending June 30 at $36.5 to $39 billion. Still, Meta stock has more than doubled over the last year, up 42.5% so far in 2024 and 137.8% in the last 12 months. The company has been one of the top-performing tech stocks. It’s part of the so-called “Fab Four,” the A-listers of the “Magnificent Seven” big tech stocks that continue to rally even as Google parent Alphabet, Apple, and Tesla have fallen flat or worse.
More Meta news
--------------

- [Meta’s Metaverse is still losing the company billions](https://qz.com/meta-metaverse-facebook-earnings-mark-zuckerberg-1851433524)
- [Meta’s new AI assistant wanted to answer users’ questions and now it’s getting on their nerves](https://qz.com/meta-ai-tool-users-instagram-chatgpt-1851430125)
- [Russia has sentenced a Meta executive to 6 years in prison](https://qz.com/facebook-meta-andy-stone-meta-russia-jail-terrorism-1851426978)
- [Meta just unveiled a new AI chip a day after Google and Intel](https://qz.com/meta-facebook-ai-chip-google-intel-nvidia-1851400247)
- [Meta’s smart glasses are getting a chatty AI update next month](https://qz.com/meta-upgrading-smart-glasses-with-new-ai-capabilities-1851371595)
2024-04-25
-
Shares in Meta slumped 15% when Wall Street opened on Thursday, wiping about $190bn off the value of the Facebook and Instagram parent company, as investors reacted to a [pledge to ramp up spending](https://www.theguardian.com/technology/2024/apr/24/meta-earnings) on artificial intelligence. Mark Zuckerberg, Meta’s founder and chief executive, said on a conference call on Wednesday that spending on the technology would have to grow “meaningfully” before the company could make “much revenue” from new AI products. Shares in [Meta](https://www.theguardian.com/technology/meta) had been boosted in 2023 by Zuckerberg’s tough action on costs in what he described as a “year of efficiency”. A relaxation of that restraint has rattled investors after Meta raised the upper bound of its capital expenditure guidance on Wednesday, from $37bn to $40bn. Last week, Meta [released Llama 3](https://www.theguardian.com/technology/2024/apr/18/meta-ai-llama3-release), the latest version of its AI model, alongside an image generator that updates pictures in real time while users type prompts. The company’s AI-powered assistant, Meta AI, is expanding to its platforms in more than a dozen markets outside the US with the update, including Australia, Canada, Singapore, Nigeria and Pakistan. Chris Cox, Meta’s chief product officer, said the company was “still working on the right way to do this in Europe”. The share decline follows a record gain in market value by Meta in February, when the company added $196bn to its stock market capitalisation – a measure of a company’s worth – after declaring its first dividend. At the time it was the biggest one-day gain in Wall Street history. However, weeks later, Nvidia, the leading supplier of chips for training and operating AI models, [smashed that record with a $277bn gain](https://www.theguardian.com/business/2024/feb/22/japan-nikkei-european-shares-record-highs-ai-nvidia-stoxx-600).
-
Meta stock fell more than 10% Thursday, even as the Facebook parent company reported better-than-anticipated sales in its quarterly earnings the day before. The losses appeared to be driven by [the company’s steep Metaverse losses](https://qz.com/meta-metaverse-facebook-earnings-mark-zuckerberg-1851433524), and [CEO Mark Zuckerberg’s commitment to continue that spending](https://qz.com/meta-stock-earnings-facebook-wall-street-expectations-1851433671). The stock dropped more than 15% in pre-market trading Thursday before recovering some of those losses to close down 10.5% on the day. Meta reported revenues of $36.5 billion for the three months ended March 31, almost 30% higher than the same period last year and ahead of the expectations of Wall Street analysts surveyed by FactSet. And its profits more than doubled to $12 billion. Earnings per share were $4.71, more than the $4.32 expected. Investors didn’t seem to care about those good fortunes, though, as Meta’s share price sank 16.5% in after-hours trading Wednesday and was down 15.3% before markets opened Thursday. They cared, instead, about [its lukewarm second-quarter outlook](https://qz.com/meta-more-than-doubles-q1-profit-but-revenue-guidance-p-1851433481). The company issued light revenue guidance and is expecting revenues for the three months ending June 30 at $36.5 to $39 billion. “It’s been a good start to the year,” Meta CEO Mark Zuckerberg said in the company’s earnings release Wednesday. “The new version of Meta AI with Llama 3 is another step towards building the world’s leading AI. We’re seeing healthy growth across our apps and we continue making steady progress building the metaverse as well.” Meta stock has more than doubled over the last year, [up 39% so far in 2024 and 135% in the last 12 months.](https://www.tipranks.com/stocks/meta) The company has been one of the top-performing tech stocks.
It’s part of the so-called “Fab Four,” the A-listers of the “Magnificent Seven” big tech stocks that continue to rally even as Google parent Alphabet, Apple, and Tesla have fallen flat or worse. _–Laura Bratton contributed to this article_
-
Apr 25, 2024 12:00 PM Meta’s decision to give away powerful AI software for free could threaten the business models of OpenAI and Google. Jerome Pesenti has a few reasons to celebrate Meta’s decision last week to [release Llama 3](https://www.wired.com/story/meta-is-already-training-a-more-powerful-sucessor-to-llama-3/), a powerful open source [large language model](https://www.wired.com/story/how-quickly-do-large-language-models-learn-unexpected-skills/) that anyone can download, run, and build on. Pesenti [used to be vice president of](https://www.wired.com/story/facebooks-ai-says-field-hit-wall/) [artificial intelligence](https://www.wired.com/tag/artificial-intelligence/) at [Meta](https://www.wired.com/tag/meta/) and says he often pushed the company to consider releasing its technology for others to use and build on. But his main reason to rejoice is that his new startup will get access to an AI model that he says is very close in power to [OpenAI’s industry-leading text generator GPT-4](https://www.wired.com/story/5-updates-gpt-4-turbo-openai-chatgpt-sam-altman/), but considerably cheaper to run and more open to outside scrutiny and modification. “The release last Friday really feels like a game-changer,” Pesenti says. His new company, [Sizzle](https://www.szl.ai/), an AI tutor, currently uses GPT-4 and other AI models, both closed and open, to craft problem sets and curricula for students. His engineers are evaluating whether Llama 3 could replace OpenAI’s model in many cases. Sizzle’s story may augur a broader shift in the balance of power in AI. OpenAI changed the world with ChatGPT, setting off a wave of AI investment and drawing more than 2 million developers to its cloud APIs.
But if open source models prove competitive, developers and entrepreneurs may decide to stop paying to access the latest model from OpenAI or Google and use Llama 3 or one of the other increasingly powerful open source models that are popping up. “It’s going to be an interesting horse race,” Pesenti says of competition between open models like Llama 3 and closed ones such as GPT-4 and Google’s Gemini. Meta’s previous model, Llama 2, was already influential, but the company says it made the latest version more powerful by feeding it larger amounts of higher-quality training data, with new techniques developed to filter out redundant or garbled content and to select the best mixture of datasets to use. Pesenti says running Llama 3 on a cloud platform such as [Fireworks.ai](http://www.fireworks.ai/) costs just a twentieth as much as accessing GPT-4 through an API. He adds that Llama 3 can be configured to respond to queries extremely quickly, a key consideration for developers at companies like his that rely on tapping into models from different providers. “It's an equation between latency, cost, and accuracy,” he says. Open models appear to be dropping at an impressive clip. A couple of weeks ago, I went inside startup Databricks [to witness the final stages of an effort to build DBRX](https://www.wired.com/story/dbrx-inside-the-creation-of-the-worlds-most-powerful-open-source-ai-model/), a language model that was briefly the best open model around. That crown is now Llama 3’s. Ali Ghodsi, CEO of Databricks, also describes Llama 3 as “game-changing” and says the larger model “is approaching the quality of GPT 4—that levels the playing field between open and closed-source LLMs.” Llama 3 also showcases the potential for making AI models smaller, so they can be run on less powerful hardware. Meta released two versions of its latest model, one with 70 billion parameters—a measure of the variables it uses to learn from training data—and another with 8 billion.
The smaller model is compact enough to run on a laptop but is remarkably capable, at least in WIRED’s testing. Two days before Meta’s release, [Mistral](https://mistral.ai/), a French AI company founded by alumni of Pesenti’s team at Meta, [open sourced](https://mistral.ai/news/mixtral-8x22b/) Mixtral 8x22B. It has 141 billion parameters but uses only 39 billion of them at any one time, a design known as a mixture of experts. Thanks to this trick, the model is considerably more capable than some models that are much larger. Meta isn’t the only tech giant releasing open source AI. This week Microsoft released [Phi-3-mini](https://export.arxiv.org/abs/2404.14219) and Apple released [OpenELM](https://huggingface.co/apple/OpenELM#llm360), two tiny but capable free-to-use language models that can run on a smartphone. Coming months will show whether Llama 3 and other open models really can displace premium AI models like GPT-4 for some developers. And even more powerful open source AI is coming. Meta is working on a massive 400-billion-parameter version of Llama 3 that chief AI scientist [Yann LeCun](https://www.wired.com/story/artificial-intelligence-meta-yann-lecun-interview/) says should be one of the most capable in the world. Of course, all this openness is not purely altruistic. Meta CEO Mark Zuckerberg says opening up its AI models [should ultimately benefit the company](https://twitter.com/i/bookmarks?post_id=1782469953054179692) by lowering the cost of technologies it relies on, for example by spawning compatible tools and services that Meta can use for itself. He left unsaid that it may also be to Meta’s benefit to prevent OpenAI, Microsoft, or Google from dominating the field.
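The arithmetic behind a mixture of experts is simple: a router sends each token to only the top-k of the model's expert sub-networks, so the parameters active per token are far fewer than the total. The sketch below uses a hypothetical decomposition chosen to be consistent with the reported 141B-total/39B-active figures; the real Mixtral 8x22B architecture is more involved, and these layer sizes are assumptions, not Mistral's published breakdown:

```python
# Toy mixture-of-experts parameter accounting: total vs. active per token.
# The shared/expert split below is invented for illustration; it is NOT
# Mixtral 8x22B's actual architecture breakdown.

def moe_param_counts(shared_params: int, expert_params: int,
                     num_experts: int, top_k: int) -> tuple[int, int]:
    """Return (total, active-per-token) parameter counts for a simple MoE.

    Shared parameters (attention, embeddings, router) always run; expert
    parameters run only for the top_k experts the router selects per token.
    """
    total = shared_params + num_experts * expert_params
    active = shared_params + top_k * expert_params
    return total, active

# Hypothetical split: 5B shared plus 8 experts of 17B each, routing each
# token to the top-2 experts.
total, active = moe_param_counts(shared_params=5_000_000_000,
                                 expert_params=17_000_000_000,
                                 num_experts=8, top_k=2)
print(f"total: {total / 1e9:.0f}B, active per token: {active / 1e9:.0f}B")
```

Under these assumed sizes, the model stores 141B parameters but computes with only 39B per token, which is why an MoE can outperform dense models with far more compute per token than it actually spends.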
-
After Meta’s stock [meltdown](https://www.fastcompany.com/91112579/meta-stock-plunges-2024-q1-earnings) late Wednesday afternoon and Thursday, the tech sector is roaring back in after-hours trading on the strength of strong earnings reports from Alphabet, Microsoft, and Snap. Snap shares were up more than 30% after its earnings release and Alphabet was up 13%. Microsoft shares, meanwhile, jumped 5.5%. There were numerous reasons for the surges. Alphabet announced its first-ever dividend (of 20 cents per share) and a $70 billion stock buyback. And Snap gave guidance into future quarters, saying it expects daily active users to jump by 9 million in the second quarter to 431 million, a higher number than analysts were expecting. But the backbone for the investor celebration? [Artificial intelligence](https://www.fastcompany.com/91029555/artificial-intelligence-most-innovative-companies-2024). Microsoft reported growth of 31% in its Azure unit and said 7% of that total was from AI. “Microsoft Copilot and Copilot stack are orchestrating a new era of AI transformation, driving better business outcomes across every role and industry,” said CEO Satya Nadella in the earnings release. Google’s parent company, meanwhile, saw cloud revenue of $9.6 billion and declared the “Gemini era” was underway. “There’s great momentum across the company,” said Sundar Pichai, Alphabet CEO, in a statement. “Our leadership in AI research and infrastructure, and our global product footprint, position us well for the next wave of AI innovation.” The post-market stock movements show that investor enthusiasm for AI isn’t slowing down, but _how_ companies talk about the technology can make a big difference. Meta, for instance, had a strong first quarter in terms of earnings and revenues, but Mark Zuckerberg’s focus on the conference call about all the ways the company was spending money appeared to spook investors, who sent shares of the company plunging. 
Meta shares lost 10.5% of their worth on Thursday—roughly a $100 billion loss in value. The three companies that reported earnings on Thursday will certainly be spending heavily on continuing AI research in the months and years ahead as well, but they were a bit less fatalistic in their communication with investors. Microsoft [noted](https://www.microsoft.com/en-us/investor/earnings/fy-2024-q3/press-release-webcast) that “capital expenditures including assets acquired under finance leases were $14 billion to support demand in our cloud and AI offerings,” justifying the spend with a few well-placed words about demand. That kept analysts happy. “Yes, investors should keep an eye on potential AI overspending,” Emarketer senior director of briefings Jeremy Goldman tells _Fast Company_ in a statement. “But for now, Satya Nadella’s forward-looking strategy is building value by infusing productive intelligence across Microsoft’s entire portfolio – from the cloud to the desktop. With AI weaving its way into every offering, Microsoft may just cement its stay as the enterprise’s most indispensable partner.” Snap, meanwhile, [said](https://s25.q4cdn.com/442043304/files/doc_financials/2024/q1/Q1-24-Press-Release_FINAL-4-25-24.pdf), “We continue to invest in Generative AI models and automation for the creation of ML and AI Lenses, which contributed to the number of ML and AI Lenses viewed by Snapchatters increasing by more than 50% year-over-year.” Again, the emphasis was on the results of the spending. Alphabet largely dodged the issue of AI spending, [saying](https://abc.xyz/assets/91/b3/3f9213d14ce3ae27e1038e01a0e0/2024q1-alphabet-earnings-release-pdf.pdf), “certain costs are not allocated to our segments because they represent Alphabet-level activities. These costs primarily include AI-focused shared R&D activities, including development costs of our general AI models.” But Alphabet also announced the dividend, so investors were going to go crazy about that no matter what. 
AI has been catnip to Wall Street for over a year now. And despite Meta’s blunt honesty in its earnings report, shares of that company are still more than twice the price of where they were a year ago. The next piece of the AI puzzle won’t come for another month or so, when [Nvidia reports earnings](https://www.fastcompany.com/91034272/nvidia-nvda-earnings-record-265-revenue-growth-moving-stock-market) on Wednesday, May 22.
2024-07-01
-
An anonymous reader quotes a report from Ars Technica: _Meta continues to hit walls with its heavily scrutinized plan to comply with the European Union's strict online competition law, the Digital Markets Act (DMA), by [offering Facebook and Instagram subscriptions](https://arstechnica.com/tech-policy/2024/07/metas-pay-for-privacy-plan-falls-afoul-of-the-law-eu-regulators-say/) as an alternative for privacy-inclined users who want to opt out of ad targeting. Today, the European Commission (EC) [announced](https://ec.europa.eu/commission/presscorner/detail/en/ip_24_3582) preliminary findings that Meta's so-called "pay or consent" or "pay or OK" model -- which gives users a choice to either pay for access to its platforms or give consent to collect user data to target ads -- is not compliant with the DMA. According to the EC, Meta's advertising model violates the DMA in two ways. First, it "does not allow users to opt for a service that uses less of their personal data but is otherwise equivalent to the 'personalized ads-based service." And second, it "does not allow users to exercise their right to freely consent to the combination of their personal data," the press release said. Now, Meta will have a chance to review the EC's evidence and defend its policy, with today's findings kicking off a process that will take months. The EC's investigation is expected to conclude next March. Thierry Breton, the commissioner for the internal market, said in the press release that the preliminary findings represent "another important step" to ensure Meta's full compliance with the DMA. "The DMA is there to give back to the users the power to decide how their data is used and ensure innovative companies can compete on equal footing with tech giants on data access," Breton said. 
A Meta spokesperson told Ars that Meta plans to fight the findings -- which could trigger fines up to 10 percent of the company's worldwide turnover, as well as fines up to 20 percent for repeat infringement if Meta loses. The EC agreed that more talks were needed, writing in the press release, "the Commission continues its constructive engagement with Meta to identify a satisfactory path towards effective compliance." _Meta continues to claim that its "subscription for no ads" model was "endorsed" by the highest court in Europe, the Court of Justice of the European Union (CJEU), last year. "Subscription for no ads follows the direction of the highest court in Europe and complies with the DMA," Meta's spokesperson said. "We look forward to further constructive dialogue with the European Commission to bring this investigation to a close." Meta rolled out its ad-free subscription service option [last November](https://tech.slashdot.org/story/23/10/30/1229247/facebook-and-instagram-to-offer-subscription-for-no-ads-in-europe). "Depending on where you purchase it will cost $10.5/month on the web or $13.75/month on iOS and Android," said the company in a blog post. "Regardless of where you purchase, the subscription will apply to all linked Facebook and Instagram accounts in a user's Accounts Center. As is the case for many online subscriptions, the iOS and Android pricing take into account the fees that Apple and Google charge through respective purchasing policies."
2024-07-17
-
Posted by [BeauHD](https://www.linkedin.com/in/beauhd/) on Wednesday July 17, 2024 @06:02PM from the uncertain-future dept. According to Axios, Meta will [withhold future multimodal AI models from customers in the European Union](https://www.axios.com/2024/07/17/meta-future-multimodal-ai-models-eu) "due to the unpredictable nature of the European regulatory environment." From the report: _Meta plans to incorporate the new multimodal models, which are able to reason across video, audio, images and text, in a wide range of products, including smartphones and its Meta Ray-Ban smart glasses. Meta says its decision also means that European companies will not be able to use the multimodal models even though they are being released under an open license. It could also prevent companies outside of the EU from offering products and services in Europe that make use of the new multimodal models. The company is also planning to release a larger, text-only version of its Llama 3 model soon. That will be made available for customers and companies in the EU, Meta said. Meta's issue isn't with the still-being-finalized AI Act, but rather with how it can train models using data from European customers while complying with GDPR -- the EU's existing data protection law. Meta announced in May that it planned to use publicly available posts from Facebook and Instagram users to train future models. Meta said it sent more than 2 billion notifications to users in the EU, offering a means for opting out, with training set to begin in June. Meta says it briefed EU regulators months in advance of that public announcement and received only minimal feedback, which it says it addressed. In June -- after announcing its plans publicly -- Meta was ordered to pause the training on EU data.
A couple weeks later it received dozens of questions from data privacy regulators from across the region. The United Kingdom has a nearly identical law to GDPR, but Meta says it isn't seeing the same level of regulatory uncertainty and plans to launch its new model for U.K. users. A Meta representative told Axios that European regulators are taking much longer to interpret existing law than their counterparts in other regions. A Meta representative told Axios that training on European data is key to ensuring its products properly reflect the terminology and culture of the region. _
2024-07-18
-
Posted by msmash on Thursday July 18, 2024 @12:40PM from the making-a-statement dept. Meta says it [won't be launching its upcoming multimodal AI model](https://www.theverge.com/2024/7/18/24201041/meta-multimodal-llama-ai-model-launch-eu-regulations) -- capable of handling video, audio, images, and text -- in the European Union, citing regulatory concerns. From a report: _The decision will prevent European companies from using the multimodal model, despite it being released under an open license. Just last week, the EU finalized compliance deadlines for AI companies under its strict new AI Act. Tech companies operating in the EU will generally have until August 2026 to comply with rules around copyright, transparency, and AI uses like predictive policing. Meta's decision follows a similar move by Apple, which recently said it would likely exclude the EU from its Apple Intelligence rollout due to concerns surrounding the Digital Markets Act._
2024-07-19
-
Meta’s cost-cutting efforts at its metaverse division, Reality Labs, could help save the company $3 billion, Bank of America analysts said Friday. [Meta is reportedly cutting the budget for its Reality Labs](https://www.theinformation.com/articles/reality-comes-to-metas-reality-labs?rc=5xvgzc) hardware division, which makes its VR headsets, by about 20% between this year and 2026, The Information reported Thursday, citing unnamed sources. That doesn’t mean the company is halting its virtual and augmented reality innovations: it is planning to release new Quest headsets and AR glasses in the next three years, the outlet said. The cost-cutting at Reality Labs is instead meant to rein in the division’s seemingly runaway spending. While Bank of America’s Justin Post and Nitin Bansal said in a research note Friday that Meta could save an estimated $3 billion, they added that some of those cost savings could be reallocated to Meta’s AI efforts. But those efforts are also being put on hold in some regions (i.e. the European Union and Brazil) as [Meta looks to avoid growing regulatory scrutiny in the AI space](https://qz.com/meta-pause-generative-ai-brazil-multimodal-model-eu-1851599618). Meta’s plans for AI and virtual reality will likely come into clearer focus when the Facebook and Instagram parent reports its second-quarter earnings on July 31. Meta CEO Mark Zuckerberg has repeatedly reiterated his belief that the capital-M Metaverse is the future. “We continue making steady progress building the metaverse as well,” he said in a call with investors in March, discussing [the company’s first-quarter financial results](https://qz.com/meta-metaverse-facebook-earnings-mark-zuckerberg-1851433524). In the same breath, Meta reported a loss of $3.8 billion for its Reality Labs division. 
The company’s VR and AR efforts are surely still a money-loser for Meta, but Reality Labs is at least finding ways to shrink its losses — which fell 17% between the last three months of 2023 and the first quarter of 2024. Bank of America analysts maintained their buy rating of Meta’s stock on Friday. They see shares rising nearly 15% to $550 over the next year.

By the numbers
--------------

**$55 billion:** How much Meta’s Reality Labs has lost the company since 2019

**30%:** How much first-quarter sales for Reality Labs, which totaled $440 million, rose from last year

**$3 billion:** How much Meta could save with new cost-cutting measures at Reality Labs

**14.8%:** How much Bank of America analysts see Meta’s share price rising over the next year — from $479 to $550
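The 14.8% figure above is plain percentage arithmetic on Bank of America's $479 starting price and $550 target. A quick sketch (the helper function is mine, not from either report):

```python
def pct_change(old: float, new: float) -> float:
    """Percentage change from `old` to `new`."""
    return (new - old) / old * 100

# Bank of America's implied upside on Meta shares: $479 -> $550.
upside = pct_change(479, 550)
print(f"{upside:.1f}%")  # 14.8%
```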
-
Meta quickly [shifted away from the metaverse](https://qz.com/meta-layoffs-2023-jobs-metaverse-ai-1850196575) to generative artificial intelligence, and now it’s pumping the brakes on some of its efforts amid regulatory scrutiny. On Wednesday, Meta said it was [pausing the use of its generative AI tools in Brazil](https://www.reuters.com/technology/artificial-intelligence/meta-decides-suspend-its-generative-ai-tools-brazil-2024-07-17/) due to opposition from the country’s government over the company’s privacy policy on personal data and AI, according to Reuters. Meta was [banned from training its AI models](https://www.gov.br/anpd/pt-br/assuntos/noticias/anpd-determina-suspensao-cautelar-do-tratamento-de-dados-pessoais-para-treinamento-da-ia-da-meta) on Brazilians’ personal data by the country’s National Data Protection Authority (ANPD) earlier this month. The Facebook owner had updated its privacy policy in May to give itself [permission to train AI on public Facebook, Messenger, and Instagram data](https://about.fb.com/br/news/2024/05/como-a-meta-esta-desenvolvendo-a-inteligencia-artificial-para-o-brasil/) in Brazil. The [ANPD said Meta’s privacy policy](https://apnews.com/article/brazil-tech-meta-privacy-data-93e00b2e0e26f7cc98795dd052aea8e1) has “the imminent risk of serious and irreparable or difficult-to-repair damage to the fundamental rights of the affected data subjects,” according to the Associated Press. Meanwhile, Meta has decided to [not release its upcoming and future multimodal AI models](https://www.axios.com/2024/07/17/meta-future-multimodal-ai-models-eu) in the European Union “due to the unpredictable nature of the European regulatory environment,” the company said in a statement shared with Axios. 
The company’s decision follows Apple, which said in June it would [likely not roll out its new Apple Intelligence and other AI features](https://qz.com/apple-not-release-apple-intelligence-european-union-dma-1851553830) in the bloc due to the Digital Markets Act. Even though Meta’s multimodal models will be under an open license, companies in Europe will not be able to use them because of the company’s decision, Axios reported. And companies outside of the bloc could reportedly be blocked from offering products and services on the continent that use Meta’s models. However, Meta has a larger, text-only version of its Llama 3 model that will be made available in the EU when it’s released, the company told Axios. In June, Meta said it would [delay training](https://about.fb.com/news/2024/06/building-ai-technology-for-europeans-in-a-transparent-and-responsible-way/) its [large language models](https://qz.com/ai-artificial-intelligence-glossary-vocabulary-terms-1851422473) on public data from Facebook and Instagram users in the European Union after facing pushback from the Irish Data Protection Commission (DPC). “This is a step backwards for European innovation, competition in AI development and further delays bringing the benefits of AI to people in Europe,” Meta said.
2024-07-23
-
Meta is taking on its artificial intelligence rivals with the [latest version of its Llama model](https://llama.meta.com/), saying open-source AI is “good for the world.” The open-source Llama 3.1 models released Tuesday, which Meta calls its [“most capable” to date](https://ai.meta.com/blog/meta-llama-3-1/), include its largest model, Llama 3.1 405B, whose name denotes its 405 billion parameters, the variables a model learns from training data that guide its behavior. Llama 3.1 405B rivals its closed-source competitors from OpenAI and Google in “state-of-the-art capabilities,” including general knowledge, math, and translating languages, Meta said. The release also includes [upgraded versions of its 8B and 70B models](https://qz.com/meta-ai-assistant-instagram-facebook-messenger-llama3-1851421803), which were introduced in April. Llama 3.1 405B was [evaluated on over 150 benchmark datasets and by humans](https://ai.meta.com/blog/meta-llama-3-1/) against other leading foundation models, including OpenAI’s GPT-4 and GPT-4o, and Anthropic’s Claude 3.5 Sonnet, which are closed-source models. While Llama 3.1 405B was outperformed on some benchmarks, the “experimental evaluation suggests that our flagship model is competitive” with the other leading models, Meta said. The model was trained with over 16,000 of Nvidia’s H100 GPUs, or graphics processing units, according to Meta. The chipmaker also announced a new [Nvidia AI Foundry service](https://nvidianews.nvidia.com/news/nvidia-ai-foundry-custom-llama-generative-models) for enterprises and nation states to build “supermodels” with Llama 3.1 405B. The Llama models are used to power Meta’s AI chatbot, Meta AI, which is available on Facebook, Instagram, and other platforms. 
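For readers unfamiliar with the “B” naming, a parameter count is simply the total number of learnable weights summed across a model’s layers. A toy illustration (the layer sizes here are invented for the example; real Llama models are transformers, not plain dense stacks):

```python
def dense_params(d_in: int, d_out: int, bias: bool = True) -> int:
    """Learnable parameters in one fully connected layer:
    a (d_in x d_out) weight matrix plus an optional bias vector."""
    return d_in * d_out + (d_out if bias else 0)

# Hypothetical two-layer network: 512 -> 2048 -> 512.
layers = [(512, 2048), (2048, 512)]
total = sum(dense_params(d_in, d_out) for d_in, d_out in layers)
print(f"{total:,}")  # 2,099,712 parameters
```

Scale sums like this across dozens of much wider transformer blocks and you arrive at totals like 8B, 70B, and 405B.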
Meta expanded access to Meta AI [in Latin America and other countries](https://about.fb.com/news/2024/07/meta-ai-is-now-multilingual-more-creative-and-smarter/) on Tuesday, and announced it is offering the chatbot in seven new languages, including German and Hindi. Users have the option to use the Llama 3.1 405B-powered Meta AI on WhatsApp and meta.ai. Meta chief executive Mark Zuckerberg said the company expects “future Llama models to become the most advanced in the industry,” and that Llama 3.1 405B is a step toward making open-source models the industry standard. “AI has more potential than any other modern technology to increase human productivity, creativity, and quality of life — and to accelerate economic growth while unlocking progress in medical and scientific research,” Zuckerberg said in [a statement](https://about.fb.com/news/2024/07/open-source-ai-is-the-path-forward/) released Tuesday. “Open source will ensure that more people around the world have access to the benefits and opportunities of AI, that power isn’t concentrated in the hands of a small number of companies, and that the technology can be deployed more evenly and safely across society.” Zuckerberg, who told Bloomberg the company is [working on Llama 4](https://www.bloomberg.com/news/articles/2024-07-23/meta-s-zuckerberg-aims-to-rival-openai-google-with-new-llama-ai-model?srnd=phx-technology&sref=P6Q0mxvj), said he believes “open-source AI will be safer than the alternatives,” and that the company’s new model “will be an inflection point in the industry where most developers begin to primarily use open source.”
-
Jul 23, 2024 11:05 AM The newest version of Llama will make AI more accessible and customizable, but it will also stir up debate over the dangers of releasing AI without guardrails. Most tech moguls hope to sell [artificial intelligence](https://www.wired.com/tag/artificial-intelligence/) to the masses. But Mark Zuckerberg is giving away what Meta considers to be one of the world’s best AI models for free. Meta released the biggest, most capable version of a large language model called [Llama](https://www.wired.com/story/metas-open-source-llama-3-nipping-at-openais-heels/) on Monday, free of charge. Meta has not disclosed the cost of developing Llama 3.1, but Zuckerberg [recently told investors](https://investor.fb.com/investor-news/press-release-details/2024/Meta-Reports-First-Quarter-2024-Results/default.aspx) that his company is spending billions on AI development. Through this latest release, Meta is showing that the closed approach favored by most AI companies is not the only way to develop AI. But the company is also putting itself at the center of debate over the dangers posed by releasing AI without controls. Meta trains Llama in a way that prevents the model from producing harmful output by default, but the model can be modified to remove such safeguards. Meta says that Llama 3.1 is as clever and useful as the best commercial offerings from companies like [OpenAI](https://www.wired.com/tag/openai/), [Google](https://www.wired.com/tag/google/), and [Anthropic](https://www.wired.com/story/anthropic-black-box-ai-research-neurons-features/). In certain benchmarks that measure progress in AI, Meta says the model is the smartest AI on Earth. “It’s very exciting,” says [Percy Liang](https://cs.stanford.edu/~pliang/), an associate professor at Stanford University who tracks open source AI. 
If developers find the new model to be just as capable as the industry’s leading ones, including [OpenAI’s GPT-4o](https://www.wired.com/story/openai-gpt-4o-model-gives-chatgpt-a-snappy-flirty-upgrade/), Liang says, it could see many move over to Meta’s offering. “It will be interesting to see how the usage shifts,” he says. In an [open letter](https://about.fb.com/news/2024/07/open-source-ai-is-the-path-forward/) posted with the release of the new model, Meta CEO Zuckerberg compared Llama to the open source [Linux](https://www.wired.com/tag/linux/) operating system. When Linux took off in the late ’90s and early 2000s, many big tech companies were invested in closed alternatives and criticized open source software as risky and unreliable. Today, however, Linux is widely used in cloud computing and serves as the core of the Android mobile OS. “I believe that AI will develop in a similar way,” Zuckerberg writes in his letter. “Today, several tech companies are developing leading closed models. But open source is quickly closing the gap.” However, Meta’s decision to give away its AI is not devoid of self-interest. Previous [Llama releases](https://www.wired.com/story/metas-open-source-llama-upsets-the-ai-horse-race/) have helped the company secure an influential position among AI researchers, developers, and startups. Liang also notes that Llama 3.1 is not truly open source, because Meta imposes restrictions on its usage—for example, limiting the scale at which the model can be used in commercial products. The new version of Llama has 405 billion parameters, or tweakable elements. Meta has already released two smaller versions of Llama 3, one with 70 billion parameters and another with 8 billion. Meta today also released upgraded versions of these models branded as Llama 3.1. 
Llama 3.1 is too big to be run on a regular computer, but Meta says that many cloud providers, including Databricks, Groq, AWS, and Google Cloud, will offer hosting options to allow developers to run custom versions of the model. The model can also be accessed at [Meta.ai](https://www.meta.ai/). Some developers say the new Llama release could have broad implications for AI development. [Stella Biderman](https://www.stellabiderman.com/), executive director of [EleutherAI](https://www.eleuther.ai/), an open source AI project, also notes that Llama 3 is not fully open source. But she points out that a change to Meta’s latest license will let developers train their own models using Llama 3, something that most AI companies currently prohibit. “This is a really, really big deal,” Biderman says. Unlike OpenAI and Google’s latest models, Llama is not “multimodal,” meaning it is not built to handle images, audio, and video. But Meta says the model is significantly better at using other software such as a web browser, something that many researchers and companies [believe could make AI more useful](https://www.wired.com/story/fast-forward-forget-chatbots-ai-agents-are-the-future/). After OpenAI released ChatGPT in late 2022, [some AI experts called for a moratorium](https://www.wired.com/story/chatgpt-pause-ai-experiments-open-letter/) on AI development for fear that the technology could be misused or too powerful to control. Existential alarm has since cooled, but many experts remain concerned that unrestricted AI models could be misused by hackers or used to speed up the development of biological or chemical weapons. “Cyber criminals everywhere will be delighted,” says Geoffrey Hinton, a Turing Award winner whose pioneering work on a field of machine learning known as deep learning laid the groundwork for large language models. 
Hinton joined Google in 2013 but [left the company last year](https://www.wired.com/story/geoffrey-hinton-ai-chatgpt-dangers/) to speak out about the possible risks that might come with more advanced AI models. He says that AI is fundamentally different from open source software because models cannot be scrutinized in the same way. “People fine-tune models for their own purposes, and some of those purposes are very bad," he adds. Meta has helped allay some fears by releasing previous versions of Llama carefully. The company says it puts Llama through rigorous safety testing before release, and adds that there is little evidence that its models make it easier to develop weapons. Meta said it will release several new tools to help developers keep Llama models safe by moderating their output and blocking attempts to break restrictions. Jon Carvill, a spokesman for Meta, says the company will decide on a case-by-case basis whether to release future models. Dan Hendrycks, a computer scientist and the director of the [Center for AI Safety](https://www.safe.ai/), a nonprofit organization focused on AI dangers, says Meta has generally done a good job of testing its models before releasing them. He says that the new model could help experts understand future risks. “Today’s Llama 3 release will enable researchers outside big tech companies to conduct much-needed AI safety research.”
-
Meta has [released Llama 3.1](https://llama.meta.com/), its largest open-source AI model to date, in a move that challenges the closed approaches of competitors like OpenAI and Google. The new model, [boasting 405 billion parameters](https://ai.meta.com/blog/meta-llama-3-1/), is claimed by Meta to outperform GPT-4o and Claude 3.5 Sonnet on several benchmarks, with CEO Mark Zuckerberg predicting that Meta AI will become the most widely used assistant by year-end. Llama 3.1, which Meta says was trained using over 16,000 Nvidia H100 GPUs, is being made available to developers through partnerships with major tech companies including Microsoft, Amazon, and Google, potentially reducing deployment costs compared to proprietary alternatives. The release includes smaller versions with 70 billion and 8 billion parameters, and Meta is introducing new safety tools to help developers moderate the model's output. While Meta isn't disclosing the full makeup of the data used to train its models, the company confirmed it used synthetic data to enhance the model's capabilities. The company is also expanding its Meta AI assistant, powered by Llama 3.1, to support additional languages and integrate with its various platforms, including WhatsApp, Instagram, and Facebook, as well as its Quest virtual reality headset.
2024-08-23
-
Posted by msmash on Friday August 23, 2024 @12:02PM from the tough-luck dept. Meta Platforms has [canceled plans for a premium mixed-reality headset](https://www.theinformation.com/articles/meta-cancels-high-end-mixed-reality-headset) intended to compete with Apple's Vision Pro, _The Information_ reported Friday, citing sources. From the report: _Meta told employees at the company's Reality Labs division to stop work on the device this week after a product review meeting attended by Meta CEO Mark Zuckerberg, Chief Technology Officer Andrew Bosworth and other Meta executives, the employees said. The axed device, which was internally code-named La Jolla, began development in November and was scheduled for release in 2027, according to current and former Meta employees. It was going to contain ultrahigh-resolution screens known as micro OLEDs -- the same display technology used in Apple's Vision Pro._
2024-09-17
-
[Meta](https://www.fastcompany.com/91190917/restarts-plans-to-train-ai-with-uk-user-data-facebook-instagram-content-social-media-activity) said it’s banning Russian state media outlets from its social media platforms, alleging that they used deceptive tactics to amplify Moscow’s propaganda. The announcement drew a rebuke from the Kremlin on Tuesday. The company, which owns Facebook, WhatsApp, and Instagram, said late Monday that it will roll out the ban over the next few days in an escalation of its [efforts to counter Russia’s covert influence operations](https://www.fastcompany.com/90725896/meta-will-expand-lock-your-profile-protections-to-russian-facebook-users). “After careful consideration, we expanded our ongoing enforcement against Russian state media outlets: Rossiya Segodnya, RT, and other related entities are now banned from our apps globally for foreign interference activity,” Meta said in a prepared statement. Kremlin spokesman Dmitry Peskov lashed out, saying that “such selective actions against Russian media are unacceptable,” and that “Meta with these actions are discrediting themselves.” “We have an extremely negative attitude towards this. And this, of course, complicates the prospects for normalizing our relations with Meta,” Peskov told reporters during his daily conference call. RT, formerly known as Russia Today, and Rossiya Segodnya also denounced the move. “It’s cute how there’s a competition in the West—who can try to spank RT the hardest, in order to make themselves look better,” RT said in a release.
2024-09-25
-
Posted by msmash on Wednesday September 25, 2024 @02:02PM from the pushing-the-limits dept. Meta [unveiled prototype AR glasses codenamed Orion](https://www.theverge.com/24253908/meta-orion-ar-glasses-demo-mark-zuckerberg-interview) on Wednesday, featuring a 70-degree field of view, Micro LED projectors, and silicon carbide lenses that beam graphics directly into the wearer's eyes. In an interview with The Verge, CEO Mark Zuckerberg demonstrated the device's capabilities, including ingredient recognition, holographic gaming, and video calling, controlled by a neural wristband that interprets hand gestures through electromyography. Despite technological advances, Meta has shelved Orion's commercial release, citing manufacturing complexities and costs reaching $10,000 per unit, primarily due to difficulties in producing the silicon carbide lenses. The company now aims to launch a refined, more affordable version in coming years, with executives hinting at a price comparable to high-end smartphones and laptops. Zuckerberg views AR glasses as critical to Meta's future, potentially freeing the company from its reliance on smartphone platforms controlled by Apple and Google. The push into AR hardware comes as tech giants and startups intensify competition in the space, with Apple launching Vision Pro and Google partnering with Magic Leap and Samsung on headset development.
2024-10-04
-
Oct 4, 2024 9:00 AM The next frontier in generative AI is video—and with Movie Gen, Meta has now staked its claim.

An AI-generated video made from the prompt "A baby hippo swimming in the river. Colorful flowers float at the surface, as fish swim around the hippo. The hippo's skin is smooth and shiny, reflecting the sunlight that filters through the water." Courtesy of Meta

Meta just announced its own media-focused [AI model](https://www.wired.com/tag/artificial-intelligence), called Movie Gen, that can be used to generate realistic video and audio clips. The company shared multiple 10-second clips generated with [Movie Gen](https://ai.meta.com/blog/movie-gen-media-foundation-models-generative-ai-video/), including a Moo Deng-esque baby hippo swimming around, to demonstrate its capabilities. While the tool is not yet available for use, this Movie Gen announcement comes shortly after its Meta Connect event, which showcased new and [refreshed hardware](https://www.wired.com/story/meta-quest-3s-headset/) and the latest version of its [large language model, Llama 3.2](https://www.wired.com/story/meta-releases-new-llama-model-ai-voice/). Going beyond the generation of straightforward [text-to-video](https://www.wired.com/story/text-to-video-ai-generators-filmmaking-hollywood/) clips, the Movie Gen model can make targeted edits to an existing clip, like adding an object into someone’s hands or changing the appearance of a surface. In one of the example videos from Meta, a woman wearing a VR headset was transformed to look like she was wearing steampunk binoculars.

An AI-generated video made from the prompt "make me a painter." Courtesy of Meta

An AI-generated video made from the prompt "a woman DJ spins records. She is wearing a pink jacket and giant headphones. There is a cheetah next to the woman." Courtesy of Meta

Audio bites can be generated alongside the videos with Movie Gen. 
In the sample clips, an AI man stands near a waterfall with audible splashes and the hopeful sounds of a symphony; the engine of a sports car purrs and tires screech as it zips around the track, and a snake slides along the jungle floor, accompanied by suspenseful horns. Meta shared some further details about Movie Gen in a research paper released Friday. Movie Gen Video consists of 30 billion parameters, while Movie Gen Audio consists of 13 billion parameters. (A model's parameter count roughly corresponds to how capable it is; by contrast, the largest variant of [Llama 3.1 has 405 billion parameters](https://www.wired.com/story/meta-ai-llama-3/).) Movie Gen can produce high-definition videos up to 16 seconds long, and Meta claims that it outperforms competitive models in overall video quality. Earlier this year, CEO Mark Zuckerberg demonstrated Meta AI’s Imagine Me feature, where users can upload a photo of themselves and role-play their face into multiple scenarios, by posting an AI image of himself [drowning in gold chains](https://www.threads.net/@zuck/post/C9xxwZZyx5B?xmt=AQGzXnHzmnMqrWb6E16MB7-sBjd7WYocg9yooqdOatxWQg) on Threads. A video version of a similar feature is possible with the Movie Gen model—think of it as a kind of [ElfYourself](https://www.wired.com/2007/12/geekdad-mashup/) on steroids. What information has Movie Gen been trained on? The specifics aren’t clear in Meta’s announcement post: “We’ve trained these models on a combination of licensed and publicly available data sets.” The [sources of training data](https://www.wired.com/story/youtube-training-data-apple-nvidia-anthropic/) and [what’s fair to scrape from the web](https://www.wired.com/story/perplexity-is-a-bullshit-machine/) remain contentious issues for generative AI tools, and it’s rarely public knowledge what text, video, or audio clips were used to create any of the major models. It will be interesting to see how long it takes Meta to make Movie Gen broadly available. 
The announcement blog vaguely gestures at a “potential future release.” For comparison, OpenAI announced its [AI video model, called Sora](https://www.wired.com/story/openai-sora-generative-ai-video/), earlier this year and has not yet made it available to the public or shared any upcoming release date (though WIRED did receive a few exclusive Sora clips from the company for an [investigation into bias](https://www.wired.com/story/artificial-intelligence-lgbtq-representation-openai-sora/)). Considering Meta’s legacy as a social media company, it’s possible that tools powered by Movie Gen will start popping up, eventually, inside of Facebook, Instagram, and WhatsApp. In September, competitor Google shared plans to make aspects of its Veo video model [available to creators](https://www.wired.com/story/generative-ai-tools-youtube-shorts-veo/) inside its YouTube Shorts sometime next year. While larger tech companies are still holding off on fully releasing video models to the public, you are able to experiment with AI video tools right now from smaller, upcoming startups, like [Runway](https://runwayml.com/) and [Pika](https://pika.art/home). Give Pikaffects a whirl if you’ve ever been curious what it would be like to see yourself [cartoonishly crushed](https://www.threads.net/@crumbler/post/DAokPKetoMh?xmt=AQGzNNS-5u820OA0WpsHTIxnoDiVH50L_OwMbOEw2V9DLA) with a hydraulic press or suddenly melt in a puddle.
2024-10-21
-
Facebook owner [Meta](https://www.fastcompany.com/91211773/meta-platforms-2024-layoffs-reality-labs-instagram-whatsapp-year-of-efficiency) said on Friday it was releasing a batch of new [AI](https://www.fastcompany.com/91206477/meta-ai-chatbot-brazil-uk-chatgpt) models from its research division, including a “Self-Taught Evaluator” that may offer a path toward less human involvement in the AI development process. The release follows Meta’s introduction of the tool in an August paper, which detailed how it relies upon the same “chain of thought” technique used by OpenAI’s recently released o1 models to get it to make reliable judgments about models’ responses. That technique involves breaking down complex problems into smaller logical steps and appears to improve the accuracy of responses on challenging problems in subjects like science, coding and math. Meta’s researchers used entirely AI-generated data to train the evaluator model, eliminating human input at that stage as well. The ability to use AI to evaluate AI reliably offers a glimpse at a possible pathway toward building autonomous AI agents that can learn from their own mistakes, two of the Meta researchers behind the project told Reuters. Many in the AI field envision such agents as digital assistants intelligent enough to carry out a vast array of tasks without human intervention. Self-improving models could cut out the need for an often expensive and inefficient process used today called Reinforcement Learning from Human Feedback, which requires input from human annotators who must have specialized expertise to label data accurately and verify that answers to complex math and writing queries are correct.
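The report above describes a model judging other models' outputs by reasoning step by step before rendering a verdict. A minimal sketch of that judging loop, where the prompt format, verdict parsing, and the `generate` backend are all my own illustrative assumptions rather than Meta's actual implementation:

```python
# Sketch of an LLM-as-judge loop in the spirit of a self-taught evaluator.
# `generate` is a hypothetical stand-in for any text-generation backend.

JUDGE_TEMPLATE = (
    "Compare two answers to the question below. Reason step by step, "
    "then end with a line 'Verdict: A' or 'Verdict: B'.\n\n"
    "Question: {question}\nAnswer A: {a}\nAnswer B: {b}"
)

def judge(generate, question, answer_a, answer_b):
    """Ask a model to pick the better answer via chain-of-thought.

    Returns 'A' or 'B', or None if no verdict could be parsed."""
    reasoning = generate(JUDGE_TEMPLATE.format(question=question, a=answer_a, b=answer_b))
    # Scan from the end: the verdict is asked for on the final line.
    for line in reversed(reasoning.strip().splitlines()):
        if line.startswith("Verdict:"):
            choice = line.split(":", 1)[1].strip()
            return choice if choice in ("A", "B") else None
    return None

# Stub backend for demonstration: always prefers the second answer.
def stub_model(prompt):
    return "The second answer gives more detail.\nVerdict: B"

print(judge(stub_model, "What is 2+2?", "4", "4, because 2+2=4"))  # B
```

In a self-taught setup, verdicts like these, produced at scale on synthetic response pairs, supply the preference labels that would otherwise come from human annotators.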
2024-11-01
-
ChatGPT takes on Google, Meta's spending spree, and Microsoft's data center problem: AI news roundup Michael Hunter, an Atlanta-based real estate marketing professional and Apple ([AAPL](https://qz.com/quote/AAPL)) power user, has watched Apple’s new Apple Intelligence features evolve from promising to problematic. After a month with iOS 18.1's early release through his developer account, Hunter was impressed by the system’s enhanced Siri capabilities and responsiveness. [Read More](https://qz.com/apple-intelligence-ai-iphone-beta-features-siri-users-1851684957)
2025-01-15
-
Posted by msmash on Wednesday January 15, 2025 @01:01PM from the how-about-that dept. Executives and researchers leading Meta's AI efforts [obsessed over beating OpenAI's GPT-4 model](https://techcrunch.com/2025/01/14/meta-execs-obsessed-over-beating-openais-gpt-4-internally-court-filings-reveal/) while developing Llama 3, according to internal messages unsealed by a court in one of the company's ongoing AI copyright cases, Kadrey v. Meta. From a report: _"Honestly... Our goal needs to be GPT-4," said Meta's VP of Generative AI, Ahmad Al-Dahle, in an October 2023 message to Meta researcher Hugo Touvron. "We have 64k GPUs coming! We need to learn how to build frontier and win this race." Though Meta releases open AI models, the company's AI leaders were far more focused on beating competitors that don't typically release their model's weights, like Anthropic and OpenAI, and instead gate them behind an API. Meta's execs and researchers held up Anthropic's Claude and OpenAI's GPT-4 as a gold standard to work toward. The French AI startup Mistral, one of the biggest open competitors to Meta, was mentioned several times in the internal messages, but the tone was dismissive. "Mistral is peanuts for us," Al-Dahle said in a message. 
"We should be able to do better," he said later._
2025-01-24
-
Threads, Meta’s rival to X and Bluesky, is testing ads with certain brands in the United States and Japan, the company said Friday. “We know there will be plenty of feedback about how we should approach ads, and we are making sure they feel like Threads posts you’d find relevant and interesting,” Instagram head Adam Mosseri [said in a post](https://www.threads.net/@mosseri/post/DFN0dSVhL26). He added that the team will be monitoring the test “before scaling it more broadly.” The ads will show a “Sponsored” label as they appear in users’ feeds. Meta launched Threads in 2023 and has been focusing on growing its user base and keeping people logged on. Now that it has more than [300 million monthly active users](https://www.threads.net/@zuck/post/DDqBLlMyIGD) (with more than 100 million of those using it daily), better monetization efforts appear to be the next step. After all, social media is just one big way to turn eyeballs into revenue. Meta Platforms, parent company of Facebook, Instagram, and WhatsApp, is likely to share an update about Threads when it [reports fourth-quarter 2024 earnings](https://investor.atmeta.com/investor-news/press-release-details/2025/Meta-to-Announce-Fourth-Quarter-and-Full-Year-2024-Results/default.aspx) next week. Its stock on Friday afternoon was trading at near-record highs. Responses to Mosseri’s post announcing the test revealed frustration from some users. “You put in ads, there will be no reason to stay...,” one user wrote. “I’ll leave the minute the ads start rolling by. Guaranteed.”
2025-01-27
-
Meta is expected to release its fourth-quarter earnings results after the close of trading on Wednesday. The tech giant’s stock closed up 1.7% on Friday, near record highs. The company is expected to report revenue of $47 billion for the fourth quarter of 2024, according to analysts’ estimates compiled by FactSet. Net income is estimated at $23.3 billion for the quarter ended in December, while earnings per share are expected to be $6.75. On Friday, Meta chief executive Mark Zuckerberg said in a [Facebook post](https://www.facebook.com/zuck/posts/pfbid0219ude255AKkmk4JAueXZeZ9zpjNYio2tBkd7bNmCaRbJ6iJaVVjypUgDg78CNdq5l) that the tech giant is planning to [invest between $60 billion and $65 billion in capital expenditures](https://qz.com/mark-zuckerberg-meta-ai-llama-model-data-center-layoffs-1851747078) on AI in 2025. Zuckerberg said he expects Meta AI to “be the leading assistant serving more than 1 billion people” in 2025, and he added that Meta’s Llama 4 model is expected to “become the leading state of the art model” this year. The company also plans to “build an AI engineer” that can contribute more code to its research and development efforts, he said. To support the AI expansion, Zuckerberg said the company is building a data center with a capacity of more than two gigawatts — a site that could cover a large part of Manhattan. The data center will bring around one gigawatt of compute power online in 2025, Zuckerberg said, and Meta will have more than 1.3 million graphics processing units (GPUs) by the end of the year. 
Meta plans to “significantly” grow its AI teams, Zuckerberg said, and has “the capital to continue investing in the years ahead.” Earlier this month, Meta — which operates Facebook, Instagram, and Reality Labs (formerly Oculus VR) — sent a message to company managers about [reducing headcount by 5%](https://www.bloomberg.com/news/articles/2025-01-14/meta-is-planning-to-cut-5-of-lowest-performers-memo-shows?sref=P6Q0mxvj), affecting about 3,600 jobs, Bloomberg reported. The cuts are aimed at “low-performers” who will reportedly be replaced later in the year. “We typically manage out people who aren’t meeting expectations over the course of a year, but now we’re going to do more extensive performance-based cuts during this cycle,” Zuckerberg reportedly said in the internal message. Meanwhile, the Meta chief has introduced new company policies, such as cutting back on moderation and scrapping the platform’s fact-checking system, as he’s [grown closer with the Trump administration](https://qz.com/mark-zuckerberg-meta-facebook-donald-trump-elon-musk-1851734686?_gl=1*1ipb904*_ga*MjMyMTcyODYuMTcwNzE2NTQ5Mg..*_ga_V4QNJTT5L0*MTczNzczMTI0OS40ODMuMS4xNzM3NzMzMjExLjYwLjAuMA..).
2025-01-29
-
Days after Chinese artificial intelligence startup DeepSeek sparked a [global tech stock sell-off](https://qz.com/nasdaq-nvidia-tech-stocks-deepseek-ai-djia-sp500-1851748172), a homegrown rival said its new AI model performed even better. Alibaba Cloud released an upgraded version of its [flagship AI model, Qwen2.5-Max](https://mp.weixin.qq.com/s/hP-r8h-LliFUPYKbd3lkUQ), which it said outperformed top open-source competitors, including DeepSeek’s V3 model and Meta’s Llama 3.1, on various benchmarks, according to [results](https://mp.weixin.qq.com/s/hP-r8h-LliFUPYKbd3lkUQ) published by the firm on WeChat. The cloud computing subsidiary of Alibaba Group also found that Qwen2.5-Max performed comparably to OpenAI’s GPT-4 and Anthropic’s Claude 3.5 Sonnet — both closed-source models. The Chinese firm said its AI model “has demonstrated world-leading model performance in mainstream authoritative benchmarks,” including the Massive Multitask Language Understanding (MMLU) benchmark, which evaluates general knowledge, and LiveCodeBench, which tests coding skills. The Qwen2.5-Max announcement follows DeepSeek’s launch of its [first-generation reasoning models, DeepSeek-R1](https://qz.com/china-ai-startup-deepseek-r1-v3-openai-reasoning-model-1851748222), last week, which demonstrated comparable performance to OpenAI’s reasoning models, o1-mini and o1, on several industry benchmarks, according to its [technical paper](https://api-docs.deepseek.com/news/news250120). 
The release of DeepSeek-R1 prompted Nasdaq, Dow Jones Industrial Average, and S&P 500 futures [to fall](https://qz.com/nasdaq-nvidia-tech-stocks-deepseek-ai-djia-sp500-1851748172) Monday morning. Nvidia’s shares [plunged 17%](https://qz.com/nvidia-deepseek-r1-ai-model-chips-stock-rout-china-us-1851748667), wiping out nearly $600 billion in value — a record single-day loss for a U.S. company. Investors were spooked by the DeepSeek-R1 launch, which comes after the December release of DeepSeek-V3. While Alibaba Cloud hasn’t disclosed its development costs, DeepSeek’s [claim that it built its model](https://github.com/deepseek-ai/DeepSeek-V3/blob/main/DeepSeek_V3.pdf) for just $5.6 million using Nvidia’s reduced-capability graphics processing units has caught the market’s attention, challenging assumptions about the [massive investments needed for AI development](https://qz.com/stargate-ai-infrastructure-data-center-trump-openai-1851744873). According to the technical paper, DeepSeek trained its V3 model on a cluster of 2,048 Nvidia H800 chips — a less powerful version of the chipmaker’s H100 that Nvidia is allowed to sell to Chinese firms under U.S. chip restrictions. The cluster is also much smaller than the [tens of thousands of chips](https://developer.nvidia.com/blog/supercharging-llama-3-1-across-nvidia-platforms) U.S. firms use to train similarly sized models. DeepSeek’s release has called [Big Tech’s tens of billions in spending on AI](https://qz.com/tech-earnings-meta-microsoft-apple-deepseek-ai-nvidia-1851749011) into question ahead of a slate of earnings results, as well as the effectiveness of U.S. export controls meant to keep advanced chips out of China.
2025-02-05
-
He has reason to be optimistic, though: Meta is currently ahead of its competition thanks to the success of the Ray-Ban Meta smart glasses—the company sold [more than 1 million units](https://www.theverge.com/meta/603674/meta-ray-ban-smart-glasses-sales) last year. It is also preparing to roll out new styles through a partnership with Oakley, which, like Ray-Ban, is under the EssilorLuxottica umbrella of brands. And while its current second-generation specs can’t show the wearer digital data and notifications, a third version complete with a small display is due for release this year, according to the [_Financial Times_](https://www.ft.com/content/77bd9117-0a2d-4bd7-9248-4dd288f695a4). The company is also reportedly working on a lighter, more advanced version of its Orion AR glasses, dubbed Artemis, that could go on sale as early as 2027, [_Bloomberg_](https://www.bloomberg.com/news/articles/2025-01-21/meta-hardware-plans-oakley-and-ar-like-glasses-apple-watch-and-airpods-rivals?sref=E9Urfma4) reports. Adding display capabilities will put the Ray-Ban Meta glasses on equal footing with Google’s unnamed Android XR glasses project, which sports an [in-lens display](https://blog.google/products/android/android-xr/) (the company has not yet announced a definite release date). The prototype the company demoed to journalists in September featured a version of its AI chatbot Gemini, and much the way Google built its Android OS to run on smartphones made by third parties, its Android XR software will eventually run on smart glasses made by other companies as well as its own. 
These two major players are competing to bring face-mounted AI to the masses in a race that’s bound to intensify, adds Rosenberg—especially given that both [Zuckerberg](https://s21.q4cdn.com/399680738/files/doc_financials/2024/q4/META-Q4-2024-Earnings-Call-Transcript.pdf) and Google cofounder [Sergey Brin](https://www.businessinsider.com/sergey-brin-google-glass-ai-killer-app-comments-project-astra-2024-5) have called smart glasses the “perfect” hardware for AI. “Google and Meta are really the big tech companies that are furthest ahead in the AI space on their own. They’re very well positioned,” he says. “This is not just augmenting your world, it’s augmenting your brain.” When the AR gaming company Niantic’s Michael Miller walked around CES, the gigantic consumer electronics exhibition that takes over Las Vegas each January, he says he was struck by the number of smaller companies developing their own glasses and systems to run on them, including Chinese brands DreamSmart, Thunderbird, and Rokid. While it’s still not a cheap endeavor—a business would probably need a couple of million dollars in investment to get a prototype off the ground, he says—it demonstrates that the future of the sector won’t depend on Big Tech alone. “On a hardware and software level, the barrier to entry has become very low,” says Miller, the augmented reality hardware lead at Niantic, which has partnered with Meta, Snap, and Magic Leap, among others. “But turning it into a viable consumer product is still tough. Meta caught the biggest fish in this world, and so they benefit from the Ray-Ban brand. It’s hard to sell glasses when you’re an unknown brand.” That’s why it’s likely ambitious smart glasses makers in countries like Japan and China will increasingly partner with eyewear companies known locally for creating desirable frames, generating momentum in their home markets before expanding elsewhere, he suggests. 
These smaller players will also have an important role in creating new experiences for wearers of smart glasses. A big part of smart glasses’ usefulness hinges on their ability to send and receive information from a wearer’s smartphone—and third-party developers’ interest in building apps that run on them. The more the public can do with their glasses, the more likely they are to buy them.
2025-03-13
-
Meta is rolling out community notes on March 18, taking a page from the playbook of Elon Musk’s X. The incoming feature will ask users to fact-check or clarify claims in popular posts, marking a departure from Meta’s [former fact-checking system](https://www.facebook.com/journalismproject/programs/third-party-fact-checking/selecting-partners), which relied on fact-checking experts. “We won’t be reinventing the wheel. Initially we will use X’s open-source algorithm as the basis of our rating system,” Meta said in a [press release](https://about.fb.com/news/2025/03/testing-begins-community-notes-facebook-instagram-threads/) on Thursday. Twitter introduced community notes under the name Birdwatch in 2021, well before Musk bought the service and rebranded it as X. Users on X already rank other users’ notes, and the most popular response appears directly below posts. Meta said it will launch its similar feature on Facebook, Instagram and Threads, but only within the United States for now. The company eventually intends to roll out the new system globally. Meta added that user-submitted notes won’t actually appear beneath posts until it thinks its system is working properly. Meta first announced that it would retire its third-party fact-checking program in January. At the time, CEO [Mark Zuckerberg said](https://qz.com/meta-fact-check-elon-musk-trump-x-community-notes-1851733906) that the company would replace it with community notes, similar to X, without giving much detail. Meta’s third-party fact-checking program started in 2016, shortly after President Donald Trump won his first election. At the time, Facebook faced criticism for failing to catch election-related misinformation on the platform, including disinformation campaigns led by foreign governments. 
“We expect Community Notes to be less biased than the third party fact checking program it replaces, and to operate at a greater scale when it is fully up and running,” the company said in the press release, saying the experts in the earlier fact-checking program had political biases that affected their judgment. “Community Notes allow more people with more perspectives to add context to more types of content, and because publishing a note requires agreement between different people, we believe it will be less prone to bias,” Meta said. Separately, Zuckerberg has said the change could also mean that Meta is “going to catch less bad stuff,” per [ABC](https://abcnews.go.com/US/why-did-meta-remove-fact-checkers-experts-explain/story?id=117417445). Meta’s community notes also won’t carry penalties. Under the earlier system, posts flagged by third-party fact-checkers were shown less often in people’s feeds because they potentially harbored false and harmful information. That won’t be the case with posts that receive community notes. But X’s crowd-sourced fact-checking has also been deemed ill-equipped for handling misinformation. [Reports](https://apnews.com/article/x-musk-twitter-misinformation-ccdh-0fa4fec0f703369b93be248461e8005d) have found that accurate notes on misleading posts were not always displayed, and even when they were, the original post got significantly more views than the correcting note. Meta said that around 200,000 users have signed up to become Community Notes contributors so far across all three apps, and the waitlist is still open for those who wish to take part. The feature will be available in English, Spanish, Chinese, Vietnamese, French and Portuguese to start, before expanding to other languages over time.