Meta Platforms (META.O), the owner of Facebook, said on Wednesday it had released an AI model that can pick out individual objects within an image, along with what it calls the largest-ever dataset of image annotations.
Meta’s research division said in a blog post that its Segment Anything Model (SAM) can identify objects in images and videos even when it has not encountered those objects during training.
What is SAM?
SAM lets users select objects either by clicking on them or by typing a text prompt. In one demonstration, entering the word “cat” prompted the tool to draw boxes around each of the cats in a photo.
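For readers who want to try the click-to-select workflow, the following is a minimal sketch using Meta’s open-source segment-anything Python package with a single point prompt. The checkpoint filename, image path, and click coordinates are placeholder assumptions, and the text-prompt mode described above is not covered here.

```python
import numpy as np
import cv2
from segment_anything import sam_model_registry, SamPredictor

# Load a pretrained SAM checkpoint (the filename below is a placeholder;
# download a checkpoint from the segment-anything repository first).
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
# sam.to("cuda")  # optional: move the model to a GPU if one is available
predictor = SamPredictor(sam)

# Read an image and convert it to RGB, as the predictor expects.
image = cv2.cvtColor(cv2.imread("example.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# A single foreground click at pixel (x=500, y=375) serves as the prompt.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[500, 375]]),
    point_labels=np.array([1]),   # 1 = foreground point, 0 = background
    multimask_output=True,        # return several candidate masks
)
best = masks[np.argmax(scores)]
print(f"Best mask covers {best.sum()} pixels")
```

Requesting multiple masks and keeping the highest-scoring one is a common way to handle ambiguous clicks, since a single point could plausibly refer to a part, an object, or a group of objects.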
Since OpenAI’s ChatGPT chatbot, backed by Microsoft (MSFT.O), went viral last fall, major technology companies have been touting their own artificial intelligence (AI) advances in an effort to gain market share.
Meta has not yet released a consumer product of its own, but it has teased a number of features built on generative AI, the branch of AI popularized by ChatGPT that creates entirely new content rather than merely identifying or categorizing existing data.
Examples include a tool that generates surrealist videos from text prompts and another that produces picture-book-style illustrations from written descriptions.
Mark Zuckerberg, Meta’s CEO, has said that bringing such generative AI “creative aids” into the company’s apps is a priority this year.
Meta already uses technology similar to SAM internally for tasks such as tagging photos, blocking prohibited content, and curating which posts are recommended to Facebook and Instagram users.
The company said that releasing SAM would broaden access to this kind of technology.
The SAM model and its dataset are available for academic and research use, but not for commercial purposes. Users who upload their own photos to the accompanying prototype must also agree that the images will be used only for research.
Content Source: reuters.com