Tech major Apple restricts employee use of ChatGPT to prevent leaks
ChatGPT | Image: Wallpaper Cave

@indiablooms | 19 May 2023, 06:33 pm

Moscow: US tech giant Apple has restricted the use of ChatGPT and a number of other artificial intelligence (AI) tools for its employees in order to prevent data leaks, as the company is developing its own similar technology, The Wall Street Journal reported on Friday, citing a document and people familiar with the matter.

Apple, known for its stringent secrecy measures, has prohibited its employees from using newly popular programs that automatically write emails and software code, such as ChatGPT and Copilot from Microsoft-owned GitHub, because they can result in the inadvertent sharing of company information with the bots' developers, the report said.

Apple has been wary of AI-powered software entering its App Store lately. In early March, the company blocked the BlueMail email app after it was updated with ChatGPT, citing concerns that it could display inappropriate content to underage users.

Apple is not the only company concerned about the privacy risks posed by the chatbot. In April, South Korea's Samsung banned its employees from using ChatGPT after discovering that their use of the tool had caused a source code leak.

OpenAI's ChatGPT language model, launched in late November 2022, has received mixed reviews for its ability to mimic human conversation and generate original text from user input. Some have praised the model for professional applications such as code development, while others have criticized its potential for abuse, such as students using it to write essays.

(With UNI inputs)
