• AI Drop
  • Posts
  • ChatGPT Hacked To Reveal Its Training Data

ChatGPT Hacked To Reveal Its Training Data

Plus Google's Deep Learning Unlocks Millions of New Materials with AI GNoME

Welcome back for the Latest AI Drops!

Researchers from Google's DeepMind and several universities uncovered a vulnerability in ChatGPT, demonstrating its potential to unintentionally expose sensitive training data, raising critical privacy and security issues.

Today’s Drops:

  • ChatGPT Hacked To Reveal Its Training Data

  • Google DeepMind's AI “GNoME”

  • Hottest AI Startup “Perplexity”

  • Trending on X “Make It More”

  • More AI Headlines

  • Trending GitHub Projects

  • Latest AI Tools

Read Time: 4 minutes

How Google Researchers Unlocked ChatGPT's Training Data

A team of researchers primarily from Google’s DeepMind systematically convinced ChatGPT to reveal snippets of the data it was trained on.

The researchers used a novel attack method where they prompted ChatGPT to endlessly repeat specific words. This technique led to ChatGPT eventually divulging various types of data it was trained on, such as verbatim excerpts from websites, including CNN, Goodreads, WordPress blogs, and others, along with personally identifiable information (PII).

The researchers, from Google DeepMind, the University of Washington, Cornell, Carnegie Mellon University, the University of California Berkeley, and ETH Zurich, wrote in a paper published on arXiv.

This research paper demonstrates that ChatGPT, even with its advanced alignment techniques, could still memorize and output training data verbatim, raising concerns about data security and privacy. The researchers found that about 16.9% of the AI's outputs contained memorized PII, including phone numbers, email addresses, and other sensitive details.

The implications of this study are significant. It highlights the potential risks in the way large language models like ChatGPT are trained and the necessity for stringent safeguards, especially in applications where privacy is crucial. 

OpenAI addressed the vulnerability discovered by the researchers, patching it to prevent such data exposure in the future. The research paper serves as a reminder of the challenges and responsibilities associated with the development and deployment of advanced AI systems.

Google DeepMind's AI “GNoME” Revolutionized Material Science

Google DeepMind's AI GNoME has predicted the structure of 2.2 million new crystals, among which 380,000 are stable and viable for technological applications​​. GNoME's discovery is equivalent to around 800 years of accumulated knowledge. This achievement is pivotal because stable crystals are essential in various technologies like computer chips, batteries, and solar panels. 

  • GNoME uses advanced algorithms for material stability prediction, speeding up the discovery process.

  • GNoME accuracy is confirmed with 736 new structures created in laboratories.

  • GNoME uses graph neural networks to model atomic connections and predict crystalline structures.

  • GNoME was trained on a large dataset of crystal structures; predictions refined through active learning.

  • GNoME has discovered materials potentially useful for next-generation superconductors, advanced batteries, and other transformative technologies.

  • This innovation substantially reduces the cost and time for material discovery, democratizing access to this information for broader research and practical applications.

Why this matters: GNoME’s achievements underscore the immense potential of AI in revolutionizing materials science. By providing a vast array of stable, new materials, it paves the way for sustainable and advanced technological solutions, setting new standards in materials stability and innovation. This breakthrough could have profound implications on various technologies, fostering future research and development in multiple sectors.

Hottest AI Startup!

Perplexity

Perplexity has introduced its latest online large language models, pplx-7b-online and pplx-70b-online, setting a new standard in the realm of LLMs. 

What sets these models apart is their unique ability to overcome typical LLM shortcomings, namely outdated information and inaccuracies. By harnessing the latest internet knowledge, they deliver responses that are not only up-to-date and factual but also highly relevant and useful.

To validate their effectiveness, Perplexity carried out a rigorous human evaluation process. This involved a side-by-side comparison with renowned models like OpenAI's gpt-3.5-turbo-1106 and Meta AI's llama2-70b-chat, assessing factors such as helpfulness, factuality, and freshness. The outcomes were clear: Perplexity's models demonstrated the capability to either match or exceed the performance of these established models, particularly excelling in providing responses that are both accurate and current.

Here is an example that show remarkable accuracy and up-to-date information in their responses: 

Trending on X

“Make It More” trend on ChatGPT/Dalle-E

Generate an image of something, and then keep asking for it to be MORE.

More Exciting AI News

  1. Stability AI, creators of Stable Diffusion, is exploring the possibility of selling the company amid increased pressure from investors regarding its financial position. The firm, recently valued at $1 billion, has been in early-stage talks with potential buyers, but a deal is not imminent. Key investor Coatue Management has called for CEO Emad Mostaque's resignation, citing leadership issues and financial instability.Despite these challenges, the company remains focused on developing new AI models.

  2. Sam Altman officially back as CEO and Microsoft to take non-voting, observer position on OpenAI's board. This change gives Microsoft, a major OpenAI investor, greater insight into the company without direct decision-making power. The OpenAI board now includes Bret Taylor as chair, Larry Summers, and Adam D’Angelo, with three previous members who were involved in Altman's firing no longer present.

    In a recent interview, Sam Altman expressed initial feelings of hurt and defiance but chose to return, driven by his dedication to OpenAI's mission of developing safe AGI. Altman refrained from discussing the reasons for his firing, citing an ongoing independent board review. Altman acknowledged the company's ability to function without him and recognized the need for improvements in OpenAI's governance structure.

  3. Amazon finally releases its own AI-powered image generator, AWS Titan image generator. Titan Image Generator is now available in preview for AWS customers and can create new images when given a text description or customize existing images.

  4. Together, a startup creating open source generative AI and AI model development infrastructure, announced that it closed a $102.5 million Series A funding round led by Kleiner Perkins with participation from Nvidia and Emergence Capital.

Welcome to the innovative world of GitHub projects!

ComfyUI is a powerful and modular GUI (Graphical User Interface) for Stable Diffusion, featuring a graph/nodes interface. It allows users to design and execute advanced stable diffusion pipelines using a flowchart-based interface without needing to code.

LucidDreamer transforms text descriptions into detailed 3D models. It addresses the common issue of low-quality, over-smoothed 3D outputs in earlier methods by using Interval Score Matching and 3D Gaussian Splatting. This results in more accurate and high-fidelity 3D images, enhancing both the quality and the efficiency of 3D model generation from text.

Latest AI Tools

Syllaby.io elevate your social media presence with Syllaby.io. Discover viral topics, craft engaging scripts with their AI, and publish captivating videos effortlessly. 

Morph 1.0 is an AI-powered BI dashboard across your SaaS data.

Avatar Generator by HeadshotPro creates a cute avatar from your photo.

neoSVG is an AI-based tool that generates SVG vectors from text prompts.

Mavy is your personal AI executive assistant.