
AI News

The Dark Side of AI: Runway Secretly Built on Scraped YouTube Videos

According to a report from 404 Media, the company utilized thousands of YouTube videos and pirated films to build its latest video creation tool.


Runway's CEO Cris Valenzuela

Runway, an AI startup, has recently come under scrutiny for the methods it used to train its AI text-to-video generator. According to a report from 404 Media, the company utilized thousands of YouTube videos and pirated films to build its latest video creation tool, Gen-3 Alpha. The training data included content from major entertainment companies like Netflix, Disney, Nintendo, and Rockstar Games, as well as popular YouTube creators such as MKBHD, Linus Tech Tips, and Sam Kolder.

A spreadsheet obtained by 404 Media reveals the extent of Runway’s data-gathering efforts. It features links to YouTube channels owned by prominent entertainment and news outlets, including The Verge, The New Yorker, Reuters, and Wired.

A former Runway employee explained that the company employed a massive web crawler to download videos from these channels, using proxies to avoid detection by Google.

This revelation raises significant questions about the legality and ethics of using such content for AI training. YouTube’s policies clearly state that training AI on its platform’s videos violates its terms of service, a point YouTube’s CEO Neal Mohan emphasized in a statement to Bloomberg in April. Despite this, tech giants including Apple, Anthropic, and Nvidia have also reportedly used YouTube videos to train their AI models.

The Ownership Debate: Who Deserves the Credit?

The use of YouTube videos and pirated films for AI training touches on deeper issues related to ownership rights and copyright. Video creators, whether independent YouTubers or large entertainment companies, invest significant time and resources into their content. Shouldn’t they receive recognition or compensation when their work is used to train AI models?

For instance, creators like MKBHD and Linus Tech Tips have built substantial followings and brands through their unique content. If their videos are used to enhance AI tools, it’s only fair that they get some form of reward or acknowledgment. This issue is not just about legality but also about ethical considerations and respecting the creators’ hard work.

The Impact on Content Creators

The implications of AI training practices extend beyond legal boundaries. If AI companies continue to use unlicensed content for training, it could discourage content creators from sharing their work publicly. This would be detrimental to the vibrant and diverse ecosystem of platforms like YouTube, where creators thrive on the free exchange of ideas and creativity.

Furthermore, the lack of transparency in AI training practices makes it difficult for creators to protect their work. While Runway’s cofounder, Anastasis Germanidis, mentioned the use of “curated, internal datasets,” the specifics remain unclear. Without clear guidelines and disclosures, content creators are left in the dark about how their work is being utilized.

The Path Forward: Ethical AI Training

To move towards more ethical AI development, companies must adopt transparent and fair practices. This includes obtaining proper licenses or permissions before using content for training purposes. Additionally, there should be mechanisms in place to compensate content creators whose work contributes to AI advancements.

As AI technology continues to evolve, it is crucial to balance innovation with respect for intellectual property and creator rights. By fostering a culture of transparency and fairness, AI companies can build trust with the public and ensure that their advancements benefit all stakeholders involved.

AI News

OpenAI o1 “Strawberry” Thinks Like a Human—Shaking Up the Tech World!


OpenAI o1 “Strawberry” model release

OpenAI just took AI to the next level with its new o1-preview models. If you’ve been wondering: yes, this is the rumored ‘Strawberry’ model.

o1 is designed to attempt tasks that previous models were thoroughly unequipped for. The series is particularly suited to science, coding, and mathematics, spending more time breaking a problem down before committing to a solution.

A New Way of Thinking for AI

The OpenAI o1 series is trained to think through tasks like a human, weighing multiple strategies and learning from errors. This shift lets the model tackle more sophisticated problems, from difficult physics questions to producing extensive code. In internal testing, the upcoming o1 model performed on par with PhD students on benchmark tasks across physics, chemistry, and biology.

Another strong point is o1’s reasoning interface, which shows its problem-solving in real time. It is intentionally written to sound human, with phrases like “I am curious about” and “Ok, let me see” as it works through a task. According to OpenAI’s Bob McGrew, this design gives the model a more human quality, even though it remains alien in some ways.

Additionally, OpenAI o1 marks a shift in direction for AI research; McGrew said reasoning is a critical component of large-language-model advancement: “We’ve been working for many months on reasoning because actually we believe it’s the critical breakthrough.” For now, o1’s reasoning abilities are still quite slow and expensive for developers to use, but they represent a key piece of the puzzle toward autonomous systems that can make decisions and act on them without human intervention.

When it comes to coding, o1 outperforms previous models by a huge margin, ranking in the 89th percentile in Codeforces competitions.

While still in its early stages, o1-preview has made impressive strides in AI reasoning. Even though it does not yet offer all of ChatGPT’s features (such as browsing or file uploads), it is a big advance for sophisticated problem solving.

“here is o1, a series of our most capable and aligned models yet: https://openai.com/index/learning-to-reason-with-llms/… o1 is still flawed, still limited, and it still seems more impressive on first use than it does after you spend more time with it,” said OpenAI’s CEO Sam Altman on X. “but also, it is the beginning of a new paradigm: AI that can do general-purpose complex reasoning. o1-preview and o1-mini are available today (ramping over some number of hours) in ChatGPT for plus and team users and our API for tier 5 users.”

Early users of o1 are already seeing its power firsthand. One X user shared his excitement:

“GPT-o1 just generated a holographic shader from scratch, saving me (and future XR devs) from shelling out big bucks on asset stores. In retrospect, software engineering was great while it lasted. A new fork on our tech-tree!”

Another user noted:

“Every time a new LLM comes out, I try to make it solve the Wordle of the day and all of them have completely failed at it—until ChatGPT o1. It let me solve the daily Wordle in 4 tries! This is very impressive!”

Safety at the Core

Safety remains a paramount concern for the o1 series, and the model’s reasoning ability is put to work here too: the more o1-preview reasons about safety rules, the more closely it sticks to guidelines. Other currently available models performed nowhere near as well; on an intensive battery of jailbreaking tests (scenarios designed to intentionally bypass a model’s safety mechanisms), o1-preview scored 84 out of 100, compared with just 22 for GPT-4o.

Who Can Benefit from o1?

The OpenAI o1 series is a great choice for anyone working in fields that demand deep reasoning or complex problem solving. Researchers, developers, and scientists can use o1 for everything from solving complex mathematical problems to annotating scientific data or building workflows. Whether you are a physicist working in quantum optics or a developer debugging code, o1 has you covered for tackling the hardest problems.

To make the model more accessible, OpenAI is also releasing o1-mini, a variant of the reasoning model with lower cost and latency. At 80% lower cost than o1-preview, it is a sensible choice for programming problems that need focused reasoning rather than broad world knowledge.


AI News

Grok AI Floods the Web with Uncensored Deepfakes of Trump and Elon Musk Himself – Internet Reacts


Elon Musk is no stranger to controversy, and his latest venture, Grok AI, is no exception. Launched recently, Grok is already causing quite a stir with its deepfake capabilities and unfiltered content generation. Musk, known as a free speech champion, has once again pushed the boundaries of what’s possible—and acceptable—with artificial intelligence.


Grok AI made its debut on X (formerly Twitter), the social media platform owned by Musk. Integrated into the platform for premium subscribers, Grok is unlike other AI tools in that it’s designed to be almost entirely uncensored. The AI can generate realistic images from text prompts, a feature that has delighted some users and alarmed others.

A user on X remarked,

“It’s wild what Grok can do. I’ve never seen anything like it. But at the same time, it’s a little scary.”

Capabilities that push the boundaries of what’s possible—and acceptable

Grok’s standout feature is its ability to create deepfakes—AI-generated images and videos that appear almost indistinguishable from reality. Users have been quick to test the AI’s limits, creating provocative and controversial images. From scenes involving public figures in fabricated situations to more benign, though still striking, visualizations, Grok has shown a remarkable knack for going all out.

But this power comes with significant risks. Grok has been used to generate deepfake images that many find troubling. For example, users have created images of historical and political figures in highly compromising scenarios.

Despite these risks, Grok isn’t entirely without limits. Some users have reported that the AI refuses to generate certain types of content, such as explicit nudity or depictions of severe violence. However, these boundaries appear to be minimal, and many users have found ways to work around them.

Grok AI. Image: Jasper van Dijk

User Reactions: A Mixed Bag

Critics argue that Grok’s lack of restrictions makes it a dangerous tool, particularly in the wrong hands.

Alejandra Caraballo, a Harvard Law Cyberlaw Clinic instructor, didn’t mince words:

“Grok is one of the most reckless and irresponsible AI implementations I’ve ever seen. The potential for misuse is staggering.”

Musk, however, has defended Grok, framing it as a tool for creativity and free expression. He’s dismissed concerns about the AI’s potential dangers, saying,

“Grok is about having fun with AI. We’re exploring what’s possible—nothing more, nothing less.”

Grok was developed in partnership with Black Forest Labs, a small startup in Germany that specializes in AI technologies. The lab’s FLUX.1 image generation software powers Grok, and the collaboration with Musk is part of a broader strategy to set Grok apart from other AI tools on the market.

Musk’s decision to launch Grok with few restrictions appears to be a calculated risk. By offering an AI that pushes the limits, Musk is challenging the norms of AI development. However, this approach has already attracted scrutiny from regulators, particularly in Europe, where there are ongoing investigations into X’s handling of dangerous content.

The controversy surrounding Grok AI highlights the need for a balanced approach to AI development. While Grok offers exciting possibilities for creativity and innovation, it also poses significant risks. The ease with which it can be used to create misleading or harmful content underscores the importance of responsible AI use.

For now, Grok remains a divisive tool, celebrated by some for its potential and condemned by others for its dangers. As Musk continues to push the boundaries of what AI can do, the debate over the ethical implications of these technologies is only just beginning.

In the words of one X user, “Grok is a glimpse into the future of AI—one that’s both thrilling and terrifying.” Whether this future is one we want to embrace remains to be seen.


AI Applications

Procreate on Generative AI: “We’re never going there”


Procreate AI

Procreate’s CEO just announced that they won’t be adding generative AI to their products. So, what’s the big deal? Well, if you’re an artist who loves creating digital art by hand, this may be music to your ears.

Procreate on Generative AI

As stated on their website, Procreate emphasizes that they “will not replace your creativity with AI.” Instead, they focus on empowering artists by enhancing their creative process, not automating it. So, if you’re someone who values real, hand-crafted digital art, Procreate’s got your back.

“I really hate generative AI. I don’t like what’s happening in the industry and I don’t like what it’s doing to artists. We’re not going to be introducing any generative AI into our products,” said Procreate’s CEO James Cuda.

Video: Procreate on X

So, What’s the Big Deal?

Generative AI has been making waves across creative industries, offering tools that can generate text, images, music, and even videos with minimal human input. For some, this tech is an amazing advancement that democratizes creativity. But when it comes to traditional artists, many of them consider it a threat to the authenticity and originality of human-made art.

AI is not our future
Photo: Procreate

Procreate’s decision to reject AI is making a strong point: they care more about human creativity than following tech trends. As stated on Procreate’s new AI section on their website:

“Generative AI is ripping the humanity out of things. Built on a foundation of theft, the technology is steering us toward a barren future. We think machine learning is a compelling technology with a lot of merit, but the path generative AI is on is wrong for us.”

Why is this a big deal? Choosing to go the other way while most tech companies jump on the AI bandwagon is a bold move. Procreate is setting itself apart from the crowd, but it is also stepping into a contentious debate.

The Community Reacts

The response has been overwhelmingly positive. Many users expressed relief, thanking the company for protecting the integrity of traditional art. Some even joked about the birth of a new “AI-free” market, seeing this as a victory for ‘real art’. Comments like “Thank you for saving real artists!” and “This is the way” flooded social media. The sentiment is clear—Procreate is viewed as a guardian of traditional, human-centered creativity.

However, the decision has not been without critics. Can’t AI and human creativity coexist, or even complement each other? One commenter noted, “There is a way that AI will ASSIST people and RESPECT artists.” This perspective suggests that AI could be used as a tool to enhance, rather than replace, human creativity.

What’s Next for Procreate?

Procreate’s decision could carve out a unique position for it among “AI-free” creative tools, catering to artists who prefer a more traditional approach and attracting users who are wary of AI’s influence on the art world.

As Procreate doubles down on its commitment to human creativity, the question remains: will this decision set them apart in a crowded market, or will it limit their potential as competitors race to incorporate more AI into their products? Only time will tell.

“We don’t exactly know where this story’s gonna go, or how it ends, but we believe that we’re on the right path to supporting human creativity.”

