Companies that develop generative AI applications typically collect vast amounts of data and use it to train their models. Courts are currently examining whether copyright requirements have been met, and the outcome is still open. Microsoft therefore now wants to indemnify its customers against damages.
The company announced this in a blog post: "If a third party sues a commercial customer for copyright infringement for using Microsoft Copilot or the output it generates, we will defend the customer and pay the costs of any judgment or settlement resulting from the lawsuit." However, this only applies if customers use the protection mechanisms and content filters built into Microsoft's Copilot products.
Copilot is the name of the AI assistant that Microsoft intends to integrate across practically its entire product range, from Windows and its office applications to GitHub, Edge, and security products. The Copilots are based on OpenAI's GPT-4 language model.
Fear of lawsuits is a deterrent
The blog post makes one thing clear: Microsoft's customers have not remained unfazed by the lawsuits that have been filed and the reporting on potential copyright violations by AI companies. Microsoft now acknowledges that legal risks exist, but emphasizes that it wants to take on that responsibility itself and that its legal assessment has not changed.
Microsoft sets out what users must observe to qualify for protection in its Copilot Copyright Commitment. In essence, they must not violate the terms of use: users may not bypass the built-in filters intended to prevent illegal content from being generated, and prompts designed to produce content to which users hold no rights are prohibited.
Microsoft explains that the approach is comparable to its handling of patent claims, where it likewise assumes responsibility on behalf of its customers.
Legal proceedings are ongoing
Numerous copyright cases are already pending, and the list keeps growing. The core accusation is that AI developers knowingly used protected material to train their models. Actors and authors in the US are suing on these grounds.
Meanwhile, AI companies such as OpenAI, Google, and Meta have become increasingly secretive. Information about the data sets used for recently presented models has either not been published at all or been significantly reduced, not only because of copyright concerns but also to protect trade secrets.
Nevertheless, politicians are calling for more transparency. The EU AI Act contains relevant regulations.
Platform operators are now looking for different ways to protect themselves or their customers. Microsoft clearly aims to drive the adoption of AI assistants, and allaying customers' fears is central to that effort. Valve, by contrast, recently stated that it would no longer list games on Steam if they were created with AI content based on copyrighted works.