In an effort to protect their constituents, European Union lawmakers are urging online platforms to set up their own systems for stopping bot accounts that masquerade as human ones. The change is expected, since it is part of the voluntary Code of Practice the European Commission wants applied under its proposals to curb disinformation online.
These proposals follow a report the Commission published last month calling for greater transparency from these platforms, which should help weed out false accounts. The platforms are expected to reduce the amount of false information, but there is also a sense of urgency around educating journalists and other media professionals so they can spread the word. By educating the people at the forefront of consumer information, these regulations can more easily take hold and carry less risk.
Mistaking Bot Interactions For Human Ones
The goal is to reduce the chance that consumers mistake bot interactions for human ones. The EC wants to establish marketing systems and rules that would keep anyone from making this assumption, given the drastic difference between the two. Unfortunately, details on exactly how such a change would work are not currently available.
Bots are well hidden across the industry, which is what makes it so easy for them to spread information while passing as real people. Detecting them relies on finding patterns that bots have in common, which is why the EC wants each platform to devise its own system. Another problem that keeps bots hidden is the sophistication of some platforms, along with the fact that many bots are partly controlled by humans, making them harder to distinguish.
The EC is also scrutinizing ad placements, restricting the targeting of political advertisements, and pushing back on the vague information available about the algorithms that decide what consumers see.
More information will be available as it is released.