Tech's habit of reinventing the wheel has its drawbacks: It can mean ignoring blatant truths that others already know. But the good news is that new founders sometimes find answers faster than their predecessors did. – Anna
Artificial intelligence, trust and safety
This is an Olympic year, a leap year… and also an election year. But before you accuse me of American defaultism: I'm thinking of more than just a Biden-versus-Trump sequel. More than 60 countries are holding national elections this year, not to mention the EU Parliament.
Each of these votes could have implications for tech companies; for example, different political parties often hold different views on artificial intelligence regulation. But ahead of the elections, technology will also play a role in ensuring their integrity.
Mark Zuckerberg may not have had election integrity in mind when he founded Facebook, or even when he acquired WhatsApp. But 20 and 10 years later, respectively, trust and safety are responsibilities that Meta and other tech giants cannot escape, whether they like it or not. This means working to prevent misinformation, scams, hate speech, CSAM (child sexual abuse material), content promoting self-harm and more.
However, artificial intelligence may make the task more difficult, and not just because deepfakes empower more bad actors. As Lotan Levkowitz, a general partner at Grove Ventures, told me:
All these trust and safety platforms have a shared hash repository, so I could upload what was bad there, share it with my whole community, and everyone would work together to prevent it. But today, I can train a model to try to evade it. So even the more classic trust and safety work is becoming harder and harder because of gen AI, because algorithms can help bypass all of those things.
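The shared hash repository Levkowitz mentions can be sketched in a few lines. This is a hypothetical illustration, not any platform's actual implementation: partners exchange content fingerprints (here, plain SHA-256 digests) rather than the content itself, and each platform checks new uploads against the shared set. The names `fingerprint`, `shared_hashes` and `is_flagged` are invented for the example.

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Return a hex digest identifying a piece of content."""
    return hashlib.sha256(content).hexdigest()

# A shared repository is, at its simplest, a set of such digests
# contributed by all participating platforms.
shared_hashes = {fingerprint(b"known-bad-payload")}

def is_flagged(content: bytes) -> bool:
    """True if this exact content was already reported by a partner."""
    return fingerprint(content) in shared_hashes

print(is_flagged(b"known-bad-payload"))   # exact copy: caught
print(is_flagged(b"known-bad-payload!"))  # one byte changed: missed
```

The second check illustrates the weakness the quote points to: exact hashing only catches verbatim copies, and generative models can produce endless near-duplicates whose digests never match the repository.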
From hindsight to foresight
While online forums already knew a thing or two about content moderation, there was no social network playbook to follow when Facebook was born, so it took the company a while to get the job right, which is understandable to some extent but frustrating nonetheless. That frustration grows when internal Meta documents dating back to 2017 show reluctance to take steps that would better protect children.
Zuckerberg was one of five social media CEOs who testified at a recent U.S. Senate hearing. It wasn't Meta's first such testimony, but it is worth noting that Discord was also included: the company has grown beyond its gaming roots, and its presence is a reminder that trust and safety threats can arise in many online venues. Social gaming apps, for instance, may also put their users at risk of threats such as phishing.
Will newer companies learn these lessons faster than the FAANGs did? There's no guarantee: Founders often operate from first principles, which can be good or bad, and the content moderation learning curve is real. But OpenAI is much younger than Meta, so it's encouraging to hear that it's forming a new team to research child safety, even if that may be a result of the scrutiny it's under.
However, some startups are not waiting for signs of trouble before taking action. ActiveFence, a provider of AI trust and safety solutions and part of the Grove Ventures portfolio, is seeing more inbound requests, CEO Noam Schwartz told me.
“I see a lot of people contacting our team from companies that are just starting up, or even pre-launch. They are thinking about the safety of their products during the design phase [and] adopting a concept called safety by design. They are building safety into their products, just like today you build features with security and privacy in mind.”
ActiveFence isn't the only startup in the space, which Wired described as “trust and safety as a service.” But it is one of the largest, especially since its September acquisition of Spectrum Labs, so it's good to hear that its clients include not only large corporations fearing PR crises and political scrutiny, but also smaller teams just getting started. Tech, too, has a chance to learn from past mistakes.