Exclusive interview with Sapien AI co-founder: Label to Earn is the future of the gig economy in the AI era

By: BlockBeats · 2024/09/10 03:19

AI has three cornerstones: computing power, data, and algorithms.


Of the three, computing power has long been seen as the most important, which is why the market value of Nvidia, the industry's "shovel seller," once surpassed Microsoft and Apple to make it the world's most valuable company. Yet, as Scale AI founder Alex Wang emphasized in a podcast, data is replacing computing power as the biggest bottleneck to improving AI model performance.


AI's appetite for data is insatiable, yet the pool of accessible internet data is nearly exhausted. Further gains in model performance will have to come from more high-quality data. Enterprises hold large amounts of valuable data internally, but this unstructured data can only be used for AI training once it has been carefully labeled. Data labeling is resource-intensive and has long been regarded as the hardest, least glamorous link in the AI industry chain.


Yet it was precisely its early bet on data labeling that earned Scale AI a US$13.8 billion valuation in its latest financing round this May, surpassing many well-known large-model companies. That achievement shatters the prejudice that data labeling is mere grunt work.


Just as many decentralized compute projects have challenged Nvidia, Sapien AI, a crypto-AI project that closed a $5 million seed round this April, is taking on Scale AI. It aims not only to capture the long-tail market through a decentralized approach, but also to build the world's largest human data labeling network.


Recently, BlockBeats interviewed Trevor Koverko, co-founder and COO of Sapien AI. A co-founder of successful projects including Polymath, Polymesh, and Tokens.com, Trevor had accumulated deep entrepreneurial experience before founding Sapien AI. In the interview, he shared his experience founding Sapien AI, his strategy for competing with Scale AI, and his insights on drawing inspiration from blockchain games to design business mechanisms.


Try Sapien AI's demo at: game.sapien.io


Toronto, a fertile ground for innovation where the crypto and AI communities converge


BlockBeats: I saw on your LinkedIn that you played for the NHL's New York Rangers. As a former professional ice hockey player, how did you transition into the crypto industry?


Trevor: I've had a lot of different roles in my career. Ice hockey was my first job. In Canada, ice hockey is such a big part of our culture that if you didn't play it as a kid, you were almost considered an outlier. So it was a big part of my upbringing. I learned a lot about teamwork and high-level competition, and those experiences still influence me today.


When my ice hockey career ended, I started working in business, and I actually spent some time in Asia. I lived in China, specifically in Dalian, a city in northeastern China. My sports career and my experience in China were two very important parts of my upbringing.


I grew up in the crypto ecosystem in Toronto. I got involved in the Bitcoin community very early on, before Ethereum was even launched. We went to meetups and talked with friends, and I met Vitalik, who was just the editor of Bitcoin Magazine at the time.


Later, when Vitalik published the white paper, the Bitcoin community gradually evolved into the Ethereum community. It was a passionate time. I launched my own RWA project, Polymath, in 2017-2018, when the field didn't even have a clear category yet; we called it "security tokens." It was my first major project in crypto, and we did everything ourselves, from raising funds to releasing applications on Ethereum.



Eventually we also built our own Layer 1 blockchain, which was a bigger challenge. Fortunately, we had very smart people like Charles Hoskinson as protocol architects. Today, that blockchain has evolved into an independent brand called Polymesh, one of the earliest and largest RWA networks at the Layer 1 level. Now I am just a community member; the network is completely decentralized, so I support it from a distance. In terms of adoption it has performed very well, and RWA is gradually becoming an exciting ecosystem.


BlockBeats: What prompted you to shift your interest from RWA to AI and start Sapien AI?


Trevor: After Polymesh's daily operations were decentralized, I became interested in AI. Toronto has a very strong AI technology community, and many early architectures of modern AI were created by researchers at the University of Toronto, such as Geoffrey Hinton, the "father of deep learning", and Ilya Sutskever, former chief scientist of OpenAI.


Left: Ilya Sutskever; Right: Geoffrey Hinton


I was interested in using AI and had a bunch of smart friends at Waterloo who were working on machine learning. I gradually became interested in the AI technology stack, how it works, how training data is produced, and how humans participate in the production of this training data. It was a very natural learning process.


I didn't have the ambition to start a company at first, but after about six months of diving into AI and machine learning, under the guidance of a mentor in the machine learning graduate program at the University of Waterloo, we began to identify some interesting problem areas and opportunities to address them. Ultimately, we founded Sapien.


BlockBeats: For those who don't know Sapien AI, can you introduce the project's core mission? How important are data annotation services in today's AI industry?


Trevor: Data annotation is extremely important. It is one of the main reasons for the success of mainstream large language models such as ChatGPT, because they were the first models to use industrial-scale human data annotators to enrich their datasets.


To this day, the importance of data annotation keeps growing, because the performance competition between these models is fierce, and the best way to improve model performance is to add more professional human annotations to the dataset.


We think of data processing as a supply chain: first there is raw data, which then needs to be structured and organized. Once structured, you can train on it; after training, you can run inference. In short, it is a process of gradually adding value to data in the context of artificial intelligence.
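

To make that supply-chain framing concrete, here is a minimal Python sketch of the four stages; the function names and the toy lookup "model" are hypothetical illustrations, not Sapien AI's actual pipeline.

```python
# Toy sketch of the data value chain described above (names hypothetical):
# raw data -> structured/labeled data -> training -> inference.

from dataclasses import dataclass

@dataclass
class RawSample:
    payload: str          # unstructured input, e.g. a sentence or an image URI

@dataclass
class LabeledSample:
    payload: str
    label: str            # the structure a human annotator adds

def annotate(raw: RawSample, label: str) -> LabeledSample:
    """Step 2: structure raw data with human feedback."""
    return LabeledSample(raw.payload, label)

def train(dataset: list[LabeledSample]) -> dict[str, str]:
    """Step 3: 'train' a trivial lookup model on the labeled data."""
    return {s.payload: s.label for s in dataset}

def infer(model: dict[str, str], query: str) -> str:
    """Step 4: run inference with the trained artifact."""
    return model.get(query, "unknown")

data = [annotate(RawSample("a red octagonal sign"), "stop sign")]
model = train(data)
print(infer(model, "a red octagonal sign"))   # -> stop sign
```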


Just like other industries, the AI industry is starting to segment: different verticals are emerging, and certain companies excel at specific steps of the process. For me, the most interesting part is the second step, structuring the data and preparing it for training. That has always been the part that interests me the most.


A decentralized Scale AI targeting the long-tail market


BlockBeats: What makes Sapien AI different from traditional Web2 companies like Scale AI?


Trevor: That's a great question. We admire Scale; they're an amazing company with amazing co-founders, one of whom we know. They're one of the largest AI companies in the world in terms of revenue, market cap, and usage.


What's different about us is that we think from first principles about what a modern data annotation stack should look like in 2024. We're not necessarily going after the use cases that Scale covers; we're targeting the mid-market and the long tail.


We strive to make it easy for anyone to get human feedback on a dataset, whether you're working on an open source model for the mid-market, an enterprise model, or just an individual doing research on the weekend. If you want to improve model performance and need human feedback on demand, come to us.


You can think of us as a more distributed or decentralized version of Scale AI. This means that our annotators are more widespread and they're not tied to a specific location, but can work remotely from anywhere.


In a way, this decentralization allows us to do better on annotation quality, because diversity is not just diversity for its own sake: it also improves the quality of the training data.


For example, if a group of people with similar backgrounds annotates data in one facility, the output is likely to be biased or culturally skewed. So we strive to make the annotator pool as diverse and robust as possible from the beginning. Being more decentralized also lets us reach higher-quality annotators: if people have to work at a specific location in the Philippines, you can only attract a limited pool of talent, but with a remote-first approach we can find annotators anywhere.


I'm not saying that Scale doesn't do these things, but we are thinking about how to serve other parts of the model market. Because we think this market will continue to grow, and there will be a lot of private and licensed models that need human feedback.


BlockBeats: How is Sapien AI's data annotation workflow designed and optimized? What are the key links to ensure data quality?


Trevor: Our platform works like a two-sided market; you can think of it as a decentralized Uber for data annotation. On one side is demand: like Uber's passengers, these are the enterprise customers who need human feedback in their models. For example, they are building a large language model and want to fine-tune it, which requires human involvement.


They come to us and upload their raw dataset to the network. We give a quote based on several variables of the dataset, such as complexity, data modality, and data format. For enterprise customers, the process is largely self-service.
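

As a rough illustration of how a quote might weigh those variables, here is a hypothetical pricing sketch in Python; the base rates, multipliers, and the `quote` function itself are invented for the example, not Sapien AI's actual pricing model.

```python
# Hypothetical quoting sketch: price a labeling job from a few dataset
# attributes. The per-item base rates and multipliers are illustrative only.

MODALITY_BASE_RATE = {        # assumed per-item base rates in USD
    "text": 0.02,
    "image": 0.05,
    "audio": 0.08,
}

def quote(num_items: int, modality: str, complexity: float) -> float:
    """Return a total quote; `complexity` scales from 1.0 (simple) upward."""
    base = MODALITY_BASE_RATE.get(modality, 0.05)
    return round(num_items * base * complexity, 2)

print(quote(10_000, "image", complexity=1.5))  # -> 750.0
```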


The other side is the supply side: the annotators, our equivalent of Uber drivers. Right now, this is actually the industry's bottleneck, and we need as many annotators as possible to join the network, because demand is essentially unlimited. Just as there is always someone who wants an Uber ride, AI models' demand to consume more data never ends.


We focus heavily on the supply side and are committed to making data annotation easy for anyone. We have invented some new techniques, and are still refining them, to ensure high-quality annotation at scale in a distributed model. The question we asked at the start was: can we ensure high-quality annotation without centralized management? This is what we call the "data annotation trilemma": can we simultaneously lower costs for customers, raise annotators' income, and improve overall quality?


We have run a number of experiments in this space and achieved some very interesting results. We have tried new mechanisms such as regression to the mean and anomaly detection, mixed with probabilistic models that can largely predict the quality of an annotator's work, and we are working on newer techniques. So far, we are very excited about the prospects for data annotation over the next five to ten years: we think it will become more decentralized, more self-service, and more automated.
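

As a flavor of what consensus-based anomaly detection can look like, here is a minimal Python sketch that flags annotators whose agreement with the majority label falls well below the group average; the data, threshold, and function names are illustrative assumptions, not Sapien AI's implementation.

```python
# Consensus-based anomaly detection sketch: flag anyone whose agreement
# with the majority label is far below the group mean (all values invented).

from collections import Counter
from statistics import mean, stdev

# labels[annotator][item_id] = that annotator's label for the item
labels = {
    "alice": {"1": "cat", "2": "dog", "3": "cat"},
    "bob":   {"1": "cat", "2": "dog", "3": "cat"},
    "carol": {"1": "dog", "2": "cat", "3": "dog"},   # consistently disagrees
}

def majority_labels(labels):
    """Consensus label per item, by simple majority vote."""
    items = {i for person in labels.values() for i in person}
    return {i: Counter(p[i] for p in labels.values() if i in p).most_common(1)[0][0]
            for i in items}

def flag_outliers(labels, z_cutoff=1.0):
    """Flag annotators whose consensus agreement is far below the mean."""
    consensus = majority_labels(labels)
    agreement = {a: mean(v[i] == consensus[i] for i in v) for a, v in labels.items()}
    mu, sigma = mean(agreement.values()), stdev(agreement.values())
    return [a for a, s in agreement.items() if sigma and (s - mu) / sigma < -z_cutoff]

print(flag_outliers(labels))   # -> ['carol']
```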


BlockBeats: Can you tell us more about your products and technologies, especially those that ensure data quality? I know you have a staking mechanism to keep annotators from misbehaving; what other techniques are there?


Trevor: Yes, we are trying many different approaches. We have a reputation system, and we also have a staking and penalty mechanism: annotators stake a certain amount of funds, and if they fail to meet the standards, they may be fined. These mechanisms are still in the early experimental stages, but we have found that this incentive alone can improve quality compliance significantly, by multiple standard deviations. This layer of quality control is a weighted average of different algorithms that we are constantly fine-tuning, and we also use machine learning to optimize the process itself. For example, we use ML linter tools and "Red Rabbit" tests, where we feed annotators planted data to check whether they are annotating honestly.
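

A minimal sketch of how staking, slashing, and seeded known-answer tasks like those "Red Rabbit" tests could fit together; the stake amounts, slash fraction, pass threshold, and names are all invented for illustration, not Sapien AI's parameters.

```python
# Hypothetical stake-and-slash sketch combined with seeded known-answer
# ("golden") tasks. Every constant below is an illustrative assumption.

GOLDEN_ANSWERS = {"task_7": "stop sign", "task_13": "bicycle"}  # seeded checks
SLASH_FRACTION = 0.25        # portion of stake lost on a failed audit
PASS_THRESHOLD = 0.8         # minimum accuracy on golden tasks

def audit(stake: float, submissions: dict[str, str]) -> float:
    """Score an annotator on seeded tasks; slash the stake if below threshold.

    Returns the annotator's remaining stake.
    """
    golden = {t: a for t, a in submissions.items() if t in GOLDEN_ANSWERS}
    if not golden:
        return stake                      # no seeded tasks in this batch
    accuracy = sum(GOLDEN_ANSWERS[t] == a for t, a in golden.items()) / len(golden)
    return stake if accuracy >= PASS_THRESHOLD else stake * (1 - SLASH_FRACTION)

# An honest annotator keeps the full stake; a careless one is slashed.
print(audit(100.0, {"task_7": "stop sign", "task_13": "bicycle"}))  # 100.0
print(audit(100.0, {"task_7": "stop sign", "task_13": "car"}))      # 75.0
```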


That's a big question: how do you know whether people are Sybil-attacking the network, that is, trying to cheat and manipulate the system? We have to stay vigilant about that. That's why we like some of the Web3 incentive mechanisms: they were originally invented to solve exactly these Sybil problems, the Byzantine Generals Problem, by making it in everyone's best interest to follow the rules. Even if you're selfish, your best move is to follow the network protocol.


We’re still in the early stages. We have more traditional quality control methods for some of our larger customers, and we’re also moving very quickly into this new world of frontier data.


BlockBeats: What do you think is the biggest advantage of Sapien AI as a decentralized data annotation platform?


Trevor: As I said, our platform is more self-service, which allows us to serve a wider customer base. We also have very broad requirements for annotators. We want anyone to be able to annotate, because we believe the next era of AI will be about extracting more existing knowledge from humans. Not just the basics, like "this is a stop sign" or "this is a car," things easily recognized by humans and machines, but reasoning.


Alex Wang from Scale has talked about this: data on the internet is the result of reasoning, but it doesn't really describe the reasoning process. So how do we get deeper into people's minds? That requires more work and more professional annotation, and it has the potential to accelerate the development of artificial general intelligence (AGI).


So our larger mission is: can we unlock more knowledge in private datasets within enterprises, in the minds of professionals who have expertise in certain verticals, like medical or legal, that the models haven’t yet captured?


We’re still working on making our platform as liquid as possible, trying to keep supply and demand in balance. We want to enable dynamic pricing, like Uber. These mechanisms make us more like a true two-sided market, meeting data needs while helping annotators join. These are some of the unique ways we built our platform. In terms of quality assurance, we use the techniques I mentioned earlier in real time. We want our annotators to get as much real-time feedback as possible because it creates a better experience for everyone.
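

As a hedged sketch of the Uber-style dynamic pricing Trevor mentions, the per-item rate below rises when open tasks outnumber active annotators; the base rate, sensitivity, and cap are assumptions, not Sapien AI's parameters.

```python
# Illustrative dynamic-pricing sketch: the per-item annotation rate scales
# with a bounded demand/supply ratio. All constants are invented.

def dynamic_rate(base_rate: float, open_tasks: int, active_annotators: int,
                 sensitivity: float = 0.5, cap: float = 3.0) -> float:
    """Scale base_rate by a capped surge multiplier when demand exceeds supply."""
    ratio = open_tasks / max(active_annotators, 1)
    multiplier = min(1.0 + sensitivity * max(ratio - 1.0, 0.0), cap)
    return round(base_rate * multiplier, 4)

print(dynamic_rate(0.02, open_tasks=500, active_annotators=500))   # 0.02 (balanced)
print(dynamic_rate(0.02, open_tasks=2000, active_annotators=500))  # 0.05 (surge)
```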


Label to Earn, the future of the gig economy


BlockBeats: I noticed that Sapien AI has partnered with the gaming guild Yield Guild Games (YGG). Can Sapien AI's decentralized labeling mechanism be understood as a "label to earn" game?



Trevor: Exactly. We want to serve the world of people who want to make a living from their phones, and we think that's the future of the gig economy. You don't need a car to drive for Uber or a physical location to deliver food; you just log in on your phone, annotate data, and earn income.


YGG is an amazing partner, they are one of our angel investors. We have a great relationship with the founder Gabby, and they have an amazing community in Southeast Asia. We have big plans with them to help their users find new ways to make money, and they also help us acquire new users. We recently announced some partnerships, and there are more plans in the pipeline for the future. We will also be in Asia for most of Q4, meeting with these partners and continuing to promote cooperation.


BlockBeats: What do you think of "play to earn" blockchain games like Axie Infinity?


Trevor: They were very innovative and a source of inspiration. It was just an experiment, but I believe the model will come back in a new form. That's the beauty of startups and decentralized entrepreneurship: it's a kind of creative destruction. There are definitely some "play to earn" elements in what we're doing, and we tend to use the phrases "label to earn" or "train to earn." But there's a difference, because we're a real business: real data is being labeled, real customers are paying real money, and ultimately a real product is being produced. So it's not just an endlessly looping video game.


While labeling data with Sapien AI is fun, it's probably not as fun as playing Grand Theft Auto V. We want to strike a good balance between fun and useful, so that it's something you can do while waiting five minutes at a bus stop or spending five hours at home in front of your computer. Our goal is to make it as accessible as possible.


BlockBeats: Do you have a way to make data annotation more interesting, not just work, but more like a game?


Trevor: Yeah, we have a lot of experiments going on right now. You can go to game.sapien.io, try the game, and annotate real AI data yourself: you become an AI worker, annotating real data while playing and earning points. The game is very simple and the interface is intuitive.


The game.sapien.io game interface


The data itself is also very interesting. You might have some really interesting images to annotate, like for our fashion data. We plan to support a lot of different types of modalities and datasets. We plan to keep adding more features over time.


Future Blueprint: Build the world's largest human data annotation network


BlockBeats: In addition to YGG, what other crypto projects do you plan to work with in the future?


Trevor: We have some interesting ideas, such as creating a data standard for data annotation. At present this field is still quite chaotic, and every customer's needs are different. We have to do a custom integration with each customer because their data formats and modalities differ.


So we are working with others in the decentralized data space and are in the early stages of building this standard and plan to release it as a public product. We did something similar when we were at Polymath, where we released ERC-1400, which is now one of the default standards for tokenization on Ethereum.


So we have some ideas about creating standards and plan to drive this process together with the teams that have helped us in the past and some industry partners. This will make decentralized AI more real and more interoperable, meaning data can flow more easily between the different steps, because no single player can do everything.
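

Since the standard itself has not been published, here is only a hypothetical sketch of what a shared annotation-interchange record might look like; every field name is invented for illustration.

```python
# Hypothetical annotation-interchange record. All field names are invented;
# the standard described in the interview has not been released.

import json
from dataclasses import dataclass, asdict

@dataclass
class AnnotationRecord:
    dataset_id: str        # which dataset the item belongs to
    item_id: str           # stable identifier for the raw item
    modality: str          # "text", "image", "audio", ...
    payload_uri: str       # where the raw data lives
    label: str             # the human-provided annotation
    annotator_id: str      # pseudonymous annotator identity
    confidence: float      # self- or system-reported quality score

record = AnnotationRecord(
    dataset_id="fashion-v1", item_id="0001", modality="image",
    payload_uri="ipfs://example", label="red dress", annotator_id="anon-42",
    confidence=0.93,
)
print(json.dumps(asdict(record), indent=2))
```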


BlockBeats: When is the specific release date for the Sapien AI mainnet and mobile app?


Trevor: We don't have a specific release plan at this time. We are focused on product-market fit for our core Web2 product. We are growing very well and now have annotators in 71 countries, and our demand-side revenue has been roughly doubling every month this year.


We just want to continue to grow, learn more about our customers, and continue to serve them. We're open to a variety of different strategies and technologies over time.


BlockBeats: I saw that Base co-founder Rowan Stone has joined Sapien AI as Chief Business Development Officer. Which blockchain will Sapien AI be built on? Are there plans to issue a native token?



Trevor: These are very thoughtful questions, and I appreciate them. Rowan is great. He founded Base with Jesse Pollak, and Jesse is an absolute legend. Rowan has a wealth of experience building industrial-grade Web3 products; in my opinion, he is second to none. He co-led the "Onchain Summer" event, one of the most successful events I can remember.


He is helping us develop our go-to-market strategy in certain areas. But, like I just said, we are currently very focused on serving our existing customers, and that is our primary focus. We have not made any commitments or decisions in terms of choosing any Layer 1 or otherwise. But in the future, we will continue to consider various possibilities.


BlockBeats: What are Sapien AI's plans or goals for the future? What milestones do you hope to achieve in the next few years?


Trevor: Our mission is to increase the number of human data annotators in the world a hundredfold and make it easy for anyone to access this network. We want to build the largest network of human data annotators in the world. We think it will be a very valuable asset, so we want to build it and control it, but eventually open it up so that anyone can access it, completely permissionlessly.


If we can build the world’s largest human data annotation network, this will unlock a lot of potential AI capabilities, because the more high-quality data we have, the more powerful AI will be and the more accessible it will be to everyone.


We want this to work for everyone, not just the big language model companies that can afford a network of millions of human annotators. Now, anyone can use this network. You can think of it as an “annotation-as-a-service” platform.

Behind decentralization: an entrepreneur's job is to solve problems


BlockBeats: Finally, I would like to ask about your observations and views on the entire industry. What untapped potential do you think exists in the field of crypto AI?


Trevor: I am very excited about this field, which is why we founded Sapien AI. There are upsides, and there are also risks to guard against.


On the upside, decentralized AI can be more autonomous, more democratic, more accessible, and more powerful. It means AI agents can have their own native currency for transactions, and it means you can have more privacy and, through ZK technology, know exactly what went into a model.


On the risk side, we face a frightening scenario in which AI becomes increasingly centralized and only governments and a few large technology companies have access to powerful models. Open-source and decentralized AI is a defense against that.


For us, the focus is the data side, decentralizing the data. That doesn't mean you can't decentralize other parts of the AI stack, like compute and the algorithms themselves. The Transformer was a landmark innovation on the algorithm side, and we've seen more since, but there's always room for improvement.


Just because you can decentralize something doesn't mean you should; there has to be real value at the end of the day. But just like the rest of the finance and Web3 space, AI can definitely benefit from decentralization.


BlockBeats: What advice would you most like to give to entrepreneurs who want to get into the crypto AI space?


Trevor: I would recommend learning as much as you can and really understanding the technology stack and the architecture. You don't need a PhD in machine learning, but it's important to understand how it works and to do the research. Start there, and over time you'll come to understand the problems more organically. That's the key.


If you don't understand how it works, you can't understand the problem. And if you don't know what the problem is, you shouldn't be an entrepreneur because an entrepreneur's job is to solve problems.


So this is no different than any other startup, you should understand the space. You don't have to be the world's top expert in the space, but understand it well enough to be able to understand the problems and then try to solve them.



