Reputation Without a Kill Switch

· with Pippellia
“Web of Trust is any network of relationships where trust is distributed and emergent—it’s not imposed by someone else.” Pip builds the infrastructure that makes decentralized reputation actually work. While platforms like Twitter sell verification for $8, he’s applying Google’s PageRank algorithm to Nostr—and giving it away for free.

Web of Trust is any network of relationships where trust is distributed and emergent. It emerges organically from interaction and connections—it's not imposed by someone else.

— Pip

Timestamps

  • 00:44 What Vertex is and the problem it solves
  • 03:23 Why centralized trust verification is failing—the Twitter/X model
  • 05:11 Pip's definition of Web of Trust: distributed and emergent trust
  • 06:49 Why PGP's web of trust failed after 30 years
  • 10:32 How Twitter's paid verification made identity meaningless
  • 14:19 Meta's perverse incentives—when scammers pay more than spam costs
  • 18:42 The primitives needed for healthy online discourse
  • 21:26 Why reputation depends on point of view, not absolute values
  • 27:13 How Nostr makes your audience portable and permanent
  • 29:36 Can Web of Trust be weaponized? The exclusion question
  • 34:52 Vertex's business model: freemium credits based on reputation
  • 39:49 Why app store review models are going obsolete
  • 41:57 Zapstore: using Web of Trust to verify app developers
  • 49:00 What traditional developers get wrong about decentralized identity
  • 55:21 What's next: explicit content detection and filtering
  • 1:00:46 Personalized recommendations and onboarding without surveillance

Resources

Pippellia

About Pippellia

Pip (Pippellia) is the co-founder of Vertex, a Web of Trust service for Nostr developers. He builds the infrastructure layer that helps decentralized apps solve their hardest problem: figuring out who to trust when there's no central authority. Vertex uses PageRank-style algorithms to compute reputation scores, enabling spam filtering, personalized recommendations, and impersonation protection. He received an OpenSats grant in 2025 and made Vertex free to drive adoption, prioritizing network growth over immediate revenue.

Transcript


Hey, one quick thing before we get into it. Trust Revolution runs on value for value. No ads, no sponsors. Fountain is how it works for me and for the show. Pay per episode or subscribe, lightning or card. You get something from the show, you can send something back. No guilt, no gimmicks. Go to trustrevolution.co. That's trustrevolution.co. Okay, let's get into it. Pip, welcome. Hello. Hi, Sean. I appreciate it. Yeah, I'm very well. Thanks for joining. I appreciate you taking the time. It's very busy toward the end of the year and toward Christmas. But I think we're going to have a great conversation, and it is going to, in my view, go to the heart of really why I started this thing, which is trust and what is broken, why, and how do we fix it? And so what I'd love to do today, Pip, is, you know, you have done technical deep dives on Vertex before.

I want to go somewhere different, which is why this matters. But before we do that, in, say, 60 seconds, what is Vertex? So Vertex is a service built on top of Nostr that simplifies the topic of this discussion, I presume, which is going to be Web of Trust. And so basically the tagline is: you don't have to worry about spam prevention techniques, or having to reinvent accurate search, or having to create your own recommendation engine on top of Nostr, because all of that you can have and you can access through Vertex. And yeah, it's going to simplify building great experiences for your users, your customers. Fantastic. And you know, you talked with Matt O'Dell on Citadel Dispatch back in July and I'll be sure to include a link to that. It was a great conversation. What has changed with Vertex since then, since July?

Mostly improvements over the whole stack. So the server, or relay, is now more performant, it's faster. And I'm also working on some schemes to expand the services in multiple ways. They are not ready yet, but the way I want to expand the services, beyond the current ones, is adding some endpoints for detecting pornography, or flagging images as explicit content, so that you can decide whether your user wants to see that or not. So imagine having a setting in your application where you can turn it on and off and then remove all of the explicit content. And if, for example, your user is someone that doesn't want to see this content, or is a kid, you know, those cases.

And also, yeah, mostly actually building low-level stuff that makes the services faster, more precise. It's not really something very sexy I'm working on at the moment, but it's just grinding and making the whole thing better. Yeah, better, faster, stronger. That's important. Great. And we'll get into a little more detail there. Let's, as I mentioned, let's sort of zoom out. So right now, if I want to know whether an account is real or a bot, I'm trusting Twitter/X or whoever runs the platform to tell me that. What's wrong with that model? Why shouldn't that be enough? Yeah, so there are really these two models. And as you outlined, there is one that we are currently used to, which is the centralized trust model, where there is someone, the platform most of the time, telling you who is real, who is fake.

And this really lacks a lot of transparency. So you don't know how they make these decisions, and they force those decisions on you, and you cannot really make different decisions or use different heuristics. For example, maybe you wish to send a message to someone on Twitter because you think he is a really interesting person and you want to know more about his work, or maybe you have friends in common. But Twitter decided that no, they are a bad person, so they should be censored, their account should be closed, and stuff like that. And also, in general, this goes along with the trend that we have seen in the past years where these platforms tend to request more and more identifying information. For example, this started with emails, and then phone numbers, and then...

They are slowly going towards full KYC for basically everything. And this is also the reality today in some jurisdictions. On the surface, they do it for spam prevention. So they say, oh, to guarantee your safety, you have to give me all of your data and personal information so I can stop bots and spammers. In reality, I think it's just an excuse to increase the platform's revenue through ads, because for most of these platforms, the whole business model is tracking you and then serving you ads. So this is one model, the old model. Now we are entering a new model, described by the words Web of Trust. So just to give a small, I think, or my definition, because many times this term is used as a catch-all.

So I prefer to have a very clear definition. For me, Web of Trust is any network of relationships where trust is distributed and is emergent. So it emerges organically from interaction and connections, and it is not imposed by someone else. It's not imposed by the service providers, it's not imposed by the platforms. It emerges spontaneously, as it happens in meatspace or, you know, in the real world, so to speak, where you go to a bar and you hang out, you talk with some people, and then you establish connections, and there is no one watching you and saying, no, you should not be friends with that person because they are bad, you know. And yeah, this is very tied to Nostr, obviously. Nostr is, like, the only successful Web of Trust, because in reality the

Web of Trust was born with PGP. So PGP, for those who don't know, means Pretty Good Privacy. It's basically a scheme for identity for email, so you can share encrypted emails or signed emails. But then, how do you know that Sean is this email address and not this other email address, right? How do you link an external identity to an address? Well, the solution was, again, Web of Trust. So instead of relying on Google to tell you what is the real address for this person, you would use these things called, I don't remember the name, they basically were like messages saying: from Pip, me, I trust that this email address...

An attestation, I think. Yeah, I attest that, I verified that Sean is this email address. Yeah, these attestations. And it was not a successful implementation of Web of Trust, in my opinion. It is still used by some, but the problem is that it was not social enough. So you want to attest for as many people as you trust that there actually is a link between them and their email address, but you also want to keep these attestations up to date all the time. If someone's address is stolen or compromised or whatever, you want to revoke that attestation. And there are no incentives to do that, because fundamentally sending emails is not a social phenomenon. Well, on Nostr, what is a social phenomenon is your follow list.
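(For the technically curious: a follow list on Nostr is just a signed event, kind 3 per NIP-02, with one "p" tag per followed key. A minimal sketch of turning those events into directed trust edges, with placeholder data:)

```python
# Sketch: extract (follower, followee) edges from NIP-02 follow lists.
def follow_edges(events):
    """Yield (follower, followee) pairs from kind-3 follow-list events."""
    for ev in events:
        if ev.get("kind") != 3:
            continue
        follower = ev["pubkey"]
        for tag in ev.get("tags", []):
            if tag and tag[0] == "p" and len(tag) > 1:  # "p" tags name follows
                yield (follower, tag[1])

events = [  # placeholder data, not real pubkeys
    {"kind": 3, "pubkey": "alice", "tags": [["p", "bob"], ["p", "carol"]]},
    {"kind": 3, "pubkey": "bob", "tags": [["p", "carol"]]},
]
print(list(follow_edges(events)))
# [('alice', 'bob'), ('alice', 'carol'), ('bob', 'carol')]
```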

So if someone is hacked and starts posting terrible content, you don't want to see that content, so you unfollow, and then you follow the new key. And this is very non-technical and doesn't give you strong cryptographic guarantees, but fundamentally it just relies on humans. Strong signal. Yeah, exactly. Humans maintaining these lists for their own sake, because they want to use those lists in certain social applications. Like in a client, your follow list is used to show you content, and you want to curate that. Otherwise, you see bad content, fundamentally. Yeah, and I think it's a great point. I mean, Phil Zimmermann, PGP. So we've now had 30 years, to your point, of attempts. You know, back to your comments about Twitter, about X. Imperfect as it was, verification used to mean something.

Now it means you paid $8, right? And so how did identity verification become both, you know, both meaningless and controlled by one company? How did we get from groups of, you know, nerds, and I'll count myself in that category, gathering in person to attest to a PGP, you know, web or ring of trust, to "pay me $8"? Like, you know, what's your take on how we sort of got from one to the other? I think it's just a matter of incentives. If you can control what "verified" means, then you can get paid to give out these attestations. Fundamentally, and also lack of competition, that's also a big one. Because if the Twitter blue check were just one of the ways you can get verified, you can get a badge on Twitter, like imagine on the Twitter app that there are other...

service providers that give these different attestations, like a purple check, an orange check, those kinds of things. And then you can go to the settings and you add a different provider for this. Say you don't want to use the paid verification; you want to use some other kind of verification. Then probably fewer people would pay for the blue check. But at the moment you have lack of competition, which means if you want to get verified, you have to pay. And also the incentives, which is, well, the company gets paid. So it happens. Yeah. Well, and I think, you know, and that kind of goes to my next question, which you've answered in part. But if a platform, you know, can decide you're not verified anymore, that you don't exist effectively, that's enormous power, be that X, YouTube, whatever. For those, again, who may be sort of coming to understand this, who benefits from that arrangement?

Well, I think I don't want to put on my, you know, hat. Your tinfoil? Yeah, my tinfoil hat. But, you know, if there is power to censor people, obviously that power is going to be used somewhere by some government, by some agencies. Like, I don't want to name names, because honestly I don't really know about individual cases. But you know, if there is a political opponent in a dictatorship who is rising up and gathering consensus, you can just go to Twitter and say, remove them or you go to jail. And, you know, if there is that choke point, it is going to be used by someone, fundamentally. Absolutely. Yep. In fact, I was just scanning through to see if I could find, speaking of our friend Matt O'Dell, he posted an excerpt from, you know, the latest draconian legislation, I'll dig it up for the show notes, from the UK, and the effect of it, or it is in effect, clarifying that you can be accused and convicted of criminal behavior without malicious intent in posting something that causes

distress, right? So I think it is an extreme example of your point, which is there are plenty, plenty of powers, plenty of individuals and nation states who want the power to erase someone online. And I think, you know, to some that may sound dramatic, but I think to the degree that we live our lives online, and that many people generate their income and their living online, content creators, what have you, then it's enormous. Well, if we then sort of go to maybe a more utilitarian view, which is, I think, one that most of us have: spam and impersonation, right? So that's everywhere. The platforms that control verification can't stop it. So are they failing in one of their fundamental jobs, Pip? Or, to your point, do they just not have the right incentives, or are their incentives so perverse that they don't stop it? I think both. The answer lies in between those two points. So they're probably failing to some extent,

even though they could technically have access to all the data and all the analysis that can stop this kind of spam, also because they control everything, so they can monitor every activity, even of the spammers. But I think the bigger reason, now that I think about it, is that they have perverse incentives. I think the biggest example of this is Meta, because I read somewhere, or maybe it was a podcast, I don't really remember specifically, but the thing is that a large percentage of Meta's revenue comes from scams. Like, complete scams. And the anti-spam department, if the spammer pays enough in advertising, cannot even remove them.

It was something silly like that. Like, if the budget of the spammer is higher than a certain amount, then they need permission to remove that spammer. Even though they have identified it, they need permission from above, from, you know, a manager or whatever. And yeah, because fundamentally scams pay a lot in ads, because they can make a lot of money. Because, you know, you don't have to send a product or a service. You just take the money and go away. So the fines become just a cost of doing business for the spammer or scammer. Yeah, yeah, exactly. And yeah, all of these companies, they are profiting in some way or another from these kinds of things. And also, they benefit in their quarterly report if they say they have more users. And guess what? If you don't identify bots as bots,

then you have more users. Right. If instead you say, ah, yeah, we have 30% bots, then someone says, ah, so you have fewer real users, fewer people that we can sell to. Because fundamentally, if you rely on ads, you need to tell your advertisers how many potential customers you have. Eyeballs. Yeah, how many eyeballs, and bots don't count as eyeballs. So maybe we don't identify bots very well. Well, and I think, you know, I mean, so all this clearly you are driven to build an alternative. And I think we, you know, in this conversation, as you and I already are predisposed to believe, this is broken. And I think, you know, we're talking about identity, reputation, spam, content moderation.

many of these have become conflated with a lot of negative behaviors, negative outcomes by the platforms, by nation states, but they are not in and of themselves negatives. These are, I think, very understandable sort of primitives that both a user and a builder want to address. I don't want to see certain content. And pardon me, I don't want the ability for a third party to yank me off the platform to erase my identity online, my reputation, et cetera. And so with all of that, and you've talked to some of this, what are the very high-level primitives, Pip, that need to exist in order for an individual to have, one might say, sort of a healthy engagement or experience with a community online? So, if we sort of start at that high level, what's your view on what

enables, maybe mediates is the wrong word, with no central authority, but what enables healthy discourse online. I think what enables healthy discourse and a healthy aggregation of people online is the ability to choose, fundamentally, and to filter. So you want to decide, and you want to be able to decide, what to see and what not to see. And so fundamentally, this means filtering. And also, it helps if you have access to tools that make it easy to filter and also to search. So fundamentally, we are talking about the problem of ranking, right? You want some things to be on the top, where you want to give more attention, and then some things at the bottom, where you don't want to give much attention or maybe any attention.

And those include impersonators or spammers in general, people that just annoy you and don't give you value and just distract you from other, more meaningful purposes. And so fundamentally, the role of Vertex and other service providers is to be able to deduce what you like and what you don't like, and then offer you that. But the difference with the monopolistic, traditional alternative is that the goal here is not to serve you some ads, and the goal here is not to identify you and sell your information to other people. You decide what information is public and what information, basically, I can use. So the service provider now is in service to you, or in service to an app that you are using.

So, for example, for a business model, Vertex is more oriented towards B2B. So apps maybe decide to use Vertex to improve their experience. But fundamentally, the app does so because it can give a better experience to the user. And a better experience to the user means more ability to choose what to see, what not to see. Right. And if we go into that a little more deeply, so in the model or the approach of Web of Trust and Vertex, your product specifically, my reputation comes from the people who know me, not from a company's database. What does that actually change for someone navigating the Internet? I think the biggest leap you have to make is understanding that reputation is not an absolute value. It depends on the point of view.

So for me, your reputation is quite high because I follow you directly. And there are also other different types of relationships, not just follows, for example, mutes, but also other types of relationships that can be analyzed to extract important data and important metrics. But the biggest thing, yes, is that reputation is personalized to a certain point of view. And that's, you know, you jump right to one of my follow-ups, which is, I would just sort of restate what you have just pointed out: Web of Trust means different people can have different views as to who's trustworthy and to what degree. And I think that that level, you know, having those sliders is incredibly important, whether they're explicit to the end user or, to your point, they are baked into an application such that it is not a black box, but it is not something that I have to worry about or think about.
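(To make "reputation depends on the point of view" concrete, here is a toy sketch, not Vertex's actual code: the same follow graph ranked with personalized PageRank from two different viewpoints. The teleport vector restarts the random walk at the viewer, so scores differ per observer.)

```python
# Toy demo: the same graph yields different rankings per point of view.
import networkx as nx  # pip install networkx

G = nx.DiGraph([
    ("alice", "bob"), ("alice", "carol"),
    ("bob", "carol"), ("carol", "dave"),
    ("eve", "dave"),  # eve sits outside alice's cluster
])

for viewer in ("alice", "eve"):
    # Personalization concentrates the walk's restarts on the viewer's key.
    scores = nx.pagerank(G, alpha=0.85, personalization={viewer: 1.0})
    print(viewer, {k: round(v, 3) for k, v in sorted(scores.items())})
```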

And maybe more specifically, you know, if, let's pretend I'm on Twitter, if I get banned from Twitter or X tomorrow, my reputation on Nostr stays intact. I mean, is that sort of the core shift, your identity and associated reputation travel with you? Yeah, yeah, that's one of the biggest strengths of Nostr, that your identity is portable across apps, and also your data, your posts, are actually interoperable across apps. So this means that for you to not get seen anywhere, all the relays, or maybe also all the clients, would have to ban you. And well, if that happens, it's probably because you did something really bad, because it means that the whole world is deciding to ban you, you know. So it's not one decision of one company; it's multiple companies and multiple operators and people that do it just for

fun. But also, even in that case, you can run your own relay and then invite the people you want to talk to. And if they accept, you get to, you know, enjoy Nostr in a very private setting. Right. And the fact that the identity is portable, yes, means or also makes it so that your reputation is also portable. In a practical sense, it means that regardless of the app you use, your audience, meaning your followers, remains the same. No one can take them away from you. And also, fundamentally, those rankings that I've mentioned, they don't consider the client you're using, because that's just not a dimension. It's not a dimension that I consider, for example.

So you don't get a bonus if you use client A or client B. It's just derived from the interactions you have with other people, and whether they give you attention by following, or maybe by muting and removing their attention from you. Right. Might be surprising to some people. But, you know, basically, there are individuals on Nostr who have very specific points of view, very specific sort of personas, I suppose. Most of them are nyms, who are now, you know, disappointed in and/or bragging about the number of mutes they have. Right. But I think what's telling about that, some people might find that a little odd, but what's telling about it is it demonstrates the granularity of what Nostr and Web of Trust specifically make possible. You know, you may find incredible engagement with a particular audience or niche or segment, and you may repel others.

But it is not all or nothing. And it treats the individual user as an adult who can make that decision, as opposed to the steward, or rather the, you know, captive of some corporate steward, who gets to make that decision on their behalf. And, you know, on that note, and you've illustrated some of this, Pip, for someone who's starting to question the platforms they use, and I hope that that is one of the outcomes for people who listen and watch, maybe they've been burned. You know, we see on Nostr numerous examples of content creators who build, you know, pretty significant audiences on YouTube, and they get cut off just like that. So whether they're, you know, internet famous or not, help me further understand, and help them understand, us understand, why this matters. Like what is, you know, if Web of Trust succeeds and becomes sort of the standard, what changes about how individuals navigate online?

I think Nostr gives a real superpower to these, let's call them creators, because you do not rely on one single platform. And so whatever you build, even if it's small, your audience on Nostr is going to be yours forever. Unless obviously you screw it up and people decide to leave you. But, you know, that's how it works. And the fact that your reputation moves with you, well, gives you freedom. It's insurance against any kind of service or platform going bankrupt. We don't think about it, but many social platforms just closed down. For example, now there is Divine, which is like a second attempt at Vine. But that means Vine closed. And many people had maybe millions of followers there, and then, oops, they closed.

So on Nostr, it's very unlikely that all relays and all clients and all apps close down. So it's really an insurance. What you're building on Nostr stays with you. And it's tied to your key. And the key, you are the only one who can control it. And the reputation you build can only realistically improve over time, unless, again, you just screw up. Yeah, yeah. And I think, you know, that maybe is the flip side. And so Web of Trust built on Nostr could liberate identity from corporate control. It could also create a new kind of exclusion: people who, you know, aren't able to build reputation, and therefore community, sort of freezing out dissenting opinions. I think Web of Trust fails if humans fail, fundamentally. But I tend to be very optimistic, so I don't think that's an issue.

I mean, people throughout history have always been able to find their own niche, their own communities, and have a more or less healthy relationship with them. At least the majority of people have decent relationships with their neighbors, with, you know, people with the same interests. Obviously, there are always going to be those three to five percent of sociopaths who are going to try to manipulate these rankings, for example. And so that's also part of my job, trying to come up with algorithms that are resistant to manipulation as much as possible. For example, what happens if someone creates one million bots and those one million bots all follow him back? This is a not-so-evil form of manipulation.

And basically, the way I'm doing it is that I have multiple defense lines against this type of Sybil attack. So before including a new npub, a new key, inside my database, so it can get recommended and those sorts of things, I run multiple checks. One of them is their own, let's call it, global reputation. So previously I said that reputation is personalized; it depends on the point of view. But sometimes, for example, you are a service provider that wants to give out a service for free. And so that is an attack vector, because if it's free and you don't check, you can get one billion requests and then your service is down. So how can you give something for free when the number of entities that can request it is potentially unlimited,

like the number of keys? So for that, there is an algorithm that is not personalized, that just takes, like, an average perspective, which is global PageRank. And I'm using that in my own thing. So before adding this swarm of bots that try to increase the reputation of a certain actor, I check if those bots are worthy of being added, basically. Whether they themselves have a reputation, or none. Yeah, exactly. And there is also time decay. Like, I don't add them unless they have been reputable for at least one week. So it also takes a lot of time for people to start adding many bots, and I will probably figure it out. This is one strategy, fundamentally. With this kind of analysis, there is no perfect solution.
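(A sketch of those layered admission checks, in the spirit Pip describes; the thresholds and helper names are illustrative, not Vertex's real ones.)

```python
# Sketch: Sybil-resistant admission, a global-rank floor plus time decay.
import time

MIN_GLOBAL_RANK = 1e-6          # hypothetical global-PageRank floor
MIN_REPUTABLE_AGE = 7 * 86400   # "reputable for at least one week"

def admit(npub, global_rank, reputable_since, now=None):
    """Return True if this key may enter the index."""
    now = now or time.time()
    if global_rank < MIN_GLOBAL_RANK:
        return False  # fresh swarm bots start with essentially no rank
    if now - reputable_since < MIN_REPUTABLE_AGE:
        return False  # rank must be sustained, not farmed overnight
    return True
```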

So someone could become Vertex, but evil. How costly would it be? Like, how does that sort of play out, do you think? So you mean like a new service provider that is used by others? Right. Yeah, it's possible, of course. The thing is that there are already multiple service providers. There are like three now. So there is my own Vertex, then there is Relator, built by Jesus, and then there is Brainstorm, built by David, who also was on the show. And so, yeah, there are already three service providers. I hope, if Nostr grows, that there will be more. And so fundamentally it's going to be competition that improves the quality of the services and makes sure that bad actors don't get used, or at least not by many. You could reframe the same question in many other situations. Like, for example... Absolutely. A silly example would be a restaurant: what stops, you know, a restaurant that poisons people

from becoming the McDonald's of the world? Yeah, it's the poisoning thing, huh? That's where it trips you up. No, I take your point, and a lot of these questions, you know, clearly are designed to preemptively answer for those who are still kind of coming up the learning curve. But I think that's a great metaphor. You know, I mean, ultimately, and you said this earlier, I think it's an easy-to-overlook but incredibly important point of emphasis, which is: service providers in service to the individuals who, you know, are paying with time, money, attention. And that's often lost. Well, and so let's shift now, Pip, to focusing on, you know, business builders, and for those individuals who may, you know, find it interesting to see how this stuff works under the hood. You made the choice to make Vertex free, and you designed it so, to your point, anyone can compete with you. And I think that one of the, you know, incredible dichotomies of free and open-source software.

But infrastructure that doesn't sustain itself dies, right? Commercialization is important. And so what's the path from free to drive adoption, you know, sort of the current state, to sustainable, without becoming what you're fighting against? Yeah, that's a good question. I think my model now is a freemium. So it has a limit of 100 requests per day per reputable npub. So as in the example before, I want to give out some credits to those npubs that I deem reputable. And it's my choice, because it's my server. I pay the bills. And so I use my own reputation system to say, okay, these are the npubs that can get access for free. And everyone else, I'm sorry, you have to pay.
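(A back-of-the-napkin sketch of that freemium gate. The 100-requests-per-day figure is Pip's; the storage and helper names are made up, and the daily reset is elided.)

```python
# Sketch: free allowance for reputable npubs, pay-per-request otherwise.
from collections import defaultdict

FREE_DAILY_LIMIT = 100           # free requests per day per reputable npub

used_today = defaultdict(int)    # npub -> free requests used (reset daily)
paid_credits = defaultdict(int)  # npub -> purchased credits

def charge(npub, is_reputable, cost=1):
    """Consume one request's worth of credits; return True if allowed."""
    if is_reputable and used_today[npub] + cost <= FREE_DAILY_LIMIT:
        used_today[npub] += cost
        return True
    if paid_credits[npub] >= cost:  # fall back to purchased credits
        paid_credits[npub] -= cost
        return True
    return False
```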

Now, if you want to use... And let's pause there. I think, and forgive me, Pip, I think that that's worth underlining. If you're a builder, if you're a developer, the traditional approach to waitlisting, you know, certainly there are tried-and-true methods of: you get an invite, and with that come three other invites that you can give out. Presumably, you're going to give those out to other capable builders, developers. But I think what you've just underscored is sort of this iterative benefit of Web of Trust, which is that you yourself are using your Web of Trust to filter or screen who gets in to get the free credits to build. So it's not, you know, Amazon Web Services giving anybody who can fog a mirror $1,000 in credits, which, you know, whatever. But it's you being able to actually onboard developers with strong reputations first, if I heard you correctly.

Yeah, that's exactly right. And also their users. So imagine you use a client. This client can use your own user key to sign the request. And so your request will be fulfilled, because you are reputable according to my own metrics. This is just for giving free credits so people can try it out, and also developers can try it out, without having to pay first. Then, if they want bigger limits, and maybe in the future premium features, they can simply pay for the credits. And it's pay-per-request, so every request consumes some credits. You pay for the credits, you get the credits, and that's it. And I don't like

subscriptions, because many times you're underutilizing them, or other times you are overutilizing and you have to take the next pricing tier. Yeah, exactly. Instead, here, it's just: what's my cost? My cost is fundamentally computation. And so for any unit of computation you pay one credit, and you buy credits from me, and that's it. You don't have to buy more; you can just try it with one dollar. Yeah, consumption-based, yes. Maybe in the future I can also add some other types of payment, but I think this one makes sense. Also because for a developer, the minimum is $1. You want to try it out, you pay $1, you get some credits, you try it. Or the developer can try it with his own key, so he doesn't even have to pay. And yeah, that has been working, I think,

because the clients using Vertex are now increasing in number. The one that was using it from the beginning is Zapstore, which, yeah, I know you know. Yeah, I'm also a big fan. And there, I think this concept of Web of Trust with an app store really is a great match. And in fact, you know, I want to ask you, Pip, to speak on Fran's behalf, and I'll refer back to the conversation I had with Fran on this show. Talk us through, it's a great example, talk us through how Web of Trust and Vertex are changing the fundamentals of the traditional app store model and what they enable in place of that. Yeah, so the Google Play Store or the Apple App Store, they have a model that I think is going obsolete very quickly. The model is reviewing everything. So before your app is accepted, they review it. And that's it.

So this is very problematic in many situations, because it makes developers extremely frustrated, because many times these guidelines are bullshit. They require you to do some kind of gymnastics: "I'm not paying for content, I'm donating to the profile that created the content." You're trying to work around all of these stupid guidelines. And many times they fail, because a malicious app gets added to the app store. And it has happened multiple times. For example, a Sparrow Wallet was added for mobile, but there is no Sparrow Wallet on mobile. We saw that with BitChat, right? There were 14-plus, you know, presumed publishers of BitChat when it first hit. Yeah, yeah, exactly. And this is the problem of impersonation, right? Right. So the Zapstore model is using Vertex now to do something,

to act as the first line of defense. Okay, who is behind this application? You want to know who is behind it, and you know, because there is a signature. Every app is signed by the developer. But now you ask: okay, is this developer the one I know, or is it some new key that probably has no reputation, that's just an impersonator? And for that, the first line of defense is Vertex. So there is an endpoint you can call, called VerifyReputation. It's very straightforward. You send a key, you get reputation metrics. For example, how many followers it has, what is its rank according to the algorithm you specified, and then also something very important and very visual, which is who are the top followers according to the algorithm that you specified. And so basically this shows you: okay, this app is from this developer,

and this developer is followed by maybe Odell, Gigi, Jack. And you say, okay, I see how I am connected to this developer, and I see that he has some reputation. So already with this first screen, you can remove the vast majority of impersonators, because they don't have any kind of social proof behind them. And then in the future, I know Fran wants to have more levels of defense. Maybe the second would be some kind of AI scanning for malware. But the order is important. Like, you cannot ask for an AI scan of an APK, which might be quite big, for every single app release. Yeah, every app. Yeah. Because everyone can take an app, change a line of code, and then republish it.
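(Concretely, that first screen might look something like this from a client's side. The field names below are hypothetical; the real request and response event shapes are in the Vertex docs at vertexlab.io.)

```python
# Sketch: gate an app release on the signer's social proof.
request = {  # hypothetical shape of a VerifyReputation call
    "endpoint": "VerifyReputation",
    "target": "<developer-pubkey>",       # the key that signed the app
    "algorithm": "personalizedPageRank",  # ranking algorithm to apply
    "source": "<your-pubkey>",            # the point of view to rank from
}

response = {  # illustrative values only
    "target": "<developer-pubkey>",
    "rank": 0.0042,
    "followers": 812,
    "top_followers": ["odell", "gigi", "jack"],  # the visual social proof
}

def plausible_developer(resp, min_rank=1e-4, min_top_followers=1):
    """Drop releases whose signer has no social proof behind them."""
    return (resp["rank"] >= min_rank
            and len(resp["top_followers"]) >= min_top_followers)
```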

Right, and in this new model, which is open, it's like the Wild West. So the verification first needs to constrain this set of potentially unbounded keys to a very narrow subset: only apps that are signed by people with some kind of reputation. And this is also, I think, part of the direction, enabling the user to set his own threshold. So for all of these lines of defense: okay, what kind of proof do I want? Do I want a high reputation, or medium, or low? Yeah, and I think, you know, I think about particular use cases, and again, perhaps to frame this for someone who's not a developer, what I would assert, and I think it's observably true, is that the centralized models are failing left and right. I mean, that's the central tenet of Trust Revolution.

And so we can black pill and give up all hope, or we can look for better solutions. And I think this is a great example. And so many of us, if we're technically inclined, you know, we're the family CTO for mom or dad, uncles, you know, grandparents, whatever that looks like. And so the ability to dial a setting that says, okay, you know, thinking about my mom: let's perhaps not click install on anything that I or a close network of friends and family haven't also used and trusted. Now, you know, maybe that's a little in the weeds, but the point being, you can create, or sort of zoom in and out, that locus of control, or locus of influence, I guess we could say. And so to me, the sort of one-two is: one, the as-is ain't working, right? They're missing impersonators. They're missing malware. And, whether an individual cares or not, they're inflicting mass pain on developers,

which makes them less inclined to build great applications and put them on any centralized app store. And so in lieu of that, we have got a legitimate trust-based approach. So long, granted, as you have a follow graph, if you have individuals, you know, who are engaging with those applications, using them, and that you can in turn, you know, to some degree sort of outsource your due diligence to. Is that kind of a fair characterization? Yeah, I think it is. And in the example of your mom, your mom could say, okay, I'm going to use Sean's personalized PageRank as my own, like, guideline. And you can use that too. Like, if you have a friend who you trust because he is very technical, and you don't understand any of these things, you can say, okay, for this particular reason,

this particular aspect, I just choose this friend as my point of view. And he follows all these nerd developers that I don't follow. I don't understand what they talk about. But I want to access that data, access that familiarity with the matter, in my own application. And you can just do that. You just have to, well, granted, you know, the client will have to expose those kinds of things, but fundamentally it's all possible. Yes. I mean, that's what I did. You know, I've spoken about this before, when I moved from having been on iPhone since day one, '07, when the first one released, to a Google Pixel running GrapheneOS now, two, three months ago. That's why Zapstore was so powerful for me and continues to be. I am a deeply technical person, capable, all that good stuff. And yet this was a new world. You know, I went from the Apple App Store to APKs.

What is that, Android Package Kits? I forget. Anyway, point being, in the Android world now. And so the ability to see, you know, these three people I know and I'm connected with have also trusted this publisher, developer, and have installed this application, boom, solves a big problem for me. And I think that goes to one of the interesting points about Web of Trust that I'd love to get into a bit, Pip, which is: that is, in effect, hiring Web of Trust to do a job for me. Or perhaps more accurately, I'm hiring people in my Web of Trust. You know, I'm outsourcing, as I'm not going to understand everything. And I think David Strayhorn does a great job of conveying this. You know, I'm not going to be an expert about everything. And so in a particular context or domain, I can outsource my due diligence, my trust, my homework, to others in my network. And so with all of that, let's sort of shift to outside of Bitcoin, outside of Nostr, outside of FreedomTech: traditional builders, traditional developers.

If they're looking at decentralized systems, and we see this, right, they're skeptical because they don't understand, or frankly, what they've always known has been handed down to them, from the Apple Developer Program or Google's equivalent. What do they usually get wrong about Web of Trust, and what would you say to them in terms of not underestimating what's possible and reframing how they approach building on it? I think the biggest mistake, I would call it, before even arriving at Web of Trust, is the identity part. So most games, most apps, maybe they're not thinking about it, you know: why do I have to sign up and create a new damn account with this new app? Why do I have to have 1,000 accounts and a password manager with 1,000 passwords? They can just use Nostr. If you want to have a comment section, some kind of social activity,

like, let's play together in this game, I invite you to this game, those kinds of things. If you integrate Nostr, it's going to take probably the same time it takes for you to build your own authentication, but then you get access to a network that is already alive. And this, I think, is the biggest problem, because everyone is trying to... And also, it's part of the mentality that maybe fuels VC funding, that you have to own your users. Crucial. The thing is that those models are really, it's like a power law. So a few get everything, and then everyone else has 100 users. Perhaps. A hundred accounts. I think that is so important. And I would, again, just underline what you're saying, which is the perception, be it VC-fueled or otherwise, that you must own identity, versus the reality that you ain't going to own identity, right?

X and Meta and Google and Microsoft, to a lesser degree, own identity. And so the alternative is to compound, by allowing the individual to own their own identity by using Nostr. But please continue. Yeah, I think the first step is allowing people to use their existing accounts, at least for these kinds of logins. So at least it's less painful to use. And you could choose to add it as one of the options, because we don't live in a bubble, so you probably are going to have some kind of Google sign-in or stuff like that. OAuth of some sort, yeah. Yeah, but you could also add Nostr. Since you already have five OAuth options, why not add a sixth? You add Nostr, you get a slightly bigger network of people that can use it. And then once you have it, you can benefit from a whole network of people building on it, and also content being created.
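(A sketch of what "adding Nostr as a sixth sign-in" can mean server-side: the user signs a challenge as a Nostr event, and the server checks the event ID and signature instead of a password. The ID hashing follows NIP-01; verify_schnorr is a stand-in for a real BIP-340 verifier from a library of your choice.)

```python
# Sketch: password-less login by verifying a signed Nostr event.
import hashlib, json

def event_id(ev):
    """NIP-01 event id: sha256 over the canonical serialization."""
    payload = [0, ev["pubkey"], ev["created_at"], ev["kind"],
               ev["tags"], ev["content"]]
    data = json.dumps(payload, separators=(",", ":"), ensure_ascii=False)
    return hashlib.sha256(data.encode()).hexdigest()

def verify_schnorr(pubkey_hex, msg_hex, sig_hex):
    # Placeholder: plug in a real BIP-340 Schnorr verifier here.
    raise NotImplementedError

def login_ok(ev, expected_challenge):
    """Accept the login only if the challenge was signed by ev['pubkey']."""
    return (ev["content"] == expected_challenge
            and ev["id"] == event_id(ev)
            and verify_schnorr(ev["pubkey"], ev["id"], ev["sig"]))
```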

And think how extremely difficult it is to convince people to create content on your newly created game application. Like, it's going to be so damn difficult. A cold start. Yeah, the cold start, when you have three users or 10 users. Like, why should I invest one hour of my time to write content for these three users, when the audience is so small? With Nostr, you can jump-start it and arrive at, let's say, slightly short of half a million people. So clearly it's not super big, but from 10 users to half a million is a massive improvement. And once you have it, you also get access to a lot of services that are now serving Nostr, for example, Web of Trust services, but also media hosting.

And then you also get access to relays, most of which are a free service where your user can publish, and you don't have to host your own server to store your user content. You just... You get resilience, right? Yeah, yeah. When someone creates content, your app can just blast it to many relays, and then people will read it from there, and that's it. Maybe someone writes a blog post about how great your game is, right? That's all for free if you plug into Nostr, basically. Because people are going to use the same relays to read blog posts. Right. If you create your new blog post, only people that have an account with your game, which are 10 people, are going to read it. So you get massively increased distribution. And frankly speaking, the cost is just adding a new sign-in.
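(That "blast it to many relays" step is plain NIP-01: clients publish by sending ["EVENT", event] over a websocket to each relay. A minimal sketch, with example relay URLs and an already-signed event assumed:)

```python
# Sketch: publish one signed event to several relays concurrently.
import asyncio, json
import websockets  # pip install websockets

RELAYS = ["wss://relay.damus.io", "wss://nos.lol", "wss://relay.primal.net"]

async def broadcast(signed_event):
    async def send(url):
        try:
            async with websockets.connect(url) as ws:
                await ws.send(json.dumps(["EVENT", signed_event]))
                print(url, await ws.recv())  # relay answers ["OK", ...]
        except Exception as exc:
            print(url, "failed:", exc)       # one dead relay is no loss
    await asyncio.gather(*(send(url) for url in RELAYS))

# asyncio.run(broadcast(my_signed_event))
```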

Yeah, and I think, you know, to the degree that I'm a gamer, which is not much, or I wouldn't admit it if I were, I'm thinking about Apple Game Center, right? Like, here is a Goliath, a big tech Goliath, who I think objectively has failed in creating a social network, in effect, for gaming. And so Sony and Microsoft dominate, you know, Steam to a lesser degree. And so, you know, if you're a builder, back to your point, you're just not going to take that on and win. Or rather, even if you could, if you had a chance, what an incredible burn of capital that's going to require, versus focusing on the core experience. And so I think, you know, this has been, I hope, useful for builders looking to invest where it matters most, which is differentiation and end-user experience, and individuals who, as you say, I mean, I've got, you know, once it was 1Password, now it's Bitwarden, and I've got 1,400 logins

saved in Bitwarden. It's insane. And so I think there's a tremendous amount of benefit for both sides, both the application developers and the users. Well, let's wrap up here, Pip. Where does this go next? What capabilities are you building toward with Vertex that don't exist yet? What should we look forward to, both in Web of Trust broadly and Vertex specifically, if you care to get into that? I think a great new feature would be the ability to detect explicit content. That's a big one, because a lot of people nowadays on Nostr are actually publishing more or less disgusting content, from my point of view at least, you know. Yes, and everybody will, you know, that beauty is in the eye of the beholder, and so is the inverse. Yeah, yeah, exactly. And so, but some kind of, let's say, objective quantification of: okay, is this not-safe-for-work or safe-for-work?

Is this safe to be shown to a child? Yeah, I don't want this. I don't want to open up, you know, my Nostr app of choice in a crowd and have something pop up on my feed that nobody wants to see. Yeah, exactly. So this is like a very obvious thing. So an endpoint where you can send maybe an npub or an event ID, and it tells you what's the likelihood that it contains explicit content, and maybe what kind of explicit content. So maybe, if you're a builder, you can use that to remove stuff your users don't want to see from their feeds. Maybe it's highly politicized, I assume, you know, which is a different category and perhaps a different severity. But there are people who just don't want to see certain political topics or content. Is that a different problem, or does it fall into that same sort of rubric? No, I was more thinking about pornography. Okay, yeah, straight up. Yeah, those things where you really know, before you start a session of

scrolling through the feed, if you want to see pornography or not. Got it. So it's pretty binary. Yeah. Well, the result could be classifying the type and then also the likelihood, so you can have more nuance, because some forms of art, maybe they have a naked body, but they are not pornography. So obviously it is going to be difficult to do. There's no objective measure there. Now, a tricky question, or tricky sort of subject, here that occurs to me is: how do we not fall into chat control? How do we not fall into scanning content in a way that becomes KYC or otherwise violates privacy? How is this different? Now, this is different because it can only analyze public data. And this is one part where Nostr is really shining, is really proving itself.

But the other part is, if you have some content you don't want people to see, you can encrypt it. You have a key, you encrypt it, it's yours. You can store it locally, that's the best. Encrypted and stored locally, that's the best. Or even encrypted and stored somewhere else, like in a relay, still encrypted. So you can have private data and then public data, and I can only work with the public data, fundamentally. Or maybe data you have decrypted for me: like, someone sends you a picture in your DMs, you decrypt it for me, you send it to me, maybe in a kind of anonymized way if you want. It's all opt-in, obviously. Ah, I see. So I could decouple, not to get in the weeds too far, but you know, if we had subkeys or key rotation. I suppose it's not rotation, but if we had a subkey,

I could spin up a subkey, submit content to the service, have it scanned, get a result back, discard that key. There's no tether or connection between my primary identity and a particular payload or message. Yeah, that's right. You could do that. At the moment, the experience is not great, because you would have to buy credits for that subkey. You cannot, at the moment, get free credits for that subkey, because the subkey would probably have no reputation; it would be indistinguishable from a bot. Right. And I took us down that rabbit hole, but I think, you know, not to take away from your key point, which is explicit content is a job to be done. It's a service to be rendered. What else is interesting to you in terms of delivering greater capabilities to applications and users through Web of Trust?

Well, we talked a lot about impersonation detection, but there are also many other things, because impersonation detection and spam prevention are like the basics. Like, if you don't have those, your app is barely working. Right. It's a garbage heap and nobody wants to be there. Yeah, exactly. But you can also use Web of Trust to improve the experience. So one thing I'm offering now is giving you personalized recommendations: like, oh, you might want to follow these people because they are quite popular in your own subnetwork. So onboarding. Well, I don't know if that goes to onboarding, because you've got to have some... No, no, it goes to onboarding, because you can specify different algorithms. One is, well, personalized PageRank. It's going to be personalized to you, but if you don't follow anyone, it's not going to work. But you can at least use a global PageRank. Topics, yeah. Yeah, maybe, yeah.

Also topics in the future, yes. So you can get, like, the global, the average perspective of who is popular in art, in Bitcoin, in those kinds of topics. And for onboarding, that's also pretty good, because you don't have to reinvent your own onboarding. Well, and what I think here, too, what I would underscore, you know, we're jumping around between sort of end user and developer, which is great. But I would say here, too, that the message to a builder is: this will only get better. Right. Speaking of compounding, you know, and the benefits of compounding: to be able to recreate that from scratch, versus to be able to get this for, quote, free, or inherit this capability in this social graph, and all of the signals of this Web of Trust, it just gets better over time. Yeah, exactly. And then there is also search, which, if you have used it on Nostr, it's not the best feature. Not a shining moment. No, exactly. Most of the time you search

for Jack, and then what happens is you get every possible impersonator of Jack Dorsey under the sun, but not Jack Dorsey. Right, right. And mostly because most clients use the search capabilities of the relays, and the relays just do some text similarity. So if someone calls themselves Jack, it's going to be a Jack, right? So obviously that... I'm suddenly reminded of Silicon Valley's Hot Dog, Not Hot Dog, for all of those fans of the HBO show. But yeah. So it's pretty brittle right now, of limited use. And sort of, you know, fast-forward us a year or two: what would search on Nostr or a Nostr-powered application do differently than X? Or is it, hey, you get the same quality or caliber of search without the centralization, which is enough? Yeah, I think the first step is reaching, matching, the quality of the centralized solutions.
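(A toy sketch of why reputation-aware search beats bare text matching: every "Jack" matches the string, but sorting by a Web of Trust rank computed from your point of view surfaces the one your network actually follows. The scores are made up for illustration.)

```python
# Toy demo: rerank text matches by a point-of-view trust score.
profiles = [
    {"name": "Jack",  "pubkey": "jack-real",  "wot_rank": 0.091},
    {"name": "Jack",  "pubkey": "jack-fake1", "wot_rank": 0.0},
    {"name": "Jackk", "pubkey": "jack-fake2", "wot_rank": 0.0001},
]

def search(query, profiles):
    hits = [p for p in profiles if query.lower() in p["name"].lower()]
    # Text similarity alone cannot tell these apart; trust rank can.
    return sorted(hits, key=lambda p: p["wot_rank"], reverse=True)

print([p["pubkey"] for p in search("jack", profiles)])
# ['jack-real', 'jack-fake2', 'jack-fake1']
```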

Most of them have good search, unless search is a part of their revenue model, where they put ads in search. Right. Which it is, because I think on Google, obviously, like, the first three results are ads. And then on Twitter, the blue checks get a massive boost in search. So obviously, even if someone is more interesting, according to your own point of view, than a blue check mark, you know, the search results would not match your own taste; they would match what the company wants to push onto you. Absolutely. Yeah, I wrote a piece recently called Extraction Is Rational, and that is it, right? Like, to extract value in every potential scenario like that, from the standpoint of these centralized platforms, and due to broken money, a whole other conversation. But that is the rational approach. And so I think, you know, as you're saying, I get the caliber and quality of search without, or at the same time, I can

verify that those results have not been interdicted. You know, someone hasn't jammed something into my search results in a way that is this black box. And I think that's what's so powerful, among other things. I think one aspect of Vertex in particular that is very interesting is that, yes, it is trusted to some extent. Sure. Because to check that the scores are exactly those scores, you would have to recompute on the same data. Well, the service, the code, is open source, but there is no guarantee, actually, that I'm running that code. Like, I can tell you, yes, I'm running that, but you cannot prove cryptographically that I'm doing so. But what you can do is have a kind of optimistic proof, which I think most of the time is enough. Meaning, when I send you a response,

my response is signed by my key. If later you find out that that response was compromised, that I put an impersonator in the first spot for Jack Dorsey, so you messaged the wrong person, you sent money to the wrong person, whatever the case was, then you can just reshare the event and say: look, Vertex lied here. And then my reputation is gone in just one response, because every response is signed. I cannot say, no, that's false, because... Attribution is absolute, right? Yeah. And I think that, again, is just one of these things that is worth highlighting. And, you know, instead of turtles, it's reputation all the way down, right? And I think what's really powerful that you've drawn out, Pip, is that at each stage, there is a signal that affects reputation, positively, negatively, otherwise. And as this gets built out, as Web of Trust becomes integral, you know, there are countless bits of science fiction I can recall that I've read, you know, which talk about sort of this reputation that follows you, in credits, right?

Sci-fi authors always just use credits, and now we say Bitcoin. But I think it is a really powerful vision of what's coming and what can be built when there is no way to escape bad actions, in the sense that if you are a bad actor, you know, it is attributable to some cryptographically verifiable identity. And so, you know, there's nowhere to hide. That's good and bad, but only bad, I think, if you're a bad actor. So, well, I really appreciate it, Pip. This is exciting. For builders who want to dive in, what are the next one or two steps they should take to get started with Vertex? They can go to vertexlab.io and/or search for Vertex on Nostr. And as I often say, if they use a client that has some kind of decent search,

they will find it. And if not, they should be using Vertex, right? Yeah, they can go to the website. And then you have, I think, well-written documentation. And then if you have any questions, there is also a link to my Signal, so you can just send me a message and I'm going to help you out. Usually I've found that developers take a couple of hours between starting and implementing some kind of feature, because fundamentally it's very easy. You just sign an event with some kind of parameters, like what algorithm you want to use, who is the personalization source, those kinds of things, how many results you want. You want 10, you want 100. You send it, and then you get the response. And it's all Nostr events. So if you're already using Nostr,

it's going to take literally maybe less than one hour and you can get started. Brilliant. So if you're a builder, do that. If you're not a builder, but you know some, point them that way or demand better of your applications. Super. Pip, thank you so much. And I hope we can do an update in the near future. Thank you. Okay. Take care.