Sentinet is missing one key piece: there is no AI safety test before LLMs are ledgered in the protocol itself. There is no 'community driven' path to detect rogue LLMs, or subsequent weights/improvisations, upfront and stop them from being shared at all; the protocol only deters unauthorized use after the fact via slashing. I hope they focus on that area as well.
Good luck with the move. Nice post!
Great. Now if only we had trustworthy data to feed the AI. The protocol that can deliver that will have struck gold.