It seems I'm getting the knack of it now...
So your argument here is that, if we are going to go this route, interpretability techniques should be used as a safeguard to ensure the safety of these agentic AIs, just as much as they are currently being used to improve their "planning capabilities"?
I understand the reservation about donations from AI companies because of the conflict of interest, but I still think the largest driver of this intervention area (the AI cause area) should be these companies. Who else has the funds to drive it? Who else has the ideological initiative necessary for change in this area?
While it may be counterintuitive to have them on board, they remain the best bet for now.
This is a nice read. However, in your conclusion you asked the question "Should we lie?" While that may seem self-explanatory and intriguing, where is the place of diplomacy in this regard? As you've said, your type of audience matters, and others beyond your direct audience might (or will) see through the lies. Herein lies the question, then: an exploration of diplomacy and frankness. Can the two go pari passu?
This is a great stride in the right direction. Looking forward to your exploration of the Africa Tech Space.
A quick question: how do you plan to address the seeming lack of government collaboration with this type of project, especially in some parts of Africa?