Is there any way to deploy serverless functions so that their code would be publicly available and auditable?
That is, that I (as a consumer of such a function) would know what code actually gets executed when I invoke it. So that I would only have to trust the hosting provider, not the hosting provider _and_ the developer? To somehow know that the developer did not tamper with source code prior to deployment.
Of course everybody can audit the code running on the client, as long as it is not obfuscated. And of course nobody can ever be certain about the code running remotely.
The question is whether at least trust in the developer can be removed from the equation; so that clients will know that if the code running remotely was tampered with, at least it was the hosting provider, not the developer.
@IngaLovinde i vaguely remember some folks using Rust macros to include source code in AWS lambda binaries so they could serve it if you hit a particular endpoint as an easy way to do AGPL compliance
though i don't know if they offered a way to attest that the code served was in fact the code executing.
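A rough sketch of that trick, in Python rather than Rust (everything here is hypothetical, not any specific deployment's code): embed the function's own source text at startup and serve it from a dedicated endpoint.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import pathlib

# Rough analogue of Rust's include_str!: capture this file's own source
# text so it can be served back for AGPL-style source offers.
try:
    OWN_SOURCE = pathlib.Path(__file__).read_text()
except NameError:  # no backing file, e.g. in an embedded interpreter
    OWN_SOURCE = "<source text embedded at build time>"

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/source":  # the "particular endpoint"
            body = OWN_SOURCE.encode()
        else:
            body = b"hello from the function\n"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To try it locally:
# HTTPServer(("127.0.0.1", 8080), Handler).serve_forever()
```

As noted above, nothing ties the text served from `/source` to the code actually handling the other routes; this covers the source-offer requirement, not attestation.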
@IngaLovinde yeah :( i can't think of a way to do it without specific support from the hosting provider
@IngaLovinde maybe if it also sent along a hash of its current executable or something, but then how would u verify that its not lying about that
@mxdragon yes, that's the question. So it should probably be a feature offered by the hosting provider, outside of the developer's control, so that the developer cannot influence its output.
@IngaLovinde oh i feel like thats easier probably, if u can trust the provider
couldnt they just send a hash of the used binary on request? it would obviously depend on the architecture and compile-time configuration, but if thats sent along u could in theory try to replicate that, right?
@mxdragon Yes, if the hosting provider sent a hash of the binary used to handle the request (or git commit hash) in response headers, that would probably solve the issue. But I don't know of any providers doing this?
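What client-side verification could look like under that scheme, as a minimal sketch: the header name `X-Binary-SHA256` is made up here (no known provider sends it), and it assumes reproducible builds so anyone can rebuild the audited source and compare hashes.

```python
import hashlib

def hash_artifact(data: bytes) -> str:
    """Hash a build artifact the same way the provider (hypothetically) would."""
    return hashlib.sha256(data).hexdigest()

def verify(response_headers: dict, locally_built_binary: bytes) -> bool:
    """Compare the provider-attested hash against a locally reproduced build."""
    attested = response_headers.get("X-Binary-SHA256")
    return attested is not None and attested == hash_artifact(locally_built_binary)

# Toy usage: pretend this byte string is the deployed binary.
binary = b"\x7fELF...the deployed function..."
headers = {"X-Binary-SHA256": hash_artifact(binary)}
print(verify(headers, binary))                # True: local rebuild matches
print(verify(headers, binary + b"tampered"))  # False: local rebuild differs
```

The key point is that the header must come from the provider's infrastructure, not from the function itself, since the function could report anything.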
im just rambling
@IngaLovinde im thinking theres nothing a correct binary could send or do that a malicious one couldnt
so if anything, u would need a way to generate some data that only the correct binary can know
@mxdragon malicious binary can always act as a MITM for the correct binary. And you won't be able to crypto your way around it because if you don't trust the developer, you don't trust their public keys.
@IngaLovinde do u have a specific use case in mind or is this out of curiosity?
@mxdragon I'm drafting a trusted alternative to https://embracing.space/@IngaLovinde/105425002163297401 right now, and while it mostly relies on users being able to audit the client-side code, having an auditable server-side code would significantly improve things.
@goodvibes Consider it this way: you have an account on todon.nl, which claims to run unmodified mastodon software.
Technically, you can verify that all the client scripts it serves to you are those of the original mastodon. But you cannot verify what software is actually running on the server.
For example, are your DMs really DMs, or is everything you send also posted publicly?
And the question was: is there any way for todon.nl admins to somehow guarantee that they didn't tamper with mastodon source code?
I don't understand how a REPL can help with this. While a REPL can technically run on a server, it does not allow any of the things that are the reason for running code on the server in the first place (storing secrets, which can be used to encrypt users' data, to connect to the database, etc.). And without that, the fact that code in a REPL runs on the server rather than the client is just an inconsequential technicality.
Thank you for the clarification. It's clearer this way; I didn't completely get it the first time.
I don't know the mastodon ecosystem well, so I can't really provide anything but a clueless guess. Is your question actually about mastodon specifically, or is it a more general one? In the former case, would solving it for mastodon give insight into the general case?
@goodvibes It is a more general one. Todon.nl is just an example to illustrate the problem.
Of course, we can never be actually sure what code is actually running on a server, or even that there exists such a server. Some amount of trust is required.
But currently a user is required to trust both the todon.nl admins and their hosting provider, and I wonder if there is a way for the todon.nl admins to do something so that the user would only have to trust the hosting provider. This would increase general trust in scenarios where the todon.nl admins could have a personal interest in doing something with a user's data (e.g. reading that user's DMs if they know each other), while a large hosting provider would not care.
OK, I get it much more clearly stated like that, thank you.
I can see how it fits into a larger problem of trust. Sadly I have no idea how to even scratch the surface of it.