…or nothing to be left
It hasn’t ended yet; as soon as we reach 75%, the simulation will end.
Define “sandboxed”
The application can only access a limited part of the system? Then use Flatpak, or build a container/VM image using nixpkgs.
The application can be uninstalled completely and has separate libraries? I prefer Nix.
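To make “limited part of the system” concrete, here is a minimal sketch of tightening a Flatpak app’s sandbox; the app ID org.example.App and the granted directory are placeholders:

```shell
# Show the permissions the app ships with
flatpak info --show-permissions org.example.App

# Revoke all filesystem access for this user, then grant back one directory
flatpak override --user --nofilesystem=host org.example.App
flatpak override --user --filesystem=~/Documents/app-data org.example.App

# Verify the effective overrides
flatpak override --user --show org.example.App
```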
Especially since they don’t talk about how they secure the local data
They don’t because they don’t
All the data you import is indexed in a SQLite database and stored on disk organized by date, without obfuscation or anything complicated.
Probably because this is still in early alpha and “the schema is still changing”.
How does mergerfs compare to btrfs and bcachefs when it comes to using multiple partitions?
Drives connected over USB have an unstable connection in my experience; this is very annoying and gets worse with hubs.
RAID reduces the time a system is offline and reduces data loss when a drive fails. If you can afford to wait for the replacement disk and for the backup to restore, and you have regular backups that ensure no important data gets lost (though remember that data added between backups may be lost), then you don’t need RAID.
I don’t use RAID because if my disk fails I can stomach the 2–4 days it takes to buy a new one and restore the backup.
Very important: use S.M.A.R.T. and a filesystem with checksums to make sure you’re not backing up corrupted data and know when to buy a new drive.
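A sketch of what that monitoring can look like with smartmontools and a btrfs scrub; the device and mount point are placeholders:

```shell
# Quick health verdict from the drive's S.M.A.R.T. data
smartctl -H /dev/sda

# Kick off a short self-test, then inspect the attributes
smartctl -t short /dev/sda
smartctl -A /dev/sda

# Let the filesystem verify every checksum on disk
btrfs scrub start /mnt/data
btrfs scrub status /mnt/data
```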
For encryption at rest you may want to look at clevis and tang, though you need a server in your home network for this to work. The client (with clevis) then decrypts the disk at boot if it can reach the server (tang). The server can’t decrypt the data without the client secret, and the client can’t decrypt it without the server being reachable.
Don’t know what your server could be though, maybe a router with custom firmware?
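A sketch of the clevis side, assuming a LUKS partition on /dev/sda2 and a tang server reachable at http://tang.local (both placeholders):

```shell
# Bind the LUKS volume to the tang server; the existing passphrase stays as a fallback
clevis luks bind -d /dev/sda2 tang '{"url": "http://tang.local"}'

# Check what the volume is bound to
clevis luks list -d /dev/sda2

# On many distros this unit makes clevis unlock bound volumes during boot
systemctl enable clevis-luks-askpass.path
```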
You should also look into cloud storage and rclone; that way you can automate your backups more and reduce the need for manual intervention.
I use rclone and restic to automatically back up my servers daily, which takes a few seconds most of the time because the backups are incremental.
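A sketch of that setup, assuming an rclone remote named “remote” is already configured and /srv is the data to protect:

```shell
# One-time: create the repository on the rclone remote
restic -r rclone:remote:backups init

# Daily (e.g. from a systemd timer or cron): incremental snapshot
restic -r rclone:remote:backups backup /srv --tag daily

# Keep a bounded history and reclaim space
restic -r rclone:remote:backups forget --keep-daily 7 --keep-weekly 4 --prune
```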
Something I don’t get is: why try to make all browsers look the same when you can do the easier thing and just give each browser session a new fingerprint?
A unique fingerprint doesn’t matter much if it’s only valid until I close that website, right? So why not change a lot of variables by some small amount and make the collected data useless?
As long as you only copy from the disk, you can just reboot: the whole system in RAM vanishes and the normal system boots again for the second try.
FYI, you can use kexec and a prepared initrd to do something similar with a single command.
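A sketch of that kexec variant; the kernel path, the prepared initrd, and the kernel command line are placeholders:

```shell
# Stage the RAM-only system: current kernel plus a prepared initrd
kexec -l /boot/vmlinuz-$(uname -r) \
      --initrd=/boot/initrd-ramsystem.img \
      --append="root=/dev/ram0 rw"

# Jump into it immediately, skipping firmware and bootloader
kexec -e
```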
Or encrypt it before uploading
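For example, streaming a symmetrically encrypted archive straight to the remote with gpg and rclone; the remote name and paths are placeholders:

```shell
# Archive, encrypt with a passphrase, and upload without touching local disk
tar -cz /srv | gpg --symmetric --cipher-algo AES256 \
  | rclone rcat remote:backups/srv.tar.gz.gpg

# Restore: download, decrypt, unpack
rclone cat remote:backups/srv.tar.gz.gpg | gpg --decrypt | tar -xz
```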
Would this even cause a kernel panic? I think it just causes a userland “panic”.
That’s fine as long as it can self-reference.
You need a phone, tablet, or other device that’s been rooted.
Dammit
And calculating the offset needed to get them all synced up involves calculating time dilation, which involves knowing (or assuming) the speed of light. These synchronizations work just as well if the two-way speed of light differs from the one-way speed of light.
To sync the clocks you assume the speed of light is c, but c is what you’re trying to measure, so all those clocks aren’t verifiably synced.
Just read through the wiki or Harvard’s books if you’d like; this is an unsolved “problem” in physics for a reason. Or do you think no one cares about how fast c is?
See also This or, more accessibly “Synchronization conventions”
It is impossible to synchronize the clocks in a way that would let you actually measure the one-way speed of light, because correcting for time dilation requires defining beforehand how fast light travels.
See also This or, more accessibly “Synchronization conventions”
The very accurate clock needed in this case is physically impossible as far as we know; there’s no way to make this measurement within our current understanding of physics.
Though if you can figure out a way you should publish a paper about it.
And further down:
Unfortunately, if the one-way speed of light is anisotropic, the correct time dilation factor becomes √(1 − v²/c²)/(1 + κv/c), with the anisotropy parameter κ between −1 and +1.[17] This introduces a new linear term (here κv/c), meaning time dilation can no longer be ignored at small velocities, and slow clock-transport will fail to detect this anisotropy. Thus it is equivalent to Einstein synchronization.
This is slightly different though: we only know the two-way speed of light, not the one-way speed of light.
We only know that the trip, there and back, takes x seconds. We cannot prove that the trip to the mirror takes the same amount of time as the trip back.
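To make that concrete, here is the standard worked example. Assume (Reichenbach-style) that light travels at c/(1−κ) on the way out and c/(1+κ) on the way back, for some anisotropy parameter κ. The measured round trip over a distance L is then:

```latex
t_{\text{round}} = \frac{L}{c/(1-\kappa)} + \frac{L}{c/(1+\kappa)}
                 = \frac{L(1-\kappa)}{c} + \frac{L(1+\kappa)}{c}
                 = \frac{2L}{c}
```

Every value of κ gives the same round-trip time, so a mirror experiment always measures 2L/t = c regardless of any one-way anisotropy.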
The special theory of relativity, for example, does not depend on the one-way speed of light being the same as the two-way speed of light.
There’s a significant detail which is missing from this analysis. The law which puts copyright over privacy is a French law, not an EU law. The EU court found that the French law doesn’t contradict any EU law.
So the EU court did not determine that copyright is more important than privacy. It determined only that the French parliament is allowed to decide that question for France.
So while this does set a bad precedent, it is not as bad as the title would have you believe.
Laughs in “will you allow <website> to use your camera?” (Yes/No)