I would like to find a solution (FOSS and with a GUI) to shred files on an Android smartphone and overwrite the device's free space from a Linux PC, and then, in a second phase, a solution for recovering deleted data from that smartphone (to check the effectiveness of the shredding).
On one side I run Debian Linux, and on the other a non-rooted Android smartphone.
As I'm familiar with the adb program, I've tried using it with BleachBit and TestDisk, but these programs don't detect the smartphone…
At the very least, I'd like to know whether it's possible to access the /data partition of a non-rooted smartphone through adb.
(In this case, the "--user 0" option doesn't work…)
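For reference, here is roughly what probing /data over adb looks like (a sketch only; the exact error text varies by Android version, and the script is guarded so it does nothing harmful on a machine without adb):

```shell
#!/bin/sh
# Sketch: probing the /data partition of a non-rooted phone over adb.
# On a non-rooted device the adb shell runs as the unprivileged "shell" user,
# so listing /data is refused by permissions/SELinux.
if command -v adb >/dev/null 2>&1; then
    RESULT=$(adb shell ls /data 2>&1)   # expect "Permission denied" or similar
else
    RESULT="adb not installed"
fi
echo "$RESULT"
```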
Overwriting data is pretty pointless on modern storage. The only effective way to destroy data is to get rid of the decryption keys, which Android does by default on a factory reset. If you cannot rely on that, you should, with caution, physically destroy the device after the reset through proper means, i.e. actual shredding.
It's only really useful on HDDs. On an SSD you are just wasting endurance: NAND flash (SSDs, USB sticks) typically does not reuse the same physical location on each write, because the flash translation layer remaps writes across the chip. The term is "wear leveling", should you want to read more about it online.
Since you never overwrite the same physical location, it is still possible to recover the old data.
Just want to clarify that it's HDDs with physical spinning platters you should be doing this for, which are typically SATA-based; but SATA-based SSDs also exist, in both 2.5-inch and M.2 form factors.
So running a 7-pass secure erase on a USB stick will do nothing other than ruin it? How would you go about making data forensically unrecoverable on a USB stick that already held unencrypted data?
Filling a drive with junk data is useful regardless of medium.
It can still dramatically reduce how much old data survives on flash disks, and largely eliminate it on mechanical disks.
If you want to be sure, you should combine both data filling and the built-in erasure methods (ATA Secure Erase, NVMe Sanitize, or NVMe block erase).
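As a sketch of those built-in methods (device names are placeholders, the commands are destructive, and exact flag spellings can vary between hdparm/nvme-cli versions; the script is guarded so it is inert until you substitute a real device):

```shell
#!/bin/sh
# Sketch only: built-in erase commands against placeholder devices.
SATA_DEV=/dev/sdX        # placeholder -- a SATA drive
NVME_DEV=/dev/nvme0      # placeholder -- an NVMe controller

if [ -b "$SATA_DEV" ]; then
    # ATA Secure Erase: set a temporary password, then erase (clears it again)
    sudo hdparm --user-master u --security-set-pass p "$SATA_DEV"
    sudo hdparm --user-master u --security-erase p "$SATA_DEV"

    # NVMe: sanitize with block erase (-a 2), or format with user-data erase
    sudo nvme sanitize "$NVME_DEV" -a 2
    sudo nvme format "${NVME_DEV}n1" --ses=1
else
    echo "set SATA_DEV/NVME_DEV to real devices first"
fi
```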
For Android specifically, there is far too much spare area in the system and firmware partitions for filling to be fully effective, but it can still have some benefit.
For LUKS, please note that the header can be backed up at any time, so if your drive doesn't really discard data, a saved header could later be combined with the leftover ciphertext to restore data even if you thought it was gone. Hence combining methods is good.
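To make the header point concrete (a sketch with a placeholder device: `luksHeaderBackup` copies the header to a file, and `luksErase` destroys the keyslots on disk, after which only a surviving header copy plus the passphrase could bring the data back):

```shell
#!/bin/sh
# Sketch: why a backed-up LUKS header matters (placeholder device, guarded).
DEV=/dev/sdX                     # placeholder -- your LUKS device
if [ -b "$DEV" ]; then
    # Anyone holding this file plus the passphrase can unlock old ciphertext
    # later, even after the on-disk keyslots have been destroyed:
    sudo cryptsetup luksHeaderBackup "$DEV" --header-backup-file header.img

    # Destroys every keyslot on the device -- final, unless a backup survives
    sudo cryptsetup luksErase "$DEV"
else
    echo "set DEV to a real LUKS device first"
fi
```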
And finally, when it really matters, physical destruction is the only option.
All things being equal, yes, a USB 3.0 drive will be faster than a USB 2.0 drive. In most cases the USB 3.0 drive's flash memory is fast enough to take advantage of the faster bus.
The initial “cryptsetup” operation to create a new cryptoblock device only takes seconds. It does NOT overwrite the existing sectors.
Ideally, the process would be the following:
1. If applicable, copy any data from the USB flash drive to another location.
2. Use "shred", "badblocks", or both to securely erase the USB flash drive. Command "shred" will blindly overwrite the sectors with random data and no verification. Command "badblocks" will overwrite and test/verify every sector, for example: sudo badblocks -b 1048576 -wvs /dev/sdX
3. Use command "cryptsetup" to create a cryptoblock device: sudo cryptsetup luksFormat /dev/sdX
4. Use command "cryptsetup" to open the cryptoblock device: sudo cryptsetup open --type luks /dev/sdX sdX_crypt
5. Copy any data from the other location to the USB flash drive's cryptoblock device.
Between step #2 and step #3, I would also partition the USB flash drive and adjust the commands to create and open the cryptoblock device directly on the partition; I was trying to keep it simple.
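Putting the whole wipe-then-encrypt sequence together as one sketch (placeholder device name, and the filesystem/mount commands are my additions; every command here is destructive, so double-check lsblk before substituting a real device -- the script is guarded until you do):

```shell
#!/bin/sh
# Sketch of the full wipe-then-encrypt sequence (placeholder device, guarded).
DEV=/dev/sdX                          # placeholder -- your USB flash drive
if [ -b "$DEV" ]; then
    # Overwrite and verify every sector (four patterns, 1 MiB blocks)
    sudo badblocks -b 1048576 -wvs "$DEV"
    # Create the LUKS container (defaults to LUKS2 on current cryptsetup)
    sudo cryptsetup luksFormat "$DEV"
    # Open it as /dev/mapper/sdX_crypt, then put a filesystem inside
    sudo cryptsetup open --type luks "$DEV" sdX_crypt
    sudo mkfs.ext4 /dev/mapper/sdX_crypt
    # Mount and copy the saved data back
    sudo mount /dev/mapper/sdX_crypt /mnt
else
    echo "set DEV to your USB flash drive first (check lsblk)"
fi
```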
If you want an easier process, I would recommend Gnome Disks.
Ignoring verification, shred does write the fastest.
badblocks is basically broken on >=4TB drives.
I've seen Gnome Disks silently fail to erase disks before.
Also, depending on your distro's age, some older versions will create LUKS1 even though current cryptsetup defaults to LUKS2.
I searched my repository, found a link to Google Code Archive, which pointed me to GitHub. I think this is the “scrub” you mentioned:
“scrub” does look very nice. It even has DoD 5220.22-M. Thank you!
Part of the reason that I chose "badblocks" was that it performs write/read verification on each pass (0xAA, 0x55, 0xFF, and 0x00), including binary inversions, similar to how "memtest" tests RAM.
Also, I have seen POST hang issues with disks that hold random/garbage data instead of a valid partition table. Not ending the scrub/shred/badblocks run on random data does solve some issues, which I learned the hard way with "shred".
Understanding that it will reduce the media lifetime, I want to fully verify media before I trust it with my data.
Yes, you are correct that by default “badblocks” is limited to smaller disks, but this can be easily worked around.
NOTE: Command badblocks defaults to a 1024-byte block size, which limits it to 4 TB disks because the resulting block count must fit in a 32-bit value. Increasing the block size from 1024 bytes to 1048576 bytes (1 MiB) allows very large disks. Using the default block size with an 8 TB disk throws the following error:
badblocks: Value too large for defined data type invalid end block (7814026584): must be 32-bit value
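The arithmetic behind that limit, using the block count from the error above: with 1024-byte blocks the count exceeds the largest 32-bit value (4294967295), while 1 MiB blocks bring it well under:

```shell
#!/bin/sh
# The failing 8 TB disk reports 7814026584 blocks of 1024 bytes
BLOCKS_1K=7814026584
BYTES=$((BLOCKS_1K * 1024))
BLOCKS_1M=$((BYTES / 1048576))        # same disk counted in 1 MiB blocks
echo "1 KiB blocks: $BLOCKS_1K (32-bit max: 4294967295)"
echo "1 MiB blocks: $BLOCKS_1M"
```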
I agree about Gnome Disks. I, too, have seen some issues with it over the years.
Now that you mention the quirkiness of Gnome Disks, perhaps I should have suggested VeraCrypt as a simple solution instead. I have never used it, but I did use its predecessor, TrueCrypt, for years.