DataHoardingFreaks: DataHoarding, Storage and Cloud Backup Freaks by RoadToPetabyte [DataHoarders / AppleDataHoarding]
The true DataHoarding lifestyle with DataHoardingFreaks. A project by @roadtopetabyte (http://pixly.link/roadtopetabyte) and @appledatahoarding. DataHoarding single Tg channels: http://pixly.link/DataChannels | Discord server: http://pixly.link/RoadToPetabyteDis
Ultrastar Transporter: 368 TB of NVMe storage carried in a suitcase
Content: Western Digital packs a portable storage server into a suitcase. The Ultrastar Transporter is intended for local data exchange and houses up to 368 TB of flash storage connected via NVMe.
Source: https://www.computerbase.de/2024-04/ultrastar-transporter-368-tb-nvme-speicher-werden-im-koffer-transportiert/
Author: Michael Günsch
Date: April 12, 2024 at 04:09PM
We are DataHoardingFreaks. A @roadtopetabyte project
Newbie - Lost, Multiple (personal) MSFT OneDrives, where to begin?
Content:
I'm new to Rclone. I've done some reading, but I still need help. I'm trying to understand this first, as I don't want to mess with or delete remote files by accident because I wasn't sure what I was doing.
I have M365 Family, so I set up four OneDrives for myself under different emails (primary: Mark@outlook, additional: Mark2@Outlook, Mark3@outlook, Mark4@outlook). I use the OneDrive app pointing to my primary email's OneDrive on my main PC, my phone, and my laptop. Each additional email's OneDrive has a folder shared with my primary email account, which therefore shows as a shared folder on my PC/phone/laptop, allowing me to save/copy/move files to the additional OneDrives.
I also have a second PC, not signed into the OneDrive app, that I use as my Plex server and that has spare drive space.
From what I'm reading, I should be able to set up Rclone on my secondary PC and download/sync from all the personal MSFT OneDrives to folders on that second PC? Is that correct? Will I be able to maintain a separate local PC folder for each OneDrive?
I do need to reorganize what data/files I have on each OneDrive. So I would like to initially set things up, download ALL files from ALL OneDrives to separate folders on the second PC, reorganize, and upload to the correct OneDrives. After that, I'd continue to sync each OneDrive to its local folder. Can rclone do that?
I've read about rclone union, but I'm thinking I want to keep things separated by OneDrive, so union isn't what I want, right?
Looking at https://rclone.org/onedrive/ it appears to walk me through the setup of a single OneDrive. Is there a similar guide for adding the 2nd, 3rd, and 4th, or do I just re-follow that guide, selecting "New remote" for each? Does each remote get its own name to use in place of "remote" when issuing commands?
It appears that once I set up the remote, say OneDrive with my primary email, I would just run rclone copy onedrive:folder d:\folder-mark1 to copy that folder on Mark1's OneDrive locally to d:\folder-mark1?
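For example, I'm imagining something like this, with remote names I made up:

rclone config   (choose "New remote" once per account: onedrive-mark1 ... onedrive-mark4)
rclone copy onedrive-mark1: "D:\OneDrive-Mark1" --progress
rclone copy onedrive-mark2: "D:\OneDrive-Mark2" --progress

and then, after reorganizing and uploading, periodic runs of rclone sync onedrive-mark1: "D:\OneDrive-Mark1" (with --dry-run first to preview) to keep each local folder current. Is that roughly right?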


We are DataHoardingFreaks. A @roadtopetabyte project
Rclone Browser - OneDrive - two different accounts - bidirectional sync - external SSD
Content:
I've been trying to find a solution for a few days. I have installed rclone and Rclone Browser on my Mac and set up my two Microsoft OneDrive accounts. My goal is for rclone to work the way the native Microsoft OneDrive application does, that is, to sync file changes bidirectionally between the cloud location and a local location, which will be a removable SSD. This needs to work for both Microsoft OneDrive accounts.

I am trying to do this with rclone because the native Microsoft OneDrive application does not let me use an external hard drive; what it does is create a shortcut on the external drive, while the files are actually saved on the internal drive.

Is it possible to do this using rclone with Rclone Browser?
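From what I've read so far, rclone's bisync command looks like the closest match; a rough sketch with made-up remote and path names:

rclone bisync onedrive-personal: /Volumes/ExternalSSD/OneDrive-Personal --resync   (first run only)
rclone bisync onedrive-personal: /Volumes/ExternalSSD/OneDrive-Personal   (subsequent runs, once per account)

Is that the right approach here?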
We are DataHoardingFreaks. A @roadtopetabyte project
Reports from Korea: Samsung to move 3D NAND to 290 layers shortly
Content: According to media reports from South Korea, Samsung will begin series production of its 9th-generation 3D NAND, also known as V-NAND V9, this month. The flash memory reportedly has 290 layers, the highest count in the industry to date. Next year, Samsung intends to jump straight to 430 layers.
Source: https://www.computerbase.de/2024-04/berichte-aus-korea-samsungs-soll-bei-3d-nand-in-kuerze-auf-290-layer-erhoehen/
Author: Michael Günsch
Date: April 15, 2024 at 05:15PM
We are DataHoardingFreaks. A @roadtopetabyte project
Western Digital (Instagram)

From the battery-operated cartridges of the '80s to today's SSDs, storage in the gaming industry has come a long way. 🎮🕹
How can I run two different versions at the same time?
Content:
I want to run the latest official version of Rclone to connect to my OneDrive, and a modified unofficial version to connect to a drive that official Rclone doesn't support. Is that possible?
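For example, could I keep two binaries side by side with separate config files, something like this (binary and remote names made up)?

./rclone-official --config ~/.config/rclone/official.conf copy onedrive: ~/data/onedrive
./rclone-modified --config ~/.config/rclone/modified.conf copy customdrive: ~/data/customdrive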
We are DataHoardingFreaks. A @roadtopetabyte project
Mega
Content:
I have a Mega link, but I'm not able to download its contents; it says it violated the terms of service. How can I fix this problem? If anyone knows, kindly help!
We are DataHoardingFreaks. A @roadtopetabyte project
Rclone_RD, Plex_Debrid, and Real-Debrid errors when transferring a file
Content:
Preface: I'm a total noob at all this. I have followed the guides and I feel I'm very close to a functioning setup.
The issue I'm having is that Plex_Debrid requires at least one file in the Rclone_RD virtual directory in order to work. I get the same errors below if I use Windows Explorer to drop the file in, so it seems I need to use the command line, set to the Rclone_RD directory, and use the copy command. The problem is that after I finally got the file to transfer from my local Windows folder to the Rclone_RD virtual folder, the Rclone_RD instance initialized in another terminal reported the file as corrupted, with a size mismatch, and couldn't read its metadata. My question is: are the errors due to improper syntax or a lack of needed flags in either the copy command or the Rclone_RD initialization?
Or maybe a Windows permissions issue? Here are the two commands I'm using. Thanks for any help.
./rclone.exe cmount realdebrid: X: --vfs-cache-mode writes --dir-cache-time 10s
./rclone copy "C:\rclone\test\test.avi" "X:\movies"
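One thing I'm wondering: would copying straight to the remote instead of through the mounted drive letter avoid the VFS layer entirely? Something like this (assuming the remote really is named realdebrid):

./rclone.exe copy "C:\rclone\test\test.avi" realdebrid:movies --progress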
We are DataHoardingFreaks. A @roadtopetabyte project
232-layer QLC: Micron boasts while keeping quiet about the competition
Content: Micron is now mass-producing its 232-layer NAND in a QLC version with 4 bits per cell, achieving an even higher storage density than the TLC version. Among other products, it equips the new OEM SSDs of the Micron 2500 series. While patting itself on the back, however, Micron ignores one competitor, and another could beat them all.
Source: https://www.computerbase.de/2024-04/232-layer-qlc-micron-protzt-und-verschweigt-dabei-die-konkurrenz/
Author: Michael Günsch
Date: April 17, 2024 at 08:19AM
We are DataHoardingFreaks. A @roadtopetabyte project
Google Drive
Content:
(Rclone newbie)
Linux Mint 21
If I set up rclone on my Linux laptop to connect to my Google Drive account, what files will I see?
Will Google Docs get converted to Office files or will they appear in their native (Google) form?
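From the docs it sounds like Google Docs get exported to regular formats on download rather than staying native; something like this is what I'd try (remote name made up):

rclone ls gdrive:
rclone copy gdrive:Documents ~/Documents --drive-export-formats docx,xlsx,pptx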
Thanks!
We are DataHoardingFreaks. A @roadtopetabyte project
New phase-change memory: with a nano-filament instead of an electrode, it is supposed to work this time
Content: Micron and Intel jointly developed the 3D XPoint phase-change memory, which failed to establish itself in the market despite good characteristics. South Korean researchers have presented a new approach to more power-efficient phase-change memory that is also said to be simpler, and therefore cheaper, to manufacture.
Source: https://www.computerbase.de/2024-04/neuer-phase-change-memory-mit-nano-filament-statt-elektrode-soll-es-diesmal-klappen/
Author: Michael Günsch
Date: April 17, 2024 at 06:33PM
We are DataHoardingFreaks. A @roadtopetabyte project
Experience with Proton Drive?
Content:
Since Proton Drive doesn't provide an API, the implementation is a workaround. I want to share my files on it, but I'm a bit skeptical that it might stop working sometime later. Can anyone share their experience with Proton here? What are the things I should keep in mind?
We are DataHoardingFreaks. A @roadtopetabyte project
Cloning virtual machine to rclone mount point
Content:
I am using the Koofr storage provider with rclone and want to clone a VirtualBox virtual machine to it.
When I tried it for the first time it failed, and I received this error in the rclone log:
2024/04/18 13:38:49 ERROR : Alpha QA Clone/Alpha QA Clone-disk1.vdi: WriteFileHandle: Truncate: Can't change size without --vfs-cache-mode >= writes
So I tried to mount rclone like this:
/usr/bin/rclone mount koofr: /home/pin/koofr --daemon --attr-timeout 0s --dir-cache-time 0s --log-file=/home/pin/rclone.log --log-level INFO --vfs-cache-mode writes --cache-dir /mnt/disk1/cache --vfs-cache-max-size 60G
But because the VDI image file is bigger than 60 GB, it seems the cache max size is not honoured, and I ran out of disk space, resulting in the error again.
Any tips on what I could try with the VBoxManage tool or the rclone mount command that would enable me to clone directly to that cloud storage mount point?
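One workaround I'm considering (paths made up): clone the disk to local storage first, then move it with rclone directly, since a plain copy/move streams the upload without going through the VFS cache:

VBoxManage clonemedium disk "Alpha QA Clone-disk1.vdi" /mnt/disk1/clone.vdi --format VDI
rclone move /mnt/disk1/clone.vdi koofr:vms/ --progress

Would that sidestep the cache size limit?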
We are DataHoardingFreaks. A @roadtopetabyte project
QNAP Releases Qsirch 5.4.0 Beta, Supporting AI-powered Semantic Search to Revolutionize Image Search on QNAP NAS
Content: Taipei, Taiwan, April 18, 2024 – QNAP® Systems, Inc., a leading computing, networking and storage solution innovator, to ...
Source: https://www.qnap.com/en/news/2024/qnap-releases-qsirch-5-4-0-beta-supporting-ai-powered-semantic-search-to-revolutionize-image-search-on-qnap-nas
Author: marketing@qnap.com (QNAP Systems, Inc.)
Date: April 17, 2024 at 06:00PM
We are DataHoardingFreaks. A @roadtopetabyte project
Western Digital (Instagram)

Automakers are investing big in software—up to 30% of their R&D budget—reshaping the driving experience one line of code at a time. Learn more at the link in bio. 🚘
How to tell when a copy is finished? Copying 100TB from Dropbox to my Synology NAS
Content:
So I've been using Dropbox for half a decade on their unlimited plan for my business, a photo and video studio, which is why I have so much data on there. Due to the recent policy changes, I have until the summer to offload 100 TB of data from Dropbox to my new Synology DS1821+ NAS.

I originally started the transfer using Synology's Cloud Sync package and transferred the 100 TB that way, but realized it wasn't really copying everything over in order; it kept scanning files from all different dates and directories. I wasn't sure if everything copied over or not, and it's hard to manually check hundreds of directories and subdirectories for every little file. So I researched it, and people were basically saying it's garbage and that I should use rclone, and that's what I've been using ever since. Because I already had around 80 TB on my NAS, I use the copy command with some parameters to check checksums and ignore existing files.

So now I have about 100 TB on there and rclone is still running. I'm still not sure how to check whether everything copied over from Dropbox to my NAS. I see that it's still checking files and that occasionally it will copy over a few files, but I have no idea when it's going to be finished, or whether there will be any notification in the terminal saying there's nothing left to check.

The other issue, which is sort of similar, is that my NAS has a limit of 104 TB per volume, and I actually have 114 TB of data on Dropbox that I'd eventually like to move over, but it can't all fit in the same shared folder on volume1. That complicates things, because I don't know how to get rclone to see the difference and not re-copy everything from Dropbox just to include the last 14 TB in a second shared folder on volume2. I want it all to copy in one go and have the two shared folders on both volumes be seen as one. I tried symlinks and the --mount scripts, but I don't think it's working, because even though both shared folders have the same files and folders, I don't see the second volume's storage increasing. Any help would be very much appreciated, thanks! :-)
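I've seen rclone check mentioned; would something like this (remote and path names made up) tell me definitively whether everything on Dropbox exists on the NAS?

rclone check dropbox: /volume1/studio --one-way --missing-on-dst missing-files.txt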
We are DataHoardingFreaks. A @roadtopetabyte project
Qsirch: QNAP brings Google-like search to NAS systems
Content: QNAP has released a beta of Qsirch, a Google-like search for its own NAS systems. Instead of simple keyword matching, Qsirch relies on semantic search, as search engines such as Google Search do, to understand what the user is looking for and deliver more accurate results.
Source: https://www.computerbase.de/2024-04/qsirch-qnap-bringt-google-aehnliche-suche-auf-nas-systeme/
Author: Frank Hüber
Date: April 19, 2024 at 09:31AM
We are DataHoardingFreaks. A @roadtopetabyte project
Rclone with ProtonDrive - Display file attributes
Content:
Hi,

I was wondering if rclone is able to display the creation date of files in a remote?
rclone ls remote:

doesn't show the dates unfortunately. Is there another way?
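For reference, rclone lsl at least shows modification times, though seemingly not creation dates:

rclone lsl remote: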
We are DataHoardingFreaks. A @roadtopetabyte project
Follow-up to an earlier post - rclone & borg
Content:
I had posted a feedback request last week on my planned usage of rclone. One rather a-hole comment spurred me to check whether borg backup was a better solution. While not a fully scientific comparison, I wanted to post this in case anyone else is doing a similar evaluation, or might just be interested. Comments welcome!

I did some testing of rclone vs borg for my use case of backing up my ~50 TB unRAID server to a Windows server. Using a 5.3 TB test dataset of 1,043 files, I ran backups from local HDDs on my unRAID server to local HDDs on my Windows server. All HDD; nothing was reading from or writing to SSD on either host.

borg - running from the unRAID server, writing to Windows over an SMB mount.
Compressed size of backup = 5.20TB
Fresh backup - 1 day, 18 hours, 37 minutes, 41.79 seconds
Incremental/sync - 3 minutes 4.27 seconds
Full check - I killed it after a day and a half because it had already proven too slow for me.

rclone - running on the Windows server, reading from unRAID over SFTP.
Compressed size of backup = 5.22TB
Fresh backup - 1 day, 0 hours, 18 minutes (42% faster)
Incremental/sync - 2 seconds (98% faster)
Full check - 17 hours, 45 minutes

Comparison
Speed-wise, rclone is better hands down in all cases. It easily saturated my Ethernet for the entire run. borg, which was running on the far more powerful host (i7-10700 vs i5-7500), struggled. iperf3 checks showed network transfer in both directions is equivalent, and read/write tests on both sides showed the SMB mount was not the apparent chokepoint either.
Simplicity-wise, both are the same: command-line apps with reasonable interfaces that anyone with basic knowledge can understand.
Feature-wise, both are basically the same from my user perspective for my use case: both copy/archive data, both have a means to incrementally update the copy/archive, and both have a means to quickly or deeply test it. Both allow mounting the archive data as a drive or directory, so interaction is easy.
OS support - rclone works on Windows, Linux, Mac, etc. Borg works on Linux and Mac, with experimental support for Windows.
Project-wise, rclone has far more regular committers and far more public sponsors than borg. Borg 2.0 has been in development for two years and looks like a hopeful "it will fix everything" release.

I'm well aware rclone and borg have differing use cases. I just need data stored on the destination in an encrypted format - rclone's storage format doesn't do anything sexy except encrypt the data and filenames, while borg stores everything in an internal encrypted repository format. For me, performance is important, so getting data from A to B faster while also guaranteeing integrity matters most, and rclone does that. If borg 2.0 ever releases and stabilizes, maybe I'll give it another try. Until then, I'll stick with rclone, which has far better support, is faster, and is a far healthier project. I've also sponsored ncw/the rclone project too :)
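For anyone wanting to reproduce the rclone side, the setup amounts to something like this (remote names made up: an sftp remote pointing at the unRAID box, and a crypt remote wrapping a local path on the Windows server):

rclone sync unraid:/mnt/user/share secret: --progress   (fresh/incremental backup)
rclone check unraid:/mnt/user/share secret:   (quick test)
rclone cryptcheck unraid:/mnt/user/share secret:   (deep test; verifies checksums through the encryption layer)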
We are DataHoardingFreaks. A @roadtopetabyte project