social.dk-libre.fr is a Fediverse instance that uses the ActivityPub protocol. In other words, users at this host can communicate with people who use software like Mastodon, Pleroma, Friendica, etc. all around the world.

This server runs the snac software and there is no automatic sign-up process.

Search results for tag #backup

[?]Sebastian Cohnen »
@tisba@ruby.social

Hey @jwildeboer 👋🏻 I'm curious if you have written about the backup strategy for your home lab somewhere?

@homelab_de

    AodeRelay boosted

    [?]daltux »
    @daltux@snac.daltux.net

    @rony@novaparis.art.br
    @kariboka@mastodon.social I also think that's valid. If the bulk of the data isn't already-compressed material but plain text, compression does indeed turn out very efficient.

    A tip I noticed the other day: if the dump is taking that long, the culprit may well be the compression, which probably runs without parallelism. In that case it pays to set compression to zero in pg_dump and pipe the output into a parallel compressor such as pigz; you don't even need a very high level: level 1 is said to already give an excellent cost-benefit ratio.
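
    A minimal sketch of that pipeline, assuming a database named mydb and pigz installed (both names are placeholders):

    # Disable pg_dump's built-in, single-threaded compression (-Z 0) and let
    # a parallel compressor use every core; level 1 is usually plenty.
    pg_dump -Fc -Z 0 mydb | pigz -1 > mydb.dump.gz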


      AodeRelay boosted

      [?]Vincent 🐡 »
      @vinishor@bsd.network

      I recently discovered upgrade.site (man.openbsd.org/install.site.5) and decided to implement an auto-upgrade process for my two VMs hosted at @OpenBSDAms

      But, since I need to ensure it works, I'm testing borgmatic as a backup solution :)
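
      A hedged sketch of how that might be scheduled once it proves itself (create and check are real borgmatic subcommands; the times and the assumption of a system crontab are mine):

      # /etc/crontab: nightly backup, weekly repository consistency check.
      0 3 * * *  root  borgmatic create --verbosity 1
      0 5 * * 0  root  borgmatic check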

        Fred de CLX boosted

        [?]oldsysops »
        @oldsysops@social.dk-libre.fr

        this evening ,
        installed timeshift on the PC (just for the system; 56 GB all the same, I need to see about optimizing/slimming that down)
        set up a VM to centralize personal data with .
        a bit of a struggle on some adjacent topics (networks and disks), but once I had the borg2 docs at hand it went rather smoothly.
        now I need to think about organizing my data a bit better to make backups easier
        ...
        and document things a bit.

          [?]Chaotic Unicorn »
          @alter_unicorn@masto.bike

          AodeRelay boosted

          [?]Blabla Linux »
          @blablalinux@mastodon.blablalinux.be

          I don't want to leave my server in "almost up to date" mode.

          The complete guide to installing and configuring automatic updates for your PBS, with no surprise crashes: ➡️ wiki.blablalinux.be/fr/update-
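
          The details are in the linked guide; for reference, the stock Debian mechanism such a setup typically builds on looks like this (a sketch, not the wiki's exact configuration):

          # apt install unattended-upgrades, then enable the periodic run in
          # /etc/apt/apt.conf.d/20auto-upgrades:
          APT::Periodic::Update-Package-Lists "1";
          APT::Periodic::Unattended-Upgrade "1";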

            Fred de CLX boosted

            [?]Nicolas Delsaux »
            @Riduidel@framapiaf.org

            An economical and very energy-efficient cold-storage solution. retzo.net/wakeonstorage/

            [?]unfa🇺🇦 »
            @unfa@mastodon.social

            My backup is down.

            How timely, I have just been trying to back up priceless footage shot for a music video for a song where "my backup is down" is spelled out verbatim.

            While I wait for a reply from the Btrfs mailing list, I am tempted to buy a bigger drive. Like a 20 TB one.

            But then I'd really need two to have redundancy, and that would be a tad bit crazy...

            If you missed my "announcement" about the music video, here it is:

            mastodon.social/@unfa/11541201

              AodeRelay boosted

              [?]Larvitz :fedora: :redhat: »
              @Larvitz@burningboard.net

              "Untested backups are just expensive hopes and dreams."

              Did some proper restore tests of my offsite backups: restored them one after another into a local virtual machine (KVM) and verified that they decrypt, restore and boot correctly 🙂 (including our Mastodon instance burningboard.net)

              It's good to have backups, but it's an even better feeling when you know they work, restore correctly, and the procedure has been tested.

              All check marks green, next test in January 2026 🙂

              @tux
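
              The post doesn't name the exact tooling, but the boot-verification step could look roughly like this hypothetical QEMU/KVM sketch:

              # Restore the archive into a scratch disk image (tool-specific,
              # elided here), then boot it headless to confirm it comes up.
              qemu-img create -f qcow2 restore-test.qcow2 50G
              # ... decrypt/restore the backup onto restore-test.qcow2 ...
              qemu-system-x86_64 -enable-kvm -m 2048 \
                -drive file=restore-test.qcow2,format=qcow2 -nographic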

                [?]Larvitz »
                @Larvitz@mastodon.bsd.cafe

                Time for the weekly Sunday backup of my home server:

                1. Plug USB HDD
                2. zpool import zusb
                3. zfs snapshot -r zroot@backup-$(date +%Y%m%d)
                4. zfs send -R zroot@backup-$(date +%Y%m%d) | zfs receive -o mountpoint=none zusb/backup_zroot
                5. zpool export zusb
                6. Unplug HDD
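
                After the first full send, subsequent weeks can go incremental (a sketch; the snapshot dates are placeholders, and last week's snapshot must still exist on both pools):

                # Send only what changed since the previous week's snapshot.
                zfs send -R -i zroot@backup-20250914 zroot@backup-20250921 | \
                  zfs receive zusb/backup_zroot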

                  [?]Eugene :freebsd: :emacslogo: »
                  @evgandr@mastodon.bsd.cafe

                  Ah, that time when my backup could still fit on a single DVD! Time to dust off the split utility

                  List of files in the ~/downloads/backups/ directory. There are some *.tar.zst files with backups made by tar, and some *.tar.zst.md5 files with their MD5 sums. The overall size of the directory is 5.9 GB.

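                  For reference, the split half of that workflow (the file names and the single-layer-DVD size budget are illustrative):

                  # Cut the archive into DVD-sized pieces; reassemble with cat.
                  split -b 4480m backup.tar.zst backup.tar.zst.part-
                  cat backup.tar.zst.part-* > backup.tar.zst
                  md5sum -c backup.tar.zst.md5   # verify the reassembled file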

                    mmu_man boosted

                    [?]Genma »
                    @genma@framapiaf.org

                    The latest BackupPC release dates from 2020; the software is getting old. In 2025, what alternatives offer an equivalent feature set?

                      [?]Kevin Karhan :verified: »
                      @kkarhan@infosec.space

                      And the tool I found necessitates the source instance to be online and doesn't allow using like a file...

                      @mastodonmigration @MastodonEngineering

                        AodeRelay boosted

                        [?]Dendrobatus Azureus »
                        @Dendrobatus_Azureus@mastodon.bsd.cafe

                        And the Fortune said:

                        RAID is not a backup
                        The cloud ☁️ is also not a backup!

                        Tar is a backup; Bacula, and ZFS with a proper hardware configuration, are too.

                        Even just `tar -czvf` would be great, with the tarball dumped on an HDD off-site.

                        WTF, Korea: now millions of vital records are gone :(

                        The damage of this is incalculable

                        koreajoongangdaily.joins.com/n

                        The image is a close-up photograph taken in a dark, indoor setting, likely a damaged facility. The focus is on a severely burned and partially destroyed large battery unit. The battery appears to be composed of multiple rectangular modules, heavily charred and exhibiting signs of extreme heat damage. A person’s leg wearing blue jeans and a black shoe is visible on the right side of the frame. The image is accompanied by text that reads: “Officials move a burnt battery at the National Information Service (NIRS) in Daejeon on Sept. 27.” Additionally, there is text at the top of the image: “NIRS fire destroys government’s cloud storage system, no backups available.” Published: 01 Oct. 2025, 17:59 is at the bottom.

Provided by @altbot, generated privately and locally using Gemma3:27b

🌱 Energy used: 0.141 Wh

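                        A self-contained version of that tar idea (paths are placeholders; note that -f must come last in the bundled flags so it takes the archive name):

                        # Full backup to a compressed tarball on an off-site
                        # HDD, plus a checksum so a restore can be verified.
                        tar -czvf /mnt/offsite/home-$(date +%Y%m%d).tar.gz /home
                        sha256sum /mnt/offsite/home-*.tar.gz > /mnt/offsite/SHA256SUMS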

                          [?]Matt Marcha »
                          @mattmarcha@mamot.fr

                          Hello, pouetiverse!

                          I'm looking for software to back up files, with one particular feature: the ability to "archive" (or rather "ghost"?) files and directories. Let me explain: being able to delete the file/directory from the disk while keeping a trace of it (name, location), easily restorable from the backup when needed.
                          Ideally well integrated with Nautilus.

                          Any leads?

                          Boosting gives Tux some kibble

                            [?]Matt Marcha »
                            @mattmarcha@mamot.fr

                            Hello tootiverse !

                            I'm looking for a file backup software with a special feature: "archive" (or maybe "ghost"?) some files and directories.
                            In a nutshell: being able to delete a file/directory from the disk while keeping track of it (path, name, size...), so that you can restore it easily from the backup whenever you need it.
                            Ideally well integrated into Nautilus.

                            Does anybody know if this is a thing?

                            Boosting gives Tux some kibbles

                              [?]unfa🇺🇦 »
                              @unfa@mastodon.social

                              The feeling when the last successful thing you did on the filesystem before it broke was pushing your work for an important project to a git server...

                              Phew.

                              Also - good thing I have a recent backup, so I can recover everything else.


                              Screenshot of a colorful (RED) dmesg log showing Btrfs filesystem errors.

Exact text below:

BTRFS error (device sda): parent transid verify failed on logical 10038756720640 mirror 1 wanted 110571 found 110645
BTRFS error (device sda): parent transid verify failed on logical 10038756720640 mirror 2 wanted 110571 found 110645
BTRFS: error (device sda: state A) in __btrfs_free_extent:3092: errno=-5 IO failure
BTRFS info (device sda: state EA): forced readonly
BTRFS error (device sda: state EA): failed to run delayed ref for logical 10037305835520 num_bytes 16384 type 176 action 2 ref_mod 1: -5
BTRFS: error (device sda: state EA) in btrfs_run_delayed_refs:2165: errno=-5 IO failure
(...)


                              Screenshot from a private Forgejo instance showing:

"unfa pushed to main at unfa-games/game-02 12 hours ago"


                                [?]kazé »
                                @fabi1cazenave@mastodon.social

                                I’m looking for an app (ideally on Linux) that can scan a local SMB network periodically, pick up all shared files, and create an incremental backup. Is there such a thing?

                                In a perfect world, this could upload an encrypted copy onto an external storage (OneDrive, Google Drive, whatever) and there would be a way to get back N days in time, because shit happens.

                                Boosts appreciated. :-)
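
                                One possible building block, offered as an assumption rather than an answer from the thread: rclone has both an smb backend and an encrypting crypt backend, and --backup-dir keeps dated copies of anything changed or deleted (the remote names here are made up):

                                # Mirror the share into an encrypted remote;
                                # prior file versions land in a dated archive.
                                rclone sync smb-nas:shared crypt-drive:current \
                                  --backup-dir crypt-drive:archive/$(date +%F)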

                                  [?]Marcos Dione »
                                  @mdione@en.osm.town

                                  @matrix just an idea to improve backups:

                                  Make exponential-backoff-like backups: last month, months 2-3 ago, months 4-6 ago, months 7-12 ago, years 2-3 ago, etc. Or with N messages instead of N days.

                                  Sounds like you could recover the freshest data first, then catch up by restoring backwards.
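
                                  For comparison, borg's prune flags express the same kind of thinning schedule (the flags are real; the counts and repo path are illustrative):

                                  # Keep progressively sparser history the
                                  # further back in time you go.
                                  borg prune --keep-daily 7 --keep-weekly 4 \
                                    --keep-monthly 12 --keep-yearly 3 /path/to/repo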

                                    AodeRelay boosted

                                    [?]daltux »
                                    @daltux@snac.daltux.net

                                    Today's news in :debian: forky/sid: restic and rclone have been included as a dependency and a recommendation of the deja-dup package, potentially broadening the possibilities of this friendly backup tool.

                                    Upgrading:
                                    deja-dup (49~alpha-1 => 49~alpha-2)
                                    [...]
                                    Installing dependencies:
                                    rclone (1.60.1+dfsg-4)
                                    restic (0.18.0-1+b4)

                                      [?]Blabla Linux »
                                      @blablalinux@mastodon.blablalinux.be

                                      A major oversight, a killer solution! 😅
                                      Sometimes my memory plays tricks on me... and I completely forgot to tell you about this gem I installed at home quite a while ago!

                                      I haven't yet had time to bring in all my VE and Proxmox clusters , but the screenshots will give you a glimpse of the magic.

                                        [?]Jan Wildeboer 😷:krulorange: »
                                        @jwildeboer@social.wildeboer.net

                                        First 24 hours after upgrading my home server from CentOS 7 to RHEL10 (Red Hat Enterprise Linux) and configuring a modern Samba share as target for Apple Time Machine backups, replacing the old AFP (Apple Filing Protocol) based setup. You can see the initial backups taking quite some time and network traffic, but after that the hourly backups just cause little spikes. Nice!

                                        Gist on how I configured Samba and the Mac: codeberg.org/jwildeboer/gists/
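
                                        The linked gist has the full setup; the Time-Machine-relevant share options in smb.conf look roughly like this (a sketch, not the gist's exact contents):

                                        [timemachine]
                                            path = /srv/timemachine
                                            vfs objects = catia fruit streams_xattr
                                            fruit:time machine = yes
                                            read only = no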

                                        Network traffic on my home server in the past 24 hours. Clearly visible are two blocks of traffic, one slower block where my MacBook did its initial backup via WLAN, one quite fast block where my iMac did its backup via Gigabit LAN. After those big spikes a lot of small spikes when Time Machine does its hourly differential backup.


                                        Hard drive traffic on my home server. The Time Machine backups go to a RAID1, consisting of 2 spinning drives with 2TB each. Again, the speed difference between WLAN backup and cable LAN becomes quite obvious in the initial backup phase. After that just small spikes for the hourly diff backups.


                                          [?]Beurt »
                                          @Beurt@mamot.fr

                                          These astonishing performance figures make me doubt whether to stick with my current method ( + )...

                                          What still holds me back from moving everything to Borg is the KISS (Keep It Simple, Stupid) side of an Rsync backup, which simply produces plain files. You can't get more compatible or easier to access.

                                          With Borg you still have to mount with borg mount, or do a restore. Complicated... If there's ever a technology break it's more uncertain and less KISS.

                                          🤔 I have my doubts... What about you?
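
                                          For what it's worth, the Borg restore path is short (real subcommands; the repository and archive names are placeholders):

                                          # Browse an archive as plain files over FUSE, or extract directly.
                                          borg mount /path/to/repo::archive-2025-09-28 /mnt/restore
                                          borg umount /mnt/restore
                                          borg extract /path/to/repo::archive-2025-09-28 home/user/docs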

                                            [?]Tom Rini »
                                            @trini@tenforward.social

                                            I think I might have asked this before, but given another boost it's worth asking again, since I didn't find an answer. What's a reasonable Windows backup solution that I can point either (a) at a local samba share or (b) at some S3-compatible object store?
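
                                            One candidate, offered as an assumption rather than an answer: restic runs natively on Windows and can target both a UNC path on a share and an S3 bucket (the repository locations below are made up):

                                            restic -r \\nas\backups\restic init
                                            restic -r \\nas\backups\restic backup C:\Users\me
                                            restic -r s3:s3.example.com/bucket-name backup C:\Users\me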

                                              [?]Jonathan Kamens 86 47 »
                                              @jik@federate.social

                                              P.S. I cannot stress enough that if you do decide to start using Paperless-ngx or any other document management system, BACK UP YOUR DATA. With Paperless NGX that means setting up a daily export job and making sure you follow the 3-2-1 backup rule for the export directory (3 copies of the data on 2 different storage media, at least 1 of which is offsite).
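
                                              A sketch of such a daily export job under the stock docker-compose deployment (document_exporter is Paperless-ngx's built-in exporter; the path and schedule are assumptions):

                                              # crontab: dump documents and metadata nightly; apply
                                              # the 3-2-1 rule to the resulting export directory.
                                              0 2 * * * cd /opt/paperless && docker compose exec -T webserver document_exporter ../export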