Mirror of https://github.com/restic/restic.git, synced 2025-03-16 00:00:05 +01:00

Merge branch 'master' into add-smb-backend

Commit fa9aa3557b

180 changed files with 6500 additions and 3645 deletions

.github/workflows/tests.yml | 17 (vendored)

@@ -13,7 +13,7 @@ permissions:
 contents: read

 env:
-latest_go: "1.21.x"
+latest_go: "1.22.x"
 GO111MODULE: on

 jobs:
@@ -23,18 +23,18 @@ jobs:
 # list of jobs to run:
 include:
 - job_name: Windows
-go: 1.21.x
+go: 1.22.x
 os: windows-latest
 test_smb: true

 - job_name: macOS
-go: 1.21.x
+go: 1.22.x
 os: macOS-latest
 test_fuse: false
 test_smb: true

 - job_name: Linux
-go: 1.21.x
+go: 1.22.x
 os: ubuntu-latest
 test_cloud_backends: true
 test_fuse: true
@@ -42,12 +42,17 @@ jobs:
 check_changelog: true

 - job_name: Linux (race)
-go: 1.21.x
+go: 1.22.x
 os: ubuntu-latest
 test_fuse: true
 test_smb: true
 test_opts: "-race"

+- job_name: Linux
+go: 1.21.x
+os: ubuntu-latest
+test_fuse: true
+
 - job_name: Linux
 go: 1.20.x
 os: ubuntu-latest
@@ -373,7 +378,7 @@ jobs:
 uses: golangci/golangci-lint-action@v3
 with:
 # Required: the version of golangci-lint is required and must be specified without patch version: we always use the latest patch version.
-version: v1.55.2
+version: v1.56.1
 args: --verbose --timeout 5m

 # only run golangci-lint for pull requests, otherwise ALL hints get

@@ -35,6 +35,9 @@ linters:
 # parse and typecheck code
 - typecheck

+# ensure that http response bodies are closed
+- bodyclose
+
 issues:
 # don't use the default exclude rules, this hides (among others) ignored
 # errors from Close() calls
@@ -51,3 +54,8 @@ issues:
 # staticcheck: there's no easy way to replace these packages
 - "SA1019: \"golang.org/x/crypto/poly1305\" is deprecated"
 - "SA1019: \"golang.org/x/crypto/openpgp\" is deprecated"
+
+exclude-rules:
+# revive: ignore unused parameters in tests
+- path: (_test\.go|testing\.go|backend/.*/tests\.go)
+text: "unused-parameter:"

CHANGELOG.md | 3477 (diff suppressed because it is too large)

@@ -6,7 +6,8 @@ Ways to Help Out
 Thank you for your contribution! Please **open an issue first** (or add a
 comment to an existing issue) if you plan to work on any code or add a new
 feature. This way, duplicate work is prevented and we can discuss your ideas
-and design first.
+and design first. Small bugfixes are an exception to this rule, just open a
+pull request in this case.

 There are several ways you can help us out. First of all code contributions and
 bug fixes are most welcome. However even "minor" details as fixing spelling

VERSION | 2

@@ -1 +1 @@
-0.16.2
+0.16.4

@@ -4,7 +4,7 @@ Since Go 1.21, most filesystem reparse points on Windows are considered to be
 irregular files. This caused restic to show an `error: invalid node type ""`
 error message for those files.

-We have improved the error message to include the file path for those files:
+This error message has now been improved and includes the relevant file path:
 `error: nodeFromFileInfo path/to/file: unsupported file type "irregular"`.
 As irregular files are not required to behave like regular files, it is not
 possible to provide a generic way to back up those files.

changelog/0.16.3_2024-01-14/issue-4574 | 11 (new file)

@@ -0,0 +1,11 @@
+Bugfix: Support backup of deduplicated files on Windows again
+
+With the official release builds of restic 0.16.1 and 0.16.2, it was not
+possible to back up files that were deduplicated by the corresponding
+Windows Server feature. This also applied to restic versions built using
+Go 1.21.0-1.21.4.
+
+The Go version used to build restic has now been updated to fix this.
+
+https://github.com/restic/restic/issues/4574
+https://github.com/restic/restic/pull/4621

@@ -1,11 +1,11 @@
-Bugfix: Correct restore progress information if an error occurs
+Bugfix: Correct `restore` progress information if an error occurs

-If an error occurred while restoring a snapshot, this could cause the restore
+If an error occurred while restoring a snapshot, this could cause the `restore`
 progress bar to show incorrect information. In addition, if a data file could
 not be loaded completely, then errors would also be reported for some already
 restored files.

-We have improved the error reporting of the restore command to be more accurate.
+Error reporting of the `restore` command has now been made more accurate.

 https://github.com/restic/restic/pull/4624
 https://forum.restic.net/t/errors-restoring-with-restic-on-windows-server-s3/6943

changelog/0.16.4_2024-02-04/issue-4529 | 18 (new file)

@@ -0,0 +1,18 @@
+Enhancement: Add extra verification of data integrity before upload
+
+Hardware issues, or a bug in restic or its dependencies, could previously cause
+corruption in the files restic created and stored in the repository. Detecting
+such corruption previously required explicitly running the `check --read-data`
+or `check --read-data-subset` commands.
+
+To further ensure data integrity, even in the case of hardware issues or
+software bugs, restic now performs additional verification of the files about
+to be uploaded to the repository.
+
+These extra checks will increase CPU usage during backups. They can therefore,
+if absolutely necessary, be disabled using the `--no-extra-verify` global
+option. Please note that this should be combined with more active checking
+using the previously mentioned check commands.
+
+https://github.com/restic/restic/issues/4529
+https://github.com/restic/restic/pull/4681
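
For illustration only (not part of the diff above): with these changes, a backup run with the extra checks disabled would look something like `restic backup --no-extra-verify /home/user`, where the backed-up path is just an example.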

changelog/0.16.4_2024-02-04/issue-4677 | 19 (new file)

@@ -0,0 +1,19 @@
+Bugfix: Downgrade zstd library to fix rare data corruption at max. compression
+
+In restic 0.16.3, backups where the compression level was set to `max` (using
+`--compression max`) could in rare and very specific circumstances result in
+data corruption due to a bug in the library used for compressing data. Restic
+0.16.1 and 0.16.2 were not affected.
+
+Restic now uses the previous version of the library used to compress data, the
+same version used by restic 0.16.2. Please note that the `auto` compression
+level (which restic uses by default) was never affected, and even if you used
+`max` compression, chances of being affected by this issue are small.
+
+To check a repository for any corruption, run `restic check --read-data`. This
+will download and verify the whole repository and can be used at any time to
+completely verify the integrity of a repository. If the `check` command detects
+anomalies, follow the suggested steps.
+
+https://github.com/restic/restic/issues/4677
+https://github.com/restic/restic/pull/4679

@@ -1,14 +1,16 @@
-Enhancement: Support reading backup from a program's standard output
+Enhancement: Support reading backup from a command's standard output

-When reading data from stdin, the `backup` command could not verify whether the
-corresponding command completed successfully.
+The `backup` command now supports the `--stdin-from-command` option. When using
+this option, the arguments to `backup` are interpreted as a command instead of
+paths to back up. `backup` then executes the given command and stores the
+standard output from it in the backup, similar to what the `--stdin` option
+does. This also enables restic to verify that the command completes with exit
+code zero. A non-zero exit code causes the backup to fail.

-The `backup` command now supports starting an arbitrary command and sourcing
-the backup content from its standard output. This enables restic to verify that
-the command completes with exit code zero. A non-zero exit code causes the
-backup to fail.
+Note that the `--stdin` option does not have to be specified at the same time,
+and that the `--stdin-filename` option also applies to `--stdin-from-command`.

-Example: `restic backup --stdin-from-command mysqldump [...]`
+Example: `restic backup --stdin-from-command --stdin-filename dump.sql mysqldump [...]`

 https://github.com/restic/restic/issues/4251
 https://github.com/restic/restic/pull/4410

changelog/unreleased/issue-4549 | 11 (new file)

@@ -0,0 +1,11 @@
+Enhancement: Add `--ncdu` option to `ls` command
+
+NCDU (NCurses Disk Usage) is a tool to analyse disk usage of directories.
+It has an option to save a directory tree and analyse it later.
+The `ls` command now supports the `--ncdu` option which outputs information
+about a snapshot in the NCDU format.
+
+You can use it as follows: `restic ls latest --ncdu | ncdu -f -`
+
+https://github.com/restic/restic/issues/4549
+https://github.com/restic/restic/pull/4550

@@ -1,11 +0,0 @@
-Bugfix: support backup of deduplicated files on Windows again
-
-With the official release builds of restic 0.16.1 and 0.16.2, it was not
-possible to back up files that were deduplicated by the corresponding Windows
-Server feature. This also applies to restic versions built using Go
-1.21.0 - 1.21.4.
-
-We have updated the used Go version to fix this.
-
-https://github.com/restic/restic/issues/4574
-https://github.com/restic/restic/pull/4621

changelog/unreleased/issue-4583 | 12 (new file)

@@ -0,0 +1,12 @@
+Enhancement: Ignore s3.storage-class for metadata if archive tier is specified
+
+There is no official cold storage support in restic, use this option at your
+own risk.
+
+Restic always stored all files on s3 using the specified `s3.storage-class`.
+Now, restic will store metadata using a non-archive storage tier to avoid
+problems when accessing a repository. To restore any data, it is still
+necessary to manually warm up the required data beforehand.
+
+https://github.com/restic/restic/issues/4583
+https://github.com/restic/restic/pull/4584

changelog/unreleased/issue-4656 | 7 (new file)

@@ -0,0 +1,7 @@
+Bugfix: Properly report the ID of newly added keys
+
+`restic key add` now reports the ID of a newly added key. This simplifies
+selecting a specific key using the `--key-hint key` option.
+
+https://github.com/restic/restic/issues/4656
+https://github.com/restic/restic/pull/4657
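
For illustration only: once `restic key add` has printed the new key ID, a later invocation could select that key with something like `restic snapshots --key-hint 1a2b3c4d`, where the ID shown here is made up.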

changelog/unreleased/issue-4676 | 8 (new file)

@@ -0,0 +1,8 @@
+Enhancement: Move key add, list, remove and passwd to separate sub-commands
+
+Restic now provides usage documentation for the `key` command. Each sub-command
+(`add`, `list`, `remove` and `passwd`) now has its own documentation,
+which can be invoked using `restic key <add|list|remove|passwd> --help`.
+
+https://github.com/restic/restic/issues/4676
+https://github.com/restic/restic/pull/4685

changelog/unreleased/issue-4678 | 8 (new file)

@@ -0,0 +1,8 @@
+Enhancement: Add --target flag to the dump command
+
+Restic `dump` always printed to the standard output. It now allows selecting a
+`--target` file to write the output to.
+
+https://github.com/restic/restic/issues/4678
+https://github.com/restic/restic/pull/4682
+https://github.com/restic/restic/pull/4692
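
For illustration only: dumping a single file from the latest snapshot into a local file could then look like `restic dump --target notes.txt latest /home/user/notes.txt`, with the snapshot, path, and target chosen purely as examples.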

changelog/unreleased/pull-4611 | 7 (new file)

@@ -0,0 +1,7 @@
+Enhancement: Back up Windows created time and file attributes like the hidden flag
+
+Restic did not back up Windows-specific metadata such as the created time and
+file attributes like the hidden flag. Restic now backs up the file created time
+and the hidden, read-only and encrypted attributes when backing up files and folders on Windows.
+
+https://github.com/restic/restic/pull/4611

changelog/unreleased/pull-4615 | 6 (new file)

@@ -0,0 +1,6 @@
+Bugfix: `find` ignored directories in some cases
+
+In some cases, the `find` command ignored empty or moved directories. This has
+been fixed.
+
+https://github.com/restic/restic/pull/4615

changelog/unreleased/pull-4644 | 10 (new file)

@@ -0,0 +1,10 @@
+Enhancement: Improve `repair packs` command
+
+The `repair packs` command has been improved to also be able to process
+truncated pack files. The `check --read-data` command will provide instructions
+on using the command if necessary to repair a repository. See the guide at
+https://restic.readthedocs.io/en/stable/077_troubleshooting.html for further
+instructions.
+
+https://github.com/restic/restic/pull/4644
+https://github.com/restic/restic/pull/4655

changelog/unreleased/pull-4664 | 8 (new file)

@@ -0,0 +1,8 @@
+Enhancement: `ls` uses `message_type` field to distinguish JSON messages
+
+The `ls` command was the only command that used the `struct_type` field to determine
+the message type in the JSON output format. Now, the JSON output of the
+`ls` command also includes the `message_type` field. The `struct_type` field is
+still included, but it is deprecated.
+
+https://github.com/restic/restic/pull/4664
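
As a side note (not part of the diff), a consumer of `ls --json` output could prefer the new field and fall back to the deprecated one. The following is a minimal, illustrative Go sketch; it assumes newline-delimited JSON objects on stdin and uses only the two fields named in the entry above plus a hypothetical `path` field:

```go
// Minimal sketch (not restic code): read newline-delimited JSON from
// `restic ls --json ...` on stdin and prefer the new message_type field,
// falling back to the deprecated struct_type field.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type lsMessage struct {
	MessageType string `json:"message_type"` // new field
	StructType  string `json:"struct_type"`  // deprecated, still present
	Path        string `json:"path,omitempty"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var m lsMessage
		if err := json.Unmarshal(sc.Bytes(), &m); err != nil {
			continue // skip lines that are not JSON objects
		}
		kind := m.MessageType
		if kind == "" {
			kind = m.StructType // fall back to the old field
		}
		fmt.Printf("%s: %s\n", kind, m.Path)
	}
}
```

It could be run as `restic ls latest --json | go run main.go`, though the exact output shape should be checked against the restic version in use.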

changelog/unreleased/pull-4703 | 9 (new file)

@@ -0,0 +1,9 @@
+Bugfix: Shut down cleanly when SIGTERM is received
+
+Previously, when restic received SIGTERM it terminated immediately and skipped
+cleanup, which could leave behind stale locks and similar issues.
+
+This primarily affected containerized restic invocations, which use SIGTERM,
+but it could also be triggered with a simple `killall restic`.
+
+https://github.com/restic/restic/pull/4703

@@ -19,7 +19,7 @@ var cleanupHandlers struct {
 func init() {
     cleanupHandlers.ch = make(chan os.Signal, 1)
     go CleanupHandler(cleanupHandlers.ch)
-    signal.Notify(cleanupHandlers.ch, syscall.SIGINT)
+    signal.Notify(cleanupHandlers.ch, syscall.SIGINT, syscall.SIGTERM)
 }

 // AddCleanupHandler adds the function f to the list of cleanup handlers so
@@ -56,7 +56,7 @@ func RunCleanupHandlers(code int) int {
     return code
 }

-// CleanupHandler handles the SIGINT signals.
+// CleanupHandler handles the SIGINT and SIGTERM signals.
 func CleanupHandler(c <-chan os.Signal) {
     for s := range c {
         debug.Log("signal %v received, cleaning up", s)
@@ -70,7 +70,7 @@ func CleanupHandler(c <-chan os.Signal) {

         code := 0

-        if s == syscall.SIGINT {
+        if s == syscall.SIGINT || s == syscall.SIGTERM {
             code = 130
         } else {
             code = 1
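
For readers unfamiliar with the pattern in the hunks above, here is a standalone, minimal Go sketch of the same idea (register both SIGINT and SIGTERM, run cleanup, exit with code 130); it is an illustration, not restic's actual cleanup code:

```go
// Minimal standalone sketch of the signal-handling pattern from the diff
// above: register both SIGINT and SIGTERM, perform cleanup, then exit with
// the conventional code 130.
package main

import (
	"fmt"
	"os"
	"os/signal"
	"syscall"
)

func main() {
	ch := make(chan os.Signal, 1)
	signal.Notify(ch, syscall.SIGINT, syscall.SIGTERM)

	fmt.Println("working... press Ctrl-C or send SIGTERM")
	s := <-ch
	fmt.Printf("signal %v received, cleaning up\n", s)

	// cleanup would run here (for example, removing stale lock files)
	os.Exit(130)
}
```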
@ -12,7 +12,6 @@ import (
|
|||
"runtime"
|
||||
"strconv"
|
||||
"strings"
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
"github.com/spf13/cobra"
|
||||
|
@ -25,7 +24,6 @@ import (
|
|||
"github.com/restic/restic/internal/repository"
|
||||
"github.com/restic/restic/internal/restic"
|
||||
"github.com/restic/restic/internal/textfile"
|
||||
"github.com/restic/restic/internal/ui"
|
||||
"github.com/restic/restic/internal/ui/backup"
|
||||
"github.com/restic/restic/internal/ui/termstatus"
|
||||
)
|
||||
|
@ -44,7 +42,7 @@ Exit status is 0 if the command was successful.
|
|||
Exit status is 1 if there was a fatal error (no snapshot created).
|
||||
Exit status is 3 if some source data could not be read (incomplete snapshot created).
|
||||
`,
|
||||
PreRun: func(cmd *cobra.Command, args []string) {
|
||||
PreRun: func(_ *cobra.Command, _ []string) {
|
||||
if backupOptions.Host == "" {
|
||||
hostname, err := os.Hostname()
|
||||
if err != nil {
|
||||
|
@ -56,31 +54,9 @@ Exit status is 3 if some source data could not be read (incomplete snapshot crea
|
|||
},
|
||||
DisableAutoGenTag: true,
|
||||
RunE: func(cmd *cobra.Command, args []string) error {
|
||||
ctx := cmd.Context()
|
||||
var wg sync.WaitGroup
|
||||
cancelCtx, cancel := context.WithCancel(ctx)
|
||||
defer func() {
|
||||
// shutdown termstatus
|
||||
cancel()
|
||||
wg.Wait()
|
||||
}()
|
||||
|
||||
term := termstatus.New(globalOptions.stdout, globalOptions.stderr, globalOptions.Quiet)
|
||||
wg.Add(1)
|
||||
go func() {
|
||||
defer wg.Done()
|
||||
term.Run(cancelCtx)
|
||||
}()
|
||||
|
||||
// use the terminal for stdout/stderr
|
||||
prevStdout, prevStderr := globalOptions.stdout, globalOptions.stderr
|
||||
defer func() {
|
||||
globalOptions.stdout, globalOptions.stderr = prevStdout, prevStderr
|
||||
}()
|
||||
stdioWrapper := ui.NewStdioWrapper(term)
|
||||
globalOptions.stdout, globalOptions.stderr = stdioWrapper.Stdout(), stdioWrapper.Stderr()
|
||||
|
||||
return runBackup(ctx, backupOptions, globalOptions, term, args)
|
||||
term, cancel := setupTermstatus()
|
||||
defer cancel()
|
||||
return runBackup(cmd.Context(), backupOptions, globalOptions, term, args)
|
||||
},
|
||||
}
|
||||
|
||||
|
@ -135,7 +111,7 @@ func init() {
|
|||
f.StringVar(&backupOptions.ExcludeLargerThan, "exclude-larger-than", "", "max `size` of the files to be backed up (allowed suffixes: k/K, m/M, g/G, t/T)")
|
||||
f.BoolVar(&backupOptions.Stdin, "stdin", false, "read backup from stdin")
|
||||
f.StringVar(&backupOptions.StdinFilename, "stdin-filename", "stdin", "`filename` to use when reading from stdin")
|
||||
f.BoolVar(&backupOptions.StdinCommand, "stdin-from-command", false, "execute command and store its stdout")
|
||||
f.BoolVar(&backupOptions.StdinCommand, "stdin-from-command", false, "interpret arguments as command to execute and store its stdout")
|
||||
f.Var(&backupOptions.Tags, "tag", "add `tags` for the new snapshot in the format `tag[,tag,...]` (can be specified multiple times)")
|
||||
f.UintVar(&backupOptions.ReadConcurrency, "read-concurrency", 0, "read `n` files concurrently (default: $RESTIC_READ_CONCURRENCY or 2)")
|
||||
f.StringVarP(&backupOptions.Host, "host", "H", "", "set the `hostname` for the snapshot manually. To prevent an expensive rescan use the \"parent\" flag")
|
||||
|
@ -150,7 +126,7 @@ func init() {
|
|||
f.StringArrayVar(&backupOptions.FilesFromRaw, "files-from-raw", nil, "read the files to backup from `file` (can be combined with file args; can be specified multiple times)")
|
||||
f.StringVar(&backupOptions.TimeStamp, "time", "", "`time` of the backup (ex. '2012-11-01 22:08:41') (default: now)")
|
||||
f.BoolVar(&backupOptions.WithAtime, "with-atime", false, "store the atime for all files and directories")
|
||||
f.BoolVar(&backupOptions.IgnoreInode, "ignore-inode", false, "ignore inode number changes when checking for modified files")
|
||||
f.BoolVar(&backupOptions.IgnoreInode, "ignore-inode", false, "ignore inode number and ctime changes when checking for modified files")
|
||||
f.BoolVar(&backupOptions.IgnoreCtime, "ignore-ctime", false, "ignore ctime changes when checking for modified files")
|
||||
f.BoolVarP(&backupOptions.DryRun, "dry-run", "n", false, "do not upload or write any data, just show what would be done")
|
||||
f.BoolVar(&backupOptions.NoScan, "no-scan", false, "do not run scanner to estimate size of backup")
|
||||
|
@ -435,7 +411,7 @@ func collectTargets(opts BackupOptions, args []string) (targets []string, err er
|
|||
|
||||
// parent returns the ID of the parent snapshot. If there is none, nil is
|
||||
// returned.
|
||||
func findParentSnapshot(ctx context.Context, repo restic.Repository, opts BackupOptions, targets []string, timeStampLimit time.Time) (*restic.Snapshot, error) {
|
||||
func findParentSnapshot(ctx context.Context, repo restic.ListerLoaderUnpacked, opts BackupOptions, targets []string, timeStampLimit time.Time) (*restic.Snapshot, error) {
|
||||
if opts.Force {
|
||||
return nil, nil
|
||||
}
|
||||
|
@ -633,7 +609,7 @@ func runBackup(ctx context.Context, opts BackupOptions, gopts GlobalOptions, ter
|
|||
wg.Go(func() error { return sc.Scan(cancelCtx, targets) })
|
||||
}
|
||||
|
||||
arch := archiver.New(repo, targetFS, archiver.Options{ReadConcurrency: backupOptions.ReadConcurrency})
|
||||
arch := archiver.New(repo, targetFS, archiver.Options{ReadConcurrency: opts.ReadConcurrency})
|
||||
arch.SelectByName = selectByNameFilter
|
||||
arch.Select = selectFilter
|
||||
arch.WithAtime = opts.WithAtime
|
||||
|
|
|
@ -28,7 +28,7 @@ EXIT STATUS
|
|||
Exit status is 0 if the command was successful, and non-zero if there was any error.
|
||||
`,
|
||||
DisableAutoGenTag: true,
|
||||
RunE: func(cmd *cobra.Command, args []string) error {
|
||||
RunE: func(_ *cobra.Command, args []string) error {
|
||||
return runCache(cacheOptions, globalOptions, args)
|
||||
},
|
||||
}
|
||||
|
|
|
@ -38,7 +38,7 @@ Exit status is 0 if the command was successful, and non-zero if there was any er
|
|||
RunE: func(cmd *cobra.Command, args []string) error {
|
||||
return runCheck(cmd.Context(), checkOptions, globalOptions, args)
|
||||
},
|
||||
PreRunE: func(cmd *cobra.Command, args []string) error {
|
||||
PreRunE: func(_ *cobra.Command, _ []string) error {
|
||||
return checkFlags(checkOptions)
|
||||
},
|
||||
}
|
||||
|
@ -336,20 +336,18 @@ func runCheck(ctx context.Context, opts CheckOptions, gopts GlobalOptions, args
|
|||
errorsFound = true
|
||||
Warnf("%v\n", err)
|
||||
if err, ok := err.(*checker.ErrPackData); ok {
|
||||
if strings.Contains(err.Error(), "wrong data returned, hash is") {
|
||||
salvagePacks = append(salvagePacks, err.PackID)
|
||||
}
|
||||
salvagePacks = append(salvagePacks, err.PackID)
|
||||
}
|
||||
}
|
||||
p.Done()
|
||||
|
||||
if len(salvagePacks) > 0 {
|
||||
Warnf("\nThe repository contains pack files with damaged blobs. These blobs must be removed to repair the repository. This can be done using the following commands:\n\n")
|
||||
var strIds []string
|
||||
Warnf("\nThe repository contains pack files with damaged blobs. These blobs must be removed to repair the repository. This can be done using the following commands. Please read the troubleshooting guide at https://restic.readthedocs.io/en/stable/077_troubleshooting.html first.\n\n")
|
||||
var strIDs []string
|
||||
for _, id := range salvagePacks {
|
||||
strIds = append(strIds, id.String())
|
||||
strIDs = append(strIDs, id.String())
|
||||
}
|
||||
Warnf("RESTIC_FEATURES=repair-packs-v1 restic repair packs %v\nrestic repair snapshots --forget\n\n", strings.Join(strIds, " "))
|
||||
Warnf("restic repair packs %v\nrestic repair snapshots --forget\n\n", strings.Join(strIDs, " "))
|
||||
Warnf("Corrupted blobs are either caused by hardware problems or bugs in restic. Please open an issue at https://github.com/restic/restic/issues/new/choose for further troubleshooting!\n")
|
||||
}
|
||||
}
|
||||
|
|
|
@ -52,19 +52,23 @@ Exit status is 0 if the command was successful, and non-zero if there was any er
|
|||
},
|
||||
}
|
||||
|
||||
var tryRepair bool
|
||||
var repairByte bool
|
||||
var extractPack bool
|
||||
var reuploadBlobs bool
|
||||
type DebugExamineOptions struct {
|
||||
TryRepair bool
|
||||
RepairByte bool
|
||||
ExtractPack bool
|
||||
ReuploadBlobs bool
|
||||
}
|
||||
|
||||
var debugExamineOpts DebugExamineOptions
|
||||
|
||||
func init() {
|
||||
cmdRoot.AddCommand(cmdDebug)
|
||||
cmdDebug.AddCommand(cmdDebugDump)
|
||||
cmdDebug.AddCommand(cmdDebugExamine)
|
||||
cmdDebugExamine.Flags().BoolVar(&extractPack, "extract-pack", false, "write blobs to the current directory")
|
||||
cmdDebugExamine.Flags().BoolVar(&reuploadBlobs, "reupload-blobs", false, "reupload blobs to the repository")
|
||||
cmdDebugExamine.Flags().BoolVar(&tryRepair, "try-repair", false, "try to repair broken blobs with single bit flips")
|
||||
cmdDebugExamine.Flags().BoolVar(&repairByte, "repair-byte", false, "try to repair broken blobs by trying bytes")
|
||||
cmdDebugExamine.Flags().BoolVar(&debugExamineOpts.ExtractPack, "extract-pack", false, "write blobs to the current directory")
|
||||
cmdDebugExamine.Flags().BoolVar(&debugExamineOpts.ReuploadBlobs, "reupload-blobs", false, "reupload blobs to the repository")
|
||||
cmdDebugExamine.Flags().BoolVar(&debugExamineOpts.TryRepair, "try-repair", false, "try to repair broken blobs with single bit flips")
|
||||
cmdDebugExamine.Flags().BoolVar(&debugExamineOpts.RepairByte, "repair-byte", false, "try to repair broken blobs by trying bytes")
|
||||
}
|
||||
|
||||
func prettyPrintJSON(wr io.Writer, item interface{}) error {
|
||||
|
@ -133,7 +137,7 @@ func printPacks(ctx context.Context, repo *repository.Repository, wr io.Writer)
|
|||
})
|
||||
}
|
||||
|
||||
func dumpIndexes(ctx context.Context, repo restic.Repository, wr io.Writer) error {
|
||||
func dumpIndexes(ctx context.Context, repo restic.ListerLoaderUnpacked, wr io.Writer) error {
|
||||
return index.ForAllIndexes(ctx, repo, repo, func(id restic.ID, idx *index.Index, oldFormat bool, err error) error {
|
||||
Printf("index_id: %v\n", id)
|
||||
if err != nil {
|
||||
|
@ -196,7 +200,7 @@ var cmdDebugExamine = &cobra.Command{
|
|||
Short: "Examine a pack file",
|
||||
DisableAutoGenTag: true,
|
||||
RunE: func(cmd *cobra.Command, args []string) error {
|
||||
return runDebugExamine(cmd.Context(), globalOptions, args)
|
||||
return runDebugExamine(cmd.Context(), globalOptions, debugExamineOpts, args)
|
||||
},
|
||||
}
|
||||
|
||||
|
@ -315,7 +319,7 @@ func decryptUnsigned(ctx context.Context, k *crypto.Key, buf []byte) []byte {
|
|||
return out
|
||||
}
|
||||
|
||||
func loadBlobs(ctx context.Context, repo restic.Repository, packID restic.ID, list []restic.Blob) error {
|
||||
func loadBlobs(ctx context.Context, opts DebugExamineOptions, repo restic.Repository, packID restic.ID, list []restic.Blob) error {
|
||||
dec, err := zstd.NewReader(nil)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
|
@ -328,7 +332,7 @@ func loadBlobs(ctx context.Context, repo restic.Repository, packID restic.ID, li
|
|||
|
||||
wg, ctx := errgroup.WithContext(ctx)
|
||||
|
||||
if reuploadBlobs {
|
||||
if opts.ReuploadBlobs {
|
||||
repo.StartPackUploader(ctx, wg)
|
||||
}
|
||||
|
||||
|
@ -356,8 +360,8 @@ func loadBlobs(ctx context.Context, repo restic.Repository, packID restic.ID, li
|
|||
filePrefix := ""
|
||||
if err != nil {
|
||||
Warnf("error decrypting blob: %v\n", err)
|
||||
if tryRepair || repairByte {
|
||||
plaintext = tryRepairWithBitflip(ctx, key, buf, repairByte)
|
||||
if opts.TryRepair || opts.RepairByte {
|
||||
plaintext = tryRepairWithBitflip(ctx, key, buf, opts.RepairByte)
|
||||
}
|
||||
if plaintext != nil {
|
||||
outputPrefix = "repaired "
|
||||
|
@ -391,13 +395,13 @@ func loadBlobs(ctx context.Context, repo restic.Repository, packID restic.ID, li
|
|||
Printf(" successfully %vdecrypted blob (length %v), hash is %v, ID matches\n", outputPrefix, len(plaintext), id)
|
||||
prefix = "correct-"
|
||||
}
|
||||
if extractPack {
|
||||
if opts.ExtractPack {
|
||||
err = storePlainBlob(id, filePrefix+prefix, plaintext)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
if reuploadBlobs {
|
||||
if opts.ReuploadBlobs {
|
||||
_, _, _, err := repo.SaveBlob(ctx, blob.Type, plaintext, id, true)
|
||||
if err != nil {
|
||||
return err
|
||||
|
@ -406,7 +410,7 @@ func loadBlobs(ctx context.Context, repo restic.Repository, packID restic.ID, li
|
|||
}
|
||||
}
|
||||
|
||||
if reuploadBlobs {
|
||||
if opts.ReuploadBlobs {
|
||||
return repo.Flush(ctx)
|
||||
}
|
||||
return nil
|
||||
|
@ -437,7 +441,7 @@ func storePlainBlob(id restic.ID, prefix string, plain []byte) error {
|
|||
return nil
|
||||
}
|
||||
|
||||
func runDebugExamine(ctx context.Context, gopts GlobalOptions, args []string) error {
|
||||
func runDebugExamine(ctx context.Context, gopts GlobalOptions, opts DebugExamineOptions, args []string) error {
|
||||
repo, err := OpenRepository(ctx, gopts)
|
||||
if err != nil {
|
||||
return err
|
||||
|
@ -476,7 +480,7 @@ func runDebugExamine(ctx context.Context, gopts GlobalOptions, args []string) er
|
|||
}
|
||||
|
||||
for _, id := range ids {
|
||||
err := examinePack(ctx, repo, id)
|
||||
err := examinePack(ctx, opts, repo, id)
|
||||
if err != nil {
|
||||
Warnf("error: %v\n", err)
|
||||
}
|
||||
|
@ -487,7 +491,7 @@ func runDebugExamine(ctx context.Context, gopts GlobalOptions, args []string) er
|
|||
return nil
|
||||
}
|
||||
|
||||
func examinePack(ctx context.Context, repo restic.Repository, id restic.ID) error {
|
||||
func examinePack(ctx context.Context, opts DebugExamineOptions, repo restic.Repository, id restic.ID) error {
|
||||
Printf("examine %v\n", id)
|
||||
|
||||
h := backend.Handle{
|
||||
|
@ -524,7 +528,7 @@ func examinePack(ctx context.Context, repo restic.Repository, id restic.ID) erro
|
|||
|
||||
checkPackSize(blobs, fi.Size)
|
||||
|
||||
err = loadBlobs(ctx, repo, id, blobs)
|
||||
err = loadBlobs(ctx, opts, repo, id, blobs)
|
||||
if err != nil {
|
||||
Warnf("error: %v\n", err)
|
||||
} else {
|
||||
|
@ -542,7 +546,7 @@ func examinePack(ctx context.Context, repo restic.Repository, id restic.ID) erro
|
|||
checkPackSize(blobs, fi.Size)
|
||||
|
||||
if !blobsLoaded {
|
||||
return loadBlobs(ctx, repo, id, blobs)
|
||||
return loadBlobs(ctx, opts, repo, id, blobs)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
|
|
@ -61,7 +61,7 @@ func init() {
|
|||
f.BoolVar(&diffOptions.ShowMetadata, "metadata", false, "print changes in metadata")
|
||||
}
|
||||
|
||||
func loadSnapshot(ctx context.Context, be restic.Lister, repo restic.Repository, desc string) (*restic.Snapshot, string, error) {
|
||||
func loadSnapshot(ctx context.Context, be restic.Lister, repo restic.LoaderUnpacked, desc string) (*restic.Snapshot, string, error) {
|
||||
sn, subfolder, err := restic.FindSnapshot(ctx, be, repo, desc)
|
||||
if err != nil {
|
||||
return nil, "", errors.Fatal(err.Error())
|
||||
|
@ -71,7 +71,7 @@ func loadSnapshot(ctx context.Context, be restic.Lister, repo restic.Repository,
|
|||
|
||||
// Comparer collects all things needed to compare two snapshots.
|
||||
type Comparer struct {
|
||||
repo restic.Repository
|
||||
repo restic.BlobLoader
|
||||
opts DiffOptions
|
||||
printChange func(change *Change)
|
||||
}
|
||||
|
@ -147,7 +147,7 @@ type DiffStatsContainer struct {
|
|||
}
|
||||
|
||||
// updateBlobs updates the blob counters in the stats struct.
|
||||
func updateBlobs(repo restic.Repository, blobs restic.BlobSet, stats *DiffStat) {
|
||||
func updateBlobs(repo restic.Loader, blobs restic.BlobSet, stats *DiffStat) {
|
||||
for h := range blobs {
|
||||
switch h.Type {
|
||||
case restic.DataBlob:
|
||||
|
@ -401,7 +401,7 @@ func runDiff(ctx context.Context, opts DiffOptions, gopts GlobalOptions, args []
|
|||
|
||||
c := &Comparer{
|
||||
repo: repo,
|
||||
opts: diffOptions,
|
||||
opts: opts,
|
||||
printChange: func(change *Change) {
|
||||
Printf("%-5s%v\n", change.Modifier, change.Path)
|
||||
},
|
||||
|
@ -418,7 +418,7 @@ func runDiff(ctx context.Context, opts DiffOptions, gopts GlobalOptions, args []
|
|||
}
|
||||
|
||||
if gopts.Quiet {
|
||||
c.printChange = func(change *Change) {}
|
||||
c.printChange = func(_ *Change) {}
|
||||
}
|
||||
|
||||
stats := &DiffStatsContainer{
|
||||
|
|
|
@ -46,6 +46,7 @@ Exit status is 0 if the command was successful, and non-zero if there was any er
|
|||
type DumpOptions struct {
|
||||
restic.SnapshotFilter
|
||||
Archive string
|
||||
Target string
|
||||
}
|
||||
|
||||
var dumpOptions DumpOptions
|
||||
|
@ -56,6 +57,7 @@ func init() {
|
|||
flags := cmdDump.Flags()
|
||||
initSingleSnapshotFilter(flags, &dumpOptions.SnapshotFilter)
|
||||
flags.StringVarP(&dumpOptions.Archive, "archive", "a", "tar", "set archive `format` as \"tar\" or \"zip\"")
|
||||
flags.StringVarP(&dumpOptions.Target, "target", "t", "", "write the output to target `path`")
|
||||
}
|
||||
|
||||
func splitPath(p string) []string {
|
||||
|
@ -67,11 +69,11 @@ func splitPath(p string) []string {
|
|||
return append(s, f)
|
||||
}
|
||||
|
||||
func printFromTree(ctx context.Context, tree *restic.Tree, repo restic.Repository, prefix string, pathComponents []string, d *dump.Dumper) error {
|
||||
func printFromTree(ctx context.Context, tree *restic.Tree, repo restic.BlobLoader, prefix string, pathComponents []string, d *dump.Dumper, canWriteArchiveFunc func() error) error {
|
||||
// If we print / we need to assume that there are multiple nodes at that
|
||||
// level in the tree.
|
||||
if pathComponents[0] == "" {
|
||||
if err := checkStdoutArchive(); err != nil {
|
||||
if err := canWriteArchiveFunc(); err != nil {
|
||||
return err
|
||||
}
|
||||
return d.DumpTree(ctx, tree, "/")
|
||||
|
@ -91,9 +93,9 @@ func printFromTree(ctx context.Context, tree *restic.Tree, repo restic.Repositor
|
|||
if err != nil {
|
||||
return errors.Wrapf(err, "cannot load subtree for %q", item)
|
||||
}
|
||||
return printFromTree(ctx, subtree, repo, item, pathComponents[1:], d)
|
||||
return printFromTree(ctx, subtree, repo, item, pathComponents[1:], d, canWriteArchiveFunc)
|
||||
case dump.IsDir(node):
|
||||
if err := checkStdoutArchive(); err != nil {
|
||||
if err := canWriteArchiveFunc(); err != nil {
|
||||
return err
|
||||
}
|
||||
subtree, err := restic.LoadTree(ctx, repo, *node.Subtree)
|
||||
|
@ -168,8 +170,24 @@ func runDump(ctx context.Context, opts DumpOptions, gopts GlobalOptions, args []
|
|||
return errors.Fatalf("loading tree for snapshot %q failed: %v", snapshotIDString, err)
|
||||
}
|
||||
|
||||
d := dump.New(opts.Archive, repo, os.Stdout)
|
||||
err = printFromTree(ctx, tree, repo, "/", splittedPath, d)
|
||||
outputFileWriter := os.Stdout
|
||||
canWriteArchiveFunc := checkStdoutArchive
|
||||
|
||||
if opts.Target != "" {
|
||||
file, err := os.Create(opts.Target)
|
||||
if err != nil {
|
||||
return fmt.Errorf("cannot dump to file: %w", err)
|
||||
}
|
||||
defer func() {
|
||||
_ = file.Close()
|
||||
}()
|
||||
|
||||
outputFileWriter = file
|
||||
canWriteArchiveFunc = func() error { return nil }
|
||||
}
|
||||
|
||||
d := dump.New(opts.Archive, repo, outputFileWriter)
|
||||
err = printFromTree(ctx, tree, repo, "/", splittedPath, d, canWriteArchiveFunc)
|
||||
if err != nil {
|
||||
return errors.Fatalf("cannot dump file: %v", err)
|
||||
}
|
||||
|
|
|
@ -126,6 +126,7 @@ func (s *statefulOutput) PrintPatternJSON(path string, node *restic.Node) {
|
|||
// Make the following attributes disappear
|
||||
Name byte `json:"name,omitempty"`
|
||||
ExtendedAttributes byte `json:"extended_attributes,omitempty"`
|
||||
GenericAttributes byte `json:"generic_attributes,omitempty"`
|
||||
Device byte `json:"device,omitempty"`
|
||||
Content byte `json:"content,omitempty"`
|
||||
Subtree byte `json:"subtree,omitempty"`
|
||||
|
@ -244,13 +245,12 @@ func (s *statefulOutput) Finish() {
|
|||
|
||||
// Finder bundles information needed to find a file or directory.
|
||||
type Finder struct {
|
||||
repo restic.Repository
|
||||
pat findPattern
|
||||
out statefulOutput
|
||||
ignoreTrees restic.IDSet
|
||||
blobIDs map[string]struct{}
|
||||
treeIDs map[string]struct{}
|
||||
itemsFound int
|
||||
repo restic.Repository
|
||||
pat findPattern
|
||||
out statefulOutput
|
||||
blobIDs map[string]struct{}
|
||||
treeIDs map[string]struct{}
|
||||
itemsFound int
|
||||
}
|
||||
|
||||
func (f *Finder) findInSnapshot(ctx context.Context, sn *restic.Snapshot) error {
|
||||
|
@ -261,17 +261,17 @@ func (f *Finder) findInSnapshot(ctx context.Context, sn *restic.Snapshot) error
|
|||
}
|
||||
|
||||
f.out.newsn = sn
|
||||
return walker.Walk(ctx, f.repo, *sn.Tree, f.ignoreTrees, func(parentTreeID restic.ID, nodepath string, node *restic.Node, err error) (bool, error) {
|
||||
return walker.Walk(ctx, f.repo, *sn.Tree, walker.WalkVisitor{ProcessNode: func(parentTreeID restic.ID, nodepath string, node *restic.Node, err error) error {
|
||||
if err != nil {
|
||||
debug.Log("Error loading tree %v: %v", parentTreeID, err)
|
||||
|
||||
Printf("Unable to load tree %s\n ... which belongs to snapshot %s\n", parentTreeID, sn.ID())
|
||||
|
||||
return false, walker.ErrSkipNode
|
||||
return walker.ErrSkipNode
|
||||
}
|
||||
|
||||
if node == nil {
|
||||
return false, nil
|
||||
return nil
|
||||
}
|
||||
|
||||
normalizedNodepath := nodepath
|
||||
|
@ -284,7 +284,7 @@ func (f *Finder) findInSnapshot(ctx context.Context, sn *restic.Snapshot) error
|
|||
for _, pat := range f.pat.pattern {
|
||||
found, err := filter.Match(pat, normalizedNodepath)
|
||||
if err != nil {
|
||||
return false, err
|
||||
return err
|
||||
}
|
||||
if found {
|
||||
foundMatch = true
|
||||
|
@ -292,16 +292,13 @@ func (f *Finder) findInSnapshot(ctx context.Context, sn *restic.Snapshot) error
|
|||
}
|
||||
}
|
||||
|
||||
var (
|
||||
ignoreIfNoMatch = true
|
||||
errIfNoMatch error
|
||||
)
|
||||
var errIfNoMatch error
|
||||
if node.Type == "dir" {
|
||||
var childMayMatch bool
|
||||
for _, pat := range f.pat.pattern {
|
||||
mayMatch, err := filter.ChildMatch(pat, normalizedNodepath)
|
||||
if err != nil {
|
||||
return false, err
|
||||
return err
|
||||
}
|
||||
if mayMatch {
|
||||
childMayMatch = true
|
||||
|
@ -310,31 +307,28 @@ func (f *Finder) findInSnapshot(ctx context.Context, sn *restic.Snapshot) error
|
|||
}
|
||||
|
||||
if !childMayMatch {
|
||||
ignoreIfNoMatch = true
|
||||
errIfNoMatch = walker.ErrSkipNode
|
||||
} else {
|
||||
ignoreIfNoMatch = false
|
||||
}
|
||||
}
|
||||
|
||||
if !foundMatch {
|
||||
return ignoreIfNoMatch, errIfNoMatch
|
||||
return errIfNoMatch
|
||||
}
|
||||
|
||||
if !f.pat.oldest.IsZero() && node.ModTime.Before(f.pat.oldest) {
|
||||
debug.Log(" ModTime is older than %s\n", f.pat.oldest)
|
||||
return ignoreIfNoMatch, errIfNoMatch
|
||||
return errIfNoMatch
|
||||
}
|
||||
|
||||
if !f.pat.newest.IsZero() && node.ModTime.After(f.pat.newest) {
|
||||
debug.Log(" ModTime is newer than %s\n", f.pat.newest)
|
||||
return ignoreIfNoMatch, errIfNoMatch
|
||||
return errIfNoMatch
|
||||
}
|
||||
|
||||
debug.Log(" found match\n")
|
||||
f.out.PrintPattern(nodepath, node)
|
||||
return false, nil
|
||||
})
|
||||
return nil
|
||||
}})
|
||||
}
|
||||
|
||||
func (f *Finder) findIDs(ctx context.Context, sn *restic.Snapshot) error {
|
||||
|
@ -345,17 +339,17 @@ func (f *Finder) findIDs(ctx context.Context, sn *restic.Snapshot) error {
|
|||
}
|
||||
|
||||
f.out.newsn = sn
|
||||
return walker.Walk(ctx, f.repo, *sn.Tree, f.ignoreTrees, func(parentTreeID restic.ID, nodepath string, node *restic.Node, err error) (bool, error) {
|
||||
return walker.Walk(ctx, f.repo, *sn.Tree, walker.WalkVisitor{ProcessNode: func(parentTreeID restic.ID, nodepath string, node *restic.Node, err error) error {
|
||||
if err != nil {
|
||||
debug.Log("Error loading tree %v: %v", parentTreeID, err)
|
||||
|
||||
Printf("Unable to load tree %s\n ... which belongs to snapshot %s\n", parentTreeID, sn.ID())
|
||||
|
||||
return false, walker.ErrSkipNode
|
||||
return walker.ErrSkipNode
|
||||
}
|
||||
|
||||
if node == nil {
|
||||
return false, nil
|
||||
return nil
|
||||
}
|
||||
|
||||
if node.Type == "dir" && f.treeIDs != nil {
|
||||
|
@ -373,7 +367,7 @@ func (f *Finder) findIDs(ctx context.Context, sn *restic.Snapshot) error {
|
|||
// looking for blobs)
|
||||
if f.itemsFound >= len(f.treeIDs) && f.blobIDs == nil {
|
||||
// Return an error to terminate the Walk
|
||||
return true, errors.New("OK")
|
||||
return errors.New("OK")
|
||||
}
|
||||
}
|
||||
}
|
||||
|
@ -394,8 +388,8 @@ func (f *Finder) findIDs(ctx context.Context, sn *restic.Snapshot) error {
|
|||
}
|
||||
}
|
||||
|
||||
return false, nil
|
||||
})
|
||||
return nil
|
||||
}})
|
||||
}
|
||||
|
||||
var errAllPacksFound = errors.New("all packs found")
|
||||
|
@ -593,10 +587,9 @@ func runFind(ctx context.Context, opts FindOptions, gopts GlobalOptions, args []
|
|||
}
|
||||
|
||||
f := &Finder{
|
||||
repo: repo,
|
||||
pat: pat,
|
||||
out: statefulOutput{ListLong: opts.ListLong, HumanReadable: opts.HumanReadable, JSON: gopts.JSON},
|
||||
ignoreTrees: restic.NewIDSet(),
|
||||
repo: repo,
|
||||
pat: pat,
|
||||
out: statefulOutput{ListLong: opts.ListLong, HumanReadable: opts.HumanReadable, JSON: gopts.JSON},
|
||||
}
|
||||
|
||||
if opts.BlobID {
|
||||
|
|
|
@ -33,7 +33,7 @@ Exit status is 0 if the command was successful, and non-zero if there was any er
|
|||
`,
|
||||
DisableAutoGenTag: true,
|
||||
RunE: func(cmd *cobra.Command, args []string) error {
|
||||
return runForget(cmd.Context(), forgetOptions, globalOptions, args)
|
||||
return runForget(cmd.Context(), forgetOptions, forgetPruneOptions, globalOptions, args)
|
||||
},
|
||||
}
|
||||
|
||||
|
@ -98,6 +98,7 @@ type ForgetOptions struct {
|
|||
}
|
||||
|
||||
var forgetOptions ForgetOptions
|
||||
var forgetPruneOptions PruneOptions
|
||||
|
||||
func init() {
|
||||
cmdRoot.AddCommand(cmdForget)
|
||||
|
@ -132,7 +133,7 @@ func init() {
|
|||
f.BoolVar(&forgetOptions.Prune, "prune", false, "automatically run the 'prune' command if snapshots have been removed")
|
||||
|
||||
f.SortFlags = false
|
||||
addPruneOptions(cmdForget)
|
||||
addPruneOptions(cmdForget, &forgetPruneOptions)
|
||||
}
|
||||
|
||||
func verifyForgetOptions(opts *ForgetOptions) error {
|
||||
|
@ -151,7 +152,7 @@ func verifyForgetOptions(opts *ForgetOptions) error {
|
|||
return nil
|
||||
}
|
||||
|
||||
func runForget(ctx context.Context, opts ForgetOptions, gopts GlobalOptions, args []string) error {
|
||||
func runForget(ctx context.Context, opts ForgetOptions, pruneOptions PruneOptions, gopts GlobalOptions, args []string) error {
|
||||
err := verifyForgetOptions(&opts)
|
||||
if err != nil {
|
||||
return err
|
||||
|
|
|
@ -9,5 +9,8 @@ import (
|
|||
|
||||
func testRunForget(t testing.TB, gopts GlobalOptions, args ...string) {
|
||||
opts := ForgetOptions{}
|
||||
rtest.OK(t, runForget(context.TODO(), opts, gopts, args))
|
||||
pruneOpts := PruneOptions{
|
||||
MaxUnused: "5%",
|
||||
}
|
||||
rtest.OK(t, runForget(context.TODO(), opts, pruneOpts, gopts, args))
|
||||
}
|
||||
|
|
|
@ -21,7 +21,9 @@ EXIT STATUS
|
|||
Exit status is 0 if the command was successful, and non-zero if there was any error.
|
||||
`,
|
||||
DisableAutoGenTag: true,
|
||||
RunE: runGenerate,
|
||||
RunE: func(_ *cobra.Command, args []string) error {
|
||||
return runGenerate(genOpts, args)
|
||||
},
|
||||
}
|
||||
|
||||
type generateOptions struct {
|
||||
|
@ -90,48 +92,48 @@ func writePowerShellCompletion(file string) error {
|
|||
return cmdRoot.GenPowerShellCompletionFile(file)
|
||||
}
|
||||
|
||||
func runGenerate(_ *cobra.Command, args []string) error {
|
||||
func runGenerate(opts generateOptions, args []string) error {
|
||||
if len(args) > 0 {
|
||||
return errors.Fatal("the generate command expects no arguments, only options - please see `restic help generate` for usage and flags")
|
||||
}
|
||||
|
||||
if genOpts.ManDir != "" {
|
||||
err := writeManpages(genOpts.ManDir)
|
||||
if opts.ManDir != "" {
|
||||
err := writeManpages(opts.ManDir)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
if genOpts.BashCompletionFile != "" {
|
||||
err := writeBashCompletion(genOpts.BashCompletionFile)
|
||||
if opts.BashCompletionFile != "" {
|
||||
err := writeBashCompletion(opts.BashCompletionFile)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
if genOpts.FishCompletionFile != "" {
|
||||
err := writeFishCompletion(genOpts.FishCompletionFile)
|
||||
if opts.FishCompletionFile != "" {
|
||||
err := writeFishCompletion(opts.FishCompletionFile)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
if genOpts.ZSHCompletionFile != "" {
|
||||
err := writeZSHCompletion(genOpts.ZSHCompletionFile)
|
||||
if opts.ZSHCompletionFile != "" {
|
||||
err := writeZSHCompletion(opts.ZSHCompletionFile)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
if genOpts.PowerShellCompletionFile != "" {
|
||||
err := writePowerShellCompletion(genOpts.PowerShellCompletionFile)
|
||||
if opts.PowerShellCompletionFile != "" {
|
||||
err := writePowerShellCompletion(opts.PowerShellCompletionFile)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
var empty generateOptions
|
||||
if genOpts == empty {
|
||||
if opts == empty {
|
||||
return errors.Fatal("nothing to do, please specify at least one output file/dir")
|
||||
}
|
||||
|
||||
|
|
|
@ -1,266 +1,18 @@
|
|||
package main
|
||||
|
||||
import (
|
||||
"context"
|
||||
"encoding/json"
|
||||
"os"
|
||||
"strings"
|
||||
"sync"
|
||||
|
||||
"github.com/restic/restic/internal/backend"
|
||||
"github.com/restic/restic/internal/errors"
|
||||
"github.com/restic/restic/internal/repository"
|
||||
"github.com/restic/restic/internal/restic"
|
||||
"github.com/restic/restic/internal/ui/table"
|
||||
|
||||
"github.com/spf13/cobra"
|
||||
)
|
||||
|
||||
var cmdKey = &cobra.Command{
|
||||
Use: "key [flags] [list|add|remove|passwd] [ID]",
|
||||
Use: "key",
|
||||
Short: "Manage keys (passwords)",
|
||||
Long: `
|
||||
The "key" command manages keys (passwords) for accessing the repository.
|
||||
|
||||
EXIT STATUS
|
||||
===========
|
||||
|
||||
Exit status is 0 if the command was successful, and non-zero if there was any error.
|
||||
`,
|
||||
DisableAutoGenTag: true,
|
||||
RunE: func(cmd *cobra.Command, args []string) error {
|
||||
return runKey(cmd.Context(), globalOptions, args)
|
||||
},
|
||||
The "key" command allows you to set multiple access keys or passwords
|
||||
per repository.
|
||||
`,
|
||||
}
|
||||
|
||||
var (
|
||||
newPasswordFile string
|
||||
keyUsername string
|
||||
keyHostname string
|
||||
)
|
||||
|
||||
func init() {
|
||||
cmdRoot.AddCommand(cmdKey)
|
||||
|
||||
flags := cmdKey.Flags()
|
||||
flags.StringVarP(&newPasswordFile, "new-password-file", "", "", "`file` from which to read the new password")
|
||||
flags.StringVarP(&keyUsername, "user", "", "", "the username for new keys")
|
||||
flags.StringVarP(&keyHostname, "host", "", "", "the hostname for new keys")
|
||||
}
|
||||
|
||||
func listKeys(ctx context.Context, s *repository.Repository, gopts GlobalOptions) error {
|
||||
type keyInfo struct {
|
||||
Current bool `json:"current"`
|
||||
ID string `json:"id"`
|
||||
UserName string `json:"userName"`
|
||||
HostName string `json:"hostName"`
|
||||
Created string `json:"created"`
|
||||
}
|
||||
|
||||
var m sync.Mutex
|
||||
var keys []keyInfo
|
||||
|
||||
err := restic.ParallelList(ctx, s, restic.KeyFile, s.Connections(), func(ctx context.Context, id restic.ID, size int64) error {
|
||||
k, err := repository.LoadKey(ctx, s, id)
|
||||
if err != nil {
|
||||
Warnf("LoadKey() failed: %v\n", err)
|
||||
return nil
|
||||
}
|
||||
|
||||
key := keyInfo{
|
||||
Current: id == s.KeyID(),
|
||||
ID: id.Str(),
|
||||
UserName: k.Username,
|
||||
HostName: k.Hostname,
|
||||
Created: k.Created.Local().Format(TimeFormat),
|
||||
}
|
||||
|
||||
m.Lock()
|
||||
defer m.Unlock()
|
||||
keys = append(keys, key)
|
||||
return nil
|
||||
})
|
||||
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if gopts.JSON {
|
||||
return json.NewEncoder(globalOptions.stdout).Encode(keys)
|
||||
}
|
||||
|
||||
tab := table.New()
|
||||
tab.AddColumn(" ID", "{{if .Current}}*{{else}} {{end}}{{ .ID }}")
|
||||
tab.AddColumn("User", "{{ .UserName }}")
|
||||
tab.AddColumn("Host", "{{ .HostName }}")
|
||||
tab.AddColumn("Created", "{{ .Created }}")
|
||||
|
||||
for _, key := range keys {
|
||||
tab.AddRow(key)
|
||||
}
|
||||
|
||||
return tab.Write(globalOptions.stdout)
|
||||
}
|
||||
|
||||
// testKeyNewPassword is used to set a new password during integration testing.
|
||||
var testKeyNewPassword string
|
||||
|
||||
func getNewPassword(gopts GlobalOptions) (string, error) {
|
||||
if testKeyNewPassword != "" {
|
||||
return testKeyNewPassword, nil
|
||||
}
|
||||
|
||||
if newPasswordFile != "" {
|
||||
return loadPasswordFromFile(newPasswordFile)
|
||||
}
|
||||
|
||||
// Since we already have an open repository, temporary remove the password
|
||||
// to prompt the user for the passwd.
|
||||
newopts := gopts
|
||||
newopts.password = ""
|
||||
|
||||
return ReadPasswordTwice(newopts,
|
||||
"enter new password: ",
|
||||
"enter password again: ")
|
||||
}
|
||||
|
||||
func addKey(ctx context.Context, repo *repository.Repository, gopts GlobalOptions) error {
|
||||
pw, err := getNewPassword(gopts)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
id, err := repository.AddKey(ctx, repo, pw, keyUsername, keyHostname, repo.Key())
|
||||
if err != nil {
|
||||
return errors.Fatalf("creating new key failed: %v\n", err)
|
||||
}
|
||||
|
||||
err = switchToNewKeyAndRemoveIfBroken(ctx, repo, id, pw)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
Verbosef("saved new key as %s\n", id)
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func deleteKey(ctx context.Context, repo *repository.Repository, id restic.ID) error {
|
||||
if id == repo.KeyID() {
|
||||
return errors.Fatal("refusing to remove key currently used to access repository")
|
||||
}
|
||||
|
||||
h := backend.Handle{Type: restic.KeyFile, Name: id.String()}
|
||||
err := repo.Backend().Remove(ctx, h)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
Verbosef("removed key %v\n", id)
|
||||
return nil
|
||||
}
|
||||
|
||||
func changePassword(ctx context.Context, repo *repository.Repository, gopts GlobalOptions) error {
|
||||
pw, err := getNewPassword(gopts)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
id, err := repository.AddKey(ctx, repo, pw, "", "", repo.Key())
|
||||
if err != nil {
|
||||
return errors.Fatalf("creating new key failed: %v\n", err)
|
||||
}
|
||||
oldID := repo.KeyID()
|
||||
|
||||
err = switchToNewKeyAndRemoveIfBroken(ctx, repo, id, pw)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
h := backend.Handle{Type: restic.KeyFile, Name: oldID.String()}
|
||||
err = repo.Backend().Remove(ctx, h)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
Verbosef("saved new key as %s\n", id)
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func switchToNewKeyAndRemoveIfBroken(ctx context.Context, repo *repository.Repository, key *repository.Key, pw string) error {
|
||||
// Verify new key to make sure it really works. A broken key can render the
|
||||
// whole repository inaccessible
|
||||
err := repo.SearchKey(ctx, pw, 0, key.ID().String())
|
||||
if err != nil {
|
||||
// the key is invalid, try to remove it
|
||||
h := backend.Handle{Type: restic.KeyFile, Name: key.ID().String()}
|
||||
_ = repo.Backend().Remove(ctx, h)
|
||||
return errors.Fatalf("failed to access repository with new key: %v", err)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func runKey(ctx context.Context, gopts GlobalOptions, args []string) error {
|
||||
if len(args) < 1 || (args[0] == "remove" && len(args) != 2) || (args[0] != "remove" && len(args) != 1) {
|
||||
return errors.Fatal("wrong number of arguments")
|
||||
}
|
||||
|
||||
repo, err := OpenRepository(ctx, gopts)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
switch args[0] {
|
||||
case "list":
|
||||
if !gopts.NoLock {
|
||||
var lock *restic.Lock
|
||||
lock, ctx, err = lockRepo(ctx, repo, gopts.RetryLock, gopts.JSON)
|
||||
defer unlockRepo(lock)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
return listKeys(ctx, repo, gopts)
|
||||
case "add":
|
||||
lock, ctx, err := lockRepo(ctx, repo, gopts.RetryLock, gopts.JSON)
|
||||
defer unlockRepo(lock)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
return addKey(ctx, repo, gopts)
|
||||
case "remove":
|
||||
lock, ctx, err := lockRepoExclusive(ctx, repo, gopts.RetryLock, gopts.JSON)
|
||||
defer unlockRepo(lock)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
id, err := restic.Find(ctx, repo, restic.KeyFile, args[1])
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
return deleteKey(ctx, repo, id)
|
||||
case "passwd":
|
||||
lock, ctx, err := lockRepoExclusive(ctx, repo, gopts.RetryLock, gopts.JSON)
|
||||
defer unlockRepo(lock)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
return changePassword(ctx, repo, gopts)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func loadPasswordFromFile(pwdFile string) (string, error) {
|
||||
s, err := os.ReadFile(pwdFile)
|
||||
if os.IsNotExist(err) {
|
||||
return "", errors.Fatalf("%s does not exist", pwdFile)
|
||||
}
|
||||
return strings.TrimSpace(string(s)), errors.Wrap(err, "Readfile")
|
||||
}
|
||||
|
|
128
cmd/restic/cmd_key_add.go
Normal file
128
cmd/restic/cmd_key_add.go
Normal file
|
@ -0,0 +1,128 @@
|
|||
package main
|
||||
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
"os"
|
||||
"strings"
|
||||
|
||||
"github.com/restic/restic/internal/errors"
|
||||
"github.com/restic/restic/internal/repository"
|
||||
"github.com/spf13/cobra"
|
||||
)
|
||||
|
||||
var cmdKeyAdd = &cobra.Command{
|
||||
Use: "add",
|
||||
Short: "Add a new key (password) to the repository; returns the new key ID",
|
||||
Long: `
|
||||
The "add" sub-command creates a new key and validates the key. Returns the new key ID.
|
||||
|
||||
EXIT STATUS
|
||||
===========
|
||||
|
||||
Exit status is 0 if the command is successful, and non-zero if there was any error.
|
||||
`,
|
||||
DisableAutoGenTag: true,
|
||||
RunE: func(cmd *cobra.Command, args []string) error {
|
||||
return runKeyAdd(cmd.Context(), globalOptions, keyAddOpts, args)
|
||||
},
|
||||
}
|
||||
|
||||
type KeyAddOptions struct {
|
||||
NewPasswordFile string
|
||||
Username string
|
||||
Hostname string
|
||||
}
|
||||
|
||||
var keyAddOpts KeyAddOptions
|
||||
|
||||
func init() {
|
||||
cmdKey.AddCommand(cmdKeyAdd)
|
||||
|
||||
flags := cmdKeyAdd.Flags()
|
||||
flags.StringVarP(&keyAddOpts.NewPasswordFile, "new-password-file", "", "", "`file` from which to read the new password")
|
||||
flags.StringVarP(&keyAddOpts.Username, "user", "", "", "the username for new key")
|
||||
flags.StringVarP(&keyAddOpts.Hostname, "host", "", "", "the hostname for new key")
|
||||
}
|
||||
|
||||
func runKeyAdd(ctx context.Context, gopts GlobalOptions, opts KeyAddOptions, args []string) error {
|
||||
if len(args) > 0 {
|
||||
return fmt.Errorf("the key add command expects no arguments, only options - please see `restic help key add` for usage and flags")
|
||||
}
|
||||
|
||||
repo, err := OpenRepository(ctx, gopts)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
lock, ctx, err := lockRepo(ctx, repo, gopts.RetryLock, gopts.JSON)
|
||||
defer unlockRepo(lock)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
return addKey(ctx, repo, gopts, opts)
|
||||
}
|
||||
|
||||
func addKey(ctx context.Context, repo *repository.Repository, gopts GlobalOptions, opts KeyAddOptions) error {
|
||||
pw, err := getNewPassword(gopts, opts.NewPasswordFile)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
id, err := repository.AddKey(ctx, repo, pw, opts.Username, opts.Hostname, repo.Key())
|
||||
if err != nil {
|
||||
return errors.Fatalf("creating new key failed: %v\n", err)
|
||||
}
|
||||
|
||||
err = switchToNewKeyAndRemoveIfBroken(ctx, repo, id, pw)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
Verbosef("saved new key with ID %s\n", id.ID())
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// testKeyNewPassword is used to set a new password during integration testing.
|
||||
var testKeyNewPassword string
|
||||
|
||||
func getNewPassword(gopts GlobalOptions, newPasswordFile string) (string, error) {
|
||||
if testKeyNewPassword != "" {
|
||||
return testKeyNewPassword, nil
|
||||
}
|
||||
|
||||
if newPasswordFile != "" {
|
||||
return loadPasswordFromFile(newPasswordFile)
|
||||
}
|
||||
|
||||
// Since we already have an open repository, temporary remove the password
|
||||
// to prompt the user for the passwd.
|
||||
newopts := gopts
|
||||
newopts.password = ""
|
||||
|
||||
return ReadPasswordTwice(newopts,
|
||||
"enter new password: ",
|
||||
"enter password again: ")
|
||||
}
|
||||
|
||||
func loadPasswordFromFile(pwdFile string) (string, error) {
|
||||
s, err := os.ReadFile(pwdFile)
|
||||
if os.IsNotExist(err) {
|
||||
return "", errors.Fatalf("%s does not exist", pwdFile)
|
||||
}
|
||||
return strings.TrimSpace(string(s)), errors.Wrap(err, "Readfile")
|
||||
}
|
||||
|
||||
func switchToNewKeyAndRemoveIfBroken(ctx context.Context, repo *repository.Repository, key *repository.Key, pw string) error {
|
||||
// Verify new key to make sure it really works. A broken key can render the
|
||||
// whole repository inaccessible
|
||||
err := repo.SearchKey(ctx, pw, 0, key.ID().String())
|
||||
if err != nil {
|
||||
// the key is invalid, try to remove it
|
||||
_ = repository.RemoveKey(ctx, repo, key.ID())
|
||||
return errors.Fatalf("failed to access repository with new key: %v", err)
|
||||
}
|
||||
return nil
|
||||
}
|
|
@ -4,6 +4,7 @@ import (
|
|||
"bufio"
|
||||
"context"
|
||||
"regexp"
|
||||
"strings"
|
||||
"testing"
|
||||
|
||||
"github.com/restic/restic/internal/backend"
|
||||
|
@ -13,7 +14,7 @@ import (
|
|||
|
||||
func testRunKeyListOtherIDs(t testing.TB, gopts GlobalOptions) []string {
|
||||
buf, err := withCaptureStdout(func() error {
|
||||
return runKey(context.TODO(), gopts, []string{"list"})
|
||||
return runKeyList(context.TODO(), gopts, []string{})
|
||||
})
|
||||
rtest.OK(t, err)
|
||||
|
||||
|
@ -36,21 +37,20 @@ func testRunKeyAddNewKey(t testing.TB, newPassword string, gopts GlobalOptions)
|
|||
testKeyNewPassword = ""
|
||||
}()
|
||||
|
||||
rtest.OK(t, runKey(context.TODO(), gopts, []string{"add"}))
|
||||
rtest.OK(t, runKeyAdd(context.TODO(), gopts, KeyAddOptions{}, []string{}))
|
||||
}
|
||||
|
||||
func testRunKeyAddNewKeyUserHost(t testing.TB, gopts GlobalOptions) {
|
||||
testKeyNewPassword = "john's geheimnis"
|
||||
defer func() {
|
||||
testKeyNewPassword = ""
|
||||
keyUsername = ""
|
||||
keyHostname = ""
|
||||
}()
|
||||
|
||||
rtest.OK(t, cmdKey.Flags().Parse([]string{"--user=john", "--host=example.com"}))
|
||||
|
||||
t.Log("adding key for john@example.com")
|
||||
rtest.OK(t, runKey(context.TODO(), gopts, []string{"add"}))
|
||||
rtest.OK(t, runKeyAdd(context.TODO(), gopts, KeyAddOptions{
|
||||
Username: "john",
|
||||
Hostname: "example.com",
|
||||
}, []string{}))
|
||||
|
||||
repo, err := OpenRepository(context.TODO(), gopts)
|
||||
rtest.OK(t, err)
|
||||
|
@ -67,13 +67,13 @@ func testRunKeyPasswd(t testing.TB, newPassword string, gopts GlobalOptions) {
|
|||
testKeyNewPassword = ""
|
||||
}()
|
||||
|
||||
rtest.OK(t, runKey(context.TODO(), gopts, []string{"passwd"}))
|
||||
rtest.OK(t, runKeyPasswd(context.TODO(), gopts, KeyPasswdOptions{}, []string{}))
|
||||
}
|
||||
|
||||
func testRunKeyRemove(t testing.TB, gopts GlobalOptions, IDs []string) {
|
||||
t.Logf("remove %d keys: %q\n", len(IDs), IDs)
|
||||
for _, id := range IDs {
|
||||
rtest.OK(t, runKey(context.TODO(), gopts, []string{"remove", id}))
|
||||
rtest.OK(t, runKeyRemove(context.TODO(), gopts, []string{id}))
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -103,7 +103,7 @@ func TestKeyAddRemove(t *testing.T) {
|
|||
|
||||
env.gopts.password = passwordList[len(passwordList)-1]
|
||||
t.Logf("testing access with last password %q\n", env.gopts.password)
|
||||
rtest.OK(t, runKey(context.TODO(), env.gopts, []string{"list"}))
|
||||
rtest.OK(t, runKeyList(context.TODO(), env.gopts, []string{}))
|
||||
testRunCheck(t, env.gopts)
|
||||
|
||||
testRunKeyAddNewKeyUserHost(t, env.gopts)
|
||||
|
@ -131,15 +131,45 @@ func TestKeyProblems(t *testing.T) {
|
|||
testKeyNewPassword = ""
|
||||
}()
|
||||
|
||||
err := runKey(context.TODO(), env.gopts, []string{"passwd"})
|
||||
err := runKeyPasswd(context.TODO(), env.gopts, KeyPasswdOptions{}, []string{})
|
||||
t.Log(err)
|
||||
rtest.Assert(t, err != nil, "expected passwd change to fail")
|
||||
|
||||
err = runKey(context.TODO(), env.gopts, []string{"add"})
|
||||
err = runKeyAdd(context.TODO(), env.gopts, KeyAddOptions{}, []string{})
|
||||
t.Log(err)
|
||||
rtest.Assert(t, err != nil, "expected key adding to fail")
|
||||
|
||||
t.Logf("testing access with initial password %q\n", env.gopts.password)
|
||||
rtest.OK(t, runKey(context.TODO(), env.gopts, []string{"list"}))
|
||||
rtest.OK(t, runKeyList(context.TODO(), env.gopts, []string{}))
|
||||
testRunCheck(t, env.gopts)
|
||||
}
|
||||
|
||||
func TestKeyCommandInvalidArguments(t *testing.T) {
|
||||
env, cleanup := withTestEnvironment(t)
|
||||
defer cleanup()
|
||||
|
||||
testRunInit(t, env.gopts)
|
||||
env.gopts.backendTestHook = func(r backend.Backend) (backend.Backend, error) {
|
||||
return &emptySaveBackend{r}, nil
|
||||
}
|
||||
|
||||
err := runKeyAdd(context.TODO(), env.gopts, KeyAddOptions{}, []string{"johndoe"})
|
||||
t.Log(err)
|
||||
rtest.Assert(t, err != nil && strings.Contains(err.Error(), "no arguments"), "unexpected error for key add: %v", err)
|
||||
|
||||
err = runKeyPasswd(context.TODO(), env.gopts, KeyPasswdOptions{}, []string{"johndoe"})
|
||||
t.Log(err)
|
||||
rtest.Assert(t, err != nil && strings.Contains(err.Error(), "no arguments"), "unexpected error for key passwd: %v", err)
|
||||
|
||||
err = runKeyList(context.TODO(), env.gopts, []string{"johndoe"})
|
||||
t.Log(err)
|
||||
rtest.Assert(t, err != nil && strings.Contains(err.Error(), "no arguments"), "unexpected error for key list: %v", err)
|
||||
|
||||
err = runKeyRemove(context.TODO(), env.gopts, []string{})
|
||||
t.Log(err)
|
||||
rtest.Assert(t, err != nil && strings.Contains(err.Error(), "one argument"), "unexpected error for key remove: %v", err)
|
||||
|
||||
err = runKeyRemove(context.TODO(), env.gopts, []string{"john", "doe"})
|
||||
t.Log(err)
|
||||
rtest.Assert(t, err != nil && strings.Contains(err.Error(), "one argument"), "unexpected error for key remove: %v", err)
|
||||
}
|
||||
|
|
112
cmd/restic/cmd_key_list.go
Normal file
112
cmd/restic/cmd_key_list.go
Normal file
|
@ -0,0 +1,112 @@
|
|||
package main
|
||||
|
||||
import (
|
||||
"context"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"sync"
|
||||
|
||||
"github.com/restic/restic/internal/repository"
|
||||
"github.com/restic/restic/internal/restic"
|
||||
"github.com/restic/restic/internal/ui/table"
|
||||
"github.com/spf13/cobra"
|
||||
)
|
||||
|
||||
var cmdKeyList = &cobra.Command{
|
||||
Use: "list",
|
||||
Short: "List keys (passwords)",
|
||||
Long: `
|
||||
The "list" sub-command lists all the keys (passwords) associated with the repository.
|
||||
Returns the key ID, username, hostname, created time and if it's the current key being
|
||||
used to access the repository.
|
||||
|
||||
EXIT STATUS
|
||||
===========
|
||||
|
||||
Exit status is 0 if the command is successful, and non-zero if there was any error.
|
||||
`,
|
||||
DisableAutoGenTag: true,
|
||||
RunE: func(cmd *cobra.Command, args []string) error {
|
||||
return runKeyList(cmd.Context(), globalOptions, args)
|
||||
},
|
||||
}
|
||||
|
||||
func init() {
|
||||
cmdKey.AddCommand(cmdKeyList)
|
||||
}
|
||||
|
||||
func runKeyList(ctx context.Context, gopts GlobalOptions, args []string) error {
|
||||
if len(args) > 0 {
|
||||
return fmt.Errorf("the key list command expects no arguments, only options - please see `restic help key list` for usage and flags")
|
||||
}
|
||||
|
||||
repo, err := OpenRepository(ctx, gopts)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if !gopts.NoLock {
|
||||
var lock *restic.Lock
|
||||
lock, ctx, err = lockRepo(ctx, repo, gopts.RetryLock, gopts.JSON)
|
||||
defer unlockRepo(lock)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
return listKeys(ctx, repo, gopts)
|
||||
}
|
||||
|
||||
func listKeys(ctx context.Context, s *repository.Repository, gopts GlobalOptions) error {
|
||||
type keyInfo struct {
|
||||
Current bool `json:"current"`
|
||||
ID string `json:"id"`
|
||||
UserName string `json:"userName"`
|
||||
HostName string `json:"hostName"`
|
||||
Created string `json:"created"`
|
||||
}
|
||||
|
||||
var m sync.Mutex
|
||||
var keys []keyInfo
|
||||
|
||||
err := restic.ParallelList(ctx, s, restic.KeyFile, s.Connections(), func(ctx context.Context, id restic.ID, _ int64) error {
|
||||
k, err := repository.LoadKey(ctx, s, id)
|
||||
if err != nil {
|
||||
Warnf("LoadKey() failed: %v\n", err)
|
||||
return nil
|
||||
}
|
||||
|
||||
key := keyInfo{
|
||||
Current: id == s.KeyID(),
|
||||
ID: id.Str(),
|
||||
UserName: k.Username,
|
||||
HostName: k.Hostname,
|
||||
Created: k.Created.Local().Format(TimeFormat),
|
||||
}
|
||||
|
||||
m.Lock()
|
||||
defer m.Unlock()
|
||||
keys = append(keys, key)
|
||||
return nil
|
||||
})
|
||||
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if gopts.JSON {
|
||||
return json.NewEncoder(globalOptions.stdout).Encode(keys)
|
||||
}
|
||||
|
||||
tab := table.New()
|
||||
tab.AddColumn(" ID", "{{if .Current}}*{{else}} {{end}}{{ .ID }}")
|
||||
tab.AddColumn("User", "{{ .UserName }}")
|
||||
tab.AddColumn("Host", "{{ .HostName }}")
|
||||
tab.AddColumn("Created", "{{ .Created }}")
|
||||
|
||||
for _, key := range keys {
|
||||
tab.AddRow(key)
|
||||
}
|
||||
|
||||
return tab.Write(globalOptions.stdout)
|
||||
}
|
89
cmd/restic/cmd_key_passwd.go
Normal file
89
cmd/restic/cmd_key_passwd.go
Normal file
|
@ -0,0 +1,89 @@
|
|||
package main
|
||||
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
|
||||
"github.com/restic/restic/internal/errors"
|
||||
"github.com/restic/restic/internal/repository"
|
||||
"github.com/spf13/cobra"
|
||||
)
|
||||
|
||||
var cmdKeyPasswd = &cobra.Command{
|
||||
Use: "passwd",
|
||||
Short: "Change key (password); creates a new key ID and removes the old key ID, returns new key ID",
|
||||
Long: `
|
||||
The "passwd" sub-command creates a new key, validates the key and remove the old key ID.
|
||||
Returns the new key ID.
|
||||
|
||||
EXIT STATUS
|
||||
===========
|
||||
|
||||
Exit status is 0 if the command is successful, and non-zero if there was any error.
|
||||
`,
|
||||
DisableAutoGenTag: true,
|
||||
RunE: func(cmd *cobra.Command, args []string) error {
|
||||
return runKeyPasswd(cmd.Context(), globalOptions, keyPasswdOpts, args)
|
||||
},
|
||||
}
|
||||
|
||||
type KeyPasswdOptions struct {
|
||||
KeyAddOptions
|
||||
}
|
||||
|
||||
var keyPasswdOpts KeyPasswdOptions
|
||||
|
||||
func init() {
|
||||
cmdKey.AddCommand(cmdKeyPasswd)
|
||||
|
||||
flags := cmdKeyPasswd.Flags()
|
||||
flags.StringVarP(&keyPasswdOpts.NewPasswordFile, "new-password-file", "", "", "`file` from which to read the new password")
|
||||
flags.StringVarP(&keyPasswdOpts.Username, "user", "", "", "the username for new key")
|
||||
flags.StringVarP(&keyPasswdOpts.Hostname, "host", "", "", "the hostname for new key")
|
||||
}
|
||||
|
||||
func runKeyPasswd(ctx context.Context, gopts GlobalOptions, opts KeyPasswdOptions, args []string) error {
|
||||
if len(args) > 0 {
|
||||
return fmt.Errorf("the key passwd command expects no arguments, only options - please see `restic help key passwd` for usage and flags")
|
||||
}
|
||||
|
||||
repo, err := OpenRepository(ctx, gopts)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
lock, ctx, err := lockRepoExclusive(ctx, repo, gopts.RetryLock, gopts.JSON)
|
||||
defer unlockRepo(lock)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
return changePassword(ctx, repo, gopts, opts)
|
||||
}
|
||||
|
||||
func changePassword(ctx context.Context, repo *repository.Repository, gopts GlobalOptions, opts KeyPasswdOptions) error {
|
||||
pw, err := getNewPassword(gopts, opts.NewPasswordFile)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
id, err := repository.AddKey(ctx, repo, pw, "", "", repo.Key())
|
||||
if err != nil {
|
||||
return errors.Fatalf("creating new key failed: %v\n", err)
|
||||
}
|
||||
oldID := repo.KeyID()
|
||||
|
||||
err = switchToNewKeyAndRemoveIfBroken(ctx, repo, id, pw)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
err = repository.RemoveKey(ctx, repo, oldID)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
Verbosef("saved new key as %s\n", id)
|
||||
|
||||
return nil
|
||||
}
|
73
cmd/restic/cmd_key_remove.go
Normal file
73
cmd/restic/cmd_key_remove.go
Normal file
|
@ -0,0 +1,73 @@
|
|||
package main
|
||||
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
|
||||
"github.com/restic/restic/internal/errors"
|
||||
"github.com/restic/restic/internal/repository"
|
||||
"github.com/restic/restic/internal/restic"
|
||||
"github.com/spf13/cobra"
|
||||
)
|
||||
|
||||
var cmdKeyRemove = &cobra.Command{
|
||||
Use: "remove [ID]",
|
||||
Short: "Remove key ID (password) from the repository.",
|
||||
Long: `
|
||||
The "remove" sub-command removes the selected key ID. The "remove" command does not allow
|
||||
removing the current key being used to access the repository.
|
||||
|
||||
EXIT STATUS
|
||||
===========
|
||||
|
||||
Exit status is 0 if the command is successful, and non-zero if there was any error.
|
||||
`,
|
||||
DisableAutoGenTag: true,
|
||||
RunE: func(cmd *cobra.Command, args []string) error {
|
||||
return runKeyRemove(cmd.Context(), globalOptions, args)
|
||||
},
|
||||
}
|
||||
|
||||
func init() {
|
||||
cmdKey.AddCommand(cmdKeyRemove)
|
||||
}
|
||||
|
||||
func runKeyRemove(ctx context.Context, gopts GlobalOptions, args []string) error {
|
||||
if len(args) != 1 {
|
||||
return fmt.Errorf("key remove expects one argument as the key id")
|
||||
}
|
||||
|
||||
repo, err := OpenRepository(ctx, gopts)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
lock, ctx, err := lockRepoExclusive(ctx, repo, gopts.RetryLock, gopts.JSON)
|
||||
defer unlockRepo(lock)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
idPrefix := args[0]
|
||||
|
||||
return deleteKey(ctx, repo, idPrefix)
|
||||
}
|
||||
|
||||
func deleteKey(ctx context.Context, repo *repository.Repository, idPrefix string) error {
|
||||
id, err := restic.Find(ctx, repo, restic.KeyFile, idPrefix)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if id == repo.KeyID() {
|
||||
return errors.Fatal("refusing to remove key currently used to access repository")
|
||||
}
|
||||
|
||||
err = repository.RemoveKey(ctx, repo, id)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
Verbosef("removed key %v\n", id)
|
||||
return nil
|
||||
}
|
|
@ -23,7 +23,7 @@ Exit status is 0 if the command was successful, and non-zero if there was any er
|
|||
`,
|
||||
DisableAutoGenTag: true,
|
||||
RunE: func(cmd *cobra.Command, args []string) error {
|
||||
return runList(cmd.Context(), cmd, globalOptions, args)
|
||||
return runList(cmd.Context(), globalOptions, args)
|
||||
},
|
||||
}
|
||||
|
||||
|
@ -31,9 +31,9 @@ func init() {
|
|||
cmdRoot.AddCommand(cmdList)
|
||||
}
|
||||
|
||||
func runList(ctx context.Context, cmd *cobra.Command, gopts GlobalOptions, args []string) error {
|
||||
func runList(ctx context.Context, gopts GlobalOptions, args []string) error {
|
||||
if len(args) != 1 {
|
||||
return errors.Fatal("type not specified, usage: " + cmd.Use)
|
||||
return errors.Fatal("type not specified")
|
||||
}
|
||||
|
||||
repo, err := OpenRepository(ctx, gopts)
|
||||
|
@ -63,7 +63,7 @@ func runList(ctx context.Context, cmd *cobra.Command, gopts GlobalOptions, args
|
|||
case "locks":
|
||||
t = restic.LockFile
|
||||
case "blobs":
|
||||
return index.ForAllIndexes(ctx, repo, repo, func(id restic.ID, idx *index.Index, oldFormat bool, err error) error {
|
||||
return index.ForAllIndexes(ctx, repo, repo, func(_ restic.ID, idx *index.Index, _ bool, err error) error {
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
@ -76,7 +76,7 @@ func runList(ctx context.Context, cmd *cobra.Command, gopts GlobalOptions, args
|
|||
return errors.Fatal("invalid type")
|
||||
}
|
||||
|
||||
return repo.List(ctx, t, func(id restic.ID, size int64) error {
|
||||
return repo.List(ctx, t, func(id restic.ID, _ int64) error {
|
||||
Printf("%s\n", id)
|
||||
return nil
|
||||
})
|
||||
|
|
|
@ -12,7 +12,7 @@ import (
|
|||
|
||||
func testRunList(t testing.TB, tpe string, opts GlobalOptions) restic.IDs {
|
||||
buf, err := withCaptureStdout(func() error {
|
||||
return runList(context.TODO(), cmdList, opts, []string{tpe})
|
||||
return runList(context.TODO(), opts, []string{tpe})
|
||||
})
|
||||
rtest.OK(t, err)
|
||||
return parseIDsFromReader(t, buf)
|
||||
|
|
|
@ -3,6 +3,8 @@ package main
|
|||
import (
|
||||
"context"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"io"
|
||||
"os"
|
||||
"strings"
|
||||
"time"
|
||||
|
@ -51,6 +53,7 @@ type LsOptions struct {
|
|||
restic.SnapshotFilter
|
||||
Recursive bool
|
||||
HumanReadable bool
|
||||
Ncdu bool
|
||||
}
|
||||
|
||||
var lsOptions LsOptions
|
||||
|
@ -63,16 +66,49 @@ func init() {
|
|||
flags.BoolVarP(&lsOptions.ListLong, "long", "l", false, "use a long listing format showing size and mode")
|
||||
flags.BoolVar(&lsOptions.Recursive, "recursive", false, "include files in subfolders of the listed directories")
|
||||
flags.BoolVar(&lsOptions.HumanReadable, "human-readable", false, "print sizes in human readable format")
|
||||
flags.BoolVar(&lsOptions.Ncdu, "ncdu", false, "output NCDU export format (pipe into 'ncdu -f -')")
|
||||
}
|
||||
|
||||
type lsSnapshot struct {
|
||||
*restic.Snapshot
|
||||
ID *restic.ID `json:"id"`
|
||||
ShortID string `json:"short_id"`
|
||||
StructType string `json:"struct_type"` // "snapshot"
|
||||
type lsPrinter interface {
|
||||
Snapshot(sn *restic.Snapshot)
|
||||
Node(path string, node *restic.Node)
|
||||
LeaveDir(path string)
|
||||
Close()
|
||||
}
|
||||
|
||||
type jsonLsPrinter struct {
|
||||
enc *json.Encoder
|
||||
}
|
||||
|
||||
func (p *jsonLsPrinter) Snapshot(sn *restic.Snapshot) {
|
||||
type lsSnapshot struct {
|
||||
*restic.Snapshot
|
||||
ID *restic.ID `json:"id"`
|
||||
ShortID string `json:"short_id"`
|
||||
MessageType string `json:"message_type"` // "snapshot"
|
||||
StructType string `json:"struct_type"` // "snapshot", deprecated
|
||||
}
|
||||
|
||||
err := p.enc.Encode(lsSnapshot{
|
||||
Snapshot: sn,
|
||||
ID: sn.ID(),
|
||||
ShortID: sn.ID().Str(),
|
||||
MessageType: "snapshot",
|
||||
StructType: "snapshot",
|
||||
})
|
||||
if err != nil {
|
||||
Warnf("JSON encode failed: %v\n", err)
|
||||
}
|
||||
}
|
||||
|
||||
// Print node in our custom JSON format, followed by a newline.
|
||||
func (p *jsonLsPrinter) Node(path string, node *restic.Node) {
|
||||
err := lsNodeJSON(p.enc, path, node)
|
||||
if err != nil {
|
||||
Warnf("JSON encode failed: %v\n", err)
|
||||
}
|
||||
}
|
||||
|
||||
func lsNodeJSON(enc *json.Encoder, path string, node *restic.Node) error {
|
||||
n := &struct {
|
||||
Name string `json:"name"`
|
||||
|
@ -87,7 +123,8 @@ func lsNodeJSON(enc *json.Encoder, path string, node *restic.Node) error {
|
|||
AccessTime time.Time `json:"atime,omitempty"`
|
||||
ChangeTime time.Time `json:"ctime,omitempty"`
|
||||
Inode uint64 `json:"inode,omitempty"`
|
||||
StructType string `json:"struct_type"` // "node"
|
||||
MessageType string `json:"message_type"` // "node"
|
||||
StructType string `json:"struct_type"` // "node", deprecated
|
||||
|
||||
size uint64 // Target for Size pointer.
|
||||
}{
|
||||
|
@ -103,6 +140,7 @@ func lsNodeJSON(enc *json.Encoder, path string, node *restic.Node) error {
|
|||
AccessTime: node.AccessTime,
|
||||
ChangeTime: node.ChangeTime,
|
||||
Inode: node.Inode,
|
||||
MessageType: "node",
|
||||
StructType: "node",
|
||||
}
|
||||
// Always print size for regular files, even when empty,
|
||||
|
@ -114,10 +152,117 @@ func lsNodeJSON(enc *json.Encoder, path string, node *restic.Node) error {
|
|||
return enc.Encode(n)
|
||||
}
|
||||
|
||||
func (p *jsonLsPrinter) LeaveDir(_ string) {}
|
||||
func (p *jsonLsPrinter) Close() {}
|
||||
|
||||
type ncduLsPrinter struct {
|
||||
out io.Writer
|
||||
depth int
|
||||
}
|
||||
|
||||
// lsSnapshotNcdu prints a restic snapshot in Ncdu save format.
|
||||
// It opens the JSON list. Nodes are added with lsNodeNcdu and the list is closed by lsCloseNcdu.
|
||||
// Format documentation: https://dev.yorhel.nl/ncdu/jsonfmt
|
||||
func (p *ncduLsPrinter) Snapshot(sn *restic.Snapshot) {
|
||||
const NcduMajorVer = 1
|
||||
const NcduMinorVer = 2
|
||||
|
||||
snapshotBytes, err := json.Marshal(sn)
|
||||
if err != nil {
|
||||
Warnf("JSON encode failed: %v\n", err)
|
||||
}
|
||||
p.depth++
|
||||
fmt.Fprintf(p.out, "[%d, %d, %s", NcduMajorVer, NcduMinorVer, string(snapshotBytes))
|
||||
}
|
||||
|
||||
func lsNcduNode(_ string, node *restic.Node) ([]byte, error) {
|
||||
type NcduNode struct {
|
||||
Name string `json:"name"`
|
||||
Asize uint64 `json:"asize"`
|
||||
Dsize uint64 `json:"dsize"`
|
||||
Dev uint64 `json:"dev"`
|
||||
Ino uint64 `json:"ino"`
|
||||
NLink uint64 `json:"nlink"`
|
||||
NotReg bool `json:"notreg"`
|
||||
UID uint32 `json:"uid"`
|
||||
GID uint32 `json:"gid"`
|
||||
Mode uint16 `json:"mode"`
|
||||
Mtime int64 `json:"mtime"`
|
||||
}
|
||||
|
||||
outNode := NcduNode{
|
||||
Name: node.Name,
|
||||
Asize: node.Size,
|
||||
Dsize: node.Size,
|
||||
Dev: node.DeviceID,
|
||||
Ino: node.Inode,
|
||||
NLink: node.Links,
|
||||
NotReg: node.Type != "dir" && node.Type != "file",
|
||||
UID: node.UID,
|
||||
GID: node.GID,
|
||||
Mode: uint16(node.Mode & os.ModePerm),
|
||||
Mtime: node.ModTime.Unix(),
|
||||
}
|
||||
// bits according to inode(7) manpage
|
||||
if node.Mode&os.ModeSetuid != 0 {
|
||||
outNode.Mode |= 0o4000
|
||||
}
|
||||
if node.Mode&os.ModeSetgid != 0 {
|
||||
outNode.Mode |= 0o2000
|
||||
}
|
||||
if node.Mode&os.ModeSticky != 0 {
|
||||
outNode.Mode |= 0o1000
|
||||
}
|
||||
|
||||
return json.Marshal(outNode)
|
||||
}
|
||||
|
||||
func (p *ncduLsPrinter) Node(path string, node *restic.Node) {
|
||||
out, err := lsNcduNode(path, node)
|
||||
if err != nil {
|
||||
Warnf("JSON encode failed: %v\n", err)
|
||||
}
|
||||
|
||||
if node.Type == "dir" {
|
||||
fmt.Fprintf(p.out, ",\n%s[\n%s%s", strings.Repeat(" ", p.depth), strings.Repeat(" ", p.depth+1), string(out))
|
||||
p.depth++
|
||||
} else {
|
||||
fmt.Fprintf(p.out, ",\n%s%s", strings.Repeat(" ", p.depth), string(out))
|
||||
}
|
||||
}
|
||||
|
||||
func (p *ncduLsPrinter) LeaveDir(_ string) {
|
||||
p.depth--
|
||||
fmt.Fprintf(p.out, "\n%s]", strings.Repeat(" ", p.depth))
|
||||
}
|
||||
|
||||
func (p *ncduLsPrinter) Close() {
|
||||
fmt.Fprint(p.out, "\n]\n")
|
||||
}
|
||||
|
||||
type textLsPrinter struct {
|
||||
dirs []string
|
||||
ListLong bool
|
||||
HumanReadable bool
|
||||
}
|
||||
|
||||
func (p *textLsPrinter) Snapshot(sn *restic.Snapshot) {
|
||||
Verbosef("%v filtered by %v:\n", sn, p.dirs)
|
||||
}
|
||||
func (p *textLsPrinter) Node(path string, node *restic.Node) {
|
||||
Printf("%s\n", formatNode(path, node, p.ListLong, p.HumanReadable))
|
||||
}
|
||||
|
||||
func (p *textLsPrinter) LeaveDir(_ string) {}
|
||||
func (p *textLsPrinter) Close() {}
|
||||
|
||||
func runLs(ctx context.Context, opts LsOptions, gopts GlobalOptions, args []string) error {
|
||||
if len(args) == 0 {
|
||||
return errors.Fatal("no snapshot ID specified, specify snapshot ID or use special ID 'latest'")
|
||||
}
|
||||
if opts.Ncdu && gopts.JSON {
|
||||
return errors.Fatal("only either '--json' or '--ncdu' can be specified")
|
||||
}
|
||||
|
||||
// extract any specific directories to walk
|
||||
var dirs []string
|
||||
|
@ -179,38 +324,21 @@ func runLs(ctx context.Context, opts LsOptions, gopts GlobalOptions, args []stri
|
|||
return err
|
||||
}
|
||||
|
||||
var (
|
||||
printSnapshot func(sn *restic.Snapshot)
|
||||
printNode func(path string, node *restic.Node)
|
||||
)
|
||||
var printer lsPrinter
|
||||
|
||||
if gopts.JSON {
|
||||
enc := json.NewEncoder(globalOptions.stdout)
|
||||
|
||||
printSnapshot = func(sn *restic.Snapshot) {
|
||||
err := enc.Encode(lsSnapshot{
|
||||
Snapshot: sn,
|
||||
ID: sn.ID(),
|
||||
ShortID: sn.ID().Str(),
|
||||
StructType: "snapshot",
|
||||
})
|
||||
if err != nil {
|
||||
Warnf("JSON encode failed: %v\n", err)
|
||||
}
|
||||
printer = &jsonLsPrinter{
|
||||
enc: json.NewEncoder(globalOptions.stdout),
|
||||
}
|
||||
|
||||
printNode = func(path string, node *restic.Node) {
|
||||
err := lsNodeJSON(enc, path, node)
|
||||
if err != nil {
|
||||
Warnf("JSON encode failed: %v\n", err)
|
||||
}
|
||||
} else if opts.Ncdu {
|
||||
printer = &ncduLsPrinter{
|
||||
out: globalOptions.stdout,
|
||||
}
|
||||
} else {
|
||||
printSnapshot = func(sn *restic.Snapshot) {
|
||||
Verbosef("%v filtered by %v:\n", sn, dirs)
|
||||
}
|
||||
printNode = func(path string, node *restic.Node) {
|
||||
Printf("%s\n", formatNode(path, node, lsOptions.ListLong, lsOptions.HumanReadable))
|
||||
printer = &textLsPrinter{
|
||||
dirs: dirs,
|
||||
ListLong: opts.ListLong,
|
||||
HumanReadable: opts.HumanReadable,
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -228,44 +356,55 @@ func runLs(ctx context.Context, opts LsOptions, gopts GlobalOptions, args []stri
|
|||
return err
|
||||
}
|
||||
|
||||
printSnapshot(sn)
|
||||
printer.Snapshot(sn)
|
||||
|
||||
err = walker.Walk(ctx, repo, *sn.Tree, nil, func(_ restic.ID, nodepath string, node *restic.Node, err error) (bool, error) {
|
||||
processNode := func(_ restic.ID, nodepath string, node *restic.Node, err error) error {
|
||||
if err != nil {
|
||||
return false, err
|
||||
return err
|
||||
}
|
||||
if node == nil {
|
||||
return false, nil
|
||||
return nil
|
||||
}
|
||||
|
||||
if withinDir(nodepath) {
|
||||
// if we're within a dir, print the node
|
||||
printNode(nodepath, node)
|
||||
printer.Node(nodepath, node)
|
||||
|
||||
// if recursive listing is requested, signal the walker that it
|
||||
// should continue walking recursively
|
||||
if opts.Recursive {
|
||||
return false, nil
|
||||
return nil
|
||||
}
|
||||
}
|
||||
|
||||
// if there's an upcoming match deeper in the tree (but we're not
|
||||
// there yet), signal the walker to descend into any subdirs
|
||||
if approachingMatchingTree(nodepath) {
|
||||
return false, nil
|
||||
return nil
|
||||
}
|
||||
|
||||
// otherwise, signal the walker to not walk recursively into any
|
||||
// subdirs
|
||||
if node.Type == "dir" {
|
||||
return false, walker.ErrSkipNode
|
||||
return walker.ErrSkipNode
|
||||
}
|
||||
return false, nil
|
||||
return nil
|
||||
}
|
||||
|
||||
err = walker.Walk(ctx, repo, *sn.Tree, walker.WalkVisitor{
|
||||
ProcessNode: processNode,
|
||||
LeaveDir: func(path string) {
|
||||
// the root path `/` has no corresponding node and is thus also skipped by processNode
|
||||
if withinDir(path) && path != "/" {
|
||||
printer.LeaveDir(path)
|
||||
}
|
||||
},
|
||||
})
|
||||
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
printer.Close()
|
||||
return nil
|
||||
}
|
||||
|
|
|
@ -2,18 +2,46 @@ package main
|
|||
|
||||
import (
|
||||
"context"
|
||||
"encoding/json"
|
||||
"path/filepath"
|
||||
"strings"
|
||||
"testing"
|
||||
|
||||
rtest "github.com/restic/restic/internal/test"
|
||||
)
|
||||
|
||||
func testRunLs(t testing.TB, gopts GlobalOptions, snapshotID string) []string {
|
||||
func testRunLsWithOpts(t testing.TB, gopts GlobalOptions, opts LsOptions, args []string) []byte {
|
||||
buf, err := withCaptureStdout(func() error {
|
||||
gopts.Quiet = true
|
||||
opts := LsOptions{}
|
||||
return runLs(context.TODO(), opts, gopts, []string{snapshotID})
|
||||
return runLs(context.TODO(), opts, gopts, args)
|
||||
})
|
||||
rtest.OK(t, err)
|
||||
return strings.Split(buf.String(), "\n")
|
||||
return buf.Bytes()
|
||||
}
|
||||
|
||||
func testRunLs(t testing.TB, gopts GlobalOptions, snapshotID string) []string {
|
||||
out := testRunLsWithOpts(t, gopts, LsOptions{}, []string{snapshotID})
|
||||
return strings.Split(string(out), "\n")
|
||||
}
|
||||
|
||||
func assertIsValidJSON(t *testing.T, data []byte) {
|
||||
// Sanity check: output must be valid JSON.
|
||||
var v interface{}
|
||||
err := json.Unmarshal(data, &v)
|
||||
rtest.OK(t, err)
|
||||
}
|
||||
|
||||
func TestRunLsNcdu(t *testing.T) {
|
||||
env, cleanup := withTestEnvironment(t)
|
||||
defer cleanup()
|
||||
|
||||
testRunInit(t, env.gopts)
|
||||
opts := BackupOptions{}
|
||||
testRunBackup(t, filepath.Dir(env.testdata), []string{"testdata"}, opts, env.gopts)
|
||||
|
||||
ncdu := testRunLsWithOpts(t, env.gopts, LsOptions{Ncdu: true}, []string{"latest"})
|
||||
assertIsValidJSON(t, ncdu)
|
||||
|
||||
ncdu = testRunLsWithOpts(t, env.gopts, LsOptions{Ncdu: true}, []string{"latest", "/testdata"})
|
||||
assertIsValidJSON(t, ncdu)
|
||||
}
|
||||
|
|
|
@ -11,78 +11,94 @@ import (
|
|||
rtest "github.com/restic/restic/internal/test"
|
||||
)
|
||||
|
||||
type lsTestNode struct {
|
||||
path string
|
||||
restic.Node
|
||||
}
|
||||
|
||||
var lsTestNodes = []lsTestNode{
|
||||
// Mode is omitted when zero.
|
||||
// Permissions, by convention is "-" per mode bit
|
||||
{
|
||||
path: "/bar/baz",
|
||||
Node: restic.Node{
|
||||
Name: "baz",
|
||||
Type: "file",
|
||||
Size: 12345,
|
||||
UID: 10000000,
|
||||
GID: 20000000,
|
||||
|
||||
User: "nobody",
|
||||
Group: "nobodies",
|
||||
Links: 1,
|
||||
},
|
||||
},
|
||||
|
||||
// Even empty files get an explicit size.
|
||||
{
|
||||
path: "/foo/empty",
|
||||
Node: restic.Node{
|
||||
Name: "empty",
|
||||
Type: "file",
|
||||
Size: 0,
|
||||
UID: 1001,
|
||||
GID: 1001,
|
||||
|
||||
User: "not printed",
|
||||
Group: "not printed",
|
||||
Links: 0xF00,
|
||||
},
|
||||
},
|
||||
|
||||
// Non-regular files do not get a size.
|
||||
// Mode is printed in decimal, including the type bits.
|
||||
{
|
||||
path: "/foo/link",
|
||||
Node: restic.Node{
|
||||
Name: "link",
|
||||
Type: "symlink",
|
||||
Mode: os.ModeSymlink | 0777,
|
||||
LinkTarget: "not printed",
|
||||
},
|
||||
},
|
||||
|
||||
{
|
||||
path: "/some/directory",
|
||||
Node: restic.Node{
|
||||
Name: "directory",
|
||||
Type: "dir",
|
||||
Mode: os.ModeDir | 0755,
|
||||
ModTime: time.Date(2020, 1, 2, 3, 4, 5, 0, time.UTC),
|
||||
AccessTime: time.Date(2021, 2, 3, 4, 5, 6, 7, time.UTC),
|
||||
ChangeTime: time.Date(2022, 3, 4, 5, 6, 7, 8, time.UTC),
|
||||
},
|
||||
},
|
||||
|
||||
// Test encoding of setuid/setgid/sticky bit
|
||||
{
|
||||
path: "/some/sticky",
|
||||
Node: restic.Node{
|
||||
Name: "sticky",
|
||||
Type: "dir",
|
||||
Mode: os.ModeDir | 0755 | os.ModeSetuid | os.ModeSetgid | os.ModeSticky,
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
func TestLsNodeJSON(t *testing.T) {
|
||||
for _, c := range []struct {
|
||||
path string
|
||||
restic.Node
|
||||
expect string
|
||||
}{
|
||||
// Mode is omitted when zero.
|
||||
// Permissions, by convention is "-" per mode bit
|
||||
{
|
||||
path: "/bar/baz",
|
||||
Node: restic.Node{
|
||||
Name: "baz",
|
||||
Type: "file",
|
||||
Size: 12345,
|
||||
UID: 10000000,
|
||||
GID: 20000000,
|
||||
|
||||
User: "nobody",
|
||||
Group: "nobodies",
|
||||
Links: 1,
|
||||
},
|
||||
expect: `{"name":"baz","type":"file","path":"/bar/baz","uid":10000000,"gid":20000000,"size":12345,"permissions":"----------","mtime":"0001-01-01T00:00:00Z","atime":"0001-01-01T00:00:00Z","ctime":"0001-01-01T00:00:00Z","struct_type":"node"}`,
|
||||
},
|
||||
|
||||
// Even empty files get an explicit size.
|
||||
{
|
||||
path: "/foo/empty",
|
||||
Node: restic.Node{
|
||||
Name: "empty",
|
||||
Type: "file",
|
||||
Size: 0,
|
||||
UID: 1001,
|
||||
GID: 1001,
|
||||
|
||||
User: "not printed",
|
||||
Group: "not printed",
|
||||
Links: 0xF00,
|
||||
},
|
||||
expect: `{"name":"empty","type":"file","path":"/foo/empty","uid":1001,"gid":1001,"size":0,"permissions":"----------","mtime":"0001-01-01T00:00:00Z","atime":"0001-01-01T00:00:00Z","ctime":"0001-01-01T00:00:00Z","struct_type":"node"}`,
|
||||
},
|
||||
|
||||
// Non-regular files do not get a size.
|
||||
// Mode is printed in decimal, including the type bits.
|
||||
{
|
||||
path: "/foo/link",
|
||||
Node: restic.Node{
|
||||
Name: "link",
|
||||
Type: "symlink",
|
||||
Mode: os.ModeSymlink | 0777,
|
||||
LinkTarget: "not printed",
|
||||
},
|
||||
expect: `{"name":"link","type":"symlink","path":"/foo/link","uid":0,"gid":0,"mode":134218239,"permissions":"Lrwxrwxrwx","mtime":"0001-01-01T00:00:00Z","atime":"0001-01-01T00:00:00Z","ctime":"0001-01-01T00:00:00Z","struct_type":"node"}`,
|
||||
},
|
||||
|
||||
{
|
||||
path: "/some/directory",
|
||||
Node: restic.Node{
|
||||
Name: "directory",
|
||||
Type: "dir",
|
||||
Mode: os.ModeDir | 0755,
|
||||
ModTime: time.Date(2020, 1, 2, 3, 4, 5, 0, time.UTC),
|
||||
AccessTime: time.Date(2021, 2, 3, 4, 5, 6, 7, time.UTC),
|
||||
ChangeTime: time.Date(2022, 3, 4, 5, 6, 7, 8, time.UTC),
|
||||
},
|
||||
expect: `{"name":"directory","type":"dir","path":"/some/directory","uid":0,"gid":0,"mode":2147484141,"permissions":"drwxr-xr-x","mtime":"2020-01-02T03:04:05Z","atime":"2021-02-03T04:05:06.000000007Z","ctime":"2022-03-04T05:06:07.000000008Z","struct_type":"node"}`,
|
||||
},
|
||||
for i, expect := range []string{
|
||||
`{"name":"baz","type":"file","path":"/bar/baz","uid":10000000,"gid":20000000,"size":12345,"permissions":"----------","mtime":"0001-01-01T00:00:00Z","atime":"0001-01-01T00:00:00Z","ctime":"0001-01-01T00:00:00Z","message_type":"node","struct_type":"node"}`,
|
||||
`{"name":"empty","type":"file","path":"/foo/empty","uid":1001,"gid":1001,"size":0,"permissions":"----------","mtime":"0001-01-01T00:00:00Z","atime":"0001-01-01T00:00:00Z","ctime":"0001-01-01T00:00:00Z","message_type":"node","struct_type":"node"}`,
|
||||
`{"name":"link","type":"symlink","path":"/foo/link","uid":0,"gid":0,"mode":134218239,"permissions":"Lrwxrwxrwx","mtime":"0001-01-01T00:00:00Z","atime":"0001-01-01T00:00:00Z","ctime":"0001-01-01T00:00:00Z","message_type":"node","struct_type":"node"}`,
|
||||
`{"name":"directory","type":"dir","path":"/some/directory","uid":0,"gid":0,"mode":2147484141,"permissions":"drwxr-xr-x","mtime":"2020-01-02T03:04:05Z","atime":"2021-02-03T04:05:06.000000007Z","ctime":"2022-03-04T05:06:07.000000008Z","message_type":"node","struct_type":"node"}`,
|
||||
`{"name":"sticky","type":"dir","path":"/some/sticky","uid":0,"gid":0,"mode":2161115629,"permissions":"dugtrwxr-xr-x","mtime":"0001-01-01T00:00:00Z","atime":"0001-01-01T00:00:00Z","ctime":"0001-01-01T00:00:00Z","message_type":"node","struct_type":"node"}`,
|
||||
} {
|
||||
c := lsTestNodes[i]
|
||||
buf := new(bytes.Buffer)
|
||||
enc := json.NewEncoder(buf)
|
||||
err := lsNodeJSON(enc, c.path, &c.Node)
|
||||
rtest.OK(t, err)
|
||||
rtest.Equals(t, c.expect+"\n", buf.String())
|
||||
rtest.Equals(t, expect+"\n", buf.String())
|
||||
|
||||
// Sanity check: output must be valid JSON.
|
||||
var v interface{}
|
||||
|
@ -90,3 +106,54 @@ func TestLsNodeJSON(t *testing.T) {
|
|||
rtest.OK(t, err)
|
||||
}
|
||||
}
|
||||
|
||||
func TestLsNcduNode(t *testing.T) {
|
||||
for i, expect := range []string{
|
||||
`{"name":"baz","asize":12345,"dsize":12345,"dev":0,"ino":0,"nlink":1,"notreg":false,"uid":10000000,"gid":20000000,"mode":0,"mtime":-62135596800}`,
|
||||
`{"name":"empty","asize":0,"dsize":0,"dev":0,"ino":0,"nlink":3840,"notreg":false,"uid":1001,"gid":1001,"mode":0,"mtime":-62135596800}`,
|
||||
`{"name":"link","asize":0,"dsize":0,"dev":0,"ino":0,"nlink":0,"notreg":true,"uid":0,"gid":0,"mode":511,"mtime":-62135596800}`,
|
||||
`{"name":"directory","asize":0,"dsize":0,"dev":0,"ino":0,"nlink":0,"notreg":false,"uid":0,"gid":0,"mode":493,"mtime":1577934245}`,
|
||||
`{"name":"sticky","asize":0,"dsize":0,"dev":0,"ino":0,"nlink":0,"notreg":false,"uid":0,"gid":0,"mode":4077,"mtime":-62135596800}`,
|
||||
} {
|
||||
c := lsTestNodes[i]
|
||||
out, err := lsNcduNode(c.path, &c.Node)
|
||||
rtest.OK(t, err)
|
||||
rtest.Equals(t, expect, string(out))
|
||||
|
||||
// Sanity check: output must be valid JSON.
|
||||
var v interface{}
|
||||
err = json.Unmarshal(out, &v)
|
||||
rtest.OK(t, err)
|
||||
}
|
||||
}
|
||||
|
||||
func TestLsNcdu(t *testing.T) {
|
||||
var buf bytes.Buffer
|
||||
printer := &ncduLsPrinter{
|
||||
out: &buf,
|
||||
}
|
||||
|
||||
printer.Snapshot(&restic.Snapshot{
|
||||
Hostname: "host",
|
||||
Paths: []string{"/example"},
|
||||
})
|
||||
printer.Node("/directory", &restic.Node{
|
||||
Type: "dir",
|
||||
Name: "directory",
|
||||
})
|
||||
printer.Node("/directory/data", &restic.Node{
|
||||
Type: "file",
|
||||
Name: "data",
|
||||
Size: 42,
|
||||
})
|
||||
printer.LeaveDir("/directory")
|
||||
printer.Close()
|
||||
|
||||
rtest.Equals(t, `[1, 2, {"time":"0001-01-01T00:00:00Z","tree":null,"paths":["/example"],"hostname":"host"},
|
||||
[
|
||||
{"name":"directory","asize":0,"dsize":0,"dev":0,"ino":0,"nlink":0,"notreg":false,"uid":0,"gid":0,"mode":0,"mtime":-62135596800},
|
||||
{"name":"data","asize":42,"dsize":42,"dev":0,"ino":0,"nlink":0,"notreg":false,"uid":0,"gid":0,"mode":0,"mtime":-62135596800}
|
||||
]
|
||||
]
|
||||
`, buf.String())
|
||||
}
|
||||
|
|
|
@ -12,7 +12,6 @@ import (
|
|||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/restic/restic/internal/debug"
|
||||
"github.com/restic/restic/internal/repository"
|
||||
"github.com/restic/restic/internal/restic"
|
||||
rtest "github.com/restic/restic/internal/test"
|
||||
|
@ -160,11 +159,6 @@ func TestMount(t *testing.T) {
|
|||
t.Skip("Skipping fuse tests")
|
||||
}
|
||||
|
||||
debugEnabled := debug.TestLogToStderr(t)
|
||||
if debugEnabled {
|
||||
defer debug.TestDisableLog(t)
|
||||
}
|
||||
|
||||
env, cleanup := withTestEnvironment(t)
|
||||
// must list snapshots more than once
|
||||
env.gopts.backendTestHook = nil
|
||||
|
|
|
@ -21,7 +21,7 @@ Exit status is 0 if the command was successful, and non-zero if there was any er
|
|||
`,
|
||||
Hidden: true,
|
||||
DisableAutoGenTag: true,
|
||||
Run: func(cmd *cobra.Command, args []string) {
|
||||
Run: func(_ *cobra.Command, _ []string) {
|
||||
fmt.Printf("All Extended Options:\n")
|
||||
var maxLen int
|
||||
for _, opt := range options.List() {
|
||||
|
|
|
@ -15,6 +15,7 @@ import (
|
|||
"github.com/restic/restic/internal/repository"
|
||||
"github.com/restic/restic/internal/restic"
|
||||
"github.com/restic/restic/internal/ui"
|
||||
"github.com/restic/restic/internal/ui/progress"
|
||||
|
||||
"github.com/spf13/cobra"
|
||||
)
|
||||
|
@ -36,7 +37,7 @@ EXIT STATUS
|
|||
Exit status is 0 if the command was successful, and non-zero if there was any error.
|
||||
`,
|
||||
DisableAutoGenTag: true,
|
||||
RunE: func(cmd *cobra.Command, args []string) error {
|
||||
RunE: func(cmd *cobra.Command, _ []string) error {
|
||||
return runPrune(cmd.Context(), pruneOptions, globalOptions)
|
||||
},
|
||||
}
|
||||
|
@ -66,10 +67,10 @@ func init() {
|
|||
f := cmdPrune.Flags()
|
||||
f.BoolVarP(&pruneOptions.DryRun, "dry-run", "n", false, "do not modify the repository, just print what would be done")
|
||||
f.StringVarP(&pruneOptions.UnsafeNoSpaceRecovery, "unsafe-recover-no-free-space", "", "", "UNSAFE, READ THE DOCUMENTATION BEFORE USING! Try to recover a repository stuck with no free space. Do not use without trying out 'prune --max-repack-size 0' first.")
|
||||
addPruneOptions(cmdPrune)
|
||||
addPruneOptions(cmdPrune, &pruneOptions)
|
||||
}
|
||||
|
||||
func addPruneOptions(c *cobra.Command) {
|
||||
func addPruneOptions(c *cobra.Command, pruneOptions *PruneOptions) {
|
||||
f := c.Flags()
|
||||
f.StringVar(&pruneOptions.MaxUnused, "max-unused", "5%", "tolerate given `limit` of unused data (absolute value in bytes with suffixes k/K, m/M, g/G, t/T, a value in % or the word 'unlimited')")
|
||||
f.StringVar(&pruneOptions.MaxRepackSize, "max-repack-size", "", "maximum `size` to repack (allowed suffixes: k/K, m/M, g/G, t/T)")
|
||||
|
@ -100,7 +101,7 @@ func verifyPruneOptions(opts *PruneOptions) error {
|
|||
// parse MaxUnused either as unlimited, a percentage, or an absolute number of bytes
|
||||
switch {
|
||||
case maxUnused == "unlimited":
|
||||
opts.maxUnusedBytes = func(used uint64) uint64 {
|
||||
opts.maxUnusedBytes = func(_ uint64) uint64 {
|
||||
return math.MaxUint64
|
||||
}
|
||||
|
||||
|
@ -129,7 +130,7 @@ func verifyPruneOptions(opts *PruneOptions) error {
|
|||
return errors.Fatalf("invalid number of bytes %q for --max-unused: %v", opts.MaxUnused, err)
|
||||
}
|
||||
|
||||
opts.maxUnusedBytes = func(used uint64) uint64 {
|
||||
opts.maxUnusedBytes = func(_ uint64) uint64 {
|
||||
return uint64(size)
|
||||
}
|
||||
}
|
||||
|
@ -766,7 +767,7 @@ func doPrune(ctx context.Context, opts PruneOptions, gopts GlobalOptions, repo r
|
|||
return errors.Fatalf("%s", err)
|
||||
}
|
||||
} else if len(plan.ignorePacks) != 0 {
|
||||
err = rebuildIndexFiles(ctx, gopts, repo, plan.ignorePacks, nil)
|
||||
err = rebuildIndexFiles(ctx, gopts, repo, plan.ignorePacks, nil, false)
|
||||
if err != nil {
|
||||
return errors.Fatalf("%s", err)
|
||||
}
|
||||
|
@ -778,7 +779,7 @@ func doPrune(ctx context.Context, opts PruneOptions, gopts GlobalOptions, repo r
|
|||
}
|
||||
|
||||
if opts.unsafeRecovery {
|
||||
_, err = writeIndexFiles(ctx, gopts, repo, plan.ignorePacks, nil)
|
||||
err = rebuildIndexFiles(ctx, gopts, repo, plan.ignorePacks, nil, true)
|
||||
if err != nil {
|
||||
return errors.Fatalf("%s", err)
|
||||
}
|
||||
|
@ -788,23 +789,22 @@ func doPrune(ctx context.Context, opts PruneOptions, gopts GlobalOptions, repo r
|
|||
return nil
|
||||
}
|
||||
|
||||
func writeIndexFiles(ctx context.Context, gopts GlobalOptions, repo restic.Repository, removePacks restic.IDSet, extraObsolete restic.IDs) (restic.IDSet, error) {
|
||||
func rebuildIndexFiles(ctx context.Context, gopts GlobalOptions, repo restic.Repository, removePacks restic.IDSet, extraObsolete restic.IDs, skipDeletion bool) error {
|
||||
Verbosef("rebuilding index\n")
|
||||
|
||||
bar := newProgressMax(!gopts.Quiet, 0, "packs processed")
|
||||
obsoleteIndexes, err := repo.Index().Save(ctx, repo, removePacks, extraObsolete, bar)
|
||||
bar.Done()
|
||||
return obsoleteIndexes, err
|
||||
}
|
||||
|
||||
func rebuildIndexFiles(ctx context.Context, gopts GlobalOptions, repo restic.Repository, removePacks restic.IDSet, extraObsolete restic.IDs) error {
|
||||
obsoleteIndexes, err := writeIndexFiles(ctx, gopts, repo, removePacks, extraObsolete)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
Verbosef("deleting obsolete index files\n")
|
||||
return DeleteFilesChecked(ctx, gopts, repo, obsoleteIndexes, restic.IndexFile)
|
||||
return repo.Index().Save(ctx, repo, removePacks, extraObsolete, restic.MasterIndexSaveOpts{
|
||||
SaveProgress: bar,
|
||||
DeleteProgress: func() *progress.Counter {
|
||||
return newProgressMax(!gopts.Quiet, 0, "old indexes deleted")
|
||||
},
|
||||
DeleteReport: func(id restic.ID, _ error) {
|
||||
if gopts.verbosity > 2 {
|
||||
Verbosef("removed index %v\n", id.String())
|
||||
}
|
||||
},
|
||||
SkipDeletion: skipDeletion,
|
||||
})
|
||||
}
|
||||
|
||||
func getUsedBlobs(ctx context.Context, repo restic.Repository, ignoreSnapshots restic.IDSet, quiet bool) (usedBlobs restic.CountedBlobSet, err error) {
|
||||
|
|
|
@ -81,7 +81,10 @@ func testRunForgetJSON(t testing.TB, gopts GlobalOptions, args ...string) {
|
|||
DryRun: true,
|
||||
Last: 1,
|
||||
}
|
||||
return runForget(context.TODO(), opts, gopts, args)
|
||||
pruneOpts := PruneOptions{
|
||||
MaxUnused: "5%",
|
||||
}
|
||||
return runForget(context.TODO(), opts, pruneOpts, gopts, args)
|
||||
})
|
||||
rtest.OK(t, err)
|
||||
|
||||
|
|
|
@ -25,7 +25,7 @@ EXIT STATUS
|
|||
Exit status is 0 if the command was successful, and non-zero if there was any error.
|
||||
`,
|
||||
DisableAutoGenTag: true,
|
||||
RunE: func(cmd *cobra.Command, args []string) error {
|
||||
RunE: func(cmd *cobra.Command, _ []string) error {
|
||||
return runRecover(cmd.Context(), globalOptions)
|
||||
},
|
||||
}
|
||||
|
@ -91,7 +91,7 @@ func runRecover(ctx context.Context, gopts GlobalOptions) error {
|
|||
bar.Done()
|
||||
|
||||
Verbosef("load snapshots\n")
|
||||
err = restic.ForAllSnapshots(ctx, snapshotLister, repo, nil, func(id restic.ID, sn *restic.Snapshot, err error) error {
|
||||
err = restic.ForAllSnapshots(ctx, snapshotLister, repo, nil, func(_ restic.ID, sn *restic.Snapshot, _ error) error {
|
||||
trees[*sn.Tree] = true
|
||||
return nil
|
||||
})
|
||||
|
@ -158,7 +158,7 @@ func runRecover(ctx context.Context, gopts GlobalOptions) error {
|
|||
|
||||
}
|
||||
|
||||
func createSnapshot(ctx context.Context, name, hostname string, tags []string, repo restic.Repository, tree *restic.ID) error {
|
||||
func createSnapshot(ctx context.Context, name, hostname string, tags []string, repo restic.SaverUnpacked, tree *restic.ID) error {
|
||||
sn, err := restic.NewSnapshot([]string{name}, tags, hostname, time.Now())
|
||||
if err != nil {
|
||||
return errors.Fatalf("unable to save snapshot: %v", err)
|
||||
|
|
|
@ -24,7 +24,7 @@ EXIT STATUS
|
|||
Exit status is 0 if the command was successful, and non-zero if there was any error.
|
||||
`,
|
||||
DisableAutoGenTag: true,
|
||||
RunE: func(cmd *cobra.Command, args []string) error {
|
||||
RunE: func(cmd *cobra.Command, _ []string) error {
|
||||
return runRebuildIndex(cmd.Context(), repairIndexOptions, globalOptions)
|
||||
},
|
||||
}
|
||||
|
@ -78,7 +78,7 @@ func rebuildIndex(ctx context.Context, opts RepairIndexOptions, gopts GlobalOpti
|
|||
|
||||
if opts.ReadAllPacks {
|
||||
// get list of old index files but start with empty index
|
||||
err := repo.List(ctx, restic.IndexFile, func(id restic.ID, size int64) error {
|
||||
err := repo.List(ctx, restic.IndexFile, func(id restic.ID, _ int64) error {
|
||||
obsoleteIndexes = append(obsoleteIndexes, id)
|
||||
return nil
|
||||
})
|
||||
|
@ -88,7 +88,7 @@ func rebuildIndex(ctx context.Context, opts RepairIndexOptions, gopts GlobalOpti
|
|||
} else {
|
||||
Verbosef("loading indexes...\n")
|
||||
mi := index.NewMasterIndex()
|
||||
err := index.ForAllIndexes(ctx, repo, repo, func(id restic.ID, idx *index.Index, oldFormat bool, err error) error {
|
||||
err := index.ForAllIndexes(ctx, repo, repo, func(id restic.ID, idx *index.Index, _ bool, err error) error {
|
||||
if err != nil {
|
||||
Warnf("removing invalid index %v: %v\n", id, err)
|
||||
obsoleteIndexes = append(obsoleteIndexes, id)
|
||||
|
@ -154,7 +154,7 @@ func rebuildIndex(ctx context.Context, opts RepairIndexOptions, gopts GlobalOpti
|
|||
}
|
||||
}
|
||||
|
||||
err = rebuildIndexFiles(ctx, gopts, repo, removePacks, obsoleteIndexes)
|
||||
err = rebuildIndexFiles(ctx, gopts, repo, removePacks, obsoleteIndexes, false)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
|
|
@ -9,8 +9,8 @@ import (
|
|||
"github.com/restic/restic/internal/errors"
|
||||
"github.com/restic/restic/internal/repository"
|
||||
"github.com/restic/restic/internal/restic"
|
||||
"github.com/restic/restic/internal/ui/termstatus"
|
||||
"github.com/spf13/cobra"
|
||||
"golang.org/x/sync/errgroup"
|
||||
)
|
||||
|
||||
var cmdRepairPacks = &cobra.Command{
|
||||
|
@ -29,7 +29,9 @@ Exit status is 0 if the command was successful, and non-zero if there was any er
|
|||
`,
|
||||
DisableAutoGenTag: true,
|
||||
RunE: func(cmd *cobra.Command, args []string) error {
|
||||
return runRepairPacks(cmd.Context(), globalOptions, args)
|
||||
term, cancel := setupTermstatus()
|
||||
defer cancel()
|
||||
return runRepairPacks(cmd.Context(), globalOptions, term, args)
|
||||
},
|
||||
}
|
||||
|
||||
|
@ -37,14 +39,7 @@ func init() {
|
|||
cmdRepair.AddCommand(cmdRepairPacks)
|
||||
}
|
||||
|
||||
func runRepairPacks(ctx context.Context, gopts GlobalOptions, args []string) error {
|
||||
// FIXME discuss and add proper feature flag mechanism
|
||||
flag, _ := os.LookupEnv("RESTIC_FEATURES")
|
||||
if flag != "repair-packs-v1" {
|
||||
return errors.Fatal("This command is experimental and may change/be removed without notice between restic versions. " +
|
||||
"Set the environment variable 'RESTIC_FEATURES=repair-packs-v1' to enable it.")
|
||||
}
|
||||
|
||||
func runRepairPacks(ctx context.Context, gopts GlobalOptions, term *termstatus.Terminal, args []string) error {
|
||||
ids := restic.NewIDSet()
|
||||
for _, arg := range args {
|
||||
id, err := restic.ParseID(arg)
|
||||
|
@ -68,21 +63,19 @@ func runRepairPacks(ctx context.Context, gopts GlobalOptions, args []string) err
|
|||
return err
|
||||
}
|
||||
|
||||
return repairPacks(ctx, gopts, repo, ids)
|
||||
}
|
||||
|
||||
func repairPacks(ctx context.Context, gopts GlobalOptions, repo *repository.Repository, ids restic.IDSet) error {
|
||||
bar := newIndexProgress(gopts.Quiet, gopts.JSON)
|
||||
err := repo.LoadIndex(ctx, bar)
|
||||
err = repo.LoadIndex(ctx, bar)
|
||||
if err != nil {
|
||||
return errors.Fatalf("%s", err)
|
||||
}
|
||||
|
||||
Warnf("saving backup copies of pack files in current folder\n")
|
||||
printer := newTerminalProgressPrinter(gopts.verbosity, term)
|
||||
|
||||
printer.P("saving backup copies of pack files to current folder")
|
||||
for id := range ids {
|
||||
f, err := os.OpenFile("pack-"+id.String(), os.O_WRONLY|os.O_CREATE|os.O_EXCL, 0o666)
|
||||
if err != nil {
|
||||
return errors.Fatalf("%s", err)
|
||||
return err
|
||||
}
|
||||
|
||||
err = repo.Backend().Load(ctx, backend.Handle{Type: restic.PackFile, Name: id.String()}, 0, 0, func(rd io.Reader) error {
|
||||
|
@ -94,66 +87,15 @@ func repairPacks(ctx context.Context, gopts GlobalOptions, repo *repository.Repo
|
|||
return err
|
||||
})
|
||||
if err != nil {
|
||||
return errors.Fatalf("%s", err)
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
wg, wgCtx := errgroup.WithContext(ctx)
|
||||
repo.StartPackUploader(wgCtx, wg)
|
||||
repo.DisableAutoIndexUpdate()
|
||||
|
||||
Warnf("salvaging intact data from specified pack files\n")
|
||||
bar = newProgressMax(!gopts.Quiet, uint64(len(ids)), "pack files")
|
||||
defer bar.Done()
|
||||
|
||||
wg.Go(func() error {
|
||||
// examine all data the indexes have for the pack file
|
||||
for b := range repo.Index().ListPacks(wgCtx, ids) {
|
||||
blobs := b.Blobs
|
||||
if len(blobs) == 0 {
|
||||
Warnf("no blobs found for pack %v\n", b.PackID)
|
||||
bar.Add(1)
|
||||
continue
|
||||
}
|
||||
|
||||
err = repository.StreamPack(wgCtx, repo.Backend().Load, repo.Key(), b.PackID, blobs, func(blob restic.BlobHandle, buf []byte, err error) error {
|
||||
if err != nil {
|
||||
// Fallback path
|
||||
buf, err = repo.LoadBlob(wgCtx, blob.Type, blob.ID, nil)
|
||||
if err != nil {
|
||||
Warnf("failed to load blob %v: %v\n", blob.ID, err)
|
||||
return nil
|
||||
}
|
||||
}
|
||||
id, _, _, err := repo.SaveBlob(wgCtx, blob.Type, buf, restic.ID{}, true)
|
||||
if !id.Equal(blob.ID) {
|
||||
panic("pack id mismatch during upload")
|
||||
}
|
||||
return err
|
||||
})
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
bar.Add(1)
|
||||
}
|
||||
return repo.Flush(wgCtx)
|
||||
})
|
||||
|
||||
if err := wg.Wait(); err != nil {
|
||||
return errors.Fatalf("%s", err)
|
||||
}
|
||||
bar.Done()
|
||||
|
||||
// remove salvaged packs from index
|
||||
err = rebuildIndexFiles(ctx, gopts, repo, ids, nil)
|
||||
err = repository.RepairPacks(ctx, repo, ids, printer)
|
||||
if err != nil {
|
||||
return errors.Fatalf("%s", err)
|
||||
}
|
||||
|
||||
// cleanup
|
||||
Warnf("removing salvaged pack files\n")
|
||||
DeleteFiles(ctx, gopts, repo, ids, restic.PackFile)
|
||||
|
||||
Warnf("\nUse `restic repair snapshots --forget` to remove the corrupted data blobs from all snapshots\n")
|
||||
return nil
|
||||
}
|
||||
|
|
|
@ -125,7 +125,7 @@ func runRepairSnapshots(ctx context.Context, gopts GlobalOptions, opts RepairOpt
|
|||
node.Size = newSize
|
||||
return node
|
||||
},
|
||||
RewriteFailedTree: func(nodeID restic.ID, path string, _ error) (restic.ID, error) {
|
||||
RewriteFailedTree: func(_ restic.ID, path string, _ error) (restic.ID, error) {
|
||||
if path == "/" {
|
||||
Verbosef(" dir %q: not readable\n", path)
|
||||
// remove snapshots with invalid root node
|
||||
|
|
|
@ -3,7 +3,6 @@ package main
|
|||
import (
|
||||
"context"
|
||||
"strings"
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
"github.com/restic/restic/internal/debug"
|
||||
|
@ -38,31 +37,9 @@ Exit status is 0 if the command was successful, and non-zero if there was any er
|
|||
`,
|
||||
DisableAutoGenTag: true,
|
||||
RunE: func(cmd *cobra.Command, args []string) error {
|
||||
ctx := cmd.Context()
|
||||
var wg sync.WaitGroup
|
||||
cancelCtx, cancel := context.WithCancel(ctx)
|
||||
defer func() {
|
||||
// shutdown termstatus
|
||||
cancel()
|
||||
wg.Wait()
|
||||
}()
|
||||
|
||||
term := termstatus.New(globalOptions.stdout, globalOptions.stderr, globalOptions.Quiet)
|
||||
wg.Add(1)
|
||||
go func() {
|
||||
defer wg.Done()
|
||||
term.Run(cancelCtx)
|
||||
}()
|
||||
|
||||
// allow usage of warnf / verbosef
|
||||
prevStdout, prevStderr := globalOptions.stdout, globalOptions.stderr
|
||||
defer func() {
|
||||
globalOptions.stdout, globalOptions.stderr = prevStdout, prevStderr
|
||||
}()
|
||||
stdioWrapper := ui.NewStdioWrapper(term)
|
||||
globalOptions.stdout, globalOptions.stderr = stdioWrapper.Stdout(), stdioWrapper.Stderr()
|
||||
|
||||
return runRestore(ctx, restoreOptions, globalOptions, term, args)
|
||||
term, cancel := setupTermstatus()
|
||||
defer cancel()
|
||||
return runRestore(cmd.Context(), restoreOptions, globalOptions, term, args)
|
||||
},
|
||||
}
|
||||
|
||||
|
@ -201,10 +178,13 @@ func runRestore(ctx context.Context, opts RestoreOptions, gopts GlobalOptions,
|
|||
totalErrors++
|
||||
return nil
|
||||
}
|
||||
res.Warn = func(message string) {
|
||||
msg.E("Warning: %s\n", message)
|
||||
}
|
||||
|
||||
excludePatterns := filter.ParsePatterns(opts.Exclude)
|
||||
insensitiveExcludePatterns := filter.ParsePatterns(opts.InsensitiveExclude)
|
||||
selectExcludeFilter := func(item string, dstpath string, node *restic.Node) (selectedForRestore bool, childMayBeSelected bool) {
|
||||
selectExcludeFilter := func(item string, _ string, node *restic.Node) (selectedForRestore bool, childMayBeSelected bool) {
|
||||
matched, err := filter.List(excludePatterns, item)
|
||||
if err != nil {
|
||||
msg.E("error for exclude pattern: %v", err)
|
||||
|
@ -227,7 +207,7 @@ func runRestore(ctx context.Context, opts RestoreOptions, gopts GlobalOptions,
|
|||
|
||||
includePatterns := filter.ParsePatterns(opts.Include)
|
||||
insensitiveIncludePatterns := filter.ParsePatterns(opts.InsensitiveInclude)
|
||||
selectIncludeFilter := func(item string, dstpath string, node *restic.Node) (selectedForRestore bool, childMayBeSelected bool) {
|
||||
selectIncludeFilter := func(item string, _ string, node *restic.Node) (selectedForRestore bool, childMayBeSelected bool) {
|
||||
matched, childMayMatch, err := filter.ListWithChild(includePatterns, item)
|
||||
if err != nil {
|
||||
msg.E("error for include pattern: %v", err)
|
||||
|
|
|
@ -147,7 +147,7 @@ func rewriteSnapshot(ctx context.Context, repo *repository.Repository, sn *resti
|
|||
return rewriter.RewriteTree(ctx, repo, "/", *sn.Tree)
|
||||
}
|
||||
} else {
|
||||
filter = func(ctx context.Context, sn *restic.Snapshot) (restic.ID, error) {
|
||||
filter = func(_ context.Context, sn *restic.Snapshot) (restic.ID, error) {
|
||||
return *sn.Tree, nil
|
||||
}
|
||||
}
|
||||
|
@ -209,7 +209,7 @@ func filterAndReplaceSnapshot(ctx context.Context, repo restic.Repository, sn *r
|
|||
}
|
||||
|
||||
if newMetadata != nil && newMetadata.Hostname != "" {
|
||||
Verbosef("would set time to %s\n", newMetadata.Hostname)
|
||||
Verbosef("would set hostname to %s\n", newMetadata.Hostname)
|
||||
}
|
||||
|
||||
return true, nil
|
||||
|
|
|
@ -189,7 +189,7 @@ func runStats(ctx context.Context, opts StatsOptions, gopts GlobalOptions, args
|
|||
return nil
|
||||
}
|
||||
|
||||
func statsWalkSnapshot(ctx context.Context, snapshot *restic.Snapshot, repo restic.Repository, opts StatsOptions, stats *statsContainer) error {
|
||||
func statsWalkSnapshot(ctx context.Context, snapshot *restic.Snapshot, repo restic.Loader, opts StatsOptions, stats *statsContainer) error {
|
||||
if snapshot.Tree == nil {
|
||||
return fmt.Errorf("snapshot %s has nil tree", snapshot.ID().Str())
|
||||
}
|
||||
|
@ -203,7 +203,9 @@ func statsWalkSnapshot(ctx context.Context, snapshot *restic.Snapshot, repo rest
|
|||
}
|
||||
|
||||
hardLinkIndex := restorer.NewHardlinkIndex[struct{}]()
|
||||
err := walker.Walk(ctx, repo, *snapshot.Tree, restic.NewIDSet(), statsWalkTree(repo, opts, stats, hardLinkIndex))
|
||||
err := walker.Walk(ctx, repo, *snapshot.Tree, walker.WalkVisitor{
|
||||
ProcessNode: statsWalkTree(repo, opts, stats, hardLinkIndex),
|
||||
})
|
||||
if err != nil {
|
||||
return fmt.Errorf("walking tree %s: %v", *snapshot.Tree, err)
|
||||
}
|
||||
|
@ -211,13 +213,13 @@ func statsWalkSnapshot(ctx context.Context, snapshot *restic.Snapshot, repo rest
|
|||
return nil
|
||||
}
|
||||
|
||||
func statsWalkTree(repo restic.Repository, opts StatsOptions, stats *statsContainer, hardLinkIndex *restorer.HardlinkIndex[struct{}]) walker.WalkFunc {
|
||||
return func(parentTreeID restic.ID, npath string, node *restic.Node, nodeErr error) (bool, error) {
|
||||
func statsWalkTree(repo restic.Loader, opts StatsOptions, stats *statsContainer, hardLinkIndex *restorer.HardlinkIndex[struct{}]) walker.WalkFunc {
|
||||
return func(parentTreeID restic.ID, npath string, node *restic.Node, nodeErr error) error {
|
||||
if nodeErr != nil {
|
||||
return true, nodeErr
|
||||
return nodeErr
|
||||
}
|
||||
if node == nil {
|
||||
return true, nil
|
||||
return nil
|
||||
}
|
||||
|
||||
if opts.countMode == countModeUniqueFilesByContents || opts.countMode == countModeBlobsPerFile {
|
||||
|
@ -247,7 +249,7 @@ func statsWalkTree(repo restic.Repository, opts StatsOptions, stats *statsContai
|
|||
// is always a data blob since we're accessing it via a file's Content array
|
||||
blobSize, found := repo.LookupBlobSize(blobID, restic.DataBlob)
|
||||
if !found {
|
||||
return true, fmt.Errorf("blob %s not found for tree %s", blobID, parentTreeID)
|
||||
return fmt.Errorf("blob %s not found for tree %s", blobID, parentTreeID)
|
||||
}
|
||||
|
||||
// count the blob's size, then add this blob by this
|
||||
|
@ -274,11 +276,9 @@ func statsWalkTree(repo restic.Repository, opts StatsOptions, stats *statsContai
|
|||
hardLinkIndex.Add(node.Inode, node.DeviceID, struct{}{})
|
||||
stats.TotalSize += node.Size
|
||||
}
|
||||
|
||||
return false, nil
|
||||
}
|
||||
|
||||
return true, nil
|
||||
return nil
|
||||
}
|
||||
}
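The two hunks above switch statsWalkTree from the old (bool, error) walker callback to the reworked API, where walker.Walk takes a walker.WalkVisitor and the ProcessNode callback returns only an error. As a minimal sketch of a caller of that API, built only from what the hunks show (the countFiles name and the "file" type check are illustrative, not taken from this commit):

package main

import (
	"context"

	"github.com/restic/restic/internal/restic"
	"github.com/restic/restic/internal/walker"
)

// countFiles walks a snapshot tree and counts file nodes, following the same
// WalkVisitor/ProcessNode pattern as statsWalkSnapshot above.
func countFiles(ctx context.Context, repo restic.Loader, root restic.ID) (uint64, error) {
	var files uint64
	err := walker.Walk(ctx, repo, root, walker.WalkVisitor{
		ProcessNode: func(_ restic.ID, _ string, node *restic.Node, nodeErr error) error {
			if nodeErr != nil {
				// errors are now propagated directly instead of via (bool, error)
				return nodeErr
			}
			if node != nil && node.Type == "file" {
				files++
			}
			return nil
		},
	})
	return files, err
}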
|
||||
|
||||
|
@ -365,9 +365,9 @@ func statsDebug(ctx context.Context, repo restic.Repository) error {
|
|||
return nil
|
||||
}
|
||||
|
||||
func statsDebugFileType(ctx context.Context, repo restic.Repository, tpe restic.FileType) (*sizeHistogram, error) {
|
||||
func statsDebugFileType(ctx context.Context, repo restic.Lister, tpe restic.FileType) (*sizeHistogram, error) {
|
||||
hist := newSizeHistogram(2 * repository.MaxPackSize)
|
||||
err := repo.List(ctx, tpe, func(id restic.ID, size int64) error {
|
||||
err := repo.List(ctx, tpe, func(_ restic.ID, size int64) error {
|
||||
hist.Add(uint64(size))
|
||||
return nil
|
||||
})
|
||||
|
|
|
@ -19,7 +19,7 @@ EXIT STATUS
|
|||
Exit status is 0 if the command was successful, and non-zero if there was any error.
|
||||
`,
|
||||
DisableAutoGenTag: true,
|
||||
RunE: func(cmd *cobra.Command, args []string) error {
|
||||
RunE: func(cmd *cobra.Command, _ []string) error {
|
||||
return runUnlock(cmd.Context(), unlockOptions, globalOptions)
|
||||
},
|
||||
}
|
||||
|
|
|
@ -21,7 +21,7 @@ EXIT STATUS
|
|||
Exit status is 0 if the command was successful, and non-zero if there was any error.
|
||||
`,
|
||||
DisableAutoGenTag: true,
|
||||
Run: func(cmd *cobra.Command, args []string) {
|
||||
Run: func(_ *cobra.Command, _ []string) {
|
||||
if globalOptions.JSON {
|
||||
type jsonVersion struct {
|
||||
Version string `json:"version"`
|
||||
|
|
|
@ -3,9 +3,6 @@ package main
|
|||
import (
|
||||
"context"
|
||||
|
||||
"golang.org/x/sync/errgroup"
|
||||
|
||||
"github.com/restic/restic/internal/backend"
|
||||
"github.com/restic/restic/internal/restic"
|
||||
)
|
||||
|
||||
|
@ -24,46 +21,21 @@ func DeleteFilesChecked(ctx context.Context, gopts GlobalOptions, repo restic.Re
|
|||
// deleteFiles deletes the given fileList of fileType in parallel
|
||||
// if ignoreError=true, it will print a warning if there was an error, else it will abort.
|
||||
func deleteFiles(ctx context.Context, gopts GlobalOptions, ignoreError bool, repo restic.Repository, fileList restic.IDSet, fileType restic.FileType) error {
|
||||
totalCount := len(fileList)
|
||||
fileChan := make(chan restic.ID)
|
||||
wg, ctx := errgroup.WithContext(ctx)
|
||||
wg.Go(func() error {
|
||||
defer close(fileChan)
|
||||
for id := range fileList {
|
||||
select {
|
||||
case fileChan <- id:
|
||||
case <-ctx.Done():
|
||||
return ctx.Err()
|
||||
bar := newProgressMax(!gopts.JSON && !gopts.Quiet, 0, "files deleted")
|
||||
defer bar.Done()
|
||||
|
||||
return restic.ParallelRemove(ctx, repo, fileList, fileType, func(id restic.ID, err error) error {
|
||||
if err != nil {
|
||||
if !gopts.JSON {
|
||||
Warnf("unable to remove %v/%v from the repository\n", fileType, id)
|
||||
}
|
||||
if !ignoreError {
|
||||
return err
|
||||
}
|
||||
}
|
||||
if !gopts.JSON && gopts.verbosity > 2 {
|
||||
Verbosef("removed %v/%v\n", fileType, id)
|
||||
}
|
||||
return nil
|
||||
})
|
||||
|
||||
bar := newProgressMax(!gopts.JSON && !gopts.Quiet, uint64(totalCount), "files deleted")
|
||||
defer bar.Done()
|
||||
// deleting files is IO-bound
|
||||
workerCount := repo.Connections()
|
||||
for i := 0; i < int(workerCount); i++ {
|
||||
wg.Go(func() error {
|
||||
for id := range fileChan {
|
||||
h := backend.Handle{Type: fileType, Name: id.String()}
|
||||
err := repo.Backend().Remove(ctx, h)
|
||||
if err != nil {
|
||||
if !gopts.JSON {
|
||||
Warnf("unable to remove %v from the repository\n", h)
|
||||
}
|
||||
if !ignoreError {
|
||||
return err
|
||||
}
|
||||
}
|
||||
if !gopts.JSON && gopts.verbosity > 2 {
|
||||
Verbosef("removed %v\n", h)
|
||||
}
|
||||
bar.Add(1)
|
||||
}
|
||||
return nil
|
||||
})
|
||||
}
|
||||
err := wg.Wait()
|
||||
return err
|
||||
}, bar)
|
||||
}
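The hunk above replaces the hand-rolled errgroup worker pool with a call to restic.ParallelRemove plus a per-file callback. Purely as a reading aid, here is a rough sketch of what such a helper can look like, reconstructed from the deleted code; the name parallelRemove, its exact parameters and the use of golang.org/x/sync/errgroup are assumptions of this sketch, not the actual internal implementation:

package main

import (
	"context"

	"golang.org/x/sync/errgroup"

	"github.com/restic/restic/internal/restic"
	"github.com/restic/restic/internal/ui/progress"
)

// parallelRemove fans the IDs out to a bounded number of workers and reports
// every result through the callback, mirroring the structure of the removed code.
func parallelRemove(ctx context.Context, workers uint, ids restic.IDSet,
	remove func(ctx context.Context, id restic.ID) error,
	report func(id restic.ID, err error) error,
	bar *progress.Counter) error {

	idChan := make(chan restic.ID)
	wg, ctx := errgroup.WithContext(ctx)

	// feed the IDs to the workers
	wg.Go(func() error {
		defer close(idChan)
		for id := range ids {
			select {
			case idChan <- id:
			case <-ctx.Done():
				return ctx.Err()
			}
		}
		return nil
	})

	// deleting files is IO-bound, hence the worker count follows the connection limit
	for i := 0; i < int(workers); i++ {
		wg.Go(func() error {
			for id := range idChan {
				if err := report(id, remove(ctx, id)); err != nil {
					return err
				}
				bar.Add(1)
			}
			return nil
		})
	}

	return wg.Wait()
}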
|
||||
|
|
|
@ -426,7 +426,7 @@ func readExcludePatternsFromFiles(excludeFiles []string) ([]string, error) {
|
|||
return scanner.Err()
|
||||
}()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
return nil, fmt.Errorf("failed to read excludes from file %q: %w", filename, err)
|
||||
}
|
||||
}
|
||||
return excludes, nil
|
||||
|
|
|
@ -44,7 +44,7 @@ import (
|
|||
"golang.org/x/term"
|
||||
)
|
||||
|
||||
var version = "0.16.2-dev (compiled manually)"
|
||||
var version = "0.16.4-dev (compiled manually)"
|
||||
|
||||
// TimeFormat is the format used for all timestamps printed by restic.
|
||||
const TimeFormat = "2006-01-02 15:04:05"
|
||||
|
@ -68,6 +68,7 @@ type GlobalOptions struct {
|
|||
CleanupCache bool
|
||||
Compression repository.CompressionMode
|
||||
PackSize uint
|
||||
NoExtraVerify bool
|
||||
|
||||
backend.TransportOptions
|
||||
limiter.Limits
|
||||
|
@ -141,6 +142,7 @@ func init() {
|
|||
f.BoolVar(&globalOptions.InsecureTLS, "insecure-tls", false, "skip TLS certificate verification when connecting to the repository (insecure)")
|
||||
f.BoolVar(&globalOptions.CleanupCache, "cleanup-cache", false, "auto remove old cache directories")
|
||||
f.Var(&globalOptions.Compression, "compression", "compression mode (only available for repository format version 2), one of (auto|off|max) (default: $RESTIC_COMPRESSION)")
|
||||
f.BoolVar(&globalOptions.NoExtraVerify, "no-extra-verify", false, "skip additional verification of data before upload (see documentation)")
|
||||
f.IntVar(&globalOptions.Limits.UploadKb, "limit-upload", 0, "limits uploads to a maximum `rate` in KiB/s. (default: unlimited)")
|
||||
f.IntVar(&globalOptions.Limits.DownloadKb, "limit-download", 0, "limits downloads to a maximum `rate` in KiB/s. (default: unlimited)")
|
||||
f.UintVar(&globalOptions.PackSize, "pack-size", 0, "set target pack `size` in MiB, created pack files may be larger (default: $RESTIC_PACK_SIZE)")
|
||||
|
@ -455,8 +457,9 @@ func OpenRepository(ctx context.Context, opts GlobalOptions) (*repository.Reposi
|
|||
}
|
||||
|
||||
s, err := repository.New(be, repository.Options{
|
||||
Compression: opts.Compression,
|
||||
PackSize: opts.PackSize * 1024 * 1024,
|
||||
Compression: opts.Compression,
|
||||
PackSize: opts.PackSize * 1024 * 1024,
|
||||
NoExtraVerify: opts.NoExtraVerify,
|
||||
})
|
||||
if err != nil {
|
||||
return nil, errors.Fatal(err.Error())
|
||||
|
|
|
@ -37,7 +37,7 @@ The full documentation can be found at https://restic.readthedocs.io/ .
|
|||
SilenceUsage: true,
|
||||
DisableAutoGenTag: true,
|
||||
|
||||
PersistentPreRunE: func(c *cobra.Command, args []string) error {
|
||||
PersistentPreRunE: func(c *cobra.Command, _ []string) error {
|
||||
// set verbosity, default is one
|
||||
globalOptions.verbosity = 1
|
||||
if globalOptions.Quiet && globalOptions.Verbose > 0 {
|
||||
|
|
|
@ -30,7 +30,7 @@ func calculateProgressInterval(show bool, json bool) time.Duration {
|
|||
}
|
||||
|
||||
// newTerminalProgressMax returns a progress.Counter that prints to stdout or terminal if provided.
|
||||
func newGenericProgressMax(show bool, max uint64, description string, print func(status string)) *progress.Counter {
|
||||
func newGenericProgressMax(show bool, max uint64, description string, print func(status string, final bool)) *progress.Counter {
|
||||
if !show {
|
||||
return nil
|
||||
}
|
||||
|
@ -46,16 +46,18 @@ func newGenericProgressMax(show bool, max uint64, description string, print func
|
|||
ui.FormatDuration(d), ui.FormatPercent(v, max), v, max, description)
|
||||
}
|
||||
|
||||
print(status)
|
||||
if final {
|
||||
fmt.Print("\n")
|
||||
}
|
||||
print(status, final)
|
||||
})
|
||||
}
|
||||
|
||||
func newTerminalProgressMax(show bool, max uint64, description string, term *termstatus.Terminal) *progress.Counter {
|
||||
return newGenericProgressMax(show, max, description, func(status string) {
|
||||
term.SetStatus([]string{status})
|
||||
return newGenericProgressMax(show, max, description, func(status string, final bool) {
|
||||
if final {
|
||||
term.SetStatus([]string{})
|
||||
term.Print(status)
|
||||
} else {
|
||||
term.SetStatus([]string{status})
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
|
@ -64,7 +66,7 @@ func newProgressMax(show bool, max uint64, description string) *progress.Counter
|
|||
return newGenericProgressMax(show, max, description, printProgress)
|
||||
}
|
||||
|
||||
func printProgress(status string) {
|
||||
func printProgress(status string, final bool) {
|
||||
|
||||
canUpdateStatus := stdoutCanUpdateStatus()
|
||||
|
||||
|
@ -95,6 +97,9 @@ func printProgress(status string) {
|
|||
}
|
||||
|
||||
_, _ = os.Stdout.Write([]byte(clear + status + carriageControl))
|
||||
if final {
|
||||
_, _ = os.Stdout.Write([]byte("\n"))
|
||||
}
|
||||
}
|
||||
|
||||
func newIndexProgress(quiet bool, json bool) *progress.Counter {
|
||||
|
@ -104,3 +109,21 @@ func newIndexProgress(quiet bool, json bool) *progress.Counter {
|
|||
func newIndexTerminalProgress(quiet bool, json bool, term *termstatus.Terminal) *progress.Counter {
|
||||
return newTerminalProgressMax(!quiet && !json && stdoutIsTerminal(), 0, "index files loaded", term)
|
||||
}
|
||||
|
||||
type terminalProgressPrinter struct {
|
||||
term *termstatus.Terminal
|
||||
ui.Message
|
||||
show bool
|
||||
}
|
||||
|
||||
func (t *terminalProgressPrinter) NewCounter(description string) *progress.Counter {
|
||||
return newTerminalProgressMax(t.show, 0, description, t.term)
|
||||
}
|
||||
|
||||
func newTerminalProgressPrinter(verbosity uint, term *termstatus.Terminal) progress.Printer {
|
||||
return &terminalProgressPrinter{
|
||||
term: term,
|
||||
Message: *ui.NewMessage(term, verbosity),
|
||||
show: verbosity > 0,
|
||||
}
|
||||
}
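To make the role of the new terminalProgressPrinter concrete, a small usage sketch, assuming a termstatus.Terminal (see the new termstatus.go file below) and the progress.Counter methods Add/Done already used elsewhere in this diff; the function name, itemCount parameter and the work loop are placeholders of this sketch:

// deleteWithProgress is a hypothetical caller of the new helper.
func deleteWithProgress(gopts GlobalOptions, term *termstatus.Terminal, itemCount int) {
	printer := newTerminalProgressPrinter(gopts.verbosity, term)

	bar := printer.NewCounter("files deleted")
	defer bar.Done()
	for i := 0; i < itemCount; i++ {
		// ... remove one item here ...
		bar.Add(1)
	}
}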
|
||||
|
|
43 cmd/restic/termstatus.go Normal file
@@ -0,0 +1,43 @@
package main

import (
	"context"
	"sync"

	"github.com/restic/restic/internal/ui"
	"github.com/restic/restic/internal/ui/termstatus"
)

// setupTermstatus creates a new termstatus and reroutes globalOptions.{stdout,stderr} to it
// The returned function must be called to shut down the termstatus,
//
// Expected usage:
// ```
// term, cancel := setupTermstatus()
// defer cancel()
// // do stuff
// ```
func setupTermstatus() (*termstatus.Terminal, func()) {
	var wg sync.WaitGroup
	// only shutdown once cancel is called to ensure that no output is lost
	cancelCtx, cancel := context.WithCancel(context.Background())

	term := termstatus.New(globalOptions.stdout, globalOptions.stderr, globalOptions.Quiet)
	wg.Add(1)
	go func() {
		defer wg.Done()
		term.Run(cancelCtx)
	}()

	// use the termstatus for stdout/stderr
	prevStdout, prevStderr := globalOptions.stdout, globalOptions.stderr
	stdioWrapper := ui.NewStdioWrapper(term)
	globalOptions.stdout, globalOptions.stderr = stdioWrapper.Stdout(), stdioWrapper.Stderr()

	return term, func() {
		// shutdown termstatus
		globalOptions.stdout, globalOptions.stderr = prevStdout, prevStderr
		cancel()
		wg.Wait()
	}
}
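As the cmd_restore.go hunk near the top of this section shows, commands call this helper from their cobra RunE hook. Condensed, and with the surrounding cmdRestore variable and Use string assumed for illustration, the pattern is:

var cmdRestore = &cobra.Command{
	Use:               "restore [flags] snapshotID",
	DisableAutoGenTag: true,
	RunE: func(cmd *cobra.Command, args []string) error {
		// reroute stdout/stderr through the termstatus for the whole command run
		term, cancel := setupTermstatus()
		defer cancel()
		return runRestore(cmd.Context(), restoreOptions, globalOptions, term, args)
	},
}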
|
|
@ -35,15 +35,15 @@ environment variable ``RESTIC_REPOSITORY_FILE``.
|
|||
For automating the supply of the repository password to restic, several options
|
||||
exist:
|
||||
|
||||
* Setting the environment variable ``RESTIC_PASSWORD``
|
||||
* Setting the environment variable ``RESTIC_PASSWORD``
|
||||
|
||||
* Specifying the path to a file with the password via the option
|
||||
``--password-file`` or the environment variable ``RESTIC_PASSWORD_FILE``
|
||||
* Specifying the path to a file with the password via the option
|
||||
``--password-file`` or the environment variable ``RESTIC_PASSWORD_FILE``
|
||||
|
||||
* Configuring a program to be called when the password is needed via the
|
||||
option ``--password-command`` or the environment variable
|
||||
``RESTIC_PASSWORD_COMMAND``
|
||||
|
||||
* Configuring a program to be called when the password is needed via the
|
||||
option ``--password-command`` or the environment variable
|
||||
``RESTIC_PASSWORD_COMMAND``
|
||||
|
||||
The ``init`` command has an option called ``--repository-version`` which can
|
||||
be used to explicitly set the version of the new repository. By default, the
|
||||
current stable version is used (see table below). The alias ``latest`` will
|
||||
|
@ -487,7 +487,8 @@ Backblaze B2
|
|||
|
||||
Different from the B2 backend, restic's S3 backend will only hide no longer
|
||||
necessary files. Thus, make sure to setup lifecycle rules to eventually
|
||||
delete hidden files.
|
||||
delete hidden files. The lifecycle setting "Keep only the last version of the file"
|
||||
will keep only the most current version of a file. Read the [Backblaze documentation](https://www.backblaze.com/docs/cloud-storage-lifecycle-rules).
|
||||
|
||||
Restic can backup data to any Backblaze B2 bucket. You need to first setup the
|
||||
following environment variables with the credentials you can find in the
|
||||
|
@ -548,6 +549,14 @@ For authentication export one of the following variables:
|
|||
# For SAS
|
||||
$ export AZURE_ACCOUNT_SAS=<SAS_TOKEN>
|
||||
|
||||
For authentication using ``az login``, set the resource group name and ensure the user has
the minimum permissions of the role assignment ``Storage Blob Data Contributor`` on Azure RBAC.

.. code-block:: console

    $ export AZURE_RESOURCE_GROUP=<RESOURCE_GROUP_NAME>
    $ az login

Alternatively, if run on Azure, restic will automatically use service accounts configured
via the standard environment variables or Workload / Managed Identities.
|
||||
|
||||
|
@ -736,9 +745,9 @@ For debugging rclone, you can set the environment variable ``RCLONE_VERBOSE=2``.
|
|||
|
||||
The rclone backend has three additional options:
|
||||
|
||||
* ``-o rclone.program`` specifies the path to rclone, the default value is just ``rclone``
|
||||
* ``-o rclone.args`` allows setting the arguments passed to rclone, by default this is ``serve restic --stdio --b2-hard-delete``
|
||||
* ``-o rclone.timeout`` specifies timeout for waiting on repository opening, the default value is ``1m``
|
||||
* ``-o rclone.program`` specifies the path to rclone, the default value is just ``rclone``
|
||||
* ``-o rclone.args`` allows setting the arguments passed to rclone, by default this is ``serve restic --stdio --b2-hard-delete``
|
||||
* ``-o rclone.timeout`` specifies timeout for waiting on repository opening, the default value is ``1m``
|
||||
|
||||
The reason for the ``--b2-hard-delete`` parameters can be found in the corresponding GitHub `issue #1657`_.
|
||||
|
||||
|
|
|
@ -170,10 +170,10 @@ On **Unix** (including Linux and Mac), given that a file lives at the same
|
|||
location as a file in a previous backup, the following file metadata
|
||||
attributes have to match for its contents to be presumed unchanged:
|
||||
|
||||
* Modification timestamp (mtime).
|
||||
* Metadata change timestamp (ctime).
|
||||
* File size.
|
||||
* Inode number (internal number used to reference a file in a filesystem).
|
||||
* Modification timestamp (mtime).
|
||||
* Metadata change timestamp (ctime).
|
||||
* File size.
|
||||
* Inode number (internal number used to reference a file in a filesystem).
|
||||
|
||||
The reason for requiring both mtime and ctime to match is that Unix programs
|
||||
can freely change mtime (and some do). In such cases, a ctime change may be
|
||||
|
@ -182,9 +182,9 @@ the only hint that a file did change.
|
|||
The following ``restic backup`` command line flags modify the change detection
|
||||
rules:
|
||||
|
||||
* ``--force``: turn off change detection and rescan all files.
|
||||
* ``--ignore-ctime``: require mtime to match, but allow ctime to differ.
|
||||
* ``--ignore-inode``: require mtime to match, but allow inode number
|
||||
* ``--force``: turn off change detection and rescan all files.
|
||||
* ``--ignore-ctime``: require mtime to match, but allow ctime to differ.
|
||||
* ``--ignore-inode``: require mtime to match, but allow inode number
|
||||
and ctime to differ.
|
||||
|
||||
The option ``--ignore-inode`` exists to support FUSE-based filesystems and
|
||||
|
@ -250,9 +250,9 @@ It can be used like this:
|
|||
|
||||
This instructs restic to exclude files matching the following criteria:
|
||||
|
||||
* All files matching ``*.c`` (parameter ``--exclude``)
|
||||
* All files matching ``*.go`` (second line in ``excludes.txt``)
|
||||
* All files and sub-directories named ``bar`` which reside somewhere below a directory called ``foo`` (fourth line in ``excludes.txt``)
|
||||
* All files matching ``*.c`` (parameter ``--exclude``)
|
||||
* All files matching ``*.go`` (second line in ``excludes.txt``)
|
||||
* All files and sub-directories named ``bar`` which reside somewhere below a directory called ``foo`` (fourth line in ``excludes.txt``)
|
||||
|
||||
Patterns use the syntax of the Go function
|
||||
`filepath.Match <https://pkg.go.dev/path/filepath#Match>`__
|
||||
|
@ -270,8 +270,8 @@ environment variable (depending on your operating system).
|
|||
|
||||
Patterns need to match on complete path components. For example, the pattern ``foo``:
|
||||
|
||||
* matches ``/dir1/foo/dir2/file`` and ``/dir/foo``
|
||||
* does not match ``/dir/foobar`` or ``barfoo``
|
||||
* matches ``/dir1/foo/dir2/file`` and ``/dir/foo``
|
||||
* does not match ``/dir/foobar`` or ``barfoo``
|
||||
|
||||
A trailing ``/`` is ignored, a leading ``/`` anchors the pattern at the root directory.
|
||||
This means, ``/bin`` matches ``/bin/bash`` but does not match ``/usr/bin/restic``.
|
||||
|
@ -281,9 +281,9 @@ e.g. ``b*ash`` matches ``/bin/bash`` but does not match ``/bin/ash``. For this,
|
|||
the special wildcard ``**`` can be used to match arbitrary sub-directories: The
|
||||
pattern ``foo/**/bar`` matches:
|
||||
|
||||
* ``/dir1/foo/dir2/bar/file``
|
||||
* ``/foo/bar/file``
|
||||
* ``/tmp/foo/bar``
|
||||
* ``/dir1/foo/dir2/bar/file``
|
||||
* ``/foo/bar/file``
|
||||
* ``/tmp/foo/bar``
|
||||
|
||||
Spaces in patterns listed in an exclude file can be specified verbatim. That is,
|
||||
in order to exclude a file named ``foo bar star.txt``, put that just as it reads
|
||||
|
@ -298,9 +298,9 @@ some escaping in order to pass the name/pattern as a single argument to restic.
|
|||
|
||||
On most Unixy shells, you can either quote or use backslashes. For example:
|
||||
|
||||
* ``--exclude='foo bar star/foo.txt'``
|
||||
* ``--exclude="foo bar star/foo.txt"``
|
||||
* ``--exclude=foo\ bar\ star/foo.txt``
|
||||
* ``--exclude='foo bar star/foo.txt'``
|
||||
* ``--exclude="foo bar star/foo.txt"``
|
||||
* ``--exclude=foo\ bar\ star/foo.txt``
|
||||
|
||||
If a pattern starts with exclamation mark and matches a file that
|
||||
was previously matched by a regular pattern, the match is cancelled.
|
||||
|
@ -381,8 +381,8 @@ contains one *pattern* per line. The file must be encoded as UTF-8, or UTF-16
|
|||
with a byte-order mark. Leading and trailing whitespace is removed from the
|
||||
patterns. Empty lines and lines starting with a ``#`` are ignored and each
|
||||
pattern is expanded when read, such that special characters in it are expanded
|
||||
using the Go function `filepath.Glob <https://pkg.go.dev/path/filepath#Glob>`__
|
||||
- please see its documentation for the syntax you can use in the patterns.
|
||||
according to the syntax described in the documentation of the Go function
|
||||
`filepath.Match <https://pkg.go.dev/path/filepath#Match>`__.
|
||||
|
||||
The argument passed to ``--files-from-verbatim`` must be the name of a text file
|
||||
that contains one *path* per line, e.g. as generated by GNU ``find`` with the
|
||||
|
@ -482,13 +482,11 @@ want to save the access time for files and directories, you can pass the
|
|||
``--with-atime`` option to the ``backup`` command.
|
||||
|
||||
Note that ``restic`` does not back up some metadata associated with files. Of
|
||||
particular note are::
|
||||
|
||||
- file creation date on Unix platforms
|
||||
- inode flags on Unix platforms
|
||||
- file ownership and ACLs on Windows
|
||||
- the "hidden" flag on Windows
|
||||
particular note are:
|
||||
|
||||
* File creation date on Unix platforms
|
||||
* Inode flags on Unix platforms
|
||||
* File ownership and ACLs on Windows
|
||||
|
||||
Reading data from a command
|
||||
***************************
|
||||
|
@ -514,7 +512,6 @@ Restic uses the command exit code to determine whether the command succeeded. A
|
|||
non-zero exit code from the command causes restic to cancel the backup. This causes
|
||||
restic to fail with exit code 1. No snapshot will be created in this case.
|
||||
|
||||
|
||||
Reading data from stdin
|
||||
***********************
|
||||
|
||||
|
@ -555,7 +552,6 @@ the pipe and act accordingly (e.g., remove the last backup). Refer to the
|
|||
`Use the Unofficial Bash Strict Mode <http://redsymbol.net/articles/unofficial-bash-strict-mode/>`__
|
||||
for more details on this.
|
||||
|
||||
|
||||
Tags for backup
|
||||
***************
|
||||
|
||||
|
@ -688,15 +684,14 @@ The external programs that restic may execute include ``rclone`` (for rclone
|
|||
backends) and ``ssh`` (for the SFTP backend). These may respond to further
|
||||
environment variables and configuration files; see their respective manuals.
|
||||
|
||||
|
||||
Exit status codes
|
||||
*****************
|
||||
|
||||
Restic returns one of the following exit status codes after the backup command is run:
|
||||
|
||||
* 0 when the backup was successful (snapshot with all source files created)
|
||||
* 1 when there was a fatal error (no snapshot created)
|
||||
* 3 when some source files could not be read (incomplete snapshot with remaining files created)
|
||||
* 0 when the backup was successful (snapshot with all source files created)
|
||||
* 1 when there was a fatal error (no snapshot created)
|
||||
* 3 when some source files could not be read (incomplete snapshot with remaining files created)
|
||||
|
||||
Fatal errors occur for example when restic is unable to write to the backup destination, when
|
||||
there are network connectivity issues preventing successful communication, or when an invalid
|
||||
|
|
|
@ -82,6 +82,76 @@ Furthermore you can group the output by the same filters (host, paths, tags):
|
|||
1 snapshots
|
||||
|
||||
|
||||
Listing files in a snapshot
===========================

To get a list of the files in a specific snapshot you can use the ``ls`` command:

.. code-block:: console

    $ restic ls 073a90db

    snapshot 073a90db of [/home/user/work.txt] filtered by [] at 2024-01-21 16:51:18.474558607 +0100 CET):
    /home
    /home/user
    /home/user/work.txt

The special snapshot ID ``latest`` can be used to list files and directories of the latest snapshot in the repository.
The ``--host`` flag can be used in conjunction to select the latest snapshot originating from a certain host only.

.. code-block:: console

    $ restic ls --host kasimir latest

    snapshot 073a90db of [/home/user/work.txt] filtered by [] at 2024-01-21 16:51:18.474558607 +0100 CET):
    /home
    /home/user
    /home/user/work.txt

By default, ``ls`` prints all files in a snapshot.

File listings can optionally be filtered by directories. Any positional arguments after the snapshot ID are interpreted
as absolute directory paths, and only files inside those directories will be listed. Files in subdirectories are not
listed when filtering by directories. If the ``--recursive`` flag is used, then subdirectories are also included.
Any directory paths specified must be absolute (starting with a path separator); paths use the forward slash '/'
as separator.

.. code-block:: console

    $ restic ls latest /home

    snapshot 073a90db of [/home/user/work.txt] filtered by [/home] at 2024-01-21 16:51:18.474558607 +0100 CET):
    /home
    /home/user

.. code-block:: console

    $ restic ls --recursive latest /home

    snapshot 073a90db of [/home/user/work.txt] filtered by [/home] at 2024-01-21 16:51:18.474558607 +0100 CET):
    /home
    /home/user
    /home/user/work.txt

To show more details about the files in a snapshot, you can use the ``--long`` option. The columns include
file permissions, UID, GID, file size, modification time and file path. For scripting usage, the
``ls`` command supports the ``--json`` flag; the JSON output format is described at :ref:`ls json`.

.. code-block:: console

    $ restic ls --long latest

    snapshot 073a90db of [/home/user/work.txt] filtered by [] at 2024-01-21 16:51:18.474558607 +0100 CET):
    drwxr-xr-x     0     0      0 2024-01-21 16:50:52 /home
    drwxr-xr-x     0     0      0 2024-01-21 16:51:03 /home/user
    -rw-r--r--     0     0     18 2024-01-21 16:51:03 /home/user/work.txt

NCDU (NCurses Disk Usage) is a tool to analyse disk usage of directories. The ``ls`` command supports
outputting information about a snapshot in the NCDU format using the ``--ncdu`` option.

You can use it as follows: ``restic ls latest --ncdu | ncdu -f -``
|
||||
|
||||
|
||||
Copying snapshots between repositories
|
||||
======================================
|
||||
|
||||
|
@ -242,6 +312,7 @@ Currently, rewriting the hostname and the time of the backup is supported.
|
|||
This is possible using the ``rewrite`` command with the option ``--new-host`` followed by the desired new hostname or the option ``--new-time`` followed by the desired new timestamp.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ restic rewrite --new-host newhost --new-time "1999-01-01 11:11:11"
|
||||
|
||||
repository b7dbade3 opened (version 2, compression level auto)
|
||||
|
|
|
@ -60,6 +60,20 @@ only applied for the single run of restic. The option can also be set via the en
|
|||
variable ``RESTIC_COMPRESSION``.
|
||||
|
||||
|
||||
Data Verification
=================

To prevent the upload of corrupted data to the repository, which can happen due
to hardware issues or software bugs, restic verifies that generated files can
be decoded and contain the correct data beforehand. This increases the CPU usage
during backups. If necessary, you can disable this verification using the
``--no-extra-verify`` option of the ``backup`` command. However, in this case
you should verify the repository integrity more actively using
``restic check --read-data`` (or the similar ``--read-data-subset`` option).
Otherwise, data corruption due to hardware issues or software bugs might go
unnoticed.
|
||||
|
||||
|
||||
File Read Concurrency
|
||||
=====================
|
||||
|
||||
|
|
|
@ -174,3 +174,9 @@ To include the folder content at the root of the archive, you can use the ``<sna
|
|||
.. code-block:: console
|
||||
|
||||
$ restic -r /srv/restic-repo dump latest:/home/other/work / > restore.tar
|
||||
|
||||
It is also possible to ``dump`` the contents of a selected snapshot and folder
|
||||
structure to a file using the ``--target`` flag.
|
||||
|
||||
.. code-block:: console
|
||||
$ restic -r /srv/restic-repo dump latest / --target /home/linux.user/output.tar -a tar
|
|
@ -75,9 +75,6 @@ Several commands, in particular long running ones or those that generate a large
|
|||
use a format also known as JSON lines. It consists of a stream of new-line separated JSON
|
||||
messages. You can determine the nature of the message using the ``message_type`` field.
|
||||
|
||||
As an exception, the ``ls`` command uses the field ``struct_type`` instead.
|
||||
|
||||
|
||||
backup
|
||||
------
|
||||
|
||||
|
@ -409,6 +406,8 @@ The ``key list`` command returns an array of objects with the following structur
|
|||
+--------------+------------------------------------+
|
||||
|
||||
|
||||
.. _ls json:
|
||||
|
||||
ls
|
||||
--
|
||||
|
||||
|
@ -418,63 +417,67 @@ As an exception, the ``struct_type`` field is used to determine the message type
|
|||
snapshot
|
||||
^^^^^^^^
|
||||
|
||||
+----------------+--------------------------------------------------+
|
||||
| ``struct_type``| Always "snapshot" |
|
||||
+----------------+--------------------------------------------------+
|
||||
| ``time`` | Timestamp of when the backup was started |
|
||||
+----------------+--------------------------------------------------+
|
||||
| ``parent`` | ID of the parent snapshot |
|
||||
+----------------+--------------------------------------------------+
|
||||
| ``tree`` | ID of the root tree blob |
|
||||
+----------------+--------------------------------------------------+
|
||||
| ``paths`` | List of paths included in the backup |
|
||||
+----------------+--------------------------------------------------+
|
||||
| ``hostname`` | Hostname of the backed up machine |
|
||||
+----------------+--------------------------------------------------+
|
||||
| ``username`` | Username the backup command was run as |
|
||||
+----------------+--------------------------------------------------+
|
||||
| ``uid`` | ID of owner |
|
||||
+----------------+--------------------------------------------------+
|
||||
| ``gid`` | ID of group |
|
||||
+----------------+--------------------------------------------------+
|
||||
| ``excludes`` | List of paths and globs excluded from the backup |
|
||||
+----------------+--------------------------------------------------+
|
||||
| ``tags`` | List of tags for the snapshot in question |
|
||||
+----------------+--------------------------------------------------+
|
||||
| ``id`` | Snapshot ID |
|
||||
+----------------+--------------------------------------------------+
|
||||
| ``short_id`` | Snapshot ID, short form |
|
||||
+----------------+--------------------------------------------------+
|
||||
+------------------+--------------------------------------------------+
|
||||
| ``message_type`` | Always "snapshot" |
|
||||
+------------------+--------------------------------------------------+
|
||||
| ``struct_type`` | Always "snapshot" (deprecated) |
|
||||
+------------------+--------------------------------------------------+
|
||||
| ``time`` | Timestamp of when the backup was started |
|
||||
+------------------+--------------------------------------------------+
|
||||
| ``parent`` | ID of the parent snapshot |
|
||||
+------------------+--------------------------------------------------+
|
||||
| ``tree`` | ID of the root tree blob |
|
||||
+------------------+--------------------------------------------------+
|
||||
| ``paths`` | List of paths included in the backup |
|
||||
+------------------+--------------------------------------------------+
|
||||
| ``hostname`` | Hostname of the backed up machine |
|
||||
+------------------+--------------------------------------------------+
|
||||
| ``username`` | Username the backup command was run as |
|
||||
+------------------+--------------------------------------------------+
|
||||
| ``uid`` | ID of owner |
|
||||
+------------------+--------------------------------------------------+
|
||||
| ``gid`` | ID of group |
|
||||
+------------------+--------------------------------------------------+
|
||||
| ``excludes`` | List of paths and globs excluded from the backup |
|
||||
+------------------+--------------------------------------------------+
|
||||
| ``tags`` | List of tags for the snapshot in question |
|
||||
+------------------+--------------------------------------------------+
|
||||
| ``id`` | Snapshot ID |
|
||||
+------------------+--------------------------------------------------+
|
||||
| ``short_id`` | Snapshot ID, short form |
|
||||
+------------------+--------------------------------------------------+
|
||||
|
||||
|
||||
node
|
||||
^^^^
|
||||
|
||||
+-----------------+--------------------------+
|
||||
| ``struct_type`` | Always "node" |
|
||||
+-----------------+--------------------------+
|
||||
| ``name`` | Node name |
|
||||
+-----------------+--------------------------+
|
||||
| ``type`` | Node type |
|
||||
+-----------------+--------------------------+
|
||||
| ``path`` | Node path |
|
||||
+-----------------+--------------------------+
|
||||
| ``uid`` | UID of node |
|
||||
+-----------------+--------------------------+
|
||||
| ``gid`` | GID of node |
|
||||
+-----------------+--------------------------+
|
||||
| ``size`` | Size in bytes |
|
||||
+-----------------+--------------------------+
|
||||
| ``mode`` | Node mode |
|
||||
+-----------------+--------------------------+
|
||||
| ``atime`` | Node access time |
|
||||
+-----------------+--------------------------+
|
||||
| ``mtime`` | Node modification time |
|
||||
+-----------------+--------------------------+
|
||||
| ``ctime`` | Node creation time |
|
||||
+-----------------+--------------------------+
|
||||
| ``inode`` | Inode number of node |
|
||||
+-----------------+--------------------------+
|
||||
+------------------+----------------------------+
|
||||
| ``message_type`` | Always "node" |
|
||||
+------------------+----------------------------+
|
||||
| ``struct_type`` | Always "node" (deprecated) |
|
||||
+------------------+----------------------------+
|
||||
| ``name`` | Node name |
|
||||
+------------------+----------------------------+
|
||||
| ``type`` | Node type |
|
||||
+------------------+----------------------------+
|
||||
| ``path`` | Node path |
|
||||
+------------------+----------------------------+
|
||||
| ``uid`` | UID of node |
|
||||
+------------------+----------------------------+
|
||||
| ``gid`` | GID of node |
|
||||
+------------------+----------------------------+
|
||||
| ``size`` | Size in bytes |
|
||||
+------------------+----------------------------+
|
||||
| ``mode`` | Node mode |
|
||||
+------------------+----------------------------+
|
||||
| ``atime`` | Node access time |
|
||||
+------------------+----------------------------+
|
||||
| ``mtime`` | Node modification time |
|
||||
+------------------+----------------------------+
|
||||
| ``ctime`` | Node creation time |
|
||||
+------------------+----------------------------+
|
||||
| ``inode`` | Inode number of node |
|
||||
+------------------+----------------------------+
|
||||
|
||||
|
||||
restore
|
||||
|
|
|
@ -76,6 +76,10 @@ Similarly, if a repository is repeatedly damaged, please open an `issue on Githu
|
|||
somewhere. Please include the check output and additional information that might
|
||||
help locate the problem.
|
||||
|
||||
If ``check`` detects damaged pack files, it will show instructions on how to repair
|
||||
them using the ``repair pack`` command. Use that command instead of the "Repair the
|
||||
index" section in this guide.
|
||||
|
||||
|
||||
2. Backup the repository
|
||||
************************
|
||||
|
@ -104,6 +108,11 @@ whether your issue is already known and solved. Please take a look at the
|
|||
3. Repair the index
|
||||
*******************
|
||||
|
||||
.. note::
|
||||
|
||||
If the `check` command tells you to run `restic repair pack`, then use that
|
||||
command instead. It will repair the damaged pack files and also update the index.
|
||||
|
||||
Restic relies on its index to contain correct information about what data is
|
||||
stored in the repository. Thus, the first step to repair a repository is to
|
||||
repair the index:
|
||||
|
|
|
@ -7,18 +7,18 @@ API.
|
|||
|
||||
The following values are valid for ``{type}``:
|
||||
|
||||
* ``data``
|
||||
* ``keys``
|
||||
* ``locks``
|
||||
* ``snapshots``
|
||||
* ``index``
|
||||
* ``config``
|
||||
* ``data``
|
||||
* ``keys``
|
||||
* ``locks``
|
||||
* ``snapshots``
|
||||
* ``index``
|
||||
* ``config``
|
||||
|
||||
The API version is selected via the ``Accept`` HTTP header in the request. The
|
||||
following values are defined:
|
||||
|
||||
* ``application/vnd.x.restic.rest.v1`` or empty: Select API version 1
|
||||
* ``application/vnd.x.restic.rest.v2``: Select API version 2
|
||||
* ``application/vnd.x.restic.rest.v1`` or empty: Select API version 1
|
||||
* ``application/vnd.x.restic.rest.v2``: Select API version 2
|
||||
|
||||
The server will respond with the value of the highest version it supports in
|
||||
the ``Content-Type`` HTTP response header for the HTTP requests which should
|
||||
|
|
|
@ -488,6 +488,7 @@ _restic_backup()
|
|||
flags+=("--limit-upload=")
|
||||
two_word_flags+=("--limit-upload")
|
||||
flags+=("--no-cache")
|
||||
flags+=("--no-extra-verify")
|
||||
flags+=("--no-lock")
|
||||
flags+=("--option=")
|
||||
two_word_flags+=("--option")
|
||||
|
@ -560,6 +561,7 @@ _restic_cache()
|
|||
flags+=("--limit-upload=")
|
||||
two_word_flags+=("--limit-upload")
|
||||
flags+=("--no-cache")
|
||||
flags+=("--no-extra-verify")
|
||||
flags+=("--no-lock")
|
||||
flags+=("--option=")
|
||||
two_word_flags+=("--option")
|
||||
|
@ -624,6 +626,7 @@ _restic_cat()
|
|||
flags+=("--limit-upload=")
|
||||
two_word_flags+=("--limit-upload")
|
||||
flags+=("--no-cache")
|
||||
flags+=("--no-extra-verify")
|
||||
flags+=("--no-lock")
|
||||
flags+=("--option=")
|
||||
two_word_flags+=("--option")
|
||||
|
@ -696,6 +699,7 @@ _restic_check()
|
|||
flags+=("--limit-upload=")
|
||||
two_word_flags+=("--limit-upload")
|
||||
flags+=("--no-cache")
|
||||
flags+=("--no-extra-verify")
|
||||
flags+=("--no-lock")
|
||||
flags+=("--option=")
|
||||
two_word_flags+=("--option")
|
||||
|
@ -794,6 +798,7 @@ _restic_copy()
|
|||
flags+=("--limit-upload=")
|
||||
two_word_flags+=("--limit-upload")
|
||||
flags+=("--no-cache")
|
||||
flags+=("--no-extra-verify")
|
||||
flags+=("--no-lock")
|
||||
flags+=("--option=")
|
||||
two_word_flags+=("--option")
|
||||
|
@ -860,6 +865,7 @@ _restic_diff()
|
|||
flags+=("--limit-upload=")
|
||||
two_word_flags+=("--limit-upload")
|
||||
flags+=("--no-cache")
|
||||
flags+=("--no-extra-verify")
|
||||
flags+=("--no-lock")
|
||||
flags+=("--option=")
|
||||
two_word_flags+=("--option")
|
||||
|
@ -944,6 +950,7 @@ _restic_dump()
|
|||
flags+=("--limit-upload=")
|
||||
two_word_flags+=("--limit-upload")
|
||||
flags+=("--no-cache")
|
||||
flags+=("--no-extra-verify")
|
||||
flags+=("--no-lock")
|
||||
flags+=("--option=")
|
||||
two_word_flags+=("--option")
|
||||
|
@ -1058,6 +1065,7 @@ _restic_find()
|
|||
flags+=("--limit-upload=")
|
||||
two_word_flags+=("--limit-upload")
|
||||
flags+=("--no-cache")
|
||||
flags+=("--no-extra-verify")
|
||||
flags+=("--no-lock")
|
||||
flags+=("--option=")
|
||||
two_word_flags+=("--option")
|
||||
|
@ -1228,6 +1236,7 @@ _restic_forget()
|
|||
flags+=("--limit-upload=")
|
||||
two_word_flags+=("--limit-upload")
|
||||
flags+=("--no-cache")
|
||||
flags+=("--no-extra-verify")
|
||||
flags+=("--no-lock")
|
||||
flags+=("--option=")
|
||||
two_word_flags+=("--option")
|
||||
|
@ -1312,6 +1321,7 @@ _restic_generate()
|
|||
flags+=("--limit-upload=")
|
||||
two_word_flags+=("--limit-upload")
|
||||
flags+=("--no-cache")
|
||||
flags+=("--no-extra-verify")
|
||||
flags+=("--no-lock")
|
||||
flags+=("--option=")
|
||||
two_word_flags+=("--option")
|
||||
|
@ -1372,6 +1382,7 @@ _restic_help()
|
|||
flags+=("--limit-upload=")
|
||||
two_word_flags+=("--limit-upload")
|
||||
flags+=("--no-cache")
|
||||
flags+=("--no-extra-verify")
|
||||
flags+=("--no-lock")
|
||||
flags+=("--option=")
|
||||
two_word_flags+=("--option")
|
||||
|
@ -1463,6 +1474,7 @@ _restic_init()
|
|||
flags+=("--limit-upload=")
|
||||
two_word_flags+=("--limit-upload")
|
||||
flags+=("--no-cache")
|
||||
flags+=("--no-extra-verify")
|
||||
flags+=("--no-lock")
|
||||
flags+=("--option=")
|
||||
two_word_flags+=("--option")
|
||||
|
@ -1539,6 +1551,7 @@ _restic_key()
|
|||
flags+=("--limit-upload=")
|
||||
two_word_flags+=("--limit-upload")
|
||||
flags+=("--no-cache")
|
||||
flags+=("--no-extra-verify")
|
||||
flags+=("--no-lock")
|
||||
flags+=("--option=")
|
||||
two_word_flags+=("--option")
|
||||
|
@ -1603,6 +1616,7 @@ _restic_list()
|
|||
flags+=("--limit-upload=")
|
||||
two_word_flags+=("--limit-upload")
|
||||
flags+=("--no-cache")
|
||||
flags+=("--no-extra-verify")
|
||||
flags+=("--no-lock")
|
||||
flags+=("--option=")
|
||||
two_word_flags+=("--option")
|
||||
|
@ -1689,6 +1703,7 @@ _restic_ls()
|
|||
flags+=("--limit-upload=")
|
||||
two_word_flags+=("--limit-upload")
|
||||
flags+=("--no-cache")
|
||||
flags+=("--no-extra-verify")
|
||||
flags+=("--no-lock")
|
||||
flags+=("--option=")
|
||||
two_word_flags+=("--option")
|
||||
|
@ -1757,6 +1772,7 @@ _restic_migrate()
|
|||
flags+=("--limit-upload=")
|
||||
two_word_flags+=("--limit-upload")
|
||||
flags+=("--no-cache")
|
||||
flags+=("--no-extra-verify")
|
||||
flags+=("--no-lock")
|
||||
flags+=("--option=")
|
||||
two_word_flags+=("--option")
|
||||
|
@ -1849,6 +1865,7 @@ _restic_mount()
|
|||
flags+=("--limit-upload=")
|
||||
two_word_flags+=("--limit-upload")
|
||||
flags+=("--no-cache")
|
||||
flags+=("--no-extra-verify")
|
||||
flags+=("--no-lock")
|
||||
flags+=("--option=")
|
||||
two_word_flags+=("--option")
|
||||
|
@ -1935,6 +1952,7 @@ _restic_prune()
|
|||
flags+=("--limit-upload=")
|
||||
two_word_flags+=("--limit-upload")
|
||||
flags+=("--no-cache")
|
||||
flags+=("--no-extra-verify")
|
||||
flags+=("--no-lock")
|
||||
flags+=("--option=")
|
||||
two_word_flags+=("--option")
|
||||
|
@ -1999,6 +2017,7 @@ _restic_recover()
|
|||
flags+=("--limit-upload=")
|
||||
two_word_flags+=("--limit-upload")
|
||||
flags+=("--no-cache")
|
||||
flags+=("--no-extra-verify")
|
||||
flags+=("--no-lock")
|
||||
flags+=("--option=")
|
||||
two_word_flags+=("--option")
|
||||
|
@ -2059,6 +2078,7 @@ _restic_repair_help()
|
|||
flags+=("--limit-upload=")
|
||||
two_word_flags+=("--limit-upload")
|
||||
flags+=("--no-cache")
|
||||
flags+=("--no-extra-verify")
|
||||
flags+=("--no-lock")
|
||||
flags+=("--option=")
|
||||
two_word_flags+=("--option")
|
||||
|
@ -2126,6 +2146,7 @@ _restic_repair_index()
|
|||
flags+=("--limit-upload=")
|
||||
two_word_flags+=("--limit-upload")
|
||||
flags+=("--no-cache")
|
||||
flags+=("--no-extra-verify")
|
||||
flags+=("--no-lock")
|
||||
flags+=("--option=")
|
||||
two_word_flags+=("--option")
|
||||
|
@ -2190,6 +2211,7 @@ _restic_repair_packs()
|
|||
flags+=("--limit-upload=")
|
||||
two_word_flags+=("--limit-upload")
|
||||
flags+=("--no-cache")
|
||||
flags+=("--no-extra-verify")
|
||||
flags+=("--no-lock")
|
||||
flags+=("--option=")
|
||||
two_word_flags+=("--option")
|
||||
|
@ -2274,6 +2296,7 @@ _restic_repair_snapshots()
|
|||
flags+=("--limit-upload=")
|
||||
two_word_flags+=("--limit-upload")
|
||||
flags+=("--no-cache")
|
||||
flags+=("--no-extra-verify")
|
||||
flags+=("--no-lock")
|
||||
flags+=("--option=")
|
||||
two_word_flags+=("--option")
|
||||
|
@ -2342,6 +2365,7 @@ _restic_repair()
|
|||
flags+=("--limit-upload=")
|
||||
two_word_flags+=("--limit-upload")
|
||||
flags+=("--no-cache")
|
||||
flags+=("--no-extra-verify")
|
||||
flags+=("--no-lock")
|
||||
flags+=("--option=")
|
||||
two_word_flags+=("--option")
|
||||
|
@ -2450,6 +2474,7 @@ _restic_restore()
|
|||
flags+=("--limit-upload=")
|
||||
two_word_flags+=("--limit-upload")
|
||||
flags+=("--no-cache")
|
||||
flags+=("--no-extra-verify")
|
||||
flags+=("--no-lock")
|
||||
flags+=("--option=")
|
||||
two_word_flags+=("--option")
|
||||
|
@ -2552,6 +2577,7 @@ _restic_rewrite()
|
|||
flags+=("--limit-upload=")
|
||||
two_word_flags+=("--limit-upload")
|
||||
flags+=("--no-cache")
|
||||
flags+=("--no-extra-verify")
|
||||
flags+=("--no-lock")
|
||||
flags+=("--option=")
|
||||
two_word_flags+=("--option")
|
||||
|
@ -2620,6 +2646,7 @@ _restic_self-update()
|
|||
flags+=("--limit-upload=")
|
||||
two_word_flags+=("--limit-upload")
|
||||
flags+=("--no-cache")
|
||||
flags+=("--no-extra-verify")
|
||||
flags+=("--no-lock")
|
||||
flags+=("--option=")
|
||||
two_word_flags+=("--option")
|
||||
|
@ -2712,6 +2739,7 @@ _restic_snapshots()
|
|||
flags+=("--limit-upload=")
|
||||
two_word_flags+=("--limit-upload")
|
||||
flags+=("--no-cache")
|
||||
flags+=("--no-extra-verify")
|
||||
flags+=("--no-lock")
|
||||
flags+=("--option=")
|
||||
two_word_flags+=("--option")
|
||||
|
@ -2794,6 +2822,7 @@ _restic_stats()
|
|||
flags+=("--limit-upload=")
|
||||
two_word_flags+=("--limit-upload")
|
||||
flags+=("--no-cache")
|
||||
flags+=("--no-extra-verify")
|
||||
flags+=("--no-lock")
|
||||
flags+=("--option=")
|
||||
two_word_flags+=("--option")
|
||||
|
@ -2884,6 +2913,7 @@ _restic_tag()
|
|||
flags+=("--limit-upload=")
|
||||
two_word_flags+=("--limit-upload")
|
||||
flags+=("--no-cache")
|
||||
flags+=("--no-extra-verify")
|
||||
flags+=("--no-lock")
|
||||
flags+=("--option=")
|
||||
two_word_flags+=("--option")
|
||||
|
@ -2950,6 +2980,7 @@ _restic_unlock()
|
|||
flags+=("--limit-upload=")
|
||||
two_word_flags+=("--limit-upload")
|
||||
flags+=("--no-cache")
|
||||
flags+=("--no-extra-verify")
|
||||
flags+=("--no-lock")
|
||||
flags+=("--option=")
|
||||
two_word_flags+=("--option")
|
||||
|
@ -3014,6 +3045,7 @@ _restic_version()
|
|||
flags+=("--limit-upload=")
|
||||
two_word_flags+=("--limit-upload")
|
||||
flags+=("--no-cache")
|
||||
flags+=("--no-extra-verify")
|
||||
flags+=("--no-lock")
|
||||
flags+=("--option=")
|
||||
two_word_flags+=("--option")
|
||||
|
@ -3106,6 +3138,7 @@ _restic_root_command()
|
|||
flags+=("--limit-upload=")
|
||||
two_word_flags+=("--limit-upload")
|
||||
flags+=("--no-cache")
|
||||
flags+=("--no-extra-verify")
|
||||
flags+=("--no-lock")
|
||||
flags+=("--option=")
|
||||
two_word_flags+=("--option")
|
||||
|
|
|
@ -824,4 +824,4 @@ Changes
|
|||
Repository Version 2
|
||||
--------------------
|
||||
|
||||
* Support compression for blobs (data/tree) and index / lock / snapshot files
|
||||
* Support compression for blobs (data/tree) and index / lock / snapshot files
|
||||
|
|
|
@ -9,14 +9,14 @@ restic for version 0.10.0 and later. For restic versions down to 0.9.3 please
|
|||
refer to the documentation for the respective version. The binary produced
|
||||
depends on the following things:
|
||||
|
||||
* The source code for the release
|
||||
* The exact version of the official `Go compiler <https://go.dev>`__ used to produce the binaries (running ``restic version`` will print this)
|
||||
* The architecture and operating system the Go compiler runs on (Linux, ``amd64``)
|
||||
* The build tags (for official binaries, it's the tag ``selfupdate``)
|
||||
* The path where the source code is extracted to (``/restic``)
|
||||
* The path to the Go compiler (``/usr/local/go``)
|
||||
* The path to the Go workspace (``GOPATH=/home/build/go``)
|
||||
* Other environment variables (mostly ``$GOOS``, ``$GOARCH``, ``$CGO_ENABLED``)
|
||||
* The source code for the release
|
||||
* The exact version of the official `Go compiler <https://go.dev>`__ used to produce the binaries (running ``restic version`` will print this)
|
||||
* The architecture and operating system the Go compiler runs on (Linux, ``amd64``)
|
||||
* The build tags (for official binaries, it's the tag ``selfupdate``)
|
||||
* The path where the source code is extracted to (``/restic``)
|
||||
* The path to the Go compiler (``/usr/local/go``)
|
||||
* The path to the Go workspace (``GOPATH=/home/build/go``)
|
||||
* Other environment variables (mostly ``$GOOS``, ``$GOARCH``, ``$CGO_ENABLED``)
|
||||
|
||||
In addition, the compressed ZIP files for Windows depend on the modification
timestamp and filename of the binary contained in them. In order to reproduce the
|
||||
|
@ -69,9 +69,9 @@ container can be found in the `GitHub repository
|
|||
<https://github.com/restic/builder>`__
|
||||
|
||||
The container serves the following goals:
|
||||
* Have a very controlled environment which is independent from the local system
|
||||
* Make it easy to have the correct version of the Go compiler at the right path
|
||||
* Make it easy to pass in the source code to build at a well-defined path
|
||||
* Have a very controlled environment which is independent from the local system
|
||||
* Make it easy to have the correct version of the Go compiler at the right path
|
||||
* Make it easy to pass in the source code to build at a well-defined path
|
||||
|
||||
The following steps are necessary to build the binaries:
|
||||
|
||||
|
@ -113,6 +113,26 @@ The following steps are necessary to build the binaries:
|
|||
restic/builder \
|
||||
go run helpers/build-release-binaries/main.go --version 0.14.0 --verbose
|
||||
|
||||
Verifying the Official Binaries
*******************************

To verify the official binaries, you can either build them yourself using the above
instructions or use the ``helpers/verify-release-binaries.sh`` script from the restic
repository. Run it as ``helpers/verify-release-binaries.sh restic_version go_version``.
The specified Go compiler version must match the one used to build the official
binaries. For example, for restic 0.16.2 the command would be
``helpers/verify-release-binaries.sh 0.16.2 1.21.3``.

The script requires bash, curl, docker, git, gpg, shasum and tar.

The script first downloads all release binaries and checks the SHASUM256 file and its
signature. Afterwards it checks that the tarball matches the restic git repository
contents, then reproduces first the builder docker container and finally the
restic binaries. As a final step, the restic binary in both the Docker Hub images
and the GitHub container registry is verified. If any step fails, the script
issues a warning.
|
||||
|
||||
|
||||
Prepare a New Release
|
||||
*********************
|
||||
|
||||
|
|
|
@ -171,6 +171,10 @@ Exit status is 3 if some source data could not be read (incomplete snapshot crea
|
|||
\fB--no-cache\fP[=false]
|
||||
do not use a local cache
|
||||
|
||||
.PP
|
||||
\fB--no-extra-verify\fP[=false]
|
||||
skip additional verification of data before upload (see documentation)
|
||||
|
||||
.PP
|
||||
\fB--no-lock\fP[=false]
|
||||
do not lock the repository, this allows some operations on read-only repositories
|
||||
|
|
|
@ -80,6 +80,10 @@ Exit status is 0 if the command was successful, and non-zero if there was any er
|
|||
\fB--no-cache\fP[=false]
|
||||
do not use a local cache
|
||||
|
||||
.PP
|
||||
\fB--no-extra-verify\fP[=false]
|
||||
skip additional verification of data before upload (see documentation)
|
||||
|
||||
.PP
|
||||
\fB--no-lock\fP[=false]
|
||||
do not lock the repository, this allows some operations on read-only repositories
|
||||
|
|
|
@ -68,6 +68,10 @@ Exit status is 0 if the command was successful, and non-zero if there was any er
|
|||
\fB--no-cache\fP[=false]
|
||||
do not use a local cache
|
||||
|
||||
.PP
|
||||
\fB--no-extra-verify\fP[=false]
|
||||
skip additional verification of data before upload (see documentation)
|
||||
|
||||
.PP
\fB--no-lock\fP[=false]
do not lock the repository, this allows some operations on read-only repositories

@@ -85,6 +85,10 @@ Exit status is 0 if the command was successful, and non-zero if there was any er
\fB--no-cache\fP[=false]
do not use a local cache
.PP
\fB--no-extra-verify\fP[=false]
skip additional verification of data before upload (see documentation)
.PP
\fB--no-lock\fP[=false]
do not lock the repository, this allows some operations on read-only repositories

@@ -109,6 +109,10 @@ new destination repository using the "init" command.
\fB--no-cache\fP[=false]
do not use a local cache
.PP
\fB--no-extra-verify\fP[=false]
skip additional verification of data before upload (see documentation)
.PP
\fB--no-lock\fP[=false]
do not lock the repository, this allows some operations on read-only repositories

@@ -93,6 +93,10 @@ Exit status is 0 if the command was successful, and non-zero if there was any er
\fB--no-cache\fP[=false]
do not use a local cache
.PP
\fB--no-extra-verify\fP[=false]
skip additional verification of data before upload (see documentation)
.PP
\fB--no-lock\fP[=false]
do not lock the repository, this allows some operations on read-only repositories

@@ -96,6 +96,10 @@ Exit status is 0 if the command was successful, and non-zero if there was any er
\fB--no-cache\fP[=false]
do not use a local cache
.PP
\fB--no-extra-verify\fP[=false]
skip additional verification of data before upload (see documentation)
.PP
\fB--no-lock\fP[=false]
do not lock the repository, this allows some operations on read-only repositories

@@ -117,6 +117,10 @@ It can also be used to search for restic blobs or trees for troubleshooting.
\fB--no-cache\fP[=false]
do not use a local cache
.PP
\fB--no-extra-verify\fP[=false]
skip additional verification of data before upload (see documentation)
.PP
\fB--no-lock\fP[=false]
do not lock the repository, this allows some operations on read-only repositories

@@ -179,6 +179,10 @@ Exit status is 0 if the command was successful, and non-zero if there was any er
\fB--no-cache\fP[=false]
do not use a local cache
.PP
\fB--no-extra-verify\fP[=false]
skip additional verification of data before upload (see documentation)
.PP
\fB--no-lock\fP[=false]
do not lock the repository, this allows some operations on read-only repositories

@@ -89,6 +89,10 @@ Exit status is 0 if the command was successful, and non-zero if there was any er
\fB--no-cache\fP[=false]
do not use a local cache
.PP
\fB--no-extra-verify\fP[=false]
skip additional verification of data before upload (see documentation)
.PP
\fB--no-lock\fP[=false]
do not lock the repository, this allows some operations on read-only repositories

@@ -96,6 +96,10 @@ Exit status is 0 if the command was successful, and non-zero if there was any er
\fB--no-cache\fP[=false]
do not use a local cache
.PP
\fB--no-extra-verify\fP[=false]
skip additional verification of data before upload (see documentation)
.PP
\fB--no-lock\fP[=false]
do not lock the repository, this allows some operations on read-only repositories

@@ -80,6 +80,10 @@ Exit status is 0 if the command was successful, and non-zero if there was any er
\fB--no-cache\fP[=false]
do not use a local cache
.PP
\fB--no-extra-verify\fP[=false]
skip additional verification of data before upload (see documentation)
.PP
\fB--no-lock\fP[=false]
do not lock the repository, this allows some operations on read-only repositories

@@ -68,6 +68,10 @@ Exit status is 0 if the command was successful, and non-zero if there was any er
\fB--no-cache\fP[=false]
do not use a local cache
.PP
\fB--no-extra-verify\fP[=false]
skip additional verification of data before upload (see documentation)
.PP
\fB--no-lock\fP[=false]
do not lock the repository, this allows some operations on read-only repositories

@@ -107,6 +107,10 @@ Exit status is 0 if the command was successful, and non-zero if there was any er
\fB--no-cache\fP[=false]
do not use a local cache
.PP
\fB--no-extra-verify\fP[=false]
skip additional verification of data before upload (see documentation)
.PP
\fB--no-lock\fP[=false]
do not lock the repository, this allows some operations on read-only repositories

@@ -74,6 +74,10 @@ Exit status is 0 if the command was successful, and non-zero if there was any er
\fB--no-cache\fP[=false]
do not use a local cache
.PP
\fB--no-extra-verify\fP[=false]
skip additional verification of data before upload (see documentation)
.PP
\fB--no-lock\fP[=false]
do not lock the repository, this allows some operations on read-only repositories

@@ -144,6 +144,10 @@ Exit status is 0 if the command was successful, and non-zero if there was any er
\fB--no-cache\fP[=false]
do not use a local cache
.PP
\fB--no-extra-verify\fP[=false]
skip additional verification of data before upload (see documentation)
.PP
\fB--no-lock\fP[=false]
do not lock the repository, this allows some operations on read-only repositories

@@ -97,6 +97,10 @@ Exit status is 0 if the command was successful, and non-zero if there was any er
\fB--no-cache\fP[=false]
do not use a local cache
.PP
\fB--no-extra-verify\fP[=false]
skip additional verification of data before upload (see documentation)
.PP
\fB--no-lock\fP[=false]
do not lock the repository, this allows some operations on read-only repositories

@@ -70,6 +70,10 @@ Exit status is 0 if the command was successful, and non-zero if there was any er
\fB--no-cache\fP[=false]
do not use a local cache
.PP
\fB--no-extra-verify\fP[=false]
skip additional verification of data before upload (see documentation)
.PP
\fB--no-lock\fP[=false]
do not lock the repository, this allows some operations on read-only repositories

@@ -73,6 +73,10 @@ Exit status is 0 if the command was successful, and non-zero if there was any er
\fB--no-cache\fP[=false]
do not use a local cache
.PP
\fB--no-extra-verify\fP[=false]
skip additional verification of data before upload (see documentation)
.PP
\fB--no-lock\fP[=false]
do not lock the repository, this allows some operations on read-only repositories

@@ -72,6 +72,10 @@ Exit status is 0 if the command was successful, and non-zero if there was any er
\fB--no-cache\fP[=false]
do not use a local cache
.PP
\fB--no-extra-verify\fP[=false]
skip additional verification of data before upload (see documentation)
.PP
\fB--no-lock\fP[=false]
do not lock the repository, this allows some operations on read-only repositories

@@ -107,6 +107,10 @@ Exit status is 0 if the command was successful, and non-zero if there was any er
\fB--no-cache\fP[=false]
do not use a local cache
.PP
\fB--no-extra-verify\fP[=false]
skip additional verification of data before upload (see documentation)
.PP
\fB--no-lock\fP[=false]
do not lock the repository, this allows some operations on read-only repositories

@@ -63,6 +63,10 @@ Repair the repository
\fB--no-cache\fP[=false]
do not use a local cache
.PP
\fB--no-extra-verify\fP[=false]
skip additional verification of data before upload (see documentation)
.PP
\fB--no-lock\fP[=false]
do not lock the repository, this allows some operations on read-only repositories

@@ -117,6 +117,10 @@ Exit status is 0 if the command was successful, and non-zero if there was any er
\fB--no-cache\fP[=false]
do not use a local cache
.PP
\fB--no-extra-verify\fP[=false]
skip additional verification of data before upload (see documentation)
.PP
\fB--no-lock\fP[=false]
do not lock the repository, this allows some operations on read-only repositories

@@ -121,6 +121,10 @@ Exit status is 0 if the command was successful, and non-zero if there was any er
\fB--no-cache\fP[=false]
do not use a local cache
.PP
\fB--no-extra-verify\fP[=false]
skip additional verification of data before upload (see documentation)
.PP
\fB--no-lock\fP[=false]
do not lock the repository, this allows some operations on read-only repositories
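Each of the man-page hunks above adds the same four lines, documenting the new \fB--no-extra-verify\fP global option between the existing \fB--no-cache\fP and \fB--no-lock\fP entries. Purely as an illustrative sketch (the repository path and backup source below are placeholders, not values taken from this diff), these flags would be used on the restic command line roughly like this:

# Back up a directory, skipping the additional pre-upload verification
$ restic -r /srv/restic-repo backup --no-extra-verify ~/data

# List snapshots without taking a repository lock and without the local cache
$ restic -r /srv/restic-repo snapshots --no-lock --no-cache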
Some files were not shown because too many files have changed in this diff.