Compare commits


64 commits

Author SHA1 Message Date
Alexander Neumann
686f24b578 doc: Clarify B2 application keys 2018-08-02 21:14:05 +02:00
Alexander Neumann
247d2b7215 Merge pull request #1921 from salbertson/patch-1
Add a "Reviewed by Hound" badge
2018-08-02 20:03:43 +02:00
Alexander Neumann
017cd113d3 Merge pull request #1922 from salbertson/patch-2
Use https when linking to chris.beams.io
2018-08-02 20:03:40 +02:00
Scott Albertson
f744c2553e
Use https when linking to chris.beams.io
Why not link to [How to Write a Git Commit Message](https://chris.beams.io/posts/git-commit/) using HTTPS? It's going to redirect anyway.
2018-08-01 14:59:06 -07:00
Alexander Neumann
56cd6bd495 Merge pull request #1919 from restic/update-deps
Update dependencies
2018-08-01 23:56:55 +02:00
Alexander Neumann
bff635bc5f Update dependencies, enable pruning for vendor/
So, `dep` got a nice new feature to remove tests and non-Go files from
`vendor/`, and this brings the size of the vendor directory from ~300MiB
down to ~20MiB. We do that now.
2018-08-01 21:32:15 +02:00
Alexander Neumann
3422c1ca83 Merge pull request #1729 from mholt/stats
Implement `restic stats` command to get more info about a repository
2018-07-31 23:24:36 +02:00
Matthew Holt
f6b2731aa5 stats: Add manual doc, improve -h doc
Also rename files-by-content to files-by-contents, once and for all
2018-07-31 22:54:10 +02:00
Scott Albertson
3eb5b45b41
Add a "Reviewed by Hound" badge 2018-07-31 13:53:24 -07:00
Alexander Neumann
01aacf41b5 Merge pull request #1915 from mlissner/patch-2
Adds warning re performance of prune
2018-07-31 22:42:20 +02:00
Mike Lissner
2caf8edc55 Add warning about the performance of prune
I went pretty loud with this, but I think the performance is bad enough
that it's really worth highlighting, especially since it locks the index
during the prune.
2018-07-31 22:41:40 +02:00
Alexander Neumann
3151978f58 Fix changelog type 2018-07-31 21:57:27 +02:00
Alexander Neumann
ab4ef432ff Add entry to changelog 2018-07-31 21:29:47 +02:00
Alexander Neumann
be4f54b603 Merge pull request #1913 from restic/restic-password-stdin-message
Print message for password being read from stdin
2018-07-31 21:28:12 +02:00
Alexander Neumann
7260110c27 Merge pull request #1914 from restic/update-blazer
Add support for B2 application keys
2018-07-31 21:27:50 +02:00
Alexander Neumann
2437f11af7 Update github.com/kurin/blazer to 0.5.1
This adds support for B2 application keys.
2018-07-31 20:51:36 +02:00
Alexander Neumann
57873502f8 Add note about B2 application keys to the docs 2018-07-31 20:49:54 +02:00
Alexander Neumann
3678ec9ad8 Print message for password being read from stdin
Closes #1911
2018-07-31 20:21:18 +02:00
Alexander Neumann
a717e9e6f7 Improve message for number of bytes newly added 2018-07-31 19:08:43 +02:00
Alexander Neumann
12c797700e make statsWalkSnapshot return a function 2018-07-27 21:44:59 +02:00
Matthew Holt
daca9d6815 Consolidate mode flags; use new Walk function 2018-07-27 21:27:40 +02:00
Matthew Holt
930602a444 Update comment now that question was answered 2018-07-27 21:27:39 +02:00
Matthew Holt
acb05e7855 Fix filepath uniqueness bug for blobs-per-file mode 2018-07-27 21:27:39 +02:00
Matthew Holt
a7b95d716a Implement four counting modes 2018-07-27 21:27:39 +02:00
Matthew Holt
925b542eb0 Count unique files by blob sequence rather than tree ID 2018-07-27 21:27:39 +02:00
Matthew Holt
f7659bd8b0 stats: Initial implementation of stats command 2018-07-27 21:27:39 +02:00
Alexander Neumann
8c124a2b75 Merge pull request #1902 from mlissner/patch-1
b2 bucket names need to be unique
2018-07-23 22:58:42 +02:00
Mike Lissner
d3ad63a4ec
b2 bucket names need to be unique
Adds a small warning indicating that b2 bucket names need to be unique. It's an easy mistake to make, and it's surprising to get the following error if you're not accustomed to the way B2 works:

    Fatal: create repository at b2:postgres failed: NewBucket: b2_create_bucket: 400: Bucket name is already in use
2018-07-23 11:48:59 -07:00
Alexander Neumann
271c50cf5c Add entry to changelog 2018-07-23 20:15:55 +02:00
Alexander Neumann
1aeb193fd9 Merge pull request #1900 from restic/fix-1870
restorer: Add test for restore with include filter
2018-07-23 20:15:50 +02:00
Alexander Neumann
f715bef82f Merge pull request #1899 from garrmcnu/check-cache-dir
check: Use --cache-dir argument
2018-07-22 21:03:52 +02:00
Alexander Neumann
4fc00d4120 Merge pull request #1901 from restic/update-blazer
Update github.com/kurin/blazer
2018-07-22 20:59:52 +02:00
Garry McNulty
7603ab7ac1 check: Update --cache-dir argument handling based on code review comments
The temporary cache directory is created in the specified directory, or,
if not specified, in the default temporary directory.
2018-07-22 18:24:11 +01:00
Alexander Neumann
36fa1f8c20 Merge pull request #1894 from restic/fix1893
Return error when exclude file cannot be read
2018-07-22 14:34:27 +02:00
Alexander Neumann
445fb23b6d Rework issue templates for Bug reports and Features 2018-07-22 14:26:23 +02:00
Alexander Neumann
5f79b4cb6c Update issue template again 2018-07-22 14:21:08 +02:00
Alexander Neumann
8e15b59347 Use underline style markup for issue/PR templates 2018-07-22 14:17:53 +02:00
Alexander Neumann
6e2e957332 Add entry to changelog 2018-07-22 14:16:08 +02:00
Alexander Neumann
7ffc03ff8f Update github.com/kurin/blazer to 0.5.0
This includes support for the upcoming B2 application keys feature.
2018-07-22 14:12:02 +02:00
Alexander Neumann
44924ba043 restorer: Fix traverseTree
traverseTree() was meant to call enterDir() whenever a directory is
selected for restore, either explicitly or implicitly (=contains a file
which is to be restored). After restoring a file, leaveDir() is called
in reverse order for all intermediate directories so that the metadata
can be restored.

When a directory is selected implicitly, the metadata for it is
restored. This is different from the previous restorer behavior, which
created implicitly selected intermediate directories with permissions
0700 (only the owning user can access them).

This commit changes the behavior back to the old one: only if a
directory is explicitly selected for restore are enterDir()/leaveDir()
called for it. Otherwise, only visitNode() is called, so visitNode()
needs to make sure the parent directory exists. If the directory is
explicitly included, leaveDir() will then restore the metadata
correctly.

When we decide to change the behavior (restore metadata for all
intermediate directories, even if selected implicitly), we should do
that in the selection functions, not here.

This finally resolves #1870
2018-07-21 23:24:40 +02:00
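
The enterDir()/visitNode()/leaveDir() contract described in this message is easier to see in code. Below is a rough sketch with simplified, hypothetical signatures (the real restorer callbacks take more context, as the traverseTree hunk at the end of this comparison shows):

package main

import "fmt"

// Hypothetical, simplified callbacks; the real restorer passes more
// context (target, location, node). Names follow the commit message.
type visitor struct {
    enterDir  func(dir string)  // called only for explicitly selected dirs
    visitNode func(file string) // must ensure the parent directory exists
    leaveDir  func(dir string)  // restores directory metadata afterwards
}

// restoreDir sketches the contract: enterDir/leaveDir bracket the
// children only when the directory itself was selected explicitly.
func restoreDir(dir string, files []string, explicit bool, v visitor) {
    if explicit {
        v.enterDir(dir)
    }
    for _, f := range files {
        v.visitNode(dir + "/" + f)
    }
    if explicit {
        v.leaveDir(dir) // metadata restored on the way out
    }
}

func main() {
    v := visitor{
        enterDir:  func(d string) { fmt.Println("enterDir ", d) },
        visitNode: func(f string) { fmt.Println("visitNode", f) },
        leaveDir:  func(d string) { fmt.Println("leaveDir ", d) },
    }
    restoreDir("/a", []string{"x"}, true, v)  // /a selected explicitly
    restoreDir("/b", []string{"y"}, false, v) // /b selected only via its file
}
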
Alexander Neumann
ce19f26948 restorer: Add tests for traverseTree 2018-07-21 23:24:40 +02:00
Alexander Neumann
74016d5981 restorer: Fix return of saveSnapshot 2018-07-21 23:24:40 +02:00
Alexander Neumann
57636a4573 restorer: Run tests in the same package 2018-07-21 23:24:40 +02:00
Alexander Neumann
4f6d2502f7 restorer: Add test for restore with include filter 2018-07-21 23:24:40 +02:00
Garry McNulty
f1f69bc648 check: Use --cache-dir argument
Closes #1880
2018-07-20 20:51:20 +01:00
Alexander Neumann
d7551d7b0c Add entry to changelog 2018-07-18 21:41:20 +02:00
Alexander Neumann
fb74de6360 Return an error when exclude files cannot be read 2018-07-18 21:39:07 +02:00
Alexander Neumann
67535e00a8 Merge pull request #1889 from ProactiveServices/patch-3
doc: Minor grammar, RST syntax fixes
2018-07-18 21:22:10 +02:00
Alexander Neumann
19592285eb Merge pull request #1888 from ProactiveServices/patch-2
doc: Minor grammar fixes
2018-07-18 21:21:52 +02:00
Alexander Neumann
f64862722a Merge pull request #1887 from restic/disable-error-size
checker: Disable size check for now
2018-07-18 21:19:54 +02:00
Adam Piggott
254239c2a9
doc: Minor grammar, RST syntax fixes
Fix unescaped backslash
Fix wording of Windows installation
2018-07-18 02:28:23 +01:00
Adam Piggott
cce1a1f768
doc: Minor grammar fixes 2018-07-18 02:25:31 +01:00
Alexander Neumann
754482fe6c checker: Disable size check for now 2018-07-15 21:52:38 +02:00
Alexander Neumann
73153dbd3f Merge pull request #1885 from restic/create-restore-target
restore: Make sure the target directory exists
2018-07-15 16:28:25 +02:00
Alexander Neumann
92421ec47f restore: Make sure target directory exists 2018-07-15 16:02:04 +02:00
Alexander Neumann
9acc9243ba Add test for not-existing top-level dir and top-level file 2018-07-15 16:00:26 +02:00
Alexander Neumann
df64998649 Merge pull request #1882 from duzvik/aws-credentials-priority
Change AWS credentials priority, to accept AWS_SESSION_TOKEN
2018-07-14 20:48:42 +02:00
Alexander Neumann
64d27eed86 doc: Improve dump to stdout
Closes #1884
2018-07-14 20:45:52 +02:00
Alexander Neumann
abb18a830c Fix test 2018-07-14 11:51:34 +02:00
denis.uzvik
1e42f4f300 S3 backend: accept AWS_SESSION_TOKEN 2018-07-12 16:18:19 +03:00
Alexander Neumann
bd742ddb69 cache: Don't recreate CACHEDIR.TAG 2018-07-08 12:05:12 +02:00
Alexander Neumann
b511f4dce2 Improve help message for check 2018-07-05 22:19:08 +02:00
Alexander Neumann
7961740dcc Fix link 2018-07-05 21:03:40 +02:00
Alexander Neumann
dc3032c360 Mention that AppsCode is sponsoring backend tests 2018-07-05 21:01:57 +02:00
6766 changed files with 28511 additions and 4898058 deletions


@@ -1,3 +1,8 @@
---
name: Bug report
about: Report a problem with restic to help us resolve it and improve
---
<!--
Welcome! - We kindly ask that you:
@@ -23,10 +28,12 @@ Thanks for understanding, and for contributing to the project!
-->
## Output of `restic version`
Output of `restic version`
--------------------------
## How did you run restic exactly?
How did you run restic exactly?
-------------------------------
<!--
This section should include at least:
@@ -38,39 +45,46 @@ This section should include at least:
information to diagnose the problem!
-->
## What backend/server/service did you use to store the repository?
What backend/server/service did you use to store the repository?
----------------------------------------------------------------
## Expected behavior
Expected behavior
-----------------
<!--
Describe what you'd like restic to do differently.
-->
## Actual behavior
Actual behavior
---------------
<!--
In this section, please try to concentrate on observations, so only describe
what you observed directly.
-->
## Steps to reproduce the behavior
Steps to reproduce the behavior
-------------------------------
<!--
The more time you spend describing an easy way to reproduce the behavior (if
this is possible), the easier it is for the project developers to fix it!
-->
## Do you have any idea what may have caused this?
Do you have any idea what may have caused this?
-----------------------------------------------
## Do you have an idea how to solve the issue?
Do you have an idea how to solve the issue?
-------------------------------------------
## Did restic help you or make you happy in any way?
Did restic help you or make you happy in any way?
-------------------------------------------------
<!--
Answering this question is not required, but if you have anything positive to share, please do so here!

.github/ISSUE_TEMPLATE/Feature.md

@@ -0,0 +1,57 @@
---
name: Feature request
about: Suggest a new feature or enhancement for restic
---
<!--
Welcome! - We kindly ask that you:
1. Fill out the issue template below - not doing so needs a good reason.
2. Use the forum if you have a question rather than a bug or feature request.
The forum is at: https://forum.restic.net
The forum is a better place for questions about restic or general suggestions
and topics, e.g. usage or documentation questions! This issue tracker is mainly
for tracking bugs and feature requests directly relating to the development of
the software itself, rather than the project.
Thanks for understanding, and for contributing to the project!
-->
Output of `restic version`
--------------------------
<!--
Please add the version of restic you're currently using here; this helps us
later to see what has changed in restic when we revisit this issue after some
time.
-->
What should restic do differently? Which functionality do you think we should add?
----------------------------------------------------------------------------------
<!--
Please describe the feature you'd like us to add here.
-->
What are you trying to do?
--------------------------
<!--
This section should contain a brief description of what you're trying to do, which
would be possible after implementing the new feature.
-->
Did restic help you or make you happy in any way?
-------------------------------------------------
<!--
Answering this question is not required, but if you have anything positive to share, please do so here!
Sometimes we get tired of reading bug reports all day and a little positive end note does wonders.
Idea by Joey Hess, https://joeyh.name/blog/entry/two_holiday_stories/
-->


@@ -1,3 +1,5 @@
<!--
Thank you very much for contributing code or documentation to restic! Please
fill out the following questions to make it easier for us to review your
@@ -8,19 +10,22 @@ your time and add more commits. If you're done and ready for review, please
check the last box.
-->
### What is the purpose of this change? What does it change?
What is the purpose of this change? What does it change?
--------------------------------------------------------
<!--
Describe the changes here, as detailed as needed.
-->
### Was the change discussed in an issue or in the forum before?
Was the change discussed in an issue or in the forum before?
------------------------------------------------------------
<!--
Link issues and relevant forum posts here.
-->
### Checklist
Checklist
---------
- [ ] I have read the [Contribution Guidelines](https://github.com/restic/restic/blob/master/CONTRIBUTING.md#providing-patches)
- [ ] I have added tests for all changes in this PR


@@ -164,7 +164,7 @@ history and triaging bugs much easier.
Git commit messages have a very terse summary in the first line of the commit
message, followed by an empty line, followed by a more verbose description or a
list of changed things. For examples, please refer to the excellent [How to
Write a Git Commit Message](http://chris.beams.io/posts/git-commit/).
Write a Git Commit Message](https://chris.beams.io/posts/git-commit/).
If you change/add multiple different things that aren't related at all, try to
make several smaller commits. This is much easier to review. Using `git add -p`

Gopkg.lock

@@ -3,253 +3,472 @@
[[projects]]
branch = "master"
digest = "1:94e9caf404409a2990cfd22aca37d758494c098eff3e2c37fda1abed862e74dd"
name = "bazil.org/fuse"
packages = [".","fs","fuseutil"]
revision = "371fbbdaa8987b715bdd21d6adc4c9b20155f748"
packages = [
".",
"fs",
"fuseutil",
]
pruneopts = "UT"
revision = "65cc252bf6691cb3c7014bcb2c8dc29de91e3a7e"
[[projects]]
digest = "1:5c3894b2aa4d6bead0ceeea6831b305d62879c871780e7b76296ded1b004bc57"
name = "cloud.google.com/go"
packages = ["compute/metadata"]
revision = "4b98a6370e36d7a85192e7bad08a4ebd82eac2a8"
version = "v0.20.0"
pruneopts = "UT"
revision = "aad3f485ee528456e0768f20397b4d9dd941e755"
version = "v0.25.0"
[[projects]]
digest = "1:46ea9487304f4b3c787f54483ecb13a338d686dcd670db0ab1a112ed0ae2128e"
name = "github.com/Azure/azure-sdk-for-go"
packages = ["storage","version"]
revision = "56332fec5b308fbb6615fa1af6117394cdba186d"
version = "v15.0.0"
packages = [
"storage",
"version",
]
pruneopts = "UT"
revision = "4e8cbbfb1aeab140cd0fa97fd16b64ee18c3ca6a"
version = "v19.1.0"
[[projects]]
digest = "1:27d0cd1a78fc836f7c0f07749d029a5f7895c84ad066187b08b70e9d1830098e"
name = "github.com/Azure/go-autorest"
packages = ["autorest","autorest/adal","autorest/azure","autorest/date"]
revision = "ed4b7f5bf1ec0c9ede1fda2681d96771282f2862"
version = "v10.4.0"
packages = [
"autorest",
"autorest/adal",
"autorest/azure",
"autorest/date",
"logger",
"version",
]
pruneopts = "UT"
revision = "dd94e014aaf16d1df746762e392aa201c1b4c461"
version = "v10.15.0"
[[projects]]
digest = "1:2209584c0f7c9b68c23374e659357ab546e1b70eec2761f03280f69a8fd23d77"
name = "github.com/cenkalti/backoff"
packages = ["."]
pruneopts = "UT"
revision = "2ea60e5f094469f9e65adb9cd103795b73ae743e"
version = "v2.0.0"
[[projects]]
digest = "1:7cb4fdca4c251b3ef8027c90ea35f70c7b661a593b9eeae34753c65499098bb1"
name = "github.com/cpuguy83/go-md2man"
packages = ["md2man"]
pruneopts = "UT"
revision = "20f5889cbdc3c73dbd2862796665e7c465ade7d1"
version = "v1.0.8"
[[projects]]
digest = "1:76dc72490af7174349349838f2fe118996381b31ea83243812a97e5a0fd5ed55"
name = "github.com/dgrijalva/jwt-go"
packages = ["."]
pruneopts = "UT"
revision = "06ea1031745cb8b3dab3f6a236daf2b0aa468b7e"
version = "v3.2.0"
[[projects]]
branch = "master"
digest = "1:6f9339c912bbdda81302633ad7e99a28dfa5a639c864061f1929510a9a64aa74"
name = "github.com/dustin/go-humanize"
packages = ["."]
revision = "bb3d318650d48840a39aa21a027c6630e198e626"
pruneopts = "UT"
revision = "9f541cc9db5d55bce703bd99987c9d5cb8eea45e"
[[projects]]
digest = "1:c7edfbb6320d6a93240d663dc52bca92bed4c116abe54c35679eec4e7cc2bd77"
name = "github.com/elithrar/simple-scrypt"
packages = ["."]
pruneopts = "UT"
revision = "d150773194090feb6c897805a7bcea8d49544e2c"
version = "v1.3.0"
[[projects]]
digest = "1:fe8a03a8222d5b913f256972933d26d24ad7c8286692a42943bc01633cc8fce3"
name = "github.com/go-ini/ini"
packages = ["."]
revision = "6333e38ac20b8949a8dd68baa3650f4dee8f39f0"
version = "v1.33.0"
pruneopts = "UT"
revision = "358ee7663966325963d4e8b2e1fbd570c5195153"
version = "v1.38.1"
[[projects]]
digest = "1:15042ad3498153684d09f393bbaec6b216c8eec6d61f63dff711de7d64ed8861"
name = "github.com/golang/protobuf"
packages = ["proto"]
revision = "925541529c1fa6821df4e44ce2723319eb2be768"
version = "v1.0.0"
pruneopts = "UT"
revision = "b4deda0973fb4c70b50d226b1af49f3da59f5265"
version = "v1.1.0"
[[projects]]
digest = "1:2e3c336fc7fde5c984d2841455a658a6d626450b1754a854b3b32e7a8f49a07a"
name = "github.com/google/go-cmp"
packages = ["cmp","cmp/internal/diff","cmp/internal/function","cmp/internal/value"]
revision = "8099a9787ce5dc5984ed879a3bda47dc730a8e97"
version = "v0.1.0"
packages = [
"cmp",
"cmp/internal/diff",
"cmp/internal/function",
"cmp/internal/value",
]
pruneopts = "UT"
revision = "3af367b6b30c263d47e8895973edcca9a49cf029"
version = "v0.2.0"
[[projects]]
digest = "1:870d441fe217b8e689d7949fef6e43efbc787e50f200cb1e70dbca9204a1d6be"
name = "github.com/inconshreveable/mousetrap"
packages = ["."]
pruneopts = "UT"
revision = "76626ae9c91c4f2a10f34cad8ce83ea42c93bb75"
version = "v1.0"
[[projects]]
digest = "1:190ff84d9b2ed6589088f178cba8edb4b8ecb334df4572421fb016be1ac20463"
name = "github.com/juju/ratelimit"
packages = ["."]
pruneopts = "UT"
revision = "59fac5042749a5afb9af70e813da1dd5474f0167"
version = "1.0.1"
[[projects]]
branch = "master"
digest = "1:9cedee824c21326bd26950bd9e1ffe9dc4e7ca03dc8634d0e6f954ee6a383172"
name = "github.com/kr/fs"
packages = ["."]
revision = "2788f0dbd16903de03cb8186e5c7d97b69ad387b"
pruneopts = "UT"
revision = "1455def202f6e05b95cc7bfc7e8ae67ae5141eba"
version = "v0.1.0"
[[projects]]
digest = "1:1faa76bd9bffce9c25eaca0597afb67bd05a21ae57fe4378154ce8855ef163d1"
name = "github.com/kurin/blazer"
packages = ["b2","base","internal/b2assets","internal/b2types","internal/blog","x/window"]
revision = "318e9768bf9a0fe52a64b9f8fe74f4f5caef6452"
version = "v0.4.4"
packages = [
"b2",
"base",
"internal/b2assets",
"internal/b2types",
"internal/blog",
"x/window",
]
pruneopts = "UT"
revision = "caf65aa76491dc533bac68ad3243ce72fa4e0a0a"
version = "v0.5.1"
[[projects]]
digest = "1:4e878df5f4e9fd625bf9c9aac77ef7cbfa4a74c01265505527c23470c0e40300"
name = "github.com/marstr/guid"
packages = ["."]
pruneopts = "UT"
revision = "8bd9a64bf37eb297b492a4101fb28e80ac0b290f"
version = "v1.1.0"
[[projects]]
digest = "1:d4d17353dbd05cb52a2a52b7fe1771883b682806f68db442b436294926bbfafb"
name = "github.com/mattn/go-isatty"
packages = ["."]
pruneopts = "UT"
revision = "0360b2af4f38e8d38c7fce2a9f4e702702d73a39"
version = "v0.0.3"
[[projects]]
digest = "1:95c73c666919be2843b955eafc83f58c136312b74f79c703152f4c4a95fd64dc"
name = "github.com/minio/minio-go"
packages = [".","pkg/credentials","pkg/encrypt","pkg/policy","pkg/s3signer","pkg/s3utils","pkg/set"]
revision = "66252c2a3c15f7b90cc8493d497a04ac3b6e3606"
version = "5.0.0"
packages = [
".",
"pkg/credentials",
"pkg/encrypt",
"pkg/s3signer",
"pkg/s3utils",
"pkg/set",
]
pruneopts = "UT"
revision = "70799fe8dae6ecfb6c7d7e9e048fce27f23a1992"
version = "v6.0.5"
[[projects]]
branch = "master"
digest = "1:8eb17c2ec4df79193ae65b621cd1c0c4697db3bc317fe6afdc76d7f2746abd05"
name = "github.com/mitchellh/go-homedir"
packages = ["."]
revision = "b8bc1bf767474819792c23f32d8286a45736f1c6"
pruneopts = "UT"
revision = "3864e76763d94a6df2f9960b16a20a33da9f9a66"
[[projects]]
branch = "master"
digest = "1:928de5172dd3563964d1b88a4ee3775cf72e16f1efabb482ab6d0e0bab86ee69"
name = "github.com/ncw/swift"
packages = ["."]
pruneopts = "UT"
revision = "b2a7479cf26fa841ff90dd932d0221cb5c50782d"
version = "v1.0.39"
[[projects]]
digest = "1:40e195917a951a8bf867cd05de2a46aaf1806c50cf92eebf4c16f78cd196f747"
name = "github.com/pkg/errors"
packages = ["."]
pruneopts = "UT"
revision = "645ef00459ed84a119197bfb8d8205042c6df63d"
version = "v0.8.0"
[[projects]]
digest = "1:cfa0d7741863a0e1d30e0ccdd4b48a96a471cdb47892303de8b92c3713af3e77"
name = "github.com/pkg/profile"
packages = ["."]
pruneopts = "UT"
revision = "5b67d428864e92711fcbd2f8629456121a56d91f"
version = "v1.2.1"
[[projects]]
digest = "1:23ed92ba5d90a2dfe817f3895027ccef796e79c30be5125d48e17afdcc395d73"
name = "github.com/pkg/sftp"
packages = ["."]
revision = "49488377fa2f14143ba3067cf7555f60f6c7b550"
version = "1.5.0"
pruneopts = "UT"
revision = "57673e38ea946592a59c26592b7e6fbda646975b"
version = "v1.8.0"
[[projects]]
digest = "1:0d67664e93e366f072ac9672feea29bfc63c9f90f005e9e8a0df1954153f5a14"
name = "github.com/pkg/xattr"
packages = ["."]
revision = "1d7b7ffe7c46974a836eb583b7452f22de1c18cf"
version = "v0.2.3"
pruneopts = "UT"
revision = "ae385d07bb53f092fcc7daaf738d8513df084931"
version = "v0.3.1"
[[projects]]
digest = "1:13ecc4000f49cf0aa3ee56fffcc93119c8edffacfff08674c80d2757d8c33a83"
name = "github.com/restic/chunker"
packages = ["."]
pruneopts = "UT"
revision = "db83917be3b88cc307464b7d8a221c173e34a0db"
version = "v0.2.0"
[[projects]]
digest = "1:8bc629776d035c003c7814d4369521afe67fdb8efc4b5f66540d29343b98cf23"
name = "github.com/russross/blackfriday"
packages = ["."]
pruneopts = "UT"
revision = "55d61fa8aa702f59229e6cff85793c22e580eaf5"
version = "v1.5.1"
[[projects]]
digest = "1:274f67cb6fed9588ea2521ecdac05a6d62a8c51c074c1fccc6a49a40ba80e925"
name = "github.com/satori/go.uuid"
packages = ["."]
pruneopts = "UT"
revision = "f58768cc1a7a7e77a3bd49e98cdd21419399b6a3"
version = "v1.2.0"
[[projects]]
digest = "1:d867dfa6751c8d7a435821ad3b736310c2ed68945d05b50fb9d23aee0540c8cc"
name = "github.com/sirupsen/logrus"
packages = ["."]
revision = "c155da19408a8799da419ed3eeb0cb5db0ad5dbc"
version = "v1.0.5"
pruneopts = "UT"
revision = "3e01752db0189b9157070a0e1668a620f9a85da2"
version = "v1.0.6"
[[projects]]
digest = "1:e01b05ba901239c783dfe56450bcde607fc858908529868259c9a8765dc176d0"
name = "github.com/spf13/cobra"
packages = [".","doc"]
revision = "a1f051bc3eba734da4772d60e2d677f47cf93ef4"
version = "v0.0.2"
packages = [
".",
"doc",
]
pruneopts = "UT"
revision = "ef82de70bb3f60c65fb8eebacbb2d122ef517385"
version = "v0.0.3"
[[projects]]
digest = "1:9424f440bba8f7508b69414634aef3b2b3a877e522d8a4624692412805407bb7"
name = "github.com/spf13/pflag"
packages = ["."]
revision = "e57e3eeb33f795204c1ca35f56c44f83227c6e66"
version = "v1.0.0"
pruneopts = "UT"
revision = "583c0c0531f06d5278b7d917446061adc344b5cd"
version = "v1.0.1"
[[projects]]
branch = "master"
digest = "1:eefb1f49ec07e71206d4c9ea1a3e634cad331c2180733e4121b8ae39e8e92ecb"
name = "golang.org/x/crypto"
packages = ["argon2","blake2b","curve25519","ed25519","ed25519/internal/edwards25519","internal/chacha20","pbkdf2","poly1305","scrypt","ssh","ssh/terminal"]
revision = "4ec37c66abab2c7e02ae775328b2ff001c3f025a"
packages = [
"argon2",
"blake2b",
"curve25519",
"ed25519",
"ed25519/internal/edwards25519",
"internal/chacha20",
"internal/subtle",
"pbkdf2",
"poly1305",
"scrypt",
"ssh",
"ssh/terminal",
]
pruneopts = "UT"
revision = "c126467f60eb25f8f27e5a981f32a87e3965053f"
[[projects]]
branch = "master"
digest = "1:8356aa7bdcb10a210b814b64ff76d61de7c36ac4cb6263de3af5e3e2e546956d"
name = "golang.org/x/net"
packages = ["context","context/ctxhttp","http2","http2/hpack","idna","lex/httplex"]
revision = "6078986fec03a1dcc236c34816c71b0e05018fda"
packages = [
"context",
"context/ctxhttp",
"http/httpguts",
"http2",
"http2/hpack",
"idna",
]
pruneopts = "UT"
revision = "32f9bdbd7df18e8641d215e7ea68be88b971feb0"
[[projects]]
branch = "master"
digest = "1:bea0314c10bd362ab623af4880d853b5bad3b63d0ab9945c47e461b8d04203ed"
name = "golang.org/x/oauth2"
packages = [".","google","internal","jws","jwt"]
revision = "fdc9e635145ae97e6c2cb777c48305600cf515cb"
packages = [
".",
"google",
"internal",
"jws",
"jwt",
]
pruneopts = "UT"
revision = "3d292e4d0cdc3a0113e6d207bb137145ef1de42f"
[[projects]]
branch = "master"
digest = "1:39ebcc2b11457b703ae9ee2e8cca0f68df21969c6102cb3b705f76cca0ea0239"
name = "golang.org/x/sync"
packages = ["errgroup"]
pruneopts = "UT"
revision = "1d60e4601c6fd243af51cc01ddf169918a5407ca"
[[projects]]
branch = "master"
digest = "1:a220a85c72a6cb7339c412cb2b117019a7fd94007cdfffb3b5b1d058227a2bf8"
name = "golang.org/x/sys"
packages = ["cpu","unix","windows"]
revision = "7db1c3b1a98089d0071c84f646ff5c96aad43682"
packages = [
"cpu",
"unix",
"windows",
]
pruneopts = "UT"
revision = "bd9dbc187b6e1dacfdd2722a87e83093c2d7bd6e"
[[projects]]
digest = "1:e5a8511f063c38c51ab9ab80e718e9149f692652aeb4e393a8c020dd1bf38ca2"
name = "golang.org/x/text"
packages = ["collate","collate/build","encoding","encoding/internal","encoding/internal/identifier","encoding/unicode","internal/colltab","internal/gen","internal/tag","internal/triegen","internal/ucd","internal/utf8internal","language","runes","secure/bidirule","transform","unicode/bidi","unicode/cldr","unicode/norm","unicode/rangetable"]
packages = [
"collate",
"collate/build",
"encoding",
"encoding/internal",
"encoding/internal/identifier",
"encoding/unicode",
"internal/colltab",
"internal/gen",
"internal/tag",
"internal/triegen",
"internal/ucd",
"internal/utf8internal",
"language",
"runes",
"secure/bidirule",
"transform",
"unicode/bidi",
"unicode/cldr",
"unicode/norm",
"unicode/rangetable",
]
pruneopts = "UT"
revision = "f21a4dfb5e38f5895301dc265a8def02365cc3d0"
version = "v0.3.0"
[[projects]]
branch = "master"
digest = "1:fb983acae7bd9c3ed9aadc1b1241d9e559ed21dbf84c17a0dda663ca169ccd69"
name = "google.golang.org/api"
packages = ["gensupport","googleapi","googleapi/internal/uritemplates","storage/v1"]
revision = "dbbc13f71100fa6ece308335445fca6bb0dd5c2f"
packages = [
"gensupport",
"googleapi",
"googleapi/internal/uritemplates",
"storage/v1",
]
pruneopts = "UT"
revision = "31ca0e01cd791f07750cb23fc99327721f753290"
[[projects]]
digest = "1:c8907869850adaa8bd7631887948d0684f3787d0912f1c01ab72581a6c34432e"
name = "google.golang.org/appengine"
packages = [".","internal","internal/app_identity","internal/base","internal/datastore","internal/log","internal/modules","internal/remote_api","internal/urlfetch","urlfetch"]
revision = "150dc57a1b433e64154302bdc40b6bb8aefa313a"
version = "v1.0.0"
packages = [
".",
"internal",
"internal/app_identity",
"internal/base",
"internal/datastore",
"internal/log",
"internal/modules",
"internal/remote_api",
"internal/urlfetch",
"urlfetch",
]
pruneopts = "UT"
revision = "b1f26356af11148e710935ed1ac8a7f5702c7612"
version = "v1.1.0"
[[projects]]
branch = "v2"
digest = "1:5bb148b78468350091db2ffbb2370f35cc6dcd74d9378a31b1c7b86ff7528f08"
name = "gopkg.in/tomb.v2"
packages = ["."]
pruneopts = "UT"
revision = "d5d1b5820637886def9eef33e03a27a9f166942c"
[[projects]]
digest = "1:342378ac4dcb378a5448dd723f0784ae519383532f5e70ade24132c4c8693202"
name = "gopkg.in/yaml.v2"
packages = ["."]
pruneopts = "UT"
revision = "5420a8b6744d3b0345ab293f6fcba19c978f1183"
version = "v2.2.1"
[solve-meta]
analyzer-name = "dep"
analyzer-version = 1
inputs-digest = "a5de339cba7570216b212439b90e1e6c384c94be8342fe7755b7cb66aa0a3440"
input-imports = [
"bazil.org/fuse",
"bazil.org/fuse/fs",
"github.com/Azure/azure-sdk-for-go/storage",
"github.com/cenkalti/backoff",
"github.com/elithrar/simple-scrypt",
"github.com/google/go-cmp/cmp",
"github.com/juju/ratelimit",
"github.com/kurin/blazer/b2",
"github.com/mattn/go-isatty",
"github.com/minio/minio-go",
"github.com/minio/minio-go/pkg/credentials",
"github.com/ncw/swift",
"github.com/pkg/errors",
"github.com/pkg/profile",
"github.com/pkg/sftp",
"github.com/pkg/xattr",
"github.com/restic/chunker",
"github.com/spf13/cobra",
"github.com/spf13/cobra/doc",
"github.com/spf13/pflag",
"golang.org/x/crypto/poly1305",
"golang.org/x/crypto/scrypt",
"golang.org/x/crypto/ssh/terminal",
"golang.org/x/net/context",
"golang.org/x/net/context/ctxhttp",
"golang.org/x/net/http2",
"golang.org/x/oauth2/google",
"golang.org/x/sync/errgroup",
"golang.org/x/sys/unix",
"golang.org/x/text/encoding/unicode",
"google.golang.org/api/googleapi",
"google.golang.org/api/storage/v1",
"gopkg.in/tomb.v2",
]
solver-name = "gps-cdcl"
solver-version = 1


@@ -19,3 +19,7 @@
# [[override]]
# name = "github.com/x/y"
# version = "2.4.0"
[prune]
unused-packages = true
go-tests = true


@@ -1,4 +1,4 @@
|Documentation| |Build Status| |Build status| |Report Card| |Say Thanks| |TestCoverage|
|Documentation| |Build Status| |Build status| |Report Card| |Say Thanks| |TestCoverage| |Reviewed by Hound|
Introduction
------------
@@ -111,6 +111,14 @@ License
Restic is licensed under `BSD 2-Clause License <https://opensource.org/licenses/BSD-2-Clause>`__. You can find the
complete text in ``LICENSE``.
Sponsorship
-----------
Backend integration tests for Google Cloud Storage and Microsoft Azure Blob
Storage are sponsored by `AppsCode <https://appscode.com>`__!
|AppsCode|
.. |Documentation| image:: https://readthedocs.org/projects/restic/badge/?version=latest
:target: https://restic.readthedocs.io/en/latest/?badge=latest
.. |Build Status| image:: https://travis-ci.com/restic/restic.svg?branch=master
@@ -123,3 +131,7 @@ complete text in ``LICENSE``.
:target: https://saythanks.io/to/restic
.. |TestCoverage| image:: https://codecov.io/gh/restic/restic/branch/master/graph/badge.svg
:target: https://codecov.io/gh/restic/restic
.. |AppsCode| image:: https://cdn.appscode.com/images/logo/appscode/ac-logo-color.png
:target: https://appscode.com
.. |Reviewed by Hound| image:: https://img.shields.io/badge/Reviewed_by-Hound-8E64B0.svg
:target: https://houndci.com


@@ -0,0 +1,6 @@
Bugfix: Fix restore with --include
We fixed a bug which prevented restic from restoring files with an include filter.
https://github.com/restic/restic/issues/1870
https://github.com/restic/restic/pull/1900


@@ -0,0 +1,12 @@
Bugfix: Use `--cache-dir` argument for `check` command
The `check` command now creates its temporary cache in a sub-directory of the
directory given via the `--cache-dir` argument. If the argument is not set,
the cache directory is created in the default temporary directory, as before.
In either case a temporary cache is used to ensure the actual repository is
checked (rather than a local copy).
Previously, the `--cache-dir` argument was ignored by the `check` command, and
a cache directory was always created in the temporary directory.
https://github.com/restic/restic/issues/1880
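
The behavior maps directly onto Go's `ioutil.TempDir`, whose first argument selects the parent directory and which falls back to the default temporary directory when that argument is empty. A minimal standalone sketch of the mechanism the fix relies on (the prefix string matches the one visible in the check command's diff further down; everything else is illustrative):

package main

import (
    "fmt"
    "io/ioutil"
    "os"
)

func main() {
    // empty parent: ioutil.TempDir falls back to os.TempDir(),
    // which is what `check` always did before this fix
    def, err := ioutil.TempDir("", "restic-check-cache-")
    if err != nil {
        panic(err)
    }
    defer os.RemoveAll(def)
    fmt.Println("default location:", def)

    // non-empty parent: the temporary cache lands inside it,
    // which is how `check` now honors --cache-dir
    parent, err := ioutil.TempDir("", "my-cache-dir-") // stand-in for the --cache-dir value
    if err != nil {
        panic(err)
    }
    defer os.RemoveAll(parent)
    sub, err := ioutil.TempDir(parent, "restic-check-cache-")
    if err != nil {
        panic(err)
    }
    fmt.Println("inside --cache-dir:", sub)
}
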


@@ -0,0 +1,8 @@
Bugfix: Return error when exclude file cannot be read
A bug was found: when multiple exclude files were passed to restic and one of
them could not be read, an error was printed and restic continued, ignoring
even the existing exclude files. Now, an error message is printed and restic
aborts when an exclude file cannot be read.
https://github.com/restic/restic/issues/1893
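
The fix boils down to propagating the read error instead of printing a warning and returning a partial pattern list. A self-contained sketch of that fail-fast pattern follows; it is an illustration, not restic's actual implementation, which appears in the collectRejectFuncs/readExcludePatternsFromFiles hunks further down:

package main

import (
    "bufio"
    "fmt"
    "os"
)

// readPatternFiles aborts on the first unreadable file instead of
// warning and silently dropping patterns, mirroring the fixed behavior.
func readPatternFiles(files []string) ([]string, error) {
    var patterns []string
    for _, name := range files {
        f, err := os.Open(name)
        if err != nil {
            return nil, err // fail fast: no partial pattern list
        }
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            patterns = append(patterns, sc.Text())
        }
        f.Close()
        if err := sc.Err(); err != nil {
            return nil, err
        }
    }
    return patterns, nil
}

func main() {
    // "excludes.txt" is a placeholder path
    patterns, err := readPatternFiles([]string{"excludes.txt"})
    if err != nil {
        fmt.Fprintln(os.Stderr, "error reading exclude patterns:", err)
        os.Exit(1)
    }
    fmt.Println(patterns)
}
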


@@ -0,0 +1,8 @@
Enhancement: Add support for B2 application keys
Restic can now use so-called "application keys", which were introduced to B2
only recently and can be created in the B2 dashboard. In contrast to the
"master key", such keys can be restricted to a specific bucket and/or path.
https://github.com/restic/restic/issues/1906
https://github.com/restic/restic/pull/1914
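
Since an application key is, like the master key, just an (account ID, key) pair, nothing changes on the client side except which credentials are supplied. A minimal sketch against the github.com/kurin/blazer/b2 package vendored by this change; the bucket name is a placeholder, and this is an illustration rather than restic's backend code:

package main

import (
    "context"
    "fmt"
    "log"
    "os"

    "github.com/kurin/blazer/b2"
)

func main() {
    ctx := context.Background()

    // Works the same whether B2_ACCOUNT_ID/B2_ACCOUNT_KEY hold the master
    // key or a restricted application key; a restricted key simply cannot
    // see buckets or paths outside its scope.
    client, err := b2.NewClient(ctx, os.Getenv("B2_ACCOUNT_ID"), os.Getenv("B2_ACCOUNT_KEY"))
    if err != nil {
        log.Fatal(err)
    }

    // placeholder bucket name; a key restricted to another bucket fails here
    bucket, err := client.Bucket(ctx, "my-restic-bucket")
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println("bucket reachable:", bucket.Name())
}
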


@@ -0,0 +1,4 @@
Enhancement: Add stats command to get information about a repository
https://github.com/restic/restic/issues/874
https://github.com/restic/restic/pull/1729


@@ -0,0 +1,8 @@
Enhancement: S3 backend: accept AWS_SESSION_TOKEN
Before, it was not possible to use the S3 backend with AWS temporary security
credentials (i.e. with AWS_SESSION_TOKEN set). This change gives the
credentials.EnvAWS credentials provider a higher priority.
https://github.com/restic/restic/issues/1477
https://github.com/restic/restic/pull/1479
https://github.com/restic/restic/pull/1647
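
In minio-go's chain, providers are tried in order and the first one that yields usable credentials wins, so listing EnvAWS before the static keys (as the S3 backend hunk near the end of this diff does) is what lets AWS_SESSION_TOKEN take effect. A condensed sketch of the ordering, with placeholder static keys:

package main

import (
    "fmt"
    "log"

    "github.com/minio/minio-go/pkg/credentials"
)

func main() {
    // Providers are tried in order; the first one that can retrieve
    // credentials wins. Putting EnvAWS first means AWS_ACCESS_KEY_ID,
    // AWS_SECRET_ACCESS_KEY and AWS_SESSION_TOKEN from the environment
    // beat statically configured keys.
    chain := credentials.NewChainCredentials([]credentials.Provider{
        &credentials.EnvAWS{},
        &credentials.Static{
            Value: credentials.Value{
                AccessKeyID:     "static-key-id",     // placeholder
                SecretAccessKey: "static-secret-key", // placeholder
            },
        },
    })

    val, err := chain.Get()
    if err != nil {
        log.Fatal(err)
    }
    // SessionToken is only non-empty when EnvAWS supplied one
    fmt.Printf("key=%s session-token-set=%v\n", val.AccessKeyID, val.SessionToken != "")
}
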


@@ -0,0 +1,9 @@
Enhancement: Update the Backblaze B2 library
We've updated the library we're using for accessing the Backblaze B2 service to
0.5.0 to include support for upcoming so-called "application keys". With this
feature, you can create access credentials for B2 which are restricted to e.g.
a single bucket or even a sub-directory of a bucket.
https://github.com/restic/restic/pull/1901
https://github.com/kurin/blazer


@@ -210,7 +210,11 @@ func collectRejectFuncs(opts BackupOptions, repo *repository.Repository, targets
// add patterns from file
if len(opts.ExcludeFiles) > 0 {
opts.Excludes = append(opts.Excludes, readExcludePatternsFromFiles(opts.ExcludeFiles)...)
excludes, err := readExcludePatternsFromFiles(opts.ExcludeFiles)
if err != nil {
return nil, err
}
opts.Excludes = append(opts.Excludes, excludes...)
}
if len(opts.Excludes) > 0 {
@@ -238,7 +242,7 @@ func collectRejectFuncs(opts BackupOptions, repo *repository.Repository, targets
// and comment lines are ignored. For each remaining pattern, environment
// variables are resolved. For adding a literal dollar sign ($), write $$ to
// the file.
func readExcludePatternsFromFiles(excludeFiles []string) []string {
func readExcludePatternsFromFiles(excludeFiles []string) ([]string, error) {
getenvOrDollar := func(s string) string {
if s == "$" {
return "$"
@@ -274,11 +278,10 @@ func readExcludePatternsFromFiles(excludeFiles []string) []string {
return scanner.Err()
}()
if err != nil {
Warnf("error reading exclude patterns: %v:", err)
return nil
return nil, err
}
}
return excludes
return excludes, nil
}
// collectTargets returns a list of target files/dirs from several sources.


@@ -50,7 +50,7 @@ func init() {
f := cmdCheck.Flags()
f.BoolVar(&checkOptions.ReadData, "read-data", false, "read all data blobs")
f.StringVar(&checkOptions.ReadDataSubset, "read-data-subset", "", "read subset of data packs")
f.StringVar(&checkOptions.ReadDataSubset, "read-data-subset", "", "read subset n of m data packs (format: `n/m`)")
f.BoolVar(&checkOptions.CheckUnused, "check-unused", false, "find unused blobs")
f.BoolVar(&checkOptions.WithCache, "with-cache", false, "use the cache")
}
@@ -123,6 +123,7 @@ func newReadProgress(gopts GlobalOptions, todo restic.Stat) *restic.Progress {
//
// * if --with-cache is specified, the default cache is used
// * if the user explicitly requested --no-cache, we don't use any cache
// * if the user provides --cache-dir, we use a cache in a temporary sub-directory of the specified directory and the sub-directory is deleted after the check
// * by default, we use a cache in a temporary directory that is deleted after the check
func prepareCheckCache(opts CheckOptions, gopts *GlobalOptions) (cleanup func()) {
cleanup = func() {}
@@ -136,8 +137,10 @@ func prepareCheckCache(opts CheckOptions, gopts *GlobalOptions) (cleanup func())
return cleanup
}
cachedir := gopts.CacheDir
// use a cache in a temporary directory
tempdir, err := ioutil.TempDir("", "restic-check-cache-")
tempdir, err := ioutil.TempDir(cachedir, "restic-check-cache-")
if err != nil {
// if an error occurs, don't use any cache
Warnf("unable to create temporary directory for cache during check, disabling cache: %v\n", err)

cmd/restic/cmd_stats.go

@@ -0,0 +1,314 @@
package main
import (
"context"
"crypto/sha256"
"encoding/json"
"fmt"
"os"
"path/filepath"
"github.com/restic/restic/internal/restic"
"github.com/restic/restic/internal/walker"
"github.com/spf13/cobra"
)
var cmdStats = &cobra.Command{
Use: "stats [flags] [snapshot-ID]",
Short: "Scan the repository and show basic statistics",
Long: `
The "stats" command walks one or all snapshots in a repository and
accumulates statistics about the data stored therein. It reports on
the number of unique files and their sizes, according to one of
the counting modes as given by the --mode flag.
If no snapshot is specified, all snapshots will be considered. Some
modes make more sense over just a single snapshot, while others
are useful across all snapshots, depending on what you are trying
to calculate.
The modes are:
restore-size: (default) Counts the size of the restored files.
files-by-contents: Counts total size of files, where a file is
considered unique if it has unique contents.
raw-data: Counts the size of blobs in the repository, regardless
of how many files reference them.
blobs-per-file: A combination of files-by-contents and raw-data.
Refer to the online manual for more details about each mode.
`,
DisableAutoGenTag: true,
RunE: func(cmd *cobra.Command, args []string) error {
return runStats(globalOptions, args)
},
}
func init() {
cmdRoot.AddCommand(cmdStats)
f := cmdStats.Flags()
f.StringVar(&countMode, "mode", countModeRestoreSize, "counting mode: restore-size (default), files-by-contents, blobs-per-file, or raw-data")
f.StringVar(&snapshotByHost, "host", "", "filter latest snapshot by this hostname")
}
func runStats(gopts GlobalOptions, args []string) error {
err := verifyStatsInput(gopts, args)
if err != nil {
return err
}
ctx, cancel := context.WithCancel(gopts.ctx)
defer cancel()
repo, err := OpenRepository(gopts)
if err != nil {
return err
}
if err = repo.LoadIndex(ctx); err != nil {
return err
}
if !gopts.NoLock {
lock, err := lockRepo(repo)
defer unlockRepo(lock)
if err != nil {
return err
}
}
// create a container for the stats (and other needed state)
stats := &statsContainer{
uniqueFiles: make(map[fileID]struct{}),
fileBlobs: make(map[string]restic.IDSet),
blobs: restic.NewBlobSet(),
blobsSeen: restic.NewBlobSet(),
}
if snapshotIDString != "" {
// scan just a single snapshot
var sID restic.ID
if snapshotIDString == "latest" {
sID, err = restic.FindLatestSnapshot(ctx, repo, []string{}, []restic.TagList{}, snapshotByHost)
if err != nil {
Exitf(1, "latest snapshot for criteria not found: %v", err)
}
} else {
sID, err = restic.FindSnapshot(repo, snapshotIDString)
if err != nil {
return err
}
}
snapshot, err := restic.LoadSnapshot(ctx, repo, sID)
if err != nil {
return err
}
err = statsWalkSnapshot(ctx, snapshot, repo, stats)
} else {
// iterate every snapshot in the repo
err = repo.List(ctx, restic.SnapshotFile, func(snapshotID restic.ID, size int64) error {
snapshot, err := restic.LoadSnapshot(ctx, repo, snapshotID)
if err != nil {
return fmt.Errorf("Error loading snapshot %s: %v", snapshotID.Str(), err)
}
return statsWalkSnapshot(ctx, snapshot, repo, stats)
})
}
if err != nil {
return err
}
if countMode == countModeRawData {
// the blob handles have been collected, but not yet counted
for blobHandle := range stats.blobs {
blobSize, found := repo.LookupBlobSize(blobHandle.ID, blobHandle.Type)
if !found {
return fmt.Errorf("blob %v not found", blobHandle)
}
stats.TotalSize += uint64(blobSize)
stats.TotalBlobCount++
}
}
if gopts.JSON {
err = json.NewEncoder(os.Stdout).Encode(stats)
if err != nil {
return fmt.Errorf("encoding output: %v", err)
}
return nil
}
if stats.TotalBlobCount > 0 {
Printf(" Total Blob Count: %d\n", stats.TotalBlobCount)
}
if stats.TotalFileCount > 0 {
Printf(" Total File Count: %d\n", stats.TotalFileCount)
}
Printf(" Total Size: %-5s\n", formatBytes(stats.TotalSize))
return nil
}
func statsWalkSnapshot(ctx context.Context, snapshot *restic.Snapshot, repo restic.Repository, stats *statsContainer) error {
if snapshot.Tree == nil {
return fmt.Errorf("snapshot %s has nil tree", snapshot.ID().Str())
}
if countMode == countModeRawData {
// count just the sizes of unique blobs; we don't need to walk the tree
// ourselves in this case, since a nifty function does it for us
return restic.FindUsedBlobs(ctx, repo, *snapshot.Tree, stats.blobs, stats.blobsSeen)
}
err := walker.Walk(ctx, repo, *snapshot.Tree, restic.NewIDSet(), statsWalkTree(repo, stats))
if err != nil {
return fmt.Errorf("walking tree %s: %v", *snapshot.Tree, err)
}
return nil
}
func statsWalkTree(repo restic.Repository, stats *statsContainer) walker.WalkFunc {
return func(npath string, node *restic.Node, nodeErr error) (bool, error) {
if nodeErr != nil {
return true, nodeErr
}
if node == nil {
return true, nil
}
if countMode == countModeUniqueFilesByContents || countMode == countModeBlobsPerFile {
// only count this file if we haven't visited it before
fid := makeFileIDByContents(node)
if _, ok := stats.uniqueFiles[fid]; !ok {
// mark the file as visited
stats.uniqueFiles[fid] = struct{}{}
if countMode == countModeUniqueFilesByContents {
// simply count the size of each unique file (unique by contents only)
stats.TotalSize += node.Size
stats.TotalFileCount++
}
if countMode == countModeBlobsPerFile {
// count the size of each unique blob reference, which is
// by unique file (unique by contents and file path)
for _, blobID := range node.Content {
// ensure we have this file (by path) in our map; in this
// mode, a file is unique by both contents and path
nodePath := filepath.Join(npath, node.Name)
if _, ok := stats.fileBlobs[nodePath]; !ok {
stats.fileBlobs[nodePath] = restic.NewIDSet()
stats.TotalFileCount++
}
if _, ok := stats.fileBlobs[nodePath][blobID]; !ok {
// is always a data blob since we're accessing it via a file's Content array
blobSize, found := repo.LookupBlobSize(blobID, restic.DataBlob)
if !found {
return true, fmt.Errorf("blob %s not found for tree %s", blobID, *node.Subtree)
}
// count the blob's size, then add this blob by this
// file (path) so we don't double-count it
stats.TotalSize += uint64(blobSize)
stats.fileBlobs[nodePath].Insert(blobID)
// this mode also counts total unique blob _references_ per file
stats.TotalBlobCount++
}
}
}
}
}
if countMode == countModeRestoreSize {
// as this is a file in the snapshot, we can simply count its
// size without worrying about uniqueness, since duplicate files
// will still be restored
stats.TotalSize += node.Size
stats.TotalFileCount++
}
return true, nil
}
}
// makeFileIDByContents returns a hash of the blob IDs of the
// node's Content in sequence.
func makeFileIDByContents(node *restic.Node) fileID {
var bb []byte
for _, c := range node.Content {
bb = append(bb, []byte(c[:])...)
}
return sha256.Sum256(bb)
}
func verifyStatsInput(gopts GlobalOptions, args []string) error {
// require a recognized counting mode
switch countMode {
case countModeRestoreSize:
case countModeUniqueFilesByContents:
case countModeBlobsPerFile:
case countModeRawData:
default:
return fmt.Errorf("unknown counting mode: %s (use the -h flag to get a list of supported modes)", countMode)
}
// ensure at most one snapshot was specified
if len(args) > 1 {
return fmt.Errorf("only one snapshot may be specified")
}
// if a snapshot was specified, mark it as the one to scan
if len(args) == 1 {
snapshotIDString = args[0]
}
return nil
}
// statsContainer holds information during a walk of a repository
// to collect information about it, as well as state needed
// for a successful and efficient walk.
type statsContainer struct {
TotalSize uint64 `json:"total_size"`
TotalFileCount uint64 `json:"total_file_count"`
TotalBlobCount uint64 `json:"total_blob_count,omitempty"`
// uniqueFiles marks visited files according to their
// contents (hashed sequence of content blob IDs)
uniqueFiles map[fileID]struct{}
// fileBlobs maps a file name (path) to the set of
// blobs that have been seen as a part of the file
fileBlobs map[string]restic.IDSet
// blobs and blobsSeen are used to count individual
// unique blobs, independent of references to files
blobs, blobsSeen restic.BlobSet
}
// fileID is a 256-bit hash that distinguishes unique files.
type fileID [32]byte
var (
// the mode of counting to perform
countMode string
// the snapshot to scan, as given by the user
snapshotIDString string
// snapshotByHost is the host to filter latest
// snapshot by, if given by user
snapshotByHost string
)
const (
countModeRestoreSize = "restore-size"
countModeUniqueFilesByContents = "files-by-contents"
countModeBlobsPerFile = "blobs-per-file"
countModeRawData = "raw-data"
)


@@ -293,6 +293,7 @@ func ReadPassword(opts GlobalOptions, prompt string) (string, error) {
password, err = readPasswordTerminal(os.Stdin, os.Stderr, prompt)
} else {
password, err = readPassword(os.Stdin)
Verbosef("read password from stdin\n")
}
if err != nil {
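
The branch above distinguishes a terminal from piped input; a standalone sketch of that check using golang.org/x/crypto/ssh/terminal (listed among the vendored dependencies), as an illustration rather than restic's exact logic:

package main

import (
    "bufio"
    "fmt"
    "os"

    "golang.org/x/crypto/ssh/terminal"
)

func main() {
    if terminal.IsTerminal(int(os.Stdin.Fd())) {
        // interactive: a prompt can be shown on the terminal
        fmt.Fprint(os.Stderr, "enter password: ")
    } else {
        // piped input: no prompt is possible, hence the new message
        fmt.Fprintln(os.Stderr, "read password from stdin")
    }
    password, _ := bufio.NewReader(os.Stdin).ReadString('\n')
    fmt.Fprintf(os.Stderr, "got %d bytes\n", len(password))
}
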


@@ -150,9 +150,9 @@ master branch.
Windows
=======
On Windows, put the `restic.exe` into `%SystemRoot%\System32` to use restic
On Windows, put the `restic.exe` binary into `%SystemRoot%\\System32` to use restic
in scripts without the need for absolute paths to the binary. This requires
Admin rights.
administrator rights.
Docker Container
****************


@@ -14,10 +14,10 @@
Preparing a new repository
##########################
The place where your backups will be saved at is called a "repository".
The place where your backups will be saved is called a "repository".
This chapter explains how to create ("init") such a repository. The repository
can be stored locally, or on some remote server or service. We'll first cover
using a local repository, the remaining sections of this chapter cover all the
using a local repository; the remaining sections of this chapter cover all the
other options. You can skip to the next chapter once you've read the relevant
section here.
@@ -129,8 +129,8 @@ scheme like this:
$ restic -r rest:http://host:8000/
Depending on your REST server setup, you can use HTTPS protocol,
password protection, or multiple repositories. Or any combination of
those features, as you see fit. TCP/IP port is also configurable. Here
password protection, multiple repositories or any combination of
those features. The TCP/IP port is also configurable. Here
are some more examples:
.. code-block:: console
@@ -167,7 +167,7 @@ while creating the bucket.
$ export AWS_SECRET_ACCESS_KEY=<MY_SECRET_ACCESS_KEY>
You can then easily initialize a repository that uses your Amazon S3 as
a backend, if the bucket does not exist yet it will be created in the
a backend. If the bucket does not exist it will be created in the
default location:
.. code-block:: console
@@ -209,7 +209,7 @@ written in Go and compatible with AWS S3 API.
on installation and getting started on Minio Client and Minio Server.
You must first setup the following environment variables with the
credentials of your running Minio Server.
credentials of your Minio Server.
.. code-block:: console
@@ -234,7 +234,7 @@ OpenStack Swift
Restic can backup data to an OpenStack Swift container. Because Swift supports
various authentication methods, credentials are passed through environment
variables. In order to help integration with existing OpenStack installations,
the naming convention of those variables follows official python swift client:
the naming convention of those variables follows the official Python Swift client:
.. code-block:: console
@@ -265,12 +265,12 @@ the naming convention of those variables follows official python swift client:
$ export OS_AUTH_TOKEN=<MY_AUTH_TOKEN>
Restic should be compatible with `OpenStack RC file
Restic should be compatible with an `OpenStack RC file
<https://docs.openstack.org/user-guide/common/cli-set-environment-variables-using-openstack-rc.html>`__
in most cases.
Once environment variables are set up, a new repository can be created. The
name of swift container and optional path can be specified. If
name of the Swift container and optional path can be specified. If
the container does not exist, it will be created automatically:
.. code-block:: console
@@ -282,7 +282,7 @@ the container does not exist, it will be created automatically:
Please note that knowledge of your password is required to access the repository.
Losing your password means that your data is irrecoverably lost.
The policy of new container created by restic can be changed using environment variable:
The policy of the new container created by restic can be changed using environment variable:
.. code-block:: console
@@ -293,16 +293,24 @@ Backblaze B2
************
Restic can backup data to any Backblaze B2 bucket. You need to first setup the
following environment variables with the credentials you obtained when signed
into your B2 account:
following environment variables with the credentials you can find in the
dashboard on the "Buckets" page when signed into your B2 account:
.. code-block:: console
$ export B2_ACCOUNT_ID=<MY_ACCOUNT_ID>
$ export B2_ACCOUNT_KEY=<MY_SECRET_ACCOUNT_KEY>
You can then easily initialize a repository stored at Backblaze B2. If the
bucket does not exist yet, it will be created:
You can either specify the so-called "Master Application Key" here (which can
access any bucket at any path) or a dedicated "Application Key" created just
for restic (which may be restricted to a specific bucket and/or path). The
master key consists of a ``B2_ACCOUNT_ID`` and a ``B2_ACCOUNT_KEY``, and each
application key is also a pair of ``B2_ACCOUNT_ID`` and ``B2_ACCOUNT_KEY``. The
ID of an application key is much longer than the ID of the master key.
You can then initialize a repository stored at Backblaze B2. If the
bucket does not exist yet and the credentials you passed to restic have the
privilege to create buckets, it will be created automatically:
.. code-block:: console
@@ -313,8 +321,10 @@ bucket does not exist yet, it will be created:
Please note that knowledge of your password is required to access the repository.
Losing your password means that your data is irrecoverably lost.
Note that the bucket name must be unique across all of B2.
The number of concurrent connections to the B2 service can be set with the ``-o
b2.connections=10``. By default, at most five parallel connections are
b2.connections=10`` switch. By default, at most five parallel connections are
established.
Microsoft Azure Blob Storage
@@ -341,15 +351,13 @@ root path like this:
[...]
The number of concurrent connections to the Azure Blob Storage service can be set with the
``-o azure.connections=10``. By default, at most five parallel connections are
``-o azure.connections=10`` switch. By default, at most five parallel connections are
established.
Google Cloud Storage
********************
Restic supports Google Cloud Storage as a backend.
Restic connects to Google Cloud Storage via a `service account`_.
Restic supports Google Cloud Storage as a backend and connects via a `service account`_.
For normal restic operation, the service account must have the
``storage.objects.{create,delete,get,list}`` permissions for the bucket. These
@@ -371,7 +379,7 @@ key file and the project ID as follows:
Restic uses Google's client library to generate `default authentication material`_,
which means if you're running in Google Container Engine or are otherwise
located on an instance with default service accounts then these should work out
located on an instance with default service accounts then these should work out of
the box.
Once authenticated, you can use the ``gs:`` backend type to create a new
@@ -387,7 +395,7 @@ repository in the bucket ``foo`` at the root path:
[...]
The number of concurrent connections to the GCS service can be set with the
``-o gs.connections=10``. By default, at most five parallel connections are
``-o gs.connections=10`` switch. By default, at most five parallel connections are
established.
.. _service account: https://cloud.google.com/storage/docs/authentication#service_accounts
@@ -506,7 +514,7 @@ At the moment, restic only supports the default Windows console
interaction. If you use emulation environments like
`MSYS2 <https://msys2.github.io/>`__ or
`Cygwin <https://www.cygwin.com/>`__, which use terminals like
``Mintty`` or ``rxvt``, you may get a password error:
``Mintty`` or ``rxvt``, you may get a password error.
You can work around this by using a special tool called ``winpty`` (look
`here <https://sourceforge.net/p/msys2/wiki/Porting/>`__ and


@@ -85,3 +85,36 @@ the data directly. This can be achieved by using the `dump` command, like this:
.. code-block:: console
$ restic -r /srv/restic-repo dump latest production.sql | mysql
If you have saved multiple different things into the same repo, the ``latest``
snapshot may not be the right one. For example, consider the following
snapshots in a repo:
.. code-block:: console
$ restic -r /srv/restic-repo snapshots
ID Date Host Tags Directory
----------------------------------------------------------------------
562bfc5e 2018-07-14 20:18:01 mopped /home/user/file1
bbacb625 2018-07-14 20:18:07 mopped /home/other/work
e922c858 2018-07-14 20:18:10 mopped /home/other/work
098db9d5 2018-07-14 20:18:13 mopped /production.sql
b62f46ec 2018-07-14 20:18:16 mopped /home/user/file1
1541acae 2018-07-14 20:18:18 mopped /home/other/work
----------------------------------------------------------------------
Here, restic would resolve ``latest`` to the snapshot ``1541acae``, which does
not contain the file we'd like to print at all (``production.sql``). In this
case, you can pass restic the snapshot ID of the snapshot you'd like to restore:
.. code-block:: console
$ restic -r /srv/restic-repo dump 098db9d5 production.sql | mysql
Or you can pass restic a path that should be used for selecting the latest
snapshot. The path must match the path printed in the "Directory" column,
e.g.:
.. code-block:: console
$ restic -r /srv/restic-repo dump --path /production.sql latest production.sql | mysql


@@ -23,6 +23,13 @@ data that was referenced by the snapshot from the repository. This can
be automated with the ``--prune`` option of the ``forget`` command,
which runs ``prune`` automatically if snapshots have been removed.
.. Warning::
Pruning snapshots can be a very time-consuming process, taking nearly
as long as backups themselves. During a prune operation, the index is
locked and backups cannot be completed. Performance improvements are
planned for this feature.
It is advisable to run ``restic check`` after pruning, to make sure
you are alerted, should the internal data structures of the repository
be damaged.


@@ -1310,6 +1310,7 @@ _restic_root_command()
commands+=("rebuild-index")
commands+=("restore")
commands+=("snapshots")
commands+=("stats")
commands+=("tag")
commands+=("unlock")
commands+=("version")


@@ -36,6 +36,7 @@ Usage help is available:
rebuild-index Build a new index file
restore Extract the data from a snapshot
snapshots List all snapshots
stats Count up sizes and show information about repository data
tag Modify tags on snapshots
unlock Remove locks other processes created
version Print version information
@@ -236,6 +237,76 @@ The following metadata is handled by restic:
- Subtree
- ExtendedAttributes
Getting information about repository data
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Use the ``stats`` command to count up stats about the data in the repository.
There are different counting modes available using the ``--mode`` flag,
depending on what you want to calculate. The default is the restore size, or
the size required to restore the files:
- ``restore-size`` (default) counts the size of the restored files.
- ``files-by-contents`` counts the total size of unique files as given by their
contents. This can be useful since a file is considered unique only if it has
unique contents. Keep in mind that a small change to a large file (even when the
file name/path hasn't changed) will make the two versions look like different
files, essentially causing the whole size of the file to be counted twice.
- ``raw-data`` counts the size of the blobs in the repository, regardless of how many
files reference them. This tells you how much restic has reduced all your original
data down to (either for a single snapshot or across all your backups), and compared
to the size given by the restore-size mode, can tell you how much deduplication is
helping you.
- ``blobs-per-file`` is a mix between the files-by-contents and raw-data modes;
it is useful for knowing how much value your backup is providing you in terms of unique
data stored per file. Like files-by-contents, it is resilient to file renames/moves.
Unlike files-by-contents, it does not balloon to high values when large files
receive small edits, as long as the file path stays the same. Unlike raw-data, this mode
DOES consider how many files point to each blob, so the more files a blob is
referenced by, the more it counts toward the size.
For example, to calculate how much space would be
required to restore the latest snapshot (regardless of which host created it):
.. code-block:: console
$ restic stats latest
password is correct
Total File Count: 10538
Total Size: 37.824 GiB
If multiple hosts are backing up to the repository, the latest snapshot may not
be the one you want. You can specify the latest snapshot from only a specific
host by using the ``--host`` flag:
.. code-block:: console
$ restic stats --host myserver latest
password is correct
Total File Count: 21766
Total Size: 481.783 GiB
There we see that it would take 482 GiB of disk space to restore the latest
snapshot from "myserver".
But how much space does that snapshot take on disk? In other words, how much
has restic's deduplication helped? We can check:
.. code-block:: console
$ restic stats --host myserver --mode raw-data latest
password is correct
Total Blob Count: 340847
Total Size: 458.663 GiB
Comparing this size to the previous command, we see that restic has saved
about 23 GiB of space with deduplication (481.783 GiB - 458.663 GiB ≈ 23.1 GiB).
Which mode you use depends on your exact use case. Some modes are more useful
across all snapshots, while others make more sense on just a single snapshot,
depending on what you're trying to calculate.
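The remaining modes are selected the same way; as a sketch (the ``latest``
reference is illustrative, and the reported numbers depend entirely on your
data):
.. code-block:: console
$ restic stats --mode files-by-contents latest
$ restic stats --mode blobs-per-file latest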
Scripting
---------


@ -50,13 +50,13 @@ func open(cfg Config, rt http.RoundTripper) (*Backend, error) {
// call to a pre-defined endpoint, only valid inside
// configured ec2 instances)
creds := credentials.NewChainCredentials([]credentials.Provider{
&credentials.EnvAWS{},
&credentials.Static{
Value: credentials.Value{
AccessKeyID: cfg.KeyID,
SecretAccessKey: cfg.Secret,
},
},
&credentials.EnvAWS{},
&credentials.EnvMinio{},
&credentials.FileAWSCredentials{},
&credentials.FileMinioClient{},


@ -61,7 +61,13 @@ func writeCachedirTag(dir string) error {
return err
}
f, err := fs.OpenFile(filepath.Join(dir, "CACHEDIR.TAG"), os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0644)
tagfile := filepath.Join(dir, "CACHEDIR.TAG")
_, err := fs.Lstat(tagfile)
if err != nil && !os.IsNotExist(err) {
return errors.Wrap(err, "Lstat")
}
f, err := fs.OpenFile(tagfile, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0644)
if err != nil {
if os.IsExist(errors.Cause(err)) {
return nil


@ -582,12 +582,6 @@ func (c *Checker) checkTree(id restic.ID, tree *restic.Tree) (errs []error) {
}
size += uint64(blobSize)
}
if size != node.Size {
errs = append(errs, Error{
TreeID: id,
Err: errors.Errorf("file %q: metadata size (%v) and sum of blob sizes (%v) do not match", node.Name, node.Size, size),
})
}
case "dir":
if node.Subtree == nil {
errs = append(errs, Error{TreeID: id, Err: errors.Errorf("dir node %q has no subtree", node.Name)})


@ -112,7 +112,7 @@ func TestNodeFromFileInfo(t *testing.T) {
s, ok := fi.Sys().(*syscall.Stat_t)
if !ok {
t.Skip("fi type is %T, not stat_t", fi.Sys())
t.Skipf("fi type is %T, not stat_t", fi.Sys())
return
}


@ -99,36 +99,16 @@ func (res *Restorer) traverseTree(ctx context.Context, target, location string,
return err
}
enteredDir := false
if node.Type == "dir" {
if node.Subtree == nil {
return errors.Errorf("Dir without subtree in tree %v", treeID.Str())
}
// ifedorenko: apparently a dir can be selected explicitly or implicitly when a child is selected
// to support implicit selection, visit the directory from within visitor#visitNode
if selectedForRestore {
enteredDir = true
err = sanitizeError(visitor.enterDir(node, nodeTarget, nodeLocation))
if err != nil {
return err
}
} else {
_visitor := visitor
visitor = treeVisitor{
enterDir: _visitor.enterDir,
visitNode: func(node *restic.Node, nodeTarget, nodeLocation string) error {
if !enteredDir {
enteredDir = true
derr := sanitizeError(_visitor.enterDir(node, nodeTarget, nodeLocation))
if derr != nil {
return derr
}
}
return _visitor.visitNode(node, nodeTarget, nodeLocation)
},
leaveDir: _visitor.leaveDir,
}
}
if childMayBeSelected {
@ -137,25 +117,21 @@ func (res *Restorer) traverseTree(ctx context.Context, target, location string,
return err
}
}
}
if selectedForRestore && node.Type != "dir" {
err = sanitizeError(visitor.visitNode(node, nodeTarget, nodeLocation))
if err != nil {
err = res.Error(nodeLocation, node, errors.Wrap(err, "restoreNodeTo"))
if selectedForRestore {
err = sanitizeError(visitor.leaveDir(node, nodeTarget, nodeLocation))
if err != nil {
return err
}
}
continue
}
if enteredDir {
err = sanitizeError(visitor.leaveDir(node, nodeTarget, nodeLocation))
if selectedForRestore {
err = sanitizeError(visitor.visitNode(node, nodeTarget, nodeLocation))
if err != nil {
err = res.Error(nodeLocation, node, errors.Wrap(err, "RestoreTimestamps"))
if err != nil {
return err
}
return err
}
}
}
@ -197,6 +173,12 @@ func (res *Restorer) RestoreTo(ctx context.Context, dst string) error {
}
}
// make sure the target directory exists
err = fs.MkdirAll(dst, 0777) // umask takes care of dir permissions
if err != nil {
return errors.Wrap(err, "MkdirAll")
}
idx := restic.NewHardlinkIndex()
return res.traverseTree(ctx, dst, string(filepath.Separator), *res.sn.Tree, treeVisitor{
enterDir: func(node *restic.Node, target, location string) error {
@ -205,6 +187,13 @@ func (res *Restorer) RestoreTo(ctx context.Context, dst string) error {
return fs.MkdirAll(target, 0700)
},
visitNode: func(node *restic.Node, target, location string) error {
// create parent dir with default permissions
// #leaveDir restores dir metadata after visiting all children
err := fs.MkdirAll(filepath.Dir(target), 0700)
if err != nil {
return err
}
return res.restoreNodeTo(ctx, node, target, location, idx)
},
leaveDir: func(node *restic.Node, target, location string) error {


@ -1,4 +1,4 @@
package restorer_test
package restorer
import (
"bytes"
@ -13,7 +13,6 @@ import (
"github.com/restic/restic/internal/fs"
"github.com/restic/restic/internal/repository"
"github.com/restic/restic/internal/restic"
"github.com/restic/restic/internal/restorer"
rtest "github.com/restic/restic/internal/test"
)
@ -92,7 +91,7 @@ func saveDir(t testing.TB, repo restic.Repository, nodes map[string]Node) restic
return id
}
func saveSnapshot(t testing.TB, repo restic.Repository, snapshot Snapshot) (restic.Repository, restic.ID) {
func saveSnapshot(t testing.TB, repo restic.Repository, snapshot Snapshot) (*restic.Snapshot, restic.ID) {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
@ -119,7 +118,7 @@ func saveSnapshot(t testing.TB, repo restic.Repository, snapshot Snapshot) (rest
t.Fatal(err)
}
return repo, id
return sn, id
}
// toSlash converts the OS specific path dir to a slash-separated path.
@ -134,6 +133,7 @@ func TestRestorer(t *testing.T) {
Files map[string]string
ErrorsMust map[string]string
ErrorsMay map[string]string
Select func(item string, dstpath string, node *restic.Node) (selectForRestore bool, childMayBeSelected bool)
}{
// valid test cases
{
@ -202,6 +202,41 @@ func TestRestorer(t *testing.T) {
"dir/file": "file in dir",
},
},
{
Snapshot: Snapshot{
Nodes: map[string]Node{
"topfile": File{"top-level file"},
},
},
Files: map[string]string{
"topfile": "top-level file",
},
},
{
Snapshot: Snapshot{
Nodes: map[string]Node{
"dir": Dir{
Nodes: map[string]Node{
"file": File{"content: file\n"},
},
},
},
},
Files: map[string]string{
"dir/file": "content: file\n",
},
Select: func(item, dstpath string, node *restic.Node) (selectedForRestore bool, childMayBeSelected bool) {
switch item {
case filepath.FromSlash("/dir"):
childMayBeSelected = true
case filepath.FromSlash("/dir/file"):
selectedForRestore = true
childMayBeSelected = true
}
return selectedForRestore, childMayBeSelected
},
},
// test cases with invalid/constructed names
{
@ -265,7 +300,7 @@ func TestRestorer(t *testing.T) {
_, id := saveSnapshot(t, repo, test.Snapshot)
t.Logf("snapshot saved as %v", id.Str())
res, err := restorer.NewRestorer(repo, id)
res, err := NewRestorer(repo, id)
if err != nil {
t.Fatal(err)
}
@ -273,6 +308,9 @@ func TestRestorer(t *testing.T) {
tempdir, cleanup := rtest.TempDir(t)
defer cleanup()
// make sure we're creating a new subdir of the tempdir
tempdir = filepath.Join(tempdir, "target")
res.SelectFilter = func(item, dstpath string, node *restic.Node) (selectedForRestore bool, childMayBeSelected bool) {
t.Logf("restore %v to %v", item, dstpath)
if !fs.HasPathPrefix(tempdir, dstpath) {
@ -280,6 +318,11 @@ func TestRestorer(t *testing.T) {
item, dstpath, tempdir)
return false, false
}
if test.Select != nil {
return test.Select(item, dstpath, node)
}
return true, true
}
@ -378,7 +421,7 @@ func TestRestorerRelative(t *testing.T) {
_, id := saveSnapshot(t, repo, test.Snapshot)
t.Logf("snapshot saved as %v", id.Str())
res, err := restorer.NewRestorer(repo, id)
res, err := NewRestorer(repo, id)
if err != nil {
t.Fatal(err)
}
@ -423,3 +466,213 @@ func TestRestorerRelative(t *testing.T) {
})
}
}
type TraverseTreeCheck func(testing.TB) treeVisitor
type TreeVisit struct {
funcName string // name of the function
location string // location passed to the function
}
func checkVisitOrder(list []TreeVisit) TraverseTreeCheck {
var pos int
return func(t testing.TB) treeVisitor {
check := func(funcName string) func(*restic.Node, string, string) error {
return func(node *restic.Node, target, location string) error {
if pos >= len(list) {
t.Errorf("step %v, %v(%v): expected no more than %d function calls", pos, funcName, location, len(list))
pos++
return nil
}
v := list[pos]
if v.funcName != funcName {
t.Errorf("step %v, location %v: want function %v, but %v was called",
pos, location, v.funcName, funcName)
}
if location != filepath.FromSlash(v.location) {
t.Errorf("step %v: want location %v, got %v", pos, list[pos].location, location)
}
pos++
return nil
}
}
return treeVisitor{
enterDir: check("enterDir"),
visitNode: check("visitNode"),
leaveDir: check("leaveDir"),
}
}
}
func TestRestorerTraverseTree(t *testing.T) {
var tests = []struct {
Snapshot
Select func(item string, dstpath string, node *restic.Node) (selectForRestore bool, childMayBeSelected bool)
Visitor TraverseTreeCheck
}{
{
// select everything
Snapshot: Snapshot{
Nodes: map[string]Node{
"dir": Dir{Nodes: map[string]Node{
"otherfile": File{"x"},
"subdir": Dir{Nodes: map[string]Node{
"file": File{"content: file\n"},
}},
}},
"foo": File{"content: foo\n"},
},
},
Select: func(item string, dstpath string, node *restic.Node) (selectForRestore bool, childMayBeSelected bool) {
return true, true
},
Visitor: checkVisitOrder([]TreeVisit{
{"enterDir", "/dir"},
{"visitNode", "/dir/otherfile"},
{"enterDir", "/dir/subdir"},
{"visitNode", "/dir/subdir/file"},
{"leaveDir", "/dir/subdir"},
{"leaveDir", "/dir"},
{"visitNode", "/foo"},
}),
},
// select only the top-level file
{
Snapshot: Snapshot{
Nodes: map[string]Node{
"dir": Dir{Nodes: map[string]Node{
"otherfile": File{"x"},
"subdir": Dir{Nodes: map[string]Node{
"file": File{"content: file\n"},
}},
}},
"foo": File{"content: foo\n"},
},
},
Select: func(item string, dstpath string, node *restic.Node) (selectForRestore bool, childMayBeSelected bool) {
if item == "/foo" {
return true, false
}
return false, false
},
Visitor: checkVisitOrder([]TreeVisit{
{"visitNode", "/foo"},
}),
},
{
Snapshot: Snapshot{
Nodes: map[string]Node{
"aaa": File{"content: foo\n"},
"dir": Dir{Nodes: map[string]Node{
"otherfile": File{"x"},
"subdir": Dir{Nodes: map[string]Node{
"file": File{"content: file\n"},
}},
}},
},
},
Select: func(item string, dstpath string, node *restic.Node) (selectForRestore bool, childMayBeSelected bool) {
if item == "/aaa" {
return true, false
}
return false, false
},
Visitor: checkVisitOrder([]TreeVisit{
{"visitNode", "/aaa"},
}),
},
// select dir/
{
Snapshot: Snapshot{
Nodes: map[string]Node{
"dir": Dir{Nodes: map[string]Node{
"otherfile": File{"x"},
"subdir": Dir{Nodes: map[string]Node{
"file": File{"content: file\n"},
}},
}},
"foo": File{"content: foo\n"},
},
},
Select: func(item string, dstpath string, node *restic.Node) (selectForRestore bool, childMayBeSelected bool) {
if strings.HasPrefix(item, "/dir") {
return true, true
}
return false, false
},
Visitor: checkVisitOrder([]TreeVisit{
{"enterDir", "/dir"},
{"visitNode", "/dir/otherfile"},
{"enterDir", "/dir/subdir"},
{"visitNode", "/dir/subdir/file"},
{"leaveDir", "/dir/subdir"},
{"leaveDir", "/dir"},
}),
},
// select only dir/otherfile
{
Snapshot: Snapshot{
Nodes: map[string]Node{
"dir": Dir{Nodes: map[string]Node{
"otherfile": File{"x"},
"subdir": Dir{Nodes: map[string]Node{
"file": File{"content: file\n"},
}},
}},
"foo": File{"content: foo\n"},
},
},
Select: func(item string, dstpath string, node *restic.Node) (selectForRestore bool, childMayBeSelected bool) {
switch item {
case "/dir":
return false, true
case "/dir/otherfile":
return true, false
default:
return false, false
}
},
Visitor: checkVisitOrder([]TreeVisit{
{"visitNode", "/dir/otherfile"},
}),
},
}
for _, test := range tests {
t.Run("", func(t *testing.T) {
repo, cleanup := repository.TestRepository(t)
defer cleanup()
sn, id := saveSnapshot(t, repo, test.Snapshot)
res, err := NewRestorer(repo, id)
if err != nil {
t.Fatal(err)
}
res.SelectFilter = test.Select
tempdir, cleanup := rtest.TempDir(t)
defer cleanup()
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
// make sure we're creating a new subdir of the tempdir
target := filepath.Join(tempdir, "target")
err = res.traverseTree(ctx, target, string(filepath.Separator), *sn.Tree, test.Visitor(t))
if err != nil {
t.Fatal(err)
}
})
}
}


@ -357,7 +357,7 @@ func (b *Backup) Finish() {
b.P("Dirs: %5d new, %5d changed, %5d unmodified\n", b.summary.Dirs.New, b.summary.Dirs.Changed, b.summary.Dirs.Unchanged)
b.V("Data Blobs: %5d new\n", b.summary.ItemStats.DataBlobs)
b.V("Tree Blobs: %5d new\n", b.summary.ItemStats.TreeBlobs)
b.P("Added: %-5s\n", formatBytes(b.summary.ItemStats.DataSize+b.summary.ItemStats.TreeSize))
b.P("Added to the repo: %-5s\n", formatBytes(b.summary.ItemStats.DataSize+b.summary.ItemStats.TreeSize))
b.P("\n")
b.P("processed %v files, %v in %s",
b.summary.Files.New+b.summary.Files.Changed+b.summary.Files.Unchanged,


@ -1,4 +0,0 @@
/*.seq.svg
# not ignoring *.seq.png; we want those committed to the repo
# for embedding on Github


@ -1,6 +0,0 @@
# bazil.org/fuse documentation
See also API docs at http://godoc.org/bazil.org/fuse
- [The mount sequence](mount-sequence.md)
- [Writing documentation](writing-docs.md)


@ -1,32 +0,0 @@
seqdiag {
app;
fuse [label="bazil.org/fuse"];
fusermount;
kernel;
mounts;
app -> fuse [label="Mount"];
fuse -> fusermount [label="spawn, pass socketpair fd"];
fusermount -> kernel [label="open /dev/fuse"];
fusermount -> kernel [label="mount(2)"];
kernel ->> mounts [label="mount is visible"];
fusermount <-- kernel [label="mount(2) returns"];
fuse <<-- fusermount [diagonal, label="exit, receive /dev/fuse fd", leftnote="on Linux, successful exit here\nmeans the mount has happened,\nthough InitRequest might not have yet"];
app <-- fuse [label="Mount returns\nConn.Ready is already closed"];
app -> fuse [label="fs.Serve"];
fuse => kernel [label="read /dev/fuse fd", note="starts with InitRequest"];
fuse -> app [label="Init"];
fuse <-- app [color=red];
fuse -> kernel [label="write /dev/fuse fd", color=red];
kernel -> kernel [label="set connection\nstate to error", color=red];
fuse <-- kernel;
... conn.MountError == nil, so it is still mounted ...
... call conn.Close to clean up ...
}

Binary file not shown (image, 28 KiB, removed)

@ -1,41 +0,0 @@
seqdiag {
// seqdiag -T svg -o doc/mount-osx.svg doc/mount-osx.seq
app;
fuse [label="bazil.org/fuse"];
fusermount;
kernel;
mounts;
app -> fuse [label="Mount"];
fuse -> fusermount [label="spawn, pass socketpair fd"];
fusermount -> kernel [label="open /dev/fuse"];
fusermount -> kernel [label="mount(2)"];
kernel ->> mounts [label="mount is visible"];
fusermount <-- kernel [label="mount(2) returns"];
fuse <<-- fusermount [diagonal, label="exit, receive /dev/fuse fd", leftnote="on Linux, successful exit here\nmeans the mount has happened,\nthough InitRequest might not have yet"];
app <-- fuse [label="Mount returns\nConn.Ready is already closed", rightnote="InitRequest and StatfsRequest\nmay or may not be seen\nbefore Conn.Ready,\ndepending on platform"];
app -> fuse [label="fs.Serve"];
fuse => kernel [label="read /dev/fuse fd", note="starts with InitRequest"];
fuse => app [label="FS/Node/Handle methods"];
fuse => kernel [label="write /dev/fuse fd"];
... repeat ...
... shutting down ...
app -> fuse [label="Unmount"];
fuse -> fusermount [label="fusermount -u"];
fusermount -> kernel;
kernel <<-- mounts;
fusermount <-- kernel;
fuse <<-- fusermount [diagonal];
app <-- fuse [label="Unmount returns"];
// actually triggers before above
fuse <<-- kernel [diagonal, label="/dev/fuse EOF"];
app <-- fuse [label="fs.Serve returns"];
app -> fuse [label="conn.Close"];
fuse -> kernel [label="close /dev/fuse fd"];
fuse <-- kernel;
app <-- fuse;
}

Binary file not shown (image, 44 KiB, removed)

@ -1,32 +0,0 @@
seqdiag {
app;
fuse [label="bazil.org/fuse"];
wait [label="callMount\nhelper goroutine"];
mount_osxfusefs;
kernel;
app -> fuse [label="Mount"];
fuse -> kernel [label="open /dev/osxfuseN"];
fuse -> mount_osxfusefs [label="spawn, pass fd"];
fuse -> wait [label="goroutine", note="blocks on cmd.Wait"];
app <-- fuse [label="Mount returns"];
mount_osxfusefs -> kernel [label="mount(2)"];
app -> fuse [label="fs.Serve"];
fuse => kernel [label="read /dev/osxfuseN fd", note="starts with InitRequest,\nalso seen before mount exits:\ntwo StatfsRequest calls"];
fuse -> app [label="Init"];
fuse <-- app [color=red];
fuse -> kernel [label="write /dev/osxfuseN fd", color=red];
fuse <-- kernel;
mount_osxfusefs <-- kernel [label="mount(2) returns", color=red];
wait <<-- mount_osxfusefs [diagonal, label="exit", color=red];
app <<-- wait [diagonal, label="mount has failed,\nclose Conn.Ready", color=red];
// actually triggers before above
fuse <<-- kernel [diagonal, label="/dev/osxfuseN EOF"];
app <-- fuse [label="fs.Serve returns"];
... conn.MountError != nil, so it was never mounted ...
... call conn.Close to clean up ...
}

Binary file not shown (image, 32 KiB, removed)

@ -1,45 +0,0 @@
seqdiag {
// seqdiag -T svg -o doc/mount-osx.svg doc/mount-osx.seq
app;
fuse [label="bazil.org/fuse"];
wait [label="callMount\nhelper goroutine"];
mount_osxfusefs;
kernel;
mounts;
app -> fuse [label="Mount"];
fuse -> kernel [label="open /dev/osxfuseN"];
fuse -> mount_osxfusefs [label="spawn, pass fd"];
fuse -> wait [label="goroutine", note="blocks on cmd.Wait"];
app <-- fuse [label="Mount returns"];
mount_osxfusefs -> kernel [label="mount(2)"];
app -> fuse [label="fs.Serve"];
fuse => kernel [label="read /dev/osxfuseN fd", note="starts with InitRequest,\nalso seen before mount exits:\ntwo StatfsRequest calls"];
fuse => app [label="FS/Node/Handle methods"];
fuse => kernel [label="write /dev/osxfuseN fd"];
... repeat ...
kernel ->> mounts [label="mount is visible"];
mount_osxfusefs <-- kernel [label="mount(2) returns"];
wait <<-- mount_osxfusefs [diagonal, label="exit", leftnote="on OS X, successful exit\nhere means we finally know\nthe mount has happened\n(can't trust InitRequest,\nkernel might have timed out\nwaiting for InitResponse)"];
app <<-- wait [diagonal, label="mount is ready,\nclose Conn.Ready", rightnote="InitRequest and StatfsRequest\nmay or may not be seen\nbefore Conn.Ready,\ndepending on platform"];
... shutting down ...
app -> fuse [label="Unmount"];
fuse -> kernel [label="umount(2)"];
kernel <<-- mounts;
fuse <-- kernel;
app <-- fuse [label="Unmount returns"];
// actually triggers before above
fuse <<-- kernel [diagonal, label="/dev/osxfuseN EOF"];
app <-- fuse [label="fs.Serve returns"];
app -> fuse [label="conn.Close"];
fuse -> kernel [label="close /dev/osxfuseN"];
fuse <-- kernel;
app <-- fuse;
}

Binary file not shown (image, 50 KiB, removed)

@ -1,30 +0,0 @@
# The mount sequence
FUSE mounting is a little bit tricky. There's a userspace helper tool
that performs the handshake with the kernel, and then steps out of the
way. This helper behaves differently on different platforms, forcing a
more complex API on us.
## Successful runs
On Linux, the mount is immediate and file system accesses wait until
the requests are served.
![Diagram of Linux FUSE mount sequence](mount-linux.seq.png)
On OS X, the mount becomes visible only after `InitRequest` (and maybe
more) have been served.
![Diagram of OSXFUSE mount sequence](mount-osx.seq.png)
## Errors
Let's see what happens if `InitRequest` gets an error response. On
Linux, the mountpoint is there but all operations will fail:
![Diagram of Linux error handling](mount-linux-error-init.seq.png)
On OS X, the mount never happened:
![Diagram of OS X error handling](mount-osx-error-init.seq.png)


@ -1,16 +0,0 @@
# Writing documentation
## Sequence diagrams
The sequence diagrams are generated with `seqdiag`:
http://blockdiag.com/en/seqdiag/index.html
An easy way to work on them is to automatically update the generated
files with https://github.com/cespare/reflex :
reflex -g 'doc/[^.]*.seq' -- seqdiag -T svg -o '{}.svg' '{}' &
reflex -g 'doc/[^.]*.seq' -- seqdiag -T png -o '{}.png' '{}' &
The markdown files refer to PNG images because of Github limitations,
but the SVG is generally more pleasant to view.


@ -1,184 +0,0 @@
// Clockfs implements a file system with the current time in a file.
// It was written to demonstrate kernel cache invalidation.
package main
import (
"flag"
"fmt"
"log"
"os"
"sync/atomic"
"syscall"
"time"
"bazil.org/fuse"
"bazil.org/fuse/fs"
_ "bazil.org/fuse/fs/fstestutil"
"bazil.org/fuse/fuseutil"
"golang.org/x/net/context"
)
func usage() {
fmt.Fprintf(os.Stderr, "Usage of %s:\n", os.Args[0])
fmt.Fprintf(os.Stderr, " %s MOUNTPOINT\n", os.Args[0])
flag.PrintDefaults()
}
func run(mountpoint string) error {
c, err := fuse.Mount(
mountpoint,
fuse.FSName("clock"),
fuse.Subtype("clockfsfs"),
fuse.LocalVolume(),
fuse.VolumeName("Clock filesystem"),
)
if err != nil {
return err
}
defer c.Close()
if p := c.Protocol(); !p.HasInvalidate() {
return fmt.Errorf("kernel FUSE support is too old to have invalidations: version %v", p)
}
srv := fs.New(c, nil)
filesys := &FS{
// We pre-create the clock node so that it's always the same
// object returned from all the Lookups. You could carefully
// track its lifetime between Lookup&Forget, and have the
// ticking & invalidation happen only when active, but let's
// keep this example simple.
clockFile: &File{
fuse: srv,
},
}
filesys.clockFile.tick()
// This goroutine never exits. That's fine for this example.
go filesys.clockFile.update()
if err := srv.Serve(filesys); err != nil {
return err
}
// Check if the mount process has an error to report.
<-c.Ready
if err := c.MountError; err != nil {
return err
}
return nil
}
func main() {
flag.Usage = usage
flag.Parse()
if flag.NArg() != 1 {
usage()
os.Exit(2)
}
mountpoint := flag.Arg(0)
if err := run(mountpoint); err != nil {
log.Fatal(err)
}
}
type FS struct {
clockFile *File
}
var _ fs.FS = (*FS)(nil)
func (f *FS) Root() (fs.Node, error) {
return &Dir{fs: f}, nil
}
// Dir implements both Node and Handle for the root directory.
type Dir struct {
fs *FS
}
var _ fs.Node = (*Dir)(nil)
func (d *Dir) Attr(ctx context.Context, a *fuse.Attr) error {
a.Inode = 1
a.Mode = os.ModeDir | 0555
return nil
}
var _ fs.NodeStringLookuper = (*Dir)(nil)
func (d *Dir) Lookup(ctx context.Context, name string) (fs.Node, error) {
if name == "clock" {
return d.fs.clockFile, nil
}
return nil, fuse.ENOENT
}
var dirDirs = []fuse.Dirent{
{Inode: 2, Name: "clock", Type: fuse.DT_File},
}
var _ fs.HandleReadDirAller = (*Dir)(nil)
func (d *Dir) ReadDirAll(ctx context.Context) ([]fuse.Dirent, error) {
return dirDirs, nil
}
type File struct {
fs.NodeRef
fuse *fs.Server
content atomic.Value
count uint64
}
var _ fs.Node = (*File)(nil)
func (f *File) Attr(ctx context.Context, a *fuse.Attr) error {
a.Inode = 2
a.Mode = 0444
t := f.content.Load().(string)
a.Size = uint64(len(t))
return nil
}
var _ fs.NodeOpener = (*File)(nil)
func (f *File) Open(ctx context.Context, req *fuse.OpenRequest, resp *fuse.OpenResponse) (fs.Handle, error) {
if !req.Flags.IsReadOnly() {
return nil, fuse.Errno(syscall.EACCES)
}
resp.Flags |= fuse.OpenKeepCache
return f, nil
}
var _ fs.Handle = (*File)(nil)
var _ fs.HandleReader = (*File)(nil)
func (f *File) Read(ctx context.Context, req *fuse.ReadRequest, resp *fuse.ReadResponse) error {
t := f.content.Load().(string)
fuseutil.HandleRead(req, resp, []byte(t))
return nil
}
func (f *File) tick() {
// Intentionally a variable-length format, to demonstrate size changes.
f.count++
s := fmt.Sprintf("%d\t%s\n", f.count, time.Now())
f.content.Store(s)
// For simplicity, this example tries to send invalidate
// notifications even when the kernel does not hold a reference to
// the node, so be extra sure to ignore ErrNotCached.
if err := f.fuse.InvalidateNodeData(f); err != nil && err != fuse.ErrNotCached {
log.Printf("invalidate error: %v", err)
}
}
func (f *File) update() {
tick := time.NewTicker(1 * time.Second)
defer tick.Stop()
for range tick.C {
f.tick()
}
}


@ -1,101 +0,0 @@
// Hellofs implements a simple "hello world" file system.
package main
import (
"flag"
"fmt"
"log"
"os"
"bazil.org/fuse"
"bazil.org/fuse/fs"
_ "bazil.org/fuse/fs/fstestutil"
"golang.org/x/net/context"
)
func usage() {
fmt.Fprintf(os.Stderr, "Usage of %s:\n", os.Args[0])
fmt.Fprintf(os.Stderr, " %s MOUNTPOINT\n", os.Args[0])
flag.PrintDefaults()
}
func main() {
flag.Usage = usage
flag.Parse()
if flag.NArg() != 1 {
usage()
os.Exit(2)
}
mountpoint := flag.Arg(0)
c, err := fuse.Mount(
mountpoint,
fuse.FSName("helloworld"),
fuse.Subtype("hellofs"),
fuse.LocalVolume(),
fuse.VolumeName("Hello world!"),
)
if err != nil {
log.Fatal(err)
}
defer c.Close()
err = fs.Serve(c, FS{})
if err != nil {
log.Fatal(err)
}
// check if the mount process has an error to report
<-c.Ready
if err := c.MountError; err != nil {
log.Fatal(err)
}
}
// FS implements the hello world file system.
type FS struct{}
func (FS) Root() (fs.Node, error) {
return Dir{}, nil
}
// Dir implements both Node and Handle for the root directory.
type Dir struct{}
func (Dir) Attr(ctx context.Context, a *fuse.Attr) error {
a.Inode = 1
a.Mode = os.ModeDir | 0555
return nil
}
func (Dir) Lookup(ctx context.Context, name string) (fs.Node, error) {
if name == "hello" {
return File{}, nil
}
return nil, fuse.ENOENT
}
var dirDirs = []fuse.Dirent{
{Inode: 2, Name: "hello", Type: fuse.DT_File},
}
func (Dir) ReadDirAll(ctx context.Context) ([]fuse.Dirent, error) {
return dirDirs, nil
}
// File implements both Node and Handle for the hello file.
type File struct{}
const greeting = "hello, world\n"
func (File) Attr(ctx context.Context, a *fuse.Attr) error {
a.Inode = 2
a.Mode = 0444
a.Size = uint64(len(greeting))
return nil
}
func (File) ReadAll(ctx context.Context) ([]byte, error) {
return []byte(greeting), nil
}


@ -1,54 +0,0 @@
package bench_test
import (
"fmt"
"os"
"testing"
"bazil.org/fuse"
"bazil.org/fuse/fs"
"bazil.org/fuse/fs/fstestutil"
"golang.org/x/net/context"
)
type dummyFile struct {
fstestutil.File
}
type benchCreateDir struct {
fstestutil.Dir
}
var _ fs.NodeCreater = (*benchCreateDir)(nil)
func (f *benchCreateDir) Create(ctx context.Context, req *fuse.CreateRequest, resp *fuse.CreateResponse) (fs.Node, fs.Handle, error) {
child := &dummyFile{}
return child, child, nil
}
func BenchmarkCreate(b *testing.B) {
f := &benchCreateDir{}
mnt, err := fstestutil.MountedT(b, fstestutil.SimpleFS{f}, nil)
if err != nil {
b.Fatal(err)
}
defer mnt.Close()
// prepare file names to decrease test overhead
names := make([]string, 0, b.N)
for i := 0; i < b.N; i++ {
// zero-padded so cost stays the same on every iteration
names = append(names, mnt.Dir+"/"+fmt.Sprintf("%08x", i))
}
b.ResetTimer()
for i := 0; i < b.N; i++ {
f, err := os.Create(names[i])
if err != nil {
b.Fatalf("WriteFile: %v", err)
}
f.Close()
}
b.StopTimer()
}


@ -1,42 +0,0 @@
package bench_test
import (
"os"
"testing"
"golang.org/x/net/context"
"bazil.org/fuse"
"bazil.org/fuse/fs"
"bazil.org/fuse/fs/fstestutil"
)
type benchLookupDir struct {
fstestutil.Dir
}
var _ fs.NodeRequestLookuper = (*benchLookupDir)(nil)
func (f *benchLookupDir) Lookup(ctx context.Context, req *fuse.LookupRequest, resp *fuse.LookupResponse) (fs.Node, error) {
return nil, fuse.ENOENT
}
func BenchmarkLookup(b *testing.B) {
f := &benchLookupDir{}
mnt, err := fstestutil.MountedT(b, fstestutil.SimpleFS{f}, nil)
if err != nil {
b.Fatal(err)
}
defer mnt.Close()
name := mnt.Dir + "/does-not-exist"
b.ResetTimer()
for i := 0; i < b.N; i++ {
if _, err := os.Stat(name); !os.IsNotExist(err) {
b.Fatalf("Stat: wrong error: %v", err)
}
}
b.StopTimer()
}


@ -1,268 +0,0 @@
package bench_test
import (
"io"
"io/ioutil"
"os"
"path"
"testing"
"bazil.org/fuse"
"bazil.org/fuse/fs"
"bazil.org/fuse/fs/fstestutil"
"golang.org/x/net/context"
)
type benchConfig struct {
directIO bool
}
type benchFS struct {
conf *benchConfig
}
var _ = fs.FS(benchFS{})
func (f benchFS) Root() (fs.Node, error) {
return benchDir{conf: f.conf}, nil
}
type benchDir struct {
conf *benchConfig
}
var _ = fs.Node(benchDir{})
var _ = fs.NodeStringLookuper(benchDir{})
var _ = fs.Handle(benchDir{})
var _ = fs.HandleReadDirAller(benchDir{})
func (benchDir) Attr(ctx context.Context, a *fuse.Attr) error {
a.Inode = 1
a.Mode = os.ModeDir | 0555
return nil
}
func (d benchDir) Lookup(ctx context.Context, name string) (fs.Node, error) {
if name == "bench" {
return benchFile{conf: d.conf}, nil
}
return nil, fuse.ENOENT
}
func (benchDir) ReadDirAll(ctx context.Context) ([]fuse.Dirent, error) {
l := []fuse.Dirent{
{Inode: 2, Name: "bench", Type: fuse.DT_File},
}
return l, nil
}
type benchFile struct {
conf *benchConfig
}
var _ = fs.Node(benchFile{})
var _ = fs.NodeOpener(benchFile{})
var _ = fs.NodeFsyncer(benchFile{})
var _ = fs.Handle(benchFile{})
var _ = fs.HandleReader(benchFile{})
var _ = fs.HandleWriter(benchFile{})
func (benchFile) Attr(ctx context.Context, a *fuse.Attr) error {
a.Inode = 2
a.Mode = 0644
a.Size = 9999999999999999
return nil
}
func (f benchFile) Open(ctx context.Context, req *fuse.OpenRequest, resp *fuse.OpenResponse) (fs.Handle, error) {
if f.conf.directIO {
resp.Flags |= fuse.OpenDirectIO
}
// TODO configurable?
resp.Flags |= fuse.OpenKeepCache
return f, nil
}
func (benchFile) Read(ctx context.Context, req *fuse.ReadRequest, resp *fuse.ReadResponse) error {
resp.Data = resp.Data[:cap(resp.Data)]
return nil
}
func (benchFile) Write(ctx context.Context, req *fuse.WriteRequest, resp *fuse.WriteResponse) error {
resp.Size = len(req.Data)
return nil
}
func (benchFile) Fsync(ctx context.Context, req *fuse.FsyncRequest) error {
return nil
}
func benchmark(b *testing.B, fn func(b *testing.B, mnt string), conf *benchConfig) {
filesys := benchFS{
conf: conf,
}
mnt, err := fstestutil.Mounted(filesys, nil,
fuse.MaxReadahead(64*1024*1024),
fuse.AsyncRead(),
fuse.WritebackCache(),
)
if err != nil {
b.Fatal(err)
}
defer mnt.Close()
fn(b, mnt.Dir)
}
type zero struct{}
func (zero) Read(p []byte) (n int, err error) {
return len(p), nil
}
var Zero io.Reader = zero{}
func doWrites(size int64) func(b *testing.B, mnt string) {
return func(b *testing.B, mnt string) {
p := path.Join(mnt, "bench")
f, err := os.Create(p)
if err != nil {
b.Fatalf("create: %v", err)
}
defer f.Close()
b.ResetTimer()
b.SetBytes(size)
for i := 0; i < b.N; i++ {
_, err = io.CopyN(f, Zero, size)
if err != nil {
b.Fatalf("write: %v", err)
}
}
}
}
func BenchmarkWrite100(b *testing.B) {
benchmark(b, doWrites(100), &benchConfig{})
}
func BenchmarkWrite10MB(b *testing.B) {
benchmark(b, doWrites(10*1024*1024), &benchConfig{})
}
func BenchmarkWrite100MB(b *testing.B) {
benchmark(b, doWrites(100*1024*1024), &benchConfig{})
}
func BenchmarkDirectWrite100(b *testing.B) {
benchmark(b, doWrites(100), &benchConfig{
directIO: true,
})
}
func BenchmarkDirectWrite10MB(b *testing.B) {
benchmark(b, doWrites(10*1024*1024), &benchConfig{
directIO: true,
})
}
func BenchmarkDirectWrite100MB(b *testing.B) {
benchmark(b, doWrites(100*1024*1024), &benchConfig{
directIO: true,
})
}
func doWritesSync(size int64) func(b *testing.B, mnt string) {
return func(b *testing.B, mnt string) {
p := path.Join(mnt, "bench")
f, err := os.Create(p)
if err != nil {
b.Fatalf("create: %v", err)
}
defer f.Close()
b.ResetTimer()
b.SetBytes(size)
for i := 0; i < b.N; i++ {
_, err = io.CopyN(f, Zero, size)
if err != nil {
b.Fatalf("write: %v", err)
}
if err := f.Sync(); err != nil {
b.Fatalf("sync: %v", err)
}
}
}
}
func BenchmarkWriteSync100(b *testing.B) {
benchmark(b, doWritesSync(100), &benchConfig{})
}
func BenchmarkWriteSync10MB(b *testing.B) {
benchmark(b, doWritesSync(10*1024*1024), &benchConfig{})
}
func BenchmarkWriteSync100MB(b *testing.B) {
benchmark(b, doWritesSync(100*1024*1024), &benchConfig{})
}
func doReads(size int64) func(b *testing.B, mnt string) {
return func(b *testing.B, mnt string) {
p := path.Join(mnt, "bench")
f, err := os.Open(p)
if err != nil {
b.Fatalf("close: %v", err)
}
defer f.Close()
b.ResetTimer()
b.SetBytes(size)
for i := 0; i < b.N; i++ {
n, err := io.CopyN(ioutil.Discard, f, size)
if err != nil {
b.Fatalf("read: %v", err)
}
if n != size {
b.Errorf("unexpected size: %d != %d", n, size)
}
}
}
}
func BenchmarkRead100(b *testing.B) {
benchmark(b, doReads(100), &benchConfig{})
}
func BenchmarkRead10MB(b *testing.B) {
benchmark(b, doReads(10*1024*1024), &benchConfig{})
}
func BenchmarkRead100MB(b *testing.B) {
benchmark(b, doReads(100*1024*1024), &benchConfig{})
}
func BenchmarkDirectRead100(b *testing.B) {
benchmark(b, doReads(100), &benchConfig{
directIO: true,
})
}
func BenchmarkDirectRead10MB(b *testing.B) {
benchmark(b, doReads(10*1024*1024), &benchConfig{
directIO: true,
})
}
func BenchmarkDirectRead100MB(b *testing.B) {
benchmark(b, doReads(100*1024*1024), &benchConfig{
directIO: true,
})
}


@ -1,5 +0,0 @@
// Package bench contains benchmarks.
//
// It is kept in a separate package to avoid conflicting with the
// debug-heavy defaults for the actual tests.
package bench


@ -1,70 +0,0 @@
package fstestutil
import (
"fmt"
"io/ioutil"
"os"
)
// FileInfoCheck is a function that validates an os.FileInfo according
// to some criteria.
type FileInfoCheck func(fi os.FileInfo) error
type checkDirError struct {
missing map[string]struct{}
extra map[string]os.FileMode
}
func (e *checkDirError) Error() string {
return fmt.Sprintf("wrong directory contents: missing %v, extra %v", e.missing, e.extra)
}
// CheckDir checks the contents of the directory at path, making sure
// every directory entry listed in want is present. If the check is
// not nil, it must also pass.
//
// If want contains the impossible filename "", unexpected files are
// checked with that. If the key is not in want, unexpected files are
// an error.
//
// Missing entries, that are listed in want but not seen, are an
// error.
func CheckDir(path string, want map[string]FileInfoCheck) error {
problems := &checkDirError{
missing: make(map[string]struct{}, len(want)),
extra: make(map[string]os.FileMode),
}
for k := range want {
if k == "" {
continue
}
problems.missing[k] = struct{}{}
}
fis, err := ioutil.ReadDir(path)
if err != nil {
return fmt.Errorf("cannot read directory: %v", err)
}
for _, fi := range fis {
check, ok := want[fi.Name()]
if !ok {
check, ok = want[""]
}
if !ok {
problems.extra[fi.Name()] = fi.Mode()
continue
}
delete(problems.missing, fi.Name())
if check != nil {
if err := check(fi); err != nil {
return fmt.Errorf("check failed: %v: %v", fi.Name(), err)
}
}
}
if len(problems.missing) > 0 || len(problems.extra) > 0 {
return problems
}
return nil
}


@ -1,65 +0,0 @@
package fstestutil
import (
"flag"
"log"
"strconv"
"bazil.org/fuse"
)
type flagDebug bool
var debug flagDebug
var _ = flag.Value(&debug)
func (f *flagDebug) IsBoolFlag() bool {
return true
}
func nop(msg interface{}) {}
func (f *flagDebug) Set(s string) error {
v, err := strconv.ParseBool(s)
if err != nil {
return err
}
*f = flagDebug(v)
if v {
fuse.Debug = logMsg
} else {
fuse.Debug = nop
}
return nil
}
func (f *flagDebug) String() string {
return strconv.FormatBool(bool(*f))
}
func logMsg(msg interface{}) {
log.Printf("FUSE: %s\n", msg)
}
func init() {
flag.Var(&debug, "fuse.debug", "log FUSE processing details")
}
// DebugByDefault changes the default of the `-fuse.debug` flag to
// true.
//
// This package registers a command line flag `-fuse.debug` and when
// run with that flag (and activated inside the tests), logs FUSE
// debug messages.
//
// This is disabled by default, as most callers probably won't care
// about FUSE details. Use DebugByDefault for tests where you'd
// normally be passing `-fuse.debug` all the time anyway.
//
// Call from an init function.
func DebugByDefault() {
f := flag.Lookup("fuse.debug")
f.DefValue = "true"
f.Value.Set(f.DefValue)
}


@ -1 +0,0 @@
package fstestutil // import "bazil.org/fuse/fs/fstestutil"


@ -1,141 +0,0 @@
package fstestutil
import (
"errors"
"io/ioutil"
"log"
"os"
"testing"
"time"
"bazil.org/fuse"
"bazil.org/fuse/fs"
)
// Mount contains information about the mount for the test to use.
type Mount struct {
// Dir is the temporary directory where the filesystem is mounted.
Dir string
Conn *fuse.Conn
Server *fs.Server
// Error will receive the return value of Serve.
Error <-chan error
done <-chan struct{}
closed bool
}
// Close unmounts the filesystem and waits for fs.Serve to return. Any
// returned error will be delivered on the Error channel. It is safe to call Close
// multiple times.
func (mnt *Mount) Close() {
if mnt.closed {
return
}
mnt.closed = true
for tries := 0; tries < 1000; tries++ {
err := fuse.Unmount(mnt.Dir)
if err != nil {
// TODO do more than log?
log.Printf("unmount error: %v", err)
time.Sleep(10 * time.Millisecond)
continue
}
break
}
<-mnt.done
mnt.Conn.Close()
os.Remove(mnt.Dir)
}
// MountedFunc mounts a filesystem at a temporary directory. The
// filesystem used is constructed by calling a function, to allow
// storing fuse.Conn and fs.Server in the FS.
//
// It also waits until the filesystem is known to be visible (OS X
// workaround).
//
// After successful return, caller must clean up by calling Close.
func MountedFunc(fn func(*Mount) fs.FS, conf *fs.Config, options ...fuse.MountOption) (*Mount, error) {
dir, err := ioutil.TempDir("", "fusetest")
if err != nil {
return nil, err
}
c, err := fuse.Mount(dir, options...)
if err != nil {
return nil, err
}
server := fs.New(c, conf)
done := make(chan struct{})
serveErr := make(chan error, 1)
mnt := &Mount{
Dir: dir,
Conn: c,
Server: server,
Error: serveErr,
done: done,
}
filesys := fn(mnt)
go func() {
defer close(done)
serveErr <- server.Serve(filesys)
}()
select {
case <-mnt.Conn.Ready:
if err := mnt.Conn.MountError; err != nil {
return nil, err
}
return mnt, nil
case err = <-mnt.Error:
// Serve quit early
if err != nil {
return nil, err
}
return nil, errors.New("Serve exited early")
}
}
// Mounted mounts the fuse.Server at a temporary directory.
//
// It also waits until the filesystem is known to be visible (OS X
// workaround).
//
// After successful return, caller must clean up by calling Close.
func Mounted(filesys fs.FS, conf *fs.Config, options ...fuse.MountOption) (*Mount, error) {
fn := func(*Mount) fs.FS { return filesys }
return MountedFunc(fn, conf, options...)
}
// MountedFuncT mounts a filesystem at a temporary directory,
// directing its debug log to the testing logger.
//
// See MountedFunc for usage.
//
// The debug log is not enabled by default. Use `-fuse.debug` or call
// DebugByDefault to enable.
func MountedFuncT(t testing.TB, fn func(*Mount) fs.FS, conf *fs.Config, options ...fuse.MountOption) (*Mount, error) {
if conf == nil {
conf = &fs.Config{}
}
if debug && conf.Debug == nil {
conf.Debug = func(msg interface{}) {
t.Logf("FUSE: %s", msg)
}
}
return MountedFunc(fn, conf, options...)
}
// MountedT mounts the filesystem at a temporary directory,
// directing its debug log to the testing logger.
//
// See Mounted for usage.
//
// The debug log is not enabled by default. Use `-fuse.debug` or call
// DebugByDefault to enable.
func MountedT(t testing.TB, filesys fs.FS, conf *fs.Config, options ...fuse.MountOption) (*Mount, error) {
fn := func(*Mount) fs.FS { return filesys }
return MountedFuncT(t, fn, conf, options...)
}


@ -1,26 +0,0 @@
package fstestutil
// MountInfo describes a mounted file system.
type MountInfo struct {
FSName string
Type string
}
// GetMountInfo finds information about the mount at mnt. It is
// intended for use by tests only, and only fetches information
// relevant to the current tests.
func GetMountInfo(mnt string) (*MountInfo, error) {
return getMountInfo(mnt)
}
// cstr converts a nil-terminated C string into a Go string
func cstr(ca []int8) string {
s := make([]byte, 0, len(ca))
for _, c := range ca {
if c == 0x00 {
break
}
s = append(s, byte(c))
}
return string(s)
}


@ -1,29 +0,0 @@
package fstestutil
import (
"regexp"
"syscall"
)
var re = regexp.MustCompile(`\\(.)`)
// unescape removes backslash-escaping. The escaped characters are not
// mapped in any way; that is, unescape(`\n`) == `n`.
func unescape(s string) string {
return re.ReplaceAllString(s, `$1`)
}
func getMountInfo(mnt string) (*MountInfo, error) {
var st syscall.Statfs_t
err := syscall.Statfs(mnt, &st)
if err != nil {
return nil, err
}
i := &MountInfo{
// osx getmntent(3) fails to un-escape the data, so we do it..
// this might lead to double-unescaping in the future. fun.
// TestMountOptionFSNameEvilBackslashDouble checks for that.
FSName: unescape(cstr(st.Mntfromname[:])),
}
return i, nil
}


@ -1,7 +0,0 @@
package fstestutil
import "errors"
func getMountInfo(mnt string) (*MountInfo, error) {
return nil, errors.New("FreeBSD has no useful mount information")
}


@ -1,51 +0,0 @@
package fstestutil
import (
"errors"
"io/ioutil"
"strings"
)
// Linux /proc/mounts shows current mounts.
// Same format as /etc/fstab. Quoting getmntent(3):
//
// Since fields in the mtab and fstab files are separated by whitespace,
// octal escapes are used to represent the four characters space (\040),
// tab (\011), newline (\012) and backslash (\134) in those files when
// they occur in one of the four strings in a mntent structure.
//
// http://linux.die.net/man/3/getmntent
var fstabUnescape = strings.NewReplacer(
`\040`, "\040",
`\011`, "\011",
`\012`, "\012",
`\134`, "\134",
)
var errNotFound = errors.New("mount not found")
func getMountInfo(mnt string) (*MountInfo, error) {
data, err := ioutil.ReadFile("/proc/mounts")
if err != nil {
return nil, err
}
for _, line := range strings.Split(string(data), "\n") {
fields := strings.Fields(line)
if len(fields) < 3 {
continue
}
// Fields are: fsname dir type opts freq passno
fsname := fstabUnescape.Replace(fields[0])
dir := fstabUnescape.Replace(fields[1])
fstype := fstabUnescape.Replace(fields[2])
if mnt == dir {
info := &MountInfo{
FSName: fsname,
Type: fstype,
}
return info, nil
}
}
return nil, errNotFound
}


@ -1,28 +0,0 @@
package record
import (
"bytes"
"io"
"sync"
)
// Buffer is like bytes.Buffer but safe to access from multiple
// goroutines.
type Buffer struct {
mu sync.Mutex
buf bytes.Buffer
}
var _ = io.Writer(&Buffer{})
func (b *Buffer) Write(p []byte) (n int, err error) {
b.mu.Lock()
defer b.mu.Unlock()
return b.buf.Write(p)
}
func (b *Buffer) Bytes() []byte {
b.mu.Lock()
defer b.mu.Unlock()
return b.buf.Bytes()
}


@ -1,409 +0,0 @@
package record // import "bazil.org/fuse/fs/fstestutil/record"
import (
"sync"
"sync/atomic"
"bazil.org/fuse"
"bazil.org/fuse/fs"
"golang.org/x/net/context"
)
// Writes gathers data from FUSE Write calls.
type Writes struct {
buf Buffer
}
var _ = fs.HandleWriter(&Writes{})
func (w *Writes) Write(ctx context.Context, req *fuse.WriteRequest, resp *fuse.WriteResponse) error {
n, err := w.buf.Write(req.Data)
resp.Size = n
if err != nil {
return err
}
return nil
}
func (w *Writes) RecordedWriteData() []byte {
return w.buf.Bytes()
}
// Counter records number of times a thing has occurred.
type Counter struct {
count uint32
}
func (r *Counter) Inc() {
atomic.AddUint32(&r.count, 1)
}
func (r *Counter) Count() uint32 {
return atomic.LoadUint32(&r.count)
}
// MarkRecorder records whether a thing has occurred.
type MarkRecorder struct {
count Counter
}
func (r *MarkRecorder) Mark() {
r.count.Inc()
}
func (r *MarkRecorder) Recorded() bool {
return r.count.Count() > 0
}
// Flushes notes whether a FUSE Flush call has been seen.
type Flushes struct {
rec MarkRecorder
}
var _ = fs.HandleFlusher(&Flushes{})
func (r *Flushes) Flush(ctx context.Context, req *fuse.FlushRequest) error {
r.rec.Mark()
return nil
}
func (r *Flushes) RecordedFlush() bool {
return r.rec.Recorded()
}
type Recorder struct {
mu sync.Mutex
val interface{}
}
// Record that we've seen value. A nil value is indistinguishable from
// no value recorded.
func (r *Recorder) Record(value interface{}) {
r.mu.Lock()
r.val = value
r.mu.Unlock()
}
func (r *Recorder) Recorded() interface{} {
r.mu.Lock()
val := r.val
r.mu.Unlock()
return val
}
type RequestRecorder struct {
rec Recorder
}
// Record a fuse.Request, after zeroing header fields that are hard to
// reproduce.
//
// Make sure to record a copy, not the original request.
func (r *RequestRecorder) RecordRequest(req fuse.Request) {
hdr := req.Hdr()
*hdr = fuse.Header{}
r.rec.Record(req)
}
func (r *RequestRecorder) Recorded() fuse.Request {
val := r.rec.Recorded()
if val == nil {
return nil
}
return val.(fuse.Request)
}
// Setattrs records a Setattr request and its fields.
type Setattrs struct {
rec RequestRecorder
}
var _ = fs.NodeSetattrer(&Setattrs{})
func (r *Setattrs) Setattr(ctx context.Context, req *fuse.SetattrRequest, resp *fuse.SetattrResponse) error {
tmp := *req
r.rec.RecordRequest(&tmp)
return nil
}
func (r *Setattrs) RecordedSetattr() fuse.SetattrRequest {
val := r.rec.Recorded()
if val == nil {
return fuse.SetattrRequest{}
}
return *(val.(*fuse.SetattrRequest))
}
// Fsyncs records an Fsync request and its fields.
type Fsyncs struct {
rec RequestRecorder
}
var _ = fs.NodeFsyncer(&Fsyncs{})
func (r *Fsyncs) Fsync(ctx context.Context, req *fuse.FsyncRequest) error {
tmp := *req
r.rec.RecordRequest(&tmp)
return nil
}
func (r *Fsyncs) RecordedFsync() fuse.FsyncRequest {
val := r.rec.Recorded()
if val == nil {
return fuse.FsyncRequest{}
}
return *(val.(*fuse.FsyncRequest))
}
// Mkdirs records a Mkdir request and its fields.
type Mkdirs struct {
rec RequestRecorder
}
var _ = fs.NodeMkdirer(&Mkdirs{})
// Mkdir records the request and returns an error. Most callers should
// wrap this call in a function that returns a more useful result.
func (r *Mkdirs) Mkdir(ctx context.Context, req *fuse.MkdirRequest) (fs.Node, error) {
tmp := *req
r.rec.RecordRequest(&tmp)
return nil, fuse.EIO
}
// RecordedMkdir returns information about the Mkdir request.
// If no request was seen, returns a zero value.
func (r *Mkdirs) RecordedMkdir() fuse.MkdirRequest {
val := r.rec.Recorded()
if val == nil {
return fuse.MkdirRequest{}
}
return *(val.(*fuse.MkdirRequest))
}
// Symlinks records a Symlink request and its fields.
type Symlinks struct {
rec RequestRecorder
}
var _ = fs.NodeSymlinker(&Symlinks{})
// Symlink records the request and returns an error. Most callers should
// wrap this call in a function that returns a more useful result.
func (r *Symlinks) Symlink(ctx context.Context, req *fuse.SymlinkRequest) (fs.Node, error) {
tmp := *req
r.rec.RecordRequest(&tmp)
return nil, fuse.EIO
}
// RecordedSymlink returns information about the Symlink request.
// If no request was seen, returns a zero value.
func (r *Symlinks) RecordedSymlink() fuse.SymlinkRequest {
val := r.rec.Recorded()
if val == nil {
return fuse.SymlinkRequest{}
}
return *(val.(*fuse.SymlinkRequest))
}
// Links records a Link request and its fields.
type Links struct {
rec RequestRecorder
}
var _ = fs.NodeLinker(&Links{})
// Link records the request and returns an error. Most callers should
// wrap this call in a function that returns a more useful result.
func (r *Links) Link(ctx context.Context, req *fuse.LinkRequest, old fs.Node) (fs.Node, error) {
tmp := *req
r.rec.RecordRequest(&tmp)
return nil, fuse.EIO
}
// RecordedLink returns information about the Link request.
// If no request was seen, returns a zero value.
func (r *Links) RecordedLink() fuse.LinkRequest {
val := r.rec.Recorded()
if val == nil {
return fuse.LinkRequest{}
}
return *(val.(*fuse.LinkRequest))
}
// Mknods records a Mknod request and its fields.
type Mknods struct {
rec RequestRecorder
}
var _ = fs.NodeMknoder(&Mknods{})
// Mknod records the request and returns an error. Most callers should
// wrap this call in a function that returns a more useful result.
func (r *Mknods) Mknod(ctx context.Context, req *fuse.MknodRequest) (fs.Node, error) {
tmp := *req
r.rec.RecordRequest(&tmp)
return nil, fuse.EIO
}
// RecordedMknod returns information about the Mknod request.
// If no request was seen, returns a zero value.
func (r *Mknods) RecordedMknod() fuse.MknodRequest {
val := r.rec.Recorded()
if val == nil {
return fuse.MknodRequest{}
}
return *(val.(*fuse.MknodRequest))
}
// Opens records a Open request and its fields.
type Opens struct {
rec RequestRecorder
}
var _ = fs.NodeOpener(&Opens{})
// Open records the request and returns an error. Most callers should
// wrap this call in a function that returns a more useful result.
func (r *Opens) Open(ctx context.Context, req *fuse.OpenRequest, resp *fuse.OpenResponse) (fs.Handle, error) {
tmp := *req
r.rec.RecordRequest(&tmp)
return nil, fuse.EIO
}
// RecordedOpen returns information about the Open request.
// If no request was seen, returns a zero value.
func (r *Opens) RecordedOpen() fuse.OpenRequest {
val := r.rec.Recorded()
if val == nil {
return fuse.OpenRequest{}
}
return *(val.(*fuse.OpenRequest))
}
// Getxattrs records a Getxattr request and its fields.
type Getxattrs struct {
rec RequestRecorder
}
var _ = fs.NodeGetxattrer(&Getxattrs{})
// Getxattr records the request and returns an error. Most callers should
// wrap this call in a function that returns a more useful result.
func (r *Getxattrs) Getxattr(ctx context.Context, req *fuse.GetxattrRequest, resp *fuse.GetxattrResponse) error {
tmp := *req
r.rec.RecordRequest(&tmp)
return fuse.ErrNoXattr
}
// RecordedGetxattr returns information about the Getxattr request.
// If no request was seen, returns a zero value.
func (r *Getxattrs) RecordedGetxattr() fuse.GetxattrRequest {
val := r.rec.Recorded()
if val == nil {
return fuse.GetxattrRequest{}
}
return *(val.(*fuse.GetxattrRequest))
}
// Listxattrs records a Listxattr request and its fields.
type Listxattrs struct {
rec RequestRecorder
}
var _ = fs.NodeListxattrer(&Listxattrs{})
// Listxattr records the request and returns an error. Most callers should
// wrap this call in a function that returns a more useful result.
func (r *Listxattrs) Listxattr(ctx context.Context, req *fuse.ListxattrRequest, resp *fuse.ListxattrResponse) error {
tmp := *req
r.rec.RecordRequest(&tmp)
return fuse.ErrNoXattr
}
// RecordedListxattr returns information about the Listxattr request.
// If no request was seen, returns a zero value.
func (r *Listxattrs) RecordedListxattr() fuse.ListxattrRequest {
val := r.rec.Recorded()
if val == nil {
return fuse.ListxattrRequest{}
}
return *(val.(*fuse.ListxattrRequest))
}
// Setxattrs records a Setxattr request and its fields.
type Setxattrs struct {
rec RequestRecorder
}
var _ = fs.NodeSetxattrer(&Setxattrs{})
// Setxattr records the request and returns an error. Most callers should
// wrap this call in a function that returns a more useful result.
func (r *Setxattrs) Setxattr(ctx context.Context, req *fuse.SetxattrRequest) error {
tmp := *req
// The byte slice points to memory that will be reused, so make a
// deep copy.
tmp.Xattr = append([]byte(nil), req.Xattr...)
r.rec.RecordRequest(&tmp)
return nil
}
// RecordedSetxattr returns information about the Setxattr request.
// If no request was seen, returns a zero value.
func (r *Setxattrs) RecordedSetxattr() fuse.SetxattrRequest {
val := r.rec.Recorded()
if val == nil {
return fuse.SetxattrRequest{}
}
return *(val.(*fuse.SetxattrRequest))
}
// Removexattrs records a Removexattr request and its fields.
type Removexattrs struct {
rec RequestRecorder
}
var _ = fs.NodeRemovexattrer(&Removexattrs{})
// Removexattr records the request and returns an error. Most callers should
// wrap this call in a function that returns a more useful result.
func (r *Removexattrs) Removexattr(ctx context.Context, req *fuse.RemovexattrRequest) error {
tmp := *req
r.rec.RecordRequest(&tmp)
return nil
}
// RecordedRemovexattr returns information about the Removexattr request.
// If no request was seen, returns a zero value.
func (r *Removexattrs) RecordedRemovexattr() fuse.RemovexattrRequest {
val := r.rec.Recorded()
if val == nil {
return fuse.RemovexattrRequest{}
}
return *(val.(*fuse.RemovexattrRequest))
}
// Creates records a Create request and its fields.
type Creates struct {
rec RequestRecorder
}
var _ = fs.NodeCreater(&Creates{})
// Create records the request and returns an error. Most callers should
// wrap this call in a function that returns a more useful result.
func (r *Creates) Create(ctx context.Context, req *fuse.CreateRequest, resp *fuse.CreateResponse) (fs.Node, fs.Handle, error) {
tmp := *req
r.rec.RecordRequest(&tmp)
return nil, nil, fuse.EIO
}
// RecordedCreate returns information about the Create request.
// If no request was seen, returns a zero value.
func (r *Creates) RecordedCreate() fuse.CreateRequest {
val := r.rec.Recorded()
if val == nil {
return fuse.CreateRequest{}
}
return *(val.(*fuse.CreateRequest))
}


@ -1,55 +0,0 @@
package record
import (
"sync"
"time"
"bazil.org/fuse"
"bazil.org/fuse/fs"
"golang.org/x/net/context"
)
type nothing struct{}
// ReleaseWaiter notes whether a FUSE Release call has been seen.
//
// Releases are not guaranteed to happen synchronously with any client
// call, so they must be waited for.
type ReleaseWaiter struct {
once sync.Once
seen chan nothing
}
var _ = fs.HandleReleaser(&ReleaseWaiter{})
func (r *ReleaseWaiter) init() {
r.once.Do(func() {
r.seen = make(chan nothing, 1)
})
}
func (r *ReleaseWaiter) Release(ctx context.Context, req *fuse.ReleaseRequest) error {
r.init()
close(r.seen)
return nil
}
// WaitForRelease waits for Release to be called.
//
// With zero duration, wait forever. Otherwise, time out early
// in a more controlled way than `-test.timeout`.
//
// Returns whether a Release was seen. Always true if dur==0.
func (r *ReleaseWaiter) WaitForRelease(dur time.Duration) bool {
r.init()
var timeout <-chan time.Time
if dur > 0 {
timeout = time.After(dur)
}
select {
case <-r.seen:
return true
case <-timeout:
return false
}
}


@ -1,55 +0,0 @@
package fstestutil
import (
"os"
"bazil.org/fuse"
"bazil.org/fuse/fs"
"golang.org/x/net/context"
)
// SimpleFS is a trivial FS that just implements the Root method.
type SimpleFS struct {
Node fs.Node
}
var _ = fs.FS(SimpleFS{})
func (f SimpleFS) Root() (fs.Node, error) {
return f.Node, nil
}
// File can be embedded in a struct to make it look like a file.
type File struct{}
func (f File) Attr(ctx context.Context, a *fuse.Attr) error {
a.Mode = 0666
return nil
}
// Dir can be embedded in a struct to make it look like a directory.
type Dir struct{}
func (f Dir) Attr(ctx context.Context, a *fuse.Attr) error {
a.Mode = os.ModeDir | 0777
return nil
}
// ChildMap is a directory with child nodes looked up from a map.
type ChildMap map[string]fs.Node
var _ = fs.Node(&ChildMap{})
var _ = fs.NodeStringLookuper(&ChildMap{})
func (f *ChildMap) Attr(ctx context.Context, a *fuse.Attr) error {
a.Mode = os.ModeDir | 0777
return nil
}
func (f *ChildMap) Lookup(ctx context.Context, name string) (fs.Node, error) {
child, ok := (*f)[name]
if !ok {
return nil, fuse.ENOENT
}
return child, nil
}


@ -1,67 +0,0 @@
package fs_test
import (
"errors"
"flag"
"os"
"os/exec"
"path/filepath"
"testing"
)
var childHelpers = map[string]func(){}
type childProcess struct {
name string
fn func()
}
var _ flag.Value = (*childProcess)(nil)
func (c *childProcess) String() string {
return c.name
}
func (c *childProcess) Set(s string) error {
fn, ok := childHelpers[s]
if !ok {
return errors.New("helper not found")
}
c.name = s
c.fn = fn
return nil
}
var childMode childProcess
func init() {
flag.Var(&childMode, "fuse.internal.child", "internal use only")
}
// childCmd prepares a test function to be run in a subprocess, with
// childMode set to the named helper. Caller must still call Run or Start.
//
// Re-using the test executable as the subprocess is useful because
// now test executables can e.g. be cross-compiled, transferred
// between hosts, and run in settings where the whole Go development
// environment is not installed.
func childCmd(childName string) (*exec.Cmd, error) {
// caller may set cwd, so we can't rely on relative paths
executable, err := filepath.Abs(os.Args[0])
if err != nil {
return nil, err
}
cmd := exec.Command(executable, "-fuse.internal.child="+childName)
cmd.Stdout = os.Stdout
cmd.Stderr = os.Stderr
return cmd, nil
}
func TestMain(m *testing.M) {
flag.Parse()
if childMode.fn != nil {
childMode.fn()
os.Exit(0)
}
os.Exit(m.Run())
}
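A hedged sketch (not in the original file) of how a test would use these pieces; the "hello" helper and the test name are hypothetical, and an import of fmt is assumed:

```go
func init() {
	// Register a helper the child process can be asked to run.
	childHelpers["hello"] = func() {
		fmt.Println("hello from the child process")
	}
}

func TestChildHello(t *testing.T) {
	cmd, err := childCmd("hello")
	if err != nil {
		t.Fatal(err)
	}
	// The child re-executes this test binary with
	// -fuse.internal.child=hello; TestMain runs the helper and exits.
	if err := cmd.Run(); err != nil {
		t.Fatal(err)
	}
}
```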


@ -1,30 +0,0 @@
package fs_test
import (
"testing"
"bazil.org/fuse/fs/fstestutil"
"golang.org/x/sys/unix"
)
type exchangeData struct {
fstestutil.File
// this struct cannot be zero size or multiple instances may look identical
_ int
}
func TestExchangeDataNotSupported(t *testing.T) {
t.Parallel()
mnt, err := fstestutil.MountedT(t, fstestutil.SimpleFS{&fstestutil.ChildMap{
"one": &exchangeData{},
"two": &exchangeData{},
}}, nil)
if err != nil {
t.Fatal(err)
}
defer mnt.Close()
if err := unix.Exchangedata(mnt.Dir+"/one", mnt.Dir+"/two", 0); err != unix.ENOTSUP {
t.Fatalf("expected ENOTSUP from exchangedata: %v", err)
}
}

vendor/bazil.org/fuse/fs/serve_test.go (generated, vendored): 2843 lines changed; diff suppressed because it is too large.

vendor/bazil.org/fuse/fuse.go (generated, vendored): 1 line changed.

@ -1262,6 +1262,7 @@ func (r *StatfsRequest) Respond(resp *StatfsResponse) {
Bfree: resp.Bfree,
Bavail: resp.Bavail,
Files: resp.Files,
Ffree: resp.Ffree,
Bsize: resp.Bsize,
Namelen: resp.Namelen,
Frsize: resp.Frsize,


@ -1,63 +0,0 @@
package fuse_test
import (
"os"
"testing"
"bazil.org/fuse"
)
func TestOpenFlagsAccmodeMaskReadWrite(t *testing.T) {
var f = fuse.OpenFlags(os.O_RDWR | os.O_SYNC)
if g, e := f&fuse.OpenAccessModeMask, fuse.OpenReadWrite; g != e {
t.Fatalf("OpenAccessModeMask behaves wrong: %v: %o != %o", f, g, e)
}
if f.IsReadOnly() {
t.Fatalf("IsReadOnly is wrong: %v", f)
}
if f.IsWriteOnly() {
t.Fatalf("IsWriteOnly is wrong: %v", f)
}
if !f.IsReadWrite() {
t.Fatalf("IsReadWrite is wrong: %v", f)
}
}
func TestOpenFlagsAccmodeMaskReadOnly(t *testing.T) {
var f = fuse.OpenFlags(os.O_RDONLY | os.O_SYNC)
if g, e := f&fuse.OpenAccessModeMask, fuse.OpenReadOnly; g != e {
t.Fatalf("OpenAccessModeMask behaves wrong: %v: %o != %o", f, g, e)
}
if !f.IsReadOnly() {
t.Fatalf("IsReadOnly is wrong: %v", f)
}
if f.IsWriteOnly() {
t.Fatalf("IsWriteOnly is wrong: %v", f)
}
if f.IsReadWrite() {
t.Fatalf("IsReadWrite is wrong: %v", f)
}
}
func TestOpenFlagsAccmodeMaskWriteOnly(t *testing.T) {
var f = fuse.OpenFlags(os.O_WRONLY | os.O_SYNC)
if g, e := f&fuse.OpenAccessModeMask, fuse.OpenWriteOnly; g != e {
t.Fatalf("OpenAccessModeMask behaves wrong: %v: %o != %o", f, g, e)
}
if f.IsReadOnly() {
t.Fatalf("IsReadOnly is wrong: %v", f)
}
if !f.IsWriteOnly() {
t.Fatalf("IsWriteOnly is wrong: %v", f)
}
if f.IsReadWrite() {
t.Fatalf("IsReadWrite is wrong: %v", f)
}
}
func TestOpenFlagsString(t *testing.T) {
var f = fuse.OpenFlags(os.O_RDWR | os.O_SYNC | os.O_APPEND)
if g, e := f.String(), "OpenReadWrite+OpenAppend+OpenSync"; g != e {
t.Fatalf("OpenFlags.String: %q != %q", g, e)
}
}


@ -1,64 +0,0 @@
// Test for adjustable timeout between a FUSE request and the daemon's response.
//
// +build darwin freebsd
package fuse_test
import (
"os"
"runtime"
"syscall"
"testing"
"time"
"bazil.org/fuse"
"bazil.org/fuse/fs"
"bazil.org/fuse/fs/fstestutil"
"golang.org/x/net/context"
)
type slowCreaterDir struct {
fstestutil.Dir
}
var _ fs.NodeCreater = slowCreaterDir{}
func (c slowCreaterDir) Create(ctx context.Context, req *fuse.CreateRequest, resp *fuse.CreateResponse) (fs.Node, fs.Handle, error) {
time.Sleep(10 * time.Second)
// pick a really distinct error, to identify it later
return nil, nil, fuse.Errno(syscall.ENAMETOOLONG)
}
func TestMountOptionDaemonTimeout(t *testing.T) {
if runtime.GOOS != "darwin" && runtime.GOOS != "freebsd" {
return
}
if testing.Short() {
t.Skip("skipping time-based test in short mode")
}
t.Parallel()
mnt, err := fstestutil.MountedT(t,
fstestutil.SimpleFS{slowCreaterDir{}},
nil,
fuse.DaemonTimeout("2"),
)
if err != nil {
t.Fatal(err)
}
defer mnt.Close()
// This should fail by the kernel timing out the request.
f, err := os.Create(mnt.Dir + "/child")
if err == nil {
f.Close()
t.Fatal("expected an error")
}
perr, ok := err.(*os.PathError)
if !ok {
t.Fatalf("expected PathError, got %T: %v", err, err)
}
if perr.Err == syscall.ENAMETOOLONG {
t.Fatalf("expected other than ENAMETOOLONG, got %T: %v", err, err)
}
}


@ -1,10 +0,0 @@
package fuse
// for TestMountOptionCommaError
func ForTestSetMountOption(k, v string) MountOption {
fn := func(conf *mountConfig) error {
conf.options[k] = v
return nil
}
return fn
}


@ -1,31 +0,0 @@
// This file contains tests for platforms that have no escape
// mechanism for including commas in mount options.
//
// +build darwin
package fuse_test
import (
"runtime"
"testing"
"bazil.org/fuse"
"bazil.org/fuse/fs/fstestutil"
)
func TestMountOptionCommaError(t *testing.T) {
t.Parallel()
// this test is not tied to any specific option, it just needs
// some string content
var evil = "FuseTest,Marker"
mnt, err := fstestutil.MountedT(t, fstestutil.SimpleFS{fstestutil.Dir{}}, nil,
fuse.ForTestSetMountOption("fusetest", evil),
)
if err == nil {
mnt.Close()
t.Fatal("expected an error about commas")
}
if g, e := err.Error(), `mount options cannot contain commas on `+runtime.GOOS+`: "fusetest"="FuseTest,Marker"`; g != e {
t.Fatalf("wrong error: %q != %q", g, e)
}
}

vendor/bazil.org/fuse/options_test.go (generated, vendored): 231 lines changed.

@ -1,231 +0,0 @@
package fuse_test
import (
"os"
"runtime"
"syscall"
"testing"
"bazil.org/fuse"
"bazil.org/fuse/fs"
"bazil.org/fuse/fs/fstestutil"
"golang.org/x/net/context"
)
func init() {
fstestutil.DebugByDefault()
}
func TestMountOptionFSName(t *testing.T) {
if runtime.GOOS == "freebsd" {
t.Skip("FreeBSD does not support FSName")
}
t.Parallel()
const name = "FuseTestMarker"
mnt, err := fstestutil.MountedT(t, fstestutil.SimpleFS{fstestutil.Dir{}}, nil,
fuse.FSName(name),
)
if err != nil {
t.Fatal(err)
}
defer mnt.Close()
info, err := fstestutil.GetMountInfo(mnt.Dir)
if err != nil {
t.Fatal(err)
}
if g, e := info.FSName, name; g != e {
t.Errorf("wrong FSName: %q != %q", g, e)
}
}
func testMountOptionFSNameEvil(t *testing.T, evil string) {
if runtime.GOOS == "freebsd" {
t.Skip("FreeBSD does not support FSName")
}
t.Parallel()
var name = "FuseTest" + evil + "Marker"
mnt, err := fstestutil.MountedT(t, fstestutil.SimpleFS{fstestutil.Dir{}}, nil,
fuse.FSName(name),
)
if err != nil {
t.Fatal(err)
}
defer mnt.Close()
info, err := fstestutil.GetMountInfo(mnt.Dir)
if err != nil {
t.Fatal(err)
}
if g, e := info.FSName, name; g != e {
t.Errorf("wrong FSName: %q != %q", g, e)
}
}
func TestMountOptionFSNameEvilComma(t *testing.T) {
if runtime.GOOS == "darwin" {
// see TestMountOptionCommaError for a test that enforces we
// at least give a nice error, instead of corrupting the mount
// options
t.Skip("TODO: OS X gets this wrong, commas in mount options cannot be escaped at all")
}
testMountOptionFSNameEvil(t, ",")
}
func TestMountOptionFSNameEvilSpace(t *testing.T) {
testMountOptionFSNameEvil(t, " ")
}
func TestMountOptionFSNameEvilTab(t *testing.T) {
testMountOptionFSNameEvil(t, "\t")
}
func TestMountOptionFSNameEvilNewline(t *testing.T) {
testMountOptionFSNameEvil(t, "\n")
}
func TestMountOptionFSNameEvilBackslash(t *testing.T) {
testMountOptionFSNameEvil(t, `\`)
}
func TestMountOptionFSNameEvilBackslashDouble(t *testing.T) {
// catch double-unescaping, if it were to happen
testMountOptionFSNameEvil(t, `\\`)
}
func TestMountOptionSubtype(t *testing.T) {
if runtime.GOOS == "darwin" {
t.Skip("OS X does not support Subtype")
}
if runtime.GOOS == "freebsd" {
t.Skip("FreeBSD does not support Subtype")
}
t.Parallel()
const name = "FuseTestMarker"
mnt, err := fstestutil.MountedT(t, fstestutil.SimpleFS{fstestutil.Dir{}}, nil,
fuse.Subtype(name),
)
if err != nil {
t.Fatal(err)
}
defer mnt.Close()
info, err := fstestutil.GetMountInfo(mnt.Dir)
if err != nil {
t.Fatal(err)
}
if g, e := info.Type, "fuse."+name; g != e {
t.Errorf("wrong Subtype: %q != %q", g, e)
}
}
// TODO test LocalVolume
// TODO test AllowOther; hard because needs system-level authorization
func TestMountOptionAllowOtherThenAllowRoot(t *testing.T) {
t.Parallel()
mnt, err := fstestutil.MountedT(t, fstestutil.SimpleFS{fstestutil.Dir{}}, nil,
fuse.AllowOther(),
fuse.AllowRoot(),
)
if err == nil {
mnt.Close()
}
if g, e := err, fuse.ErrCannotCombineAllowOtherAndAllowRoot; g != e {
t.Fatalf("wrong error: %v != %v", g, e)
}
}
// TODO test AllowRoot; hard because needs system-level authorization
func TestMountOptionAllowRootThenAllowOther(t *testing.T) {
t.Parallel()
mnt, err := fstestutil.MountedT(t, fstestutil.SimpleFS{fstestutil.Dir{}}, nil,
fuse.AllowRoot(),
fuse.AllowOther(),
)
if err == nil {
mnt.Close()
}
if g, e := err, fuse.ErrCannotCombineAllowOtherAndAllowRoot; g != e {
t.Fatalf("wrong error: %v != %v", g, e)
}
}
type unwritableFile struct{}
func (f unwritableFile) Attr(ctx context.Context, a *fuse.Attr) error {
a.Mode = 0000
return nil
}
func TestMountOptionDefaultPermissions(t *testing.T) {
if runtime.GOOS == "freebsd" {
t.Skip("FreeBSD does not support DefaultPermissions")
}
t.Parallel()
mnt, err := fstestutil.MountedT(t,
fstestutil.SimpleFS{
&fstestutil.ChildMap{"child": unwritableFile{}},
},
nil,
fuse.DefaultPermissions(),
)
if err != nil {
t.Fatal(err)
}
defer mnt.Close()
// This will be prevented by kernel-level access checking when
// DefaultPermissions is used.
f, err := os.OpenFile(mnt.Dir+"/child", os.O_WRONLY, 0000)
if err == nil {
f.Close()
t.Fatal("expected an error")
}
if !os.IsPermission(err) {
t.Fatalf("expected a permission error, got %T: %v", err, err)
}
}
type createrDir struct {
fstestutil.Dir
}
var _ fs.NodeCreater = createrDir{}
func (createrDir) Create(ctx context.Context, req *fuse.CreateRequest, resp *fuse.CreateResponse) (fs.Node, fs.Handle, error) {
// pick a really distinct error, to identify it later
return nil, nil, fuse.Errno(syscall.ENAMETOOLONG)
}
func TestMountOptionReadOnly(t *testing.T) {
t.Parallel()
mnt, err := fstestutil.MountedT(t,
fstestutil.SimpleFS{createrDir{}},
nil,
fuse.ReadOnly(),
)
if err != nil {
t.Fatal(err)
}
defer mnt.Close()
// This will be prevented by kernel-level access checking when
// ReadOnly is used.
f, err := os.Create(mnt.Dir + "/child")
if err == nil {
f.Close()
t.Fatal("expected an error")
}
perr, ok := err.(*os.PathError)
if !ok {
t.Fatalf("expected PathError, got %T: %v", err, err)
}
if perr.Err != syscall.EROFS {
t.Fatalf("expected EROFS, got %T: %v", err, err)
}
}


@ -1,13 +0,0 @@
// Package syscallx provides wrappers that make syscalls on various
// platforms more interoperable.
//
// The API intentionally omits the OS X-specific position and option
// arguments for extended attribute calls.
//
// Not having position means it might not be useful for accessing the
// resource fork. If that's needed by code inside fuse, a function
// with a different name may be added on the side.
//
// Options can be implemented with separate wrappers, in the style of
// Linux getxattr/lgetxattr/fgetxattr.
package syscallx // import "bazil.org/fuse/syscallx"
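A minimal usage sketch under those constraints (readXattr and the probe-then-read flow are illustrative, not part of the package): passing a nil destination asks for the required size first, which on OS X maps to the NULL pointer that triggers size probing.

```go
// readXattr returns the value of attr on path, probing for the
// required buffer size first. Note the value can change between the
// two calls; real callers may need to retry on short reads.
func readXattr(path, attr string) ([]byte, error) {
	sz, err := syscallx.Getxattr(path, attr, nil)
	if err != nil {
		return nil, err
	}
	buf := make([]byte, sz)
	sz, err = syscallx.Getxattr(path, attr, buf)
	if err != nil {
		return nil, err
	}
	return buf[:sz], nil
}
```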


@ -1,34 +0,0 @@
#!/bin/sh
set -e
mksys="$(go env GOROOT)/src/pkg/syscall/mksyscall.pl"
fix() {
sed 's,^package syscall$,&x\nimport "syscall",' \
| gofmt -r='BytePtrFromString -> syscall.BytePtrFromString' \
| gofmt -r='Syscall6 -> syscall.Syscall6' \
| gofmt -r='Syscall -> syscall.Syscall' \
| gofmt -r='SYS_GETXATTR -> syscall.SYS_GETXATTR' \
| gofmt -r='SYS_LISTXATTR -> syscall.SYS_LISTXATTR' \
| gofmt -r='SYS_SETXATTR -> syscall.SYS_SETXATTR' \
| gofmt -r='SYS_REMOVEXATTR -> syscall.SYS_REMOVEXATTR' \
| gofmt -r='SYS_MSYNC -> syscall.SYS_MSYNC'
}
cd "$(dirname "$0")"
$mksys xattr_darwin.go \
| fix \
>xattr_darwin_amd64.go
$mksys -l32 xattr_darwin.go \
| fix \
>xattr_darwin_386.go
$mksys msync.go \
| fix \
>msync_amd64.go
$mksys -l32 msync.go \
| fix \
>msync_386.go


@ -1,9 +0,0 @@
package syscallx
/* This is the source file for msync_*.go, to regenerate run
./generate
*/
//sys Msync(b []byte, flags int) (err error)


@ -1,24 +0,0 @@
// mksyscall.pl -l32 msync.go
// MACHINE GENERATED BY THE COMMAND ABOVE; DO NOT EDIT
package syscallx
import "syscall"
import "unsafe"
// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
func Msync(b []byte, flags int) (err error) {
var _p0 unsafe.Pointer
if len(b) > 0 {
_p0 = unsafe.Pointer(&b[0])
} else {
_p0 = unsafe.Pointer(&_zero)
}
_, _, e1 := syscall.Syscall(syscall.SYS_MSYNC, uintptr(_p0), uintptr(len(b)), uintptr(flags))
if e1 != 0 {
err = e1
}
return
}


@ -1,24 +0,0 @@
// mksyscall.pl msync.go
// MACHINE GENERATED BY THE COMMAND ABOVE; DO NOT EDIT
package syscallx
import "syscall"
import "unsafe"
// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
func Msync(b []byte, flags int) (err error) {
var _p0 unsafe.Pointer
if len(b) > 0 {
_p0 = unsafe.Pointer(&b[0])
} else {
_p0 = unsafe.Pointer(&_zero)
}
_, _, e1 := syscall.Syscall(syscall.SYS_MSYNC, uintptr(_p0), uintptr(len(b)), uintptr(flags))
if e1 != 0 {
err = e1
}
return
}


@ -1,4 +0,0 @@
package syscallx
// make us look more like package syscall, so mksyscall.pl output works
var _zero uintptr


@ -1,26 +0,0 @@
// +build !darwin
package syscallx
// This file just contains wrappers for platforms that already have
// the right stuff in golang.org/x/sys/unix.
import (
"golang.org/x/sys/unix"
)
func Getxattr(path string, attr string, dest []byte) (sz int, err error) {
return unix.Getxattr(path, attr, dest)
}
func Listxattr(path string, dest []byte) (sz int, err error) {
return unix.Listxattr(path, dest)
}
func Setxattr(path string, attr string, data []byte, flags int) (err error) {
return unix.Setxattr(path, attr, data, flags)
}
func Removexattr(path string, attr string) (err error) {
return unix.Removexattr(path, attr)
}


@ -1,38 +0,0 @@
package syscallx
/* This is the source file for syscallx_darwin_*.go, to regenerate run
./generate
*/
// cannot use dest []byte here because OS X getxattr really wants a
// NULL to trigger size probing; size==0 is not enough
//
//sys getxattr(path string, attr string, dest *byte, size int, position uint32, options int) (sz int, err error)
func Getxattr(path string, attr string, dest []byte) (sz int, err error) {
var destp *byte
if len(dest) > 0 {
destp = &dest[0]
}
return getxattr(path, attr, destp, len(dest), 0, 0)
}
//sys listxattr(path string, dest []byte, options int) (sz int, err error)
func Listxattr(path string, dest []byte) (sz int, err error) {
return listxattr(path, dest, 0)
}
//sys setxattr(path string, attr string, data []byte, position uint32, flags int) (err error)
func Setxattr(path string, attr string, data []byte, flags int) (err error) {
return setxattr(path, attr, data, 0, flags)
}
//sys removexattr(path string, attr string, options int) (err error)
func Removexattr(path string, attr string) (err error) {
return removexattr(path, attr, 0)
}


@ -1,97 +0,0 @@
// mksyscall.pl -l32 xattr_darwin.go
// MACHINE GENERATED BY THE COMMAND ABOVE; DO NOT EDIT
package syscallx
import "syscall"
import "unsafe"
// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
func getxattr(path string, attr string, dest *byte, size int, position uint32, options int) (sz int, err error) {
var _p0 *byte
_p0, err = syscall.BytePtrFromString(path)
if err != nil {
return
}
var _p1 *byte
_p1, err = syscall.BytePtrFromString(attr)
if err != nil {
return
}
r0, _, e1 := syscall.Syscall6(syscall.SYS_GETXATTR, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), uintptr(unsafe.Pointer(dest)), uintptr(size), uintptr(position), uintptr(options))
sz = int(r0)
if e1 != 0 {
err = e1
}
return
}
// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
func listxattr(path string, dest []byte, options int) (sz int, err error) {
var _p0 *byte
_p0, err = syscall.BytePtrFromString(path)
if err != nil {
return
}
var _p1 unsafe.Pointer
if len(dest) > 0 {
_p1 = unsafe.Pointer(&dest[0])
} else {
_p1 = unsafe.Pointer(&_zero)
}
r0, _, e1 := syscall.Syscall6(syscall.SYS_LISTXATTR, uintptr(unsafe.Pointer(_p0)), uintptr(_p1), uintptr(len(dest)), uintptr(options), 0, 0)
sz = int(r0)
if e1 != 0 {
err = e1
}
return
}
// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
func setxattr(path string, attr string, data []byte, position uint32, flags int) (err error) {
var _p0 *byte
_p0, err = syscall.BytePtrFromString(path)
if err != nil {
return
}
var _p1 *byte
_p1, err = syscall.BytePtrFromString(attr)
if err != nil {
return
}
var _p2 unsafe.Pointer
if len(data) > 0 {
_p2 = unsafe.Pointer(&data[0])
} else {
_p2 = unsafe.Pointer(&_zero)
}
_, _, e1 := syscall.Syscall6(syscall.SYS_SETXATTR, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), uintptr(_p2), uintptr(len(data)), uintptr(position), uintptr(flags))
if e1 != 0 {
err = e1
}
return
}
// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
func removexattr(path string, attr string, options int) (err error) {
var _p0 *byte
_p0, err = syscall.BytePtrFromString(path)
if err != nil {
return
}
var _p1 *byte
_p1, err = syscall.BytePtrFromString(attr)
if err != nil {
return
}
_, _, e1 := syscall.Syscall(syscall.SYS_REMOVEXATTR, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), uintptr(options))
if e1 != 0 {
err = e1
}
return
}


@ -1,97 +0,0 @@
// mksyscall.pl xattr_darwin.go
// MACHINE GENERATED BY THE COMMAND ABOVE; DO NOT EDIT
package syscallx
import "syscall"
import "unsafe"
// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
func getxattr(path string, attr string, dest *byte, size int, position uint32, options int) (sz int, err error) {
var _p0 *byte
_p0, err = syscall.BytePtrFromString(path)
if err != nil {
return
}
var _p1 *byte
_p1, err = syscall.BytePtrFromString(attr)
if err != nil {
return
}
r0, _, e1 := syscall.Syscall6(syscall.SYS_GETXATTR, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), uintptr(unsafe.Pointer(dest)), uintptr(size), uintptr(position), uintptr(options))
sz = int(r0)
if e1 != 0 {
err = e1
}
return
}
// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
func listxattr(path string, dest []byte, options int) (sz int, err error) {
var _p0 *byte
_p0, err = syscall.BytePtrFromString(path)
if err != nil {
return
}
var _p1 unsafe.Pointer
if len(dest) > 0 {
_p1 = unsafe.Pointer(&dest[0])
} else {
_p1 = unsafe.Pointer(&_zero)
}
r0, _, e1 := syscall.Syscall6(syscall.SYS_LISTXATTR, uintptr(unsafe.Pointer(_p0)), uintptr(_p1), uintptr(len(dest)), uintptr(options), 0, 0)
sz = int(r0)
if e1 != 0 {
err = e1
}
return
}
// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
func setxattr(path string, attr string, data []byte, position uint32, flags int) (err error) {
var _p0 *byte
_p0, err = syscall.BytePtrFromString(path)
if err != nil {
return
}
var _p1 *byte
_p1, err = syscall.BytePtrFromString(attr)
if err != nil {
return
}
var _p2 unsafe.Pointer
if len(data) > 0 {
_p2 = unsafe.Pointer(&data[0])
} else {
_p2 = unsafe.Pointer(&_zero)
}
_, _, e1 := syscall.Syscall6(syscall.SYS_SETXATTR, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), uintptr(_p2), uintptr(len(data)), uintptr(position), uintptr(flags))
if e1 != 0 {
err = e1
}
return
}
// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT
func removexattr(path string, attr string, options int) (err error) {
var _p0 *byte
_p0, err = syscall.BytePtrFromString(path)
if err != nil {
return
}
var _p1 *byte
_p1, err = syscall.BytePtrFromString(attr)
if err != nil {
return
}
_, _, e1 := syscall.Syscall(syscall.SYS_REMOVEXATTR, uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(_p1)), uintptr(options))
if e1 != 0 {
err = e1
}
return
}


@ -1,21 +0,0 @@
sudo: false
language: go
go:
- 1.6.x
- 1.7.x
- 1.8.x
- 1.9.x
install:
- go get -v cloud.google.com/go/...
script:
- openssl aes-256-cbc -K $encrypted_a8b3f4fc85f4_key -iv $encrypted_a8b3f4fc85f4_iv -in keys.tar.enc -out keys.tar -d
- tar xvf keys.tar
- GCLOUD_TESTS_GOLANG_PROJECT_ID="dulcet-port-762"
GCLOUD_TESTS_GOLANG_KEY="$(pwd)/dulcet-port-762-key.json"
GCLOUD_TESTS_GOLANG_FIRESTORE_PROJECT_ID="gcloud-golang-firestore-tests"
GCLOUD_TESTS_GOLANG_FIRESTORE_KEY="$(pwd)/gcloud-golang-firestore-tests-key.json"
./run-tests.sh $TRAVIS_COMMIT
env:
matrix:
# The GCLOUD_TESTS_API_KEY environment variable.
secure: VdldogUOoubQ60LhuHJ+g/aJoBiujkSkWEWl79Zb8cvQorcQbxISS+JsOOp4QkUOU4WwaHAm8/3pIH1QMWOR6O78DaLmDKi5Q4RpkVdCpUXy+OAfQaZIcBsispMrjxLXnqFjo9ELnrArfjoeCTzaX0QTCfwQwVmigC8rR30JBKI=


@ -1,152 +0,0 @@
# Contributing
1. Sign one of the contributor license agreements below.
1. `go get golang.org/x/review/git-codereview` to install the code reviewing tool.
1. You will need to ensure that your `GOBIN` directory (by default
`$GOPATH/bin`) is in your `PATH` so that git can find the command.
1. If you would like, set up aliases for git-codereview, such that
`git codereview change` becomes `git change`. See the
[godoc](https://godoc.org/golang.org/x/review/git-codereview) for details.
1. Should you run into issues with the git-codereview tool, please note
that all error messages will assume that you have set up these
aliases.
1. Get the cloud package by running `go get -d cloud.google.com/go`.
1. If you have already checked out the source, make sure that the remote git
origin is https://code.googlesource.com/gocloud:
git remote set-url origin https://code.googlesource.com/gocloud
1. Make sure your auth is configured correctly by visiting
https://code.googlesource.com, clicking "Generate Password", and following
the directions.
1. Make changes and create a change by running `git codereview change <name>`,
provide a commit message, and use `git codereview mail` to create a Gerrit CL.
1. Keep amending to the change with `git codereview change` and mail as you receive
feedback. Each new mailed amendment will create a new patch set for your change in Gerrit.
## Integration Tests
In addition to the unit tests, you may run the integration test suite.
To run the integration tests, you must create and configure a project in the
Google Developers Console.
After creating a project, you must [create a service account](https://developers.google.com/identity/protocols/OAuth2ServiceAccount#creatinganaccount).
Ensure the project-level **Owner**
[IAM role](console.cloud.google.com/iam-admin/iam/project) is added to the
service account. Alternatively, the account can be granted all of the following roles:
- **Editor**
- **Logs Configuration Writer**
- **PubSub Admin**
Once you create a project, set the following environment variables to be able to
run against the actual APIs.
- **GCLOUD_TESTS_GOLANG_PROJECT_ID**: Developers Console project's ID (e.g. bamboo-shift-455)
- **GCLOUD_TESTS_GOLANG_KEY**: The path to the JSON key file.
- **GCLOUD_TESTS_API_KEY**: Your API key.
Firestore requires a different project and key:
- **GCLOUD_TESTS_GOLANG_FIRESTORE_PROJECT_ID**: Developers Console project's ID
supporting Firestore
- **GCLOUD_TESTS_GOLANG_FIRESTORE_KEY**: The path to the JSON key file.
Install the [gcloud command-line tool][gcloudcli] on your machine and use it
to create some resources used in the integration tests.
From the project's root directory:
``` sh
# Set the default project in your env.
$ gcloud config set project $GCLOUD_TESTS_GOLANG_PROJECT_ID
# Authenticate the gcloud tool with your account.
$ gcloud auth login
# Create the indexes used in the datastore integration tests.
$ gcloud preview datastore create-indexes datastore/testdata/index.yaml
# Create a Google Cloud storage bucket with the same name as your test project,
# and with the Stackdriver Logging service account as owner, for the sink
# integration tests in logging.
$ gsutil mb gs://$GCLOUD_TESTS_GOLANG_PROJECT_ID
$ gsutil acl ch -g cloud-logs@google.com:O gs://$GCLOUD_TESTS_GOLANG_PROJECT_ID
# Create a PubSub topic for integration tests of storage notifications.
$ gcloud beta pubsub topics create go-storage-notification-test
# Create a Spanner instance for the spanner integration tests.
$ gcloud beta spanner instances create go-integration-test --config regional-us-central1 --nodes 1 --description 'Instance for go client test'
# NOTE: Spanner instances are priced by the node-hour, so you may want to delete
# the instance after testing with 'gcloud beta spanner instances delete'.
```
Once you've set the environment variables, you can run the integration tests by
running:
``` sh
$ go test -v cloud.google.com/go/...
```
## Contributor License Agreements
Before we can accept your pull requests you'll need to sign a Contributor
License Agreement (CLA):
- **If you are an individual writing original source code** and **you own the
intellectual property**, then you'll need to sign an [individual CLA][indvcla].
- **If you work for a company that wants to allow you to contribute your
work**, then you'll need to sign a [corporate CLA][corpcla].
You can sign these electronically (just scroll to the bottom). After that,
we'll be able to accept your pull requests.
## Contributor Code of Conduct
As contributors and maintainers of this project,
and in the interest of fostering an open and welcoming community,
we pledge to respect all people who contribute through reporting issues,
posting feature requests, updating documentation,
submitting pull requests or patches, and other activities.
We are committed to making participation in this project
a harassment-free experience for everyone,
regardless of level of experience, gender, gender identity and expression,
sexual orientation, disability, personal appearance,
body size, race, ethnicity, age, religion, or nationality.
Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery
* Personal attacks
* Trolling or insulting/derogatory comments
* Public or private harassment
* Publishing others' private information,
such as physical or electronic
addresses, without explicit permission
* Other unethical or unprofessional conduct.
Project maintainers have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions
that are not aligned to this Code of Conduct.
By adopting this Code of Conduct,
project maintainers commit themselves to fairly and consistently
applying these principles to every aspect of managing this project.
Project maintainers who do not follow or enforce the Code of Conduct
may be permanently removed from the project team.
This code of conduct applies both within project spaces and in public spaces
when an individual is representing the project or its community.
Instances of abusive, harassing, or otherwise unacceptable behavior
may be reported by opening an issue
or contacting one or more of the project maintainers.
This Code of Conduct is adapted from the [Contributor Covenant](http://contributor-covenant.org), version 1.2.0,
available at [http://contributor-covenant.org/version/1/2/0/](http://contributor-covenant.org/version/1/2/0/)
[gcloudcli]: https://developers.google.com/cloud/sdk/gcloud/
[indvcla]: https://developers.google.com/open-source/cla/individual
[corpcla]: https://developers.google.com/open-source/cla/corporate


@ -22,6 +22,7 @@ David Symonds <dsymonds@golang.org>
Filippo Valsorda <hi@filippo.io>
Glenn Lewis <gmlewis@google.com>
Ingo Oeser <nightlyone@googlemail.com>
James Hall <james.hall@shopify.com>
Johan Euphrosine <proppy@google.com>
Jonathan Amsterdam <jba@google.com>
Kunpei Sakai <namusyaka@gmail.com>


@ -1,54 +0,0 @@
# Code Changes
## v0.10.0
- pubsub: Replace
```
sub.ModifyPushConfig(ctx, pubsub.PushConfig{Endpoint: "https://example.com/push"})
```
with
```
sub.Update(ctx, pubsub.SubscriptionConfigToUpdate{
PushConfig: &pubsub.PushConfig{Endpoint: "https://example.com/push"},
})
```
- trace: trace.GRPCServerInterceptor will be provided from *trace.Client.
Given an initialized `*trace.Client` named `tc`, instead of
```
s := grpc.NewServer(grpc.UnaryInterceptor(trace.GRPCServerInterceptor(tc)))
```
write
```
s := grpc.NewServer(grpc.UnaryInterceptor(tc.GRPCServerInterceptor()))
```
- trace: trace.GRPCClientInterceptor will also be provided from *trace.Client.
Instead of
```
conn, err := grpc.Dial(srv.Addr, grpc.WithUnaryInterceptor(trace.GRPCClientInterceptor()))
```
write
```
conn, err := grpc.Dial(srv.Addr, grpc.WithUnaryInterceptor(tc.GRPCClientInterceptor()))
```
- trace: We removed the deprecated `trace.EnableGRPCTracing`. Use the gRPC
interceptor as a dial option as shown below when initializing Cloud package
clients:
```
c, err := pubsub.NewClient(ctx, "project-id", option.WithGRPCDialOption(grpc.WithUnaryInterceptor(tc.GRPCClientInterceptor())))
if err != nil {
...
}
```

vendor/cloud.google.com/go/README.md (generated, vendored): 570 lines changed.

@ -1,570 +0,0 @@
# Google Cloud Client Libraries for Go
[![GoDoc](https://godoc.org/cloud.google.com/go?status.svg)](https://godoc.org/cloud.google.com/go)
Go packages for [Google Cloud Platform](https://cloud.google.com) services.
``` go
import "cloud.google.com/go"
```
To install the packages on your system,
```
$ go get -u cloud.google.com/go/...
```
**NOTE:** Some of these packages are under development, and may occasionally
make backwards-incompatible changes.
**NOTE:** The GitHub repo is a mirror of [https://code.googlesource.com/gocloud](https://code.googlesource.com/gocloud).
* [News](#news)
* [Supported APIs](#supported-apis)
* [Go Versions Supported](#go-versions-supported)
* [Authorization](#authorization)
* [Cloud Datastore](#cloud-datastore-)
* [Cloud Storage](#cloud-storage-)
* [Cloud Pub/Sub](#cloud-pub-sub-)
* [Cloud BigQuery](#cloud-bigquery-)
* [Stackdriver Logging](#stackdriver-logging-)
* [Cloud Spanner](#cloud-spanner-)
## News
_March 22, 2018_
*v0.20.0*
- bigquery: Support SchemaUpdateOptions for load jobs.
- bigtable:
- Add SampleRowKeys.
- cbt: Support union, intersection GCPolicy.
- Retry admin RPCS.
- Add trace spans to retries.
- datastore: Add OpenCensus tracing.
- firestore:
- Fix queries involving Null and NaN.
- Allow Timestamp protobuffers for time values.
- logging: Add a WriteTimeout option.
- spanner: Support Batch API.
- storage: Add OpenCensus tracing.
_February 26, 2018_
*v0.19.0*
- bigquery:
- Support customer-managed encryption keys.
- bigtable:
- Improved emulator support.
- Support GetCluster.
- datastore:
- Add general mutations.
- Support pointer struct fields.
- Support transaction options.
- firestore:
- Add Transaction.GetAll.
- Support document cursors.
- logging:
- Support concurrent RPCs to the service.
- Support per-entry resources.
- profiler:
- Add config options to disable heap and thread profiling.
- Read the project ID from $GOOGLE_CLOUD_PROJECT when it's set.
- pubsub:
- BEHAVIOR CHANGE: Release flow control after ack/nack (instead of after the
callback returns).
- Add SubscriptionInProject.
- Add OpenCensus instrumentation for streaming pull.
- storage:
- Support CORS.
_January 18, 2018_
*v0.18.0*
- bigquery:
- Marked stable.
- Schema inference of nullable fields supported.
- Added TimePartitioning to QueryConfig.
- firestore: Data provided to DocumentRef.Set with a Merge option can contain
Delete sentinels.
- logging: Clients can accept parent resources other than projects.
- pubsub:
- pubsub/pstest: A lightweight fake for pubsub. Experimental; feedback welcome.
- Support updating more subscription metadata: AckDeadline,
RetainAckedMessages and RetentionDuration.
- oslogin/apiv1beta: New client for the Cloud OS Login API.
- rpcreplay: A package for recording and replaying gRPC traffic.
- spanner:
- Add a ReadWithOptions that supports a row limit, as well as an index.
- Support query plan and execution statistics.
- Added [OpenCensus](http://opencensus.io) support.
- storage: Clarify checksum validation for gzipped files (it is not validated
when the file is served uncompressed).
_December 11, 2017_
*v0.17.0*
- firestore BREAKING CHANGES:
- Remove UpdateMap and UpdateStruct; rename UpdatePaths to Update.
Change
`docref.UpdateMap(ctx, map[string]interface{}{"a.b": 1})`
to
`docref.Update(ctx, []firestore.Update{{Path: "a.b", Value: 1}})`
Change
`docref.UpdateStruct(ctx, []string{"Field"}, aStruct)`
to
`docref.Update(ctx, []firestore.Update{{Path: "Field", Value: aStruct.Field}})`
- Rename MergePaths to Merge; require args to be FieldPaths
- A value stored as an integer can be read into a floating-point field, and vice versa.
- bigtable/cmd/cbt:
- Support deleting a column.
- Add regex option for row read.
- spanner: Mark stable.
- storage:
- Add Reader.ContentEncoding method.
- Fix handling of SignedURL headers.
- bigquery:
- If Uploader.Put is called with no rows, it returns nil without making a
call.
- Schema inference supports the "nullable" option in struct tags for
non-required fields.
- TimePartitioning supports "Field".
[Older news](https://github.com/GoogleCloudPlatform/google-cloud-go/blob/master/old-news.md)
## Supported APIs
Google API | Status | Package
---------------------------------|--------------|-----------------------------------------------------------
[BigQuery][cloud-bigquery] | stable | [`cloud.google.com/go/bigquery`][cloud-bigquery-ref]
[Bigtable][cloud-bigtable] | stable | [`cloud.google.com/go/bigtable`][cloud-bigtable-ref]
[Container][cloud-container] | alpha | [`cloud.google.com/go/container/apiv1`][cloud-container-ref]
[Data Loss Prevention][cloud-dlp]| alpha | [`cloud.google.com/go/dlp/apiv2beta1`][cloud-dlp-ref]
[Datastore][cloud-datastore] | stable | [`cloud.google.com/go/datastore`][cloud-datastore-ref]
[Debugger][cloud-debugger] | alpha | [`cloud.google.com/go/debugger/apiv2`][cloud-debugger-ref]
[ErrorReporting][cloud-errors] | alpha | [`cloud.google.com/go/errorreporting`][cloud-errors-ref]
[Firestore][cloud-firestore] | beta | [`cloud.google.com/go/firestore`][cloud-firestore-ref]
[Language][cloud-language] | stable | [`cloud.google.com/go/language/apiv1`][cloud-language-ref]
[Logging][cloud-logging] | stable | [`cloud.google.com/go/logging`][cloud-logging-ref]
[Monitoring][cloud-monitoring] | beta | [`cloud.google.com/go/monitoring/apiv3`][cloud-monitoring-ref]
[OS Login][cloud-oslogin] | alpha | [`cloud.google.com/compute/docs/oslogin/rest`][cloud-oslogin-ref]
[Pub/Sub][cloud-pubsub] | beta | [`cloud.google.com/go/pubsub`][cloud-pubsub-ref]
[Spanner][cloud-spanner] | stable | [`cloud.google.com/go/spanner`][cloud-spanner-ref]
[Speech][cloud-speech] | stable | [`cloud.google.com/go/speech/apiv1`][cloud-speech-ref]
[Storage][cloud-storage] | stable | [`cloud.google.com/go/storage`][cloud-storage-ref]
[Translation][cloud-translation] | stable | [`cloud.google.com/go/translate`][cloud-translation-ref]
[Video Intelligence][cloud-video]| beta | [`cloud.google.com/go/videointelligence/apiv1beta1`][cloud-video-ref]
[Vision][cloud-vision] | stable | [`cloud.google.com/go/vision/apiv1`][cloud-vision-ref]
> **Alpha status**: the API is still being actively developed. As a
> result, it might change in backward-incompatible ways and is not recommended
> for production use.
>
> **Beta status**: the API is largely complete, but still has outstanding
> features and bugs to be addressed. There may be minor backwards-incompatible
> changes where necessary.
>
> **Stable status**: the API is mature and ready for production use. We will
> continue addressing bugs and feature requests.
Documentation and examples are available at
https://godoc.org/cloud.google.com/go
Visit or join the
[google-api-go-announce group](https://groups.google.com/forum/#!forum/google-api-go-announce)
for updates on these packages.
## Go Versions Supported
We support the two most recent major versions of Go. If Google App Engine uses
an older version, we support that as well. You can see which versions are
currently supported by looking at the lines following `go:` in
[`.travis.yml`](.travis.yml).
## Authorization
By default, each API will use [Google Application Default Credentials][default-creds]
for authorization credentials used in calling the API endpoints. This will allow your
application to run in many environments without requiring explicit configuration.
[snip]:# (auth)
```go
client, err := storage.NewClient(ctx)
```
To authorize using a
[JSON key file](https://cloud.google.com/iam/docs/managing-service-account-keys),
pass
[`option.WithServiceAccountFile`](https://godoc.org/google.golang.org/api/option#WithServiceAccountFile)
to the `NewClient` function of the desired package. For example:
[snip]:# (auth-JSON)
```go
client, err := storage.NewClient(ctx, option.WithServiceAccountFile("path/to/keyfile.json"))
```
You can exert more control over authorization by using the
[`golang.org/x/oauth2`](https://godoc.org/golang.org/x/oauth2) package to
create an `oauth2.TokenSource`. Then pass
[`option.WithTokenSource`](https://godoc.org/google.golang.org/api/option#WithTokenSource)
to the `NewClient` function:
[snip]:# (auth-ts)
```go
tokenSource := ...
client, err := storage.NewClient(ctx, option.WithTokenSource(tokenSource))
```
## Cloud Datastore [![GoDoc](https://godoc.org/cloud.google.com/go/datastore?status.svg)](https://godoc.org/cloud.google.com/go/datastore)
- [About Cloud Datastore][cloud-datastore]
- [Activating the API for your project][cloud-datastore-activation]
- [API documentation][cloud-datastore-docs]
- [Go client documentation](https://godoc.org/cloud.google.com/go/datastore)
- [Complete sample program](https://github.com/GoogleCloudPlatform/golang-samples/tree/master/datastore/tasks)
### Example Usage
First create a `datastore.Client` to use throughout your application:
[snip]:# (datastore-1)
```go
client, err := datastore.NewClient(ctx, "my-project-id")
if err != nil {
log.Fatal(err)
}
```
Then use that client to interact with the API:
[snip]:# (datastore-2)
```go
type Post struct {
Title string
Body string `datastore:",noindex"`
PublishedAt time.Time
}
keys := []*datastore.Key{
datastore.NameKey("Post", "post1", nil),
datastore.NameKey("Post", "post2", nil),
}
posts := []*Post{
{Title: "Post 1", Body: "...", PublishedAt: time.Now()},
{Title: "Post 2", Body: "...", PublishedAt: time.Now()},
}
if _, err := client.PutMulti(ctx, keys, posts); err != nil {
log.Fatal(err)
}
```
## Cloud Storage [![GoDoc](https://godoc.org/cloud.google.com/go/storage?status.svg)](https://godoc.org/cloud.google.com/go/storage)
- [About Cloud Storage][cloud-storage]
- [API documentation][cloud-storage-docs]
- [Go client documentation](https://godoc.org/cloud.google.com/go/storage)
- [Complete sample programs](https://github.com/GoogleCloudPlatform/golang-samples/tree/master/storage)
### Example Usage
First create a `storage.Client` to use throughout your application:
[snip]:# (storage-1)
```go
client, err := storage.NewClient(ctx)
if err != nil {
log.Fatal(err)
}
```
[snip]:# (storage-2)
```go
// Read the object1 from bucket.
rc, err := client.Bucket("bucket").Object("object1").NewReader(ctx)
if err != nil {
log.Fatal(err)
}
defer rc.Close()
body, err := ioutil.ReadAll(rc)
if err != nil {
log.Fatal(err)
}
```
## Cloud Pub/Sub [![GoDoc](https://godoc.org/cloud.google.com/go/pubsub?status.svg)](https://godoc.org/cloud.google.com/go/pubsub)
- [About Cloud Pubsub][cloud-pubsub]
- [API documentation][cloud-pubsub-docs]
- [Go client documentation](https://godoc.org/cloud.google.com/go/pubsub)
- [Complete sample programs](https://github.com/GoogleCloudPlatform/golang-samples/tree/master/pubsub)
### Example Usage
First create a `pubsub.Client` to use throughout your application:
[snip]:# (pubsub-1)
```go
client, err := pubsub.NewClient(ctx, "project-id")
if err != nil {
log.Fatal(err)
}
```
Then use the client to publish and subscribe:
[snip]:# (pubsub-2)
```go
// Publish "hello world" on topic1.
topic := client.Topic("topic1")
res := topic.Publish(ctx, &pubsub.Message{
Data: []byte("hello world"),
})
// The publish happens asynchronously.
// Later, you can get the result from res:
...
msgID, err := res.Get(ctx)
if err != nil {
log.Fatal(err)
}
// Use a callback to receive messages via subscription1.
sub := client.Subscription("subscription1")
err = sub.Receive(ctx, func(ctx context.Context, m *pubsub.Message) {
fmt.Println(m.Data)
m.Ack() // Acknowledge that we've consumed the message.
})
if err != nil {
log.Println(err)
}
```
## Cloud BigQuery [![GoDoc](https://godoc.org/cloud.google.com/go/bigquery?status.svg)](https://godoc.org/cloud.google.com/go/bigquery)
- [About Cloud BigQuery][cloud-bigquery]
- [API documentation][cloud-bigquery-docs]
- [Go client documentation][cloud-bigquery-ref]
- [Complete sample programs](https://github.com/GoogleCloudPlatform/golang-samples/tree/master/bigquery)
### Example Usage
First create a `bigquery.Client` to use throughout your application:
[snip]:# (bq-1)
```go
c, err := bigquery.NewClient(ctx, "my-project-ID")
if err != nil {
// TODO: Handle error.
}
```
Then use that client to interact with the API:
[snip]:# (bq-2)
```go
// Construct a query.
q := c.Query(`
SELECT year, SUM(number)
FROM [bigquery-public-data:usa_names.usa_1910_2013]
WHERE name = "William"
GROUP BY year
ORDER BY year
`)
// Execute the query.
it, err := q.Read(ctx)
if err != nil {
// TODO: Handle error.
}
// Iterate through the results.
for {
var values []bigquery.Value
err := it.Next(&values)
if err == iterator.Done {
break
}
if err != nil {
// TODO: Handle error.
}
fmt.Println(values)
}
```
## Stackdriver Logging [![GoDoc](https://godoc.org/cloud.google.com/go/logging?status.svg)](https://godoc.org/cloud.google.com/go/logging)
- [About Stackdriver Logging][cloud-logging]
- [API documentation][cloud-logging-docs]
- [Go client documentation][cloud-logging-ref]
- [Complete sample programs](https://github.com/GoogleCloudPlatform/golang-samples/tree/master/logging)
### Example Usage
First create a `logging.Client` to use throughout your application:
[snip]:# (logging-1)
```go
ctx := context.Background()
client, err := logging.NewClient(ctx, "my-project")
if err != nil {
// TODO: Handle error.
}
```
Usually, you'll want to add log entries to a buffer to be periodically flushed
(automatically and asynchronously) to the Stackdriver Logging service.
[snip]:# (logging-2)
```go
logger := client.Logger("my-log")
logger.Log(logging.Entry{Payload: "something happened!"})
```
Close your client before your program exits, to flush any buffered log entries.
[snip]:# (logging-3)
```go
err = client.Close()
if err != nil {
// TODO: Handle error.
}
```
## Cloud Spanner [![GoDoc](https://godoc.org/cloud.google.com/go/spanner?status.svg)](https://godoc.org/cloud.google.com/go/spanner)
- [About Cloud Spanner][cloud-spanner]
- [API documentation][cloud-spanner-docs]
- [Go client documentation](https://godoc.org/cloud.google.com/go/spanner)
### Example Usage
First create a `spanner.Client` to use throughout your application:
[snip]:# (spanner-1)
```go
client, err := spanner.NewClient(ctx, "projects/P/instances/I/databases/D")
if err != nil {
log.Fatal(err)
}
```
[snip]:# (spanner-2)
```go
// Simple Reads And Writes
_, err = client.Apply(ctx, []*spanner.Mutation{
spanner.Insert("Users",
[]string{"name", "email"},
[]interface{}{"alice", "a@example.com"})})
if err != nil {
log.Fatal(err)
}
row, err := client.Single().ReadRow(ctx, "Users",
spanner.Key{"alice"}, []string{"email"})
if err != nil {
log.Fatal(err)
}
```
## Contributing
Contributions are welcome. Please see the
[CONTRIBUTING](https://github.com/GoogleCloudPlatform/google-cloud-go/blob/master/CONTRIBUTING.md)
document for details. We're using Gerrit for our code reviews; please don't open pull
requests against this repo, as new pull requests will be automatically closed.
Please note that this project is released with a Contributor Code of Conduct.
By participating in this project you agree to abide by its terms.
See [Contributor Code of Conduct](https://github.com/GoogleCloudPlatform/google-cloud-go/blob/master/CONTRIBUTING.md#contributor-code-of-conduct)
for more information.
[cloud-datastore]: https://cloud.google.com/datastore/
[cloud-datastore-ref]: https://godoc.org/cloud.google.com/go/datastore
[cloud-datastore-docs]: https://cloud.google.com/datastore/docs
[cloud-datastore-activation]: https://cloud.google.com/datastore/docs/activate
[cloud-firestore]: https://cloud.google.com/firestore/
[cloud-firestore-ref]: https://godoc.org/cloud.google.com/go/firestore
[cloud-firestore-docs]: https://cloud.google.com/firestore/docs
[cloud-firestore-activation]: https://cloud.google.com/firestore/docs/activate
[cloud-pubsub]: https://cloud.google.com/pubsub/
[cloud-pubsub-ref]: https://godoc.org/cloud.google.com/go/pubsub
[cloud-pubsub-docs]: https://cloud.google.com/pubsub/docs
[cloud-storage]: https://cloud.google.com/storage/
[cloud-storage-ref]: https://godoc.org/cloud.google.com/go/storage
[cloud-storage-docs]: https://cloud.google.com/storage/docs
[cloud-storage-create-bucket]: https://cloud.google.com/storage/docs/cloud-console#_creatingbuckets
[cloud-bigtable]: https://cloud.google.com/bigtable/
[cloud-bigtable-ref]: https://godoc.org/cloud.google.com/go/bigtable
[cloud-bigquery]: https://cloud.google.com/bigquery/
[cloud-bigquery-docs]: https://cloud.google.com/bigquery/docs
[cloud-bigquery-ref]: https://godoc.org/cloud.google.com/go/bigquery
[cloud-logging]: https://cloud.google.com/logging/
[cloud-logging-docs]: https://cloud.google.com/logging/docs
[cloud-logging-ref]: https://godoc.org/cloud.google.com/go/logging
[cloud-monitoring]: https://cloud.google.com/monitoring/
[cloud-monitoring-ref]: https://godoc.org/cloud.google.com/go/monitoring/apiv3
[cloud-vision]: https://cloud.google.com/vision
[cloud-vision-ref]: https://godoc.org/cloud.google.com/go/vision/apiv1
[cloud-language]: https://cloud.google.com/natural-language
[cloud-language-ref]: https://godoc.org/cloud.google.com/go/language/apiv1
[cloud-oslogin]: https://cloud.google.com/compute/docs/oslogin/rest
[cloud-oslogin-ref]: https://cloud.google.com/compute/docs/oslogin/rest
[cloud-speech]: https://cloud.google.com/speech
[cloud-speech-ref]: https://godoc.org/cloud.google.com/go/speech/apiv1
[cloud-spanner]: https://cloud.google.com/spanner/
[cloud-spanner-ref]: https://godoc.org/cloud.google.com/go/spanner
[cloud-spanner-docs]: https://cloud.google.com/spanner/docs
[cloud-translation]: https://cloud.google.com/translation
[cloud-translation-ref]: https://godoc.org/cloud.google.com/go/translation
[cloud-video]: https://cloud.google.com/video-intelligence/
[cloud-video-ref]: https://godoc.org/cloud.google.com/go/videointelligence/apiv1beta1
[cloud-errors]: https://cloud.google.com/error-reporting/
[cloud-errors-ref]: https://godoc.org/cloud.google.com/go/errorreporting
[cloud-container]: https://cloud.google.com/containers/
[cloud-container-ref]: https://godoc.org/cloud.google.com/go/container/apiv1
[cloud-debugger]: https://cloud.google.com/debugger/
[cloud-debugger-ref]: https://godoc.org/cloud.google.com/go/debugger/apiv2
[cloud-dlp]: https://cloud.google.com/dlp/
[cloud-dlp-ref]: https://godoc.org/cloud.google.com/go/dlp/apiv2beta1
[default-creds]: https://developers.google.com/identity/protocols/application-default-credentials


@ -1,32 +0,0 @@
# This file configures AppVeyor (http://www.appveyor.com),
# a Windows-based CI service similar to Travis.
# Identifier for this run
version: "{build}"
# Clone the repo into this path, which conforms to the standard
# Go workspace structure.
clone_folder: c:\gopath\src\cloud.google.com\go
environment:
GOPATH: c:\gopath
GCLOUD_TESTS_GOLANG_PROJECT_ID: dulcet-port-762
GCLOUD_TESTS_GOLANG_KEY: c:\gopath\src\cloud.google.com\go\key.json
KEYFILE_CONTENTS:
secure: IvRbDAhM2PIQqzVkjzJ4FjizUvoQ+c3vG/qhJQG+HlZ/L5KEkqLu+x6WjLrExrNMyGku4znB2jmbTrUW3Ob4sGG+R5vvqeQ3YMHCVIkw5CxY+/bUDkW5RZWsVbuCnNa/vKsWmCP+/sZW6ICe29yKJ2ZOb6QaauI4s9R6j+cqBbU9pumMGYFRb0Rw3uUU7DKmVFCy+NjTENZIlDP9rmjANgAzigowJJEb2Tg9sLlQKmQeKiBSRN8lKc5Nq60a+fIzHGKvql4eIitDDDpOpyHv15/Xr1BzFw2yDoiR4X1lng0u7q0X9RgX4VIYa6gT16NXBEmQgbuX8gh7SfPMp9RhiZD9sVUaV+yogEabYpyPnmUURo0hXwkctKaBkQlEmKvjHwF5dvbg8+yqGhwtjAgFNimXG3INrwQsfQsZskkQWanutbJf9xy50GyWWFZZdi0uT4oXP/b5P7aklPXKXsvrJKBh7RjEaqBrhi86IJwOjBspvoR4l2WmcQyxb2xzQS1pjbBJFQfYJJ8+JgsstTL8PBO9d4ybJC0li1Om1qnWxkaewvPxxuoHJ9LpRKof19yRYWBmhTXb2tTASKG/zslvl4fgG4DmQBS93WC7dsiGOhAraGw2eCTgd0lYZOhk1FjWl9TS80aktXxzH/7nTvem5ohm+eDl6O0wnTL4KXjQVNSQ1PyLn4lGRJ5MNGzBTRFWIr2API2rca4Fysyfh/UdmazPGlNbY9JPGqb9+F04QzLfqm+Zz/cHy59E7lOSMBlUI4KD6d6ZNNKNRH+/g9i+fSiyiXKugTfda8KBnWGyPwprxuWGYaiQUGUYOwJY5R6x5c4mjImAB310V+Wo33UbWFJiwxEDsiCNqW1meVkBzt2er26vh4qbgCUIQ3iM3gFPfHgy+QxkmIhic7Q1HYacQElt8AAP41M7cCKWCuZidegP37MBB//mjjiNt047ZSQEvB4tqsX/OvfbByVef+cbtVw9T0yjHvmCdPW1XrhyrCCgclu6oYYdbmc5D7BBDRbjjMWGv6YvceAbfGf6ukdB5PuV+TGEN/FoQ1QTRA6Aqf+3fLMg4mS4oyTfw5xyYNbv3qoyLPrp+BnxI53WB9p0hfMg4n9FD6NntBxjDq+Q3Lk/bjC/Y4MaRWdzbMzF9a0lgGfcw9DURlK5p7uGJC9vg34feNoQprxVEZRQ01cHLeob6eGkYm4HxSRx8JY39Mh+9wzJo+k/aIvFleNC3e35NOrkXr6wb5e42n2DwBdPqdNolTLtLFRglAL1LTpp27UjvjieWJAKfoDTR5CKl01sZqt0wPdLLcvsMj6CiPFmccUIOYeZMe86kLBD61Qa5F1EwkgO3Om2qSjW96FzL4skRc+BmU5RrHlAFSldR1wpUgtkUMv9vH5Cy+UJdcvpZ8KbmhZ2PsjF7ddJ1ve9RAw3cP325AyIMwZ77Ef1mgTM0NJze6eSW1qKlEsgt1FADPyeUu1NQTA2H2dueMPGlArWTSUgyWR9AdfpqouT7eg0JWI5w+yUZZC+/rPglYbt84oLmYpwuli0z8FyEQRPIc3EtkfWIv/yYgDr2TZ0N2KvGfpi/MAUWgxI1gleC2uKgEOEtuJthd3XZjF2NoE7IBqjQOINybcJOjyeB5vRLDY1FLuxYzdg1y1etkV4XQig/vje
install:
# Info for debugging.
- echo %PATH%
- go version
- go env
- go get -v -d -t ./...
# Provide a build script, or AppVeyor will call msbuild.
build_script:
- go install -v ./...
- echo %KEYFILE_CONTENTS% > %GCLOUD_TESTS_GOLANG_KEY%
test_script:
- go test -v ./...


@ -1,49 +0,0 @@
// Copyright 2016 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package cloud_test
import (
"cloud.google.com/go/datastore"
"golang.org/x/net/context"
"google.golang.org/api/option"
)
func Example_applicationDefaultCredentials() {
// Google Application Default Credentials is the recommended way to authorize
// and authenticate clients.
//
// See the following link on how to create and obtain Application Default Credentials:
// https://developers.google.com/identity/protocols/application-default-credentials.
client, err := datastore.NewClient(context.Background(), "project-id")
if err != nil {
// TODO: handle error.
}
_ = client // Use the client.
}
func Example_serviceAccountFile() {
// Use a JSON key file associated with a Google service account to
// authenticate and authorize. Service Account keys can be created and
// downloaded from https://console.developers.google.com/permissions/serviceaccounts.
//
// Note: This example uses the datastore client, but the same steps apply to
// the other client libraries underneath this package.
client, err := datastore.NewClient(context.Background(),
"project-id", option.WithServiceAccountFile("/path/to/service-account-key.json"))
if err != nil {
// TODO: handle error.
}
_ = client // Use the client.
}


@ -1,8 +0,0 @@
# BigQuery Benchmark
This directory contains benchmarks for the BigQuery client.
## Usage
`go run bench.go -- <your project id> queries.json`
The BigQuery service caches requests, so the benchmark should be run
at least twice; disregard the first result.


@ -1,85 +0,0 @@
// Copyright 2017 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//+build ignore
package main
import (
"encoding/json"
"flag"
"io/ioutil"
"log"
"time"
"cloud.google.com/go/bigquery"
"golang.org/x/net/context"
"google.golang.org/api/iterator"
)
func main() {
flag.Parse()
ctx := context.Background()
c, err := bigquery.NewClient(ctx, flag.Arg(0))
if err != nil {
log.Fatal(err)
}
queriesJSON, err := ioutil.ReadFile(flag.Arg(1))
if err != nil {
log.Fatal(err)
}
var queries []string
if err := json.Unmarshal(queriesJSON, &queries); err != nil {
log.Fatal(err)
}
for _, q := range queries {
doQuery(ctx, c, q)
}
}
func doQuery(ctx context.Context, c *bigquery.Client, qt string) {
startTime := time.Now()
q := c.Query(qt)
it, err := q.Read(ctx)
if err != nil {
log.Fatal(err)
}
numRows, numCols := 0, 0
var firstByte time.Duration
for {
var values []bigquery.Value
err := it.Next(&values)
if err == iterator.Done {
break
}
if err != nil {
log.Fatal(err)
}
if numRows == 0 {
numCols = len(values)
firstByte = time.Since(startTime)
} else if numCols != len(values) {
log.Fatalf("got %d columns, want %d", len(values), numCols)
}
numRows++
}
log.Printf("query %q: %d rows, %d cols, first byte %f sec, total %f sec",
qt, numRows, numCols, firstByte.Seconds(), time.Since(startTime).Seconds())
}


@ -1,10 +0,0 @@
[
"SELECT * FROM `nyc-tlc.yellow.trips` LIMIT 10000",
"SELECT * FROM `nyc-tlc.yellow.trips` LIMIT 100000",
"SELECT * FROM `nyc-tlc.yellow.trips` LIMIT 1000000",
"SELECT title FROM `bigquery-public-data.samples.wikipedia` ORDER BY title LIMIT 1000",
"SELECT title, id, timestamp, contributor_ip FROM `bigquery-public-data.samples.wikipedia` WHERE title like 'Blo%' ORDER BY id",
"SELECT * FROM `bigquery-public-data.baseball.games_post_wide` ORDER BY gameId",
"SELECT * FROM `bigquery-public-data.samples.github_nested` WHERE repository.has_downloads ORDER BY repository.created_at LIMIT 10000",
"SELECT repo_name, path FROM `bigquery-public-data.github_repos.files` WHERE path LIKE '%.java' ORDER BY id LIMIT 1000000"
]


@ -1,161 +0,0 @@
// Copyright 2015 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package bigquery
import (
"fmt"
"io"
"net/http"
"time"
gax "github.com/googleapis/gax-go"
"cloud.google.com/go/internal"
"cloud.google.com/go/internal/version"
"google.golang.org/api/googleapi"
"google.golang.org/api/option"
htransport "google.golang.org/api/transport/http"
"golang.org/x/net/context"
bq "google.golang.org/api/bigquery/v2"
)
const (
prodAddr = "https://www.googleapis.com/bigquery/v2/"
Scope = "https://www.googleapis.com/auth/bigquery"
userAgent = "gcloud-golang-bigquery/20160429"
)
var xGoogHeader = fmt.Sprintf("gl-go/%s gccl/%s", version.Go(), version.Repo)
func setClientHeader(headers http.Header) {
headers.Set("x-goog-api-client", xGoogHeader)
}
// Client may be used to perform BigQuery operations.
type Client struct {
// Location, if set, will be used as the default location for all subsequent
// dataset creation and job operations. A location specified directly in one of
// those operations will override this value.
Location string
projectID string
bqs *bq.Service
}
// NewClient constructs a new Client which can perform BigQuery operations.
// Operations performed via the client are billed to the specified GCP project.
func NewClient(ctx context.Context, projectID string, opts ...option.ClientOption) (*Client, error) {
o := []option.ClientOption{
option.WithEndpoint(prodAddr),
option.WithScopes(Scope),
option.WithUserAgent(userAgent),
}
o = append(o, opts...)
httpClient, endpoint, err := htransport.NewClient(ctx, o...)
if err != nil {
return nil, fmt.Errorf("bigquery: dialing: %v", err)
}
bqs, err := bq.New(httpClient)
if err != nil {
return nil, fmt.Errorf("bigquery: constructing client: %v", err)
}
bqs.BasePath = endpoint
c := &Client{
projectID: projectID,
bqs: bqs,
}
return c, nil
}
// Close closes any resources held by the client.
// Close should be called when the client is no longer needed.
// It need not be called at program exit.
func (c *Client) Close() error {
return nil
}
// Calls the Jobs.Insert RPC and returns a Job.
func (c *Client) insertJob(ctx context.Context, job *bq.Job, media io.Reader) (*Job, error) {
call := c.bqs.Jobs.Insert(c.projectID, job).Context(ctx)
setClientHeader(call.Header())
if media != nil {
call.Media(media)
}
var res *bq.Job
var err error
invoke := func() error {
res, err = call.Do()
return err
}
// A job with a client-generated ID can be retried; the presence of the
// ID makes the insert operation idempotent.
// We don't retry if there is media, because it is an io.Reader. We'd
// have to read the contents and keep them in memory, and that could be expensive.
// TODO(jba): Look into retrying if media != nil.
if job.JobReference != nil && media == nil {
err = runWithRetry(ctx, invoke)
} else {
err = invoke()
}
if err != nil {
return nil, err
}
return bqToJob(res, c)
}
// Convert a number of milliseconds since the Unix epoch to a time.Time.
// Treat an input of zero specially: convert it to the zero time,
// rather than the start of the epoch.
func unixMillisToTime(m int64) time.Time {
if m == 0 {
return time.Time{}
}
return time.Unix(0, m*1e6)
}
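// Illustration (not part of the original file): the zero special case
// distinguishes "unset" from the Unix epoch itself:
//
//	unixMillisToTime(0) // time.Time{} (zero time; IsZero() == true)
//	unixMillisToTime(1) // time.Unix(0, 1e6): 1ms after 1970-01-01 UTC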
// runWithRetry calls the function until it returns nil or a non-retryable error, or
// the context is done.
// See the similar function in ../storage/invoke.go. The main difference is the
// reason for retrying.
func runWithRetry(ctx context.Context, call func() error) error {
// These parameters match the suggestions in https://cloud.google.com/bigquery/sla.
backoff := gax.Backoff{
Initial: 1 * time.Second,
Max: 32 * time.Second,
Multiplier: 2,
}
return internal.Retry(ctx, backoff, func() (stop bool, err error) {
err = call()
if err == nil {
return true, nil
}
return !retryableError(err), err
})
}
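// Illustration (not part of the original file): with the parameters
// above the retry cap grows 1s, 2s, 4s, 8s, 16s, 32s, 32s, ..., and gax
// sleeps a randomized duration up to the current cap before each retry,
// until the call succeeds, retryableError says stop, or ctx is done.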
// This is the correct definition of retryable according to the BigQuery team.
func retryableError(err error) bool {
e, ok := err.(*googleapi.Error)
if !ok {
return false
}
var reason string
if len(e.Errors) > 0 {
reason = e.Errors[0].Reason
}
return e.Code == http.StatusBadGateway || reason == "backendError" || reason == "rateLimitExceeded"
}
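
As a usage sketch for the client above (the project ID is a placeholder, and the stdlib context package works in place of golang.org/x/net/context for external callers):

	package main

	import (
		"context"
		"log"

		"cloud.google.com/go/bigquery"
	)

	func main() {
		ctx := context.Background()
		client, err := bigquery.NewClient(ctx, "my-project") // placeholder project ID
		if err != nil {
			log.Fatal(err)
		}
		// As the implementation above shows, Close currently holds no
		// resources, but deferring it keeps callers robust to change.
		defer client.Close()
	}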


@@ -1,106 +0,0 @@
// Copyright 2016 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package bigquery
import (
"golang.org/x/net/context"
bq "google.golang.org/api/bigquery/v2"
)
// CopyConfig holds the configuration for a copy job.
type CopyConfig struct {
// Srcs are the tables from which data will be copied.
Srcs []*Table
// Dst is the table into which the data will be copied.
Dst *Table
// CreateDisposition specifies the circumstances under which the destination table will be created.
// The default is CreateIfNeeded.
CreateDisposition TableCreateDisposition
// WriteDisposition specifies how existing data in the destination table is treated.
// The default is WriteEmpty.
WriteDisposition TableWriteDisposition
// The labels associated with this job.
Labels map[string]string
// Custom encryption configuration (e.g., Cloud KMS keys).
DestinationEncryptionConfig *EncryptionConfig
}
func (c *CopyConfig) toBQ() *bq.JobConfiguration {
var ts []*bq.TableReference
for _, t := range c.Srcs {
ts = append(ts, t.toBQ())
}
return &bq.JobConfiguration{
Labels: c.Labels,
Copy: &bq.JobConfigurationTableCopy{
CreateDisposition: string(c.CreateDisposition),
WriteDisposition: string(c.WriteDisposition),
DestinationTable: c.Dst.toBQ(),
DestinationEncryptionConfiguration: c.DestinationEncryptionConfig.toBQ(),
SourceTables: ts,
},
}
}
func bqToCopyConfig(q *bq.JobConfiguration, c *Client) *CopyConfig {
cc := &CopyConfig{
Labels: q.Labels,
CreateDisposition: TableCreateDisposition(q.Copy.CreateDisposition),
WriteDisposition: TableWriteDisposition(q.Copy.WriteDisposition),
Dst: bqToTable(q.Copy.DestinationTable, c),
DestinationEncryptionConfig: bqToEncryptionConfig(q.Copy.DestinationEncryptionConfiguration),
}
for _, t := range q.Copy.SourceTables {
cc.Srcs = append(cc.Srcs, bqToTable(t, c))
}
return cc
}
// A Copier copies data into a BigQuery table from one or more BigQuery tables.
type Copier struct {
JobIDConfig
CopyConfig
c *Client
}
// CopierFrom returns a Copier which can be used to copy data into a
// BigQuery table from one or more BigQuery tables.
// The returned Copier may optionally be further configured before its Run method is called.
func (t *Table) CopierFrom(srcs ...*Table) *Copier {
return &Copier{
c: t.c,
CopyConfig: CopyConfig{
Srcs: srcs,
Dst: t,
},
}
}
// Run initiates a copy job.
func (c *Copier) Run(ctx context.Context) (*Job, error) {
return c.c.insertJob(ctx, c.newJob(), nil)
}
func (c *Copier) newJob() *bq.Job {
return &bq.Job{
JobReference: c.JobIDConfig.createJobRef(c.c),
Configuration: c.CopyConfig.toBQ(),
}
}
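
A hedged usage sketch for the Copier defined above; the dataset and table IDs are placeholders, and Job.Wait comes from elsewhere in the same package (not part of this diff):

	package main

	import (
		"context"
		"log"

		"cloud.google.com/go/bigquery"
	)

	func main() {
		ctx := context.Background()
		client, err := bigquery.NewClient(ctx, "my-project") // placeholder
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()

		dst := client.Dataset("d").Table("dst_table")
		copier := dst.CopierFrom(client.Dataset("d").Table("src_table"))
		copier.WriteDisposition = bigquery.WriteTruncate // replace existing rows
		job, err := copier.Run(ctx)
		if err != nil {
			log.Fatal(err)
		}
		status, err := job.Wait(ctx)
		if err != nil {
			log.Fatal(err)
		}
		if status.Err() != nil {
			log.Fatal(status.Err())
		}
	}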


@@ -1,165 +0,0 @@
// Copyright 2015 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package bigquery
import (
"testing"
"github.com/google/go-cmp/cmp/cmpopts"
"cloud.google.com/go/internal/testutil"
bq "google.golang.org/api/bigquery/v2"
)
func defaultCopyJob() *bq.Job {
return &bq.Job{
JobReference: &bq.JobReference{JobId: "RANDOM", ProjectId: "client-project-id"},
Configuration: &bq.JobConfiguration{
Copy: &bq.JobConfigurationTableCopy{
DestinationTable: &bq.TableReference{
ProjectId: "d-project-id",
DatasetId: "d-dataset-id",
TableId: "d-table-id",
},
SourceTables: []*bq.TableReference{
{
ProjectId: "s-project-id",
DatasetId: "s-dataset-id",
TableId: "s-table-id",
},
},
},
},
}
}
func TestCopy(t *testing.T) {
defer fixRandomID("RANDOM")()
testCases := []struct {
dst *Table
srcs []*Table
jobID string
location string
config CopyConfig
want *bq.Job
}{
{
dst: &Table{
ProjectID: "d-project-id",
DatasetID: "d-dataset-id",
TableID: "d-table-id",
},
srcs: []*Table{
{
ProjectID: "s-project-id",
DatasetID: "s-dataset-id",
TableID: "s-table-id",
},
},
want: defaultCopyJob(),
},
{
dst: &Table{
ProjectID: "d-project-id",
DatasetID: "d-dataset-id",
TableID: "d-table-id",
},
srcs: []*Table{
{
ProjectID: "s-project-id",
DatasetID: "s-dataset-id",
TableID: "s-table-id",
},
},
config: CopyConfig{
CreateDisposition: CreateNever,
WriteDisposition: WriteTruncate,
DestinationEncryptionConfig: &EncryptionConfig{KMSKeyName: "keyName"},
Labels: map[string]string{"a": "b"},
},
want: func() *bq.Job {
j := defaultCopyJob()
j.Configuration.Labels = map[string]string{"a": "b"}
j.Configuration.Copy.CreateDisposition = "CREATE_NEVER"
j.Configuration.Copy.WriteDisposition = "WRITE_TRUNCATE"
j.Configuration.Copy.DestinationEncryptionConfiguration = &bq.EncryptionConfiguration{KmsKeyName: "keyName"}
return j
}(),
},
{
dst: &Table{
ProjectID: "d-project-id",
DatasetID: "d-dataset-id",
TableID: "d-table-id",
},
srcs: []*Table{
{
ProjectID: "s-project-id",
DatasetID: "s-dataset-id",
TableID: "s-table-id",
},
},
jobID: "job-id",
want: func() *bq.Job {
j := defaultCopyJob()
j.JobReference.JobId = "job-id"
return j
}(),
},
{
dst: &Table{
ProjectID: "d-project-id",
DatasetID: "d-dataset-id",
TableID: "d-table-id",
},
srcs: []*Table{
{
ProjectID: "s-project-id",
DatasetID: "s-dataset-id",
TableID: "s-table-id",
},
},
location: "asia-northeast1",
want: func() *bq.Job {
j := defaultCopyJob()
j.JobReference.Location = "asia-northeast1"
return j
}(),
},
}
c := &Client{projectID: "client-project-id"}
for i, tc := range testCases {
tc.dst.c = c
copier := tc.dst.CopierFrom(tc.srcs...)
copier.JobID = tc.jobID
copier.Location = tc.location
tc.config.Srcs = tc.srcs
tc.config.Dst = tc.dst
copier.CopyConfig = tc.config
got := copier.newJob()
checkJob(t, i, got, tc.want)
jc, err := bqToJobConfig(got.Configuration, c)
if err != nil {
t.Fatalf("#%d: %v", i, err)
}
diff := testutil.Diff(jc.(*CopyConfig), &copier.CopyConfig,
cmpopts.IgnoreUnexported(Table{}))
if diff != "" {
t.Errorf("#%d: (got=-, want=+:\n%s", i, diff)
}
}
}


@@ -1,505 +0,0 @@
// Copyright 2015 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package bigquery
import (
"errors"
"fmt"
"time"
"cloud.google.com/go/internal/optional"
"golang.org/x/net/context"
bq "google.golang.org/api/bigquery/v2"
"google.golang.org/api/iterator"
)
// Dataset is a reference to a BigQuery dataset.
type Dataset struct {
ProjectID string
DatasetID string
c *Client
}
// DatasetMetadata contains information about a BigQuery dataset.
type DatasetMetadata struct {
// These fields can be set when creating a dataset.
Name string // The user-friendly name for this dataset.
Description string // The user-friendly description of this dataset.
Location string // The geo location of the dataset.
DefaultTableExpiration time.Duration // The default expiration time for new tables.
Labels map[string]string // User-provided labels.
Access []*AccessEntry // Access permissions.
// These fields are read-only.
CreationTime time.Time
LastModifiedTime time.Time // When the dataset or any of its tables were modified.
FullID string // The full dataset ID in the form projectID:datasetID.
// ETag is the ETag obtained when reading metadata. Pass it to Dataset.Update to
// ensure that the metadata hasn't changed since it was read.
ETag string
}
// DatasetMetadataToUpdate is used when updating a dataset's metadata.
// Only non-nil fields will be updated.
type DatasetMetadataToUpdate struct {
Description optional.String // The user-friendly description of this table.
Name optional.String // The user-friendly name for this dataset.
// DefaultTableExpiration is the default expiration time for new tables.
// If set to time.Duration(0), new tables never expire.
DefaultTableExpiration optional.Duration
// The entire access list. It is not possible to replace individual entries.
Access []*AccessEntry
labelUpdater
}
// Dataset creates a handle to a BigQuery dataset in the client's project.
func (c *Client) Dataset(id string) *Dataset {
return c.DatasetInProject(c.projectID, id)
}
// DatasetInProject creates a handle to a BigQuery dataset in the specified project.
func (c *Client) DatasetInProject(projectID, datasetID string) *Dataset {
return &Dataset{
ProjectID: projectID,
DatasetID: datasetID,
c: c,
}
}
// Create creates a dataset in the BigQuery service. An error will be returned if the
// dataset already exists. Pass in a DatasetMetadata value to configure the dataset.
func (d *Dataset) Create(ctx context.Context, md *DatasetMetadata) error {
ds, err := md.toBQ()
if err != nil {
return err
}
ds.DatasetReference = &bq.DatasetReference{DatasetId: d.DatasetID}
// Use Client.Location as a default.
if ds.Location == "" {
ds.Location = d.c.Location
}
call := d.c.bqs.Datasets.Insert(d.ProjectID, ds).Context(ctx)
setClientHeader(call.Header())
_, err = call.Do()
return err
}
func (dm *DatasetMetadata) toBQ() (*bq.Dataset, error) {
ds := &bq.Dataset{}
if dm == nil {
return ds, nil
}
ds.FriendlyName = dm.Name
ds.Description = dm.Description
ds.Location = dm.Location
ds.DefaultTableExpirationMs = int64(dm.DefaultTableExpiration / time.Millisecond)
ds.Labels = dm.Labels
var err error
ds.Access, err = accessListToBQ(dm.Access)
if err != nil {
return nil, err
}
if !dm.CreationTime.IsZero() {
return nil, errors.New("bigquery: Dataset.CreationTime is not writable")
}
if !dm.LastModifiedTime.IsZero() {
return nil, errors.New("bigquery: Dataset.LastModifiedTime is not writable")
}
if dm.FullID != "" {
return nil, errors.New("bigquery: Dataset.FullID is not writable")
}
if dm.ETag != "" {
return nil, errors.New("bigquery: Dataset.ETag is not writable")
}
return ds, nil
}
func accessListToBQ(a []*AccessEntry) ([]*bq.DatasetAccess, error) {
var q []*bq.DatasetAccess
for _, e := range a {
a, err := e.toBQ()
if err != nil {
return nil, err
}
q = append(q, a)
}
return q, nil
}
// Delete deletes the dataset.
func (d *Dataset) Delete(ctx context.Context) error {
call := d.c.bqs.Datasets.Delete(d.ProjectID, d.DatasetID).Context(ctx)
setClientHeader(call.Header())
return call.Do()
}
// Metadata fetches the metadata for the dataset.
func (d *Dataset) Metadata(ctx context.Context) (*DatasetMetadata, error) {
call := d.c.bqs.Datasets.Get(d.ProjectID, d.DatasetID).Context(ctx)
setClientHeader(call.Header())
var ds *bq.Dataset
if err := runWithRetry(ctx, func() (err error) {
ds, err = call.Do()
return err
}); err != nil {
return nil, err
}
return bqToDatasetMetadata(ds)
}
func bqToDatasetMetadata(d *bq.Dataset) (*DatasetMetadata, error) {
dm := &DatasetMetadata{
CreationTime: unixMillisToTime(d.CreationTime),
LastModifiedTime: unixMillisToTime(d.LastModifiedTime),
DefaultTableExpiration: time.Duration(d.DefaultTableExpirationMs) * time.Millisecond,
Description: d.Description,
Name: d.FriendlyName,
FullID: d.Id,
Location: d.Location,
Labels: d.Labels,
ETag: d.Etag,
}
for _, a := range d.Access {
e, err := bqToAccessEntry(a, nil)
if err != nil {
return nil, err
}
dm.Access = append(dm.Access, e)
}
return dm, nil
}
// Update modifies specific Dataset metadata fields.
// To perform a read-modify-write that protects against intervening reads,
// set the etag argument to the DatasetMetadata.ETag field from the read.
// Pass the empty string for etag for a "blind write" that will always succeed.
func (d *Dataset) Update(ctx context.Context, dm DatasetMetadataToUpdate, etag string) (*DatasetMetadata, error) {
ds, err := dm.toBQ()
if err != nil {
return nil, err
}
call := d.c.bqs.Datasets.Patch(d.ProjectID, d.DatasetID, ds).Context(ctx)
setClientHeader(call.Header())
if etag != "" {
call.Header().Set("If-Match", etag)
}
var ds2 *bq.Dataset
if err := runWithRetry(ctx, func() (err error) {
ds2, err = call.Do()
return err
}); err != nil {
return nil, err
}
return bqToDatasetMetadata(ds2)
}
func (dm *DatasetMetadataToUpdate) toBQ() (*bq.Dataset, error) {
ds := &bq.Dataset{}
forceSend := func(field string) {
ds.ForceSendFields = append(ds.ForceSendFields, field)
}
if dm.Description != nil {
ds.Description = optional.ToString(dm.Description)
forceSend("Description")
}
if dm.Name != nil {
ds.FriendlyName = optional.ToString(dm.Name)
forceSend("FriendlyName")
}
if dm.DefaultTableExpiration != nil {
dur := optional.ToDuration(dm.DefaultTableExpiration)
if dur == 0 {
// Send a null to delete the field.
ds.NullFields = append(ds.NullFields, "DefaultTableExpirationMs")
} else {
ds.DefaultTableExpirationMs = int64(dur / time.Millisecond)
}
}
if dm.Access != nil {
var err error
ds.Access, err = accessListToBQ(dm.Access)
if err != nil {
return nil, err
}
if len(ds.Access) == 0 {
ds.NullFields = append(ds.NullFields, "Access")
}
}
labels, forces, nulls := dm.update()
ds.Labels = labels
ds.ForceSendFields = append(ds.ForceSendFields, forces...)
ds.NullFields = append(ds.NullFields, nulls...)
return ds, nil
}
// Table creates a handle to a BigQuery table in the dataset.
// To determine if a table exists, call Table.Metadata.
// If the table does not already exist, use Table.Create to create it.
func (d *Dataset) Table(tableID string) *Table {
return &Table{ProjectID: d.ProjectID, DatasetID: d.DatasetID, TableID: tableID, c: d.c}
}
// Tables returns an iterator over the tables in the Dataset.
func (d *Dataset) Tables(ctx context.Context) *TableIterator {
it := &TableIterator{
ctx: ctx,
dataset: d,
}
it.pageInfo, it.nextFunc = iterator.NewPageInfo(
it.fetch,
func() int { return len(it.tables) },
func() interface{} { b := it.tables; it.tables = nil; return b })
return it
}
// A TableIterator is an iterator over Tables.
type TableIterator struct {
ctx context.Context
dataset *Dataset
tables []*Table
pageInfo *iterator.PageInfo
nextFunc func() error
}
// Next returns the next result. Its second return value is Done if there are
// no more results. Once Next returns Done, all subsequent calls will return
// Done.
func (it *TableIterator) Next() (*Table, error) {
if err := it.nextFunc(); err != nil {
return nil, err
}
t := it.tables[0]
it.tables = it.tables[1:]
return t, nil
}
// PageInfo supports pagination. See the google.golang.org/api/iterator package for details.
func (it *TableIterator) PageInfo() *iterator.PageInfo { return it.pageInfo }
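// Illustration (not part of the original file): the canonical loop for
// draining this iterator, using the Done sentinel described above:
//
//	for {
//		t, err := it.Next()
//		if err == iterator.Done {
//			break
//		}
//		if err != nil {
//			// handle the error
//		}
//		_ = t // use the *Table
//	}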
// for testing
var listTables = func(it *TableIterator, pageSize int, pageToken string) (*bq.TableList, error) {
call := it.dataset.c.bqs.Tables.List(it.dataset.ProjectID, it.dataset.DatasetID).
PageToken(pageToken).
Context(it.ctx)
setClientHeader(call.Header())
if pageSize > 0 {
call.MaxResults(int64(pageSize))
}
var res *bq.TableList
err := runWithRetry(it.ctx, func() (err error) {
res, err = call.Do()
return err
})
return res, err
}
func (it *TableIterator) fetch(pageSize int, pageToken string) (string, error) {
res, err := listTables(it, pageSize, pageToken)
if err != nil {
return "", err
}
for _, t := range res.Tables {
it.tables = append(it.tables, bqToTable(t.TableReference, it.dataset.c))
}
return res.NextPageToken, nil
}
func bqToTable(tr *bq.TableReference, c *Client) *Table {
return &Table{
ProjectID: tr.ProjectId,
DatasetID: tr.DatasetId,
TableID: tr.TableId,
c: c,
}
}
// Datasets returns an iterator over the datasets in a project.
// The Client's project is used by default, but that can be
// changed by setting ProjectID on the returned iterator before calling Next.
func (c *Client) Datasets(ctx context.Context) *DatasetIterator {
return c.DatasetsInProject(ctx, c.projectID)
}
// DatasetsInProject returns an iterator over the datasets in the provided project.
//
// Deprecated: call Client.Datasets, then set ProjectID on the returned iterator.
func (c *Client) DatasetsInProject(ctx context.Context, projectID string) *DatasetIterator {
it := &DatasetIterator{
ctx: ctx,
c: c,
ProjectID: projectID,
}
it.pageInfo, it.nextFunc = iterator.NewPageInfo(
it.fetch,
func() int { return len(it.items) },
func() interface{} { b := it.items; it.items = nil; return b })
return it
}
// DatasetIterator iterates over the datasets in a project.
type DatasetIterator struct {
// ListHidden causes hidden datasets to be listed when set to true.
// Set before the first call to Next.
ListHidden bool
// Filter restricts the datasets returned by label. The filter syntax is described in
// https://cloud.google.com/bigquery/docs/labeling-datasets#filtering_datasets_using_labels
// Set before the first call to Next.
Filter string
// The project ID of the listed datasets.
// Set before the first call to Next.
ProjectID string
ctx context.Context
c *Client
pageInfo *iterator.PageInfo
nextFunc func() error
items []*Dataset
}
// PageInfo supports pagination. See the google.golang.org/api/iterator package for details.
func (it *DatasetIterator) PageInfo() *iterator.PageInfo { return it.pageInfo }
func (it *DatasetIterator) Next() (*Dataset, error) {
if err := it.nextFunc(); err != nil {
return nil, err
}
item := it.items[0]
it.items = it.items[1:]
return item, nil
}
// for testing
var listDatasets = func(it *DatasetIterator, pageSize int, pageToken string) (*bq.DatasetList, error) {
call := it.c.bqs.Datasets.List(it.ProjectID).
Context(it.ctx).
PageToken(pageToken).
All(it.ListHidden)
setClientHeader(call.Header())
if pageSize > 0 {
call.MaxResults(int64(pageSize))
}
if it.Filter != "" {
call.Filter(it.Filter)
}
var res *bq.DatasetList
err := runWithRetry(it.ctx, func() (err error) {
res, err = call.Do()
return err
})
return res, err
}
func (it *DatasetIterator) fetch(pageSize int, pageToken string) (string, error) {
res, err := listDatasets(it, pageSize, pageToken)
if err != nil {
return "", err
}
for _, d := range res.Datasets {
it.items = append(it.items, &Dataset{
ProjectID: d.DatasetReference.ProjectId,
DatasetID: d.DatasetReference.DatasetId,
c: it.c,
})
}
return res.NextPageToken, nil
}
// An AccessEntry describes the permissions that an entity has on a dataset.
type AccessEntry struct {
Role AccessRole // The role of the entity
EntityType EntityType // The type of entity
Entity string // The entity (individual or group) granted access
View *Table // The view granted access (EntityType must be ViewEntity)
}
// AccessRole is the level of access to grant to a dataset.
type AccessRole string
const (
OwnerRole AccessRole = "OWNER"
ReaderRole AccessRole = "READER"
WriterRole AccessRole = "WRITER"
)
// EntityType is the type of entity in an AccessEntry.
type EntityType int
const (
// A domain (e.g. "example.com")
DomainEntity EntityType = iota + 1
// Email address of a Google Group
GroupEmailEntity
// Email address of an individual user.
UserEmailEntity
// A special group: one of projectOwners, projectReaders, projectWriters or allAuthenticatedUsers.
SpecialGroupEntity
// A BigQuery view.
ViewEntity
)
func (e *AccessEntry) toBQ() (*bq.DatasetAccess, error) {
q := &bq.DatasetAccess{Role: string(e.Role)}
switch e.EntityType {
case DomainEntity:
q.Domain = e.Entity
case GroupEmailEntity:
q.GroupByEmail = e.Entity
case UserEmailEntity:
q.UserByEmail = e.Entity
case SpecialGroupEntity:
q.SpecialGroup = e.Entity
case ViewEntity:
q.View = e.View.toBQ()
default:
return nil, fmt.Errorf("bigquery: unknown entity type %d", e.EntityType)
}
return q, nil
}
func bqToAccessEntry(q *bq.DatasetAccess, c *Client) (*AccessEntry, error) {
e := &AccessEntry{Role: AccessRole(q.Role)}
switch {
case q.Domain != "":
e.Entity = q.Domain
e.EntityType = DomainEntity
case q.GroupByEmail != "":
e.Entity = q.GroupByEmail
e.EntityType = GroupEmailEntity
case q.UserByEmail != "":
e.Entity = q.UserByEmail
e.EntityType = UserEmailEntity
case q.SpecialGroup != "":
e.Entity = q.SpecialGroup
e.EntityType = SpecialGroupEntity
case q.View != nil:
e.View = c.DatasetInProject(q.View.ProjectId, q.View.DatasetId).Table(q.View.TableId)
e.EntityType = ViewEntity
default:
return nil, errors.New("bigquery: invalid access value")
}
return e, nil
}
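
A sketch of the read-modify-write pattern described on Dataset.Update above, passing the ETag from a prior read so the write fails if the metadata changed in between (IDs are placeholders):

	package main

	import (
		"context"
		"log"

		"cloud.google.com/go/bigquery"
	)

	func main() {
		ctx := context.Background()
		client, err := bigquery.NewClient(ctx, "my-project") // placeholder
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()

		ds := client.Dataset("d") // placeholder dataset ID
		md, err := ds.Metadata(ctx)
		if err != nil {
			log.Fatal(err)
		}
		// Conditional update: fails if the dataset's ETag no longer matches.
		if _, err := ds.Update(ctx,
			bigquery.DatasetMetadataToUpdate{Description: "new description"},
			md.ETag); err != nil {
			log.Fatal(err)
		}
	}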


@@ -1,328 +0,0 @@
// Copyright 2015 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package bigquery
import (
"errors"
"strconv"
"testing"
"time"
"github.com/google/go-cmp/cmp"
"cloud.google.com/go/internal/testutil"
"golang.org/x/net/context"
bq "google.golang.org/api/bigquery/v2"
itest "google.golang.org/api/iterator/testing"
)
// listTablesStub services table-list requests by returning data from an in-memory list of tables.
type listTablesStub struct {
expectedProject, expectedDataset string
tables []*bq.TableListTables
}
func (s *listTablesStub) listTables(it *TableIterator, pageSize int, pageToken string) (*bq.TableList, error) {
if it.dataset.ProjectID != s.expectedProject {
return nil, errors.New("wrong project id")
}
if it.dataset.DatasetID != s.expectedDataset {
return nil, errors.New("wrong dataset id")
}
const maxPageSize = 2
if pageSize <= 0 || pageSize > maxPageSize {
pageSize = maxPageSize
}
start := 0
if pageToken != "" {
var err error
start, err = strconv.Atoi(pageToken)
if err != nil {
return nil, err
}
}
end := start + pageSize
if end > len(s.tables) {
end = len(s.tables)
}
nextPageToken := ""
if end < len(s.tables) {
nextPageToken = strconv.Itoa(end)
}
return &bq.TableList{
Tables: s.tables[start:end],
NextPageToken: nextPageToken,
}, nil
}
func TestTables(t *testing.T) {
c := &Client{projectID: "p1"}
inTables := []*bq.TableListTables{
{TableReference: &bq.TableReference{ProjectId: "p1", DatasetId: "d1", TableId: "t1"}},
{TableReference: &bq.TableReference{ProjectId: "p1", DatasetId: "d1", TableId: "t2"}},
{TableReference: &bq.TableReference{ProjectId: "p1", DatasetId: "d1", TableId: "t3"}},
}
outTables := []*Table{
{ProjectID: "p1", DatasetID: "d1", TableID: "t1", c: c},
{ProjectID: "p1", DatasetID: "d1", TableID: "t2", c: c},
{ProjectID: "p1", DatasetID: "d1", TableID: "t3", c: c},
}
lts := &listTablesStub{
expectedProject: "p1",
expectedDataset: "d1",
tables: inTables,
}
old := listTables
listTables = lts.listTables // cannot use t.Parallel with this test
defer func() { listTables = old }()
msg, ok := itest.TestIterator(outTables,
func() interface{} { return c.Dataset("d1").Tables(context.Background()) },
func(it interface{}) (interface{}, error) { return it.(*TableIterator).Next() })
if !ok {
t.Error(msg)
}
}
type listDatasetsStub struct {
expectedProject string
datasets []*bq.DatasetListDatasets
hidden map[*bq.DatasetListDatasets]bool
}
func (s *listDatasetsStub) listDatasets(it *DatasetIterator, pageSize int, pageToken string) (*bq.DatasetList, error) {
const maxPageSize = 2
if pageSize <= 0 || pageSize > maxPageSize {
pageSize = maxPageSize
}
if it.Filter != "" {
return nil, errors.New("filter not supported")
}
if it.ProjectID != s.expectedProject {
return nil, errors.New("bad project ID")
}
start := 0
if pageToken != "" {
var err error
start, err = strconv.Atoi(pageToken)
if err != nil {
return nil, err
}
}
var (
i int
result []*bq.DatasetListDatasets
nextPageToken string
)
for i = start; len(result) < pageSize && i < len(s.datasets); i++ {
if s.hidden[s.datasets[i]] && !it.ListHidden {
continue
}
result = append(result, s.datasets[i])
}
if i < len(s.datasets) {
nextPageToken = strconv.Itoa(i)
}
return &bq.DatasetList{
Datasets: result,
NextPageToken: nextPageToken,
}, nil
}
func TestDatasets(t *testing.T) {
client := &Client{projectID: "p"}
inDatasets := []*bq.DatasetListDatasets{
{DatasetReference: &bq.DatasetReference{ProjectId: "p", DatasetId: "a"}},
{DatasetReference: &bq.DatasetReference{ProjectId: "p", DatasetId: "b"}},
{DatasetReference: &bq.DatasetReference{ProjectId: "p", DatasetId: "hidden"}},
{DatasetReference: &bq.DatasetReference{ProjectId: "p", DatasetId: "c"}},
}
outDatasets := []*Dataset{
{"p", "a", client},
{"p", "b", client},
{"p", "hidden", client},
{"p", "c", client},
}
lds := &listDatasetsStub{
expectedProject: "p",
datasets: inDatasets,
hidden: map[*bq.DatasetListDatasets]bool{inDatasets[2]: true},
}
old := listDatasets
listDatasets = lds.listDatasets // cannot use t.Parallel with this test
defer func() { listDatasets = old }()
msg, ok := itest.TestIterator(outDatasets,
func() interface{} { it := client.Datasets(context.Background()); it.ListHidden = true; return it },
func(it interface{}) (interface{}, error) { return it.(*DatasetIterator).Next() })
if !ok {
t.Fatalf("ListHidden=true: %s", msg)
}
msg, ok = itest.TestIterator([]*Dataset{outDatasets[0], outDatasets[1], outDatasets[3]},
func() interface{} { it := client.Datasets(context.Background()); it.ListHidden = false; return it },
func(it interface{}) (interface{}, error) { return it.(*DatasetIterator).Next() })
if !ok {
t.Fatalf("ListHidden=false: %s", msg)
}
}
func TestDatasetToBQ(t *testing.T) {
for _, test := range []struct {
in *DatasetMetadata
want *bq.Dataset
}{
{nil, &bq.Dataset{}},
{&DatasetMetadata{Name: "name"}, &bq.Dataset{FriendlyName: "name"}},
{&DatasetMetadata{
Name: "name",
Description: "desc",
DefaultTableExpiration: time.Hour,
Location: "EU",
Labels: map[string]string{"x": "y"},
Access: []*AccessEntry{{Role: OwnerRole, Entity: "example.com", EntityType: DomainEntity}},
}, &bq.Dataset{
FriendlyName: "name",
Description: "desc",
DefaultTableExpirationMs: 60 * 60 * 1000,
Location: "EU",
Labels: map[string]string{"x": "y"},
Access: []*bq.DatasetAccess{{Role: "OWNER", Domain: "example.com"}},
}},
} {
got, err := test.in.toBQ()
if err != nil {
t.Fatal(err)
}
if !testutil.Equal(got, test.want) {
t.Errorf("%v:\ngot %+v\nwant %+v", test.in, got, test.want)
}
}
// Check that non-writeable fields are unset.
aTime := time.Date(2017, 1, 26, 0, 0, 0, 0, time.Local)
for _, dm := range []*DatasetMetadata{
{CreationTime: aTime},
{LastModifiedTime: aTime},
{FullID: "x"},
{ETag: "e"},
} {
if _, err := dm.toBQ(); err == nil {
t.Errorf("%+v: got nil, want error", dm)
}
}
}
func TestBQToDatasetMetadata(t *testing.T) {
cTime := time.Date(2017, 1, 26, 0, 0, 0, 0, time.Local)
cMillis := cTime.UnixNano() / 1e6
mTime := time.Date(2017, 10, 31, 0, 0, 0, 0, time.Local)
mMillis := mTime.UnixNano() / 1e6
q := &bq.Dataset{
CreationTime: cMillis,
LastModifiedTime: mMillis,
FriendlyName: "name",
Description: "desc",
DefaultTableExpirationMs: 60 * 60 * 1000,
Location: "EU",
Labels: map[string]string{"x": "y"},
Access: []*bq.DatasetAccess{
{Role: "READER", UserByEmail: "joe@example.com"},
{Role: "WRITER", GroupByEmail: "users@example.com"},
},
Etag: "etag",
}
want := &DatasetMetadata{
CreationTime: cTime,
LastModifiedTime: mTime,
Name: "name",
Description: "desc",
DefaultTableExpiration: time.Hour,
Location: "EU",
Labels: map[string]string{"x": "y"},
Access: []*AccessEntry{
{Role: ReaderRole, Entity: "joe@example.com", EntityType: UserEmailEntity},
{Role: WriterRole, Entity: "users@example.com", EntityType: GroupEmailEntity},
},
ETag: "etag",
}
got, err := bqToDatasetMetadata(q)
if err != nil {
t.Fatal(err)
}
if diff := testutil.Diff(got, want); diff != "" {
t.Errorf("-got, +want:\n%s", diff)
}
}
func TestDatasetMetadataToUpdateToBQ(t *testing.T) {
dm := DatasetMetadataToUpdate{
Description: "desc",
Name: "name",
DefaultTableExpiration: time.Hour,
}
dm.SetLabel("label", "value")
dm.DeleteLabel("del")
got, err := dm.toBQ()
if err != nil {
t.Fatal(err)
}
want := &bq.Dataset{
Description: "desc",
FriendlyName: "name",
DefaultTableExpirationMs: 60 * 60 * 1000,
Labels: map[string]string{"label": "value"},
ForceSendFields: []string{"Description", "FriendlyName"},
NullFields: []string{"Labels.del"},
}
if diff := testutil.Diff(got, want); diff != "" {
t.Errorf("-got, +want:\n%s", diff)
}
}
func TestConvertAccessEntry(t *testing.T) {
c := &Client{projectID: "pid"}
for _, e := range []*AccessEntry{
{Role: ReaderRole, Entity: "e", EntityType: DomainEntity},
{Role: WriterRole, Entity: "e", EntityType: GroupEmailEntity},
{Role: OwnerRole, Entity: "e", EntityType: UserEmailEntity},
{Role: ReaderRole, Entity: "e", EntityType: SpecialGroupEntity},
{Role: ReaderRole, EntityType: ViewEntity,
View: &Table{ProjectID: "p", DatasetID: "d", TableID: "t", c: c}},
} {
q, err := e.toBQ()
if err != nil {
t.Fatal(err)
}
got, err := bqToAccessEntry(q, c)
if err != nil {
t.Fatal(err)
}
if diff := testutil.Diff(got, e, cmp.AllowUnexported(Table{}, Client{})); diff != "" {
t.Errorf("got=-, want=+:\n%s", diff)
}
}
e := &AccessEntry{Role: ReaderRole, Entity: "e"}
if _, err := e.toBQ(); err == nil {
t.Error("got nil, want error")
}
if _, err := bqToAccessEntry(&bq.DatasetAccess{Role: "WRITER"}, nil); err == nil {
t.Error("got nil, want error")
}
}


@@ -1,67 +0,0 @@
// Copyright 2018 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// https://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// AUTO-GENERATED CODE. DO NOT EDIT.
package datatransfer
import (
datatransferpb "google.golang.org/genproto/googleapis/cloud/bigquery/datatransfer/v1"
)
import (
"fmt"
"strconv"
"testing"
"time"
"cloud.google.com/go/internal/testutil"
"golang.org/x/net/context"
"google.golang.org/api/iterator"
"google.golang.org/api/option"
)
var _ = fmt.Sprintf
var _ = iterator.Done
var _ = strconv.FormatUint
var _ = time.Now
func TestDataTransferServiceSmoke(t *testing.T) {
if testing.Short() {
t.Skip("skipping smoke test in short mode")
}
ctx := context.Background()
ts := testutil.TokenSource(ctx, DefaultAuthScopes()...)
if ts == nil {
t.Skip("Integration tests skipped. See CONTRIBUTING.md for details")
}
projectId := testutil.ProjID()
_ = projectId
c, err := NewClient(ctx, option.WithTokenSource(ts))
if err != nil {
t.Fatal(err)
}
var formattedParent string = fmt.Sprintf("projects/%s", projectId)
var request = &datatransferpb.ListDataSourcesRequest{
Parent: formattedParent,
}
iter := c.ListDataSources(ctx, request)
if _, err := iter.Next(); err != nil && err != iterator.Done {
t.Error(err)
}
}


@@ -1,601 +0,0 @@
// Copyright 2018 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// https://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// AUTO-GENERATED CODE. DO NOT EDIT.
package datatransfer
import (
"math"
"time"
"cloud.google.com/go/internal/version"
gax "github.com/googleapis/gax-go"
"golang.org/x/net/context"
"google.golang.org/api/iterator"
"google.golang.org/api/option"
"google.golang.org/api/transport"
datatransferpb "google.golang.org/genproto/googleapis/cloud/bigquery/datatransfer/v1"
"google.golang.org/grpc"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/metadata"
)
// CallOptions contains the retry settings for each method of Client.
type CallOptions struct {
GetDataSource []gax.CallOption
ListDataSources []gax.CallOption
CreateTransferConfig []gax.CallOption
UpdateTransferConfig []gax.CallOption
DeleteTransferConfig []gax.CallOption
GetTransferConfig []gax.CallOption
ListTransferConfigs []gax.CallOption
ScheduleTransferRuns []gax.CallOption
GetTransferRun []gax.CallOption
DeleteTransferRun []gax.CallOption
ListTransferRuns []gax.CallOption
ListTransferLogs []gax.CallOption
CheckValidCreds []gax.CallOption
}
func defaultClientOptions() []option.ClientOption {
return []option.ClientOption{
option.WithEndpoint("bigquerydatatransfer.googleapis.com:443"),
option.WithScopes(DefaultAuthScopes()...),
}
}
func defaultCallOptions() *CallOptions {
retry := map[[2]string][]gax.CallOption{
{"default", "idempotent"}: {
gax.WithRetry(func() gax.Retryer {
return gax.OnCodes([]codes.Code{
codes.DeadlineExceeded,
codes.Unavailable,
}, gax.Backoff{
Initial: 100 * time.Millisecond,
Max: 60000 * time.Millisecond,
Multiplier: 1.3,
})
}),
},
}
return &CallOptions{
GetDataSource: retry[[2]string{"default", "idempotent"}],
ListDataSources: retry[[2]string{"default", "idempotent"}],
CreateTransferConfig: retry[[2]string{"default", "non_idempotent"}],
UpdateTransferConfig: retry[[2]string{"default", "non_idempotent"}],
DeleteTransferConfig: retry[[2]string{"default", "idempotent"}],
GetTransferConfig: retry[[2]string{"default", "idempotent"}],
ListTransferConfigs: retry[[2]string{"default", "idempotent"}],
ScheduleTransferRuns: retry[[2]string{"default", "non_idempotent"}],
GetTransferRun: retry[[2]string{"default", "idempotent"}],
DeleteTransferRun: retry[[2]string{"default", "idempotent"}],
ListTransferRuns: retry[[2]string{"default", "idempotent"}],
ListTransferLogs: retry[[2]string{"default", "idempotent"}],
CheckValidCreds: retry[[2]string{"default", "idempotent"}],
}
}
// Client is a client for interacting with BigQuery Data Transfer API.
type Client struct {
// The connection to the service.
conn *grpc.ClientConn
// The gRPC API client.
client datatransferpb.DataTransferServiceClient
// The call options for this service.
CallOptions *CallOptions
// The x-goog-* metadata to be sent with each request.
xGoogMetadata metadata.MD
}
// NewClient creates a new data transfer service client.
//
// The Google BigQuery Data Transfer Service API enables BigQuery users to
// configure the transfer of their data from other Google Products into BigQuery.
// This service contains methods that are exposed to the end user. It backs
// the frontend.
func NewClient(ctx context.Context, opts ...option.ClientOption) (*Client, error) {
conn, err := transport.DialGRPC(ctx, append(defaultClientOptions(), opts...)...)
if err != nil {
return nil, err
}
c := &Client{
conn: conn,
CallOptions: defaultCallOptions(),
client: datatransferpb.NewDataTransferServiceClient(conn),
}
c.setGoogleClientInfo()
return c, nil
}
// Connection returns the client's connection to the API service.
func (c *Client) Connection() *grpc.ClientConn {
return c.conn
}
// Close closes the connection to the API service. The user should invoke this when
// the client is no longer required.
func (c *Client) Close() error {
return c.conn.Close()
}
// setGoogleClientInfo sets the name and version of the application in
// the `x-goog-api-client` header passed on each request. Intended for
// use by Google-written clients.
func (c *Client) setGoogleClientInfo(keyval ...string) {
kv := append([]string{"gl-go", version.Go()}, keyval...)
kv = append(kv, "gapic", version.Repo, "gax", gax.Version, "grpc", grpc.Version)
c.xGoogMetadata = metadata.Pairs("x-goog-api-client", gax.XGoogHeader(kv...))
}
// GetDataSource retrieves a supported data source and returns its settings,
// which can be used for UI rendering.
func (c *Client) GetDataSource(ctx context.Context, req *datatransferpb.GetDataSourceRequest, opts ...gax.CallOption) (*datatransferpb.DataSource, error) {
ctx = insertMetadata(ctx, c.xGoogMetadata)
opts = append(c.CallOptions.GetDataSource[0:len(c.CallOptions.GetDataSource):len(c.CallOptions.GetDataSource)], opts...)
var resp *datatransferpb.DataSource
err := gax.Invoke(ctx, func(ctx context.Context, settings gax.CallSettings) error {
var err error
resp, err = c.client.GetDataSource(ctx, req, settings.GRPC...)
return err
}, opts...)
if err != nil {
return nil, err
}
return resp, nil
}
// ListDataSources lists supported data sources and returns their settings,
// which can be used for UI rendering.
func (c *Client) ListDataSources(ctx context.Context, req *datatransferpb.ListDataSourcesRequest, opts ...gax.CallOption) *DataSourceIterator {
ctx = insertMetadata(ctx, c.xGoogMetadata)
opts = append(c.CallOptions.ListDataSources[0:len(c.CallOptions.ListDataSources):len(c.CallOptions.ListDataSources)], opts...)
it := &DataSourceIterator{}
it.InternalFetch = func(pageSize int, pageToken string) ([]*datatransferpb.DataSource, string, error) {
var resp *datatransferpb.ListDataSourcesResponse
req.PageToken = pageToken
if pageSize > math.MaxInt32 {
req.PageSize = math.MaxInt32
} else {
req.PageSize = int32(pageSize)
}
err := gax.Invoke(ctx, func(ctx context.Context, settings gax.CallSettings) error {
var err error
resp, err = c.client.ListDataSources(ctx, req, settings.GRPC...)
return err
}, opts...)
if err != nil {
return nil, "", err
}
return resp.DataSources, resp.NextPageToken, nil
}
fetch := func(pageSize int, pageToken string) (string, error) {
items, nextPageToken, err := it.InternalFetch(pageSize, pageToken)
if err != nil {
return "", err
}
it.items = append(it.items, items...)
return nextPageToken, nil
}
it.pageInfo, it.nextFunc = iterator.NewPageInfo(fetch, it.bufLen, it.takeBuf)
return it
}
// CreateTransferConfig creates a new data transfer configuration.
func (c *Client) CreateTransferConfig(ctx context.Context, req *datatransferpb.CreateTransferConfigRequest, opts ...gax.CallOption) (*datatransferpb.TransferConfig, error) {
ctx = insertMetadata(ctx, c.xGoogMetadata)
opts = append(c.CallOptions.CreateTransferConfig[0:len(c.CallOptions.CreateTransferConfig):len(c.CallOptions.CreateTransferConfig)], opts...)
var resp *datatransferpb.TransferConfig
err := gax.Invoke(ctx, func(ctx context.Context, settings gax.CallSettings) error {
var err error
resp, err = c.client.CreateTransferConfig(ctx, req, settings.GRPC...)
return err
}, opts...)
if err != nil {
return nil, err
}
return resp, nil
}
// UpdateTransferConfig updates a data transfer configuration.
// All fields must be set, even if they are not updated.
func (c *Client) UpdateTransferConfig(ctx context.Context, req *datatransferpb.UpdateTransferConfigRequest, opts ...gax.CallOption) (*datatransferpb.TransferConfig, error) {
ctx = insertMetadata(ctx, c.xGoogMetadata)
opts = append(c.CallOptions.UpdateTransferConfig[0:len(c.CallOptions.UpdateTransferConfig):len(c.CallOptions.UpdateTransferConfig)], opts...)
var resp *datatransferpb.TransferConfig
err := gax.Invoke(ctx, func(ctx context.Context, settings gax.CallSettings) error {
var err error
resp, err = c.client.UpdateTransferConfig(ctx, req, settings.GRPC...)
return err
}, opts...)
if err != nil {
return nil, err
}
return resp, nil
}
// DeleteTransferConfig deletes a data transfer configuration,
// including any associated transfer runs and logs.
func (c *Client) DeleteTransferConfig(ctx context.Context, req *datatransferpb.DeleteTransferConfigRequest, opts ...gax.CallOption) error {
ctx = insertMetadata(ctx, c.xGoogMetadata)
opts = append(c.CallOptions.DeleteTransferConfig[0:len(c.CallOptions.DeleteTransferConfig):len(c.CallOptions.DeleteTransferConfig)], opts...)
err := gax.Invoke(ctx, func(ctx context.Context, settings gax.CallSettings) error {
var err error
_, err = c.client.DeleteTransferConfig(ctx, req, settings.GRPC...)
return err
}, opts...)
return err
}
// GetTransferConfig returns information about a data transfer config.
func (c *Client) GetTransferConfig(ctx context.Context, req *datatransferpb.GetTransferConfigRequest, opts ...gax.CallOption) (*datatransferpb.TransferConfig, error) {
ctx = insertMetadata(ctx, c.xGoogMetadata)
opts = append(c.CallOptions.GetTransferConfig[0:len(c.CallOptions.GetTransferConfig):len(c.CallOptions.GetTransferConfig)], opts...)
var resp *datatransferpb.TransferConfig
err := gax.Invoke(ctx, func(ctx context.Context, settings gax.CallSettings) error {
var err error
resp, err = c.client.GetTransferConfig(ctx, req, settings.GRPC...)
return err
}, opts...)
if err != nil {
return nil, err
}
return resp, nil
}
// ListTransferConfigs returns information about all data transfers in the project.
func (c *Client) ListTransferConfigs(ctx context.Context, req *datatransferpb.ListTransferConfigsRequest, opts ...gax.CallOption) *TransferConfigIterator {
ctx = insertMetadata(ctx, c.xGoogMetadata)
opts = append(c.CallOptions.ListTransferConfigs[0:len(c.CallOptions.ListTransferConfigs):len(c.CallOptions.ListTransferConfigs)], opts...)
it := &TransferConfigIterator{}
it.InternalFetch = func(pageSize int, pageToken string) ([]*datatransferpb.TransferConfig, string, error) {
var resp *datatransferpb.ListTransferConfigsResponse
req.PageToken = pageToken
if pageSize > math.MaxInt32 {
req.PageSize = math.MaxInt32
} else {
req.PageSize = int32(pageSize)
}
err := gax.Invoke(ctx, func(ctx context.Context, settings gax.CallSettings) error {
var err error
resp, err = c.client.ListTransferConfigs(ctx, req, settings.GRPC...)
return err
}, opts...)
if err != nil {
return nil, "", err
}
return resp.TransferConfigs, resp.NextPageToken, nil
}
fetch := func(pageSize int, pageToken string) (string, error) {
items, nextPageToken, err := it.InternalFetch(pageSize, pageToken)
if err != nil {
return "", err
}
it.items = append(it.items, items...)
return nextPageToken, nil
}
it.pageInfo, it.nextFunc = iterator.NewPageInfo(fetch, it.bufLen, it.takeBuf)
return it
}
// ScheduleTransferRuns creates transfer runs for a time range [start_time, end_time].
// For each date - or whatever granularity the data source supports - in the
// range, one transfer run is created.
// Note that runs are created per UTC time in the time range.
func (c *Client) ScheduleTransferRuns(ctx context.Context, req *datatransferpb.ScheduleTransferRunsRequest, opts ...gax.CallOption) (*datatransferpb.ScheduleTransferRunsResponse, error) {
ctx = insertMetadata(ctx, c.xGoogMetadata)
opts = append(c.CallOptions.ScheduleTransferRuns[0:len(c.CallOptions.ScheduleTransferRuns):len(c.CallOptions.ScheduleTransferRuns)], opts...)
var resp *datatransferpb.ScheduleTransferRunsResponse
err := gax.Invoke(ctx, func(ctx context.Context, settings gax.CallSettings) error {
var err error
resp, err = c.client.ScheduleTransferRuns(ctx, req, settings.GRPC...)
return err
}, opts...)
if err != nil {
return nil, err
}
return resp, nil
}
// GetTransferRun returns information about the particular transfer run.
func (c *Client) GetTransferRun(ctx context.Context, req *datatransferpb.GetTransferRunRequest, opts ...gax.CallOption) (*datatransferpb.TransferRun, error) {
ctx = insertMetadata(ctx, c.xGoogMetadata)
opts = append(c.CallOptions.GetTransferRun[0:len(c.CallOptions.GetTransferRun):len(c.CallOptions.GetTransferRun)], opts...)
var resp *datatransferpb.TransferRun
err := gax.Invoke(ctx, func(ctx context.Context, settings gax.CallSettings) error {
var err error
resp, err = c.client.GetTransferRun(ctx, req, settings.GRPC...)
return err
}, opts...)
if err != nil {
return nil, err
}
return resp, nil
}
// DeleteTransferRun deletes the specified transfer run.
func (c *Client) DeleteTransferRun(ctx context.Context, req *datatransferpb.DeleteTransferRunRequest, opts ...gax.CallOption) error {
ctx = insertMetadata(ctx, c.xGoogMetadata)
opts = append(c.CallOptions.DeleteTransferRun[0:len(c.CallOptions.DeleteTransferRun):len(c.CallOptions.DeleteTransferRun)], opts...)
err := gax.Invoke(ctx, func(ctx context.Context, settings gax.CallSettings) error {
var err error
_, err = c.client.DeleteTransferRun(ctx, req, settings.GRPC...)
return err
}, opts...)
return err
}
// ListTransferRuns returns information about running and completed jobs.
func (c *Client) ListTransferRuns(ctx context.Context, req *datatransferpb.ListTransferRunsRequest, opts ...gax.CallOption) *TransferRunIterator {
ctx = insertMetadata(ctx, c.xGoogMetadata)
opts = append(c.CallOptions.ListTransferRuns[0:len(c.CallOptions.ListTransferRuns):len(c.CallOptions.ListTransferRuns)], opts...)
it := &TransferRunIterator{}
it.InternalFetch = func(pageSize int, pageToken string) ([]*datatransferpb.TransferRun, string, error) {
var resp *datatransferpb.ListTransferRunsResponse
req.PageToken = pageToken
if pageSize > math.MaxInt32 {
req.PageSize = math.MaxInt32
} else {
req.PageSize = int32(pageSize)
}
err := gax.Invoke(ctx, func(ctx context.Context, settings gax.CallSettings) error {
var err error
resp, err = c.client.ListTransferRuns(ctx, req, settings.GRPC...)
return err
}, opts...)
if err != nil {
return nil, "", err
}
return resp.TransferRuns, resp.NextPageToken, nil
}
fetch := func(pageSize int, pageToken string) (string, error) {
items, nextPageToken, err := it.InternalFetch(pageSize, pageToken)
if err != nil {
return "", err
}
it.items = append(it.items, items...)
return nextPageToken, nil
}
it.pageInfo, it.nextFunc = iterator.NewPageInfo(fetch, it.bufLen, it.takeBuf)
return it
}
// ListTransferLogs returns user facing log messages for the data transfer run.
func (c *Client) ListTransferLogs(ctx context.Context, req *datatransferpb.ListTransferLogsRequest, opts ...gax.CallOption) *TransferMessageIterator {
ctx = insertMetadata(ctx, c.xGoogMetadata)
opts = append(c.CallOptions.ListTransferLogs[0:len(c.CallOptions.ListTransferLogs):len(c.CallOptions.ListTransferLogs)], opts...)
it := &TransferMessageIterator{}
it.InternalFetch = func(pageSize int, pageToken string) ([]*datatransferpb.TransferMessage, string, error) {
var resp *datatransferpb.ListTransferLogsResponse
req.PageToken = pageToken
if pageSize > math.MaxInt32 {
req.PageSize = math.MaxInt32
} else {
req.PageSize = int32(pageSize)
}
err := gax.Invoke(ctx, func(ctx context.Context, settings gax.CallSettings) error {
var err error
resp, err = c.client.ListTransferLogs(ctx, req, settings.GRPC...)
return err
}, opts...)
if err != nil {
return nil, "", err
}
return resp.TransferMessages, resp.NextPageToken, nil
}
fetch := func(pageSize int, pageToken string) (string, error) {
items, nextPageToken, err := it.InternalFetch(pageSize, pageToken)
if err != nil {
return "", err
}
it.items = append(it.items, items...)
return nextPageToken, nil
}
it.pageInfo, it.nextFunc = iterator.NewPageInfo(fetch, it.bufLen, it.takeBuf)
return it
}
// CheckValidCreds returns true if valid credentials exist for the given data source and
// requesting user.
// Some data sources don't support service accounts, so we need to talk to
// them on behalf of the end user. This API just checks whether we have an
// OAuth token for the particular user, which is a prerequisite before the
// user can create a transfer config.
func (c *Client) CheckValidCreds(ctx context.Context, req *datatransferpb.CheckValidCredsRequest, opts ...gax.CallOption) (*datatransferpb.CheckValidCredsResponse, error) {
ctx = insertMetadata(ctx, c.xGoogMetadata)
opts = append(c.CallOptions.CheckValidCreds[0:len(c.CallOptions.CheckValidCreds):len(c.CallOptions.CheckValidCreds)], opts...)
var resp *datatransferpb.CheckValidCredsResponse
err := gax.Invoke(ctx, func(ctx context.Context, settings gax.CallSettings) error {
var err error
resp, err = c.client.CheckValidCreds(ctx, req, settings.GRPC...)
return err
}, opts...)
if err != nil {
return nil, err
}
return resp, nil
}
// DataSourceIterator manages a stream of *datatransferpb.DataSource.
type DataSourceIterator struct {
items []*datatransferpb.DataSource
pageInfo *iterator.PageInfo
nextFunc func() error
// InternalFetch is for use by the Google Cloud Libraries only.
// It is not part of the stable interface of this package.
//
// InternalFetch returns results from a single call to the underlying RPC.
// The number of results is no greater than pageSize.
// If there are no more results, nextPageToken is empty and err is nil.
InternalFetch func(pageSize int, pageToken string) (results []*datatransferpb.DataSource, nextPageToken string, err error)
}
// PageInfo supports pagination. See the google.golang.org/api/iterator package for details.
func (it *DataSourceIterator) PageInfo() *iterator.PageInfo {
return it.pageInfo
}
// Next returns the next result. Its second return value is iterator.Done if there are no more
// results. Once Next returns Done, all subsequent calls will return Done.
func (it *DataSourceIterator) Next() (*datatransferpb.DataSource, error) {
var item *datatransferpb.DataSource
if err := it.nextFunc(); err != nil {
return item, err
}
item = it.items[0]
it.items = it.items[1:]
return item, nil
}
func (it *DataSourceIterator) bufLen() int {
return len(it.items)
}
func (it *DataSourceIterator) takeBuf() interface{} {
b := it.items
it.items = nil
return b
}
// TransferConfigIterator manages a stream of *datatransferpb.TransferConfig.
type TransferConfigIterator struct {
items []*datatransferpb.TransferConfig
pageInfo *iterator.PageInfo
nextFunc func() error
// InternalFetch is for use by the Google Cloud Libraries only.
// It is not part of the stable interface of this package.
//
// InternalFetch returns results from a single call to the underlying RPC.
// The number of results is no greater than pageSize.
// If there are no more results, nextPageToken is empty and err is nil.
InternalFetch func(pageSize int, pageToken string) (results []*datatransferpb.TransferConfig, nextPageToken string, err error)
}
// PageInfo supports pagination. See the google.golang.org/api/iterator package for details.
func (it *TransferConfigIterator) PageInfo() *iterator.PageInfo {
return it.pageInfo
}
// Next returns the next result. Its second return value is iterator.Done if there are no more
// results. Once Next returns Done, all subsequent calls will return Done.
func (it *TransferConfigIterator) Next() (*datatransferpb.TransferConfig, error) {
var item *datatransferpb.TransferConfig
if err := it.nextFunc(); err != nil {
return item, err
}
item = it.items[0]
it.items = it.items[1:]
return item, nil
}
func (it *TransferConfigIterator) bufLen() int {
return len(it.items)
}
func (it *TransferConfigIterator) takeBuf() interface{} {
b := it.items
it.items = nil
return b
}
// TransferMessageIterator manages a stream of *datatransferpb.TransferMessage.
type TransferMessageIterator struct {
items []*datatransferpb.TransferMessage
pageInfo *iterator.PageInfo
nextFunc func() error
// InternalFetch is for use by the Google Cloud Libraries only.
// It is not part of the stable interface of this package.
//
// InternalFetch returns results from a single call to the underlying RPC.
// The number of results is no greater than pageSize.
// If there are no more results, nextPageToken is empty and err is nil.
InternalFetch func(pageSize int, pageToken string) (results []*datatransferpb.TransferMessage, nextPageToken string, err error)
}
// PageInfo supports pagination. See the google.golang.org/api/iterator package for details.
func (it *TransferMessageIterator) PageInfo() *iterator.PageInfo {
return it.pageInfo
}
// Next returns the next result. Its second return value is iterator.Done if there are no more
// results. Once Next returns Done, all subsequent calls will return Done.
func (it *TransferMessageIterator) Next() (*datatransferpb.TransferMessage, error) {
var item *datatransferpb.TransferMessage
if err := it.nextFunc(); err != nil {
return item, err
}
item = it.items[0]
it.items = it.items[1:]
return item, nil
}
func (it *TransferMessageIterator) bufLen() int {
return len(it.items)
}
func (it *TransferMessageIterator) takeBuf() interface{} {
b := it.items
it.items = nil
return b
}
// TransferRunIterator manages a stream of *datatransferpb.TransferRun.
type TransferRunIterator struct {
items []*datatransferpb.TransferRun
pageInfo *iterator.PageInfo
nextFunc func() error
// InternalFetch is for use by the Google Cloud Libraries only.
// It is not part of the stable interface of this package.
//
// InternalFetch returns results from a single call to the underlying RPC.
// The number of results is no greater than pageSize.
// If there are no more results, nextPageToken is empty and err is nil.
InternalFetch func(pageSize int, pageToken string) (results []*datatransferpb.TransferRun, nextPageToken string, err error)
}
// PageInfo supports pagination. See the google.golang.org/api/iterator package for details.
func (it *TransferRunIterator) PageInfo() *iterator.PageInfo {
return it.pageInfo
}
// Next returns the next result. Its second return value is iterator.Done if there are no more
// results. Once Next returns Done, all subsequent calls will return Done.
func (it *TransferRunIterator) Next() (*datatransferpb.TransferRun, error) {
var item *datatransferpb.TransferRun
if err := it.nextFunc(); err != nil {
return item, err
}
item = it.items[0]
it.items = it.items[1:]
return item, nil
}
func (it *TransferRunIterator) bufLen() int {
return len(it.items)
}
func (it *TransferRunIterator) takeBuf() interface{} {
b := it.items
it.items = nil
return b
}
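
Each RPC above copies its per-method CallOptions slice and appends any caller-supplied options, so retry behavior can also be overridden per call. A hypothetical sketch, assuming ctx, c and req are set up as in the surrounding examples:

	resp, err := c.GetDataSource(ctx, req,
		gax.WithRetry(func() gax.Retryer {
			return gax.OnCodes([]codes.Code{codes.Unavailable}, gax.Backoff{
				Initial:    50 * time.Millisecond,
				Max:        1 * time.Second,
				Multiplier: 2,
			})
		}))
	_, _ = resp, err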


@@ -1,288 +0,0 @@
// Copyright 2018 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// https://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

// AUTO-GENERATED CODE. DO NOT EDIT.

package datatransfer_test

import (
    "cloud.google.com/go/bigquery/datatransfer/apiv1"
    "golang.org/x/net/context"
    "google.golang.org/api/iterator"
    datatransferpb "google.golang.org/genproto/googleapis/cloud/bigquery/datatransfer/v1"
)

func ExampleNewClient() {
    ctx := context.Background()
    c, err := datatransfer.NewClient(ctx)
    if err != nil {
        // TODO: Handle error.
    }
    // TODO: Use client.
    _ = c
}

func ExampleClient_GetDataSource() {
    ctx := context.Background()
    c, err := datatransfer.NewClient(ctx)
    if err != nil {
        // TODO: Handle error.
    }

    req := &datatransferpb.GetDataSourceRequest{
        // TODO: Fill request struct fields.
    }
    resp, err := c.GetDataSource(ctx, req)
    if err != nil {
        // TODO: Handle error.
    }
    // TODO: Use resp.
    _ = resp
}

func ExampleClient_ListDataSources() {
    ctx := context.Background()
    c, err := datatransfer.NewClient(ctx)
    if err != nil {
        // TODO: Handle error.
    }

    req := &datatransferpb.ListDataSourcesRequest{
        // TODO: Fill request struct fields.
    }
    it := c.ListDataSources(ctx, req)
    for {
        resp, err := it.Next()
        if err == iterator.Done {
            break
        }
        if err != nil {
            // TODO: Handle error.
        }
        // TODO: Use resp.
        _ = resp
    }
}

func ExampleClient_CreateTransferConfig() {
    ctx := context.Background()
    c, err := datatransfer.NewClient(ctx)
    if err != nil {
        // TODO: Handle error.
    }

    req := &datatransferpb.CreateTransferConfigRequest{
        // TODO: Fill request struct fields.
    }
    resp, err := c.CreateTransferConfig(ctx, req)
    if err != nil {
        // TODO: Handle error.
    }
    // TODO: Use resp.
    _ = resp
}

func ExampleClient_UpdateTransferConfig() {
    ctx := context.Background()
    c, err := datatransfer.NewClient(ctx)
    if err != nil {
        // TODO: Handle error.
    }

    req := &datatransferpb.UpdateTransferConfigRequest{
        // TODO: Fill request struct fields.
    }
    resp, err := c.UpdateTransferConfig(ctx, req)
    if err != nil {
        // TODO: Handle error.
    }
    // TODO: Use resp.
    _ = resp
}

func ExampleClient_DeleteTransferConfig() {
    ctx := context.Background()
    c, err := datatransfer.NewClient(ctx)
    if err != nil {
        // TODO: Handle error.
    }

    req := &datatransferpb.DeleteTransferConfigRequest{
        // TODO: Fill request struct fields.
    }
    err = c.DeleteTransferConfig(ctx, req)
    if err != nil {
        // TODO: Handle error.
    }
}

func ExampleClient_GetTransferConfig() {
    ctx := context.Background()
    c, err := datatransfer.NewClient(ctx)
    if err != nil {
        // TODO: Handle error.
    }

    req := &datatransferpb.GetTransferConfigRequest{
        // TODO: Fill request struct fields.
    }
    resp, err := c.GetTransferConfig(ctx, req)
    if err != nil {
        // TODO: Handle error.
    }
    // TODO: Use resp.
    _ = resp
}

func ExampleClient_ListTransferConfigs() {
    ctx := context.Background()
    c, err := datatransfer.NewClient(ctx)
    if err != nil {
        // TODO: Handle error.
    }

    req := &datatransferpb.ListTransferConfigsRequest{
        // TODO: Fill request struct fields.
    }
    it := c.ListTransferConfigs(ctx, req)
    for {
        resp, err := it.Next()
        if err == iterator.Done {
            break
        }
        if err != nil {
            // TODO: Handle error.
        }
        // TODO: Use resp.
        _ = resp
    }
}

func ExampleClient_ScheduleTransferRuns() {
    ctx := context.Background()
    c, err := datatransfer.NewClient(ctx)
    if err != nil {
        // TODO: Handle error.
    }

    req := &datatransferpb.ScheduleTransferRunsRequest{
        // TODO: Fill request struct fields.
    }
    resp, err := c.ScheduleTransferRuns(ctx, req)
    if err != nil {
        // TODO: Handle error.
    }
    // TODO: Use resp.
    _ = resp
}

func ExampleClient_GetTransferRun() {
    ctx := context.Background()
    c, err := datatransfer.NewClient(ctx)
    if err != nil {
        // TODO: Handle error.
    }

    req := &datatransferpb.GetTransferRunRequest{
        // TODO: Fill request struct fields.
    }
    resp, err := c.GetTransferRun(ctx, req)
    if err != nil {
        // TODO: Handle error.
    }
    // TODO: Use resp.
    _ = resp
}

func ExampleClient_DeleteTransferRun() {
    ctx := context.Background()
    c, err := datatransfer.NewClient(ctx)
    if err != nil {
        // TODO: Handle error.
    }

    req := &datatransferpb.DeleteTransferRunRequest{
        // TODO: Fill request struct fields.
    }
    err = c.DeleteTransferRun(ctx, req)
    if err != nil {
        // TODO: Handle error.
    }
}

func ExampleClient_ListTransferRuns() {
    ctx := context.Background()
    c, err := datatransfer.NewClient(ctx)
    if err != nil {
        // TODO: Handle error.
    }

    req := &datatransferpb.ListTransferRunsRequest{
        // TODO: Fill request struct fields.
    }
    it := c.ListTransferRuns(ctx, req)
    for {
        resp, err := it.Next()
        if err == iterator.Done {
            break
        }
        if err != nil {
            // TODO: Handle error.
        }
        // TODO: Use resp.
        _ = resp
    }
}

func ExampleClient_ListTransferLogs() {
    ctx := context.Background()
    c, err := datatransfer.NewClient(ctx)
    if err != nil {
        // TODO: Handle error.
    }

    req := &datatransferpb.ListTransferLogsRequest{
        // TODO: Fill request struct fields.
    }
    it := c.ListTransferLogs(ctx, req)
    for {
        resp, err := it.Next()
        if err == iterator.Done {
            break
        }
        if err != nil {
            // TODO: Handle error.
        }
        // TODO: Use resp.
        _ = resp
    }
}

func ExampleClient_CheckValidCreds() {
    ctx := context.Background()
    c, err := datatransfer.NewClient(ctx)
    if err != nil {
        // TODO: Handle error.
    }

    req := &datatransferpb.CheckValidCredsRequest{
        // TODO: Fill request struct fields.
    }
    resp, err := c.CheckValidCreds(ctx, req)
    if err != nil {
        // TODO: Handle error.
    }
    // TODO: Use resp.
    _ = resp
}


@@ -1,47 +0,0 @@
// Copyright 2018 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// https://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

// AUTO-GENERATED CODE. DO NOT EDIT.

// Package datatransfer is an auto-generated package for the
// BigQuery Data Transfer API.
//
// NOTE: This package is in alpha. It is not stable, and is likely to change.
//
// Transfers data from partner SaaS applications to Google BigQuery on a
// scheduled, managed basis.
package datatransfer // import "cloud.google.com/go/bigquery/datatransfer/apiv1"

import (
    "golang.org/x/net/context"
    "google.golang.org/grpc/metadata"
)

func insertMetadata(ctx context.Context, mds ...metadata.MD) context.Context {
    out, _ := metadata.FromOutgoingContext(ctx)
    out = out.Copy()
    for _, md := range mds {
        for k, v := range md {
            out[k] = append(out[k], v...)
        }
    }
    return metadata.NewOutgoingContext(ctx, out)
}
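As a stand-alone illustration (not part of the generated file), the merge semantics above append values for a repeated key rather than replacing them. A small sketch using only the grpc metadata package to mirror what insertMetadata does:

package main

import (
    "fmt"

    "golang.org/x/net/context"
    "google.golang.org/grpc/metadata"
)

func main() {
    // Start with one value for the key in the outgoing metadata.
    ctx := metadata.NewOutgoingContext(context.Background(),
        metadata.Pairs("x-goog-api-client", "gl-go/1.10"))

    // Mirror insertMetadata's loop: copy the existing MD, then append.
    out, _ := metadata.FromOutgoingContext(ctx)
    out = out.Copy()
    for k, v := range metadata.Pairs("x-goog-api-client", "gccl/0.1") {
        out[k] = append(out[k], v...)
    }
    ctx = metadata.NewOutgoingContext(ctx, out)

    md, _ := metadata.FromOutgoingContext(ctx)
    fmt.Println(md["x-goog-api-client"]) // [gl-go/1.10 gccl/0.1]
}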
// DefaultAuthScopes reports the default set of authentication scopes to use with this package.
func DefaultAuthScopes() []string {
    return []string{
        "https://www.googleapis.com/auth/cloud-platform",
    }
}
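A hedged sketch of passing these scopes explicitly at client construction via option.WithScopes from google.golang.org/api/option; the generated client already requests DefaultAuthScopes by default, so this is purely illustrative:

package main

import (
    "log"

    "cloud.google.com/go/bigquery/datatransfer/apiv1"
    "golang.org/x/net/context"
    "google.golang.org/api/option"
)

func main() {
    ctx := context.Background()
    // Explicitly request the package's default scopes (illustrative only).
    c, err := datatransfer.NewClient(ctx,
        option.WithScopes(datatransfer.DefaultAuthScopes()...))
    if err != nil {
        log.Fatal(err)
    }
    defer c.Close()
    _ = c // TODO: use the client
}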

File diff suppressed because it is too large.

Some files were not shown because too many files have changed in this diff.