- bigtable:
  - Add RowSampleFilter.
  - gcloud will be updated in a subsequent gcloud release.
- pubsub: Calls to Subscription.Receive need no IAM permissions other than Pub/Sub Subscriber.
- bigtable:
- bigquery:
- datastore:
- dlp:
- firestore:
- iam:
- profiler:
- pubsub:
- redis:
- spanner:
- speech:
- storage:
- bigquery:
- firestore:
- spanner: Add CommitTimestamp, which supports inserting the commit timestamp of a transaction into a column (see the sketch below).
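A minimal sketch of CommitTimestamp, assuming an existing *spanner.Client named client; the table, column, and key names are hypothetical, and the TIMESTAMP column must be created with allow_commit_timestamp=true:

```go
// Write the commit timestamp of this transaction into the UpdateTime column.
// "Events", "ID" and "UpdateTime" are hypothetical names.
m := spanner.InsertOrUpdate("Events",
    []string{"ID", "UpdateTime"},
    []interface{}{"event-1", spanner.CommitTimestamp})
if _, err := client.Apply(ctx, []*spanner.Mutation{m}); err != nil {
    // TODO: Handle error.
}
```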
- bigquery: Support SchemaUpdateOptions for load jobs.
- bigtable:
- datastore: Add OpenCensus tracing.
- firestore:
- logging: Add a WriteTimeout option.
- spanner: Support Batch API (see the sketch after this list).
- storage: Add OpenCensus tracing.
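A minimal sketch of the Batch API, assuming an existing *spanner.Client named client; the query is hypothetical:

```go
// Create a read-only snapshot transaction that can be partitioned.
txn, err := client.BatchReadOnlyTransaction(ctx, spanner.StrongRead())
if err != nil {
    // TODO: Handle error.
}
defer txn.Close()

// Split the query into partitions that could be handed to parallel workers.
partitions, err := txn.PartitionQuery(ctx,
    spanner.Statement{SQL: "SELECT ID FROM Events"},
    spanner.PartitionOptions{})
if err != nil {
    // TODO: Handle error.
}
for _, p := range partitions {
    iter := txn.Execute(ctx, p)
    if err := iter.Do(func(r *spanner.Row) error {
        // TODO: use r.
        return nil
    }); err != nil {
        // TODO: Handle error.
    }
}
```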
- bigquery:
- bigtable:
- datastore:
- firestore:
- logging:
- profiler:
- pubsub:
- storage:
- bigquery:
- firestore: Data provided to DocumentRef.Set with a Merge option can contain Delete sentinels (see the sketch after this list).
- logging: Clients can accept parent resources other than projects.
- pubsub:
- oslogin/apiv1beta: New client for the Cloud OS Login API.
- rpcreplay: A package for recording and replaying gRPC traffic.
- spanner:
- storage: Clarify checksum validation for gzipped files (it is not validated when the file is served uncompressed).
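A minimal sketch of a merged Set that deletes one field while leaving the rest of the document intact, assuming an existing *firestore.Client named client; the collection, document, and field names are hypothetical:

```go
doc := client.Collection("cities").Doc("SF")
_, err := doc.Set(ctx, map[string]interface{}{
    "obsoleteField": firestore.Delete, // the Delete sentinel removes this field
}, firestore.MergeAll)
if err != nil {
    // TODO: Handle error.
}
```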
- firestore BREAKING CHANGES:
  - Remove UpdateMap and UpdateStruct; rename UpdatePaths to Update. Change

    ```go
    docref.UpdateMap(ctx, map[string]interface{}{"a.b": 1})
    ```

    to

    ```go
    docref.Update(ctx, []firestore.Update{{Path: "a.b", Value: 1}})
    ```

    Change

    ```go
    docref.UpdateStruct(ctx, []string{"Field"}, aStruct)
    ```

    to

    ```go
    docref.Update(ctx, []firestore.Update{{Path: "Field", Value: aStruct.Field}})
    ```

  - Rename MergePaths to Merge; require args to be FieldPaths.
  - A value stored as an integer can be read into a floating-point field, and vice versa.
- Other bigquery changes:
  - JobIterator.Next returns *Job; removed JobInfo (BREAKING CHANGE). See the sketch after this list.
  - A Job remembers its last retrieved status.

- storage:
- profiler: Support goroutine and mutex profile types.
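A minimal sketch of the new job listing surface, assuming an existing *bigquery.Client named client and an import of google.golang.org/api/iterator:

```go
it := client.Jobs(ctx)
for {
    job, err := it.Next() // now returns *Job directly
    if err == iterator.Done {
        break
    }
    if err != nil {
        // TODO: Handle error.
    }
    fmt.Println(job.ID(), job.LastStatus().State) // the last retrieved status
}
```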
- firestore: beta release. See the announcement.

- errorreporting: The existing package has been redesigned (a minimal sketch of the new surface follows this list).

- errors: This package has been removed. Use errorreporting.
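A minimal sketch of the redesigned errorreporting package, assuming a hypothetical project ID and service name; doSomething is a stand-in for your own code:

```go
ec, err := errorreporting.NewClient(ctx, "project-id", errorreporting.Config{
    ServiceName: "myservice", // hypothetical
})
if err != nil {
    // TODO: Handle error.
}
defer ec.Close()

if err := doSomething(); err != nil {
    ec.Report(errorreporting.Entry{Error: err}) // reported asynchronously
}
```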
- bigquery BREAKING CHANGES:
  - Table.Create takes TableMetadata as a second argument, instead of options.
  - Dataset.Create takes DatasetMetadata as a second argument.
  - DatasetMetadata field ID renamed to FullID.
  - TableMetadata field ID renamed to FullID.
- Other bigquery changes:
  - The client appends a random suffix to a provided job ID if you set AddJobIDSuffix to true in a job config (see the sketch after this list).

- vision, language, speech: clients are now stable
- monitoring: client is now beta

- profiler:
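A minimal sketch of AddJobIDSuffix, assuming an existing *bigquery.Client named client; the query and job ID are hypothetical:

```go
q := client.Query("SELECT 17")
q.JobID = "my-job"      // base job ID
q.AddJobIDSuffix = true // the client appends a random suffix
job, err := q.Run(ctx)
if err != nil {
    // TODO: Handle error.
}
// TODO: use job.
```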
- bigquery: UseLegacySQL options for CreateTable and QueryConfig. Use these options to continue using Legacy SQL after the client switches its default to Standard SQL (see the sketch after this list).

- bigquery: Support for updating dataset labels.

- bigquery: Set DatasetIterator.ProjectID to list datasets in a project other than the client's. DatasetsInProject is no longer needed and is deprecated.

- bigtable: Fail ListInstances when any zones fail.

- spanner: support decoding of slices of basic types (e.g. []string, []int64, etc.)

- logging/logadmin: UpdateSink no longer creates a sink if it is missing (actually a change to the underlying service, not the client)

- profiler: Service and ServiceVersion replace Target in Config.
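A minimal sketch of opting back in to Legacy SQL for a query, assuming an existing *bigquery.Client named client; the query text is hypothetical:

```go
q := client.Query("SELECT 17")
q.UseLegacySQL = true // keep Legacy SQL after the default becomes Standard SQL
job, err := q.Run(ctx)
if err != nil {
    // TODO: Handle error.
}
// TODO: use job.
```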
- pubsub: Subscription.Receive now uses streaming pull (a minimal Receive sketch follows this list).

- pubsub: add Client.TopicInProject to access topics in a different project than the client.

- errors: renamed errorreporting. The errors package will be removed shortly.

- datastore: improved retry behavior.

- bigquery: support updates to dataset metadata, with etags.

- bigquery: add etag support to Table.Update (BREAKING: etag argument added).

- bigquery: generate all job IDs on the client.

- storage: support bucket lifecycle configurations.
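A minimal sketch of Subscription.Receive, assuming an existing *pubsub.Client named client; the subscription name is hypothetical:

```go
sub := client.Subscription("my-sub")
err := sub.Receive(ctx, func(ctx context.Context, m *pubsub.Message) {
    // TODO: process m.
    m.Ack()
})
if err != nil {
    // TODO: Handle error.
}
```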
- Clients for spanner, pubsub and video are now in beta.

- New client for DLP.

- spanner: performance and testing improvements.

- storage: requester-pays buckets are supported.

- storage, profiler, bigtable, bigquery: bug fixes and other minor improvements.

- pubsub: bug fixes and other minor improvements
- pubsub: Subscription.ModifyPushConfig replaced with Subscription.Update.

- pubsub: Subscription.Receive now runs concurrently for higher throughput.

- vision: cloud.google.com/go/vision is deprecated. Use cloud.google.com/go/vision/apiv1 instead.

- translation: now stable.

- trace: several changes to the surface. See the link below.
- pubsub: Replace

    ```go
    sub.ModifyPushConfig(ctx, pubsub.PushConfig{Endpoint: "https://example.com/push"})
    ```

  with

    ```go
    sub.Update(ctx, pubsub.SubscriptionConfigToUpdate{
        PushConfig: &pubsub.PushConfig{Endpoint: "https://example.com/push"},
    })
    ```

- trace: trace.GRPCServerInterceptor will be provided from *trace.Client. Given an initialized *trace.Client named tc, instead of

    ```go
    s := grpc.NewServer(grpc.UnaryInterceptor(trace.GRPCServerInterceptor(tc)))
    ```

  write

    ```go
    s := grpc.NewServer(grpc.UnaryInterceptor(tc.GRPCServerInterceptor()))
    ```

- trace: trace.GRPCClientInterceptor will also be provided from *trace.Client. Instead of

    ```go
    conn, err := grpc.Dial(srv.Addr, grpc.WithUnaryInterceptor(trace.GRPCClientInterceptor()))
    ```

  write

    ```go
    conn, err := grpc.Dial(srv.Addr, grpc.WithUnaryInterceptor(tc.GRPCClientInterceptor()))
    ```

- trace: We removed the deprecated trace.EnableGRPCTracing. Use the gRPC interceptor as a dial option as shown below when initializing Cloud package clients:

    ```go
    c, err := pubsub.NewClient(ctx, "project-id",
        option.WithGRPCDialOption(grpc.WithUnaryInterceptor(tc.GRPCClientInterceptor())))
    if err != nil {
        ...
    }
    ```
- Beta release of BigQuery, DataStore, Logging and Storage. See the blog post.

- bigquery: struct support. Read a row directly into a struct with RowIterator.Next, and upload a row directly from a struct with Uploader.Put. You can also use field tags. See the [package documentation][cloud-bigquery-ref] for details (a minimal sketch follows).
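A minimal sketch of struct support, assuming an existing *bigquery.Table named table whose schema matches the hypothetical Score type, and an import of google.golang.org/api/iterator:

```go
type Score struct {
    Name string
    Num  int
}

u := table.Uploader()
if err := u.Put(ctx, Score{Name: "pat", Num: 31}); err != nil {
    // TODO: Handle error.
}

it := table.Read(ctx)
for {
    var s Score
    err := it.Next(&s)
    if err == iterator.Done {
        break
    }
    if err != nil {
        // TODO: Handle error.
    }
    // TODO: use s.
}
```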
- The ValueList type was removed. It is no longer necessary. Instead of

    ```go
    var v ValueList
    ... it.Next(&v) ...
    ```

  use

    ```go
    var v []Value
    ... it.Next(&v) ...
    ```

- Previously, repeatedly calling RowIterator.Next on the same []Value or ValueList would append to the slice. Now each call resets the size to zero first.

- Schema inference will infer the SQL type BYTES for a struct field of type []byte. Previously it inferred STRING.

- The types uint, uint64 and uintptr are no longer supported in schema inference. BigQuery's integer type is INT64, and those types may hold values that are not correctly represented in a 64-bit signed integer.

- The SQL types DATE, TIME and DATETIME are now supported. They correspond to the Date, Time and DateTime types in the new cloud.google.com/go/civil package (a minimal sketch follows this list).
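A minimal sketch of the civil types; the values are arbitrary, and imports of cloud.google.com/go/civil and time are assumed:

```go
d := civil.Date{Year: 2016, Month: time.December, Day: 5}
t := civil.Time{Hour: 12, Minute: 30}
dt := civil.DateTime{Date: d, Time: t}
// TODO: use d, t and dt as values for DATE, TIME and DATETIME columns.
```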
- datastore: nested Go structs are now encoded as Entity values, so you may have twice-nested slices, e.g.

    ```go
    type State struct {
        Cities []struct {
            Populations []int
        }
    }
    ```

  See the announcement for more details.

- datastore: To specify a namespace in a Query, use the Query.Namespace method:

    ```go
    q := datastore.NewQuery("Kind").Namespace("ns")
    ```

  All the fields of Key are exported, so you can construct any Key with a struct literal:

    ```go
    k := &Key{Kind: "Kind", ID: 37, Namespace: "ns"}
    ```
- NewIncompleteKey has been removed, replaced by IncompleteKey. Replace

    ```go
    NewIncompleteKey(ctx, kind, parent)
    ```

  with

    ```go
    IncompleteKey(kind, parent)
    ```

  and if you do use namespaces, make sure you set the namespace on the returned key.

- NewKey has been removed, replaced by NameKey and IDKey. Replace

    ```go
    NewKey(ctx, kind, name, 0, parent)
    NewKey(ctx, kind, "", id, parent)
    ```

  with

    ```go
    NameKey(kind, name, parent)
    IDKey(kind, id, parent)
    ```

  and if you do use namespaces, make sure you set the namespace on the returned key.

- The Done variable has been removed. Replace datastore.Done with iterator.Done, from the package google.golang.org/api/iterator.

- The Client.Close method will have a return type of error. It will return the result of closing the underlying gRPC connection.

- bigquery: NewGCSReference is now a function, not a method on Client.
- bigquery: Table.LoaderFrom now accepts a ReaderSource, enabling loading data into a table from a file or any io.Reader (a minimal sketch follows).
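A minimal sketch of loading from an io.Reader, assuming an existing *bigquery.Table named table and an import of os; the file name is hypothetical:

```go
f, err := os.Open("data.csv")
if err != nil {
    // TODO: Handle error.
}
rs := bigquery.NewReaderSource(f)
rs.SourceFormat = bigquery.CSV
job, err := table.LoaderFrom(rs).Run(ctx)
if err != nil {
    // TODO: Handle error.
}
// TODO: wait on job if desired.
```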
- Client.Table and Client.OpenTable have been removed. Replace

    ```go
    client.OpenTable("project", "dataset", "table")
    ```

  with

    ```go
    client.DatasetInProject("project", "dataset").Table("table")
    ```

- Client.CreateTable has been removed. Replace

    ```go
    client.CreateTable(ctx, "project", "dataset", "table")
    ```

  with

    ```go
    client.DatasetInProject("project", "dataset").Table("table").Create(ctx)
    ```

- Dataset.ListTables has been replaced with Dataset.Tables. Replace

    ```go
    tables, err := ds.ListTables(ctx)
    ```

  with

    ```go
    it := ds.Tables(ctx)
    for {
        table, err := it.Next()
        if err == iterator.Done {
            break
        }
        if err != nil {
            // TODO: Handle error.
        }
        // TODO: use table.
    }
    ```
- Client.Read has been replaced with Job.Read, Table.Read and Query.Read. Replace

    ```go
    it, err := client.Read(ctx, job)
    ```

  with

    ```go
    it, err := job.Read(ctx)
    ```

  and similarly for reading from tables or queries.

- The iterator returned from the Read methods is now named RowIterator. Its behavior is closer to the other iterators in these libraries. It no longer supports the Schema method; see the next item. Replace

    ```go
    for it.Next(ctx) {
        var vals ValueList
        if err := it.Get(&vals); err != nil {
            // TODO: Handle error.
        }
        // TODO: use vals.
    }
    if err := it.Err(); err != nil {
        // TODO: Handle error.
    }
    ```

  with

    ```go
    for {
        var vals ValueList
        err := it.Next(&vals)
        if err == iterator.Done {
            break
        }
        if err != nil {
            // TODO: Handle error.
        }
        // TODO: use vals.
    }
    ```

  Instead of the RecordsPerRequest(n) option, write

    ```go
    it.PageInfo().MaxSize = n
    ```

  Instead of the StartIndex(i) option, write

    ```go
    it.StartIndex = i
    ```
- ValueLoader.Load now takes a Schema in addition to a slice of Values. Replace

    ```go
    func (vl *myValueLoader) Load(v []bigquery.Value)
    ```

  with

    ```go
    func (vl *myValueLoader) Load(v []bigquery.Value, s bigquery.Schema)
    ```

- Table.Patch is replaced by Table.Update. Replace

    ```go
    p := table.Patch()
    p.Description("new description")
    metadata, err := p.Apply(ctx)
    ```

  with

    ```go
    metadata, err := table.Update(ctx, bigquery.TableMetadataToUpdate{
        Description: "new description",
    })
    ```
- Client.Copy is replaced by separate methods for each of its four functions. All options have been replaced by struct fields.

  - To load data from Google Cloud Storage into a table, use Table.LoaderFrom.

    Replace

    ```go
    client.Copy(ctx, table, gcsRef)
    ```

    with

    ```go
    table.LoaderFrom(gcsRef).Run(ctx)
    ```

    Instead of passing options to Copy, set fields on the Loader:

    ```go
    loader := table.LoaderFrom(gcsRef)
    loader.WriteDisposition = bigquery.WriteTruncate
    ```

  - To extract data from a table into Google Cloud Storage, use Table.ExtractorTo. Set fields on the returned Extractor instead of passing options.

    Replace

    ```go
    client.Copy(ctx, gcsRef, table)
    ```

    with

    ```go
    table.ExtractorTo(gcsRef).Run(ctx)
    ```

  - To copy data into a table from one or more other tables, use Table.CopierFrom. Set fields on the returned Copier instead of passing options.

    Replace

    ```go
    client.Copy(ctx, dstTable, srcTable)
    ```

    with

    ```go
    dstTable.CopierFrom(srcTable).Run(ctx)
    ```

  - To start a query job, create a Query and call its Run method. Set fields on the query instead of passing options.

    Replace

    ```go
    client.Copy(ctx, table, query)
    ```

    with

    ```go
    query.Run(ctx)
    ```
- Table.NewUploader has been renamed to Table.Uploader. Instead of options, configure an Uploader by setting its fields. Replace

    ```go
    u := table.NewUploader(bigquery.UploadIgnoreUnknownValues())
    ```

  with

    ```go
    u := table.Uploader()
    u.IgnoreUnknownValues = true
    ```
- pubsub: remove pubsub.Done. Use iterator.Done instead, where iterator is the package google.golang.org/api/iterator.
- storage:

  - AdminClient replaced by methods on Client. Replace

    ```go
    adminClient.CreateBucket(ctx, bucketName, attrs)
    ```

    with

    ```go
    client.Bucket(bucketName).Create(ctx, projectID, attrs)
    ```

  - BucketHandle.List replaced by BucketHandle.Objects. Replace

    ```go
    for query != nil {
        objs, err := bucket.List(d.ctx, query)
        if err != nil { ... }
        query = objs.Next
        for _, obj := range objs.Results {
            fmt.Println(obj)
        }
    }
    ```

    with

    ```go
    iter := bucket.Objects(d.ctx, query)
    for {
        obj, err := iter.Next()
        if err == iterator.Done {
            break
        }
        if err != nil { ... }
        fmt.Println(obj)
    }
    ```

    (The iterator package is at google.golang.org/api/iterator.)
  - Replace Query.Cursor with ObjectIterator.PageInfo().Token.

  - Replace Query.MaxResults with ObjectIterator.PageInfo().MaxSize.
  - ObjectHandle.CopyTo replaced by ObjectHandle.CopierFrom. Replace

    ```go
    attrs, err := src.CopyTo(ctx, dst, nil)
    ```

    with

    ```go
    attrs, err := dst.CopierFrom(src).Run(ctx)
    ```

    Replace

    ```go
    attrs, err := src.CopyTo(ctx, dst, &storage.ObjectAttrs{ContentType: "text/html"})
    ```

    with

    ```go
    c := dst.CopierFrom(src)
    c.ContentType = "text/html"
    attrs, err := c.Run(ctx)
    ```
  - ObjectHandle.ComposeFrom replaced by ObjectHandle.ComposerFrom. Replace

    ```go
    attrs, err := dst.ComposeFrom(ctx, []*storage.ObjectHandle{src1, src2}, nil)
    ```

    with

    ```go
    attrs, err := dst.ComposerFrom(src1, src2).Run(ctx)
    ```
  - ObjectHandle.Update's ObjectAttrs argument replaced by ObjectAttrsToUpdate. Replace

    ```go
    attrs, err := obj.Update(ctx, &storage.ObjectAttrs{ContentType: "text/html"})
    ```

    with

    ```go
    attrs, err := obj.Update(ctx, storage.ObjectAttrsToUpdate{ContentType: "text/html"})
    ```
  - ObjectHandle.WithConditions replaced by ObjectHandle.If. Replace

    ```go
    obj.WithConditions(storage.Generation(gen), storage.IfMetaGenerationMatch(mgen))
    ```

    with

    ```go
    obj.Generation(gen).If(storage.Conditions{MetagenerationMatch: mgen})
    ```

    Replace

    ```go
    obj.WithConditions(storage.IfGenerationMatch(0))
    ```

    with

    ```go
    obj.If(storage.Conditions{DoesNotExist: true})
    ```

  - storage.Done replaced by iterator.Done (from package google.golang.org/api/iterator).
- Package preview/logging deleted. Use logging instead.

- Logging client replaced with preview version (see below).

- New clients for some of Google's Machine Learning APIs: Vision, Speech, and Natural Language.

- Preview version of a new [Stackdriver Logging][cloud-logging] client in cloud.google.com/go/preview/logging. This client uses gRPC as its transport layer, and supports log reading, sinks and metrics. It will replace the current client at cloud.google.com/go/logging shortly.