Published by garrettjonesgoogle over 7 years ago
`google-cloud-spanner` has been added. Find out more about Cloud Spanner at https://cloudplatform.googleblog.com/2017/02/introducing-Cloud-Spanner-a-global-database-service-for-mission-critical-applications.html.
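For a quick feel for the new client, here is a minimal sketch of running a query. It assumes an existing instance and database (the `my-instance` and `my-database` names are placeholders) and reflects the general shape of the Spanner API rather than this exact release:

```java
import com.google.cloud.spanner.DatabaseClient;
import com.google.cloud.spanner.DatabaseId;
import com.google.cloud.spanner.ResultSet;
import com.google.cloud.spanner.Spanner;
import com.google.cloud.spanner.SpannerOptions;
import com.google.cloud.spanner.Statement;

SpannerOptions options = SpannerOptions.newBuilder().build();
Spanner spanner = options.getService();
try {
  // "my-instance" and "my-database" are placeholders for existing resources
  DatabaseClient dbClient = spanner.getDatabaseClient(
      DatabaseId.of(options.getProjectId(), "my-instance", "my-database"));
  try (ResultSet resultSet =
      dbClient.singleUse().executeQuery(Statement.of("SELECT 1"))) {
    while (resultSet.next()) {
      System.out.println(resultSet.getLong(0));
    }
  }
} finally {
  spanner.close();
}
```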
Published by garrettjonesgoogle over 7 years ago
The Pub/Sub client has been completely rewritten to enable high throughput. The handwritten layer on top of the SPI layer has been deprecated, and two handwritten classes have been added in the SPI layer, `Publisher` and `Subscriber`, for publishing and subscribing. (Note for those concerned: synchronous pull is still possible in `SubscriberClient`.) The handwritten layer was deprecated because the SPI layer can be kept up to date more easily with new service features. Since the change is so disruptive, we have retained the deprecated classes, but moved them under `com.google.cloud.pubsub.deprecated`. They will be removed before the Pub/Sub client goes to GA.
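A rough sketch of publishing through the new `Publisher`. The package location, builder entry point, and future type follow later releases of the client and may differ slightly in this version; the project and topic names are placeholders:

```java
import com.google.api.core.ApiFuture;
import com.google.cloud.pubsub.v1.Publisher; // package location may differ in this release
import com.google.protobuf.ByteString;
import com.google.pubsub.v1.PubsubMessage;
import com.google.pubsub.v1.TopicName;

Publisher publisher = Publisher.newBuilder(TopicName.of("my-project", "my-topic")).build();
try {
  PubsubMessage message = PubsubMessage.newBuilder()
      .setData(ByteString.copyFromUtf8("hello"))
      .build();
  // publish() returns a future that resolves to the server-assigned message id
  ApiFuture<String> messageId = publisher.publish(message);
  System.out.println("Published message " + messageId.get());
} finally {
  publisher.shutdown();
}
```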
Published by garrettjonesgoogle over 7 years ago
Published by garrettjonesgoogle almost 8 years ago
The dependency on grpc was bumped from 1.0.1 to 1.0.3. (#1504)
Published by garrettjonesgoogle almost 8 years ago
In this release, clients for four APIs are moving to beta. Their versions will have “-beta” on the end to call out that fact. All other clients are still Alpha.
Several tests were flaky on AppVeyor, so improvements were made to make them more reliable.
Despite that, integration tests were still failing on AppVeyor, so they have been disabled until they can all run reliably - tracking issue: #1429
Published by garrettjonesgoogle almost 8 years ago
- Classes ending in `Api` have been renamed so that they end in `Client` (#1417)
- Added an `OperationFuture` type, which enables an easier way to get the final result of a long-running operation (#1419)
Published by mziccard almost 8 years ago
`AuthCredentials` classes have been deleted. Use classes from google-auth-library-java for authentication. `google-cloud` will still try to infer credentials from the environment when no credentials are provided:
Storage storage = StorageOptions.getDefaultInstance().getService();
You can also explicitly provide credentials. For instance, to use a JSON credentials file try the following code:
Storage storage = StorageOptions.newBuilder()
    .setCredentials(ServiceAccountCredentials.fromStream(new FileInputStream("/path/to/my/key.json")))
    .build()
    .getService();
For more details see the Authentication section of the main README.
- `pullAsync` methods now use `returnImmediately=false` and are not subject to client-side timeouts (#1387)
- Added the `TranslateOption.model(String)` option, which allows setting the language translation model used to translate text. This option is only available to whitelisted users (#1393); see the sketch after this list
- Changed `BaseWriteChannel`'s `position` to `long` to fix integer overflow on big files (#1390)
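A hedged sketch of passing the new model option. The `"nmt"` model id and the surrounding call are illustrative assumptions, and the option only takes effect for whitelisted projects:

```java
import com.google.cloud.translate.Translate;
import com.google.cloud.translate.Translate.TranslateOption;
import com.google.cloud.translate.TranslateOptions;
import com.google.cloud.translate.Translation;

Translate translate = TranslateOptions.getDefaultInstance().getService();
Translation translation = translate.translate(
    "Hola Mundo",
    TranslateOption.sourceLanguage("es"),
    TranslateOption.targetLanguage("en"),
    TranslateOption.model("nmt")); // "nmt" is an assumed model id
System.out.println(translation.getTranslatedText());
```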
Published by mziccard almost 8 years ago
- `google-cloud-core` module: get rid of duplicate classes (#1365)
- Deprecated `DatastoreOptions.Builder`'s `namespace(String)` setter in favor of `setNamespace(String)`; undid the deprecation of `Transaction.Response.getGeneratedKeys()` (#1358)
- `javax` package in the `google-cloud-nio` shaded jar (#1362)
Published by mziccard almost 8 years ago
- Getters and setters with `get` and `set` prefix have been added to all classes/builders. Older getters/setters (without `get`/`set` prefix) have been deprecated
- `builder()` methods have been deprecated, you should use `newBuilder()` instead
- `defaultInstance()` factory methods have been deprecated, you should use `getDefaultInstance()` instead

See the following example of using `google-cloud-storage` after the naming changes:
Storage storage = StorageOptions.getDefaultInstance().getService();
BlobId blobId = BlobId.of("bucket", "blob_name");
Blob blob = storage.get(blobId);
if (blob != null) {
byte[] prevContent = blob.getContent();
System.out.println(new String(prevContent, UTF_8));
WritableByteChannel channel = blob.writer();
channel.write(ByteBuffer.wrap("Updated content".getBytes(UTF_8)));
channel.close();
}
- Updated `LocalDatastoreHelper` for a more recent version of the Datastore emulator installed via `gcloud` (#1303)
- Added a `reset()` method to `LocalDatastoreHelper` to clear the status of the Datastore emulator (#1293); see the sketch after this list
- If the `PUBSUB_EMULATOR_HOST` environment variable is set, the PubSub client uses it to locate the PubSub emulator (#1317)
- Fixed `LocalDatastoreHelper` to properly cache downloaded copies of the Datastore emulator (#1302)
- Fixed `Storage.signUrl` to support blob names containing `/` characters (#1346)
- Fixed `Storage.reader` to read gzip blobs in compressed chunks. This prevents `ReadChannel` from trying (and failing) to uncompress gzipped chunks (#1301)
- `google-cloud-nio` shaded jar (#1327)
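A minimal sketch of a test using `LocalDatastoreHelper` with the new `reset()` method. The exact accessor names (`getOptions()` vs. `options()`) and the `stop()` signature may differ in this version:

```java
import com.google.cloud.datastore.Datastore;
import com.google.cloud.datastore.testing.LocalDatastoreHelper;

LocalDatastoreHelper helper = LocalDatastoreHelper.create();
helper.start();                                       // start the local emulator
Datastore datastore = helper.getOptions().getService();
// ... exercise code against the emulator ...
helper.reset();                                       // clear the emulator's state between tests
helper.stop();                                        // shut the emulator down
```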
Published by mziccard about 8 years ago
- Added an `of(String)` factory method to `DatasetInfo` (#1275). Example:
bigquery.create(DatasetInfo.of("dataset-name"));
- `google-cloud` now depends on protobuf 3.0.0 and grpc 1.0.1 (#1273)

// Example of replacing a subscription policy
Policy policy = pubsub.getSubscriptionPolicy(subscriptionName);
Policy updatedPolicy = policy.toBuilder()
.addIdentity(Role.viewer(), Identity.allAuthenticatedUsers())
.build();
updatedPolicy = pubsub.replaceSubscriptionPolicy(subscriptionName, updatedPolicy);
// Example of asynchronously replacing a topic policy
Policy policy = pubsub.getTopicPolicy(topicName);
Policy updatedPolicy = policy.toBuilder()
.addIdentity(Role.viewer(), Identity.allAuthenticatedUsers())
.build();
Future<Policy> future = pubsub.replaceTopicPolicyAsync(topicName, updatedPolicy);
// ...
updatedPolicy = future.get();
// Example of updating the ACL for a blob
BlobId blobId = BlobId.of(bucketName, blobName, blobGeneration);
Acl acl = storage.updateAcl(blobId, Acl.of(User.ofAllAuthenticatedUsers(), Role.OWNER));
// Example of listing the ACL entries for a bucket
List<Acl> acls = storage.listAcls(bucketName);
for (Acl acl : acls) {
// do something with ACL entry
}
Key key = ...;
String base64Key = ...;
byte[] content = {0xD, 0xE, 0xA, 0xD};
BlobInfo blobInfo = BlobInfo.builder(bucketName, blobName).build();
// Example of creating a blob with a customer-supplied encryption key (as Key object)
storage.create(blobInfo, content, Storage.BlobTargetOption.encryptionKey(key));
// Example of reading a blob with a customer-supplied decryption key (as base64 String)
byte[] readBytes =
storage.readAllBytes(bucketName, blobName, Storage.BlobSourceOption.decryptionKey(base64Key));
- `BigQueryOptions` (#1217)
- `RowToInsert` can now be created using `Map<String, ? extends Object>` rather than `Map<String, Object>` (#1259); see the sketch after this list
- Retry `ABORTED` Datastore commits only when the commit was `NON_TRANSACTIONAL` (#1235)
- Removed the `MetricInfo` parameter from `Metric.updateAsync()` (#1221)
- Removed the `SinkInfo` parameter from `Sink.updateAsync()` (#1222)
- `Logging.deleteSink` now returns `false` on `NOT_FOUND` (#1222)
- Writes performed through `WriteChannel` are retried when they fail with a retryable error (#1233)
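A hedged sketch of streaming a row built from a `Map` (pre-rename builder names assumed; `bigquery` is an existing BigQuery service object and the table and field names are placeholders):

```java
import com.google.cloud.bigquery.InsertAllRequest;
import com.google.cloud.bigquery.InsertAllResponse;
import com.google.cloud.bigquery.TableId;
import java.util.HashMap;
import java.util.Map;

TableId tableId = TableId.of("dataset", "table");
Map<String, String> rowContent = new HashMap<>();   // a Map<String, ? extends Object> is accepted
rowContent.put("stringField", "value");
InsertAllResponse response = bigquery.insertAll(
    InsertAllRequest.builder(tableId).addRow(rowContent).build());
if (response.hasErrors()) {
  // inspect response.insertErrors()
}
```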
Published by mziccard about 8 years ago
`gcloud-java` renamed to `google-cloud`
`gcloud-java` has been deprecated and renamed to `google-cloud`.
If you are using Maven, add this to your pom.xml file
<dependency>
<groupId>com.google.cloud</groupId>
<artifactId>google-cloud</artifactId>
<version>0.3.0</version>
</dependency>
If you are using Gradle, add this to your dependencies
compile 'com.google.cloud:google-cloud:0.3.0'
If you are using SBT, add this to your dependencies
libraryDependencies += "com.google.cloud" % "google-cloud" % "0.3.0"
`gcloud-java-<service>` renamed to `google-cloud-<service>`
Service-specific artifacts have also been renamed from `gcloud-java-<service>` to `google-cloud-<service>`. See the following for examples of adding `google-cloud-datastore` as a dependency:
If you are using Maven, add this to your pom.xml file
<dependency>
<groupId>com.google.cloud</groupId>
<artifactId>google-cloud-datastore</artifactId>
<version>0.3.0</version>
</dependency>
If you are using Gradle, add this to your dependencies
compile 'com.google.cloud:google-cloud-datastore:0.3.0'
If you are using SBT, add this to your dependencies
libraryDependencies += "com.google.cloud" % "google-cloud-datastore" % "0.3.0"
The `GCLOUD_PROJECT` environment variable is now deprecated; use `GOOGLE_CLOUD_PROJECT` to set your default project id.
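As a quick sanity check, a sketch of reading back the resolved project id; it assumes `GOOGLE_CLOUD_PROJECT` is exported in the environment and uses the pre-rename accessor names of this release:

```java
import com.google.cloud.storage.StorageOptions;

// With GOOGLE_CLOUD_PROJECT set, the default options pick up that project id.
String projectId = StorageOptions.defaultInstance().projectId();
System.out.println("Default project: " + projectId);
```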
Published by mziccard about 8 years ago
- `gcloud-java-datastore` now uses Datastore v1 (#1169)
- `gcloud-java-translate`, a new client library to interact with Google Translate, is released and is in alpha. See the docs for more information and the API Documentation for `gcloud-java-translate` javadoc.

import com.google.cloud.translate.Detection;
import com.google.cloud.translate.Translate;
import com.google.cloud.translate.Translate.TranslateOption;
import com.google.cloud.translate.TranslateOptions;
import com.google.cloud.translate.Translation;
Translate translate = TranslateOptions.defaultInstance().service();
Detection detection = translate.detect("Hola");
String detectedLanguage = detection.language();
Translation translation = translate.translate(
"World",
TranslateOption.sourceLanguage("en"),
TranslateOption.targetLanguage(detectedLanguage));
System.out.printf("Hola %s%n", translation.translatedText());
- `SocketException` and "insufficient data written" `IOException` are now retried (#1187)
- Fixed an issue when `gcloud-java-nio` is in the classpath and no credentials are available (#1189)
- Renamed `CloudStorageFileSystemProvider.setGCloudOptions` to `CloudStorageFileSystemProvider.setStorageOptions` (#1189)
Published by mziccard about 8 years ago
- `DeprecationStatus` timestamps are removed from `DeprecationStatus.Builder`. Getters are still available in `DeprecationStatus` for legacy support (#1127).
- Updated `StreamingBuffer` to allow `oldestEntryTime` to be `null` (#1141).
- Added `useLegacySql` to `QueryRequest` and `QueryJobConfiguration` (#1142); see the sketch after this list.
- Use `getNumber()` instead of `ordinal()` to get `DatastoreException`'s error code (#1140).
- Use `DatastoreOptions` to perform service requests (#1144).
- Set the `gcloud-java` user agent in `gcloud-java-logging`, as done for other modules (#1147).
- Switched from `pubsub-experimental.googleapis.com` to `pubsub.googleapis.com` (#1149).
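A sketch of opting out of legacy SQL on a query job; the exact builder method shape for the new `useLegacySql` flag is an assumption, and pre-rename builder names are used:

```java
import com.google.cloud.bigquery.QueryJobConfiguration;

QueryJobConfiguration queryConfig = QueryJobConfiguration.builder(
        "SELECT word FROM `bigquery-public-data.samples.shakespeare` LIMIT 10")
    .useLegacySql(false)   // run the query with standard SQL
    .build();
```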
Published by mziccard about 8 years ago
TableId tableId = TableId.of(datasetName, tableName);
TimePartitioning partitioning = TimePartitioning.of(Type.DAY);
// You can also set the expiration
// TimePartitioning partitioning = TimePartitioning.of(Type.DAY, 2592000000L);
StandardTableDefinition tableDefinition = StandardTableDefinition.builder()
.schema(tableSchema)
.timePartitioning(partitioning)
.build();
Table createdTable = bigquery.create(TableInfo.of(tableId, tableDefinition));
- `gcloud-java-logging`, a new client library to interact with Stackdriver Logging, is released and is in alpha. See the docs for more information.
- `gcloud-java-logging` uses gRPC as transport layer, which is not (yet) supported by App Engine Standard. `gcloud-java-logging` will work on App Engine Flexible.
- See the API Documentation for `gcloud-java-logging` javadoc.

import com.google.cloud.MonitoredResource;
import com.google.cloud.Page;
import com.google.cloud.logging.LogEntry;
import com.google.cloud.logging.Logging;
import com.google.cloud.logging.Logging.EntryListOption;
import com.google.cloud.logging.LoggingOptions;
import com.google.cloud.logging.Payload.StringPayload;
import java.util.Collections;
import java.util.Iterator;
LoggingOptions options = LoggingOptions.defaultInstance();
try(Logging logging = options.service()) {
LogEntry firstEntry = LogEntry.builder(StringPayload.of("message"))
.logName("test-log")
.resource(MonitoredResource.builder("global")
.addLabel("project_id", options.projectId())
.build())
.build();
logging.write(Collections.singleton(firstEntry));
Page<LogEntry> entries = logging.listLogEntries(
EntryListOption.filter("logName=projects/" + options.projectId() + "/logs/test-log"));
Iterator<LogEntry> entryIterator = entries.iterateAll();
while (entryIterator.hasNext()) {
System.out.println(entryIterator.next());
}
}
The following snippet, instead, shows how to use a `java.util.logging.Logger` to write log entries to Stackdriver Logging. The snippet installs a Stackdriver Logging handler using `LoggingHandler.addHandler(Logger, LoggingHandler)`. Notice that this could also be done through the `logging.properties` file, adding the following line:
com.google.cloud.examples.logging.snippets.AddLoggingHandler.handlers=com.google.cloud.logging.LoggingHandler
The complete code can be found on AddLoggingHandler.java.
import com.google.cloud.logging.LoggingHandler;
import java.util.logging.Logger;
Logger logger = Logger.getLogger(AddLoggingHandler.class.getName());
LoggingHandler.addHandler(logger, new LoggingHandler());
logger.warning("test warning");
Published by mziccard over 8 years ago
- `gcloud-java-nio`, a new client library that allows interacting with Google Cloud Storage using Java's NIO API, is released and is in alpha. Not all NIO features have been implemented yet; see the docs for more information.
- The simplest way to use `gcloud-java-nio` is with `Paths` and `Files`:

Path path = Paths.get(URI.create("gs://bucket/lolcat.csv"));
List<String> lines = Files.readAllLines(path, StandardCharsets.UTF_8);
`InputStream` and `OutputStream` can also be used for streaming:
Path path = Paths.get(URI.create("gs://bucket/lolcat.csv"));
try (InputStream input = Files.newInputStream(path)) {
// use input stream
}
To configure a bucket per-environment, you can use the `FileSystem` API:
FileSystem fs = FileSystems.getFileSystem(URI.create("gs://bucket"));
byte[] data = "hello world".getBytes(StandardCharsets.UTF_8);
Path path = fs.getPath("/object");
Files.write(path, data);
List<String> lines = Files.readAllLines(path, StandardCharsets.UTF_8);
If you don't want to rely on Java SPI, which requires a META-INF file in your jar generated by Google Auto, you can instantiate this file system directly as follows:
CloudStorageFileSystem fs = CloudStorageFileSystem.forBucket("bucket");
byte[] data = "hello world".getBytes(StandardCharsets.UTF_8);
Path path = fs.getPath("/object");
Files.write(path, data);
data = Files.readAllBytes(path);
For instructions on how to add Google Cloud Storage NIO support to a legacy jar see this example. For more examples see here.
- `BlobReadChannel` now supports reading and seeking files larger than `Integer.MAX_VALUE` bytes
Published by mziccard over 8 years ago
- `gcloud-java-pubsub`, a new client library to interact with Google Cloud Pub/Sub, is released and is in alpha. See the docs for more information.
- `gcloud-java-pubsub` uses gRPC as transport layer, which is not (yet) supported by App Engine Standard. `gcloud-java-pubsub` will work on App Engine Flexible.
- See the API Documentation for `gcloud-java-pubsub` javadoc.

try (PubSub pubsub = PubSubOptions.defaultInstance().service()) {
Topic topic = pubsub.create(TopicInfo.of("test-topic"));
Message message1 = Message.of("First message");
Message message2 = Message.of("Second message");
topic.publishAsync(message1, message2);
}
The following snippet, instead, shows how to create a Pub/Sub pull subscription and asynchronously pull messages from it. See CreateSubscriptionAndPullMessages.java for the full source code.
try (PubSub pubsub = PubSubOptions.defaultInstance().service()) {
Subscription subscription =
pubsub.create(SubscriptionInfo.of("test-topic", "test-subscription"));
MessageProcessor callback = new MessageProcessor() {
@Override
public void process(Message message) throws Exception {
System.out.printf("Received message \"%s\"%n", message.payloadAsString());
}
};
// Create a message consumer and pull messages (for 60 seconds)
try (MessageConsumer consumer = subscription.pullAsync(callback)) {
Thread.sleep(60_000);
}
}
Published by mziccard over 8 years ago
- Added support for the `BYTES` datatype. A field of type `BYTES` can be created by using `Field.Value.bytes()`. The `byte[] bytesValue()` method is added to `FieldValue` to return the value of a field as a byte array.
- A `Job waitFor(WaitForOption... waitOptions)` method is added to the `Job` class. This method waits for the job to complete and returns the job's updated information:

Job completedJob = job.waitFor();
if (completedJob == null) {
// job no longer exists
} else if (completedJob.status().error() != null) {
// job failed, handle error
} else {
// job completed successfully
}
By default, the job status is checked every 500 milliseconds; to configure this value, `WaitForOption.checkEvery(long, TimeUnit)` can be used. `WaitForOption.timeout(long, TimeUnit)`, instead, sets the maximum time to wait.
- `AuthCredentials.createFor(String)` and `AuthCredentials.createFor(String, Date)` methods have been added to create `AuthCredentials` objects given an OAuth2 access token (and possibly its expiration date).
- An `Operation waitFor(WaitForOption... waitOptions)` method is added to the `Operation` class. This method waits for the operation to complete and returns the operation's updated information:

Operation completedOperation = operation.waitFor();
if (completedOperation == null) {
// operation no longer exists
} else if (completedOperation.errors() != null) {
// operation failed, handle error
} else {
// operation completed successfully
}
By default, the operation status is checked every 500 milliseconds; to configure this value, `WaitForOption.checkEvery(long, TimeUnit)` can be used. `WaitForOption.timeout(long, TimeUnit)`, instead, sets the maximum time to wait.
- `Datastore.put` and `DatastoreBatchWriter.put` now support entities with incomplete keys. Both `put` methods return the just updated/created entities. A `putWithDeferredIdAllocation` method has also been added to `DatastoreBatchWriter`; see the sketch after this list.
- `StorageExample` now contains examples on how to add ACLs to blobs and buckets (#1033).
- A `BlobInfo.createTime()` getter has been added. This method returns the time at which a blob was created (#1034).
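A minimal sketch of putting an entity with an incomplete key (pre-rename builder names assumed; `datastore` is an existing `Datastore` service object and the kind name is a placeholder):

```java
import com.google.cloud.datastore.Datastore;
import com.google.cloud.datastore.Entity;
import com.google.cloud.datastore.FullEntity;
import com.google.cloud.datastore.IncompleteKey;

// Build an entity whose key has no id/name; put() now accepts it and
// returns the stored entity with the allocated key.
IncompleteKey incompleteKey = datastore.newKeyFactory().kind("Task").newKey();
FullEntity<IncompleteKey> entity = FullEntity.builder(incompleteKey)
    .set("description", "buy milk")
    .build();
Entity stored = datastore.put(entity);
System.out.println(stored.key());
```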
Published by mziccard over 8 years ago
- The `Clock` abstract class is moved out of `ServiceOptions`. `ServiceOptions.clock()` is now used by `RetryHelper` in all service calls. This enables mocking the `Clock` source used for retries when testing your code.
- Added the `BatchResult` class. Sending batch requests in Storage is now as simple as in DNS. See the following example of sending a batch request:

StorageBatch batch = storage.batch();
BlobId firstBlob = BlobId.of("bucket", "blob1");
BlobId secondBlob = BlobId.of("bucket", "blob2");
BlobId thirdBlob = BlobId.of("bucket", "blob3");
// Users can either register a callback on an operation
batch.delete(firstBlob).notify(new BatchResult.Callback<Boolean, StorageException>() {
@Override
public void success(Boolean result) {
// handle delete result
}
@Override
public void error(StorageException exception) {
// handle exception
}
});
// Ignore its result
batch.update(BlobInfo.builder(secondBlob).contentType("text/plain").build());
StorageBatchResult<Blob> result = batch.get(thirdBlob);
batch.submit();
// Or get the result
Blob blob = result.get(); // returns the operation's result or throws StorageException
- `LocalDatastoreHelper` now uses https to download the emulator - thanks to @pehrs (#942).
- `DatastoreExample` (#980).
- Fixed `StorageImpl.signUrl` for blob names that start with "/" - thanks to @clementdenis (#1013).
- Fixed a `readAllBytes` permission error on Google AppEngine (#1010).
Published by mziccard over 8 years ago
- `gcloud-java-compute`, a new client library to interact with Google Compute Engine, is released and is in alpha. See the docs for more information. See ComputeExample for a complete example or API Documentation for `gcloud-java-compute` javadoc.

// Create a service object
// Credentials are inferred from the environment.
Compute compute = ComputeOptions.defaultInstance().service();
// Create an external region address
RegionAddressId addressId = RegionAddressId.of("us-central1", "test-address");
Operation operation = compute.create(AddressInfo.of(addressId));
// Wait for operation to complete
while (!operation.isDone()) {
Thread.sleep(1000L);
}
// Check operation errors
operation = operation.reload();
if (operation.errors() == null) {
System.out.println("Address " + addressId + " was successfully created");
} else {
// inspect operation.errors()
throw new RuntimeException("Address creation failed");
}
// Create a persistent disk
ImageId imageId = ImageId.of("debian-cloud", "debian-8-jessie-v20160329");
DiskId diskId = DiskId.of("us-central1-a", "test-disk");
ImageDiskConfiguration diskConfiguration = ImageDiskConfiguration.of(imageId);
DiskInfo disk = DiskInfo.of(diskId, diskConfiguration);
operation = compute.create(disk);
// Wait for operation to complete
while (!operation.isDone()) {
Thread.sleep(1000L);
}
// Check operation errors
operation = operation.reload();
if (operation.errors() == null) {
System.out.println("Disk " + diskId + " was successfully created");
} else {
// inspect operation.errors()
throw new RuntimeException("Disk creation failed");
}
// Create a virtual machine instance
Address externalIp = compute.getAddress(addressId);
InstanceId instanceId = InstanceId.of("us-central1-a", "test-instance");
NetworkId networkId = NetworkId.of("default");
PersistentDiskConfiguration attachConfiguration =
PersistentDiskConfiguration.builder(diskId).boot(true).build();
AttachedDisk attachedDisk = AttachedDisk.of("dev0", attachConfiguration);
NetworkInterface networkInterface = NetworkInterface.builder(networkId)
.accessConfigurations(AccessConfig.of(externalIp.address()))
.build();
MachineTypeId machineTypeId = MachineTypeId.of("us-central1-a", "n1-standard-1");
InstanceInfo instance =
InstanceInfo.of(instanceId, machineTypeId, attachedDisk, networkInterface);
operation = compute.create(instance);
// Wait for operation to complete
while (!operation.isDone()) {
Thread.sleep(1000L);
}
// Check operation errors
operation = operation.reload();
if (operation.errors() == null) {
System.out.println("Instance " + instanceId + " was successfully created");
} else {
// inspect operation.errors()
throw new RuntimeException("Instance creation failed");
}
- An `options(String namespace)` method has been added to `LocalDatastoreHelper`, allowing you to create testing options for a specific namespace (#936).
- `of` methods have been added to `ListValue` to support specific types (`String`, `long`, `double`, `boolean`, `DateTime`, `LatLng`, `Key`, `FullEntity` and `Blob`). `addValue` methods have been added to `ListValue.Builder` to support the same set of specific types (#934); see the sketch after this list.
- Batch support has been added to `gcloud-java-dns` (#940). Batches allow performing a number of operations in one single RPC request.
- `BaseServiceException.getCause()` (#774).
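A minimal sketch of the new typed factories on `ListValue`; the exact overload shapes are an assumption, pre-rename builder names are used, and `datastore` and `key` are existing placeholders:

```java
import com.google.cloud.datastore.Entity;
import com.google.cloud.datastore.ListValue;

// Build a list property directly from specific types instead of wrapping
// each element in a Value by hand.
ListValue tags = ListValue.of("urgent", "home");
Entity task = Entity.builder(key)
    .set("tags", tags)
    .build();
datastore.put(task);
```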
Published by ajkannan over 8 years ago
`gcloud-java` has been repackaged. `com.google.gcloud` has now changed to `com.google.cloud`, and we're releasing our artifacts on Maven under the Group ID `com.google.cloud` rather than `com.google.gcloud`. The new way to add our library as a dependency in your project is as follows:
If you're using Maven, add this to your pom.xml file
<dependency>
<groupId>com.google.cloud</groupId>
<artifactId>gcloud-java</artifactId>
<version>0.2.0</version>
</dependency>
If you are using Gradle, add this to your dependencies
compile 'com.google.cloud:gcloud-java:0.2.0'
If you are using SBT, add this to your dependencies
libraryDependencies += "com.google.cloud" % "gcloud-java" % "0.2.0"
- A `ServiceAccountSigner` interface was added. Both `AppEngineAuthCredentials` and `ServiceAccountAuthCredentials` extend this interface and can be used to sign Google Cloud Storage blob URLs (#701, #854); see the sketch after this list.
- `gcloud-java` now uses the project ID given in the credentials file specified by the environment variable `GOOGLE_APPLICATION_CREDENTIALS` (if set) (#845).
- `Job`'s `isDone` method is fixed to return true if the job is complete or the job doesn't exist (#853).
- `LocalGcdHelper` has been renamed to `RemoteDatastoreHelper`, and the command line startup/shutdown of the helper has been removed. The helper is now more consistent with other modules' test helpers and can be used via the `create`, `start`, and `stop` methods (#821).
- `ListValue` no longer rejects empty lists, since Cloud Datastore v1beta3 supports empty array values (#862).
- Functionality has been added to `ChangeRequest`, namely adding `reload`/`isDone` methods and changing the method signature of `applyTo` (#849).
- `RemoteGcsHelper` was renamed to `RemoteStorageHelper` to be more consistent with other modules' test helpers (#821).
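A minimal sketch of producing a signed URL for a blob (pre-rename accessor names assumed; the bucket and blob names are placeholders and credentials are taken from the environment):

```java
import com.google.cloud.storage.BlobInfo;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;
import java.net.URL;
import java.util.concurrent.TimeUnit;

Storage storage = StorageOptions.defaultInstance().service();
BlobInfo blobInfo = BlobInfo.builder("my-bucket", "my-blob").build();
// The credentials in use must implement ServiceAccountSigner for signing to work.
URL signedUrl = storage.signUrl(blobInfo, 7, TimeUnit.DAYS);
System.out.println(signedUrl);
```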