google-cloud-java

Google Cloud Client Library for Java

APACHE-2.0 License

Stars: 1.9K
Committers: 222

google-cloud-java - 0.9.2

Published by garrettjonesgoogle over 7 years ago

New client for Cloud Spanner

google-cloud-spanner has been added. Find out more about Cloud Spanner at https://cloudplatform.googleblog.com/2017/02/introducing-Cloud-Spanner-a-global-database-service-for-mission-critical-applications.html .
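The following is a minimal sketch of running a query with the new client. The instance and database names are placeholders, and the snippet assumes the com.google.cloud.spanner API surface shipped with this release:

SpannerOptions options = SpannerOptions.newBuilder().build();
Spanner spanner = options.getService();
try {
  DatabaseClient dbClient = spanner.getDatabaseClient(
      DatabaseId.of(options.getProjectId(), "my-instance", "my-database"));
  // Run a trivial query in a single-use read-only context
  try (ResultSet resultSet = dbClient.singleUse().executeQuery(Statement.of("SELECT 1"))) {
    while (resultSet.next()) {
      System.out.println(resultSet.getLong(0));
    }
  }
} finally {
  spanner.close();
}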

google-cloud-java - 0.9.0

Published by garrettjonesgoogle over 7 years ago

Pub/Sub High-Performance Rewrite

The Pub/Sub client has been completely rewritten to enable high throughput. The handwritten layer on top of the SPI layer has been deprecated, and two handwritten classes have been added in the SPI layer, Publisher and Subscriber, for publishing and subscribing. (Note for those concerned: synchronous pull is still possible in SubscriberClient.) The handwritten layer was deprecated because the SPI layer can be kept up to date more easily with new service features. Since the change is so disruptive, we have retained the deprecated classes, but moved them under com.google.cloud.pubsub.deprecated. They will be removed before the Pub/Sub client goes to GA.
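As a rough sketch of the new publish/subscribe flow (the class and builder names below follow the later stable layout of the client; in 0.9.0 the classes live under the SPI package and some names may differ):

// Publish a message to an existing topic
Publisher publisher = Publisher.newBuilder(TopicName.of("my-project", "my-topic")).build();
publisher.publish(
    PubsubMessage.newBuilder().setData(ByteString.copyFromUtf8("hello")).build());

// Stream messages from an existing subscription
MessageReceiver receiver = new MessageReceiver() {
  @Override
  public void receiveMessage(PubsubMessage message, AckReplyConsumer consumer) {
    System.out.println(message.getData().toStringUtf8());
    consumer.ack();
  }
};
Subscriber subscriber =
    Subscriber.newBuilder(SubscriptionName.of("my-project", "my-subscription"), receiver).build();
subscriber.startAsync().awaitRunning();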

Logging

  • add zone to GAE Flex logging enhancer (#1589)
  • fix(logging): Make LoggingHandler.Enhancer interface public (#1607)

SPI layer changes

  • SPI layer: Regenerating with RpcStreamObserver (#1611)
    • This change is a prerequisite to enabling the shading of Guava

Docs

  • Javadocs: adding links to external types (#1600)
  • fix a typo in README.md (#1604)
google-cloud-java - 0.8.3

Published by garrettjonesgoogle over 7 years ago

Fixes

  • (Storage) Allow path in URIs passed to newFileSystem (#1470)
  • (Storage) Add a PathMatcher for CloudStorageFileSystem (#1469)
  • (Logging) Preventing logging re-entrance at FINE level (#1523)
  • (Logging) Set timestamp from LogRecord (#1533)
  • (Logging) Initialize the default MonitoredResource from a GAE environment (#1535)
  • (BigQuery) BigQuery: Add support to FormatOptions for AVRO (#1576)

SPI layer changes

  • use RpcFuture and remove old BundlingSettings (#1572)
  • fix DefaultLoggingRpc (#1584)
google-cloud-java - 0.8.1

Published by garrettjonesgoogle almost 8 years ago

Dependency updates

The dependency on grpc was bumped from 1.0.1 to 1.0.3. (#1504)

Interface changes

  • SPI layer: Converted Error Reporting, Monitoring, and Pub/Sub to use resource name types, and removed formatX/parseX methods (#1454)

Test improvements

  • Fixed more races in pubsub tests (#1473)
google-cloud-java - 0.8.0

Published by garrettjonesgoogle almost 8 years ago

Select clients going from Alpha to Beta

In this release, clients for four APIs are moving to beta:

  • Google Cloud BigQuery
  • Stackdriver Logging
  • Google Cloud Datastore
  • Google Cloud Storage

Their versions will have “-beta” on the end to call out that fact. All other clients are still Alpha.

Features

  • QueryParameter support added to BigQuery, DATE/TIME/DATETIME added to LegacySQLTypeName (#1451)

Interface changes

  • Logging api layer: using resource name classes instead of strings where appropriate (#1454)

Test improvements

Several tests were flaky on AppVeyor, so improvements were made to make them more reliable.

  • BlockingProcessStreamReader (#1457)
  • BigQuery integration tests (#1456)
  • PubSub integration tests (#1453)
  • DNS tests: removed LocalDnsHelper and its test (#1446)

Despite that, integration tests were still failing on AppVeyor, so they have been disabled until they can all run reliably - tracking issue: #1429

Documentation, Snippets

  • Various snippet updates - (#1399), (#1400)
google-cloud-java - 0.7.0

Published by garrettjonesgoogle almost 8 years ago

Naming, interface changes

  • SPI classes ending in Api have been renamed so that they end in Client (#1417)
  • Deleted the client for Natural Language v1beta1, added the client for Natural Language v1 (#1417)
  • PubSub SPI classes now take resource name classes instead of strings (#1403)

Features

  • Speech SPI layer: AsyncRecognize now returns a new OperationFuture type which enables an easier way to get the final result of the long-running operation. (#1419)

Documentation, Snippets

  • Various documentation fixes - (#1419), (#1415)
  • Various snippet updates for BigQuery - (#1410), (#1407), (#1406)
google-cloud-java - 0.6.0

Published by mziccard almost 8 years ago

Credentials changes

AuthCredentials classes have been deleted. Use classes from google-auth-library-java for authentication.

google-cloud will still try to infer credentials from the environment when no credentials are provided:

Storage storage = StorageOptions.getDefaultInstance().getService();

You can also explicitly provide credentials. For instance, to use a JSON credentials file try the following code:

Storage storage = StorageOptions.newBuilder()
    .setCredentials(ServiceAccountCredentials.fromStream(new FileInputStream("/path/to/my/key.json")))
    .build()
    .getService();

For more details see the Authentication section of the main README.

Features

PubSub

  • All pullAsync methods now use returnImmediately=false and are not subject to client-side timeouts (#1387)

Translate

  • Add support for the TranslateOption.model(String) option, which lets you set the translation model used to translate text. This option is only available to whitelisted users (#1393)
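A hedged example of passing the new option; the model id "nmt" below is just a placeholder value:

Translate translate = TranslateOptions.getDefaultInstance().getService();
Translation translation = translate.translate(
    "Hello world",
    TranslateOption.targetLanguage("es"),
    TranslateOption.model("nmt"));
System.out.println(translation.getTranslatedText());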

Fixes

Storage

  • Change BaseWriteChannel's position to long to fix integer overflow on big files (#1390)
google-cloud-java - 0.5.1

Published by mziccard almost 8 years ago

Fixes

All

  • Reduce gRPC dependency footprint. In particular, some gRPC-related dependencies are removed from google-cloud-core module. Get rid of duplicate classes (#1365)

Datastore

  • Deprecate DatastoreOptions.Builder's namespace(String) setter in favor of setNamespace(String), undo deprecating Transaction.Response.getGeneratedKeys() (#1358)

Storage NIO

  • Avoid shading the javax package in the google-cloud-nio shaded jar (#1362)
google-cloud-java - 0.5.0

Published by mziccard almost 8 years ago

Naming changes

  • Getters and setters with the get and set prefix have been added to all classes/builders. Older getters/setters (without the get/set prefix) have been deprecated
  • Builder factory methods builder() have been deprecated; use newBuilder() instead
  • defaultInstance() factory methods have been deprecated; use getDefaultInstance() instead

See the following example of using google-cloud-storage after the naming changes:

Storage storage = StorageOptions.getDefaultInstance().getService();
BlobId blobId = BlobId.of("bucket", "blob_name");
Blob blob = storage.get(blobId);
if (blob != null) {
  byte[] prevContent = blob.getContent();
  System.out.println(new String(prevContent, UTF_8));
  WritableByteChannel channel = blob.writer();
  channel.write(ByteBuffer.wrap("Updated content".getBytes(UTF_8)));
  channel.close();
}

Features

Datastore

  • Add support to LocalDatastoreHelper for more recent versions of the Datastore emulator installed via gcloud (#1303)
  • Add reset() method to LocalDatastoreHelper to clear the status of the Datastore emulator (#1293)
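For instance, a test can drive the emulator roughly like this (a sketch; the get-prefixed method names follow the 0.5.0 naming and the exact helper signatures may differ slightly):

LocalDatastoreHelper helper = LocalDatastoreHelper.create();
helper.start();                                   // download (if needed) and start the emulator
Datastore datastore = helper.getOptions().getService();
// ... exercise the code under test against the emulator ...
helper.reset();                                   // wipe emulator state between tests
helper.stop();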

PubSub

  • Add support for PubSub emulator host variable. If the PUBSUB_EMULATOR_HOST environment variable is set, the PubSub client uses it to locate the PubSub emulator. (#1317)
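No code changes are needed to target the emulator; for example, with PUBSUB_EMULATOR_HOST=localhost:8085 exported, the default options below resolve to the emulator endpoint:

PubSub pubsub = PubSubOptions.getDefaultInstance().getService();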

Fixes

Datastore

  • Allow LocalDatastoreHelper to properly cache downloaded copies of the Datastore emulator (#1302)

Storage

  • Fix regression in Storage.signUrl to support blob names containing / characters (#1346)
  • Allow Storage.reader to read gzip blobs in compressed chunks. This prevents ReadChannel from trying (and failing) to uncompress gzipped chunks (#1301)

Storage NIO

  • All dependencies are now shaded in the google-cloud-nio shaded jar (#1327)
google-cloud-java - 0.4.0

Published by mziccard about 8 years ago

Features

BigQuery

  • Add of(String) factory method to DatasetInfo (#1275)
bigquery.create(DatasetInfo.of("dataset-name"));

Core

  • google-cloud now depends on protobuf 3.0.0 and grpc 1.0.1 (#1273)

PubSub

  • Add support for IAM methods for topics and subscriptions (#1231)
// Example of replacing a subscription policy
Policy policy = pubsub.getSubscriptionPolicy(subscriptionName);
Policy updatedPolicy = policy.toBuilder()
    .addIdentity(Role.viewer(), Identity.allAuthenticatedUsers())
    .build();
updatedPolicy = pubsub.replaceSubscriptionPolicy(subscriptionName, updatedPolicy);

// Example of asynchronously replacing a topic policy
Policy policy = pubsub.getTopicPolicy(topicName);
Policy updatedPolicy = policy.toBuilder()
    .addIdentity(Role.viewer(), Identity.allAuthenticatedUsers())
    .build();
Future<Policy> future = pubsub.replaceTopicPolicyAsync(topicName, updatedPolicy);
// ...
updatedPolicy = future.get();

Storage

  • Add support for create/get/update/delete/list ACLs for blobs and buckets (#1228)
// Example of updating the ACL for a blob
BlobId blobId = BlobId.of(bucketName, blobName, blobGeneration);
Acl acl = storage.updateAcl(blobId, Acl.of(User.ofAllAuthenticatedUsers(), Role.OWNER));

// Example of listing the ACL entries for a bucket
List<Acl> acls = storage.listAcls(bucketName);
for (Acl acl : acls) {
   // do something with ACL entry
}

Key key = ...;
String base64Key = ...;
byte[] content = {0xD, 0xE, 0xA, 0xD};
BlobInfo blobInfo = BlobInfo.builder(bucketName, blobName).build();

// Example of creating a blob with a customer-supplied encryption key (as Key object)
storage.create(blobInfo, content, Storage.BlobTargetOption.encryptionKey(key));

// Example of reading a blob with a customer-supplied decryption key (as base64 String)
byte[] readBytes =
    storage.readAllBytes(bucketName, blobName, Storage.BlobSourceOption.decryptionKey(base64Key));

Fixes

BigQuery

  • Support operations on tables/datasets/jobs in projects other than the one set in BigQueryOptions (#1217)
  • Allow constructing a RowToInsert using Map<String, ? extends Object> rather than Map<String, Object> (#1259)

Datastore

  • Retry ABORTED Datastore commits only when the commit was NON_TRANSACTIONAL (#1235)

Logging

  • Remove unnecessary MetricInfo parameter from Metric.updateAsync() (#1221)
  • Remove unnecessary SinkInfo parameter from Sink.updateAsync() (#1222)
  • Logging.deleteSink now returns false on NOT_FOUND (#1222)

Storage

  • Retry calls that open a resumable upload session in WriteChannel, when they fail with a retryable error (#1233)
  • Fix generation of signed URLs for blobs containing spaces and other special chars (#1277 - thanks to @ostronom)
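For reference, a hedged sketch of signing a URL for a blob whose name contains spaces (bucket and blob names are placeholders):

BlobInfo blobInfo = BlobInfo.builder("my-bucket", "folder name/file with spaces.txt").build();
URL signedUrl = storage.signUrl(blobInfo, 1, TimeUnit.HOURS);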
google-cloud-java - 0.3.0

Published by mziccard about 8 years ago

gcloud-java renamed to google-cloud

gcloud-java has been deprecated and renamed to google-cloud.

If you are using Maven, add this to your pom.xml file

<dependency>
  <groupId>com.google.cloud</groupId>
  <artifactId>google-cloud</artifactId>
  <version>0.3.0</version>
</dependency>

If you are using Gradle, add this to your dependencies

compile 'com.google.cloud:google-cloud:0.3.0'

If you are using SBT, add this to your dependencies

libraryDependencies += "com.google.cloud" % "google-cloud" % "0.3.0"

gcloud-java-<service> renamed to google-cloud-<service>

Service-specific artifacts have also been renamed from gcloud-java-<service> to google-cloud-<service>. See the following for examples of adding google-cloud-datastore as a dependency:

<dependency>
  <groupId>com.google.cloud</groupId>
  <artifactId>google-cloud-datastore</artifactId>
  <version>0.3.0</version>
</dependency>

If you are using Gradle, add this to your dependencies

compile 'com.google.cloud:google-cloud-datastore:0.3.0'

If you are using SBT, add this to your dependencies

libraryDependencies += "com.google.cloud" % "google-cloud-datastore" % "0.3.0"

google-cloud-java - 0.2.8

Published by mziccard about 8 years ago

Features

Datastore

  • gcloud-java-datastore now uses Datastore v1 (#1169)

Translate

  • gcloud-java-translate, a new client library to interact with Google Translate, is released and is in alpha. See the docs for more information.
    See TranslateExample for a complete example or API Documentation for gcloud-java-translate javadoc.
    The following snippet shows how to detect the language of some text and then translate it. Complete source code can be found in DetectLanguageAndTranslate.java.
import com.google.cloud.translate.Detection;
import com.google.cloud.translate.Translate;
import com.google.cloud.translate.Translate.TranslateOption;
import com.google.cloud.translate.TranslateOptions;
import com.google.cloud.translate.Translation;

Translate translate = TranslateOptions.defaultInstance().service();

Detection detection = translate.detect("Hola");
String detectedLanguage = detection.language();

Translation translation = translate.translate(
    "World",
    TranslateOption.sourceLanguage("en"),
    TranslateOption.targetLanguage(detectedLanguage));

System.out.printf("Hola %s%n", translation.translatedText());

Fixes

Core

  • SocketException and "insufficient data written" IOException are now retried (#1187)

Storage NIO

  • Enumerating filesystems no longer fails if gcloud-java-nio is in the classpath and no credentials are available (#1189)
  • Rename CloudStorageFileSystemProvider.setGCloudOptions to CloudStorageFileSystemProvider.setStorageOptions (#1189)
google-cloud-java - 0.2.7

Published by mziccard about 8 years ago

Fixes

BigQuery

  • String setters for DeprecationStatus timestamps are removed from DeprecationStatus.Builder. Getters are still available in DeprecationStatus for legacy support (#1127).
  • Fix table's StreamingBuffer to allow oldestEntryTime to be null (#1141).
  • Add support for useLegacySql to QueryRequest and QueryJobConfiguration (#1142).

Datastore

  • Fix Datastore exceptions conversion: use getNumber() instead of ordinal() to get DatastoreException's error code (#1140).
  • Use HTTP transport factory, as set via DatastoreOptions, to perform service requests (#1144).

Logging

  • Set gcloud-java user agent in gcloud-java-logging, as done for other modules (#1147).

PubSub

  • Change Pub/Sub endpoint from pubsub-experimental.googleapis.com to pubsub.googleapis.com (#1149).
google-cloud-java - 0.2.6

Published by mziccard about 8 years ago

Features

BigQuery

  • Add support for time-partitioned tables. For example, you can now create a time-partitioned table using the following code:
TableId tableId = TableId.of(datasetName, tableName);
TimePartitioning partitioning = TimePartitioning.of(Type.DAY);
// You can also set the partition expiration, in milliseconds:
// TimePartitioning partitioning = TimePartitioning.of(Type.DAY, 2592000000L);
StandardTableDefinition tableDefinition = StandardTableDefinition.builder()
    .schema(tableSchema)
    .timePartitioning(partitioning)
    .build();
Table createdTable = bigquery.create(TableInfo.of(tableId, tableDefinition));

Logging

  • gcloud-java-logging, a new client library to interact with Stackdriver Logging, is released and is in alpha. See the docs for more information.
    gcloud-java-logging uses gRPC as its transport layer, which is not (yet) supported by App Engine Standard. gcloud-java-logging will work on App Engine Flexible.
    See LoggingExample for a complete example or API Documentation for gcloud-java-logging javadoc.
    The following snippet shows how to write and list log entries. Complete source code can be found in WriteAndListLogEntries.java.
import com.google.cloud.MonitoredResource;
import com.google.cloud.Page;
import com.google.cloud.logging.LogEntry;
import com.google.cloud.logging.Logging;
import com.google.cloud.logging.Logging.EntryListOption;
import com.google.cloud.logging.LoggingOptions;
import com.google.cloud.logging.Payload.StringPayload;

import java.util.Collections;
import java.util.Iterator;

LoggingOptions options = LoggingOptions.defaultInstance();
try (Logging logging = options.service()) {

  LogEntry firstEntry = LogEntry.builder(StringPayload.of("message"))
      .logName("test-log")
      .resource(MonitoredResource.builder("global")
          .addLabel("project_id", options.projectId())
          .build())
      .build();
  logging.write(Collections.singleton(firstEntry));

  Page<LogEntry> entries = logging.listLogEntries(
      EntryListOption.filter("logName=projects/" + options.projectId() + "/logs/test-log"));
  Iterator<LogEntry> entryIterator = entries.iterateAll();
  while (entryIterator.hasNext()) {
    System.out.println(entryIterator.next());
  }
}

The next snippet shows how to use a java.util.logging.Logger to write log entries to Stackdriver Logging. The snippet installs a Stackdriver Logging handler using LoggingHandler.addHandler(Logger, LoggingHandler). Notice that this could also be done through the logging.properties file, adding the following line:

com.google.cloud.examples.logging.snippets.AddLoggingHandler.handlers=com.google.cloud.logging.LoggingHandler

The complete code can be found in AddLoggingHandler.java.

import com.google.cloud.logging.LoggingHandler;

import java.util.logging.Logger;

Logger logger = Logger.getLogger(AddLoggingHandler.class.getName());
LoggingHandler.addHandler(logger, new LoggingHandler());
logger.warning("test warning");
google-cloud-java - 0.2.5

Published by mziccard over 8 years ago

Features

Storage NIO

  • gcloud-java-nio, a new client library that lets you interact with Google Cloud Storage through Java's NIO API, is released and is in alpha. Not all NIO features have been implemented yet; see the docs for more information.
    The simplest way to get started with gcloud-java-nio is with Paths and Files:
Path path = Paths.get(URI.create("gs://bucket/lolcat.csv"));
List<String> lines = Files.readAllLines(path, StandardCharsets.UTF_8);

InputStream and OutputStream can also be used for streaming:

Path path = Paths.get(URI.create("gs://bucket/lolcat.csv"));
try (InputStream input = Files.newInputStream(path)) {
  // use input stream
}

To configure a bucket per-environment, you can use the FileSystem API:

FileSystem fs = FileSystems.getFileSystem(URI.create("gs://bucket"));
byte[] data = "hello world".getBytes(StandardCharsets.UTF_8);
Path path = fs.getPath("/object");
Files.write(path, data);
List<String> lines = Files.readAllLines(path, StandardCharsets.UTF_8);

If you don't want to rely on Java SPI, which requires a META-INF file in your jar generated by Google Auto, you can instantiate this file system directly as follows:

CloudStorageFileSystem fs = CloudStorageFileSystem.forBucket("bucket");
byte[] data = "hello world".getBytes(StandardCharsets.UTF_8);
Path path = fs.getPath("/object");
Files.write(path, data);
data = Files.readAllBytes(path);

For instructions on how to add Google Cloud Storage NIO support to a legacy jar see this example. For more examples see here.

Fixes

Storage

  • Fix BlobReadChannel to support reading and seeking files larger than Integer.MAX_VALUE bytes
google-cloud-java - 0.2.4

Published by mziccard over 8 years ago

Features

Pub/Sub

  • gcloud-java-pubsub, a new client library to interact with Google Cloud Pub/Sub, is released and is in alpha. See the docs for more information.
    gcloud-java-pubsub uses gRPC as its transport layer, which is not (yet) supported by App Engine Standard. gcloud-java-pubsub will work on App Engine Flexible.
    See PubSubExample for a complete example or API Documentation for gcloud-java-pubsub javadoc.
    The following snippet shows how to create a Pub/Sub topic and asynchronously publish messages to it. See CreateTopicAndPublishMessages.java for the full source code.
  try (PubSub pubsub = PubSubOptions.defaultInstance().service()) {
    Topic topic = pubsub.create(TopicInfo.of("test-topic"));
    Message message1 = Message.of("First message");
    Message message2 = Message.of("Second message");
    topic.publishAsync(message1, message2);
  }

The next snippet shows how to create a Pub/Sub pull subscription and asynchronously pull messages from it. See CreateSubscriptionAndPullMessages.java for the full source code.

  try (PubSub pubsub = PubSubOptions.defaultInstance().service()) {
    Subscription subscription =
        pubsub.create(SubscriptionInfo.of("test-topic", "test-subscription"));
    MessageProcessor callback = new MessageProcessor() {
      @Override
      public void process(Message message) throws Exception {
        System.out.printf("Received message \"%s\"%n", message.payloadAsString());
      }
    };
    // Create a message consumer and pull messages (for 60 seconds)
    try (MessageConsumer consumer = subscription.pullAsync(callback)) {
      Thread.sleep(60_000);
    }
  }
google-cloud-java - 0.2.3

Published by mziccard over 8 years ago

Features

BigQuery

  • Add support for the BYTES datatype. A field of type BYTES can be created by using Field.Type.bytes(). The byte[] bytesValue() method has been added to FieldValue to return the value of a field as a byte array.
  • A Job waitFor(WaitForOption... waitOptions) method has been added to the Job class. This method waits for the job to complete and returns the job's updated information:
Job completedJob = job.waitFor();
if (completedJob == null) {
  // job no longer exists
} else if (completedJob.status().error() != null) {
  // job failed, handle error
} else {
  // job completed successfully
}

By default, the job status is checked every 500 milliseconds; to change this interval, use WaitForOption.checkEvery(long, TimeUnit). WaitForOption.timeout(long, TimeUnit) sets the maximum time to wait.
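For example, to poll every two seconds and give up after five minutes:

Job completedJob = job.waitFor(
    WaitForOption.checkEvery(2, TimeUnit.SECONDS),
    WaitForOption.timeout(5, TimeUnit.MINUTES));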

Compute

  • An Operation waitFor(WaitForOption... waitOptions) method has been added to the Operation class. This method waits for the operation to complete and returns the operation's updated information:
Operation completedOperation = operation.waitFor();
if (completedOperation == null) {
  // operation no longer exists
} else if (completedOperation.errors() != null) {
  // operation failed, handle error
} else {
  // operation completed successfully
}

By default, the operation status is checked every 500 milliseconds; to change this interval, use WaitForOption.checkEvery(long, TimeUnit). WaitForOption.timeout(long, TimeUnit) sets the maximum time to wait.

Fixes

Storage

  • StorageExample now contains examples of how to add ACLs to blobs and buckets (#1033).
  • A BlobInfo.createTime() getter has been added. It returns the time at which a blob was created (#1034).
google-cloud-java - 0.2.2

Published by mziccard over 8 years ago

Features

Core

  • The Clock abstract class has been moved out of ServiceOptions. ServiceOptions.clock() is now used by RetryHelper in all service calls, which makes it possible to mock the clock source used for retries when testing your code.

Storage

  • Refactor storage batches to use the common BatchResult class. Sending batch requests in Storage is now as simple as in DNS. See the following example of sending a batch request:
StorageBatch batch = storage.batch();
BlobId firstBlob = BlobId.of("bucket", "blob1");
BlobId secondBlob = BlobId.of("bucket", "blob2");
BlobId thirdBlob = BlobId.of("bucket", "blob3");
// Users can either register a callback on an operation
batch.delete(firstBlob).notify(new BatchResult.Callback<Boolean, StorageException>() {
  @Override
  public void success(Boolean result) {
    // handle delete result
  }

  @Override
  public void error(StorageException exception) {
    // handle exception
  }
});
// Ignore its result
batch.update(BlobInfo.builder(secondBlob).contentType("text/plain").build());
StorageBatchResult<Blob> result = batch.get(thirdBlob);
batch.submit();
// Or get the result
Blob blob = result.get(); // returns the operation's result or throws StorageException

Fixes

Datastore

  • Update datastore client to accept IP addresses for localhost (#1002).
  • LocalDatastoreHelper now uses https to download the emulator - thanks to @pehrs (#942).
  • Add example on embedded entities to DatastoreExample (#980).

Storage

  • Fix StorageImpl.signUrl for blob names that start with "/" - thanks to @clementdenis (#1013).
  • Fix readAllBytes permission error on Google AppEngine (#1010).
google-cloud-java - 0.2.1

Published by mziccard over 8 years ago

Features

Compute

  • gcloud-java-compute, a new client library to interact with Google Compute Engine, is released and is in alpha. See the docs for more information. See ComputeExample for a complete example or API Documentation for gcloud-java-compute javadoc.
    The following snippet shows how to create a region external IP address, a persistent boot disk and a virtual machine instance that uses both the IP address and the persistent disk. See CreateAddressDiskAndInstance.java for the full source code.
    // Create a service object
    // Credentials are inferred from the environment.
    Compute compute = ComputeOptions.defaultInstance().service();

    // Create an external region address
    RegionAddressId addressId = RegionAddressId.of("us-central1", "test-address");
    Operation operation = compute.create(AddressInfo.of(addressId));
    // Wait for operation to complete
    while (!operation.isDone()) {
      Thread.sleep(1000L);
    }
    // Check operation errors
    operation = operation.reload();
    if (operation.errors() == null) {
      System.out.println("Address " + addressId + " was successfully created");
    } else {
      // inspect operation.errors()
      throw new RuntimeException("Address creation failed");
    }

    // Create a persistent disk
    ImageId imageId = ImageId.of("debian-cloud", "debian-8-jessie-v20160329");
    DiskId diskId = DiskId.of("us-central1-a", "test-disk");
    ImageDiskConfiguration diskConfiguration = ImageDiskConfiguration.of(imageId);
    DiskInfo disk = DiskInfo.of(diskId, diskConfiguration);
    operation = compute.create(disk);
    // Wait for operation to complete
    while (!operation.isDone()) {
      Thread.sleep(1000L);
    }
    // Check operation errors
    operation = operation.reload();
    if (operation.errors() == null) {
      System.out.println("Disk " + diskId + " was successfully created");
    } else {
      // inspect operation.errors()
      throw new RuntimeException("Disk creation failed");
    }

    // Create a virtual machine instance
    Address externalIp = compute.getAddress(addressId);
    InstanceId instanceId = InstanceId.of("us-central1-a", "test-instance");
    NetworkId networkId = NetworkId.of("default");
    PersistentDiskConfiguration attachConfiguration =
        PersistentDiskConfiguration.builder(diskId).boot(true).build();
    AttachedDisk attachedDisk = AttachedDisk.of("dev0", attachConfiguration);
    NetworkInterface networkInterface = NetworkInterface.builder(networkId)
        .accessConfigurations(AccessConfig.of(externalIp.address()))
        .build();
    MachineTypeId machineTypeId = MachineTypeId.of("us-central1-a", "n1-standard-1");
    InstanceInfo instance =
        InstanceInfo.of(instanceId, machineTypeId, attachedDisk, networkInterface);
    operation = compute.create(instance);
    // Wait for operation to complete
    while (!operation.isDone()) {
      Thread.sleep(1000L);
    }
    // Check operation errors
    operation = operation.reload();
    if (operation.errors() == null) {
      System.out.println("Instance " + instanceId + " was successfully created");
    } else {
      // inspect operation.errors()
      throw new RuntimeException("Instance creation failed");
    }

Datastore

  • An options(String namespace) method has been added to LocalDatastoreHelper, allowing you to create testing options for a specific namespace (#936).
  • of methods have been added to ListValue to support specific types (String, long, double, boolean, DateTime, LatLng, Key, FullEntity and Blob). addValue methods have been added to ListValue.Builder to support the same set of specific types (#934).
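A small sketch of the new factory and builder methods (the values are arbitrary, and builder() follows the pre-0.5.0 naming used elsewhere in this release):

ListValue tags = ListValue.of("news");
ListValue scores = ListValue.builder()
    .addValue(10L)
    .addValue(20L)
    .build();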

DNS

  • Support for batches has been added to gcloud-java-dns (#940). Batches allow you to perform multiple operations in a single RPC request.

Fixes

Core

  • The causing exception is now chained and exposed via BaseServiceException.getCause() (#774).
google-cloud-java - 0.2.0

Published by ajkannan over 8 years ago

Features

General

  • gcloud-java has been repackaged. com.google.gcloud has changed to com.google.cloud, and we're releasing our artifacts on Maven under the group ID com.google.cloud rather than com.google.gcloud. The new way to add our library as a dependency in your project is as follows:

If you're using Maven, add this to your pom.xml file

<dependency>
  <groupId>com.google.cloud</groupId>
  <artifactId>gcloud-java</artifactId>
  <version>0.2.0</version>
</dependency>

If you are using Gradle, add this to your dependencies

compile 'com.google.cloud:gcloud-java:0.2.0'

If you are using SBT, add this to your dependencies

libraryDependencies += "com.google.cloud" % "gcloud-java" % "0.2.0"

Storage

  • The interface ServiceAccountSigner was added. Both AppEngineAuthCredentials and ServiceAccountAuthCredentials extend this interface and can be used to sign Google Cloud Storage blob URLs (#701, #854).

Fixes

General

  • The default RPC retry parameters were changed to align with the backoff policy requirement listed in the Service Level Agreements (SLAs) for Cloud BigQuery, Cloud Datastore, and Cloud Storage (#857, #860).
  • The expiration date is now properly populated for App Engine credentials (#873, #894).
  • gcloud-java now uses the project ID given in the credentials file specified by the environment variable GOOGLE_APPLICATION_CREDENTIALS (if set) (#845).

BigQuery

  • Job's isDone method is fixed to return true if the job is complete or the job doesn't exist (#853).

Datastore

  • LocalGcdHelper has been renamed to LocalDatastoreHelper, and the command line startup/shutdown of the helper has been removed. The helper is now more consistent with other modules' test helpers and can be used via the create, start, and stop methods (#821).
  • ListValue no longer rejects empty lists, since Cloud Datastore v1beta3 supports empty array values (#862).

DNS

  • There were some minor changes to ChangeRequest, namely adding reload/isDone methods and changing the method signature of applyTo (#849).

Storage

  • RemoteGcsHelper was renamed to RemoteStorageHelper to be more consistent with other modules' test helpers (#821).