Published by ajkannan over 8 years ago
`gcloud-java-datastore` now uses Cloud Datastore v1beta3. You can read more about updates in Datastore v1beta3 here. Note that to use this new API, you may have to re-enable the Google Cloud Datastore API in the Developers Console. The following API changes are coupled with this update:

- `indexed` has been changed to `excludeFromIndexes`.
- Properties of type `EntityValue` and type `ListValue` can now be indexed. Moreover, indexing and querying properties inside of entity values is now supported. Values inside entity values are indexed by default.
- `LatLng` and `LatLngValue`, representing the new property type for latitude & longitude, are added.
- `meaning` has been made package scope instead of public, as it is a deprecated field.
- The `BatchOption` and `TransactionOption` classes are now removed.
- `ReadOption` is added to allow users to specify eventual consistency on Datastore reads. This can be a useful optimization when strongly consistent results for `get`/`fetch` or ancestor queries aren't necessary.
- `QueryResults.cursorAfter()` is updated to point to the position after the last consumed result. In v1beta2, `cursorAfter` was only updated after all results were consumed.
- `groupBy` is replaced by `distinctOn`.
- The `Projection` class in `StructuredQuery` is replaced with a string representing the property name. Aggregation functions are removed from `gcloud-java-datastore`.

In `gcloud-java-bigquery`, `gcloud-java-dns`, and `gcloud-java-storage`, the field `id()` has been renamed to `generatedId` for classes that are assigned ids from the service.

Published by mziccard over 8 years ago
`gcloud-java-dns`, a new client library to interact with Google Cloud DNS, is released and is in alpha. See the docs for more information and samples.

- `startPageToken` is now called `pageToken` (#774) and `maxResults` is now called `pageSize` (#745), to be consistent with page-based listing methods in other `gcloud-java` modules.
- `overrideInfo` is added to copy requests to denote whether metadata should be overridden (#762).

Published by ajkannan over 8 years ago
- A `versions(boolean versions)` option is added to `BlobListOption` to enable/disable versioned `Blob` listing. If enabled, all versions of an object are returned as distinct results (#688).
- `BlobTargetOption` and `BlobWriteOption` classes are added to `Bucket` to allow setting options for `create` methods (#705).
- A `NoAuthCredentials` class is added, with the `AuthCredentials.noAuth()` method, to be used when testing services against local emulators (#719).
- `Storage.BlobTargetOption` and `Storage.BlobWriteOption` are no longer used in `Bucket`'s `create` methods. New classes (`Bucket.BlobTargetOption` and `Bucket.BlobWriteOption`) are added to provide options to `Bucket.create` (#705).
- A bug is fixed in `BlobWriteChannel` when it writes a blob whose size is a multiple of the chunk size used (#725).
- A bug is fixed in `BlobReadChannel` when it reads a blob whose size is a multiple of the chunk/buffer size (#725).

Published by ajkannan over 8 years ago
The `JobInfo` and `TableInfo` class hierarchies are flattened (#584, #600). Instead, `JobInfo` contains a field `JobConfiguration`, which is subclassed to provide configurations for different types of jobs. Likewise, `TableInfo` contains a new field `TableDefinition`, which is subclassed to provide table settings depending on the table type.

Functional classes (`Job`, `Table`, `Dataset`) now extend their associated metadata classes (`JobInfo`, `TableInfo`, `DatasetInfo`) (#530, #609). The `BigQuery` service methods now return functional objects instead of the metadata objects.

Setting list properties containing values of a single type is more concise (#640, #648). For example, to set a list of string values as a property on an entity, you'd previously have to write:
```java
someEntity.set("someStringListProperty", StringValue.of("a"), StringValue.of("b"),
    StringValue.of("c"));
```
Now you can set the property using:
```java
someEntity.set("someStringListProperty", "a", "b", "c");
```
There is now a more concise way to get the parent of an entity key (#640, #648).
```java
Key parentOfCompleteKey = someKey.parent();
```
The consistency setting (defaults to 0.9 both before and after this change) can be set in `LocalGcdHelper` (#639, #648).
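As a sketch of how the consistency setting might be supplied when starting the local helper (the three-argument `start` signature, project ID, and port below are assumptions, not confirmed API):

```java
// Hypothetical usage: start the local Datastore helper with an explicit
// consistency ratio. 0.9 is the default both before and after this change;
// lower values make the emulator simulate eventually consistent reads
// more aggressively.
LocalGcdHelper helper = LocalGcdHelper.start("my-project-id", 8080, 0.9);
```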
You no longer have to cast or use the unknown type when getting a `ListValue` from an entity (#648). Now you can use something like the following to get a list of double values:
```java
List<DoubleValue> doublesList = someEntity.get("myDoublesList");
```
- The `ResourceManager` `list` method is now supported (#651).
- `Project` is now a subclass of `ProjectInfo` (#530). The `ResourceManager` service methods now return `Project` instead of `ProjectInfo`.
- Functional classes (`Bucket`, `Blob`) now extend their associated metadata classes (`BucketInfo`, `BlobInfo`) (#530, #603, #614). The `Storage` service methods now return functional objects instead of metadata objects.
- Methods in `Table` that were meant to be public but were kept package scope are now fixed (#621).

Published by ajkannan over 8 years ago
Resumable uploads via write channel are now supported (#540). Here is an example of uploading a CSV file in chunks of `CHUNK_SIZE` bytes:
```java
try (FileChannel fileChannel = FileChannel.open(Paths.get("/path/to/your/file"))) {
  TableId tableId = TableId.of("YourDataset", "YourTable");
  LoadConfiguration configuration = LoadConfiguration.of(tableId, FormatOptions.of("CSV"));
  WriteChannel writeChannel = bigquery.writer(configuration);
  long position = 0;
  long written = fileChannel.transferTo(position, CHUNK_SIZE, writeChannel);
  while (written > 0) {
    position += written;
    written = fileChannel.transferTo(position, CHUNK_SIZE, writeChannel);
  }
  writeChannel.close();
}
```
`defaultDataset(String dataset)` (in `QueryJobInfo` and `QueryRequest`) can be used to specify a default dataset (#567).
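For example, a query can then reference tables without qualifying them with a dataset (a sketch; the dataset and table names below are placeholders):

```java
// "my_table" is resolved against the default dataset "my_dataset",
// so the query string does not need a dataset qualifier.
QueryRequest request = QueryRequest.builder("SELECT name FROM my_table")
    .defaultDataset("my_dataset")
    .build();
QueryResponse response = bigquery.query(request);
```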
- `apply` is renamed to `submit` (#562).
- `hashCode` and `equals` are now overridden in subclasses of `BaseTableInfo` (#565, #573).
- `jobComplete` is renamed to `jobCompleted` in `QueryResult` (#567).
- The precondition check that cursors are UTF-8 strings has been removed (#578).
The `EntityQuery`, `KeyQuery`, and `ProjectionEntityQuery` classes have been introduced (#585). This enables users to modify projections and group by clauses for projection entity queries after using `toBuilder()`. For example, this now works:
```java
ProjectionEntityQuery query = Query.projectionEntityQueryBuilder()
    .kind("Person")
    .projection(Projection.property("name"))
    .build();
ProjectionEntityQuery newQuery =
    query.toBuilder().projection(Projection.property("favorite_food")).build();
```
Published by mziccard almost 9 years ago
By default, requests are now retried (#547).
For example:
```java
// Use the default retry strategy
Storage storageWithRetries = StorageOptions.defaultInstance().service();

// Don't use retries
Storage storageWithoutRetries = StorageOptions.builder()
    .retryParams(RetryParams.noRetries())
    .build()
    .service();
```
`QueryResults.cursorAfter()` is now set when all results from a query have been exhausted (#549).

When running large queries, users may see Datastore-internal errors with code 500 due to a Datastore issue. This issue will be fixed in the next version of Datastore. Until then, users can set a limit on their query and use the cursor to get more results in subsequent queries. Here is an example:
```java
int limit = 100;
StructuredQuery<Entity> query = Query.entityQueryBuilder()
    .kind("user")
    .limit(limit)
    .build();
while (true) {
  QueryResults<Entity> results = datastore.run(query);
  int resultCount = 0;
  while (results.hasNext()) {
    Entity result = results.next(); // consume all results
    // do something with the result
    resultCount++;
  }
  if (resultCount < limit) {
    break;
  }
  query = query.toBuilder().startCursor(results.cursorAfter()).build();
}
```
`load` is renamed to `get` in functional classes (#535).
Published by ajkannan almost 9 years ago
Introduce support for Google Cloud BigQuery (#503): create datasets and tables, manage jobs, insert and list table data. See BigQueryExample for a complete example or API Documentation for `gcloud-java-bigquery` javadoc.
```java
import com.google.gcloud.bigquery.BaseTableInfo;
import com.google.gcloud.bigquery.BigQuery;
import com.google.gcloud.bigquery.BigQueryOptions;
import com.google.gcloud.bigquery.Field;
import com.google.gcloud.bigquery.JobStatus;
import com.google.gcloud.bigquery.LoadJobInfo;
import com.google.gcloud.bigquery.Schema;
import com.google.gcloud.bigquery.TableId;
import com.google.gcloud.bigquery.TableInfo;

BigQuery bigquery = BigQueryOptions.defaultInstance().service();
TableId tableId = TableId.of("dataset", "table");
BaseTableInfo info = bigquery.getTable(tableId);
if (info == null) {
  System.out.println("Creating table " + tableId);
  Field integerField = Field.of("fieldName", Field.Type.integer());
  bigquery.create(TableInfo.of(tableId, Schema.of(integerField)));
} else {
  System.out.println("Loading data into table " + tableId);
  LoadJobInfo loadJob = LoadJobInfo.of(tableId, "gs://bucket/path");
  loadJob = bigquery.create(loadJob);
  while (loadJob.status().state() != JobStatus.State.DONE) {
    Thread.sleep(1000L);
    loadJob = bigquery.getJob(loadJob.jobId());
  }
  if (loadJob.status().error() != null) {
    System.out.println("Job completed with errors");
  } else {
    System.out.println("Job succeeded");
  }
}
```
Introduce support for Google Cloud Resource Manager (#495): get a list of all projects associated with an account, create/update/delete projects, and undelete projects that you didn't intend to delete. See ResourceManagerExample for a complete example or API Documentation for `gcloud-java-resourcemanager` javadoc.
```java
import com.google.gcloud.resourcemanager.ProjectInfo;
import com.google.gcloud.resourcemanager.ResourceManager;
import com.google.gcloud.resourcemanager.ResourceManagerOptions;

import java.util.Iterator;

ResourceManager resourceManager = ResourceManagerOptions.defaultInstance().service();

// Replace "some-project-id" with an existing project's ID
ProjectInfo myProject = resourceManager.get("some-project-id");
ProjectInfo newProjectInfo = resourceManager.replace(myProject.toBuilder()
    .addLabel("launch-status", "in-development").build());
System.out.println("Updated the labels of project " + newProjectInfo.projectId()
    + " to be " + newProjectInfo.labels());

// List all the projects you have permission to view.
Iterator<ProjectInfo> projectIterator = resourceManager.list().iterateAll();
System.out.println("Projects I can view:");
while (projectIterator.hasNext()) {
  System.out.println(projectIterator.next().projectId());
}
```
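The create/delete/undelete operations mentioned above might look like the following sketch (the project ID is a placeholder, and quota/permission error handling is omitted):

```java
// Create a project (the ID must be globally unique), delete it, then
// undelete it while it is still pending deletion.
ProjectInfo created = resourceManager.create(ProjectInfo.builder("my-new-project-id").build());
resourceManager.delete(created.projectId());
resourceManager.undelete(created.projectId());
```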
- `RemoteGcsHelper.create(String, String)` method (#494)
- `DefaultDatastoreRpc` (#448)

Published by ajkannan almost 9 years ago
The project ID set in the Google Cloud SDK now supersedes the project ID set by Compute Engine (#337). The project ID is determined by iterating through the following list in order, stopping when a valid project ID is found:

- `GCLOUD_PROJECT` environment variable
The explicit AuthCredentials.noCredentials
option was removed.
The testing helper class `RemoteGcsHelper` now uses the `GOOGLE_APPLICATION_CREDENTIALS` and `GCLOUD_PROJECT` environment variables to set credentials and project (#335, #339).

```shell
# Before:
export GCLOUD_TESTS_PROJECT_ID="MY_PROJECT_ID"
export GCLOUD_TESTS_KEY=/path/to/my/key.json

# Now:
export GCLOUD_PROJECT="MY_PROJECT_ID"
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/my/key.json
```
`BlobReadChannel` throws a `StorageException` if a blob is updated during a read (#359, #390).
`generation` is moved from `BlobInfo` to `BlobId`, and `generationMatch` and `generationNotMatch` methods are added to `BlobSourceOption` and `BlobGetOption` (#363, #366).
```java
// Before: generation was set on BlobInfo
BlobInfo myBlobInfo = someAlreadyExistingBlobInfo.toBuilder().generation(1L).build();
// Now: generation is part of BlobId
BlobId myBlobId = BlobId.of("bucketName", "idName", 1L);
```
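With `generation` now carried on `BlobId`, the new match options can be exercised roughly as follows (a sketch; the no-argument `generationMatch()` overload that reads the generation from the `BlobId` is an assumption):

```java
// Read the blob only if its current generation still matches the one
// recorded in myBlobId; otherwise a StorageException is thrown.
byte[] content = storage.readAllBytes(myBlobId, Storage.BlobSourceOption.generationMatch());
```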
The `Blob` batch delete method now returns false for blobs that were not found (#380).
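The new return behavior can be sketched as follows, assuming a varargs batch `delete` on the `Storage` service object; the bucket and blob names are placeholders:

```java
// One boolean per BlobId: true if deleted, and now false (rather than
// an error) for blobs that were not found.
List<Boolean> deleted = storage.delete(
    BlobId.of("my-bucket", "existing-blob"),
    BlobId.of("my-bucket", "missing-blob"));
```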
- `SocketTimeoutException`s are now retried (#410, #414).
- A `SocketException` is no longer thrown when creating the Datastore service object from within the App Engine production environment (#411).
- The `toBuilder` methods in `BlobInfo` and `BucketInfo` are fixed so that `info.equals(info.toBuilder().build())` is true (#415, #416).

Published by ajkannan almost 9 years ago
Published by ajkannan almost 9 years ago
Published by ajkannan about 9 years ago
Fixes build issues that caused `ClassNotFoundException`s in the 0.0.9 Maven artifact.
Published by aozarov about 9 years ago