
Setup Zuora Connector for BigQuery


Provides detailed instructions for setting up the Zuora Connector for BigQuery.

Configure your BigQuery Destination

To configure your Google BigQuery destination for the Zuora Connector, you must create a service account, grant it the required permissions, and configure dataset and bucket access so that data can flow from Zuora to Google BigQuery without interruption.

Prerequisites

By default, BigQuery authentication uses role-based access. To grant access, you need the name of the data-syncing service's service account, which looks like some-name@some-project.iam.gserviceaccount.com.
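Before granting access, it can help to confirm the name you were given has the expected shape. This is just a sketch of a sanity check; the email below is the placeholder from above, not a real account.

```shell
# Placeholder service account name -- substitute the one you were given.
SA_NAME="some-name@some-project.iam.gserviceaccount.com"

# Service account emails follow the pattern NAME@PROJECT.iam.gserviceaccount.com.
if echo "$SA_NAME" | grep -Eq '^[a-z0-9-]+@[a-z0-9-]+\.iam\.gserviceaccount\.com$'; then
  echo "looks like a service account email"
else
  echo "unexpected format"
fi
```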

Step 1: Create a service account in BigQuery project

  1. In the GCP console, navigate to the IAM & Admin menu, click the Service Accounts tab, and click Create service account at the top of the menu.
  2. In the first step, name the service account and click Create and Continue.
  3. In the second step, grant the service account the BigQuery User role.
  4. In the third step (Grant users access to this service account), enter the provided service account (see Prerequisites) in the Service account users role field and click Done.
  5. Once the account is created, find it in the service accounts list, click its Service account name to view the details, and make a note of the email. (This email is different from the data-syncing service's service account.)
  6. Select the Permissions tab, find the provided principal (the service account from the Prerequisites), click the Edit principal button (pencil icon), click Add another role, select the Service Account Token Creator role, and click Save.
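The console steps above can also be sketched with the gcloud CLI. This is a rough equivalent under assumed placeholder names (my-project, zuora-connector-sa, and the data-syncing service's account), not an official script; adjust each value to your environment.

```shell
# Placeholders -- substitute your own values.
PROJECT_ID="my-project"
NEW_SA="zuora-connector-sa"
SYNC_SA="some-name@some-project.iam.gserviceaccount.com"  # from the Prerequisites

# Steps 1-2: create the service account.
gcloud iam service-accounts create "$NEW_SA" \
  --project="$PROJECT_ID" \
  --display-name="Zuora Connector"

# Step 3: grant it the BigQuery User role at the project level.
gcloud projects add-iam-policy-binding "$PROJECT_ID" \
  --member="serviceAccount:${NEW_SA}@${PROJECT_ID}.iam.gserviceaccount.com" \
  --role="roles/bigquery.user"

# Steps 4 and 6: let the data-syncing service's account use the new
# service account and mint tokens for it.
gcloud iam service-accounts add-iam-policy-binding \
  "${NEW_SA}@${PROJECT_ID}.iam.gserviceaccount.com" \
  --member="serviceAccount:${SYNC_SA}" \
  --role="roles/iam.serviceAccountUser"

gcloud iam service-accounts add-iam-policy-binding \
  "${NEW_SA}@${PROJECT_ID}.iam.gserviceaccount.com" \
  --member="serviceAccount:${SYNC_SA}" \
  --role="roles/iam.serviceAccountTokenCreator"
```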
Understand the BigQuery User role

The BigQuery User role is a predefined IAM role that allows the creation of new datasets; the creator is automatically granted the BigQuery Data Owner role on each dataset it creates.

If you would like to avoid using the BigQuery User role, the minimum required permissions are:

  • On the Project level:
    • bigquery.datasets.create
    • bigquery.datasets.get
    • bigquery.jobs.create

These minimum permissions assume that the dataset has not been created beforehand.
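One way to grant exactly these permissions (a sketch, assuming the gcloud CLI and the placeholder names from earlier) is to bundle them into a custom role and bind that role instead of BigQuery User:

```shell
# Create a custom role containing only the minimum project-level permissions.
# "zuoraConnectorMinimal" and "my-project" are placeholder names.
gcloud iam roles create zuoraConnectorMinimal \
  --project="my-project" \
  --title="Zuora Connector Minimal" \
  --permissions="bigquery.datasets.create,bigquery.datasets.get,bigquery.jobs.create"

# Grant the custom role to the service account created in Step 1.
gcloud projects add-iam-policy-binding "my-project" \
  --member="serviceAccount:zuora-connector-sa@my-project.iam.gserviceaccount.com" \
  --role="projects/my-project/roles/zuoraConnectorMinimal"
```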

Loading data into a dataset that already exists

By default, a new dataset (with a name you provide) will be created in the BigQuery project. If, instead, you create the dataset ahead of time, you will need to grant the BigQuery Data Owner role to this Service Account at the dataset level.

In BigQuery, click the existing dataset. On the dataset tab, click Sharing and then Permissions. Click Add Principal, enter the Service Account name, and add the role BigQuery Data Owner.

Alternatively, the following minimum permissions can be granted to the principal and applied at the dataset level:

  • bigquery.tables.create
  • bigquery.tables.delete
  • bigquery.tables.get
  • bigquery.tables.getData
  • bigquery.tables.list
  • bigquery.tables.update
  • bigquery.tables.updateData
  • bigquery.routines.get
  • bigquery.routines.list

On the Project level, you will still need bigquery.jobs.create, but you will not need bigquery.datasets.create or bigquery.datasets.get.
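The dataset-level grant can also be done with the bq CLI by editing the dataset's access list; the OWNER access entry corresponds to BigQuery Data Owner. This is a sketch in which my-project:my_dataset and the service account email are placeholder assumptions.

```shell
# Export the dataset definition, including its "access" array.
bq show --format=prettyjson my-project:my_dataset > dataset.json

# Edit dataset.json and add an entry like the following to the "access" array:
#   { "role": "OWNER",
#     "userByEmail": "zuora-connector-sa@my-project.iam.gserviceaccount.com" }

# Apply the updated access list back to the dataset.
bq update --source dataset.json my-project:my_dataset
```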

Alternative authentication method: Granting direct access to service account

Role-based authentication is the preferred authentication mode for BigQuery, per GCP recommendations. Alternatively, you can provide a service account key that logs in directly to the created service account.

  1. Back in the Service accounts menu, click the Actions dropdown next to the newly created service account and click Manage keys.
  2. Click Add key > Create new key.
  3. Select the JSON key type and click Create. Store the generated key file securely; it cannot be downloaded again.
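The same key can be created from the gcloud CLI. A sketch, assuming the placeholder service account email from earlier; the private key is written to the local file and is not retrievable later.

```shell
# Create and download a JSON key for the service account.
gcloud iam service-accounts keys create key.json \
  --iam-account="zuora-connector-sa@my-project.iam.gserviceaccount.com"

# List existing keys to confirm creation.
gcloud iam service-accounts keys list \
  --iam-account="zuora-connector-sa@my-project.iam.gserviceaccount.com"
```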

Step 2: Create a staging bucket

  1. Log into the Google Cloud Console and navigate to Cloud Storage. Click Create to create a new bucket.
  2. Choose a name for the bucket, click Continue, and select a location for the staging bucket. Make a note of both the name and the location (region).

    Choose a location (region)

    The location you choose for your staging bucket must match the location of your destination dataset in BigQuery. When creating your bucket, be sure to choose a region in which BigQuery is supported. For more information, see BigQuery Locations.

    • If the dataset does not exist yet, the dataset will be created for you in the same region where you created your bucket.
    • If the dataset does exist, the dataset region must match the location you choose for your bucket.
  3. Click Continue and select the remaining options according to your preferences. Once you have filled in the options, click Create.
  4. On the Bucket details page, click the Permissions tab, then click Add.
  5. In the New principals dropdown, add the Service Account created in Step 1, select the Storage Admin role, and click Save.
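Steps 1 through 5 have a CLI equivalent as well. A sketch, assuming placeholder bucket and project names and the US multi-region; pick a location that matches (or will match) your destination dataset.

```shell
# Create the staging bucket. The location must be a region where BigQuery
# is supported and must match the destination dataset's location.
gcloud storage buckets create gs://my-zuora-staging-bucket \
  --project=my-project \
  --location=US

# Grant the Step 1 service account Storage Admin on this bucket only.
gcloud storage buckets add-iam-policy-binding gs://my-zuora-staging-bucket \
  --member="serviceAccount:zuora-connector-sa@my-project.iam.gserviceaccount.com" \
  --role="roles/storage.admin"
```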

Step 3: Find Project ID

  1. Log into the Google Cloud Console and, from the projects list dropdown, make a note of the BigQuery Project ID.
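If you have the gcloud CLI configured, the project ID can also be read from the command line:

```shell
# Print the currently active project ID.
gcloud config get-value project

# Or list the IDs of all projects you can access.
gcloud projects list --format="value(projectId)"
```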

Step 4: Add Your Destination

  1. After completing the initial setup, share your BigQuery host address and bucket name with a Zuora representative who will create a connection link for you.
  2. Using the connection link shared with you by Zuora, you can securely enter your Project ID, Bucket Name, Bucket Location, Destination Schema Name, and Service Account name to finalize the setup of the connection.
  3. After you fill in all the required BigQuery details through the provided link and test the connection, saving the destination will kickstart the onboarding process and begin transferring data.

Verification and Data Transfer 

For Google BigQuery, your data will be loaded into the specified datasets and tables that you have configured during the setup process. You can access and query this data directly within your BigQuery environment using SQL queries or through integrated analytics tools.

Format of Transferred Data

  • Data transferred to Google BigQuery is loaded as properly typed tables within a single schema. Each table corresponds to a distinct dataset or entity from your Zuora data.
  • In BigQuery, the last_updated timestamp for a table is already accessible in the __TABLES_SUMMARY__ metatable.
  • The exact structure and organization of your transferred data within BigQuery will be determined by the configurations you've specified during the setup process. This ensures that your data is seamlessly integrated into your existing BigQuery environment, ready for analysis and reporting.
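One way to read per-table freshness from the command line is a sketch like the following, which queries the related `__TABLES__` metatable (it exposes `last_modified_time` in epoch milliseconds); `my-project.my_dataset` is a placeholder for your destination dataset.

```shell
# List each table in the destination dataset with its last-modified time.
bq query --use_legacy_sql=false '
SELECT
  table_id,
  TIMESTAMP_MILLIS(last_modified_time) AS last_updated
FROM `my-project.my_dataset.__TABLES__`
ORDER BY last_updated DESC'
```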