Loading data from Google BigQuery

On this page, you'll learn how to add a new Google BigQuery data source to SlicingDice and create a new loading job using it to synchronize your source with your databases.


BigQuery roles and permissions

Configuring the necessary roles and permissions on BigQuery

Before connecting your Google account to SlicingDice, make sure it has at least one of the following roles in your project, so that we have permission to access and query your source:
- BigQuery Data Viewer
- BigQuery Data Editor
- BigQuery Data Owner
- BigQuery Admin

If the account you connect to SlicingDice doesn't have at least one of the roles above, you'll need to add one. To do so, follow the steps below with an administrator account:


  • Access the Identity and Access Management section on Google Cloud Platform

    With your Google Account logged in, go to the Identity and Access Management section on Google Cloud Platform

  • Click on the user that you want to edit
  • Click on "Add another role"
  • From the dropdown menu, select one of the following roles:

    - BigQuery Data Viewer
    - BigQuery Data Editor
    - BigQuery Data Owner
    - BigQuery Admin

    [Screenshot: the role dropdown in the Identity and Access Management section]
  • Click on Save

    You have now configured BigQuery to allow SlicingDice Data Loading Module to access your data.
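
Optionally, you can sanity-check these permissions before connecting the account. The sketch below is only illustrative: it uses the official google-cloud-bigquery Python client, assumes you are authenticated as the same Google account you intend to connect (for example, via gcloud auth application-default login), and the project, dataset, and table names are placeholders.

    # Optional sanity check: confirm the account can read BigQuery data.
    # "my-project", "my_dataset" and "my_table" are placeholders.
    from google.cloud import bigquery

    client = bigquery.Client(project="my-project")

    # Listing tables and reading rows only needs the BigQuery Data Viewer
    # role (or any of the broader roles listed above).
    for table in client.list_tables("my-project.my_dataset"):
        print("table:", table.table_id)

    for row in client.list_rows("my-project.my_dataset.my_table", max_results=5):
        print(dict(row))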


Add a BigQuery Data Source

Before adding your BigQuery data source on SlicingDice, you need to be logged in to our Control Panel. Then, go to the Data Sources page so we can start the tutorial.


How to add a BigQuery data source on SlicingDice

Before creating your data source, you need the following information in order to connect to your Google BigQuery projects and datasets:
- Your Google username and password
- Your BigQuery project ID
- Your BigQuery dataset
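
If you're not sure of the exact project ID or dataset ID, you can copy them from the Google Cloud console, or list them with a short google-cloud-bigquery sketch like the one below (illustrative only; "my-project" is a placeholder):

    # List the project IDs and dataset IDs visible to your Google account,
    # so you can copy the exact values the SlicingDice form asks for.
    from google.cloud import bigquery

    # "my-project" is a placeholder; any project you can access works here.
    client = bigquery.Client(project="my-project")

    # Project IDs your credentials can access.
    for project in client.list_projects():
        print("project ID:", project.project_id)

    # Dataset IDs inside the client's project.
    for dataset in client.list_datasets():
        print("dataset ID:", dataset.dataset_id)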

Now let's start creating your data source by clicking on the Create new data source button in SlicingDice's Data Sources section.


  • Data Source setup

    The first step is configuring your data source's identification on SlicingDice. The following screen shows step 1.

    [Screenshot: Step 1 - Data Source setup]

    Three fields will appear. Each field's function is described below.

    - Data Source Name: The name of your data source. Can be edited at any time. (Mandatory)
    - Data Source Labels/Tags: Labels/tags you might want to associate with a source, in order to organize your sources. Can be edited at any time. (Optional)
    - Data Source Description: The description of your data source. Can be edited at any time. (Optional)

    When ready, click on the Save & Continue button to go to Step 2.

  • Data Source Details

    Below you can see an example of the information and credentials that you should provide so SlicingDice can connect to your Google BigQuery projects.

    [Screenshot: Step 2 - Data Source Details]
    - Data Source Type: The type of data source. In this case, we're using BigQuery.
    - Connect with BigQuery: When you click this button, a new window will open where you need to log in to the Google account associated with your BigQuery data.
    - Project ID: The BigQuery project ID.
    - Dataset ID: The BigQuery dataset ID.

    Now you can go to the next step by clicking on Save & Continue. (A sketch for double-checking the Project ID and Dataset ID appears after this walkthrough.)

  • Confirmation

    Here you'll see a summary of the configurations defined for this data source before you finally create it.

    The following image shows an example of a confirmation screen, in which the data source is named drinks.

    [Screenshot: Step 3 - Confirmation]

    If everything is OK, click on the Submit button and you'll receive a message confirming that the data source was created successfully.

    Now you'll be able to find your new data source in the data sources list, as you can see in the following image.

    [Screenshot: the new data source listed on the Data Sources page]

    That's it! The next step is to load your Google BigQuery data into SlicingDice by creating a loading job.
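
Optionally, before creating the loading job, you can confirm that the Project ID and Dataset ID you entered in step 2 resolve to a real dataset. This is a minimal sketch with the google-cloud-bigquery Python client, using placeholder IDs:

    # Confirm that the Project ID / Dataset ID pair entered in step 2 exists.
    # "my-project" and "my_dataset" are placeholders.
    from google.cloud import bigquery

    client = bigquery.Client(project="my-project")
    dataset = client.get_dataset("my-project.my_dataset")  # raises NotFound if it doesn't exist
    print("Found dataset:", dataset.full_dataset_id)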

Add a new loading job using BigQuery sources

Now that the connection configuration for your BigQuery source is complete, the next step is to create and execute a loading job using the BigQuery data source you've configured on SlicingDice.

Here are the creation tutorials for each type of loading job. Choose the one that best fits your use case:

  • One-time loading job: The one-time loading job loads your data once and needs to be executed manually. This loading job type is useful if you don't update your data frequently.
  • Manual incremental loading job: The manual incremental loading job loads your data on demand, whenever new data needs to be inserted into SlicingDice. You need to start this loading job manually.
    Unlike a one-time loading job, only new rows will be inserted into the database. Your dataset needs to have a timestamp column in order to use this loading job type.
  • Automatic loading job: The automatic loading job loads your data at a predetermined time interval. You don't need to start this loading job manually, as it executes automatically.
    Your dataset needs to have a timestamp column in order to use this loading job type (see the sketch after this list for one way to check).
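
Both the manual incremental and automatic job types rely on a timestamp column in your dataset. If you want to confirm that a table has one before creating the job, a sketch like the following may help; it uses the google-cloud-bigquery Python client with a placeholder table ID, and treating BigQuery TIMESTAMP and DATETIME columns as qualifying is an assumption.

    # Check whether a table has a column suitable for incremental loading.
    # "my-project.my_dataset.my_table" is a placeholder table ID, and treating
    # TIMESTAMP/DATETIME columns as qualifying is an assumption.
    from google.cloud import bigquery

    client = bigquery.Client(project="my-project")
    table = client.get_table("my-project.my_dataset.my_table")

    timestamp_columns = [
        field.name
        for field in table.schema
        if field.field_type in ("TIMESTAMP", "DATETIME")
    ]

    if timestamp_columns:
        print("Candidate timestamp columns:", timestamp_columns)
    else:
        print("No timestamp column found; only one-time loading jobs will work here.")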