
Glassbury AI Onboarding Prototype

Interactive Technical Resource Guide

Project Goal

This document provides the technical details for deploying a seamless, AI-powered Candidate Onboarding & Retention Prototype. The system integrates Glassbury's self-managed **OpenTour HCM (Community Edition)** with various **Google Cloud Platform (GCP)** services.

At its core, the prototype utilizes the **Gemini 1.5 Pro** model hosted on **Vertex AI**. This powerful AI delivers personalized, behavior-based guidance to candidates during their onboarding journey, leveraging the principles of the **Fogg Behavior Model** to enhance engagement and retention.

This interactive guide breaks down the architecture, setup process, AI implementation, and deployment steps necessary to bring the prototype to life.

System Architecture

The prototype employs a modern cloud architecture centered around Google Cloud Platform services. Each component plays a specific role in managing data, hosting the application, running the AI model, and enabling development. The following table outlines the key components and their functions within the system.

| Component | GCP Service / Tool | Purpose |
| --- | --- | --- |
| Frontend/API Layer | Google App Engine (Standard) | Hosts the Python backend, providing a scalable API endpoint for the web application to request AI nudges. |
| Core Database | Google BigQuery | Central data warehouse for all candidate records, onboarding progress, and analytical data exported from OpenTour. |
| AI Model Hosting | Vertex AI | Manages and provides the API endpoint for the **Gemini 1.5 Pro** model. |
| AI Logic Framework | Fogg Behavior Model (B=MAP) | The behavioral logic layer used to determine the type of advice needed (Spark, Facilitator, Signal). |
| Development IDE | Visual Studio Code | Primary tool for all application code, data pipeline scripts, and terminal operations. |
| AI/ML R&D | Vertex AI Workbench | JupyterLab environment for **prompt engineering**, model testing, and R&D with the Gemini SDK. |

Installation and Setup

This section details the necessary steps to configure the development environment and GCP resources. All command-line operations are intended to be executed from the Integrated Terminal within Visual Studio Code, preferably on a macOS machine.

A. Core GCP and Authentication Setup

  1. Install/Update `gcloud` CLI: Ensure the Google Cloud SDK is installed and up-to-date on your development machine. Refer to the official documentation for installation instructions.
  2. Project Initialization: Authenticate your Google Cloud account and set the target project for all subsequent commands. Replace `[GLASSBURY_PROJECT_ID]` with your specific project ID.
    gcloud auth login
    gcloud config set project [GLASSBURY_PROJECT_ID]
  3. Application Default Credentials (ADC): Set up local credentials for your Python application to securely access GCP services like Vertex AI and BigQuery during development.
    gcloud auth application-default login
  4. Enable Required APIs: Verify that the necessary APIs are enabled within your GCP project. You can do this via the GCP Console under "APIs & Services" > "Library". Ensure these are enabled:
    • Vertex AI API (`aiplatform.googleapis.com`)
    • App Engine Admin API (`appengine.googleapis.com`)
    • BigQuery API (`bigquery.googleapis.com`)
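
    Alternatively, the same three APIs can be enabled from the VS Code terminal in a single command:
    gcloud services enable aiplatform.googleapis.com appengine.googleapis.com bigquery.googleapis.com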

B. Environment Setup (Python & VS Code)

  1. Create Virtual Environment (venv): Isolate project dependencies using a Python virtual environment.
    python3 -m venv venv
    source venv/bin/activate

    On Windows, use `venv\Scripts\activate`.

  2. Install Python Dependencies: Install the required libraries specified in your `requirements.txt` file.
    pip install -r requirements.txt

    Ensure `requirements.txt` includes: `flask`, `google-cloud-aiplatform`, `gunicorn`.

  3. Configure Debugger (`.vscode/launch.json`): Set up your VS Code debugger configuration to pass necessary environment variables during local testing. Create a `.vscode` folder in your project root if it doesn't exist, and add a `launch.json` file with content similar to the example below (replace `[YOUR_PROJECT_ID]` and adjust region if needed).
    {
        "version": "0.2.0",
        "configurations": [
            {
                "name": "Python: Flask",
                "type": "python",
                "request": "launch",
                "module": "flask",
                "args": [
                    "--app",
                    "fogg_behavior_model", // Assuming your main file is named this
                    "run",
                    "--port",
                    "5000"
                ],
                "justMyCode": true,
                "env": {
                    "GOOGLE_CLOUD_PROJECT_ID": "[YOUR_PROJECT_ID]",
                    "GOOGLE_CLOUD_LOCATION": "us-central1"
                }
            }
        ]
    }
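
    The `env` block above exposes the project and region to the application at runtime. A minimal sketch of reading them in Python (the variable names match the `launch.json` example):
    import os

    PROJECT_ID = os.environ["GOOGLE_CLOUD_PROJECT_ID"]
    LOCATION = os.environ.get("GOOGLE_CLOUD_LOCATION", "us-central1")  # Falls back to the default region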

C. OpenTour HCM Integration and Data Pipeline

Centralizing candidate data from OpenTour HCM into BigQuery is essential for providing context to the AI model. This involves installing OpenTour and setting up a data synchronization pipeline.

  1. OpenTour Installation: Install the self-managed community edition of OpenTour HCM on a dedicated Compute Engine VM or a GKE cluster within your GCP project. Strictly follow the official OpenTour documentation for installation and configuration procedures. Ensure network connectivity between OpenTour and other GCP services (like BigQuery).
  2. BigQuery Dataset Creation: Create a BigQuery dataset to serve as the destination for synchronized OpenTour data.
    • Navigate to BigQuery in the GCP Console.
    • Click "Create Dataset".
    • Set **Dataset ID:** `glassbury_onboarding_data`.
    • Configure location and other settings as needed.
  3. Data Synchronization Pipeline: Develop a robust Python script (using VS Code and potentially libraries like `pandas` and `google-cloud-bigquery`) to perform periodic ETL (Extract, Transform, Load).
    • **Extract:** Connect to the OpenTour database (e.g., PostgreSQL, MySQL) and query relevant candidate progress tables (e.g., profile completion status, task deadlines, module interactions).
    • **Transform:** Clean, format, and structure the extracted data to match a predefined schema in the `glassbury_onboarding_data` BigQuery dataset. Ensure data consistency and handle potential errors.
    • **Load:** Use the `google-cloud-bigquery` Python client library to load the transformed data into the appropriate BigQuery tables. Consider using incremental updates or full refreshes based on data volume and frequency requirements.
    • **Scheduling:** Schedule this Python script to run regularly (e.g., using Cloud Scheduler and Cloud Functions, or within a GKE cron job) to keep BigQuery data up-to-date.

    Note: The reliability and accuracy of this ETL pipeline are critical for the AI's effectiveness. Thorough testing and error handling are essential.
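
    As a concrete starting point, the Load step might look like the sketch below (a minimal sketch, assuming the Transform step produces a pandas DataFrame `df`; the `candidate_progress` table name is illustrative, not prescribed). Loading from a DataFrame also requires the `pyarrow` package.
    import pandas as pd
    from google.cloud import bigquery

    def load_to_bigquery(df: pd.DataFrame) -> None:
        """Append transformed OpenTour rows to BigQuery (table name is illustrative)."""
        client = bigquery.Client()  # Uses the Application Default Credentials from step A.3
        table_id = "glassbury_onboarding_data.candidate_progress"
        job_config = bigquery.LoadJobConfig(
            write_disposition=bigquery.WriteDisposition.WRITE_APPEND  # Incremental append
        )
        job = client.load_table_from_dataframe(df, table_id, job_config=job_config)
        job.result()  # Block until the load job finishes; raises on failure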

Gemini 1.5 Pro & Fogg Model Implementation

The core intelligence of the prototype resides in the Python API running on App Engine. It leverages the **Fogg Behavior Model** (B=MAP: Behavior = Motivation x Ability x Prompt) to analyze a candidate's inferred state and determine the most effective type of intervention. The **Gemini 1.5 Pro** model is then used via the Vertex AI SDK to generate the actual nudge message based on a dynamically constructed prompt reflecting the required intervention type.

A. Fogg Model Logic

The application assesses the candidate's current situation (based on data synchronized from OpenTour/BigQuery) to infer their likely **Motivation** and **Ability** levels regarding the next onboarding step. Based on these two factors, it selects one of four intervention strategies:

| State (Motivation / Ability) | Intervention Type | Prompt Goal for Gemini |
| --- | --- | --- |
| High / High | Signal | Clear, direct, immediate call to action (e.g., "Hit 'Continue' to finish now."). |
| High / Low | Facilitator | Simplify the task and provide direct help (e.g., "Having trouble with the file? Try a PDF."). |
| Low / High | Spark | Increase desire by highlighting the benefit (e.g., "Complete this step to meet your hiring manager!"). |
| Low / Low | Compassionate/Combined | Offer low-friction help and high encouragement. |
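
A minimal sketch of this selection logic in Python (the 0-1 scores, the 0.5 threshold, and the lowercase intervention names are illustrative assumptions; in the prototype, these scores would be inferred from the onboarding data in BigQuery):

def select_intervention(motivation: float, ability: float, threshold: float = 0.5) -> str:
    """Map inferred Motivation/Ability levels (0-1) to a Fogg intervention type."""
    high_motivation = motivation >= threshold
    high_ability = ability >= threshold
    if high_motivation and high_ability:
        return "signal"        # Clear, direct call to action
    if high_motivation:
        return "facilitator"   # Simplify the task, offer direct help
    if high_ability:
        return "spark"         # Highlight the benefit to raise motivation
    return "compassionate"     # Low-friction help plus encouragement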

B. Gemini API Call (Python/Vertex AI SDK)

The application uses the Vertex AI SDK (installed with the `google-cloud-aiplatform` package) to interact with the Gemini 1.5 Pro model on Vertex AI. The following function illustrates the core API call, sending the dynamically generated Fogg prompt and retrieving the textual nudge.

# Core function for calling Gemini 1.5 Pro via the Vertex AI SDK
import vertexai
from vertexai.generative_models import GenerativeModel

# Initialize the Vertex AI SDK - typically done once at application startup
# Ensure PROJECT_ID and LOCATION are set via environment variables or config
# vertexai.init(project=PROJECT_ID, location=LOCATION)

def get_gemini_response(prompt_text):
    """Sends the custom Fogg Model prompt to Gemini and gets the nudge text."""
    try:
        # Use the Gemini 1.5 Pro model ID available in your project/region
        model = GenerativeModel("gemini-1.5-pro")
        response = model.generate_content(prompt_text)
        # Add basic validation or processing of the response if needed
        return response.text
    except Exception as e:
        # Implement proper logging to Cloud Logging or another system
        print(f"Gemini API Error: {e}")
        # Return a user-friendly default message on error
        return "Sorry, I ran into a technical issue. Please try again."

Note: Robust error handling, logging, and potentially retry mechanisms should be implemented around this core API call in a production environment.
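
Putting the pieces together, a minimal `fogg_behavior_model.py` module (the module name matches the `launch.json` and App Engine `entrypoint` examples in this guide) might wire the Fogg logic and the Gemini call into a single endpoint. This is a sketch, not the definitive service: the `/nudge` route and its request fields are illustrative assumptions, and `select_intervention` and `get_gemini_response` are the functions sketched earlier, assumed to live in the same module.

import os

import vertexai
from flask import Flask, jsonify, request

app = Flask(__name__)  # Exported as 'app' to match the gunicorn entrypoint

# One-time SDK initialization using the environment variables from launch.json / app.yaml
vertexai.init(
    project=os.environ["GOOGLE_CLOUD_PROJECT_ID"],
    location=os.environ.get("GOOGLE_CLOUD_LOCATION", "us-central1"),
)

@app.route("/nudge", methods=["POST"])
def nudge():
    """Return an AI-generated nudge for the candidate state posted by the web app."""
    data = request.get_json()
    intervention = select_intervention(data["motivation"], data["ability"])
    prompt = (
        f"You are an onboarding assistant. Write one short, friendly '{intervention}' "
        f"nudge for a candidate whose next onboarding step is: {data['next_step']}"
    )
    return jsonify({"nudge": get_gemini_response(prompt)})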

Deployment to Google App Engine

Once the application is developed and tested locally, it can be deployed to Google App Engine Standard Environment for scalable and managed hosting. This involves creating a configuration file (`app.yaml`) and using the `gcloud` CLI.

  1. Create `app.yaml` Configuration: In the root directory of your project, create a file named `app.yaml`. This file tells App Engine how to run your application. Below is a basic configuration for a Python 3.9 Flask app using Gunicorn.
    runtime: python39 # Or your chosen Python 3 version (e.g., python311)
    entrypoint: gunicorn -b :$PORT fogg_behavior_model:app # Adjust 'fogg_behavior_model' if your main file name is different
    
    # Optional: Add instance class, scaling settings, environment variables etc.
    # instance_class: F2
    # automatic_scaling:
    #   min_instances: 1
    #   max_instances: 10
    # env_variables:
    #   GOOGLE_CLOUD_LOCATION: 'us-central1'

    Make sure the `entrypoint` correctly points to your Flask application instance (e.g., `filename:variable_name`). The `$PORT` variable is automatically provided by App Engine.

  2. Final Deployment: Open your VS Code terminal (ensure you are in the project root directory and the correct GCP project is set). Run the deployment command.
    gcloud app deploy

    Follow the prompts to confirm the deployment region and settings. App Engine will build a container image, upload it, and start serving your application. You can monitor the deployment progress in the terminal and view logs in the GCP Console under App Engine.

    Note: Ensure your App Engine service account has the necessary permissions to access Vertex AI and BigQuery; when deployed, the application authenticates as the App Engine default service account via Application Default Credentials. An example IAM grant appears at the end of this section.

  3. Accessing the Deployed App: Once deployed, `gcloud` will provide the URL for your application (e.g., `https://[YOUR_PROJECT_ID].appspot.com`). You can test the API endpoint at this URL.
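
To grant the permissions mentioned in step 2, the App Engine default service account (the standard `[YOUR_PROJECT_ID]@appspot.gserviceaccount.com` identity) can be given roles with `gcloud`. The roles below are a reasonable starting point for this prototype, not a definitive policy:
    gcloud projects add-iam-policy-binding [YOUR_PROJECT_ID] \
        --member="serviceAccount:[YOUR_PROJECT_ID]@appspot.gserviceaccount.com" \
        --role="roles/aiplatform.user"
    gcloud projects add-iam-policy-binding [YOUR_PROJECT_ID] \
        --member="serviceAccount:[YOUR_PROJECT_ID]@appspot.gserviceaccount.com" \
        --role="roles/bigquery.user"

Depending on how the application queries the data, dataset-level read access (for example, `roles/bigquery.dataViewer` on `glassbury_onboarding_data`) may also be required.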