Pixee Enterprise Server Documentation¶
Welcome to the Pixee Enterprise Server documentation. This guide will help you install, configure, and operate Pixee Enterprise Server in your environment.
Overview¶
Pixee Enterprise Server is a self-hosted solution that brings the power of automated code improvements directly to your infrastructure. It analyzes your codebase, identifies potential improvements, and automatically generates pull requests with fixes and enhancements.
Getting Help¶
If you encounter issues during installation or operation:
- Check the FAQ section for common solutions
- Review the troubleshooting sections in the installation guides
- Contact our support team with detailed information about your environment and the issue
Installation
Installation Overview¶
Pixee Enterprise Server is a self-hosted solution that brings pixee.ai into a customer's infrastructure.
Installation Methods¶
There are currently two methods available for installing Pixee Enterprise Server:
- Embedded Cluster: the recommended method. It provides the most streamlined installation, configuration, update, and support experience, with a user-friendly interface for installation and configuration and enhanced troubleshooting capabilities.
- Helm Deployment: deploys Pixee Enterprise Server into a managed Kubernetes cluster.
Prerequisites¶
Before installing Pixee Enterprise Server, you'll need to provision the necessary infrastructure.
Common Requirements¶
Database¶
For trial installations you can use the embedded database and skip this section. For production environments, we recommend creating an external database:
- PostgreSQL 17.4+
- 10 GB+ available disk space
- Network connectivity between the Pixee Enterprise Server Kubernetes cluster and the database
- A database named `pixee_platform` (or any name you choose)
- A user with permissions on the `pixee_platform` database
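The database and user above can be provisioned with `psql`. A minimal sketch, where `pixee_platform` and `pixee` are the hypothetical names from this guide; substitute your own:

```shell
# Sketch only: SQL to provision the Pixee database on an external PostgreSQL server.
# The role name, database name, and password below are placeholders.
SQL=$(cat <<'EOF'
CREATE ROLE pixee WITH LOGIN PASSWORD 'change-me';
CREATE DATABASE pixee_platform OWNER pixee;
GRANT ALL PRIVILEGES ON DATABASE pixee_platform TO pixee;
EOF
)
echo "$SQL"
# Run it against your server (assumes psql and an admin role are available):
#   psql -h <db-host> -U postgres <<<"$SQL"
```

Any tooling that can execute SQL against your server works equally well; the key point is that the Pixee user must own (or have full privileges on) the database.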
External Object Store¶
You can configure Pixee Enterprise Server to use an external object store. If you prefer to use the in-cluster, embedded object store, you may skip this section.
Requirements¶
The following are requirements of an external object store compatible with Pixee Enterprise Server:
- The object store and the Kubernetes cluster are able to communicate over the network
- The object store exposes a S3 compatible API
- A bucket has been created for use as the `pixee-analysis-input` bucket
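Since the store must expose an S3-compatible API, the bucket can usually be created with the AWS CLI. A sketch, assuming the CLI is installed and configured with credentials for your store; the `--endpoint-url` value is a hypothetical placeholder and is only needed for non-AWS stores:

```shell
# Create the analysis-input bucket on an S3-compatible object store.
# Omit --endpoint-url when using AWS S3 itself.
aws s3api create-bucket --bucket pixee-analysis-input \
  --endpoint-url "https://objects.example.internal"
```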
AI Providers¶
Advanced AI capabilities are required. Review the AI Providers documentation for the AI providers supported by Pixee Enterprise Server.
Infrastructure Requirements¶
Create a VM with the following specifications:
- Linux distro: Ubuntu 24.04+ (recommended) or Enterprise Linux 9 (RHEL 9, Rocky Linux, AlmaLinux)
- `systemd` installed
- For Enterprise Linux 9: SELinux must be in permissive mode (enforcing mode is not supported)
- Allows traffic to egress to the internet (more details here)
- Allows HTTPS traffic to ingress to port 443
- Allows HTTPS traffic to ingress to port 30000 (for embedded cluster admin console)
- 8 vCPU, 32 GB RAM
- 100 GB+ disk with <10ms write latency (e.g., SSD/NVMe)
DNS Configuration: Create the appropriate DNS records so that a domain name resolves to your provisioned virtual machine.
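You can confirm the record resolves before installing. A quick check, assuming `dig` is available; the domain shown is hypothetical:

```shell
# Verify the DNS A record resolves to the VM's public IP before installing.
dig +short A pixee.example.com
# Compare the output with your VM's public IP address.
```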
TLS Certificate: To encrypt traffic to Pixee Enterprise Server, you'll need to generate or acquire a TLS certificate for use with your selected domain name. With the Embedded Cluster installation method, Pixee Enterprise Server can automate the TLS certificate request using Let's Encrypt, or use self-signed certificates, as part of the configuration process.
Select or create a Kubernetes cluster with the following available to Pixee Enterprise Server:
- 8+ vCPU
- 32+ GB RAM
- 100 GB+ disk with <10ms write latency (e.g., SSD/NVMe)
- Allow outgoing HTTP traffic to the internet (more details here)
- Allow incoming HTTPS traffic to port 443
- (optional) ingress controller installed and configured
DNS Configuration: Create the appropriate DNS records so that a domain name resolves to your Kubernetes cluster.
TLS Certificate: To encrypt traffic to Pixee Enterprise Server, you'll need to generate or acquire a TLS certificate for use with your selected domain name. With Helm, you have multiple options for TLS certificate management: using cert-manager to automatically provision TLS certificates, using pre-existing TLS certificates as Kubernetes secrets, or terminating TLS outside the cluster (e.g., via a load balancer).
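If you already hold a certificate and key, one option is to load them as a Kubernetes TLS secret that your TLS configuration can reference later. This is a sketch; the secret name, file names, and namespace are hypothetical:

```shell
# Create a TLS secret from an existing certificate/key pair.
kubectl create secret tls pixee-tls \
  --cert=tls.crt --key=tls.key \
  -n pixee-enterprise-server
```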
Additional Helm Requirements:
- Kubectl installed and configured to access the target cluster
- Helm CLI installed (version >3.15)
- The preflight and support-bundle kubectl plugins from troubleshoot.sh installed
- Access to image registry (images.pixee.ai) from Kubernetes cluster
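The preflight and support-bundle plugins can be installed with the krew plugin manager (one option; this assumes krew is already set up — the binaries are also available from the troubleshoot.sh releases):

```shell
# Install the troubleshoot.sh kubectl plugins via krew.
kubectl krew install preflight
kubectl krew install support-bundle
```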
Tip
If you need to pull images from an internal registry, update your values.yaml to override the registry and pullSecrets values for all images listed in the reference section, identified by the patterns `**.image.registry` and `**.image.pullSecrets`. In addition, if your registry requires authentication, create a `dockerconfigjson`-type secret and reference it in the pullSecret value for each image.
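The authentication secret can be created with kubectl. A sketch, with hypothetical registry, secret name, and namespace values:

```shell
# Create a dockerconfigjson-type pull secret for an authenticated internal registry.
kubectl create secret docker-registry internal-registry-auth \
  --docker-server=registry.example.internal \
  --docker-username="<username>" \
  --docker-password="<password>" \
  -n pixee-enterprise-server
```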
Installation Instructions¶
Installation¶
To install using Embedded Cluster:

1. From your virtual machine, download the Pixee installer:

   ```shell
   curl -f "https://distribution.pixee.ai/embedded/pixee/<release channel>" -H "Authorization: <your license ID>" -o pixee.tgz
   ```

2. Extract the Pixee installer:

   ```shell
   tar -xvzf pixee.tgz
   ```

3. Run the Pixee installer:

   ```shell
   sudo ./pixee install --license license.yaml
   ```

   Info: The directory used for data storage can be changed by passing the `--data-dir` flag.

4. You will be prompted to set an admin password; this password will grant access to the admin console later.

5. When the installer completes, visit the admin console URL in your browser: `https://<domain name or vm ip>:30000`. You may receive a self-signed certificate warning from your browser; this is expected.

6. If you have a domain name and TLS certificate available, you can configure the admin console to use them by following the prompts.

7. The admin console will then load the configuration page. You will be directed through a workflow that steps you through configuring Pixee Enterprise Server.
To install using Helm Deployment:

1. Authenticate against the Pixee Helm Registry:

   ```shell
   helm registry login registry.pixee.ai --username <your email address> --password <your license key>
   ```

2. Run the preflight checks. If there are any known issues that would prevent successful installation, the preflight checks will report them:

   ```shell
   helm template oci://registry.pixee.ai/pixee/<release channel>/pixee-enterprise-server --values values.yaml | kubectl preflight -
   ```

   If there are no issues, or you are able to address all reported issues, continue with the installation using helm.

3. Helm install. Execute helm against the Kubernetes cluster to install, being sure to replace your release channel below (likely `stable` or `unstable`):

   ```shell
   helm upgrade --install pixee-enterprise-server oci://registry.pixee.ai/pixee/<release channel>/pixee-enterprise-server -f values.yaml -n pixee-enterprise-server --create-namespace
   ```
Tip
Be sure to replace `<release channel>` with your assigned channel; this is likely `stable` or `unstable`.
Basic Configuration¶
After installation, you'll need to configure the basic settings for Pixee Enterprise Server. These settings include the domain name, protocol, and ingress configuration.
Configuration¶
Configuration is done through the admin console configuration page available after installation at:
https://<domain name or ip address>:30000
When you load the admin console page you will be prompted to enter your admin password. The first time configuring after installation you will be directed through a workflow that will step you through configuring Pixee Enterprise Server.
Basic Settings¶
The settings under the Basic Settings section all require your input. Here you will find the required settings for the Pixee Enterprise Server domain name, TLS options, and AI model provider settings. Make sure you review all of these settings for completeness and accuracy.
Domain¶
Enter the domain name you have assigned to your Pixee Enterprise Server. If you have not assigned a domain name, you can enter the public IP address of your Pixee Enterprise Server instead, but this will limit your TLS options.
Protocol¶
Select the protocol that will be used for your Pixee Enterprise Server. If you select HTTP, ingress traffic to your Pixee Enterprise Server will be unencrypted. The HTTP option is for quick testing configurations, or for when an external system such as an App Gateway or Load Balancer terminates TLS instead of Pixee Enterprise Server. If you have an external system (e.g., App Gateway, Load Balancer) acting as a reverse proxy to your Pixee Enterprise Server, make sure you provide reverse proxy settings in the Advanced Settings section. If you select HTTPS, ingress traffic to your Pixee Enterprise Server will be encrypted, and you will be prompted to configure TLS.
Authentication¶
Pixee Enterprise Server currently supports the following OIDC providers:
- Embedded Provider (Authentik)
- Google
- Microsoft Entra
- Okta
Select the OIDC provider you want to use for authentication. If you select Embedded Provider, Pixee Enterprise Server will use its built-in OIDC provider. For other providers, you will need to provide the necessary configuration details such as client ID, client secret, and issuer URL.
See Authentication for more information on specific provider configuration.
AI Providers¶
To use OpenAI directly, select OpenAI and enter your OpenAI API key.
To use Azure OpenAI, select Azure OpenAI and enter your Azure OpenAI resource endpoint, key, and model deployment names for o3-mini.
For Databricks please see: Databricks AI Serving Endpoints.
Create a values.yaml file and configure the following basic settings:
Domain¶
Set the URL where your Pixee Enterprise Server will be accessible (if no domain name is available, use an external IP address):
```yaml
global:
  pixee:
    domain: "<your pixee enterprise server domain name>"
```
Protocol¶
Set the HTTP protocol (http or https) used to access your Pixee Enterprise Server:
```yaml
global:
  pixee:
    protocol: "https"
```
Info
If you are using TLS to secure traffic to your Pixee Enterprise Server set this to https, even if you terminate TLS outside your cluster.
Ingress¶
If you are using an ingress controller, you can enable and configure the Pixee Enterprise Server ingress resource as follows:
```yaml
platform:
  proxy:
    # enable proxy configuration with ingress to allow headers from the ingress controller
    enabled: true
  ingress:
    enabled: true
    className: "<your ingress controller class name (e.g. nginx, gce, etc.)>"
    hosts:
      - host: "<your pixee enterprise server domain name>"
        paths:
          - path: "/"
            pathType: "Prefix"
    # If you are securing your Pixee Enterprise Server with TLS via ingress, set the following
    tls:
      - hosts:
          - "<your pixee enterprise server domain name>"
        secretName: "<your tls certificate secret name>"
```
AI Model Provider - OpenAI¶
To configure access to the OpenAI API, set the following:
```yaml
global:
  pixee:
    ai:
      openai:
        key: "<your OpenAI API key>"
        # Optional: Custom OpenAI API base URL (e.g., for Azure OpenAI compatible endpoints)
        # baseUrl: "https://example.cloud.databricks.com/serving-endpoints"
        # -- Use an existing secret instead of creating one
        existingSecret: ""
        secretKeys:
          # -- Secret key containing the api key
          apiKey: "key"
```
AI Model Provider - Azure OpenAI¶
To configure access to Azure OpenAI, set the following:
```yaml
global:
  pixee:
    ai:
      azure:
        enabled: true
        key: "<your Azure OpenAI key>"
        # -- Use an existing secret instead of creating one
        existingSecret: ""
        secretKeys:
          # -- Secret key containing the api key
          apiKey: "key"
        endpoint: "<your Azure OpenAI endpoint>"
        deployments:
          o3-mini: "<your model deployment name for o3-mini>"
```
Databricks AI Serving Endpoints¶
Pixee Enterprise Server can integrate with Databricks AI. See Databricks AI for more information.
To configure Databricks AI serving endpoints, set the following:
```yaml
global:
  pixee:
    ai:
      openai:
        enabled: true
        key: "<your Databricks PAT or API key>"
        baseUrl: "https://<your-databricks-workspace>.cloud.databricks.com/serving-endpoints"
        # -- Use an existing secret instead of creating one
        existingSecret: ""
        secretKeys:
          # -- Secret key containing the api key
          apiKey: "key"
```
Info
The baseUrl should point to your Databricks workspace serving endpoints. Ensure the required model endpoint (o3-mini) is deployed and accessible.
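To confirm the endpoint is deployed and reachable, you can query the Databricks serving-endpoints REST API. A sketch, assuming a workspace personal access token is exported as `DATABRICKS_TOKEN`; the workspace URL is a placeholder:

```shell
# Check that the o3-mini serving endpoint exists and is ready.
curl -s -H "Authorization: Bearer $DATABRICKS_TOKEN" \
  "https://<your-databricks-workspace>.cloud.databricks.com/api/2.0/serving-endpoints/o3-mini"
```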
Authentication¶
Pixee Enterprise Server supports multiple authentication providers. This section covers configuration for all supported authentication methods.
Info
Support for OIDC compatible identity providers is in active development, contact support@pixee.ai to request additional support.
Embedded Identity Provider (Authentik)¶
Pixee Enterprise Server includes Authentik as an embedded identity provider. This provides a full-featured identity management solution without requiring an external OIDC provider.
Features¶
- User management with web-based admin interface
- Support for local users and passwords
- Federation with external identity providers (Google Workspace, Oracle, etc.)
- Self-service password change
- Session management
- Pixee-branded login experience
No Email / SMTP Support
The embedded Authentik deployment does not have SMTP configured. This means email-based features such as self-service password reset, email verification, and notification emails are not available. Password resets must be performed by an administrator through the Authentik admin interface or by using the recovery key command described below.
Configuration¶
To enable Authentik in Embedded Cluster deployments:
1. Navigate to the admin console, select the Config tab, then go to the Basic Settings section
2. Under Authentication mode, select "Authentik"
3. Save and deploy the configuration
After deployment, Authentik will automatically initialize with the Pixee OIDC application pre-configured. The default admin credentials are displayed in the config page.
To enable Authentik in Helm deployments, set the following in your values.yaml:
```yaml
authentik:
  enabled: true
  pixee:
    clientSecret: "<generate a secure random string>"
  authentik:
    secret_key: "<generate a secure random string - must not change after install>"
    bootstrap:
      password: "<initial admin password for akadmin user>"
  postgresql:
    host: "<postgresql-host>"
    name: "authentik"
    user: "authentik"
    password: "<database-password>"
  redis:
    host: "<redis-host>"
    port: 6379
global:
  pixee:
    access:
      enabled: true
      oidc:
        client:
          id: "pixee"
          secret: "<same as authentik.pixee.clientSecret above>"
          redirectUri: "https://<your-domain>/api/auth/callback"
```
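The secure random strings for the client secret and secret key can be generated with openssl (one common approach; any cryptographically secure generator works):

```shell
# Generate secure random values for clientSecret and secret_key.
CLIENT_SECRET=$(openssl rand -base64 32)
SECRET_KEY=$(openssl rand -base64 32)
echo "clientSecret: $CLIENT_SECRET"
echo "secret_key:   $SECRET_KEY"
```

Remember that the secret key must not change after install, so store it somewhere durable before deploying.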
Initial Setup¶
After deploying with Authentik enabled:

For Embedded Cluster deployments:

1. Access the admin interface by navigating to `https://<your-domain>/authentik/`. Log in with username `akadmin` and the password shown in the admin console config page (your license ID). After logging in, click the Admin interface button to access user management.
2. Change the admin password (recommended): navigate to Directory > Users, select `akadmin`, and update the password. This change will persist across upgrades.
3. Create additional users as needed through the Authentik admin interface.

For Helm deployments:

1. Find the admin password in your values.yaml under `authentik.authentik.bootstrap_password`.
2. Access the admin interface by navigating to `https://<your-domain>/authentik/`. Log in with username `akadmin` and the configured password, then click the Admin interface button.
3. Create additional users as needed through the Authentik admin interface.
Recovery Access
If you need to reset a user's password, the easiest way is to create a recovery link from the Authentik admin UI:
- Navigate to Directory → Users and select the user
- Click Create Recovery Link — you can set how long the link stays valid (default: 30 minutes)
- Send the link to the user; they will be prompted to set a new password
As a CLI fallback, you can generate a recovery link via kubectl:
```shell
kubectl exec -n <namespace> deploy/<release-name>-authentik-server -- ak create_recovery_key 30 akadmin
```
This generates a recovery URL valid for 30 minutes.
Upgrades and Persistence
- User accounts, passwords, and settings are stored in the Authentik database and persist across upgrades
- Password changes made by users or admins will not be overwritten during upgrades
- The OIDC application configuration is managed declaratively and will be updated automatically during upgrades
User Management¶
Users are managed through the Authentik admin interface at https://<your-domain>/authentik/if/admin/.
Creating Users¶
- In the admin interface, go to Directory → Users and click Create
- Fill in the user's details (username, display name, email) and click Create
- After creating the user, select them and click Create Recovery Link to generate a one-time password setup link — you can choose how long the link stays valid
- Send the recovery link to the user; they will be prompted to set their password on first visit
For more details, see the Authentik documentation on creating users and creating recovery links.
Editing and Deleting Users¶
- Go to Directory → Users, select a user, and update their details or deactivate their account
- Users created here can log in to Pixee Enterprise Server
Federating External Identity Providers¶
Authentik supports federating external identity providers so users can log in with their existing corporate credentials. After creating a source for your identity provider (see provider-specific sections below), you must add it to the login page.
Adding a Source to the Login Page¶
This step is the same for all federated identity providers. After creating a source in Authentik:
- In the Authentik admin interface, go to Flows and Stages → Flows
- Click on default-authentication-flow
- Go to the Stage Bindings tab
- Click Edit Stage on the default-authentication-identification stage
- Under Source settings, add your identity provider source to the Selected sources field
- Click Update to save
Users will now see the identity provider as a login option on the Authentik login page.
Google Workspace (SAML)¶
Pixee Enterprise Server supports Google Workspace as an external identity provider using SAML.
Step 1: Create a SAML App in Google Workspace¶
1. Go to Google Admin Console (`admin.google.com`) → Apps → Web and mobile apps → Add app → Add custom SAML app
2. Enter a name (e.g., `Pixee Enterprise Server`) and click Continue
3. Copy the SSO URL and Certificate from Google; you will need these for the Authentik source configuration
4. Under Service Provider Details, set:
   - ACS URL: `https://<your-domain>/authentik/source/saml/google/acs/`
   - Entity ID: `https://<your-domain>/authentik/source/saml/google/metadata`
   - Name ID format: `EMAIL`
   - Name ID: `Basic Information > Primary email`
5. Check the Signed response checkbox
6. Click Continue, then Finish
Enable the app for users
By default, new SAML apps in Google Workspace are OFF for everyone. You must turn it on:
- Click on the newly created app
- Click User access
- Set the service status to ON for everyone (or for the appropriate organizational units)
- Click Save
Step 2: Create a SAML Source in Authentik¶
Follow the Authentik documentation for Google Workspace SAML integration to create a SAML source using the SSO URL and Certificate from Step 1.
1. In the Authentik admin interface, go to Directory → Federation and Social login → Create → SAML Source
2. Set the Name (e.g., `Google Workspace`) and Slug (e.g., `google`)
3. Set the Icon field to `/static/authentik/sources/google.svg` so the Google logo appears on the login page
4. Set the SSO URL to the value copied from Google (e.g., `https://accounts.google.com/o/saml2/idp?idpid=<your-idp-id>`)
5. Set the Binding Type to Redirect (required for auto-redirect to work)
6. Upload the Signing Certificate downloaded from Google
After creating the source, add it to the login page.
Troubleshooting¶
- 403 app_not_configured_for_user: Either the Entity ID doesn't match or the app isn't enabled for the user. Verify that the Entity ID in Google Admin Console exactly matches the Authentik metadata URL (case-sensitive), and that the app is turned ON for the user's organizational unit.
- No Signature exists in the Response element: Enable the Signed response checkbox in the Google Admin Console SAML app under Service Provider Details.
- "Permission denied" on login: Verify the Pixee application in Authentik is linked to the `pixee` provider. Check Applications > Pixee Enterprise Server > Provider assignment.
Oracle Identity Domains (OAuth)¶
Pixee Enterprise Server supports Oracle Identity Domains as an external identity provider using OAuth/OIDC.
Step 1: Create a Confidential Application in Oracle¶
1. Go to OCI Console > Identity & Security > Domains and select your domain
2. Navigate to Integrated applications > Add application > Confidential Application
3. Enter a name (e.g., `Pixee Enterprise Server`) and click Next
4. Under Client configuration, check "Configure this application as a client now"
5. Set Allowed Grant Types to Authorization Code
6. Set Redirect URL to: `https://<your-domain>/authentik/source/oauth/callback/oracle/`
7. Leave Token issuance policy set to All
8. Click Finish, then Activate the application
9. Copy the Client ID and Client Secret
Warning
Do not register Authentik as a Social Identity Provider in Oracle. Oracle should handle password authentication directly.
Step 2: Configure Authentik Federation¶
Follow the Authentik documentation for creating an OAuth Source using the OpenID Connect type.
When configuring the source, use the Client ID and Client Secret from Step 1. Your Oracle OIDC endpoint URLs follow this pattern (replace <your-idcs-instance> with your domain identifier):
- Authorization URL: `https://<your-idcs-instance>.identity.oraclecloud.com/oauth2/v1/authorize`
- Access token URL: `https://<your-idcs-instance>.identity.oraclecloud.com/oauth2/v1/token`
- Profile URL: `https://<your-idcs-instance>.identity.oraclecloud.com/oauth2/v1/userinfo`
You can find these values in your Oracle OIDC discovery document at https://<your-idcs-instance>.identity.oraclecloud.com/.well-known/openid-configuration.
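The three URLs follow mechanically from the domain identifier, so they can be derived in a small script. A sketch; the identifier shown is hypothetical, and the commented-out curl is an optional live check against the discovery document:

```shell
# Derive the Oracle OIDC endpoint URLs from the IDCS domain identifier.
IDCS="idcs-abcd1234"   # hypothetical domain identifier -- substitute your own
BASE="https://${IDCS}.identity.oraclecloud.com"
AUTH_URL="${BASE}/oauth2/v1/authorize"
TOKEN_URL="${BASE}/oauth2/v1/token"
PROFILE_URL="${BASE}/oauth2/v1/userinfo"
printf '%s\n' "$AUTH_URL" "$TOKEN_URL" "$PROFILE_URL"
# Optional live check (requires network access and jq):
#   curl -s "${BASE}/.well-known/openid-configuration" \
#     | jq -r '.authorization_endpoint, .token_endpoint, .userinfo_endpoint'
```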
Note
All three endpoint URLs must be set explicitly on the source. Do not rely solely on the OIDC Well-known URL to auto-populate them.
After creating the source, add it to the login page.
Troubleshooting¶
- "Permission denied" on login: Verify the Pixee application in Authentik is linked to the `pixee` provider. Check Applications > Pixee Enterprise Server > Provider assignment.
- Redirect loop on Oracle login: Ensure the Authorization, Token, and Profile URLs are all explicitly set on the Oracle source. If any are blank, the redirect loops back to Authentik.
- Oracle shows Authentik login button: Remove any Social Identity Provider entries for Authentik from Oracle under Security > Identity providers.
Auto-Redirect to Identity Provider¶
By default, when a federated identity provider is added as a source, users see the Authentik login page with both username/password fields and the identity provider button. To skip this page and redirect users directly to the identity provider, configure the identification stage to auto-redirect:
- In the Authentik admin interface, go to Flows and Stages → Stages
- Edit default-authentication-identification
- Under User fields, deselect all fields (remove Username, Email, etc.)
- Under Sources, ensure only the identity provider source is selected
- Ensure the Passwordless flow field is not set (empty/none) — auto-redirect only works when this is unset
- Click Update to save
With no user fields and exactly one source configured, Authentik automatically redirects users to the identity provider without showing the login page.
SAML Binding Type
For SAML sources, ensure the Binding Type on the source is set to Redirect rather than POST. With Redirect binding, Authentik performs a direct HTTP 302 to the identity provider. POST binding requires an intermediate page to submit the SAML request form.
Multiple Identity Providers
If more than one source is configured on the identification stage, auto-redirect is disabled and users will see a source selection page instead.
Direct Login for Administrators¶
When auto-redirect is enabled, administrators who need to log in with username/password (e.g., the akadmin account) can no longer use the default login page. Create a separate authentication flow for direct login:
Create an Identification Stage¶
1. Go to Flows and Stages → Stages → Create
2. Select Identification as the stage type
3. Configure:
   - Name: `direct-authentication-identification`
   - User fields: select Username
   - Sources: leave empty (no identity provider buttons)
   - Password stage: select `default-authentication-password` (embeds the password field on the same page)
4. Click Create
Create the Direct Authentication Flow¶
1. Go to Flows and Stages → Flows → Create
2. Configure:
   - Name: `Direct Authentication Flow`
   - Slug: `direct-authentication-flow`
   - Designation: Authentication
   - Required authentication level: Require no authentication
3. Click Create
Bind Stages to the Flow¶
1. Click on the direct-authentication-flow flow to open it
2. Go to the Stage Bindings tab
3. Click Bind Stage and add the following bindings:

   | Order | Stage |
   | --- | --- |
   | 10 | `direct-authentication-identification` |
   | 30 | `default-authentication-mfa-validation` |
   | 100 | `default-authentication-login` |

   Note: The `default-authentication-password` stage is not bound separately because it is already embedded in the identification stage (configured above). Adding it as a separate binding would prompt for the password twice.

4. Administrators can now log in directly at: `https://<your-domain>/authentik/if/flow/direct-authentication-flow/`
Google Authentication¶
Pixee Enterprise Server supports Google authentication using OAuth 2.0.
Configuration¶
You must set up a new OAuth client and retrieve the client ID and client secret. See the Google Cloud Console documentation for more information on creating a new OAuth 2.0 Client ID.
To configure Google authentication in Embedded Cluster deployments follow:
Navigate to the admin console, select the Config tab, then go to the Basic Settings section.
Under Authentication mode, select Google as the provider and provide a client ID and client secret.
To configure Google authentication in Helm Deployment follow:
To enable Google authentication, set the following in your values.yaml:
```yaml
global:
  pixee:
    access:
      oidc:
        client:
          provider: google
          id: '<your Google oidc client id>'
          secret: '<your Google oidc client secret>'
```
Microsoft Entra Authentication¶
Pixee Enterprise Server supports Microsoft Entra authentication with single tenant applications.
Create an App Registration¶
To set up OIDC for Microsoft, go to your Microsoft Azure Portal and search for Microsoft Entra ID. Select Microsoft Entra ID under Services.
Look for Manage on the left navigation bar, click on App registrations, then click on New registration.
Fill in your application name, select the Single tenant option, and add a Web Redirect URI of https://<domain>/api/auth/login, then click on Register.
Retrieve App Registration Details¶
After creating the app registration, you will be redirected to the app's overview page. On this page you will find:
- Application (client) ID: Save this ID; you will use it as the `ClientID` in your Pixee configuration.
Then navigate to Certificates & secrets in the left navigation bar, and click on New client secret to create a new secret.
Client Secret: After creating the client secret, copy the value immediately as it will not be shown again. This value will be used as the `ClientSecret` in your Pixee configuration.
Authority URL: This can be obtained from the "Endpoints" section of the App Registration.
To configure Microsoft Entra authentication in Embedded Cluster deployments follow:
Navigate to the admin console, select the Config tab, then go to the Basic Settings section.
Under Authentication mode, select Microsoft Entra as the provider and provide a client ID, client secret, and authority URL.
To configure Microsoft Entra authentication in Helm Deployment follow:
To enable Microsoft authentication, set the following in your values.yaml:
```yaml
global:
  pixee:
    access:
      oidc:
        client:
          provider: microsoft
          id: '<your Microsoft oidc client id>'
          secret: '<your Microsoft oidc client secret>'
          authServerUrl: '<your Microsoft oidc auth server url, such as https://login.microsoftonline.com/{tenant_id}>'
```
Okta Authentication¶
Pixee Enterprise Server supports Okta OIDC authentication.
Configuration¶
You must create a new OIDC App Integration from the Okta Admin Console and retrieve the client ID, client secret, and Okta URL:
- Log in to the Okta Admin Console as an administrator.
- Navigate to Applications > Applications > Add App Integration.
- Select OIDC - OpenID Connect, set Application Type to Web Application, and then click Next.
- Configure the following required settings:
- App Integration Name: pixee
- Sign-in redirect URIs: `https://<domain>/api/auth/login`
- Under Assignments, select how you'd like to control access to Pixee. Allow everyone in your organization to access or select a group to limit access.
- Click Save.
- Under Client Credentials, take note of the Client ID. This value will be required in the Pixee Admin Console.
- Under CLIENT SECRETS, click Copy to clipboard next to the secret and take note of the value; it will also be required in the Pixee Admin Console.
The Okta URL is of the form https://{tenant-name}.okta.com. You can verify this is correct by viewing the well-known OpenID Connect configuration at https://{tenant-name}.okta.com/.well-known/openid-configuration.
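The verification can be scripted. A sketch with a hypothetical tenant name; the commented-out curl is the optional live check against the well-known configuration:

```shell
# Build the Okta URL from the tenant name and print the discovery document URL.
TENANT="acme"   # hypothetical tenant name -- substitute your own
OKTA_URL="https://${TENANT}.okta.com"
echo "${OKTA_URL}/.well-known/openid-configuration"
# Optional live check (requires network access):
#   curl -s "${OKTA_URL}/.well-known/openid-configuration" | head
```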
To configure Okta authentication in Embedded Cluster deployments follow:
Navigate to the admin console, select the Config tab, then go to the Basic Settings section.
Under Authentication mode, select Okta as the provider and provide a client ID, client secret, and Okta URL.
To configure Okta authentication in Helm Deployment follow:
To enable Okta authentication, set the following in your values.yaml:
```yaml
global:
  pixee:
    access:
      oidc:
        client:
          provider: okta
          id: '<your Okta oidc client id>'
          secret: '<your Okta oidc client secret>'
          authServerUrl: '<your Okta oidc auth server url, such as https://{tenant-name}.okta.com>'
```
Development Platform Integrations¶
This section covers integrating Pixee Enterprise Server with various development platforms and source code management systems.
Azure DevOps Integration¶
Azure DevOps integration allows Pixee Enterprise Server to work with your Azure DevOps repositories and requires a personal access token with specific permissions.
Requirements¶
Azure DevOps integration requires:
- Your Azure DevOps organization name
- A personal access token with a custom scope that includes full Code access (not "Full access" which grants broader permissions than necessary)
Info
The webhook user and password are optional properties for Azure DevOps webhook authentication. If configured, these credentials will be used to authenticate incoming webhook requests from Azure DevOps.
Configuration¶
For embedded cluster deployments, navigate to the admin console, Config tab and then to the Development Platforms section.
Select the Azure DevOps checkbox to enable Azure DevOps integration.
Enter the following information in the configuration fields:
- Organization: Your Azure DevOps organization name
- Token: Your personal access token with full Code access
- Webhook credentials (optional): Username and password for webhook authentication if desired
For Helm deployments, add the following to your values.yaml:
```yaml
platform:
  scm:
    azure:
      organization: "<your azure devops organization name>"
      token: "<your personal access token>"
      # Optional: For webhook authentication
      # webhook:
      #   user: "<your webhook username>"
      #   password: "<your webhook password>"
      # Use existing secret instead of creating one
      existingSecret: ""
      secretKeys:
        # -- The secret key containing the token
        tokenKey: "token"
        # -- The secret key containing the webhook password
        webhookPasswordKey: "webhookPassword"
```
BitBucket Cloud Integration¶
BitBucket Cloud integration allows Pixee Enterprise Server to work with your BitBucket repositories and requires account credentials with specific permissions.
For security, it is recommended to create and use an API token for BitBucket Cloud integration rather than using personal credentials. See the BitBucket API Token documentation for information on creating an API token.
Note
BitBucket API tokens require your account's email address for API authentication, while Git operations use your username. Make sure to configure both values.
Requirements¶
BitBucket Cloud integration requires:
- A BitBucket Cloud username (used for Git operations)
- Your BitBucket account email address (used for API authentication)
- An API token with the following scopes:
  - read:user:bitbucket
  - read:workspace:bitbucket
  - read:repository:bitbucket
  - read:pullrequest:bitbucket
  - write:repository:bitbucket
  - write:pullrequest:bitbucket
Configuration¶
For embedded cluster deployments, navigate to the admin console, Config tab and then to the Development Platforms section.
Select the BitBucket checkbox to enable BitBucket Cloud integration.
Enter the following information in the configuration fields:
- Username: Your BitBucket Cloud username (used for Git operations)
- Email Address: Your BitBucket account email address (used for API authentication)
- API Token: Your BitBucket API token
For Helm deployments, add the following to your values.yaml:
platform:
  scm:
    bitbucket:
      username: "<your bitbucket cloud username>"
      emailAddress: "<your bitbucket account email address>"
      apiToken: "<your bitbucket api token>"
      # Use existing secret instead of creating one
      existingSecret: ""
      secretKeys:
        # -- The secret key containing the API token
        apiTokenKey: "apiToken"
GitHub Integration¶
GitHub integration allows Pixee Enterprise Server to work with GitHub.com or self-hosted GitHub Enterprise Servers and requires a custom GitHub app to be created.
Pixee Enterprise Server is able to integrate with GitHub.com and self-hosted GitHub Enterprise Servers. If you are self-hosting a GitHub enterprise server or otherwise have configured GitHub enterprise server on a domain other than github.com, see the Configuration section below for instructions on setting your custom GitHub domain.
GitHub integration comes in the form of a custom GitHub app, which will be needed to configure GitHub integration in Pixee Enterprise Server. A custom GitHub app configures webhook events, event destination, and permissions for enhanced GitHub integration. In creating this application, we have followed the best practices provided by GitHub.
Info
Network connectivity between your GitHub instance (.com or Enterprise Server) and Pixee Enterprise Server is required. The specifics vary based on the deployment configuration of GitHub Enterprise Server and Pixee Enterprise Server.
GitHub App Setup¶
Unless otherwise instructed, leave the existing default values provided by GitHub.
1. Go to https://github.com/settings/apps, replacing github.com with your own private GitHub host as needed.
2. Click the New GitHub App button.
3. Set the GitHub App name to something unique (e.g., "AcmePixeebotApp") and save this value for later.
4. Set Homepage URL to anything (e.g., "https://pixee.ai"); this can be updated later.
5. Set the Callback URL to the URL of your host/cluster in the following format: http://acme.getpixee.com/api/auth/login.
6. Check Request user authorization (OAuth) during installation.
7. Check Active under Webhook.
8. Set Webhook URL to the URL of your host/cluster in the following format: http://acme.getpixee.com/github-event.
9. Set Webhook Secret to a secret value; a randomly generated string will work (save this for later).
10. Set these Repository permissions:

| Repository permission | Access |
| --- | --- |
| Checks | Read and write |
| Code scanning alerts | Read and write |
| Commit statuses | Read and write |
| Contents | Read and write |
| Dependabot alerts | Read and write |
| Issues | Read and write |
| Metadata | Read-only |
| Pull Requests | Read and write |
| Workflows | Read and write |

11. Set these Organization permissions:

| Organization permission | Access |
| --- | --- |
| Members | Read-only |

12. Set these Account permissions:

| Account permission | Access |
| --- | --- |
| Email addresses | Read-only |

13. Check to Subscribe to events for the following:
    - Code scanning alert
    - Check Run
    - Create
    - Dependabot alert
    - Issue Comment
    - Issues
    - Pull request
    - Pull request review
    - Pull request review comment
    - Pull request review thread
    - Push
    - Repository
14. For "Where can this GitHub App be installed?", select Only on this account; this can be updated later.
15. Click the Create GitHub App button.
16. Once the GitHub App is created, you should see the GitHub App configuration page.
17. Copy the App ID and save it for later.
18. Scroll down and click Generate a private key, then download the private key file and save it for later.
Configuration¶
Select your installation method for instructions.
For embedded cluster deployments, navigate to the admin console, Config tab and then to the Development Platforms section.
Select the GitHub checkbox to enable GitHub integration.
If you are self-hosting a GitHub enterprise server or otherwise have configured GitHub enterprise server on a domain other than github.com, be sure to select custom domain for the GitHub domain setting in the Pixee Enterprise Server admin console and enter your custom GitHub domain.
After creating your GitHub App, enter the following data into the appropriate fields on the Pixee Enterprise Server admin console configuration screen:
- app name
- app id
- app private key (downloaded from browser)
For Helm deployments, add the following to your values.yaml:
platform:
  github:
    appName: "<your custom GitHub app name>"
    appId: "<your custom GitHub app id>"
    appWebhookSecret: "<your custom GitHub app webhook secret>"
    appPrivateKey: |
      -----BEGIN RSA PRIVATE KEY-----
      ...
      -----END RSA PRIVATE KEY-----
    # -- Use an existing secret instead of creating one
    existingSecret: ""
    secretKeys:
      # -- The secret key containing the appWebhookSecret
      appWebhookSecretKey: appWebhookSecret
      # -- The secret key containing the appPrivateKey
      appPrivateKeySecretKey: appPrivateKey
    # For GitHub Enterprise hosted at domains other than github.com, uncomment and set your GitHub Enterprise url:
    # url: "https://github.your-company.com"
Tip
Be sure that the indentation is correct for each line of the GitHub App private key.
Verification¶
If you enabled GitHub integration and created a custom GitHub app, you can verify your GitHub App connectivity by checking your GitHub App's event log. This log can be accessed through your GitHub App's settings under the "Advanced" section. See GitHub.com for more information.
GitLab Integration¶
Pixee Enterprise Server is able to integrate with https://gitlab.com as well as self-hosted GitLab servers. If you have a self-hosted GitLab server, see the Configuration section below for instructions on setting your custom GitLab base URI.
Requirements¶
GitLab integration requires:
- A GitLab personal access token with the following scopes:
  - api
  - read_user
  - read_repository
  - read_api
  - write_repository
  - ai_features
  - read_registry
  - read_virtual_registry
- (Optional) Self-hosted GitLab server base URI if not using GitLab.com
- (Optional) Webhook secret for GitLab webhook integration
Tip
It is recommended to use a GitLab service account to generate the personal access token rather than a personal user account. Service accounts are not tied to individual users, which avoids disruption if a team member leaves or their account is modified. The service account should be granted access to the groups or projects that Pixee will manage.
Configuration¶
For embedded cluster deployments, navigate to the admin console, Config tab and then to the Development Platforms section.
Select the GitLab checkbox to enable GitLab integration.
Enter the following information in the configuration fields:
- Token: Your GitLab personal access token with the required scopes listed above
- Base URI (optional): Your self-hosted GitLab server URL
- Webhook secret (optional): Secret for webhook authentication
For Helm deployments, add the following to your values.yaml:
platform:
  scm:
    gitlab:
      # For self-hosted GitLab, add:
      # baseUri: "https://gitlab.your-company.com"
      token: "your-personal-access-token" # requires scopes: api, read_user, read_repository, read_api, write_repository, ai_features, read_registry, read_virtual_registry
      # If you are using GitLab webhooks, provide the webhook secret:
      # webhookSecret: "your-gitlab-webhook-secret"
      # Use existing secret instead of creating one
      existingSecret: ""
      secretKeys:
        # -- The secret key containing the token
        tokenKey: "token"
        # -- The secret key containing the webhookSecret
        webhookSecretKey: "webhookSecret"
Webhook Configuration¶
If you want to use webhooks to notify Pixee of build events, you'll need to configure webhooks in your GitLab project.
The webhook URI should be: https://<example-pixee-server.com>/api/v1/integrations/gitlab-default/webhooks
For detailed instructions on configuring GitLab webhooks, see the GitLab Webhook Documentation.
The webhook secret configured in Pixee Enterprise Server should match the secret token configured in your GitLab webhook settings.
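To sanity-check the endpoint and secret pairing, the delivery GitLab performs can be sketched with curl. The hostname and secret below are hypothetical placeholders; GitLab sends the configured secret token in the X-Gitlab-Token request header, and Pixee is expected to reject deliveries whose token does not match its configured webhook secret.

```shell
PIXEE_HOST="example-pixee-server.com"    # hypothetical; use your Pixee hostname
WEBHOOK_SECRET="example-webhook-secret"  # must match the secret set in GitLab
WEBHOOK_URL="https://${PIXEE_HOST}/api/v1/integrations/gitlab-default/webhooks"

# GitLab delivers events with the secret token in the X-Gitlab-Token header.
# (Drop the leading echo to actually send the request to a live server.)
echo curl -X POST "$WEBHOOK_URL" \
  -H "Content-Type: application/json" \
  -H "X-Gitlab-Token: $WEBHOOK_SECRET" \
  -d '{"object_kind": "pipeline"}'
```

A real delivery from GitLab carries a full pipeline event payload; the minimal body above is only for checking connectivity and authentication.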
Security Integrations¶
This section covers integrating Pixee Enterprise Server with various security scanning and analysis tools.
HCL AppScan Integration¶
HCL AppScan integration allows Pixee Enterprise Server to communicate with your existing AppScan security scans to analyze, react to, fix and update issues.
Requirements¶
HCL AppScan integration requires:
- An AppScan Base URI (defaults to https://cloud.appscan.com)
- An AppScan Key ID and Key Secret
- The key ID must be attached to a role with permissions to post comments on issues
- Webhook authentication credentials:
- Basic Auth (recommended): Username and password for HTTP Basic authentication on incoming webhooks.
- Webhook Secret (deprecated): The secret is embedded in the webhook URL path. This method is deprecated in favor of Basic Auth.
Configuration¶
For embedded cluster deployments, navigate to the admin console, Config tab and then to the Security Tool section.
Select the AppScan checkbox to enable HCL AppScan integration.
Enter the following information in the configuration fields:
- Base URI: Your AppScan base URI (defaults to https://cloud.appscan.com)
- Key ID: Your AppScan key ID with comment permissions
- Key Secret: Your AppScan key secret
- Webhook Authentication Mode: Choose your authentication method:
- Basic Auth (Username/Password) (recommended): Enter username and password for HTTP Basic authentication on incoming webhooks.
- Webhook Secret (deprecated): Enter a shared secret that will be embedded in the webhook URL. This method is deprecated in favor of Basic Auth.
For Helm deployments, add the following to your values.yaml:
platform:
  pixeebot:
    appscan:
      apiKeyId: "your-appscan-key-id"
      apiKeySecret: "your-appscan-key-secret"
      webhook:
        user: "your-webhook-username"
        password: "your-webhook-password"
      # Use existing secret instead of creating one
      existingSecret: ""
      secretKeys:
        apiKeySecretKey: "apiKeySecret"
        webhookUserKey: "webhookUser"
        webhookPasswordKey: "webhookPassword"
Webhook Configuration¶
To receive notifications from AppScan, you'll need to configure two webhooks in your AppScan presence server using Basic Auth.
Creating the Authorization Header¶
First, generate the Base64-encoded authorization header using the webhook username and password you configured in Pixee Enterprise Server:
echo -n "username:password" | base64
This will output a Base64 string like dXNlcm5hbWU6cGFzc3dvcmQ=. Prepend Basic to create the full authorization header value.
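The same header can be assembled in one step. With the example credentials username:password, this sketch prints exactly the value described above:

```shell
# Build the HTTP Basic authorization header value from webhook credentials.
# "username:password" is a placeholder -- use the webhook credentials you
# configured in Pixee Enterprise Server.
AUTH_HEADER="Basic $(printf '%s' 'username:password' | base64)"
echo "$AUTH_HEADER"
# prints: Basic dXNlcm5hbWU6cGFzc3dvcmQ=
```

Using printf '%s' (rather than plain echo) avoids accidentally encoding a trailing newline into the credentials.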
Webhook 1: Scan Execution Completed¶
This webhook notifies Pixee Enterprise Server when an AppScan scan completes. Use the AppScan Webhook API to create it with the following request body:
{
  "AuthorizationHeader": "Basic <your-base64-encoded-credentials>",
  "PresenceId": "<your-presence-id>",
  "Uri": "https://<your-pixee-server>/api/v1/integrations/appscan-default/webhooks/_/ScanExecutionCompleted/{SubjectId}",
  "Global": true,
  "AssetGroupId": "<your-asset-group-id>",
  "Event": "ScanExecutionCompleted"
}
Webhook 2: New Patch Request¶
This webhook notifies Pixee Enterprise Server when a new patch is requested in AppScan. Use the AppScan Webhook API to create it with the following request body:
{
  "AuthorizationHeader": "Basic <your-base64-encoded-credentials>",
  "PresenceId": "<your-presence-id>",
  "Uri": "https://<your-pixee-server>/api/v1/integrations/appscan-default/webhooks/CreatePatch",
  "Global": true,
  "AssetGroupId": "<your-asset-group-id>",
  "Event": "NewPatchRequest",
  "RequestMethod": "POST",
  "RequestBody": "{\"patch_id\": \"{SubjectId}\"}",
  "ContentType": "application/json"
}
Placeholder Reference¶
Replace the following placeholders in both webhooks:
- <your-base64-encoded-credentials>: The Base64-encoded username:password string from the command above
- <your-presence-id>: Your AppScan presence server ID
- <your-pixee-server>: Your Pixee Enterprise Server hostname
- <your-asset-group-id>: Your AppScan asset group ID
For detailed instructions on configuring AppScan webhooks, refer to the AppScan Webhook API Documentation.
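Putting the pieces together, the Webhook 1 request body can be assembled from your shell before sending it via the AppScan Webhook API. All values below are hypothetical placeholders; only the URI path comes from this document.

```shell
# Hypothetical example values -- substitute your own.
WEBHOOK_USER="username"
WEBHOOK_PASS="password"
PRESENCE_ID="example-presence-id"
PIXEE_SERVER="pixee.example.com"
ASSET_GROUP_ID="example-asset-group-id"

# Base64-encode the webhook credentials for the AuthorizationHeader field.
CREDS=$(printf '%s:%s' "$WEBHOOK_USER" "$WEBHOOK_PASS" | base64)

# Assemble the Webhook 1 (ScanExecutionCompleted) request body.
# {SubjectId} is an AppScan template placeholder and must be left verbatim.
BODY=$(cat <<EOF
{
  "AuthorizationHeader": "Basic ${CREDS}",
  "PresenceId": "${PRESENCE_ID}",
  "Uri": "https://${PIXEE_SERVER}/api/v1/integrations/appscan-default/webhooks/_/ScanExecutionCompleted/{SubjectId}",
  "Global": true,
  "AssetGroupId": "${ASSET_GROUP_ID}",
  "Event": "ScanExecutionCompleted"
}
EOF
)
echo "$BODY"
```

The printed JSON can then be used as the request body of the webhook-creation call described in the AppScan Webhook API documentation; the Webhook 2 body can be assembled the same way.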
Arnica Integration¶
Arnica integration allows Pixee Enterprise Server to communicate with your existing Arnica security platform to analyze, react to, fix and update issues.
Requirements¶
Arnica integration requires:
- An Arnica API key
Configuration¶
For embedded cluster deployments, navigate to the admin console, Config tab and then to the Security Tool section.
Select the Arnica checkbox to enable Arnica integration.
Enter the following information in the configuration fields:
- API Key: Your Arnica API key
For Helm deployments, add the following to your values.yaml:
platform:
  arnica:
    apiKey: "your-arnica-api-key"
    # Use existing secret instead of creating one
    existingSecret: ""
    secretKeys:
      # -- The secret key containing the apiKey
      apiKeyKey: "apiKey"
Black Duck Integration¶
Black Duck integration allows Pixee Enterprise Server to communicate with your existing Black Duck security scans to analyze, react to, fix and update issues.
Requirements¶
Black Duck integration requires:
- A Black Duck access token
Configuration¶
For embedded cluster deployments, navigate to the admin console, Config tab and then to the Security Tool section.
Select the Black Duck checkbox to enable Black Duck integration.
Enter the following information in the configuration fields:
- Access Token: Your Black Duck access token
For Helm deployments, add the following to your values.yaml:
platform:
  blackduck:
    accessToken: "your-blackduck-access-token"
    # Use existing secret instead of creating one
    existingSecret: ""
    secretKeys:
      # -- The secret key containing the accessToken
      accessTokenKey: "accessToken"
SonarQube Integration¶
SonarQube integration allows Pixee Enterprise Server to communicate with your existing SonarQube security scans to analyze, react to, fix and update issues.
Pixee Enterprise Server can integrate with both SonarQube Cloud and SonarQube Server.
Requirements¶
SonarQube integration requires:
- A SonarQube personal access token with access to retrieve issues and hotspots for the projects that will be integrated with Pixee Enterprise Server
- A webhook secret for receiving scan notifications. When creating the webhook, set the URL to https://<domain>/api/v1/integrations/sonar-default/webhooks.
Configuration¶
For embedded cluster deployments, navigate to the admin console, Config tab and then to the Security Tool section.
Select the SonarQube checkbox to enable SonarQube integration. If you host your own SonarQube server instance, select SonarQube server and enter your SonarQube Server base URI and SonarQube GitHub app if applicable.
For all SonarQube integration types (Server or Cloud) enter the following information in the configuration fields:
- Personal Access Token: Your SonarQube personal access token
- Webhook Secret: Secret for webhook authentication
For Helm deployments, add the following to your values.yaml:
platform:
  sonar:
    # For SonarQube Server integration, provide your SonarQube server baseUri:
    # baseUri: "https://sonarqube.your-company.com"
    # If you have a custom Sonar GitHub app, provide the GitHub app name:
    # gitHubAppName: "your-sonarqube-github-app-name"
    token: "your-sonarqube-personal-access-token"
    webhookSecret: "your-sonarqube-webhook-secret"
    # -- Use an existing secret instead of creating one
    existingSecret: ""
    secretKeys:
      # -- The secret key containing the token
      tokenKey: "token"
      # -- The secret key containing the webhookSecret
      webhookSecretKey: "webhookSecret"
Advanced Filtering Options¶
SonarQube integration supports advanced filtering to control which findings are retrieved and processed.
Software Quality Filtering¶
Control which types of findings to retrieve:
In the admin console Security Tool section, use the following checkboxes:
- Exclude Maintainability Findings: Select to exclude maintainability findings (code smells), retrieving only security-related issues
- Exclude Reliability Findings: Select to exclude reliability findings (bugs), retrieving only security-related issues
For Helm deployments, add the following to your values.yaml:
platform:
  sonar:
    # Exclude maintainability findings (code smells)
    excludeMaintainabilityFindings: true
    # Exclude reliability findings (bugs)
    excludeReliabilityFindings: true
CWE Filtering¶
Filter findings by specific Common Weakness Enumeration (CWE) identifiers:
In the admin console Security Tool section:
- CWE IDs: Enter a comma-separated list of CWE IDs to filter findings (e.g., 79,89,502,918). No spaces. When set, this overrides "Filter CWE Top 25" and "Additional CWE IDs".
- Filter CWE Top 25 (Deprecated): Select to retrieve only findings from the SANS CWE Top 25 list. Ignored when "CWE IDs" is set.
- Additional CWE IDs (Deprecated): Enter comma-separated CWE IDs to include (e.g., 611,918,1234). No spaces. Ignored when "CWE IDs" is set.
For Helm deployments, add the following to your values.yaml:
platform:
  sonar:
    # Explicit CWE ID list (overrides filterCweTop25 and additionalCweIds)
    cweIds: "79,89,502,918"
    # Deprecated - use cweIds instead
    # filterCweTop25: true
    # additionalCweIds: "611,918,1234"
Example Configurations¶
Custom CWE list (recommended):
platform:
  sonar:
    cweIds: "79,89,502,918"
    excludeMaintainabilityFindings: true
    excludeReliabilityFindings: true
Security + Reliability (no code smells):
platform:
  sonar:
    excludeMaintainabilityFindings: true
Legacy: SANS Top 25 only (deprecated):
platform:
  sonar:
    filterCweTop25: true
    excludeMaintainabilityFindings: true
    excludeReliabilityFindings: true
Veracode Integration¶
Veracode integration allows Pixee Enterprise Server to communicate with your existing Veracode security scans to analyze, react to, fix and update issues.
Requirements¶
Veracode integration requires:
- A Veracode Key ID and Key Secret
Configuration¶
For embedded cluster deployments, navigate to the admin console, Config tab and then to the Security Tool section.
Select the Veracode checkbox to enable Veracode integration.
Enter the following information in the configuration fields:
- Key ID: Your Veracode key ID
- Key Secret: Your Veracode key secret
For Helm deployments, add the following to your values.yaml:
platform:
  veracode:
    apiKeyId: "your-veracode-key-id"
    apiKeySecret: "your-veracode-key-secret"
    # Use existing secret instead of creating one
    existingSecret: ""
    secretKeys:
      # -- The secret key containing the apiKeySecret
      apiKeySecretKey: "apiKeySecret"
Checkmarx Integration¶
Checkmarx integration allows Pixee Enterprise Server to communicate with your existing Checkmarx One platform to analyze, react to, fix and update security vulnerabilities found in SAST scans.
Requirements¶
Checkmarx integration requires:
- A Checkmarx tenant account name. You can find this by going to your Checkmarx One platform and navigating to the Settings > Identity and Access Management section. The tenant account name appears above the GUID that is your tenant ID. Be sure to use the account name, not the GUID.
- An API key with access to retrieve scan results and projects
- Knowledge of your Checkmarx region (US, US2, EU, EU2, DEU, ANZ, IND, SNG, or MEA)
Configuration¶
For embedded cluster deployments, navigate to the admin console, Config tab and then to the Security Tool section.
Select the Checkmarx checkbox to enable Checkmarx integration.
Enter the following information in the configuration fields:
- Region: Your Checkmarx region (defaults to US)
- Tenant Account Name: Your Checkmarx tenant account name
- API Key: Your Checkmarx API key
For Helm deployments, add the following to your values.yaml:
platform:
  checkmarx:
    region: "US" # Available regions: US, US2, EU, EU2, DEU, ANZ, IND, SNG, MEA
    tenantAccountName: "your-checkmarx-tenant-account-name"
    apiKey: "your-checkmarx-api-key"
Supported Regions¶
Checkmarx operates in multiple regions worldwide. The following regions are supported:
- US: Default US environment (https://ast.checkmarx.net)
- US2: Second US environment (https://us.ast.checkmarx.net)
- EU: European environment (https://eu.ast.checkmarx.net)
- EU2: Second European environment (https://eu-2.ast.checkmarx.net)
- DEU: Germany environment (https://deu.ast.checkmarx.net)
- ANZ: Australia & New Zealand environment (https://anz.ast.checkmarx.net)
- IND: India environment (https://ind.ast.checkmarx.net)
- SNG: Singapore environment (https://sng.ast.checkmarx.net)
- MEA: UAE/Middle East environment (https://mea.ast.checkmarx.net)
Make sure to select the region that matches your Checkmarx AST tenant.
How It Works¶
The Checkmarx integration operates as follows:
- Project Discovery: Pixee Enterprise Server discovers Checkmarx projects associated with your repositories
- Scan Retrieval: The latest SAST scan results are fetched from the Checkmarx AST platform
- Vulnerability Analysis: SAST vulnerabilities are converted to SARIF format and analyzed by Pixee's security analysis engine
- Fix Generation: Pixee identifies applicable fixes for the discovered vulnerabilities
- Pull Request Creation: Automatic fixes are applied and submitted as pull requests to the repository
The integration uses Checkmarx's REST API to retrieve project information, scan results, and vulnerability details.
GitLab SAST Integration¶
GitLab SAST integration allows Pixee Enterprise Server to automatically consume SAST (Static Application Security Testing) scan results from your GitLab CI/CD pipelines and apply automated fixes to security vulnerabilities.
Requirements¶
GitLab SAST integration requires:
- A GitLab personal access token with the following scopes:
  - api
  - read_user
  - read_repository
  - read_api
  - write_repository
  - ai_features
  - read_registry
  - read_virtual_registry
- A webhook secret for authenticating incoming pipeline notifications
- GitLab pipelines configured with SAST scanning (using GitLab's built-in SAST analyzer)
Tip
It is recommended to use a GitLab service account to generate the personal access token rather than a personal user account. Service accounts are not tied to individual users, which avoids disruption if a team member leaves or their account is modified. The service account should be granted access to the groups or projects that Pixee will manage.
Configuration¶
For embedded cluster deployments, navigate to the admin console, Config tab and then to the SCM section.
Select the GitLab checkbox to enable GitLab integration.
Enter the following information in the configuration fields:
- Base URI: The base URL of your GitLab instance (default: https://gitlab.com)
- Access Token: Your GitLab access token with the required scopes listed above
- Webhook Secret: Secret for webhook authentication
For Helm deployments, add the following to your values.yaml:
platform:
  scm:
    gitlab:
      enabled: true
      baseUri: "https://gitlab.com" # or your GitLab instance URL
      token: "your-gitlab-access-token" # requires scopes: api, read_user, read_repository, read_api, write_repository, ai_features, read_registry, read_virtual_registry
      webhookSecret: "your-gitlab-webhook-secret"
      # Use existing secret instead of creating one
      existingSecret: ""
      secretKeys:
        # -- The secret key containing the token
        tokenKey: "token"
        # -- The secret key containing the webhookSecret
        webhookSecretKey: "webhookSecret"
Webhook Configuration¶
To receive notifications when GitLab pipelines complete with SAST results, configure a webhook in your GitLab project or group settings.
Setting Up GitLab Webhooks¶
- Navigate to your GitLab project page
- Go to Settings > Webhooks
- Add a new webhook with the following configuration:
  - URL: https://<your-pixee-server.com>/api/v1/integrations/gitlab-default/webhooks
  - Secret Token: Use the same webhook secret configured in Pixee Enterprise Server
  - Trigger Events: Select Pipeline events
  - SSL verification: Enable if using HTTPS (recommended)
The webhook secret configured in GitLab must match the webhook secret configured in your Pixee Enterprise Server.
SAST Pipeline Configuration¶
Pixee Enterprise Server processes SAST results from two types of GitLab Pipeline Events:
- Branch Pipeline Events - Triggered when pipelines run on the default branch (e.g., main, develop)
- Merge Request Pipeline Events - Triggered when pipelines run specifically for merge requests
SAST Configuration with Advanced Security Features¶
GitLab provides built-in SAST analyzers including both Semgrep and GitLab Advanced SAST. To enable comprehensive SAST scanning that works with both branch and merge request pipelines, add the following to your .gitlab-ci.yml:
workflow:
  # Run pipeline jobs on pushes to the default branch and merge requests
  rules:
    - if: $CI_PIPELINE_SOURCE == 'merge_request_event'
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
include:
  - template: Jobs/SAST.gitlab-ci.yml
variables:
  GITLAB_ADVANCED_SAST_ENABLED: true
# Override specific SAST jobs to run on merge requests
semgrep-sast:
  rules:
    - if: $CI_PIPELINE_SOURCE == 'merge_request_event'
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
gitlab-advanced-sast:
  rules:
    - if: $CI_PIPELINE_SOURCE == 'merge_request_event'
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
This configuration:
- Runs pipelines on merge request events and default branch pushes only
- Enables GitLab Advanced SAST in addition to the standard Semgrep-based SAST analyzer
- Explicitly configures both semgrep-sast and gitlab-advanced-sast jobs to run on merge requests
- Ensures SAST coverage for both Pixee's pull request hardening (from merge request pipelines) and repository-wide scanning (from default branch pipelines)
Pixee Enterprise Server will automatically detect and process SAST vulnerabilities from completed pipeline runs, applying fixes where possible and creating merge requests with the remediated code.
How It Works¶
The GitLab SAST integration operates as follows:
- Pipeline Completion: When a GitLab pipeline with SAST scanning completes, GitLab sends a webhook notification to Pixee Enterprise Server
- Vulnerability Retrieval: Pixee fetches the SAST vulnerabilities from the GitLab API for the completed pipeline
- Analysis: The vulnerabilities are converted to SARIF format and analyzed by Pixee's security analysis engine
- Fix Generation: Pixee identifies applicable fixes for the discovered vulnerabilities
- Merge Request Creation: Automatic fixes are applied and submitted as merge requests to the repository
Both regular branch pipelines and merge request pipelines are supported, with merge request pipelines triggering pull request hardening workflows for more targeted security improvements.
Alternative: Using Pixee GitLab Component¶
As an alternative to the webhook-based configuration described above, you can use the Pixee GitLab component for a simplified integration that does not require webhook setup.
The Pixee GitLab component is available at https://gitlab.com/pixee/pixee and provides a pre-configured CI/CD component that handles delivering SAST findings to Pixee Enterprise Server directly from your pipeline, eliminating the need for webhook configuration.
For detailed configuration options, usage instructions, and requirements, refer to the component documentation at https://gitlab.com/pixee/pixee.
Note: When using the Pixee GitLab component, you do not need to configure GitLab webhooks as described in the "Webhook Configuration" section above.
Datadog SAST Integration¶
Datadog SAST integration allows Pixee Enterprise Server to consume scan results from the Datadog CLI. This is done by uploading the SARIF file that the Datadog CLI outputs via the Pixee API, and can be easily automated in your CI/CD pipeline.
Requirements¶
- The Datadog Static Analyzer CLI
- A Pixee authentication token. See API Access for details.
Installing the CLI¶
Compiled binaries of the Datadog Static Analyzer CLI can be downloaded from the releases page of its main GitHub repository. Find the release that matches the OS and architecture of the machine it will run on.
CLI Configuration¶
Filtering for security-only findings¶
The Datadog CLI applies many rule sets when scanning a codebase, not all of which are security related. To control which rule sets are applied, the CLI reads a YAML file named static-analysis.datadog.yml in the current working directory. You will need to create this file if you wish to filter for security-only findings.
Here's an example of a YAML configuration file that only applies Java security rules:
schema-version: v1
rulesets:
  - java-security
Other language-specific rulesets follow the same pattern, such as python-security for a Python project. You can specify multiple rulesets if the codebase being scanned contains multiple programming languages.
See Datadog's SAST Rules documentation for available rulesets.
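For a codebase containing more than one language, several rulesets can be listed in the same static-analysis.datadog.yml. This sketch combines the two rulesets named in this section; consult Datadog's SAST Rules documentation for the full list of available names:

```yaml
schema-version: v1
rulesets:
  - java-security
  - python-security
```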
Manually running and uploading a scan to Pixee¶
Now that the CLI is installed and configured, it can be run in the desired codebase directory to generate a scan. To do this, invoke the following CLI command within the codebase's root directory:
datadog-static-analyzer -i . -o ./report-sarif.json -f sarif
The -o flag specifies the output scan file name and location, and the -f flag specifies the output format; Pixee requires the SARIF output format. See the CLI's README for a full list of options.
This output scan file can then be uploaded to Pixee via the API. To do this, you will need to retrieve the base URL and repository ID for the codebase you want to analyze. These can be extracted from the URL when opening the repository in Pixee Resolution Center. You can then send an HTTP POST request to the /scans endpoint for that repository using any HTTP client. Here's an example using cURL:
curl -X POST "$BASE_URL/api/v1/repositories/$REPO_ID/scans" \
  -H "Accept: application/json" \
  -H "Authorization: Bearer $PIXEE_API_KEY" \
  -F 'file=@./report-sarif.json' \
  -F 'metadata={"tool":"datadog_sast","branch":"main"};type=application/json'
Automating scan and upload in your CI/CD pipeline¶
Any CI/CD platform can automate the Datadog SAST scan and upload workflow. The pipeline needs to perform three steps:
- Install the Datadog Static Analyzer CLI
- Run the scan, outputting SARIF
- Upload the SARIF file to Pixee via the API
Make sure your static-analysis.datadog.yml configuration file is committed to the repository root so the CLI picks it up automatically during CI runs.
GitHub Actions example¶
Below is a complete GitHub Actions workflow that installs the Datadog Static Analyzer CLI, runs a scan, and uploads the results to Pixee. Add this file to your repository at .github/workflows/datadog-sast-pixee.yml.
The workflow uses three GitHub Actions secrets that you must configure in your repository settings:
- PIXEE_API_KEY: your Pixee API key
- PIXEE_BASE_URL: the base URL of your Pixee instance (e.g., https://app.pixee.example.com)
- PIXEE_REPO_ID: the repository ID from the Pixee Resolution Center URL
name: Datadog SAST scan and upload to Pixee
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
jobs:
  datadog-sast:
    runs-on: ubuntu-latest
    steps:
      - name: Check out code
        uses: actions/checkout@v4
      - name: Install Datadog Static Analyzer CLI
        run: |
          ARCH=$(uname -m)
          case "$ARCH" in
            x86_64) ARCH_NAME="x86_64" ;;
            aarch64) ARCH_NAME="aarch64" ;;
            arm64) ARCH_NAME="aarch64" ;;
            *) echo "Unsupported architecture: $ARCH"; exit 1 ;;
          esac
          LATEST=$(curl -s https://api.github.com/repos/DataDog/datadog-static-analyzer/releases/latest \
            | grep tag_name | cut -d '"' -f 4)
          curl -sL "https://github.com/DataDog/datadog-static-analyzer/releases/download/${LATEST}/datadog-static-analyzer-${ARCH_NAME}-unknown-linux-gnu.zip" \
            -o datadog-static-analyzer.zip
          unzip -o datadog-static-analyzer.zip -d /usr/local/bin
          chmod +x /usr/local/bin/datadog-static-analyzer
          rm datadog-static-analyzer.zip
      - name: Run Datadog SAST scan
        run: datadog-static-analyzer -i . -o results.sarif -f sarif
      - name: Upload SARIF to Pixee
        env:
          PIXEE_API_KEY: ${{ secrets.PIXEE_API_KEY }}
          PIXEE_BASE_URL: ${{ secrets.PIXEE_BASE_URL }}
          PIXEE_REPO_ID: ${{ secrets.PIXEE_REPO_ID }}
        run: |
          curl -X POST "$PIXEE_BASE_URL/api/v1/repositories/$PIXEE_REPO_ID/scans" \
            -H "Accept: application/json" \
            -H "Authorization: Bearer $PIXEE_API_KEY" \
            -F 'file=@./results.sarif' \
            -F 'metadata={"tool":"datadog_sast","branch":"${{ github.ref_name }}","workflow_execution_policy":"execute"};type=application/json'
This workflow does not require Datadog API or App keys — it only uses the open-source static analyzer CLI and uploads results directly to Pixee.
Advanced Settings¶
This section covers advanced configuration options for Pixee Enterprise Server.
External Database¶
For production environments, we recommend configuring an external database. See the installation prerequisites for requirements.
To configure an external database in Embedded Cluster deployments:
Navigate to the admin console, select the Config tab, then go to the Advanced Settings section. Under Database Type select External.
Provide the following information:
- Database host
- Database port
- Database username
- Database password
- Database name
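Before saving the configuration, it can be worth confirming the database is reachable from inside the cluster. The sketch below runs a one-off psql pod; the image tag, hostname, user, and database name are placeholders to substitute with your own values:

```shell
# Run a throwaway psql pod inside the cluster to test connectivity,
# then delete it automatically (--rm). All connection values are placeholders.
kubectl run pg-check --rm -it --image=postgres:17 --restart=Never -- \
  psql "host=<database-host> port=5432 dbname=pixee_platform user=<database-user>" -c 'SELECT 1;'
```

If the pod prints a result row for `SELECT 1`, network connectivity and credentials are working.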
To configure an external database in Helm deployments:
You can configure Pixee Enterprise Server to use an external database server. If you prefer to use the in-cluster, embedded database, you may skip this step.
To configure an external database, add the following to your values.yaml:
platform:
database:
embedded: false
host: "<your database hostname>"
port: "<your database server port, defaults to 5432>"
name: "<your database name, defaults to pixee_platform>"
username: "<your database user with access to database>"
password: "<your database user password>"
# -- Use an existing secret for the password instead of passing directly in Values. Secret must contain a `password` key.
existingSecret: "<your postgres secret name>"
Embedded Database Credential Secrets¶
When using the embedded database (platform.database.embedded: true), Pixee Enterprise Server automatically creates Kubernetes secrets for CloudNative-PG managed database roles used by Superset and Authentik. If you prefer to manage these secrets yourself (e.g., via an external secrets operator), you can provide your own pre-existing secrets instead.
Embedded Cluster deployments manage these secrets automatically. No additional configuration is needed.
To use existing secrets for Superset and/or Authentik database credentials, first create the secret(s):
Superset database credentials:
apiVersion: v1
kind: Secret
metadata:
name: my-superset-postgresql-credentials
type: kubernetes.io/basic-auth
stringData:
username: "superset"
password: "<your-password>"
Authentik database credentials:
apiVersion: v1
kind: Secret
metadata:
name: my-authentik-postgresql-credentials
type: kubernetes.io/basic-auth
stringData:
username: "authentik"
password: "<your-password>"
Then reference them in your values.yaml:
superset:
database:
existingSecret: "my-superset-postgresql-credentials"
authentik:
database:
existingSecret: "my-authentik-postgresql-credentials"
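If you prefer not to keep credential manifests on disk, the same secrets can be created imperatively. This is a sketch assuming kubectl access to the target namespace; note the type must remain kubernetes.io/basic-auth:

```shell
# Create the Superset credentials secret without writing a manifest file.
# The namespace and password are placeholders for your own values.
kubectl create secret generic my-superset-postgresql-credentials \
  --type=kubernetes.io/basic-auth \
  --from-literal=username=superset \
  --from-literal=password='<your-password>' \
  -n <namespace>
```

Repeat with `username=authentik` for the Authentik secret.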
Object Store¶
Embedded Object Store¶
You can configure Pixee Enterprise Server to use an embedded object store. This is the default configuration and is suitable for development and testing environments.
When embedded object store is enabled, you will also be prompted to configure the Object Store Expiry in Days for the object store. This setting determines how long objects will be retained in the embedded object store before they are automatically deleted.
External Object Store¶
You can configure Pixee Enterprise Server to use an external object store. If you prefer to use the in-cluster, embedded object store, you may skip this step.
Requirements¶
The following are requirements of an external object store compatible with Pixee Enterprise Server:
- The object store and the Kubernetes cluster are able to communicate over the network
- The object store exposes a S3 compatible API
- A bucket has been created for use as the pixee-analysis-input bucket
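You can check both the S3 API compatibility and bucket access from a machine on the same network using the AWS CLI pointed at your endpoint. A sketch, assuming the AWS CLI is installed and credentials for the object store are exported:

```shell
# Succeeds (exit code 0) only if the endpoint speaks the S3 API
# and the bucket exists and is accessible with these credentials.
aws s3api head-bucket \
  --bucket pixee-analysis-input \
  --endpoint-url "https://<your-object-store-endpoint>"
```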
To configure an external object store in Embedded Cluster deployments:
Navigate to the admin console, select the Config tab, then go to the Advanced Settings section. Under Object Store Type select External.
Provide the following information:
- Object store endpoint URL
- Object store username (Access Key ID)
- Object store password (Secret Access Key)
- Bucket name for pixee analysis inputs
- Bucket name for the pixee analysis service
To configure an external object store with static credentials in Helm deployments:
Static Credentials¶
To configure an external object store, add the following to your values.yaml:
global:
pixee:
objectStore:
embedded: false
endpoint: "<your object store endpoint url>"
username: "<your object store username>" # Access Key ID for S3
password: "<your object store password>" # Secret Access Key for S3
credentialType: "static"
platform:
inputBucket: "<your provisioned bucket name for pixee analysis inputs>"
analysis:
objectStore:
bucket: "<your provisioned bucket name for the pixee analysis service>"
Service Account Authentication¶
For enhanced security, you can use Kubernetes service account authentication instead of static credentials. This approach leverages cloud provider IAM roles and eliminates the need for long-lived access keys.
To configure service account authentication, add the following to your values.yaml:
global:
pixee:
serviceAccount:
create: false
name: "your-external-service-account"
objectStore:
embedded: false
endpoint: "<your object store endpoint url>"
region: "<your object store region>"
credentialType: "default"
# username and password are not required with service account auth
platform:
inputBucket: "<your provisioned bucket name for pixee analysis inputs>"
analysis:
objectStore:
bucket: "<your provisioned bucket name for the pixee analysis service>"
Git Clone Strategy¶
Pixee Enterprise Server supports two Git cloning strategies for VCS operations:
- Partial Clone (Default): Downloads only the specific commit, tree, and blob objects needed for the requested revision. This provides optimal performance and minimal bandwidth usage but may not be supported by all Git servers.
- Full Clone: Downloads the complete repository including all history, branches, and objects. While this requires more time and bandwidth, it ensures maximum compatibility with all Git servers.
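If you are unsure whether your Git server supports partial clone, you can probe it directly. The --filter flag below is what a blobless partial clone uses; the repository URL is a placeholder:

```shell
# Requests a blobless partial clone without checking out files.
# Servers that support partial clone complete quickly; servers that
# do not typically warn that filtering was ignored and fall back to a full clone.
git clone --filter=blob:none --no-checkout \
  "https://<your-git-server>/<org>/<repo>.git" /tmp/partial-clone-test
```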
To configure the Git clone strategy in Embedded Cluster deployments:
Navigate to the admin console, select the Config tab, then go to the Advanced Settings section. Under Git Clone Strategy, select either:
- Partial (recommended - faster, less bandwidth): For optimal performance
- Full (maximum compatibility): For environments where partial clones are not supported
To configure the Git clone strategy in Helm deployments, add the following to your values.yaml:
platform:
gitCloneStrategy: "partial" # or "full" for maximum compatibility
The default value is partial for optimal performance. Change to full if you encounter issues with Git servers that don't support partial clones.
Error Reporting¶
By default, Pixee Enterprise Server will send error and crash reports to Pixee via Sentry.io.
To configure error reporting in Embedded Cluster deployments:
Navigate to the admin console, select the Config tab, then go to the Advanced Settings section.
To disable automatic error and crash reporting, uncheck the error reporting option.
To disable automatic error and crash reporting in Helm deployments, add the following to your values.yaml:
global:
pixee:
sentry:
enabled: false
HTTP(S) Proxy Settings¶
You can configure Pixee Enterprise Server to route HTTP/HTTPS traffic through a proxy server.
NO_PROXY Configuration Limitations¶
Important: NO_PROXY Wildcard Limitations
The NO_PROXY environment variable in Pixee Enterprise Server does not support wildcard patterns. You must specify exact hostnames or IP addresses.
Not Supported:
NO_PROXY=*.internal.company.com
NO_PROXY=10.0.*.*
Supported (Required Format):
NO_PROXY=service1.internal.company.com,service2.internal.company.com,10.0.1.5,10.0.2.10
This limitation means that if you have multiple internal services that should bypass the proxy, each hostname must be explicitly listed in the NO_PROXY configuration.
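Because wildcard entries are invalid rather than expanded, it can help to lint the value before applying it. A minimal sketch in plain POSIX shell, assuming nothing beyond grep:

```shell
# Reject NO_PROXY values containing wildcard patterns, which are not supported.
NO_PROXY="service1.internal.company.com,service2.internal.company.com,10.0.1.5"
if printf '%s' "$NO_PROXY" | grep -q '[*]'; then
  echo "invalid: NO_PROXY contains a wildcard"
else
  echo "ok"
fi
# prints: ok
```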
To configure an HTTP(S) proxy in Embedded Cluster deployments:
Navigate to the admin console, select the Config tab, then go to the Advanced Settings section.
Select the Configure HTTP(S) Proxy checkbox if you need to route HTTP/HTTPS traffic through a proxy server. Once enabled, you can configure or modify the HTTP and HTTPS proxy server addresses as well as provide a comma separated list of domains to exclude from using your HTTP/HTTPS proxy.
Note: The NO_PROXY field requires exact hostnames or IP addresses. Wildcard patterns are not supported.
To configure an HTTP(S) proxy in Helm deployments, add the following to your values.yaml:
global:
pixee:
httpProxy: "<address>:<port>" # HTTP proxy server host/address and port
httpsProxy: "<address>:<port>" # HTTPS proxy server host/address and port
noProxy: "<comma,separated,hosts>" # Comma separated list of exact hostnames/IPs to exclude from proxy (wildcards not supported)
Example configuration:
global:
pixee:
httpProxy: "proxy.company.com:8080"
httpsProxy: "proxy.company.com:8080"
noProxy: "kubernetes.default,kubernetes.default.svc,10.0.0.1,database.internal.company.com,api.internal.company.com"
Private CA Certificates¶
If your environment uses self-signed certificates or a private Certificate Authority (CA), you can configure Pixee Enterprise Server to trust these certificates.
Embedded Cluster automatically detects and uses the host system's CA trust store at install time. Ensure your private CA certificates are installed on the host before running the Embedded Cluster installer — no additional Pixee configuration is needed.
Updating CA certificates after installation:
If you need to add new CA certificates after the initial installation:
- Update the host's CA trust store (e.g., add the certificate and run update-ca-trust)
- Wait up to one hour for the kotsadm-private-cas ConfigMap to refresh automatically, or force an immediate refresh: kubectl rollout restart deployment/embedded-cluster-operator -n embedded-cluster
- Restart the Pixee platform deployment to pick up the new certificates: kubectl rollout restart deployment/pixee-platform -n <namespace>
To configure private CA certificates in Helm deployments, create a ConfigMap containing your PEM-encoded CA certificate(s):
kubectl create configmap my-ca-certs \
--from-file=ca.pem=/path/to/your/ca-certificate.pem \
-n <namespace>
Then reference it in your values.yaml:
global:
pixee:
privateCACert: "my-ca-certs"
The ConfigMap may contain one or more PEM files, each with one or more certificates. All certificates will be imported into the trust stores used by the platform, analysis, and forge services.
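Before creating the ConfigMap, you can confirm each file parses as valid PEM with openssl. The sketch below generates a throwaway self-signed certificate purely for illustration; point the final command at your real CA file instead:

```shell
# Illustration only: create a short-lived self-signed certificate to inspect.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=example-test-ca" \
  -keyout /tmp/ca.key -out /tmp/ca.pem -days 1 2>/dev/null
# Verify the PEM parses and show its subject and expiry date.
openssl x509 -in /tmp/ca.pem -noout -subject -enddate
```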
Updating CA certificates:
To add or replace certificates, update the ConfigMap and restart the platform deployment:
kubectl rollout restart deployment/pixee-platform -n <namespace>
Deprecated: Skip SSL Verification
The global.pixee.skipSSLVerification Helm value (and the corresponding KOTS admin console checkbox) is deprecated and will be removed in a future release. This setting disables ALL certificate verification for outbound HTTPS connections, which is a security risk. Use private CA certificates instead.
Host Aliases¶
You can configure a custom host-to-IP mapping (/etc/hosts entry) for the Platform pods. This is useful for environments with private DNS, split-horizon DNS, or services not resolvable via the cluster's DNS.
To configure host aliases in Embedded Cluster deployments:
Navigate to the admin console, select the Config tab, then go to the Advanced Settings section. Select the Enable Custom Host Aliases checkbox.
Once enabled, enter host aliases in /etc/hosts format in the text area — one entry per line, with an IP address followed by one or more space-separated hostnames. Lines starting with # are treated as comments and ignored.
Example:
10.0.0.1 service.internal api.internal
192.168.1.100 db.internal
To configure host aliases in Helm deployments, add the following to your values.yaml:
platform:
hostAliases:
- ip: "10.0.0.1"
hostnames:
- "service.internal"
- "api.internal"
Each entry maps an IP address to one or more hostnames, which are added to the pod's /etc/hosts file.
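After applying the change, you can spot-check that the aliases actually landed in a Platform pod. A sketch; the deployment name and namespace are assumptions to adjust for your environment:

```shell
# Print /etc/hosts from a running pod of the platform deployment;
# the configured host aliases should appear as extra entries.
kubectl exec -n <namespace> deployment/pixee-platform -- cat /etc/hosts
```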
Metrics Reporting¶
By default, Pixee Enterprise Server will send anonymized usage metrics to Pixee.
To configure metrics reporting in Embedded Cluster deployments:
Navigate to the admin console, select the Config tab, then go to the Advanced Settings section.
To disable metrics reporting, uncheck the metrics reporting option.
To disable metrics reporting in Helm deployments, add the following to your values.yaml:
global:
pixee:
metrics:
enabled: false
Object Store Signature Duration¶
By default, Pixee Enterprise Server generates pre-signed URLs for object store operations with a system-defined expiration duration. You can customize this duration by configuring the signature duration setting.
The duration should be specified as a string with time units. Common examples: "1h" (1 hour), "30m" (30 minutes), "2h" (2 hours), "45m" (45 minutes). If not configured, the system will use its default behavior.
To configure the signature duration in Embedded Cluster deployments:
Navigate to the admin console, select the Config tab, then go to the Advanced Settings section. Locate the Object Store Signature Duration field.
To configure a custom signature duration for pre-signed URLs in Helm deployments, add the following to your values.yaml:
platform:
inputSignatureDuration: "<duration>"
Reverse Proxy Settings¶
If your Pixee Enterprise Server is accessible via a reverse proxy (e.g., ALB, App Gateway, NGINX), you may need to configure additional settings.
To configure reverse proxy settings in Embedded Cluster deployments:
Navigate to the admin console, select the Config tab, then go to the Advanced Settings section.
If an external system (e.g., App Gateway, Load Balancer) acts as a reverse proxy to your Pixee Enterprise Server, make sure you provide reverse proxy settings in the Advanced Settings section.
To configure reverse proxy settings in Helm deployments:
Add the following applicable settings to your values.yaml:
platform:
proxy:
enabled: true
address: "<address of your proxy server, optional>"
headers:
forwarded: true|false # set to true to allow `Forwarded` header, defaults to false
xForwarded: true|false # set to true to allow X-Forwarded-* headers, defaults to false
Analysis Timeout Settings¶
Pixee Enterprise Server allows you to configure timeout values for different types of analyses. These settings control how long the platform waits for analysis results before timing out.
Analysis Progress Timeout¶
The analysis progress timeout controls how long the platform waits for callbacks from the analysis service. If this duration passes without receiving an update, the analysis will timeout.
SAST and SCA Analysis Timeouts¶
You can configure separate timeout values for SAST (Static Application Security Testing) and SCA (Software Composition Analysis) analyses. SCA analyses typically require longer timeouts than SAST due to dependency resolution complexity.
When not configured, the platform uses the general analysis timeout as the default for both analysis types.
To configure analysis timeouts in Embedded Cluster deployments:
Navigate to the admin console, select the Config tab, then go to the Advanced Settings section.
- Analysis progress timeout: Controls callback timeout from the analysis service (default: 15m)
- SAST analysis timeout: Timeout for SAST analyses (optional)
- SCA analysis timeout: Timeout for SCA analyses (optional, typically longer than SAST)
The duration should be specified as a string with time units. Common examples: "15m" (15 minutes), "30m" (30 minutes), "1h" (1 hour).
To configure analysis timeouts in Helm deployments, add the following to your values.yaml:
platform:
analysisTimeout: "15m" # General analysis progress timeout
sastAnalysisTimeout: "20m" # SAST-specific timeout (optional)
scaAnalysisTimeout: "45m" # SCA-specific timeout (optional)
Agentic Triage for All Rules¶
When enabled, this setting routes all triage rules through the ReACT agentic analyzer, bypassing explicit and magic handlers. This provides more thorough and consistent triage results across all rule types.
This setting is disabled by default. When enabled, the standard Triage Mode setting is ignored.
Navigate to the admin console, select the Config tab, then go to the Advanced Settings section. Locate the Use Agentic Triage for All Rules checkbox.
To enable agentic triage for all rules in Helm deployments, add the following to your values.yaml:
analysis:
useAgenticTriageForAllRules: true
SCA Max Requests to Analyze¶
Controls the maximum number of requests to analyze during SCA (Software Composition Analysis). This limits the upper bound of dependency requests that SCA will process per analysis, helping to manage resource usage for repositories with large dependency trees.
The default value is 5.
Navigate to the admin console, select the Config tab, then go to the Advanced Settings section. Locate the SCA Max Requests to Analyze field and enter the desired value.
To configure the SCA max requests to analyze in Helm deployments, add the following to your values.yaml:
analysis:
scaMaxRequestsToAnalyze: 5
SCA Exploitability Fix Shortcircuit¶
When enabled, fix generation is skipped for findings that SCA (Software Composition Analysis) determines are not exploitable. This reduces unnecessary compute usage by not generating fixes for vulnerabilities that cannot be reached in practice.
This setting is disabled by default.
Navigate to the admin console, select the Config tab, then go to the Advanced Settings section. Locate the Skip fix for non-exploitable SCA findings checkbox.
To enable this optimization in Helm deployments, add the following to your values.yaml:
analysis:
useScaExploitabilityToShortcircuitFix: true
Vendored File Triage¶
When enabled, a specialized triage strategy is used for vendored files. This improves the accuracy of analysis results for files that are vendored (copied from external sources) rather than authored in-house.
This setting is enabled by default.
Navigate to the admin console, select the Config tab, then go to the Advanced Settings section. Locate the Enable vendored file triage checkbox.
This is enabled by default. To disable it in Helm deployments, add the following to your values.yaml:
analysis:
enableVendoredFileTriage: false
Analysis Input Caching¶
The analysis service supports URL-based input caching to improve performance for repeated analyses. When enabled, downloaded analysis inputs are cached locally to avoid redundant downloads.
Caching is enabled by default with a 24-hour TTL and 10GB maximum cache size.
Navigate to the admin console, select the Config tab, then go to the Advanced Settings section.
- Enable analysis input caching: Toggle to enable or disable caching (enabled by default)
- Analysis cache TTL (seconds): Set the time-to-live for cached inputs (default: 86400 seconds / 24 hours)
To configure analysis input caching in Helm deployments, add the following to your values.yaml:
analysis:
cache:
enabled: true
defaultTtlSeconds: 86400 # 24 hours
maxSizeBytes: 10737418240 # 10GB
honorCacheControl: true
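The maxSizeBytes default above is simply 10 GiB expressed in bytes. You can sanity-check it, or compute a different limit, with shell arithmetic:

```shell
# 10 GiB in bytes, matching the default maxSizeBytes value above.
echo $((10 * 1024 * 1024 * 1024))
# prints: 10737418240
```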
Analysis Backpressure¶
The analysis backpressure feature enables the analysis service to proactively cancel analyses that cannot successfully complete within platform timeout limits. This helps prevent wasted compute resources on analyses that would eventually timeout.
This setting is enabled by default.
To configure analysis backpressure in Embedded Cluster deployments:
Navigate to the admin console, select the Config tab, then go to the Advanced Settings section. Locate the Enable analysis backpressure checkbox.
- Enabled (default): The analysis service will proactively cancel analyses that cannot complete in time
- Disabled: Analyses will run until they complete or timeout naturally
To configure analysis backpressure in Helm deployments, add the following to your values.yaml:
analysis:
backpressureEnabled: true # or false to disable
Transitive Dependency Analysis¶
Transitive dependency analysis enables deeper vulnerability detection by analyzing indirect (transitive) dependencies during SCA (Software Composition Analysis). When enabled, the analysis service will trace dependency chains beyond direct dependencies to identify vulnerabilities in the full dependency tree.
This setting is disabled by default.
To configure transitive dependency analysis in Embedded Cluster deployments:
Navigate to the admin console, select the Config tab, then go to the Advanced Settings section. Locate the Enable Transitive Dependency Analysis checkbox.
- Enabled: The analysis service will analyze transitive dependencies during SCA
- Disabled (default): Only direct dependencies are analyzed
To configure transitive dependency analysis in Helm deployments, add the following to your values.yaml:
analysis:
enableTransitiveDependencyAnalysis: true # or false to disable (default)
Verify Installation¶
After installation is complete, you can verify your installation with the following steps.
Health Check Endpoint¶
Both deployment methods provide the same health check endpoint to verify the Pixee Enterprise service status:
curl https://<domain or ip>/q/health
Expected response:
{
"status": "UP",
"checks": [
{
"name": "SmallRye Reactive Messaging - liveness check",
"status": "UP"
},
{
"name": "Pixee Server health check",
"status": "UP",
"data": {
"server-version": "2024-11-03-653a81d"
}
},
{
"name": "Database connections health check",
"status": "UP",
"data": {
"<default>": "UP"
}
},
{
"name": "SmallRye Reactive Messaging - readiness check",
"status": "UP"
},
{
"name": "SmallRye Reactive Messaging - startup check",
"status": "UP"
}
]
}
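For automated verification, such as a post-install smoke test, the same endpoint can be checked non-interactively. A sketch with a placeholder domain; curl's -f flag makes the command exit non-zero on HTTP errors:

```shell
# Exits non-zero if the endpoint is unreachable, returns an HTTP error,
# or the response does not report an UP status.
curl -fsS "https://<domain or ip>/q/health" | grep -q '"status": "UP"'
```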
Kubernetes Resources¶
To verify Kubernetes resources in Embedded Cluster deployments:
- Open a terminal session on the VM and run: sudo ./pixee shell kubectl get all -n kotsadm
- Verify Pixee Enterprise Server is ready by viewing pods and services, making sure all are in the ready state.
To verify Kubernetes resources in Helm deployments:
Verify the application is properly deployed by viewing pods and services, making sure all are in the ready state:
kubectl get all -n pixee-enterprise-server
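Rather than eyeballing pod status, you can block until everything is ready. A sketch; the 10-minute timeout is an arbitrary choice:

```shell
# Waits until every pod in the namespace reports the Ready condition,
# or fails after the timeout elapses.
kubectl wait pods --all --for=condition=Ready \
  -n pixee-enterprise-server --timeout=10m
```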
GitHub App Connectivity¶
If you enabled GitHub integration and created a custom GitHub app, you can verify your GitHub App connectivity by checking your GitHub App's event log.
This log can be accessed through your GitHub App's settings under the "Advanced" section. See GitHub.com for more information.
Update Instructions¶
Instructions for updating Pixee Enterprise Server to newer versions.
Update Process¶
To update Pixee Enterprise Server in Embedded Cluster deployments:
- Visit the admin console in your browser: https://<domain name or vm ip>:30000
- Log in with the admin console password set during installation
- New versions appear in the Available Updates list under the Version History tab
- Click Deploy next to the version you want to upgrade to
- Confirm configuration
- Verify preflight checks
- Confirm deployment
To update a Pixee Enterprise Server release in Helm deployments, use the helm upgrade command. This applies template changes to the running release and updates configuration values. Pass your updated values.yaml file with the -f flag:
helm upgrade pixee-enterprise-server oci://registry.pixee.ai/pixee/stable/pixee-enterprise-server -f values.yaml -n pixee-enterprise-server
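If an upgrade misbehaves, Helm keeps prior revisions of the release, so you can inspect the history and roll back. These are standard helm commands, shown with this guide's release and namespace names:

```shell
# List deployed revisions of the release with their status.
helm history pixee-enterprise-server -n pixee-enterprise-server

# Roll back to a specific revision number taken from the history output.
helm rollback pixee-enterprise-server <REVISION> -n pixee-enterprise-server
```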
AI Providers Overview¶
This section provides AI provider-specific guidance and resource examples for deploying Pixee Enterprise Server on major cloud platforms.
Advanced AI Model Configuration¶
SCA Models¶
Software Composition Analysis (SCA) models enable enhanced vulnerability detection in dependencies and third-party libraries. This feature is disabled by default and can be enabled in the AI settings.
To configure SCA models in Embedded Cluster deployments:
Navigate to the admin console, select the Config tab, then go to the AI Settings section.
- Enable SCA Models: Toggle to enable Software Composition Analysis models (disabled by default)
- SCA Model Name: Optionally specify a custom LLM model for SCA analysis (only available when SCA is enabled, defaults to gpt-4.1)
To configure SCA models in Helm deployments:
global:
pixee:
ai:
# Enable SCA models (disabled by default)
scaModelsEnabled: true
# Optionally specify a custom model name (defaults to gpt-4.1)
scaModelName: "gpt-4.1"
Deep Research Models¶
Deep Research models provide advanced analysis capabilities for complex code understanding tasks. These models are only used when SCA is enabled.
To configure Deep Research models in Embedded Cluster deployments:
Navigate to the admin console, select the Config tab, then go to the AI Settings section.
- Deep Research Model Name: Optionally specify a custom LLM model for deep research analysis (only available when SCA is enabled, defaults to o4-mini-deep-research)
To configure Deep Research models in Helm deployments:
global:
pixee:
ai:
# Enable SCA first (required for deep research models)
scaModelsEnabled: true
# Then specify a custom model for deep research (defaults to o4-mini-deep-research)
deepResearchModelName: "o4-mini-deep-research"
OpenAI¶
Requirements¶
AI Provider integration requires:
- An AI provider API key with access to the required models
- An AI provider endpoint, or default to the provider's public API endpoint
- Model names for the reasoning and fast models from your chosen AI provider
For OpenAI, see OpenAI's page on creating an API Key for more information.
OpenAI-compatible APIs¶
Providers such as Azure AI Foundry and AWS Bedrock expose endpoints for connecting to both OpenAI and non-OpenAI models via a consistent OpenAI-compatible API. This lets you connect to models from these providers with the same configuration as if you were connecting to OpenAI directly.
Requirements¶
- An AI provider endpoint
- Model names for the reasoning and fast models from your chosen AI provider
- An AI provider API key with access to the required models
Example: Connecting to DeepSeek-R1 via Azure AI Foundry¶
Azure AI Foundry hosts a variety of foundation models from industry-leading providers. In this example, Pixee will be configured to connect to DeepSeek-R1 using the OpenAI-compatible API endpoints using the same process for connecting to OpenAI's API directly. This example assumes that the model has already been deployed in the target Azure environment, and a model deployment API key has already been created.
Input the OpenAI-compatible endpoint for your deployment¶
Azure AI Foundry provides a canonical URL endpoint for AI Foundry model deployments. It's usually of the format https://{foundry-instance-name}-resource.service.ai.azure.com/models.
Input your API key¶
Copy the deployment's endpoint key from the Azure AI Foundry portal and paste it into the API key field.
Specify the reasoning and fast model names¶
Find the DeepSeek-R1 model name in the AI Foundry portal and enter it in both the reasoning and fast model name fields. This example uses the same model for both to keep things simple, but you could choose a different model for each, as long as it's available in your AI Foundry instance.
Verify with preflight checks¶
Preflight checks should run after the configuration is saved and should successfully connect to the model.
Azure OpenAI¶
Requirements¶
Azure OpenAI integration requires model deployments of the following recommended models:
- o3-mini (version: 2025-01-31) as the fast model
- o3-mini (version: 2025-01-31) as the reasoning model
Databricks AI¶
Requirements¶
Databricks AI serving endpoint integration requires:
- Databricks workspace with Mosaic AI Model Serving enabled
- External model endpoints for "o3-mini" as fast and reasoning models
- Databricks personal access token with serving endpoint access
Configuration¶
To integrate Pixee Enterprise Server with Databricks AI serving endpoints:
- Create the required serving endpoints in Databricks following the Databricks external models documentation. You need to deploy the following endpoint: o3-mini (for the o3-mini model)
- Configure Pixee Enterprise Server to use your Databricks workspace URL as the OpenAI base URL with your Databricks PAT as the API key.
- Verify connectivity by checking that the endpoint is accessible from your Pixee Enterprise Server deployment.
Info
Databricks integration uses the OpenAI-compatible API, so select "OpenAI" as the provider type when configuring through the admin console, and be sure to update the OpenAI Endpoint to match your Databricks base URL.
Common Issues¶
- Ensure your Databricks PAT has permission to access the serving endpoints
- Verify network connectivity between Pixee Enterprise Server and your Databricks workspace
- Check that the required endpoint ("o3-mini") is deployed and running
- When using the embedded cluster installation, select "OpenAI" as the provider type since Databricks uses OpenAI-compatible APIs
- Be sure to set the OpenAI endpoint to your Databricks base URL
Azure Anthropic¶
Azure Anthropic allows you to access Anthropic models (such as Claude) through Azure's infrastructure.
Requirements¶
- An Azure Anthropic API endpoint for your model deployments
- An API key with access to the required models
- Model names for the reasoning and fast models
Configuration¶
Navigate to the admin console, select the Config tab, then go to the AI Settings section.
Select Azure Anthropic as the Default LLM Provider and configure:
- API Key: Your Azure Anthropic API key
- Endpoint: The Azure Anthropic API endpoint for your model deployments
- Reasoning Model Name: Model for complex reasoning tasks (default: claude-sonnet-4-20250514)
- Fast Model Name: Model for quick response tasks (default: claude-sonnet-4-20250514)
To configure Azure Anthropic in Helm deployments, add the following to your values.yaml:
global:
pixee:
ai:
enabled: true
default:
provider: "azure-anthropic"
model: "claude-sonnet-4-20250514"
apiKey: "<your Azure Anthropic API key>"
endpoint: "<your Azure Anthropic endpoint>"
reasoning:
model: "claude-sonnet-4-20250514"
fast:
model: "claude-sonnet-4-20250514"
Web Search LLM Model¶
When an AI provider is configured, you can optionally specify a model for web-search-enabled queries. This model is used for tasks that benefit from real-time web search capabilities during analysis.
Navigate to the admin console, select the Config tab, then go to the AI Settings section.
When using OpenAI or Azure AI Foundry as your provider, a Web Search Model Name field is available. The default is gpt-5.2.
To configure a web search model in Helm deployments:
global:
pixee:
ai:
webSearch:
model: "gpt-5.2"
Note
Web search models are currently supported for OpenAI and Azure AI Foundry providers only.
Oracle Cloud Infrastructure Generative AI Services¶
Requirements¶
Oracle Cloud Infrastructure (OCI) integration requires:

- Integration with Oracle Cloud Infrastructure Generative AI Services
  - Llama and custom models can be deployed; please contact support for up-to-date instructions based on your deployment type.
Cloud Providers¶
This section provides cloud provider-specific guidance and resource examples for deploying Pixee Enterprise Server on major cloud platforms.
AWS¶
Configuration and setup information for deploying Pixee Enterprise Server on Amazon Web Services.
Notes¶
When installing in EKS v1.30+, persistent volumes need to have the defaultStorageClass set. This is especially important if using the embedded database or embedded object store. Set the following in your values.yaml:
global:
defaultStorageClass: "gp2"
Resources¶
For Helm deployments on EKS, AWS resources typically include:
- RDS / Aurora PostgreSQL (small) for external database
- 2x S3 buckets for external object storage
- IAM role with S3 permissions (if using IRSA)
- EKS cluster with appropriate node groups
- Application Load Balancer (if using ALB ingress controller)
Service Account Authentication¶
For enhanced security when using external object storage, you can configure service account authentication instead of using static AWS credentials. This approach leverages cloud provider IAM roles and eliminates the need for long-lived access keys.
Note
IAM Roles for Service Accounts (IRSA) are currently supported with helm installations. Please reach out to Pixee Support if you need assistance with this setup.
AWS S3 with IRSA (IAM Roles for Service Accounts)¶
This section covers AWS S3 access from EKS clusters. For other cloud providers accessing their native object stores (GCS, Azure Blob), similar workload identity patterns apply but are not covered in this guide.
Prerequisites¶
- EKS cluster with OIDC identity provider enabled
- IAM role with appropriate S3 permissions
- Trust relationship configured between the IAM role and the EKS service account
Setup Steps¶
1. Create IAM Role and Policy

   Create an IAM policy with the required S3 permissions:

       {
         "Version": "2012-10-17",
         "Statement": [
           {
             "Effect": "Allow",
             "Action": ["s3:ListBucket"],
             "Resource": ["arn:aws:s3:::pixee-analysis-input"]
           },
           {
             "Effect": "Allow",
             "Action": [
               "s3:GetObject",
               "s3:PutObject",
               "s3:DeleteObject",
               "s3:GetObjectVersion"
             ],
             "Resource": ["arn:aws:s3:::pixee-analysis-input/*"]
           }
         ]
       }

2. Create Kubernetes Service Account

   Create a service account with the IAM role annotation:

       apiVersion: v1
       kind: ServiceAccount
       metadata:
         name: pixee-s3-service-account
         namespace: pixee-enterprise-server
         annotations:
           eks.amazonaws.com/role-arn: "arn:aws:iam::123456789012:role/pixee-s3-role"

3. Configure Helm Values

   Set the following in your values.yaml:

       global:
         pixee:
           serviceAccount:
             create: false
             name: "pixee-s3-service-account"
           objectStore:
             embedded: false
             endpoint: "https://s3.amazonaws.com"
             region: "us-east-1"
             credentialType: "default" # Use IRSA
             # username and password are not required with IRSA
External RDS Database Configuration¶
If using an external database such as Amazon RDS for PostgreSQL, you can reference an existing Kubernetes secret instead of passing the password directly through Helm values.
1. See the installation prerequisites for database requirements.

2. Create a Kubernetes secret with a `password` key that contains the password for the PostgreSQL user to be used by Pixee.

3. Configure Helm Values:

       database:
         embedded: false
         host: <RDS ENDPOINT>
         port: <RDS PORT>
         name: "pixee_platform"
         username: "pixee"
         existingSecret: <EXISTING SECRET NAME>
Azure¶
Configuration and setup information for deploying Pixee Enterprise Server on Microsoft Azure.
Resources¶
For Embedded Cluster deployments on Azure VMs, resources typically include:
- Resource Group (if it doesn't already exist)
- SSH Key (stored in Azure; used by the VM)
- Virtual Network (VNet)
- Subnet (within the VNet)
- Network Security Group (NSG)
- Inbound rules for TCP on ports 30000 and 443 (plus ports 22 and 80 temporarily)
- Public IP Address (Standard, static)
- Network Interface (NIC) (linked to VNet, subnet, NSG, and the public IP)
- Optional: Azure DNS Zone (if you provide a domain)
- DNS A record pointing to the public IP
- Virtual Machine (image: Canonical:0001-com-ubuntu-server-jammy:22_04-lts-gen2:latest, attached to the resources above)
- Size: Standard_D8s_v3 w/ 512 GB, Premium_LRS os disk
- Azure Cognitive Services (OpenAI) resource
- OpenAI Model Deployment ("o3-mini")
For Helm deployments on AKS, Azure resources typically include:
- Resource Group (if it doesn't already exist)
- Virtual Network (VNet)
- Subnet (within the VNet)
- Network Security Group (NSG)
- Inbound rule for TCP on port 443
- Public IP Address (Standard, static)
- Optional: Azure DNS Zone (if you provide a domain)
- DNS A record pointing to the public IP
- Kubernetes cluster (AKS) with worker nodes sized appropriately
- Node size equivalent to Standard_D8s_v3 or better
- Azure Cognitive Services (OpenAI) resource
- OpenAI Model Deployment ("o3-mini")
Google Cloud Platform¶
Configuration and setup information for deploying Pixee Enterprise Server on Google Cloud Platform.
Notes¶
You can use the built-in ingress controller for Google Kubernetes Engine by setting the following in values.yaml:
global:
platform:
service:
type: ClusterIP
ingress:
enabled: true
className: "gce"
annotations:
"kubernetes.io/ingress.class": "gce"
hosts:
- host: ""
paths:
- path: "/"
pathType: "Prefix"
Resources¶
For Helm deployments on GKE, Google Cloud resources typically include:
- Google Kubernetes Engine
- Cloud SQL
Oracle Cloud Infrastructure¶
Configuration and setup information for deploying Pixee Enterprise Server on Oracle Cloud Infrastructure.
Resources¶
For Embedded Cluster deployments on OCI VMs, resources typically include:
- Virtual Cloud Network (VCN)
- Subnet (within the VCN)
- Network Security Group (NSG)
- Security List or NSG Rules (allowing ingress on ports 30000, 443, 22 (temp), 80 (temp))
- Reserved Public IP (if applicable)
- Virtual Network Interface Card (VNIC) (attached to the instance, associated with VCN, subnet, NSG, and Public IP)
- OCI DNS Zone (if managing the domain in OCI)
- DNS A Record (pointing to the Reserved Public IP in OCI DNS)
- Compute Instance (Ubuntu 22.04 image from OCI Marketplace or Platform Images)
- VM.Standard3.Flex (8 OCPUs, 64GB RAM) with a 512 GB Block Volume (NVMe or Balanced option)
- SSH Key Pair
- OCI Generative AI (if available) or Custom Model Deployment in OCI Data Science
- OCI Generative AI Service Deployment (if applicable) or OCI AI Services (custom model in Data Science or AI Text Services)
For Helm deployments on OKE, OCI resources typically include:
- Virtual Cloud Network (VCN)
- Subnet (within the VCN)
- Network Security Group (NSG)
- Security List or NSG Rules (allowing ingress on port 443)
- Reserved Public IP (if applicable)
- OCI DNS Zone (if managing the domain in OCI)
- DNS A Record (pointing to the Reserved Public IP in OCI DNS)
- Kubernetes cluster (OKE) with worker nodes sized appropriately
- Node size equivalent to VM.Standard3.Flex (8 OCPUs, 64GB RAM)
- OCI Generative AI (if available) or Custom Model Deployment in OCI Data Science
- OCI Generative AI Service Deployment (if applicable) or OCI AI Services (custom model in Data Science or AI Text Services)
Organization Preferences¶
Organization preferences let you define natural-language guidelines that Pixee applies across all repositories in your organization. Instead of configuring each repository individually, you can set baseline rules once and have them take effect everywhere.
Accessing Preferences¶
Select your organization from the top bar in the Pixee Platform UI, then navigate to Preferences. The preferences editor supports Markdown formatting and saves your changes immediately.
Writing Preferences¶
Write your preferences in Markdown. Each preference should clearly describe a rule or guideline for how Pixee should handle findings and generate fixes across your repositories.
Examples of effective preferences:
- "Ignore findings in test directories"
- "Apply stricter checks to repositories with public visibility"
- "Prefer `slf4j` over `java.util.logging` when suggesting logging changes"
- "Do not suggest changes to files in `src/generated/`"
Tip
Start with a few high-impact rules and refine them over time. You can review how Pixee applies your preferences in the activity log for each repository.
When you first open the preferences editor, you'll see example content to help you get started. Replace it with rules tailored to your organization's standards and workflows.
How Preferences Are Applied¶
Pixee uses a single source of preferences for each analysis run. The following precedence rules determine which source is used:
- Repo-level `PIXEE.md` takes priority when present. If a repository contains a `PIXEE.md` file, Pixee uses it and ignores organization preferences for that repository.
- Organization preferences serve as the baseline for any repository that does not have its own `PIXEE.md`.
- An empty `PIXEE.md` opts a repository out of organization preferences entirely. If you want a repository to have no preferences at all, commit an empty `PIXEE.md` file.
Info
There is no merging between sources. Pixee uses either the repo-level PIXEE.md or the organization preferences — never both. If you have requirements for merging preferences from multiple sources, please contact Pixee support.
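The precedence rules above can be sketched as a small Python function. This is an illustration only, not Pixee's actual implementation; the function and parameter names are invented:

```python
def effective_preferences(repo_pixee_md, org_preferences):
    """Return the single preferences source Pixee would use (no merging)."""
    if repo_pixee_md is None:
        # No PIXEE.md in the repository: org preferences are the baseline.
        return org_preferences
    # A PIXEE.md exists: it is used as-is, and an empty file
    # means the repository has no preferences at all.
    return repo_pixee_md

print(effective_preferences(None, "org rules"))          # → org rules
print(effective_preferences("repo rules", "org rules"))  # → repo rules
print(repr(effective_preferences("", "org rules")))      # → ''
```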
Concurrent Editing¶
Organization preferences use optimistic locking to prevent silent overwrites. If another user saves changes while you are editing, you will see a conflict warning when you attempt to save. When this happens, refresh the page to load the latest version before making your changes again.
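The conflict behavior described above follows the standard optimistic-locking pattern, sketched here in Python. The class and field names are invented for illustration; Pixee's server-side implementation may differ:

```python
class Conflict(Exception):
    """Raised when a save is based on a stale version."""

class PreferencesStore:
    def __init__(self, text=""):
        self.text = text
        self.version = 0  # incremented on every successful save

    def save(self, new_text, based_on_version):
        # Reject the write if someone else saved since this editor loaded.
        if based_on_version != self.version:
            raise Conflict("preferences changed since you loaded them; refresh and retry")
        self.text = new_text
        self.version += 1
        return self.version
```

A second editor saving against an outdated version gets a `Conflict` instead of silently overwriting the first editor's changes, which is why the UI asks you to refresh before retrying.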
Observability
Observability¶
Pixee Enterprise Server provides comprehensive observability tools to help you monitor and understand your deployment's behavior.
Available Tools¶
Metrics & Dashboards¶
Monitor real-time metrics and visualize system performance with custom dashboards. Track AI service performance, per-finding task metrics, and more.
Logs & Debugging¶
Access pod status, view application logs, and query centralized logs with VictoriaLogs. When local metrics is enabled, logs from all Pixee components are automatically collected and searchable via the VictoriaLogs VMUI.
Learn more about Logs & Debugging →
Traces¶
Inspect distributed traces from the analysis service with VictoriaTraces. When local metrics is enabled, trace telemetry is automatically collected, allowing you to search by service name, trace ID, or span attributes.
Getting Started¶
For most operational tasks, you'll need:
- `kubectl` access to your cluster
- SSH access to the cluster host (for Embedded Cluster deployments)
- Knowledge of your deployment namespace:
  - Embedded Cluster: `kotsadm`
  - Helm Deployment: `pixee-enterprise-server`
Logs & Debugging¶
Access pod status, view application logs, and query centralized logs for your Pixee Enterprise Server deployment.
VictoriaLogs (Centralized Logging)¶
When local metrics is enabled, Pixee Enterprise Server also deploys VictoriaLogs for centralized log collection. VictoriaLogs automatically collects and indexes logs from all Pixee components, making it easy to search and analyze logs across your entire deployment.
Enabling VictoriaLogs¶
VictoriaLogs is automatically enabled when you enable local metrics. See Enabling Local Metrics for instructions.
Accessing VictoriaLogs VMUI¶
You can access the VictoriaLogs web interface to query and explore logs. You can either enable web access via ingress or use port forwarding.
Option 1: Enable VMUI Web Interface (Ingress)¶
Enable the VMUI web interface to access the logs query interface directly through your browser.
To enable the VictoriaLogs web interface in Embedded Cluster deployments:
- Navigate to the admin console
- Select the `Config` tab
- Go to the `Advanced Settings` section
- Check the `Enable VMUI web interface` option (this enables both metrics and logs interfaces)
- Save and redeploy the application
Once enabled, access the logs interface at:
https://<your-domain>/logs/vmui/
Unauthenticated Access
The VMUI web interface endpoints are not authenticated. Only enable this option if your deployment is within a trusted network or you have implemented external authentication.
To enable the VictoriaLogs web interface in Helm Deployment, add the following to your values.yaml:
victoria-logs-single:
server:
ingress:
enabled: true
ingressClassName: "nginx" # Use your ingress class
hosts:
- name: "your-domain.com"
path:
- /logs
port: http
Then upgrade your deployment:
helm upgrade pixee-enterprise-server ./charts/pixee-enterprise-server \
-f values.yaml \
-n pixee-enterprise-server
Once enabled, access the logs interface at:
https://your-domain.com/logs/vmui/
Unauthenticated Access
The VMUI web interface endpoints are not authenticated. Consider implementing external authentication or only enable this in trusted network environments.
Option 2: Port Forwarding¶
If you prefer not to expose the VMUI via ingress, you can use port forwarding for temporary access.
Step 1: Create SSH tunnel from your local machine
ssh -L 9428:localhost:9428 pixee@<your-hostname>
Step 2: Set up port forwarding
In the SSH session, run:
sudo kubectl port-forward svc/pixee-enterprise-server-victoria-logs-single-server 9428:9428 -n kotsadm
Step 3: Access the logs interface
Open your browser and navigate to:
http://localhost:9428/vmui/
Step 1: Port forward to VictoriaLogs
kubectl port-forward svc/pixee-enterprise-server-victoria-logs-single-server 9428:9428 -n pixee-enterprise-server
Step 2: Access the logs interface
Open your browser and navigate to:
http://localhost:9428/vmui/
LogsQL Query Examples¶
VictoriaLogs uses LogsQL, a powerful query language for searching and filtering logs. Here are some useful queries:
# Search for errors across all logs
error OR ERROR
# Filter logs by pod name
_stream:{pod="pixee-enterprise-server-platform"}
# Search for specific text in platform logs
_stream:{pod=~"pixee-enterprise-server-platform.*"} AND "webhook"
# Find logs with a specific log level
_stream:{pod=~"pixee-enterprise-server-.*"} AND level="ERROR"
# Search within a time range (use the time picker in VMUI)
_stream:{namespace="kotsadm"} AND "analysis"
VictoriaLogs Resources¶
- VictoriaLogs Documentation
- LogsQL Query Language - Complete LogsQL reference
- VictoriaLogs VMUI - Web UI guide for logs
Viewing Pods¶
To view the pods in your deployment, use the namespace you installed Pixee into:
- Embedded Cluster: `kotsadm`
- Helm Deployment: `pixee-enterprise-server`, `default`, or your selected namespace
List Running Pods¶
kubectl get pods -n <namespace> --field-selector status.phase=Running
View All Pods with Status¶
kubectl get pods -n <namespace>
Viewing Logs¶
Basic Log Viewing¶
To view logs for a specific service, use the kubectl logs command. For example, to view logs for the platform service:
# For Embedded Cluster
kubectl logs deployment.apps/pixee-enterprise-server-platform -n kotsadm
# For Helm Deployment
kubectl logs deployment.apps/pixee-enterprise-server-platform -n pixee-enterprise-server
Viewing Recent Logs¶
To narrow down to the last 25 lines with timestamps:
# For Embedded Cluster
kubectl logs deployment.apps/pixee-enterprise-server-platform -n kotsadm --tail 25 --timestamps
# For Helm Deployment
kubectl logs deployment.apps/pixee-enterprise-server-platform -n pixee-enterprise-server --tail 25 --timestamps
Following Logs in Real-Time¶
To continuously stream new logs as they are generated, use the --follow flag:
# For Embedded Cluster
kubectl logs deployment.apps/pixee-enterprise-server-platform -n kotsadm --follow --tail 25 --timestamps
# For Helm Deployment
kubectl logs deployment.apps/pixee-enterprise-server-platform -n pixee-enterprise-server --follow --tail 25 --timestamps
Common Deployments to Monitor¶
Here are the main Pixee Enterprise Server deployments you may need to view logs for:
| Deployment | Purpose |
|---|---|
| `pixee-enterprise-server-platform` | Main platform service |
| `pixee-enterprise-server-analysis` | Analysis service |
Example: Viewing Analysis Service Logs¶
# For Embedded Cluster
kubectl logs deployment.apps/pixee-enterprise-server-analysis -n kotsadm --tail 50 --timestamps
# For Helm Deployment
kubectl logs deployment.apps/pixee-enterprise-server-analysis -n pixee-enterprise-server --tail 50 --timestamps
Viewing Logs for Specific Pods¶
If you need to view logs for a specific pod (rather than a service), first get the pod name:
kubectl get pods -n <namespace>
Then view the logs:
kubectl logs <pod-name> -n <namespace>
Multiple Containers in a Pod¶
If a pod has multiple containers, specify the container name:
kubectl logs <pod-name> -c <container-name> -n <namespace>
Troubleshooting Common Issues¶
No Logs Appearing¶
If no logs are appearing:
1. Verify the pod is running: `kubectl get pods -n <namespace>`
2. Check pod status and events: `kubectl describe pod <pod-name> -n <namespace>`
3. Check if the service has active endpoints: `kubectl get endpoints -n <namespace>`
Pod Keeps Restarting¶
If a pod is repeatedly restarting, check the previous container's logs:
kubectl logs <pod-name> -n <namespace> --previous
Additional Resources¶
For more information on the kubectl logs command, see the Kubernetes documentation.
Metrics & Dashboards¶
Pixee Enterprise Server includes Victoria Metrics for local metrics collection and visualization. Custom dashboards are automatically deployed to help monitor AI service performance and per-finding task metrics.
Enabling Local Metrics¶
Before accessing Metrics dashboards, you must enable local metrics collection in your deployment.
To enable local metrics collection in Embedded Cluster deployments:
- Navigate to the admin console
- Select the `Config` tab
- Go to the `Advanced Settings` section
- Check the `Enable Local Metrics` option
- Save and redeploy the application
To enable local metrics collection in Helm Deployment, add the following to your values.yaml:
global:
pixee:
localMetrics:
enabled: true
Then upgrade your deployment:
helm upgrade pixee-enterprise-server ./charts/pixee-enterprise-server \
-f values.yaml \
-n pixee-enterprise-server
Accessing Metrics Dashboards¶
After enabling local metrics, you can access the Metrics dashboards to view real-time metrics and custom dashboards. You can either enable web access via ingress or use port forwarding.
Option 1: Enable VMUI Web Interface (Ingress)¶
Enable the VMUI web interface to access metrics dashboards directly through your browser without port forwarding.
To enable the VMUI web interface in Embedded Cluster deployments:
- Navigate to the admin console
- Select the `Config` tab
- Go to the `Advanced Settings` section
- Check the `Enable VMUI web interface` option
- Save and redeploy the application
Once enabled, access the dashboards at:
https://<your-domain>/metrics/vmui/#/dashboards
Unauthenticated Access
The VMUI web interface endpoints are not authenticated. Only enable this option if your deployment is within a trusted network or you have implemented external authentication.
To enable the VMUI web interface in Helm Deployment, add the following to your values.yaml:
victoria-metrics-single:
server:
ingress:
enabled: true
ingressClassName: "nginx" # Use your ingress class
hosts:
- name: "your-domain.com"
path:
- /metrics
port: http
Then upgrade your deployment:
helm upgrade pixee-enterprise-server ./charts/pixee-enterprise-server \
-f values.yaml \
-n pixee-enterprise-server
Once enabled, access the dashboards at:
https://your-domain.com/metrics/vmui/#/dashboards
Unauthenticated Access
The VMUI web interface endpoints are not authenticated. Consider implementing external authentication or only enable this in trusted network environments.
Option 2: Port Forwarding¶
If you prefer not to expose the VMUI via ingress, you can use port forwarding for temporary access.
Step 1: Create SSH tunnel from your local machine
ssh -L 8428:localhost:8428 pixee@<your-hostname>
Step 2: Set up port forwarding
In the SSH session, run:
sudo kubectl port-forward pixee-enterprise-server-metrics-server-0 8428:8428 -n kotsadm
Step 3: Access the dashboards
Open your browser and navigate to:
http://localhost:8428/vmui/#/dashboards
Tip
Keep the SSH tunnel and port-forward running while viewing the dashboards.
Step 1: Port forward to Metrics
kubectl port-forward pixee-enterprise-server-metrics-server-0 8428:8428 -n pixee-enterprise-server
Step 2: Access the dashboards
Open your browser and navigate to:
http://localhost:8428/vmui/#/dashboards
Available Dashboards¶
Pixee Enterprise Server includes the following pre-configured dashboards:
AI Service Metrics¶
Monitor AI service performance with the following metrics:
- AI Service Requests Rate: Track request volume by model, status, and status code
- AI Service Latency Percentiles: Monitor p50, p95, and p99 latency by model and status
- AI Service Retry Rate: Track retry frequency across services
- AI Service Rate Limit Events: Monitor rate limiting occurrences
- AI Health Check Latency Percentiles: Track health check performance (p50, p95, p99)
Per-Finding Task Metrics¶
Track per-finding task execution with:
- Per-Finding Task Counts by Type and Status: Monitor task distribution and completion rates
Example: Troubleshooting Analysis Latency¶
One of the most common use cases for the metrics dashboards is troubleshooting slow analysis performance. The AI Service Latency Percentiles dashboard is particularly useful for identifying which AI models are contributing to latency issues.
Understanding the Latency Percentiles Graph
The AI Service Latency Percentiles dashboard displays three key metrics for each AI model over time:
- p50 (Median): The median latency - half of all requests complete faster than this value, half complete slower. This represents typical performance.
- p95: 95% of requests complete faster than this value. This helps identify performance outliers while filtering out the worst 5%.
- p99: 99% of requests complete faster than this value. This captures near-worst-case performance and helps identify extreme latency spikes.
Each AI model (e.g., o3-mini) has its own set of percentile lines on the graph, allowing you to compare performance across models and identify which models are experiencing latency issues.
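For intuition, the percentiles above can be computed from raw latency samples with the nearest-rank method. This is a simplified sketch over toy data; the dashboards derive these values from histogram buckets rather than raw samples:

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: the smallest sample with at least p% of values at or below it."""
    s = sorted(samples)
    rank = math.ceil(p / 100 * len(s))
    return s[max(rank - 1, 0)]

latencies_ms = list(range(1, 101))  # toy data: requests taking 1..100 ms
print(percentile(latencies_ms, 50),
      percentile(latencies_ms, 95),
      percentile(latencies_ms, 99))
# → 50 95 99
```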
Troubleshooting Scenario
If users report that analysis is taking longer than expected:
1. Access the AI Service Latency Percentiles dashboard following the steps in Accessing Metrics Dashboards
2. Identify the time period when the slowdown occurred using the time range selector in VMUI
3. Compare latency across models:
   - Look for models with elevated p50 values - this indicates consistently slow performance
   - Check for spikes in p95 or p99 values - this indicates intermittent latency issues
4. Compare current latency values to historical baselines to confirm degradation
5. Correlate with other metrics:
   - Check the AI Service Requests Rate dashboard to see if increased request volume is causing the latency
   - Review the AI Service Rate Limit Events dashboard to see if rate limiting is delaying requests
6. Examine the AI Service Retry Rate to identify whether failed requests are causing delays
7. Take action based on findings:
   - If a specific model shows consistently high latency, consider switching to an alternative model or contacting the AI service provider
   - If rate limiting is occurring, adjust request rates or increase service quotas
   - If all models show elevated latency during specific time periods, investigate external factors (network issues, AI service outages, etc.)
Dashboard Updates¶
Restarting Metrics After Updates
When new dashboards are added or existing dashboards are updated during an upgrade, the Metrics pod must be restarted to load the changes.
# For Embedded Cluster
kubectl rollout restart statefulset pixee-enterprise-server-victoria-metrics-single-server -n kotsadm
# For Helm Deployment
kubectl rollout restart statefulset pixee-enterprise-server-victoria-metrics-single-server -n pixee-enterprise-server
The pod will perform a rolling restart, which may take a few minutes. You can monitor the restart progress:
# For Embedded Cluster
kubectl rollout status statefulset pixee-enterprise-server-victoria-metrics-single-server -n kotsadm
# For Helm Deployment
kubectl rollout status statefulset pixee-enterprise-server-victoria-metrics-single-server -n pixee-enterprise-server
User-Created Dashboards Are Not Persisted
Custom dashboards created through the Metrics UI are stored in browser localStorage and are not persisted to the cluster. They will be lost when:
- The pod restarts
- You clear your browser cache
- You access Metrics from a different browser or device
To preserve custom dashboards:
- Export them as JSON files before pod restarts
- Save the JSON files to your local filesystem
- Re-import them after the pod restart
Victoria Metrics VMUI¶
Victoria Metrics provides a powerful UI (VMUI) for querying and visualizing metrics beyond the pre-configured dashboards.
Accessing VMUI¶
Access the full VMUI interface at:
http://localhost:8428/vmui/
VMUI Features¶
From the VMUI, you can:
- Execute PromQL queries: Write custom queries to explore your metrics
- Create custom visualizations: Build charts and graphs for any metric
- Explore available metrics: Browse all collected metrics and their labels
- View and create dashboards: Access pre-configured dashboards or create your own
- Export data: Download metrics data for offline analysis
Useful PromQL Queries¶
Here are some example queries you can run in VMUI:
# View all metric names
{__name__!=""}
# AI service request rate (last 5 minutes)
rate(ai_service_requests[5m])
# AI service latency by model
histogram_quantile(0.95, sum(rate(ai_service_latency_ms_bucket[5m])) by (le, model))
# Per-finding task counts by status
sum by(status) (per_finding_tasks)
Metrics Retention¶
Metrics are retained for 3 days by default. This retention period balances observability needs with storage requirements.
Adjusting Retention Period¶
To adjust the retention period in Embedded Cluster deployments:
- Navigate to the admin console
- Select the `Config` tab
- Go to the `Advanced Settings` section
- Update the `Metrics retention period` field (e.g., 3d, 7d, 30d, 1y)
- Save and redeploy the application
The new retention period will be applied automatically during the deployment.
To adjust the retention period in Helm Deployment, add the following to your values.yaml:
victoria-metrics-single:
server:
retentionPeriod: "7d" # Options: 3d, 7d, 30d, 1y, etc.
Then upgrade your deployment:
helm upgrade pixee-enterprise-server ./charts/pixee-enterprise-server \
-f values.yaml \
-n pixee-enterprise-server
The new retention period will be applied automatically during the deployment.
Troubleshooting¶
Port Forward Fails¶
If port forwarding fails, verify the pod is running:
# For Embedded Cluster
kubectl get pod pixee-enterprise-server-metrics-server-0 -n kotsadm
# For Helm Deployment
kubectl get pod pixee-enterprise-server-metrics-server-0 -n pixee-enterprise-server
If the pod is not running, check the pod logs:
# For Embedded Cluster
kubectl logs pixee-enterprise-server-metrics-server-0 -n kotsadm
# For Helm Deployment
kubectl logs pixee-enterprise-server-metrics-server-0 -n pixee-enterprise-server
Dashboards Not Appearing¶
If dashboards don't appear after an upgrade:
- Verify local metrics is enabled (see Enabling Local Metrics)
- Restart the Metrics pod (see Dashboard Updates)
- Clear your browser cache and refresh the page
- Check that you're accessing the correct URL:
http://localhost:8428/vmui/#/dashboards
No Metrics Data¶
If dashboards show no data:
- Verify local metrics has been enabled for at least a few minutes (metrics need time to accumulate)
- Ensure you have run at least one Pixee analysis to generate metrics data (trigger a repository scan, PR analysis, or other Pixee operation)
- Check that the analysis and platform services are running and processing work
- Verify the time range selector in VMUI is set appropriately (default is "Last 30 minutes")
Additional Resources¶
- VictoriaMetrics Documentation
- MetricsQL Query Language - VictoriaMetrics query language (PromQL-compatible with extensions)
- VMUI Documentation - VictoriaMetrics web UI guide
- PromQL Query Examples
Traces¶
Pixee Enterprise Server includes VictoriaTraces for distributed trace collection. When local metrics is enabled, trace telemetry from the analysis service is automatically collected and stored, allowing you to inspect individual request spans and trace IDs.
Enabling Traces¶
Traces are automatically collected when local metrics is enabled. See Enabling Local Metrics for instructions.
Accessing Traces VMUI¶
You can access the VictoriaTraces web interface to search and explore traces. You can either enable web access via ingress or use port forwarding.
Option 1: Enable VMUI Web Interface (Ingress)¶
Enable the VMUI web interface to access the traces query interface directly through your browser.
To enable the VictoriaTraces web interface in Embedded Cluster deployments:
- Navigate to the admin console
- Select the `Config` tab
- Go to the `Advanced Settings` section
- Check the `Enable Traces VMUI web interface` option
- Save and redeploy the application
Once enabled, access the traces interface at:
https://<your-domain>/traces/select/vmui/
Unauthenticated Access
The VMUI web interface endpoints are not authenticated. Only enable this option if your deployment is within a trusted network or you have implemented external authentication.
To enable the VictoriaTraces web interface in Helm Deployment, add the following to your values.yaml:
victoriatraces:
server:
ingress:
enabled: true
ingressClassName: "nginx" # Use your ingress class
hosts:
- name: "your-domain.com"
path:
- /traces
port: http
Then upgrade your deployment:
helm upgrade pixee-enterprise-server ./charts/pixee-enterprise-server \
-f values.yaml \
-n pixee-enterprise-server
Once enabled, access the traces interface at:
https://your-domain.com/traces/select/vmui/
Unauthenticated Access
The VMUI web interface endpoints are not authenticated. Consider implementing external authentication or only enable this in trusted network environments.
Option 2: Port Forwarding¶
If you prefer not to expose the VMUI via ingress, you can use port forwarding for temporary access.
Step 1: Create SSH tunnel from your local machine
ssh -L 10428:localhost:10428 pixee@<your-hostname>
Step 2: Set up port forwarding
In the SSH session, run:
sudo kubectl port-forward svc/pixee-enterprise-server-traces-server 10428:10428 -n kotsadm
Step 3: Access the traces interface
Open your browser and navigate to:
http://localhost:10428/traces/select/vmui/
Step 1: Port forward to VictoriaTraces
kubectl port-forward svc/pixee-enterprise-server-traces-server 10428:10428 -n pixee-enterprise-server
Step 2: Access the traces interface
Open your browser and navigate to:
http://localhost:10428/traces/select/vmui/
Querying Traces¶
VictoriaTraces provides a Jaeger-compatible query API for searching traces. In the VMUI, you can:
- Search by service name — Filter traces from specific services (e.g., `pixee-analysis-service`)
- Search by trace ID — Look up a specific trace using its trace ID
- Filter by duration — Find slow requests by setting minimum/maximum duration
- Filter by tags — Search for traces with specific span attributes
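As a rough illustration of the duration filter, the following Python sketch filters spans by a minimum duration over made-up span data. The field names and values are invented for this example, not the VictoriaTraces schema:

```python
# Invented example spans; a real trace query returns these from the API.
spans = [
    {"trace_id": "a1", "service": "pixee-analysis-service", "duration_ms": 120},
    {"trace_id": "b2", "service": "pixee-analysis-service", "duration_ms": 4500},
    {"trace_id": "c3", "service": "pixee-analysis-service", "duration_ms": 80},
]

# Keep only spans slower than 1 second, mirroring a minimum-duration filter.
slow = [s["trace_id"] for s in spans if s["duration_ms"] >= 1000]
print(slow)  # → ['b2']
```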
VictoriaTraces Resources¶
- VictoriaTraces Documentation
- VictoriaTraces Querying - Query API reference
API Access¶
Pixee Enterprise Server provides a REST API for programmatic access to your Pixee installation. This page explains how to authenticate with the API and access available resources.
Info
The current API authentication uses a global system API key. Improved authentication mechanisms are planned for future releases.
Retrieving Your API Key¶
To access the Pixee API, you need to retrieve your API key from the admin console.
Step 1: Access the Admin Console¶
Navigate to your admin console:
- Embedded Cluster: `https://<your-domain>:30000`
- Helm Deployment: Access via the configured admin console endpoint
Log in with your admin credentials.
Step 2: Enable and Retrieve the API Key¶
- Select the Config tab in the admin console
- Scroll to the Basic Settings section
- Check the Enable Pixee API key checkbox to enable API authentication
- Copy the value from the Pixee API Key field
Tip
Store your API key securely. Treat it like a password and avoid committing it to version control or sharing it in insecure channels.
Authentication¶
The Pixee API uses Bearer token authentication. Include your API key in the Authorization header of each request.
Example: cURL Request¶
curl -H "Authorization: Bearer <your-api-key>" \
https://<your-pixee-server>/api/v1/openapi
Example: Python Request¶
import requests
headers = {
"Authorization": "Bearer <your-api-key>"
}
response = requests.get(
"https://<your-pixee-server>/api/v1/openapi",
headers=headers
)
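If `requests` is not available in your environment, the same authenticated call can be made with only the Python standard library. This is a minimal sketch; the server URL and API key are placeholders:

```python
import urllib.request

def build_request(server: str, api_key: str,
                  path: str = "/api/v1/openapi") -> urllib.request.Request:
    """Construct an authenticated GET request for the Pixee API."""
    return urllib.request.Request(
        f"{server}{path}",
        headers={"Authorization": f"Bearer {api_key}"},
    )

req = build_request("https://<your-pixee-server>", "<your-api-key>")
# urllib.request.urlopen(req) would perform the call
```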
API Resources¶
Pixee Enterprise Server provides two resources for exploring and integrating with the API:
HAL Browser¶
URL: /api/browser
The HAL Browser provides an interactive web interface for exploring the API with real data from your Pixee installation. Use it to:
- Discover available API endpoints
- Understand the structure of API responses
- Test API calls interactively
OpenAPI Specification¶
URL: /api/v1/openapi
The OpenAPI specification provides a machine-readable description of the API. Use it to:
- Generate client code in your preferred programming language
- Import into API testing tools like Postman or Insomnia
- Build integrations with automated tooling
To download the specification:
curl -H "Authorization: Bearer <your-api-key>" \
https://<your-pixee-server>/api/v1/openapi > openapi.json
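The downloaded specification is plain JSON, so it is easy to inspect programmatically before wiring up a client generator. This sketch lists each path and its HTTP methods; the tiny inline spec is a made-up illustration, not the real Pixee API surface:

```python
import json

def list_endpoints(spec: dict) -> list:
    """Flatten an OpenAPI 'paths' object into 'METHOD /path' strings."""
    out = []
    for path, ops in spec.get("paths", {}).items():
        for method in ops:
            out.append(f"{method.upper()} {path}")
    return sorted(out)

# In practice: spec = json.load(open("openapi.json"))
# Hypothetical minimal spec for illustration:
spec = {"paths": {"/analyses": {"get": {}, "post": {}}}}
print(list_endpoints(spec))  # → ['GET /analyses', 'POST /analyses']
```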
Pixee Enterprise Server Frequently Asked Questions¶
Below is a list of frequently asked questions related to Pixee Enterprise Server installation, configuration, and operation. If your question is not answered below, please let us know!
How do I view the pods and logs?¶
For detailed instructions on viewing pods and logs, see the Observability - Logs & Debugging section.
How do I verify that the GitHub App is successfully sending events to pixee-enterprise-server?¶
To confirm that events are being properly transmitted from the GitHub App to pixee-enterprise-server, you can review the GitHub App's event log. This log can be accessed through your GitHub App's settings under the "Advanced" section.
GitHub provides detailed documentation on viewing webhook deliveries here.
The event log displays a comprehensive list of all events sent from the GitHub App, along with their respective response codes.
How do I retrieve a file out of SeaweedFS?¶
Pixee uses SeaweedFS to store CodeTF files and we may ask for those files to help debug issues related to Pixee.
CodeTF files are an interchange format used by the Pixee platform. They represent the results of fixes and therefore can be useful for debugging fix-related issues.
SeaweedFS ports are not exposed by default. To access the SeaweedFS web UI, first create an SSH tunnel from your local machine:
ssh -L 8888:localhost:8888 pixee@<ip>
Then in the SSH session, use the kubectl port-forward command to forward the port:
kubectl port-forward -n kotsadm svc/pixee-enterprise-server-seaweedfs-filer 8888:8888
This will forward the SeaweedFS web UI to port 8888. Remember to kill these commands once you are done. You can then access the SeaweedFS web UI by visiting http://localhost:8888 in your browser to browse and download files.
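With the port-forward active, files can also be fetched over the filer's HTTP interface instead of through the web UI. The path layout below (buckets exposed under `/buckets/<bucket>/<key>`) is the common SeaweedFS filer convention but may differ in your deployment, so confirm the path in the web UI first; the object key is a hypothetical example:

```python
from urllib.parse import quote

def filer_url(key: str, bucket: str = "pixee-analysis-input",
              base: str = "http://localhost:8888") -> str:
    """Build a SeaweedFS filer download URL (path layout is an assumption)."""
    return f"{base}/buckets/{bucket}/{quote(key)}"

# Hypothetical object key for illustration:
print(filer_url("some/analysis/result.codetf.json"))
# → http://localhost:8888/buckets/pixee-analysis-input/some/analysis/result.codetf.json
```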
How do I update the Pixee Enterprise Server embedded cluster admin console domain name and TLS certificate?¶
If you have registered a domain name for your Pixee Enterprise Server, have a valid TLS certificate, and want to replace the self-signed certificate used by the admin console during the initial installation, follow these steps:
1. (optional) Retrieve the TLS certificate and private key from the cluster (skip this step if you already have the cert and key available)
# From the Pixee Enterprise Server virtual machine
sudo ./pixee shell
kubectl get secret pixee-platform-tls-requested -o jsonpath='{.data.tls\.crt}' -n kotsadm | base64 --decode > cert.pem
kubectl get secret pixee-platform-tls-requested -o jsonpath='{.data.tls\.key}' -n kotsadm | base64 --decode > key.pem
# From your local machine
scp pixee@<vm ip address>:/home/pixee/cert.pem ./
scp pixee@<vm ip address>:/home/pixee/key.pem ./
# From the Pixee Enterprise Server virtual machine
sudo ./pixee shell
kubectl annotate secret kotsadm-tls acceptAnonymousUploads=1 --overwrite -n kotsadm
PROXY_SERVER=$(kubectl get pods -A | grep kurl-proxy | awk '{print $2}')
kubectl delete pods $PROXY_SERVER -n kotsadm
ℹ️ The `acceptAnonymousUploads` annotation will be removed after completing the update in the next step
- In a browser visit `https://<vm ip address>:30000/tls` and follow the prompts to set the admin console domain name to match the existing Pixee Enterprise Server and upload the TLS certificate and private key
- When completed you should be able to browse to the admin console at `https://<domain name>:30000`
How do I troubleshoot Pixee analysis failures or limited results?¶
The first step in troubleshooting is to generate a support bundle.
To generate a support bundle for an Embedded Cluster installation:
1. Click Troubleshoot in the navigation bar of the Pixee Enterprise Server Admin Console
2. Click Generate a Support Bundle
3. Click Analyze
4. Wait for the support bundle to generate; this may take a few minutes.
When the support bundle is ready, you will be directed to a report. Any detected issues will be highlighted in the report with suggested next steps. If none of the suggested steps resolve your issue, contact Pixee for further support.
To generate a support bundle for a Helm CLI installation:
1. Run `kubectl support-bundle --load-cluster-specs`
A Helm CLI installation support bundle will generate an archive you can share with Pixee for further support.
How do I troubleshoot Authentik blueprint errors?¶
Pixee Enterprise Server uses Authentik blueprints to declaratively configure the OIDC provider, application, and brand settings. Blueprints are automatically discovered and applied by the Authentik worker pod on a periodic cycle (~60 seconds). If a blueprint fails to apply, it enters an error state and will not be retried until the issue is resolved.
Checking Blueprint Status¶
Find the Authentik worker pod and check blueprint status:
# Find the worker pod
kubectl get pods -n <namespace> -l app.kubernetes.io/name=authentik,app.kubernetes.io/component=worker
# Check all blueprint statuses
kubectl exec -n <namespace> <worker-pod> -- ak shell -c "
from authentik.blueprints.models import BlueprintInstance
for bp in BlueprintInstance.objects.all():
print(f'{bp.name} | status={bp.status} | enabled={bp.enabled} | path={bp.path}')
"
Blueprint statuses:
- `successful` — Applied without errors
- `error` — Failed to apply; will not be retried automatically
- `unknown` — Not yet processed or was manually reset
Finding the Error¶
When a blueprint is in error state, the status alone does not show the error message. To see the actual error, manually trigger a blueprint apply from the worker pod:
kubectl exec -n <namespace> <worker-pod> -- ak apply_blueprint mounted/cm-pixee-authentik-blueprint/pixee-oidc.yaml
This will output the full error details, such as serializer validation errors or missing references.
Fixing and Reapplying¶
- Fix the root cause (e.g., correct an invalid field value in the blueprint configmap)
- Deploy the fix so the updated configmap is mounted in the worker pod
- Reset the blueprint status if it's stuck in `error`:
kubectl exec -n <namespace> <worker-pod> -- ak shell -c "
from authentik.blueprints.models import BlueprintInstance
bp = BlueprintInstance.objects.get(name='Pixee OIDC Provider')
bp.status = 'unknown'
bp.save()
"
- Wait for the next discovery cycle (~60 seconds), or manually trigger the apply:
kubectl exec -n <namespace> <worker-pod> -- ak apply_blueprint mounted/cm-pixee-authentik-blueprint/pixee-oidc.yaml
For more details on blueprint troubleshooting, see the Authentik blueprint documentation.
What internet access is necessary for Pixee Enterprise Server?¶
Pixee Enterprise Server will need access to certain resources on the internet during installation/upgrade and normal operation. All optional resources can be disabled via config/values. If you need complete air gap support, please contact us. The resources are detailed below:
During installation/update:¶
| Domain | IP Addresses (if available) | Notes |
|---|---|---|
| images.pixee.ai proxy.replicated.com | see https://github.com/replicatedhq/ips/blob/main/ip_addresses.json#L52-L57 | Pixee container image registry |
| registry.pixee.ai registry.replicated.com | see https://github.com/replicatedhq/ips/blob/main/ip_addresses.json#L20-L25 | Pixee Helm registry |
| github.com | see api section at https://api.github.com/meta | GitHub integration (optional) |
| api.openai.com | N/A | OpenAI integration (optional) |
| *.api.letsencrypt.org | see Letsencrypt FAQ | TLS certificate requests (optional) |
During normal operation:¶
| Domain | IP Addresses (if available) | Notes |
|---|---|---|
| distribution.pixee.ai replicated.app | 162.159.133.41 162.159.134.41 2606:4700:7::a29f:8529 2606:4700:7::a29f:8629 | Metrics reporting |
| sentry.io *.us.sentry.io | 35.186.247.156/32 34.120.195.249/32 | Crash reporting (optional) |
| github.com | see api section at https://api.github.com/meta | GitHub integration (optional) |
| api.openai.com | N/A | OpenAI integration (optional) |
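Reachability of the required endpoints can be smoke-tested from inside your network before installation. This is a minimal sketch that attempts a TCP connection to port 443 for each domain; the domain list mirrors the tables above, and a failure may simply mean your proxy or firewall requires different egress handling:

```python
import socket

def reachable(host: str, port: int = 443, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host in ["images.pixee.ai", "registry.pixee.ai",
             "distribution.pixee.ai", "github.com", "api.openai.com"]:
    print(f"{host}: {'ok' if reachable(host) else 'UNREACHABLE'}")
```

Note this does not account for an HTTP proxy; if `global.pixee.httpProxy` is set, test egress through the proxy instead.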
Helm Values Reference¶
Common Values¶
| Name | Description | Required | Default |
|---|---|---|---|
| `analysis.image.registry` | Analysis service image registry | No | images.pixee.ai |
| `analysis.image.repository` | Analysis service image repository | No | proxy/pixee/218200003247.dkr.ecr.us-east-1.amazonaws.com/pixee/pixeebot |
| `analysis.image.pullSecrets` | Analysis image pull secrets (set to `{}` to disable) | No | [{name: pixee-registry}] |
| `analysis.image.tag` | Analysis service image tag | No | see analysis Chart.yaml app version |
| `analysis.service.type` | Analysis service type | No | ClusterIP |
| `analysis.serviceAccount.name` | Analysis service account name | No | "" |
| `analysis.replicaCount` | Number of analysis service replicas | No | 1 |
| `analysis.resources.requests.cpu` | CPU requests for analysis service | No | None |
| `analysis.resources.requests.memory` | Memory requests for analysis service | No | None |
| `analysis.resources.limits.cpu` | CPU limits for analysis service | No | None |
| `analysis.resources.limits.memory` | Memory limits for analysis service | No | None |
| `seaweedfs.global.imageName` | SeaweedFS image name (for embedded object store) | No | images.pixee.ai/proxy/pixee/index.docker.io/chrislusf/seaweedfs |
| `seaweedfs.global.imagePullSecrets` | SeaweedFS image pull secrets (for embedded object store) | No | [{name: image-pull-secret}] |
| `seaweedfs.s3.enabled` | Enable SeaweedFS S3 compatibility (for embedded object store) | No | true |
| `seaweedfs.filer.data.type` | Storage type for filer data (hostPath or persistentVolumeClaim) | No | persistentVolumeClaim (Helm), hostPath (Embedded Cluster) |
| `seaweedfs.filer.data.size` | Size of filer persistent volume | No | 25Gi |
| `seaweedfs.filer.data.storageClass` | Storage class for filer PVC (empty = cluster default) | No | "" (uses cluster default) |
| `seaweedfs.filer.data.hostPathPrefix` | Host path for filer data when using hostPath type | No (if using hostPath) | /var/lib/seaweedfs (Embedded Cluster default) |
| `seaweedfs.filer.s3.createBuckets` | SeaweedFS buckets to create on install (for embedded object store) | No | pixee-analysis-input |
| `global.pixee.objectStore.username` | Object store username (for embedded SeaweedFS) | No | pixeebot |
| `global.pixee.objectStore.password` | Object store password (for embedded SeaweedFS) | No | pixeebot |
| `global.pixee.objectStore.ttlDays` | Number of days before objects expire in embedded object store | No | 7 |
| `platform.hostAliases` | Custom host-to-IP mappings for platform pods (`/etc/hosts` entries) | No | [] |
| `platform.image.registry` | Platform service image registry | No | images.pixee.ai |
| `platform.image.repository` | Platform service image repository | No | proxy/pixee/218200003247.dkr.ecr.us-east-1.amazonaws.com/pixee/pixeebot |
| `platform.image.pullSecrets` | Platform image pull secrets (set to `{}` to disable) | No | [{name: pixee-registry}] |
| `platform.image.tag` | Platform service image tag | No | see platform Chart.yaml app version |
| `platform.ingress.enabled` | Enable ingress for the platform service | No | false |
| `platform.ingress.className` | Ingress controller class name | Yes (if ingress enabled) | None |
| `platform.ingress.hosts` | List of host configurations | Yes (if ingress enabled) | None |
| `platform.ingress.tls` | TLS configuration for ingress | No | None |
| `platform.replicaCount` | Number of platform service replicas | No | 1 |
| `platform.resources.requests.cpu` | CPU requests for platform service | No | None |
| `platform.resources.requests.memory` | Memory requests for platform service | No | None |
| `platform.resources.limits.cpu` | CPU limits for platform service | No | None |
| `platform.resources.limits.memory` | Memory limits for platform service | No | None |
| `platform.service.type` | Service type for platform | No | ClusterIP |
| `platform.serviceAccount.name` | Platform service account name | No | "" |
| `replicated.image.registry` | Replicated SDK image registry | No | images.pixee.ai |
| `replicated.image.repository` | Replicated SDK image repository | No | proxy/pixee/index.docker.io/replicated/replicated-sdk |
| `replicated.imagepullSecrets` | Replicated SDK image pull secrets (set to `{}` to disable) | No | [{name: pixee-registry}] |
Global Values¶
| Name | Description | Required | Default |
|---|---|---|---|
| `global.pixee.domain` | Domain name where Pixee Enterprise Server will be accessible | Yes | None |
| `global.pixee.protocol` | Protocol to use for accessing Pixee Enterprise Server (`http` or `https`) | Yes | https |
| `global.pixee.serviceAccount.create` | Create a service account for the Pixee Enterprise Server release | Yes | true |
| `global.pixee.serviceAccount.name` | Name of service account to create | No | pixee |
| `global.pixee.access.oidc.client.provider` | OIDC provider (google, microsoft, embedded) | Yes (if using authentication) | None |
| `global.pixee.access.oidc.client.id` | Client ID for OIDC provider | Yes (if using authentication) | web |
| `global.pixee.access.oidc.client.secret` | Client secret for OIDC provider | Yes (if using authentication) | secret |
| `global.pixee.access.oidc.client.existingSecret` | Name of existing secret containing the client secret | No | {} |
| `global.pixee.access.oidc.client.secretKeys.secretKey` | Secret key containing the client secret | Yes | secret |
| `global.pixee.access.oidc.client.authServerUrl` | Auth server URL for Microsoft OIDC provider or embedded | Yes (if using Microsoft or embedded) | None |
| `global.pixee.access.oidc.client.basePath` | Base path for OIDC endpoints | No | oidc |
| `global.pixee.access.oidc.embedded.enabled` | Use embedded OIDC provider | No | false |
| `global.pixee.access.oidc.embedded.scopes` | OAuth scopes for embedded OIDC provider | Yes (if using embedded OIDC) | openid profile email |
| `global.pixee.access.oidc.embedded.issuer` | OIDC issuer URL for embedded provider | Yes (if using embedded OIDC) | None |
| `global.pixee.access.oidc.embedded.authenticationRedirectPath` | Path for authentication redirect | Yes (if using embedded OIDC) | /api/auth/login |
| `global.pixee.access.oidc.embedded.applicationType` | OAuth application type | Yes (if using embedded OIDC) | web-app |
| `global.pixee.access.oidc.embedded.usersJson` | JSON configuration for embedded OIDC users | Yes (if using embedded OIDC) | Default alice/bob users |
| `global.pixee.access.oidc.embedded.existingSecret` | Name of existing secret containing users.json | No | None |
| `global.pixee.ai.enabled` | Enable or disable AI functionality | No | true |
| `global.pixee.ai.default.provider` | AI provider type. Options: `openai`, `azure`, `anthropic` | No | None |
| `global.pixee.ai.default.apiKey` | AI provider API key for AI features | Yes (if using OpenAI) | None |
| `global.pixee.ai.default.existingSecret` | Name of existing Kubernetes secret containing AI provider API key (takes precedence over direct key) | No (alternative to direct key) | None |
| `global.pixee.ai.default.secretKeys.apiKey` | Key within the secret that contains the AI provider API key | Yes | key |
| `global.pixee.ai.default.endpoint` | AI provider base URL | No | None |
| `global.pixee.ai.scaModelsEnabled` | Enable Software Composition Analysis (SCA) models for enhanced vulnerability detection | No | false |
| `global.pixee.ai.scaModelName` | LLM model name for SCA analysis (only used when scaModelsEnabled=true) | No | "gpt-4.1" |
| `global.pixee.ai.deepResearchModelName` | LLM model name for deep research analysis (only used when scaModelsEnabled=true) | No | "o4-mini-deep-research" |
| `global.pixee.objectStore.embedded` | Use embedded object store instead of external | No | true |
| `global.pixee.objectStore.endpoint` | External object store endpoint URL | Yes (if `embedded: false`) | None |
| `global.pixee.objectStore.username` | External object store access key ID | Yes (if `embedded: false`) | None |
| `global.pixee.objectStore.password` | External object store secret access key | Yes (if `embedded: false`) | None |
| `global.pixee.sentry.enabled` | Enable or disable error reporting via Sentry | No | true |
| `global.pixee.metrics.enabled` | Enable or disable metrics reporting | No | true |
| `global.defaultStorageClass` | Default storage class to use for PVCs | No | None |
| `global.pixee.httpProxy` | HTTP proxy server host/address and port (`host:port`) | No | None |
| `global.pixee.httpsProxy` | HTTPS proxy server host/address and port (`host:port`) | No | None |
| `global.pixee.noProxy` | Comma-separated list of hosts to exclude from HTTP/HTTPS proxy | No | None |
| `global.pixee.privateCACert` | Name of a ConfigMap containing PEM-encoded CA certificates to add to trust stores | No | "" |
| `global.pixee.skipSSLVerification` | (Deprecated) Disable SSL cert verification for platform. Use `privateCACert` instead | No | false |
Custom Values¶
| Name | Description | Required | Default |
|---|---|---|---|
| `analysis.aiCodegenStrategy` | AI code generation strategy (agentic-native, agentic-fast) | No | agentic-native |
| `analysis.backpressureEnabled` | Enable backpressure algorithm to proactively cancel analyses that cannot complete within timeout limits | No | true |
| `analysis.enableTransitiveDependencyAnalysis` | Enable analysis of transitive dependencies during SCA for deeper vulnerability detection | No | false |
| `analysis.useAgenticTriageForAllRules` | Route all triage rules through the ReACT agentic analyzer, bypassing explicit and magic handlers | No | false |
| `analysis.useScaExploitabilityToShortcircuitFix` | Skip fix generation for findings that SCA determines are not exploitable | No | false |
| `analysis.enableVendoredFileTriage` | Use a specialized triage strategy for vendored files | No | true |
| `analysis.cache.enabled` | Enable URL-based analysis input caching | No | true |
| `analysis.cache.defaultTtlSeconds` | Default TTL in seconds for cached analysis inputs | No | 86400 |
| `analysis.cache.maxSizeBytes` | Maximum cache size in bytes | No | 10737418240 (10GB) |
| `analysis.cache.honorCacheControl` | Honor cache-control headers from source | No | true |
| `analysis.cache.directory` | Override cache directory path | No | "" (service default) |
| `analysis.scaMaxRequestsToAnalyze` | Maximum number of requests to analyze during SCA | No | 5 |
| `analysis.scaQueueNumWorkers` | Number of workers in the dedicated SCA analysis queue | No | 2 |
| `analysis.scaQueueMaxSize` | Maximum size of the SCA task queue (0 = unbounded) | No | 0 |
| `analysis.scaBackpressureEnabled` | Enable backpressure for the SCA analysis queue | No | false |
| `global.pixee.ai.webSearch.model` | LLM model name for web-search-enabled queries | No | "" |
| `platform.database.embedded` | Use embedded database instead of external | No | true |
| `platform.database.host` | External database hostname | Yes (if `embedded: false`) | None |
| `platform.database.port` | External database port | No | 5432 |
| `platform.database.name` | External database name | No | pixee_platform |
| `platform.database.username` | External database username | Yes (if `embedded: false`) | None |
| `platform.database.password` | External database password | Yes (if `embedded: false`) | None |
| `platform.database.existingSecret` | Name of existing secret containing a `password` key | No | "" |
| `platform.gitCloneStrategy` | Git clone strategy for VCS operations (partial or full) | No | partial |
| `platform.gitBranchPrefix` | Optional prefix for Git branch names created by Pixee | No | None |
| `platform.gitCommitMessagePrefix` | Optional prefix for Git commit messages created by Pixee | No | None |
| `platform.proxy.enabled` | Enable proxy configuration | No | false |
| `platform.proxy.address` | Address of proxy server | No | None |
| `platform.proxy.headers.forwarded` | Allow 'Forwarded' header | No | false |
| `platform.proxy.headers.xForwarded` | Allow X-Forwarded-* headers | No | false |
| `platform.inputBucket` | Custom name for analysis input bucket | No | pixee-analysis-input |
| `platform.inputSignatureDuration` | Duration for pre-signed URLs (e.g., "1h", "30m") | No | None |
| `platform.analysisTimeout` | General analysis progress timeout (e.g., "15m", "30m") | No | 15m |
| `platform.sastAnalysisTimeout` | SAST-specific analysis timeout (e.g., "20m", "30m") | No | None |
| `platform.scaAnalysisTimeout` | SCA-specific analysis timeout (e.g., "45m", "1h") | No | None |
| `platform.github.appName` | GitHub App name | No | None |
| `platform.github.appId` | GitHub App ID | No | None |
| `platform.github.appWebhookSecret` | GitHub App webhook secret | No | None |
| `platform.github.appPrivateKey` | GitHub App private key | No | None |
| `platform.github.url` | GitHub Enterprise URL | No | None |
| `platform.github.existingSecret` | Name of existing secret containing GitHub App webhook and private key (takes precedence over setting appWebhookSecret directly) | No | None |
| `platform.github.secretKeys.appWebhookSecretKey` | Secret key containing the appWebhookSecret | No | appWebhookSecret |
| `platform.github.secretKeys.appPrivateKeySecretKey` | Secret key containing the appPrivateKey | No | appPrivateKey |
| `platform.scm.azure.organization` | Azure DevOps organization name | No | None |
| `platform.scm.azure.token` | Azure DevOps personal access token | No | None |
| `platform.scm.azure.existingSecret` | Name of existing secret containing Azure DevOps token and webhook password (takes precedence over setting token directly) | No | None |
| `platform.scm.azure.secretKeys.tokenKey` | Key within the secret that contains the Azure DevOps token | Yes | token |
| `platform.scm.azure.secretKeys.webhookPasswordKey` | Key within the secret that contains the Azure DevOps webhook password | Yes | webhookPassword |
| `platform.scm.gitlab.baseUri` | Self-hosted GitLab base URI | No | None |
| `platform.scm.gitlab.token` | GitLab personal access token (required scopes: api, read_user, read_repository, read_api, write_repository, ai_features, read_registry, read_virtual_registry). A service account token is recommended. | No | None |
| `platform.scm.gitlab.webhookSecret` | GitLab webhook secret | No | None |
| `platform.scm.gitlab.existingSecret` | Name of existing secret containing GitLab token and webhookSecret (takes precedence over setting token directly) | No | None |
| `platform.scm.gitlab.secretKeys.tokenKey` | Key within the secret that contains the GitLab token | Yes | token |
| `platform.scm.gitlab.secretKeys.webhookSecretKey` | Key within the secret that contains the GitLab webhookSecret | Yes | webhookSecret |
| `platform.scm.bitbucket.username` | Bitbucket username | No | None |
| `platform.scm.bitbucket.password` | Bitbucket app password | No | None |
| `platform.scm.bitbucket.existingSecret` | Name of existing secret containing Bitbucket password (takes precedence over setting password directly) | No | None |
| `platform.scm.bitbucket.secretKeys.passwordKey` | Key within the secret that contains the Bitbucket password | Yes | password |
| `platform.pixeebot.appscan.apiKeyId` | AppScan key ID | No | None |
| `platform.pixeebot.appscan.apiKeySecret` | AppScan key secret | No | None |
| `platform.pixeebot.appscan.webhook.user` | AppScan webhook username for basic authentication | No | None |
| `platform.pixeebot.appscan.webhook.password` | AppScan webhook password for basic authentication | No | None |
| `platform.pixeebot.appscan.existingSecret` | Name of existing secret containing AppScan API key, webhook user and password (takes precedence over setting apiKeySecret, webhook.user and webhook.password directly) | No | None |
| `platform.pixeebot.appscan.secretKeys.apiKeySecretKey` | Key within the secret that contains the AppScan API key | Yes | apiKeySecret |
| `platform.pixeebot.appscan.secretKeys.webhookUserKey` | Key within the secret that contains the AppScan webhook username | Yes | webhookUser |
| `platform.pixeebot.appscan.secretKeys.webhookPasswordKey` | Key within the secret that contains the AppScan webhook password | Yes | webhookPassword |
| `platform.sonar.token` | SonarQube personal access token | No | None |
| `platform.sonar.webhookSecret` | SonarQube webhook secret | No | None |
| `platform.sonar.baseUri` | SonarQube server base URI | Yes (if type is server) | None |
| `platform.sonar.gitHubAppName` | SonarQube GitHub app name | No | None |
| `platform.sonar.existingSecret` | Name of existing secret containing SonarQube token and webhookSecret (takes precedence over setting token directly) | No | None |
| `platform.sonar.secretKeys.tokenKey` | Key within the secret that contains the SonarQube token | Yes | token |
| `platform.sonar.secretKeys.webhookSecretKey` | Key within the secret that contains the SonarQube webhookSecret | Yes | webhookSecret |
| `platform.sonar.excludeMaintainabilityFindings` | Exclude maintainability findings (code smells) | No | false |
| `platform.sonar.excludeReliabilityFindings` | Exclude reliability findings (bugs) | No | false |
| `platform.sonar.cweIds` | Comma-separated list of CWE IDs to filter findings. When set, overrides `filterCweTop25` and `additionalCweIds` | No | None |
| `platform.sonar.filterCweTop25` | (Deprecated) Filter to include only CWE Top 25 findings. Use `cweIds` instead | No | false |
| `platform.sonar.additionalCweIds` | (Deprecated) Comma-separated list of additional CWE IDs to include. Use `cweIds` instead | No | None |
| `platform.sonar.maxFindingsPerScan` | Maximum number of findings to retrieve per scan | No | 10000 |
| `platform.veracode.apiKeyId` | Veracode key ID | No | None |
| `platform.veracode.apiKeySecret` | Veracode key secret | No | None |
| `platform.veracode.existingSecret` | Name of existing secret containing Veracode apiKeySecret (takes precedence over setting apiKeySecret directly) | No | None |
| `platform.veracode.secretKeys.apiKeySecretKey` | Key within the secret that contains the Veracode apiKeySecret | Yes | apiKeySecret |
| `platform.arnica.apiKey` | Arnica API key | No | None |
| `platform.arnica.existingSecret` | Name of existing secret containing Arnica API key (takes precedence over setting apiKey directly) | No | None |
| `platform.arnica.secretKeys.apiKeyKey` | Key within the secret that contains the Arnica API key | Yes | apiKey |
| `platform.blackduck.accessToken` | Black Duck access token | No | None |
| `platform.blackduck.existingSecret` | Name of existing secret containing Black Duck access token (takes precedence over setting accessToken directly) | No | None |
| `platform.blackduck.secretKeys.accessTokenKey` | Key within the secret that contains the Black Duck access token | Yes | accessToken |
| `platform.checkmarx.region` | Checkmarx AST region (US, US2, EU, EU2, DEU, ANZ, IND, SNG, MEA) | No | US |
| `platform.checkmarx.tenantAccountName` | Checkmarx tenant account name | No | None |
| `platform.checkmarx.apiKey` | Checkmarx API key | No | None |
| `platform.checkmarx.existingSecret` | Name of existing secret containing Checkmarx API key (takes precedence over setting apiKey directly) | No | None |
| `platform.checkmarx.secretKeys.apiKeyKey` | Key within the secret that contains the Checkmarx API key | Yes | apiKey |
| `oidc.ingress.enabled` | Enable ingress for OIDC service | No | false |
| `oidc.ingress.className` | Ingress controller class name for OIDC | No | None |
| `oidc.ingress.hosts` | Host configurations for OIDC ingress | No | None |
| `oidc.ingress.tls` | TLS configuration for OIDC ingress | No | None |
| `oidc.ingress.annotations` | Annotations for OIDC ingress | No | None |
| `oidc.service.type` | Service type for OIDC service | No | ClusterIP |
| `oidc.image.registry` | Container registry for OIDC service image | No | images.pixee.ai |
| `oidc.image.repository` | Repository path for OIDC service image | No | proxy/pixee/218200003247.dkr.ecr.us-east-1.amazonaws.com/pixee/zitadel-oidc-service |
| `oidc.image.tag` | Image tag for OIDC service | No | 3.38.1-52fc585@sha256:2d0fd908e81f4e8fff4141ca2cb84271dfd5edf2f8a0fe5968edf7e56cba5343 |
| `oidc.image.pullPolicy` | Image pull policy for OIDC service | No | IfNotPresent |
| `oidc.image.pullSecrets` | Image pull secrets for OIDC service (set to `{}` to disable) | No | [{name: pixee-registry}] |
| `superset.database.existingSecret` | Name of existing secret containing Superset PostgreSQL credentials (`kubernetes.io/basic-auth` with `username` and `password` keys) | No | "" |
| `authentik.database.existingSecret` | Name of existing secret containing Authentik PostgreSQL credentials (`kubernetes.io/basic-auth` with `username` and `password` keys) | No | "" |
| `cloudnative-pg.postgresql.parameters.maxConnections` | Maximum number of PostgreSQL connections | No | 200 |
| `cloudnative-pg.postgresql.parameters.sharedBuffers` | PostgreSQL shared buffer memory (recommended: 25% of memory limit) | No | 1GB |
| `cloudnative-pg.postgresql.parameters.effectiveCacheSize` | Planner hint for available cache memory | No | 3GB |
| `cloudnative-pg.postgresql.parameters.workMem` | Per-operation memory for sorts and hashes | No | 16MB |
| `cloudnative-pg.postgresql.parameters.maintenanceWorkMem` | Memory for VACUUM and index creation | No | 256MB |
| `cloudnative-pg.postgresql.parameters.randomPageCost` | Planner cost for random page access (lower for SSD) | No | 1.1 |
| `cloudnative-pg.postgresql.parameters.checkpointCompletionTarget` | Checkpoint I/O spread target (0.0-1.0) | No | 0.9 |
| `cloudnative-pg.postgresql.parameters.logLockWaits` | Log lock wait events for debugging | No | on |
| `cloudnative-pg.postgresql.resources.requests.memory` | Memory request for PostgreSQL pod | No | 1Gi |
| `cloudnative-pg.postgresql.resources.requests.cpu` | CPU request for PostgreSQL pod | No | 250m |
| `cloudnative-pg.postgresql.resources.limits.memory` | Memory limit for PostgreSQL pod | No | 4Gi |
| `cloudnative-pg.postgresql.resources.limits.cpu` | CPU limit for PostgreSQL pod | No | 2000m |
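Pulling a few of the values above together, a minimal `values.yaml` for a deployment with an external database and external object store might look like the following sketch. All hostnames and credentials are placeholders, and the exact keys should be checked against the tables above for your release:

```yaml
global:
  pixee:
    domain: pixee.example.com          # placeholder domain
    protocol: https
    objectStore:
      embedded: false
      endpoint: https://s3.example.com # placeholder endpoint
      username: <access-key-id>
      password: <secret-access-key>

platform:
  database:
    embedded: false
    host: postgres.example.com         # placeholder hostname
    port: 5432
    name: pixee_platform
    username: pixee
    password: <database-password>
  ingress:
    enabled: true
    className: nginx                   # assumes an nginx ingress controller
```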