Groovy scripting in PBCS and EPBCS has fundamentally changed what is possible inside a business rule. Where traditional calc scripts give you aggregation and allocation logic, Groovy gives you a full programming language—complete with loops, conditionals, HTTP calls, JSON parsing, and direct grid-level data access. If you have ever wished your business rule could read data from one cube, transform it, and write it to another without a convoluted series of Smart Pushes and intermediate members, Groovy is how you do it.
Yet despite being available in EPM Cloud since 2016, Groovy adoption remains uneven. The Oracle documentation is reference-oriented rather than example-driven, Stack Overflow coverage is thin compared to mainstream Groovy usage, and the EPM-specific object model (DataGridDefinitionBuilder, DataGridBuilder, Operation) has no counterpart in standard Groovy tutorials. This guide fills that gap with complete, copy-and-adapt recipes for the most common Groovy use cases in PBCS and EPBCS.
1. Why Groovy Over Calc Scripts
Groovy does not replace calc scripts—it extends what business rules can do. The question is not "which is better" but "which is the right tool for this job." Here is a concrete decision framework.
| Requirement | Calc Script | Groovy | Winner |
|---|---|---|---|
| Aggregate a sparse dimension | AGG("Entity"); | Not directly supported | Calc Script |
| Allocate overhead proportionally | FIX + cross-dim references | Possible but verbose | Calc Script |
| Read data, transform, write to different cube | Requires Smart Push + staging members | DataGridDefinitionBuilder + DataGridBuilder | Groovy |
| Call an external REST API (e.g., exchange rates) | Not possible | Native HttpURLConnection | Groovy |
| Validate data before save (form-level) | Not possible | throwVetoException() | Groovy |
| Dynamic member selection based on data values | Limited (IF/ENDIF) | Full programmatic control | Groovy |
| Generate calc scripts dynamically at runtime | Not possible | String interpolation + operation.application.getCalculation() | Groovy |
| Conditional Smart Push with override members | Manual configuration only | Programmatic push with full control | Groovy |
2. The Groovy Object Model in EPM
Before diving into recipes, you need to understand the five core objects that Oracle exposes inside every Groovy business rule. These are not standard Groovy classes—they are EPM-specific and are injected automatically into your script's execution context.
| Object | Access Pattern | Purpose |
|---|---|---|
| operation | Implicit (always available) | Entry point to the execution context. Provides access to the application, cube, grid, and user-selected runtime prompt values. |
| Application | operation.application | Represents the PBCS application. Used to get cubes, execute calc scripts, run data maps, and access application-level metadata. |
| Cube | operation.application.getCube("PlanCube") | Represents a specific BSO or ASO cube within the application. Required parameter for grid builders and Smart Push operations. |
| DataGridDefinitionBuilder | cube.dataGridDefinitionBuilder() | Defines a read query against a cube. You specify POV members, row dimensions, and column dimensions, and the builder constructs the retrieval grid. |
| DataGridBuilder | cube.dataGridBuilder() | Defines a write operation against a cube. You set the POV, add rows with member combinations and data values, then submit the grid to update the database. |
One caveat before the recipes: EPM Cloud runs Groovy in a restricted sandbox that permits only approved Java classes. For example, java.net.HttpURLConnection works, but java.io.FileOutputStream does not, so there is no direct file system access.
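To make the relationships concrete, here is a minimal sketch of how these objects chain together. The cube and member names are illustrative placeholders; the builder calls follow the patterns used throughout the recipes below.

```groovy
// Illustrative sketch only - cube and member names are placeholders.
// operation is injected by the platform; everything else is reached from it.
Cube cube = operation.application.getCube("FinPlan")       // Application -> Cube

// Read side: define a retrieval grid
DataGridDefinitionBuilder readDef = cube.dataGridDefinitionBuilder()
readDef.addPov(["Scenario", "Version", "Year"], [["Actual"], ["Working"], ["FY26"]])
// ...add rows and columns, then cube.loadGrid(readDef.build(), true)

// Write side: define a submission grid
DataGridBuilder writer = cube.dataGridBuilder()
writer.addPov("Scenario", "Actual")
// ...add rows with data values, then writer.build().execute()
```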
3. Recipe 1: Reading Data with DataGridDefinitionBuilder
Pattern: Read revenue data from a BSO cube for all entities in a specific scenario and period
This is the foundation for every Groovy script that needs to inspect existing data before making decisions. The DataGridDefinitionBuilder constructs a retrieval grid, and the DataGridIterator lets you walk through the results row by row.
/*
* Recipe 1: Read Revenue by Entity
* Cube: FinPlan (BSO)
* Reads "Revenue" for all level-0 entities under "Total Entity"
* for the current scenario, version, year, and period.
*/
// Step 1: Get a reference to the cube
Cube cube = operation.application.getCube("FinPlan")
// Step 2: Build the grid definition for reading
DataGridDefinitionBuilder gridDefBuilder = cube.dataGridDefinitionBuilder()
// Step 3: Set the POV (fixed dimensions not on rows or columns)
gridDefBuilder.addPov(["Scenario", "Version", "Year"], [["Actual"], ["Working"], ["FY26"]])
// Step 4: Define columns (what data points to retrieve)
gridDefBuilder.addColumn(["Period"], [["Jan"], ["Feb"], ["Mar"]])
// Step 5: Define rows (which members to iterate over)
// ILvl0Descendants returns leaf-level entities
gridDefBuilder.addRow(["Account", "Entity"], [
["Revenue", "ILvl0Descendants(Total Entity)"]
])
// Step 6: Build the definition and retrieve the grid
DataGridDefinition gridDef = gridDefBuilder.build()
DataGrid dataGrid = cube.loadGrid(gridDef, false)
// Step 7: Iterate through the results
println "Entity | Jan | Feb | Mar"
println "-------|-----|-----|----"
dataGrid.dataCellIterator().each { DataCell cell ->
String entity = cell.getMemberName("Entity")
String period = cell.getMemberName("Period")
double value = cell.data
// Skip #MISSING values
if (!cell.isMissing()) {
println "${entity} | ${period} | ${value}"
}
}
// Step 8: Aggregate into a map for downstream logic
Map<String, Double> revenueByEntity = [:]
dataGrid.dataCellIterator().each { DataCell cell ->
if (!cell.isMissing()) {
String entity = cell.getMemberName("Entity")
double current = revenueByEntity.getOrDefault(entity, 0.0)
revenueByEntity[entity] = current + cell.data
}
}
println "\nQ1 Revenue Totals:"
revenueByEntity.each { entity, total ->
println " ${entity}: ${String.format('%,.2f', total)}"
}
The second argument to cube.loadGrid(gridDef, false) controls whether missing rows are suppressed. Pass true to exclude rows where every cell is #MISSING, which dramatically reduces iteration time on sparse data.
4. Recipe 2: Writing Data with DataGridBuilder
Pattern: Push calculated results back into a BSO cube
After reading and transforming data in Groovy, you need to write the results back. DataGridBuilder is the write counterpart to DataGridDefinitionBuilder. You define the POV, add rows with explicit member intersections and values, then submit the grid.
/*
* Recipe 2: Write Calculated Margins to Cube
* Reads revenue by entity, applies tiered margin rates,
* and writes the results back to "Calculated_Margin" account.
*/
Cube cube = operation.application.getCube("FinPlan")
// --- Step 1: Read current revenue data ---
DataGridDefinitionBuilder readBuilder = cube.dataGridDefinitionBuilder()
readBuilder.addPov(["Scenario", "Version", "Year"], [["Actual"], ["Working"], ["FY26"]])
readBuilder.addColumn(["Period"], [["Jan"]])
readBuilder.addRow(["Account", "Entity"], [
["Revenue", "ILvl0Descendants(Total Entity)"]
])
DataGrid readGrid = cube.loadGrid(readBuilder.build(), true)
// --- Step 2: Calculate margins with tiered rates ---
Map<String, Double> margins = [:]
readGrid.dataCellIterator().each { DataCell cell ->
if (!cell.isMissing()) {
String entity = cell.getMemberName("Entity")
double revenue = cell.data
double marginRate
if (revenue > 10000000) {
marginRate = 0.35
} else if (revenue > 5000000) {
marginRate = 0.28
} else if (revenue > 1000000) {
marginRate = 0.22
} else {
marginRate = 0.15
}
margins[entity] = revenue * marginRate
}
}
// --- Step 3: Write results back using DataGridBuilder ---
DataGridBuilder writeBuilder = cube.dataGridBuilder()
// Set the POV for the write operation
writeBuilder.addPov("Scenario", "Actual")
writeBuilder.addPov("Version", "Working")
writeBuilder.addPov("Year", "FY26")
writeBuilder.addPov("Period", "Jan")
// Add a row for each entity with the calculated margin
margins.each { entity, marginValue ->
writeBuilder.addRow(["Account", "Entity"], ["Calculated_Margin", entity])
.addData(marginValue)
}
// Submit the grid - this writes data to the cube
writeBuilder.build().execute()
println "Successfully wrote margins for ${margins.size()} entities"
DataGridBuilder writes are atomic per grid submission. If one cell in the grid fails validation, the entire write is rolled back. For large writes, batch your rows into groups of 500-1000 to balance atomicity with performance.
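A sketch of that batching pattern, reusing the margins map from the recipe above. The 500-row batch size is an assumption to tune for your data volumes, not a fixed rule.

```groovy
// Sketch: submit margins in batches so a bad cell only rolls back its own batch.
// The batch size of 500 is an illustrative starting point.
int batchSize = 500
margins.entrySet().toList().collate(batchSize).each { batch ->
    DataGridBuilder batchWriter = cube.dataGridBuilder()
    batchWriter.addPov("Scenario", "Actual")
    batchWriter.addPov("Version", "Working")
    batchWriter.addPov("Year", "FY26")
    batchWriter.addPov("Period", "Jan")
    batch.each { entry ->
        batchWriter.addRow(["Account", "Entity"], ["Calculated_Margin", entry.key])
                   .addData(entry.value)
    }
    batchWriter.build().execute()    // each batch commits or fails independently
}
```

Groovy's collate() splits the entry list into sublists of at most batchSize elements, so each grid submission stays small enough to keep failures contained and performance predictable.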
5. Recipe 3: Smart Push via Groovy
Pattern: Programmatically push data between cubes with member overrides
Smart Push in PBCS moves data from a BSO cube to an ASO cube (typically for reporting). Doing this via Groovy gives you control over which members to push, the ability to override mappings, and conditional logic that is impossible with the standard Smart Push configuration UI.
/*
* Recipe 3: Conditional Smart Push with Overrides
* Pushes Actual data from FinPlan (BSO) to FinRpt (ASO)
* Only pushes entities that have been modified this month.
* Overrides the Version member from "Working" to "Final".
*/
Application app = operation.application
Cube sourceCube = app.getCube("FinPlan")
Cube targetCube = app.getCube("FinRpt")
// --- Step 1: Identify which entities have recent data ---
DataGridDefinitionBuilder checkBuilder = sourceCube.dataGridDefinitionBuilder()
checkBuilder.addPov(["Scenario", "Version", "Year", "Period"], [
["Actual"], ["Working"], ["FY26"], ["Jan"]
])
checkBuilder.addColumn(["Account"], [["Revenue"]])
checkBuilder.addRow(["Entity"], [["ILvl0Descendants(Total Entity)"]])
DataGrid checkGrid = sourceCube.loadGrid(checkBuilder.build(), true)
List<String> modifiedEntities = []
checkGrid.dataCellIterator().each { DataCell cell ->
if (!cell.isMissing() && cell.data != 0) {
modifiedEntities.add(cell.getMemberName("Entity"))
}
}
println "Found ${modifiedEntities.size()} entities with data to push"
if (modifiedEntities.isEmpty()) {
println "No entities to push. Exiting."
return
}
// --- Step 2: Execute the Smart Push for the identified entities ---
// Create the Smart Push definition
SmartPush smartPush = sourceCube.smartPush(targetCube)
// Set source scope
smartPush.addMemberSpec("Scenario", ["Actual"])
smartPush.addMemberSpec("Version", ["Working"])
smartPush.addMemberSpec("Year", ["FY26"])
smartPush.addMemberSpec("Period", ["Jan", "Feb", "Mar"])
smartPush.addMemberSpec("Account", ["ILvl0Descendants(Total Revenue)", "ILvl0Descendants(Total Expenses)"])
smartPush.addMemberSpec("Entity", modifiedEntities)
// Override mapping: source "Working" maps to target "Final"
smartPush.addMemberOverride("Version", "Working", "Final")
// Execute the push
smartPush.execute()
println "Smart Push completed successfully for ${modifiedEntities.size()} entities"
6. Recipe 4: Runtime Prompt Variables
Pattern: Access user-selected members from runtime prompts in Groovy
When a business rule has runtime prompts (the dialog boxes where users select Scenario, Year, Period, etc.), Groovy accesses those selections through the operation object's runtime prompt API. This lets you build dynamic rules that adapt to whatever the user selects at launch time.
/*
* Recipe 4: Dynamic Calculation Using Runtime Prompts
* Runtime prompts defined in the rule:
* - Scenario (member of Scenario dimension)
* - StartPeriod (member of Period dimension)
* - EndPeriod (member of Period dimension)
* - EntitySelection (member of Entity dimension)
*/
// --- Step 1: Retrieve runtime prompt values ---
// rtps exposes each runtime prompt defined on the rule as a property;
// toString() returns the selected member's name.
// (Note: substitution variables are a separate mechanism, read via
// operation.application.getSubstitutionVariable("CurScenario") rather than rtps.)
String selectedScenario = rtps.Scenario.toString()
String startPeriod = rtps.StartPeriod.toString()
String endPeriod = rtps.EndPeriod.toString()
String entitySelection = rtps.EntitySelection.toString()
println "User selections:"
println " Scenario: ${selectedScenario}"
println " Period range: ${startPeriod} to ${endPeriod}"
println " Entity: ${entitySelection}"
// --- Step 2: Build a period list from the range ---
List<String> allPeriods = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
"Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]
int startIdx = allPeriods.indexOf(startPeriod)
int endIdx = allPeriods.indexOf(endPeriod)
if (startIdx == -1 || endIdx == -1 || startIdx > endIdx) {
throwVetoException("Invalid period range: ${startPeriod} to ${endPeriod}")
}
List<String> selectedPeriods = allPeriods.subList(startIdx, endIdx + 1)
println "Processing periods: ${selectedPeriods.join(', ')}"
// --- Step 3: Use the prompt values in a grid read ---
Cube cube = operation.application.getCube("FinPlan")
DataGridDefinitionBuilder builder = cube.dataGridDefinitionBuilder()
builder.addPov(["Scenario", "Version", "Year"], [
[selectedScenario], ["Working"], ["FY26"]
])
// Use the dynamically built period list as columns
List<List<String>> periodColumns = selectedPeriods.collect { [it] }
builder.addColumn(["Period"], periodColumns)
// Use the entity selection from the prompt
builder.addRow(["Account", "Entity"], [
["Revenue", entitySelection]
])
DataGrid grid = cube.loadGrid(builder.build(), false)
// --- Step 4: Process the data ---
double total = 0.0
grid.dataCellIterator().each { DataCell cell ->
if (!cell.isMissing()) {
total += cell.data
}
}
println "Total revenue for ${entitySelection} in ${selectedScenario}: ${String.format('%,.2f', total)}"
When a runtime prompt combination is invalid, use throwVetoException() to surface a clear error message back to the user instead of failing silently.
7. Recipe 5: Calling REST APIs from Groovy
Pattern: Fetch external data (exchange rates, commodity prices) and load into PBCS
One of Groovy's most powerful capabilities in EPM Cloud is making HTTP calls to external APIs. This enables real-time data feeds without building ETL pipelines—exchange rates, commodity prices, headcount data from HCM, or any REST endpoint your business rule needs.
/*
* Recipe 5: Fetch Exchange Rates from External API and Load into Cube
* Calls an external FX rate API, parses the JSON response,
* and writes rates into the FinPlan cube for currency conversion.
*/
// --- Step 1: Call the external REST API ---
String baseCurrency = "USD"
String apiUrl = "https://api.exchangerate.host/latest?base=${baseCurrency}"
HttpURLConnection connection = (HttpURLConnection) new URL(apiUrl).openConnection()
connection.setRequestMethod("GET")
connection.setRequestProperty("Accept", "application/json")
connection.setConnectTimeout(10000) // 10 second timeout
connection.setReadTimeout(10000)
int responseCode = connection.getResponseCode()
if (responseCode != 200) {
throwVetoException("Exchange rate API returned HTTP ${responseCode}. Please try again later.")
}
// Read the response body
String responseBody = connection.getInputStream().getText("UTF-8")
// --- Step 2: Parse the JSON response ---
// PBCS Groovy includes a built-in JSON parser
def jsonSlurper = new groovy.json.JsonSlurper()
def rateData = jsonSlurper.parseText(responseBody)
// Extract the rates we need
Map<String, String> currencyMapping = [
"EUR": "Euro",
"GBP": "British_Pound",
"JPY": "Japanese_Yen",
"CAD": "Canadian_Dollar",
"AUD": "Australian_Dollar"
]
Map<String, Double> fxRates = [:]
currencyMapping.each { isoCode, memberName ->
if (rateData.rates.containsKey(isoCode)) {
fxRates[memberName] = rateData.rates[isoCode] as Double
println " ${isoCode} (${memberName}): ${fxRates[memberName]}"
}
}
// --- Step 3: Write exchange rates into the cube ---
Cube cube = operation.application.getCube("FinPlan")
DataGridBuilder writeBuilder = cube.dataGridBuilder()
writeBuilder.addPov("Scenario", "Actual")
writeBuilder.addPov("Version", "Working")
writeBuilder.addPov("Year", "FY26")
writeBuilder.addPov("Period", "Jan")
writeBuilder.addPov("Entity", "Corporate")
fxRates.each { currencyMember, rate ->
writeBuilder.addRow(["Account", "Currency"], ["FX_Rate", currencyMember])
.addData(rate)
}
writeBuilder.build().execute()
println "\nSuccessfully loaded ${fxRates.size()} exchange rates into FinPlan"
// --- Step 4: Call an internal EPM REST API ---
// You can also call the EPM Cloud REST API itself for automation tasks
String epmBaseUrl = operation.application.getBaseUrl()
String jobUrl = "${epmBaseUrl}/HyperionPlanning/rest/v3/applications/FinPlan/jobs"
HttpURLConnection epmConn = (HttpURLConnection) new URL(jobUrl).openConnection()
epmConn.setRequestMethod("GET")
epmConn.setRequestProperty("Content-Type", "application/json")
// Internal calls inherit the current user's session token automatically
int epmResponse = epmConn.getResponseCode()
if (epmResponse == 200) {
def jobList = jsonSlurper.parseText(epmConn.getInputStream().getText("UTF-8"))
println "Found ${jobList.items?.size() ?: 0} recent jobs"
}
8. Recipe 6: Conditional Validation with Veto Exceptions
Pattern: Validate form data on save and reject invalid submissions with clear error messages
Groovy business rules attached to forms with the "Run on Save" option can inspect submitted data and reject the save if validation fails. The throwVetoException() method cancels the save and displays your custom error message to the user in the form interface.
/*
* Recipe 6: Multi-Rule Validation on Form Save
* Attached to a revenue planning form with "Run on Save" enabled.
* Validates data integrity before allowing the save to proceed.
*/
// --- Access the grid of data the user just edited ---
DataGrid grid = operation.grid
List<String> errors = []
int cellCount = 0
int modifiedCount = 0
// --- Rule 1: No negative revenue ---
grid.dataCellIterator("Revenue").each { DataCell cell ->
cellCount++
if (cell.isModified()) {
modifiedCount++
if (cell.data < 0) {
String entity = cell.getMemberName("Entity")
String period = cell.getMemberName("Period")
errors.add("Negative revenue not allowed: ${entity} / ${period} = ${String.format('%,.0f', cell.data)}")
}
}
}
// --- Rule 2: Revenue cannot exceed entity-level cap ---
Map<String, Double> entityCaps = [
"US_East" : 50000000,
"US_West" : 75000000,
"EMEA_North" : 30000000,
"EMEA_South" : 25000000,
"APAC" : 40000000
]
grid.dataCellIterator("Revenue").each { DataCell cell ->
if (cell.isModified()) {
String entity = cell.getMemberName("Entity")
Double cap = entityCaps[entity]
if (cap != null && cell.data > cap) {
String period = cell.getMemberName("Period")
errors.add("Revenue for ${entity} / ${period} exceeds cap of ${String.format('%,.0f', cap)}: entered ${String.format('%,.0f', cell.data)}")
}
}
}
// --- Rule 3: Flag suspiciously high monthly values ---
grid.dataCellIterator("Revenue").each { DataCell cell ->
if (cell.isModified()) {
String entity = cell.getMemberName("Entity")
String period = cell.getMemberName("Period")
// Only check monthly periods, not quarter/year totals
if (["Jan", "Feb", "Mar", "Apr", "May", "Jun",
"Jul", "Aug", "Sep", "Oct", "Nov", "Dec"].contains(period)) {
// Check if the monthly value is suspiciously high
if (cell.data > 20000000) {
errors.add("Warning: ${entity} / ${period} value of ${String.format('%,.0f', cell.data)} seems unusually high. Please verify.")
}
}
}
}
// --- Rule 4: Required comment for large changes ---
grid.dataCellIterator("Revenue").each { DataCell cell ->
if (cell.isModified()) {
double originalValue = cell.originalData
double newValue = cell.data
if (originalValue != 0) {
double changePct = Math.abs((newValue - originalValue) / originalValue)
if (changePct > 0.25) {
String entity = cell.getMemberName("Entity")
String period = cell.getMemberName("Period")
errors.add("Change exceeds 25% for ${entity} / ${period}: ${String.format('%,.0f', originalValue)} -> ${String.format('%,.0f', newValue)} (${String.format('%.1f', changePct * 100)}%). Supporting comments required.")
}
}
}
}
// --- Report results ---
println "Validated ${cellCount} cells (${modifiedCount} modified)"
if (!errors.isEmpty()) {
String errorMsg = "Save rejected. Please correct the following:\n\n"
errors.eachWithIndex { error, idx ->
errorMsg += "${idx + 1}. ${error}\n"
}
throwVetoException(errorMsg)
}
println "All validations passed. Save proceeding."
Accumulate every validation failure into a list, then call throwVetoException() once with the complete list. This gives the user all the issues at once instead of forcing them to fix errors one at a time, resubmit, and discover the next error.
9. Recipe 7: Generating Dynamic Calc Scripts
Pattern: Build and execute calc scripts programmatically based on runtime conditions
This is the most powerful combination of Groovy and calc scripts: use Groovy to analyze data, make decisions, and then construct a targeted calc script on the fly. The generated script runs through the native calc engine at full speed, but its scope and logic are determined by Groovy at runtime.
/*
* Recipe 7: Dynamic Calc Script Generation
* Analyzes which entities have variance above threshold,
* then generates and executes a targeted reforecast calc script
* only for those entities.
*/
Cube cube = operation.application.getCube("FinPlan")
// --- Step 1: Read variance data to identify entities needing reforecast ---
DataGridDefinitionBuilder varBuilder = cube.dataGridDefinitionBuilder()
varBuilder.addPov(["Version", "Year", "Period"], [["Working"], ["FY26"], ["Jan"]])
varBuilder.addColumn(["Scenario"], [["Actual"], ["Budget"]])
varBuilder.addRow(["Account", "Entity"], [
["Revenue", "ILvl0Descendants(Total Entity)"]
])
DataGrid varGrid = cube.loadGrid(varBuilder.build(), true)
// --- Step 2: Calculate variance percentages ---
Map<String, Map> entityData = [:]
varGrid.dataCellIterator().each { DataCell cell ->
String entity = cell.getMemberName("Entity")
String scenario = cell.getMemberName("Scenario")
if (!entityData.containsKey(entity)) {
entityData[entity] = [actual: 0.0, budget: 0.0]
}
if (!cell.isMissing()) {
if (scenario == "Actual") {
entityData[entity].actual = cell.data
} else if (scenario == "Budget") {
entityData[entity].budget = cell.data
}
}
}
// Find entities with variance greater than 15%
double varianceThreshold = 0.15
List<String> entitiesToReforecast = []
entityData.each { entity, data ->
if (data.budget != 0) {
double variancePct = (data.actual - data.budget) / Math.abs(data.budget)
if (Math.abs(variancePct) > varianceThreshold) {
entitiesToReforecast.add(entity)
println " ${entity}: Actual=${String.format('%,.0f', data.actual)}, " +
"Budget=${String.format('%,.0f', data.budget)}, " +
"Variance=${String.format('%.1f%%', variancePct * 100)}"
}
}
}
if (entitiesToReforecast.isEmpty()) {
println "No entities exceed the ${varianceThreshold * 100}% variance threshold. No reforecast needed."
return
}
println "\n${entitiesToReforecast.size()} entities require reforecast. Generating calc script..."
// --- Step 3: Build the dynamic calc script ---
String entityList = entitiesToReforecast.collect { "\"${it}\"" }.join(",\n ")
String calcScript = """
/* ============================================ */
/* Auto-Generated Reforecast Calculation */
/* Generated: ${new Date().format("yyyy-MM-dd HH:mm:ss")} */
/* Entities: ${entitiesToReforecast.size()} above ${varianceThreshold * 100}% variance */
/* ============================================ */
SET CREATEBLOCKONEQ ON;
SET CALCPARALLEL 4;
/* Step 1: Clear forecast periods for affected entities */
FIX("Forecast", "FY26", "Working")
FIX(${entityList})
FIX("Feb", "Mar", "Apr", "May", "Jun",
"Jul", "Aug", "Sep", "Oct", "Nov", "Dec")
CLEARDATA "Revenue";
CLEARDATA "COGS";
CLEARDATA "OpEx";
ENDFIX
ENDFIX
ENDFIX
/* Step 2: Reforecast using actual Jan run-rate */
FIX("Forecast", "FY26", "Working")
FIX(${entityList})
FIX("Feb", "Mar", "Apr", "May", "Jun",
"Jul", "Aug", "Sep", "Oct", "Nov", "Dec")
"Revenue"(
"Revenue" = "Revenue"->"Actual"->"Jan" *
"Seasonality_Index" / "Seasonality_Index"->"Jan";
)
"COGS"(
"COGS" = "Revenue" * ("COGS"->"Actual"->"Jan" / "Revenue"->"Actual"->"Jan");
)
ENDFIX
ENDFIX
ENDFIX
/* Step 3: Recalculate P&L for affected entities */
FIX("Forecast", "FY26", "Working")
FIX(${entityList})
"Gross_Margin" = "Revenue" - "COGS";
"Operating_Income" = "Gross_Margin" - "OpEx";
ENDFIX
ENDFIX
/* Step 4: Reaggregate */
FIX("Forecast", "FY26", "Working")
AGG("Entity");
ENDFIX
"""
println "Generated calc script (${calcScript.length()} chars):"
println calcScript
// --- Step 4: Execute the generated calc script ---
operation.application.getCalculation(calcScript).execute()
println "\nReforecast calculation completed successfully for ${entitiesToReforecast.size()} entities"
Always println the generated calc script before executing it. This creates an audit trail in the job console and makes debugging dramatically easier. In production, you can wrap the execution in a try/catch to handle calc engine errors gracefully.
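A sketch of that production pattern, using the same getCalculation() call as the recipe. The error-message wording is an assumption; adapt it to your own conventions.

```groovy
// Sketch: print first for the audit trail, then execute defensively.
println "Executing generated calc script (${entitiesToReforecast.size()} entities)..."
try {
    operation.application.getCalculation(calcScript).execute()
    println "Reforecast calculation completed successfully"
} catch (Exception e) {
    // The printed script above plus this message gives a complete debugging trail
    println "Calc execution failed: ${e.message}"
    throwVetoException("Reforecast failed - review the generated script in the job console.")
}
```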
10. Common Groovy Pitfalls and Debugging Tips
Groovy in PBCS is not quite standard Groovy, and the differences catch even experienced developers off guard. Here are the most common issues and how to resolve them.
Pitfall 1: Grid Builder Dimension Coverage
Every dimension in the cube must be assigned to the grid exactly once. The POV, row, and column dimensions must not overlap, and no dimension may be left out. Missing a dimension, or including one twice, produces a cryptic runtime error.
// WRONG - missing the Currency dimension entirely
DataGridDefinitionBuilder builder = cube.dataGridDefinitionBuilder()
builder.addPov(["Scenario", "Version", "Year"], [["Actual"], ["Working"], ["FY26"]])
builder.addColumn(["Period"], [["Jan"]])
builder.addRow(["Account", "Entity"], [["Revenue", "US_East"]])
// Runtime error: "Not all dimensions are represented in the grid"
// CORRECT - all dimensions accounted for
DataGridDefinitionBuilder builder = cube.dataGridDefinitionBuilder()
builder.addPov(["Scenario", "Version", "Year", "Currency"], [
["Actual"], ["Working"], ["FY26"], ["USD"]
])
builder.addColumn(["Period"], [["Jan"]])
builder.addRow(["Account", "Entity"], [["Revenue", "US_East"]])
Pitfall 2: String Comparison with Member Names
Member names retrieved from DataCell.getMemberName() may have different casing or trailing whitespace depending on how they were defined in the outline. Always use case-insensitive comparison or normalize the strings.
// FRAGILE - exact string match may fail
if (cell.getMemberName("Entity") == "us_east") { ... }
// ROBUST - case-insensitive comparison
if (cell.getMemberName("Entity").equalsIgnoreCase("US_East")) { ... }
// BEST - normalize once, compare cleanly
String entity = cell.getMemberName("Entity").trim()
switch (entity.toLowerCase()) {
case "us_east": marginRate = 0.30; break
case "us_west": marginRate = 0.28; break
case "emea": marginRate = 0.25; break
default: marginRate = 0.20; break
}
Pitfall 3: Timeout on Large Grid Operations
PBCS enforces a maximum execution time for business rules (typically 10–60 minutes depending on your subscription tier). Large grid reads or writes can hit this limit, especially when iterating over combinations of entities and periods. The solution is to batch your operations.
// RISKY - reading all entities x all periods x all accounts in one grid
// Can easily exceed memory or timeout limits
// BETTER - batch by entity groups
List<String> allEntities = ["US_East", "US_West", "US_Central", /* ... 50+ entities */]
int batchSize = 10
allEntities.collate(batchSize).each { batch ->
DataGridDefinitionBuilder builder = cube.dataGridDefinitionBuilder()
builder.addPov(["Scenario", "Version", "Year"], [
["Actual"], ["Working"], ["FY26"]
])
builder.addColumn(["Period"], [["Jan"], ["Feb"], ["Mar"]])
builder.addRow(["Account", "Entity"], [
["Revenue", batch.join(",")]
])
DataGrid batchGrid = cube.loadGrid(builder.build(), true)
// Process this batch...
println "Processed batch: ${batch.size()} entities"
}
Pitfall 4: No Implicit Imports
The PBCS sandbox does not automatically import every common Java package. If you need classes outside the default imports, such as SimpleDateFormat from java.text, you must import them explicitly at the top of the script.
// WRONG - GroovyRuntimeException: unable to resolve class
def today = new SimpleDateFormat("yyyy-MM-dd").format(new Date())
// CORRECT - explicit import at the top of the script
import java.text.SimpleDateFormat
def today = new SimpleDateFormat("yyyy-MM-dd").format(new Date())
Pitfall 5: Debugging Without println
There is no debugger, no breakpoints, and no step-through execution in PBCS Groovy. Your only debugging tool is println, which writes to the job console. Use it liberally during development.
// Add structured logging throughout your script
long scriptStart = System.currentTimeMillis()
println "=== Script Start: ${new Date()} ==="
println "Runtime prompts: Scenario=${selectedScenario}, Period=${selectedPeriod}"
println "Step 1: Reading grid..."
println "  Grid returned ${rowCount} rows"
println "Step 2: Processing..."
println "  Entities processed: ${entityList.size()}"
println "  Entities skipped (no data): ${skippedCount}"
println "Step 3: Writing results..."
println "  Rows written: ${writeCount}"
long elapsed = System.currentTimeMillis() - scriptStart
println "=== Script End: ${new Date()} (${elapsed}ms) ==="
Debugging Quick Reference
| Symptom | Likely Cause | Fix |
|---|---|---|
| "Not all dimensions are represented" | Missing dimension in POV/row/column | Add every cube dimension to exactly one of POV, rows, or columns |
| "Invalid member name" | Typo in member name or wrong dimension | Verify member names against the dimension outline |
| Script returns no data but no error | All cells are #MISSING and suppressMissing is true | Pass false as the second argument to loadGrid() to include missing rows |
| "Unauthorized" on REST API call | External domain not whitelisted | Add the domain to Allowed External Access in application settings |
| Script times out after 10 minutes | Grid too large or unbatched iteration | Batch operations, use suppressMissing=true, reduce grid scope |
| Data written but not visible in forms | Missing aggregation after write | Run AGG or CALC DIM after the DataGridBuilder execute() |
11. The Future: AI-Assisted Groovy Development
Groovy scripting in PBCS is powerful, but it has a steep learning curve. The EPM-specific object model is underdocumented, the sandbox restrictions are discovered through trial and error, and debugging is limited to println statements in the job console. This is exactly the kind of specialized, context-dependent programming where AI assistance delivers the highest value.
What AI-Assisted Groovy Development Looks Like
Consider a request like: "Write a Groovy rule that reads revenue for all level-0 entities in the Actual scenario for Q1 FY26, calculates year-over-year growth, and writes entities with negative growth into a custom report cube."
An AI-powered EPM assistant that knows your application's dimensions, members, and cube structure can generate the complete Groovy script: the correct DataGridDefinitionBuilder configuration with all dimensions accounted for, the growth calculation logic, the DataGridBuilder write operation, and appropriate error handling. It can do this in seconds, compared to the 30–60 minutes a senior developer might spend writing, testing, and debugging the same script manually.
EPM Agent connects directly to your Oracle EPM Cloud applications and indexes your metadata—dimensions, members, hierarchies, cube structures, and substitution variables. When you describe a Groovy requirement in plain English, the AI generates a script that uses your actual member names, accounts for all your dimensions, and follows the patterns and best practices covered in this guide.
Beyond Script Generation
AI assistance extends well beyond writing new scripts from scratch:
- Script debugging: Paste a failing Groovy script and the error message. The AI can identify whether you have a missing dimension, a member name typo, or a sandbox restriction violation—issues that can take hours to diagnose manually.
- Calc-to-Groovy migration: Paste a traditional calc script and ask the AI to convert it to Groovy, preserving the logic while adding capabilities like conditional execution, logging, or REST API integration.
- Performance optimization: The AI can analyze your script and suggest batching strategies, identify unnecessary grid reads, or recommend switching specific operations from Groovy to native calc script for better performance (as discussed in our EPM query optimization guide).
- Cross-training: When a team member inherits an undocumented Groovy business rule, the AI can explain what every section does, identify potential issues, and suggest improvements—accelerating onboarding from weeks to hours.
The broader transformation of EPM through AI encompasses everything from natural language querying to automated anomaly detection. But for PBCS developers, AI-assisted Groovy development addresses their most time-consuming daily challenge: bridging the gap between a business requirement and a working, production-quality Groovy script.
Ready to see it in action? EPM Agent can generate, debug, and optimize Groovy scripts using your application's actual metadata and dimension structures. Try the interactive demo to experience AI-assisted Groovy development firsthand.