```python
core.project(
    # Required arguments.
    name,

    # Optional arguments.
    buildbucket = None,
    logdog = None,
    scheduler = None,
    swarming = None,
    acls = None,
)
```
Defines a LUCI project.
There should be exactly one such definition in the top-level config file.
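For example, a minimal project definition might look like the following sketch (the project name and group name are hypothetical placeholders):

```python
core.project(
    # Hypothetical project name and ACL group, for illustration only.
    name = 'my-project',
    acls = [
        acl.entry(acl.PROJECT_CONFIGS_READER, groups = ['all']),
    ],
)
```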
core.logdog(gs_bucket = None)
Defines configuration of the LogDog service for this project.
Usually required for any non-trivial project.
core.bucket(name, acls = None)
Defines a bucket: a container for LUCI resources that share the same ACL.
- **name**: name of the bucket, e.g. `ci` or `try`. Required.
- **acls**: a list of acl.entry(...) objects.

```python
core.recipe(
    # Required arguments.
    name,
    cipd_package,

    # Optional arguments.
    cipd_version = None,
    recipe = None,
)
```
Defines where to locate a particular recipe.
Builders refer to recipes via their `recipe` field, see core.builder(...). Multiple builders can execute the same recipe (perhaps passing different properties to it).
Recipes are located inside cipd packages called “recipe bundles”. Typically the cipd package name with the recipe bundle will look like:
infra/recipe_bundles/chromium.googlesource.com/chromium/tools/build
Recipes bundled from internal repositories are typically under
infra_internal/recipe_bundles/...
But if you're building your own recipe bundles, they could be located elsewhere.
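Putting this together, a recipe declaration might look like this sketch (the recipe name is hypothetical; the bundle path follows the typical naming convention described above):

```python
core.recipe(
    # Hypothetical recipe name, for illustration only.
    name = 'my_recipe',
    cipd_package =
        'infra/recipe_bundles/chromium.googlesource.com/chromium/tools/build',
)
```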
The cipd version to fetch is usually a lower-cased git ref (like `refs/heads/master`), or it can be a cipd tag (like `git_revision:abc...`).
- **name**: name of this recipe entity, to refer to it from builders. If `recipe` is None, also specifies the recipe name within the bundle. Required.
- **cipd_package**: the cipd package name with the recipe bundle. Required.
- **cipd_version**: the cipd version of the package to fetch. Defaults to `refs/heads/master`.
- **recipe**: name of the recipe within the bundle. Useful if recipe names clash between different recipe bundles. When this happens, `name` can be used as a non-ambiguous alias, and `recipe` can provide the actual recipe name. Defaults to `name`.

```python
core.builder(
    # Required arguments.
    name,
    bucket,
    recipe,

    # Optional arguments.
    properties = None,
    service_account = None,
    caches = None,
    execution_timeout = None,
    dimensions = None,
    priority = None,
    swarming_tags = None,
    expiration_timeout = None,
    build_numbers = None,
    experimental = None,
    task_template_canary_percentage = None,
    luci_migration_host = None,
    triggers = None,
    triggered_by = None,
)
```
Defines a generic builder.
It runs some recipe in some requested environment, passing it a struct with given properties. It is launched whenever something triggers it (a poller or some other builder, or maybe some external actor via Buildbucket or LUCI Scheduler APIs).
The full unique builder name (as expected by the Buildbucket RPC interface) is a pair `(<project>, <bucket>/<name>)`, but within a single project config this builder can be referred to either via its bucket-scoped name (i.e. `<bucket>/<name>`) or via its name alone (i.e. `<name>`), if this doesn't introduce ambiguities.
What can potentially trigger what is defined through the `triggers` and `triggered_by` fields. They specify how to prepare ACLs and other configuration of services that execute builds. If builder A is defined as "triggers builder B", it means all services should expect A builds to trigger B builds via LUCI Scheduler's EmitTriggers RPC or via Buildbucket's ScheduleBuild RPC, but the actual triggering is still the responsibility of A's recipe.

There's a caveat though: only Scheduler ACLs are auto-generated by the config generator when one builder triggers another, because each Scheduler job has its own ACL and we can precisely configure who's allowed to trigger this job. Buildbucket ACLs are left unchanged, since they apply to an entire bucket, and making a large scale change like that (without really knowing whether the Buildbucket API will be used) is dangerous. If the recipe triggers other builds directly through Buildbucket, it is the responsibility of the config author (you) to correctly specify Buildbucket ACLs, for example by adding the corresponding service account to the bucket ACLs:
```python
core.bucket(
    ...
    acls = [
        ...
        acl.entry(acl.BUILDBUCKET_TRIGGERER, <builder service account>),
        ...
    ],
)
```
This is not necessary if the recipe uses Scheduler API instead of Buildbucket.
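As a sketch, a triggering relationship between two builders could be declared like this (all builder, bucket and recipe names are hypothetical placeholders):

```python
# A hypothetical parent builder that declares it triggers 'child'.
core.builder(
    name = 'parent',
    bucket = 'ci',
    recipe = 'my_recipe',
    triggers = ['child'],
)

# The builder being triggered; Scheduler ACLs for this edge are
# auto-generated by the config generator.
core.builder(
    name = 'child',
    bucket = 'ci',
    recipe = 'my_recipe',
)
```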
- **dimensions**: a dict with swarming dimensions. Keys are strings (e.g. `os`), and values are either strings (e.g. `Linux`), swarming.dimension(...) objects (for defining expiring dimensions), or lists thereof.
- **swarming_tags**: a list of tags (`k:v` strings) to assign to the Swarming task that runs the builder. Each tag will also end up in a `swarming_tag` Buildbucket tag, for example `swarming_tag:builder:release`.
- **expiration_timeout**: how long to wait for a bot with matching `dimensions` before canceling the build and marking it as expired. If None, defer the decision to the Buildbucket service.

```python
core.gitiles_poller(
    # Required arguments.
    name,
    bucket,
    repo,

    # Optional arguments.
    refs = None,
    refs_regexps = None,
    schedule = None,
    triggers = None,
)
```
Defines a gitiles poller which can trigger builders on git commits.
It watches a set of git refs and triggers builders whenever new commits appear on any of the watched refs.
The watched ref set is defined via the `refs` and `refs_regexps` fields. One is a simple enumeration of refs, and the other allows using regular expressions to define which refs belong to the watched set. Both fields can be used at the same time. If neither is set, the gitiles_poller defaults to watching `refs/heads/master`.
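For instance, a poller watching a main branch and release branches might look like this sketch (the poller name, bucket, repo URL and triggered builder are hypothetical):

```python
core.gitiles_poller(
    # Hypothetical names and repo URL, for illustration only.
    name = 'main-poller',
    bucket = 'ci',
    repo = 'https://chromium.googlesource.com/my/repo',
    refs = ['refs/heads/master'],
    refs_regexps = ['refs/branch-heads/\\d+\\.\\d+'],
    triggers = ['my-builder'],
)
```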
- **repo**: URL of a git repository to poll, starting with `https://`. Required.
- **refs**: a list of fully qualified refs to watch, e.g. `refs/heads/master` or `refs/tags/v1.2.3`.
- **refs_regexps**: a list of regular expressions defining the watched set of refs, e.g. `refs/heads/[^/]+` or `refs/branch-heads/\d+\.\d+`. The regular expression should have a literal prefix with at least two slashes present, e.g. `refs/release-\d+/foobar` is not allowed, because the literal prefix `refs/release-` contains only one slash. The regexp should not start with `^` or end with `$` as they will be added automatically.

core.generator(impl = None)
Registers a callback that is called at the end of the config generation stage to modify/append/delete generated configs in an arbitrary way.
The callback accepts a single argument `ctx`, which is a struct with the following fields:

- **config_set**: a dict `{config file name -> (str | proto)}`.
.The callback is free to modify ctx.config_set
in whatever way it wants, e.g. by adding new values there or mutating/deleting existing ones.
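As a sketch, a generator callback might post-process all text configs like this (the callback name and the header it adds are hypothetical):

```python
# Hypothetical generator callback that prepends a comment header to
# every string-valued config in the config set.
def _add_headers(ctx):
    for name, content in ctx.config_set.items():
        if type(content) == 'string':
            ctx.config_set[name] = '# Generated file, do not edit.\n' + content

core.generator(impl = _add_headers)
```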
- **impl**: a callback `func(ctx) -> None`.

Below is a table with the role constants that can be passed as `roles` in acl.entry(...).
Due to some inconsistencies in how LUCI services are currently implemented, some roles can be assigned only in the core.project(...) rule, while others can also be assigned in individual core.bucket(...) rules. Similarly, some roles can be assigned to individual users, while others only to groups.
Role | Scope | Principals | Allows |
---|---|---|---|
acl.PROJECT_CONFIGS_READER | project only | groups, users | Reading contents of project configs through LUCI Config API/UI. |
acl.LOGDOG_READER | project only | groups | Reading logs under project's logdog prefix. |
acl.LOGDOG_WRITER | project only | groups | Writing logs under project's logdog prefix. |
acl.BUILDBUCKET_READER | project, bucket | groups, users | Fetching info about a build, searching for builds in a bucket. |
acl.BUILDBUCKET_TRIGGERER | project, bucket | groups, users | Same as BUILDBUCKET_READER + scheduling and canceling builds. |
acl.BUILDBUCKET_OWNER | project, bucket | groups, users | Full access to the bucket (should be used rarely). |
acl.SCHEDULER_READER | project, bucket | groups, users | Viewing Scheduler jobs, invocations and their debug logs. |
acl.SCHEDULER_TRIGGERER | project, bucket | groups, users | Same as SCHEDULER_READER + ability to trigger jobs. |
acl.SCHEDULER_OWNER | project, bucket | groups, users | Full access to Scheduler jobs, including ability to abort them. |
acl.entry(roles, groups = None, users = None)
Returns an ACL binding which assigns given role (or roles) to given individuals or groups.
Lists of acl.entry structs are passed to acls
fields of core.project(...) and core.bucket(...) rules.
An empty ACL binding is allowed. It is ignored everywhere. Useful for things like:
```python
core.project(
    acls = [
        acl.entry(acl.PROJECT_CONFIGS_READER, groups = [
            # TODO: members will be added later
        ]),
    ],
)
```

Returns an acl.entry object; it should be treated as opaque.
swarming.cache(path, name = None, wait_for_warm_cache = None)
Represents a request for the bot to mount a named cache to a path.
Each bot has an LRU set of named caches: think of them as local named directories in some protected place that survive between builds.
A build can request one or more such caches to be mounted (in read/write mode) at the requested path relative to some known root. In recipes-based builds, the path is relative to the `api.path['cache']` directory.
If it's the first time a cache is mounted on this particular bot, it will appear as an empty directory. Otherwise it will contain whatever was left there by the previous build that mounted exact same named cache on this bot, even if that build is completely irrelevant to the current build and just happened to use the same named cache (sometimes this is useful to share state between different builders).
At the end of the build the cache directory is unmounted. If at that time the bot is running out of space, caches (in their entirety, the named cache directory and all files inside) are evicted in LRU manner until there's enough free disk space left. Renaming a cache is equivalent to clearing it from the builder perspective. The files will still be there, but eventually will be purged by GC.
Additionally, Buildbucket always implicitly requests to mount a special builder cache to ‘builder’ path:
```python
swarming.cache('builder', name = some_hash('<project>/<bucket>/<builder>'))
```
This means that any LUCI builder has a “personal disk space” on the bot. Builder cache is often a good start before customizing caching. In recipes, it is available at api.path['cache'].join('builder')
.
In order to share the builder cache directory among multiple builders, an explicitly named cache can be mounted at the `builder` path on each of these builders. Buildbucket will not try to override it with its auto-generated builder cache.
For example, if builders A and B both declare they use named cache swarming.cache('builder', name='my_shared_cache')
, and an A build ran on a bot and left some files in the builder cache, then when a B build runs on the same bot, the same files will be available in its builder cache.
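A sketch of this setup (the builder, bucket, recipe and cache names are hypothetical placeholders):

```python
# Both builders mount the same named cache at the 'builder' path, so
# their builds share the directory when they land on the same bot.
_shared = swarming.cache('builder', name = 'my_shared_cache')

core.builder(
    name = 'builder-a',
    bucket = 'ci',
    recipe = 'my_recipe',
    caches = [_shared],
)

core.builder(
    name = 'builder-b',
    bucket = 'ci',
    recipe = 'my_recipe',
    caches = [_shared],
)
```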
If the pool of swarming bots is shared among multiple LUCI projects and projects mount same named cache, the cache will be shared across projects. To avoid affecting and being affected by other projects, prefix the cache name with something project-specific, e.g. v8-
.
- **path**: path where the cache should be mounted, relative to the cache directory (`api.path['cache']`). Must use POSIX format (forward slashes). In most cases, it does not need slashes at all. Must be unique in the given builder definition (cannot mount multiple caches to the same path). Required.
- **name**: name of the cache. Defaults to `path` itself. Must be unique in the given builder definition (cannot mount the same cache to multiple paths).

Returns a swarming.cache struct with fields `path`, `name` and `wait_for_warm_cache`.
swarming.dimension(value, expiration = None)
A value of some Swarming dimension, annotated with its expiration time.
Intended to be used as a value in dimensions
dict of core.builder(...) when using dimensions that expire:
```python
core.builder(
    ...
    dimensions = {
        ...
        'device': swarming.dimension('preferred', expiration = 5*time.minute),
        ...
    },
    ...
)
```
Returns a swarming.dimension struct with fields `value` and `expiration`.
swarming.validate_caches(attr, caches)
Validates a list of caches.
Ensures each entry is a swarming.cache struct, and that no two entries use the same name or path.

Returns the validated list of caches (may be an empty list, never None).
swarming.validate_dimensions(attr, dimensions)
Validates and normalizes a dict with dimensions.
The dict should have string keys; values are swarming.dimension objects, strings, or lists thereof (for repeated dimensions).

- **dimensions**: a dict `{string: string|swarming.dimension}`. Required.

Returns the validated and normalized dict in the form `{string: [swarming.dimension]}`.
swarming.validate_tags(attr, tags)
Validates a list of `k:v` pairs with Swarming tags.

Returns the validated list of tags in the same order, with duplicates removed.
Refer to the list of built-in constants and functions exposed in the global namespace by Starlark itself.
In addition, `lucicfg` exposes the following functions.
fail(msg, trace = None)
Aborts the execution with an error message.
- **trace**: a custom stack trace to attach to the error message. Defaults to the trace of the spot where `fail` is called.

stacktrace(skip = None)
Captures and returns a stack trace of the caller.
A captured stacktrace is an opaque object that can be stringified to get a nice looking trace (e.g. for error messages).
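A sketch of how `fail` and `stacktrace` could be combined in a validation helper (the helper itself is hypothetical):

```python
# Hypothetical validation helper: report the error at the caller's
# location by capturing a trace that skips this helper's own frame.
def _check_positive(value):
    if value <= 0:
        fail('value must be positive, got %d' % value, trace = stacktrace(skip = 1))
```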
struct(**kwargs)
Returns an immutable struct object with fields populated from the specified keyword arguments.
Can be used to define namespaces, for example:
```python
def _func1():
    ...

def _func2():
    ...

exported = struct(
    func1 = _func1,
    func2 = _func2,
)
```
Then `_func1` can be called as `exported.func1()`.
to_json(value)
Serializes a value to a compact JSON string.
Doesn't support integers that do not fit int64. Fails if the value has cycles.
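For example, serializing a simple dict (the exact output shown is indicative, assuming standard compact JSON serialization):

```python
# A compact JSON string, e.g. '{"a":1,"b":["x","y"]}'.
blob = to_json({'a': 1, 'b': ['x', 'y']})
```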
proto.to_pbtext(msg)
Serializes a protobuf message to a string using ASCII proto serialization.
proto.to_jsonpb(msg)
Serializes a protobuf message to a string using JSONPB serialization.