core.generator(impl = None)
Registers a callback that is called at the end of the config generation stage to modify/append/delete generated configs in an arbitrary way.
The callback accepts a single argument 'ctx', which is a struct with the following fields: 'config_set': a dict {config file name -> (str | proto)}.
The callback is free to modify ctx.config_set in whatever way it wants, e.g. by adding new values there or mutating/deleting existing ones.
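A minimal sketch of such a callback (the header comment and the string-type check are illustrative assumptions, not a required shape):

    def _add_header(ctx):
      # Prepend a comment to every text config in the set; leave protos alone.
      for name in ctx.config_set:
        cfg = ctx.config_set[name]
        if type(cfg) == 'string':
          ctx.config_set[name] = '# Generated. Do not edit.\n' + cfg

    core.generator(impl = _add_header)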
core.bucket(name, acls = None)
Defines a bucket: a container for LUCI resources that share the same ACL.
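For example, a bucket holding CI builders (the bucket and group names here are assumptions for illustration):

    core.bucket(
        name = 'ci',
        acls = [
            acl.entry(acl.BUILDBUCKET_READER, groups = ['all']),
        ],
    )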
core.builder(
    # Required arguments.
    name,
    bucket,
    recipe,

    # Optional arguments.
    properties = None,
    service_account = None,
    caches = None,
    execution_timeout = None,
    dimensions = None,
    priority = None,
    swarming_tags = None,
    expiration_timeout = None,
    build_numbers = None,
    experimental = None,
    task_template_canary_percentage = None,
    luci_migration_host = None,
    triggers = None,
    triggered_by = None,
)
Defines a generic builder.
It runs some recipe in some requested environment, passing it a struct with given properties. It is launched whenever something triggers it (a poller or some other builder, or maybe some external actor via Buildbucket or LUCI Scheduler APIs).
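An illustrative sketch of a builder definition (the names, account, timeout, and dimensions are all assumptions):

    core.builder(
        name = 'linux-rel',
        bucket = 'ci',
        recipe = 'main',
        service_account = 'ci-builder@example.iam.gserviceaccount.com',
        execution_timeout = 3 * time.hour,
        dimensions = {'os': 'Ubuntu', 'cpu': 'x86-64'},
    )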
The full unique builder name (as expected by the Buildbucket RPC interface) is a pair ("<bucket>", "<name>"), but within a single project config this builder can be referred to either via its bucket-scoped name (i.e. "<bucket>/<name>") or just via its name alone (i.e. "<name>"), if this doesn't introduce ambiguities.
What can potentially trigger what is defined through the 'triggers' and 'triggered_by' fields. They specify how to prepare ACLs and other configuration of services that execute builds.
If builder A is defined as "triggers builder B", it means all services should expect A builds to trigger B builds via LUCI Scheduler's EmitTriggers RPC or via Buildbucket's ScheduleBuild RPC, but the actual triggering is still the responsibility of A's recipe.
There's a caveat though: only Scheduler ACLs are auto-generated by the config generator when one builder triggers another, because each Scheduler job has its own ACL and we can precisely configure who's allowed to trigger this job.
Buildbucket ACLs are left unchanged though, since they apply to an entire bucket, and making a large scale change like that (without really knowing whether Buildbucket API will be used) is dangerous.
So if the recipe triggers other builds directly through Buildbucket, it is the responsibility of the config author (you) to correctly specify Buildbucket ACLs, e.g. by adding the corresponding service account to the bucket ACLs:
core.bucket(
    ...
    acls = [
        ...
        acl.entry(acl.BUILDBUCKET_TRIGGERER, <builder service account>),
    ],
)
This is not necessary if the recipe uses Scheduler API instead of Buildbucket.
core.gitiles_poller(
    # Required arguments.
    name,
    bucket,
    repo,

    # Optional arguments.
    refs = None,
    refs_regexps = None,
    schedule = None,
    triggers = None,
)
Defines a gitiles poller which can trigger builders on git commits.
It watches a set of git refs and triggers builders if either:
  * A watched ref's tip has changed (e.g. a new commit landed on a ref).
  * A ref belonging to the watched set has just been created.
The watched ref set is defined via the 'refs' and 'refs_regexps' fields. The first is a simple enumeration of refs; the second uses regular expressions to define what refs belong to the watched set. Both fields can be used at the same time. If neither is set, the gitiles_poller defaults to watching "refs/heads/master".
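A sketch of a poller watching a branch and a family of release refs (the poller name, repo URL, regexp, and triggered builder are assumptions):

    core.gitiles_poller(
        name = 'main-poller',
        bucket = 'ci',
        repo = 'https://chromium.googlesource.com/chromium/src',
        refs = ['refs/heads/master'],
        refs_regexps = ['refs/branch-heads/\\d+\\.\\d+'],
        triggers = ['linux-rel'],
    )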
core.logdog(gs_bucket = None)
Configuration for the LogDog service.
core.project(
    # Required arguments.
    name,

    # Optional arguments.
    buildbucket = None,
    logdog = None,
    scheduler = None,
    swarming = None,
    acls = None,
)
Defines a LUCI project.
There should be exactly one such definition in a single top-level config file.
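A hypothetical sketch of a project definition; the project name, service hostnames, and group are assumptions, and the exact accepted forms of the service arguments may differ:

    core.project(
        name = 'my-project',
        buildbucket = 'cr-buildbucket.appspot.com',
        swarming = 'chromium-swarm.appspot.com',
        acls = [
            acl.entry(acl.PROJECT_CONFIGS_READER, groups = ['all']),
        ],
    )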
core.recipe(
    # Required arguments.
    name,
    cipd_package,

    # Optional arguments.
    cipd_version = None,
    recipe = None,
)
Defines where to locate a particular recipe.
Builders refer to recipes in their ‘recipe’ field. Multiple builders can execute the same recipe (perhaps passing different properties to it).
Recipes are located inside cipd packages called “recipe bundles”. Typically the cipd package name with the recipe bundle will look like:
infra/recipe_bundles/chromium.googlesource.com/chromium/tools/build
Recipes bundled from internal repositories are typically under
infra_internal/recipe_bundles/...
But if you're building your own recipe bundles, they could be located elsewhere.
The cipd version to fetch is usually a lower-cased git ref (like ‘refs/heads/master’), or it can be a cipd tag (like ‘git_revision:abc...’).
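Putting this together, a recipe declaration might look like the sketch below, reusing the bundle path mentioned above (the recipe name 'main' is an assumption):

    core.recipe(
        name = 'main',
        cipd_package = 'infra/recipe_bundles/chromium.googlesource.com/chromium/tools/build',
        cipd_version = 'refs/heads/master',
    )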
acl.entry(roles = None, groups = None, users = None)
An ACL entry: assigns given role (or roles) to given individuals or groups.
Specifying an empty ACL entry is allowed. It is ignored everywhere. Useful for things like:
core.project(
    acls = [
        acl.entry(acl.PROJECT_CONFIGS_READER, groups = [
            # TODO: fill me in
        ]),
    ],
)
Returns: acl.entry struct, consider it opaque.
swarming.cache(path = None, name = None, wait_for_warm_cache = None)
A request for the bot to mount a named cache to a path.
Each bot has an LRU set of named caches: think of them as local named directories in some protected place that survive between builds.
A build can request one or more such caches to be mounted (in read/write mode) at a requested path relative to some known root. In recipes-based builds, the path is relative to the api.path['cache'] directory.
If it's the first time a cache is mounted on this particular bot, it will appear as an empty directory. Otherwise it will contain whatever was left there by the previous build that mounted the exact same named cache on this bot, even if that build is completely unrelated to the current build and just happened to use the same named cache (sometimes this is useful to share state between different builders).
At the end of the build the cache directory is unmounted. If at that time the bot is running out of space, caches (in their entirety: the named cache directory and all files inside) are evicted in LRU manner until there's enough free disk space left. Renaming a cache is equivalent to clearing it from the builder's perspective: the files will still be there, but eventually will be purged by GC.
Additionally, Buildbucket always implicitly requests to mount a special builder cache to ‘builder’ path:
swarming.cache('builder', name=some_hash('<project>/<bucket>/<builder>'))
This means that any LUCI builder has a "personal disk space" on the bot. The builder cache is often a good starting point before customizing caching. In recipes, it is available at api.path['cache'].join('builder').
In order to share the builder cache directory among multiple builders, an explicitly named cache can be mounted at the 'builder' path on these builders. Buildbucket will not try to override it with its auto-generated builder cache.
For example, if builders 'a' and 'b' both declare they use the named cache swarming.cache('builder', name = 'my_shared_cache'), and an 'a' build ran on a bot and left some files in the builder cache, then when a 'b' build runs on the same bot, the same files will be available in its builder cache.
If the pool of Swarming bots is shared among multiple LUCI projects and the projects mount the same named cache, the cache will be shared across the projects. To avoid affecting (and being affected by) other projects, prefix the cache name with something project-specific, e.g. "v8-".
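A sketch of two builders sharing one project-prefixed builder cache (the builder names, bucket, and recipe are assumptions; the "v8-" prefix follows the advice above):

    shared = swarming.cache('builder', name = 'v8-shared-builder-cache')

    core.builder(name = 'a', bucket = 'ci', recipe = 'main', caches = [shared])
    core.builder(name = 'b', bucket = 'ci', recipe = 'main', caches = [shared])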
Returns: swarming.cache struct.
swarming.dimension(value = None, expiration = None)
A value of some Swarming dimension, annotated with its expiration time.
Intended to be used as a value in ‘dimensions’ dict when using dimensions that expire:
dimensions = {
    ...
    'device': swarming.dimension('preferred', expiration = 5*time.minute),
    ...
}
Returns: swarming.dimension struct.
swarming.validate_caches(attr = None, caches = None)
Validates a list of caches.
Ensures each entry is a swarming.cache struct, and that no two entries use the same name or path.
Returns: validated list of caches (may be an empty list, never None).
swarming.validate_dimensions(attr = None, dimensions = None)
Validates and normalizes a dict with dimensions.
The dict should have string keys; values are swarming.dimension structs, strings, or lists thereof (for repeated dimensions).
Returns: validated and normalized dict in the form {string: [swarming.dimension]}.
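A hedged sketch of how the normalization might be used; the 'dimensions' attribute name and the input values are assumptions:

    dims = swarming.validate_dimensions('dimensions', {
        'os': 'Ubuntu',
        'device': [
            swarming.dimension('preferred', expiration = 5 * time.minute),
            'fallback',
        ],
    })
    # Each value in 'dims' is normalized to a list of swarming.dimension structs.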
swarming.validate_tags(attr = None, tags = None)
Validates a list of “k:v” pairs with Swarming tags.
Returns: validated list of tags in the same order, with duplicates removed.
These functions are available in the global namespace.