Start converting pkg rules to a separate repo (#27)

- Move to ABSL flags.
- Add deps.bzl.
- Fix some docs. There is certainly more to do.
- Fix the tests to use the runfiles library rather than ad hoc methods.

Note that the tests are now all in a distinct folder from the BUILD file
which contains the tools needed to create packages. This is an experiment
in packaging techniques for rules. The idea is that most users should be
able to import a "thin" version of a rule set. That would include only
the files needed to use the rules, but not those needed to test or
package them. That would currently be all the *files* in pkg but
none of the folders.

A "thicker" version might include the tests, but at the cost of perhaps
making your workspace deps resolver bring in more things.

The "thickest" version would include all the code needed to repackage
and redistribute the rule set. That would be using the full source
distribution.

I may abandon this experiment and fold tests back together with the
sources, but doing so will not impact users of the tools, so that
would be transparent and harmless. So please indulge me for now.
aiuto 2019-06-06 15:12:26 -04:00 committed by GitHub
parent 80387a3b5e
commit 864739f1ff
33 changed files with 4784 additions and 0 deletions

pkg/BUILD (new file, 63 lines)

@@ -0,0 +1,63 @@
# -*- coding: utf-8 -*-
licenses(["notice"])  # Apache 2.0

exports_files(
    glob(["*.bzl"]),
    visibility = ["//visibility:public"],
)

py_library(
    name = "archive",
    srcs = [
        "__init__.py",
        "archive.py",
    ],
    srcs_version = "PY2AND3",
    visibility = ["//visibility:public"],
)

py_binary(
    name = "build_tar",
    srcs = ["build_tar.py"],
    python_version = "PY2",
    srcs_version = "PY2AND3",
    visibility = ["//visibility:public"],
    deps = [
        ":archive",
        "@abseil_py//absl/flags",
    ],
)

py_binary(
    name = "make_deb",
    srcs = ["make_deb.py"],
    python_version = "PY2",
    srcs_version = "PY2AND3",
    visibility = ["//visibility:public"],
    deps = [
        ":archive",
        "@abseil_py//absl/flags",
    ],
)

# Used by pkg_rpm in rpm.bzl.
py_binary(
    name = "make_rpm",
    srcs = ["make_rpm.py"],
    python_version = "PY2",
    srcs_version = "PY2AND3",
    visibility = ["//visibility:public"],
    deps = [
        ":make_rpm_lib",
    ],
)

py_library(
    name = "make_rpm_lib",
    srcs = ["make_rpm.py"],
    srcs_version = "PY2AND3",
    visibility = ["//visibility:public"],
    deps = [
        "@abseil_py//absl/flags",
    ],
)

pkg/README.md (new file, 577 lines)

@@ -0,0 +1,577 @@
# Packaging for Bazel
<div class="toc">
<h2>Rules</h2>
<ul>
<li><a href="#pkg_tar">pkg_tar</a></li>
<li><a href="#pkg_deb">pkg_deb</a></li>
<li><a href="#pkg_rpm">pkg_rpm</a></li>
</ul>
</div>
## Overview
These build rules are used for building various packages, such as tarballs
and Debian packages.
<a name="basic-example"></a>
## Basic Example
This example is a simplification of the Debian packaging of Bazel:
```python
load("@rules_pkg//:pkg.bzl", "pkg_tar", "pkg_deb")
pkg_tar(
name = "bazel-bin",
strip_prefix = "/src",
package_dir = "/usr/bin",
srcs = ["//src:bazel"],
mode = "0755",
)
pkg_tar(
name = "bazel-tools",
strip_prefix = "/",
package_dir = "/usr/share/lib/bazel/tools",
srcs = ["//tools:package-srcs"],
mode = "0644",
)
pkg_tar(
name = "debian-data",
extension = "tar.gz",
deps = [
":bazel-bin",
":bazel-tools",
],
)
pkg_deb(
name = "bazel-debian",
architecture = "amd64",
built_using = "unzip (6.0.1)",
data = ":debian-data",
depends = [
"zlib1g-dev",
"unzip",
],
description_file = "debian/description",
homepage = "http://bazel.build",
maintainer = "The Bazel Authors <bazel-dev@googlegroups.com>",
package = "bazel",
version = "0.1.1",
)
```
Here, the Debian package is built from three `pkg_tar` targets:
- `bazel-bin` creates a tarball with the main binary (mode `0755`) in
  `/usr/bin`,
- `bazel-tools` creates a tarball with the base workspace (mode `0644`) in
  `/usr/share/lib/bazel/tools`; the `mode` attribute lets us specify the
  permissions of the added files,
- `debian-data` creates a gzip-compressed tarball that merges the two previous
  tarballs.
`debian-data` is then used as the data content of the Debian archive created by
`pkg_deb`.
<a name="future"></a>
## Future work
- Support more formats, especially `pkg_zip`.
- Maybe a bit more integration with the `docker_build` rule.
<a name="pkg_tar"></a>
## pkg_tar
```python
pkg_tar(name, extension, strip_prefix, package_dir, srcs,
        mode, modes, deps, symlinks)
```
Creates a tar file from a list of inputs.
<table class="table table-condensed table-bordered table-params">
<colgroup>
<col class="col-param" />
<col class="param-description" />
</colgroup>
<thead>
<tr>
<th colspan="2">Attributes</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>name</code></td>
<td>
<code>Name, required</code>
<p>A unique name for this rule.</p>
</td>
</tr>
<tr>
<td><code>extension</code></td>
<td>
<code>String, default to 'tar'</code>
<p>
        The extension for the resulting tarball. The output
        file will be '<i>name</i>.<i>extension</i>'. This extension
        also decides the compression: if set to <code>tar.gz</code>
        or <code>tgz</code>, gzip compression will be used, and
        if set to <code>tar.bz2</code> or <code>tar.bzip2</code>,
        bzip2 compression will be used.
</p>
</td>
</tr>
<tr>
<td><code>strip_prefix</code></td>
<td>
<code>String, optional</code>
<p>Root path of the files.</p>
<p>
The directory structure from the files is preserved inside the
tarball but a prefix path determined by <code>strip_prefix</code>
is removed from the directory structure. This path can
be absolute from the workspace root if starting with a <code>/</code> or
relative to the rule's directory. A relative path may start with "./"
(or be ".") but cannot use ".." to go up level(s). By default, the
<code>strip_prefix</code> attribute is unused and all files are supposed to have no
prefix. A <code>strip_prefix</code> of "" (the empty string) means the
same as the default.
</p>
</td>
</tr>
<tr>
<td><code>package_dir</code></td>
<td>
<code>String, optional</code>
<p>Target directory.</p>
<p>
        The directory in which to expand the specified files, defaulting to '/'.
        Only makes sense when accompanying <code>srcs</code>.
</p>
</td>
</tr>
<tr>
<td><code>srcs</code></td>
<td>
<code>List of files, optional</code>
      <p>Files to add to the layer.</p>
<p>
A list of files that should be included in the archive.
</p>
</td>
</tr>
<tr>
<td><code>mode</code></td>
<td>
<code>String, default to 0555</code>
<p>
        Set the mode of files added by the <code>srcs</code> attribute.
</p>
</td>
</tr>
<tr>
<td><code>mtime</code></td>
<td>
<code>int, seconds since Jan 1, 1970, default to -1 (ignored)</code>
<p>
        Set the mod time of files added by the <code>srcs</code> attribute.
</p>
</td>
</tr>
<tr>
<td><code>portable_mtime</code></td>
<td>
<code>bool, default True</code>
<p>
        Set the mod time of files added by the <code>srcs</code> attribute
        to 2000-01-01.
</p>
</td>
</tr>
<tr>
<td><code>modes</code></td>
<td>
<code>Dictionary, default to '{}'</code>
<p>
        A string dictionary to change the default mode of specific files from
        <code>srcs</code>. Each key should be the path of a file before
        appending the <code>package_dir</code> prefix, and the corresponding
        value the octal permission to apply to the file.
</p>
<p>
<code>
modes = {
"tools/py/2to3.sh": "0755",
...
},
</code>
</p>
</td>
</tr>
<tr>
<td><code>owner</code></td>
<td>
<code>String, default to '0.0'</code>
<p>
        <code>UID.GID</code> to set the default numeric owner for all files
        provided in <code>srcs</code>.
</p>
</td>
</tr>
<tr>
<td><code>owners</code></td>
<td>
<code>Dictionary, default to '{}'</code>
<p>
        A string dictionary to change the default owner of specific files from
        <code>srcs</code>. Each key should be the path of a file before
        appending the <code>package_dir</code> prefix, and the corresponding
        value the <code>UID.GID</code> numeric string for the owner of the
        file. When determining owner ids, this attribute is consulted first,
        then <code>owner</code>.
</p>
<p>
<code>
owners = {
"tools/py/2to3.sh": "42.24",
...
},
</code>
</p>
</td>
</tr>
<tr>
<td><code>ownername</code></td>
<td>
<code>String, optional</code>
<p>
        <code>username.groupname</code> to set the default owner for all files
        provided in <code>srcs</code> (by default there are no owner names).
</p>
</td>
</tr>
<tr>
<td><code>ownernames</code></td>
<td>
<code>Dictionary, default to '{}'</code>
<p>
        A string dictionary to change the default owner of specific files from
        <code>srcs</code>. Each key should be the path of a file before
        appending the <code>package_dir</code> prefix, and the corresponding
        value the <code>username.groupname</code> string for the owner of the
        file. When determining owner names, this attribute is consulted first,
        then <code>ownername</code>.
</p>
<p>
<code>
        ownernames = {
"tools/py/2to3.sh": "leeroy.jenkins",
...
},
</code>
</p>
</td>
</tr>
<tr>
<td><code>deps</code></td>
<td>
<code>List of labels, optional</code>
<p>Tar files to extract and include in this tar package.</p>
<p>
A list of tarball labels to merge into the output tarball.
</p>
</td>
</tr>
<tr>
<td><code>symlinks</code></td>
<td>
<code>Dictionary, optional</code>
<p>Symlinks to create in the output tarball.</p>
<p>
<code>
symlinks = {
"/path/to/link": "/path/to/target",
...
},
</code>
</p>
</td>
</tr>
<tr>
<td><code>remap_paths</code></td>
<td>
<code>Dictionary, optional</code>
<p>Source path prefixes to remap in the tarfile.</p>
<p>
<code>
remap_paths = {
"original/path/prefix": "replaced/path",
...
},
</code>
</p>
</td>
</tr>
</tbody>
</table>
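Putting these attributes together, a hypothetical target that mixes per-file modes, ownership, and a symlink might look like the sketch below (all label and file names are assumed for illustration, not part of this rule set):

```python
pkg_tar(
    name = "app-layout",
    srcs = [
        ":app",        # hypothetical binary target
        "app.conf",    # hypothetical config file
    ],
    package_dir = "/opt/app",
    mode = "0644",
    modes = {"app": "0755"},  # keep the binary executable
    owner = "0.0",
    ownername = "root.root",
    symlinks = {"/usr/bin/app": "/opt/app/app"},
)
```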
<a name="pkg_deb"></a>
## pkg_deb
```python
pkg_deb(name, data, package, architecture, maintainer, preinst, postinst,
        prerm, postrm, version, version_file, description, description_file,
        built_using, built_using_file, priority, section, homepage, depends,
        suggests, enhances, conflicts, predepends, recommends)
```
Creates a Debian package. See <a
href="http://www.debian.org/doc/debian-policy/ch-controlfields.html">http://www.debian.org/doc/debian-policy/ch-controlfields.html</a>
for more details.
<table class="table table-condensed table-bordered table-params">
<colgroup>
<col class="col-param" />
<col class="param-description" />
</colgroup>
<thead>
<tr>
<th colspan="2">Attributes</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>name</code></td>
<td>
<code>Name, required</code>
<p>A unique name for this rule.</p>
</td>
</tr>
<tr>
<td><code>data</code></td>
<td>
<code>File, required</code>
<p>
A tar file that contains the data for the debian package (basically
the list of files that will be installed by this package).
</p>
</td>
</tr>
<tr>
<td><code>package</code></td>
<td>
<code>String, required</code>
<p>The name of the package.</p>
</td>
</tr>
<tr>
<td><code>architecture</code></td>
<td>
<code>String, default to 'all'</code>
      <p>The architecture that this package targets.</p>
<p>
See <a href="http://www.debian.org/ports/">http://www.debian.org/ports/</a>.
</p>
</td>
</tr>
<tr>
<td><code>maintainer</code></td>
<td>
<code>String, required</code>
<p>The maintainer of the package.</p>
</td>
</tr>
<tr>
<td><code>preinst</code>, <code>postinst</code>, <code>prerm</code> and <code>postrm</code></td>
<td>
<code>Files, optional</code>
<p>
Respectively, the pre-install, post-install, pre-remove and
post-remove scripts for the package.
</p>
<p>
See <a href="http://www.debian.org/doc/debian-policy/ch-maintainerscripts.html">http://www.debian.org/doc/debian-policy/ch-maintainerscripts.html</a>.
</p>
</td>
</tr>
<tr>
<td><code>config</code></td>
<td>
<code>File, optional</code>
<p>
        Config file used for debconf integration.
</p>
<p>
See <a href="https://www.debian.org/doc/debian-policy/ch-binary.html#prompting-in-maintainer-scripts">https://www.debian.org/doc/debian-policy/ch-binary.html#prompting-in-maintainer-scripts</a>.
</p>
</td>
</tr>
<tr>
<td><code>templates</code></td>
<td>
<code>File, optional</code>
<p>
        Templates file used for debconf integration.
</p>
<p>
See <a href="https://www.debian.org/doc/debian-policy/ch-binary.html#prompting-in-maintainer-scripts">https://www.debian.org/doc/debian-policy/ch-binary.html#prompting-in-maintainer-scripts</a>.
</p>
</td>
</tr>
<tr>
<td><code>conffiles</code>, <code>conffiles_file</code></td>
<td>
<code>String list or File, optional</code>
<p>
The list of conffiles or a file containing one conffile per
line. Each item is an absolute path on the target system
where the deb is installed.
</p>
<p>
See <a href="https://www.debian.org/doc/manuals/debian-faq/ch-pkg_basics.en.html#s-conffile">https://www.debian.org/doc/manuals/debian-faq/ch-pkg_basics.en.html#s-conffile</a>.
</p>
</td>
</tr>
<tr>
<td><code>version</code>, <code>version_file</code></td>
<td>
<code>String or File, required</code>
<p>
The package version provided either inline (with <code>version</code>)
or from a file (with <code>version_file</code>).
</p>
</td>
</tr>
<tr>
<td><code>description</code>, <code>description_file</code></td>
<td>
<code>String or File, required</code>
<p>
The package description provided either inline (with <code>description</code>)
or from a file (with <code>description_file</code>).
</p>
</td>
</tr>
<tr>
<td><code>built_using</code>, <code>built_using_file</code></td>
<td>
<code>String or File</code>
<p>
        The tools used to build this package, provided either inline
        (with <code>built_using</code>) or from a file (with <code>built_using_file</code>).
</p>
</td>
</tr>
<tr>
<td><code>priority</code></td>
<td>
<code>String, default to 'optional'</code>
<p>The priority of the package.</p>
<p>
See <a href="http://www.debian.org/doc/debian-policy/ch-archive.html#s-priorities">http://www.debian.org/doc/debian-policy/ch-archive.html#s-priorities</a>.
</p>
</td>
</tr>
<tr>
<td><code>section</code></td>
<td>
<code>String, default to 'contrib/devel'</code>
<p>The section of the package.</p>
<p>
See <a href="http://www.debian.org/doc/debian-policy/ch-archive.html#s-subsections">http://www.debian.org/doc/debian-policy/ch-archive.html#s-subsections</a>.
</p>
</td>
</tr>
<tr>
<td><code>homepage</code></td>
<td>
<code>String, optional</code>
<p>The homepage of the project.</p>
</td>
</tr>
<tr>
<td>
<code>depends</code>, <code>suggests</code>, <code>enhances</code>,
<code>conflicts</code>, <code>predepends</code> and <code>recommends</code>.
</td>
<td>
<code>String list, optional</code>
<p>The list of dependencies in the project.</p>
<p>
See <a href="http://www.debian.org/doc/debian-policy/ch-relationships.html#s-binarydeps">http://www.debian.org/doc/debian-policy/ch-relationships.html#s-binarydeps</a>.
</p>
</td>
</tr>
</tbody>
</table>
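As a sketch of how these attributes combine, a package that reads its version from a file and declares a conffile could look like the following (all labels and values here are made up for illustration):

```python
pkg_deb(
    name = "app-deb",
    data = ":app-layout",  # a pkg_tar target providing the installed files
    package = "app",
    architecture = "amd64",
    maintainer = "Example Maintainer <maint@example.com>",
    version_file = "version.txt",  # hypothetical file containing e.g. "1.2.3"
    description = "An example application",
    section = "utils",
    depends = ["zlib1g-dev"],
    conffiles = ["/etc/app/app.conf"],
)
```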
<a name="pkg_rpm"></a>
## pkg_rpm
```python
pkg_rpm(name, spec_file, architecture, version, version_file, changelog, data)
```
Creates an RPM package. See <a
href="http://rpm.org/documentation.html">http://rpm.org/documentation.html</a>
for more details on this.
<table class="table table-condensed table-bordered table-params">
<colgroup>
<col class="col-param" />
<col class="param-description" />
</colgroup>
<thead>
<tr>
<th colspan="2">Attributes</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>name</code></td>
<td>
<code>Name, required</code>
<p>A unique name for this rule. Used to name the output package.</p>
</td>
</tr>
<tr>
<td><code>spec_file</code></td>
<td>
<code>File, required</code>
<p>The RPM specification file used to generate the package.</p>
<p>
See <a href="http://ftp.rpm.org/max-rpm/s1-rpm-build-creating-spec-file.html">http://ftp.rpm.org/max-rpm/s1-rpm-build-creating-spec-file.html</a>.
</p>
</td>
</tr>
<tr>
<td><code>architecture</code></td>
<td>
<code>String, default to 'all'</code>
      <p>The architecture that this package targets.</p>
</td>
</tr>
<tr>
<td><code>version</code>, <code>version_file</code></td>
<td>
<code>String or File, required</code>
<p>
The package version provided either inline (with <code>version</code>)
or from a file (with <code>version_file</code>).
</p>
</td>
</tr>
<tr>
<td><code>data</code></td>
<td>
<code>Files, required</code>
<p>
Files to include in the generated package.
</p>
</td>
</tr>
</tbody>
</table>
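A minimal invocation, with a hypothetical spec file, might be (label and file names assumed for illustration):

```python
pkg_rpm(
    name = "app-rpm",
    spec_file = "app.spec",  # hypothetical RPM spec file in this package
    version = "1.2.3",
    architecture = "x86_64",
    data = [":app-files"],   # hypothetical filegroup of files referenced by the spec
)
```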

pkg/WORKSPACE (new file, 4 lines)

@@ -0,0 +1,4 @@
workspace(name = "rules_pkg")
load("//:deps.bzl", "rules_pkg_dependencies")
rules_pkg_dependencies()

pkg/__init__.py (new file, empty)

pkg/archive.py (new file, 443 lines)

@@ -0,0 +1,443 @@
# Copyright 2015 The Bazel Authors. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Archive manipulation library for the Docker rules."""
# pylint: disable=g-import-not-at-top
import gzip
import io
import os
import subprocess
import tarfile
# Use a deterministic mtime that doesn't confuse other programs.
# See: https://github.com/bazelbuild/bazel/issues/1299
PORTABLE_MTIME = 946684800 # 2000-01-01 00:00:00.000 UTC
class SimpleArFile(object):
"""A simple AR file reader.
This enable to read AR file (System V variant) as described
in https://en.wikipedia.org/wiki/Ar_(Unix).
The standard usage of this class is:
with SimpleArFile(filename) as ar:
nextFile = ar.next()
while nextFile:
print(nextFile.filename)
nextFile = ar.next()
Upon error, this class will raise a ArError exception.
"""
  # TODO(dmarting): We should use a standard library instead, but Python 2.7
  # does not have an AR reading library.
class ArError(Exception):
pass
class SimpleArFileEntry(object):
"""Represent one entry in a AR archive.
Attributes:
filename: the filename of the entry, as described in the archive.
timestamp: the timestamp of the file entry.
owner_id, group_id: numeric id of the user and group owning the file.
mode: unix permission mode of the file
size: size of the file
data: the content of the file.
"""
def __init__(self, f):
self.filename = f.read(16).decode('utf-8').strip()
if self.filename.endswith('/'): # SysV variant
self.filename = self.filename[:-1]
self.timestamp = int(f.read(12).strip())
self.owner_id = int(f.read(6).strip())
self.group_id = int(f.read(6).strip())
self.mode = int(f.read(8).strip(), 8)
self.size = int(f.read(10).strip())
pad = f.read(2)
if pad != b'\x60\x0a':
raise SimpleArFile.ArError('Invalid AR file header')
self.data = f.read(self.size)
MAGIC_STRING = b'!<arch>\n'
def __init__(self, filename):
self.filename = filename
def __enter__(self):
self.f = open(self.filename, 'rb')
if self.f.read(len(self.MAGIC_STRING)) != self.MAGIC_STRING:
      raise self.ArError('Not an ar file: ' + self.filename)
return self
def __exit__(self, t, v, traceback):
self.f.close()
def next(self):
"""Read the next file. Returns None when reaching the end of file."""
    # AR sections are two-byte aligned using newlines.
if self.f.tell() % 2 != 0:
self.f.read(1)
    # An AR section is at least 60 bytes. Some files might contain garbage
    # bytes at the end of the archive; ignore them.
if self.f.tell() > os.fstat(self.f.fileno()).st_size - 60:
return None
return self.SimpleArFileEntry(self.f)
class TarFileWriter(object):
"""A wrapper to write tar files."""
class Error(Exception):
pass
def __init__(self,
name,
compression='',
root_directory='./',
default_mtime=None):
"""TarFileWriter wraps tarfile.open().
Args:
name: the tar file name.
compression: compression type: bzip2, bz2, gz, tgz, xz, lzma.
root_directory: virtual root to prepend to elements in the archive.
default_mtime: default mtime to use for elements in the archive.
          May be an integer or the value 'portable' to use the date
          2000-01-01, which is compatible with non-*nix OSes.
"""
if compression in ['bzip2', 'bz2']:
mode = 'w:bz2'
else:
mode = 'w:'
self.gz = compression in ['tgz', 'gz']
# Support xz compression through xz... until we can use Py3
self.xz = compression in ['xz', 'lzma']
self.name = name
self.root_directory = root_directory.rstrip('/')
if default_mtime is None:
self.default_mtime = 0
elif default_mtime == 'portable':
self.default_mtime = PORTABLE_MTIME
else:
self.default_mtime = int(default_mtime)
self.fileobj = None
if self.gz:
# The Tarfile class doesn't allow us to specify gzip's mtime attribute.
# Instead, we manually re-implement gzopen from tarfile.py and set mtime.
self.fileobj = gzip.GzipFile(
filename=name, mode='w', compresslevel=9, mtime=self.default_mtime)
self.tar = tarfile.open(name=name, mode=mode, fileobj=self.fileobj)
self.members = set([])
self.directories = set([])
def __enter__(self):
return self
def __exit__(self, t, v, traceback):
self.close()
def add_dir(self,
name,
path,
uid=0,
gid=0,
uname='',
gname='',
mtime=None,
mode=None,
depth=100):
"""Recursively add a directory.
Args:
name: the destination path of the directory to add.
path: the path of the directory to add.
uid: owner user identifier.
gid: owner group identifier.
uname: owner user names.
gname: owner group names.
mtime: modification time to put in the archive.
      mode: unix permission mode of the file, default 0644 (0755 for directories).
depth: maximum depth to recurse in to avoid infinite loops
with cyclic mounts.
Raises:
TarFileWriter.Error: when the recursion depth has exceeded the
`depth` argument.
"""
if not (name == self.root_directory or name.startswith('/') or
name.startswith(self.root_directory + '/')):
name = os.path.join(self.root_directory, name)
if mtime is None:
mtime = self.default_mtime
if os.path.isdir(path):
# Remove trailing '/' (index -1 => last character)
if name[-1] == '/':
name = name[:-1]
# Add the x bit to directories to prevent non-traversable directories.
      # The x bit is set only if the read bit is set.
dirmode = (mode | ((0o444 & mode) >> 2)) if mode else mode
self.add_file(name + '/',
tarfile.DIRTYPE,
uid=uid,
gid=gid,
uname=uname,
gname=gname,
mtime=mtime,
mode=dirmode)
if depth <= 0:
raise self.Error('Recursion depth exceeded, probably in '
'an infinite directory loop.')
      # Iterate over the sorted list of files so we get a deterministic result.
filelist = os.listdir(path)
filelist.sort()
for f in filelist:
new_name = os.path.join(name, f)
new_path = os.path.join(path, f)
self.add_dir(new_name, new_path, uid, gid, uname, gname, mtime, mode,
depth - 1)
else:
self.add_file(name,
tarfile.REGTYPE,
file_content=path,
uid=uid,
gid=gid,
uname=uname,
gname=gname,
mtime=mtime,
mode=mode)
def _addfile(self, info, fileobj=None):
"""Add a file in the tar file if there is no conflict."""
if not info.name.endswith('/') and info.type == tarfile.DIRTYPE:
# Enforce the ending / for directories so we correctly deduplicate.
info.name += '/'
if info.name not in self.members:
self.tar.addfile(info, fileobj)
self.members.add(info.name)
elif info.type != tarfile.DIRTYPE:
print('Duplicate file in archive: %s, '
'picking first occurrence' % info.name)
def add_file(self,
name,
kind=tarfile.REGTYPE,
content=None,
link=None,
file_content=None,
uid=0,
gid=0,
uname='',
gname='',
mtime=None,
mode=None):
"""Add a file to the current tar.
Args:
name: the name of the file to add.
kind: the type of the file to add, see tarfile.*TYPE.
content: a textual content to put in the file.
link: if the file is a link, the destination of the link.
      file_content: file to read the content from. Provide either this
          one or `content` to specify content for the file.
uid: owner user identifier.
gid: owner group identifier.
uname: owner user names.
gname: owner group names.
mtime: modification time to put in the archive.
      mode: unix permission mode of the file, default 0644 (0755 for directories).
"""
if file_content and os.path.isdir(file_content):
# Recurse into directory
self.add_dir(name, file_content, uid, gid, uname, gname, mtime, mode)
return
if not (name == self.root_directory or name.startswith('/') or
name.startswith(self.root_directory + '/')):
name = os.path.join(self.root_directory, name)
if kind == tarfile.DIRTYPE:
name = name.rstrip('/')
if name in self.directories:
return
if mtime is None:
mtime = self.default_mtime
components = name.rsplit('/', 1)
if len(components) > 1:
d = components[0]
self.add_file(d,
tarfile.DIRTYPE,
uid=uid,
gid=gid,
uname=uname,
gname=gname,
mtime=mtime,
mode=0o755)
tarinfo = tarfile.TarInfo(name)
tarinfo.mtime = mtime
tarinfo.uid = uid
tarinfo.gid = gid
tarinfo.uname = uname
tarinfo.gname = gname
tarinfo.type = kind
if mode is None:
tarinfo.mode = 0o644 if kind == tarfile.REGTYPE else 0o755
else:
tarinfo.mode = mode
if link:
tarinfo.linkname = link
if content:
content_bytes = content.encode('utf-8')
tarinfo.size = len(content_bytes)
self._addfile(tarinfo, io.BytesIO(content_bytes))
elif file_content:
with open(file_content, 'rb') as f:
tarinfo.size = os.fstat(f.fileno()).st_size
self._addfile(tarinfo, f)
else:
if kind == tarfile.DIRTYPE:
self.directories.add(name)
self._addfile(tarinfo)
def add_tar(self,
tar,
rootuid=None,
rootgid=None,
numeric=False,
name_filter=None,
root=None):
"""Merge a tar content into the current tar, stripping timestamp.
Args:
tar: the name of tar to extract and put content into the current tar.
rootuid: user id that we will pretend is root (replaced by uid 0).
rootgid: group id that we will pretend is root (replaced by gid 0).
numeric: set to true to strip out name of owners (and just use the
numeric values).
      name_filter: filter out files by name. If not None, this method will be
          called for each file to add, given the name, and should return True
          if the file is to be added to the final tar and False otherwise.
root: place all non-absolute content under given root directory, if not
None.
Raises:
TarFileWriter.Error: if an error happens when uncompressing the tar file.
"""
if root and root[0] not in ['/', '.']:
      # Root prefix should start with a '/'; add it if missing.
root = '/' + root
compression = os.path.splitext(tar)[-1][1:]
if compression == 'tgz':
compression = 'gz'
elif compression == 'bzip2':
compression = 'bz2'
elif compression == 'lzma':
compression = 'xz'
elif compression not in ['gz', 'bz2', 'xz']:
compression = ''
if compression == 'xz':
# Python 2 does not support lzma, our py3 support is terrible so let's
# just hack around.
# Note that we buffer the file in memory and it can have an important
# memory footprint but it's probably fine as we don't use them for really
# large files.
      # TODO(dmarting): once our py3 support gets better, compile this tool
# with py3 for proper lzma support.
if subprocess.call('which xzcat', shell=True, stdout=subprocess.PIPE):
raise self.Error('Cannot handle .xz and .lzma compression: '
'xzcat not found.')
p = subprocess.Popen('cat %s | xzcat' % tar,
shell=True,
stdout=subprocess.PIPE)
f = io.BytesIO(p.stdout.read())
p.wait()
intar = tarfile.open(fileobj=f, mode='r:')
else:
if compression in ['gz', 'bz2']:
# prevent performance issues due to accidentally-introduced seeks
# during intar traversal by opening in "streaming" mode. gz, bz2
# are supported natively by python 2.7 and 3.x
inmode = 'r|' + compression
else:
inmode = 'r:' + compression
intar = tarfile.open(name=tar, mode=inmode)
for tarinfo in intar:
if name_filter is None or name_filter(tarinfo.name):
tarinfo.mtime = self.default_mtime
if rootuid is not None and tarinfo.uid == rootuid:
tarinfo.uid = 0
tarinfo.uname = 'root'
if rootgid is not None and tarinfo.gid == rootgid:
tarinfo.gid = 0
tarinfo.gname = 'root'
if numeric:
tarinfo.uname = ''
tarinfo.gname = ''
name = tarinfo.name
if (not name.startswith('/') and
not name.startswith(self.root_directory)):
name = os.path.join(self.root_directory, name)
if root is not None:
if name.startswith('.'):
name = '.' + root + name.lstrip('.')
# Add root dir with same permissions if missing. Note that
# add_file deduplicates directories and is safe to call here.
self.add_file('.' + root,
tarfile.DIRTYPE,
uid=tarinfo.uid,
gid=tarinfo.gid,
uname=tarinfo.uname,
gname=tarinfo.gname,
mtime=tarinfo.mtime,
mode=0o755)
# Relocate internal hardlinks as well to avoid breaking them.
link = tarinfo.linkname
if link.startswith('.') and tarinfo.type == tarfile.LNKTYPE:
tarinfo.linkname = '.' + root + link.lstrip('.')
tarinfo.name = name
if tarinfo.isfile():
# use extractfile(tarinfo) instead of tarinfo.name to preserve
# seek position in intar
self._addfile(tarinfo, intar.extractfile(tarinfo))
else:
self._addfile(tarinfo)
intar.close()
def close(self):
"""Close the output tar file.
This class should not be used anymore after calling that method.
Raises:
TarFileWriter.Error: if an error happens when compressing the output file.
"""
self.tar.close()
# Close the gzip file object if necessary.
if self.fileobj:
self.fileobj.close()
if self.xz:
# Support xz compression through xz... until we can use Py3
if subprocess.call('which xz', shell=True, stdout=subprocess.PIPE):
raise self.Error('Cannot handle .xz and .lzma compression: '
'xz not found.')
subprocess.call(
'mv {0} {0}.d && xz -z {0}.d && mv {0}.d.xz {0}'.format(self.name),
shell=True,
stdout=subprocess.PIPE)

pkg/build_tar.py (new file, 387 lines)

@@ -0,0 +1,387 @@
# Copyright 2015 The Bazel Authors. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This tool build tar files from a list of inputs."""
import json
import os
import os.path
import sys
import tarfile
import tempfile
from rules_pkg import archive
from absl import flags
flags.DEFINE_string('output', None, 'The output file, mandatory')
flags.mark_flag_as_required('output')
flags.DEFINE_multi_string('file', [], 'A file to add to the layer')
flags.DEFINE_string('manifest', None,
'JSON manifest of contents to add to the layer')
flags.DEFINE_string('mode', None,
'Force the mode on the added files (in octal).')
flags.DEFINE_string(
    'mtime', None, 'Set mtime on tar file entries. May be an integer or the'
    ' value "portable", to get the value 2000-01-01, which is'
    ' usable with non-*nix OSes.')
flags.DEFINE_multi_string('empty_file', [], 'An empty file to add to the layer')
flags.DEFINE_multi_string('empty_dir', [], 'An empty dir to add to the layer')
flags.DEFINE_multi_string('empty_root_dir', [],
'An empty dir to add to the layer')
flags.DEFINE_multi_string('tar', [], 'A tar file to add to the layer')
flags.DEFINE_multi_string('deb', [], 'A debian package to add to the layer')
flags.DEFINE_multi_string(
'link', [],
    'Add a symlink a inside the layer pointing to b if a:b is specified')
flags.register_validator(
'link',
lambda l: all(value.find(':') > 0 for value in l),
    message='--link value should contain a : separator')
flags.DEFINE_string('directory', None,
'Directory in which to store the file inside the layer')
flags.DEFINE_string('compression', None,
'Compression (`gz` or `bz2`), default is none.')
flags.DEFINE_multi_string(
'modes', None,
'Specific mode to apply to specific file (from the file argument),'
' e.g., path/to/file=0455.')
flags.DEFINE_multi_string(
'owners', None, 'Specify the numeric owners of individual files, '
'e.g. path/to/file=0.0.')
flags.DEFINE_string(
'owner', '0.0', 'Specify the numeric default owner of all files,'
' e.g., 0.0')
flags.DEFINE_string('owner_name', None,
'Specify the owner name of all files, e.g. root.root.')
flags.DEFINE_multi_string(
'owner_names', None, 'Specify the owner names of individual files, e.g. '
'path/to/file=root.root.')
flags.DEFINE_string('root_directory', './',
'Default root directory is named "."')
FLAGS = flags.FLAGS
class TarFile(object):
"""A class to generates a TAR file."""
class DebError(Exception):
pass
def __init__(self, output, directory, compression, root_directory,
default_mtime):
self.directory = directory
self.output = output
self.compression = compression
self.root_directory = root_directory
self.default_mtime = default_mtime
def __enter__(self):
self.tarfile = archive.TarFileWriter(
self.output,
self.compression,
self.root_directory,
default_mtime=self.default_mtime)
return self
def __exit__(self, t, v, traceback):
self.tarfile.close()
def add_file(self, f, destfile, mode=None, ids=None, names=None):
    """Add a file to the tar file.

    `f` will be copied to `self.directory/destfile` in the layer.

    Args:
      f: the file to add to the layer
      destfile: the name of the file in the layer
      mode: force the file to the given mode; by default the mode is
        derived from the source file
      ids: (uid, gid) to set as the file's ownership
      names: (username, groupname) to set as the file's ownership
    """
dest = destfile.lstrip('/') # Remove leading slashes
if self.directory and self.directory != '/':
dest = self.directory.lstrip('/') + '/' + dest
# If mode is unspecified, derive the mode from the file's mode.
if mode is None:
mode = 0o755 if os.access(f, os.X_OK) else 0o644
if ids is None:
ids = (0, 0)
if names is None:
names = ('', '')
dest = os.path.normpath(dest)
self.tarfile.add_file(
dest,
file_content=f,
mode=mode,
uid=ids[0],
gid=ids[1],
uname=names[0],
gname=names[1])
def add_empty_file(self,
destfile,
mode=None,
ids=None,
names=None,
kind=tarfile.REGTYPE):
    """Add an empty file to the tar file.

    An empty file will be created as `destfile` in the layer.

    Args:
      destfile: the name of the file in the layer
      mode: force the file to the given mode; defaults to 644
      ids: (uid, gid) to set as the file's ownership
      names: (username, groupname) to set as the file's ownership
      kind: type of the entry; use tarfile.DIRTYPE for a directory
    """
dest = destfile.lstrip('/') # Remove leading slashes
# If mode is unspecified, assume read only
if mode is None:
mode = 0o644
if ids is None:
ids = (0, 0)
if names is None:
names = ('', '')
dest = os.path.normpath(dest)
self.tarfile.add_file(
dest,
content='' if kind == tarfile.REGTYPE else None,
kind=kind,
mode=mode,
uid=ids[0],
gid=ids[1],
uname=names[0],
gname=names[1])
def add_empty_dir(self, destpath, mode=None, ids=None, names=None):
    """Add an empty directory to the tar file.

    An empty directory will be created as `destpath` in the layer.

    Args:
      destpath: the name of the directory in the layer
      mode: force the directory to the given mode; defaults to 644
      ids: (uid, gid) to set as the directory's ownership
      names: (username, groupname) to set as the directory's ownership
    """
self.add_empty_file(
destpath, mode=mode, ids=ids, names=names, kind=tarfile.DIRTYPE)
def add_empty_root_dir(self, destpath, mode=None, ids=None, names=None):
    """Add an empty directory to the root of the tar file.

    An empty directory will be created as `destpath` at the root of the
    layer.

    Args:
      destpath: the name of the directory in the layer
      mode: force the directory to the given mode; defaults to 644
      ids: (uid, gid) to set as the directory's ownership
      names: (username, groupname) to set as the directory's ownership
    """
original_root_directory = self.tarfile.root_directory
self.tarfile.root_directory = destpath
self.add_empty_dir(destpath, mode=mode, ids=ids, names=names)
self.tarfile.root_directory = original_root_directory
def add_tar(self, tar):
    """Merge a tar file into the destination tar file.

    All files present in that tar will be added to the output file
    under self.directory/path. User and group names are not carried
    over; ownership is kept numeric.

    Args:
      tar: the tar file to add
    """
root = None
if self.directory and self.directory != '/':
root = self.directory
self.tarfile.add_tar(tar, numeric=True, root=root)
def add_link(self, symlink, destination):
    """Add a symbolic link pointing to `destination`.

    Args:
      symlink: the name of the symbolic link to add
      destination: the path the symbolic link points to
    """
symlink = os.path.normpath(symlink)
self.tarfile.add_file(symlink, tarfile.SYMTYPE, link=destination)
def add_deb(self, deb):
    """Extract a debian package into the output tar.

    All files present in that debian package will be added to the
    output tar under the same paths. User and group names are not
    carried over; ownership is kept numeric.

    Args:
      deb: the debian package to add

    Raises:
      DebError: if the format of the deb archive is incorrect.
    """
with archive.SimpleArFile(deb) as arfile:
current = arfile.next()
while current and not current.filename.startswith('data.'):
current = arfile.next()
if not current:
        raise self.DebError(deb + ' does not contain a data file!')
tmpfile = tempfile.mkstemp(suffix=os.path.splitext(current.filename)[-1])
with open(tmpfile[1], 'wb') as f:
f.write(current.data)
self.add_tar(tmpfile[1])
os.remove(tmpfile[1])
def unquote_and_split(arg, c):
"""Split a string at the first unquoted occurrence of a character.
Split the string arg at the first unquoted occurrence of the character c.
Here, in the first part of arg, the backslash is considered the
quoting character indicating that the next character is to be
added literally to the first part, even if it is the split character.
Args:
arg: the string to be split
c: the character at which to split
Returns:
The unquoted string before the separator and the string after the
separator.
"""
head = ''
i = 0
while i < len(arg):
if arg[i] == c:
return (head, arg[i + 1:])
elif arg[i] == '\\':
i += 1
if i == len(arg):
# dangling quotation symbol
return (head, '')
else:
head += arg[i]
else:
head += arg[i]
i += 1
# if we leave the loop, the character c was not found unquoted
return (head, '')
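A standalone copy of the helper above (duplicated here so the snippet runs on its own) illustrates the backslash-escape behavior that lets file names contain the separator character:

```python
def unquote_and_split(arg, c):
  """Standalone copy of the splitter above, for illustration."""
  head = ''
  i = 0
  while i < len(arg):
    if arg[i] == c:
      return (head, arg[i + 1:])
    elif arg[i] == '\\':
      i += 1
      if i == len(arg):
        return (head, '')  # dangling escape character
      head += arg[i]
    else:
      head += arg[i]
    i += 1
  return (head, '')  # separator never found unquoted

print(unquote_and_split('src=dst', '='))       # ('src', 'dst')
print(unquote_and_split('a\\=b=c', '='))       # ('a=b', 'c')
print(unquote_and_split('no-separator', '='))  # ('no-separator', '')
```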
def main(unused_argv):
# Parse modes arguments
default_mode = None
if FLAGS.mode:
# Convert from octal
default_mode = int(FLAGS.mode, 8)
mode_map = {}
if FLAGS.modes:
for filemode in FLAGS.modes:
(f, mode) = unquote_and_split(filemode, '=')
if f[0] == '/':
f = f[1:]
mode_map[f] = int(mode, 8)
default_ownername = ('', '')
if FLAGS.owner_name:
default_ownername = FLAGS.owner_name.split('.', 1)
names_map = {}
if FLAGS.owner_names:
for file_owner in FLAGS.owner_names:
(f, owner) = unquote_and_split(file_owner, '=')
(user, group) = owner.split('.', 1)
if f[0] == '/':
f = f[1:]
names_map[f] = (user, group)
default_ids = FLAGS.owner.split('.', 1)
default_ids = (int(default_ids[0]), int(default_ids[1]))
ids_map = {}
if FLAGS.owners:
for file_owner in FLAGS.owners:
(f, owner) = unquote_and_split(file_owner, '=')
(user, group) = owner.split('.', 1)
if f[0] == '/':
f = f[1:]
ids_map[f] = (int(user), int(group))
# Add objects to the tar file
with TarFile(FLAGS.output, FLAGS.directory, FLAGS.compression,
FLAGS.root_directory, FLAGS.mtime) as output:
def file_attributes(filename):
if filename.startswith('/'):
filename = filename[1:]
return {
'mode': mode_map.get(filename, default_mode),
'ids': ids_map.get(filename, default_ids),
'names': names_map.get(filename, default_ownername),
}
if FLAGS.manifest:
with open(FLAGS.manifest, 'r') as manifest_fp:
manifest = json.load(manifest_fp)
for f in manifest.get('files', []):
output.add_file(f['src'], f['dst'], **file_attributes(f['dst']))
for f in manifest.get('empty_files', []):
output.add_empty_file(f, **file_attributes(f))
for d in manifest.get('empty_dirs', []):
output.add_empty_dir(d, **file_attributes(d))
for d in manifest.get('empty_root_dirs', []):
output.add_empty_root_dir(d, **file_attributes(d))
for f in manifest.get('symlinks', []):
output.add_link(f['linkname'], f['target'])
for tar in manifest.get('tars', []):
output.add_tar(tar)
for deb in manifest.get('debs', []):
output.add_deb(deb)
for f in FLAGS.file:
(inf, tof) = unquote_and_split(f, '=')
output.add_file(inf, tof, **file_attributes(tof))
for f in FLAGS.empty_file:
output.add_empty_file(f, **file_attributes(f))
for f in FLAGS.empty_dir:
output.add_empty_dir(f, **file_attributes(f))
for f in FLAGS.empty_root_dir:
output.add_empty_root_dir(f, **file_attributes(f))
for tar in FLAGS.tar:
output.add_tar(tar)
for deb in FLAGS.deb:
output.add_deb(deb)
for link in FLAGS.link:
l = unquote_and_split(link, ':')
output.add_link(l[0], l[1])
if __name__ == '__main__':
main(FLAGS(sys.argv))
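For reference, the file consumed via --manifest above is plain JSON; a minimal sketch (all paths hypothetical) showing the keys the loop in main() reads:

```python
import json

# Hypothetical manifest matching the keys read in main() above.
manifest_text = """
{
  "files": [{"src": "bazel-out/bin/app", "dst": "usr/bin/app"}],
  "empty_dirs": ["var/log/app"],
  "symlinks": [{"linkname": "usr/bin/app-latest", "target": "usr/bin/app"}]
}
"""
manifest = json.loads(manifest_text)
# Keys absent from the manifest simply fall back to empty lists in main().
assert manifest.get('tars', []) == []
print(manifest['files'][0]['dst'])  # usr/bin/app
```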

pkg/deps.bzl (new file)
@@ -0,0 +1,30 @@
# Workspace dependencies for rules_pkg/pkg
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
def _maybe(repo, name, **kwargs):
if not native.existing_rule(name):
repo(name = name, **kwargs)
def rules_pkg_dependencies():
# Needed for helper tools
http_archive(
name = "abseil_py",
urls = [
"https://github.com/abseil/abseil-py/archive/pypi-v0.7.1.tar.gz",
],
sha256 = "3d0f39e0920379ff1393de04b573bca3484d82a5f8b939e9e83b20b6106c9bbe",
strip_prefix = "abseil-py-pypi-v0.7.1",
)
# Needed by abseil-py. They do not use deps yet.
http_archive(
name = "six_archive",
urls = [
"http://mirror.bazel.build/pypi.python.org/packages/source/s/six/six-1.10.0.tar.gz",
"https://pypi.python.org/packages/source/s/six/six-1.10.0.tar.gz",
],
sha256 = "105f8d68616f8248e24bf0e9372ef04d3cc10104f1980f54d57b2ce73a5ad56a",
strip_prefix = "six-1.10.0",
        build_file = "@abseil_py//third_party:six.BUILD",
)

pkg/make_deb.py (new file)
@@ -0,0 +1,368 @@
# Copyright 2015 The Bazel Authors. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""A simple cross-platform helper to create a debian package."""
import gzip
import hashlib
from io import BytesIO
import os
import os.path
import sys
import tarfile
import textwrap
import time
from absl import flags
# List of Debian fields: (name, mandatory, wrap[, default])
# see http://www.debian.org/doc/debian-policy/ch-controlfields.html
DEBIAN_FIELDS = [
('Package', True, False),
('Version', True, False),
('Section', False, False, 'contrib/devel'),
('Priority', False, False, 'optional'),
('Architecture', False, False, 'all'),
('Depends', False, True, []),
('Recommends', False, True, []),
('Suggests', False, True, []),
('Enhances', False, True, []),
('Conflicts', False, True, []),
('Pre-Depends', False, True, []),
('Installed-Size', False, False),
('Maintainer', True, False),
('Description', True, True),
('Homepage', False, False),
('Built-Using', False, False, None),
('Distribution', False, False, 'unstable'),
('Urgency', False, False, 'medium'),
]
flags.DEFINE_string('output', None, 'The output file, mandatory')
flags.mark_flag_as_required('output')
flags.DEFINE_string('changes', None, 'The changes output file, mandatory.')
flags.mark_flag_as_required('changes')
flags.DEFINE_string('data', None,
'Path to the data tarball, mandatory')
flags.mark_flag_as_required('data')
flags.DEFINE_string('preinst', None,
'The preinst script (prefix with @ to provide a path).')
flags.DEFINE_string('postinst', None,
'The postinst script (prefix with @ to provide a path).')
flags.DEFINE_string('prerm', None,
'The prerm script (prefix with @ to provide a path).')
flags.DEFINE_string('postrm', None,
'The postrm script (prefix with @ to provide a path).')
flags.DEFINE_string('config', None,
'The config script (prefix with @ to provide a path).')
flags.DEFINE_string('templates', None,
'The templates file (prefix with @ to provide a path).')
# size of chunks for copying package content to final .deb file
# This is a wild guess, but I am not convinced of the value of doing much work
# to tune it.
_COPY_CHUNK_SIZE = 1024 * 32
# see
# https://www.debian.org/doc/manuals/debian-faq/ch-pkg_basics.en.html#s-conffile
flags.DEFINE_multi_string(
'conffile', None,
'List of conffiles (prefix item with @ to provide a path)')
def MakeGflags():
"""Creates a flag for each of the control file fields."""
for field in DEBIAN_FIELDS:
fieldname = field[0].replace('-', '_').lower()
msg = 'The value for the %s content header entry.' % field[0]
if len(field) > 3:
if isinstance(field[3], list):
flags.DEFINE_multi_string(fieldname, field[3], msg)
else:
flags.DEFINE_string(fieldname, field[3], msg)
else:
flags.DEFINE_string(fieldname, None, msg)
if field[1]:
flags.mark_flag_as_required(fieldname)
def ConvertToFileLike(content, content_len, converter):
if content_len < 0:
content_len = len(content)
content = converter(content)
return content_len, content
def AddArFileEntry(fileobj, filename,
content='', content_len=-1, timestamp=0,
owner_id=0, group_id=0, mode=0o644):
  """Add an AR file entry to fileobj."""
# If we got the content as a string, turn it into a file like thing.
if isinstance(content, (str, bytes)):
content_len, content = ConvertToFileLike(content, content_len, BytesIO)
inputs = [
(filename + '/').ljust(16), # filename (SysV)
str(timestamp).ljust(12), # timestamp
str(owner_id).ljust(6), # owner id
str(group_id).ljust(6), # group id
str(oct(mode)).replace('0o', '0').ljust(8), # mode
str(content_len).ljust(10), # size
'\x60\x0a', # end of file entry
]
for i in inputs:
fileobj.write(i.encode('ascii'))
size = 0
while True:
data = content.read(_COPY_CHUNK_SIZE)
if not data:
break
size += len(data)
fileobj.write(data)
if size % 2 != 0:
fileobj.write(b'\n') # 2-byte alignment padding
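The 60-byte SysV ar header written above can be sketched in isolation; the field widths (16/12/6/6/8/10/2) are fixed by the ar format:

```python
def ar_header(filename, size, timestamp=0, owner_id=0, group_id=0, mode=0o644):
  # Mirrors the header fields AddArFileEntry writes: name, mtime, uid, gid,
  # octal mode, size, and the two-byte end marker 0x60 0x0a.
  fields = [
      (filename + '/').ljust(16),
      str(timestamp).ljust(12),
      str(owner_id).ljust(6),
      str(group_id).ljust(6),
      oct(mode).replace('0o', '0').ljust(8),
      str(size).ljust(10),
      '\x60\x0a',
  ]
  return ''.join(fields).encode('ascii')

hdr = ar_header('debian-binary', 4)
assert len(hdr) == 60  # every ar member header is exactly 60 bytes
assert hdr.endswith(b'\x60\x0a')
```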
def MakeDebianControlField(name, value, wrap=False):
"""Add a field to a debian control file."""
result = name + ': '
  if isinstance(value, bytes):
    value = value.decode('utf-8')
if isinstance(value, list):
value = u', '.join(value)
if wrap:
result += u' '.join(value.split('\n'))
result = textwrap.fill(result,
break_on_hyphens=False,
break_long_words=False)
else:
result += value
return result.replace(u'\n', u'\n ') + u'\n'
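A Python 3 condensation of the field formatter above (the bytes-decoding branch dropped) shows the list joining and the one-space continuation indent that Debian control files require:

```python
import textwrap

def make_control_field(name, value, wrap=False):
  # Condensed sketch of MakeDebianControlField above, assuming str input.
  result = name + ': '
  if isinstance(value, list):
    value = ', '.join(value)
  if wrap:
    result += ' '.join(value.split('\n'))
    result = textwrap.fill(
        result, break_on_hyphens=False, break_long_words=False)
  else:
    result += value
  # Continuation lines in a control file are indented by one space.
  return result.replace('\n', '\n ') + '\n'

print(make_control_field('Depends', ['python', 'libc6']), end='')
# Depends: python, libc6
print(make_control_field('Description', '\nfoo - a demo package'), end='')
```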
def CreateDebControl(extrafiles=None, **kwargs):
"""Create the control.tar.gz file."""
# create the control file
controlfile = ''
for values in DEBIAN_FIELDS:
fieldname = values[0]
key = fieldname[0].lower() + fieldname[1:].replace('-', '')
if values[1] or (key in kwargs and kwargs[key]):
controlfile += MakeDebianControlField(fieldname, kwargs[key], values[2])
# Create the control.tar file
tar = BytesIO()
with gzip.GzipFile('control.tar.gz', mode='w', fileobj=tar, mtime=0) as gz:
with tarfile.open('control.tar.gz', mode='w', fileobj=gz) as f:
tarinfo = tarfile.TarInfo('control')
# Don't discard unicode characters when computing the size
tarinfo.size = len(controlfile.encode('utf-8'))
f.addfile(tarinfo, fileobj=BytesIO(controlfile.encode('utf-8')))
if extrafiles:
for name, (data, mode) in extrafiles.items():
tarinfo = tarfile.TarInfo(name)
tarinfo.size = len(data)
tarinfo.mode = mode
f.addfile(tarinfo, fileobj=BytesIO(data.encode('utf-8')))
control = tar.getvalue()
tar.close()
return control
def CreateDeb(output,
data,
preinst=None,
postinst=None,
prerm=None,
postrm=None,
config=None,
templates=None,
conffiles=None,
**kwargs):
"""Create a full debian package."""
extrafiles = {}
if preinst:
extrafiles['preinst'] = (preinst, 0o755)
if postinst:
extrafiles['postinst'] = (postinst, 0o755)
if prerm:
extrafiles['prerm'] = (prerm, 0o755)
if postrm:
extrafiles['postrm'] = (postrm, 0o755)
if config:
extrafiles['config'] = (config, 0o755)
if templates:
extrafiles['templates'] = (templates, 0o755)
if conffiles:
extrafiles['conffiles'] = ('\n'.join(conffiles) + '\n', 0o644)
control = CreateDebControl(extrafiles=extrafiles, **kwargs)
# Write the final AR archive (the deb package)
with open(output, 'wb') as f:
f.write(b'!<arch>\n') # Magic AR header
AddArFileEntry(f, 'debian-binary', b'2.0\n')
AddArFileEntry(f, 'control.tar.gz', control)
    # Try to preserve the data tarball's extension in the member name.
    ext = os.path.basename(data).split('.')[-2:]
    if len(ext) < 2:
      ext = 'tar'
    elif ext[1] == 'tgz':
      ext = 'tar.gz'
    elif ext[1] == 'bzip2':
      ext = 'tar.bz2'
    else:
      ext = '.'.join(ext)
    if ext not in ['tar.bz2', 'tar.gz', 'tar.xz', 'tar.lzma']:
      ext = 'tar'
data_size = os.stat(data).st_size
with open(data, 'rb') as datafile:
AddArFileEntry(f, 'data.' + ext, datafile, content_len=data_size)
def GetChecksumsFromFile(filename, hash_fns=None):
"""Computes MD5 and/or other checksums of a file.
Args:
filename: Name of the file.
hash_fns: Mapping of hash functions.
Default is {'md5': hashlib.md5}
Returns:
Mapping of hash names to hexdigest strings.
{ <hashname>: <hexdigest>, ... }
"""
hash_fns = hash_fns or {'md5': hashlib.md5}
checksums = {k: fn() for (k, fn) in hash_fns.items()}
with open(filename, 'rb') as file_handle:
while True:
buf = file_handle.read(1048576) # 1 MiB
if not buf:
break
for hashfn in checksums.values():
hashfn.update(buf)
return {k: fn.hexdigest() for (k, fn) in checksums.items()}
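Exercising the checksum pattern above on a throwaway file (a standalone re-implementation so the snippet runs on its own):

```python
import hashlib
import os
import tempfile

def checksums_of(filename, hash_fns):
  # Same streaming-read pattern as GetChecksumsFromFile above.
  sums = {k: fn() for (k, fn) in hash_fns.items()}
  with open(filename, 'rb') as fh:
    while True:
      buf = fh.read(1048576)  # 1 MiB chunks
      if not buf:
        break
      for h in sums.values():
        h.update(buf)
  return {k: h.hexdigest() for (k, h) in sums.items()}

fd, path = tempfile.mkstemp()
with os.fdopen(fd, 'wb') as tmp:
  tmp.write(b'hello')
sums = checksums_of(path, {'md5': hashlib.md5, 'sha256': hashlib.sha256})
os.remove(path)
print(sums['md5'])  # 5d41402abc4b2a76b9719d911017c592
```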
def CreateChanges(output,
deb_file,
architecture,
short_description,
maintainer,
package,
version,
section,
priority,
distribution,
urgency,
timestamp=0):
"""Create the changes file."""
checksums = GetChecksumsFromFile(deb_file, {'md5': hashlib.md5,
'sha1': hashlib.sha1,
'sha256': hashlib.sha256})
debsize = str(os.path.getsize(deb_file))
deb_basename = os.path.basename(deb_file)
changesdata = ''.join([
MakeDebianControlField('Format', '1.8'),
MakeDebianControlField('Date', time.ctime(timestamp)),
MakeDebianControlField('Source', package),
MakeDebianControlField('Binary', package),
MakeDebianControlField('Architecture', architecture),
MakeDebianControlField('Version', version),
MakeDebianControlField('Distribution', distribution),
MakeDebianControlField('Urgency', urgency),
MakeDebianControlField('Maintainer', maintainer),
MakeDebianControlField('Changed-By', maintainer),
MakeDebianControlField('Description',
'\n%s - %s' % (package, short_description)),
MakeDebianControlField('Changes',
('\n%s (%s) %s; urgency=%s'
'\nChanges are tracked in revision control.') %
(package, version, distribution, urgency)),
MakeDebianControlField(
'Files', '\n' + ' '.join(
[checksums['md5'], debsize, section, priority, deb_basename])),
MakeDebianControlField(
'Checksums-Sha1',
'\n' + ' '.join([checksums['sha1'], debsize, deb_basename])),
MakeDebianControlField(
'Checksums-Sha256',
'\n' + ' '.join([checksums['sha256'], debsize, deb_basename]))
])
  with open(output, 'wb') as changes_fh:
    changes_fh.write(changesdata.encode('utf-8'))
def GetFlagValue(flagvalue, strip=True):
  if flagvalue:
    if isinstance(flagvalue, bytes):
      flagvalue = flagvalue.decode('utf-8')
    if flagvalue[0] == '@':
      with open(flagvalue[1:], 'rb') as f:
        flagvalue = f.read().decode('utf-8')
    if strip:
      return flagvalue.strip()
  return flagvalue
def GetFlagValues(flagvalues):
if flagvalues:
return [GetFlagValue(f, False) for f in flagvalues]
else:
return None
def main(unused_argv):
CreateDeb(
FLAGS.output,
FLAGS.data,
preinst=GetFlagValue(FLAGS.preinst, False),
postinst=GetFlagValue(FLAGS.postinst, False),
prerm=GetFlagValue(FLAGS.prerm, False),
postrm=GetFlagValue(FLAGS.postrm, False),
config=GetFlagValue(FLAGS.config, False),
templates=GetFlagValue(FLAGS.templates, False),
conffiles=GetFlagValues(FLAGS.conffile),
package=FLAGS.package,
version=GetFlagValue(FLAGS.version),
description=GetFlagValue(FLAGS.description),
maintainer=FLAGS.maintainer,
section=FLAGS.section,
architecture=FLAGS.architecture,
depends=GetFlagValues(FLAGS.depends),
suggests=FLAGS.suggests,
enhances=FLAGS.enhances,
preDepends=FLAGS.pre_depends,
recommends=FLAGS.recommends,
homepage=FLAGS.homepage,
builtUsing=GetFlagValue(FLAGS.built_using),
priority=FLAGS.priority,
conflicts=FLAGS.conflicts,
installedSize=GetFlagValue(FLAGS.installed_size))
CreateChanges(
output=FLAGS.changes,
deb_file=FLAGS.output,
architecture=FLAGS.architecture,
short_description=GetFlagValue(FLAGS.description).split('\n')[0],
maintainer=FLAGS.maintainer, package=FLAGS.package,
version=GetFlagValue(FLAGS.version), section=FLAGS.section,
priority=FLAGS.priority, distribution=FLAGS.distribution,
urgency=FLAGS.urgency)
if __name__ == '__main__':
MakeGflags()
FLAGS = flags.FLAGS
main(FLAGS(sys.argv))

pkg/make_rpm.py (new file)
@@ -0,0 +1,305 @@
# Copyright 2017 The Bazel Authors. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""A simple cross-platform helper to create an RPM package."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import contextlib
import fileinput
import os
import re
import shutil
import subprocess
import sys
import tempfile
from absl import flags
flags.DEFINE_string('rpmbuild', '', 'Path to rpmbuild executable')
flags.DEFINE_string('name', '', 'The name of the software being packaged.')
flags.DEFINE_string('version', '',
'The version of the software being packaged.')
flags.DEFINE_string('release', '',
'The release of the software being packaged.')
flags.DEFINE_string('arch', '',
'The CPU architecture of the software being packaged.')
flags.DEFINE_string('spec_file', '',
'The file containing the RPM specification.')
flags.DEFINE_string('out_file', '',
'The destination to save the resulting RPM file to.')
flags.DEFINE_boolean('debug', False, 'Print debug messages.')
# Setup to safely create a temporary directory and clean it up when done.
@contextlib.contextmanager
def Cd(newdir, cleanup=lambda: True):
"""Change the current working directory.
This will run the provided cleanup function when the context exits and the
previous working directory is restored.
Args:
newdir: The directory to change to. This must already exist.
cleanup: An optional cleanup function to be executed when the context exits.
Yields:
Nothing.
"""
prevdir = os.getcwd()
os.chdir(os.path.expanduser(newdir))
try:
yield
finally:
os.chdir(prevdir)
cleanup()
@contextlib.contextmanager
def Tempdir():
"""Create a new temporary directory and change to it.
The temporary directory will be removed when the context exits.
Yields:
The full path of the temporary directory.
"""
dirpath = tempfile.mkdtemp()
def Cleanup():
shutil.rmtree(dirpath)
with Cd(dirpath, Cleanup):
yield dirpath
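The two context managers above compose into "work in a temp dir, then restore"; a self-contained condensation verifying the cleanup and cwd restoration:

```python
import contextlib
import os
import shutil
import tempfile

@contextlib.contextmanager
def temp_workdir():
  # Condensed combination of Cd and Tempdir above.
  dirpath = tempfile.mkdtemp()
  prevdir = os.getcwd()
  os.chdir(dirpath)
  try:
    yield dirpath
  finally:
    os.chdir(prevdir)
    shutil.rmtree(dirpath)

before = os.getcwd()
with temp_workdir() as d:
  # samefile tolerates symlinked temp roots (e.g. /tmp -> /private/tmp).
  assert os.path.samefile(os.getcwd(), d)
assert os.getcwd() == before
assert not os.path.exists(d)
```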
def GetFlagValue(flagvalue, strip=True):
if flagvalue:
if flagvalue[0] == '@':
with open(flagvalue[1:], 'r') as f:
flagvalue = f.read()
if strip:
return flagvalue.strip()
return flagvalue
WROTE_FILE_RE = re.compile(r'Wrote: (?P<rpm_path>.+)', re.MULTILINE)
def FindOutputFile(log):
"""Find the written file from the log information."""
m = WROTE_FILE_RE.search(log)
if m:
return m.group('rpm_path')
return None
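The `Wrote:` scraping above can be exercised directly on rpmbuild-style log text (the log lines here are hypothetical):

```python
import re

WROTE_FILE_RE = re.compile(r'Wrote: (?P<rpm_path>.+)', re.MULTILINE)

log = ('Processing files: foo-1.0-1.x86_64\n'
       'Wrote: /tmp/topdir/RPMS/x86_64/foo-1.0-1.x86_64.rpm\n'
       'Executing(%clean): /bin/sh ...\n')
m = WROTE_FILE_RE.search(log)
print(m.group('rpm_path'))  # /tmp/topdir/RPMS/x86_64/foo-1.0-1.x86_64.rpm
```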
def CopyAndRewrite(input_file, output_file, replacements=None):
"""Copies the given file and optionally rewrites with replacements.
Args:
input_file: The file to copy.
output_file: The file to write to.
replacements: A dictionary of replacements.
      Keys are prefixes to scan for; values are the replacements to write
      after the prefix.
"""
with open(output_file, 'w') as output:
for line in fileinput.input(input_file):
if replacements:
for prefix, text in replacements.items():
if line.startswith(prefix):
line = prefix + ' ' + text + '\n'
break
output.write(line)
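The prefix-rewrite loop at the heart of CopyAndRewrite, restated over an in-memory list so it can be checked without touching the filesystem:

```python
def rewrite_lines(lines, replacements):
  # Same per-line logic as CopyAndRewrite above, minus the file I/O.
  out = []
  for line in lines:
    for prefix, text in replacements.items():
      if line.startswith(prefix):
        line = prefix + ' ' + text + '\n'
        break
    out.append(line)
  return out

spec = ['Name: foo\n', 'Version: 0.0\n', 'Release: 0\n']
print(rewrite_lines(spec, {'Version:': '1.2.3', 'Release:': '7'}))
# ['Name: foo\n', 'Version: 1.2.3\n', 'Release: 7\n']
```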
def IsExe(fpath):
return os.path.isfile(fpath) and os.access(fpath, os.X_OK)
def Which(program):
"""Search for the given program in the PATH.
Args:
program: The program to search for.
Returns:
The full path to the program.
"""
for path in os.environ['PATH'].split(os.pathsep):
filename = os.path.join(path, program)
if IsExe(filename):
return filename
return None
class NoRpmbuildFoundError(Exception):
pass
class InvalidRpmbuildError(Exception):
pass
def FindRpmbuild(rpmbuild_path):
if rpmbuild_path:
if not IsExe(rpmbuild_path):
raise InvalidRpmbuildError('{} is not executable'.format(rpmbuild_path))
return rpmbuild_path
path = Which('rpmbuild')
if path:
return path
raise NoRpmbuildFoundError()
class RpmBuilder(object):
"""A helper class to manage building the RPM file."""
SOURCE_DIR = 'SOURCES'
BUILD_DIR = 'BUILD'
TEMP_DIR = 'TMP'
DIRS = [SOURCE_DIR, BUILD_DIR, TEMP_DIR]
def __init__(self, name, version, release, arch, debug, rpmbuild_path):
self.name = name
self.version = GetFlagValue(version)
self.release = GetFlagValue(release)
self.arch = arch
self.debug = debug
self.files = []
self.rpmbuild_path = FindRpmbuild(rpmbuild_path)
self.rpm_path = None
def AddFiles(self, paths, root=''):
"""Add a set of files to the current RPM.
If an item in paths is a directory, its files are recursively added.
Args:
paths: The files to add.
root: The root of the filesystem to search for files. Defaults to ''.
"""
for path in paths:
full_path = os.path.join(root, path)
if os.path.isdir(full_path):
self.AddFiles(os.listdir(full_path), full_path)
else:
self.files.append(full_path)
def SetupWorkdir(self, spec_file, original_dir):
"""Create the needed structure in the workdir."""
# Create directory structure.
for name in RpmBuilder.DIRS:
if not os.path.exists(name):
os.makedirs(name, 0o777)
# Copy the files.
for f in self.files:
dst_dir = os.path.join(RpmBuilder.BUILD_DIR, os.path.dirname(f))
if not os.path.exists(dst_dir):
os.makedirs(dst_dir, 0o777)
shutil.copy(os.path.join(original_dir, f), dst_dir)
# Copy the spec file, updating with the correct version.
spec_origin = os.path.join(original_dir, spec_file)
self.spec_file = os.path.basename(spec_file)
replacements = {}
if self.version:
replacements['Version:'] = self.version
if self.release:
replacements['Release:'] = self.release
CopyAndRewrite(spec_origin, self.spec_file, replacements)
def CallRpmBuild(self, dirname):
"""Call rpmbuild with the correct arguments."""
args = [
self.rpmbuild_path,
'--define',
'_topdir %s' % dirname,
'--define',
'_tmppath %s/TMP' % dirname,
'--bb',
'--buildroot',
os.path.join(dirname, 'BUILDROOT'),
self.spec_file,
]
p = subprocess.Popen(
args,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
env={'LANG': 'C'})
output = p.communicate()[0].decode()
if p.returncode == 0:
# Find the created file.
self.rpm_path = FindOutputFile(output)
if p.returncode != 0 or not self.rpm_path:
print('Error calling rpmbuild:')
print(output)
# Return the status.
return p.returncode
def SaveResult(self, out_file):
"""Save the result RPM out of the temporary working directory."""
if self.rpm_path:
shutil.copy(self.rpm_path, out_file)
if self.debug:
print('Saved RPM file to %s' % out_file)
else:
print('No RPM file created.')
def Build(self, spec_file, out_file):
"""Build the RPM described by the spec_file."""
if self.debug:
print('Building RPM for %s at %s' % (self.name, out_file))
original_dir = os.getcwd()
spec_file = os.path.join(original_dir, spec_file)
out_file = os.path.join(original_dir, out_file)
with Tempdir() as dirname:
self.SetupWorkdir(spec_file, original_dir)
status = self.CallRpmBuild(dirname)
self.SaveResult(out_file)
return status
def main(argv=()):
try:
builder = RpmBuilder(FLAGS.name, FLAGS.version, FLAGS.release, FLAGS.arch,
FLAGS.debug, FLAGS.rpmbuild)
builder.AddFiles(argv[1:])
return builder.Build(FLAGS.spec_file, FLAGS.out_file)
except NoRpmbuildFoundError:
print('ERROR: rpmbuild is required but is not present in PATH')
return 1
if __name__ == '__main__':
FLAGS = flags.FLAGS
main(FLAGS(sys.argv))

pkg/path.bzl (new file)
@@ -0,0 +1,56 @@
# Copyright 2016 The Bazel Authors. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Helper functions that don't depend on Skylark, so can be unit tested."""
def _short_path_dirname(path):
"""Returns the directory's name of the short path of an artifact."""
sp = path.short_path
last_pkg = sp.rfind("/")
if last_pkg == -1:
# Top-level BUILD file.
return ""
return sp[:last_pkg]
def dest_path(f, strip_prefix):
"""Returns the short path of f, stripped of strip_prefix."""
if strip_prefix == None:
# If no strip_prefix was specified, use the package of the
# given input as the strip_prefix.
strip_prefix = _short_path_dirname(f)
if not strip_prefix:
return f.short_path
if f.short_path.startswith(strip_prefix):
return f.short_path[len(strip_prefix):]
return f.short_path
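Since these helpers avoid Bazel-only constructs, their behavior can be checked from plain Python; a string-level sketch of dest_path, with a File's `.short_path` modeled as a plain string (paths hypothetical):

```python
def dest_path_str(short_path, strip_prefix):
  # String-level sketch of the Starlark dest_path above.
  if strip_prefix is None:
    # Default: strip the file's own package directory.
    strip_prefix = short_path.rsplit('/', 1)[0] if '/' in short_path else ''
  if not strip_prefix:
    return short_path
  if short_path.startswith(strip_prefix):
    return short_path[len(strip_prefix):]
  return short_path

assert dest_path_str('foo/bar/baz.txt', 'foo/') == 'bar/baz.txt'
# With no prefix the package dir is stripped, leaving a leading slash that
# build_tar.py later removes with lstrip('/').
assert dest_path_str('foo/bar/baz.txt', None) == '/baz.txt'
```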
def compute_data_path(out, data_path):
"""Compute the relative data path prefix from the data_path attribute."""
if data_path:
# Strip ./ from the beginning if specified.
        # There is no way to handle .// correctly (no function that would make
        # that possible, and Skylark is not Turing-complete), so just treat it
        # as an absolute path.
if len(data_path) >= 2 and data_path[0:2] == "./":
data_path = data_path[2:]
if not data_path or data_path == ".": # Relative to current package
return _short_path_dirname(out)
elif data_path[0] == "/": # Absolute path
return data_path[1:]
else: # Relative to a sub-directory
tmp_short_path_dirname = _short_path_dirname(out)
if tmp_short_path_dirname:
return tmp_short_path_dirname + "/" + data_path
return data_path
else:
return None

pkg/pkg.bzl (new file)
@@ -0,0 +1,325 @@
# Copyright 2015 The Bazel Authors. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Rules for manipulation of various packaging."""
load(":path.bzl", "compute_data_path", "dest_path")
# Filetype to restrict inputs
tar_filetype = [".tar", ".tar.gz", ".tgz", ".tar.xz", ".tar.bz2"]
deb_filetype = [".deb", ".udeb"]
def _remap(remap_paths, path):
"""If path starts with a key in remap_paths, rewrite it."""
for prefix, replacement in remap_paths.items():
if path.startswith(prefix):
return replacement + path[len(prefix):]
return path
def _quote(filename, protect = "="):
"""Quote the filename, by escaping = by \\= and \\ by \\\\"""
return filename.replace("\\", "\\\\").replace(protect, "\\" + protect)
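_quote is the producer half of a contract whose consumer is unquote_and_split in build_tar.py; a Python restatement of both (condensed copies, for illustration) shows the round trip:

```python
def quote(filename, protect='='):
  # Python restatement of the Starlark _quote above.
  return filename.replace('\\', '\\\\').replace(protect, '\\' + protect)

def unquote_and_split(arg, c):
  # Condensed copy of the consumer in build_tar.py.
  head = ''
  i = 0
  while i < len(arg):
    if arg[i] == c:
      return (head, arg[i + 1:])
    elif arg[i] == '\\':
      i += 1
      if i == len(arg):
        return (head, '')
      head += arg[i]
    else:
      head += arg[i]
    i += 1
  return (head, '')

src = 'weird=name.txt'
arg = quote(src) + '=' + 'dest/path'
assert unquote_and_split(arg, '=') == (src, 'dest/path')
```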
def _pkg_tar_impl(ctx):
"""Implementation of the pkg_tar rule."""
# Compute the relative path
data_path = compute_data_path(ctx.outputs.out, ctx.attr.strip_prefix)
# Find a list of path remappings to apply.
remap_paths = ctx.attr.remap_paths
# Start building the arguments.
args = [
"--output=" + ctx.outputs.out.path,
"--directory=" + ctx.attr.package_dir,
"--mode=" + ctx.attr.mode,
"--owner=" + ctx.attr.owner,
"--owner_name=" + ctx.attr.ownername,
]
if ctx.attr.mtime != -1: # Note: Must match default in rule def.
if ctx.attr.portable_mtime:
fail("You may not set both mtime and portable_mtime")
args.append("--mtime=%d" % ctx.attr.mtime)
if ctx.attr.portable_mtime:
args.append("--mtime=portable")
# Add runfiles if requested
file_inputs = []
if ctx.attr.include_runfiles:
runfiles_depsets = []
for f in ctx.attr.srcs:
default_runfiles = f[DefaultInfo].default_runfiles
if default_runfiles != None:
runfiles_depsets.append(default_runfiles.files)
# deduplicates files in srcs attribute and their runfiles
file_inputs = depset(ctx.files.srcs, transitive = runfiles_depsets).to_list()
else:
file_inputs = ctx.files.srcs[:]
args += [
"--file=%s=%s" % (_quote(f.path), _remap(remap_paths, dest_path(f, data_path)))
for f in file_inputs
]
for target, f_dest_path in ctx.attr.files.items():
target_files = target.files.to_list()
if len(target_files) != 1:
fail("Each input must describe exactly one file.", attr = "files")
file_inputs += target_files
args += ["--file=%s=%s" % (_quote(target_files[0].path), f_dest_path)]
if ctx.attr.modes:
args += [
"--modes=%s=%s" % (_quote(key), ctx.attr.modes[key])
for key in ctx.attr.modes
]
if ctx.attr.owners:
args += [
"--owners=%s=%s" % (_quote(key), ctx.attr.owners[key])
for key in ctx.attr.owners
]
if ctx.attr.ownernames:
args += [
"--owner_names=%s=%s" % (_quote(key), ctx.attr.ownernames[key])
for key in ctx.attr.ownernames
]
if ctx.attr.empty_files:
args += ["--empty_file=%s" % empty_file for empty_file in ctx.attr.empty_files]
if ctx.attr.empty_dirs:
args += ["--empty_dir=%s" % empty_dir for empty_dir in ctx.attr.empty_dirs]
if ctx.attr.extension:
dotPos = ctx.attr.extension.find(".")
if dotPos > 0:
dotPos += 1
args += ["--compression=%s" % ctx.attr.extension[dotPos:]]
elif ctx.attr.extension == "tgz":
args += ["--compression=gz"]
args += ["--tar=" + f.path for f in ctx.files.deps]
args += [
"--link=%s:%s" % (_quote(k, protect = ":"), ctx.attr.symlinks[k])
for k in ctx.attr.symlinks
]
arg_file = ctx.actions.declare_file(ctx.label.name + ".args")
ctx.actions.write(arg_file, "\n".join(args))
ctx.actions.run(
inputs = file_inputs + ctx.files.deps + [arg_file],
executable = ctx.executable.build_tar,
arguments = ["--flagfile", arg_file.path],
outputs = [ctx.outputs.out],
mnemonic = "PackageTar",
use_default_shell_env = True,
)
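The extension handling in `_pkg_tar_impl` above derives the compression flag from everything after the first dot, with `tgz` special-cased. A hypothetical standalone Python mirror of that branch:

```python
def compression_for_extension(extension):
    # Everything after the first "." names the compressor ("tar.gz" -> "gz").
    dot = extension.find(".")
    if dot > 0:
        return extension[dot + 1:]
    # "tgz" is shorthand for gzip compression.
    if extension == "tgz":
        return "gz"
    # Plain "tar": no --compression flag is emitted.
    return None
```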
def _pkg_deb_impl(ctx):
"""The implementation for the pkg_deb rule."""
files = [ctx.file.data]
args = [
"--output=" + ctx.outputs.deb.path,
"--changes=" + ctx.outputs.changes.path,
"--data=" + ctx.file.data.path,
"--package=" + ctx.attr.package,
"--architecture=" + ctx.attr.architecture,
"--maintainer=" + ctx.attr.maintainer,
]
if ctx.attr.preinst:
args += ["--preinst=@" + ctx.file.preinst.path]
files += [ctx.file.preinst]
if ctx.attr.postinst:
args += ["--postinst=@" + ctx.file.postinst.path]
files += [ctx.file.postinst]
if ctx.attr.prerm:
args += ["--prerm=@" + ctx.file.prerm.path]
files += [ctx.file.prerm]
if ctx.attr.postrm:
args += ["--postrm=@" + ctx.file.postrm.path]
files += [ctx.file.postrm]
if ctx.attr.config:
args += ["--config=@" + ctx.file.config.path]
files += [ctx.file.config]
if ctx.attr.templates:
args += ["--templates=@" + ctx.file.templates.path]
files += [ctx.file.templates]
# Conffiles can be specified by a file or a string list
if ctx.attr.conffiles_file:
if ctx.attr.conffiles:
fail("Both conffiles and conffiles_file attributes were specified")
args += ["--conffile=@" + ctx.file.conffiles_file.path]
files += [ctx.file.conffiles_file]
elif ctx.attr.conffiles:
args += ["--conffile=%s" % cf for cf in ctx.attr.conffiles]
# Version and description can be specified by a file or inlined
if ctx.attr.version_file:
if ctx.attr.version:
fail("Both version and version_file attributes were specified")
args += ["--version=@" + ctx.file.version_file.path]
files += [ctx.file.version_file]
elif ctx.attr.version:
args += ["--version=" + ctx.attr.version]
else:
fail("Neither version_file nor version attribute was specified")
if ctx.attr.description_file:
if ctx.attr.description:
fail("Both description and description_file attributes were specified")
args += ["--description=@" + ctx.file.description_file.path]
files += [ctx.file.description_file]
elif ctx.attr.description:
args += ["--description=" + ctx.attr.description]
else:
fail("Neither description_file nor description attribute was specified")
# Built using can also be specified by a file or inlined (but is not mandatory)
if ctx.attr.built_using_file:
if ctx.attr.built_using:
fail("Both built_using and built_using_file attributes were specified")
args += ["--built_using=@" + ctx.file.built_using_file.path]
files += [ctx.file.built_using_file]
elif ctx.attr.built_using:
args += ["--built_using=" + ctx.attr.built_using]
if ctx.attr.depends_file:
if ctx.attr.depends:
fail("Both depends and depends_file attributes were specified")
args += ["--depends=@" + ctx.file.depends_file.path]
files += [ctx.file.depends_file]
elif ctx.attr.depends:
args += ["--depends=" + d for d in ctx.attr.depends]
if ctx.attr.priority:
args += ["--priority=" + ctx.attr.priority]
if ctx.attr.section:
args += ["--section=" + ctx.attr.section]
if ctx.attr.homepage:
args += ["--homepage=" + ctx.attr.homepage]
args += ["--distribution=" + ctx.attr.distribution]
args += ["--urgency=" + ctx.attr.urgency]
args += ["--suggests=" + d for d in ctx.attr.suggests]
args += ["--enhances=" + d for d in ctx.attr.enhances]
args += ["--conflicts=" + d for d in ctx.attr.conflicts]
args += ["--pre_depends=" + d for d in ctx.attr.predepends]
args += ["--recommends=" + d for d in ctx.attr.recommends]
ctx.actions.run(
executable = ctx.executable.make_deb,
arguments = args,
inputs = files,
outputs = [ctx.outputs.deb, ctx.outputs.changes],
mnemonic = "MakeDeb",
)
ctx.actions.run_shell(
command = "ln -s %s %s" % (ctx.outputs.deb.basename, ctx.outputs.out.path),
inputs = [ctx.outputs.deb],
outputs = [ctx.outputs.out],
)
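`_pkg_deb_impl` above repeats one pattern for version, description, built_using, and depends: a value may be given inline or via a file (passed to the tool as `@path`), but not both. A hypothetical helper sketching that pattern:

```python
def inline_or_file(name, inline, file_path, required=False):
    # A value may come from a file (forwarded as "@path") or inline, not both.
    if file_path:
        if inline:
            raise ValueError(
                "Both %s and %s_file attributes were specified" % (name, name))
        return "--%s=@%s" % (name, file_path)
    if inline:
        return "--%s=%s" % (name, inline)
    if required:
        raise ValueError(
            "Neither %s_file nor %s attribute was specified" % (name, name))
    return None
```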
# A rule for creating a tar file, see README.md
_real_pkg_tar = rule(
implementation = _pkg_tar_impl,
attrs = {
"strip_prefix": attr.string(),
"package_dir": attr.string(default = "/"),
"deps": attr.label_list(allow_files = tar_filetype),
"srcs": attr.label_list(allow_files = True),
"files": attr.label_keyed_string_dict(allow_files = True),
"mode": attr.string(default = "0555"),
"modes": attr.string_dict(),
"mtime": attr.int(default = -1),
"portable_mtime": attr.bool(default = True),
"owner": attr.string(default = "0.0"),
"ownername": attr.string(default = "."),
"owners": attr.string_dict(),
"ownernames": attr.string_dict(),
"extension": attr.string(default = "tar"),
"symlinks": attr.string_dict(),
"empty_files": attr.string_list(),
"include_runfiles": attr.bool(),
"empty_dirs": attr.string_list(),
"remap_paths": attr.string_dict(),
# Implicit dependencies.
"build_tar": attr.label(
default = Label("//tools/build_defs/pkg:build_tar"),
cfg = "host",
executable = True,
allow_files = True,
),
},
outputs = {
"out": "%{name}.%{extension}",
},
)
def pkg_tar(**kwargs):
# Compatibility with older versions of pkg_tar that define files as
# a flat list of labels.
if "srcs" not in kwargs:
if "files" in kwargs:
if not hasattr(kwargs["files"], "items"):
label = "%s//%s:%s" % (native.repository_name(), native.package_name(), kwargs["name"])
print("%s: you provided a non-dictionary to the pkg_tar `files` attribute. " % (label,) +
"This attribute was renamed to `srcs`. " +
"Consider renaming it in your BUILD file.")
kwargs["srcs"] = kwargs.pop("files")
_real_pkg_tar(**kwargs)
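The compatibility shim in `pkg_tar` above treats a list-valued `files` as the renamed `srcs` attribute, while a dict-valued `files` (the new label-to-destination mapping) passes through unchanged. A Python sketch of that normalization (hypothetical function name):

```python
def normalize_pkg_tar_kwargs(kwargs):
    # Legacy callers passed a flat list as `files`; dicts are the new form.
    if "srcs" not in kwargs and "files" in kwargs:
        if not hasattr(kwargs["files"], "items"):  # a list, not a dict
            kwargs["srcs"] = kwargs.pop("files")
    return kwargs
```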
# A rule for creating a deb file, see README.md
pkg_deb = rule(
implementation = _pkg_deb_impl,
attrs = {
"data": attr.label(mandatory = True, allow_single_file = tar_filetype),
"package": attr.string(mandatory = True),
"architecture": attr.string(default = "all"),
"distribution": attr.string(default = "unstable"),
"urgency": attr.string(default = "medium"),
"maintainer": attr.string(mandatory = True),
"preinst": attr.label(allow_single_file = True),
"postinst": attr.label(allow_single_file = True),
"prerm": attr.label(allow_single_file = True),
"postrm": attr.label(allow_single_file = True),
"config": attr.label(allow_single_file = True),
"templates": attr.label(allow_single_file = True),
"conffiles_file": attr.label(allow_single_file = True),
"conffiles": attr.string_list(default = []),
"version_file": attr.label(allow_single_file = True),
"version": attr.string(),
"description_file": attr.label(allow_single_file = True),
"description": attr.string(),
"built_using_file": attr.label(allow_single_file = True),
"built_using": attr.string(),
"priority": attr.string(),
"section": attr.string(),
"homepage": attr.string(),
"depends": attr.string_list(default = []),
"depends_file": attr.label(allow_single_file = True),
"suggests": attr.string_list(default = []),
"enhances": attr.string_list(default = []),
"conflicts": attr.string_list(default = []),
"predepends": attr.string_list(default = []),
"recommends": attr.string_list(default = []),
# Implicit dependencies.
"make_deb": attr.label(
default = Label("//tools/build_defs/pkg:make_deb"),
cfg = "host",
executable = True,
allow_files = True,
),
},
outputs = {
"out": "%{name}.deb",
"deb": "%{package}_%{version}_%{architecture}.deb",
"changes": "%{package}_%{version}_%{architecture}.changes",
},
)

pkg/rpm.bzl (new file)
@@ -0,0 +1,204 @@
# Copyright 2017 The Bazel Authors. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Rules to create RPM archives."""
rpm_filetype = [".rpm"]
spec_filetype = [".spec"]
def _pkg_rpm_impl(ctx):
    """Implements the pkg_rpm rule."""
files = []
args = ["--name=" + ctx.label.name]
if ctx.attr.rpmbuild_path:
args += ["--rpmbuild=" + ctx.attr.rpmbuild_path]
# Version can be specified by a file or inlined.
if ctx.attr.version_file:
if ctx.attr.version:
fail("Both version and version_file attributes were specified")
args += ["--version=@" + ctx.file.version_file.path]
files += [ctx.file.version_file]
elif ctx.attr.version:
args += ["--version=" + ctx.attr.version]
# Release can be specified by a file or inlined.
if ctx.attr.release_file:
if ctx.attr.release:
fail("Both release and release_file attributes were specified")
args += ["--release=@" + ctx.file.release_file.path]
files += [ctx.file.release_file]
elif ctx.attr.release:
args += ["--release=" + ctx.attr.release]
if ctx.attr.architecture:
args += ["--arch=" + ctx.attr.architecture]
if not ctx.attr.spec_file:
fail("spec_file was not specified")
# Expand the spec file template.
spec_file = ctx.actions.declare_file("%s.spec" % ctx.label.name)
# Create the default substitutions based on the data files.
substitutions = {}
for data_file in ctx.files.data:
key = "{%s}" % data_file.basename
substitutions[key] = data_file.path
ctx.actions.expand_template(
template = ctx.file.spec_file,
output = spec_file,
substitutions = substitutions,
)
args += ["--spec_file=" + spec_file.path]
files += [spec_file]
args += ["--out_file=" + ctx.outputs.rpm.path]
# Add data files.
if ctx.file.changelog:
files += [ctx.file.changelog]
args += [ctx.file.changelog.path]
files += ctx.files.data
for f in ctx.files.data:
args += [f.path]
if ctx.attr.debug:
args += ["--debug"]
# Call the generator script.
# TODO(katre): Generate a source RPM.
ctx.actions.run(
executable = ctx.executable._make_rpm,
use_default_shell_env = True,
arguments = args,
inputs = files,
outputs = [ctx.outputs.rpm],
mnemonic = "MakeRpm",
)
# Link the RPM to the expected output name.
ctx.actions.run(
executable = "ln",
arguments = [
"-s",
ctx.outputs.rpm.basename,
ctx.outputs.out.path,
],
inputs = [ctx.outputs.rpm],
outputs = [ctx.outputs.out],
)
# Link the RPM to the RPM-recommended output name.
if "rpm_nvra" in dir(ctx.outputs):
ctx.actions.run(
executable = "ln",
arguments = [
"-s",
ctx.outputs.rpm.basename,
ctx.outputs.rpm_nvra.path,
],
inputs = [ctx.outputs.rpm],
outputs = [ctx.outputs.rpm_nvra],
)
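The spec-template expansion in `_pkg_rpm_impl` above maps each data file's basename, wrapped in braces, to its full path, so a spec file can reference `{config}` and have it replaced by the real location. A hypothetical standalone sketch of that substitution map:

```python
import os

def spec_substitutions(data_paths):
    # "{basename}" in the spec template expands to the file's full path.
    return {"{%s}" % os.path.basename(p): p for p in data_paths}
```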
def _pkg_rpm_outputs(version, release):
outputs = {
"out": "%{name}.rpm",
"rpm": "%{name}-%{architecture}.rpm",
}
# The "rpm_nvra" output follows the recommended package naming convention of
# Name-Version-Release.Arch.rpm
# See http://ftp.rpm.org/max-rpm/ch-rpm-file-format.html
if version and release:
outputs["rpm_nvra"] = "%{name}-%{version}-%{release}.%{architecture}.rpm"
return outputs
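`_pkg_rpm_outputs` above declares the N-V-R.A output only when both `version` and `release` are literal strings (file-based values are unknown at analysis time). A direct Python rendering:

```python
def pkg_rpm_outputs(version, release):
    outputs = {
        "out": "%{name}.rpm",
        "rpm": "%{name}-%{architecture}.rpm",
    }
    # Name-Version-Release.Arch requires literal version and release.
    if version and release:
        outputs["rpm_nvra"] = "%{name}-%{version}-%{release}.%{architecture}.rpm"
    return outputs
```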
# Define the rule.
pkg_rpm = rule(
attrs = {
"spec_file": attr.label(
mandatory = True,
allow_single_file = spec_filetype,
),
"architecture": attr.string(default = "all"),
"version_file": attr.label(
allow_single_file = True,
),
"version": attr.string(),
"changelog": attr.label(
allow_single_file = True,
),
"data": attr.label_list(
mandatory = True,
allow_files = True,
),
"release_file": attr.label(allow_single_file = True),
"release": attr.string(),
"debug": attr.bool(default = False),
# Implicit dependencies.
"rpmbuild_path": attr.string(),
"_make_rpm": attr.label(
default = Label("//tools/build_defs/pkg:make_rpm"),
cfg = "host",
executable = True,
allow_files = True,
),
},
executable = False,
outputs = _pkg_rpm_outputs,
implementation = _pkg_rpm_impl,
)
"""Creates an RPM format package from the data files.
This runs rpmbuild (and requires it to be installed beforehand) to generate
an RPM package based on the spec_file and data attributes.
Two outputs are guaranteed to be produced: "%{name}.rpm", and
"%{name}-%{architecture}.rpm". If the "version" and "release" arguments are
non-empty, a third output will be produced, following the RPM-recommended
N-V-R.A format (Name-Version-Release.Architecture.rpm). Note that because
rule implementations cannot read file contents at analysis time, the
"version_file" and "release_file" arguments will not create an output
using N-V-R.A format.
Args:
spec_file: The RPM spec file to use. If the version or version_file
attributes are provided, the Version in the spec will be overwritten,
and likewise for release and release_file. Any Sources listed
in the spec file must be provided as data dependencies.
The base names of data dependencies can be replaced with the actual location
using "{basename}" syntax.
version: The version of the package to generate. This will overwrite any
Version provided in the spec file. Only specify one of version and
version_file.
version_file: A file containing the version of the package to generate. This
will overwrite any Version provided in the spec file. Only specify one of
version and version_file.
release: The release of the package to generate. This will overwrite any
release provided in the spec file. Only specify one of release and
release_file.
release_file: A file containing the release of the package to generate. This
will overwrite any release provided in the spec file. Only specify one of
release and release_file.
changelog: A changelog file to include. This will not be written to the spec
file, which should only list changes to the packaging, not the software itself.
data: List all files to be included in the package here.
"""

pkg/tests/BUILD (new file)
@@ -0,0 +1,243 @@
# -*- coding: utf-8 -*-
licenses(["notice"]) # Apache 2.0
load("@rules_pkg//:pkg.bzl", "pkg_deb", "pkg_tar")
genrule(
name = "generate_files",
outs = [
"etc/nsswitch.conf",
"usr/titi",
],
cmd = "for i in $(OUTS); do echo 1 >$$i; done",
)
filegroup(
name = "archive_testdata",
srcs = glob(["testdata/**"]),
visibility = ["//visibility:private"],
)
py_test(
name = "archive_test",
srcs = [
"archive_test.py",
],
data = [":archive_testdata"],
python_version = "PY2",
srcs_version = "PY2AND3",
tags = [
# archive.py requires xzcat, which is not available by default on Mac
"noci",
# TODO(laszlocsomor): fix on Windows or describe why it cannot pass.
"no_windows",
],
deps = [
"@rules_pkg//:archive",
"@bazel_tools//tools/python/runfiles",
],
)
py_test(
name = "path_test",
srcs = ["path_test.py"],
data = ["@rules_pkg//:path.bzl"],
srcs_version = "PY2AND3",
)
py_test(
name = "make_rpm_test",
srcs = ["make_rpm_test.py"],
python_version = "PY2",
srcs_version = "PY2AND3",
# rpmbuild is not available in windows
tags = [
"no_windows",
],
deps = [
"@rules_pkg//:make_rpm_lib",
],
)
pkg_deb(
name = "test-deb",
built_using = "some_test_data (0.1.2)",
conffiles = [
"/etc/nsswitch.conf",
"/etc/other",
],
config = ":testdata/config",
data = ":test-tar-gz.tar.gz",
depends = [
"dep1",
"dep2",
],
description = "toto ®, Й, ק ,م, ๗, あ, 叶, 葉, 말, ü and é",
distribution = "trusty",
maintainer = "soméone@somewhere.com",
make_deb = "@rules_pkg//:make_deb",
package = "titi",
templates = ":testdata/templates",
urgency = "low",
version = "test",
)
[pkg_tar(
name = "test-tar-%s" % ext[1:],
srcs = [
":etc/nsswitch.conf",
":usr/titi",
],
build_tar = "@rules_pkg//:build_tar",
extension = "tar%s" % ext,
mode = "0644",
modes = {"usr/titi": "0755"},
owner = "42.24",
ownername = "titi.tata",
ownernames = {"etc/nsswitch.conf": "tata.titi"},
owners = {"etc/nsswitch.conf": "24.42"},
package_dir = "/",
strip_prefix = ".",
symlinks = {"usr/bin/java": "/path/to/bin/java"},
) for ext in [
"",
".gz",
".bz2",
".xz",  # This will break if xzcat is not installed
]]
[pkg_tar(
name = "test-tar-inclusion-%s" % ext,
build_tar = "@rules_pkg//:build_tar",
deps = [":test-tar-%s" % ext],
) for ext in [
"",
"gz",
"bz2",
"xz",
]]
pkg_tar(
name = "test-tar-strip_prefix-empty",
srcs = [
":etc/nsswitch.conf",
],
build_tar = "@rules_pkg//:build_tar",
strip_prefix = "",
)
pkg_tar(
name = "test-tar-strip_prefix-none",
srcs = [
":etc/nsswitch.conf",
],
build_tar = "@rules_pkg//:build_tar",
)
pkg_tar(
name = "test-tar-strip_prefix-etc",
srcs = [
":etc/nsswitch.conf",
],
build_tar = "@rules_pkg//:build_tar",
strip_prefix = "etc",
)
pkg_tar(
name = "test-tar-strip_prefix-dot",
srcs = [
":etc/nsswitch.conf",
],
build_tar = "@rules_pkg//:build_tar",
strip_prefix = ".",
)
pkg_tar(
name = "test-tar-files_dict",
build_tar = "@rules_pkg//:build_tar",
files = {
":etc/nsswitch.conf": "not-etc/mapped-filename.conf",
},
)
pkg_tar(
name = "test-tar-empty_files",
build_tar = "@rules_pkg//:build_tar",
empty_files = [
"/a",
"/b",
],
mode = "0o777",
)
pkg_tar(
name = "test-tar-empty_dirs",
build_tar = "@rules_pkg//:build_tar",
empty_dirs = [
"/tmp",
"/pmt",
],
mode = "0o777",
)
pkg_tar(
name = "test-tar-mtime",
srcs = [
":etc/nsswitch.conf",
],
build_tar = "@rules_pkg//:build_tar",
mtime = 946684740, # 1999-12-31, 23:59
portable_mtime = False,
)
sh_test(
name = "build_test",
size = "medium",
srcs = [
"build_test.sh",
],
data = [
"testenv.sh",
":test-deb.deb",
":test-tar-.tar",
":test-tar-bz2.tar.bz2",
":test-tar-empty_dirs.tar",
":test-tar-empty_files.tar",
":test-tar-files_dict.tar",
":test-tar-gz.tar.gz",
":test-tar-inclusion-.tar",
":test-tar-inclusion-bz2.tar",
":test-tar-inclusion-gz.tar",
":test-tar-inclusion-xz.tar",
":test-tar-mtime.tar",
":test-tar-strip_prefix-dot.tar",
":test-tar-strip_prefix-empty.tar",
":test-tar-strip_prefix-etc.tar",
":test-tar-strip_prefix-none.tar",
":test-tar-xz.tar.xz",
":titi_test_all.changes",
],
tags = [
# archive.py requires xzcat, which is not available by default on Mac
"noci",
# TODO(laszlocsomor): fix on Windows or describe why it cannot pass.
"no_windows",
],
deps = [
"@rules_pkg//third_party/test/shell:bashunit",
],
)
test_suite(
name = "windows_tests",
tags = [
"-no_windows",
"-slow",
],
visibility = ["//visibility:private"],
)
test_suite(
name = "all_windows_tests",
tests = [":windows_tests"],
)

pkg/tests/archive_test.py (new file)
@@ -0,0 +1,343 @@
# Copyright 2015 The Bazel Authors. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Testing for archive."""
import os
import os.path
import tarfile
import unittest
from rules_pkg import archive
from bazel_tools.tools.python.runfiles import runfiles
class SimpleArFileTest(unittest.TestCase):
"""Testing for SimpleArFile class."""
def setUp(self):
self.data_files = runfiles.Create()
def assertArFileContent(self, arfile, content):
"""Assert that arfile contains exactly the entries described by `content`.
Args:
arfile: the path to the AR file to test.
content: an array describing the expected content of the AR file.
Each entry in that list should be a dictionary where each field
is a field to test in the corresponding SimpleArFileEntry. To test
just the presence of a file "x", the entry can simply be
`{"filename": "x"}`; missing fields are ignored.
"""
print("READING: %s" % arfile)
with archive.SimpleArFile(arfile) as f:
current = f.next()
i = 0
while current:
error_msg = "Extraneous file at end of archive %s: %s" % (
arfile,
current.filename
)
self.assertTrue(i < len(content), error_msg)
for k, v in content[i].items():
value = getattr(current, k)
error_msg = " ".join([
"Value `%s` for key `%s` of file" % (value, k),
"%s in archive %s does" % (current.filename, arfile),
"not match expected value `%s`" % v
])
self.assertEqual(value, v, error_msg)
current = f.next()
i += 1
if i < len(content):
self.fail("Missing file %s in archive %s" % (content[i], arfile))
def testEmptyArFile(self):
self.assertArFileContent(
self.data_files.Rlocation(
os.path.join("rules_pkg", "tests", "testdata", "empty.ar")),
[])
def assertSimpleFileContent(self, names):
datafile = self.data_files.Rlocation(
os.path.join("rules_pkg", "tests", "testdata", "_".join(names) + ".ar"))
content = [{"filename": n,
"size": len(n.encode("utf-8")),
"data": n.encode("utf-8")}
for n in names]
self.assertArFileContent(datafile, content)
def testAFile(self):
self.assertSimpleFileContent(["a"])
def testBFile(self):
self.assertSimpleFileContent(["b"])
def testABFile(self):
self.assertSimpleFileContent(["ab"])
def testA_BFile(self):
self.assertSimpleFileContent(["a", "b"])
def testA_ABFile(self):
self.assertSimpleFileContent(["a", "ab"])
def testA_B_ABFile(self):
self.assertSimpleFileContent(["a", "b", "ab"])
class TarFileWriterTest(unittest.TestCase):
"""Testing for TarFileWriter class."""
def assertTarFileContent(self, tar, content):
"""Assert that tarfile contains exactly the entries described by `content`.
Args:
tar: the path to the TAR file to test.
content: an array describing the expected content of the TAR file.
Each entry in that list should be a dictionary where each field
is a field to test in the corresponding TarInfo. To test just the
presence of a file "x", the entry can simply be `{"name": "x"}`;
missing fields are ignored. To match the content of a file entry,
use the key "data".
"""
with tarfile.open(tar, "r:") as f:
i = 0
for current in f:
error_msg = "Extraneous file at end of archive %s: %s" % (
tar,
current.name
)
self.assertTrue(i < len(content), error_msg)
for k, v in content[i].items():
if k == "data":
value = f.extractfile(current).read()
else:
value = getattr(current, k)
error_msg = " ".join([
"Value `%s` for key `%s` of file" % (value, k),
"%s in archive %s does" % (current.name, tar),
"not match expected value `%s`" % v
])
self.assertEqual(value, v, error_msg)
i += 1
if i < len(content):
self.fail("Missing file %s in archive %s" % (content[i], tar))
def setUp(self):
self.tempfile = os.path.join(os.environ["TEST_TMPDIR"], "test.tar")
self.data_files = runfiles.Create()
def tearDown(self):
if os.path.exists(self.tempfile):
os.remove(self.tempfile)
def testEmptyTarFile(self):
with archive.TarFileWriter(self.tempfile):
pass
self.assertTarFileContent(self.tempfile, [])
def assertSimpleFileContent(self, names):
with archive.TarFileWriter(self.tempfile) as f:
for n in names:
f.add_file(n, content=n)
content = ([{"name": "."}] +
[{"name": n,
"size": len(n.encode("utf-8")),
"data": n.encode("utf-8")}
for n in names])
self.assertTarFileContent(self.tempfile, content)
def testAddFile(self):
self.assertSimpleFileContent(["./a"])
self.assertSimpleFileContent(["./b"])
self.assertSimpleFileContent(["./ab"])
self.assertSimpleFileContent(["./a", "./b"])
self.assertSimpleFileContent(["./a", "./ab"])
self.assertSimpleFileContent(["./a", "./b", "./ab"])
def testDottedFiles(self):
with archive.TarFileWriter(self.tempfile) as f:
f.add_file("a")
f.add_file("/b")
f.add_file("./c")
f.add_file("./.d")
f.add_file("..e")
f.add_file(".f")
content = [
{"name": "."}, {"name": "./a"}, {"name": "/b"}, {"name": "./c"},
{"name": "./.d"}, {"name": "./..e"}, {"name": "./.f"}
]
self.assertTarFileContent(self.tempfile, content)
def testAddDir(self):
# For some strange reason, the trailing slash is stripped by the test
content = [
{"name": ".", "mode": 0o755},
{"name": "./a", "mode": 0o755},
{"name": "./a/b", "data": b"ab", "mode": 0o644},
{"name": "./a/c", "mode": 0o755},
{"name": "./a/c/d", "data": b"acd", "mode": 0o644},
]
tempdir = os.path.join(os.environ["TEST_TMPDIR"], "test_dir")
# Iterate over the `content` array to create the directory
# structure it describes.
for c in content:
if "data" in c:
p = os.path.join(tempdir, c["name"][2:])
os.makedirs(os.path.dirname(p))
with open(p, "wb") as f:
f.write(c["data"])
with archive.TarFileWriter(self.tempfile) as f:
f.add_dir("./", tempdir, mode=0o644)
self.assertTarFileContent(self.tempfile, content)
def testMergeTar(self):
content = [
{"name": "./a", "data": b"a"},
{"name": "./ab", "data": b"ab"},
]
for ext in ["", ".gz", ".bz2", ".xz"]:
with archive.TarFileWriter(self.tempfile) as f:
datafile = self.data_files.Rlocation(
os.path.join("rules_pkg", "tests", "testdata", "tar_test.tar" + ext))
f.add_tar(datafile, name_filter=lambda n: n != "./b")
self.assertTarFileContent(self.tempfile, content)
def testMergeTarRelocated(self):
content = [
{"name": ".", "mode": 0o755},
{"name": "./foo", "mode": 0o755},
{"name": "./foo/a", "data": b"a"},
{"name": "./foo/ab", "data": b"ab"},
]
with archive.TarFileWriter(self.tempfile) as f:
datafile = self.data_files.Rlocation(
os.path.join("rules_pkg", "tests", "testdata", "tar_test.tar"))
f.add_tar(datafile, name_filter=lambda n: n != "./b", root="/foo")
self.assertTarFileContent(self.tempfile, content)
def testAddingDirectoriesForFile(self):
with archive.TarFileWriter(self.tempfile) as f:
f.add_file("d/f")
content = [
{"name": ".",
"mode": 0o755},
{"name": "./d",
"mode": 0o755},
{"name": "./d/f"},
]
self.assertTarFileContent(self.tempfile, content)
def testAddingDirectoriesForFileSeparately(self):
d_dir = os.path.join(os.environ["TEST_TMPDIR"], "d_dir")
os.makedirs(d_dir)
with open(os.path.join(d_dir, "dir_file"), "w"):
pass
a_dir = os.path.join(os.environ["TEST_TMPDIR"], "a_dir")
os.makedirs(a_dir)
with open(os.path.join(a_dir, "dir_file"), "w"):
pass
with archive.TarFileWriter(self.tempfile) as f:
f.add_dir("d", d_dir)
f.add_file("d/f")
f.add_dir("a", a_dir)
f.add_file("a/b/f")
content = [
{"name": ".",
"mode": 0o755},
{"name": "./d",
"mode": 0o755},
{"name": "./d/dir_file"},
{"name": "./d/f"},
{"name": "./a",
"mode": 0o755},
{"name": "./a/dir_file"},
{"name": "./a/b",
"mode": 0o755},
{"name": "./a/b/f"},
]
self.assertTarFileContent(self.tempfile, content)
def testAddingDirectoriesForFileManually(self):
with archive.TarFileWriter(self.tempfile) as f:
f.add_file("d", tarfile.DIRTYPE)
f.add_file("d/f")
f.add_file("a", tarfile.DIRTYPE)
f.add_file("a/b", tarfile.DIRTYPE)
f.add_file("a/b", tarfile.DIRTYPE)
f.add_file("a/b/", tarfile.DIRTYPE)
f.add_file("a/b/c/f")
f.add_file("x/y/f")
f.add_file("x", tarfile.DIRTYPE)
content = [
{"name": ".",
"mode": 0o755},
{"name": "./d",
"mode": 0o755},
{"name": "./d/f"},
{"name": "./a",
"mode": 0o755},
{"name": "./a/b",
"mode": 0o755},
{"name": "./a/b/c",
"mode": 0o755},
{"name": "./a/b/c/f"},
{"name": "./x",
"mode": 0o755},
{"name": "./x/y",
"mode": 0o755},
{"name": "./x/y/f"},
]
self.assertTarFileContent(self.tempfile, content)
def testChangingRootDirectory(self):
with archive.TarFileWriter(self.tempfile, root_directory="root") as f:
f.add_file("d", tarfile.DIRTYPE)
f.add_file("d/f")
f.add_file("a", tarfile.DIRTYPE)
f.add_file("a/b", tarfile.DIRTYPE)
f.add_file("a/b", tarfile.DIRTYPE)
f.add_file("a/b/", tarfile.DIRTYPE)
f.add_file("a/b/c/f")
f.add_file("x/y/f")
f.add_file("x", tarfile.DIRTYPE)
content = [
{"name": "root",
"mode": 0o755},
{"name": "root/d",
"mode": 0o755},
{"name": "root/d/f"},
{"name": "root/a",
"mode": 0o755},
{"name": "root/a/b",
"mode": 0o755},
{"name": "root/a/b/c",
"mode": 0o755},
{"name": "root/a/b/c/f"},
{"name": "root/x",
"mode": 0o755},
{"name": "root/x/y",
"mode": 0o755},
{"name": "root/x/y/f"},
]
self.assertTarFileContent(self.tempfile, content)
if __name__ == "__main__":
unittest.main()

pkg/tests/build_test.sh (new executable file)
@@ -0,0 +1,234 @@
#!/bin/bash
# -*- coding: utf-8 -*-
# Copyright 2015 The Bazel Authors. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Unit tests for pkg_deb and pkg_tar
DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
source ${DIR}/testenv.sh || { echo "testenv.sh not found!" >&2; exit 1; }
function get_tar_listing() {
local input=$1
local test_data="${TEST_DATA_DIR}/${input}"
# We strip unused prefixes rather than dropping the "v" flag for tar, because we
# want to preserve symlink information.
tar tvf "${test_data}" | sed -e 's/^.*:00 //'
}
function get_tar_verbose_listing() {
local input=$1
local test_data="${TEST_DATA_DIR}/${input}"
TZ="UTC" tar tvf "${test_data}"
}
function get_tar_owner() {
local input=$1
local file=$2
local test_data="${TEST_DATA_DIR}/${input}"
tar tvf "${test_data}" | grep "00 $file\$" | cut -d " " -f 2
}
function get_numeric_tar_owner() {
local input=$1
local file=$2
local test_data="${TEST_DATA_DIR}/${input}"
tar --numeric-owner -tvf "${test_data}" | grep "00 $file\$" | cut -d " " -f 2
}
function get_tar_permission() {
local input=$1
local file=$2
local test_data="${TEST_DATA_DIR}/${input}"
tar tvf "${test_data}" | fgrep "00 $file" | cut -d " " -f 1
}
function get_deb_listing() {
local input=$1
local test_data="${TEST_DATA_DIR}/${input}"
dpkg-deb -c "${test_data}" | sed -e 's/^.*:00 //'
}
function get_deb_description() {
local input=$1
local test_data="${TEST_DATA_DIR}/${input}"
dpkg-deb -I "${test_data}"
}
function get_deb_permission() {
local input=$1
local file=$2
local test_data="${TEST_DATA_DIR}/${input}"
dpkg-deb -c "${test_data}" | fgrep "00 $file" | cut -d " " -f 1
}
function get_deb_ctl_listing() {
local input=$1
local test_data="${TEST_DATA_DIR}/${input}"
dpkg-deb --ctrl-tarfile "${test_data}" | tar tf - | sort
}
function get_deb_ctl_file() {
local input=$1
local ctl_file=$2
local test_data="${TEST_DATA_DIR}/${input}"
dpkg-deb --info "${test_data}" "${ctl_file}"
}
function get_deb_ctl_permission() {
local input=$1
local file=$2
local test_data="${TEST_DATA_DIR}/${input}"
dpkg-deb --ctrl-tarfile "${test_data}" | tar tvf - | egrep " $file\$" | cut -d " " -f 1
}
function dpkg_deb_supports_ctrl_tarfile() {
local input=$1
local test_data="${TEST_DATA_DIR}/${input}"
dpkg-deb --ctrl-tarfile "${test_data}" > /dev/null 2> /dev/null
}
function get_changes() {
local input=$1
cat "${TEST_DATA_DIR}/${input}"
}
function assert_content() {
local listing="./
./etc/
./etc/nsswitch.conf
./usr/
./usr/titi
./usr/bin/
./usr/bin/java -> /path/to/bin/java"
check_eq "$listing" "$(get_tar_listing $1)"
check_eq "-rwxr-xr-x" "$(get_tar_permission $1 ./usr/titi)"
check_eq "-rw-r--r--" "$(get_tar_permission $1 ./etc/nsswitch.conf)"
check_eq "24/42" "$(get_numeric_tar_owner $1 ./etc/)"
check_eq "24/42" "$(get_numeric_tar_owner $1 ./etc/nsswitch.conf)"
check_eq "42/24" "$(get_numeric_tar_owner $1 ./usr/)"
check_eq "42/24" "$(get_numeric_tar_owner $1 ./usr/titi)"
if [ -z "${2-}" ]; then
check_eq "tata/titi" "$(get_tar_owner $1 ./etc/)"
check_eq "tata/titi" "$(get_tar_owner $1 ./etc/nsswitch.conf)"
check_eq "titi/tata" "$(get_tar_owner $1 ./usr/)"
check_eq "titi/tata" "$(get_tar_owner $1 ./usr/titi)"
fi
}
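The helpers above recover owner and permission metadata by parsing `tar -tv` output. The same checks can be sketched with Python's `tarfile` module; the archive built here (member name `usr/titi`, owners 42/24 and titi/tata) is a synthetic stand-in for the test data, not the actual fixtures:

```python
import io
import tarfile

# Build a tiny tar in memory with explicit owner and permission metadata,
# then read the fields back directly instead of grepping `tar -tv` output.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    info = tarfile.TarInfo("usr/titi")
    info.mode = 0o755
    info.uid, info.gid = 42, 24
    info.uname, info.gname = "titi", "tata"
    tar.addfile(info, io.BytesIO(b""))

buf.seek(0)
with tarfile.open(fileobj=buf, mode="r") as tar:
    member = tar.getmember("usr/titi")
    numeric_owner = "%d/%d" % (member.uid, member.gid)    # like get_numeric_tar_owner
    named_owner = "%s/%s" % (member.uname, member.gname)  # like get_tar_owner
```

Reading the `TarInfo` fields avoids the locale- and version-dependent column layout of `tar -tv` that the `cut`-based helpers depend on.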
function test_tar() {
local listing="./
./etc/
./etc/nsswitch.conf
./usr/
./usr/titi
./usr/bin/
./usr/bin/java -> /path/to/bin/java"
for i in "" ".gz" ".bz2" ".xz"; do
assert_content "test-tar-${i:1}.tar$i"
# Test merging tar files
    # We pass a second argument to skip checking user and group
    # names, because tar merging asks for numeric owners.
assert_content "test-tar-inclusion-${i:1}.tar" "true"
done;
check_eq "./
./nsswitch.conf" "$(get_tar_listing test-tar-strip_prefix-empty.tar)"
check_eq "./
./nsswitch.conf" "$(get_tar_listing test-tar-strip_prefix-none.tar)"
check_eq "./
./nsswitch.conf" "$(get_tar_listing test-tar-strip_prefix-etc.tar)"
check_eq "./
./etc/
./etc/nsswitch.conf" "$(get_tar_listing test-tar-strip_prefix-dot.tar)"
check_eq "./
./not-etc/
./not-etc/mapped-filename.conf" "$(get_tar_listing test-tar-files_dict.tar)"
check_eq "drwxr-xr-x 0/0 0 2000-01-01 00:00 ./
-rwxrwxrwx 0/0 0 2000-01-01 00:00 ./a
-rwxrwxrwx 0/0 0 2000-01-01 00:00 ./b" \
"$(get_tar_verbose_listing test-tar-empty_files.tar)"
check_eq "drwxr-xr-x 0/0 0 2000-01-01 00:00 ./
drwxrwxrwx 0/0 0 2000-01-01 00:00 ./tmp/
drwxrwxrwx 0/0 0 2000-01-01 00:00 ./pmt/" \
"$(get_tar_verbose_listing test-tar-empty_dirs.tar)"
check_eq \
"drwxr-xr-x 0/0 0 1999-12-31 23:59 ./
-r-xr-xr-x 0/0 2 1999-12-31 23:59 ./nsswitch.conf" \
"$(get_tar_verbose_listing test-tar-mtime.tar)"
}
function test_deb() {
  if ! which dpkg-deb >/dev/null; then
echo "Unable to run test for debian, no dpkg-deb!" >&2
return 0
fi
local listing="./
./etc/
./etc/nsswitch.conf
./usr/
./usr/titi
./usr/bin/
./usr/bin/java -> /path/to/bin/java"
check_eq "$listing" "$(get_deb_listing test-deb.deb)"
check_eq "-rwxr-xr-x" "$(get_deb_permission test-deb.deb ./usr/titi)"
check_eq "-rw-r--r--" "$(get_deb_permission test-deb.deb ./etc/nsswitch.conf)"
get_deb_description test-deb.deb >$TEST_log
expect_log "Description: toto ®, Й, ק ,م, ๗, あ, 叶, 葉, 말, ü and é"
expect_log "Package: titi"
expect_log "soméone@somewhere.com"
expect_log "Depends: dep1, dep2"
expect_log "Built-Using: some_test_data"
get_changes titi_test_all.changes >$TEST_log
expect_log "Urgency: low"
expect_log "Distribution: trusty"
get_deb_ctl_file test-deb.deb templates >$TEST_log
expect_log "Template: titi/test"
expect_log "Type: string"
get_deb_ctl_file test-deb.deb config >$TEST_log
expect_log "# test config file"
if ! dpkg_deb_supports_ctrl_tarfile test-deb.deb ; then
    echo "Unable to test deb control file listing; dpkg-deb is too old!" >&2
return 0
fi
local ctrl_listing="conffiles
config
control
templates"
# TODO: The config and templates come out with a+x permissions. Because I am
# currently seeing the same behavior in the Bazel sources, I am going to look
# at root causes later. I am not sure if this is WAI or not.
check_eq "$ctrl_listing" "$(get_deb_ctl_listing test-deb.deb)"
check_eq "-rw-r--r--" "$(get_deb_ctl_permission test-deb.deb conffiles)"
check_eq "-rwxr-xr-x" "$(get_deb_ctl_permission test-deb.deb config)"
check_eq "-rw-r--r--" "$(get_deb_ctl_permission test-deb.deb control)"
check_eq "-rwxr-xr-x" "$(get_deb_ctl_permission test-deb.deb templates)"
local conffiles="/etc/nsswitch.conf
/etc/other"
check_eq "$conffiles" "$(get_deb_ctl_file test-deb.deb conffiles)"
}
run_suite "build_test"

179
pkg/tests/make_rpm_test.py Normal file
View File

@ -0,0 +1,179 @@
# Copyright 2017 The Bazel Authors. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for make_rpm."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import contextlib
import os
import unittest
import make_rpm
@contextlib.contextmanager
def PrependPath(dirs):
with ReplacePath(dirs + [os.environ['PATH']]):
yield
@contextlib.contextmanager
def ReplacePath(dirs):
original_path = os.environ['PATH']
try:
os.environ['PATH'] = os.pathsep.join(dirs)
yield
finally:
os.environ['PATH'] = original_path
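The two context managers above temporarily rewrite `PATH` so the tests can plant a dummy `rpmbuild` on it. A minimal usage sketch of the same save-and-restore pattern (the directory `/hypothetical/bin` is made up for illustration):

```python
import contextlib
import os

@contextlib.contextmanager
def ReplacePath(dirs):
    # Swap PATH for the duration of the block; the finally clause
    # guarantees the original value is restored even on error.
    original_path = os.environ.get('PATH', '')
    try:
        os.environ['PATH'] = os.pathsep.join(dirs)
        yield
    finally:
        os.environ['PATH'] = original_path

before = os.environ.get('PATH', '')
with ReplacePath(['/hypothetical/bin']):
    inside = os.environ['PATH']
after = os.environ.get('PATH', '')
```

Because the restore happens in `finally`, a test that raises inside the block cannot leak its doctored `PATH` into later tests.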
def WriteFile(filename, *contents):
with open(filename, 'w') as text_file:
text_file.write('\n'.join(contents))
def DirExists(dirname):
return os.path.exists(dirname) and os.path.isdir(dirname)
def FileExists(filename):
return os.path.exists(filename) and not os.path.isdir(filename)
def FileContents(filename):
with open(filename, 'r') as text_file:
return [s.strip() for s in text_file.readlines()]
class MakeRpmTest(unittest.TestCase):
# Python 2 alias
if not hasattr(unittest.TestCase, 'assertCountEqual'):
def assertCountEqual(self, a, b):
# pylint: disable=g-deprecated-assert
return self.assertItemsEqual(a, b)
def testFindOutputFile(self):
log = """
Lots of data.
Wrote: /path/to/file/here.rpm
More data present.
"""
result = make_rpm.FindOutputFile(log)
self.assertEqual('/path/to/file/here.rpm', result)
def testFindOutputFile_missing(self):
log = """
Lots of data.
More data present.
"""
result = make_rpm.FindOutputFile(log)
self.assertEqual(None, result)
def testCopyAndRewrite(self):
with make_rpm.Tempdir():
WriteFile('test.txt', 'Some: data1', 'Other: data2', 'More: data3')
make_rpm.CopyAndRewrite('test.txt', 'out.txt', {
'Some:': 'data1a',
'More:': 'data3a',
})
self.assertTrue(FileExists('out.txt'))
self.assertCountEqual(['Some: data1a', 'Other: data2', 'More: data3a'],
FileContents('out.txt'))
def testFindRpmbuild_present(self):
with make_rpm.Tempdir() as outer:
dummy = os.sep.join([outer, 'rpmbuild'])
WriteFile(dummy, 'dummy rpmbuild')
os.chmod(dummy, 0o777)
with PrependPath([outer]):
path = make_rpm.FindRpmbuild('')
self.assertEqual(dummy, path)
def testFindRpmbuild_missing(self):
with make_rpm.Tempdir() as outer:
with ReplacePath([outer]):
with self.assertRaises(make_rpm.NoRpmbuildFoundError) as context:
make_rpm.FindRpmbuild('')
self.assertIsNotNone(context)
def testSetupWorkdir(self):
with make_rpm.Tempdir() as outer:
dummy = os.sep.join([outer, 'rpmbuild'])
WriteFile(dummy, 'dummy rpmbuild')
os.chmod(dummy, 0o777)
with PrependPath([outer]):
# Create the builder and exercise it.
builder = make_rpm.RpmBuilder('test', '1.0', '0', 'x86', False, None)
# Create spec_file, test files.
WriteFile('test.spec', 'Name: test', 'Version: 0.1',
'Summary: test data')
WriteFile('file1.txt', 'Hello')
WriteFile('file2.txt', 'Goodbye')
builder.AddFiles(['file1.txt', 'file2.txt'])
with make_rpm.Tempdir():
# Call RpmBuilder.
builder.SetupWorkdir('test.spec', outer)
# Make sure files exist.
self.assertTrue(DirExists('SOURCES'))
self.assertTrue(DirExists('BUILD'))
self.assertTrue(DirExists('TMP'))
self.assertTrue(FileExists('test.spec'))
self.assertCountEqual(
['Name: test', 'Version: 1.0', 'Summary: test data'],
FileContents('test.spec'))
self.assertTrue(FileExists('BUILD/file1.txt'))
self.assertCountEqual(['Hello'], FileContents('BUILD/file1.txt'))
self.assertTrue(FileExists('BUILD/file2.txt'))
self.assertCountEqual(['Goodbye'], FileContents('BUILD/file2.txt'))
def testBuild(self):
with make_rpm.Tempdir() as outer:
dummy = os.sep.join([outer, 'rpmbuild'])
WriteFile(
dummy,
'#!/bin/sh',
'mkdir -p RPMS',
'touch RPMS/test.rpm',
'echo "Wrote: $PWD/RPMS/test.rpm"',
)
os.chmod(dummy, 0o777)
with PrependPath([outer]):
# Create the builder and exercise it.
builder = make_rpm.RpmBuilder('test', '1.0', '0', 'x86', False, None)
# Create spec_file, test files.
WriteFile('test.spec', 'Name: test', 'Version: 0.1',
'Summary: test data')
# Call RpmBuilder.
builder.Build('test.spec', 'test.rpm')
# Make sure files exist.
self.assertTrue(FileExists('test.rpm'))
if __name__ == '__main__':
unittest.main()

84
pkg/tests/path_test.py Normal file
View File

@ -0,0 +1,84 @@
# Copyright 2016 The Bazel Authors. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Testing for helper functions."""
import imp
import unittest
pkg_bzl = imp.load_source('pkg_bzl', 'path.bzl')
class File(object):
"""Mock Skylark File class for testing."""
def __init__(self, short_path):
self.short_path = short_path
class ShortPathDirnameTest(unittest.TestCase):
"""Testing for _short_path_dirname."""
def testShortPathDirname(self):
path = pkg_bzl._short_path_dirname(File('foo/bar/baz'))
self.assertEqual('foo/bar', path)
def testTopLevel(self):
path = pkg_bzl._short_path_dirname(File('baz'))
self.assertEqual('', path)
class DestPathTest(unittest.TestCase):
"""Testing for _dest_path."""
def testDestPath(self):
path = pkg_bzl.dest_path(File('foo/bar/baz'), 'foo')
self.assertEqual('/bar/baz', path)
def testNoMatch(self):
path = pkg_bzl.dest_path(File('foo/bar/baz'), 'qux')
self.assertEqual('foo/bar/baz', path)
def testNoStrip(self):
path = pkg_bzl.dest_path(File('foo/bar/baz'), None)
self.assertEqual('/baz', path)
def testTopLevel(self):
path = pkg_bzl.dest_path(File('baz'), None)
self.assertEqual('baz', path)
class ComputeDataPathTest(unittest.TestCase):
"""Testing for _data_path_out."""
def testComputeDataPath(self):
path = pkg_bzl.compute_data_path(File('foo/bar/baz.tar'), 'a/b/c')
self.assertEqual('foo/bar/a/b/c', path)
def testAbsolute(self):
path = pkg_bzl.compute_data_path(File('foo/bar/baz.tar'), '/a/b/c')
self.assertEqual('a/b/c', path)
def testRelative(self):
path = pkg_bzl.compute_data_path(File('foo/bar/baz.tar'), './a/b/c')
self.assertEqual('foo/bar/a/b/c', path)
def testEmpty(self):
path = pkg_bzl.compute_data_path(File('foo/bar/baz.tar'), './')
self.assertEqual('foo/bar', path)
path = pkg_bzl.compute_data_path(File('foo/bar/baz.tar'), './.')
self.assertEqual('foo/bar', path)
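The `dest_path` behavior exercised by these tests can be summarized in plain Python. This is a re-implementation for illustration only, not the actual `path.bzl` code; the semantics are inferred from the assertions above:

```python
def short_path_dirname(path):
    # Directory portion of a short path; '' for top-level files.
    sep = path.rfind('/')
    return path[:sep] if sep >= 0 else ''

def dest_path(short_path, strip_prefix):
    # strip_prefix=None strips the file's own directory, keeping a
    # leading '/' separator; a prefix that does not match leaves the
    # path untouched.
    if strip_prefix is None:
        return short_path[len(short_path_dirname(short_path)):]
    if short_path.startswith(strip_prefix):
        return short_path[len(strip_prefix):]
    return short_path
```

For example, `dest_path('foo/bar/baz', 'foo')` yields `'/bar/baz'`, matching `testDestPath` above.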
if __name__ == '__main__':
unittest.main()

3
pkg/tests/testdata/a.ar vendored Normal file
View File

@ -0,0 +1,3 @@
!<arch>
a/ 1439231934 1000 1000 100664 1 `
a

5
pkg/tests/testdata/a_ab.ar vendored Normal file
View File

@ -0,0 +1,5 @@
!<arch>
a/ 1439231934 1000 1000 100664 1 `
a
ab/ 1439231936 1000 1000 100664 2 `
ab

5
pkg/tests/testdata/a_b.ar vendored Normal file
View File

@ -0,0 +1,5 @@
!<arch>
a/ 1439231934 1000 1000 100664 1 `
a
b/ 1439231939 1000 1000 100664 1 `
b

7
pkg/tests/testdata/a_b_ab.ar vendored Normal file
View File

@ -0,0 +1,7 @@
!<arch>
a/ 1439231934 1000 1000 100664 1 `
a
b/ 1439231939 1000 1000 100664 1 `
b
ab/ 1439231936 1000 1000 100664 2 `
ab

3
pkg/tests/testdata/ab.ar vendored Normal file
View File

@ -0,0 +1,3 @@
!<arch>
ab/ 1439231936 1000 1000 100664 2 `
ab

3
pkg/tests/testdata/b.ar vendored Normal file
View File

@ -0,0 +1,3 @@
!<arch>
b/ 1439231939 1000 1000 100664 1 `
b

1
pkg/tests/testdata/config vendored Normal file
View File

@ -0,0 +1 @@
# test config file

1
pkg/tests/testdata/empty.ar vendored Normal file
View File

@ -0,0 +1 @@
!<arch>

BIN
pkg/tests/testdata/tar_test.tar vendored Normal file

Binary file not shown.

BIN
pkg/tests/testdata/tar_test.tar.bz2 vendored Normal file

Binary file not shown.

BIN
pkg/tests/testdata/tar_test.tar.gz vendored Normal file

Binary file not shown.

BIN
pkg/tests/testdata/tar_test.tar.xz vendored Normal file

Binary file not shown.

6
pkg/tests/testdata/templates vendored Normal file
View File

@ -0,0 +1,6 @@
Template: titi/test
Type: string
Default:
Description: test question
test question to check that templates are included into
debian directory

26
pkg/tests/testenv.sh Executable file
View File

@ -0,0 +1,26 @@
#!/bin/bash
# Copyright 2015 The Bazel Authors. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Integration test for pkg, test environment.
[ -z "$TEST_SRCDIR" ] && { echo "TEST_SRCDIR not set!" >&2; exit 1; }
[ -z "$TEST_WORKSPACE" ] && { echo "TEST_WORKSPACE not set!" >&2; exit 1; }
# Load the unit-testing framework
source "${TEST_SRCDIR}/${TEST_WORKSPACE}/third_party/test/shell/unittest.bash" || \
{ echo "Failed to source unittest.bash" >&2; exit 1; }
readonly TEST_DATA_DIR="${TEST_SRCDIR}/${TEST_WORKSPACE}/tests"

9
pkg/third_party/test/shell/BUILD vendored Normal file
View File

@ -0,0 +1,9 @@
sh_library(
name = "bashunit",
srcs = ["unittest.bash"],
#data = [
# "testenv.sh",
#],
visibility = ["//visibility:public"],
)

870
pkg/third_party/test/shell/unittest.bash vendored Normal file
View File

@ -0,0 +1,870 @@
#!/bin/bash
#
# Copyright 2015 The Bazel Authors. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Common utility file for Bazel shell tests
#
# unittest.bash: a unit test framework in Bash.
#
# A typical test suite looks like so:
#
# ------------------------------------------------------------------------
# #!/bin/bash
#
# source path/to/unittest.bash || exit 1
#
# # Test that foo works.
# function test_foo() {
# foo >$TEST_log || fail "foo failed";
# expect_log "blah" "Expected to see 'blah' in output of 'foo'."
# }
#
# # Test that bar works.
# function test_bar() {
# bar 2>$TEST_log || fail "bar failed";
# expect_not_log "ERROR" "Unexpected error from 'bar'."
# ...
# assert_equals $x $y
# }
#
# run_suite "Test suite for blah"
# ------------------------------------------------------------------------
#
# Each test function is considered to pass iff fail() is not called
# while it is active. fail() may be called directly, or indirectly
# via other assertions such as expect_log(). run_suite must be called
# at the very end.
#
# A test suite may redefine functions "set_up" and/or "tear_down";
# these functions are executed before and after each test function,
# respectively. Similarly, "cleanup" and "timeout" may be redefined,
# and these function are called upon exit (of any kind) or a timeout.
#
# The user can pass --test_arg to blaze test to select specific tests
# to run. Specifying --test_arg multiple times allows selecting several
# tests to be run in the given order. Additionally the user may define
# TESTS=(test_foo test_bar ...) to specify a subset of test functions to
# execute, for example, a working set during debugging. By default, all
# functions called test_* will be executed.
#
# This file provides utilities for assertions over the output of a
# command. The output of the command under test is directed to the
# file $TEST_log, and then the expect_log* assertions can be used to
# test for the presence of certain regular expressions in that file.
#
# The test framework is responsible for restoring the original working
# directory before each test.
#
# The order in which test functions are run is not defined, so it is
# important that tests clean up after themselves.
#
# Each test will be run in a new subshell.
#
# Functions named __* are not intended for use by clients.
#
# This framework implements the "test sharding protocol".
#
[ -n "$BASH_VERSION" ] ||
{ echo "unittest.bash only works with bash!" >&2; exit 1; }
DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
export BAZEL_SHELL_TEST=1
#### Configuration variables (may be overridden by testenv.sh or the suite):
# This function may be called by testenv.sh or a test suite to enable errexit
# in a way that enables us to print pretty stack traces when something fails.
function enable_errexit() {
set -o errtrace
set -eu
trap __test_terminated_err ERR
}
function disable_errexit() {
set +o errtrace
set +eu
trap - ERR
}
#### Set up the test environment, branched from the old shell/testenv.sh
# Enable errexit with pretty stack traces.
enable_errexit
cat_jvm_log () {
  local log_content="$1"
  if [[ "$log_content" =~ "(error code:".*", error message: '".*"', log file: '"(.*)"')" ]]; then
echo >&2
echo "Content of ${BASH_REMATCH[1]}:" >&2
cat "${BASH_REMATCH[1]}" >&2
fi
}
# Print message in "$1" then exit with status "$2"
die () {
# second argument is optional, defaulting to 1
local status_code=${2:-1}
# Stop capturing stdout/stderr, and dump captured output
if [ "$CAPTURED_STD_ERR" -ne 0 -o "$CAPTURED_STD_OUT" -ne 0 ]; then
restore_outputs
if [ "$CAPTURED_STD_OUT" -ne 0 ]; then
cat "${TEST_TMPDIR}/captured.out"
CAPTURED_STD_OUT=0
fi
if [ "$CAPTURED_STD_ERR" -ne 0 ]; then
cat "${TEST_TMPDIR}/captured.err" 1>&2
cat_jvm_log "$(cat "${TEST_TMPDIR}/captured.err")"
CAPTURED_STD_ERR=0
fi
fi
if [ -n "${1-}" ] ; then
echo "$1" 1>&2
fi
if [ -n "${BASH-}" ]; then
local caller_n=0
while [ $caller_n -lt 4 ] && caller_out=$(caller $caller_n 2>/dev/null); do
test $caller_n -eq 0 && echo "CALLER stack (max 4):"
echo " $caller_out"
let caller_n=caller_n+1
done 1>&2
fi
if [ x"$status_code" != x -a x"$status_code" != x"0" ]; then
exit "$status_code"
else
exit 1
fi
}
# Print message in "$1" then record that a non-fatal error occurred in ERROR_COUNT
ERROR_COUNT="${ERROR_COUNT:-0}"
error () {
if [ -n "$1" ] ; then
echo "$1" 1>&2
fi
ERROR_COUNT=$(($ERROR_COUNT + 1))
}
# Die if "$1" != "$2", print $3 as death reason
check_eq () {
[ "$1" = "$2" ] || die "Check failed: '$1' == '$2' ${3:+ ($3)}"
}
# Die if "$1" == "$2", print $3 as death reason
check_ne () {
[ "$1" != "$2" ] || die "Check failed: '$1' != '$2' ${3:+ ($3)}"
}
# The structure of the following if statements is such that if '[' fails
# (e.g., a non-number was passed in) then the check will fail.
# Die if "$1" > "$2", print $3 as death reason
check_le () {
  [ "$1" -le "$2" ] || die "Check failed: '$1' <= '$2' ${3:+ ($3)}"
}
# Die if "$1" >= "$2", print $3 as death reason
check_lt () {
[ "$1" -lt "$2" ] || die "Check failed: '$1' < '$2' ${3:+ ($3)}"
}
# Die if "$1" < "$2", print $3 as death reason
check_ge () {
[ "$1" -ge "$2" ] || die "Check failed: '$1' >= '$2' ${3:+ ($3)}"
}
# Die if "$1" <= "$2", print $3 as death reason
check_gt () {
[ "$1" -gt "$2" ] || die "Check failed: '$1' > '$2' ${3:+ ($3)}"
}
# Die if $2 !~ $1; print $3 as death reason
check_match ()
{
expr match "$2" "$1" >/dev/null || \
die "Check failed: '$2' does not match regex '$1' ${3:+ ($3)}"
}
# Run command "$1" at exit. Like "trap" but multiple atexits don't
# overwrite each other. Will break if someone does call trap
# directly. So, don't do that.
ATEXIT="${ATEXIT-}"
atexit () {
if [ -z "$ATEXIT" ]; then
ATEXIT="$1"
else
ATEXIT="$1 ; $ATEXIT"
fi
trap "$ATEXIT" EXIT
}
## TEST_TMPDIR
if [ -z "${TEST_TMPDIR:-}" ]; then
export TEST_TMPDIR="$(mktemp -d ${TMPDIR:-/tmp}/bazel-test.XXXXXXXX)"
fi
if [ ! -e "${TEST_TMPDIR}" ]; then
mkdir -p -m 0700 "${TEST_TMPDIR}"
# Clean TEST_TMPDIR on exit
atexit "rm -fr ${TEST_TMPDIR}"
fi
# Functions to compare the actual output of a test to the expected
# (golden) output.
#
# Usage:
# capture_test_stdout
# ... do something ...
# diff_test_stdout "$TEST_SRCDIR/path/to/golden.out"
# Redirect a file descriptor to a file.
CAPTURED_STD_OUT="${CAPTURED_STD_OUT:-0}"
CAPTURED_STD_ERR="${CAPTURED_STD_ERR:-0}"
capture_test_stdout () {
exec 3>&1 # Save stdout as fd 3
exec 4>"${TEST_TMPDIR}/captured.out"
exec 1>&4
CAPTURED_STD_OUT=1
}
capture_test_stderr () {
exec 6>&2 # Save stderr as fd 6
exec 7>"${TEST_TMPDIR}/captured.err"
exec 2>&7
CAPTURED_STD_ERR=1
}
# Force XML_OUTPUT_FILE to an existing path
if [ -z "${XML_OUTPUT_FILE:-}" ]; then
  XML_OUTPUT_FILE=${TEST_TMPDIR}/output.xml
fi
#### Global variables:
TEST_name="" # The name of the current test.
TEST_log=$TEST_TMPDIR/log # The log file over which the
# expect_log* assertions work. Must
# be absolute to be robust against
# tests invoking 'cd'!
TEST_passed="true" # The result of the current test;
# failed assertions cause this to
# become false.
# These variables may be overridden by the test suite:
TESTS=() # A subset or "working set" of test
# functions that should be run. By
# default, all tests called test_* are
# run.
if [ $# -gt 0 ]; then
# Legacy behavior is to ignore missing regexp, but with errexit
# the following line fails without || true.
# TODO(dmarting): maybe we should revisit the way of selecting
# test with that framework (use Bazel's environment variable instead).
TESTS=($(for i in $@; do echo $i; done | grep ^test_ || true))
if (( ${#TESTS[@]} == 0 )); then
echo "WARNING: Arguments do not specify tests!" >&2
fi
fi
# TESTBRIDGE_TEST_ONLY contains the value of --test_filter, if any. We want to
# preferentially use that instead of $@ to determine which tests to run.
if [[ ${TESTBRIDGE_TEST_ONLY:-} != "" ]]; then
# Split TESTBRIDGE_TEST_ONLY on comma and put the results into an array.
IFS=',' read -r -a TESTS <<< "$TESTBRIDGE_TEST_ONLY"
fi
TEST_verbose="true" # Whether or not to be verbose. A
# command; "true" or "false" are
# acceptable. The default is: true.
TEST_script="$0" # Full path to test script
# Check if the script path is absolute, if not prefix the PWD.
if [[ ! "$TEST_script" = /* ]]; then
TEST_script="$(pwd)/$0"
fi
#### Internal functions
function __show_log() {
echo "-- Test log: -----------------------------------------------------------"
[[ -e $TEST_log ]] && cat $TEST_log || echo "(Log file did not exist.)"
echo "------------------------------------------------------------------------"
}
# Usage: __pad <title> <pad-char>
# Print $title padded to 80 columns with $pad_char.
function __pad() {
local title=$1
local pad=$2
{
echo -n "$pad$pad $title "
printf "%80s" " " | tr ' ' "$pad"
} | head -c 80
echo
}
#### Exported functions
# Usage: init_test ...
# Deprecated. Has no effect.
function init_test() {
:
}
# Usage: set_up
# Called before every test function. May be redefined by the test suite.
function set_up() {
:
}
# Usage: tear_down
# Called after every test function. May be redefined by the test suite.
function tear_down() {
:
}
# Usage: cleanup
# Called upon eventual exit of the test suite. May be redefined by
# the test suite.
function cleanup() {
:
}
# Usage: timeout
# Called upon early exit from a test due to timeout.
function timeout() {
:
}
# Usage: testenv_set_up
# Called prior to set_up. For use by testenv.sh.
function testenv_set_up() {
:
}
# Usage: testenv_tear_down
# Called after tear_down. For use by testenv.sh.
function testenv_tear_down() {
:
}
# Usage: fail <message> [<message> ...]
# Print failure message with context information, and mark the test as
# a failure. The context includes a stacktrace including the longest sequence
# of calls outside this module. (We exclude the top and bottom portions of
# the stack because they just add noise.) Also prints the contents of
# $TEST_log.
function fail() {
__show_log >&2
echo "$TEST_name FAILED:" "$@" "." >&2
echo "$@" >$TEST_TMPDIR/__fail
TEST_passed="false"
__show_stack
# Cleanup as we are leaving the subshell now
tear_down
exit 1
}
# Usage: warn <message>
# Print a test warning with context information.
# The context includes a stacktrace including the longest sequence
# of calls outside this module. (We exclude the top and bottom portions of
# the stack because they just add noise.)
function warn() {
__show_log >&2
echo "$TEST_name WARNING: $1." >&2
__show_stack
if [ -n "${TEST_WARNINGS_OUTPUT_FILE:-}" ]; then
echo "$TEST_name WARNING: $1." >> "$TEST_WARNINGS_OUTPUT_FILE"
fi
}
# Usage: show_stack
# Prints the portion of the stack that does not belong to this module,
# i.e. the user's code that called a failing assertion. Stack may not
# be available if Bash is reading commands from stdin; an error is
# printed in that case.
__show_stack() {
local i=0
local trace_found=0
# Skip over active calls within this module:
while (( i < ${#FUNCNAME[@]} )) && [[ ${BASH_SOURCE[i]:-} == ${BASH_SOURCE[0]} ]]; do
(( ++i ))
done
# Show all calls until the next one within this module (typically run_suite):
while (( i < ${#FUNCNAME[@]} )) && [[ ${BASH_SOURCE[i]:-} != ${BASH_SOURCE[0]} ]]; do
# Read online docs for BASH_LINENO to understand the strange offset.
    # Entries in the BASH_SOURCE stack can apparently be undefined when exiting from a subshell.
echo "${BASH_SOURCE[i]:-"Unknown"}:${BASH_LINENO[i - 1]:-"Unknown"}: in call to ${FUNCNAME[i]:-"Unknown"}" >&2
(( ++i ))
trace_found=1
done
[ $trace_found = 1 ] || echo "[Stack trace not available]" >&2
}
# Usage: expect_log <regexp> [error-message]
# Asserts that $TEST_log matches regexp. Prints the contents of
# $TEST_log and the specified (optional) error message otherwise, and
# returns non-zero.
function expect_log() {
local pattern=$1
local message=${2:-Expected regexp "$pattern" not found}
grep -sq -- "$pattern" $TEST_log && return 0
fail "$message"
return 1
}
# Usage: expect_log_warn <regexp> [error-message]
# Warns if $TEST_log does not match regexp. Prints the contents of
# $TEST_log and the specified (optional) error message on mismatch.
function expect_log_warn() {
local pattern=$1
local message=${2:-Expected regexp "$pattern" not found}
grep -sq -- "$pattern" $TEST_log && return 0
warn "$message"
return 1
}
# Usage: expect_log_once <regexp> [error-message]
# Asserts that $TEST_log contains one line matching <regexp>.
# Prints the contents of $TEST_log and the specified (optional)
# error message otherwise, and returns non-zero.
function expect_log_once() {
local pattern=$1
local message=${2:-Expected regexp "$pattern" not found exactly once}
expect_log_n "$pattern" 1 "$message"
}
# Usage: expect_log_n <regexp> <count> [error-message]
# Asserts that $TEST_log contains <count> lines matching <regexp>.
# Prints the contents of $TEST_log and the specified (optional)
# error message otherwise, and returns non-zero.
function expect_log_n() {
local pattern=$1
local expectednum=${2:-1}
local message=${3:-Expected regexp "$pattern" not found exactly $expectednum times}
local count=$(grep -sc -- "$pattern" $TEST_log)
[[ $count = $expectednum ]] && return 0
fail "$message"
return 1
}
# Usage: expect_not_log <regexp> [error-message]
# Asserts that $TEST_log does not match regexp. Prints the contents
# of $TEST_log and the specified (optional) error message otherwise, and
# returns non-zero.
function expect_not_log() {
local pattern=$1
local message=${2:-Unexpected regexp "$pattern" found}
grep -sq -- "$pattern" $TEST_log || return 0
fail "$message"
return 1
}
# Usage: expect_log_with_timeout <regexp> <timeout> [error-message]
# Waits for the given regexp in the $TEST_log for up to timeout seconds.
# Prints the contents of $TEST_log and the specified (optional)
# error message otherwise, and returns non-zero.
function expect_log_with_timeout() {
local pattern=$1
local timeout=$2
local message=${3:-Regexp "$pattern" not found in "$timeout" seconds}
local count=0
while [ $count -lt $timeout ]; do
grep -sq -- "$pattern" $TEST_log && return 0
let count=count+1
sleep 1
done
grep -sq -- "$pattern" $TEST_log && return 0
fail "$message"
return 1
}
# Usage: expect_cmd_with_timeout <expected> <cmd> [timeout]
# Repeats the command once a second for up to timeout seconds (10s by default),
# until the output matches the expected value. Fails and returns 1 if
# the command does not return the expected value in the end.
function expect_cmd_with_timeout() {
local expected="$1"
local cmd="$2"
local timeout=${3:-10}
local count=0
while [ $count -lt $timeout ]; do
local actual="$($cmd)"
[ "$expected" = "$actual" ] && return 0
let count=count+1
sleep 1
done
[ "$expected" = "$actual" ] && return 0
fail "Expected '$expected' within ${timeout}s, was '$actual'"
return 1
}
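The poll-then-recheck loop in `expect_cmd_with_timeout` is a generic pattern; a Python sketch under the same contract (the `flaky` command and the injectable `sleep` parameter are contrivances for illustration, not part of the framework):

```python
import time

def expect_cmd_with_timeout(expected, cmd, timeout=10, sleep=time.sleep):
    # Poll cmd() once per tick; after the last tick, check one final
    # time, mirroring the shell helper's trailing re-check.
    for _ in range(timeout):
        if cmd() == expected:
            return True
        sleep(1)
    return cmd() == expected

# Usage: a command that only succeeds on its third call.
calls = {'n': 0}
def flaky():
    calls['n'] += 1
    return 'ready' if calls['n'] >= 3 else 'booting'

ok = expect_cmd_with_timeout('ready', flaky, timeout=5, sleep=lambda s: None)
never = expect_cmd_with_timeout('up', lambda: 'down', timeout=2, sleep=lambda s: None)
```

Injecting `sleep` keeps the sketch testable without real one-second waits, something the shell version achieves only by keeping `timeout` small.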
# Usage: assert_one_of <expected_list>... <actual>
# Asserts that actual is one of the items in expected_list
# Example: assert_one_of "foo" "bar" "baz" actualval
function assert_one_of() {
local args=("$@")
local last_arg_index=$((${#args[@]} - 1))
local actual=${args[last_arg_index]}
unset args[last_arg_index]
for expected_item in "${args[@]}"; do
[ "$expected_item" = "$actual" ] && return 0
done;
fail "Expected one of '${args[*]}', was '$actual'"
return 1
}
# Usage: assert_not_one_of <expected_list>... <actual>
# Asserts that actual is not one of the items in expected_list
# Example: assert_not_one_of "foo" "bar" "baz" actualval
function assert_not_one_of() {
local args=("$@")
local last_arg_index=$((${#args[@]} - 1))
local actual=${args[last_arg_index]}
unset args[last_arg_index]
for expected_item in "${args[@]}"; do
if [ "$expected_item" = "$actual" ]; then
fail "'${args[*]}' contains '$actual'"
return 1
fi
done;
return 0
}
# Usage: assert_equals <expected> <actual>
# Asserts [ expected = actual ].
function assert_equals() {
local expected=$1 actual=$2
[ "$expected" = "$actual" ] && return 0
fail "Expected '$expected', was '$actual'"
return 1
}
# Usage: assert_not_equals <unexpected> <actual>
# Asserts [ unexpected != actual ].
function assert_not_equals() {
local unexpected=$1 actual=$2
[ "$unexpected" != "$actual" ] && return 0;
fail "Expected not '$unexpected', was '$actual'"
return 1
}
# Usage: assert_contains <regexp> <file> [error-message]
# Asserts that file matches regexp. Prints the contents of
# file and the specified (optional) error message otherwise, and
# returns non-zero.
function assert_contains() {
local pattern=$1
local file=$2
local message=${3:-Expected regexp "$pattern" not found in "$file"}
grep -sq -- "$pattern" "$file" && return 0
cat "$file" >&2
fail "$message"
return 1
}
# Usage: assert_not_contains <regexp> <file> [error-message]
# Asserts that file does not match regexp. Prints the contents of
# file and the specified (optional) error message otherwise, and
# returns non-zero.
function assert_not_contains() {
local pattern=$1
local file=$2
local message=${3:-Expected regexp "$pattern" found in "$file"}
if [[ -f "$file" ]]; then
grep -sq -- "$pattern" "$file" || return 0
else
fail "$file is not a file: $message"
return 1
fi
cat "$file" >&2
fail "$message"
return 1
}
# Updates the global variable TESTS if
# sharding is enabled, i.e. ($TEST_TOTAL_SHARDS > 0).
function __update_shards() {
[ -z "${TEST_TOTAL_SHARDS-}" ] && return 0
[ "$TEST_TOTAL_SHARDS" -gt 0 ] ||
{ echo "Invalid total shards $TEST_TOTAL_SHARDS" >&2; exit 1; }
[ "$TEST_SHARD_INDEX" -lt 0 -o "$TEST_SHARD_INDEX" -ge "$TEST_TOTAL_SHARDS" ] &&
    { echo "Invalid shard $TEST_SHARD_INDEX" >&2; exit 1; }
TESTS=$(for test in "${TESTS[@]}"; do echo "$test"; done |
awk "NR % $TEST_TOTAL_SHARDS == $TEST_SHARD_INDEX")
[ -z "${TEST_SHARD_STATUS_FILE-}" ] || touch "$TEST_SHARD_STATUS_FILE"
}
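The awk filter above implements the test sharding protocol: with awk's 1-based `NR`, the k-th test runs on the shard where `k % TEST_TOTAL_SHARDS == TEST_SHARD_INDEX`. The same selection in Python (test names here are hypothetical):

```python
def select_shard(tests, shard_index, total_shards):
    # awk's NR is 1-based, so the k-th test (1-based) runs on the shard
    # where k % total_shards == shard_index.
    return [t for k, t in enumerate(tests, start=1)
            if k % total_shards == shard_index]

all_tests = ['test_a', 'test_b', 'test_c', 'test_d', 'test_e']
shard0 = select_shard(all_tests, 0, 2)
shard1 = select_shard(all_tests, 1, 2)
```

Every test lands on exactly one shard and the shards are disjoint, which is what the protocol requires of any filter.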
# Usage: __test_terminated <signal-number>
# Handler that is called when the test terminated unexpectedly
function __test_terminated() {
__show_log >&2
echo "$TEST_name FAILED: terminated by signal $1." >&2
TEST_passed="false"
__show_stack
timeout
exit 1
}
# Usage: __test_terminated_err
# Handler that is called when the test terminated unexpectedly due to "errexit".
function __test_terminated_err() {
# When a subshell exits due to signal ERR, its parent shell also exits,
# thus the signal handler is called recursively and we print out the
# error message and stack trace multiple times. We're only interested
# in the first one though, as it contains the most information, so ignore
# all following.
if [[ -f $TEST_TMPDIR/__err_handled ]]; then
exit 1
fi
__show_log >&2
  if [[ ! -z "$TEST_name" ]]; then
    echo -n "$TEST_name " >&2
  fi
echo "FAILED: terminated because this command returned a non-zero status:" >&2
touch $TEST_TMPDIR/__err_handled
TEST_passed="false"
__show_stack
# If $TEST_name is still empty, the test suite failed before we even started
# to run tests, so we shouldn't call tear_down.
if [[ ! -z "$TEST_name" ]]; then
tear_down
fi
exit 1
}
# Usage: __trap_with_arg <handler> <signals ...>
# Helper to install a trap handler for several signals, passing the signal
# along so that the handler can tell which signal fired.
function __trap_with_arg() {
func="$1" ; shift
for sig ; do
trap "$func $sig" "$sig"
done
}
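# Example (illustrative):
#   __trap_with_arg my_handler INT TERM
# installs "trap 'my_handler INT' INT" and "trap 'my_handler TERM' TERM",
# so my_handler receives the signal name as its first argument.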
# Usage: __log_to_test_report <node> <block>
# Adds the block to the given node in the report file. Quotes in the
# arguments need to be escaped.
function __log_to_test_report() {
local node="$1"
local block="$2"
if [[ ! -e "$XML_OUTPUT_FILE" ]]; then
local xml_header='<?xml version="1.0" encoding="UTF-8"?>'
echo "$xml_header<testsuites></testsuites>" > $XML_OUTPUT_FILE
fi
# replace match on node with block and match
# replacement expression only needs escaping for quotes
perl -e "\
\$input = @ARGV[0]; \
\$/=undef; \
open FILE, '+<$XML_OUTPUT_FILE'; \
\$content = <FILE>; \
if (\$content =~ /($node.*)\$/) { \
seek FILE, 0, 0; \
print FILE \$\` . \$input . \$1; \
}; \
close FILE" "$block"
}
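# Example (as used by run_suite below):
#   __log_to_test_report "<\/testsuites>" "<testsuite></testsuite>"
# inserts an empty <testsuite> element just before the closing </testsuites>
# tag of the XML report.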
# Usage: __finish_test_report <suite-name> <total> <passed>
# Adds the test summaries to the xml nodes.
function __finish_test_report() {
local suite_name="$1"
local total="$2"
local passed="$3"
local failed=$((total - passed))
# Update the xml output with the suite name and total number of
# passed/failed tests.
  sed \
    -e "s/<testsuites>/<testsuites tests=\"$total\" failures=\"0\" errors=\"$failed\">/" \
    -e "s/<testsuite>/<testsuite name=\"${suite_name}\" tests=\"$total\" failures=\"0\" errors=\"$failed\">/" \
    "$XML_OUTPUT_FILE" > "$XML_OUTPUT_FILE.bak"
  mv "$XML_OUTPUT_FILE.bak" "$XML_OUTPUT_FILE"
}
# Multi-platform timestamp function
UNAME=$(uname -s | tr 'A-Z' 'a-z')
if [ "$UNAME" = "linux" ] || [[ "$UNAME" =~ ^msys_nt ]]; then
function timestamp() {
echo $(($(date +%s%N)/1000000))
}
else
function timestamp() {
# OS X and FreeBSD do not have %N so python is the best we can do
python -c 'import time; print(int(round(time.time() * 1000)))'
}
fi
function get_run_time() {
local ts_start=$1
local ts_end=$2
run_time_ms=$((${ts_end}-${ts_start}))
echo $(($run_time_ms/1000)).${run_time_ms: -3}
}
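# Example: get_run_time 1000 3250 prints "2.250" (timestamps are in
# milliseconds; the ${run_time_ms: -3} substring assumes the difference has
# at least three digits, i.e. runs of 100ms or more).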
# Usage: run_suite <suite-comment>
# Must be called from the end of the user's test suite.
# Calls exit with zero on success, non-zero otherwise.
function run_suite() {
local message="$1"
  # The name of the suite should be the script being run, which under Bazel
  # is the name of the sh_test's script.
  local suite_name="$(basename "$0")"
echo >&2
echo "$message" >&2
echo >&2
__log_to_test_report "<\/testsuites>" "<testsuite></testsuite>"
local total=0
local passed=0
atexit "cleanup"
# If the user didn't specify an explicit list of tests (e.g. a
# working set), use them all.
if [ ${#TESTS[@]} = 0 ]; then
TESTS=$(declare -F | awk '{print $3}' | grep ^test_)
elif [ -n "${TEST_WARNINGS_OUTPUT_FILE:-}" ]; then
if grep -q "TESTS=" "$TEST_script" ; then
echo "TESTS variable overridden in Bazel sh_test. Please remove before submitting" \
>> "$TEST_WARNINGS_OUTPUT_FILE"
fi
fi
__update_shards
for TEST_name in ${TESTS[@]}; do
>$TEST_log # Reset the log.
TEST_passed="true"
total=$(($total + 1))
if [[ "$TEST_verbose" == "true" ]]; then
__pad $TEST_name '*' >&2
fi
local run_time="0.0"
rm -f $TEST_TMPDIR/{__ts_start,__ts_end}
if [ "$(type -t $TEST_name)" = function ]; then
# Save exit handlers eventually set.
local SAVED_ATEXIT="$ATEXIT";
ATEXIT=
# Run test in a subshell.
rm -f $TEST_TMPDIR/__err_handled
__trap_with_arg __test_terminated INT KILL PIPE TERM ABRT FPE ILL QUIT SEGV
(
timestamp >$TEST_TMPDIR/__ts_start
testenv_set_up
set_up
eval $TEST_name
tear_down
testenv_tear_down
timestamp >$TEST_TMPDIR/__ts_end
test $TEST_passed == "true"
) 2>&1 | tee $TEST_TMPDIR/__log
# Note that tee will prevent the control flow continuing if the test
# spawned any processes which are still running and have not closed
# their stdout.
test_subshell_status=${PIPESTATUS[0]}
if [ "$test_subshell_status" != 0 ]; then
TEST_passed="false"
# Ensure that an end time is recorded in case the test subshell
# terminated prematurely.
[ -f $TEST_TMPDIR/__ts_end ] || timestamp >$TEST_TMPDIR/__ts_end
fi
# Calculate run time for the testcase.
local ts_start=$(cat $TEST_TMPDIR/__ts_start)
local ts_end=$(cat $TEST_TMPDIR/__ts_end)
run_time=$(get_run_time $ts_start $ts_end)
# Eventually restore exit handlers.
if [ -n "$SAVED_ATEXIT" ]; then
ATEXIT="$SAVED_ATEXIT"
trap "$ATEXIT" EXIT
fi
else # Bad test explicitly specified in $TESTS.
fail "Not a function: '$TEST_name'"
fi
local testcase_tag=""
if [[ "$TEST_passed" == "true" ]]; then
if [[ "$TEST_verbose" == "true" ]]; then
echo "PASSED: $TEST_name" >&2
fi
passed=$(($passed + 1))
testcase_tag="<testcase name=\"$TEST_name\" status=\"run\" time=\"$run_time\" classname=\"\"></testcase>"
else
echo "FAILED: $TEST_name" >&2
# end marker in CDATA cannot be escaped, we need to split the CDATA sections
log=$(cat $TEST_TMPDIR/__log | sed 's/]]>/]]>]]&gt;<![CDATA[/g')
fail_msg=$(cat $TEST_TMPDIR/__fail 2> /dev/null || echo "No failure message")
# Replacing '&' with '&amp;', '<' with '&lt;', '>' with '&gt;', and '"' with '&quot;'
      escaped_fail_msg=$(echo "$fail_msg" | sed 's/&/\&amp;/g' | sed 's/</\&lt;/g' | sed 's/>/\&gt;/g' | sed 's/"/\&quot;/g')
testcase_tag="<testcase name=\"$TEST_name\" status=\"run\" time=\"$run_time\" classname=\"\"><error message=\"$escaped_fail_msg\"><![CDATA[$log]]></error></testcase>"
fi
if [[ "$TEST_verbose" == "true" ]]; then
echo >&2
fi
__log_to_test_report "<\/testsuite>" "$testcase_tag"
done
__finish_test_report "$suite_name" $total $passed
__pad "$passed / $total tests passed." '*' >&2
[ $total = $passed ] || {
__pad "There were errors." '*'
exit 1
} >&2
exit 0
}
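# Example test script using this framework (illustrative; the path used to
# source this file depends on how your runfiles are laid out):
#   #!/bin/bash
#   source "$(dirname "$0")/unittest.bash" || exit 1
#
#   function test_hello() {
#     assert_equals "hello" "$(echo hello)"
#   }
#
#   run_suite "hello test suite"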