diff --git a/vendor/github.com/coreos/go-systemd/LICENSE b/vendor/github.com/coreos/go-systemd/LICENSE deleted file mode 100644 index 37ec93a14..000000000 --- a/vendor/github.com/coreos/go-systemd/LICENSE +++ /dev/null @@ -1,191 +0,0 @@ -Apache License -Version 2.0, January 2004 -http://www.apache.org/licenses/ - -TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION - -1. Definitions. - -"License" shall mean the terms and conditions for use, reproduction, and -distribution as defined by Sections 1 through 9 of this document. - -"Licensor" shall mean the copyright owner or entity authorized by the copyright -owner that is granting the License. - -"Legal Entity" shall mean the union of the acting entity and all other entities -that control, are controlled by, or are under common control with that entity. -For the purposes of this definition, "control" means (i) the power, direct or -indirect, to cause the direction or management of such entity, whether by -contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the -outstanding shares, or (iii) beneficial ownership of such entity. - -"You" (or "Your") shall mean an individual or Legal Entity exercising -permissions granted by this License. - -"Source" form shall mean the preferred form for making modifications, including -but not limited to software source code, documentation source, and configuration -files. - -"Object" form shall mean any form resulting from mechanical transformation or -translation of a Source form, including but not limited to compiled object code, -generated documentation, and conversions to other media types. - -"Work" shall mean the work of authorship, whether in Source or Object form, made -available under the License, as indicated by a copyright notice that is included -in or attached to the work (an example is provided in the Appendix below). - -"Derivative Works" shall mean any work, whether in Source or Object form, that -is based on (or derived from) the Work and for which the editorial revisions, -annotations, elaborations, or other modifications represent, as a whole, an -original work of authorship. For the purposes of this License, Derivative Works -shall not include works that remain separable from, or merely link (or bind by -name) to the interfaces of, the Work and Derivative Works thereof. - -"Contribution" shall mean any work of authorship, including the original version -of the Work and any modifications or additions to that Work or Derivative Works -thereof, that is intentionally submitted to Licensor for inclusion in the Work -by the copyright owner or by an individual or Legal Entity authorized to submit -on behalf of the copyright owner. For the purposes of this definition, -"submitted" means any form of electronic, verbal, or written communication sent -to the Licensor or its representatives, including but not limited to -communication on electronic mailing lists, source code control systems, and -issue tracking systems that are managed by, or on behalf of, the Licensor for -the purpose of discussing and improving the Work, but excluding communication -that is conspicuously marked or otherwise designated in writing by the copyright -owner as "Not a Contribution." - -"Contributor" shall mean Licensor and any individual or Legal Entity on behalf -of whom a Contribution has been received by Licensor and subsequently -incorporated within the Work. - -2. Grant of Copyright License. 
- -Subject to the terms and conditions of this License, each Contributor hereby -grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, -irrevocable copyright license to reproduce, prepare Derivative Works of, -publicly display, publicly perform, sublicense, and distribute the Work and such -Derivative Works in Source or Object form. - -3. Grant of Patent License. - -Subject to the terms and conditions of this License, each Contributor hereby -grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, -irrevocable (except as stated in this section) patent license to make, have -made, use, offer to sell, sell, import, and otherwise transfer the Work, where -such license applies only to those patent claims licensable by such Contributor -that are necessarily infringed by their Contribution(s) alone or by combination -of their Contribution(s) with the Work to which such Contribution(s) was -submitted. If You institute patent litigation against any entity (including a -cross-claim or counterclaim in a lawsuit) alleging that the Work or a -Contribution incorporated within the Work constitutes direct or contributory -patent infringement, then any patent licenses granted to You under this License -for that Work shall terminate as of the date such litigation is filed. - -4. Redistribution. - -You may reproduce and distribute copies of the Work or Derivative Works thereof -in any medium, with or without modifications, and in Source or Object form, -provided that You meet the following conditions: - -You must give any other recipients of the Work or Derivative Works a copy of -this License; and -You must cause any modified files to carry prominent notices stating that You -changed the files; and -You must retain, in the Source form of any Derivative Works that You distribute, -all copyright, patent, trademark, and attribution notices from the Source form -of the Work, excluding those notices that do not pertain to any part of the -Derivative Works; and -If the Work includes a "NOTICE" text file as part of its distribution, then any -Derivative Works that You distribute must include a readable copy of the -attribution notices contained within such NOTICE file, excluding those notices -that do not pertain to any part of the Derivative Works, in at least one of the -following places: within a NOTICE text file distributed as part of the -Derivative Works; within the Source form or documentation, if provided along -with the Derivative Works; or, within a display generated by the Derivative -Works, if and wherever such third-party notices normally appear. The contents of -the NOTICE file are for informational purposes only and do not modify the -License. You may add Your own attribution notices within Derivative Works that -You distribute, alongside or as an addendum to the NOTICE text from the Work, -provided that such additional attribution notices cannot be construed as -modifying the License. -You may add Your own copyright statement to Your modifications and may provide -additional or different license terms and conditions for use, reproduction, or -distribution of Your modifications, or for any such Derivative Works as a whole, -provided Your use, reproduction, and distribution of the Work otherwise complies -with the conditions stated in this License. - -5. Submission of Contributions. 
- -Unless You explicitly state otherwise, any Contribution intentionally submitted -for inclusion in the Work by You to the Licensor shall be under the terms and -conditions of this License, without any additional terms or conditions. -Notwithstanding the above, nothing herein shall supersede or modify the terms of -any separate license agreement you may have executed with Licensor regarding -such Contributions. - -6. Trademarks. - -This License does not grant permission to use the trade names, trademarks, -service marks, or product names of the Licensor, except as required for -reasonable and customary use in describing the origin of the Work and -reproducing the content of the NOTICE file. - -7. Disclaimer of Warranty. - -Unless required by applicable law or agreed to in writing, Licensor provides the -Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, -including, without limitation, any warranties or conditions of TITLE, -NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are -solely responsible for determining the appropriateness of using or -redistributing the Work and assume any risks associated with Your exercise of -permissions under this License. - -8. Limitation of Liability. - -In no event and under no legal theory, whether in tort (including negligence), -contract, or otherwise, unless required by applicable law (such as deliberate -and grossly negligent acts) or agreed to in writing, shall any Contributor be -liable to You for damages, including any direct, indirect, special, incidental, -or consequential damages of any character arising as a result of this License or -out of the use or inability to use the Work (including but not limited to -damages for loss of goodwill, work stoppage, computer failure or malfunction, or -any and all other commercial damages or losses), even if such Contributor has -been advised of the possibility of such damages. - -9. Accepting Warranty or Additional Liability. - -While redistributing the Work or Derivative Works thereof, You may choose to -offer, and charge a fee for, acceptance of support, warranty, indemnity, or -other liability obligations and/or rights consistent with this License. However, -in accepting such obligations, You may act only on Your own behalf and on Your -sole responsibility, not on behalf of any other Contributor, and only if You -agree to indemnify, defend, and hold each Contributor harmless for any liability -incurred by, or claims asserted against, such Contributor by reason of your -accepting any such warranty or additional liability. - -END OF TERMS AND CONDITIONS - -APPENDIX: How to apply the Apache License to your work - -To apply the Apache License to your work, attach the following boilerplate -notice, with the fields enclosed by brackets "[]" replaced with your own -identifying information. (Don't include the brackets!) The text should be -enclosed in the appropriate comment syntax for the file format. We also -recommend that a file or class name and description of purpose be included on -the same "printed page" as the copyright notice for easier identification within -third-party archives. - - Copyright [yyyy] [name of copyright owner] - - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. 
- You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. diff --git a/vendor/github.com/coreos/go-systemd/dbus/dbus.go b/vendor/github.com/coreos/go-systemd/dbus/dbus.go deleted file mode 100644 index c1694fb52..000000000 --- a/vendor/github.com/coreos/go-systemd/dbus/dbus.go +++ /dev/null @@ -1,213 +0,0 @@ -// Copyright 2015 CoreOS, Inc. -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. - -// Integration with the systemd D-Bus API. See http://www.freedesktop.org/wiki/Software/systemd/dbus/ -package dbus - -import ( - "fmt" - "os" - "strconv" - "strings" - "sync" - - "github.com/godbus/dbus" -) - -const ( - alpha = `abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ` - num = `0123456789` - alphanum = alpha + num - signalBuffer = 100 -) - -// needsEscape checks whether a byte in a potential dbus ObjectPath needs to be escaped -func needsEscape(i int, b byte) bool { - // Escape everything that is not a-z-A-Z-0-9 - // Also escape 0-9 if it's the first character - return strings.IndexByte(alphanum, b) == -1 || - (i == 0 && strings.IndexByte(num, b) != -1) -} - -// PathBusEscape sanitizes a constituent string of a dbus ObjectPath using the -// rules that systemd uses for serializing special characters. -func PathBusEscape(path string) string { - // Special case the empty string - if len(path) == 0 { - return "_" - } - n := []byte{} - for i := 0; i < len(path); i++ { - c := path[i] - if needsEscape(i, c) { - e := fmt.Sprintf("_%x", c) - n = append(n, []byte(e)...) - } else { - n = append(n, c) - } - } - return string(n) -} - -// Conn is a connection to systemd's dbus endpoint. -type Conn struct { - // sysconn/sysobj are only used to call dbus methods - sysconn *dbus.Conn - sysobj dbus.BusObject - - // sigconn/sigobj are only used to receive dbus signals - sigconn *dbus.Conn - sigobj dbus.BusObject - - jobListener struct { - jobs map[dbus.ObjectPath]chan<- string - sync.Mutex - } - subscriber struct { - updateCh chan<- *SubStateUpdate - errCh chan<- error - sync.Mutex - ignore map[dbus.ObjectPath]int64 - cleanIgnore int64 - } -} - -// New establishes a connection to any available bus and authenticates. -// Callers should call Close() when done with the connection. -func New() (*Conn, error) { - conn, err := NewSystemConnection() - if err != nil && os.Geteuid() == 0 { - return NewSystemdConnection() - } - return conn, err -} - -// NewSystemConnection establishes a connection to the system bus and authenticates. 
-// Callers should call Close() when done with the connection -func NewSystemConnection() (*Conn, error) { - return NewConnection(func() (*dbus.Conn, error) { - return dbusAuthHelloConnection(dbus.SystemBusPrivate) - }) -} - -// NewUserConnection establishes a connection to the session bus and -// authenticates. This can be used to connect to systemd user instances. -// Callers should call Close() when done with the connection. -func NewUserConnection() (*Conn, error) { - return NewConnection(func() (*dbus.Conn, error) { - return dbusAuthHelloConnection(dbus.SessionBusPrivate) - }) -} - -// NewSystemdConnection establishes a private, direct connection to systemd. -// This can be used for communicating with systemd without a dbus daemon. -// Callers should call Close() when done with the connection. -func NewSystemdConnection() (*Conn, error) { - return NewConnection(func() (*dbus.Conn, error) { - // We skip Hello when talking directly to systemd. - return dbusAuthConnection(func() (*dbus.Conn, error) { - return dbus.Dial("unix:path=/run/systemd/private") - }) - }) -} - -// Close closes an established connection -func (c *Conn) Close() { - c.sysconn.Close() - c.sigconn.Close() -} - -// NewConnection establishes a connection to a bus using a caller-supplied function. -// This allows connecting to remote buses through a user-supplied mechanism. -// The supplied function may be called multiple times, and should return independent connections. -// The returned connection must be fully initialised: the org.freedesktop.DBus.Hello call must have succeeded, -// and any authentication should be handled by the function. -func NewConnection(dialBus func() (*dbus.Conn, error)) (*Conn, error) { - sysconn, err := dialBus() - if err != nil { - return nil, err - } - - sigconn, err := dialBus() - if err != nil { - sysconn.Close() - return nil, err - } - - c := &Conn{ - sysconn: sysconn, - sysobj: systemdObject(sysconn), - sigconn: sigconn, - sigobj: systemdObject(sigconn), - } - - c.subscriber.ignore = make(map[dbus.ObjectPath]int64) - c.jobListener.jobs = make(map[dbus.ObjectPath]chan<- string) - - // Setup the listeners on jobs so that we can get completions - c.sigconn.BusObject().Call("org.freedesktop.DBus.AddMatch", 0, - "type='signal', interface='org.freedesktop.systemd1.Manager', member='JobRemoved'") - - c.dispatch() - return c, nil -} - -// GetManagerProperty returns the value of a property on the org.freedesktop.systemd1.Manager -// interface. The value is returned in its string representation, as defined at -// https://developer.gnome.org/glib/unstable/gvariant-text.html -func (c *Conn) GetManagerProperty(prop string) (string, error) { - variant, err := c.sysobj.GetProperty("org.freedesktop.systemd1.Manager." 
+ prop) - if err != nil { - return "", err - } - return variant.String(), nil -} - -func dbusAuthConnection(createBus func() (*dbus.Conn, error)) (*dbus.Conn, error) { - conn, err := createBus() - if err != nil { - return nil, err - } - - // Only use EXTERNAL method, and hardcode the uid (not username) - // to avoid a username lookup (which requires a dynamically linked - // libc) - methods := []dbus.Auth{dbus.AuthExternal(strconv.Itoa(os.Getuid()))} - - err = conn.Auth(methods) - if err != nil { - conn.Close() - return nil, err - } - - return conn, nil -} - -func dbusAuthHelloConnection(createBus func() (*dbus.Conn, error)) (*dbus.Conn, error) { - conn, err := dbusAuthConnection(createBus) - if err != nil { - return nil, err - } - - if err = conn.Hello(); err != nil { - conn.Close() - return nil, err - } - - return conn, nil -} - -func systemdObject(conn *dbus.Conn) dbus.BusObject { - return conn.Object("org.freedesktop.systemd1", dbus.ObjectPath("/org/freedesktop/systemd1")) -} diff --git a/vendor/github.com/coreos/go-systemd/dbus/methods.go b/vendor/github.com/coreos/go-systemd/dbus/methods.go deleted file mode 100644 index ab17f7cc7..000000000 --- a/vendor/github.com/coreos/go-systemd/dbus/methods.go +++ /dev/null @@ -1,565 +0,0 @@ -// Copyright 2015 CoreOS, Inc. -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. - -package dbus - -import ( - "errors" - "path" - "strconv" - - "github.com/godbus/dbus" -) - -func (c *Conn) jobComplete(signal *dbus.Signal) { - var id uint32 - var job dbus.ObjectPath - var unit string - var result string - dbus.Store(signal.Body, &id, &job, &unit, &result) - c.jobListener.Lock() - out, ok := c.jobListener.jobs[job] - if ok { - out <- result - delete(c.jobListener.jobs, job) - } - c.jobListener.Unlock() -} - -func (c *Conn) startJob(ch chan<- string, job string, args ...interface{}) (int, error) { - if ch != nil { - c.jobListener.Lock() - defer c.jobListener.Unlock() - } - - var p dbus.ObjectPath - err := c.sysobj.Call(job, 0, args...).Store(&p) - if err != nil { - return 0, err - } - - if ch != nil { - c.jobListener.jobs[p] = ch - } - - // ignore error since 0 is fine if conversion fails - jobID, _ := strconv.Atoi(path.Base(string(p))) - - return jobID, nil -} - -// StartUnit enqueues a start job and depending jobs, if any (unless otherwise -// specified by the mode string). -// -// Takes the unit to activate, plus a mode string. The mode needs to be one of -// replace, fail, isolate, ignore-dependencies, ignore-requirements. If -// "replace" the call will start the unit and its dependencies, possibly -// replacing already queued jobs that conflict with this. If "fail" the call -// will start the unit and its dependencies, but will fail if this would change -// an already queued job. If "isolate" the call will start the unit in question -// and terminate all units that aren't dependencies of it. If -// "ignore-dependencies" it will start a unit but ignore all its dependencies. 
-// If "ignore-requirements" it will start a unit but only ignore the -// requirement dependencies. It is not recommended to make use of the latter -// two options. -// -// If the provided channel is non-nil, a result string will be sent to it upon -// job completion: one of done, canceled, timeout, failed, dependency, skipped. -// done indicates successful execution of a job. canceled indicates that a job -// has been canceled before it finished execution. timeout indicates that the -// job timeout was reached. failed indicates that the job failed. dependency -// indicates that a job this job has been depending on failed and the job hence -// has been removed too. skipped indicates that a job was skipped because it -// didn't apply to the units current state. -// -// If no error occurs, the ID of the underlying systemd job will be returned. There -// does exist the possibility for no error to be returned, but for the returned job -// ID to be 0. In this case, the actual underlying ID is not 0 and this datapoint -// should not be considered authoritative. -// -// If an error does occur, it will be returned to the user alongside a job ID of 0. -func (c *Conn) StartUnit(name string, mode string, ch chan<- string) (int, error) { - return c.startJob(ch, "org.freedesktop.systemd1.Manager.StartUnit", name, mode) -} - -// StopUnit is similar to StartUnit but stops the specified unit rather -// than starting it. -func (c *Conn) StopUnit(name string, mode string, ch chan<- string) (int, error) { - return c.startJob(ch, "org.freedesktop.systemd1.Manager.StopUnit", name, mode) -} - -// ReloadUnit reloads a unit. Reloading is done only if the unit is already running and fails otherwise. -func (c *Conn) ReloadUnit(name string, mode string, ch chan<- string) (int, error) { - return c.startJob(ch, "org.freedesktop.systemd1.Manager.ReloadUnit", name, mode) -} - -// RestartUnit restarts a service. If a service is restarted that isn't -// running it will be started. -func (c *Conn) RestartUnit(name string, mode string, ch chan<- string) (int, error) { - return c.startJob(ch, "org.freedesktop.systemd1.Manager.RestartUnit", name, mode) -} - -// TryRestartUnit is like RestartUnit, except that a service that isn't running -// is not affected by the restart. -func (c *Conn) TryRestartUnit(name string, mode string, ch chan<- string) (int, error) { - return c.startJob(ch, "org.freedesktop.systemd1.Manager.TryRestartUnit", name, mode) -} - -// ReloadOrRestart attempts a reload if the unit supports it and use a restart -// otherwise. -func (c *Conn) ReloadOrRestartUnit(name string, mode string, ch chan<- string) (int, error) { - return c.startJob(ch, "org.freedesktop.systemd1.Manager.ReloadOrRestartUnit", name, mode) -} - -// ReloadOrTryRestart attempts a reload if the unit supports it and use a "Try" -// flavored restart otherwise. -func (c *Conn) ReloadOrTryRestartUnit(name string, mode string, ch chan<- string) (int, error) { - return c.startJob(ch, "org.freedesktop.systemd1.Manager.ReloadOrTryRestartUnit", name, mode) -} - -// StartTransientUnit() may be used to create and start a transient unit, which -// will be released as soon as it is not running or referenced anymore or the -// system is rebooted. name is the unit name including suffix, and must be -// unique. mode is the same as in StartUnit(), properties contains properties -// of the unit. 
-func (c *Conn) StartTransientUnit(name string, mode string, properties []Property, ch chan<- string) (int, error) { - return c.startJob(ch, "org.freedesktop.systemd1.Manager.StartTransientUnit", name, mode, properties, make([]PropertyCollection, 0)) -} - -// KillUnit takes the unit name and a UNIX signal number to send. All of the unit's -// processes are killed. -func (c *Conn) KillUnit(name string, signal int32) { - c.sysobj.Call("org.freedesktop.systemd1.Manager.KillUnit", 0, name, "all", signal).Store() -} - -// ResetFailedUnit resets the "failed" state of a specific unit. -func (c *Conn) ResetFailedUnit(name string) error { - return c.sysobj.Call("org.freedesktop.systemd1.Manager.ResetFailedUnit", 0, name).Store() -} - -// getProperties takes the unit name and returns all of its dbus object properties, for the given dbus interface -func (c *Conn) getProperties(unit string, dbusInterface string) (map[string]interface{}, error) { - var err error - var props map[string]dbus.Variant - - path := unitPath(unit) - if !path.IsValid() { - return nil, errors.New("invalid unit name: " + unit) - } - - obj := c.sysconn.Object("org.freedesktop.systemd1", path) - err = obj.Call("org.freedesktop.DBus.Properties.GetAll", 0, dbusInterface).Store(&props) - if err != nil { - return nil, err - } - - out := make(map[string]interface{}, len(props)) - for k, v := range props { - out[k] = v.Value() - } - - return out, nil -} - -// GetUnitProperties takes the unit name and returns all of its dbus object properties. -func (c *Conn) GetUnitProperties(unit string) (map[string]interface{}, error) { - return c.getProperties(unit, "org.freedesktop.systemd1.Unit") -} - -func (c *Conn) getProperty(unit string, dbusInterface string, propertyName string) (*Property, error) { - var err error - var prop dbus.Variant - - path := unitPath(unit) - if !path.IsValid() { - return nil, errors.New("invalid unit name: " + unit) - } - - obj := c.sysconn.Object("org.freedesktop.systemd1", path) - err = obj.Call("org.freedesktop.DBus.Properties.Get", 0, dbusInterface, propertyName).Store(&prop) - if err != nil { - return nil, err - } - - return &Property{Name: propertyName, Value: prop}, nil -} - -func (c *Conn) GetUnitProperty(unit string, propertyName string) (*Property, error) { - return c.getProperty(unit, "org.freedesktop.systemd1.Unit", propertyName) -} - -// GetServiceProperty returns property for given service name and property name -func (c *Conn) GetServiceProperty(service string, propertyName string) (*Property, error) { - return c.getProperty(service, "org.freedesktop.systemd1.Service", propertyName) -} - -// GetUnitTypeProperties returns the extra properties for a unit, specific to the unit type. -// Valid values for unitType: Service, Socket, Target, Device, Mount, Automount, Snapshot, Timer, Swap, Path, Slice, Scope -// return "dbus.Error: Unknown interface" if the unitType is not the correct type of the unit -func (c *Conn) GetUnitTypeProperties(unit string, unitType string) (map[string]interface{}, error) { - return c.getProperties(unit, "org.freedesktop.systemd1."+unitType) -} - -// SetUnitProperties() may be used to modify certain unit properties at runtime. -// Not all properties may be changed at runtime, but many resource management -// settings (primarily those in systemd.cgroup(5)) may. The changes are applied -// instantly, and stored on disk for future boots, unless runtime is true, in which -// case the settings only apply until the next reboot. name is the name of the unit -// to modify. 
properties are the settings to set, encoded as an array of property -// name and value pairs. -func (c *Conn) SetUnitProperties(name string, runtime bool, properties ...Property) error { - return c.sysobj.Call("org.freedesktop.systemd1.Manager.SetUnitProperties", 0, name, runtime, properties).Store() -} - -func (c *Conn) GetUnitTypeProperty(unit string, unitType string, propertyName string) (*Property, error) { - return c.getProperty(unit, "org.freedesktop.systemd1."+unitType, propertyName) -} - -type UnitStatus struct { - Name string // The primary unit name as string - Description string // The human readable description string - LoadState string // The load state (i.e. whether the unit file has been loaded successfully) - ActiveState string // The active state (i.e. whether the unit is currently started or not) - SubState string // The sub state (a more fine-grained version of the active state that is specific to the unit type, which the active state is not) - Followed string // A unit that is being followed in its state by this unit, if there is any, otherwise the empty string. - Path dbus.ObjectPath // The unit object path - JobId uint32 // If there is a job queued for the job unit the numeric job id, 0 otherwise - JobType string // The job type as string - JobPath dbus.ObjectPath // The job object path -} - -type storeFunc func(retvalues ...interface{}) error - -func (c *Conn) listUnitsInternal(f storeFunc) ([]UnitStatus, error) { - result := make([][]interface{}, 0) - err := f(&result) - if err != nil { - return nil, err - } - - resultInterface := make([]interface{}, len(result)) - for i := range result { - resultInterface[i] = result[i] - } - - status := make([]UnitStatus, len(result)) - statusInterface := make([]interface{}, len(status)) - for i := range status { - statusInterface[i] = &status[i] - } - - err = dbus.Store(resultInterface, statusInterface...) - if err != nil { - return nil, err - } - - return status, nil -} - -// ListUnits returns an array with all currently loaded units. Note that -// units may be known by multiple names at the same time, and hence there might -// be more unit names loaded than actual units behind them. -func (c *Conn) ListUnits() ([]UnitStatus, error) { - return c.listUnitsInternal(c.sysobj.Call("org.freedesktop.systemd1.Manager.ListUnits", 0).Store) -} - -// ListUnitsFiltered returns an array with units filtered by state. -// It takes a list of units' statuses to filter. -func (c *Conn) ListUnitsFiltered(states []string) ([]UnitStatus, error) { - return c.listUnitsInternal(c.sysobj.Call("org.freedesktop.systemd1.Manager.ListUnitsFiltered", 0, states).Store) -} - -// ListUnitsByPatterns returns an array with units. -// It takes a list of units' statuses and names to filter. -// Note that units may be known by multiple names at the same time, -// and hence there might be more unit names loaded than actual units behind them. -func (c *Conn) ListUnitsByPatterns(states []string, patterns []string) ([]UnitStatus, error) { - return c.listUnitsInternal(c.sysobj.Call("org.freedesktop.systemd1.Manager.ListUnitsByPatterns", 0, states, patterns).Store) -} - -// ListUnitsByNames returns an array with units. It takes a list of units' -// names and returns an UnitStatus array. Comparing to ListUnitsByPatterns -// method, this method returns statuses even for inactive or non-existing -// units. Input array should contain exact unit names, but not patterns. 
-func (c *Conn) ListUnitsByNames(units []string) ([]UnitStatus, error) { - return c.listUnitsInternal(c.sysobj.Call("org.freedesktop.systemd1.Manager.ListUnitsByNames", 0, units).Store) -} - -type UnitFile struct { - Path string - Type string -} - -func (c *Conn) listUnitFilesInternal(f storeFunc) ([]UnitFile, error) { - result := make([][]interface{}, 0) - err := f(&result) - if err != nil { - return nil, err - } - - resultInterface := make([]interface{}, len(result)) - for i := range result { - resultInterface[i] = result[i] - } - - files := make([]UnitFile, len(result)) - fileInterface := make([]interface{}, len(files)) - for i := range files { - fileInterface[i] = &files[i] - } - - err = dbus.Store(resultInterface, fileInterface...) - if err != nil { - return nil, err - } - - return files, nil -} - -// ListUnitFiles returns an array of all available units on disk. -func (c *Conn) ListUnitFiles() ([]UnitFile, error) { - return c.listUnitFilesInternal(c.sysobj.Call("org.freedesktop.systemd1.Manager.ListUnitFiles", 0).Store) -} - -// ListUnitFilesByPatterns returns an array of all available units on disk matched the patterns. -func (c *Conn) ListUnitFilesByPatterns(states []string, patterns []string) ([]UnitFile, error) { - return c.listUnitFilesInternal(c.sysobj.Call("org.freedesktop.systemd1.Manager.ListUnitFilesByPatterns", 0, states, patterns).Store) -} - -type LinkUnitFileChange EnableUnitFileChange - -// LinkUnitFiles() links unit files (that are located outside of the -// usual unit search paths) into the unit search path. -// -// It takes a list of absolute paths to unit files to link and two -// booleans. The first boolean controls whether the unit shall be -// enabled for runtime only (true, /run), or persistently (false, -// /etc). -// The second controls whether symlinks pointing to other units shall -// be replaced if necessary. -// -// This call returns a list of the changes made. The list consists of -// structures with three strings: the type of the change (one of symlink -// or unlink), the file name of the symlink and the destination of the -// symlink. -func (c *Conn) LinkUnitFiles(files []string, runtime bool, force bool) ([]LinkUnitFileChange, error) { - result := make([][]interface{}, 0) - err := c.sysobj.Call("org.freedesktop.systemd1.Manager.LinkUnitFiles", 0, files, runtime, force).Store(&result) - if err != nil { - return nil, err - } - - resultInterface := make([]interface{}, len(result)) - for i := range result { - resultInterface[i] = result[i] - } - - changes := make([]LinkUnitFileChange, len(result)) - changesInterface := make([]interface{}, len(changes)) - for i := range changes { - changesInterface[i] = &changes[i] - } - - err = dbus.Store(resultInterface, changesInterface...) - if err != nil { - return nil, err - } - - return changes, nil -} - -// EnableUnitFiles() may be used to enable one or more units in the system (by -// creating symlinks to them in /etc or /run). -// -// It takes a list of unit files to enable (either just file names or full -// absolute paths if the unit files are residing outside the usual unit -// search paths), and two booleans: the first controls whether the unit shall -// be enabled for runtime only (true, /run), or persistently (false, /etc). -// The second one controls whether symlinks pointing to other units shall -// be replaced if necessary. -// -// This call returns one boolean and an array with the changes made. The -// boolean signals whether the unit files contained any enablement -// information (i.e. 
an [Install]) section. The changes list consists of -// structures with three strings: the type of the change (one of symlink -// or unlink), the file name of the symlink and the destination of the -// symlink. -func (c *Conn) EnableUnitFiles(files []string, runtime bool, force bool) (bool, []EnableUnitFileChange, error) { - var carries_install_info bool - - result := make([][]interface{}, 0) - err := c.sysobj.Call("org.freedesktop.systemd1.Manager.EnableUnitFiles", 0, files, runtime, force).Store(&carries_install_info, &result) - if err != nil { - return false, nil, err - } - - resultInterface := make([]interface{}, len(result)) - for i := range result { - resultInterface[i] = result[i] - } - - changes := make([]EnableUnitFileChange, len(result)) - changesInterface := make([]interface{}, len(changes)) - for i := range changes { - changesInterface[i] = &changes[i] - } - - err = dbus.Store(resultInterface, changesInterface...) - if err != nil { - return false, nil, err - } - - return carries_install_info, changes, nil -} - -type EnableUnitFileChange struct { - Type string // Type of the change (one of symlink or unlink) - Filename string // File name of the symlink - Destination string // Destination of the symlink -} - -// DisableUnitFiles() may be used to disable one or more units in the system (by -// removing symlinks to them from /etc or /run). -// -// It takes a list of unit files to disable (either just file names or full -// absolute paths if the unit files are residing outside the usual unit -// search paths), and one boolean: whether the unit was enabled for runtime -// only (true, /run), or persistently (false, /etc). -// -// This call returns an array with the changes made. The changes list -// consists of structures with three strings: the type of the change (one of -// symlink or unlink), the file name of the symlink and the destination of the -// symlink. -func (c *Conn) DisableUnitFiles(files []string, runtime bool) ([]DisableUnitFileChange, error) { - result := make([][]interface{}, 0) - err := c.sysobj.Call("org.freedesktop.systemd1.Manager.DisableUnitFiles", 0, files, runtime).Store(&result) - if err != nil { - return nil, err - } - - resultInterface := make([]interface{}, len(result)) - for i := range result { - resultInterface[i] = result[i] - } - - changes := make([]DisableUnitFileChange, len(result)) - changesInterface := make([]interface{}, len(changes)) - for i := range changes { - changesInterface[i] = &changes[i] - } - - err = dbus.Store(resultInterface, changesInterface...) - if err != nil { - return nil, err - } - - return changes, nil -} - -type DisableUnitFileChange struct { - Type string // Type of the change (one of symlink or unlink) - Filename string // File name of the symlink - Destination string // Destination of the symlink -} - -// MaskUnitFiles masks one or more units in the system -// -// It takes three arguments: -// * list of units to mask (either just file names or full -// absolute paths if the unit files are residing outside -// the usual unit search paths) -// * runtime to specify whether the unit was enabled for runtime -// only (true, /run/systemd/..), or persistently (false, /etc/systemd/..) 
-// * force flag -func (c *Conn) MaskUnitFiles(files []string, runtime bool, force bool) ([]MaskUnitFileChange, error) { - result := make([][]interface{}, 0) - err := c.sysobj.Call("org.freedesktop.systemd1.Manager.MaskUnitFiles", 0, files, runtime, force).Store(&result) - if err != nil { - return nil, err - } - - resultInterface := make([]interface{}, len(result)) - for i := range result { - resultInterface[i] = result[i] - } - - changes := make([]MaskUnitFileChange, len(result)) - changesInterface := make([]interface{}, len(changes)) - for i := range changes { - changesInterface[i] = &changes[i] - } - - err = dbus.Store(resultInterface, changesInterface...) - if err != nil { - return nil, err - } - - return changes, nil -} - -type MaskUnitFileChange struct { - Type string // Type of the change (one of symlink or unlink) - Filename string // File name of the symlink - Destination string // Destination of the symlink -} - -// UnmaskUnitFiles unmasks one or more units in the system -// -// It takes two arguments: -// * list of unit files to mask (either just file names or full -// absolute paths if the unit files are residing outside -// the usual unit search paths) -// * runtime to specify whether the unit was enabled for runtime -// only (true, /run/systemd/..), or persistently (false, /etc/systemd/..) -func (c *Conn) UnmaskUnitFiles(files []string, runtime bool) ([]UnmaskUnitFileChange, error) { - result := make([][]interface{}, 0) - err := c.sysobj.Call("org.freedesktop.systemd1.Manager.UnmaskUnitFiles", 0, files, runtime).Store(&result) - if err != nil { - return nil, err - } - - resultInterface := make([]interface{}, len(result)) - for i := range result { - resultInterface[i] = result[i] - } - - changes := make([]UnmaskUnitFileChange, len(result)) - changesInterface := make([]interface{}, len(changes)) - for i := range changes { - changesInterface[i] = &changes[i] - } - - err = dbus.Store(resultInterface, changesInterface...) - if err != nil { - return nil, err - } - - return changes, nil -} - -type UnmaskUnitFileChange struct { - Type string // Type of the change (one of symlink or unlink) - Filename string // File name of the symlink - Destination string // Destination of the symlink -} - -// Reload instructs systemd to scan for and reload unit files. This is -// equivalent to a 'systemctl daemon-reload'. -func (c *Conn) Reload() error { - return c.sysobj.Call("org.freedesktop.systemd1.Manager.Reload", 0).Store() -} - -func unitPath(name string) dbus.ObjectPath { - return dbus.ObjectPath("/org/freedesktop/systemd1/unit/" + PathBusEscape(name)) -} diff --git a/vendor/github.com/coreos/go-systemd/dbus/properties.go b/vendor/github.com/coreos/go-systemd/dbus/properties.go deleted file mode 100644 index 6c8189587..000000000 --- a/vendor/github.com/coreos/go-systemd/dbus/properties.go +++ /dev/null @@ -1,237 +0,0 @@ -// Copyright 2015 CoreOS, Inc. -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. 
- -package dbus - -import ( - "github.com/godbus/dbus" -) - -// From the systemd docs: -// -// The properties array of StartTransientUnit() may take many of the settings -// that may also be configured in unit files. Not all parameters are currently -// accepted though, but we plan to cover more properties with future release. -// Currently you may set the Description, Slice and all dependency types of -// units, as well as RemainAfterExit, ExecStart for service units, -// TimeoutStopUSec and PIDs for scope units, and CPUAccounting, CPUShares, -// BlockIOAccounting, BlockIOWeight, BlockIOReadBandwidth, -// BlockIOWriteBandwidth, BlockIODeviceWeight, MemoryAccounting, MemoryLimit, -// DevicePolicy, DeviceAllow for services/scopes/slices. These fields map -// directly to their counterparts in unit files and as normal D-Bus object -// properties. The exception here is the PIDs field of scope units which is -// used for construction of the scope only and specifies the initial PIDs to -// add to the scope object. - -type Property struct { - Name string - Value dbus.Variant -} - -type PropertyCollection struct { - Name string - Properties []Property -} - -type execStart struct { - Path string // the binary path to execute - Args []string // an array with all arguments to pass to the executed command, starting with argument 0 - UncleanIsFailure bool // a boolean whether it should be considered a failure if the process exits uncleanly -} - -// PropExecStart sets the ExecStart service property. The first argument is a -// slice with the binary path to execute followed by the arguments to pass to -// the executed command. See -// http://www.freedesktop.org/software/systemd/man/systemd.service.html#ExecStart= -func PropExecStart(command []string, uncleanIsFailure bool) Property { - execStarts := []execStart{ - execStart{ - Path: command[0], - Args: command, - UncleanIsFailure: uncleanIsFailure, - }, - } - - return Property{ - Name: "ExecStart", - Value: dbus.MakeVariant(execStarts), - } -} - -// PropRemainAfterExit sets the RemainAfterExit service property. See -// http://www.freedesktop.org/software/systemd/man/systemd.service.html#RemainAfterExit= -func PropRemainAfterExit(b bool) Property { - return Property{ - Name: "RemainAfterExit", - Value: dbus.MakeVariant(b), - } -} - -// PropType sets the Type service property. See -// http://www.freedesktop.org/software/systemd/man/systemd.service.html#Type= -func PropType(t string) Property { - return Property{ - Name: "Type", - Value: dbus.MakeVariant(t), - } -} - -// PropDescription sets the Description unit property. See -// http://www.freedesktop.org/software/systemd/man/systemd.unit#Description= -func PropDescription(desc string) Property { - return Property{ - Name: "Description", - Value: dbus.MakeVariant(desc), - } -} - -func propDependency(name string, units []string) Property { - return Property{ - Name: name, - Value: dbus.MakeVariant(units), - } -} - -// PropRequires sets the Requires unit property. See -// http://www.freedesktop.org/software/systemd/man/systemd.unit.html#Requires= -func PropRequires(units ...string) Property { - return propDependency("Requires", units) -} - -// PropRequiresOverridable sets the RequiresOverridable unit property. See -// http://www.freedesktop.org/software/systemd/man/systemd.unit.html#RequiresOverridable= -func PropRequiresOverridable(units ...string) Property { - return propDependency("RequiresOverridable", units) -} - -// PropRequisite sets the Requisite unit property. 
See -// http://www.freedesktop.org/software/systemd/man/systemd.unit.html#Requisite= -func PropRequisite(units ...string) Property { - return propDependency("Requisite", units) -} - -// PropRequisiteOverridable sets the RequisiteOverridable unit property. See -// http://www.freedesktop.org/software/systemd/man/systemd.unit.html#RequisiteOverridable= -func PropRequisiteOverridable(units ...string) Property { - return propDependency("RequisiteOverridable", units) -} - -// PropWants sets the Wants unit property. See -// http://www.freedesktop.org/software/systemd/man/systemd.unit.html#Wants= -func PropWants(units ...string) Property { - return propDependency("Wants", units) -} - -// PropBindsTo sets the BindsTo unit property. See -// http://www.freedesktop.org/software/systemd/man/systemd.unit.html#BindsTo= -func PropBindsTo(units ...string) Property { - return propDependency("BindsTo", units) -} - -// PropRequiredBy sets the RequiredBy unit property. See -// http://www.freedesktop.org/software/systemd/man/systemd.unit.html#RequiredBy= -func PropRequiredBy(units ...string) Property { - return propDependency("RequiredBy", units) -} - -// PropRequiredByOverridable sets the RequiredByOverridable unit property. See -// http://www.freedesktop.org/software/systemd/man/systemd.unit.html#RequiredByOverridable= -func PropRequiredByOverridable(units ...string) Property { - return propDependency("RequiredByOverridable", units) -} - -// PropWantedBy sets the WantedBy unit property. See -// http://www.freedesktop.org/software/systemd/man/systemd.unit.html#WantedBy= -func PropWantedBy(units ...string) Property { - return propDependency("WantedBy", units) -} - -// PropBoundBy sets the BoundBy unit property. See -// http://www.freedesktop.org/software/systemd/main/systemd.unit.html#BoundBy= -func PropBoundBy(units ...string) Property { - return propDependency("BoundBy", units) -} - -// PropConflicts sets the Conflicts unit property. See -// http://www.freedesktop.org/software/systemd/man/systemd.unit.html#Conflicts= -func PropConflicts(units ...string) Property { - return propDependency("Conflicts", units) -} - -// PropConflictedBy sets the ConflictedBy unit property. See -// http://www.freedesktop.org/software/systemd/man/systemd.unit.html#ConflictedBy= -func PropConflictedBy(units ...string) Property { - return propDependency("ConflictedBy", units) -} - -// PropBefore sets the Before unit property. See -// http://www.freedesktop.org/software/systemd/man/systemd.unit.html#Before= -func PropBefore(units ...string) Property { - return propDependency("Before", units) -} - -// PropAfter sets the After unit property. See -// http://www.freedesktop.org/software/systemd/man/systemd.unit.html#After= -func PropAfter(units ...string) Property { - return propDependency("After", units) -} - -// PropOnFailure sets the OnFailure unit property. See -// http://www.freedesktop.org/software/systemd/man/systemd.unit.html#OnFailure= -func PropOnFailure(units ...string) Property { - return propDependency("OnFailure", units) -} - -// PropTriggers sets the Triggers unit property. See -// http://www.freedesktop.org/software/systemd/man/systemd.unit.html#Triggers= -func PropTriggers(units ...string) Property { - return propDependency("Triggers", units) -} - -// PropTriggeredBy sets the TriggeredBy unit property. 
See -// http://www.freedesktop.org/software/systemd/man/systemd.unit.html#TriggeredBy= -func PropTriggeredBy(units ...string) Property { - return propDependency("TriggeredBy", units) -} - -// PropPropagatesReloadTo sets the PropagatesReloadTo unit property. See -// http://www.freedesktop.org/software/systemd/man/systemd.unit.html#PropagatesReloadTo= -func PropPropagatesReloadTo(units ...string) Property { - return propDependency("PropagatesReloadTo", units) -} - -// PropRequiresMountsFor sets the RequiresMountsFor unit property. See -// http://www.freedesktop.org/software/systemd/man/systemd.unit.html#RequiresMountsFor= -func PropRequiresMountsFor(units ...string) Property { - return propDependency("RequiresMountsFor", units) -} - -// PropSlice sets the Slice unit property. See -// http://www.freedesktop.org/software/systemd/man/systemd.resource-control.html#Slice= -func PropSlice(slice string) Property { - return Property{ - Name: "Slice", - Value: dbus.MakeVariant(slice), - } -} - -// PropPids sets the PIDs field of scope units used in the initial construction -// of the scope only and specifies the initial PIDs to add to the scope object. -// See https://www.freedesktop.org/wiki/Software/systemd/ControlGroupInterface/#properties -func PropPids(pids ...uint32) Property { - return Property{ - Name: "PIDs", - Value: dbus.MakeVariant(pids), - } -} diff --git a/vendor/github.com/coreos/go-systemd/dbus/set.go b/vendor/github.com/coreos/go-systemd/dbus/set.go deleted file mode 100644 index f92e6fbed..000000000 --- a/vendor/github.com/coreos/go-systemd/dbus/set.go +++ /dev/null @@ -1,47 +0,0 @@ -// Copyright 2015 CoreOS, Inc. -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. - -package dbus - -type set struct { - data map[string]bool -} - -func (s *set) Add(value string) { - s.data[value] = true -} - -func (s *set) Remove(value string) { - delete(s.data, value) -} - -func (s *set) Contains(value string) (exists bool) { - _, exists = s.data[value] - return -} - -func (s *set) Length() int { - return len(s.data) -} - -func (s *set) Values() (values []string) { - for val, _ := range s.data { - values = append(values, val) - } - return -} - -func newSet() *set { - return &set{make(map[string]bool)} -} diff --git a/vendor/github.com/coreos/go-systemd/dbus/subscription.go b/vendor/github.com/coreos/go-systemd/dbus/subscription.go deleted file mode 100644 index 996451445..000000000 --- a/vendor/github.com/coreos/go-systemd/dbus/subscription.go +++ /dev/null @@ -1,250 +0,0 @@ -// Copyright 2015 CoreOS, Inc. -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
-// See the License for the specific language governing permissions and -// limitations under the License. - -package dbus - -import ( - "errors" - "time" - - "github.com/godbus/dbus" -) - -const ( - cleanIgnoreInterval = int64(10 * time.Second) - ignoreInterval = int64(30 * time.Millisecond) -) - -// Subscribe sets up this connection to subscribe to all systemd dbus events. -// This is required before calling SubscribeUnits. When the connection closes -// systemd will automatically stop sending signals so there is no need to -// explicitly call Unsubscribe(). -func (c *Conn) Subscribe() error { - c.sigconn.BusObject().Call("org.freedesktop.DBus.AddMatch", 0, - "type='signal',interface='org.freedesktop.systemd1.Manager',member='UnitNew'") - c.sigconn.BusObject().Call("org.freedesktop.DBus.AddMatch", 0, - "type='signal',interface='org.freedesktop.DBus.Properties',member='PropertiesChanged'") - - err := c.sigobj.Call("org.freedesktop.systemd1.Manager.Subscribe", 0).Store() - if err != nil { - return err - } - - return nil -} - -// Unsubscribe this connection from systemd dbus events. -func (c *Conn) Unsubscribe() error { - err := c.sigobj.Call("org.freedesktop.systemd1.Manager.Unsubscribe", 0).Store() - if err != nil { - return err - } - - return nil -} - -func (c *Conn) dispatch() { - ch := make(chan *dbus.Signal, signalBuffer) - - c.sigconn.Signal(ch) - - go func() { - for { - signal, ok := <-ch - if !ok { - return - } - - if signal.Name == "org.freedesktop.systemd1.Manager.JobRemoved" { - c.jobComplete(signal) - } - - if c.subscriber.updateCh == nil { - continue - } - - var unitPath dbus.ObjectPath - switch signal.Name { - case "org.freedesktop.systemd1.Manager.JobRemoved": - unitName := signal.Body[2].(string) - c.sysobj.Call("org.freedesktop.systemd1.Manager.GetUnit", 0, unitName).Store(&unitPath) - case "org.freedesktop.systemd1.Manager.UnitNew": - unitPath = signal.Body[1].(dbus.ObjectPath) - case "org.freedesktop.DBus.Properties.PropertiesChanged": - if signal.Body[0].(string) == "org.freedesktop.systemd1.Unit" { - unitPath = signal.Path - } - } - - if unitPath == dbus.ObjectPath("") { - continue - } - - c.sendSubStateUpdate(unitPath) - } - }() -} - -// Returns two unbuffered channels which will receive all changed units every -// interval. Deleted units are sent as nil. -func (c *Conn) SubscribeUnits(interval time.Duration) (<-chan map[string]*UnitStatus, <-chan error) { - return c.SubscribeUnitsCustom(interval, 0, func(u1, u2 *UnitStatus) bool { return *u1 != *u2 }, nil) -} - -// SubscribeUnitsCustom is like SubscribeUnits but lets you specify the buffer -// size of the channels, the comparison function for detecting changes and a filter -// function for cutting down on the noise that your channel receives. 
-func (c *Conn) SubscribeUnitsCustom(interval time.Duration, buffer int, isChanged func(*UnitStatus, *UnitStatus) bool, filterUnit func(string) bool) (<-chan map[string]*UnitStatus, <-chan error) { - old := make(map[string]*UnitStatus) - statusChan := make(chan map[string]*UnitStatus, buffer) - errChan := make(chan error, buffer) - - go func() { - for { - timerChan := time.After(interval) - - units, err := c.ListUnits() - if err == nil { - cur := make(map[string]*UnitStatus) - for i := range units { - if filterUnit != nil && filterUnit(units[i].Name) { - continue - } - cur[units[i].Name] = &units[i] - } - - // add all new or changed units - changed := make(map[string]*UnitStatus) - for n, u := range cur { - if oldU, ok := old[n]; !ok || isChanged(oldU, u) { - changed[n] = u - } - delete(old, n) - } - - // add all deleted units - for oldN := range old { - changed[oldN] = nil - } - - old = cur - - if len(changed) != 0 { - statusChan <- changed - } - } else { - errChan <- err - } - - <-timerChan - } - }() - - return statusChan, errChan -} - -type SubStateUpdate struct { - UnitName string - SubState string -} - -// SetSubStateSubscriber writes to updateCh when any unit's substate changes. -// Although this writes to updateCh on every state change, the reported state -// may be more recent than the change that generated it (due to an unavoidable -// race in the systemd dbus interface). That is, this method provides a good -// way to keep a current view of all units' states, but is not guaranteed to -// show every state transition they go through. Furthermore, state changes -// will only be written to the channel with non-blocking writes. If updateCh -// is full, it attempts to write an error to errCh; if errCh is full, the error -// passes silently. -func (c *Conn) SetSubStateSubscriber(updateCh chan<- *SubStateUpdate, errCh chan<- error) { - c.subscriber.Lock() - defer c.subscriber.Unlock() - c.subscriber.updateCh = updateCh - c.subscriber.errCh = errCh -} - -func (c *Conn) sendSubStateUpdate(path dbus.ObjectPath) { - c.subscriber.Lock() - defer c.subscriber.Unlock() - - if c.shouldIgnore(path) { - return - } - - info, err := c.GetUnitProperties(string(path)) - if err != nil { - select { - case c.subscriber.errCh <- err: - default: - } - } - - name := info["Id"].(string) - substate := info["SubState"].(string) - - update := &SubStateUpdate{name, substate} - select { - case c.subscriber.updateCh <- update: - default: - select { - case c.subscriber.errCh <- errors.New("update channel full!"): - default: - } - } - - c.updateIgnore(path, info) -} - -// The ignore functions work around a wart in the systemd dbus interface. -// Requesting the properties of an unloaded unit will cause systemd to send a -// pair of UnitNew/UnitRemoved signals. Because we need to get a unit's -// properties on UnitNew (as that's the only indication of a new unit coming up -// for the first time), we would enter an infinite loop if we did not attempt -// to detect and ignore these spurious signals. The signal themselves are -// indistinguishable from relevant ones, so we (somewhat hackishly) ignore an -// unloaded unit's signals for a short time after requesting its properties. -// This means that we will miss e.g. a transient unit being restarted -// *immediately* upon failure and also a transient unit being started -// immediately after requesting its status (with systemctl status, for example, -// because this causes a UnitNew signal to be sent which then causes us to fetch -// the properties). 
- -func (c *Conn) shouldIgnore(path dbus.ObjectPath) bool { - t, ok := c.subscriber.ignore[path] - return ok && t >= time.Now().UnixNano() -} - -func (c *Conn) updateIgnore(path dbus.ObjectPath, info map[string]interface{}) { - c.cleanIgnore() - - // unit is unloaded - it will trigger bad systemd dbus behavior - if info["LoadState"].(string) == "not-found" { - c.subscriber.ignore[path] = time.Now().UnixNano() + ignoreInterval - } -} - -// without this, ignore would grow unboundedly over time -func (c *Conn) cleanIgnore() { - now := time.Now().UnixNano() - if c.subscriber.cleanIgnore < now { - c.subscriber.cleanIgnore = now + cleanIgnoreInterval - - for p, t := range c.subscriber.ignore { - if t < now { - delete(c.subscriber.ignore, p) - } - } - } -} diff --git a/vendor/github.com/coreos/go-systemd/dbus/subscription_set.go b/vendor/github.com/coreos/go-systemd/dbus/subscription_set.go deleted file mode 100644 index 5b408d584..000000000 --- a/vendor/github.com/coreos/go-systemd/dbus/subscription_set.go +++ /dev/null @@ -1,57 +0,0 @@ -// Copyright 2015 CoreOS, Inc. -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. - -package dbus - -import ( - "time" -) - -// SubscriptionSet returns a subscription set which is like conn.Subscribe but -// can filter to only return events for a set of units. -type SubscriptionSet struct { - *set - conn *Conn -} - -func (s *SubscriptionSet) filter(unit string) bool { - return !s.Contains(unit) -} - -// Subscribe starts listening for dbus events for all of the units in the set. -// Returns channels identical to conn.SubscribeUnits. -func (s *SubscriptionSet) Subscribe() (<-chan map[string]*UnitStatus, <-chan error) { - // TODO: Make fully evented by using systemd 209 with properties changed values - return s.conn.SubscribeUnitsCustom(time.Second, 0, - mismatchUnitStatus, - func(unit string) bool { return s.filter(unit) }, - ) -} - -// NewSubscriptionSet returns a new subscription set. -func (conn *Conn) NewSubscriptionSet() *SubscriptionSet { - return &SubscriptionSet{newSet(), conn} -} - -// mismatchUnitStatus returns true if the provided UnitStatus objects -// are not equivalent. false is returned if the objects are equivalent. -// Only the Name, Description and state-related fields are used in -// the comparison. -func mismatchUnitStatus(u1, u2 *UnitStatus) bool { - return u1.Name != u2.Name || - u1.Description != u2.Description || - u1.LoadState != u2.LoadState || - u1.ActiveState != u2.ActiveState || - u1.SubState != u2.SubState -} diff --git a/vendor/github.com/coreos/go-systemd/util/util.go b/vendor/github.com/coreos/go-systemd/util/util.go deleted file mode 100644 index 7828ce6f0..000000000 --- a/vendor/github.com/coreos/go-systemd/util/util.go +++ /dev/null @@ -1,90 +0,0 @@ -// Copyright 2015 CoreOS, Inc. -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. 
-// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. - -// Package util contains utility functions related to systemd that applications -// can use to check things like whether systemd is running. Note that some of -// these functions attempt to manually load systemd libraries at runtime rather -// than linking against them. -package util - -import ( - "fmt" - "io/ioutil" - "os" - "strings" -) - -var ( - ErrNoCGO = fmt.Errorf("go-systemd built with CGO disabled") -) - -// GetRunningSlice attempts to retrieve the name of the systemd slice in which -// the current process is running. -// This function is a wrapper around the libsystemd C library; if it cannot be -// opened, an error is returned. -func GetRunningSlice() (string, error) { - return getRunningSlice() -} - -// RunningFromSystemService tries to detect whether the current process has -// been invoked from a system service. The condition for this is whether the -// process is _not_ a user process. User processes are those running in session -// scopes or under per-user `systemd --user` instances. -// -// To avoid false positives on systems without `pam_systemd` (which is -// responsible for creating user sessions), this function also uses a heuristic -// to detect whether it's being invoked from a session leader process. This is -// the case if the current process is executed directly from a service file -// (e.g. with `ExecStart=/this/cmd`). Note that this heuristic will fail if the -// command is instead launched in a subshell or similar so that it is not -// session leader (e.g. `ExecStart=/bin/bash -c "/this/cmd"`) -// -// This function is a wrapper around the libsystemd C library; if this is -// unable to successfully open a handle to the library for any reason (e.g. it -// cannot be found), an error will be returned. -func RunningFromSystemService() (bool, error) { - return runningFromSystemService() -} - -// CurrentUnitName attempts to retrieve the name of the systemd system unit -// from which the calling process has been invoked. It wraps the systemd -// `sd_pid_get_unit` call, with the same caveat: for processes not part of a -// systemd system unit, this function will return an error. -func CurrentUnitName() (string, error) { - return currentUnitName() -} - -// IsRunningSystemd checks whether the host was booted with systemd as its init -// system. This functions similarly to systemd's `sd_booted(3)`: internally, it -// checks whether /run/systemd/system/ exists and is a directory. -// http://www.freedesktop.org/software/systemd/man/sd_booted.html -func IsRunningSystemd() bool { - fi, err := os.Lstat("/run/systemd/system") - if err != nil { - return false - } - return fi.IsDir() -} - -// GetMachineID returns a host's 128-bit machine ID as a string. 
This functions -// similarly to systemd's `sd_id128_get_machine`: internally, it simply reads -// the contents of /etc/machine-id -// http://www.freedesktop.org/software/systemd/man/sd_id128_get_machine.html -func GetMachineID() (string, error) { - machineID, err := ioutil.ReadFile("/etc/machine-id") - if err != nil { - return "", fmt.Errorf("failed to read /etc/machine-id: %v", err) - } - return strings.TrimSpace(string(machineID)), nil -} diff --git a/vendor/github.com/coreos/go-systemd/util/util_cgo.go b/vendor/github.com/coreos/go-systemd/util/util_cgo.go deleted file mode 100644 index 22c0d6099..000000000 --- a/vendor/github.com/coreos/go-systemd/util/util_cgo.go +++ /dev/null @@ -1,174 +0,0 @@ -// Copyright 2016 CoreOS, Inc. -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. - -// +build cgo - -package util - -// #include -// #include -// #include -// -// int -// my_sd_pid_get_owner_uid(void *f, pid_t pid, uid_t *uid) -// { -// int (*sd_pid_get_owner_uid)(pid_t, uid_t *); -// -// sd_pid_get_owner_uid = (int (*)(pid_t, uid_t *))f; -// return sd_pid_get_owner_uid(pid, uid); -// } -// -// int -// my_sd_pid_get_unit(void *f, pid_t pid, char **unit) -// { -// int (*sd_pid_get_unit)(pid_t, char **); -// -// sd_pid_get_unit = (int (*)(pid_t, char **))f; -// return sd_pid_get_unit(pid, unit); -// } -// -// int -// my_sd_pid_get_slice(void *f, pid_t pid, char **slice) -// { -// int (*sd_pid_get_slice)(pid_t, char **); -// -// sd_pid_get_slice = (int (*)(pid_t, char **))f; -// return sd_pid_get_slice(pid, slice); -// } -// -// int -// am_session_leader() -// { -// return (getsid(0) == getpid()); -// } -import "C" -import ( - "fmt" - "syscall" - "unsafe" - - "github.com/coreos/pkg/dlopen" -) - -var libsystemdNames = []string{ - // systemd < 209 - "libsystemd-login.so.0", - "libsystemd-login.so", - - // systemd >= 209 merged libsystemd-login into libsystemd proper - "libsystemd.so.0", - "libsystemd.so", -} - -func getRunningSlice() (slice string, err error) { - var h *dlopen.LibHandle - h, err = dlopen.GetHandle(libsystemdNames) - if err != nil { - return - } - defer func() { - if err1 := h.Close(); err1 != nil { - err = err1 - } - }() - - sd_pid_get_slice, err := h.GetSymbolPointer("sd_pid_get_slice") - if err != nil { - return - } - - var s string - sl := C.CString(s) - defer C.free(unsafe.Pointer(sl)) - - ret := C.my_sd_pid_get_slice(sd_pid_get_slice, 0, &sl) - if ret < 0 { - err = fmt.Errorf("error calling sd_pid_get_slice: %v", syscall.Errno(-ret)) - return - } - - return C.GoString(sl), nil -} - -func runningFromSystemService() (ret bool, err error) { - var h *dlopen.LibHandle - h, err = dlopen.GetHandle(libsystemdNames) - if err != nil { - return - } - defer func() { - if err1 := h.Close(); err1 != nil { - err = err1 - } - }() - - sd_pid_get_owner_uid, err := h.GetSymbolPointer("sd_pid_get_owner_uid") - if err != nil { - return - } - - var uid C.uid_t - errno := C.my_sd_pid_get_owner_uid(sd_pid_get_owner_uid, 0, &uid) - serrno := syscall.Errno(-errno) - // when we're running from a 
unit file, sd_pid_get_owner_uid returns - // ENOENT (systemd <220) or ENXIO (systemd >=220) - switch { - case errno >= 0: - ret = false - case serrno == syscall.ENOENT, serrno == syscall.ENXIO: - // Since the implementation of sessions in systemd relies on - // the `pam_systemd` module, using the sd_pid_get_owner_uid - // heuristic alone can result in false positives if that module - // (or PAM itself) is not present or properly configured on the - // system. As such, we also check if we're the session leader, - // which should be the case if we're invoked from a unit file, - // but not if e.g. we're invoked from the command line from a - // user's login session - ret = C.am_session_leader() == 1 - default: - err = fmt.Errorf("error calling sd_pid_get_owner_uid: %v", syscall.Errno(-errno)) - } - return -} - -func currentUnitName() (unit string, err error) { - var h *dlopen.LibHandle - h, err = dlopen.GetHandle(libsystemdNames) - if err != nil { - return - } - defer func() { - if err1 := h.Close(); err1 != nil { - err = err1 - } - }() - - sd_pid_get_unit, err := h.GetSymbolPointer("sd_pid_get_unit") - if err != nil { - return - } - - var s string - u := C.CString(s) - defer C.free(unsafe.Pointer(u)) - - ret := C.my_sd_pid_get_unit(sd_pid_get_unit, 0, &u) - if ret < 0 { - err = fmt.Errorf("error calling sd_pid_get_unit: %v", syscall.Errno(-ret)) - return - } - - unit = C.GoString(u) - return -} diff --git a/vendor/github.com/coreos/go-systemd/util/util_stub.go b/vendor/github.com/coreos/go-systemd/util/util_stub.go deleted file mode 100644 index 477589e12..000000000 --- a/vendor/github.com/coreos/go-systemd/util/util_stub.go +++ /dev/null @@ -1,23 +0,0 @@ -// Copyright 2016 CoreOS, Inc. -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. - -// +build !cgo - -package util - -func getRunningSlice() (string, error) { return "", ErrNoCGO } - -func runningFromSystemService() (bool, error) { return false, ErrNoCGO } - -func currentUnitName() (string, error) { return "", ErrNoCGO } diff --git a/vendor/github.com/coreos/pkg/LICENSE b/vendor/github.com/coreos/pkg/LICENSE deleted file mode 100644 index 5c304d1a4..000000000 --- a/vendor/github.com/coreos/pkg/LICENSE +++ /dev/null @@ -1,201 +0,0 @@ -Apache License - Version 2.0, January 2004 - http://www.apache.org/licenses/ - - TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION - - 1. Definitions. - - "License" shall mean the terms and conditions for use, reproduction, - and distribution as defined by Sections 1 through 9 of this document. - - "Licensor" shall mean the copyright owner or entity authorized by - the copyright owner that is granting the License. - - "Legal Entity" shall mean the union of the acting entity and all - other entities that control, are controlled by, or are under common - control with that entity. 
For the purposes of this definition, - "control" means (i) the power, direct or indirect, to cause the - direction or management of such entity, whether by contract or - otherwise, or (ii) ownership of fifty percent (50%) or more of the - outstanding shares, or (iii) beneficial ownership of such entity. - - "You" (or "Your") shall mean an individual or Legal Entity - exercising permissions granted by this License. - - "Source" form shall mean the preferred form for making modifications, - including but not limited to software source code, documentation - source, and configuration files. - - "Object" form shall mean any form resulting from mechanical - transformation or translation of a Source form, including but - not limited to compiled object code, generated documentation, - and conversions to other media types. - - "Work" shall mean the work of authorship, whether in Source or - Object form, made available under the License, as indicated by a - copyright notice that is included in or attached to the work - (an example is provided in the Appendix below). - - "Derivative Works" shall mean any work, whether in Source or Object - form, that is based on (or derived from) the Work and for which the - editorial revisions, annotations, elaborations, or other modifications - represent, as a whole, an original work of authorship. For the purposes - of this License, Derivative Works shall not include works that remain - separable from, or merely link (or bind by name) to the interfaces of, - the Work and Derivative Works thereof. - - "Contribution" shall mean any work of authorship, including - the original version of the Work and any modifications or additions - to that Work or Derivative Works thereof, that is intentionally - submitted to Licensor for inclusion in the Work by the copyright owner - or by an individual or Legal Entity authorized to submit on behalf of - the copyright owner. For the purposes of this definition, "submitted" - means any form of electronic, verbal, or written communication sent - to the Licensor or its representatives, including but not limited to - communication on electronic mailing lists, source code control systems, - and issue tracking systems that are managed by, or on behalf of, the - Licensor for the purpose of discussing and improving the Work, but - excluding communication that is conspicuously marked or otherwise - designated in writing by the copyright owner as "Not a Contribution." - - "Contributor" shall mean Licensor and any individual or Legal Entity - on behalf of whom a Contribution has been received by Licensor and - subsequently incorporated within the Work. - - 2. Grant of Copyright License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - copyright license to reproduce, prepare Derivative Works of, - publicly display, publicly perform, sublicense, and distribute the - Work and such Derivative Works in Source or Object form. - - 3. Grant of Patent License. 
Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - (except as stated in this section) patent license to make, have made, - use, offer to sell, sell, import, and otherwise transfer the Work, - where such license applies only to those patent claims licensable - by such Contributor that are necessarily infringed by their - Contribution(s) alone or by combination of their Contribution(s) - with the Work to which such Contribution(s) was submitted. If You - institute patent litigation against any entity (including a - cross-claim or counterclaim in a lawsuit) alleging that the Work - or a Contribution incorporated within the Work constitutes direct - or contributory patent infringement, then any patent licenses - granted to You under this License for that Work shall terminate - as of the date such litigation is filed. - - 4. Redistribution. You may reproduce and distribute copies of the - Work or Derivative Works thereof in any medium, with or without - modifications, and in Source or Object form, provided that You - meet the following conditions: - - (a) You must give any other recipients of the Work or - Derivative Works a copy of this License; and - - (b) You must cause any modified files to carry prominent notices - stating that You changed the files; and - - (c) You must retain, in the Source form of any Derivative Works - that You distribute, all copyright, patent, trademark, and - attribution notices from the Source form of the Work, - excluding those notices that do not pertain to any part of - the Derivative Works; and - - (d) If the Work includes a "NOTICE" text file as part of its - distribution, then any Derivative Works that You distribute must - include a readable copy of the attribution notices contained - within such NOTICE file, excluding those notices that do not - pertain to any part of the Derivative Works, in at least one - of the following places: within a NOTICE text file distributed - as part of the Derivative Works; within the Source form or - documentation, if provided along with the Derivative Works; or, - within a display generated by the Derivative Works, if and - wherever such third-party notices normally appear. The contents - of the NOTICE file are for informational purposes only and - do not modify the License. You may add Your own attribution - notices within Derivative Works that You distribute, alongside - or as an addendum to the NOTICE text from the Work, provided - that such additional attribution notices cannot be construed - as modifying the License. - - You may add Your own copyright statement to Your modifications and - may provide additional or different license terms and conditions - for use, reproduction, or distribution of Your modifications, or - for any such Derivative Works as a whole, provided Your use, - reproduction, and distribution of the Work otherwise complies with - the conditions stated in this License. - - 5. Submission of Contributions. Unless You explicitly state otherwise, - any Contribution intentionally submitted for inclusion in the Work - by You to the Licensor shall be under the terms and conditions of - this License, without any additional terms or conditions. - Notwithstanding the above, nothing herein shall supersede or modify - the terms of any separate license agreement you may have executed - with Licensor regarding such Contributions. - - 6. Trademarks. 
This License does not grant permission to use the trade - names, trademarks, service marks, or product names of the Licensor, - except as required for reasonable and customary use in describing the - origin of the Work and reproducing the content of the NOTICE file. - - 7. Disclaimer of Warranty. Unless required by applicable law or - agreed to in writing, Licensor provides the Work (and each - Contributor provides its Contributions) on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or - implied, including, without limitation, any warranties or conditions - of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A - PARTICULAR PURPOSE. You are solely responsible for determining the - appropriateness of using or redistributing the Work and assume any - risks associated with Your exercise of permissions under this License. - - 8. Limitation of Liability. In no event and under no legal theory, - whether in tort (including negligence), contract, or otherwise, - unless required by applicable law (such as deliberate and grossly - negligent acts) or agreed to in writing, shall any Contributor be - liable to You for damages, including any direct, indirect, special, - incidental, or consequential damages of any character arising as a - result of this License or out of the use or inability to use the - Work (including but not limited to damages for loss of goodwill, - work stoppage, computer failure or malfunction, or any and all - other commercial damages or losses), even if such Contributor - has been advised of the possibility of such damages. - - 9. Accepting Warranty or Additional Liability. While redistributing - the Work or Derivative Works thereof, You may choose to offer, - and charge a fee for, acceptance of support, warranty, indemnity, - or other liability obligations and/or rights consistent with this - License. However, in accepting such obligations, You may act only - on Your own behalf and on Your sole responsibility, not on behalf - of any other Contributor, and only if You agree to indemnify, - defend, and hold each Contributor harmless for any liability - incurred by, or claims asserted against, such Contributor by reason - of your accepting any such warranty or additional liability. - - END OF TERMS AND CONDITIONS - - APPENDIX: How to apply the Apache License to your work. - - To apply the Apache License to your work, attach the following - boilerplate notice, with the fields enclosed by brackets "{}" - replaced with your own identifying information. (Don't include - the brackets!) The text should be enclosed in the appropriate - comment syntax for the file format. We also recommend that a - file or class name and description of purpose be included on the - same "printed page" as the copyright notice for easier - identification within third-party archives. - - Copyright {yyyy} {name of copyright owner} - - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
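Looking back at the `util` package deleted above (before this license text), here is a hedged usage sketch of its exported helpers. `IsRunningSystemd` and `GetMachineID` are pure Go; the remaining calls go through the dlopen-based libsystemd wrappers and return `ErrNoCGO` when the package is built without cgo (see util_stub.go).

```go
// Illustrative only: exercising the helpers from the deleted util package.
package main

import (
	"fmt"
	"log"

	"github.com/coreos/go-systemd/util"
)

func main() {
	if !util.IsRunningSystemd() {
		log.Fatal("this host was not booted with systemd")
	}

	id, err := util.GetMachineID()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("machine-id:", id)

	// These wrap libsystemd via dlopen; without cgo they fail with ErrNoCGO,
	// and outside a systemd unit they report an error as documented above.
	if unit, err := util.CurrentUnitName(); err != nil {
		fmt.Println("not running from a unit:", err)
	} else {
		fmt.Println("unit:", unit)
	}

	if fromService, err := util.RunningFromSystemService(); err != nil {
		fmt.Println("detection unavailable:", err)
	} else {
		fmt.Println("running from a system service:", fromService)
	}
}
```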
diff --git a/vendor/github.com/coreos/pkg/NOTICE b/vendor/github.com/coreos/pkg/NOTICE deleted file mode 100644 index b39ddfa5c..000000000 --- a/vendor/github.com/coreos/pkg/NOTICE +++ /dev/null @@ -1,5 +0,0 @@ -CoreOS Project -Copyright 2014 CoreOS, Inc - -This product includes software developed at CoreOS, Inc. -(http://www.coreos.com/). diff --git a/vendor/github.com/coreos/pkg/dlopen/dlopen.go b/vendor/github.com/coreos/pkg/dlopen/dlopen.go deleted file mode 100644 index 23774f612..000000000 --- a/vendor/github.com/coreos/pkg/dlopen/dlopen.go +++ /dev/null @@ -1,82 +0,0 @@ -// Copyright 2016 CoreOS, Inc. -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. - -// Package dlopen provides some convenience functions to dlopen a library and -// get its symbols. -package dlopen - -// #cgo LDFLAGS: -ldl -// #include -// #include -import "C" -import ( - "errors" - "fmt" - "unsafe" -) - -var ErrSoNotFound = errors.New("unable to open a handle to the library") - -// LibHandle represents an open handle to a library (.so) -type LibHandle struct { - Handle unsafe.Pointer - Libname string -} - -// GetHandle tries to get a handle to a library (.so), attempting to access it -// by the names specified in libs and returning the first that is successfully -// opened. Callers are responsible for closing the handler. If no library can -// be successfully opened, an error is returned. -func GetHandle(libs []string) (*LibHandle, error) { - for _, name := range libs { - libname := C.CString(name) - defer C.free(unsafe.Pointer(libname)) - handle := C.dlopen(libname, C.RTLD_LAZY) - if handle != nil { - h := &LibHandle{ - Handle: handle, - Libname: name, - } - return h, nil - } - } - return nil, ErrSoNotFound -} - -// GetSymbolPointer takes a symbol name and returns a pointer to the symbol. -func (l *LibHandle) GetSymbolPointer(symbol string) (unsafe.Pointer, error) { - sym := C.CString(symbol) - defer C.free(unsafe.Pointer(sym)) - - C.dlerror() - p := C.dlsym(l.Handle, sym) - e := C.dlerror() - if e != nil { - return nil, fmt.Errorf("error resolving symbol %q: %v", symbol, errors.New(C.GoString(e))) - } - - return p, nil -} - -// Close closes a LibHandle. -func (l *LibHandle) Close() error { - C.dlerror() - C.dlclose(l.Handle) - e := C.dlerror() - if e != nil { - return fmt.Errorf("error closing %v: %v", l.Libname, errors.New(C.GoString(e))) - } - - return nil -} diff --git a/vendor/github.com/coreos/pkg/dlopen/dlopen_example.go b/vendor/github.com/coreos/pkg/dlopen/dlopen_example.go deleted file mode 100644 index 48a660104..000000000 --- a/vendor/github.com/coreos/pkg/dlopen/dlopen_example.go +++ /dev/null @@ -1,56 +0,0 @@ -// Copyright 2015 CoreOS, Inc. -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. 
-// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. -// -// +build linux - -package dlopen - -// #include -// #include -// -// int -// my_strlen(void *f, const char *s) -// { -// size_t (*strlen)(const char *); -// -// strlen = (size_t (*)(const char *))f; -// return strlen(s); -// } -import "C" - -import ( - "fmt" - "unsafe" -) - -func strlen(libs []string, s string) (int, error) { - h, err := GetHandle(libs) - if err != nil { - return -1, fmt.Errorf(`couldn't get a handle to the library: %v`, err) - } - defer h.Close() - - f := "strlen" - cs := C.CString(s) - defer C.free(unsafe.Pointer(cs)) - - strlen, err := h.GetSymbolPointer(f) - if err != nil { - return -1, fmt.Errorf(`couldn't get symbol %q: %v`, f, err) - } - - len := C.my_strlen(strlen, cs) - - return int(len), nil -} diff --git a/vendor/github.com/cyphar/filepath-securejoin/LICENSE b/vendor/github.com/cyphar/filepath-securejoin/LICENSE deleted file mode 100644 index bec842f29..000000000 --- a/vendor/github.com/cyphar/filepath-securejoin/LICENSE +++ /dev/null @@ -1,28 +0,0 @@ -Copyright (C) 2014-2015 Docker Inc & Go Authors. All rights reserved. -Copyright (C) 2017 SUSE LLC. All rights reserved. - -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions are -met: - - * Redistributions of source code must retain the above copyright -notice, this list of conditions and the following disclaimer. - * Redistributions in binary form must reproduce the above -copyright notice, this list of conditions and the following disclaimer -in the documentation and/or other materials provided with the -distribution. - * Neither the name of Google Inc. nor the names of its -contributors may be used to endorse or promote products derived from -this software without specific prior written permission. - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR -A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT -OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, -SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT -LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, -DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY -THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT -(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE -OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/vendor/github.com/cyphar/filepath-securejoin/README.md b/vendor/github.com/cyphar/filepath-securejoin/README.md deleted file mode 100644 index 49b2baa9f..000000000 --- a/vendor/github.com/cyphar/filepath-securejoin/README.md +++ /dev/null @@ -1,65 +0,0 @@ -## `filepath-securejoin` ## - -[![Build Status](https://travis-ci.org/cyphar/filepath-securejoin.svg?branch=master)](https://travis-ci.org/cyphar/filepath-securejoin) - -An implementation of `SecureJoin`, a [candidate for inclusion in the Go -standard library][go#20126]. 
The purpose of this function is to be a "secure" -alternative to `filepath.Join`, and in particular it provides certain -guarantees that are not provided by `filepath.Join`. - -This is the function prototype: - -```go -func SecureJoin(root, unsafePath string) (string, error) -``` - -This library **guarantees** the following: - -* If no error is set, the resulting string **must** be a child path of - `SecureJoin` and will not contain any symlink path components (they will all - be expanded). - -* When expanding symlinks, all symlink path components **must** be resolved - relative to the provided root. In particular, this can be considered a - userspace implementation of how `chroot(2)` operates on file paths. Note that - these symlinks will **not** be expanded lexically (`filepath.Clean` is not - called on the input before processing). - -* Non-existant path components are unaffected by `SecureJoin` (similar to - `filepath.EvalSymlinks`'s semantics). - -* The returned path will always be `filepath.Clean`ed and thus not contain any - `..` components. - -A (trivial) implementation of this function on GNU/Linux systems could be done -with the following (note that this requires root privileges and is far more -opaque than the implementation in this library, and also requires that -`readlink` is inside the `root` path): - -```go -package securejoin - -import ( - "os/exec" - "path/filepath" -) - -func SecureJoin(root, unsafePath string) (string, error) { - unsafePath = string(filepath.Separator) + unsafePath - cmd := exec.Command("chroot", root, - "readlink", "--canonicalize-missing", "--no-newline", unsafePath) - output, err := cmd.CombinedOutput() - if err != nil { - return "", err - } - expanded := string(output) - return filepath.Join(root, expanded), nil -} -``` - -[go#20126]: https://github.com/golang/go/issues/20126 - -### License ### - -The license of this project is the same as Go, which is a BSD 3-clause license -available in the `LICENSE` file. diff --git a/vendor/github.com/cyphar/filepath-securejoin/join.go b/vendor/github.com/cyphar/filepath-securejoin/join.go deleted file mode 100644 index f20985479..000000000 --- a/vendor/github.com/cyphar/filepath-securejoin/join.go +++ /dev/null @@ -1,135 +0,0 @@ -// Copyright (C) 2014-2015 Docker Inc & Go Authors. All rights reserved. -// Copyright (C) 2017 SUSE LLC. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// Package securejoin is an implementation of the hopefully-soon-to-be-included -// SecureJoin helper that is meant to be part of the "path/filepath" package. -// The purpose of this project is to provide a PoC implementation to make the -// SecureJoin proposal (https://github.com/golang/go/issues/20126) more -// tangible. -package securejoin - -import ( - "bytes" - "fmt" - "os" - "path/filepath" - "strings" - "syscall" - - "github.com/pkg/errors" -) - -// ErrSymlinkLoop is returned by SecureJoinVFS when too many symlinks have been -// evaluated in attempting to securely join the two given paths. -var ErrSymlinkLoop = fmt.Errorf("SecureJoin: too many links") - -// IsNotExist tells you if err is an error that implies that either the path -// accessed does not exist (or path components don't exist). This is -// effectively a more broad version of os.IsNotExist. -func IsNotExist(err error) bool { - // If it's a bone-fide ENOENT just bail. 
- if os.IsNotExist(errors.Cause(err)) { - return true - } - - // Check that it's not actually an ENOTDIR, which in some cases is a more - // convoluted case of ENOENT (usually involving weird paths). - var errno error - switch err := errors.Cause(err).(type) { - case *os.PathError: - errno = err.Err - case *os.LinkError: - errno = err.Err - case *os.SyscallError: - errno = err.Err - } - return errno == syscall.ENOTDIR || errno == syscall.ENOENT -} - -// SecureJoinVFS joins the two given path components (similar to Join) except -// that the returned path is guaranteed to be scoped inside the provided root -// path (when evaluated). Any symbolic links in the path are evaluated with the -// given root treated as the root of the filesystem, similar to a chroot. The -// filesystem state is evaluated through the given VFS interface (if nil, the -// standard os.* family of functions are used). -// -// Note that the guarantees provided by this function only apply if the path -// components in the returned string are not modified (in other words are not -// replaced with symlinks on the filesystem) after this function has returned. -// Such a symlink race is necessarily out-of-scope of SecureJoin. -func SecureJoinVFS(root, unsafePath string, vfs VFS) (string, error) { - // Use the os.* VFS implementation if none was specified. - if vfs == nil { - vfs = osVFS{} - } - - var path bytes.Buffer - n := 0 - for unsafePath != "" { - if n > 255 { - return "", ErrSymlinkLoop - } - - // Next path component, p. - i := strings.IndexRune(unsafePath, filepath.Separator) - var p string - if i == -1 { - p, unsafePath = unsafePath, "" - } else { - p, unsafePath = unsafePath[:i], unsafePath[i+1:] - } - - // Create a cleaned path, using the lexical semantics of /../a, to - // create a "scoped" path component which can safely be joined to fullP - // for evaluation. At this point, path.String() doesn't contain any - // symlink components. - cleanP := filepath.Clean(string(filepath.Separator) + path.String() + p) - if cleanP == string(filepath.Separator) { - path.Reset() - continue - } - fullP := filepath.Clean(root + cleanP) - - // Figure out whether the path is a symlink. - fi, err := vfs.Lstat(fullP) - if err != nil && !IsNotExist(err) { - return "", err - } - // Treat non-existent path components the same as non-symlinks (we - // can't do any better here). - if IsNotExist(err) || fi.Mode()&os.ModeSymlink == 0 { - path.WriteString(p) - path.WriteRune(filepath.Separator) - continue - } - - // Only increment when we actually dereference a link. - n++ - - // It's a symlink, expand it by prepending it to the yet-unparsed path. - dest, err := vfs.Readlink(fullP) - if err != nil { - return "", err - } - // Absolute symlinks reset any work we've already done. - if filepath.IsAbs(dest) { - path.Reset() - } - unsafePath = dest + string(filepath.Separator) + unsafePath - } - - // We have to clean path.String() here because it may contain '..' - // components that are entirely lexical, but would be misleading otherwise. - // And finally do a final clean to ensure that root is also lexically - // clean. - fullP := filepath.Clean(string(filepath.Separator) + path.String()) - return filepath.Clean(root + fullP), nil -} - -// SecureJoin is a wrapper around SecureJoinVFS that just uses the os.* library -// of functions as the VFS. If in doubt, use this function over SecureJoinVFS. 
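To make the scoping guarantee concrete, here is a hedged usage sketch of the `SecureJoin` wrapper documented here; the root directory and the hostile input path are purely illustrative values.

```go
// Illustrative only: joining untrusted input to a root directory so the
// result cannot escape it, even via "../" components or symlinks under root.
package main

import (
	"fmt"
	"log"

	securejoin "github.com/cyphar/filepath-securejoin"
)

func main() {
	// A hostile path that tries to climb out of the root.
	unsafePath := "../../etc/passwd"

	safe, err := securejoin.SecureJoin("/var/lib/myapp/rootfs", unsafePath)
	if err != nil {
		log.Fatal(err)
	}
	// The result stays inside the root, e.g. /var/lib/myapp/rootfs/etc/passwd.
	fmt.Println(safe)
}
```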
-func SecureJoin(root, unsafePath string) (string, error) { - return SecureJoinVFS(root, unsafePath, nil) -} diff --git a/vendor/github.com/cyphar/filepath-securejoin/vendor.conf b/vendor/github.com/cyphar/filepath-securejoin/vendor.conf deleted file mode 100644 index 66bb574b9..000000000 --- a/vendor/github.com/cyphar/filepath-securejoin/vendor.conf +++ /dev/null @@ -1 +0,0 @@ -github.com/pkg/errors v0.8.0 diff --git a/vendor/github.com/cyphar/filepath-securejoin/vfs.go b/vendor/github.com/cyphar/filepath-securejoin/vfs.go deleted file mode 100644 index a82a5eae1..000000000 --- a/vendor/github.com/cyphar/filepath-securejoin/vfs.go +++ /dev/null @@ -1,41 +0,0 @@ -// Copyright (C) 2017 SUSE LLC. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -package securejoin - -import "os" - -// In future this should be moved into a separate package, because now there -// are several projects (umoci and go-mtree) that are using this sort of -// interface. - -// VFS is the minimal interface necessary to use SecureJoinVFS. A nil VFS is -// equivalent to using the standard os.* family of functions. This is mainly -// used for the purposes of mock testing, but also can be used to otherwise use -// SecureJoin with VFS-like system. -type VFS interface { - // Lstat returns a FileInfo describing the named file. If the file is a - // symbolic link, the returned FileInfo describes the symbolic link. Lstat - // makes no attempt to follow the link. These semantics are identical to - // os.Lstat. - Lstat(name string) (os.FileInfo, error) - - // Readlink returns the destination of the named symbolic link. These - // semantics are identical to os.Readlink. - Readlink(name string) (string, error) -} - -// osVFS is the "nil" VFS, in that it just passes everything through to the os -// module. -type osVFS struct{} - -// Lstat returns a FileInfo describing the named file. If the file is a -// symbolic link, the returned FileInfo describes the symbolic link. Lstat -// makes no attempt to follow the link. These semantics are identical to -// os.Lstat. -func (o osVFS) Lstat(name string) (os.FileInfo, error) { return os.Lstat(name) } - -// Readlink returns the destination of the named symbolic link. These -// semantics are identical to os.Readlink. -func (o osVFS) Readlink(name string) (string, error) { return os.Readlink(name) } diff --git a/vendor/github.com/godbus/dbus/LICENSE b/vendor/github.com/godbus/dbus/LICENSE deleted file mode 100644 index 670d88fca..000000000 --- a/vendor/github.com/godbus/dbus/LICENSE +++ /dev/null @@ -1,25 +0,0 @@ -Copyright (c) 2013, Georg Reinke (), Google -All rights reserved. - -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions -are met: - -1. Redistributions of source code must retain the above copyright notice, -this list of conditions and the following disclaimer. - -2. Redistributions in binary form must reproduce the above copyright -notice, this list of conditions and the following disclaimer in the -documentation and/or other materials provided with the distribution. - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR -A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT -HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, -SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED -TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR -PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF -LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING -NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS -SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/vendor/github.com/godbus/dbus/README.markdown b/vendor/github.com/godbus/dbus/README.markdown deleted file mode 100644 index 0a6e7e5bf..000000000 --- a/vendor/github.com/godbus/dbus/README.markdown +++ /dev/null @@ -1,41 +0,0 @@ -dbus ----- - -dbus is a simple library that implements native Go client bindings for the -D-Bus message bus system. - -### Features - -* Complete native implementation of the D-Bus message protocol -* Go-like API (channels for signals / asynchronous method calls, Goroutine-safe connections) -* Subpackages that help with the introspection / property interfaces - -### Installation - -This packages requires Go 1.1. If you installed it and set up your GOPATH, just run: - -``` -go get github.com/godbus/dbus -``` - -If you want to use the subpackages, you can install them the same way. - -### Usage - -The complete package documentation and some simple examples are available at -[godoc.org](http://godoc.org/github.com/godbus/dbus). Also, the -[_examples](https://github.com/godbus/dbus/tree/master/_examples) directory -gives a short overview over the basic usage. - -#### Projects using godbus -- [notify](https://github.com/esiqveland/notify) provides desktop notifications over dbus into a library. - -Please note that the API is considered unstable for now and may change without -further notice. - -### License - -go.dbus is available under the Simplified BSD License; see LICENSE for the full -text. - -Nearly all of the credit for this library goes to github.com/guelfey/go.dbus. diff --git a/vendor/github.com/godbus/dbus/auth.go b/vendor/github.com/godbus/dbus/auth.go deleted file mode 100644 index 98017b693..000000000 --- a/vendor/github.com/godbus/dbus/auth.go +++ /dev/null @@ -1,253 +0,0 @@ -package dbus - -import ( - "bufio" - "bytes" - "errors" - "io" - "os" - "strconv" -) - -// AuthStatus represents the Status of an authentication mechanism. -type AuthStatus byte - -const ( - // AuthOk signals that authentication is finished; the next command - // from the server should be an OK. - AuthOk AuthStatus = iota - - // AuthContinue signals that additional data is needed; the next command - // from the server should be a DATA. - AuthContinue - - // AuthError signals an error; the server sent invalid data or some - // other unexpected thing happened and the current authentication - // process should be aborted. - AuthError -) - -type authState byte - -const ( - waitingForData authState = iota - waitingForOk - waitingForReject -) - -// Auth defines the behaviour of an authentication mechanism. -type Auth interface { - // Return the name of the mechnism, the argument to the first AUTH command - // and the next status. - FirstData() (name, resp []byte, status AuthStatus) - - // Process the given DATA command, and return the argument to the DATA - // command and the next status. If len(resp) == 0, no DATA command is sent. 
- HandleData(data []byte) (resp []byte, status AuthStatus) -} - -// Auth authenticates the connection, trying the given list of authentication -// mechanisms (in that order). If nil is passed, the EXTERNAL and -// DBUS_COOKIE_SHA1 mechanisms are tried for the current user. For private -// connections, this method must be called before sending any messages to the -// bus. Auth must not be called on shared connections. -func (conn *Conn) Auth(methods []Auth) error { - if methods == nil { - uid := strconv.Itoa(os.Getuid()) - methods = []Auth{AuthExternal(uid), AuthCookieSha1(uid, getHomeDir())} - } - in := bufio.NewReader(conn.transport) - err := conn.transport.SendNullByte() - if err != nil { - return err - } - err = authWriteLine(conn.transport, []byte("AUTH")) - if err != nil { - return err - } - s, err := authReadLine(in) - if err != nil { - return err - } - if len(s) < 2 || !bytes.Equal(s[0], []byte("REJECTED")) { - return errors.New("dbus: authentication protocol error") - } - s = s[1:] - for _, v := range s { - for _, m := range methods { - if name, data, status := m.FirstData(); bytes.Equal(v, name) { - var ok bool - err = authWriteLine(conn.transport, []byte("AUTH"), []byte(v), data) - if err != nil { - return err - } - switch status { - case AuthOk: - err, ok = conn.tryAuth(m, waitingForOk, in) - case AuthContinue: - err, ok = conn.tryAuth(m, waitingForData, in) - default: - panic("dbus: invalid authentication status") - } - if err != nil { - return err - } - if ok { - if conn.transport.SupportsUnixFDs() { - err = authWriteLine(conn, []byte("NEGOTIATE_UNIX_FD")) - if err != nil { - return err - } - line, err := authReadLine(in) - if err != nil { - return err - } - switch { - case bytes.Equal(line[0], []byte("AGREE_UNIX_FD")): - conn.EnableUnixFDs() - conn.unixFD = true - case bytes.Equal(line[0], []byte("ERROR")): - default: - return errors.New("dbus: authentication protocol error") - } - } - err = authWriteLine(conn.transport, []byte("BEGIN")) - if err != nil { - return err - } - go conn.inWorker() - go conn.outWorker() - return nil - } - } - } - } - return errors.New("dbus: authentication failed") -} - -// tryAuth tries to authenticate with m as the mechanism, using state as the -// initial authState and in for reading input. It returns (nil, true) on -// success, (nil, false) on a REJECTED and (someErr, false) if some other -// error occured. 
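For context, a hedged sketch of the private-connection handshake that `Auth` participates in, using `SystemBusPrivate`, `Auth(nil)` and `Hello` as described in this package. Shared connections (`SessionBus`/`SystemBus`) perform these steps internally and must not have `Auth` or `Hello` called on them.

```go
// Illustrative only: authenticating a private godbus connection by hand.
package main

import (
	"log"

	"github.com/godbus/dbus"
)

func main() {
	conn, err := dbus.SystemBusPrivate()
	if err != nil {
		log.Fatal(err)
	}

	// Passing nil tries EXTERNAL and DBUS_COOKIE_SHA1 for the current user.
	if err := conn.Auth(nil); err != nil {
		conn.Close()
		log.Fatal(err)
	}

	// Hello must follow authentication and precede any other messages.
	if err := conn.Hello(); err != nil {
		conn.Close()
		log.Fatal(err)
	}
	defer conn.Close()

	// After Hello, the first entry of Names() is the connection's unique name.
	log.Println("unique name:", conn.Names()[0])
}
```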
-func (conn *Conn) tryAuth(m Auth, state authState, in *bufio.Reader) (error, bool) { - for { - s, err := authReadLine(in) - if err != nil { - return err, false - } - switch { - case state == waitingForData && string(s[0]) == "DATA": - if len(s) != 2 { - err = authWriteLine(conn.transport, []byte("ERROR")) - if err != nil { - return err, false - } - continue - } - data, status := m.HandleData(s[1]) - switch status { - case AuthOk, AuthContinue: - if len(data) != 0 { - err = authWriteLine(conn.transport, []byte("DATA"), data) - if err != nil { - return err, false - } - } - if status == AuthOk { - state = waitingForOk - } - case AuthError: - err = authWriteLine(conn.transport, []byte("ERROR")) - if err != nil { - return err, false - } - } - case state == waitingForData && string(s[0]) == "REJECTED": - return nil, false - case state == waitingForData && string(s[0]) == "ERROR": - err = authWriteLine(conn.transport, []byte("CANCEL")) - if err != nil { - return err, false - } - state = waitingForReject - case state == waitingForData && string(s[0]) == "OK": - if len(s) != 2 { - err = authWriteLine(conn.transport, []byte("CANCEL")) - if err != nil { - return err, false - } - state = waitingForReject - } - conn.uuid = string(s[1]) - return nil, true - case state == waitingForData: - err = authWriteLine(conn.transport, []byte("ERROR")) - if err != nil { - return err, false - } - case state == waitingForOk && string(s[0]) == "OK": - if len(s) != 2 { - err = authWriteLine(conn.transport, []byte("CANCEL")) - if err != nil { - return err, false - } - state = waitingForReject - } - conn.uuid = string(s[1]) - return nil, true - case state == waitingForOk && string(s[0]) == "REJECTED": - return nil, false - case state == waitingForOk && (string(s[0]) == "DATA" || - string(s[0]) == "ERROR"): - - err = authWriteLine(conn.transport, []byte("CANCEL")) - if err != nil { - return err, false - } - state = waitingForReject - case state == waitingForOk: - err = authWriteLine(conn.transport, []byte("ERROR")) - if err != nil { - return err, false - } - case state == waitingForReject && string(s[0]) == "REJECTED": - return nil, false - case state == waitingForReject: - return errors.New("dbus: authentication protocol error"), false - default: - panic("dbus: invalid auth state") - } - } -} - -// authReadLine reads a line and separates it into its fields. -func authReadLine(in *bufio.Reader) ([][]byte, error) { - data, err := in.ReadBytes('\n') - if err != nil { - return nil, err - } - data = bytes.TrimSuffix(data, []byte("\r\n")) - return bytes.Split(data, []byte{' '}), nil -} - -// authWriteLine writes the given line in the authentication protocol format -// (elements of data separated by a " " and terminated by "\r\n"). -func authWriteLine(out io.Writer, data ...[]byte) error { - buf := make([]byte, 0) - for i, v := range data { - buf = append(buf, v...) - if i != len(data)-1 { - buf = append(buf, ' ') - } - } - buf = append(buf, '\r') - buf = append(buf, '\n') - n, err := out.Write(buf) - if err != nil { - return err - } - if n != len(buf) { - return io.ErrUnexpectedEOF - } - return nil -} diff --git a/vendor/github.com/godbus/dbus/auth_external.go b/vendor/github.com/godbus/dbus/auth_external.go deleted file mode 100644 index 7e376d3ef..000000000 --- a/vendor/github.com/godbus/dbus/auth_external.go +++ /dev/null @@ -1,26 +0,0 @@ -package dbus - -import ( - "encoding/hex" -) - -// AuthExternal returns an Auth that authenticates as the given user with the -// EXTERNAL mechanism. 
-func AuthExternal(user string) Auth { - return authExternal{user} -} - -// AuthExternal implements the EXTERNAL authentication mechanism. -type authExternal struct { - user string -} - -func (a authExternal) FirstData() ([]byte, []byte, AuthStatus) { - b := make([]byte, 2*len(a.user)) - hex.Encode(b, []byte(a.user)) - return []byte("EXTERNAL"), b, AuthOk -} - -func (a authExternal) HandleData(b []byte) ([]byte, AuthStatus) { - return nil, AuthError -} diff --git a/vendor/github.com/godbus/dbus/auth_sha1.go b/vendor/github.com/godbus/dbus/auth_sha1.go deleted file mode 100644 index df15b4611..000000000 --- a/vendor/github.com/godbus/dbus/auth_sha1.go +++ /dev/null @@ -1,102 +0,0 @@ -package dbus - -import ( - "bufio" - "bytes" - "crypto/rand" - "crypto/sha1" - "encoding/hex" - "os" -) - -// AuthCookieSha1 returns an Auth that authenticates as the given user with the -// DBUS_COOKIE_SHA1 mechanism. The home parameter should specify the home -// directory of the user. -func AuthCookieSha1(user, home string) Auth { - return authCookieSha1{user, home} -} - -type authCookieSha1 struct { - user, home string -} - -func (a authCookieSha1) FirstData() ([]byte, []byte, AuthStatus) { - b := make([]byte, 2*len(a.user)) - hex.Encode(b, []byte(a.user)) - return []byte("DBUS_COOKIE_SHA1"), b, AuthContinue -} - -func (a authCookieSha1) HandleData(data []byte) ([]byte, AuthStatus) { - challenge := make([]byte, len(data)/2) - _, err := hex.Decode(challenge, data) - if err != nil { - return nil, AuthError - } - b := bytes.Split(challenge, []byte{' '}) - if len(b) != 3 { - return nil, AuthError - } - context := b[0] - id := b[1] - svchallenge := b[2] - cookie := a.getCookie(context, id) - if cookie == nil { - return nil, AuthError - } - clchallenge := a.generateChallenge() - if clchallenge == nil { - return nil, AuthError - } - hash := sha1.New() - hash.Write(bytes.Join([][]byte{svchallenge, clchallenge, cookie}, []byte{':'})) - hexhash := make([]byte, 2*hash.Size()) - hex.Encode(hexhash, hash.Sum(nil)) - data = append(clchallenge, ' ') - data = append(data, hexhash...) - resp := make([]byte, 2*len(data)) - hex.Encode(resp, data) - return resp, AuthOk -} - -// getCookie searches for the cookie identified by id in context and returns -// the cookie content or nil. (Since HandleData can't return a specific error, -// but only whether an error occured, this function also doesn't bother to -// return an error.) -func (a authCookieSha1) getCookie(context, id []byte) []byte { - file, err := os.Open(a.home + "/.dbus-keyrings/" + string(context)) - if err != nil { - return nil - } - defer file.Close() - rd := bufio.NewReader(file) - for { - line, err := rd.ReadBytes('\n') - if err != nil { - return nil - } - line = line[:len(line)-1] - b := bytes.Split(line, []byte{' '}) - if len(b) != 3 { - return nil - } - if bytes.Equal(b[0], id) { - return b[2] - } - } -} - -// generateChallenge returns a random, hex-encoded challenge, or nil on error -// (see above). -func (a authCookieSha1) generateChallenge() []byte { - b := make([]byte, 16) - n, err := rand.Read(b) - if err != nil { - return nil - } - if n != 16 { - return nil - } - enc := make([]byte, 32) - hex.Encode(enc, b) - return enc -} diff --git a/vendor/github.com/godbus/dbus/call.go b/vendor/github.com/godbus/dbus/call.go deleted file mode 100644 index ba6e73f60..000000000 --- a/vendor/github.com/godbus/dbus/call.go +++ /dev/null @@ -1,36 +0,0 @@ -package dbus - -import ( - "errors" -) - -// Call represents a pending or completed method call. 
-type Call struct { - Destination string - Path ObjectPath - Method string - Args []interface{} - - // Strobes when the call is complete. - Done chan *Call - - // After completion, the error status. If this is non-nil, it may be an - // error message from the peer (with Error as its type) or some other error. - Err error - - // Holds the response once the call is done. - Body []interface{} -} - -var errSignature = errors.New("dbus: mismatched signature") - -// Store stores the body of the reply into the provided pointers. It returns -// an error if the signatures of the body and retvalues don't match, or if -// the error status is not nil. -func (c *Call) Store(retvalues ...interface{}) error { - if c.Err != nil { - return c.Err - } - - return Store(c.Body, retvalues...) -} diff --git a/vendor/github.com/godbus/dbus/conn.go b/vendor/github.com/godbus/dbus/conn.go deleted file mode 100644 index a4f539401..000000000 --- a/vendor/github.com/godbus/dbus/conn.go +++ /dev/null @@ -1,625 +0,0 @@ -package dbus - -import ( - "errors" - "io" - "os" - "reflect" - "strings" - "sync" -) - -const defaultSystemBusAddress = "unix:path=/var/run/dbus/system_bus_socket" - -var ( - systemBus *Conn - systemBusLck sync.Mutex - sessionBus *Conn - sessionBusLck sync.Mutex -) - -// ErrClosed is the error returned by calls on a closed connection. -var ErrClosed = errors.New("dbus: connection closed by user") - -// Conn represents a connection to a message bus (usually, the system or -// session bus). -// -// Connections are either shared or private. Shared connections -// are shared between calls to the functions that return them. As a result, -// the methods Close, Auth and Hello must not be called on them. -// -// Multiple goroutines may invoke methods on a connection simultaneously. -type Conn struct { - transport - - busObj BusObject - unixFD bool - uuid string - - names []string - namesLck sync.RWMutex - - serialLck sync.Mutex - nextSerial uint32 - serialUsed map[uint32]bool - - calls map[uint32]*Call - callsLck sync.RWMutex - - handlers map[ObjectPath]map[string]exportWithMapping - handlersLck sync.RWMutex - - out chan *Message - closed bool - outLck sync.RWMutex - - signals []chan<- *Signal - signalsLck sync.Mutex - - eavesdropped chan<- *Message - eavesdroppedLck sync.Mutex -} - -// SessionBus returns a shared connection to the session bus, connecting to it -// if not already done. -func SessionBus() (conn *Conn, err error) { - sessionBusLck.Lock() - defer sessionBusLck.Unlock() - if sessionBus != nil { - return sessionBus, nil - } - defer func() { - if conn != nil { - sessionBus = conn - } - }() - conn, err = SessionBusPrivate() - if err != nil { - return - } - if err = conn.Auth(nil); err != nil { - conn.Close() - conn = nil - return - } - if err = conn.Hello(); err != nil { - conn.Close() - conn = nil - } - return -} - -// SessionBusPrivate returns a new private connection to the session bus. -func SessionBusPrivate() (*Conn, error) { - address := os.Getenv("DBUS_SESSION_BUS_ADDRESS") - if address != "" && address != "autolaunch:" { - return Dial(address) - } - - return sessionBusPlatform() -} - -// SystemBus returns a shared connection to the system bus, connecting to it if -// not already done. 
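A hedged sketch of the higher-level round trip built on `Call`/`Store`: it uses the shared session bus and the bus daemon's `ListNames` method, which is a standard D-Bus call rather than something defined in this excerpt.

```go
// Illustrative only: a synchronous method call over a shared connection,
// unpacking the reply with Store as shown above.
package main

import (
	"fmt"
	"log"

	"github.com/godbus/dbus"
)

func main() {
	conn, err := dbus.SessionBus()
	if err != nil {
		log.Fatal(err)
	}
	// Shared connections must not be closed by the caller.

	// Call blocks until the reply arrives; Store unpacks the reply body or
	// returns the call's error status.
	var names []string
	err = conn.BusObject().Call("org.freedesktop.DBus.ListNames", 0).Store(&names)
	if err != nil {
		log.Fatal(err)
	}

	for _, name := range names {
		fmt.Println(name)
	}
}
```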
-func SystemBus() (conn *Conn, err error) { - systemBusLck.Lock() - defer systemBusLck.Unlock() - if systemBus != nil { - return systemBus, nil - } - defer func() { - if conn != nil { - systemBus = conn - } - }() - conn, err = SystemBusPrivate() - if err != nil { - return - } - if err = conn.Auth(nil); err != nil { - conn.Close() - conn = nil - return - } - if err = conn.Hello(); err != nil { - conn.Close() - conn = nil - } - return -} - -// SystemBusPrivate returns a new private connection to the system bus. -func SystemBusPrivate() (*Conn, error) { - address := os.Getenv("DBUS_SYSTEM_BUS_ADDRESS") - if address != "" { - return Dial(address) - } - return Dial(defaultSystemBusAddress) -} - -// Dial establishes a new private connection to the message bus specified by address. -func Dial(address string) (*Conn, error) { - tr, err := getTransport(address) - if err != nil { - return nil, err - } - return newConn(tr) -} - -// NewConn creates a new private *Conn from an already established connection. -func NewConn(conn io.ReadWriteCloser) (*Conn, error) { - return newConn(genericTransport{conn}) -} - -// newConn creates a new *Conn from a transport. -func newConn(tr transport) (*Conn, error) { - conn := new(Conn) - conn.transport = tr - conn.calls = make(map[uint32]*Call) - conn.out = make(chan *Message, 10) - conn.handlers = make(map[ObjectPath]map[string]exportWithMapping) - conn.nextSerial = 1 - conn.serialUsed = map[uint32]bool{0: true} - conn.busObj = conn.Object("org.freedesktop.DBus", "/org/freedesktop/DBus") - return conn, nil -} - -// BusObject returns the object owned by the bus daemon which handles -// administrative requests. -func (conn *Conn) BusObject() BusObject { - return conn.busObj -} - -// Close closes the connection. Any blocked operations will return with errors -// and the channels passed to Eavesdrop and Signal are closed. This method must -// not be called on shared connections. -func (conn *Conn) Close() error { - conn.outLck.Lock() - if conn.closed { - // inWorker calls Close on read error, the read error may - // be caused by another caller calling Close to shutdown the - // dbus connection, a double-close scenario we prevent here. - conn.outLck.Unlock() - return nil - } - close(conn.out) - conn.closed = true - conn.outLck.Unlock() - conn.signalsLck.Lock() - for _, ch := range conn.signals { - close(ch) - } - conn.signalsLck.Unlock() - conn.eavesdroppedLck.Lock() - if conn.eavesdropped != nil { - close(conn.eavesdropped) - } - conn.eavesdroppedLck.Unlock() - return conn.transport.Close() -} - -// Eavesdrop causes conn to send all incoming messages to the given channel -// without further processing. Method replies, errors and signals will not be -// sent to the appropiate channels and method calls will not be handled. If nil -// is passed, the normal behaviour is restored. -// -// The caller has to make sure that ch is sufficiently buffered; -// if a message arrives when a write to ch is not possible, the message is -// discarded. -func (conn *Conn) Eavesdrop(ch chan<- *Message) { - conn.eavesdroppedLck.Lock() - conn.eavesdropped = ch - conn.eavesdroppedLck.Unlock() -} - -// getSerial returns an unused serial. -func (conn *Conn) getSerial() uint32 { - conn.serialLck.Lock() - defer conn.serialLck.Unlock() - n := conn.nextSerial - for conn.serialUsed[n] { - n++ - } - conn.serialUsed[n] = true - conn.nextSerial = n + 1 - return n -} - -// Hello sends the initial org.freedesktop.DBus.Hello call. 
This method must be -// called after authentication, but before sending any other messages to the -// bus. Hello must not be called for shared connections. -func (conn *Conn) Hello() error { - var s string - err := conn.busObj.Call("org.freedesktop.DBus.Hello", 0).Store(&s) - if err != nil { - return err - } - conn.namesLck.Lock() - conn.names = make([]string, 1) - conn.names[0] = s - conn.namesLck.Unlock() - return nil -} - -// inWorker runs in an own goroutine, reading incoming messages from the -// transport and dispatching them appropiately. -func (conn *Conn) inWorker() { - for { - msg, err := conn.ReadMessage() - if err == nil { - conn.eavesdroppedLck.Lock() - if conn.eavesdropped != nil { - select { - case conn.eavesdropped <- msg: - default: - } - conn.eavesdroppedLck.Unlock() - continue - } - conn.eavesdroppedLck.Unlock() - dest, _ := msg.Headers[FieldDestination].value.(string) - found := false - if dest == "" { - found = true - } else { - conn.namesLck.RLock() - if len(conn.names) == 0 { - found = true - } - for _, v := range conn.names { - if dest == v { - found = true - break - } - } - conn.namesLck.RUnlock() - } - if !found { - // Eavesdropped a message, but no channel for it is registered. - // Ignore it. - continue - } - switch msg.Type { - case TypeMethodReply, TypeError: - serial := msg.Headers[FieldReplySerial].value.(uint32) - conn.callsLck.Lock() - if c, ok := conn.calls[serial]; ok { - if msg.Type == TypeError { - name, _ := msg.Headers[FieldErrorName].value.(string) - c.Err = Error{name, msg.Body} - } else { - c.Body = msg.Body - } - c.Done <- c - conn.serialLck.Lock() - delete(conn.serialUsed, serial) - conn.serialLck.Unlock() - delete(conn.calls, serial) - } - conn.callsLck.Unlock() - case TypeSignal: - iface := msg.Headers[FieldInterface].value.(string) - member := msg.Headers[FieldMember].value.(string) - // as per http://dbus.freedesktop.org/doc/dbus-specification.html , - // sender is optional for signals. - sender, _ := msg.Headers[FieldSender].value.(string) - if iface == "org.freedesktop.DBus" && sender == "org.freedesktop.DBus" { - if member == "NameLost" { - // If we lost the name on the bus, remove it from our - // tracking list. - name, ok := msg.Body[0].(string) - if !ok { - panic("Unable to read the lost name") - } - conn.namesLck.Lock() - for i, v := range conn.names { - if v == name { - conn.names = append(conn.names[:i], - conn.names[i+1:]...) - } - } - conn.namesLck.Unlock() - } else if member == "NameAcquired" { - // If we acquired the name on the bus, add it to our - // tracking list. - name, ok := msg.Body[0].(string) - if !ok { - panic("Unable to read the acquired name") - } - conn.namesLck.Lock() - conn.names = append(conn.names, name) - conn.namesLck.Unlock() - } - } - signal := &Signal{ - Sender: sender, - Path: msg.Headers[FieldPath].value.(ObjectPath), - Name: iface + "." + member, - Body: msg.Body, - } - conn.signalsLck.Lock() - for _, ch := range conn.signals { - ch <- signal - } - conn.signalsLck.Unlock() - case TypeMethodCall: - go conn.handleCall(msg) - } - } else if _, ok := err.(InvalidMessageError); !ok { - // Some read error occured (usually EOF); we can't really do - // anything but to shut down all stuff and returns errors to all - // pending replies. - conn.Close() - conn.callsLck.RLock() - for _, v := range conn.calls { - v.Err = err - v.Done <- v - } - conn.callsLck.RUnlock() - return - } - // invalid messages are ignored - } -} - -// Names returns the list of all names that are currently owned by this -// connection. 
The slice is always at least one element long, the first element -// being the unique name of the connection. -func (conn *Conn) Names() []string { - conn.namesLck.RLock() - // copy the slice so it can't be modified - s := make([]string, len(conn.names)) - copy(s, conn.names) - conn.namesLck.RUnlock() - return s -} - -// Object returns the object identified by the given destination name and path. -func (conn *Conn) Object(dest string, path ObjectPath) BusObject { - return &Object{conn, dest, path} -} - -// outWorker runs in an own goroutine, encoding and sending messages that are -// sent to conn.out. -func (conn *Conn) outWorker() { - for msg := range conn.out { - err := conn.SendMessage(msg) - conn.callsLck.RLock() - if err != nil { - if c := conn.calls[msg.serial]; c != nil { - c.Err = err - c.Done <- c - } - conn.serialLck.Lock() - delete(conn.serialUsed, msg.serial) - conn.serialLck.Unlock() - } else if msg.Type != TypeMethodCall { - conn.serialLck.Lock() - delete(conn.serialUsed, msg.serial) - conn.serialLck.Unlock() - } - conn.callsLck.RUnlock() - } -} - -// Send sends the given message to the message bus. You usually don't need to -// use this; use the higher-level equivalents (Call / Go, Emit and Export) -// instead. If msg is a method call and NoReplyExpected is not set, a non-nil -// call is returned and the same value is sent to ch (which must be buffered) -// once the call is complete. Otherwise, ch is ignored and a Call structure is -// returned of which only the Err member is valid. -func (conn *Conn) Send(msg *Message, ch chan *Call) *Call { - var call *Call - - msg.serial = conn.getSerial() - if msg.Type == TypeMethodCall && msg.Flags&FlagNoReplyExpected == 0 { - if ch == nil { - ch = make(chan *Call, 5) - } else if cap(ch) == 0 { - panic("dbus: unbuffered channel passed to (*Conn).Send") - } - call = new(Call) - call.Destination, _ = msg.Headers[FieldDestination].value.(string) - call.Path, _ = msg.Headers[FieldPath].value.(ObjectPath) - iface, _ := msg.Headers[FieldInterface].value.(string) - member, _ := msg.Headers[FieldMember].value.(string) - call.Method = iface + "." + member - call.Args = msg.Body - call.Done = ch - conn.callsLck.Lock() - conn.calls[msg.serial] = call - conn.callsLck.Unlock() - conn.outLck.RLock() - if conn.closed { - call.Err = ErrClosed - call.Done <- call - } else { - conn.out <- msg - } - conn.outLck.RUnlock() - } else { - conn.outLck.RLock() - if conn.closed { - call = &Call{Err: ErrClosed} - } else { - conn.out <- msg - call = &Call{Err: nil} - } - conn.outLck.RUnlock() - } - return call -} - -// sendError creates an error message corresponding to the parameters and sends -// it to conn.out. -func (conn *Conn) sendError(e Error, dest string, serial uint32) { - msg := new(Message) - msg.Type = TypeError - msg.serial = conn.getSerial() - msg.Headers = make(map[HeaderField]Variant) - if dest != "" { - msg.Headers[FieldDestination] = MakeVariant(dest) - } - msg.Headers[FieldErrorName] = MakeVariant(e.Name) - msg.Headers[FieldReplySerial] = MakeVariant(serial) - msg.Body = e.Body - if len(e.Body) > 0 { - msg.Headers[FieldSignature] = MakeVariant(SignatureOf(e.Body...)) - } - conn.outLck.RLock() - if !conn.closed { - conn.out <- msg - } - conn.outLck.RUnlock() -} - -// sendReply creates a method reply message corresponding to the parameters and -// sends it to conn.out. 
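The signal dispatch path deleted in the surrounding hunks (inWorker above, Signal below) is channel-based. A hedged sketch of typical use, assuming the standard bus-daemon method org.freedesktop.DBus.AddMatch and an illustrative match rule; conn is a *dbus.Conn obtained as in the earlier sketch.

    // Assumes: import "github.com/godbus/dbus" and "fmt".
    func watchNameOwnerChanged(conn *dbus.Conn) {
        // Per the Signal documentation below, the channel must be
        // sufficiently buffered; signals that cannot be written are discarded.
        ch := make(chan *dbus.Signal, 10)
        conn.Signal(ch)

        // Ask the bus daemon to route matching signals to this connection.
        // AddMatch is a daemon method, not defined in this package.
        conn.BusObject().Call("org.freedesktop.DBus.AddMatch", 0,
            "type='signal',interface='org.freedesktop.DBus',member='NameOwnerChanged'")

        for sig := range ch {
            fmt.Println(sig.Name, sig.Body)
        }
    }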
-func (conn *Conn) sendReply(dest string, serial uint32, values ...interface{}) { - msg := new(Message) - msg.Type = TypeMethodReply - msg.serial = conn.getSerial() - msg.Headers = make(map[HeaderField]Variant) - if dest != "" { - msg.Headers[FieldDestination] = MakeVariant(dest) - } - msg.Headers[FieldReplySerial] = MakeVariant(serial) - msg.Body = values - if len(values) > 0 { - msg.Headers[FieldSignature] = MakeVariant(SignatureOf(values...)) - } - conn.outLck.RLock() - if !conn.closed { - conn.out <- msg - } - conn.outLck.RUnlock() -} - -// Signal registers the given channel to be passed all received signal messages. -// The caller has to make sure that ch is sufficiently buffered; if a message -// arrives when a write to c is not possible, it is discarded. -// -// Multiple of these channels can be registered at the same time. Passing a -// channel that already is registered will remove it from the list of the -// registered channels. -// -// These channels are "overwritten" by Eavesdrop; i.e., if there currently is a -// channel for eavesdropped messages, this channel receives all signals, and -// none of the channels passed to Signal will receive any signals. -func (conn *Conn) Signal(ch chan<- *Signal) { - conn.signalsLck.Lock() - conn.signals = append(conn.signals, ch) - conn.signalsLck.Unlock() -} - -// SupportsUnixFDs returns whether the underlying transport supports passing of -// unix file descriptors. If this is false, method calls containing unix file -// descriptors will return an error and emitted signals containing them will -// not be sent. -func (conn *Conn) SupportsUnixFDs() bool { - return conn.unixFD -} - -// Error represents a D-Bus message of type Error. -type Error struct { - Name string - Body []interface{} -} - -func NewError(name string, body []interface{}) *Error { - return &Error{name, body} -} - -func (e Error) Error() string { - if len(e.Body) >= 1 { - s, ok := e.Body[0].(string) - if ok { - return s - } - } - return e.Name -} - -// Signal represents a D-Bus message of type Signal. The name member is given in -// "interface.member" notation, e.g. org.freedesktop.D-Bus.NameLost. -type Signal struct { - Sender string - Path ObjectPath - Name string - Body []interface{} -} - -// transport is a D-Bus transport. -type transport interface { - // Read and Write raw data (for example, for the authentication protocol). - io.ReadWriteCloser - - // Send the initial null byte used for the EXTERNAL mechanism. - SendNullByte() error - - // Returns whether this transport supports passing Unix FDs. - SupportsUnixFDs() bool - - // Signal the transport that Unix FD passing is enabled for this connection. - EnableUnixFDs() - - // Read / send a message, handling things like Unix FDs. 
- ReadMessage() (*Message, error) - SendMessage(*Message) error -} - -var ( - transports = make(map[string]func(string) (transport, error)) -) - -func getTransport(address string) (transport, error) { - var err error - var t transport - - addresses := strings.Split(address, ";") - for _, v := range addresses { - i := strings.IndexRune(v, ':') - if i == -1 { - err = errors.New("dbus: invalid bus address (no transport)") - continue - } - f := transports[v[:i]] - if f == nil { - err = errors.New("dbus: invalid bus address (invalid or unsupported transport)") - continue - } - t, err = f(v[i+1:]) - if err == nil { - return t, nil - } - } - return nil, err -} - -// dereferenceAll returns a slice that, assuming that vs is a slice of pointers -// of arbitrary types, containes the values that are obtained from dereferencing -// all elements in vs. -func dereferenceAll(vs []interface{}) []interface{} { - for i := range vs { - v := reflect.ValueOf(vs[i]) - v = v.Elem() - vs[i] = v.Interface() - } - return vs -} - -// getKey gets a key from a the list of keys. Returns "" on error / not found... -func getKey(s, key string) string { - i := strings.Index(s, key) - if i == -1 { - return "" - } - if i+len(key)+1 >= len(s) || s[i+len(key)] != '=' { - return "" - } - j := strings.Index(s, ",") - if j == -1 { - j = len(s) - } - return s[i+len(key)+1 : j] -} diff --git a/vendor/github.com/godbus/dbus/conn_darwin.go b/vendor/github.com/godbus/dbus/conn_darwin.go deleted file mode 100644 index b67bb1b81..000000000 --- a/vendor/github.com/godbus/dbus/conn_darwin.go +++ /dev/null @@ -1,21 +0,0 @@ -package dbus - -import ( - "errors" - "os/exec" -) - -func sessionBusPlatform() (*Conn, error) { - cmd := exec.Command("launchctl", "getenv", "DBUS_LAUNCHD_SESSION_BUS_SOCKET") - b, err := cmd.CombinedOutput() - - if err != nil { - return nil, err - } - - if len(b) == 0 { - return nil, errors.New("dbus: couldn't determine address of session bus") - } - - return Dial("unix:path=" + string(b[:len(b)-1])) -} diff --git a/vendor/github.com/godbus/dbus/conn_other.go b/vendor/github.com/godbus/dbus/conn_other.go deleted file mode 100644 index f74b8758d..000000000 --- a/vendor/github.com/godbus/dbus/conn_other.go +++ /dev/null @@ -1,27 +0,0 @@ -// +build !darwin - -package dbus - -import ( - "bytes" - "errors" - "os/exec" -) - -func sessionBusPlatform() (*Conn, error) { - cmd := exec.Command("dbus-launch") - b, err := cmd.CombinedOutput() - - if err != nil { - return nil, err - } - - i := bytes.IndexByte(b, '=') - j := bytes.IndexByte(b, '\n') - - if i == -1 || j == -1 { - return nil, errors.New("dbus: couldn't determine address of session bus") - } - - return Dial(string(b[i+1 : j])) -} diff --git a/vendor/github.com/godbus/dbus/dbus.go b/vendor/github.com/godbus/dbus/dbus.go deleted file mode 100644 index 2ce68735c..000000000 --- a/vendor/github.com/godbus/dbus/dbus.go +++ /dev/null @@ -1,258 +0,0 @@ -package dbus - -import ( - "errors" - "reflect" - "strings" -) - -var ( - byteType = reflect.TypeOf(byte(0)) - boolType = reflect.TypeOf(false) - uint8Type = reflect.TypeOf(uint8(0)) - int16Type = reflect.TypeOf(int16(0)) - uint16Type = reflect.TypeOf(uint16(0)) - int32Type = reflect.TypeOf(int32(0)) - uint32Type = reflect.TypeOf(uint32(0)) - int64Type = reflect.TypeOf(int64(0)) - uint64Type = reflect.TypeOf(uint64(0)) - float64Type = reflect.TypeOf(float64(0)) - stringType = reflect.TypeOf("") - signatureType = reflect.TypeOf(Signature{""}) - objectPathType = reflect.TypeOf(ObjectPath("")) - variantType = 
reflect.TypeOf(Variant{Signature{""}, nil}) - interfacesType = reflect.TypeOf([]interface{}{}) - unixFDType = reflect.TypeOf(UnixFD(0)) - unixFDIndexType = reflect.TypeOf(UnixFDIndex(0)) -) - -// An InvalidTypeError signals that a value which cannot be represented in the -// D-Bus wire format was passed to a function. -type InvalidTypeError struct { - Type reflect.Type -} - -func (e InvalidTypeError) Error() string { - return "dbus: invalid type " + e.Type.String() -} - -// Store copies the values contained in src to dest, which must be a slice of -// pointers. It converts slices of interfaces from src to corresponding structs -// in dest. An error is returned if the lengths of src and dest or the types of -// their elements don't match. -func Store(src []interface{}, dest ...interface{}) error { - if len(src) != len(dest) { - return errors.New("dbus.Store: length mismatch") - } - - for i := range src { - if err := store(src[i], dest[i]); err != nil { - return err - } - } - return nil -} - -func store(src, dest interface{}) error { - if reflect.TypeOf(dest).Elem() == reflect.TypeOf(src) { - reflect.ValueOf(dest).Elem().Set(reflect.ValueOf(src)) - return nil - } else if hasStruct(dest) { - rv := reflect.ValueOf(dest).Elem() - switch rv.Kind() { - case reflect.Struct: - vs, ok := src.([]interface{}) - if !ok { - return errors.New("dbus.Store: type mismatch") - } - t := rv.Type() - ndest := make([]interface{}, 0, rv.NumField()) - for i := 0; i < rv.NumField(); i++ { - field := t.Field(i) - if field.PkgPath == "" && field.Tag.Get("dbus") != "-" { - ndest = append(ndest, rv.Field(i).Addr().Interface()) - } - } - if len(vs) != len(ndest) { - return errors.New("dbus.Store: type mismatch") - } - err := Store(vs, ndest...) - if err != nil { - return errors.New("dbus.Store: type mismatch") - } - case reflect.Slice: - sv := reflect.ValueOf(src) - if sv.Kind() != reflect.Slice { - return errors.New("dbus.Store: type mismatch") - } - rv.Set(reflect.MakeSlice(rv.Type(), sv.Len(), sv.Len())) - for i := 0; i < sv.Len(); i++ { - if err := store(sv.Index(i).Interface(), rv.Index(i).Addr().Interface()); err != nil { - return err - } - } - case reflect.Map: - sv := reflect.ValueOf(src) - if sv.Kind() != reflect.Map { - return errors.New("dbus.Store: type mismatch") - } - keys := sv.MapKeys() - rv.Set(reflect.MakeMap(sv.Type())) - for _, key := range keys { - v := reflect.New(sv.Type().Elem()) - if err := store(v, sv.MapIndex(key).Interface()); err != nil { - return err - } - rv.SetMapIndex(key, v.Elem()) - } - default: - return errors.New("dbus.Store: type mismatch") - } - return nil - } else { - return errors.New("dbus.Store: type mismatch") - } -} - -func hasStruct(v interface{}) bool { - t := reflect.TypeOf(v) - for { - switch t.Kind() { - case reflect.Struct: - return true - case reflect.Slice, reflect.Ptr, reflect.Map: - t = t.Elem() - default: - return false - } - } -} - -// An ObjectPath is an object path as defined by the D-Bus spec. -type ObjectPath string - -// IsValid returns whether the object path is valid. 
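The Store function deleted above copies a reply body (a []interface{}) into caller-supplied pointers and turns nested []interface{} values into structs. A small sketch with illustrative values:

    // Assumes: import "github.com/godbus/dbus" and "fmt".
    func storeExample() error {
        // Illustrative reply body: a STRING followed by a STRUCT, which
        // arrives as a nested []interface{}.
        type version struct {
            Major uint32
            Minor uint32
        }
        body := []interface{}{"worker", []interface{}{uint32(1), uint32(4)}}

        var name string
        var v version
        if err := dbus.Store(body, &name, &v); err != nil {
            return err
        }
        fmt.Println(name, v) // worker {1 4}
        return nil
    }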
-func (o ObjectPath) IsValid() bool { - s := string(o) - if len(s) == 0 { - return false - } - if s[0] != '/' { - return false - } - if s[len(s)-1] == '/' && len(s) != 1 { - return false - } - // probably not used, but technically possible - if s == "/" { - return true - } - split := strings.Split(s[1:], "/") - for _, v := range split { - if len(v) == 0 { - return false - } - for _, c := range v { - if !isMemberChar(c) { - return false - } - } - } - return true -} - -// A UnixFD is a Unix file descriptor sent over the wire. See the package-level -// documentation for more information about Unix file descriptor passsing. -type UnixFD int32 - -// A UnixFDIndex is the representation of a Unix file descriptor in a message. -type UnixFDIndex uint32 - -// alignment returns the alignment of values of type t. -func alignment(t reflect.Type) int { - switch t { - case variantType: - return 1 - case objectPathType: - return 4 - case signatureType: - return 1 - case interfacesType: // sometimes used for structs - return 8 - } - switch t.Kind() { - case reflect.Uint8: - return 1 - case reflect.Uint16, reflect.Int16: - return 2 - case reflect.Uint32, reflect.Int32, reflect.String, reflect.Array, reflect.Slice, reflect.Map: - return 4 - case reflect.Uint64, reflect.Int64, reflect.Float64, reflect.Struct: - return 8 - case reflect.Ptr: - return alignment(t.Elem()) - } - return 1 -} - -// isKeyType returns whether t is a valid type for a D-Bus dict. -func isKeyType(t reflect.Type) bool { - switch t.Kind() { - case reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, - reflect.Int16, reflect.Int32, reflect.Int64, reflect.Float64, - reflect.String: - - return true - } - return false -} - -// isValidInterface returns whether s is a valid name for an interface. -func isValidInterface(s string) bool { - if len(s) == 0 || len(s) > 255 || s[0] == '.' { - return false - } - elem := strings.Split(s, ".") - if len(elem) < 2 { - return false - } - for _, v := range elem { - if len(v) == 0 { - return false - } - if v[0] >= '0' && v[0] <= '9' { - return false - } - for _, c := range v { - if !isMemberChar(c) { - return false - } - } - } - return true -} - -// isValidMember returns whether s is a valid name for a member. -func isValidMember(s string) bool { - if len(s) == 0 || len(s) > 255 { - return false - } - i := strings.Index(s, ".") - if i != -1 { - return false - } - if s[0] >= '0' && s[0] <= '9' { - return false - } - for _, c := range s { - if !isMemberChar(c) { - return false - } - } - return true -} - -func isMemberChar(c rune) bool { - return (c >= '0' && c <= '9') || (c >= 'A' && c <= 'Z') || - (c >= 'a' && c <= 'z') || c == '_' -} diff --git a/vendor/github.com/godbus/dbus/decoder.go b/vendor/github.com/godbus/dbus/decoder.go deleted file mode 100644 index ef50dcab9..000000000 --- a/vendor/github.com/godbus/dbus/decoder.go +++ /dev/null @@ -1,228 +0,0 @@ -package dbus - -import ( - "encoding/binary" - "io" - "reflect" -) - -type decoder struct { - in io.Reader - order binary.ByteOrder - pos int -} - -// newDecoder returns a new decoder that reads values from in. The input is -// expected to be in the given byte order. -func newDecoder(in io.Reader, order binary.ByteOrder) *decoder { - dec := new(decoder) - dec.in = in - dec.order = order - return dec -} - -// align aligns the input to the given boundary and panics on error. 
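The ObjectPath validation rules deleted earlier in this hunk are easy to exercise directly; the example paths below are illustrative and the expected results follow from the IsValid code above.

    // Assumes: import "github.com/godbus/dbus" and "fmt".
    func objectPathExamples() {
        fmt.Println(dbus.ObjectPath("/").IsValid())                     // true: the root path
        fmt.Println(dbus.ObjectPath("/org/freedesktop/DBus").IsValid()) // true
        fmt.Println(dbus.ObjectPath("").IsValid())                      // false: empty
        fmt.Println(dbus.ObjectPath("/trailing/").IsValid())            // false: trailing '/'
        fmt.Println(dbus.ObjectPath("no/leading/slash").IsValid())      // false: must start with '/'
    }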
-func (dec *decoder) align(n int) { - if dec.pos%n != 0 { - newpos := (dec.pos + n - 1) & ^(n - 1) - empty := make([]byte, newpos-dec.pos) - if _, err := io.ReadFull(dec.in, empty); err != nil { - panic(err) - } - dec.pos = newpos - } -} - -// Calls binary.Read(dec.in, dec.order, v) and panics on read errors. -func (dec *decoder) binread(v interface{}) { - if err := binary.Read(dec.in, dec.order, v); err != nil { - panic(err) - } -} - -func (dec *decoder) Decode(sig Signature) (vs []interface{}, err error) { - defer func() { - var ok bool - v := recover() - if err, ok = v.(error); ok { - if err == io.EOF || err == io.ErrUnexpectedEOF { - err = FormatError("unexpected EOF") - } - } - }() - vs = make([]interface{}, 0) - s := sig.str - for s != "" { - err, rem := validSingle(s, 0) - if err != nil { - return nil, err - } - v := dec.decode(s[:len(s)-len(rem)], 0) - vs = append(vs, v) - s = rem - } - return vs, nil -} - -func (dec *decoder) decode(s string, depth int) interface{} { - dec.align(alignment(typeFor(s))) - switch s[0] { - case 'y': - var b [1]byte - if _, err := dec.in.Read(b[:]); err != nil { - panic(err) - } - dec.pos++ - return b[0] - case 'b': - i := dec.decode("u", depth).(uint32) - switch { - case i == 0: - return false - case i == 1: - return true - default: - panic(FormatError("invalid value for boolean")) - } - case 'n': - var i int16 - dec.binread(&i) - dec.pos += 2 - return i - case 'i': - var i int32 - dec.binread(&i) - dec.pos += 4 - return i - case 'x': - var i int64 - dec.binread(&i) - dec.pos += 8 - return i - case 'q': - var i uint16 - dec.binread(&i) - dec.pos += 2 - return i - case 'u': - var i uint32 - dec.binread(&i) - dec.pos += 4 - return i - case 't': - var i uint64 - dec.binread(&i) - dec.pos += 8 - return i - case 'd': - var f float64 - dec.binread(&f) - dec.pos += 8 - return f - case 's': - length := dec.decode("u", depth).(uint32) - b := make([]byte, int(length)+1) - if _, err := io.ReadFull(dec.in, b); err != nil { - panic(err) - } - dec.pos += int(length) + 1 - return string(b[:len(b)-1]) - case 'o': - return ObjectPath(dec.decode("s", depth).(string)) - case 'g': - length := dec.decode("y", depth).(byte) - b := make([]byte, int(length)+1) - if _, err := io.ReadFull(dec.in, b); err != nil { - panic(err) - } - dec.pos += int(length) + 1 - sig, err := ParseSignature(string(b[:len(b)-1])) - if err != nil { - panic(err) - } - return sig - case 'v': - if depth >= 64 { - panic(FormatError("input exceeds container depth limit")) - } - var variant Variant - sig := dec.decode("g", depth).(Signature) - if len(sig.str) == 0 { - panic(FormatError("variant signature is empty")) - } - err, rem := validSingle(sig.str, 0) - if err != nil { - panic(err) - } - if rem != "" { - panic(FormatError("variant signature has multiple types")) - } - variant.sig = sig - variant.value = dec.decode(sig.str, depth+1) - return variant - case 'h': - return UnixFDIndex(dec.decode("u", depth).(uint32)) - case 'a': - if len(s) > 1 && s[1] == '{' { - ksig := s[2:3] - vsig := s[3 : len(s)-1] - v := reflect.MakeMap(reflect.MapOf(typeFor(ksig), typeFor(vsig))) - if depth >= 63 { - panic(FormatError("input exceeds container depth limit")) - } - length := dec.decode("u", depth).(uint32) - // Even for empty maps, the correct padding must be included - dec.align(8) - spos := dec.pos - for dec.pos < spos+int(length) { - dec.align(8) - if !isKeyType(v.Type().Key()) { - panic(InvalidTypeError{v.Type()}) - } - kv := dec.decode(ksig, depth+2) - vv := dec.decode(vsig, depth+2) - 
v.SetMapIndex(reflect.ValueOf(kv), reflect.ValueOf(vv)) - } - return v.Interface() - } - if depth >= 64 { - panic(FormatError("input exceeds container depth limit")) - } - length := dec.decode("u", depth).(uint32) - v := reflect.MakeSlice(reflect.SliceOf(typeFor(s[1:])), 0, int(length)) - // Even for empty arrays, the correct padding must be included - dec.align(alignment(typeFor(s[1:]))) - spos := dec.pos - for dec.pos < spos+int(length) { - ev := dec.decode(s[1:], depth+1) - v = reflect.Append(v, reflect.ValueOf(ev)) - } - return v.Interface() - case '(': - if depth >= 64 { - panic(FormatError("input exceeds container depth limit")) - } - dec.align(8) - v := make([]interface{}, 0) - s = s[1 : len(s)-1] - for s != "" { - err, rem := validSingle(s, 0) - if err != nil { - panic(err) - } - ev := dec.decode(s[:len(s)-len(rem)], depth+1) - v = append(v, ev) - s = rem - } - return v - default: - panic(SignatureError{Sig: s}) - } -} - -// A FormatError is an error in the wire format. -type FormatError string - -func (e FormatError) Error() string { - return "dbus: wire format error: " + string(e) -} diff --git a/vendor/github.com/godbus/dbus/doc.go b/vendor/github.com/godbus/dbus/doc.go deleted file mode 100644 index deff554a3..000000000 --- a/vendor/github.com/godbus/dbus/doc.go +++ /dev/null @@ -1,63 +0,0 @@ -/* -Package dbus implements bindings to the D-Bus message bus system. - -To use the message bus API, you first need to connect to a bus (usually the -session or system bus). The acquired connection then can be used to call methods -on remote objects and emit or receive signals. Using the Export method, you can -arrange D-Bus methods calls to be directly translated to method calls on a Go -value. - -Conversion Rules - -For outgoing messages, Go types are automatically converted to the -corresponding D-Bus types. The following types are directly encoded as their -respective D-Bus equivalents: - - Go type | D-Bus type - ------------+----------- - byte | BYTE - bool | BOOLEAN - int16 | INT16 - uint16 | UINT16 - int32 | INT32 - uint32 | UINT32 - int64 | INT64 - uint64 | UINT64 - float64 | DOUBLE - string | STRING - ObjectPath | OBJECT_PATH - Signature | SIGNATURE - Variant | VARIANT - UnixFDIndex | UNIX_FD - -Slices and arrays encode as ARRAYs of their element type. - -Maps encode as DICTs, provided that their key type can be used as a key for -a DICT. - -Structs other than Variant and Signature encode as a STRUCT containing their -exported fields. Fields whose tags contain `dbus:"-"` and unexported fields will -be skipped. - -Pointers encode as the value they're pointed to. - -Trying to encode any other type or a slice, map or struct containing an -unsupported type will result in an InvalidTypeError. - -For incoming messages, the inverse of these rules are used, with the exception -of STRUCTs. Incoming STRUCTS are represented as a slice of empty interfaces -containing the struct fields in the correct order. The Store function can be -used to convert such values to Go structs. - -Unix FD passing - -Handling Unix file descriptors deserves special mention. To use them, you should -first check that they are supported on a connection by calling SupportsUnixFDs. -If it returns true, all method of Connection will translate messages containing -UnixFD's to messages that are accompanied by the given file descriptors with the -UnixFD values being substituted by the correct indices. Similarily, the indices -of incoming messages are automatically resolved. It shouldn't be necessary to use -UnixFDIndex. 
- -*/ -package dbus diff --git a/vendor/github.com/godbus/dbus/encoder.go b/vendor/github.com/godbus/dbus/encoder.go deleted file mode 100644 index 9f0a9e89e..000000000 --- a/vendor/github.com/godbus/dbus/encoder.go +++ /dev/null @@ -1,208 +0,0 @@ -package dbus - -import ( - "bytes" - "encoding/binary" - "io" - "reflect" -) - -// An encoder encodes values to the D-Bus wire format. -type encoder struct { - out io.Writer - order binary.ByteOrder - pos int -} - -// NewEncoder returns a new encoder that writes to out in the given byte order. -func newEncoder(out io.Writer, order binary.ByteOrder) *encoder { - return newEncoderAtOffset(out, 0, order) -} - -// newEncoderAtOffset returns a new encoder that writes to out in the given -// byte order. Specify the offset to initialize pos for proper alignment -// computation. -func newEncoderAtOffset(out io.Writer, offset int, order binary.ByteOrder) *encoder { - enc := new(encoder) - enc.out = out - enc.order = order - enc.pos = offset - return enc -} - -// Aligns the next output to be on a multiple of n. Panics on write errors. -func (enc *encoder) align(n int) { - pad := enc.padding(0, n) - if pad > 0 { - empty := make([]byte, pad) - if _, err := enc.out.Write(empty); err != nil { - panic(err) - } - enc.pos += pad - } -} - -// pad returns the number of bytes of padding, based on current position and additional offset. -// and alignment. -func (enc *encoder) padding(offset, algn int) int { - abs := enc.pos + offset - if abs%algn != 0 { - newabs := (abs + algn - 1) & ^(algn - 1) - return newabs - abs - } - return 0 -} - -// Calls binary.Write(enc.out, enc.order, v) and panics on write errors. -func (enc *encoder) binwrite(v interface{}) { - if err := binary.Write(enc.out, enc.order, v); err != nil { - panic(err) - } -} - -// Encode encodes the given values to the underyling reader. All written values -// are aligned properly as required by the D-Bus spec. -func (enc *encoder) Encode(vs ...interface{}) (err error) { - defer func() { - err, _ = recover().(error) - }() - for _, v := range vs { - enc.encode(reflect.ValueOf(v), 0) - } - return nil -} - -// encode encodes the given value to the writer and panics on error. depth holds -// the depth of the container nesting. 
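The conversion rules listed in the doc.go hunk above apply to every outgoing body, including signals emitted with (*Conn).Emit, which is deleted further below. A hedged sketch with an illustrative path, signal name and payload; conn is a *dbus.Conn as before.

    // Assumes: import "github.com/godbus/dbus".
    func emitStateChanged(conn *dbus.Conn) error {
        // Each Go value is converted to its D-Bus counterpart per the table
        // in doc.go above.
        return conn.Emit(
            "/com/example/App",             // ObjectPath -> OBJECT_PATH
            "com.example.App.StateChanged", // "interface.member"
            true,                        // bool   -> BOOLEAN
            uint32(7),                   // uint32 -> UINT32
            map[string]string{"k": "v"}, // map    -> DICT of STRING to STRING
        )
    }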
-func (enc *encoder) encode(v reflect.Value, depth int) { - enc.align(alignment(v.Type())) - switch v.Kind() { - case reflect.Uint8: - var b [1]byte - b[0] = byte(v.Uint()) - if _, err := enc.out.Write(b[:]); err != nil { - panic(err) - } - enc.pos++ - case reflect.Bool: - if v.Bool() { - enc.encode(reflect.ValueOf(uint32(1)), depth) - } else { - enc.encode(reflect.ValueOf(uint32(0)), depth) - } - case reflect.Int16: - enc.binwrite(int16(v.Int())) - enc.pos += 2 - case reflect.Uint16: - enc.binwrite(uint16(v.Uint())) - enc.pos += 2 - case reflect.Int32: - enc.binwrite(int32(v.Int())) - enc.pos += 4 - case reflect.Uint32: - enc.binwrite(uint32(v.Uint())) - enc.pos += 4 - case reflect.Int64: - enc.binwrite(v.Int()) - enc.pos += 8 - case reflect.Uint64: - enc.binwrite(v.Uint()) - enc.pos += 8 - case reflect.Float64: - enc.binwrite(v.Float()) - enc.pos += 8 - case reflect.String: - enc.encode(reflect.ValueOf(uint32(len(v.String()))), depth) - b := make([]byte, v.Len()+1) - copy(b, v.String()) - b[len(b)-1] = 0 - n, err := enc.out.Write(b) - if err != nil { - panic(err) - } - enc.pos += n - case reflect.Ptr: - enc.encode(v.Elem(), depth) - case reflect.Slice, reflect.Array: - if depth >= 64 { - panic(FormatError("input exceeds container depth limit")) - } - // Lookahead offset: 4 bytes for uint32 length (with alignment), - // plus alignment for elements. - n := enc.padding(0, 4) + 4 - offset := enc.pos + n + enc.padding(n, alignment(v.Type().Elem())) - - var buf bytes.Buffer - bufenc := newEncoderAtOffset(&buf, offset, enc.order) - - for i := 0; i < v.Len(); i++ { - bufenc.encode(v.Index(i), depth+1) - } - enc.encode(reflect.ValueOf(uint32(buf.Len())), depth) - length := buf.Len() - enc.align(alignment(v.Type().Elem())) - if _, err := buf.WriteTo(enc.out); err != nil { - panic(err) - } - enc.pos += length - case reflect.Struct: - if depth >= 64 && v.Type() != signatureType { - panic(FormatError("input exceeds container depth limit")) - } - switch t := v.Type(); t { - case signatureType: - str := v.Field(0) - enc.encode(reflect.ValueOf(byte(str.Len())), depth+1) - b := make([]byte, str.Len()+1) - copy(b, str.String()) - b[len(b)-1] = 0 - n, err := enc.out.Write(b) - if err != nil { - panic(err) - } - enc.pos += n - case variantType: - variant := v.Interface().(Variant) - enc.encode(reflect.ValueOf(variant.sig), depth+1) - enc.encode(reflect.ValueOf(variant.value), depth+1) - default: - for i := 0; i < v.Type().NumField(); i++ { - field := t.Field(i) - if field.PkgPath == "" && field.Tag.Get("dbus") != "-" { - enc.encode(v.Field(i), depth+1) - } - } - } - case reflect.Map: - // Maps are arrays of structures, so they actually increase the depth by - // 2. 
- if depth >= 63 { - panic(FormatError("input exceeds container depth limit")) - } - if !isKeyType(v.Type().Key()) { - panic(InvalidTypeError{v.Type()}) - } - keys := v.MapKeys() - // Lookahead offset: 4 bytes for uint32 length (with alignment), - // plus 8-byte alignment - n := enc.padding(0, 4) + 4 - offset := enc.pos + n + enc.padding(n, 8) - - var buf bytes.Buffer - bufenc := newEncoderAtOffset(&buf, offset, enc.order) - for _, k := range keys { - bufenc.align(8) - bufenc.encode(k, depth+2) - bufenc.encode(v.MapIndex(k), depth+2) - } - enc.encode(reflect.ValueOf(uint32(buf.Len())), depth) - length := buf.Len() - enc.align(8) - if _, err := buf.WriteTo(enc.out); err != nil { - panic(err) - } - enc.pos += length - default: - panic(InvalidTypeError{v.Type()}) - } -} diff --git a/vendor/github.com/godbus/dbus/export.go b/vendor/github.com/godbus/dbus/export.go deleted file mode 100644 index c6440a741..000000000 --- a/vendor/github.com/godbus/dbus/export.go +++ /dev/null @@ -1,411 +0,0 @@ -package dbus - -import ( - "errors" - "fmt" - "reflect" - "strings" -) - -var ( - errmsgInvalidArg = Error{ - "org.freedesktop.DBus.Error.InvalidArgs", - []interface{}{"Invalid type / number of args"}, - } - errmsgNoObject = Error{ - "org.freedesktop.DBus.Error.NoSuchObject", - []interface{}{"No such object"}, - } - errmsgUnknownMethod = Error{ - "org.freedesktop.DBus.Error.UnknownMethod", - []interface{}{"Unknown / invalid method"}, - } -) - -// exportWithMapping represents an exported struct along with a method name -// mapping to allow for exporting lower-case methods, etc. -type exportWithMapping struct { - export interface{} - - // Method name mapping; key -> struct method, value -> dbus method. - mapping map[string]string - - // Whether or not this export is for the entire subtree - includeSubtree bool -} - -// Sender is a type which can be used in exported methods to receive the message -// sender. -type Sender string - -func exportedMethod(export exportWithMapping, name string) reflect.Value { - if export.export == nil { - return reflect.Value{} - } - - // If a mapping was included in the export, check the map to see if we - // should be looking for a different method in the export. - if export.mapping != nil { - for key, value := range export.mapping { - if value == name { - name = key - break - } - - // Catch the case where a method is aliased but the client is calling - // the original, e.g. the "Foo" method was exported mapped to - // "foo," and dbus client called the original "Foo." - if key == name { - return reflect.Value{} - } - } - } - - value := reflect.ValueOf(export.export) - m := value.MethodByName(name) - - // Catch the case of attempting to call an unexported method - method, ok := value.Type().MethodByName(name) - - if !m.IsValid() || !ok || method.PkgPath != "" { - return reflect.Value{} - } - t := m.Type() - if t.NumOut() == 0 || - t.Out(t.NumOut()-1) != reflect.TypeOf(&errmsgInvalidArg) { - - return reflect.Value{} - } - return m -} - -// searchHandlers will look through all registered handlers looking for one -// to handle the given path. If a verbatim one isn't found, it will check for -// a subtree registration for the path as well. 
-func (conn *Conn) searchHandlers(path ObjectPath) (map[string]exportWithMapping, bool) { - conn.handlersLck.RLock() - defer conn.handlersLck.RUnlock() - - handlers, ok := conn.handlers[path] - if ok { - return handlers, ok - } - - // If handlers weren't found for this exact path, look for a matching subtree - // registration - handlers = make(map[string]exportWithMapping) - path = path[:strings.LastIndex(string(path), "/")] - for len(path) > 0 { - var subtreeHandlers map[string]exportWithMapping - subtreeHandlers, ok = conn.handlers[path] - if ok { - for iface, handler := range subtreeHandlers { - // Only include this handler if it registered for the subtree - if handler.includeSubtree { - handlers[iface] = handler - } - } - - break - } - - path = path[:strings.LastIndex(string(path), "/")] - } - - return handlers, ok -} - -// handleCall handles the given method call (i.e. looks if it's one of the -// pre-implemented ones and searches for a corresponding handler if not). -func (conn *Conn) handleCall(msg *Message) { - name := msg.Headers[FieldMember].value.(string) - path := msg.Headers[FieldPath].value.(ObjectPath) - ifaceName, hasIface := msg.Headers[FieldInterface].value.(string) - sender, hasSender := msg.Headers[FieldSender].value.(string) - serial := msg.serial - if ifaceName == "org.freedesktop.DBus.Peer" { - switch name { - case "Ping": - conn.sendReply(sender, serial) - case "GetMachineId": - conn.sendReply(sender, serial, conn.uuid) - default: - conn.sendError(errmsgUnknownMethod, sender, serial) - } - return - } - if len(name) == 0 { - conn.sendError(errmsgUnknownMethod, sender, serial) - } - - // Find the exported handler (if any) for this path - handlers, ok := conn.searchHandlers(path) - if !ok { - conn.sendError(errmsgNoObject, sender, serial) - return - } - - var m reflect.Value - if hasIface { - iface := handlers[ifaceName] - m = exportedMethod(iface, name) - } else { - for _, v := range handlers { - m = exportedMethod(v, name) - if m.IsValid() { - break - } - } - } - - if !m.IsValid() { - conn.sendError(errmsgUnknownMethod, sender, serial) - return - } - - t := m.Type() - vs := msg.Body - pointers := make([]interface{}, t.NumIn()) - decode := make([]interface{}, 0, len(vs)) - for i := 0; i < t.NumIn(); i++ { - tp := t.In(i) - val := reflect.New(tp) - pointers[i] = val.Interface() - if tp == reflect.TypeOf((*Sender)(nil)).Elem() { - val.Elem().SetString(sender) - } else if tp == reflect.TypeOf((*Message)(nil)).Elem() { - val.Elem().Set(reflect.ValueOf(*msg)) - } else { - decode = append(decode, pointers[i]) - } - } - - if len(decode) != len(vs) { - conn.sendError(errmsgInvalidArg, sender, serial) - return - } - - if err := Store(vs, decode...); err != nil { - conn.sendError(errmsgInvalidArg, sender, serial) - return - } - - // Extract parameters - params := make([]reflect.Value, len(pointers)) - for i := 0; i < len(pointers); i++ { - params[i] = reflect.ValueOf(pointers[i]).Elem() - } - - // Call method - ret := m.Call(params) - if em := ret[t.NumOut()-1].Interface().(*Error); em != nil { - conn.sendError(*em, sender, serial) - return - } - - if msg.Flags&FlagNoReplyExpected == 0 { - reply := new(Message) - reply.Type = TypeMethodReply - reply.serial = conn.getSerial() - reply.Headers = make(map[HeaderField]Variant) - if hasSender { - reply.Headers[FieldDestination] = msg.Headers[FieldSender] - } - reply.Headers[FieldReplySerial] = MakeVariant(msg.serial) - reply.Body = make([]interface{}, len(ret)-1) - for i := 0; i < len(ret)-1; i++ { - reply.Body[i] = 
ret[i].Interface() - } - if len(ret) != 1 { - reply.Headers[FieldSignature] = MakeVariant(SignatureOf(reply.Body...)) - } - conn.outLck.RLock() - if !conn.closed { - conn.out <- reply - } - conn.outLck.RUnlock() - } -} - -// Emit emits the given signal on the message bus. The name parameter must be -// formatted as "interface.member", e.g., "org.freedesktop.DBus.NameLost". -func (conn *Conn) Emit(path ObjectPath, name string, values ...interface{}) error { - if !path.IsValid() { - return errors.New("dbus: invalid object path") - } - i := strings.LastIndex(name, ".") - if i == -1 { - return errors.New("dbus: invalid method name") - } - iface := name[:i] - member := name[i+1:] - if !isValidMember(member) { - return errors.New("dbus: invalid method name") - } - if !isValidInterface(iface) { - return errors.New("dbus: invalid interface name") - } - msg := new(Message) - msg.Type = TypeSignal - msg.serial = conn.getSerial() - msg.Headers = make(map[HeaderField]Variant) - msg.Headers[FieldInterface] = MakeVariant(iface) - msg.Headers[FieldMember] = MakeVariant(member) - msg.Headers[FieldPath] = MakeVariant(path) - msg.Body = values - if len(values) > 0 { - msg.Headers[FieldSignature] = MakeVariant(SignatureOf(values...)) - } - conn.outLck.RLock() - defer conn.outLck.RUnlock() - if conn.closed { - return ErrClosed - } - conn.out <- msg - return nil -} - -// Export registers the given value to be exported as an object on the -// message bus. -// -// If a method call on the given path and interface is received, an exported -// method with the same name is called with v as the receiver if the -// parameters match and the last return value is of type *Error. If this -// *Error is not nil, it is sent back to the caller as an error. -// Otherwise, a method reply is sent with the other return values as its body. -// -// Any parameters with the special type Sender are set to the sender of the -// dbus message when the method is called. Parameters of this type do not -// contribute to the dbus signature of the method (i.e. the method is exposed -// as if the parameters of type Sender were not there). -// -// Similarly, any parameters with the type Message are set to the raw message -// received on the bus. Again, parameters of this type do not contribute to the -// dbus signature of the method. -// -// Every method call is executed in a new goroutine, so the method may be called -// in multiple goroutines at once. -// -// Method calls on the interface org.freedesktop.DBus.Peer will be automatically -// handled for every object. -// -// Passing nil as the first parameter will cause conn to cease handling calls on -// the given combination of path and interface. -// -// Export returns an error if path is not a valid path name. -func (conn *Conn) Export(v interface{}, path ObjectPath, iface string) error { - return conn.ExportWithMap(v, nil, path, iface) -} - -// ExportWithMap works exactly like Export but provides the ability to remap -// method names (e.g. export a lower-case method). -// -// The keys in the map are the real method names (exported on the struct), and -// the values are the method names to be exported on DBus. -func (conn *Conn) ExportWithMap(v interface{}, mapping map[string]string, path ObjectPath, iface string) error { - return conn.exportWithMap(v, mapping, path, iface, false) -} - -// ExportSubtree works exactly like Export but registers the given value for -// an entire subtree rather under the root path provided. 
-// -// In order to make this useful, one parameter in each of the value's exported -// methods should be a Message, in which case it will contain the raw message -// (allowing one to get access to the path that caused the method to be called). -// -// Note that more specific export paths take precedence over less specific. For -// example, a method call using the ObjectPath /foo/bar/baz will call a method -// exported on /foo/bar before a method exported on /foo. -func (conn *Conn) ExportSubtree(v interface{}, path ObjectPath, iface string) error { - return conn.ExportSubtreeWithMap(v, nil, path, iface) -} - -// ExportSubtreeWithMap works exactly like ExportSubtree but provides the -// ability to remap method names (e.g. export a lower-case method). -// -// The keys in the map are the real method names (exported on the struct), and -// the values are the method names to be exported on DBus. -func (conn *Conn) ExportSubtreeWithMap(v interface{}, mapping map[string]string, path ObjectPath, iface string) error { - return conn.exportWithMap(v, mapping, path, iface, true) -} - -// exportWithMap is the worker function for all exports/registrations. -func (conn *Conn) exportWithMap(v interface{}, mapping map[string]string, path ObjectPath, iface string, includeSubtree bool) error { - if !path.IsValid() { - return fmt.Errorf(`dbus: Invalid path name: "%s"`, path) - } - - conn.handlersLck.Lock() - defer conn.handlersLck.Unlock() - - // Remove a previous export if the interface is nil - if v == nil { - if _, ok := conn.handlers[path]; ok { - delete(conn.handlers[path], iface) - if len(conn.handlers[path]) == 0 { - delete(conn.handlers, path) - } - } - - return nil - } - - // If this is the first handler for this path, make a new map to hold all - // handlers for this path. - if _, ok := conn.handlers[path]; !ok { - conn.handlers[path] = make(map[string]exportWithMapping) - } - - // Finally, save this handler - conn.handlers[path][iface] = exportWithMapping{export: v, mapping: mapping, includeSubtree: includeSubtree} - - return nil -} - -// ReleaseName calls org.freedesktop.DBus.ReleaseName and awaits a response. -func (conn *Conn) ReleaseName(name string) (ReleaseNameReply, error) { - var r uint32 - err := conn.busObj.Call("org.freedesktop.DBus.ReleaseName", 0, name).Store(&r) - if err != nil { - return 0, err - } - return ReleaseNameReply(r), nil -} - -// RequestName calls org.freedesktop.DBus.RequestName and awaits a response. -func (conn *Conn) RequestName(name string, flags RequestNameFlags) (RequestNameReply, error) { - var r uint32 - err := conn.busObj.Call("org.freedesktop.DBus.RequestName", 0, name, flags).Store(&r) - if err != nil { - return 0, err - } - return RequestNameReply(r), nil -} - -// ReleaseNameReply is the reply to a ReleaseName call. -type ReleaseNameReply uint32 - -const ( - ReleaseNameReplyReleased ReleaseNameReply = 1 + iota - ReleaseNameReplyNonExistent - ReleaseNameReplyNotOwner -) - -// RequestNameFlags represents the possible flags for a RequestName call. -type RequestNameFlags uint32 - -const ( - NameFlagAllowReplacement RequestNameFlags = 1 << iota - NameFlagReplaceExisting - NameFlagDoNotQueue -) - -// RequestNameReply is the reply to a RequestName call. 
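The export machinery removed in these hunks (Export, RequestName and the reply constants below) supported the usual server-side pattern. A minimal, illustrative sketch; the well-known name, object path, interface and method are assumptions, not taken from this repository.

    package main

    import (
        "log"

        "github.com/godbus/dbus"
    )

    type echo struct{}

    // Answer is callable over the bus; the trailing *dbus.Error return value
    // selects between a method reply and a D-Bus error, as described in the
    // Export documentation above.
    func (echo) Answer(s string) (string, *dbus.Error) {
        return "echo: " + s, nil
    }

    func main() {
        conn, err := dbus.SessionBus()
        if err != nil {
            log.Fatal(err)
        }
        reply, err := conn.RequestName("com.example.Echo", dbus.NameFlagDoNotQueue)
        if err != nil {
            log.Fatal(err)
        }
        if reply != dbus.RequestNameReplyPrimaryOwner {
            log.Fatal("name already taken")
        }
        if err := conn.Export(echo{}, "/com/example/Echo", "com.example.Echo"); err != nil {
            log.Fatal(err)
        }
        select {} // keep serving method calls
    }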
-type RequestNameReply uint32 - -const ( - RequestNameReplyPrimaryOwner RequestNameReply = 1 + iota - RequestNameReplyInQueue - RequestNameReplyExists - RequestNameReplyAlreadyOwner -) diff --git a/vendor/github.com/godbus/dbus/homedir.go b/vendor/github.com/godbus/dbus/homedir.go deleted file mode 100644 index 0b745f931..000000000 --- a/vendor/github.com/godbus/dbus/homedir.go +++ /dev/null @@ -1,28 +0,0 @@ -package dbus - -import ( - "os" - "sync" -) - -var ( - homeDir string - homeDirLock sync.Mutex -) - -func getHomeDir() string { - homeDirLock.Lock() - defer homeDirLock.Unlock() - - if homeDir != "" { - return homeDir - } - - homeDir = os.Getenv("HOME") - if homeDir != "" { - return homeDir - } - - homeDir = lookupHomeDir() - return homeDir -} diff --git a/vendor/github.com/godbus/dbus/homedir_dynamic.go b/vendor/github.com/godbus/dbus/homedir_dynamic.go deleted file mode 100644 index 2732081e7..000000000 --- a/vendor/github.com/godbus/dbus/homedir_dynamic.go +++ /dev/null @@ -1,15 +0,0 @@ -// +build !static_build - -package dbus - -import ( - "os/user" -) - -func lookupHomeDir() string { - u, err := user.Current() - if err != nil { - return "/" - } - return u.HomeDir -} diff --git a/vendor/github.com/godbus/dbus/homedir_static.go b/vendor/github.com/godbus/dbus/homedir_static.go deleted file mode 100644 index b9d9cb552..000000000 --- a/vendor/github.com/godbus/dbus/homedir_static.go +++ /dev/null @@ -1,45 +0,0 @@ -// +build static_build - -package dbus - -import ( - "bufio" - "os" - "strconv" - "strings" -) - -func lookupHomeDir() string { - myUid := os.Getuid() - - f, err := os.Open("/etc/passwd") - if err != nil { - return "/" - } - defer f.Close() - - s := bufio.NewScanner(f) - - for s.Scan() { - if err := s.Err(); err != nil { - break - } - - line := strings.TrimSpace(s.Text()) - if line == "" { - continue - } - - parts := strings.Split(line, ":") - - if len(parts) >= 6 { - uid, err := strconv.Atoi(parts[2]) - if err == nil && uid == myUid { - return parts[5] - } - } - } - - // Default to / if we can't get a better value - return "/" -} diff --git a/vendor/github.com/godbus/dbus/message.go b/vendor/github.com/godbus/dbus/message.go deleted file mode 100644 index 075d6e38b..000000000 --- a/vendor/github.com/godbus/dbus/message.go +++ /dev/null @@ -1,346 +0,0 @@ -package dbus - -import ( - "bytes" - "encoding/binary" - "errors" - "io" - "reflect" - "strconv" -) - -const protoVersion byte = 1 - -// Flags represents the possible flags of a D-Bus message. -type Flags byte - -const ( - // FlagNoReplyExpected signals that the message is not expected to generate - // a reply. If this flag is set on outgoing messages, any possible reply - // will be discarded. - FlagNoReplyExpected Flags = 1 << iota - // FlagNoAutoStart signals that the message bus should not automatically - // start an application when handling this message. - FlagNoAutoStart -) - -// Type represents the possible types of a D-Bus message. -type Type byte - -const ( - TypeMethodCall Type = 1 + iota - TypeMethodReply - TypeError - TypeSignal - typeMax -) - -func (t Type) String() string { - switch t { - case TypeMethodCall: - return "method call" - case TypeMethodReply: - return "reply" - case TypeError: - return "error" - case TypeSignal: - return "signal" - } - return "invalid" -} - -// HeaderField represents the possible byte codes for the headers -// of a D-Bus message. 
-type HeaderField byte - -const ( - FieldPath HeaderField = 1 + iota - FieldInterface - FieldMember - FieldErrorName - FieldReplySerial - FieldDestination - FieldSender - FieldSignature - FieldUnixFDs - fieldMax -) - -// An InvalidMessageError describes the reason why a D-Bus message is regarded as -// invalid. -type InvalidMessageError string - -func (e InvalidMessageError) Error() string { - return "dbus: invalid message: " + string(e) -} - -// fieldType are the types of the various header fields. -var fieldTypes = [fieldMax]reflect.Type{ - FieldPath: objectPathType, - FieldInterface: stringType, - FieldMember: stringType, - FieldErrorName: stringType, - FieldReplySerial: uint32Type, - FieldDestination: stringType, - FieldSender: stringType, - FieldSignature: signatureType, - FieldUnixFDs: uint32Type, -} - -// requiredFields lists the header fields that are required by the different -// message types. -var requiredFields = [typeMax][]HeaderField{ - TypeMethodCall: {FieldPath, FieldMember}, - TypeMethodReply: {FieldReplySerial}, - TypeError: {FieldErrorName, FieldReplySerial}, - TypeSignal: {FieldPath, FieldInterface, FieldMember}, -} - -// Message represents a single D-Bus message. -type Message struct { - Type - Flags - Headers map[HeaderField]Variant - Body []interface{} - - serial uint32 -} - -type header struct { - Field byte - Variant -} - -// DecodeMessage tries to decode a single message in the D-Bus wire format -// from the given reader. The byte order is figured out from the first byte. -// The possibly returned error can be an error of the underlying reader, an -// InvalidMessageError or a FormatError. -func DecodeMessage(rd io.Reader) (msg *Message, err error) { - var order binary.ByteOrder - var hlength, length uint32 - var typ, flags, proto byte - var headers []header - - b := make([]byte, 1) - _, err = rd.Read(b) - if err != nil { - return - } - switch b[0] { - case 'l': - order = binary.LittleEndian - case 'B': - order = binary.BigEndian - default: - return nil, InvalidMessageError("invalid byte order") - } - - dec := newDecoder(rd, order) - dec.pos = 1 - - msg = new(Message) - vs, err := dec.Decode(Signature{"yyyuu"}) - if err != nil { - return nil, err - } - if err = Store(vs, &typ, &flags, &proto, &length, &msg.serial); err != nil { - return nil, err - } - msg.Type = Type(typ) - msg.Flags = Flags(flags) - - // get the header length separately because we need it later - b = make([]byte, 4) - _, err = io.ReadFull(rd, b) - if err != nil { - return nil, err - } - binary.Read(bytes.NewBuffer(b), order, &hlength) - if hlength+length+16 > 1<<27 { - return nil, InvalidMessageError("message is too long") - } - dec = newDecoder(io.MultiReader(bytes.NewBuffer(b), rd), order) - dec.pos = 12 - vs, err = dec.Decode(Signature{"a(yv)"}) - if err != nil { - return nil, err - } - if err = Store(vs, &headers); err != nil { - return nil, err - } - - msg.Headers = make(map[HeaderField]Variant) - for _, v := range headers { - msg.Headers[HeaderField(v.Field)] = v.Variant - } - - dec.align(8) - body := make([]byte, int(length)) - if length != 0 { - _, err := io.ReadFull(rd, body) - if err != nil { - return nil, err - } - } - - if err = msg.IsValid(); err != nil { - return nil, err - } - sig, _ := msg.Headers[FieldSignature].value.(Signature) - if sig.str != "" { - buf := bytes.NewBuffer(body) - dec = newDecoder(buf, order) - vs, err := dec.Decode(sig) - if err != nil { - return nil, err - } - msg.Body = vs - } - - return -} - -// EncodeTo encodes and sends a message to the given writer. 
The byte order must -// be either binary.LittleEndian or binary.BigEndian. If the message is not -// valid or an error occurs when writing, an error is returned. -func (msg *Message) EncodeTo(out io.Writer, order binary.ByteOrder) error { - if err := msg.IsValid(); err != nil { - return err - } - var vs [7]interface{} - switch order { - case binary.LittleEndian: - vs[0] = byte('l') - case binary.BigEndian: - vs[0] = byte('B') - default: - return errors.New("dbus: invalid byte order") - } - body := new(bytes.Buffer) - enc := newEncoder(body, order) - if len(msg.Body) != 0 { - enc.Encode(msg.Body...) - } - vs[1] = msg.Type - vs[2] = msg.Flags - vs[3] = protoVersion - vs[4] = uint32(len(body.Bytes())) - vs[5] = msg.serial - headers := make([]header, 0, len(msg.Headers)) - for k, v := range msg.Headers { - headers = append(headers, header{byte(k), v}) - } - vs[6] = headers - var buf bytes.Buffer - enc = newEncoder(&buf, order) - enc.Encode(vs[:]...) - enc.align(8) - body.WriteTo(&buf) - if buf.Len() > 1<<27 { - return InvalidMessageError("message is too long") - } - if _, err := buf.WriteTo(out); err != nil { - return err - } - return nil -} - -// IsValid checks whether msg is a valid message and returns an -// InvalidMessageError if it is not. -func (msg *Message) IsValid() error { - if msg.Flags & ^(FlagNoAutoStart|FlagNoReplyExpected) != 0 { - return InvalidMessageError("invalid flags") - } - if msg.Type == 0 || msg.Type >= typeMax { - return InvalidMessageError("invalid message type") - } - for k, v := range msg.Headers { - if k == 0 || k >= fieldMax { - return InvalidMessageError("invalid header") - } - if reflect.TypeOf(v.value) != fieldTypes[k] { - return InvalidMessageError("invalid type of header field") - } - } - for _, v := range requiredFields[msg.Type] { - if _, ok := msg.Headers[v]; !ok { - return InvalidMessageError("missing required header") - } - } - if path, ok := msg.Headers[FieldPath]; ok { - if !path.value.(ObjectPath).IsValid() { - return InvalidMessageError("invalid path name") - } - } - if iface, ok := msg.Headers[FieldInterface]; ok { - if !isValidInterface(iface.value.(string)) { - return InvalidMessageError("invalid interface name") - } - } - if member, ok := msg.Headers[FieldMember]; ok { - if !isValidMember(member.value.(string)) { - return InvalidMessageError("invalid member name") - } - } - if errname, ok := msg.Headers[FieldErrorName]; ok { - if !isValidInterface(errname.value.(string)) { - return InvalidMessageError("invalid error name") - } - } - if len(msg.Body) != 0 { - if _, ok := msg.Headers[FieldSignature]; !ok { - return InvalidMessageError("missing signature") - } - } - return nil -} - -// Serial returns the message's serial number. The returned value is only valid -// for messages received by eavesdropping. -func (msg *Message) Serial() uint32 { - return msg.serial -} - -// String returns a string representation of a message similar to the format of -// dbus-monitor. 
-func (msg *Message) String() string { - if err := msg.IsValid(); err != nil { - return "" - } - s := msg.Type.String() - if v, ok := msg.Headers[FieldSender]; ok { - s += " from " + v.value.(string) - } - if v, ok := msg.Headers[FieldDestination]; ok { - s += " to " + v.value.(string) - } - s += " serial " + strconv.FormatUint(uint64(msg.serial), 10) - if v, ok := msg.Headers[FieldReplySerial]; ok { - s += " reply_serial " + strconv.FormatUint(uint64(v.value.(uint32)), 10) - } - if v, ok := msg.Headers[FieldUnixFDs]; ok { - s += " unixfds " + strconv.FormatUint(uint64(v.value.(uint32)), 10) - } - if v, ok := msg.Headers[FieldPath]; ok { - s += " path " + string(v.value.(ObjectPath)) - } - if v, ok := msg.Headers[FieldInterface]; ok { - s += " interface " + v.value.(string) - } - if v, ok := msg.Headers[FieldErrorName]; ok { - s += " error " + v.value.(string) - } - if v, ok := msg.Headers[FieldMember]; ok { - s += " member " + v.value.(string) - } - if len(msg.Body) != 0 { - s += "\n" - } - for i, v := range msg.Body { - s += " " + MakeVariant(v).String() - if i != len(msg.Body)-1 { - s += "\n" - } - } - return s -} diff --git a/vendor/github.com/godbus/dbus/object.go b/vendor/github.com/godbus/dbus/object.go deleted file mode 100644 index 7ef45da4c..000000000 --- a/vendor/github.com/godbus/dbus/object.go +++ /dev/null @@ -1,126 +0,0 @@ -package dbus - -import ( - "errors" - "strings" -) - -// BusObject is the interface of a remote object on which methods can be -// invoked. -type BusObject interface { - Call(method string, flags Flags, args ...interface{}) *Call - Go(method string, flags Flags, ch chan *Call, args ...interface{}) *Call - GetProperty(p string) (Variant, error) - Destination() string - Path() ObjectPath -} - -// Object represents a remote object on which methods can be invoked. -type Object struct { - conn *Conn - dest string - path ObjectPath -} - -// Call calls a method with (*Object).Go and waits for its reply. -func (o *Object) Call(method string, flags Flags, args ...interface{}) *Call { - return <-o.Go(method, flags, make(chan *Call, 1), args...).Done -} - -// Go calls a method with the given arguments asynchronously. It returns a -// Call structure representing this method call. The passed channel will -// return the same value once the call is done. If ch is nil, a new channel -// will be allocated. Otherwise, ch has to be buffered or Go will panic. -// -// If the flags include FlagNoReplyExpected, ch is ignored and a Call structure -// is returned of which only the Err member is valid. -// -// If the method parameter contains a dot ('.'), the part before the last dot -// specifies the interface on which the method is called. 
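The Go method documented just above is the asynchronous counterpart of Call. A sketch, assuming the bus daemon's GetId method returns the bus ID as a string; conn is a *dbus.Conn as before.

    // Assumes: import "github.com/godbus/dbus".
    func asyncBusID(conn *dbus.Conn) (string, error) {
        obj := conn.Object("org.freedesktop.DBus", "/org/freedesktop/DBus")

        // Go returns immediately; the same *Call is delivered on the buffered
        // Done channel once the reply (or an error) arrives.
        call := obj.Go("org.freedesktop.DBus.GetId", 0, make(chan *dbus.Call, 1))

        // ... other work could happen here ...

        res := <-call.Done
        var id string
        if err := res.Store(&id); err != nil {
            return "", err
        }
        return id, nil
    }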
-func (o *Object) Go(method string, flags Flags, ch chan *Call, args ...interface{}) *Call { - iface := "" - i := strings.LastIndex(method, ".") - if i != -1 { - iface = method[:i] - } - method = method[i+1:] - msg := new(Message) - msg.Type = TypeMethodCall - msg.serial = o.conn.getSerial() - msg.Flags = flags & (FlagNoAutoStart | FlagNoReplyExpected) - msg.Headers = make(map[HeaderField]Variant) - msg.Headers[FieldPath] = MakeVariant(o.path) - msg.Headers[FieldDestination] = MakeVariant(o.dest) - msg.Headers[FieldMember] = MakeVariant(method) - if iface != "" { - msg.Headers[FieldInterface] = MakeVariant(iface) - } - msg.Body = args - if len(args) > 0 { - msg.Headers[FieldSignature] = MakeVariant(SignatureOf(args...)) - } - if msg.Flags&FlagNoReplyExpected == 0 { - if ch == nil { - ch = make(chan *Call, 10) - } else if cap(ch) == 0 { - panic("dbus: unbuffered channel passed to (*Object).Go") - } - call := &Call{ - Destination: o.dest, - Path: o.path, - Method: method, - Args: args, - Done: ch, - } - o.conn.callsLck.Lock() - o.conn.calls[msg.serial] = call - o.conn.callsLck.Unlock() - o.conn.outLck.RLock() - if o.conn.closed { - call.Err = ErrClosed - call.Done <- call - } else { - o.conn.out <- msg - } - o.conn.outLck.RUnlock() - return call - } - o.conn.outLck.RLock() - defer o.conn.outLck.RUnlock() - if o.conn.closed { - return &Call{Err: ErrClosed} - } - o.conn.out <- msg - return &Call{Err: nil} -} - -// GetProperty calls org.freedesktop.DBus.Properties.GetProperty on the given -// object. The property name must be given in interface.member notation. -func (o *Object) GetProperty(p string) (Variant, error) { - idx := strings.LastIndex(p, ".") - if idx == -1 || idx+1 == len(p) { - return Variant{}, errors.New("dbus: invalid property " + p) - } - - iface := p[:idx] - prop := p[idx+1:] - - result := Variant{} - err := o.Call("org.freedesktop.DBus.Properties.Get", 0, iface, prop).Store(&result) - - if err != nil { - return Variant{}, err - } - - return result, nil -} - -// Destination returns the destination that calls on o are sent to. -func (o *Object) Destination() string { - return o.dest -} - -// Path returns the path that calls on o are sent to. -func (o *Object) Path() ObjectPath { - return o.path -} diff --git a/vendor/github.com/godbus/dbus/sig.go b/vendor/github.com/godbus/dbus/sig.go deleted file mode 100644 index f45b53ce1..000000000 --- a/vendor/github.com/godbus/dbus/sig.go +++ /dev/null @@ -1,257 +0,0 @@ -package dbus - -import ( - "fmt" - "reflect" - "strings" -) - -var sigToType = map[byte]reflect.Type{ - 'y': byteType, - 'b': boolType, - 'n': int16Type, - 'q': uint16Type, - 'i': int32Type, - 'u': uint32Type, - 'x': int64Type, - 't': uint64Type, - 'd': float64Type, - 's': stringType, - 'g': signatureType, - 'o': objectPathType, - 'v': variantType, - 'h': unixFDIndexType, -} - -// Signature represents a correct type signature as specified by the D-Bus -// specification. The zero value represents the empty signature, "". -type Signature struct { - str string -} - -// SignatureOf returns the concatenation of all the signatures of the given -// values. It panics if one of them is not representable in D-Bus. -func SignatureOf(vs ...interface{}) Signature { - var s string - for _, v := range vs { - s += getSignature(reflect.TypeOf(v)) - } - return Signature{s} -} - -// SignatureOfType returns the signature of the given type. It panics if the -// type is not representable in D-Bus. 
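
A small sketch of the value-to-signature mapping implemented above, assuming the vendored import path github.com/godbus/dbus; the expected signatures are noted in comments.

package main

import (
	"fmt"
	"reflect"

	"github.com/godbus/dbus"
)

func main() {
	fmt.Println(dbus.SignatureOf(uint32(7)))                    // u
	fmt.Println(dbus.SignatureOf("hi", []byte{1, 2}))           // say
	fmt.Println(dbus.SignatureOf(map[string]dbus.Variant{}))    // a{sv}
	fmt.Println(dbus.SignatureOfType(reflect.TypeOf(int64(0)))) // x
}
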
-func SignatureOfType(t reflect.Type) Signature { - return Signature{getSignature(t)} -} - -// getSignature returns the signature of the given type and panics on unknown types. -func getSignature(t reflect.Type) string { - // handle simple types first - switch t.Kind() { - case reflect.Uint8: - return "y" - case reflect.Bool: - return "b" - case reflect.Int16: - return "n" - case reflect.Uint16: - return "q" - case reflect.Int32: - if t == unixFDType { - return "h" - } - return "i" - case reflect.Uint32: - if t == unixFDIndexType { - return "h" - } - return "u" - case reflect.Int64: - return "x" - case reflect.Uint64: - return "t" - case reflect.Float64: - return "d" - case reflect.Ptr: - return getSignature(t.Elem()) - case reflect.String: - if t == objectPathType { - return "o" - } - return "s" - case reflect.Struct: - if t == variantType { - return "v" - } else if t == signatureType { - return "g" - } - var s string - for i := 0; i < t.NumField(); i++ { - field := t.Field(i) - if field.PkgPath == "" && field.Tag.Get("dbus") != "-" { - s += getSignature(t.Field(i).Type) - } - } - return "(" + s + ")" - case reflect.Array, reflect.Slice: - return "a" + getSignature(t.Elem()) - case reflect.Map: - if !isKeyType(t.Key()) { - panic(InvalidTypeError{t}) - } - return "a{" + getSignature(t.Key()) + getSignature(t.Elem()) + "}" - } - panic(InvalidTypeError{t}) -} - -// ParseSignature returns the signature represented by this string, or a -// SignatureError if the string is not a valid signature. -func ParseSignature(s string) (sig Signature, err error) { - if len(s) == 0 { - return - } - if len(s) > 255 { - return Signature{""}, SignatureError{s, "too long"} - } - sig.str = s - for err == nil && len(s) != 0 { - err, s = validSingle(s, 0) - } - if err != nil { - sig = Signature{""} - } - - return -} - -// ParseSignatureMust behaves like ParseSignature, except that it panics if s -// is not valid. -func ParseSignatureMust(s string) Signature { - sig, err := ParseSignature(s) - if err != nil { - panic(err) - } - return sig -} - -// Empty retruns whether the signature is the empty signature. -func (s Signature) Empty() bool { - return s.str == "" -} - -// Single returns whether the signature represents a single, complete type. -func (s Signature) Single() bool { - err, r := validSingle(s.str, 0) - return err != nil && r == "" -} - -// String returns the signature's string representation. -func (s Signature) String() string { - return s.str -} - -// A SignatureError indicates that a signature passed to a function or received -// on a connection is not a valid signature. -type SignatureError struct { - Sig string - Reason string -} - -func (e SignatureError) Error() string { - return fmt.Sprintf("dbus: invalid signature: %q (%s)", e.Sig, e.Reason) -} - -// Try to read a single type from this string. If it was successfull, err is nil -// and rem is the remaining unparsed part. Otherwise, err is a non-nil -// SignatureError and rem is "". depth is the current recursion depth which may -// not be greater than 64 and should be given as 0 on the first call. 
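
A brief sketch of ParseSignature's accept/reject behaviour as implemented above, assuming the vendored import path github.com/godbus/dbus.

package main

import (
	"fmt"

	"github.com/godbus/dbus"
)

func main() {
	sig, err := dbus.ParseSignature("a{sv}")
	fmt.Println(sig, err) // a{sv} <nil>

	_, err = dbus.ParseSignature("a{s")
	fmt.Println(err) // e.g. dbus: invalid signature: "a{s" (unmatched '{')
}
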
-func validSingle(s string, depth int) (err error, rem string) { - if s == "" { - return SignatureError{Sig: s, Reason: "empty signature"}, "" - } - if depth > 64 { - return SignatureError{Sig: s, Reason: "container nesting too deep"}, "" - } - switch s[0] { - case 'y', 'b', 'n', 'q', 'i', 'u', 'x', 't', 'd', 's', 'g', 'o', 'v', 'h': - return nil, s[1:] - case 'a': - if len(s) > 1 && s[1] == '{' { - i := findMatching(s[1:], '{', '}') - if i == -1 { - return SignatureError{Sig: s, Reason: "unmatched '{'"}, "" - } - i++ - rem = s[i+1:] - s = s[2:i] - if err, _ = validSingle(s[:1], depth+1); err != nil { - return err, "" - } - err, nr := validSingle(s[1:], depth+1) - if err != nil { - return err, "" - } - if nr != "" { - return SignatureError{Sig: s, Reason: "too many types in dict"}, "" - } - return nil, rem - } - return validSingle(s[1:], depth+1) - case '(': - i := findMatching(s, '(', ')') - if i == -1 { - return SignatureError{Sig: s, Reason: "unmatched ')'"}, "" - } - rem = s[i+1:] - s = s[1:i] - for err == nil && s != "" { - err, s = validSingle(s, depth+1) - } - if err != nil { - rem = "" - } - return - } - return SignatureError{Sig: s, Reason: "invalid type character"}, "" -} - -func findMatching(s string, left, right rune) int { - n := 0 - for i, v := range s { - if v == left { - n++ - } else if v == right { - n-- - } - if n == 0 { - return i - } - } - return -1 -} - -// typeFor returns the type of the given signature. It ignores any left over -// characters and panics if s doesn't start with a valid type signature. -func typeFor(s string) (t reflect.Type) { - err, _ := validSingle(s, 0) - if err != nil { - panic(err) - } - - if t, ok := sigToType[s[0]]; ok { - return t - } - switch s[0] { - case 'a': - if s[1] == '{' { - i := strings.LastIndex(s, "}") - t = reflect.MapOf(sigToType[s[2]], typeFor(s[3:i])) - } else { - t = reflect.SliceOf(typeFor(s[1:])) - } - case '(': - t = interfacesType - } - return -} diff --git a/vendor/github.com/godbus/dbus/transport_darwin.go b/vendor/github.com/godbus/dbus/transport_darwin.go deleted file mode 100644 index 1bba0d6bf..000000000 --- a/vendor/github.com/godbus/dbus/transport_darwin.go +++ /dev/null @@ -1,6 +0,0 @@ -package dbus - -func (t *unixTransport) SendNullByte() error { - _, err := t.Write([]byte{0}) - return err -} diff --git a/vendor/github.com/godbus/dbus/transport_generic.go b/vendor/github.com/godbus/dbus/transport_generic.go deleted file mode 100644 index 46f8f49d6..000000000 --- a/vendor/github.com/godbus/dbus/transport_generic.go +++ /dev/null @@ -1,35 +0,0 @@ -package dbus - -import ( - "encoding/binary" - "errors" - "io" -) - -type genericTransport struct { - io.ReadWriteCloser -} - -func (t genericTransport) SendNullByte() error { - _, err := t.Write([]byte{0}) - return err -} - -func (t genericTransport) SupportsUnixFDs() bool { - return false -} - -func (t genericTransport) EnableUnixFDs() {} - -func (t genericTransport) ReadMessage() (*Message, error) { - return DecodeMessage(t) -} - -func (t genericTransport) SendMessage(msg *Message) error { - for _, v := range msg.Body { - if _, ok := v.(UnixFD); ok { - return errors.New("dbus: unix fd passing not enabled") - } - } - return msg.EncodeTo(t, binary.LittleEndian) -} diff --git a/vendor/github.com/godbus/dbus/transport_unix.go b/vendor/github.com/godbus/dbus/transport_unix.go deleted file mode 100644 index 3fafeabb1..000000000 --- a/vendor/github.com/godbus/dbus/transport_unix.go +++ /dev/null @@ -1,196 +0,0 @@ -//+build !windows - -package dbus - -import ( - "bytes" - 
"encoding/binary" - "errors" - "io" - "net" - "syscall" -) - -type oobReader struct { - conn *net.UnixConn - oob []byte - buf [4096]byte -} - -func (o *oobReader) Read(b []byte) (n int, err error) { - n, oobn, flags, _, err := o.conn.ReadMsgUnix(b, o.buf[:]) - if err != nil { - return n, err - } - if flags&syscall.MSG_CTRUNC != 0 { - return n, errors.New("dbus: control data truncated (too many fds received)") - } - o.oob = append(o.oob, o.buf[:oobn]...) - return n, nil -} - -type unixTransport struct { - *net.UnixConn - hasUnixFDs bool -} - -func newUnixTransport(keys string) (transport, error) { - var err error - - t := new(unixTransport) - abstract := getKey(keys, "abstract") - path := getKey(keys, "path") - switch { - case abstract == "" && path == "": - return nil, errors.New("dbus: invalid address (neither path nor abstract set)") - case abstract != "" && path == "": - t.UnixConn, err = net.DialUnix("unix", nil, &net.UnixAddr{Name: "@" + abstract, Net: "unix"}) - if err != nil { - return nil, err - } - return t, nil - case abstract == "" && path != "": - t.UnixConn, err = net.DialUnix("unix", nil, &net.UnixAddr{Name: path, Net: "unix"}) - if err != nil { - return nil, err - } - return t, nil - default: - return nil, errors.New("dbus: invalid address (both path and abstract set)") - } -} - -func init() { - transports["unix"] = newUnixTransport -} - -func (t *unixTransport) EnableUnixFDs() { - t.hasUnixFDs = true -} - -func (t *unixTransport) ReadMessage() (*Message, error) { - var ( - blen, hlen uint32 - csheader [16]byte - headers []header - order binary.ByteOrder - unixfds uint32 - ) - // To be sure that all bytes of out-of-band data are read, we use a special - // reader that uses ReadUnix on the underlying connection instead of Read - // and gathers the out-of-band data in a buffer. 
- rd := &oobReader{conn: t.UnixConn} - // read the first 16 bytes (the part of the header that has a constant size), - // from which we can figure out the length of the rest of the message - if _, err := io.ReadFull(rd, csheader[:]); err != nil { - return nil, err - } - switch csheader[0] { - case 'l': - order = binary.LittleEndian - case 'B': - order = binary.BigEndian - default: - return nil, InvalidMessageError("invalid byte order") - } - // csheader[4:8] -> length of message body, csheader[12:16] -> length of - // header fields (without alignment) - binary.Read(bytes.NewBuffer(csheader[4:8]), order, &blen) - binary.Read(bytes.NewBuffer(csheader[12:]), order, &hlen) - if hlen%8 != 0 { - hlen += 8 - (hlen % 8) - } - - // decode headers and look for unix fds - headerdata := make([]byte, hlen+4) - copy(headerdata, csheader[12:]) - if _, err := io.ReadFull(t, headerdata[4:]); err != nil { - return nil, err - } - dec := newDecoder(bytes.NewBuffer(headerdata), order) - dec.pos = 12 - vs, err := dec.Decode(Signature{"a(yv)"}) - if err != nil { - return nil, err - } - Store(vs, &headers) - for _, v := range headers { - if v.Field == byte(FieldUnixFDs) { - unixfds, _ = v.Variant.value.(uint32) - } - } - all := make([]byte, 16+hlen+blen) - copy(all, csheader[:]) - copy(all[16:], headerdata[4:]) - if _, err := io.ReadFull(rd, all[16+hlen:]); err != nil { - return nil, err - } - if unixfds != 0 { - if !t.hasUnixFDs { - return nil, errors.New("dbus: got unix fds on unsupported transport") - } - // read the fds from the OOB data - scms, err := syscall.ParseSocketControlMessage(rd.oob) - if err != nil { - return nil, err - } - if len(scms) != 1 { - return nil, errors.New("dbus: received more than one socket control message") - } - fds, err := syscall.ParseUnixRights(&scms[0]) - if err != nil { - return nil, err - } - msg, err := DecodeMessage(bytes.NewBuffer(all)) - if err != nil { - return nil, err - } - // substitute the values in the message body (which are indices for the - // array receiver via OOB) with the actual values - for i, v := range msg.Body { - if j, ok := v.(UnixFDIndex); ok { - if uint32(j) >= unixfds { - return nil, InvalidMessageError("invalid index for unix fd") - } - msg.Body[i] = UnixFD(fds[j]) - } - } - return msg, nil - } - return DecodeMessage(bytes.NewBuffer(all)) -} - -func (t *unixTransport) SendMessage(msg *Message) error { - fds := make([]int, 0) - for i, v := range msg.Body { - if fd, ok := v.(UnixFD); ok { - msg.Body[i] = UnixFDIndex(len(fds)) - fds = append(fds, int(fd)) - } - } - if len(fds) != 0 { - if !t.hasUnixFDs { - return errors.New("dbus: unix fd passing not enabled") - } - msg.Headers[FieldUnixFDs] = MakeVariant(uint32(len(fds))) - oob := syscall.UnixRights(fds...) 
- buf := new(bytes.Buffer) - msg.EncodeTo(buf, binary.LittleEndian) - n, oobn, err := t.UnixConn.WriteMsgUnix(buf.Bytes(), oob, nil) - if err != nil { - return err - } - if n != buf.Len() || oobn != len(oob) { - return io.ErrShortWrite - } - } else { - if err := msg.EncodeTo(t, binary.LittleEndian); err != nil { - return nil - } - } - return nil -} - -func (t *unixTransport) SupportsUnixFDs() bool { - return true -} diff --git a/vendor/github.com/godbus/dbus/transport_unixcred_dragonfly.go b/vendor/github.com/godbus/dbus/transport_unixcred_dragonfly.go deleted file mode 100644 index a8cd39395..000000000 --- a/vendor/github.com/godbus/dbus/transport_unixcred_dragonfly.go +++ /dev/null @@ -1,95 +0,0 @@ -// The UnixCredentials system call is currently only implemented on Linux -// http://golang.org/src/pkg/syscall/sockcmsg_linux.go -// https://golang.org/s/go1.4-syscall -// http://code.google.com/p/go/source/browse/unix/sockcmsg_linux.go?repo=sys - -// Local implementation of the UnixCredentials system call for DragonFly BSD - -package dbus - -/* -#include -*/ -import "C" - -import ( - "io" - "os" - "syscall" - "unsafe" -) - -// http://golang.org/src/pkg/syscall/ztypes_linux_amd64.go -// http://golang.org/src/pkg/syscall/ztypes_dragonfly_amd64.go -type Ucred struct { - Pid int32 - Uid uint32 - Gid uint32 -} - -// http://golang.org/src/pkg/syscall/types_linux.go -// http://golang.org/src/pkg/syscall/types_dragonfly.go -// https://github.com/DragonFlyBSD/DragonFlyBSD/blob/master/sys/sys/ucred.h -const ( - SizeofUcred = C.sizeof_struct_ucred -) - -// http://golang.org/src/pkg/syscall/sockcmsg_unix.go -func cmsgAlignOf(salen int) int { - // From http://golang.org/src/pkg/syscall/sockcmsg_unix.go - //salign := sizeofPtr - // NOTE: It seems like 64-bit Darwin and DragonFly BSD kernels - // still require 32-bit aligned access to network subsystem. - //if darwin64Bit || dragonfly64Bit { - // salign = 4 - //} - salign := 4 - return (salen + salign - 1) & ^(salign - 1) -} - -// http://golang.org/src/pkg/syscall/sockcmsg_unix.go -func cmsgData(h *syscall.Cmsghdr) unsafe.Pointer { - return unsafe.Pointer(uintptr(unsafe.Pointer(h)) + uintptr(cmsgAlignOf(syscall.SizeofCmsghdr))) -} - -// http://golang.org/src/pkg/syscall/sockcmsg_linux.go -// UnixCredentials encodes credentials into a socket control message -// for sending to another process. This can be used for -// authentication. -func UnixCredentials(ucred *Ucred) []byte { - b := make([]byte, syscall.CmsgSpace(SizeofUcred)) - h := (*syscall.Cmsghdr)(unsafe.Pointer(&b[0])) - h.Level = syscall.SOL_SOCKET - h.Type = syscall.SCM_CREDS - h.SetLen(syscall.CmsgLen(SizeofUcred)) - *((*Ucred)(cmsgData(h))) = *ucred - return b -} - -// http://golang.org/src/pkg/syscall/sockcmsg_linux.go -// ParseUnixCredentials decodes a socket control message that contains -// credentials in a Ucred structure. To receive such a message, the -// SO_PASSCRED option must be enabled on the socket. 
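
For comparison, a hedged sketch of the same credential round-trip using the Linux standard-library helpers that this DragonFly port re-implements (syscall.UnixCredentials / syscall.ParseUnixCredentials); Linux-only and illustrative.

//go:build linux

package main

import (
	"fmt"
	"os"
	"syscall"
)

func main() {
	cred := &syscall.Ucred{
		Pid: int32(os.Getpid()),
		Uid: uint32(os.Getuid()),
		Gid: uint32(os.Getgid()),
	}
	// Encode an SCM_CREDENTIALS control message, then parse it back.
	oob := syscall.UnixCredentials(cred)
	scms, err := syscall.ParseSocketControlMessage(oob)
	if err != nil || len(scms) != 1 {
		fmt.Println("parse failed:", err)
		return
	}
	back, err := syscall.ParseUnixCredentials(&scms[0])
	fmt.Println(back, err)
}
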
-func ParseUnixCredentials(m *syscall.SocketControlMessage) (*Ucred, error) { - if m.Header.Level != syscall.SOL_SOCKET { - return nil, syscall.EINVAL - } - if m.Header.Type != syscall.SCM_CREDS { - return nil, syscall.EINVAL - } - ucred := *(*Ucred)(unsafe.Pointer(&m.Data[0])) - return &ucred, nil -} - -func (t *unixTransport) SendNullByte() error { - ucred := &Ucred{Pid: int32(os.Getpid()), Uid: uint32(os.Getuid()), Gid: uint32(os.Getgid())} - b := UnixCredentials(ucred) - _, oobn, err := t.UnixConn.WriteMsgUnix([]byte{0}, b, nil) - if err != nil { - return err - } - if oobn != len(b) { - return io.ErrShortWrite - } - return nil -} diff --git a/vendor/github.com/godbus/dbus/transport_unixcred_linux.go b/vendor/github.com/godbus/dbus/transport_unixcred_linux.go deleted file mode 100644 index d9dfdf698..000000000 --- a/vendor/github.com/godbus/dbus/transport_unixcred_linux.go +++ /dev/null @@ -1,25 +0,0 @@ -// The UnixCredentials system call is currently only implemented on Linux -// http://golang.org/src/pkg/syscall/sockcmsg_linux.go -// https://golang.org/s/go1.4-syscall -// http://code.google.com/p/go/source/browse/unix/sockcmsg_linux.go?repo=sys - -package dbus - -import ( - "io" - "os" - "syscall" -) - -func (t *unixTransport) SendNullByte() error { - ucred := &syscall.Ucred{Pid: int32(os.Getpid()), Uid: uint32(os.Getuid()), Gid: uint32(os.Getgid())} - b := syscall.UnixCredentials(ucred) - _, oobn, err := t.UnixConn.WriteMsgUnix([]byte{0}, b, nil) - if err != nil { - return err - } - if oobn != len(b) { - return io.ErrShortWrite - } - return nil -} diff --git a/vendor/github.com/godbus/dbus/variant.go b/vendor/github.com/godbus/dbus/variant.go deleted file mode 100644 index b7b13ae90..000000000 --- a/vendor/github.com/godbus/dbus/variant.go +++ /dev/null @@ -1,139 +0,0 @@ -package dbus - -import ( - "bytes" - "fmt" - "reflect" - "sort" - "strconv" -) - -// Variant represents the D-Bus variant type. -type Variant struct { - sig Signature - value interface{} -} - -// MakeVariant converts the given value to a Variant. It panics if v cannot be -// represented as a D-Bus type. -func MakeVariant(v interface{}) Variant { - return Variant{SignatureOf(v), v} -} - -// ParseVariant parses the given string as a variant as described at -// https://developer.gnome.org/glib/unstable/gvariant-text.html. If sig is not -// empty, it is taken to be the expected signature for the variant. -func ParseVariant(s string, sig Signature) (Variant, error) { - tokens := varLex(s) - p := &varParser{tokens: tokens} - n, err := varMakeNode(p) - if err != nil { - return Variant{}, err - } - if sig.str == "" { - sig, err = varInfer(n) - if err != nil { - return Variant{}, err - } - } - v, err := n.Value(sig) - if err != nil { - return Variant{}, err - } - return MakeVariant(v), nil -} - -// format returns a formatted version of v and whether this string can be parsed -// unambigously. 
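
A minimal sketch of MakeVariant/ParseVariant as defined above, assuming the vendored import path github.com/godbus/dbus; expected signatures are noted in comments.

package main

import (
	"fmt"

	"github.com/godbus/dbus"
)

func main() {
	v := dbus.MakeVariant(map[string]int32{"answer": 42})
	fmt.Println(v.Signature()) // a{si}
	fmt.Println(v.String())    // {"answer": 42}

	// Parse GVariant-style text; the type is inferred when sig is empty.
	parsed, err := dbus.ParseVariant("[1, 2, 3]", dbus.Signature{})
	if err != nil {
		panic(err)
	}
	fmt.Println(parsed.Signature(), parsed.Value()) // ai [1 2 3]
}
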
-func (v Variant) format() (string, bool) { - switch v.sig.str[0] { - case 'b', 'i': - return fmt.Sprint(v.value), true - case 'n', 'q', 'u', 'x', 't', 'd', 'h': - return fmt.Sprint(v.value), false - case 's': - return strconv.Quote(v.value.(string)), true - case 'o': - return strconv.Quote(string(v.value.(ObjectPath))), false - case 'g': - return strconv.Quote(v.value.(Signature).str), false - case 'v': - s, unamb := v.value.(Variant).format() - if !unamb { - return "<@" + v.value.(Variant).sig.str + " " + s + ">", true - } - return "<" + s + ">", true - case 'y': - return fmt.Sprintf("%#x", v.value.(byte)), false - } - rv := reflect.ValueOf(v.value) - switch rv.Kind() { - case reflect.Slice: - if rv.Len() == 0 { - return "[]", false - } - unamb := true - buf := bytes.NewBuffer([]byte("[")) - for i := 0; i < rv.Len(); i++ { - // TODO: slooow - s, b := MakeVariant(rv.Index(i).Interface()).format() - unamb = unamb && b - buf.WriteString(s) - if i != rv.Len()-1 { - buf.WriteString(", ") - } - } - buf.WriteByte(']') - return buf.String(), unamb - case reflect.Map: - if rv.Len() == 0 { - return "{}", false - } - unamb := true - var buf bytes.Buffer - kvs := make([]string, rv.Len()) - for i, k := range rv.MapKeys() { - s, b := MakeVariant(k.Interface()).format() - unamb = unamb && b - buf.Reset() - buf.WriteString(s) - buf.WriteString(": ") - s, b = MakeVariant(rv.MapIndex(k).Interface()).format() - unamb = unamb && b - buf.WriteString(s) - kvs[i] = buf.String() - } - buf.Reset() - buf.WriteByte('{') - sort.Strings(kvs) - for i, kv := range kvs { - if i > 0 { - buf.WriteString(", ") - } - buf.WriteString(kv) - } - buf.WriteByte('}') - return buf.String(), unamb - } - return `"INVALID"`, true -} - -// Signature returns the D-Bus signature of the underlying value of v. -func (v Variant) Signature() Signature { - return v.sig -} - -// String returns the string representation of the underlying value of v as -// described at https://developer.gnome.org/glib/unstable/gvariant-text.html. -func (v Variant) String() string { - s, unamb := v.format() - if !unamb { - return "@" + v.sig.str + " " + s - } - return s -} - -// Value returns the underlying value of v. -func (v Variant) Value() interface{} { - return v.value -} diff --git a/vendor/github.com/godbus/dbus/variant_lexer.go b/vendor/github.com/godbus/dbus/variant_lexer.go deleted file mode 100644 index 332007d6f..000000000 --- a/vendor/github.com/godbus/dbus/variant_lexer.go +++ /dev/null @@ -1,284 +0,0 @@ -package dbus - -import ( - "fmt" - "strings" - "unicode" - "unicode/utf8" -) - -// Heavily inspired by the lexer from text/template. 
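
A short sketch of text inputs that reach the lexer below through the public ParseVariant entry point, assuming the vendored import path github.com/godbus/dbus; the inputs and expected signatures are illustrative.

package main

import (
	"fmt"

	"github.com/godbus/dbus"
)

func main() {
	inputs := []string{
		"@au [1, 2, 3]",          // explicit type annotation -> au
		"{'a': <1>, 'b': <'x'>}", // dict with variant values -> a{sv}
		"b'raw\\n'",              // byte string              -> ay
	}
	for _, in := range inputs {
		v, err := dbus.ParseVariant(in, dbus.Signature{})
		fmt.Println(in, "->", v.Signature(), err)
	}
}
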
- -type varToken struct { - typ varTokenType - val string -} - -type varTokenType byte - -const ( - tokEOF varTokenType = iota - tokError - tokNumber - tokString - tokBool - tokArrayStart - tokArrayEnd - tokDictStart - tokDictEnd - tokVariantStart - tokVariantEnd - tokComma - tokColon - tokType - tokByteString -) - -type varLexer struct { - input string - start int - pos int - width int - tokens []varToken -} - -type lexState func(*varLexer) lexState - -func varLex(s string) []varToken { - l := &varLexer{input: s} - l.run() - return l.tokens -} - -func (l *varLexer) accept(valid string) bool { - if strings.IndexRune(valid, l.next()) >= 0 { - return true - } - l.backup() - return false -} - -func (l *varLexer) backup() { - l.pos -= l.width -} - -func (l *varLexer) emit(t varTokenType) { - l.tokens = append(l.tokens, varToken{t, l.input[l.start:l.pos]}) - l.start = l.pos -} - -func (l *varLexer) errorf(format string, v ...interface{}) lexState { - l.tokens = append(l.tokens, varToken{ - tokError, - fmt.Sprintf(format, v...), - }) - return nil -} - -func (l *varLexer) ignore() { - l.start = l.pos -} - -func (l *varLexer) next() rune { - var r rune - - if l.pos >= len(l.input) { - l.width = 0 - return -1 - } - r, l.width = utf8.DecodeRuneInString(l.input[l.pos:]) - l.pos += l.width - return r -} - -func (l *varLexer) run() { - for state := varLexNormal; state != nil; { - state = state(l) - } -} - -func (l *varLexer) peek() rune { - r := l.next() - l.backup() - return r -} - -func varLexNormal(l *varLexer) lexState { - for { - r := l.next() - switch { - case r == -1: - l.emit(tokEOF) - return nil - case r == '[': - l.emit(tokArrayStart) - case r == ']': - l.emit(tokArrayEnd) - case r == '{': - l.emit(tokDictStart) - case r == '}': - l.emit(tokDictEnd) - case r == '<': - l.emit(tokVariantStart) - case r == '>': - l.emit(tokVariantEnd) - case r == ':': - l.emit(tokColon) - case r == ',': - l.emit(tokComma) - case r == '\'' || r == '"': - l.backup() - return varLexString - case r == '@': - l.backup() - return varLexType - case unicode.IsSpace(r): - l.ignore() - case unicode.IsNumber(r) || r == '+' || r == '-': - l.backup() - return varLexNumber - case r == 'b': - pos := l.start - if n := l.peek(); n == '"' || n == '\'' { - return varLexByteString - } - // not a byte string; try to parse it as a type or bool below - l.pos = pos + 1 - l.width = 1 - fallthrough - default: - // either a bool or a type. Try bools first. - l.backup() - if l.pos+4 <= len(l.input) { - if l.input[l.pos:l.pos+4] == "true" { - l.pos += 4 - l.emit(tokBool) - continue - } - } - if l.pos+5 <= len(l.input) { - if l.input[l.pos:l.pos+5] == "false" { - l.pos += 5 - l.emit(tokBool) - continue - } - } - // must be a type. 
- return varLexType - } - } -} - -var varTypeMap = map[string]string{ - "boolean": "b", - "byte": "y", - "int16": "n", - "uint16": "q", - "int32": "i", - "uint32": "u", - "int64": "x", - "uint64": "t", - "double": "f", - "string": "s", - "objectpath": "o", - "signature": "g", -} - -func varLexByteString(l *varLexer) lexState { - q := l.next() -Loop: - for { - switch l.next() { - case '\\': - if r := l.next(); r != -1 { - break - } - fallthrough - case -1: - return l.errorf("unterminated bytestring") - case q: - break Loop - } - } - l.emit(tokByteString) - return varLexNormal -} - -func varLexNumber(l *varLexer) lexState { - l.accept("+-") - digits := "0123456789" - if l.accept("0") { - if l.accept("x") { - digits = "0123456789abcdefABCDEF" - } else { - digits = "01234567" - } - } - for strings.IndexRune(digits, l.next()) >= 0 { - } - l.backup() - if l.accept(".") { - for strings.IndexRune(digits, l.next()) >= 0 { - } - l.backup() - } - if l.accept("eE") { - l.accept("+-") - for strings.IndexRune("0123456789", l.next()) >= 0 { - } - l.backup() - } - if r := l.peek(); unicode.IsLetter(r) { - l.next() - return l.errorf("bad number syntax: %q", l.input[l.start:l.pos]) - } - l.emit(tokNumber) - return varLexNormal -} - -func varLexString(l *varLexer) lexState { - q := l.next() -Loop: - for { - switch l.next() { - case '\\': - if r := l.next(); r != -1 { - break - } - fallthrough - case -1: - return l.errorf("unterminated string") - case q: - break Loop - } - } - l.emit(tokString) - return varLexNormal -} - -func varLexType(l *varLexer) lexState { - at := l.accept("@") - for { - r := l.next() - if r == -1 { - break - } - if unicode.IsSpace(r) { - l.backup() - break - } - } - if at { - if _, err := ParseSignature(l.input[l.start+1 : l.pos]); err != nil { - return l.errorf("%s", err) - } - } else { - if _, ok := varTypeMap[l.input[l.start:l.pos]]; ok { - l.emit(tokType) - return varLexNormal - } - return l.errorf("unrecognized type %q", l.input[l.start:l.pos]) - } - l.emit(tokType) - return varLexNormal -} diff --git a/vendor/github.com/godbus/dbus/variant_parser.go b/vendor/github.com/godbus/dbus/variant_parser.go deleted file mode 100644 index d20f5da6d..000000000 --- a/vendor/github.com/godbus/dbus/variant_parser.go +++ /dev/null @@ -1,817 +0,0 @@ -package dbus - -import ( - "bytes" - "errors" - "fmt" - "io" - "reflect" - "strconv" - "strings" - "unicode/utf8" -) - -type varParser struct { - tokens []varToken - i int -} - -func (p *varParser) backup() { - p.i-- -} - -func (p *varParser) next() varToken { - if p.i < len(p.tokens) { - t := p.tokens[p.i] - p.i++ - return t - } - return varToken{typ: tokEOF} -} - -type varNode interface { - Infer() (Signature, error) - String() string - Sigs() sigSet - Value(Signature) (interface{}, error) -} - -func varMakeNode(p *varParser) (varNode, error) { - var sig Signature - - for { - t := p.next() - switch t.typ { - case tokEOF: - return nil, io.ErrUnexpectedEOF - case tokError: - return nil, errors.New(t.val) - case tokNumber: - return varMakeNumNode(t, sig) - case tokString: - return varMakeStringNode(t, sig) - case tokBool: - if sig.str != "" && sig.str != "b" { - return nil, varTypeError{t.val, sig} - } - b, err := strconv.ParseBool(t.val) - if err != nil { - return nil, err - } - return boolNode(b), nil - case tokArrayStart: - return varMakeArrayNode(p, sig) - case tokVariantStart: - return varMakeVariantNode(p, sig) - case tokDictStart: - return varMakeDictNode(p, sig) - case tokType: - if sig.str != "" { - return nil, errors.New("unexpected type 
annotation") - } - if t.val[0] == '@' { - sig.str = t.val[1:] - } else { - sig.str = varTypeMap[t.val] - } - case tokByteString: - if sig.str != "" && sig.str != "ay" { - return nil, varTypeError{t.val, sig} - } - b, err := varParseByteString(t.val) - if err != nil { - return nil, err - } - return byteStringNode(b), nil - default: - return nil, fmt.Errorf("unexpected %q", t.val) - } - } -} - -type varTypeError struct { - val string - sig Signature -} - -func (e varTypeError) Error() string { - return fmt.Sprintf("dbus: can't parse %q as type %q", e.val, e.sig.str) -} - -type sigSet map[Signature]bool - -func (s sigSet) Empty() bool { - return len(s) == 0 -} - -func (s sigSet) Intersect(s2 sigSet) sigSet { - r := make(sigSet) - for k := range s { - if s2[k] { - r[k] = true - } - } - return r -} - -func (s sigSet) Single() (Signature, bool) { - if len(s) == 1 { - for k := range s { - return k, true - } - } - return Signature{}, false -} - -func (s sigSet) ToArray() sigSet { - r := make(sigSet, len(s)) - for k := range s { - r[Signature{"a" + k.str}] = true - } - return r -} - -type numNode struct { - sig Signature - str string - val interface{} -} - -var numSigSet = sigSet{ - Signature{"y"}: true, - Signature{"n"}: true, - Signature{"q"}: true, - Signature{"i"}: true, - Signature{"u"}: true, - Signature{"x"}: true, - Signature{"t"}: true, - Signature{"d"}: true, -} - -func (n numNode) Infer() (Signature, error) { - if strings.ContainsAny(n.str, ".e") { - return Signature{"d"}, nil - } - return Signature{"i"}, nil -} - -func (n numNode) String() string { - return n.str -} - -func (n numNode) Sigs() sigSet { - if n.sig.str != "" { - return sigSet{n.sig: true} - } - if strings.ContainsAny(n.str, ".e") { - return sigSet{Signature{"d"}: true} - } - return numSigSet -} - -func (n numNode) Value(sig Signature) (interface{}, error) { - if n.sig.str != "" && n.sig != sig { - return nil, varTypeError{n.str, sig} - } - if n.val != nil { - return n.val, nil - } - return varNumAs(n.str, sig) -} - -func varMakeNumNode(tok varToken, sig Signature) (varNode, error) { - if sig.str == "" { - return numNode{str: tok.val}, nil - } - num, err := varNumAs(tok.val, sig) - if err != nil { - return nil, err - } - return numNode{sig: sig, val: num}, nil -} - -func varNumAs(s string, sig Signature) (interface{}, error) { - isUnsigned := false - size := 32 - switch sig.str { - case "n": - size = 16 - case "i": - case "x": - size = 64 - case "y": - size = 8 - isUnsigned = true - case "q": - size = 16 - isUnsigned = true - case "u": - isUnsigned = true - case "t": - size = 64 - isUnsigned = true - case "d": - d, err := strconv.ParseFloat(s, 64) - if err != nil { - return nil, err - } - return d, nil - default: - return nil, varTypeError{s, sig} - } - base := 10 - if strings.HasPrefix(s, "0x") { - base = 16 - s = s[2:] - } - if strings.HasPrefix(s, "0") && len(s) != 1 { - base = 8 - s = s[1:] - } - if isUnsigned { - i, err := strconv.ParseUint(s, base, size) - if err != nil { - return nil, err - } - var v interface{} = i - switch sig.str { - case "y": - v = byte(i) - case "q": - v = uint16(i) - case "u": - v = uint32(i) - } - return v, nil - } - i, err := strconv.ParseInt(s, base, size) - if err != nil { - return nil, err - } - var v interface{} = i - switch sig.str { - case "n": - v = int16(i) - case "i": - v = int32(i) - } - return v, nil -} - -type stringNode struct { - sig Signature - str string // parsed - val interface{} // has correct type -} - -var stringSigSet = sigSet{ - Signature{"s"}: true, - Signature{"g"}: 
true, - Signature{"o"}: true, -} - -func (n stringNode) Infer() (Signature, error) { - return Signature{"s"}, nil -} - -func (n stringNode) String() string { - return n.str -} - -func (n stringNode) Sigs() sigSet { - if n.sig.str != "" { - return sigSet{n.sig: true} - } - return stringSigSet -} - -func (n stringNode) Value(sig Signature) (interface{}, error) { - if n.sig.str != "" && n.sig != sig { - return nil, varTypeError{n.str, sig} - } - if n.val != nil { - return n.val, nil - } - switch { - case sig.str == "g": - return Signature{n.str}, nil - case sig.str == "o": - return ObjectPath(n.str), nil - case sig.str == "s": - return n.str, nil - default: - return nil, varTypeError{n.str, sig} - } -} - -func varMakeStringNode(tok varToken, sig Signature) (varNode, error) { - if sig.str != "" && sig.str != "s" && sig.str != "g" && sig.str != "o" { - return nil, fmt.Errorf("invalid type %q for string", sig.str) - } - s, err := varParseString(tok.val) - if err != nil { - return nil, err - } - n := stringNode{str: s} - if sig.str == "" { - return stringNode{str: s}, nil - } - n.sig = sig - switch sig.str { - case "o": - n.val = ObjectPath(s) - case "g": - n.val = Signature{s} - case "s": - n.val = s - } - return n, nil -} - -func varParseString(s string) (string, error) { - // quotes are guaranteed to be there - s = s[1 : len(s)-1] - buf := new(bytes.Buffer) - for len(s) != 0 { - r, size := utf8.DecodeRuneInString(s) - if r == utf8.RuneError && size == 1 { - return "", errors.New("invalid UTF-8") - } - s = s[size:] - if r != '\\' { - buf.WriteRune(r) - continue - } - r, size = utf8.DecodeRuneInString(s) - if r == utf8.RuneError && size == 1 { - return "", errors.New("invalid UTF-8") - } - s = s[size:] - switch r { - case 'a': - buf.WriteRune(0x7) - case 'b': - buf.WriteRune(0x8) - case 'f': - buf.WriteRune(0xc) - case 'n': - buf.WriteRune('\n') - case 'r': - buf.WriteRune('\r') - case 't': - buf.WriteRune('\t') - case '\n': - case 'u': - if len(s) < 4 { - return "", errors.New("short unicode escape") - } - r, err := strconv.ParseUint(s[:4], 16, 32) - if err != nil { - return "", err - } - buf.WriteRune(rune(r)) - s = s[4:] - case 'U': - if len(s) < 8 { - return "", errors.New("short unicode escape") - } - r, err := strconv.ParseUint(s[:8], 16, 32) - if err != nil { - return "", err - } - buf.WriteRune(rune(r)) - s = s[8:] - default: - buf.WriteRune(r) - } - } - return buf.String(), nil -} - -var boolSigSet = sigSet{Signature{"b"}: true} - -type boolNode bool - -func (boolNode) Infer() (Signature, error) { - return Signature{"b"}, nil -} - -func (b boolNode) String() string { - if b { - return "true" - } - return "false" -} - -func (boolNode) Sigs() sigSet { - return boolSigSet -} - -func (b boolNode) Value(sig Signature) (interface{}, error) { - if sig.str != "b" { - return nil, varTypeError{b.String(), sig} - } - return bool(b), nil -} - -type arrayNode struct { - set sigSet - children []varNode - val interface{} -} - -func (n arrayNode) Infer() (Signature, error) { - for _, v := range n.children { - csig, err := varInfer(v) - if err != nil { - continue - } - return Signature{"a" + csig.str}, nil - } - return Signature{}, fmt.Errorf("can't infer type for %q", n.String()) -} - -func (n arrayNode) String() string { - s := "[" - for i, v := range n.children { - s += v.String() - if i != len(n.children)-1 { - s += ", " - } - } - return s + "]" -} - -func (n arrayNode) Sigs() sigSet { - return n.set -} - -func (n arrayNode) Value(sig Signature) (interface{}, error) { - if n.set.Empty() { - // no 
type information whatsoever, so this must be an empty slice - return reflect.MakeSlice(typeFor(sig.str), 0, 0).Interface(), nil - } - if !n.set[sig] { - return nil, varTypeError{n.String(), sig} - } - s := reflect.MakeSlice(typeFor(sig.str), len(n.children), len(n.children)) - for i, v := range n.children { - rv, err := v.Value(Signature{sig.str[1:]}) - if err != nil { - return nil, err - } - s.Index(i).Set(reflect.ValueOf(rv)) - } - return s.Interface(), nil -} - -func varMakeArrayNode(p *varParser, sig Signature) (varNode, error) { - var n arrayNode - if sig.str != "" { - n.set = sigSet{sig: true} - } - if t := p.next(); t.typ == tokArrayEnd { - return n, nil - } else { - p.backup() - } -Loop: - for { - t := p.next() - switch t.typ { - case tokEOF: - return nil, io.ErrUnexpectedEOF - case tokError: - return nil, errors.New(t.val) - } - p.backup() - cn, err := varMakeNode(p) - if err != nil { - return nil, err - } - if cset := cn.Sigs(); !cset.Empty() { - if n.set.Empty() { - n.set = cset.ToArray() - } else { - nset := cset.ToArray().Intersect(n.set) - if nset.Empty() { - return nil, fmt.Errorf("can't parse %q with given type information", cn.String()) - } - n.set = nset - } - } - n.children = append(n.children, cn) - switch t := p.next(); t.typ { - case tokEOF: - return nil, io.ErrUnexpectedEOF - case tokError: - return nil, errors.New(t.val) - case tokArrayEnd: - break Loop - case tokComma: - continue - default: - return nil, fmt.Errorf("unexpected %q", t.val) - } - } - return n, nil -} - -type variantNode struct { - n varNode -} - -var variantSet = sigSet{ - Signature{"v"}: true, -} - -func (variantNode) Infer() (Signature, error) { - return Signature{"v"}, nil -} - -func (n variantNode) String() string { - return "<" + n.n.String() + ">" -} - -func (variantNode) Sigs() sigSet { - return variantSet -} - -func (n variantNode) Value(sig Signature) (interface{}, error) { - if sig.str != "v" { - return nil, varTypeError{n.String(), sig} - } - sig, err := varInfer(n.n) - if err != nil { - return nil, err - } - v, err := n.n.Value(sig) - if err != nil { - return nil, err - } - return MakeVariant(v), nil -} - -func varMakeVariantNode(p *varParser, sig Signature) (varNode, error) { - n, err := varMakeNode(p) - if err != nil { - return nil, err - } - if t := p.next(); t.typ != tokVariantEnd { - return nil, fmt.Errorf("unexpected %q", t.val) - } - vn := variantNode{n} - if sig.str != "" && sig.str != "v" { - return nil, varTypeError{vn.String(), sig} - } - return variantNode{n}, nil -} - -type dictEntry struct { - key, val varNode -} - -type dictNode struct { - kset, vset sigSet - children []dictEntry - val interface{} -} - -func (n dictNode) Infer() (Signature, error) { - for _, v := range n.children { - ksig, err := varInfer(v.key) - if err != nil { - continue - } - vsig, err := varInfer(v.val) - if err != nil { - continue - } - return Signature{"a{" + ksig.str + vsig.str + "}"}, nil - } - return Signature{}, fmt.Errorf("can't infer type for %q", n.String()) -} - -func (n dictNode) String() string { - s := "{" - for i, v := range n.children { - s += v.key.String() + ": " + v.val.String() - if i != len(n.children)-1 { - s += ", " - } - } - return s + "}" -} - -func (n dictNode) Sigs() sigSet { - r := sigSet{} - for k := range n.kset { - for v := range n.vset { - sig := "a{" + k.str + v.str + "}" - r[Signature{sig}] = true - } - } - return r -} - -func (n dictNode) Value(sig Signature) (interface{}, error) { - set := n.Sigs() - if set.Empty() { - // no type information -> empty dict - return 
reflect.MakeMap(typeFor(sig.str)).Interface(), nil - } - if !set[sig] { - return nil, varTypeError{n.String(), sig} - } - m := reflect.MakeMap(typeFor(sig.str)) - ksig := Signature{sig.str[2:3]} - vsig := Signature{sig.str[3 : len(sig.str)-1]} - for _, v := range n.children { - kv, err := v.key.Value(ksig) - if err != nil { - return nil, err - } - vv, err := v.val.Value(vsig) - if err != nil { - return nil, err - } - m.SetMapIndex(reflect.ValueOf(kv), reflect.ValueOf(vv)) - } - return m.Interface(), nil -} - -func varMakeDictNode(p *varParser, sig Signature) (varNode, error) { - var n dictNode - - if sig.str != "" { - if len(sig.str) < 5 { - return nil, fmt.Errorf("invalid signature %q for dict type", sig) - } - ksig := Signature{string(sig.str[2])} - vsig := Signature{sig.str[3 : len(sig.str)-1]} - n.kset = sigSet{ksig: true} - n.vset = sigSet{vsig: true} - } - if t := p.next(); t.typ == tokDictEnd { - return n, nil - } else { - p.backup() - } -Loop: - for { - t := p.next() - switch t.typ { - case tokEOF: - return nil, io.ErrUnexpectedEOF - case tokError: - return nil, errors.New(t.val) - } - p.backup() - kn, err := varMakeNode(p) - if err != nil { - return nil, err - } - if kset := kn.Sigs(); !kset.Empty() { - if n.kset.Empty() { - n.kset = kset - } else { - n.kset = kset.Intersect(n.kset) - if n.kset.Empty() { - return nil, fmt.Errorf("can't parse %q with given type information", kn.String()) - } - } - } - t = p.next() - switch t.typ { - case tokEOF: - return nil, io.ErrUnexpectedEOF - case tokError: - return nil, errors.New(t.val) - case tokColon: - default: - return nil, fmt.Errorf("unexpected %q", t.val) - } - t = p.next() - switch t.typ { - case tokEOF: - return nil, io.ErrUnexpectedEOF - case tokError: - return nil, errors.New(t.val) - } - p.backup() - vn, err := varMakeNode(p) - if err != nil { - return nil, err - } - if vset := vn.Sigs(); !vset.Empty() { - if n.vset.Empty() { - n.vset = vset - } else { - n.vset = n.vset.Intersect(vset) - if n.vset.Empty() { - return nil, fmt.Errorf("can't parse %q with given type information", vn.String()) - } - } - } - n.children = append(n.children, dictEntry{kn, vn}) - t = p.next() - switch t.typ { - case tokEOF: - return nil, io.ErrUnexpectedEOF - case tokError: - return nil, errors.New(t.val) - case tokDictEnd: - break Loop - case tokComma: - continue - default: - return nil, fmt.Errorf("unexpected %q", t.val) - } - } - return n, nil -} - -type byteStringNode []byte - -var byteStringSet = sigSet{ - Signature{"ay"}: true, -} - -func (byteStringNode) Infer() (Signature, error) { - return Signature{"ay"}, nil -} - -func (b byteStringNode) String() string { - return string(b) -} - -func (b byteStringNode) Sigs() sigSet { - return byteStringSet -} - -func (b byteStringNode) Value(sig Signature) (interface{}, error) { - if sig.str != "ay" { - return nil, varTypeError{b.String(), sig} - } - return []byte(b), nil -} - -func varParseByteString(s string) ([]byte, error) { - // quotes and b at start are guaranteed to be there - b := make([]byte, 0, 1) - s = s[2 : len(s)-1] - for len(s) != 0 { - c := s[0] - s = s[1:] - if c != '\\' { - b = append(b, c) - continue - } - c = s[0] - s = s[1:] - switch c { - case 'a': - b = append(b, 0x7) - case 'b': - b = append(b, 0x8) - case 'f': - b = append(b, 0xc) - case 'n': - b = append(b, '\n') - case 'r': - b = append(b, '\r') - case 't': - b = append(b, '\t') - case 'x': - if len(s) < 2 { - return nil, errors.New("short escape") - } - n, err := strconv.ParseUint(s[:2], 16, 8) - if err != nil { - return nil, err - 
} - b = append(b, byte(n)) - s = s[2:] - case '0': - if len(s) < 3 { - return nil, errors.New("short escape") - } - n, err := strconv.ParseUint(s[:3], 8, 8) - if err != nil { - return nil, err - } - b = append(b, byte(n)) - s = s[3:] - default: - b = append(b, c) - } - } - return append(b, 0), nil -} - -func varInfer(n varNode) (Signature, error) { - if sig, ok := n.Sigs().Single(); ok { - return sig, nil - } - return n.Infer() -} diff --git a/vendor/github.com/mrunalp/fileutils/LICENSE b/vendor/github.com/mrunalp/fileutils/LICENSE deleted file mode 100644 index 27448585a..000000000 --- a/vendor/github.com/mrunalp/fileutils/LICENSE +++ /dev/null @@ -1,191 +0,0 @@ - - Apache License - Version 2.0, January 2004 - http://www.apache.org/licenses/ - - TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION - - 1. Definitions. - - "License" shall mean the terms and conditions for use, reproduction, - and distribution as defined by Sections 1 through 9 of this document. - - "Licensor" shall mean the copyright owner or entity authorized by - the copyright owner that is granting the License. - - "Legal Entity" shall mean the union of the acting entity and all - other entities that control, are controlled by, or are under common - control with that entity. For the purposes of this definition, - "control" means (i) the power, direct or indirect, to cause the - direction or management of such entity, whether by contract or - otherwise, or (ii) ownership of fifty percent (50%) or more of the - outstanding shares, or (iii) beneficial ownership of such entity. - - "You" (or "Your") shall mean an individual or Legal Entity - exercising permissions granted by this License. - - "Source" form shall mean the preferred form for making modifications, - including but not limited to software source code, documentation - source, and configuration files. - - "Object" form shall mean any form resulting from mechanical - transformation or translation of a Source form, including but - not limited to compiled object code, generated documentation, - and conversions to other media types. - - "Work" shall mean the work of authorship, whether in Source or - Object form, made available under the License, as indicated by a - copyright notice that is included in or attached to the work - (an example is provided in the Appendix below). - - "Derivative Works" shall mean any work, whether in Source or Object - form, that is based on (or derived from) the Work and for which the - editorial revisions, annotations, elaborations, or other modifications - represent, as a whole, an original work of authorship. For the purposes - of this License, Derivative Works shall not include works that remain - separable from, or merely link (or bind by name) to the interfaces of, - the Work and Derivative Works thereof. - - "Contribution" shall mean any work of authorship, including - the original version of the Work and any modifications or additions - to that Work or Derivative Works thereof, that is intentionally - submitted to Licensor for inclusion in the Work by the copyright owner - or by an individual or Legal Entity authorized to submit on behalf of - the copyright owner. 
For the purposes of this definition, "submitted" - means any form of electronic, verbal, or written communication sent - to the Licensor or its representatives, including but not limited to - communication on electronic mailing lists, source code control systems, - and issue tracking systems that are managed by, or on behalf of, the - Licensor for the purpose of discussing and improving the Work, but - excluding communication that is conspicuously marked or otherwise - designated in writing by the copyright owner as "Not a Contribution." - - "Contributor" shall mean Licensor and any individual or Legal Entity - on behalf of whom a Contribution has been received by Licensor and - subsequently incorporated within the Work. - - 2. Grant of Copyright License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - copyright license to reproduce, prepare Derivative Works of, - publicly display, publicly perform, sublicense, and distribute the - Work and such Derivative Works in Source or Object form. - - 3. Grant of Patent License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - (except as stated in this section) patent license to make, have made, - use, offer to sell, sell, import, and otherwise transfer the Work, - where such license applies only to those patent claims licensable - by such Contributor that are necessarily infringed by their - Contribution(s) alone or by combination of their Contribution(s) - with the Work to which such Contribution(s) was submitted. If You - institute patent litigation against any entity (including a - cross-claim or counterclaim in a lawsuit) alleging that the Work - or a Contribution incorporated within the Work constitutes direct - or contributory patent infringement, then any patent licenses - granted to You under this License for that Work shall terminate - as of the date such litigation is filed. - - 4. Redistribution. You may reproduce and distribute copies of the - Work or Derivative Works thereof in any medium, with or without - modifications, and in Source or Object form, provided that You - meet the following conditions: - - (a) You must give any other recipients of the Work or - Derivative Works a copy of this License; and - - (b) You must cause any modified files to carry prominent notices - stating that You changed the files; and - - (c) You must retain, in the Source form of any Derivative Works - that You distribute, all copyright, patent, trademark, and - attribution notices from the Source form of the Work, - excluding those notices that do not pertain to any part of - the Derivative Works; and - - (d) If the Work includes a "NOTICE" text file as part of its - distribution, then any Derivative Works that You distribute must - include a readable copy of the attribution notices contained - within such NOTICE file, excluding those notices that do not - pertain to any part of the Derivative Works, in at least one - of the following places: within a NOTICE text file distributed - as part of the Derivative Works; within the Source form or - documentation, if provided along with the Derivative Works; or, - within a display generated by the Derivative Works, if and - wherever such third-party notices normally appear. 
The contents - of the NOTICE file are for informational purposes only and - do not modify the License. You may add Your own attribution - notices within Derivative Works that You distribute, alongside - or as an addendum to the NOTICE text from the Work, provided - that such additional attribution notices cannot be construed - as modifying the License. - - You may add Your own copyright statement to Your modifications and - may provide additional or different license terms and conditions - for use, reproduction, or distribution of Your modifications, or - for any such Derivative Works as a whole, provided Your use, - reproduction, and distribution of the Work otherwise complies with - the conditions stated in this License. - - 5. Submission of Contributions. Unless You explicitly state otherwise, - any Contribution intentionally submitted for inclusion in the Work - by You to the Licensor shall be under the terms and conditions of - this License, without any additional terms or conditions. - Notwithstanding the above, nothing herein shall supersede or modify - the terms of any separate license agreement you may have executed - with Licensor regarding such Contributions. - - 6. Trademarks. This License does not grant permission to use the trade - names, trademarks, service marks, or product names of the Licensor, - except as required for reasonable and customary use in describing the - origin of the Work and reproducing the content of the NOTICE file. - - 7. Disclaimer of Warranty. Unless required by applicable law or - agreed to in writing, Licensor provides the Work (and each - Contributor provides its Contributions) on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or - implied, including, without limitation, any warranties or conditions - of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A - PARTICULAR PURPOSE. You are solely responsible for determining the - appropriateness of using or redistributing the Work and assume any - risks associated with Your exercise of permissions under this License. - - 8. Limitation of Liability. In no event and under no legal theory, - whether in tort (including negligence), contract, or otherwise, - unless required by applicable law (such as deliberate and grossly - negligent acts) or agreed to in writing, shall any Contributor be - liable to You for damages, including any direct, indirect, special, - incidental, or consequential damages of any character arising as a - result of this License or out of the use or inability to use the - Work (including but not limited to damages for loss of goodwill, - work stoppage, computer failure or malfunction, or any and all - other commercial damages or losses), even if such Contributor - has been advised of the possibility of such damages. - - 9. Accepting Warranty or Additional Liability. While redistributing - the Work or Derivative Works thereof, You may choose to offer, - and charge a fee for, acceptance of support, warranty, indemnity, - or other liability obligations and/or rights consistent with this - License. However, in accepting such obligations, You may act only - on Your own behalf and on Your sole responsibility, not on behalf - of any other Contributor, and only if You agree to indemnify, - defend, and hold each Contributor harmless for any liability - incurred by, or claims asserted against, such Contributor by reason - of your accepting any such warranty or additional liability. - - END OF TERMS AND CONDITIONS - - Copyright 2014 Docker, Inc. 
- - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. diff --git a/vendor/github.com/mrunalp/fileutils/README.md b/vendor/github.com/mrunalp/fileutils/README.md deleted file mode 100644 index d15692488..000000000 --- a/vendor/github.com/mrunalp/fileutils/README.md +++ /dev/null @@ -1,5 +0,0 @@ -# fileutils - -Collection of utilities for file manipulation in golang - -The library is based on docker pkg/archive but does copies instead of handling archive formats. diff --git a/vendor/github.com/mrunalp/fileutils/fileutils.go b/vendor/github.com/mrunalp/fileutils/fileutils.go deleted file mode 100644 index b60cb909c..000000000 --- a/vendor/github.com/mrunalp/fileutils/fileutils.go +++ /dev/null @@ -1,161 +0,0 @@ -package fileutils - -import ( - "fmt" - "io" - "os" - "path/filepath" - "syscall" -) - -// CopyFile copies the file at source to dest -func CopyFile(source string, dest string) error { - si, err := os.Lstat(source) - if err != nil { - return err - } - - st, ok := si.Sys().(*syscall.Stat_t) - if !ok { - return fmt.Errorf("could not convert to syscall.Stat_t") - } - - uid := int(st.Uid) - gid := int(st.Gid) - - // Handle symlinks - if si.Mode()&os.ModeSymlink != 0 { - target, err := os.Readlink(source) - if err != nil { - return err - } - if err := os.Symlink(target, dest); err != nil { - return err - } - } - - // Handle device files - if st.Mode&syscall.S_IFMT == syscall.S_IFBLK || st.Mode&syscall.S_IFMT == syscall.S_IFCHR { - devMajor := int64(major(uint64(st.Rdev))) - devMinor := int64(minor(uint64(st.Rdev))) - mode := uint32(si.Mode() & 07777) - if st.Mode&syscall.S_IFMT == syscall.S_IFBLK { - mode |= syscall.S_IFBLK - } - if st.Mode&syscall.S_IFMT == syscall.S_IFCHR { - mode |= syscall.S_IFCHR - } - if err := syscall.Mknod(dest, mode, int(mkdev(devMajor, devMinor))); err != nil { - return err - } - } - - // Handle regular files - if si.Mode().IsRegular() { - sf, err := os.Open(source) - if err != nil { - return err - } - defer sf.Close() - - df, err := os.Create(dest) - if err != nil { - return err - } - defer df.Close() - - _, err = io.Copy(df, sf) - if err != nil { - return err - } - } - - // Chown the file - if err := os.Lchown(dest, uid, gid); err != nil { - return err - } - - // Chmod the file - if !(si.Mode()&os.ModeSymlink == os.ModeSymlink) { - if err := os.Chmod(dest, si.Mode()); err != nil { - return err - } - } - - return nil -} - -// CopyDirectory copies the files under the source directory -// to dest directory. The dest directory is created if it -// does not exist. -func CopyDirectory(source string, dest string) error { - fi, err := os.Stat(source) - if err != nil { - return err - } - - // Get owner. - st, ok := fi.Sys().(*syscall.Stat_t) - if !ok { - return fmt.Errorf("could not convert to syscall.Stat_t") - } - - // We have to pick an owner here anyway. 
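
A hedged usage sketch of the fileutils helpers removed above (CopyFile, CopyDirectory, MkdirAllNewAs); the paths are placeholders and the example assumes a Unix host with the vendored import path github.com/mrunalp/fileutils.

package main

import (
	"log"

	"github.com/mrunalp/fileutils"
)

func main() {
	// Recursively copy a tree, preserving ownership and modes as the
	// helpers above do; /tmp/src and /tmp/dst are placeholder paths.
	if err := fileutils.CopyDirectory("/tmp/src", "/tmp/dst"); err != nil {
		log.Fatal(err)
	}
	// Copy a single file (symlinks, devices and regular files are handled).
	if err := fileutils.CopyFile("/etc/hostname", "/tmp/dst/hostname"); err != nil {
		log.Fatal(err)
	}
}
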
- if err := MkdirAllNewAs(dest, fi.Mode(), int(st.Uid), int(st.Gid)); err != nil { - return err - } - - return filepath.Walk(source, func(path string, info os.FileInfo, err error) error { - if err != nil { - return err - } - - // Get the relative path - relPath, err := filepath.Rel(source, path) - if err != nil { - return nil - } - - if info.IsDir() { - // Skip the source directory. - if path != source { - // Get the owner. - st, ok := info.Sys().(*syscall.Stat_t) - if !ok { - return fmt.Errorf("could not convert to syscall.Stat_t") - } - - uid := int(st.Uid) - gid := int(st.Gid) - - if err := os.Mkdir(filepath.Join(dest, relPath), info.Mode()); err != nil { - return err - } - - if err := os.Lchown(filepath.Join(dest, relPath), uid, gid); err != nil { - return err - } - } - return nil - } - - // Copy the file. - if err := CopyFile(path, filepath.Join(dest, relPath)); err != nil { - return err - } - - return nil - }) -} - -func major(device uint64) uint64 { - return (device >> 8) & 0xfff -} - -func minor(device uint64) uint64 { - return (device & 0xff) | ((device >> 12) & 0xfff00) -} - -func mkdev(major int64, minor int64) uint32 { - return uint32(((minor & 0xfff00) << 12) | ((major & 0xfff) << 8) | (minor & 0xff)) -} diff --git a/vendor/github.com/mrunalp/fileutils/idtools.go b/vendor/github.com/mrunalp/fileutils/idtools.go deleted file mode 100644 index 161aec8f5..000000000 --- a/vendor/github.com/mrunalp/fileutils/idtools.go +++ /dev/null @@ -1,49 +0,0 @@ -package fileutils - -import ( - "os" - "path/filepath" -) - -// MkdirAllNewAs creates a directory (include any along the path) and then modifies -// ownership ONLY of newly created directories to the requested uid/gid. If the -// directories along the path exist, no change of ownership will be performed -func MkdirAllNewAs(path string, mode os.FileMode, ownerUID, ownerGID int) error { - // make an array containing the original path asked for, plus (for mkAll == true) - // all path components leading up to the complete path that don't exist before we MkdirAll - // so that we can chown all of them properly at the end. If chownExisting is false, we won't - // chown the full directory path if it exists - var paths []string - if _, err := os.Stat(path); err != nil && os.IsNotExist(err) { - paths = []string{path} - } else if err == nil { - // nothing to do; directory path fully exists already - return nil - } - - // walk back to "/" looking for directories which do not exist - // and add them to the paths array for chown after creation - dirPath := path - for { - dirPath = filepath.Dir(dirPath) - if dirPath == "/" { - break - } - if _, err := os.Stat(dirPath); err != nil && os.IsNotExist(err) { - paths = append(paths, dirPath) - } - } - - if err := os.MkdirAll(path, mode); err != nil && !os.IsExist(err) { - return err - } - - // even if it existed, we will chown the requested path + any subpaths that - // didn't exist when we called MkdirAll - for _, pathComponent := range paths { - if err := os.Chown(pathComponent, ownerUID, ownerGID); err != nil { - return err - } - } - return nil -} diff --git a/vendor/github.com/opencontainers/runc/LICENSE b/vendor/github.com/opencontainers/runc/LICENSE deleted file mode 100644 index 27448585a..000000000 --- a/vendor/github.com/opencontainers/runc/LICENSE +++ /dev/null @@ -1,191 +0,0 @@ - - Apache License - Version 2.0, January 2004 - http://www.apache.org/licenses/ - - TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION - - 1. Definitions. 
- - "License" shall mean the terms and conditions for use, reproduction, - and distribution as defined by Sections 1 through 9 of this document. - - "Licensor" shall mean the copyright owner or entity authorized by - the copyright owner that is granting the License. - - "Legal Entity" shall mean the union of the acting entity and all - other entities that control, are controlled by, or are under common - control with that entity. For the purposes of this definition, - "control" means (i) the power, direct or indirect, to cause the - direction or management of such entity, whether by contract or - otherwise, or (ii) ownership of fifty percent (50%) or more of the - outstanding shares, or (iii) beneficial ownership of such entity. - - "You" (or "Your") shall mean an individual or Legal Entity - exercising permissions granted by this License. - - "Source" form shall mean the preferred form for making modifications, - including but not limited to software source code, documentation - source, and configuration files. - - "Object" form shall mean any form resulting from mechanical - transformation or translation of a Source form, including but - not limited to compiled object code, generated documentation, - and conversions to other media types. - - "Work" shall mean the work of authorship, whether in Source or - Object form, made available under the License, as indicated by a - copyright notice that is included in or attached to the work - (an example is provided in the Appendix below). - - "Derivative Works" shall mean any work, whether in Source or Object - form, that is based on (or derived from) the Work and for which the - editorial revisions, annotations, elaborations, or other modifications - represent, as a whole, an original work of authorship. For the purposes - of this License, Derivative Works shall not include works that remain - separable from, or merely link (or bind by name) to the interfaces of, - the Work and Derivative Works thereof. - - "Contribution" shall mean any work of authorship, including - the original version of the Work and any modifications or additions - to that Work or Derivative Works thereof, that is intentionally - submitted to Licensor for inclusion in the Work by the copyright owner - or by an individual or Legal Entity authorized to submit on behalf of - the copyright owner. For the purposes of this definition, "submitted" - means any form of electronic, verbal, or written communication sent - to the Licensor or its representatives, including but not limited to - communication on electronic mailing lists, source code control systems, - and issue tracking systems that are managed by, or on behalf of, the - Licensor for the purpose of discussing and improving the Work, but - excluding communication that is conspicuously marked or otherwise - designated in writing by the copyright owner as "Not a Contribution." - - "Contributor" shall mean Licensor and any individual or Legal Entity - on behalf of whom a Contribution has been received by Licensor and - subsequently incorporated within the Work. - - 2. Grant of Copyright License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - copyright license to reproduce, prepare Derivative Works of, - publicly display, publicly perform, sublicense, and distribute the - Work and such Derivative Works in Source or Object form. - - 3. Grant of Patent License. 
Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - (except as stated in this section) patent license to make, have made, - use, offer to sell, sell, import, and otherwise transfer the Work, - where such license applies only to those patent claims licensable - by such Contributor that are necessarily infringed by their - Contribution(s) alone or by combination of their Contribution(s) - with the Work to which such Contribution(s) was submitted. If You - institute patent litigation against any entity (including a - cross-claim or counterclaim in a lawsuit) alleging that the Work - or a Contribution incorporated within the Work constitutes direct - or contributory patent infringement, then any patent licenses - granted to You under this License for that Work shall terminate - as of the date such litigation is filed. - - 4. Redistribution. You may reproduce and distribute copies of the - Work or Derivative Works thereof in any medium, with or without - modifications, and in Source or Object form, provided that You - meet the following conditions: - - (a) You must give any other recipients of the Work or - Derivative Works a copy of this License; and - - (b) You must cause any modified files to carry prominent notices - stating that You changed the files; and - - (c) You must retain, in the Source form of any Derivative Works - that You distribute, all copyright, patent, trademark, and - attribution notices from the Source form of the Work, - excluding those notices that do not pertain to any part of - the Derivative Works; and - - (d) If the Work includes a "NOTICE" text file as part of its - distribution, then any Derivative Works that You distribute must - include a readable copy of the attribution notices contained - within such NOTICE file, excluding those notices that do not - pertain to any part of the Derivative Works, in at least one - of the following places: within a NOTICE text file distributed - as part of the Derivative Works; within the Source form or - documentation, if provided along with the Derivative Works; or, - within a display generated by the Derivative Works, if and - wherever such third-party notices normally appear. The contents - of the NOTICE file are for informational purposes only and - do not modify the License. You may add Your own attribution - notices within Derivative Works that You distribute, alongside - or as an addendum to the NOTICE text from the Work, provided - that such additional attribution notices cannot be construed - as modifying the License. - - You may add Your own copyright statement to Your modifications and - may provide additional or different license terms and conditions - for use, reproduction, or distribution of Your modifications, or - for any such Derivative Works as a whole, provided Your use, - reproduction, and distribution of the Work otherwise complies with - the conditions stated in this License. - - 5. Submission of Contributions. Unless You explicitly state otherwise, - any Contribution intentionally submitted for inclusion in the Work - by You to the Licensor shall be under the terms and conditions of - this License, without any additional terms or conditions. - Notwithstanding the above, nothing herein shall supersede or modify - the terms of any separate license agreement you may have executed - with Licensor regarding such Contributions. - - 6. Trademarks. 
This License does not grant permission to use the trade - names, trademarks, service marks, or product names of the Licensor, - except as required for reasonable and customary use in describing the - origin of the Work and reproducing the content of the NOTICE file. - - 7. Disclaimer of Warranty. Unless required by applicable law or - agreed to in writing, Licensor provides the Work (and each - Contributor provides its Contributions) on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or - implied, including, without limitation, any warranties or conditions - of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A - PARTICULAR PURPOSE. You are solely responsible for determining the - appropriateness of using or redistributing the Work and assume any - risks associated with Your exercise of permissions under this License. - - 8. Limitation of Liability. In no event and under no legal theory, - whether in tort (including negligence), contract, or otherwise, - unless required by applicable law (such as deliberate and grossly - negligent acts) or agreed to in writing, shall any Contributor be - liable to You for damages, including any direct, indirect, special, - incidental, or consequential damages of any character arising as a - result of this License or out of the use or inability to use the - Work (including but not limited to damages for loss of goodwill, - work stoppage, computer failure or malfunction, or any and all - other commercial damages or losses), even if such Contributor - has been advised of the possibility of such damages. - - 9. Accepting Warranty or Additional Liability. While redistributing - the Work or Derivative Works thereof, You may choose to offer, - and charge a fee for, acceptance of support, warranty, indemnity, - or other liability obligations and/or rights consistent with this - License. However, in accepting such obligations, You may act only - on Your own behalf and on Your sole responsibility, not on behalf - of any other Contributor, and only if You agree to indemnify, - defend, and hold each Contributor harmless for any liability - incurred by, or claims asserted against, such Contributor by reason - of your accepting any such warranty or additional liability. - - END OF TERMS AND CONDITIONS - - Copyright 2014 Docker, Inc. - - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. diff --git a/vendor/github.com/opencontainers/runc/NOTICE b/vendor/github.com/opencontainers/runc/NOTICE deleted file mode 100644 index 5c97abce4..000000000 --- a/vendor/github.com/opencontainers/runc/NOTICE +++ /dev/null @@ -1,17 +0,0 @@ -runc - -Copyright 2012-2015 Docker, Inc. - -This product includes software developed at Docker, Inc. (http://www.docker.com). - -The following is courtesy of our legal counsel: - - -Use and transfer of Docker may be subject to certain restrictions by the -United States and other governments. -It is your responsibility to ensure that your use and/or transfer does not -violate applicable laws. 
- -For more information, please see http://www.bis.doc.gov - -See also http://www.apache.org/dev/crypto.html and/or seek legal counsel. diff --git a/vendor/github.com/opencontainers/runc/libcontainer/README.md b/vendor/github.com/opencontainers/runc/libcontainer/README.md deleted file mode 100644 index 1d7fa04c0..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/README.md +++ /dev/null @@ -1,330 +0,0 @@ -# libcontainer - -[![GoDoc](https://godoc.org/github.com/opencontainers/runc/libcontainer?status.svg)](https://godoc.org/github.com/opencontainers/runc/libcontainer) - -Libcontainer provides a native Go implementation for creating containers -with namespaces, cgroups, capabilities, and filesystem access controls. -It allows you to manage the lifecycle of the container performing additional operations -after the container is created. - - -#### Container -A container is a self contained execution environment that shares the kernel of the -host system and which is (optionally) isolated from other containers in the system. - -#### Using libcontainer - -Because containers are spawned in a two step process you will need a binary that -will be executed as the init process for the container. In libcontainer, we use -the current binary (/proc/self/exe) to be executed as the init process, and use -arg "init", we call the first step process "bootstrap", so you always need a "init" -function as the entry of "bootstrap". - -In addition to the go init function the early stage bootstrap is handled by importing -[nsenter](https://github.com/opencontainers/runc/blob/master/libcontainer/nsenter/README.md). - -```go -import ( - _ "github.com/opencontainers/runc/libcontainer/nsenter" -) - -func init() { - if len(os.Args) > 1 && os.Args[1] == "init" { - runtime.GOMAXPROCS(1) - runtime.LockOSThread() - factory, _ := libcontainer.New("") - if err := factory.StartInitialization(); err != nil { - logrus.Fatal(err) - } - panic("--this line should have never been executed, congratulations--") - } -} -``` - -Then to create a container you first have to initialize an instance of a factory -that will handle the creation and initialization for a container. - -```go -factory, err := libcontainer.New("/var/lib/container", libcontainer.Cgroupfs, libcontainer.InitArgs(os.Args[0], "init")) -if err != nil { - logrus.Fatal(err) - return -} -``` - -Once you have an instance of the factory created we can create a configuration -struct describing how the container is to be created. 
A sample would look similar to this: - -```go -defaultMountFlags := unix.MS_NOEXEC | unix.MS_NOSUID | unix.MS_NODEV -config := &configs.Config{ - Rootfs: "/your/path/to/rootfs", - Capabilities: &configs.Capabilities{ - Bounding: []string{ - "CAP_CHOWN", - "CAP_DAC_OVERRIDE", - "CAP_FSETID", - "CAP_FOWNER", - "CAP_MKNOD", - "CAP_NET_RAW", - "CAP_SETGID", - "CAP_SETUID", - "CAP_SETFCAP", - "CAP_SETPCAP", - "CAP_NET_BIND_SERVICE", - "CAP_SYS_CHROOT", - "CAP_KILL", - "CAP_AUDIT_WRITE", - }, - Effective: []string{ - "CAP_CHOWN", - "CAP_DAC_OVERRIDE", - "CAP_FSETID", - "CAP_FOWNER", - "CAP_MKNOD", - "CAP_NET_RAW", - "CAP_SETGID", - "CAP_SETUID", - "CAP_SETFCAP", - "CAP_SETPCAP", - "CAP_NET_BIND_SERVICE", - "CAP_SYS_CHROOT", - "CAP_KILL", - "CAP_AUDIT_WRITE", - }, - Inheritable: []string{ - "CAP_CHOWN", - "CAP_DAC_OVERRIDE", - "CAP_FSETID", - "CAP_FOWNER", - "CAP_MKNOD", - "CAP_NET_RAW", - "CAP_SETGID", - "CAP_SETUID", - "CAP_SETFCAP", - "CAP_SETPCAP", - "CAP_NET_BIND_SERVICE", - "CAP_SYS_CHROOT", - "CAP_KILL", - "CAP_AUDIT_WRITE", - }, - Permitted: []string{ - "CAP_CHOWN", - "CAP_DAC_OVERRIDE", - "CAP_FSETID", - "CAP_FOWNER", - "CAP_MKNOD", - "CAP_NET_RAW", - "CAP_SETGID", - "CAP_SETUID", - "CAP_SETFCAP", - "CAP_SETPCAP", - "CAP_NET_BIND_SERVICE", - "CAP_SYS_CHROOT", - "CAP_KILL", - "CAP_AUDIT_WRITE", - }, - Ambient: []string{ - "CAP_CHOWN", - "CAP_DAC_OVERRIDE", - "CAP_FSETID", - "CAP_FOWNER", - "CAP_MKNOD", - "CAP_NET_RAW", - "CAP_SETGID", - "CAP_SETUID", - "CAP_SETFCAP", - "CAP_SETPCAP", - "CAP_NET_BIND_SERVICE", - "CAP_SYS_CHROOT", - "CAP_KILL", - "CAP_AUDIT_WRITE", - }, - }, - Namespaces: configs.Namespaces([]configs.Namespace{ - {Type: configs.NEWNS}, - {Type: configs.NEWUTS}, - {Type: configs.NEWIPC}, - {Type: configs.NEWPID}, - {Type: configs.NEWUSER}, - {Type: configs.NEWNET}, - {Type: configs.NEWCGROUP}, - }), - Cgroups: &configs.Cgroup{ - Name: "test-container", - Parent: "system", - Resources: &configs.Resources{ - MemorySwappiness: nil, - AllowAllDevices: nil, - AllowedDevices: configs.DefaultAllowedDevices, - }, - }, - MaskPaths: []string{ - "/proc/kcore", - "/sys/firmware", - }, - ReadonlyPaths: []string{ - "/proc/sys", "/proc/sysrq-trigger", "/proc/irq", "/proc/bus", - }, - Devices: configs.DefaultAutoCreatedDevices, - Hostname: "testing", - Mounts: []*configs.Mount{ - { - Source: "proc", - Destination: "/proc", - Device: "proc", - Flags: defaultMountFlags, - }, - { - Source: "tmpfs", - Destination: "/dev", - Device: "tmpfs", - Flags: unix.MS_NOSUID | unix.MS_STRICTATIME, - Data: "mode=755", - }, - { - Source: "devpts", - Destination: "/dev/pts", - Device: "devpts", - Flags: unix.MS_NOSUID | unix.MS_NOEXEC, - Data: "newinstance,ptmxmode=0666,mode=0620,gid=5", - }, - { - Device: "tmpfs", - Source: "shm", - Destination: "/dev/shm", - Data: "mode=1777,size=65536k", - Flags: defaultMountFlags, - }, - { - Source: "mqueue", - Destination: "/dev/mqueue", - Device: "mqueue", - Flags: defaultMountFlags, - }, - { - Source: "sysfs", - Destination: "/sys", - Device: "sysfs", - Flags: defaultMountFlags | unix.MS_RDONLY, - }, - }, - UidMappings: []configs.IDMap{ - { - ContainerID: 0, - HostID: 1000, - Size: 65536, - }, - }, - GidMappings: []configs.IDMap{ - { - ContainerID: 0, - HostID: 1000, - Size: 65536, - }, - }, - Networks: []*configs.Network{ - { - Type: "loopback", - Address: "127.0.0.1/0", - Gateway: "localhost", - }, - }, - Rlimits: []configs.Rlimit{ - { - Type: unix.RLIMIT_NOFILE, - Hard: uint64(1025), - Soft: uint64(1025), - }, - }, -} -``` - -Once you have the configuration 
populated you can create a container: - -```go -container, err := factory.Create("container-id", config) -if err != nil { - logrus.Fatal(err) - return -} -``` - -To spawn bash as the initial process inside the container and have the -processes pid returned in order to wait, signal, or kill the process: - -```go -process := &libcontainer.Process{ - Args: []string{"/bin/bash"}, - Env: []string{"PATH=/bin"}, - User: "daemon", - Stdin: os.Stdin, - Stdout: os.Stdout, - Stderr: os.Stderr, -} - -err := container.Run(process) -if err != nil { - container.Destroy() - logrus.Fatal(err) - return -} - -// wait for the process to finish. -_, err := process.Wait() -if err != nil { - logrus.Fatal(err) -} - -// destroy the container. -container.Destroy() -``` - -Additional ways to interact with a running container are: - -```go -// return all the pids for all processes running inside the container. -processes, err := container.Processes() - -// get detailed cpu, memory, io, and network statistics for the container and -// it's processes. -stats, err := container.Stats() - -// pause all processes inside the container. -container.Pause() - -// resume all paused processes. -container.Resume() - -// send signal to container's init process. -container.Signal(signal) - -// update container resource constraints. -container.Set(config) - -// get current status of the container. -status, err := container.Status() - -// get current container's state information. -state, err := container.State() -``` - - -#### Checkpoint & Restore - -libcontainer now integrates [CRIU](http://criu.org/) for checkpointing and restoring containers. -This let's you save the state of a process running inside a container to disk, and then restore -that state into a new process, on the same machine or on another machine. - -`criu` version 1.5.2 or higher is required to use checkpoint and restore. -If you don't already have `criu` installed, you can build it from source, following the -[online instructions](http://criu.org/Installation). `criu` is also installed in the docker image -generated when building libcontainer with docker. - - -## Copyright and license - -Code and documentation copyright 2014 Docker, inc. -The code and documentation are released under the [Apache 2.0 license](../LICENSE). -The documentation is also released under Creative Commons Attribution 4.0 International License. -You may obtain a copy of the license, titled CC-BY-4.0, at http://creativecommons.org/licenses/by/4.0/. diff --git a/vendor/github.com/opencontainers/runc/libcontainer/SPEC.md b/vendor/github.com/opencontainers/runc/libcontainer/SPEC.md deleted file mode 100644 index 07ebdc121..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/SPEC.md +++ /dev/null @@ -1,465 +0,0 @@ -## Container Specification - v1 - -This is the standard configuration for version 1 containers. It includes -namespaces, standard filesystem setup, a default Linux capability set, and -information about resource reservations. It also has information about any -populated environment settings for the processes running inside a container. - -Along with the configuration of how a container is created the standard also -discusses actions that can be performed on a container to manage and inspect -information about the processes running inside. - -The v1 profile is meant to be able to accommodate the majority of applications -with a strong security configuration. 
- -### System Requirements and Compatibility - -Minimum requirements: -* Kernel version - 3.10 recommended 2.6.2x minimum(with backported patches) -* Mounted cgroups with each subsystem in its own hierarchy - - -### Namespaces - -| Flag | Enabled | -| --------------- | ------- | -| CLONE_NEWPID | 1 | -| CLONE_NEWUTS | 1 | -| CLONE_NEWIPC | 1 | -| CLONE_NEWNET | 1 | -| CLONE_NEWNS | 1 | -| CLONE_NEWUSER | 1 | -| CLONE_NEWCGROUP | 1 | - -Namespaces are created for the container via the `unshare` syscall. - - -### Filesystem - -A root filesystem must be provided to a container for execution. The container -will use this root filesystem (rootfs) to jail and spawn processes inside where -the binaries and system libraries are local to that directory. Any binaries -to be executed must be contained within this rootfs. - -Mounts that happen inside the container are automatically cleaned up when the -container exits as the mount namespace is destroyed and the kernel will -unmount all the mounts that were setup within that namespace. - -For a container to execute properly there are certain filesystems that -are required to be mounted within the rootfs that the runtime will setup. - -| Path | Type | Flags | Data | -| ----------- | ------ | -------------------------------------- | ---------------------------------------- | -| /proc | proc | MS_NOEXEC,MS_NOSUID,MS_NODEV | | -| /dev | tmpfs | MS_NOEXEC,MS_STRICTATIME | mode=755 | -| /dev/shm | tmpfs | MS_NOEXEC,MS_NOSUID,MS_NODEV | mode=1777,size=65536k | -| /dev/mqueue | mqueue | MS_NOEXEC,MS_NOSUID,MS_NODEV | | -| /dev/pts | devpts | MS_NOEXEC,MS_NOSUID | newinstance,ptmxmode=0666,mode=620,gid=5 | -| /sys | sysfs | MS_NOEXEC,MS_NOSUID,MS_NODEV,MS_RDONLY | | - - -After a container's filesystems are mounted within the newly created -mount namespace `/dev` will need to be populated with a set of device nodes. -It is expected that a rootfs does not need to have any device nodes specified -for `/dev` within the rootfs as the container will setup the correct devices -that are required for executing a container's process. - -| Path | Mode | Access | -| ------------ | ---- | ---------- | -| /dev/null | 0666 | rwm | -| /dev/zero | 0666 | rwm | -| /dev/full | 0666 | rwm | -| /dev/tty | 0666 | rwm | -| /dev/random | 0666 | rwm | -| /dev/urandom | 0666 | rwm | - - -**ptmx** -`/dev/ptmx` will need to be a symlink to the host's `/dev/ptmx` within -the container. - -The use of a pseudo TTY is optional within a container and it should support both. -If a pseudo is provided to the container `/dev/console` will need to be -setup by binding the console in `/dev/` after it has been populated and mounted -in tmpfs. - -| Source | Destination | UID GID | Mode | Type | -| --------------- | ------------ | ------- | ---- | ---- | -| *pty host path* | /dev/console | 0 0 | 0600 | bind | - - -After `/dev/null` has been setup we check for any external links between -the container's io, STDIN, STDOUT, STDERR. If the container's io is pointing -to `/dev/null` outside the container we close and `dup2` the `/dev/null` -that is local to the container's rootfs. - - -After the container has `/proc` mounted a few standard symlinks are setup -within `/dev/` for the io. - -| Source | Destination | -| --------------- | ----------- | -| /proc/self/fd | /dev/fd | -| /proc/self/fd/0 | /dev/stdin | -| /proc/self/fd/1 | /dev/stdout | -| /proc/self/fd/2 | /dev/stderr | - -A `pivot_root` is used to change the root for the process, effectively -jailing the process inside the rootfs. 
- -```c -put_old = mkdir(...); -pivot_root(rootfs, put_old); -chdir("/"); -unmount(put_old, MS_DETACH); -rmdir(put_old); -``` - -For container's running with a rootfs inside `ramfs` a `MS_MOVE` combined -with a `chroot` is required as `pivot_root` is not supported in `ramfs`. - -```c -mount(rootfs, "/", NULL, MS_MOVE, NULL); -chroot("."); -chdir("/"); -``` - -The `umask` is set back to `0022` after the filesystem setup has been completed. - -### Resources - -Cgroups are used to handle resource allocation for containers. This includes -system resources like cpu, memory, and device access. - -| Subsystem | Enabled | -| ---------- | ------- | -| devices | 1 | -| memory | 1 | -| cpu | 1 | -| cpuacct | 1 | -| cpuset | 1 | -| blkio | 1 | -| perf_event | 1 | -| freezer | 1 | -| hugetlb | 1 | -| pids | 1 | - - -All cgroup subsystem are joined so that statistics can be collected from -each of the subsystems. Freezer does not expose any stats but is joined -so that containers can be paused and resumed. - -The parent process of the container's init must place the init pid inside -the correct cgroups before the initialization begins. This is done so -that no processes or threads escape the cgroups. This sync is -done via a pipe ( specified in the runtime section below ) that the container's -init process will block waiting for the parent to finish setup. - -### IntelRdt - -Intel platforms with new Xeon CPU support Resource Director Technology (RDT). -Cache Allocation Technology (CAT) and Memory Bandwidth Allocation (MBA) are -two sub-features of RDT. - -Cache Allocation Technology (CAT) provides a way for the software to restrict -cache allocation to a defined 'subset' of L3 cache which may be overlapping -with other 'subsets'. The different subsets are identified by class of -service (CLOS) and each CLOS has a capacity bitmask (CBM). - -Memory Bandwidth Allocation (MBA) provides indirect and approximate throttle -over memory bandwidth for the software. A user controls the resource by -indicating the percentage of maximum memory bandwidth or memory bandwidth limit -in MBps unit if MBA Software Controller is enabled. - -It can be used to handle L3 cache and memory bandwidth resources allocation -for containers if hardware and kernel support Intel RDT CAT and MBA features. - -In Linux 4.10 kernel or newer, the interface is defined and exposed via -"resource control" filesystem, which is a "cgroup-like" interface. - -Comparing with cgroups, it has similar process management lifecycle and -interfaces in a container. But unlike cgroups' hierarchy, it has single level -filesystem layout. - -CAT and MBA features are introduced in Linux 4.10 and 4.12 kernel via -"resource control" filesystem. - -Intel RDT "resource control" filesystem hierarchy: -``` -mount -t resctrl resctrl /sys/fs/resctrl -tree /sys/fs/resctrl -/sys/fs/resctrl/ -|-- info -| |-- L3 -| | |-- cbm_mask -| | |-- min_cbm_bits -| | |-- num_closids -| |-- MB -| |-- bandwidth_gran -| |-- delay_linear -| |-- min_bandwidth -| |-- num_closids -|-- ... -|-- schemata -|-- tasks -|-- - |-- ... - |-- schemata - |-- tasks -``` - -For runc, we can make use of `tasks` and `schemata` configuration for L3 -cache and memory bandwidth resources constraints. - -The file `tasks` has a list of tasks that belongs to this group (e.g., -" group). Tasks can be added to a group by writing the task ID -to the "tasks" file (which will automatically remove them from the previous -group to which they belonged). 
New tasks created by fork(2) and clone(2) are added to the same group as their parent.
-
-The file `schemata` has a list of all the resources available to this group.
-Each resource (L3 cache, memory bandwidth) has its own line and format.
-
-L3 cache schema:
-It has allocation bitmasks/values for L3 cache on each socket, which
-contains the L3 cache id and capacity bitmask (CBM).
-```
-	Format: "L3:<cache_id0>=<cbm>;<cache_id1>=<cbm>;..."
-```
-For example, on a two-socket machine, the schema line could be "L3:0=ff;1=c0",
-which means L3 cache id 0's CBM is 0xff and L3 cache id 1's CBM is 0xc0.
-
-The valid L3 cache CBM is a *contiguous bits set*, and the number of bits that
-can be set is less than the maximum bit. The maximum number of bits in the CBM
-varies among supported Intel CPU models. The kernel checks validity when the
-value is written. For example, the default value 0xfffff in the root group
-indicates that the CBM is 20 bits wide, which maps to the entire L3 cache
-capacity. Some valid CBM values to set in a group: 0xf, 0xf0, 0x3ff, 0x1f00, etc.
-
-Memory bandwidth schema:
-It has allocation values for memory bandwidth on each socket, which contains
-the L3 cache id and the memory bandwidth.
-```
-	Format: "MB:<cache_id0>=bandwidth0;<cache_id1>=bandwidth1;..."
-```
-For example, on a two-socket machine, the schema line could be "MB:0=20;1=70".
-
-The minimum bandwidth percentage value for each CPU model is predefined and
-can be looked up through "info/MB/min_bandwidth". The bandwidth granularity
-that is allocated is also dependent on the CPU model and can be looked up at
-"info/MB/bandwidth_gran". The available bandwidth control steps are:
-min_bw + N * bw_gran. Intermediate values are rounded to the next control
-step available on the hardware.
-
-If the MBA Software Controller is enabled through the mount option "-o mba_MBps"
-(mount -t resctrl resctrl -o mba_MBps /sys/fs/resctrl), memory bandwidth can be
-specified in "MBps" (megabytes per second) instead of percentages. The kernel
-then uses a software feedback mechanism, or "Software Controller", which reads
-the actual bandwidth using MBM counters and adjusts the memory bandwidth
-percentages to ensure:
-"actual memory bandwidth < user specified memory bandwidth".
-
-For example, on a two-socket machine, the schema line could be
-"MB:0=5000;1=7000", which means a 5000 MBps memory bandwidth limit on socket 0
-and a 7000 MBps memory bandwidth limit on socket 1.
-
-For more information about the Intel RDT kernel interface, see
-https://www.kernel.org/doc/Documentation/x86/intel_rdt_ui.txt
-
-```
-An example for runc:
-Consider a two-socket machine with two L3 caches where the default CBM is
-0x7ff and the max CBM length is 11 bits, and minimum memory bandwidth of 10%
-with a memory bandwidth granularity of 10%.
-
-Tasks inside the container only have access to the "upper" 7/11 of L3 cache
-on socket 0 and the "lower" 5/11 L3 cache on socket 1, and may use a
-maximum memory bandwidth of 20% on socket 0 and 70% on socket 1.
-
-"linux": {
-    "intelRdt": {
-        "closID": "guaranteed_group",
-        "l3CacheSchema": "L3:0=7f0;1=1f",
-        "memBwSchema": "MB:0=20;1=70"
-    }
-}
-```
-
-### Security
-
-The standard set of Linux capabilities that are set in a container
-provides a good default for security and flexibility for the applications.
- - -| Capability | Enabled | -| -------------------- | ------- | -| CAP_NET_RAW | 1 | -| CAP_NET_BIND_SERVICE | 1 | -| CAP_AUDIT_READ | 1 | -| CAP_AUDIT_WRITE | 1 | -| CAP_DAC_OVERRIDE | 1 | -| CAP_SETFCAP | 1 | -| CAP_SETPCAP | 1 | -| CAP_SETGID | 1 | -| CAP_SETUID | 1 | -| CAP_MKNOD | 1 | -| CAP_CHOWN | 1 | -| CAP_FOWNER | 1 | -| CAP_FSETID | 1 | -| CAP_KILL | 1 | -| CAP_SYS_CHROOT | 1 | -| CAP_NET_BROADCAST | 0 | -| CAP_SYS_MODULE | 0 | -| CAP_SYS_RAWIO | 0 | -| CAP_SYS_PACCT | 0 | -| CAP_SYS_ADMIN | 0 | -| CAP_SYS_NICE | 0 | -| CAP_SYS_RESOURCE | 0 | -| CAP_SYS_TIME | 0 | -| CAP_SYS_TTY_CONFIG | 0 | -| CAP_AUDIT_CONTROL | 0 | -| CAP_MAC_OVERRIDE | 0 | -| CAP_MAC_ADMIN | 0 | -| CAP_NET_ADMIN | 0 | -| CAP_SYSLOG | 0 | -| CAP_DAC_READ_SEARCH | 0 | -| CAP_LINUX_IMMUTABLE | 0 | -| CAP_IPC_LOCK | 0 | -| CAP_IPC_OWNER | 0 | -| CAP_SYS_PTRACE | 0 | -| CAP_SYS_BOOT | 0 | -| CAP_LEASE | 0 | -| CAP_WAKE_ALARM | 0 | -| CAP_BLOCK_SUSPEND | 0 | - - -Additional security layers like [apparmor](https://wiki.ubuntu.com/AppArmor) -and [selinux](http://selinuxproject.org/page/Main_Page) can be used with -the containers. A container should support setting an apparmor profile or -selinux process and mount labels if provided in the configuration. - -Standard apparmor profile: -```c -#include -profile flags=(attach_disconnected,mediate_deleted) { - #include - network, - capability, - file, - umount, - - deny @{PROC}/sys/fs/** wklx, - deny @{PROC}/sysrq-trigger rwklx, - deny @{PROC}/mem rwklx, - deny @{PROC}/kmem rwklx, - deny @{PROC}/sys/kernel/[^s][^h][^m]* wklx, - deny @{PROC}/sys/kernel/*/** wklx, - - deny mount, - - deny /sys/[^f]*/** wklx, - deny /sys/f[^s]*/** wklx, - deny /sys/fs/[^c]*/** wklx, - deny /sys/fs/c[^g]*/** wklx, - deny /sys/fs/cg[^r]*/** wklx, - deny /sys/firmware/efi/efivars/** rwklx, - deny /sys/kernel/security/** rwklx, -} -``` - -*TODO: seccomp work is being done to find a good default config* - -### Runtime and Init Process - -During container creation the parent process needs to talk to the container's init -process and have a form of synchronization. This is accomplished by creating -a pipe that is passed to the container's init. When the init process first spawns -it will block on its side of the pipe until the parent closes its side. This -allows the parent to have time to set the new process inside a cgroup hierarchy -and/or write any uid/gid mappings required for user namespaces. -The pipe is passed to the init process via FD 3. - -The application consuming libcontainer should be compiled statically. libcontainer -does not define any init process and the arguments provided are used to `exec` the -process inside the application. There should be no long running init within the -container spec. - -If a pseudo tty is provided to a container it will open and `dup2` the console -as the container's STDIN, STDOUT, STDERR as well as mounting the console -as `/dev/console`. - -An extra set of mounts are provided to a container and setup for use. A container's -rootfs can contain some non portable files inside that can cause side effects during -execution of a process. These files are usually created and populated with the container -specific information via the runtime. - -**Extra runtime files:** -* /etc/hosts -* /etc/resolv.conf -* /etc/hostname -* /etc/localtime - - -#### Defaults - -There are a few defaults that can be overridden by users, but in their omission -these apply to processes within a container. 
- -| Type | Value | -| ------------------- | ------------------------------ | -| Parent Death Signal | SIGKILL | -| UID | 0 | -| GID | 0 | -| GROUPS | 0, NULL | -| CWD | "/" | -| $HOME | Current user's home dir or "/" | -| Readonly rootfs | false | -| Pseudo TTY | false | - - -## Actions - -After a container is created there is a standard set of actions that can -be done to the container. These actions are part of the public API for -a container. - -| Action | Description | -| -------------- | ------------------------------------------------------------------ | -| Get processes | Return all the pids for processes running inside a container | -| Get Stats | Return resource statistics for the container as a whole | -| Wait | Waits on the container's init process ( pid 1 ) | -| Wait Process | Wait on any of the container's processes returning the exit status | -| Destroy | Kill the container's init process and remove any filesystem state | -| Signal | Send a signal to the container's init process | -| Signal Process | Send a signal to any of the container's processes | -| Pause | Pause all processes inside the container | -| Resume | Resume all processes inside the container if paused | -| Exec | Execute a new process inside of the container ( requires setns ) | -| Set | Setup configs of the container after it's created | - -### Execute a new process inside of a running container - -User can execute a new process inside of a running container. Any binaries to be -executed must be accessible within the container's rootfs. - -The started process will run inside the container's rootfs. Any changes -made by the process to the container's filesystem will persist after the -process finished executing. - -The started process will join all the container's existing namespaces. When the -container is paused, the process will also be paused and will resume when -the container is unpaused. The started process will only run when the container's -primary process (PID 1) is running, and will not be restarted when the container -is restarted. - -#### Planned additions - -The started process will have its own cgroups nested inside the container's -cgroups. This is used for process tracking and optionally resource allocation -handling for the new process. Freezer cgroup is required, the rest of the cgroups -are optional. The process executor must place its pid inside the correct -cgroups before starting the process. This is done so that no child processes or -threads can escape the cgroups. - -When the process is stopped, the process executor will try (in a best-effort way) -to stop all its children and remove the sub-cgroups. diff --git a/vendor/github.com/opencontainers/runc/libcontainer/apparmor/apparmor.go b/vendor/github.com/opencontainers/runc/libcontainer/apparmor/apparmor.go deleted file mode 100644 index 7fff0627f..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/apparmor/apparmor.go +++ /dev/null @@ -1,54 +0,0 @@ -// +build apparmor,linux - -package apparmor - -import ( - "fmt" - "io/ioutil" - "os" -) - -// IsEnabled returns true if apparmor is enabled for the host. 
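- // It reports true only when the apparmor securityfs directory exists, the process is not
- // itself inside a container (the "container" environment variable is unset),
- // /sbin/apparmor_parser is present, and /sys/module/apparmor/parameters/enabled starts with 'Y'.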
-func IsEnabled() bool { - if _, err := os.Stat("/sys/kernel/security/apparmor"); err == nil && os.Getenv("container") == "" { - if _, err = os.Stat("/sbin/apparmor_parser"); err == nil { - buf, err := ioutil.ReadFile("/sys/module/apparmor/parameters/enabled") - return err == nil && len(buf) > 1 && buf[0] == 'Y' - } - } - return false -} - -func setprocattr(attr, value string) error { - // Under AppArmor you can only change your own attr, so use /proc/self/ - // instead of /proc// like libapparmor does - path := fmt.Sprintf("/proc/self/attr/%s", attr) - - f, err := os.OpenFile(path, os.O_WRONLY, 0) - if err != nil { - return err - } - defer f.Close() - - _, err = fmt.Fprintf(f, "%s", value) - return err -} - -// changeOnExec reimplements aa_change_onexec from libapparmor in Go -func changeOnExec(name string) error { - value := "exec " + name - if err := setprocattr("exec", value); err != nil { - return fmt.Errorf("apparmor failed to apply profile: %s", err) - } - return nil -} - -// ApplyProfile will apply the profile with the specified name to the process after -// the next exec. -func ApplyProfile(name string) error { - if name == "" { - return nil - } - - return changeOnExec(name) -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/apparmor/apparmor_disabled.go b/vendor/github.com/opencontainers/runc/libcontainer/apparmor/apparmor_disabled.go deleted file mode 100644 index d4110cf0b..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/apparmor/apparmor_disabled.go +++ /dev/null @@ -1,20 +0,0 @@ -// +build !apparmor !linux - -package apparmor - -import ( - "errors" -) - -var ErrApparmorNotEnabled = errors.New("apparmor: config provided but apparmor not supported") - -func IsEnabled() bool { - return false -} - -func ApplyProfile(name string) error { - if name != "" { - return ErrApparmorNotEnabled - } - return nil -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/capabilities_linux.go b/vendor/github.com/opencontainers/runc/libcontainer/capabilities_linux.go deleted file mode 100644 index 7c66f5725..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/capabilities_linux.go +++ /dev/null @@ -1,113 +0,0 @@ -// +build linux - -package libcontainer - -import ( - "fmt" - "strings" - - "github.com/opencontainers/runc/libcontainer/configs" - "github.com/syndtr/gocapability/capability" -) - -const allCapabilityTypes = capability.CAPS | capability.BOUNDS | capability.AMBS - -var capabilityMap map[string]capability.Cap - -func init() { - capabilityMap = make(map[string]capability.Cap) - last := capability.CAP_LAST_CAP - // workaround for RHEL6 which has no /proc/sys/kernel/cap_last_cap - if last == capability.Cap(63) { - last = capability.CAP_BLOCK_SUSPEND - } - for _, cap := range capability.List() { - if cap > last { - continue - } - capKey := fmt.Sprintf("CAP_%s", strings.ToUpper(cap.String())) - capabilityMap[capKey] = cap - } -} - -func newContainerCapList(capConfig *configs.Capabilities) (*containerCapabilities, error) { - bounding := []capability.Cap{} - for _, c := range capConfig.Bounding { - v, ok := capabilityMap[c] - if !ok { - return nil, fmt.Errorf("unknown capability %q", c) - } - bounding = append(bounding, v) - } - effective := []capability.Cap{} - for _, c := range capConfig.Effective { - v, ok := capabilityMap[c] - if !ok { - return nil, fmt.Errorf("unknown capability %q", c) - } - effective = append(effective, v) - } - inheritable := []capability.Cap{} - for _, c := range capConfig.Inheritable { - v, ok := 
capabilityMap[c] - if !ok { - return nil, fmt.Errorf("unknown capability %q", c) - } - inheritable = append(inheritable, v) - } - permitted := []capability.Cap{} - for _, c := range capConfig.Permitted { - v, ok := capabilityMap[c] - if !ok { - return nil, fmt.Errorf("unknown capability %q", c) - } - permitted = append(permitted, v) - } - ambient := []capability.Cap{} - for _, c := range capConfig.Ambient { - v, ok := capabilityMap[c] - if !ok { - return nil, fmt.Errorf("unknown capability %q", c) - } - ambient = append(ambient, v) - } - pid, err := capability.NewPid(0) - if err != nil { - return nil, err - } - return &containerCapabilities{ - bounding: bounding, - effective: effective, - inheritable: inheritable, - permitted: permitted, - ambient: ambient, - pid: pid, - }, nil -} - -type containerCapabilities struct { - pid capability.Capabilities - bounding []capability.Cap - effective []capability.Cap - inheritable []capability.Cap - permitted []capability.Cap - ambient []capability.Cap -} - -// ApplyBoundingSet sets the capability bounding set to those specified in the whitelist. -func (c *containerCapabilities) ApplyBoundingSet() error { - c.pid.Clear(capability.BOUNDS) - c.pid.Set(capability.BOUNDS, c.bounding...) - return c.pid.Apply(capability.BOUNDS) -} - -// Apply sets all the capabilities for the current process in the config. -func (c *containerCapabilities) ApplyCaps() error { - c.pid.Clear(allCapabilityTypes) - c.pid.Set(capability.BOUNDS, c.bounding...) - c.pid.Set(capability.PERMITTED, c.permitted...) - c.pid.Set(capability.INHERITABLE, c.inheritable...) - c.pid.Set(capability.EFFECTIVE, c.effective...) - c.pid.Set(capability.AMBIENT, c.ambient...) - return c.pid.Apply(allCapabilityTypes) -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/cgroups/cgroups.go b/vendor/github.com/opencontainers/runc/libcontainer/cgroups/cgroups.go deleted file mode 100644 index 25ff51589..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/cgroups/cgroups.go +++ /dev/null @@ -1,64 +0,0 @@ -// +build linux - -package cgroups - -import ( - "fmt" - - "github.com/opencontainers/runc/libcontainer/configs" -) - -type Manager interface { - // Applies cgroup configuration to the process with the specified pid - Apply(pid int) error - - // Returns the PIDs inside the cgroup set - GetPids() ([]int, error) - - // Returns the PIDs inside the cgroup set & all sub-cgroups - GetAllPids() ([]int, error) - - // Returns statistics for the cgroup set - GetStats() (*Stats, error) - - // Toggles the freezer cgroup according with specified state - Freeze(state configs.FreezerState) error - - // Destroys the cgroup set - Destroy() error - - // The option func SystemdCgroups() and Cgroupfs() require following attributes: - // Paths map[string]string - // Cgroups *configs.Cgroup - // Paths maps cgroup subsystem to path at which it is mounted. - // Cgroups specifies specific cgroup settings for the various subsystems - - // Returns cgroup paths to save in a state file and to be able to - // restore the object later. - GetPaths() map[string]string - - // Sets the cgroup as configured. 
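- // Set applies the resource settings in container.Cgroups to the managed cgroup set.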
- Set(container *configs.Config) error -} - -type NotFoundError struct { - Subsystem string -} - -func (e *NotFoundError) Error() string { - return fmt.Sprintf("mountpoint for %s not found", e.Subsystem) -} - -func NewNotFoundError(sub string) error { - return &NotFoundError{ - Subsystem: sub, - } -} - -func IsNotFound(err error) bool { - if err == nil { - return false - } - _, ok := err.(*NotFoundError) - return ok -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/cgroups/cgroups_unsupported.go b/vendor/github.com/opencontainers/runc/libcontainer/cgroups/cgroups_unsupported.go deleted file mode 100644 index 278d507e2..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/cgroups/cgroups_unsupported.go +++ /dev/null @@ -1,3 +0,0 @@ -// +build !linux - -package cgroups diff --git a/vendor/github.com/opencontainers/runc/libcontainer/cgroups/fs/apply_raw.go b/vendor/github.com/opencontainers/runc/libcontainer/cgroups/fs/apply_raw.go deleted file mode 100644 index f672ba273..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/cgroups/fs/apply_raw.go +++ /dev/null @@ -1,409 +0,0 @@ -// +build linux - -package fs - -import ( - "fmt" - "io" - "io/ioutil" - "os" - "path/filepath" - "sync" - - "github.com/opencontainers/runc/libcontainer/cgroups" - "github.com/opencontainers/runc/libcontainer/configs" - libcontainerUtils "github.com/opencontainers/runc/libcontainer/utils" - "github.com/pkg/errors" - "golang.org/x/sys/unix" -) - -var ( - subsystems = subsystemSet{ - &CpusetGroup{}, - &DevicesGroup{}, - &MemoryGroup{}, - &CpuGroup{}, - &CpuacctGroup{}, - &PidsGroup{}, - &BlkioGroup{}, - &HugetlbGroup{}, - &NetClsGroup{}, - &NetPrioGroup{}, - &PerfEventGroup{}, - &FreezerGroup{}, - &NameGroup{GroupName: "name=systemd", Join: true}, - } - HugePageSizes, _ = cgroups.GetHugePageSize() -) - -var errSubsystemDoesNotExist = fmt.Errorf("cgroup: subsystem does not exist") - -type subsystemSet []subsystem - -func (s subsystemSet) Get(name string) (subsystem, error) { - for _, ss := range s { - if ss.Name() == name { - return ss, nil - } - } - return nil, errSubsystemDoesNotExist -} - -type subsystem interface { - // Name returns the name of the subsystem. - Name() string - // Returns the stats, as 'stats', corresponding to the cgroup under 'path'. - GetStats(path string, stats *cgroups.Stats) error - // Removes the cgroup represented by 'cgroupData'. - Remove(*cgroupData) error - // Creates and joins the cgroup represented by 'cgroupData'. - Apply(*cgroupData) error - // Set the cgroup represented by cgroup. - Set(path string, cgroup *configs.Cgroup) error -} - -type Manager struct { - mu sync.Mutex - Cgroups *configs.Cgroup - Rootless bool // ignore permission-related errors - Paths map[string]string -} - -// The absolute path to the root of the cgroup hierarchies. -var cgroupRootLock sync.Mutex -var cgroupRoot string - -// Gets the cgroupRoot. -func getCgroupRoot() (string, error) { - cgroupRootLock.Lock() - defer cgroupRootLock.Unlock() - - if cgroupRoot != "" { - return cgroupRoot, nil - } - - root, err := cgroups.FindCgroupMountpointDir() - if err != nil { - return "", err - } - - if _, err := os.Stat(root); err != nil { - return "", err - } - - cgroupRoot = root - return cgroupRoot, nil -} - -type cgroupData struct { - root string - innerPath string - config *configs.Cgroup - pid int -} - -// isIgnorableError returns whether err is a permission error (in the loose -// sense of the word). 
This includes EROFS (which for an unprivileged user is -// basically a permission error) and EACCES (for similar reasons) as well as -// the normal EPERM. -func isIgnorableError(rootless bool, err error) bool { - // We do not ignore errors if we are root. - if !rootless { - return false - } - // Is it an ordinary EPERM? - if os.IsPermission(errors.Cause(err)) { - return true - } - - // Try to handle other errnos. - var errno error - switch err := errors.Cause(err).(type) { - case *os.PathError: - errno = err.Err - case *os.LinkError: - errno = err.Err - case *os.SyscallError: - errno = err.Err - } - return errno == unix.EROFS || errno == unix.EPERM || errno == unix.EACCES -} - -func (m *Manager) Apply(pid int) (err error) { - if m.Cgroups == nil { - return nil - } - m.mu.Lock() - defer m.mu.Unlock() - - var c = m.Cgroups - - d, err := getCgroupData(m.Cgroups, pid) - if err != nil { - return err - } - - m.Paths = make(map[string]string) - if c.Paths != nil { - for name, path := range c.Paths { - _, err := d.path(name) - if err != nil { - if cgroups.IsNotFound(err) { - continue - } - return err - } - m.Paths[name] = path - } - return cgroups.EnterPid(m.Paths, pid) - } - - for _, sys := range subsystems { - // TODO: Apply should, ideally, be reentrant or be broken up into a separate - // create and join phase so that the cgroup hierarchy for a container can be - // created then join consists of writing the process pids to cgroup.procs - p, err := d.path(sys.Name()) - if err != nil { - // The non-presence of the devices subsystem is - // considered fatal for security reasons. - if cgroups.IsNotFound(err) && sys.Name() != "devices" { - continue - } - return err - } - m.Paths[sys.Name()] = p - - if err := sys.Apply(d); err != nil { - // In the case of rootless (including euid=0 in userns), where an explicit cgroup path hasn't - // been set, we don't bail on error in case of permission problems. - // Cases where limits have been set (and we couldn't create our own - // cgroup) are handled by Set. - if isIgnorableError(m.Rootless, err) && m.Cgroups.Path == "" { - delete(m.Paths, sys.Name()) - continue - } - return err - } - - } - return nil -} - -func (m *Manager) Destroy() error { - if m.Cgroups == nil || m.Cgroups.Paths != nil { - return nil - } - m.mu.Lock() - defer m.mu.Unlock() - if err := cgroups.RemovePaths(m.Paths); err != nil { - return err - } - m.Paths = make(map[string]string) - return nil -} - -func (m *Manager) GetPaths() map[string]string { - m.mu.Lock() - paths := m.Paths - m.mu.Unlock() - return paths -} - -func (m *Manager) GetStats() (*cgroups.Stats, error) { - m.mu.Lock() - defer m.mu.Unlock() - stats := cgroups.NewStats() - for name, path := range m.Paths { - sys, err := subsystems.Get(name) - if err == errSubsystemDoesNotExist || !cgroups.PathExists(path) { - continue - } - if err := sys.GetStats(path, stats); err != nil { - return nil, err - } - } - return stats, nil -} - -func (m *Manager) Set(container *configs.Config) error { - // If Paths are set, then we are just joining cgroups paths - // and there is no need to set any values. - if m.Cgroups.Paths != nil { - return nil - } - - paths := m.GetPaths() - for _, sys := range subsystems { - path := paths[sys.Name()] - if err := sys.Set(path, container.Cgroups); err != nil { - if m.Rootless && sys.Name() == "devices" { - continue - } - // When m.Rootless is true, errors from the device subsystem are ignored because it is really not expected to work. - // However, errors from other subsystems are not ignored. 
- // see @test "runc create (rootless + limits + no cgrouppath + no permission) fails with informative error" - if path == "" { - // We never created a path for this cgroup, so we cannot set - // limits for it (though we have already tried at this point). - return fmt.Errorf("cannot set %s limit: container could not join or create cgroup", sys.Name()) - } - return err - } - } - - if m.Paths["cpu"] != "" { - if err := CheckCpushares(m.Paths["cpu"], container.Cgroups.Resources.CpuShares); err != nil { - return err - } - } - return nil -} - -// Freeze toggles the container's freezer cgroup depending on the state -// provided -func (m *Manager) Freeze(state configs.FreezerState) error { - paths := m.GetPaths() - dir := paths["freezer"] - prevState := m.Cgroups.Resources.Freezer - m.Cgroups.Resources.Freezer = state - freezer, err := subsystems.Get("freezer") - if err != nil { - return err - } - err = freezer.Set(dir, m.Cgroups) - if err != nil { - m.Cgroups.Resources.Freezer = prevState - return err - } - return nil -} - -func (m *Manager) GetPids() ([]int, error) { - paths := m.GetPaths() - return cgroups.GetPids(paths["devices"]) -} - -func (m *Manager) GetAllPids() ([]int, error) { - paths := m.GetPaths() - return cgroups.GetAllPids(paths["devices"]) -} - -func getCgroupData(c *configs.Cgroup, pid int) (*cgroupData, error) { - root, err := getCgroupRoot() - if err != nil { - return nil, err - } - - if (c.Name != "" || c.Parent != "") && c.Path != "" { - return nil, fmt.Errorf("cgroup: either Path or Name and Parent should be used") - } - - // XXX: Do not remove this code. Path safety is important! -- cyphar - cgPath := libcontainerUtils.CleanPath(c.Path) - cgParent := libcontainerUtils.CleanPath(c.Parent) - cgName := libcontainerUtils.CleanPath(c.Name) - - innerPath := cgPath - if innerPath == "" { - innerPath = filepath.Join(cgParent, cgName) - } - - return &cgroupData{ - root: root, - innerPath: innerPath, - config: c, - pid: pid, - }, nil -} - -func (raw *cgroupData) path(subsystem string) (string, error) { - mnt, err := cgroups.FindCgroupMountpoint(raw.root, subsystem) - // If we didn't mount the subsystem, there is no point we make the path. - if err != nil { - return "", err - } - - // If the cgroup name/path is absolute do not look relative to the cgroup of the init process. - if filepath.IsAbs(raw.innerPath) { - // Sometimes subsystems can be mounted together as 'cpu,cpuacct'. - return filepath.Join(raw.root, filepath.Base(mnt), raw.innerPath), nil - } - - // Use GetOwnCgroupPath instead of GetInitCgroupPath, because the creating - // process could in container and shared pid namespace with host, and - // /proc/1/cgroup could point to whole other world of cgroups. - parentPath, err := cgroups.GetOwnCgroupPath(subsystem) - if err != nil { - return "", err - } - - return filepath.Join(parentPath, raw.innerPath), nil -} - -func (raw *cgroupData) join(subsystem string) (string, error) { - path, err := raw.path(subsystem) - if err != nil { - return "", err - } - if err := os.MkdirAll(path, 0755); err != nil { - return "", err - } - if err := cgroups.WriteCgroupProc(path, raw.pid); err != nil { - return "", err - } - return path, nil -} - -func writeFile(dir, file, data string) error { - // Normally dir should not be empty, one case is that cgroup subsystem - // is not mounted, we will get empty dir, and we want it fail here. 
- if dir == "" { - return fmt.Errorf("no such directory for %s", file) - } - if err := ioutil.WriteFile(filepath.Join(dir, file), []byte(data), 0700); err != nil { - return fmt.Errorf("failed to write %v to %v: %v", data, file, err) - } - return nil -} - -func readFile(dir, file string) (string, error) { - data, err := ioutil.ReadFile(filepath.Join(dir, file)) - return string(data), err -} - -func removePath(p string, err error) error { - if err != nil { - return err - } - if p != "" { - return os.RemoveAll(p) - } - return nil -} - -func CheckCpushares(path string, c uint64) error { - var cpuShares uint64 - - if c == 0 { - return nil - } - - fd, err := os.Open(filepath.Join(path, "cpu.shares")) - if err != nil { - return err - } - defer fd.Close() - - _, err = fmt.Fscanf(fd, "%d", &cpuShares) - if err != nil && err != io.EOF { - return err - } - - if c > cpuShares { - return fmt.Errorf("The maximum allowed cpu-shares is %d", cpuShares) - } else if c < cpuShares { - return fmt.Errorf("The minimum allowed cpu-shares is %d", cpuShares) - } - - return nil -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/cgroups/fs/blkio.go b/vendor/github.com/opencontainers/runc/libcontainer/cgroups/fs/blkio.go deleted file mode 100644 index a142cb991..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/cgroups/fs/blkio.go +++ /dev/null @@ -1,237 +0,0 @@ -// +build linux - -package fs - -import ( - "bufio" - "fmt" - "os" - "path/filepath" - "strconv" - "strings" - - "github.com/opencontainers/runc/libcontainer/cgroups" - "github.com/opencontainers/runc/libcontainer/configs" -) - -type BlkioGroup struct { -} - -func (s *BlkioGroup) Name() string { - return "blkio" -} - -func (s *BlkioGroup) Apply(d *cgroupData) error { - _, err := d.join("blkio") - if err != nil && !cgroups.IsNotFound(err) { - return err - } - return nil -} - -func (s *BlkioGroup) Set(path string, cgroup *configs.Cgroup) error { - if cgroup.Resources.BlkioWeight != 0 { - if err := writeFile(path, "blkio.weight", strconv.FormatUint(uint64(cgroup.Resources.BlkioWeight), 10)); err != nil { - return err - } - } - - if cgroup.Resources.BlkioLeafWeight != 0 { - if err := writeFile(path, "blkio.leaf_weight", strconv.FormatUint(uint64(cgroup.Resources.BlkioLeafWeight), 10)); err != nil { - return err - } - } - for _, wd := range cgroup.Resources.BlkioWeightDevice { - if err := writeFile(path, "blkio.weight_device", wd.WeightString()); err != nil { - return err - } - if err := writeFile(path, "blkio.leaf_weight_device", wd.LeafWeightString()); err != nil { - return err - } - } - for _, td := range cgroup.Resources.BlkioThrottleReadBpsDevice { - if err := writeFile(path, "blkio.throttle.read_bps_device", td.String()); err != nil { - return err - } - } - for _, td := range cgroup.Resources.BlkioThrottleWriteBpsDevice { - if err := writeFile(path, "blkio.throttle.write_bps_device", td.String()); err != nil { - return err - } - } - for _, td := range cgroup.Resources.BlkioThrottleReadIOPSDevice { - if err := writeFile(path, "blkio.throttle.read_iops_device", td.String()); err != nil { - return err - } - } - for _, td := range cgroup.Resources.BlkioThrottleWriteIOPSDevice { - if err := writeFile(path, "blkio.throttle.write_iops_device", td.String()); err != nil { - return err - } - } - - return nil -} - -func (s *BlkioGroup) Remove(d *cgroupData) error { - return removePath(d.path("blkio")) -} - -/* -examples: - - blkio.sectors - 8:0 6792 - - blkio.io_service_bytes - 8:0 Read 1282048 - 8:0 Write 2195456 - 8:0 Sync 2195456 - 
8:0 Async 1282048 - 8:0 Total 3477504 - Total 3477504 - - blkio.io_serviced - 8:0 Read 124 - 8:0 Write 104 - 8:0 Sync 104 - 8:0 Async 124 - 8:0 Total 228 - Total 228 - - blkio.io_queued - 8:0 Read 0 - 8:0 Write 0 - 8:0 Sync 0 - 8:0 Async 0 - 8:0 Total 0 - Total 0 -*/ - -func splitBlkioStatLine(r rune) bool { - return r == ' ' || r == ':' -} - -func getBlkioStat(path string) ([]cgroups.BlkioStatEntry, error) { - var blkioStats []cgroups.BlkioStatEntry - f, err := os.Open(path) - if err != nil { - if os.IsNotExist(err) { - return blkioStats, nil - } - return nil, err - } - defer f.Close() - - sc := bufio.NewScanner(f) - for sc.Scan() { - // format: dev type amount - fields := strings.FieldsFunc(sc.Text(), splitBlkioStatLine) - if len(fields) < 3 { - if len(fields) == 2 && fields[0] == "Total" { - // skip total line - continue - } else { - return nil, fmt.Errorf("Invalid line found while parsing %s: %s", path, sc.Text()) - } - } - - v, err := strconv.ParseUint(fields[0], 10, 64) - if err != nil { - return nil, err - } - major := v - - v, err = strconv.ParseUint(fields[1], 10, 64) - if err != nil { - return nil, err - } - minor := v - - op := "" - valueField := 2 - if len(fields) == 4 { - op = fields[2] - valueField = 3 - } - v, err = strconv.ParseUint(fields[valueField], 10, 64) - if err != nil { - return nil, err - } - blkioStats = append(blkioStats, cgroups.BlkioStatEntry{Major: major, Minor: minor, Op: op, Value: v}) - } - - return blkioStats, nil -} - -func (s *BlkioGroup) GetStats(path string, stats *cgroups.Stats) error { - // Try to read CFQ stats available on all CFQ enabled kernels first - if blkioStats, err := getBlkioStat(filepath.Join(path, "blkio.io_serviced_recursive")); err == nil && blkioStats != nil { - return getCFQStats(path, stats) - } - return getStats(path, stats) // Use generic stats as fallback -} - -func getCFQStats(path string, stats *cgroups.Stats) error { - var blkioStats []cgroups.BlkioStatEntry - var err error - - if blkioStats, err = getBlkioStat(filepath.Join(path, "blkio.sectors_recursive")); err != nil { - return err - } - stats.BlkioStats.SectorsRecursive = blkioStats - - if blkioStats, err = getBlkioStat(filepath.Join(path, "blkio.io_service_bytes_recursive")); err != nil { - return err - } - stats.BlkioStats.IoServiceBytesRecursive = blkioStats - - if blkioStats, err = getBlkioStat(filepath.Join(path, "blkio.io_serviced_recursive")); err != nil { - return err - } - stats.BlkioStats.IoServicedRecursive = blkioStats - - if blkioStats, err = getBlkioStat(filepath.Join(path, "blkio.io_queued_recursive")); err != nil { - return err - } - stats.BlkioStats.IoQueuedRecursive = blkioStats - - if blkioStats, err = getBlkioStat(filepath.Join(path, "blkio.io_service_time_recursive")); err != nil { - return err - } - stats.BlkioStats.IoServiceTimeRecursive = blkioStats - - if blkioStats, err = getBlkioStat(filepath.Join(path, "blkio.io_wait_time_recursive")); err != nil { - return err - } - stats.BlkioStats.IoWaitTimeRecursive = blkioStats - - if blkioStats, err = getBlkioStat(filepath.Join(path, "blkio.io_merged_recursive")); err != nil { - return err - } - stats.BlkioStats.IoMergedRecursive = blkioStats - - if blkioStats, err = getBlkioStat(filepath.Join(path, "blkio.time_recursive")); err != nil { - return err - } - stats.BlkioStats.IoTimeRecursive = blkioStats - - return nil -} - -func getStats(path string, stats *cgroups.Stats) error { - var blkioStats []cgroups.BlkioStatEntry - var err error - - if blkioStats, err = getBlkioStat(filepath.Join(path, 
"blkio.throttle.io_service_bytes")); err != nil { - return err - } - stats.BlkioStats.IoServiceBytesRecursive = blkioStats - - if blkioStats, err = getBlkioStat(filepath.Join(path, "blkio.throttle.io_serviced")); err != nil { - return err - } - stats.BlkioStats.IoServicedRecursive = blkioStats - - return nil -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/cgroups/fs/cpu.go b/vendor/github.com/opencontainers/runc/libcontainer/cgroups/fs/cpu.go deleted file mode 100644 index e240a8313..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/cgroups/fs/cpu.go +++ /dev/null @@ -1,117 +0,0 @@ -// +build linux - -package fs - -import ( - "bufio" - "os" - "path/filepath" - "strconv" - - "github.com/opencontainers/runc/libcontainer/cgroups" - "github.com/opencontainers/runc/libcontainer/configs" -) - -type CpuGroup struct { -} - -func (s *CpuGroup) Name() string { - return "cpu" -} - -func (s *CpuGroup) Apply(d *cgroupData) error { - // We always want to join the cpu group, to allow fair cpu scheduling - // on a container basis - path, err := d.path("cpu") - if err != nil && !cgroups.IsNotFound(err) { - return err - } - return s.ApplyDir(path, d.config, d.pid) -} - -func (s *CpuGroup) ApplyDir(path string, cgroup *configs.Cgroup, pid int) error { - // This might happen if we have no cpu cgroup mounted. - // Just do nothing and don't fail. - if path == "" { - return nil - } - if err := os.MkdirAll(path, 0755); err != nil { - return err - } - // We should set the real-Time group scheduling settings before moving - // in the process because if the process is already in SCHED_RR mode - // and no RT bandwidth is set, adding it will fail. - if err := s.SetRtSched(path, cgroup); err != nil { - return err - } - // because we are not using d.join we need to place the pid into the procs file - // unlike the other subsystems - return cgroups.WriteCgroupProc(path, pid) -} - -func (s *CpuGroup) SetRtSched(path string, cgroup *configs.Cgroup) error { - if cgroup.Resources.CpuRtPeriod != 0 { - if err := writeFile(path, "cpu.rt_period_us", strconv.FormatUint(cgroup.Resources.CpuRtPeriod, 10)); err != nil { - return err - } - } - if cgroup.Resources.CpuRtRuntime != 0 { - if err := writeFile(path, "cpu.rt_runtime_us", strconv.FormatInt(cgroup.Resources.CpuRtRuntime, 10)); err != nil { - return err - } - } - return nil -} - -func (s *CpuGroup) Set(path string, cgroup *configs.Cgroup) error { - if cgroup.Resources.CpuShares != 0 { - if err := writeFile(path, "cpu.shares", strconv.FormatUint(cgroup.Resources.CpuShares, 10)); err != nil { - return err - } - } - if cgroup.Resources.CpuPeriod != 0 { - if err := writeFile(path, "cpu.cfs_period_us", strconv.FormatUint(cgroup.Resources.CpuPeriod, 10)); err != nil { - return err - } - } - if cgroup.Resources.CpuQuota != 0 { - if err := writeFile(path, "cpu.cfs_quota_us", strconv.FormatInt(cgroup.Resources.CpuQuota, 10)); err != nil { - return err - } - } - return s.SetRtSched(path, cgroup) -} - -func (s *CpuGroup) Remove(d *cgroupData) error { - return removePath(d.path("cpu")) -} - -func (s *CpuGroup) GetStats(path string, stats *cgroups.Stats) error { - f, err := os.Open(filepath.Join(path, "cpu.stat")) - if err != nil { - if os.IsNotExist(err) { - return nil - } - return err - } - defer f.Close() - - sc := bufio.NewScanner(f) - for sc.Scan() { - t, v, err := getCgroupParamKeyValue(sc.Text()) - if err != nil { - return err - } - switch t { - case "nr_periods": - stats.CpuStats.ThrottlingData.Periods = v - - case "nr_throttled": - 
stats.CpuStats.ThrottlingData.ThrottledPeriods = v - - case "throttled_time": - stats.CpuStats.ThrottlingData.ThrottledTime = v - } - } - return nil -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/cgroups/fs/cpuacct.go b/vendor/github.com/opencontainers/runc/libcontainer/cgroups/fs/cpuacct.go deleted file mode 100644 index 53afbaddf..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/cgroups/fs/cpuacct.go +++ /dev/null @@ -1,121 +0,0 @@ -// +build linux - -package fs - -import ( - "fmt" - "io/ioutil" - "path/filepath" - "strconv" - "strings" - - "github.com/opencontainers/runc/libcontainer/cgroups" - "github.com/opencontainers/runc/libcontainer/configs" - "github.com/opencontainers/runc/libcontainer/system" -) - -const ( - cgroupCpuacctStat = "cpuacct.stat" - nanosecondsInSecond = 1000000000 -) - -var clockTicks = uint64(system.GetClockTicks()) - -type CpuacctGroup struct { -} - -func (s *CpuacctGroup) Name() string { - return "cpuacct" -} - -func (s *CpuacctGroup) Apply(d *cgroupData) error { - // we just want to join this group even though we don't set anything - if _, err := d.join("cpuacct"); err != nil && !cgroups.IsNotFound(err) { - return err - } - - return nil -} - -func (s *CpuacctGroup) Set(path string, cgroup *configs.Cgroup) error { - return nil -} - -func (s *CpuacctGroup) Remove(d *cgroupData) error { - return removePath(d.path("cpuacct")) -} - -func (s *CpuacctGroup) GetStats(path string, stats *cgroups.Stats) error { - userModeUsage, kernelModeUsage, err := getCpuUsageBreakdown(path) - if err != nil { - return err - } - - totalUsage, err := getCgroupParamUint(path, "cpuacct.usage") - if err != nil { - return err - } - - percpuUsage, err := getPercpuUsage(path) - if err != nil { - return err - } - - stats.CpuStats.CpuUsage.TotalUsage = totalUsage - stats.CpuStats.CpuUsage.PercpuUsage = percpuUsage - stats.CpuStats.CpuUsage.UsageInUsermode = userModeUsage - stats.CpuStats.CpuUsage.UsageInKernelmode = kernelModeUsage - return nil -} - -// Returns user and kernel usage breakdown in nanoseconds. 
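// cpuacct.stat holds two counters expressed in clock ticks, e.g.:
//   user 4025
//   system 1337
// and the breakdown below converts each tick count to nanoseconds as
// value * 1e9 / clockTicks (clockTicks comes from GetClockTicks and is
// typically 100, i.e. USER_HZ).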
-func getCpuUsageBreakdown(path string) (uint64, uint64, error) { - userModeUsage := uint64(0) - kernelModeUsage := uint64(0) - const ( - userField = "user" - systemField = "system" - ) - - // Expected format: - // user - // system - data, err := ioutil.ReadFile(filepath.Join(path, cgroupCpuacctStat)) - if err != nil { - return 0, 0, err - } - fields := strings.Fields(string(data)) - if len(fields) != 4 { - return 0, 0, fmt.Errorf("failure - %s is expected to have 4 fields", filepath.Join(path, cgroupCpuacctStat)) - } - if fields[0] != userField { - return 0, 0, fmt.Errorf("unexpected field %q in %q, expected %q", fields[0], cgroupCpuacctStat, userField) - } - if fields[2] != systemField { - return 0, 0, fmt.Errorf("unexpected field %q in %q, expected %q", fields[2], cgroupCpuacctStat, systemField) - } - if userModeUsage, err = strconv.ParseUint(fields[1], 10, 64); err != nil { - return 0, 0, err - } - if kernelModeUsage, err = strconv.ParseUint(fields[3], 10, 64); err != nil { - return 0, 0, err - } - - return (userModeUsage * nanosecondsInSecond) / clockTicks, (kernelModeUsage * nanosecondsInSecond) / clockTicks, nil -} - -func getPercpuUsage(path string) ([]uint64, error) { - percpuUsage := []uint64{} - data, err := ioutil.ReadFile(filepath.Join(path, "cpuacct.usage_percpu")) - if err != nil { - return percpuUsage, err - } - for _, value := range strings.Fields(string(data)) { - value, err := strconv.ParseUint(value, 10, 64) - if err != nil { - return percpuUsage, fmt.Errorf("Unable to convert param value to uint64: %s", err) - } - percpuUsage = append(percpuUsage, value) - } - return percpuUsage, nil -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/cgroups/fs/cpuset.go b/vendor/github.com/opencontainers/runc/libcontainer/cgroups/fs/cpuset.go deleted file mode 100644 index 5a1d152ea..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/cgroups/fs/cpuset.go +++ /dev/null @@ -1,159 +0,0 @@ -// +build linux - -package fs - -import ( - "bytes" - "fmt" - "io/ioutil" - "os" - "path/filepath" - - "github.com/opencontainers/runc/libcontainer/cgroups" - "github.com/opencontainers/runc/libcontainer/configs" - libcontainerUtils "github.com/opencontainers/runc/libcontainer/utils" -) - -type CpusetGroup struct { -} - -func (s *CpusetGroup) Name() string { - return "cpuset" -} - -func (s *CpusetGroup) Apply(d *cgroupData) error { - dir, err := d.path("cpuset") - if err != nil && !cgroups.IsNotFound(err) { - return err - } - return s.ApplyDir(dir, d.config, d.pid) -} - -func (s *CpusetGroup) Set(path string, cgroup *configs.Cgroup) error { - if cgroup.Resources.CpusetCpus != "" { - if err := writeFile(path, "cpuset.cpus", cgroup.Resources.CpusetCpus); err != nil { - return err - } - } - if cgroup.Resources.CpusetMems != "" { - if err := writeFile(path, "cpuset.mems", cgroup.Resources.CpusetMems); err != nil { - return err - } - } - return nil -} - -func (s *CpusetGroup) Remove(d *cgroupData) error { - return removePath(d.path("cpuset")) -} - -func (s *CpusetGroup) GetStats(path string, stats *cgroups.Stats) error { - return nil -} - -func (s *CpusetGroup) ApplyDir(dir string, cgroup *configs.Cgroup, pid int) error { - // This might happen if we have no cpuset cgroup mounted. - // Just do nothing and don't fail. 
- if dir == "" { - return nil - } - mountInfo, err := ioutil.ReadFile("/proc/self/mountinfo") - if err != nil { - return err - } - root := filepath.Dir(cgroups.GetClosestMountpointAncestor(dir, string(mountInfo))) - // 'ensureParent' start with parent because we don't want to - // explicitly inherit from parent, it could conflict with - // 'cpuset.cpu_exclusive'. - if err := s.ensureParent(filepath.Dir(dir), root); err != nil { - return err - } - if err := os.MkdirAll(dir, 0755); err != nil { - return err - } - // We didn't inherit cpuset configs from parent, but we have - // to ensure cpuset configs are set before moving task into the - // cgroup. - // The logic is, if user specified cpuset configs, use these - // specified configs, otherwise, inherit from parent. This makes - // cpuset configs work correctly with 'cpuset.cpu_exclusive', and - // keep backward compatibility. - if err := s.ensureCpusAndMems(dir, cgroup); err != nil { - return err - } - - // because we are not using d.join we need to place the pid into the procs file - // unlike the other subsystems - return cgroups.WriteCgroupProc(dir, pid) -} - -func (s *CpusetGroup) getSubsystemSettings(parent string) (cpus []byte, mems []byte, err error) { - if cpus, err = ioutil.ReadFile(filepath.Join(parent, "cpuset.cpus")); err != nil { - return - } - if mems, err = ioutil.ReadFile(filepath.Join(parent, "cpuset.mems")); err != nil { - return - } - return cpus, mems, nil -} - -// ensureParent makes sure that the parent directory of current is created -// and populated with the proper cpus and mems files copied from -// it's parent. -func (s *CpusetGroup) ensureParent(current, root string) error { - parent := filepath.Dir(current) - if libcontainerUtils.CleanPath(parent) == root { - return nil - } - // Avoid infinite recursion. 
- if parent == current { - return fmt.Errorf("cpuset: cgroup parent path outside cgroup root") - } - if err := s.ensureParent(parent, root); err != nil { - return err - } - if err := os.MkdirAll(current, 0755); err != nil { - return err - } - return s.copyIfNeeded(current, parent) -} - -// copyIfNeeded copies the cpuset.cpus and cpuset.mems from the parent -// directory to the current directory if the file's contents are 0 -func (s *CpusetGroup) copyIfNeeded(current, parent string) error { - var ( - err error - currentCpus, currentMems []byte - parentCpus, parentMems []byte - ) - - if currentCpus, currentMems, err = s.getSubsystemSettings(current); err != nil { - return err - } - if parentCpus, parentMems, err = s.getSubsystemSettings(parent); err != nil { - return err - } - - if s.isEmpty(currentCpus) { - if err := writeFile(current, "cpuset.cpus", string(parentCpus)); err != nil { - return err - } - } - if s.isEmpty(currentMems) { - if err := writeFile(current, "cpuset.mems", string(parentMems)); err != nil { - return err - } - } - return nil -} - -func (s *CpusetGroup) isEmpty(b []byte) bool { - return len(bytes.Trim(b, "\n")) == 0 -} - -func (s *CpusetGroup) ensureCpusAndMems(path string, cgroup *configs.Cgroup) error { - if err := s.Set(path, cgroup); err != nil { - return err - } - return s.copyIfNeeded(path, filepath.Dir(path)) -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/cgroups/fs/devices.go b/vendor/github.com/opencontainers/runc/libcontainer/cgroups/fs/devices.go deleted file mode 100644 index 0ac5b4ed7..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/cgroups/fs/devices.go +++ /dev/null @@ -1,80 +0,0 @@ -// +build linux - -package fs - -import ( - "github.com/opencontainers/runc/libcontainer/cgroups" - "github.com/opencontainers/runc/libcontainer/configs" - "github.com/opencontainers/runc/libcontainer/system" -) - -type DevicesGroup struct { -} - -func (s *DevicesGroup) Name() string { - return "devices" -} - -func (s *DevicesGroup) Apply(d *cgroupData) error { - _, err := d.join("devices") - if err != nil { - // We will return error even it's `not found` error, devices - // cgroup is hard requirement for container's security. 
- return err - } - return nil -} - -func (s *DevicesGroup) Set(path string, cgroup *configs.Cgroup) error { - if system.RunningInUserNS() { - return nil - } - - devices := cgroup.Resources.Devices - if len(devices) > 0 { - for _, dev := range devices { - file := "devices.deny" - if dev.Allow { - file = "devices.allow" - } - if err := writeFile(path, file, dev.CgroupString()); err != nil { - return err - } - } - return nil - } - if cgroup.Resources.AllowAllDevices != nil { - if *cgroup.Resources.AllowAllDevices == false { - if err := writeFile(path, "devices.deny", "a"); err != nil { - return err - } - - for _, dev := range cgroup.Resources.AllowedDevices { - if err := writeFile(path, "devices.allow", dev.CgroupString()); err != nil { - return err - } - } - return nil - } - - if err := writeFile(path, "devices.allow", "a"); err != nil { - return err - } - } - - for _, dev := range cgroup.Resources.DeniedDevices { - if err := writeFile(path, "devices.deny", dev.CgroupString()); err != nil { - return err - } - } - - return nil -} - -func (s *DevicesGroup) Remove(d *cgroupData) error { - return removePath(d.path("devices")) -} - -func (s *DevicesGroup) GetStats(path string, stats *cgroups.Stats) error { - return nil -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/cgroups/fs/freezer.go b/vendor/github.com/opencontainers/runc/libcontainer/cgroups/fs/freezer.go deleted file mode 100644 index 4b19f8a97..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/cgroups/fs/freezer.go +++ /dev/null @@ -1,66 +0,0 @@ -// +build linux - -package fs - -import ( - "fmt" - "strings" - "time" - - "github.com/opencontainers/runc/libcontainer/cgroups" - "github.com/opencontainers/runc/libcontainer/configs" -) - -type FreezerGroup struct { -} - -func (s *FreezerGroup) Name() string { - return "freezer" -} - -func (s *FreezerGroup) Apply(d *cgroupData) error { - _, err := d.join("freezer") - if err != nil && !cgroups.IsNotFound(err) { - return err - } - return nil -} - -func (s *FreezerGroup) Set(path string, cgroup *configs.Cgroup) error { - switch cgroup.Resources.Freezer { - case configs.Frozen, configs.Thawed: - for { - // In case this loop does not exit because it doesn't get the expected - // state, let's write again this state, hoping it's going to be properly - // set this time. Otherwise, this loop could run infinitely, waiting for - // a state change that would never happen. 
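// Concretely: each pass rewrites the requested state ("FROZEN" or "THAWED")
// and then re-reads freezer.state; while the kernel still reports a
// transitional value such as "FREEZING", we sleep for a millisecond and retry.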
- if err := writeFile(path, "freezer.state", string(cgroup.Resources.Freezer)); err != nil { - return err - } - - state, err := readFile(path, "freezer.state") - if err != nil { - return err - } - if strings.TrimSpace(state) == string(cgroup.Resources.Freezer) { - break - } - - time.Sleep(1 * time.Millisecond) - } - case configs.Undefined: - return nil - default: - return fmt.Errorf("Invalid argument '%s' to freezer.state", string(cgroup.Resources.Freezer)) - } - - return nil -} - -func (s *FreezerGroup) Remove(d *cgroupData) error { - return removePath(d.path("freezer")) -} - -func (s *FreezerGroup) GetStats(path string, stats *cgroups.Stats) error { - return nil -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/cgroups/fs/fs_unsupported.go b/vendor/github.com/opencontainers/runc/libcontainer/cgroups/fs/fs_unsupported.go deleted file mode 100644 index 3ef9e0315..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/cgroups/fs/fs_unsupported.go +++ /dev/null @@ -1,3 +0,0 @@ -// +build !linux - -package fs diff --git a/vendor/github.com/opencontainers/runc/libcontainer/cgroups/fs/hugetlb.go b/vendor/github.com/opencontainers/runc/libcontainer/cgroups/fs/hugetlb.go deleted file mode 100644 index 2f9727719..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/cgroups/fs/hugetlb.go +++ /dev/null @@ -1,71 +0,0 @@ -// +build linux - -package fs - -import ( - "fmt" - "strconv" - "strings" - - "github.com/opencontainers/runc/libcontainer/cgroups" - "github.com/opencontainers/runc/libcontainer/configs" -) - -type HugetlbGroup struct { -} - -func (s *HugetlbGroup) Name() string { - return "hugetlb" -} - -func (s *HugetlbGroup) Apply(d *cgroupData) error { - _, err := d.join("hugetlb") - if err != nil && !cgroups.IsNotFound(err) { - return err - } - return nil -} - -func (s *HugetlbGroup) Set(path string, cgroup *configs.Cgroup) error { - for _, hugetlb := range cgroup.Resources.HugetlbLimit { - if err := writeFile(path, strings.Join([]string{"hugetlb", hugetlb.Pagesize, "limit_in_bytes"}, "."), strconv.FormatUint(hugetlb.Limit, 10)); err != nil { - return err - } - } - - return nil -} - -func (s *HugetlbGroup) Remove(d *cgroupData) error { - return removePath(d.path("hugetlb")) -} - -func (s *HugetlbGroup) GetStats(path string, stats *cgroups.Stats) error { - hugetlbStats := cgroups.HugetlbStats{} - for _, pageSize := range HugePageSizes { - usage := strings.Join([]string{"hugetlb", pageSize, "usage_in_bytes"}, ".") - value, err := getCgroupParamUint(path, usage) - if err != nil { - return fmt.Errorf("failed to parse %s - %v", usage, err) - } - hugetlbStats.Usage = value - - maxUsage := strings.Join([]string{"hugetlb", pageSize, "max_usage_in_bytes"}, ".") - value, err = getCgroupParamUint(path, maxUsage) - if err != nil { - return fmt.Errorf("failed to parse %s - %v", maxUsage, err) - } - hugetlbStats.MaxUsage = value - - failcnt := strings.Join([]string{"hugetlb", pageSize, "failcnt"}, ".") - value, err = getCgroupParamUint(path, failcnt) - if err != nil { - return fmt.Errorf("failed to parse %s - %v", failcnt, err) - } - hugetlbStats.Failcnt = value - - stats.HugetlbStats[pageSize] = hugetlbStats - } - - return nil -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/cgroups/fs/kmem.go b/vendor/github.com/opencontainers/runc/libcontainer/cgroups/fs/kmem.go deleted file mode 100644 index 69b5a1946..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/cgroups/fs/kmem.go +++ /dev/null @@ -1,62 +0,0 @@ -// +build linux,!nokmem - 
-package fs - -import ( - "errors" - "fmt" - "io/ioutil" - "os" - "path/filepath" - "strconv" - "syscall" // for Errno type only - - "github.com/opencontainers/runc/libcontainer/cgroups" - "golang.org/x/sys/unix" -) - -const cgroupKernelMemoryLimit = "memory.kmem.limit_in_bytes" - -func EnableKernelMemoryAccounting(path string) error { - // Ensure that kernel memory is available in this kernel build. If it - // isn't, we just ignore it because EnableKernelMemoryAccounting is - // automatically called for all memory limits. - if !cgroups.PathExists(filepath.Join(path, cgroupKernelMemoryLimit)) { - return nil - } - // We have to limit the kernel memory here as it won't be accounted at all - // until a limit is set on the cgroup and limit cannot be set once the - // cgroup has children, or if there are already tasks in the cgroup. - for _, i := range []int64{1, -1} { - if err := setKernelMemory(path, i); err != nil { - return err - } - } - return nil -} - -func setKernelMemory(path string, kernelMemoryLimit int64) error { - if path == "" { - return fmt.Errorf("no such directory for %s", cgroupKernelMemoryLimit) - } - if !cgroups.PathExists(filepath.Join(path, cgroupKernelMemoryLimit)) { - // We have specifically been asked to set a kmem limit. If the kernel - // doesn't support it we *must* error out. - return errors.New("kernel memory accounting not supported by this kernel") - } - if err := ioutil.WriteFile(filepath.Join(path, cgroupKernelMemoryLimit), []byte(strconv.FormatInt(kernelMemoryLimit, 10)), 0700); err != nil { - // Check if the error number returned by the syscall is "EBUSY" - // The EBUSY signal is returned on attempts to write to the - // memory.kmem.limit_in_bytes file if the cgroup has children or - // once tasks have been attached to the cgroup - if pathErr, ok := err.(*os.PathError); ok { - if errNo, ok := pathErr.Err.(syscall.Errno); ok { - if errNo == unix.EBUSY { - return fmt.Errorf("failed to set %s, because either tasks have already joined this cgroup or it has children", cgroupKernelMemoryLimit) - } - } - } - return fmt.Errorf("failed to write %v to %v: %v", kernelMemoryLimit, cgroupKernelMemoryLimit, err) - } - return nil -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/cgroups/fs/kmem_disabled.go b/vendor/github.com/opencontainers/runc/libcontainer/cgroups/fs/kmem_disabled.go deleted file mode 100644 index ac290fd7a..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/cgroups/fs/kmem_disabled.go +++ /dev/null @@ -1,15 +0,0 @@ -// +build linux,nokmem - -package fs - -import ( - "errors" -) - -func EnableKernelMemoryAccounting(path string) error { - return nil -} - -func setKernelMemory(path string, kernelMemoryLimit int64) error { - return errors.New("kernel memory accounting disabled in this runc build") -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/cgroups/fs/memory.go b/vendor/github.com/opencontainers/runc/libcontainer/cgroups/fs/memory.go deleted file mode 100644 index d5310d569..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/cgroups/fs/memory.go +++ /dev/null @@ -1,270 +0,0 @@ -// +build linux - -package fs - -import ( - "bufio" - "fmt" - "os" - "path/filepath" - "strconv" - "strings" - - "github.com/opencontainers/runc/libcontainer/cgroups" - "github.com/opencontainers/runc/libcontainer/configs" -) - -const ( - cgroupMemorySwapLimit = "memory.memsw.limit_in_bytes" - cgroupMemoryLimit = "memory.limit_in_bytes" -) - -type MemoryGroup struct { -} - -func (s *MemoryGroup) Name() string { - 
return "memory" -} - -func (s *MemoryGroup) Apply(d *cgroupData) (err error) { - path, err := d.path("memory") - if err != nil && !cgroups.IsNotFound(err) { - return err - } else if path == "" { - return nil - } - if memoryAssigned(d.config) { - if _, err := os.Stat(path); os.IsNotExist(err) { - if err := os.MkdirAll(path, 0755); err != nil { - return err - } - // Only enable kernel memory accouting when this cgroup - // is created by libcontainer, otherwise we might get - // error when people use `cgroupsPath` to join an existed - // cgroup whose kernel memory is not initialized. - if err := EnableKernelMemoryAccounting(path); err != nil { - return err - } - } - } - defer func() { - if err != nil { - os.RemoveAll(path) - } - }() - - // We need to join memory cgroup after set memory limits, because - // kmem.limit_in_bytes can only be set when the cgroup is empty. - _, err = d.join("memory") - if err != nil && !cgroups.IsNotFound(err) { - return err - } - return nil -} - -func setMemoryAndSwap(path string, cgroup *configs.Cgroup) error { - // If the memory update is set to -1 we should also - // set swap to -1, it means unlimited memory. - if cgroup.Resources.Memory == -1 { - // Only set swap if it's enabled in kernel - if cgroups.PathExists(filepath.Join(path, cgroupMemorySwapLimit)) { - cgroup.Resources.MemorySwap = -1 - } - } - - // When memory and swap memory are both set, we need to handle the cases - // for updating container. - if cgroup.Resources.Memory != 0 && cgroup.Resources.MemorySwap != 0 { - memoryUsage, err := getMemoryData(path, "") - if err != nil { - return err - } - - // When update memory limit, we should adapt the write sequence - // for memory and swap memory, so it won't fail because the new - // value and the old value don't fit kernel's validation. 
- if cgroup.Resources.MemorySwap == -1 || memoryUsage.Limit < uint64(cgroup.Resources.MemorySwap) { - if err := writeFile(path, cgroupMemorySwapLimit, strconv.FormatInt(cgroup.Resources.MemorySwap, 10)); err != nil { - return err - } - if err := writeFile(path, cgroupMemoryLimit, strconv.FormatInt(cgroup.Resources.Memory, 10)); err != nil { - return err - } - } else { - if err := writeFile(path, cgroupMemoryLimit, strconv.FormatInt(cgroup.Resources.Memory, 10)); err != nil { - return err - } - if err := writeFile(path, cgroupMemorySwapLimit, strconv.FormatInt(cgroup.Resources.MemorySwap, 10)); err != nil { - return err - } - } - } else { - if cgroup.Resources.Memory != 0 { - if err := writeFile(path, cgroupMemoryLimit, strconv.FormatInt(cgroup.Resources.Memory, 10)); err != nil { - return err - } - } - if cgroup.Resources.MemorySwap != 0 { - if err := writeFile(path, cgroupMemorySwapLimit, strconv.FormatInt(cgroup.Resources.MemorySwap, 10)); err != nil { - return err - } - } - } - - return nil -} - -func (s *MemoryGroup) Set(path string, cgroup *configs.Cgroup) error { - if err := setMemoryAndSwap(path, cgroup); err != nil { - return err - } - - if cgroup.Resources.KernelMemory != 0 { - if err := setKernelMemory(path, cgroup.Resources.KernelMemory); err != nil { - return err - } - } - - if cgroup.Resources.MemoryReservation != 0 { - if err := writeFile(path, "memory.soft_limit_in_bytes", strconv.FormatInt(cgroup.Resources.MemoryReservation, 10)); err != nil { - return err - } - } - - if cgroup.Resources.KernelMemoryTCP != 0 { - if err := writeFile(path, "memory.kmem.tcp.limit_in_bytes", strconv.FormatInt(cgroup.Resources.KernelMemoryTCP, 10)); err != nil { - return err - } - } - if cgroup.Resources.OomKillDisable { - if err := writeFile(path, "memory.oom_control", "1"); err != nil { - return err - } - } - if cgroup.Resources.MemorySwappiness == nil || int64(*cgroup.Resources.MemorySwappiness) == -1 { - return nil - } else if *cgroup.Resources.MemorySwappiness <= 100 { - if err := writeFile(path, "memory.swappiness", strconv.FormatUint(*cgroup.Resources.MemorySwappiness, 10)); err != nil { - return err - } - } else { - return fmt.Errorf("invalid value:%d. valid memory swappiness range is 0-100", *cgroup.Resources.MemorySwappiness) - } - - return nil -} - -func (s *MemoryGroup) Remove(d *cgroupData) error { - return removePath(d.path("memory")) -} - -func (s *MemoryGroup) GetStats(path string, stats *cgroups.Stats) error { - // Set stats from memory.stat. 
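// memory.stat is a flat list of "name value" pairs, for example:
//   cache 8192
//   rss 4096
// Each line is split by getCgroupParamKeyValue and stored in
// stats.MemoryStats.Stats under its name.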
- statsFile, err := os.Open(filepath.Join(path, "memory.stat")) - if err != nil { - if os.IsNotExist(err) { - return nil - } - return err - } - defer statsFile.Close() - - sc := bufio.NewScanner(statsFile) - for sc.Scan() { - t, v, err := getCgroupParamKeyValue(sc.Text()) - if err != nil { - return fmt.Errorf("failed to parse memory.stat (%q) - %v", sc.Text(), err) - } - stats.MemoryStats.Stats[t] = v - } - stats.MemoryStats.Cache = stats.MemoryStats.Stats["cache"] - - memoryUsage, err := getMemoryData(path, "") - if err != nil { - return err - } - stats.MemoryStats.Usage = memoryUsage - swapUsage, err := getMemoryData(path, "memsw") - if err != nil { - return err - } - stats.MemoryStats.SwapUsage = swapUsage - kernelUsage, err := getMemoryData(path, "kmem") - if err != nil { - return err - } - stats.MemoryStats.KernelUsage = kernelUsage - kernelTCPUsage, err := getMemoryData(path, "kmem.tcp") - if err != nil { - return err - } - stats.MemoryStats.KernelTCPUsage = kernelTCPUsage - - useHierarchy := strings.Join([]string{"memory", "use_hierarchy"}, ".") - value, err := getCgroupParamUint(path, useHierarchy) - if err != nil { - return err - } - if value == 1 { - stats.MemoryStats.UseHierarchy = true - } - return nil -} - -func memoryAssigned(cgroup *configs.Cgroup) bool { - return cgroup.Resources.Memory != 0 || - cgroup.Resources.MemoryReservation != 0 || - cgroup.Resources.MemorySwap > 0 || - cgroup.Resources.KernelMemory > 0 || - cgroup.Resources.KernelMemoryTCP > 0 || - cgroup.Resources.OomKillDisable || - (cgroup.Resources.MemorySwappiness != nil && int64(*cgroup.Resources.MemorySwappiness) != -1) -} - -func getMemoryData(path, name string) (cgroups.MemoryData, error) { - memoryData := cgroups.MemoryData{} - - moduleName := "memory" - if name != "" { - moduleName = strings.Join([]string{"memory", name}, ".") - } - usage := strings.Join([]string{moduleName, "usage_in_bytes"}, ".") - maxUsage := strings.Join([]string{moduleName, "max_usage_in_bytes"}, ".") - failcnt := strings.Join([]string{moduleName, "failcnt"}, ".") - limit := strings.Join([]string{moduleName, "limit_in_bytes"}, ".") - - value, err := getCgroupParamUint(path, usage) - if err != nil { - if moduleName != "memory" && os.IsNotExist(err) { - return cgroups.MemoryData{}, nil - } - return cgroups.MemoryData{}, fmt.Errorf("failed to parse %s - %v", usage, err) - } - memoryData.Usage = value - value, err = getCgroupParamUint(path, maxUsage) - if err != nil { - if moduleName != "memory" && os.IsNotExist(err) { - return cgroups.MemoryData{}, nil - } - return cgroups.MemoryData{}, fmt.Errorf("failed to parse %s - %v", maxUsage, err) - } - memoryData.MaxUsage = value - value, err = getCgroupParamUint(path, failcnt) - if err != nil { - if moduleName != "memory" && os.IsNotExist(err) { - return cgroups.MemoryData{}, nil - } - return cgroups.MemoryData{}, fmt.Errorf("failed to parse %s - %v", failcnt, err) - } - memoryData.Failcnt = value - value, err = getCgroupParamUint(path, limit) - if err != nil { - if moduleName != "memory" && os.IsNotExist(err) { - return cgroups.MemoryData{}, nil - } - return cgroups.MemoryData{}, fmt.Errorf("failed to parse %s - %v", limit, err) - } - memoryData.Limit = value - - return memoryData, nil -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/cgroups/fs/name.go b/vendor/github.com/opencontainers/runc/libcontainer/cgroups/fs/name.go deleted file mode 100644 index d8cf1d87c..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/cgroups/fs/name.go +++ /dev/null @@ -1,40 
+0,0 @@ -// +build linux - -package fs - -import ( - "github.com/opencontainers/runc/libcontainer/cgroups" - "github.com/opencontainers/runc/libcontainer/configs" -) - -type NameGroup struct { - GroupName string - Join bool -} - -func (s *NameGroup) Name() string { - return s.GroupName -} - -func (s *NameGroup) Apply(d *cgroupData) error { - if s.Join { - // ignore errors if the named cgroup does not exist - d.join(s.GroupName) - } - return nil -} - -func (s *NameGroup) Set(path string, cgroup *configs.Cgroup) error { - return nil -} - -func (s *NameGroup) Remove(d *cgroupData) error { - if s.Join { - removePath(d.path(s.GroupName)) - } - return nil -} - -func (s *NameGroup) GetStats(path string, stats *cgroups.Stats) error { - return nil -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/cgroups/fs/net_cls.go b/vendor/github.com/opencontainers/runc/libcontainer/cgroups/fs/net_cls.go deleted file mode 100644 index 8e74b645e..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/cgroups/fs/net_cls.go +++ /dev/null @@ -1,43 +0,0 @@ -// +build linux - -package fs - -import ( - "strconv" - - "github.com/opencontainers/runc/libcontainer/cgroups" - "github.com/opencontainers/runc/libcontainer/configs" -) - -type NetClsGroup struct { -} - -func (s *NetClsGroup) Name() string { - return "net_cls" -} - -func (s *NetClsGroup) Apply(d *cgroupData) error { - _, err := d.join("net_cls") - if err != nil && !cgroups.IsNotFound(err) { - return err - } - return nil -} - -func (s *NetClsGroup) Set(path string, cgroup *configs.Cgroup) error { - if cgroup.Resources.NetClsClassid != 0 { - if err := writeFile(path, "net_cls.classid", strconv.FormatUint(uint64(cgroup.Resources.NetClsClassid), 10)); err != nil { - return err - } - } - - return nil -} - -func (s *NetClsGroup) Remove(d *cgroupData) error { - return removePath(d.path("net_cls")) -} - -func (s *NetClsGroup) GetStats(path string, stats *cgroups.Stats) error { - return nil -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/cgroups/fs/net_prio.go b/vendor/github.com/opencontainers/runc/libcontainer/cgroups/fs/net_prio.go deleted file mode 100644 index d0ab2af89..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/cgroups/fs/net_prio.go +++ /dev/null @@ -1,41 +0,0 @@ -// +build linux - -package fs - -import ( - "github.com/opencontainers/runc/libcontainer/cgroups" - "github.com/opencontainers/runc/libcontainer/configs" -) - -type NetPrioGroup struct { -} - -func (s *NetPrioGroup) Name() string { - return "net_prio" -} - -func (s *NetPrioGroup) Apply(d *cgroupData) error { - _, err := d.join("net_prio") - if err != nil && !cgroups.IsNotFound(err) { - return err - } - return nil -} - -func (s *NetPrioGroup) Set(path string, cgroup *configs.Cgroup) error { - for _, prioMap := range cgroup.Resources.NetPrioIfpriomap { - if err := writeFile(path, "net_prio.ifpriomap", prioMap.CgroupString()); err != nil { - return err - } - } - - return nil -} - -func (s *NetPrioGroup) Remove(d *cgroupData) error { - return removePath(d.path("net_prio")) -} - -func (s *NetPrioGroup) GetStats(path string, stats *cgroups.Stats) error { - return nil -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/cgroups/fs/perf_event.go b/vendor/github.com/opencontainers/runc/libcontainer/cgroups/fs/perf_event.go deleted file mode 100644 index 5693676d3..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/cgroups/fs/perf_event.go +++ /dev/null @@ -1,35 +0,0 @@ -// +build linux - -package fs - -import ( 
- "github.com/opencontainers/runc/libcontainer/cgroups" - "github.com/opencontainers/runc/libcontainer/configs" -) - -type PerfEventGroup struct { -} - -func (s *PerfEventGroup) Name() string { - return "perf_event" -} - -func (s *PerfEventGroup) Apply(d *cgroupData) error { - // we just want to join this group even though we don't set anything - if _, err := d.join("perf_event"); err != nil && !cgroups.IsNotFound(err) { - return err - } - return nil -} - -func (s *PerfEventGroup) Set(path string, cgroup *configs.Cgroup) error { - return nil -} - -func (s *PerfEventGroup) Remove(d *cgroupData) error { - return removePath(d.path("perf_event")) -} - -func (s *PerfEventGroup) GetStats(path string, stats *cgroups.Stats) error { - return nil -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/cgroups/fs/pids.go b/vendor/github.com/opencontainers/runc/libcontainer/cgroups/fs/pids.go deleted file mode 100644 index f1e372055..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/cgroups/fs/pids.go +++ /dev/null @@ -1,73 +0,0 @@ -// +build linux - -package fs - -import ( - "fmt" - "path/filepath" - "strconv" - - "github.com/opencontainers/runc/libcontainer/cgroups" - "github.com/opencontainers/runc/libcontainer/configs" -) - -type PidsGroup struct { -} - -func (s *PidsGroup) Name() string { - return "pids" -} - -func (s *PidsGroup) Apply(d *cgroupData) error { - _, err := d.join("pids") - if err != nil && !cgroups.IsNotFound(err) { - return err - } - return nil -} - -func (s *PidsGroup) Set(path string, cgroup *configs.Cgroup) error { - if cgroup.Resources.PidsLimit != 0 { - // "max" is the fallback value. - limit := "max" - - if cgroup.Resources.PidsLimit > 0 { - limit = strconv.FormatInt(cgroup.Resources.PidsLimit, 10) - } - - if err := writeFile(path, "pids.max", limit); err != nil { - return err - } - } - - return nil -} - -func (s *PidsGroup) Remove(d *cgroupData) error { - return removePath(d.path("pids")) -} - -func (s *PidsGroup) GetStats(path string, stats *cgroups.Stats) error { - current, err := getCgroupParamUint(path, "pids.current") - if err != nil { - return fmt.Errorf("failed to parse pids.current - %s", err) - } - - maxString, err := getCgroupParamString(path, "pids.max") - if err != nil { - return fmt.Errorf("failed to parse pids.max - %s", err) - } - - // Default if pids.max == "max" is 0 -- which represents "no limit". - var max uint64 - if maxString != "max" { - max, err = parseUint(maxString, 10, 64) - if err != nil { - return fmt.Errorf("failed to parse pids.max - unable to parse %q as a uint from Cgroup file %q", maxString, filepath.Join(path, "pids.max")) - } - } - - stats.PidsStats.Current = current - stats.PidsStats.Limit = max - return nil -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/cgroups/fs/utils.go b/vendor/github.com/opencontainers/runc/libcontainer/cgroups/fs/utils.go deleted file mode 100644 index 5ff0a1615..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/cgroups/fs/utils.go +++ /dev/null @@ -1,78 +0,0 @@ -// +build linux - -package fs - -import ( - "errors" - "fmt" - "io/ioutil" - "path/filepath" - "strconv" - "strings" -) - -var ( - ErrNotValidFormat = errors.New("line is not a valid key value format") -) - -// Saturates negative values at zero and returns a uint64. -// Due to kernel bugs, some of the memory cgroup stats can be negative. 
-func parseUint(s string, base, bitSize int) (uint64, error) { - value, err := strconv.ParseUint(s, base, bitSize) - if err != nil { - intValue, intErr := strconv.ParseInt(s, base, bitSize) - // 1. Handle negative values greater than MinInt64 (and) - // 2. Handle negative values lesser than MinInt64 - if intErr == nil && intValue < 0 { - return 0, nil - } else if intErr != nil && intErr.(*strconv.NumError).Err == strconv.ErrRange && intValue < 0 { - return 0, nil - } - - return value, err - } - - return value, nil -} - -// Parses a cgroup param and returns as name, value -// i.e. "io_service_bytes 1234" will return as io_service_bytes, 1234 -func getCgroupParamKeyValue(t string) (string, uint64, error) { - parts := strings.Fields(t) - switch len(parts) { - case 2: - value, err := parseUint(parts[1], 10, 64) - if err != nil { - return "", 0, fmt.Errorf("unable to convert param value (%q) to uint64: %v", parts[1], err) - } - - return parts[0], value, nil - default: - return "", 0, ErrNotValidFormat - } -} - -// Gets a single uint64 value from the specified cgroup file. -func getCgroupParamUint(cgroupPath, cgroupFile string) (uint64, error) { - fileName := filepath.Join(cgroupPath, cgroupFile) - contents, err := ioutil.ReadFile(fileName) - if err != nil { - return 0, err - } - - res, err := parseUint(strings.TrimSpace(string(contents)), 10, 64) - if err != nil { - return res, fmt.Errorf("unable to parse %q as a uint from Cgroup file %q", string(contents), fileName) - } - return res, nil -} - -// Gets a string value from the specified cgroup file -func getCgroupParamString(cgroupPath, cgroupFile string) (string, error) { - contents, err := ioutil.ReadFile(filepath.Join(cgroupPath, cgroupFile)) - if err != nil { - return "", err - } - - return strings.TrimSpace(string(contents)), nil -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/cgroups/stats.go b/vendor/github.com/opencontainers/runc/libcontainer/cgroups/stats.go deleted file mode 100644 index 8eeedc55b..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/cgroups/stats.go +++ /dev/null @@ -1,108 +0,0 @@ -// +build linux - -package cgroups - -type ThrottlingData struct { - // Number of periods with throttling active - Periods uint64 `json:"periods,omitempty"` - // Number of periods when the container hit its throttling limit. - ThrottledPeriods uint64 `json:"throttled_periods,omitempty"` - // Aggregate time the container was throttled for in nanoseconds. - ThrottledTime uint64 `json:"throttled_time,omitempty"` -} - -// CpuUsage denotes the usage of a CPU. -// All CPU stats are aggregate since container inception. -type CpuUsage struct { - // Total CPU time consumed. - // Units: nanoseconds. - TotalUsage uint64 `json:"total_usage,omitempty"` - // Total CPU time consumed per core. - // Units: nanoseconds. - PercpuUsage []uint64 `json:"percpu_usage,omitempty"` - // Time spent by tasks of the cgroup in kernel mode. - // Units: nanoseconds. - UsageInKernelmode uint64 `json:"usage_in_kernelmode"` - // Time spent by tasks of the cgroup in user mode. - // Units: nanoseconds. 
- UsageInUsermode uint64 `json:"usage_in_usermode"` -} - -type CpuStats struct { - CpuUsage CpuUsage `json:"cpu_usage,omitempty"` - ThrottlingData ThrottlingData `json:"throttling_data,omitempty"` -} - -type MemoryData struct { - Usage uint64 `json:"usage,omitempty"` - MaxUsage uint64 `json:"max_usage,omitempty"` - Failcnt uint64 `json:"failcnt"` - Limit uint64 `json:"limit"` -} - -type MemoryStats struct { - // memory used for cache - Cache uint64 `json:"cache,omitempty"` - // usage of memory - Usage MemoryData `json:"usage,omitempty"` - // usage of memory + swap - SwapUsage MemoryData `json:"swap_usage,omitempty"` - // usage of kernel memory - KernelUsage MemoryData `json:"kernel_usage,omitempty"` - // usage of kernel TCP memory - KernelTCPUsage MemoryData `json:"kernel_tcp_usage,omitempty"` - // if true, memory usage is accounted for throughout a hierarchy of cgroups. - UseHierarchy bool `json:"use_hierarchy"` - - Stats map[string]uint64 `json:"stats,omitempty"` -} - -type PidsStats struct { - // number of pids in the cgroup - Current uint64 `json:"current,omitempty"` - // active pids hard limit - Limit uint64 `json:"limit,omitempty"` -} - -type BlkioStatEntry struct { - Major uint64 `json:"major,omitempty"` - Minor uint64 `json:"minor,omitempty"` - Op string `json:"op,omitempty"` - Value uint64 `json:"value,omitempty"` -} - -type BlkioStats struct { - // number of bytes tranferred to and from the block device - IoServiceBytesRecursive []BlkioStatEntry `json:"io_service_bytes_recursive,omitempty"` - IoServicedRecursive []BlkioStatEntry `json:"io_serviced_recursive,omitempty"` - IoQueuedRecursive []BlkioStatEntry `json:"io_queue_recursive,omitempty"` - IoServiceTimeRecursive []BlkioStatEntry `json:"io_service_time_recursive,omitempty"` - IoWaitTimeRecursive []BlkioStatEntry `json:"io_wait_time_recursive,omitempty"` - IoMergedRecursive []BlkioStatEntry `json:"io_merged_recursive,omitempty"` - IoTimeRecursive []BlkioStatEntry `json:"io_time_recursive,omitempty"` - SectorsRecursive []BlkioStatEntry `json:"sectors_recursive,omitempty"` -} - -type HugetlbStats struct { - // current res_counter usage for hugetlb - Usage uint64 `json:"usage,omitempty"` - // maximum usage ever recorded. - MaxUsage uint64 `json:"max_usage,omitempty"` - // number of times hugetlb usage allocation failure. 
- Failcnt uint64 `json:"failcnt"` -} - -type Stats struct { - CpuStats CpuStats `json:"cpu_stats,omitempty"` - MemoryStats MemoryStats `json:"memory_stats,omitempty"` - PidsStats PidsStats `json:"pids_stats,omitempty"` - BlkioStats BlkioStats `json:"blkio_stats,omitempty"` - // the map is in the format "size of hugepage: stats of the hugepage" - HugetlbStats map[string]HugetlbStats `json:"hugetlb_stats,omitempty"` -} - -func NewStats() *Stats { - memoryStats := MemoryStats{Stats: make(map[string]uint64)} - hugetlbStats := make(map[string]HugetlbStats) - return &Stats{MemoryStats: memoryStats, HugetlbStats: hugetlbStats} -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/cgroups/systemd/apply_nosystemd.go b/vendor/github.com/opencontainers/runc/libcontainer/cgroups/systemd/apply_nosystemd.go deleted file mode 100644 index c171365be..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/cgroups/systemd/apply_nosystemd.go +++ /dev/null @@ -1,59 +0,0 @@ -// +build !linux static_build - -package systemd - -import ( - "fmt" - - "github.com/opencontainers/runc/libcontainer/cgroups" - "github.com/opencontainers/runc/libcontainer/configs" -) - -type Manager struct { - Cgroups *configs.Cgroup - Paths map[string]string -} - -func UseSystemd() bool { - return false -} - -func NewSystemdCgroupsManager() (func(config *configs.Cgroup, paths map[string]string) cgroups.Manager, error) { - return nil, fmt.Errorf("Systemd not supported") -} - -func (m *Manager) Apply(pid int) error { - return fmt.Errorf("Systemd not supported") -} - -func (m *Manager) GetPids() ([]int, error) { - return nil, fmt.Errorf("Systemd not supported") -} - -func (m *Manager) GetAllPids() ([]int, error) { - return nil, fmt.Errorf("Systemd not supported") -} - -func (m *Manager) Destroy() error { - return fmt.Errorf("Systemd not supported") -} - -func (m *Manager) GetPaths() map[string]string { - return nil -} - -func (m *Manager) GetStats() (*cgroups.Stats, error) { - return nil, fmt.Errorf("Systemd not supported") -} - -func (m *Manager) Set(container *configs.Config) error { - return fmt.Errorf("Systemd not supported") -} - -func (m *Manager) Freeze(state configs.FreezerState) error { - return fmt.Errorf("Systemd not supported") -} - -func Freeze(c *configs.Cgroup, state configs.FreezerState) error { - return fmt.Errorf("Systemd not supported") -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/cgroups/systemd/apply_systemd.go b/vendor/github.com/opencontainers/runc/libcontainer/cgroups/systemd/apply_systemd.go deleted file mode 100644 index 3bf723bf9..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/cgroups/systemd/apply_systemd.go +++ /dev/null @@ -1,574 +0,0 @@ -// +build linux,!static_build - -package systemd - -import ( - "errors" - "fmt" - "io/ioutil" - "math" - "os" - "path/filepath" - "strings" - "sync" - "time" - - systemdDbus "github.com/coreos/go-systemd/dbus" - systemdUtil "github.com/coreos/go-systemd/util" - "github.com/godbus/dbus" - "github.com/opencontainers/runc/libcontainer/cgroups" - "github.com/opencontainers/runc/libcontainer/cgroups/fs" - "github.com/opencontainers/runc/libcontainer/configs" - "github.com/sirupsen/logrus" -) - -type Manager struct { - mu sync.Mutex - Cgroups *configs.Cgroup - Paths map[string]string -} - -type subsystem interface { - // Name returns the name of the subsystem. - Name() string - // Returns the stats, as 'stats', corresponding to the cgroup under 'path'. 
- GetStats(path string, stats *cgroups.Stats) error - // Set the cgroup represented by cgroup. - Set(path string, cgroup *configs.Cgroup) error -} - -var errSubsystemDoesNotExist = errors.New("cgroup: subsystem does not exist") - -type subsystemSet []subsystem - -func (s subsystemSet) Get(name string) (subsystem, error) { - for _, ss := range s { - if ss.Name() == name { - return ss, nil - } - } - return nil, errSubsystemDoesNotExist -} - -var subsystems = subsystemSet{ - &fs.CpusetGroup{}, - &fs.DevicesGroup{}, - &fs.MemoryGroup{}, - &fs.CpuGroup{}, - &fs.CpuacctGroup{}, - &fs.PidsGroup{}, - &fs.BlkioGroup{}, - &fs.HugetlbGroup{}, - &fs.PerfEventGroup{}, - &fs.FreezerGroup{}, - &fs.NetPrioGroup{}, - &fs.NetClsGroup{}, - &fs.NameGroup{GroupName: "name=systemd"}, -} - -const ( - testScopeWait = 4 - testSliceWait = 4 -) - -var ( - connLock sync.Mutex - theConn *systemdDbus.Conn - hasStartTransientUnit bool - hasStartTransientSliceUnit bool - hasDelegateSlice bool -) - -func newProp(name string, units interface{}) systemdDbus.Property { - return systemdDbus.Property{ - Name: name, - Value: dbus.MakeVariant(units), - } -} - -func UseSystemd() bool { - if !systemdUtil.IsRunningSystemd() { - return false - } - - connLock.Lock() - defer connLock.Unlock() - - if theConn == nil { - var err error - theConn, err = systemdDbus.New() - if err != nil { - return false - } - - // Assume we have StartTransientUnit - hasStartTransientUnit = true - - // But if we get UnknownMethod error we don't - if _, err := theConn.StartTransientUnit("test.scope", "invalid", nil, nil); err != nil { - if dbusError, ok := err.(dbus.Error); ok { - if dbusError.Name == "org.freedesktop.DBus.Error.UnknownMethod" { - hasStartTransientUnit = false - return hasStartTransientUnit - } - } - } - - // Assume we have the ability to start a transient unit as a slice - // This was broken until systemd v229, but has been back-ported on RHEL environments >= 219 - // For details, see: https://bugzilla.redhat.com/show_bug.cgi?id=1370299 - hasStartTransientSliceUnit = true - - // To ensure simple clean-up, we create a slice off the root with no hierarchy - slice := fmt.Sprintf("libcontainer_%d_systemd_test_default.slice", os.Getpid()) - if _, err := theConn.StartTransientUnit(slice, "replace", nil, nil); err != nil { - if _, ok := err.(dbus.Error); ok { - hasStartTransientSliceUnit = false - } - } - - for i := 0; i <= testSliceWait; i++ { - if _, err := theConn.StopUnit(slice, "replace", nil); err != nil { - if dbusError, ok := err.(dbus.Error); ok { - if strings.Contains(dbusError.Name, "org.freedesktop.systemd1.NoSuchUnit") { - hasStartTransientSliceUnit = false - break - } - } - } else { - break - } - time.Sleep(time.Millisecond) - } - - // Not critical because of the stop unit logic above. - theConn.StopUnit(slice, "replace", nil) - - // Assume StartTransientUnit on a slice allows Delegate - hasDelegateSlice = true - dlSlice := newProp("Delegate", true) - if _, err := theConn.StartTransientUnit(slice, "replace", []systemdDbus.Property{dlSlice}, nil); err != nil { - if dbusError, ok := err.(dbus.Error); ok { - // Starting with systemd v237, Delegate is not even a property of slices anymore, - // so the D-Bus call fails with "InvalidArgs" error. - if strings.Contains(dbusError.Name, "org.freedesktop.DBus.Error.PropertyReadOnly") || strings.Contains(dbusError.Name, "org.freedesktop.DBus.Error.InvalidArgs") { - hasDelegateSlice = false - } - } - } - - // Not critical because of the stop unit logic above. 
- theConn.StopUnit(slice, "replace", nil) - } - return hasStartTransientUnit -} - -func NewSystemdCgroupsManager() (func(config *configs.Cgroup, paths map[string]string) cgroups.Manager, error) { - if !systemdUtil.IsRunningSystemd() { - return nil, fmt.Errorf("systemd not running on this host, can't use systemd as a cgroups.Manager") - } - return func(config *configs.Cgroup, paths map[string]string) cgroups.Manager { - return &Manager{ - Cgroups: config, - Paths: paths, - } - }, nil -} - -func (m *Manager) Apply(pid int) error { - var ( - c = m.Cgroups - unitName = getUnitName(c) - slice = "system.slice" - properties []systemdDbus.Property - ) - - if c.Paths != nil { - paths := make(map[string]string) - for name, path := range c.Paths { - _, err := getSubsystemPath(m.Cgroups, name) - if err != nil { - // Don't fail if a cgroup hierarchy was not found, just skip this subsystem - if cgroups.IsNotFound(err) { - continue - } - return err - } - paths[name] = path - } - m.Paths = paths - return cgroups.EnterPid(m.Paths, pid) - } - - if c.Parent != "" { - slice = c.Parent - } - - properties = append(properties, systemdDbus.PropDescription("libcontainer container "+c.Name)) - - // if we create a slice, the parent is defined via a Wants= - if strings.HasSuffix(unitName, ".slice") { - // This was broken until systemd v229, but has been back-ported on RHEL environments >= 219 - if !hasStartTransientSliceUnit { - return fmt.Errorf("systemd version does not support ability to start a slice as transient unit") - } - properties = append(properties, systemdDbus.PropWants(slice)) - } else { - // otherwise, we use Slice= - properties = append(properties, systemdDbus.PropSlice(slice)) - } - - // only add pid if its valid, -1 is used w/ general slice creation. - if pid != -1 { - properties = append(properties, newProp("PIDs", []uint32{uint32(pid)})) - } - - // Check if we can delegate. This is only supported on systemd versions 218 and above. - if strings.HasSuffix(unitName, ".slice") { - if hasDelegateSlice { - // systemd 237 and above no longer allows delegation on a slice - properties = append(properties, newProp("Delegate", true)) - } - } else { - // Assume scopes always support delegation. - properties = append(properties, newProp("Delegate", true)) - } - - // Always enable accounting, this gets us the same behaviour as the fs implementation, - // plus the kernel has some problems with joining the memory cgroup at a later time. - properties = append(properties, - newProp("MemoryAccounting", true), - newProp("CPUAccounting", true), - newProp("BlockIOAccounting", true)) - - // Assume DefaultDependencies= will always work (the check for it was previously broken.) - properties = append(properties, - newProp("DefaultDependencies", false)) - - if c.Resources.Memory != 0 { - properties = append(properties, - newProp("MemoryLimit", uint64(c.Resources.Memory))) - } - - if c.Resources.CpuShares != 0 { - properties = append(properties, - newProp("CPUShares", c.Resources.CpuShares)) - } - - // cpu.cfs_quota_us and cpu.cfs_period_us are controlled by systemd. 
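// For example, CpuQuota=10500 with CpuPeriod=100000 (10.5% of one CPU) maps
// to 10500*1000000/100000 = 105000us per second, which is then rounded up to
// 110000us so that systemd's whole-percent CPUQuota does not truncate it.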
- if c.Resources.CpuQuota != 0 && c.Resources.CpuPeriod != 0 { - // corresponds to USEC_INFINITY in systemd - // if USEC_INFINITY is provided, CPUQuota is left unbound by systemd - // always setting a property value ensures we can apply a quota and remove it later - cpuQuotaPerSecUSec := uint64(math.MaxUint64) - if c.Resources.CpuQuota > 0 { - // systemd converts CPUQuotaPerSecUSec (microseconds per CPU second) to CPUQuota - // (integer percentage of CPU) internally. This means that if a fractional percent of - // CPU is indicated by Resources.CpuQuota, we need to round up to the nearest - // 10ms (1% of a second) such that child cgroups can set the cpu.cfs_quota_us they expect. - cpuQuotaPerSecUSec = uint64(c.Resources.CpuQuota*1000000) / c.Resources.CpuPeriod - if cpuQuotaPerSecUSec%10000 != 0 { - cpuQuotaPerSecUSec = ((cpuQuotaPerSecUSec / 10000) + 1) * 10000 - } - } - properties = append(properties, - newProp("CPUQuotaPerSecUSec", cpuQuotaPerSecUSec)) - } - - if c.Resources.BlkioWeight != 0 { - properties = append(properties, - newProp("BlockIOWeight", uint64(c.Resources.BlkioWeight))) - } - - if c.Resources.PidsLimit > 0 { - properties = append(properties, - newProp("TasksAccounting", true), - newProp("TasksMax", uint64(c.Resources.PidsLimit))) - } - - // We have to set kernel memory here, as we can't change it once - // processes have been attached to the cgroup. - if c.Resources.KernelMemory != 0 { - if err := setKernelMemory(c); err != nil { - return err - } - } - - statusChan := make(chan string, 1) - if _, err := theConn.StartTransientUnit(unitName, "replace", properties, statusChan); err == nil { - select { - case <-statusChan: - case <-time.After(time.Second): - logrus.Warnf("Timed out while waiting for StartTransientUnit(%s) completion signal from dbus. 
Continuing...", unitName) - } - } else if !isUnitExists(err) { - return err - } - - if err := joinCgroups(c, pid); err != nil { - return err - } - - paths := make(map[string]string) - for _, s := range subsystems { - subsystemPath, err := getSubsystemPath(m.Cgroups, s.Name()) - if err != nil { - // Don't fail if a cgroup hierarchy was not found, just skip this subsystem - if cgroups.IsNotFound(err) { - continue - } - return err - } - paths[s.Name()] = subsystemPath - } - m.Paths = paths - return nil -} - -func (m *Manager) Destroy() error { - if m.Cgroups.Paths != nil { - return nil - } - m.mu.Lock() - defer m.mu.Unlock() - theConn.StopUnit(getUnitName(m.Cgroups), "replace", nil) - if err := cgroups.RemovePaths(m.Paths); err != nil { - return err - } - m.Paths = make(map[string]string) - return nil -} - -func (m *Manager) GetPaths() map[string]string { - m.mu.Lock() - paths := m.Paths - m.mu.Unlock() - return paths -} - -func join(c *configs.Cgroup, subsystem string, pid int) (string, error) { - path, err := getSubsystemPath(c, subsystem) - if err != nil { - return "", err - } - if err := os.MkdirAll(path, 0755); err != nil { - return "", err - } - if err := cgroups.WriteCgroupProc(path, pid); err != nil { - return "", err - } - return path, nil -} - -func joinCgroups(c *configs.Cgroup, pid int) error { - for _, sys := range subsystems { - name := sys.Name() - switch name { - case "name=systemd": - // let systemd handle this - case "cpuset": - path, err := getSubsystemPath(c, name) - if err != nil && !cgroups.IsNotFound(err) { - return err - } - s := &fs.CpusetGroup{} - if err := s.ApplyDir(path, c, pid); err != nil { - return err - } - default: - _, err := join(c, name, pid) - if err != nil { - // Even if it's `not found` error, we'll return err - // because devices cgroup is hard requirement for - // container security. - if name == "devices" { - return err - } - // For other subsystems, omit the `not found` error - // because they are optional. - if !cgroups.IsNotFound(err) { - return err - } - } - } - } - - return nil -} - -// systemd represents slice hierarchy using `-`, so we need to follow suit when -// generating the path of slice. Essentially, test-a-b.slice becomes -// /test.slice/test-a.slice/test-a-b.slice. -func ExpandSlice(slice string) (string, error) { - suffix := ".slice" - // Name has to end with ".slice", but can't be just ".slice". - if len(slice) < len(suffix) || !strings.HasSuffix(slice, suffix) { - return "", fmt.Errorf("invalid slice name: %s", slice) - } - - // Path-separators are not allowed. - if strings.Contains(slice, "/") { - return "", fmt.Errorf("invalid slice name: %s", slice) - } - - var path, prefix string - sliceName := strings.TrimSuffix(slice, suffix) - // if input was -.slice, we should just return root now - if sliceName == "-" { - return "/", nil - } - for _, component := range strings.Split(sliceName, "-") { - // test--a.slice isn't permitted, nor is -test.slice. - if component == "" { - return "", fmt.Errorf("invalid slice name: %s", slice) - } - - // Append the component to the path and to the prefix. 
- path += "/" + prefix + component + suffix - prefix += component + "-" - } - return path, nil -} - -func getSubsystemPath(c *configs.Cgroup, subsystem string) (string, error) { - mountpoint, err := cgroups.FindCgroupMountpoint(c.Path, subsystem) - if err != nil { - return "", err - } - - initPath, err := cgroups.GetInitCgroup(subsystem) - if err != nil { - return "", err - } - // if pid 1 is systemd 226 or later, it will be in init.scope, not the root - initPath = strings.TrimSuffix(filepath.Clean(initPath), "init.scope") - - slice := "system.slice" - if c.Parent != "" { - slice = c.Parent - } - - slice, err = ExpandSlice(slice) - if err != nil { - return "", err - } - - return filepath.Join(mountpoint, initPath, slice, getUnitName(c)), nil -} - -func (m *Manager) Freeze(state configs.FreezerState) error { - path, err := getSubsystemPath(m.Cgroups, "freezer") - if err != nil { - return err - } - prevState := m.Cgroups.Resources.Freezer - m.Cgroups.Resources.Freezer = state - freezer, err := subsystems.Get("freezer") - if err != nil { - return err - } - err = freezer.Set(path, m.Cgroups) - if err != nil { - m.Cgroups.Resources.Freezer = prevState - return err - } - return nil -} - -func (m *Manager) GetPids() ([]int, error) { - path, err := getSubsystemPath(m.Cgroups, "devices") - if err != nil { - return nil, err - } - return cgroups.GetPids(path) -} - -func (m *Manager) GetAllPids() ([]int, error) { - path, err := getSubsystemPath(m.Cgroups, "devices") - if err != nil { - return nil, err - } - return cgroups.GetAllPids(path) -} - -func (m *Manager) GetStats() (*cgroups.Stats, error) { - m.mu.Lock() - defer m.mu.Unlock() - stats := cgroups.NewStats() - for name, path := range m.Paths { - sys, err := subsystems.Get(name) - if err == errSubsystemDoesNotExist || !cgroups.PathExists(path) { - continue - } - if err := sys.GetStats(path, stats); err != nil { - return nil, err - } - } - - return stats, nil -} - -func (m *Manager) Set(container *configs.Config) error { - // If Paths are set, then we are just joining cgroups paths - // and there is no need to set any values. - if m.Cgroups.Paths != nil { - return nil - } - for _, sys := range subsystems { - // Get the subsystem path, but don't error out for not found cgroups. - path, err := getSubsystemPath(container.Cgroups, sys.Name()) - if err != nil && !cgroups.IsNotFound(err) { - return err - } - - if err := sys.Set(path, container.Cgroups); err != nil { - return err - } - } - - if m.Paths["cpu"] != "" { - if err := fs.CheckCpushares(m.Paths["cpu"], container.Cgroups.Resources.CpuShares); err != nil { - return err - } - } - return nil -} - -func getUnitName(c *configs.Cgroup) string { - // by default, we create a scope unless the user explicitly asks for a slice. - if !strings.HasSuffix(c.Name, ".slice") { - return fmt.Sprintf("%s-%s.scope", c.ScopePrefix, c.Name) - } - return c.Name -} - -func setKernelMemory(c *configs.Cgroup) error { - path, err := getSubsystemPath(c, "memory") - if err != nil && !cgroups.IsNotFound(err) { - return err - } - - if err := os.MkdirAll(path, 0755); err != nil { - return err - } - // do not try to enable the kernel memory if we already have - // tasks in the cgroup. - content, err := ioutil.ReadFile(filepath.Join(path, "tasks")) - if err != nil { - return err - } - if len(content) > 0 { - return nil - } - return fs.EnableKernelMemoryAccounting(path) -} - -// isUnitExists returns true if the error is that a systemd unit already exists. 
-func isUnitExists(err error) bool { - if err != nil { - if dbusError, ok := err.(dbus.Error); ok { - return strings.Contains(dbusError.Name, "org.freedesktop.systemd1.UnitExists") - } - } - return false -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/cgroups/utils.go b/vendor/github.com/opencontainers/runc/libcontainer/cgroups/utils.go deleted file mode 100644 index ec79ae767..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/cgroups/utils.go +++ /dev/null @@ -1,517 +0,0 @@ -// +build linux - -package cgroups - -import ( - "bufio" - "fmt" - "io" - "io/ioutil" - "os" - "path/filepath" - "strconv" - "strings" - "time" - - units "github.com/docker/go-units" - "golang.org/x/sys/unix" -) - -const ( - CgroupNamePrefix = "name=" - CgroupProcesses = "cgroup.procs" -) - -// HugePageSizeUnitList is a list of the units used by the linux kernel when -// naming the HugePage control files. -// https://www.kernel.org/doc/Documentation/cgroup-v1/hugetlb.txt -// TODO Since the kernel only use KB, MB and GB; TB and PB should be removed, -// depends on https://github.com/docker/go-units/commit/a09cd47f892041a4fac473133d181f5aea6fa393 -var HugePageSizeUnitList = []string{"B", "KB", "MB", "GB", "TB", "PB"} - -// https://www.kernel.org/doc/Documentation/cgroup-v1/cgroups.txt -func FindCgroupMountpoint(cgroupPath, subsystem string) (string, error) { - mnt, _, err := FindCgroupMountpointAndRoot(cgroupPath, subsystem) - return mnt, err -} - -func FindCgroupMountpointAndRoot(cgroupPath, subsystem string) (string, string, error) { - // We are not using mount.GetMounts() because it's super-inefficient, - // parsing it directly sped up x10 times because of not using Sscanf. - // It was one of two major performance drawbacks in container start. 
- if !isSubsystemAvailable(subsystem) { - return "", "", NewNotFoundError(subsystem) - } - - f, err := os.Open("/proc/self/mountinfo") - if err != nil { - return "", "", err - } - defer f.Close() - - return findCgroupMountpointAndRootFromReader(f, cgroupPath, subsystem) -} - -func findCgroupMountpointAndRootFromReader(reader io.Reader, cgroupPath, subsystem string) (string, string, error) { - scanner := bufio.NewScanner(reader) - for scanner.Scan() { - txt := scanner.Text() - fields := strings.Fields(txt) - if len(fields) < 5 { - continue - } - if strings.HasPrefix(fields[4], cgroupPath) { - for _, opt := range strings.Split(fields[len(fields)-1], ",") { - if opt == subsystem { - return fields[4], fields[3], nil - } - } - } - } - if err := scanner.Err(); err != nil { - return "", "", err - } - - return "", "", NewNotFoundError(subsystem) -} - -func isSubsystemAvailable(subsystem string) bool { - cgroups, err := ParseCgroupFile("/proc/self/cgroup") - if err != nil { - return false - } - _, avail := cgroups[subsystem] - return avail -} - -func GetClosestMountpointAncestor(dir, mountinfo string) string { - deepestMountPoint := "" - for _, mountInfoEntry := range strings.Split(mountinfo, "\n") { - mountInfoParts := strings.Fields(mountInfoEntry) - if len(mountInfoParts) < 5 { - continue - } - mountPoint := mountInfoParts[4] - if strings.HasPrefix(mountPoint, deepestMountPoint) && strings.HasPrefix(dir, mountPoint) { - deepestMountPoint = mountPoint - } - } - return deepestMountPoint -} - -func FindCgroupMountpointDir() (string, error) { - f, err := os.Open("/proc/self/mountinfo") - if err != nil { - return "", err - } - defer f.Close() - - scanner := bufio.NewScanner(f) - for scanner.Scan() { - text := scanner.Text() - fields := strings.Split(text, " ") - // Safe as mountinfo encodes mountpoints with spaces as \040. - index := strings.Index(text, " - ") - postSeparatorFields := strings.Fields(text[index+3:]) - numPostFields := len(postSeparatorFields) - - // This is an error as we can't detect if the mount is for "cgroup" - if numPostFields == 0 { - return "", fmt.Errorf("Found no fields post '-' in %q", text) - } - - if postSeparatorFields[0] == "cgroup" { - // Check that the mount is properly formatted. 
- if numPostFields < 3 { - return "", fmt.Errorf("Error found less than 3 fields post '-' in %q", text) - } - - return filepath.Dir(fields[4]), nil - } - } - if err := scanner.Err(); err != nil { - return "", err - } - - return "", NewNotFoundError("cgroup") -} - -type Mount struct { - Mountpoint string - Root string - Subsystems []string -} - -func (m Mount) GetOwnCgroup(cgroups map[string]string) (string, error) { - if len(m.Subsystems) == 0 { - return "", fmt.Errorf("no subsystem for mount") - } - - return getControllerPath(m.Subsystems[0], cgroups) -} - -func getCgroupMountsHelper(ss map[string]bool, mi io.Reader, all bool) ([]Mount, error) { - res := make([]Mount, 0, len(ss)) - scanner := bufio.NewScanner(mi) - numFound := 0 - for scanner.Scan() && numFound < len(ss) { - txt := scanner.Text() - sepIdx := strings.Index(txt, " - ") - if sepIdx == -1 { - return nil, fmt.Errorf("invalid mountinfo format") - } - if txt[sepIdx+3:sepIdx+10] == "cgroup2" || txt[sepIdx+3:sepIdx+9] != "cgroup" { - continue - } - fields := strings.Split(txt, " ") - m := Mount{ - Mountpoint: fields[4], - Root: fields[3], - } - for _, opt := range strings.Split(fields[len(fields)-1], ",") { - seen, known := ss[opt] - if !known || (!all && seen) { - continue - } - ss[opt] = true - if strings.HasPrefix(opt, CgroupNamePrefix) { - opt = opt[len(CgroupNamePrefix):] - } - m.Subsystems = append(m.Subsystems, opt) - numFound++ - } - if len(m.Subsystems) > 0 || all { - res = append(res, m) - } - } - if err := scanner.Err(); err != nil { - return nil, err - } - return res, nil -} - -// GetCgroupMounts returns the mounts for the cgroup subsystems. -// all indicates whether to return just the first instance or all the mounts. -func GetCgroupMounts(all bool) ([]Mount, error) { - f, err := os.Open("/proc/self/mountinfo") - if err != nil { - return nil, err - } - defer f.Close() - - allSubsystems, err := ParseCgroupFile("/proc/self/cgroup") - if err != nil { - return nil, err - } - - allMap := make(map[string]bool) - for s := range allSubsystems { - allMap[s] = false - } - return getCgroupMountsHelper(allMap, f, all) -} - -// GetAllSubsystems returns all the cgroup subsystems supported by the kernel -func GetAllSubsystems() ([]string, error) { - f, err := os.Open("/proc/cgroups") - if err != nil { - return nil, err - } - defer f.Close() - - subsystems := []string{} - - s := bufio.NewScanner(f) - for s.Scan() { - text := s.Text() - if text[0] != '#' { - parts := strings.Fields(text) - if len(parts) >= 4 && parts[3] != "0" { - subsystems = append(subsystems, parts[0]) - } - } - } - if err := s.Err(); err != nil { - return nil, err - } - return subsystems, nil -} - -// GetOwnCgroup returns the relative path to the cgroup docker is running in. 
-func GetOwnCgroup(subsystem string) (string, error) { - cgroups, err := ParseCgroupFile("/proc/self/cgroup") - if err != nil { - return "", err - } - - return getControllerPath(subsystem, cgroups) -} - -func GetOwnCgroupPath(subsystem string) (string, error) { - cgroup, err := GetOwnCgroup(subsystem) - if err != nil { - return "", err - } - - return getCgroupPathHelper(subsystem, cgroup) -} - -func GetInitCgroup(subsystem string) (string, error) { - cgroups, err := ParseCgroupFile("/proc/1/cgroup") - if err != nil { - return "", err - } - - return getControllerPath(subsystem, cgroups) -} - -func GetInitCgroupPath(subsystem string) (string, error) { - cgroup, err := GetInitCgroup(subsystem) - if err != nil { - return "", err - } - - return getCgroupPathHelper(subsystem, cgroup) -} - -func getCgroupPathHelper(subsystem, cgroup string) (string, error) { - mnt, root, err := FindCgroupMountpointAndRoot("", subsystem) - if err != nil { - return "", err - } - - // This is needed for nested containers, because in /proc/self/cgroup we - // see paths from host, which don't exist in container. - relCgroup, err := filepath.Rel(root, cgroup) - if err != nil { - return "", err - } - - return filepath.Join(mnt, relCgroup), nil -} - -func readProcsFile(dir string) ([]int, error) { - f, err := os.Open(filepath.Join(dir, CgroupProcesses)) - if err != nil { - return nil, err - } - defer f.Close() - - var ( - s = bufio.NewScanner(f) - out = []int{} - ) - - for s.Scan() { - if t := s.Text(); t != "" { - pid, err := strconv.Atoi(t) - if err != nil { - return nil, err - } - out = append(out, pid) - } - } - return out, nil -} - -// ParseCgroupFile parses the given cgroup file, typically from -// /proc//cgroup, into a map of subgroups to cgroup names. -func ParseCgroupFile(path string) (map[string]string, error) { - f, err := os.Open(path) - if err != nil { - return nil, err - } - defer f.Close() - - return parseCgroupFromReader(f) -} - -// helper function for ParseCgroupFile to make testing easier -func parseCgroupFromReader(r io.Reader) (map[string]string, error) { - s := bufio.NewScanner(r) - cgroups := make(map[string]string) - - for s.Scan() { - text := s.Text() - // from cgroups(7): - // /proc/[pid]/cgroup - // ... - // For each cgroup hierarchy ... there is one entry - // containing three colon-separated fields of the form: - // hierarchy-ID:subsystem-list:cgroup-path - parts := strings.SplitN(text, ":", 3) - if len(parts) < 3 { - return nil, fmt.Errorf("invalid cgroup entry: must contain at least two colons: %v", text) - } - - for _, subs := range strings.Split(parts[1], ",") { - cgroups[subs] = parts[2] - } - } - if err := s.Err(); err != nil { - return nil, err - } - - return cgroups, nil -} - -func getControllerPath(subsystem string, cgroups map[string]string) (string, error) { - - if p, ok := cgroups[subsystem]; ok { - return p, nil - } - - if p, ok := cgroups[CgroupNamePrefix+subsystem]; ok { - return p, nil - } - - return "", NewNotFoundError(subsystem) -} - -func PathExists(path string) bool { - if _, err := os.Stat(path); err != nil { - return false - } - return true -} - -func EnterPid(cgroupPaths map[string]string, pid int) error { - for _, path := range cgroupPaths { - if PathExists(path) { - if err := WriteCgroupProc(path, pid); err != nil { - return err - } - } - } - return nil -} - -// RemovePaths iterates over the provided paths removing them. -// We trying to remove all paths five times with increasing delay between tries. 
-// If after all there are not removed cgroups - appropriate error will be -// returned. -func RemovePaths(paths map[string]string) (err error) { - delay := 10 * time.Millisecond - for i := 0; i < 5; i++ { - if i != 0 { - time.Sleep(delay) - delay *= 2 - } - for s, p := range paths { - os.RemoveAll(p) - // TODO: here probably should be logging - _, err := os.Stat(p) - // We need this strange way of checking cgroups existence because - // RemoveAll almost always returns error, even on already removed - // cgroups - if os.IsNotExist(err) { - delete(paths, s) - } - } - if len(paths) == 0 { - return nil - } - } - return fmt.Errorf("Failed to remove paths: %v", paths) -} - -func GetHugePageSize() ([]string, error) { - files, err := ioutil.ReadDir("/sys/kernel/mm/hugepages") - if err != nil { - return []string{}, err - } - var fileNames []string - for _, st := range files { - fileNames = append(fileNames, st.Name()) - } - return getHugePageSizeFromFilenames(fileNames) -} - -func getHugePageSizeFromFilenames(fileNames []string) ([]string, error) { - var pageSizes []string - for _, fileName := range fileNames { - nameArray := strings.Split(fileName, "-") - pageSize, err := units.RAMInBytes(nameArray[1]) - if err != nil { - return []string{}, err - } - sizeString := units.CustomSize("%g%s", float64(pageSize), 1024.0, HugePageSizeUnitList) - pageSizes = append(pageSizes, sizeString) - } - - return pageSizes, nil -} - -// GetPids returns all pids, that were added to cgroup at path. -func GetPids(path string) ([]int, error) { - return readProcsFile(path) -} - -// GetAllPids returns all pids, that were added to cgroup at path and to all its -// subcgroups. -func GetAllPids(path string) ([]int, error) { - var pids []int - // collect pids from all sub-cgroups - err := filepath.Walk(path, func(p string, info os.FileInfo, iErr error) error { - dir, file := filepath.Split(p) - if file != CgroupProcesses { - return nil - } - if iErr != nil { - return iErr - } - cPids, err := readProcsFile(dir) - if err != nil { - return err - } - pids = append(pids, cPids...) - return nil - }) - return pids, err -} - -// WriteCgroupProc writes the specified pid into the cgroup's cgroup.procs file -func WriteCgroupProc(dir string, pid int) error { - // Normally dir should not be empty, one case is that cgroup subsystem - // is not mounted, we will get empty dir, and we want it fail here. - if dir == "" { - return fmt.Errorf("no such directory for %s", CgroupProcesses) - } - - // Dont attach any pid to the cgroup if -1 is specified as a pid - if pid == -1 { - return nil - } - - cgroupProcessesFile, err := os.OpenFile(filepath.Join(dir, CgroupProcesses), os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0700) - if err != nil { - return fmt.Errorf("failed to write %v to %v: %v", pid, CgroupProcesses, err) - } - defer cgroupProcessesFile.Close() - - for i := 0; i < 5; i++ { - _, err = cgroupProcessesFile.WriteString(strconv.Itoa(pid)) - if err == nil { - return nil - } - - // EINVAL might mean that the task being added to cgroup.procs is in state - // TASK_NEW. We should attempt to do so again. 
- if isEINVAL(err) { - time.Sleep(30 * time.Millisecond) - continue - } - - return fmt.Errorf("failed to write %v to %v: %v", pid, CgroupProcesses, err) - } - return err -} - -func isEINVAL(err error) bool { - switch err := err.(type) { - case *os.PathError: - return err.Err == unix.EINVAL - default: - return false - } -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/configs/blkio_device.go b/vendor/github.com/opencontainers/runc/libcontainer/configs/blkio_device.go deleted file mode 100644 index e0f3ca165..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/configs/blkio_device.go +++ /dev/null @@ -1,61 +0,0 @@ -package configs - -import "fmt" - -// blockIODevice holds major:minor format supported in blkio cgroup -type blockIODevice struct { - // Major is the device's major number - Major int64 `json:"major"` - // Minor is the device's minor number - Minor int64 `json:"minor"` -} - -// WeightDevice struct holds a `major:minor weight`|`major:minor leaf_weight` pair -type WeightDevice struct { - blockIODevice - // Weight is the bandwidth rate for the device, range is from 10 to 1000 - Weight uint16 `json:"weight"` - // LeafWeight is the bandwidth rate for the device while competing with the cgroup's child cgroups, range is from 10 to 1000, cfq scheduler only - LeafWeight uint16 `json:"leafWeight"` -} - -// NewWeightDevice returns a configured WeightDevice pointer -func NewWeightDevice(major, minor int64, weight, leafWeight uint16) *WeightDevice { - wd := &WeightDevice{} - wd.Major = major - wd.Minor = minor - wd.Weight = weight - wd.LeafWeight = leafWeight - return wd -} - -// WeightString formats the struct to be writable to the cgroup specific file -func (wd *WeightDevice) WeightString() string { - return fmt.Sprintf("%d:%d %d", wd.Major, wd.Minor, wd.Weight) -} - -// LeafWeightString formats the struct to be writable to the cgroup specific file -func (wd *WeightDevice) LeafWeightString() string { - return fmt.Sprintf("%d:%d %d", wd.Major, wd.Minor, wd.LeafWeight) -} - -// ThrottleDevice struct holds a `major:minor rate_per_second` pair -type ThrottleDevice struct { - blockIODevice - // Rate is the IO rate limit per cgroup per device - Rate uint64 `json:"rate"` -} - -// NewThrottleDevice returns a configured ThrottleDevice pointer -func NewThrottleDevice(major, minor int64, rate uint64) *ThrottleDevice { - td := &ThrottleDevice{} - td.Major = major - td.Minor = minor - td.Rate = rate - return td -} - -// String formats the struct to be writable to the cgroup specific file -func (td *ThrottleDevice) String() string { - return fmt.Sprintf("%d:%d %d", td.Major, td.Minor, td.Rate) -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/configs/cgroup_linux.go b/vendor/github.com/opencontainers/runc/libcontainer/configs/cgroup_linux.go deleted file mode 100644 index e15a662f5..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/configs/cgroup_linux.go +++ /dev/null @@ -1,122 +0,0 @@ -package configs - -type FreezerState string - -const ( - Undefined FreezerState = "" - Frozen FreezerState = "FROZEN" - Thawed FreezerState = "THAWED" -) - -type Cgroup struct { - // Deprecated, use Path instead - Name string `json:"name,omitempty"` - - // name of parent of cgroup or slice - // Deprecated, use Path instead - Parent string `json:"parent,omitempty"` - - // Path specifies the path to cgroups that are created and/or joined by the container. - // The path is assumed to be relative to the host system cgroup mountpoint. 
- Path string `json:"path"` - - // ScopePrefix describes prefix for the scope name - ScopePrefix string `json:"scope_prefix"` - - // Paths represent the absolute cgroups paths to join. - // This takes precedence over Path. - Paths map[string]string - - // Resources contains various cgroups settings to apply - *Resources -} - -type Resources struct { - // If this is true allow access to any kind of device within the container. If false, allow access only to devices explicitly listed in the allowed_devices list. - // Deprecated - AllowAllDevices *bool `json:"allow_all_devices,omitempty"` - // Deprecated - AllowedDevices []*Device `json:"allowed_devices,omitempty"` - // Deprecated - DeniedDevices []*Device `json:"denied_devices,omitempty"` - - Devices []*Device `json:"devices"` - - // Memory limit (in bytes) - Memory int64 `json:"memory"` - - // Memory reservation or soft_limit (in bytes) - MemoryReservation int64 `json:"memory_reservation"` - - // Total memory usage (memory + swap); set `-1` to enable unlimited swap - MemorySwap int64 `json:"memory_swap"` - - // Kernel memory limit (in bytes) - KernelMemory int64 `json:"kernel_memory"` - - // Kernel memory limit for TCP use (in bytes) - KernelMemoryTCP int64 `json:"kernel_memory_tcp"` - - // CPU shares (relative weight vs. other containers) - CpuShares uint64 `json:"cpu_shares"` - - // CPU hardcap limit (in usecs). Allowed cpu time in a given period. - CpuQuota int64 `json:"cpu_quota"` - - // CPU period to be used for hardcapping (in usecs). 0 to use system default. - CpuPeriod uint64 `json:"cpu_period"` - - // How many time CPU will use in realtime scheduling (in usecs). - CpuRtRuntime int64 `json:"cpu_rt_quota"` - - // CPU period to be used for realtime scheduling (in usecs). - CpuRtPeriod uint64 `json:"cpu_rt_period"` - - // CPU to use - CpusetCpus string `json:"cpuset_cpus"` - - // MEM to use - CpusetMems string `json:"cpuset_mems"` - - // Process limit; set <= `0' to disable limit. - PidsLimit int64 `json:"pids_limit"` - - // Specifies per cgroup weight, range is from 10 to 1000. - BlkioWeight uint16 `json:"blkio_weight"` - - // Specifies tasks' weight in the given cgroup while competing with the cgroup's child cgroups, range is from 10 to 1000, cfq scheduler only - BlkioLeafWeight uint16 `json:"blkio_leaf_weight"` - - // Weight per cgroup per device, can override BlkioWeight. - BlkioWeightDevice []*WeightDevice `json:"blkio_weight_device"` - - // IO read rate limit per cgroup per device, bytes per second. - BlkioThrottleReadBpsDevice []*ThrottleDevice `json:"blkio_throttle_read_bps_device"` - - // IO write rate limit per cgroup per device, bytes per second. - BlkioThrottleWriteBpsDevice []*ThrottleDevice `json:"blkio_throttle_write_bps_device"` - - // IO read rate limit per cgroup per device, IO per second. - BlkioThrottleReadIOPSDevice []*ThrottleDevice `json:"blkio_throttle_read_iops_device"` - - // IO write rate limit per cgroup per device, IO per second. 
- BlkioThrottleWriteIOPSDevice []*ThrottleDevice `json:"blkio_throttle_write_iops_device"` - - // set the freeze value for the process - Freezer FreezerState `json:"freezer"` - - // Hugetlb limit (in bytes) - HugetlbLimit []*HugepageLimit `json:"hugetlb_limit"` - - // Whether to disable OOM Killer - OomKillDisable bool `json:"oom_kill_disable"` - - // Tuning swappiness behaviour per cgroup - MemorySwappiness *uint64 `json:"memory_swappiness"` - - // Set priority of network traffic for container - NetPrioIfpriomap []*IfPrioMap `json:"net_prio_ifpriomap"` - - // Set class identifier for container's network packets - NetClsClassid uint32 `json:"net_cls_classid_u"` -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/configs/cgroup_windows.go b/vendor/github.com/opencontainers/runc/libcontainer/configs/cgroup_windows.go deleted file mode 100644 index d74847b0d..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/configs/cgroup_windows.go +++ /dev/null @@ -1,6 +0,0 @@ -package configs - -// TODO Windows: This can ultimately be entirely factored out on Windows as -// cgroups are a Unix-specific construct. -type Cgroup struct { -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/configs/config.go b/vendor/github.com/opencontainers/runc/libcontainer/configs/config.go deleted file mode 100644 index 7728522fe..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/configs/config.go +++ /dev/null @@ -1,353 +0,0 @@ -package configs - -import ( - "bytes" - "encoding/json" - "fmt" - "os/exec" - "time" - - "github.com/opencontainers/runtime-spec/specs-go" - - "github.com/sirupsen/logrus" -) - -type Rlimit struct { - Type int `json:"type"` - Hard uint64 `json:"hard"` - Soft uint64 `json:"soft"` -} - -// IDMap represents UID/GID Mappings for User Namespaces. -type IDMap struct { - ContainerID int `json:"container_id"` - HostID int `json:"host_id"` - Size int `json:"size"` -} - -// Seccomp represents syscall restrictions -// By default, only the native architecture of the kernel is allowed to be used -// for syscalls. Additional architectures can be added by specifying them in -// Architectures. -type Seccomp struct { - DefaultAction Action `json:"default_action"` - Architectures []string `json:"architectures"` - Syscalls []*Syscall `json:"syscalls"` -} - -// Action is taken upon rule match in Seccomp -type Action int - -const ( - Kill Action = iota + 1 - Errno - Trap - Allow - Trace -) - -// Operator is a comparison operator to be used when matching syscall arguments in Seccomp -type Operator int - -const ( - EqualTo Operator = iota + 1 - NotEqualTo - GreaterThan - GreaterThanOrEqualTo - LessThan - LessThanOrEqualTo - MaskEqualTo -) - -// Arg is a rule to match a specific syscall argument in Seccomp -type Arg struct { - Index uint `json:"index"` - Value uint64 `json:"value"` - ValueTwo uint64 `json:"value_two"` - Op Operator `json:"op"` -} - -// Syscall is a rule to match a syscall in Seccomp -type Syscall struct { - Name string `json:"name"` - Action Action `json:"action"` - Args []*Arg `json:"args"` -} - -// TODO Windows. Many of these fields should be factored out into those parts -// which are common across platforms, and those which are platform specific. - -// Config defines configuration options for executing a process inside a contained environment. 
-type Config struct { - // NoPivotRoot will use MS_MOVE and a chroot to jail the process into the container's rootfs - // This is a common option when the container is running in ramdisk - NoPivotRoot bool `json:"no_pivot_root"` - - // ParentDeathSignal specifies the signal that is sent to the container's process in the case - // that the parent process dies. - ParentDeathSignal int `json:"parent_death_signal"` - - // Path to a directory containing the container's root filesystem. - Rootfs string `json:"rootfs"` - - // Readonlyfs will remount the container's rootfs as readonly where only externally mounted - // bind mounts are writtable. - Readonlyfs bool `json:"readonlyfs"` - - // Specifies the mount propagation flags to be applied to /. - RootPropagation int `json:"rootPropagation"` - - // Mounts specify additional source and destination paths that will be mounted inside the container's - // rootfs and mount namespace if specified - Mounts []*Mount `json:"mounts"` - - // The device nodes that should be automatically created within the container upon container start. Note, make sure that the node is marked as allowed in the cgroup as well! - Devices []*Device `json:"devices"` - - MountLabel string `json:"mount_label"` - - // Hostname optionally sets the container's hostname if provided - Hostname string `json:"hostname"` - - // Namespaces specifies the container's namespaces that it should setup when cloning the init process - // If a namespace is not provided that namespace is shared from the container's parent process - Namespaces Namespaces `json:"namespaces"` - - // Capabilities specify the capabilities to keep when executing the process inside the container - // All capabilities not specified will be dropped from the processes capability mask - Capabilities *Capabilities `json:"capabilities"` - - // Networks specifies the container's network setup to be created - Networks []*Network `json:"networks"` - - // Routes can be specified to create entries in the route table as the container is started - Routes []*Route `json:"routes"` - - // Cgroups specifies specific cgroup settings for the various subsystems that the container is - // placed into to limit the resources the container has available - Cgroups *Cgroup `json:"cgroups"` - - // AppArmorProfile specifies the profile to apply to the process running in the container and is - // change at the time the process is execed - AppArmorProfile string `json:"apparmor_profile,omitempty"` - - // ProcessLabel specifies the label to apply to the process running in the container. It is - // commonly used by selinux - ProcessLabel string `json:"process_label,omitempty"` - - // Rlimits specifies the resource limits, such as max open files, to set in the container - // If Rlimits are not set, the container will inherit rlimits from the parent process - Rlimits []Rlimit `json:"rlimits,omitempty"` - - // OomScoreAdj specifies the adjustment to be made by the kernel when calculating oom scores - // for a process. Valid values are between the range [-1000, '1000'], where processes with - // higher scores are preferred for being killed. If it is unset then we don't touch the current - // value. 
- // More information about kernel oom score calculation here: https://lwn.net/Articles/317814/ - OomScoreAdj *int `json:"oom_score_adj,omitempty"` - - // UidMappings is an array of User ID mappings for User Namespaces - UidMappings []IDMap `json:"uid_mappings"` - - // GidMappings is an array of Group ID mappings for User Namespaces - GidMappings []IDMap `json:"gid_mappings"` - - // MaskPaths specifies paths within the container's rootfs to mask over with a bind - // mount pointing to /dev/null as to prevent reads of the file. - MaskPaths []string `json:"mask_paths"` - - // ReadonlyPaths specifies paths within the container's rootfs to remount as read-only - // so that these files prevent any writes. - ReadonlyPaths []string `json:"readonly_paths"` - - // Sysctl is a map of properties and their values. It is the equivalent of using - // sysctl -w my.property.name value in Linux. - Sysctl map[string]string `json:"sysctl"` - - // Seccomp allows actions to be taken whenever a syscall is made within the container. - // A number of rules are given, each having an action to be taken if a syscall matches it. - // A default action to be taken if no rules match is also given. - Seccomp *Seccomp `json:"seccomp"` - - // NoNewPrivileges controls whether processes in the container can gain additional privileges. - NoNewPrivileges bool `json:"no_new_privileges,omitempty"` - - // Hooks are a collection of actions to perform at various container lifecycle events. - // CommandHooks are serialized to JSON, but other hooks are not. - Hooks *Hooks - - // Version is the version of opencontainer specification that is supported. - Version string `json:"version"` - - // Labels are user defined metadata that is stored in the config and populated on the state - Labels []string `json:"labels"` - - // NoNewKeyring will not allocated a new session keyring for the container. It will use the - // callers keyring in this case. - NoNewKeyring bool `json:"no_new_keyring"` - - // IntelRdt specifies settings for Intel RDT group that the container is placed into - // to limit the resources (e.g., L3 cache, memory bandwidth) the container has available - IntelRdt *IntelRdt `json:"intel_rdt,omitempty"` - - // RootlessEUID is set when the runc was launched with non-zero EUID. - // Note that RootlessEUID is set to false when launched with EUID=0 in userns. - // When RootlessEUID is set, runc creates a new userns for the container. - // (config.json needs to contain userns settings) - RootlessEUID bool `json:"rootless_euid,omitempty"` - - // RootlessCgroups is set when unlikely to have the full access to cgroups. - // When RootlessCgroups is set, cgroups errors are ignored. - RootlessCgroups bool `json:"rootless_cgroups,omitempty"` -} - -type Hooks struct { - // Prestart commands are executed after the container namespaces are created, - // but before the user supplied command is executed from init. - Prestart []Hook - - // Poststart commands are executed after the container init process starts. - Poststart []Hook - - // Poststop commands are executed after the container init process exits. - Poststop []Hook -} - -type Capabilities struct { - // Bounding is the set of capabilities checked by the kernel. - Bounding []string - // Effective is the set of capabilities checked by the kernel. - Effective []string - // Inheritable is the capabilities preserved across execve. - Inheritable []string - // Permitted is the limiting superset for effective capabilities. 
- Permitted []string - // Ambient is the ambient set of capabilities that are kept. - Ambient []string -} - -func (hooks *Hooks) UnmarshalJSON(b []byte) error { - var state struct { - Prestart []CommandHook - Poststart []CommandHook - Poststop []CommandHook - } - - if err := json.Unmarshal(b, &state); err != nil { - return err - } - - deserialize := func(shooks []CommandHook) (hooks []Hook) { - for _, shook := range shooks { - hooks = append(hooks, shook) - } - - return hooks - } - - hooks.Prestart = deserialize(state.Prestart) - hooks.Poststart = deserialize(state.Poststart) - hooks.Poststop = deserialize(state.Poststop) - return nil -} - -func (hooks Hooks) MarshalJSON() ([]byte, error) { - serialize := func(hooks []Hook) (serializableHooks []CommandHook) { - for _, hook := range hooks { - switch chook := hook.(type) { - case CommandHook: - serializableHooks = append(serializableHooks, chook) - default: - logrus.Warnf("cannot serialize hook of type %T, skipping", hook) - } - } - - return serializableHooks - } - - return json.Marshal(map[string]interface{}{ - "prestart": serialize(hooks.Prestart), - "poststart": serialize(hooks.Poststart), - "poststop": serialize(hooks.Poststop), - }) -} - -type Hook interface { - // Run executes the hook with the provided state. - Run(*specs.State) error -} - -// NewFunctionHook will call the provided function when the hook is run. -func NewFunctionHook(f func(*specs.State) error) FuncHook { - return FuncHook{ - run: f, - } -} - -type FuncHook struct { - run func(*specs.State) error -} - -func (f FuncHook) Run(s *specs.State) error { - return f.run(s) -} - -type Command struct { - Path string `json:"path"` - Args []string `json:"args"` - Env []string `json:"env"` - Dir string `json:"dir"` - Timeout *time.Duration `json:"timeout"` -} - -// NewCommandHook will execute the provided command when the hook is run. -func NewCommandHook(cmd Command) CommandHook { - return CommandHook{ - Command: cmd, - } -} - -type CommandHook struct { - Command -} - -func (c Command) Run(s *specs.State) error { - b, err := json.Marshal(s) - if err != nil { - return err - } - var stdout, stderr bytes.Buffer - cmd := exec.Cmd{ - Path: c.Path, - Args: c.Args, - Env: c.Env, - Stdin: bytes.NewReader(b), - Stdout: &stdout, - Stderr: &stderr, - } - if err := cmd.Start(); err != nil { - return err - } - errC := make(chan error, 1) - go func() { - err := cmd.Wait() - if err != nil { - err = fmt.Errorf("error running hook: %v, stdout: %s, stderr: %s", err, stdout.String(), stderr.String()) - } - errC <- err - }() - var timerCh <-chan time.Time - if c.Timeout != nil { - timer := time.NewTimer(*c.Timeout) - defer timer.Stop() - timerCh = timer.C - } - select { - case err := <-errC: - return err - case <-timerCh: - cmd.Process.Kill() - cmd.Wait() - return fmt.Errorf("hook ran past specified timeout of %.1fs", c.Timeout.Seconds()) - } -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/configs/config_linux.go b/vendor/github.com/opencontainers/runc/libcontainer/configs/config_linux.go deleted file mode 100644 index 07da10804..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/configs/config_linux.go +++ /dev/null @@ -1,61 +0,0 @@ -package configs - -import "fmt" - -// HostUID gets the translated uid for the process on host which could be -// different when user namespaces are enabled. 
-func (c Config) HostUID(containerId int) (int, error) { - if c.Namespaces.Contains(NEWUSER) { - if c.UidMappings == nil { - return -1, fmt.Errorf("User namespaces enabled, but no uid mappings found.") - } - id, found := c.hostIDFromMapping(containerId, c.UidMappings) - if !found { - return -1, fmt.Errorf("User namespaces enabled, but no user mapping found.") - } - return id, nil - } - // Return unchanged id. - return containerId, nil -} - -// HostRootUID gets the root uid for the process on host which could be non-zero -// when user namespaces are enabled. -func (c Config) HostRootUID() (int, error) { - return c.HostUID(0) -} - -// HostGID gets the translated gid for the process on host which could be -// different when user namespaces are enabled. -func (c Config) HostGID(containerId int) (int, error) { - if c.Namespaces.Contains(NEWUSER) { - if c.GidMappings == nil { - return -1, fmt.Errorf("User namespaces enabled, but no gid mappings found.") - } - id, found := c.hostIDFromMapping(containerId, c.GidMappings) - if !found { - return -1, fmt.Errorf("User namespaces enabled, but no group mapping found.") - } - return id, nil - } - // Return unchanged id. - return containerId, nil -} - -// HostRootGID gets the root gid for the process on host which could be non-zero -// when user namespaces are enabled. -func (c Config) HostRootGID() (int, error) { - return c.HostGID(0) -} - -// Utility function that gets a host ID for a container ID from user namespace map -// if that ID is present in the map. -func (c Config) hostIDFromMapping(containerID int, uMap []IDMap) (int, bool) { - for _, m := range uMap { - if (containerID >= m.ContainerID) && (containerID <= (m.ContainerID + m.Size - 1)) { - hostID := m.HostID + (containerID - m.ContainerID) - return hostID, true - } - } - return -1, false -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/configs/device.go b/vendor/github.com/opencontainers/runc/libcontainer/configs/device.go deleted file mode 100644 index 8701bb212..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/configs/device.go +++ /dev/null @@ -1,57 +0,0 @@ -package configs - -import ( - "fmt" - "os" -) - -const ( - Wildcard = -1 -) - -// TODO Windows: This can be factored out in the future - -type Device struct { - // Device type, block, char, etc. - Type rune `json:"type"` - - // Path to the device. - Path string `json:"path"` - - // Major is the device's major number. - Major int64 `json:"major"` - - // Minor is the device's minor number. - Minor int64 `json:"minor"` - - // Cgroup permissions format, rwm. - Permissions string `json:"permissions"` - - // FileMode permission bits for the device. - FileMode os.FileMode `json:"file_mode"` - - // Uid of the device. - Uid uint32 `json:"uid"` - - // Gid of the device. - Gid uint32 `json:"gid"` - - // Write the file to the allowed list - Allow bool `json:"allow"` -} - -func (d *Device) CgroupString() string { - return fmt.Sprintf("%c %s:%s %s", d.Type, deviceNumberString(d.Major), deviceNumberString(d.Minor), d.Permissions) -} - -func (d *Device) Mkdev() int { - return int((d.Major << 8) | (d.Minor & 0xff) | ((d.Minor & 0xfff00) << 12)) -} - -// deviceNumberString converts the device number to a string return result. 
-func deviceNumberString(number int64) string { - if number == Wildcard { - return "*" - } - return fmt.Sprint(number) -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/configs/device_defaults.go b/vendor/github.com/opencontainers/runc/libcontainer/configs/device_defaults.go deleted file mode 100644 index e4f423c52..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/configs/device_defaults.go +++ /dev/null @@ -1,111 +0,0 @@ -// +build linux - -package configs - -var ( - // DefaultSimpleDevices are devices that are to be both allowed and created. - DefaultSimpleDevices = []*Device{ - // /dev/null and zero - { - Path: "/dev/null", - Type: 'c', - Major: 1, - Minor: 3, - Permissions: "rwm", - FileMode: 0666, - }, - { - Path: "/dev/zero", - Type: 'c', - Major: 1, - Minor: 5, - Permissions: "rwm", - FileMode: 0666, - }, - - { - Path: "/dev/full", - Type: 'c', - Major: 1, - Minor: 7, - Permissions: "rwm", - FileMode: 0666, - }, - - // consoles and ttys - { - Path: "/dev/tty", - Type: 'c', - Major: 5, - Minor: 0, - Permissions: "rwm", - FileMode: 0666, - }, - - // /dev/urandom,/dev/random - { - Path: "/dev/urandom", - Type: 'c', - Major: 1, - Minor: 9, - Permissions: "rwm", - FileMode: 0666, - }, - { - Path: "/dev/random", - Type: 'c', - Major: 1, - Minor: 8, - Permissions: "rwm", - FileMode: 0666, - }, - } - DefaultAllowedDevices = append([]*Device{ - // allow mknod for any device - { - Type: 'c', - Major: Wildcard, - Minor: Wildcard, - Permissions: "m", - }, - { - Type: 'b', - Major: Wildcard, - Minor: Wildcard, - Permissions: "m", - }, - - { - Path: "/dev/console", - Type: 'c', - Major: 5, - Minor: 1, - Permissions: "rwm", - }, - // /dev/pts/ - pts namespaces are "coming soon" - { - Path: "", - Type: 'c', - Major: 136, - Minor: Wildcard, - Permissions: "rwm", - }, - { - Path: "", - Type: 'c', - Major: 5, - Minor: 2, - Permissions: "rwm", - }, - - // tuntap - { - Path: "", - Type: 'c', - Major: 10, - Minor: 200, - Permissions: "rwm", - }, - }, DefaultSimpleDevices...) - DefaultAutoCreatedDevices = append([]*Device{}, DefaultSimpleDevices...) -) diff --git a/vendor/github.com/opencontainers/runc/libcontainer/configs/hugepage_limit.go b/vendor/github.com/opencontainers/runc/libcontainer/configs/hugepage_limit.go deleted file mode 100644 index d30216380..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/configs/hugepage_limit.go +++ /dev/null @@ -1,9 +0,0 @@ -package configs - -type HugepageLimit struct { - // which type of hugepage to limit. - Pagesize string `json:"page_size"` - - // usage limit for hugepage. - Limit uint64 `json:"limit"` -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/configs/intelrdt.go b/vendor/github.com/opencontainers/runc/libcontainer/configs/intelrdt.go deleted file mode 100644 index 57e9f037d..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/configs/intelrdt.go +++ /dev/null @@ -1,13 +0,0 @@ -package configs - -type IntelRdt struct { - // The schema for L3 cache id and capacity bitmask (CBM) - // Format: "L3:=;=;..." - L3CacheSchema string `json:"l3_cache_schema,omitempty"` - - // The schema of memory bandwidth per L3 cache id - // Format: "MB:=bandwidth0;=bandwidth1;..." - // The unit of memory bandwidth is specified in "percentages" by - // default, and in "MBps" if MBA Software Controller is enabled. 
- MemBwSchema string `json:"memBwSchema,omitempty"` -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/configs/interface_priority_map.go b/vendor/github.com/opencontainers/runc/libcontainer/configs/interface_priority_map.go deleted file mode 100644 index 9a0395eaf..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/configs/interface_priority_map.go +++ /dev/null @@ -1,14 +0,0 @@ -package configs - -import ( - "fmt" -) - -type IfPrioMap struct { - Interface string `json:"interface"` - Priority int64 `json:"priority"` -} - -func (i *IfPrioMap) CgroupString() string { - return fmt.Sprintf("%s %d", i.Interface, i.Priority) -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/configs/mount.go b/vendor/github.com/opencontainers/runc/libcontainer/configs/mount.go deleted file mode 100644 index 670757ddb..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/configs/mount.go +++ /dev/null @@ -1,39 +0,0 @@ -package configs - -const ( - // EXT_COPYUP is a directive to copy up the contents of a directory when - // a tmpfs is mounted over it. - EXT_COPYUP = 1 << iota -) - -type Mount struct { - // Source path for the mount. - Source string `json:"source"` - - // Destination path for the mount inside the container. - Destination string `json:"destination"` - - // Device the mount is for. - Device string `json:"device"` - - // Mount flags. - Flags int `json:"flags"` - - // Propagation Flags - PropagationFlags []int `json:"propagation_flags"` - - // Mount data applied to the mount. - Data string `json:"data"` - - // Relabel source if set, "z" indicates shared, "Z" indicates unshared. - Relabel string `json:"relabel"` - - // Extensions are additional flags that are specific to runc. - Extensions int `json:"extensions"` - - // Optional Command to be run before Source is mounted. - PremountCmds []Command `json:"premount_cmds"` - - // Optional Command to be run after Source is mounted. 
- PostmountCmds []Command `json:"postmount_cmds"` -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/configs/namespaces.go b/vendor/github.com/opencontainers/runc/libcontainer/configs/namespaces.go deleted file mode 100644 index a3329a31a..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/configs/namespaces.go +++ /dev/null @@ -1,5 +0,0 @@ -package configs - -type NamespaceType string - -type Namespaces []Namespace diff --git a/vendor/github.com/opencontainers/runc/libcontainer/configs/namespaces_linux.go b/vendor/github.com/opencontainers/runc/libcontainer/configs/namespaces_linux.go deleted file mode 100644 index 1bbaef9bd..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/configs/namespaces_linux.go +++ /dev/null @@ -1,126 +0,0 @@ -package configs - -import ( - "fmt" - "os" - "sync" -) - -const ( - NEWNET NamespaceType = "NEWNET" - NEWPID NamespaceType = "NEWPID" - NEWNS NamespaceType = "NEWNS" - NEWUTS NamespaceType = "NEWUTS" - NEWIPC NamespaceType = "NEWIPC" - NEWUSER NamespaceType = "NEWUSER" - NEWCGROUP NamespaceType = "NEWCGROUP" -) - -var ( - nsLock sync.Mutex - supportedNamespaces = make(map[NamespaceType]bool) -) - -// NsName converts the namespace type to its filename -func NsName(ns NamespaceType) string { - switch ns { - case NEWNET: - return "net" - case NEWNS: - return "mnt" - case NEWPID: - return "pid" - case NEWIPC: - return "ipc" - case NEWUSER: - return "user" - case NEWUTS: - return "uts" - case NEWCGROUP: - return "cgroup" - } - return "" -} - -// IsNamespaceSupported returns whether a namespace is available or -// not -func IsNamespaceSupported(ns NamespaceType) bool { - nsLock.Lock() - defer nsLock.Unlock() - supported, ok := supportedNamespaces[ns] - if ok { - return supported - } - nsFile := NsName(ns) - // if the namespace type is unknown, just return false - if nsFile == "" { - return false - } - _, err := os.Stat(fmt.Sprintf("/proc/self/ns/%s", nsFile)) - // a namespace is supported if it exists and we have permissions to read it - supported = err == nil - supportedNamespaces[ns] = supported - return supported -} - -func NamespaceTypes() []NamespaceType { - return []NamespaceType{ - NEWUSER, // Keep user NS always first, don't move it. - NEWIPC, - NEWUTS, - NEWNET, - NEWPID, - NEWNS, - NEWCGROUP, - } -} - -// Namespace defines configuration for each namespace. It specifies an -// alternate path that is able to be joined via setns. -type Namespace struct { - Type NamespaceType `json:"type"` - Path string `json:"path"` -} - -func (n *Namespace) GetPath(pid int) string { - return fmt.Sprintf("/proc/%d/ns/%s", pid, NsName(n.Type)) -} - -func (n *Namespaces) Remove(t NamespaceType) bool { - i := n.index(t) - if i == -1 { - return false - } - *n = append((*n)[:i], (*n)[i+1:]...) 
- return true -} - -func (n *Namespaces) Add(t NamespaceType, path string) { - i := n.index(t) - if i == -1 { - *n = append(*n, Namespace{Type: t, Path: path}) - return - } - (*n)[i].Path = path -} - -func (n *Namespaces) index(t NamespaceType) int { - for i, ns := range *n { - if ns.Type == t { - return i - } - } - return -1 -} - -func (n *Namespaces) Contains(t NamespaceType) bool { - return n.index(t) != -1 -} - -func (n *Namespaces) PathOf(t NamespaceType) string { - i := n.index(t) - if i == -1 { - return "" - } - return (*n)[i].Path -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/configs/namespaces_syscall.go b/vendor/github.com/opencontainers/runc/libcontainer/configs/namespaces_syscall.go deleted file mode 100644 index 2dc7adfc9..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/configs/namespaces_syscall.go +++ /dev/null @@ -1,32 +0,0 @@ -// +build linux - -package configs - -import "golang.org/x/sys/unix" - -func (n *Namespace) Syscall() int { - return namespaceInfo[n.Type] -} - -var namespaceInfo = map[NamespaceType]int{ - NEWNET: unix.CLONE_NEWNET, - NEWNS: unix.CLONE_NEWNS, - NEWUSER: unix.CLONE_NEWUSER, - NEWIPC: unix.CLONE_NEWIPC, - NEWUTS: unix.CLONE_NEWUTS, - NEWPID: unix.CLONE_NEWPID, - NEWCGROUP: unix.CLONE_NEWCGROUP, -} - -// CloneFlags parses the container's Namespaces options to set the correct -// flags on clone, unshare. This function returns flags only for new namespaces. -func (n *Namespaces) CloneFlags() uintptr { - var flag int - for _, v := range *n { - if v.Path != "" { - continue - } - flag |= namespaceInfo[v.Type] - } - return uintptr(flag) -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/configs/namespaces_syscall_unsupported.go b/vendor/github.com/opencontainers/runc/libcontainer/configs/namespaces_syscall_unsupported.go deleted file mode 100644 index 5d9a5c81f..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/configs/namespaces_syscall_unsupported.go +++ /dev/null @@ -1,13 +0,0 @@ -// +build !linux,!windows - -package configs - -func (n *Namespace) Syscall() int { - panic("No namespace syscall support") -} - -// CloneFlags parses the container's Namespaces options to set the correct -// flags on clone, unshare. This function returns flags only for new namespaces. -func (n *Namespaces) CloneFlags() uintptr { - panic("No namespace syscall support") -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/configs/namespaces_unsupported.go b/vendor/github.com/opencontainers/runc/libcontainer/configs/namespaces_unsupported.go deleted file mode 100644 index 19bf713de..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/configs/namespaces_unsupported.go +++ /dev/null @@ -1,8 +0,0 @@ -// +build !linux - -package configs - -// Namespace defines configuration for each namespace. It specifies an -// alternate path that is able to be joined via setns. 
-type Namespace struct { -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/configs/network.go b/vendor/github.com/opencontainers/runc/libcontainer/configs/network.go deleted file mode 100644 index ccdb228e1..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/configs/network.go +++ /dev/null @@ -1,72 +0,0 @@ -package configs - -// Network defines configuration for a container's networking stack -// -// The network configuration can be omitted from a container causing the -// container to be setup with the host's networking stack -type Network struct { - // Type sets the networks type, commonly veth and loopback - Type string `json:"type"` - - // Name of the network interface - Name string `json:"name"` - - // The bridge to use. - Bridge string `json:"bridge"` - - // MacAddress contains the MAC address to set on the network interface - MacAddress string `json:"mac_address"` - - // Address contains the IPv4 and mask to set on the network interface - Address string `json:"address"` - - // Gateway sets the gateway address that is used as the default for the interface - Gateway string `json:"gateway"` - - // IPv6Address contains the IPv6 and mask to set on the network interface - IPv6Address string `json:"ipv6_address"` - - // IPv6Gateway sets the ipv6 gateway address that is used as the default for the interface - IPv6Gateway string `json:"ipv6_gateway"` - - // Mtu sets the mtu value for the interface and will be mirrored on both the host and - // container's interfaces if a pair is created, specifically in the case of type veth - // Note: This does not apply to loopback interfaces. - Mtu int `json:"mtu"` - - // TxQueueLen sets the tx_queuelen value for the interface and will be mirrored on both the host and - // container's interfaces if a pair is created, specifically in the case of type veth - // Note: This does not apply to loopback interfaces. - TxQueueLen int `json:"txqueuelen"` - - // HostInterfaceName is a unique name of a veth pair that resides on in the host interface of the - // container. - HostInterfaceName string `json:"host_interface_name"` - - // HairpinMode specifies if hairpin NAT should be enabled on the virtual interface - // bridge port in the case of type veth - // Note: This is unsupported on some systems. - // Note: This does not apply to loopback interfaces. - HairpinMode bool `json:"hairpin_mode"` -} - -// Routes can be specified to create entries in the route table as the container is started -// -// All of destination, source, and gateway should be either IPv4 or IPv6. -// One of the three options must be present, and omitted entries will use their -// IP family default for the route table. For IPv4 for example, setting the -// gateway to 1.2.3.4 and the interface to eth0 will set up a standard -// destination of 0.0.0.0(or *) when viewed in the route table. -type Route struct { - // Sets the destination and mask, should be a CIDR. Accepts IPv4 and IPv6 - Destination string `json:"destination"` - - // Sets the source and mask, should be a CIDR. Accepts IPv4 and IPv6 - Source string `json:"source"` - - // Sets the gateway. 
Accepts IPv4 and IPv6 - Gateway string `json:"gateway"` - - // The device to set this route up for, for example: eth0 - InterfaceName string `json:"interface_name"` -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/configs/validate/rootless.go b/vendor/github.com/opencontainers/runc/libcontainer/configs/validate/rootless.go deleted file mode 100644 index 393d9e81e..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/configs/validate/rootless.go +++ /dev/null @@ -1,89 +0,0 @@ -package validate - -import ( - "fmt" - "strings" - - "github.com/opencontainers/runc/libcontainer/configs" -) - -// rootlessEUID makes sure that the config can be applied when runc -// is being executed as a non-root user (euid != 0) in the current user namespace. -func (v *ConfigValidator) rootlessEUID(config *configs.Config) error { - if err := rootlessEUIDMappings(config); err != nil { - return err - } - if err := rootlessEUIDMount(config); err != nil { - return err - } - - // XXX: We currently can't verify the user config at all, because - // configs.Config doesn't store the user-related configs. So this - // has to be verified by setupUser() in init_linux.go. - - return nil -} - -func hasIDMapping(id int, mappings []configs.IDMap) bool { - for _, m := range mappings { - if id >= m.ContainerID && id < m.ContainerID+m.Size { - return true - } - } - return false -} - -func rootlessEUIDMappings(config *configs.Config) error { - if !config.Namespaces.Contains(configs.NEWUSER) { - return fmt.Errorf("rootless container requires user namespaces") - } - - if len(config.UidMappings) == 0 { - return fmt.Errorf("rootless containers requires at least one UID mapping") - } - if len(config.GidMappings) == 0 { - return fmt.Errorf("rootless containers requires at least one GID mapping") - } - return nil -} - -// mount verifies that the user isn't trying to set up any mounts they don't have -// the rights to do. In addition, it makes sure that no mount has a `uid=` or -// `gid=` option that doesn't resolve to root. -func rootlessEUIDMount(config *configs.Config) error { - // XXX: We could whitelist allowed devices at this point, but I'm not - // convinced that's a good idea. The kernel is the best arbiter of - // access control. - - for _, mount := range config.Mounts { - // Check that the options list doesn't contain any uid= or gid= entries - // that don't resolve to root. - for _, opt := range strings.Split(mount.Data, ",") { - if strings.HasPrefix(opt, "uid=") { - var uid int - n, err := fmt.Sscanf(opt, "uid=%d", &uid) - if n != 1 || err != nil { - // Ignore unknown mount options. - continue - } - if !hasIDMapping(uid, config.UidMappings) { - return fmt.Errorf("cannot specify uid= mount options for unmapped uid in rootless containers") - } - } - - if strings.HasPrefix(opt, "gid=") { - var gid int - n, err := fmt.Sscanf(opt, "gid=%d", &gid) - if n != 1 || err != nil { - // Ignore unknown mount options. 
- continue - } - if !hasIDMapping(gid, config.GidMappings) { - return fmt.Errorf("cannot specify gid= mount options for unmapped gid in rootless containers") - } - } - } - } - - return nil -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/configs/validate/validator.go b/vendor/github.com/opencontainers/runc/libcontainer/configs/validate/validator.go deleted file mode 100644 index 3b42f3010..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/configs/validate/validator.go +++ /dev/null @@ -1,245 +0,0 @@ -package validate - -import ( - "fmt" - "os" - "path/filepath" - "strings" - - "github.com/opencontainers/runc/libcontainer/configs" - "github.com/opencontainers/runc/libcontainer/intelrdt" - selinux "github.com/opencontainers/selinux/go-selinux" -) - -type Validator interface { - Validate(*configs.Config) error -} - -func New() Validator { - return &ConfigValidator{} -} - -type ConfigValidator struct { -} - -func (v *ConfigValidator) Validate(config *configs.Config) error { - if err := v.rootfs(config); err != nil { - return err - } - if err := v.network(config); err != nil { - return err - } - if err := v.hostname(config); err != nil { - return err - } - if err := v.security(config); err != nil { - return err - } - if err := v.usernamespace(config); err != nil { - return err - } - if err := v.cgroupnamespace(config); err != nil { - return err - } - if err := v.sysctl(config); err != nil { - return err - } - if err := v.intelrdt(config); err != nil { - return err - } - if config.RootlessEUID { - if err := v.rootlessEUID(config); err != nil { - return err - } - } - return nil -} - -// rootfs validates if the rootfs is an absolute path and is not a symlink -// to the container's root filesystem. -func (v *ConfigValidator) rootfs(config *configs.Config) error { - if _, err := os.Stat(config.Rootfs); err != nil { - if os.IsNotExist(err) { - return fmt.Errorf("rootfs (%s) does not exist", config.Rootfs) - } - return err - } - cleaned, err := filepath.Abs(config.Rootfs) - if err != nil { - return err - } - if cleaned, err = filepath.EvalSymlinks(cleaned); err != nil { - return err - } - if filepath.Clean(config.Rootfs) != cleaned { - return fmt.Errorf("%s is not an absolute path or is a symlink", config.Rootfs) - } - return nil -} - -func (v *ConfigValidator) network(config *configs.Config) error { - if !config.Namespaces.Contains(configs.NEWNET) { - if len(config.Networks) > 0 || len(config.Routes) > 0 { - return fmt.Errorf("unable to apply network settings without a private NET namespace") - } - } - return nil -} - -func (v *ConfigValidator) hostname(config *configs.Config) error { - if config.Hostname != "" && !config.Namespaces.Contains(configs.NEWUTS) { - return fmt.Errorf("unable to set hostname without a private UTS namespace") - } - return nil -} - -func (v *ConfigValidator) security(config *configs.Config) error { - // restrict sys without mount namespace - if (len(config.MaskPaths) > 0 || len(config.ReadonlyPaths) > 0) && - !config.Namespaces.Contains(configs.NEWNS) { - return fmt.Errorf("unable to restrict sys entries without a private MNT namespace") - } - if config.ProcessLabel != "" && !selinux.GetEnabled() { - return fmt.Errorf("selinux label is specified in config, but selinux is disabled or not supported") - } - - return nil -} - -func (v *ConfigValidator) usernamespace(config *configs.Config) error { - if config.Namespaces.Contains(configs.NEWUSER) { - if _, err := os.Stat("/proc/self/ns/user"); os.IsNotExist(err) { - return fmt.Errorf("USER 
namespaces aren't enabled in the kernel") - } - } else { - if config.UidMappings != nil || config.GidMappings != nil { - return fmt.Errorf("User namespace mappings specified, but USER namespace isn't enabled in the config") - } - } - return nil -} - -func (v *ConfigValidator) cgroupnamespace(config *configs.Config) error { - if config.Namespaces.Contains(configs.NEWCGROUP) { - if _, err := os.Stat("/proc/self/ns/cgroup"); os.IsNotExist(err) { - return fmt.Errorf("cgroup namespaces aren't enabled in the kernel") - } - } - return nil -} - -// sysctl validates that the specified sysctl keys are valid or not. -// /proc/sys isn't completely namespaced and depending on which namespaces -// are specified, a subset of sysctls are permitted. -func (v *ConfigValidator) sysctl(config *configs.Config) error { - validSysctlMap := map[string]bool{ - "kernel.msgmax": true, - "kernel.msgmnb": true, - "kernel.msgmni": true, - "kernel.sem": true, - "kernel.shmall": true, - "kernel.shmmax": true, - "kernel.shmmni": true, - "kernel.shm_rmid_forced": true, - } - - for s := range config.Sysctl { - if validSysctlMap[s] || strings.HasPrefix(s, "fs.mqueue.") { - if config.Namespaces.Contains(configs.NEWIPC) { - continue - } else { - return fmt.Errorf("sysctl %q is not allowed in the hosts ipc namespace", s) - } - } - if strings.HasPrefix(s, "net.") { - if config.Namespaces.Contains(configs.NEWNET) { - if path := config.Namespaces.PathOf(configs.NEWNET); path != "" { - if err := checkHostNs(s, path); err != nil { - return err - } - } - continue - } else { - return fmt.Errorf("sysctl %q is not allowed in the hosts network namespace", s) - } - } - if config.Namespaces.Contains(configs.NEWUTS) { - switch s { - case "kernel.domainname": - // This is namespaced and there's no explicit OCI field for it. - continue - case "kernel.hostname": - // This is namespaced but there's a conflicting (dedicated) OCI field for it. - return fmt.Errorf("sysctl %q is not allowed as it conflicts with the OCI %q field", s, "hostname") - } - } - return fmt.Errorf("sysctl %q is not in a separate kernel namespace", s) - } - - return nil -} - -func (v *ConfigValidator) intelrdt(config *configs.Config) error { - if config.IntelRdt != nil { - if !intelrdt.IsCatEnabled() && !intelrdt.IsMbaEnabled() { - return fmt.Errorf("intelRdt is specified in config, but Intel RDT is not supported or enabled") - } - - if !intelrdt.IsCatEnabled() && config.IntelRdt.L3CacheSchema != "" { - return fmt.Errorf("intelRdt.l3CacheSchema is specified in config, but Intel RDT/CAT is not enabled") - } - if !intelrdt.IsMbaEnabled() && config.IntelRdt.MemBwSchema != "" { - return fmt.Errorf("intelRdt.memBwSchema is specified in config, but Intel RDT/MBA is not enabled") - } - - if intelrdt.IsCatEnabled() && config.IntelRdt.L3CacheSchema == "" { - return fmt.Errorf("Intel RDT/CAT is enabled and intelRdt is specified in config, but intelRdt.l3CacheSchema is empty") - } - if intelrdt.IsMbaEnabled() && config.IntelRdt.MemBwSchema == "" { - return fmt.Errorf("Intel RDT/MBA is enabled and intelRdt is specified in config, but intelRdt.memBwSchema is empty") - } - } - - return nil -} - -func isSymbolicLink(path string) (bool, error) { - fi, err := os.Lstat(path) - if err != nil { - return false, err - } - - return fi.Mode()&os.ModeSymlink == os.ModeSymlink, nil -} - -// checkHostNs checks whether network sysctl is used in host namespace. 
-func checkHostNs(sysctlConfig string, path string) error { - var currentProcessNetns = "/proc/self/ns/net" - // readlink on the current processes network namespace - destOfCurrentProcess, err := os.Readlink(currentProcessNetns) - if err != nil { - return fmt.Errorf("read soft link %q error", currentProcessNetns) - } - - // First check if the provided path is a symbolic link - symLink, err := isSymbolicLink(path) - if err != nil { - return fmt.Errorf("could not check that %q is a symlink: %v", path, err) - } - - if symLink == false { - // The provided namespace is not a symbolic link, - // it is not the host namespace. - return nil - } - - // readlink on the path provided in the struct - destOfContainer, err := os.Readlink(path) - if err != nil { - return fmt.Errorf("read soft link %q error", path) - } - if destOfContainer == destOfCurrentProcess { - return fmt.Errorf("sysctl %q is not allowed in the hosts network namespace", sysctlConfig) - } - return nil -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/console_linux.go b/vendor/github.com/opencontainers/runc/libcontainer/console_linux.go deleted file mode 100644 index 9997e93ed..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/console_linux.go +++ /dev/null @@ -1,41 +0,0 @@ -package libcontainer - -import ( - "os" - - "golang.org/x/sys/unix" -) - -// mount initializes the console inside the rootfs mounting with the specified mount label -// and applying the correct ownership of the console. -func mountConsole(slavePath string) error { - oldMask := unix.Umask(0000) - defer unix.Umask(oldMask) - f, err := os.Create("/dev/console") - if err != nil && !os.IsExist(err) { - return err - } - if f != nil { - f.Close() - } - return unix.Mount(slavePath, "/dev/console", "bind", unix.MS_BIND, "") -} - -// dupStdio opens the slavePath for the console and dups the fds to the current -// processes stdio, fd 0,1,2. -func dupStdio(slavePath string) error { - fd, err := unix.Open(slavePath, unix.O_RDWR, 0) - if err != nil { - return &os.PathError{ - Op: "open", - Path: slavePath, - Err: err, - } - } - for _, i := range []int{0, 1, 2} { - if err := unix.Dup3(fd, i, 0); err != nil { - return err - } - } - return nil -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/container.go b/vendor/github.com/opencontainers/runc/libcontainer/container.go deleted file mode 100644 index ba7541c5f..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/container.go +++ /dev/null @@ -1,173 +0,0 @@ -// Package libcontainer provides a native Go implementation for creating containers -// with namespaces, cgroups, capabilities, and filesystem access controls. -// It allows you to manage the lifecycle of the container performing additional operations -// after the container is created. -package libcontainer - -import ( - "os" - "time" - - "github.com/opencontainers/runc/libcontainer/configs" - "github.com/opencontainers/runtime-spec/specs-go" -) - -// Status is the status of a container. -type Status int - -const ( - // Created is the status that denotes the container exists but has not been run yet. - Created Status = iota - // Running is the status that denotes the container exists and is running. - Running - // Pausing is the status that denotes the container exists, it is in the process of being paused. - Pausing - // Paused is the status that denotes the container exists, but all its processes are paused. 
- Paused - // Stopped is the status that denotes the container does not have a created or running process. - Stopped -) - -func (s Status) String() string { - switch s { - case Created: - return "created" - case Running: - return "running" - case Pausing: - return "pausing" - case Paused: - return "paused" - case Stopped: - return "stopped" - default: - return "unknown" - } -} - -// BaseState represents the platform agnostic pieces relating to a -// running container's state -type BaseState struct { - // ID is the container ID. - ID string `json:"id"` - - // InitProcessPid is the init process id in the parent namespace. - InitProcessPid int `json:"init_process_pid"` - - // InitProcessStartTime is the init process start time in clock cycles since boot time. - InitProcessStartTime uint64 `json:"init_process_start"` - - // Created is the unix timestamp for the creation time of the container in UTC - Created time.Time `json:"created"` - - // Config is the container's configuration. - Config configs.Config `json:"config"` -} - -// BaseContainer is a libcontainer container object. -// -// Each container is thread-safe within the same process. Since a container can -// be destroyed by a separate process, any function may return that the container -// was not found. BaseContainer includes methods that are platform agnostic. -type BaseContainer interface { - // Returns the ID of the container - ID() string - - // Returns the current status of the container. - // - // errors: - // ContainerNotExists - Container no longer exists, - // Systemerror - System error. - Status() (Status, error) - - // State returns the current container's state information. - // - // errors: - // SystemError - System error. - State() (*State, error) - - // OCIState returns the current container's state information. - // - // errors: - // SystemError - System error. - OCIState() (*specs.State, error) - - // Returns the current config of the container. - Config() configs.Config - - // Returns the PIDs inside this container. The PIDs are in the namespace of the calling process. - // - // errors: - // ContainerNotExists - Container no longer exists, - // Systemerror - System error. - // - // Some of the returned PIDs may no longer refer to processes in the Container, unless - // the Container state is PAUSED in which case every PID in the slice is valid. - Processes() ([]int, error) - - // Returns statistics for the container. - // - // errors: - // ContainerNotExists - Container no longer exists, - // Systemerror - System error. - Stats() (*Stats, error) - - // Set resources of container as configured - // - // We can use this to change resources when containers are running. - // - // errors: - // SystemError - System error. - Set(config configs.Config) error - - // Start a process inside the container. Returns error if process fails to - // start. You can track process lifecycle with passed Process structure. - // - // errors: - // ContainerNotExists - Container no longer exists, - // ConfigInvalid - config is invalid, - // ContainerPaused - Container is paused, - // SystemError - System error. - Start(process *Process) (err error) - - // Run immediately starts the process inside the container. Returns error if process - // fails to start. It does not block waiting for the exec fifo after start returns but - // opens the fifo after start returns. 
- // - // errors: - // ContainerNotExists - Container no longer exists, - // ConfigInvalid - config is invalid, - // ContainerPaused - Container is paused, - // SystemError - System error. - Run(process *Process) (err error) - - // Destroys the container, if its in a valid state, after killing any - // remaining running processes. - // - // Any event registrations are removed before the container is destroyed. - // No error is returned if the container is already destroyed. - // - // Running containers must first be stopped using Signal(..). - // Paused containers must first be resumed using Resume(..). - // - // errors: - // ContainerNotStopped - Container is still running, - // ContainerPaused - Container is paused, - // SystemError - System error. - Destroy() error - - // Signal sends the provided signal code to the container's initial process. - // - // If all is specified the signal is sent to all processes in the container - // including the initial process. - // - // errors: - // SystemError - System error. - Signal(s os.Signal, all bool) error - - // Exec signals the container to exec the users process at the end of the init. - // - // errors: - // SystemError - System error. - Exec() error -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/container_linux.go b/vendor/github.com/opencontainers/runc/libcontainer/container_linux.go deleted file mode 100644 index d6c4ebdaa..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/container_linux.go +++ /dev/null @@ -1,2050 +0,0 @@ -// +build linux - -package libcontainer - -import ( - "bytes" - "encoding/json" - "errors" - "fmt" - "io" - "io/ioutil" - "net" - "os" - "os/exec" - "path/filepath" - "reflect" - "strings" - "sync" - "syscall" // only for SysProcAttr and Signal - "time" - - "github.com/cyphar/filepath-securejoin" - "github.com/opencontainers/runc/libcontainer/cgroups" - "github.com/opencontainers/runc/libcontainer/configs" - "github.com/opencontainers/runc/libcontainer/intelrdt" - "github.com/opencontainers/runc/libcontainer/system" - "github.com/opencontainers/runc/libcontainer/utils" - "github.com/opencontainers/runtime-spec/specs-go" - - criurpc "github.com/checkpoint-restore/go-criu/rpc" - "github.com/golang/protobuf/proto" - "github.com/sirupsen/logrus" - "github.com/vishvananda/netlink/nl" - "golang.org/x/sys/unix" -) - -const stdioFdCount = 3 - -type linuxContainer struct { - id string - root string - config *configs.Config - cgroupManager cgroups.Manager - intelRdtManager intelrdt.Manager - initPath string - initArgs []string - initProcess parentProcess - initProcessStartTime uint64 - criuPath string - newuidmapPath string - newgidmapPath string - m sync.Mutex - criuVersion int - state containerState - created time.Time -} - -// State represents a running container's state -type State struct { - BaseState - - // Platform specific fields below here - - // Specified if the container was started under the rootless mode. - // Set to true if BaseState.Config.RootlessEUID && BaseState.Config.RootlessCgroups - Rootless bool `json:"rootless"` - - // Path to all the cgroups setup for a container. Key is cgroup subsystem name - // with the value as the path. - CgroupPaths map[string]string `json:"cgroup_paths"` - - // NamespacePaths are filepaths to the container's namespaces. Key is the namespace type - // with the value as the path. 
- NamespacePaths map[configs.NamespaceType]string `json:"namespace_paths"` - - // Container's standard descriptors (std{in,out,err}), needed for checkpoint and restore - ExternalDescriptors []string `json:"external_descriptors,omitempty"` - - // Intel RDT "resource control" filesystem path - IntelRdtPath string `json:"intel_rdt_path"` -} - -// Container is a libcontainer container object. -// -// Each container is thread-safe within the same process. Since a container can -// be destroyed by a separate process, any function may return that the container -// was not found. -type Container interface { - BaseContainer - - // Methods below here are platform specific - - // Checkpoint checkpoints the running container's state to disk using the criu(8) utility. - // - // errors: - // Systemerror - System error. - Checkpoint(criuOpts *CriuOpts) error - - // Restore restores the checkpointed container to a running state using the criu(8) utility. - // - // errors: - // Systemerror - System error. - Restore(process *Process, criuOpts *CriuOpts) error - - // If the Container state is RUNNING or CREATED, sets the Container state to PAUSING and pauses - // the execution of any user processes. Asynchronously, when the container finished being paused the - // state is changed to PAUSED. - // If the Container state is PAUSED, do nothing. - // - // errors: - // ContainerNotExists - Container no longer exists, - // ContainerNotRunning - Container not running or created, - // Systemerror - System error. - Pause() error - - // If the Container state is PAUSED, resumes the execution of any user processes in the - // Container before setting the Container state to RUNNING. - // If the Container state is RUNNING, do nothing. - // - // errors: - // ContainerNotExists - Container no longer exists, - // ContainerNotPaused - Container is not paused, - // Systemerror - System error. - Resume() error - - // NotifyOOM returns a read-only channel signaling when the container receives an OOM notification. - // - // errors: - // Systemerror - System error. - NotifyOOM() (<-chan struct{}, error) - - // NotifyMemoryPressure returns a read-only channel signaling when the container reaches a given pressure level - // - // errors: - // Systemerror - System error. 
- NotifyMemoryPressure(level PressureLevel) (<-chan struct{}, error) -} - -// ID returns the container's unique ID -func (c *linuxContainer) ID() string { - return c.id -} - -// Config returns the container's configuration -func (c *linuxContainer) Config() configs.Config { - return *c.config -} - -func (c *linuxContainer) Status() (Status, error) { - c.m.Lock() - defer c.m.Unlock() - return c.currentStatus() -} - -func (c *linuxContainer) State() (*State, error) { - c.m.Lock() - defer c.m.Unlock() - return c.currentState() -} - -func (c *linuxContainer) OCIState() (*specs.State, error) { - c.m.Lock() - defer c.m.Unlock() - return c.currentOCIState() -} - -func (c *linuxContainer) Processes() ([]int, error) { - pids, err := c.cgroupManager.GetAllPids() - if err != nil { - return nil, newSystemErrorWithCause(err, "getting all container pids from cgroups") - } - return pids, nil -} - -func (c *linuxContainer) Stats() (*Stats, error) { - var ( - err error - stats = &Stats{} - ) - if stats.CgroupStats, err = c.cgroupManager.GetStats(); err != nil { - return stats, newSystemErrorWithCause(err, "getting container stats from cgroups") - } - if c.intelRdtManager != nil { - if stats.IntelRdtStats, err = c.intelRdtManager.GetStats(); err != nil { - return stats, newSystemErrorWithCause(err, "getting container's Intel RDT stats") - } - } - for _, iface := range c.config.Networks { - switch iface.Type { - case "veth": - istats, err := getNetworkInterfaceStats(iface.HostInterfaceName) - if err != nil { - return stats, newSystemErrorWithCausef(err, "getting network stats for interface %q", iface.HostInterfaceName) - } - stats.Interfaces = append(stats.Interfaces, istats) - } - } - return stats, nil -} - -func (c *linuxContainer) Set(config configs.Config) error { - c.m.Lock() - defer c.m.Unlock() - status, err := c.currentStatus() - if err != nil { - return err - } - if status == Stopped { - return newGenericError(fmt.Errorf("container not running"), ContainerNotRunning) - } - if err := c.cgroupManager.Set(&config); err != nil { - // Set configs back - if err2 := c.cgroupManager.Set(c.config); err2 != nil { - logrus.Warnf("Setting back cgroup configs failed due to error: %v, your state.json and actual configs might be inconsistent.", err2) - } - return err - } - if c.intelRdtManager != nil { - if err := c.intelRdtManager.Set(&config); err != nil { - // Set configs back - if err2 := c.intelRdtManager.Set(c.config); err2 != nil { - logrus.Warnf("Setting back intelrdt configs failed due to error: %v, your state.json and actual configs might be inconsistent.", err2) - } - return err - } - } - // After config setting succeed, update config and states - c.config = &config - _, err = c.updateState(nil) - return err -} - -func (c *linuxContainer) Start(process *Process) error { - c.m.Lock() - defer c.m.Unlock() - if process.Init { - if err := c.createExecFifo(); err != nil { - return err - } - } - if err := c.start(process); err != nil { - if process.Init { - c.deleteExecFifo() - } - return err - } - return nil -} - -func (c *linuxContainer) Run(process *Process) error { - if err := c.Start(process); err != nil { - return err - } - if process.Init { - return c.exec() - } - return nil -} - -func (c *linuxContainer) Exec() error { - c.m.Lock() - defer c.m.Unlock() - return c.exec() -} - -func (c *linuxContainer) exec() error { - path := filepath.Join(c.root, execFifoFilename) - - fifoOpen := make(chan struct{}) - select { - case <-awaitProcessExit(c.initProcess.pid(), fifoOpen): - return errors.New("container 
process is already dead") - case result := <-awaitFifoOpen(path): - close(fifoOpen) - if result.err != nil { - return result.err - } - f := result.file - defer f.Close() - if err := readFromExecFifo(f); err != nil { - return err - } - return os.Remove(path) - } -} - -func readFromExecFifo(execFifo io.Reader) error { - data, err := ioutil.ReadAll(execFifo) - if err != nil { - return err - } - if len(data) <= 0 { - return fmt.Errorf("cannot start an already running container") - } - return nil -} - -func awaitProcessExit(pid int, exit <-chan struct{}) <-chan struct{} { - isDead := make(chan struct{}) - go func() { - for { - select { - case <-exit: - return - case <-time.After(time.Millisecond * 100): - stat, err := system.Stat(pid) - if err != nil || stat.State == system.Zombie { - close(isDead) - return - } - } - } - }() - return isDead -} - -func awaitFifoOpen(path string) <-chan openResult { - fifoOpened := make(chan openResult) - go func() { - f, err := os.OpenFile(path, os.O_RDONLY, 0) - if err != nil { - fifoOpened <- openResult{err: newSystemErrorWithCause(err, "open exec fifo for reading")} - return - } - fifoOpened <- openResult{file: f} - }() - return fifoOpened -} - -type openResult struct { - file *os.File - err error -} - -func (c *linuxContainer) start(process *Process) error { - parent, err := c.newParentProcess(process) - if err != nil { - return newSystemErrorWithCause(err, "creating new parent process") - } - parent.forwardChildLogs() - if err := parent.start(); err != nil { - // terminate the process to ensure that it properly is reaped. - if err := ignoreTerminateErrors(parent.terminate()); err != nil { - logrus.Warn(err) - } - return newSystemErrorWithCause(err, "starting container process") - } - // generate a timestamp indicating when the container was started - c.created = time.Now().UTC() - if process.Init { - c.state = &createdState{ - c: c, - } - state, err := c.updateState(parent) - if err != nil { - return err - } - c.initProcessStartTime = state.InitProcessStartTime - - if c.config.Hooks != nil { - s, err := c.currentOCIState() - if err != nil { - return err - } - for i, hook := range c.config.Hooks.Poststart { - if err := hook.Run(s); err != nil { - if err := ignoreTerminateErrors(parent.terminate()); err != nil { - logrus.Warn(err) - } - return newSystemErrorWithCausef(err, "running poststart hook %d", i) - } - } - } - } - return nil -} - -func (c *linuxContainer) Signal(s os.Signal, all bool) error { - if all { - return signalAllProcesses(c.cgroupManager, s) - } - status, err := c.currentStatus() - if err != nil { - return err - } - // to avoid a PID reuse attack - if status == Running || status == Created || status == Paused { - if err := c.initProcess.signal(s); err != nil { - return newSystemErrorWithCause(err, "signaling init process") - } - return nil - } - return newGenericError(fmt.Errorf("container not running"), ContainerNotRunning) -} - -func (c *linuxContainer) createExecFifo() error { - rootuid, err := c.Config().HostRootUID() - if err != nil { - return err - } - rootgid, err := c.Config().HostRootGID() - if err != nil { - return err - } - - fifoName := filepath.Join(c.root, execFifoFilename) - if _, err := os.Stat(fifoName); err == nil { - return fmt.Errorf("exec fifo %s already exists", fifoName) - } - oldMask := unix.Umask(0000) - if err := unix.Mkfifo(fifoName, 0622); err != nil { - unix.Umask(oldMask) - return err - } - unix.Umask(oldMask) - return os.Chown(fifoName, rootuid, rootgid) -} - -func (c *linuxContainer) deleteExecFifo() { - 
fifoName := filepath.Join(c.root, execFifoFilename) - os.Remove(fifoName) -} - -// includeExecFifo opens the container's execfifo as a pathfd, so that the -// container cannot access the statedir (and the FIFO itself remains -// un-opened). It then adds the FifoFd to the given exec.Cmd as an inherited -// fd, with _LIBCONTAINER_FIFOFD set to its fd number. -func (c *linuxContainer) includeExecFifo(cmd *exec.Cmd) error { - fifoName := filepath.Join(c.root, execFifoFilename) - fifoFd, err := unix.Open(fifoName, unix.O_PATH|unix.O_CLOEXEC, 0) - if err != nil { - return err - } - - cmd.ExtraFiles = append(cmd.ExtraFiles, os.NewFile(uintptr(fifoFd), fifoName)) - cmd.Env = append(cmd.Env, - fmt.Sprintf("_LIBCONTAINER_FIFOFD=%d", stdioFdCount+len(cmd.ExtraFiles)-1)) - return nil -} - -func (c *linuxContainer) newParentProcess(p *Process) (parentProcess, error) { - parentInitPipe, childInitPipe, err := utils.NewSockPair("init") - if err != nil { - return nil, newSystemErrorWithCause(err, "creating new init pipe") - } - messageSockPair := filePair{parentInitPipe, childInitPipe} - - parentLogPipe, childLogPipe, err := os.Pipe() - if err != nil { - return nil, fmt.Errorf("Unable to create the log pipe: %s", err) - } - logFilePair := filePair{parentLogPipe, childLogPipe} - - cmd, err := c.commandTemplate(p, childInitPipe, childLogPipe) - if err != nil { - return nil, newSystemErrorWithCause(err, "creating new command template") - } - if !p.Init { - return c.newSetnsProcess(p, cmd, messageSockPair, logFilePair) - } - - // We only set up fifoFd if we're not doing a `runc exec`. The historic - // reason for this is that previously we would pass a dirfd that allowed - // for container rootfs escape (and not doing it in `runc exec` avoided - // that problem), but we no longer do that. However, there's no need to do - // this for `runc exec` so we just keep it this way to be safe. - if err := c.includeExecFifo(cmd); err != nil { - return nil, newSystemErrorWithCause(err, "including execfifo in cmd.Exec setup") - } - return c.newInitProcess(p, cmd, messageSockPair, logFilePair) -} - -func (c *linuxContainer) commandTemplate(p *Process, childInitPipe *os.File, childLogPipe *os.File) (*exec.Cmd, error) { - cmd := exec.Command(c.initPath, c.initArgs[1:]...) - cmd.Args[0] = c.initArgs[0] - cmd.Stdin = p.Stdin - cmd.Stdout = p.Stdout - cmd.Stderr = p.Stderr - cmd.Dir = c.config.Rootfs - if cmd.SysProcAttr == nil { - cmd.SysProcAttr = &syscall.SysProcAttr{} - } - cmd.Env = append(cmd.Env, fmt.Sprintf("GOMAXPROCS=%s", os.Getenv("GOMAXPROCS"))) - cmd.ExtraFiles = append(cmd.ExtraFiles, p.ExtraFiles...) - if p.ConsoleSocket != nil { - cmd.ExtraFiles = append(cmd.ExtraFiles, p.ConsoleSocket) - cmd.Env = append(cmd.Env, - fmt.Sprintf("_LIBCONTAINER_CONSOLE=%d", stdioFdCount+len(cmd.ExtraFiles)-1), - ) - } - cmd.ExtraFiles = append(cmd.ExtraFiles, childInitPipe) - cmd.Env = append(cmd.Env, - fmt.Sprintf("_LIBCONTAINER_INITPIPE=%d", stdioFdCount+len(cmd.ExtraFiles)-1), - fmt.Sprintf("_LIBCONTAINER_STATEDIR=%s", c.root), - ) - - cmd.ExtraFiles = append(cmd.ExtraFiles, childLogPipe) - cmd.Env = append(cmd.Env, - fmt.Sprintf("_LIBCONTAINER_LOGPIPE=%d", stdioFdCount+len(cmd.ExtraFiles)-1), - fmt.Sprintf("_LIBCONTAINER_LOGLEVEL=%s", p.LogLevel), - ) - - // NOTE: when running a container with no PID namespace and the parent process spawning the container is - // PID1 the pdeathsig is being delivered to the container's init process by the kernel for some reason - // even with the parent still running. 
- if c.config.ParentDeathSignal > 0 { - cmd.SysProcAttr.Pdeathsig = syscall.Signal(c.config.ParentDeathSignal) - } - return cmd, nil -} - -func (c *linuxContainer) newInitProcess(p *Process, cmd *exec.Cmd, messageSockPair, logFilePair filePair) (*initProcess, error) { - cmd.Env = append(cmd.Env, "_LIBCONTAINER_INITTYPE="+string(initStandard)) - nsMaps := make(map[configs.NamespaceType]string) - for _, ns := range c.config.Namespaces { - if ns.Path != "" { - nsMaps[ns.Type] = ns.Path - } - } - _, sharePidns := nsMaps[configs.NEWPID] - data, err := c.bootstrapData(c.config.Namespaces.CloneFlags(), nsMaps) - if err != nil { - return nil, err - } - init := &initProcess{ - cmd: cmd, - messageSockPair: messageSockPair, - logFilePair: logFilePair, - manager: c.cgroupManager, - intelRdtManager: c.intelRdtManager, - config: c.newInitConfig(p), - container: c, - process: p, - bootstrapData: data, - sharePidns: sharePidns, - } - c.initProcess = init - return init, nil -} - -func (c *linuxContainer) newSetnsProcess(p *Process, cmd *exec.Cmd, messageSockPair, logFilePair filePair) (*setnsProcess, error) { - cmd.Env = append(cmd.Env, "_LIBCONTAINER_INITTYPE="+string(initSetns)) - state, err := c.currentState() - if err != nil { - return nil, newSystemErrorWithCause(err, "getting container's current state") - } - // for setns process, we don't have to set cloneflags as the process namespaces - // will only be set via setns syscall - data, err := c.bootstrapData(0, state.NamespacePaths) - if err != nil { - return nil, err - } - return &setnsProcess{ - cmd: cmd, - cgroupPaths: c.cgroupManager.GetPaths(), - rootlessCgroups: c.config.RootlessCgroups, - intelRdtPath: state.IntelRdtPath, - messageSockPair: messageSockPair, - logFilePair: logFilePair, - config: c.newInitConfig(p), - process: p, - bootstrapData: data, - }, nil -} - -func (c *linuxContainer) newInitConfig(process *Process) *initConfig { - cfg := &initConfig{ - Config: c.config, - Args: process.Args, - Env: process.Env, - User: process.User, - AdditionalGroups: process.AdditionalGroups, - Cwd: process.Cwd, - Capabilities: process.Capabilities, - PassedFilesCount: len(process.ExtraFiles), - ContainerId: c.ID(), - NoNewPrivileges: c.config.NoNewPrivileges, - RootlessEUID: c.config.RootlessEUID, - RootlessCgroups: c.config.RootlessCgroups, - AppArmorProfile: c.config.AppArmorProfile, - ProcessLabel: c.config.ProcessLabel, - Rlimits: c.config.Rlimits, - } - if process.NoNewPrivileges != nil { - cfg.NoNewPrivileges = *process.NoNewPrivileges - } - if process.AppArmorProfile != "" { - cfg.AppArmorProfile = process.AppArmorProfile - } - if process.Label != "" { - cfg.ProcessLabel = process.Label - } - if len(process.Rlimits) > 0 { - cfg.Rlimits = process.Rlimits - } - cfg.CreateConsole = process.ConsoleSocket != nil - cfg.ConsoleWidth = process.ConsoleWidth - cfg.ConsoleHeight = process.ConsoleHeight - return cfg -} - -func (c *linuxContainer) Destroy() error { - c.m.Lock() - defer c.m.Unlock() - return c.state.destroy() -} - -func (c *linuxContainer) Pause() error { - c.m.Lock() - defer c.m.Unlock() - status, err := c.currentStatus() - if err != nil { - return err - } - switch status { - case Running, Created: - if err := c.cgroupManager.Freeze(configs.Frozen); err != nil { - return err - } - return c.state.transition(&pausedState{ - c: c, - }) - } - return newGenericError(fmt.Errorf("container not running or created: %s", status), ContainerNotRunning) -} - -func (c *linuxContainer) Resume() error { - c.m.Lock() - defer c.m.Unlock() - status, err := 
c.currentStatus() - if err != nil { - return err - } - if status != Paused { - return newGenericError(fmt.Errorf("container not paused"), ContainerNotPaused) - } - if err := c.cgroupManager.Freeze(configs.Thawed); err != nil { - return err - } - return c.state.transition(&runningState{ - c: c, - }) -} - -func (c *linuxContainer) NotifyOOM() (<-chan struct{}, error) { - // XXX(cyphar): This requires cgroups. - if c.config.RootlessCgroups { - logrus.Warn("getting OOM notifications may fail if you don't have the full access to cgroups") - } - return notifyOnOOM(c.cgroupManager.GetPaths()) -} - -func (c *linuxContainer) NotifyMemoryPressure(level PressureLevel) (<-chan struct{}, error) { - // XXX(cyphar): This requires cgroups. - if c.config.RootlessCgroups { - logrus.Warn("getting memory pressure notifications may fail if you don't have the full access to cgroups") - } - return notifyMemoryPressure(c.cgroupManager.GetPaths(), level) -} - -var criuFeatures *criurpc.CriuFeatures - -func (c *linuxContainer) checkCriuFeatures(criuOpts *CriuOpts, rpcOpts *criurpc.CriuOpts, criuFeat *criurpc.CriuFeatures) error { - - var t criurpc.CriuReqType - t = criurpc.CriuReqType_FEATURE_CHECK - - // criu 1.8 => 10800 - if err := c.checkCriuVersion(10800); err != nil { - // Feature checking was introduced with CRIU 1.8. - // Ignore the feature check if an older CRIU version is used - // and just act as before. - // As all automated PR testing is done using CRIU 1.7 this - // code will not be tested by automated PR testing. - return nil - } - - // make sure the features we are looking for are really not from - // some previous check - criuFeatures = nil - - req := &criurpc.CriuReq{ - Type: &t, - // Theoretically this should not be necessary but CRIU - // segfaults if Opts is empty. - // Fixed in CRIU 2.12 - Opts: rpcOpts, - Features: criuFeat, - } - - err := c.criuSwrk(nil, req, criuOpts, false, nil) - if err != nil { - logrus.Debugf("%s", err) - return fmt.Errorf("CRIU feature check failed") - } - - logrus.Debugf("Feature check says: %s", criuFeatures) - missingFeatures := false - - // The outer if checks if the fields actually exist - if (criuFeat.MemTrack != nil) && - (criuFeatures.MemTrack != nil) { - // The inner if checks if they are set to true - if *criuFeat.MemTrack && !*criuFeatures.MemTrack { - missingFeatures = true - logrus.Debugf("CRIU does not support MemTrack") - } - } - - // This needs to be repeated for every new feature check. - // Is there a way to put this in a function. Reflection? 
- if (criuFeat.LazyPages != nil) && - (criuFeatures.LazyPages != nil) { - if *criuFeat.LazyPages && !*criuFeatures.LazyPages { - missingFeatures = true - logrus.Debugf("CRIU does not support LazyPages") - } - } - - if missingFeatures { - return fmt.Errorf("CRIU is missing features") - } - - return nil -} - -func parseCriuVersion(path string) (int, error) { - var x, y, z int - - out, err := exec.Command(path, "-V").Output() - if err != nil { - return 0, fmt.Errorf("Unable to execute CRIU command: %s", path) - } - - x = 0 - y = 0 - z = 0 - if ep := strings.Index(string(out), "-"); ep >= 0 { - // criu Git version format - var version string - if sp := strings.Index(string(out), "GitID"); sp > 0 { - version = string(out)[sp:ep] - } else { - return 0, fmt.Errorf("Unable to parse the CRIU version: %s", path) - } - - n, err := fmt.Sscanf(version, "GitID: v%d.%d.%d", &x, &y, &z) // 1.5.2 - if err != nil { - n, err = fmt.Sscanf(version, "GitID: v%d.%d", &x, &y) // 1.6 - y++ - } else { - z++ - } - if n < 2 || err != nil { - return 0, fmt.Errorf("Unable to parse the CRIU version: %s %d %s", version, n, err) - } - } else { - // criu release version format - n, err := fmt.Sscanf(string(out), "Version: %d.%d.%d\n", &x, &y, &z) // 1.5.2 - if err != nil { - n, err = fmt.Sscanf(string(out), "Version: %d.%d\n", &x, &y) // 1.6 - } - if n < 2 || err != nil { - return 0, fmt.Errorf("Unable to parse the CRIU version: %s %d %s", out, n, err) - } - } - - return x*10000 + y*100 + z, nil -} - -func compareCriuVersion(criuVersion int, minVersion int) error { - // simple function to perform the actual version compare - if criuVersion < minVersion { - return fmt.Errorf("CRIU version %d must be %d or higher", criuVersion, minVersion) - } - - return nil -} - -// This is used to store the result of criu version RPC -var criuVersionRPC *criurpc.CriuVersion - -// checkCriuVersion checks Criu version greater than or equal to minVersion -func (c *linuxContainer) checkCriuVersion(minVersion int) error { - - // If the version of criu has already been determined there is no need - // to ask criu for the version again. Use the value from c.criuVersion. - if c.criuVersion != 0 { - return compareCriuVersion(c.criuVersion, minVersion) - } - - // First try if this version of CRIU support the version RPC. - // The CRIU version RPC was introduced with CRIU 3.0. - - // First, reset the variable for the RPC answer to nil - criuVersionRPC = nil - - var t criurpc.CriuReqType - t = criurpc.CriuReqType_VERSION - req := &criurpc.CriuReq{ - Type: &t, - } - - err := c.criuSwrk(nil, req, nil, false, nil) - if err != nil { - return fmt.Errorf("CRIU version check failed: %s", err) - } - - if criuVersionRPC != nil { - logrus.Debugf("CRIU version: %s", criuVersionRPC) - // major and minor are always set - c.criuVersion = int(*criuVersionRPC.Major) * 10000 - c.criuVersion += int(*criuVersionRPC.Minor) * 100 - if criuVersionRPC.Sublevel != nil { - c.criuVersion += int(*criuVersionRPC.Sublevel) - } - if criuVersionRPC.Gitid != nil { - // runc's convention is that a CRIU git release is - // always the same as increasing the minor by 1 - c.criuVersion -= (c.criuVersion % 100) - c.criuVersion += 100 - } - return compareCriuVersion(c.criuVersion, minVersion) - } - - // This is CRIU without the version RPC and therefore - // older than 3.0. Parsing the output is required. 
- - // This can be remove once runc does not work with criu older than 3.0 - - c.criuVersion, err = parseCriuVersion(c.criuPath) - if err != nil { - return err - } - - return compareCriuVersion(c.criuVersion, minVersion) -} - -const descriptorsFilename = "descriptors.json" - -func (c *linuxContainer) addCriuDumpMount(req *criurpc.CriuReq, m *configs.Mount) { - mountDest := m.Destination - if strings.HasPrefix(mountDest, c.config.Rootfs) { - mountDest = mountDest[len(c.config.Rootfs):] - } - - extMnt := &criurpc.ExtMountMap{ - Key: proto.String(mountDest), - Val: proto.String(mountDest), - } - req.Opts.ExtMnt = append(req.Opts.ExtMnt, extMnt) -} - -func (c *linuxContainer) addMaskPaths(req *criurpc.CriuReq) error { - for _, path := range c.config.MaskPaths { - fi, err := os.Stat(fmt.Sprintf("/proc/%d/root/%s", c.initProcess.pid(), path)) - if err != nil { - if os.IsNotExist(err) { - continue - } - return err - } - if fi.IsDir() { - continue - } - - extMnt := &criurpc.ExtMountMap{ - Key: proto.String(path), - Val: proto.String("/dev/null"), - } - req.Opts.ExtMnt = append(req.Opts.ExtMnt, extMnt) - } - return nil -} - -func waitForCriuLazyServer(r *os.File, status string) error { - - data := make([]byte, 1) - _, err := r.Read(data) - if err != nil { - return err - } - fd, err := os.OpenFile(status, os.O_TRUNC|os.O_WRONLY, os.ModeAppend) - if err != nil { - return err - } - _, err = fd.Write(data) - if err != nil { - return err - } - fd.Close() - - return nil -} - -func (c *linuxContainer) handleCriuConfigurationFile(rpcOpts *criurpc.CriuOpts) { - // CRIU will evaluate a configuration starting with release 3.11. - // Settings in the configuration file will overwrite RPC settings. - // Look for annotations. The annotation 'org.criu.config' - // specifies if CRIU should use a different, container specific - // configuration file. - _, annotations := utils.Annotations(c.config.Labels) - configFile, exists := annotations["org.criu.config"] - if exists { - // If the annotation 'org.criu.config' exists and is set - // to a non-empty string, tell CRIU to use that as a - // configuration file. If the file does not exist, CRIU - // will just ignore it. - if configFile != "" { - rpcOpts.ConfigFile = proto.String(configFile) - } - // If 'org.criu.config' exists and is set to an empty - // string, a runc specific CRIU configuration file will - // be not set at all. - } else { - // If the mentioned annotation has not been found, specify - // a default CRIU configuration file. - rpcOpts.ConfigFile = proto.String("/etc/criu/runc.conf") - } -} - -func (c *linuxContainer) Checkpoint(criuOpts *CriuOpts) error { - c.m.Lock() - defer c.m.Unlock() - - // Checkpoint is unlikely to work if os.Geteuid() != 0 || system.RunningInUserNS(). - // (CLI prints a warning) - // TODO(avagin): Figure out how to make this work nicely. CRIU 2.0 has - // support for doing unprivileged dumps, but the setup of - // rootless containers might make this complicated. - - // criu 1.5.2 => 10502 - if err := c.checkCriuVersion(10502); err != nil { - return err - } - - if criuOpts.ImagesDirectory == "" { - return fmt.Errorf("invalid directory to save checkpoint") - } - - // Since a container can be C/R'ed multiple times, - // the checkpoint directory may already exist. 
- if err := os.Mkdir(criuOpts.ImagesDirectory, 0755); err != nil && !os.IsExist(err) { - return err - } - - if criuOpts.WorkDirectory == "" { - criuOpts.WorkDirectory = filepath.Join(c.root, "criu.work") - } - - if err := os.Mkdir(criuOpts.WorkDirectory, 0755); err != nil && !os.IsExist(err) { - return err - } - - workDir, err := os.Open(criuOpts.WorkDirectory) - if err != nil { - return err - } - defer workDir.Close() - - imageDir, err := os.Open(criuOpts.ImagesDirectory) - if err != nil { - return err - } - defer imageDir.Close() - - rpcOpts := criurpc.CriuOpts{ - ImagesDirFd: proto.Int32(int32(imageDir.Fd())), - WorkDirFd: proto.Int32(int32(workDir.Fd())), - LogLevel: proto.Int32(4), - LogFile: proto.String("dump.log"), - Root: proto.String(c.config.Rootfs), - ManageCgroups: proto.Bool(true), - NotifyScripts: proto.Bool(true), - Pid: proto.Int32(int32(c.initProcess.pid())), - ShellJob: proto.Bool(criuOpts.ShellJob), - LeaveRunning: proto.Bool(criuOpts.LeaveRunning), - TcpEstablished: proto.Bool(criuOpts.TcpEstablished), - ExtUnixSk: proto.Bool(criuOpts.ExternalUnixConnections), - FileLocks: proto.Bool(criuOpts.FileLocks), - EmptyNs: proto.Uint32(criuOpts.EmptyNs), - OrphanPtsMaster: proto.Bool(true), - AutoDedup: proto.Bool(criuOpts.AutoDedup), - LazyPages: proto.Bool(criuOpts.LazyPages), - } - - c.handleCriuConfigurationFile(&rpcOpts) - - // If the container is running in a network namespace and has - // a path to the network namespace configured, we will dump - // that network namespace as an external namespace and we - // will expect that the namespace exists during restore. - // This basically means that CRIU will ignore the namespace - // and expect to be setup correctly. - nsPath := c.config.Namespaces.PathOf(configs.NEWNET) - if nsPath != "" { - // For this to work we need at least criu 3.11.0 => 31100. - // As there was already a successful version check we will - // not error out if it fails. runc will just behave as it used - // to do and ignore external network namespaces. - err := c.checkCriuVersion(31100) - if err == nil { - // CRIU expects the information about an external namespace - // like this: --external net[]: - // This is always 'extRootNetNS'. 
- var netns syscall.Stat_t - err = syscall.Stat(nsPath, &netns) - if err != nil { - return err - } - criuExternal := fmt.Sprintf("net[%d]:extRootNetNS", netns.Ino) - rpcOpts.External = append(rpcOpts.External, criuExternal) - } - } - - fcg := c.cgroupManager.GetPaths()["freezer"] - if fcg != "" { - rpcOpts.FreezeCgroup = proto.String(fcg) - } - - // append optional criu opts, e.g., page-server and port - if criuOpts.PageServer.Address != "" && criuOpts.PageServer.Port != 0 { - rpcOpts.Ps = &criurpc.CriuPageServerInfo{ - Address: proto.String(criuOpts.PageServer.Address), - Port: proto.Int32(criuOpts.PageServer.Port), - } - } - - //pre-dump may need parentImage param to complete iterative migration - if criuOpts.ParentImage != "" { - rpcOpts.ParentImg = proto.String(criuOpts.ParentImage) - rpcOpts.TrackMem = proto.Bool(true) - } - - // append optional manage cgroups mode - if criuOpts.ManageCgroupsMode != 0 { - // criu 1.7 => 10700 - if err := c.checkCriuVersion(10700); err != nil { - return err - } - mode := criurpc.CriuCgMode(criuOpts.ManageCgroupsMode) - rpcOpts.ManageCgroupsMode = &mode - } - - var t criurpc.CriuReqType - if criuOpts.PreDump { - feat := criurpc.CriuFeatures{ - MemTrack: proto.Bool(true), - } - - if err := c.checkCriuFeatures(criuOpts, &rpcOpts, &feat); err != nil { - return err - } - - t = criurpc.CriuReqType_PRE_DUMP - } else { - t = criurpc.CriuReqType_DUMP - } - req := &criurpc.CriuReq{ - Type: &t, - Opts: &rpcOpts, - } - - if criuOpts.LazyPages { - // lazy migration requested; check if criu supports it - feat := criurpc.CriuFeatures{ - LazyPages: proto.Bool(true), - } - - if err := c.checkCriuFeatures(criuOpts, &rpcOpts, &feat); err != nil { - return err - } - - statusRead, statusWrite, err := os.Pipe() - if err != nil { - return err - } - rpcOpts.StatusFd = proto.Int32(int32(statusWrite.Fd())) - go waitForCriuLazyServer(statusRead, criuOpts.StatusFd) - } - - //no need to dump these information in pre-dump - if !criuOpts.PreDump { - for _, m := range c.config.Mounts { - switch m.Device { - case "bind": - c.addCriuDumpMount(req, m) - case "cgroup": - binds, err := getCgroupMounts(m) - if err != nil { - return err - } - for _, b := range binds { - c.addCriuDumpMount(req, b) - } - } - } - - if err := c.addMaskPaths(req); err != nil { - return err - } - - for _, node := range c.config.Devices { - m := &configs.Mount{Destination: node.Path, Source: node.Path} - c.addCriuDumpMount(req, m) - } - - // Write the FD info to a file in the image directory - fdsJSON, err := json.Marshal(c.initProcess.externalDescriptors()) - if err != nil { - return err - } - - err = ioutil.WriteFile(filepath.Join(criuOpts.ImagesDirectory, descriptorsFilename), fdsJSON, 0655) - if err != nil { - return err - } - } - - err = c.criuSwrk(nil, req, criuOpts, false, nil) - if err != nil { - return err - } - return nil -} - -func (c *linuxContainer) addCriuRestoreMount(req *criurpc.CriuReq, m *configs.Mount) { - mountDest := m.Destination - if strings.HasPrefix(mountDest, c.config.Rootfs) { - mountDest = mountDest[len(c.config.Rootfs):] - } - - extMnt := &criurpc.ExtMountMap{ - Key: proto.String(mountDest), - Val: proto.String(m.Source), - } - req.Opts.ExtMnt = append(req.Opts.ExtMnt, extMnt) -} - -func (c *linuxContainer) restoreNetwork(req *criurpc.CriuReq, criuOpts *CriuOpts) { - for _, iface := range c.config.Networks { - switch iface.Type { - case "veth": - veth := new(criurpc.CriuVethPair) - veth.IfOut = proto.String(iface.HostInterfaceName) - veth.IfIn = proto.String(iface.Name) - 
req.Opts.Veths = append(req.Opts.Veths, veth) - case "loopback": - // Do nothing - } - } - for _, i := range criuOpts.VethPairs { - veth := new(criurpc.CriuVethPair) - veth.IfOut = proto.String(i.HostInterfaceName) - veth.IfIn = proto.String(i.ContainerInterfaceName) - req.Opts.Veths = append(req.Opts.Veths, veth) - } -} - -// makeCriuRestoreMountpoints makes the actual mountpoints for the -// restore using CRIU. This function is inspired from the code in -// rootfs_linux.go -func (c *linuxContainer) makeCriuRestoreMountpoints(m *configs.Mount) error { - switch m.Device { - case "cgroup": - // Do nothing for cgroup, CRIU should handle it - case "bind": - // The prepareBindMount() function checks if source - // exists. So it cannot be used for other filesystem types. - if err := prepareBindMount(m, c.config.Rootfs); err != nil { - return err - } - default: - // for all other file-systems just create the mountpoints - dest, err := securejoin.SecureJoin(c.config.Rootfs, m.Destination) - if err != nil { - return err - } - if err := checkMountDestination(c.config.Rootfs, dest); err != nil { - return err - } - m.Destination = dest - if err := os.MkdirAll(dest, 0755); err != nil { - return err - } - } - return nil -} - -// isPathInPrefixList is a small function for CRIU restore to make sure -// mountpoints, which are on a tmpfs, are not created in the roofs -func isPathInPrefixList(path string, prefix []string) bool { - for _, p := range prefix { - if strings.HasPrefix(path, p+"/") { - return false - } - } - return true -} - -// prepareCriuRestoreMounts tries to set up the rootfs of the -// container to be restored in the same way runc does it for -// initial container creation. Even for a read-only rootfs container -// runc modifies the rootfs to add mountpoints which do not exist. -// This function also creates missing mountpoints as long as they -// are not on top of a tmpfs, as CRIU will restore tmpfs content anyway. -func (c *linuxContainer) prepareCriuRestoreMounts(mounts []*configs.Mount) error { - // First get a list of a all tmpfs mounts - tmpfs := []string{} - for _, m := range mounts { - switch m.Device { - case "tmpfs": - tmpfs = append(tmpfs, m.Destination) - } - } - // Now go through all mounts and create the mountpoints - // if the mountpoints are not on a tmpfs, as CRIU will - // restore the complete tmpfs content from its checkpoint. - for _, m := range mounts { - if isPathInPrefixList(m.Destination, tmpfs) { - if err := c.makeCriuRestoreMountpoints(m); err != nil { - return err - } - } - } - return nil -} - -func (c *linuxContainer) Restore(process *Process, criuOpts *CriuOpts) error { - c.m.Lock() - defer c.m.Unlock() - - var extraFiles []*os.File - - // Restore is unlikely to work if os.Geteuid() != 0 || system.RunningInUserNS(). - // (CLI prints a warning) - // TODO(avagin): Figure out how to make this work nicely. CRIU doesn't have - // support for unprivileged restore at the moment. - - // criu 1.5.2 => 10502 - if err := c.checkCriuVersion(10502); err != nil { - return err - } - if criuOpts.WorkDirectory == "" { - criuOpts.WorkDirectory = filepath.Join(c.root, "criu.work") - } - // Since a container can be C/R'ed multiple times, - // the work directory may already exist. 
- if err := os.Mkdir(criuOpts.WorkDirectory, 0655); err != nil && !os.IsExist(err) { - return err - } - workDir, err := os.Open(criuOpts.WorkDirectory) - if err != nil { - return err - } - defer workDir.Close() - if criuOpts.ImagesDirectory == "" { - return fmt.Errorf("invalid directory to restore checkpoint") - } - imageDir, err := os.Open(criuOpts.ImagesDirectory) - if err != nil { - return err - } - defer imageDir.Close() - // CRIU has a few requirements for a root directory: - // * it must be a mount point - // * its parent must not be overmounted - // c.config.Rootfs is bind-mounted to a temporary directory - // to satisfy these requirements. - root := filepath.Join(c.root, "criu-root") - if err := os.Mkdir(root, 0755); err != nil { - return err - } - defer os.Remove(root) - root, err = filepath.EvalSymlinks(root) - if err != nil { - return err - } - err = unix.Mount(c.config.Rootfs, root, "", unix.MS_BIND|unix.MS_REC, "") - if err != nil { - return err - } - defer unix.Unmount(root, unix.MNT_DETACH) - t := criurpc.CriuReqType_RESTORE - req := &criurpc.CriuReq{ - Type: &t, - Opts: &criurpc.CriuOpts{ - ImagesDirFd: proto.Int32(int32(imageDir.Fd())), - WorkDirFd: proto.Int32(int32(workDir.Fd())), - EvasiveDevices: proto.Bool(true), - LogLevel: proto.Int32(4), - LogFile: proto.String("restore.log"), - RstSibling: proto.Bool(true), - Root: proto.String(root), - ManageCgroups: proto.Bool(true), - NotifyScripts: proto.Bool(true), - ShellJob: proto.Bool(criuOpts.ShellJob), - ExtUnixSk: proto.Bool(criuOpts.ExternalUnixConnections), - TcpEstablished: proto.Bool(criuOpts.TcpEstablished), - FileLocks: proto.Bool(criuOpts.FileLocks), - EmptyNs: proto.Uint32(criuOpts.EmptyNs), - OrphanPtsMaster: proto.Bool(true), - AutoDedup: proto.Bool(criuOpts.AutoDedup), - LazyPages: proto.Bool(criuOpts.LazyPages), - }, - } - - c.handleCriuConfigurationFile(req.Opts) - - // Same as during checkpointing. If the container has a specific network namespace - // assigned to it, this now expects that the checkpoint will be restored in a - // already created network namespace. - nsPath := c.config.Namespaces.PathOf(configs.NEWNET) - if nsPath != "" { - // For this to work we need at least criu 3.11.0 => 31100. - // As there was already a successful version check we will - // not error out if it fails. runc will just behave as it used - // to do and ignore external network namespaces. - err := c.checkCriuVersion(31100) - if err == nil { - // CRIU wants the information about an existing network namespace - // like this: --inherit-fd fd[]: - // The needs to be the same as during checkpointing. - // We are always using 'extRootNetNS' as the key in this. - netns, err := os.Open(nsPath) - defer netns.Close() - if err != nil { - logrus.Errorf("If a specific network namespace is defined it must exist: %s", err) - return fmt.Errorf("Requested network namespace %v does not exist", nsPath) - } - inheritFd := new(criurpc.InheritFd) - inheritFd.Key = proto.String("extRootNetNS") - // The offset of four is necessary because 0, 1, 2 and 3 is already - // used by stdin, stdout, stderr, 'criu swrk' socket. - inheritFd.Fd = proto.Int32(int32(4 + len(extraFiles))) - req.Opts.InheritFd = append(req.Opts.InheritFd, inheritFd) - // All open FDs need to be transferred to CRIU via extraFiles - extraFiles = append(extraFiles, netns) - } - } - - // This will modify the rootfs of the container in the same way runc - // modifies the container during initial creation. 
- if err := c.prepareCriuRestoreMounts(c.config.Mounts); err != nil { - return err - } - - for _, m := range c.config.Mounts { - switch m.Device { - case "bind": - c.addCriuRestoreMount(req, m) - case "cgroup": - binds, err := getCgroupMounts(m) - if err != nil { - return err - } - for _, b := range binds { - c.addCriuRestoreMount(req, b) - } - } - } - - if len(c.config.MaskPaths) > 0 { - m := &configs.Mount{Destination: "/dev/null", Source: "/dev/null"} - c.addCriuRestoreMount(req, m) - } - - for _, node := range c.config.Devices { - m := &configs.Mount{Destination: node.Path, Source: node.Path} - c.addCriuRestoreMount(req, m) - } - - if criuOpts.EmptyNs&unix.CLONE_NEWNET == 0 { - c.restoreNetwork(req, criuOpts) - } - - // append optional manage cgroups mode - if criuOpts.ManageCgroupsMode != 0 { - // criu 1.7 => 10700 - if err := c.checkCriuVersion(10700); err != nil { - return err - } - mode := criurpc.CriuCgMode(criuOpts.ManageCgroupsMode) - req.Opts.ManageCgroupsMode = &mode - } - - var ( - fds []string - fdJSON []byte - ) - if fdJSON, err = ioutil.ReadFile(filepath.Join(criuOpts.ImagesDirectory, descriptorsFilename)); err != nil { - return err - } - - if err := json.Unmarshal(fdJSON, &fds); err != nil { - return err - } - for i := range fds { - if s := fds[i]; strings.Contains(s, "pipe:") { - inheritFd := new(criurpc.InheritFd) - inheritFd.Key = proto.String(s) - inheritFd.Fd = proto.Int32(int32(i)) - req.Opts.InheritFd = append(req.Opts.InheritFd, inheritFd) - } - } - return c.criuSwrk(process, req, criuOpts, true, extraFiles) -} - -func (c *linuxContainer) criuApplyCgroups(pid int, req *criurpc.CriuReq) error { - // XXX: Do we need to deal with this case? AFAIK criu still requires root. - if err := c.cgroupManager.Apply(pid); err != nil { - return err - } - - if err := c.cgroupManager.Set(c.config); err != nil { - return newSystemError(err) - } - - path := fmt.Sprintf("/proc/%d/cgroup", pid) - cgroupsPaths, err := cgroups.ParseCgroupFile(path) - if err != nil { - return err - } - - for c, p := range cgroupsPaths { - cgroupRoot := &criurpc.CgroupRoot{ - Ctrl: proto.String(c), - Path: proto.String(p), - } - req.Opts.CgRoot = append(req.Opts.CgRoot, cgroupRoot) - } - - return nil -} - -func (c *linuxContainer) criuSwrk(process *Process, req *criurpc.CriuReq, opts *CriuOpts, applyCgroups bool, extraFiles []*os.File) error { - fds, err := unix.Socketpair(unix.AF_LOCAL, unix.SOCK_SEQPACKET|unix.SOCK_CLOEXEC, 0) - if err != nil { - return err - } - - var logPath string - if opts != nil { - logPath = filepath.Join(opts.WorkDirectory, req.GetOpts().GetLogFile()) - } else { - // For the VERSION RPC 'opts' is set to 'nil' and therefore - // opts.WorkDirectory does not exist. Set logPath to "". - logPath = "" - } - criuClient := os.NewFile(uintptr(fds[0]), "criu-transport-client") - criuClientFileCon, err := net.FileConn(criuClient) - criuClient.Close() - if err != nil { - return err - } - - criuClientCon := criuClientFileCon.(*net.UnixConn) - defer criuClientCon.Close() - - criuServer := os.NewFile(uintptr(fds[1]), "criu-transport-server") - defer criuServer.Close() - - args := []string{"swrk", "3"} - if c.criuVersion != 0 { - // If the CRIU Version is still '0' then this is probably - // the initial CRIU run to detect the version. Skip it. - logrus.Debugf("Using CRIU %d at: %s", c.criuVersion, c.criuPath) - } - logrus.Debugf("Using CRIU with following args: %s", args) - cmd := exec.Command(c.criuPath, args...) 
- if process != nil { - cmd.Stdin = process.Stdin - cmd.Stdout = process.Stdout - cmd.Stderr = process.Stderr - } - cmd.ExtraFiles = append(cmd.ExtraFiles, criuServer) - if extraFiles != nil { - cmd.ExtraFiles = append(cmd.ExtraFiles, extraFiles...) - } - - if err := cmd.Start(); err != nil { - return err - } - criuServer.Close() - - defer func() { - criuClientCon.Close() - _, err := cmd.Process.Wait() - if err != nil { - return - } - }() - - if applyCgroups { - err := c.criuApplyCgroups(cmd.Process.Pid, req) - if err != nil { - return err - } - } - - var extFds []string - if process != nil { - extFds, err = getPipeFds(cmd.Process.Pid) - if err != nil { - return err - } - } - - logrus.Debugf("Using CRIU in %s mode", req.GetType().String()) - // In the case of criurpc.CriuReqType_FEATURE_CHECK req.GetOpts() - // should be empty. For older CRIU versions it still will be - // available but empty. criurpc.CriuReqType_VERSION actually - // has no req.GetOpts(). - if !(req.GetType() == criurpc.CriuReqType_FEATURE_CHECK || - req.GetType() == criurpc.CriuReqType_VERSION) { - - val := reflect.ValueOf(req.GetOpts()) - v := reflect.Indirect(val) - for i := 0; i < v.NumField(); i++ { - st := v.Type() - name := st.Field(i).Name - if strings.HasPrefix(name, "XXX_") { - continue - } - value := val.MethodByName("Get" + name).Call([]reflect.Value{}) - logrus.Debugf("CRIU option %s with value %v", name, value[0]) - } - } - data, err := proto.Marshal(req) - if err != nil { - return err - } - _, err = criuClientCon.Write(data) - if err != nil { - return err - } - - buf := make([]byte, 10*4096) - oob := make([]byte, 4096) - for true { - n, oobn, _, _, err := criuClientCon.ReadMsgUnix(buf, oob) - if err != nil { - return err - } - if n == 0 { - return fmt.Errorf("unexpected EOF") - } - if n == len(buf) { - return fmt.Errorf("buffer is too small") - } - - resp := new(criurpc.CriuResp) - err = proto.Unmarshal(buf[:n], resp) - if err != nil { - return err - } - if !resp.GetSuccess() { - typeString := req.GetType().String() - if typeString == "VERSION" { - // If the VERSION RPC fails this probably means that the CRIU - // version is too old for this RPC. Just return 'nil'. - return nil - } - return fmt.Errorf("criu failed: type %s errno %d\nlog file: %s", typeString, resp.GetCrErrno(), logPath) - } - - t := resp.GetType() - switch { - case t == criurpc.CriuReqType_VERSION: - logrus.Debugf("CRIU version: %s", resp) - criuVersionRPC = resp.GetVersion() - break - case t == criurpc.CriuReqType_FEATURE_CHECK: - logrus.Debugf("Feature check says: %s", resp) - criuFeatures = resp.GetFeatures() - case t == criurpc.CriuReqType_NOTIFY: - if err := c.criuNotifications(resp, process, opts, extFds, oob[:oobn]); err != nil { - return err - } - t = criurpc.CriuReqType_NOTIFY - req = &criurpc.CriuReq{ - Type: &t, - NotifySuccess: proto.Bool(true), - } - data, err = proto.Marshal(req) - if err != nil { - return err - } - _, err = criuClientCon.Write(data) - if err != nil { - return err - } - continue - case t == criurpc.CriuReqType_RESTORE: - case t == criurpc.CriuReqType_DUMP: - case t == criurpc.CriuReqType_PRE_DUMP: - default: - return fmt.Errorf("unable to parse the response %s", resp.String()) - } - - break - } - - criuClientCon.CloseWrite() - // cmd.Wait() waits cmd.goroutines which are used for proxying file descriptors. - // Here we want to wait only the CRIU process. - st, err := cmd.Process.Wait() - if err != nil { - return err - } - - // In pre-dump mode CRIU is in a loop and waits for - // the final DUMP command. 
- // The current runc pre-dump approach, however, is - // start criu in PRE_DUMP once for a single pre-dump - // and not the whole series of pre-dump, pre-dump, ...m, dump - // If we got the message CriuReqType_PRE_DUMP it means - // CRIU was successful and we need to forcefully stop CRIU - if !st.Success() && *req.Type != criurpc.CriuReqType_PRE_DUMP { - return fmt.Errorf("criu failed: %s\nlog file: %s", st.String(), logPath) - } - return nil -} - -// block any external network activity -func lockNetwork(config *configs.Config) error { - for _, config := range config.Networks { - strategy, err := getStrategy(config.Type) - if err != nil { - return err - } - - if err := strategy.detach(config); err != nil { - return err - } - } - return nil -} - -func unlockNetwork(config *configs.Config) error { - for _, config := range config.Networks { - strategy, err := getStrategy(config.Type) - if err != nil { - return err - } - if err = strategy.attach(config); err != nil { - return err - } - } - return nil -} - -func (c *linuxContainer) criuNotifications(resp *criurpc.CriuResp, process *Process, opts *CriuOpts, fds []string, oob []byte) error { - notify := resp.GetNotify() - if notify == nil { - return fmt.Errorf("invalid response: %s", resp.String()) - } - logrus.Debugf("notify: %s\n", notify.GetScript()) - switch { - case notify.GetScript() == "post-dump": - f, err := os.Create(filepath.Join(c.root, "checkpoint")) - if err != nil { - return err - } - f.Close() - case notify.GetScript() == "network-unlock": - if err := unlockNetwork(c.config); err != nil { - return err - } - case notify.GetScript() == "network-lock": - if err := lockNetwork(c.config); err != nil { - return err - } - case notify.GetScript() == "setup-namespaces": - if c.config.Hooks != nil { - s, err := c.currentOCIState() - if err != nil { - return nil - } - s.Pid = int(notify.GetPid()) - for i, hook := range c.config.Hooks.Prestart { - if err := hook.Run(s); err != nil { - return newSystemErrorWithCausef(err, "running prestart hook %d", i) - } - } - } - case notify.GetScript() == "post-restore": - pid := notify.GetPid() - r, err := newRestoredProcess(int(pid), fds) - if err != nil { - return err - } - process.ops = r - if err := c.state.transition(&restoredState{ - imageDir: opts.ImagesDirectory, - c: c, - }); err != nil { - return err - } - // create a timestamp indicating when the restored checkpoint was started - c.created = time.Now().UTC() - if _, err := c.updateState(r); err != nil { - return err - } - if err := os.Remove(filepath.Join(c.root, "checkpoint")); err != nil { - if !os.IsNotExist(err) { - logrus.Error(err) - } - } - case notify.GetScript() == "orphan-pts-master": - scm, err := unix.ParseSocketControlMessage(oob) - if err != nil { - return err - } - fds, err := unix.ParseUnixRights(&scm[0]) - if err != nil { - return err - } - - master := os.NewFile(uintptr(fds[0]), "orphan-pts-master") - defer master.Close() - - // While we can access console.master, using the API is a good idea. 
- if err := utils.SendFd(process.ConsoleSocket, master.Name(), master.Fd()); err != nil { - return err - } - } - return nil -} - -func (c *linuxContainer) updateState(process parentProcess) (*State, error) { - if process != nil { - c.initProcess = process - } - state, err := c.currentState() - if err != nil { - return nil, err - } - err = c.saveState(state) - if err != nil { - return nil, err - } - return state, nil -} - -func (c *linuxContainer) saveState(s *State) error { - f, err := os.Create(filepath.Join(c.root, stateFilename)) - if err != nil { - return err - } - defer f.Close() - return utils.WriteJSON(f, s) -} - -func (c *linuxContainer) deleteState() error { - return os.Remove(filepath.Join(c.root, stateFilename)) -} - -func (c *linuxContainer) currentStatus() (Status, error) { - if err := c.refreshState(); err != nil { - return -1, err - } - return c.state.status(), nil -} - -// refreshState needs to be called to verify that the current state on the -// container is what is true. Because consumers of libcontainer can use it -// out of process we need to verify the container's status based on runtime -// information and not rely on our in process info. -func (c *linuxContainer) refreshState() error { - paused, err := c.isPaused() - if err != nil { - return err - } - if paused { - return c.state.transition(&pausedState{c: c}) - } - t, err := c.runType() - if err != nil { - return err - } - switch t { - case Created: - return c.state.transition(&createdState{c: c}) - case Running: - return c.state.transition(&runningState{c: c}) - } - return c.state.transition(&stoppedState{c: c}) -} - -func (c *linuxContainer) runType() (Status, error) { - if c.initProcess == nil { - return Stopped, nil - } - pid := c.initProcess.pid() - stat, err := system.Stat(pid) - if err != nil { - return Stopped, nil - } - if stat.StartTime != c.initProcessStartTime || stat.State == system.Zombie || stat.State == system.Dead { - return Stopped, nil - } - // We'll create exec fifo and blocking on it after container is created, - // and delete it after start container. - if _, err := os.Stat(filepath.Join(c.root, execFifoFilename)); err == nil { - return Created, nil - } - return Running, nil -} - -func (c *linuxContainer) isPaused() (bool, error) { - fcg := c.cgroupManager.GetPaths()["freezer"] - if fcg == "" { - // A container doesn't have a freezer cgroup - return false, nil - } - data, err := ioutil.ReadFile(filepath.Join(fcg, "freezer.state")) - if err != nil { - // If freezer cgroup is not mounted, the container would just be not paused. 
- if os.IsNotExist(err) { - return false, nil - } - return false, newSystemErrorWithCause(err, "checking if container is paused") - } - return bytes.Equal(bytes.TrimSpace(data), []byte("FROZEN")), nil -} - -func (c *linuxContainer) currentState() (*State, error) { - var ( - startTime uint64 - externalDescriptors []string - pid = -1 - ) - if c.initProcess != nil { - pid = c.initProcess.pid() - startTime, _ = c.initProcess.startTime() - externalDescriptors = c.initProcess.externalDescriptors() - } - intelRdtPath, err := intelrdt.GetIntelRdtPath(c.ID()) - if err != nil { - intelRdtPath = "" - } - state := &State{ - BaseState: BaseState{ - ID: c.ID(), - Config: *c.config, - InitProcessPid: pid, - InitProcessStartTime: startTime, - Created: c.created, - }, - Rootless: c.config.RootlessEUID && c.config.RootlessCgroups, - CgroupPaths: c.cgroupManager.GetPaths(), - IntelRdtPath: intelRdtPath, - NamespacePaths: make(map[configs.NamespaceType]string), - ExternalDescriptors: externalDescriptors, - } - if pid > 0 { - for _, ns := range c.config.Namespaces { - state.NamespacePaths[ns.Type] = ns.GetPath(pid) - } - for _, nsType := range configs.NamespaceTypes() { - if !configs.IsNamespaceSupported(nsType) { - continue - } - if _, ok := state.NamespacePaths[nsType]; !ok { - ns := configs.Namespace{Type: nsType} - state.NamespacePaths[ns.Type] = ns.GetPath(pid) - } - } - } - return state, nil -} - -func (c *linuxContainer) currentOCIState() (*specs.State, error) { - bundle, annotations := utils.Annotations(c.config.Labels) - state := &specs.State{ - Version: specs.Version, - ID: c.ID(), - Bundle: bundle, - Annotations: annotations, - } - status, err := c.currentStatus() - if err != nil { - return nil, err - } - state.Status = status.String() - if status != Stopped { - if c.initProcess != nil { - state.Pid = c.initProcess.pid() - } - } - return state, nil -} - -// orderNamespacePaths sorts namespace paths into a list of paths that we -// can setns in order. -func (c *linuxContainer) orderNamespacePaths(namespaces map[configs.NamespaceType]string) ([]string, error) { - paths := []string{} - for _, ns := range configs.NamespaceTypes() { - - // Remove namespaces that we don't need to join. - if !c.config.Namespaces.Contains(ns) { - continue - } - - if p, ok := namespaces[ns]; ok && p != "" { - // check if the requested namespace is supported - if !configs.IsNamespaceSupported(ns) { - return nil, newSystemError(fmt.Errorf("namespace %s is not supported", ns)) - } - // only set to join this namespace if it exists - if _, err := os.Lstat(p); err != nil { - return nil, newSystemErrorWithCausef(err, "running lstat on namespace path %q", p) - } - // do not allow namespace path with comma as we use it to separate - // the namespace paths - if strings.ContainsRune(p, ',') { - return nil, newSystemError(fmt.Errorf("invalid path %s", p)) - } - paths = append(paths, fmt.Sprintf("%s:%s", configs.NsName(ns), p)) - } - - } - - return paths, nil -} - -func encodeIDMapping(idMap []configs.IDMap) ([]byte, error) { - data := bytes.NewBuffer(nil) - for _, im := range idMap { - line := fmt.Sprintf("%d %d %d\n", im.ContainerID, im.HostID, im.Size) - if _, err := data.WriteString(line); err != nil { - return nil, err - } - } - return data.Bytes(), nil -} - -// bootstrapData encodes the necessary data in netlink binary format -// as a io.Reader. -// Consumer can write the data to a bootstrap program -// such as one that uses nsenter package to bootstrap the container's -// init process correctly, i.e. 
with correct namespaces, uid/gid -// mapping etc. -func (c *linuxContainer) bootstrapData(cloneFlags uintptr, nsMaps map[configs.NamespaceType]string) (io.Reader, error) { - // create the netlink message - r := nl.NewNetlinkRequest(int(InitMsg), 0) - - // write cloneFlags - r.AddData(&Int32msg{ - Type: CloneFlagsAttr, - Value: uint32(cloneFlags), - }) - - // write custom namespace paths - if len(nsMaps) > 0 { - nsPaths, err := c.orderNamespacePaths(nsMaps) - if err != nil { - return nil, err - } - r.AddData(&Bytemsg{ - Type: NsPathsAttr, - Value: []byte(strings.Join(nsPaths, ",")), - }) - } - - // write namespace paths only when we are not joining an existing user ns - _, joinExistingUser := nsMaps[configs.NEWUSER] - if !joinExistingUser { - // write uid mappings - if len(c.config.UidMappings) > 0 { - if c.config.RootlessEUID && c.newuidmapPath != "" { - r.AddData(&Bytemsg{ - Type: UidmapPathAttr, - Value: []byte(c.newuidmapPath), - }) - } - b, err := encodeIDMapping(c.config.UidMappings) - if err != nil { - return nil, err - } - r.AddData(&Bytemsg{ - Type: UidmapAttr, - Value: b, - }) - } - - // write gid mappings - if len(c.config.GidMappings) > 0 { - b, err := encodeIDMapping(c.config.GidMappings) - if err != nil { - return nil, err - } - r.AddData(&Bytemsg{ - Type: GidmapAttr, - Value: b, - }) - if c.config.RootlessEUID && c.newgidmapPath != "" { - r.AddData(&Bytemsg{ - Type: GidmapPathAttr, - Value: []byte(c.newgidmapPath), - }) - } - if requiresRootOrMappingTool(c.config) { - r.AddData(&Boolmsg{ - Type: SetgroupAttr, - Value: true, - }) - } - } - } - - if c.config.OomScoreAdj != nil { - // write oom_score_adj - r.AddData(&Bytemsg{ - Type: OomScoreAdjAttr, - Value: []byte(fmt.Sprintf("%d", *c.config.OomScoreAdj)), - }) - } - - // write rootless - r.AddData(&Boolmsg{ - Type: RootlessEUIDAttr, - Value: c.config.RootlessEUID, - }) - - return bytes.NewReader(r.Serialize()), nil -} - -// ignoreTerminateErrors returns nil if the given err matches an error known -// to indicate that the terminate occurred successfully or err was nil, otherwise -// err is returned unaltered. 
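The byte slices produced by encodeIDMapping above are whitespace-separated "containerID hostID size" triples, the same layout the kernel accepts in /proc/<pid>/uid_map and gid_map. A standalone sketch, with a local IDMap type standing in for configs.IDMap:

package main

import (
	"bytes"
	"fmt"
)

// IDMap mirrors the relevant fields of configs.IDMap for illustration only.
type IDMap struct {
	ContainerID int
	HostID      int
	Size        int
}

// encodeIDMapping reproduces the helper above: one "containerID hostID size"
// line per mapping.
func encodeIDMapping(idMap []IDMap) []byte {
	var buf bytes.Buffer
	for _, im := range idMap {
		fmt.Fprintf(&buf, "%d %d %d\n", im.ContainerID, im.HostID, im.Size)
	}
	return buf.Bytes()
}

func main() {
	// Map container uid 0 onto host uid 1000 with a range of 65536 ids.
	fmt.Print(string(encodeIDMapping([]IDMap{{ContainerID: 0, HostID: 1000, Size: 65536}})))
	// Output: 0 1000 65536
}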
-func ignoreTerminateErrors(err error) error { - if err == nil { - return nil - } - s := err.Error() - switch { - case strings.Contains(s, "process already finished"), strings.Contains(s, "Wait was already called"): - return nil - } - return err -} - -func requiresRootOrMappingTool(c *configs.Config) bool { - gidMap := []configs.IDMap{ - {ContainerID: 0, HostID: os.Getegid(), Size: 1}, - } - return !reflect.DeepEqual(c.GidMappings, gidMap) -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/criu_opts_linux.go b/vendor/github.com/opencontainers/runc/libcontainer/criu_opts_linux.go deleted file mode 100644 index a2e344fc4..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/criu_opts_linux.go +++ /dev/null @@ -1,40 +0,0 @@ -package libcontainer - -// cgroup restoring strategy provided by criu -type cgMode uint32 - -const ( - CRIU_CG_MODE_SOFT cgMode = 3 + iota // restore cgroup properties if only dir created by criu - CRIU_CG_MODE_FULL // always restore all cgroups and their properties - CRIU_CG_MODE_STRICT // restore all, requiring them to not present in the system - CRIU_CG_MODE_DEFAULT // the same as CRIU_CG_MODE_SOFT -) - -type CriuPageServerInfo struct { - Address string // IP address of CRIU page server - Port int32 // port number of CRIU page server -} - -type VethPairName struct { - ContainerInterfaceName string - HostInterfaceName string -} - -type CriuOpts struct { - ImagesDirectory string // directory for storing image files - WorkDirectory string // directory to cd and write logs/pidfiles/stats to - ParentImage string // directory for storing parent image files in pre-dump and dump - LeaveRunning bool // leave container in running state after checkpoint - TcpEstablished bool // checkpoint/restore established TCP connections - ExternalUnixConnections bool // allow external unix connections - ShellJob bool // allow to dump and restore shell jobs - FileLocks bool // handle file locks, for safety - PreDump bool // call criu predump to perform iterative checkpoint - PageServer CriuPageServerInfo // allow to dump to criu page server - VethPairs []VethPairName // pass the veth to criu when restore - ManageCgroupsMode cgMode // dump or restore cgroup mode - EmptyNs uint32 // don't c/r properties for namespace from this mask - AutoDedup bool // auto deduplication for incremental dumps - LazyPages bool // restore memory pages lazily using userfaultfd - StatusFd string // fd for feedback when lazy server is ready -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/devices/devices.go b/vendor/github.com/opencontainers/runc/libcontainer/devices/devices.go deleted file mode 100644 index 5e2ab0581..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/devices/devices.go +++ /dev/null @@ -1,105 +0,0 @@ -package devices - -import ( - "errors" - "io/ioutil" - "os" - "path/filepath" - - "github.com/opencontainers/runc/libcontainer/configs" - - "golang.org/x/sys/unix" -) - -var ( - ErrNotADevice = errors.New("not a device node") -) - -// Testing dependencies -var ( - unixLstat = unix.Lstat - ioutilReadDir = ioutil.ReadDir -) - -// Given the path to a device and its cgroup_permissions(which cannot be easily queried) look up the information about a linux device and return that information as a Device struct. 
-func DeviceFromPath(path, permissions string) (*configs.Device, error) { - var stat unix.Stat_t - err := unixLstat(path, &stat) - if err != nil { - return nil, err - } - - var ( - devNumber = uint64(stat.Rdev) - major = unix.Major(devNumber) - minor = unix.Minor(devNumber) - ) - if major == 0 { - return nil, ErrNotADevice - } - - var ( - devType rune - mode = stat.Mode - ) - switch { - case mode&unix.S_IFBLK == unix.S_IFBLK: - devType = 'b' - case mode&unix.S_IFCHR == unix.S_IFCHR: - devType = 'c' - } - return &configs.Device{ - Type: devType, - Path: path, - Major: int64(major), - Minor: int64(minor), - Permissions: permissions, - FileMode: os.FileMode(mode), - Uid: stat.Uid, - Gid: stat.Gid, - }, nil -} - -func HostDevices() ([]*configs.Device, error) { - return getDevices("/dev") -} - -func getDevices(path string) ([]*configs.Device, error) { - files, err := ioutilReadDir(path) - if err != nil { - return nil, err - } - out := []*configs.Device{} - for _, f := range files { - switch { - case f.IsDir(): - switch f.Name() { - // ".lxc" & ".lxd-mounts" added to address https://github.com/lxc/lxd/issues/2825 - case "pts", "shm", "fd", "mqueue", ".lxc", ".lxd-mounts": - continue - default: - sub, err := getDevices(filepath.Join(path, f.Name())) - if err != nil { - return nil, err - } - - out = append(out, sub...) - continue - } - case f.Name() == "console": - continue - } - device, err := DeviceFromPath(filepath.Join(path, f.Name()), "rwm") - if err != nil { - if err == ErrNotADevice { - continue - } - if os.IsNotExist(err) { - continue - } - return nil, err - } - out = append(out, device) - } - return out, nil -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/error.go b/vendor/github.com/opencontainers/runc/libcontainer/error.go deleted file mode 100644 index 21a3789ba..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/error.go +++ /dev/null @@ -1,70 +0,0 @@ -package libcontainer - -import "io" - -// ErrorCode is the API error code type. -type ErrorCode int - -// API error codes. -const ( - // Factory errors - IdInUse ErrorCode = iota - InvalidIdFormat - - // Container errors - ContainerNotExists - ContainerPaused - ContainerNotStopped - ContainerNotRunning - ContainerNotPaused - - // Process errors - NoProcessOps - - // Common errors - ConfigInvalid - ConsoleExists - SystemError -) - -func (c ErrorCode) String() string { - switch c { - case IdInUse: - return "Id already in use" - case InvalidIdFormat: - return "Invalid format" - case ContainerPaused: - return "Container paused" - case ConfigInvalid: - return "Invalid configuration" - case SystemError: - return "System error" - case ContainerNotExists: - return "Container does not exist" - case ContainerNotStopped: - return "Container is not stopped" - case ContainerNotRunning: - return "Container is not running" - case ConsoleExists: - return "Console exists for process" - case ContainerNotPaused: - return "Container is not paused" - case NoProcessOps: - return "No process operations" - default: - return "Unknown error" - } -} - -// Error is the API error type. -type Error interface { - error - - // Returns an error if it failed to write the detail of the Error to w. - // The detail of the Error may include the error message and a - // representation of the stack trace. - Detail(w io.Writer) error - - // Returns the error code for this error. 
- Code() ErrorCode -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/factory.go b/vendor/github.com/opencontainers/runc/libcontainer/factory.go deleted file mode 100644 index 0986cd77e..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/factory.go +++ /dev/null @@ -1,44 +0,0 @@ -package libcontainer - -import ( - "github.com/opencontainers/runc/libcontainer/configs" -) - -type Factory interface { - // Creates a new container with the given id and starts the initial process inside it. - // id must be a string containing only letters, digits and underscores and must contain - // between 1 and 1024 characters, inclusive. - // - // The id must not already be in use by an existing container. Containers created using - // a factory with the same path (and filesystem) must have distinct ids. - // - // Returns the new container with a running process. - // - // errors: - // IdInUse - id is already in use by a container - // InvalidIdFormat - id has incorrect format - // ConfigInvalid - config is invalid - // Systemerror - System error - // - // On error, any partially created container parts are cleaned up (the operation is atomic). - Create(id string, config *configs.Config) (Container, error) - - // Load takes an ID for an existing container and returns the container information - // from the state. This presents a read only view of the container. - // - // errors: - // Path does not exist - // System error - Load(id string) (Container, error) - - // StartInitialization is an internal API to libcontainer used during the reexec of the - // container. - // - // Errors: - // Pipe connection error - // System error - StartInitialization() error - - // Type returns info string about factory type (e.g. lxc, libcontainer...) - Type() string -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/factory_linux.go b/vendor/github.com/opencontainers/runc/libcontainer/factory_linux.go deleted file mode 100644 index 78a8c0a81..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/factory_linux.go +++ /dev/null @@ -1,395 +0,0 @@ -// +build linux - -package libcontainer - -import ( - "encoding/json" - "fmt" - "os" - "path/filepath" - "regexp" - "runtime/debug" - "strconv" - - "github.com/cyphar/filepath-securejoin" - "github.com/opencontainers/runc/libcontainer/cgroups" - "github.com/opencontainers/runc/libcontainer/cgroups/fs" - "github.com/opencontainers/runc/libcontainer/cgroups/systemd" - "github.com/opencontainers/runc/libcontainer/configs" - "github.com/opencontainers/runc/libcontainer/configs/validate" - "github.com/opencontainers/runc/libcontainer/intelrdt" - "github.com/opencontainers/runc/libcontainer/mount" - "github.com/opencontainers/runc/libcontainer/utils" - - "golang.org/x/sys/unix" -) - -const ( - stateFilename = "state.json" - execFifoFilename = "exec.fifo" -) - -var idRegex = regexp.MustCompile(`^[\w+-\.]+$`) - -// InitArgs returns an options func to configure a LinuxFactory with the -// provided init binary path and arguments. -func InitArgs(args ...string) func(*LinuxFactory) error { - return func(l *LinuxFactory) (err error) { - if len(args) > 0 { - // Resolve relative paths to ensure that its available - // after directory changes. - if args[0], err = filepath.Abs(args[0]); err != nil { - return newGenericError(err, ConfigInvalid) - } - } - - l.InitArgs = args - return nil - } -} - -// SystemdCgroups is an options func to configure a LinuxFactory to return -// containers that use systemd to create and manage cgroups. 
-func SystemdCgroups(l *LinuxFactory) error { - systemdCgroupsManager, err := systemd.NewSystemdCgroupsManager() - if err != nil { - return err - } - l.NewCgroupsManager = systemdCgroupsManager - return nil -} - -// Cgroupfs is an options func to configure a LinuxFactory to return containers -// that use the native cgroups filesystem implementation to create and manage -// cgroups. -func Cgroupfs(l *LinuxFactory) error { - l.NewCgroupsManager = func(config *configs.Cgroup, paths map[string]string) cgroups.Manager { - return &fs.Manager{ - Cgroups: config, - Paths: paths, - } - } - return nil -} - -// RootlessCgroupfs is an options func to configure a LinuxFactory to return -// containers that use the native cgroups filesystem implementation to create -// and manage cgroups. The difference between RootlessCgroupfs and Cgroupfs is -// that RootlessCgroupfs can transparently handle permission errors that occur -// during rootless container (including euid=0 in userns) setup (while still allowing cgroup usage if -// they've been set up properly). -func RootlessCgroupfs(l *LinuxFactory) error { - l.NewCgroupsManager = func(config *configs.Cgroup, paths map[string]string) cgroups.Manager { - return &fs.Manager{ - Cgroups: config, - Rootless: true, - Paths: paths, - } - } - return nil -} - -// IntelRdtfs is an options func to configure a LinuxFactory to return -// containers that use the Intel RDT "resource control" filesystem to -// create and manage Intel RDT resources (e.g., L3 cache, memory bandwidth). -func IntelRdtFs(l *LinuxFactory) error { - l.NewIntelRdtManager = func(config *configs.Config, id string, path string) intelrdt.Manager { - return &intelrdt.IntelRdtManager{ - Config: config, - Id: id, - Path: path, - } - } - return nil -} - -// TmpfsRoot is an option func to mount LinuxFactory.Root to tmpfs. -func TmpfsRoot(l *LinuxFactory) error { - mounted, err := mount.Mounted(l.Root) - if err != nil { - return err - } - if !mounted { - if err := unix.Mount("tmpfs", l.Root, "tmpfs", 0, ""); err != nil { - return err - } - } - return nil -} - -// CriuPath returns an option func to configure a LinuxFactory with the -// provided criupath -func CriuPath(criupath string) func(*LinuxFactory) error { - return func(l *LinuxFactory) error { - l.CriuPath = criupath - return nil - } -} - -// New returns a linux based container factory based in the root directory and -// configures the factory with the provided option funcs. -func New(root string, options ...func(*LinuxFactory) error) (Factory, error) { - if root != "" { - if err := os.MkdirAll(root, 0700); err != nil { - return nil, newGenericError(err, SystemError) - } - } - l := &LinuxFactory{ - Root: root, - InitPath: "/proc/self/exe", - InitArgs: []string{os.Args[0], "init"}, - Validator: validate.New(), - CriuPath: "criu", - } - Cgroupfs(l) - for _, opt := range options { - if opt == nil { - continue - } - if err := opt(l); err != nil { - return nil, err - } - } - return l, nil -} - -// LinuxFactory implements the default factory interface for linux based systems. -type LinuxFactory struct { - // Root directory for the factory to store state. - Root string - - // InitPath is the path for calling the init responsibilities for spawning - // a container. - InitPath string - - // InitArgs are arguments for calling the init responsibilities for spawning - // a container. - InitArgs []string - - // CriuPath is the path to the criu binary used for checkpoint and restore of - // containers. 
- CriuPath string - - // New{u,g}uidmapPath is the path to the binaries used for mapping with - // rootless containers. - NewuidmapPath string - NewgidmapPath string - - // Validator provides validation to container configurations. - Validator validate.Validator - - // NewCgroupsManager returns an initialized cgroups manager for a single container. - NewCgroupsManager func(config *configs.Cgroup, paths map[string]string) cgroups.Manager - - // NewIntelRdtManager returns an initialized Intel RDT manager for a single container. - NewIntelRdtManager func(config *configs.Config, id string, path string) intelrdt.Manager -} - -func (l *LinuxFactory) Create(id string, config *configs.Config) (Container, error) { - if l.Root == "" { - return nil, newGenericError(fmt.Errorf("invalid root"), ConfigInvalid) - } - if err := l.validateID(id); err != nil { - return nil, err - } - if err := l.Validator.Validate(config); err != nil { - return nil, newGenericError(err, ConfigInvalid) - } - containerRoot, err := securejoin.SecureJoin(l.Root, id) - if err != nil { - return nil, err - } - if _, err := os.Stat(containerRoot); err == nil { - return nil, newGenericError(fmt.Errorf("container with id exists: %v", id), IdInUse) - } else if !os.IsNotExist(err) { - return nil, newGenericError(err, SystemError) - } - if err := os.MkdirAll(containerRoot, 0711); err != nil { - return nil, newGenericError(err, SystemError) - } - if err := os.Chown(containerRoot, unix.Geteuid(), unix.Getegid()); err != nil { - return nil, newGenericError(err, SystemError) - } - c := &linuxContainer{ - id: id, - root: containerRoot, - config: config, - initPath: l.InitPath, - initArgs: l.InitArgs, - criuPath: l.CriuPath, - newuidmapPath: l.NewuidmapPath, - newgidmapPath: l.NewgidmapPath, - cgroupManager: l.NewCgroupsManager(config.Cgroups, nil), - } - if intelrdt.IsCatEnabled() || intelrdt.IsMbaEnabled() { - c.intelRdtManager = l.NewIntelRdtManager(config, id, "") - } - c.state = &stoppedState{c: c} - return c, nil -} - -func (l *LinuxFactory) Load(id string) (Container, error) { - if l.Root == "" { - return nil, newGenericError(fmt.Errorf("invalid root"), ConfigInvalid) - } - //when load, we need to check id is valid or not. 
- if err := l.validateID(id); err != nil { - return nil, err - } - containerRoot, err := securejoin.SecureJoin(l.Root, id) - if err != nil { - return nil, err - } - state, err := l.loadState(containerRoot, id) - if err != nil { - return nil, err - } - r := &nonChildProcess{ - processPid: state.InitProcessPid, - processStartTime: state.InitProcessStartTime, - fds: state.ExternalDescriptors, - } - c := &linuxContainer{ - initProcess: r, - initProcessStartTime: state.InitProcessStartTime, - id: id, - config: &state.Config, - initPath: l.InitPath, - initArgs: l.InitArgs, - criuPath: l.CriuPath, - newuidmapPath: l.NewuidmapPath, - newgidmapPath: l.NewgidmapPath, - cgroupManager: l.NewCgroupsManager(state.Config.Cgroups, state.CgroupPaths), - root: containerRoot, - created: state.Created, - } - c.state = &loadedState{c: c} - if err := c.refreshState(); err != nil { - return nil, err - } - if intelrdt.IsCatEnabled() || intelrdt.IsMbaEnabled() { - c.intelRdtManager = l.NewIntelRdtManager(&state.Config, id, state.IntelRdtPath) - } - return c, nil -} - -func (l *LinuxFactory) Type() string { - return "libcontainer" -} - -// StartInitialization loads a container by opening the pipe fd from the parent to read the configuration and state -// This is a low level implementation detail of the reexec and should not be consumed externally -func (l *LinuxFactory) StartInitialization() (err error) { - var ( - pipefd, fifofd int - consoleSocket *os.File - envInitPipe = os.Getenv("_LIBCONTAINER_INITPIPE") - envFifoFd = os.Getenv("_LIBCONTAINER_FIFOFD") - envConsole = os.Getenv("_LIBCONTAINER_CONSOLE") - ) - - // Get the INITPIPE. - pipefd, err = strconv.Atoi(envInitPipe) - if err != nil { - return fmt.Errorf("unable to convert _LIBCONTAINER_INITPIPE=%s to int: %s", envInitPipe, err) - } - - var ( - pipe = os.NewFile(uintptr(pipefd), "pipe") - it = initType(os.Getenv("_LIBCONTAINER_INITTYPE")) - ) - defer pipe.Close() - - // Only init processes have FIFOFD. - fifofd = -1 - if it == initStandard { - if fifofd, err = strconv.Atoi(envFifoFd); err != nil { - return fmt.Errorf("unable to convert _LIBCONTAINER_FIFOFD=%s to int: %s", envFifoFd, err) - } - } - - if envConsole != "" { - console, err := strconv.Atoi(envConsole) - if err != nil { - return fmt.Errorf("unable to convert _LIBCONTAINER_CONSOLE=%s to int: %s", envConsole, err) - } - consoleSocket = os.NewFile(uintptr(console), "console-socket") - defer consoleSocket.Close() - } - - // clear the current process's environment to clean any libcontainer - // specific env vars. - os.Clearenv() - - defer func() { - // We have an error during the initialization of the container's init, - // send it back to the parent process in the form of an initError. - if werr := utils.WriteJSON(pipe, syncT{procError}); werr != nil { - fmt.Fprintln(os.Stderr, err) - return - } - if werr := utils.WriteJSON(pipe, newSystemError(err)); werr != nil { - fmt.Fprintln(os.Stderr, err) - return - } - }() - defer func() { - if e := recover(); e != nil { - err = fmt.Errorf("panic from initialization: %v, %v", e, string(debug.Stack())) - } - }() - - i, err := newContainerInit(it, pipe, consoleSocket, fifofd) - if err != nil { - return err - } - - // If Init succeeds, syscall.Exec will not return, hence none of the defers will be called. 
- return i.Init() -} - -func (l *LinuxFactory) loadState(root, id string) (*State, error) { - stateFilePath, err := securejoin.SecureJoin(root, stateFilename) - if err != nil { - return nil, err - } - f, err := os.Open(stateFilePath) - if err != nil { - if os.IsNotExist(err) { - return nil, newGenericError(fmt.Errorf("container %q does not exist", id), ContainerNotExists) - } - return nil, newGenericError(err, SystemError) - } - defer f.Close() - var state *State - if err := json.NewDecoder(f).Decode(&state); err != nil { - return nil, newGenericError(err, SystemError) - } - return state, nil -} - -func (l *LinuxFactory) validateID(id string) error { - if !idRegex.MatchString(id) || string(os.PathSeparator)+id != utils.CleanPath(string(os.PathSeparator)+id) { - return newGenericError(fmt.Errorf("invalid id format: %v", id), InvalidIdFormat) - } - - return nil -} - -// NewuidmapPath returns an option func to configure a LinuxFactory with the -// provided .. -func NewuidmapPath(newuidmapPath string) func(*LinuxFactory) error { - return func(l *LinuxFactory) error { - l.NewuidmapPath = newuidmapPath - return nil - } -} - -// NewgidmapPath returns an option func to configure a LinuxFactory with the -// provided .. -func NewgidmapPath(newgidmapPath string) func(*LinuxFactory) error { - return func(l *LinuxFactory) error { - l.NewgidmapPath = newgidmapPath - return nil - } -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/generic_error.go b/vendor/github.com/opencontainers/runc/libcontainer/generic_error.go deleted file mode 100644 index 6e7de2fe7..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/generic_error.go +++ /dev/null @@ -1,92 +0,0 @@ -package libcontainer - -import ( - "fmt" - "io" - "text/template" - "time" - - "github.com/opencontainers/runc/libcontainer/stacktrace" -) - -var errorTemplate = template.Must(template.New("error").Parse(`Timestamp: {{.Timestamp}} -Code: {{.ECode}} -{{if .Message }} -Message: {{.Message}} -{{end}} -Frames:{{range $i, $frame := .Stack.Frames}} ---- -{{$i}}: {{$frame.Function}} -Package: {{$frame.Package}} -File: {{$frame.File}}@{{$frame.Line}}{{end}} -`)) - -func newGenericError(err error, c ErrorCode) Error { - if le, ok := err.(Error); ok { - return le - } - gerr := &genericError{ - Timestamp: time.Now(), - Err: err, - ECode: c, - Stack: stacktrace.Capture(1), - } - if err != nil { - gerr.Message = err.Error() - } - return gerr -} - -func newSystemError(err error) Error { - return createSystemError(err, "") -} - -func newSystemErrorWithCausef(err error, cause string, v ...interface{}) Error { - return createSystemError(err, fmt.Sprintf(cause, v...)) -} - -func newSystemErrorWithCause(err error, cause string) Error { - return createSystemError(err, cause) -} - -// createSystemError creates the specified error with the correct number of -// stack frames skipped. This is only to be called by the other functions for -// formatting the error. 
-func createSystemError(err error, cause string) Error { - gerr := &genericError{ - Timestamp: time.Now(), - Err: err, - ECode: SystemError, - Cause: cause, - Stack: stacktrace.Capture(2), - } - if err != nil { - gerr.Message = err.Error() - } - return gerr -} - -type genericError struct { - Timestamp time.Time - ECode ErrorCode - Err error `json:"-"` - Cause string - Message string - Stack stacktrace.Stacktrace -} - -func (e *genericError) Error() string { - if e.Cause == "" { - return e.Message - } - frame := e.Stack.Frames[0] - return fmt.Sprintf("%s:%d: %s caused %q", frame.File, frame.Line, e.Cause, e.Message) -} - -func (e *genericError) Code() ErrorCode { - return e.ECode -} - -func (e *genericError) Detail(w io.Writer) error { - return errorTemplate.Execute(w, e) -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/init_linux.go b/vendor/github.com/opencontainers/runc/libcontainer/init_linux.go deleted file mode 100644 index cd7ff67a7..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/init_linux.go +++ /dev/null @@ -1,536 +0,0 @@ -// +build linux - -package libcontainer - -import ( - "encoding/json" - "fmt" - "io" - "io/ioutil" - "net" - "os" - "strings" - "syscall" // only for Errno - "unsafe" - - "golang.org/x/sys/unix" - - "github.com/containerd/console" - "github.com/opencontainers/runc/libcontainer/cgroups" - "github.com/opencontainers/runc/libcontainer/configs" - "github.com/opencontainers/runc/libcontainer/system" - "github.com/opencontainers/runc/libcontainer/user" - "github.com/opencontainers/runc/libcontainer/utils" - "github.com/pkg/errors" - "github.com/sirupsen/logrus" - "github.com/vishvananda/netlink" -) - -type initType string - -const ( - initSetns initType = "setns" - initStandard initType = "standard" -) - -type pid struct { - Pid int `json:"pid"` - PidFirstChild int `json:"pid_first"` -} - -// network is an internal struct used to setup container networks. -type network struct { - configs.Network - - // TempVethPeerName is a unique temporary veth peer name that was placed into - // the container's namespace. 
- TempVethPeerName string `json:"temp_veth_peer_name"` -} - -// initConfig is used for transferring parameters from Exec() to Init() -type initConfig struct { - Args []string `json:"args"` - Env []string `json:"env"` - Cwd string `json:"cwd"` - Capabilities *configs.Capabilities `json:"capabilities"` - ProcessLabel string `json:"process_label"` - AppArmorProfile string `json:"apparmor_profile"` - NoNewPrivileges bool `json:"no_new_privileges"` - User string `json:"user"` - AdditionalGroups []string `json:"additional_groups"` - Config *configs.Config `json:"config"` - Networks []*network `json:"network"` - PassedFilesCount int `json:"passed_files_count"` - ContainerId string `json:"containerid"` - Rlimits []configs.Rlimit `json:"rlimits"` - CreateConsole bool `json:"create_console"` - ConsoleWidth uint16 `json:"console_width"` - ConsoleHeight uint16 `json:"console_height"` - RootlessEUID bool `json:"rootless_euid,omitempty"` - RootlessCgroups bool `json:"rootless_cgroups,omitempty"` -} - -type initer interface { - Init() error -} - -func newContainerInit(t initType, pipe *os.File, consoleSocket *os.File, fifoFd int) (initer, error) { - var config *initConfig - if err := json.NewDecoder(pipe).Decode(&config); err != nil { - return nil, err - } - if err := populateProcessEnvironment(config.Env); err != nil { - return nil, err - } - switch t { - case initSetns: - return &linuxSetnsInit{ - pipe: pipe, - consoleSocket: consoleSocket, - config: config, - }, nil - case initStandard: - return &linuxStandardInit{ - pipe: pipe, - consoleSocket: consoleSocket, - parentPid: unix.Getppid(), - config: config, - fifoFd: fifoFd, - }, nil - } - return nil, fmt.Errorf("unknown init type %q", t) -} - -// populateProcessEnvironment loads the provided environment variables into the -// current processes's environment. 
-func populateProcessEnvironment(env []string) error { - for _, pair := range env { - p := strings.SplitN(pair, "=", 2) - if len(p) < 2 { - return fmt.Errorf("invalid environment '%v'", pair) - } - if err := os.Setenv(p[0], p[1]); err != nil { - return err - } - } - return nil -} - -// finalizeNamespace drops the caps, sets the correct user -// and working dir, and closes any leaked file descriptors -// before executing the command inside the namespace -func finalizeNamespace(config *initConfig) error { - // Ensure that all unwanted fds we may have accidentally - // inherited are marked close-on-exec so they stay out of the - // container - if err := utils.CloseExecFrom(config.PassedFilesCount + 3); err != nil { - return errors.Wrap(err, "close exec fds") - } - - capabilities := &configs.Capabilities{} - if config.Capabilities != nil { - capabilities = config.Capabilities - } else if config.Config.Capabilities != nil { - capabilities = config.Config.Capabilities - } - w, err := newContainerCapList(capabilities) - if err != nil { - return err - } - // drop capabilities in bounding set before changing user - if err := w.ApplyBoundingSet(); err != nil { - return errors.Wrap(err, "apply bounding set") - } - // preserve existing capabilities while we change users - if err := system.SetKeepCaps(); err != nil { - return errors.Wrap(err, "set keep caps") - } - if err := setupUser(config); err != nil { - return errors.Wrap(err, "setup user") - } - if err := system.ClearKeepCaps(); err != nil { - return errors.Wrap(err, "clear keep caps") - } - if err := w.ApplyCaps(); err != nil { - return errors.Wrap(err, "apply caps") - } - if config.Cwd != "" { - if err := unix.Chdir(config.Cwd); err != nil { - return fmt.Errorf("chdir to cwd (%q) set in config.json failed: %v", config.Cwd, err) - } - } - return nil -} - -// setupConsole sets up the console from inside the container, and sends the -// master pty fd to the config.Pipe (using cmsg). This is done to ensure that -// consoles are scoped to a container properly (see runc#814 and the many -// issues related to that). This has to be run *after* we've pivoted to the new -// rootfs (and the users' configuration is entirely set up). -func setupConsole(socket *os.File, config *initConfig, mount bool) error { - defer socket.Close() - // At this point, /dev/ptmx points to something that we would expect. We - // used to change the owner of the slave path, but since the /dev/pts mount - // can have gid=X set (at the users' option). So touching the owner of the - // slave PTY is not necessary, as the kernel will handle that for us. Note - // however, that setupUser (specifically fixStdioPermissions) *will* change - // the UID owner of the console to be the user the process will run as (so - // they can actually control their console). - - pty, slavePath, err := console.NewPty() - if err != nil { - return err - } - - if config.ConsoleHeight != 0 && config.ConsoleWidth != 0 { - err = pty.Resize(console.WinSize{ - Height: config.ConsoleHeight, - Width: config.ConsoleWidth, - }) - - if err != nil { - return err - } - } - - // After we return from here, we don't need the console anymore. - defer pty.Close() - - // Mount the console inside our rootfs. - if mount { - if err := mountConsole(slavePath); err != nil { - return err - } - } - // While we can access console.master, using the API is a good idea. - if err := utils.SendFd(socket, pty.Name(), pty.Fd()); err != nil { - return err - } - // Now, dup over all the things. 
- return dupStdio(slavePath) -} - -// syncParentReady sends to the given pipe a JSON payload which indicates that -// the init is ready to Exec the child process. It then waits for the parent to -// indicate that it is cleared to Exec. -func syncParentReady(pipe io.ReadWriter) error { - // Tell parent. - if err := writeSync(pipe, procReady); err != nil { - return err - } - - // Wait for parent to give the all-clear. - return readSync(pipe, procRun) -} - -// syncParentHooks sends to the given pipe a JSON payload which indicates that -// the parent should execute pre-start hooks. It then waits for the parent to -// indicate that it is cleared to resume. -func syncParentHooks(pipe io.ReadWriter) error { - // Tell parent. - if err := writeSync(pipe, procHooks); err != nil { - return err - } - - // Wait for parent to give the all-clear. - return readSync(pipe, procResume) -} - -// setupUser changes the groups, gid, and uid for the user inside the container -func setupUser(config *initConfig) error { - // Set up defaults. - defaultExecUser := user.ExecUser{ - Uid: 0, - Gid: 0, - Home: "/", - } - - passwdPath, err := user.GetPasswdPath() - if err != nil { - return err - } - - groupPath, err := user.GetGroupPath() - if err != nil { - return err - } - - execUser, err := user.GetExecUserPath(config.User, &defaultExecUser, passwdPath, groupPath) - if err != nil { - return err - } - - var addGroups []int - if len(config.AdditionalGroups) > 0 { - addGroups, err = user.GetAdditionalGroupsPath(config.AdditionalGroups, groupPath) - if err != nil { - return err - } - } - - // Rather than just erroring out later in setuid(2) and setgid(2), check - // that the user is mapped here. - if _, err := config.Config.HostUID(execUser.Uid); err != nil { - return fmt.Errorf("cannot set uid to unmapped user in user namespace") - } - if _, err := config.Config.HostGID(execUser.Gid); err != nil { - return fmt.Errorf("cannot set gid to unmapped user in user namespace") - } - - if config.RootlessEUID { - // We cannot set any additional groups in a rootless container and thus - // we bail if the user asked us to do so. TODO: We currently can't do - // this check earlier, but if libcontainer.Process.User was typesafe - // this might work. - if len(addGroups) > 0 { - return fmt.Errorf("cannot set any additional groups in a rootless container") - } - } - - // Before we change to the container's user make sure that the processes - // STDIO is correctly owned by the user that we are switching to. - if err := fixStdioPermissions(config, execUser); err != nil { - return err - } - - setgroups, err := ioutil.ReadFile("/proc/self/setgroups") - if err != nil && !os.IsNotExist(err) { - return err - } - - // This isn't allowed in an unprivileged user namespace since Linux 3.19. - // There's nothing we can do about /etc/group entries, so we silently - // ignore setting groups here (since the user didn't explicitly ask us to - // set the group). - allowSupGroups := !config.RootlessEUID && strings.TrimSpace(string(setgroups)) != "deny" - - if allowSupGroups { - suppGroups := append(execUser.Sgids, addGroups...) 
- if err := unix.Setgroups(suppGroups); err != nil { - return err - } - } - - if err := system.Setgid(execUser.Gid); err != nil { - return err - } - if err := system.Setuid(execUser.Uid); err != nil { - return err - } - - // if we didn't get HOME already, set it based on the user's HOME - if envHome := os.Getenv("HOME"); envHome == "" { - if err := os.Setenv("HOME", execUser.Home); err != nil { - return err - } - } - return nil -} - -// fixStdioPermissions fixes the permissions of PID 1's STDIO within the container to the specified user. -// The ownership needs to match because it is created outside of the container and needs to be -// localized. -func fixStdioPermissions(config *initConfig, u *user.ExecUser) error { - var null unix.Stat_t - if err := unix.Stat("/dev/null", &null); err != nil { - return err - } - for _, fd := range []uintptr{ - os.Stdin.Fd(), - os.Stderr.Fd(), - os.Stdout.Fd(), - } { - var s unix.Stat_t - if err := unix.Fstat(int(fd), &s); err != nil { - return err - } - - // Skip chown of /dev/null if it was used as one of the STDIO fds. - if s.Rdev == null.Rdev { - continue - } - - // We only change the uid owner (as it is possible for the mount to - // prefer a different gid, and there's no reason for us to change it). - // The reason why we don't just leave the default uid=X mount setup is - // that users expect to be able to actually use their console. Without - // this code, you couldn't effectively run as a non-root user inside a - // container and also have a console set up. - if err := unix.Fchown(int(fd), u.Uid, int(s.Gid)); err != nil { - // If we've hit an EINVAL then s.Gid isn't mapped in the user - // namespace. If we've hit an EPERM then the inode's current owner - // is not mapped in our user namespace (in particular, - // privileged_wrt_inode_uidgid() has failed). In either case, we - // are in a configuration where it's better for us to just not - // touch the stdio rather than bail at this point. - if err == unix.EINVAL || err == unix.EPERM { - continue - } - return err - } - } - return nil -} - -// setupNetwork sets up and initializes any network interface inside the container. 
-func setupNetwork(config *initConfig) error { - for _, config := range config.Networks { - strategy, err := getStrategy(config.Type) - if err != nil { - return err - } - if err := strategy.initialize(config); err != nil { - return err - } - } - return nil -} - -func setupRoute(config *configs.Config) error { - for _, config := range config.Routes { - _, dst, err := net.ParseCIDR(config.Destination) - if err != nil { - return err - } - src := net.ParseIP(config.Source) - if src == nil { - return fmt.Errorf("Invalid source for route: %s", config.Source) - } - gw := net.ParseIP(config.Gateway) - if gw == nil { - return fmt.Errorf("Invalid gateway for route: %s", config.Gateway) - } - l, err := netlink.LinkByName(config.InterfaceName) - if err != nil { - return err - } - route := &netlink.Route{ - Scope: netlink.SCOPE_UNIVERSE, - Dst: dst, - Src: src, - Gw: gw, - LinkIndex: l.Attrs().Index, - } - if err := netlink.RouteAdd(route); err != nil { - return err - } - } - return nil -} - -func setupRlimits(limits []configs.Rlimit, pid int) error { - for _, rlimit := range limits { - if err := system.Prlimit(pid, rlimit.Type, unix.Rlimit{Max: rlimit.Hard, Cur: rlimit.Soft}); err != nil { - return fmt.Errorf("error setting rlimit type %v: %v", rlimit.Type, err) - } - } - return nil -} - -const _P_PID = 1 - -type siginfo struct { - si_signo int32 - si_errno int32 - si_code int32 - // below here is a union; si_pid is the only field we use - si_pid int32 - // Pad to 128 bytes as detailed in blockUntilWaitable - pad [96]byte -} - -// isWaitable returns true if the process has exited false otherwise. -// Its based off blockUntilWaitable in src/os/wait_waitid.go -func isWaitable(pid int) (bool, error) { - si := &siginfo{} - _, _, e := unix.Syscall6(unix.SYS_WAITID, _P_PID, uintptr(pid), uintptr(unsafe.Pointer(si)), unix.WEXITED|unix.WNOWAIT|unix.WNOHANG, 0, 0) - if e != 0 { - return false, os.NewSyscallError("waitid", e) - } - - return si.si_pid != 0, nil -} - -// isNoChildren returns true if err represents a unix.ECHILD (formerly syscall.ECHILD) false otherwise -func isNoChildren(err error) bool { - switch err := err.(type) { - case syscall.Errno: - if err == unix.ECHILD { - return true - } - case *os.SyscallError: - if err.Err == unix.ECHILD { - return true - } - } - return false -} - -// signalAllProcesses freezes then iterates over all the processes inside the -// manager's cgroups sending the signal s to them. -// If s is SIGKILL then it will wait for each process to exit. -// For all other signals it will check if the process is ready to report its -// exit status and only if it is will a wait be performed. -func signalAllProcesses(m cgroups.Manager, s os.Signal) error { - var procs []*os.Process - if err := m.Freeze(configs.Frozen); err != nil { - logrus.Warn(err) - } - pids, err := m.GetAllPids() - if err != nil { - m.Freeze(configs.Thawed) - return err - } - for _, pid := range pids { - p, err := os.FindProcess(pid) - if err != nil { - logrus.Warn(err) - continue - } - procs = append(procs, p) - if err := p.Signal(s); err != nil { - logrus.Warn(err) - } - } - if err := m.Freeze(configs.Thawed); err != nil { - logrus.Warn(err) - } - - subreaper, err := system.GetSubreaper() - if err != nil { - // The error here means that PR_GET_CHILD_SUBREAPER is not - // supported because this code might run on a kernel older - // than 3.4. We don't want to throw an error in that case, - // and we simplify things, considering there is no subreaper - // set. 
- subreaper = 0 - } - - for _, p := range procs { - if s != unix.SIGKILL { - if ok, err := isWaitable(p.Pid); err != nil { - if !isNoChildren(err) { - logrus.Warn("signalAllProcesses: ", p.Pid, err) - } - continue - } else if !ok { - // Not ready to report so don't wait - continue - } - } - - // In case a subreaper has been setup, this code must not - // wait for the process. Otherwise, we cannot be sure the - // current process will be reaped by the subreaper, while - // the subreaper might be waiting for this process in order - // to retrieve its exit code. - if subreaper == 0 { - if _, err := p.Wait(); err != nil { - if !isNoChildren(err) { - logrus.Warn("wait: ", err) - } - } - } - } - return nil -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/intelrdt/intelrdt.go b/vendor/github.com/opencontainers/runc/libcontainer/intelrdt/intelrdt.go deleted file mode 100644 index 0071ce755..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/intelrdt/intelrdt.go +++ /dev/null @@ -1,773 +0,0 @@ -// +build linux - -package intelrdt - -import ( - "bufio" - "fmt" - "io/ioutil" - "os" - "path/filepath" - "strconv" - "strings" - "sync" - - "github.com/opencontainers/runc/libcontainer/configs" -) - -/* - * About Intel RDT features: - * Intel platforms with new Xeon CPU support Resource Director Technology (RDT). - * Cache Allocation Technology (CAT) and Memory Bandwidth Allocation (MBA) are - * two sub-features of RDT. - * - * Cache Allocation Technology (CAT) provides a way for the software to restrict - * cache allocation to a defined 'subset' of L3 cache which may be overlapping - * with other 'subsets'. The different subsets are identified by class of - * service (CLOS) and each CLOS has a capacity bitmask (CBM). - * - * Memory Bandwidth Allocation (MBA) provides indirect and approximate throttle - * over memory bandwidth for the software. A user controls the resource by - * indicating the percentage of maximum memory bandwidth or memory bandwidth - * limit in MBps unit if MBA Software Controller is enabled. - * - * More details about Intel RDT CAT and MBA can be found in the section 17.18 - * of Intel Software Developer Manual: - * https://software.intel.com/en-us/articles/intel-sdm - * - * About Intel RDT kernel interface: - * In Linux 4.10 kernel or newer, the interface is defined and exposed via - * "resource control" filesystem, which is a "cgroup-like" interface. - * - * Comparing with cgroups, it has similar process management lifecycle and - * interfaces in a container. But unlike cgroups' hierarchy, it has single level - * filesystem layout. - * - * CAT and MBA features are introduced in Linux 4.10 and 4.12 kernel via - * "resource control" filesystem. - * - * Intel RDT "resource control" filesystem hierarchy: - * mount -t resctrl resctrl /sys/fs/resctrl - * tree /sys/fs/resctrl - * /sys/fs/resctrl/ - * |-- info - * | |-- L3 - * | | |-- cbm_mask - * | | |-- min_cbm_bits - * | | |-- num_closids - * | |-- MB - * | |-- bandwidth_gran - * | |-- delay_linear - * | |-- min_bandwidth - * | |-- num_closids - * |-- ... - * |-- schemata - * |-- tasks - * |-- - * |-- ... - * |-- schemata - * |-- tasks - * - * For runc, we can make use of `tasks` and `schemata` configuration for L3 - * cache and memory bandwidth resources constraints. - * - * The file `tasks` has a list of tasks that belongs to this group (e.g., - * " group). 
Tasks can be added to a group by writing the task ID - * to the "tasks" file (which will automatically remove them from the previous - * group to which they belonged). New tasks created by fork(2) and clone(2) are - * added to the same group as their parent. - * - * The file `schemata` has a list of all the resources available to this group. - * Each resource (L3 cache, memory bandwidth) has its own line and format. - * - * L3 cache schema: - * It has allocation bitmasks/values for L3 cache on each socket, which - * contains L3 cache id and capacity bitmask (CBM). - * Format: "L3:=;=;..." - * For example, on a two-socket machine, the schema line could be "L3:0=ff;1=c0" - * which means L3 cache id 0's CBM is 0xff, and L3 cache id 1's CBM is 0xc0. - * - * The valid L3 cache CBM is a *contiguous bits set* and number of bits that can - * be set is less than the max bit. The max bits in the CBM is varied among - * supported Intel CPU models. Kernel will check if it is valid when writing. - * e.g., default value 0xfffff in root indicates the max bits of CBM is 20 - * bits, which mapping to entire L3 cache capacity. Some valid CBM values to - * set in a group: 0xf, 0xf0, 0x3ff, 0x1f00 and etc. - * - * Memory bandwidth schema: - * It has allocation values for memory bandwidth on each socket, which contains - * L3 cache id and memory bandwidth. - * Format: "MB:=bandwidth0;=bandwidth1;..." - * For example, on a two-socket machine, the schema line could be "MB:0=20;1=70" - * - * The minimum bandwidth percentage value for each CPU model is predefined and - * can be looked up through "info/MB/min_bandwidth". The bandwidth granularity - * that is allocated is also dependent on the CPU model and can be looked up at - * "info/MB/bandwidth_gran". The available bandwidth control steps are: - * min_bw + N * bw_gran. Intermediate values are rounded to the next control - * step available on the hardware. - * - * If MBA Software Controller is enabled through mount option "-o mba_MBps": - * mount -t resctrl resctrl -o mba_MBps /sys/fs/resctrl - * We could specify memory bandwidth in "MBps" (Mega Bytes per second) unit - * instead of "percentages". The kernel underneath would use a software feedback - * mechanism or a "Software Controller" which reads the actual bandwidth using - * MBM counters and adjust the memory bandwidth percentages to ensure: - * "actual memory bandwidth < user specified memory bandwidth". - * - * For example, on a two-socket machine, the schema line could be - * "MB:0=5000;1=7000" which means 5000 MBps memory bandwidth limit on socket 0 - * and 7000 MBps memory bandwidth limit on socket 1. - * - * For more information about Intel RDT kernel interface: - * https://www.kernel.org/doc/Documentation/x86/intel_rdt_ui.txt - * - * An example for runc: - * Consider a two-socket machine with two L3 caches where the default CBM is - * 0x7ff and the max CBM length is 11 bits, and minimum memory bandwidth of 10% - * with a memory bandwidth granularity of 10%. - * - * Tasks inside the container only have access to the "upper" 7/11 of L3 cache - * on socket 0 and the "lower" 5/11 L3 cache on socket 1, and may use a - * maximum memory bandwidth of 20% on socket 0 and 70% on socket 1. 
- * - * "linux": { - * "intelRdt": { - * "l3CacheSchema": "L3:0=7f0;1=1f", - * "memBwSchema": "MB:0=20;1=70" - * } - * } - */ - -type Manager interface { - // Applies Intel RDT configuration to the process with the specified pid - Apply(pid int) error - - // Returns statistics for Intel RDT - GetStats() (*Stats, error) - - // Destroys the Intel RDT 'container_id' group - Destroy() error - - // Returns Intel RDT path to save in a state file and to be able to - // restore the object later - GetPath() string - - // Set Intel RDT "resource control" filesystem as configured. - Set(container *configs.Config) error -} - -// This implements interface Manager -type IntelRdtManager struct { - mu sync.Mutex - Config *configs.Config - Id string - Path string -} - -const ( - IntelRdtTasks = "tasks" -) - -var ( - // The absolute root path of the Intel RDT "resource control" filesystem - intelRdtRoot string - intelRdtRootLock sync.Mutex - - // The flag to indicate if Intel RDT/CAT is enabled - isCatEnabled bool - // The flag to indicate if Intel RDT/MBA is enabled - isMbaEnabled bool - // The flag to indicate if Intel RDT/MBA Software Controller is enabled - isMbaScEnabled bool -) - -type intelRdtData struct { - root string - config *configs.Config - pid int -} - -// Check if Intel RDT sub-features are enabled in init() -func init() { - // 1. Check if hardware and kernel support Intel RDT sub-features - // "cat_l3" flag for CAT and "mba" flag for MBA - isCatFlagSet, isMbaFlagSet, err := parseCpuInfoFile("/proc/cpuinfo") - if err != nil { - return - } - - // 2. Check if Intel RDT "resource control" filesystem is mounted - // The user guarantees to mount the filesystem - if !isIntelRdtMounted() { - return - } - - // 3. Double check if Intel RDT sub-features are available in - // "resource control" filesystem. Intel RDT sub-features can be - // selectively disabled or enabled by kernel command line - // (e.g., rdt=!l3cat,mba) in 4.14 and newer kernel - if isCatFlagSet { - if _, err := os.Stat(filepath.Join(intelRdtRoot, "info", "L3")); err == nil { - isCatEnabled = true - } - } - if isMbaScEnabled { - // We confirm MBA Software Controller is enabled in step 2, - // MBA should be enabled because MBA Software Controller - // depends on MBA - isMbaEnabled = true - } else if isMbaFlagSet { - if _, err := os.Stat(filepath.Join(intelRdtRoot, "info", "MB")); err == nil { - isMbaEnabled = true - } - } -} - -// Return the mount point path of Intel RDT "resource control" filesysem -func findIntelRdtMountpointDir() (string, error) { - f, err := os.Open("/proc/self/mountinfo") - if err != nil { - return "", err - } - defer f.Close() - - s := bufio.NewScanner(f) - for s.Scan() { - text := s.Text() - fields := strings.Split(text, " ") - // Safe as mountinfo encodes mountpoints with spaces as \040. - index := strings.Index(text, " - ") - postSeparatorFields := strings.Fields(text[index+3:]) - numPostFields := len(postSeparatorFields) - - // This is an error as we can't detect if the mount is for "Intel RDT" - if numPostFields == 0 { - return "", fmt.Errorf("Found no fields post '-' in %q", text) - } - - if postSeparatorFields[0] == "resctrl" { - // Check that the mount is properly formatted. 
- if numPostFields < 3 { - return "", fmt.Errorf("Error found less than 3 fields post '-' in %q", text) - } - - // Check if MBA Software Controller is enabled through mount option "-o mba_MBps" - if strings.Contains(postSeparatorFields[2], "mba_MBps") { - isMbaScEnabled = true - } - - return fields[4], nil - } - } - if err := s.Err(); err != nil { - return "", err - } - - return "", NewNotFoundError("Intel RDT") -} - -// Gets the root path of Intel RDT "resource control" filesystem -func getIntelRdtRoot() (string, error) { - intelRdtRootLock.Lock() - defer intelRdtRootLock.Unlock() - - if intelRdtRoot != "" { - return intelRdtRoot, nil - } - - root, err := findIntelRdtMountpointDir() - if err != nil { - return "", err - } - - if _, err := os.Stat(root); err != nil { - return "", err - } - - intelRdtRoot = root - return intelRdtRoot, nil -} - -func isIntelRdtMounted() bool { - _, err := getIntelRdtRoot() - if err != nil { - return false - } - - return true -} - -func parseCpuInfoFile(path string) (bool, bool, error) { - isCatFlagSet := false - isMbaFlagSet := false - - f, err := os.Open(path) - if err != nil { - return false, false, err - } - defer f.Close() - - s := bufio.NewScanner(f) - for s.Scan() { - if err := s.Err(); err != nil { - return false, false, err - } - - line := s.Text() - - // Search "cat_l3" and "mba" flags in first "flags" line - if strings.Contains(line, "flags") { - flags := strings.Split(line, " ") - // "cat_l3" flag for CAT and "mba" flag for MBA - for _, flag := range flags { - switch flag { - case "cat_l3": - isCatFlagSet = true - case "mba": - isMbaFlagSet = true - } - } - return isCatFlagSet, isMbaFlagSet, nil - } - } - return isCatFlagSet, isMbaFlagSet, nil -} - -func parseUint(s string, base, bitSize int) (uint64, error) { - value, err := strconv.ParseUint(s, base, bitSize) - if err != nil { - intValue, intErr := strconv.ParseInt(s, base, bitSize) - // 1. Handle negative values greater than MinInt64 (and) - // 2. Handle negative values lesser than MinInt64 - if intErr == nil && intValue < 0 { - return 0, nil - } else if intErr != nil && intErr.(*strconv.NumError).Err == strconv.ErrRange && intValue < 0 { - return 0, nil - } - - return value, err - } - - return value, nil -} - -// Gets a single uint64 value from the specified file. 
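The parseCpuInfoFile helper above looks for the "cat_l3" and "mba" CPU flags. A self-contained sketch of that scan, run against an in-memory sample (the sample "flags" line is made up for illustration):

```go
// Sketch of the /proc/cpuinfo flag scan using an in-memory sample.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

func main() {
	sample := "processor : 0\nflags : fpu vme cat_l3 mba\nbugs :\n"

	var isCatFlagSet, isMbaFlagSet bool
	s := bufio.NewScanner(strings.NewReader(sample))
	for s.Scan() {
		line := s.Text()
		if !strings.Contains(line, "flags") {
			continue
		}
		for _, flag := range strings.Split(line, " ") {
			switch flag {
			case "cat_l3":
				isCatFlagSet = true
			case "mba":
				isMbaFlagSet = true
			}
		}
		break // only the first "flags" line is inspected
	}
	fmt.Println("cat_l3:", isCatFlagSet, "mba:", isMbaFlagSet)
}
```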
-func getIntelRdtParamUint(path, file string) (uint64, error) { - fileName := filepath.Join(path, file) - contents, err := ioutil.ReadFile(fileName) - if err != nil { - return 0, err - } - - res, err := parseUint(strings.TrimSpace(string(contents)), 10, 64) - if err != nil { - return res, fmt.Errorf("unable to parse %q as a uint from file %q", string(contents), fileName) - } - return res, nil -} - -// Gets a string value from the specified file -func getIntelRdtParamString(path, file string) (string, error) { - contents, err := ioutil.ReadFile(filepath.Join(path, file)) - if err != nil { - return "", err - } - - return strings.TrimSpace(string(contents)), nil -} - -func writeFile(dir, file, data string) error { - if dir == "" { - return fmt.Errorf("no such directory for %s", file) - } - if err := ioutil.WriteFile(filepath.Join(dir, file), []byte(data+"\n"), 0700); err != nil { - return fmt.Errorf("failed to write %v to %v: %v", data, file, err) - } - return nil -} - -func getIntelRdtData(c *configs.Config, pid int) (*intelRdtData, error) { - rootPath, err := getIntelRdtRoot() - if err != nil { - return nil, err - } - return &intelRdtData{ - root: rootPath, - config: c, - pid: pid, - }, nil -} - -// Get the read-only L3 cache information -func getL3CacheInfo() (*L3CacheInfo, error) { - l3CacheInfo := &L3CacheInfo{} - - rootPath, err := getIntelRdtRoot() - if err != nil { - return l3CacheInfo, err - } - - path := filepath.Join(rootPath, "info", "L3") - cbmMask, err := getIntelRdtParamString(path, "cbm_mask") - if err != nil { - return l3CacheInfo, err - } - minCbmBits, err := getIntelRdtParamUint(path, "min_cbm_bits") - if err != nil { - return l3CacheInfo, err - } - numClosids, err := getIntelRdtParamUint(path, "num_closids") - if err != nil { - return l3CacheInfo, err - } - - l3CacheInfo.CbmMask = cbmMask - l3CacheInfo.MinCbmBits = minCbmBits - l3CacheInfo.NumClosids = numClosids - - return l3CacheInfo, nil -} - -// Get the read-only memory bandwidth information -func getMemBwInfo() (*MemBwInfo, error) { - memBwInfo := &MemBwInfo{} - - rootPath, err := getIntelRdtRoot() - if err != nil { - return memBwInfo, err - } - - path := filepath.Join(rootPath, "info", "MB") - bandwidthGran, err := getIntelRdtParamUint(path, "bandwidth_gran") - if err != nil { - return memBwInfo, err - } - delayLinear, err := getIntelRdtParamUint(path, "delay_linear") - if err != nil { - return memBwInfo, err - } - minBandwidth, err := getIntelRdtParamUint(path, "min_bandwidth") - if err != nil { - return memBwInfo, err - } - numClosids, err := getIntelRdtParamUint(path, "num_closids") - if err != nil { - return memBwInfo, err - } - - memBwInfo.BandwidthGran = bandwidthGran - memBwInfo.DelayLinear = delayLinear - memBwInfo.MinBandwidth = minBandwidth - memBwInfo.NumClosids = numClosids - - return memBwInfo, nil -} - -// Get diagnostics for last filesystem operation error from file info/last_cmd_status -func getLastCmdStatus() (string, error) { - rootPath, err := getIntelRdtRoot() - if err != nil { - return "", err - } - - path := filepath.Join(rootPath, "info") - lastCmdStatus, err := getIntelRdtParamString(path, "last_cmd_status") - if err != nil { - return "", err - } - - return lastCmdStatus, nil -} - -// WriteIntelRdtTasks writes the specified pid into the "tasks" file -func WriteIntelRdtTasks(dir string, pid int) error { - if dir == "" { - return fmt.Errorf("no such directory for %s", IntelRdtTasks) - } - - // Don't attach any pid if -1 is specified as a pid - if pid != -1 { - if err := 
ioutil.WriteFile(filepath.Join(dir, IntelRdtTasks), []byte(strconv.Itoa(pid)), 0700); err != nil { - return fmt.Errorf("failed to write %v to %v: %v", pid, IntelRdtTasks, err) - } - } - return nil -} - -// Check if Intel RDT/CAT is enabled -func IsCatEnabled() bool { - return isCatEnabled -} - -// Check if Intel RDT/MBA is enabled -func IsMbaEnabled() bool { - return isMbaEnabled -} - -// Check if Intel RDT/MBA Software Controller is enabled -func IsMbaScEnabled() bool { - return isMbaScEnabled -} - -// Get the 'container_id' path in Intel RDT "resource control" filesystem -func GetIntelRdtPath(id string) (string, error) { - rootPath, err := getIntelRdtRoot() - if err != nil { - return "", err - } - - path := filepath.Join(rootPath, id) - return path, nil -} - -// Applies Intel RDT configuration to the process with the specified pid -func (m *IntelRdtManager) Apply(pid int) (err error) { - // If intelRdt is not specified in config, we do nothing - if m.Config.IntelRdt == nil { - return nil - } - d, err := getIntelRdtData(m.Config, pid) - if err != nil && !IsNotFound(err) { - return err - } - - m.mu.Lock() - defer m.mu.Unlock() - path, err := d.join(m.Id) - if err != nil { - return err - } - - m.Path = path - return nil -} - -// Destroys the Intel RDT 'container_id' group -func (m *IntelRdtManager) Destroy() error { - m.mu.Lock() - defer m.mu.Unlock() - if err := os.RemoveAll(m.GetPath()); err != nil { - return err - } - m.Path = "" - return nil -} - -// Returns Intel RDT path to save in a state file and to be able to -// restore the object later -func (m *IntelRdtManager) GetPath() string { - if m.Path == "" { - m.Path, _ = GetIntelRdtPath(m.Id) - } - return m.Path -} - -// Returns statistics for Intel RDT -func (m *IntelRdtManager) GetStats() (*Stats, error) { - // If intelRdt is not specified in config - if m.Config.IntelRdt == nil { - return nil, nil - } - - m.mu.Lock() - defer m.mu.Unlock() - stats := NewStats() - - rootPath, err := getIntelRdtRoot() - if err != nil { - return nil, err - } - // The read-only L3 cache and memory bandwidth schemata in root - tmpRootStrings, err := getIntelRdtParamString(rootPath, "schemata") - if err != nil { - return nil, err - } - schemaRootStrings := strings.Split(tmpRootStrings, "\n") - - // The L3 cache and memory bandwidth schemata in 'container_id' group - tmpStrings, err := getIntelRdtParamString(m.GetPath(), "schemata") - if err != nil { - return nil, err - } - schemaStrings := strings.Split(tmpStrings, "\n") - - if IsCatEnabled() { - // The read-only L3 cache information - l3CacheInfo, err := getL3CacheInfo() - if err != nil { - return nil, err - } - stats.L3CacheInfo = l3CacheInfo - - // The read-only L3 cache schema in root - for _, schemaRoot := range schemaRootStrings { - if strings.Contains(schemaRoot, "L3") { - stats.L3CacheSchemaRoot = strings.TrimSpace(schemaRoot) - } - } - - // The L3 cache schema in 'container_id' group - for _, schema := range schemaStrings { - if strings.Contains(schema, "L3") { - stats.L3CacheSchema = strings.TrimSpace(schema) - } - } - } - - if IsMbaEnabled() { - // The read-only memory bandwidth information - memBwInfo, err := getMemBwInfo() - if err != nil { - return nil, err - } - stats.MemBwInfo = memBwInfo - - // The read-only memory bandwidth information - for _, schemaRoot := range schemaRootStrings { - if strings.Contains(schemaRoot, "MB") { - stats.MemBwSchemaRoot = strings.TrimSpace(schemaRoot) - } - } - - // The memory bandwidth schema in 'container_id' group - for _, schema := range schemaStrings { - 
if strings.Contains(schema, "MB") { - stats.MemBwSchema = strings.TrimSpace(schema) - } - } - } - - return stats, nil -} - -// Set Intel RDT "resource control" filesystem as configured. -func (m *IntelRdtManager) Set(container *configs.Config) error { - // About L3 cache schema: - // It has allocation bitmasks/values for L3 cache on each socket, - // which contains L3 cache id and capacity bitmask (CBM). - // Format: "L3:=;=;..." - // For example, on a two-socket machine, the schema line could be: - // L3:0=ff;1=c0 - // which means L3 cache id 0's CBM is 0xff, and L3 cache id 1's CBM - // is 0xc0. - // - // The valid L3 cache CBM is a *contiguous bits set* and number of - // bits that can be set is less than the max bit. The max bits in the - // CBM is varied among supported Intel CPU models. Kernel will check - // if it is valid when writing. e.g., default value 0xfffff in root - // indicates the max bits of CBM is 20 bits, which mapping to entire - // L3 cache capacity. Some valid CBM values to set in a group: - // 0xf, 0xf0, 0x3ff, 0x1f00 and etc. - // - // - // About memory bandwidth schema: - // It has allocation values for memory bandwidth on each socket, which - // contains L3 cache id and memory bandwidth. - // Format: "MB:=bandwidth0;=bandwidth1;..." - // For example, on a two-socket machine, the schema line could be: - // "MB:0=20;1=70" - // - // The minimum bandwidth percentage value for each CPU model is - // predefined and can be looked up through "info/MB/min_bandwidth". - // The bandwidth granularity that is allocated is also dependent on - // the CPU model and can be looked up at "info/MB/bandwidth_gran". - // The available bandwidth control steps are: min_bw + N * bw_gran. - // Intermediate values are rounded to the next control step available - // on the hardware. - // - // If MBA Software Controller is enabled through mount option - // "-o mba_MBps": mount -t resctrl resctrl -o mba_MBps /sys/fs/resctrl - // We could specify memory bandwidth in "MBps" (Mega Bytes per second) - // unit instead of "percentages". The kernel underneath would use a - // software feedback mechanism or a "Software Controller" which reads - // the actual bandwidth using MBM counters and adjust the memory - // bandwidth percentages to ensure: - // "actual memory bandwidth < user specified memory bandwidth". - // - // For example, on a two-socket machine, the schema line could be - // "MB:0=5000;1=7000" which means 5000 MBps memory bandwidth limit on - // socket 0 and 7000 MBps memory bandwidth limit on socket 1. 
- if container.IntelRdt != nil { - path := m.GetPath() - l3CacheSchema := container.IntelRdt.L3CacheSchema - memBwSchema := container.IntelRdt.MemBwSchema - - // Write a single joint schema string to schemata file - if l3CacheSchema != "" && memBwSchema != "" { - if err := writeFile(path, "schemata", l3CacheSchema+"\n"+memBwSchema); err != nil { - return NewLastCmdError(err) - } - } - - // Write only L3 cache schema string to schemata file - if l3CacheSchema != "" && memBwSchema == "" { - if err := writeFile(path, "schemata", l3CacheSchema); err != nil { - return NewLastCmdError(err) - } - } - - // Write only memory bandwidth schema string to schemata file - if l3CacheSchema == "" && memBwSchema != "" { - if err := writeFile(path, "schemata", memBwSchema); err != nil { - return NewLastCmdError(err) - } - } - } - - return nil -} - -func (raw *intelRdtData) join(id string) (string, error) { - path := filepath.Join(raw.root, id) - if err := os.MkdirAll(path, 0755); err != nil { - return "", NewLastCmdError(err) - } - - if err := WriteIntelRdtTasks(path, raw.pid); err != nil { - return "", NewLastCmdError(err) - } - return path, nil -} - -type NotFoundError struct { - ResourceControl string -} - -func (e *NotFoundError) Error() string { - return fmt.Sprintf("mountpoint for %s not found", e.ResourceControl) -} - -func NewNotFoundError(res string) error { - return &NotFoundError{ - ResourceControl: res, - } -} - -func IsNotFound(err error) bool { - if err == nil { - return false - } - _, ok := err.(*NotFoundError) - return ok -} - -type LastCmdError struct { - LastCmdStatus string - Err error -} - -func (e *LastCmdError) Error() string { - return fmt.Sprintf(e.Err.Error() + ", last_cmd_status: " + e.LastCmdStatus) -} - -func NewLastCmdError(err error) error { - lastCmdStatus, err1 := getLastCmdStatus() - if err1 == nil { - return &LastCmdError{ - LastCmdStatus: lastCmdStatus, - Err: err, - } - } - return err -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/intelrdt/stats.go b/vendor/github.com/opencontainers/runc/libcontainer/intelrdt/stats.go deleted file mode 100644 index df5686f3b..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/intelrdt/stats.go +++ /dev/null @@ -1,40 +0,0 @@ -// +build linux - -package intelrdt - -type L3CacheInfo struct { - CbmMask string `json:"cbm_mask,omitempty"` - MinCbmBits uint64 `json:"min_cbm_bits,omitempty"` - NumClosids uint64 `json:"num_closids,omitempty"` -} - -type MemBwInfo struct { - BandwidthGran uint64 `json:"bandwidth_gran,omitempty"` - DelayLinear uint64 `json:"delay_linear,omitempty"` - MinBandwidth uint64 `json:"min_bandwidth,omitempty"` - NumClosids uint64 `json:"num_closids,omitempty"` -} - -type Stats struct { - // The read-only L3 cache information - L3CacheInfo *L3CacheInfo `json:"l3_cache_info,omitempty"` - - // The read-only L3 cache schema in root - L3CacheSchemaRoot string `json:"l3_cache_schema_root,omitempty"` - - // The L3 cache schema in 'container_id' group - L3CacheSchema string `json:"l3_cache_schema,omitempty"` - - // The read-only memory bandwidth information - MemBwInfo *MemBwInfo `json:"mem_bw_info,omitempty"` - - // The read-only memory bandwidth schema in root - MemBwSchemaRoot string `json:"mem_bw_schema_root,omitempty"` - - // The memory bandwidth schema in 'container_id' group - MemBwSchema string `json:"mem_bw_schema,omitempty"` -} - -func NewStats() *Stats { - return &Stats{} -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/keys/keyctl.go 
b/vendor/github.com/opencontainers/runc/libcontainer/keys/keyctl.go deleted file mode 100644 index 74dedd56c..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/keys/keyctl.go +++ /dev/null @@ -1,48 +0,0 @@ -// +build linux - -package keys - -import ( - "fmt" - "strconv" - "strings" - - "github.com/pkg/errors" - - "golang.org/x/sys/unix" -) - -type KeySerial uint32 - -func JoinSessionKeyring(name string) (KeySerial, error) { - sessKeyId, err := unix.KeyctlJoinSessionKeyring(name) - if err != nil { - return 0, errors.Wrap(err, "create session key") - } - return KeySerial(sessKeyId), nil -} - -// ModKeyringPerm modifies permissions on a keyring by reading the current permissions, -// anding the bits with the given mask (clearing permissions) and setting -// additional permission bits -func ModKeyringPerm(ringId KeySerial, mask, setbits uint32) error { - dest, err := unix.KeyctlString(unix.KEYCTL_DESCRIBE, int(ringId)) - if err != nil { - return err - } - - res := strings.Split(dest, ";") - if len(res) < 5 { - return fmt.Errorf("Destination buffer for key description is too small") - } - - // parse permissions - perm64, err := strconv.ParseUint(res[3], 16, 32) - if err != nil { - return err - } - - perm := (uint32(perm64) & mask) | setbits - - return unix.KeyctlSetperm(int(ringId), perm) -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/logs/logs.go b/vendor/github.com/opencontainers/runc/libcontainer/logs/logs.go deleted file mode 100644 index 1077e7b01..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/logs/logs.go +++ /dev/null @@ -1,102 +0,0 @@ -package logs - -import ( - "bufio" - "encoding/json" - "fmt" - "io" - "os" - "strconv" - "sync" - - "github.com/sirupsen/logrus" -) - -var ( - configureMutex = sync.Mutex{} - // loggingConfigured will be set once logging has been configured via invoking `ConfigureLogging`. 
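ModKeyringPerm above rewrites a keyring's permissions by parsing the hex perm field out of the KEYCTL_DESCRIBE string, masking it, and OR-ing in new bits. A standard-library-only sketch of that arithmetic on a canned describe string (the sample string, mask, and setbits values are illustrative, not real defaults):

```go
// Sketch of the keyring permission arithmetic on a canned describe string.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

func main() {
	// describe format: type;uid;gid;perm(hex);description
	desc := "keyring;1000;1000;3f3f0000;_ses"

	fields := strings.Split(desc, ";")
	if len(fields) < 5 {
		panic("destination buffer for key description is too small")
	}
	perm64, err := strconv.ParseUint(fields[3], 16, 32)
	if err != nil {
		panic(err)
	}

	var (
		mask    uint32 = 0xffff0000 // example mask: keep only the upper permission bits
		setbits uint32 = 0x00000100 // example: additional bit to grant
	)
	perm := (uint32(perm64) & mask) | setbits
	fmt.Printf("old perm %#08x -> new perm %#08x\n", uint32(perm64), perm)
}
```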
- // Subsequent invocations of `ConfigureLogging` would be no-op - loggingConfigured = false -) - -type Config struct { - LogLevel logrus.Level - LogFormat string - LogFilePath string - LogPipeFd string -} - -func ForwardLogs(logPipe io.Reader) { - lineReader := bufio.NewReader(logPipe) - for { - line, err := lineReader.ReadBytes('\n') - if len(line) > 0 { - processEntry(line) - } - if err == io.EOF { - logrus.Debugf("log pipe has been closed: %+v", err) - return - } - if err != nil { - logrus.Errorf("log pipe read error: %+v", err) - } - } -} - -func processEntry(text []byte) { - type jsonLog struct { - Level string `json:"level"` - Msg string `json:"msg"` - } - - var jl jsonLog - if err := json.Unmarshal(text, &jl); err != nil { - logrus.Errorf("failed to decode %q to json: %+v", text, err) - return - } - - lvl, err := logrus.ParseLevel(jl.Level) - if err != nil { - logrus.Errorf("failed to parse log level %q: %v\n", jl.Level, err) - return - } - logrus.StandardLogger().Logf(lvl, jl.Msg) -} - -func ConfigureLogging(config Config) error { - configureMutex.Lock() - defer configureMutex.Unlock() - - if loggingConfigured { - logrus.Debug("logging has already been configured") - return nil - } - - logrus.SetLevel(config.LogLevel) - - if config.LogPipeFd != "" { - logPipeFdInt, err := strconv.Atoi(config.LogPipeFd) - if err != nil { - return fmt.Errorf("failed to convert _LIBCONTAINER_LOGPIPE environment variable value %q to int: %v", config.LogPipeFd, err) - } - logrus.SetOutput(os.NewFile(uintptr(logPipeFdInt), "logpipe")) - } else if config.LogFilePath != "" { - f, err := os.OpenFile(config.LogFilePath, os.O_CREATE|os.O_WRONLY|os.O_APPEND|os.O_SYNC, 0644) - if err != nil { - return err - } - logrus.SetOutput(f) - } - - switch config.LogFormat { - case "text": - // retain logrus's default. - case "json": - logrus.SetFormatter(new(logrus.JSONFormatter)) - default: - return fmt.Errorf("unknown log-format %q", config.LogFormat) - } - - loggingConfigured = true - return nil -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/message_linux.go b/vendor/github.com/opencontainers/runc/libcontainer/message_linux.go deleted file mode 100644 index 1d4f5033a..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/message_linux.go +++ /dev/null @@ -1,89 +0,0 @@ -// +build linux - -package libcontainer - -import ( - "github.com/vishvananda/netlink/nl" - "golang.org/x/sys/unix" -) - -// list of known message types we want to send to bootstrap program -// The number is randomly chosen to not conflict with known netlink types -const ( - InitMsg uint16 = 62000 - CloneFlagsAttr uint16 = 27281 - NsPathsAttr uint16 = 27282 - UidmapAttr uint16 = 27283 - GidmapAttr uint16 = 27284 - SetgroupAttr uint16 = 27285 - OomScoreAdjAttr uint16 = 27286 - RootlessEUIDAttr uint16 = 27287 - UidmapPathAttr uint16 = 27288 - GidmapPathAttr uint16 = 27289 -) - -type Int32msg struct { - Type uint16 - Value uint32 -} - -// Serialize serializes the message. 
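The logs package above forwards JSON-encoded records from the init process and replays them through logrus at the recorded level. A minimal sketch of that decode-and-replay step on a hard-coded record (the record contents are made up; logrus is assumed, as in the package above):

```go
// Sketch of the log-pipe entry handling: decode one {"level":...,"msg":...}
// record and replay it through logrus at that level.
package main

import (
	"encoding/json"

	"github.com/sirupsen/logrus"
)

func main() {
	raw := []byte(`{"level":"warning","msg":"child process exited early"}`)

	var entry struct {
		Level string `json:"level"`
		Msg   string `json:"msg"`
	}
	if err := json.Unmarshal(raw, &entry); err != nil {
		logrus.Errorf("failed to decode %q: %v", raw, err)
		return
	}
	lvl, err := logrus.ParseLevel(entry.Level)
	if err != nil {
		logrus.Errorf("failed to parse level %q: %v", entry.Level, err)
		return
	}
	logrus.StandardLogger().Log(lvl, entry.Msg)
}
```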
-// Int32msg has the following representation -// | nlattr len | nlattr type | -// | uint32 value | -func (msg *Int32msg) Serialize() []byte { - buf := make([]byte, msg.Len()) - native := nl.NativeEndian() - native.PutUint16(buf[0:2], uint16(msg.Len())) - native.PutUint16(buf[2:4], msg.Type) - native.PutUint32(buf[4:8], msg.Value) - return buf -} - -func (msg *Int32msg) Len() int { - return unix.NLA_HDRLEN + 4 -} - -// Bytemsg has the following representation -// | nlattr len | nlattr type | -// | value | pad | -type Bytemsg struct { - Type uint16 - Value []byte -} - -func (msg *Bytemsg) Serialize() []byte { - l := msg.Len() - buf := make([]byte, (l+unix.NLA_ALIGNTO-1) & ^(unix.NLA_ALIGNTO-1)) - native := nl.NativeEndian() - native.PutUint16(buf[0:2], uint16(l)) - native.PutUint16(buf[2:4], msg.Type) - copy(buf[4:], msg.Value) - return buf -} - -func (msg *Bytemsg) Len() int { - return unix.NLA_HDRLEN + len(msg.Value) + 1 // null-terminated -} - -type Boolmsg struct { - Type uint16 - Value bool -} - -func (msg *Boolmsg) Serialize() []byte { - buf := make([]byte, msg.Len()) - native := nl.NativeEndian() - native.PutUint16(buf[0:2], uint16(msg.Len())) - native.PutUint16(buf[2:4], msg.Type) - if msg.Value { - native.PutUint32(buf[4:8], uint32(1)) - } else { - native.PutUint32(buf[4:8], uint32(0)) - } - return buf -} - -func (msg *Boolmsg) Len() int { - return unix.NLA_HDRLEN + 4 // alignment -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/mount/mount.go b/vendor/github.com/opencontainers/runc/libcontainer/mount/mount.go deleted file mode 100644 index e8965e081..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/mount/mount.go +++ /dev/null @@ -1,23 +0,0 @@ -package mount - -// GetMounts retrieves a list of mounts for the current running process. 
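The Int32msg/Bytemsg/Boolmsg serializers in message_linux.go emit netlink attributes in the | len | type | value | layout shown above. A rough standard-library sketch of the fixed-size case, assuming a little-endian host (the real code uses netlink's native-endian helper); the attribute type reuses the CloneFlagsAttr value from the constant list above:

```go
// Rough sketch of a fixed-size netlink attribute: a 4-byte header
// (uint16 length, uint16 type) followed by a uint32 value.
package main

import (
	"encoding/binary"
	"fmt"
)

const nlaHdrLen = 4 // NLA_HDRLEN

func serializeInt32Attr(typ uint16, value uint32) []byte {
	buf := make([]byte, nlaHdrLen+4)
	binary.LittleEndian.PutUint16(buf[0:2], uint16(len(buf)))
	binary.LittleEndian.PutUint16(buf[2:4], typ)
	binary.LittleEndian.PutUint32(buf[4:8], value)
	return buf
}

func main() {
	fmt.Printf("% x\n", serializeInt32Attr(27281 /* CloneFlagsAttr */, 0x10000000))
}
```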
-func GetMounts() ([]*Info, error) { - return parseMountTable() -} - -// Mounted looks at /proc/self/mountinfo to determine of the specified -// mountpoint has been mounted -func Mounted(mountpoint string) (bool, error) { - entries, err := parseMountTable() - if err != nil { - return false, err - } - - // Search the table for the mountpoint - for _, e := range entries { - if e.Mountpoint == mountpoint { - return true, nil - } - } - return false, nil -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/mount/mount_linux.go b/vendor/github.com/opencontainers/runc/libcontainer/mount/mount_linux.go deleted file mode 100644 index 1e5191928..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/mount/mount_linux.go +++ /dev/null @@ -1,82 +0,0 @@ -// +build linux - -package mount - -import ( - "bufio" - "fmt" - "io" - "os" - "strings" -) - -const ( - /* 36 35 98:0 /mnt1 /mnt2 rw,noatime master:1 - ext3 /dev/root rw,errors=continue - (1)(2)(3) (4) (5) (6) (7) (8) (9) (10) (11) - - (1) mount ID: unique identifier of the mount (may be reused after umount) - (2) parent ID: ID of parent (or of self for the top of the mount tree) - (3) major:minor: value of st_dev for files on filesystem - (4) root: root of the mount within the filesystem - (5) mount point: mount point relative to the process's root - (6) mount options: per mount options - (7) optional fields: zero or more fields of the form "tag[:value]" - (8) separator: marks the end of the optional fields - (9) filesystem type: name of filesystem of the form "type[.subtype]" - (10) mount source: filesystem specific information or "none" - (11) super options: per super block options*/ - mountinfoFormat = "%d %d %d:%d %s %s %s %s" -) - -// Parse /proc/self/mountinfo because comparing Dev and ino does not work from -// bind mounts -func parseMountTable() ([]*Info, error) { - f, err := os.Open("/proc/self/mountinfo") - if err != nil { - return nil, err - } - defer f.Close() - - return parseInfoFile(f) -} - -func parseInfoFile(r io.Reader) ([]*Info, error) { - var ( - s = bufio.NewScanner(r) - out = []*Info{} - ) - - for s.Scan() { - if err := s.Err(); err != nil { - return nil, err - } - - var ( - p = &Info{} - text = s.Text() - optionalFields string - ) - - if _, err := fmt.Sscanf(text, mountinfoFormat, - &p.ID, &p.Parent, &p.Major, &p.Minor, - &p.Root, &p.Mountpoint, &p.Opts, &optionalFields); err != nil { - return nil, fmt.Errorf("Scanning '%s' failed: %s", text, err) - } - // Safe as mountinfo encodes mountpoints with spaces as \040. - index := strings.Index(text, " - ") - postSeparatorFields := strings.Fields(text[index+3:]) - if len(postSeparatorFields) < 3 { - return nil, fmt.Errorf("Error found less than 3 fields post '-' in %q", text) - } - - if optionalFields != "-" { - p.Optional = optionalFields - } - - p.Fstype = postSeparatorFields[0] - p.Source = postSeparatorFields[1] - p.VfsOpts = strings.Join(postSeparatorFields[2:], " ") - out = append(out, p) - } - return out, nil -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/mount/mountinfo.go b/vendor/github.com/opencontainers/runc/libcontainer/mount/mountinfo.go deleted file mode 100644 index e3fc3535e..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/mount/mountinfo.go +++ /dev/null @@ -1,40 +0,0 @@ -package mount - -// Info reveals information about a particular mounted filesystem. This -// struct is populated from the content in the /proc//mountinfo file. 
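parseInfoFile above scans each mountinfo line with fmt.Sscanf and then splits the post-separator fields by hand. A self-contained sketch of that parse, run on the sample line from the format comment above (no /proc access needed):

```go
// Sketch of the Sscanf-based mountinfo parse on a single sample line.
package main

import (
	"fmt"
	"strings"
)

func main() {
	const format = "%d %d %d:%d %s %s %s %s"
	text := "36 35 98:0 /mnt1 /mnt2 rw,noatime master:1 - ext3 /dev/root rw,errors=continue"

	var (
		id, parent, major, minor         int
		root, mountpoint, opts, optional string
	)
	if _, err := fmt.Sscanf(text, format,
		&id, &parent, &major, &minor, &root, &mountpoint, &opts, &optional); err != nil {
		panic(err)
	}

	post := strings.Fields(text[strings.Index(text, " - ")+3:])
	fmt.Println("mountpoint:", mountpoint, "opts:", opts, "fstype:", post[0], "source:", post[1])
}
```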
-type Info struct { - // ID is a unique identifier of the mount (may be reused after umount). - ID int - - // Parent indicates the ID of the mount parent (or of self for the top of the - // mount tree). - Parent int - - // Major indicates one half of the device ID which identifies the device class. - Major int - - // Minor indicates one half of the device ID which identifies a specific - // instance of device. - Minor int - - // Root of the mount within the filesystem. - Root string - - // Mountpoint indicates the mount point relative to the process's root. - Mountpoint string - - // Opts represents mount-specific options. - Opts string - - // Optional represents optional fields. - Optional string - - // Fstype indicates the type of filesystem, such as EXT3. - Fstype string - - // Source indicates filesystem specific information or "none". - Source string - - // VfsOpts represents per super block options. - VfsOpts string -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/network_linux.go b/vendor/github.com/opencontainers/runc/libcontainer/network_linux.go deleted file mode 100644 index 569c53f6e..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/network_linux.go +++ /dev/null @@ -1,102 +0,0 @@ -// +build linux - -package libcontainer - -import ( - "fmt" - "io/ioutil" - "path/filepath" - "strconv" - "strings" - - "github.com/opencontainers/runc/libcontainer/configs" - "github.com/vishvananda/netlink" -) - -var strategies = map[string]networkStrategy{ - "loopback": &loopback{}, -} - -// networkStrategy represents a specific network configuration for -// a container's networking stack -type networkStrategy interface { - create(*network, int) error - initialize(*network) error - detach(*configs.Network) error - attach(*configs.Network) error -} - -// getStrategy returns the specific network strategy for the -// provided type. -func getStrategy(tpe string) (networkStrategy, error) { - s, exists := strategies[tpe] - if !exists { - return nil, fmt.Errorf("unknown strategy type %q", tpe) - } - return s, nil -} - -// Returns the network statistics for the network interfaces represented by the NetworkRuntimeInfo. -func getNetworkInterfaceStats(interfaceName string) (*NetworkInterface, error) { - out := &NetworkInterface{Name: interfaceName} - // This can happen if the network runtime information is missing - possible if the - // container was created by an old version of libcontainer. - if interfaceName == "" { - return out, nil - } - type netStatsPair struct { - // Where to write the output. - Out *uint64 - // The network stats file to read. - File string - } - // Ingress for host veth is from the container. Hence tx_bytes stat on the host veth is actually number of bytes received by the container. 
- netStats := []netStatsPair{ - {Out: &out.RxBytes, File: "tx_bytes"}, - {Out: &out.RxPackets, File: "tx_packets"}, - {Out: &out.RxErrors, File: "tx_errors"}, - {Out: &out.RxDropped, File: "tx_dropped"}, - - {Out: &out.TxBytes, File: "rx_bytes"}, - {Out: &out.TxPackets, File: "rx_packets"}, - {Out: &out.TxErrors, File: "rx_errors"}, - {Out: &out.TxDropped, File: "rx_dropped"}, - } - for _, netStat := range netStats { - data, err := readSysfsNetworkStats(interfaceName, netStat.File) - if err != nil { - return nil, err - } - *(netStat.Out) = data - } - return out, nil -} - -// Reads the specified statistics available under /sys/class/net//statistics -func readSysfsNetworkStats(ethInterface, statsFile string) (uint64, error) { - data, err := ioutil.ReadFile(filepath.Join("/sys/class/net", ethInterface, "statistics", statsFile)) - if err != nil { - return 0, err - } - return strconv.ParseUint(strings.TrimSpace(string(data)), 10, 64) -} - -// loopback is a network strategy that provides a basic loopback device -type loopback struct { -} - -func (l *loopback) create(n *network, nspid int) error { - return nil -} - -func (l *loopback) initialize(config *network) error { - return netlink.LinkSetUp(&netlink.Device{LinkAttrs: netlink.LinkAttrs{Name: "lo"}}) -} - -func (l *loopback) attach(n *configs.Network) (err error) { - return nil -} - -func (l *loopback) detach(n *configs.Network) (err error) { - return nil -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/notify_linux.go b/vendor/github.com/opencontainers/runc/libcontainer/notify_linux.go deleted file mode 100644 index 47a06783d..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/notify_linux.go +++ /dev/null @@ -1,90 +0,0 @@ -// +build linux - -package libcontainer - -import ( - "fmt" - "io/ioutil" - "os" - "path/filepath" - - "golang.org/x/sys/unix" -) - -const oomCgroupName = "memory" - -type PressureLevel uint - -const ( - LowPressure PressureLevel = iota - MediumPressure - CriticalPressure -) - -func registerMemoryEvent(cgDir string, evName string, arg string) (<-chan struct{}, error) { - evFile, err := os.Open(filepath.Join(cgDir, evName)) - if err != nil { - return nil, err - } - fd, err := unix.Eventfd(0, unix.EFD_CLOEXEC) - if err != nil { - evFile.Close() - return nil, err - } - - eventfd := os.NewFile(uintptr(fd), "eventfd") - - eventControlPath := filepath.Join(cgDir, "cgroup.event_control") - data := fmt.Sprintf("%d %d %s", eventfd.Fd(), evFile.Fd(), arg) - if err := ioutil.WriteFile(eventControlPath, []byte(data), 0700); err != nil { - eventfd.Close() - evFile.Close() - return nil, err - } - ch := make(chan struct{}) - go func() { - defer func() { - eventfd.Close() - evFile.Close() - close(ch) - }() - buf := make([]byte, 8) - for { - if _, err := eventfd.Read(buf); err != nil { - return - } - // When a cgroup is destroyed, an event is sent to eventfd. - // So if the control path is gone, return instead of notifying. - if _, err := os.Lstat(eventControlPath); os.IsNotExist(err) { - return - } - ch <- struct{}{} - } - }() - return ch, nil -} - -// notifyOnOOM returns channel on which you can expect event about OOM, -// if process died without OOM this channel will be closed. 
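getNetworkInterfaceStats above deliberately swaps rx/tx when reading the host-side veth counters, since the host's transmit is the container's receive. A minimal sketch of the underlying sysfs read, pointed at the loopback device so it can run on any Linux host (the interface name "lo" is just a stand-in for the host veth):

```go
// Minimal sketch of reading a single counter from
// /sys/class/net/<interface>/statistics.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strconv"
	"strings"
)

func readSysfsNetworkStats(iface, statsFile string) (uint64, error) {
	data, err := os.ReadFile(filepath.Join("/sys/class/net", iface, "statistics", statsFile))
	if err != nil {
		return 0, err
	}
	return strconv.ParseUint(strings.TrimSpace(string(data)), 10, 64)
}

func main() {
	rx, err := readSysfsNetworkStats("lo", "rx_bytes")
	if err != nil {
		fmt.Println("read failed (not on Linux?):", err)
		return
	}
	fmt.Println("lo rx_bytes:", rx)
}
```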
-func notifyOnOOM(paths map[string]string) (<-chan struct{}, error) { - dir := paths[oomCgroupName] - if dir == "" { - return nil, fmt.Errorf("path %q missing", oomCgroupName) - } - - return registerMemoryEvent(dir, "memory.oom_control", "") -} - -func notifyMemoryPressure(paths map[string]string, level PressureLevel) (<-chan struct{}, error) { - dir := paths[oomCgroupName] - if dir == "" { - return nil, fmt.Errorf("path %q missing", oomCgroupName) - } - - if level > CriticalPressure { - return nil, fmt.Errorf("invalid pressure level %d", level) - } - - levelStr := []string{"low", "medium", "critical"}[level] - return registerMemoryEvent(dir, "memory.pressure_level", levelStr) -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/nsenter/README.md b/vendor/github.com/opencontainers/runc/libcontainer/nsenter/README.md deleted file mode 100644 index 9ec6c3931..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/nsenter/README.md +++ /dev/null @@ -1,44 +0,0 @@ -## nsenter - -The `nsenter` package registers a special init constructor that is called before -the Go runtime has a chance to boot. This provides us the ability to `setns` on -existing namespaces and avoid the issues that the Go runtime has with multiple -threads. This constructor will be called if this package is registered, -imported, in your go application. - -The `nsenter` package will `import "C"` and it uses [cgo](https://golang.org/cmd/cgo/) -package. In cgo, if the import of "C" is immediately preceded by a comment, that comment, -called the preamble, is used as a header when compiling the C parts of the package. -So every time we import package `nsenter`, the C code function `nsexec()` would be -called. And package `nsenter` is only imported in `init.go`, so every time the runc -`init` command is invoked, that C code is run. - -Because `nsexec()` must be run before the Go runtime in order to use the -Linux kernel namespace, you must `import` this library into a package if -you plan to use `libcontainer` directly. Otherwise Go will not execute -the `nsexec()` constructor, which means that the re-exec will not cause -the namespaces to be joined. You can import it like this: - -```go -import _ "github.com/opencontainers/runc/libcontainer/nsenter" -``` - -`nsexec()` will first get the file descriptor number for the init pipe -from the environment variable `_LIBCONTAINER_INITPIPE` (which was opened -by the parent and kept open across the fork-exec of the `nsexec()` init -process). The init pipe is used to read bootstrap data (namespace paths, -clone flags, uid and gid mappings, and the console path) from the parent -process. `nsexec()` will then call `setns(2)` to join the namespaces -provided in the bootstrap data (if available), `clone(2)` a child process -with the provided clone flags, update the user and group ID mappings, do -some further miscellaneous setup steps, and then send the PID of the -child process to the parent of the `nsexec()` "caller". Finally, -the parent `nsexec()` will exit and the child `nsexec()` process will -return to allow the Go runtime take over. - -NOTE: We do both `setns(2)` and `clone(2)` even if we don't have any -`CLONE_NEW*` clone flags because we must fork a new process in order to -enter the PID namespace. 
- - - diff --git a/vendor/github.com/opencontainers/runc/libcontainer/nsenter/cloned_binary.c b/vendor/github.com/opencontainers/runc/libcontainer/nsenter/cloned_binary.c deleted file mode 100644 index ad10f1406..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/nsenter/cloned_binary.c +++ /dev/null @@ -1,516 +0,0 @@ -/* - * Copyright (C) 2019 Aleksa Sarai - * Copyright (C) 2019 SUSE LLC - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#define _GNU_SOURCE -#include -#include -#include -#include -#include -#include -#include -#include - -#include -#include -#include -#include -#include -#include -#include -#include - -/* Use our own wrapper for memfd_create. */ -#if !defined(SYS_memfd_create) && defined(__NR_memfd_create) -# define SYS_memfd_create __NR_memfd_create -#endif -/* memfd_create(2) flags -- copied from . */ -#ifndef MFD_CLOEXEC -# define MFD_CLOEXEC 0x0001U -# define MFD_ALLOW_SEALING 0x0002U -#endif -int memfd_create(const char *name, unsigned int flags) -{ -#ifdef SYS_memfd_create - return syscall(SYS_memfd_create, name, flags); -#else - errno = ENOSYS; - return -1; -#endif -} - - -/* This comes directly from . */ -#ifndef F_LINUX_SPECIFIC_BASE -# define F_LINUX_SPECIFIC_BASE 1024 -#endif -#ifndef F_ADD_SEALS -# define F_ADD_SEALS (F_LINUX_SPECIFIC_BASE + 9) -# define F_GET_SEALS (F_LINUX_SPECIFIC_BASE + 10) -#endif -#ifndef F_SEAL_SEAL -# define F_SEAL_SEAL 0x0001 /* prevent further seals from being set */ -# define F_SEAL_SHRINK 0x0002 /* prevent file from shrinking */ -# define F_SEAL_GROW 0x0004 /* prevent file from growing */ -# define F_SEAL_WRITE 0x0008 /* prevent writes */ -#endif - -#define CLONED_BINARY_ENV "_LIBCONTAINER_CLONED_BINARY" -#define RUNC_MEMFD_COMMENT "runc_cloned:/proc/self/exe" -#define RUNC_MEMFD_SEALS \ - (F_SEAL_SEAL | F_SEAL_SHRINK | F_SEAL_GROW | F_SEAL_WRITE) - -static void *must_realloc(void *ptr, size_t size) -{ - void *old = ptr; - do { - ptr = realloc(old, size); - } while(!ptr); - return ptr; -} - -/* - * Verify whether we are currently in a self-cloned program (namely, is - * /proc/self/exe a memfd). F_GET_SEALS will only succeed for memfds (or rather - * for shmem files), and we want to be sure it's actually sealed. - */ -static int is_self_cloned(void) -{ - int fd, ret, is_cloned = 0; - struct stat statbuf = {}; - struct statfs fsbuf = {}; - - fd = open("/proc/self/exe", O_RDONLY|O_CLOEXEC); - if (fd < 0) - return -ENOTRECOVERABLE; - - /* - * Is the binary a fully-sealed memfd? We don't need CLONED_BINARY_ENV for - * this, because you cannot write to a sealed memfd no matter what (so - * sharing it isn't a bad thing -- and an admin could bind-mount a sealed - * memfd to /usr/bin/runc to allow re-use). 
- */ - ret = fcntl(fd, F_GET_SEALS); - if (ret >= 0) { - is_cloned = (ret == RUNC_MEMFD_SEALS); - goto out; - } - - /* - * All other forms require CLONED_BINARY_ENV, since they are potentially - * writeable (or we can't tell if they're fully safe) and thus we must - * check the environment as an extra layer of defence. - */ - if (!getenv(CLONED_BINARY_ENV)) { - is_cloned = false; - goto out; - } - - /* - * Is the binary on a read-only filesystem? We can't detect bind-mounts in - * particular (in-kernel they are identical to regular mounts) but we can - * at least be sure that it's read-only. In addition, to make sure that - * it's *our* bind-mount we check CLONED_BINARY_ENV. - */ - if (fstatfs(fd, &fsbuf) >= 0) - is_cloned |= (fsbuf.f_flags & MS_RDONLY); - - /* - * Okay, we're a tmpfile -- or we're currently running on RHEL <=7.6 - * which appears to have a borked backport of F_GET_SEALS. Either way, - * having a file which has no hardlinks indicates that we aren't using - * a host-side "runc" binary and this is something that a container - * cannot fake (because unlinking requires being able to resolve the - * path that you want to unlink). - */ - if (fstat(fd, &statbuf) >= 0) - is_cloned |= (statbuf.st_nlink == 0); - -out: - close(fd); - return is_cloned; -} - -/* Read a given file into a new buffer, and providing the length. */ -static char *read_file(char *path, size_t *length) -{ - int fd; - char buf[4096], *copy = NULL; - - if (!length) - return NULL; - - fd = open(path, O_RDONLY | O_CLOEXEC); - if (fd < 0) - return NULL; - - *length = 0; - for (;;) { - ssize_t n; - - n = read(fd, buf, sizeof(buf)); - if (n < 0) - goto error; - if (!n) - break; - - copy = must_realloc(copy, (*length + n) * sizeof(*copy)); - memcpy(copy + *length, buf, n); - *length += n; - } - close(fd); - return copy; - -error: - close(fd); - free(copy); - return NULL; -} - -/* - * A poor-man's version of "xargs -0". Basically parses a given block of - * NUL-delimited data, within the given length and adds a pointer to each entry - * to the array of pointers. - */ -static int parse_xargs(char *data, int data_length, char ***output) -{ - int num = 0; - char *cur = data; - - if (!data || *output != NULL) - return -1; - - while (cur < data + data_length) { - num++; - *output = must_realloc(*output, (num + 1) * sizeof(**output)); - (*output)[num - 1] = cur; - cur += strlen(cur) + 1; - } - (*output)[num] = NULL; - return num; -} - -/* - * "Parse" out argv from /proc/self/cmdline. - * This is necessary because we are running in a context where we don't have a - * main() that we can just get the arguments from. - */ -static int fetchve(char ***argv) -{ - char *cmdline = NULL; - size_t cmdline_size; - - cmdline = read_file("/proc/self/cmdline", &cmdline_size); - if (!cmdline) - goto error; - - if (parse_xargs(cmdline, cmdline_size, argv) <= 0) - goto error; - - return 0; - -error: - free(cmdline); - return -EINVAL; -} - -enum { - EFD_NONE = 0, - EFD_MEMFD, - EFD_FILE, -}; - -/* - * This comes from . We can't hard-code __O_TMPFILE because it - * changes depending on the architecture. If we don't have O_TMPFILE we always - * have the mkostemp(3) fallback. 
- */ -#ifndef O_TMPFILE -# if defined(__O_TMPFILE) && defined(O_DIRECTORY) -# define O_TMPFILE (__O_TMPFILE | O_DIRECTORY) -# endif -#endif - -static int make_execfd(int *fdtype) -{ - int fd = -1; - char template[PATH_MAX] = {0}; - char *prefix = getenv("_LIBCONTAINER_STATEDIR"); - - if (!prefix || *prefix != '/') - prefix = "/tmp"; - if (snprintf(template, sizeof(template), "%s/runc.XXXXXX", prefix) < 0) - return -1; - - /* - * Now try memfd, it's much nicer than actually creating a file in STATEDIR - * since it's easily detected thanks to sealing and also doesn't require - * assumptions about STATEDIR. - */ - *fdtype = EFD_MEMFD; - fd = memfd_create(RUNC_MEMFD_COMMENT, MFD_CLOEXEC | MFD_ALLOW_SEALING); - if (fd >= 0) - return fd; - if (errno != ENOSYS && errno != EINVAL) - goto error; - -#ifdef O_TMPFILE - /* - * Try O_TMPFILE to avoid races where someone might snatch our file. Note - * that O_EXCL isn't actually a security measure here (since you can just - * fd re-open it and clear O_EXCL). - */ - *fdtype = EFD_FILE; - fd = open(prefix, O_TMPFILE | O_EXCL | O_RDWR | O_CLOEXEC, 0700); - if (fd >= 0) { - struct stat statbuf = {}; - bool working_otmpfile = false; - - /* - * open(2) ignores unknown O_* flags -- yeah, I was surprised when I - * found this out too. As a result we can't check for EINVAL. However, - * if we get nlink != 0 (or EISDIR) then we know that this kernel - * doesn't support O_TMPFILE. - */ - if (fstat(fd, &statbuf) >= 0) - working_otmpfile = (statbuf.st_nlink == 0); - - if (working_otmpfile) - return fd; - - /* Pretend that we got EISDIR since O_TMPFILE failed. */ - close(fd); - errno = EISDIR; - } - if (errno != EISDIR) - goto error; -#endif /* defined(O_TMPFILE) */ - - /* - * Our final option is to create a temporary file the old-school way, and - * then unlink it so that nothing else sees it by accident. - */ - *fdtype = EFD_FILE; - fd = mkostemp(template, O_CLOEXEC); - if (fd >= 0) { - if (unlink(template) >= 0) - return fd; - close(fd); - } - -error: - *fdtype = EFD_NONE; - return -1; -} - -static int seal_execfd(int *fd, int fdtype) -{ - switch (fdtype) { - case EFD_MEMFD: - return fcntl(*fd, F_ADD_SEALS, RUNC_MEMFD_SEALS); - case EFD_FILE: { - /* Need to re-open our pseudo-memfd as an O_PATH to avoid execve(2) giving -ETXTBSY. */ - int newfd; - char fdpath[PATH_MAX] = {0}; - - if (fchmod(*fd, 0100) < 0) - return -1; - - if (snprintf(fdpath, sizeof(fdpath), "/proc/self/fd/%d", *fd) < 0) - return -1; - - newfd = open(fdpath, O_PATH | O_CLOEXEC); - if (newfd < 0) - return -1; - - close(*fd); - *fd = newfd; - return 0; - } - default: - break; - } - return -1; -} - -static int try_bindfd(void) -{ - int fd, ret = -1; - char template[PATH_MAX] = {0}; - char *prefix = getenv("_LIBCONTAINER_STATEDIR"); - - if (!prefix || *prefix != '/') - prefix = "/tmp"; - if (snprintf(template, sizeof(template), "%s/runc.XXXXXX", prefix) < 0) - return ret; - - /* - * We need somewhere to mount it, mounting anything over /proc/self is a - * BAD idea on the host -- even if we do it temporarily. - */ - fd = mkstemp(template); - if (fd < 0) - return ret; - close(fd); - - /* - * For obvious reasons this won't work in rootless mode because we haven't - * created a userns+mntns -- but getting that to work will be a bit - * complicated and it's only worth doing if someone actually needs it. 
- */ - ret = -EPERM; - if (mount("/proc/self/exe", template, "", MS_BIND, "") < 0) - goto out; - if (mount("", template, "", MS_REMOUNT | MS_BIND | MS_RDONLY, "") < 0) - goto out_umount; - - - /* Get read-only handle that we're sure can't be made read-write. */ - ret = open(template, O_PATH | O_CLOEXEC); - -out_umount: - /* - * Make sure the MNT_DETACH works, otherwise we could get remounted - * read-write and that would be quite bad (the fd would be made read-write - * too, invalidating the protection). - */ - if (umount2(template, MNT_DETACH) < 0) { - if (ret >= 0) - close(ret); - ret = -ENOTRECOVERABLE; - } - -out: - /* - * We don't care about unlink errors, the worst that happens is that - * there's an empty file left around in STATEDIR. - */ - unlink(template); - return ret; -} - -static ssize_t fd_to_fd(int outfd, int infd) -{ - ssize_t total = 0; - char buffer[4096]; - - for (;;) { - ssize_t nread, nwritten = 0; - - nread = read(infd, buffer, sizeof(buffer)); - if (nread < 0) - return -1; - if (!nread) - break; - - do { - ssize_t n = write(outfd, buffer + nwritten, nread - nwritten); - if (n < 0) - return -1; - nwritten += n; - } while(nwritten < nread); - - total += nwritten; - } - - return total; -} - -static int clone_binary(void) -{ - int binfd, execfd; - struct stat statbuf = {}; - size_t sent = 0; - int fdtype = EFD_NONE; - - /* - * Before we resort to copying, let's try creating an ro-binfd in one shot - * by getting a handle for a read-only bind-mount of the execfd. - */ - execfd = try_bindfd(); - if (execfd >= 0) - return execfd; - - /* - * Dammit, that didn't work -- time to copy the binary to a safe place we - * can seal the contents. - */ - execfd = make_execfd(&fdtype); - if (execfd < 0 || fdtype == EFD_NONE) - return -ENOTRECOVERABLE; - - binfd = open("/proc/self/exe", O_RDONLY | O_CLOEXEC); - if (binfd < 0) - goto error; - - if (fstat(binfd, &statbuf) < 0) - goto error_binfd; - - while (sent < statbuf.st_size) { - int n = sendfile(execfd, binfd, NULL, statbuf.st_size - sent); - if (n < 0) { - /* sendfile can fail so we fallback to a dumb user-space copy. */ - n = fd_to_fd(execfd, binfd); - if (n < 0) - goto error_binfd; - } - sent += n; - } - close(binfd); - if (sent != statbuf.st_size) - goto error; - - if (seal_execfd(&execfd, fdtype) < 0) - goto error; - - return execfd; - -error_binfd: - close(binfd); -error: - close(execfd); - return -EIO; -} - -/* Get cheap access to the environment. */ -extern char **environ; - -int ensure_cloned_binary(void) -{ - int execfd; - char **argv = NULL; - - /* Check that we're not self-cloned, and if we are then bail. 
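clone_binary/seal_execfd above copy /proc/self/exe into a memfd and seal it so the binary can no longer be modified. A rough Go analogue of that clone-and-seal step, assuming Linux and `golang.org/x/sys/unix`; the memfd name is illustrative, and this sketch only covers the copy path, not the read-only bind-mount shortcut:

```go
// Rough Go analogue of the clone-and-seal step: copy a file into an
// anonymous memfd and seal it against writes and resizing.
package main

import (
	"fmt"
	"io"
	"os"

	"golang.org/x/sys/unix"
)

const memfdSeals = unix.F_SEAL_SEAL | unix.F_SEAL_SHRINK | unix.F_SEAL_GROW | unix.F_SEAL_WRITE

func cloneToSealedMemfd(path string) (*os.File, error) {
	fd, err := unix.MemfdCreate("cloned_binary_demo", unix.MFD_CLOEXEC|unix.MFD_ALLOW_SEALING)
	if err != nil {
		return nil, err
	}
	memfd := os.NewFile(uintptr(fd), "memfd")

	src, err := os.Open(path)
	if err != nil {
		memfd.Close()
		return nil, err
	}
	defer src.Close()

	if _, err := io.Copy(memfd, src); err != nil {
		memfd.Close()
		return nil, err
	}
	// Once sealed, the copy can no longer shrink, grow, or be written to,
	// which is what the F_GET_SEALS check relies on to trust the binary.
	if _, err := unix.FcntlInt(memfd.Fd(), unix.F_ADD_SEALS, memfdSeals); err != nil {
		memfd.Close()
		return nil, err
	}
	return memfd, nil
}

func main() {
	f, err := cloneToSealedMemfd("/proc/self/exe")
	if err != nil {
		fmt.Println("clone failed:", err)
		return
	}
	defer f.Close()

	seals, _ := unix.FcntlInt(f.Fd(), unix.F_GET_SEALS, 0)
	fmt.Printf("sealed copy ready, seals=%#x\n", seals)
}
```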
*/ - int cloned = is_self_cloned(); - if (cloned > 0 || cloned == -ENOTRECOVERABLE) - return cloned; - - if (fetchve(&argv) < 0) - return -EINVAL; - - execfd = clone_binary(); - if (execfd < 0) - return -EIO; - - if (putenv(CLONED_BINARY_ENV "=1")) - goto error; - - fexecve(execfd, argv, environ); -error: - close(execfd); - return -ENOEXEC; -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/nsenter/namespace.h b/vendor/github.com/opencontainers/runc/libcontainer/nsenter/namespace.h deleted file mode 100644 index 9e9bdca05..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/nsenter/namespace.h +++ /dev/null @@ -1,32 +0,0 @@ -#ifndef NSENTER_NAMESPACE_H -#define NSENTER_NAMESPACE_H - -#ifndef _GNU_SOURCE -# define _GNU_SOURCE -#endif -#include - -/* All of these are taken from include/uapi/linux/sched.h */ -#ifndef CLONE_NEWNS -# define CLONE_NEWNS 0x00020000 /* New mount namespace group */ -#endif -#ifndef CLONE_NEWCGROUP -# define CLONE_NEWCGROUP 0x02000000 /* New cgroup namespace */ -#endif -#ifndef CLONE_NEWUTS -# define CLONE_NEWUTS 0x04000000 /* New utsname namespace */ -#endif -#ifndef CLONE_NEWIPC -# define CLONE_NEWIPC 0x08000000 /* New ipc namespace */ -#endif -#ifndef CLONE_NEWUSER -# define CLONE_NEWUSER 0x10000000 /* New user namespace */ -#endif -#ifndef CLONE_NEWPID -# define CLONE_NEWPID 0x20000000 /* New pid namespace */ -#endif -#ifndef CLONE_NEWNET -# define CLONE_NEWNET 0x40000000 /* New network namespace */ -#endif - -#endif /* NSENTER_NAMESPACE_H */ diff --git a/vendor/github.com/opencontainers/runc/libcontainer/nsenter/nsenter.go b/vendor/github.com/opencontainers/runc/libcontainer/nsenter/nsenter.go deleted file mode 100644 index 07f4d63e4..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/nsenter/nsenter.go +++ /dev/null @@ -1,12 +0,0 @@ -// +build linux,!gccgo - -package nsenter - -/* -#cgo CFLAGS: -Wall -extern void nsexec(); -void __attribute__((constructor)) init(void) { - nsexec(); -} -*/ -import "C" diff --git a/vendor/github.com/opencontainers/runc/libcontainer/nsenter/nsenter_gccgo.go b/vendor/github.com/opencontainers/runc/libcontainer/nsenter/nsenter_gccgo.go deleted file mode 100644 index 63c7a3ec2..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/nsenter/nsenter_gccgo.go +++ /dev/null @@ -1,25 +0,0 @@ -// +build linux,gccgo - -package nsenter - -/* -#cgo CFLAGS: -Wall -extern void nsexec(); -void __attribute__((constructor)) init(void) { - nsexec(); -} -*/ -import "C" - -// AlwaysFalse is here to stay false -// (and be exported so the compiler doesn't optimize out its reference) -var AlwaysFalse bool - -func init() { - if AlwaysFalse { - // by referencing this C init() in a noop test, it will ensure the compiler - // links in the C function. 
- // https://gcc.gnu.org/bugzilla/show_bug.cgi?id=65134 - C.init() - } -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/nsenter/nsenter_unsupported.go b/vendor/github.com/opencontainers/runc/libcontainer/nsenter/nsenter_unsupported.go deleted file mode 100644 index ac701ca39..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/nsenter/nsenter_unsupported.go +++ /dev/null @@ -1,5 +0,0 @@ -// +build !linux !cgo - -package nsenter - -import "C" diff --git a/vendor/github.com/opencontainers/runc/libcontainer/nsenter/nsexec.c b/vendor/github.com/opencontainers/runc/libcontainer/nsenter/nsexec.c deleted file mode 100644 index 3b08c5e33..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/nsenter/nsexec.c +++ /dev/null @@ -1,1035 +0,0 @@ - -#define _GNU_SOURCE -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - -#include -#include -#include -#include -#include - -#include -#include -#include - -/* Get all of the CLONE_NEW* flags. */ -#include "namespace.h" - -/* Synchronisation values. */ -enum sync_t { - SYNC_USERMAP_PLS = 0x40, /* Request parent to map our users. */ - SYNC_USERMAP_ACK = 0x41, /* Mapping finished by the parent. */ - SYNC_RECVPID_PLS = 0x42, /* Tell parent we're sending the PID. */ - SYNC_RECVPID_ACK = 0x43, /* PID was correctly received by parent. */ - SYNC_GRANDCHILD = 0x44, /* The grandchild is ready to run. */ - SYNC_CHILD_READY = 0x45, /* The child or grandchild is ready to return. */ -}; - -/* - * Synchronisation value for cgroup namespace setup. - * The same constant is defined in process_linux.go as "createCgroupns". - */ -#define CREATECGROUPNS 0x80 - -/* longjmp() arguments. */ -#define JUMP_PARENT 0x00 -#define JUMP_CHILD 0xA0 -#define JUMP_INIT 0xA1 - -/* JSON buffer. */ -#define JSON_MAX 4096 - -/* Assume the stack grows down, so arguments should be above it. */ -struct clone_t { - /* - * Reserve some space for clone() to locate arguments - * and retcode in this place - */ - char stack[4096] __attribute__ ((aligned(16))); - char stack_ptr[0]; - - /* There's two children. This is used to execute the different code. */ - jmp_buf *env; - int jmpval; -}; - -struct nlconfig_t { - char *data; - - /* Process settings. */ - uint32_t cloneflags; - char *oom_score_adj; - size_t oom_score_adj_len; - - /* User namespace settings. */ - char *uidmap; - size_t uidmap_len; - char *gidmap; - size_t gidmap_len; - char *namespaces; - size_t namespaces_len; - uint8_t is_setgroup; - - /* Rootless container settings. */ - uint8_t is_rootless_euid; /* boolean */ - char *uidmappath; - size_t uidmappath_len; - char *gidmappath; - size_t gidmappath_len; -}; - -#define PANIC "panic" -#define FATAL "fatal" -#define ERROR "error" -#define WARNING "warning" -#define INFO "info" -#define DEBUG "debug" - -static int logfd = -1; - -/* - * List of netlink message types sent to us as part of bootstrapping the init. - * These constants are defined in libcontainer/message_linux.go. - */ -#define INIT_MSG 62000 -#define CLONE_FLAGS_ATTR 27281 -#define NS_PATHS_ATTR 27282 -#define UIDMAP_ATTR 27283 -#define GIDMAP_ATTR 27284 -#define SETGROUP_ATTR 27285 -#define OOM_SCORE_ADJ_ATTR 27286 -#define ROOTLESS_EUID_ATTR 27287 -#define UIDMAPPATH_ATTR 27288 -#define GIDMAPPATH_ATTR 27289 - -/* - * Use the raw syscall for versions of glibc which don't include a function for - * it, namely (glibc 2.12). 
- */ -#if __GLIBC__ == 2 && __GLIBC_MINOR__ < 14 -# define _GNU_SOURCE -# include "syscall.h" -# if !defined(SYS_setns) && defined(__NR_setns) -# define SYS_setns __NR_setns -# endif - -#ifndef SYS_setns -# error "setns(2) syscall not supported by glibc version" -#endif - -int setns(int fd, int nstype) -{ - return syscall(SYS_setns, fd, nstype); -} -#endif - -static void write_log_with_info(const char *level, const char *function, int line, const char *format, ...) -{ - char message[1024] = {}; - - va_list args; - - if (logfd < 0 || level == NULL) - return; - - va_start(args, format); - if (vsnprintf(message, sizeof(message), format, args) < 0) - return; - va_end(args); - - if (dprintf(logfd, "{\"level\":\"%s\", \"msg\": \"%s:%d %s\"}\n", level, function, line, message) < 0) - return; -} - -#define write_log(level, fmt, ...) \ - write_log_with_info((level), __FUNCTION__, __LINE__, (fmt), ##__VA_ARGS__) - -/* XXX: This is ugly. */ -static int syncfd = -1; - -#define bail(fmt, ...) \ - do { \ - write_log(FATAL, "nsenter: " fmt ": %m", ##__VA_ARGS__); \ - exit(1); \ - } while(0) - -static int write_file(char *data, size_t data_len, char *pathfmt, ...) -{ - int fd, len, ret = 0; - char path[PATH_MAX]; - - va_list ap; - va_start(ap, pathfmt); - len = vsnprintf(path, PATH_MAX, pathfmt, ap); - va_end(ap); - if (len < 0) - return -1; - - fd = open(path, O_RDWR); - if (fd < 0) { - return -1; - } - - len = write(fd, data, data_len); - if (len != data_len) { - ret = -1; - goto out; - } - - out: - close(fd); - return ret; -} - -enum policy_t { - SETGROUPS_DEFAULT = 0, - SETGROUPS_ALLOW, - SETGROUPS_DENY, -}; - -/* This *must* be called before we touch gid_map. */ -static void update_setgroups(int pid, enum policy_t setgroup) -{ - char *policy; - - switch (setgroup) { - case SETGROUPS_ALLOW: - policy = "allow"; - break; - case SETGROUPS_DENY: - policy = "deny"; - break; - case SETGROUPS_DEFAULT: - default: - /* Nothing to do. */ - return; - } - - if (write_file(policy, strlen(policy), "/proc/%d/setgroups", pid) < 0) { - /* - * If the kernel is too old to support /proc/pid/setgroups, - * open(2) or write(2) will return ENOENT. This is fine. - */ - if (errno != ENOENT) - bail("failed to write '%s' to /proc/%d/setgroups", policy, pid); - } -} - -static int try_mapping_tool(const char *app, int pid, char *map, size_t map_len) -{ - int child; - - /* - * If @app is NULL, execve will segfault. Just check it here and bail (if - * we're in this path, the caller is already getting desperate and there - * isn't a backup to this failing). This usually would be a configuration - * or programming issue. - */ - if (!app) - bail("mapping tool not present"); - - child = fork(); - if (child < 0) - bail("failed to fork"); - - if (!child) { -#define MAX_ARGV 20 - char *argv[MAX_ARGV]; - char *envp[] = { NULL }; - char pid_fmt[16]; - int argc = 0; - char *next; - - snprintf(pid_fmt, 16, "%d", pid); - - argv[argc++] = (char *)app; - argv[argc++] = pid_fmt; - /* - * Convert the map string into a list of argument that - * newuidmap/newgidmap can understand. 
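update_setgroups() above exists because of an ordering rule: since Linux 3.19 an unprivileged parent must write "deny" to /proc/<pid>/setgroups before the kernel will accept its gid_map. A small Go sketch of that sequence; the PID and mapping are illustrative:

```go
package main

import (
	"fmt"
	"os"
)

// writeGidMap writes the setgroups policy first, then the gid_map, mirroring
// the update_setgroups/update_gidmap ordering above.
func writeGidMap(pid int, gidMap string) error {
	setgroups := fmt.Sprintf("/proc/%d/setgroups", pid)
	if err := os.WriteFile(setgroups, []byte("deny"), 0o644); err != nil && !os.IsNotExist(err) {
		return err // kernels older than 3.19 have no setgroups file, which is fine
	}
	return os.WriteFile(fmt.Sprintf("/proc/%d/gid_map", pid), []byte(gidMap), 0o644)
}

func main() {
	// Hypothetical single-range mapping for a child with PID 1234.
	if err := writeGidMap(1234, "0 100000 65536\n"); err != nil {
		fmt.Fprintln(os.Stderr, "gid_map:", err)
	}
}
```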
- */ - - while (argc < MAX_ARGV) { - if (*map == '\0') { - argv[argc++] = NULL; - break; - } - argv[argc++] = map; - next = strpbrk(map, "\n "); - if (next == NULL) - break; - *next++ = '\0'; - map = next + strspn(next, "\n "); - } - - execve(app, argv, envp); - bail("failed to execv"); - } else { - int status; - - while (true) { - if (waitpid(child, &status, 0) < 0) { - if (errno == EINTR) - continue; - bail("failed to waitpid"); - } - if (WIFEXITED(status) || WIFSIGNALED(status)) - return WEXITSTATUS(status); - } - } - - return -1; -} - -static void update_uidmap(const char *path, int pid, char *map, size_t map_len) -{ - if (map == NULL || map_len <= 0) - return; - - if (write_file(map, map_len, "/proc/%d/uid_map", pid) < 0) { - if (errno != EPERM) - bail("failed to update /proc/%d/uid_map", pid); - if (try_mapping_tool(path, pid, map, map_len)) - bail("failed to use newuid map on %d", pid); - } -} - -static void update_gidmap(const char *path, int pid, char *map, size_t map_len) -{ - if (map == NULL || map_len <= 0) - return; - - if (write_file(map, map_len, "/proc/%d/gid_map", pid) < 0) { - if (errno != EPERM) - bail("failed to update /proc/%d/gid_map", pid); - if (try_mapping_tool(path, pid, map, map_len)) - bail("failed to use newgid map on %d", pid); - } -} - -static void update_oom_score_adj(char *data, size_t len) -{ - if (data == NULL || len <= 0) - return; - - if (write_file(data, len, "/proc/self/oom_score_adj") < 0) - bail("failed to update /proc/self/oom_score_adj"); -} - -/* A dummy function that just jumps to the given jumpval. */ -static int child_func(void *arg) __attribute__ ((noinline)); -static int child_func(void *arg) -{ - struct clone_t *ca = (struct clone_t *)arg; - longjmp(*ca->env, ca->jmpval); -} - -static int clone_parent(jmp_buf *env, int jmpval) __attribute__ ((noinline)); -static int clone_parent(jmp_buf *env, int jmpval) -{ - struct clone_t ca = { - .env = env, - .jmpval = jmpval, - }; - - return clone(child_func, ca.stack_ptr, CLONE_PARENT | SIGCHLD, &ca); -} - -/* - * Gets the init pipe fd from the environment, which is used to read the - * bootstrap data and tell the parent what the new pid is after we finish - * setting up the environment. - */ -static int initpipe(void) -{ - int pipenum; - char *initpipe, *endptr; - - initpipe = getenv("_LIBCONTAINER_INITPIPE"); - if (initpipe == NULL || *initpipe == '\0') - return -1; - - pipenum = strtol(initpipe, &endptr, 10); - if (*endptr != '\0') - bail("unable to parse _LIBCONTAINER_INITPIPE"); - - return pipenum; -} - -static void setup_logpipe(void) -{ - char *logpipe, *endptr; - - logpipe = getenv("_LIBCONTAINER_LOGPIPE"); - if (logpipe == NULL || *logpipe == '\0') { - return; - } - - logfd = strtol(logpipe, &endptr, 10); - if (logpipe == endptr || *endptr != '\0') { - fprintf(stderr, "unable to parse _LIBCONTAINER_LOGPIPE, value: %s\n", logpipe); - /* It is too early to use bail */ - exit(1); - } -} - -/* Returns the clone(2) flag for a namespace, given the name of a namespace. */ -static int nsflag(char *name) -{ - if (!strcmp(name, "cgroup")) - return CLONE_NEWCGROUP; - else if (!strcmp(name, "ipc")) - return CLONE_NEWIPC; - else if (!strcmp(name, "mnt")) - return CLONE_NEWNS; - else if (!strcmp(name, "net")) - return CLONE_NEWNET; - else if (!strcmp(name, "pid")) - return CLONE_NEWPID; - else if (!strcmp(name, "user")) - return CLONE_NEWUSER; - else if (!strcmp(name, "uts")) - return CLONE_NEWUTS; - - /* If we don't recognise a name, fallback to 0. 
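initpipe() and setup_logpipe() above expect nothing more than a file-descriptor number in an environment variable. A sketch of how a Go parent could arrange that with exec.Cmd; the re-exec target and the "init" argument are illustrative, and the first ExtraFiles entry becomes fd 3 in the child by the os/exec rules:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"

	"golang.org/x/sys/unix"
)

func main() {
	// A bidirectional socketpair, since the bootstrap pipe carries data both ways.
	fds, err := unix.Socketpair(unix.AF_LOCAL, unix.SOCK_STREAM|unix.SOCK_CLOEXEC, 0)
	if err != nil {
		panic(err)
	}
	parent := os.NewFile(uintptr(fds[0]), "init-parent")
	child := os.NewFile(uintptr(fds[1]), "init-child")
	defer parent.Close()

	cmd := exec.Command("/proc/self/exe", "init") // illustrative re-exec of ourselves
	cmd.ExtraFiles = []*os.File{child}            // first ExtraFiles entry is fd 3 in the child
	cmd.Env = append(os.Environ(), "_LIBCONTAINER_INITPIPE=3")
	// On Start(), the cgo constructor in nsenter.go would run nsexec() before
	// the Go runtime comes up; nsexec() finds the pipe via the variable above.
	fmt.Println(cmd.Path, cmd.Env[len(cmd.Env)-1])
}
```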
*/ - return 0; -} - -static uint32_t readint32(char *buf) -{ - return *(uint32_t *) buf; -} - -static uint8_t readint8(char *buf) -{ - return *(uint8_t *) buf; -} - -static void nl_parse(int fd, struct nlconfig_t *config) -{ - size_t len, size; - struct nlmsghdr hdr; - char *data, *current; - - /* Retrieve the netlink header. */ - len = read(fd, &hdr, NLMSG_HDRLEN); - if (len != NLMSG_HDRLEN) - bail("invalid netlink header length %zu", len); - - if (hdr.nlmsg_type == NLMSG_ERROR) - bail("failed to read netlink message"); - - if (hdr.nlmsg_type != INIT_MSG) - bail("unexpected msg type %d", hdr.nlmsg_type); - - /* Retrieve data. */ - size = NLMSG_PAYLOAD(&hdr, 0); - current = data = malloc(size); - if (!data) - bail("failed to allocate %zu bytes of memory for nl_payload", size); - - len = read(fd, data, size); - if (len != size) - bail("failed to read netlink payload, %zu != %zu", len, size); - - /* Parse the netlink payload. */ - config->data = data; - while (current < data + size) { - struct nlattr *nlattr = (struct nlattr *)current; - size_t payload_len = nlattr->nla_len - NLA_HDRLEN; - - /* Advance to payload. */ - current += NLA_HDRLEN; - - /* Handle payload. */ - switch (nlattr->nla_type) { - case CLONE_FLAGS_ATTR: - config->cloneflags = readint32(current); - break; - case ROOTLESS_EUID_ATTR: - config->is_rootless_euid = readint8(current); /* boolean */ - break; - case OOM_SCORE_ADJ_ATTR: - config->oom_score_adj = current; - config->oom_score_adj_len = payload_len; - break; - case NS_PATHS_ATTR: - config->namespaces = current; - config->namespaces_len = payload_len; - break; - case UIDMAP_ATTR: - config->uidmap = current; - config->uidmap_len = payload_len; - break; - case GIDMAP_ATTR: - config->gidmap = current; - config->gidmap_len = payload_len; - break; - case UIDMAPPATH_ATTR: - config->uidmappath = current; - config->uidmappath_len = payload_len; - break; - case GIDMAPPATH_ATTR: - config->gidmappath = current; - config->gidmappath_len = payload_len; - break; - case SETGROUP_ATTR: - config->is_setgroup = readint8(current); - break; - default: - bail("unknown netlink message type %d", nlattr->nla_type); - } - - current += NLA_ALIGN(payload_len); - } -} - -void nl_free(struct nlconfig_t *config) -{ - free(config->data); -} - -void join_namespaces(char *nslist) -{ - int num = 0, i; - char *saveptr = NULL; - char *namespace = strtok_r(nslist, ",", &saveptr); - struct namespace_t { - int fd; - int ns; - char type[PATH_MAX]; - char path[PATH_MAX]; - } *namespaces = NULL; - - if (!namespace || !strlen(namespace) || !strlen(nslist)) - bail("ns paths are empty"); - - /* - * We have to open the file descriptors first, since after - * we join the mnt namespace we might no longer be able to - * access the paths. - */ - do { - int fd; - char *path; - struct namespace_t *ns; - - /* Resize the namespace array. */ - namespaces = realloc(namespaces, ++num * sizeof(struct namespace_t)); - if (!namespaces) - bail("failed to reallocate namespace array"); - ns = &namespaces[num - 1]; - - /* Split 'ns:path'. */ - path = strstr(namespace, ":"); - if (!path) - bail("failed to parse %s", namespace); - *path++ = '\0'; - - fd = open(path, O_RDONLY); - if (fd < 0) - bail("failed to open %s", path); - - ns->fd = fd; - ns->ns = nsflag(namespace); - strncpy(ns->path, path, PATH_MAX - 1); - ns->path[PATH_MAX - 1] = '\0'; - } while ((namespace = strtok_r(NULL, ",", &saveptr)) != NULL); - - /* - * The ordering in which we join namespaces is important. We should - * always join the user namespace *first*. 
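nl_parse() above consumes a single netlink-style message: a 16-byte nlmsghdr of type INIT_MSG followed by flat nla_len/nla_type attributes. A sketch of what the sending side could look like in Go, using the constants listed earlier and assuming little-endian host byte order:

```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

// Values copied from the constants above.
const (
	initMsg        = 62000
	cloneFlagsAttr = 27281
	nlaHdrLen      = 4  // u16 nla_len + u16 nla_type
	nlmsgHdrLen    = 16 // struct nlmsghdr
)

// putU32Attr appends one attribute carrying a uint32 payload; a 4-byte
// payload is already aligned, so no padding is needed.
func putU32Attr(buf *bytes.Buffer, typ uint16, val uint32) {
	binary.Write(buf, binary.LittleEndian, uint16(nlaHdrLen+4)) // nla_len
	binary.Write(buf, binary.LittleEndian, typ)                 // nla_type
	binary.Write(buf, binary.LittleEndian, val)                 // payload
}

func main() {
	var attrs bytes.Buffer
	putU32Attr(&attrs, cloneFlagsAttr, 0x10000000) // e.g. CLONE_NEWUSER

	var msg bytes.Buffer
	binary.Write(&msg, binary.LittleEndian, uint32(nlmsgHdrLen+attrs.Len())) // nlmsg_len
	binary.Write(&msg, binary.LittleEndian, uint16(initMsg))                 // nlmsg_type
	binary.Write(&msg, binary.LittleEndian, uint16(0))                       // nlmsg_flags
	binary.Write(&msg, binary.LittleEndian, uint32(0))                       // nlmsg_seq
	binary.Write(&msg, binary.LittleEndian, uint32(0))                       // nlmsg_pid
	msg.Write(attrs.Bytes())

	fmt.Printf("% x\n", msg.Bytes()) // what nl_parse() would read from the pipe
}
```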
This is all guaranteed - * from the container_linux.go side of this, so we're just going to - * follow the order given to us. - */ - - for (i = 0; i < num; i++) { - struct namespace_t ns = namespaces[i]; - - if (setns(ns.fd, ns.ns) < 0) - bail("failed to setns to %s", ns.path); - - close(ns.fd); - } - - free(namespaces); -} - -/* Defined in cloned_binary.c. */ -extern int ensure_cloned_binary(void); - -void nsexec(void) -{ - int pipenum; - jmp_buf env; - int sync_child_pipe[2], sync_grandchild_pipe[2]; - struct nlconfig_t config = { 0 }; - - /* - * Setup a pipe to send logs to the parent. This should happen - * first, because bail will use that pipe. - */ - setup_logpipe(); - - /* - * If we don't have an init pipe, just return to the go routine. - * We'll only get an init pipe for start or exec. - */ - pipenum = initpipe(); - if (pipenum == -1) - return; - - /* - * We need to re-exec if we are not in a cloned binary. This is necessary - * to ensure that containers won't be able to access the host binary - * through /proc/self/exe. See CVE-2019-5736. - */ - if (ensure_cloned_binary() < 0) - bail("could not ensure we are a cloned binary"); - - write_log(DEBUG, "nsexec started"); - - /* Parse all of the netlink configuration. */ - nl_parse(pipenum, &config); - - /* Set oom_score_adj. This has to be done before !dumpable because - * /proc/self/oom_score_adj is not writeable unless you're an privileged - * user (if !dumpable is set). All children inherit their parent's - * oom_score_adj value on fork(2) so this will always be propagated - * properly. - */ - update_oom_score_adj(config.oom_score_adj, config.oom_score_adj_len); - - /* - * Make the process non-dumpable, to avoid various race conditions that - * could cause processes in namespaces we're joining to access host - * resources (or potentially execute code). - * - * However, if the number of namespaces we are joining is 0, we are not - * going to be switching to a different security context. Thus setting - * ourselves to be non-dumpable only breaks things (like rootless - * containers), which is the recommendation from the kernel folks. - */ - if (config.namespaces) { - if (prctl(PR_SET_DUMPABLE, 0, 0, 0, 0) < 0) - bail("failed to set process as non-dumpable"); - } - - /* Pipe so we can tell the child when we've finished setting up. */ - if (socketpair(AF_LOCAL, SOCK_STREAM, 0, sync_child_pipe) < 0) - bail("failed to setup sync pipe between parent and child"); - - /* - * We need a new socketpair to sync with grandchild so we don't have - * race condition with child. - */ - if (socketpair(AF_LOCAL, SOCK_STREAM, 0, sync_grandchild_pipe) < 0) - bail("failed to setup sync pipe between parent and grandchild"); - - /* TODO: Currently we aren't dealing with child deaths properly. */ - - /* - * Okay, so this is quite annoying. - * - * In order for this unsharing code to be more extensible we need to split - * up unshare(CLONE_NEWUSER) and clone() in various ways. The ideal case - * would be if we did clone(CLONE_NEWUSER) and the other namespaces - * separately, but because of SELinux issues we cannot really do that. But - * we cannot just dump the namespace flags into clone(...) because several - * usecases (such as rootless containers) require more granularity around - * the namespace setup. In addition, some older kernels had issues where - * CLONE_NEWUSER wasn't handled before other namespaces (but we cannot - * handle this while also dealing with SELinux so we choose SELinux support - * over broken kernel support). 
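join_namespaces() above receives the list as comma-separated "type:path" pairs and opens every path before the first setns(2), since joining the mount namespace may make later paths unreachable. A standalone sketch of that parsing step (paths illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// nsEntry is one "type:path" pair from the bootstrap payload.
type nsEntry struct {
	Type string
	Path string
}

func parseNsList(nslist string) ([]nsEntry, error) {
	if nslist == "" {
		return nil, fmt.Errorf("ns paths are empty")
	}
	var out []nsEntry
	for _, item := range strings.Split(nslist, ",") {
		typ, path, ok := strings.Cut(item, ":")
		if !ok || typ == "" || path == "" {
			return nil, fmt.Errorf("failed to parse %q", item)
		}
		out = append(out, nsEntry{Type: typ, Path: path})
	}
	return out, nil
}

func main() {
	entries, err := parseNsList("user:/proc/1234/ns/user,mnt:/proc/1234/ns/mnt")
	if err != nil {
		panic(err)
	}
	fmt.Println(entries) // open each Path before any setns() call
}
```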
- * - * However, if we unshare(2) the user namespace *before* we clone(2), then - * all hell breaks loose. - * - * The parent no longer has permissions to do many things (unshare(2) drops - * all capabilities in your old namespace), and the container cannot be set - * up to have more than one {uid,gid} mapping. This is obviously less than - * ideal. In order to fix this, we have to first clone(2) and then unshare. - * - * Unfortunately, it's not as simple as that. We have to fork to enter the - * PID namespace (the PID namespace only applies to children). Since we'll - * have to double-fork, this clone_parent() call won't be able to get the - * PID of the _actual_ init process (without doing more synchronisation than - * I can deal with at the moment). So we'll just get the parent to send it - * for us, the only job of this process is to update - * /proc/pid/{setgroups,uid_map,gid_map}. - * - * And as a result of the above, we also need to setns(2) in the first child - * because if we join a PID namespace in the topmost parent then our child - * will be in that namespace (and it will not be able to give us a PID value - * that makes sense without resorting to sending things with cmsg). - * - * This also deals with an older issue caused by dumping cloneflags into - * clone(2): On old kernels, CLONE_PARENT didn't work with CLONE_NEWPID, so - * we have to unshare(2) before clone(2) in order to do this. This was fixed - * in upstream commit 1f7f4dde5c945f41a7abc2285be43d918029ecc5, and was - * introduced by 40a0d32d1eaffe6aac7324ca92604b6b3977eb0e. As far as we're - * aware, the last mainline kernel which had this bug was Linux 3.12. - * However, we cannot comment on which kernels the broken patch was - * backported to. - * - * -- Aleksa "what has my life come to?" Sarai - */ - - switch (setjmp(env)) { - /* - * Stage 0: We're in the parent. Our job is just to create a new child - * (stage 1: JUMP_CHILD) process and write its uid_map and - * gid_map. That process will go on to create a new process, then - * it will send us its PID which we will send to the bootstrap - * process. - */ - case JUMP_PARENT:{ - int len; - pid_t child, first_child = -1; - bool ready = false; - - /* For debugging. */ - prctl(PR_SET_NAME, (unsigned long)"runc:[0:PARENT]", 0, 0, 0); - - /* Start the process of getting a container. */ - child = clone_parent(&env, JUMP_CHILD); - if (child < 0) - bail("unable to fork: child_func"); - - /* - * State machine for synchronisation with the children. - * - * Father only return when both child and grandchild are - * ready, so we can receive all possible error codes - * generated by children. - */ - while (!ready) { - enum sync_t s; - - syncfd = sync_child_pipe[1]; - close(sync_child_pipe[0]); - - if (read(syncfd, &s, sizeof(s)) != sizeof(s)) - bail("failed to sync with child: next state"); - - switch (s) { - case SYNC_USERMAP_PLS: - /* - * Enable setgroups(2) if we've been asked to. But we also - * have to explicitly disable setgroups(2) if we're - * creating a rootless container for single-entry mapping. - * i.e. config.is_setgroup == false. - * (this is required since Linux 3.19). - * - * For rootless multi-entry mapping, config.is_setgroup shall be true and - * newuidmap/newgidmap shall be used. - */ - - if (config.is_rootless_euid && !config.is_setgroup) - update_setgroups(child, SETGROUPS_DENY); - - /* Set up mappings. 
*/ - update_uidmap(config.uidmappath, child, config.uidmap, config.uidmap_len); - update_gidmap(config.gidmappath, child, config.gidmap, config.gidmap_len); - - s = SYNC_USERMAP_ACK; - if (write(syncfd, &s, sizeof(s)) != sizeof(s)) { - kill(child, SIGKILL); - bail("failed to sync with child: write(SYNC_USERMAP_ACK)"); - } - break; - case SYNC_RECVPID_PLS:{ - first_child = child; - - /* Get the init_func pid. */ - if (read(syncfd, &child, sizeof(child)) != sizeof(child)) { - kill(first_child, SIGKILL); - bail("failed to sync with child: read(childpid)"); - } - - /* Send ACK. */ - s = SYNC_RECVPID_ACK; - if (write(syncfd, &s, sizeof(s)) != sizeof(s)) { - kill(first_child, SIGKILL); - kill(child, SIGKILL); - bail("failed to sync with child: write(SYNC_RECVPID_ACK)"); - } - - /* Send the init_func pid back to our parent. - * - * Send the init_func pid and the pid of the first child back to our parent. - * We need to send both back because we can't reap the first child we created (CLONE_PARENT). - * It becomes the responsibility of our parent to reap the first child. - */ - len = dprintf(pipenum, "{\"pid\": %d, \"pid_first\": %d}\n", child, first_child); - if (len < 0) { - kill(child, SIGKILL); - bail("unable to generate JSON for child pid"); - } - } - break; - case SYNC_CHILD_READY: - ready = true; - break; - default: - bail("unexpected sync value: %u", s); - } - } - - /* Now sync with grandchild. */ - - ready = false; - while (!ready) { - enum sync_t s; - - syncfd = sync_grandchild_pipe[1]; - close(sync_grandchild_pipe[0]); - - s = SYNC_GRANDCHILD; - if (write(syncfd, &s, sizeof(s)) != sizeof(s)) { - kill(child, SIGKILL); - bail("failed to sync with child: write(SYNC_GRANDCHILD)"); - } - - if (read(syncfd, &s, sizeof(s)) != sizeof(s)) - bail("failed to sync with child: next state"); - - switch (s) { - case SYNC_CHILD_READY: - ready = true; - break; - default: - bail("unexpected sync value: %u", s); - } - } - exit(0); - } - - /* - * Stage 1: We're in the first child process. Our job is to join any - * provided namespaces in the netlink payload and unshare all - * of the requested namespaces. If we've been asked to - * CLONE_NEWUSER, we will ask our parent (stage 0) to set up - * our user mappings for us. Then, we create a new child - * (stage 2: JUMP_INIT) for PID namespace. We then send the - * child's PID to our parent (stage 0). - */ - case JUMP_CHILD:{ - pid_t child; - enum sync_t s; - - /* We're in a child and thus need to tell the parent if we die. */ - syncfd = sync_child_pipe[0]; - close(sync_child_pipe[1]); - - /* For debugging. */ - prctl(PR_SET_NAME, (unsigned long)"runc:[1:CHILD]", 0, 0, 0); - - /* - * We need to setns first. We cannot do this earlier (in stage 0) - * because of the fact that we forked to get here (the PID of - * [stage 2: JUMP_INIT]) would be meaningless). We could send it - * using cmsg(3) but that's just annoying. - */ - if (config.namespaces) - join_namespaces(config.namespaces); - - /* - * Deal with user namespaces first. They are quite special, as they - * affect our ability to unshare other namespaces and are used as - * context for privilege checks. - * - * We don't unshare all namespaces in one go. The reason for this - * is that, while the kernel documentation may claim otherwise, - * there are certain cases where unsharing all namespaces at once - * will result in namespace objects being owned incorrectly. - * Ideally we should just fix these kernel bugs, but it's better to - * be safe than sorry, and fix them separately. 
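The `{"pid": ..., "pid_first": ...}` line written above is what the Go side decodes to learn both the final init PID and the intermediate CLONE_PARENT child it must reap. A sketch of the receiving end; the struct is illustrative rather than libcontainer's exact type:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// bootstrapPid mirrors the JSON blob the stage-0 parent writes back over the
// init pipe: the stage-2 init PID plus the intermediate (CLONE_PARENT) child
// that the Go caller has to reap itself.
type bootstrapPid struct {
	Pid           int `json:"pid"`
	PidFirstChild int `json:"pid_first"`
}

func main() {
	// Hypothetical payload as produced by the dprintf() call above.
	payload := `{"pid": 4321, "pid_first": 4320}` + "\n"
	var p bootstrapPid
	if err := json.NewDecoder(strings.NewReader(payload)).Decode(&p); err != nil {
		panic(err)
	}
	fmt.Printf("init pid %d, intermediate child %d\n", p.Pid, p.PidFirstChild)
}
```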
- * - * A specific case of this is that the SELinux label of the - * internal kern-mount that mqueue uses will be incorrect if the - * UTS namespace is cloned before the USER namespace is mapped. - * I've also heard of similar problems with the network namespace - * in some scenarios. This also mirrors how LXC deals with this - * problem. - */ - if (config.cloneflags & CLONE_NEWUSER) { - if (unshare(CLONE_NEWUSER) < 0) - bail("failed to unshare user namespace"); - config.cloneflags &= ~CLONE_NEWUSER; - - /* - * We don't have the privileges to do any mapping here (see the - * clone_parent rant). So signal our parent to hook us up. - */ - - /* Switching is only necessary if we joined namespaces. */ - if (config.namespaces) { - if (prctl(PR_SET_DUMPABLE, 1, 0, 0, 0) < 0) - bail("failed to set process as dumpable"); - } - s = SYNC_USERMAP_PLS; - if (write(syncfd, &s, sizeof(s)) != sizeof(s)) - bail("failed to sync with parent: write(SYNC_USERMAP_PLS)"); - - /* ... wait for mapping ... */ - - if (read(syncfd, &s, sizeof(s)) != sizeof(s)) - bail("failed to sync with parent: read(SYNC_USERMAP_ACK)"); - if (s != SYNC_USERMAP_ACK) - bail("failed to sync with parent: SYNC_USERMAP_ACK: got %u", s); - /* Switching is only necessary if we joined namespaces. */ - if (config.namespaces) { - if (prctl(PR_SET_DUMPABLE, 0, 0, 0, 0) < 0) - bail("failed to set process as dumpable"); - } - - /* Become root in the namespace proper. */ - if (setresuid(0, 0, 0) < 0) - bail("failed to become root in user namespace"); - } - /* - * Unshare all of the namespaces. Now, it should be noted that this - * ordering might break in the future (especially with rootless - * containers). But for now, it's not possible to split this into - * CLONE_NEWUSER + [the rest] because of some RHEL SELinux issues. - * - * Note that we don't merge this with clone() because there were - * some old kernel versions where clone(CLONE_PARENT | CLONE_NEWPID) - * was broken, so we'll just do it the long way anyway. - */ - if (unshare(config.cloneflags & ~CLONE_NEWCGROUP) < 0) - bail("failed to unshare namespaces"); - - /* - * TODO: What about non-namespace clone flags that we're dropping here? - * - * We fork again because of PID namespace, setns(2) or unshare(2) don't - * change the PID namespace of the calling process, because doing so - * would change the caller's idea of its own PID (as reported by getpid()), - * which would break many applications and libraries, so we must fork - * to actually enter the new PID namespace. - */ - child = clone_parent(&env, JUMP_INIT); - if (child < 0) - bail("unable to fork: init_func"); - - /* Send the child to our parent, which knows what it's doing. */ - s = SYNC_RECVPID_PLS; - if (write(syncfd, &s, sizeof(s)) != sizeof(s)) { - kill(child, SIGKILL); - bail("failed to sync with parent: write(SYNC_RECVPID_PLS)"); - } - if (write(syncfd, &child, sizeof(child)) != sizeof(child)) { - kill(child, SIGKILL); - bail("failed to sync with parent: write(childpid)"); - } - - /* ... wait for parent to get the pid ... */ - - if (read(syncfd, &s, sizeof(s)) != sizeof(s)) { - kill(child, SIGKILL); - bail("failed to sync with parent: read(SYNC_RECVPID_ACK)"); - } - if (s != SYNC_RECVPID_ACK) { - kill(child, SIGKILL); - bail("failed to sync with parent: SYNC_RECVPID_ACK: got %u", s); - } - - s = SYNC_CHILD_READY; - if (write(syncfd, &s, sizeof(s)) != sizeof(s)) { - kill(child, SIGKILL); - bail("failed to sync with parent: write(SYNC_CHILD_READY)"); - } - - /* Our work is done. 
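For contrast with the unshare/clone sequence above: when no setns(2) into existing namespaces is needed, the Go runtime alone can request a user namespace and a single-entry mapping at fork/exec time. This is not how runc works (the C stage exists precisely because setns has to happen before the Go runtime starts), only a sketch of the same kernel primitives:

```go
package main

import (
	"os"
	"os/exec"
	"syscall"
)

func main() {
	cmd := exec.Command("/bin/sh", "-c", "id")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	cmd.SysProcAttr = &syscall.SysProcAttr{
		// New user, mount and PID namespaces for the child.
		Cloneflags: syscall.CLONE_NEWUSER | syscall.CLONE_NEWNS | syscall.CLONE_NEWPID,
		// Single-entry mapping: container root maps to the calling user.
		UidMappings: []syscall.SysProcIDMap{
			{ContainerID: 0, HostID: os.Getuid(), Size: 1},
		},
		GidMappings: []syscall.SysProcIDMap{
			{ContainerID: 0, HostID: os.Getgid(), Size: 1},
		},
		// Matches the rootless single-entry case above: setgroups stays denied.
		GidMappingsEnableSetgroups: false,
	}
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```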
[Stage 2: JUMP_INIT] is doing the rest of the work. */ - exit(0); - } - - /* - * Stage 2: We're the final child process, and the only process that will - * actually return to the Go runtime. Our job is to just do the - * final cleanup steps and then return to the Go runtime to allow - * init_linux.go to run. - */ - case JUMP_INIT:{ - /* - * We're inside the child now, having jumped from the - * start_child() code after forking in the parent. - */ - enum sync_t s; - - /* We're in a child and thus need to tell the parent if we die. */ - syncfd = sync_grandchild_pipe[0]; - close(sync_grandchild_pipe[1]); - close(sync_child_pipe[0]); - close(sync_child_pipe[1]); - - /* For debugging. */ - prctl(PR_SET_NAME, (unsigned long)"runc:[2:INIT]", 0, 0, 0); - - if (read(syncfd, &s, sizeof(s)) != sizeof(s)) - bail("failed to sync with parent: read(SYNC_GRANDCHILD)"); - if (s != SYNC_GRANDCHILD) - bail("failed to sync with parent: SYNC_GRANDCHILD: got %u", s); - - if (setsid() < 0) - bail("setsid failed"); - - if (setuid(0) < 0) - bail("setuid failed"); - - if (setgid(0) < 0) - bail("setgid failed"); - - if (!config.is_rootless_euid && config.is_setgroup) { - if (setgroups(0, NULL) < 0) - bail("setgroups failed"); - } - - /* ... wait until our topmost parent has finished cgroup setup in p.manager.Apply() ... */ - if (config.cloneflags & CLONE_NEWCGROUP) { - uint8_t value; - if (read(pipenum, &value, sizeof(value)) != sizeof(value)) - bail("read synchronisation value failed"); - if (value == CREATECGROUPNS) { - if (unshare(CLONE_NEWCGROUP) < 0) - bail("failed to unshare cgroup namespace"); - } else - bail("received unknown synchronisation value"); - } - - s = SYNC_CHILD_READY; - if (write(syncfd, &s, sizeof(s)) != sizeof(s)) - bail("failed to sync with patent: write(SYNC_CHILD_READY)"); - - /* Close sync pipes. */ - close(sync_grandchild_pipe[0]); - - /* Free netlink data. */ - nl_free(&config); - - /* Finish executing, let the Go runtime take over. */ - return; - } - default: - bail("unexpected jump value"); - } - - /* Should never be reached. */ - bail("should never be reached"); -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/process.go b/vendor/github.com/opencontainers/runc/libcontainer/process.go deleted file mode 100644 index d3e472a4f..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/process.go +++ /dev/null @@ -1,115 +0,0 @@ -package libcontainer - -import ( - "fmt" - "io" - "math" - "os" - - "github.com/opencontainers/runc/libcontainer/configs" -) - -type processOperations interface { - wait() (*os.ProcessState, error) - signal(sig os.Signal) error - pid() int -} - -// Process specifies the configuration and IO for a process inside -// a container. -type Process struct { - // The command to be run followed by any arguments. - Args []string - - // Env specifies the environment variables for the process. - Env []string - - // User will set the uid and gid of the executing process running inside the container - // local to the container's user and group configuration. - User string - - // AdditionalGroups specifies the gids that should be added to supplementary groups - // in addition to those that the user belongs to. - AdditionalGroups []string - - // Cwd will change the processes current working directory inside the container's rootfs. - Cwd string - - // Stdin is a pointer to a reader which provides the standard input stream. - Stdin io.Reader - - // Stdout is a pointer to a writer which receives the standard output stream. 
- Stdout io.Writer - - // Stderr is a pointer to a writer which receives the standard error stream. - Stderr io.Writer - - // ExtraFiles specifies additional open files to be inherited by the container - ExtraFiles []*os.File - - // Initial sizings for the console - ConsoleWidth uint16 - ConsoleHeight uint16 - - // Capabilities specify the capabilities to keep when executing the process inside the container - // All capabilities not specified will be dropped from the processes capability mask - Capabilities *configs.Capabilities - - // AppArmorProfile specifies the profile to apply to the process and is - // changed at the time the process is execed - AppArmorProfile string - - // Label specifies the label to apply to the process. It is commonly used by selinux - Label string - - // NoNewPrivileges controls whether processes can gain additional privileges. - NoNewPrivileges *bool - - // Rlimits specifies the resource limits, such as max open files, to set in the container - // If Rlimits are not set, the container will inherit rlimits from the parent process - Rlimits []configs.Rlimit - - // ConsoleSocket provides the masterfd console. - ConsoleSocket *os.File - - // Init specifies whether the process is the first process in the container. - Init bool - - ops processOperations - - LogLevel string -} - -// Wait waits for the process to exit. -// Wait releases any resources associated with the Process -func (p Process) Wait() (*os.ProcessState, error) { - if p.ops == nil { - return nil, newGenericError(fmt.Errorf("invalid process"), NoProcessOps) - } - return p.ops.wait() -} - -// Pid returns the process ID -func (p Process) Pid() (int, error) { - // math.MinInt32 is returned here, because it's invalid value - // for the kill() system call. - if p.ops == nil { - return math.MinInt32, newGenericError(fmt.Errorf("invalid process"), NoProcessOps) - } - return p.ops.pid(), nil -} - -// Signal sends a signal to the Process. -func (p Process) Signal(sig os.Signal) error { - if p.ops == nil { - return newGenericError(fmt.Errorf("invalid process"), NoProcessOps) - } - return p.ops.signal(sig) -} - -// IO holds the process's STDIO -type IO struct { - Stdin io.WriteCloser - Stdout io.ReadCloser - Stderr io.ReadCloser -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/process_linux.go b/vendor/github.com/opencontainers/runc/libcontainer/process_linux.go deleted file mode 100644 index de989b5bc..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/process_linux.go +++ /dev/null @@ -1,598 +0,0 @@ -// +build linux - -package libcontainer - -import ( - "encoding/json" - "errors" - "fmt" - "io" - "os" - "os/exec" - "path/filepath" - "strconv" - "syscall" // only for Signal - - "github.com/opencontainers/runc/libcontainer/cgroups" - "github.com/opencontainers/runc/libcontainer/configs" - "github.com/opencontainers/runc/libcontainer/intelrdt" - "github.com/opencontainers/runc/libcontainer/logs" - "github.com/opencontainers/runc/libcontainer/system" - "github.com/opencontainers/runc/libcontainer/utils" - - "golang.org/x/sys/unix" -) - -// Synchronisation value for cgroup namespace setup. -// The same constant is defined in nsexec.c as "CREATECGROUPNS". -const createCgroupns = 0x80 - -type parentProcess interface { - // pid returns the pid for the running process. - pid() int - - // start starts the process execution. - start() error - - // send a SIGKILL to the process and wait for the exit. - terminate() error - - // wait waits on the process returning the process state. 
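The Process struct above is the piece callers fill in; a typical use hands it to a container object created or loaded elsewhere. A sketch under the assumption that a libcontainer.Container with a Run method is already available:

```go
package example

import (
	"os"

	"github.com/opencontainers/runc/libcontainer"
)

// runShell starts /bin/sh as the container's init process, wiring the
// caller's stdio straight through. The container value is assumed to have
// been created or loaded via a libcontainer factory elsewhere.
func runShell(container libcontainer.Container) (*os.ProcessState, error) {
	proc := &libcontainer.Process{
		Args:   []string{"/bin/sh"},
		Env:    []string{"PATH=/usr/local/bin:/usr/bin:/bin"},
		User:   "root",
		Cwd:    "/",
		Stdin:  os.Stdin,
		Stdout: os.Stdout,
		Stderr: os.Stderr,
		Init:   true, // first process in the container
	}
	if err := container.Run(proc); err != nil {
		return nil, err
	}
	return proc.Wait()
}
```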
- wait() (*os.ProcessState, error) - - // startTime returns the process start time. - startTime() (uint64, error) - - signal(os.Signal) error - - externalDescriptors() []string - - setExternalDescriptors(fds []string) - - forwardChildLogs() -} - -type filePair struct { - parent *os.File - child *os.File -} - -type setnsProcess struct { - cmd *exec.Cmd - messageSockPair filePair - logFilePair filePair - cgroupPaths map[string]string - rootlessCgroups bool - intelRdtPath string - config *initConfig - fds []string - process *Process - bootstrapData io.Reader -} - -func (p *setnsProcess) startTime() (uint64, error) { - stat, err := system.Stat(p.pid()) - return stat.StartTime, err -} - -func (p *setnsProcess) signal(sig os.Signal) error { - s, ok := sig.(syscall.Signal) - if !ok { - return errors.New("os: unsupported signal type") - } - return unix.Kill(p.pid(), s) -} - -func (p *setnsProcess) start() (err error) { - defer p.messageSockPair.parent.Close() - err = p.cmd.Start() - // close the write-side of the pipes (controlled by child) - p.messageSockPair.child.Close() - p.logFilePair.child.Close() - if err != nil { - return newSystemErrorWithCause(err, "starting setns process") - } - if p.bootstrapData != nil { - if _, err := io.Copy(p.messageSockPair.parent, p.bootstrapData); err != nil { - return newSystemErrorWithCause(err, "copying bootstrap data to pipe") - } - } - if err = p.execSetns(); err != nil { - return newSystemErrorWithCause(err, "executing setns process") - } - if len(p.cgroupPaths) > 0 { - if err := cgroups.EnterPid(p.cgroupPaths, p.pid()); err != nil && !p.rootlessCgroups { - return newSystemErrorWithCausef(err, "adding pid %d to cgroups", p.pid()) - } - } - if p.intelRdtPath != "" { - // if Intel RDT "resource control" filesystem path exists - _, err := os.Stat(p.intelRdtPath) - if err == nil { - if err := intelrdt.WriteIntelRdtTasks(p.intelRdtPath, p.pid()); err != nil { - return newSystemErrorWithCausef(err, "adding pid %d to Intel RDT resource control filesystem", p.pid()) - } - } - } - // set rlimits, this has to be done here because we lose permissions - // to raise the limits once we enter a user-namespace - if err := setupRlimits(p.config.Rlimits, p.pid()); err != nil { - return newSystemErrorWithCause(err, "setting rlimits for process") - } - if err := utils.WriteJSON(p.messageSockPair.parent, p.config); err != nil { - return newSystemErrorWithCause(err, "writing config to pipe") - } - - ierr := parseSync(p.messageSockPair.parent, func(sync *syncT) error { - switch sync.Type { - case procReady: - // This shouldn't happen. - panic("unexpected procReady in setns") - case procHooks: - // This shouldn't happen. - panic("unexpected procHooks in setns") - default: - return newSystemError(fmt.Errorf("invalid JSON payload from child")) - } - }) - - if err := unix.Shutdown(int(p.messageSockPair.parent.Fd()), unix.SHUT_WR); err != nil { - return newSystemErrorWithCause(err, "calling shutdown on init pipe") - } - // Must be done after Shutdown so the child will exit and we can wait for it. - if ierr != nil { - p.wait() - return ierr - } - return nil -} - -// execSetns runs the process that executes C code to perform the setns calls -// because setns support requires the C process to fork off a child and perform the setns -// before the go runtime boots, we wait on the process to die and receive the child's pid -// over the provided pipe. 
-func (p *setnsProcess) execSetns() error { - status, err := p.cmd.Process.Wait() - if err != nil { - p.cmd.Wait() - return newSystemErrorWithCause(err, "waiting on setns process to finish") - } - if !status.Success() { - p.cmd.Wait() - return newSystemError(&exec.ExitError{ProcessState: status}) - } - var pid *pid - if err := json.NewDecoder(p.messageSockPair.parent).Decode(&pid); err != nil { - p.cmd.Wait() - return newSystemErrorWithCause(err, "reading pid from init pipe") - } - - // Clean up the zombie parent process - // On Unix systems FindProcess always succeeds. - firstChildProcess, _ := os.FindProcess(pid.PidFirstChild) - - // Ignore the error in case the child has already been reaped for any reason - _, _ = firstChildProcess.Wait() - - process, err := os.FindProcess(pid.Pid) - if err != nil { - return err - } - p.cmd.Process = process - p.process.ops = p - return nil -} - -// terminate sends a SIGKILL to the forked process for the setns routine then waits to -// avoid the process becoming a zombie. -func (p *setnsProcess) terminate() error { - if p.cmd.Process == nil { - return nil - } - err := p.cmd.Process.Kill() - if _, werr := p.wait(); err == nil { - err = werr - } - return err -} - -func (p *setnsProcess) wait() (*os.ProcessState, error) { - err := p.cmd.Wait() - - // Return actual ProcessState even on Wait error - return p.cmd.ProcessState, err -} - -func (p *setnsProcess) pid() int { - return p.cmd.Process.Pid -} - -func (p *setnsProcess) externalDescriptors() []string { - return p.fds -} - -func (p *setnsProcess) setExternalDescriptors(newFds []string) { - p.fds = newFds -} - -func (p *setnsProcess) forwardChildLogs() { - go logs.ForwardLogs(p.logFilePair.parent) -} - -type initProcess struct { - cmd *exec.Cmd - messageSockPair filePair - logFilePair filePair - config *initConfig - manager cgroups.Manager - intelRdtManager intelrdt.Manager - container *linuxContainer - fds []string - process *Process - bootstrapData io.Reader - sharePidns bool -} - -func (p *initProcess) pid() int { - return p.cmd.Process.Pid -} - -func (p *initProcess) externalDescriptors() []string { - return p.fds -} - -// getChildPid receives the final child's pid over the provided pipe. -func (p *initProcess) getChildPid() (int, error) { - var pid pid - if err := json.NewDecoder(p.messageSockPair.parent).Decode(&pid); err != nil { - p.cmd.Wait() - return -1, err - } - - // Clean up the zombie parent process - // On Unix systems FindProcess always succeeds. - firstChildProcess, _ := os.FindProcess(pid.PidFirstChild) - - // Ignore the error in case the child has already been reaped for any reason - _, _ = firstChildProcess.Wait() - - return pid.Pid, nil -} - -func (p *initProcess) waitForChildExit(childPid int) error { - status, err := p.cmd.Process.Wait() - if err != nil { - p.cmd.Wait() - return err - } - if !status.Success() { - p.cmd.Wait() - return &exec.ExitError{ProcessState: status} - } - - process, err := os.FindProcess(childPid) - if err != nil { - return err - } - p.cmd.Process = process - p.process.ops = p - return nil -} - -func (p *initProcess) start() error { - defer p.messageSockPair.parent.Close() - err := p.cmd.Start() - p.process.ops = p - // close the write-side of the pipes (controlled by child) - p.messageSockPair.child.Close() - p.logFilePair.child.Close() - if err != nil { - p.process.ops = nil - return newSystemErrorWithCause(err, "starting init process command") - } - // Do this before syncing with child so that no children can escape the - // cgroup. 
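execSetns() and waitForChildExit() above share one trick: reap the short-lived bootstrap process, then re-point cmd.Process at the PID it reported so that later Kill and Wait calls act on the surviving child. A condensed sketch; childPid is assumed to have been decoded from the init pipe:

```go
package sketch

import (
	"os"
	"os/exec"
)

// adoptChild reaps the bootstrap process and swaps cmd.Process for the child
// it reported, so the exec.Cmd now tracks the process that stays alive.
func adoptChild(cmd *exec.Cmd, childPid int) error {
	status, err := cmd.Process.Wait() // reap the bootstrap process
	if err != nil {
		return err
	}
	if !status.Success() {
		return &exec.ExitError{ProcessState: status}
	}
	proc, err := os.FindProcess(childPid) // always succeeds on Unix
	if err != nil {
		return err
	}
	cmd.Process = proc
	return nil
}
```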
We don't need to worry about not doing this and not being root - // because we'd be using the rootless cgroup manager in that case. - if err := p.manager.Apply(p.pid()); err != nil { - return newSystemErrorWithCause(err, "applying cgroup configuration for process") - } - if p.intelRdtManager != nil { - if err := p.intelRdtManager.Apply(p.pid()); err != nil { - return newSystemErrorWithCause(err, "applying Intel RDT configuration for process") - } - } - defer func() { - if err != nil { - // TODO: should not be the responsibility to call here - p.manager.Destroy() - if p.intelRdtManager != nil { - p.intelRdtManager.Destroy() - } - } - }() - - if _, err := io.Copy(p.messageSockPair.parent, p.bootstrapData); err != nil { - return newSystemErrorWithCause(err, "copying bootstrap data to pipe") - } - childPid, err := p.getChildPid() - if err != nil { - return newSystemErrorWithCause(err, "getting the final child's pid from pipe") - } - - // Save the standard descriptor names before the container process - // can potentially move them (e.g., via dup2()). If we don't do this now, - // we won't know at checkpoint time which file descriptor to look up. - fds, err := getPipeFds(childPid) - if err != nil { - return newSystemErrorWithCausef(err, "getting pipe fds for pid %d", childPid) - } - p.setExternalDescriptors(fds) - // Do this before syncing with child so that no children - // can escape the cgroup - if err := p.manager.Apply(childPid); err != nil { - return newSystemErrorWithCause(err, "applying cgroup configuration for process") - } - if p.intelRdtManager != nil { - if err := p.intelRdtManager.Apply(childPid); err != nil { - return newSystemErrorWithCause(err, "applying Intel RDT configuration for process") - } - } - // Now it's time to setup cgroup namesapce - if p.config.Config.Namespaces.Contains(configs.NEWCGROUP) && p.config.Config.Namespaces.PathOf(configs.NEWCGROUP) == "" { - if _, err := p.messageSockPair.parent.Write([]byte{createCgroupns}); err != nil { - return newSystemErrorWithCause(err, "sending synchronization value to init process") - } - } - - // Wait for our first child to exit - if err := p.waitForChildExit(childPid); err != nil { - return newSystemErrorWithCause(err, "waiting for our first child to exit") - } - - defer func() { - if err != nil { - // TODO: should not be the responsibility to call here - p.manager.Destroy() - if p.intelRdtManager != nil { - p.intelRdtManager.Destroy() - } - } - }() - if err := p.createNetworkInterfaces(); err != nil { - return newSystemErrorWithCause(err, "creating network interfaces") - } - if err := p.sendConfig(); err != nil { - return newSystemErrorWithCause(err, "sending config to init process") - } - var ( - sentRun bool - sentResume bool - ) - - ierr := parseSync(p.messageSockPair.parent, func(sync *syncT) error { - switch sync.Type { - case procReady: - // set rlimits, this has to be done here because we lose permissions - // to raise the limits once we enter a user-namespace - if err := setupRlimits(p.config.Rlimits, p.pid()); err != nil { - return newSystemErrorWithCause(err, "setting rlimits for ready process") - } - // call prestart hooks - if !p.config.Config.Namespaces.Contains(configs.NEWNS) { - // Setup cgroup before prestart hook, so that the prestart hook could apply cgroup permissions. 
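On cgroup v1, "applying" the cgroup configuration before letting the child continue boils down to writing its PID into each hierarchy's cgroup.procs file, which is why the Apply calls above happen before the sync. A sketch of that single write; the subsystem path is illustrative:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// enterCgroup places pid into one cgroup v1 hierarchy so that everything it
// forks afterwards stays inside that hierarchy.
func enterCgroup(subsysPath string, pid int) error {
	procs := filepath.Join(subsysPath, "cgroup.procs")
	return os.WriteFile(procs, []byte(fmt.Sprintf("%d\n", pid)), 0o644)
}

func main() {
	if err := enterCgroup("/sys/fs/cgroup/memory/mycontainer", os.Getpid()); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```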
- if err := p.manager.Set(p.config.Config); err != nil { - return newSystemErrorWithCause(err, "setting cgroup config for ready process") - } - if p.intelRdtManager != nil { - if err := p.intelRdtManager.Set(p.config.Config); err != nil { - return newSystemErrorWithCause(err, "setting Intel RDT config for ready process") - } - } - - if p.config.Config.Hooks != nil { - s, err := p.container.currentOCIState() - if err != nil { - return err - } - // initProcessStartTime hasn't been set yet. - s.Pid = p.cmd.Process.Pid - s.Status = "creating" - for i, hook := range p.config.Config.Hooks.Prestart { - if err := hook.Run(s); err != nil { - return newSystemErrorWithCausef(err, "running prestart hook %d", i) - } - } - } - } - // Sync with child. - if err := writeSync(p.messageSockPair.parent, procRun); err != nil { - return newSystemErrorWithCause(err, "writing syncT 'run'") - } - sentRun = true - case procHooks: - // Setup cgroup before prestart hook, so that the prestart hook could apply cgroup permissions. - if err := p.manager.Set(p.config.Config); err != nil { - return newSystemErrorWithCause(err, "setting cgroup config for procHooks process") - } - if p.intelRdtManager != nil { - if err := p.intelRdtManager.Set(p.config.Config); err != nil { - return newSystemErrorWithCause(err, "setting Intel RDT config for procHooks process") - } - } - if p.config.Config.Hooks != nil { - s, err := p.container.currentOCIState() - if err != nil { - return err - } - // initProcessStartTime hasn't been set yet. - s.Pid = p.cmd.Process.Pid - s.Status = "creating" - for i, hook := range p.config.Config.Hooks.Prestart { - if err := hook.Run(s); err != nil { - return newSystemErrorWithCausef(err, "running prestart hook %d", i) - } - } - } - // Sync with child. - if err := writeSync(p.messageSockPair.parent, procResume); err != nil { - return newSystemErrorWithCause(err, "writing syncT 'resume'") - } - sentResume = true - default: - return newSystemError(fmt.Errorf("invalid JSON payload from child")) - } - - return nil - }) - - if !sentRun { - return newSystemErrorWithCause(ierr, "container init") - } - if p.config.Config.Namespaces.Contains(configs.NEWNS) && !sentResume { - return newSystemError(fmt.Errorf("could not synchronise after executing prestart hooks with container process")) - } - if err := unix.Shutdown(int(p.messageSockPair.parent.Fd()), unix.SHUT_WR); err != nil { - return newSystemErrorWithCause(err, "shutting down init pipe") - } - - // Must be done after Shutdown so the child will exit and we can wait for it. 
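The prestart hooks invoked above are external programs that receive the container state as JSON on stdin while the process is still in the "creating" phase. A sketch of what such a hook could look like from the other side; the field list is trimmed to the parts used here:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// ociState carries the subset of the OCI state document a hook usually needs.
type ociState struct {
	ID     string `json:"id"`
	Pid    int    `json:"pid"`
	Status string `json:"status"`
	Bundle string `json:"bundle"`
}

func main() {
	var st ociState
	if err := json.NewDecoder(os.Stdin).Decode(&st); err != nil {
		fmt.Fprintln(os.Stderr, "prestart hook:", err)
		os.Exit(1)
	}
	// e.g. set up networking for st.Pid here, before the workload starts.
	fmt.Fprintf(os.Stderr, "prestart: container %s pid %d status %s\n", st.ID, st.Pid, st.Status)
}
```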
- if ierr != nil { - p.wait() - return ierr - } - return nil -} - -func (p *initProcess) wait() (*os.ProcessState, error) { - err := p.cmd.Wait() - if err != nil { - return p.cmd.ProcessState, err - } - // we should kill all processes in cgroup when init is died if we use host PID namespace - if p.sharePidns { - signalAllProcesses(p.manager, unix.SIGKILL) - } - return p.cmd.ProcessState, nil -} - -func (p *initProcess) terminate() error { - if p.cmd.Process == nil { - return nil - } - err := p.cmd.Process.Kill() - if _, werr := p.wait(); err == nil { - err = werr - } - return err -} - -func (p *initProcess) startTime() (uint64, error) { - stat, err := system.Stat(p.pid()) - return stat.StartTime, err -} - -func (p *initProcess) sendConfig() error { - // send the config to the container's init process, we don't use JSON Encode - // here because there might be a problem in JSON decoder in some cases, see: - // https://github.com/docker/docker/issues/14203#issuecomment-174177790 - return utils.WriteJSON(p.messageSockPair.parent, p.config) -} - -func (p *initProcess) createNetworkInterfaces() error { - for _, config := range p.config.Config.Networks { - strategy, err := getStrategy(config.Type) - if err != nil { - return err - } - n := &network{ - Network: *config, - } - if err := strategy.create(n, p.pid()); err != nil { - return err - } - p.config.Networks = append(p.config.Networks, n) - } - return nil -} - -func (p *initProcess) signal(sig os.Signal) error { - s, ok := sig.(syscall.Signal) - if !ok { - return errors.New("os: unsupported signal type") - } - return unix.Kill(p.pid(), s) -} - -func (p *initProcess) setExternalDescriptors(newFds []string) { - p.fds = newFds -} - -func (p *initProcess) forwardChildLogs() { - go logs.ForwardLogs(p.logFilePair.parent) -} - -func getPipeFds(pid int) ([]string, error) { - fds := make([]string, 3) - - dirPath := filepath.Join("/proc", strconv.Itoa(pid), "/fd") - for i := 0; i < 3; i++ { - // XXX: This breaks if the path is not a valid symlink (which can - // happen in certain particularly unlucky mount namespace setups). - f := filepath.Join(dirPath, strconv.Itoa(i)) - target, err := os.Readlink(f) - if err != nil { - // Ignore permission errors, for rootless containers and other - // non-dumpable processes. if we can't get the fd for a particular - // file, there's not much we can do. - if os.IsPermission(err) { - continue - } - return fds, err - } - fds[i] = target - } - return fds, nil -} - -// InitializeIO creates pipes for use with the process's stdio and returns the -// opposite side for each. Do not use this if you want to have a pseudoterminal -// set up for you by libcontainer (TODO: fix that too). -// TODO: This is mostly unnecessary, and should be handled by clients. 
-func (p *Process) InitializeIO(rootuid, rootgid int) (i *IO, err error) { - var fds []uintptr - i = &IO{} - // cleanup in case of an error - defer func() { - if err != nil { - for _, fd := range fds { - unix.Close(int(fd)) - } - } - }() - // STDIN - r, w, err := os.Pipe() - if err != nil { - return nil, err - } - fds = append(fds, r.Fd(), w.Fd()) - p.Stdin, i.Stdin = r, w - // STDOUT - if r, w, err = os.Pipe(); err != nil { - return nil, err - } - fds = append(fds, r.Fd(), w.Fd()) - p.Stdout, i.Stdout = w, r - // STDERR - if r, w, err = os.Pipe(); err != nil { - return nil, err - } - fds = append(fds, r.Fd(), w.Fd()) - p.Stderr, i.Stderr = w, r - // change ownership of the pipes in case we are in a user namespace - for _, fd := range fds { - if err := unix.Fchown(int(fd), rootuid, rootgid); err != nil { - return nil, err - } - } - return i, nil -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/restored_process.go b/vendor/github.com/opencontainers/runc/libcontainer/restored_process.go deleted file mode 100644 index 28d52ad06..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/restored_process.go +++ /dev/null @@ -1,128 +0,0 @@ -// +build linux - -package libcontainer - -import ( - "fmt" - "os" - - "github.com/opencontainers/runc/libcontainer/system" -) - -func newRestoredProcess(pid int, fds []string) (*restoredProcess, error) { - var ( - err error - ) - proc, err := os.FindProcess(pid) - if err != nil { - return nil, err - } - stat, err := system.Stat(pid) - if err != nil { - return nil, err - } - return &restoredProcess{ - proc: proc, - processStartTime: stat.StartTime, - fds: fds, - }, nil -} - -type restoredProcess struct { - proc *os.Process - processStartTime uint64 - fds []string -} - -func (p *restoredProcess) start() error { - return newGenericError(fmt.Errorf("restored process cannot be started"), SystemError) -} - -func (p *restoredProcess) pid() int { - return p.proc.Pid -} - -func (p *restoredProcess) terminate() error { - err := p.proc.Kill() - if _, werr := p.wait(); err == nil { - err = werr - } - return err -} - -func (p *restoredProcess) wait() (*os.ProcessState, error) { - // TODO: how do we wait on the actual process? - // maybe use --exec-cmd in criu - st, err := p.proc.Wait() - if err != nil { - return nil, err - } - return st, nil -} - -func (p *restoredProcess) startTime() (uint64, error) { - return p.processStartTime, nil -} - -func (p *restoredProcess) signal(s os.Signal) error { - return p.proc.Signal(s) -} - -func (p *restoredProcess) externalDescriptors() []string { - return p.fds -} - -func (p *restoredProcess) setExternalDescriptors(newFds []string) { - p.fds = newFds -} - -func (p *restoredProcess) forwardChildLogs() { -} - -// nonChildProcess represents a process where the calling process is not -// the parent process. This process is created when a factory loads a container from -// a persisted state. 
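getPipeFds() above records where a process's stdio descriptors point by reading the /proc/<pid>/fd symlinks, so the paths survive even if the container later dup2()s them away. A standalone sketch of that lookup:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strconv"
)

// stdioTargets resolves where fds 0-2 of a process currently point.
func stdioTargets(pid int) ([]string, error) {
	targets := make([]string, 3)
	for i := 0; i < 3; i++ {
		link := filepath.Join("/proc", strconv.Itoa(pid), "fd", strconv.Itoa(i))
		target, err := os.Readlink(link)
		if err != nil {
			if os.IsPermission(err) {
				continue // non-dumpable or rootless target; leave the slot empty
			}
			return nil, err
		}
		targets[i] = target
	}
	return targets, nil
}

func main() {
	t, err := stdioTargets(os.Getpid())
	if err != nil {
		panic(err)
	}
	fmt.Println(t)
}
```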
-type nonChildProcess struct { - processPid int - processStartTime uint64 - fds []string -} - -func (p *nonChildProcess) start() error { - return newGenericError(fmt.Errorf("restored process cannot be started"), SystemError) -} - -func (p *nonChildProcess) pid() int { - return p.processPid -} - -func (p *nonChildProcess) terminate() error { - return newGenericError(fmt.Errorf("restored process cannot be terminated"), SystemError) -} - -func (p *nonChildProcess) wait() (*os.ProcessState, error) { - return nil, newGenericError(fmt.Errorf("restored process cannot be waited on"), SystemError) -} - -func (p *nonChildProcess) startTime() (uint64, error) { - return p.processStartTime, nil -} - -func (p *nonChildProcess) signal(s os.Signal) error { - proc, err := os.FindProcess(p.processPid) - if err != nil { - return err - } - return proc.Signal(s) -} - -func (p *nonChildProcess) externalDescriptors() []string { - return p.fds -} - -func (p *nonChildProcess) setExternalDescriptors(newFds []string) { - p.fds = newFds -} - -func (p *nonChildProcess) forwardChildLogs() { -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/rootfs_linux.go b/vendor/github.com/opencontainers/runc/libcontainer/rootfs_linux.go deleted file mode 100644 index f13b226e4..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/rootfs_linux.go +++ /dev/null @@ -1,941 +0,0 @@ -// +build linux - -package libcontainer - -import ( - "fmt" - "io" - "io/ioutil" - "os" - "os/exec" - "path" - "path/filepath" - "strings" - "time" - - "github.com/cyphar/filepath-securejoin" - "github.com/mrunalp/fileutils" - "github.com/opencontainers/runc/libcontainer/cgroups" - "github.com/opencontainers/runc/libcontainer/configs" - "github.com/opencontainers/runc/libcontainer/mount" - "github.com/opencontainers/runc/libcontainer/system" - libcontainerUtils "github.com/opencontainers/runc/libcontainer/utils" - "github.com/opencontainers/selinux/go-selinux/label" - - "golang.org/x/sys/unix" -) - -const defaultMountFlags = unix.MS_NOEXEC | unix.MS_NOSUID | unix.MS_NODEV - -// needsSetupDev returns true if /dev needs to be set up. -func needsSetupDev(config *configs.Config) bool { - for _, m := range config.Mounts { - if m.Device == "bind" && libcontainerUtils.CleanPath(m.Destination) == "/dev" { - return false - } - } - return true -} - -// prepareRootfs sets up the devices, mount points, and filesystems for use -// inside a new mount namespace. It doesn't set anything as ro. You must call -// finalizeRootfs after this function to finish setting up the rootfs. 
-func prepareRootfs(pipe io.ReadWriter, iConfig *initConfig) (err error) { - config := iConfig.Config - if err := prepareRoot(config); err != nil { - return newSystemErrorWithCause(err, "preparing rootfs") - } - - hasCgroupns := config.Namespaces.Contains(configs.NEWCGROUP) - setupDev := needsSetupDev(config) - for _, m := range config.Mounts { - for _, precmd := range m.PremountCmds { - if err := mountCmd(precmd); err != nil { - return newSystemErrorWithCause(err, "running premount command") - } - } - if err := mountToRootfs(m, config.Rootfs, config.MountLabel, hasCgroupns); err != nil { - return newSystemErrorWithCausef(err, "mounting %q to rootfs %q at %q", m.Source, config.Rootfs, m.Destination) - } - - for _, postcmd := range m.PostmountCmds { - if err := mountCmd(postcmd); err != nil { - return newSystemErrorWithCause(err, "running postmount command") - } - } - } - - if setupDev { - if err := createDevices(config); err != nil { - return newSystemErrorWithCause(err, "creating device nodes") - } - if err := setupPtmx(config); err != nil { - return newSystemErrorWithCause(err, "setting up ptmx") - } - if err := setupDevSymlinks(config.Rootfs); err != nil { - return newSystemErrorWithCause(err, "setting up /dev symlinks") - } - } - - // Signal the parent to run the pre-start hooks. - // The hooks are run after the mounts are setup, but before we switch to the new - // root, so that the old root is still available in the hooks for any mount - // manipulations. - // Note that iConfig.Cwd is not guaranteed to exist here. - if err := syncParentHooks(pipe); err != nil { - return err - } - - // The reason these operations are done here rather than in finalizeRootfs - // is because the console-handling code gets quite sticky if we have to set - // up the console before doing the pivot_root(2). This is because the - // Console API has to also work with the ExecIn case, which means that the - // API must be able to deal with being inside as well as outside the - // container. It's just cleaner to do this here (at the expense of the - // operation not being perfectly split). - - if err := unix.Chdir(config.Rootfs); err != nil { - return newSystemErrorWithCausef(err, "changing dir to %q", config.Rootfs) - } - - if config.NoPivotRoot { - err = msMoveRoot(config.Rootfs) - } else if config.Namespaces.Contains(configs.NEWNS) { - err = pivotRoot(config.Rootfs) - } else { - err = chroot(config.Rootfs) - } - if err != nil { - return newSystemErrorWithCause(err, "jailing process inside rootfs") - } - - if setupDev { - if err := reOpenDevNull(); err != nil { - return newSystemErrorWithCause(err, "reopening /dev/null inside container") - } - } - - if cwd := iConfig.Cwd; cwd != "" { - // Note that spec.Process.Cwd can contain unclean value like "../../../../foo/bar...". - // However, we are safe to call MkDirAll directly because we are in the jail here. - if err := os.MkdirAll(cwd, 0755); err != nil { - return err - } - } - - return nil -} - -// finalizeRootfs sets anything to ro if necessary. You must call -// prepareRootfs first. 
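prepareRootfs() above has already chdir'd into the rootfs before it picks between pivot_root(2), MS_MOVE, and plain chroot(2). A sketch of the simplest branch, the no-pivot MS_MOVE path; the rootfs path is illustrative and error handling is minimal:

```go
package main

import "golang.org/x/sys/unix"

// msMoveRootSketch assumes the caller has already chdir'd into rootfs, as
// prepareRootfs does: move-mount the rootfs over /, then chroot into the
// current directory and reset the working directory.
func msMoveRootSketch(rootfs string) error {
	if err := unix.Mount(rootfs, "/", "", unix.MS_MOVE, ""); err != nil {
		return err
	}
	if err := unix.Chroot("."); err != nil {
		return err
	}
	return unix.Chdir("/")
}

func main() {
	if err := msMoveRootSketch("/var/lib/mycontainer/rootfs"); err != nil {
		panic(err)
	}
}
```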
-func finalizeRootfs(config *configs.Config) (err error) { - // remount dev as ro if specified - for _, m := range config.Mounts { - if libcontainerUtils.CleanPath(m.Destination) == "/dev" { - if m.Flags&unix.MS_RDONLY == unix.MS_RDONLY { - if err := remountReadonly(m); err != nil { - return newSystemErrorWithCausef(err, "remounting %q as readonly", m.Destination) - } - } - break - } - } - - // set rootfs ( / ) as readonly - if config.Readonlyfs { - if err := setReadonly(); err != nil { - return newSystemErrorWithCause(err, "setting rootfs as readonly") - } - } - - unix.Umask(0022) - return nil -} - -// /tmp has to be mounted as private to allow MS_MOVE to work in all situations -func prepareTmp(topTmpDir string) (string, error) { - tmpdir, err := ioutil.TempDir(topTmpDir, "runctop") - if err != nil { - return "", err - } - if err := unix.Mount(tmpdir, tmpdir, "bind", unix.MS_BIND, ""); err != nil { - return "", err - } - if err := unix.Mount("", tmpdir, "", uintptr(unix.MS_PRIVATE), ""); err != nil { - return "", err - } - return tmpdir, nil -} - -func cleanupTmp(tmpdir string) error { - unix.Unmount(tmpdir, 0) - return os.RemoveAll(tmpdir) -} - -func mountCmd(cmd configs.Command) error { - command := exec.Command(cmd.Path, cmd.Args[:]...) - command.Env = cmd.Env - command.Dir = cmd.Dir - if out, err := command.CombinedOutput(); err != nil { - return fmt.Errorf("%#v failed: %s: %v", cmd, string(out), err) - } - return nil -} - -func prepareBindMount(m *configs.Mount, rootfs string) error { - stat, err := os.Stat(m.Source) - if err != nil { - // error out if the source of a bind mount does not exist as we will be - // unable to bind anything to it. - return err - } - // ensure that the destination of the bind mount is resolved of symlinks at mount time because - // any previous mounts can invalidate the next mount's destination. - // this can happen when a user specifies mounts within other mounts to cause breakouts or other - // evil stuff to try to escape the container's rootfs. - var dest string - if dest, err = securejoin.SecureJoin(rootfs, m.Destination); err != nil { - return err - } - if err := checkMountDestination(rootfs, dest); err != nil { - return err - } - // update the mount with the correct dest after symlinks are resolved. 
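prepareBindMount() above relies on SecureJoin to resolve the destination inside the rootfs so that ".." and symlink tricks cannot point the mount outside it. A tiny sketch of that call with illustrative paths:

```go
package main

import (
	"fmt"

	securejoin "github.com/cyphar/filepath-securejoin"
)

func main() {
	// Even a hostile destination full of ".." components resolves to a path
	// that stays under the rootfs.
	dest, err := securejoin.SecureJoin("/var/lib/mycontainer/rootfs", "../../../etc/passwd")
	if err != nil {
		panic(err)
	}
	fmt.Println(dest) // /var/lib/mycontainer/rootfs/etc/passwd
}
```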
- m.Destination = dest - if err := createIfNotExists(dest, stat.IsDir()); err != nil { - return err - } - - return nil -} - -func mountToRootfs(m *configs.Mount, rootfs, mountLabel string, enableCgroupns bool) error { - var ( - dest = m.Destination - ) - if !strings.HasPrefix(dest, rootfs) { - dest = filepath.Join(rootfs, dest) - } - - switch m.Device { - case "proc", "sysfs": - if err := os.MkdirAll(dest, 0755); err != nil { - return err - } - // Selinux kernels do not support labeling of /proc or /sys - return mountPropagate(m, rootfs, "") - case "mqueue": - if err := os.MkdirAll(dest, 0755); err != nil { - return err - } - if err := mountPropagate(m, rootfs, mountLabel); err != nil { - // older kernels do not support labeling of /dev/mqueue - if err := mountPropagate(m, rootfs, ""); err != nil { - return err - } - return label.SetFileLabel(dest, mountLabel) - } - return nil - case "tmpfs": - copyUp := m.Extensions&configs.EXT_COPYUP == configs.EXT_COPYUP - tmpDir := "" - stat, err := os.Stat(dest) - if err != nil { - if err := os.MkdirAll(dest, 0755); err != nil { - return err - } - } - if copyUp { - tmpdir, err := prepareTmp("/tmp") - if err != nil { - return newSystemErrorWithCause(err, "tmpcopyup: failed to setup tmpdir") - } - defer cleanupTmp(tmpdir) - tmpDir, err = ioutil.TempDir(tmpdir, "runctmpdir") - if err != nil { - return newSystemErrorWithCause(err, "tmpcopyup: failed to create tmpdir") - } - defer os.RemoveAll(tmpDir) - m.Destination = tmpDir - } - if err := mountPropagate(m, rootfs, mountLabel); err != nil { - return err - } - if copyUp { - if err := fileutils.CopyDirectory(dest, tmpDir); err != nil { - errMsg := fmt.Errorf("tmpcopyup: failed to copy %s to %s: %v", dest, tmpDir, err) - if err1 := unix.Unmount(tmpDir, unix.MNT_DETACH); err1 != nil { - return newSystemErrorWithCausef(err1, "tmpcopyup: %v: failed to unmount", errMsg) - } - return errMsg - } - if err := unix.Mount(tmpDir, dest, "", unix.MS_MOVE, ""); err != nil { - errMsg := fmt.Errorf("tmpcopyup: failed to move mount %s to %s: %v", tmpDir, dest, err) - if err1 := unix.Unmount(tmpDir, unix.MNT_DETACH); err1 != nil { - return newSystemErrorWithCausef(err1, "tmpcopyup: %v: failed to unmount", errMsg) - } - return errMsg - } - } - if stat != nil { - if err = os.Chmod(dest, stat.Mode()); err != nil { - return err - } - } - return nil - case "bind": - if err := prepareBindMount(m, rootfs); err != nil { - return err - } - if err := mountPropagate(m, rootfs, mountLabel); err != nil { - return err - } - // bind mount won't change mount options, we need remount to make mount options effective. 
- // first check that we have non-default options required before attempting a remount - if m.Flags&^(unix.MS_REC|unix.MS_REMOUNT|unix.MS_BIND) != 0 { - // only remount if unique mount options are set - if err := remount(m, rootfs); err != nil { - return err - } - } - - if m.Relabel != "" { - if err := label.Validate(m.Relabel); err != nil { - return err - } - shared := label.IsShared(m.Relabel) - if err := label.Relabel(m.Source, mountLabel, shared); err != nil { - return err - } - } - case "cgroup": - binds, err := getCgroupMounts(m) - if err != nil { - return err - } - var merged []string - for _, b := range binds { - ss := filepath.Base(b.Destination) - if strings.Contains(ss, ",") { - merged = append(merged, ss) - } - } - tmpfs := &configs.Mount{ - Source: "tmpfs", - Device: "tmpfs", - Destination: m.Destination, - Flags: defaultMountFlags, - Data: "mode=755", - PropagationFlags: m.PropagationFlags, - } - if err := mountToRootfs(tmpfs, rootfs, mountLabel, enableCgroupns); err != nil { - return err - } - for _, b := range binds { - if enableCgroupns { - subsystemPath := filepath.Join(rootfs, b.Destination) - if err := os.MkdirAll(subsystemPath, 0755); err != nil { - return err - } - flags := defaultMountFlags - if m.Flags&unix.MS_RDONLY != 0 { - flags = flags | unix.MS_RDONLY - } - cgroupmount := &configs.Mount{ - Source: "cgroup", - Device: "cgroup", - Destination: subsystemPath, - Flags: flags, - Data: filepath.Base(subsystemPath), - } - if err := mountNewCgroup(cgroupmount); err != nil { - return err - } - } else { - if err := mountToRootfs(b, rootfs, mountLabel, enableCgroupns); err != nil { - return err - } - } - } - for _, mc := range merged { - for _, ss := range strings.Split(mc, ",") { - // symlink(2) is very dumb, it will just shove the path into - // the link and doesn't do any checks or relative path - // conversion. Also, don't error out if the cgroup already exists. - if err := os.Symlink(mc, filepath.Join(rootfs, m.Destination, ss)); err != nil && !os.IsExist(err) { - return err - } - } - } - if m.Flags&unix.MS_RDONLY != 0 { - // remount cgroup root as readonly - mcgrouproot := &configs.Mount{ - Source: m.Destination, - Device: "bind", - Destination: m.Destination, - Flags: defaultMountFlags | unix.MS_RDONLY | unix.MS_BIND, - } - if err := remount(mcgrouproot, rootfs); err != nil { - return err - } - } - default: - // ensure that the destination of the mount is resolved of symlinks at mount time because - // any previous mounts can invalidate the next mount's destination. - // this can happen when a user specifies mounts within other mounts to cause breakouts or other - // evil stuff to try to escape the container's rootfs. - var err error - if dest, err = securejoin.SecureJoin(rootfs, m.Destination); err != nil { - return err - } - if err := checkMountDestination(rootfs, dest); err != nil { - return err - } - // update the mount with the correct dest after symlinks are resolved. 
- m.Destination = dest - if err := os.MkdirAll(dest, 0755); err != nil { - return err - } - return mountPropagate(m, rootfs, mountLabel) - } - return nil -} - -func getCgroupMounts(m *configs.Mount) ([]*configs.Mount, error) { - mounts, err := cgroups.GetCgroupMounts(false) - if err != nil { - return nil, err - } - - cgroupPaths, err := cgroups.ParseCgroupFile("/proc/self/cgroup") - if err != nil { - return nil, err - } - - var binds []*configs.Mount - - for _, mm := range mounts { - dir, err := mm.GetOwnCgroup(cgroupPaths) - if err != nil { - return nil, err - } - relDir, err := filepath.Rel(mm.Root, dir) - if err != nil { - return nil, err - } - binds = append(binds, &configs.Mount{ - Device: "bind", - Source: filepath.Join(mm.Mountpoint, relDir), - Destination: filepath.Join(m.Destination, filepath.Base(mm.Mountpoint)), - Flags: unix.MS_BIND | unix.MS_REC | m.Flags, - PropagationFlags: m.PropagationFlags, - }) - } - - return binds, nil -} - -// checkMountDestination checks to ensure that the mount destination is not over the top of /proc. -// dest is required to be an abs path and have any symlinks resolved before calling this function. -func checkMountDestination(rootfs, dest string) error { - invalidDestinations := []string{ - "/proc", - } - // White list, it should be sub directories of invalid destinations - validDestinations := []string{ - // These entries can be bind mounted by files emulated by fuse, - // so commands like top, free displays stats in container. - "/proc/cpuinfo", - "/proc/diskstats", - "/proc/meminfo", - "/proc/stat", - "/proc/swaps", - "/proc/uptime", - "/proc/loadavg", - "/proc/net/dev", - } - for _, valid := range validDestinations { - path, err := filepath.Rel(filepath.Join(rootfs, valid), dest) - if err != nil { - return err - } - if path == "." { - return nil - } - } - for _, invalid := range invalidDestinations { - path, err := filepath.Rel(filepath.Join(rootfs, invalid), dest) - if err != nil { - return err - } - if path != "." && !strings.HasPrefix(path, "..") { - return fmt.Errorf("%q cannot be mounted because it is located inside %q", dest, invalid) - } - } - return nil -} - -func setupDevSymlinks(rootfs string) error { - var links = [][2]string{ - {"/proc/self/fd", "/dev/fd"}, - {"/proc/self/fd/0", "/dev/stdin"}, - {"/proc/self/fd/1", "/dev/stdout"}, - {"/proc/self/fd/2", "/dev/stderr"}, - } - // kcore support can be toggled with CONFIG_PROC_KCORE; only create a symlink - // in /dev if it exists in /proc. - if _, err := os.Stat("/proc/kcore"); err == nil { - links = append(links, [2]string{"/proc/kcore", "/dev/core"}) - } - for _, link := range links { - var ( - src = link[0] - dst = filepath.Join(rootfs, link[1]) - ) - if err := os.Symlink(src, dst); err != nil && !os.IsExist(err) { - return fmt.Errorf("symlink %s %s %s", src, dst, err) - } - } - return nil -} - -// If stdin, stdout, and/or stderr are pointing to `/dev/null` in the parent's rootfs -// this method will make them point to `/dev/null` in this container's rootfs. This -// needs to be called after we chroot/pivot into the container's rootfs so that any -// symlinks are resolved locally. 
-func reOpenDevNull() error { - var stat, devNullStat unix.Stat_t - file, err := os.OpenFile("/dev/null", os.O_RDWR, 0) - if err != nil { - return fmt.Errorf("Failed to open /dev/null - %s", err) - } - defer file.Close() - if err := unix.Fstat(int(file.Fd()), &devNullStat); err != nil { - return err - } - for fd := 0; fd < 3; fd++ { - if err := unix.Fstat(fd, &stat); err != nil { - return err - } - if stat.Rdev == devNullStat.Rdev { - // Close and re-open the fd. - if err := unix.Dup3(int(file.Fd()), fd, 0); err != nil { - return err - } - } - } - return nil -} - -// Create the device nodes in the container. -func createDevices(config *configs.Config) error { - useBindMount := system.RunningInUserNS() || config.Namespaces.Contains(configs.NEWUSER) - oldMask := unix.Umask(0000) - for _, node := range config.Devices { - // containers running in a user namespace are not allowed to mknod - // devices so we can just bind mount it from the host. - if err := createDeviceNode(config.Rootfs, node, useBindMount); err != nil { - unix.Umask(oldMask) - return err - } - } - unix.Umask(oldMask) - return nil -} - -func bindMountDeviceNode(dest string, node *configs.Device) error { - f, err := os.Create(dest) - if err != nil && !os.IsExist(err) { - return err - } - if f != nil { - f.Close() - } - return unix.Mount(node.Path, dest, "bind", unix.MS_BIND, "") -} - -// Creates the device node in the rootfs of the container. -func createDeviceNode(rootfs string, node *configs.Device, bind bool) error { - dest := filepath.Join(rootfs, node.Path) - if err := os.MkdirAll(filepath.Dir(dest), 0755); err != nil { - return err - } - - if bind { - return bindMountDeviceNode(dest, node) - } - if err := mknodDevice(dest, node); err != nil { - if os.IsExist(err) { - return nil - } else if os.IsPermission(err) { - return bindMountDeviceNode(dest, node) - } - return err - } - return nil -} - -func mknodDevice(dest string, node *configs.Device) error { - fileMode := node.FileMode - switch node.Type { - case 'c', 'u': - fileMode |= unix.S_IFCHR - case 'b': - fileMode |= unix.S_IFBLK - case 'p': - fileMode |= unix.S_IFIFO - default: - return fmt.Errorf("%c is not a valid device type for device %s", node.Type, node.Path) - } - if err := unix.Mknod(dest, uint32(fileMode), node.Mkdev()); err != nil { - return err - } - return unix.Chown(dest, int(node.Uid), int(node.Gid)) -} - -func getMountInfo(mountinfo []*mount.Info, dir string) *mount.Info { - for _, m := range mountinfo { - if m.Mountpoint == dir { - return m - } - } - return nil -} - -// Get the parent mount point of directory passed in as argument. Also return -// optional fields. -func getParentMount(rootfs string) (string, string, error) { - var path string - - mountinfos, err := mount.GetMounts() - if err != nil { - return "", "", err - } - - mountinfo := getMountInfo(mountinfos, rootfs) - if mountinfo != nil { - return rootfs, mountinfo.Optional, nil - } - - path = rootfs - for { - path = filepath.Dir(path) - - mountinfo = getMountInfo(mountinfos, path) - if mountinfo != nil { - return path, mountinfo.Optional, nil - } - - if path == "/" { - break - } - } - - // If we are here, we did not find parent mount. Something is wrong. 
- return "", "", fmt.Errorf("Could not find parent mount of %s", rootfs) -} - -// Make parent mount private if it was shared -func rootfsParentMountPrivate(rootfs string) error { - sharedMount := false - - parentMount, optionalOpts, err := getParentMount(rootfs) - if err != nil { - return err - } - - optsSplit := strings.Split(optionalOpts, " ") - for _, opt := range optsSplit { - if strings.HasPrefix(opt, "shared:") { - sharedMount = true - break - } - } - - // Make parent mount PRIVATE if it was shared. It is needed for two - // reasons. First of all pivot_root() will fail if parent mount is - // shared. Secondly when we bind mount rootfs it will propagate to - // parent namespace and we don't want that to happen. - if sharedMount { - return unix.Mount("", parentMount, "", unix.MS_PRIVATE, "") - } - - return nil -} - -func prepareRoot(config *configs.Config) error { - flag := unix.MS_SLAVE | unix.MS_REC - if config.RootPropagation != 0 { - flag = config.RootPropagation - } - if err := unix.Mount("", "/", "", uintptr(flag), ""); err != nil { - return err - } - - // Make parent mount private to make sure following bind mount does - // not propagate in other namespaces. Also it will help with kernel - // check pass in pivot_root. (IS_SHARED(new_mnt->mnt_parent)) - if err := rootfsParentMountPrivate(config.Rootfs); err != nil { - return err - } - - return unix.Mount(config.Rootfs, config.Rootfs, "bind", unix.MS_BIND|unix.MS_REC, "") -} - -func setReadonly() error { - return unix.Mount("/", "/", "bind", unix.MS_BIND|unix.MS_REMOUNT|unix.MS_RDONLY|unix.MS_REC, "") -} - -func setupPtmx(config *configs.Config) error { - ptmx := filepath.Join(config.Rootfs, "dev/ptmx") - if err := os.Remove(ptmx); err != nil && !os.IsNotExist(err) { - return err - } - if err := os.Symlink("pts/ptmx", ptmx); err != nil { - return fmt.Errorf("symlink dev ptmx %s", err) - } - return nil -} - -// pivotRoot will call pivot_root such that rootfs becomes the new root -// filesystem, and everything else is cleaned up. -func pivotRoot(rootfs string) error { - // While the documentation may claim otherwise, pivot_root(".", ".") is - // actually valid. What this results in is / being the new root but - // /proc/self/cwd being the old root. Since we can play around with the cwd - // with pivot_root this allows us to pivot without creating directories in - // the rootfs. Shout-outs to the LXC developers for giving us this idea. - - oldroot, err := unix.Open("/", unix.O_DIRECTORY|unix.O_RDONLY, 0) - if err != nil { - return err - } - defer unix.Close(oldroot) - - newroot, err := unix.Open(rootfs, unix.O_DIRECTORY|unix.O_RDONLY, 0) - if err != nil { - return err - } - defer unix.Close(newroot) - - // Change to the new root so that the pivot_root actually acts on it. - if err := unix.Fchdir(newroot); err != nil { - return err - } - - if err := unix.PivotRoot(".", "."); err != nil { - return fmt.Errorf("pivot_root %s", err) - } - - // Currently our "." is oldroot (according to the current kernel code). - // However, purely for safety, we will fchdir(oldroot) since there isn't - // really any guarantee from the kernel what /proc/self/cwd will be after a - // pivot_root(2). - - if err := unix.Fchdir(oldroot); err != nil { - return err - } - - // Make oldroot rslave to make sure our unmounts don't propagate to the - // host (and thus bork the machine). 
We don't use rprivate because this is - // known to cause issues due to races where we still have a reference to a - // mount while a process in the host namespace are trying to operate on - // something they think has no mounts (devicemapper in particular). - if err := unix.Mount("", ".", "", unix.MS_SLAVE|unix.MS_REC, ""); err != nil { - return err - } - // Preform the unmount. MNT_DETACH allows us to unmount /proc/self/cwd. - if err := unix.Unmount(".", unix.MNT_DETACH); err != nil { - return err - } - - // Switch back to our shiny new root. - if err := unix.Chdir("/"); err != nil { - return fmt.Errorf("chdir / %s", err) - } - return nil -} - -func msMoveRoot(rootfs string) error { - mountinfos, err := mount.GetMounts() - if err != nil { - return err - } - - absRootfs, err := filepath.Abs(rootfs) - if err != nil { - return err - } - - for _, info := range mountinfos { - p, err := filepath.Abs(info.Mountpoint) - if err != nil { - return err - } - // Umount every syfs and proc file systems, except those under the container rootfs - if (info.Fstype != "proc" && info.Fstype != "sysfs") || filepath.HasPrefix(p, absRootfs) { - continue - } - // Be sure umount events are not propagated to the host. - if err := unix.Mount("", p, "", unix.MS_SLAVE|unix.MS_REC, ""); err != nil { - return err - } - if err := unix.Unmount(p, unix.MNT_DETACH); err != nil { - if err != unix.EINVAL && err != unix.EPERM { - return err - } else { - // If we have not privileges for umounting (e.g. rootless), then - // cover the path. - if err := unix.Mount("tmpfs", p, "tmpfs", 0, ""); err != nil { - return err - } - } - } - } - if err := unix.Mount(rootfs, "/", "", unix.MS_MOVE, ""); err != nil { - return err - } - return chroot(rootfs) -} - -func chroot(rootfs string) error { - if err := unix.Chroot("."); err != nil { - return err - } - return unix.Chdir("/") -} - -// createIfNotExists creates a file or a directory only if it does not already exist. -func createIfNotExists(path string, isDir bool) error { - if _, err := os.Stat(path); err != nil { - if os.IsNotExist(err) { - if isDir { - return os.MkdirAll(path, 0755) - } - if err := os.MkdirAll(filepath.Dir(path), 0755); err != nil { - return err - } - f, err := os.OpenFile(path, os.O_CREATE, 0755) - if err != nil { - return err - } - f.Close() - } - } - return nil -} - -// readonlyPath will make a path read only. -func readonlyPath(path string) error { - if err := unix.Mount(path, path, "", unix.MS_BIND|unix.MS_REC, ""); err != nil { - if os.IsNotExist(err) { - return nil - } - return err - } - return unix.Mount(path, path, "", unix.MS_BIND|unix.MS_REMOUNT|unix.MS_RDONLY|unix.MS_REC, "") -} - -// remountReadonly will remount an existing mount point and ensure that it is read-only. -func remountReadonly(m *configs.Mount) error { - var ( - dest = m.Destination - flags = m.Flags - ) - for i := 0; i < 5; i++ { - // There is a special case in the kernel for - // MS_REMOUNT | MS_BIND, which allows us to change only the - // flags even as an unprivileged user (i.e. user namespace) - // assuming we don't drop any security related flags (nodev, - // nosuid, etc.). So, let's use that case so that we can do - // this re-mount without failing in a userns. 
- flags |= unix.MS_REMOUNT | unix.MS_BIND | unix.MS_RDONLY - if err := unix.Mount("", dest, "", uintptr(flags), ""); err != nil { - switch err { - case unix.EBUSY: - time.Sleep(100 * time.Millisecond) - continue - default: - return err - } - } - return nil - } - return fmt.Errorf("unable to mount %s as readonly max retries reached", dest) -} - -// maskPath masks the top of the specified path inside a container to avoid -// security issues from processes reading information from non-namespace aware -// mounts ( proc/kcore ). -// For files, maskPath bind mounts /dev/null over the top of the specified path. -// For directories, maskPath mounts read-only tmpfs over the top of the specified path. -func maskPath(path string, mountLabel string) error { - if err := unix.Mount("/dev/null", path, "", unix.MS_BIND, ""); err != nil && !os.IsNotExist(err) { - if err == unix.ENOTDIR { - return unix.Mount("tmpfs", path, "tmpfs", unix.MS_RDONLY, label.FormatMountLabel("", mountLabel)) - } - return err - } - return nil -} - -// writeSystemProperty writes the value to a path under /proc/sys as determined from the key. -// For e.g. net.ipv4.ip_forward translated to /proc/sys/net/ipv4/ip_forward. -func writeSystemProperty(key, value string) error { - keyPath := strings.Replace(key, ".", "/", -1) - return ioutil.WriteFile(path.Join("/proc/sys", keyPath), []byte(value), 0644) -} - -func remount(m *configs.Mount, rootfs string) error { - var ( - dest = m.Destination - ) - if !strings.HasPrefix(dest, rootfs) { - dest = filepath.Join(rootfs, dest) - } - return unix.Mount(m.Source, dest, m.Device, uintptr(m.Flags|unix.MS_REMOUNT), "") -} - -// Do the mount operation followed by additional mounts required to take care -// of propagation flags. -func mountPropagate(m *configs.Mount, rootfs string, mountLabel string) error { - var ( - dest = m.Destination - data = label.FormatMountLabel(m.Data, mountLabel) - flags = m.Flags - ) - if libcontainerUtils.CleanPath(dest) == "/dev" { - flags &= ^unix.MS_RDONLY - } - - copyUp := m.Extensions&configs.EXT_COPYUP == configs.EXT_COPYUP - if !(copyUp || strings.HasPrefix(dest, rootfs)) { - dest = filepath.Join(rootfs, dest) - } - - if err := unix.Mount(m.Source, dest, m.Device, uintptr(flags), data); err != nil { - return err - } - - for _, pflag := range m.PropagationFlags { - if err := unix.Mount("", dest, "", uintptr(pflag), ""); err != nil { - return err - } - } - return nil -} - -func mountNewCgroup(m *configs.Mount) error { - var ( - data = m.Data - source = m.Source - ) - if data == "systemd" { - data = cgroups.CgroupNamePrefix + data - source = "systemd" - } - if err := unix.Mount(source, m.Destination, m.Device, uintptr(m.Flags), data); err != nil { - return err - } - return nil -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/seccomp/config.go b/vendor/github.com/opencontainers/runc/libcontainer/seccomp/config.go deleted file mode 100644 index ded5a6bbc..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/seccomp/config.go +++ /dev/null @@ -1,76 +0,0 @@ -package seccomp - -import ( - "fmt" - - "github.com/opencontainers/runc/libcontainer/configs" -) - -var operators = map[string]configs.Operator{ - "SCMP_CMP_NE": configs.NotEqualTo, - "SCMP_CMP_LT": configs.LessThan, - "SCMP_CMP_LE": configs.LessThanOrEqualTo, - "SCMP_CMP_EQ": configs.EqualTo, - "SCMP_CMP_GE": configs.GreaterThanOrEqualTo, - "SCMP_CMP_GT": configs.GreaterThan, - "SCMP_CMP_MASKED_EQ": configs.MaskEqualTo, -} - -var actions = map[string]configs.Action{ - "SCMP_ACT_KILL": 
configs.Kill, - "SCMP_ACT_ERRNO": configs.Errno, - "SCMP_ACT_TRAP": configs.Trap, - "SCMP_ACT_ALLOW": configs.Allow, - "SCMP_ACT_TRACE": configs.Trace, -} - -var archs = map[string]string{ - "SCMP_ARCH_X86": "x86", - "SCMP_ARCH_X86_64": "amd64", - "SCMP_ARCH_X32": "x32", - "SCMP_ARCH_ARM": "arm", - "SCMP_ARCH_AARCH64": "arm64", - "SCMP_ARCH_MIPS": "mips", - "SCMP_ARCH_MIPS64": "mips64", - "SCMP_ARCH_MIPS64N32": "mips64n32", - "SCMP_ARCH_MIPSEL": "mipsel", - "SCMP_ARCH_MIPSEL64": "mipsel64", - "SCMP_ARCH_MIPSEL64N32": "mipsel64n32", - "SCMP_ARCH_PPC": "ppc", - "SCMP_ARCH_PPC64": "ppc64", - "SCMP_ARCH_PPC64LE": "ppc64le", - "SCMP_ARCH_S390": "s390", - "SCMP_ARCH_S390X": "s390x", -} - -// ConvertStringToOperator converts a string into a Seccomp comparison operator. -// Comparison operators use the names they are assigned by Libseccomp's header. -// Attempting to convert a string that is not a valid operator results in an -// error. -func ConvertStringToOperator(in string) (configs.Operator, error) { - if op, ok := operators[in]; ok == true { - return op, nil - } - return 0, fmt.Errorf("string %s is not a valid operator for seccomp", in) -} - -// ConvertStringToAction converts a string into a Seccomp rule match action. -// Actions use the names they are assigned in Libseccomp's header, though some -// (notable, SCMP_ACT_TRACE) are not available in this implementation and will -// return errors. -// Attempting to convert a string that is not a valid action results in an -// error. -func ConvertStringToAction(in string) (configs.Action, error) { - if act, ok := actions[in]; ok == true { - return act, nil - } - return 0, fmt.Errorf("string %s is not a valid action for seccomp", in) -} - -// ConvertStringToArch converts a string into a Seccomp comparison arch. 
-func ConvertStringToArch(in string) (string, error) { - if arch, ok := archs[in]; ok == true { - return arch, nil - } - return "", fmt.Errorf("string %s is not a valid arch for seccomp", in) -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/seccomp/seccomp_linux.go b/vendor/github.com/opencontainers/runc/libcontainer/seccomp/seccomp_linux.go deleted file mode 100644 index d99f3fe64..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/seccomp/seccomp_linux.go +++ /dev/null @@ -1,258 +0,0 @@ -// +build linux,cgo,seccomp - -package seccomp - -import ( - "bufio" - "fmt" - "os" - "strings" - - "github.com/opencontainers/runc/libcontainer/configs" - libseccomp "github.com/seccomp/libseccomp-golang" - - "golang.org/x/sys/unix" -) - -var ( - actAllow = libseccomp.ActAllow - actTrap = libseccomp.ActTrap - actKill = libseccomp.ActKill - actTrace = libseccomp.ActTrace.SetReturnCode(int16(unix.EPERM)) - actErrno = libseccomp.ActErrno.SetReturnCode(int16(unix.EPERM)) -) - -const ( - // Linux system calls can have at most 6 arguments - syscallMaxArguments int = 6 -) - -// Filters given syscalls in a container, preventing them from being used -// Started in the container init process, and carried over to all child processes -// Setns calls, however, require a separate invocation, as they are not children -// of the init until they join the namespace -func InitSeccomp(config *configs.Seccomp) error { - if config == nil { - return fmt.Errorf("cannot initialize Seccomp - nil config passed") - } - - defaultAction, err := getAction(config.DefaultAction) - if err != nil { - return fmt.Errorf("error initializing seccomp - invalid default action") - } - - filter, err := libseccomp.NewFilter(defaultAction) - if err != nil { - return fmt.Errorf("error creating filter: %s", err) - } - - // Add extra architectures - for _, arch := range config.Architectures { - scmpArch, err := libseccomp.GetArchFromString(arch) - if err != nil { - return fmt.Errorf("error validating Seccomp architecture: %s", err) - } - - if err := filter.AddArch(scmpArch); err != nil { - return fmt.Errorf("error adding architecture to seccomp filter: %s", err) - } - } - - // Unset no new privs bit - if err := filter.SetNoNewPrivsBit(false); err != nil { - return fmt.Errorf("error setting no new privileges: %s", err) - } - - // Add a rule for each syscall - for _, call := range config.Syscalls { - if call == nil { - return fmt.Errorf("encountered nil syscall while initializing Seccomp") - } - - if err = matchCall(filter, call); err != nil { - return err - } - } - - if err = filter.Load(); err != nil { - return fmt.Errorf("error loading seccomp filter into kernel: %s", err) - } - - return nil -} - -// IsEnabled returns if the kernel has been configured to support seccomp. -func IsEnabled() bool { - // Try to read from /proc/self/status for kernels > 3.8 - s, err := parseStatusFile("/proc/self/status") - if err != nil { - // Check if Seccomp is supported, via CONFIG_SECCOMP. - if err := unix.Prctl(unix.PR_GET_SECCOMP, 0, 0, 0, 0); err != unix.EINVAL { - // Make sure the kernel has CONFIG_SECCOMP_FILTER. 
- if err := unix.Prctl(unix.PR_SET_SECCOMP, unix.SECCOMP_MODE_FILTER, 0, 0, 0); err != unix.EINVAL { - return true - } - } - return false - } - _, ok := s["Seccomp"] - return ok -} - -// Convert Libcontainer Action to Libseccomp ScmpAction -func getAction(act configs.Action) (libseccomp.ScmpAction, error) { - switch act { - case configs.Kill: - return actKill, nil - case configs.Errno: - return actErrno, nil - case configs.Trap: - return actTrap, nil - case configs.Allow: - return actAllow, nil - case configs.Trace: - return actTrace, nil - default: - return libseccomp.ActInvalid, fmt.Errorf("invalid action, cannot use in rule") - } -} - -// Convert Libcontainer Operator to Libseccomp ScmpCompareOp -func getOperator(op configs.Operator) (libseccomp.ScmpCompareOp, error) { - switch op { - case configs.EqualTo: - return libseccomp.CompareEqual, nil - case configs.NotEqualTo: - return libseccomp.CompareNotEqual, nil - case configs.GreaterThan: - return libseccomp.CompareGreater, nil - case configs.GreaterThanOrEqualTo: - return libseccomp.CompareGreaterEqual, nil - case configs.LessThan: - return libseccomp.CompareLess, nil - case configs.LessThanOrEqualTo: - return libseccomp.CompareLessOrEqual, nil - case configs.MaskEqualTo: - return libseccomp.CompareMaskedEqual, nil - default: - return libseccomp.CompareInvalid, fmt.Errorf("invalid operator, cannot use in rule") - } -} - -// Convert Libcontainer Arg to Libseccomp ScmpCondition -func getCondition(arg *configs.Arg) (libseccomp.ScmpCondition, error) { - cond := libseccomp.ScmpCondition{} - - if arg == nil { - return cond, fmt.Errorf("cannot convert nil to syscall condition") - } - - op, err := getOperator(arg.Op) - if err != nil { - return cond, err - } - - return libseccomp.MakeCondition(arg.Index, op, arg.Value, arg.ValueTwo) -} - -// Add a rule to match a single syscall -func matchCall(filter *libseccomp.ScmpFilter, call *configs.Syscall) error { - if call == nil || filter == nil { - return fmt.Errorf("cannot use nil as syscall to block") - } - - if len(call.Name) == 0 { - return fmt.Errorf("empty string is not a valid syscall") - } - - // If we can't resolve the syscall, assume it's not supported on this kernel - // Ignore it, don't error out - callNum, err := libseccomp.GetSyscallFromName(call.Name) - if err != nil { - return nil - } - - // Convert the call's action to the libseccomp equivalent - callAct, err := getAction(call.Action) - if err != nil { - return fmt.Errorf("action in seccomp profile is invalid: %s", err) - } - - // Unconditional match - just add the rule - if len(call.Args) == 0 { - if err = filter.AddRule(callNum, callAct); err != nil { - return fmt.Errorf("error adding seccomp filter rule for syscall %s: %s", call.Name, err) - } - } else { - // If two or more arguments have the same condition, - // Revert to old behavior, adding each condition as a separate rule - argCounts := make([]uint, syscallMaxArguments) - conditions := []libseccomp.ScmpCondition{} - - for _, cond := range call.Args { - newCond, err := getCondition(cond) - if err != nil { - return fmt.Errorf("error creating seccomp syscall condition for syscall %s: %s", call.Name, err) - } - - argCounts[cond.Index] += 1 - - conditions = append(conditions, newCond) - } - - hasMultipleArgs := false - for _, count := range argCounts { - if count > 1 { - hasMultipleArgs = true - break - } - } - - if hasMultipleArgs { - // Revert to old behavior - // Add each condition attached to a separate rule - for _, cond := range conditions { - condArr := 
[]libseccomp.ScmpCondition{cond} - - if err = filter.AddRuleConditional(callNum, callAct, condArr); err != nil { - return fmt.Errorf("error adding seccomp rule for syscall %s: %s", call.Name, err) - } - } - } else { - // No conditions share same argument - // Use new, proper behavior - if err = filter.AddRuleConditional(callNum, callAct, conditions); err != nil { - return fmt.Errorf("error adding seccomp rule for syscall %s: %s", call.Name, err) - } - } - } - - return nil -} - -func parseStatusFile(path string) (map[string]string, error) { - f, err := os.Open(path) - if err != nil { - return nil, err - } - defer f.Close() - - s := bufio.NewScanner(f) - status := make(map[string]string) - - for s.Scan() { - text := s.Text() - parts := strings.Split(text, ":") - - if len(parts) <= 1 { - continue - } - - status[parts[0]] = parts[1] - } - if err := s.Err(); err != nil { - return nil, err - } - - return status, nil -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/seccomp/seccomp_unsupported.go b/vendor/github.com/opencontainers/runc/libcontainer/seccomp/seccomp_unsupported.go deleted file mode 100644 index 44df1ad4c..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/seccomp/seccomp_unsupported.go +++ /dev/null @@ -1,24 +0,0 @@ -// +build !linux !cgo !seccomp - -package seccomp - -import ( - "errors" - - "github.com/opencontainers/runc/libcontainer/configs" -) - -var ErrSeccompNotEnabled = errors.New("seccomp: config provided but seccomp not supported") - -// InitSeccomp does nothing because seccomp is not supported. -func InitSeccomp(config *configs.Seccomp) error { - if config != nil { - return ErrSeccompNotEnabled - } - return nil -} - -// IsEnabled returns false, because it is not supported. -func IsEnabled() bool { - return false -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/setns_init_linux.go b/vendor/github.com/opencontainers/runc/libcontainer/setns_init_linux.go deleted file mode 100644 index 888981f52..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/setns_init_linux.go +++ /dev/null @@ -1,92 +0,0 @@ -// +build linux - -package libcontainer - -import ( - "fmt" - "os" - "runtime" - - "github.com/opencontainers/runc/libcontainer/apparmor" - "github.com/opencontainers/runc/libcontainer/keys" - "github.com/opencontainers/runc/libcontainer/seccomp" - "github.com/opencontainers/runc/libcontainer/system" - "github.com/opencontainers/selinux/go-selinux/label" - "github.com/pkg/errors" - - "golang.org/x/sys/unix" -) - -// linuxSetnsInit performs the container's initialization for running a new process -// inside an existing container. -type linuxSetnsInit struct { - pipe *os.File - consoleSocket *os.File - config *initConfig -} - -func (l *linuxSetnsInit) getSessionRingName() string { - return fmt.Sprintf("_ses.%s", l.config.ContainerId) -} - -func (l *linuxSetnsInit) Init() error { - runtime.LockOSThread() - defer runtime.UnlockOSThread() - - if !l.config.Config.NoNewKeyring { - if err := label.SetKeyLabel(l.config.ProcessLabel); err != nil { - return err - } - defer label.SetKeyLabel("") - // Do not inherit the parent's session keyring. - if _, err := keys.JoinSessionKeyring(l.getSessionRingName()); err != nil { - // Same justification as in standart_init_linux.go as to why we - // don't bail on ENOSYS. - // - // TODO(cyphar): And we should have logging here too. 
- if errors.Cause(err) != unix.ENOSYS { - return errors.Wrap(err, "join session keyring") - } - } - } - if l.config.CreateConsole { - if err := setupConsole(l.consoleSocket, l.config, false); err != nil { - return err - } - if err := system.Setctty(); err != nil { - return err - } - } - if l.config.NoNewPrivileges { - if err := unix.Prctl(unix.PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0); err != nil { - return err - } - } - if err := label.SetProcessLabel(l.config.ProcessLabel); err != nil { - return err - } - defer label.SetProcessLabel("") - // Without NoNewPrivileges seccomp is a privileged operation, so we need to - // do this before dropping capabilities; otherwise do it as late as possible - // just before execve so as few syscalls take place after it as possible. - if l.config.Config.Seccomp != nil && !l.config.NoNewPrivileges { - if err := seccomp.InitSeccomp(l.config.Config.Seccomp); err != nil { - return err - } - } - if err := finalizeNamespace(l.config); err != nil { - return err - } - if err := apparmor.ApplyProfile(l.config.AppArmorProfile); err != nil { - return err - } - // Set seccomp as close to execve as possible, so as few syscalls take - // place afterward (reducing the amount of syscalls that users need to - // enable in their seccomp profiles). - if l.config.Config.Seccomp != nil && l.config.NoNewPrivileges { - if err := seccomp.InitSeccomp(l.config.Config.Seccomp); err != nil { - return newSystemErrorWithCause(err, "init seccomp") - } - } - return system.Execv(l.config.Args[0], l.config.Args[0:], os.Environ()) -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/stacktrace/capture.go b/vendor/github.com/opencontainers/runc/libcontainer/stacktrace/capture.go deleted file mode 100644 index 0bbe14950..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/stacktrace/capture.go +++ /dev/null @@ -1,27 +0,0 @@ -package stacktrace - -import "runtime" - -// Capture captures a stacktrace for the current calling go program -// -// skip is the number of frames to skip -func Capture(userSkip int) Stacktrace { - var ( - skip = userSkip + 1 // add one for our own function - frames []Frame - prevPc uintptr - ) - for i := skip; ; i++ { - pc, file, line, ok := runtime.Caller(i) - //detect if caller is repeated to avoid loop, gccgo - //currently runs into a loop without this check - if !ok || pc == prevPc { - break - } - frames = append(frames, NewFrame(pc, file, line)) - prevPc = pc - } - return Stacktrace{ - Frames: frames, - } -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/stacktrace/frame.go b/vendor/github.com/opencontainers/runc/libcontainer/stacktrace/frame.go deleted file mode 100644 index 0d590d9a5..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/stacktrace/frame.go +++ /dev/null @@ -1,38 +0,0 @@ -package stacktrace - -import ( - "path/filepath" - "runtime" - "strings" -) - -// NewFrame returns a new stack frame for the provided information -func NewFrame(pc uintptr, file string, line int) Frame { - fn := runtime.FuncForPC(pc) - if fn == nil { - return Frame{} - } - pack, name := parseFunctionName(fn.Name()) - return Frame{ - Line: line, - File: filepath.Base(file), - Package: pack, - Function: name, - } -} - -func parseFunctionName(name string) (string, string) { - i := strings.LastIndex(name, ".") - if i == -1 { - return "", name - } - return name[:i], name[i+1:] -} - -// Frame contains all the information for a stack frame within a go program -type Frame struct { - File string - Function string - Package string - Line 
int -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/stacktrace/stacktrace.go b/vendor/github.com/opencontainers/runc/libcontainer/stacktrace/stacktrace.go deleted file mode 100644 index 5e8b58d2d..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/stacktrace/stacktrace.go +++ /dev/null @@ -1,5 +0,0 @@ -package stacktrace - -type Stacktrace struct { - Frames []Frame -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/standard_init_linux.go b/vendor/github.com/opencontainers/runc/libcontainer/standard_init_linux.go deleted file mode 100644 index 4e03b8bc0..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/standard_init_linux.go +++ /dev/null @@ -1,214 +0,0 @@ -// +build linux - -package libcontainer - -import ( - "fmt" - "os" - "os/exec" - "runtime" - "syscall" //only for Exec - - "github.com/opencontainers/runc/libcontainer/apparmor" - "github.com/opencontainers/runc/libcontainer/configs" - "github.com/opencontainers/runc/libcontainer/keys" - "github.com/opencontainers/runc/libcontainer/seccomp" - "github.com/opencontainers/runc/libcontainer/system" - "github.com/opencontainers/selinux/go-selinux/label" - "github.com/pkg/errors" - - "golang.org/x/sys/unix" -) - -type linuxStandardInit struct { - pipe *os.File - consoleSocket *os.File - parentPid int - fifoFd int - config *initConfig -} - -func (l *linuxStandardInit) getSessionRingParams() (string, uint32, uint32) { - var newperms uint32 - - if l.config.Config.Namespaces.Contains(configs.NEWUSER) { - // With user ns we need 'other' search permissions. - newperms = 0x8 - } else { - // Without user ns we need 'UID' search permissions. - newperms = 0x80000 - } - - // Create a unique per session container name that we can join in setns; - // However, other containers can also join it. - return fmt.Sprintf("_ses.%s", l.config.ContainerId), 0xffffffff, newperms -} - -func (l *linuxStandardInit) Init() error { - runtime.LockOSThread() - defer runtime.UnlockOSThread() - if !l.config.Config.NoNewKeyring { - if err := label.SetKeyLabel(l.config.ProcessLabel); err != nil { - return err - } - defer label.SetKeyLabel("") - ringname, keepperms, newperms := l.getSessionRingParams() - - // Do not inherit the parent's session keyring. - if sessKeyId, err := keys.JoinSessionKeyring(ringname); err != nil { - // If keyrings aren't supported then it is likely we are on an - // older kernel (or inside an LXC container). While we could bail, - // the security feature we are using here is best-effort (it only - // really provides marginal protection since VFS credentials are - // the only significant protection of keyrings). - // - // TODO(cyphar): Log this so people know what's going on, once we - // have proper logging in 'runc init'. - if errors.Cause(err) != unix.ENOSYS { - return errors.Wrap(err, "join session keyring") - } - } else { - // Make session keyring searcheable. If we've gotten this far we - // bail on any error -- we don't want to have a keyring with bad - // permissions. - if err := keys.ModKeyringPerm(sessKeyId, keepperms, newperms); err != nil { - return errors.Wrap(err, "mod keyring permissions") - } - } - } - - if err := setupNetwork(l.config); err != nil { - return err - } - if err := setupRoute(l.config.Config); err != nil { - return err - } - - label.Init() - if err := prepareRootfs(l.pipe, l.config); err != nil { - return err - } - // Set up the console. 
This has to be done *before* we finalize the rootfs, - // but *after* we've given the user the chance to set up all of the mounts - // they wanted. - if l.config.CreateConsole { - if err := setupConsole(l.consoleSocket, l.config, true); err != nil { - return err - } - if err := system.Setctty(); err != nil { - return errors.Wrap(err, "setctty") - } - } - - // Finish the rootfs setup. - if l.config.Config.Namespaces.Contains(configs.NEWNS) { - if err := finalizeRootfs(l.config.Config); err != nil { - return err - } - } - - if hostname := l.config.Config.Hostname; hostname != "" { - if err := unix.Sethostname([]byte(hostname)); err != nil { - return errors.Wrap(err, "sethostname") - } - } - if err := apparmor.ApplyProfile(l.config.AppArmorProfile); err != nil { - return errors.Wrap(err, "apply apparmor profile") - } - - for key, value := range l.config.Config.Sysctl { - if err := writeSystemProperty(key, value); err != nil { - return errors.Wrapf(err, "write sysctl key %s", key) - } - } - for _, path := range l.config.Config.ReadonlyPaths { - if err := readonlyPath(path); err != nil { - return errors.Wrapf(err, "readonly path %s", path) - } - } - for _, path := range l.config.Config.MaskPaths { - if err := maskPath(path, l.config.Config.MountLabel); err != nil { - return errors.Wrapf(err, "mask path %s", path) - } - } - pdeath, err := system.GetParentDeathSignal() - if err != nil { - return errors.Wrap(err, "get pdeath signal") - } - if l.config.NoNewPrivileges { - if err := unix.Prctl(unix.PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0); err != nil { - return errors.Wrap(err, "set nonewprivileges") - } - } - // Tell our parent that we're ready to Execv. This must be done before the - // Seccomp rules have been applied, because we need to be able to read and - // write to a socket. - if err := syncParentReady(l.pipe); err != nil { - return errors.Wrap(err, "sync ready") - } - if err := label.SetProcessLabel(l.config.ProcessLabel); err != nil { - return errors.Wrap(err, "set process label") - } - defer label.SetProcessLabel("") - // Without NoNewPrivileges seccomp is a privileged operation, so we need to - // do this before dropping capabilities; otherwise do it as late as possible - // just before execve so as few syscalls take place after it as possible. - if l.config.Config.Seccomp != nil && !l.config.NoNewPrivileges { - if err := seccomp.InitSeccomp(l.config.Config.Seccomp); err != nil { - return err - } - } - if err := finalizeNamespace(l.config); err != nil { - return err - } - // finalizeNamespace can change user/group which clears the parent death - // signal, so we restore it here. - if err := pdeath.Restore(); err != nil { - return errors.Wrap(err, "restore pdeath signal") - } - // Compare the parent from the initial start of the init process and make - // sure that it did not change. if the parent changes that means it died - // and we were reparented to something else so we should just kill ourself - // and not cause problems for someone else. - if unix.Getppid() != l.parentPid { - return unix.Kill(unix.Getpid(), unix.SIGKILL) - } - // Check for the arg before waiting to make sure it exists and it is - // returned as a create time error. - name, err := exec.LookPath(l.config.Args[0]) - if err != nil { - return err - } - // Close the pipe to signal that we have completed our init. - l.pipe.Close() - // Wait for the FIFO to be opened on the other side before exec-ing the - // user process. 
We open it through /proc/self/fd/$fd, because the fd that - // was given to us was an O_PATH fd to the fifo itself. Linux allows us to - // re-open an O_PATH fd through /proc. - fd, err := unix.Open(fmt.Sprintf("/proc/self/fd/%d", l.fifoFd), unix.O_WRONLY|unix.O_CLOEXEC, 0) - if err != nil { - return newSystemErrorWithCause(err, "open exec fifo") - } - if _, err := unix.Write(fd, []byte("0")); err != nil { - return newSystemErrorWithCause(err, "write 0 exec fifo") - } - // Close the O_PATH fifofd fd before exec because the kernel resets - // dumpable in the wrong order. This has been fixed in newer kernels, but - // we keep this to ensure CVE-2016-9962 doesn't re-emerge on older kernels. - // N.B. the core issue itself (passing dirfds to the host filesystem) has - // since been resolved. - // https://github.com/torvalds/linux/blob/v4.9/fs/exec.c#L1290-L1318 - unix.Close(l.fifoFd) - // Set seccomp as close to execve as possible, so as few syscalls take - // place afterward (reducing the amount of syscalls that users need to - // enable in their seccomp profiles). - if l.config.Config.Seccomp != nil && l.config.NoNewPrivileges { - if err := seccomp.InitSeccomp(l.config.Config.Seccomp); err != nil { - return newSystemErrorWithCause(err, "init seccomp") - } - } - if err := syscall.Exec(name, l.config.Args[0:], os.Environ()); err != nil { - return newSystemErrorWithCause(err, "exec user process") - } - return nil -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/state_linux.go b/vendor/github.com/opencontainers/runc/libcontainer/state_linux.go deleted file mode 100644 index 5c16a423f..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/state_linux.go +++ /dev/null @@ -1,251 +0,0 @@ -// +build linux - -package libcontainer - -import ( - "fmt" - "os" - "path/filepath" - - "github.com/opencontainers/runc/libcontainer/configs" - - "github.com/sirupsen/logrus" - "golang.org/x/sys/unix" -) - -func newStateTransitionError(from, to containerState) error { - return &stateTransitionError{ - From: from.status().String(), - To: to.status().String(), - } -} - -// stateTransitionError is returned when an invalid state transition happens from one -// state to another. -type stateTransitionError struct { - From string - To string -} - -func (s *stateTransitionError) Error() string { - return fmt.Sprintf("invalid state transition from %s to %s", s.From, s.To) -} - -type containerState interface { - transition(containerState) error - destroy() error - status() Status -} - -func destroy(c *linuxContainer) error { - if !c.config.Namespaces.Contains(configs.NEWPID) { - if err := signalAllProcesses(c.cgroupManager, unix.SIGKILL); err != nil { - logrus.Warn(err) - } - } - err := c.cgroupManager.Destroy() - if c.intelRdtManager != nil { - if ierr := c.intelRdtManager.Destroy(); err == nil { - err = ierr - } - } - if rerr := os.RemoveAll(c.root); err == nil { - err = rerr - } - c.initProcess = nil - if herr := runPoststopHooks(c); err == nil { - err = herr - } - c.state = &stoppedState{c: c} - return err -} - -func runPoststopHooks(c *linuxContainer) error { - if c.config.Hooks != nil { - s, err := c.currentOCIState() - if err != nil { - return err - } - for _, hook := range c.config.Hooks.Poststop { - if err := hook.Run(s); err != nil { - return err - } - } - } - return nil -} - -// stoppedState represents a container is a stopped/destroyed state. 
-type stoppedState struct { - c *linuxContainer -} - -func (b *stoppedState) status() Status { - return Stopped -} - -func (b *stoppedState) transition(s containerState) error { - switch s.(type) { - case *runningState, *restoredState: - b.c.state = s - return nil - case *stoppedState: - return nil - } - return newStateTransitionError(b, s) -} - -func (b *stoppedState) destroy() error { - return destroy(b.c) -} - -// runningState represents a container that is currently running. -type runningState struct { - c *linuxContainer -} - -func (r *runningState) status() Status { - return Running -} - -func (r *runningState) transition(s containerState) error { - switch s.(type) { - case *stoppedState: - t, err := r.c.runType() - if err != nil { - return err - } - if t == Running { - return newGenericError(fmt.Errorf("container still running"), ContainerNotStopped) - } - r.c.state = s - return nil - case *pausedState: - r.c.state = s - return nil - case *runningState: - return nil - } - return newStateTransitionError(r, s) -} - -func (r *runningState) destroy() error { - t, err := r.c.runType() - if err != nil { - return err - } - if t == Running { - return newGenericError(fmt.Errorf("container is not destroyed"), ContainerNotStopped) - } - return destroy(r.c) -} - -type createdState struct { - c *linuxContainer -} - -func (i *createdState) status() Status { - return Created -} - -func (i *createdState) transition(s containerState) error { - switch s.(type) { - case *runningState, *pausedState, *stoppedState: - i.c.state = s - return nil - case *createdState: - return nil - } - return newStateTransitionError(i, s) -} - -func (i *createdState) destroy() error { - i.c.initProcess.signal(unix.SIGKILL) - return destroy(i.c) -} - -// pausedState represents a container that is currently pause. It cannot be destroyed in a -// paused state and must transition back to running first. -type pausedState struct { - c *linuxContainer -} - -func (p *pausedState) status() Status { - return Paused -} - -func (p *pausedState) transition(s containerState) error { - switch s.(type) { - case *runningState, *stoppedState: - p.c.state = s - return nil - case *pausedState: - return nil - } - return newStateTransitionError(p, s) -} - -func (p *pausedState) destroy() error { - t, err := p.c.runType() - if err != nil { - return err - } - if t != Running && t != Created { - if err := p.c.cgroupManager.Freeze(configs.Thawed); err != nil { - return err - } - return destroy(p.c) - } - return newGenericError(fmt.Errorf("container is paused"), ContainerPaused) -} - -// restoredState is the same as the running state but also has associated checkpoint -// information that maybe need destroyed when the container is stopped and destroy is called. -type restoredState struct { - imageDir string - c *linuxContainer -} - -func (r *restoredState) status() Status { - return Running -} - -func (r *restoredState) transition(s containerState) error { - switch s.(type) { - case *stoppedState, *runningState: - return nil - } - return newStateTransitionError(r, s) -} - -func (r *restoredState) destroy() error { - if _, err := os.Stat(filepath.Join(r.c.root, "checkpoint")); err != nil { - if !os.IsNotExist(err) { - return err - } - } - return destroy(r.c) -} - -// loadedState is used whenever a container is restored, loaded, or setting additional -// processes inside and it should not be destroyed when it is exiting. 
-type loadedState struct { - c *linuxContainer - s Status -} - -func (n *loadedState) status() Status { - return n.s -} - -func (n *loadedState) transition(s containerState) error { - n.c.state = s - return nil -} - -func (n *loadedState) destroy() error { - if err := n.c.refreshState(); err != nil { - return err - } - return n.c.state.destroy() -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/stats.go b/vendor/github.com/opencontainers/runc/libcontainer/stats.go deleted file mode 100644 index 303e4b94c..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/stats.go +++ /dev/null @@ -1,15 +0,0 @@ -package libcontainer - -type NetworkInterface struct { - // Name is the name of the network interface. - Name string - - RxBytes uint64 - RxPackets uint64 - RxErrors uint64 - RxDropped uint64 - TxBytes uint64 - TxPackets uint64 - TxErrors uint64 - TxDropped uint64 -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/stats_linux.go b/vendor/github.com/opencontainers/runc/libcontainer/stats_linux.go deleted file mode 100644 index 29fd641e9..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/stats_linux.go +++ /dev/null @@ -1,10 +0,0 @@ -package libcontainer - -import "github.com/opencontainers/runc/libcontainer/cgroups" -import "github.com/opencontainers/runc/libcontainer/intelrdt" - -type Stats struct { - Interfaces []*NetworkInterface - CgroupStats *cgroups.Stats - IntelRdtStats *intelrdt.Stats -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/sync.go b/vendor/github.com/opencontainers/runc/libcontainer/sync.go deleted file mode 100644 index a8704a267..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/sync.go +++ /dev/null @@ -1,104 +0,0 @@ -package libcontainer - -import ( - "encoding/json" - "fmt" - "io" - - "github.com/opencontainers/runc/libcontainer/utils" -) - -type syncType string - -// Constants that are used for synchronisation between the parent and child -// during container setup. They come in pairs (with procError being a generic -// response which is followed by a &genericError). -// -// [ child ] <-> [ parent ] -// -// procHooks --> [run hooks] -// <-- procResume -// -// procConsole --> -// <-- procConsoleReq -// [send(fd)] --> [recv(fd)] -// <-- procConsoleAck -// -// procReady --> [final setup] -// <-- procRun -const ( - procError syncType = "procError" - procReady syncType = "procReady" - procRun syncType = "procRun" - procHooks syncType = "procHooks" - procResume syncType = "procResume" -) - -type syncT struct { - Type syncType `json:"type"` -} - -// writeSync is used to write to a synchronisation pipe. An error is returned -// if there was a problem writing the payload. -func writeSync(pipe io.Writer, sync syncType) error { - return utils.WriteJSON(pipe, syncT{sync}) -} - -// readSync is used to read from a synchronisation pipe. An error is returned -// if we got a genericError, the pipe was closed, or we got an unexpected flag. 
-func readSync(pipe io.Reader, expected syncType) error { - var procSync syncT - if err := json.NewDecoder(pipe).Decode(&procSync); err != nil { - if err == io.EOF { - return fmt.Errorf("parent closed synchronisation channel") - } - - if procSync.Type == procError { - var ierr genericError - - if err := json.NewDecoder(pipe).Decode(&ierr); err != nil { - return fmt.Errorf("failed reading error from parent: %v", err) - } - - return &ierr - } - - if procSync.Type != expected { - return fmt.Errorf("invalid synchronisation flag from parent") - } - } - return nil -} - -// parseSync runs the given callback function on each syncT received from the -// child. It will return once io.EOF is returned from the given pipe. -func parseSync(pipe io.Reader, fn func(*syncT) error) error { - dec := json.NewDecoder(pipe) - for { - var sync syncT - if err := dec.Decode(&sync); err != nil { - if err == io.EOF { - break - } - return err - } - - // We handle this case outside fn for cleanliness reasons. - var ierr *genericError - if sync.Type == procError { - if err := dec.Decode(&ierr); err != nil && err != io.EOF { - return newSystemErrorWithCause(err, "decoding proc error from init") - } - if ierr != nil { - return ierr - } - // Programmer error. - panic("No error following JSON procError payload.") - } - - if err := fn(&sync); err != nil { - return err - } - } - return nil -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/system/linux.go b/vendor/github.com/opencontainers/runc/libcontainer/system/linux.go deleted file mode 100644 index a4ae8901a..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/system/linux.go +++ /dev/null @@ -1,155 +0,0 @@ -// +build linux - -package system - -import ( - "os" - "os/exec" - "syscall" // only for exec - "unsafe" - - "github.com/opencontainers/runc/libcontainer/user" - "golang.org/x/sys/unix" -) - -// If arg2 is nonzero, set the "child subreaper" attribute of the -// calling process; if arg2 is zero, unset the attribute. When a -// process is marked as a child subreaper, all of the children -// that it creates, and their descendants, will be marked as -// having a subreaper. In effect, a subreaper fulfills the role -// of init(1) for its descendant processes. Upon termination of -// a process that is orphaned (i.e., its immediate parent has -// already terminated) and marked as having a subreaper, the -// nearest still living ancestor subreaper will receive a SIGCHLD -// signal and be able to wait(2) on the process to discover its -// termination status. 
-const PR_SET_CHILD_SUBREAPER = 36 - -type ParentDeathSignal int - -func (p ParentDeathSignal) Restore() error { - if p == 0 { - return nil - } - current, err := GetParentDeathSignal() - if err != nil { - return err - } - if p == current { - return nil - } - return p.Set() -} - -func (p ParentDeathSignal) Set() error { - return SetParentDeathSignal(uintptr(p)) -} - -func Execv(cmd string, args []string, env []string) error { - name, err := exec.LookPath(cmd) - if err != nil { - return err - } - - return syscall.Exec(name, args, env) -} - -func Prlimit(pid, resource int, limit unix.Rlimit) error { - _, _, err := unix.RawSyscall6(unix.SYS_PRLIMIT64, uintptr(pid), uintptr(resource), uintptr(unsafe.Pointer(&limit)), uintptr(unsafe.Pointer(&limit)), 0, 0) - if err != 0 { - return err - } - return nil -} - -func SetParentDeathSignal(sig uintptr) error { - if err := unix.Prctl(unix.PR_SET_PDEATHSIG, sig, 0, 0, 0); err != nil { - return err - } - return nil -} - -func GetParentDeathSignal() (ParentDeathSignal, error) { - var sig int - if err := unix.Prctl(unix.PR_GET_PDEATHSIG, uintptr(unsafe.Pointer(&sig)), 0, 0, 0); err != nil { - return -1, err - } - return ParentDeathSignal(sig), nil -} - -func SetKeepCaps() error { - if err := unix.Prctl(unix.PR_SET_KEEPCAPS, 1, 0, 0, 0); err != nil { - return err - } - - return nil -} - -func ClearKeepCaps() error { - if err := unix.Prctl(unix.PR_SET_KEEPCAPS, 0, 0, 0, 0); err != nil { - return err - } - - return nil -} - -func Setctty() error { - if err := unix.IoctlSetInt(0, unix.TIOCSCTTY, 0); err != nil { - return err - } - return nil -} - -// RunningInUserNS detects whether we are currently running in a user namespace. -// Originally copied from github.com/lxc/lxd/shared/util.go -func RunningInUserNS() bool { - uidmap, err := user.CurrentProcessUIDMap() - if err != nil { - // This kernel-provided file only exists if user namespaces are supported - return false - } - return UIDMapInUserNS(uidmap) -} - -func UIDMapInUserNS(uidmap []user.IDMap) bool { - /* - * We assume we are in the initial user namespace if we have a full - * range - 4294967295 uids starting at uid 0. 
- */ - if len(uidmap) == 1 && uidmap[0].ID == 0 && uidmap[0].ParentID == 0 && uidmap[0].Count == 4294967295 { - return false - } - return true -} - -// GetParentNSeuid returns the euid within the parent user namespace -func GetParentNSeuid() int64 { - euid := int64(os.Geteuid()) - uidmap, err := user.CurrentProcessUIDMap() - if err != nil { - // This kernel-provided file only exists if user namespaces are supported - return euid - } - for _, um := range uidmap { - if um.ID <= euid && euid <= um.ID+um.Count-1 { - return um.ParentID + euid - um.ID - } - } - return euid -} - -// SetSubreaper sets the value i as the subreaper setting for the calling process -func SetSubreaper(i int) error { - return unix.Prctl(PR_SET_CHILD_SUBREAPER, uintptr(i), 0, 0, 0) -} - -// GetSubreaper returns the subreaper setting for the calling process -func GetSubreaper() (int, error) { - var i uintptr - - if err := unix.Prctl(unix.PR_GET_CHILD_SUBREAPER, uintptr(unsafe.Pointer(&i)), 0, 0, 0); err != nil { - return -1, err - } - - return int(i), nil -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/system/proc.go b/vendor/github.com/opencontainers/runc/libcontainer/system/proc.go deleted file mode 100644 index 79232a437..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/system/proc.go +++ /dev/null @@ -1,113 +0,0 @@ -package system - -import ( - "fmt" - "io/ioutil" - "path/filepath" - "strconv" - "strings" -) - -// State is the status of a process. -type State rune - -const ( // Only values for Linux 3.14 and later are listed here - Dead State = 'X' - DiskSleep State = 'D' - Running State = 'R' - Sleeping State = 'S' - Stopped State = 'T' - TracingStop State = 't' - Zombie State = 'Z' -) - -// String forms of the state from proc(5)'s documentation for -// /proc/[pid]/status' "State" field. -func (s State) String() string { - switch s { - case Dead: - return "dead" - case DiskSleep: - return "disk sleep" - case Running: - return "running" - case Sleeping: - return "sleeping" - case Stopped: - return "stopped" - case TracingStop: - return "tracing stop" - case Zombie: - return "zombie" - default: - return fmt.Sprintf("unknown (%c)", s) - } -} - -// Stat_t represents the information from /proc/[pid]/stat, as -// described in proc(5) with names based on the /proc/[pid]/status -// fields. -type Stat_t struct { - // PID is the process ID. - PID uint - - // Name is the command run by the process. - Name string - - // State is the state of the process. - State State - - // StartTime is the number of clock ticks after system boot (since - // Linux 2.6). - StartTime uint64 -} - -// Stat returns a Stat_t instance for the specified process. -func Stat(pid int) (stat Stat_t, err error) { - bytes, err := ioutil.ReadFile(filepath.Join("/proc", strconv.Itoa(pid), "stat")) - if err != nil { - return stat, err - } - return parseStat(string(bytes)) -} - -// GetProcessStartTime is deprecated. Use Stat(pid) and -// Stat_t.StartTime instead. -func GetProcessStartTime(pid int) (string, error) { - stat, err := Stat(pid) - if err != nil { - return "", err - } - return fmt.Sprintf("%d", stat.StartTime), nil -} - -func parseStat(data string) (stat Stat_t, err error) { - // From proc(5), field 2 could contain space and is inside `(` and `)`. 
- // The following is an example: - // 89653 (gunicorn: maste) S 89630 89653 89653 0 -1 4194560 29689 28896 0 3 146 32 76 19 20 0 1 0 2971844 52965376 3920 18446744073709551615 1 1 0 0 0 0 0 16781312 137447943 0 0 0 17 1 0 0 0 0 0 0 0 0 0 0 0 0 0 - i := strings.LastIndex(data, ")") - if i <= 2 || i >= len(data)-1 { - return stat, fmt.Errorf("invalid stat data: %q", data) - } - - parts := strings.SplitN(data[:i], "(", 2) - if len(parts) != 2 { - return stat, fmt.Errorf("invalid stat data: %q", data) - } - - stat.Name = parts[1] - _, err = fmt.Sscanf(parts[0], "%d", &stat.PID) - if err != nil { - return stat, err - } - - // parts indexes should be offset by 3 from the field number given - // proc(5), because parts is zero-indexed and we've removed fields - // one (PID) and two (Name) in the paren-split. - parts = strings.Split(data[i+2:], " ") - var state int - fmt.Sscanf(parts[3-3], "%c", &state) - stat.State = State(state) - fmt.Sscanf(parts[22-3], "%d", &stat.StartTime) - return stat, nil -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/system/syscall_linux_32.go b/vendor/github.com/opencontainers/runc/libcontainer/system/syscall_linux_32.go deleted file mode 100644 index c5ca5d862..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/system/syscall_linux_32.go +++ /dev/null @@ -1,26 +0,0 @@ -// +build linux -// +build 386 arm - -package system - -import ( - "golang.org/x/sys/unix" -) - -// Setuid sets the uid of the calling thread to the specified uid. -func Setuid(uid int) (err error) { - _, _, e1 := unix.RawSyscall(unix.SYS_SETUID32, uintptr(uid), 0, 0) - if e1 != 0 { - err = e1 - } - return -} - -// Setgid sets the gid of the calling thread to the specified gid. -func Setgid(gid int) (err error) { - _, _, e1 := unix.RawSyscall(unix.SYS_SETGID32, uintptr(gid), 0, 0) - if e1 != 0 { - err = e1 - } - return -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/system/syscall_linux_64.go b/vendor/github.com/opencontainers/runc/libcontainer/system/syscall_linux_64.go deleted file mode 100644 index 11c3faafb..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/system/syscall_linux_64.go +++ /dev/null @@ -1,26 +0,0 @@ -// +build linux -// +build arm64 amd64 mips mipsle mips64 mips64le ppc ppc64 ppc64le s390x - -package system - -import ( - "golang.org/x/sys/unix" -) - -// Setuid sets the uid of the calling thread to the specified uid. -func Setuid(uid int) (err error) { - _, _, e1 := unix.RawSyscall(unix.SYS_SETUID, uintptr(uid), 0, 0) - if e1 != 0 { - err = e1 - } - return -} - -// Setgid sets the gid of the calling thread to the specified gid. 
-func Setgid(gid int) (err error) { - _, _, e1 := unix.RawSyscall(unix.SYS_SETGID, uintptr(gid), 0, 0) - if e1 != 0 { - err = e1 - } - return -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/system/sysconfig.go b/vendor/github.com/opencontainers/runc/libcontainer/system/sysconfig.go deleted file mode 100644 index b8434f105..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/system/sysconfig.go +++ /dev/null @@ -1,12 +0,0 @@ -// +build cgo,linux - -package system - -/* -#include <unistd.h> -*/ -import "C" - -func GetClockTicks() int { - return int(C.sysconf(C._SC_CLK_TCK)) -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/system/sysconfig_notcgo.go b/vendor/github.com/opencontainers/runc/libcontainer/system/sysconfig_notcgo.go deleted file mode 100644 index d93b5d5fd..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/system/sysconfig_notcgo.go +++ /dev/null @@ -1,15 +0,0 @@ -// +build !cgo windows - -package system - -func GetClockTicks() int { - // TODO figure out a better alternative for platforms where we're missing cgo - // - // TODO Windows. This could be implemented using Win32 QueryPerformanceFrequency(). - // https://msdn.microsoft.com/en-us/library/windows/desktop/ms644905(v=vs.85).aspx - // - // An example of its usage can be found here. - // https://msdn.microsoft.com/en-us/library/windows/desktop/dn553408(v=vs.85).aspx - - return 100 -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/system/unsupported.go b/vendor/github.com/opencontainers/runc/libcontainer/system/unsupported.go deleted file mode 100644 index b94be74a6..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/system/unsupported.go +++ /dev/null @@ -1,27 +0,0 @@ -// +build !linux - -package system - -import ( - "os" - - "github.com/opencontainers/runc/libcontainer/user" -) - -// RunningInUserNS is a stub for non-Linux systems -// Always returns false -func RunningInUserNS() bool { - return false -} - -// UIDMapInUserNS is a stub for non-Linux systems -// Always returns false -func UIDMapInUserNS(uidmap []user.IDMap) bool { - return false -} - -// GetParentNSeuid returns the euid within the parent user namespace -// Always returns os.Geteuid on non-linux -func GetParentNSeuid() int { - return os.Geteuid() -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/system/xattrs_linux.go b/vendor/github.com/opencontainers/runc/libcontainer/system/xattrs_linux.go deleted file mode 100644 index a6823fc99..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/system/xattrs_linux.go +++ /dev/null @@ -1,35 +0,0 @@ -package system - -import "golang.org/x/sys/unix" - -// Returns a []byte slice if the xattr is set and nil otherwise -// Requires path and its attribute as arguments -func Lgetxattr(path string, attr string) ([]byte, error) { - var sz int - // Start with a 128 length byte array - dest := make([]byte, 128) - sz, errno := unix.Lgetxattr(path, attr, dest) - - switch { - case errno == unix.ENODATA: - return nil, errno - case errno == unix.ENOTSUP: - return nil, errno - case errno == unix.ERANGE: - // 128 byte array might just not be good enough, - // A dummy buffer is used to get the real size - // of the xattrs on disk - sz, errno = unix.Lgetxattr(path, attr, []byte{}) - if errno != nil { - return nil, errno - } - dest = make([]byte, sz) - sz, errno = unix.Lgetxattr(path, attr, dest) - if errno != nil { - return nil, errno - } - case errno != nil: - return nil, errno - } - return dest[:sz], nil -} diff --git 
a/vendor/github.com/opencontainers/runc/libcontainer/user/MAINTAINERS b/vendor/github.com/opencontainers/runc/libcontainer/user/MAINTAINERS deleted file mode 100644 index edbe20066..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/user/MAINTAINERS +++ /dev/null @@ -1,2 +0,0 @@ -Tianon Gravi (@tianon) -Aleksa Sarai (@cyphar) diff --git a/vendor/github.com/opencontainers/runc/libcontainer/user/lookup.go b/vendor/github.com/opencontainers/runc/libcontainer/user/lookup.go deleted file mode 100644 index 6fd8dd0d4..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/user/lookup.go +++ /dev/null @@ -1,41 +0,0 @@ -package user - -import ( - "errors" -) - -var ( - // The current operating system does not provide the required data for user lookups. - ErrUnsupported = errors.New("user lookup: operating system does not provide passwd-formatted data") - // No matching entries found in file. - ErrNoPasswdEntries = errors.New("no matching entries in passwd file") - ErrNoGroupEntries = errors.New("no matching entries in group file") -) - -// LookupUser looks up a user by their username in /etc/passwd. If the user -// cannot be found (or there is no /etc/passwd file on the filesystem), then -// LookupUser returns an error. -func LookupUser(username string) (User, error) { - return lookupUser(username) -} - -// LookupUid looks up a user by their user id in /etc/passwd. If the user cannot -// be found (or there is no /etc/passwd file on the filesystem), then LookupId -// returns an error. -func LookupUid(uid int) (User, error) { - return lookupUid(uid) -} - -// LookupGroup looks up a group by its name in /etc/group. If the group cannot -// be found (or there is no /etc/group file on the filesystem), then LookupGroup -// returns an error. -func LookupGroup(groupname string) (Group, error) { - return lookupGroup(groupname) -} - -// LookupGid looks up a group by its group id in /etc/group. If the group cannot -// be found (or there is no /etc/group file on the filesystem), then LookupGid -// returns an error. -func LookupGid(gid int) (Group, error) { - return lookupGid(gid) -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/user/lookup_unix.go b/vendor/github.com/opencontainers/runc/libcontainer/user/lookup_unix.go deleted file mode 100644 index 92b5ae8de..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/user/lookup_unix.go +++ /dev/null @@ -1,144 +0,0 @@ -// +build darwin dragonfly freebsd linux netbsd openbsd solaris - -package user - -import ( - "io" - "os" - "strconv" - - "golang.org/x/sys/unix" -) - -// Unix-specific path to the passwd and group formatted files. -const ( - unixPasswdPath = "/etc/passwd" - unixGroupPath = "/etc/group" -) - -func lookupUser(username string) (User, error) { - return lookupUserFunc(func(u User) bool { - return u.Name == username - }) -} - -func lookupUid(uid int) (User, error) { - return lookupUserFunc(func(u User) bool { - return u.Uid == uid - }) -} - -func lookupUserFunc(filter func(u User) bool) (User, error) { - // Get operating system-specific passwd reader-closer. - passwd, err := GetPasswd() - if err != nil { - return User{}, err - } - defer passwd.Close() - - // Get the users. - users, err := ParsePasswdFilter(passwd, filter) - if err != nil { - return User{}, err - } - - // No user entries found. - if len(users) == 0 { - return User{}, ErrNoPasswdEntries - } - - // Assume the first entry is the "correct" one. 
- return users[0], nil -} - -func lookupGroup(groupname string) (Group, error) { - return lookupGroupFunc(func(g Group) bool { - return g.Name == groupname - }) -} - -func lookupGid(gid int) (Group, error) { - return lookupGroupFunc(func(g Group) bool { - return g.Gid == gid - }) -} - -func lookupGroupFunc(filter func(g Group) bool) (Group, error) { - // Get operating system-specific group reader-closer. - group, err := GetGroup() - if err != nil { - return Group{}, err - } - defer group.Close() - - // Get the users. - groups, err := ParseGroupFilter(group, filter) - if err != nil { - return Group{}, err - } - - // No user entries found. - if len(groups) == 0 { - return Group{}, ErrNoGroupEntries - } - - // Assume the first entry is the "correct" one. - return groups[0], nil -} - -func GetPasswdPath() (string, error) { - return unixPasswdPath, nil -} - -func GetPasswd() (io.ReadCloser, error) { - return os.Open(unixPasswdPath) -} - -func GetGroupPath() (string, error) { - return unixGroupPath, nil -} - -func GetGroup() (io.ReadCloser, error) { - return os.Open(unixGroupPath) -} - -// CurrentUser looks up the current user by their user id in /etc/passwd. If the -// user cannot be found (or there is no /etc/passwd file on the filesystem), -// then CurrentUser returns an error. -func CurrentUser() (User, error) { - return LookupUid(unix.Getuid()) -} - -// CurrentGroup looks up the current user's group by their primary group id's -// entry in /etc/passwd. If the group cannot be found (or there is no -// /etc/group file on the filesystem), then CurrentGroup returns an error. -func CurrentGroup() (Group, error) { - return LookupGid(unix.Getgid()) -} - -func currentUserSubIDs(fileName string) ([]SubID, error) { - u, err := CurrentUser() - if err != nil { - return nil, err - } - filter := func(entry SubID) bool { - return entry.Name == u.Name || entry.Name == strconv.Itoa(u.Uid) - } - return ParseSubIDFileFilter(fileName, filter) -} - -func CurrentUserSubUIDs() ([]SubID, error) { - return currentUserSubIDs("/etc/subuid") -} - -func CurrentUserSubGIDs() ([]SubID, error) { - return currentUserSubIDs("/etc/subgid") -} - -func CurrentProcessUIDMap() ([]IDMap, error) { - return ParseIDMapFile("/proc/self/uid_map") -} - -func CurrentProcessGIDMap() ([]IDMap, error) { - return ParseIDMapFile("/proc/self/gid_map") -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/user/lookup_windows.go b/vendor/github.com/opencontainers/runc/libcontainer/user/lookup_windows.go deleted file mode 100644 index 65cd40e92..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/user/lookup_windows.go +++ /dev/null @@ -1,40 +0,0 @@ -// +build windows - -package user - -import ( - "fmt" - "os/user" -) - -func lookupUser(username string) (User, error) { - u, err := user.Lookup(username) - if err != nil { - return User{}, err - } - return userFromOS(u) -} - -func lookupUid(uid int) (User, error) { - u, err := user.LookupId(fmt.Sprintf("%d", uid)) - if err != nil { - return User{}, err - } - return userFromOS(u) -} - -func lookupGroup(groupname string) (Group, error) { - g, err := user.LookupGroup(groupname) - if err != nil { - return Group{}, err - } - return groupFromOS(g) -} - -func lookupGid(gid int) (Group, error) { - g, err := user.LookupGroupId(fmt.Sprintf("%d", gid)) - if err != nil { - return Group{}, err - } - return groupFromOS(g) -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/user/user.go b/vendor/github.com/opencontainers/runc/libcontainer/user/user.go deleted file mode 
100644 index 7b912bbf8..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/user/user.go +++ /dev/null @@ -1,608 +0,0 @@ -package user - -import ( - "bufio" - "fmt" - "io" - "os" - "os/user" - "strconv" - "strings" -) - -const ( - minId = 0 - maxId = 1<<31 - 1 //for 32-bit systems compatibility -) - -var ( - ErrRange = fmt.Errorf("uids and gids must be in range %d-%d", minId, maxId) -) - -type User struct { - Name string - Pass string - Uid int - Gid int - Gecos string - Home string - Shell string -} - -// userFromOS converts an os/user.(*User) to local User -// -// (This does not include Pass, Shell or Gecos) -func userFromOS(u *user.User) (User, error) { - newUser := User{ - Name: u.Username, - Home: u.HomeDir, - } - id, err := strconv.Atoi(u.Uid) - if err != nil { - return newUser, err - } - newUser.Uid = id - - id, err = strconv.Atoi(u.Gid) - if err != nil { - return newUser, err - } - newUser.Gid = id - return newUser, nil -} - -type Group struct { - Name string - Pass string - Gid int - List []string -} - -// groupFromOS converts an os/user.(*Group) to local Group -// -// (This does not include Pass, Shell or Gecos) -func groupFromOS(g *user.Group) (Group, error) { - newGroup := Group{ - Name: g.Name, - } - - id, err := strconv.Atoi(g.Gid) - if err != nil { - return newGroup, err - } - newGroup.Gid = id - - return newGroup, nil -} - -// SubID represents an entry in /etc/sub{u,g}id -type SubID struct { - Name string - SubID int64 - Count int64 -} - -// IDMap represents an entry in /proc/PID/{u,g}id_map -type IDMap struct { - ID int64 - ParentID int64 - Count int64 -} - -func parseLine(line string, v ...interface{}) { - parseParts(strings.Split(line, ":"), v...) -} - -func parseParts(parts []string, v ...interface{}) { - if len(parts) == 0 { - return - } - - for i, p := range parts { - // Ignore cases where we don't have enough fields to populate the arguments. - // Some configuration files like to misbehave. - if len(v) <= i { - break - } - - // Use the type of the argument to figure out how to parse it, scanf() style. - // This is legit. - switch e := v[i].(type) { - case *string: - *e = p - case *int: - // "numbers", with conversion errors ignored because of some misbehaving configuration files. - *e, _ = strconv.Atoi(p) - case *int64: - *e, _ = strconv.ParseInt(p, 10, 64) - case *[]string: - // Comma-separated lists. - if p != "" { - *e = strings.Split(p, ",") - } else { - *e = []string{} - } - default: - // Someone goof'd when writing code using this function. Scream so they can hear us. - panic(fmt.Sprintf("parseLine only accepts {*string, *int, *int64, *[]string} as arguments! 
%#v is not a pointer!", e)) - } - } -} - -func ParsePasswdFile(path string) ([]User, error) { - passwd, err := os.Open(path) - if err != nil { - return nil, err - } - defer passwd.Close() - return ParsePasswd(passwd) -} - -func ParsePasswd(passwd io.Reader) ([]User, error) { - return ParsePasswdFilter(passwd, nil) -} - -func ParsePasswdFileFilter(path string, filter func(User) bool) ([]User, error) { - passwd, err := os.Open(path) - if err != nil { - return nil, err - } - defer passwd.Close() - return ParsePasswdFilter(passwd, filter) -} - -func ParsePasswdFilter(r io.Reader, filter func(User) bool) ([]User, error) { - if r == nil { - return nil, fmt.Errorf("nil source for passwd-formatted data") - } - - var ( - s = bufio.NewScanner(r) - out = []User{} - ) - - for s.Scan() { - if err := s.Err(); err != nil { - return nil, err - } - - line := strings.TrimSpace(s.Text()) - if line == "" { - continue - } - - // see: man 5 passwd - // name:password:UID:GID:GECOS:directory:shell - // Name:Pass:Uid:Gid:Gecos:Home:Shell - // root:x:0:0:root:/root:/bin/bash - // adm:x:3:4:adm:/var/adm:/bin/false - p := User{} - parseLine(line, &p.Name, &p.Pass, &p.Uid, &p.Gid, &p.Gecos, &p.Home, &p.Shell) - - if filter == nil || filter(p) { - out = append(out, p) - } - } - - return out, nil -} - -func ParseGroupFile(path string) ([]Group, error) { - group, err := os.Open(path) - if err != nil { - return nil, err - } - - defer group.Close() - return ParseGroup(group) -} - -func ParseGroup(group io.Reader) ([]Group, error) { - return ParseGroupFilter(group, nil) -} - -func ParseGroupFileFilter(path string, filter func(Group) bool) ([]Group, error) { - group, err := os.Open(path) - if err != nil { - return nil, err - } - defer group.Close() - return ParseGroupFilter(group, filter) -} - -func ParseGroupFilter(r io.Reader, filter func(Group) bool) ([]Group, error) { - if r == nil { - return nil, fmt.Errorf("nil source for group-formatted data") - } - - var ( - s = bufio.NewScanner(r) - out = []Group{} - ) - - for s.Scan() { - if err := s.Err(); err != nil { - return nil, err - } - - text := s.Text() - if text == "" { - continue - } - - // see: man 5 group - // group_name:password:GID:user_list - // Name:Pass:Gid:List - // root:x:0:root - // adm:x:4:root,adm,daemon - p := Group{} - parseLine(text, &p.Name, &p.Pass, &p.Gid, &p.List) - - if filter == nil || filter(p) { - out = append(out, p) - } - } - - return out, nil -} - -type ExecUser struct { - Uid int - Gid int - Sgids []int - Home string -} - -// GetExecUserPath is a wrapper for GetExecUser. It reads data from each of the -// given file paths and uses that data as the arguments to GetExecUser. If the -// files cannot be opened for any reason, the error is ignored and a nil -// io.Reader is passed instead. -func GetExecUserPath(userSpec string, defaults *ExecUser, passwdPath, groupPath string) (*ExecUser, error) { - var passwd, group io.Reader - - if passwdFile, err := os.Open(passwdPath); err == nil { - passwd = passwdFile - defer passwdFile.Close() - } - - if groupFile, err := os.Open(groupPath); err == nil { - group = groupFile - defer groupFile.Close() - } - - return GetExecUser(userSpec, defaults, passwd, group) -} - -// GetExecUser parses a user specification string (using the passwd and group -// readers as sources for /etc/passwd and /etc/group data, respectively). In -// the case of blank fields or missing data from the sources, the values in -// defaults is used. 
-// -// GetExecUser will return an error if a user or group literal could not be -// found in any entry in passwd and group respectively. -// -// Examples of valid user specifications are: -// * "" -// * "user" -// * "uid" -// * "user:group" -// * "uid:gid -// * "user:gid" -// * "uid:group" -// -// It should be noted that if you specify a numeric user or group id, they will -// not be evaluated as usernames (only the metadata will be filled). So attempting -// to parse a user with user.Name = "1337" will produce the user with a UID of -// 1337. -func GetExecUser(userSpec string, defaults *ExecUser, passwd, group io.Reader) (*ExecUser, error) { - if defaults == nil { - defaults = new(ExecUser) - } - - // Copy over defaults. - user := &ExecUser{ - Uid: defaults.Uid, - Gid: defaults.Gid, - Sgids: defaults.Sgids, - Home: defaults.Home, - } - - // Sgids slice *cannot* be nil. - if user.Sgids == nil { - user.Sgids = []int{} - } - - // Allow for userArg to have either "user" syntax, or optionally "user:group" syntax - var userArg, groupArg string - parseLine(userSpec, &userArg, &groupArg) - - // Convert userArg and groupArg to be numeric, so we don't have to execute - // Atoi *twice* for each iteration over lines. - uidArg, uidErr := strconv.Atoi(userArg) - gidArg, gidErr := strconv.Atoi(groupArg) - - // Find the matching user. - users, err := ParsePasswdFilter(passwd, func(u User) bool { - if userArg == "" { - // Default to current state of the user. - return u.Uid == user.Uid - } - - if uidErr == nil { - // If the userArg is numeric, always treat it as a UID. - return uidArg == u.Uid - } - - return u.Name == userArg - }) - - // If we can't find the user, we have to bail. - if err != nil && passwd != nil { - if userArg == "" { - userArg = strconv.Itoa(user.Uid) - } - return nil, fmt.Errorf("unable to find user %s: %v", userArg, err) - } - - var matchedUserName string - if len(users) > 0 { - // First match wins, even if there's more than one matching entry. - matchedUserName = users[0].Name - user.Uid = users[0].Uid - user.Gid = users[0].Gid - user.Home = users[0].Home - } else if userArg != "" { - // If we can't find a user with the given username, the only other valid - // option is if it's a numeric username with no associated entry in passwd. - - if uidErr != nil { - // Not numeric. - return nil, fmt.Errorf("unable to find user %s: %v", userArg, ErrNoPasswdEntries) - } - user.Uid = uidArg - - // Must be inside valid uid range. - if user.Uid < minId || user.Uid > maxId { - return nil, ErrRange - } - - // Okay, so it's numeric. We can just roll with this. - } - - // On to the groups. If we matched a username, we need to do this because of - // the supplementary group IDs. - if groupArg != "" || matchedUserName != "" { - groups, err := ParseGroupFilter(group, func(g Group) bool { - // If the group argument isn't explicit, we'll just search for it. - if groupArg == "" { - // Check if user is a member of this group. - for _, u := range g.List { - if u == matchedUserName { - return true - } - } - return false - } - - if gidErr == nil { - // If the groupArg is numeric, always treat it as a GID. - return gidArg == g.Gid - } - - return g.Name == groupArg - }) - if err != nil && group != nil { - return nil, fmt.Errorf("unable to find groups for spec %v: %v", matchedUserName, err) - } - - // Only start modifying user.Gid if it is in explicit form. - if groupArg != "" { - if len(groups) > 0 { - // First match wins, even if there's more than one matching entry. 
- user.Gid = groups[0].Gid - } else { - // If we can't find a group with the given name, the only other valid - // option is if it's a numeric group name with no associated entry in group. - - if gidErr != nil { - // Not numeric. - return nil, fmt.Errorf("unable to find group %s: %v", groupArg, ErrNoGroupEntries) - } - user.Gid = gidArg - - // Must be inside valid gid range. - if user.Gid < minId || user.Gid > maxId { - return nil, ErrRange - } - - // Okay, so it's numeric. We can just roll with this. - } - } else if len(groups) > 0 { - // Supplementary group ids only make sense if in the implicit form. - user.Sgids = make([]int, len(groups)) - for i, group := range groups { - user.Sgids[i] = group.Gid - } - } - } - - return user, nil -} - -// GetAdditionalGroups looks up a list of groups by name or group id -// against the given /etc/group formatted data. If a group name cannot -// be found, an error will be returned. If a group id cannot be found, -// or the given group data is nil, the id will be returned as-is -// provided it is in the legal range. -func GetAdditionalGroups(additionalGroups []string, group io.Reader) ([]int, error) { - var groups = []Group{} - if group != nil { - var err error - groups, err = ParseGroupFilter(group, func(g Group) bool { - for _, ag := range additionalGroups { - if g.Name == ag || strconv.Itoa(g.Gid) == ag { - return true - } - } - return false - }) - if err != nil { - return nil, fmt.Errorf("Unable to find additional groups %v: %v", additionalGroups, err) - } - } - - gidMap := make(map[int]struct{}) - for _, ag := range additionalGroups { - var found bool - for _, g := range groups { - // if we found a matched group either by name or gid, take the - // first matched as correct - if g.Name == ag || strconv.Itoa(g.Gid) == ag { - if _, ok := gidMap[g.Gid]; !ok { - gidMap[g.Gid] = struct{}{} - found = true - break - } - } - } - // we asked for a group but didn't find it. let's check to see - // if we wanted a numeric group - if !found { - gid, err := strconv.Atoi(ag) - if err != nil { - return nil, fmt.Errorf("Unable to find group %s", ag) - } - // Ensure gid is inside gid range. - if gid < minId || gid > maxId { - return nil, ErrRange - } - gidMap[gid] = struct{}{} - } - } - gids := []int{} - for gid := range gidMap { - gids = append(gids, gid) - } - return gids, nil -} - -// GetAdditionalGroupsPath is a wrapper around GetAdditionalGroups -// that opens the groupPath given and gives it as an argument to -// GetAdditionalGroups. 
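The user-spec resolution implemented above is easiest to follow with concrete data. The sketch below is not part of the deleted files; the passwd/group entries are made up for illustration, and the import path is the vendored package being removed here. It feeds in-memory data to GetExecUser and GetAdditionalGroups:

    // Illustrative sketch only: resolve a "user:group" spec and a set of
    // supplementary groups against in-memory passwd/group data.
    package main

    import (
        "fmt"
        "strings"

        "github.com/opencontainers/runc/libcontainer/user"
    )

    func main() {
        passwd := strings.NewReader(
            "root:x:0:0:root:/root:/bin/bash\n" +
                "www:x:33:33::/var/www:/bin/false\n")
        group := strings.NewReader(
            "root:x:0:root\n" +
                "adm:x:4:root,adm\n" +
                "www:x:33:\n")

        // "www:root" -> uid/home from the passwd entry, gid from the group entry.
        execUser, err := user.GetExecUser("www:root", nil, passwd, group)
        if err != nil {
            panic(err)
        }
        fmt.Println(execUser.Uid, execUser.Gid, execUser.Home) // 33 0 /var/www

        // Additional groups may be given as names or numeric gids.
        gids, err := user.GetAdditionalGroups([]string{"adm", "10"},
            strings.NewReader("adm:x:4:root,adm\n"))
        if err != nil {
            panic(err)
        }
        fmt.Println(gids) // e.g. [4 10]
    }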
-func GetAdditionalGroupsPath(additionalGroups []string, groupPath string) ([]int, error) { - var group io.Reader - - if groupFile, err := os.Open(groupPath); err == nil { - group = groupFile - defer groupFile.Close() - } - return GetAdditionalGroups(additionalGroups, group) -} - -func ParseSubIDFile(path string) ([]SubID, error) { - subid, err := os.Open(path) - if err != nil { - return nil, err - } - defer subid.Close() - return ParseSubID(subid) -} - -func ParseSubID(subid io.Reader) ([]SubID, error) { - return ParseSubIDFilter(subid, nil) -} - -func ParseSubIDFileFilter(path string, filter func(SubID) bool) ([]SubID, error) { - subid, err := os.Open(path) - if err != nil { - return nil, err - } - defer subid.Close() - return ParseSubIDFilter(subid, filter) -} - -func ParseSubIDFilter(r io.Reader, filter func(SubID) bool) ([]SubID, error) { - if r == nil { - return nil, fmt.Errorf("nil source for subid-formatted data") - } - - var ( - s = bufio.NewScanner(r) - out = []SubID{} - ) - - for s.Scan() { - if err := s.Err(); err != nil { - return nil, err - } - - line := strings.TrimSpace(s.Text()) - if line == "" { - continue - } - - // see: man 5 subuid - p := SubID{} - parseLine(line, &p.Name, &p.SubID, &p.Count) - - if filter == nil || filter(p) { - out = append(out, p) - } - } - - return out, nil -} - -func ParseIDMapFile(path string) ([]IDMap, error) { - r, err := os.Open(path) - if err != nil { - return nil, err - } - defer r.Close() - return ParseIDMap(r) -} - -func ParseIDMap(r io.Reader) ([]IDMap, error) { - return ParseIDMapFilter(r, nil) -} - -func ParseIDMapFileFilter(path string, filter func(IDMap) bool) ([]IDMap, error) { - r, err := os.Open(path) - if err != nil { - return nil, err - } - defer r.Close() - return ParseIDMapFilter(r, filter) -} - -func ParseIDMapFilter(r io.Reader, filter func(IDMap) bool) ([]IDMap, error) { - if r == nil { - return nil, fmt.Errorf("nil source for idmap-formatted data") - } - - var ( - s = bufio.NewScanner(r) - out = []IDMap{} - ) - - for s.Scan() { - if err := s.Err(); err != nil { - return nil, err - } - - line := strings.TrimSpace(s.Text()) - if line == "" { - continue - } - - // see: man 7 user_namespaces - p := IDMap{} - parseParts(strings.Fields(line), &p.ID, &p.ParentID, &p.Count) - - if filter == nil || filter(p) { - out = append(out, p) - } - } - - return out, nil -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/utils/cmsg.go b/vendor/github.com/opencontainers/runc/libcontainer/utils/cmsg.go deleted file mode 100644 index c8a9364d5..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/utils/cmsg.go +++ /dev/null @@ -1,93 +0,0 @@ -// +build linux - -package utils - -/* - * Copyright 2016, 2017 SUSE LLC - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -import ( - "fmt" - "os" - - "golang.org/x/sys/unix" -) - -// MaxSendfdLen is the maximum length of the name of a file descriptor being -// sent using SendFd. 
The name of the file handle returned by RecvFd will never -// be larger than this value. -const MaxNameLen = 4096 - -// oobSpace is the size of the oob slice required to store a single FD. Note -// that unix.UnixRights appears to make the assumption that fd is always int32, -// so sizeof(fd) = 4. -var oobSpace = unix.CmsgSpace(4) - -// RecvFd waits for a file descriptor to be sent over the given AF_UNIX -// socket. The file name of the remote file descriptor will be recreated -// locally (it is sent as non-auxiliary data in the same payload). -func RecvFd(socket *os.File) (*os.File, error) { - // For some reason, unix.Recvmsg uses the length rather than the capacity - // when passing the msg_controllen and other attributes to recvmsg. So we - // have to actually set the length. - name := make([]byte, MaxNameLen) - oob := make([]byte, oobSpace) - - sockfd := socket.Fd() - n, oobn, _, _, err := unix.Recvmsg(int(sockfd), name, oob, 0) - if err != nil { - return nil, err - } - - if n >= MaxNameLen || oobn != oobSpace { - return nil, fmt.Errorf("recvfd: incorrect number of bytes read (n=%d oobn=%d)", n, oobn) - } - - // Truncate. - name = name[:n] - oob = oob[:oobn] - - scms, err := unix.ParseSocketControlMessage(oob) - if err != nil { - return nil, err - } - if len(scms) != 1 { - return nil, fmt.Errorf("recvfd: number of SCMs is not 1: %d", len(scms)) - } - scm := scms[0] - - fds, err := unix.ParseUnixRights(&scm) - if err != nil { - return nil, err - } - if len(fds) != 1 { - return nil, fmt.Errorf("recvfd: number of fds is not 1: %d", len(fds)) - } - fd := uintptr(fds[0]) - - return os.NewFile(fd, string(name)), nil -} - -// SendFd sends a file descriptor over the given AF_UNIX socket. In -// addition, the file.Name() of the given file will also be sent as -// non-auxiliary data in the same payload (allowing to send contextual -// information for a file descriptor). -func SendFd(socket *os.File, name string, fd uintptr) error { - if len(name) >= MaxNameLen { - return fmt.Errorf("sendfd: filename too long: %s", name) - } - oob := unix.UnixRights(int(fd)) - return unix.Sendmsg(int(socket.Fd()), []byte(name), oob, nil, 0) -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/utils/utils.go b/vendor/github.com/opencontainers/runc/libcontainer/utils/utils.go deleted file mode 100644 index 40ccfaa1a..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/utils/utils.go +++ /dev/null @@ -1,112 +0,0 @@ -package utils - -import ( - "encoding/json" - "io" - "os" - "path/filepath" - "strings" - "unsafe" - - "golang.org/x/sys/unix" -) - -const ( - exitSignalOffset = 128 -) - -// ResolveRootfs ensures that the current working directory is -// not a symlink and returns the absolute path to the rootfs -func ResolveRootfs(uncleanRootfs string) (string, error) { - rootfs, err := filepath.Abs(uncleanRootfs) - if err != nil { - return "", err - } - return filepath.EvalSymlinks(rootfs) -} - -// ExitStatus returns the correct exit status for a process based on if it -// was signaled or exited cleanly -func ExitStatus(status unix.WaitStatus) int { - if status.Signaled() { - return exitSignalOffset + int(status.Signal()) - } - return status.ExitStatus() -} - -// WriteJSON writes the provided struct v to w using standard json marshaling -func WriteJSON(w io.Writer, v interface{}) error { - data, err := json.Marshal(v) - if err != nil { - return err - } - _, err = w.Write(data) - return err -} - -// CleanPath makes a path safe for use with filepath.Join. 
This is done by not -// only cleaning the path, but also (if the path is relative) adding a leading -// '/' and cleaning it (then removing the leading '/'). This ensures that a -// path resulting from prepending another path will always resolve to lexically -// be a subdirectory of the prefixed path. This is all done lexically, so paths -// that include symlinks won't be safe as a result of using CleanPath. -func CleanPath(path string) string { - // Deal with empty strings nicely. - if path == "" { - return "" - } - - // Ensure that all paths are cleaned (especially problematic ones like - // "/../../../../../" which can cause lots of issues). - path = filepath.Clean(path) - - // If the path isn't absolute, we need to do more processing to fix paths - // such as "../../../..//some/path". We also shouldn't convert absolute - // paths to relative ones. - if !filepath.IsAbs(path) { - path = filepath.Clean(string(os.PathSeparator) + path) - // This can't fail, as (by definition) all paths are relative to root. - path, _ = filepath.Rel(string(os.PathSeparator), path) - } - - // Clean the path again for good measure. - return filepath.Clean(path) -} - -// SearchLabels searches a list of key-value pairs for the provided key and -// returns the corresponding value. The pairs must be separated with '='. -func SearchLabels(labels []string, query string) string { - for _, l := range labels { - parts := strings.SplitN(l, "=", 2) - if len(parts) < 2 { - continue - } - if parts[0] == query { - return parts[1] - } - } - return "" -} - -// Annotations returns the bundle path and user defined annotations from the -// libcontainer state. We need to remove the bundle because that is a label -// added by libcontainer. -func Annotations(labels []string) (bundle string, userAnnotations map[string]string) { - userAnnotations = make(map[string]string) - for _, l := range labels { - parts := strings.SplitN(l, "=", 2) - if len(parts) < 2 { - continue - } - if parts[0] == "bundle" { - bundle = parts[1] - } else { - userAnnotations[parts[0]] = parts[1] - } - } - return -} - -func GetIntSize() int { - return int(unsafe.Sizeof(1)) -} diff --git a/vendor/github.com/opencontainers/runc/libcontainer/utils/utils_unix.go b/vendor/github.com/opencontainers/runc/libcontainer/utils/utils_unix.go deleted file mode 100644 index c96088988..000000000 --- a/vendor/github.com/opencontainers/runc/libcontainer/utils/utils_unix.go +++ /dev/null @@ -1,44 +0,0 @@ -// +build !windows - -package utils - -import ( - "io/ioutil" - "os" - "strconv" - - "golang.org/x/sys/unix" -) - -func CloseExecFrom(minFd int) error { - fdList, err := ioutil.ReadDir("/proc/self/fd") - if err != nil { - return err - } - for _, fi := range fdList { - fd, err := strconv.Atoi(fi.Name()) - if err != nil { - // ignore non-numeric file names - continue - } - - if fd < minFd { - // ignore descriptors lower than our specified minimum - continue - } - - // intentionally ignore errors from unix.CloseOnExec - unix.CloseOnExec(fd) - // the cases where this might fail are basically file descriptors that have already been closed (including and especially the one that was created when ioutil.ReadDir did the "opendir" syscall) - } - return nil -} - -// NewSockPair returns a new unix socket pair -func NewSockPair(name string) (parent *os.File, child *os.File, err error) { - fds, err := unix.Socketpair(unix.AF_LOCAL, unix.SOCK_STREAM|unix.SOCK_CLOEXEC, 0) - if err != nil { - return nil, nil, err - } - return os.NewFile(uintptr(fds[1]), name+"-p"), os.NewFile(uintptr(fds[0]), 
name+"-c"), nil -} diff --git a/vendor/github.com/opencontainers/runtime-spec/LICENSE b/vendor/github.com/opencontainers/runtime-spec/LICENSE deleted file mode 100644 index bdc403653..000000000 --- a/vendor/github.com/opencontainers/runtime-spec/LICENSE +++ /dev/null @@ -1,191 +0,0 @@ - - Apache License - Version 2.0, January 2004 - http://www.apache.org/licenses/ - - TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION - - 1. Definitions. - - "License" shall mean the terms and conditions for use, reproduction, - and distribution as defined by Sections 1 through 9 of this document. - - "Licensor" shall mean the copyright owner or entity authorized by - the copyright owner that is granting the License. - - "Legal Entity" shall mean the union of the acting entity and all - other entities that control, are controlled by, or are under common - control with that entity. For the purposes of this definition, - "control" means (i) the power, direct or indirect, to cause the - direction or management of such entity, whether by contract or - otherwise, or (ii) ownership of fifty percent (50%) or more of the - outstanding shares, or (iii) beneficial ownership of such entity. - - "You" (or "Your") shall mean an individual or Legal Entity - exercising permissions granted by this License. - - "Source" form shall mean the preferred form for making modifications, - including but not limited to software source code, documentation - source, and configuration files. - - "Object" form shall mean any form resulting from mechanical - transformation or translation of a Source form, including but - not limited to compiled object code, generated documentation, - and conversions to other media types. - - "Work" shall mean the work of authorship, whether in Source or - Object form, made available under the License, as indicated by a - copyright notice that is included in or attached to the work - (an example is provided in the Appendix below). - - "Derivative Works" shall mean any work, whether in Source or Object - form, that is based on (or derived from) the Work and for which the - editorial revisions, annotations, elaborations, or other modifications - represent, as a whole, an original work of authorship. For the purposes - of this License, Derivative Works shall not include works that remain - separable from, or merely link (or bind by name) to the interfaces of, - the Work and Derivative Works thereof. - - "Contribution" shall mean any work of authorship, including - the original version of the Work and any modifications or additions - to that Work or Derivative Works thereof, that is intentionally - submitted to Licensor for inclusion in the Work by the copyright owner - or by an individual or Legal Entity authorized to submit on behalf of - the copyright owner. For the purposes of this definition, "submitted" - means any form of electronic, verbal, or written communication sent - to the Licensor or its representatives, including but not limited to - communication on electronic mailing lists, source code control systems, - and issue tracking systems that are managed by, or on behalf of, the - Licensor for the purpose of discussing and improving the Work, but - excluding communication that is conspicuously marked or otherwise - designated in writing by the copyright owner as "Not a Contribution." - - "Contributor" shall mean Licensor and any individual or Legal Entity - on behalf of whom a Contribution has been received by Licensor and - subsequently incorporated within the Work. - - 2. 
Grant of Copyright License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - copyright license to reproduce, prepare Derivative Works of, - publicly display, publicly perform, sublicense, and distribute the - Work and such Derivative Works in Source or Object form. - - 3. Grant of Patent License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - (except as stated in this section) patent license to make, have made, - use, offer to sell, sell, import, and otherwise transfer the Work, - where such license applies only to those patent claims licensable - by such Contributor that are necessarily infringed by their - Contribution(s) alone or by combination of their Contribution(s) - with the Work to which such Contribution(s) was submitted. If You - institute patent litigation against any entity (including a - cross-claim or counterclaim in a lawsuit) alleging that the Work - or a Contribution incorporated within the Work constitutes direct - or contributory patent infringement, then any patent licenses - granted to You under this License for that Work shall terminate - as of the date such litigation is filed. - - 4. Redistribution. You may reproduce and distribute copies of the - Work or Derivative Works thereof in any medium, with or without - modifications, and in Source or Object form, provided that You - meet the following conditions: - - (a) You must give any other recipients of the Work or - Derivative Works a copy of this License; and - - (b) You must cause any modified files to carry prominent notices - stating that You changed the files; and - - (c) You must retain, in the Source form of any Derivative Works - that You distribute, all copyright, patent, trademark, and - attribution notices from the Source form of the Work, - excluding those notices that do not pertain to any part of - the Derivative Works; and - - (d) If the Work includes a "NOTICE" text file as part of its - distribution, then any Derivative Works that You distribute must - include a readable copy of the attribution notices contained - within such NOTICE file, excluding those notices that do not - pertain to any part of the Derivative Works, in at least one - of the following places: within a NOTICE text file distributed - as part of the Derivative Works; within the Source form or - documentation, if provided along with the Derivative Works; or, - within a display generated by the Derivative Works, if and - wherever such third-party notices normally appear. The contents - of the NOTICE file are for informational purposes only and - do not modify the License. You may add Your own attribution - notices within Derivative Works that You distribute, alongside - or as an addendum to the NOTICE text from the Work, provided - that such additional attribution notices cannot be construed - as modifying the License. - - You may add Your own copyright statement to Your modifications and - may provide additional or different license terms and conditions - for use, reproduction, or distribution of Your modifications, or - for any such Derivative Works as a whole, provided Your use, - reproduction, and distribution of the Work otherwise complies with - the conditions stated in this License. - - 5. Submission of Contributions. 
Unless You explicitly state otherwise, - any Contribution intentionally submitted for inclusion in the Work - by You to the Licensor shall be under the terms and conditions of - this License, without any additional terms or conditions. - Notwithstanding the above, nothing herein shall supersede or modify - the terms of any separate license agreement you may have executed - with Licensor regarding such Contributions. - - 6. Trademarks. This License does not grant permission to use the trade - names, trademarks, service marks, or product names of the Licensor, - except as required for reasonable and customary use in describing the - origin of the Work and reproducing the content of the NOTICE file. - - 7. Disclaimer of Warranty. Unless required by applicable law or - agreed to in writing, Licensor provides the Work (and each - Contributor provides its Contributions) on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or - implied, including, without limitation, any warranties or conditions - of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A - PARTICULAR PURPOSE. You are solely responsible for determining the - appropriateness of using or redistributing the Work and assume any - risks associated with Your exercise of permissions under this License. - - 8. Limitation of Liability. In no event and under no legal theory, - whether in tort (including negligence), contract, or otherwise, - unless required by applicable law (such as deliberate and grossly - negligent acts) or agreed to in writing, shall any Contributor be - liable to You for damages, including any direct, indirect, special, - incidental, or consequential damages of any character arising as a - result of this License or out of the use or inability to use the - Work (including but not limited to damages for loss of goodwill, - work stoppage, computer failure or malfunction, or any and all - other commercial damages or losses), even if such Contributor - has been advised of the possibility of such damages. - - 9. Accepting Warranty or Additional Liability. While redistributing - the Work or Derivative Works thereof, You may choose to offer, - and charge a fee for, acceptance of support, warranty, indemnity, - or other liability obligations and/or rights consistent with this - License. However, in accepting such obligations, You may act only - on Your own behalf and on Your sole responsibility, not on behalf - of any other Contributor, and only if You agree to indemnify, - defend, and hold each Contributor harmless for any liability - incurred by, or claims asserted against, such Contributor by reason - of your accepting any such warranty or additional liability. - - END OF TERMS AND CONDITIONS - - Copyright 2015 The Linux Foundation. - - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
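One note on the utils helpers removed above: the SendFd/RecvFd pair in cmsg.go passes a file descriptor as SCM_RIGHTS ancillary data and carries the file's name as the ordinary payload, over a socketpair created by NewSockPair. The following sketch is not part of the deleted files; it reproduces the same send/receive flow directly with golang.org/x/sys/unix so the protocol is visible end to end:

    // Illustrative sketch only: pass an open file descriptor (plus its
    // name) across a Unix socketpair, mirroring NewSockPair/SendFd/RecvFd.
    package main

    import (
        "fmt"
        "os"

        "golang.org/x/sys/unix"
    )

    func main() {
        fds, err := unix.Socketpair(unix.AF_LOCAL, unix.SOCK_STREAM|unix.SOCK_CLOEXEC, 0)
        if err != nil {
            panic(err)
        }
        parent := os.NewFile(uintptr(fds[1]), "sync-p")
        child := os.NewFile(uintptr(fds[0]), "sync-c")

        // Sender side (what SendFd does): fd in ancillary data, name as payload.
        oob := unix.UnixRights(int(os.Stdin.Fd()))
        if err := unix.Sendmsg(int(parent.Fd()), []byte("stdin"), oob, nil, 0); err != nil {
            panic(err)
        }

        // Receiver side (what RecvFd does): read payload and ancillary data back.
        name := make([]byte, 4096)
        oobBuf := make([]byte, unix.CmsgSpace(4))
        n, oobn, _, _, err := unix.Recvmsg(int(child.Fd()), name, oobBuf, 0)
        if err != nil {
            panic(err)
        }
        scms, err := unix.ParseSocketControlMessage(oobBuf[:oobn])
        if err != nil || len(scms) != 1 {
            panic("expected exactly one control message")
        }
        received, err := unix.ParseUnixRights(&scms[0])
        if err != nil {
            panic(err)
        }
        f := os.NewFile(uintptr(received[0]), string(name[:n]))
        fmt.Println("received a duplicate of", f.Name())
    }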
diff --git a/vendor/github.com/opencontainers/runtime-spec/specs-go/config.go b/vendor/github.com/opencontainers/runtime-spec/specs-go/config.go deleted file mode 100644 index f3f37d42d..000000000 --- a/vendor/github.com/opencontainers/runtime-spec/specs-go/config.go +++ /dev/null @@ -1,570 +0,0 @@ -package specs - -import "os" - -// Spec is the base configuration for the container. -type Spec struct { - // Version of the Open Container Runtime Specification with which the bundle complies. - Version string `json:"ociVersion"` - // Process configures the container process. - Process *Process `json:"process,omitempty"` - // Root configures the container's root filesystem. - Root *Root `json:"root,omitempty"` - // Hostname configures the container's hostname. - Hostname string `json:"hostname,omitempty"` - // Mounts configures additional mounts (on top of Root). - Mounts []Mount `json:"mounts,omitempty"` - // Hooks configures callbacks for container lifecycle events. - Hooks *Hooks `json:"hooks,omitempty" platform:"linux,solaris"` - // Annotations contains arbitrary metadata for the container. - Annotations map[string]string `json:"annotations,omitempty"` - - // Linux is platform-specific configuration for Linux based containers. - Linux *Linux `json:"linux,omitempty" platform:"linux"` - // Solaris is platform-specific configuration for Solaris based containers. - Solaris *Solaris `json:"solaris,omitempty" platform:"solaris"` - // Windows is platform-specific configuration for Windows based containers. - Windows *Windows `json:"windows,omitempty" platform:"windows"` -} - -// Process contains information to start a specific application inside the container. -type Process struct { - // Terminal creates an interactive terminal for the container. - Terminal bool `json:"terminal,omitempty"` - // ConsoleSize specifies the size of the console. - ConsoleSize *Box `json:"consoleSize,omitempty"` - // User specifies user information for the process. - User User `json:"user"` - // Args specifies the binary and arguments for the application to execute. - Args []string `json:"args"` - // Env populates the process environment for the process. - Env []string `json:"env,omitempty"` - // Cwd is the current working directory for the process and must be - // relative to the container's root. - Cwd string `json:"cwd"` - // Capabilities are Linux capabilities that are kept for the process. - Capabilities *LinuxCapabilities `json:"capabilities,omitempty" platform:"linux"` - // Rlimits specifies rlimit options to apply to the process. - Rlimits []POSIXRlimit `json:"rlimits,omitempty" platform:"linux,solaris"` - // NoNewPrivileges controls whether additional privileges could be gained by processes in the container. - NoNewPrivileges bool `json:"noNewPrivileges,omitempty" platform:"linux"` - // ApparmorProfile specifies the apparmor profile for the container. - ApparmorProfile string `json:"apparmorProfile,omitempty" platform:"linux"` - // Specify an oom_score_adj for the container. - OOMScoreAdj *int `json:"oomScoreAdj,omitempty" platform:"linux"` - // SelinuxLabel specifies the selinux context that the container process is run as. - SelinuxLabel string `json:"selinuxLabel,omitempty" platform:"linux"` -} - -// LinuxCapabilities specifies the whitelist of capabilities that are kept for a process. -// http://man7.org/linux/man-pages/man7/capabilities.7.html -type LinuxCapabilities struct { - // Bounding is the set of capabilities checked by the kernel. 
- Bounding []string `json:"bounding,omitempty" platform:"linux"` - // Effective is the set of capabilities checked by the kernel. - Effective []string `json:"effective,omitempty" platform:"linux"` - // Inheritable is the capabilities preserved across execve. - Inheritable []string `json:"inheritable,omitempty" platform:"linux"` - // Permitted is the limiting superset for effective capabilities. - Permitted []string `json:"permitted,omitempty" platform:"linux"` - // Ambient is the ambient set of capabilities that are kept. - Ambient []string `json:"ambient,omitempty" platform:"linux"` -} - -// Box specifies dimensions of a rectangle. Used for specifying the size of a console. -type Box struct { - // Height is the vertical dimension of a box. - Height uint `json:"height"` - // Width is the horizontal dimension of a box. - Width uint `json:"width"` -} - -// User specifies specific user (and group) information for the container process. -type User struct { - // UID is the user id. - UID uint32 `json:"uid" platform:"linux,solaris"` - // GID is the group id. - GID uint32 `json:"gid" platform:"linux,solaris"` - // AdditionalGids are additional group ids set for the container's process. - AdditionalGids []uint32 `json:"additionalGids,omitempty" platform:"linux,solaris"` - // Username is the user name. - Username string `json:"username,omitempty" platform:"windows"` -} - -// Root contains information about the container's root filesystem on the host. -type Root struct { - // Path is the absolute path to the container's root filesystem. - Path string `json:"path"` - // Readonly makes the root filesystem for the container readonly before the process is executed. - Readonly bool `json:"readonly,omitempty"` -} - -// Mount specifies a mount for a container. -type Mount struct { - // Destination is the absolute path where the mount will be placed in the container. - Destination string `json:"destination"` - // Type specifies the mount kind. - Type string `json:"type,omitempty" platform:"linux,solaris"` - // Source specifies the source path of the mount. - Source string `json:"source,omitempty"` - // Options are fstab style mount options. - Options []string `json:"options,omitempty"` -} - -// Hook specifies a command that is run at a particular event in the lifecycle of a container -type Hook struct { - Path string `json:"path"` - Args []string `json:"args,omitempty"` - Env []string `json:"env,omitempty"` - Timeout *int `json:"timeout,omitempty"` -} - -// Hooks for container setup and teardown -type Hooks struct { - // Prestart is a list of hooks to be run before the container process is executed. - Prestart []Hook `json:"prestart,omitempty"` - // Poststart is a list of hooks to be run after the container process is started. - Poststart []Hook `json:"poststart,omitempty"` - // Poststop is a list of hooks to be run after the container process exits. - Poststop []Hook `json:"poststop,omitempty"` -} - -// Linux contains platform-specific configuration for Linux based containers. -type Linux struct { - // UIDMapping specifies user mappings for supporting user namespaces. - UIDMappings []LinuxIDMapping `json:"uidMappings,omitempty"` - // GIDMapping specifies group mappings for supporting user namespaces. 
- GIDMappings []LinuxIDMapping `json:"gidMappings,omitempty"` - // Sysctl are a set of key value pairs that are set for the container on start - Sysctl map[string]string `json:"sysctl,omitempty"` - // Resources contain cgroup information for handling resource constraints - // for the container - Resources *LinuxResources `json:"resources,omitempty"` - // CgroupsPath specifies the path to cgroups that are created and/or joined by the container. - // The path is expected to be relative to the cgroups mountpoint. - // If resources are specified, the cgroups at CgroupsPath will be updated based on resources. - CgroupsPath string `json:"cgroupsPath,omitempty"` - // Namespaces contains the namespaces that are created and/or joined by the container - Namespaces []LinuxNamespace `json:"namespaces,omitempty"` - // Devices are a list of device nodes that are created for the container - Devices []LinuxDevice `json:"devices,omitempty"` - // Seccomp specifies the seccomp security settings for the container. - Seccomp *LinuxSeccomp `json:"seccomp,omitempty"` - // RootfsPropagation is the rootfs mount propagation mode for the container. - RootfsPropagation string `json:"rootfsPropagation,omitempty"` - // MaskedPaths masks over the provided paths inside the container. - MaskedPaths []string `json:"maskedPaths,omitempty"` - // ReadonlyPaths sets the provided paths as RO inside the container. - ReadonlyPaths []string `json:"readonlyPaths,omitempty"` - // MountLabel specifies the selinux context for the mounts in the container. - MountLabel string `json:"mountLabel,omitempty"` - // IntelRdt contains Intel Resource Director Technology (RDT) information - // for handling resource constraints (e.g., L3 cache) for the container - IntelRdt *LinuxIntelRdt `json:"intelRdt,omitempty"` -} - -// LinuxNamespace is the configuration for a Linux namespace -type LinuxNamespace struct { - // Type is the type of namespace - Type LinuxNamespaceType `json:"type"` - // Path is a path to an existing namespace persisted on disk that can be joined - // and is of the same type - Path string `json:"path,omitempty"` -} - -// LinuxNamespaceType is one of the Linux namespaces -type LinuxNamespaceType string - -const ( - // PIDNamespace for isolating process IDs - PIDNamespace LinuxNamespaceType = "pid" - // NetworkNamespace for isolating network devices, stacks, ports, etc - NetworkNamespace = "network" - // MountNamespace for isolating mount points - MountNamespace = "mount" - // IPCNamespace for isolating System V IPC, POSIX message queues - IPCNamespace = "ipc" - // UTSNamespace for isolating hostname and NIS domain name - UTSNamespace = "uts" - // UserNamespace for isolating user and group IDs - UserNamespace = "user" - // CgroupNamespace for isolating cgroup hierarchies - CgroupNamespace = "cgroup" -) - -// LinuxIDMapping specifies UID/GID mappings -type LinuxIDMapping struct { - // HostID is the starting UID/GID on the host to be mapped to 'ContainerID' - HostID uint32 `json:"hostID"` - // ContainerID is the starting UID/GID in the container - ContainerID uint32 `json:"containerID"` - // Size is the number of IDs to be mapped - Size uint32 `json:"size"` -} - -// POSIXRlimit type and restrictions -type POSIXRlimit struct { - // Type of the rlimit to set - Type string `json:"type"` - // Hard is the hard limit for the specified type - Hard uint64 `json:"hard"` - // Soft is the soft limit for the specified type - Soft uint64 `json:"soft"` -} - -// LinuxHugepageLimit structure corresponds to limiting kernel hugepages -type 
LinuxHugepageLimit struct { - // Pagesize is the hugepage size - Pagesize string `json:"pageSize"` - // Limit is the limit of "hugepagesize" hugetlb usage - Limit uint64 `json:"limit"` -} - -// LinuxInterfacePriority for network interfaces -type LinuxInterfacePriority struct { - // Name is the name of the network interface - Name string `json:"name"` - // Priority for the interface - Priority uint32 `json:"priority"` -} - -// linuxBlockIODevice holds major:minor format supported in blkio cgroup -type linuxBlockIODevice struct { - // Major is the device's major number. - Major int64 `json:"major"` - // Minor is the device's minor number. - Minor int64 `json:"minor"` -} - -// LinuxWeightDevice struct holds a `major:minor weight` pair for weightDevice -type LinuxWeightDevice struct { - linuxBlockIODevice - // Weight is the bandwidth rate for the device. - Weight *uint16 `json:"weight,omitempty"` - // LeafWeight is the bandwidth rate for the device while competing with the cgroup's child cgroups, CFQ scheduler only - LeafWeight *uint16 `json:"leafWeight,omitempty"` -} - -// LinuxThrottleDevice struct holds a `major:minor rate_per_second` pair -type LinuxThrottleDevice struct { - linuxBlockIODevice - // Rate is the IO rate limit per cgroup per device - Rate uint64 `json:"rate"` -} - -// LinuxBlockIO for Linux cgroup 'blkio' resource management -type LinuxBlockIO struct { - // Specifies per cgroup weight - Weight *uint16 `json:"weight,omitempty"` - // Specifies tasks' weight in the given cgroup while competing with the cgroup's child cgroups, CFQ scheduler only - LeafWeight *uint16 `json:"leafWeight,omitempty"` - // Weight per cgroup per device, can override BlkioWeight - WeightDevice []LinuxWeightDevice `json:"weightDevice,omitempty"` - // IO read rate limit per cgroup per device, bytes per second - ThrottleReadBpsDevice []LinuxThrottleDevice `json:"throttleReadBpsDevice,omitempty"` - // IO write rate limit per cgroup per device, bytes per second - ThrottleWriteBpsDevice []LinuxThrottleDevice `json:"throttleWriteBpsDevice,omitempty"` - // IO read rate limit per cgroup per device, IO per second - ThrottleReadIOPSDevice []LinuxThrottleDevice `json:"throttleReadIOPSDevice,omitempty"` - // IO write rate limit per cgroup per device, IO per second - ThrottleWriteIOPSDevice []LinuxThrottleDevice `json:"throttleWriteIOPSDevice,omitempty"` -} - -// LinuxMemory for Linux cgroup 'memory' resource management -type LinuxMemory struct { - // Memory limit (in bytes). - Limit *int64 `json:"limit,omitempty"` - // Memory reservation or soft_limit (in bytes). - Reservation *int64 `json:"reservation,omitempty"` - // Total memory limit (memory + swap). - Swap *int64 `json:"swap,omitempty"` - // Kernel memory limit (in bytes). - Kernel *int64 `json:"kernel,omitempty"` - // Kernel memory limit for tcp (in bytes) - KernelTCP *int64 `json:"kernelTCP,omitempty"` - // How aggressive the kernel will swap memory pages. - Swappiness *uint64 `json:"swappiness,omitempty"` - // DisableOOMKiller disables the OOM killer for out of memory conditions - DisableOOMKiller *bool `json:"disableOOMKiller,omitempty"` -} - -// LinuxCPU for Linux cgroup 'cpu' resource management -type LinuxCPU struct { - // CPU shares (relative weight (ratio) vs. other cgroups with cpu shares). - Shares *uint64 `json:"shares,omitempty"` - // CPU hardcap limit (in usecs). Allowed cpu time in a given period. - Quota *int64 `json:"quota,omitempty"` - // CPU period to be used for hardcapping (in usecs). 
- Period *uint64 `json:"period,omitempty"` - // How much time realtime scheduling may use (in usecs). - RealtimeRuntime *int64 `json:"realtimeRuntime,omitempty"` - // CPU period to be used for realtime scheduling (in usecs). - RealtimePeriod *uint64 `json:"realtimePeriod,omitempty"` - // CPUs to use within the cpuset. Default is to use any CPU available. - Cpus string `json:"cpus,omitempty"` - // List of memory nodes in the cpuset. Default is to use any available memory node. - Mems string `json:"mems,omitempty"` -} - -// LinuxPids for Linux cgroup 'pids' resource management (Linux 4.3) -type LinuxPids struct { - // Maximum number of PIDs. Default is "no limit". - Limit int64 `json:"limit"` -} - -// LinuxNetwork identification and priority configuration -type LinuxNetwork struct { - // Set class identifier for container's network packets - ClassID *uint32 `json:"classID,omitempty"` - // Set priority of network traffic for container - Priorities []LinuxInterfacePriority `json:"priorities,omitempty"` -} - -// LinuxResources has container runtime resource constraints -type LinuxResources struct { - // Devices configures the device whitelist. - Devices []LinuxDeviceCgroup `json:"devices,omitempty"` - // Memory restriction configuration - Memory *LinuxMemory `json:"memory,omitempty"` - // CPU resource restriction configuration - CPU *LinuxCPU `json:"cpu,omitempty"` - // Task resource restriction configuration. - Pids *LinuxPids `json:"pids,omitempty"` - // BlockIO restriction configuration - BlockIO *LinuxBlockIO `json:"blockIO,omitempty"` - // Hugetlb limit (in bytes) - HugepageLimits []LinuxHugepageLimit `json:"hugepageLimits,omitempty"` - // Network restriction configuration - Network *LinuxNetwork `json:"network,omitempty"` -} - -// LinuxDevice represents the mknod information for a Linux special device file -type LinuxDevice struct { - // Path to the device. - Path string `json:"path"` - // Device type, block, char, etc. - Type string `json:"type"` - // Major is the device's major number. - Major int64 `json:"major"` - // Minor is the device's minor number. - Minor int64 `json:"minor"` - // FileMode permission bits for the device. - FileMode *os.FileMode `json:"fileMode,omitempty"` - // UID of the device. - UID *uint32 `json:"uid,omitempty"` - // Gid of the device. - GID *uint32 `json:"gid,omitempty"` -} - -// LinuxDeviceCgroup represents a device rule for the whitelist controller -type LinuxDeviceCgroup struct { - // Allow or deny - Allow bool `json:"allow"` - // Device type, block, char, etc. - Type string `json:"type,omitempty"` - // Major is the device's major number. - Major *int64 `json:"major,omitempty"` - // Minor is the device's minor number. - Minor *int64 `json:"minor,omitempty"` - // Cgroup access permissions format, rwm. - Access string `json:"access,omitempty"` -} - -// Solaris contains platform-specific configuration for Solaris application containers. -type Solaris struct { - // SMF FMRI which should go "online" before we start the container process. - Milestone string `json:"milestone,omitempty"` - // Maximum set of privileges any process in this container can obtain. - LimitPriv string `json:"limitpriv,omitempty"` - // The maximum amount of shared memory allowed for this container. - MaxShmMemory string `json:"maxShmMemory,omitempty"` - // Specification for automatic creation of network resources for this container. - Anet []SolarisAnet `json:"anet,omitempty"` - // Set limit on the amount of CPU time that can be used by container. 
- CappedCPU *SolarisCappedCPU `json:"cappedCPU,omitempty"` - // The physical and swap caps on the memory that can be used by this container. - CappedMemory *SolarisCappedMemory `json:"cappedMemory,omitempty"` -} - -// SolarisCappedCPU allows users to set limit on the amount of CPU time that can be used by container. -type SolarisCappedCPU struct { - Ncpus string `json:"ncpus,omitempty"` -} - -// SolarisCappedMemory allows users to set the physical and swap caps on the memory that can be used by this container. -type SolarisCappedMemory struct { - Physical string `json:"physical,omitempty"` - Swap string `json:"swap,omitempty"` -} - -// SolarisAnet provides the specification for automatic creation of network resources for this container. -type SolarisAnet struct { - // Specify a name for the automatically created VNIC datalink. - Linkname string `json:"linkname,omitempty"` - // Specify the link over which the VNIC will be created. - Lowerlink string `json:"lowerLink,omitempty"` - // The set of IP addresses that the container can use. - Allowedaddr string `json:"allowedAddress,omitempty"` - // Specifies whether allowedAddress limitation is to be applied to the VNIC. - Configallowedaddr string `json:"configureAllowedAddress,omitempty"` - // The value of the optional default router. - Defrouter string `json:"defrouter,omitempty"` - // Enable one or more types of link protection. - Linkprotection string `json:"linkProtection,omitempty"` - // Set the VNIC's macAddress - Macaddress string `json:"macAddress,omitempty"` -} - -// Windows defines the runtime configuration for Windows based containers, including Hyper-V containers. -type Windows struct { - // LayerFolders contains a list of absolute paths to directories containing image layers. - LayerFolders []string `json:"layerFolders"` - // Resources contains information for handling resource constraints for the container. - Resources *WindowsResources `json:"resources,omitempty"` - // CredentialSpec contains a JSON object describing a group Managed Service Account (gMSA) specification. - CredentialSpec interface{} `json:"credentialSpec,omitempty"` - // Servicing indicates if the container is being started in a mode to apply a Windows Update servicing operation. - Servicing bool `json:"servicing,omitempty"` - // IgnoreFlushesDuringBoot indicates if the container is being started in a mode where disk writes are not flushed during its boot process. - IgnoreFlushesDuringBoot bool `json:"ignoreFlushesDuringBoot,omitempty"` - // HyperV contains information for running a container with Hyper-V isolation. - HyperV *WindowsHyperV `json:"hyperv,omitempty"` - // Network restriction configuration. - Network *WindowsNetwork `json:"network,omitempty"` -} - -// WindowsResources has container runtime resource constraints for containers running on Windows. -type WindowsResources struct { - // Memory restriction configuration. - Memory *WindowsMemoryResources `json:"memory,omitempty"` - // CPU resource restriction configuration. - CPU *WindowsCPUResources `json:"cpu,omitempty"` - // Storage restriction configuration. - Storage *WindowsStorageResources `json:"storage,omitempty"` -} - -// WindowsMemoryResources contains memory resource management settings. -type WindowsMemoryResources struct { - // Memory limit in bytes. - Limit *uint64 `json:"limit,omitempty"` -} - -// WindowsCPUResources contains CPU resource management settings. -type WindowsCPUResources struct { - // Number of CPUs available to the container. 
- Count *uint64 `json:"count,omitempty"` - // CPU shares (relative weight to other containers with cpu shares). - Shares *uint16 `json:"shares,omitempty"` - // Specifies the portion of processor cycles that this container can use as a percentage times 100. - Maximum *uint16 `json:"maximum,omitempty"` -} - -// WindowsStorageResources contains storage resource management settings. -type WindowsStorageResources struct { - // Specifies maximum Iops for the system drive. - Iops *uint64 `json:"iops,omitempty"` - // Specifies maximum bytes per second for the system drive. - Bps *uint64 `json:"bps,omitempty"` - // Sandbox size specifies the minimum size of the system drive in bytes. - SandboxSize *uint64 `json:"sandboxSize,omitempty"` -} - -// WindowsNetwork contains network settings for Windows containers. -type WindowsNetwork struct { - // List of HNS endpoints that the container should connect to. - EndpointList []string `json:"endpointList,omitempty"` - // Specifies if unqualified DNS name resolution is allowed. - AllowUnqualifiedDNSQuery bool `json:"allowUnqualifiedDNSQuery,omitempty"` - // Comma separated list of DNS suffixes to use for name resolution. - DNSSearchList []string `json:"DNSSearchList,omitempty"` - // Name (ID) of the container that we will share with the network stack. - NetworkSharedContainerName string `json:"networkSharedContainerName,omitempty"` -} - -// WindowsHyperV contains information for configuring a container to run with Hyper-V isolation. -type WindowsHyperV struct { - // UtilityVMPath is an optional path to the image used for the Utility VM. - UtilityVMPath string `json:"utilityVMPath,omitempty"` -} - -// LinuxSeccomp represents syscall restrictions -type LinuxSeccomp struct { - DefaultAction LinuxSeccompAction `json:"defaultAction"` - Architectures []Arch `json:"architectures,omitempty"` - Syscalls []LinuxSyscall `json:"syscalls,omitempty"` -} - -// Arch used for additional architectures -type Arch string - -// Additional architectures permitted to be used for system calls -// By default only the native architecture of the kernel is permitted -const ( - ArchX86 Arch = "SCMP_ARCH_X86" - ArchX86_64 Arch = "SCMP_ARCH_X86_64" - ArchX32 Arch = "SCMP_ARCH_X32" - ArchARM Arch = "SCMP_ARCH_ARM" - ArchAARCH64 Arch = "SCMP_ARCH_AARCH64" - ArchMIPS Arch = "SCMP_ARCH_MIPS" - ArchMIPS64 Arch = "SCMP_ARCH_MIPS64" - ArchMIPS64N32 Arch = "SCMP_ARCH_MIPS64N32" - ArchMIPSEL Arch = "SCMP_ARCH_MIPSEL" - ArchMIPSEL64 Arch = "SCMP_ARCH_MIPSEL64" - ArchMIPSEL64N32 Arch = "SCMP_ARCH_MIPSEL64N32" - ArchPPC Arch = "SCMP_ARCH_PPC" - ArchPPC64 Arch = "SCMP_ARCH_PPC64" - ArchPPC64LE Arch = "SCMP_ARCH_PPC64LE" - ArchS390 Arch = "SCMP_ARCH_S390" - ArchS390X Arch = "SCMP_ARCH_S390X" - ArchPARISC Arch = "SCMP_ARCH_PARISC" - ArchPARISC64 Arch = "SCMP_ARCH_PARISC64" -) - -// LinuxSeccompAction taken upon Seccomp rule match -type LinuxSeccompAction string - -// Define actions for Seccomp rules -const ( - ActKill LinuxSeccompAction = "SCMP_ACT_KILL" - ActTrap LinuxSeccompAction = "SCMP_ACT_TRAP" - ActErrno LinuxSeccompAction = "SCMP_ACT_ERRNO" - ActTrace LinuxSeccompAction = "SCMP_ACT_TRACE" - ActAllow LinuxSeccompAction = "SCMP_ACT_ALLOW" -) - -// LinuxSeccompOperator used to match syscall arguments in Seccomp -type LinuxSeccompOperator string - -// Define operators for syscall arguments in Seccomp -const ( - OpNotEqual LinuxSeccompOperator = "SCMP_CMP_NE" - OpLessThan LinuxSeccompOperator = "SCMP_CMP_LT" - OpLessEqual LinuxSeccompOperator = "SCMP_CMP_LE" - OpEqualTo LinuxSeccompOperator = 
"SCMP_CMP_EQ" - OpGreaterEqual LinuxSeccompOperator = "SCMP_CMP_GE" - OpGreaterThan LinuxSeccompOperator = "SCMP_CMP_GT" - OpMaskedEqual LinuxSeccompOperator = "SCMP_CMP_MASKED_EQ" -) - -// LinuxSeccompArg used for matching specific syscall arguments in Seccomp -type LinuxSeccompArg struct { - Index uint `json:"index"` - Value uint64 `json:"value"` - ValueTwo uint64 `json:"valueTwo,omitempty"` - Op LinuxSeccompOperator `json:"op"` -} - -// LinuxSyscall is used to match a syscall in Seccomp -type LinuxSyscall struct { - Names []string `json:"names"` - Action LinuxSeccompAction `json:"action"` - Args []LinuxSeccompArg `json:"args,omitempty"` -} - -// LinuxIntelRdt has container runtime resource constraints -// for Intel RDT/CAT which introduced in Linux 4.10 kernel -type LinuxIntelRdt struct { - // The schema for L3 cache id and capacity bitmask (CBM) - // Format: "L3:=;=;..." - L3CacheSchema string `json:"l3CacheSchema,omitempty"` -} diff --git a/vendor/github.com/opencontainers/runtime-spec/specs-go/state.go b/vendor/github.com/opencontainers/runtime-spec/specs-go/state.go deleted file mode 100644 index 89dce34be..000000000 --- a/vendor/github.com/opencontainers/runtime-spec/specs-go/state.go +++ /dev/null @@ -1,17 +0,0 @@ -package specs - -// State holds information about the runtime state of the container. -type State struct { - // Version is the version of the specification that is supported. - Version string `json:"ociVersion"` - // ID is the container ID - ID string `json:"id"` - // Status is the runtime status of the container. - Status string `json:"status"` - // Pid is the process ID for the container process. - Pid int `json:"pid,omitempty"` - // Bundle is the path to the container's bundle directory. - Bundle string `json:"bundle"` - // Annotations are key values associated with the container. - Annotations map[string]string `json:"annotations,omitempty"` -} diff --git a/vendor/github.com/opencontainers/runtime-spec/specs-go/version.go b/vendor/github.com/opencontainers/runtime-spec/specs-go/version.go deleted file mode 100644 index 926ce6650..000000000 --- a/vendor/github.com/opencontainers/runtime-spec/specs-go/version.go +++ /dev/null @@ -1,18 +0,0 @@ -package specs - -import "fmt" - -const ( - // VersionMajor is for an API incompatible changes - VersionMajor = 1 - // VersionMinor is for functionality in a backwards-compatible manner - VersionMinor = 0 - // VersionPatch is for backwards-compatible bug fixes - VersionPatch = 0 - - // VersionDev indicates development branch. Releases will be empty string. - VersionDev = "" -) - -// Version is the specification version that the package types support. -var Version = fmt.Sprintf("%d.%d.%d%s", VersionMajor, VersionMinor, VersionPatch, VersionDev) diff --git a/vendor/github.com/opencontainers/selinux/LICENSE b/vendor/github.com/opencontainers/selinux/LICENSE deleted file mode 100644 index 8dada3eda..000000000 --- a/vendor/github.com/opencontainers/selinux/LICENSE +++ /dev/null @@ -1,201 +0,0 @@ - Apache License - Version 2.0, January 2004 - http://www.apache.org/licenses/ - - TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION - - 1. Definitions. - - "License" shall mean the terms and conditions for use, reproduction, - and distribution as defined by Sections 1 through 9 of this document. - - "Licensor" shall mean the copyright owner or entity authorized by - the copyright owner that is granting the License. 
- - "Legal Entity" shall mean the union of the acting entity and all - other entities that control, are controlled by, or are under common - control with that entity. For the purposes of this definition, - "control" means (i) the power, direct or indirect, to cause the - direction or management of such entity, whether by contract or - otherwise, or (ii) ownership of fifty percent (50%) or more of the - outstanding shares, or (iii) beneficial ownership of such entity. - - "You" (or "Your") shall mean an individual or Legal Entity - exercising permissions granted by this License. - - "Source" form shall mean the preferred form for making modifications, - including but not limited to software source code, documentation - source, and configuration files. - - "Object" form shall mean any form resulting from mechanical - transformation or translation of a Source form, including but - not limited to compiled object code, generated documentation, - and conversions to other media types. - - "Work" shall mean the work of authorship, whether in Source or - Object form, made available under the License, as indicated by a - copyright notice that is included in or attached to the work - (an example is provided in the Appendix below). - - "Derivative Works" shall mean any work, whether in Source or Object - form, that is based on (or derived from) the Work and for which the - editorial revisions, annotations, elaborations, or other modifications - represent, as a whole, an original work of authorship. For the purposes - of this License, Derivative Works shall not include works that remain - separable from, or merely link (or bind by name) to the interfaces of, - the Work and Derivative Works thereof. - - "Contribution" shall mean any work of authorship, including - the original version of the Work and any modifications or additions - to that Work or Derivative Works thereof, that is intentionally - submitted to Licensor for inclusion in the Work by the copyright owner - or by an individual or Legal Entity authorized to submit on behalf of - the copyright owner. For the purposes of this definition, "submitted" - means any form of electronic, verbal, or written communication sent - to the Licensor or its representatives, including but not limited to - communication on electronic mailing lists, source code control systems, - and issue tracking systems that are managed by, or on behalf of, the - Licensor for the purpose of discussing and improving the Work, but - excluding communication that is conspicuously marked or otherwise - designated in writing by the copyright owner as "Not a Contribution." - - "Contributor" shall mean Licensor and any individual or Legal Entity - on behalf of whom a Contribution has been received by Licensor and - subsequently incorporated within the Work. - - 2. Grant of Copyright License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - copyright license to reproduce, prepare Derivative Works of, - publicly display, publicly perform, sublicense, and distribute the - Work and such Derivative Works in Source or Object form. - - 3. Grant of Patent License. 
Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - (except as stated in this section) patent license to make, have made, - use, offer to sell, sell, import, and otherwise transfer the Work, - where such license applies only to those patent claims licensable - by such Contributor that are necessarily infringed by their - Contribution(s) alone or by combination of their Contribution(s) - with the Work to which such Contribution(s) was submitted. If You - institute patent litigation against any entity (including a - cross-claim or counterclaim in a lawsuit) alleging that the Work - or a Contribution incorporated within the Work constitutes direct - or contributory patent infringement, then any patent licenses - granted to You under this License for that Work shall terminate - as of the date such litigation is filed. - - 4. Redistribution. You may reproduce and distribute copies of the - Work or Derivative Works thereof in any medium, with or without - modifications, and in Source or Object form, provided that You - meet the following conditions: - - (a) You must give any other recipients of the Work or - Derivative Works a copy of this License; and - - (b) You must cause any modified files to carry prominent notices - stating that You changed the files; and - - (c) You must retain, in the Source form of any Derivative Works - that You distribute, all copyright, patent, trademark, and - attribution notices from the Source form of the Work, - excluding those notices that do not pertain to any part of - the Derivative Works; and - - (d) If the Work includes a "NOTICE" text file as part of its - distribution, then any Derivative Works that You distribute must - include a readable copy of the attribution notices contained - within such NOTICE file, excluding those notices that do not - pertain to any part of the Derivative Works, in at least one - of the following places: within a NOTICE text file distributed - as part of the Derivative Works; within the Source form or - documentation, if provided along with the Derivative Works; or, - within a display generated by the Derivative Works, if and - wherever such third-party notices normally appear. The contents - of the NOTICE file are for informational purposes only and - do not modify the License. You may add Your own attribution - notices within Derivative Works that You distribute, alongside - or as an addendum to the NOTICE text from the Work, provided - that such additional attribution notices cannot be construed - as modifying the License. - - You may add Your own copyright statement to Your modifications and - may provide additional or different license terms and conditions - for use, reproduction, or distribution of Your modifications, or - for any such Derivative Works as a whole, provided Your use, - reproduction, and distribution of the Work otherwise complies with - the conditions stated in this License. - - 5. Submission of Contributions. Unless You explicitly state otherwise, - any Contribution intentionally submitted for inclusion in the Work - by You to the Licensor shall be under the terms and conditions of - this License, without any additional terms or conditions. - Notwithstanding the above, nothing herein shall supersede or modify - the terms of any separate license agreement you may have executed - with Licensor regarding such Contributions. - - 6. Trademarks. 
This License does not grant permission to use the trade - names, trademarks, service marks, or product names of the Licensor, - except as required for reasonable and customary use in describing the - origin of the Work and reproducing the content of the NOTICE file. - - 7. Disclaimer of Warranty. Unless required by applicable law or - agreed to in writing, Licensor provides the Work (and each - Contributor provides its Contributions) on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or - implied, including, without limitation, any warranties or conditions - of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A - PARTICULAR PURPOSE. You are solely responsible for determining the - appropriateness of using or redistributing the Work and assume any - risks associated with Your exercise of permissions under this License. - - 8. Limitation of Liability. In no event and under no legal theory, - whether in tort (including negligence), contract, or otherwise, - unless required by applicable law (such as deliberate and grossly - negligent acts) or agreed to in writing, shall any Contributor be - liable to You for damages, including any direct, indirect, special, - incidental, or consequential damages of any character arising as a - result of this License or out of the use or inability to use the - Work (including but not limited to damages for loss of goodwill, - work stoppage, computer failure or malfunction, or any and all - other commercial damages or losses), even if such Contributor - has been advised of the possibility of such damages. - - 9. Accepting Warranty or Additional Liability. While redistributing - the Work or Derivative Works thereof, You may choose to offer, - and charge a fee for, acceptance of support, warranty, indemnity, - or other liability obligations and/or rights consistent with this - License. However, in accepting such obligations, You may act only - on Your own behalf and on Your sole responsibility, not on behalf - of any other Contributor, and only if You agree to indemnify, - defend, and hold each Contributor harmless for any liability - incurred by, or claims asserted against, such Contributor by reason - of your accepting any such warranty or additional liability. - - END OF TERMS AND CONDITIONS - - APPENDIX: How to apply the Apache License to your work. - - To apply the Apache License to your work, attach the following - boilerplate notice, with the fields enclosed by brackets "{}" - replaced with your own identifying information. (Don't include - the brackets!) The text should be enclosed in the appropriate - comment syntax for the file format. We also recommend that a - file or class name and description of purpose be included on the - same "printed page" as the copyright notice for easier - identification within third-party archives. - - Copyright {yyyy} {name of copyright owner} - - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
diff --git a/vendor/github.com/opencontainers/selinux/go-selinux/label/label.go b/vendor/github.com/opencontainers/selinux/go-selinux/label/label.go deleted file mode 100644 index e178568fd..000000000 --- a/vendor/github.com/opencontainers/selinux/go-selinux/label/label.go +++ /dev/null @@ -1,109 +0,0 @@ -// +build !selinux !linux - -package label - -// InitLabels returns the process label and file labels to be used within -// the container. A list of options can be passed into this function to alter -// the labels. -func InitLabels(options []string) (string, string, error) { - return "", "", nil -} - -func ROMountLabel() string { - return "" -} - -func GenLabels(options string) (string, string, error) { - return "", "", nil -} - -func FormatMountLabel(src string, mountLabel string) string { - return src -} - -func SetProcessLabel(processLabel string) error { - return nil -} - -func ProcessLabel() (string, error) { - return "", nil -} - -func SetSocketLabel(processLabel string) error { - return nil -} - -func SocketLabel() (string, error) { - return "", nil -} - -func SetKeyLabel(processLabel string) error { - return nil -} - -func KeyLabel() (string, error) { - return "", nil -} - -func FileLabel(path string) (string, error) { - return "", nil -} - -func SetFileLabel(path string, fileLabel string) error { - return nil -} - -func SetFileCreateLabel(fileLabel string) error { - return nil -} - -func Relabel(path string, fileLabel string, shared bool) error { - return nil -} - -func PidLabel(pid int) (string, error) { - return "", nil -} - -func Init() { -} - -// ClearLabels clears all reserved labels -func ClearLabels() { - return -} - -func ReserveLabel(label string) error { - return nil -} - -func ReleaseLabel(label string) error { - return nil -} - -// DupSecOpt takes a process label and returns security options that -// can be used to set duplicate labels on future container processes -func DupSecOpt(src string) ([]string, error) { - return nil, nil -} - -// DisableSecOpt returns a security opt that can disable labeling -// support for future container processes -func DisableSecOpt() []string { - return nil -} - -// Validate checks that the label does not include unexpected options -func Validate(label string) error { - return nil -} - -// RelabelNeeded checks whether the user requested a relabel -func RelabelNeeded(label string) bool { - return false -} - -// IsShared checks that the label includes a "shared" mark -func IsShared(label string) bool { - return false -} diff --git a/vendor/github.com/opencontainers/selinux/go-selinux/label/label_selinux.go b/vendor/github.com/opencontainers/selinux/go-selinux/label/label_selinux.go deleted file mode 100644 index 1eb9a6bf2..000000000 --- a/vendor/github.com/opencontainers/selinux/go-selinux/label/label_selinux.go +++ /dev/null @@ -1,287 +0,0 @@ -// +build selinux,linux - -package label - -import ( - "fmt" - "os" - "os/user" - "strings" - - "github.com/opencontainers/selinux/go-selinux" -) - -// Valid Label Options -var validOptions = map[string]bool{ - "disable": true, - "type": true, - "user": true, - "role": true, - "level": true, -} - -var ErrIncompatibleLabel = fmt.Errorf("Bad SELinux option z and Z can not be used together") - -// InitLabels returns the process label and file labels to be used within -// the container. A list of options can be passed into this function to alter -// the labels. The labels returned will include a random MCS String, that is -// guaranteed to be unique. 
-func InitLabels(options []string) (plabel string, mlabel string, Err error) { - if !selinux.GetEnabled() { - return "", "", nil - } - processLabel, mountLabel := selinux.ContainerLabels() - if processLabel != "" { - defer func() { - if Err != nil { - ReleaseLabel(mountLabel) - } - }() - pcon, err := selinux.NewContext(processLabel) - if err != nil { - return "", "", err - } - - mcon, err := selinux.NewContext(mountLabel) - if err != nil { - return "", "", err - } - for _, opt := range options { - if opt == "disable" { - return "", mountLabel, nil - } - if i := strings.Index(opt, ":"); i == -1 { - return "", "", fmt.Errorf("Bad label option %q, valid options 'disable' or \n'user, role, level, type' followed by ':' and a value", opt) - } - con := strings.SplitN(opt, ":", 2) - if !validOptions[con[0]] { - return "", "", fmt.Errorf("Bad label option %q, valid options 'disable, user, role, level, type'", con[0]) - - } - pcon[con[0]] = con[1] - if con[0] == "level" || con[0] == "user" { - mcon[con[0]] = con[1] - } - } - _ = ReleaseLabel(processLabel) - processLabel = pcon.Get() - mountLabel = mcon.Get() - _ = ReserveLabel(processLabel) - } - return processLabel, mountLabel, nil -} - -func ROMountLabel() string { - return selinux.ROFileLabel() -} - -// DEPRECATED: The GenLabels function is only to be used during the transition to the official API. -func GenLabels(options string) (string, string, error) { - return InitLabels(strings.Fields(options)) -} - -// FormatMountLabel returns a string to be used by the mount command. -// The format of this string will be used to alter the labeling of the mountpoint. -// The string returned is suitable to be used as the options field of the mount command. -// If you need to have additional mount point options, you can pass them in as -// the first parameter. Second parameter is the label that you wish to apply -// to all content in the mount point. -func FormatMountLabel(src, mountLabel string) string { - if mountLabel != "" { - switch src { - case "": - src = fmt.Sprintf("context=%q", mountLabel) - default: - src = fmt.Sprintf("%s,context=%q", src, mountLabel) - } - } - return src -} - -// SetProcessLabel takes a process label and tells the kernel to assign the -// label to the next program executed by the current process. -func SetProcessLabel(processLabel string) error { - return selinux.SetExecLabel(processLabel) -} - -// SetSocketLabel takes a process label and tells the kernel to assign the -// label to the next socket that gets created -func SetSocketLabel(processLabel string) error { - return selinux.SetSocketLabel(processLabel) -} - -// SocketLabel retrieves the current default socket label setting -func SocketLabel() (string, error) { - return selinux.SocketLabel() -} - -// SetKeyLabel takes a process label and tells the kernel to assign the -// label to the next kernel keyring that gets created -func SetKeyLabel(processLabel string) error { - return selinux.SetKeyLabel(processLabel) -} - -// KeyLabel retrieves the current default kernel keyring label setting -func KeyLabel() (string, error) { - return selinux.KeyLabel() -} - -// ProcessLabel returns the process label that the kernel will assign -// to the next program executed by the current process. If "" is returned -// this indicates that the default labeling will happen for the process. 
-func ProcessLabel() (string, error) { - return selinux.ExecLabel() -} - -// FileLabel returns the label for specified path -func FileLabel(path string) (string, error) { - return selinux.FileLabel(path) -} - -// SetFileLabel modifies the "path" label to the specified file label -func SetFileLabel(path string, fileLabel string) error { - if selinux.GetEnabled() && fileLabel != "" { - return selinux.SetFileLabel(path, fileLabel) - } - return nil -} - -// SetFileCreateLabel tells the kernel the label for all files to be created -func SetFileCreateLabel(fileLabel string) error { - if selinux.GetEnabled() { - return selinux.SetFSCreateLabel(fileLabel) - } - return nil -} - -// Relabel changes the label of path to the filelabel string. -// It changes the MCS label to s0 if shared is true. -// This will allow all containers to share the content. -func Relabel(path string, fileLabel string, shared bool) error { - if !selinux.GetEnabled() { - return nil - } - - if fileLabel == "" { - return nil - } - - exclude_paths := map[string]bool{ - "/": true, - "/bin": true, - "/boot": true, - "/dev": true, - "/etc": true, - "/etc/passwd": true, - "/etc/pki": true, - "/etc/shadow": true, - "/home": true, - "/lib": true, - "/lib64": true, - "/media": true, - "/opt": true, - "/proc": true, - "/root": true, - "/run": true, - "/sbin": true, - "/srv": true, - "/sys": true, - "/tmp": true, - "/usr": true, - "/var": true, - "/var/lib": true, - "/var/log": true, - } - - if home := os.Getenv("HOME"); home != "" { - exclude_paths[home] = true - } - - if sudoUser := os.Getenv("SUDO_USER"); sudoUser != "" { - if usr, err := user.Lookup(sudoUser); err == nil { - exclude_paths[usr.HomeDir] = true - } - } - - if path != "/" { - path = strings.TrimSuffix(path, "/") - } - if exclude_paths[path] { - return fmt.Errorf("SELinux relabeling of %s is not allowed", path) - } - - if shared { - c, err := selinux.NewContext(fileLabel) - if err != nil { - return err - } - - c["level"] = "s0" - fileLabel = c.Get() - } - if err := selinux.Chcon(path, fileLabel, true); err != nil { - return err - } - return nil -} - -// PidLabel will return the label of the process running with the specified pid -func PidLabel(pid int) (string, error) { - return selinux.PidLabel(pid) -} - -// Init initialises the labeling system -func Init() { - selinux.GetEnabled() -} - -// ClearLabels will clear all reserved labels -func ClearLabels() { - selinux.ClearLabels() -} - -// ReserveLabel will record the fact that the MCS label has already been used. -// This will prevent InitLabels from using the MCS label in a newly created -// container -func ReserveLabel(label string) error { - selinux.ReserveLabel(label) - return nil -} - -// ReleaseLabel will remove the reservation of the MCS label. 
-// This will allow InitLabels to use the MCS label in a newly created -// containers -func ReleaseLabel(label string) error { - selinux.ReleaseLabel(label) - return nil -} - -// DupSecOpt takes a process label and returns security options that -// can be used to set duplicate labels on future container processes -func DupSecOpt(src string) ([]string, error) { - return selinux.DupSecOpt(src) -} - -// DisableSecOpt returns a security opt that can disable labeling -// support for future container processes -func DisableSecOpt() []string { - return selinux.DisableSecOpt() -} - -// Validate checks that the label does not include unexpected options -func Validate(label string) error { - if strings.Contains(label, "z") && strings.Contains(label, "Z") { - return ErrIncompatibleLabel - } - return nil -} - -// RelabelNeeded checks whether the user requested a relabel -func RelabelNeeded(label string) bool { - return strings.Contains(label, "z") || strings.Contains(label, "Z") -} - -// IsShared checks that the label includes a "shared" mark -func IsShared(label string) bool { - return strings.Contains(label, "z") -} diff --git a/vendor/github.com/opencontainers/selinux/go-selinux/selinux_linux.go b/vendor/github.com/opencontainers/selinux/go-selinux/selinux_linux.go deleted file mode 100644 index d7786c33c..000000000 --- a/vendor/github.com/opencontainers/selinux/go-selinux/selinux_linux.go +++ /dev/null @@ -1,780 +0,0 @@ -// +build selinux,linux - -package selinux - -import ( - "bufio" - "bytes" - "crypto/rand" - "encoding/binary" - "errors" - "fmt" - "io" - "io/ioutil" - "os" - "path/filepath" - "regexp" - "strconv" - "strings" - "sync" - "syscall" -) - -const ( - // Enforcing constant indicate SELinux is in enforcing mode - Enforcing = 1 - // Permissive constant to indicate SELinux is in permissive mode - Permissive = 0 - // Disabled constant to indicate SELinux is disabled - Disabled = -1 - - selinuxDir = "/etc/selinux/" - selinuxConfig = selinuxDir + "config" - selinuxfsMount = "/sys/fs/selinux" - selinuxTypeTag = "SELINUXTYPE" - selinuxTag = "SELINUX" - xattrNameSelinux = "security.selinux" - stRdOnly = 0x01 - selinuxfsMagic = 0xf97cff8c -) - -type selinuxState struct { - enabledSet bool - enabled bool - selinuxfsSet bool - selinuxfs string - mcsList map[string]bool - sync.Mutex -} - -var ( - // ErrMCSAlreadyExists is returned when trying to allocate a duplicate MCS. - ErrMCSAlreadyExists = errors.New("MCS label already exists") - // ErrEmptyPath is returned when an empty path has been specified. - ErrEmptyPath = errors.New("empty path") - // InvalidLabel is returned when an invalid label is specified. 
- InvalidLabel = errors.New("Invalid Label") - - assignRegex = regexp.MustCompile(`^([^=]+)=(.*)$`) - roFileLabel string - state = selinuxState{ - mcsList: make(map[string]bool), - } -) - -// Context is a representation of the SELinux label broken into 4 parts -type Context map[string]string - -func (s *selinuxState) setEnable(enabled bool) bool { - s.Lock() - defer s.Unlock() - s.enabledSet = true - s.enabled = enabled - return s.enabled -} - -func (s *selinuxState) getEnabled() bool { - s.Lock() - enabled := s.enabled - enabledSet := s.enabledSet - s.Unlock() - if enabledSet { - return enabled - } - - enabled = false - if fs := getSelinuxMountPoint(); fs != "" { - if con, _ := CurrentLabel(); con != "kernel" { - enabled = true - } - } - return s.setEnable(enabled) -} - -// SetDisabled disables selinux support for the package -func SetDisabled() { - state.setEnable(false) -} - -func (s *selinuxState) setSELinuxfs(selinuxfs string) string { - s.Lock() - defer s.Unlock() - s.selinuxfsSet = true - s.selinuxfs = selinuxfs - return s.selinuxfs -} - -func verifySELinuxfsMount(mnt string) bool { - var buf syscall.Statfs_t - for { - err := syscall.Statfs(mnt, &buf) - if err == nil { - break - } - if err == syscall.EAGAIN { - continue - } - return false - } - if uint32(buf.Type) != uint32(selinuxfsMagic) { - return false - } - if (buf.Flags & stRdOnly) != 0 { - return false - } - - return true -} - -func findSELinuxfs() string { - // fast path: check the default mount first - if verifySELinuxfsMount(selinuxfsMount) { - return selinuxfsMount - } - - // check if selinuxfs is available before going the slow path - fs, err := ioutil.ReadFile("/proc/filesystems") - if err != nil { - return "" - } - if !bytes.Contains(fs, []byte("\tselinuxfs\n")) { - return "" - } - - // slow path: try to find among the mounts - f, err := os.Open("/proc/self/mountinfo") - if err != nil { - return "" - } - defer f.Close() - - scanner := bufio.NewScanner(f) - for { - mnt := findSELinuxfsMount(scanner) - if mnt == "" { // error or not found - return "" - } - if verifySELinuxfsMount(mnt) { - return mnt - } - } -} - -// findSELinuxfsMount returns a next selinuxfs mount point found, -// if there is one, or an empty string in case of EOF or error. -func findSELinuxfsMount(s *bufio.Scanner) string { - for s.Scan() { - txt := s.Text() - // The first field after - is fs type. - // Safe as spaces in mountpoints are encoded as \040 - if !strings.Contains(txt, " - selinuxfs ") { - continue - } - const mPos = 5 // mount point is 5th field - fields := strings.SplitN(txt, " ", mPos+1) - if len(fields) < mPos+1 { - continue - } - return fields[mPos-1] - } - - return "" -} - -func (s *selinuxState) getSELinuxfs() string { - s.Lock() - selinuxfs := s.selinuxfs - selinuxfsSet := s.selinuxfsSet - s.Unlock() - if selinuxfsSet { - return selinuxfs - } - - return s.setSELinuxfs(findSELinuxfs()) -} - -// getSelinuxMountPoint returns the path to the mountpoint of an selinuxfs -// filesystem or an empty string if no mountpoint is found. Selinuxfs is -// a proc-like pseudo-filesystem that exposes the selinux policy API to -// processes. The existence of an selinuxfs mount is used to determine -// whether selinux is currently enabled or not. -func getSelinuxMountPoint() string { - return state.getSELinuxfs() -} - -// GetEnabled returns whether selinux is currently enabled. 
-func GetEnabled() bool { - return state.getEnabled() -} - -func readConfig(target string) string { - var ( - val, key string - bufin *bufio.Reader - ) - - in, err := os.Open(selinuxConfig) - if err != nil { - return "" - } - defer in.Close() - - bufin = bufio.NewReader(in) - - for done := false; !done; { - var line string - if line, err = bufin.ReadString('\n'); err != nil { - if err != io.EOF { - return "" - } - done = true - } - line = strings.TrimSpace(line) - if len(line) == 0 { - // Skip blank lines - continue - } - if line[0] == ';' || line[0] == '#' { - // Skip comments - continue - } - if groups := assignRegex.FindStringSubmatch(line); groups != nil { - key, val = strings.TrimSpace(groups[1]), strings.TrimSpace(groups[2]) - if key == target { - return strings.Trim(val, "\"") - } - } - } - return "" -} - -func getSELinuxPolicyRoot() string { - return filepath.Join(selinuxDir, readConfig(selinuxTypeTag)) -} - -func readCon(fpath string) (string, error) { - if fpath == "" { - return "", ErrEmptyPath - } - - in, err := os.Open(fpath) - if err != nil { - return "", err - } - defer in.Close() - - var retval string - if _, err := fmt.Fscanf(in, "%s", &retval); err != nil { - return "", err - } - return strings.Trim(retval, "\x00"), nil -} - -// SetFileLabel sets the SELinux label for this path or returns an error. -func SetFileLabel(fpath string, label string) error { - if fpath == "" { - return ErrEmptyPath - } - return lsetxattr(fpath, xattrNameSelinux, []byte(label), 0) -} - -// FileLabel returns the SELinux label for this path or returns an error. -func FileLabel(fpath string) (string, error) { - if fpath == "" { - return "", ErrEmptyPath - } - - label, err := lgetxattr(fpath, xattrNameSelinux) - if err != nil { - return "", err - } - // Trim the NUL byte at the end of the byte buffer, if present. - if len(label) > 0 && label[len(label)-1] == '\x00' { - label = label[:len(label)-1] - } - return string(label), nil -} - -/* -SetFSCreateLabel tells kernel the label to create all file system objects -created by this task. Setting label="" to return to default. -*/ -func SetFSCreateLabel(label string) error { - return writeCon(fmt.Sprintf("/proc/self/task/%d/attr/fscreate", syscall.Gettid()), label) -} - -/* -FSCreateLabel returns the default label the kernel which the kernel is using -for file system objects created by this task. "" indicates default. -*/ -func FSCreateLabel() (string, error) { - return readCon(fmt.Sprintf("/proc/self/task/%d/attr/fscreate", syscall.Gettid())) -} - -// CurrentLabel returns the SELinux label of the current process thread, or an error. -func CurrentLabel() (string, error) { - return readCon(fmt.Sprintf("/proc/self/task/%d/attr/current", syscall.Gettid())) -} - -// PidLabel returns the SELinux label of the given pid, or an error. -func PidLabel(pid int) (string, error) { - return readCon(fmt.Sprintf("/proc/%d/attr/current", pid)) -} - -/* -ExecLabel returns the SELinux label that the kernel will use for any programs -that are executed by the current process thread, or an error. 
-*/ -func ExecLabel() (string, error) { - return readCon(fmt.Sprintf("/proc/self/task/%d/attr/exec", syscall.Gettid())) -} - -func writeCon(fpath string, val string) error { - if fpath == "" { - return ErrEmptyPath - } - if val == "" { - if !GetEnabled() { - return nil - } - } - - out, err := os.OpenFile(fpath, os.O_WRONLY, 0) - if err != nil { - return err - } - defer out.Close() - - if val != "" { - _, err = out.Write([]byte(val)) - } else { - _, err = out.Write(nil) - } - return err -} - -/* -CanonicalizeContext takes a context string and writes it to the kernel -the function then returns the context that the kernel will use. This function -can be used to see if two contexts are equivalent -*/ -func CanonicalizeContext(val string) (string, error) { - return readWriteCon(filepath.Join(getSelinuxMountPoint(), "context"), val) -} - -func readWriteCon(fpath string, val string) (string, error) { - if fpath == "" { - return "", ErrEmptyPath - } - f, err := os.OpenFile(fpath, os.O_RDWR, 0) - if err != nil { - return "", err - } - defer f.Close() - - _, err = f.Write([]byte(val)) - if err != nil { - return "", err - } - - var retval string - if _, err := fmt.Fscanf(f, "%s", &retval); err != nil { - return "", err - } - return strings.Trim(retval, "\x00"), nil -} - -/* -SetExecLabel sets the SELinux label that the kernel will use for any programs -that are executed by the current process thread, or an error. -*/ -func SetExecLabel(label string) error { - return writeCon(fmt.Sprintf("/proc/self/task/%d/attr/exec", syscall.Gettid()), label) -} - -// SetSocketLabel takes a process label and tells the kernel to assign the -// label to the next socket that gets created -func SetSocketLabel(label string) error { - return writeCon(fmt.Sprintf("/proc/self/task/%d/attr/sockcreate", syscall.Gettid()), label) -} - -// SocketLabel retrieves the current socket label setting -func SocketLabel() (string, error) { - return readCon(fmt.Sprintf("/proc/self/task/%d/attr/sockcreate", syscall.Gettid())) -} - -// SetKeyLabel takes a process label and tells the kernel to assign the -// label to the next kernel keyring that gets created -func SetKeyLabel(label string) error { - err := writeCon("/proc/self/attr/keycreate", label) - if os.IsNotExist(err) { - return nil - } - if label == "" && os.IsPermission(err) && !GetEnabled() { - return nil - } - return err -} - -// KeyLabel retrieves the current kernel keyring label setting -func KeyLabel() (string, error) { - return readCon("/proc/self/attr/keycreate") -} - -// Get returns the Context as a string -func (c Context) Get() string { - if c["level"] != "" { - return fmt.Sprintf("%s:%s:%s:%s", c["user"], c["role"], c["type"], c["level"]) - } - return fmt.Sprintf("%s:%s:%s", c["user"], c["role"], c["type"]) -} - -// NewContext creates a new Context struct from the specified label -func NewContext(label string) (Context, error) { - c := make(Context) - - if len(label) != 0 { - con := strings.SplitN(label, ":", 4) - if len(con) < 3 { - return c, InvalidLabel - } - c["user"] = con[0] - c["role"] = con[1] - c["type"] = con[2] - if len(con) > 3 { - c["level"] = con[3] - } - } - return c, nil -} - -// ClearLabels clears all reserved labels -func ClearLabels() { - state.Lock() - state.mcsList = make(map[string]bool) - state.Unlock() -} - -// ReserveLabel reserves the MLS/MCS level component of the specified label -func ReserveLabel(label string) { - if len(label) != 0 { - con := strings.SplitN(label, ":", 4) - if len(con) > 3 { - mcsAdd(con[3]) - } - } -} - -func 
selinuxEnforcePath() string { - return fmt.Sprintf("%s/enforce", getSelinuxMountPoint()) -} - -// EnforceMode returns the current SELinux mode Enforcing, Permissive, Disabled -func EnforceMode() int { - var enforce int - - enforceS, err := readCon(selinuxEnforcePath()) - if err != nil { - return -1 - } - - enforce, err = strconv.Atoi(string(enforceS)) - if err != nil { - return -1 - } - return enforce -} - -/* -SetEnforceMode sets the current SELinux mode Enforcing, Permissive. -Disabled is not valid, since this needs to be set at boot time. -*/ -func SetEnforceMode(mode int) error { - return writeCon(selinuxEnforcePath(), fmt.Sprintf("%d", mode)) -} - -/* -DefaultEnforceMode returns the systems default SELinux mode Enforcing, -Permissive or Disabled. Note this is is just the default at boot time. -EnforceMode tells you the systems current mode. -*/ -func DefaultEnforceMode() int { - switch readConfig(selinuxTag) { - case "enforcing": - return Enforcing - case "permissive": - return Permissive - } - return Disabled -} - -func mcsAdd(mcs string) error { - if mcs == "" { - return nil - } - state.Lock() - defer state.Unlock() - if state.mcsList[mcs] { - return ErrMCSAlreadyExists - } - state.mcsList[mcs] = true - return nil -} - -func mcsDelete(mcs string) { - if mcs == "" { - return - } - state.Lock() - defer state.Unlock() - state.mcsList[mcs] = false -} - -func intToMcs(id int, catRange uint32) string { - var ( - SETSIZE = int(catRange) - TIER = SETSIZE - ORD = id - ) - - if id < 1 || id > 523776 { - return "" - } - - for ORD > TIER { - ORD = ORD - TIER - TIER-- - } - TIER = SETSIZE - TIER - ORD = ORD + TIER - return fmt.Sprintf("s0:c%d,c%d", TIER, ORD) -} - -func uniqMcs(catRange uint32) string { - var ( - n uint32 - c1, c2 uint32 - mcs string - ) - - for { - binary.Read(rand.Reader, binary.LittleEndian, &n) - c1 = n % catRange - binary.Read(rand.Reader, binary.LittleEndian, &n) - c2 = n % catRange - if c1 == c2 { - continue - } else { - if c1 > c2 { - c1, c2 = c2, c1 - } - } - mcs = fmt.Sprintf("s0:c%d,c%d", c1, c2) - if err := mcsAdd(mcs); err != nil { - continue - } - break - } - return mcs -} - -/* -ReleaseLabel will unreserve the MLS/MCS Level field of the specified label. -Allowing it to be used by another process. -*/ -func ReleaseLabel(label string) { - if len(label) != 0 { - con := strings.SplitN(label, ":", 4) - if len(con) > 3 { - mcsDelete(con[3]) - } - } -} - -// ROFileLabel returns the specified SELinux readonly file label -func ROFileLabel() string { - return roFileLabel -} - -/* -ContainerLabels returns an allocated processLabel and fileLabel to be used for -container labeling by the calling process. 
-*/ -func ContainerLabels() (processLabel string, fileLabel string) { - var ( - val, key string - bufin *bufio.Reader - ) - - if !GetEnabled() { - return "", "" - } - lxcPath := fmt.Sprintf("%s/contexts/lxc_contexts", getSELinuxPolicyRoot()) - in, err := os.Open(lxcPath) - if err != nil { - return "", "" - } - defer in.Close() - - bufin = bufio.NewReader(in) - - for done := false; !done; { - var line string - if line, err = bufin.ReadString('\n'); err != nil { - if err == io.EOF { - done = true - } else { - goto exit - } - } - line = strings.TrimSpace(line) - if len(line) == 0 { - // Skip blank lines - continue - } - if line[0] == ';' || line[0] == '#' { - // Skip comments - continue - } - if groups := assignRegex.FindStringSubmatch(line); groups != nil { - key, val = strings.TrimSpace(groups[1]), strings.TrimSpace(groups[2]) - if key == "process" { - processLabel = strings.Trim(val, "\"") - } - if key == "file" { - fileLabel = strings.Trim(val, "\"") - } - if key == "ro_file" { - roFileLabel = strings.Trim(val, "\"") - } - } - } - - if processLabel == "" || fileLabel == "" { - return "", "" - } - - if roFileLabel == "" { - roFileLabel = fileLabel - } -exit: - scon, _ := NewContext(processLabel) - if scon["level"] != "" { - mcs := uniqMcs(1024) - scon["level"] = mcs - processLabel = scon.Get() - scon, _ = NewContext(fileLabel) - scon["level"] = mcs - fileLabel = scon.Get() - } - return processLabel, fileLabel -} - -// SecurityCheckContext validates that the SELinux label is understood by the kernel -func SecurityCheckContext(val string) error { - return writeCon(fmt.Sprintf("%s/context", getSelinuxMountPoint()), val) -} - -/* -CopyLevel returns a label with the MLS/MCS level from src label replaced on -the dest label. -*/ -func CopyLevel(src, dest string) (string, error) { - if src == "" { - return "", nil - } - if err := SecurityCheckContext(src); err != nil { - return "", err - } - if err := SecurityCheckContext(dest); err != nil { - return "", err - } - scon, err := NewContext(src) - if err != nil { - return "", err - } - tcon, err := NewContext(dest) - if err != nil { - return "", err - } - mcsDelete(tcon["level"]) - mcsAdd(scon["level"]) - tcon["level"] = scon["level"] - return tcon.Get(), nil -} - -// Prevent users from relabing system files -func badPrefix(fpath string) error { - if fpath == "" { - return ErrEmptyPath - } - - badPrefixes := []string{"/usr"} - for _, prefix := range badPrefixes { - if strings.HasPrefix(fpath, prefix) { - return fmt.Errorf("relabeling content in %s is not allowed", prefix) - } - } - return nil -} - -// Chcon changes the `fpath` file object to the SELinux label `label`. -// If `fpath` is a directory and `recurse`` is true, Chcon will walk the -// directory tree setting the label. -func Chcon(fpath string, label string, recurse bool) error { - if fpath == "" { - return ErrEmptyPath - } - if label == "" { - return nil - } - if err := badPrefix(fpath); err != nil { - return err - } - callback := func(p string, info os.FileInfo, err error) error { - e := SetFileLabel(p, label) - if os.IsNotExist(e) { - return nil - } - return e - } - - if recurse { - return filepath.Walk(fpath, callback) - } - - return SetFileLabel(fpath, label) -} - -// DupSecOpt takes an SELinux process label and returns security options that -// can be used to set the SELinux Type and Level for future container processes. 
-func DupSecOpt(src string) ([]string, error) { - if src == "" { - return nil, nil - } - con, err := NewContext(src) - if err != nil { - return nil, err - } - if con["user"] == "" || - con["role"] == "" || - con["type"] == "" { - return nil, nil - } - dup := []string{"user:" + con["user"], - "role:" + con["role"], - "type:" + con["type"], - } - - if con["level"] != "" { - dup = append(dup, "level:"+con["level"]) - } - - return dup, nil -} - -// DisableSecOpt returns a security opt that can be used to disable SELinux -// labeling support for future container processes. -func DisableSecOpt() []string { - return []string{"disable"} -} diff --git a/vendor/github.com/opencontainers/selinux/go-selinux/selinux_stub.go b/vendor/github.com/opencontainers/selinux/go-selinux/selinux_stub.go deleted file mode 100644 index 79b005d19..000000000 --- a/vendor/github.com/opencontainers/selinux/go-selinux/selinux_stub.go +++ /dev/null @@ -1,217 +0,0 @@ -// +build !selinux - -package selinux - -import ( - "errors" -) - -const ( - // Enforcing constant indicate SELinux is in enforcing mode - Enforcing = 1 - // Permissive constant to indicate SELinux is in permissive mode - Permissive = 0 - // Disabled constant to indicate SELinux is disabled - Disabled = -1 -) - -var ( - // ErrMCSAlreadyExists is returned when trying to allocate a duplicate MCS. - ErrMCSAlreadyExists = errors.New("MCS label already exists") - // ErrEmptyPath is returned when an empty path has been specified. - ErrEmptyPath = errors.New("empty path") -) - -// Context is a representation of the SELinux label broken into 4 parts -type Context map[string]string - -// SetDisabled disables selinux support for the package -func SetDisabled() { - return -} - -// GetEnabled returns whether selinux is currently enabled. -func GetEnabled() bool { - return false -} - -// SetFileLabel sets the SELinux label for this path or returns an error. -func SetFileLabel(fpath string, label string) error { - return nil -} - -// FileLabel returns the SELinux label for this path or returns an error. -func FileLabel(fpath string) (string, error) { - return "", nil -} - -/* -SetFSCreateLabel tells kernel the label to create all file system objects -created by this task. Setting label="" to return to default. -*/ -func SetFSCreateLabel(label string) error { - return nil -} - -/* -FSCreateLabel returns the default label the kernel which the kernel is using -for file system objects created by this task. "" indicates default. -*/ -func FSCreateLabel() (string, error) { - return "", nil -} - -// CurrentLabel returns the SELinux label of the current process thread, or an error. -func CurrentLabel() (string, error) { - return "", nil -} - -// PidLabel returns the SELinux label of the given pid, or an error. -func PidLabel(pid int) (string, error) { - return "", nil -} - -/* -ExecLabel returns the SELinux label that the kernel will use for any programs -that are executed by the current process thread, or an error. -*/ -func ExecLabel() (string, error) { - return "", nil -} - -/* -CanonicalizeContext takes a context string and writes it to the kernel -the function then returns the context that the kernel will use. This function -can be used to see if two contexts are equivalent -*/ -func CanonicalizeContext(val string) (string, error) { - return "", nil -} - -/* -SetExecLabel sets the SELinux label that the kernel will use for any programs -that are executed by the current process thread, or an error. 
-*/ -func SetExecLabel(label string) error { - return nil -} - -/* -SetSocketLabel sets the SELinux label that the kernel will use for any programs -that are executed by the current process thread, or an error. -*/ -func SetSocketLabel(label string) error { - return nil -} - -// SocketLabel retrieves the current socket label setting -func SocketLabel() (string, error) { - return "", nil -} - -// SetKeyLabel takes a process label and tells the kernel to assign the -// label to the next kernel keyring that gets created -func SetKeyLabel(label string) error { - return nil -} - -// KeyLabel retrieves the current kernel keyring label setting -func KeyLabel() (string, error) { - return "", nil -} - -// Get returns the Context as a string -func (c Context) Get() string { - return "" -} - -// NewContext creates a new Context struct from the specified label -func NewContext(label string) (Context, error) { - c := make(Context) - return c, nil -} - -// ClearLabels clears all reserved MLS/MCS levels -func ClearLabels() { - return -} - -// ReserveLabel reserves the MLS/MCS level component of the specified label -func ReserveLabel(label string) { - return -} - -// EnforceMode returns the current SELinux mode Enforcing, Permissive, Disabled -func EnforceMode() int { - return Disabled -} - -/* -SetEnforceMode sets the current SELinux mode Enforcing, Permissive. -Disabled is not valid, since this needs to be set at boot time. -*/ -func SetEnforceMode(mode int) error { - return nil -} - -/* -DefaultEnforceMode returns the systems default SELinux mode Enforcing, -Permissive or Disabled. Note this is is just the default at boot time. -EnforceMode tells you the systems current mode. -*/ -func DefaultEnforceMode() int { - return Disabled -} - -/* -ReleaseLabel will unreserve the MLS/MCS Level field of the specified label. -Allowing it to be used by another process. -*/ -func ReleaseLabel(label string) { - return -} - -// ROFileLabel returns the specified SELinux readonly file label -func ROFileLabel() string { - return "" -} - -/* -ContainerLabels returns an allocated processLabel and fileLabel to be used for -container labeling by the calling process. -*/ -func ContainerLabels() (processLabel string, fileLabel string) { - return "", "" -} - -// SecurityCheckContext validates that the SELinux label is understood by the kernel -func SecurityCheckContext(val string) error { - return nil -} - -/* -CopyLevel returns a label with the MLS/MCS level from src label replaced on -the dest label. -*/ -func CopyLevel(src, dest string) (string, error) { - return "", nil -} - -// Chcon changes the `fpath` file object to the SELinux label `label`. -// If `fpath` is a directory and `recurse`` is true, Chcon will walk the -// directory tree setting the label. -func Chcon(fpath string, label string, recurse bool) error { - return nil -} - -// DupSecOpt takes an SELinux process label and returns security options that -// can be used to set the SELinux Type and Level for future container processes. -func DupSecOpt(src string) ([]string, error) { - return nil, nil -} - -// DisableSecOpt returns a security opt that can be used to disable SELinux -// labeling support for future container processes. 
-func DisableSecOpt() []string { - return []string{"disable"} -} diff --git a/vendor/github.com/opencontainers/selinux/go-selinux/xattrs.go b/vendor/github.com/opencontainers/selinux/go-selinux/xattrs.go deleted file mode 100644 index 67a9d8ee8..000000000 --- a/vendor/github.com/opencontainers/selinux/go-selinux/xattrs.go +++ /dev/null @@ -1,78 +0,0 @@ -// +build selinux,linux - -package selinux - -import ( - "syscall" - "unsafe" -) - -var _zero uintptr - -// Returns a []byte slice if the xattr is set and nil otherwise -// Requires path and its attribute as arguments -func lgetxattr(path string, attr string) ([]byte, error) { - var sz int - pathBytes, err := syscall.BytePtrFromString(path) - if err != nil { - return nil, err - } - attrBytes, err := syscall.BytePtrFromString(attr) - if err != nil { - return nil, err - } - - // Start with a 128 length byte array - sz = 128 - dest := make([]byte, sz) - destBytes := unsafe.Pointer(&dest[0]) - _sz, _, errno := syscall.Syscall6(syscall.SYS_LGETXATTR, uintptr(unsafe.Pointer(pathBytes)), uintptr(unsafe.Pointer(attrBytes)), uintptr(destBytes), uintptr(len(dest)), 0, 0) - - switch { - case errno == syscall.ENODATA: - return nil, errno - case errno == syscall.ENOTSUP: - return nil, errno - case errno == syscall.ERANGE: - // 128 byte array might just not be good enough, - // A dummy buffer is used ``uintptr(0)`` to get real size - // of the xattrs on disk - _sz, _, errno = syscall.Syscall6(syscall.SYS_LGETXATTR, uintptr(unsafe.Pointer(pathBytes)), uintptr(unsafe.Pointer(attrBytes)), uintptr(unsafe.Pointer(nil)), uintptr(0), 0, 0) - sz = int(_sz) - if sz < 0 { - return nil, errno - } - dest = make([]byte, sz) - destBytes := unsafe.Pointer(&dest[0]) - _sz, _, errno = syscall.Syscall6(syscall.SYS_LGETXATTR, uintptr(unsafe.Pointer(pathBytes)), uintptr(unsafe.Pointer(attrBytes)), uintptr(destBytes), uintptr(len(dest)), 0, 0) - if errno != 0 { - return nil, errno - } - case errno != 0: - return nil, errno - } - sz = int(_sz) - return dest[:sz], nil -} - -func lsetxattr(path string, attr string, data []byte, flags int) error { - pathBytes, err := syscall.BytePtrFromString(path) - if err != nil { - return err - } - attrBytes, err := syscall.BytePtrFromString(attr) - if err != nil { - return err - } - var dataBytes unsafe.Pointer - if len(data) > 0 { - dataBytes = unsafe.Pointer(&data[0]) - } else { - dataBytes = unsafe.Pointer(&_zero) - } - _, _, errno := syscall.Syscall6(syscall.SYS_LSETXATTR, uintptr(unsafe.Pointer(pathBytes)), uintptr(unsafe.Pointer(attrBytes)), uintptr(dataBytes), uintptr(len(data)), uintptr(flags), 0) - if errno != 0 { - return errno - } - return nil -} diff --git a/vendor/github.com/seccomp/libseccomp-golang/LICENSE b/vendor/github.com/seccomp/libseccomp-golang/LICENSE deleted file mode 100644 index 81cf60de2..000000000 --- a/vendor/github.com/seccomp/libseccomp-golang/LICENSE +++ /dev/null @@ -1,22 +0,0 @@ -Copyright (c) 2015 Matthew Heon -Copyright (c) 2015 Paul Moore -All rights reserved. - -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions are met: -- Redistributions of source code must retain the above copyright notice, - this list of conditions and the following disclaimer. -- Redistributions in binary form must reproduce the above copyright notice, - this list of conditions and the following disclaimer in the documentation - and/or other materials provided with the distribution. 
- -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND -ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED -WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE -DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE -FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL -DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR -SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER -CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, -OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE -OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/vendor/github.com/seccomp/libseccomp-golang/README b/vendor/github.com/seccomp/libseccomp-golang/README deleted file mode 100644 index 66839a466..000000000 --- a/vendor/github.com/seccomp/libseccomp-golang/README +++ /dev/null @@ -1,51 +0,0 @@ -libseccomp-golang: Go Language Bindings for the libseccomp Project -=============================================================================== -https://github.com/seccomp/libseccomp-golang -https://github.com/seccomp/libseccomp - -The libseccomp library provides an easy to use, platform independent, interface -to the Linux Kernel's syscall filtering mechanism. The libseccomp API is -designed to abstract away the underlying BPF based syscall filter language and -present a more conventional function-call based filtering interface that should -be familiar to, and easily adopted by, application developers. - -The libseccomp-golang library provides a Go based interface to the libseccomp -library. - -* Online Resources - -The library source repository currently lives on GitHub at the following URLs: - - -> https://github.com/seccomp/libseccomp-golang - -> https://github.com/seccomp/libseccomp - -The project mailing list is currently hosted on Google Groups at the URL below, -please note that a Google account is not required to subscribe to the mailing -list. - - -> https://groups.google.com/d/forum/libseccomp - -Documentation is also available at: - - -> https://godoc.org/github.com/seccomp/libseccomp-golang - -* Installing the package - -The libseccomp-golang bindings require at least Go v1.2.1 and GCC v4.8.4; -earlier versions may yield unpredictable results. If you meet these -requirements you can install this package using the command below: - - $ go get github.com/seccomp/libseccomp-golang - -* Testing the Library - -A number of tests and lint related recipes are provided in the Makefile, if -you want to run the standard regression tests, you can excute the following: - - $ make check - -In order to execute the 'make lint' recipe the 'golint' tool is needed, it -can be found at: - - -> https://github.com/golang/lint - diff --git a/vendor/github.com/seccomp/libseccomp-golang/seccomp.go b/vendor/github.com/seccomp/libseccomp-golang/seccomp.go deleted file mode 100644 index d3d9e6bfb..000000000 --- a/vendor/github.com/seccomp/libseccomp-golang/seccomp.go +++ /dev/null @@ -1,864 +0,0 @@ -// +build linux - -// Public API specification for libseccomp Go bindings -// Contains public API for the bindings - -// Package seccomp provides bindings for libseccomp, a library wrapping the Linux -// seccomp syscall. Seccomp enables an application to restrict system call use -// for itself and its children. 
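The deleted seccomp.go defines the whole public API of the bindings (NewFilter, GetSyscallFromName, AddRule, Load, the ScmpAction constants). A minimal usage sketch, using only functions defined in the removed file; the choice of syscall ("chroot") and error code (EPERM) is arbitrary and purely for illustration:

```go
// Illustrative only: allow all syscalls by default, but make chroot(2)
// fail with EPERM for this process. Not part of the vendored source.
package main

import (
	"log"
	"syscall"

	seccomp "github.com/seccomp/libseccomp-golang"
)

func main() {
	// Everything not matched by a rule is allowed.
	filter, err := seccomp.NewFilter(seccomp.ActAllow)
	if err != nil {
		log.Fatal(err)
	}
	defer filter.Release()

	call, err := seccomp.GetSyscallFromName("chroot")
	if err != nil {
		log.Fatal(err)
	}

	// Return EPERM instead of executing the syscall.
	if err := filter.AddRule(call, seccomp.ActErrno.SetReturnCode(int16(syscall.EPERM))); err != nil {
		log.Fatal(err)
	}

	// Load the filter into the kernel for this process and its children.
	if err := filter.Load(); err != nil {
		log.Fatal(err)
	}
}
```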
-package seccomp - -import ( - "fmt" - "os" - "runtime" - "strings" - "sync" - "syscall" - "unsafe" -) - -// C wrapping code - -// #cgo pkg-config: libseccomp -// #include -// #include -import "C" - -// Exported types - -// VersionError denotes that the system libseccomp version is incompatible -// with this package. -type VersionError struct { - message string - minimum string -} - -func (e VersionError) Error() string { - format := "Libseccomp version too low: " - if e.message != "" { - format += e.message + ": " - } - format += "minimum supported is " - if e.minimum != "" { - format += e.minimum + ": " - } else { - format += "2.2.0: " - } - format += "detected %d.%d.%d" - return fmt.Sprintf(format, verMajor, verMinor, verMicro) -} - -// ScmpArch represents a CPU architecture. Seccomp can restrict syscalls on a -// per-architecture basis. -type ScmpArch uint - -// ScmpAction represents an action to be taken on a filter rule match in -// libseccomp -type ScmpAction uint - -// ScmpCompareOp represents a comparison operator which can be used in a filter -// rule -type ScmpCompareOp uint - -// ScmpCondition represents a rule in a libseccomp filter context -type ScmpCondition struct { - Argument uint `json:"argument,omitempty"` - Op ScmpCompareOp `json:"operator,omitempty"` - Operand1 uint64 `json:"operand_one,omitempty"` - Operand2 uint64 `json:"operand_two,omitempty"` -} - -// ScmpSyscall represents a Linux System Call -type ScmpSyscall int32 - -// Exported Constants - -const ( - // Valid architectures recognized by libseccomp - // PowerPC and S390(x) architectures are unavailable below library version - // v2.3.0 and will returns errors if used with incompatible libraries - - // ArchInvalid is a placeholder to ensure uninitialized ScmpArch - // variables are invalid - ArchInvalid ScmpArch = iota - // ArchNative is the native architecture of the kernel - ArchNative ScmpArch = iota - // ArchX86 represents 32-bit x86 syscalls - ArchX86 ScmpArch = iota - // ArchAMD64 represents 64-bit x86-64 syscalls - ArchAMD64 ScmpArch = iota - // ArchX32 represents 64-bit x86-64 syscalls (32-bit pointers) - ArchX32 ScmpArch = iota - // ArchARM represents 32-bit ARM syscalls - ArchARM ScmpArch = iota - // ArchARM64 represents 64-bit ARM syscalls - ArchARM64 ScmpArch = iota - // ArchMIPS represents 32-bit MIPS syscalls - ArchMIPS ScmpArch = iota - // ArchMIPS64 represents 64-bit MIPS syscalls - ArchMIPS64 ScmpArch = iota - // ArchMIPS64N32 represents 64-bit MIPS syscalls (32-bit pointers) - ArchMIPS64N32 ScmpArch = iota - // ArchMIPSEL represents 32-bit MIPS syscalls (little endian) - ArchMIPSEL ScmpArch = iota - // ArchMIPSEL64 represents 64-bit MIPS syscalls (little endian) - ArchMIPSEL64 ScmpArch = iota - // ArchMIPSEL64N32 represents 64-bit MIPS syscalls (little endian, - // 32-bit pointers) - ArchMIPSEL64N32 ScmpArch = iota - // ArchPPC represents 32-bit POWERPC syscalls - ArchPPC ScmpArch = iota - // ArchPPC64 represents 64-bit POWER syscalls (big endian) - ArchPPC64 ScmpArch = iota - // ArchPPC64LE represents 64-bit POWER syscalls (little endian) - ArchPPC64LE ScmpArch = iota - // ArchS390 represents 31-bit System z/390 syscalls - ArchS390 ScmpArch = iota - // ArchS390X represents 64-bit System z/390 syscalls - ArchS390X ScmpArch = iota -) - -const ( - // Supported actions on filter match - - // ActInvalid is a placeholder to ensure uninitialized ScmpAction - // variables are invalid - ActInvalid ScmpAction = iota - // ActKill kills the process - ActKill ScmpAction = iota - // ActTrap throws SIGSYS - 
ActTrap ScmpAction = iota - // ActErrno causes the syscall to return a negative error code. This - // code can be set with the SetReturnCode method - ActErrno ScmpAction = iota - // ActTrace causes the syscall to notify tracing processes with the - // given error code. This code can be set with the SetReturnCode method - ActTrace ScmpAction = iota - // ActAllow permits the syscall to continue execution - ActAllow ScmpAction = iota -) - -const ( - // These are comparison operators used in conditional seccomp rules - // They are used to compare the value of a single argument of a syscall - // against a user-defined constant - - // CompareInvalid is a placeholder to ensure uninitialized ScmpCompareOp - // variables are invalid - CompareInvalid ScmpCompareOp = iota - // CompareNotEqual returns true if the argument is not equal to the - // given value - CompareNotEqual ScmpCompareOp = iota - // CompareLess returns true if the argument is less than the given value - CompareLess ScmpCompareOp = iota - // CompareLessOrEqual returns true if the argument is less than or equal - // to the given value - CompareLessOrEqual ScmpCompareOp = iota - // CompareEqual returns true if the argument is equal to the given value - CompareEqual ScmpCompareOp = iota - // CompareGreaterEqual returns true if the argument is greater than or - // equal to the given value - CompareGreaterEqual ScmpCompareOp = iota - // CompareGreater returns true if the argument is greater than the given - // value - CompareGreater ScmpCompareOp = iota - // CompareMaskedEqual returns true if the argument is equal to the given - // value, when masked (bitwise &) against the second given value - CompareMaskedEqual ScmpCompareOp = iota -) - -// Helpers for types - -// GetArchFromString returns an ScmpArch constant from a string representing an -// architecture -func GetArchFromString(arch string) (ScmpArch, error) { - if err := ensureSupportedVersion(); err != nil { - return ArchInvalid, err - } - - switch strings.ToLower(arch) { - case "x86": - return ArchX86, nil - case "amd64", "x86-64", "x86_64", "x64": - return ArchAMD64, nil - case "x32": - return ArchX32, nil - case "arm": - return ArchARM, nil - case "arm64", "aarch64": - return ArchARM64, nil - case "mips": - return ArchMIPS, nil - case "mips64": - return ArchMIPS64, nil - case "mips64n32": - return ArchMIPS64N32, nil - case "mipsel": - return ArchMIPSEL, nil - case "mipsel64": - return ArchMIPSEL64, nil - case "mipsel64n32": - return ArchMIPSEL64N32, nil - case "ppc": - return ArchPPC, nil - case "ppc64": - return ArchPPC64, nil - case "ppc64le": - return ArchPPC64LE, nil - case "s390": - return ArchS390, nil - case "s390x": - return ArchS390X, nil - default: - return ArchInvalid, fmt.Errorf("cannot convert unrecognized string %s", arch) - } -} - -// String returns a string representation of an architecture constant -func (a ScmpArch) String() string { - switch a { - case ArchX86: - return "x86" - case ArchAMD64: - return "amd64" - case ArchX32: - return "x32" - case ArchARM: - return "arm" - case ArchARM64: - return "arm64" - case ArchMIPS: - return "mips" - case ArchMIPS64: - return "mips64" - case ArchMIPS64N32: - return "mips64n32" - case ArchMIPSEL: - return "mipsel" - case ArchMIPSEL64: - return "mipsel64" - case ArchMIPSEL64N32: - return "mipsel64n32" - case ArchPPC: - return "ppc" - case ArchPPC64: - return "ppc64" - case ArchPPC64LE: - return "ppc64le" - case ArchS390: - return "s390" - case ArchS390X: - return "s390x" - case ArchNative: - return "native" - case 
ArchInvalid: - return "Invalid architecture" - default: - return "Unknown architecture" - } -} - -// String returns a string representation of a comparison operator constant -func (a ScmpCompareOp) String() string { - switch a { - case CompareNotEqual: - return "Not equal" - case CompareLess: - return "Less than" - case CompareLessOrEqual: - return "Less than or equal to" - case CompareEqual: - return "Equal" - case CompareGreaterEqual: - return "Greater than or equal to" - case CompareGreater: - return "Greater than" - case CompareMaskedEqual: - return "Masked equality" - case CompareInvalid: - return "Invalid comparison operator" - default: - return "Unrecognized comparison operator" - } -} - -// String returns a string representation of a seccomp match action -func (a ScmpAction) String() string { - switch a & 0xFFFF { - case ActKill: - return "Action: Kill Process" - case ActTrap: - return "Action: Send SIGSYS" - case ActErrno: - return fmt.Sprintf("Action: Return error code %d", (a >> 16)) - case ActTrace: - return fmt.Sprintf("Action: Notify tracing processes with code %d", - (a >> 16)) - case ActAllow: - return "Action: Allow system call" - default: - return "Unrecognized Action" - } -} - -// SetReturnCode adds a return code to a supporting ScmpAction, clearing any -// existing code Only valid on ActErrno and ActTrace. Takes no action otherwise. -// Accepts 16-bit return code as argument. -// Returns a valid ScmpAction of the original type with the new error code set. -func (a ScmpAction) SetReturnCode(code int16) ScmpAction { - aTmp := a & 0x0000FFFF - if aTmp == ActErrno || aTmp == ActTrace { - return (aTmp | (ScmpAction(code)&0xFFFF)<<16) - } - return a -} - -// GetReturnCode returns the return code of an ScmpAction -func (a ScmpAction) GetReturnCode() int16 { - return int16(a >> 16) -} - -// General utility functions - -// GetLibraryVersion returns the version of the library the bindings are built -// against. -// The version is formatted as follows: Major.Minor.Micro -func GetLibraryVersion() (major, minor, micro uint) { - return verMajor, verMinor, verMicro -} - -// Syscall functions - -// GetName retrieves the name of a syscall from its number. -// Acts on any syscall number. -// Returns either a string containing the name of the syscall, or an error. -func (s ScmpSyscall) GetName() (string, error) { - return s.GetNameByArch(ArchNative) -} - -// GetNameByArch retrieves the name of a syscall from its number for a given -// architecture. -// Acts on any syscall number. -// Accepts a valid architecture constant. -// Returns either a string containing the name of the syscall, or an error. -// if the syscall is unrecognized or an issue occurred. -func (s ScmpSyscall) GetNameByArch(arch ScmpArch) (string, error) { - if err := sanitizeArch(arch); err != nil { - return "", err - } - - cString := C.seccomp_syscall_resolve_num_arch(arch.toNative(), C.int(s)) - if cString == nil { - return "", fmt.Errorf("could not resolve syscall name") - } - defer C.free(unsafe.Pointer(cString)) - - finalStr := C.GoString(cString) - return finalStr, nil -} - -// GetSyscallFromName returns the number of a syscall by name on the kernel's -// native architecture. -// Accepts a string containing the name of a syscall. -// Returns the number of the syscall, or an error if no syscall with that name -// was found. 
-func GetSyscallFromName(name string) (ScmpSyscall, error) { - if err := ensureSupportedVersion(); err != nil { - return 0, err - } - - cString := C.CString(name) - defer C.free(unsafe.Pointer(cString)) - - result := C.seccomp_syscall_resolve_name(cString) - if result == scmpError { - return 0, fmt.Errorf("could not resolve name to syscall") - } - - return ScmpSyscall(result), nil -} - -// GetSyscallFromNameByArch returns the number of a syscall by name for a given -// architecture's ABI. -// Accepts the name of a syscall and an architecture constant. -// Returns the number of the syscall, or an error if an invalid architecture is -// passed or a syscall with that name was not found. -func GetSyscallFromNameByArch(name string, arch ScmpArch) (ScmpSyscall, error) { - if err := ensureSupportedVersion(); err != nil { - return 0, err - } - if err := sanitizeArch(arch); err != nil { - return 0, err - } - - cString := C.CString(name) - defer C.free(unsafe.Pointer(cString)) - - result := C.seccomp_syscall_resolve_name_arch(arch.toNative(), cString) - if result == scmpError { - return 0, fmt.Errorf("could not resolve name to syscall") - } - - return ScmpSyscall(result), nil -} - -// MakeCondition creates and returns a new condition to attach to a filter rule. -// Associated rules will only match if this condition is true. -// Accepts the number the argument we are checking, and a comparison operator -// and value to compare to. -// The rule will match if argument $arg (zero-indexed) of the syscall is -// $COMPARE_OP the provided comparison value. -// Some comparison operators accept two values. Masked equals, for example, -// will mask $arg of the syscall with the second value provided (via bitwise -// AND) and then compare against the first value provided. -// For example, in the less than or equal case, if the syscall argument was -// 0 and the value provided was 1, the condition would match, as 0 is less -// than or equal to 1. -// Return either an error on bad argument or a valid ScmpCondition struct. -func MakeCondition(arg uint, comparison ScmpCompareOp, values ...uint64) (ScmpCondition, error) { - var condStruct ScmpCondition - - if err := ensureSupportedVersion(); err != nil { - return condStruct, err - } - - if comparison == CompareInvalid { - return condStruct, fmt.Errorf("invalid comparison operator") - } else if arg > 5 { - return condStruct, fmt.Errorf("syscalls only have up to 6 arguments") - } else if len(values) > 2 { - return condStruct, fmt.Errorf("conditions can have at most 2 arguments") - } else if len(values) == 0 { - return condStruct, fmt.Errorf("must provide at least one value to compare against") - } - - condStruct.Argument = arg - condStruct.Op = comparison - condStruct.Operand1 = values[0] - if len(values) == 2 { - condStruct.Operand2 = values[1] - } else { - condStruct.Operand2 = 0 // Unused - } - - return condStruct, nil -} - -// Utility Functions - -// GetNativeArch returns architecture token representing the native kernel -// architecture -func GetNativeArch() (ScmpArch, error) { - if err := ensureSupportedVersion(); err != nil { - return ArchInvalid, err - } - - arch := C.seccomp_arch_native() - - return archFromNative(arch) -} - -// Public Filter API - -// ScmpFilter represents a filter context in libseccomp. -// A filter context is initially empty. Rules can be added to it, and it can -// then be loaded into the kernel. 
-type ScmpFilter struct { - filterCtx C.scmp_filter_ctx - valid bool - lock sync.Mutex -} - -// NewFilter creates and returns a new filter context. -// Accepts a default action to be taken for syscalls which match no rules in -// the filter. -// Returns a reference to a valid filter context, or nil and an error if the -// filter context could not be created or an invalid default action was given. -func NewFilter(defaultAction ScmpAction) (*ScmpFilter, error) { - if err := ensureSupportedVersion(); err != nil { - return nil, err - } - - if err := sanitizeAction(defaultAction); err != nil { - return nil, err - } - - fPtr := C.seccomp_init(defaultAction.toNative()) - if fPtr == nil { - return nil, fmt.Errorf("could not create filter") - } - - filter := new(ScmpFilter) - filter.filterCtx = fPtr - filter.valid = true - runtime.SetFinalizer(filter, filterFinalizer) - - // Enable TSync so all goroutines will receive the same rules - // If the kernel does not support TSYNC, allow us to continue without error - if err := filter.setFilterAttr(filterAttrTsync, 0x1); err != nil && err != syscall.ENOTSUP { - filter.Release() - return nil, fmt.Errorf("could not create filter - error setting tsync bit: %v", err) - } - - return filter, nil -} - -// IsValid determines whether a filter context is valid to use. -// Some operations (Release and Merge) render filter contexts invalid and -// consequently prevent further use. -func (f *ScmpFilter) IsValid() bool { - f.lock.Lock() - defer f.lock.Unlock() - - return f.valid -} - -// Reset resets a filter context, removing all its existing state. -// Accepts a new default action to be taken for syscalls which do not match. -// Returns an error if the filter or action provided are invalid. -func (f *ScmpFilter) Reset(defaultAction ScmpAction) error { - f.lock.Lock() - defer f.lock.Unlock() - - if err := sanitizeAction(defaultAction); err != nil { - return err - } else if !f.valid { - return errBadFilter - } - - retCode := C.seccomp_reset(f.filterCtx, defaultAction.toNative()) - if retCode != 0 { - return syscall.Errno(-1 * retCode) - } - - return nil -} - -// Release releases a filter context, freeing its memory. Should be called after -// loading into the kernel, when the filter is no longer needed. -// After calling this function, the given filter is no longer valid and cannot -// be used. -// Release() will be invoked automatically when a filter context is garbage -// collected, but can also be called manually to free memory. -func (f *ScmpFilter) Release() { - f.lock.Lock() - defer f.lock.Unlock() - - if !f.valid { - return - } - - f.valid = false - C.seccomp_release(f.filterCtx) -} - -// Merge merges two filter contexts. -// The source filter src will be released as part of the process, and will no -// longer be usable or valid after this call. -// To be merged, filters must NOT share any architectures, and all their -// attributes (Default Action, Bad Arch Action, and No New Privs bools) -// must match. -// The filter src will be merged into the filter this is called on. -// The architectures of the src filter not present in the destination, and all -// associated rules, will be added to the destination. -// Returns an error if merging the filters failed. 
-func (f *ScmpFilter) Merge(src *ScmpFilter) error { - f.lock.Lock() - defer f.lock.Unlock() - - src.lock.Lock() - defer src.lock.Unlock() - - if !src.valid || !f.valid { - return fmt.Errorf("one or more of the filter contexts is invalid or uninitialized") - } - - // Merge the filters - retCode := C.seccomp_merge(f.filterCtx, src.filterCtx) - if syscall.Errno(-1*retCode) == syscall.EINVAL { - return fmt.Errorf("filters could not be merged due to a mismatch in attributes or invalid filter") - } else if retCode != 0 { - return syscall.Errno(-1 * retCode) - } - - src.valid = false - - return nil -} - -// IsArchPresent checks if an architecture is present in a filter. -// If a filter contains an architecture, it uses its default action for -// syscalls which do not match rules in it, and its rules can match syscalls -// for that ABI. -// If a filter does not contain an architecture, all syscalls made to that -// kernel ABI will fail with the filter's default Bad Architecture Action -// (by default, killing the process). -// Accepts an architecture constant. -// Returns true if the architecture is present in the filter, false otherwise, -// and an error on an invalid filter context, architecture constant, or an -// issue with the call to libseccomp. -func (f *ScmpFilter) IsArchPresent(arch ScmpArch) (bool, error) { - f.lock.Lock() - defer f.lock.Unlock() - - if err := sanitizeArch(arch); err != nil { - return false, err - } else if !f.valid { - return false, errBadFilter - } - - retCode := C.seccomp_arch_exist(f.filterCtx, arch.toNative()) - if syscall.Errno(-1*retCode) == syscall.EEXIST { - // -EEXIST is "arch not present" - return false, nil - } else if retCode != 0 { - return false, syscall.Errno(-1 * retCode) - } - - return true, nil -} - -// AddArch adds an architecture to the filter. -// Accepts an architecture constant. -// Returns an error on invalid filter context or architecture token, or an -// issue with the call to libseccomp. -func (f *ScmpFilter) AddArch(arch ScmpArch) error { - f.lock.Lock() - defer f.lock.Unlock() - - if err := sanitizeArch(arch); err != nil { - return err - } else if !f.valid { - return errBadFilter - } - - // Libseccomp returns -EEXIST if the specified architecture is already - // present. Succeed silently in this case, as it's not fatal, and the - // architecture is present already. - retCode := C.seccomp_arch_add(f.filterCtx, arch.toNative()) - if retCode != 0 && syscall.Errno(-1*retCode) != syscall.EEXIST { - return syscall.Errno(-1 * retCode) - } - - return nil -} - -// RemoveArch removes an architecture from the filter. -// Accepts an architecture constant. -// Returns an error on invalid filter context or architecture token, or an -// issue with the call to libseccomp. -func (f *ScmpFilter) RemoveArch(arch ScmpArch) error { - f.lock.Lock() - defer f.lock.Unlock() - - if err := sanitizeArch(arch); err != nil { - return err - } else if !f.valid { - return errBadFilter - } - - // Similar to AddArch, -EEXIST is returned if the arch is not present - // Succeed silently in that case, this is not fatal and the architecture - // is not present in the filter after RemoveArch - retCode := C.seccomp_arch_remove(f.filterCtx, arch.toNative()) - if retCode != 0 && syscall.Errno(-1*retCode) != syscall.EEXIST { - return syscall.Errno(-1 * retCode) - } - - return nil -} - -// Load loads a filter context into the kernel. -// Returns an error if the filter context is invalid or the syscall failed. 
-func (f *ScmpFilter) Load() error { - f.lock.Lock() - defer f.lock.Unlock() - - if !f.valid { - return errBadFilter - } - - if retCode := C.seccomp_load(f.filterCtx); retCode != 0 { - return syscall.Errno(-1 * retCode) - } - - return nil -} - -// GetDefaultAction returns the default action taken on a syscall which does not -// match a rule in the filter, or an error if an issue was encountered -// retrieving the value. -func (f *ScmpFilter) GetDefaultAction() (ScmpAction, error) { - action, err := f.getFilterAttr(filterAttrActDefault) - if err != nil { - return 0x0, err - } - - return actionFromNative(action) -} - -// GetBadArchAction returns the default action taken on a syscall for an -// architecture not in the filter, or an error if an issue was encountered -// retrieving the value. -func (f *ScmpFilter) GetBadArchAction() (ScmpAction, error) { - action, err := f.getFilterAttr(filterAttrActBadArch) - if err != nil { - return 0x0, err - } - - return actionFromNative(action) -} - -// GetNoNewPrivsBit returns the current state the No New Privileges bit will be set -// to on the filter being loaded, or an error if an issue was encountered -// retrieving the value. -// The No New Privileges bit tells the kernel that new processes run with exec() -// cannot gain more privileges than the process that ran exec(). -// For example, a process with No New Privileges set would be unable to exec -// setuid/setgid executables. -func (f *ScmpFilter) GetNoNewPrivsBit() (bool, error) { - noNewPrivs, err := f.getFilterAttr(filterAttrNNP) - if err != nil { - return false, err - } - - if noNewPrivs == 0 { - return false, nil - } - - return true, nil -} - -// SetBadArchAction sets the default action taken on a syscall for an -// architecture not in the filter, or an error if an issue was encountered -// setting the value. -func (f *ScmpFilter) SetBadArchAction(action ScmpAction) error { - if err := sanitizeAction(action); err != nil { - return err - } - - return f.setFilterAttr(filterAttrActBadArch, action.toNative()) -} - -// SetNoNewPrivsBit sets the state of the No New Privileges bit, which will be -// applied on filter load, or an error if an issue was encountered setting the -// value. -// Filters with No New Privileges set to 0 can only be loaded if the process -// has the CAP_SYS_ADMIN capability. -func (f *ScmpFilter) SetNoNewPrivsBit(state bool) error { - var toSet C.uint32_t = 0x0 - - if state { - toSet = 0x1 - } - - return f.setFilterAttr(filterAttrNNP, toSet) -} - -// SetSyscallPriority sets a syscall's priority. -// This provides a hint to the filter generator in libseccomp about the -// importance of this syscall. High-priority syscalls are placed -// first in the filter code, and incur less overhead (at the expense of -// lower-priority syscalls). -func (f *ScmpFilter) SetSyscallPriority(call ScmpSyscall, priority uint8) error { - f.lock.Lock() - defer f.lock.Unlock() - - if !f.valid { - return errBadFilter - } - - if retCode := C.seccomp_syscall_priority(f.filterCtx, C.int(call), - C.uint8_t(priority)); retCode != 0 { - return syscall.Errno(-1 * retCode) - } - - return nil -} - -// AddRule adds a single rule for an unconditional action on a syscall. -// Accepts the number of the syscall and the action to be taken on the call -// being made. -// Returns an error if an issue was encountered adding the rule. 
-func (f *ScmpFilter) AddRule(call ScmpSyscall, action ScmpAction) error { - return f.addRuleGeneric(call, action, false, nil) -} - -// AddRuleExact adds a single rule for an unconditional action on a syscall. -// Accepts the number of the syscall and the action to be taken on the call -// being made. -// No modifications will be made to the rule, and it will fail to add if it -// cannot be applied to the current architecture without modification. -// The rule will function exactly as described, but it may not function identically -// (or be able to be applied to) all architectures. -// Returns an error if an issue was encountered adding the rule. -func (f *ScmpFilter) AddRuleExact(call ScmpSyscall, action ScmpAction) error { - return f.addRuleGeneric(call, action, true, nil) -} - -// AddRuleConditional adds a single rule for a conditional action on a syscall. -// Returns an error if an issue was encountered adding the rule. -// All conditions must match for the rule to match. -// There is a bug in library versions below v2.2.1 which can, in some cases, -// cause conditions to be lost when more than one are used. Consequently, -// AddRuleConditional is disabled on library versions lower than v2.2.1 -func (f *ScmpFilter) AddRuleConditional(call ScmpSyscall, action ScmpAction, conds []ScmpCondition) error { - return f.addRuleGeneric(call, action, false, conds) -} - -// AddRuleConditionalExact adds a single rule for a conditional action on a -// syscall. -// No modifications will be made to the rule, and it will fail to add if it -// cannot be applied to the current architecture without modification. -// The rule will function exactly as described, but it may not function identically -// (or be able to be applied to) all architectures. -// Returns an error if an issue was encountered adding the rule. -// There is a bug in library versions below v2.2.1 which can, in some cases, -// cause conditions to be lost when more than one are used. Consequently, -// AddRuleConditionalExact is disabled on library versions lower than v2.2.1 -func (f *ScmpFilter) AddRuleConditionalExact(call ScmpSyscall, action ScmpAction, conds []ScmpCondition) error { - return f.addRuleGeneric(call, action, true, conds) -} - -// ExportPFC output PFC-formatted, human-readable dump of a filter context's -// rules to a file. -// Accepts file to write to (must be open for writing). -// Returns an error if writing to the file fails. -func (f *ScmpFilter) ExportPFC(file *os.File) error { - f.lock.Lock() - defer f.lock.Unlock() - - fd := file.Fd() - - if !f.valid { - return errBadFilter - } - - if retCode := C.seccomp_export_pfc(f.filterCtx, C.int(fd)); retCode != 0 { - return syscall.Errno(-1 * retCode) - } - - return nil -} - -// ExportBPF outputs Berkeley Packet Filter-formatted, kernel-readable dump of a -// filter context's rules to a file. -// Accepts file to write to (must be open for writing). -// Returns an error if writing to the file fails. 
-func (f *ScmpFilter) ExportBPF(file *os.File) error { - f.lock.Lock() - defer f.lock.Unlock() - - fd := file.Fd() - - if !f.valid { - return errBadFilter - } - - if retCode := C.seccomp_export_bpf(f.filterCtx, C.int(fd)); retCode != 0 { - return syscall.Errno(-1 * retCode) - } - - return nil -} diff --git a/vendor/github.com/seccomp/libseccomp-golang/seccomp_internal.go b/vendor/github.com/seccomp/libseccomp-golang/seccomp_internal.go deleted file mode 100644 index 5b6a79ada..000000000 --- a/vendor/github.com/seccomp/libseccomp-golang/seccomp_internal.go +++ /dev/null @@ -1,508 +0,0 @@ -// +build linux - -// Internal functions for libseccomp Go bindings -// No exported functions - -package seccomp - -import ( - "fmt" - "syscall" -) - -// Unexported C wrapping code - provides the C-Golang interface -// Get the seccomp header in scope -// Need stdlib.h for free() on cstrings - -// #cgo pkg-config: libseccomp -/* -#include -#include - -#if SCMP_VER_MAJOR < 2 -#error Minimum supported version of Libseccomp is v2.2.0 -#elif SCMP_VER_MAJOR == 2 && SCMP_VER_MINOR < 2 -#error Minimum supported version of Libseccomp is v2.2.0 -#endif - -#define ARCH_BAD ~0 - -const uint32_t C_ARCH_BAD = ARCH_BAD; - -#ifndef SCMP_ARCH_PPC -#define SCMP_ARCH_PPC ARCH_BAD -#endif - -#ifndef SCMP_ARCH_PPC64 -#define SCMP_ARCH_PPC64 ARCH_BAD -#endif - -#ifndef SCMP_ARCH_PPC64LE -#define SCMP_ARCH_PPC64LE ARCH_BAD -#endif - -#ifndef SCMP_ARCH_S390 -#define SCMP_ARCH_S390 ARCH_BAD -#endif - -#ifndef SCMP_ARCH_S390X -#define SCMP_ARCH_S390X ARCH_BAD -#endif - -const uint32_t C_ARCH_NATIVE = SCMP_ARCH_NATIVE; -const uint32_t C_ARCH_X86 = SCMP_ARCH_X86; -const uint32_t C_ARCH_X86_64 = SCMP_ARCH_X86_64; -const uint32_t C_ARCH_X32 = SCMP_ARCH_X32; -const uint32_t C_ARCH_ARM = SCMP_ARCH_ARM; -const uint32_t C_ARCH_AARCH64 = SCMP_ARCH_AARCH64; -const uint32_t C_ARCH_MIPS = SCMP_ARCH_MIPS; -const uint32_t C_ARCH_MIPS64 = SCMP_ARCH_MIPS64; -const uint32_t C_ARCH_MIPS64N32 = SCMP_ARCH_MIPS64N32; -const uint32_t C_ARCH_MIPSEL = SCMP_ARCH_MIPSEL; -const uint32_t C_ARCH_MIPSEL64 = SCMP_ARCH_MIPSEL64; -const uint32_t C_ARCH_MIPSEL64N32 = SCMP_ARCH_MIPSEL64N32; -const uint32_t C_ARCH_PPC = SCMP_ARCH_PPC; -const uint32_t C_ARCH_PPC64 = SCMP_ARCH_PPC64; -const uint32_t C_ARCH_PPC64LE = SCMP_ARCH_PPC64LE; -const uint32_t C_ARCH_S390 = SCMP_ARCH_S390; -const uint32_t C_ARCH_S390X = SCMP_ARCH_S390X; - -const uint32_t C_ACT_KILL = SCMP_ACT_KILL; -const uint32_t C_ACT_TRAP = SCMP_ACT_TRAP; -const uint32_t C_ACT_ERRNO = SCMP_ACT_ERRNO(0); -const uint32_t C_ACT_TRACE = SCMP_ACT_TRACE(0); -const uint32_t C_ACT_ALLOW = SCMP_ACT_ALLOW; - -const uint32_t C_ATTRIBUTE_DEFAULT = (uint32_t)SCMP_FLTATR_ACT_DEFAULT; -const uint32_t C_ATTRIBUTE_BADARCH = (uint32_t)SCMP_FLTATR_ACT_BADARCH; -const uint32_t C_ATTRIBUTE_NNP = (uint32_t)SCMP_FLTATR_CTL_NNP; -const uint32_t C_ATTRIBUTE_TSYNC = (uint32_t)SCMP_FLTATR_CTL_TSYNC; - -const int C_CMP_NE = (int)SCMP_CMP_NE; -const int C_CMP_LT = (int)SCMP_CMP_LT; -const int C_CMP_LE = (int)SCMP_CMP_LE; -const int C_CMP_EQ = (int)SCMP_CMP_EQ; -const int C_CMP_GE = (int)SCMP_CMP_GE; -const int C_CMP_GT = (int)SCMP_CMP_GT; -const int C_CMP_MASKED_EQ = (int)SCMP_CMP_MASKED_EQ; - -const int C_VERSION_MAJOR = SCMP_VER_MAJOR; -const int C_VERSION_MINOR = SCMP_VER_MINOR; -const int C_VERSION_MICRO = SCMP_VER_MICRO; - -#if SCMP_VER_MAJOR == 2 && SCMP_VER_MINOR >= 3 -unsigned int get_major_version() -{ - return seccomp_version()->major; -} - -unsigned int get_minor_version() -{ - return seccomp_version()->minor; -} - -unsigned 
int get_micro_version() -{ - return seccomp_version()->micro; -} -#else -unsigned int get_major_version() -{ - return (unsigned int)C_VERSION_MAJOR; -} - -unsigned int get_minor_version() -{ - return (unsigned int)C_VERSION_MINOR; -} - -unsigned int get_micro_version() -{ - return (unsigned int)C_VERSION_MICRO; -} -#endif - -typedef struct scmp_arg_cmp* scmp_cast_t; - -void* make_arg_cmp_array(unsigned int length) -{ - return calloc(length, sizeof(struct scmp_arg_cmp)); -} - -// Wrapper to add an scmp_arg_cmp struct to an existing arg_cmp array -void add_struct_arg_cmp( - struct scmp_arg_cmp* arr, - unsigned int pos, - unsigned int arg, - int compare, - uint64_t a, - uint64_t b - ) -{ - arr[pos].arg = arg; - arr[pos].op = compare; - arr[pos].datum_a = a; - arr[pos].datum_b = b; - - return; -} -*/ -import "C" - -// Nonexported types -type scmpFilterAttr uint32 - -// Nonexported constants - -const ( - filterAttrActDefault scmpFilterAttr = iota - filterAttrActBadArch scmpFilterAttr = iota - filterAttrNNP scmpFilterAttr = iota - filterAttrTsync scmpFilterAttr = iota -) - -const ( - // An error return from certain libseccomp functions - scmpError C.int = -1 - // Comparison boundaries to check for architecture validity - archStart ScmpArch = ArchNative - archEnd ScmpArch = ArchS390X - // Comparison boundaries to check for action validity - actionStart ScmpAction = ActKill - actionEnd ScmpAction = ActAllow - // Comparison boundaries to check for comparison operator validity - compareOpStart ScmpCompareOp = CompareNotEqual - compareOpEnd ScmpCompareOp = CompareMaskedEqual -) - -var ( - // Error thrown on bad filter context - errBadFilter = fmt.Errorf("filter is invalid or uninitialized") - // Constants representing library major, minor, and micro versions - verMajor = uint(C.get_major_version()) - verMinor = uint(C.get_minor_version()) - verMicro = uint(C.get_micro_version()) -) - -// Nonexported functions - -// Check if library version is greater than or equal to the given one -func checkVersionAbove(major, minor, micro uint) bool { - return (verMajor > major) || - (verMajor == major && verMinor > minor) || - (verMajor == major && verMinor == minor && verMicro >= micro) -} - -// Ensure that the library is supported, i.e. >= 2.2.0. -func ensureSupportedVersion() error { - if !checkVersionAbove(2, 2, 0) { - return VersionError{} - } - return nil -} - -// Filter helpers - -// Filter finalizer - ensure that kernel context for filters is freed -func filterFinalizer(f *ScmpFilter) { - f.Release() -} - -// Get a raw filter attribute -func (f *ScmpFilter) getFilterAttr(attr scmpFilterAttr) (C.uint32_t, error) { - f.lock.Lock() - defer f.lock.Unlock() - - if !f.valid { - return 0x0, errBadFilter - } - - var attribute C.uint32_t - - retCode := C.seccomp_attr_get(f.filterCtx, attr.toNative(), &attribute) - if retCode != 0 { - return 0x0, syscall.Errno(-1 * retCode) - } - - return attribute, nil -} - -// Set a raw filter attribute -func (f *ScmpFilter) setFilterAttr(attr scmpFilterAttr, value C.uint32_t) error { - f.lock.Lock() - defer f.lock.Unlock() - - if !f.valid { - return errBadFilter - } - - retCode := C.seccomp_attr_set(f.filterCtx, attr.toNative(), value) - if retCode != 0 { - return syscall.Errno(-1 * retCode) - } - - return nil -} - -// DOES NOT LOCK OR CHECK VALIDITY -// Assumes caller has already done this -// Wrapper for seccomp_rule_add_... 
functions -func (f *ScmpFilter) addRuleWrapper(call ScmpSyscall, action ScmpAction, exact bool, length C.uint, cond C.scmp_cast_t) error { - if length != 0 && cond == nil { - return fmt.Errorf("null conditions list, but length is nonzero") - } - - var retCode C.int - if exact { - retCode = C.seccomp_rule_add_exact_array(f.filterCtx, action.toNative(), C.int(call), length, cond) - } else { - retCode = C.seccomp_rule_add_array(f.filterCtx, action.toNative(), C.int(call), length, cond) - } - - if syscall.Errno(-1*retCode) == syscall.EFAULT { - return fmt.Errorf("unrecognized syscall") - } else if syscall.Errno(-1*retCode) == syscall.EPERM { - return fmt.Errorf("requested action matches default action of filter") - } else if syscall.Errno(-1*retCode) == syscall.EINVAL { - return fmt.Errorf("two checks on same syscall argument") - } else if retCode != 0 { - return syscall.Errno(-1 * retCode) - } - - return nil -} - -// Generic add function for filter rules -func (f *ScmpFilter) addRuleGeneric(call ScmpSyscall, action ScmpAction, exact bool, conds []ScmpCondition) error { - f.lock.Lock() - defer f.lock.Unlock() - - if !f.valid { - return errBadFilter - } - - if len(conds) == 0 { - if err := f.addRuleWrapper(call, action, exact, 0, nil); err != nil { - return err - } - } else { - // We don't support conditional filtering in library version v2.1 - if !checkVersionAbove(2, 2, 1) { - return VersionError{ - message: "conditional filtering is not supported", - minimum: "2.2.1", - } - } - - argsArr := C.make_arg_cmp_array(C.uint(len(conds))) - if argsArr == nil { - return fmt.Errorf("error allocating memory for conditions") - } - defer C.free(argsArr) - - for i, cond := range conds { - C.add_struct_arg_cmp(C.scmp_cast_t(argsArr), C.uint(i), - C.uint(cond.Argument), cond.Op.toNative(), - C.uint64_t(cond.Operand1), C.uint64_t(cond.Operand2)) - } - - if err := f.addRuleWrapper(call, action, exact, C.uint(len(conds)), C.scmp_cast_t(argsArr)); err != nil { - return err - } - } - - return nil -} - -// Generic Helpers - -// Helper - Sanitize Arch token input -func sanitizeArch(in ScmpArch) error { - if in < archStart || in > archEnd { - return fmt.Errorf("unrecognized architecture") - } - - if in.toNative() == C.C_ARCH_BAD { - return fmt.Errorf("architecture is not supported on this version of the library") - } - - return nil -} - -func sanitizeAction(in ScmpAction) error { - inTmp := in & 0x0000FFFF - if inTmp < actionStart || inTmp > actionEnd { - return fmt.Errorf("unrecognized action") - } - - if inTmp != ActTrace && inTmp != ActErrno && (in&0xFFFF0000) != 0 { - return fmt.Errorf("highest 16 bits must be zeroed except for Trace and Errno") - } - - return nil -} - -func sanitizeCompareOp(in ScmpCompareOp) error { - if in < compareOpStart || in > compareOpEnd { - return fmt.Errorf("unrecognized comparison operator") - } - - return nil -} - -func archFromNative(a C.uint32_t) (ScmpArch, error) { - switch a { - case C.C_ARCH_X86: - return ArchX86, nil - case C.C_ARCH_X86_64: - return ArchAMD64, nil - case C.C_ARCH_X32: - return ArchX32, nil - case C.C_ARCH_ARM: - return ArchARM, nil - case C.C_ARCH_NATIVE: - return ArchNative, nil - case C.C_ARCH_AARCH64: - return ArchARM64, nil - case C.C_ARCH_MIPS: - return ArchMIPS, nil - case C.C_ARCH_MIPS64: - return ArchMIPS64, nil - case C.C_ARCH_MIPS64N32: - return ArchMIPS64N32, nil - case C.C_ARCH_MIPSEL: - return ArchMIPSEL, nil - case C.C_ARCH_MIPSEL64: - return ArchMIPSEL64, nil - case C.C_ARCH_MIPSEL64N32: - return ArchMIPSEL64N32, nil - case C.C_ARCH_PPC: - 
return ArchPPC, nil - case C.C_ARCH_PPC64: - return ArchPPC64, nil - case C.C_ARCH_PPC64LE: - return ArchPPC64LE, nil - case C.C_ARCH_S390: - return ArchS390, nil - case C.C_ARCH_S390X: - return ArchS390X, nil - default: - return 0x0, fmt.Errorf("unrecognized architecture") - } -} - -// Only use with sanitized arches, no error handling -func (a ScmpArch) toNative() C.uint32_t { - switch a { - case ArchX86: - return C.C_ARCH_X86 - case ArchAMD64: - return C.C_ARCH_X86_64 - case ArchX32: - return C.C_ARCH_X32 - case ArchARM: - return C.C_ARCH_ARM - case ArchARM64: - return C.C_ARCH_AARCH64 - case ArchMIPS: - return C.C_ARCH_MIPS - case ArchMIPS64: - return C.C_ARCH_MIPS64 - case ArchMIPS64N32: - return C.C_ARCH_MIPS64N32 - case ArchMIPSEL: - return C.C_ARCH_MIPSEL - case ArchMIPSEL64: - return C.C_ARCH_MIPSEL64 - case ArchMIPSEL64N32: - return C.C_ARCH_MIPSEL64N32 - case ArchPPC: - return C.C_ARCH_PPC - case ArchPPC64: - return C.C_ARCH_PPC64 - case ArchPPC64LE: - return C.C_ARCH_PPC64LE - case ArchS390: - return C.C_ARCH_S390 - case ArchS390X: - return C.C_ARCH_S390X - case ArchNative: - return C.C_ARCH_NATIVE - default: - return 0x0 - } -} - -// Only use with sanitized ops, no error handling -func (a ScmpCompareOp) toNative() C.int { - switch a { - case CompareNotEqual: - return C.C_CMP_NE - case CompareLess: - return C.C_CMP_LT - case CompareLessOrEqual: - return C.C_CMP_LE - case CompareEqual: - return C.C_CMP_EQ - case CompareGreaterEqual: - return C.C_CMP_GE - case CompareGreater: - return C.C_CMP_GT - case CompareMaskedEqual: - return C.C_CMP_MASKED_EQ - default: - return 0x0 - } -} - -func actionFromNative(a C.uint32_t) (ScmpAction, error) { - aTmp := a & 0xFFFF - switch a & 0xFFFF0000 { - case C.C_ACT_KILL: - return ActKill, nil - case C.C_ACT_TRAP: - return ActTrap, nil - case C.C_ACT_ERRNO: - return ActErrno.SetReturnCode(int16(aTmp)), nil - case C.C_ACT_TRACE: - return ActTrace.SetReturnCode(int16(aTmp)), nil - case C.C_ACT_ALLOW: - return ActAllow, nil - default: - return 0x0, fmt.Errorf("unrecognized action") - } -} - -// Only use with sanitized actions, no error handling -func (a ScmpAction) toNative() C.uint32_t { - switch a & 0xFFFF { - case ActKill: - return C.C_ACT_KILL - case ActTrap: - return C.C_ACT_TRAP - case ActErrno: - return C.C_ACT_ERRNO | (C.uint32_t(a) >> 16) - case ActTrace: - return C.C_ACT_TRACE | (C.uint32_t(a) >> 16) - case ActAllow: - return C.C_ACT_ALLOW - default: - return 0x0 - } -} - -// Internal only, assumes safe attribute -func (a scmpFilterAttr) toNative() uint32 { - switch a { - case filterAttrActDefault: - return uint32(C.C_ATTRIBUTE_DEFAULT) - case filterAttrActBadArch: - return uint32(C.C_ATTRIBUTE_BADARCH) - case filterAttrNNP: - return uint32(C.C_ATTRIBUTE_NNP) - case filterAttrTsync: - return uint32(C.C_ATTRIBUTE_TSYNC) - default: - return 0x0 - } -} diff --git a/vendor/github.com/vishvananda/netlink/LICENSE b/vendor/github.com/vishvananda/netlink/LICENSE deleted file mode 100644 index 9f64db858..000000000 --- a/vendor/github.com/vishvananda/netlink/LICENSE +++ /dev/null @@ -1,192 +0,0 @@ - - Apache License - Version 2.0, January 2004 - http://www.apache.org/licenses/ - - TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION - - 1. Definitions. - - "License" shall mean the terms and conditions for use, reproduction, - and distribution as defined by Sections 1 through 9 of this document. - - "Licensor" shall mean the copyright owner or entity authorized by - the copyright owner that is granting the License. 
- - "Legal Entity" shall mean the union of the acting entity and all - other entities that control, are controlled by, or are under common - control with that entity. For the purposes of this definition, - "control" means (i) the power, direct or indirect, to cause the - direction or management of such entity, whether by contract or - otherwise, or (ii) ownership of fifty percent (50%) or more of the - outstanding shares, or (iii) beneficial ownership of such entity. - - "You" (or "Your") shall mean an individual or Legal Entity - exercising permissions granted by this License. - - "Source" form shall mean the preferred form for making modifications, - including but not limited to software source code, documentation - source, and configuration files. - - "Object" form shall mean any form resulting from mechanical - transformation or translation of a Source form, including but - not limited to compiled object code, generated documentation, - and conversions to other media types. - - "Work" shall mean the work of authorship, whether in Source or - Object form, made available under the License, as indicated by a - copyright notice that is included in or attached to the work - (an example is provided in the Appendix below). - - "Derivative Works" shall mean any work, whether in Source or Object - form, that is based on (or derived from) the Work and for which the - editorial revisions, annotations, elaborations, or other modifications - represent, as a whole, an original work of authorship. For the purposes - of this License, Derivative Works shall not include works that remain - separable from, or merely link (or bind by name) to the interfaces of, - the Work and Derivative Works thereof. - - "Contribution" shall mean any work of authorship, including - the original version of the Work and any modifications or additions - to that Work or Derivative Works thereof, that is intentionally - submitted to Licensor for inclusion in the Work by the copyright owner - or by an individual or Legal Entity authorized to submit on behalf of - the copyright owner. For the purposes of this definition, "submitted" - means any form of electronic, verbal, or written communication sent - to the Licensor or its representatives, including but not limited to - communication on electronic mailing lists, source code control systems, - and issue tracking systems that are managed by, or on behalf of, the - Licensor for the purpose of discussing and improving the Work, but - excluding communication that is conspicuously marked or otherwise - designated in writing by the copyright owner as "Not a Contribution." - - "Contributor" shall mean Licensor and any individual or Legal Entity - on behalf of whom a Contribution has been received by Licensor and - subsequently incorporated within the Work. - - 2. Grant of Copyright License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - copyright license to reproduce, prepare Derivative Works of, - publicly display, publicly perform, sublicense, and distribute the - Work and such Derivative Works in Source or Object form. - - 3. Grant of Patent License. 
Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - (except as stated in this section) patent license to make, have made, - use, offer to sell, sell, import, and otherwise transfer the Work, - where such license applies only to those patent claims licensable - by such Contributor that are necessarily infringed by their - Contribution(s) alone or by combination of their Contribution(s) - with the Work to which such Contribution(s) was submitted. If You - institute patent litigation against any entity (including a - cross-claim or counterclaim in a lawsuit) alleging that the Work - or a Contribution incorporated within the Work constitutes direct - or contributory patent infringement, then any patent licenses - granted to You under this License for that Work shall terminate - as of the date such litigation is filed. - - 4. Redistribution. You may reproduce and distribute copies of the - Work or Derivative Works thereof in any medium, with or without - modifications, and in Source or Object form, provided that You - meet the following conditions: - - (a) You must give any other recipients of the Work or - Derivative Works a copy of this License; and - - (b) You must cause any modified files to carry prominent notices - stating that You changed the files; and - - (c) You must retain, in the Source form of any Derivative Works - that You distribute, all copyright, patent, trademark, and - attribution notices from the Source form of the Work, - excluding those notices that do not pertain to any part of - the Derivative Works; and - - (d) If the Work includes a "NOTICE" text file as part of its - distribution, then any Derivative Works that You distribute must - include a readable copy of the attribution notices contained - within such NOTICE file, excluding those notices that do not - pertain to any part of the Derivative Works, in at least one - of the following places: within a NOTICE text file distributed - as part of the Derivative Works; within the Source form or - documentation, if provided along with the Derivative Works; or, - within a display generated by the Derivative Works, if and - wherever such third-party notices normally appear. The contents - of the NOTICE file are for informational purposes only and - do not modify the License. You may add Your own attribution - notices within Derivative Works that You distribute, alongside - or as an addendum to the NOTICE text from the Work, provided - that such additional attribution notices cannot be construed - as modifying the License. - - You may add Your own copyright statement to Your modifications and - may provide additional or different license terms and conditions - for use, reproduction, or distribution of Your modifications, or - for any such Derivative Works as a whole, provided Your use, - reproduction, and distribution of the Work otherwise complies with - the conditions stated in this License. - - 5. Submission of Contributions. Unless You explicitly state otherwise, - any Contribution intentionally submitted for inclusion in the Work - by You to the Licensor shall be under the terms and conditions of - this License, without any additional terms or conditions. - Notwithstanding the above, nothing herein shall supersede or modify - the terms of any separate license agreement you may have executed - with Licensor regarding such Contributions. - - 6. Trademarks. 
This License does not grant permission to use the trade - names, trademarks, service marks, or product names of the Licensor, - except as required for reasonable and customary use in describing the - origin of the Work and reproducing the content of the NOTICE file. - - 7. Disclaimer of Warranty. Unless required by applicable law or - agreed to in writing, Licensor provides the Work (and each - Contributor provides its Contributions) on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or - implied, including, without limitation, any warranties or conditions - of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A - PARTICULAR PURPOSE. You are solely responsible for determining the - appropriateness of using or redistributing the Work and assume any - risks associated with Your exercise of permissions under this License. - - 8. Limitation of Liability. In no event and under no legal theory, - whether in tort (including negligence), contract, or otherwise, - unless required by applicable law (such as deliberate and grossly - negligent acts) or agreed to in writing, shall any Contributor be - liable to You for damages, including any direct, indirect, special, - incidental, or consequential damages of any character arising as a - result of this License or out of the use or inability to use the - Work (including but not limited to damages for loss of goodwill, - work stoppage, computer failure or malfunction, or any and all - other commercial damages or losses), even if such Contributor - has been advised of the possibility of such damages. - - 9. Accepting Warranty or Additional Liability. While redistributing - the Work or Derivative Works thereof, You may choose to offer, - and charge a fee for, acceptance of support, warranty, indemnity, - or other liability obligations and/or rights consistent with this - License. However, in accepting such obligations, You may act only - on Your own behalf and on Your sole responsibility, not on behalf - of any other Contributor, and only if You agree to indemnify, - defend, and hold each Contributor harmless for any liability - incurred by, or claims asserted against, such Contributor by reason - of your accepting any such warranty or additional liability. - - END OF TERMS AND CONDITIONS - - Copyright 2014 Vishvananda Ishaya. - Copyright 2014 Docker, Inc. - - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. diff --git a/vendor/github.com/vishvananda/netlink/README.md b/vendor/github.com/vishvananda/netlink/README.md deleted file mode 100644 index 8cd50a93b..000000000 --- a/vendor/github.com/vishvananda/netlink/README.md +++ /dev/null @@ -1,89 +0,0 @@ -# netlink - netlink library for go # - -[![Build Status](https://travis-ci.org/vishvananda/netlink.png?branch=master)](https://travis-ci.org/vishvananda/netlink) [![GoDoc](https://godoc.org/github.com/vishvananda/netlink?status.svg)](https://godoc.org/github.com/vishvananda/netlink) - -The netlink package provides a simple netlink library for go. 
Netlink -is the interface a user-space program in linux uses to communicate with -the kernel. It can be used to add and remove interfaces, set ip addresses -and routes, and configure ipsec. Netlink communication requires elevated -privileges, so in most cases this code needs to be run as root. Since -low-level netlink messages are inscrutable at best, the library attempts -to provide an api that is loosely modeled on the CLI provied by iproute2. -Actions like `ip link add` will be accomplished via a similarly named -function like AddLink(). This library began its life as a fork of the -netlink functionality in -[docker/libcontainer](https://github.com/docker/libcontainer) but was -heavily rewritten to improve testability, performance, and to add new -functionality like ipsec xfrm handling. - -## Local Build and Test ## - -You can use go get command: - - go get github.com/vishvananda/netlink - -Testing dependencies: - - go get github.com/vishvananda/netns - -Testing (requires root): - - sudo -E go test github.com/vishvananda/netlink - -## Examples ## - -Add a new bridge and add eth1 into it: - -```go -package main - -import ( - "net" - "github.com/vishvananda/netlink" -) - -func main() { - la := netlink.NewLinkAttrs() - la.Name = "foo" - mybridge := &netlink.Bridge{la}} - _ := netlink.LinkAdd(mybridge) - eth1, _ := netlink.LinkByName("eth1") - netlink.LinkSetMaster(eth1, mybridge) -} - -``` -Note `NewLinkAttrs` constructor, it sets default values in structure. For now -it sets only `TxQLen` to `-1`, so kernel will set default by itself. If you're -using simple initialization(`LinkAttrs{Name: "foo"}`) `TxQLen` will be set to -`0` unless you specify it like `LinkAttrs{Name: "foo", TxQLen: 1000}`. - -Add a new ip address to loopback: - -```go -package main - -import ( - "net" - "github.com/vishvananda/netlink" -) - -func main() { - lo, _ := netlink.LinkByName("lo") - addr, _ := netlink.ParseAddr("169.254.169.254/32") - netlink.AddrAdd(lo, addr) -} - -``` - -## Future Work ## - -Many pieces of netlink are not yet fully supported in the high-level -interface. Aspects of virtually all of the high-level objects don't exist. -Many of the underlying primitives are there, so its a matter of putting -the right fields into the high-level objects and making sure that they -are serialized and deserialized correctly in the Add and List methods. - -There are also a few pieces of low level netlink functionality that still -need to be implemented. Routing rules are not in place and some of the -more advanced link types. Hopefully there is decent structure and testing -in place to make these fairly straightforward to add. diff --git a/vendor/github.com/vishvananda/netlink/addr.go b/vendor/github.com/vishvananda/netlink/addr.go deleted file mode 100644 index 9bbaf508e..000000000 --- a/vendor/github.com/vishvananda/netlink/addr.go +++ /dev/null @@ -1,43 +0,0 @@ -package netlink - -import ( - "fmt" - "net" - "strings" -) - -// Addr represents an IP address from netlink. Netlink ip addresses -// include a mask, so it stores the address as a net.IPNet. -type Addr struct { - *net.IPNet - Label string -} - -// String returns $ip/$netmask $label -func (a Addr) String() string { - return fmt.Sprintf("%s %s", a.IPNet, a.Label) -} - -// ParseAddr parses the string representation of an address in the -// form $ip/$netmask $label. 
The label portion is optional -func ParseAddr(s string) (*Addr, error) { - label := "" - parts := strings.Split(s, " ") - if len(parts) > 1 { - s = parts[0] - label = parts[1] - } - m, err := ParseIPNet(s) - if err != nil { - return nil, err - } - return &Addr{IPNet: m, Label: label}, nil -} - -// Equal returns true if both Addrs have the same net.IPNet value. -func (a Addr) Equal(x Addr) bool { - sizea, _ := a.Mask.Size() - sizeb, _ := x.Mask.Size() - // ignore label for comparison - return a.IP.Equal(x.IP) && sizea == sizeb -} diff --git a/vendor/github.com/vishvananda/netlink/addr_linux.go b/vendor/github.com/vishvananda/netlink/addr_linux.go deleted file mode 100644 index 19aac0fb9..000000000 --- a/vendor/github.com/vishvananda/netlink/addr_linux.go +++ /dev/null @@ -1,128 +0,0 @@ -package netlink - -import ( - "fmt" - "net" - "strings" - "syscall" - - "github.com/vishvananda/netlink/nl" -) - -// AddrAdd will add an IP address to a link device. -// Equivalent to: `ip addr add $addr dev $link` -func AddrAdd(link Link, addr *Addr) error { - - req := nl.NewNetlinkRequest(syscall.RTM_NEWADDR, syscall.NLM_F_CREATE|syscall.NLM_F_EXCL|syscall.NLM_F_ACK) - return addrHandle(link, addr, req) -} - -// AddrDel will delete an IP address from a link device. -// Equivalent to: `ip addr del $addr dev $link` -func AddrDel(link Link, addr *Addr) error { - req := nl.NewNetlinkRequest(syscall.RTM_DELADDR, syscall.NLM_F_ACK) - return addrHandle(link, addr, req) -} - -func addrHandle(link Link, addr *Addr, req *nl.NetlinkRequest) error { - base := link.Attrs() - if addr.Label != "" && !strings.HasPrefix(addr.Label, base.Name) { - return fmt.Errorf("label must begin with interface name") - } - ensureIndex(base) - - family := nl.GetIPFamily(addr.IP) - - msg := nl.NewIfAddrmsg(family) - msg.Index = uint32(base.Index) - prefixlen, _ := addr.Mask.Size() - msg.Prefixlen = uint8(prefixlen) - req.AddData(msg) - - var addrData []byte - if family == FAMILY_V4 { - addrData = addr.IP.To4() - } else { - addrData = addr.IP.To16() - } - - localData := nl.NewRtAttr(syscall.IFA_LOCAL, addrData) - req.AddData(localData) - - addressData := nl.NewRtAttr(syscall.IFA_ADDRESS, addrData) - req.AddData(addressData) - - if addr.Label != "" { - labelData := nl.NewRtAttr(syscall.IFA_LABEL, nl.ZeroTerminated(addr.Label)) - req.AddData(labelData) - } - - _, err := req.Execute(syscall.NETLINK_ROUTE, 0) - return err -} - -// AddrList gets a list of IP addresses in the system. -// Equivalent to: `ip addr show`. -// The list can be filtered by link and ip family. 
-func AddrList(link Link, family int) ([]Addr, error) { - req := nl.NewNetlinkRequest(syscall.RTM_GETADDR, syscall.NLM_F_DUMP) - msg := nl.NewIfInfomsg(family) - req.AddData(msg) - - msgs, err := req.Execute(syscall.NETLINK_ROUTE, syscall.RTM_NEWADDR) - if err != nil { - return nil, err - } - - index := 0 - if link != nil { - base := link.Attrs() - ensureIndex(base) - index = base.Index - } - - var res []Addr - for _, m := range msgs { - msg := nl.DeserializeIfAddrmsg(m) - - if link != nil && msg.Index != uint32(index) { - // Ignore messages from other interfaces - continue - } - - attrs, err := nl.ParseRouteAttr(m[msg.Len():]) - if err != nil { - return nil, err - } - - var local, dst *net.IPNet - var addr Addr - for _, attr := range attrs { - switch attr.Attr.Type { - case syscall.IFA_ADDRESS: - dst = &net.IPNet{ - IP: attr.Value, - Mask: net.CIDRMask(int(msg.Prefixlen), 8*len(attr.Value)), - } - case syscall.IFA_LOCAL: - local = &net.IPNet{ - IP: attr.Value, - Mask: net.CIDRMask(int(msg.Prefixlen), 8*len(attr.Value)), - } - case syscall.IFA_LABEL: - addr.Label = string(attr.Value[:len(attr.Value)-1]) - } - } - - // IFA_LOCAL should be there but if not, fall back to IFA_ADDRESS - if local != nil { - addr.IPNet = local - } else { - addr.IPNet = dst - } - - res = append(res, addr) - } - - return res, nil -} diff --git a/vendor/github.com/vishvananda/netlink/filter.go b/vendor/github.com/vishvananda/netlink/filter.go deleted file mode 100644 index 83ad7007b..000000000 --- a/vendor/github.com/vishvananda/netlink/filter.go +++ /dev/null @@ -1,55 +0,0 @@ -package netlink - -import ( - "fmt" -) - -type Filter interface { - Attrs() *FilterAttrs - Type() string -} - -// Filter represents a netlink filter. A filter is associated with a link, -// has a handle and a parent. The root filter of a device should have a -// parent == HANDLE_ROOT. -type FilterAttrs struct { - LinkIndex int - Handle uint32 - Parent uint32 - Priority uint16 // lower is higher priority - Protocol uint16 // syscall.ETH_P_* -} - -func (q FilterAttrs) String() string { - return fmt.Sprintf("{LinkIndex: %d, Handle: %s, Parent: %s, Priority: %d, Protocol: %d}", q.LinkIndex, HandleStr(q.Handle), HandleStr(q.Parent), q.Priority, q.Protocol) -} - -// U32 filters on many packet related properties -type U32 struct { - FilterAttrs - // Currently only supports redirecting to another interface - RedirIndex int -} - -func (filter *U32) Attrs() *FilterAttrs { - return &filter.FilterAttrs -} - -func (filter *U32) Type() string { - return "u32" -} - -// GenericFilter filters represent types that are not currently understood -// by this netlink library. -type GenericFilter struct { - FilterAttrs - FilterType string -} - -func (filter *GenericFilter) Attrs() *FilterAttrs { - return &filter.FilterAttrs -} - -func (filter *GenericFilter) Type() string { - return filter.FilterType -} diff --git a/vendor/github.com/vishvananda/netlink/filter_linux.go b/vendor/github.com/vishvananda/netlink/filter_linux.go deleted file mode 100644 index 1ec698702..000000000 --- a/vendor/github.com/vishvananda/netlink/filter_linux.go +++ /dev/null @@ -1,191 +0,0 @@ -package netlink - -import ( - "fmt" - "syscall" - - "github.com/vishvananda/netlink/nl" -) - -// FilterDel will delete a filter from the system. 
-// Equivalent to: `tc filter del $filter` -func FilterDel(filter Filter) error { - req := nl.NewNetlinkRequest(syscall.RTM_DELTFILTER, syscall.NLM_F_ACK) - base := filter.Attrs() - msg := &nl.TcMsg{ - Family: nl.FAMILY_ALL, - Ifindex: int32(base.LinkIndex), - Handle: base.Handle, - Parent: base.Parent, - Info: MakeHandle(base.Priority, nl.Swap16(base.Protocol)), - } - req.AddData(msg) - - _, err := req.Execute(syscall.NETLINK_ROUTE, 0) - return err -} - -// FilterAdd will add a filter to the system. -// Equivalent to: `tc filter add $filter` -func FilterAdd(filter Filter) error { - req := nl.NewNetlinkRequest(syscall.RTM_NEWTFILTER, syscall.NLM_F_CREATE|syscall.NLM_F_EXCL|syscall.NLM_F_ACK) - base := filter.Attrs() - msg := &nl.TcMsg{ - Family: nl.FAMILY_ALL, - Ifindex: int32(base.LinkIndex), - Handle: base.Handle, - Parent: base.Parent, - Info: MakeHandle(base.Priority, nl.Swap16(base.Protocol)), - } - req.AddData(msg) - req.AddData(nl.NewRtAttr(nl.TCA_KIND, nl.ZeroTerminated(filter.Type()))) - - options := nl.NewRtAttr(nl.TCA_OPTIONS, nil) - if u32, ok := filter.(*U32); ok { - // match all - sel := nl.TcU32Sel{ - Nkeys: 1, - Flags: nl.TC_U32_TERMINAL, - } - sel.Keys = append(sel.Keys, nl.TcU32Key{}) - nl.NewRtAttrChild(options, nl.TCA_U32_SEL, sel.Serialize()) - actions := nl.NewRtAttrChild(options, nl.TCA_U32_ACT, nil) - table := nl.NewRtAttrChild(actions, nl.TCA_ACT_TAB, nil) - nl.NewRtAttrChild(table, nl.TCA_KIND, nl.ZeroTerminated("mirred")) - // redirect to other interface - mir := nl.TcMirred{ - Action: nl.TC_ACT_STOLEN, - Eaction: nl.TCA_EGRESS_REDIR, - Ifindex: uint32(u32.RedirIndex), - } - aopts := nl.NewRtAttrChild(table, nl.TCA_OPTIONS, nil) - nl.NewRtAttrChild(aopts, nl.TCA_MIRRED_PARMS, mir.Serialize()) - } - req.AddData(options) - _, err := req.Execute(syscall.NETLINK_ROUTE, 0) - return err -} - -// FilterList gets a list of filters in the system. -// Equivalent to: `tc filter show`. -// Generally retunrs nothing if link and parent are not specified. 
-func FilterList(link Link, parent uint32) ([]Filter, error) { - req := nl.NewNetlinkRequest(syscall.RTM_GETTFILTER, syscall.NLM_F_DUMP) - msg := &nl.TcMsg{ - Family: nl.FAMILY_ALL, - Parent: parent, - } - if link != nil { - base := link.Attrs() - ensureIndex(base) - msg.Ifindex = int32(base.Index) - } - req.AddData(msg) - - msgs, err := req.Execute(syscall.NETLINK_ROUTE, syscall.RTM_NEWTFILTER) - if err != nil { - return nil, err - } - - var res []Filter - for _, m := range msgs { - msg := nl.DeserializeTcMsg(m) - - attrs, err := nl.ParseRouteAttr(m[msg.Len():]) - if err != nil { - return nil, err - } - - base := FilterAttrs{ - LinkIndex: int(msg.Ifindex), - Handle: msg.Handle, - Parent: msg.Parent, - } - base.Priority, base.Protocol = MajorMinor(msg.Info) - base.Protocol = nl.Swap16(base.Protocol) - - var filter Filter - filterType := "" - detailed := false - for _, attr := range attrs { - switch attr.Attr.Type { - case nl.TCA_KIND: - filterType = string(attr.Value[:len(attr.Value)-1]) - switch filterType { - case "u32": - filter = &U32{} - default: - filter = &GenericFilter{FilterType: filterType} - } - case nl.TCA_OPTIONS: - switch filterType { - case "u32": - data, err := nl.ParseRouteAttr(attr.Value) - if err != nil { - return nil, err - } - detailed, err = parseU32Data(filter, data) - if err != nil { - return nil, err - } - } - } - } - // only return the detailed version of the filter - if detailed { - *filter.Attrs() = base - res = append(res, filter) - } - } - - return res, nil -} - -func parseU32Data(filter Filter, data []syscall.NetlinkRouteAttr) (bool, error) { - native = nl.NativeEndian() - u32 := filter.(*U32) - detailed := false - for _, datum := range data { - switch datum.Attr.Type { - case nl.TCA_U32_SEL: - detailed = true - sel := nl.DeserializeTcU32Sel(datum.Value) - // only parse if we have a very basic redirect - if sel.Flags&nl.TC_U32_TERMINAL == 0 || sel.Nkeys != 1 { - return detailed, nil - } - case nl.TCA_U32_ACT: - table, err := nl.ParseRouteAttr(datum.Value) - if err != nil { - return detailed, err - } - if len(table) != 1 || table[0].Attr.Type != nl.TCA_ACT_TAB { - return detailed, fmt.Errorf("Action table not formed properly") - } - aattrs, err := nl.ParseRouteAttr(table[0].Value) - for _, aattr := range aattrs { - switch aattr.Attr.Type { - case nl.TCA_KIND: - actionType := string(aattr.Value[:len(aattr.Value)-1]) - // only parse if the action is mirred - if actionType != "mirred" { - return detailed, nil - } - case nl.TCA_OPTIONS: - adata, err := nl.ParseRouteAttr(aattr.Value) - if err != nil { - return detailed, err - } - for _, adatum := range adata { - switch adatum.Attr.Type { - case nl.TCA_MIRRED_PARMS: - mir := nl.DeserializeTcMirred(adatum.Value) - u32.RedirIndex = int(mir.Ifindex) - } - } - } - } - } - } - return detailed, nil -} diff --git a/vendor/github.com/vishvananda/netlink/link.go b/vendor/github.com/vishvananda/netlink/link.go deleted file mode 100644 index 18fd1759a..000000000 --- a/vendor/github.com/vishvananda/netlink/link.go +++ /dev/null @@ -1,223 +0,0 @@ -package netlink - -import "net" - -// Link represents a link device from netlink. Shared link attributes -// like name may be retrieved using the Attrs() method. Unique data -// can be retrieved by casting the object to the proper type. 
-type Link interface { - Attrs() *LinkAttrs - Type() string -} - -type ( - NsPid int - NsFd int -) - -// LinkAttrs represents data shared by most link types -type LinkAttrs struct { - Index int - MTU int - TxQLen int // Transmit Queue Length - Name string - HardwareAddr net.HardwareAddr - Flags net.Flags - ParentIndex int // index of the parent link device - MasterIndex int // must be the index of a bridge - Namespace interface{} // nil | NsPid | NsFd -} - -// NewLinkAttrs returns LinkAttrs structure filled with default values -func NewLinkAttrs() LinkAttrs { - return LinkAttrs{ - TxQLen: -1, - } -} - -// Device links cannot be created via netlink. These links -// are links created by udev like 'lo' and 'etho0' -type Device struct { - LinkAttrs -} - -func (device *Device) Attrs() *LinkAttrs { - return &device.LinkAttrs -} - -func (device *Device) Type() string { - return "device" -} - -// Dummy links are dummy ethernet devices -type Dummy struct { - LinkAttrs -} - -func (dummy *Dummy) Attrs() *LinkAttrs { - return &dummy.LinkAttrs -} - -func (dummy *Dummy) Type() string { - return "dummy" -} - -// Ifb links are advanced dummy devices for packet filtering -type Ifb struct { - LinkAttrs -} - -func (ifb *Ifb) Attrs() *LinkAttrs { - return &ifb.LinkAttrs -} - -func (ifb *Ifb) Type() string { - return "ifb" -} - -// Bridge links are simple linux bridges -type Bridge struct { - LinkAttrs -} - -func (bridge *Bridge) Attrs() *LinkAttrs { - return &bridge.LinkAttrs -} - -func (bridge *Bridge) Type() string { - return "bridge" -} - -// Vlan links have ParentIndex set in their Attrs() -type Vlan struct { - LinkAttrs - VlanId int -} - -func (vlan *Vlan) Attrs() *LinkAttrs { - return &vlan.LinkAttrs -} - -func (vlan *Vlan) Type() string { - return "vlan" -} - -type MacvlanMode uint16 - -const ( - MACVLAN_MODE_DEFAULT MacvlanMode = iota - MACVLAN_MODE_PRIVATE - MACVLAN_MODE_VEPA - MACVLAN_MODE_BRIDGE - MACVLAN_MODE_PASSTHRU - MACVLAN_MODE_SOURCE -) - -// Macvlan links have ParentIndex set in their Attrs() -type Macvlan struct { - LinkAttrs - Mode MacvlanMode -} - -func (macvlan *Macvlan) Attrs() *LinkAttrs { - return &macvlan.LinkAttrs -} - -func (macvlan *Macvlan) Type() string { - return "macvlan" -} - -// Macvtap - macvtap is a virtual interfaces based on macvlan -type Macvtap struct { - Macvlan -} - -func (macvtap Macvtap) Type() string { - return "macvtap" -} - -// Veth devices must specify PeerName on create -type Veth struct { - LinkAttrs - PeerName string // veth on create only -} - -func (veth *Veth) Attrs() *LinkAttrs { - return &veth.LinkAttrs -} - -func (veth *Veth) Type() string { - return "veth" -} - -// GenericLink links represent types that are not currently understood -// by this netlink library. 
-type GenericLink struct { - LinkAttrs - LinkType string -} - -func (generic *GenericLink) Attrs() *LinkAttrs { - return &generic.LinkAttrs -} - -func (generic *GenericLink) Type() string { - return generic.LinkType -} - -type Vxlan struct { - LinkAttrs - VxlanId int - VtepDevIndex int - SrcAddr net.IP - Group net.IP - TTL int - TOS int - Learning bool - Proxy bool - RSC bool - L2miss bool - L3miss bool - NoAge bool - GBP bool - Age int - Limit int - Port int - PortLow int - PortHigh int -} - -func (vxlan *Vxlan) Attrs() *LinkAttrs { - return &vxlan.LinkAttrs -} - -func (vxlan *Vxlan) Type() string { - return "vxlan" -} - -type IPVlanMode uint16 - -const ( - IPVLAN_MODE_L2 IPVlanMode = iota - IPVLAN_MODE_L3 - IPVLAN_MODE_MAX -) - -type IPVlan struct { - LinkAttrs - Mode IPVlanMode -} - -func (ipvlan *IPVlan) Attrs() *LinkAttrs { - return &ipvlan.LinkAttrs -} - -func (ipvlan *IPVlan) Type() string { - return "ipvlan" -} - -// iproute2 supported devices; -// vlan | veth | vcan | dummy | ifb | macvlan | macvtap | -// bridge | bond | ipoib | ip6tnl | ipip | sit | vxlan | -// gre | gretap | ip6gre | ip6gretap | vti | nlmon | -// bond_slave | ipvlan diff --git a/vendor/github.com/vishvananda/netlink/link_linux.go b/vendor/github.com/vishvananda/netlink/link_linux.go deleted file mode 100644 index 685115044..000000000 --- a/vendor/github.com/vishvananda/netlink/link_linux.go +++ /dev/null @@ -1,750 +0,0 @@ -package netlink - -import ( - "bytes" - "encoding/binary" - "fmt" - "net" - "syscall" - - "github.com/vishvananda/netlink/nl" -) - -var native = nl.NativeEndian() -var lookupByDump = false - -var macvlanModes = [...]uint32{ - 0, - nl.MACVLAN_MODE_PRIVATE, - nl.MACVLAN_MODE_VEPA, - nl.MACVLAN_MODE_BRIDGE, - nl.MACVLAN_MODE_PASSTHRU, - nl.MACVLAN_MODE_SOURCE, -} - -func ensureIndex(link *LinkAttrs) { - if link != nil && link.Index == 0 { - newlink, _ := LinkByName(link.Name) - if newlink != nil { - link.Index = newlink.Attrs().Index - } - } -} - -// LinkSetUp enables the link device. -// Equivalent to: `ip link set $link up` -func LinkSetUp(link Link) error { - base := link.Attrs() - ensureIndex(base) - req := nl.NewNetlinkRequest(syscall.RTM_NEWLINK, syscall.NLM_F_ACK) - - msg := nl.NewIfInfomsg(syscall.AF_UNSPEC) - msg.Change = syscall.IFF_UP - msg.Flags = syscall.IFF_UP - msg.Index = int32(base.Index) - req.AddData(msg) - - _, err := req.Execute(syscall.NETLINK_ROUTE, 0) - return err -} - -// LinkSetDown disables link device. -// Equivalent to: `ip link set $link down` -func LinkSetDown(link Link) error { - base := link.Attrs() - ensureIndex(base) - req := nl.NewNetlinkRequest(syscall.RTM_NEWLINK, syscall.NLM_F_ACK) - - msg := nl.NewIfInfomsg(syscall.AF_UNSPEC) - msg.Change = syscall.IFF_UP - msg.Flags = 0 & ^syscall.IFF_UP - msg.Index = int32(base.Index) - req.AddData(msg) - - _, err := req.Execute(syscall.NETLINK_ROUTE, 0) - return err -} - -// LinkSetMTU sets the mtu of the link device. -// Equivalent to: `ip link set $link mtu $mtu` -func LinkSetMTU(link Link, mtu int) error { - base := link.Attrs() - ensureIndex(base) - req := nl.NewNetlinkRequest(syscall.RTM_SETLINK, syscall.NLM_F_ACK) - - msg := nl.NewIfInfomsg(syscall.AF_UNSPEC) - msg.Index = int32(base.Index) - req.AddData(msg) - - b := make([]byte, 4) - native.PutUint32(b, uint32(mtu)) - - data := nl.NewRtAttr(syscall.IFLA_MTU, b) - req.AddData(data) - - _, err := req.Execute(syscall.NETLINK_ROUTE, 0) - return err -} - -// LinkSetName sets the name of the link device. 
-// Equivalent to: `ip link set $link name $name` -func LinkSetName(link Link, name string) error { - base := link.Attrs() - ensureIndex(base) - req := nl.NewNetlinkRequest(syscall.RTM_SETLINK, syscall.NLM_F_ACK) - - msg := nl.NewIfInfomsg(syscall.AF_UNSPEC) - msg.Index = int32(base.Index) - req.AddData(msg) - - data := nl.NewRtAttr(syscall.IFLA_IFNAME, []byte(name)) - req.AddData(data) - - _, err := req.Execute(syscall.NETLINK_ROUTE, 0) - return err -} - -// LinkSetHardwareAddr sets the hardware address of the link device. -// Equivalent to: `ip link set $link address $hwaddr` -func LinkSetHardwareAddr(link Link, hwaddr net.HardwareAddr) error { - base := link.Attrs() - ensureIndex(base) - req := nl.NewNetlinkRequest(syscall.RTM_SETLINK, syscall.NLM_F_ACK) - - msg := nl.NewIfInfomsg(syscall.AF_UNSPEC) - msg.Index = int32(base.Index) - req.AddData(msg) - - data := nl.NewRtAttr(syscall.IFLA_ADDRESS, []byte(hwaddr)) - req.AddData(data) - - _, err := req.Execute(syscall.NETLINK_ROUTE, 0) - return err -} - -// LinkSetMaster sets the master of the link device. -// Equivalent to: `ip link set $link master $master` -func LinkSetMaster(link Link, master *Bridge) error { - index := 0 - if master != nil { - masterBase := master.Attrs() - ensureIndex(masterBase) - index = masterBase.Index - } - return LinkSetMasterByIndex(link, index) -} - -// LinkSetMasterByIndex sets the master of the link device. -// Equivalent to: `ip link set $link master $master` -func LinkSetMasterByIndex(link Link, masterIndex int) error { - base := link.Attrs() - ensureIndex(base) - req := nl.NewNetlinkRequest(syscall.RTM_SETLINK, syscall.NLM_F_ACK) - - msg := nl.NewIfInfomsg(syscall.AF_UNSPEC) - msg.Index = int32(base.Index) - req.AddData(msg) - - b := make([]byte, 4) - native.PutUint32(b, uint32(masterIndex)) - - data := nl.NewRtAttr(syscall.IFLA_MASTER, b) - req.AddData(data) - - _, err := req.Execute(syscall.NETLINK_ROUTE, 0) - return err -} - -// LinkSetNsPid puts the device into a new network namespace. The -// pid must be a pid of a running process. -// Equivalent to: `ip link set $link netns $pid` -func LinkSetNsPid(link Link, nspid int) error { - base := link.Attrs() - ensureIndex(base) - req := nl.NewNetlinkRequest(syscall.RTM_SETLINK, syscall.NLM_F_ACK) - - msg := nl.NewIfInfomsg(syscall.AF_UNSPEC) - msg.Index = int32(base.Index) - req.AddData(msg) - - b := make([]byte, 4) - native.PutUint32(b, uint32(nspid)) - - data := nl.NewRtAttr(syscall.IFLA_NET_NS_PID, b) - req.AddData(data) - - _, err := req.Execute(syscall.NETLINK_ROUTE, 0) - return err -} - -// LinkSetNsFd puts the device into a new network namespace. The -// fd must be an open file descriptor to a network namespace. 
-// Similar to: `ip link set $link netns $ns` -func LinkSetNsFd(link Link, fd int) error { - base := link.Attrs() - ensureIndex(base) - req := nl.NewNetlinkRequest(syscall.RTM_SETLINK, syscall.NLM_F_ACK) - - msg := nl.NewIfInfomsg(syscall.AF_UNSPEC) - msg.Index = int32(base.Index) - req.AddData(msg) - - b := make([]byte, 4) - native.PutUint32(b, uint32(fd)) - - data := nl.NewRtAttr(nl.IFLA_NET_NS_FD, b) - req.AddData(data) - - _, err := req.Execute(syscall.NETLINK_ROUTE, 0) - return err -} - -func boolAttr(val bool) []byte { - var v uint8 - if val { - v = 1 - } - return nl.Uint8Attr(v) -} - -type vxlanPortRange struct { - Lo, Hi uint16 -} - -func addVxlanAttrs(vxlan *Vxlan, linkInfo *nl.RtAttr) { - data := nl.NewRtAttrChild(linkInfo, nl.IFLA_INFO_DATA, nil) - nl.NewRtAttrChild(data, nl.IFLA_VXLAN_ID, nl.Uint32Attr(uint32(vxlan.VxlanId))) - if vxlan.VtepDevIndex != 0 { - nl.NewRtAttrChild(data, nl.IFLA_VXLAN_LINK, nl.Uint32Attr(uint32(vxlan.VtepDevIndex))) - } - if vxlan.SrcAddr != nil { - ip := vxlan.SrcAddr.To4() - if ip != nil { - nl.NewRtAttrChild(data, nl.IFLA_VXLAN_LOCAL, []byte(ip)) - } else { - ip = vxlan.SrcAddr.To16() - if ip != nil { - nl.NewRtAttrChild(data, nl.IFLA_VXLAN_LOCAL6, []byte(ip)) - } - } - } - if vxlan.Group != nil { - group := vxlan.Group.To4() - if group != nil { - nl.NewRtAttrChild(data, nl.IFLA_VXLAN_GROUP, []byte(group)) - } else { - group = vxlan.Group.To16() - if group != nil { - nl.NewRtAttrChild(data, nl.IFLA_VXLAN_GROUP6, []byte(group)) - } - } - } - - nl.NewRtAttrChild(data, nl.IFLA_VXLAN_TTL, nl.Uint8Attr(uint8(vxlan.TTL))) - nl.NewRtAttrChild(data, nl.IFLA_VXLAN_TOS, nl.Uint8Attr(uint8(vxlan.TOS))) - nl.NewRtAttrChild(data, nl.IFLA_VXLAN_LEARNING, boolAttr(vxlan.Learning)) - nl.NewRtAttrChild(data, nl.IFLA_VXLAN_PROXY, boolAttr(vxlan.Proxy)) - nl.NewRtAttrChild(data, nl.IFLA_VXLAN_RSC, boolAttr(vxlan.RSC)) - nl.NewRtAttrChild(data, nl.IFLA_VXLAN_L2MISS, boolAttr(vxlan.L2miss)) - nl.NewRtAttrChild(data, nl.IFLA_VXLAN_L3MISS, boolAttr(vxlan.L3miss)) - - if vxlan.GBP { - nl.NewRtAttrChild(data, nl.IFLA_VXLAN_GBP, boolAttr(vxlan.GBP)) - } - - if vxlan.NoAge { - nl.NewRtAttrChild(data, nl.IFLA_VXLAN_AGEING, nl.Uint32Attr(0)) - } else if vxlan.Age > 0 { - nl.NewRtAttrChild(data, nl.IFLA_VXLAN_AGEING, nl.Uint32Attr(uint32(vxlan.Age))) - } - if vxlan.Limit > 0 { - nl.NewRtAttrChild(data, nl.IFLA_VXLAN_LIMIT, nl.Uint32Attr(uint32(vxlan.Limit))) - } - if vxlan.Port > 0 { - nl.NewRtAttrChild(data, nl.IFLA_VXLAN_PORT, nl.Uint16Attr(uint16(vxlan.Port))) - } - if vxlan.PortLow > 0 || vxlan.PortHigh > 0 { - pr := vxlanPortRange{uint16(vxlan.PortLow), uint16(vxlan.PortHigh)} - - buf := new(bytes.Buffer) - binary.Write(buf, binary.BigEndian, &pr) - - nl.NewRtAttrChild(data, nl.IFLA_VXLAN_PORT_RANGE, buf.Bytes()) - } -} - -// LinkAdd adds a new link device. The type and features of the device -// are taken fromt the parameters in the link object. 
-// Equivalent to: `ip link add $link` -func LinkAdd(link Link) error { - // TODO: set mtu and hardware address - // TODO: support extra data for macvlan - base := link.Attrs() - - if base.Name == "" { - return fmt.Errorf("LinkAttrs.Name cannot be empty!") - } - - req := nl.NewNetlinkRequest(syscall.RTM_NEWLINK, syscall.NLM_F_CREATE|syscall.NLM_F_EXCL|syscall.NLM_F_ACK) - - msg := nl.NewIfInfomsg(syscall.AF_UNSPEC) - req.AddData(msg) - - if base.ParentIndex != 0 { - b := make([]byte, 4) - native.PutUint32(b, uint32(base.ParentIndex)) - data := nl.NewRtAttr(syscall.IFLA_LINK, b) - req.AddData(data) - } else if link.Type() == "ipvlan" { - return fmt.Errorf("Can't create ipvlan link without ParentIndex") - } - - nameData := nl.NewRtAttr(syscall.IFLA_IFNAME, nl.ZeroTerminated(base.Name)) - req.AddData(nameData) - - if base.MTU > 0 { - mtu := nl.NewRtAttr(syscall.IFLA_MTU, nl.Uint32Attr(uint32(base.MTU))) - req.AddData(mtu) - } - - if base.TxQLen >= 0 { - qlen := nl.NewRtAttr(syscall.IFLA_TXQLEN, nl.Uint32Attr(uint32(base.TxQLen))) - req.AddData(qlen) - } - - if base.Namespace != nil { - var attr *nl.RtAttr - switch base.Namespace.(type) { - case NsPid: - val := nl.Uint32Attr(uint32(base.Namespace.(NsPid))) - attr = nl.NewRtAttr(syscall.IFLA_NET_NS_PID, val) - case NsFd: - val := nl.Uint32Attr(uint32(base.Namespace.(NsFd))) - attr = nl.NewRtAttr(nl.IFLA_NET_NS_FD, val) - } - - req.AddData(attr) - } - - linkInfo := nl.NewRtAttr(syscall.IFLA_LINKINFO, nil) - nl.NewRtAttrChild(linkInfo, nl.IFLA_INFO_KIND, nl.NonZeroTerminated(link.Type())) - - if vlan, ok := link.(*Vlan); ok { - b := make([]byte, 2) - native.PutUint16(b, uint16(vlan.VlanId)) - data := nl.NewRtAttrChild(linkInfo, nl.IFLA_INFO_DATA, nil) - nl.NewRtAttrChild(data, nl.IFLA_VLAN_ID, b) - } else if veth, ok := link.(*Veth); ok { - data := nl.NewRtAttrChild(linkInfo, nl.IFLA_INFO_DATA, nil) - peer := nl.NewRtAttrChild(data, nl.VETH_INFO_PEER, nil) - nl.NewIfInfomsgChild(peer, syscall.AF_UNSPEC) - nl.NewRtAttrChild(peer, syscall.IFLA_IFNAME, nl.ZeroTerminated(veth.PeerName)) - if base.TxQLen >= 0 { - nl.NewRtAttrChild(peer, syscall.IFLA_TXQLEN, nl.Uint32Attr(uint32(base.TxQLen))) - } - if base.MTU > 0 { - nl.NewRtAttrChild(peer, syscall.IFLA_MTU, nl.Uint32Attr(uint32(base.MTU))) - } - - } else if vxlan, ok := link.(*Vxlan); ok { - addVxlanAttrs(vxlan, linkInfo) - } else if ipv, ok := link.(*IPVlan); ok { - data := nl.NewRtAttrChild(linkInfo, nl.IFLA_INFO_DATA, nil) - nl.NewRtAttrChild(data, nl.IFLA_IPVLAN_MODE, nl.Uint16Attr(uint16(ipv.Mode))) - } else if macv, ok := link.(*Macvlan); ok { - if macv.Mode != MACVLAN_MODE_DEFAULT { - data := nl.NewRtAttrChild(linkInfo, nl.IFLA_INFO_DATA, nil) - nl.NewRtAttrChild(data, nl.IFLA_MACVLAN_MODE, nl.Uint32Attr(macvlanModes[macv.Mode])) - } - } - - req.AddData(linkInfo) - - _, err := req.Execute(syscall.NETLINK_ROUTE, 0) - if err != nil { - return err - } - - ensureIndex(base) - - // can't set master during create, so set it afterwards - if base.MasterIndex != 0 { - // TODO: verify MasterIndex is actually a bridge? - return LinkSetMasterByIndex(link, base.MasterIndex) - } - return nil -} - -// LinkDel deletes link device. Either Index or Name must be set in -// the link object for it to be deleted. The other values are ignored. 
-// Equivalent to: `ip link del $link` -func LinkDel(link Link) error { - base := link.Attrs() - - ensureIndex(base) - - req := nl.NewNetlinkRequest(syscall.RTM_DELLINK, syscall.NLM_F_ACK) - - msg := nl.NewIfInfomsg(syscall.AF_UNSPEC) - msg.Index = int32(base.Index) - req.AddData(msg) - - _, err := req.Execute(syscall.NETLINK_ROUTE, 0) - return err -} - -func linkByNameDump(name string) (Link, error) { - links, err := LinkList() - if err != nil { - return nil, err - } - - for _, link := range links { - if link.Attrs().Name == name { - return link, nil - } - } - return nil, fmt.Errorf("Link %s not found", name) -} - -// LinkByName finds a link by name and returns a pointer to the object. -func LinkByName(name string) (Link, error) { - if lookupByDump { - return linkByNameDump(name) - } - - req := nl.NewNetlinkRequest(syscall.RTM_GETLINK, syscall.NLM_F_ACK) - - msg := nl.NewIfInfomsg(syscall.AF_UNSPEC) - req.AddData(msg) - - nameData := nl.NewRtAttr(syscall.IFLA_IFNAME, nl.ZeroTerminated(name)) - req.AddData(nameData) - - link, err := execGetLink(req) - if err == syscall.EINVAL { - // older kernels don't support looking up via IFLA_IFNAME - // so fall back to dumping all links - lookupByDump = true - return linkByNameDump(name) - } - - return link, err -} - -// LinkByIndex finds a link by index and returns a pointer to the object. -func LinkByIndex(index int) (Link, error) { - req := nl.NewNetlinkRequest(syscall.RTM_GETLINK, syscall.NLM_F_ACK) - - msg := nl.NewIfInfomsg(syscall.AF_UNSPEC) - msg.Index = int32(index) - req.AddData(msg) - - return execGetLink(req) -} - -func execGetLink(req *nl.NetlinkRequest) (Link, error) { - msgs, err := req.Execute(syscall.NETLINK_ROUTE, 0) - if err != nil { - if errno, ok := err.(syscall.Errno); ok { - if errno == syscall.ENODEV { - return nil, fmt.Errorf("Link not found") - } - } - return nil, err - } - - switch { - case len(msgs) == 0: - return nil, fmt.Errorf("Link not found") - - case len(msgs) == 1: - return linkDeserialize(msgs[0]) - - default: - return nil, fmt.Errorf("More than one link found") - } -} - -// linkDeserialize deserializes a raw message received from netlink into -// a link object. 
-func linkDeserialize(m []byte) (Link, error) { - msg := nl.DeserializeIfInfomsg(m) - - attrs, err := nl.ParseRouteAttr(m[msg.Len():]) - if err != nil { - return nil, err - } - - base := LinkAttrs{Index: int(msg.Index), Flags: linkFlags(msg.Flags)} - var link Link - linkType := "" - for _, attr := range attrs { - switch attr.Attr.Type { - case syscall.IFLA_LINKINFO: - infos, err := nl.ParseRouteAttr(attr.Value) - if err != nil { - return nil, err - } - for _, info := range infos { - switch info.Attr.Type { - case nl.IFLA_INFO_KIND: - linkType = string(info.Value[:len(info.Value)-1]) - switch linkType { - case "dummy": - link = &Dummy{} - case "ifb": - link = &Ifb{} - case "bridge": - link = &Bridge{} - case "vlan": - link = &Vlan{} - case "veth": - link = &Veth{} - case "vxlan": - link = &Vxlan{} - case "ipvlan": - link = &IPVlan{} - case "macvlan": - link = &Macvlan{} - case "macvtap": - link = &Macvtap{} - default: - link = &GenericLink{LinkType: linkType} - } - case nl.IFLA_INFO_DATA: - data, err := nl.ParseRouteAttr(info.Value) - if err != nil { - return nil, err - } - switch linkType { - case "vlan": - parseVlanData(link, data) - case "vxlan": - parseVxlanData(link, data) - case "ipvlan": - parseIPVlanData(link, data) - case "macvlan": - parseMacvlanData(link, data) - case "macvtap": - parseMacvtapData(link, data) - } - } - } - case syscall.IFLA_ADDRESS: - var nonzero bool - for _, b := range attr.Value { - if b != 0 { - nonzero = true - } - } - if nonzero { - base.HardwareAddr = attr.Value[:] - } - case syscall.IFLA_IFNAME: - base.Name = string(attr.Value[:len(attr.Value)-1]) - case syscall.IFLA_MTU: - base.MTU = int(native.Uint32(attr.Value[0:4])) - case syscall.IFLA_LINK: - base.ParentIndex = int(native.Uint32(attr.Value[0:4])) - case syscall.IFLA_MASTER: - base.MasterIndex = int(native.Uint32(attr.Value[0:4])) - case syscall.IFLA_TXQLEN: - base.TxQLen = int(native.Uint32(attr.Value[0:4])) - } - } - // Links that don't have IFLA_INFO_KIND are hardware devices - if link == nil { - link = &Device{} - } - *link.Attrs() = base - - return link, nil -} - -// LinkList gets a list of link devices. -// Equivalent to: `ip link show` -func LinkList() ([]Link, error) { - // NOTE(vish): This duplicates functionality in net/iface_linux.go, but we need - // to get the message ourselves to parse link type. 
- req := nl.NewNetlinkRequest(syscall.RTM_GETLINK, syscall.NLM_F_DUMP) - - msg := nl.NewIfInfomsg(syscall.AF_UNSPEC) - req.AddData(msg) - - msgs, err := req.Execute(syscall.NETLINK_ROUTE, syscall.RTM_NEWLINK) - if err != nil { - return nil, err - } - - var res []Link - for _, m := range msgs { - link, err := linkDeserialize(m) - if err != nil { - return nil, err - } - res = append(res, link) - } - - return res, nil -} - -func LinkSetHairpin(link Link, mode bool) error { - return setProtinfoAttr(link, mode, nl.IFLA_BRPORT_MODE) -} - -func LinkSetGuard(link Link, mode bool) error { - return setProtinfoAttr(link, mode, nl.IFLA_BRPORT_GUARD) -} - -func LinkSetFastLeave(link Link, mode bool) error { - return setProtinfoAttr(link, mode, nl.IFLA_BRPORT_FAST_LEAVE) -} - -func LinkSetLearning(link Link, mode bool) error { - return setProtinfoAttr(link, mode, nl.IFLA_BRPORT_LEARNING) -} - -func LinkSetRootBlock(link Link, mode bool) error { - return setProtinfoAttr(link, mode, nl.IFLA_BRPORT_PROTECT) -} - -func LinkSetFlood(link Link, mode bool) error { - return setProtinfoAttr(link, mode, nl.IFLA_BRPORT_UNICAST_FLOOD) -} - -func setProtinfoAttr(link Link, mode bool, attr int) error { - base := link.Attrs() - ensureIndex(base) - req := nl.NewNetlinkRequest(syscall.RTM_SETLINK, syscall.NLM_F_ACK) - - msg := nl.NewIfInfomsg(syscall.AF_BRIDGE) - msg.Index = int32(base.Index) - req.AddData(msg) - - br := nl.NewRtAttr(syscall.IFLA_PROTINFO|syscall.NLA_F_NESTED, nil) - nl.NewRtAttrChild(br, attr, boolToByte(mode)) - req.AddData(br) - _, err := req.Execute(syscall.NETLINK_ROUTE, 0) - if err != nil { - return err - } - return nil -} - -func parseVlanData(link Link, data []syscall.NetlinkRouteAttr) { - vlan := link.(*Vlan) - for _, datum := range data { - switch datum.Attr.Type { - case nl.IFLA_VLAN_ID: - vlan.VlanId = int(native.Uint16(datum.Value[0:2])) - } - } -} - -func parseVxlanData(link Link, data []syscall.NetlinkRouteAttr) { - vxlan := link.(*Vxlan) - for _, datum := range data { - switch datum.Attr.Type { - case nl.IFLA_VXLAN_ID: - vxlan.VxlanId = int(native.Uint32(datum.Value[0:4])) - case nl.IFLA_VXLAN_LINK: - vxlan.VtepDevIndex = int(native.Uint32(datum.Value[0:4])) - case nl.IFLA_VXLAN_LOCAL: - vxlan.SrcAddr = net.IP(datum.Value[0:4]) - case nl.IFLA_VXLAN_LOCAL6: - vxlan.SrcAddr = net.IP(datum.Value[0:16]) - case nl.IFLA_VXLAN_GROUP: - vxlan.Group = net.IP(datum.Value[0:4]) - case nl.IFLA_VXLAN_GROUP6: - vxlan.Group = net.IP(datum.Value[0:16]) - case nl.IFLA_VXLAN_TTL: - vxlan.TTL = int(datum.Value[0]) - case nl.IFLA_VXLAN_TOS: - vxlan.TOS = int(datum.Value[0]) - case nl.IFLA_VXLAN_LEARNING: - vxlan.Learning = int8(datum.Value[0]) != 0 - case nl.IFLA_VXLAN_PROXY: - vxlan.Proxy = int8(datum.Value[0]) != 0 - case nl.IFLA_VXLAN_RSC: - vxlan.RSC = int8(datum.Value[0]) != 0 - case nl.IFLA_VXLAN_L2MISS: - vxlan.L2miss = int8(datum.Value[0]) != 0 - case nl.IFLA_VXLAN_L3MISS: - vxlan.L3miss = int8(datum.Value[0]) != 0 - case nl.IFLA_VXLAN_GBP: - vxlan.GBP = int8(datum.Value[0]) != 0 - case nl.IFLA_VXLAN_AGEING: - vxlan.Age = int(native.Uint32(datum.Value[0:4])) - vxlan.NoAge = vxlan.Age == 0 - case nl.IFLA_VXLAN_LIMIT: - vxlan.Limit = int(native.Uint32(datum.Value[0:4])) - case nl.IFLA_VXLAN_PORT: - vxlan.Port = int(native.Uint16(datum.Value[0:2])) - case nl.IFLA_VXLAN_PORT_RANGE: - buf := bytes.NewBuffer(datum.Value[0:4]) - var pr vxlanPortRange - if binary.Read(buf, binary.BigEndian, &pr) != nil { - vxlan.PortLow = int(pr.Lo) - vxlan.PortHigh = int(pr.Hi) - } - } - } -} - -func parseIPVlanData(link 
Link, data []syscall.NetlinkRouteAttr) { - ipv := link.(*IPVlan) - for _, datum := range data { - if datum.Attr.Type == nl.IFLA_IPVLAN_MODE { - ipv.Mode = IPVlanMode(native.Uint32(datum.Value[0:4])) - return - } - } -} - -func parseMacvtapData(link Link, data []syscall.NetlinkRouteAttr) { - macv := link.(*Macvtap) - parseMacvlanData(&macv.Macvlan, data) -} - -func parseMacvlanData(link Link, data []syscall.NetlinkRouteAttr) { - macv := link.(*Macvlan) - for _, datum := range data { - if datum.Attr.Type == nl.IFLA_MACVLAN_MODE { - switch native.Uint32(datum.Value[0:4]) { - case nl.MACVLAN_MODE_PRIVATE: - macv.Mode = MACVLAN_MODE_PRIVATE - case nl.MACVLAN_MODE_VEPA: - macv.Mode = MACVLAN_MODE_VEPA - case nl.MACVLAN_MODE_BRIDGE: - macv.Mode = MACVLAN_MODE_BRIDGE - case nl.MACVLAN_MODE_PASSTHRU: - macv.Mode = MACVLAN_MODE_PASSTHRU - case nl.MACVLAN_MODE_SOURCE: - macv.Mode = MACVLAN_MODE_SOURCE - } - return - } - } -} - -// copied from pkg/net_linux.go -func linkFlags(rawFlags uint32) net.Flags { - var f net.Flags - if rawFlags&syscall.IFF_UP != 0 { - f |= net.FlagUp - } - if rawFlags&syscall.IFF_BROADCAST != 0 { - f |= net.FlagBroadcast - } - if rawFlags&syscall.IFF_LOOPBACK != 0 { - f |= net.FlagLoopback - } - if rawFlags&syscall.IFF_POINTOPOINT != 0 { - f |= net.FlagPointToPoint - } - if rawFlags&syscall.IFF_MULTICAST != 0 { - f |= net.FlagMulticast - } - return f -} diff --git a/vendor/github.com/vishvananda/netlink/neigh.go b/vendor/github.com/vishvananda/netlink/neigh.go deleted file mode 100644 index 0e5eb90c9..000000000 --- a/vendor/github.com/vishvananda/netlink/neigh.go +++ /dev/null @@ -1,22 +0,0 @@ -package netlink - -import ( - "fmt" - "net" -) - -// Neigh represents a link layer neighbor from netlink. -type Neigh struct { - LinkIndex int - Family int - State int - Type int - Flags int - IP net.IP - HardwareAddr net.HardwareAddr -} - -// String returns $ip/$hwaddr $label -func (neigh *Neigh) String() string { - return fmt.Sprintf("%s %s", neigh.IP, neigh.HardwareAddr) -} diff --git a/vendor/github.com/vishvananda/netlink/neigh_linux.go b/vendor/github.com/vishvananda/netlink/neigh_linux.go deleted file mode 100644 index 620a0ee70..000000000 --- a/vendor/github.com/vishvananda/netlink/neigh_linux.go +++ /dev/null @@ -1,189 +0,0 @@ -package netlink - -import ( - "net" - "syscall" - "unsafe" - - "github.com/vishvananda/netlink/nl" -) - -const ( - NDA_UNSPEC = iota - NDA_DST - NDA_LLADDR - NDA_CACHEINFO - NDA_PROBES - NDA_VLAN - NDA_PORT - NDA_VNI - NDA_IFINDEX - NDA_MAX = NDA_IFINDEX -) - -// Neighbor Cache Entry States. 
-const ( - NUD_NONE = 0x00 - NUD_INCOMPLETE = 0x01 - NUD_REACHABLE = 0x02 - NUD_STALE = 0x04 - NUD_DELAY = 0x08 - NUD_PROBE = 0x10 - NUD_FAILED = 0x20 - NUD_NOARP = 0x40 - NUD_PERMANENT = 0x80 -) - -// Neighbor Flags -const ( - NTF_USE = 0x01 - NTF_SELF = 0x02 - NTF_MASTER = 0x04 - NTF_PROXY = 0x08 - NTF_ROUTER = 0x80 -) - -type Ndmsg struct { - Family uint8 - Index uint32 - State uint16 - Flags uint8 - Type uint8 -} - -func deserializeNdmsg(b []byte) *Ndmsg { - var dummy Ndmsg - return (*Ndmsg)(unsafe.Pointer(&b[0:unsafe.Sizeof(dummy)][0])) -} - -func (msg *Ndmsg) Serialize() []byte { - return (*(*[unsafe.Sizeof(*msg)]byte)(unsafe.Pointer(msg)))[:] -} - -func (msg *Ndmsg) Len() int { - return int(unsafe.Sizeof(*msg)) -} - -// NeighAdd will add an IP to MAC mapping to the ARP table -// Equivalent to: `ip neigh add ....` -func NeighAdd(neigh *Neigh) error { - return neighAdd(neigh, syscall.NLM_F_CREATE|syscall.NLM_F_EXCL) -} - -// NeighAdd will add or replace an IP to MAC mapping to the ARP table -// Equivalent to: `ip neigh replace....` -func NeighSet(neigh *Neigh) error { - return neighAdd(neigh, syscall.NLM_F_CREATE) -} - -// NeighAppend will append an entry to FDB -// Equivalent to: `bridge fdb append...` -func NeighAppend(neigh *Neigh) error { - return neighAdd(neigh, syscall.NLM_F_CREATE|syscall.NLM_F_APPEND) -} - -func neighAdd(neigh *Neigh, mode int) error { - req := nl.NewNetlinkRequest(syscall.RTM_NEWNEIGH, mode|syscall.NLM_F_ACK) - return neighHandle(neigh, req) -} - -// NeighDel will delete an IP address from a link device. -// Equivalent to: `ip addr del $addr dev $link` -func NeighDel(neigh *Neigh) error { - req := nl.NewNetlinkRequest(syscall.RTM_DELNEIGH, syscall.NLM_F_ACK) - return neighHandle(neigh, req) -} - -func neighHandle(neigh *Neigh, req *nl.NetlinkRequest) error { - var family int - if neigh.Family > 0 { - family = neigh.Family - } else { - family = nl.GetIPFamily(neigh.IP) - } - - msg := Ndmsg{ - Family: uint8(family), - Index: uint32(neigh.LinkIndex), - State: uint16(neigh.State), - Type: uint8(neigh.Type), - Flags: uint8(neigh.Flags), - } - req.AddData(&msg) - - ipData := neigh.IP.To4() - if ipData == nil { - ipData = neigh.IP.To16() - } - - dstData := nl.NewRtAttr(NDA_DST, ipData) - req.AddData(dstData) - - hwData := nl.NewRtAttr(NDA_LLADDR, []byte(neigh.HardwareAddr)) - req.AddData(hwData) - - _, err := req.Execute(syscall.NETLINK_ROUTE, 0) - return err -} - -// NeighList gets a list of IP-MAC mappings in the system (ARP table). -// Equivalent to: `ip neighbor show`. -// The list can be filtered by link and ip family. 
-func NeighList(linkIndex, family int) ([]Neigh, error) { - req := nl.NewNetlinkRequest(syscall.RTM_GETNEIGH, syscall.NLM_F_DUMP) - msg := Ndmsg{ - Family: uint8(family), - } - req.AddData(&msg) - - msgs, err := req.Execute(syscall.NETLINK_ROUTE, syscall.RTM_NEWNEIGH) - if err != nil { - return nil, err - } - - var res []Neigh - for _, m := range msgs { - ndm := deserializeNdmsg(m) - if linkIndex != 0 && int(ndm.Index) != linkIndex { - // Ignore messages from other interfaces - continue - } - - neigh, err := NeighDeserialize(m) - if err != nil { - continue - } - - res = append(res, *neigh) - } - - return res, nil -} - -func NeighDeserialize(m []byte) (*Neigh, error) { - msg := deserializeNdmsg(m) - - neigh := Neigh{ - LinkIndex: int(msg.Index), - Family: int(msg.Family), - State: int(msg.State), - Type: int(msg.Type), - Flags: int(msg.Flags), - } - - attrs, err := nl.ParseRouteAttr(m[msg.Len():]) - if err != nil { - return nil, err - } - - for _, attr := range attrs { - switch attr.Attr.Type { - case NDA_DST: - neigh.IP = net.IP(attr.Value) - case NDA_LLADDR: - neigh.HardwareAddr = net.HardwareAddr(attr.Value) - } - } - - return &neigh, nil -} diff --git a/vendor/github.com/vishvananda/netlink/netlink.go b/vendor/github.com/vishvananda/netlink/netlink.go deleted file mode 100644 index 41ebdb11f..000000000 --- a/vendor/github.com/vishvananda/netlink/netlink.go +++ /dev/null @@ -1,39 +0,0 @@ -// Package netlink provides a simple library for netlink. Netlink is -// the interface a user-space program in linux uses to communicate with -// the kernel. It can be used to add and remove interfaces, set up ip -// addresses and routes, and confiugre ipsec. Netlink communication -// requires elevated privileges, so in most cases this code needs to -// be run as root. The low level primitives for netlink are contained -// in the nl subpackage. This package attempts to provide a high-level -// interface that is loosly modeled on the iproute2 cli. -package netlink - -import ( - "net" - - "github.com/vishvananda/netlink/nl" -) - -const ( - // Family type definitions - FAMILY_ALL = nl.FAMILY_ALL - FAMILY_V4 = nl.FAMILY_V4 - FAMILY_V6 = nl.FAMILY_V6 -) - -// ParseIPNet parses a string in ip/net format and returns a net.IPNet. -// This is valuable because addresses in netlink are often IPNets and -// ParseCIDR returns an IPNet with the IP part set to the base IP of the -// range. -func ParseIPNet(s string) (*net.IPNet, error) { - ip, ipNet, err := net.ParseCIDR(s) - if err != nil { - return nil, err - } - return &net.IPNet{IP: ip, Mask: ipNet.Mask}, nil -} - -// NewIPNet generates an IPNet from an ip address using a netmask of 32. 
-func NewIPNet(ip net.IP) *net.IPNet { - return &net.IPNet{IP: ip, Mask: net.CIDRMask(32, 32)} -} diff --git a/vendor/github.com/vishvananda/netlink/netlink_unspecified.go b/vendor/github.com/vishvananda/netlink/netlink_unspecified.go deleted file mode 100644 index 10c49c1bf..000000000 --- a/vendor/github.com/vishvananda/netlink/netlink_unspecified.go +++ /dev/null @@ -1,143 +0,0 @@ -// +build !linux - -package netlink - -import ( - "errors" -) - -var ( - ErrNotImplemented = errors.New("not implemented") -) - -func LinkSetUp(link *Link) error { - return ErrNotImplemented -} - -func LinkSetDown(link *Link) error { - return ErrNotImplemented -} - -func LinkSetMTU(link *Link, mtu int) error { - return ErrNotImplemented -} - -func LinkSetMaster(link *Link, master *Link) error { - return ErrNotImplemented -} - -func LinkSetNsPid(link *Link, nspid int) error { - return ErrNotImplemented -} - -func LinkSetNsFd(link *Link, fd int) error { - return ErrNotImplemented -} - -func LinkAdd(link *Link) error { - return ErrNotImplemented -} - -func LinkDel(link *Link) error { - return ErrNotImplemented -} - -func SetHairpin(link Link, mode bool) error { - return ErrNotImplemented -} - -func SetGuard(link Link, mode bool) error { - return ErrNotImplemented -} - -func SetFastLeave(link Link, mode bool) error { - return ErrNotImplemented -} - -func SetLearning(link Link, mode bool) error { - return ErrNotImplemented -} - -func SetRootBlock(link Link, mode bool) error { - return ErrNotImplemented -} - -func SetFlood(link Link, mode bool) error { - return ErrNotImplemented -} - -func LinkList() ([]Link, error) { - return nil, ErrNotImplemented -} - -func AddrAdd(link *Link, addr *Addr) error { - return ErrNotImplemented -} - -func AddrDel(link *Link, addr *Addr) error { - return ErrNotImplemented -} - -func AddrList(link *Link, family int) ([]Addr, error) { - return nil, ErrNotImplemented -} - -func RouteAdd(route *Route) error { - return ErrNotImplemented -} - -func RouteDel(route *Route) error { - return ErrNotImplemented -} - -func RouteList(link *Link, family int) ([]Route, error) { - return nil, ErrNotImplemented -} - -func XfrmPolicyAdd(policy *XfrmPolicy) error { - return ErrNotImplemented -} - -func XfrmPolicyDel(policy *XfrmPolicy) error { - return ErrNotImplemented -} - -func XfrmPolicyList(family int) ([]XfrmPolicy, error) { - return nil, ErrNotImplemented -} - -func XfrmStateAdd(policy *XfrmState) error { - return ErrNotImplemented -} - -func XfrmStateDel(policy *XfrmState) error { - return ErrNotImplemented -} - -func XfrmStateList(family int) ([]XfrmState, error) { - return nil, ErrNotImplemented -} - -func NeighAdd(neigh *Neigh) error { - return ErrNotImplemented -} - -func NeighSet(neigh *Neigh) error { - return ErrNotImplemented -} - -func NeighAppend(neigh *Neigh) error { - return ErrNotImplemented -} - -func NeighDel(neigh *Neigh) error { - return ErrNotImplemented -} - -func NeighList(linkIndex, family int) ([]Neigh, error) { - return nil, ErrNotImplemented -} - -func NeighDeserialize(m []byte) (*Ndmsg, *Neigh, error) { - return nil, nil, ErrNotImplemented -} diff --git a/vendor/github.com/vishvananda/netlink/nl/addr_linux.go b/vendor/github.com/vishvananda/netlink/nl/addr_linux.go deleted file mode 100644 index 17088fa0c..000000000 --- a/vendor/github.com/vishvananda/netlink/nl/addr_linux.go +++ /dev/null @@ -1,47 +0,0 @@ -package nl - -import ( - "syscall" - "unsafe" -) - -type IfAddrmsg struct { - syscall.IfAddrmsg -} - -func NewIfAddrmsg(family int) *IfAddrmsg { - return &IfAddrmsg{ - 
IfAddrmsg: syscall.IfAddrmsg{ - Family: uint8(family), - }, - } -} - -// struct ifaddrmsg { -// __u8 ifa_family; -// __u8 ifa_prefixlen; /* The prefix length */ -// __u8 ifa_flags; /* Flags */ -// __u8 ifa_scope; /* Address scope */ -// __u32 ifa_index; /* Link index */ -// }; - -// type IfAddrmsg struct { -// Family uint8 -// Prefixlen uint8 -// Flags uint8 -// Scope uint8 -// Index uint32 -// } -// SizeofIfAddrmsg = 0x8 - -func DeserializeIfAddrmsg(b []byte) *IfAddrmsg { - return (*IfAddrmsg)(unsafe.Pointer(&b[0:syscall.SizeofIfAddrmsg][0])) -} - -func (msg *IfAddrmsg) Serialize() []byte { - return (*(*[syscall.SizeofIfAddrmsg]byte)(unsafe.Pointer(msg)))[:] -} - -func (msg *IfAddrmsg) Len() int { - return syscall.SizeofIfAddrmsg -} diff --git a/vendor/github.com/vishvananda/netlink/nl/link_linux.go b/vendor/github.com/vishvananda/netlink/nl/link_linux.go deleted file mode 100644 index 1f9ab088f..000000000 --- a/vendor/github.com/vishvananda/netlink/nl/link_linux.go +++ /dev/null @@ -1,104 +0,0 @@ -package nl - -const ( - DEFAULT_CHANGE = 0xFFFFFFFF -) - -const ( - IFLA_INFO_UNSPEC = iota - IFLA_INFO_KIND - IFLA_INFO_DATA - IFLA_INFO_XSTATS - IFLA_INFO_MAX = IFLA_INFO_XSTATS -) - -const ( - IFLA_VLAN_UNSPEC = iota - IFLA_VLAN_ID - IFLA_VLAN_FLAGS - IFLA_VLAN_EGRESS_QOS - IFLA_VLAN_INGRESS_QOS - IFLA_VLAN_PROTOCOL - IFLA_VLAN_MAX = IFLA_VLAN_PROTOCOL -) - -const ( - VETH_INFO_UNSPEC = iota - VETH_INFO_PEER - VETH_INFO_MAX = VETH_INFO_PEER -) - -const ( - IFLA_VXLAN_UNSPEC = iota - IFLA_VXLAN_ID - IFLA_VXLAN_GROUP - IFLA_VXLAN_LINK - IFLA_VXLAN_LOCAL - IFLA_VXLAN_TTL - IFLA_VXLAN_TOS - IFLA_VXLAN_LEARNING - IFLA_VXLAN_AGEING - IFLA_VXLAN_LIMIT - IFLA_VXLAN_PORT_RANGE - IFLA_VXLAN_PROXY - IFLA_VXLAN_RSC - IFLA_VXLAN_L2MISS - IFLA_VXLAN_L3MISS - IFLA_VXLAN_PORT - IFLA_VXLAN_GROUP6 - IFLA_VXLAN_LOCAL6 - IFLA_VXLAN_UDP_CSUM - IFLA_VXLAN_UDP_ZERO_CSUM6_TX - IFLA_VXLAN_UDP_ZERO_CSUM6_RX - IFLA_VXLAN_REMCSUM_TX - IFLA_VXLAN_REMCSUM_RX - IFLA_VXLAN_GBP - IFLA_VXLAN_REMCSUM_NOPARTIAL - IFLA_VXLAN_FLOWBASED - IFLA_VXLAN_MAX = IFLA_VXLAN_FLOWBASED -) - -const ( - BRIDGE_MODE_UNSPEC = iota - BRIDGE_MODE_HAIRPIN -) - -const ( - IFLA_BRPORT_UNSPEC = iota - IFLA_BRPORT_STATE - IFLA_BRPORT_PRIORITY - IFLA_BRPORT_COST - IFLA_BRPORT_MODE - IFLA_BRPORT_GUARD - IFLA_BRPORT_PROTECT - IFLA_BRPORT_FAST_LEAVE - IFLA_BRPORT_LEARNING - IFLA_BRPORT_UNICAST_FLOOD - IFLA_BRPORT_MAX = IFLA_BRPORT_UNICAST_FLOOD -) - -const ( - IFLA_IPVLAN_UNSPEC = iota - IFLA_IPVLAN_MODE - IFLA_IPVLAN_MAX = IFLA_IPVLAN_MODE -) - -const ( - // not defined in syscall - IFLA_NET_NS_FD = 28 -) - -const ( - IFLA_MACVLAN_UNSPEC = iota - IFLA_MACVLAN_MODE - IFLA_MACVLAN_FLAGS - IFLA_MACVLAN_MAX = IFLA_MACVLAN_FLAGS -) - -const ( - MACVLAN_MODE_PRIVATE = 1 - MACVLAN_MODE_VEPA = 2 - MACVLAN_MODE_BRIDGE = 4 - MACVLAN_MODE_PASSTHRU = 8 - MACVLAN_MODE_SOURCE = 16 -) diff --git a/vendor/github.com/vishvananda/netlink/nl/nl_linux.go b/vendor/github.com/vishvananda/netlink/nl/nl_linux.go deleted file mode 100644 index 8dbd92b81..000000000 --- a/vendor/github.com/vishvananda/netlink/nl/nl_linux.go +++ /dev/null @@ -1,418 +0,0 @@ -// Package nl has low level primitives for making Netlink calls. -package nl - -import ( - "bytes" - "encoding/binary" - "fmt" - "net" - "sync/atomic" - "syscall" - "unsafe" -) - -const ( - // Family type definitions - FAMILY_ALL = syscall.AF_UNSPEC - FAMILY_V4 = syscall.AF_INET - FAMILY_V6 = syscall.AF_INET6 -) - -var nextSeqNr uint32 - -// GetIPFamily returns the family type of a net.IP. 
-func GetIPFamily(ip net.IP) int { - if len(ip) <= net.IPv4len { - return FAMILY_V4 - } - if ip.To4() != nil { - return FAMILY_V4 - } - return FAMILY_V6 -} - -var nativeEndian binary.ByteOrder - -// Get native endianness for the system -func NativeEndian() binary.ByteOrder { - if nativeEndian == nil { - var x uint32 = 0x01020304 - if *(*byte)(unsafe.Pointer(&x)) == 0x01 { - nativeEndian = binary.BigEndian - } else { - nativeEndian = binary.LittleEndian - } - } - return nativeEndian -} - -// Byte swap a 16 bit value if we aren't big endian -func Swap16(i uint16) uint16 { - if NativeEndian() == binary.BigEndian { - return i - } - return (i&0xff00)>>8 | (i&0xff)<<8 -} - -// Byte swap a 32 bit value if aren't big endian -func Swap32(i uint32) uint32 { - if NativeEndian() == binary.BigEndian { - return i - } - return (i&0xff000000)>>24 | (i&0xff0000)>>8 | (i&0xff00)<<8 | (i&0xff)<<24 -} - -type NetlinkRequestData interface { - Len() int - Serialize() []byte -} - -// IfInfomsg is related to links, but it is used for list requests as well -type IfInfomsg struct { - syscall.IfInfomsg -} - -// Create an IfInfomsg with family specified -func NewIfInfomsg(family int) *IfInfomsg { - return &IfInfomsg{ - IfInfomsg: syscall.IfInfomsg{ - Family: uint8(family), - }, - } -} - -func DeserializeIfInfomsg(b []byte) *IfInfomsg { - return (*IfInfomsg)(unsafe.Pointer(&b[0:syscall.SizeofIfInfomsg][0])) -} - -func (msg *IfInfomsg) Serialize() []byte { - return (*(*[syscall.SizeofIfInfomsg]byte)(unsafe.Pointer(msg)))[:] -} - -func (msg *IfInfomsg) Len() int { - return syscall.SizeofIfInfomsg -} - -func rtaAlignOf(attrlen int) int { - return (attrlen + syscall.RTA_ALIGNTO - 1) & ^(syscall.RTA_ALIGNTO - 1) -} - -func NewIfInfomsgChild(parent *RtAttr, family int) *IfInfomsg { - msg := NewIfInfomsg(family) - parent.children = append(parent.children, msg) - return msg -} - -// Extend RtAttr to handle data and children -type RtAttr struct { - syscall.RtAttr - Data []byte - children []NetlinkRequestData -} - -// Create a new Extended RtAttr object -func NewRtAttr(attrType int, data []byte) *RtAttr { - return &RtAttr{ - RtAttr: syscall.RtAttr{ - Type: uint16(attrType), - }, - children: []NetlinkRequestData{}, - Data: data, - } -} - -// Create a new RtAttr obj anc add it as a child of an existing object -func NewRtAttrChild(parent *RtAttr, attrType int, data []byte) *RtAttr { - attr := NewRtAttr(attrType, data) - parent.children = append(parent.children, attr) - return attr -} - -func (a *RtAttr) Len() int { - if len(a.children) == 0 { - return (syscall.SizeofRtAttr + len(a.Data)) - } - - l := 0 - for _, child := range a.children { - l += rtaAlignOf(child.Len()) - } - l += syscall.SizeofRtAttr - return rtaAlignOf(l + len(a.Data)) -} - -// Serialize the RtAttr into a byte array -// This can't just unsafe.cast because it must iterate through children. 
-func (a *RtAttr) Serialize() []byte { - native := NativeEndian() - - length := a.Len() - buf := make([]byte, rtaAlignOf(length)) - - if a.Data != nil { - copy(buf[4:], a.Data) - } else { - next := 4 - for _, child := range a.children { - childBuf := child.Serialize() - copy(buf[next:], childBuf) - next += rtaAlignOf(len(childBuf)) - } - } - - if l := uint16(length); l != 0 { - native.PutUint16(buf[0:2], l) - } - native.PutUint16(buf[2:4], a.Type) - return buf -} - -type NetlinkRequest struct { - syscall.NlMsghdr - Data []NetlinkRequestData -} - -// Serialize the Netlink Request into a byte array -func (req *NetlinkRequest) Serialize() []byte { - length := syscall.SizeofNlMsghdr - dataBytes := make([][]byte, len(req.Data)) - for i, data := range req.Data { - dataBytes[i] = data.Serialize() - length = length + len(dataBytes[i]) - } - req.Len = uint32(length) - b := make([]byte, length) - hdr := (*(*[syscall.SizeofNlMsghdr]byte)(unsafe.Pointer(req)))[:] - next := syscall.SizeofNlMsghdr - copy(b[0:next], hdr) - for _, data := range dataBytes { - for _, dataByte := range data { - b[next] = dataByte - next = next + 1 - } - } - return b -} - -func (req *NetlinkRequest) AddData(data NetlinkRequestData) { - if data != nil { - req.Data = append(req.Data, data) - } -} - -// Execute the request against a the given sockType. -// Returns a list of netlink messages in seriaized format, optionally filtered -// by resType. -func (req *NetlinkRequest) Execute(sockType int, resType uint16) ([][]byte, error) { - s, err := getNetlinkSocket(sockType) - if err != nil { - return nil, err - } - defer s.Close() - - if err := s.Send(req); err != nil { - return nil, err - } - - pid, err := s.GetPid() - if err != nil { - return nil, err - } - - var res [][]byte - -done: - for { - msgs, err := s.Receive() - if err != nil { - return nil, err - } - for _, m := range msgs { - if m.Header.Seq != req.Seq { - return nil, fmt.Errorf("Wrong Seq nr %d, expected 1", m.Header.Seq) - } - if m.Header.Pid != pid { - return nil, fmt.Errorf("Wrong pid %d, expected %d", m.Header.Pid, pid) - } - if m.Header.Type == syscall.NLMSG_DONE { - break done - } - if m.Header.Type == syscall.NLMSG_ERROR { - native := NativeEndian() - error := int32(native.Uint32(m.Data[0:4])) - if error == 0 { - break done - } - return nil, syscall.Errno(-error) - } - if resType != 0 && m.Header.Type != resType { - continue - } - res = append(res, m.Data) - if m.Header.Flags&syscall.NLM_F_MULTI == 0 { - break done - } - } - } - return res, nil -} - -// Create a new netlink request from proto and flags -// Note the Len value will be inaccurate once data is added until -// the message is serialized -func NewNetlinkRequest(proto, flags int) *NetlinkRequest { - return &NetlinkRequest{ - NlMsghdr: syscall.NlMsghdr{ - Len: uint32(syscall.SizeofNlMsghdr), - Type: uint16(proto), - Flags: syscall.NLM_F_REQUEST | uint16(flags), - Seq: atomic.AddUint32(&nextSeqNr, 1), - }, - } -} - -type NetlinkSocket struct { - fd int - lsa syscall.SockaddrNetlink -} - -func getNetlinkSocket(protocol int) (*NetlinkSocket, error) { - fd, err := syscall.Socket(syscall.AF_NETLINK, syscall.SOCK_RAW, protocol) - if err != nil { - return nil, err - } - s := &NetlinkSocket{ - fd: fd, - } - s.lsa.Family = syscall.AF_NETLINK - if err := syscall.Bind(fd, &s.lsa); err != nil { - syscall.Close(fd) - return nil, err - } - - return s, nil -} - -// Create a netlink socket with a given protocol (e.g. NETLINK_ROUTE) -// and subscribe it to multicast groups passed in variable argument list. 
-// Returns the netlink socket on which Receive() method can be called -// to retrieve the messages from the kernel. -func Subscribe(protocol int, groups ...uint) (*NetlinkSocket, error) { - fd, err := syscall.Socket(syscall.AF_NETLINK, syscall.SOCK_RAW, protocol) - if err != nil { - return nil, err - } - s := &NetlinkSocket{ - fd: fd, - } - s.lsa.Family = syscall.AF_NETLINK - - for _, g := range groups { - s.lsa.Groups |= (1 << (g - 1)) - } - - if err := syscall.Bind(fd, &s.lsa); err != nil { - syscall.Close(fd) - return nil, err - } - - return s, nil -} - -func (s *NetlinkSocket) Close() { - syscall.Close(s.fd) -} - -func (s *NetlinkSocket) Send(request *NetlinkRequest) error { - if err := syscall.Sendto(s.fd, request.Serialize(), 0, &s.lsa); err != nil { - return err - } - return nil -} - -func (s *NetlinkSocket) Receive() ([]syscall.NetlinkMessage, error) { - rb := make([]byte, syscall.Getpagesize()) - nr, _, err := syscall.Recvfrom(s.fd, rb, 0) - if err != nil { - return nil, err - } - if nr < syscall.NLMSG_HDRLEN { - return nil, fmt.Errorf("Got short response from netlink") - } - rb = rb[:nr] - return syscall.ParseNetlinkMessage(rb) -} - -func (s *NetlinkSocket) GetPid() (uint32, error) { - lsa, err := syscall.Getsockname(s.fd) - if err != nil { - return 0, err - } - switch v := lsa.(type) { - case *syscall.SockaddrNetlink: - return v.Pid, nil - } - return 0, fmt.Errorf("Wrong socket type") -} - -func ZeroTerminated(s string) []byte { - bytes := make([]byte, len(s)+1) - for i := 0; i < len(s); i++ { - bytes[i] = s[i] - } - bytes[len(s)] = 0 - return bytes -} - -func NonZeroTerminated(s string) []byte { - bytes := make([]byte, len(s)) - for i := 0; i < len(s); i++ { - bytes[i] = s[i] - } - return bytes -} - -func BytesToString(b []byte) string { - n := bytes.Index(b, []byte{0}) - return string(b[:n]) -} - -func Uint8Attr(v uint8) []byte { - return []byte{byte(v)} -} - -func Uint16Attr(v uint16) []byte { - native := NativeEndian() - bytes := make([]byte, 2) - native.PutUint16(bytes, v) - return bytes -} - -func Uint32Attr(v uint32) []byte { - native := NativeEndian() - bytes := make([]byte, 4) - native.PutUint32(bytes, v) - return bytes -} - -func ParseRouteAttr(b []byte) ([]syscall.NetlinkRouteAttr, error) { - var attrs []syscall.NetlinkRouteAttr - for len(b) >= syscall.SizeofRtAttr { - a, vbuf, alen, err := netlinkRouteAttrAndValue(b) - if err != nil { - return nil, err - } - ra := syscall.NetlinkRouteAttr{Attr: *a, Value: vbuf[:int(a.Len)-syscall.SizeofRtAttr]} - attrs = append(attrs, ra) - b = b[alen:] - } - return attrs, nil -} - -func netlinkRouteAttrAndValue(b []byte) (*syscall.RtAttr, []byte, int, error) { - a := (*syscall.RtAttr)(unsafe.Pointer(&b[0])) - if int(a.Len) < syscall.SizeofRtAttr || int(a.Len) > len(b) { - return nil, nil, 0, syscall.EINVAL - } - return a, b[syscall.SizeofRtAttr:], rtaAlignOf(int(a.Len)), nil -} diff --git a/vendor/github.com/vishvananda/netlink/nl/route_linux.go b/vendor/github.com/vishvananda/netlink/nl/route_linux.go deleted file mode 100644 index 447e83e5a..000000000 --- a/vendor/github.com/vishvananda/netlink/nl/route_linux.go +++ /dev/null @@ -1,42 +0,0 @@ -package nl - -import ( - "syscall" - "unsafe" -) - -type RtMsg struct { - syscall.RtMsg -} - -func NewRtMsg() *RtMsg { - return &RtMsg{ - RtMsg: syscall.RtMsg{ - Table: syscall.RT_TABLE_MAIN, - Scope: syscall.RT_SCOPE_UNIVERSE, - Protocol: syscall.RTPROT_BOOT, - Type: syscall.RTN_UNICAST, - }, - } -} - -func NewRtDelMsg() *RtMsg { - return &RtMsg{ - RtMsg: syscall.RtMsg{ - Table: 
syscall.RT_TABLE_MAIN, - Scope: syscall.RT_SCOPE_NOWHERE, - }, - } -} - -func (msg *RtMsg) Len() int { - return syscall.SizeofRtMsg -} - -func DeserializeRtMsg(b []byte) *RtMsg { - return (*RtMsg)(unsafe.Pointer(&b[0:syscall.SizeofRtMsg][0])) -} - -func (msg *RtMsg) Serialize() []byte { - return (*(*[syscall.SizeofRtMsg]byte)(unsafe.Pointer(msg)))[:] -} diff --git a/vendor/github.com/vishvananda/netlink/nl/tc_linux.go b/vendor/github.com/vishvananda/netlink/nl/tc_linux.go deleted file mode 100644 index c9bfe8dfd..000000000 --- a/vendor/github.com/vishvananda/netlink/nl/tc_linux.go +++ /dev/null @@ -1,359 +0,0 @@ -package nl - -import ( - "unsafe" -) - -// Message types -const ( - TCA_UNSPEC = iota - TCA_KIND - TCA_OPTIONS - TCA_STATS - TCA_XSTATS - TCA_RATE - TCA_FCNT - TCA_STATS2 - TCA_STAB - TCA_MAX = TCA_STAB -) - -const ( - TCA_ACT_TAB = 1 - TCAA_MAX = 1 -) - -const ( - TCA_PRIO_UNSPEC = iota - TCA_PRIO_MQ - TCA_PRIO_MAX = TCA_PRIO_MQ -) - -const ( - SizeofTcMsg = 0x14 - SizeofTcActionMsg = 0x04 - SizeofTcPrioMap = 0x14 - SizeofTcRateSpec = 0x0c - SizeofTcTbfQopt = 2*SizeofTcRateSpec + 0x0c - SizeofTcU32Key = 0x10 - SizeofTcU32Sel = 0x10 // without keys - SizeofTcMirred = 0x1c -) - -// struct tcmsg { -// unsigned char tcm_family; -// unsigned char tcm__pad1; -// unsigned short tcm__pad2; -// int tcm_ifindex; -// __u32 tcm_handle; -// __u32 tcm_parent; -// __u32 tcm_info; -// }; - -type TcMsg struct { - Family uint8 - Pad [3]byte - Ifindex int32 - Handle uint32 - Parent uint32 - Info uint32 -} - -func (msg *TcMsg) Len() int { - return SizeofTcMsg -} - -func DeserializeTcMsg(b []byte) *TcMsg { - return (*TcMsg)(unsafe.Pointer(&b[0:SizeofTcMsg][0])) -} - -func (x *TcMsg) Serialize() []byte { - return (*(*[SizeofTcMsg]byte)(unsafe.Pointer(x)))[:] -} - -// struct tcamsg { -// unsigned char tca_family; -// unsigned char tca__pad1; -// unsigned short tca__pad2; -// }; - -type TcActionMsg struct { - Family uint8 - Pad [3]byte -} - -func (msg *TcActionMsg) Len() int { - return SizeofTcActionMsg -} - -func DeserializeTcActionMsg(b []byte) *TcActionMsg { - return (*TcActionMsg)(unsafe.Pointer(&b[0:SizeofTcActionMsg][0])) -} - -func (x *TcActionMsg) Serialize() []byte { - return (*(*[SizeofTcActionMsg]byte)(unsafe.Pointer(x)))[:] -} - -const ( - TC_PRIO_MAX = 15 -) - -// struct tc_prio_qopt { -// int bands; /* Number of bands */ -// __u8 priomap[TC_PRIO_MAX+1]; /* Map: logical priority -> PRIO band */ -// }; - -type TcPrioMap struct { - Bands int32 - Priomap [TC_PRIO_MAX + 1]uint8 -} - -func (msg *TcPrioMap) Len() int { - return SizeofTcPrioMap -} - -func DeserializeTcPrioMap(b []byte) *TcPrioMap { - return (*TcPrioMap)(unsafe.Pointer(&b[0:SizeofTcPrioMap][0])) -} - -func (x *TcPrioMap) Serialize() []byte { - return (*(*[SizeofTcPrioMap]byte)(unsafe.Pointer(x)))[:] -} - -const ( - TCA_TBF_UNSPEC = iota - TCA_TBF_PARMS - TCA_TBF_RTAB - TCA_TBF_PTAB - TCA_TBF_RATE64 - TCA_TBF_PRATE64 - TCA_TBF_BURST - TCA_TBF_PBURST - TCA_TBF_MAX = TCA_TBF_PBURST -) - -// struct tc_ratespec { -// unsigned char cell_log; -// __u8 linklayer; /* lower 4 bits */ -// unsigned short overhead; -// short cell_align; -// unsigned short mpu; -// __u32 rate; -// }; - -type TcRateSpec struct { - CellLog uint8 - Linklayer uint8 - Overhead uint16 - CellAlign int16 - Mpu uint16 - Rate uint32 -} - -func (msg *TcRateSpec) Len() int { - return SizeofTcRateSpec -} - -func DeserializeTcRateSpec(b []byte) *TcRateSpec { - return (*TcRateSpec)(unsafe.Pointer(&b[0:SizeofTcRateSpec][0])) -} - -func (x *TcRateSpec) Serialize() []byte { - 
return (*(*[SizeofTcRateSpec]byte)(unsafe.Pointer(x)))[:] -} - -// struct tc_tbf_qopt { -// struct tc_ratespec rate; -// struct tc_ratespec peakrate; -// __u32 limit; -// __u32 buffer; -// __u32 mtu; -// }; - -type TcTbfQopt struct { - Rate TcRateSpec - Peakrate TcRateSpec - Limit uint32 - Buffer uint32 - Mtu uint32 -} - -func (msg *TcTbfQopt) Len() int { - return SizeofTcTbfQopt -} - -func DeserializeTcTbfQopt(b []byte) *TcTbfQopt { - return (*TcTbfQopt)(unsafe.Pointer(&b[0:SizeofTcTbfQopt][0])) -} - -func (x *TcTbfQopt) Serialize() []byte { - return (*(*[SizeofTcTbfQopt]byte)(unsafe.Pointer(x)))[:] -} - -const ( - TCA_U32_UNSPEC = iota - TCA_U32_CLASSID - TCA_U32_HASH - TCA_U32_LINK - TCA_U32_DIVISOR - TCA_U32_SEL - TCA_U32_POLICE - TCA_U32_ACT - TCA_U32_INDEV - TCA_U32_PCNT - TCA_U32_MARK - TCA_U32_MAX = TCA_U32_MARK -) - -// struct tc_u32_key { -// __be32 mask; -// __be32 val; -// int off; -// int offmask; -// }; - -type TcU32Key struct { - Mask uint32 // big endian - Val uint32 // big endian - Off int32 - OffMask int32 -} - -func (msg *TcU32Key) Len() int { - return SizeofTcU32Key -} - -func DeserializeTcU32Key(b []byte) *TcU32Key { - return (*TcU32Key)(unsafe.Pointer(&b[0:SizeofTcU32Key][0])) -} - -func (x *TcU32Key) Serialize() []byte { - return (*(*[SizeofTcU32Key]byte)(unsafe.Pointer(x)))[:] -} - -// struct tc_u32_sel { -// unsigned char flags; -// unsigned char offshift; -// unsigned char nkeys; -// -// __be16 offmask; -// __u16 off; -// short offoff; -// -// short hoff; -// __be32 hmask; -// struct tc_u32_key keys[0]; -// }; - -const ( - TC_U32_TERMINAL = 1 << iota - TC_U32_OFFSET = 1 << iota - TC_U32_VAROFFSET = 1 << iota - TC_U32_EAT = 1 << iota -) - -type TcU32Sel struct { - Flags uint8 - Offshift uint8 - Nkeys uint8 - Pad uint8 - Offmask uint16 // big endian - Off uint16 - Offoff int16 - Hoff int16 - Hmask uint32 // big endian - Keys []TcU32Key -} - -func (msg *TcU32Sel) Len() int { - return SizeofTcU32Sel + int(msg.Nkeys)*SizeofTcU32Key -} - -func DeserializeTcU32Sel(b []byte) *TcU32Sel { - x := &TcU32Sel{} - copy((*(*[SizeofTcU32Sel]byte)(unsafe.Pointer(x)))[:], b) - next := SizeofTcU32Sel - var i uint8 - for i = 0; i < x.Nkeys; i++ { - x.Keys = append(x.Keys, *DeserializeTcU32Key(b[next:])) - next += SizeofTcU32Key - } - return x -} - -func (x *TcU32Sel) Serialize() []byte { - // This can't just unsafe.cast because it must iterate through keys. 
- buf := make([]byte, x.Len()) - copy(buf, (*(*[SizeofTcU32Sel]byte)(unsafe.Pointer(x)))[:]) - next := SizeofTcU32Sel - for _, key := range x.Keys { - keyBuf := key.Serialize() - copy(buf[next:], keyBuf) - next += SizeofTcU32Key - } - return buf -} - -const ( - TCA_ACT_MIRRED = 8 -) - -const ( - TCA_MIRRED_UNSPEC = iota - TCA_MIRRED_TM - TCA_MIRRED_PARMS - TCA_MIRRED_MAX = TCA_MIRRED_PARMS -) - -const ( - TCA_EGRESS_REDIR = 1 /* packet redirect to EGRESS*/ - TCA_EGRESS_MIRROR = 2 /* mirror packet to EGRESS */ - TCA_INGRESS_REDIR = 3 /* packet redirect to INGRESS*/ - TCA_INGRESS_MIRROR = 4 /* mirror packet to INGRESS */ -) - -const ( - TC_ACT_UNSPEC = int32(-1) - TC_ACT_OK = 0 - TC_ACT_RECLASSIFY = 1 - TC_ACT_SHOT = 2 - TC_ACT_PIPE = 3 - TC_ACT_STOLEN = 4 - TC_ACT_QUEUED = 5 - TC_ACT_REPEAT = 6 - TC_ACT_JUMP = 0x10000000 -) - -// #define tc_gen \ -// __u32 index; \ -// __u32 capab; \ -// int action; \ -// int refcnt; \ -// int bindcnt -// struct tc_mirred { -// tc_gen; -// int eaction; /* one of IN/EGRESS_MIRROR/REDIR */ -// __u32 ifindex; /* ifindex of egress port */ -// }; - -type TcMirred struct { - Index uint32 - Capab uint32 - Action int32 - Refcnt int32 - Bindcnt int32 - Eaction int32 - Ifindex uint32 -} - -func (msg *TcMirred) Len() int { - return SizeofTcMirred -} - -func DeserializeTcMirred(b []byte) *TcMirred { - return (*TcMirred)(unsafe.Pointer(&b[0:SizeofTcMirred][0])) -} - -func (x *TcMirred) Serialize() []byte { - return (*(*[SizeofTcMirred]byte)(unsafe.Pointer(x)))[:] -} diff --git a/vendor/github.com/vishvananda/netlink/nl/xfrm_linux.go b/vendor/github.com/vishvananda/netlink/nl/xfrm_linux.go deleted file mode 100644 index d24637d27..000000000 --- a/vendor/github.com/vishvananda/netlink/nl/xfrm_linux.go +++ /dev/null @@ -1,258 +0,0 @@ -package nl - -import ( - "bytes" - "net" - "unsafe" -) - -// Infinity for packet and byte counts -const ( - XFRM_INF = ^uint64(0) -) - -// Message Types -const ( - XFRM_MSG_BASE = 0x10 - XFRM_MSG_NEWSA = 0x10 - XFRM_MSG_DELSA = 0x11 - XFRM_MSG_GETSA = 0x12 - XFRM_MSG_NEWPOLICY = 0x13 - XFRM_MSG_DELPOLICY = 0x14 - XFRM_MSG_GETPOLICY = 0x15 - XFRM_MSG_ALLOCSPI = 0x16 - XFRM_MSG_ACQUIRE = 0x17 - XFRM_MSG_EXPIRE = 0x18 - XFRM_MSG_UPDPOLICY = 0x19 - XFRM_MSG_UPDSA = 0x1a - XFRM_MSG_POLEXPIRE = 0x1b - XFRM_MSG_FLUSHSA = 0x1c - XFRM_MSG_FLUSHPOLICY = 0x1d - XFRM_MSG_NEWAE = 0x1e - XFRM_MSG_GETAE = 0x1f - XFRM_MSG_REPORT = 0x20 - XFRM_MSG_MIGRATE = 0x21 - XFRM_MSG_NEWSADINFO = 0x22 - XFRM_MSG_GETSADINFO = 0x23 - XFRM_MSG_NEWSPDINFO = 0x24 - XFRM_MSG_GETSPDINFO = 0x25 - XFRM_MSG_MAPPING = 0x26 - XFRM_MSG_MAX = 0x26 - XFRM_NR_MSGTYPES = 0x17 -) - -// Attribute types -const ( - /* Netlink message attributes. 
*/ - XFRMA_UNSPEC = 0x00 - XFRMA_ALG_AUTH = 0x01 /* struct xfrm_algo */ - XFRMA_ALG_CRYPT = 0x02 /* struct xfrm_algo */ - XFRMA_ALG_COMP = 0x03 /* struct xfrm_algo */ - XFRMA_ENCAP = 0x04 /* struct xfrm_algo + struct xfrm_encap_tmpl */ - XFRMA_TMPL = 0x05 /* 1 or more struct xfrm_user_tmpl */ - XFRMA_SA = 0x06 /* struct xfrm_usersa_info */ - XFRMA_POLICY = 0x07 /* struct xfrm_userpolicy_info */ - XFRMA_SEC_CTX = 0x08 /* struct xfrm_sec_ctx */ - XFRMA_LTIME_VAL = 0x09 - XFRMA_REPLAY_VAL = 0x0a - XFRMA_REPLAY_THRESH = 0x0b - XFRMA_ETIMER_THRESH = 0x0c - XFRMA_SRCADDR = 0x0d /* xfrm_address_t */ - XFRMA_COADDR = 0x0e /* xfrm_address_t */ - XFRMA_LASTUSED = 0x0f /* unsigned long */ - XFRMA_POLICY_TYPE = 0x10 /* struct xfrm_userpolicy_type */ - XFRMA_MIGRATE = 0x11 - XFRMA_ALG_AEAD = 0x12 /* struct xfrm_algo_aead */ - XFRMA_KMADDRESS = 0x13 /* struct xfrm_user_kmaddress */ - XFRMA_ALG_AUTH_TRUNC = 0x14 /* struct xfrm_algo_auth */ - XFRMA_MARK = 0x15 /* struct xfrm_mark */ - XFRMA_TFCPAD = 0x16 /* __u32 */ - XFRMA_REPLAY_ESN_VAL = 0x17 /* struct xfrm_replay_esn */ - XFRMA_SA_EXTRA_FLAGS = 0x18 /* __u32 */ - XFRMA_MAX = 0x18 -) - -const ( - SizeofXfrmAddress = 0x10 - SizeofXfrmSelector = 0x38 - SizeofXfrmLifetimeCfg = 0x40 - SizeofXfrmLifetimeCur = 0x20 - SizeofXfrmId = 0x18 -) - -// typedef union { -// __be32 a4; -// __be32 a6[4]; -// } xfrm_address_t; - -type XfrmAddress [SizeofXfrmAddress]byte - -func (x *XfrmAddress) ToIP() net.IP { - var empty = [12]byte{} - ip := make(net.IP, net.IPv6len) - if bytes.Equal(x[4:16], empty[:]) { - ip[10] = 0xff - ip[11] = 0xff - copy(ip[12:16], x[0:4]) - } else { - copy(ip[:], x[:]) - } - return ip -} - -func (x *XfrmAddress) ToIPNet(prefixlen uint8) *net.IPNet { - ip := x.ToIP() - if GetIPFamily(ip) == FAMILY_V4 { - return &net.IPNet{IP: ip, Mask: net.CIDRMask(int(prefixlen), 32)} - } - return &net.IPNet{IP: ip, Mask: net.CIDRMask(int(prefixlen), 128)} -} - -func (x *XfrmAddress) FromIP(ip net.IP) { - var empty = [16]byte{} - if len(ip) < net.IPv4len { - copy(x[4:16], empty[:]) - } else if GetIPFamily(ip) == FAMILY_V4 { - copy(x[0:4], ip.To4()[0:4]) - copy(x[4:16], empty[:12]) - } else { - copy(x[0:16], ip.To16()[0:16]) - } -} - -func DeserializeXfrmAddress(b []byte) *XfrmAddress { - return (*XfrmAddress)(unsafe.Pointer(&b[0:SizeofXfrmAddress][0])) -} - -func (x *XfrmAddress) Serialize() []byte { - return (*(*[SizeofXfrmAddress]byte)(unsafe.Pointer(x)))[:] -} - -// struct xfrm_selector { -// xfrm_address_t daddr; -// xfrm_address_t saddr; -// __be16 dport; -// __be16 dport_mask; -// __be16 sport; -// __be16 sport_mask; -// __u16 family; -// __u8 prefixlen_d; -// __u8 prefixlen_s; -// __u8 proto; -// int ifindex; -// __kernel_uid32_t user; -// }; - -type XfrmSelector struct { - Daddr XfrmAddress - Saddr XfrmAddress - Dport uint16 // big endian - DportMask uint16 // big endian - Sport uint16 // big endian - SportMask uint16 // big endian - Family uint16 - PrefixlenD uint8 - PrefixlenS uint8 - Proto uint8 - Pad [3]byte - Ifindex int32 - User uint32 -} - -func (msg *XfrmSelector) Len() int { - return SizeofXfrmSelector -} - -func DeserializeXfrmSelector(b []byte) *XfrmSelector { - return (*XfrmSelector)(unsafe.Pointer(&b[0:SizeofXfrmSelector][0])) -} - -func (msg *XfrmSelector) Serialize() []byte { - return (*(*[SizeofXfrmSelector]byte)(unsafe.Pointer(msg)))[:] -} - -// struct xfrm_lifetime_cfg { -// __u64 soft_byte_limit; -// __u64 hard_byte_limit; -// __u64 soft_packet_limit; -// __u64 hard_packet_limit; -// __u64 soft_add_expires_seconds; -// __u64 
hard_add_expires_seconds; -// __u64 soft_use_expires_seconds; -// __u64 hard_use_expires_seconds; -// }; -// - -type XfrmLifetimeCfg struct { - SoftByteLimit uint64 - HardByteLimit uint64 - SoftPacketLimit uint64 - HardPacketLimit uint64 - SoftAddExpiresSeconds uint64 - HardAddExpiresSeconds uint64 - SoftUseExpiresSeconds uint64 - HardUseExpiresSeconds uint64 -} - -func (msg *XfrmLifetimeCfg) Len() int { - return SizeofXfrmLifetimeCfg -} - -func DeserializeXfrmLifetimeCfg(b []byte) *XfrmLifetimeCfg { - return (*XfrmLifetimeCfg)(unsafe.Pointer(&b[0:SizeofXfrmLifetimeCfg][0])) -} - -func (msg *XfrmLifetimeCfg) Serialize() []byte { - return (*(*[SizeofXfrmLifetimeCfg]byte)(unsafe.Pointer(msg)))[:] -} - -// struct xfrm_lifetime_cur { -// __u64 bytes; -// __u64 packets; -// __u64 add_time; -// __u64 use_time; -// }; - -type XfrmLifetimeCur struct { - Bytes uint64 - Packets uint64 - AddTime uint64 - UseTime uint64 -} - -func (msg *XfrmLifetimeCur) Len() int { - return SizeofXfrmLifetimeCur -} - -func DeserializeXfrmLifetimeCur(b []byte) *XfrmLifetimeCur { - return (*XfrmLifetimeCur)(unsafe.Pointer(&b[0:SizeofXfrmLifetimeCur][0])) -} - -func (msg *XfrmLifetimeCur) Serialize() []byte { - return (*(*[SizeofXfrmLifetimeCur]byte)(unsafe.Pointer(msg)))[:] -} - -// struct xfrm_id { -// xfrm_address_t daddr; -// __be32 spi; -// __u8 proto; -// }; - -type XfrmId struct { - Daddr XfrmAddress - Spi uint32 // big endian - Proto uint8 - Pad [3]byte -} - -func (msg *XfrmId) Len() int { - return SizeofXfrmId -} - -func DeserializeXfrmId(b []byte) *XfrmId { - return (*XfrmId)(unsafe.Pointer(&b[0:SizeofXfrmId][0])) -} - -func (msg *XfrmId) Serialize() []byte { - return (*(*[SizeofXfrmId]byte)(unsafe.Pointer(msg)))[:] -} diff --git a/vendor/github.com/vishvananda/netlink/nl/xfrm_policy_linux.go b/vendor/github.com/vishvananda/netlink/nl/xfrm_policy_linux.go deleted file mode 100644 index 66f7e03d2..000000000 --- a/vendor/github.com/vishvananda/netlink/nl/xfrm_policy_linux.go +++ /dev/null @@ -1,119 +0,0 @@ -package nl - -import ( - "unsafe" -) - -const ( - SizeofXfrmUserpolicyId = 0x40 - SizeofXfrmUserpolicyInfo = 0xa8 - SizeofXfrmUserTmpl = 0x40 -) - -// struct xfrm_userpolicy_id { -// struct xfrm_selector sel; -// __u32 index; -// __u8 dir; -// }; -// - -type XfrmUserpolicyId struct { - Sel XfrmSelector - Index uint32 - Dir uint8 - Pad [3]byte -} - -func (msg *XfrmUserpolicyId) Len() int { - return SizeofXfrmUserpolicyId -} - -func DeserializeXfrmUserpolicyId(b []byte) *XfrmUserpolicyId { - return (*XfrmUserpolicyId)(unsafe.Pointer(&b[0:SizeofXfrmUserpolicyId][0])) -} - -func (msg *XfrmUserpolicyId) Serialize() []byte { - return (*(*[SizeofXfrmUserpolicyId]byte)(unsafe.Pointer(msg)))[:] -} - -// struct xfrm_userpolicy_info { -// struct xfrm_selector sel; -// struct xfrm_lifetime_cfg lft; -// struct xfrm_lifetime_cur curlft; -// __u32 priority; -// __u32 index; -// __u8 dir; -// __u8 action; -// #define XFRM_POLICY_ALLOW 0 -// #define XFRM_POLICY_BLOCK 1 -// __u8 flags; -// #define XFRM_POLICY_LOCALOK 1 /* Allow user to override global policy */ -// /* Automatically expand selector to include matching ICMP payloads. 
*/ -// #define XFRM_POLICY_ICMP 2 -// __u8 share; -// }; - -type XfrmUserpolicyInfo struct { - Sel XfrmSelector - Lft XfrmLifetimeCfg - Curlft XfrmLifetimeCur - Priority uint32 - Index uint32 - Dir uint8 - Action uint8 - Flags uint8 - Share uint8 - Pad [4]byte -} - -func (msg *XfrmUserpolicyInfo) Len() int { - return SizeofXfrmUserpolicyInfo -} - -func DeserializeXfrmUserpolicyInfo(b []byte) *XfrmUserpolicyInfo { - return (*XfrmUserpolicyInfo)(unsafe.Pointer(&b[0:SizeofXfrmUserpolicyInfo][0])) -} - -func (msg *XfrmUserpolicyInfo) Serialize() []byte { - return (*(*[SizeofXfrmUserpolicyInfo]byte)(unsafe.Pointer(msg)))[:] -} - -// struct xfrm_user_tmpl { -// struct xfrm_id id; -// __u16 family; -// xfrm_address_t saddr; -// __u32 reqid; -// __u8 mode; -// __u8 share; -// __u8 optional; -// __u32 aalgos; -// __u32 ealgos; -// __u32 calgos; -// } - -type XfrmUserTmpl struct { - XfrmId XfrmId - Family uint16 - Pad1 [2]byte - Saddr XfrmAddress - Reqid uint32 - Mode uint8 - Share uint8 - Optional uint8 - Pad2 byte - Aalgos uint32 - Ealgos uint32 - Calgos uint32 -} - -func (msg *XfrmUserTmpl) Len() int { - return SizeofXfrmUserTmpl -} - -func DeserializeXfrmUserTmpl(b []byte) *XfrmUserTmpl { - return (*XfrmUserTmpl)(unsafe.Pointer(&b[0:SizeofXfrmUserTmpl][0])) -} - -func (msg *XfrmUserTmpl) Serialize() []byte { - return (*(*[SizeofXfrmUserTmpl]byte)(unsafe.Pointer(msg)))[:] -} diff --git a/vendor/github.com/vishvananda/netlink/nl/xfrm_state_linux.go b/vendor/github.com/vishvananda/netlink/nl/xfrm_state_linux.go deleted file mode 100644 index 4876ce458..000000000 --- a/vendor/github.com/vishvananda/netlink/nl/xfrm_state_linux.go +++ /dev/null @@ -1,221 +0,0 @@ -package nl - -import ( - "unsafe" -) - -const ( - SizeofXfrmUsersaId = 0x18 - SizeofXfrmStats = 0x0c - SizeofXfrmUsersaInfo = 0xe0 - SizeofXfrmAlgo = 0x44 - SizeofXfrmAlgoAuth = 0x48 - SizeofXfrmEncapTmpl = 0x18 -) - -// struct xfrm_usersa_id { -// xfrm_address_t daddr; -// __be32 spi; -// __u16 family; -// __u8 proto; -// }; - -type XfrmUsersaId struct { - Daddr XfrmAddress - Spi uint32 // big endian - Family uint16 - Proto uint8 - Pad byte -} - -func (msg *XfrmUsersaId) Len() int { - return SizeofXfrmUsersaId -} - -func DeserializeXfrmUsersaId(b []byte) *XfrmUsersaId { - return (*XfrmUsersaId)(unsafe.Pointer(&b[0:SizeofXfrmUsersaId][0])) -} - -func (msg *XfrmUsersaId) Serialize() []byte { - return (*(*[SizeofXfrmUsersaId]byte)(unsafe.Pointer(msg)))[:] -} - -// struct xfrm_stats { -// __u32 replay_window; -// __u32 replay; -// __u32 integrity_failed; -// }; - -type XfrmStats struct { - ReplayWindow uint32 - Replay uint32 - IntegrityFailed uint32 -} - -func (msg *XfrmStats) Len() int { - return SizeofXfrmStats -} - -func DeserializeXfrmStats(b []byte) *XfrmStats { - return (*XfrmStats)(unsafe.Pointer(&b[0:SizeofXfrmStats][0])) -} - -func (msg *XfrmStats) Serialize() []byte { - return (*(*[SizeofXfrmStats]byte)(unsafe.Pointer(msg)))[:] -} - -// struct xfrm_usersa_info { -// struct xfrm_selector sel; -// struct xfrm_id id; -// xfrm_address_t saddr; -// struct xfrm_lifetime_cfg lft; -// struct xfrm_lifetime_cur curlft; -// struct xfrm_stats stats; -// __u32 seq; -// __u32 reqid; -// __u16 family; -// __u8 mode; /* XFRM_MODE_xxx */ -// __u8 replay_window; -// __u8 flags; -// #define XFRM_STATE_NOECN 1 -// #define XFRM_STATE_DECAP_DSCP 2 -// #define XFRM_STATE_NOPMTUDISC 4 -// #define XFRM_STATE_WILDRECV 8 -// #define XFRM_STATE_ICMP 16 -// #define XFRM_STATE_AF_UNSPEC 32 -// #define XFRM_STATE_ALIGN4 64 -// #define XFRM_STATE_ESN 128 -// }; -// 
-// #define XFRM_SA_XFLAG_DONT_ENCAP_DSCP 1 -// - -type XfrmUsersaInfo struct { - Sel XfrmSelector - Id XfrmId - Saddr XfrmAddress - Lft XfrmLifetimeCfg - Curlft XfrmLifetimeCur - Stats XfrmStats - Seq uint32 - Reqid uint32 - Family uint16 - Mode uint8 - ReplayWindow uint8 - Flags uint8 - Pad [7]byte -} - -func (msg *XfrmUsersaInfo) Len() int { - return SizeofXfrmUsersaInfo -} - -func DeserializeXfrmUsersaInfo(b []byte) *XfrmUsersaInfo { - return (*XfrmUsersaInfo)(unsafe.Pointer(&b[0:SizeofXfrmUsersaInfo][0])) -} - -func (msg *XfrmUsersaInfo) Serialize() []byte { - return (*(*[SizeofXfrmUsersaInfo]byte)(unsafe.Pointer(msg)))[:] -} - -// struct xfrm_algo { -// char alg_name[64]; -// unsigned int alg_key_len; /* in bits */ -// char alg_key[0]; -// }; - -type XfrmAlgo struct { - AlgName [64]byte - AlgKeyLen uint32 - AlgKey []byte -} - -func (msg *XfrmAlgo) Len() int { - return SizeofXfrmAlgo + int(msg.AlgKeyLen/8) -} - -func DeserializeXfrmAlgo(b []byte) *XfrmAlgo { - ret := XfrmAlgo{} - copy(ret.AlgName[:], b[0:64]) - ret.AlgKeyLen = *(*uint32)(unsafe.Pointer(&b[64])) - ret.AlgKey = b[68:ret.Len()] - return &ret -} - -func (msg *XfrmAlgo) Serialize() []byte { - b := make([]byte, msg.Len()) - copy(b[0:64], msg.AlgName[:]) - copy(b[64:68], (*(*[4]byte)(unsafe.Pointer(&msg.AlgKeyLen)))[:]) - copy(b[68:msg.Len()], msg.AlgKey[:]) - return b -} - -// struct xfrm_algo_auth { -// char alg_name[64]; -// unsigned int alg_key_len; /* in bits */ -// unsigned int alg_trunc_len; /* in bits */ -// char alg_key[0]; -// }; - -type XfrmAlgoAuth struct { - AlgName [64]byte - AlgKeyLen uint32 - AlgTruncLen uint32 - AlgKey []byte -} - -func (msg *XfrmAlgoAuth) Len() int { - return SizeofXfrmAlgoAuth + int(msg.AlgKeyLen/8) -} - -func DeserializeXfrmAlgoAuth(b []byte) *XfrmAlgoAuth { - ret := XfrmAlgoAuth{} - copy(ret.AlgName[:], b[0:64]) - ret.AlgKeyLen = *(*uint32)(unsafe.Pointer(&b[64])) - ret.AlgTruncLen = *(*uint32)(unsafe.Pointer(&b[68])) - ret.AlgKey = b[72:ret.Len()] - return &ret -} - -func (msg *XfrmAlgoAuth) Serialize() []byte { - b := make([]byte, msg.Len()) - copy(b[0:64], msg.AlgName[:]) - copy(b[64:68], (*(*[4]byte)(unsafe.Pointer(&msg.AlgKeyLen)))[:]) - copy(b[68:72], (*(*[4]byte)(unsafe.Pointer(&msg.AlgTruncLen)))[:]) - copy(b[72:msg.Len()], msg.AlgKey[:]) - return b -} - -// struct xfrm_algo_aead { -// char alg_name[64]; -// unsigned int alg_key_len; /* in bits */ -// unsigned int alg_icv_len; /* in bits */ -// char alg_key[0]; -// } - -// struct xfrm_encap_tmpl { -// __u16 encap_type; -// __be16 encap_sport; -// __be16 encap_dport; -// xfrm_address_t encap_oa; -// }; - -type XfrmEncapTmpl struct { - EncapType uint16 - EncapSport uint16 // big endian - EncapDport uint16 // big endian - Pad [2]byte - EncapOa XfrmAddress -} - -func (msg *XfrmEncapTmpl) Len() int { - return SizeofXfrmEncapTmpl -} - -func DeserializeXfrmEncapTmpl(b []byte) *XfrmEncapTmpl { - return (*XfrmEncapTmpl)(unsafe.Pointer(&b[0:SizeofXfrmEncapTmpl][0])) -} - -func (msg *XfrmEncapTmpl) Serialize() []byte { - return (*(*[SizeofXfrmEncapTmpl]byte)(unsafe.Pointer(msg)))[:] -} diff --git a/vendor/github.com/vishvananda/netlink/protinfo.go b/vendor/github.com/vishvananda/netlink/protinfo.go deleted file mode 100644 index f39ab8f4e..000000000 --- a/vendor/github.com/vishvananda/netlink/protinfo.go +++ /dev/null @@ -1,53 +0,0 @@ -package netlink - -import ( - "strings" -) - -// Protinfo represents bridge flags from netlink. 
-type Protinfo struct { - Hairpin bool - Guard bool - FastLeave bool - RootBlock bool - Learning bool - Flood bool -} - -// String returns a list of enabled flags -func (prot *Protinfo) String() string { - var boolStrings []string - if prot.Hairpin { - boolStrings = append(boolStrings, "Hairpin") - } - if prot.Guard { - boolStrings = append(boolStrings, "Guard") - } - if prot.FastLeave { - boolStrings = append(boolStrings, "FastLeave") - } - if prot.RootBlock { - boolStrings = append(boolStrings, "RootBlock") - } - if prot.Learning { - boolStrings = append(boolStrings, "Learning") - } - if prot.Flood { - boolStrings = append(boolStrings, "Flood") - } - return strings.Join(boolStrings, " ") -} - -func boolToByte(x bool) []byte { - if x { - return []byte{1} - } - return []byte{0} -} - -func byteToBool(x byte) bool { - if uint8(x) != 0 { - return true - } - return false -} diff --git a/vendor/github.com/vishvananda/netlink/protinfo_linux.go b/vendor/github.com/vishvananda/netlink/protinfo_linux.go deleted file mode 100644 index 7181eba10..000000000 --- a/vendor/github.com/vishvananda/netlink/protinfo_linux.go +++ /dev/null @@ -1,60 +0,0 @@ -package netlink - -import ( - "fmt" - "syscall" - - "github.com/vishvananda/netlink/nl" -) - -func LinkGetProtinfo(link Link) (Protinfo, error) { - base := link.Attrs() - ensureIndex(base) - var pi Protinfo - req := nl.NewNetlinkRequest(syscall.RTM_GETLINK, syscall.NLM_F_DUMP) - msg := nl.NewIfInfomsg(syscall.AF_BRIDGE) - req.AddData(msg) - msgs, err := req.Execute(syscall.NETLINK_ROUTE, 0) - if err != nil { - return pi, err - } - - for _, m := range msgs { - ans := nl.DeserializeIfInfomsg(m) - if int(ans.Index) != base.Index { - continue - } - attrs, err := nl.ParseRouteAttr(m[ans.Len():]) - if err != nil { - return pi, err - } - for _, attr := range attrs { - if attr.Attr.Type != syscall.IFLA_PROTINFO|syscall.NLA_F_NESTED { - continue - } - infos, err := nl.ParseRouteAttr(attr.Value) - if err != nil { - return pi, err - } - var pi Protinfo - for _, info := range infos { - switch info.Attr.Type { - case nl.IFLA_BRPORT_MODE: - pi.Hairpin = byteToBool(info.Value[0]) - case nl.IFLA_BRPORT_GUARD: - pi.Guard = byteToBool(info.Value[0]) - case nl.IFLA_BRPORT_FAST_LEAVE: - pi.FastLeave = byteToBool(info.Value[0]) - case nl.IFLA_BRPORT_PROTECT: - pi.RootBlock = byteToBool(info.Value[0]) - case nl.IFLA_BRPORT_LEARNING: - pi.Learning = byteToBool(info.Value[0]) - case nl.IFLA_BRPORT_UNICAST_FLOOD: - pi.Flood = byteToBool(info.Value[0]) - } - } - return pi, nil - } - } - return pi, fmt.Errorf("Device with index %d not found", base.Index) -} diff --git a/vendor/github.com/vishvananda/netlink/qdisc.go b/vendor/github.com/vishvananda/netlink/qdisc.go deleted file mode 100644 index 8e3d020fd..000000000 --- a/vendor/github.com/vishvananda/netlink/qdisc.go +++ /dev/null @@ -1,138 +0,0 @@ -package netlink - -import ( - "fmt" -) - -const ( - HANDLE_NONE = 0 - HANDLE_INGRESS = 0xFFFFFFF1 - HANDLE_ROOT = 0xFFFFFFFF - PRIORITY_MAP_LEN = 16 -) - -type Qdisc interface { - Attrs() *QdiscAttrs - Type() string -} - -// Qdisc represents a netlink qdisc. A qdisc is associated with a link, -// has a handle, a parent and a refcnt. The root qdisc of a device should -// have parent == HANDLE_ROOT. 
-type QdiscAttrs struct { - LinkIndex int - Handle uint32 - Parent uint32 - Refcnt uint32 // read only -} - -func (q QdiscAttrs) String() string { - return fmt.Sprintf("{LinkIndex: %d, Handle: %s, Parent: %s, Refcnt: %s}", q.LinkIndex, HandleStr(q.Handle), HandleStr(q.Parent), q.Refcnt) -} - -func MakeHandle(major, minor uint16) uint32 { - return (uint32(major) << 16) | uint32(minor) -} - -func MajorMinor(handle uint32) (uint16, uint16) { - return uint16((handle & 0xFFFF0000) >> 16), uint16(handle & 0x0000FFFFF) -} - -func HandleStr(handle uint32) string { - switch handle { - case HANDLE_NONE: - return "none" - case HANDLE_INGRESS: - return "ingress" - case HANDLE_ROOT: - return "root" - default: - major, minor := MajorMinor(handle) - return fmt.Sprintf("%x:%x", major, minor) - } -} - -// PfifoFast is the default qdisc created by the kernel if one has not -// been defined for the interface -type PfifoFast struct { - QdiscAttrs - Bands uint8 - PriorityMap [PRIORITY_MAP_LEN]uint8 -} - -func (qdisc *PfifoFast) Attrs() *QdiscAttrs { - return &qdisc.QdiscAttrs -} - -func (qdisc *PfifoFast) Type() string { - return "pfifo_fast" -} - -// Prio is a basic qdisc that works just like PfifoFast -type Prio struct { - QdiscAttrs - Bands uint8 - PriorityMap [PRIORITY_MAP_LEN]uint8 -} - -func NewPrio(attrs QdiscAttrs) *Prio { - return &Prio{ - QdiscAttrs: attrs, - Bands: 3, - PriorityMap: [PRIORITY_MAP_LEN]uint8{1, 2, 2, 2, 1, 2, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1}, - } -} - -func (qdisc *Prio) Attrs() *QdiscAttrs { - return &qdisc.QdiscAttrs -} - -func (qdisc *Prio) Type() string { - return "prio" -} - -// Tbf is a classful qdisc that rate limits based on tokens -type Tbf struct { - QdiscAttrs - // TODO: handle 64bit rate properly - Rate uint64 - Limit uint32 - Buffer uint32 - // TODO: handle other settings -} - -func (qdisc *Tbf) Attrs() *QdiscAttrs { - return &qdisc.QdiscAttrs -} - -func (qdisc *Tbf) Type() string { - return "tbf" -} - -// Ingress is a qdisc for adding ingress filters -type Ingress struct { - QdiscAttrs -} - -func (qdisc *Ingress) Attrs() *QdiscAttrs { - return &qdisc.QdiscAttrs -} - -func (qdisc *Ingress) Type() string { - return "ingress" -} - -// GenericQdisc qdiscs represent types that are not currently understood -// by this netlink library. -type GenericQdisc struct { - QdiscAttrs - QdiscType string -} - -func (qdisc *GenericQdisc) Attrs() *QdiscAttrs { - return &qdisc.QdiscAttrs -} - -func (qdisc *GenericQdisc) Type() string { - return qdisc.QdiscType -} diff --git a/vendor/github.com/vishvananda/netlink/qdisc_linux.go b/vendor/github.com/vishvananda/netlink/qdisc_linux.go deleted file mode 100644 index 2531c9dd1..000000000 --- a/vendor/github.com/vishvananda/netlink/qdisc_linux.go +++ /dev/null @@ -1,263 +0,0 @@ -package netlink - -import ( - "fmt" - "io/ioutil" - "strconv" - "strings" - "syscall" - - "github.com/vishvananda/netlink/nl" -) - -// QdiscDel will delete a qdisc from the system. -// Equivalent to: `tc qdisc del $qdisc` -func QdiscDel(qdisc Qdisc) error { - req := nl.NewNetlinkRequest(syscall.RTM_DELQDISC, syscall.NLM_F_ACK) - base := qdisc.Attrs() - msg := &nl.TcMsg{ - Family: nl.FAMILY_ALL, - Ifindex: int32(base.LinkIndex), - Handle: base.Handle, - Parent: base.Parent, - } - req.AddData(msg) - - _, err := req.Execute(syscall.NETLINK_ROUTE, 0) - return err -} - -// QdiscAdd will add a qdisc to the system. 
-// Equivalent to: `tc qdisc add $qdisc` -func QdiscAdd(qdisc Qdisc) error { - req := nl.NewNetlinkRequest(syscall.RTM_NEWQDISC, syscall.NLM_F_CREATE|syscall.NLM_F_EXCL|syscall.NLM_F_ACK) - base := qdisc.Attrs() - msg := &nl.TcMsg{ - Family: nl.FAMILY_ALL, - Ifindex: int32(base.LinkIndex), - Handle: base.Handle, - Parent: base.Parent, - } - req.AddData(msg) - req.AddData(nl.NewRtAttr(nl.TCA_KIND, nl.ZeroTerminated(qdisc.Type()))) - - options := nl.NewRtAttr(nl.TCA_OPTIONS, nil) - if prio, ok := qdisc.(*Prio); ok { - tcmap := nl.TcPrioMap{ - Bands: int32(prio.Bands), - Priomap: prio.PriorityMap, - } - options = nl.NewRtAttr(nl.TCA_OPTIONS, tcmap.Serialize()) - } else if tbf, ok := qdisc.(*Tbf); ok { - opt := nl.TcTbfQopt{} - // TODO: handle rate > uint32 - opt.Rate.Rate = uint32(tbf.Rate) - opt.Limit = tbf.Limit - opt.Buffer = tbf.Buffer - nl.NewRtAttrChild(options, nl.TCA_TBF_PARMS, opt.Serialize()) - } else if _, ok := qdisc.(*Ingress); ok { - // ingress filters must use the proper handle - if msg.Parent != HANDLE_INGRESS { - return fmt.Errorf("Ingress filters must set Parent to HANDLE_INGRESS") - } - } - req.AddData(options) - _, err := req.Execute(syscall.NETLINK_ROUTE, 0) - return err -} - -// QdiscList gets a list of qdiscs in the system. -// Equivalent to: `tc qdisc show`. -// The list can be filtered by link. -func QdiscList(link Link) ([]Qdisc, error) { - req := nl.NewNetlinkRequest(syscall.RTM_GETQDISC, syscall.NLM_F_DUMP) - index := int32(0) - if link != nil { - base := link.Attrs() - ensureIndex(base) - index = int32(base.Index) - } - msg := &nl.TcMsg{ - Family: nl.FAMILY_ALL, - Ifindex: index, - } - req.AddData(msg) - - msgs, err := req.Execute(syscall.NETLINK_ROUTE, syscall.RTM_NEWQDISC) - if err != nil { - return nil, err - } - - var res []Qdisc - for _, m := range msgs { - msg := nl.DeserializeTcMsg(m) - - attrs, err := nl.ParseRouteAttr(m[msg.Len():]) - if err != nil { - return nil, err - } - - // skip qdiscs from other interfaces - if link != nil && msg.Ifindex != index { - continue - } - - base := QdiscAttrs{ - LinkIndex: int(msg.Ifindex), - Handle: msg.Handle, - Parent: msg.Parent, - Refcnt: msg.Info, - } - var qdisc Qdisc - qdiscType := "" - for _, attr := range attrs { - switch attr.Attr.Type { - case nl.TCA_KIND: - qdiscType = string(attr.Value[:len(attr.Value)-1]) - switch qdiscType { - case "pfifo_fast": - qdisc = &PfifoFast{} - case "prio": - qdisc = &Prio{} - case "tbf": - qdisc = &Tbf{} - case "ingress": - qdisc = &Ingress{} - default: - qdisc = &GenericQdisc{QdiscType: qdiscType} - } - case nl.TCA_OPTIONS: - switch qdiscType { - case "pfifo_fast": - // pfifo returns TcPrioMap directly without wrapping it in rtattr - if err := parsePfifoFastData(qdisc, attr.Value); err != nil { - return nil, err - } - case "prio": - // prio returns TcPrioMap directly without wrapping it in rtattr - if err := parsePrioData(qdisc, attr.Value); err != nil { - return nil, err - } - case "tbf": - data, err := nl.ParseRouteAttr(attr.Value) - if err != nil { - return nil, err - } - if err := parseTbfData(qdisc, data); err != nil { - return nil, err - } - // no options for ingress - } - } - } - *qdisc.Attrs() = base - res = append(res, qdisc) - } - - return res, nil -} - -func parsePfifoFastData(qdisc Qdisc, value []byte) error { - pfifo := qdisc.(*PfifoFast) - tcmap := nl.DeserializeTcPrioMap(value) - pfifo.PriorityMap = tcmap.Priomap - pfifo.Bands = uint8(tcmap.Bands) - return nil -} - -func parsePrioData(qdisc Qdisc, value []byte) error { - prio := qdisc.(*Prio) - tcmap := 
nl.DeserializeTcPrioMap(value) - prio.PriorityMap = tcmap.Priomap - prio.Bands = uint8(tcmap.Bands) - return nil -} - -func parseTbfData(qdisc Qdisc, data []syscall.NetlinkRouteAttr) error { - native = nl.NativeEndian() - tbf := qdisc.(*Tbf) - for _, datum := range data { - switch datum.Attr.Type { - case nl.TCA_TBF_PARMS: - opt := nl.DeserializeTcTbfQopt(datum.Value) - tbf.Rate = uint64(opt.Rate.Rate) - tbf.Limit = opt.Limit - tbf.Buffer = opt.Buffer - case nl.TCA_TBF_RATE64: - tbf.Rate = native.Uint64(datum.Value[0:4]) - } - } - return nil -} - -const ( - TIME_UNITS_PER_SEC = 1000000 -) - -var ( - tickInUsec float64 = 0.0 - clockFactor float64 = 0.0 -) - -func initClock() { - data, err := ioutil.ReadFile("/proc/net/psched") - if err != nil { - return - } - parts := strings.Split(strings.TrimSpace(string(data)), " ") - if len(parts) < 3 { - return - } - var vals [3]uint64 - for i := range vals { - val, err := strconv.ParseUint(parts[i], 16, 32) - if err != nil { - return - } - vals[i] = val - } - // compatibility - if vals[2] == 1000000000 { - vals[0] = vals[1] - } - clockFactor = float64(vals[2]) / TIME_UNITS_PER_SEC - tickInUsec = float64(vals[0]) / float64(vals[1]) * clockFactor -} - -func TickInUsec() float64 { - if tickInUsec == 0.0 { - initClock() - } - return tickInUsec -} - -func ClockFactor() float64 { - if clockFactor == 0.0 { - initClock() - } - return clockFactor -} - -func time2Tick(time uint32) uint32 { - return uint32(float64(time) * TickInUsec()) -} - -func tick2Time(tick uint32) uint32 { - return uint32(float64(tick) / TickInUsec()) -} - -func time2Ktime(time uint32) uint32 { - return uint32(float64(time) * ClockFactor()) -} - -func ktime2Time(ktime uint32) uint32 { - return uint32(float64(ktime) / ClockFactor()) -} - -func burst(rate uint64, buffer uint32) uint32 { - return uint32(float64(rate) * float64(tick2Time(buffer)) / TIME_UNITS_PER_SEC) -} - -func latency(rate uint64, limit, buffer uint32) float64 { - return TIME_UNITS_PER_SEC*(float64(limit)/float64(rate)) - float64(tick2Time(buffer)) -} diff --git a/vendor/github.com/vishvananda/netlink/route.go b/vendor/github.com/vishvananda/netlink/route.go deleted file mode 100644 index 6218546f8..000000000 --- a/vendor/github.com/vishvananda/netlink/route.go +++ /dev/null @@ -1,35 +0,0 @@ -package netlink - -import ( - "fmt" - "net" - "syscall" -) - -// Scope is an enum representing a route scope. -type Scope uint8 - -const ( - SCOPE_UNIVERSE Scope = syscall.RT_SCOPE_UNIVERSE - SCOPE_SITE Scope = syscall.RT_SCOPE_SITE - SCOPE_LINK Scope = syscall.RT_SCOPE_LINK - SCOPE_HOST Scope = syscall.RT_SCOPE_HOST - SCOPE_NOWHERE Scope = syscall.RT_SCOPE_NOWHERE -) - -// Route represents a netlink route. A route is associated with a link, -// has a destination network, an optional source ip, and optional -// gateway. Advanced route parameters and non-main routing tables are -// currently not supported. 
-type Route struct { - LinkIndex int - Scope Scope - Dst *net.IPNet - Src net.IP - Gw net.IP -} - -func (r Route) String() string { - return fmt.Sprintf("{Ifindex: %d Dst: %s Src: %s Gw: %s}", r.LinkIndex, r.Dst, - r.Src, r.Gw) -} diff --git a/vendor/github.com/vishvananda/netlink/route_linux.go b/vendor/github.com/vishvananda/netlink/route_linux.go deleted file mode 100644 index 9e76d4414..000000000 --- a/vendor/github.com/vishvananda/netlink/route_linux.go +++ /dev/null @@ -1,225 +0,0 @@ -package netlink - -import ( - "fmt" - "net" - "syscall" - - "github.com/vishvananda/netlink/nl" -) - -// RtAttr is shared so it is in netlink_linux.go - -// RouteAdd will add a route to the system. -// Equivalent to: `ip route add $route` -func RouteAdd(route *Route) error { - req := nl.NewNetlinkRequest(syscall.RTM_NEWROUTE, syscall.NLM_F_CREATE|syscall.NLM_F_EXCL|syscall.NLM_F_ACK) - return routeHandle(route, req, nl.NewRtMsg()) -} - -// RouteAdd will delete a route from the system. -// Equivalent to: `ip route del $route` -func RouteDel(route *Route) error { - req := nl.NewNetlinkRequest(syscall.RTM_DELROUTE, syscall.NLM_F_ACK) - return routeHandle(route, req, nl.NewRtDelMsg()) -} - -func routeHandle(route *Route, req *nl.NetlinkRequest, msg *nl.RtMsg) error { - if (route.Dst == nil || route.Dst.IP == nil) && route.Src == nil && route.Gw == nil { - return fmt.Errorf("one of Dst.IP, Src, or Gw must not be nil") - } - - msg.Scope = uint8(route.Scope) - family := -1 - var rtAttrs []*nl.RtAttr - - if route.Dst != nil && route.Dst.IP != nil { - dstLen, _ := route.Dst.Mask.Size() - msg.Dst_len = uint8(dstLen) - dstFamily := nl.GetIPFamily(route.Dst.IP) - family = dstFamily - var dstData []byte - if dstFamily == FAMILY_V4 { - dstData = route.Dst.IP.To4() - } else { - dstData = route.Dst.IP.To16() - } - rtAttrs = append(rtAttrs, nl.NewRtAttr(syscall.RTA_DST, dstData)) - } - - if route.Src != nil { - srcFamily := nl.GetIPFamily(route.Src) - if family != -1 && family != srcFamily { - return fmt.Errorf("source and destination ip are not the same IP family") - } - family = srcFamily - var srcData []byte - if srcFamily == FAMILY_V4 { - srcData = route.Src.To4() - } else { - srcData = route.Src.To16() - } - // The commonly used src ip for routes is actually PREFSRC - rtAttrs = append(rtAttrs, nl.NewRtAttr(syscall.RTA_PREFSRC, srcData)) - } - - if route.Gw != nil { - gwFamily := nl.GetIPFamily(route.Gw) - if family != -1 && family != gwFamily { - return fmt.Errorf("gateway, source, and destination ip are not the same IP family") - } - family = gwFamily - var gwData []byte - if gwFamily == FAMILY_V4 { - gwData = route.Gw.To4() - } else { - gwData = route.Gw.To16() - } - rtAttrs = append(rtAttrs, nl.NewRtAttr(syscall.RTA_GATEWAY, gwData)) - } - - msg.Family = uint8(family) - - req.AddData(msg) - for _, attr := range rtAttrs { - req.AddData(attr) - } - - var ( - b = make([]byte, 4) - native = nl.NativeEndian() - ) - native.PutUint32(b, uint32(route.LinkIndex)) - - req.AddData(nl.NewRtAttr(syscall.RTA_OIF, b)) - - _, err := req.Execute(syscall.NETLINK_ROUTE, 0) - return err -} - -// RouteList gets a list of routes in the system. -// Equivalent to: `ip route show`. -// The list can be filtered by link and ip family. 
-func RouteList(link Link, family int) ([]Route, error) { - req := nl.NewNetlinkRequest(syscall.RTM_GETROUTE, syscall.NLM_F_DUMP) - msg := nl.NewIfInfomsg(family) - req.AddData(msg) - - msgs, err := req.Execute(syscall.NETLINK_ROUTE, syscall.RTM_NEWROUTE) - if err != nil { - return nil, err - } - - index := 0 - if link != nil { - base := link.Attrs() - ensureIndex(base) - index = base.Index - } - - native := nl.NativeEndian() - var res []Route -MsgLoop: - for _, m := range msgs { - msg := nl.DeserializeRtMsg(m) - - if msg.Flags&syscall.RTM_F_CLONED != 0 { - // Ignore cloned routes - continue - } - - if msg.Table != syscall.RT_TABLE_MAIN { - // Ignore non-main tables - continue - } - - attrs, err := nl.ParseRouteAttr(m[msg.Len():]) - if err != nil { - return nil, err - } - - route := Route{Scope: Scope(msg.Scope)} - for _, attr := range attrs { - switch attr.Attr.Type { - case syscall.RTA_GATEWAY: - route.Gw = net.IP(attr.Value) - case syscall.RTA_PREFSRC: - route.Src = net.IP(attr.Value) - case syscall.RTA_DST: - route.Dst = &net.IPNet{ - IP: attr.Value, - Mask: net.CIDRMask(int(msg.Dst_len), 8*len(attr.Value)), - } - case syscall.RTA_OIF: - routeIndex := int(native.Uint32(attr.Value[0:4])) - if link != nil && routeIndex != index { - // Ignore routes from other interfaces - continue MsgLoop - } - route.LinkIndex = routeIndex - } - } - res = append(res, route) - } - - return res, nil -} - -// RouteGet gets a route to a specific destination from the host system. -// Equivalent to: 'ip route get'. -func RouteGet(destination net.IP) ([]Route, error) { - req := nl.NewNetlinkRequest(syscall.RTM_GETROUTE, syscall.NLM_F_REQUEST) - family := nl.GetIPFamily(destination) - var destinationData []byte - var bitlen uint8 - if family == FAMILY_V4 { - destinationData = destination.To4() - bitlen = 32 - } else { - destinationData = destination.To16() - bitlen = 128 - } - msg := &nl.RtMsg{} - msg.Family = uint8(family) - msg.Dst_len = bitlen - req.AddData(msg) - - rtaDst := nl.NewRtAttr(syscall.RTA_DST, destinationData) - req.AddData(rtaDst) - - msgs, err := req.Execute(syscall.NETLINK_ROUTE, syscall.RTM_NEWROUTE) - if err != nil { - return nil, err - } - - native := nl.NativeEndian() - var res []Route - for _, m := range msgs { - msg := nl.DeserializeRtMsg(m) - attrs, err := nl.ParseRouteAttr(m[msg.Len():]) - if err != nil { - return nil, err - } - - route := Route{} - for _, attr := range attrs { - switch attr.Attr.Type { - case syscall.RTA_GATEWAY: - route.Gw = net.IP(attr.Value) - case syscall.RTA_PREFSRC: - route.Src = net.IP(attr.Value) - case syscall.RTA_DST: - route.Dst = &net.IPNet{ - IP: attr.Value, - Mask: net.CIDRMask(int(msg.Dst_len), 8*len(attr.Value)), - } - case syscall.RTA_OIF: - routeIndex := int(native.Uint32(attr.Value[0:4])) - route.LinkIndex = routeIndex - } - } - res = append(res, route) - } - return res, nil - -} diff --git a/vendor/github.com/vishvananda/netlink/xfrm.go b/vendor/github.com/vishvananda/netlink/xfrm.go deleted file mode 100644 index 621ffb6c6..000000000 --- a/vendor/github.com/vishvananda/netlink/xfrm.go +++ /dev/null @@ -1,64 +0,0 @@ -package netlink - -import ( - "fmt" - "syscall" -) - -// Proto is an enum representing an ipsec protocol. 
-type Proto uint8 - -const ( - XFRM_PROTO_ROUTE2 Proto = syscall.IPPROTO_ROUTING - XFRM_PROTO_ESP Proto = syscall.IPPROTO_ESP - XFRM_PROTO_AH Proto = syscall.IPPROTO_AH - XFRM_PROTO_HAO Proto = syscall.IPPROTO_DSTOPTS - XFRM_PROTO_COMP Proto = syscall.IPPROTO_COMP - XFRM_PROTO_IPSEC_ANY Proto = syscall.IPPROTO_RAW -) - -func (p Proto) String() string { - switch p { - case XFRM_PROTO_ROUTE2: - return "route2" - case XFRM_PROTO_ESP: - return "esp" - case XFRM_PROTO_AH: - return "ah" - case XFRM_PROTO_HAO: - return "hao" - case XFRM_PROTO_COMP: - return "comp" - case XFRM_PROTO_IPSEC_ANY: - return "ipsec-any" - } - return fmt.Sprintf("%d", p) -} - -// Mode is an enum representing an ipsec transport. -type Mode uint8 - -const ( - XFRM_MODE_TRANSPORT Mode = iota - XFRM_MODE_TUNNEL - XFRM_MODE_ROUTEOPTIMIZATION - XFRM_MODE_IN_TRIGGER - XFRM_MODE_BEET - XFRM_MODE_MAX -) - -func (m Mode) String() string { - switch m { - case XFRM_MODE_TRANSPORT: - return "transport" - case XFRM_MODE_TUNNEL: - return "tunnel" - case XFRM_MODE_ROUTEOPTIMIZATION: - return "ro" - case XFRM_MODE_IN_TRIGGER: - return "in_trigger" - case XFRM_MODE_BEET: - return "beet" - } - return fmt.Sprintf("%d", m) -} diff --git a/vendor/github.com/vishvananda/netlink/xfrm_policy.go b/vendor/github.com/vishvananda/netlink/xfrm_policy.go deleted file mode 100644 index d85c65d2d..000000000 --- a/vendor/github.com/vishvananda/netlink/xfrm_policy.go +++ /dev/null @@ -1,59 +0,0 @@ -package netlink - -import ( - "fmt" - "net" -) - -// Dir is an enum representing an ipsec template direction. -type Dir uint8 - -const ( - XFRM_DIR_IN Dir = iota - XFRM_DIR_OUT - XFRM_DIR_FWD - XFRM_SOCKET_IN - XFRM_SOCKET_OUT - XFRM_SOCKET_FWD -) - -func (d Dir) String() string { - switch d { - case XFRM_DIR_IN: - return "dir in" - case XFRM_DIR_OUT: - return "dir out" - case XFRM_DIR_FWD: - return "dir fwd" - case XFRM_SOCKET_IN: - return "socket in" - case XFRM_SOCKET_OUT: - return "socket out" - case XFRM_SOCKET_FWD: - return "socket fwd" - } - return fmt.Sprintf("socket %d", d-XFRM_SOCKET_IN) -} - -// XfrmPolicyTmpl encapsulates a rule for the base addresses of an ipsec -// policy. These rules are matched with XfrmState to determine encryption -// and authentication algorithms. -type XfrmPolicyTmpl struct { - Dst net.IP - Src net.IP - Proto Proto - Mode Mode - Reqid int -} - -// XfrmPolicy represents an ipsec policy. It represents the overlay network -// and has a list of XfrmPolicyTmpls representing the base addresses of -// the policy. -type XfrmPolicy struct { - Dst *net.IPNet - Src *net.IPNet - Dir Dir - Priority int - Index int - Tmpls []XfrmPolicyTmpl -} diff --git a/vendor/github.com/vishvananda/netlink/xfrm_policy_linux.go b/vendor/github.com/vishvananda/netlink/xfrm_policy_linux.go deleted file mode 100644 index 2daf6dc8b..000000000 --- a/vendor/github.com/vishvananda/netlink/xfrm_policy_linux.go +++ /dev/null @@ -1,127 +0,0 @@ -package netlink - -import ( - "syscall" - - "github.com/vishvananda/netlink/nl" -) - -func selFromPolicy(sel *nl.XfrmSelector, policy *XfrmPolicy) { - sel.Family = uint16(nl.GetIPFamily(policy.Dst.IP)) - sel.Daddr.FromIP(policy.Dst.IP) - sel.Saddr.FromIP(policy.Src.IP) - prefixlenD, _ := policy.Dst.Mask.Size() - sel.PrefixlenD = uint8(prefixlenD) - prefixlenS, _ := policy.Src.Mask.Size() - sel.PrefixlenS = uint8(prefixlenS) -} - -// XfrmPolicyAdd will add an xfrm policy to the system. 
-// Equivalent to: `ip xfrm policy add $policy` -func XfrmPolicyAdd(policy *XfrmPolicy) error { - req := nl.NewNetlinkRequest(nl.XFRM_MSG_NEWPOLICY, syscall.NLM_F_CREATE|syscall.NLM_F_EXCL|syscall.NLM_F_ACK) - - msg := &nl.XfrmUserpolicyInfo{} - selFromPolicy(&msg.Sel, policy) - msg.Priority = uint32(policy.Priority) - msg.Index = uint32(policy.Index) - msg.Dir = uint8(policy.Dir) - msg.Lft.SoftByteLimit = nl.XFRM_INF - msg.Lft.HardByteLimit = nl.XFRM_INF - msg.Lft.SoftPacketLimit = nl.XFRM_INF - msg.Lft.HardPacketLimit = nl.XFRM_INF - req.AddData(msg) - - tmplData := make([]byte, nl.SizeofXfrmUserTmpl*len(policy.Tmpls)) - for i, tmpl := range policy.Tmpls { - start := i * nl.SizeofXfrmUserTmpl - userTmpl := nl.DeserializeXfrmUserTmpl(tmplData[start : start+nl.SizeofXfrmUserTmpl]) - userTmpl.XfrmId.Daddr.FromIP(tmpl.Dst) - userTmpl.Saddr.FromIP(tmpl.Src) - userTmpl.XfrmId.Proto = uint8(tmpl.Proto) - userTmpl.Mode = uint8(tmpl.Mode) - userTmpl.Reqid = uint32(tmpl.Reqid) - userTmpl.Aalgos = ^uint32(0) - userTmpl.Ealgos = ^uint32(0) - userTmpl.Calgos = ^uint32(0) - } - if len(tmplData) > 0 { - tmpls := nl.NewRtAttr(nl.XFRMA_TMPL, tmplData) - req.AddData(tmpls) - } - - _, err := req.Execute(syscall.NETLINK_XFRM, 0) - return err -} - -// XfrmPolicyDel will delete an xfrm policy from the system. Note that -// the Tmpls are ignored when matching the policy to delete. -// Equivalent to: `ip xfrm policy del $policy` -func XfrmPolicyDel(policy *XfrmPolicy) error { - req := nl.NewNetlinkRequest(nl.XFRM_MSG_DELPOLICY, syscall.NLM_F_ACK) - - msg := &nl.XfrmUserpolicyId{} - selFromPolicy(&msg.Sel, policy) - msg.Index = uint32(policy.Index) - msg.Dir = uint8(policy.Dir) - req.AddData(msg) - - _, err := req.Execute(syscall.NETLINK_XFRM, 0) - return err -} - -// XfrmPolicyList gets a list of xfrm policies in the system. -// Equivalent to: `ip xfrm policy show`. -// The list can be filtered by ip family. 
-func XfrmPolicyList(family int) ([]XfrmPolicy, error) { - req := nl.NewNetlinkRequest(nl.XFRM_MSG_GETPOLICY, syscall.NLM_F_DUMP) - - msg := nl.NewIfInfomsg(family) - req.AddData(msg) - - msgs, err := req.Execute(syscall.NETLINK_XFRM, nl.XFRM_MSG_NEWPOLICY) - if err != nil { - return nil, err - } - - var res []XfrmPolicy - for _, m := range msgs { - msg := nl.DeserializeXfrmUserpolicyInfo(m) - - if family != FAMILY_ALL && family != int(msg.Sel.Family) { - continue - } - - var policy XfrmPolicy - - policy.Dst = msg.Sel.Daddr.ToIPNet(msg.Sel.PrefixlenD) - policy.Src = msg.Sel.Saddr.ToIPNet(msg.Sel.PrefixlenS) - policy.Priority = int(msg.Priority) - policy.Index = int(msg.Index) - policy.Dir = Dir(msg.Dir) - - attrs, err := nl.ParseRouteAttr(m[msg.Len():]) - if err != nil { - return nil, err - } - - for _, attr := range attrs { - switch attr.Attr.Type { - case nl.XFRMA_TMPL: - max := len(attr.Value) - for i := 0; i < max; i += nl.SizeofXfrmUserTmpl { - var resTmpl XfrmPolicyTmpl - tmpl := nl.DeserializeXfrmUserTmpl(attr.Value[i : i+nl.SizeofXfrmUserTmpl]) - resTmpl.Dst = tmpl.XfrmId.Daddr.ToIP() - resTmpl.Src = tmpl.Saddr.ToIP() - resTmpl.Proto = Proto(tmpl.XfrmId.Proto) - resTmpl.Mode = Mode(tmpl.Mode) - resTmpl.Reqid = int(tmpl.Reqid) - policy.Tmpls = append(policy.Tmpls, resTmpl) - } - } - } - res = append(res, policy) - } - return res, nil -} diff --git a/vendor/github.com/vishvananda/netlink/xfrm_state.go b/vendor/github.com/vishvananda/netlink/xfrm_state.go deleted file mode 100644 index 5b8f2df70..000000000 --- a/vendor/github.com/vishvananda/netlink/xfrm_state.go +++ /dev/null @@ -1,53 +0,0 @@ -package netlink - -import ( - "net" -) - -// XfrmStateAlgo represents the algorithm to use for the ipsec encryption. -type XfrmStateAlgo struct { - Name string - Key []byte - TruncateLen int // Auth only -} - -// EncapType is an enum representing an ipsec template direction. -type EncapType uint8 - -const ( - XFRM_ENCAP_ESPINUDP_NONIKE EncapType = iota + 1 - XFRM_ENCAP_ESPINUDP -) - -func (e EncapType) String() string { - switch e { - case XFRM_ENCAP_ESPINUDP_NONIKE: - return "espinudp-nonike" - case XFRM_ENCAP_ESPINUDP: - return "espinudp" - } - return "unknown" -} - -// XfrmEncap represents the encapsulation to use for the ipsec encryption. -type XfrmStateEncap struct { - Type EncapType - SrcPort int - DstPort int - OriginalAddress net.IP -} - -// XfrmState represents the state of an ipsec policy. It optionally -// contains an XfrmStateAlgo for encryption and one for authentication. 
-type XfrmState struct { - Dst net.IP - Src net.IP - Proto Proto - Mode Mode - Spi int - Reqid int - ReplayWindow int - Auth *XfrmStateAlgo - Crypt *XfrmStateAlgo - Encap *XfrmStateEncap -} diff --git a/vendor/github.com/vishvananda/netlink/xfrm_state_linux.go b/vendor/github.com/vishvananda/netlink/xfrm_state_linux.go deleted file mode 100644 index 5f44ec852..000000000 --- a/vendor/github.com/vishvananda/netlink/xfrm_state_linux.go +++ /dev/null @@ -1,181 +0,0 @@ -package netlink - -import ( - "fmt" - "syscall" - - "github.com/vishvananda/netlink/nl" -) - -func writeStateAlgo(a *XfrmStateAlgo) []byte { - algo := nl.XfrmAlgo{ - AlgKeyLen: uint32(len(a.Key) * 8), - AlgKey: a.Key, - } - end := len(a.Name) - if end > 64 { - end = 64 - } - copy(algo.AlgName[:end], a.Name) - return algo.Serialize() -} - -func writeStateAlgoAuth(a *XfrmStateAlgo) []byte { - algo := nl.XfrmAlgoAuth{ - AlgKeyLen: uint32(len(a.Key) * 8), - AlgTruncLen: uint32(a.TruncateLen), - AlgKey: a.Key, - } - end := len(a.Name) - if end > 64 { - end = 64 - } - copy(algo.AlgName[:end], a.Name) - return algo.Serialize() -} - -// XfrmStateAdd will add an xfrm state to the system. -// Equivalent to: `ip xfrm state add $state` -func XfrmStateAdd(state *XfrmState) error { - // A state with spi 0 can't be deleted so don't allow it to be set - if state.Spi == 0 { - return fmt.Errorf("Spi must be set when adding xfrm state.") - } - req := nl.NewNetlinkRequest(nl.XFRM_MSG_NEWSA, syscall.NLM_F_CREATE|syscall.NLM_F_EXCL|syscall.NLM_F_ACK) - - msg := &nl.XfrmUsersaInfo{} - msg.Family = uint16(nl.GetIPFamily(state.Dst)) - msg.Id.Daddr.FromIP(state.Dst) - msg.Saddr.FromIP(state.Src) - msg.Id.Proto = uint8(state.Proto) - msg.Mode = uint8(state.Mode) - msg.Id.Spi = nl.Swap32(uint32(state.Spi)) - msg.Reqid = uint32(state.Reqid) - msg.ReplayWindow = uint8(state.ReplayWindow) - msg.Lft.SoftByteLimit = nl.XFRM_INF - msg.Lft.HardByteLimit = nl.XFRM_INF - msg.Lft.SoftPacketLimit = nl.XFRM_INF - msg.Lft.HardPacketLimit = nl.XFRM_INF - req.AddData(msg) - - if state.Auth != nil { - out := nl.NewRtAttr(nl.XFRMA_ALG_AUTH_TRUNC, writeStateAlgoAuth(state.Auth)) - req.AddData(out) - } - if state.Crypt != nil { - out := nl.NewRtAttr(nl.XFRMA_ALG_CRYPT, writeStateAlgo(state.Crypt)) - req.AddData(out) - } - if state.Encap != nil { - encapData := make([]byte, nl.SizeofXfrmEncapTmpl) - encap := nl.DeserializeXfrmEncapTmpl(encapData) - encap.EncapType = uint16(state.Encap.Type) - encap.EncapSport = nl.Swap16(uint16(state.Encap.SrcPort)) - encap.EncapDport = nl.Swap16(uint16(state.Encap.DstPort)) - encap.EncapOa.FromIP(state.Encap.OriginalAddress) - out := nl.NewRtAttr(nl.XFRMA_ENCAP, encapData) - req.AddData(out) - } - - _, err := req.Execute(syscall.NETLINK_XFRM, 0) - return err -} - -// XfrmStateDel will delete an xfrm state from the system. Note that -// the Algos are ignored when matching the state to delete. -// Equivalent to: `ip xfrm state del $state` -func XfrmStateDel(state *XfrmState) error { - req := nl.NewNetlinkRequest(nl.XFRM_MSG_DELSA, syscall.NLM_F_ACK) - - msg := &nl.XfrmUsersaId{} - msg.Daddr.FromIP(state.Dst) - msg.Family = uint16(nl.GetIPFamily(state.Dst)) - msg.Proto = uint8(state.Proto) - msg.Spi = nl.Swap32(uint32(state.Spi)) - req.AddData(msg) - - saddr := nl.XfrmAddress{} - saddr.FromIP(state.Src) - srcdata := nl.NewRtAttr(nl.XFRMA_SRCADDR, saddr.Serialize()) - - req.AddData(srcdata) - - _, err := req.Execute(syscall.NETLINK_XFRM, 0) - return err -} - -// XfrmStateList gets a list of xfrm states in the system. 
-// Equivalent to: `ip xfrm state show`. -// The list can be filtered by ip family. -func XfrmStateList(family int) ([]XfrmState, error) { - req := nl.NewNetlinkRequest(nl.XFRM_MSG_GETSA, syscall.NLM_F_DUMP) - - msg := nl.NewIfInfomsg(family) - req.AddData(msg) - - msgs, err := req.Execute(syscall.NETLINK_XFRM, nl.XFRM_MSG_NEWSA) - if err != nil { - return nil, err - } - - var res []XfrmState - for _, m := range msgs { - msg := nl.DeserializeXfrmUsersaInfo(m) - - if family != FAMILY_ALL && family != int(msg.Family) { - continue - } - - var state XfrmState - - state.Dst = msg.Id.Daddr.ToIP() - state.Src = msg.Saddr.ToIP() - state.Proto = Proto(msg.Id.Proto) - state.Mode = Mode(msg.Mode) - state.Spi = int(nl.Swap32(msg.Id.Spi)) - state.Reqid = int(msg.Reqid) - state.ReplayWindow = int(msg.ReplayWindow) - - attrs, err := nl.ParseRouteAttr(m[msg.Len():]) - if err != nil { - return nil, err - } - - for _, attr := range attrs { - switch attr.Attr.Type { - case nl.XFRMA_ALG_AUTH, nl.XFRMA_ALG_CRYPT: - var resAlgo *XfrmStateAlgo - if attr.Attr.Type == nl.XFRMA_ALG_AUTH { - if state.Auth == nil { - state.Auth = new(XfrmStateAlgo) - } - resAlgo = state.Auth - } else { - state.Crypt = new(XfrmStateAlgo) - resAlgo = state.Crypt - } - algo := nl.DeserializeXfrmAlgo(attr.Value[:]) - (*resAlgo).Name = nl.BytesToString(algo.AlgName[:]) - (*resAlgo).Key = algo.AlgKey - case nl.XFRMA_ALG_AUTH_TRUNC: - if state.Auth == nil { - state.Auth = new(XfrmStateAlgo) - } - algo := nl.DeserializeXfrmAlgoAuth(attr.Value[:]) - state.Auth.Name = nl.BytesToString(algo.AlgName[:]) - state.Auth.Key = algo.AlgKey - state.Auth.TruncateLen = int(algo.AlgTruncLen) - case nl.XFRMA_ENCAP: - encap := nl.DeserializeXfrmEncapTmpl(attr.Value[:]) - state.Encap = new(XfrmStateEncap) - state.Encap.Type = EncapType(encap.EncapType) - state.Encap.SrcPort = int(nl.Swap16(encap.EncapSport)) - state.Encap.DstPort = int(nl.Swap16(encap.EncapDport)) - state.Encap.OriginalAddress = encap.EncapOa.ToIP() - } - - } - res = append(res, state) - } - return res, nil -} diff --git a/vendor/vendor.json b/vendor/vendor.json index 7acd023de..1a2c06931 100644 --- a/vendor/vendor.json +++ b/vendor/vendor.json @@ -101,10 +101,6 @@ {"path":"github.com/containernetworking/plugins/pkg/ns","checksumSHA1":"n3dCDigZOU+eD84Cr4Kg30GO4nI=","revision":"2d6d46d308b2c45a0466324c9e3f1330ab5cacd6","revisionTime":"2019-05-01T19:17:48Z"}, {"path":"github.com/coreos/go-iptables/iptables","checksumSHA1":"UAPz3mxGiQmd3pRWOTghB2aXzGU=","revision":"969b135e941d8baadca0a40047a38d6aca381855","revisionTime":"2019-07-24T15:17:50Z"}, {"path":"github.com/coreos/go-semver/semver","checksumSHA1":"97BsbXOiZ8+Kr+LIuZkQFtSj7H4=","revision":"1817cd4bea52af76542157eeabd74b057d1a199e","revisionTime":"2017-06-13T09:22:38Z"}, - {"path":"github.com/coreos/go-systemd/dbus","checksumSHA1":"/zxxFPYjUB7Wowz33r5AhTDvoz0=","origin":"github.com/opencontainers/runc/vendor/github.com/coreos/go-systemd/dbus","revision":"459bfaec1fc6c17d8bfb12d0a0f69e7e7271ed2a","revisionTime":"2018-08-23T14:46:37Z"}, - {"path":"github.com/coreos/go-systemd/util","checksumSHA1":"e8qgBHxXbij3RVspqrkeBzMZ564=","origin":"github.com/opencontainers/runc/vendor/github.com/coreos/go-systemd/util","revision":"459bfaec1fc6c17d8bfb12d0a0f69e7e7271ed2a","revisionTime":"2018-08-23T14:46:37Z"}, - 
{"path":"github.com/coreos/pkg/dlopen","checksumSHA1":"O8c/VKtW34XPJNNlyeb/im8vWSI=","origin":"github.com/opencontainers/runc/vendor/github.com/coreos/pkg/dlopen","revision":"459bfaec1fc6c17d8bfb12d0a0f69e7e7271ed2a","revisionTime":"2018-08-23T14:46:37Z"}, - {"path":"github.com/cyphar/filepath-securejoin","checksumSHA1":"4Cq4wS4l0O8WlugamGEPvooJPAk=","origin":"github.com/opencontainers/runc/vendor/github.com/cyphar/filepath-securejoin","revision":"459bfaec1fc6c17d8bfb12d0a0f69e7e7271ed2a","revisionTime":"2018-08-23T14:46:37Z"}, {"path":"github.com/davecgh/go-spew/spew","checksumSHA1":"mrz/kicZiUaHxkyfvC/DyQcr8Do=","revision":"ecdeabc65495df2dec95d7c4a4c3e021903035e5","revisionTime":"2017-10-02T20:02:53Z"}, {"path":"github.com/docker/cli/cli/config/configfile","checksumSHA1":"wf9Rn3a9cPag5B9Dd+qHHEink+I=","revision":"67f9a3912cf944cf71b31f3fc14e3f2a18d95802","revisionTime":"2018-08-14T14:54:37Z","version":"v18.06.1-ce","versionExact":"v18.06.1-ce"}, {"path":"github.com/docker/cli/cli/config/credentials","checksumSHA1":"fJpuGdxgATGNHm+INOPNVIhBnj0=","revision":"deb84a9e4e10b590e6de6aa6081532c87a5a2cfe","revisionTime":"2018-08-29T13:09:58Z"}, @@ -173,7 +169,6 @@ {"path":"github.com/go-ini/ini","checksumSHA1":"U4k9IYSBQcW5DW5QDc44n5dddxs=","comment":"v1.8.5-2-g6ec4abd","revision":"6ec4abd8f8d587536da56f730858f0e27aeb4126"}, {"path":"github.com/go-ole/go-ole","checksumSHA1":"IvHj/4iR2nYa/S3cB2GXoyDG/xQ=","comment":"v1.2.0-4-g5005588","revision":"085abb85892dc1949567b726dff00fa226c60c45","revisionTime":"2017-07-12T17:44:59Z"}, {"path":"github.com/go-ole/go-ole/oleutil","checksumSHA1":"qLYVTQDhgrVIeZ2KI9eZV51mmug=","comment":"v1.2.0-4-g5005588","revision":"50055884d646dd9434f16bbb5c9801749b9bafe4"}, - {"path":"github.com/godbus/dbus","checksumSHA1":"bFplS7sPkJNtlKKCIszFQkAsmGI=","origin":"github.com/opencontainers/runc/vendor/github.com/godbus/dbus","revision":"459bfaec1fc6c17d8bfb12d0a0f69e7e7271ed2a","revisionTime":"2018-08-23T14:46:37Z"}, {"path":"github.com/gogo/protobuf/proto","checksumSHA1":"I460dM/HmGE9DWimQvd1hJkYosU=","revision":"616a82ed12d78d24d4839363e8f3c5d3f20627cf","revisionTime":"2017-11-09T18:15:19Z"}, {"path":"github.com/golang/protobuf/proto","checksumSHA1":"Q3FteGbNvRRUMJqbYbmrcBd2DMo=","revision":"5d5b4c10bd43f85e63bd9e4a3fa9b1ea2ef88af2","revisionTime":"2020-02-26T19:23:50Z","version":"v1","versionExact":"v1.1.0"}, {"path":"github.com/golang/protobuf/protoc-gen-go/descriptor","checksumSHA1":"ZfNZBif8xEtwy16tgsXp+oBVnL8=","revision":"5d5b4c10bd43f85e63bd9e4a3fa9b1ea2ef88af2","revisionTime":"2020-02-26T19:23:50Z","version":"v1.1.0","versionExact":"v1.1.0"}, @@ -310,7 +305,6 @@ {"path":"github.com/mitchellh/hashstructure","checksumSHA1":"Z3FoiV93oUfDoQYMMiHxWCQPlBw=","revision":"1ef5c71b025aef149d12346356ac5973992860bc"}, {"path":"github.com/mitchellh/mapstructure","checksumSHA1":"4Js6Jlu93Wa0o6Kjt393L9Z7diE=","revision":"281073eb9eb092240d33ef253c404f1cca550309"}, {"path":"github.com/mitchellh/reflectwalk","checksumSHA1":"KqsMqI+Y+3EFYPhyzafpIneaVCM=","revision":"8d802ff4ae93611b807597f639c19f76074df5c6","revisionTime":"2017-05-08T17:38:06Z"}, - {"path":"github.com/mrunalp/fileutils","checksumSHA1":"EKGlMEHq/nwBXQGi9JN/y+H7YMU=","origin":"github.com/opencontainers/runc/vendor/github.com/mrunalp/fileutils","revision":"459bfaec1fc6c17d8bfb12d0a0f69e7e7271ed2a","revisionTime":"2018-08-23T14:46:37Z"}, {"path":"github.com/oklog/run","checksumSHA1":"nf3UoPNBIut7BL9nWE8Fw2X2j+Q=","revision":"6934b124db28979da51d3470dadfa34d73d72652","revisionTime":"2018-03-08T00:51:04Z"}, 
{"path":"github.com/onsi/ginkgo","checksumSHA1":"cwbidLG1ET7YSqlwca+nSfYxIbg=","revision":"ba8e856bb854d6771a72ddf6497a42dad3a0c971","revisionTime":"2018-03-12T10:34:14Z"}, {"path":"github.com/onsi/ginkgo/config","checksumSHA1":"Tarhbqac6rFsGPugPoQ4lyhfc7Q=","revision":"9008c7b79f9636c46a0a945141020124702f0ecf","revisionTime":"2018-02-16T17:00:43Z"}, @@ -352,27 +346,6 @@ {"path":"github.com/opencontainers/go-digest","checksumSHA1":"NTperEHVh1uBqfTy9+oKceN4tKI=","revision":"21dfd564fd89c944783d00d069f33e3e7123c448","revisionTime":"2017-01-11T18:16:59Z"}, {"path":"github.com/opencontainers/image-spec/specs-go","checksumSHA1":"ZGlIwSRjdLYCUII7JLE++N4w7Xc=","revision":"89b51c794e9113108a2914e38e66c826a649f2b5","revisionTime":"2017-11-03T11:36:04Z"}, {"path":"github.com/opencontainers/image-spec/specs-go/v1","checksumSHA1":"jdbXRRzeu0njLE9/nCEZG+Yg/Jk=","revision":"89b51c794e9113108a2914e38e66c826a649f2b5","revisionTime":"2017-11-03T11:36:04Z"}, - {"path":"github.com/opencontainers/runc/libcontainer","checksumSHA1":"OJlgvnpJuV+SDPW48YVUKWDbOnU=","revision":"6cc515888830787a93d82138821f0309ad970640","revisionTime":"2019-06-11T12:12:36Z"}, - {"path":"github.com/opencontainers/runc/libcontainer/apparmor","checksumSHA1":"gVVY8k2G3ws+V1czsfxfuRs8log=","revision":"6cc515888830787a93d82138821f0309ad970640","revisionTime":"2019-06-11T12:12:36Z"}, - {"path":"github.com/opencontainers/runc/libcontainer/cgroups","checksumSHA1":"aWtm1zkVCz9l2/zQNfnc246yQew=","revision":"6cc515888830787a93d82138821f0309ad970640","revisionTime":"2019-06-11T12:12:36Z"}, - {"path":"github.com/opencontainers/runc/libcontainer/cgroups/fs","checksumSHA1":"OnnBJ2WfB/Y9EQpABKetBedf6ts=","revision":"6cc515888830787a93d82138821f0309ad970640","revisionTime":"2019-06-11T12:12:36Z"}, - {"path":"github.com/opencontainers/runc/libcontainer/cgroups/systemd","checksumSHA1":"d7B9MiKb1k1Egh5qkNokIfcZ+OY=","revision":"6cc515888830787a93d82138821f0309ad970640","revisionTime":"2019-06-11T12:12:36Z"}, - {"path":"github.com/opencontainers/runc/libcontainer/configs","checksumSHA1":"v9sgw4eYRNSsJUSG33OoFIwLqRI=","revision":"6cc515888830787a93d82138821f0309ad970640","revisionTime":"2019-06-11T12:12:36Z"}, - {"path":"github.com/opencontainers/runc/libcontainer/configs/validate","checksumSHA1":"hUveFGK1HhGenf0OVoYZWccoW9I=","revision":"6cc515888830787a93d82138821f0309ad970640","revisionTime":"2019-06-11T12:12:36Z"}, - {"path":"github.com/opencontainers/runc/libcontainer/devices","checksumSHA1":"2CwtFvz9kB0RSjFlcCkmq4taJ9U=","revision":"6cc515888830787a93d82138821f0309ad970640","revisionTime":"2019-06-11T12:12:36Z"}, - {"path":"github.com/opencontainers/runc/libcontainer/intelrdt","checksumSHA1":"sAbowQ7hjveSH5ADUD9IYXnEAJM=","revision":"6cc515888830787a93d82138821f0309ad970640","revisionTime":"2019-06-11T12:12:36Z"}, - {"path":"github.com/opencontainers/runc/libcontainer/keys","checksumSHA1":"mKxBw0il2IWjWYgksX+17ufDw34=","revision":"6cc515888830787a93d82138821f0309ad970640","revisionTime":"2019-06-11T12:12:36Z"}, - {"path":"github.com/opencontainers/runc/libcontainer/logs","checksumSHA1":"mBbwlspKSImoGTw4uKE40AX3PYs=","revision":"6cc515888830787a93d82138821f0309ad970640","revisionTime":"2019-06-11T12:12:36Z"}, - {"path":"github.com/opencontainers/runc/libcontainer/mount","checksumSHA1":"MJiogPDUU2nFr1fzQU6T+Ry1W8o=","revision":"6cc515888830787a93d82138821f0309ad970640","revisionTime":"2019-06-11T12:12:36Z"}, - 
{"path":"github.com/opencontainers/runc/libcontainer/nsenter","checksumSHA1":"PnGFQdbZhZ4pcxFtQep5MEQ4/8E=","revision":"6cc515888830787a93d82138821f0309ad970640","revisionTime":"2019-06-11T12:12:36Z"}, - {"path":"github.com/opencontainers/runc/libcontainer/seccomp","checksumSHA1":"I1Qw/btE1twMqKHpYNsC98cteak=","revision":"6cc515888830787a93d82138821f0309ad970640","revisionTime":"2019-06-11T12:12:36Z"}, - {"path":"github.com/opencontainers/runc/libcontainer/stacktrace","checksumSHA1":"yp/kYBgVqKtxlnpq4CmyxLFMAE4=","revision":"6cc515888830787a93d82138821f0309ad970640","revisionTime":"2019-06-11T12:12:36Z"}, - {"path":"github.com/opencontainers/runc/libcontainer/system","checksumSHA1":"cjg/UcueM1/2/ExZ3N7010sa+hI=","revision":"6cc515888830787a93d82138821f0309ad970640","revisionTime":"2019-06-11T12:12:36Z"}, - {"path":"github.com/opencontainers/runc/libcontainer/user","checksumSHA1":"mdUukOXCVJxmT0CufSKDeMg5JFM=","revision":"6cc515888830787a93d82138821f0309ad970640","revisionTime":"2019-06-11T12:12:36Z"}, - {"path":"github.com/opencontainers/runc/libcontainer/utils","checksumSHA1":"PqGgeBjTHnyGrTr5ekLFEXpC3iQ=","revision":"6cc515888830787a93d82138821f0309ad970640","revisionTime":"2019-06-11T12:12:36Z"}, - {"path":"github.com/opencontainers/runtime-spec/specs-go","checksumSHA1":"AMYc2X2O/IL6EGrq6lTl5vEhLiY=","origin":"github.com/opencontainers/runc/vendor/github.com/opencontainers/runtime-spec/specs-go","revision":"459bfaec1fc6c17d8bfb12d0a0f69e7e7271ed2a","revisionTime":"2018-08-23T14:46:37Z"}, - {"path":"github.com/opencontainers/selinux/go-selinux","checksumSHA1":"X4jufgAG12FCYi0U9VK4DK5vvXo=","origin":"github.com/opencontainers/runc/vendor/github.com/opencontainers/selinux/go-selinux","revision":"6cc515888830787a93d82138821f0309ad970640","revisionTime":"2019-06-11T12:12:36Z"}, - {"path":"github.com/opencontainers/selinux/go-selinux/label","checksumSHA1":"5hHzDoXjeTtGaVzyN8lOowcwvdQ=","origin":"github.com/opencontainers/runc/vendor/github.com/opencontainers/selinux/go-selinux/label","revision":"6cc515888830787a93d82138821f0309ad970640","revisionTime":"2019-06-11T12:12:36Z"}, {"path":"github.com/pkg/errors","checksumSHA1":"ynJSWoF6v+3zMnh9R0QmmG6iGV8=","revision":"248dadf4e9068a0b3e79f02ed0a610d935de5302","revisionTime":"2016-10-29T09:36:37Z"}, {"path":"github.com/pmezard/go-difflib/difflib","checksumSHA1":"LuFv4/jlrmFNnDb/5SCSEPAM9vU=","revision":"792786c7400a136282c1664665ae0a8db921c6c2","revisionTime":"2016-01-10T10:55:54Z"}, {"path":"github.com/posener/complete","checksumSHA1":"rTNABfFJ9wtLQRH8uYNkEZGQOrY=","revision":"9f41f7636a724791a3b8b1d35e84caa1124f0d3c","revisionTime":"2017-08-29T17:11:12Z"}, @@ -393,7 +366,6 @@ {"path":"github.com/ryanuber/columnize","checksumSHA1":"M57Rrfc8Z966p+IBtQ91QOcUtcg=","comment":"v2.0.1-8-g983d3a5","revision":"abc90934186a77966e2beeac62ed966aac0561d5","revisionTime":"2017-07-03T20:58:27Z"}, {"path":"github.com/ryanuber/go-glob","checksumSHA1":"6JP37UqrI0H80Gpk0Y2P+KXgn5M=","revision":"256dc444b735e061061cf46c809487313d5b0065","revisionTime":"2017-01-28T01:21:29Z"}, {"path":"github.com/sean-/seed","checksumSHA1":"tnMZLo/kR9Kqx6GtmWwowtTLlA8=","revision":"e2103e2c35297fb7e17febb81e49b312087a2372","revisionTime":"2017-03-13T16:33:22Z"}, - {"path":"github.com/seccomp/libseccomp-golang","checksumSHA1":"6Z/chtTVA74eUZTlG5VRDy59K1M=","origin":"github.com/opencontainers/runc/vendor/github.com/seccomp/libseccomp-golang","revision":"459bfaec1fc6c17d8bfb12d0a0f69e7e7271ed2a","revisionTime":"2018-08-23T14:46:37Z"}, 
{"path":"github.com/sethgrid/pester","checksumSHA1":"8Lm8nsMCFz4+gr9EvQLqK8+w+Ks=","revision":"8053687f99650573b28fb75cddf3f295082704d7","revisionTime":"2016-04-29T17:20:22Z"}, {"path":"github.com/shirou/gopsutil/cpu","checksumSHA1":"of1VYIkuYFIom46U7G5/TFetEao=","revision":"a3b23c5ccf4fb7b33d319fcaad53d7777907f4e1","revisionTime":"2020-03-01T00:40:00Z","version":"v2.20.2","versionExact":"v2.20.2"}, {"path":"github.com/shirou/gopsutil/disk","checksumSHA1":"lb5OsljQpw0AW47HDh8bb6TzZ/0=","revision":"a3b23c5ccf4fb7b33d319fcaad53d7777907f4e1","revisionTime":"2020-03-01T00:40:00Z","version":"v2.20.2","versionExact":"v2.20.2"}, @@ -419,8 +391,6 @@ {"path":"github.com/ulikunitz/xz/internal/hash","checksumSHA1":"vjnTkzNrMs5Xj6so/fq0mQ6dT1c=","revision":"0c6b41e72360850ca4f98dc341fd999726ea007f","revisionTime":"2017-06-05T21:53:11Z"}, {"path":"github.com/ulikunitz/xz/internal/xlog","checksumSHA1":"m0pm57ASBK/CTdmC0ppRHO17mBs=","revision":"0c6b41e72360850ca4f98dc341fd999726ea007f","revisionTime":"2017-06-05T21:53:11Z"}, {"path":"github.com/ulikunitz/xz/lzma","checksumSHA1":"2vZw6zc8xuNlyVz2QKvdlNSZQ1U=","revision":"0c6b41e72360850ca4f98dc341fd999726ea007f","revisionTime":"2017-06-05T21:53:11Z"}, - {"path":"github.com/vishvananda/netlink","checksumSHA1":"cIkE6EIE7A0IzdhR/Yes8Nzyqtk=","origin":"github.com/opencontainers/runc/vendor/github.com/vishvananda/netlink","revision":"459bfaec1fc6c17d8bfb12d0a0f69e7e7271ed2a","revisionTime":"2018-08-23T14:46:37Z"}, - {"path":"github.com/vishvananda/netlink/nl","checksumSHA1":"r/vcO8YkOWNHKX5HKCukaU4Xzlg=","origin":"github.com/opencontainers/runc/vendor/github.com/vishvananda/netlink/nl","revision":"459bfaec1fc6c17d8bfb12d0a0f69e7e7271ed2a","revisionTime":"2018-08-23T14:46:37Z"}, {"path":"github.com/vmihailenco/msgpack","checksumSHA1":"t9A/EE2GhHFPHzK+ksAKgKW9ZC8=","revision":"b5e691b1eb52a28c05e67ab9df303626c095c23b","revisionTime":"2018-06-13T09:15:15Z"}, {"path":"github.com/vmihailenco/msgpack/codes","checksumSHA1":"OcTSGT2v7/2saIGq06nDhEZwm8I=","revision":"b5e691b1eb52a28c05e67ab9df303626c095c23b","revisionTime":"2018-06-13T09:15:15Z"}, {"path":"github.com/zclconf/go-cty/cty","checksumSHA1":"0jAKo5tFC1SpRjwB+AiPNfNAzmM=","revision":"4ca19710f0562cab70f0b3c9cbff0ecc70ee06d1","revisionTime":"2019-02-01T22:06:20Z"},