diff --git a/CHANGELOG b/CHANGELOG index 0908aeaa5..f34679fb6 100644 --- a/CHANGELOG +++ b/CHANGELOG @@ -1,916 +1,926 @@ + Fixing #976 -- both the full name of qualified classes and + the class parts are now added as tags. I've also + created a Tagging module that we should push throughout + the rest of the system that uses tags. + + Fixing #995 -- puppetd no longer dies at startup if the server + is not running. + + Fixing #977 -- the rundir is again set to 1777. + Fixed #971 -- classes can once again be included multiple times. Added builtin support for Nagios types using Naginator to parse and generate the files. 0.24.1 Updated vim filetype detection. (#900 and #963) Default resources like schedules no longer conflict with managed resources. (#965) Removing the ability to disable http keep-alive, since it didn't really work anyway and it should no longer be necessary. Refactored http keep-alive so it actually works again. This should be sufficient that we no longer need the ability to disable keep-alive. There is now a central module responsible for managing HTTP instances, along with all certificates in those instances. Fixed a backward compatibility issue when running 0.23.x clients against 0.24.0 servers -- relationships would consistently not work. (#967) Closing existing http connections when opening a new one, and closing all connections after each run. (#961) Removed warning about deprecated explicit plugins mounts. 0.24.0 (misspiggy) Modifying the behaviour of the certdnsnames setting. It now defaults to an empty string, and will only be used if it is set to something else. If it is set, then the host's FQDN will also be added as an alias. The default behaviour is now to add 'puppet' and 'puppet.$domain' as DNS aliases when the name for the cert being signed is equal to the signing machine's name, which will only be the case for CA servers. This should result in servers, and only servers, having the alias set up, but you can still override the aliases if you want. External node support now requires that you set the 'node_terminus' setting to 'exec'. See the IndirectionReference on the wiki for more information. http_enable_post_connection_check added as a configuration option for puppetd. This defaults to true, which validates the server SSL certificate against the requested host name in new versions of ruby. See #896 for more information. Mounts no longer remount swap filesystems. Slightly modifying how services manage their list of paths (and adding documentation for it). Services now default to the paths specified by the provider classes. Removed 'type' as a valid attribute for services, since it's been deprecated since the creation of providers. Removed 'running' as a valid attribute for services, since it's been deprecated since February 2006. Added modified patch by Matt Palmer which adds a 'plugins' mount, fixing #891. See PluginsInModules on the wiki for information on usage. Empty dbserver and dbpassword settings will now be ignored when initializing Rails connections (patch by womble). Configuration settings can now be blank (patch by womble). Added calls to endpwent/endgrent when searching for user and group IDs, which fixes #791. Obviated 'target' in interfaces, as all file paths were automatically calculated anyway. The parameter is still there, but it's not used and just generates a warning. Fixing some of the problems with interface management on Red Hat. Puppet now uses the :netmask property and does not try to set the bootproto (#762).
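As an illustration of the #976 entry at the top of this log (a sketch only, not the shipped Tagging module; the method name is invented), a qualified class contributes both its full name and each of its parts as tags:

    def tags_for(class_name)
      # "apache::ssl::vhost" yields the full name plus each segment.
      [class_name, *class_name.split("::")].uniq
    end

    tags_for("apache::ssl::vhost")
    # => ["apache::ssl::vhost", "apache", "ssl", "vhost"]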
You must now specify an environment, and you are required to specify the valid environments for your site. (#911) Certificates now always specify a subjectAltName, but it defaults to '*', meaning that it doesn't require DNS names to match. You can override that behaviour by specifying a value for 'certdnsnames', which will then require that hostname as a match (#896). Relationship metaparams (:notify, :require, :subscribe, and :before) now stack when they are collecting metaparam values from their containers (#446). For instance, if a resource inside a definition has a value set for 'require', and you call the definition with 'require', the resource gets both requires, where before it would only retain its initial value. Changed the behavior of --debug to include Mongrel client debugging information. Mongrel output will be written to the terminal only, not to the puppet debug log. This should help anyone working with reverse HTTP SSL proxies. (#905) Fixed #800 -- invalid configurations are no longer cached. This was done partially by adding a relationship validation step once the entire configuration is created, but it also required the previously-mentioned changes to how the configuration retrieval process works. Removed some functionality from the Master client, since the local functionality has been replaced with the Indirector already, and rearranging how configuration retrieval is done to fix ordering and caching bugs. The node scope is now above all other scopes besides the 'main' scope, which should help make its variables visible to other classes, assuming those classes were not included in the node's parent. Replaced GRATR::Digraph with Puppet::SimpleGraph as the base class for Puppet's graphing. Functionality should be equivalent but with dramatically better performance. The --use-nodes and --no-nodes options are now obsolete. Puppet automatically detects when nodes are defined, and if they are defined it will require that a node be found; otherwise it will neither look for a node nor fail if it cannot find one. Fixed #832. Added the '--no-daemonize' option to puppetd and puppetmasterd. NOTE: 'verbose' and 'debug' no longer prevent puppetd and puppetmasterd from daemonizing by default. Added k5login type. (#759) Fixed CA race condition. (#693) Added shortname support to config.rb and refactored addargs. 0.23.2 Fixed the problem in cron jobs where environment settings tended to multiply. (#749) Collection of resources now correctly only collects exported resources again. This was broken in 0.23.0. (#731) 'gen_config' now generates a configuration with all parameters under a heading that matches the process name, rather than keeping section headings. Refactored how the parser and interpreter relate, so parsing is now effectively an atomic process (thus fixing #314 and #729). This makes the interpreter less prone to error and less prone to show the error to the clients. Note that this means that if a configuration fails to parse, then the previous, parseable configuration will be used instead, so the client will not know that the configuration failed to parse. Added support for managing interfaces, thanks to work by Paul Rose. Fixed #652, thanks to a patch by emerose; --fqdn again works with puppetd. Added an extra check to the Mongrel support so that Apache can be used with optional cert checking, instead of mandatory, thus allowing Mongrel to function as the CA. This is thanks to work done by Marcin Owsiany.
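The 0.23.2 parser change above amounts to only swapping in a new parser once it has parsed successfully; a rough Ruby sketch of that pattern (the helper name is invented, and this is not the actual interpreter code):

    begin
      fresh = build_parser(manifest_files)  # hypothetical helper that parses everything up front
      @parser = fresh                       # swap only after a successful parse
    rescue => detail
      # On a parse failure the old @parser stays in place, so clients keep
      # receiving the previous, parseable configuration.
      Puppet.err "Keeping previous configuration: #{detail}"
    end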
0.23.1 (beaker) You can now specify relationships to classes, which work exactly like relationships to defined types: require => Class[myclass]. This works with qualified classes, too. You can now do simple queries in a collection of exported resources. You still cannot do multi-condition queries, though. (#703) puppetca now exits with a non-zero code if it cannot find any host certificates to clean. (Patch by Dean Wilson.) Fully-qualified resources can now have defaults. (#589) Resource references can now be fully-qualified names, meaning you can list definitions with a namespace as dependencies. (#468) Files modified using a FileType instance, as ParsedFile does, will now automatically get backed up to the filebucket named "puppet". Added a 'maillist' type for managing mailing lists. Added a 'mailalias' type for managing mail aliases. Added patch by Valentin Vidic that adds the '+>' syntax to resources, so parameter values can be appended to. The configuration client now pulls libraries down to $libdir, and all autoloading is done from there with full support for any reloadable file, such as types and providers. (#621) Note that this is not backward compatible -- if you're using pluginsync right now, you'll need to disable it on your clients until you can upgrade them. The Rails log level can now be set via (shockingly!) the 'rails_loglevel' parameter (#710). Note that this isn't exactly the feature asked for, but I could not find a way to directly copy ActiveRecord's concept of an environment. External node sources can now return undefined classes (#687). Puppet clients now have http proxy support (#701). The parser now throws an error when a resource reference is created for an unknown type. Also, resource references look up defined types and translate their type accordingly. (#706) Hostnames can now be double quoted. Adding module autoloading (#596) -- you can now 'include' classes from modules without ever needing to specifically load them. Class names and node names now conflict (#620). 0.23.0 Modified the fileserver to cache file information, so that each file isn't being read on every connection. Also, added londo's patch from #678 to avoid reading entire files into memory. Fixed environment handling in the crontab provider (#669). Added patch by trombik in #572, supporting old-style freebsd init scripts with '.sh' endings. Added fink package provider (#642), as provided by 'do'. Marked the dpkg package provider as versionable (#647). Applied patches by trombik to fix FreeBSD ports (#624 and #628). Fixed the CA server so that it refuses to send back a certificate whose public key doesn't match the CSR. Instead, it tells the user to run 'puppetca --clean'. Invalid certificates are no longer written to disk (#578). Added a package provider (appdmg) able to install .app packages on .dmg files on OS X (#641). Applied the patch from #667 to hopefully kill the client hanging problems (permanently, this time). Fixed functions so that they accept most other rvalues as valid values (#548). COMPATIBILITY ALERT: Significantly reworked external node support, in a way that's NOT backward-compatible: Only ONE node source can be used -- you can use LDAP, code, or an external node program, but not more than one. LDAP node support has two changes: First, the "ldapattrs" attribute is now used for setting the attributes to retrieve from the server (in addition to required attributes), and second, all retrieved attributes are set as variables in the top scope.
This means you can set attributes on your LDAP nodes and they will automatically appear as variables in your configurations. External node support has been completely rewritten. These programs must now generate a YAML dump of a hash, with "classes" and "parameters" keys. The classes should be an array, and the parameters should be a hash. The external node program has no support for parent nodes -- the script must handle that on its own. Reworked the database schema used to store configurations with the storeconfigs option. Replaced the obsolete RRD ruby library with the maintained RubyRRDtool library (which requires rrdtool2) (#659). The Portage package provider now calls eix-update automatically when eix's database is absent or out of sync (#666). Mounts now correctly handle existing fstabs with no pass or dump values (#550). Mounts now default to 0 for pass and dump (#112). Added urpmi support (#592). Finishing up the type => provider interface work. Basically, package providers now return lists of provider instances. In the process, I rewrote the interface between package types and providers, and also enabled prefetching on all packages. This should significantly speed up most package operations. Hopefully fixing the file descriptor/open port problems, with patches from Valentin Vidic. Significantly reworked the type => provider interface with respect to listing existing provider instances. The class method on both class hierarchies has been renamed to 'instances', to start. Providers are now expected to return provider instances, instead of creating resources, and the resource's 'instances' method is expected to find the matching resource, if any, and set the resource's provider appropriately. This *significantly* reduces the reliance on effectively global state (resource references in the resource classes). This global state will go away soon. Along with this change, the 'prefetch' class method on providers now accepts the list of resources for prefetching. This again reduces reliance on global state, and makes the execution path much easier to follow. Fixed #532 -- reparsing config files no longer throws an exception. Added some warnings and logs to the service type so users will be encouraged to specify either "ensure" or "enabled", and added debugging to indicate why restarting is skipped when it is. Changed the location of the classes.txt to the state directory. Added better error reporting on unmatched brackets. Moved puppetd and puppetmasterd to sbin in svn and fixed install.rb to copy them into sbin on the local system appropriately. (#323) Added a splay option (#501). It's disabled when running under --test in puppetd. The value is random but cached. It defaults to the runinterval but can be tuned with --splaylimit. Changing the notify type so that it always uses the loglevel. Fixing #568 - nodes can inherit from quoted node names. Tags (and thus definitions and classes) can now be a single character. (#566) Added an 'undef' keyword (#629), which will evaluate to "" within strings but when used as a resource parameter value will cause that parameter to be evaluated as undefined. Changed the topological sort algorithm (#507) so it will always fail on cycles. Added a 'dynamicfacts' configuration option; any facts in that comma-separated list will be ignored when comparing facts to see if they have changed and thus whether a recompile is necessary. Renamed some poorly named internal variables: @models in providers are now either @resource or @resource_type (#605).
@children is no longer used except by components (#606). @parent is now @resource within parameters (#607). The old variables are still set for backward compatibility. Significantly reworking configuration parsing. Executables all now look for 'puppet.conf' (#206), although they will parse the old-style configuration files if they are present, in which case they throw a deprecation warning. Also, file parameters (owner, mode, group) are now set on the same line as the parameter, in brackets. (#422) Added transaction summaries (available with the --summarize option), useful for getting a quick idea of what happened in a transaction. Currently only useful on the client or with the puppet interpreter. Changed the internal workings for retrieve and removed the :is attribute from Property. The retrieve methods now return the current value of the property for the system. Removed acts_as_taggable from the rails models. 0.22.4 Execs now autorequire the user they run as, as long as the user is specified by name. (#430) Files on the local machine but not on the remote server during a source copy are now purged if purge => true. (#594) Providers can now specify that some commands are optional (#585). Also, the 'command' method returns nil on missing commands, rather than throwing an error, so the presence of commands can be tested. The 'useradd' provider for Users can now manage passwords. No other providers can, at this point. Parameters can now declare a dependency on specific features, and parameters that require missing features will not be instantiated. This is most useful for properties. FileParsing classes can now use instance_eval to add many methods at once to a record type. Modules no longer return directories in the list of found manifests (#588). The crontab provider now defaults to root when there is no USER set in the environment. Puppetd once again correctly responds to HUP. Added a syntax for referring to variables defined in other classes (e.g., $puppet::server). STDIN, STDOUT, STDERR are now redirected to /dev/null in service providers descending from base. Certificates are now valid starting one day before they are created, to help handle small amounts of clock skew. Files are no longer considered out of sync if some properties are out of sync but they have no properties that can create the file. 0.22.3 Fixed backward compatibility for logs and metrics from older clients. Fixed the location of the authconfig parameters so there aren't loading order issues. Enabling attribute validation on the providers that subclass 'nameservice', so we can verify that an integer is passed to UID and GID. Added a stand-alone filebucket client, named 'filebucket'. Fixed the new nested paths for filebuckets; the entire md5 sum was not being stored. Fixing #553; -M is no longer added when home directories are being managed on Red Hat. 0.22.2 (grover) Users can now manage their home directories, using the managehome parameter, partially using patches provided by Tim Stoop and Matt Palmer. (#432) Added 'ralsh' (formerly x2puppet) to the svn tree. When possible it should be added to the packages. The 'notify' type now defaults to its message being the same as its name. Reopening $stdin to read from /dev/null during execution, in hopes that init scripts will stop hanging. Changed the 'servername' fact set on the server to use the server's fqdn, instead of the short-name. Changing the location of the configuration cache. It now defaults to being in the state directory, rather than in the configuration directory.
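Two of the entries above mention redirecting the standard streams to /dev/null; in Ruby that is just IO#reopen, roughly like this (a sketch of the idea, not the exact daemon or provider code):

    # Detach from any inherited terminal so init scripts do not hang on input.
    $stdin.reopen("/dev/null")
    $stdout.reopen("/dev/null", "w")
    $stderr.reopen("/dev/null", "w")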
All parameter instances are stored in a single @parameters instance variable hash within resource type instances. We used to use separate hashes for each parameter type. Added the concept of provider features. Eventually these should be able to express the full range of provider functionality, but for now they can test a provider to see what methods it has set and determine what features it provides as a result. These features are integrated into the doc generation system so that you get feature documentation automatically. Switched apt/aptitude to using "apt-cache policy" instead of "apt-cache showpkg" for determining the latest available version. (#487) FileBuckets now use a deeply nested structure for storing files, so you do not end up with hundreds or thousands of files in the same directory. (#447) Facts are now cached in the state file, and when they change the configuration is always recompiled. (#519) Added 'ignoreimport' setting for use in commit hooks. This causes the parser to ignore import statements so a single file can be parse-checked. (#544) Import statements can now specify multiple comma-separated arguments. Definitions now support both 'name' and 'title', just like any other resource type. (#539) Added a generate() command, which sets values to the result of an external command. (#541) Added a file() command to read in files with no interpolation. The first found file has its content returned. puppetd now exits if no cert is present in onetime mode. (#533) The client configuration cache can be safely removed and the client will correctly realize it is not in sync. Resources can now be freely deleted, thus fixing many problems introduced when deletion of required resources was forbidden when purging was introduced. Only resources being purged will not be deleted. Facts and plugins now download even in noop mode (#540). Resources in noop mode now log when they would have responded to an event (#542). Refactored cron support entirely. Cron now uses providers, and there is a single 'crontab' provider that handles user crontabs. While this refactor does not include providers for /etc/crontab or cron.d, it should now be straightforward to write those providers. Changed the parameter sorting so that the provider parameter comes right after name, so the provider is available when the other parameters and properties are being created. Redid some of the internals of the ParsedFile provider base class. It now passes a FileRecord around instead of a hash. Fixing a bug related to link recursion that caused link directories to always be considered out of sync. The bind address for puppetmasterd can now be specified with --bindaddress. Added (probably experimental) mongrel support. At this point you're still responsible for starting each individual process, and you have to set up a proxy in front of it. Redesigned the 'network' tree to support multiple web servers, including refactoring most of the structural code so it's much clearer and more reusable now. Set up the CA client to default to ca_server and ca_port, so you can easily run a separate CA. Supporting hosts with no domain name, thanks to a patch from Dennis Jacobfeuerborn. Added an 'ignorecache' option to tell puppetd to force a recompile, thanks to a patch by Chris McEniry. Made up2date the default for RHEL < 4 and yum the default for the rest. The yum provider now supports versions. Case statements correctly match when multiple values are provided, thanks to a patch by David Schmitt.
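The #447 filebucket change above spreads checksums across nested directories; one way to derive such a path in Ruby (the exact layout shown here is illustrative and may differ from what the filebucket actually writes):

    require 'digest/md5'

    sum = Digest::MD5.hexdigest(File.read("/etc/passwd"))
    # Use the leading characters of the sum as directory levels so no single
    # directory ends up holding thousands of files.
    nested = File.join(*sum[0, 8].split(""))
    path   = File.join("/var/puppet/bucket", nested, sum, "contents")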
Functions can now be called with no arguments. String escapes parse correctly in all cases now, thanks to a patch by cstorey. Subclasses again search parent classes for defaults. You can now purge apt and dpkg packages. When doing file recursion, 'ensure' only affects the top-level directory. States have been renamed to Properties. 0.22.1 (kermit) -- Mostly a bugfix release. Compile times now persist between restarts of puppetd. Timeouts have been added to many parts of Puppet, reducing the likelihood of it hanging forever on broken scripts or servers. All of the documentation and recipes have been moved to the wiki by Peter Abrahamsen, and Ben Kite has moved the FAQ to the wiki. Explicit relationships now override automatic relationships, allowing you to manually specify deletion order when removing resources. Resources with dependencies can now be deleted as long as all of their dependencies are also being deleted. Namespaces for both classes and definitions now work much more consistently. You should now be able to specify a class or definition with a namespace everywhere you would normally expect to be able to specify one without. Downcasing of facts can be selectively disabled. Cyclic dependency graphs are now checked for and forbidden. The netinfo mounts provider was commented out, because it really doesn't work at all. Stupid NetInfo stores mount information with the device as the key, which doesn't work with my current NetInfo code. Otherwise, lots and lots of bugfixes. Check the tickets associated with the 'kermit' milestone. 0.22.0 Integrated the GRATR graph library into Puppet, for handling resource relationships. Lots of bug-fixes (see the bug tickets associated with the 'minor' milestone). Added new 'resources' metatype, which currently only includes the ability to purge unmanaged resources. Added better ability to generate new resource objects during transactions (using 'generate' and 'eval_generate' methods). Rewrote all Rails support with a much better database design. Export/collect now works, although the database is incompatible with previous versions. Removed downcasing of facts and made most of the language case-insensitive. Added support for printing the graphs built during transactions. Reworked how paths are built for logging. Switched all providers to directly executing commands instead of going through a subshell, which removes the need to quote or escape arguments. 0.20.1 Mostly a bug-fix release, with the most important fix being the multiple-definition error. Completely rewrote the ParsedFile system; each provider is now much shorter and much more maintainable. However, fundamental problems were found with the 'port' type, so it was disabled. Also, added a NetInfo provider for 'host' and an experimental NetInfo provider for 'mount'. Made the RRDGraph report *much* better and added reference generation for reports and functions. 0.20.0 Significantly refactored the parser. Resource overrides now consistently work anywhere in a class hierarchy. The language was also modified somewhat. The previous export/collect syntax is now used for handling virtual objects, and export/collect (which is still experimental) now uses double sigils (@@ and <<| |>>). Resource references (e.g., File["/etc/passwd"]) now have to be capitalized, in keeping with capitalized type operations. As usual, lots of other smaller fixes, but most of the work was in the language. 0.19.3 Fixing a bug in server/master.rb that causes the hostname not to be available in locally-executed manifests.
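The 0.22.0 entry above about bypassing the subshell is the difference between handing a command to a shell as one string and passing the arguments as a list; in Ruby:

    path = "/tmp/a file with spaces"

    # Through a shell: the argument must be quoted or it will be split on spaces.
    system("ls -l '#{path}'")

    # Direct execution: no shell is involved, so nothing needs quoting or escaping.
    system("ls", "-l", path)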
0.19.2 Fixing a few smaller bugs, notably in the reports system. Refreshed objects now generate an event, which can result in further refreshes of other objects. 0.19.1 Fixing two critical bugs: User management works again and cron jobs are no longer added to all user accounts. 0.19.0 Added provider support. Added support for %h, %H, and %d expansion in fileserver.conf. Added Certificate Revocation support. Made dynamic loading pervasive -- nearly every aspect of Puppet will now automatically load new instances (e.g., types, providers, and reports). Added support for automatic distribution of facts and plugins (custom types). 0.18.4 Another bug-fix release. The most important bug fixed is that cronjobs again work even with initially empty crontabs. 0.18.3 Mostly a bug-fix release; fixed small bugs in the functionality added in 0.18.2. 0.18.2 Added templating support. Added reporting. Added gem and blastwave packaging support. 0.18.1 Added signal handlers for HUP, so both client and server deal correctly with it. Added signal handler for USR1, which triggers a run on the client. As usual, fixed many bugs. Significant fixes to puppetrun -- it should behave much more correctly now. Added "fail" function which throws a syntax error if it's encountered. Added plugin downloading from the central server to the client. It must be enabled with --pluginsync. Added support for FreeBSD's special "@daily" cron schedules. Correctly handling spaces in file sources. Moved documentation into svn tree. 0.18.0 Added support for a "default" node. When multiple nodes are specified, they must now be comma-separated (this introduces a language incompatibility). Failed dependencies cause dependent objects within the same transaction not to run. Many updates to puppetrun. Many bug fixes. Function names are no longer reserved words. Links can now replace files. 0.17.2 Added "puppetrun" application and associated runner server and client classes. Fixed cron support so it better supports valid values and environment settings. 0.17.1 Fixing a bug requiring rails on all Debian boxes. Fixing a couple of other small bugs. 0.17.0 Adding ActiveRecord integration on the server. Adding export/collect functionality. Fixing many bugs. 0.16.5 Fixing a critical bug in importing classes from other files. Fixing nodename handling to actually allow dashes. 0.16.4 Fixing a critical bug in puppetd when acquiring a certificate for the first time. 0.16.3 Some significant bug fixes. Modified puppetd so that it can now function as an agent independent of a puppetmasterd process, e.g., using the PuppetShow web application. 0.16.2 Modified some of the AST classes so that class names, definition names, and node names are all set within the code being evaluated, so 'tagged(name)' returns true while evaluating 'name', for instance. Added '--clean' argument to puppetca to remove all traces of a given client. 0.16.1 Added 'tagged' and 'defined' functions. Moved all functions to a general framework that makes it very easy to add new functions. 0.16.0 Added 'tag' keyword/function. Added FreeBSD Ports support. Added 'pelement' server for sending or receiving Puppet objects, although none of the executables use it yet. 0.15.3 Fixed many bugs in :exec, including adding support for arrays of checks. Added autoloading for types and service variants (e.g., you can now just create a new type in the appropriate location and use it in Puppet, without modifying the core Puppet libs).
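The 0.18.1 entries above describe HUP and USR1 handlers; the usual Ruby shape is a trap that simply records the request for the main loop to act on (a sketch with invented flag names, not the actual daemon code):

    $restart = false
    $run_now = false
    Signal.trap("HUP")  { $restart = true }  # re-read configuration / restart
    Signal.trap("USR1") { $run_now = true }  # trigger an immediate client run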
0.15.2 Added darwinport, Apple .pkg, and freebsd package types. Added 'mount' type. Host facts are now set at the top scope (Bug #103). Added -e (inline execution) flag to 'puppet' executable. Many small bug fixes. 0.15.1 Fixed 'yum' installs so that they successfully upgrade packages. Fixed puppetmasterd.conf file so group settings take effect. 0.15.0 Upped the minor release because the file server is incompatible with 0.14, since it now handles links. The 'symlink' type is deprecated (but still present), in favor of using files with the 'target' parameter. Unset variables no longer throw an error; they just return an empty string. You can now specify tags to restrict which objects run during a given run. You can also specify to skip running against the cached copy when there's a failure, which is useful for testing new configurations. RPMs and Sun packages can now install, as long as they specify a package location, and they'll automatically upgrade if you point them to a new file with an upgrade. Multiple bug fixes. 0.14.1 Fixed a couple of small logging bugs. Fixed a bug with handling group ownership of links. 0.14.0 Added some ability to selectively manage symlinks when doing file management. Many bug fixes. Variables can now be used as the test values in case statements and selectors. Bumping a minor release number because 0.13.4 introduced a protocol incompatibility and should have had a minor rev bump. 0.13.6 Many, many small bug fixes. FreeBSD user/group support has been added. The configuration system has been rewritten so that daemons can now generate and repair the files and directories they need. (Fixed bug #68.) Fixed the element override issues; now only subclasses can override values. 0.13.5 Fixed packages so types can be specified. Added 'enable' state to services, although it does not work everywhere yet. 0.13.4 A few important bug fixes, mostly in the parser. 0.13.3 Changed transactions to be one-stage instead of two. Changed all types to use self[:name] instead of self.name, to support the symbolic naming implemented in 0.13.1. 0.13.2 Changed package[answerfile] to package[adminfile], and added package[responsefile]. Fixed a bunch of internal functions to behave more consistently and usefully. 0.13.1 Fixed RPM spec files to create puppet user and group (lutter). Fixed crontab reading and writing (luke). Added symbolic naming in the language (luke). 0.13.0 Added support for configuration files. Even more bug fixes, including the infamous 'frozen object' bug, which was a problem with 'waitforcert'. David Lutterkort got RPM into good shape. 0.12.0 Added Scheduling, and many bug fixes, of course. 0.11.2 Fixed bugs related to specifying arrays of requirements. Fixed a key bug in retrieving checksums. Fixed lots of usability bugs. Added 'fail' methods that automatically add file and line info when possible, and converted many errors to use that method. 0.11.1 Fixed bug with recursive copying with 'ignore' set. Added OpenBSD package support. 0.11.0 Added 'ensure' state to many elements. Modified puppetdoc to correctly handle indentation and such. Significantly rewrote much of the builtin documentation to take advantage of the new features in puppetdoc, including many examples. 0.10.2 Added SMF support. Added autorequire functionality, with specific support for exec and file. Exec elements autorequire any mentioned files, including the scripts, along with their CWDs. Files autorequire any parent directories. Added 'alias' metaparam. Fixed dependencies so they don't depend on file order.
0.10.1 Added Solaris package support and changed puppetmasterd to run as a non-root user. 0.10.0 Significant refactoring of how types, states, and parameters work, including breaking out parameters into a separate class. This refactoring did not introduce much new functionality, but made extension of Puppet significantly easier Also, fixed the bug with 'waitforcert' in puppetd. 0.9.4 Small fix to wrap the StatusServer class in the checks for required classes. 0.9.3 Fixed some significant bugs in cron job management. 0.9.2 Second Public Beta 0.9.0 First Public Beta diff --git a/lib/puppet/defaults.rb b/lib/puppet/defaults.rb index a95023895..0c8ac3f82 100644 --- a/lib/puppet/defaults.rb +++ b/lib/puppet/defaults.rb @@ -1,672 +1,676 @@ # The majority of the system configuration parameters are set in this file. module Puppet # If we're running the standalone puppet process as a non-root user, # use basedirs that are in the user's home directory. conf = nil var = nil name = $0.gsub(/.+#{File::SEPARATOR}/,'').sub(/\.rb$/, '') # Make File.expand_path happy require 'etc' ENV["HOME"] ||= Etc.getpwuid(Process.uid).dir if name != "puppetmasterd" and Puppet::Util::SUIDManager.uid != 0 conf = File.expand_path("~/.puppet") var = File.expand_path("~/.puppet/var") else # Else, use system-wide directories. conf = "/etc/puppet" var = "/var/puppet" end self.setdefaults(:main, :confdir => [conf, "The main Puppet configuration directory. The default for this parameter is calculated based on the user. If the process is runnig as root or the user that ``puppetmasterd`` is supposed to run as, it defaults to a system directory, but if it's running as any other user, it defaults to being in ``~``."], :vardir => [var, "Where Puppet stores dynamic and growing data. The default for this parameter is calculated specially, like `confdir`_."], :name => [name, "The name of the service, if we are running as one. The default is essentially $0 without the path or ``.rb``."] ) if name == "puppetmasterd" logopts = {:default => "$vardir/log", :mode => 0750, :owner => "$user", :group => "$group", :desc => "The Puppet log directory." } else logopts = ["$vardir/log", "The Puppet log directory."] end setdefaults(:main, :logdir => logopts) # This name hackery is necessary so that the rundir is set reasonably during # unit tests. if Process.uid == 0 and %w{puppetd puppetmasterd}.include?(self.name) rundir = "/var/run/puppet" else rundir = "$vardir/run" end self.setdefaults(:main, :trace => [false, "Whether to print stack traces on some errors"], :autoflush => [false, "Whether log files should always flush to disk."], :syslogfacility => ["daemon", "What syslog facility to use when logging to syslog. Syslog has a fixed list of valid facilities, and you must choose one of those; you cannot just make one up."], :statedir => { :default => "$vardir/state", :mode => 01755, :desc => "The directory where Puppet state is stored. Generally, this directory can be removed without causing harm (although it might result in spurious service restarts)." }, :ssldir => { :default => "$confdir/ssl", :mode => 0771, :owner => "root", :desc => "Where SSL certificates are kept." }, - :rundir => { :default => rundir, + :rundir => { + :default => rundir, + :mode => 01777, + :owner => "$user", + :group => "$group", :desc => "Where Puppet PID files are kept." }, :genconfig => [false, "Whether to just print a configuration to stdout and exit. Only makes sense when used interactively. 
Takes into account arguments specified on the CLI."], :genmanifest => [false, "Whether to just print a manifest to stdout and exit. Only makes sense when used interactively. Takes into account arguments specified on the CLI."], :configprint => ["", "Print the value of a specific configuration parameter. If a parameter is provided for this, then the value is printed and puppet exits. Comma-separate multiple values. For a list of all values, specify 'all'. This feature is only available in Puppet versions higher than 0.18.4."], :color => ["ansi", "Whether to use colors when logging to the console. Valid values are ``ansi`` (equivalent to ``true``), ``html`` (mostly used during testing with TextMate), and ``false``, which produces no color."], :mkusers => [false, "Whether to create the necessary user and group that puppetd will run as."], :path => {:default => "none", :desc => "The shell search path. Defaults to whatever is inherited from the parent process.", :hook => proc do |value| ENV["PATH"] = value unless value == "none" end }, :libdir => {:default => "$vardir/lib", :desc => "An extra search path for Puppet. This is only useful for those files that Puppet will load on demand, and is only guaranteed to work for those cases. In fact, the autoload mechanism is responsible for making sure this directory is in Ruby's search path", :call_on_define => true, # Call our hook with the default value, so we always get the libdir set. :hook => proc do |value| if defined? @oldlibdir and $:.include?(@oldlibdir) $:.delete(@oldlibdir) end @oldlibdir = value $: << value end }, :ignoreimport => [false, "A parameter that can be used in commit hooks, since it enables you to parse-check a single file rather than requiring that all files exist."], :authconfig => [ "$confdir/namespaceauth.conf", "The configuration file that defines the rights to the different namespaces and methods. This can be used as a coarse-grained authorization system for both ``puppetd`` and ``puppetmasterd``." ], :environments => ["production,development", "The valid environments for Puppet clients. This is more useful as a server-side setting than client, but any environment chosen must be in this list. Values should be separated by a comma."], :environment => {:default => "development", :desc => "The environment Puppet is running in. For clients (e.g., ``puppetd``) this determines the environment itself, which is used to find modules and much more. For servers (i.e., ``puppetmasterd``) this provides the default environment for nodes we know nothing about.", :hook => proc { |value| raise(ArgumentError, "Invalid environment %s" % value) unless Puppet::Node::Environment.valid?(value) } }, :diff_args => ["", "Which arguments to pass to the diff command when printing differences between files."], :diff => ["diff", "Which diff command to use when printing differences between files."], :show_diff => [false, "Whether to print a contextual diff when files are being replaced. The diff is printed on stdout, so this option is meaningless unless you are running Puppet interactively. This feature currently requires the ``diff/lcs`` Ruby library."], :yamldir => {:default => "$vardir/yaml", :owner => "$user", :group => "$user", :mode => "750", :desc => "The directory in which YAML data is stored, usually in a subdirectory."}, :daemonize => { :default => true, :desc => "Send the process into the background. This is the default.", :short => "D" }, :maximum_uid => [4294967290, "The maximum allowed UID. 
Some platforms use negative UIDs but then ship with tools that do not know how to handle signed ints, so the UIDs show up as huge numbers that can then not be fed back into the system. This is a hackish way to fail in a slightly more useful way when that happens."], :node_terminus => ["plain", "Where to find information about nodes."] ) hostname = Facter["hostname"].value domain = Facter["domain"].value if domain and domain != "" fqdn = [hostname, domain].join(".") else fqdn = hostname end Puppet.setdefaults(:ssl, :certname => [fqdn, "The name to use when handling certificates. Defaults to the fully qualified domain name."], :certdnsnames => ['', "The DNS names on the Server certificate as a colon-separated list. If it's anything other than an empty string, it will be used as an alias in the created certificate. By default, only the server gets an alias set up, and only for 'puppet'."], :certdir => ["$ssldir/certs", "The certificate directory."], :publickeydir => ["$ssldir/public_keys", "The public key directory."], :privatekeydir => { :default => "$ssldir/private_keys", :mode => 0750, :desc => "The private key directory." }, :privatedir => { :default => "$ssldir/private", :mode => 0750, :desc => "Where the client stores private certificate information." }, :passfile => { :default => "$privatedir/password", :mode => 0640, :desc => "Where puppetd stores the password for its private key. Generally unused." }, :hostcsr => { :default => "$ssldir/csr_$certname.pem", :mode => 0644, :desc => "Where individual hosts store and look for their certificates." }, :hostcert => { :default => "$certdir/$certname.pem", :mode => 0644, :desc => "Where individual hosts store and look for their certificates." }, :hostprivkey => { :default => "$privatekeydir/$certname.pem", :mode => 0600, :desc => "Where individual hosts store and look for their private key." }, :hostpubkey => { :default => "$publickeydir/$certname.pem", :mode => 0644, :desc => "Where individual hosts store and look for their public key." }, :localcacert => { :default => "$certdir/ca.pem", :mode => 0644, :desc => "Where each client stores the CA certificate." } ) setdefaults(:ca, :cadir => { :default => "$ssldir/ca", :owner => "$user", :group => "$group", :mode => 0770, :desc => "The root directory for the certificate authority." }, :cacert => { :default => "$cadir/ca_crt.pem", :owner => "$user", :group => "$group", :mode => 0660, :desc => "The CA certificate." }, :cakey => { :default => "$cadir/ca_key.pem", :owner => "$user", :group => "$group", :mode => 0660, :desc => "The CA private key." }, :capub => { :default => "$cadir/ca_pub.pem", :owner => "$user", :group => "$group", :desc => "The CA public key." }, :cacrl => { :default => "$cadir/ca_crl.pem", :owner => "$user", :group => "$group", :mode => 0664, :desc => "The certificate revocation list (CRL) for the CA. Set this to 'none' if you do not want to use a CRL." }, :caprivatedir => { :default => "$cadir/private", :owner => "$user", :group => "$group", :mode => 0770, :desc => "Where the CA stores private certificate information." }, :csrdir => { :default => "$cadir/requests", :owner => "$user", :group => "$group", :desc => "Where the CA stores certificate requests" }, :signeddir => { :default => "$cadir/signed", :owner => "$user", :group => "$group", :mode => 0770, :desc => "Where the CA stores signed certificates." 
}, :capass => { :default => "$caprivatedir/ca.pass", :owner => "$user", :group => "$group", :mode => 0660, :desc => "Where the CA stores the password for the private key" }, :serial => { :default => "$cadir/serial", :owner => "$user", :group => "$group", :desc => "Where the serial number for certificates is stored." }, :autosign => { :default => "$confdir/autosign.conf", :mode => 0644, :desc => "Whether to enable autosign. Valid values are true (which autosigns any key request, and is a very bad idea), false (which never autosigns any key request), and the path to a file, which uses that configuration file to determine which keys to sign."}, :ca_days => ["", "How long a certificate should be valid. This parameter is deprecated, use ca_ttl instead"], :ca_ttl => ["5y", "The default TTL for new certificates; valid values must be an integer, optionally followed by one of the units 'y' (years of 365 days), 'd' (days), 'h' (hours), or 's' (seconds). The unit defaults to seconds. If this parameter is set, ca_days is ignored. Examples are '3600' (one hour) and '1825d', which is the same as '5y' (5 years) "], :ca_md => ["md5", "The type of hash used in certificates."], :req_bits => [2048, "The bit length of the certificates."], :keylength => [1024, "The bit length of keys."], :cert_inventory => { :default => "$cadir/inventory.txt", :mode => 0644, :owner => "$user", :group => "$group", :desc => "A Complete listing of all certificates" } ) # Define the config default. self.setdefaults(self.settings[:name], :config => ["$confdir/puppet.conf", "The configuration file for #{Puppet[:name]}."], :pidfile => ["", "The pid file"], :bindaddress => ["", "The address to bind to. Mongrel servers default to 127.0.0.1 and WEBrick defaults to 0.0.0.0."], :servertype => ["webrick", "The type of server to use. Currently supported options are webrick and mongrel. If you use mongrel, you will need a proxy in front of the process or processes, since Mongrel cannot speak SSL."] ) self.setdefaults(:puppetmasterd, :user => ["puppet", "The user puppetmasterd should run as."], :group => ["puppet", "The group puppetmasterd should run as."], :manifestdir => ["$confdir/manifests", "Where puppetmasterd looks for its manifests."], :manifest => ["$manifestdir/site.pp", "The entry-point manifest for puppetmasterd."], :code => ["", "Code to parse directly. This is essentially only used by ``puppet``, and should only be set if you're writing your own Puppet executable"], :masterlog => { :default => "$logdir/puppetmaster.log", :owner => "$user", :group => "$group", :mode => 0660, :desc => "Where puppetmasterd logs. This is generally not used, since syslog is the default log destination." }, :masterhttplog => { :default => "$logdir/masterhttp.log", :owner => "$user", :group => "$group", :mode => 0660, :create => true, :desc => "Where the puppetmasterd web server logs." }, :masterport => [8140, "Which port puppetmasterd listens on."], :parseonly => [false, "Just check the syntax of the manifests."], :node_name => ["cert", "How the puppetmaster determines the client's identity and sets the 'hostname' fact for use in the manifest, in particular for determining which 'node' statement applies to the client. Possible values are 'cert' (use the subject's CN in the client's certificate) and 'facter' (use the hostname that the client reported in its facts)"], :bucketdir => { :default => "$vardir/bucket", :mode => 0750, :owner => "$user", :group => "$group", :desc => "Where FileBucket files are stored." 
}, :ca => [true, "Whether the master should function as a certificate authority."], :modulepath => [ "$confdir/modules:/usr/share/puppet/modules", "The search path for modules as a colon-separated list of directories." ], :ssl_client_header => ["HTTP_X_CLIENT_DN", "The header containing an authenticated client's SSL DN. Only used with Mongrel. This header must be set by the proxy to the authenticated client's SSL DN (e.g., ``/CN=puppet.reductivelabs.com``). See the `UsingMongrel`:trac: wiki page for more information."], :ssl_client_verify_header => ["HTTP_X_CLIENT_VERIFY", "The header containing the status message of the client verification. Only used with Mongrel. This header must be set by the proxy to 'SUCCESS' if the client successfully authenticated, and anything else otherwise. See the `UsingMongrel`:trac: wiki page for more information."] ) self.setdefaults(:puppetd, :localconfig => { :default => "$statedir/localconfig", :owner => "root", :mode => 0660, :desc => "Where puppetd caches the local configuration. An extension indicating the cache format is added automatically."}, :statefile => { :default => "$statedir/state.yaml", :mode => 0660, :desc => "Where puppetd and puppetmasterd store state associated with the running configuration. In the case of puppetmasterd, this file reflects the state discovered through interacting with clients." }, :classfile => { :default => "$statedir/classes.txt", :owner => "root", :mode => 0644, :desc => "The file in which puppetd stores a list of the classes associated with the retrieved configuration. Can be loaded in the separate ``puppet`` executable using the ``--loadclasses`` option."}, :puppetdlog => { :default => "$logdir/puppetd.log", :owner => "root", :mode => 0640, :desc => "The log file for puppetd. This is generally not used." }, :httplog => { :default => "$logdir/http.log", :owner => "root", :mode => 0640, :desc => "Where the puppetd web server logs." }, :http_proxy_host => ["none", "The HTTP proxy host to use for outgoing connections. Note: You may need to use a FQDN for the server hostname when using a proxy."], :http_proxy_port => [3128, "The HTTP proxy port to use for outgoing connections"], :http_enable_post_connection_check => [true, "Boolean; whether or not puppetd should validate the server SSL certificate against the request hostname."], :server => ["puppet", "The server to which puppetd should connect"], :ignoreschedules => [false, "Boolean; whether puppetd should ignore schedules. This is useful for initial puppetd runs."], :puppetport => [8139, "Which port puppetd listens on."], :noop => [false, "Whether puppetd should be run in noop mode."], :runinterval => [1800, # 30 minutes "How often puppetd applies the client configuration; in seconds."], :listen => [false, "Whether puppetd should listen for connections. If this is true, then by default only the ``runner`` server is started, which allows remote authorized and authenticated nodes to connect and trigger ``puppetd`` runs."], :ca_server => ["$server", "The server to use for certificate authority requests. It's a separate server because it cannot and does not need to horizontally scale."], :ca_port => ["$masterport", "The port to use for the certificate authority."] ) self.setdefaults(:filebucket, :clientbucketdir => { :default => "$vardir/clientbucket", :mode => 0750, :desc => "Where FileBucket files are stored locally."
} ) self.setdefaults(:fileserver, :fileserverconfig => ["$confdir/fileserver.conf", "Where the fileserver configuration is stored."] ) self.setdefaults(:reporting, :reports => ["store", "The list of reports to generate. All reports are looked for in puppet/reports/.rb, and multiple report names should be comma-separated (whitespace is okay)." ], :reportdir => {:default => "$vardir/reports", :mode => 0750, :owner => "$user", :group => "$group", :desc => "The directory in which to store reports received from the client. Each client gets a separate subdirectory."} ) self.setdefaults(:puppetd, :puppetdlockfile => [ "$statedir/puppetdlock", "A lock file to temporarily stop puppetd from doing anything."], :usecacheonfailure => [true, "Whether to use the cached configuration when the remote configuration will not compile. This option is useful for testing new configurations, where you want to fix the broken configuration rather than reverting to a known-good one." ], :ignorecache => [false, "Ignore cache and always recompile the configuration. This is useful for testing new configurations, where the local cache may in fact be stale even if the timestamps are up to date - if the facts change or if the server changes." ], :downcasefacts => [false, "Whether facts should be made all lowercase when sent to the server."], :dynamicfacts => ["memorysize,memoryfree,swapsize,swapfree", "Facts that are dynamic; these facts will be ignored when deciding whether changed facts should result in a recompile. Multiple facts should be comma-separated."], :splaylimit => ["$runinterval", "The maximum time to delay before runs. Defaults to being the same as the run interval."], :splay => [false, "Whether to sleep for a pseudo-random (but consistent) amount of time before a run."] ) self.setdefaults(:puppetd, :configtimeout => [120, "How long the client should wait for the configuration to be retrieved before considering it a failure. This can help reduce flapping if too many clients contact the server at one time." ], :reportserver => ["$server", "The server to which to send transaction reports." ], :report => [false, "Whether to send reports after every transaction." ] ) # Plugin information. self.setdefaults(:main, :pluginpath => ["$vardir/plugins", "Where Puppet should look for plugins. Multiple directories should be colon-separated, like normal PATH variables. As of 0.23.1, this option is deprecated; download your custom libraries to the $libdir instead."], :plugindest => ["$libdir", "Where Puppet should store plugins that it pulls down from the central server."], :pluginsource => ["puppet://$server/plugins", "From where to retrieve plugins. The standard Puppet ``file`` type is used for retrieval, so anything that is a valid file source can be used here."], :pluginsync => [false, "Whether plugins should be synced with the central server."], :pluginsignore => [".svn CVS", "What files to ignore when pulling down plugins."] ) # Central fact information. self.setdefaults(:main, :factpath => ["$vardir/facts", "Where Puppet should look for facts. Multiple directories should be colon-separated, like normal PATH variables."], :factdest => ["$vardir/facts", "Where Puppet should store facts that it pulls down from the central server."], :factsource => ["puppet://$server/facts", "From where to retrieve facts. 
The standard Puppet ``file`` type is used for retrieval, so anything that is a valid file source can be used here."], :factsync => [false, "Whether facts should be synced with the central server."], :factsignore => [".svn CVS", "What files to ignore when pulling down facts."] ) self.setdefaults(:tagmail, :tagmap => ["$confdir/tagmail.conf", "The mapping between reporting tags and email addresses."], :sendmail => [%x{which sendmail 2>/dev/null}.chomp, "Where to find the sendmail binary with which to send email."], :reportfrom => ["report@" + [Facter["hostname"].value, Facter["domain"].value].join("."), "The 'from' email address for the reports."], :smtpserver => ["none", "The server through which to send email reports."] ) self.setdefaults(:rails, :dblocation => { :default => "$statedir/clientconfigs.sqlite3", :mode => 0660, :owner => "$user", :group => "$group", :desc => "The database cache for client configurations. Used for querying within the language." }, :dbadapter => [ "sqlite3", "The type of database to use." ], :dbmigrate => [ false, "Whether to automatically migrate the database." ], :dbname => [ "puppet", "The name of the database to use." ], :dbserver => [ "localhost", "The database server for Client caching. Only used when networked databases are used."], :dbuser => [ "puppet", "The database user for Client caching. Only used when networked databases are used."], :dbpassword => [ "puppet", "The database password for Client caching. Only used when networked databases are used."], :dbsocket => [ "", "The database socket location. Only used when networked databases are used. Will be ignored if the value is an empty string."], :railslog => {:default => "$logdir/rails.log", :mode => 0600, :owner => "$user", :group => "$group", :desc => "Where Rails-specific logs are sent" }, :rails_loglevel => ["info", "The log level for Rails connections. The value must be a valid log level within Rails. Production environments normally use ``info`` and other environments normally use ``debug``."] ) setdefaults(:graphing, :graph => [false, "Whether to create dot graph files for the different configuration graphs. These dot files can be interpreted by tools like OmniGraffle or dot (which is part of ImageMagick)."], :graphdir => ["$statedir/graphs", "Where to store dot-outputted graphs."] ) setdefaults(:transaction, :tags => ["", "Tags to use to find resources. If this is set, then only resources tagged with the specified tags will be applied. Values must be comma-separated."], :evaltrace => [false, "Whether each resource should log when it is being evaluated. This allows you to interactively see exactly what is being done."], :summarize => [false, "Whether to print a transaction summary." ] ) setdefaults(:parser, :typecheck => [true, "Whether to validate types during parsing."], :paramcheck => [true, "Whether to validate parameters during parsing."] ) setdefaults(:main, :casesensitive => [false, "Whether matching in case statements and selectors should be case-sensitive. Case insensitivity is handled by downcasing all values before comparison."], :external_nodes => ["none", "An external command that can produce node information. The output must be a YAML dump of a hash, and that hash must have one or both of ``classes`` and ``parameters``, where ``classes`` is an array and ``parameters`` is a hash. For unknown nodes, the commands should exit with a non-zero exit code. 
This command makes it straightforward to store your node mapping information in other data sources like databases."]) setdefaults(:ldap, :ldapnodes => [false, "Whether to search for node configurations in LDAP. See `LdapNodes`:trac: for more information."], :ldapssl => [false, "Whether SSL should be used when searching for nodes. Defaults to false because SSL usually requires certificates to be set up on the client side."], :ldaptls => [false, "Whether TLS should be used when searching for nodes. Defaults to false because TLS usually requires certificates to be set up on the client side."], :ldapserver => ["ldap", "The LDAP server. Only used if ``ldapnodes`` is enabled."], :ldapport => [389, "The LDAP port. Only used if ``ldapnodes`` is enabled."], :ldapstring => ["(&(objectclass=puppetClient)(cn=%s))", "The search string used to find an LDAP node."], :ldapclassattrs => ["puppetclass", "The LDAP attributes to use to define Puppet classes. Values should be comma-separated."], :ldapattrs => ["all", "The LDAP attributes to include when querying LDAP for nodes. All returned attributes are set as variables in the top-level scope. Multiple values should be comma-separated. The value 'all' returns all attributes."], :ldapparentattr => ["parentnode", "The attribute to use to define the parent node."], :ldapuser => ["", "The user to use to connect to LDAP. Must be specified as a full DN."], :ldappassword => ["", "The password to use to connect to LDAP."], :ldapbase => ["", "The search base for LDAP searches. It's impossible to provide a meaningful default here, although the LDAP libraries might have one already set. Generally, it should be the 'ou=Hosts' branch under your main directory."] ) setdefaults(:puppetmasterd, :storeconfigs => [false, "Whether to store each client's configuration. This requires ActiveRecord from Ruby on Rails."] ) # This doesn't actually work right now. setdefaults(:parser, :lexical => [false, "Whether to use lexical scoping (vs. dynamic)."], :templatedir => ["$vardir/templates", "Where Puppet looks for template files." ] ) setdefaults(:main, :filetimeout => [ 15, "The minimum time to wait (in seconds) between checking for updates in configuration files. This timeout determines how quickly Puppet checks whether a file (such as manifests or templates) has changed on disk." ] ) setdefaults(:metrics, :rrddir => {:default => "$vardir/rrd", :owner => "$user", :group => "$group", :desc => "The directory where RRD database files are stored. Directories for each reporting host will be created under this directory." }, :rrdgraph => [false, "Whether RRD information should be graphed."], :rrdinterval => ["$runinterval", "How often RRD should expect data. 
This should match how often the hosts report back to the server."] ) end diff --git a/lib/puppet/network/http_server/mongrel.rb b/lib/puppet/network/http_server/mongrel.rb index d6e21b189..d340f3d63 100644 --- a/lib/puppet/network/http_server/mongrel.rb +++ b/lib/puppet/network/http_server/mongrel.rb @@ -1,146 +1,151 @@ #!/usr/bin/env ruby # File: 06-11-14-mongrel_xmlrpc.rb # Author: Manuel Holtgrewe # # Copyright (c) 2006 Manuel Holtgrewe, 2007 Luke Kanies # # Permission is hereby granted, free of charge, to any person obtaining # a copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, # distribute, sublicense, and/or sell copies of the Software, and to # permit persons to whom the Software is furnished to do so, subject to # the following conditions: # # The above copyright notice and this permission notice shall be # included in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, # EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF # MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND # NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS # BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN # ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN # CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. # This file is based heavily on a file retrieved from # http://ttt.ggnore.net/2006/11/15/xmlrpc-with-mongrel-and-ruby-off-rails/ require 'rubygems' require 'mongrel' require 'xmlrpc/server' require 'puppet/network/xmlrpc/server' require 'puppet/network/http_server' require 'puppet/network/client_request' require 'puppet/daemon' require 'resolv' # This handler can be hooked into Mongrel to accept HTTP requests. After # checking whether the request itself is sane, the handler forwards it # to an internal instance of XMLRPC::BasicServer to process it. # # You can access the server by calling the Handler's "xmlrpc_server" # attribute accessor method and add XMLRPC handlers there. For example: # #
 # handler = Puppet::Network::HTTPServer::Mongrel.new(handlers)
 # handler.xmlrpc_server.add_handler("my.add") { |a, b| a.to_i + b.to_i }
 # 
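#  A minimal sketch (handler name, bind address, and port are illustrative
#  assumptions, not taken from this file) of serving the handler through
#  Mongrel itself:
#
#    handler = Puppet::Network::HTTPServer::Mongrel.new(:status => {})
#    server  = Mongrel::HttpServer.new("127.0.0.1", 8140)
#    server.register("/RPC2", handler)
#    server.run.join
#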
module Puppet::Network class HTTPServer::Mongrel < ::Mongrel::HttpHandler include Puppet::Daemon attr_reader :xmlrpc_server def initialize(handlers) if Puppet[:debug] $mongrel_debug_client = true Puppet.debug 'Mongrel client debugging enabled. [$mongrel_debug_client = true].' end # Create a new instance of BasicServer. We are supposed to subclass it # but that does not make sense since we would not introduce any new # behaviour and we have to subclass Mongrel::HttpHandler so our handler # works for Mongrel. @xmlrpc_server = Puppet::Network::XMLRPCServer.new handlers.each do |name, args| unless handler = Puppet::Network::Handler.handler(name) raise ArgumentError, "Invalid handler %s" % name end @xmlrpc_server.add_handler(handler.interface, handler.new(args)) end end # This method produces the same results as XMLRPC::CGIServer.serve # from Ruby's stdlib XMLRPC implementation. def process(request, response) # Make sure this has been a POST as required for XMLRPC. request_method = request.params[Mongrel::Const::REQUEST_METHOD] || Mongrel::Const::GET if request_method != "POST" then response.start(405) { |head, out| out.write("Method Not Allowed") } return end # Make sure the user has sent text/xml data. request_mime = request.params["CONTENT_TYPE"] || "text/plain" if parse_content_type(request_mime).first != "text/xml" then response.start(400) { |head, out| out.write("Bad Request") } return end # Make sure there is data in the body at all. length = request.params[Mongrel::Const::CONTENT_LENGTH].to_i if length <= 0 then response.start(411) { |head, out| out.write("Length Required") } return end # Check the body to be valid. if request.body.nil? or request.body.size != length then response.start(400) { |head, out| out.write("Bad Request") } return end info = client_info(request) # All checks above passed through response.start(200) do |head, out| head["Content-Type"] = "text/xml; charset=utf-8" begin out.write(@xmlrpc_server.process(request.body, info)) rescue => detail puts detail.backtrace raise end end end private def client_info(request) params = request.params ip = params["REMOTE_ADDR"] # JJM #906 The following dn.match regular expression is forgiving # enough to match the two Distinguished Name string contents # coming from Apache, Pound or other reverse SSL proxies. 
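# Illustrative only: an Apache-style subject string such as
#   "/C=US/ST=Oregon/O=Example/CN=client.example.com"
# leaves "client.example.com" in dn_matchdata[1] below.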
if dn = params[Puppet[:ssl_client_header]] and dn_matchdata = dn.match(/^.*?CN\s*=\s*(.*)/) client = dn_matchdata[1].to_str valid = (params[Puppet[:ssl_client_verify_header]] == 'SUCCESS') else - client = Resolv.getname(ip) + begin + client = Resolv.getname(ip) + rescue => detail + Puppet.err "Could not resolve %s: %s" % [ip, detail] + client = "unknown" + end valid = false end info = Puppet::Network::ClientRequest.new(client, ip, valid) return info end # Taken from XMLRPC::ParseContentType def parse_content_type(str) a, *b = str.split(";") return a.strip, *b end end end diff --git a/lib/puppet/network/xmlrpc/client.rb b/lib/puppet/network/xmlrpc/client.rb index 27bb3dc5e..f6a5e8db6 100644 --- a/lib/puppet/network/xmlrpc/client.rb +++ b/lib/puppet/network/xmlrpc/client.rb @@ -1,139 +1,150 @@ require 'puppet/sslcertificates' require 'puppet/network/http_pool' require 'openssl' require 'puppet/external/base64' require 'xmlrpc/client' require 'net/https' require 'yaml' module Puppet::Network class ClientError < Puppet::Error; end class XMLRPCClientError < Puppet::Error; end class XMLRPCClient < ::XMLRPC::Client attr_accessor :puppet_server, :puppet_port @clients = {} class << self include Puppet::Util include Puppet::Util::ClassGen end # Create a netclient for each handler def self.mkclient(handler) interface = handler.interface namespace = interface.prefix # Create a subclass for every client type. This is # so that all of the methods are on their own class, # so that their namespaces can define the same methods if # they want. constant = handler.name.to_s.capitalize name = namespace.downcase newclient = genclass(name, :hash => @clients, :constant => constant) interface.methods.each { |ary| method = ary[0] if public_method_defined?(method) raise Puppet::DevError, "Method %s is already defined" % method end newclient.send(:define_method,method) { |*args| Puppet.debug "Calling %s.%s" % [namespace, method] begin call("%s.%s" % [namespace, method.to_s],*args) rescue OpenSSL::SSL::SSLError => detail if detail.message =~ /bad write retry/ Puppet.warning "Transient SSL write error; restarting connection and retrying" self.recycle_connection retry end raise XMLRPCClientError, "Certificates were not trusted: %s" % detail rescue ::XMLRPC::FaultException => detail raise XMLRPCClientError, detail.faultString rescue Errno::ECONNREFUSED => detail msg = "Could not connect to %s on port %s" % [@host, @port] raise XMLRPCClientError, msg rescue SocketError => detail Puppet.err "Could not find server %s: %s" % [@host, detail.to_s] error = XMLRPCClientError.new( "Could not find server %s" % @host ) error.set_backtrace detail.backtrace raise error rescue Errno::EPIPE, EOFError Puppet.warning "Other end went away; restarting connection and retrying" self.recycle_connection retry rescue => detail if detail.message =~ /^Wrong size\. Was \d+, should be \d+$/ Puppet.warning "XMLRPC returned wrong size. Retrying." 
retry end Puppet.err "Could not call %s.%s: %s" % [namespace, method, detail.inspect] error = XMLRPCClientError.new(detail.to_s) error.set_backtrace detail.backtrace raise error end } } return newclient end def self.handler_class(handler) @clients[handler] || self.mkclient(handler) end + def http + unless @http + @http = Puppet::Network::HttpPool.http_instance(@host, @port, true) + end + @http + end + def initialize(hash = {}) hash[:Path] ||= "/RPC2" hash[:Server] ||= Puppet[:server] hash[:Port] ||= Puppet[:masterport] hash[:HTTPProxyHost] ||= Puppet[:http_proxy_host] hash[:HTTPProxyPort] ||= Puppet[:http_proxy_port] if "none" == hash[:HTTPProxyHost] hash[:HTTPProxyHost] = nil hash[:HTTPProxyPort] = nil end super( hash[:Server], hash[:Path], hash[:Port], hash[:HTTPProxyHost], hash[:HTTPProxyPort], nil, # user nil, # password true, # use_ssl 120 # a two minute timeout, instead of 30 seconds ) @http = Puppet::Network::HttpPool.http_instance(@host, @port) end # Get rid of our existing connection, replacing it with a new one. # This should only happen if we lose our connection somehow (e.g., an EPIPE) # or we've just downloaded certs and we need to create new http instances # with the certs added. def recycle_connection @http = Puppet::Network::HttpPool.http_instance(@host, @port, true) # reset the instance end def start - @http.start unless @http.started? + begin + @http.start unless @http.started? + rescue => detail + Puppet.err "Could not connect to server: %s" % detail + end end def local false end def local? false end end end diff --git a/lib/puppet/parser/compile.rb b/lib/puppet/parser/compile.rb index f76103a28..e1e230d48 100644 --- a/lib/puppet/parser/compile.rb +++ b/lib/puppet/parser/compile.rb @@ -1,508 +1,511 @@ # Created by Luke A. Kanies on 2007-08-13. # Copyright (c) 2007. All rights reserved. require 'puppet/node' require 'puppet/node/catalog' require 'puppet/util/errors' # Maintain a graph of scopes, along with a bunch of data # about the individual catalog we're compiling. class Puppet::Parser::Compile include Puppet::Util include Puppet::Util::Errors attr_reader :parser, :node, :facts, :collections, :catalog, :node_scope # Add a collection to the global list. def add_collection(coll) @collections << coll end # Do we use nodes found in the code, vs. the external node sources? def ast_nodes? parser.nodes.length > 0 end # Store the fact that we've evaluated a class, and store a reference to # the scope in which it was evaluated, so that we can look it up later. def class_set(name, scope) if existing = @class_scopes[name] if existing.nodescope? or scope.nodescope? raise Puppet::ParseError, "Cannot have classes, nodes, or definitions with the same name" else raise Puppet::DevError, "Somehow evaluated the same class twice" end end @class_scopes[name] = scope @catalog.add_class(name) unless name == "" end # Return the scope associated with a class. This is just here so # that subclasses can set their parent scopes to be the scope of # their parent class, and it's also used when looking up qualified # variables. def class_scope(klass) # They might pass in either the class or class name if klass.respond_to?(:classname) @class_scopes[klass.classname] else @class_scopes[klass] end end # Return a list of all of the defined classes. def classlist return @catalog.classes end # Compile our catalog. This mostly revolves around finding and evaluating classes. # This is the main entry into our catalog. def compile # Set the client's parameters into the top scope. 
set_node_parameters() evaluate_main() evaluate_ast_node() evaluate_node_classes() evaluate_generators() fail_on_unevaluated() finish() if Puppet[:storeconfigs] store() end return @catalog end # LAK:FIXME There are no tests for this. def delete_collection(coll) @collections.delete(coll) if @collections.include?(coll) end # LAK:FIXME There are no tests for this. def delete_resource(resource) @resource_table.delete(resource.ref) if @resource_table.include?(resource.ref) end # Return the node's environment. def environment unless defined? @environment if node.environment and node.environment != "" @environment = node.environment else @environment = nil end end @environment end # Evaluate all of the classes specified by the node. def evaluate_node_classes evaluate_classes(@node.classes, topscope) end # Evaluate each specified class in turn. If there are any classes we can't # find, just tag the catalog and move on. This method really just # creates resource objects that point back to the classes, and then the # resources are themselves evaluated later in the process. def evaluate_classes(classes, scope, lazy_evaluate = true) unless scope.source raise Puppet::DevError, "No source for scope passed to evaluate_classes" end found = [] classes.each do |name| # If we can find the class, then make a resource that will evaluate it. if klass = scope.findclass(name) found << name and next if class_scope(klass) # Create a resource to model this class, and then add it to the list # of resources. resource = Puppet::Parser::Resource.new(:type => "class", :title => klass.classname, :scope => scope, :source => scope.source) store_resource(scope, resource) # If they've disabled lazy evaluation (which the :include function does), # then evaluate our resource immediately. resource.evaluate unless lazy_evaluate @catalog.tag(klass.classname) found << name else Puppet.info "Could not find class %s for %s" % [name, node.name] @catalog.tag(name) end end found end # Return a resource by either its ref or its type and title. def findresource(string, name = nil) string = "%s[%s]" % [string.capitalize, name] if name @resource_table[string] end # Set up our compile. We require a parser # and a node object; the parser is so we can look up classes # and AST nodes, and the node has all of the client's info, # like facts and environment. def initialize(node, parser, options = {}) @node = node @parser = parser options.each do |param, value| begin send(param.to_s + "=", value) rescue NoMethodError raise ArgumentError, "Compile objects do not accept %s" % param end end initvars() init_main() end # Create a new scope, with either a specified parent scope or # using the top scope. Adds an edge between the scope and # its parent to the graph. def newscope(parent, options = {}) parent ||= topscope options[:compile] = self options[:parser] ||= self.parser scope = Puppet::Parser::Scope.new(options) @scope_graph.add_edge!(parent, scope) scope end # Find the parent of a given scope. Assumes scopes only ever have # one in edge, which will always be true. def parent(scope) if ary = @scope_graph.adjacent(scope, :direction => :in) and ary.length > 0 ary[0] else nil end end # Return any overrides for the given resource. def resource_overrides(resource) @resource_overrides[resource.ref] end # Return a list of all resources. def resources @resource_table.values end # Store a resource override. def store_override(override) override.override = true # If possible, merge the override in immediately. 
if resource = @resource_table[override.ref] resource.merge(override) else # Otherwise, store the override for later; these # get evaluated in Resource#finish. @resource_overrides[override.ref] << override end end # Store a resource in our resource table. def store_resource(scope, resource) # This might throw an exception verify_uniqueness(resource) # Store it in the global table. @resource_table[resource.ref] = resource # And in the resource graph. At some point, this might supercede # the global resource table, but the table is a lot faster # so it makes sense to maintain for now. @catalog.add_edge!(scope.resource, resource) end # The top scope is usually the top-level scope, but if we're using AST nodes, # then it is instead the node's scope. def topscope node_scope || @topscope end private # If ast nodes are enabled, then see if we can find and evaluate one. def evaluate_ast_node return unless ast_nodes? # Now see if we can find the node. astnode = nil @node.names.each do |name| break if astnode = @parser.nodes[name.to_s.downcase] end unless (astnode ||= @parser.nodes["default"]) raise Puppet::ParseError, "Could not find default node or by name with '%s'" % node.names.join(", ") end # Create a resource to model this node, and then add it to the list # of resources. resource = Puppet::Parser::Resource.new(:type => "node", :title => astnode.classname, :scope => topscope, :source => topscope.source) store_resource(topscope, resource) @catalog.tag(astnode.classname) resource.evaluate # Now set the node scope appropriately, so that :topscope can # behave differently. @node_scope = class_scope(astnode) end # Evaluate our collections and return true if anything returned an object. # The 'true' is used to continue a loop, so it's important. def evaluate_collections return false if @collections.empty? found_something = false exceptwrap do # We have to iterate over a dup of the array because # collections can delete themselves from the list, which # changes its length and causes some collections to get missed. @collections.dup.each do |collection| found_something = true if collection.evaluate end end return found_something end # Make sure all of our resources have been evaluated into native resources. # We return true if any resources have, so that we know to continue the # evaluate_generators loop. def evaluate_definitions exceptwrap do if ary = unevaluated_resources ary.each do |resource| resource.evaluate end # If we evaluated, let the loop know. return true else return false end end end # Iterate over collections and resources until we're sure that the whole # compile is evaluated. This is necessary because both collections # and defined resources can generate new resources, which themselves could # be defined resources. def evaluate_generators count = 0 loop do done = true # Call collections first, then definitions. done = false if evaluate_collections done = false if evaluate_definitions break if done + + count += 1 + if count > 1000 raise Puppet::ParseError, "Somehow looped more than 1000 times while evaluating host catalog" end end end # Find and evaluate our main object, if possible. 
def evaluate_main @main = @parser.findclass("", "") || @parser.newclass("") @topscope.source = @main @main_resource = Puppet::Parser::Resource.new(:type => "class", :title => :main, :scope => @topscope, :source => @main) @topscope.resource = @main_resource @catalog.add_vertex!(@main_resource) @resource_table["Class[main]"] = @main_resource @main_resource.evaluate end # Make sure the entire catalog is evaluated. def fail_on_unevaluated fail_on_unevaluated_overrides fail_on_unevaluated_resource_collections end # If there are any resource overrides remaining, then we could # not find the resource they were supposed to override, so we # want to throw an exception. def fail_on_unevaluated_overrides remaining = [] @resource_overrides.each do |name, overrides| remaining += overrides end unless remaining.empty? fail Puppet::ParseError, "Could not find object(s) %s" % remaining.collect { |o| o.ref }.join(", ") end end # Make sure we don't have any remaining collections that specifically # look for resources, because we want to consider those to be # parse errors. def fail_on_unevaluated_resource_collections remaining = [] @collections.each do |coll| # We're only interested in the 'resource' collections, # which result from direct calls of 'realize'. Anything # else is allowed not to return resources. # Collect all of them, so we have a useful error. if r = coll.resources if r.is_a?(Array) remaining += r else remaining << r end end end unless remaining.empty? raise Puppet::ParseError, "Failed to realize virtual resources %s" % remaining.join(', ') end end # Make sure all of our resources and such have done any last work # necessary. def finish @resource_table.each { |name, resource| resource.finish if resource.respond_to?(:finish) } end # Initialize the top-level scope, class, and resource. def init_main # Create our initial scope and a resource that will evaluate main. @topscope = Puppet::Parser::Scope.new(:compile => self, :parser => self.parser) @scope_graph.add_vertex!(@topscope) end # Set up all of our internal variables. def initvars # The table for storing class singletons. This will only actually # be used by top scopes and node scopes. @class_scopes = {} # The table for all defined resources. @resource_table = {} # The list of objects that will available for export. @exported_resources = {} # The list of overrides. This is used to cache overrides on objects # that don't exist yet. We store an array of each override. @resource_overrides = Hash.new do |overs, ref| overs[ref] = [] end # The list of collections that have been created. This is a global list, # but they each refer back to the scope that created them. @collections = [] # A list of tags we've generated; most class names. @tags = [] # A graph for maintaining scope relationships. @scope_graph = Puppet::SimpleGraph.new # For maintaining the relationship between scopes and their resources. @catalog = Puppet::Node::Catalog.new(@node.name) @catalog.version = @parser.version end # Set the node's parameters into the top-scope as variables. def set_node_parameters node.parameters.each do |param, value| @topscope.setvar(param, value) end end # Store the catalog into the database. def store unless Puppet.features.rails? raise Puppet::Error, "storeconfigs is enabled but rails is unavailable" end unless ActiveRecord::Base.connected? Puppet::Rails.connect end # We used to have hooks here for forking and saving, but I don't # think it's worth retaining at this point. 
store_to_active_record(@node, @resource_table.values) end # Do the actual storage. def store_to_active_record(node, resources) begin # We store all of the objects, even the collectable ones benchmark(:info, "Stored catalog for #{node.name}") do Puppet::Rails::Host.transaction do Puppet::Rails::Host.store(node, resources) end end rescue => detail if Puppet[:trace] puts detail.backtrace end Puppet.err "Could not store configs: %s" % detail.to_s end end # Return an array of all of the unevaluated resources. These will be definitions, # which need to get evaluated into native resources. def unevaluated_resources ary = @resource_table.find_all do |name, object| ! object.builtin? and ! object.evaluated? end.collect { |name, object| object } if ary.empty? return nil else return ary end end # Verify that the given resource isn't defined elsewhere. def verify_uniqueness(resource) # Short-curcuit the common case, unless existing_resource = @resource_table[resource.ref] return true end if typeclass = Puppet::Type.type(resource.type) and ! typeclass.isomorphic? Puppet.info "Allowing duplicate %s" % typeclass.name return true end # Either it's a defined type, which are never # isomorphic, or it's a non-isomorphic type, so # we should throw an exception. msg = "Duplicate definition: %s is already defined" % resource.ref if existing_resource.file and existing_resource.line msg << " in file %s at line %s" % [existing_resource.file, existing_resource.line] end if resource.line or resource.file msg << "; cannot redefine" end raise Puppet::ParseError.new(msg) end end diff --git a/lib/puppet/parser/resource.rb b/lib/puppet/parser/resource.rb index 3f346166e..7dc42ccec 100644 --- a/lib/puppet/parser/resource.rb +++ b/lib/puppet/parser/resource.rb @@ -1,462 +1,447 @@ # A resource that we're managing. This handles making sure that only subclasses # can set parameters. class Puppet::Parser::Resource require 'puppet/parser/resource/param' require 'puppet/parser/resource/reference' + require 'puppet/util/tagging' include Puppet::Util include Puppet::Util::MethodHelper include Puppet::Util::Errors include Puppet::Util::Logging + include Puppet::Util::Tagging attr_accessor :source, :line, :file, :scope, :rails_id attr_accessor :virtual, :override, :translated attr_reader :exported, :evaluated, :params - attr_writer :tags - # Determine whether the provided parameter name is a relationship parameter. def self.relationship_parameter?(name) unless defined?(@relationship_names) @relationship_names = Puppet::Type.relationship_params.collect { |p| p.name } end @relationship_names.include?(name) end # Proxy a few methods to our @ref object. [:builtin?, :type, :title].each do |method| define_method(method) do @ref.send(method) end end # Set up some boolean test methods [:exported, :translated, :override, :virtual, :evaluated].each do |method| newmeth = (method.to_s + "?").intern define_method(newmeth) do self.send(method) end end def [](param) param = symbolize(param) if param == :title return self.title end if @params.has_key?(param) @params[param].value else nil end end def builtin=(bool) @ref.builtin = bool end # Retrieve the associated definition and evaluate it. def evaluate if klass = @ref.definedtype finish() scope.compile.delete_resource(self) return klass.evaluate(:scope => scope, :resource => self) elsif builtin? 
devfail "Cannot evaluate a builtin type" else self.fail "Cannot find definition %s" % self.type end ensure @evaluated = true end # Mark this resource as both exported and virtual, # or remove the exported mark. def exported=(value) if value @virtual = true @exported = value else @exported = value end end # Do any finishing work on this object, called before evaluation or # before storage/translation. def finish add_overrides() add_defaults() add_metaparams() + add_scope_tags() validate() end def initialize(options) # Set all of the options we can. options.each do |option, value| if respond_to?(option.to_s + "=") send(option.to_s + "=", value) options.delete(option) end end unless self.scope raise ArgumentError, "Resources require a scope" end @source ||= scope.source options = symbolize_options(options) # Set up our reference. if type = options[:type] and title = options[:title] options.delete(:type) options.delete(:title) else raise ArgumentError, "Resources require a type and title" end @ref = Reference.new(:type => type, :title => title, :scope => self.scope) @params = {} # Define all of the parameters if params = options[:params] options.delete(:params) params.each do |param| set_parameter(param) end end # Throw an exception if we've got any arguments left to set. unless options.empty? raise ArgumentError, "Resources do not accept %s" % options.keys.collect { |k| k.to_s }.join(", ") end - @tags = [] tag(@ref.type) - tag(@ref.title) if @ref.title.to_s =~ /^[-\w]+$/ - - if scope.resource - @tags += scope.resource.tags - end + tag(@ref.title) if valid_tag?(@ref.title.to_s) end # Merge an override resource in. This will throw exceptions if # any overrides aren't allowed. def merge(resource) # Test the resource scope, to make sure the resource is even allowed # to override. unless self.source.object_id == resource.source.object_id || resource.source.child_of?(self.source) raise Puppet::ParseError.new("Only subclasses can override parameters", resource.line, resource.file) end # Some of these might fail, but they'll fail in the way we want. resource.params.each do |name, param| override_parameter(param) end end # Modify this resource in the Rails database. Poor design, yo. def modify_rails(db_resource) args = rails_args args.each do |param, value| db_resource[param] = value unless db_resource[param] == value end # Handle file specially if (self.file and (!db_resource.file or db_resource.file != self.file)) db_resource.file = self.file end updated_params = @params.inject({}) do |hash, ary| hash[ary[0].to_s] = ary[1] hash end db_resource.ar_hash_merge(db_resource.get_params_hash(db_resource.param_values), updated_params, :create => Proc.new { |name, parameter| parameter.to_rails(db_resource) }, :delete => Proc.new { |values| values.each { |value| db_resource.param_values.delete(value) } }, :modify => Proc.new { |db, mem| mem.modify_rails_values(db) }) updated_tags = tags.inject({}) { |hash, tag| hash[tag] = tag hash } db_resource.ar_hash_merge(db_resource.get_tag_hash(), updated_tags, :create => Proc.new { |name, tag| db_resource.add_resource_tag(name) }, :delete => Proc.new { |tag| db_resource.resource_tags.delete(tag) }, :modify => Proc.new { |db, mem| # nothing here }) end # Return the resource name, or the title if no name # was specified. def name unless defined? @name @name = self[:name] || self.title end @name end # This *significantly* reduces the number of calls to Puppet.[]. def paramcheck? unless defined? 
@@paramcheck @@paramcheck = Puppet[:paramcheck] end @@paramcheck end # A temporary occasion, until I get paths in the scopes figured out. def path to_s end # Return the short version of our name. def ref @ref.to_s end - # Add a tag to our current list. These tags will be added to all - # of the objects contained in this scope. - def tag(*ary) - ary.collect { |tag| tag.to_s.downcase }.collect { |tag| tag.split("::") }.flatten.each do |tag| - unless tag =~ /^\w[-\w]*$/ - fail Puppet::ParseError, "Invalid tag %s" % tag.inspect - end - unless @tags.include?(tag) - @tags << tag - end - end - end - - def tags - @tags.dup - end - def to_hash @params.inject({}) do |hash, ary| param = ary[1] # Skip "undef" values. if param.value != :undef hash[param.name] = param.value end hash end end # Turn our parser resource into a Rails resource. def to_rails(host) args = rails_args db_resource = host.resources.build(args) # Handle file specially db_resource.file = self.file @params.each { |name, param| param.to_rails(db_resource) } tags.each { |tag| db_resource.add_resource_tag(tag) } return db_resource end def to_s self.ref end # Translate our object to a transportable object. def to_trans return nil if virtual? if builtin? to_transobject else to_transbucket end end def to_transbucket bucket = Puppet::TransBucket.new([]) bucket.type = self.type bucket.name = self.title # TransBuckets don't support parameters, which is why they're being deprecated. return bucket end def to_transobject # Now convert to a transobject obj = Puppet::TransObject.new(@ref.title, @ref.type) to_hash.each do |p, v| if v.is_a?(Reference) v = v.to_ref elsif v.is_a?(Array) v = v.collect { |av| if av.is_a?(Reference) av = av.to_ref end av } end # If the value is an array with only one value, then # convert it to a single value. This is largely so that # the database interaction doesn't have to worry about # whether it returns an array or a string. obj[p.to_s] = if v.is_a?(Array) and v.length == 1 v[0] else v end end obj.file = self.file obj.line = self.line obj.tags = self.tags return obj end private # Add default values from our definition. def add_defaults scope.lookupdefaults(self.type).each do |name, param| unless @params.include?(name) self.debug "Adding default for %s" % name @params[name] = param end end end # Add any metaparams defined in our scope. This actually adds any metaparams # from any parent scope, and there's currently no way to turn that off. def add_metaparams Puppet::Type.eachmetaparam do |name| # Skip metaparams that we already have defined, unless they're relationship metaparams. # LAK:NOTE Relationship metaparams get treated specially -- we stack them, instead of # overriding. next if @params[name] and not self.class.relationship_parameter?(name) # Skip metaparams for which we get no value. next unless val = scope.lookupvar(name.to_s, false) and val != :undefined # The default case: just set the value return set_parameter(name, val) unless @params[name] # For relationship params, though, join the values (a la #446). @params[name].value = [@params[name].value, val].flatten end end # Add any overrides for this object. def add_overrides if overrides = scope.compile.resource_overrides(self) overrides.each do |over| self.merge(over) end # Remove the overrides, so that the configuration knows there # are none left. overrides.clear end end + def add_scope_tags + if scope_resource = scope.resource + tag(*scope_resource.tags) + end + end + # Accept a parameter from an override. 
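# Illustrative manifest-level view (names are made up): an override such as
#   File["/etc/motd"] { owner => "root" }
# in a subclass reaches this method as a Param named :owner. The appending
# form (e.g. require +> Class["ntp"]) sets param.add, so the new value is
# merged with the current one below instead of replacing it.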
def override_parameter(param) # This can happen if the override is defining a new parameter, rather # than replacing an existing one. (@params[param.name] = param and return) unless current = @params[param.name] # The parameter is already set. Fail if they're not allowed to override it. unless param.source.child_of?(current.source) if Puppet[:trace] puts caller end msg = "Parameter '%s' is already set on %s" % [param.name, self.to_s] if current.source.to_s != "" msg += " by %s" % current.source end if current.file or current.line fields = [] fields << current.file if current.file fields << current.line.to_s if current.line msg += " at %s" % fields.join(":") end msg += "; cannot redefine" raise Puppet::ParseError.new(msg, param.line, param.file) end # If we've gotten this far, we're allowed to override. # Merge with previous value, if the parameter was generated with the +> syntax. # It's important that we use the new param instance here, not the old one, # so that the source is registered correctly for later overrides. param.value = [current.value, param.value].flatten if param.add @params[param.name] = param end # Verify that all passed parameters are valid. This throws an error if # there's a problem, so we don't have to worry about the return value. def paramcheck(param) param = param.to_s # Now make sure it's a valid argument to our class. These checks # are organized in order of commonhood -- most types, it's a valid # argument and paramcheck is enabled. if @ref.typeclass.validattr?(param) true elsif %w{name title}.include?(param) # always allow these true elsif paramcheck? self.fail Puppet::ParseError, "Invalid parameter '%s' for type '%s'" % [param, @ref.type] end end def rails_args return [:type, :title, :line, :exported].inject({}) do |hash, param| # 'type' isn't a valid column name, so we have to use another name. to = (param == :type) ? :restype : param if value = self.send(param) hash[to] = value end hash end end # Define a parameter in our resource. def set_parameter(param, value = nil) if value param = Puppet::Parser::Resource::Param.new( :name => param, :value => value, :source => self.source ) elsif ! param.is_a?(Puppet::Parser::Resource::Param) raise ArgumentError, "Must pass a parameter or all necessary values" end # And store it in our parameter hash. @params[param.name] = param end # Make sure the resource's parameters are all valid for the type. def validate @params.each do |name, param| # Make sure it's a valid parameter. paramcheck(name) end end end diff --git a/lib/puppet/provider/package/fink.rb b/lib/puppet/provider/package/fink.rb index e0933df08..030e1a347 100755 --- a/lib/puppet/provider/package/fink.rb +++ b/lib/puppet/provider/package/fink.rb @@ -1,86 +1,84 @@ Puppet::Type.type(:package).provide :fink, :parent => :dpkg, :source => :dpkg do # Provide sorting functionality include Puppet::Util::Package desc "Package management via ``fink``." commands :fink => "/sw/bin/fink" commands :aptget => "/sw/bin/apt-get" commands :aptcache => "/sw/bin/apt-cache" commands :dpkgquery => "/sw/bin/dpkg-query" - defaultfor :operatingsystem => :darwin - has_feature :versionable # A derivative of DPKG; this is how most people actually manage # Debian boxes, and the only thing that differs is that it can # install packages from remote sites. def finkcmd(*args) fink(*args) end # Install a package using 'apt-get'. This function needs to support # installing a specific version. 
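# Illustrative example: with ensure set to a specific version such as
# "1.0-1" for a package named "wget", the method below ends up running
# roughly:
#
#   fink -b -q -y install wget=1.0-1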
def install if @resource[:responsefile] self.run_preseed end should = @resource.should(:ensure) str = @resource[:name] case should when true, false, Symbol # pass else # Add the package version str += "=%s" % should end cmd = %w{-b -q -y} keep = "" cmd << :install << str finkcmd(cmd) end # What's the latest package version available? def latest output = aptcache :policy, @resource[:name] if output =~ /Candidate:\s+(\S+)\s/ return $1 else self.err "Could not find latest version" return nil end end # # preseeds answers to dpkg-set-selection from the "responsefile" # def run_preseed if response = @resource[:responsefile] and FileTest.exists?(response) self.info("Preseeding %s to debconf-set-selections" % response) preseed response else self.info "No responsefile specified or non existant, not preseeding anything" end end def update self.install end def uninstall finkcmd "-y", "-q", :remove, @model[:name] end def purge aptget '-y', '-q', 'remove', '--purge', @resource[:name] end end diff --git a/lib/puppet/provider/sshkey/parsed.rb b/lib/puppet/provider/sshkey/parsed.rb index cb1010c5b..6f7d98f56 100755 --- a/lib/puppet/provider/sshkey/parsed.rb +++ b/lib/puppet/provider/sshkey/parsed.rb @@ -1,35 +1,37 @@ require 'puppet/provider/parsedfile' known = nil case Facter.value(:operatingsystem) when "Darwin": known = "/etc/ssh_known_hosts" else known = "/etc/ssh/ssh_known_hosts" end Puppet::Type.type(:sshkey).provide(:parsed, :parent => Puppet::Provider::ParsedFile, :default_target => known, :filetype => :flat ) do + desc "Parse and generate host-wide known hosts files for SSH." + text_line :comment, :match => /^#/ text_line :blank, :match => /^\s+/ record_line :parsed, :fields => %w{name type key}, :post_parse => proc { |hash| if hash[:name] =~ /,/ names = hash[:name].split(",") hash[:name] = names.shift hash[:alias] = names end }, :pre_gen => proc { |hash| if hash[:alias] names = [hash[:name], hash[:alias]].flatten hash[:name] = [hash[:name], hash[:alias]].flatten.join(",") hash.delete(:alias) end } end diff --git a/lib/puppet/transaction.rb b/lib/puppet/transaction.rb index 6a4981298..f304cadc6 100644 --- a/lib/puppet/transaction.rb +++ b/lib/puppet/transaction.rb @@ -1,737 +1,737 @@ # the class that actually walks our resource/property tree, collects the changes, # and performs them require 'puppet' require 'puppet/propertychange' module Puppet class Transaction attr_accessor :component, :catalog, :ignoreschedules attr_accessor :sorted_resources, :configurator # The report, once generated. attr_reader :report # The list of events generated in this transaction. attr_reader :events attr_writer :tags include Puppet::Util # Add some additional times for reporting def addtimes(hash) hash.each do |name, num| @timemetrics[name] = num end end # Check to see if we should actually allow processing, but this really only # matters when a resource is getting deleted. def allow_processing?(resource, changes) # If a resource is going to be deleted but it still has # dependencies, then don't delete it unless it's implicit or the # dependency is itself being deleted. if resource.purging? and resource.deleting? if deps = relationship_graph.dependents(resource) and ! deps.empty? and deps.detect { |d| ! d.deleting? } resource.warning "%s still depend%s on me -- not purging" % [deps.collect { |r| r.ref }.join(","), deps.length > 1 ? "":"s"] return false end end return true end # Are there any failed resources in this transaction? def any_failed? 
failures = @failures.inject(0) { |failures, array| failures += array[1]; failures } if failures > 0 failures else false end end # Apply all changes for a resource, returning a list of the events # generated. def apply(resource) begin changes = resource.evaluate rescue => detail if Puppet[:trace] puts detail.backtrace end resource.err "Failed to retrieve current state of resource: %s" % detail # Mark that it failed @failures[resource] += 1 # And then return return [] end changes = [changes] unless changes.is_a?(Array) if changes.length > 0 @resourcemetrics[:out_of_sync] += 1 end return [] if changes.empty? or ! allow_processing?(resource, changes) resourceevents = apply_changes(resource, changes) # If there were changes and the resource isn't in noop mode... unless changes.empty? or resource.noop # Record when we last synced resource.cache(:synced, Time.now) # Flush, if appropriate if resource.respond_to?(:flush) resource.flush end # And set a trigger for refreshing this resource if it's a # self-refresher if resource.self_refresh? and ! resource.deleting? # Create an edge with this resource as both the source and # target. The triggering method treats these specially for # logging. events = resourceevents.collect { |e| e.event } set_trigger(Puppet::Relationship.new(resource, resource, :callback => :refresh, :event => events)) end end resourceevents end # Apply each change in turn. def apply_changes(resource, changes) changes.collect { |change| @changes << change @count += 1 change.transaction = self events = nil begin # use an array, so that changes can return more than one # event if they want events = [change.forward].flatten.reject { |e| e.nil? } rescue => detail if Puppet[:trace] puts detail.backtrace end change.property.err "change from %s to %s failed: %s" % [change.property.is_to_s(change.is), change.property.should_to_s(change.should), detail] @failures[resource] += 1 next # FIXME this should support using onerror to determine # behaviour; or more likely, the client calling us # should do so end # Mark that our change happened, so it can be reversed # if we ever get to that point unless events.nil? or (events.is_a?(Array) and (events.empty?) or events.include?(:noop)) change.changed = true @resourcemetrics[:applied] += 1 end events }.flatten.reject { |e| e.nil? } end # Find all of the changed resources. def changed? @changes.find_all { |change| change.changed }.collect { |change| unless change.property.resource raise "No resource for %s" % change.inspect end change.property.resource }.uniq end # Do any necessary cleanup. If we don't get rid of the graphs, the # contained resources might never get cleaned up. def cleanup if defined? @generated relationship_graph.remove_resource(*@generated) end end # Copy an important relationships from the parent to the newly-generated # child resource. def copy_relationships(resource, children) depthfirst = resource.depthfirst? children.each do |gen_child| if depthfirst edge = [gen_child, resource] else edge = [resource, gen_child] end relationship_graph.add_resource(gen_child) unless relationship_graph.resource(gen_child.ref) unless relationship_graph.edge?(edge[1], edge[0]) relationship_graph.add_edge!(*edge) else resource.debug "Skipping automatic relationship to %s" % gen_child end end end # Are we deleting this resource? def deleting?(changes) changes.detect { |change| change.property.name == :ensure and change.should == :absent } end # See if the resource generates new resources at evaluation time. 
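# A hypothetical sketch (not code from this file) of how a type opts in:
# it defines eval_generate and returns an array of fully-built child
# resources, for example one file resource per directory entry:
#
#   def eval_generate
#     Dir.entries(self[:path]).reject { |n| n =~ /^\.\.?$/ }.collect do |name|
#       Puppet::Type.type(:file).create(:path => File.join(self[:path], name))
#     end
#   end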
def eval_generate(resource) if resource.respond_to?(:eval_generate) begin children = resource.eval_generate rescue => detail if Puppet[:trace] puts detail.backtrace end resource.err "Failed to generate additional resources during transaction: %s" % detail return nil end if children children.each { |child| child.finish } @generated += children return children end end end # Evaluate a single resource. def eval_resource(resource, checkskip = true) events = [] if resource.is_a?(Puppet::Type::Component) raise Puppet::DevError, "Got a component to evaluate" end if checkskip and skip?(resource) @resourcemetrics[:skipped] += 1 else @resourcemetrics[:scheduled] += 1 changecount = @changes.length # We need to generate first regardless, because the recursive # actions sometimes change how the top resource is applied. children = eval_generate(resource) if children and resource.depthfirst? children.each do |child| # The child will never be skipped when the parent isn't events += eval_resource(child, false) end end # Perform the actual changes seconds = thinmark do events += apply(resource) end if children and ! resource.depthfirst? children.each do |child| events += eval_resource(child, false) end end # Create a child/parent relationship. We do this after everything else because # we want explicit relationships to be able to override automatic relationships, # including this one. if children copy_relationships(resource, children) end # A bit of hackery here -- if skipcheck is true, then we're the # top-level resource. If that's the case, then make sure all of # the changes list this resource as a proxy. This is really only # necessary for rollback, since we know the generating resource # during forward changes. if children and checkskip @changes[changecount..-1].each { |change| change.proxy = resource } end # Keep track of how long we spend in each type of resource @timemetrics[resource.class.name] += seconds end # Check to see if there are any events for this resource if triggedevents = trigger(resource) events += triggedevents end # Collect the targets of any subscriptions to those events. We pass # the parent resource in so it will override the source in the events, # since eval_generated children can't have direct relationships. relationship_graph.matching_edges(events, resource).each do |orig_edge| # We have to dup the label here, else we modify the original edge label, # which affects whether a given event will match on the next run, which is, # of course, bad. edge = orig_edge.class.new(orig_edge.source, orig_edge.target) label = orig_edge.label.dup label[:event] = events.collect { |e| e.event } edge.label = label set_trigger(edge) end # And return the events for collection events end # This method does all the actual work of running a transaction. It # collects all of the changes, executes them, and responds to any # necessary events. def evaluate @count = 0 # Start logging. Puppet::Util::Log.newdestination(@report) prepare() begin allevents = @sorted_resources.collect { |resource| if resource.is_a?(Puppet::Type::Component) Puppet.warning "Somehow left a component in the relationship graph" next end ret = nil seconds = thinmark do ret = eval_resource(resource) end if Puppet[:evaltrace] and @catalog.host_config? resource.info "Evaluated in %0.2f seconds" % seconds end ret }.flatten.reject { |e| e.nil? } ensure # And then close the transaction log. 
Puppet::Util::Log.close(@report) end Puppet.debug "Finishing transaction %s with %s changes" % [self.object_id, @count] @events = allevents allevents end # Determine whether a given resource has failed. def failed?(obj) if @failures[obj] > 0 return @failures[obj] else return false end end # Does this resource have any failed dependencies? def failed_dependencies?(resource) # First make sure there are no failed dependencies. To do this, # we check for failures in any of the vertexes above us. It's not # enough to check the immediate dependencies, which is why we use # a tree from the reversed graph. skip = false deps = relationship_graph.dependencies(resource) deps.each do |dep| if fails = failed?(dep) resource.notice "Dependency %s[%s] has %s failures" % [dep.class.name, dep.name, @failures[dep]] skip = true end end return skip end # Collect any dynamically generated resources. def generate list = @catalog.vertices # Store a list of all generated resources, so that we can clean them up # after the transaction closes. @generated = [] newlist = [] while ! list.empty? list.each do |resource| if resource.respond_to?(:generate) begin made = resource.generate rescue => detail resource.err "Failed to generate additional resources: %s" % detail end next unless made unless made.is_a?(Array) made = [made] end made.uniq! made.each do |res| @catalog.add_resource(res) res.catalog = catalog newlist << res @generated << res res.finish end end end list.clear list = newlist newlist = [] end end # Generate a transaction report. def generate_report @resourcemetrics[:failed] = @failures.find_all do |name, num| num > 0 end.length # Get the total time spent @timemetrics[:total] = @timemetrics.inject(0) do |total, vals| total += vals[1] total end # Add all of the metrics related to resource count and status @report.newmetric(:resources, @resourcemetrics) # Record the relative time spent in each resource. @report.newmetric(:time, @timemetrics) # Then all of the change-related metrics @report.newmetric(:changes, :total => @changes.length ) @report.time = Time.now return @report end # Should we ignore tags? def ignore_tags? - ! @catalog.host_config? + ! (@catalog.host_config? or Puppet[:name] == "puppet") end # this should only be called by a Puppet::Type::Component resource now # and it should only receive an array def initialize(resources) if resources.is_a?(Puppet::Node::Catalog) @catalog = resources elsif resources.is_a?(Puppet::PGraph) raise "Transactions should get catalogs now, not PGraph" else raise "Transactions require catalogs" end @resourcemetrics = { :total => @catalog.vertices.length, :out_of_sync => 0, # The number of resources that had changes :applied => 0, # The number of resources fixed :skipped => 0, # The number of resources skipped :restarted => 0, # The number of resources triggered :failed_restarts => 0, # The number of resources that fail a trigger :scheduled => 0 # The number of resources scheduled } # Metrics for distributing times across the different types. @timemetrics = Hash.new(0) # The number of resources that were triggered in this run @triggered = Hash.new { |hash, key| hash[key] = Hash.new(0) } # Targets of being triggered. @targets = Hash.new do |hash, key| hash[key] = [] end # The changes we're performing @changes = [] # The resources that have failed and the number of failures each. This # is used for skipping resources because of failed dependencies. @failures = Hash.new do |h, key| h[key] = 0 end @report = Report.new @count = 0 end # Prefetch any providers that support it. 
We don't support prefetching # types, just providers. def prefetch prefetchers = {} @catalog.vertices.each do |resource| if provider = resource.provider and provider.class.respond_to?(:prefetch) prefetchers[provider.class] ||= {} prefetchers[provider.class][resource.title] = resource end end # Now call prefetch, passing in the resources so that the provider instances can be replaced. prefetchers.each do |provider, resources| Puppet.debug "Prefetching %s resources for %s" % [provider.name, provider.resource_type.name] begin provider.prefetch(resources) rescue => detail if Puppet[:trace] puts detail.backtrace end Puppet.err "Could not prefetch %s provider '%s': %s" % [provider.resource_type.name, provider.name, detail] end end end # Prepare to evaluate the resources in a transaction. def prepare prefetch() # Now add any dynamically generated resources generate() # This will throw an error if there are cycles in the graph. @sorted_resources = relationship_graph.topsort end def relationship_graph catalog.relationship_graph end # Send off the transaction report. def send_report begin report = generate_report() rescue => detail Puppet.err "Could not generate report: %s" % detail return end if Puppet[:rrdgraph] == true report.graph() end if Puppet[:summarize] puts report.summary end if Puppet[:report] begin reportclient().report(report) rescue => detail Puppet.err "Reporting failed: %s" % detail end end end def reportclient unless defined? @reportclient @reportclient = Puppet::Network::Client.report.new( :Server => Puppet[:reportserver] ) end @reportclient end # Roll all completed changes back. def rollback @targets.clear @triggered.clear allevents = @changes.reverse.collect { |change| # skip changes that were never actually run unless change.changed Puppet.debug "%s was not changed" % change.to_s next end begin events = change.backward rescue => detail Puppet.err("%s rollback failed: %s" % [change,detail]) if Puppet[:trace] puts detail.backtrace end next # at this point, we would normally do error handling # but i haven't decided what to do for that yet # so just record that a sync failed for a given resource #@@failures[change.property.parent] += 1 # this still could get hairy; what if file contents changed, # but a chmod failed? how would i handle that error? dern end # FIXME This won't work right now. relationship_graph.matching_edges(events).each do |edge| @targets[edge.target] << edge end # Now check to see if there are any events for this child. # Kind of hackish, since going backwards goes a change at a # time, not a child at a time. trigger(change.property.resource) # And return the events for collection events }.flatten.reject { |e| e.nil? } end # Is the resource currently scheduled? def scheduled?(resource) self.ignoreschedules or resource.scheduled? end # Set an edge to be triggered when we evaluate its target. def set_trigger(edge) return unless method = edge.callback return unless edge.target.respond_to?(method) if edge.target.respond_to?(:ref) unless edge.source == edge.target edge.source.info "Scheduling %s of %s" % [edge.callback, edge.target.ref] end end @targets[edge.target] << edge end # Should this resource be skipped? def skip?(resource) skip = false if missing_tags?(resource) resource.debug "Not tagged with %s" % tags.join(", ") elsif ! scheduled?(resource) resource.debug "Not scheduled" elsif failed_dependencies?(resource) resource.warning "Skipping because of failed dependencies" else return false end return true end # The tags we should be checking. 
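# Illustrative example: running with --tags "webserver,ntp" makes
# Puppet[:tags] the string "webserver,ntp", which is split below into
# ["webserver", "ntp"]; resources tagged with neither are then skipped.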
def tags unless defined? @tags tags = Puppet[:tags] if tags.nil? or tags == "" @tags = [] else @tags = tags.split(/\s*,\s*/) end end @tags end # Is this resource tagged appropriately? def missing_tags?(resource) return false if self.ignore_tags? or tags.empty? return true unless resource.tagged?(tags) end # Are there any edges that target this resource? def targeted?(resource) # The default value is a new array so we have to test the length of it. @targets.include?(resource) and @targets[resource].length > 0 end # Trigger any subscriptions to a child. This does an upwardly recursive # search -- it triggers the passed resource, but also the resource's parent # and so on up the tree. def trigger(resource) return nil unless targeted?(resource) callbacks = Hash.new { |hash, key| hash[key] = [] } trigged = [] @targets[resource].each do |edge| # Collect all of the subs for each callback callbacks[edge.callback] << edge end callbacks.each do |callback, subs| noop = true subs.each do |edge| if edge.event.nil? or ! edge.event.include?(:noop) noop = false end end if noop resource.notice "Would have triggered %s from %s dependencies" % [callback, subs.length] # And then add an event for it. return [Puppet::Event.new( :event => :noop, :transaction => self, :source => resource )] end if subs.length == 1 and subs[0].source == resource message = "Refreshing self" else message = "Triggering '%s' from %s dependencies" % [callback, subs.length] end resource.notice message # At this point, just log failures, don't try to react # to them in any way. begin resource.send(callback) @resourcemetrics[:restarted] += 1 rescue => detail resource.err "Failed to call %s on %s: %s" % [callback, resource, detail] @resourcemetrics[:failed_restarts] += 1 if Puppet[:trace] puts detail.backtrace end end # And then add an event for it. trigged << Puppet::Event.new( :event => :triggered, :transaction => self, :source => resource ) triggered(resource, callback) end if trigged.empty? return nil else return trigged end end def triggered(resource, method) @triggered[resource][method] += 1 end def triggered?(resource, method) @triggered[resource][method] end end end require 'puppet/transaction/report' diff --git a/lib/puppet/type/pfile/ensure.rb b/lib/puppet/type/pfile/ensure.rb index 0a6f73d95..3aa918f65 100755 --- a/lib/puppet/type/pfile/ensure.rb +++ b/lib/puppet/type/pfile/ensure.rb @@ -1,174 +1,179 @@ module Puppet Puppet.type(:file).ensurable do require 'etc' desc "Whether to create files that don't currently exist. Possible values are *absent*, *present* (will match any form of file existence, and if the file is missing will create an empty file), *file*, and *directory*. Specifying ``absent`` will delete the file, although currently this will not recursively delete directories. Anything other than those values will be considered to be a symlink. For instance, the following text creates a link:: # Useful on solaris file { \"/etc/inetd.conf\": ensure => \"/etc/inet/inetd.conf\" } You can make relative links:: # Useful on solaris file { \"/etc/inetd.conf\": ensure => \"inet/inetd.conf\" } If you need to make a relative link to a file named the same as one of the valid values, you must prefix it with ``./`` or something similar. You can also make recursive symlinks, which will create a directory structure that maps to the target directory, with directories corresponding to each directory and links corresponding to each file." # Most 'ensure' properties have a default, but with files we, um, don't. 
nodefault newvalue(:absent) do File.unlink(@resource[:path]) end aliasvalue(:false, :absent) newvalue(:file) do # Make sure we're not managing the content some other way if property = (@resource.property(:content) || @resource.property(:source)) property.sync else @resource.write(false) { |f| f.flush } mode = @resource.should(:mode) end return :file_created end #aliasvalue(:present, :file) newvalue(:present) do # Make a file if they want something, but this will match almost # anything. set_file end newvalue(:directory) do mode = @resource.should(:mode) parent = File.dirname(@resource[:path]) unless FileTest.exists? parent raise Puppet::Error, "Cannot create %s; parent directory %s does not exist" % [@resource[:path], parent] end @resource.write_if_writable(parent) do if mode Puppet::Util.withumask(000) do Dir.mkdir(@resource[:path],mode) end else Dir.mkdir(@resource[:path]) end end @resource.send(:property_fix) @resource.setchecksum return :directory_created end newvalue(:link) do if property = @resource.property(:target) property.retrieve return property.mklink else self.fail "Cannot create a symlink without a target" end end # Symlinks. newvalue(/./) do # This code never gets executed. We need the regex to support # specifying it, but the work is done in the 'symlink' code block. end munge do |value| value = super(value) return value if value.is_a? Symbol @resource[:target] = value return :link end def change_to_s(currentvalue, newvalue) if property = (@resource.property(:content) || @resource.property(:source)) and ! property.insync?(currentvalue) currentvalue = property.retrieve return property.change_to_s(property.retrieve, property.should) else super(currentvalue, newvalue) end end # Check that we can actually create anything def check basedir = File.dirname(@resource[:path]) if ! FileTest.exists?(basedir) raise Puppet::Error, "Can not create %s; parent directory does not exist" % @resource.title elsif ! FileTest.directory?(basedir) raise Puppet::Error, "Can not create %s; %s is not a directory" % [@resource.title, dirname] end end # We have to treat :present specially, because it works with any # type of file. def insync?(currentvalue) + if property = @resource.property(:source) and ! property.described? + warning "No specified sources exist" + return true + end + if self.should == :present if currentvalue.nil? or currentvalue == :absent return false else return true end else return super(currentvalue) end end def retrieve if stat = @resource.stat(false) return stat.ftype.intern else if self.should == :false return :false else return :absent end end end def sync @resource.remove_existing(self.should) if self.should == :absent return :file_removed end event = super return event end end end diff --git a/lib/puppet/type/pfile/source.rb b/lib/puppet/type/pfile/source.rb index 1849d5a61..3dfb5cccd 100755 --- a/lib/puppet/type/pfile/source.rb +++ b/lib/puppet/type/pfile/source.rb @@ -1,297 +1,297 @@ module Puppet # Copy files from a local or remote source. This state *only* does any work # when the remote file is an actual file; in that case, this state copies # the file down. If the remote file is a dir or a link or whatever, then # this state, during retrieval, modifies the appropriate other states # so that things get taken care of appropriately. Puppet.type(:file).newproperty(:source) do include Puppet::Util::Diff attr_accessor :source, :local desc "Copy a file over the current file. Uses ``checksum`` to determine when a file should be copied. 
            Valid values are either fully qualified paths to files, or
            URIs.  Currently supported URI types are *puppet* and *file*.

            This is one of the primary mechanisms for getting content into
            applications that Puppet does not directly support and is very
            useful for those configuration files that don't change much across
            systems.  For instance::

                class sendmail {
                    file { \"/etc/mail/sendmail.cf\":
                        source => \"puppet://server/module/sendmail.cf\"
                    }
                }

            You can also leave out the server name, in which case ``puppetd``
            will fill in the name of its configuration server and ``puppet``
            will use the local filesystem.  This makes it easy to use the same
            configuration in both local and centralized forms.

            Currently, only the ``puppet`` scheme is supported for source
            URLs.  Puppet will connect to the file server running on
            ``server`` to retrieve the contents of the file.  If the
            ``server`` part is empty, the behavior of the command-line
            interpreter (``puppet``) and the client daemon (``puppetd``) differs
            slightly:

            ``puppet`` will look such a file up on the module path on the local
            host, whereas ``puppetd`` will connect to the puppet server that it
            received the manifest from.

            See the `FileServingConfiguration fileserver configuration documentation`:trac:
            for information on how to configure and use file services within Puppet.

            If you specify multiple file sources for a file, then the first
            source that exists will be used.  This allows you to specify
            what amounts to a search path for files::

                file { \"/path/to/my/file\":
                    source => [
                        \"/nfs/files/file.$host\",
                        \"/nfs/files/file.$operatingsystem\",
                        \"/nfs/files/file\"
                    ]
                }

            This will use the first found file as the source.

            You cannot currently copy links using this mechanism; set ``links``
            to ``follow`` if any remote sources are links.
            "

        uncheckable

        validate do |source|
            unless @resource.uri2obj(source)
                raise Puppet::Error, "Invalid source %s" % source
            end
        end

        munge do |source|
            # if source.is_a? Symbol
            #     return source
            # end

            # Remove any trailing slashes
            source.sub(/\/$/, '')
        end

        def change_to_s(currentvalue, newvalue)
            # newvalue = "{md5}" + @stats[:checksum]
            if @resource.property(:ensure).retrieve == :absent
                return "creating from source %s with contents %s" % [@source, @stats[:checksum]]
            else
                return "replacing from source %s with contents %s" % [@source, @stats[:checksum]]
            end
        end

        def checksum
            if defined?(@stats)
                @stats[:checksum]
            else
                nil
            end
        end

        # Ask the file server to describe our file.
        def describe(source)
            sourceobj, path = @resource.uri2obj(source)
            server = sourceobj.server

            begin
                desc = server.describe(path, @resource[:links])
            rescue Puppet::Network::XMLRPCClientError => detail
                self.err "Could not describe %s: %s" % [path, detail]
                return nil
            end

            args = {}
            pinparams.zip(
                desc.split("\t")
            ).each { |param, value|
                if value =~ /^[0-9]+$/
                    value = value.to_i
                end
                unless value.nil?
                    args[param] = value
                end
            }

            # we can't manage ownership as root, so don't even try
            unless Puppet::Util::SUIDManager.uid == 0
                args.delete(:owner)
            end

            if args.empty? or (args[:type] == "link" and @resource[:links] == :ignore)
                return nil
            else
                return args
            end
        end

        # Have we successfully described the remote source?
        def described?
            ! @stats.nil? and ! @stats[:type].nil? #and @is != :notdescribed
        end

        # Use the info we get from describe() to check if we're in sync.
        def insync?(currentvalue)
            unless described?
-                info "No specified sources exist"
+                warning "No specified sources exist"
                return true
            end
-
+
            if currentvalue == :nocopy
                return true
            end

            # the only thing this actual state can do is copy files around.  Therefore,
            # only pay attention if the remote is a file.
            unless @stats[:type] == "file"
                return true
            end

            #FIXARB: Inefficient?  Needed to call retrieve on parent's ensure and checksum
            parentensure = @resource.property(:ensure).retrieve
            if parentensure != :absent and ! @resource.replace?
                return true
            end

            # Now, we just check to see if the checksums are the same
            parentchecksum = @resource.property(:checksum).retrieve
            result = (!parentchecksum.nil? and (parentchecksum == @stats[:checksum]))

            # Diff the contents if they ask it.  This is quite annoying -- we need to do this in
            # 'insync?' because they might be in noop mode, but we don't want to do the file
            # retrieval twice, so we cache the value annoyingly.
            if ! result and Puppet[:show_diff] and File.exists?(@resource[:path]) and ! @stats[:_diffed]
                @stats[:_remote_content] = get_remote_content
                string_file_diff(@resource[:path], @stats[:_remote_content])
                @stats[:_diffed] = true
            end
            return result
        end

        def pinparams
            Puppet::Network::Handler.handler(:fileserver).params
        end

        # This basically calls describe() on our file, and then sets all
        # of the local states appropriately.  If the remote file is a normal
        # file then we set it to copy; if it's a directory, then we just mark
        # that the local directory should be created.
        def retrieve(remote = true)
            sum = nil
            @source = nil

            # This is set to false by the File#retrieve function on the second
            # retrieve, so that we do not do two describes.
            if remote
                # Find the first source that exists.  @shouldorig contains
                # the sources as specified by the user.
                @should.each { |source|
                    if @stats = self.describe(source)
                        @source = source
                        break
                    end
                }
            end

            if @stats.nil? or @stats[:type].nil?
                return nil # :notdescribed
            end

            case @stats[:type]
            when "directory", "file":
                unless @resource.deleting?
                    @resource[:ensure] = @stats[:type]
                end
            else
                self.info @stats.inspect
                self.err "Cannot use files of type %s as sources" % @stats[:type]
                return :nocopy
            end

            # Take each of the stats and set them as states on the local file
            # if a value has not already been provided.
            @stats.each { |stat, value|
                next if stat == :checksum
                next if stat == :type

                # was the stat already specified, or should the value
                # be inherited from the source?
                unless @resource.argument?(stat)
                    @resource[stat] = value
                end
            }

            return @stats[:checksum]
        end

        def should
            @should
        end

        # Make sure we're also checking the checksum
        def should=(value)
            super

            checks = (pinparams + [:ensure])
            checks.delete(:checksum)

            @resource[:check] = checks
            unless @resource.property(:checksum)
                @resource[:checksum] = :md5
            end
        end

        def sync
            contents = @stats[:_remote_content] || get_remote_content()

            exists = File.exists?(@resource[:path])

            @resource.write(:source) { |f| f.print contents }

            if exists
                return :file_changed
            else
                return :file_created
            end
        end

        private

        def get_remote_content
            unless @stats[:type] == "file"
                #if @stats[:type] == "directory"
                    #[@resource.name, @should.inspect]
                #end
                raise Puppet::DevError, "Got told to copy non-file %s" % @resource[:path]
            end

            sourceobj, path = @resource.uri2obj(@source)

            begin
                contents = sourceobj.server.retrieve(path, @resource[:links])
            rescue Puppet::Network::XMLRPCClientError => detail
                self.err "Could not retrieve %s: %s" % [path, detail]
                return nil
            end

            # FIXME It's stupid that this isn't taken care of in the
            # protocol.
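            # Editor's aside: the retrieve method above walks @should in order and
            # keeps the first source that describe() reports as existing.  A
            # stripped-down illustration of that first-match rule (the paths and
            # the exists? stand-in are invented for the example):
            #
            #     candidates = ["/nfs/files/file.myhost", "/nfs/files/file"]
            #     exists     = lambda { |path| File.exists?(path) }
            #
            #     chosen = candidates.find { |path| exists.call(path) }
            #     # => the first candidate that exists, or nil if none do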
            unless sourceobj.server.local
                contents = CGI.unescape(contents)
            end

            if contents == ""
                self.notice "Could not retrieve contents for %s" % @source
            end

            return contents
        end
    end
end
diff --git a/lib/puppet/type/sshkey.rb b/lib/puppet/type/sshkey.rb
index bf4b0aac8..c2bdd39e3 100755
--- a/lib/puppet/type/sshkey.rb
+++ b/lib/puppet/type/sshkey.rb
@@ -1,74 +1,74 @@
module Puppet
    newtype(:sshkey) do
        @doc = "Installs and manages ssh host keys.  At this point, this type
            only knows how to install keys into /etc/ssh/ssh_known_hosts, and
            it cannot manage user authorized keys yet."

        ensurable

        newproperty(:type) do
            desc "The encryption type used.  Probably ssh-dss or ssh-rsa."

            newvalue("ssh-dss")
            newvalue("ssh-rsa")
            aliasvalue(:dsa, "ssh-dss")
            aliasvalue(:rsa, "ssh-rsa")
        end

        newproperty(:key) do
            desc "The key itself; generally a long string of base64-encoded characters."
        end

        # FIXME This should automagically check for aliases to the hosts, just
        # to see if we can automatically glean any aliases.
        newproperty(:alias) do
            desc "Any alias the host might have.  Multiple values must be
                specified as an array.  Note that this parameter has the same name
                as one of the metaparams; using this parameter to set aliases will
                make those aliases available in your Puppet scripts."

            attr_accessor :meta

            def insync?(is)
                is == @should
            end

            # We actually want to return the whole array here, not just the first
            # value.
            def should
                if defined? @should
                    return @should
                else
                    return nil
                end
            end

            validate do |value|
                if value =~ /\s/
                    raise Puppet::Error, "Aliases cannot include whitespace"
                end
                if value =~ /,/
                    raise Puppet::Error, "Aliases cannot include commas"
                end
            end
        end

        newparam(:name) do
-            desc "The host name."
+            desc "The host name that the key is associated with."

            isnamevar
        end

        newproperty(:target) do
-            desc "The file in which to store the mount table.  Only used by
-                those providers that write to disk (i.e., not NetInfo)."
+            desc "The file in which to store the ssh key.  Only used by
+                the ``parsed`` provider."

            defaultto { if @resource.class.defaultprovider.ancestors.include?(Puppet::Provider::ParsedFile)
                    @resource.class.defaultprovider.default_target
                else
                    nil
                end
            }
        end
    end
end
diff --git a/lib/puppet/util/tagging.rb b/lib/puppet/util/tagging.rb
new file mode 100644
index 000000000..25d74c420
--- /dev/null
+++ b/lib/puppet/util/tagging.rb
@@ -0,0 +1,34 @@
+# Created on 2008-01-19
+# Copyright Luke Kanies
+
+# A common module to handle tagging.
+module Puppet::Util::Tagging
+    # Add a tag to our current list.  These tags will be added to all
+    # of the objects contained in this scope.
+    def tag(*ary)
+        @tags ||= []
+
+        qualified = []
+
+        ary.collect { |tag| tag.to_s.downcase }.each do |tag|
+            fail(Puppet::ParseError, "Invalid tag %s" % tag.inspect) unless valid_tag?(tag)
+            qualified << tag if tag.include?("::")
+            @tags << tag unless @tags.include?(tag)
+        end
+
+        qualified.collect { |name| name.split("::") }.flatten.each { |tag| @tags << tag unless @tags.include?(tag) }
+    end
+
+    # Return a copy of the tag list, so someone can't ask for our tags
+    # and then modify them.
+    def tags
+        @tags ||= []
+        @tags.dup
+    end
+
+    private
+
+    def valid_tag?(tag)
+        tag =~ /^\w[-\w:]*$/
+    end
+end
diff --git a/spec/unit/parser/resource.rb b/spec/unit/parser/resource.rb
index 3d048f7e6..319d8f7d8 100755
--- a/spec/unit/parser/resource.rb
+++ b/spec/unit/parser/resource.rb
@@ -1,89 +1,149 @@
#!/usr/bin/env ruby

require File.dirname(__FILE__) + '/../../spec_helper'

# LAK: FIXME This is just new tests for resources; I have
# not moved all tests over yet.
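# Editor's usage sketch for the new Puppet::Util::Tagging module added in this
# commit (illustrative only; the 'tagger' object is invented here and is not
# used by the specs below):
#
#     require 'puppet/util/tagging'
#
#     tagger = Object.new
#     tagger.extend(Puppet::Util::Tagging)
#
#     tagger.tag("web", "ssl::cert")
#     tagger.tags           # => ["web", "ssl::cert", "ssl", "cert"]
#     tagger.tag("bad tag") # raises Puppet::ParseError (fails valid_tag?)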
+
describe Puppet::Parser::Resource, " when evaluating" do
    before do
        @type = Puppet::Parser::Resource

        @parser = Puppet::Parser::Parser.new :Code => ""
        @source = @parser.newclass ""
        @definition = @parser.newdefine "mydefine"
        @class = @parser.newclass "myclass"
        @nodedef = @parser.newnode("mynode")[0]
        @node = Puppet::Node.new("yaynode")
        @compile = Puppet::Parser::Compile.new(@node, @parser)
        @scope = @compile.topscope
    end

    it "should evaluate the associated AST definition" do
        res = @type.new(:type => "mydefine", :title => "whatever", :scope => @scope, :source => @source)
        @definition.expects(:evaluate).with(:scope => @scope, :resource => res)

        res.evaluate
    end

    it "should evaluate the associated AST class" do
        res = @type.new(:type => "class", :title => "myclass", :scope => @scope, :source => @source)
        @class.expects(:evaluate).with(:scope => @scope, :resource => res)
        res.evaluate
    end

    it "should evaluate the associated AST node" do
        res = @type.new(:type => "node", :title => "mynode", :scope => @scope, :source => @source)
        @nodedef.expects(:evaluate).with(:scope => @scope, :resource => res)

        res.evaluate
    end
end

describe Puppet::Parser::Resource, " when finishing" do
    before do
        @parser = Puppet::Parser::Parser.new :Code => ""
        @source = @parser.newclass ""
        @definition = @parser.newdefine "mydefine"
        @class = @parser.newclass "myclass"
        @nodedef = @parser.newnode("mynode")[0]
        @node = Puppet::Node.new("yaynode")
        @compile = Puppet::Parser::Compile.new(@node, @parser)
        @scope = @compile.topscope

        @resource = Puppet::Parser::Resource.new(:type => "mydefine", :title => "whatever", :scope => @scope, :source => @source)
    end

    it "should copy metaparams from its scope" do
        @scope.setvar("noop", "true")

        @resource.class.publicize_methods(:add_metaparams) { @resource.add_metaparams }

        @resource["noop"].should == "true"
    end

    it "should not copy metaparams that it already has" do
        @resource.class.publicize_methods(:set_parameter) { @resource.set_parameter("noop", "false") }

        @scope.setvar("noop", "true")

        @resource.class.publicize_methods(:add_metaparams) { @resource.add_metaparams }

        @resource["noop"].should == "false"
    end

    it "should stack relationship metaparams from its container if it already has them" do
        @resource.class.publicize_methods(:set_parameter) { @resource.set_parameter("require", "resource") }

        @scope.setvar("require", "container")

        @resource.class.publicize_methods(:add_metaparams) { @resource.add_metaparams }

        @resource["require"].sort.should == %w{container resource}
    end

    it "should flatten the array resulting from stacking relationship metaparams" do
        @resource.class.publicize_methods(:set_parameter) { @resource.set_parameter("require", ["resource1", "resource2"]) }

        @scope.setvar("require", %w{container1 container2})

        @resource.class.publicize_methods(:add_metaparams) { @resource.add_metaparams }

        @resource["require"].sort.should == %w{container1 container2 resource1 resource2}
    end
+
+    it "should add any tags from the scope resource" do
+        scope_resource = stub 'scope_resource', :tags => %w{one two}
+        @scope.stubs(:resource).returns(scope_resource)
+
+        @resource.class.publicize_methods(:add_scope_tags) { @resource.add_scope_tags }
+
+        @resource.tags.should be_include("one")
+        @resource.tags.should be_include("two")
+    end
+end
+
+describe Puppet::Parser::Resource, "when being tagged" do
+    before do
+        @scope_resource = stub 'scope_resource', :tags => %w{srone srtwo}
+        @scope = stub 'scope', :resource => @scope_resource
+        @resource = Puppet::Parser::Resource.new(:type => "file", :title => "yay", :scope => @scope, :source => mock('source'))
+    end
+
+    it "should get tagged with the resource type" do
+        @resource.tags.should be_include("file")
+    end
+
+    it "should get tagged with the title" do
+        @resource.tags.should be_include("yay")
+    end
+
+    it "should get tagged with each name in the title if the title is a qualified class name" do
+        resource = Puppet::Parser::Resource.new(:type => "file", :title => "one::two", :scope => @scope, :source => mock('source'))
+        resource.tags.should be_include("one")
+        resource.tags.should be_include("two")
+    end
+
+    it "should get tagged with each name in the type if the type is a qualified class name" do
+        resource = Puppet::Parser::Resource.new(:type => "one::two", :title => "whatever", :scope => @scope, :source => mock('source'))
+        resource.tags.should be_include("one")
+        resource.tags.should be_include("two")
+    end
+
+    it "should not get tagged with non-alphanumeric titles" do
+        resource = Puppet::Parser::Resource.new(:type => "file", :title => "this is a test", :scope => @scope, :source => mock('source'))
+        resource.tags.should_not be_include("this is a test")
+    end
+
+    it "should fail on tags containing '*' characters" do
+        lambda { @resource.tag("bad*tag") }.should raise_error(Puppet::ParseError)
+    end
+
+    it "should fail on tags starting with '-' characters" do
+        lambda { @resource.tag("-badtag") }.should raise_error(Puppet::ParseError)
+    end
+
+    it "should fail on tags containing ' ' characters" do
+        lambda { @resource.tag("bad tag") }.should raise_error(Puppet::ParseError)
+    end
+
+    it "should allow alpha tags" do
+        lambda { @resource.tag("good_tag") }.should_not raise_error(Puppet::ParseError)
+    end
+end
end
diff --git a/spec/unit/util/tagging.rb b/spec/unit/util/tagging.rb
new file mode 100755
index 000000000..51b69a63c
--- /dev/null
+++ b/spec/unit/util/tagging.rb
@@ -0,0 +1,79 @@
+#!/usr/bin/env ruby
+#
+# Created by Luke Kanies on 2008-01-19.
+# Copyright (c) 2007. All rights reserved.
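+# Editor's note: the validity rules exercised by the examples below come from
+# the valid_tag? regex in lib/puppet/util/tagging.rb.  A quick illustration of
+# what that pattern accepts and rejects (sample strings invented):
+#
+#     TAG_PATTERN = /^\w[-\w:]*$/
+#
+#     "good_tag" =~ TAG_PATTERN   # => 0   (letters, digits and underscores)
+#     "one::two" =~ TAG_PATTERN   # => 0   (qualified class names)
+#     "web-01"   =~ TAG_PATTERN   # => 0   (embedded hyphens are fine)
+#     "bad*tag"  =~ TAG_PATTERN   # => nil (rejected: '*')
+#     "-badtag"  =~ TAG_PATTERN   # => nil (rejected: leading '-')
+#     "bad tag"  =~ TAG_PATTERN   # => nil (rejected: whitespace)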
+
+require File.dirname(__FILE__) + '/../../spec_helper'
+
+require 'puppet/util/tagging'
+
+describe Puppet::Util::Tagging, "when adding tags" do
+    before do
+        @tagger = Object.new
+        @tagger.extend(Puppet::Util::Tagging)
+    end
+
+    it "should have a method for adding tags" do
+        @tagger.should be_respond_to(:tag)
+    end
+
+    it "should have a method for returning all tags" do
+        @tagger.should be_respond_to(:tags)
+    end
+
+    it "should add tags to the returned tag list" do
+        @tagger.tag("one")
+        @tagger.tags.should be_include("one")
+    end
+
+    it "should not add duplicate tags to the returned tag list" do
+        @tagger.tag("one")
+        @tagger.tag("one")
+        @tagger.tags.should == ["one"]
+    end
+
+    it "should return a duplicate of the tag list, rather than the original" do
+        @tagger.tag("one")
+        tags = @tagger.tags
+        tags << "two"
+        @tagger.tags.should_not be_include("two")
+    end
+
+    it "should add all provided tags to the tag list" do
+        @tagger.tag("one", "two")
+        @tagger.tags.should be_include("one")
+        @tagger.tags.should be_include("two")
+    end
+
+    it "should fail on tags containing '*' characters" do
+        lambda { @tagger.tag("bad*tag") }.should raise_error(Puppet::ParseError)
+    end
+
+    it "should fail on tags starting with '-' characters" do
+        lambda { @tagger.tag("-badtag") }.should raise_error(Puppet::ParseError)
+    end
+
+    it "should fail on tags containing ' ' characters" do
+        lambda { @tagger.tag("bad tag") }.should raise_error(Puppet::ParseError)
+    end
+
+    it "should allow alpha tags" do
+        lambda { @tagger.tag("good_tag") }.should_not raise_error(Puppet::ParseError)
+    end
+
+    it "should provide a method for testing tag validity" do
+        @tagger.metaclass.publicize_methods(:valid_tag?) { @tagger.should be_respond_to(:valid_tag?) }
+    end
+
+    it "should add qualified classes as tags" do
+        @tagger.tag("one::two")
+        @tagger.tags.should be_include("one::two")
+    end
+
+    it "should add each part of qualified classes as tags" do
+        @tagger.tag("one::two::three")
+        @tagger.tags.should be_include("one")
+        @tagger.tags.should be_include("two")
+        @tagger.tags.should be_include("three")
+    end
+end
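+
+# Editor's sketch: the specs above (and in spec/unit/parser/resource.rb) call a
+# `publicize_methods` helper on classes and metaclasses to exercise private
+# methods.  The module below is only a guess at how such a helper could work --
+# temporarily making the named private methods public for the duration of a
+# block -- and is not the helper Puppet actually ships; the module name is
+# invented and nothing here uses it.
+module TemporaryPublicity
+    def publicize_methods(*names)
+        hidden = names.select { |name| private_method_defined?(name) }
+
+        public(*hidden) unless hidden.empty?
+        yield
+    ensure
+        # Restore the original visibility of anything we exposed.
+        private(*hidden) unless hidden.nil? or hidden.empty?
+    end
+end
+
+# Usage sketch:
+#     SomeClass.extend(TemporaryPublicity)
+#     SomeClass.publicize_methods(:some_private_method) { SomeClass.new.some_private_method }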