diff --git a/.gitignore b/.gitignore index a2082370b..e23f50fa2 100644 --- a/.gitignore +++ b/.gitignore @@ -1,3 +1,4 @@ .rspec results +tags .*.sw[op] diff --git a/README_DEVELOPER.md b/README_DEVELOPER.md index aa6fb2bf4..75e123509 100644 --- a/README_DEVELOPER.md +++ b/README_DEVELOPER.md @@ -1,91 +1,126 @@ # Developer README # This file is intended to provide a place for developers and contributors to document what other developers need to know about changes made to Puppet. # UTF-8 Handling # As Ruby 1.9 becomes more commonly used with Puppet, developers should be aware of major changes to the way Strings and Regexp objects are handled. Specifically, every instance of these two classes will have an encoding attribute determined in a number of ways. * If the source file has an encoding specified in the magic comment at the top, the instance will take on that encoding. * Otherwise, the encoding will be determined by the LC\_LANG or LANG environment variables. * Otherwise, the encoding will default to ASCII-8BIT. ## References ## Excellent information about the differences between encodings in Ruby 1.8 and Ruby 1.9 is published in this blog series: [Understanding M17n](http://links.puppetlabs.com/understanding_m17n) ## Encodings of Regexp and String instances ## In general, please be aware that Ruby 1.9 regular expressions need to be compatible with the encoding of the string they are matched against. If they are not compatible you can expect to receive an error such as: Encoding::CompatibilityError: incompatible encoding regexp match (ASCII-8BIT regexp with UTF-8 string) In addition, some escape sequences that were valid in Ruby 1.8 are no longer valid in 1.9 if the regular expression is not marked as an ASCII-8BIT object. You may expect errors like this in this situation: SyntaxError: (irb):7: invalid multibyte escape: /\xFF/ This error is particularly common when serializing a string to other representations like JSON or YAML. To resolve the problem you can explicitly mark the regular expression as ASCII-8BIT using the /n flag: "a" =~ /\342\230\203/n Finally, any time you're thinking of a string as an array of bytes rather than an array of characters, which is common when escaping a string, you should work with everything in ASCII-8BIT. Changing the encoding will not change the data itself, but it will allow the Regexp and the String to deal with bytes rather than characters. Puppet provides a monkey patch to String which returns the string in an encoding suitable for byte manipulations: # Example of how to escape non-ASCII printable characters for YAML. >> snowman = "☃" >> snowman.to_ascii8bit.gsub(/([\x80-\xFF])/n) { |x| "\\x#{x.unpack("C")[0].to_s(16)}" } => "\\xe2\\x98\\x83" If the Regexp is not marked as ASCII-8BIT using /n, then you can expect the SyntaxError (invalid multibyte escape) mentioned above. # Windows # If you'd like to run Puppet from source on Windows platforms, the -include `ext/envpuppet.bat` will help. All file paths in the Puppet -code base should use a path separator of / regardless of Windows or -Unix filesystem. +include `ext/envpuppet.bat` will help. To quickly run Puppet from source, assuming you already have Ruby installed from [rubyinstaller.org](http://rubyinstaller.org):
gem install sys-admin win32-process win32-dir win32-taskscheduler --no-rdoc --no-ri gem install win32-service --platform=mswin32 --no-rdoc --no-ri --version 0.7.1 net use Z: "\\vmware-host\Shared Folders" /persistent:yes Z: cd set PATH=%PATH%;Z:\\ext envpuppet puppet --version 2.7.9 Some spec tests are known to fail on Windows, e.g. no mount provider on Windows, so use the following rspec exclude filter: cd envpuppet rspec --tag ~fails_on_windows spec This will give you a shared filesystem with your Mac and allow you to run Puppet directly from source without using install.rb or copying files around. +## Common Issues ## + + * Don't assume file paths start with '/', as that is not a valid path on + Windows. Use Puppet::Util.absolute_path? to validate that a path is fully + qualified. + + * Use File.expand_path('/tmp') in tests to generate a fully qualified path + that is valid on POSIX and Windows. In the latter case, the current working + directory will be used to expand the path. + + * Always use binary mode when performing file I/O, unless you explicitly want + Ruby to translate between unix and dos line endings. For example, opening an + executable file in text mode will almost certainly corrupt the resulting + stream, as will occur when using: + + IO.open(path, 'r') { |f| ... } + IO.read(path) + + If in doubt, specify binary mode explicitly: + + IO.open(path, 'rb') + + * Don't assume file paths are separated by ':'. Use File::PATH_SEPARATOR + instead, which is ':' on POSIX and ';' on Windows. + + * On Windows, File::SEPARATOR is '/', and File::ALT_SEPARATOR is '\'. On + POSIX systems, File::ALT_SEPARATOR is nil. In general, use '/' as the + separator as most Windows APIs, e.g. CreateFile, accept both types of + separators. + + * Don't use waitpid/waitpid2 if you need the child process' exit code, + as the child process may exit before it has a chance to open the + child's HANDLE and retrieve its exit code. Use Puppet::Util.execute. + + * Don't assume 'C' drive. Use environment variables to look these up: + + "#{ENV['windir']}/system32/netsh.exe" EOF diff --git a/acceptance/tests/modules/install/with_modulepath.rb b/acceptance/tests/modules/install/with_modulepath.rb new file mode 100644 index 000000000..1637ea5a8 --- /dev/null +++ b/acceptance/tests/modules/install/with_modulepath.rb @@ -0,0 +1,37 @@ +begin test_name "puppet module install (with modulepath)" + +step 'Setup' +require 'resolv'; ip = Resolv.getaddress('forge-dev.puppetlabs.lan') +apply_manifest_on master, "host { 'forge.puppetlabs.com': ip => '#{ip}' }" +apply_manifest_on master, "file { ['/etc/puppet/modules2']: ensure => directory, recurse => true, purge => true, force => true }" + +step "Install a module with relative modulepath" +on master, "cd /etc/puppet/modules2 && puppet module install pmtacceptance-nginx --modulepath=." do + assert_output <<-OUTPUT + Preparing to install into /etc/puppet/modules2 ... + Downloading from http://forge.puppetlabs.com ... + Installing -- do not interrupt ... + /etc/puppet/modules2 + └── pmtacceptance-nginx (\e[0;36mv0.0.1\e[0m) + OUTPUT +end +on master, '[ -d /etc/puppet/modules2/nginx ]' +apply_manifest_on master, "file { ['/etc/puppet/modules2']: ensure => directory, recurse => true, purge => true, force => true }" + +step "Install a module with absolute modulepath" +on master, puppet('module install pmtacceptance-nginx --modulepath=/etc/puppet/modules2') do + assert_output <<-OUTPUT + Preparing to install into /etc/puppet/modules2 ... 
+ Downloading from http://forge.puppetlabs.com ... + Installing -- do not interrupt ... + /etc/puppet/modules2 + └── pmtacceptance-nginx (\e[0;36mv0.0.1\e[0m) + OUTPUT +end +on master, '[ -d /etc/puppet/modules2/nginx ]' +apply_manifest_on master, "file { ['/etc/puppet/modules2']: ensure => directory, recurse => true, purge => true, force => true }" + +ensure step "Teardown" +apply_manifest_on master, "host { 'forge.puppetlabs.com': ensure => absent }" +apply_manifest_on master, "file { ['/etc/puppet/modules2']: ensure => directory, recurse => true, purge => true, force => true }" +end diff --git a/acceptance/tests/modules/list/with_modulepath.rb b/acceptance/tests/modules/list/with_modulepath.rb new file mode 100644 index 000000000..02726db19 --- /dev/null +++ b/acceptance/tests/modules/list/with_modulepath.rb @@ -0,0 +1,76 @@ +begin test_name "puppet module list (with modulepath)" + +step "Setup" +apply_manifest_on master, <<-PP +file { + [ + '/etc/puppet/modules2', + '/etc/puppet/modules2/crakorn', + '/etc/puppet/modules2/appleseed', + '/etc/puppet/modules2/thelock', + ]: ensure => directory, + recurse => true, + purge => true, + force => true; + '/etc/puppet/modules2/crakorn/metadata.json': + content => '{ + "name": "jimmy/crakorn", + "version": "0.4.0", + "source": "", + "author": "jimmy", + "license": "MIT", + "dependencies": [] + }'; + '/etc/puppet/modules2/appleseed/metadata.json': + content => '{ + "name": "jimmy/appleseed", + "version": "1.1.0", + "source": "", + "author": "jimmy", + "license": "MIT", + "dependencies": [ + { "name": "jimmy/crakorn", "version_requirement": "0.4.0" } + ] + }'; + '/etc/puppet/modules2/thelock/metadata.json': + content => '{ + "name": "jimmy/thelock", + "version": "1.0.0", + "source": "", + "author": "jimmy", + "license": "MIT", + "dependencies": [ + { "name": "jimmy/appleseed", "version_requirement": "1.x" } + ] + }'; +} +PP +on master, '[ -d /etc/puppet/modules2/crakorn ]' +on master, '[ -d /etc/puppet/modules2/appleseed ]' +on master, '[ -d /etc/puppet/modules2/thelock ]' + +step "List the installed modules with relative modulepath" +on master, 'cd /etc/puppet/modules2 && puppet module list --modulepath=.' 
do + assert_equal '', stderr + assert_equal <<-STDOUT, stdout +/etc/puppet/modules2 +├── jimmy-appleseed (\e[0;36mv1.1.0\e[0m) +├── jimmy-crakorn (\e[0;36mv0.4.0\e[0m) +└── jimmy-thelock (\e[0;36mv1.0.0\e[0m) +STDOUT +end + +step "List the installed modules with absolute modulepath" +on master, puppet('module list --modulepath=/etc/puppet/modules2') do + assert_equal '', stderr + assert_equal <<-STDOUT, stdout +/etc/puppet/modules2 +├── jimmy-appleseed (\e[0;36mv1.1.0\e[0m) +├── jimmy-crakorn (\e[0;36mv0.4.0\e[0m) +└── jimmy-thelock (\e[0;36mv1.0.0\e[0m) +STDOUT +end + +ensure step "Teardown" +apply_manifest_on master, "file { ['/etc/puppet/modules2']: ensure => directory, recurse => true, purge => true, force => true }" +end diff --git a/acceptance/tests/modules/uninstall/with_modulepath.rb b/acceptance/tests/modules/uninstall/with_modulepath.rb new file mode 100644 index 000000000..e26ba1c62 --- /dev/null +++ b/acceptance/tests/modules/uninstall/with_modulepath.rb @@ -0,0 +1,54 @@ +begin test_name "puppet module uninstall (with modulepath)" + +step "Setup" +apply_manifest_on master, <<-PP +file { + [ + '/etc/puppet/modules2', + '/etc/puppet/modules2/crakorn', + '/etc/puppet/modules2/absolute', + ]: ensure => directory; + '/etc/puppet/modules2/crakorn/metadata.json': + content => '{ + "name": "jimmy/crakorn", + "version": "0.4.0", + "source": "", + "author": "jimmy", + "license": "MIT", + "dependencies": [] + }'; + '/etc/puppet/modules2/absolute/metadata.json': + content => '{ + "name": "jimmy/absolute", + "version": "0.4.0", + "source": "", + "author": "jimmy", + "license": "MIT", + "dependencies": [] + }'; +} +PP +on master, '[ -d /etc/puppet/modules2/crakorn ]' +on master, '[ -d /etc/puppet/modules2/absolute ]' + +step "Try to uninstall the module jimmy-crakorn using relative modulepath" +on master, 'cd /etc/puppet/modules2 && puppet module uninstall jimmy-crakorn --modulepath=.' do + assert_output <<-OUTPUT + Preparing to uninstall 'jimmy-crakorn' ... + Removed 'jimmy-crakorn' (\e[0;36mv0.4.0\e[0m) from /etc/puppet/modules2 + OUTPUT +end +on master, '[ ! -d /etc/puppet/modules2/crakorn ]' + +step "Try to uninstall the module jimmy-absolute using an absolute modulepath" +on master, 'cd /etc/puppet/modules2 && puppet module uninstall jimmy-absolute --modulepath=/etc/puppet/modules2' do + assert_output <<-OUTPUT + Preparing to uninstall 'jimmy-absolute' ... + Removed 'jimmy-absolute' (\e[0;36mv0.4.0\e[0m) from /etc/puppet/modules2 + OUTPUT +end +on master, '[ ! 
-d /etc/puppet/modules2/absolute ]' + +ensure step "Teardown" +apply_manifest_on master, "file { ['/etc/puppet/modules2']: ensure => directory, recurse => true, purge => true, force => true }" +end diff --git a/acceptance/tests/resource/scheduled_task/should_create.rb b/acceptance/tests/resource/scheduled_task/should_create.rb new file mode 100644 index 000000000..507273964 --- /dev/null +++ b/acceptance/tests/resource/scheduled_task/should_create.rb @@ -0,0 +1,33 @@ +test_name "should create a scheduled task" + +name = "pl#{rand(999999).to_i}" +confine :to, :platform => 'windows' + +agents.each do |agent| + # query only supports /tn parameter on Vista and later + query_cmd = "schtasks.exe /query /v /fo list /tn #{name}" + on agents, facter('kernelmajversion') do + query_cmd = "schtasks.exe /query /v /fo list | grep -q #{name}" if stdout.chomp.to_f < 6.0 + end + + step "create the task" + args = ['ensure=present', + 'command=c:\\\\windows\\\\system32\\\\notepad.exe', + 'arguments="foo bar baz"', + 'working_dir=c:\\\\windows'] + on agent, puppet_resource('scheduled_task', name, args) + + step "verify the task exists" + on agent, query_cmd + + step "verify task properties" + on agent, puppet_resource('scheduled_task', name) do + assert_match(/command\s*=>\s*'c:\\windows\\system32\\notepad.exe'/, stdout) + assert_match(/arguments\s*=>\s*'foo bar baz'/, stdout) + assert_match(/enabled\s*=>\s*'true'/, stdout) + assert_match(/working_dir\s*=>\s*'c:\\windows'/, stdout) + end + + step "delete the task" + on agent, "schtasks.exe /delete /tn #{name} /f" +end diff --git a/acceptance/tests/resource/scheduled_task/should_destroy.rb b/acceptance/tests/resource/scheduled_task/should_destroy.rb new file mode 100644 index 000000000..e61caa45b --- /dev/null +++ b/acceptance/tests/resource/scheduled_task/should_destroy.rb @@ -0,0 +1,27 @@ +test_name "should delete a scheduled task" + +name = "pl#{rand(999999).to_i}" +confine :to, :platform => 'windows' + +agents.each do |agent| + # Have to use /v1 parameter for Vista and later, older versions + # don't accept the parameter + version = '/v1' + # query only supports /tn parameter on Vista and later + query_cmd = "schtasks.exe /query /v /fo list /tn #{name}" + on agents, facter('kernelmajversion') do + if stdout.chomp.to_f < 6.0 + version = '' + query_cmd = "schtasks.exe /query /v /fo list | grep -vq #{name}" + end + end + + step "create the task" + on agent, "schtasks.exe /create #{version} /tn #{name} /tr c:\\\\windows\\\\system32\\\\notepad.exe /sc daily /ru system" + + step "delete the task" + on agent, puppet_resource('scheduled_task', name, 'ensure=absent') + + step "verify the task was deleted" + on agent, query_cmd +end diff --git a/acceptance/tests/resource/scheduled_task/should_modify.rb b/acceptance/tests/resource/scheduled_task/should_modify.rb new file mode 100644 index 000000000..35d9d3ef7 --- /dev/null +++ b/acceptance/tests/resource/scheduled_task/should_modify.rb @@ -0,0 +1,28 @@ +test_name "should modify a scheduled task" + +name = "pl#{rand(999999).to_i}" +confine :to, :platform => 'windows' + +agents.each do |agent| + # Have to use /v1 parameter for Vista and later, older versions + # don't accept the parameter + version = '/v1' + on agents, facter('kernelmajversion') do + version = '' if stdout.chomp.to_f < 6.0 + end + + step "create the task" + on agent, "schtasks.exe /create #{version} /tn #{name} /tr c:\\\\windows\\\\system32\\\\notepad.exe /sc daily /ru system" + + step "modify the task" + on agent, puppet_resource('scheduled_task', 
name, ['ensure=present', 'command=c:\\\\windows\\\\system32\\\\notepad2.exe', "arguments=args-#{name}"]) + + step "verify the arguments were updated" + on agent, puppet_resource('scheduled_task', name) do + assert_match(/command\s*=>\s*'c:\\windows\\system32\\notepad2.exe'/, stdout) + assert_match(/arguments\s*=>\s*'args-#{name}'/, stdout) + end + + step "delete the task" + on agent, "schtasks.exe /delete /tn #{name} /f" +end diff --git a/acceptance/tests/resource/scheduled_task/should_query.rb b/acceptance/tests/resource/scheduled_task/should_query.rb new file mode 100644 index 000000000..a28054a6a --- /dev/null +++ b/acceptance/tests/resource/scheduled_task/should_query.rb @@ -0,0 +1,24 @@ +test_name "test that we can query and find a scheduled task that exists." + +name = "pl#{rand(999999).to_i}" +confine :to, :platform => 'windows' + +agents.each do |agent| + # Have to use /v1 parameter for Vista and later, older versions + # don't accept the parameter + version = '/v1' + on agents, facter('kernelmajversion') do + version = '' if stdout.chomp.to_f < 6.0 + end + + step "create the task" + on agent, "schtasks.exe /create #{version} /tn #{name} /tr c:\\\\windows\\\\system32\\\\notepad.exe /sc daily /ru system" + + step "query for the task and verify it was found" + on agent, puppet_resource('scheduled_task', name) do + fail_test "didn't find the scheduled_task #{name}" unless stdout.include? 'present' + end + + step "delete the task" + on agent, "schtasks.exe /delete /tn #{name} /f" +end diff --git a/acceptance/tests/ticket_13489_service_refresh.pp b/acceptance/tests/ticket_13489_service_refresh.pp new file mode 100644 index 000000000..ba3e48600 --- /dev/null +++ b/acceptance/tests/ticket_13489_service_refresh.pp @@ -0,0 +1,20 @@ +test_name "#13489: refresh service" + +confine :to, :platform => 'windows' + +manifest = <<MANIFEST +service { 'BITS': + ensure => 'running', +} + +exec { 'hello': + command => "cmd /c echo hello", + path => $::path, + logoutput => true, +} + +Exec['hello'] ~> Service['BITS'] +MANIFEST + +step "Refresh service" +apply_manifest_on(agents, manifest) diff --git a/conf/osx/createpackage.sh b/conf/osx/createpackage.sh index 0c153f25a..0f63ec8fa 100755 --- a/conf/osx/createpackage.sh +++ b/conf/osx/createpackage.sh @@ -1,185 +1,187 @@ #!/bin/bash # # Script to build an "old style" (not flat) pkg out of the puppet repository. # # Author: Nigel Kersten (nigelk@google.com) # # Last Updated: 2008-07-31 # # Copyright 2008 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and # limitations under the License INSTALLRB="install.rb" BINDIR="/usr/bin" SBINDIR="/usr/sbin" SITELIBDIR="/usr/lib/ruby/site_ruby/1.8" PACKAGEMAKER="/Developer/usr/bin/packagemaker" PROTO_PLIST="PackageInfo.plist" PREFLIGHT="preflight" function find_installer() { # we walk up three directories to make this executable from the root, # root/conf or root/conf/osx if [ -f "./${INSTALLRB}" ]; then installer="$(pwd)/${INSTALLRB}" elif [ -f "../${INSTALLRB}" ]; then installer="$(pwd)/../${INSTALLRB}" elif [ -f "../../${INSTALLRB}" ]; then installer="$(pwd)/../../${INSTALLRB}" else installer="" fi } function find_puppet_root() { puppet_root=$(dirname "${installer}") } function install_puppet() { echo "Installing Puppet to ${pkgroot}" "${installer}" --destdir="${pkgroot}" --bindir="${BINDIR}" --sbindir="${SBINDIR}" --sitelibdir="${SITELIBDIR}" mkdir -p ${pkgroot}/var/lib/puppet chown -R root:admin "${pkgroot}" + chmod -R go-w "${pkgroot}" } function install_docs() { echo "Installing docs to ${pkgroot}" docdir="${pkgroot}/usr/share/doc/puppet" mkdir -p "${docdir}" for docfile in CHANGELOG CHANGELOG.old COPYING LICENSE README README.queueing README.rst; do install -m 0644 "${puppet_root}/${docfile}" "${docdir}" done chown -R root:wheel "${docdir}" chmod 0755 "${docdir}" } function get_puppet_version() { puppet_version=$(RUBYLIB="${pkgroot}/${SITELIBDIR}:${RUBYLIB}" ruby -e "require 'puppet'; puts Puppet.version") } function prepare_package() { # As we can't specify to follow symlinks from the command line, we have # to go through the hassle of creating an Info.plist file for packagemaker # to look at for package creation and substitute the version strings out. # Major/Minor versions can only be integers, so we have "0" and "245" for # puppet version 0.24.5 # Note too that for 10.5 compatibility this Info.plist *must* be set to # follow symlinks. VER1=$(echo ${puppet_version} | awk -F "." '{print $1}') VER2=$(echo ${puppet_version} | awk -F "." '{print $2}') VER3=$(echo ${puppet_version} | awk -F "." '{print $3}') major_version="${VER1}" minor_version="${VER2}${VER3}" cp "${puppet_root}/conf/osx/${PROTO_PLIST}" "${pkgtemp}" sed -i '' "s/{SHORTVERSION}/${puppet_version}/g" "${pkgtemp}/${PROTO_PLIST}" sed -i '' "s/{MAJORVERSION}/${major_version}/g" "${pkgtemp}/${PROTO_PLIST}" sed -i '' "s/{MINORVERSION}/${minor_version}/g" "${pkgtemp}/${PROTO_PLIST}" # We need to create a preflight script to remove traces of previous # puppet installs due to limitations in Apple's pkg format. mkdir "${pkgtemp}/scripts" cp "${puppet_root}/conf/osx/${PREFLIGHT}" "${pkgtemp}/scripts" # substitute in the sitelibdir specified above on the assumption that this # is where any previous puppet install exists that should be cleaned out. sed -i '' "s|{SITELIBDIR}|${SITELIBDIR}|g" "${pkgtemp}/scripts/${PREFLIGHT}" # substitute in the bindir specified on the assumption that this is where # any old executables that have moved from bindir->sbindir should be # cleaned out from. sed -i '' "s|{BINDIR}|${BINDIR}|g" "${pkgtemp}/scripts/${PREFLIGHT}" chmod 0755 "${pkgtemp}/scripts/${PREFLIGHT}" } function create_package() { rm -fr "$(pwd)/puppet-${puppet_version}.pkg" echo "Building package" echo "Note that packagemaker is renowned for spurious errors. Don't panic."
- "${PACKAGEMAKER}" --root "${pkgroot}" \ + "${PACKAGEMAKER}" --verbose --no-recommend --no-relocate \ + --root "${pkgroot}" \ --info "${pkgtemp}/${PROTO_PLIST}" \ --scripts ${pkgtemp}/scripts \ --out "$(pwd)/puppet-${puppet_version}.pkg" if [ $? -ne 0 ]; then echo "There was a problem building the package." cleanup_and_exit 1 exit 1 else echo "The package has been built at:" echo "$(pwd)/puppet-${puppet_version}.pkg" fi } function cleanup_and_exit() { if [ -d "${pkgroot}" ]; then rm -fr "${pkgroot}" fi if [ -d "${pkgtemp}" ]; then rm -fr "${pkgtemp}" fi exit $1 } # Program entry point function main() { if [ $(whoami) != "root" ]; then echo "This script needs to be run as root via su or sudo." cleanup_and_exit 1 fi find_installer if [ ! "${installer}" ]; then echo "Unable to find ${INSTALLRB}" cleanup_and_exit 1 fi find_puppet_root if [ ! "${puppet_root}" ]; then echo "Unable to find puppet repository root." cleanup_and_exit 1 fi pkgroot=$(mktemp -d -t puppetpkg) if [ ! "${pkgroot}" ]; then echo "Unable to create temporary package root." cleanup_and_exit 1 fi pkgtemp=$(mktemp -d -t puppettmp) if [ ! "${pkgtemp}" ]; then echo "Unable to create temporary package root." cleanup_and_exit 1 fi install_puppet install_docs get_puppet_version if [ ! "${puppet_version}" ]; then echo "Unable to retrieve puppet version" cleanup_and_exit 1 fi prepare_package create_package cleanup_and_exit 0 } main "$@" diff --git a/conf/redhat/logrotate b/conf/redhat/logrotate index c3a4d437a..385e6e8da 100644 --- a/conf/redhat/logrotate +++ b/conf/redhat/logrotate @@ -1,10 +1,10 @@ /var/log/puppet/*log { missingok notifempty create 0644 puppet puppet sharedscripts postrotate - [ -e /etc/init.d/puppetmaster ] && /etc/init.d/puppetmaster condrestart >/dev/null 2>&1 || true + pkill -USR2 -u puppet -f /usr/sbin/puppetmasterd || true [ -e /etc/init.d/puppet ] && /etc/init.d/puppet reload > /dev/null 2>&1 || true endscript } diff --git a/conf/redhat/puppet.spec b/conf/redhat/puppet.spec index 89e7f5129..952dbf1c7 100644 --- a/conf/redhat/puppet.spec +++ b/conf/redhat/puppet.spec @@ -1,643 +1,650 @@ # Augeas and SELinux requirements may be disabled at build time by passing # --without augeas and/or --without selinux to rpmbuild or mock %{!?ruby_sitelibdir: %global ruby_sitelibdir %(ruby -rrbconfig -e 'puts Config::CONFIG["sitelibdir"]')} %global confdir conf/redhat Name: puppet Version: 2.7.18 #Release: 0.1rc1.2%{?dist} -Release: 1%{?dist} +Release: 2%{?dist} Summary: A network tool for managing many disparate systems License: ASL 2.0 URL: http://puppetlabs.com Source0: http://puppetlabs.com/downloads/%{name}/%{name}-%{version}.tar.gz #Source0: http://puppetlabs.com/downloads/%{name}/%{name}-%{version}rc1.tar.gz Source1: http://puppetlabs.com/downloads/%{name}/%{name}-%{version}.tar.gz.asc #Source1: http://puppetlabs.com/downloads/%{name}/%{name}-%{version}rc1.tar.gz.asc Group: System Environment/Base BuildRoot: %{_tmppath}/%{name}-%{version}-%{release}-root-%(%{__id_u} -n) BuildRequires: facter >= 1.5, facter < 1:2.0 BuildRequires: ruby >= 1.8.5 BuildArch: noarch Requires: ruby(abi) >= 1.8 Requires: ruby-shadow # Pull in ruby selinux bindings where available %if 0%{?fedora} || 0%{?rhel} >= 6 %{!?_without_selinux:Requires: ruby(selinux), libselinux-utils} %else %if 0%{?rhel} && 0%{?rhel} == 5 %{!?_without_selinux:Requires: libselinux-ruby, libselinux-utils} %endif %endif Requires: facter >= 1.5, facter < 1:2.0 Requires: ruby >= 1.8.5 %{!?_without_augeas:Requires: ruby-augeas} Requires(pre): shadow-utils Requires(post): 
chkconfig Requires(preun): chkconfig Requires(preun): initscripts Requires(postun): initscripts %description Puppet lets you centrally manage every important aspect of your system using a cross-platform specification language that manages all the separate elements normally aggregated in different files, like users, cron jobs, and hosts, along with obviously discrete elements like packages, services, and files. %package server Group: System Environment/Base Summary: Server for the puppet system management tool Requires: puppet = %{version}-%{release} Requires(post): chkconfig Requires(preun): chkconfig Requires(preun): initscripts Requires(postun): initscripts %description server Provides the central puppet server daemon which provides manifests to clients. The server can also function as a certificate authority and file server. %prep %setup -q -n %{name}-%{version} #%setup -q -n %{name}-%{version}rc1 patch -s -p1 < conf/redhat/rundir-perms.patch %build # Fix some rpmlint complaints for f in mac_dscl.pp mac_dscl_revert.pp \ mac_pkgdmg.pp ; do sed -i -e'1d' examples/$f chmod a-x examples/$f done for f in external/nagios.rb network/http_server/mongrel.rb relationship.rb; do sed -i -e '1d' lib/puppet/$f done #chmod +x ext/puppetstoredconfigclean.rb find examples/ -type f -empty | xargs rm find examples/ -type f | xargs chmod a-x # puppet-queue.conf is more of an example, used for stompserver mv conf/puppet-queue.conf examples/etc/puppet/ %install rm -rf %{buildroot} ruby install.rb --destdir=%{buildroot} --quick --no-rdoc install -d -m0755 %{buildroot}%{_sysconfdir}/puppet/manifests install -d -m0755 %{buildroot}%{_datadir}/%{name}/modules install -d -m0755 %{buildroot}%{_localstatedir}/lib/puppet install -d -m0755 %{buildroot}%{_localstatedir}/run/puppet install -d -m0750 %{buildroot}%{_localstatedir}/log/puppet install -Dp -m0644 %{confdir}/client.sysconfig %{buildroot}%{_sysconfdir}/sysconfig/puppet install -Dp -m0755 %{confdir}/client.init %{buildroot}%{_initrddir}/puppet install -Dp -m0644 %{confdir}/server.sysconfig %{buildroot}%{_sysconfdir}/sysconfig/puppetmaster install -Dp -m0755 %{confdir}/server.init %{buildroot}%{_initrddir}/puppetmaster install -Dp -m0644 %{confdir}/fileserver.conf %{buildroot}%{_sysconfdir}/puppet/fileserver.conf install -Dp -m0644 %{confdir}/puppet.conf %{buildroot}%{_sysconfdir}/puppet/puppet.conf install -Dp -m0644 %{confdir}/logrotate %{buildroot}%{_sysconfdir}/logrotate.d/puppet # We need something for these ghosted files, otherwise rpmbuild # will complain loudly. 
They won't be included in the binary packages touch %{buildroot}%{_sysconfdir}/puppet/puppetmasterd.conf touch %{buildroot}%{_sysconfdir}/puppet/puppetca.conf touch %{buildroot}%{_sysconfdir}/puppet/puppetd.conf # Install the ext/ directory to %%{_datadir}/%%{name} install -d %{buildroot}%{_datadir}/%{name} cp -a ext/ %{buildroot}%{_datadir}/%{name} # emacs and vim bits are installed elsewhere rm -rf %{buildroot}%{_datadir}/%{name}/ext/{emacs,vim} # Install emacs mode files emacsdir=%{buildroot}%{_datadir}/emacs/site-lisp install -Dp -m0644 ext/emacs/puppet-mode.el $emacsdir/puppet-mode.el install -Dp -m0644 ext/emacs/puppet-mode-init.el \ $emacsdir/site-start.d/puppet-mode-init.el # Install vim syntax files vimdir=%{buildroot}%{_datadir}/vim/vimfiles install -Dp -m0644 ext/vim/ftdetect/puppet.vim $vimdir/ftdetect/puppet.vim install -Dp -m0644 ext/vim/syntax/puppet.vim $vimdir/syntax/puppet.vim %if 0%{?fedora} >= 15 # Setup tmpfiles.d config mkdir -p %{buildroot}%{_sysconfdir}/tmpfiles.d echo "D /var/run/%{name} 0755 %{name} %{name} -" > \ %{buildroot}%{_sysconfdir}/tmpfiles.d/%{name}.conf %endif +# Create puppet modules directory for puppet module tool +mkdir -p %{buildroot}%{_sysconfdir}/%{name}/modules + %files %defattr(-, root, root, 0755) %doc CHANGELOG LICENSE README.md examples %{_bindir}/pi %{_bindir}/puppet %{_bindir}/ralsh %{_bindir}/filebucket %{_bindir}/puppetdoc %{_sbindir}/puppetca %{_sbindir}/puppetd %{ruby_sitelibdir}/* %{_initrddir}/puppet %dir %{_sysconfdir}/puppet +%dir %{_sysconfdir}/%{name}/modules %if 0%{?fedora} >= 15 %config(noreplace) %{_sysconfdir}/tmpfiles.d/%{name}.conf %endif %config(noreplace) %{_sysconfdir}/sysconfig/puppet %config(noreplace) %{_sysconfdir}/puppet/puppet.conf %config(noreplace) %{_sysconfdir}/puppet/auth.conf %ghost %config(noreplace,missingok) %{_sysconfdir}/puppet/puppetca.conf %ghost %config(noreplace,missingok) %{_sysconfdir}/puppet/puppetd.conf %config(noreplace) %{_sysconfdir}/logrotate.d/puppet # We don't want to require emacs or vim, so we need to own these dirs %{_datadir}/emacs %{_datadir}/vim %{_datadir}/%{name} # These need to be owned by puppet so the server can # write to them %attr(-, puppet, puppet) %{_localstatedir}/run/puppet %attr(-, puppet, puppet) %{_localstatedir}/log/puppet %attr(-, puppet, puppet) %{_localstatedir}/lib/puppet %{_mandir}/man5/puppet.conf.5.gz %{_mandir}/man8/pi.8.gz %{_mandir}/man8/puppet.8.gz %{_mandir}/man8/puppetca.8.gz %{_mandir}/man8/puppetd.8.gz %{_mandir}/man8/ralsh.8.gz %{_mandir}/man8/puppetdoc.8.gz %{_mandir}/man8/puppet-agent.8.gz %{_mandir}/man8/puppet-apply.8.gz %{_mandir}/man8/puppet-catalog.8.gz %{_mandir}/man8/puppet-describe.8.gz %{_mandir}/man8/puppet-ca.8.gz %{_mandir}/man8/puppet-cert.8.gz %{_mandir}/man8/puppet-certificate.8.gz %{_mandir}/man8/puppet-certificate_request.8.gz %{_mandir}/man8/puppet-certificate_revocation_list.8.gz %{_mandir}/man8/puppet-config.8.gz %{_mandir}/man8/puppet-device.8.gz %{_mandir}/man8/puppet-doc.8.gz %{_mandir}/man8/puppet-facts.8.gz %{_mandir}/man8/puppet-file.8.gz %{_mandir}/man8/puppet-filebucket.8.gz %{_mandir}/man8/puppet-help.8.gz %{_mandir}/man8/puppet-inspect.8.gz %{_mandir}/man8/puppet-instrumentation_data.8.gz %{_mandir}/man8/puppet-instrumentation_listener.8.gz %{_mandir}/man8/puppet-instrumentation_probe.8.gz %{_mandir}/man8/puppet-key.8.gz %{_mandir}/man8/puppet-kick.8.gz %{_mandir}/man8/puppet-man.8.gz %{_mandir}/man8/puppet-module.8.gz %{_mandir}/man8/puppet-node.8.gz %{_mandir}/man8/puppet-parser.8.gz %{_mandir}/man8/puppet-plugin.8.gz 
%{_mandir}/man8/puppet-queue.8.gz %{_mandir}/man8/puppet-report.8.gz %{_mandir}/man8/puppet-resource.8.gz %{_mandir}/man8/puppet-resource_type.8.gz %{_mandir}/man8/puppet-secret_agent.8.gz %{_mandir}/man8/puppet-status.8.gz %files server %defattr(-, root, root, 0755) %{_sbindir}/puppetmasterd %{_sbindir}/puppetrun %{_sbindir}/puppetqd %{_initrddir}/puppetmaster %config(noreplace) %{_sysconfdir}/puppet/fileserver.conf %dir %{_sysconfdir}/puppet/manifests %config(noreplace) %{_sysconfdir}/sysconfig/puppetmaster %ghost %config(noreplace,missingok) %{_sysconfdir}/puppet/puppetmasterd.conf %{_mandir}/man8/filebucket.8.gz %{_mandir}/man8/puppetmasterd.8.gz %{_mandir}/man8/puppetrun.8.gz %{_mandir}/man8/puppetqd.8.gz %{_mandir}/man8/puppet-master.8.gz # Fixed uid/gid were assigned in bz 472073 (Fedora), 471918 (RHEL-5), # and 471919 (RHEL-4) %pre getent group puppet &>/dev/null || groupadd -r puppet -g 52 &>/dev/null getent passwd puppet &>/dev/null || \ useradd -r -u 52 -g puppet -d %{_localstatedir}/lib/puppet -s /sbin/nologin \ -c "Puppet" puppet &>/dev/null # ensure that old setups have the right puppet home dir if [ $1 -gt 1 ] ; then usermod -d %{_localstatedir}/lib/puppet puppet &>/dev/null fi exit 0 %post /sbin/chkconfig --add puppet || : if [ "$1" -ge 1 ]; then # The pidfile changed from 0.25.x to 2.6.x, handle upgrades without leaving # the old process running. oldpid="%{_localstatedir}/run/puppet/puppetd.pid" newpid="%{_localstatedir}/run/puppet/agent.pid" if [ -s "$oldpid" -a ! -s "$newpid" ]; then (kill $(< "$oldpid") && rm -f "$oldpid" && \ /sbin/service puppet start) >/dev/null 2>&1 || : fi fi %post server /sbin/chkconfig --add puppetmaster || : if [ "$1" -ge 1 ]; then # The pidfile changed from 0.25.x to 2.6.x, handle upgrades without leaving # the old process running. oldpid="%{_localstatedir}/run/puppet/puppetmasterd.pid" newpid="%{_localstatedir}/run/puppet/master.pid" if [ -s "$oldpid" -a ! 
-s "$newpid" ]; then (kill $(< "$oldpid") && rm -f "$oldpid" && \ /sbin/service puppetmaster start) >/dev/null 2>&1 || : fi fi %preun if [ "$1" = 0 ] ; then /sbin/service puppet stop >/dev/null 2>&1 /sbin/chkconfig --del puppet || : fi %preun server if [ "$1" = 0 ] ; then /sbin/service puppetmaster stop >/dev/null 2>&1 /sbin/chkconfig --del puppetmaster || : fi %postun if [ "$1" -ge 1 ]; then /sbin/service puppet condrestart >/dev/null 2>&1 || : fi %postun server if [ "$1" -ge 1 ]; then /sbin/service puppetmaster condrestart >/dev/null 2>&1 || : fi %clean rm -rf %{buildroot} %changelog +* Wed Jul 11 2012 William Hopper - 2.7.18-2 +- (#15221) Create /etc/puppet/modules for puppet module tool + * Mon Jul 9 2012 Moses Mendoza - 2.7.18-1 - Update for 2.7.18 * Tue Jun 19 2012 Matthaus Litteken - 2.7.17-1 - Update for 2.7.17 * Wed Jun 13 2012 Matthaus Litteken - 2.7.16-1 - Update for 2.7.16 * Fri Jun 08 2012 Moses Mendoza - 2.7.16-0.1rc1.2 - Updated facter 2.0 dep to include epoch 1 * Wed Jun 06 2012 Matthaus Litteken - 2.7.16-0.1rc1 - Update for 2.7.16rc1, added generated manpages * Fri Jun 01 2012 Matthaus Litteken - 2.7.15-0.1rc4 - Update for 2.7.15rc4 * Tue May 29 2012 Moses Mendoza - 2.7.15-0.1rc3 - Update for 2.7.15rc3 * Wed May 16 2012 Moses Mendoza - 2.7.15-0.1rc2 - Update for 2.7.15rc2 * Tue May 15 2012 Moses Mendoza - 2.7.15-0.1rc1 - Update for 2.7.15rc1 * Wed May 02 2012 Moses Mendoza - 2.7.14-1 - Update for 2.7.14 * Tue Apr 10 2012 Matthaus Litteken - 2.7.13-1 - Update for 2.7.13 * Mon Mar 12 2012 Michael Stahnke - 2.7.12-1 - Update for 2.7.12 * Fri Feb 24 2012 Matthaus Litteken - 2.7.11-2 - Update 2.7.11 from proper tag, including #12572 * Wed Feb 22 2012 Michael Stahnke - 2.7.11-1 - Update for 2.7.11 * Wed Jan 25 2012 Michael Stahnke - 2.7.10-1 - Update for 2.7.10 * Fri Dec 9 2011 Matthaus Litteken - 2.7.9-1 - Update for 2.7.9 * Thu Dec 8 2011 Matthaus Litteken - 2.7.8-1 - Update for 2.7.8 * Wed Nov 30 2011 Michael Stahnke - 2.7.8-0.1rc1 - Update for 2.7.8rc1 * Mon Nov 21 2011 Michael Stahnke - 2.7.7-1 - Relaese 2.7.7 * Tue Nov 01 2011 Michael Stahnke - 2.7.7-0.1rc1 - Update for 2.7.7rc1 * Fri Oct 21 2011 Michael Stahnke - 2.7.6-1 - 2.7.6 final * Thu Oct 13 2011 Michael Stahnke - 2.7.6-.1rc3 - New RC * Fri Oct 07 2011 Michael Stahnke - 2.7.6-0.1rc2 - New RC * Mon Oct 03 2011 Michael Stahnke - 2.7.6-0.1rc1 - New RC * Fri Sep 30 2011 Michael Stahnke - 2.7.5-1 - Fixes for CVE-2011-3869, 3870, 3871 * Wed Sep 28 2011 Michael Stahnke - 2.7.4-1 - Fix for CVE-2011-3484 * Wed Jul 06 2011 Michael Stahnke - 2.7.2-0.2.rc1 - Clean up rpmlint errors - Put man pages in correct package * Wed Jul 06 2011 Michael Stahnke - 2.7.2-0.1.rc1 - Update to 2.7.2rc1 * Wed Jun 15 2011 Todd Zullinger - 2.6.9-0.1.rc1 - Update rc versioning to ensure 2.6.9 final is newer to rpm - sync changes with Fedora/EPEL * Tue Jun 14 2011 Michael Stahnke - 2.6.9rc1-1 - Update to 2.6.9rc1 * Thu Apr 14 2011 Todd Zullinger - 2.6.8-1 - Update to 2.6.8 * Thu Mar 24 2011 Todd Zullinger - 2.6.7-1 - Update to 2.6.7 * Wed Mar 16 2011 Todd Zullinger - 2.6.6-1 - Update to 2.6.6 - Ensure %%pre exits cleanly - Fix License tag, puppet is now GPLv2 only - Create and own /usr/share/puppet/modules (#615432) - Properly restart puppet agent/master daemons on upgrades from 0.25.x - Require libselinux-utils when selinux support is enabled - Support tmpfiles.d for Fedora >= 15 (#656677) * Wed Feb 09 2011 Fedora Release Engineering - 0.25.5-2 - Rebuilt for https://fedoraproject.org/wiki/Fedora_15_Mass_Rebuild * Mon May 17 2010 Todd Zullinger - 
0.25.5-1 - Update to 0.25.5 - Adjust selinux conditional for EL-6 - Apply rundir-perms patch from tarball rather than including it separately - Update URL's to reflect the new puppetlabs.com domain * Fri Jan 29 2010 Todd Zullinger - 0.25.4-1 - Update to 0.25.4 * Tue Jan 19 2010 Todd Zullinger - 0.25.3-2 - Apply upstream patch to fix cron resources (upstream #2845) * Mon Jan 11 2010 Todd Zullinger - 0.25.3-1 - Update to 0.25.3 * Tue Jan 05 2010 Todd Zullinger - 0.25.2-1.1 - Replace %%define with %%global for macros * Tue Jan 05 2010 Todd Zullinger - 0.25.2-1 - Update to 0.25.2 - Fixes CVE-2010-0156, tmpfile security issue (#502881) - Install auth.conf, puppetqd manpage, and queuing examples/docs * Wed Nov 25 2009 Jeroen van Meeuwen - 0.25.1-1 - New upstream version * Tue Oct 27 2009 Todd Zullinger - 0.25.1-0.3 - Update to 0.25.1 - Include the pi program and man page (R.I.Pienaar) * Sat Oct 17 2009 Todd Zullinger - 0.25.1-0.2.rc2 - Update to 0.25.1rc2 * Tue Sep 22 2009 Todd Zullinger - 0.25.1-0.1.rc1 - Update to 0.25.1rc1 - Move puppetca to puppet package, it has uses on client systems - Drop redundant %%doc from manpage %%file listings * Fri Sep 04 2009 Todd Zullinger - 0.25.0-1 - Update to 0.25.0 - Fix permissions on /var/log/puppet (#495096) - Install emacs mode and vim syntax files (#491437) - Install ext/ directory in %%{_datadir}/%%{name} (/usr/share/puppet) * Mon May 04 2009 Todd Zullinger - 0.25.0-0.1.beta1 - Update to 0.25.0beta1 - Make Augeas and SELinux requirements build time options * Mon Mar 23 2009 Todd Zullinger - 0.24.8-1 - Update to 0.24.8 - Quiet output from %%pre - Use upstream install script - Increase required facter version to >= 1.5 * Tue Dec 16 2008 Todd Zullinger - 0.24.7-4 - Remove redundant useradd from %%pre * Tue Dec 16 2008 Jeroen van Meeuwen - 0.24.7-3 - New upstream version - Set a static uid and gid (#472073, #471918, #471919) - Add a conditional requirement on libselinux-ruby for Fedora >= 9 - Add a dependency on ruby-augeas * Wed Oct 22 2008 Todd Zullinger - 0.24.6-1 - Update to 0.24.6 - Require ruby-shadow on Fedora and RHEL >= 5 - Simplify Fedora/RHEL version checks for ruby(abi) and BuildArch - Require chkconfig and initstripts for preun, post, and postun scripts - Conditionally restart puppet in %%postun - Ensure %%preun, %%post, and %%postun scripts exit cleanly - Create puppet user/group according to Fedora packaging guidelines - Quiet a few rpmlint complaints - Remove useless %%pbuild macro - Make specfile more like the Fedora/EPEL template * Mon Jul 28 2008 David Lutterkort - 0.24.5-1 - Add /usr/bin/puppetdoc * Thu Jul 24 2008 Brenton Leanhardt - New version - man pages now ship with tarball - examples/code moved to root examples dir in upstream tarball * Tue Mar 25 2008 David Lutterkort - 0.24.4-1 - Add man pages (from separate tarball, upstream will fix to include in main tarball) * Mon Mar 24 2008 David Lutterkort - 0.24.3-1 - New version * Wed Mar 5 2008 David Lutterkort - 0.24.2-1 - New version * Sat Dec 22 2007 David Lutterkort - 0.24.1-1 - New version * Mon Dec 17 2007 David Lutterkort - 0.24.0-2 - Use updated upstream tarball that contains yumhelper.py * Fri Dec 14 2007 David Lutterkort - 0.24.0-1 - Fixed license - Munge examples/ to make rpmlint happier * Wed Aug 22 2007 David Lutterkort - 0.23.2-1 - New version * Thu Jul 26 2007 David Lutterkort - 0.23.1-1 - Remove old config files * Wed Jun 20 2007 David Lutterkort - 0.23.0-1 - Install one puppet.conf instead of old config files, keep old configs around to ease update - Use plain shell 
commands in install instead of macros * Wed May 2 2007 David Lutterkort - 0.22.4-1 - New version * Thu Mar 29 2007 David Lutterkort - 0.22.3-1 - Claim ownership of _sysconfdir/puppet (bz 233908) * Mon Mar 19 2007 David Lutterkort - 0.22.2-1 - Set puppet's homedir to /var/lib/puppet, not /var/puppet - Remove no-lockdir patch, not needed anymore * Mon Feb 12 2007 David Lutterkort - 0.22.1-2 - Fix bogus config parameter in puppetd.conf * Sat Feb 3 2007 David Lutterkort - 0.22.1-1 - New version * Fri Jan 5 2007 David Lutterkort - 0.22.0-1 - New version * Mon Nov 20 2006 David Lutterkort - 0.20.1-2 - Make require ruby(abi) and buildarch: noarch conditional for fedora 5 or later to allow building on older fedora releases * Mon Nov 13 2006 David Lutterkort - 0.20.1-1 - New version * Mon Oct 23 2006 David Lutterkort - 0.20.0-1 - New version * Tue Sep 26 2006 David Lutterkort - 0.19.3-1 - New version * Mon Sep 18 2006 David Lutterkort - 0.19.1-1 - New version * Thu Sep 7 2006 David Lutterkort - 0.19.0-1 - New version * Tue Aug 1 2006 David Lutterkort - 0.18.4-2 - Use /usr/bin/ruby directly instead of /usr/bin/env ruby in executables. Otherwise, initscripts break since pidof can't find the right process * Tue Aug 1 2006 David Lutterkort - 0.18.4-1 - New version * Fri Jul 14 2006 David Lutterkort - 0.18.3-1 - New version * Wed Jul 5 2006 David Lutterkort - 0.18.2-1 - New version * Wed Jun 28 2006 David Lutterkort - 0.18.1-1 - Removed lsb-config.patch and yumrepo.patch since they are upstream now * Mon Jun 19 2006 David Lutterkort - 0.18.0-1 - Patch config for LSB compliance (lsb-config.patch) - Changed config moves /var/puppet to /var/lib/puppet, /etc/puppet/ssl to /var/lib/puppet, /etc/puppet/clases.txt to /var/lib/puppet/classes.txt, /etc/puppet/localconfig.yaml to /var/lib/puppet/localconfig.yaml * Fri May 19 2006 David Lutterkort - 0.17.2-1 - Added /usr/bin/puppetrun to server subpackage - Backported patch for yumrepo type (yumrepo.patch) * Wed May 3 2006 David Lutterkort - 0.16.4-1 - Rebuilt * Fri Apr 21 2006 David Lutterkort - 0.16.0-1 - Fix default file permissions in server subpackage - Run puppetmaster as user puppet - rebuilt for 0.16.0 * Mon Apr 17 2006 David Lutterkort - 0.15.3-2 - Don't create empty log files in post-install scriptlet * Fri Apr 7 2006 David Lutterkort - 0.15.3-1 - Rebuilt for new version * Wed Mar 22 2006 David Lutterkort - 0.15.1-1 - Patch0: Run puppetmaster as root; running as puppet is not ready for primetime * Mon Mar 13 2006 David Lutterkort - 0.15.0-1 - Commented out noarch; requires fix for bz184199 * Mon Mar 6 2006 David Lutterkort - 0.14.0-1 - Added BuildRequires for ruby * Wed Mar 1 2006 David Lutterkort - 0.13.5-1 - Removed use of fedora-usermgmt. It is not required for Fedora Extras and makes it unnecessarily hard to use this rpm outside of Fedora. Just allocate the puppet uid/gid dynamically * Sun Feb 19 2006 David Lutterkort - 0.13.0-4 - Use fedora-usermgmt to create puppet user/group. Use uid/gid 24. Fixed problem with listing fileserver.conf and puppetmaster.conf twice * Wed Feb 8 2006 David Lutterkort - 0.13.0-3 - Fix puppetd.conf * Wed Feb 8 2006 David Lutterkort - 0.13.0-2 - Changes to run puppetmaster as user puppet * Mon Feb 6 2006 David Lutterkort - 0.13.0-1 - Don't mark initscripts as config files * Mon Feb 6 2006 David Lutterkort - 0.12.0-2 - Fix BuildRoot. 
Add dist to release * Tue Jan 17 2006 David Lutterkort - 0.11.0-1 - Rebuild * Thu Jan 12 2006 David Lutterkort - 0.10.2-1 - Updated for 0.10.2 Fixed minor kink in how Source is given * Wed Jan 11 2006 David Lutterkort - 0.10.1-3 - Added basic fileserver.conf * Wed Jan 11 2006 David Lutterkort - 0.10.1-1 - Updated. Moved installation of library files to sitelibdir. Pulled initscripts into separate files. Folded tools rpm into server * Thu Nov 24 2005 Duane Griffin - Added init scripts for the client * Wed Nov 23 2005 Duane Griffin - First packaging diff --git a/lib/puppet/application/agent.rb b/lib/puppet/application/agent.rb index eab02d0f6..0c6485b08 100644 --- a/lib/puppet/application/agent.rb +++ b/lib/puppet/application/agent.rb @@ -1,487 +1,489 @@ require 'puppet/application' class Puppet::Application::Agent < Puppet::Application should_parse_config run_mode :agent attr_accessor :args, :agent, :daemon, :host def preinit # Do an initial trap, so that cancels don't get a stack trace. Signal.trap(:INT) do $stderr.puts "Cancelling startup" exit(0) end { :waitforcert => nil, :detailed_exitcodes => false, :verbose => false, :debug => false, :centrallogs => false, :setdest => false, :enable => false, :disable => false, :client => true, :fqdn => nil, :serve => [], :digest => :MD5, :graph => true, :fingerprint => false, }.each do |opt,val| options[opt] = val end @args = {} require 'puppet/daemon' @daemon = Puppet::Daemon.new @daemon.argv = ARGV.dup end option("--centrallogging") option("--disable") option("--enable") option("--debug","-d") option("--fqdn FQDN","-f") option("--test","-t") option("--verbose","-v") option("--fingerprint") option("--digest DIGEST") option("--serve HANDLER", "-s") do |arg| if Puppet::Network::Handler.handler(arg) options[:serve] << arg.to_sym else raise "Could not find handler for #{arg}" end end option("--no-client") do |arg| options[:client] = false end option("--detailed-exitcodes") do |arg| options[:detailed_exitcodes] = true end option("--logdest DEST", "-l DEST") do |arg| begin Puppet::Util::Log.newdestination(arg) options[:setdest] = true rescue => detail puts detail.backtrace if Puppet[:debug] $stderr.puts detail.to_s end end option("--waitforcert WAITFORCERT", "-w") do |arg| options[:waitforcert] = arg.to_i end option("--port PORT","-p") do |arg| @args[:Port] = arg end def help <<-HELP puppet-agent(8) -- The puppet agent daemon ======== SYNOPSIS -------- Retrieves the client configuration from the puppet master and applies it to the local host. This service may be run as a daemon, run periodically using cron (or something similar), or run interactively for testing purposes. USAGE ----- puppet agent [--certname ] [-D|--daemonize|--no-daemonize] [-d|--debug] [--detailed-exitcodes] [--digest ] [--disable] [--enable] [--fingerprint] [-h|--help] [-l|--logdest syslog||console] [--no-client] [--noop] [-o|--onetime] [--serve ] [-t|--test] [-v|--verbose] [-V|--version] [-w|--waitforcert ] DESCRIPTION ----------- This is the main puppet client. Its job is to retrieve the local machine's configuration from a remote server and apply it. In order to successfully communicate with the remote server, the client must have a certificate signed by a certificate authority that the server trusts; the recommended method for this, at the moment, is to run a certificate authority as part of the puppet server (which is the default). The client will connect and request a signed certificate, and will continue connecting until it receives one. 
Once the client has a signed certificate, it will retrieve its configuration and apply it. USAGE NOTES ----------- 'puppet agent' does its best to find a compromise between interactive use and daemon use. Run with no arguments and no configuration, it will go into the background, attempt to get a signed certificate, and retrieve and apply its configuration every 30 minutes. Some flags are meant specifically for interactive use -- in particular, 'test', 'tags' or 'fingerprint' are useful. 'test' enables verbose logging, causes the daemon to stay in the foreground, exits if the server's configuration is invalid (this happens if, for instance, you've left a syntax error on the server), and exits after running the configuration once (rather than hanging around as a long-running process). 'tags' allows you to specify what portions of a configuration you want to apply. Puppet elements are tagged with all of the class or definition names that contain them, and you can use the 'tags' flag to specify one of these names, causing only configuration elements contained within that class or definition to be applied. This is very useful when you are testing new configurations -- for instance, if you are just starting to manage 'ntpd', you would put all of the new elements into an 'ntpd' class, and call puppet with '--tags ntpd', which would only apply that small portion of the configuration during your testing, rather than applying the whole thing. 'fingerprint' is a one-time flag. In this mode 'puppet agent' will run once and display on the console (and in the log) the current certificate (or certificate request) fingerprint. Providing the '--digest' option allows to use a different digest algorithm to generate the fingerprint. The main use is to verify that before signing a certificate request on the master, the certificate request the master received is the same as the one the client sent (to prevent against man-in-the-middle attacks when signing certificates). OPTIONS ------- Note that any configuration parameter that's valid in the configuration file is also a valid long argument. For example, 'server' is a valid configuration parameter, so you can specify '--server ' as an argument. See the configuration file documentation at http://docs.puppetlabs.com/references/stable/configuration.html for the full list of acceptable parameters. A commented list of all configuration options can also be generated by running puppet agent with '--genconfig'. * --certname: Set the certname (unique ID) of the client. The master reads this unique identifying string, which is usually set to the node's fully-qualified domain name, to determine which configurations the node will receive. Use this option to debug setup problems or implement unusual node identification schemes. * --daemonize: Send the process into the background. This is the default. * --no-daemonize: Do not send the process into the background. * --debug: Enable full debugging. * --detailed-exitcodes: Provide transaction information via exit codes. If this is enabled, an exit code of '2' means there were changes, an exit code of '4' means there were failures during the transaction, and an exit code of '6' means there were both changes and failures. * --digest: Change the certificate fingerprinting digest algorithm. The default is MD5. Valid values depends on the version of OpenSSL installed, but should always at least contain MD5, MD2, SHA1 and SHA256. * --disable: Disable working on the local system. 
This puts a lock file in place, causing 'puppet agent' not to work on the system until the lock file is removed. This is useful if you are testing a configuration and do not want the central configuration to override the local state until everything is tested and committed. 'puppet agent' uses the same lock file while it is running, so no more than one 'puppet agent' process is working at a time. 'puppet agent' exits after executing this. * --enable: Enable working on the local system. This removes any lock file, causing 'puppet agent' to start managing the local system again (although it will continue to use its normal scheduling, so it might not start for another half hour). 'puppet agent' exits after executing this. * --fingerprint: Display the current certificate or certificate signing request fingerprint and then exit. Use the '--digest' option to change the digest algorithm used. * --help: Print this help message * --logdest: Where to send messages. Choose between syslog, the console, and a log file. Defaults to sending messages to syslog, or the console if debugging or verbosity is enabled. * --no-client: Do not create a config client. This will cause the daemon to run without ever checking for its configuration automatically, and only makes sense when puppet agent is being run with listen = true in puppet.conf or was started with the `--listen` option. * --noop: Use 'noop' mode where the daemon runs in a no-op or dry-run mode. This is useful for seeing what changes Puppet will make without actually executing the changes. * --onetime: Run the configuration once. Runs a single (normally daemonized) Puppet run. Useful for interactively running puppet agent when used in conjunction with the --no-daemonize option. * --serve: Start another type of server. By default, 'puppet agent' will start a service handler that allows authenticated and authorized remote nodes to trigger the configuration to be pulled down and applied. You can specify any handler here that does not require configuration, e.g., filebucket, ca, or resource. The handlers are in 'lib/puppet/network/handler', and the names must match exactly, both in the call to 'serve' and in 'namespaceauth.conf'. * --test: Enable the most common options used for testing. These are 'onetime', 'verbose', 'ignorecache', 'no-daemonize', 'no-usecacheonfailure', 'detailed-exit-codes', 'no-splay', and 'show_diff'. * --verbose: Turn on verbose reporting. * --version: Print the puppet version number and exit. * --waitforcert: This option only matters for daemons that do not yet have certificates and it is enabled by default, with a value of 120 (seconds). This causes 'puppet agent' to connect to the server every 2 minutes and ask it to sign a certificate request. This is useful for the initial setup of a puppet client. You can turn off waiting for certificates by specifying a time of 0. EXAMPLE ------- $ puppet agent --server puppet.domain.com DIAGNOSTICS ----------- Puppet agent accepts the following signals: * SIGHUP: Restart the puppet agent daemon. * SIGINT and SIGTERM: Shut down the puppet agent daemon. * SIGUSR1: Immediately retrieve and apply configurations from the puppet master. +* SIGUSR2: + Close file descriptors for log files and reopen them. Used with logrotate. 
AUTHOR ------ Luke Kanies COPYRIGHT --------- Copyright (c) 2011 Puppet Labs, LLC Licensed under the Apache 2.0 License HELP end def run_command return fingerprint if options[:fingerprint] return onetime if Puppet[:onetime] main end def fingerprint unless cert = host.certificate || host.certificate_request $stderr.puts "Fingerprint asked but no certificate nor certificate request have yet been issued" exit(1) return end unless fingerprint = cert.fingerprint(options[:digest]) raise ArgumentError, "Could not get fingerprint for digest '#{options[:digest]}'" end puts fingerprint end def onetime unless options[:client] $stderr.puts "onetime is specified but there is no client" exit(43) return end @daemon.set_signal_traps begin report = @agent.run rescue => detail puts detail.backtrace if Puppet[:trace] Puppet.err detail.to_s end @daemon.stop(:exit => false) if not report exit(1) elsif options[:detailed_exitcodes] then exit(report.exit_status) else exit(0) end end def main Puppet.notice "Starting Puppet client version #{Puppet.version}" @daemon.start end # Enable all of the most common test options. def setup_test Puppet.settings.handlearg("--ignorecache") Puppet.settings.handlearg("--no-usecacheonfailure") Puppet.settings.handlearg("--no-splay") Puppet.settings.handlearg("--show_diff") Puppet.settings.handlearg("--no-daemonize") options[:verbose] = true Puppet[:onetime] = true options[:detailed_exitcodes] = true end def enable_disable_client(agent) if options[:enable] agent.enable elsif options[:disable] agent.disable end exit(0) end def setup_listen unless FileTest.exists?(Puppet[:rest_authconfig]) Puppet.err "Will not start without authorization file #{Puppet[:rest_authconfig]}" exit(14) end handlers = nil if options[:serve].empty? handlers = [:Runner] else handlers = options[:serve] end require 'puppet/network/server' # No REST handlers yet. server = Puppet::Network::Server.new(:xmlrpc_handlers => handlers, :port => Puppet[:puppetport]) @daemon.server = server end def setup_host @host = Puppet::SSL::Host.new waitforcert = options[:waitforcert] || (Puppet[:onetime] ? 0 : 120) cert = @host.wait_for_cert(waitforcert) unless options[:fingerprint] end def setup_agent # We need tomake the client either way, we just don't start it # if --no-client is set. require 'puppet/agent' require 'puppet/configurer' @agent = Puppet::Agent.new(Puppet::Configurer) enable_disable_client(@agent) if options[:enable] or options[:disable] @daemon.agent = agent if options[:client] # It'd be nice to daemonize later, but we have to daemonize before the # waitforcert happens. @daemon.daemonize if Puppet[:daemonize] setup_host @objects = [] # This has to go after the certs are dealt with. if Puppet[:listen] unless Puppet[:onetime] setup_listen else Puppet.notice "Ignoring --listen on onetime run" end end end def setup setup_test if options[:test] setup_logs exit(Puppet.settings.print_configs ? 0 : 1) if Puppet.settings.print_configs? args[:Server] = Puppet[:server] if options[:fqdn] args[:FQDN] = options[:fqdn] Puppet[:certname] = options[:fqdn] end if options[:centrallogs] logdest = args[:Server] logdest += ":" + args[:Port] if args.include?(:Port) Puppet::Util::Log.newdestination(logdest) end Puppet.settings.use :main, :agent, :ssl # Always ignoreimport for agent. It really shouldn't even try to import, # but this is just a temporary band-aid. 
Puppet[:ignoreimport] = true # We need to specify a ca location for all of the SSL-related i # indirected classes to work; in fingerprint mode we just need # access to the local files and we don't need a ca. Puppet::SSL::Host.ca_location = options[:fingerprint] ? :none : :remote Puppet::Transaction::Report.indirection.terminus_class = :rest # we want the last report to be persisted locally Puppet::Transaction::Report.indirection.cache_class = :yaml # Override the default; puppetd needs this, usually. # You can still override this on the command-line with, e.g., :compiler. Puppet[:catalog_terminus] = :rest # Override the default. Puppet[:facts_terminus] = :facter Puppet::Resource::Catalog.indirection.cache_class = :yaml unless options[:fingerprint] setup_agent else setup_host end end end diff --git a/lib/puppet/application/master.rb b/lib/puppet/application/master.rb index f4f4cbdea..97c243c71 100644 --- a/lib/puppet/application/master.rb +++ b/lib/puppet/application/master.rb @@ -1,261 +1,263 @@ require 'puppet/application' class Puppet::Application::Master < Puppet::Application should_parse_config run_mode :master option("--debug", "-d") option("--verbose", "-v") # internal option, only to be used by ext/rack/config.ru option("--rack") option("--compile host", "-c host") do |arg| options[:node] = arg end option("--logdest DEST", "-l DEST") do |arg| begin Puppet::Util::Log.newdestination(arg) options[:setdest] = true rescue => detail puts detail.backtrace if Puppet[:debug] $stderr.puts detail.to_s end end option("--parseonly") do puts "--parseonly has been removed. Please use 'puppet parser validate '" exit 1 end def help <<-HELP puppet-master(8) -- The puppet master daemon ======== SYNOPSIS -------- The central puppet server. Functions as a certificate authority by default. USAGE ----- puppet master [-D|--daemonize|--no-daemonize] [-d|--debug] [-h|--help] [-l|--logdest |console|syslog] [-v|--verbose] [-V|--version] [--compile ] DESCRIPTION ----------- This command starts an instance of puppet master, running as a daemon and using Ruby's built-in Webrick webserver. Puppet master can also be managed by other application servers; when this is the case, this executable is not used. OPTIONS ------- Note that any configuration parameter that's valid in the configuration file is also a valid long argument. For example, 'ssldir' is a valid configuration parameter, so you can specify '--ssldir ' as an argument. See the configuration file documentation at http://docs.puppetlabs.com/references/stable/configuration.html for the full list of acceptable parameters. A commented list of all configuration options can also be generated by running puppet master with '--genconfig'. * --daemonize: Send the process into the background. This is the default. * --no-daemonize: Do not send the process into the background. * --debug: Enable full debugging. * --help: Print this help message. * --logdest: Where to send messages. Choose between syslog, the console, and a log file. Defaults to sending messages to syslog, or the console if debugging or verbosity is enabled. * --verbose: Enable verbosity. * --version: Print the puppet version number and exit. * --compile: Compile a catalogue and output it in JSON from the puppet master. Uses facts contained in the $vardir/yaml/ directory to compile the catalog. EXAMPLE ------- puppet master DIAGNOSTICS ----------- When running as a standalone daemon, puppet master accepts the following signals: * SIGHUP: Restart the puppet master server. 
* SIGINT and SIGTERM: Shut down the puppet master server. +* SIGUSR2: + Close file descriptors for log files and reopen them. Used with logrotate. AUTHOR ------ Luke Kanies COPYRIGHT --------- Copyright (c) 2011 Puppet Labs, LLC Licensed under the Apache 2.0 License HELP end def preinit Signal.trap(:INT) do $stderr.puts "Cancelling startup" exit(0) end # Create this first-off, so we have ARGV require 'puppet/daemon' @daemon = Puppet::Daemon.new @daemon.argv = ARGV.dup end def run_command if options[:node] compile else main end end def compile Puppet::Util::Log.newdestination :console raise ArgumentError, "Cannot render compiled catalogs without pson support" unless Puppet.features.pson? begin unless catalog = Puppet::Resource::Catalog.indirection.find(options[:node]) raise "Could not compile catalog for #{options[:node]}" end jj catalog.to_resource rescue => detail $stderr.puts detail exit(30) end exit(0) end def main require 'etc' xmlrpc_handlers = [:Status, :FileServer, :Master, :Report, :Filebucket] xmlrpc_handlers << :CA if Puppet[:ca] # Make sure we've got a localhost ssl cert Puppet::SSL::Host.localhost # And now configure our server to *only* hit the CA for data, because that's # all it will have write access to. Puppet::SSL::Host.ca_location = :only if Puppet::SSL::CertificateAuthority.ca? if Puppet.features.root? begin Puppet::Util.chuser rescue => detail puts detail.backtrace if Puppet[:trace] $stderr.puts "Could not change user to #{Puppet[:user]}: #{detail}" exit(39) end end unless options[:rack] require 'puppet/network/server' @daemon.server = Puppet::Network::Server.new(:xmlrpc_handlers => xmlrpc_handlers) @daemon.daemonize if Puppet[:daemonize] else require 'puppet/network/http/rack' @app = Puppet::Network::HTTP::Rack.new(:xmlrpc_handlers => xmlrpc_handlers, :protocols => [:rest, :xmlrpc]) end Puppet.notice "Starting Puppet master version #{Puppet.version}" unless options[:rack] @daemon.start else return @app end end def setup_logs # Handle the logging settings. if options[:debug] or options[:verbose] if options[:debug] Puppet::Util::Log.level = :debug else Puppet::Util::Log.level = :info end unless Puppet[:daemonize] or options[:rack] Puppet::Util::Log.newdestination(:console) options[:setdest] = true end end Puppet::Util::Log.newdestination(:syslog) unless options[:setdest] end def setup_terminuses require 'puppet/file_serving/content' require 'puppet/file_serving/metadata' # Cache our nodes in yaml. Currently not configurable. Puppet::Node.indirection.cache_class = :yaml Puppet::FileServing::Content.indirection.terminus_class = :file_server Puppet::FileServing::Metadata.indirection.terminus_class = :file_server Puppet::FileBucket::File.indirection.terminus_class = :file end def setup_ssl # Configure all of the SSL stuff. if Puppet::SSL::CertificateAuthority.ca? Puppet::SSL::Host.ca_location = :local Puppet.settings.use :ca Puppet::SSL::CertificateAuthority.instance else Puppet::SSL::Host.ca_location = :none end end def setup raise Puppet::Error.new("Puppet master is not supported on Microsoft Windows") if Puppet.features.microsoft_windows? setup_logs exit(Puppet.settings.print_configs ? 0 : 1) if Puppet.settings.print_configs? Puppet.settings.use :main, :master, :ssl, :metrics setup_terminuses setup_ssl end end diff --git a/lib/puppet/defaults.rb b/lib/puppet/defaults.rb index 56223bb53..ff877ca2e 100644 --- a/lib/puppet/defaults.rb +++ b/lib/puppet/defaults.rb @@ -1,951 +1,952 @@ # The majority of Puppet's configuration settings are set in this file. 
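# A brief, illustrative sketch (not part of this patch) of the two entry
# styles accepted by the setdefaults calls below; the setting names used
# here are hypothetical:
#
#   setdefaults(:main,
#     # Short form: [default value, description]
#     :example_flag => [false, "An example boolean setting."],
#     # Long form: a hash that may also carry :mode, :owner, :group,
#     # :call_on_define, and a :hook proc that runs when the value is set.
#     :example_dir => {
#       :default => "$vardir/example",
#       :mode    => 0750,
#       :desc    => "An example directory setting.",
#       :hook    => proc { |value| Puppet.debug "example_dir set to #{value}" }
#     }
#   )
#
# Elsewhere in the code base these settings are read as Puppet[:example_flag]
# or Puppet.settings[:example_flag], and assigned with
# Puppet.settings[:example_flag] = true.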
module Puppet setdefaults(:main, :confdir => [Puppet.run_mode.conf_dir, "The main Puppet configuration directory. The default for this setting is calculated based on the user. If the process is running as root or the user that Puppet is supposed to run as, it defaults to a system directory, but if it's running as any other user, it defaults to being in the user's home directory."], :vardir => [Puppet.run_mode.var_dir, "Where Puppet stores dynamic and growing data. The default for this setting is calculated specially, like `confdir`_."], :name => [Puppet.application_name.to_s, "The name of the application, if we are running as one. The default is essentially $0 without the path or `.rb`."], :run_mode => [Puppet.run_mode.name.to_s, "The effective 'run mode' of the application: master, agent, or user."] ) setdefaults(:main, :logdir => Puppet.run_mode.logopts) setdefaults(:main, :trace => [false, "Whether to print stack traces on some errors"], :autoflush => { :default => true, :desc => "Whether log files should always flush to disk.", :hook => proc { |value| Log.autoflush = value } }, :syslogfacility => ["daemon", "What syslog facility to use when logging to syslog. Syslog has a fixed list of valid facilities, and you must choose one of those; you cannot just make one up."], :statedir => { :default => "$vardir/state", :mode => 01755, :desc => "The directory where Puppet state is stored. Generally, this directory can be removed without causing harm (although it might result in spurious service restarts)." }, :rundir => { :default => Puppet.run_mode.run_dir, :mode => 01777, :desc => "Where Puppet PID files are kept." }, :genconfig => [false, "Whether to just print a configuration to stdout and exit. Only makes sense when used interactively. Takes into account arguments specified on the CLI."], :genmanifest => [false, "Whether to just print a manifest to stdout and exit. Only makes sense when used interactively. Takes into account arguments specified on the CLI."], :configprint => ["", "Print the value of a specific configuration setting. If the name of a setting is provided for this, then the value is printed and puppet exits. Comma-separate multiple values. For a list of all values, specify 'all'."], :color => { :default => "ansi", :type => :setting, :desc => "Whether to use colors when logging to the console. Valid values are `ansi` (equivalent to `true`), `html`, and `false`, which produces no color.", }, :mkusers => [false, "Whether to create the necessary user and group that puppet agent will run as."], :manage_internal_file_permissions => [true, "Whether Puppet should manage the owner, group, and mode of files it uses internally" ], :onetime => {:default => false, :desc => "Run the configuration once, rather than as a long-running daemon. This is useful for interactively running puppetd.", :short => 'o' }, :path => {:default => "none", :desc => "The shell search path. Defaults to whatever is inherited from the parent process.", :call_on_define => true, # Call our hook with the default value, so we always get the libdir set. :hook => proc do |value| ENV["PATH"] = "" if ENV["PATH"].nil? ENV["PATH"] = value unless value == "none" paths = ENV["PATH"].split(File::PATH_SEPARATOR) %w{/usr/sbin /sbin}.each do |path| ENV["PATH"] += File::PATH_SEPARATOR + path unless paths.include?(path) end value end }, :libdir => {:default => "$vardir/lib", :desc => "An extra search path for Puppet. 
This is only useful for those files that Puppet will load on demand, and is only guaranteed to work for those cases. In fact, the autoload mechanism is responsible for making sure this directory is in Ruby's search path", :call_on_define => true, # Call our hook with the default value, so we always get the libdir set. :hook => proc do |value| $LOAD_PATH.delete(@oldlibdir) if defined?(@oldlibdir) and $LOAD_PATH.include?(@oldlibdir) @oldlibdir = value $LOAD_PATH << value end }, :ignoreimport => [false, "If true, allows the parser to continue without requiring all files referenced with `import` statements to exist. This setting was primarily designed for use with commit hooks for parse-checking."], :authconfig => [ "$confdir/namespaceauth.conf", "The configuration file that defines the rights to the different namespaces and methods. This can be used as a coarse-grained authorization system for both `puppet agent` and `puppet master`." ], :environment => {:default => "production", :desc => "The environment Puppet is running in. For clients (e.g., `puppet agent`) this determines the environment itself, which is used to find modules and much more. For servers (i.e., `puppet master`) this provides the default environment for nodes we know nothing about." }, :diff_args => ["-u", "Which arguments to pass to the diff command when printing differences between files. The command to use can be chosen with the `diff` setting."], :diff => { :default => (Puppet.features.microsoft_windows? ? "" : "diff"), :desc => "Which diff command to use when printing differences between files. This setting has no default value on Windows, as standard `diff` is not available, but Puppet can use many third-party diff tools.", }, :show_diff => [false, "Whether to log and report a contextual diff when files are being replaced. This causes partial file contents to pass through Puppet's normal logging and reporting system, so this setting should be used with caution if you are sending Puppet's reports to an insecure destination. This feature currently requires the `diff/lcs` Ruby library."], :daemonize => { :default => (Puppet.features.microsoft_windows? ? false : true), :desc => "Whether to send the process into the background. This defaults to true on POSIX systems, and to false on Windows (where Puppet currently cannot daemonize).", :short => "D", :hook => proc do |value| if value and Puppet.features.microsoft_windows? raise "Cannot daemonize on Windows" end end }, :maximum_uid => [4294967290, "The maximum allowed UID. Some platforms use negative UIDs but then ship with tools that do not know how to handle signed ints, so the UIDs show up as huge numbers that can then not be fed back into the system. This is a hackish way to fail in a slightly more useful way when that happens."], :route_file => ["$confdir/routes.yaml", "The YAML file containing indirector route configuration."], :node_terminus => ["plain", "Where to find information about nodes."], :catalog_terminus => ["compiler", "Where to get node catalogs. This is useful to change if, for instance, you'd like to pre-compile catalogs and store them in memcached or some other easily-accessed store."], :facts_terminus => { :default => Puppet.application_name.to_s == "master" ? 'yaml' : 'facter', :desc => "The node facts terminus.", :hook => proc do |value| require 'puppet/node/facts' # Cache to YAML if we're uploading facts away if %w[rest inventory_service].include? 
value.to_s Puppet::Node::Facts.indirection.cache_class = :yaml end end }, :inventory_terminus => [ "$facts_terminus", "Should usually be the same as the facts terminus" ], :httplog => { :default => "$logdir/http.log", :owner => "root", :mode => 0640, :desc => "Where the puppet agent web server logs." }, :http_proxy_host => ["none", "The HTTP proxy host to use for outgoing connections. Note: You may need to use a FQDN for the server hostname when using a proxy."], :http_proxy_port => [3128, "The HTTP proxy port to use for outgoing connections"], :filetimeout => [ 15, "The minimum time to wait (in seconds) between checking for updates in configuration files. This timeout determines how quickly Puppet checks whether a file (such as manifests or templates) has changed on disk." ], :queue_type => ["stomp", "Which type of queue to use for asynchronous processing."], :queue_source => ["stomp://localhost:61613/", "The URI of the queue to use for asynchronous processing. If your stomp server requires authentication, you can include it in the URI as long as your stomp client library is at least 1.1.1"], :async_storeconfigs => {:default => false, :desc => "Whether to use a queueing system to provide asynchronous database integration. Requires that `puppetqd` be running and that 'PSON' support for ruby be installed.", :hook => proc do |value| if value # This reconfigures the termini for Node, Facts, and Catalog Puppet.settings[:storeconfigs] = true # But then we modify the configuration Puppet::Resource::Catalog.indirection.cache_class = :queue else raise "Cannot disable asynchronous storeconfigs in a running process" end end }, :thin_storeconfigs => {:default => false, :desc => - "Boolean; wether storeconfigs store in the database only the facts and exported resources. - If true, then storeconfigs performance will be higher and still allow exported/collected - resources, but other usage external to Puppet might not work", + "Boolean; whether Puppet should store only facts and exported resources in the storeconfigs + database. This will improve the performance of exported resources with the older + `active_record` backend, but will disable external tools that search the storeconfigs database. + Thinning catalogs is generally unnecessary when using PuppetDB to store catalogs.", :hook => proc do |value| Puppet.settings[:storeconfigs] = true if value end }, :config_version => ["", "How to determine the configuration version. By default, it will be the time that the configuration is parsed, but you can provide a shell script to override how the version is determined. The output of this script will be added to every log message in the reports, allowing you to correlate changes on your hosts to the source version on the server."], :zlib => [true, "Boolean; whether to use the zlib library", ], :prerun_command => ["", "A command to run before every agent run. If this command returns a non-zero return code, the entire Puppet run will fail."], :postrun_command => ["", "A command to run after every agent run. If this command returns a non-zero return code, the entire Puppet run will be considered to have failed, even though it might have performed work during the normal run."], :freeze_main => [false, "Freezes the 'main' class, disallowing any code to be added to it.
This essentially means that you can't have any code outside of a node, class, or definition other than in the site manifest."] ) Puppet.setdefaults(:module_tool, :module_repository => ['http://forge.puppetlabs.com', "The module repository"], :module_working_dir => ['$vardir/puppet-module', "The directory into which module tool data is stored"] ) hostname = Facter["hostname"].value domain = Facter["domain"].value if domain and domain != "" fqdn = [hostname, domain].join(".") else fqdn = hostname end Puppet.setdefaults( :main, # We have to downcase the fqdn, because the current ssl stuff (as oppsed to in master) doesn't have good facilities for # manipulating naming. :certname => {:default => fqdn.downcase, :desc => "The name to use when handling certificates. Defaults to the fully qualified domain name.", :call_on_define => true, # Call our hook with the default value, so we're always downcased :hook => proc { |value| raise(ArgumentError, "Certificate names must be lower case; see #1168") unless value == value.downcase }}, :certdnsnames => { :default => '', :hook => proc do |value| unless value.nil? or value == '' then Puppet.warning < < { :default => '', :desc => < { :default => "$ssldir/certs", :owner => "service", :desc => "The certificate directory." }, :ssldir => { :default => "$confdir/ssl", :mode => 0771, :owner => "service", :desc => "Where SSL certificates are kept." }, :publickeydir => { :default => "$ssldir/public_keys", :owner => "service", :desc => "The public key directory." }, :requestdir => { :default => "$ssldir/certificate_requests", :owner => "service", :desc => "Where host certificate requests are stored." }, :privatekeydir => { :default => "$ssldir/private_keys", :mode => 0750, :owner => "service", :desc => "The private key directory." }, :privatedir => { :default => "$ssldir/private", :mode => 0750, :owner => "service", :desc => "Where the client stores private certificate information." }, :passfile => { :default => "$privatedir/password", :mode => 0640, :owner => "service", :desc => "Where puppet agent stores the password for its private key. Generally unused." }, :hostcsr => { :default => "$ssldir/csr_$certname.pem", :mode => 0644, :owner => "service", :desc => "Where individual hosts store and look for their certificate requests." }, :hostcert => { :default => "$certdir/$certname.pem", :mode => 0644, :owner => "service", :desc => "Where individual hosts store and look for their certificates." }, :hostprivkey => { :default => "$privatekeydir/$certname.pem", :mode => 0600, :owner => "service", :desc => "Where individual hosts store and look for their private key." }, :hostpubkey => { :default => "$publickeydir/$certname.pem", :mode => 0644, :owner => "service", :desc => "Where individual hosts store and look for their public key." }, :localcacert => { :default => "$certdir/ca.pem", :mode => 0644, :owner => "service", :desc => "Where each client stores the CA certificate." }, :hostcrl => { :default => "$ssldir/crl.pem", :mode => 0644, :owner => "service", :desc => "Where the host's certificate revocation list can be found. This is distinct from the certificate authority's CRL." }, :certificate_revocation => [true, "Whether certificate revocation should be supported by downloading a Certificate Revocation List (CRL) to all clients. 
If enabled, CA chaining will almost definitely not work."] ) setdefaults( :ca, :ca_name => ["Puppet CA: $certname", "The name to use the Certificate Authority certificate."], :cadir => { :default => "$ssldir/ca", :owner => "service", :group => "service", :mode => 0770, :desc => "The root directory for the certificate authority." }, :cacert => { :default => "$cadir/ca_crt.pem", :owner => "service", :group => "service", :mode => 0660, :desc => "The CA certificate." }, :cakey => { :default => "$cadir/ca_key.pem", :owner => "service", :group => "service", :mode => 0660, :desc => "The CA private key." }, :capub => { :default => "$cadir/ca_pub.pem", :owner => "service", :group => "service", :desc => "The CA public key." }, :cacrl => { :default => "$cadir/ca_crl.pem", :owner => "service", :group => "service", :mode => 0664, :desc => "The certificate revocation list (CRL) for the CA. Will be used if present but otherwise ignored.", :hook => proc do |value| if value == 'false' Puppet.warning "Setting the :cacrl to 'false' is deprecated; Puppet will just ignore the crl if yours is missing" end end }, :caprivatedir => { :default => "$cadir/private", :owner => "service", :group => "service", :mode => 0770, :desc => "Where the CA stores private certificate information." }, :csrdir => { :default => "$cadir/requests", :owner => "service", :group => "service", :desc => "Where the CA stores certificate requests" }, :signeddir => { :default => "$cadir/signed", :owner => "service", :group => "service", :mode => 0770, :desc => "Where the CA stores signed certificates." }, :capass => { :default => "$caprivatedir/ca.pass", :owner => "service", :group => "service", :mode => 0660, :desc => "Where the CA stores the password for the private key" }, :serial => { :default => "$cadir/serial", :owner => "service", :group => "service", :mode => 0644, :desc => "Where the serial number for certificates is stored." }, :autosign => { :default => "$confdir/autosign.conf", :mode => 0644, :desc => "Whether to enable autosign. Valid values are true (which autosigns any key request, and is a very bad idea), false (which never autosigns any key request), and the path to a file, which uses that configuration file to determine which keys to sign."}, :allow_duplicate_certs => [false, "Whether to allow a new certificate request to overwrite an existing certificate."], :ca_days => ["", "How long a certificate should be valid, in days. This setting is deprecated; use `ca_ttl` instead"], :ca_ttl => ["5y", "The default TTL for new certificates; valid values must be an integer, optionally followed by one of the units 'y' (years of 365 days), 'd' (days), 'h' (hours), or 's' (seconds). The unit defaults to seconds. If this setting is set, ca_days is ignored. Examples are '3600' (one hour) and '1825d', which is the same as '5y' (5 years) "], :ca_md => ["md5", "The type of hash used in certificates."], :req_bits => [4096, "The bit length of the certificates."], :keylength => [4096, "The bit length of keys."], :cert_inventory => { :default => "$cadir/inventory.txt", :mode => 0644, :owner => "service", :group => "service", :desc => "A Complete listing of all certificates" } ) # Define the config default. setdefaults( Puppet.settings[:name], :config => ["$confdir/puppet.conf", "The configuration file for #{Puppet[:name]}."], :pidfile => ["$rundir/$name.pid", "The pid file"], :bindaddress => ["", "The address a listening server should bind to. 
Mongrel servers default to 127.0.0.1 and WEBrick defaults to 0.0.0.0."], :servertype => {:default => "webrick", :desc => "The type of server to use. Currently supported options are webrick and mongrel. If you use mongrel, you will need a proxy in front of the process or processes, since Mongrel cannot speak SSL.", :call_on_define => true, # Call our hook with the default value, so we always get the correct bind address set. :hook => proc { |value| value == "webrick" ? Puppet.settings[:bindaddress] = "0.0.0.0" : Puppet.settings[:bindaddress] = "127.0.0.1" if Puppet.settings[:bindaddress] == "" } } ) setdefaults(:master, :user => ["puppet", "The user puppet master should run as."], :group => ["puppet", "The group puppet master should run as."], :manifestdir => ["$confdir/manifests", "Where puppet master looks for its manifests."], :manifest => ["$manifestdir/site.pp", "The entry-point manifest for puppet master."], :code => ["", "Code to parse directly. This is essentially only used by `puppet`, and should only be set if you're writing your own Puppet executable"], :masterlog => { :default => "$logdir/puppetmaster.log", :owner => "service", :group => "service", :mode => 0660, :desc => "Where puppet master logs. This is generally not used, since syslog is the default log destination." }, :masterhttplog => { :default => "$logdir/masterhttp.log", :owner => "service", :group => "service", :mode => 0660, :create => true, :desc => "Where the puppet master web server logs." }, :masterport => [8140, "Which port puppet master listens on."], :node_name => ["cert", "How the puppet master determines the client's identity and sets the 'hostname', 'fqdn' and 'domain' facts for use in the manifest, in particular for determining which 'node' statement applies to the client. Possible values are 'cert' (use the subject's CN in the client's certificate) and 'facter' (use the hostname that the client reported in its facts)"], :bucketdir => { :default => "$vardir/bucket", :mode => 0750, :owner => "service", :group => "service", :desc => "Where FileBucket files are stored." }, :rest_authconfig => [ "$confdir/auth.conf", "The configuration file that defines the rights to the different rest indirections. This can be used as a fine-grained authorization system for `puppet master`." ], - :ca => [true, "Wether the master should function as a certificate authority."], + :ca => [true, "Whether the master should function as a certificate authority."], :modulepath => { :default => "$confdir/modules#{File::PATH_SEPARATOR}/usr/share/puppet/modules", :desc => "The search path for modules, as a list of directories separated by the system path separator character. (The POSIX path separator is ':', and the Windows path separator is ';'.)", :type => :setting # We don't want this to be considered a file, since it's multiple files. }, :ssl_client_header => ["HTTP_X_CLIENT_DN", "The header containing an authenticated client's SSL DN. Only used with Mongrel. This header must be set by the proxy to the authenticated client's SSL DN (e.g., `/CN=puppet.puppetlabs.com`). See http://projects.puppetlabs.com/projects/puppet/wiki/Using_Mongrel for more information."], :ssl_client_verify_header => ["HTTP_X_CLIENT_VERIFY", "The header containing the status message of the client verification. Only used with Mongrel. This header must be set by the proxy to 'SUCCESS' if the client successfully authenticated, and anything else otherwise. 
See http://projects.puppetlabs.com/projects/puppet/wiki/Using_Mongrel for more information."], # To make sure this directory is created before we try to use it on the server, we need # it to be in the server section (#1138). :yamldir => {:default => "$vardir/yaml", :owner => "service", :group => "service", :mode => "750", :desc => "The directory in which YAML data is stored, usually in a subdirectory."}, :server_datadir => {:default => "$vardir/server_data", :owner => "service", :group => "service", :mode => "750", :desc => "The directory in which serialized data is stored, usually in a subdirectory."}, :reports => ["store", "The list of reports to generate. All reports are looked for in `puppet/reports/name.rb`, and multiple report names should be comma-separated (whitespace is okay)." ], :reportdir => {:default => "$vardir/reports", :mode => 0750, :owner => "service", :group => "service", :desc => "The directory in which to store reports received from the client. Each client gets a separate subdirectory."}, :reporturl => ["http://localhost:3000/reports/upload", "The URL used by the http reports processor to send reports"], :fileserverconfig => ["$confdir/fileserver.conf", "Where the fileserver configuration is stored."], :strict_hostname_checking => [false, "Whether to only search for the complete hostname as it is in the certificate when searching for node information in the catalogs."] ) setdefaults(:metrics, :rrddir => {:default => "$vardir/rrd", :mode => 0750, :owner => "service", :group => "service", :desc => "The directory where RRD database files are stored. Directories for each reporting host will be created under this directory." }, :rrdinterval => ["$runinterval", "How often RRD should expect data. This should match how often the hosts report back to the server."] ) setdefaults(:device, :devicedir => {:default => "$vardir/devices", :mode => "750", :desc => "The root directory of devices' $vardir"}, :deviceconfig => ["$confdir/device.conf","Path to the device config file for puppet device"] ) setdefaults(:agent, :node_name_value => { :default => "$certname", :desc => "The explicit value used for the node name for all requests the agent makes to the master. WARNING: This setting is mutually exclusive with node_name_fact. Changing this setting also requires changes to the default auth.conf configuration on the Puppet Master. Please see http://links.puppetlabs.com/node_name_value for more information." }, :node_name_fact => { :default => "", :desc => "The fact name used to determine the node name used for all requests the agent makes to the master. WARNING: This setting is mutually exclusive with node_name_value. Changing this setting also requires changes to the default auth.conf configuration on the Puppet Master. Please see http://links.puppetlabs.com/node_name_fact for more information.", :hook => proc do |value| if !value.empty? and Puppet[:node_name_value] != Puppet[:certname] raise "Cannot specify both the node_name_value and node_name_fact settings" end end }, :localconfig => { :default => "$statedir/localconfig", :owner => "root", :mode => 0660, :desc => "Where puppet agent caches the local configuration. An extension indicating the cache format is added automatically."}, :statefile => { :default => "$statedir/state.yaml", :mode => 0660, :desc => "Where puppet agent and puppet master store state associated with the running configuration. In the case of puppet master, this file reflects the state discovered through interacting with clients." 
}, :clientyamldir => {:default => "$vardir/client_yaml", :mode => "750", :desc => "The directory in which client-side YAML data is stored."}, :client_datadir => {:default => "$vardir/client_data", :mode => "750", :desc => "The directory in which serialized data is stored on the client."}, :classfile => { :default => "$statedir/classes.txt", :owner => "root", :mode => 0640, :desc => "The file in which puppet agent stores a list of the classes associated with the retrieved configuration. Can be loaded in the separate `puppet` executable using the `--loadclasses` option."}, :resourcefile => { :default => "$statedir/resources.txt", :owner => "root", :mode => 0640, :desc => "The file in which puppet agent stores a list of the resources associated with the retrieved configuration." }, :puppetdlog => { :default => "$logdir/puppetd.log", :owner => "root", :mode => 0640, :desc => "The log file for puppet agent. This is generally not used." }, :server => ["puppet", "The server to which server puppet agent should connect"], :ignoreschedules => [false, "Boolean; whether puppet agent should ignore schedules. This is useful for initial puppet agent runs."], :puppetport => [8139, "Which port puppet agent listens on."], :noop => [false, "Whether puppet agent should be run in noop mode."], :runinterval => [1800, # 30 minutes "How often puppet agent applies the client configuration; in seconds. Note that a runinterval of 0 means \"run continuously\" rather than \"never run.\" If you want puppet agent to never run, you should start it with the `--no-client` option."], :listen => [false, "Whether puppet agent should listen for connections. If this is true, then puppet agent will accept incoming REST API requests, subject to the default ACLs and the ACLs set in the `rest_authconfig` file. Puppet agent can respond usefully to requests on the `run`, `facts`, `certificate`, and `resource` endpoints."], :ca_server => ["$server", "The server to use for certificate authority requests. It's a separate server because it cannot and does not need to horizontally scale."], :ca_port => ["$masterport", "The port to use for the certificate authority."], :catalog_format => { :default => "", :desc => "(Deprecated for 'preferred_serialization_format') What format to use to dump the catalog. Only supports 'marshal' and 'yaml'. Only matters on the client, since it asks the server for a specific format.", :hook => proc { |value| if value Puppet.warning "Setting 'catalog_format' is deprecated; use 'preferred_serialization_format' instead." Puppet.settings[:preferred_serialization_format] = value end } }, :preferred_serialization_format => ["pson", "The preferred means of serializing ruby instances for passing over the wire. This won't guarantee that all instances will be serialized using this method, since not all classes can be guaranteed to support this format, but it will be used for all classes that support it."], :puppetdlockfile => [ "$statedir/puppetdlock", "A lock file to temporarily stop puppet agent from doing anything."], :usecacheonfailure => [true, "Whether to use the cached configuration when the remote configuration will not compile. This option is useful for testing new configurations, where you want to fix the broken configuration rather than reverting to a known-good one." ], :use_cached_catalog => [false, "Whether to only use the cached catalog rather than compiling a new catalog on every run. 
Puppet can be run with this enabled by default and then selectively disabled when a recompile is desired."], :ignorecache => [false, "Ignore cache and always recompile the configuration. This is useful for testing new configurations, where the local cache may in fact be stale even if the timestamps are up to date - if the facts change or if the server changes." ], :downcasefacts => [false, "Whether facts should be made all lowercase when sent to the server."], :dynamicfacts => ["memorysize,memoryfree,swapsize,swapfree", "Facts that are dynamic; these facts will be ignored when deciding whether changed facts should result in a recompile. Multiple facts should be comma-separated."], :splaylimit => ["$runinterval", "The maximum time to delay before runs. Defaults to being the same as the run interval."], :splay => [false, "Whether to sleep for a pseudo-random (but consistent) amount of time before a run."], :clientbucketdir => { :default => "$vardir/clientbucket", :mode => 0750, :desc => "Where FileBucket files are stored locally." }, :configtimeout => [120, "How long the client should wait for the configuration to be retrieved before considering it a failure. This can help reduce flapping if too many clients contact the server at one time." ], :reportserver => { :default => "$server", :call_on_define => false, :desc => "(Deprecated for 'report_server') The server to which to send transaction reports.", :hook => proc do |value| Puppet.settings[:report_server] = value if value end }, :report_server => ["$server", "The server to send transaction reports to." ], :report_port => ["$masterport", "The port to communicate with the report_server." ], :inventory_server => ["$server", "The server to send facts to." ], :inventory_port => ["$masterport", "The port to communicate with the inventory_server." ], :report => [true, "Whether to send reports after every transaction." ], :lastrunfile => { :default => "$statedir/last_run_summary.yaml", :mode => 0640, :desc => "Where puppet agent stores the last run report summary in yaml format." }, :lastrunreport => { :default => "$statedir/last_run_report.yaml", :mode => 0640, :desc => "Where puppet agent stores the last run report in yaml format." }, :graph => [false, "Whether to create dot graph files for the different configuration graphs. These dot files can be interpreted by tools like OmniGraffle or dot (which is part of Graphviz)."], :graphdir => ["$statedir/graphs", "Where to store dot-outputted graphs."], :http_compression => [false, "Allow http compression in REST communication with the master. This setting might improve performance for agent -> master communications over slow WANs. Your puppet master needs to support compression (usually by activating some settings in a reverse-proxy in front of the puppet master, which rules out webrick). It is harmless to activate this setting if your master doesn't support compression, but if it supports it, this setting might reduce performance on high-speed LANs."] ) setdefaults(:inspect, :archive_files => [false, "During an inspect run, whether to archive files whose contents are audited to a file bucket."], :archive_file_server => ["$server", "During an inspect run, the file bucket server to archive files to if archive_files is set."] ) # Plugin information. setdefaults( :main, :plugindest => ["$libdir", "Where Puppet should store plugins that it pulls down from the central server."], :pluginsource => ["puppet://$server/plugins", "From where to retrieve plugins.
The standard Puppet `file` type is used for retrieval, so anything that is a valid file source can be used here."], :pluginsync => [false, "Whether plugins should be synced with the central server."], :pluginsignore => [".svn CVS .git", "What files to ignore when pulling down plugins."] ) # Central fact information. setdefaults( :main, :factpath => {:default => "$vardir/lib/facter#{File::PATH_SEPARATOR}$vardir/facts", :desc => "Where Puppet should look for facts. Multiple directories should be separated by the system path separator character. (The POSIX path separator is ':', and the Windows path separator is ';'.)", :call_on_define => true, # Call our hook with the default value, so we always get the value added to facter. :type => :setting, # Don't consider it a file, because it could be multiple colon-separated files :hook => proc { |value| Facter.search(value) if Facter.respond_to?(:search) }}, :factdest => ["$vardir/facts/", "Where Puppet should store facts that it pulls down from the central server."], :factsource => ["puppet://$server/facts/", "From where to retrieve facts. The standard Puppet `file` type is used for retrieval, so anything that is a valid file source can be used here."], :factsync => [false, "Whether facts should be synced with the central server."], :factsignore => [".svn CVS", "What files to ignore when pulling down facts."] ) setdefaults( :tagmail, :tagmap => ["$confdir/tagmail.conf", "The mapping between reporting tags and email addresses."], :sendmail => [which('sendmail') || '', "Where to find the sendmail binary with which to send email."], :reportfrom => ["report@" + [Facter["hostname"].value, Facter["domain"].value].join("."), "The 'from' email address for the reports."], :smtpserver => ["none", "The server through which to send email reports."] ) setdefaults( :rails, :dblocation => { :default => "$statedir/clientconfigs.sqlite3", :mode => 0660, :owner => "service", :group => "service", :desc => "The database cache for client configurations. Used for querying within the language." }, :dbadapter => [ "sqlite3", "The type of database to use." ], :dbmigrate => [ false, "Whether to automatically migrate the database." ], :dbname => [ "puppet", "The name of the database to use." ], :dbserver => [ "localhost", "The database server for caching. Only used when networked databases are used."], :dbport => [ "", "The database port for caching. Only used when networked databases are used."], :dbuser => [ "puppet", "The database user for caching. Only used when networked databases are used."], :dbpassword => [ "puppet", "The database password for caching. Only used when networked databases are used."], :dbconnections => [ '', "The number of database connections for networked databases. Will be ignored unless the value is a positive integer."], :dbsocket => [ "", "The database socket location. Only used when networked databases are used. Will be ignored if the value is an empty string."], :railslog => {:default => "$logdir/rails.log", :mode => 0600, :owner => "service", :group => "service", :desc => "Where Rails-specific logs are sent" }, :rails_loglevel => ["info", "The log level for Rails connections. The value must be a valid log level within Rails. Production environments normally use `info` and other environments normally use `debug`."] ) setdefaults( :couchdb, :couchdb_url => ["http://127.0.0.1:5984/puppet", "The url where the puppet couchdb database will be created"] ) setdefaults( :transaction, :tags => ["", "Tags to use to find resources.
If this is set, then only resources tagged with the specified tags will be applied. Values must be comma-separated."], :evaltrace => [false, "Whether each resource should log when it is being evaluated. This allows you to interactively see exactly what is being done."], :summarize => [false, "Whether to print a transaction summary." ] ) setdefaults( :main, :external_nodes => ["none", "An external command that can produce node information. The command's output must be a YAML dump of a hash, and that hash must have a `classes` key and/or a `parameters` key, where `classes` is an array or hash and `parameters` is a hash. For unknown nodes, the command should exit with a non-zero exit code. This command makes it straightforward to store your node mapping information in other data sources like databases."]) setdefaults( :ldap, :ldapnodes => [false, "Whether to search for node configurations in LDAP. See http://projects.puppetlabs.com/projects/puppet/wiki/LDAP_Nodes for more information."], :ldapssl => [false, "Whether SSL should be used when searching for nodes. Defaults to false because SSL usually requires certificates to be set up on the client side."], :ldaptls => [false, "Whether TLS should be used when searching for nodes. Defaults to false because TLS usually requires certificates to be set up on the client side."], :ldapserver => ["ldap", "The LDAP server. Only used if `ldapnodes` is enabled."], :ldapport => [389, "The LDAP port. Only used if `ldapnodes` is enabled."], :ldapstring => ["(&(objectclass=puppetClient)(cn=%s))", "The search string used to find an LDAP node."], :ldapclassattrs => ["puppetclass", "The LDAP attributes to use to define Puppet classes. Values should be comma-separated."], :ldapstackedattrs => ["puppetvar", "The LDAP attributes that should be stacked to arrays by adding the values in all hierarchy elements of the tree. Values should be comma-separated."], :ldapattrs => ["all", "The LDAP attributes to include when querying LDAP for nodes. All returned attributes are set as variables in the top-level scope. Multiple values should be comma-separated. The value 'all' returns all attributes."], :ldapparentattr => ["parentnode", "The attribute to use to define the parent node."], :ldapuser => ["", "The user to use to connect to LDAP. Must be specified as a full DN."], :ldappassword => ["", "The password to use to connect to LDAP."], :ldapbase => ["", "The search base for LDAP searches. It's impossible to provide a meaningful default here, although the LDAP libraries might have one already set. Generally, it should be the 'ou=Hosts' branch under your main directory."] ) setdefaults(:master, :storeconfigs => { :default => false, :desc => "Whether to store each client's configuration, including catalogs, facts, and related data. This also enables the import and export of resources in the Puppet language - a mechanism for exchange resources between nodes. By default this uses ActiveRecord and an SQL database to store and query the data; this, in turn, will depend on Rails being available. You can adjust the backend using the storeconfigs_backend setting.", # Call our hook with the default value, so we always get the libdir set. 
:call_on_define => true, :hook => proc do |value| require 'puppet/node' require 'puppet/node/facts' if value Puppet.settings[:async_storeconfigs] or Puppet::Resource::Catalog.indirection.cache_class = :store_configs Puppet::Node::Facts.indirection.cache_class = :store_configs Puppet::Node.indirection.cache_class = :store_configs Puppet::Resource.indirection.terminus_class = :store_configs end end }, :storeconfigs_backend => { :default => "active_record", :desc => "Configure the backend terminus used for StoreConfigs. By default, this uses the ActiveRecord store, which directly talks to the database from within the Puppet Master process." } ) # This doesn't actually work right now. setdefaults( :parser, :lexical => [false, "Whether to use lexical scoping (vs. dynamic)."], :templatedir => ["$vardir/templates", "Where Puppet looks for template files. Can be a list of colon-separated directories." ] ) setdefaults( :puppetdoc, :document_all => [false, "Document all resources"] ) end diff --git a/lib/puppet/face/module/install.rb b/lib/puppet/face/module/install.rb index 68eebec87..d7dc84536 100644 --- a/lib/puppet/face/module/install.rb +++ b/lib/puppet/face/module/install.rb @@ -1,173 +1,174 @@ # encoding: UTF-8 Puppet::Face.define(:module, '1.0.0') do action(:install) do summary "Install a module from the Puppet Forge or a release archive." description <<-EOT Installs a module from the Puppet Forge or from a release archive file. The specified module will be installed into the directory specified with the `--target-dir` option, which defaults to #{Puppet.settings[:modulepath].split(File::PATH_SEPARATOR).first}. EOT returns "Pathname object representing the path to the installed module." examples <<-EOT Install a module: $ puppet module install puppetlabs-vcsrepo Preparing to install into /etc/puppet/modules ... Downloading from http://forge.puppetlabs.com ... Installing -- do not interrupt ... /etc/puppet/modules └── puppetlabs-vcsrepo (v0.0.4) Install a module to a specific environment: $ puppet module install puppetlabs-vcsrepo --environment development Preparing to install into /etc/puppet/environments/development/modules ... Downloading from http://forge.puppetlabs.com ... Installing -- do not interrupt ... /etc/puppet/environments/development/modules └── puppetlabs-vcsrepo (v0.0.4) Install a specific module version: $ puppet module install puppetlabs-vcsrepo -v 0.0.4 Preparing to install into /etc/puppet/modules ... Downloading from http://forge.puppetlabs.com ... Installing -- do not interrupt ... /etc/puppet/modules └── puppetlabs-vcsrepo (v0.0.4) Install a module into a specific directory: $ puppet module install puppetlabs-vcsrepo --target-dir=/usr/share/puppet/modules Preparing to install into /usr/share/puppet/modules ... Downloading from http://forge.puppetlabs.com ... Installing -- do not interrupt ... /usr/share/puppet/modules └── puppetlabs-vcsrepo (v0.0.4) Install a module into a specific directory and check for dependencies in other directories: $ puppet module install puppetlabs-vcsrepo --target-dir=/usr/share/puppet/modules --modulepath /etc/puppet/modules Preparing to install into /usr/share/puppet/modules ... Downloading from http://forge.puppetlabs.com ... Installing -- do not interrupt ... /usr/share/puppet/modules └── puppetlabs-vcsrepo (v0.0.4) Install a module from a release archive: $ puppet module install puppetlabs-vcsrepo-0.0.4.tar.gz Preparing to install into /etc/puppet/modules ... Downloading from http://forge.puppetlabs.com ... 
Installing -- do not interrupt ... /etc/puppet/modules └── puppetlabs-vcsrepo (v0.0.4) Install a module from a release archive and ignore dependencies: $ puppet module install puppetlabs-vcsrepo-0.0.4.tar.gz --ignore-dependencies Preparing to install into /etc/puppet/modules ... Installing -- do not interrupt ... /etc/puppet/modules └── puppetlabs-vcsrepo (v0.0.4) EOT arguments "" option "--force", "-f" do summary "Force overwrite of existing module, if any." description <<-EOT Force overwrite of existing module, if any. EOT end option "--target-dir DIR", "-i DIR" do summary "The directory into which modules are installed." description <<-EOT The directory into which modules are installed; defaults to the first directory in the modulepath. Specifying this option will change the installation directory, and will use the existing modulepath when checking for dependencies. If you wish to check a different set of directories for dependencies, you must also use the `--environment` or `--modulepath` options. EOT end option "--ignore-dependencies" do summary "Do not attempt to install dependencies" description <<-EOT Do not attempt to install dependencies. EOT end option "--modulepath MODULEPATH" do default_to { Puppet.settings[:modulepath] } summary "Which directories to look for modules in" description <<-EOT The list of directories to check for modules. When installing a new module, this setting determines where the module tool will look for its dependencies. If the `--target dir` option is not specified, the first directory in the modulepath will also be used as the install directory. When installing a module into an environment whose modulepath is specified in puppet.conf, you can use the `--environment` option instead, and its modulepath will be used automatically. This setting should be a list of directories separated by the path separator character. (The path separator is `:` on Unix-like platforms and `;` on Windows.) EOT end option "--version VER", "-v VER" do summary "Module version to install." description <<-EOT Module version to install; can be an exact version or a requirement string, eg '>= 1.0.3'. Defaults to latest version. EOT end option "--environment NAME" do default_to { "production" } summary "The target environment to install modules into." description <<-EOT The target environment to install modules into. Only applicable if multiple environments (with different modulepaths) have been configured in puppet.conf. EOT end when_invoked do |name, options| sep = File::PATH_SEPARATOR + if options[:target_dir] - options[:target_dir] = File.expand_path(options[:target_dir]) options[:modulepath] = "#{options[:target_dir]}#{sep}#{options[:modulepath]}" end Puppet.settings[:modulepath] = options[:modulepath] options[:target_dir] = Puppet.settings[:modulepath].split(sep).first + options[:target_dir] = File.expand_path(options[:target_dir]) Puppet.notice "Preparing to install into #{options[:target_dir]} ..." 
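# At this point Puppet.settings[:modulepath] reflects any --target-dir that was
# passed (prepended to the modulepath), and options[:target_dir] has been reset
# to the expanded first directory on that path, which is where the installer
# below will place the module.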
Puppet::ModuleTool::Applications::Installer.run(name, options) end when_rendering :console do |return_value, name, options| if return_value[:result] == :failure Puppet.err(return_value[:error][:multiline]) exit 1 else tree = Puppet::ModuleTool.build_tree(return_value[:installed_modules], return_value[:install_dir]) return_value[:install_dir] + "\n" + Puppet::ModuleTool.format_tree(tree) end end end end diff --git a/lib/puppet/file_bucket/dipper.rb b/lib/puppet/file_bucket/dipper.rb index 4adab5df0..8e35bde67 100644 --- a/lib/puppet/file_bucket/dipper.rb +++ b/lib/puppet/file_bucket/dipper.rb @@ -1,107 +1,107 @@ +require 'pathname' require 'puppet/file_bucket' require 'puppet/file_bucket/file' require 'puppet/indirector/request' class Puppet::FileBucket::Dipper # This is a transitional implementation that uses REST # to access remote filebucket files. attr_accessor :name # Create our bucket client def initialize(hash = {}) # Emulate the XMLRPC client server = hash[:Server] port = hash[:Port] || Puppet[:masterport] environment = Puppet[:environment] if hash.include?(:Path) @local_path = hash[:Path] @rest_path = nil else @local_path = nil @rest_path = "https://#{server}:#{port}/#{environment}/file_bucket_file/" end end def local? !! @local_path end # Back up a file to our bucket def backup(file) raise(ArgumentError, "File #{file} does not exist") unless ::File.exist?(file) contents = IO.binread(file) begin file_bucket_file = Puppet::FileBucket::File.new(contents, :bucket_path => @local_path) files_original_path = absolutize_path(file) dest_path = "#{@rest_path}#{file_bucket_file.name}/#{files_original_path}" file_bucket_path = "#{@rest_path}#{file_bucket_file.checksum_type}/#{file_bucket_file.checksum_data}/#{files_original_path}" # Make a HEAD request for the file so that we don't waste time # uploading it if it already exists in the bucket. unless Puppet::FileBucket::File.indirection.head(file_bucket_path) Puppet::FileBucket::File.indirection.save(file_bucket_file, dest_path) end return file_bucket_file.checksum_data rescue => detail puts detail.backtrace if Puppet[:trace] raise Puppet::Error, "Could not back up #{file}: #{detail}" end end # Retrieve a file by sum. def getfile(sum) source_path = "#{@rest_path}md5/#{sum}" file_bucket_file = Puppet::FileBucket::File.indirection.find(source_path, :bucket_path => @local_path) raise Puppet::Error, "File not found" unless file_bucket_file file_bucket_file.to_s end # Restore the file def restore(file,sum) restore = true if FileTest.exists?(file) cursum = Digest::MD5.hexdigest(IO.binread(file)) # if the checksum has changed... # this might be extra effort if cursum == sum restore = false end end if restore if newcontents = getfile(sum) tmp = "" newsum = Digest::MD5.hexdigest(newcontents) changed = nil if FileTest.exists?(file) and ! 
FileTest.writable?(file) changed = ::File.stat(file).mode ::File.chmod(changed | 0200, file) end ::File.open(file, ::File::WRONLY|::File::TRUNC|::File::CREAT) { |of| of.binmode of.print(newcontents) } ::File.chmod(changed, file) if changed else Puppet.err "Could not find file with checksum #{sum}" return nil end return newsum else return nil end end private def absolutize_path( path ) - require 'pathname' Pathname.new(path).realpath end end diff --git a/lib/puppet/indirector/indirection.rb b/lib/puppet/indirector/indirection.rb index 7296be2b9..e2b183639 100644 --- a/lib/puppet/indirector/indirection.rb +++ b/lib/puppet/indirector/indirection.rb @@ -1,324 +1,323 @@ require 'puppet/util/docs' require 'puppet/indirector/envelope' require 'puppet/indirector/request' require 'puppet/util/instrumentation/instrumentable' # The class that connects functional classes with their different collection # back-ends. Each indirection has a set of associated terminus classes, # each of which is a subclass of Puppet::Indirector::Terminus. class Puppet::Indirector::Indirection include Puppet::Util::Docs extend Puppet::Util::Instrumentation::Instrumentable + attr_accessor :name, :model + attr_reader :termini + probe :find, :label => Proc.new { |parent, key, *args| "find_#{parent.name}_#{parent.terminus_class}" }, :data => Proc.new { |parent, key, *args| { :key => key }} probe :save, :label => Proc.new { |parent, key, *args| "save_#{parent.name}_#{parent.terminus_class}" }, :data => Proc.new { |parent, key, *args| { :key => key }} probe :search, :label => Proc.new { |parent, key, *args| "search_#{parent.name}_#{parent.terminus_class}" }, :data => Proc.new { |parent, key, *args| { :key => key }} probe :destroy, :label => Proc.new { |parent, key, *args| "destroy_#{parent.name}_#{parent.terminus_class}" }, :data => Proc.new { |parent, key, *args| { :key => key }} @@indirections = [] # Find an indirection by name. This is provided so that Terminus classes # can specifically hook up with the indirections they are associated with. def self.instance(name) @@indirections.find { |i| i.name == name } end # Return a list of all known indirections. Used to generate the # reference. def self.instances @@indirections.collect { |i| i.name } end # Find an indirected model by name. This is provided so that Terminus classes # can specifically hook up with the indirections they are associated with. def self.model(name) return nil unless match = @@indirections.find { |i| i.name == name } match.model end - attr_accessor :name, :model - - attr_reader :termini - # Create and return our cache terminus. def cache raise(Puppet::DevError, "Tried to cache when no cache class was set") unless cache_class terminus(cache_class) end # Should we use a cache? def cache? cache_class ? true : false end attr_reader :cache_class # Define a terminus class to be used for caching. def cache_class=(class_name) validate_terminus_class(class_name) if class_name @cache_class = class_name end # This is only used for testing. def delete @@indirections.delete(self) if @@indirections.include?(self) end # Set the time-to-live for instances created through this indirection. def ttl=(value) raise ArgumentError, "Indirection TTL must be an integer" unless value.is_a?(Fixnum) @ttl = value end # Default to the runinterval for the ttl. def ttl @ttl ||= Puppet[:runinterval].to_i end # Calculate the expiration date for a returned instance. def expiration Time.now + ttl end # Generate the full doc string. 
def doc text = "" text += scrub(@doc) + "\n\n" if @doc if s = terminus_setting text += "* **Terminus Setting**: #{terminus_setting}" end text end def initialize(model, name, options = {}) @model = model @name = name @termini = {} @cache_class = nil @terminus_class = nil raise(ArgumentError, "Indirection #{@name} is already defined") if @@indirections.find { |i| i.name == @name } @@indirections << self if mod = options[:extend] extend(mod) options.delete(:extend) end # This is currently only used for cache_class and terminus_class. options.each do |name, value| begin send(name.to_s + "=", value) rescue NoMethodError raise ArgumentError, "#{name} is not a valid Indirection parameter" end end end # Set up our request object. def request(*args) Puppet::Indirector::Request.new(self.name, *args) end # Return the singleton terminus for this indirection. def terminus(terminus_name = nil) # Get the name of the terminus. raise Puppet::DevError, "No terminus specified for #{self.name}; cannot redirect" unless terminus_name ||= terminus_class termini[terminus_name] ||= make_terminus(terminus_name) end # This can be used to select the terminus class. attr_accessor :terminus_setting # Determine the terminus class. def terminus_class unless @terminus_class if setting = self.terminus_setting self.terminus_class = Puppet.settings[setting].to_sym else raise Puppet::DevError, "No terminus class nor terminus setting was provided for indirection #{self.name}" end end @terminus_class end def reset_terminus_class @terminus_class = nil end # Specify the terminus class to use. def terminus_class=(klass) validate_terminus_class(klass) @terminus_class = klass end # This is used by terminus_class= and cache=. def validate_terminus_class(terminus_class) raise ArgumentError, "Invalid terminus name #{terminus_class.inspect}" unless terminus_class and terminus_class.to_s != "" unless Puppet::Indirector::Terminus.terminus_class(self.name, terminus_class) raise ArgumentError, "Could not find terminus #{terminus_class} for indirection #{self.name}" end end # Expire a cached object, if one is cached. Note that we don't actually # remove it, we expire it and write it back out to disk. This way people # can still use the expired object if they want. def expire(key, *args) request = request(:expire, key, *args) return nil unless cache? return nil unless instance = cache.find(request(:find, key, *args)) Puppet.info "Expiring the #{self.name} cache of #{instance.name}" # Set an expiration date in the past instance.expiration = Time.now - 60 cache.save(request(:save, instance, *args)) end # Search for an instance in the appropriate terminus, caching the # results if caching is configured.. def find(key, *args) request = request(:find, key, *args) terminus = prepare(request) if result = find_in_cache(request) return result end # Otherwise, return the result from the terminus, caching if appropriate. if ! request.ignore_terminus? and result = terminus.find(request) result.expiration ||= self.expiration if result.respond_to?(:expiration) if cache? and request.use_cache? Puppet.info "Caching #{self.name} for #{request.key}" cache.save request(:save, result, *args) end return terminus.respond_to?(:filter) ? terminus.filter(result) : result end nil end # Search for an instance in the appropriate terminus, and return a # boolean indicating whether the instance was found. def head(key, *args) request = request(:head, key, *args) terminus = prepare(request) # Look in the cache first, then in the terminus. 
Force the result # to be a boolean. !!(find_in_cache(request) || terminus.head(request)) end def find_in_cache(request) # See if our instance is in the cache and up to date. return nil unless cache? and ! request.ignore_cache? and cached = cache.find(request) if cached.expired? Puppet.info "Not using expired #{self.name} for #{request.key} from cache; expired at #{cached.expiration}" return nil end Puppet.debug "Using cached #{self.name} for #{request.key}" cached rescue => detail puts detail.backtrace if Puppet[:trace] Puppet.err "Cached #{self.name} for #{request.key} failed: #{detail}" nil end # Remove something via the terminus. def destroy(key, *args) request = request(:destroy, key, *args) terminus = prepare(request) result = terminus.destroy(request) if cache? and cached = cache.find(request(:find, key, *args)) # Reuse the existing request, since it's equivalent. cache.destroy(request) end result end # Search for more than one instance. Should always return an array. def search(key, *args) request = request(:search, key, *args) terminus = prepare(request) if result = terminus.search(request) raise Puppet::DevError, "Search results from terminus #{terminus.name} are not an array" unless result.is_a?(Array) result.each do |instance| next unless instance.respond_to? :expiration instance.expiration ||= self.expiration end return result end end # Save the instance in the appropriate terminus. This method is # normally an instance method on the indirected class. def save(instance, key = nil) request = request(:save, key, instance) terminus = prepare(request) result = terminus.save(request) # If caching is enabled, save our document there cache.save(request) if cache? result end private # Check authorization if there's a hook available; fail if there is one # and it returns false. def check_authorization(request, terminus) # At this point, we're assuming authorization makes no sense without # client information. return unless request.node # This is only to authorize via a terminus-specific authorization hook. return unless terminus.respond_to?(:authorized?) unless terminus.authorized?(request) msg = "Not authorized to call #{request.method} on #{request}" msg += " with #{request.options.inspect}" unless request.options.empty? raise ArgumentError, msg end end # Setup a request, pick the appropriate terminus, check the request's authorization, and return it. def prepare(request) # Pick our terminus. if respond_to?(:select_terminus) unless terminus_name = select_terminus(request) raise ArgumentError, "Could not determine appropriate terminus for #{request}" end else terminus_name = terminus_class end dest_terminus = terminus(terminus_name) check_authorization(request, dest_terminus) dest_terminus end # Create a new terminus instance. def make_terminus(terminus_class) # Load our terminus class. unless klass = Puppet::Indirector::Terminus.terminus_class(self.name, terminus_class) raise ArgumentError, "Could not find terminus #{terminus_class} for indirection #{self.name}" end klass.new end end diff --git a/lib/puppet/interface/action_manager.rb b/lib/puppet/interface/action_manager.rb index 5c9af4f96..1226507f5 100644 --- a/lib/puppet/interface/action_manager.rb +++ b/lib/puppet/interface/action_manager.rb @@ -1,75 +1,74 @@ require 'puppet/interface/action' +require 'puppet/interface/action_builder' module Puppet::Interface::ActionManager # Declare that this app can take a specific action, and provide # the code to do so. 
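A note on the lookup order used by Indirection#find above: the cache is consulted first, and anything returned by the terminus is stamped with an expiration and written back to the cache. A simplified, hedged sketch of that flow (the cache and store here are plain stand-ins, not the real Puppet terminus classes):

    # Cache-first lookup: return a fresh cached value if present, otherwise ask
    # the backing store and remember the answer with an expiry time.
    def cached_find(key, cache, store, ttl)
      if (hit = cache[key]) && hit[:expires] > Time.now
        return hit[:value]
      end
      value = store.call(key)
      cache[key] = { :value => value, :expires => Time.now + ttl } if value
      value
    end

    # cache = {}
    # cached_find("node1", cache, lambda { |k| expensive_lookup(k) }, 1800)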
def action(name, &block) - require 'puppet/interface/action_builder' - @actions ||= {} raise "Action #{name} already defined for #{self}" if action?(name) action = Puppet::Interface::ActionBuilder.build(self, name, &block) if action.default and current = get_default_action raise "Actions #{current.name} and #{name} cannot both be default" end @actions[action.name] = action end # This is the short-form of an action definition; it doesn't use the # builder, just creates the action directly from the block. def script(name, &block) @actions ||= {} raise "Action #{name} already defined for #{self}" if action?(name) @actions[name] = Puppet::Interface::Action.new(self, name, :when_invoked => block) end def actions @actions ||= {} result = @actions.keys if self.is_a?(Class) and superclass.respond_to?(:actions) result += superclass.actions elsif self.class.respond_to?(:actions) result += self.class.actions end # We need to uniq the result, because we duplicate actions when they are # fetched to ensure that they have the correct bindings; they shadow the # parent, and uniq implements that. --daniel 2011-06-01 result.uniq.sort end def get_action(name) @actions ||= {} result = @actions[name.to_sym] if result.nil? if self.is_a?(Class) and superclass.respond_to?(:get_action) found = superclass.get_action(name) elsif self.class.respond_to?(:get_action) found = self.class.get_action(name) end if found then # This is not the nicest way to make action equivalent to the Ruby # Method object, rather than UnboundMethod, but it will do for now, # and we only have to make this change in *one* place. --daniel 2011-04-12 result = @actions[name.to_sym] = found.__dup_and_rebind_to(self) end end return result end def get_default_action default = actions.map {|x| get_action(x) }.select {|x| x.default } if default.length > 1 raise "The actions #{default.map(&:name).join(", ")} cannot all be default" end default.first end def action?(name) actions.include?(name.to_sym) end end diff --git a/lib/puppet/module_tool/applications/unpacker.rb b/lib/puppet/module_tool/applications/unpacker.rb index e9b0b50d1..3ed6058f3 100644 --- a/lib/puppet/module_tool/applications/unpacker.rb +++ b/lib/puppet/module_tool/applications/unpacker.rb @@ -1,48 +1,67 @@ require 'pathname' require 'tmpdir' module Puppet::ModuleTool module Applications class Unpacker < Application def initialize(filename, options = {}) @filename = Pathname.new(filename) parsed = parse_filename(filename) super(options) @module_dir = Pathname.new(options[:target_dir]) + parsed[:dir_name] end def run extract_module_to_install_dir # Return the Pathname object representing the directory where the # module release archive was unpacked the to, and the module release # name. @module_dir end + # Obtain a suitable temporary path for building and unpacking tarballs + # + # @return [Pathname] path to temporary build location + def build_dir + Puppet::Forge::Cache.base_path + "tmp-unpacker-#{Digest::SHA1.hexdigest(@filename.basename.to_s)}" + end + private def extract_module_to_install_dir delete_existing_installation_or_abort! - build_dir = Puppet::Forge::Cache.base_path + "tmp-unpacker-#{Digest::SHA1.hexdigest(@filename.basename.to_s)}" build_dir.mkpath begin - unless system "tar xzf #{@filename} -C #{build_dir}" - raise RuntimeError, "Could not extract contents of module archive." + begin + if Facter.value('operatingsystem') == "Solaris" + # Solaris tar is not as safe and works differently, so we prefer + # gnutar instead. 
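For the Unpacker change that follows (preferring GNU tar on Solaris and failing early when it is missing), the decision boils down to a small helper like the sketch below; this is an illustration only, with the availability check passed in rather than using Puppet::Util.which:

    # Return the tar command to use: plain 'tar' everywhere except Solaris,
    # where only GNU tar ('gtar') unpacks these archives reliably.
    def tar_command(operatingsystem, gtar_available)
      return 'tar' unless operatingsystem == 'Solaris'
      raise "Cannot find 'gtar'. Install GNU tar and put it in PATH." unless gtar_available
      'gtar'
    end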
+ if Puppet::Util.which('gtar') + Puppet::Util.execute("gtar xzf #{@filename} -C #{build_dir}") + else + raise RuntimeError, "Cannot find the command 'gtar'. Make sure GNU tar is installed, and is in your PATH." + end + else + Puppet::Util.execute("tar xzf #{@filename} -C #{build_dir}") + end + rescue Puppet::ExecutionFailure => e + raise RuntimeError, "Could not extract contents of module archive: #{e.message}" end + # grab the first directory extracted = build_dir.children.detect { |c| c.directory? } FileUtils.mv extracted, @module_dir ensure build_dir.rmtree end end def delete_existing_installation_or_abort! return unless @module_dir.exist? FileUtils.rm_rf(@module_dir, :secure => true) end end end end diff --git a/lib/puppet/network/handler/fileserver.rb b/lib/puppet/network/handler/fileserver.rb index 5548f40fb..c0d5756f1 100755 --- a/lib/puppet/network/handler/fileserver.rb +++ b/lib/puppet/network/handler/fileserver.rb @@ -1,732 +1,732 @@ require 'puppet' require 'puppet/network/authstore' require 'webrick/httpstatus' require 'cgi' require 'delegate' require 'sync' require 'puppet/network/handler' require 'puppet/network/handler' require 'puppet/network/xmlrpc/server' require 'puppet/file_serving' require 'puppet/file_serving/metadata' require 'puppet/network/handler' class Puppet::Network::Handler AuthStoreError = Puppet::AuthStoreError class FileServerError < Puppet::Error; end class FileServer < Handler desc "The interface to Puppet's fileserving abilities." attr_accessor :local CHECKPARAMS = [:mode, :type, :owner, :group, :checksum] # Special filserver module for puppet's module system MODULES = "modules" PLUGINS = "plugins" @interface = XMLRPC::Service::Interface.new("fileserver") { |iface| iface.add_method("string describe(string, string)") iface.add_method("string list(string, string, boolean, array)") iface.add_method("string retrieve(string, string)") } def self.params CHECKPARAMS.dup end # If the configuration file exists, then create (if necessary) a LoadedFile # object to manage it; else, return nil. def configuration # Short-circuit the default case. return @configuration if defined?(@configuration) config_path = @passed_configuration_path || Puppet[:fileserverconfig] return nil unless FileTest.exist?(config_path) # The file exists but we don't have a LoadedFile instance for it. @configuration = Puppet::Util::LoadedFile.new(config_path) end # Create our default mounts for modules and plugins. This is duplicated code, # but I'm not really worried about that. def create_default_mounts @mounts = {} Puppet.debug "No file server configuration file; autocreating #{MODULES} mount with default permissions" mount = Mount.new(MODULES) mount.allow("*") @mounts[MODULES] = mount Puppet.debug "No file server configuration file; autocreating #{PLUGINS} mount with default permissions" mount = PluginMount.new(PLUGINS) mount.allow("*") @mounts[PLUGINS] = mount end # Describe a given file. This returns all of the manageable aspects # of that file. def describe(url, links = :follow, client = nil, clientip = nil) links = links.intern if links.is_a? String mount, path = convert(url, client, clientip) mount.debug("Describing #{url} for #{client}") if client # use the mount to resolve the path for us. return "" unless full_path = mount.file_path(path, client) metadata = Puppet::FileServing::Metadata.new(url, :path => full_path, :links => links) return "" unless metadata.exist? 
begin metadata.collect rescue => detail puts detail.backtrace if Puppet[:trace] Puppet.err detail return "" end metadata.attributes_with_tabs end # Create a new fileserving module. def initialize(hash = {}) @mounts = {} @files = {} @local = hash[:Local] @noreadconfig = true if hash[:Config] == false @passed_configuration_path = hash[:Config] if hash.include?(:Mount) @passedconfig = true raise Puppet::DevError, "Invalid mount hash #{hash[:Mount].inspect}" unless hash[:Mount].is_a?(Hash) hash[:Mount].each { |dir, name| self.mount(dir, name) if FileTest.exists?(dir) } self.mount(nil, MODULES) self.mount(nil, PLUGINS) else @passedconfig = false if configuration readconfig(false) # don't check the file the first time. else create_default_mounts end end end # List a specific directory's contents. def list(url, links = :ignore, recurse = false, ignore = false, client = nil, clientip = nil) mount, path = convert(url, client, clientip) mount.debug "Listing #{url} for #{client}" if client return "" unless mount.path_exists?(path, client) desc = mount.list(path, recurse, ignore, client) if desc.length == 0 mount.notice "Got no information on //#{mount}/#{path}" return "" end desc.collect { |sub| sub.join("\t") }.join("\n") end def local? self.local end # Is a given mount available? def mounted?(name) @mounts.include?(name) end # Mount a new directory with a name. def mount(path, name) if @mounts.include?(name) if @mounts[name] != path raise FileServerError, "#{@mounts[name].path} is already mounted at #{name}" else # it's already mounted; no problem return end end # Let the mounts do their own error-checking. @mounts[name] = Mount.new(name, path) @mounts[name].info "Mounted #{path}" @mounts[name] end # Retrieve a file from the local disk and pass it to the remote # client. def retrieve(url, links = :ignore, client = nil, clientip = nil) links = links.intern if links.is_a? String mount, path = convert(url, client, clientip) mount.info "Sending #{url} to #{client}" if client unless mount.path_exists?(path, client) mount.debug "#{mount} reported that #{path} does not exist" return "" end links = links.intern if links.is_a? String if links == :ignore and FileTest.symlink?(path) mount.debug "I think that #{path} is a symlink and we're ignoring them" return "" end str = mount.read_file(path, client) if @local return str else return CGI.escape(str) end end def umount(name) @mounts.delete(name) if @mounts.include? name end private def authcheck(file, mount, client, clientip) # If we're local, don't bother passing in information. if local? client = nil clientip = nil end unless mount.allowed?(client, clientip) mount.warning "#{client} cannot access #{file}" raise Puppet::AuthorizationError, "Cannot access #{mount}" end end # Take a URL and some client info and return a mount and relative # path pair. # def convert(url, client, clientip) readconfig url = URI.unescape(url) mount, stub = splitpath(url, client) authcheck(url, mount, client, clientip) return mount, stub end # Return the mount for the Puppet modules; allows file copying from # the modules. def modules_mount(module_name, client) # Find our environment, if we have one. unless hostname = (client || Facter.value("hostname")) raise ArgumentError, "Could not find hostname" end env = (node = Puppet::Node.indirection.find(hostname)) ? node.environment : nil # And use the environment to look up the module. (mod = Puppet::Node::Environment.new(env).module(module_name) and mod.files?) ? 
@mounts[MODULES].copy(mod.name, mod.file_directory) : nil end # Read the configuration file. def readconfig(check = true) return if @noreadconfig return unless configuration return if check and ! @configuration.changed? newmounts = {} begin File.open(@configuration.file) { |f| mount = nil count = 1 f.each_line { |line| case line when /^\s*#/; next # skip comments when /^\s*$/; next # skip blank lines when /\[([-\w]+)\]/ name = $1 raise FileServerError, "#{newmounts[name]} is already mounted as #{name} in #{@configuration.file}" if newmounts.include?(name) mount = Mount.new(name) newmounts[name] = mount when /^\s*(\w+)\s+(.+)$/ var = $1 value = $2 case var when "path" raise FileServerError.new("No mount specified for argument #{var} #{value}") unless mount if mount.name == MODULES Puppet.warning "The '#{mount.name}' module can not have a path. Ignoring attempt to set it" else begin mount.path = value rescue FileServerError => detail Puppet.err "Removing mount #{mount.name}: #{detail}" newmounts.delete(mount.name) end end when "allow" raise FileServerError.new("No mount specified for argument #{var} #{value}") unless mount value.split(/\s*,\s*/).each { |val| begin mount.info "allowing #{val} access" mount.allow(val) rescue AuthStoreError => detail puts detail.backtrace if Puppet[:trace] raise FileServerError.new( detail.to_s, count, @configuration.file) end } when "deny" raise FileServerError.new("No mount specified for argument #{var} #{value}") unless mount value.split(/\s*,\s*/).each { |val| begin mount.info "denying #{val} access" mount.deny(val) rescue AuthStoreError => detail raise FileServerError.new( detail.to_s, count, @configuration.file) end } else raise FileServerError.new("Invalid argument '#{var}'", count, @configuration.file) end else raise FileServerError.new("Invalid line '#{line.chomp}'", count, @configuration.file) end count += 1 } } rescue Errno::EACCES => detail Puppet.err "FileServer error: Cannot read #{@configuration}; cannot serve" #raise Puppet::Error, "Cannot read #{@configuration}" rescue Errno::ENOENT => detail Puppet.err "FileServer error: '#{@configuration}' does not exist; cannot serve" end unless newmounts[MODULES] Puppet.debug "No #{MODULES} mount given; autocreating with default permissions" mount = Mount.new(MODULES) mount.allow("*") newmounts[MODULES] = mount end unless newmounts[PLUGINS] Puppet.debug "No #{PLUGINS} mount given; autocreating with default permissions" mount = PluginMount.new(PLUGINS) mount.allow("*") newmounts[PLUGINS] = mount end unless newmounts[PLUGINS].valid? Puppet.debug "No path given for #{PLUGINS} mount; creating a special PluginMount" # We end up here if the user has specified access rules for # the plugins mount, without specifying a path (which means # they want to have the default behaviour for the mount, but # special access control). So we need to move all the # user-specified access controls into the new PluginMount # object... mount = PluginMount.new(PLUGINS) # Yes, you're allowed to hate me for this. mount.instance_variable_set( :@declarations, newmounts[PLUGINS].instance_variable_get(:@declarations) ) newmounts[PLUGINS] = mount end # Verify each of the mounts are valid. # We let the check raise an error, so that it can raise an error # pointing to the specific problem. newmounts.each { |name, mount| raise FileServerError, "Invalid mount #{name}" unless mount.valid? } @mounts = newmounts end # Split the path into the separate mount point and path. 
def splitpath(dir, client) # the dir is based on one of the mounts # so first retrieve the mount path mount = nil path = nil if dir =~ %r{/([-\w]+)} # Strip off the mount name. mount_name, path = dir.sub(%r{^/}, '').split(File::Separator, 2) unless mount = modules_mount(mount_name, client) unless mount = @mounts[mount_name] raise FileServerError, "Fileserver module '#{mount_name}' not mounted" end end else raise FileServerError, "Fileserver error: Invalid path '#{dir}'" end if path.nil? or path == '' path = '/' elsif path # Remove any double slashes that might have occurred path = URI.unescape(path.gsub(/\/\//, "/")) end return mount, path end def to_s "fileserver" end # A simple class for wrapping mount points. Instances of this class # don't know about the enclosing object; they're mainly just used for # authorization. class Mount < Puppet::Network::AuthStore attr_reader :name @@syncs = {} @@files = {} Puppet::Util.logmethods(self, true) # Create a map for a specific client. def clientmap(client) { "h" => client.sub(/\..*$/, ""), "H" => client, "d" => client.sub(/[^.]+\./, "") # domain name } end # Replace % patterns as appropriate. def expand(path, client = nil) # This map should probably be moved into a method. map = nil if client map = clientmap(client) else Puppet.notice "No client; expanding '#{path}' with local host" # Else, use the local information map = localmap end path.gsub(/%(.)/) do |v| key = $1 if key == "%" "%" else map[key] || v end end end # Do we have any patterns in our path, yo? def expandable? if defined?(@expandable) @expandable else false end end # Return a fully qualified path, given a short path and # possibly a client name. def file_path(relative_path, node = nil) full_path = path(node) unless full_path p self raise ArgumentError.new("Mounts without paths are not usable") unless full_path end # If there's no relative path name, then we're serving the mount itself. return full_path unless relative_path and relative_path != "/" File.join(full_path, relative_path) end # Create out object. It must have a name. def initialize(name, path = nil) unless name =~ %r{^[-\w]+$} raise FileServerError, "Invalid name format '#{name}'" end @name = name if path self.path = path else @path = nil end @files = {} super() end def fileobj(path, links, client) obj = nil if obj = @files[file_path(path, client)] # This can only happen in local fileserving, but it's an # important one. It'd be nice if we didn't just set # the check params every time, but I'm not sure it's worth # the effort. obj[:audit] = CHECKPARAMS else obj = Puppet::Type.type(:file).new( :name => file_path(path, client), :audit => CHECKPARAMS ) @files[file_path(path, client)] = obj end if links == :manage links = :follow end # This, ah, might be completely redundant obj[:links] = links unless obj[:links] == links obj end # Read the contents of the file at the relative path given. def read_file(relpath, client) File.read(file_path(relpath, client)) end # Cache this manufactured map, since if it's used it's likely # to get used a lot. def localmap unless defined?(@@localmap) @@localmap = { "h" => Facter.value("hostname"), "H" => [Facter.value("hostname"), Facter.value("domain")].join("."), "d" => Facter.value("domain") } end @@localmap end # Return the path as appropriate, expanding as necessary. def path(client = nil) if expandable? return expand(@path, client) else return @path end end # Set the path. def path=(path) # FIXME: For now, just don't validate paths with replacement # patterns in them. 
if path =~ /%./ # Mark that we're expandable. @expandable = true else raise FileServerError, "#{path} does not exist" unless FileTest.exists?(path) raise FileServerError, "#{path} is not a directory" unless FileTest.directory?(path) raise FileServerError, "#{path} is not readable" unless FileTest.readable?(path) @expandable = false end @path = path end # Verify that the path given exists within this mount's subtree. # def path_exists?(relpath, client = nil) File.exists?(file_path(relpath, client)) end # Return the current values for the object. def properties(obj) obj.retrieve.inject({}) { |props, ary| props[ary[0].name] = ary[1]; props } end # Retrieve a specific directory relative to a mount point. # If they pass in a client, then expand as necessary. def subdir(dir = nil, client = nil) basedir = self.path(client) dirname = if dir File.join(basedir, *dir.split("/")) else basedir end dirname end def sync(path) @@syncs[path] ||= Sync.new @@syncs[path] end def to_s "mount[#{@name}]" end # Verify our configuration is valid. This should really check to # make sure at least someone will be allowed, but, eh. def valid? if name == MODULES return @path.nil? else return ! @path.nil? end end # Return a new mount with the same properties as +self+, except # with a different name and path. def copy(name, path) result = self.clone result.path = path result.instance_variable_set(:@name, name) result end # List the contents of the relative path +relpath+ of this mount. # # +recurse+ is the number of levels to recurse into the tree, # or false to provide no recursion or true if you just want to # go for broke. # # +ignore+ is an array of filenames to ignore when traversing # the list. # # The return value of this method is a complex nest of arrays, # which describes a directory tree. Each file or directory is # represented by an array, where the first element is the path # of the file (relative to the root of the mount), and the # second element is the type. A directory is represented by an # array as well, where the first element is a "directory" array, # while the remaining elements are other file or directory # arrays. Confusing? Hell yes. As an added bonus, all names # must start with a slash, because... well, I'm fairly certain # a complete explanation would involve the words "crack pipe" # and "bad batch". # def list(relpath, recurse, ignore, client = nil) abspath = file_path(relpath, client) if FileTest.exists?(abspath) if FileTest.directory?(abspath) and recurse return reclist(abspath, recurse, ignore) else return [["/", File.stat(abspath).ftype]] end end nil end + require 'puppet/file_serving' + require 'puppet/file_serving/fileset' def reclist(abspath, recurse, ignore) - require 'puppet/file_serving' - require 'puppet/file_serving/fileset' if recurse.is_a?(Fixnum) args = { :recurse => true, :recurselimit => recurse, :links => :follow } else args = { :recurse => recurse, :links => :follow } end args[:ignore] = ignore if ignore fs = Puppet::FileServing::Fileset.new(abspath, args) ary = fs.files.collect do |file| if file == "." file = "/" else file = File.join("/", file ) end stat = fs.stat(File.join(abspath, file)) next if stat.nil? [ file, stat.ftype ] end ary.compact end end # A special mount class specifically for the plugins mount -- just # has some magic to effectively do a union mount of the 'plugins' # directory of all modules. # class PluginMount < Mount def path(client) '' end def mod_path_exists?(mod, relpath, client = nil) ! mod.plugin(relpath).nil? 
end def path_exists?(relpath, client = nil) !valid_modules(client).find { |mod| mod.plugin(relpath) }.nil? end def valid? true end def mod_file_path(mod, relpath, client = nil) File.join(mod, PLUGINS, relpath) end def file_path(relpath, client = nil) return nil unless mod = valid_modules(client).find { |m| m.plugin(relpath) } mod.plugin(relpath) end # create a list of files by merging all modules def list(relpath, recurse, ignore, client = nil) result = [] valid_modules(client).each do |mod| if modpath = mod.plugin(relpath) if FileTest.directory?(modpath) and recurse ary = reclist(modpath, recurse, ignore) ary ||= [] result += ary else result += [["/", File.stat(modpath).ftype]] end end end result end private def valid_modules(client) Puppet::Node::Environment.new.modules.find_all { |mod| mod.exist? } end def add_to_filetree(f, filetree) first, rest = f.split(File::SEPARATOR, 2) end end end end diff --git a/lib/puppet/parser/functions/fqdn_rand.rb b/lib/puppet/parser/functions/fqdn_rand.rb index 93ab98bcd..dd8d56800 100644 --- a/lib/puppet/parser/functions/fqdn_rand.rb +++ b/lib/puppet/parser/functions/fqdn_rand.rb @@ -1,12 +1,13 @@ +require 'digest/md5' + Puppet::Parser::Functions::newfunction(:fqdn_rand, :type => :rvalue, :doc => "Generates random numbers based on the node's fqdn. Generated random values will be a range from 0 up to and excluding n, where n is the first parameter. The second argument specifies a number to add to the seed and is optional, for example: $random_number = fqdn_rand(30) $random_number_seed = fqdn_rand(30,30)") do |args| - require 'digest/md5' max = args.shift srand(Digest::MD5.hexdigest([lookupvar('::fqdn'),args].join(':')).hex) rand(max).to_s end diff --git a/lib/puppet/parser/functions/md5.rb b/lib/puppet/parser/functions/md5.rb index f7a4f7222..d1e1efae2 100644 --- a/lib/puppet/parser/functions/md5.rb +++ b/lib/puppet/parser/functions/md5.rb @@ -1,5 +1,5 @@ -Puppet::Parser::Functions::newfunction(:md5, :type => :rvalue, :doc => "Returns a MD5 hash value from a provided string.") do |args| - require 'md5' +require 'digest/md5' +Puppet::Parser::Functions::newfunction(:md5, :type => :rvalue, :doc => "Returns a MD5 hash value from a provided string.") do |args| Digest::MD5.hexdigest(args[0]) end diff --git a/lib/puppet/parser/functions/sha1.rb b/lib/puppet/parser/functions/sha1.rb index 1e7d5abe4..c52df4d28 100644 --- a/lib/puppet/parser/functions/sha1.rb +++ b/lib/puppet/parser/functions/sha1.rb @@ -1,5 +1,5 @@ -Puppet::Parser::Functions::newfunction(:sha1, :type => :rvalue, :doc => "Returns a SHA1 hash value from a provided string.") do |args| - require 'digest/sha1' +require 'digest/sha1' +Puppet::Parser::Functions::newfunction(:sha1, :type => :rvalue, :doc => "Returns a SHA1 hash value from a provided string.") do |args| Digest::SHA1.hexdigest(args[0]) end diff --git a/lib/puppet/parser/functions/template.rb b/lib/puppet/parser/functions/template.rb index 8105e2cac..5e4b00e1e 100644 --- a/lib/puppet/parser/functions/template.rb +++ b/lib/puppet/parser/functions/template.rb @@ -1,25 +1,23 @@ Puppet::Parser::Functions::newfunction(:template, :type => :rvalue, :doc => "Evaluate a template and return its value. See [the templating docs](http://docs.puppetlabs.com/guides/templating.html) for more information. Note that if multiple templates are specified, their output is all concatenated and returned as the output of the function.") do |vals| - require 'erb' - vals.collect do |file| # Use a wrapper, so the template can't get access to the full # Scope object. 
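The wrapper mentioned in the comment above exists so templates are evaluated against a small, controlled object rather than the full parser scope. A bare-bones illustration of that pattern with plain ERB; Puppet's real TemplateWrapper exposes far more than this:

    require 'erb'

    # Evaluate a template against a deliberately tiny object so the template
    # can only reach what the wrapper chooses to expose.
    class TinyWrapper
      def initialize(vars)
        @vars = vars
      end

      def lookup(name)
        @vars[name]
      end

      def render(template_text)
        ERB.new(template_text).result(binding)
      end
    end

    puts TinyWrapper.new('greeting' => 'hello').render("<%= lookup('greeting') %> world")
    # => "hello world"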
debug "Retrieving template #{file}" wrapper = Puppet::Parser::TemplateWrapper.new(self) wrapper.file = file begin wrapper.result rescue => detail info = detail.backtrace.first.split(':') raise Puppet::ParseError, "Failed to parse template #{file}:\n Filepath: #{info[0]}\n Line: #{info[1]}\n Detail: #{detail}\n" end end.join("") end diff --git a/lib/puppet/parser/type_loader.rb b/lib/puppet/parser/type_loader.rb index 65579a820..861a098a5 100644 --- a/lib/puppet/parser/type_loader.rb +++ b/lib/puppet/parser/type_loader.rb @@ -1,173 +1,172 @@ +require 'find' require 'puppet/node/environment' class Puppet::Parser::TypeLoader include Puppet::Node::Environment::Helper # Helper class that makes sure we don't try to import the same file # more than once from either the same thread or different threads. class Helper include MonitorMixin def initialize super # These hashes are indexed by filename @state = {} # :doing or :done @thread = {} # if :doing, thread that's doing the parsing @cond_var = {} # if :doing, condition var that will be signaled when done. end # Execute the supplied block exactly once per file, no matter how # many threads have asked for it to run. If another thread is # already executing it, wait for it to finish. If this thread is # already executing it, return immediately without executing the # block. # # Note: the reason for returning immediately if this thread is # already executing the block is to handle the case of a circular # import--when this happens, we attempt to recursively re-parse a # file that we are already in the process of parsing. To prevent # an infinite regress we need to simply do nothing when the # recursive import is attempted. def do_once(file) need_to_execute = synchronize do case @state[file] when :doing if @thread[file] != Thread.current @cond_var[file].wait end false when :done false else @state[file] = :doing @thread[file] = Thread.current @cond_var[file] = new_cond true end end if need_to_execute begin yield ensure synchronize do @state[file] = :done @thread.delete(file) @cond_var.delete(file).broadcast end end end end end # Import our files. def import(file, current_file = nil) return if Puppet[:ignoreimport] # use a path relative to the file doing the importing if current_file dir = current_file.sub(%r{[^/]+$},'').sub(/\/$/, '') else dir = "." end if dir == "" dir = "." end pat = file modname, files = Puppet::Parser::Files.find_manifests(pat, :cwd => dir, :environment => environment) if files.size == 0 raise Puppet::ImportError.new("No file(s) found for import of '#{pat}'") end loaded_asts = [] files.each do |file| unless Puppet::Util.absolute_path?(file) file = File.join(dir, file) end @loading_helper.do_once(file) do loaded_asts << parse_file(file) end end loaded_asts.inject([]) do |loaded_types, ast| loaded_types + known_resource_types.import_ast(ast, modname) end end def import_all - require 'find' - module_names = [] # Collect the list of all known modules environment.modulepath.each do |path| Dir.chdir(path) do Dir.glob("*").each do |dir| next unless FileTest.directory?(dir) module_names << dir end end end module_names.uniq! # And then load all files from each module, but (relying on system # behavior) only load files from the first module of a given name. E.g., # given first/foo and second/foo, only files from first/foo will be loaded. 
module_names.each do |name| mod = Puppet::Module.new(name, :environment => environment) Find.find(File.join(mod.path, "manifests")) do |path| if path =~ /\.pp$/ or path =~ /\.rb$/ import(path) end end end end def known_resource_types environment.known_resource_types end def initialize(env) self.environment = env @loading_helper = Helper.new end # Try to load the object with the given fully qualified name. def try_load_fqname(type, fqname) return nil if fqname == "" # special-case main. name2files(fqname).each do |filename| begin imported_types = import(filename) if result = imported_types.find { |t| t.type == type and t.name == fqname } Puppet.debug "Automatically imported #{fqname} from #{filename} into #{environment}" return result end rescue Puppet::ImportError => detail # We couldn't load the item # I'm not convienced we should just drop these errors, but this # preserves existing behaviours. end end # Nothing found. return nil end def parse_file(file) Puppet.debug("importing '#{file}' in environment #{environment}") parser = Puppet::Parser::Parser.new(environment) parser.file = file return parser.parse end private # Return a list of all file basenames that should be tried in order # to load the object with the given fully qualified name. def name2files(fqname) result = [] ary = fqname.split("::") while ary.length > 0 result << ary.join(File::SEPARATOR) ary.pop end return result end end diff --git a/lib/puppet/provider/augeas/augeas.rb b/lib/puppet/provider/augeas/augeas.rb index 6998121d7..75ea9b940 100644 --- a/lib/puppet/provider/augeas/augeas.rb +++ b/lib/puppet/provider/augeas/augeas.rb @@ -1,400 +1,418 @@ # # Copyright 2011 Bryan Kearney # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'augeas' if Puppet.features.augeas? require 'strscan' require 'puppet/util' require 'puppet/util/diff' require 'puppet/util/package' Puppet::Type.type(:augeas).provide(:augeas) do include Puppet::Util include Puppet::Util::Diff include Puppet::Util::Package confine :true => Puppet.features.augeas? has_features :parse_commands, :need_to_run?,:execute_changes SAVE_NOOP = "noop" SAVE_OVERWRITE = "overwrite" SAVE_NEWFILE = "newfile" SAVE_BACKUP = "backup" COMMANDS = { "set" => [ :path, :string ], "setm" => [ :path, :string, :string ], "rm" => [ :path ], "clear" => [ :path ], "mv" => [ :path, :path ], "insert" => [ :string, :string, :path ], "get" => [ :path, :comparator, :string ], "defvar" => [ :string, :path ], "defnode" => [ :string, :path, :string ], "match" => [ :path, :glob ], "size" => [:comparator, :int], "include" => [:string], "not_include" => [:string], "==" => [:glob], "!=" => [:glob] } COMMANDS["ins"] = COMMANDS["insert"] COMMANDS["remove"] = COMMANDS["rm"] COMMANDS["move"] = COMMANDS["mv"] attr_accessor :aug # Extracts an 2 dimensional array of commands which are in the # form of command path value. # The input can be # - A string with one command # - A string with many commands per line # - An array of strings. 
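To make the accepted input shapes concrete before the parser below, here are a few illustrative changes values and the rough token arrays they are expected to produce (paths and values are invented for the example):

    # A single command as one string:
    #   "set /files/etc/hosts/01/alias myhost"
    #   is expected to parse to [["set", "/files/etc/hosts/01/alias", "myhost"]]
    #
    # Several commands, one per line, or an array of strings work the same way;
    # relative paths additionally get the resource's context prepended.
    changes = [
      "set /files/etc/ssh/sshd_config/PermitRootLogin no",
      "rm /files/etc/ssh/sshd_config/UseDNS",
    ]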
def parse_commands(data) context = resource[:context] # Add a trailing / if it is not there if (context.length > 0) context << "/" if context[-1, 1] != "/" end data = data.split($/) if data.is_a?(String) data = data.flatten args = [] data.each do |line| line.strip! next if line.nil? || line.empty? argline = [] sc = StringScanner.new(line) cmd = sc.scan(/\w+|==|!=/) formals = COMMANDS[cmd] fail("Unknown command #{cmd}") unless formals argline << cmd narg = 0 formals.each do |f| sc.skip(/\s+/) narg += 1 if f == :path start = sc.pos nbracket = 0 inSingleTick = false inDoubleTick = false begin sc.skip(/([^\]\[\s\\'"]|\\.)+/) ch = sc.getch nbracket += 1 if ch == "[" nbracket -= 1 if ch == "]" inSingleTick = !inSingleTick if ch == "'" inDoubleTick = !inDoubleTick if ch == "\"" fail("unmatched [") if nbracket < 0 end until ((nbracket == 0 && !inSingleTick && !inDoubleTick && (ch =~ /\s/)) || sc.eos?) len = sc.pos - start len -= 1 unless sc.eos? unless p = sc.string[start, len] fail("missing path argument #{narg} for #{cmd}") end # Rip off any ticks if they are there. p = p[1, (p.size - 2)] if p[0,1] == "'" || p[0,1] == "\"" p.chomp!("/") if p[0,1] != '$' && p[0,1] != "/" argline << context + p else argline << p end elsif f == :string delim = sc.peek(1) if delim == "'" || delim == "\"" sc.getch argline << sc.scan(/([^\\#{delim}]|(\\.))*/) sc.getch else argline << sc.scan(/[^\s]+/) end fail("missing string argument #{narg} for #{cmd}") unless argline[-1] elsif f == :comparator argline << sc.scan(/(==|!=|=~|<|<=|>|>=)/) unless argline[-1] puts sc.rest fail("invalid comparator for command #{cmd}") end elsif f == :int argline << sc.scan(/\d+/).to_i elsif f== :glob argline << sc.rest end end args << argline end args end def open_augeas unless @aug flags = Augeas::NONE flags = Augeas::TYPE_CHECK if resource[:type_check] == :true flags |= Augeas::NO_MODL_AUTOLOAD if resource[:incl] root = resource[:root] - load_path = resource[:load_path] + load_path = get_load_path(resource) debug("Opening augeas with root #{root}, lens path #{load_path}, flags #{flags}") @aug = Augeas::open(root, load_path,flags) debug("Augeas version #{get_augeas_version} is installed") if versioncmp(get_augeas_version, "0.3.6") >= 0 if resource[:incl] aug.set("/augeas/load/Xfm/lens", resource[:lens]) aug.set("/augeas/load/Xfm/incl", resource[:incl]) aug.load end end @aug end def close_augeas if @aug @aug.close debug("Closed the augeas connection") @aug = nil end end # Used by the need_to_run? method to process get filters. Returns # true if there is a match, false if otherwise # Assumes a syntax of get /files/path [COMPARATOR] value def process_get(cmd_array) return_value = false #validate and tear apart the command fail ("Invalid command: #{cmd_array.join(" ")}") if cmd_array.length < 4 cmd = cmd_array.shift path = cmd_array.shift comparator = cmd_array.shift arg = cmd_array.join(" ") #check the value in augeas result = @aug.get(path) || '' case comparator when "!=" return_value = (result != arg) when "=~" regex = Regexp.new(arg) return_value = (result =~ regex) else return_value = (result.send(comparator, arg)) end !!return_value end # Used by the need_to_run? method to process match filters. 
Returns # true if there is a match, false if otherwise def process_match(cmd_array) return_value = false #validate and tear apart the command fail("Invalid command: #{cmd_array.join(" ")}") if cmd_array.length < 3 cmd = cmd_array.shift path = cmd_array.shift # Need to break apart the clause clause_array = parse_commands(cmd_array.shift)[0] verb = clause_array.shift #Get the values from augeas result = @aug.match(path) || [] fail("Error trying to match path '#{path}'") if (result == -1) # Now do the work case verb when "size" fail("Invalid command: #{cmd_array.join(" ")}") if clause_array.length != 2 comparator = clause_array.shift arg = clause_array.shift case comparator when "!=" return_value = !(result.size.send(:==, arg)) else return_value = (result.size.send(comparator, arg)) end when "include" arg = clause_array.shift return_value = result.include?(arg) when "not_include" arg = clause_array.shift return_value = !result.include?(arg) when "==" begin arg = clause_array.shift new_array = eval arg return_value = (result == new_array) rescue fail("Invalid array in command: #{cmd_array.join(" ")}") end when "!=" begin arg = clause_array.shift new_array = eval arg return_value = (result != new_array) rescue fail("Invalid array in command: #{cmd_array.join(" ")}") end end !!return_value end + # Generate lens load paths from user given paths and local pluginsync dir + def get_load_path(resource) + load_path = [] + + # Permits colon separated strings or arrays + if resource[:load_path] + load_path = [resource[:load_path]].flatten + load_path.map! { |path| path.split(/:/) } + load_path.flatten! + end + + if File.exists?("#{Puppet[:libdir]}/augeas/lenses") + load_path << "#{Puppet[:libdir]}/augeas/lenses" + end + + load_path.join(":") + end + def get_augeas_version @aug.get("/augeas/version") || "" end def set_augeas_save_mode(mode) @aug.set("/augeas/save", mode) end # Determines if augeas acutally needs to run. def need_to_run? force = resource[:force] return_value = true begin open_augeas filter = resource[:onlyif] unless filter == "" cmd_array = parse_commands(filter)[0] command = cmd_array[0]; begin case command when "get"; return_value = process_get(cmd_array) when "match"; return_value = process_match(cmd_array) end rescue SystemExit,NoMemoryError raise rescue Exception => e fail("Error sending command '#{command}' with params #{cmd_array[1..-1].inspect}/#{e.message}") end end unless force # If we have a verison of augeas which is at least 0.3.6 then we # can make the changes now and see if changes were made. if return_value and versioncmp(get_augeas_version, "0.3.6") >= 0 debug("Will attempt to save and only run if files changed") # Execute in NEWFILE mode so we can show a diff set_augeas_save_mode(SAVE_NEWFILE) do_execute_changes save_result = @aug.save fail("Save failed with return code #{save_result}") unless save_result saved_files = @aug.match("/augeas/events/saved") if saved_files.size > 0 root = resource[:root].sub(/^\/$/, "") saved_files.map! {|key| @aug.get(key).sub(/^\/files/, root) } saved_files.uniq.each do |saved_file| if Puppet[:show_diff] notice "\n" + diff(saved_file, saved_file + ".augnew") end File.delete(saved_file + ".augnew") end debug("Files changed, should execute") return_value = true else debug("Skipping because no files were changed") return_value = false end end end ensure if not return_value or resource.noop? 
or not save_result close_augeas end end return_value end def execute_changes # Workaround Augeas bug where changing the save mode doesn't trigger a # reload of the previously saved file(s) when we call Augeas#load @aug.match("/augeas/events/saved").each do |file| @aug.rm("/augeas#{@aug.get(file)}/mtime") end # Reload augeas, and execute the changes for real set_augeas_save_mode(SAVE_OVERWRITE) if versioncmp(get_augeas_version, "0.3.6") >= 0 @aug.load do_execute_changes success = @aug.save fail("Save failed with return code #{success}") if success != true :executed ensure close_augeas end # Actually execute the augeas changes. def do_execute_changes commands = parse_commands(resource[:changes]) commands.each do |cmd_array| fail("invalid command #{cmd_array.join[" "]}") if cmd_array.length < 2 command = cmd_array[0] cmd_array.shift begin case command when "set" debug("sending command '#{command}' with params #{cmd_array.inspect}") rv = aug.set(cmd_array[0], cmd_array[1]) fail("Error sending command '#{command}' with params #{cmd_array.inspect}") if (!rv) when "setm" debug("sending command '#{command}' with params #{cmd_array.inspect}") rv = aug.setm(cmd_array[0], cmd_array[1], cmd_array[2]) fail("Error sending command '#{command}' with params #{cmd_array.inspect}") if (rv == -1) when "rm", "remove" debug("sending command '#{command}' with params #{cmd_array.inspect}") rv = aug.rm(cmd_array[0]) fail("Error sending command '#{command}' with params #{cmd_array.inspect}") if (rv == -1) when "clear" debug("sending command '#{command}' with params #{cmd_array.inspect}") rv = aug.clear(cmd_array[0]) fail("Error sending command '#{command}' with params #{cmd_array.inspect}") if (!rv) when "insert", "ins" label = cmd_array[0] where = cmd_array[1] path = cmd_array[2] case where when "before"; before = true when "after"; before = false else fail("Invalid value '#{where}' for where param") end debug("sending command '#{command}' with params #{[label, where, path].inspect}") rv = aug.insert(path, label, before) fail("Error sending command '#{command}' with params #{cmd_array.inspect}") if (rv == -1) when "defvar" debug("sending command '#{command}' with params #{cmd_array.inspect}") rv = aug.defvar(cmd_array[0], cmd_array[1]) fail("Error sending command '#{command}' with params #{cmd_array.inspect}") if (!rv) when "defnode" debug("sending command '#{command}' with params #{cmd_array.inspect}") rv = aug.defnode(cmd_array[0], cmd_array[1], cmd_array[2]) fail("Error sending command '#{command}' with params #{cmd_array.inspect}") if (!rv) when "mv", "move" debug("sending command '#{command}' with params #{cmd_array.inspect}") rv = aug.mv(cmd_array[0], cmd_array[1]) fail("Error sending command '#{command}' with params #{cmd_array.inspect}") if (rv == -1) else fail("Command '#{command}' is not supported") end rescue SystemExit,NoMemoryError raise rescue Exception => e fail("Error sending command '#{command}' with params #{cmd_array.inspect}/#{e.message}") end end end end diff --git a/lib/puppet/provider/package/msi.rb b/lib/puppet/provider/package/msi.rb index 86b5eacdc..575d4562a 100644 --- a/lib/puppet/provider/package/msi.rb +++ b/lib/puppet/provider/package/msi.rb @@ -1,95 +1,141 @@ require 'puppet/provider/package' Puppet::Type.type(:package).provide(:msi, :parent => Puppet::Provider::Package) do desc "Windows package management by installing and removing MSIs. This provider requires a `source` attribute, and will accept paths to local - files or files on mapped drives. 
- - This provider cannot uninstall arbitrary MSI packages; it can only uninstall - packages which were originally installed by Puppet." + files, mapped drives, or UNC paths." confine :operatingsystem => :windows defaultfor :operatingsystem => :windows + has_feature :installable + has_feature :uninstallable has_feature :install_options - # This is just here to make sure we can find it, and fail if we - # can't. Unfortunately, we need to do "special" quoting of the - # install options or msiexec.exe won't know what to do with them, if - # the value contains a space. - commands :msiexec => "msiexec.exe" + class MsiPackage + extend Enumerable - def self.instances - Dir.entries(installed_listing_dir).reject {|d| d == '.' or d == '..'}.collect do |name| - new(:name => File.basename(name, '.yml'), :provider => :msi, :ensure => :installed) + # From msi.h + INSTALLSTATE_DEFAULT = 5 # product is installed for the current user + INSTALLUILEVEL_NONE = 2 # completely silent installation + + def self.installer + require 'win32ole' + WIN32OLE.new("WindowsInstaller.Installer") + end + + def self.each(&block) + inst = installer + inst.UILevel = INSTALLUILEVEL_NONE + + inst.Products.each do |guid| + # products may be advertised, installed in a different user + # context, etc, we only want to know about products currently + # installed in our context. + next unless inst.ProductState(guid) == INSTALLSTATE_DEFAULT + + package = { + :name => inst.ProductInfo(guid, 'ProductName'), + # although packages have a version, the provider isn't versionable, + # so we can't return a version + # :ensure => inst.ProductInfo(guid, 'VersionString'), + :ensure => :installed, + :provider => :msi, + :productcode => guid, + :packagecode => inst.ProductInfo(guid, 'PackageCode') + } + + yield package + end end end + # Get an array of provider instances for currently installed packages + def self.instances + MsiPackage.enum_for.map { |package| new(package) } + end + + # Find first package whose PackageCode, e.g. {B2BE95D2-CD2C-46D6-8D27-35D150E58EC9}, + # matches the resource name (case-insensitively due to hex) or the ProductName matches + # the resource name. The ProductName is not guaranteed to be unique, but the PackageCode + # should be if the package is authored correctly. def query - {:name => resource[:name], :ensure => :installed} if FileTest.exists?(state_file) + MsiPackage.enum_for.find do |package| + resource[:name].casecmp(package[:packagecode]) == 0 || resource[:name] == package[:name] + end end def install - properties_for_command = nil - if resource[:install_options] - properties_for_command = resource[:install_options].collect do |k,v| - property = shell_quote k - value = shell_quote v - - "#{property}=#{value}" - end - end + fail("The source parameter is required when using the MSI provider.") unless resource[:source] # Unfortunately, we can't use the msiexec method defined earlier, # because of the special quoting we need to do around the MSI # properties to use. 
- execute ['msiexec.exe', '/qn', '/norestart', '/i', shell_quote(msi_source), properties_for_command].flatten.compact.join(' ') - - File.open(state_file, 'w') do |f| - metadata = { - 'name' => resource[:name], - 'install_options' => resource[:install_options], - 'source' => msi_source - } + command = ['msiexec.exe', '/qn', '/norestart', '/i', shell_quote(resource[:source]), install_options].flatten.compact.join(' ') + execute(command, :combine => true) - f.puts(YAML.dump(metadata)) - end + check_result(exit_status) end def uninstall - msiexec '/qn', '/norestart', '/x', msi_source + fail("The productcode property is missing.") unless properties[:productcode] - File.delete state_file - end + command = ['msiexec.exe', '/qn', '/norestart', '/x', properties[:productcode]].flatten.compact.join(' ') + execute(command, :combine => true) - def validate_source(value) - fail("The source parameter cannot be empty when using the MSI provider.") if value.empty? + check_result(exit_status) end - private - - def msi_source - resource[:source] ||= YAML.load_file(state_file)['source'] rescue nil - - fail("The source parameter is required when using the MSI provider.") unless resource[:source] + def exit_status + $CHILD_STATUS.exitstatus + end - resource[:source] + # http://msdn.microsoft.com/en-us/library/windows/desktop/aa368542(v=vs.85).aspx + ERROR_SUCCESS = 0 + ERROR_SUCCESS_REBOOT_INITIATED = 1641 + ERROR_SUCCESS_REBOOT_REQUIRED = 3010 + + # (Un)install may "fail" because the package requested a reboot, the system requested a + # reboot, or something else entirely. Reboot requests mean the package was installed + # successfully, but we warn since we don't have a good reboot strategy. + def check_result(hr) + operation = resource[:ensure] == :absent ? 'uninstall' : 'install' + + case hr + when ERROR_SUCCESS + # yeah + when 194 + warning("The package requested a reboot to finish the operation.") + when ERROR_SUCCESS_REBOOT_INITIATED + warning("The package #{operation}ed successfully and the system is rebooting now.") + when ERROR_SUCCESS_REBOOT_REQUIRED + warning("The package #{operation}ed successfully, but the system must be rebooted.") + else + raise Puppet::Util::Windows::Error.new("Failed to #{operation}", hr) + end end - def self.installed_listing_dir - listing_dir = File.join(Puppet[:vardir], 'db', 'package', 'msi') + def validate_source(value) + fail("The source parameter cannot be empty when using the MSI provider.") if value.empty? + end - FileUtils.mkdir_p listing_dir unless File.directory? listing_dir + def install_options + # properties is a string delimited by spaces, so each key value must be quoted + properties_for_command = nil + if resource[:install_options] + properties_for_command = resource[:install_options].collect do |k,v| + property = shell_quote k + value = shell_quote v - listing_dir - end + "#{property}=#{value}" + end + end - def state_file - File.join(self.class.installed_listing_dir, "#{resource[:name]}.yml") + properties_for_command end def shell_quote(value) value.include?(' ') ? %Q["#{value.gsub(/"/, '\"')}"] : value end end diff --git a/lib/puppet/provider/scheduled_task/win32_taskscheduler.rb b/lib/puppet/provider/scheduled_task/win32_taskscheduler.rb index 3e8f2bc9b..b9491294d 100644 --- a/lib/puppet/provider/scheduled_task/win32_taskscheduler.rb +++ b/lib/puppet/provider/scheduled_task/win32_taskscheduler.rb @@ -1,564 +1,565 @@ require 'puppet/parameter' if Puppet.features.microsoft_windows? 
require 'win32/taskscheduler' require 'puppet/util/adsi' end Puppet::Type.type(:scheduled_task).provide(:win32_taskscheduler) do desc %q{This provider uses the win32-taskscheduler gem to manage scheduled tasks on Windows. Puppet requires version 0.2.1 or later of the win32-taskscheduler gem; previous versions can cause "Could not evaluate: The operation completed successfully" errors.} defaultfor :operatingsystem => :windows confine :operatingsystem => :windows def self.instances Win32::TaskScheduler.new.tasks.collect do |job_file| job_title = File.basename(job_file, '.job') new( :provider => :win32_taskscheduler, :name => job_title ) end end def exists? Win32::TaskScheduler.new.exists? resource[:name] end def task return @task if @task @task ||= Win32::TaskScheduler.new @task.activate(resource[:name] + '.job') if exists? @task end def clear_task @task = nil @triggers = nil end def enabled task.flags & Win32::TaskScheduler::DISABLED == 0 ? :true : :false end def command task.application_name end def arguments task.parameters end def working_dir task.working_directory end def user account = task.account_information return 'system' if account == '' account end def trigger return @triggers if @triggers @triggers = [] task.trigger_count.times do |i| trigger = begin task.trigger(i) rescue Win32::TaskScheduler::Error => e # Win32::TaskScheduler can't handle all of the # trigger types Windows uses, so we need to skip the # unhandled types to prevent "puppet resource" from # blowing up. nil end next unless trigger and scheduler_trigger_types.include?(trigger['trigger_type']) puppet_trigger = {} case trigger['trigger_type'] when Win32::TaskScheduler::TASK_TIME_TRIGGER_DAILY puppet_trigger['schedule'] = 'daily' puppet_trigger['every'] = trigger['type']['days_interval'].to_s when Win32::TaskScheduler::TASK_TIME_TRIGGER_WEEKLY puppet_trigger['schedule'] = 'weekly' puppet_trigger['every'] = trigger['type']['weeks_interval'].to_s puppet_trigger['on'] = days_of_week_from_bitfield(trigger['type']['days_of_week']) when Win32::TaskScheduler::TASK_TIME_TRIGGER_MONTHLYDATE puppet_trigger['schedule'] = 'monthly' puppet_trigger['months'] = months_from_bitfield(trigger['type']['months']) puppet_trigger['on'] = days_from_bitfield(trigger['type']['days']) when Win32::TaskScheduler::TASK_TIME_TRIGGER_MONTHLYDOW puppet_trigger['schedule'] = 'monthly' puppet_trigger['months'] = months_from_bitfield(trigger['type']['months']) puppet_trigger['which_occurrence'] = occurrence_constant_to_name(trigger['type']['weeks']) puppet_trigger['day_of_week'] = days_of_week_from_bitfield(trigger['type']['days_of_week']) when Win32::TaskScheduler::TASK_TIME_TRIGGER_ONCE puppet_trigger['schedule'] = 'once' end puppet_trigger['start_date'] = self.class.normalized_date("#{trigger['start_year']}-#{trigger['start_month']}-#{trigger['start_day']}") puppet_trigger['start_time'] = self.class.normalized_time("#{trigger['start_hour']}:#{trigger['start_minute']}") puppet_trigger['enabled'] = trigger['flags'] & Win32::TaskScheduler::TASK_TRIGGER_FLAG_DISABLED == 0 puppet_trigger['index'] = i @triggers << puppet_trigger end @triggers = @triggers[0] if @triggers.length == 1 @triggers end def user_insync?(current, should) return false unless current # Win32::TaskScheduler can return the 'SYSTEM' account as the # empty string. current = 'system' if current == '' # By comparing account SIDs we don't have to worry about case # sensitivity, or canonicalization of the account name. 
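The user_insync? comparison that appears further down normalises both account names to SIDs before comparing, as its comment explains. Stripped of the Puppet helpers, the idea is simply this; sid_for stands in for Puppet::Util::ADSI.sid_for_account:

    # Two account names refer to the same user when they resolve to the same SID,
    # regardless of case or of how the name was written (e.g. DOMAIN\user vs user).
    def same_account?(current, desired, &sid_for)
      current = 'system' if current == ''   # the scheduler reports SYSTEM as ''
      sid_for.call(current) == sid_for.call(desired)
    end

    # same_account?('Administrator', 'MYHOST\\Administrator') { |name| lookup_sid(name) }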
Puppet::Util::ADSI.sid_for_account(current) == Puppet::Util::ADSI.sid_for_account(should[0]) end def trigger_insync?(current, should) should = [should] unless should.is_a?(Array) current = [current] unless current.is_a?(Array) return false unless current.length == should.length current_in_sync = current.all? do |c| should.any? {|s| triggers_same?(c, s)} end should_in_sync = should.all? do |s| current.any? {|c| triggers_same?(c,s)} end current_in_sync && should_in_sync end def command=(value) task.application_name = value end def arguments=(value) task.parameters = value end def working_dir=(value) task.working_directory = value end def enabled=(value) if value == :true task.flags = task.flags & ~Win32::TaskScheduler::DISABLED else task.flags = task.flags | Win32::TaskScheduler::DISABLED end end def trigger=(value) desired_triggers = value.is_a?(Array) ? value : [value] current_triggers = trigger.is_a?(Array) ? trigger : [trigger] extra_triggers = [] desired_to_search = desired_triggers.dup current_triggers.each do |current| if found = desired_to_search.find {|desired| triggers_same?(current, desired)} desired_to_search.delete(found) else extra_triggers << current['index'] end end needed_triggers = [] current_to_search = current_triggers.dup desired_triggers.each do |desired| if found = current_to_search.find {|current| triggers_same?(current, desired)} current_to_search.delete(found) else needed_triggers << desired end end extra_triggers.reverse_each do |index| task.delete_trigger(index) end needed_triggers.each do |trigger_hash| # Even though this is an assignment, the API for # Win32::TaskScheduler ends up appending this trigger to the # list of triggers for the task, while #add_trigger is only able # to replace existing triggers. *shrug* task.trigger = translate_hash_to_trigger(trigger_hash) end end def user=(value) self.fail("Invalid user: #{value}") unless Puppet::Util::ADSI.sid_for_account(value) if value.to_s.downcase != 'system' task.set_account_information(value, resource[:password]) else # Win32::TaskScheduler treats a nil/empty username & password as # requesting the SYSTEM account. 
task.set_account_information(nil, nil) end end def create clear_task @task = Win32::TaskScheduler.new(resource[:name], dummy_time_trigger) self.command = resource[:command] [:arguments, :working_dir, :enabled, :trigger, :user].each do |prop| send("#{prop}=", resource[prop]) if resource[prop] end end def destroy Win32::TaskScheduler.new.delete(resource[:name] + '.job') end def flush unless resource[:ensure] == :absent self.fail('Parameter command is required.') unless resource[:command] task.save + @task = nil end end def triggers_same?(current_trigger, desired_trigger) return false unless current_trigger['schedule'] == desired_trigger['schedule'] return false if current_trigger.has_key?('enabled') && !current_trigger['enabled'] desired = desired_trigger.dup desired['every'] ||= current_trigger['every'] if current_trigger.has_key?('every') desired['months'] ||= current_trigger['months'] if current_trigger.has_key?('months') desired['on'] ||= current_trigger['on'] if current_trigger.has_key?('on') desired['day_of_week'] ||= current_trigger['day_of_week'] if current_trigger.has_key?('day_of_week') translate_hash_to_trigger(current_trigger) == translate_hash_to_trigger(desired) end def self.normalized_date(date_string) date = Date.parse("#{date_string}") "#{date.year}-#{date.month}-#{date.day}" end def self.normalized_time(time_string) Time.parse("#{time_string}").strftime('%H:%M') end def dummy_time_trigger now = Time.now { 'flags' => 0, 'random_minutes_interval' => 0, 'end_day' => 0, "end_year" => 0, "trigger_type" => 0, "minutes_interval" => 0, "end_month" => 0, "minutes_duration" => 0, 'start_year' => now.year, 'start_month' => now.month, 'start_day' => now.day, 'start_hour' => now.hour, 'start_minute' => now.min, 'trigger_type' => Win32::TaskScheduler::ONCE, } end def translate_hash_to_trigger(puppet_trigger, user_provided_input=false) trigger = dummy_time_trigger if user_provided_input self.fail "'enabled' is read-only on triggers" if puppet_trigger.has_key?('enabled') self.fail "'index' is read-only on triggers" if puppet_trigger.has_key?('index') end puppet_trigger.delete('index') if puppet_trigger.delete('enabled') == false trigger['flags'] |= Win32::TaskScheduler::TASK_TRIGGER_FLAG_DISABLED else trigger['flags'] &= ~Win32::TaskScheduler::TASK_TRIGGER_FLAG_DISABLED end extra_keys = puppet_trigger.keys.sort - ['schedule', 'start_date', 'start_time', 'every', 'months', 'on', 'which_occurrence', 'day_of_week'] self.fail "Unknown trigger option(s): #{Puppet::Parameter.format_value_for_display(extra_keys)}" unless extra_keys.empty? 
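    # For orientation, a typical trigger hash reaching this point (values
    # assumed) looks like:
    #
    #   { 'schedule'    => 'weekly',
    #     'start_time'  => '08:00',
    #     'every'       => '2',
    #     'day_of_week' => ['mon', 'wed', 'fri'] }
    #
    # i.e. only the keys permitted above, with 'start_time' always required.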
self.fail "Must specify 'start_time' when defining a trigger" unless puppet_trigger['start_time'] case puppet_trigger['schedule'] when 'daily' trigger['trigger_type'] = Win32::TaskScheduler::DAILY trigger['type'] = { 'days_interval' => Integer(puppet_trigger['every'] || 1) } when 'weekly' trigger['trigger_type'] = Win32::TaskScheduler::WEEKLY trigger['type'] = { 'weeks_interval' => Integer(puppet_trigger['every'] || 1) } trigger['type']['days_of_week'] = if puppet_trigger['day_of_week'] bitfield_from_days_of_week(puppet_trigger['day_of_week']) else scheduler_days_of_week.inject(0) {|day_flags,day| day_flags |= day} end when 'monthly' trigger['type'] = { 'months' => bitfield_from_months(puppet_trigger['months'] || (1..12).to_a), } if puppet_trigger.keys.include?('on') if puppet_trigger.has_key?('day_of_week') or puppet_trigger.has_key?('which_occurrence') self.fail "Neither 'day_of_week' nor 'which_occurrence' can be specified when creating a monthly date-based trigger" end trigger['trigger_type'] = Win32::TaskScheduler::MONTHLYDATE trigger['type']['days'] = bitfield_from_days(puppet_trigger['on']) elsif puppet_trigger.keys.include?('which_occurrence') or puppet_trigger.keys.include?('day_of_week') self.fail 'which_occurrence cannot be specified as an array' if puppet_trigger['which_occurrence'].is_a?(Array) %w{day_of_week which_occurrence}.each do |field| self.fail "#{field} must be specified when creating a monthly day-of-week based trigger" unless puppet_trigger.has_key?(field) end trigger['trigger_type'] = Win32::TaskScheduler::MONTHLYDOW trigger['type']['weeks'] = occurrence_name_to_constant(puppet_trigger['which_occurrence']) trigger['type']['days_of_week'] = bitfield_from_days_of_week(puppet_trigger['day_of_week']) else self.fail "Don't know how to create a 'monthly' schedule with the options: #{puppet_trigger.keys.sort.join(', ')}" end when 'once' self.fail "Must specify 'start_date' when defining a one-time trigger" unless puppet_trigger['start_date'] trigger['trigger_type'] = Win32::TaskScheduler::ONCE else self.fail "Unknown schedule type: #{puppet_trigger["schedule"].inspect}" end if start_date = puppet_trigger['start_date'] start_date = Date.parse(start_date) self.fail "start_date must be on or after 1753-01-01" unless start_date >= Date.new(1753, 1, 1) trigger['start_year'] = start_date.year trigger['start_month'] = start_date.month trigger['start_day'] = start_date.day end start_time = Time.parse(puppet_trigger['start_time']) trigger['start_hour'] = start_time.hour trigger['start_minute'] = start_time.min trigger end def validate_trigger(value) value = [value] unless value.is_a?(Array) # translate_hash_to_trigger handles the same validation that we # would be doing here at the individual trigger level. value.each {|t| translate_hash_to_trigger(t, true)} true end private def bitfield_from_months(months) bitfield = 0 months = [months] unless months.is_a?(Array) months.each do |month| integer_month = Integer(month) rescue nil self.fail 'Month must be specified as an integer in the range 1-12' unless integer_month == month.to_f and integer_month.between?(1,12) bitfield |= scheduler_months[integer_month - 1] end bitfield end def bitfield_from_days(days) bitfield = 0 days = [days] unless days.is_a?(Array) days.each do |day| # The special "day" of 'last' is represented by day "number" # 32. 'last' has the special meaning of "the last day of the # month", no matter how many days there are in the month. 
day = 32 if day == 'last' integer_day = Integer(day) self.fail "Day must be specified as an integer in the range 1-31, or as 'last'" unless integer_day = day.to_f and integer_day.between?(1,32) bitfield |= 1 << integer_day - 1 end bitfield end def bitfield_from_days_of_week(days_of_week) bitfield = 0 days_of_week = [days_of_week] unless days_of_week.is_a?(Array) days_of_week.each do |day_of_week| bitfield |= day_of_week_name_to_constant(day_of_week) end bitfield end def months_from_bitfield(bitfield) months = [] scheduler_months.each do |month| if bitfield & month != 0 months << month_constant_to_number(month) end end months end def days_from_bitfield(bitfield) days = [] i = 0 while bitfield > 0 if bitfield & 1 > 0 # Day 32 has the special meaning of "the last day of the # month", no matter how many days there are in the month. days << (i == 31 ? 'last' : i + 1) end bitfield = bitfield >> 1 i += 1 end days end def days_of_week_from_bitfield(bitfield) days_of_week = [] scheduler_days_of_week.each do |day_of_week| if bitfield & day_of_week != 0 days_of_week << day_of_week_constant_to_name(day_of_week) end end days_of_week end def scheduler_trigger_types [ Win32::TaskScheduler::TASK_TIME_TRIGGER_DAILY, Win32::TaskScheduler::TASK_TIME_TRIGGER_WEEKLY, Win32::TaskScheduler::TASK_TIME_TRIGGER_MONTHLYDATE, Win32::TaskScheduler::TASK_TIME_TRIGGER_MONTHLYDOW, Win32::TaskScheduler::TASK_TIME_TRIGGER_ONCE ] end def scheduler_days_of_week [ Win32::TaskScheduler::SUNDAY, Win32::TaskScheduler::MONDAY, Win32::TaskScheduler::TUESDAY, Win32::TaskScheduler::WEDNESDAY, Win32::TaskScheduler::THURSDAY, Win32::TaskScheduler::FRIDAY, Win32::TaskScheduler::SATURDAY ] end def scheduler_months [ Win32::TaskScheduler::JANUARY, Win32::TaskScheduler::FEBRUARY, Win32::TaskScheduler::MARCH, Win32::TaskScheduler::APRIL, Win32::TaskScheduler::MAY, Win32::TaskScheduler::JUNE, Win32::TaskScheduler::JULY, Win32::TaskScheduler::AUGUST, Win32::TaskScheduler::SEPTEMBER, Win32::TaskScheduler::OCTOBER, Win32::TaskScheduler::NOVEMBER, Win32::TaskScheduler::DECEMBER ] end def scheduler_occurrences [ Win32::TaskScheduler::FIRST_WEEK, Win32::TaskScheduler::SECOND_WEEK, Win32::TaskScheduler::THIRD_WEEK, Win32::TaskScheduler::FOURTH_WEEK, Win32::TaskScheduler::LAST_WEEK ] end def day_of_week_constant_to_name(constant) case constant when Win32::TaskScheduler::SUNDAY; 'sun' when Win32::TaskScheduler::MONDAY; 'mon' when Win32::TaskScheduler::TUESDAY; 'tues' when Win32::TaskScheduler::WEDNESDAY; 'wed' when Win32::TaskScheduler::THURSDAY; 'thurs' when Win32::TaskScheduler::FRIDAY; 'fri' when Win32::TaskScheduler::SATURDAY; 'sat' end end def day_of_week_name_to_constant(name) case name when 'sun'; Win32::TaskScheduler::SUNDAY when 'mon'; Win32::TaskScheduler::MONDAY when 'tues'; Win32::TaskScheduler::TUESDAY when 'wed'; Win32::TaskScheduler::WEDNESDAY when 'thurs'; Win32::TaskScheduler::THURSDAY when 'fri'; Win32::TaskScheduler::FRIDAY when 'sat'; Win32::TaskScheduler::SATURDAY end end def month_constant_to_number(constant) month_num = 1 while constant >> month_num - 1 > 1 month_num += 1 end month_num end def occurrence_constant_to_name(constant) case constant when Win32::TaskScheduler::FIRST_WEEK; 'first' when Win32::TaskScheduler::SECOND_WEEK; 'second' when Win32::TaskScheduler::THIRD_WEEK; 'third' when Win32::TaskScheduler::FOURTH_WEEK; 'fourth' when Win32::TaskScheduler::LAST_WEEK; 'last' end end def occurrence_name_to_constant(name) case name when 'first'; Win32::TaskScheduler::FIRST_WEEK when 'second'; Win32::TaskScheduler::SECOND_WEEK 
when 'third'; Win32::TaskScheduler::THIRD_WEEK when 'fourth'; Win32::TaskScheduler::FOURTH_WEEK when 'last'; Win32::TaskScheduler::LAST_WEEK end end end diff --git a/lib/puppet/provider/service/windows.rb b/lib/puppet/provider/service/windows.rb index 717e585fd..d8ba69961 100644 --- a/lib/puppet/provider/service/windows.rb +++ b/lib/puppet/provider/service/windows.rb @@ -1,111 +1,113 @@ # Windows Service Control Manager (SCM) provider require 'win32/service' if Puppet.features.microsoft_windows? Puppet::Type.type(:service).provide :windows do desc <<-EOT Support for Windows Service Control Manager (SCM). This provider can start, stop, enable, and disable services, and the SCM provides working status methods for all services. Control of service groups (dependencies) is not yet supported, nor is running services as a specific user. EOT defaultfor :operatingsystem => :windows confine :operatingsystem => :windows has_feature :refreshable + commands :net => 'net.exe' + def enable w32ss = Win32::Service.configure( 'service_name' => @resource[:name], 'start_type' => Win32::Service::SERVICE_AUTO_START ) raise Puppet::Error.new("Win32 service enable of #{@resource[:name]} failed" ) if( w32ss.nil? ) rescue Win32::Service::Error => detail raise Puppet::Error.new("Cannot enable #{@resource[:name]}, error was: #{detail}" ) end def disable w32ss = Win32::Service.configure( 'service_name' => @resource[:name], 'start_type' => Win32::Service::SERVICE_DISABLED ) raise Puppet::Error.new("Win32 service disable of #{@resource[:name]} failed" ) if( w32ss.nil? ) rescue Win32::Service::Error => detail raise Puppet::Error.new("Cannot disable #{@resource[:name]}, error was: #{detail}" ) end def manual_start w32ss = Win32::Service.configure( 'service_name' => @resource[:name], 'start_type' => Win32::Service::SERVICE_DEMAND_START ) raise Puppet::Error.new("Win32 service manual enable of #{@resource[:name]} failed" ) if( w32ss.nil? ) rescue Win32::Service::Error => detail raise Puppet::Error.new("Cannot enable #{@resource[:name]} for manual start, error was: #{detail}" ) end def enabled? w32ss = Win32::Service.config_info( @resource[:name] ) raise Puppet::Error.new("Win32 service query of #{@resource[:name]} failed" ) unless( !w32ss.nil? && w32ss.instance_of?( Struct::ServiceConfigInfo ) ) debug("Service #{@resource[:name]} start type is #{w32ss.start_type}") case w32ss.start_type when Win32::Service.get_start_type(Win32::Service::SERVICE_AUTO_START), Win32::Service.get_start_type(Win32::Service::SERVICE_BOOT_START), Win32::Service.get_start_type(Win32::Service::SERVICE_SYSTEM_START) :true when Win32::Service.get_start_type(Win32::Service::SERVICE_DEMAND_START) :manual when Win32::Service.get_start_type(Win32::Service::SERVICE_DISABLED) :false else raise Puppet::Error.new("Unknown start type: #{w32ss.start_type}") end rescue Win32::Service::Error => detail raise Puppet::Error.new("Cannot get start type for #{@resource[:name]}, error was: #{detail}" ) end def start if enabled? == :false # If disabled and not managing enable, respect disabled and fail. if @resource[:enable].nil? raise Puppet::Error, "Will not start disabled service #{@resource[:name]} without managing enable. Specify 'enable => false' to override." # Otherwise start. If enable => false, we will later sync enable and # disable the service again. 
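        # For example (service name assumed), a resource such as
        #
        #   service { 'wuauserv': ensure => running, enable => false }
        #
        # falls through to manual_start below so that the start can succeed,
        # and the enable property is synced back to disabled afterwards.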
elsif @resource[:enable] == :true enable else manual_start end end - Win32::Service.start( @resource[:name] ) - rescue Win32::Service::Error => detail + net(:start, @resource[:name]) + rescue Puppet::ExecutionFailure => detail raise Puppet::Error.new("Cannot start #{@resource[:name]}, error was: #{detail}" ) end def stop - Win32::Service.stop( @resource[:name] ) - rescue Win32::Service::Error => detail + net(:stop, @resource[:name]) + rescue Puppet::ExecutionFailure => detail raise Puppet::Error.new("Cannot stop #{@resource[:name]}, error was: #{detail}" ) end def restart self.stop self.start end def status w32ss = Win32::Service.status( @resource[:name] ) raise Puppet::Error.new("Win32 service query of #{@resource[:name]} failed" ) unless( !w32ss.nil? && w32ss.instance_of?( Struct::ServiceStatus ) ) state = case w32ss.current_state when "stopped", "pause pending", "stop pending", "paused" then :stopped when "running", "continue pending", "start pending" then :running else raise Puppet::Error.new("Unknown service state '#{w32ss.current_state}' for service '#{@resource[:name]}'") end debug("Service #{@resource[:name]} is #{w32ss.current_state}") return state rescue Win32::Service::Error => detail raise Puppet::Error.new("Cannot get status of #{@resource[:name]}, error was: #{detail}" ) end # returns all providers for all existing services and startup state def self.instances Win32::Service.services.collect { |s| new(:name => s.service_name) } end end diff --git a/lib/puppet/rails/benchmark.rb b/lib/puppet/rails/benchmark.rb index e1e92bb79..741b6d5bd 100644 --- a/lib/puppet/rails/benchmark.rb +++ b/lib/puppet/rails/benchmark.rb @@ -1,63 +1,63 @@ require 'benchmark' +require 'yaml' + module Puppet::Rails::Benchmark $benchmarks = {:accumulated => {}} def time_debug? Puppet::Rails::TIME_DEBUG end def railsmark(message) result = nil seconds = Benchmark.realtime { result = yield } Puppet.debug(message + " in %0.2f seconds" % seconds) $benchmarks[message] = seconds if time_debug? result end def debug_benchmark(message) return yield unless Puppet::Rails::TIME_DEBUG railsmark(message) { yield } end # Collect partial benchmarks to be logged when they're # all done. # These are always low-level debugging so we only # print them if time_debug is enabled. def accumulate_benchmark(message, label) return yield unless time_debug? $benchmarks[:accumulated][message] ||= Hash.new(0) $benchmarks[:accumulated][message][label] += Benchmark.realtime { yield } end # Log the accumulated marks. def log_accumulated_marks(message) return unless time_debug? return if $benchmarks[:accumulated].empty? or $benchmarks[:accumulated][message].nil? or $benchmarks[:accumulated][message].empty? $benchmarks[:accumulated][message].each do |label, value| Puppet.debug(message + ("(#{label})") + (" in %0.2f seconds" % value)) end end def write_benchmarks return unless time_debug? 
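    # The file written below ends up keyed by git branch; an illustrative
    # (assumed) /tmp/time_debugging.yaml might contain:
    #
    #   master:
    #     :accumulated: {}
    #     "<railsmark message>": 1.73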
branch = %x{git branch}.split("\n").find { |l| l =~ /^\*/ }.sub("* ", '') file = "/tmp/time_debugging.yaml" - require 'yaml' - if FileTest.exist?(file) data = YAML.load_file(file) else data = {} end data[branch] = $benchmarks Puppet::Util.replace_file(file, 0644) { |f| f.print YAML.dump(data) } end end diff --git a/lib/puppet/reports/store.rb b/lib/puppet/reports/store.rb index 4f2d22532..21dfa2d25 100644 --- a/lib/puppet/reports/store.rb +++ b/lib/puppet/reports/store.rb @@ -1,74 +1,74 @@ require 'puppet' require 'fileutils' require 'tempfile' SEPARATOR = [Regexp.escape(File::SEPARATOR.to_s), Regexp.escape(File::ALT_SEPARATOR.to_s)].join Puppet::Reports.register_report(:store) do desc "Store the yaml report on disk. Each host sends its report as a YAML dump and this just stores the file on disk, in the `reportdir` directory. These files collect quickly -- one every half hour -- so it is a good idea to perform some maintenance on them if you use this report (it's the only default report)." def process - # We don't want any tracking back in the fs. Unlikely, but there - # you go. - if host =~ Regexp.union(/[#{SEPARATOR}]/, /\A\.\.?\Z/) - raise ArgumentError, "Invalid node name #{host.inspect}" - end + validate_host(host) dir = File.join(Puppet[:reportdir], host) if ! FileTest.exists?(dir) FileUtils.mkdir_p(dir) FileUtils.chmod_R(0750, dir) end # Now store the report. now = Time.now.gmtime name = %w{year month day hour min}.collect do |method| # Make sure we're at least two digits everywhere "%02d" % now.send(method).to_s end.join("") + ".yaml" file = File.join(dir, name) f = Tempfile.new(name, dir) begin begin f.chmod(0640) f.print to_yaml ensure f.close end FileUtils.mv(f.path, file) rescue => detail puts detail.backtrace if Puppet[:trace] Puppet.warning "Could not write report for #{host} at #{file}: #{detail}" end # Only testing cares about the return value file end # removes all reports for a given host? def self.destroy(host) - if host =~ Regexp.union(/[#{SEPARATOR}]/, /\A\.\.?\Z/) - raise ArgumentError, "Invalid node name #{host.inspect}" - end + validate_host(host) dir = File.join(Puppet[:reportdir], host) if File.exists?(dir) Dir.entries(dir).each do |file| next if ['.','..'].include?(file) file = File.join(dir, file) File.unlink(file) if File.file?(file) end Dir.rmdir(dir) end end -end + def validate_host(host) + if host =~ Regexp.union(/[#{SEPARATOR}]/, /\A\.\.?\Z/) + raise ArgumentError, "Invalid node name #{host.inspect}" + end + end + module_function :validate_host +end diff --git a/lib/puppet/resource/catalog.rb b/lib/puppet/resource/catalog.rb index 3dcf19c7a..eda5ed8f5 100644 --- a/lib/puppet/resource/catalog.rb +++ b/lib/puppet/resource/catalog.rb @@ -1,660 +1,661 @@ require 'puppet/node' require 'puppet/indirector' require 'puppet/simple_graph' require 'puppet/transaction' require 'puppet/util/pson' require 'puppet/util/tagging' # This class models a node catalog. It is the thing # meant to be passed from server to client, and it contains all # of the information in the catalog, including the resources # and the relationships between them. class Puppet::Resource::Catalog < Puppet::SimpleGraph class DuplicateResourceError < Puppet::Error; end extend Puppet::Indirector indirects :catalog, :terminus_setting => :catalog_terminus include Puppet::Util::Tagging extend Puppet::Util::Pson # The host name this is a catalog for. attr_accessor :name # The catalog version. Used for testing whether a catalog # is up to date. 
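  # (In practice this is filled in from the environment's config_version
  # command, falling back to a timestamp when none is configured; see
  # Puppet::Resource::TypeCollection#version.)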
attr_accessor :version # How long this catalog took to retrieve. Used for reporting stats. attr_accessor :retrieval_duration # Whether this is a host catalog, which behaves very differently. # In particular, reports are sent, graphs are made, and state is # stored in the state database. If this is set incorrectly, then you often # end up in infinite loops, because catalogs are used to make things # that the host catalog needs. attr_accessor :host_config # Whether this catalog was retrieved from the cache, which affects # whether it is written back out again. attr_accessor :from_cache # Some metadata to help us compile and generally respond to the current state. attr_accessor :client_version, :server_version # Add classes to our class list. def add_class(*classes) classes.each do |klass| @classes << klass end # Add the class names as tags, too. tag(*classes) end def title_key_for_ref( ref ) ref =~ /^([-\w:]+)\[(.*)\]$/m [$1, $2] end # Add a resource to our graph and to our resource table. # This is actually a relatively complicated method, because it handles multiple # aspects of Catalog behaviour: # * Add the resource to the resource table # * Add the resource to the resource graph # * Add the resource to the relationship graph # * Add any aliases that make sense for the resource (e.g., name != title) def add_resource(*resource) add_resource(*resource[0..-2]) if resource.length > 1 resource = resource.pop raise ArgumentError, "Can only add objects that respond to :ref, not instances of #{resource.class}" unless resource.respond_to?(:ref) fail_on_duplicate_type_and_title(resource) title_key = title_key_for_ref(resource.ref) @transient_resources << resource if applying? @resource_table[title_key] = resource # If the name and title differ, set up an alias if resource.respond_to?(:name) and resource.respond_to?(:title) and resource.respond_to?(:isomorphic?) and resource.name != resource.title self.alias(resource, resource.uniqueness_key) if resource.isomorphic? end resource.catalog = self if resource.respond_to?(:catalog=) add_vertex(resource) @relationship_graph.add_vertex(resource) if @relationship_graph end # Create an alias for a resource. def alias(resource, key) resource.ref =~ /^(.+)\[/ class_name = $1 || resource.class.name newref = [class_name, key].flatten if key.is_a? String ref_string = "#{class_name}[#{key}]" return if ref_string == resource.ref end # LAK:NOTE It's important that we directly compare the references, # because sometimes an alias is created before the resource is # added to the catalog, so comparing inside the below if block # isn't sufficient. if existing = @resource_table[newref] return if existing == resource resource_declaration = " at #{resource.file}:#{resource.line}" if resource.file and resource.line existing_declaration = " at #{existing.file}:#{existing.line}" if existing.file and existing.line msg = "Cannot alias #{resource.ref} to #{key.inspect}#{resource_declaration}; resource #{newref.inspect} already declared#{existing_declaration}" raise ArgumentError, msg end @resource_table[newref] = resource @aliases[resource.ref] ||= [] @aliases[resource.ref] << newref end # Apply our catalog to the local host. Valid options # are: # :tags - set the tags that restrict what resources run # during the transaction # :ignoreschedules - tell the transaction to ignore schedules # when determining the resources to run def apply(options = {}) @applying = true Puppet::Util::Storage.load if host_config? 
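    # Typical invocation (argument values assumed):
    #
    #   catalog.apply(:report => report, :tags => ['webserver'])
    #
    # builds and evaluates the transaction below and returns it, so callers
    # can inspect the report afterwards.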
transaction = Puppet::Transaction.new(self, options[:report]) register_report = options[:report].nil? transaction.tags = options[:tags] if options[:tags] transaction.ignoreschedules = true if options[:ignoreschedules] transaction.for_network_device = options[:network_device] transaction.add_times :config_retrieval => self.retrieval_duration || 0 begin Puppet::Util::Log.newdestination(transaction.report) if register_report begin transaction.evaluate ensure Puppet::Util::Log.close(transaction.report) if register_report end rescue Puppet::Error => detail puts detail.backtrace if Puppet[:trace] Puppet.err "Could not apply complete catalog: #{detail}" rescue => detail puts detail.backtrace if Puppet[:trace] Puppet.err "Got an uncaught exception of type #{detail.class}: #{detail}" ensure # Don't try to store state unless we're a host config # too recursive. Puppet::Util::Storage.store if host_config? end yield transaction if block_given? return transaction ensure @applying = false end # Are we in the middle of applying the catalog? def applying? @applying end def clear(remove_resources = true) super() # We have to do this so that the resources clean themselves up. @resource_table.values.each { |resource| resource.remove } if remove_resources @resource_table.clear if @relationship_graph @relationship_graph.clear @relationship_graph = nil end end def classes @classes.dup end # Create a new resource and register it in the catalog. def create_resource(type, options) unless klass = Puppet::Type.type(type) raise ArgumentError, "Unknown resource type #{type}" end return unless resource = klass.new(options) add_resource(resource) resource end # Turn our catalog graph into an old-style tree of TransObjects and TransBuckets. # LAK:NOTE(20081211): This is a pre-0.25 backward compatibility method. # It can be removed as soon as xmlrpc is killed. def extract top = nil current = nil buckets = {} unless main = resource(:stage, "main") raise Puppet::DevError, "Could not find 'main' stage; cannot generate catalog" end if stages = vertices.find_all { |v| v.type == "Stage" and v.title != "main" } and ! stages.empty? Puppet.warning "Stages are not supported by 0.24.x client; stage(s) #{stages.collect { |s| s.to_s }.join(', ') } will be ignored" end bucket = nil walk(main, :out) do |source, target| # The sources are always non-builtins. unless tmp = buckets[source.to_s] if tmp = buckets[source.to_s] = source.to_trans bucket = tmp else # This is because virtual resources return nil. If a virtual # container resource contains realized resources, we still need to get # to them. So, we keep a reference to the last valid bucket # we returned and use that if the container resource is virtual. end end bucket = tmp || bucket if child = target.to_trans raise "No bucket created for #{source}" unless bucket bucket.push child # It's important that we keep a reference to any TransBuckets we've created, so # we don't create multiple buckets for children. buckets[target.to_s] = child unless target.builtin? end end # Retrieve the bucket for the top-level scope and set the appropriate metadata. unless result = buckets[main.to_s] # This only happens when the catalog is entirely empty. result = buckets[main.to_s] = main.to_trans end result.classes = classes # Clear the cache to encourage the GC buckets.clear result end # Make sure all of our resources are "finished". def finalize make_default_resources @resource_table.values.each { |resource| resource.finish } write_graph(:resources) end def host_config? 
host_config end def initialize(name = nil) super() @name = name if name @classes = [] @resource_table = {} @transient_resources = [] @applying = false @relationship_graph = nil @host_config = true @aliases = {} if block_given? yield(self) finalize end end # Make the default objects necessary for function. def make_default_resources # We have to add the resources to the catalog, or else they won't get cleaned up after # the transaction. # First create the default scheduling objects Puppet::Type.type(:schedule).mkdefaultschedules.each { |res| add_resource(res) unless resource(res.ref) } # And filebuckets if bucket = Puppet::Type.type(:filebucket).mkdefaultbucket add_resource(bucket) unless resource(bucket.ref) end end # Create a graph of all of the relationships in our catalog. def relationship_graph unless @relationship_graph # It's important that we assign the graph immediately, because # the debug messages below use the relationships in the # relationship graph to determine the path to the resources # spitting out the messages. If this is not set, # then we get into an infinite loop. @relationship_graph = Puppet::SimpleGraph.new # First create the dependency graph self.vertices.each do |vertex| @relationship_graph.add_vertex vertex vertex.builddepends.each do |edge| @relationship_graph.add_edge(edge) end end # Lastly, add in any autorequires @relationship_graph.vertices.each do |vertex| vertex.autorequire(self).each do |edge| unless @relationship_graph.edge?(edge.source, edge.target) # don't let automatic relationships conflict with manual ones. unless @relationship_graph.edge?(edge.target, edge.source) vertex.debug "Autorequiring #{edge.source}" @relationship_graph.add_edge(edge) else vertex.debug "Skipping automatic relationship with #{(edge.source == vertex ? edge.target : edge.source)}" end end end end @relationship_graph.write_graph(:relationships) if host_config? # Then splice in the container information splice!(@relationship_graph) @relationship_graph.write_graph(:expanded_relationships) if host_config? end @relationship_graph end # Impose our container information on another graph by using it # to replace any container vertices X with a pair of verticies # { admissible_X and completed_X } such that that # # 0) completed_X depends on admissible_X # 1) contents of X each depend on admissible_X # 2) completed_X depends on each on the contents of X # 3) everything which depended on X depens on completed_X # 4) admissible_X depends on everything X depended on # 5) the containers and their edges must be removed # # Note that this requires attention to the possible case of containers # which contain or depend on other containers, but has the advantage # that the number of new edges created scales linearly with the number # of contained verticies regardless of how containers are related; # alternatives such as replacing container-edges with content-edges # scale as the product of the number of external dependences, which is # to say geometrically in the case of nested / chained containers. 
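  # Worked example (names assumed): if container X holds resources A and B,
  # W depends on X, and X depends on V, then after splicing:
  #
  #   admissible_X depends on V                     (4)
  #   A and B      depend on  admissible_X          (1)
  #   completed_X  depends on admissible_X, A, B    (0, 2)
  #   W            depends on completed_X           (3)
  #
  # and X itself, together with all of its edges, is gone (5).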
# Default_label = { :callback => :refresh, :event => :ALL_EVENTS } def splice!(other) stage_class = Puppet::Type.type(:stage) whit_class = Puppet::Type.type(:whit) component_class = Puppet::Type.type(:component) containers = vertices.find_all { |v| (v.is_a?(component_class) or v.is_a?(stage_class)) and vertex?(v) } # # These two hashes comprise the aforementioned attention to the possible # case of containers that contain / depend on other containers; they map # containers to their sentinels but pass other verticies through. Thus we # can "do the right thing" for references to other verticies that may or # may not be containers. # admissible = Hash.new { |h,k| k } completed = Hash.new { |h,k| k } containers.each { |x| admissible[x] = whit_class.new(:name => "admissible_#{x.ref}", :catalog => self) completed[x] = whit_class.new(:name => "completed_#{x.ref}", :catalog => self) } # # Implement the six requierments listed above # containers.each { |x| contents = adjacent(x, :direction => :out) other.add_edge(admissible[x],completed[x]) if contents.empty? # (0) contents.each { |v| other.add_edge(admissible[x],admissible[v],Default_label) # (1) other.add_edge(completed[v], completed[x], Default_label) # (2) } # (3) & (5) other.adjacent(x,:direction => :in,:type => :edges).each { |e| other.add_edge(completed[e.source],admissible[x],e.label) other.remove_edge! e } # (4) & (5) other.adjacent(x,:direction => :out,:type => :edges).each { |e| other.add_edge(completed[x],admissible[e.target],e.label) other.remove_edge! e } } containers.each { |x| other.remove_vertex! x } # (5) end # Remove the resource from our catalog. Notice that we also call # 'remove' on the resource, at least until resource classes no longer maintain # references to the resource instances. def remove_resource(*resources) resources.each do |resource| - @resource_table.delete(resource.ref) + title_key = title_key_for_ref(resource.ref) + @resource_table.delete(title_key) if aliases = @aliases[resource.ref] aliases.each { |res_alias| @resource_table.delete(res_alias) } @aliases.delete(resource.ref) end remove_vertex!(resource) if vertex?(resource) @relationship_graph.remove_vertex!(resource) if @relationship_graph and @relationship_graph.vertex?(resource) resource.remove end end # Look a resource up by its reference (e.g., File[/etc/passwd]). def resource(type, title = nil) # Always create a resource reference, so that it always canonizes how we # are referring to them. if title res = Puppet::Resource.new(type, title) else # If they didn't provide a title, then we expect the first # argument to be of the form 'Class[name]', which our # Reference class canonizes for us. res = Puppet::Resource.new(nil, type) end title_key = [res.type, res.title.to_s] uniqueness_key = [res.type, res.uniqueness_key].flatten @resource_table[title_key] || @resource_table[uniqueness_key] end def resource_refs resource_keys.collect{ |type, name| name.is_a?( String ) ? 
"#{type}[#{name}]" : nil}.compact end def resource_keys @resource_table.keys end def resources @resource_table.values.uniq end def self.from_pson(data) result = new(data['name']) if tags = data['tags'] result.tag(*tags) end if version = data['version'] result.version = version end if resources = data['resources'] resources = PSON.parse(resources) if resources.is_a?(String) resources.each do |res| resource_from_pson(result, res) end end if edges = data['edges'] edges = PSON.parse(edges) if edges.is_a?(String) edges.each do |edge| edge_from_pson(result, edge) end end if classes = data['classes'] result.add_class(*classes) end result end def self.edge_from_pson(result, edge) # If no type information was presented, we manually find # the class. edge = Puppet::Relationship.from_pson(edge) if edge.is_a?(Hash) unless source = result.resource(edge.source) raise ArgumentError, "Could not convert from pson: Could not find relationship source #{edge.source.inspect}" end edge.source = source unless target = result.resource(edge.target) raise ArgumentError, "Could not convert from pson: Could not find relationship target #{edge.target.inspect}" end edge.target = target result.add_edge(edge) end def self.resource_from_pson(result, res) res = Puppet::Resource.from_pson(res) if res.is_a? Hash result.add_resource(res) end PSON.register_document_type('Catalog',self) def to_pson_data_hash { 'document_type' => 'Catalog', 'data' => { 'tags' => tags, 'name' => name, 'version' => version, 'resources' => vertices.collect { |v| v.to_pson_data_hash }, 'edges' => edges. collect { |e| e.to_pson_data_hash }, 'classes' => classes }, 'metadata' => { 'api_version' => 1 } } end def to_pson(*args) to_pson_data_hash.to_pson(*args) end # Convert our catalog into a RAL catalog. def to_ral to_catalog :to_ral end # Convert our catalog into a catalog of Puppet::Resource instances. def to_resource to_catalog :to_resource end # filter out the catalog, applying +block+ to each resource. # If the block result is false, the resource will # be kept otherwise it will be skipped def filter(&block) to_catalog :to_resource, &block end # Store the classes in the classfile. def write_class_file ::File.open(Puppet[:classfile], "w") do |f| f.puts classes.join("\n") end rescue => detail Puppet.err "Could not create class file #{Puppet[:classfile]}: #{detail}" end # Store the list of resources we manage def write_resource_file ::File.open(Puppet[:resourcefile], "w") do |f| to_print = resources.map do |resource| next unless resource.managed? if resource.name_var "#{resource.type}[#{resource[resource.name_var]}]" else "#{resource.ref.downcase}" end end.compact f.puts to_print.join("\n") end rescue => detail Puppet.err "Could not create resource file #{Puppet[:resourcefile]}: #{detail}" end # Produce the graph files if requested. def write_graph(name) # We only want to graph the main host catalog. return unless host_config? super end private # Verify that the given resource isn't declared elsewhere. 
def fail_on_duplicate_type_and_title(resource) # Short-curcuit the common case, return unless existing_resource = @resource_table[title_key_for_ref(resource.ref)] # If we've gotten this far, it's a real conflict msg = "Duplicate declaration: #{resource.ref} is already declared" msg << " in file #{existing_resource.file} at line #{existing_resource.line}" if existing_resource.file and existing_resource.line msg << "; cannot redeclare" if resource.line or resource.file raise DuplicateResourceError.new(msg) end # An abstracted method for converting one catalog into another type of catalog. # This pretty much just converts all of the resources from one class to another, using # a conversion method. def to_catalog(convert) result = self.class.new(self.name) result.version = self.version map = {} vertices.each do |resource| next if virtual_not_exported?(resource) next if block_given? and yield resource #This is hackity hack for 1094 #Aliases aren't working in the ral catalog because the current instance of the resource #has a reference to the catalog being converted. . . So, give it a reference to the new one #problem solved. . . if resource.class == Puppet::Resource resource = resource.dup resource.catalog = result elsif resource.is_a?(Puppet::TransObject) resource = resource.dup resource.catalog = result elsif resource.is_a?(Puppet::Parser::Resource) resource = resource.to_resource resource.catalog = result end if resource.is_a?(Puppet::Resource) and convert.to_s == "to_resource" newres = resource else newres = resource.send(convert) end # We can't guarantee that resources don't munge their names # (like files do with trailing slashes), so we have to keep track # of what a resource got converted to. map[resource.ref] = newres result.add_resource newres end message = convert.to_s.gsub "_", " " edges.each do |edge| # Skip edges between virtual resources. next if virtual_not_exported?(edge.source) next if block_given? and yield edge.source next if virtual_not_exported?(edge.target) next if block_given? and yield edge.target unless source = map[edge.source.ref] raise Puppet::DevError, "Could not find resource #{edge.source.ref} when converting #{message} resources" end unless target = map[edge.target.ref] raise Puppet::DevError, "Could not find resource #{edge.target.ref} when converting #{message} resources" end result.add_edge(source, target, edge.label) end map.clear result.add_class(*self.classes) result.tag(*self.tags) result end def virtual_not_exported?(resource) resource.respond_to?(:virtual?) and resource.virtual? and (resource.respond_to?(:exported?) and not resource.exported?) end end diff --git a/lib/puppet/resource/type_collection.rb b/lib/puppet/resource/type_collection.rb index 89b0a16ed..e432d52a0 100644 --- a/lib/puppet/resource/type_collection.rb +++ b/lib/puppet/resource/type_collection.rb @@ -1,214 +1,215 @@ +require 'puppet/parser/type_loader' + class Puppet::Resource::TypeCollection attr_reader :environment attr_accessor :parse_failed def clear @hostclasses.clear @definitions.clear @nodes.clear @watched_files.clear end def initialize(env) @environment = env.is_a?(String) ? 
Puppet::Node::Environment.new(env) : env @hostclasses = {} @definitions = {} @nodes = {} # So we can keep a list and match the first-defined regex @node_list = [] @watched_files = {} end def import_ast(ast, modname) ast.instantiate(modname).each do |instance| add(instance) end end def inspect "TypeCollection" + { :hostclasses => @hostclasses.keys, :definitions => @definitions.keys, :nodes => @nodes.keys }.inspect end def <<(thing) add(thing) self end def add(instance) if instance.type == :hostclass and other = @hostclasses[instance.name] and other.type == :hostclass other.merge(instance) return other end method = "add_#{instance.type}" send(method, instance) instance.resource_type_collection = self instance end def add_hostclass(instance) dupe_check(instance, @hostclasses) { |dupe| "Class '#{instance.name}' is already defined#{dupe.error_context}; cannot redefine" } dupe_check(instance, @definitions) { |dupe| "Definition '#{instance.name}' is already defined#{dupe.error_context}; cannot be redefined as a class" } @hostclasses[instance.name] = instance instance end def hostclass(name) @hostclasses[munge_name(name)] end def add_node(instance) dupe_check(instance, @nodes) { |dupe| "Node '#{instance.name}' is already defined#{dupe.error_context}; cannot redefine" } @node_list << instance @nodes[instance.name] = instance instance end def loader - require 'puppet/parser/type_loader' @loader ||= Puppet::Parser::TypeLoader.new(environment) end def node(name) name = munge_name(name) if node = @nodes[name] return node end @node_list.each do |node| next unless node.name_is_regex? return node if node.match(name) end nil end def node_exists?(name) @nodes[munge_name(name)] end def nodes? @nodes.length > 0 end def add_definition(instance) dupe_check(instance, @hostclasses) { |dupe| "'#{instance.name}' is already defined#{dupe.error_context} as a class; cannot redefine as a definition" } dupe_check(instance, @definitions) { |dupe| "Definition '#{instance.name}' is already defined#{dupe.error_context}; cannot be redefined" } @definitions[instance.name] = instance end def definition(name) @definitions[munge_name(name)] end def find_node(namespaces, name) @nodes[munge_name(name)] end def find_hostclass(namespaces, name) find_or_load(namespaces, name, :hostclass) end def find_definition(namespaces, name) find_or_load(namespaces, name, :definition) end [:hostclasses, :nodes, :definitions].each do |m| define_method(m) do instance_variable_get("@#{m}").dup end end def require_reparse? @parse_failed || stale? end def stale? @watched_files.values.detect { |file| file.changed? } end def version return @version if defined?(@version) if environment[:config_version] == "" @version = Time.now.to_i return @version end @version = Puppet::Util.execute([environment[:config_version]]).strip rescue Puppet::ExecutionFailure => e raise Puppet::ParseError, "Unable to set config_version: #{e.message}" end def watch_file(file) @watched_files[file] = Puppet::Util::LoadedFile.new(file) end def watching_file?(file) @watched_files.include?(file) end private # Return a list of all possible fully-qualified names that might be # meant by the given name, in the context of namespaces. def resolve_namespaces(namespaces, name) name = name.downcase if name =~ /^::/ # name is explicitly fully qualified, so just return it, sans # initial "::". return [name.sub(/^::/, '')] end if name == "" # The name "" has special meaning--it always refers to a "main" # hostclass which contains all toplevel resources. 
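      # (For the general case handled after this early return: resolving the
      # assumed name 'baz' from within namespace 'foo::bar' yields the
      # candidates ['foo::bar::baz', 'foo::baz', 'baz'], searched
      # innermost-to-outermost with the toplevel name last.)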
return [""] end namespaces = [namespaces] unless namespaces.is_a?(Array) namespaces = namespaces.collect { |ns| ns.downcase } result = [] namespaces.each do |namespace| ary = namespace.split("::") # Search each namespace nesting in innermost-to-outermost order. while ary.length > 0 result << "#{ary.join("::")}::#{name}" ary.pop end # Finally, search the toplevel namespace. result << name end return result.uniq end # Resolve namespaces and find the given object. Autoload it if # necessary. def find_or_load(namespaces, name, type) resolve_namespaces(namespaces, name).each do |fqname| if result = send(type, fqname) || loader.try_load_fqname(type, fqname) return result end end # Nothing found. return nil end def munge_name(name) name.to_s.downcase end def dupe_check(instance, hash) return unless dupe = hash[instance.name] message = yield dupe instance.fail Puppet::ParseError, message end end diff --git a/lib/puppet/ssl/base.rb b/lib/puppet/ssl/base.rb index 25fd70ca2..f44649016 100644 --- a/lib/puppet/ssl/base.rb +++ b/lib/puppet/ssl/base.rb @@ -1,87 +1,86 @@ +require 'openssl/digest' require 'puppet/ssl' # The base class for wrapping SSL instances. class Puppet::SSL::Base # For now, use the YAML separator. SEPARATOR = "\n---\n" # Only allow printing ascii characters, excluding / VALID_CERTNAME = /\A[ -.0-~]+\Z/ def self.from_multiple_s(text) text.split(SEPARATOR).collect { |inst| from_s(inst) } end def self.to_multiple_s(instances) instances.collect { |inst| inst.to_s }.join(SEPARATOR) end def self.wraps(klass) @wrapped_class = klass end def self.wrapped_class raise(Puppet::DevError, "#{self} has not declared what class it wraps") unless defined?(@wrapped_class) @wrapped_class end def self.validate_certname(name) raise "Certname #{name.inspect} must not contain unprintable or non-ASCII characters" unless name =~ VALID_CERTNAME end attr_accessor :name, :content # Is this file for the CA? def ca? name == Puppet::SSL::Host.ca_name end def generate raise Puppet::DevError, "#{self.class} did not override 'generate'" end def initialize(name) @name = name.to_s.downcase self.class.validate_certname(@name) end # Read content from disk appropriately. def read(path) @content = wrapped_class.new(File.read(path)) end # Convert our thing to pem. def to_s return "" unless content content.to_pem end # Provide the full text of the thing we're dealing with. def to_text return "" unless content content.to_text end def fingerprint(md = :MD5) - require 'openssl/digest' - # ruby 1.8.x openssl digest constants are string # but in 1.9.x they are symbols mds = md.to_s.upcase if OpenSSL::Digest.constants.include?(mds) md = mds elsif OpenSSL::Digest.constants.include?(mds.to_sym) md = mds.to_sym else raise ArgumentError, "#{md} is not a valid digest algorithm for fingerprinting certificate #{name}" end OpenSSL::Digest.const_get(md).hexdigest(content.to_der).scan(/../).join(':').upcase end private def wrapped_class self.class.wrapped_class end end diff --git a/lib/puppet/test/test_helper.rb b/lib/puppet/test/test_helper.rb index 8a3648348..9a05b4db3 100644 --- a/lib/puppet/test/test_helper.rb +++ b/lib/puppet/test/test_helper.rb @@ -1,138 +1,139 @@ module Puppet::Test # This class is intended to provide an API to be used by external projects # when they are running tests that depend on puppet core. 
This should # allow us to vary the implementation details of managing puppet's state # for testing, from one version of puppet to the next--without forcing # the external projects to do any of that state management or be aware of # the implementation details. # # For now, this consists of a few very simple signatures. The plan is # that it should be the responsibility of the puppetlabs_spec_helper # to broker between external projects and this API; thus, if any # hacks are required (e.g. to determine whether or not a particular) # version of puppet supports this API, those hacks will be consolidated in # one place and won't need to be duplicated in every external project. # # This should also alleviate the anti-pattern that we've been following, # wherein each external project starts off with a copy of puppet core's # test_helper.rb and is exposed to risk of that code getting out of # sync with core. # # Since this class will be "library code" that ships with puppet, it does # not use API from any existing test framework such as rspec. This should # theoretically allow it to be used with other unit test frameworks in the # future, if desired. # # Note that in the future this API could potentially be expanded to handle # other features such as "around_test", but we didn't see a compelling # reason to deal with that right now. class TestHelper # Call this method once, when beginning a test run--prior to running # any individual tests. # @return nil def self.before_all_tests() end # Call this method once, at the end of a test run, when no more tests # will be run. # @return nil def self.after_all_tests() end # Call this method once per test, prior to execution of each invididual test. # @return nil def self.before_each_test() # We need to preserve the current state of all our indirection cache and # terminus classes. This is pretty important, because changes to these # are global and lead to order dependencies in our testing. # # We go direct to the implementation because there is no safe, sane public # API to manage restoration of these to their default values. This # should, once the value is proved, be moved to a standard API on the # indirector. # # To make things worse, a number of the tests stub parts of the # indirector. These stubs have very specific expectations that what # little of the public API we could use is, well, likely to explode # randomly in some tests. So, direct access. --daniel 2011-08-30 $saved_indirection_state = {} indirections = Puppet::Indirector::Indirection.send(:class_variable_get, :@@indirections) indirections.each do |indirector| $saved_indirection_state[indirector.name] = { :@terminus_class => indirector.instance_variable_get(:@terminus_class), :@cache_class => indirector.instance_variable_get(:@cache_class) } end initialize_settings_before_each() # Longer keys are secure, but they sure make for some slow testing - both # in terms of generating keys, and in terms of anything the next step down # the line doing validation or whatever. Most tests don't care how long # or secure it is, just that it exists, so these are better and faster # defaults, in testing only. # # I would make these even shorter, but OpenSSL doesn't support anything # below 512 bits. Sad, really, because a 0 bit key would be just fine. Puppet[:req_bits] = 512 Puppet[:keylength] = 512 + Puppet.clear_deprecation_warnings end # Call this method once per test, after execution of each individual test. 
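  # A minimal sketch of how an external project's spec_helper is expected to
  # drive these hooks (RSpec wiring assumed for illustration):
  #
  #   RSpec.configure do |config|
  #     config.before(:all)  { Puppet::Test::TestHelper.before_all_tests }
  #     config.before(:each) { Puppet::Test::TestHelper.before_each_test }
  #     config.after(:each)  { Puppet::Test::TestHelper.after_each_test }
  #     config.after(:all)   { Puppet::Test::TestHelper.after_all_tests }
  #   end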
# @return nil def self.after_each_test() clear_settings_after_each() Puppet::Node::Environment.clear Puppet::Util::Storage.clear Puppet::Util::ExecutionStub.reset # Restore the indirector configuration. See before hook. indirections = Puppet::Indirector::Indirection.send(:class_variable_get, :@@indirections) indirections.each do |indirector| $saved_indirection_state.fetch(indirector.name, {}).each do |variable, value| indirector.instance_variable_set(variable, value) end end $saved_indirection_state = nil # Some tests can cause us to connect, in which case the lingering # connection is a resource that can cause unexpected failure in later # tests, as well as sharing state accidentally. # We're testing if ActiveRecord::Base is defined because some test cases # may stub Puppet.features.rails? which is how we should normally # introspect for this functionality. ActiveRecord::Base.remove_connection if defined?(ActiveRecord::Base) end ######################################################################################### # PRIVATE METHODS (not part of the public TestHelper API--do not call these from outside # of this class!) ######################################################################################### def self.initialize_settings_before_each() # these globals are set by Application $puppet_application_mode = nil $puppet_application_name = nil # Set the confdir and vardir to gibberish so that tests # have to be correctly mocked. Puppet[:confdir] = "/dev/null" Puppet[:vardir] = "/dev/null" # Avoid opening ports to the outside world Puppet[:bindaddress] = "127.0.0.1" end private_class_method :initialize_settings_before_each def self.clear_settings_after_each() Puppet.settings.clear end private_class_method :clear_settings_after_each end -end \ No newline at end of file +end diff --git a/lib/puppet/type.rb b/lib/puppet/type.rb index e41666570..8d0e586b9 100644 --- a/lib/puppet/type.rb +++ b/lib/puppet/type.rb @@ -1,1974 +1,1974 @@ require 'puppet' require 'puppet/util/log' require 'puppet/util/metric' require 'puppet/property' require 'puppet/parameter' require 'puppet/util' require 'puppet/util/autoload' require 'puppet/metatype/manager' require 'puppet/util/errors' require 'puppet/util/log_paths' require 'puppet/util/logging' require 'puppet/file_collection/lookup' require 'puppet/util/tagging' # see the bottom of the file for the rest of the inclusions module Puppet class Type include Puppet::Util include Puppet::Util::Errors include Puppet::Util::LogPaths include Puppet::Util::Logging include Puppet::FileCollection::Lookup include Puppet::Util::Tagging ############################### # Comparing type instances. include Comparable def <=>(other) # We only order against other types, not arbitrary objects. return nil unless other.is_a? Puppet::Type # Our natural order is based on the reference name we use when comparing # against other type instances. self.ref <=> other.ref end ############################### # Code related to resource type attributes. class << self include Puppet::Util::ClassGen include Puppet::Util::Warnings attr_reader :properties end def self.states warnonce "The states method is deprecated; use properties" properties end # All parameters, in the appropriate order. The key_attributes come first, then # the provider, then the properties, and finally the params and metaparams # in the order they were specified in the files. 
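  # For a hypothetical type whose namevar is :path, with a provider parameter
  # and properties :ensure and :mode, this would return something like
  #
  #   [:path, :provider, :ensure, :mode, ...remaining params..., ...metaparams...]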
def self.allattrs key_attributes | (parameters & [:provider]) | properties.collect { |property| property.name } | parameters | metaparams end # Retrieve an attribute alias, if there is one. def self.attr_alias(param) @attr_aliases[symbolize(param)] end # Create an alias to an existing attribute. This will cause the aliased # attribute to be valid when setting and retrieving values on the instance. def self.set_attr_alias(hash) hash.each do |new, old| @attr_aliases[symbolize(new)] = symbolize(old) end end # Find the class associated with any given attribute. def self.attrclass(name) @attrclasses ||= {} # We cache the value, since this method gets called such a huge number # of times (as in, hundreds of thousands in a given run). unless @attrclasses.include?(name) @attrclasses[name] = case self.attrtype(name) when :property; @validproperties[name] when :meta; @@metaparamhash[name] when :param; @paramhash[name] end end @attrclasses[name] end # What type of parameter are we dealing with? Cache the results, because # this method gets called so many times. def self.attrtype(attr) @attrtypes ||= {} unless @attrtypes.include?(attr) @attrtypes[attr] = case when @validproperties.include?(attr); :property when @paramhash.include?(attr); :param when @@metaparamhash.include?(attr); :meta end end @attrtypes[attr] end def self.eachmetaparam @@metaparams.each { |p| yield p.name } end # Create the 'ensure' class. This is a separate method so other types # can easily call it and create their own 'ensure' values. def self.ensurable(&block) if block_given? self.newproperty(:ensure, :parent => Puppet::Property::Ensure, &block) else self.newproperty(:ensure, :parent => Puppet::Property::Ensure) do self.defaultvalues end end end # Should we add the 'ensure' property to this class? def self.ensurable? # If the class has all three of these methods defined, then it's # ensurable. [:exists?, :create, :destroy].all? { |method| self.public_method_defined?(method) } end # These `apply_to` methods are horrible. They should really be implemented # as part of the usual system of constraints that apply to a type and # provider pair, but were implemented as a separate shadow system. # # We should rip them out in favour of a real constraint pattern around the # target device - whatever that looks like - and not have this additional # magic here. --daniel 2012-03-08 def self.apply_to_device @apply_to = :device end def self.apply_to_host @apply_to = :host end def self.apply_to_all @apply_to = :both end def self.apply_to @apply_to ||= :host end def self.can_apply_to(target) [ target == :device ? :device : :host, :both ].include?(apply_to) end # Deal with any options passed into parameters. def self.handle_param_options(name, options) # If it's a boolean parameter, create a method to test the value easily if options[:boolean] define_method(name.to_s + "?") do val = self[name] if val == :true or val == true return true end end end end # Is the parameter in question a meta-parameter? def self.metaparam?(param) @@metaparamhash.include?(symbolize(param)) end # Find the metaparameter class associated with a given metaparameter name. def self.metaparamclass(name) @@metaparamhash[symbolize(name)] end def self.metaparams @@metaparams.collect { |param| param.name } end def self.metaparamdoc(metaparam) @@metaparamhash[metaparam].doc end # Create a new metaparam. Requires a block and a name, stores it in the # @parameters array, and does some basic checking on it. 
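  # Usage sketch (the metaparameter name here is made up):
  #
  #   newmetaparam(:example_note) do
  #     desc "An illustrative metaparameter accepted by every resource type."
  #   end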
def self.newmetaparam(name, options = {}, &block) @@metaparams ||= [] @@metaparamhash ||= {} name = symbolize(name) param = genclass( name, :parent => options[:parent] || Puppet::Parameter, :prefix => "MetaParam", :hash => @@metaparamhash, :array => @@metaparams, :attributes => options[:attributes], &block ) # Grr. param.required_features = options[:required_features] if options[:required_features] handle_param_options(name, options) param.metaparam = true param end def self.key_attribute_parameters @key_attribute_parameters ||= ( params = @parameters.find_all { |param| param.isnamevar? or param.name == :name } ) end def self.key_attributes key_attribute_parameters.collect { |p| p.name } end def self.title_patterns case key_attributes.length when 0; [] when 1; identity = lambda {|x| x} [ [ /(.*)/m, [ [key_attributes.first, identity ] ] ] ] else raise Puppet::DevError,"you must specify title patterns when there are two or more key attributes" end end def uniqueness_key self.class.key_attributes.sort_by { |attribute_name| attribute_name.to_s }.map{ |attribute_name| self[attribute_name] } end # Create a new parameter. Requires a block and a name, stores it in the # @parameters array, and does some basic checking on it. def self.newparam(name, options = {}, &block) options[:attributes] ||= {} param = genclass( name, :parent => options[:parent] || Puppet::Parameter, :attributes => options[:attributes], :block => block, :prefix => "Parameter", :array => @parameters, :hash => @paramhash ) handle_param_options(name, options) # Grr. param.required_features = options[:required_features] if options[:required_features] param.isnamevar if options[:namevar] param end def self.newstate(name, options = {}, &block) Puppet.warning "newstate() has been deprecrated; use newproperty(#{name})" newproperty(name, options, &block) end # Create a new property. The first parameter must be the name of the property; # this is how users will refer to the property when creating new instances. # The second parameter is a hash of options; the options are: # * :parent: The parent class for the property. Defaults to Puppet::Property. # * :retrieve: The method to call on the provider or @parent object (if # the provider is not set) to retrieve the current value. def self.newproperty(name, options = {}, &block) name = symbolize(name) # This is here for types that might still have the old method of defining # a parent class. unless options.is_a? Hash raise Puppet::DevError, "Options must be a hash, not #{options.inspect}" end raise Puppet::DevError, "Class #{self.name} already has a property named #{name}" if @validproperties.include?(name) if parent = options[:parent] options.delete(:parent) else parent = Puppet::Property end # We have to create our own, new block here because we want to define # an initial :retrieve method, if told to, and then eval the passed # block if available. prop = genclass(name, :parent => parent, :hash => @validproperties, :attributes => options) do # If they've passed a retrieve method, then override the retrieve # method on the class. if options[:retrieve] define_method(:retrieve) do provider.send(options[:retrieve]) end end class_eval(&block) if block end # If it's the 'ensure' property, always put it first. 
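# Keeping :ensure at the head of @properties matters: retrieve and
# currentpropvalues (below) walk properties in this order, so once :ensure
# reports the resource as absent the remaining property retrievals are skipped.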
if name == :ensure @properties.unshift prop else @properties << prop end prop end def self.paramdoc(param) @paramhash[param].doc end # Return the parameter names def self.parameters return [] unless defined?(@parameters) @parameters.collect { |klass| klass.name } end # Find the parameter class associated with a given parameter name. def self.paramclass(name) @paramhash[name] end # Return the property class associated with a name def self.propertybyname(name) @validproperties[name] end def self.validattr?(name) name = symbolize(name) return true if name == :name @validattrs ||= {} unless @validattrs.include?(name) @validattrs[name] = !!(self.validproperty?(name) or self.validparameter?(name) or self.metaparam?(name)) end @validattrs[name] end # does the name reflect a valid property? def self.validproperty?(name) name = symbolize(name) @validproperties.include?(name) && @validproperties[name] end # Return the list of validproperties def self.validproperties return {} unless defined?(@parameters) @validproperties.keys end # does the name reflect a valid parameter? def self.validparameter?(name) raise Puppet::DevError, "Class #{self} has not defined parameters" unless defined?(@parameters) !!(@paramhash.include?(name) or @@metaparamhash.include?(name)) end # This is a forward-compatibility method - it's the validity interface we'll use in Puppet::Resource. def self.valid_parameter?(name) validattr?(name) end # Return either the attribute alias or the attribute. def attr_alias(name) name = symbolize(name) if synonym = self.class.attr_alias(name) return synonym else return name end end # Are we deleting this resource? def deleting? obj = @parameters[:ensure] and obj.should == :absent end # Create a new property if it is valid but doesn't exist # Returns: true if a new parameter was added, false otherwise def add_property_parameter(prop_name) if self.class.validproperty?(prop_name) && !@parameters[prop_name] self.newattr(prop_name) return true end false end # # The name_var is the key_attribute in the case that there is only one. # def name_var key_attributes = self.class.key_attributes (key_attributes.length == 1) && key_attributes.first end # abstract accessing parameters and properties, and normalize # access to always be symbols, not strings # This returns a value, not an object. It returns the 'is' # value, but you can also specifically return 'is' and 'should' # values using 'object.is(:property)' or 'object.should(:property)'. def [](name) name = attr_alias(name) fail("Invalid parameter #{name}(#{name.inspect})") unless self.class.validattr?(name) if name == :name && nv = name_var name = nv end if obj = @parameters[name] # Note that if this is a property, then the value is the "should" value, # not the current value. obj.value else return nil end end # Abstract setting parameters and properties, and normalize # access to always be symbols, not strings. This sets the 'should' # value on properties, and otherwise just sets the appropriate parameter. def []=(name,value) name = attr_alias(name) fail("Invalid parameter #{name}") unless self.class.validattr?(name) if name == :name && nv = name_var name = nv end raise Puppet::Error.new("Got nil value for #{name}") if value.nil? 
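# Note that newattr may return nil when the resource's provider lacks a
# feature required by this attribute (see newattr below); in that case the
# value is dropped rather than raising an error.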
property = self.newattr(name) if property begin # make sure the parameter doesn't have any errors property.value = value rescue => detail error = Puppet::Error.new("Parameter #{name} failed: #{detail}") error.set_backtrace(detail.backtrace) raise error end end nil end # remove a property from the object; useful in testing or in cleanup # when an error has been encountered def delete(attr) attr = symbolize(attr) if @parameters.has_key?(attr) @parameters.delete(attr) else raise Puppet::DevError.new("Undefined attribute '#{attr}' in #{self}") end end # iterate across the existing properties def eachproperty # properties is a private method properties.each { |property| yield property } end # Create a transaction event. Called by Transaction or by # a property. def event(options = {}) Puppet::Transaction::Event.new({:resource => self, :file => file, :line => line, :tags => tags}.merge(options)) end # retrieve the 'should' value for a specified property def should(name) name = attr_alias(name) (prop = @parameters[name] and prop.is_a?(Puppet::Property)) ? prop.should : nil end # Create the actual attribute instance. Requires either the attribute # name or class as the first argument, then an optional hash of # attributes to set during initialization. def newattr(name) if name.is_a?(Class) klass = name name = klass.name end unless klass = self.class.attrclass(name) raise Puppet::Error, "Resource type #{self.class.name} does not support parameter #{name}" end if provider and ! provider.class.supports_parameter?(klass) missing = klass.required_features.find_all { |f| ! provider.class.feature?(f) } - info "Provider %s does not support features %s; not managing attribute %s" % [provider.class.name, missing.join(", "), name] + debug "Provider %s does not support features %s; not managing attribute %s" % [provider.class.name, missing.join(", "), name] return nil end return @parameters[name] if @parameters.include?(name) @parameters[name] = klass.new(:resource => self) end # return the value of a parameter def parameter(name) @parameters[name.to_sym] end def parameters @parameters.dup end # Is the named property defined? def propertydefined?(name) name = name.intern unless name.is_a? Symbol @parameters.include?(name) end # Return an actual property instance by name; to return the value, use 'resource[param]' # LAK:NOTE(20081028) Since the 'parameter' method is now a superset of this method, # this one should probably go away at some point. def property(name) (obj = @parameters[symbolize(name)] and obj.is_a?(Puppet::Property)) ? obj : nil end # For any parameters or properties that have defaults and have not yet been # set, set them now. This method can be handed a list of attributes, # and if so it will only set defaults for those attributes. def set_default(attr) return unless klass = self.class.attrclass(attr) return unless klass.method_defined?(:default) return if @parameters.include?(klass.name) return unless parameter = newattr(klass.name) if value = parameter.default and ! value.nil? parameter.value = value else @parameters.delete(parameter.name) end end # Convert our object to a hash. This just includes properties. def to_hash rethash = {} @parameters.each do |name, obj| rethash[name] = obj.value end rethash end def type self.class.name end # Return a specific value for an attribute. def value(name) name = attr_alias(name) (obj = @parameters[name] and obj.respond_to?(:value)) ? 
obj.value : nil end def version return 0 unless catalog catalog.version end # Return all of the property objects, in the order specified in the # class. def properties self.class.properties.collect { |prop| @parameters[prop.name] }.compact end # Is this type's name isomorphic with the object? That is, if the # name conflicts, does it necessarily mean that the objects conflict? # Defaults to true. def self.isomorphic? if defined?(@isomorphic) return @isomorphic else return true end end def isomorphic? self.class.isomorphic? end # is the instance a managed instance? A 'yes' here means that # the instance was created from the language, vs. being created # in order resolve other questions, such as finding a package # in a list def managed? # Once an object is managed, it always stays managed; but an object # that is listed as unmanaged might become managed later in the process, # so we have to check that every time if @managed return @managed else @managed = false properties.each { |property| s = property.should if s and ! property.class.unmanaged @managed = true break end } return @managed end end ############################### # Code related to the container behaviour. def depthfirst? false end # Remove an object. The argument determines whether the object's # subscriptions get eliminated, too. def remove(rmdeps = true) # This is hackish (mmm, cut and paste), but it works for now, and it's # better than warnings. @parameters.each do |name, obj| obj.remove end @parameters.clear @parent = nil # Remove the reference to the provider. if self.provider @provider.clear @provider = nil end end ############################### # Code related to evaluating the resources. def ancestors [] end # Flush the provider, if it supports it. This is called by the # transaction. def flush self.provider.flush if self.provider and self.provider.respond_to?(:flush) end # if all contained objects are in sync, then we're in sync # FIXME I don't think this is used on the type instances any more, # it's really only used for testing def insync?(is) insync = true if property = @parameters[:ensure] unless is.include? property raise Puppet::DevError, "The is value is not in the is array for '#{property.name}'" end ensureis = is[property] if property.safe_insync?(ensureis) and property.should == :absent return true end end properties.each { |property| unless is.include? property raise Puppet::DevError, "The is value is not in the is array for '#{property.name}'" end propis = is[property] unless property.safe_insync?(propis) property.debug("Not in sync: #{propis.inspect} vs #{property.should.inspect}") insync = false #else # property.debug("In sync") end } #self.debug("#{self} sync status is #{insync}") insync end # retrieve the current value of all contained properties def retrieve fail "Provider #{provider.class.name} is not functional on this host" if self.provider.is_a?(Puppet::Provider) and ! provider.class.suitable? 
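# Build a Puppet::Resource describing the current state. :ensure is checked
# first; if it reports the resource as absent, every other property is
# recorded as :absent without calling retrieve on it.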
result = Puppet::Resource.new(type, title) # Provide the name, so we know we'll always refer to a real thing result[:name] = self[:name] unless self[:name] == title if ensure_prop = property(:ensure) or (self.class.validattr?(:ensure) and ensure_prop = newattr(:ensure)) result[:ensure] = ensure_state = ensure_prop.retrieve else ensure_state = nil end properties.each do |property| next if property.name == :ensure if ensure_state == :absent result[property] = :absent else result[property] = property.retrieve end end result end def retrieve_resource resource = retrieve resource = Resource.new(type, title, :parameters => resource) if resource.is_a? Hash resource end # Get a hash of the current properties. Returns a hash with # the actual property instance as the key and the current value # as the, um, value. def currentpropvalues # It's important to use the 'properties' method here, as it follows the order # in which they're defined in the class. It also guarantees that 'ensure' # is the first property, which is important for skipping 'retrieve' on # all the properties if the resource is absent. ensure_state = false return properties.inject({}) do | prophash, property| if property.name == :ensure ensure_state = property.retrieve prophash[property] = ensure_state else if ensure_state == :absent prophash[property] = :absent else prophash[property] = property.retrieve end end prophash end end # Are we running in noop mode? def noop? # If we're not a host_config, we're almost certainly part of # Settings, and we want to ignore 'noop' return false if catalog and ! catalog.host_config? if defined?(@noop) @noop else Puppet[:noop] end end def noop noop? end ############################### # Code related to managing resource instances. require 'puppet/transportable' # retrieve a named instance of the current type def self.[](name) raise "Global resource access is deprecated" @objects[name] || @aliases[name] end # add an instance by name to the class list of instances def self.[]=(name,object) raise "Global resource storage is deprecated" newobj = nil if object.is_a?(Puppet::Type) newobj = object else raise Puppet::DevError, "must pass a Puppet::Type object" end if exobj = @objects[name] and self.isomorphic? msg = "Object '#{newobj.class.name}[#{name}]' already exists" msg += ("in file #{object.file} at line #{object.line}") if exobj.file and exobj.line msg += ("and cannot be redefined in file #{object.file} at line #{object.line}") if object.file and object.line error = Puppet::Error.new(msg) raise error else #Puppet.info("adding %s of type %s to class list" % # [name,object.class]) @objects[name] = newobj end end # Create an alias. We keep these in a separate hash so that we don't encounter # the objects multiple times when iterating over them. def self.alias(name, obj) raise "Global resource aliasing is deprecated" if @objects.include?(name) unless @objects[name] == obj raise Puppet::Error.new( "Cannot create alias #{name}: object already exists" ) end end if @aliases.include?(name) unless @aliases[name] == obj raise Puppet::Error.new( "Object #{@aliases[name].name} already has alias #{name}" ) end end @aliases[name] = obj end # remove all of the instances of a single type def self.clear raise "Global resource removal is deprecated" if defined?(@objects) @objects.each do |name, obj| obj.remove(true) end @objects.clear end @aliases.clear if defined?(@aliases) end # Force users to call this, so that we can merge objects if # necessary. 
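# Illustrative usage sketch (hedged; the resource type and attribute values are
# only examples): new code should build instances with Puppet::Type.new,
# normally via a catalog, rather than the deprecated create below:
#
#   Puppet::Type.type(:file).new(:name => "/tmp/example", :ensure => :file)
#
# new accepts either a Puppet::Resource or a plain hash (see hash2resource).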
def self.create(args) # LAK:DEP Deprecation notice added 12/17/2008 Puppet.warning "Puppet::Type.create is deprecated; use Puppet::Type.new" new(args) end # remove a specified object def self.delete(resource) raise "Global resource removal is deprecated" return unless defined?(@objects) @objects.delete(resource.title) if @objects.include?(resource.title) @aliases.delete(resource.title) if @aliases.include?(resource.title) if @aliases.has_value?(resource) names = [] @aliases.each do |name, otherres| if otherres == resource names << name end end names.each { |name| @aliases.delete(name) } end end # iterate across each of the type's instances def self.each raise "Global resource iteration is deprecated" return unless defined?(@objects) @objects.each { |name,instance| yield instance } end # does the type have an object with the given name? def self.has_key?(name) raise "Global resource access is deprecated" @objects.has_key?(name) end # Retrieve all known instances. Either requires providers or must be overridden. def self.instances raise Puppet::DevError, "#{self.name} has no providers and has not overridden 'instances'" if provider_hash.empty? # Put the default provider first, then the rest of the suitable providers. provider_instances = {} providers_by_source.collect do |provider| all_properties = self.properties.find_all do |property| provider.supports_parameter?(property) end.collect do |property| property.name end provider.instances.collect do |instance| # We always want to use the "first" provider instance we find, unless the resource # is already managed and has a different provider set if other = provider_instances[instance.name] Puppet.warning "%s %s found in both %s and %s; skipping the %s version" % [self.name.to_s.capitalize, instance.name, other.class.name, instance.class.name, instance.class.name] next end provider_instances[instance.name] = instance result = new(:name => instance.name, :provider => instance) properties.each { |name| result.newattr(name) } result end end.flatten.compact end # Return a list of one suitable provider per source, with the default provider first. def self.providers_by_source # Put the default provider first (can be nil), then the rest of the suitable providers. sources = [] [defaultprovider, suitableprovider].flatten.uniq.collect do |provider| next if provider.nil? next if sources.include?(provider.source) sources << provider.source provider end.compact end # Convert a simple hash into a Resource instance. def self.hash2resource(hash) hash = hash.inject({}) { |result, ary| result[ary[0].to_sym] = ary[1]; result } title = hash.delete(:title) title ||= hash[:name] title ||= hash[key_attributes.first] if key_attributes.length == 1 raise Puppet::Error, "Title or name must be provided" unless title # Now create our resource. resource = Puppet::Resource.new(self.name, title) [:catalog].each do |attribute| if value = hash[attribute] hash.delete(attribute) resource.send(attribute.to_s + "=", value) end end hash.each do |param, value| resource[param] = value end resource end # Create the path for logging and such. def pathbuilder if p = parent [p.pathbuilder, self.ref].flatten else [self.ref] end end ############################### # Add all of the meta parameters. newmetaparam(:noop) do desc "Boolean flag indicating whether work should actually be done." 
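# The munge below normalizes true/:true/"true" (and the false forms) onto
# @resource.noop, so noop can be enabled per resource independently of the
# global noop setting consulted by noop? above.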
newvalues(:true, :false) munge do |value| case value when true, :true, "true"; @resource.noop = true when false, :false, "false"; @resource.noop = false end end end newmetaparam(:schedule) do desc "On what schedule the object should be managed. You must create a schedule object, and then reference the name of that object to use that for your schedule: schedule { 'daily': period => daily, range => \"2-4\" } exec { \"/usr/bin/apt-get update\": schedule => 'daily' } The creation of the schedule object does not need to appear in the configuration before objects that use it." end newmetaparam(:audit) do desc "Marks a subset of this resource's unmanaged attributes for auditing. Accepts an attribute name, an array of attribute names, or `all`. Auditing a resource attribute has two effects: First, whenever a catalog is applied with puppet apply or puppet agent, Puppet will check whether that attribute of the resource has been modified, comparing its current value to the previous run; any change will be logged alongside any actions performed by Puppet while applying the catalog. Secondly, marking a resource attribute for auditing will include that attribute in inspection reports generated by puppet inspect; see the puppet inspect documentation for more details. Managed attributes for a resource can also be audited, but note that changes made by Puppet will be logged as additional modifications. (I.e. if a user manually edits a file whose contents are audited and managed, puppet agent's next two runs will both log an audit notice: the first run will log the user's edit and then revert the file to the desired state, and the second run will log the edit made by Puppet.)" validate do |list| list = Array(list).collect {|p| p.to_sym} unless list == [:all] list.each do |param| next if @resource.class.validattr?(param) fail "Cannot audit #{param}: not a valid attribute for #{resource}" end end end munge do |args| properties_to_audit(args).each do |param| next unless resource.class.validproperty?(param) resource.newattr(param) end end def all_properties resource.class.properties.find_all do |property| resource.provider.nil? or resource.provider.class.supports_parameter?(property) end.collect do |property| property.name end end def properties_to_audit(list) if !list.kind_of?(Array) && list.to_sym == :all list = all_properties else list = Array(list).collect { |p| p.to_sym } end end end newmetaparam(:check) do desc "Audit specified attributes of resources over time, and report if any have changed. This parameter has been deprecated in favor of 'audit'." munge do |args| resource.warning "'check' attribute is deprecated; use 'audit' instead" resource[:audit] = args end end newmetaparam(:loglevel) do desc "Sets the level that information will be logged. The log levels have the biggest impact when logs are sent to syslog (which is currently the default)." defaultto :notice newvalues(*Puppet::Util::Log.levels) newvalues(:verbose) munge do |loglevel| val = super(loglevel) if val == :verbose val = :info end val end end newmetaparam(:alias) do desc "Creates an alias for the object. Puppet uses this internally when you provide a symbolic title: file { 'sshdconfig': path => $operatingsystem ? 
{ solaris => \"/usr/local/etc/ssh/sshd_config\", default => \"/etc/ssh/sshd_config\" }, source => \"...\" } service { 'sshd': subscribe => File['sshdconfig'] } When you use this feature, the parser sets `sshdconfig` as the title, and the library sets that as an alias for the file so the dependency lookup in `Service['sshd']` works. You can use this metaparameter yourself, but note that only the library can use these aliases; for instance, the following code will not work: file { \"/etc/ssh/sshd_config\": owner => root, group => root, alias => 'sshdconfig' } file { 'sshdconfig': mode => 644 } There's no way here for the Puppet parser to know that these two stanzas should be affecting the same file. See the [Language Guide](http://docs.puppetlabs.com/guides/language_guide.html) for more information. " munge do |aliases| aliases = [aliases] unless aliases.is_a?(Array) raise(ArgumentError, "Cannot add aliases without a catalog") unless @resource.catalog aliases.each do |other| if obj = @resource.catalog.resource(@resource.class.name, other) unless obj.object_id == @resource.object_id self.fail("#{@resource.title} can not create alias #{other}: object already exists") end next end # Newschool, add it to the catalog. @resource.catalog.alias(@resource, other) end end end newmetaparam(:tag) do desc "Add the specified tags to the associated resource. While all resources are automatically tagged with as much information as possible (e.g., each class and definition containing the resource), it can be useful to add your own tags to a given resource. Multiple tags can be specified as an array: file {'/etc/hosts': ensure => file, source => 'puppet:///modules/site/hosts', mode => 0644, tag => ['bootstrap', 'minimumrun', 'mediumrun'], } Tags are useful for things like applying a subset of a host's configuration with [the `tags` setting](/references/latest/configuration.html#tags): puppet agent --test --tags bootstrap This way, you can easily isolate the portion of the configuration you're trying to test." munge do |tags| tags = [tags] unless tags.is_a? Array tags.each do |tag| @resource.tag(tag) end end end class RelationshipMetaparam < Puppet::Parameter class << self attr_accessor :direction, :events, :callback, :subclasses end @subclasses = [] def self.inherited(sub) @subclasses << sub end def munge(references) references = [references] unless references.is_a?(Array) references.collect do |ref| if ref.is_a?(Puppet::Resource) ref else Puppet::Resource.new(ref) end end end def validate_relationship @value.each do |ref| unless @resource.catalog.resource(ref.to_s) description = self.class.direction == :in ? "dependency" : "dependent" fail "Could not find #{description} #{ref} for #{resource.ref}" end end end # Create edges from each of our relationships. :in # relationships are specified by the event-receivers, and :out # relationships are specified by the event generator. This # way 'source' and 'target' are consistent terms in both edges # and events -- that is, an event targets edges whose source matches # the event's source. The direction of the relationship determines # which resource is applied first and which resource is considered # to be the event generator. def to_edges @value.collect do |reference| reference.catalog = resource.catalog # Either of the two retrieval attempts could have returned # nil. unless related_resource = reference.resolve self.fail "Could not retrieve dependency '#{reference}' of #{@resource.ref}" end # Are we requiring them, or vice versa? 
See the method docs # for futher info on this. if self.class.direction == :in source = related_resource target = @resource else source = @resource target = related_resource end if method = self.class.callback subargs = { :event => self.class.events, :callback => method } self.debug("subscribes to #{related_resource.ref}") else # If there's no callback, there's no point in even adding # a label. subargs = nil self.debug("requires #{related_resource.ref}") end rel = Puppet::Relationship.new(source, target, subargs) end end end def self.relationship_params RelationshipMetaparam.subclasses end # Note that the order in which the relationships params is defined # matters. The labelled params (notify and subcribe) must be later, # so that if both params are used, those ones win. It's a hackish # solution, but it works. newmetaparam(:require, :parent => RelationshipMetaparam, :attributes => {:direction => :in, :events => :NONE}) do desc "References to one or more objects that this object depends on. This is used purely for guaranteeing that changes to required objects happen before the dependent object. For instance: # Create the destination directory before you copy things down file { \"/usr/local/scripts\": ensure => directory } file { \"/usr/local/scripts/myscript\": source => \"puppet://server/module/myscript\", mode => 755, require => File[\"/usr/local/scripts\"] } Multiple dependencies can be specified by providing a comma-separated list of resources, enclosed in square brackets: require => [ File[\"/usr/local\"], File[\"/usr/local/scripts\"] ] Note that Puppet will autorequire everything that it can, and there are hooks in place so that it's easy for resources to add new ways to autorequire objects, so if you think Puppet could be smarter here, let us know. In fact, the above code was redundant --- Puppet will autorequire any parent directories that are being managed; it will automatically realize that the parent directory should be created before the script is pulled down. Currently, exec resources will autorequire their CWD (if it is specified) plus any fully qualified paths that appear in the command. For instance, if you had an `exec` command that ran the `myscript` mentioned above, the above code that pulls the file down would be automatically listed as a requirement to the `exec` code, so that you would always be running againts the most recent version. " end newmetaparam(:subscribe, :parent => RelationshipMetaparam, :attributes => {:direction => :in, :events => :ALL_EVENTS, :callback => :refresh}) do desc "References to one or more objects that this object depends on. This metaparameter creates a dependency relationship like **require,** and also causes the dependent object to be refreshed when the subscribed object is changed. For instance: class nagios { file { 'nagconf': path => \"/etc/nagios/nagios.conf\" source => \"puppet://server/module/nagios.conf\", } service { 'nagios': ensure => running, subscribe => File['nagconf'] } } Currently the `exec`, `mount` and `service` types support refreshing. " end newmetaparam(:before, :parent => RelationshipMetaparam, :attributes => {:direction => :out, :events => :NONE}) do desc %{References to one or more objects that depend on this object. 
This parameter is the opposite of **require** --- it guarantees that the specified object is applied later than the specifying object: file { "/var/nagios/configuration": source => "...", recurse => true, before => Exec["nagios-rebuid"] } exec { "nagios-rebuild": command => "/usr/bin/make", cwd => "/var/nagios/configuration" } This will make sure all of the files are up to date before the make command is run.} end newmetaparam(:notify, :parent => RelationshipMetaparam, :attributes => {:direction => :out, :events => :ALL_EVENTS, :callback => :refresh}) do desc %{References to one or more objects that depend on this object. This parameter is the opposite of **subscribe** --- it creates a dependency relationship like **before,** and also causes the dependent object(s) to be refreshed when this object is changed. For instance: file { "/etc/sshd_config": source => "....", notify => Service['sshd'] } service { 'sshd': ensure => running } This will restart the sshd service if the sshd config file changes.} end newmetaparam(:stage) do desc %{Which run stage a given resource should reside in. This just creates a dependency on or from the named milestone. For instance, saying that this is in the 'bootstrap' stage creates a dependency on the 'bootstrap' milestone. By default, all classes get directly added to the 'main' stage. You can create new stages as resources: stage { ['pre', 'post']: } To order stages, use standard relationships: stage { 'pre': before => Stage['main'] } Or use the new relationship syntax: Stage['pre'] -> Stage['main'] -> Stage['post'] Then use the new class parameters to specify a stage: class { 'foo': stage => 'pre' } Stages can only be set on classes, not individual resources. This will fail: file { '/foo': stage => 'pre', ensure => file } } end ############################### # All of the provider plumbing for the resource types. require 'puppet/provider' require 'puppet/util/provider_features' # Add the feature handling module. extend Puppet::Util::ProviderFeatures attr_reader :provider # the Type class attribute accessors class << self attr_accessor :providerloader attr_writer :defaultprovider end # Find the default provider. def self.defaultprovider return @defaultprovider if @defaultprovider suitable = suitableprovider # Find which providers are a default for this system. defaults = suitable.find_all { |provider| provider.default? } # If we don't have any default we use suitable providers defaults = suitable if defaults.empty? max = defaults.collect { |provider| provider.specificity }.max defaults = defaults.find_all { |provider| provider.specificity == max } if defaults.length > 1 Puppet.warning( "Found multiple default providers for #{self.name}: #{defaults.collect { |i| i.name.to_s }.join(", ")}; using #{defaults[0].name}" ) end @defaultprovider = defaults.shift unless defaults.empty? end def self.provider_hash_by_type(type) @provider_hashes ||= {} @provider_hashes[type] ||= {} end def self.provider_hash Puppet::Type.provider_hash_by_type(self.name) end # Retrieve a provider by name. def self.provider(name) name = Puppet::Util.symbolize(name) # If we don't have it yet, try loading it. @providerloader.load(name) unless provider_hash.has_key?(name) provider_hash[name] end # Just list all of the providers. def self.providers provider_hash.keys end def self.validprovider?(name) name = Puppet::Util.symbolize(name) (provider_hash.has_key?(name) && provider_hash[name].suitable?) end # Create a new provider of a type. 
This method must be called # directly on the type that it's implementing. def self.provide(name, options = {}, &block) name = Puppet::Util.symbolize(name) if unprovide(name) Puppet.debug "Reloading #{name} #{self.name} provider" end parent = if pname = options[:parent] options.delete(:parent) if pname.is_a? Class pname else if provider = self.provider(pname) provider else raise Puppet::DevError, "Could not find parent provider #{pname} of #{name}" end end else Puppet::Provider end options[:resource_type] ||= self self.providify provider = genclass( name, :parent => parent, :hash => provider_hash, :prefix => "Provider", :block => block, :include => feature_module, :extend => feature_module, :attributes => options ) provider end # Make sure we have a :provider parameter defined. Only gets called if there # are providers. def self.providify return if @paramhash.has_key? :provider newparam(:provider) do # We're using a hacky way to get the name of our type, since there doesn't # seem to be a correct way to introspect this at the time this code is run. # We expect that the class in which this code is executed will be something # like Puppet::Type::Ssh_authorized_key::ParameterProvider. desc <<-EOT The specific backend to use for this `#{self.to_s.split('::')[2].downcase}` resource. You will seldom need to specify this --- Puppet will usually discover the appropriate provider for your platform. EOT # This is so we can refer back to the type to get a list of # providers for documentation. class << self attr_accessor :parenttype end # We need to add documentation for each provider. def self.doc # Since we're mixing @doc with text from other sources, we must normalize # its indentation with scrub. But we don't need to manually scrub the # provider's doc string, since markdown_definitionlist sanitizes its inputs. scrub(@doc) + "Available providers are:\n\n" + parenttype.providers.sort { |a,b| a.to_s <=> b.to_s }.collect { |i| markdown_definitionlist( i, scrub(parenttype().provider(i).doc) ) }.join end defaultto { prov = @resource.class.defaultprovider prov.name if prov } validate do |provider_class| provider_class = provider_class[0] if provider_class.is_a? Array provider_class = provider_class.class.name if provider_class.is_a?(Puppet::Provider) unless provider = @resource.class.provider(provider_class) raise ArgumentError, "Invalid #{@resource.class.name} provider '#{provider_class}'" end end munge do |provider| provider = provider[0] if provider.is_a? Array provider = provider.intern if provider.is_a? String @resource.provider = provider if provider.is_a?(Puppet::Provider) provider.class.name else provider end end end.parenttype = self end def self.unprovide(name) if @defaultprovider and @defaultprovider.name == name @defaultprovider = nil end rmclass(name, :hash => provider_hash, :prefix => "Provider") end # Return an array of all of the suitable providers. def self.suitableprovider providerloader.loadall if provider_hash.empty? provider_hash.find_all { |name, provider| provider.suitable? }.collect { |name, provider| provider }.reject { |p| p.name == :fake } # For testing end def suitable? # If we don't use providers, then we consider it suitable. return true unless self.class.paramclass(:provider) # We have a provider and it is suitable. return true if provider && provider.class.suitable? # We're using the default provider and there is one. 
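# Falling back to the default provider also assigns it to the resource via
# provider= (below), so subsequent calls see a concrete provider instance
# instead of repeating this lookup.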
if !provider and self.class.defaultprovider self.provider = self.class.defaultprovider.name return true end # We specified an unsuitable provider, or there isn't any suitable # provider. false end def provider=(name) if name.is_a?(Puppet::Provider) @provider = name @provider.resource = self elsif klass = self.class.provider(name) @provider = klass.new(self) else raise ArgumentError, "Could not find #{name} provider of #{self.class.name}" end end ############################### # All of the relationship code. # Specify a block for generating a list of objects to autorequire. This # makes it so that you don't have to manually specify things that you clearly # require. def self.autorequire(name, &block) @autorequires ||= {} @autorequires[name] = block end # Yield each of those autorequires in turn, yo. def self.eachautorequire @autorequires ||= {} @autorequires.each { |type, block| yield(type, block) } end # Figure out of there are any objects we can automatically add as # dependencies. def autorequire(rel_catalog = nil) rel_catalog ||= catalog raise(Puppet::DevError, "You cannot add relationships without a catalog") unless rel_catalog reqs = [] self.class.eachautorequire { |type, block| # Ignore any types we can't find, although that would be a bit odd. next unless typeobj = Puppet::Type.type(type) # Retrieve the list of names from the block. next unless list = self.instance_eval(&block) list = [list] unless list.is_a?(Array) # Collect the current prereqs list.each { |dep| # Support them passing objects directly, to save some effort. unless dep.is_a? Puppet::Type # Skip autorequires that we aren't managing unless dep = rel_catalog.resource(type, dep) next end end reqs << Puppet::Relationship.new(dep, self) } } reqs end # Build the dependencies associated with an individual object. def builddepends # Handle the requires self.class.relationship_params.collect do |klass| if param = @parameters[klass.name] param.to_edges end end.flatten.reject { |r| r.nil? } end # Define the initial list of tags. def tags=(list) tag(self.class.name) tag(*list) end # Types (which map to resources in the languages) are entirely composed of # attribute value pairs. Generally, Puppet calls any of these things an # 'attribute', but these attributes always take one of three specific # forms: parameters, metaparams, or properties. # In naming methods, I have tried to consistently name the method so # that it is clear whether it operates on all attributes (thus has 'attr' in # the method name, or whether it operates on a specific type of attributes. attr_writer :title attr_writer :noop include Enumerable # class methods dealing with Type management public # the Type class attribute accessors class << self attr_reader :name attr_accessor :self_refresh include Enumerable, Puppet::Util::ClassGen include Puppet::MetaType::Manager include Puppet::Util include Puppet::Util::Logging end # all of the variables that must be initialized for each subclass def self.initvars # all of the instances of this class @objects = Hash.new @aliases = Hash.new @defaults = {} @parameters ||= [] @validproperties = {} @properties = [] @parameters = [] @paramhash = {} @attr_aliases = {} @paramdoc = Hash.new { |hash,key| key = key.intern if key.is_a?(String) if hash.include?(key) hash[key] else "Param Documentation for #{key} not found" end } @doc ||= "" end def self.to_s if defined?(@name) "Puppet::Type::#{@name.to_s.capitalize}" else super end end # Create a block to validate that our object is set up entirely. 
This will # be run before the object is operated on. def self.validate(&block) define_method(:validate, &block) #@validate = block end # The catalog that this resource is stored in. attr_accessor :catalog # is the resource exported attr_accessor :exported # is the resource virtual (it should not :-)) attr_accessor :virtual # create a log at specified level def log(msg) Puppet::Util::Log.create( :level => @parameters[:loglevel].value, :message => msg, :source => self ) end # instance methods related to instance intrinsics # e.g., initialize and name public attr_reader :original_parameters # initialize the type instance def initialize(resource) raise Puppet::DevError, "Got TransObject instead of Resource or hash" if resource.is_a?(Puppet::TransObject) resource = self.class.hash2resource(resource) unless resource.is_a?(Puppet::Resource) # The list of parameter/property instances. @parameters = {} # Set the title first, so any failures print correctly. if resource.type.to_s.downcase.to_sym == self.class.name self.title = resource.title else # This should only ever happen for components self.title = resource.ref end [:file, :line, :catalog, :exported, :virtual].each do |getter| setter = getter.to_s + "=" if val = resource.send(getter) self.send(setter, val) end end @tags = resource.tags @original_parameters = resource.to_hash set_name(@original_parameters) set_default(:provider) set_parameters(@original_parameters) self.validate if self.respond_to?(:validate) end private # Set our resource's name. def set_name(hash) self[name_var] = hash.delete(name_var) if name_var end # Set all of the parameters from a hash, in the appropriate order. def set_parameters(hash) # Use the order provided by allattrs, but add in any # extra attributes from the resource so we get failures # on invalid attributes. no_values = [] (self.class.allattrs + hash.keys).uniq.each do |attr| begin # Set any defaults immediately. This is mostly done so # that the default provider is available for any other # property validation. if hash.has_key?(attr) self[attr] = hash[attr] else no_values << attr end rescue ArgumentError, Puppet::Error, TypeError raise rescue => detail error = Puppet::DevError.new( "Could not set #{attr} on #{self.class.name}: #{detail}") error.set_backtrace(detail.backtrace) raise error end end no_values.each do |attr| set_default(attr) end end public # Set up all of our autorequires. def finish # Make sure all of our relationships are valid. Again, must be done # when the entire catalog is instantiated. self.class.relationship_params.collect do |klass| if param = @parameters[klass.name] param.validate_relationship end end.flatten.reject { |r| r.nil? } end # For now, leave the 'name' method functioning like it used to. Once 'title' # works everywhere, I'll switch it. def name self[:name] end # Look up our parent in the catalog, if we have one. def parent return nil unless catalog unless defined?(@parent) if parents = catalog.adjacent(self, :direction => :in) # We should never have more than one parent, so let's just ignore # it if we happen to. @parent = parents.shift else @parent = nil end end @parent end # Return the "type[name]" style reference. def ref "#{self.class.name.to_s.capitalize}[#{self.title}]" end def self_refresh? self.class.self_refresh end # Mark that we're purging. def purging @purging = true end # Is this resource being purged? Used by transactions to forbid # deletion when there are dependencies. def purging? 
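# @purging is only set by #purging above, so this defaults to false for
# resources the transaction has not explicitly marked for purging.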
if defined?(@purging) @purging else false end end # Retrieve the title of an object. If no title was set separately, # then use the object's name. def title unless @title if self.class.validparameter?(name_var) @title = self[:name] elsif self.class.validproperty?(name_var) @title = self.should(name_var) else self.devfail "Could not find namevar #{name_var} for #{self.class.name}" end end @title end # convert to a string def to_s self.ref end # Convert to a transportable object def to_trans(ret = true) trans = TransObject.new(self.title, self.class.name) values = retrieve_resource values.each do |name, value| name = name.name if name.respond_to? :name trans[name] = value end @parameters.each do |name, param| # Avoid adding each instance name twice next if param.class.isnamevar? and param.value == self.title # We've already got property values next if param.is_a?(Puppet::Property) trans[name] = param.value end trans.tags = self.tags # FIXME I'm currently ignoring 'parent' and 'path' trans end def to_resource # this 'type instance' versus 'resource' distinction seems artificial # I'd like to see it collapsed someday ~JW self.to_trans.to_resource end def virtual?; !!@virtual; end def exported?; !!@exported; end def appliable_to_device? self.class.can_apply_to(:device) end def appliable_to_host? self.class.can_apply_to(:host) end end end require 'puppet/provider' # Always load these types. Puppet::Type.type(:component) diff --git a/lib/puppet/type/augeas.rb b/lib/puppet/type/augeas.rb index cc2ca6b1d..ef075c326 100644 --- a/lib/puppet/type/augeas.rb +++ b/lib/puppet/type/augeas.rb @@ -1,218 +1,218 @@ # # Copyright 2011 Bryan Kearney # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. Puppet::Type.newtype(:augeas) do include Puppet::Util feature :parse_commands, "Parse the command string" feature :need_to_run?, "If the command should run" feature :execute_changes, "Actually make the changes" @doc = <<-EOT Apply a change or an array of changes to the filesystem using the augeas tool. Requires: - [Augeas](http://www.augeas.net) - The ruby-augeas bindings Sample usage with a string: augeas{"test1" : context => "/files/etc/sysconfig/firstboot", changes => "set RUN_FIRSTBOOT YES", onlyif => "match other_value size > 0", } Sample usage with an array and custom lenses: augeas{"jboss_conf": context => "/files", changes => [ "set etc/jbossas/jbossas.conf/JBOSS_IP $ipaddress", "set etc/jbossas/jbossas.conf/JAVA_HOME /usr", ], load_path => "$/usr/share/jbossas/lenses", } EOT newparam (:name) do desc "The name of this task. Used for uniqueness." isnamevar end newparam (:context) do desc "Optional context path. This value is prepended to the paths of all changes if the path is relative. If the `incl` parameter is set, defaults to `/files + incl`; otherwise, defaults to the empty string." defaultto "" munge do |value| if value.empty? and resource[:incl] "/files" + resource[:incl] else value end end end newparam (:onlyif) do desc "Optional augeas command and comparisons to control the execution of this type. 
Supported onlyif syntax:

* `get <AUGEAS_PATH> <COMPARATOR> <STRING>`
* `match <MATCH_PATH> size <COMPARATOR> <INT>`
* `match <MATCH_PATH> include <STRING>`
* `match <MATCH_PATH> not_include <STRING>`
* `match <MATCH_PATH> == <AN_ARRAY>`
* `match <MATCH_PATH> != <AN_ARRAY>`

where:

* `AUGEAS_PATH` is a valid path scoped by the context
* `MATCH_PATH` is a valid match syntax scoped by the context
* `COMPARATOR` is one of `>, >=, !=, ==, <=,` or `<`
* `STRING` is a string
* `INT` is a number
* `AN_ARRAY` is in the form `['a string', 'another']`"

defaultto ""
end

newparam(:changes) do
desc "The changes which should be applied to the filesystem. This can be a command or an array of commands. The following commands are supported:

`set <PATH> <VALUE>`
: Sets the value `VALUE` at location `PATH`

`setm <PATH> <SUB> <VALUE>`
: Sets multiple nodes (matching `SUB` relative to `PATH`) to `VALUE`

`rm <PATH>`
: Removes the node at location `PATH`

`remove <PATH>`
: Synonym for `rm`

`clear <PATH>`
: Sets the node at `PATH` to `NULL`, creating it if needed

`ins