diff --git a/.travis.yml b/.travis.yml
index c04611b45..e3f8e696e 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -1,10 +1,10 @@
language: ruby
bundler_args: --without development
-script: "bundle exec rake \"parallel:spec[1]\""
+script: "bundle exec rake \"parallel:spec[2]\""
notifications:
email: false
rvm:
- 2.1.1
- 2.0.0
- 1.9.3
- 1.8.7-p374
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 3c6dc52b5..e7ccf7747 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -1,92 +1,91 @@
# How to contribute
Third-party patches are essential for keeping puppet great. We simply can't
access the huge number of platforms and myriad configurations for running
puppet. We want to keep it as easy as possible to contribute changes that
get things working in your environment. There are a few guidelines that we
need contributors to follow so that we can have a chance of keeping on
top of things.
## Getting Started
* Make sure you have a [Jira account](http://tickets.puppetlabs.com)
* Make sure you have a [GitHub account](https://github.com/signup/free)
* Submit a ticket for your issue, assuming one does not already exist.
* Clearly describe the issue including steps to reproduce when it is a bug.
* Make sure you fill in the earliest version that you know has the issue.
* Fork the repository on GitHub
## Making Changes
* Create a topic branch from where you want to base your work.
* This is usually the master branch.
* Only target release branches if you are certain your fix must be on that
branch.
- * To quickly create a topic branch based on master; `git branch
- fix/master/my_contribution master` then checkout the new branch with `git
- checkout fix/master/my_contribution`. Please avoid working directly on the
+ * To quickly create a topic branch based on master: `git checkout -b
+ fix/master/my_contribution master`. Please avoid working directly on the
`master` branch.
* Make commits of logical units.
* Check for unnecessary whitespace with `git diff --check` before committing.
* Make sure your commit messages are in the proper format.
````
(PUP-1234) Make the example in CONTRIBUTING imperative and concrete
Without this patch applied the example commit message in the CONTRIBUTING
document is not a concrete example. This is a problem because the
contributor is left to imagine what the commit message should look like
based on a description rather than an example. This patch fixes the
problem by making the example concrete and imperative.
The first line is a real life imperative statement with a ticket number
from our issue tracker. The body describes the behavior without the patch,
why this is a problem, and how the patch fixes the problem when applied.
````
* Make sure you have added the necessary tests for your changes.
* Run _all_ the tests to assure nothing else was accidentally broken.
## Making Trivial Changes
### Documentation
For changes of a trivial nature to comments and documentation, it is not
always necessary to create a new ticket in Jira. In this case, it is
appropriate to start the first line of a commit with '(doc)' instead of
a ticket number.
````
(doc) Add documentation commit example to CONTRIBUTING
There is no example for contributing a documentation commit
to the Puppet repository. This is a problem because the contributor
is left to assume how a commit of this nature may appear.
The first line is a real life imperative statement with '(doc)' in
place of what would have been the ticket number in a
non-documentation related commit. The body describes the nature of
the new documentation or comments added.
````
## Submitting Changes
* Sign the [Contributor License Agreement](http://links.puppetlabs.com/cla).
* Push your changes to a topic branch in your fork of the repository.
* Submit a pull request to the repository in the puppetlabs organization.
* Update your Jira ticket to mark that you have submitted code and are ready for it to be reviewed (Status: Ready for Merge).
* Include a link to the pull request in the ticket.
* The core team looks at Pull Requests on a regular basis in a weekly triage
meeting that we hold in a public Google Hangout. The hangout is announced in
the weekly status updates that are sent to the puppet-dev list.
* After feedback has been given we expect responses within two weeks. After two
weeks we may close the pull request if it isn't showing any activity.
# Additional Resources
* [More information on contributing](http://links.puppetlabs.com/contribute-to-puppet)
* [Bug tracker (Jira)](http://tickets.puppetlabs.com)
* [Contributor License Agreement](http://links.puppetlabs.com/cla)
* [General GitHub documentation](http://help.github.com/)
* [GitHub pull request documentation](http://help.github.com/send-pull-requests/)
* #puppet-dev IRC channel on freenode.org
diff --git a/Gemfile b/Gemfile
index 6d7ce7541..e2f137121 100644
--- a/Gemfile
+++ b/Gemfile
@@ -1,97 +1,100 @@
source ENV['GEM_SOURCE'] || "https://rubygems.org"
def location_for(place, fake_version = nil)
if place =~ /^(git[:@][^#]*)#(.*)/
[fake_version, { :git => $1, :branch => $2, :require => false }].compact
elsif place =~ /^file:\/\/(.*)/
['>= 0', { :path => File.expand_path($1), :require => false }]
else
[place, { :require => false }]
end
end
# C Ruby (MRI) or Rubinius, but NOT Windows
platforms :ruby do
gem 'pry', :group => :development
gem 'yard', :group => :development
gem 'redcarpet', '~> 2.0', :group => :development
gem "racc", "1.4.9", :group => :development
# To enable the augeas feature, use this gem.
# Note that it is a native gem, so the augeas headers/libs
# are needed.
#gem 'ruby-augeas', :group => :development
end
gem "puppet", :path => File.dirname(__FILE__), :require => false
gem "facter", *location_for(ENV['FACTER_LOCATION'] || ['> 1.6', '< 3'])
gem "hiera", *location_for(ENV['HIERA_LOCATION'] || '~> 1.0')
gem "rake", "10.1.1", :require => false
-gem "rgen", "0.6.5", :require => false
group(:development, :test) do
-
- # Jenkins workers may be using RSpec 2.9, so RSpec 2.11 syntax
- # (like `expect(value).to eq matcher`) should be avoided.
- gem "rspec", "~> 2.11.0", :require => false
+ gem "rspec", "~> 2.14.0", :require => false
# Mocha is not compatible across minor version changes; because of this only
# versions matching ~> 0.10.5 are supported. All other versions are unsupported
# and can be expected to fail.
gem "mocha", "~> 0.10.5", :require => false
gem "yarjuf", "~> 1.0"
# json-schema does not support windows, so omit it from the platforms list
# json-schema uses multi_json, but chokes with multi_json 1.7.9, so prefer 1.7.7
gem "multi_json", "1.7.7", :require => false, :platforms => [:ruby, :jruby]
gem "json-schema", "2.1.1", :require => false, :platforms => [:ruby, :jruby]
end
group(:development) do
- case RUBY_VERSION
- when /^1.8/
- gem 'ruby-prof', "~> 0.13.1", :require => false
- else
- gem 'ruby-prof', :require => false
+ if RUBY_PLATFORM != 'java'
+ case RUBY_VERSION
+ when /^1.8/
+ gem 'ruby-prof', "~> 0.13.1", :require => false
+ else
+ gem 'ruby-prof', :require => false
+ end
end
end
group(:extra) do
gem "rack", "~> 1.4", :require => false
gem "activerecord", '~> 3.2', :require => false
gem "couchrest", '~> 1.0', :require => false
gem "net-ssh", '~> 2.1', :require => false
gem "puppetlabs_spec_helper", :require => false
+ # rest-client is used only by couchrest, so when
+ # that dependency goes away, this one can also be removed
gem "rest-client", '1.6.7', :require => false
gem "stomp", :require => false
gem "tzinfo", :require => false
case RUBY_PLATFORM
when 'java'
gem "jdbc-sqlite3", :require => false
gem "msgpack-jruby", :require => false
else
gem "sqlite3", :require => false
gem "msgpack", :require => false
end
end
require 'yaml'
data = YAML.load_file(File.join(File.dirname(__FILE__), 'ext', 'project_data.yaml'))
bundle_platforms = data['bundle_platforms']
+x64_platform = Gem::Platform.local.cpu == 'x64'
data['gem_platform_dependencies'].each_pair do |gem_platform, info|
+ next if gem_platform == 'x86-mingw32' && x64_platform
+ next if gem_platform == 'x64-mingw32' && !x64_platform
if bundle_deps = info['gem_runtime_dependencies']
bundle_platform = bundle_platforms[gem_platform] or raise "Missing bundle_platform"
platform(bundle_platform.intern) do
bundle_deps.each_pair do |name, version|
gem(name, version, :require => false)
end
end
end
end
if File.exists? "#{__FILE__}.local"
eval(File.read("#{__FILE__}.local"), binding)
end
# vim:filetype=ruby
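For reference (not part of the diff): the `location_for` helper near the top of this Gemfile accepts three input forms — a git URL with a `#branch` suffix, a `file://` path, or a plain version requirement — and turns each into arguments for `gem`, as used by the `facter` and `hiera` lines with FACTER_LOCATION/HIERA_LOCATION. A minimal standalone sketch of the three cases; the URL and path below are invented for illustration:
````
# Same logic as the Gemfile's location_for, shown standalone; the inputs are made up.
def location_for(place, fake_version = nil)
  if place =~ /^(git[:@][^#]*)#(.*)/
    [fake_version, { :git => $1, :branch => $2, :require => false }].compact
  elsif place =~ /^file:\/\/(.*)/
    ['>= 0', { :path => File.expand_path($1), :require => false }]
  else
    [place, { :require => false }]
  end
end

p location_for('git://github.com/example/facter.git#stable')
# => [{:git=>"git://github.com/example/facter.git", :branch=>"stable", :require=>false}]
p location_for('file:///home/user/src/facter')
# => [">= 0", {:path=>"/home/user/src/facter", :require=>false}]
p location_for('~> 1.6')
# => ["~> 1.6", {:require=>false}]
````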
diff --git a/README.md b/README.md
index 94f1cea29..a0747d1df 100644
--- a/README.md
+++ b/README.md
@@ -1,76 +1,76 @@
Puppet
======
[![Build Status](https://travis-ci.org/puppetlabs/puppet.png?branch=master)](https://travis-ci.org/puppetlabs/puppet)
-[![Inline docs](http://inch-pages.github.io/github/puppetlabs/puppet.png)](http://inch-pages.github.io/github/puppetlabs/puppet)
+[![Inline docs](http://inch-ci.org/github/puppetlabs/puppet.png)](http://inch-ci.org/github/puppetlabs/puppet)
Puppet, an automated administrative engine for your Linux, Unix, and Windows systems, performs
administrative tasks (such as adding users, installing packages, and updating server
configurations) based on a centralized specification.
Documentation
-------------
Documentation for Puppet and related projects can be found online at the
[Puppet Docs site](http://docs.puppetlabs.com).
HTTP API
--------
[HTTP API Index](api/docs/http_api_index.md)
Installation
------------
The best way to run Puppet is with [Puppet Enterprise](http://puppetlabs.com/puppet/puppet-enterprise),
which also includes orchestration features, a web console, and professional support.
[The PE documentation is available here.](http://docs.puppetlabs.com/pe/latest)
To install an open source release of Puppet,
[see the installation guide on the docs site.](http://docs.puppetlabs.com/guides/installation.html)
If you need to run Puppet from source as a tester or developer,
[see the running from source guide on the docs site.](http://docs.puppetlabs.com/guides/from_source.html)
Developing and Contributing
------
We'd love to get contributions from you! For a quick guide to getting your
system set up for development, take a look at our [Quickstart
Guide](docs/quickstart.md). Once you are up and running, take a look at the
[Contribution Documents](CONTRIBUTING.md) to see how to get your changes merged
in.
For more complete docs on developing with puppet you can take a look at the
rest of the [developer documents](docs/index.md).
License
-------
See [LICENSE](LICENSE) file.
Support
-------
Please log tickets and issues at our [JIRA tracker](http://tickets.puppetlabs.com). A [mailing
list](https://groups.google.com/forum/?fromgroups#!forum/puppet-users) is
available for asking questions and getting help from others. In addition there
is an active #puppet channel on Freenode.
We use semantic version numbers for our releases, and recommend that users stay
as up-to-date as possible by upgrading to patch releases and minor releases as
they become available.
Bugfixes and ongoing development will occur in minor releases for the current
major version. Security fixes will be backported to a previous major version on
a best-effort basis, until the previous major version is no longer maintained.
For example: If a security vulnerability is discovered in Puppet 4.1.1, we
would fix it in the 4 series, most likely as 4.1.2. Maintainers would then make
a best effort to backport that fix onto the latest Puppet 3 release.
Long-term support, including security patches and bug fixes, is available for
commercial customers. Please see the following page for more details:
[Puppet Enterprise Support Lifecycle](http://puppetlabs.com/misc/puppet-enterprise-lifecycle)
diff --git a/acceptance/Gemfile b/acceptance/Gemfile
index a4fe65e95..29949fee9 100644
--- a/acceptance/Gemfile
+++ b/acceptance/Gemfile
@@ -1,13 +1,13 @@
source ENV['GEM_SOURCE'] || "https://rubygems.org"
-gem "beaker", "~> 1.11.0"
+gem "beaker", "~> 1.17"
gem 'rake', "~> 10.1.0"
group(:test) do
gem "rspec", "~> 2.11.0", :require => false
gem "mocha", "~> 0.10.5", :require => false
end
if File.exists? "#{__FILE__}.local"
eval(File.read("#{__FILE__}.local"), binding)
end
diff --git a/acceptance/Rakefile b/acceptance/Rakefile
index b42ec8bce..b47e51d72 100644
--- a/acceptance/Rakefile
+++ b/acceptance/Rakefile
@@ -1,323 +1,355 @@
require 'rake/clean'
require 'pp'
require 'yaml'
$LOAD_PATH << File.expand_path(File.join(File.dirname(__FILE__), 'lib'))
require 'puppet/acceptance/git_utils'
extend Puppet::Acceptance::GitUtils
ONE_DAY_IN_SECS = 24 * 60 * 60
REPO_CONFIGS_DIR = "repo-configs"
CLEAN.include('*.tar', REPO_CONFIGS_DIR, 'merged_options.rb')
module HarnessOptions
DEFAULTS = {
:type => 'git',
:helper => ['lib/helper.rb'],
:tests => ['tests'],
:log_level => 'debug',
:color => false,
:root_keys => true,
:ssh => {
:keys => ["id_rsa-acceptance"],
},
:xml => true,
:timesync => false,
:repo_proxy => true,
:add_el_extras => true,
:preserve_hosts => 'onfail',
- :forge_host => 'forge-aio01-petest.puppetlabs.com'
+ :forge_host => 'forge-aio01-petest.puppetlabs.com',
+ :'master-start-curl-retries' => 30,
}
class Aggregator
attr_reader :mode
def initialize(mode)
@mode = mode
end
def get_options(file_path)
puts file_path
if File.exists? file_path
options = eval(File.read(file_path), binding)
else
puts "No options file found at #{File.expand_path(file_path)}"
end
options || {}
end
def get_mode_options
get_options("./config/#{mode}/options.rb")
end
def get_local_options
get_options("./local_options.rb")
end
def final_options(intermediary_options = {})
mode_options = get_mode_options
local_overrides = get_local_options
final_options = DEFAULTS.merge(mode_options)
final_options.merge!(intermediary_options)
final_options.merge!(local_overrides)
return final_options
end
end
def self.options(mode, options)
final_options = Aggregator.new(mode).final_options(options)
final_options
end
end
def beaker_test(mode = :packages, options = {})
- delete_options = options[:__delete_options__] || []
- final_options = HarnessOptions.options(mode,
- options.reject { |k,v| k == :__delete_options__ })
+ delete_options = options.delete(:__delete_options__) || []
+ final_options = HarnessOptions.options(mode, options)
+ preserve_config = final_options.delete(:__preserve_config__)
if mode == :git
# Build up project git urls based on git server and fork env variables or defaults
final_options[:install].map! do |install|
if md = /^(\w+)#(\w+)$/.match(install)
project, project_sha = md.captures
"#{build_giturl(project)}##{project_sha}"
elsif md = /^(\w+)$/.match(install)
project = md[1]
"#{build_giturl(project)}##{sha}"
end
end
end
delete_options.each do |delete_me|
final_options.delete(delete_me)
end
options_file = 'merged_options.rb'
File.open(options_file, 'w') do |merged|
merged.puts <<-EOS
# Copy this file to local_options.rb and adjust as needed if you wish to run
# with some local overrides.
EOS
merged.puts(final_options.pretty_inspect)
end
tests = ENV['TESTS'] || ENV['TEST']
tests_opt = "--tests=#{tests}" if tests
config_opt = "--hosts=#{config}" if config
overriding_options = ENV['OPTIONS']
args = ["--options-file", options_file, config_opt, tests_opt, overriding_options].compact
begin
sh("beaker", *args)
ensure
- preserve_configuration(final_options, options_file)
+ preserve_configuration(final_options, options_file) if preserve_config
end
end
def preserve_configuration(final_options, options_file)
if (hosts_file = config || final_options[:hosts_file]) && hosts_file !~ /preserved_config/
cp(hosts_file, "log/latest/config.yml")
generate_config_for_latest_hosts
end
mv(options_file, "log/latest")
end
def generate_config_for_latest_hosts
preserved_config_hash = { 'HOSTS' => {} }
- config_hash = YAML.load_file('log/latest/config.yml').to_hash
- nodes = config_hash['HOSTS'].map do |node_label,hash|
- { :node_label => node_label, :platform => hash['platform'] }
- end
+ puts "\nPreserving configuration so that any preserved nodes can be tested again locally..."
+
+ config_hash = YAML.load_file('log/latest/config.yml')
+ if !config_hash || !config_hash.include?('HOSTS')
+ puts "Warning: No HOSTS configuration found in log/latest/config.yml"
+ return
+ else
+ nodes = config_hash['HOSTS'].map do |node_label,hash|
+ {
+ :node_label => node_label,
+ :roles => hash['roles'],
+ :platform => hash['platform']
+ }
+ end
- pre_suite_log = File.read('log/latest/pre_suite-run.log')
- nodes.each do |node_info|
- hostname = /^(\w+) \(#{node_info[:node_label]}\)/.match(pre_suite_log)[1]
- fqdn = "#{hostname}.delivery.puppetlabs.net"
- preserved_config_hash['HOSTS'][fqdn] = {
- 'roles' => [ 'agent'],
- 'platform' => node_info[:platform],
- }
- preserved_config_hash['HOSTS'][fqdn]['roles'].unshift('master') if node_info[:node_label] =~ /master/
- end
- pp preserved_config_hash
+ pre_suite_log = File.read('log/latest/pre_suite-run.log')
+ nodes.each do |node_info|
+ host_regex = /^([\w.]+) \(#{node_info[:node_label]}\)/
+ if matched = host_regex.match(pre_suite_log)
+ hostname = matched[1]
+ fqdn = "#{hostname}.delivery.puppetlabs.net"
+ elsif /^#{node_info[:node_label]} /.match(pre_suite_log)
+ fqdn = "#{node_info[:node_label]}"
+ puts "* Couldn't find any log lines for #{host_regex}, assuming #{fqdn} is the fqdn"
+ end
+ if fqdn
+ preserved_config_hash['HOSTS'][fqdn] = {
+ 'roles' => node_info[:roles],
+ 'platform' => node_info[:platform],
+ }
+ else
+ puts "* Couldn't match #{node_info[:node_label]} in pre_suite-run.log"
+ end
+ end
+ pp preserved_config_hash
- File.open('log/latest/preserved_config.yaml', 'w') do |config_file|
- YAML.dump(preserved_config_hash, config_file)
+ File.open('log/latest/preserved_config.yaml', 'w') do |config_file|
+ YAML.dump(preserved_config_hash, config_file)
+ end
end
rescue Errno::ENOENT => e
- puts "Couldn't generate log #{e}"
+ puts "Warning: Couldn't generate preserved_config.yaml #{e}"
end
def list_preserved_configurations(secs_ago = ONE_DAY_IN_SECS)
preserved = {}
Dir.glob('log/*_*').each do |dir|
preserved_config_path = "#{dir}/preserved_config.yaml"
yesterday = Time.now - secs_ago.to_i
if preserved_config = File.exists?(preserved_config_path)
directory = File.new(dir)
if directory.ctime > yesterday
hosts = []
preserved_config = YAML.load_file(preserved_config_path).to_hash
preserved_config['HOSTS'].each do |hostname,values|
hosts << "#{hostname}: #{values['platform']}, #{values['roles']}"
end
preserved[hosts] = directory.to_path
end
end
end
preserved.map { |k,v| [v,k] }.sort { |a,b| a[0] <=> b[0] }.reverse
end
def list_preserved_hosts(secs_ago = ONE_DAY_IN_SECS)
hosts = Set.new
Dir.glob('log/**/pre*suite*run.log').each do |log|
yesterday = Time.now - secs_ago.to_i
File.open(log, 'r') do |file|
if file.ctime > yesterday
file.each_line do |line|
matchdata = /^(\w+) \(.*?\) \d\d:\d\d:\d\d\$/.match(line.encode!('UTF-8', 'UTF-8', :invalid => :replace))
hosts.add(matchdata[1]) if matchdata
end
end
end
end
hosts
end
def release_hosts(hosts = nil, secs_ago = ONE_DAY_IN_SECS)
secs_ago ||= ONE_DAY_IN_SECS
hosts ||= list_preserved_hosts(secs_ago)
require 'beaker'
vcloud_pooled = Beaker::VcloudPooled.new(hosts.map { |h| { 'vmhostname' => h } },
:logger => Beaker::Logger.new,
:dot_fog => "#{ENV['HOME']}/.fog",
'pooling_api' => 'http://vcloud.delivery.puppetlabs.net' ,
'datastore' => 'not-used',
'resourcepool' => 'not-used',
'folder' => 'not-used')
vcloud_pooled.cleanup
end
def print_preserved(preserved)
preserved.each_with_index do |entry,i|
puts "##{i}: #{entry[0]}"
entry[1].each { |h| puts " #{h}" }
end
end
def beaker_run_type
type = ENV['TYPE'] || :packages
type = type.to_sym
end
def sha
ENV['SHA']
end
def config
ENV['CONFIG']
end
namespace :ci do
task :check_env do
raise(USAGE) unless sha
end
namespace :test do
USAGE = <<-EOS
Requires commit SHA to be put under test as environment variable: SHA='<sha>'.
Also must set CONFIG=config/nodes/foo.yaml or include it in an options.rb for Beaker.
You may set TESTS=path/to/test,and/more/tests.
You may set additional Beaker OPTIONS='--more --options'
If testing from git checkouts, you may optionally set the github fork to checkout from using PUPPET_FORK='some-other-puppet-fork' (you may change the HIERA_FORK and FACTER_FORK as well if you wish).
You may also optionally set the git server to checkout repos from using GIT_SERVER='some.git.mirror'.
Or you may set PUPPET_GIT_SERVER='my.host.with.git.daemon', specifically, if you have set up a `git daemon` to pull local commits from. (You will need to allow the git daemon to serve the repo (see `git help daemon` and the docs/acceptance_tests.md for more details)).
If there is a Beaker options hash in a ./local_options.rb, it will be included. Commandline options set through the above environment variables will override settings in this file.
EOS
desc <<-EOS
Run the acceptance tests through Beaker and install packages on the configuration targets.
#{USAGE}
EOS
task :packages => 'ci:check_env' do
beaker_test
end
desc <<-EOS
Run the acceptance tests through Beaker and install from git on the configuration targets.
#{USAGE}
EOS
task :git => 'ci:check_env' do
beaker_test(:git)
end
end
desc "Capture the master and agent hostname from the latest log and construct a preserved_config.yaml for re-running against preserved hosts without provisioning."
task :extract_preserved_config do
generate_config_for_latest_hosts
end
desc <<-EOS
Run an acceptance test for a given node configuration and preserve the hosts.
Defaults to a packages run, but you can set it to 'git' with TYPE='git'.
#{USAGE}
EOS
task :test_and_preserve_hosts => 'ci:check_env' do
- beaker_test(beaker_run_type, :preserve_hosts => 'always')
+ beaker_test(beaker_run_type, :preserve_hosts => 'always', :__preserve_config__ => true)
end
desc "List acceptance runs from the past day which had hosts preserved."
task :list_preserved do
preserved = list_preserved_configurations
print_preserved(preserved)
end
desc <<-EOS
Shutdown and destroy any hosts that we have preserved for testing. These should be reaped daily by scripts, but this will free up resources immediately.
Specify a list of comma separated HOST_NAMES if you have a set of dynamic vcloud host names you want to purge outside of what can be grepped from the logs.
You can go back through the last SECS_AGO logs. Default is one day ago in secs.
EOS
task :release_hosts do
host_names = ENV['HOST_NAMES'].split(',') if ENV['HOST_NAMES']
secs_ago = ENV['SECS_AGO']
release_hosts(host_names, secs_ago)
end
task :destroy_preserved_hosts => 'ci:release_hosts' do
puts "Note: we are now releasing hosts back to the vcloud pooling api rather than destroying them directly. The rake task for this is ci:release_hosts"
end
desc <<-EOS
Rerun an acceptance test using the last captured preserved_config.yaml to skip provisioning.
Or specify a CONFIG_NUMBER from `rake ci:list_preserved`.
Defaults to a packages run, but you can set it to 'git' with TYPE='git'.
EOS
task :test_against_preserved_hosts do
config_number = (ENV['CONFIG_NUMBER'] || 0).to_i
preserved = list_preserved_configurations
print_preserved(preserved)
config_path = preserved[config_number][0]
+
puts "Using ##{config_number}: #{config_path}"
- beaker_test(beaker_run_type,
+
+ options = {
:hosts_file => "#{config_path}/preserved_config.yaml",
:no_provision => true,
:preserve_hosts => 'always',
- :__delete_options__ => [:pre_suite]
- )
+ }
+ run_type = beaker_run_type
+ if run_type == :packages
+ options.merge!(:pre_suite => [
+ 'setup/packages/pre-suite/015_PackageHostsPresets.rb',
+ 'setup/packages/pre-suite/045_EnsureMasterStartedOnPassenger.rb',
+ ])
+ else
+ options.merge!(:__delete_options__ => [:pre_suite])
+ end
+ beaker_test(beaker_run_type, options)
end
end
task :default do
sh('rake -T')
end
task :spec do
sh('rspec lib')
end
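For reference (not part of the diff): the USAGE text above notes that a Beaker options hash in ./local_options.rb is layered over the defaults by HarnessOptions::Aggregator#final_options (DEFAULTS, then config/<mode>/options.rb, then local_options.rb), with options passed through the environment variables still overriding it on the beaker command line. A hypothetical local_options.rb, with values chosen only for illustration, might look like this:
````
# Hypothetical ./local_options.rb -- evaluated by Aggregator#get_local_options and
# merged over DEFAULTS and the per-mode config/<mode>/options.rb settings.
{
  :hosts_file     => 'config/nodes/win2008r2-rubyx64.yaml',
  :preserve_hosts => 'always',
  :color          => true,
}
````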
diff --git a/acceptance/bin/ci-bootstrap-from-artifacts.sh b/acceptance/bin/ci-bootstrap-from-artifacts.sh
index 06f886c12..7936f428f 100755
--- a/acceptance/bin/ci-bootstrap-from-artifacts.sh
+++ b/acceptance/bin/ci-bootstrap-from-artifacts.sh
@@ -1,49 +1,54 @@
#! /usr/bin/env bash
###############################################################################
# Initial preparation for a ci acceptance job in Jenkins. Crucially, it
# handles the untarring of the build artifact and bundle install, getting us to
# a state where we can then bundle exec rake the particular ci:test we want to
# run.
#
# Having this checked in in a script makes it much easier to have multiple
# acceptance jobs. It must be kept agnostic between Linux/Solaris/Windows
# builds, however.
set -x
# If $GEM_SOURCE is not set, fall back to rubygems.org
if [ -z $GEM_SOURCE ]; then
export GEM_SOURCE='https://rubygems.org'
fi
echo "SHA: ${SHA}"
echo "FORK: ${FORK}"
echo "BUILD_SELECTOR: ${BUILD_SELECTOR}"
echo "PACKAGE_BUILD_STATUS: ${PACKAGE_BUILD_STATUS}"
rm -rf acceptance
mkdir acceptance
cd acceptance
tar -xzf ../acceptance-artifacts.tar.gz
echo "===== This artifact is from ====="
cat creator.txt
bundle install --without=development --path=.bundle/gems
if [[ "${platform}" =~ 'solaris' ]]; then
repo_proxy=" :repo_proxy => false,"
fi
+# If the platform is Windows and $ruby_arch is set, append it
+if [[ "${platform}" =~ 'win' && ! -z $ruby_arch ]]; then
+ platform="${platform}-${ruby_arch}"
+fi
+
cat > local_options.rb <<-EOF
{
:hosts_file => 'config/nodes/${platform}.yaml',
:ssh => {
:keys => ["${HOME}/.ssh/id_rsa-old.private"],
},
${repo_proxy}
}
EOF
[[ (-z "${PACKAGE_BUILD_STATUS}") || ("${PACKAGE_BUILD_STATUS}" = "success") ]] || exit 1
diff --git a/acceptance/config/git/options.rb b/acceptance/config/git/options.rb
index dba0be850..1056c4133 100644
--- a/acceptance/config/git/options.rb
+++ b/acceptance/config/git/options.rb
@@ -1,18 +1,18 @@
{
:install => [
- 'facter#7845c9aa0dff4e5cc9831615306f394f6db0137c',
+ 'facter#stable',
'hiera#stable',
'puppet',
],
:pre_suite => [
'setup/git/pre-suite/000_EnvSetup.rb',
'setup/git/pre-suite/010_TestSetup.rb',
'setup/git/pre-suite/020_PuppetUserAndGroup.rb',
'setup/common/pre-suite/025_StopFirewall.rb',
'setup/git/pre-suite/030_PuppetMasterSanity.rb',
'setup/common/pre-suite/040_ValidateSignCert.rb',
'setup/git/pre-suite/060_InstallModules.rb',
'setup/git/pre-suite/070_InstalCACerts.rb',
'setup/common/pre-suite/100_SetParser.rb',
],
}
diff --git a/acceptance/config/nodes/win2003-all.yaml b/acceptance/config/nodes/win2003-all.yaml
deleted file mode 100644
index 1f27d0e72..000000000
--- a/acceptance/config/nodes/win2003-all.yaml
+++ /dev/null
@@ -1,38 +0,0 @@
-HOSTS:
- master:
- roles:
- - master
- - agent
- platform: el-6-i386
- hypervisor: vcloud
- template: Delivery/Quality Assurance/Templates/vCloud/redhat-6-i386
- agent-i386:
- roles:
- - agent
- platform: windows-2003-32
- hypervisor: vcloud
- template: Delivery/Quality Assurance/Templates/vCloud/win-2003-i386
- agent-x86_64:
- roles:
- - agent
- platform: windows-2003-64
- hypervisor: vcloud
- template: Delivery/Quality Assurance/Templates/vCloud/win-2003-x86_64
- agent-r2-i386:
- roles:
- - agent
- platform: windows-2003r2-32
- hypervisor: vcloud
- template: Delivery/Quality Assurance/Templates/vCloud/win-2003r2-i386
- agent-r2-x86_64:
- roles:
- - agent
- platform: windows-2003r2-64
- hypervisor: vcloud
- template: Delivery/Quality Assurance/Templates/vCloud/win-2003r2-x86_64
-CONFIG:
- filecount: 12
- datastore: instance0
- resourcepool: delivery/Quality Assurance/FOSS/Dynamic
- folder: Delivery/Quality Assurance/FOSS/Dynamic
- pooling_api: http://vcloud.delivery.puppetlabs.net/
diff --git a/acceptance/config/nodes/win2003r2.yaml b/acceptance/config/nodes/win2003r2x64-rubyx64.yaml
similarity index 71%
rename from acceptance/config/nodes/win2003r2.yaml
rename to acceptance/config/nodes/win2003r2x64-rubyx64.yaml
index d3c431736..5a94aab24 100644
--- a/acceptance/config/nodes/win2003r2.yaml
+++ b/acceptance/config/nodes/win2003r2x64-rubyx64.yaml
@@ -1,26 +1,20 @@
HOSTS:
master:
roles:
- master
- agent
platform: el-6-i386
hypervisor: vcloud
template: Delivery/Quality Assurance/Templates/vCloud/redhat-6-i386
- agent-r2-i386:
- roles:
- - agent
- platform: windows-2003r2-32
- hypervisor: vcloud
- template: Delivery/Quality Assurance/Templates/vCloud/win-2003r2-i386
- agent-r2-x86_64:
+ agent-2003r2-x86_64-rubyx64:
roles:
- agent
platform: windows-2003r2-64
+ ruby_arch: x64
hypervisor: vcloud
template: Delivery/Quality Assurance/Templates/vCloud/win-2003r2-x86_64
CONFIG:
- filecount: 12
datastore: instance0
resourcepool: delivery/Quality Assurance/FOSS/Dynamic
folder: Delivery/Quality Assurance/FOSS/Dynamic
pooling_api: http://vcloud.delivery.puppetlabs.net/
diff --git a/acceptance/config/nodes/win2008r2.yaml b/acceptance/config/nodes/win2003r2x64-rubyx86.yaml
similarity index 83%
copy from acceptance/config/nodes/win2008r2.yaml
copy to acceptance/config/nodes/win2003r2x64-rubyx86.yaml
index 56608a68a..d58429524 100644
--- a/acceptance/config/nodes/win2008r2.yaml
+++ b/acceptance/config/nodes/win2003r2x64-rubyx86.yaml
@@ -1,20 +1,20 @@
HOSTS:
master:
roles:
- master
- agent
platform: el-6-i386
hypervisor: vcloud
template: Delivery/Quality Assurance/Templates/vCloud/redhat-6-i386
- agent:
+ agent-2003r2-x86_64-rubyx86:
roles:
- agent
- platform: windows-2008r2-64
+ platform: windows-2003r2-64
+ ruby_arch: x86
hypervisor: vcloud
- template: Delivery/Quality Assurance/Templates/vCloud/win-2008r2-x86_64
+ template: Delivery/Quality Assurance/Templates/vCloud/win-2003r2-x86_64
CONFIG:
- filecount: 12
datastore: instance0
resourcepool: delivery/Quality Assurance/FOSS/Dynamic
folder: Delivery/Quality Assurance/FOSS/Dynamic
pooling_api: http://vcloud.delivery.puppetlabs.net/
diff --git a/acceptance/config/nodes/win2008r2.yaml b/acceptance/config/nodes/win2003r2x86-rubyx86.yaml
similarity index 85%
copy from acceptance/config/nodes/win2008r2.yaml
copy to acceptance/config/nodes/win2003r2x86-rubyx86.yaml
index 56608a68a..d91bc7385 100644
--- a/acceptance/config/nodes/win2008r2.yaml
+++ b/acceptance/config/nodes/win2003r2x86-rubyx86.yaml
@@ -1,20 +1,20 @@
HOSTS:
master:
roles:
- master
- agent
platform: el-6-i386
hypervisor: vcloud
template: Delivery/Quality Assurance/Templates/vCloud/redhat-6-i386
- agent:
+ agent-2003r2-i386:
roles:
- agent
- platform: windows-2008r2-64
+ platform: windows-2003r2-32
+ ruby_arch: x86
hypervisor: vcloud
- template: Delivery/Quality Assurance/Templates/vCloud/win-2008r2-x86_64
+ template: Delivery/Quality Assurance/Templates/vCloud/win-2003r2-i386
CONFIG:
- filecount: 12
datastore: instance0
resourcepool: delivery/Quality Assurance/FOSS/Dynamic
folder: Delivery/Quality Assurance/FOSS/Dynamic
pooling_api: http://vcloud.delivery.puppetlabs.net/
diff --git a/acceptance/config/nodes/win2003.yaml b/acceptance/config/nodes/win2003x64-rubyx64.yaml
similarity index 72%
rename from acceptance/config/nodes/win2003.yaml
rename to acceptance/config/nodes/win2003x64-rubyx64.yaml
index 3d61fb2d4..4f6f0754e 100644
--- a/acceptance/config/nodes/win2003.yaml
+++ b/acceptance/config/nodes/win2003x64-rubyx64.yaml
@@ -1,26 +1,20 @@
HOSTS:
master:
roles:
- master
- agent
platform: el-6-i386
hypervisor: vcloud
template: Delivery/Quality Assurance/Templates/vCloud/redhat-6-i386
- agent-i386:
- roles:
- - agent
- platform: windows-2003-32
- hypervisor: vcloud
- template: Delivery/Quality Assurance/Templates/vCloud/win-2003-i386
- agent-x86_64:
+ agent-2003-x86_64-rubyx64:
roles:
- agent
platform: windows-2003-64
+ ruby_arch: x64
hypervisor: vcloud
template: Delivery/Quality Assurance/Templates/vCloud/win-2003-x86_64
CONFIG:
- filecount: 12
datastore: instance0
resourcepool: delivery/Quality Assurance/FOSS/Dynamic
folder: Delivery/Quality Assurance/FOSS/Dynamic
pooling_api: http://vcloud.delivery.puppetlabs.net/
diff --git a/acceptance/config/nodes/win2008r2.yaml b/acceptance/config/nodes/win2003x64-rubyx86.yaml
similarity index 84%
copy from acceptance/config/nodes/win2008r2.yaml
copy to acceptance/config/nodes/win2003x64-rubyx86.yaml
index 56608a68a..5929ab47d 100644
--- a/acceptance/config/nodes/win2008r2.yaml
+++ b/acceptance/config/nodes/win2003x64-rubyx86.yaml
@@ -1,20 +1,20 @@
HOSTS:
master:
roles:
- master
- agent
platform: el-6-i386
hypervisor: vcloud
template: Delivery/Quality Assurance/Templates/vCloud/redhat-6-i386
- agent:
+ agent-2003-x86_64-rubyx86:
roles:
- agent
- platform: windows-2008r2-64
+ platform: windows-2003-64
+ ruby_arch: x86
hypervisor: vcloud
- template: Delivery/Quality Assurance/Templates/vCloud/win-2008r2-x86_64
+ template: Delivery/Quality Assurance/Templates/vCloud/win-2003-x86_64
CONFIG:
- filecount: 12
datastore: instance0
resourcepool: delivery/Quality Assurance/FOSS/Dynamic
folder: Delivery/Quality Assurance/FOSS/Dynamic
pooling_api: http://vcloud.delivery.puppetlabs.net/
diff --git a/acceptance/config/nodes/win2008r2.yaml b/acceptance/config/nodes/win2003x86-rubyx86.yaml
similarity index 86%
copy from acceptance/config/nodes/win2008r2.yaml
copy to acceptance/config/nodes/win2003x86-rubyx86.yaml
index 56608a68a..b558a5514 100644
--- a/acceptance/config/nodes/win2008r2.yaml
+++ b/acceptance/config/nodes/win2003x86-rubyx86.yaml
@@ -1,20 +1,20 @@
HOSTS:
master:
roles:
- master
- agent
platform: el-6-i386
hypervisor: vcloud
template: Delivery/Quality Assurance/Templates/vCloud/redhat-6-i386
- agent:
+ agent-2003-i386:
roles:
- agent
- platform: windows-2008r2-64
+ platform: windows-2003-32
+ ruby_arch: x86
hypervisor: vcloud
- template: Delivery/Quality Assurance/Templates/vCloud/win-2008r2-x86_64
+ template: Delivery/Quality Assurance/Templates/vCloud/win-2003-i386
CONFIG:
- filecount: 12
datastore: instance0
resourcepool: delivery/Quality Assurance/FOSS/Dynamic
folder: Delivery/Quality Assurance/FOSS/Dynamic
pooling_api: http://vcloud.delivery.puppetlabs.net/
diff --git a/acceptance/config/nodes/win2008r2.yaml b/acceptance/config/nodes/win2008-rubyx64.yaml
similarity index 84%
copy from acceptance/config/nodes/win2008r2.yaml
copy to acceptance/config/nodes/win2008-rubyx64.yaml
index 56608a68a..0892d7931 100644
--- a/acceptance/config/nodes/win2008r2.yaml
+++ b/acceptance/config/nodes/win2008-rubyx64.yaml
@@ -1,20 +1,20 @@
HOSTS:
master:
roles:
- master
- agent
platform: el-6-i386
hypervisor: vcloud
template: Delivery/Quality Assurance/Templates/vCloud/redhat-6-i386
- agent:
+ agent-2008-x86_64-rubyx64:
roles:
- agent
- platform: windows-2008r2-64
+ platform: windows-2008-64
+ ruby_arch: x64
hypervisor: vcloud
- template: Delivery/Quality Assurance/Templates/vCloud/win-2008r2-x86_64
+ template: Delivery/Quality Assurance/Templates/vCloud/win-2008-x86_64
CONFIG:
- filecount: 12
datastore: instance0
resourcepool: delivery/Quality Assurance/FOSS/Dynamic
folder: Delivery/Quality Assurance/FOSS/Dynamic
pooling_api: http://vcloud.delivery.puppetlabs.net/
diff --git a/acceptance/config/nodes/win2008r2.yaml b/acceptance/config/nodes/win2008-rubyx86.yaml
similarity index 84%
copy from acceptance/config/nodes/win2008r2.yaml
copy to acceptance/config/nodes/win2008-rubyx86.yaml
index 56608a68a..5de98710b 100644
--- a/acceptance/config/nodes/win2008r2.yaml
+++ b/acceptance/config/nodes/win2008-rubyx86.yaml
@@ -1,20 +1,20 @@
HOSTS:
master:
roles:
- master
- agent
platform: el-6-i386
hypervisor: vcloud
template: Delivery/Quality Assurance/Templates/vCloud/redhat-6-i386
- agent:
+ agent-2008-x86_64-rubyx86:
roles:
- agent
- platform: windows-2008r2-64
+ platform: windows-2008-64
+ ruby_arch: x86
hypervisor: vcloud
- template: Delivery/Quality Assurance/Templates/vCloud/win-2008r2-x86_64
+ template: Delivery/Quality Assurance/Templates/vCloud/win-2008-x86_64
CONFIG:
- filecount: 12
datastore: instance0
resourcepool: delivery/Quality Assurance/FOSS/Dynamic
folder: Delivery/Quality Assurance/FOSS/Dynamic
pooling_api: http://vcloud.delivery.puppetlabs.net/
diff --git a/acceptance/config/nodes/win2008r2.yaml b/acceptance/config/nodes/win2008r2-rubyx64.yaml
similarity index 91%
copy from acceptance/config/nodes/win2008r2.yaml
copy to acceptance/config/nodes/win2008r2-rubyx64.yaml
index 56608a68a..3de3f5db1 100644
--- a/acceptance/config/nodes/win2008r2.yaml
+++ b/acceptance/config/nodes/win2008r2-rubyx64.yaml
@@ -1,20 +1,20 @@
HOSTS:
master:
roles:
- master
- agent
platform: el-6-i386
hypervisor: vcloud
template: Delivery/Quality Assurance/Templates/vCloud/redhat-6-i386
- agent:
+ agent-2008r2-x86_64-rubyx64:
roles:
- agent
platform: windows-2008r2-64
+ ruby_arch: x64
hypervisor: vcloud
template: Delivery/Quality Assurance/Templates/vCloud/win-2008r2-x86_64
CONFIG:
- filecount: 12
datastore: instance0
resourcepool: delivery/Quality Assurance/FOSS/Dynamic
folder: Delivery/Quality Assurance/FOSS/Dynamic
pooling_api: http://vcloud.delivery.puppetlabs.net/
diff --git a/acceptance/config/nodes/win2008-all.yaml b/acceptance/config/nodes/win2008r2-rubyx86.yaml
similarity index 71%
rename from acceptance/config/nodes/win2008-all.yaml
rename to acceptance/config/nodes/win2008r2-rubyx86.yaml
index b9e052224..2020cc6c6 100644
--- a/acceptance/config/nodes/win2008-all.yaml
+++ b/acceptance/config/nodes/win2008r2-rubyx86.yaml
@@ -1,26 +1,20 @@
HOSTS:
master:
roles:
- master
- agent
platform: el-6-i386
hypervisor: vcloud
template: Delivery/Quality Assurance/Templates/vCloud/redhat-6-i386
- agent-2008r2-x86_64:
+ agent-2008r2-x86_64-rubyx86:
roles:
- agent
platform: windows-2008r2-64
+ ruby_arch: x86
hypervisor: vcloud
template: Delivery/Quality Assurance/Templates/vCloud/win-2008r2-x86_64
- agent-2008-x86_64:
- roles:
- - agent
- platform: windows-2008-64
- hypervisor: vcloud
- template: Delivery/Quality Assurance/Templates/vCloud/win-2008-x86_64
CONFIG:
- filecount: 12
datastore: instance0
resourcepool: delivery/Quality Assurance/FOSS/Dynamic
folder: Delivery/Quality Assurance/FOSS/Dynamic
pooling_api: http://vcloud.delivery.puppetlabs.net/
diff --git a/acceptance/config/nodes/win2008r2.yaml b/acceptance/config/nodes/win2012-rubyx64.yaml
similarity index 84%
copy from acceptance/config/nodes/win2008r2.yaml
copy to acceptance/config/nodes/win2012-rubyx64.yaml
index 56608a68a..73b8488a1 100644
--- a/acceptance/config/nodes/win2008r2.yaml
+++ b/acceptance/config/nodes/win2012-rubyx64.yaml
@@ -1,20 +1,20 @@
HOSTS:
master:
roles:
- master
- agent
platform: el-6-i386
hypervisor: vcloud
template: Delivery/Quality Assurance/Templates/vCloud/redhat-6-i386
- agent:
+ agent-2012-x86_64-rubyx64:
roles:
- agent
- platform: windows-2008r2-64
+ platform: windows-2012-64
+ ruby_arch: x64
hypervisor: vcloud
- template: Delivery/Quality Assurance/Templates/vCloud/win-2008r2-x86_64
+ template: Delivery/Quality Assurance/Templates/vCloud/win-2012-x86_64
CONFIG:
- filecount: 12
datastore: instance0
resourcepool: delivery/Quality Assurance/FOSS/Dynamic
folder: Delivery/Quality Assurance/FOSS/Dynamic
pooling_api: http://vcloud.delivery.puppetlabs.net/
diff --git a/acceptance/config/nodes/win2008r2.yaml b/acceptance/config/nodes/win2012-rubyx86.yaml
similarity index 84%
copy from acceptance/config/nodes/win2008r2.yaml
copy to acceptance/config/nodes/win2012-rubyx86.yaml
index 56608a68a..9e18b234e 100644
--- a/acceptance/config/nodes/win2008r2.yaml
+++ b/acceptance/config/nodes/win2012-rubyx86.yaml
@@ -1,20 +1,20 @@
HOSTS:
master:
roles:
- master
- agent
platform: el-6-i386
hypervisor: vcloud
template: Delivery/Quality Assurance/Templates/vCloud/redhat-6-i386
- agent:
+ agent-2012-x86_64-rubyx86:
roles:
- agent
- platform: windows-2008r2-64
+ platform: windows-2012-64
+ ruby_arch: x86
hypervisor: vcloud
- template: Delivery/Quality Assurance/Templates/vCloud/win-2008r2-x86_64
+ template: Delivery/Quality Assurance/Templates/vCloud/win-2012-x86_64
CONFIG:
- filecount: 12
datastore: instance0
resourcepool: delivery/Quality Assurance/FOSS/Dynamic
folder: Delivery/Quality Assurance/FOSS/Dynamic
pooling_api: http://vcloud.delivery.puppetlabs.net/
diff --git a/acceptance/config/nodes/win2012-all.yaml b/acceptance/config/nodes/win2012r2-rubyx64.yaml
similarity index 71%
rename from acceptance/config/nodes/win2012-all.yaml
rename to acceptance/config/nodes/win2012r2-rubyx64.yaml
index 3e9a3d83f..59c35f4b9 100644
--- a/acceptance/config/nodes/win2012-all.yaml
+++ b/acceptance/config/nodes/win2012r2-rubyx64.yaml
@@ -1,26 +1,20 @@
HOSTS:
master:
roles:
- master
- agent
platform: el-6-i386
hypervisor: vcloud
template: Delivery/Quality Assurance/Templates/vCloud/redhat-6-i386
- agent-2012r2-x86_64:
+ agent-2012r2-x86_64-rubyx64:
roles:
- agent
platform: windows-2012r2-64
+ ruby_arch: x64
hypervisor: vcloud
template: Delivery/Quality Assurance/Templates/vCloud/win-2012r2-x86_64
- agent-2012-x86_64:
- roles:
- - agent
- platform: windows-2012-64
- hypervisor: vcloud
- template: Delivery/Quality Assurance/Templates/vCloud/win-2012-x86_64
CONFIG:
- filecount: 12
datastore: instance0
resourcepool: delivery/Quality Assurance/FOSS/Dynamic
folder: Delivery/Quality Assurance/FOSS/Dynamic
pooling_api: http://vcloud.delivery.puppetlabs.net/
diff --git a/acceptance/config/nodes/win2008r2.yaml b/acceptance/config/nodes/win2012r2-rubyx86.yaml
similarity index 83%
rename from acceptance/config/nodes/win2008r2.yaml
rename to acceptance/config/nodes/win2012r2-rubyx86.yaml
index 56608a68a..2ebb2dc71 100644
--- a/acceptance/config/nodes/win2008r2.yaml
+++ b/acceptance/config/nodes/win2012r2-rubyx86.yaml
@@ -1,20 +1,20 @@
HOSTS:
master:
roles:
- master
- agent
platform: el-6-i386
hypervisor: vcloud
template: Delivery/Quality Assurance/Templates/vCloud/redhat-6-i386
- agent:
+ agent-2012r2-x86_64-rubyx86:
roles:
- agent
- platform: windows-2008r2-64
+ platform: windows-2012r2-64
+ ruby_arch: x86
hypervisor: vcloud
- template: Delivery/Quality Assurance/Templates/vCloud/win-2008r2-x86_64
+ template: Delivery/Quality Assurance/Templates/vCloud/win-2012r2-x86_64
CONFIG:
- filecount: 12
datastore: instance0
resourcepool: delivery/Quality Assurance/FOSS/Dynamic
folder: Delivery/Quality Assurance/FOSS/Dynamic
pooling_api: http://vcloud.delivery.puppetlabs.net/
diff --git a/acceptance/config/nodes/windows-all.yaml b/acceptance/config/nodes/windows-all.yaml
deleted file mode 100644
index d9d05ded5..000000000
--- a/acceptance/config/nodes/windows-all.yaml
+++ /dev/null
@@ -1,62 +0,0 @@
-HOSTS:
- master:
- roles:
- - master
- - agent
- platform: el-6-i386
- hypervisor: vcloud
- template: Delivery/Quality Assurance/Templates/vCloud/redhat-6-i386
- agent-2003-i386:
- roles:
- - agent
- platform: windows-2003-32
- hypervisor: vcloud
- template: Delivery/Quality Assurance/Templates/vCloud/win-2003-i386
- agent-2003-x86_64:
- roles:
- - agent
- platform: windows-2003-64
- hypervisor: vcloud
- template: Delivery/Quality Assurance/Templates/vCloud/win-2003-x86_64
- agent-2003r2-i386:
- roles:
- - agent
- platform: windows-2003r2-32
- hypervisor: vcloud
- template: Delivery/Quality Assurance/Templates/vCloud/win-2003r2-i386
- agent-2003r2-x86_64:
- roles:
- - agent
- platform: windows-2003r2-64
- hypervisor: vcloud
- template: Delivery/Quality Assurance/Templates/vCloud/win-2003r2-x86_64
- agent-2008r2-x86_64:
- roles:
- - agent
- platform: windows-2008r2-64
- hypervisor: vcloud
- template: Delivery/Quality Assurance/Templates/vCloud/win-2008r2-x86_64
- agent-2008-x86_64:
- roles:
- - agent
- platform: windows-2008-64
- hypervisor: vcloud
- template: Delivery/Quality Assurance/Templates/vCloud/win-2008-x86_64
- agent-2012r2-x86_64:
- roles:
- - agent
- platform: windows-2012r2-64
- hypervisor: vcloud
- template: Delivery/Quality Assurance/Templates/vCloud/win-2012r2-x86_64
- agent-2012-x86_64:
- roles:
- - agent
- platform: windows-2012-64
- hypervisor: vcloud
- template: Delivery/Quality Assurance/Templates/vCloud/win-2012-x86_64
-CONFIG:
- filecount: 12
- datastore: instance0
- resourcepool: delivery/Quality Assurance/FOSS/Dynamic
- folder: Delivery/Quality Assurance/FOSS/Dynamic
- pooling_api: http://vcloud.delivery.puppetlabs.net/
diff --git a/acceptance/config/packages/options.rb b/acceptance/config/packages/options.rb
index 4adaab5de..343cbb48a 100644
--- a/acceptance/config/packages/options.rb
+++ b/acceptance/config/packages/options.rb
@@ -1,8 +1,11 @@
{
+ :type => 'foss-packages',
:pre_suite => [
'setup/packages/pre-suite/010_Install.rb',
+ 'setup/packages/pre-suite/015_PackageHostsPresets.rb',
'setup/common/pre-suite/025_StopFirewall.rb',
'setup/common/pre-suite/040_ValidateSignCert.rb',
+ 'setup/packages/pre-suite/045_EnsureMasterStartedOnPassenger.rb',
'setup/common/pre-suite/100_SetParser.rb',
],
}
diff --git a/acceptance/lib/puppet/acceptance/common_utils.rb b/acceptance/lib/puppet/acceptance/common_utils.rb
index 5213595ed..c9aeb1d15 100644
--- a/acceptance/lib/puppet/acceptance/common_utils.rb
+++ b/acceptance/lib/puppet/acceptance/common_utils.rb
@@ -1,107 +1,125 @@
module Puppet
module Acceptance
module CronUtils
def clean(agent, o={})
o = {:user => 'tstuser'}.merge(o)
run_cron_on(agent, :remove, o[:user])
apply_manifest_on(agent, %[user { '%s': ensure => absent, managehome => false }] % o[:user])
end
def setup(agent, o={})
o = {:user => 'tstuser'}.merge(o)
apply_manifest_on(agent, %[user { '%s': ensure => present, managehome => false }] % o[:user])
apply_manifest_on(agent, %[case $operatingsystem {
centos, redhat: {$cron = 'cronie'}
solaris: { $cron = 'core-os' }
default: {$cron ='cron'} }
package {'cron': name=> $cron, ensure=>present, }])
end
end
module CAUtils
def initialize_ssl
hostname = on(master, 'facter hostname').stdout.strip
fqdn = on(master, 'facter fqdn').stdout.strip
+ if master.use_service_scripts?
+ step "Ensure puppet is stopped"
+ # Passenger, in particular, must be shutdown for the cert setup steps to work,
+ # but any running puppet master will interfere with webrick starting up and
+ # potentially ignore the puppet.conf changes.
+ on(master, puppet('resource', 'service', master['puppetservice'], "ensure=stopped"))
+ end
+
step "Clear SSL on all hosts"
hosts.each do |host|
ssldir = on(host, puppet('agent --configprint ssldir')).stdout.chomp
on(host, "rm -rf '#{ssldir}'")
end
step "Master: Start Puppet Master" do
- with_puppet_running_on(master, :main => { :dns_alt_names => "puppet,#{hostname},#{fqdn}", :verbose => true, :daemonize => true }) do
+ master_opts = {
+ :main => {
+ :dns_alt_names => "puppet,#{hostname},#{fqdn}",
+ },
+ :__service_args__ => {
+ # apache2 service scripts can't restart if we've removed the ssl dir
+ :bypass_service_script => true,
+ },
+ }
+ with_puppet_running_on(master, master_opts) do
hosts.each do |host|
next if host['roles'].include? 'master'
step "Agents: Run agent --test first time to gen CSR"
on host, puppet("agent --test --server #{master}"), :acceptable_exit_codes => [1]
end
# Sign all waiting certs
step "Master: sign all certs"
on master, puppet("cert --sign --all"), :acceptable_exit_codes => [0,24]
step "Agents: Run agent --test second time to obtain signed cert"
on agents, puppet("agent --test --server #{master}"), :acceptable_exit_codes => [0,2]
end
end
end
def clean_cert(host, cn, check = true)
on(host, puppet('cert', 'clean', cn), :acceptable_exit_codes => check ? [0] : [0, 24])
if check
assert_match(/remov.*Certificate.*#{cn}/i, stdout, "Should see a log message that certificate request was removed.")
on(host, puppet('cert', 'list', '--all'))
assert_no_match(/#{cn}/, stdout, "Should not see certificate in list anymore.")
end
end
def clear_agent_ssl
return if master.is_pe?
step "All: Clear agent only ssl settings (do not clear master)"
hosts.each do |host|
next if host == master
ssldir = on(host, puppet('agent --configprint ssldir')).stdout.chomp
on( host, host_command("rm -rf '#{ssldir}'") )
end
end
def reset_agent_ssl(resign = true)
return if master.is_pe?
clear_agent_ssl
hostname = master.execute('facter hostname')
fqdn = master.execute('facter fqdn')
step "Clear old agent certificates from master" do
agents.each do |agent|
+ next if agent == master && agent.is_using_passenger?
agent_cn = on(agent, puppet('agent --configprint certname')).stdout.chomp
clean_cert(master, agent_cn, false) if agent_cn
end
end
if resign
step "Master: Ensure the master is listening and autosigning"
with_puppet_running_on(master,
:master => {
:dns_alt_names => "puppet,#{hostname},#{fqdn}",
:autosign => true,
}
) do
agents.each do |agent|
-
+ next if agent == master && agent.is_using_passenger?
step "Agents: Run agent --test once to obtain auto-signed cert" do
on agent, puppet('agent', "--test --server #{master}"), :acceptable_exit_codes => [0,2]
end
end
end
end
end
end
end
end
diff --git a/acceptance/lib/puppet/acceptance/environment_utils.rb b/acceptance/lib/puppet/acceptance/environment_utils.rb
new file mode 100644
index 000000000..b657ad128
--- /dev/null
+++ b/acceptance/lib/puppet/acceptance/environment_utils.rb
@@ -0,0 +1,354 @@
+require 'puppet/acceptance/module_utils'
+
+module Puppet
+ module Acceptance
+ module EnvironmentUtils
+ include Puppet::Acceptance::ModuleUtils
+
+ # Generate puppet manifest for the creation of an environment with
+ # the given modulepath, manifestpath, and env_name. The created environment
+ # will have one testing_mod module and a manifest site.pp which includes it.
+ #
+ # @param options [Hash<Sym,String>]
+ # @option options [String] :modulepath Modules directory
+ # @option options [String] :manifestpath Manifest directory
+ # @option options [String] :env_name Environment name
+ # @return [String] Puppet manifest to create the environment files
+ def generate_environment(options)
+ modulepath = options[:modulepath]
+ manifestpath = options[:manifestpath]
+ env_name = options[:env_name]
+
+ environment = <<-MANIFEST_SNIPPET
+ file {
+ ###################################################
+ # #{env_name}
+ #{generate_module("testing_mod", env_name, modulepath)}
+
+ "#{manifestpath}":;
+ "#{manifestpath}/site.pp":
+ ensure => file,
+ mode => 0640,
+ content => '
+ notify { "in #{env_name} site.pp": }
+ include testing_mod
+ '
+ ;
+ }
+ MANIFEST_SNIPPET
+ end
+
+ # Generate one module's manifest code.
+ def generate_module(module_name, env_name, modulepath)
+ module_pp = <<-MANIFEST_SNIPPET
+ "#{modulepath}":;
+ "#{modulepath}/#{module_name}":;
+ "#{modulepath}/#{module_name}/manifests":;
+
+ "#{modulepath}/#{module_name}/manifests/init.pp":
+ ensure => file,
+ mode => 0640,
+ content => 'class #{module_name} {
+ notify { "include #{env_name} #{module_name}": }
+ }'
+ ;
+ MANIFEST_SNIPPET
+ end
+
+ # Default, legacy, dynamic and directory environments
+ # using generate_environment(), all rooted in testdir.
+ #
+ # @param [String] testdir path to the temp directory which will be the confdir all
+ # the environments live in
+ # @return [String] Puppet manifest to generate all of the environment files.
+ def environment_manifest(testdir)
+ manifest = <<-MANIFEST
+ File {
+ ensure => directory,
+ owner => #{master['user']},
+ group => #{master['group']},
+ mode => 0750,
+ }
+
+ file { "#{testdir}": }
+
+ #{generate_environment(
+ :modulepath => "#{testdir}/modules",
+ :manifestpath => "#{testdir}/manifests",
+ :env_name => "default environment")}
+
+ #{generate_environment(
+ :modulepath => "#{testdir}/testing-modules",
+ :manifestpath => "#{testdir}/testing-manifests",
+ :env_name => "legacy testing environment")}
+
+ file {
+ "#{testdir}/dynamic":;
+ "#{testdir}/dynamic/testing":;
+ }
+
+ #{generate_environment(
+ :modulepath => "#{testdir}/dynamic/testing/modules",
+ :manifestpath => "#{testdir}/dynamic/testing/manifests",
+ :env_name => "dynamic testing environment")}
+
+ file {
+ "#{testdir}/environments":;
+ "#{testdir}/environments/testing":;
+ }
+
+ #{generate_environment(
+ :modulepath => "#{testdir}/environments/testing/modules",
+ :manifestpath => "#{testdir}/environments/testing/manifests",
+ :env_name => "directory testing environment")}
+
+ file {
+ "#{testdir}/environments/testing_environment_conf":;
+ }
+
+ #{generate_environment(
+ :modulepath => "#{testdir}/environments/testing_environment_conf/nonstandard-modules",
+ :manifestpath => "#{testdir}/environments/testing_environment_conf/nonstandard-manifests",
+ :env_name => "directory testing with environment.conf")}
+
+ file { "#{testdir}/environments/testing_environment_conf/environment.conf":
+ ensure => file,
+ mode => 0640,
+ content => '
+ modulepath = nonstandard-modules:$basemodulepath
+ manifest = nonstandard-manifests
+ config_version = local-version.sh
+ '
+ }
+
+ file {
+ "#{testdir}/environments/testing_environment_conf/local-version.sh":
+ ensure => file,
+ mode => 0640,
+ content => '#! /usr/bin/env bash
+ echo "local testing_environment_conf"'
+ ;
+ }
+
+ ###################
+ # Services
+
+ file {
+ "#{testdir}/services":;
+ "#{testdir}/services/testing":;
+ #{generate_module('service_mod',
+ "service testing environment",
+ "#{testdir}/services/testing/modules")}
+ }
+
+ #######################
+ # Config version script
+
+ file {
+ "#{testdir}/static-version.sh":
+ ensure => file,
+ mode => 0640,
+ content => '#! /usr/bin/env bash
+ echo "static"'
+ ;
+ }
+ MANIFEST
+ end
+
+ def get_directory_hash_from(host, path)
+ dir_hash = {}
+ on(host, "ls #{path}") do |result|
+ result.stdout.split.inject(dir_hash) do |hash,f|
+ hash[f] = "#{path}/#{f}"
+ hash
+ end
+ end
+ dir_hash
+ end
+
+ def safely_shadow_directory_contents_and_yield(host, original_path, new_path, &block)
+ original_files = get_directory_hash_from(host, original_path)
+ new_files = get_directory_hash_from(host, new_path)
+ conflicts = original_files.keys & new_files.keys
+
+ step "backup original files" do
+ conflicts.each do |c|
+ on(host, "mv #{original_files[c]} #{original_files[c]}.bak")
+ end
+ end
+
+ step "shadow original files with temporary files" do
+ new_files.each do |name,full_path_name|
+ on(host, "cp -R #{full_path_name} #{original_path}/#{name}")
+ end
+ end
+
+ step "open permissions to 770 on all temporary files copied into working dir and set ownership" do
+ file_list = new_files.keys.map { |name| "#{original_path}/#{name}" }.join(' ')
+ on(host, "chown -R #{host['user']}:#{host['group']} #{file_list}")
+ on(host, "chmod -R 770 #{file_list}")
+ end
+
+ yield
+
+ ensure
+ step "clear out the temporary files" do
+ files_to_delete = new_files.keys.map { |name| "#{original_path}/#{name}" }
+ on(host, "rm -rf #{files_to_delete.join(' ')}")
+ end
+ step "move the shadowed files back to their original places" do
+ conflicts.each do |c|
+ on(host, "mv #{original_files[c]}.bak #{original_files[c]}")
+ end
+ end
+ end
+
+ # Stand up a puppet master on the master node with the given master_opts
+ # using the passed envdir as the source of the puppet environment files,
+ # and passed confdir as the directory to use for the temporary
+ # puppet.conf. It then runs through a series of environment tests for the
+ # passed environment and returns a hashed structure of the results.
+ #
+ # @return [Hash<Beaker::Host,Hash<Sym,Beaker::Result>>] Hash of
+ # Beaker::Hosts for each agent run keyed to a hash of Beaker::Result
+ # objects keyed by each subtest that was performed.
+ def use_an_environment(environment, description, master_opts, envdir, confdir, options = {})
+ master_puppet_conf = master_opts.dup # shallow clone
+
+ results = {}
+ safely_shadow_directory_contents_and_yield(master, master['puppetpath'], envdir) do
+
+ config_print = options[:config_print]
+ directory_environments = options[:directory_environments]
+
+ with_puppet_running_on(master, master_puppet_conf, confdir) do
+ agents.each do |agent|
+ agent_results = results[agent] = {}
+
+ step "puppet agent using #{description} environment"
+ args = "-t", "--server", master
+ args << ["--environment", environment] if environment
+ # Test agents configured to use directory environments (affects environment
+ # loading on the agent, especially with regards to requests/node environment)
+ args << "--environmentpath='$confdir/environments'" if directory_environments && agent != master
+ on(agent, puppet("agent", *args), :acceptable_exit_codes => (0..255)) do
+ agent_results[:puppet_agent] = result
+ end
+
+ if agent == master
+ args = ["--trace"]
+ args << ["--environment", environment] if environment
+
+ step "print puppet config for #{description} environment"
+ on(agent, puppet(*(["config", "print", "basemodulepath", "modulepath", "manifest", "config_version", config_print] + args)), :acceptable_exit_codes => (0..255)) do
+ agent_results[:puppet_config] = result
+ end
+
+ step "puppet apply using #{description} environment"
+ on(agent, puppet(*(["apply", '-e', '"include testing_mod"'] + args)), :acceptable_exit_codes => (0..255)) do
+ agent_results[:puppet_apply] = result
+ end
+
+ # Be aware that Puppet Module Tool will create the module directory path if it
+ # does not exist. So these tests should be run last...
+ step "install a module into environment"
+ on(agent, puppet(*(["module", "install", "pmtacceptance-nginx"] + args)), :acceptable_exit_codes => (0..255)) do
+ agent_results[:puppet_module_install] = result
+ end
+
+ step "uninstall a module from #{description} environment"
+ on(agent, puppet(*(["module", "uninstall", "pmtacceptance-nginx"] + args)), :acceptable_exit_codes => (0..255)) do
+ agent_results[:puppet_module_uninstall] = result
+ end
+ end
+ end
+ end
+ end
+
+ return results
+ end
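To make the return shape concrete, a single agent's entry in the returned hash might look roughly like this (illustrative values only; the keys mirror the subtests run above, and the master-only entries appear only when the agent is the master):

    results[agent] # =>
    # {
    #   :puppet_agent            => #<Beaker::Result exit_code=2 ...>,
    #   :puppet_config           => #<Beaker::Result exit_code=0 ...>,  # master only
    #   :puppet_apply            => #<Beaker::Result exit_code=0 ...>,  # master only
    #   :puppet_module_install   => #<Beaker::Result exit_code=0 ...>,  # master only
    #   :puppet_module_uninstall => #<Beaker::Result exit_code=0 ...>,  # master only
    # }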
+
+ # For each Beaker::Host in the results Hash, generates a chart comparing
+ # the expected exit code and regexp matches from expectations to the
+ # Beaker::Result.output for a particular command that was executed in the
+ # environment. Outputs either 'ok' or text highlighting the errors, and
+ # collects the cases that failed.
+ #
+ # @param [Hash<Beaker::Host,Hash<Sym,Beaker::Result>>] results
+ # @param [Hash<Sym,Hash{Sym => Integer,Array<Regexp>}>] expectations
+ # @return [Array] Returns an empty array if there were no failures, or an
+ # Array of failed cases.
+ def review_results(results, expectations)
+ failed = []
+
+ results.each do |agent, agent_results|
+ divider = "-" * 79
+
+ logger.info divider
+ logger.info "For: (#{agent.name}) #{agent}"
+ logger.info divider
+
+ agent_results.each do |testname, execution_results|
+ expected_exit_code = expectations[testname][:exit_code]
+ match_tests = expectations[testname][:matches] || []
+ not_match_tests = expectations[testname][:does_not_match] || []
+ expect_failure = expectations[testname][:expect_failure]
+ notes = expectations[testname][:notes]
+
+ errors = []
+
+ if execution_results.exit_code != expected_exit_code
+ errors << "To exit with an exit code of '#{expected_exit_code}', instead of '#{execution_results.exit_code}'"
+ end
+
+ match_tests.each do |regexp|
+ if execution_results.output !~ regexp
+ errors << "#{errors.empty? ? "To" : "And"} match: #{regexp}"
+ end
+ end
+
+ not_match_tests.each do |regexp|
+ if execution_results.output =~ regexp
+ errors << "#{errors.empty? ? "Not to" : "And not"} match: #{regexp}"
+ end
+ end
+
+ error_msg = "Expected the output:\n#{execution_results.output}\n#{errors.join("\n")}" unless errors.empty?
+
+ case_failed = case
+ when errors.empty? && expect_failure then 'ok - failed as expected'
+ when errors.empty? && !expect_failure then 'ok'
+ else '*UNEXPECTED FAILURE*'
+ end
+ logger.info "#{testname}: #{case_failed}"
+ if case_failed == 'ok - failed as expected'
+ logger.info divider
+ logger.info "Case is known to fail as follows:\n#{execution_results.output}\n"
+ elsif case_failed == '*UNEXPECTED FAILURE*'
+ failed << "Unexpected failure for #{testname}"
+ logger.info divider
+ logger.info "#{error_msg}"
+ end
+
+ logger.info("------\nNotes: #{notes}") if notes
+ logger.info divider
+ end
+ end
+
+ return failed
+ end
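As a guide to the expectations structure consumed above, an entry for a single subtest could look like this (values are illustrative, not taken from any real scenario):

    expectations = {
      :puppet_agent => {
        :exit_code      => 2,
        :matches        => [%r{Applying configuration version}],
        :does_not_match => [%r{Error:}],
        :expect_failure => false,
        :notes          => 'illustrative entry only',
      },
    }
    failed = review_results(results, expectations)
    # => [] when every expectation holds, otherwise a list of failure messages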
+
+ def assert_review(review)
+ failures = []
+ review.each do |scenario, failed|
+ if !failed.empty?
+ problems = "Problems in the '#{scenario}' output reported above:\n #{failed.join("\n ")}"
+ logger.warn(problems)
+ failures << problems
+ end
+ end
+ assert failures.empty?, "Failed Review:\n\n#{failures.join("\n")}\n"
+ end
+ end
+ end
+end
diff --git a/acceptance/lib/puppet/acceptance/environment_utils_spec.rb b/acceptance/lib/puppet/acceptance/environment_utils_spec.rb
new file mode 100644
index 000000000..596b5dac9
--- /dev/null
+++ b/acceptance/lib/puppet/acceptance/environment_utils_spec.rb
@@ -0,0 +1,144 @@
+require File.join(File.dirname(__FILE__),'../../acceptance_spec_helper.rb')
+require 'puppet/acceptance/environment_utils'
+
+module EnvironmentUtilsSpec
+describe 'EnvironmentUtils' do
+ class ATestCase
+ include Puppet::Acceptance::EnvironmentUtils
+
+ def step(str)
+ yield
+ end
+
+ def on(host, command, options = nil)
+ stdout = host.do(command, options)
+ yield TestResult.new(stdout) if block_given?
+ end
+ end
+
+ class TestResult
+ attr_accessor :stdout
+
+ def initialize(stdout)
+ self.stdout = stdout
+ end
+ end
+
+ class TestHost
+ attr_accessor :did, :directories, :attributes
+
+ def initialize(directories, attributes = {})
+ self.directories = directories
+ self.did = []
+ self.attributes = attributes
+ end
+
+ def do(command, options)
+ did << (options.nil? ? command : [command, options])
+ case command
+ when /^ls (.*)/ then directories[$1]
+ end
+ end
+
+ def [](param)
+ attributes[param]
+ end
+ end
+
+ let(:testcase) { ATestCase.new }
+ let(:host) { TestHost.new(directory_contents, 'user' => 'root', 'group' => 'puppet') }
+ let(:directory_contents) do
+ {
+ '/etc/puppet' => 'foo bar baz widget',
+ '/tmp/dir' => 'foo dingo bar thing',
+ }
+ end
+
+ it "runs the block of code" do
+ ran_code = false
+ testcase.safely_shadow_directory_contents_and_yield(host, '/etc/puppet', '/tmp/dir') do
+ ran_code = true
+ end
+ expect(ran_code).to be_true
+ expect(host.did).to eq([
+ "ls /etc/puppet",
+ "ls /tmp/dir",
+ "mv /etc/puppet/foo /etc/puppet/foo.bak",
+ "mv /etc/puppet/bar /etc/puppet/bar.bak",
+ "cp -R /tmp/dir/foo /etc/puppet/foo",
+ "cp -R /tmp/dir/dingo /etc/puppet/dingo",
+ "cp -R /tmp/dir/bar /etc/puppet/bar",
+ "cp -R /tmp/dir/thing /etc/puppet/thing",
+ "chown -R root:puppet /etc/puppet/foo /etc/puppet/dingo /etc/puppet/bar /etc/puppet/thing",
+ "chmod -R 770 /etc/puppet/foo /etc/puppet/dingo /etc/puppet/bar /etc/puppet/thing",
+ "rm -rf /etc/puppet/foo /etc/puppet/dingo /etc/puppet/bar /etc/puppet/thing",
+ "mv /etc/puppet/foo.bak /etc/puppet/foo",
+ "mv /etc/puppet/bar.bak /etc/puppet/bar"
+ ])
+ end
+
+ it "backs up the original items that are shadowed by tmp items" do
+ testcase.safely_shadow_directory_contents_and_yield(host, '/etc/puppet', '/tmp/dir') {}
+ expect(host.did.grep(%r{mv /etc/puppet/\w+ })).to eq([
+ "mv /etc/puppet/foo /etc/puppet/foo.bak",
+ "mv /etc/puppet/bar /etc/puppet/bar.bak",
+ ])
+ end
+
+ it "copies in all the tmp items into the working dir" do
+ testcase.safely_shadow_directory_contents_and_yield(host, '/etc/puppet', '/tmp/dir') {}
+ expect(host.did.grep(%r{cp})).to eq([
+ "cp -R /tmp/dir/foo /etc/puppet/foo",
+ "cp -R /tmp/dir/dingo /etc/puppet/dingo",
+ "cp -R /tmp/dir/bar /etc/puppet/bar",
+ "cp -R /tmp/dir/thing /etc/puppet/thing",
+ ])
+ end
+
+ it "opens the permissions on all copied files to 770 and sets ownership based on host settings" do
+ testcase.safely_shadow_directory_contents_and_yield(host, '/etc/puppet', '/tmp/dir') {}
+ expect(host.did.grep(%r{ch(mod|own)})).to eq([
+ "chown -R root:puppet /etc/puppet/foo /etc/puppet/dingo /etc/puppet/bar /etc/puppet/thing",
+ "chmod -R 770 /etc/puppet/foo /etc/puppet/dingo /etc/puppet/bar /etc/puppet/thing",
+ ])
+ end
+
+ it "deletes all the tmp items from the working dir" do
+ testcase.safely_shadow_directory_contents_and_yield(host, '/etc/puppet', '/tmp/dir') {}
+ expect(host.did.grep(%r{rm})).to eq([
+ "rm -rf /etc/puppet/foo /etc/puppet/dingo /etc/puppet/bar /etc/puppet/thing",
+ ])
+ end
+
+ it "replaces the original items that had been shadowed into the working dir" do
+ testcase.safely_shadow_directory_contents_and_yield(host, '/etc/puppet', '/tmp/dir') {}
+ expect(host.did.grep(%r{mv /etc/puppet/\w+\.bak})).to eq([
+ "mv /etc/puppet/foo.bak /etc/puppet/foo",
+ "mv /etc/puppet/bar.bak /etc/puppet/bar"
+ ])
+ end
+
+ it "always cleans up, even if the code we yield to raises an error" do
+ expect do
+ testcase.safely_shadow_directory_contents_and_yield(host, '/etc/puppet', '/tmp/dir') do
+ raise 'oops'
+ end
+ end.to raise_error('oops')
+ expect(host.did).to eq([
+ "ls /etc/puppet",
+ "ls /tmp/dir",
+ "mv /etc/puppet/foo /etc/puppet/foo.bak",
+ "mv /etc/puppet/bar /etc/puppet/bar.bak",
+ "cp -R /tmp/dir/foo /etc/puppet/foo",
+ "cp -R /tmp/dir/dingo /etc/puppet/dingo",
+ "cp -R /tmp/dir/bar /etc/puppet/bar",
+ "cp -R /tmp/dir/thing /etc/puppet/thing",
+ "chown -R root:puppet /etc/puppet/foo /etc/puppet/dingo /etc/puppet/bar /etc/puppet/thing",
+ "chmod -R 770 /etc/puppet/foo /etc/puppet/dingo /etc/puppet/bar /etc/puppet/thing",
+ "rm -rf /etc/puppet/foo /etc/puppet/dingo /etc/puppet/bar /etc/puppet/thing",
+ "mv /etc/puppet/foo.bak /etc/puppet/foo",
+ "mv /etc/puppet/bar.bak /etc/puppet/bar"
+ ])
+ end
+end
+end
diff --git a/acceptance/lib/puppet/acceptance/install_utils_spec.rb b/acceptance/lib/puppet/acceptance/install_utils_spec.rb
index 344a4e461..3c077f1d2 100644
--- a/acceptance/lib/puppet/acceptance/install_utils_spec.rb
+++ b/acceptance/lib/puppet/acceptance/install_utils_spec.rb
@@ -1,261 +1,263 @@
require File.join(File.dirname(__FILE__),'../../acceptance_spec_helper.rb')
require 'puppet/acceptance/install_utils'
+module InstallUtilsSpec
describe 'InstallUtils' do
class ATestCase
include Puppet::Acceptance::InstallUtils
end
class Platform < String
def with_version_codename
self
end
end
class TestHost
attr_accessor :config
def initialize(config = {})
self.config = config
end
def [](key)
config[key]
end
end
let(:host) { TestHost.new }
let(:testcase) { ATestCase.new }
describe "install_packages_on" do
it "raises an error if package_hash has unknown platform keys" do
expect do
testcase.install_packages_on(host, { :foo => 'bar'})
end.to raise_error(RuntimeError, /Unknown platform 'foo' in package_hash/)
end
shared_examples_for(:install_packages_on) do |platform,command,package|
let(:package_hash) do
{
:redhat => ['rh_package'],
:debian => [['db_command', 'db_package']],
}
end
let(:additional_switches) { platform == 'debian' ? '--allow-unauthenticated' : nil }
before do
logger = mock('logger', :notify => nil)
host.stubs(:logger).returns(logger)
host.config['platform'] = Platform.new(platform)
end
it "installs packages on a host" do
host.expects(:check_for_package).never
host.expects(:install_package).with(package, additional_switches).once
testcase.install_packages_on(host, package_hash)
end
it "checks and installs packages on a host" do
host.expects(:check_for_package).with(command).once
host.expects(:install_package).with(package, additional_switches).once
testcase.install_packages_on(host, package_hash, :check_if_exists => true)
end
end
it_should_behave_like(:install_packages_on, 'fedora', 'rh_package', 'rh_package')
it_should_behave_like(:install_packages_on, 'debian', 'db_command', 'db_package')
end
describe "fetch" do
before do
logger = stub('logger', :notify => nil)
testcase.stubs(:logger).returns(logger)
FileUtils.expects(:makedirs).with('dir')
end
it "does not fetch if destination file already exists" do
File.expects(:exists?).with('dir/file').returns(true)
testcase.expects(:open).never
testcase.fetch('http://foo', 'file', 'dir')
end
it "fetches file from url and stores in destination directory as filename" do
stream = mock('stream')
file = mock('file')
testcase.expects(:open).with('http://foo/file').yields(stream)
File.expects(:open).with('dir/file', 'w').yields(file)
FileUtils.expects(:copy_stream).with(stream, file)
testcase.fetch('http://foo', 'file', 'dir')
end
it "returns path to destination file" do
testcase.expects(:open).with('http://foo/file')
expect(testcase.fetch('http://foo', 'file', 'dir')).to eql('dir/file')
end
end
describe "fetch_remote_dir" do
before do
logger = stub('logger', {:notify => nil, :debug => nil})
testcase.stubs(:logger).returns(logger)
end
it "calls wget with the right amount of cut dirs for url that ends in '/'" do
url = 'http://builds.puppetlabs.lan/puppet/7807591405af849da2ad6534c66bd2d4efff604f/repos/el/6/devel/x86_64/'
testcase.expects(:`).with("wget -nv -P dir --reject \"index.html*\",\"*.gif\" --cut-dirs=6 -np -nH --no-check-certificate -r #{url} 2>&1").returns("log")
expect( testcase.fetch_remote_dir(url, 'dir')).to eql('dir/x86_64')
end
it "calls wget with the right amount of cut dirs for url that doesn't end in '/'" do
url = 'http://builds.puppetlabs.lan/puppet/7807591405af849da2ad6534c66bd2d4efff604f/repos/apt/wheezy'
testcase.expects(:`).with("wget -nv -P dir --reject \"index.html*\",\"*.gif\" --cut-dirs=4 -np -nH --no-check-certificate -r #{url}/ 2>&1").returns("log")
expect( testcase.fetch_remote_dir(url, 'dir')).to eql('dir/wheezy')
end
end
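The --cut-dirs values asserted above track the directory depth of the URL: wget keeps only the final path component under the destination directory, so every directory above it must be cut. A rough sketch of that arithmetic (my own illustration, not code from the helper):

    require 'uri'

    def expected_cut_dirs(url)
      # Count path segments, ignoring any trailing slash; everything above
      # the last segment gets cut so only that directory lands in the dest.
      URI.parse(url).path.split('/').reject(&:empty?).length - 1
    end

    expected_cut_dirs('http://builds.puppetlabs.lan/puppet/abc123/repos/apt/wheezy') # => 4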
shared_examples_for :redhat_platforms do |platform,sha,files|
before do
host.config['platform'] = Platform.new(platform)
end
it "fetches and installs repo configurations for #{platform}" do
platform_configs_dir = "repo-configs/#{platform}"
rpm_url = files[:rpm][0]
rpm_file = files[:rpm][1]
testcase.expects(:fetch).with(
"http://yum.puppetlabs.com",
rpm_file,
platform_configs_dir
).returns("#{platform_configs_dir}/#{rpm_file}")
repo_url = files[:repo][0]
repo_file = files[:repo][1]
testcase.expects(:fetch).with(
repo_url,
repo_file,
platform_configs_dir
).returns("#{platform_configs_dir}/#{repo_file}")
repo_dir_url = files[:repo_dir][0]
repo_dir = files[:repo_dir][1]
testcase.expects(:link_exists?).returns( true )
testcase.expects(:fetch_remote_dir).with(
repo_dir_url,
platform_configs_dir
).returns("#{platform_configs_dir}/#{repo_dir}")
testcase.expects(:link_exists?).returns( true )
testcase.expects(:on).with(host, regexp_matches(/rm.*repo; rm.*rpm; rm.*#{repo_dir}/))
testcase.expects(:scp_to).with(host, "#{platform_configs_dir}/#{rpm_file}", '/root')
testcase.expects(:scp_to).with(host, "#{platform_configs_dir}/#{repo_file}", '/root')
testcase.expects(:scp_to).with(host, "#{platform_configs_dir}/#{repo_dir}", '/root')
testcase.expects(:on).with(host, regexp_matches(%r{mv.*repo /etc/yum.repos.d}))
testcase.expects(:on).with(host, regexp_matches(%r{find /etc/yum.repos.d/ -name .*}))
testcase.expects(:on).with(host, regexp_matches(%r{rpm.*/root/.*rpm}))
testcase.install_repos_on(host, sha, 'repo-configs')
end
end
describe "install_repos_on" do
let(:sha) { "abcdef10" }
it_should_behave_like(:redhat_platforms,
'el-6-i386',
'abcdef10',
{
:rpm => [
"http://yum.puppetlabs.com",
"puppetlabs-release-el-6.noarch.rpm",
],
:repo => [
"http://builds.puppetlabs.lan/puppet/abcdef10/repo_configs/rpm/",
"pl-puppet-abcdef10-el-6-i386.repo",
],
:repo_dir => [
"http://builds.puppetlabs.lan/puppet/abcdef10/repos/el/6/products/i386/",
"i386",
],
},
)
it_should_behave_like(:redhat_platforms,
'fedora-20-x86_64',
'abcdef10',
{
:rpm => [
"http://yum.puppetlabs.com",
"puppetlabs-release-fedora-20.noarch.rpm",
],
:repo => [
"http://builds.puppetlabs.lan/puppet/abcdef10/repo_configs/rpm/",
"pl-puppet-abcdef10-fedora-f20-x86_64.repo",
],
:repo_dir => [
"http://builds.puppetlabs.lan/puppet/abcdef10/repos/fedora/f20/products/x86_64/",
"x86_64",
],
},
)
it_should_behave_like(:redhat_platforms,
'centos-5-x86_64',
'abcdef10',
{
:rpm => [
"http://yum.puppetlabs.com",
"puppetlabs-release-el-5.noarch.rpm",
],
:repo => [
"http://builds.puppetlabs.lan/puppet/abcdef10/repo_configs/rpm/",
"pl-puppet-abcdef10-el-5-x86_64.repo",
],
:repo_dir => [
"http://builds.puppetlabs.lan/puppet/abcdef10/repos/el/5/products/x86_64/",
"x86_64",
],
},
)
it "installs on a debian host" do
host.config['platform'] = platform = Platform.new('ubuntu-precise-x86_64')
platform_configs_dir = "repo-configs/#{platform}"
deb = "puppetlabs-release-precise.deb"
testcase.expects(:fetch).with(
"http://apt.puppetlabs.com/",
deb,
platform_configs_dir
).returns("#{platform_configs_dir}/#{deb}")
list = "pl-puppet-#{sha}-precise.list"
testcase.expects(:fetch).with(
"http://builds.puppetlabs.lan/puppet/#{sha}/repo_configs/deb/",
list,
platform_configs_dir
).returns("#{platform_configs_dir}/#{list}")
testcase.expects(:fetch_remote_dir).with(
"http://builds.puppetlabs.lan/puppet/#{sha}/repos/apt/precise",
platform_configs_dir
).returns("#{platform_configs_dir}/precise")
testcase.expects(:on).with(host, regexp_matches(/rm.*list; rm.*deb; rm.*/))
testcase.expects(:scp_to).with(host, "#{platform_configs_dir}/#{deb}", '/root')
testcase.expects(:scp_to).with(host, "#{platform_configs_dir}/#{list}", '/root')
testcase.expects(:scp_to).with(host, "#{platform_configs_dir}/precise", '/root')
testcase.expects(:on).with(host, regexp_matches(%r{mv.*list /etc/apt/sources.list.d}))
testcase.expects(:on).with(host, regexp_matches(%r{find /etc/apt/sources.list.d/ -name .*}))
testcase.expects(:on).with(host, regexp_matches(%r{dpkg -i.*/root/.*deb}))
testcase.expects(:on).with(host, regexp_matches(%r{apt-get update}))
testcase.install_repos_on(host, sha, 'repo-configs')
end
end
end
+end
diff --git a/acceptance/setup/git/pre-suite/000_EnvSetup.rb b/acceptance/setup/git/pre-suite/000_EnvSetup.rb
index d34f72a2a..5be74955b 100644
--- a/acceptance/setup/git/pre-suite/000_EnvSetup.rb
+++ b/acceptance/setup/git/pre-suite/000_EnvSetup.rb
@@ -1,56 +1,67 @@
test_name "Setup environment"
step "Ensure Git and Ruby"
require 'puppet/acceptance/install_utils'
extend Puppet::Acceptance::InstallUtils
require 'puppet/acceptance/git_utils'
extend Puppet::Acceptance::GitUtils
require 'beaker/dsl/install_utils'
extend Beaker::DSL::InstallUtils
PACKAGES = {
:redhat => [
'git',
'ruby',
'rubygem-json',
],
:debian => [
['git', 'git-core'],
'ruby',
],
:debian_ruby18 => [
'libjson-ruby',
],
:solaris => [
['git', 'developer/versioning/git'],
['ruby', 'runtime/ruby-18'],
# there isn't a package for json, so it is installed later via gems
],
:windows => [
'git',
# there isn't a need for json on windows because it is bundled in ruby 1.9
],
}
install_packages_on(hosts, PACKAGES, :check_if_exists => true)
hosts.each do |host|
case host['platform']
when /windows/
- step "#{host} Install ruby from git"
+ arch = host[:ruby_arch] || 'x86'
+ step "#{host} Selected architecture #{arch}"
+
+ revision = if arch == 'x64'
+ '2.0.0-x64'
+ else
+ '1.9.3-x86'
+ end
+
+ step "#{host} Install ruby from git using revision #{revision}"
# TODO remove this step once we are installing puppet from msi packages
install_from_git(host, "/opt/puppet-git-repos",
:name => 'puppet-win32-ruby',
:path => build_giturl('puppet-win32-ruby'),
- :rev => '1.9.3')
+ :rev => revision)
on host, 'cd /opt/puppet-git-repos/puppet-win32-ruby; cp -r ruby/* /'
on host, 'cd /lib; icacls ruby /grant "Everyone:(OI)(CI)(RX)"'
on host, 'cd /lib; icacls ruby /reset /T'
+ on host, 'cd /; icacls bin /grant "Everyone:(OI)(CI)(RX)"'
+ on host, 'cd /; icacls bin /reset /T'
on host, 'ruby --version'
on host, 'cmd /c gem list'
when /solaris/
step "#{host} Install json from rubygems"
on host, 'gem install json'
end
end
diff --git a/acceptance/setup/packages/pre-suite/010_Install.rb b/acceptance/setup/packages/pre-suite/010_Install.rb
index dfdfc1f37..4397211fe 100644
--- a/acceptance/setup/packages/pre-suite/010_Install.rb
+++ b/acceptance/setup/packages/pre-suite/010_Install.rb
@@ -1,52 +1,49 @@
require 'puppet/acceptance/install_utils'
extend Puppet::Acceptance::InstallUtils
test_name "Install Packages"
step "Install repositories on target machines..." do
sha = ENV['SHA']
repo_configs_dir = 'repo-configs'
hosts.each do |host|
install_repos_on(host, sha, repo_configs_dir)
end
end
MASTER_PACKAGES = {
:redhat => [
'puppet-server',
],
:debian => [
- 'puppetmaster',
+ 'puppetmaster-passenger',
],
# :solaris => [
# 'puppet-server',
# ],
# :windows => [
# 'puppet-server',
# ],
}
AGENT_PACKAGES = {
:redhat => [
'puppet',
],
:debian => [
'puppet',
],
# :solaris => [
# 'puppet',
# ],
# :windows => [
# 'puppet',
# ],
}
install_packages_on(master, MASTER_PACKAGES)
-if master['platform'] =~ /debian|ubuntu/
- on(master, '/etc/init.d/puppetmaster stop')
-end
install_packages_on(agents, AGENT_PACKAGES)
diff --git a/acceptance/setup/packages/pre-suite/015_PackageHostsPresets.rb b/acceptance/setup/packages/pre-suite/015_PackageHostsPresets.rb
new file mode 100644
index 000000000..1a1101aa5
--- /dev/null
+++ b/acceptance/setup/packages/pre-suite/015_PackageHostsPresets.rb
@@ -0,0 +1,5 @@
+if master['platform'] =~ /debian|ubuntu/
+ master.uses_passenger!
+elsif master['platform'] =~ /redhat|el|centos|scientific/
+ master['use-service'] = true
+end
diff --git a/acceptance/setup/packages/pre-suite/045_EnsureMasterStartedOnPassenger.rb b/acceptance/setup/packages/pre-suite/045_EnsureMasterStartedOnPassenger.rb
new file mode 100644
index 000000000..20f4fdfb5
--- /dev/null
+++ b/acceptance/setup/packages/pre-suite/045_EnsureMasterStartedOnPassenger.rb
@@ -0,0 +1,3 @@
+if master.graceful_restarts?
+ on(master, puppet('resource', 'service', master['puppetservice'], "ensure=running"))
+end
diff --git a/acceptance/tests/apply/conditionals/should_evaluate_else.rb b/acceptance/tests/apply/conditionals/should_evaluate_else.rb
deleted file mode 100755
index 7bdceb1d1..000000000
--- a/acceptance/tests/apply/conditionals/should_evaluate_else.rb
+++ /dev/null
@@ -1,15 +0,0 @@
-test_name "else clause will be reached if no expressions match"
-manifest = %q{
-if( 1 == 2) {
- notice('if')
-} elsif(2 == 3) {
- notice('elsif')
-} else {
- notice('else')
-}
-}
-
-apply_manifest_on(agents, manifest) do
- fail_test "the else clause did not evaluate" unless stdout.include? 'else'
-end
-
diff --git a/acceptance/tests/apply/conditionals/should_evaluate_elsif.rb b/acceptance/tests/apply/conditionals/should_evaluate_elsif.rb
deleted file mode 100755
index 027e247c9..000000000
--- a/acceptance/tests/apply/conditionals/should_evaluate_elsif.rb
+++ /dev/null
@@ -1,15 +0,0 @@
-test_name "should evaluate the elsif block in a conditional"
-manifest = %q{
-if( 1 == 3) {
- notice('if')
-} elsif(2 == 2) {
- notice('elsif')
-} else {
- notice('else')
-}
-}
-
-apply_manifest_on(agents, manifest) do
- fail_test "didn't evaluate elsif" unless stdout.include? 'elsif'
-end
-
diff --git a/acceptance/tests/apply/conditionals/should_evaluate_empty.rb b/acceptance/tests/apply/conditionals/should_evaluate_empty.rb
deleted file mode 100644
index 85b0792b4..000000000
--- a/acceptance/tests/apply/conditionals/should_evaluate_empty.rb
+++ /dev/null
@@ -1,12 +0,0 @@
-test_name "ensure that undefined variables evaluate as false"
-manifest = %q{
-if $undef_var {
-} else {
- notice('undef')
-}
-}
-
-apply_manifest_on(agents, manifest) do
- fail_test "did not evaluate as expected" unless stdout.include? 'undef'
-end
-
diff --git a/acceptance/tests/apply/conditionals/should_evaluate_false.rb b/acceptance/tests/apply/conditionals/should_evaluate_false.rb
deleted file mode 100755
index 9a64e1663..000000000
--- a/acceptance/tests/apply/conditionals/should_evaluate_false.rb
+++ /dev/null
@@ -1,12 +0,0 @@
-test_name "test that false evaluates to false"
-manifest = %q{
-if false {
-} else {
- notice('false')
-}
-}
-
-apply_manifest_on(agents, manifest) do
- fail_test "didn't evaluate false correcly" unless stdout.include? 'false'
-end
-
diff --git a/acceptance/tests/apply/conditionals/should_evaluate_if.rb b/acceptance/tests/apply/conditionals/should_evaluate_if.rb
deleted file mode 100755
index d0113e518..000000000
--- a/acceptance/tests/apply/conditionals/should_evaluate_if.rb
+++ /dev/null
@@ -1,15 +0,0 @@
-test_name = "should evaluate an if block correctly"
-manifest = %q{
-if( 1 == 1) {
- notice('if')
-} elsif(2 == 2) {
- notice('elsif')
-} else {
- notice('else')
-}
-}
-
-apply_manifest_on(agents, manifest) do
- fail_test "didn't evaluate correctly" unless stdout.include? 'if'
-end
-
diff --git a/acceptance/tests/apply/conditionals/should_evaluate_strings_true.rb b/acceptance/tests/apply/conditionals/should_evaluate_strings_true.rb
deleted file mode 100755
index 14b753085..000000000
--- a/acceptance/tests/apply/conditionals/should_evaluate_strings_true.rb
+++ /dev/null
@@ -1,13 +0,0 @@
-test_name "test that the string 'false' evaluates to true"
-manifest = %q{
-if 'false' {
- notice('true')
-} else {
- notice('false')
-}
-}
-
-apply_manifest_on(agents, manifest) do
- fail_test "string 'false' didn't evaluate as true" unless
- stdout.include? 'true'
-end
diff --git a/acceptance/tests/apply/conditionals/should_evaluate_undef.rb b/acceptance/tests/apply/conditionals/should_evaluate_undef.rb
deleted file mode 100755
index ba5d6403e..000000000
--- a/acceptance/tests/apply/conditionals/should_evaluate_undef.rb
+++ /dev/null
@@ -1,11 +0,0 @@
-test_name "empty string should evaluate as false"
-manifest = %q{
-if '' {
-} else {
- notice('empty')
-}
-}
-
-apply_manifest_on(agents, manifest) do
- fail_test "didn't evaluate as false" unless stdout.include? 'empty'
-end
diff --git a/acceptance/tests/apply/virtual/should_realize.rb b/acceptance/tests/apply/virtual/should_realize.rb
deleted file mode 100755
index 7dc2406cd..000000000
--- a/acceptance/tests/apply/virtual/should_realize.rb
+++ /dev/null
@@ -1,25 +0,0 @@
-test_name "should realize"
-
-agents.each do |agent|
- out = agent.tmpfile('should_realize')
- name = "test-#{Time.new.to_i}-host"
-
-manifest = %Q{
- @host{'#{name}': ip=>'127.0.0.2', target=>'#{out}', ensure=>present}
- realize(Host['#{name}'])
-}
-
- step "clean the system ready for testing"
- on agent, "rm -f #{out}"
-
- step "realize the resource on the host"
- apply_manifest_on agent, manifest
-
- step "verify the content of the file"
- on(agent, "cat #{out}") do
- fail_test "missing host definition" unless stdout.include? name
- end
-
- step "final cleanup of the system"
- on agent, "rm -f #{out}"
-end
diff --git a/acceptance/tests/apply/virtual/should_realize_complex_query.rb b/acceptance/tests/apply/virtual/should_realize_complex_query.rb
deleted file mode 100755
index 512b5cbfc..000000000
--- a/acceptance/tests/apply/virtual/should_realize_complex_query.rb
+++ /dev/null
@@ -1,39 +0,0 @@
-test_name "should realize with complex query"
-
-agents.each do |agent|
- out = agent.tmpfile('should_realize_complex_query')
- name = "test-#{Time.new.to_i}-host"
-
-manifest = %Q{
- @host { '#{name}1':
- ip => '127.0.0.2',
- target => '#{out}',
- host_aliases => ['one', 'two', 'three'],
- ensure => present,
- }
- @host { '#{name}2':
- ip => '127.0.0.3',
- target => '#{out}',
- host_aliases => 'two',
- ensure => present,
- }
- Host<| host_aliases == 'two' and ip == '127.0.0.3' |>
-}
-
- step "clean up target system for test"
- on agent, "rm -f #{out}"
-
- step "run the manifest"
- apply_manifest_on agent, manifest
-
- step "verify the file output"
- on(agent, "cat #{out}") do
- fail_test "second host not found in output" unless
- stdout.include? "#{name}2"
- fail_test "first host was found in output" if
- stdout.include? "#{name}1"
- end
-
- step "clean up system after testing"
- on agent, "rm -f #{out}"
-end
diff --git a/acceptance/tests/apply/virtual/should_realize_many.rb b/acceptance/tests/apply/virtual/should_realize_many.rb
deleted file mode 100755
index 80cf3f580..000000000
--- a/acceptance/tests/apply/virtual/should_realize_many.rb
+++ /dev/null
@@ -1,24 +0,0 @@
-test_name "test that realize function takes a list"
-
-agents.each do |agent|
- out = agent.tmpfile('should_realize_many')
- name = "test-#{Time.new.to_i}-host"
-
-manifest = %Q{
- @host{'#{name}1': ip=>'127.0.0.2', target=>'#{out}', ensure=>present}
- @host{'#{name}2': ip=>'127.0.0.2', target=>'#{out}', ensure=>present}
- realize(Host['#{name}1'], Host['#{name}2'])
-}
-
- step "clean up target system for test"
- on agent, "rm -f #{out}"
-
- step "run the manifest"
- apply_manifest_on agent, manifest
-
- step "verify the file output"
- on(agent, "cat #{out}") do
- fail_test "first host not found in output" unless stdout.include? "#{name}1"
- fail_test "second host not found in output" unless stdout.include? "#{name}2"
- end
-end
diff --git a/acceptance/tests/apply/virtual/should_realize_query.rb b/acceptance/tests/apply/virtual/should_realize_query.rb
deleted file mode 100755
index ae1ff3eb3..000000000
--- a/acceptance/tests/apply/virtual/should_realize_query.rb
+++ /dev/null
@@ -1,27 +0,0 @@
-test_name "should realize query"
-
-agents.each do |agent|
- out = agent.tmpfile('should_realize_query')
- name = "test-#{Time.new.to_i}-host"
-
-manifest = %Q{
- @host { '#{name}':
- ip => '127.0.0.2',
- target => '#{out}',
- host_aliases => 'alias',
- ensure => present,
- }
- Host<| ip == '127.0.0.2' |>
-}
-
- step "clean up target system for test"
- on agent, "rm -f #{out}"
-
- step "run the manifest"
- apply_manifest_on agent, manifest
-
- step "verify the file output"
- on(agent, "cat #{out}") do
- fail_test "host not found in output" unless stdout.include? name
- end
-end
diff --git a/acceptance/tests/apply/virtual/should_realize_query_array.rb b/acceptance/tests/apply/virtual/should_realize_query_array.rb
deleted file mode 100755
index c2fa8f6ea..000000000
--- a/acceptance/tests/apply/virtual/should_realize_query_array.rb
+++ /dev/null
@@ -1,27 +0,0 @@
-test_name "should realize query array"
-
-agents.each do |agent|
- out = agent.tmpfile('should_realize_query_array')
- name = "test-#{Time.new.to_i}-host"
-
-manifest = %Q{
- @host { '#{name}':
- ip => '127.0.0.2',
- target => '#{out}',
- host_aliases => ['one', 'two', 'three'],
- ensure => present,
- }
- Host<| host_aliases == 'two' |>
-}
-
- step "clean up target system for test"
- on agent, "rm -f #{out}"
-
- step "run the manifest"
- apply_manifest_on agent, manifest
-
- step "verify the file output"
- on(agent, "cat #{out}") do
- fail_test "host not found in output" unless stdout.include? name
- end
-end
diff --git a/acceptance/tests/concurrency/ticket_2659_concurrent_catalog_requests.rb b/acceptance/tests/concurrency/ticket_2659_concurrent_catalog_requests.rb
index 1a6234037..b3fec25d0 100644
--- a/acceptance/tests/concurrency/ticket_2659_concurrent_catalog_requests.rb
+++ b/acceptance/tests/concurrency/ticket_2659_concurrent_catalog_requests.rb
@@ -1,108 +1,108 @@
test_name "concurrent catalog requests (PUP-2659)"
# we're only testing the effects of loading a master with concurrent requests
confine :except, :platform => 'windows'
step "setup a manifest"
testdir = master.tmpdir("concurrent")
apply_manifest_on(master, <<-MANIFEST, :catch_failures => true)
File {
ensure => directory,
owner => #{master['user']},
group => #{master['group']},
mode => '750',
}
file { '#{testdir}': }
file { '#{testdir}/busy': }
file { '#{testdir}/busy/one.txt':
ensure => file,
mode => '640',
content => "Something to read",
}
file { '#{testdir}/busy/two.txt':
ensure => file,
mode => '640',
content => "Something else to read",
}
file { '#{testdir}/busy/three.txt':
ensure => file,
mode => '640',
content => "Something more else to read",
}
file { '#{testdir}/manifests': }
file { '#{testdir}/manifests/site.pp':
ensure => file,
content => '
$foo = inline_template("
<%- 1000.times do
Dir.glob(\\'#{testdir}/busy/*.txt\\').each do |f|
File.read(f)
end
end
%>
\\'touched the file system for a bit\\'
")
notify { "end":
message => $foo,
}
',
mode => '640',
}
MANIFEST
step "start master"
master_opts = {
'main' => {
'manifest' => "#{testdir}/manifests/site.pp",
}
}
with_puppet_running_on(master, master_opts, testdir) do
step "concurrent catalog curls (with alliterative alacrity)"
agents.each do |agent|
cert_path = on(agent, puppet('config', 'print', 'hostcert')).stdout.chomp
key_path = on(agent, puppet('config', 'print', 'hostprivkey')).stdout.chomp
cacert_path = on(agent, puppet('config', 'print', 'localcacert')).stdout.chomp
agent_cert = on(agent, puppet('config', 'print', 'certname')).stdout.chomp
run_count = 6
agent_tmpdir = agent.tmpdir("concurrent-loop-script")
test_script = "#{agent_tmpdir}/loop.sh"
create_remote_file(agent, test_script, <<-EOF)
declare -a MYPIDS
loops=#{run_count}
for (( i=0; i<$loops; i++ )); do
(
sleep_for="0.$(( $RANDOM % 49 ))"
sleep $sleep_for
url='https://#{master}:8140/production/catalog/#{agent_cert}'
echo "Curling: $url"
- curl -v -# -H 'Accept: text/pson' --cert #{cert_path} --key #{key_path} --cacert #{cacert_path} $url
+ curl --tlsv1 -v -# -H 'Accept: text/pson' --cert #{cert_path} --key #{key_path} --cacert #{cacert_path} $url
echo "$PPID Completed"
) > "#{agent_tmpdir}/catalog-request-$i.out" 2>&1 &
echo "Launched $!"
MYPIDS[$i]=$!
done
for (( i=0; i<$loops; i++ )); do
wait ${MYPIDS[$i]}
done
echo "All requests are finished"
EOF
on(agent, "chmod +x #{test_script}")
on(agent, "#{test_script}")
run_count.times do |i|
step "Checking the results of catalog request ##{i}"
on(agent, "cat #{agent_tmpdir}/catalog-request-#{i}.out") do
assert_match(%r{< HTTP/1.* 200}, stdout)
assert_match(%r{touched the file system for a bit}, stdout)
end
end
end
end
diff --git a/acceptance/tests/config/puppet_manages_own_configuration_in_robust_manner.rb b/acceptance/tests/config/puppet_manages_own_configuration_in_robust_manner.rb
index 8b3a91537..dafef1cff 100644
--- a/acceptance/tests/config/puppet_manages_own_configuration_in_robust_manner.rb
+++ b/acceptance/tests/config/puppet_manages_own_configuration_in_robust_manner.rb
@@ -1,71 +1,82 @@
# User story:
# A new user has installed puppet either from source or from a gem, which does
# not put the "puppet" user or group on the system. They run the puppet master,
# which fails because of the missing user and then correct their actions. They
# expect that after correcting their actions, puppet will work correctly.
test_name "Puppet manages its own configuration in a robust manner"
+skip_test "JVM Puppet cannot change its user while running." if @options[:is_jvm_puppet]
+
# when owner/group works on windows for settings, this confine should be removed.
confine :except, :platform => 'windows'
# when managehome roundtrips for solaris, this confine should be removed
confine :except, :platform => 'solaris'
# pe setup includes ownership of external directories such as the passenger
# document root, which puppet itself knows nothing about
confine :except, :type => 'pe'
+# same issue for a foss passenger run
+if master.is_using_passenger?
+ skip_test 'Cannot test with passenger.'
+end
+
+if master.use_service_scripts?
+ # Beaker defaults to leaving puppet running when using service scripts;
+ # shut it down so we can modify user/group and test startup failure
+ on(master, puppet('resource', 'service', master['puppetservice'], 'ensure=stopped'))
+end
step "Clear out yaml directory because of a bug in the indirector/yaml. (See #21145)"
on master, 'rm -rf $(puppet master --configprint yamldir)'
original_state = {}
step "Record original state of system users" do
hosts.each do |host|
original_state[host] = {}
original_state[host][:user] = user = host.execute('puppet config print user')
original_state[host][:group] = group = host.execute('puppet config print group')
original_state[host][:ug_resources] = on(host, puppet('resource', 'user', user)).stdout
original_state[host][:ug_resources] += on(host, puppet('resource', 'group', group)).stdout
original_state[host][:ug_resources] += "Group['#{group}'] -> User['#{user}']\n"
end
end
teardown do
# And cleaning up yaml dir again here because we are changing service
# user and group ids back to the original uid and gid
on master, 'rm -rf $(puppet master --configprint yamldir)'
hosts.each do |host|
apply_manifest_on(host, <<-ORIG)
#{original_state[host][:ug_resources]}
ORIG
end
with_puppet_running_on(master, {}) do
agents.each do |agent|
on agent, puppet('agent', '-t', '--server', master)
end
end
end
step "Remove system users" do
hosts.each do |host|
on host, puppet('resource', 'user', original_state[host][:user], 'ensure=absent')
on host, puppet('resource', 'group', original_state[host][:group], 'ensure=absent')
end
end
step "Ensure master fails to start when missing system user" do
on master, puppet('master'), :acceptable_exit_codes => [74] do
assert_match(/could not change to group "#{original_state[master][:group]}"/, result.output)
assert_match(/Could not change to user #{original_state[master][:user]}/, result.output)
end
end
step "Ensure master starts when making users after having previously failed startup" do
with_puppet_running_on(master,
- :__commandline_args__ => '--debug --trace',
:master => { :mkusers => true }) do
agents.each do |agent|
on agent, puppet('agent', '-t', '--server', master)
end
end
end
diff --git a/acceptance/tests/environment/can_enumerate_environments.rb b/acceptance/tests/environment/can_enumerate_environments.rb
index ee033d93a..088e9de1b 100644
--- a/acceptance/tests/environment/can_enumerate_environments.rb
+++ b/acceptance/tests/environment/can_enumerate_environments.rb
@@ -1,68 +1,69 @@
test_name "Can enumerate environments via an HTTP endpoint"
def master_port(agent)
setting_on(agent, "agent", "masterport")
end
def setting_on(host, section, name)
on(host, puppet("config", "print", name, "--section", section)).stdout.chomp
end
def full_path(host, path)
if host['platform'] =~ /win/
on(host, "cygpath '#{path}'").stdout.chomp
else
path
end
end
def curl_master_from(agent, path, headers = '', &block)
url = "https://#{master}:#{master_port(agent)}#{path}"
cert_path = full_path(agent, setting_on(agent, "agent", "hostcert"))
key_path = full_path(agent, setting_on(agent, "agent", "hostprivkey"))
- curl_base = "curl -sg --cert \"#{cert_path}\" --key \"#{key_path}\" -k -H '#{headers}'"
+ curl_base = "curl --tlsv1 -sg --cert \"#{cert_path}\" --key \"#{key_path}\" -k -H '#{headers}'"
on agent, "#{curl_base} '#{url}'", &block
end
-environments_dir = master.tmpdir("environments")
+master_user = on(master, "puppet master --configprint user").stdout.strip
+environments_dir = create_tmpdir_for_user master, "environments"
apply_manifest_on(master, <<-MANIFEST)
File {
ensure => directory,
- owner => #{master['user']},
+ owner => #{master_user},
group => #{master['group']},
- mode => 0750,
+ mode => 0770,
}
file {
"#{environments_dir}":;
"#{environments_dir}/env1":;
"#{environments_dir}/env2":;
}
MANIFEST
master_opts = {
:master => {
:environmentpath => environments_dir
}
}
if master.is_pe?
master_opts[:master][:basemodulepath] = master['sitemoduledir']
end
with_puppet_running_on(master, master_opts) do
agents.each do |agent|
step "Ensure that an unauthenticated client cannot access the environments list" do
- on agent, "curl -ksv https://#{master}:#{master_port(agent)}/v2.0/environments", :acceptable_exit_codes => [0,7] do
+ on agent, "curl --tlsv1 -ksv https://#{master}:#{master_port(agent)}/v2.0/environments", :acceptable_exit_codes => [0,7] do
assert_match(/< HTTP\/1\.\d 403/, stderr)
end
end
step "Ensure that an authenticated client can retrieve the list of environments" do
curl_master_from(agent, '/v2.0/environments') do
data = JSON.parse(stdout)
- assert_equal(["env1", "env2"], data["environments"].keys.sort)
+ assert_equal(["env1", "env2", "production"], data["environments"].keys.sort)
end
end
end
end
diff --git a/acceptance/tests/environment/cmdline_overrides_environment.rb b/acceptance/tests/environment/cmdline_overrides_environment.rb
index 09319f749..78a07413b 100644
--- a/acceptance/tests/environment/cmdline_overrides_environment.rb
+++ b/acceptance/tests/environment/cmdline_overrides_environment.rb
@@ -1,293 +1,322 @@
test_name "Commandline modulepath and manifest settings override environment"
-testdir = master.tmpdir('cmdline_and_environment')
+skip_test "CLI-master tests are not applicable" if @options[:is_jvm_puppet]
+
+testdir = create_tmpdir_for_user master, 'cmdline_and_environment'
environmentpath = "#{testdir}/environments"
modulepath = "#{testdir}/modules"
manifests = "#{testdir}/manifests"
sitepp = "#{manifests}/site.pp"
other_manifestdir = "#{testdir}/other_manifests"
other_sitepp = "#{other_manifestdir}/site.pp"
other_modulepath = "#{testdir}/some_other_modulepath"
cmdline_manifest = "#{testdir}/cmdline.pp"
step "Prepare manifests and modules"
apply_manifest_on(master, <<-MANIFEST, :catch_failures => true)
File {
ensure => directory,
owner => #{master['user']},
group => #{master['group']},
mode => 0750,
}
##############################################
# A production directory environment
file {
"#{testdir}":;
"#{environmentpath}":;
"#{environmentpath}/production":;
"#{environmentpath}/production/manifests":;
"#{environmentpath}/production/modules":;
"#{environmentpath}/production/modules/amod":;
"#{environmentpath}/production/modules/amod/manifests":;
}
file { "#{environmentpath}/production/modules/amod/manifests/init.pp":
ensure => file,
mode => 0640,
content => 'class amod {
notify { "amod from production environment": }
}'
}
file { "#{environmentpath}/production/manifests/production.pp":
ensure => file,
mode => 0640,
content => '
notify { "in production.pp": }
include amod
'
}
##############################################################
# To be set as default manifests and modulepath in puppet.conf
file {
"#{modulepath}":;
"#{modulepath}/amod/":;
"#{modulepath}/amod/manifests":;
}
file { "#{modulepath}/amod/manifests/init.pp":
ensure => file,
mode => 0640,
content => 'class amod {
notify { "amod from modulepath": }
}'
}
file { "#{manifests}": }
file { "#{sitepp}":
ensure => file,
mode => 0640,
content => '
notify { "in site.pp": }
include amod
'
}
file { "#{other_manifestdir}": }
file { "#{other_sitepp}":
ensure => file,
mode => 0640,
content => '
notify { "in other manifestdir site.pp": }
include amod
'
}
################################
# To be specified on commandline
file {
"#{other_modulepath}":;
"#{other_modulepath}/amod/":;
"#{other_modulepath}/amod/manifests":;
}
file { "#{other_modulepath}/amod/manifests/init.pp":
ensure => file,
mode => 0640,
content => 'class amod {
notify { "amod from commandline modulepath": }
}'
}
file { "#{cmdline_manifest}":
ensure => file,
mode => 0640,
content => '
notify { "in cmdline.pp": }
include amod
'
}
MANIFEST
+def shutdown_puppet_if_running_as_a_service
+ if master.use_service_scripts?
+ # Beaker defaults to leaving puppet running when using service scripts;
+ # shut it down so we can start up with commandline options
+ on(master, puppet('resource', 'service', master['puppetservice'], 'ensure=stopped'))
+ end
+end
+
+teardown do
+ if master.use_service_scripts?
+ # Beaker defaults to leaving puppet running when using service scripts,
+ # so make sure it is running again once the test finishes
+ on(master, puppet('resource', 'service', master['puppetservice'], 'ensure=running'))
+ end
+end
+
# Note: this is the semantics seen with legacy environments if commandline
# manifest/modulepath are set.
step "CASE 1: puppet master with --manifest and --modulepath overrides set production directory environment" do
- if master.is_pe?
+ if master.is_using_passenger?
step "Skipping for Passenger (PE) setup; since the equivalent of a commandline override would be adding the setting to config.ru, which seems like a very odd thing to do."
else
+
+ shutdown_puppet_if_running_as_a_service
+
master_opts = {
'master' => {
'environmentpath' => environmentpath,
'manifest' => sitepp,
'modulepath' => modulepath,
- }
+ },
+ :__service_args__ => {
+ :bypass_service_script => true,
+ },
}
master_opts_with_cmdline = master_opts.merge(:__commandline_args__ => "--manifest=#{cmdline_manifest} --modulepath=#{other_modulepath}")
with_puppet_running_on master, master_opts_with_cmdline, testdir do
agents.each do |agent|
on(agent, puppet("agent -t --server #{master}"), :acceptable_exit_codes => [2] ) do
assert_match(/in cmdline\.pp/, stdout)
assert_match(/amod from commandline modulepath/, stdout)
assert_no_match(/production/, stdout)
end
step "CASE 1a: even if environment is specified"
on(agent, puppet("agent -t --server #{master} --environment production"), :acceptable_exit_codes => [2]) do
assert_match(/in cmdline\.pp/, stdout)
assert_match(/amod from commandline modulepath/, stdout)
assert_no_match(/production/, stdout)
end
end
end
step "CASE 2: or if you set --manifestdir" do
master_opts_with_cmdline = master_opts.merge(:__commandline_args__ => "--manifestdir=#{other_manifestdir} --modulepath=#{other_modulepath}")
step "CASE 2: it is ignored if manifest is set in puppet.conf to something not using $manifestdir"
with_puppet_running_on master, master_opts_with_cmdline, testdir do
agents.each do |agent|
on(agent, puppet("agent -t --server #{master}"), :acceptable_exit_codes => [2]) do
assert_match(/in production\.pp/, stdout)
assert_match(/amod from commandline modulepath/, stdout)
end
end
end
step "CASE 2a: but does pull in the default manifest via manifestdir if manifest is not set"
master_opts_with_cmdline = master_opts.merge(:__commandline_args__ => "--manifestdir=#{other_manifestdir} --modulepath=#{other_modulepath}")
master_opts_with_cmdline['master'].delete('manifest')
with_puppet_running_on master, master_opts_with_cmdline, testdir do
agents.each do |agent|
on(agent, puppet("agent -t --server #{master}"), :acceptable_exit_codes => [2]) do
assert_match(/in other manifestdir site\.pp/, stdout)
assert_match(/amod from commandline modulepath/, stdout)
assert_no_match(/production/, stdout)
end
end
end
end
end
end
step "CASE 3: puppet master with manifest and modulepath set in puppet.conf is overriden by an existing and set production directory environment" do
master_opts = {
'master' => {
'environmentpath' => environmentpath,
'manifest' => sitepp,
'modulepath' => modulepath,
}
}
if master.is_pe?
master_opts['master']['basemodulepath'] = master['sitemoduledir']
end
with_puppet_running_on master, master_opts, testdir do
agents.each do |agent|
step "CASE 3: this case is unfortunate, but will be irrelevant when we remove legacyenv in 4.0"
on(agent, puppet("agent -t --server #{master}"), :acceptable_exit_codes => [2] ) do
assert_match(/in production\.pp/, stdout)
assert_match(/amod from production environment/, stdout)
end
step "CASE 3a: if environment is specified"
on(agent, puppet("agent -t --server #{master} --environment production"), :acceptable_exit_codes => [2]) do
assert_match(/in production\.pp/, stdout)
assert_match(/amod from production environment/, stdout)
end
end
end
end
step "CASE 4: puppet master with default manifest, modulepath, environment, environmentpath and an existing '#{environmentpath}/production' directory environment that has not been set" do
- if master.is_pe?
+ if master.is_using_passenger?
step "Skipping for PE because PE requires most of the existing puppet.conf and /etc/puppetlabs/puppet configuration, and we cannot simply point to a new conf directory."
else
+
+ shutdown_puppet_if_running_as_a_service
+
ssldir = on(master, puppet("master --configprint ssldir")).stdout.chomp
master_opts = {
+ :__service_args__ => {
+ :bypass_service_script => true,
+ },
:__commandline_args__ => "--confdir=#{testdir} --ssldir=#{ssldir}"
}
with_puppet_running_on master, master_opts, testdir do
agents.each do |agent|
step "CASE 4: #{environmentpath}/production directory environment does not take precedence because default environmentpath is ''"
on(agent, puppet("agent -t --server #{master}"), :acceptable_exit_codes => [2] ) do
assert_match(/in site\.pp/, stdout)
assert_match(/amod from modulepath/, stdout)
end
on(agent, puppet("agent -t --server #{master} --environment production"), :acceptable_exit_codes => [2]) do
assert_match(/in site\.pp/, stdout)
assert_match(/amod from modulepath/, stdout)
end
end
end
end
end
step "CASE 5: puppet master with explicit dynamic environment settings and empty environmentpath" do
step "CASE 5: Prepare an additional modulepath module"
apply_manifest_on(master, <<-MANIFEST, :catch_failures => true)
File {
ensure => directory,
owner => #{master['user']},
group => #{master['group']},
mode => 0750,
}
# A second module in another modules dir
file {
"#{other_modulepath}":;
"#{other_modulepath}/bmod/":;
"#{other_modulepath}/bmod/manifests":;
}
file { "#{other_modulepath}/bmod/manifests/init.pp":
ensure => file,
mode => 0640,
content => 'class bmod {
notify { "bmod from other modulepath": }
}'
}
file { "#{environmentpath}/production/manifests/production.pp":
ensure => file,
mode => 0640,
content => '
notify { "in production.pp": }
include amod
include bmod
'
}
MANIFEST
master_opts = {
'master' => {
'manifest' => "#{environmentpath}/$environment/manifests",
'modulepath' => "#{environmentpath}/$environment/modules:#{other_modulepath}",
}
}
if master.is_pe?
master_opts['master']['modulepath'] << ":#{master['sitemoduledir']}"
end
with_puppet_running_on master, master_opts, testdir do
agents.each do |agent|
step "CASE 5: pulls in the production environment based on $environment default"
on(agent, puppet("agent -t --server #{master}"), :acceptable_exit_codes => [2] ) do
assert_match(/in production\.pp/, stdout)
assert_match(/amod from production environment/, stdout)
step "CASE 5: and sees modules located in later elements of the modulepath (which would not be seen by a directory env (PUP-2158)"
assert_match(/bmod from other modulepath/, stdout)
end
step "CASE 5a: pulls in the production environment when explicitly set"
on(agent, puppet("agent -t --server #{master} --environment production"), :acceptable_exit_codes => [2] ) do
assert_match(/in production\.pp/, stdout)
assert_match(/amod from production environment/, stdout)
step "CASE 5a: and sees modules located in later elements of the modulepath (which would not be seen by a directory env (PUP-2158)"
assert_match(/bmod from other modulepath/, stdout)
end
end
end
end
diff --git a/acceptance/tests/environment/directory.rb b/acceptance/tests/environment/directory.rb
new file mode 100644
index 000000000..a295a9a48
--- /dev/null
+++ b/acceptance/tests/environment/directory.rb
@@ -0,0 +1,224 @@
+test_name "directory environments"
+require 'puppet/acceptance/environment_utils'
+extend Puppet::Acceptance::EnvironmentUtils
+
+step "setup environments"
+
+stub_forge_on(master)
+
+testdir = create_tmpdir_for_user master, "confdir"
+puppet_conf_backup_dir = create_tmpdir_for_user(master, "puppet-conf-backup-dir")
+
+apply_manifest_on(master, environment_manifest(testdir), :catch_failures => true)
+
+results = {}
+review = {}
+
+####################
+step "[ Run Tests ]"
+
+existing_directory_scenario = "Test a specific, existing directory environment configuration"
+step existing_directory_scenario
+master_opts = {
+ 'main' => {
+ 'environmentpath' => '$confdir/environments',
+ 'config_version' => '$confdir/static-version.sh',
+ }
+}
+general = [ master_opts, testdir, puppet_conf_backup_dir, { :directory_environments => true } ]
+
+results[existing_directory_scenario] = use_an_environment("testing", "directory testing", *general)
+
+default_environment_scenario = "Test behavior of default environment"
+step default_environment_scenario
+results[default_environment_scenario] = use_an_environment(nil, "default environment", *general)
+
+non_existent_environment_scenario = "Test for an environment that does not exist"
+step non_existent_environment_scenario
+results[non_existent_environment_scenario] = use_an_environment("doesnotexist", "non existent environment", *general)
+
+with_explicit_environment_conf_scenario = "Test a specific, existing directory environment with an explicit environment.conf file"
+step with_explicit_environment_conf_scenario
+results[with_explicit_environment_conf_scenario] = use_an_environment("testing_environment_conf", "directory with environment.conf testing", *general)
+
+master_environmentpath_scenario = "Test behavior of a directory environment when environmentpath is set in the master section"
+step master_environmentpath_scenario
+master_opts = {
+ 'master' => {
+ 'environmentpath' => '$confdir/environments',
+ 'config_version' => '$confdir/static-version.sh',
+ }
+}
+results[master_environmentpath_scenario] = use_an_environment("testing", "master environmentpath", master_opts, testdir, puppet_conf_backup_dir, :directory_environments => true, :config_print => '--section=master')
+
+bad_environmentpath_scenario = "Test behavior of directory environments when environmentpath is set to a non-existent directory"
+step bad_environmentpath_scenario
+master_opts = {
+ 'main' => {
+ 'environmentpath' => '/doesnotexist',
+ 'config_version' => '$confdir/static-version.sh',
+ }
+}
+results[bad_environmentpath_scenario] = use_an_environment("testing", "bad environmentpath", master_opts, testdir, puppet_conf_backup_dir, :directory_environments => true)
+
+########################################
+step "[ Report on Environment Results ]"
+
+step "Reviewing: #{existing_directory_scenario}"
+existing_directory_expectations = lambda do |env|
+ {
+ :puppet_config => {
+ :exit_code => 0,
+ :matches => [%r{manifest.*#{master['puppetpath']}/environments/#{env}/manifests$},
+ %r{modulepath.*#{master['puppetpath']}/environments/#{env}/modules:.+},
+ %r{config_version = $}]
+ },
+ :puppet_module_install => {
+ :exit_code => 0,
+ :matches => [%r{Preparing to install into #{master['puppetpath']}/environments/#{env}/modules},
+ %r{pmtacceptance-nginx}],
+ },
+ :puppet_module_uninstall => {
+ :exit_code => 0,
+ :matches => [%r{Removed.*pmtacceptance-nginx.*from #{master['puppetpath']}/environments/#{env}/modules}],
+ },
+ :puppet_apply => {
+ :exit_code => 0,
+ :matches => [%r{include directory #{env} environment testing_mod}],
+ },
+ :puppet_agent => {
+ :exit_code => 2,
+ :matches => [%r{Applying configuration version '\d+'},
+ %r{in directory #{env} environment site.pp},
+ %r{include directory #{env} environment testing_mod}],
+ },
+ }
+end
+review[existing_directory_scenario] = review_results(
+ results[existing_directory_scenario],
+ existing_directory_expectations.call('testing')
+)
+
+step "Reviewing: #{default_environment_scenario}"
+default_environment_expectations = existing_directory_expectations.call('production').merge(
+ :puppet_apply => {
+ :exit_code => 0,
+ :matches => [%r{include default environment testing_mod}],
+ :notes => "The production directory environment is empty, but the inclusion of basemodulepath in the directory environment modulepath picks up the default testing_mod class in $confdir/modules"
+ },
+ :puppet_agent => {
+ :exit_code => 0,
+ :matches => [ %r{Applying configuration version '\d+'}],
+ :does_not_match => [%r{include.*testing_mod},
+ %r{Warning.*404}],
+ :notes => "The master automatically creates an empty production env dir."
+ }
+)
+review[default_environment_scenario] = review_results(
+ results[default_environment_scenario],
+ default_environment_expectations
+)
+
+step "Reviewing: #{non_existent_environment_scenario}"
+non_existent_environment_expectations = lambda do |env,path|
+ {
+ :puppet_config => {
+ :exit_code => 1,
+ :matches => [%r{Could not find a directory environment named '#{env}' anywhere in the path.*#{path}}],
+ },
+ :puppet_module_install => {
+ :exit_code => 1,
+ :matches => [%r{Could not find a directory environment named '#{env}' anywhere in the path.*#{path}}],
+ },
+ :puppet_module_uninstall => {
+ :exit_code => 1,
+ :matches => [%r{Could not find a directory environment named '#{env}' anywhere in the path.*#{path}}],
+ },
+ :puppet_apply => {
+ :exit_code => 1,
+ :matches => [%r{Could not find a directory environment named '#{env}' anywhere in the path.*#{path}}],
+ },
+ :puppet_agent => {
+ :exit_code => 1,
+ :matches => [%r{Warning.*404.*Could not find environment '#{env}'},
+ %r{Could not retrieve catalog; skipping run}],
+ },
+ }
+end
+
+review[non_existent_environment_scenario] = review_results(
+ results[non_existent_environment_scenario],
+ non_existent_environment_expectations.call('doesnotexist', master['puppetpath'])
+)
+
+existing_directory_with_puppet_conf_expectations = {
+ :puppet_config => {
+ :exit_code => 0,
+ :matches => [%r{manifest.*#{master['puppetpath']}/environments/testing_environment_conf/nonstandard-manifests$},
+ %r{modulepath.*#{master['puppetpath']}/environments/testing_environment_conf/nonstandard-modules:.+},
+ %r{config_version = #{master['puppetpath']}/environments/testing_environment_conf/local-version.sh$}]
+ },
+ :puppet_module_install => {
+ :exit_code => 0,
+ :matches => [%r{Preparing to install into #{master['puppetpath']}/environments/testing_environment_conf/nonstandard-modules},
+ %r{pmtacceptance-nginx}],
+ },
+ :puppet_module_uninstall => {
+ :exit_code => 0,
+ :matches => [%r{Removed.*pmtacceptance-nginx.*from #{master['puppetpath']}/environments/testing_environment_conf/nonstandard-modules}],
+ },
+ :puppet_apply => {
+ :exit_code => 0,
+ :matches => [%r{include directory testing with environment\.conf testing_mod}],
+ },
+ :puppet_agent => {
+ :exit_code => 2,
+ :matches => [%r{Applying configuration version 'local testing_environment_conf'},
+ %r{in directory testing with environment\.conf site.pp},
+ %r{include directory testing with environment\.conf testing_mod}],
+ },
+}
+step "Reviewing: #{with_explicit_environment_conf_scenario}"
+review[with_explicit_environment_conf_scenario] = review_results(
+ results[with_explicit_environment_conf_scenario],
+ existing_directory_with_puppet_conf_expectations
+)
+
+master_environmentpath_expectations = existing_directory_expectations.call('testing').merge(
+ :puppet_module_install => {
+ :exit_code => 0,
+ :matches => [%r{Preparing to install into #{master['puppetpath']}/modules},
+ %r{pmtacceptance-nginx}],
+ :expect_failure => true,
+ :notes => "Runs in user mode and doesn't see the master environmenetpath setting.",
+ },
+ :puppet_module_uninstall => {
+ :exit_code => 0,
+ :matches => [%r{Removed.*pmtacceptance-nginx.*from #{master['puppetpath']}/modules}],
+ :expect_failure => true,
+ :notes => "Runs in user mode and doesn't see the master environmenetpath setting.",
+ },
+ :puppet_apply => {
+ :exit_code => 0,
+ :matches => [%r{include default environment testing_mod}],
+ :expect_failure => true,
+ :notes => "Runs in user mode and doesn't see the master environmenetpath setting.",
+ }
+)
+step "Reviewing: #{master_environmentpath_scenario}"
+review[master_environmentpath_scenario] = review_results(
+ results[master_environmentpath_scenario],
+ master_environmentpath_expectations
+)
+
+bad_environmentpath_expectations = non_existent_environment_expectations.call('testing', '/doesnotexist')
+step "Reviewing: #{bad_environmentpath_scenario}"
+review[bad_environmentpath_scenario] = review_results(
+ results[bad_environmentpath_scenario],
+ bad_environmentpath_expectations
+)
+
+#########################
+step "[ Assert Success ]"
+
+assert_review(review)
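Each scenario above is reviewed against an expectations hash keyed by the puppet invocation exercised (:puppet_config, :puppet_module_install, :puppet_module_uninstall, :puppet_apply, :puppet_agent), pairing an expected exit code with regexes the captured output must match (or, via :does_not_match, must not). The review_results and assert_review helpers come from Puppet::Acceptance::EnvironmentUtils; purely as a hypothetical sketch of the comparison they perform (the helper name, and the exit_code/output accessors on the result objects, are assumptions for illustration), the check amounts to:

  # Hypothetical sketch only; not the actual EnvironmentUtils helper.
  # Each result object is assumed to expose exit_code and output.
  def review_results_sketch(results, expectations)
    failures = {}
    expectations.each do |command, expected|
      result = results[command]
      problems = []
      if result.exit_code != expected[:exit_code]
        problems << "exit code #{result.exit_code}, expected #{expected[:exit_code]}"
      end
      Array(expected[:matches]).each do |re|
        problems << "output missing match for #{re.inspect}" unless result.output =~ re
      end
      Array(expected[:does_not_match]).each do |re|
        problems << "output unexpectedly matches #{re.inspect}" if result.output =~ re
      end
      failures[command] = problems unless problems.empty?
    end
    failures
  end

Entries flagged with :expect_failure (as in the master_environmentpath scenario above) presumably mark expectations that are known not to hold yet; the :notes field records why.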
diff --git a/acceptance/tests/environment/directory_environment_with_environment_conf.rb b/acceptance/tests/environment/directory_environment_with_environment_conf.rb
index f1d75ce2b..cba94941e 100644
--- a/acceptance/tests/environment/directory_environment_with_environment_conf.rb
+++ b/acceptance/tests/environment/directory_environment_with_environment_conf.rb
@@ -1,109 +1,110 @@
test_name 'Use a directory environment from environmentpath with an environment.conf'
-testdir = master.tmpdir('use-environment-conf')
+testdir = create_tmpdir_for_user master, 'use-environment-conf'
absolute_manifestdir = "#{testdir}/manifests"
absolute_modulesdir = "#{testdir}/absolute-modules"
absolute_globalsdir = "#{testdir}/global-modules"
+master_user = on(master, "puppet master --configprint user").stdout.strip
apply_manifest_on(master, <<-MANIFEST, :catch_failures => true)
File {
ensure => directory,
- owner => #{master['user']},
+ owner => #{master_user},
group => #{master['group']},
- mode => 0750,
+ mode => 0770,
}
file {
"#{testdir}":;
"#{testdir}/environments":;
"#{testdir}/environments/direnv":;
"#{testdir}/environments/direnv/environment.conf":
ensure => file,
mode => 0640,
content => '
manifest=#{absolute_manifestdir}
modulepath=relative-modules:#{absolute_modulesdir}:$basemodulepath
config_version=version_script.sh
'
;
"#{testdir}/environments/direnv/relative-modules":;
"#{testdir}/environments/direnv/relative-modules/relmod":;
"#{testdir}/environments/direnv/relative-modules/relmod/manifests":;
"#{testdir}/environments/direnv/relative-modules/relmod/manifests/init.pp":
ensure => file,
mode => 0640,
content => 'class relmod {
notify { "included relmod": }
}'
;
"#{testdir}/environments/direnv/version_script.sh":
ensure => file,
mode => 0750,
content => '#!/usr/bin/env sh
echo "ver123"
'
;
"#{absolute_manifestdir}":;
"#{absolute_manifestdir}/site.pp":
ensure => file,
mode => 0640,
content => '
notify { "direnv site.pp": }
include relmod
include absmod
include globalmod
'
;
"#{absolute_modulesdir}":;
"#{absolute_modulesdir}/absmod":;
"#{absolute_modulesdir}/absmod/manifests":;
"#{absolute_modulesdir}/absmod/manifests/init.pp":
ensure => file,
mode => 0640,
content => 'class absmod {
notify { "included absmod": }
}'
;
"#{absolute_globalsdir}":;
"#{absolute_globalsdir}/globalmod":;
"#{absolute_globalsdir}/globalmod/manifests":;
"#{absolute_globalsdir}/globalmod/manifests/init.pp":
ensure => file,
mode => 0640,
content => 'class globalmod {
notify { "included globalmod": }
}'
;
}
MANIFEST
master_opts = {
'master' => {
'environmentpath' => "#{testdir}/environments",
'basemodulepath' => "#{absolute_globalsdir}",
}
}
if master.is_pe?
master_opts['master']['basemodulepath'] << ":#{master['sitemoduledir']}"
end
with_puppet_running_on master, master_opts, testdir do
agents.each do |agent|
on(agent,
puppet("agent", "-t", "--server", master, "--environment", "direnv"),
:acceptable_exit_codes => [2]) do |result|
assert_match(/direnv site.pp/, result.stdout)
assert_match(/included relmod/, result.stdout)
assert_match(/included absmod/, result.stdout)
assert_match(/included globalmod/, result.stdout)
assert_match(/Applying.*ver123/, result.stdout)
end
end
end
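In the environment.conf written above, the bare 'relative-modules' entry resolves against the environment's own directory, the absolute entry is used as given, and $basemodulepath appends the master's global setting. Under this test's configuration (basemodulepath pointing at the global-modules directory, plus the PE sitemoduledir when applicable), the effective modulepath for the direnv environment works out to roughly:

  #{testdir}/environments/direnv/relative-modules:#{absolute_modulesdir}:#{absolute_globalsdir}

which is why a single agent run can include relmod, absmod and globalmod, and why the relative config_version=version_script.sh produces the 'ver123' string asserted above.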
diff --git a/acceptance/tests/environment/dynamic.rb b/acceptance/tests/environment/dynamic.rb
new file mode 100644
index 000000000..2431c1155
--- /dev/null
+++ b/acceptance/tests/environment/dynamic.rb
@@ -0,0 +1,121 @@
+test_name "dynamic environments"
+require 'puppet/acceptance/environment_utils'
+extend Puppet::Acceptance::EnvironmentUtils
+
+step "setup environments"
+
+stub_forge_on(master)
+
+testdir = create_tmpdir_for_user master, "confdir"
+puppet_conf_backup_dir = create_tmpdir_for_user(master, "puppet-conf-backup-dir")
+
+apply_manifest_on(master, environment_manifest(testdir), :catch_failures => true)
+
+results = {}
+review = {}
+
+####################
+step "[ Run Tests ]"
+
+existing_dynamic_scenario = "Test a specific, existing dynamic environment configuration"
+step existing_dynamic_scenario
+master_opts = {
+ 'main' => {
+ 'manifest' => '$confdir/dynamic/$environment/manifests',
+ 'modulepath' => '$confdir/dynamic/$environment/modules',
+ 'config_version' => '$confdir/static-version.sh',
+ }
+}
+results[existing_dynamic_scenario] = use_an_environment("testing", "dynamic testing", master_opts, testdir, puppet_conf_backup_dir)
+
+default_environment_scenario = "Test behavior of default environment"
+step default_environment_scenario
+results[default_environment_scenario] = use_an_environment(nil, "default environment", master_opts, testdir, puppet_conf_backup_dir)
+
+non_existent_environment_scenario = "Test for an environment that does not exist"
+step non_existent_environment_scenario
+results[non_existent_environment_scenario] = use_an_environment("doesnotexist", "non existent environment", master_opts, testdir, puppet_conf_backup_dir)
+
+########################################
+step "[ Report on Environment Results ]"
+
+confdir = master.puppet['confdir']
+
+step "Reviewing: #{existing_dynamic_scenario}"
+review[existing_dynamic_scenario] = review_results(results[existing_dynamic_scenario],
+ :puppet_config => {
+ :exit_code => 0,
+ :matches => [%r{manifest.*#{confdir}/dynamic/testing/manifests$},
+ %r{modulepath.*#{confdir}/dynamic/testing/modules$},
+ %r{config_version.*#{confdir}/static-version.sh$}]
+ },
+ :puppet_module_install => {
+ :exit_code => 0,
+ :matches => [%r{Preparing to install into #{confdir}/dynamic/testing/modules},
+ %r{pmtacceptance-nginx}],
+ },
+ :puppet_module_uninstall => {
+ :exit_code => 0,
+ :matches => [%r{Removed.*pmtacceptance-nginx.*from #{confdir}/dynamic/testing/modules}],
+ },
+ :puppet_apply => {
+ :exit_code => 0,
+ :matches => [%r{include dynamic testing environment testing_mod}],
+ },
+ :puppet_agent => {
+ :exit_code => 2,
+ :matches => [%r{Applying configuration version 'static'},
+ %r{in dynamic testing environment site.pp},
+ %r{include dynamic testing environment testing_mod}],
+ }
+)
+
+step "Reviewing: #{default_environment_scenario}"
+default_expectations = lambda do |env|
+ {
+ :puppet_config => {
+ :exit_code => 0,
+ :matches => [%r{manifest.*#{confdir}/dynamic/#{env}/manifests$},
+ %r{modulepath.*#{confdir}/dynamic/#{env}/modules$},
+ %r{^config_version.*#{confdir}/static-version.sh$}]
+ },
+ :puppet_module_install => {
+ :exit_code => 0,
+ :matches => [%r{Preparing to install into #{confdir}/dynamic/#{env}/modules},
+ %r{pmtacceptance-nginx}],
+ },
+ :puppet_module_uninstall => {
+ :exit_code => 0,
+ :matches => [%r{Removed.*pmtacceptance-nginx.*from #{confdir}/dynamic/#{env}/modules}],
+ },
+ :puppet_apply => {
+ :exit_code => 1,
+ :matches => [ENV['PARSER'] == 'future' ?
+ %r{Error:.*Could not find class ::testing_mod} :
+ %r{Error:.*Could not find class testing_mod}
+ ],
+ },
+ :puppet_agent => {
+ :exit_code => 0,
+ :matches => [%r{Applying configuration version 'static'}],
+ :does_not_match => [%r{in default environment site.pp},
+ %r{include default environment testing_mod},
+ %r{Notice: include}],
+ },
+ }
+end
+review[default_environment_scenario] = review_results(
+ results[default_environment_scenario],
+ default_expectations.call('production')
+)
+
+step "Reviewing: #{non_existent_environment_scenario}"
+review[non_existent_environment_scenario] = review_results(
+ results[non_existent_environment_scenario],
+ default_expectations.call('doesnotexist')
+)
+
+#########################
+step "[ Assert Success ]"
+
+assert_review(review)
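The master_opts hash above is the classic dynamic-environment configuration; written out as a puppet.conf excerpt (same values, shown only as the on-disk form) it is approximately:

  [main]
  manifest       = $confdir/dynamic/$environment/manifests
  modulepath     = $confdir/dynamic/$environment/modules
  config_version = $confdir/static-version.sh

Because $environment is interpolated per request, the 'testing' scenario reads from dynamic/testing/, while the default scenario falls back to dynamic/production/, which does not contain the testing_mod class and therefore fails the puppet apply expectation above.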
diff --git a/acceptance/tests/environment/dynamic_environments.rb b/acceptance/tests/environment/dynamic_environments.rb
index f1994744d..6fe68688a 100644
--- a/acceptance/tests/environment/dynamic_environments.rb
+++ b/acceptance/tests/environment/dynamic_environments.rb
@@ -1,138 +1,138 @@
test_name "Dynamic Environments"
-testdir = master.tmpdir('dynamic-environment')
+testdir = create_tmpdir_for_user master, 'dynamic-environment'
environmentsdir = "#{testdir}/environments"
step "Prepare manifests and modules"
def an_environment(envdir, env)
content = <<-ENVIRONMENT
####################
# #{env} environment
file {
"#{envdir}/#{env}":;
"#{envdir}/#{env}/hiera":;
"#{envdir}/#{env}/manifests":;
"#{envdir}/#{env}/modules":;
"#{envdir}/#{env}/modules/amod":;
"#{envdir}/#{env}/modules/amod/manifests":;
}
file { "#{envdir}/#{env}/hiera/#{env}.yaml":
ensure => file,
mode => 0640,
content => 'foo: foo-#{env}',
}
file { "#{envdir}/#{env}/hiera/common.yaml":
ensure => file,
mode => 0640,
content => 'foo: foo-common',
}
file { "#{envdir}/#{env}/manifests/site.pp":
ensure => file,
mode => 0640,
content => '
notify { "#{env}-site.pp": }
notify { "hiera":
message => hiera(foo),
}
include amod
'
}
file { "#{envdir}/#{env}/modules/amod/manifests/init.pp":
ensure => file,
mode => 0640,
content => '
class amod {
notify { "#{env}-amod": }
}
'
}
ENVIRONMENT
end
manifest = <<-MANIFEST
File {
ensure => directory,
owner => #{master['user']},
group => #{master['group']},
mode => 0750,
}
file {
"#{testdir}":;
"#{environmentsdir}":;
}
file { "#{testdir}/hiera.yaml":
ensure => file,
mode => 0640,
content => '
---
:backends: yaml
:yaml:
:datadir: "#{environmentsdir}/%{environment}/hiera"
:hierarchy:
- "%{environment}"
- common
',
}
#{an_environment(environmentsdir, 'production')}
#{an_environment(environmentsdir, 'testing')}
MANIFEST
apply_manifest_on(master, manifest, :catch_failures => true)
def test_on_agents(environment, default_env = false)
agents.each do |agent|
environment_switch = "--environment #{environment}" if !default_env
on(agent, puppet("agent -t --server #{master}", environment_switch), :acceptable_exit_codes => [2] ) do
assert_match(/#{environment}-site.pp/, stdout)
assert_match(/foo-#{environment}/, stdout)
assert_match(/#{environment}-amod/, stdout)
end
end
end
ssldir = on(master, puppet("master --configprint ssldir")).stdout.chomp
common_opts = {
'modulepath' => "#{testdir}/environments/$environment/modules",
'hiera_config' => "#{testdir}/hiera.yaml",
}
if master.is_pe?
common_opts['modulepath'] << ":#{master['sitemoduledir']}"
end
master_opts = {
'master' => {
'manifest' => "#{testdir}/environments/$environment/manifests/site.pp",
}.merge(common_opts)
}
with_puppet_running_on master, master_opts, testdir do
step "Agent run with default environment"
test_on_agents('production', true)
end
master_opts = {
'master' => {
'manifest' => "#{testdir}/environments/$environment/manifests/site.pp",
}.merge(common_opts)
}
with_puppet_running_on master, master_opts, testdir do
step "Agent run with testing environment"
test_on_agents('testing')
step "And then agent run with another environment but the same master process"
test_on_agents('production')
end
master_opts = {
'master' => {
'manifestdir' => "#{testdir}/environments/$environment/manifests",
}.merge(common_opts)
}
with_puppet_running_on master, master_opts, testdir do
step "Agent run with testing environment and manifestdir set instead of manifest"
test_on_agents('testing')
end
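With the hiera.yaml written above, lookups are resolved per environment: the datadir interpolates %{environment}, and the hierarchy tries the environment-named file before common. For an agent in the testing environment, hiera(foo) is therefore answered from (paths under the temporary environmentsdir):

  <environmentsdir>/testing/hiera/testing.yaml   # foo: foo-testing  (wins)
  <environmentsdir>/testing/hiera/common.yaml    # foo: foo-common   (fallback)

which is what the /foo-#{environment}/ assertion in test_on_agents checks.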
diff --git a/acceptance/tests/environment/static.rb b/acceptance/tests/environment/static.rb
new file mode 100644
index 000000000..d37823bd5
--- /dev/null
+++ b/acceptance/tests/environment/static.rb
@@ -0,0 +1,115 @@
+test_name "legacy environments"
+require 'puppet/acceptance/environment_utils'
+extend Puppet::Acceptance::EnvironmentUtils
+
+step "setup environments"
+
+stub_forge_on(master)
+
+testdir = create_tmpdir_for_user master, "confdir"
+puppet_conf_backup_dir = create_tmpdir_for_user(master, "puppet-conf-backup-dir")
+
+apply_manifest_on(master, environment_manifest(testdir), :catch_failures => true)
+
+results = {}
+review = {}
+
+####################
+step "[ Run Tests ]"
+
+existing_legacy_scenario = "Test a specific, existing legacy environment configuration"
+step existing_legacy_scenario
+master_opts = {
+ 'testing' => {
+ 'manifest' => "$confdir/testing-manifests",
+ 'modulepath' => "$confdir/testing-modules",
+ 'config_version' => "$confdir/static-version.sh",
+ },
+}
+results[existing_legacy_scenario] = use_an_environment("testing", "legacy testing", master_opts, testdir, puppet_conf_backup_dir)
+
+default_environment_scenario = "Test behavior of default environment"
+step default_environment_scenario
+results[default_environment_scenario] = use_an_environment(nil, "default environment", master_opts, testdir, puppet_conf_backup_dir)
+
+non_existent_environment_scenario = "Test for an environment that does not exist"
+step non_existent_environment_scenario
+results[non_existent_environment_scenario] = use_an_environment("doesnotexist", "non existent environment", master_opts, testdir, puppet_conf_backup_dir)
+
+########################################
+step "[ Report on Environment Results ]"
+
+confdir = master.puppet['confdir']
+
+step "Reviewing: #{existing_legacy_scenario}"
+review[existing_legacy_scenario] = review_results(results[existing_legacy_scenario],
+ :puppet_config => {
+ :exit_code => 0,
+ :matches => [%r{manifest.*#{confdir}/testing-manifests$},
+ %r{modulepath.*#{confdir}/testing-modules$},
+ %r{config_version.*#{confdir}/static-version.sh$}]
+ },
+ :puppet_module_install => {
+ :exit_code => 0,
+ :matches => [%r{Preparing to install into #{confdir}/testing-modules},
+ %r{pmtacceptance-nginx}],
+ },
+ :puppet_module_uninstall => {
+ :exit_code => 0,
+ :matches => [%r{Removed.*pmtacceptance-nginx.*from #{confdir}/testing-modules}],
+ },
+ :puppet_apply => {
+ :exit_code => 0,
+ :matches => [%r{include legacy testing environment testing_mod}],
+ },
+ :puppet_agent => {
+ :exit_code => 2,
+ :matches => [%r{Applying configuration version 'static'},
+ %r{in legacy testing environment site.pp},
+ %r{include legacy testing environment testing_mod}],
+ }
+)
+
+step "Reviewing: #{default_environment_scenario}"
+default_expectations = {
+ :puppet_config => {
+ :exit_code => 0,
+ :matches => [%r{manifest.*#{confdir}/manifests/site.pp$},
+ %r{modulepath.*#{confdir}/modules:.*},
+ %r{^config_version\s+=\s*$}]
+ },
+ :puppet_module_install => {
+ :exit_code => 0,
+ :matches => [%r{Preparing to install into #{confdir}/modules},
+ %r{pmtacceptance-nginx}],
+ },
+ :puppet_module_uninstall => {
+ :exit_code => 0,
+ :matches => [%r{Removed.*pmtacceptance-nginx.*from #{confdir}/modules}],
+ },
+ :puppet_apply => {
+ :exit_code => 0,
+ :matches => [%r{include default environment testing_mod}],
+ },
+ :puppet_agent => {
+ :exit_code => 2,
+ :matches => [%r{Applying configuration version '\d+'},
+ %r{in default environment site.pp},
+ %r{include default environment testing_mod}],
+ },
+}
+review[default_environment_scenario] = review_results(
+ results[default_environment_scenario],
+ default_expectations
+)
+
+step "Reviewing: #{non_existent_environment_scenario}"
+review[non_existent_environment_scenario] = review_results(
+ results[non_existent_environment_scenario],
+ default_expectations
+)
+
+#########################
+step "[ Assert Success ]"
+
+assert_review(review)
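For contrast with the dynamic and directory cases, the legacy setup above is equivalent to a named environment section in puppet.conf (same values as master_opts, shown only as the on-disk form):

  [testing]
  manifest       = $confdir/testing-manifests
  modulepath     = $confdir/testing-modules
  config_version = $confdir/static-version.sh

The default and non-existent scenarios share one expectations hash because, with only config-file environments defined, both fall back to the global manifest and modulepath defaults under $confdir.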
diff --git a/acceptance/tests/environment/use_agent_environment_when_enc_doesnt_specify.rb b/acceptance/tests/environment/use_agent_environment_when_enc_doesnt_specify.rb
index 098cf1cc8..310f9fb2b 100644
--- a/acceptance/tests/environment/use_agent_environment_when_enc_doesnt_specify.rb
+++ b/acceptance/tests/environment/use_agent_environment_when_enc_doesnt_specify.rb
@@ -1,40 +1,40 @@
test_name "Agent should use agent environment if there is an enc that does not specify the environment"
-testdir = master.tmpdir('use_agent_env')
+testdir = create_tmpdir_for_user master, 'use_agent_env'
create_remote_file master, "#{testdir}/enc.rb", <<END
#!#{master['puppetbindir']}/ruby
puts <<YAML
parameters:
YAML
END
on master, "chmod 755 #{testdir}/enc.rb"
create_remote_file(master, "#{testdir}/different.pp", 'notify { "production environment": }')
create_remote_file(master, "#{testdir}/more_different.pp", 'notify { "more_different_string": }')
master_opts = {
'main' => {
'node_terminus' => 'exec',
'external_nodes' => "#{testdir}/enc.rb",
'manifest' => "#{testdir}/site.pp"
},
'production' => {
'manifest' => "#{testdir}/different.pp"
},
'more_different' => {
'manifest' => "#{testdir}/more_different.pp"
}
}
on master, "chown -R #{master['user']}:#{master['group']} #{testdir}"
on master, "chmod -R g+rwX #{testdir}"
with_puppet_running_on master, master_opts, testdir do
agents.each do |agent|
run_agent_on(agent, "--no-daemonize --onetime --server #{master} --verbose --environment more_different")
assert_match(/more_different_string/, stdout, "Did not find more_different_string from \"more_different\" environment")
end
end
diff --git a/acceptance/tests/environment/use_agent_environment_when_no_enc.rb b/acceptance/tests/environment/use_agent_environment_when_no_enc.rb
index 2f9ce9397..9c850bbe1 100644
--- a/acceptance/tests/environment/use_agent_environment_when_no_enc.rb
+++ b/acceptance/tests/environment/use_agent_environment_when_no_enc.rb
@@ -1,29 +1,29 @@
test_name "Agent should use agent environment if there is no enc-specified environment"
-testdir = master.tmpdir('use_agent_env')
+testdir = create_tmpdir_for_user master, 'use_agent_env'
create_remote_file(master, "#{testdir}/different.pp", 'notify { "production environment": }')
create_remote_file(master, "#{testdir}/more_different.pp", 'notify { "more_different_string": }')
on master, "chown -R #{master['user']}:#{master['group']} #{testdir}"
on master, "chmod -R g+rwX #{testdir}"
master_opts = {
'main' => {
'manifest' => "#{testdir}/site.pp"
},
'production' => {
'manifest' => "#{testdir}/different.pp"
},
'more_different' => {
'manifest' => "#{testdir}/more_different.pp"
}
}
with_puppet_running_on master, master_opts, testdir do
agents.each do |agent|
run_agent_on(agent, "--no-daemonize --onetime --server #{master} --verbose --environment more_different")
assert_match(/more_different_string/, stdout, "Did not find more_different_string from \"more_different\" environment")
end
end
diff --git a/acceptance/tests/environment/use_enc_environment.rb b/acceptance/tests/environment/use_enc_environment.rb
index c6d62e13e..5c182570b 100644
--- a/acceptance/tests/environment/use_enc_environment.rb
+++ b/acceptance/tests/environment/use_enc_environment.rb
@@ -1,36 +1,36 @@
test_name "Agent should environment given by ENC"
-testdir = master.tmpdir('use_enc_env')
+testdir = create_tmpdir_for_user master, 'use_enc_env'
create_remote_file master, "#{testdir}/enc.rb", <<END
#!#{master['puppetbindir']}/ruby
puts <<YAML
parameters:
environment: special
YAML
END
on master, "chmod 755 #{testdir}/enc.rb"
master_opts = {
'master' => {
'node_terminus' => 'exec',
'external_nodes' => "#{testdir}/enc.rb",
'manifest' => "#{testdir}/site.pp"
},
'special' => {
'manifest' => "#{testdir}/different.pp"
}
}
create_remote_file(master, "#{testdir}/different.pp", 'notify { "expected_string": }')
on master, "chown -R #{master['user']}:#{master['group']} #{testdir}"
on master, "chmod -R g+rwX #{testdir}"
with_puppet_running_on master, master_opts, testdir do
agents.each do |agent|
run_agent_on(agent, "--no-daemonize --onetime --server #{master} --verbose")
assert_match(/expected_string/, stdout, "Did not find expected_string from \"special\" environment")
end
end
diff --git a/acceptance/tests/environment/use_enc_environment_for_files.rb b/acceptance/tests/environment/use_enc_environment_for_files.rb
index 357d1a8ff..02b79badf 100644
--- a/acceptance/tests/environment/use_enc_environment_for_files.rb
+++ b/acceptance/tests/environment/use_enc_environment_for_files.rb
@@ -1,63 +1,63 @@
test_name "Agent should use environment given by ENC for fetching remote files"
-testdir = master.tmpdir('respect_enc_test')
+testdir = create_tmpdir_for_user master, 'respect_enc_test'
create_remote_file master, "#{testdir}/enc.rb", <<END
#!#{master['puppetbindir']}/ruby
puts <<YAML
parameters:
environment: special
YAML
END
on master, "chmod 755 #{testdir}/enc.rb"
on master, "mkdir -p #{testdir}/modules"
# Create a plugin file on the master
on master, "mkdir -p #{testdir}/special/amod/files"
create_remote_file(master, "#{testdir}/special/amod/files/testy", "special_environment")
on master, "chown -R #{master['user']}:#{master['group']} #{testdir}"
on master, "chmod -R g+rwX #{testdir}"
master_opts = {
'master' => {
'node_terminus' => 'exec',
'external_nodes' => "#{testdir}/enc.rb",
'filetimeout' => 1
},
'special' => {
'modulepath' => "#{testdir}/special",
'manifest' => "#{testdir}/different.pp"
}
}
if master.is_pe?
master_opts['special']['modulepath'] << ":#{master['sitemoduledir']}"
end
with_puppet_running_on master, master_opts, testdir do
agents.each do |agent|
atmp = agent.tmpdir('respect_enc_test')
logger.debug "agent: #{agent} \tagent.tmpdir => #{atmp}"
create_remote_file master, "#{testdir}/different.pp", <<END
file { "#{atmp}/special_testy":
source => "puppet:///modules/amod/testy",
}
notify { "mytemp is ${::mytemp}": }
END
on master, "chmod 644 #{testdir}/different.pp"
sleep 2 # Make sure the master has time to reload the file
run_agent_on(agent, "--no-daemonize --onetime --server #{master} --verbose --trace")
on agent, "cat #{atmp}/special_testy" do |result|
assert_match(/special_environment/,
result.stdout,
"The file from environment 'special' was not found")
end
on agent, "rm -rf #{atmp}"
end
end
diff --git a/acceptance/tests/environment/use_enc_environment_for_pluginsync.rb b/acceptance/tests/environment/use_enc_environment_for_pluginsync.rb
index 72a12ed33..bfebc0b8e 100644
--- a/acceptance/tests/environment/use_enc_environment_for_pluginsync.rb
+++ b/acceptance/tests/environment/use_enc_environment_for_pluginsync.rb
@@ -1,43 +1,43 @@
test_name "Agent should use environment given by ENC for pluginsync"
-testdir = master.tmpdir('respect_enc_test')
+testdir = create_tmpdir_for_user master, 'respect_enc_test'
create_remote_file master, "#{testdir}/enc.rb", <<END
#!#{master['puppetbindir']}/ruby
puts <<YAML
parameters:
environment: special
YAML
END
on master, "chmod 755 #{testdir}/enc.rb"
master_opts = {
'master' => {
'node_terminus' => 'exec',
'external_nodes' => "#{testdir}/enc.rb"
},
'special' => {
'modulepath' => "#{testdir}/special"
}
}
if master.is_pe?
master_opts['special']['modulepath'] << ":#{master['sitemoduledir']}"
end
on master, "mkdir -p #{testdir}/modules"
# Create a plugin file on the master
on master, "mkdir -p #{testdir}/special/amod/lib/puppet"
create_remote_file(master, "#{testdir}/special/amod/lib/puppet/foo.rb", "#special_version")
on master, "chown -R #{master['user']}:#{master['group']} #{testdir}"
on master, "chmod -R g+rwX #{testdir}"
with_puppet_running_on master, master_opts, testdir do
agents.each do |agent|
run_agent_on(agent, "--no-daemonize --onetime --server #{master}")
on agent, "cat \"#{agent.puppet['vardir']}/lib/puppet/foo.rb\""
assert_match(/#special_version/, stdout, "The plugin from environment 'special' was not synced")
on agent, "rm -rf \"#{agent.puppet['vardir']}/lib\""
end
end
diff --git a/acceptance/tests/environment/use_environment_from_environmentpath.rb b/acceptance/tests/environment/use_environment_from_environmentpath.rb
index 479ed5ce2..78fdce654 100644
--- a/acceptance/tests/environment/use_environment_from_environmentpath.rb
+++ b/acceptance/tests/environment/use_environment_from_environmentpath.rb
@@ -1,177 +1,189 @@
test_name "Use environments from the environmentpath"
-testdir = master.tmpdir('use_environmentpath')
+testdir = create_tmpdir_for_user master, 'use_environmentpath'
def generate_environment(path_to_env, environment)
env_content = <<-EOS
"#{path_to_env}/#{environment}":;
"#{path_to_env}/#{environment}/manifests":;
"#{path_to_env}/#{environment}/modules":;
EOS
end
def generate_module_content(module_name, options = {})
base_path = options[:base_path]
environment = options[:environment]
env_path = options[:env_path]
path_to_module = [base_path, env_path, environment, "modules"].compact.join("/")
module_info = "module-#{module_name}"
module_info << "-from-#{environment}" if environment
module_content = <<-EOS
"#{path_to_module}/#{module_name}":;
"#{path_to_module}/#{module_name}/manifests":;
"#{path_to_module}/#{module_name}/files":;
"#{path_to_module}/#{module_name}/templates":;
"#{path_to_module}/#{module_name}/lib":;
"#{path_to_module}/#{module_name}/lib/facter":;
"#{path_to_module}/#{module_name}/manifests/init.pp":
ensure => file,
mode => 0640,
content => 'class #{module_name} {
notify { "template-#{module_name}": message => template("#{module_name}/our_template.erb") }
file { "$agent_file_location/file-#{module_info}": source => "puppet:///modules/#{module_name}/data" }
}'
;
"#{path_to_module}/#{module_name}/lib/facter/environment_fact_#{module_name}.rb":
ensure => file,
mode => 0640,
content => "Facter.add(:environment_fact_#{module_name}) { setcode { 'environment fact from #{module_info}' } }"
;
"#{path_to_module}/#{module_name}/files/data":
ensure => file,
mode => 0640,
content => "data file from #{module_info}"
;
"#{path_to_module}/#{module_name}/templates/our_template.erb":
ensure => file,
mode => 0640,
content => "<%= @environment_fact_#{module_name} %>"
;
EOS
end
def generate_site_manifest(path_to_manifest, *modules_to_include)
manifest_content = <<-EOS
"#{path_to_manifest}/site.pp":
ensure => file,
mode => 0640,
content => "#{modules_to_include.map { |m| "include #{m}" }.join("\n")}"
;
EOS
end
+master_user = on(master, "puppet master --configprint user").stdout.strip
apply_manifest_on(master, <<-MANIFEST, :catch_failures => true)
File {
ensure => directory,
- owner => #{master['user']},
+ owner => #{master_user},
group => #{master['group']},
- mode => 0750,
+ mode => 0770,
}
file {
"#{testdir}":;
"#{testdir}/base":;
"#{testdir}/additional":;
"#{testdir}/modules":;
#{generate_environment("#{testdir}/base", "shadowed")}
#{generate_environment("#{testdir}/base", "onlybase")}
#{generate_environment("#{testdir}/additional", "shadowed")}
#{generate_module_content("atmp",
:base_path => testdir,
:env_path => 'base',
:environment => 'shadowed')}
#{generate_site_manifest("#{testdir}/base/shadowed/manifests", "atmp", "globalmod")}
#{generate_module_content("atmp",
:base_path => testdir,
:env_path => 'base',
:environment => 'onlybase')}
#{generate_site_manifest("#{testdir}/base/onlybase/manifests", "atmp", "globalmod")}
#{generate_module_content("atmp",
:base_path => testdir,
:env_path => 'additional',
:environment => 'shadowed')}
#{generate_site_manifest("#{testdir}/additional/shadowed/manifests", "atmp", "globalmod")}
# And one global module (--modulepath setting)
#{generate_module_content("globalmod", :base_path => testdir)}
+ "#{testdir}/additional/production":;
+ "#{testdir}/additional/production/manifests":;
+#{generate_site_manifest("#{testdir}/additional/production/manifests", "globalmod")}
}
MANIFEST
def run_with_environment(agent, environment, options = {})
expected_exit_code = options[:expected_exit_code] || 2
expected_strings = options[:expected_strings]
step "running an agent in environment '#{environment}'"
atmp = agent.tmpdir("use_environmentpath_#{environment}")
agent_config = [
"-t",
"--server", master,
]
agent_config << '--environment' << environment if environment
+  # This is to test how the agent behaves when using the directory environment
+ # loaders (which will not load an environment if it does not exist)
+ agent_config << "--environmentpath='$confdir/environments'" if agent != master
agent_config << {
'ENV' => { "FACTER_agent_file_location" => atmp },
}
on(agent,
puppet("agent", *agent_config),
:acceptable_exit_codes => [expected_exit_code]) do |result|
yield atmp, result
end
on agent, "rm -rf #{atmp}"
end
master_opts = {
'master' => {
'environmentpath' => "#{testdir}/additional:#{testdir}/base",
'basemodulepath' => "#{testdir}/modules",
}
}
if master.is_pe?
master_opts['master']['basemodulepath'] << ":#{master['sitemoduledir']}"
end
with_puppet_running_on master, master_opts, testdir do
agents.each do |agent|
run_with_environment(agent, "shadowed") do |tmpdir,catalog_result|
["module-atmp-from-shadowed", "module-globalmod"].each do |expected|
assert_match(/environment fact from #{expected}/, catalog_result.stdout)
end
["module-atmp-from-shadowed", "module-globalmod"].each do |expected|
on agent, "cat #{tmpdir}/file-#{expected}" do |file_result|
assert_match(/data file from #{expected}/, file_result.stdout)
end
end
end
run_with_environment(agent, "onlybase") do |tmpdir,catalog_result|
["module-atmp-from-onlybase", "module-globalmod"].each do |expected|
assert_match(/environment fact from #{expected}/, catalog_result.stdout)
end
["module-atmp-from-onlybase", "module-globalmod"].each do |expected|
on agent, "cat #{tmpdir}/file-#{expected}" do |file_result|
assert_match(/data file from #{expected}/, file_result.stdout)
end
end
end
if master.is_pe?
step("This test cannot run if the production environment directory does not exist, because the fallback production environment puppet creates has an empty modulepath and PE cannot run without it's basemodulepath in /opt. PUP-2519, which implicitly creates the production environment directory should allow this to run again")
else
- run_with_environment(agent, nil, :expected_exit_code => 0) do |tmpdir, result|
- assert_no_match(/module-atmp/, result.stdout, "module-atmp was included despite no environment being loaded")
- assert_match(/Loading facts.*globalmod/, result.stdout)
+ run_with_environment(agent, nil, :expected_exit_code => 2) do |tmpdir, catalog_result|
+ assert_no_match(/module-atmp/, catalog_result.stdout, "module-atmp was included despite no environment being loaded")
+
+ assert_match(/environment fact from module-globalmod/, catalog_result.stdout)
+
+ on agent, "cat #{tmpdir}/file-module-globalmod" do |file_result|
+ assert_match(/data file from module-globalmod/, file_result.stdout)
+ end
end
end
end
end
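The environmentpath in master_opts above lists two directories; as a puppet.conf excerpt it is roughly:

  [master]
  environmentpath = <testdir>/additional:<testdir>/base
  basemodulepath  = <testdir>/modules

Directory environments are looked up through the path entries in order, so 'onlybase' is only found under base/, while 'shadowed' exists under both entries and the copy under additional/ (the first entry) is the one expected to be served, hence the name.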
diff --git a/acceptance/tests/external_ca_support/apache_external_root_ca.rb b/acceptance/tests/external_ca_support/apache_external_root_ca.rb
index 4019b56d5..39bd194c1 100644
--- a/acceptance/tests/external_ca_support/apache_external_root_ca.rb
+++ b/acceptance/tests/external_ca_support/apache_external_root_ca.rb
@@ -1,190 +1,203 @@
begin
require 'puppet_x/acceptance/external_cert_fixtures'
rescue LoadError
$LOAD_PATH.unshift(File.expand_path('../../../lib', __FILE__))
require 'puppet_x/acceptance/external_cert_fixtures'
end
# This test only runs on EL-6 master roles.
confine :to, :platform => 'el-6'
confine :except, :type => 'pe'
+skip_test "Test not supported on jvm" if @options[:is_jvm_puppet]
+
+if master.use_service_scripts?
+ # Beaker defaults to leaving puppet running when using service scripts,
+ # Need to shut it down so we can start up our apache instance
+ on(master, puppet('resource', 'service', master['puppetservice'], 'ensure=stopped'))
+
+ teardown do
+ # And ensure that it is up again after everything is done
+ on(master, puppet('resource', 'service', master['puppetservice'], 'ensure=running'))
+ end
+end
+
# Verify that a trivial manifest can be run to completion.
# Supported Setup: Single, Root CA
# - Agent and Master SSL cert issued by the Root CA
# - Revocation disabled on the agent `certificate_revocation = false`
# - CA disabled on the master `ca = false`
#
# SUPPORT NOTES
#
# * If the x509 alt names extension is used when issuing SSL server certificates
# for the Puppet master, then the client SSL certificate issued by an external
# CA must possess the DNS common name in the alternate name field. This is
# due to a bug in Ruby. If the CN is not duplicated in the Alt Names, then
# the following error will appear on the agent with MRI 1.8.7:
#
# Warning: Server hostname 'master1.example.org' did not match server
# certificate; expected one of master1.example.org, DNS:puppet,
# DNS:master-ca.example.org
#
# See: https://bugs.ruby-lang.org/issues/6493
test_name "Puppet agent works with Apache, both configured with externally issued certificates from independent intermediate CA's"
step "Copy certificates and configuration files to the master..."
fixture_dir = File.expand_path('../fixtures', __FILE__)
testdir = master.tmpdir('apache_external_root_ca')
fixtures = PuppetX::Acceptance::ExternalCertFixtures.new(fixture_dir, testdir)
# We need this variable in scope.
disable_and_reenable_selinux = nil
# Register our cleanup steps early in a teardown so that they will happen even
# if execution aborts part way.
teardown do
step "Cleanup Apache (httpd) and /etc/hosts"
# Restore /etc/hosts
on master, "cp -p '#{testdir}/hosts' /etc/hosts"
# stop the service before moving files around
on master, "/etc/init.d/httpd stop"
on master, "mv --force /etc/httpd/conf/httpd.conf{,.external_ca_test}"
on master, "mv --force /etc/httpd/conf/httpd.conf{.orig,}"
if disable_and_reenable_selinux
step "Restore the original state of SELinux"
on master, "setenforce 1"
end
end
# Read all of the CA certificates.
# Copy all of the x.509 fixture data over to the master.
create_remote_file master, "#{testdir}/ca_root.crt", fixtures.root_ca_cert
create_remote_file master, "#{testdir}/ca_agent.crt", fixtures.agent_ca_cert
create_remote_file master, "#{testdir}/ca_master.crt", fixtures.master_ca_cert
create_remote_file master, "#{testdir}/ca_master.crl", fixtures.master_ca_crl
create_remote_file master, "#{testdir}/ca_master_bundle.crt", "#{fixtures.master_ca_cert}\n#{fixtures.root_ca_cert}\n"
create_remote_file master, "#{testdir}/ca_agent_bundle.crt", "#{fixtures.agent_ca_cert}\n#{fixtures.root_ca_cert}\n"
create_remote_file master, "#{testdir}/agent.crt", fixtures.agent_cert
create_remote_file master, "#{testdir}/agent.key", fixtures.agent_key
create_remote_file master, "#{testdir}/agent_email.crt", fixtures.agent_email_cert
create_remote_file master, "#{testdir}/agent_email.key", fixtures.agent_email_key
create_remote_file master, "#{testdir}/master.crt", fixtures.master_cert
create_remote_file master, "#{testdir}/master.key", fixtures.master_key
create_remote_file master, "#{testdir}/master_rogue.crt", fixtures.master_cert_rogue
create_remote_file master, "#{testdir}/master_rogue.key", fixtures.master_key_rogue
##
# Now create the master and agent puppet.conf
#
# We need to create the public directory for Passenger and the modules
# directory to avoid `Error: Could not evaluate: Could not retrieve information
# from environment production source(s) puppet://master1.example.org/plugins`
on master, "mkdir -p #{testdir}/etc/{master/{public,modules/empty/lib},agent}"
# Backup /etc/hosts
on master, "cp -p /etc/hosts '#{testdir}/hosts'"
# Make master1.example.org resolve if it doesn't already.
on master, "grep -q -x '#{fixtures.host_entry}' /etc/hosts || echo '#{fixtures.host_entry}' >> /etc/hosts"
create_remote_file master, "#{testdir}/etc/agent/puppet.conf", fixtures.agent_conf
create_remote_file master, "#{testdir}/etc/agent/puppet.conf.crl", fixtures.agent_conf_crl
create_remote_file master, "#{testdir}/etc/agent/puppet.conf.email", fixtures.agent_conf_email
create_remote_file master, "#{testdir}/etc/master/puppet.conf", fixtures.master_conf
# auth.conf to allow *.example.com access to the rest API
create_remote_file master, "#{testdir}/etc/master/auth.conf", fixtures.auth_conf
create_remote_file master, "#{testdir}/etc/master/config.ru", fixtures.config_ru
step "Set filesystem permissions and ownership for the master"
# These permissions are required for Passenger to start Puppet as puppet
on master, "chown -R puppet:puppet #{testdir}/etc/master"
# These permissions are just for testing, end users should protect their
# private keys.
on master, "chmod -R a+rX #{testdir}"
agent_cmd_prefix = "--confdir #{testdir}/etc/agent --vardir #{testdir}/etc/agent/var"
step "Configure EPEL"
epel_release_path = "http://mirror.us.leaseweb.net/epel/6/i386/epel-release-6-8.noarch.rpm"
on master, "rpm -q epel-release || (yum -y install #{epel_release_path} && yum -y upgrade epel-release)"
step "Configure Apache and Passenger"
packages = [ 'httpd', 'mod_ssl', 'mod_passenger', 'rubygem-passenger', 'policycoreutils-python' ]
packages.each do |pkg|
on master, "rpm -q #{pkg} || (yum -y install #{pkg})"
end
create_remote_file master, "#{testdir}/etc/httpd.conf", fixtures.httpd_conf
on master, 'test -f /etc/httpd/conf/httpd.conf.orig || cp -p /etc/httpd/conf/httpd.conf{,.orig}'
on master, "cat #{testdir}/etc/httpd.conf > /etc/httpd/conf/httpd.conf"
step "Make SELinux and Apache play nicely together..."
on master, "sestatus" do
if stdout.match(/Current mode:.*enforcing/)
disable_and_reenable_selinux = true
else
disable_and_reenable_selinux = false
end
end
if disable_and_reenable_selinux
on master, "setenforce 0"
end
step "Start the Apache httpd service..."
on master, 'service httpd restart'
# Move the agent SSL cert and key into place.
# The filename must match the configured certname, otherwise Puppet will try
# and generate a new certificate and key
step "Configure the agent with the externally issued certificates"
on master, "mkdir -p #{testdir}/etc/agent/ssl/{public_keys,certs,certificate_requests,private_keys,private}"
create_remote_file master, "#{testdir}/etc/agent/ssl/certs/#{fixtures.agent_name}.pem", fixtures.agent_cert
create_remote_file master, "#{testdir}/etc/agent/ssl/private_keys/#{fixtures.agent_name}.pem", fixtures.agent_key
# Now, try and run the agent on the master against itself.
step "Successfully run the puppet agent on the master"
on master, puppet_agent("#{agent_cmd_prefix} --test"), :acceptable_exit_codes => (0..255) do
assert_no_match /Creating a new SSL key/, stdout
assert_no_match /\Wfailed\W/i, stderr
assert_no_match /\Wfailed\W/i, stdout
assert_no_match /\Werror\W/i, stderr
assert_no_match /\Werror\W/i, stdout
# Assert the exit code so we get a "Failed test" instead of an "Errored test"
assert exit_code == 0
end
step "Agent refuses to connect to a rogue master"
on master, puppet_agent("#{agent_cmd_prefix} --ssl_client_ca_auth=#{testdir}/ca_master.crt --masterport=8141 --test"), :acceptable_exit_codes => (0..255) do
assert_no_match /Creating a new SSL key/, stdout
assert_match /certificate verify failed/i, stderr
assert_match /The server presented a SSL certificate chain which does not include a CA listed in the ssl_client_ca_auth file/i, stderr
assert exit_code == 1
end
step "Master accepts client cert with email address in subject"
on master, "cp #{testdir}/etc/agent/puppet.conf{,.no_email}"
on master, "cp #{testdir}/etc/agent/puppet.conf{.email,}"
on master, puppet_agent("#{agent_cmd_prefix} --test"), :acceptable_exit_codes => (0..255) do
assert_no_match /\Wfailed\W/i, stdout
assert_no_match /\Wfailed\W/i, stderr
assert_no_match /\Werror\W/i, stdout
assert_no_match /\Werror\W/i, stderr
# Assert the exit code so we get a "Failed test" instead of an "Errored test"
assert exit_code == 0
end
step "Agent refuses to connect to revoked master"
on master, "cp #{testdir}/etc/agent/puppet.conf{,.no_crl}"
on master, "cp #{testdir}/etc/agent/puppet.conf{.crl,}"
revoke_opts = "--hostcrl #{testdir}/ca_master.crl"
on master, puppet_agent("#{agent_cmd_prefix} #{revoke_opts} --test"), :acceptable_exit_codes => (0..255) do
assert_match /certificate revoked.*?example.org/, stderr
assert exit_code == 1
end
step "Finished testing External Certificates"
diff --git a/acceptance/tests/helpful_error_message_when_hostname_not_match_server_certificate.rb b/acceptance/tests/helpful_error_message_when_hostname_not_match_server_certificate.rb
index 7d18cc298..115e69e2c 100644
--- a/acceptance/tests/helpful_error_message_when_hostname_not_match_server_certificate.rb
+++ b/acceptance/tests/helpful_error_message_when_hostname_not_match_server_certificate.rb
@@ -1,17 +1,18 @@
test_name "generate a helpful error message when hostname doesn't match server certificate"
-skip_test( 'Changing certnames of the master will break PE' )if master.is_pe?
+skip_test "Certs need to be signed with DNS Alt names." if @options[:is_jvm_puppet]
+skip_test( 'Changing certnames of the master will break PE/Passenger installations' ) if master.is_using_passenger?
# Start the master with a certname not matching its hostname
master_opts = {
'master' => {
'certname' => 'foobar_not_my_hostname',
'dns_alt_names' => 'one_cert,two_cert,red_cert,blue_cert'
}
}
with_puppet_running_on master, master_opts do
run_agent_on(agents, "--test --server #{master}", :acceptable_exit_codes => (1..255)) do
msg = "Server hostname '#{master}' did not match server certificate; expected one of foobar_not_my_hostname, DNS:blue_cert, DNS:foobar_not_my_hostname, DNS:one_cert, DNS:red_cert, DNS:two_cert"
assert_match(msg, stderr)
end
end
diff --git a/acceptance/tests/language/node_overrides_topscope_when_using_enc.rb b/acceptance/tests/language/node_overrides_topscope_when_using_enc.rb
deleted file mode 100644
index 22c9e81cb..000000000
--- a/acceptance/tests/language/node_overrides_topscope_when_using_enc.rb
+++ /dev/null
@@ -1,66 +0,0 @@
-test_name "ENC still allows a node to override a topscope var"
-
-testdir = master.tmpdir('scoping_deprecation')
-
-on master, "mkdir -p #{testdir}/modules/a/manifests"
-
-create_remote_file(master, "#{testdir}/enc", <<-PP)
-#!/usr/bin/env sh
-
-cat <<END
----
-classes:
- - a
-parameters:
- enc_var: "Set from ENC."
-END
-exit 0
-PP
-
-agent_names = agents.map { |agent| "'#{agent.to_s}'" }.join(', ')
-create_remote_file(master, "#{testdir}/site.pp", <<-PP)
-$top_scope = "set from site.pp"
-node default {
- $enc_var = "ENC overridden in default node."
-}
-
-node #{agent_names} inherits default {
- $top_scope = "top_scope overridden in agent node."
-}
-PP
-create_remote_file(master, "#{testdir}/modules/a/manifests/init.pp", <<-PP)
-class a {
- notify { "from enc": message => $enc_var }
- notify { "from site.pp": message => $top_scope }
-}
-PP
-
-on master, "chown -R #{master['user']}:#{master['group']} #{testdir}"
-on master, "chmod -R g+rwX #{testdir}"
-on master, "chmod -R a+x #{testdir}/enc"
-
-assert_log_on_master_contains = lambda do |string|
- on master, "grep '#{string}' #{testdir}/log"
-end
-
-assert_log_on_master_does_not_contain = lambda do |string|
- on master, "grep -v '#{string}' #{testdir}/log"
-end
-
-master_opts = {
- 'master' => {
- 'node_terminus' => 'exec',
- 'external_nodes' => "#{testdir}/enc",
- 'manifest' => "#{testdir}/site.pp",
- 'modulepath' => "#{testdir}/modules"
- }
-}
-
-with_puppet_running_on master, master_opts, testdir do
- agents.each do |agent|
- run_agent_on(agent, "--no-daemonize --onetime --verbose --server #{master}")
-
- assert_match("top_scope overridden in agent node.", stdout)
- assert_match("ENC overridden in default node.", stdout)
- end
-end
diff --git a/acceptance/tests/modules/build/build_no_modulefile.rb b/acceptance/tests/modules/build/build_no_modulefile.rb
deleted file mode 100644
index 1dad8299d..000000000
--- a/acceptance/tests/modules/build/build_no_modulefile.rb
+++ /dev/null
@@ -1,27 +0,0 @@
-begin test_name "puppet module build (bad modulefiles)"
-
-step 'Setup'
-apply_manifest_on master, <<-PP
-file {
- [
- '#{master['distmoduledir']}/nginx',
- ]: ensure => directory;
- '#{master['distmoduledir']}/nginx/Modulefile':
- ensure => absent;
-}
-PP
-
-step "Try to build a module with no modulefile"
-on master, puppet("module build #{master['distmoduledir']}/nginx"), :acceptable_exit_codes => [1] do
- pattern = Regexp.new([
- ".*Error: Unable to find module root at #{master['distmoduledir']}/nginx.*",
- ".*Error: Try 'puppet help module build' for usage.*",
- ].join("\n"), Regexp::MULTILINE)
- assert_match(pattern, result.stderr)
-end
-on master, "[ ! -d #{master['distmoduledir']}/nginx/pkg/puppetlabs-nginx-0.0.1 ]"
-on master, "[ ! -f #{master['distmoduledir']}/nginx/pkg/puppetlabs-nginx-0.0.1.tar.gz ]"
-
-ensure step "Teardown"
- apply_manifest_on master, "file { '#{master['distmoduledir']}/nginx': ensure => absent, force => true }"
-end
diff --git a/acceptance/tests/modules/upgrade/to_installed_version.rb b/acceptance/tests/modules/upgrade/to_installed_version.rb
index a4a92b0fe..d64e2b1e6 100644
--- a/acceptance/tests/modules/upgrade/to_installed_version.rb
+++ b/acceptance/tests/modules/upgrade/to_installed_version.rb
@@ -1,62 +1,62 @@
test_name "puppet module upgrade (to installed version)"
step 'Setup'
on master, "mkdir -p #{master['distmoduledir']}"
stub_forge_on(master)
teardown do
on master, "rm -rf #{master['distmoduledir']}/java"
on master, "rm -rf #{master['distmoduledir']}/stdlub"
end
on master, puppet("module install pmtacceptance-java --version 1.6.0")
on master, puppet("module list --modulepath #{master['distmoduledir']}") do
assert_equal <<-OUTPUT, stdout
#{master['distmoduledir']}
├── pmtacceptance-java (\e[0;36mv1.6.0\e[0m)
└── pmtacceptance-stdlub (\e[0;36mv1.0.0\e[0m)
OUTPUT
end
step "Try to upgrade a module to the current version"
on master, puppet("module upgrade pmtacceptance-java --version 1.6.x"), :acceptable_exit_codes => [0] do
assert_match(/The installed version is already the latest version matching/, stdout,
"Error that specified version was already satisfied was not displayed")
end
step "Upgrade a module to the current version with --force"
on master, puppet("module upgrade pmtacceptance-java --version 1.6.x --force") do
assert_match(/#{master['distmoduledir']}/, stdout,
'Error that distmoduledir was not displayed')
- assert_match(/pmtacceptance-java \(.*v1\.6\.0.*\)/, stdout,
+ assert_match(/\'pmtacceptance-java\' \(.*v1\.6\.0.*\)/, stdout,
'Error that package name and version were not displayed')
end
step "Upgrade to the latest version"
on master, puppet("module upgrade pmtacceptance-java") do
assert_equal <<-OUTPUT, stdout
\e[mNotice: Preparing to upgrade 'pmtacceptance-java' ...\e[0m
\e[mNotice: Found 'pmtacceptance-java' (\e[0;36mv1.6.0\e[m) in #{master['distmoduledir']} ...\e[0m
\e[mNotice: Downloading from https://forgeapi.puppetlabs.com ...\e[0m
\e[mNotice: Upgrading -- do not interrupt ...\e[0m
#{master['distmoduledir']}
└── pmtacceptance-java (\e[0;36mv1.6.0 -> v1.7.1\e[0m)
OUTPUT
end
step "Try to upgrade a module to the latest version with the latest version installed"
on master, puppet("module upgrade pmtacceptance-java"), :acceptable_exit_codes => [0] do
assert_match(/The installed version is already the latest version matching.*latest/, stdout,
"Error that latest version was already installed was not displayed")
end
step "Upgrade a module to the latest version with --force"
on master, puppet("module upgrade pmtacceptance-java --force") do
assert_match(/#{master['distmoduledir']}/, stdout,
'Error that distmoduledir was not displayed')
assert_match(/pmtacceptance-java \(.*v1\.7\.1.*\)/, stdout,
'Error that package name and version were not displayed')
end
diff --git a/acceptance/tests/reports/submission.rb b/acceptance/tests/reports/submission.rb
index 9b96080df..8036f2c32 100644
--- a/acceptance/tests/reports/submission.rb
+++ b/acceptance/tests/reports/submission.rb
@@ -1,54 +1,54 @@
test_name "Report submission"
if master.is_pe?
require "time"
def query_last_report_time_on(agent)
time_query_script = <<-EOS
require "net/http"
require "json"
puppetdb_url = URI("http://localhost:8080/v3/reports")
puppetdb_url.query = URI.escape(%Q{query=["=","certname","#{agent}"]})
result = Net::HTTP.get(puppetdb_url)
json = JSON.load(result)
puts json.first["receive-time"]
EOS
puppetdb = hosts.detect { |h| h['roles'].include?('database') }
on(puppetdb, "#{master[:puppetbindir]}/ruby -e '#{time_query_script}'").output.chomp
end
last_times = {}
agents.each do |agent|
last_times[agent] = query_last_report_time_on(agent)
end
with_puppet_running_on(master, {}) do
agents.each do |agent|
on(agent, puppet('agent', "-t --server #{master}"))
current_time = Time.parse(query_last_report_time_on(agent))
last_time = Time.parse(last_times[agent])
assert(current_time > last_time, "Most recent report time #{current_time} is not newer than last report time #{last_time}")
end
end
else
- testdir = master.tmpdir('report_submission')
+ testdir = create_tmpdir_for_user master, 'report_submission'
teardown do
on master, "rm -rf #{testdir}"
end
with_puppet_running_on(master, :main => { :reportdir => testdir, :reports => 'store' }) do
agents.each do |agent|
on(agent, puppet('agent', "-t --server #{master}"))
on master, "grep -q #{agent} #{testdir}/*/*"
end
end
end
diff --git a/acceptance/tests/resource/cron/should_allow_changing_parameters.rb b/acceptance/tests/resource/cron/should_allow_changing_parameters.rb
index 376ef06a0..224d1407f 100644
--- a/acceptance/tests/resource/cron/should_allow_changing_parameters.rb
+++ b/acceptance/tests/resource/cron/should_allow_changing_parameters.rb
@@ -1,66 +1,66 @@
test_name "Cron: should allow changing parameters after creation"
confine :except, :platform => 'windows'
require 'puppet/acceptance/common_utils'
extend Puppet::Acceptance::CronUtils
teardown do
step "Cron: cleanup"
agents.each do |agent|
clean agent
end
end
agents.each do |agent|
step "ensure the user exist via puppet"
setup agent
step "Cron: basic - verify that it can be created"
apply_manifest_on(agent, 'cron { "myjob": command => "/bin/false", user => "tstuser", hour => "*", minute => [1], ensure => present,}') do
assert_match( /ensure: created/, result.stdout, "err: #{agent}")
end
run_cron_on(agent,:list,'tstuser') do
assert_match(/.bin.false/, result.stdout, "err: #{agent}")
end
step "Cron: allow changing command"
apply_manifest_on(agent, 'cron { "myjob": command => "/bin/true", user => "tstuser", hour => "*", minute => [1], ensure => present,}') do
- assert_match(/command changed '.bin.false' to '.bin.true'/, result.stdout, "err: #{agent}")
+ assert_match(/command changed '.bin.false'.* to '.bin.true'/, result.stdout, "err: #{agent}")
end
run_cron_on(agent,:list,'tstuser') do
assert_match(/1 . . . . .bin.true/, result.stdout, "err: #{agent}")
end
step "Cron: allow changing time"
apply_manifest_on(agent, 'cron { "myjob": command => "/bin/true", user => "tstuser", hour => "1", minute => [1], ensure => present,}') do
assert_match(/hour: defined 'hour' as '1'/, result.stdout, "err: #{agent}")
end
run_cron_on(agent,:list,'tstuser') do
assert_match(/1 1 . . . .bin.true/, result.stdout, "err: #{agent}")
end
step "Cron: allow changing time(array)"
apply_manifest_on(agent, 'cron { "myjob": command => "/bin/true", user => "tstuser", hour => ["1","2"], minute => [1], ensure => present,}') do
- assert_match(/hour: hour changed '1' to '1,2'/, result.stdout, "err: #{agent}")
+ assert_match(/hour: hour changed '1'.* to '1,2'/, result.stdout, "err: #{agent}")
end
run_cron_on(agent,:list,'tstuser') do
assert_match(/1 1,2 . . . .bin.true/, result.stdout, "err: #{agent}")
end
step "Cron: allow changing time(array modification)"
apply_manifest_on(agent, 'cron { "myjob": command => "/bin/true", user => "tstuser", hour => ["3","2"], minute => [1], ensure => present,}') do
- assert_match(/hour: hour changed '1,2' to '3,2'/, result.stdout, "err: #{agent}")
+ assert_match(/hour: hour changed '1,2'.* to '3,2'/, result.stdout, "err: #{agent}")
end
run_cron_on(agent,:list,'tstuser') do
assert_match(/1 3,2 . . . .bin.true/, result.stdout, "err: #{agent}")
end
step "Cron: allow changing time(array modification to *)"
apply_manifest_on(agent, 'cron { "myjob": command => "/bin/true", user => "tstuser", hour => "*", minute => "*", ensure => present,}') do
assert_match(/minute: undefined 'minute' from '1'/,result.stdout, "err: #{agent}")
assert_match(/hour: undefined 'hour' from '3,2'/,result.stdout, "err: #{agent}")
end
run_cron_on(agent,:list,'tstuser') do
assert_match(/\* \* . . . .bin.true/, result.stdout, "err: #{agent}")
end
end
diff --git a/acceptance/tests/resource/file/source_attribute.rb b/acceptance/tests/resource/file/source_attribute.rb
index b89c1e74a..8f249bfa3 100644
--- a/acceptance/tests/resource/file/source_attribute.rb
+++ b/acceptance/tests/resource/file/source_attribute.rb
@@ -1,91 +1,98 @@
test_name "The source attribute"
step "when using a puppet:/// URI with a master/agent setup"
testdir = master.tmpdir('file_source_attr')
source_path = "#{testdir}/modules/source_test_module/files/source_file"
on master, "mkdir -p #{File.dirname(source_path)}"
create_remote_file master, source_path, <<EOF
the content is present
EOF
target_file_on_windows = 'C:/windows/temp/source_attr_test'
target_file_on_nix = '/tmp/source_attr_test'
+teardown do
+ hosts.each do |host|
+ file_to_rm = host['platform'] =~ /windows/ ? target_file_on_windows : target_file_on_nix
+ on(host, "rm #{file_to_rm}", :acceptable_exit_codes => [0,1])
+ end
+end
+
mod_manifest = "#{testdir}/modules/source_test_module/manifests/init.pp"
on master, "mkdir -p #{File.dirname(mod_manifest)}"
create_remote_file master, mod_manifest, <<EOF
class source_test_module {
$target_file = $::kernel ? {
'windows' => '#{target_file_on_windows}',
default => '#{target_file_on_nix}'
}
file { $target_file:
source => 'puppet:///modules/source_test_module/source_file',
ensure => present
}
}
EOF
manifest = "#{testdir}/site.pp"
create_remote_file master, manifest, <<EOF
node default {
include source_test_module
}
EOF
on master, "chmod -R 777 #{testdir}"
on master, "chmod -R 644 #{mod_manifest} #{source_path} #{manifest}"
on master, "chown -R #{master['user']}:#{master['group']} #{testdir}"
master_opts = {
'master' => {
'manifest' => manifest,
'node_terminus' => 'plain',
'modulepath' => "#{testdir}/modules"
}
}
with_puppet_running_on master, master_opts, testdir do
agents.each do |agent|
on(agent, puppet('agent', "--test --server #{master}"), :acceptable_exit_codes => [2]) do
file_to_check = agent['platform'] =~ /windows/ ? target_file_on_windows : target_file_on_nix
on agent, "cat #{file_to_check}" do
assert_match(/the content is present/, stdout, "Result file not created")
end
end
end
end
# TODO: Add tests for puppet:// URIs with multi-master/agent setups.
# step "when using a puppet://$server/ URI with a master/agent setup"
agents.each do |agent|
step "Setup testing local file sources"
a_testdir = agent.tmpdir('local_source_file_test')
source = "#{a_testdir}/source_mod/files/source"
target = "#{a_testdir}/target"
on agent, "mkdir -p #{File.dirname(source)}"
create_remote_file agent, source, 'Yay, this is the local file.'
step "Using a local file path"
apply_manifest_on agent, "file { '#{target}': source => '#{source}', ensure => present }"
on agent, "cat #{target}" do
assert_match(/Yay, this is the local file./, stdout, "FIRST: File contents not matched on #{agent}")
end
step "Using a puppet:/// URI with puppet apply"
on agent, "rm -rf #{target}"
manifest = %{"file { '#{target}': source => 'puppet:///modules/source_mod/source', ensure => 'present' }"}
on agent, puppet( %{apply --modulepath=#{a_testdir} -e #{manifest}})
on agent, "cat #{target}" do
assert_match(/Yay, this is the local file./, stdout, "SECOND: File contents not matched on #{agent}")
end
end
diff --git a/acceptance/tests/resource/file/symbolic_modes.rb b/acceptance/tests/resource/file/symbolic_modes.rb
index d9e294c7e..f15a5dc10 100644
--- a/acceptance/tests/resource/file/symbolic_modes.rb
+++ b/acceptance/tests/resource/file/symbolic_modes.rb
@@ -1,259 +1,260 @@
test_name "file resource: symbolic modes"
require 'test/unit/assertions'
module FileModeAssertions
include Test::Unit::Assertions
def assert_create(agent, manifest, path, expected_mode)
testcase.apply_manifest_on(agent, manifest) do
assert_match(/File\[#{Regexp.escape(path)}\]\/ensure: created/, testcase.stdout, "Failed to create #{path}")
end
assert_mode(agent, path, expected_mode)
end
def assert_mode(agent, path, expected_mode)
current_mode = testcase.on(agent, "stat --format '%a' #{path}").stdout.chomp.to_i(8)
assert_equal(expected_mode, current_mode, "current mode #{current_mode.to_s(8)} doesn't match expected mode #{expected_mode.to_s(8)}")
end
def assert_mode_change(agent, manifest, path, symbolic_mode, start_mode, expected_mode)
testcase.apply_manifest_on(agent, manifest) do
- assert_match(/mode changed '#{'%04o' % start_mode}' to '#{'%04o' % expected_mode}'/, testcase.stdout,
+ assert_match(/mode changed '#{'%04o' % start_mode}'.* to '#{'%04o' % expected_mode}'/, testcase.stdout,
"couldn't set mode to #{symbolic_mode}")
end
assert_mode(agent, path, expected_mode)
end
def assert_no_mode_change(agent, manifest)
testcase.apply_manifest_on(agent, manifest) do
assert_no_match(/mode changed/, testcase.stdout, "reapplied the symbolic mode change")
end
end
end
class ActionModeTest
include FileModeAssertions
attr_reader :testcase
def initialize(testcase, agent, basedir, symbolic_mode)
@testcase = testcase
@agent = agent
@basedir = basedir
@symbolic_mode = symbolic_mode
@file = "#{basedir}/file"
@dir = "#{basedir}/dir"
testcase.on(agent, "rm -rf #{@file} #{@dir}")
end
def get_manifest(path, type, symbolic_mode)
"file { #{path.inspect}: ensure => #{type}, mode => '#{symbolic_mode}' }"
end
end
class CreatesModeTest < ActionModeTest
def initialize(testcase, agent, basedir, symbolic_mode)
super(testcase, agent, basedir, symbolic_mode)
end
def assert_file_mode(expected_mode)
manifest = get_manifest(@file, 'file', @symbolic_mode)
assert_create(@agent, manifest, @file, expected_mode)
assert_no_mode_change(@agent, manifest)
end
def assert_dir_mode(expected_mode)
manifest = get_manifest(@dir, 'directory', @symbolic_mode)
assert_create(@agent, manifest, @dir, expected_mode)
assert_no_mode_change(@agent, manifest)
end
end
class ModifiesModeTest < ActionModeTest
def initialize(testcase, agent, basedir, symbolic_mode, start_mode)
super(testcase, agent, basedir, symbolic_mode)
@start_mode = start_mode
user = agent['user']
group = agent['group'] || user
testcase.on(agent, "touch #{@file} && chown #{user}:#{group} #{@file} && chmod #{start_mode.to_s(8)} #{@file}")
testcase.on(agent, "mkdir -p #{@dir} && chown #{user}:#{group} #{@dir} && chmod #{start_mode.to_s(8)} #{@dir}")
end
def assert_file_mode(expected_mode)
manifest = get_manifest(@file, 'file', @symbolic_mode)
if @start_mode != expected_mode
assert_mode_change(@agent, manifest, @file, @symbolic_mode, @start_mode, expected_mode)
end
assert_no_mode_change(@agent, manifest)
end
def assert_dir_mode(expected_mode)
manifest = get_manifest(@dir, 'directory', @symbolic_mode)
if @start_mode != expected_mode
assert_mode_change(@agent, manifest, @dir, @symbolic_mode, @start_mode, expected_mode)
end
assert_no_mode_change(@agent, manifest)
end
end
class ModeTest
def initialize(testcase, agent, basedir)
@testcase = testcase
@agent = agent
@basedir = basedir
end
def assert_creates(symbolic_mode, file_mode, dir_mode)
creates = CreatesModeTest.new(@testcase, @agent, @basedir, symbolic_mode)
creates.assert_file_mode(file_mode)
creates.assert_dir_mode(dir_mode)
end
def assert_modifies(symbolic_mode, start_mode, file_mode, dir_mode)
modifies = ModifiesModeTest.new(@testcase, @agent, @basedir, symbolic_mode, start_mode)
modifies.assert_file_mode(file_mode)
modifies.assert_dir_mode(dir_mode)
end
end
# For your reference:
# 4000 the set-user-ID-on-execution bit
# 2000 the set-group-ID-on-execution bit
# 1000 the sticky bit
# 0400 Allow read by owner.
# 0200 Allow write by owner.
# 0100 For files, allow execution by owner. For directories, allow the
# owner to search in the directory.
# 0040 Allow read by group members.
# 0020 Allow write by group members.
# 0010 For files, allow execution by group members. For directories, allow
# group members to search in the directory.
# 0004 Allow read by others.
# 0002 Allow write by others.
# 0001 For files, allow execution by others. For directories allow others
# to search in the directory.
#
# On Solaris 11 (from man chmod):
#
# 20#0 Set group ID on execution if # is 7, 5, 3, or 1.
# Enable mandatory locking if # is 6, 4, 2, or 0.
# ...
# For directories, the set-gid bit can
# only be set or cleared by using symbolic mode.
# From http://www.gnu.org/software/coreutils/manual/html_node/Symbolic-Modes.html#Symbolic-Modes
# Users
# u the user who owns the file;
# g other users who are in the file's group;
# o all other users;
# a all users; the same as 'ugo'.
#
# Operations
# + to add the permissions to whatever permissions the users already have for the file;
# - to remove the permissions from whatever permissions the users already have for the file;
# = to make the permissions the only permissions that the users have for the file.
#
# Permissions
# r the permission the users have to read the file;
# w the permission the users have to write to the file;
# x the permission the users have to execute the file, or search it if it is a directory.
# s the meaning depends on which user (uga) the permission is associated with:
# to set set-user-id-on-execution, use 'u' in the users part of the symbolic mode and 's' in the permissions part.
# to set set-group-id-on-execution, use 'g' in the users part of the symbolic mode and 's' in the permissions part.
# to set both user and group-id-on-execution, omit the users part of the symbolic mode (or use 'a') and use 's' in the permissions part.
# t the restricted deletion flag (sticky bit), omit the users part of the symbolic mode (or use 'a') and use 't' in the permissions part.
# X execute/search permission is affected only if the file is a directory or already had execute permission.
#
# Note we do not currently support the Solaris (l) permission:
# l mandatory file and record locking refers to a file's ability to have its reading or writing
# permissions locked while a program is accessing that file.
#
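# A rough sketch (illustration only, not used by the tests below): the octal
# expectations passed to assert_creates can be derived by OR-ing the bits of
# each user class named in the symbolic mode. BITS and SHIFT are ad-hoc names
# defined here for the sketch, not part of the test harness.
#
#   BITS  = { 'r' => 4, 'w' => 2, 'x' => 1 }
#   SHIFT = { 'u' => 6, 'g' => 3, 'o' => 0 }
#   'u=rwx,g=rx,o=r'.split(',').inject(0) do |mode, clause|
#     who, perms = clause.split('=')
#     mode | (perms.chars.inject(0) { |m, p| m | BITS[p] } << SHIFT[who])
#   end
#   # => 0754, matching the 'u=rwx,g=rx,o=r' expectation below
#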
agents.each do |agent|
if agent['platform'].include?('windows')
Log.warn("Pending: this does not currently work on Windows")
next
end
is_solaris = agent['platform'].include?('solaris')
basedir = agent.tmpdir('symbolic-modes')
on(agent, "mkdir -p #{basedir}")
test = ModeTest.new(self, agent, basedir)
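# Expected modes below are given as (file_mode, dir_mode) pairs. They are
# consistent with the symbolic mode being applied on top of a default of 0644
# for new files and 0755 for new directories (an observation from the
# expectations, not a documented guarantee).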
test.assert_creates('u=r', 00444, 00455)
test.assert_creates('u=w', 00244, 00255)
test.assert_creates('u=x', 00144, 00155)
test.assert_creates('u=rw', 00644, 00655)
test.assert_creates('u=rwx', 00744, 00755)
test.assert_creates('u=rwxt', 01744, 01755)
test.assert_creates('u=rwxs', 04744, 04755)
test.assert_creates('u=rwxts', 05744, 05755)
test.assert_creates('ug=r', 00444, 00445)
test.assert_creates('ug=rw', 00664, 00665)
test.assert_creates('ug=rwx', 00774, 00775)
test.assert_creates('ug=rwxt', 01774, 01775)
test.assert_creates('ug=rwxs', 06774, 06775)
test.assert_creates('ug=rwxts', 07774, 07775)
test.assert_creates('ugo=r', 00444, 00444)
test.assert_creates('ugo=rw', 00666, 00666)
test.assert_creates('ugo=rwx', 00777, 00777)
test.assert_creates('ugo=rwxt', 01777, 01777)
# # test.assert_creates('ugo=rwxs', 06777, 06777) ## BUG, puppet creates 07777
test.assert_creates('ugo=rwxts', 07777, 07777)
test.assert_creates('u=rwx,go=rx', 00755, 00755)
test.assert_creates('u=rwx,g=rx,o=r', 00754, 00754)
test.assert_creates('u=rwx,g=rx,o=', 00750, 00750)
test.assert_creates('a=rwx', 00777, 00777)
test.assert_creates('u+r', 00644, 00755)
test.assert_creates('u+w', 00644, 00755)
test.assert_creates('u+x', 00744, 00755)
test.assert_modifies('u+r', 00200, 00600, 00600)
test.assert_modifies('u+r', 00600, 00600, 00600)
test.assert_modifies('u+w', 00500, 00700, 00700)
test.assert_modifies('u+w', 00400, 00600, 00600)
test.assert_modifies('u+x', 00700, 00700, 00700)
test.assert_modifies('u+x', 00600, 00700, 00700)
test.assert_modifies('u+X', 00100, 00100, 00100)
- test.assert_modifies('u+X', 00200, 00300, 00300)
- test.assert_modifies('u+X', 00400, 00500, 00500)
+ test.assert_modifies('u+X', 00200, 00200, 00300)
+ test.assert_modifies('u+X', 00410, 00510, 00510)
+ test.assert_modifies('a+X', 00600, 00600, 00711)
test.assert_modifies('a+X', 00700, 00711, 00711)
test.assert_modifies('u+s', 00744, 04744, 04744)
test.assert_modifies('g+s', 00744, 02744, 02744)
test.assert_modifies('u+t', 00744, 01744, 01744)
test.assert_modifies('u-r', 00200, 00200, 00200)
test.assert_modifies('u-r', 00600, 00200, 00200)
test.assert_modifies('u-w', 00500, 00500, 00500)
test.assert_modifies('u-w', 00600, 00400, 00400)
test.assert_modifies('u-x', 00700, 00600, 00600)
test.assert_modifies('u-x', 00600, 00600, 00600)
test.assert_modifies('u-s', 04744, 00744, 00744)
# using chmod 2744 on a directory to set the startmode fails on Solaris
test.assert_modifies('g-s', 02744, 00744, 00744) unless is_solaris
test.assert_modifies('u-t', 01744, 00744, 00744)
# these raise
# test.assert_raises('')
# test.assert_raises(' ')
# test.assert_raises('u=X')
# test.assert_raises('u-X')
# test.assert_raises('+l')
# test.assert_raises('-l')
step "clean up old test things"
on agent, "rm -rf #{basedir}"
end
diff --git a/acceptance/tests/resource/mailalias/create.rb b/acceptance/tests/resource/mailalias/create.rb
new file mode 100644
index 000000000..718159149
--- /dev/null
+++ b/acceptance/tests/resource/mailalias/create.rb
@@ -0,0 +1,26 @@
+test_name "should create an email alias"
+
+confine :except, :platform => 'windows'
+
+name = "pl#{rand(999999).to_i}"
+agents.each do |agent|
+ teardown do
+ #(teardown) restore the alias file
+ on(agent, "mv /tmp/aliases /etc/aliases", :acceptable_exit_codes => [0,1])
+ end
+
+ #------- SETUP -------#
+ step "(setup) backup alias file"
+ on(agent, "cp /etc/aliases /tmp/aliases", :acceptable_exit_codes => [0,1])
+
+ #------- TESTS -------#
+ step "create a mailalias with puppet"
+ args = ['ensure=present',
+ 'recipient="foo,bar,baz"']
+ on(agent, puppet_resource('mailalias', name, args))
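+ # Roughly the same as running on the agent (command shape assumed from the
+ # puppet_resource helper): puppet resource mailalias <name> ensure=present recipient="foo,bar,baz"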
+
+ step "verify the alias exists"
+ on(agent, "cat /etc/aliases") do |res|
+ assert_match(/#{name}:.*foo,bar,baz/, res.stdout, "mailalias not in aliases file")
+ end
+end
diff --git a/acceptance/tests/resource/mailalias/destroy.rb b/acceptance/tests/resource/mailalias/destroy.rb
new file mode 100644
index 000000000..65ac57507
--- /dev/null
+++ b/acceptance/tests/resource/mailalias/destroy.rb
@@ -0,0 +1,35 @@
+test_name "should delete an email alias"
+
+confine :except, :platform => 'windows'
+
+name = "pl#{rand(999999).to_i}"
+agents.each do |agent|
+ teardown do
+ #(teardown) restore the alias file
+ on(agent, "mv /tmp/aliases /etc/aliases", :acceptable_exit_codes => [0,1])
+ end
+
+ #------- SETUP -------#
+ step "(setup) backup alias file"
+ on(agent, "cp /etc/aliases /tmp/aliases", :acceptable_exit_codes => [0,1])
+
+ step "(setup) create a mailalias"
+ on(agent, "echo '#{name}: foo,bar,baz' >> /etc/aliases")
+
+ step "(setup) verify the alias exists"
+ on(agent, "cat /etc/aliases") do |res|
+ assert_match(/#{name}:.*foo,bar,baz/, res.stdout, "mailalias not in aliases file")
+ end
+
+ #------- TESTS -------#
+ step "delete the aliases database with puppet"
+ args = ['ensure=absent',
+ 'recipient="foo,bar,baz"']
+ on(agent, puppet_resource('mailalias', name, args))
+
+
+ step "verify the alias is absent"
+ on(agent, "cat /etc/aliases") do |res|
+ assert_no_match(/#{name}:.*foo,bar,baz/, res.stdout, "mailalias was not removed from aliases file")
+ end
+end
diff --git a/acceptance/tests/resource/mailalias/modify.rb b/acceptance/tests/resource/mailalias/modify.rb
new file mode 100644
index 000000000..41c52f9db
--- /dev/null
+++ b/acceptance/tests/resource/mailalias/modify.rb
@@ -0,0 +1,35 @@
+test_name "should modify an email alias"
+
+confine :except, :platform => 'windows'
+
+name = "pl#{rand(999999).to_i}"
+agents.each do |agent|
+ teardown do
+ #(teardown) restore the alias file
+ on(agent, "mv /tmp/aliases /etc/aliases", :acceptable_exit_codes => [0,1])
+ end
+
+ #------- SETUP -------#
+ step "(setup) backup alias file"
+ on(agent, "cp /etc/aliases /tmp/aliases", :acceptable_exit_codes => [0,1])
+
+ step "(setup) create a mailalias"
+ on(agent, "echo '#{name}: foo,bar,baz' >> /etc/aliases")
+
+ step "(setup) verify the alias exists"
+ on(agent, "cat /etc/aliases") do |res|
+ assert_match(/#{name}:.*foo,bar,baz/, res.stdout, "mailalias not in aliases file")
+ end
+
+ #------- TESTS -------#
+ step "modify the aliases database with puppet"
+ args = ['ensure=present',
+ 'recipient="foo,bar,baz,blarvitz"']
+ on(agent, puppet_resource('mailalias', name, args))
+
+
+ step "verify the updated alias is present"
+ on(agent, "cat /etc/aliases") do |res|
+ assert_match(/#{name}:.*foo,bar,baz,blarvitz/, res.stdout, "updated mailalias not in aliases file")
+ end
+end
diff --git a/acceptance/tests/resource/mailalias/query.rb b/acceptance/tests/resource/mailalias/query.rb
new file mode 100644
index 000000000..afadc370a
--- /dev/null
+++ b/acceptance/tests/resource/mailalias/query.rb
@@ -0,0 +1,29 @@
+test_name "should be able to find an exisitng email alias"
+
+confine :except, :platform => 'windows'
+
+name = "pl#{rand(999999).to_i}"
+agents.each do |agent|
+ teardown do
+ #(teardown) restore the alias file
+ on(agent, "mv /tmp/aliases /etc/aliases", :acceptable_exit_codes => [0,1])
+ end
+
+ #------- SETUP -------#
+ step "(setup) backup alias file"
+ on(agent, "cp /etc/aliases /tmp/aliases", :acceptable_exit_codes => [0,1])
+
+ step "(setup) create a mailalias"
+ on(agent, "echo '#{name}: foo,bar,baz' >> /etc/aliases")
+
+ step "(setup) verify the alias exists"
+ on(agent, "cat /etc/aliases") do |res|
+ assert_match(/#{name}:.*foo,bar,baz/, res.stdout, "mailalias not in aliases file")
+ end
+
+ #------- TESTS -------#
+ step "query for the mail alias with puppet"
+ on(agent, puppet_resource('mailalias', name)) do
+ fail_test "didn't find the scheduled_task #{name}" unless stdout.include? 'present'
+ end
+end
diff --git a/acceptance/tests/resource/service/smf_basic_tests.rb b/acceptance/tests/resource/service/smf_basic_tests.rb
index 5f222adcb..0eda50c04 100644
--- a/acceptance/tests/resource/service/smf_basic_tests.rb
+++ b/acceptance/tests/resource/service/smf_basic_tests.rb
@@ -1,40 +1,40 @@
test_name "SMF: basic tests"
confine :to, :platform => 'solaris'
require 'puppet/acceptance/solaris_util'
extend Puppet::Acceptance::SMFUtils
teardown do
step "SMF: cleanup"
agents.each do |agent|
clean agent, :service => 'tstapp'
end
end
agents.each do |agent|
clean agent, :service => 'tstapp'
manifest, method = setup agent, :service => 'tstapp'
step "SMF: clean slate"
apply_manifest_on(agent, 'service {tstapp : ensure=>stopped}') do
assert_match( /.*/, result.stdout, "err: #{agent}")
end
- step "SMF: ensre it is created with a manifest"
+ step "SMF: ensure it is created with a manifest"
apply_manifest_on(agent, 'service {tstapp : ensure=>running, manifest=>"%s"}' % manifest) do
- assert_match( / ensure changed 'stopped' to 'running'/, result.stdout, "err: #{agent}")
+ assert_match( / ensure changed 'stopped'.* to 'running'/, result.stdout, "err: #{agent}")
end
step "SMF: verify it with puppet"
on agent, "puppet resource service tstapp" do
assert_match( /ensure => 'running'/, result.stdout, "err: #{agent}")
end
step "SMF: verify with svcs that the service is online"
on agent, "svcs -l application/tstapp" do
assert_match( /state\s+online/, result.stdout, "err: #{agent}")
end
step "SMF: stop the service"
apply_manifest_on(agent, 'service {tstapp : ensure=>stopped}') do
- assert_match( /changed 'running' to 'stopped'/, result.stdout, "err: #{agent}")
+ assert_match( /changed 'running'.* to 'stopped'/, result.stdout, "err: #{agent}")
end
end
diff --git a/acceptance/tests/resource/service/smf_should_be_idempotent.rb b/acceptance/tests/resource/service/smf_should_be_idempotent.rb
index d2f9b2de8..c86ec248f 100644
--- a/acceptance/tests/resource/service/smf_should_be_idempotent.rb
+++ b/acceptance/tests/resource/service/smf_should_be_idempotent.rb
@@ -1,33 +1,33 @@
test_name "SMF: should be idempotent"
confine :to, :platform => 'solaris'
require 'puppet/acceptance/solaris_util'
extend Puppet::Acceptance::SMFUtils
teardown do
step "SMF: cleanup"
agents.each do |agent|
clean agent, :service => 'tstapp'
end
end
agents.each do |agent|
clean agent, :service => 'tstapp'
manifest, method = setup agent, :service => 'tstapp'
step "SMF: ensre it is created with a manifest"
apply_manifest_on(agent, 'service {tstapp : ensure=>running, manifest=>"%s"}' % manifest) do
- assert_match( / ensure changed 'stopped' to 'running'/, result.stdout, "err: #{agent}")
+ assert_match( / ensure changed 'stopped'.* to 'running'/, result.stdout, "err: #{agent}")
end
step "SMF: verify with svcs that the service is online"
on agent, "svcs -l application/tstapp" do
assert_match( /state\s+online/, result.stdout, "err: #{agent}")
end
step "SMF: ensre it is not created again"
apply_manifest_on(agent, 'service {tstapp : ensure=>running, manifest=>"%s"}' % manifest) do
assert_no_match( /changed/, result.stdout, "err: #{agent}")
end
end
diff --git a/acceptance/tests/resource/service/smf_should_create.rb b/acceptance/tests/resource/service/smf_should_create.rb
index af8e10d9a..33cdeba64 100644
--- a/acceptance/tests/resource/service/smf_should_create.rb
+++ b/acceptance/tests/resource/service/smf_should_create.rb
@@ -1,26 +1,26 @@
test_name "SMF: should create a service given a manifest"
confine :to, :platform => 'solaris'
require 'puppet/acceptance/solaris_util'
extend Puppet::Acceptance::SMFUtils
teardown do
step "SMF: cleanup"
agents.each do |agent|
clean agent, :service => 'tstapp'
end
end
agents.each do |agent|
manifest, method = setup agent, :service => 'tstapp'
step "SMF: ensure it is created with a manifest"
apply_manifest_on(agent, 'service {tstapp : ensure=>running, manifest=>"%s"}' % manifest) do
- assert_match( / ensure changed 'stopped' to 'running'/, result.stdout, "err: #{agent}")
+ assert_match( / ensure changed 'stopped'.* to 'running'/, result.stdout, "err: #{agent}")
end
step "SMF: verify with svcs that the service is online"
on agent, "svcs -l application/tstapp" do
assert_match( /state\s+online/, result.stdout, "err: #{agent}")
end
end
diff --git a/acceptance/tests/resource/service/smf_should_query.rb b/acceptance/tests/resource/service/smf_should_query.rb
index ca231c112..3da796ac9 100644
--- a/acceptance/tests/resource/service/smf_should_query.rb
+++ b/acceptance/tests/resource/service/smf_should_query.rb
@@ -1,29 +1,29 @@
test_name "SMF: should query instances"
confine :to, :platform => 'solaris'
require 'puppet/acceptance/solaris_util'
extend Puppet::Acceptance::SMFUtils
teardown do
step "SMF: cleanup"
agents.each do |agent|
clean agent, :service => 'tstapp'
end
end
agents.each do |agent|
manifest, method = setup agent, :service => 'tstapp'
step "SMF: ensre it is created with a manifest"
apply_manifest_on(agent, 'service {tstapp : ensure=>running, manifest=>"%s"}' % manifest) do
- assert_match( / ensure changed 'stopped' to 'running'/, result.stdout, "err: #{agent}")
+ assert_match( / ensure changed 'stopped'.* to 'running'/, result.stdout, "err: #{agent}")
end
step "SMF: query the resource"
on agent, "puppet resource service tstapp" do
assert_match( /ensure => 'running'/, result.stdout, "err: #{agent}")
end
step "SMF: query all the instances"
on agent, "puppet resource service" do
assert_match( /tstapp/, result.stdout, "err: #{agent}")
end
end
diff --git a/acceptance/tests/resource/service/smf_should_stop.rb b/acceptance/tests/resource/service/smf_should_stop.rb
index d74f51834..9f4ee0cfc 100644
--- a/acceptance/tests/resource/service/smf_should_stop.rb
+++ b/acceptance/tests/resource/service/smf_should_stop.rb
@@ -1,31 +1,31 @@
test_name "SMF: should stop a given service"
confine :to, :platform => 'solaris'
require 'puppet/acceptance/solaris_util'
extend Puppet::Acceptance::SMFUtils
teardown do
step "SMF: cleanup"
agents.each do |agent|
clean agent, :service => 'tstapp'
end
end
agents.each do |agent|
manifest, method = setup agent, :service => 'tstapp'
step "SMF: ensre it is created with a manifest"
apply_manifest_on(agent, 'service {tstapp : ensure=>running, manifest=>"%s"}' % manifest) do
- assert_match( / ensure changed 'stopped' to 'running'/, result.stdout, "err: #{agent}")
+ assert_match( / ensure changed 'stopped'.* to 'running'/, result.stdout, "err: #{agent}")
end
step "SMF: stop the service"
apply_manifest_on(agent, 'service {tstapp : ensure=>stopped}') do
- assert_match( /changed 'running' to 'stopped'/, result.stdout, "err: #{agent}")
+ assert_match( /changed 'running'.* to 'stopped'/, result.stdout, "err: #{agent}")
end
step "SMF: verify with svcs that the service is not online"
on agent, "svcs -l application/tstapp", :acceptable_exit_codes => [0,1] do
assert_no_match( /state\s+online/, result.stdout, "err: #{agent}")
end
end
diff --git a/acceptance/tests/resource/ssh_authorized_key/create.rb b/acceptance/tests/resource/ssh_authorized_key/create.rb
new file mode 100644
index 000000000..c775f5e44
--- /dev/null
+++ b/acceptance/tests/resource/ssh_authorized_key/create.rb
@@ -0,0 +1,32 @@
+test_name "should create an entry for an SSH authorized key"
+
+confine :except, :platform => ['windows']
+
+auth_keys = '~/.ssh/authorized_keys'
+name = "pl#{rand(999999).to_i}"
+
+agents.each do |agent|
+ teardown do
+ #(teardown) restore the #{auth_keys} file
+ on(agent, "mv /tmp/auth_keys #{auth_keys}", :acceptable_exit_codes => [0,1])
+ end
+
+ #------- SETUP -------#
+ step "(setup) backup #{auth_keys} file"
+ on(agent, "cp #{auth_keys} /tmp/auth_keys", :acceptable_exit_codes => [0,1])
+
+ #------- TESTS -------#
+ step "create an authorized key entry with puppet (present)"
+ args = ['ensure=present',
+ "user='root'",
+ "type='rsa'",
+ "key='mykey'",
+ ]
+ on(agent, puppet_resource('ssh_authorized_key', "#{name}", args))
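+ # The resulting authorized_keys entry should look roughly like
+ # 'ssh-rsa mykey <name>' (format mirrored from the setup step in destroy.rb).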
+
+ step "verify entry in #{auth_keys}"
+ on(agent, "cat #{auth_keys}") do |res|
+ fail_test "didn't find the ssh_authorized_key for #{name}" unless stdout.include? "#{name}"
+ end
+
+end
diff --git a/acceptance/tests/resource/ssh_authorized_key/destroy.rb b/acceptance/tests/resource/ssh_authorized_key/destroy.rb
new file mode 100644
index 000000000..a402a2fe7
--- /dev/null
+++ b/acceptance/tests/resource/ssh_authorized_key/destroy.rb
@@ -0,0 +1,35 @@
+test_name "should delete an entry for an SSH authorized key"
+
+confine :except, :platform => ['windows']
+
+auth_keys = '~/.ssh/authorized_keys'
+name = "pl#{rand(999999).to_i}"
+
+agents.each do |agent|
+ teardown do
+ #(teardown) restore the #{auth_keys} file
+ on(agent, "mv /tmp/auth_keys #{auth_keys}", :acceptable_exit_codes => [0,1])
+ end
+
+ #------- SETUP -------#
+ step "(setup) backup #{auth_keys} file"
+ on(agent, "cp #{auth_keys} /tmp/auth_keys", :acceptable_exit_codes => [0,1])
+
+ step "(setup) create an authorized key in the #{auth_keys} file"
+ on(agent, "echo 'ssh-rsa mykey #{name}' >> #{auth_keys}")
+
+ #------- TESTS -------#
+ step "delete an authorized key entry with puppet (absent)"
+ args = ['ensure=absent',
+ "user='root'",
+ "type='rsa'",
+ "key='mykey'",
+ ]
+ on(agent, puppet_resource('ssh_authorized_key', "#{name}", args))
+
+ step "verify entry deleted from #{auth_keys}"
+ on(agent, "cat #{auth_keys}") do |res|
+ fail_test "found the ssh_authorized_key for #{name}" if stdout.include? "#{name}"
+ end
+
+end
diff --git a/acceptance/tests/resource/ssh_authorized_key/modify.rb b/acceptance/tests/resource/ssh_authorized_key/modify.rb
new file mode 100644
index 000000000..2a87761c8
--- /dev/null
+++ b/acceptance/tests/resource/ssh_authorized_key/modify.rb
@@ -0,0 +1,35 @@
+test_name "should update an entry for an SSH authorized key"
+
+confine :except, :platform => ['windows']
+
+auth_keys = '~/.ssh/authorized_keys'
+name = "pl#{rand(999999).to_i}"
+
+agents.each do |agent|
+ teardown do
+ #(teardown) restore the #{auth_keys} file
+ on(agent, "mv /tmp/auth_keys #{auth_keys}", :acceptable_exit_codes => [0,1])
+ end
+
+ #------- SETUP -------#
+ step "(setup) backup #{auth_keys} file"
+ on(agent, "cp #{auth_keys} /tmp/auth_keys", :acceptable_exit_codes => [0,1])
+
+ step "(setup) create an authorized key in the #{auth_keys} file"
+ on(agent, "echo 'ssh-rsa mykey #{name}' >> #{auth_keys}")
+
+ #------- TESTS -------#
+ step "update an authorized key entry with puppet (present)"
+ args = ['ensure=present',
+ "user='root'",
+ "type='rsa'",
+ "key='mynewshinykey'",
+ ]
+ on(agent, puppet_resource('ssh_authorized_key', "#{name}", args))
+
+ step "verify entry updated in #{auth_keys}"
+ on(agent, "cat #{auth_keys}") do |res|
+ fail_test "didn't find the updated key for #{name}" unless stdout.include? "mynewshinykey #{name}"
+ end
+
+end
diff --git a/acceptance/tests/resource/ssh_authorized_key/query.rb b/acceptance/tests/resource/ssh_authorized_key/query.rb
new file mode 100644
index 000000000..83bf22cc1
--- /dev/null
+++ b/acceptance/tests/resource/ssh_authorized_key/query.rb
@@ -0,0 +1,29 @@
+test_name "should be able to find an existing SSH authorized key"
+
+skip_test("This test is blocked by PUP-1605")
+
+confine :except, :platform => ['windows']
+
+auth_keys = '~/.ssh/authorized_keys'
+name = "pl#{rand(999999).to_i}"
+
+agents.each do |agent|
+ teardown do
+ #(teardown) restore the #{auth_keys} file
+ on(agent, "mv /tmp/auth_keys #{auth_keys}", :acceptable_exit_codes => [0,1])
+ end
+
+ #------- SETUP -------#
+ step "(setup) backup #{auth_keys} file"
+ on(agent, "cp #{auth_keys} /tmp/auth_keys", :acceptable_exit_codes => [0,1])
+
+ step "(setup) create an authorized key in the #{auth_keys} file"
+ on(agent, "echo 'ssh-rsa mykey #{name}' >> #{auth_keys}")
+
+ #------- TESTS -------#
+ step "verify SSH authorized key query with puppet"
+ on(agent, puppet_resource('ssh_authorized_key', "/#{name}")) do |res|
+ fail_test "found the ssh_authorized_key for #{name}" unless stdout.include? "#{name}"
+ end
+
+end
diff --git a/acceptance/tests/resource/user/should_manage_shell.rb b/acceptance/tests/resource/user/should_manage_shell.rb
new file mode 100755
index 000000000..584a92937
--- /dev/null
+++ b/acceptance/tests/resource/user/should_manage_shell.rb
@@ -0,0 +1,33 @@
+test_name "should manage user shell"
+
+name = "pl#{rand(999999).to_i}"
+
+confine :except, :platform => 'windows'
+
+agents.each do |agent|
+ step "ensure the user and group do not exist"
+ agent.user_absent(name)
+ agent.group_absent(name)
+
+ step "create the user with shell"
+ shell = '/bin/sh'
+ on agent, puppet_resource('user', name, ["ensure=present", "shell=#{shell}"])
+
+ step "verify the user shell matches the managed shell"
+ agent.user_get(name) do |result|
+ fail_test "didn't set the user shell for #{name}" unless result.stdout.include? shell
+ end
+
+ step "modify the user with shell"
+ shell = '/bin/bash'
+ on agent, puppet_resource('user', name, ["ensure=present", "shell=#{shell}"])
+
+ step "verify the user shell matches the managed shell"
+ agent.user_get(name) do |result|
+ fail_test "didn't set the user shell for #{name}" unless result.stdout.include? shell
+ end
+
+ step "delete the user, and group, if any"
+ agent.user_absent(name)
+ agent.group_absent(name)
+end
diff --git a/acceptance/tests/resource/zfs/basic_tests.rb b/acceptance/tests/resource/zfs/basic_tests.rb
index e5f6413ed..a91d0a486 100644
--- a/acceptance/tests/resource/zfs/basic_tests.rb
+++ b/acceptance/tests/resource/zfs/basic_tests.rb
@@ -1,53 +1,53 @@
test_name "ZFS: configuration"
confine :to, :platform => 'solaris'
require 'puppet/acceptance/solaris_util'
extend Puppet::Acceptance::ZFSUtils
teardown do
step "ZFS: cleanup"
agents.each do |agent|
clean agent
end
end
agents.each do |agent|
step "ZFS: cleanup"
clean agent
step "ZFS: setup"
setup agent
step "ZFS: ensure clean slate"
apply_manifest_on(agent, 'zfs { "tstpool/tstfs": ensure=>absent}') do
assert_match( /Finished catalog run in .*/, result.stdout, "err: #{agent}")
end
step "ZFS: basic - ensure it is created"
apply_manifest_on(agent, 'zfs {"tstpool/tstfs": ensure=>present}') do
assert_match( /ensure: created/, result.stdout, "err: #{agent}")
end
step "ZFS: idempotence - create"
apply_manifest_on(agent, 'zfs {"tstpool/tstfs": ensure=>present}') do
assert_no_match( /ensure: created/, result.stdout, "err: #{agent}")
end
step "ZFS: cleanup for next test"
apply_manifest_on(agent, 'zfs {"tstpool/tstfs": ensure=>absent}') do
assert_match( /ensure: removed/, result.stdout, "err: #{agent}")
end
step "ZFS: create with a mount point"
apply_manifest_on(agent, 'zfs {"tstpool/tstfs": ensure=>present, mountpoint=>"/ztstpool/mnt"}') do
assert_match( /ensure: created/, result.stdout, "err: #{agent}")
end
step "ZFS: change mount point and verify"
apply_manifest_on(agent, 'zfs {"tstpool/tstfs": ensure=>present, mountpoint=>"/ztstpool/mnt2"}') do
- assert_match( /mountpoint changed '.ztstpool.mnt' to '.ztstpool.mnt2'/, result.stdout, "err: #{agent}")
+ assert_match( /mountpoint changed '.ztstpool.mnt'.* to '.ztstpool.mnt2'/, result.stdout, "err: #{agent}")
end
step "ZFS: ensure can be removed."
apply_manifest_on(agent, 'zfs { "tstpool/tstfs": ensure=>absent}') do
assert_match( /ensure: removed/, result.stdout, "err: #{agent}")
end
end
diff --git a/acceptance/tests/resource/zfs/should_be_idempotent.rb b/acceptance/tests/resource/zfs/should_be_idempotent.rb
index b7797d637..b56f742d2 100644
--- a/acceptance/tests/resource/zfs/should_be_idempotent.rb
+++ b/acceptance/tests/resource/zfs/should_be_idempotent.rb
@@ -1,37 +1,37 @@
test_name "ZFS: configuration"
confine :to, :platform => 'solaris'
require 'puppet/acceptance/solaris_util'
extend Puppet::Acceptance::ZFSUtils
teardown do
step "ZFS: cleanup"
agents.each do |agent|
clean agent
end
end
agents.each do |agent|
step "ZFS: setup"
setup agent
step "ZFS: create with a mount point"
apply_manifest_on(agent, 'zfs {"tstpool/tstfs": ensure=>present, mountpoint=>"/ztstpool/mnt"}') do
assert_match( /ensure: created/, result.stdout, "err: #{agent}")
end
step "ZFS: idempotence - create"
apply_manifest_on(agent, 'zfs {"tstpool/tstfs": ensure=>present}') do
assert_no_match( /ensure: created/, result.stdout, "err: #{agent}")
end
step "ZFS: change mount point and verify"
apply_manifest_on(agent, 'zfs {"tstpool/tstfs": ensure=>present, mountpoint=>"/ztstpool/mnt2"}') do
- assert_match( /mountpoint changed '.ztstpool.mnt' to '.ztstpool.mnt2'/, result.stdout, "err: #{agent}")
+ assert_match( /mountpoint changed '.ztstpool.mnt'.* to '.ztstpool.mnt2'/, result.stdout, "err: #{agent}")
end
step "ZFS: change mount point and verify idempotence"
apply_manifest_on(agent, 'zfs {"tstpool/tstfs": ensure=>present, mountpoint=>"/ztstpool/mnt2"}') do
assert_no_match( /changed/, result.stdout, "err: #{agent}")
end
end
diff --git a/acceptance/tests/resource/zone/dataset.rb b/acceptance/tests/resource/zone/dataset.rb
index 53479424b..e775f2ef9 100755
--- a/acceptance/tests/resource/zone/dataset.rb
+++ b/acceptance/tests/resource/zone/dataset.rb
@@ -1,68 +1,68 @@
test_name "Zone: dataset configuration"
confine :to, :platform => 'solaris'
require 'puppet/acceptance/solaris_util'
extend Puppet::Acceptance::ZoneUtils
def moresetup(agent)
# This should fail if /tstzones already exists.
on agent,"mkdir /tstzones"
on agent,"mkfile 64m /tstzones/dsk"
on agent,"zpool create tstpool /tstzones/dsk"
on agent,"zfs create tstpool/xx"
on agent,"zfs create tstpool/yy"
on agent,"zfs create tstpool/zz"
end
def moreclean(agent)
on agent, "zfs destroy -r tstpool"
on agent, "zpool destroy tstpool"
on agent, "rm -f /tstzones/dsk"
end
teardown do
step "Zone: dataset - cleanup"
agents.each do |agent|
clean agent
moreclean agent
end
end
agents.each do |agent|
step "Zone: dataset - setup"
setup agent
moresetup agent
#-----------------------------------
step "Zone: dataset - make it configured"
# Make it configured
apply_manifest_on(agent, 'zone {tstzone : ensure=>configured, path=>"/tstzones/mnt" }') do
assert_match( /ensure: created/, result.stdout, "err: #{agent}")
end
step "Zone: dataset - basic test, a single data set"
apply_manifest_on(agent, 'zone {tstzone : ensure=>configured, dataset=>"tstpool/xx", path=>"/tstzones/mnt" }') do
assert_match(/defined 'dataset' as .'tstpool.xx'./, result.stdout, "err: #{agent}")
end
step "Zone: dataset - basic test, a single data set should change to another"
apply_manifest_on(agent,'zone {tstzone : ensure=>configured, dataset=>"tstpool/yy", path=>"/tstzones/mnt" }') do
- assert_match(/dataset changed 'tstpool.xx' to .'tstpool.yy'./, result.stdout, "err: #{agent}")
+ assert_match(/dataset changed 'tstpool.xx'.* to .'tstpool.yy'./, result.stdout, "err: #{agent}")
end
step "Zone: dataset - basic test, idempotency"
apply_manifest_on(agent,'zone {tstzone : ensure=>configured, dataset=>"tstpool/yy", path=>"/tstzones/mnt" }') do
- assert_no_match(/dataset changed 'tstpool.xx' to .'tstpool.yy'./, result.stdout, "err: #{agent}")
+ assert_no_match(/dataset changed 'tstpool.xx'.* to .'tstpool.yy'./, result.stdout, "err: #{agent}")
end
step "Zone: dataset - array test, should change to an array"
apply_manifest_on(agent,'zone {tstzone : ensure=>configured, dataset=>["tstpool/yy","tstpool/zz"], path=>"/tstzones/mnt" }') do
- assert_match(/dataset changed 'tstpool.yy' to .'tstpool.yy', 'tstpool.zz'./, result.stdout, "err: #{agent}")
+ assert_match(/dataset changed 'tstpool.yy'.* to .'tstpool.yy', 'tstpool.zz'./, result.stdout, "err: #{agent}")
end
step "Zone: dataset - array test, should change one single element"
apply_manifest_on(agent,'zone {tstzone : ensure=>configured, dataset=>["tstpool/xx","tstpool/zz"], path=>"/tstzones/mnt" }') do
- assert_match(/dataset changed 'tstpool.yy,tstpool.zz' to .'tstpool.xx', 'tstpool.zz'./, result.stdout, "err: #{agent}")
+ assert_match(/dataset changed 'tstpool.yy,tstpool.zz'.* to .'tstpool.xx', 'tstpool.zz'./, result.stdout, "err: #{agent}")
end
step "Zone: dataset - array test, should remove elements"
apply_manifest_on(agent,'zone {tstzone : ensure=>configured, dataset=>[], path=>"/tstzones/mnt" }') do
- assert_match(/dataset changed 'tstpool.zz,tstpool.xx' to ../, result.stdout, "err: #{agent}")
+ assert_match(/dataset changed 'tstpool.zz,tstpool.xx'.* to ../, result.stdout, "err: #{agent}")
end
end
diff --git a/acceptance/tests/resource/zone/set_ip.rb b/acceptance/tests/resource/zone/set_ip.rb
index aa9f6e237..aad2463ef 100755
--- a/acceptance/tests/resource/zone/set_ip.rb
+++ b/acceptance/tests/resource/zone/set_ip.rb
@@ -1,64 +1,64 @@
test_name "Zone:IP ip-type and ip configuration"
confine :to, :platform => 'solaris'
require 'puppet/acceptance/solaris_util'
extend Puppet::Acceptance::ZoneUtils
teardown do
step "Zone: ip - cleanup"
agents.each do |agent|
clean agent
end
end
agents.each do |agent|
step "Zone: ip - setup"
setup agent
# See
# https://hg.openindiana.org/upstream/illumos/illumos-gate/file/03d5725cda56/usr/src/lib/libinetutil/common/ifspec.c
# for the function ifparse_ifspec. This is the only documentation that exists
# as to what the zone interface can be.
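# In these tests the ip value takes the form '<interface>:<address>', e.g.
# 'ip.if.1:1.1.1.1'; a bare interface name is rejected, as the negative case
# below verifies.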
#-----------------------------------
# Make sure that the zone is absent.
step "Zone: ip - make it configured"
apply_manifest_on(agent, 'zone {tstzone : ensure=>configured, iptype=>shared, path=>"/tstzones/mnt" }') do
assert_match( /ensure: created/, result.stdout, "err: #{agent}")
end
step "Zone: ip - ip switch: verify that the change from shared to exclusive works."
# --------------------------------------------------------------------
apply_manifest_on(agent, 'zone {tstzone : ensure=>configured, iptype=>exclusive, path=>"/tstzones/mnt" }') do
- assert_match(/iptype changed 'shared' to 'exclusive'/, result.stdout, "err: #{agent}")
+ assert_match(/iptype changed 'shared'.* to 'exclusive'/, result.stdout, "err: #{agent}")
end
step "Zone: ip - ip switch: verify that we can change it back"
# --------------------------------------------------------------------
apply_manifest_on(agent, 'zone {tstzone : ensure=>configured, iptype=>exclusive, path=>"/tstzones/mnt" }') do
- assert_no_match(/iptype changed 'shared' to 'exclusive'/, result.stdout, "err: #{agent}")
+ assert_no_match(/iptype changed 'shared'.* to 'exclusive'/, result.stdout, "err: #{agent}")
end
step "Zone: ip - switch to shared for remaining cases"
# we have to use shared for remaining test cases.
apply_manifest_on(agent, 'zone {tstzone : ensure=>configured, iptype=>shared, path=>"/tstzones/mnt" }') do
- assert_match(/iptype changed 'exclusive' to 'shared'/, result.stdout, "err: #{agent}")
+ assert_match(/iptype changed 'exclusive'.* to 'shared'/, result.stdout, "err: #{agent}")
end
step "Zone: ip - assign: ensure that our ip assignment works."
# --------------------------------------------------------------------
apply_manifest_on(agent,'zone {tstzone : ensure=>configured, iptype=>shared, path=>"/tstzones/mnt", ip=>"ip.if.1" }', :acceptable_exit_codes => [1] ) do
assert_match(/ip must contain interface name and ip address separated by a \W*?:/, result.output, "err: #{agent}")
end
apply_manifest_on(agent, 'zone {tstzone : ensure=>configured, iptype=>shared, path=>"/tstzones/mnt", ip=>"ip.if.1:1.1.1.1" }') do
assert_match(/defined 'ip' as .'ip.if.1:1.1.1.1'./ , result.stdout, "err: #{agent}")
end
step "Zone: ip - assign: arrays should be created"
# --------------------------------------------------------------------
apply_manifest_on(agent, 'zone {tstzone : ensure=>configured, iptype=>shared, path=>"/tstzones/mnt", ip=>["ip.if.1:1.1.1.1", "ip.if.2:1.1.1.2"] }') do
- assert_match( /ip changed 'ip.if.1:1.1.1.1' to .'ip.if.1:1.1.1.1', 'ip.if.2:1.1.1.2'./, result.stdout, "err: #{agent}")
+ assert_match( /ip changed 'ip.if.1:1.1.1.1'.* to .'ip.if.1:1.1.1.1', 'ip.if.2:1.1.1.2'./, result.stdout, "err: #{agent}")
end
step "Zone: ip - assign: arrays should be modified"
apply_manifest_on(agent, 'zone {tstzone : ensure=>configured, iptype=>shared, path=>"/tstzones/mnt", ip=>["ip.if.1:1.1.1.1", "ip.if.2:1.1.1.3"] }') do
- assert_match(/ip changed 'ip.if.1:1.1.1.1,ip.if.2:1.1.1.2' to .'ip.if.1:1.1.1.1', 'ip.if.2:1.1.1.3'./, result.stdout, "err: #{agent}")
+ assert_match(/ip changed 'ip.if.1:1.1.1.1,ip.if.2:1.1.1.2'.* to .'ip.if.1:1.1.1.1', 'ip.if.2:1.1.1.3'./, result.stdout, "err: #{agent}")
end
step "Zone: ip - idempotency: arrays"
apply_manifest_on(agent, 'zone {tstzone : ensure=>configured, iptype=>shared, path=>"/tstzones/mnt", ip=>["ip.if.1:1.1.1.1", "ip.if.2:1.1.1.3"] }') do
assert_no_match(/ip changed/, result.stdout, "err: #{agent}")
end
end
diff --git a/acceptance/tests/resource/zone/set_path.rb b/acceptance/tests/resource/zone/set_path.rb
index e92f51060..7a172421f 100644
--- a/acceptance/tests/resource/zone/set_path.rb
+++ b/acceptance/tests/resource/zone/set_path.rb
@@ -1,49 +1,49 @@
test_name "Zone:Path configuration"
confine :to, :platform => 'solaris'
require 'puppet/acceptance/solaris_util'
extend Puppet::Acceptance::ZoneUtils
teardown do
step "Zone: path - cleanup"
agents.each do |agent|
clean agent
end
end
agents.each do |agent|
step "Zone: path - setup"
setup agent
#-----------------------------------
step "Zone: path - required parameter (-)"
apply_manifest_on(agent, 'zone {tstzone : ensure=>configured, iptype=>shared }') do
assert_match( /Error: Path is required/, result.output, "err: #{agent}")
end
step "Zone: path - required parameter (+)"
apply_manifest_on(agent, 'zone {tstzone : ensure=>configured, iptype=>shared, path=>"/tstzones/mnt" }') do
assert_match( /ensure: created/, result.stdout, "err: #{agent}")
end
step "Zone: path - should change the path if it is switched before install"
apply_manifest_on(agent, 'zone {tstzone : ensure=>configured, iptype=>shared, path=>"/tstzones/mnt2" }') do
- assert_match(/path changed '.tstzones.mnt' to '.tstzones.mnt2'/, result.stdout, "err: #{agent}")
+ assert_match(/path changed '.tstzones.mnt'.* to '.tstzones.mnt2'/, result.stdout, "err: #{agent}")
end
step "Zone: path - verify the path is correct"
on agent,"/usr/sbin/zonecfg -z tstzone export" do
assert_match(/set zonepath=.*mnt2/, result.stdout, "err: #{agent}")
end
step "Zone: path - revert to original path"
apply_manifest_on(agent, 'zone {tstzone : ensure=>configured, iptype=>shared, path=>"/tstzones/mnt" }') do
- assert_match(/path changed '.tstzones.mnt2' to '.tstzones.mnt'/, result.stdout, "err: #{agent}")
+ assert_match(/path changed '.tstzones.mnt2'.* to '.tstzones.mnt'/, result.stdout, "err: #{agent}")
end
step "Zone: path - verify that we have correct path"
on agent,"/usr/sbin/zonecfg -z tstzone export" do
assert_match(/set zonepath=.tstzones.mnt/, result.stdout, "err: #{agent}")
end
end
diff --git a/acceptance/tests/security/cve-2013-1652_improper_query_params.rb b/acceptance/tests/security/cve-2013-1652_improper_query_params.rb
index 1f42c307f..4c805c761 100644
--- a/acceptance/tests/security/cve-2013-1652_improper_query_params.rb
+++ b/acceptance/tests/security/cve-2013-1652_improper_query_params.rb
@@ -1,39 +1,39 @@
require 'json'
test_name "CVE 2013-1652 Improper query parameter validation" do
confine :except, :platform => 'windows'
with_puppet_running_on master, {} do
# Ensure each agent has a signed cert
on agents, puppet('agent', "-t --server #{master}" )
agents.each do |agent|
next if agent['roles'].include?( 'master' )
certname = on(agent, puppet('agent', "--configprint certname")).stdout.chomp
payload = "https://#{master}:8140/production/catalog/#{certname}?use_node=" +
"---%20!ruby/object:Puppet::Node%0A%20%20" +
"name:%20#{master}%0A%20%20classes:%20\[\]%0A%20%20" +
"parameters:%20%7B%7D%0A%20%20facts:%20%7B%7D"
cert_path = on(agent, puppet('agent', "--configprint hostcert")).stdout.chomp
key_path = on(agent, puppet('agent', "--configprint hostprivkey")).stdout.chomp
- curl_base = "curl -g --cert \"#{cert_path}\" --key \"#{key_path}\" -k -H 'Accept: pson'"
+ curl_base = "curl --tlsv1 -g --cert \"#{cert_path}\" --key \"#{key_path}\" -k -H 'Accept: pson'"
curl_call = "#{curl_base} '#{payload}'"
step "Attempt to retrieve another nodes catalog" do
on agent, curl_call do |test|
begin
res = JSON.parse( test.stdout )
fail_test( "Retrieved catalog for #{master} from #{agent}" ) if
res['data']['name'] == master.name
rescue JSON::ParserError
# good, continue
end
end
end
end
end
end
diff --git a/acceptance/tests/security/cve-2013-1652_poison_other_node_cache.rb b/acceptance/tests/security/cve-2013-1652_poison_other_node_cache.rb
index f88fc6647..550b67367 100644
--- a/acceptance/tests/security/cve-2013-1652_poison_other_node_cache.rb
+++ b/acceptance/tests/security/cve-2013-1652_poison_other_node_cache.rb
@@ -1,40 +1,40 @@
test_name "CVE 2013-1652 Poison node cache" do
step "Determine suitability of the test" do
skip_test( "This test will only run on Puppet 3.x" ) if
on(master, puppet('--version')).stdout =~ /\A2\./
end
with_puppet_running_on( master, {} ) do
# Ensure agent has a signed cert
on master, puppet('agent', '-t', "--server #{master}" )
certname = on(
master, puppet('agent', "--configprint certname")).stdout.chomp
cert_path = on(
master, puppet('agent', "--configprint hostcert")).stdout.chomp
key_path = on(
master, puppet('agent', "--configprint hostprivkey")).stdout.chomp
- curl_base = "curl -g --cert \"#{cert_path}\" " +
+ curl_base = "curl --tlsv1 -g --cert \"#{cert_path}\" " +
"--key \"#{key_path}\" -k -H 'Accept: pson'"
step "Attempt to poison the master's node cache" do
yamldir = on(
master, puppet('master', '--configprint yamldir' )).stdout.chomp
exploited = "#{yamldir}/node/you.lose.yaml"
on master, "rm -rf #{exploited}"
on master, "rm -rf #{yamldir}/node/*"
payload2 = "https://#{master}:8140/production/node/#{certname}?instance=" +
"---+%21ruby%2Fobject%3APuppet%3A%3ANode%0A+classes" +
"%3A%0A+-+foo%0A+name%3A+you.lose%0A+parameters" +
"%3A+%7B%7D%0A+time%3A+2013-02-28+15%3A12%3A30.367008+-08%3A00"
on master, "#{curl_base} '#{payload2}'"
fail_test( "Found exploit file #{exploited}" ) if
on( master, "[ ! -f #{exploited} ]",
:acceptable_exit_codes => [0,1] ).exit_code == 1
end
end
end
diff --git a/acceptance/tests/security/cve-2013-1653_puppet_kick.rb b/acceptance/tests/security/cve-2013-1653_puppet_kick.rb
index 818516aed..6320f8b32 100644
--- a/acceptance/tests/security/cve-2013-1653_puppet_kick.rb
+++ b/acceptance/tests/security/cve-2013-1653_puppet_kick.rb
@@ -1,113 +1,113 @@
test_name "CVE 2013-1653: Puppet Kick Remote Code Exploit" do
step "Determine suitability of the test" do
confine :except, :platform => 'windows'
versions = on( hosts, puppet( '--version' ))
skip_test( "This test will not run on Puppet 2.6" ) if
versions.any? {|r| r.stdout =~ /\A2\.6\./ }
end
def exploit_code( exploiter, exploitee, endpoint, port, file_to_create )
certfile = on( exploiter, puppet_agent( '--configprint hostcert' )).stdout.chomp
keyfile = on( exploiter, puppet_agent( '--configprint hostprivkey' )).stdout.chomp
exploit = %Q[#!#{exploiter['puppetbindir']}/ruby
require 'rubygems'
require 'puppet'
require 'openssl'
require 'net/https'
yaml = <<EOM
--- !ruby/object:ERB
safe_level:
src: |-
#coding:US-ASCII
_erbout = ''; _erbout.concat(( File.open( '#{file_to_create}', 'w') ).to_s)
filename:
EOM
headers = {'Content-Type' => 'text/yaml', 'Accept' => 'yaml'}
conn = Net::HTTP.new('#{exploitee}', #{port})
conn.use_ssl = true
conn.cert = OpenSSL::X509::Certificate.new(File.read('#{certfile}'))
conn.key = OpenSSL::PKey::RSA.new(File.read('#{keyfile}'))
conn.verify_mode = OpenSSL::SSL::VERIFY_NONE
conn.request_put("/production/#{endpoint}/#{exploiter}", yaml, headers) do |response|
response.read_body do |chunk|
puts chunk
end
end ]
return exploit
end
exploited = '/tmp/cve-2013-1653-has-worked'
restauth_conf = %q[
path /run
auth yes
allow *
]
teardown do
agents.each do |agent|
pidfile = on( agent, puppet_agent("--configprint pidfile") ).stdout.chomp
on agent, "[ -f #{pidfile} ] && kill `cat #{pidfile}` || true"
on agent, "rm -rf #{exploited}"
end
end
agents.each do |agent|
# We have to skip this case because of bug PP-436. When that gets fixed, we
# can test on all nodes again.
if agent == master
Log.warn("This test does not support nodes that are both master and agents")
next
end
atestdir = agent.tmpdir('puppet-kick-auth')
mtestdir = master.tmpdir('puppet-kick-auth')
step "Daemonize the agent" do
# Lay down a temporary auth.conf that will allow the agent to be kicked
create_remote_file(agent, "#{atestdir}/auth.conf", restauth_conf)
# Start the agent
on(agent, puppet_agent("--debug --daemonize --server #{master} --listen --no-client --rest_authconfig #{atestdir}/auth.conf"))
step "Wait for agent to start listening" do
timeout = 15
begin
Timeout.timeout(timeout) do
loop do
# 7 is "Could not connect to host", which will happen before it's running
- result = on(agent, "curl -k https://#{agent}:8139", :acceptable_exit_codes => [0,7])
+ result = on(agent, "curl --tlsv1 -k https://#{agent}:8139", :acceptable_exit_codes => [0,7])
break if result.exit_code == 0
sleep 1
end
end
rescue Timeout::Error
fail_test "Puppet agent #{agent} failed to start after #{timeout} seconds"
end
end
end
step "Attempt to exploit #{agent}" do
# Ensure there's no stale data
on agent, "rm -rf #{exploited}"
on master, "rm -rf #{mtestdir}/exploit.rb"
# Copy over our exploit and execute
create_remote_file( master, "#{mtestdir}/exploit.rb", exploit_code( master, agent, 'run', 8139, exploited ))
on master, "chmod +x #{mtestdir}/exploit.rb"
on master, "#{mtestdir}/exploit.rb"
# Did it work?
fail_test( "Found exploit file #{exploited}" ) if
on( agent, "[ ! -f #{exploited} ]",
:acceptable_exit_codes => [0,1] ).exit_code == 1
end
end
end
diff --git a/acceptance/tests/security/cve-2013-1654_sslv2_downgrade_agent.rb b/acceptance/tests/security/cve-2013-1654_sslv2_downgrade_agent.rb
index a2b25d6a4..08475aa1a 100644
--- a/acceptance/tests/security/cve-2013-1654_sslv2_downgrade_agent.rb
+++ b/acceptance/tests/security/cve-2013-1654_sslv2_downgrade_agent.rb
@@ -1,98 +1,98 @@
test_name "CVE 2013-1654 SSL2 Downgrade of Agent connection" do
require 'puppet/acceptance/windows_utils'
extend Puppet::Acceptance::WindowsUtils
def which_ruby(host)
if host['platform'] =~ /windows/
ruby_cmd(host)
else
host['puppetbindir'] ? "#{host['puppetbindir']}/ruby" : 'ruby'
end
end
def suitable?(host)
cmd = <<END
#{which_ruby(host)} -ropenssl -e "puts OpenSSL::SSL::SSLContext::METHODS.include?(:SSLv2)"
END
on(host, cmd).stdout.chomp == "true"
end
agents.each do |agent|
if suitable?( agent )
certfile = on(agent, puppet_agent("--configprint hostcert")).stdout.chomp
keyfile = on(agent, puppet_agent("--configprint hostprivkey")).stdout.chomp
cafile = on(agent, puppet_agent("--configprint localcacert")).stdout.chomp
port = 8150
sslserver = <<END
#!/usr/bin/env ruby
require 'webrick'
require 'webrick/https'
class Servlet < WEBrick::HTTPServlet::AbstractServlet
def do_GET(request, response)
response.status = 200
response['Content-Type'] = 'text/pson'
response.body = 'FOOBAR'
end
end
class SSLServer
def run
config = {}
config[:Port] = #{port}
config[:SSLCACertificateFile] = '#{cafile}'
config[:SSLCertificate] = OpenSSL::X509::Certificate.new(File.read('#{certfile}'))
config[:SSLPrivateKey] = OpenSSL::PKey::RSA.new(File.read('#{keyfile}'))
config[:SSLStartImmediately] = true
config[:SSLEnable] = true
config[:SSLVerifyClient] = OpenSSL::SSL::VERIFY_NONE
server = WEBrick::HTTPServer.new(config)
server.mount('/', Servlet)
server.ssl_context.ssl_version = 'SSLv2'
trap :TERM do
exit!(0)
end
server.start
end
end
if $0 == __FILE__
SSLServer.new.run
end
END
testdir = agent.tmpdir('puppet-sslv2')
teardown do
sslserver_in_process_list = agent["platform"] =~ /win/ ? 'ruby' : 'sslserver.rb'
on(agent, "ps -ef | grep #{sslserver_in_process_list} | grep -v grep | awk '{ print $2 }' | xargs kill || echo \"ruby sslserver.rb not running\"")
on(agent, "rm -rf #{testdir}")
end
create_remote_file(agent, "#{testdir}/sslserver.rb", sslserver)
on agent, "#{which_ruby(agent)} #{testdir}/sslserver.rb &>/dev/null &"
timeout = 15
begin
Timeout.timeout(timeout) do
loop do
# 7 is "Could not connect to host", which will happen before it's running
# 28 is "Operation timeout", which could happen if the vm was running slowly
- result = on(agent, "curl -m1 -k https://#{agent}:#{port}", :acceptable_exit_codes => [0,7,28,35])
+ result = on(agent, "curl --tlsv1 -m1 -k https://#{agent}:#{port}", :acceptable_exit_codes => [0,7,28,35])
break if result.exit_code == 0 or result.exit_code == 35
sleep 1
end
end
rescue Timeout::Error
fail_test "Insecure Mock Server on #{agent} failed to start after #{timeout} seconds"
end
on(agent, puppet("agent --debug --test --server #{agent} --masterport #{port}"), :acceptable_exit_codes => [1]) do |test|
assert_no_match(/'FOOBAR'/, test.stdout)
end
else
logger.debug( "skipping #{agent} since SSLv2 is not available" )
end
end
end
diff --git a/acceptance/tests/security/cve-2013-2275_report_acl.rb b/acceptance/tests/security/cve-2013-2275_report_acl.rb
index f0071533b..48c158442 100644
--- a/acceptance/tests/security/cve-2013-2275_report_acl.rb
+++ b/acceptance/tests/security/cve-2013-2275_report_acl.rb
@@ -1,30 +1,30 @@
test_name "(#19531) report save access control"
step "Verify puppet only allows saving reports from the node matching the certificate"
fake_report = <<-EOYAML
--- !ruby/object:Puppet::Transaction::Report
host: mccune
metrics: {}
logs: []
kind: inspect
puppet_version: "2.7.20"
status: failed
report_format: 3
EOYAML
with_puppet_running_on(master, {}) do
submit_fake_report_cmd = [
- "curl -k -X PUT",
+ "curl --tlsv1 -k -X PUT",
"--cacert \"$(puppet master --configprint cacert)\"",
"--cert \"$(puppet master --configprint hostcert)\"",
"--key \"$(puppet master --configprint hostprivkey)\"",
"-H 'Content-Type: text/yaml'",
"-d '#{fake_report}'",
"\"https://#{master}:8140/production/report/mccune\"",
].join(" ")
on master, submit_fake_report_cmd, :acceptable_exit_codes => [0] do
msg = "(#19531) (CVE-2013-2275) Puppet master accepted a report for a node that does not match the certname"
assert_match(/Forbidden request/, stdout, msg)
end
end
diff --git a/acceptance/tests/security/cve-2013-3567_yaml_deserialization_again.rb b/acceptance/tests/security/cve-2013-3567_yaml_deserialization_again.rb
index bb216b93b..ebdfa32a0 100644
--- a/acceptance/tests/security/cve-2013-3567_yaml_deserialization_again.rb
+++ b/acceptance/tests/security/cve-2013-3567_yaml_deserialization_again.rb
@@ -1,31 +1,40 @@
test_name "CVE-2013-3567 Arbitrary YAML Deserialization"
-reportdir = master.tmpdir('yaml_deserialization')
+reportdir = create_tmpdir_for_user master, 'yaml_deserialization'
dangerous_yaml = "--- !ruby/object:Puppet::Transaction::Report { metrics: { resources: !ruby/object:ERB { src: 'exit 0' } }, logs: [], resource_statuses: [], host: '$(puppet master --configprint certname)' }"
submit_bad_yaml = [
- "curl -k -X PUT",
+ "curl --tlsv1 -k -X PUT",
"--cacert $(puppet master --configprint cacert)",
"--cert $(puppet master --configprint hostcert)",
"--key $(puppet master --configprint hostprivkey)",
"-H 'Content-Type: text/yaml'",
"-d \"#{dangerous_yaml}\"",
"\"https://#{master}:8140/production/report/$(puppet master --configprint certname)\""
].join(' ')
master_opts = {
'master' => {
'reportdir' => reportdir,
'reports' => 'store',
}
}
+# In PE, the master is running as non-root. We need to set the
+# reportdir permissions correctly for it.
+on master, "chmod 750 #{reportdir}"
+if options.is_pe?
+ on master, "chown pe-puppet:pe-puppet #{reportdir}"
+elsif master.is_using_passenger?
+ on master, "chown puppet:puppet #{reportdir}"
+end
+
with_puppet_running_on(master, master_opts) do
on master, submit_bad_yaml
on master, "cat #{reportdir}/$(puppet master --configprint certname)/*" do
assert_no_match(/ERB/, stdout, "Improperly propagated ERB object from input into puppet code")
end
end
on master, "rm -rf #{reportdir}"
diff --git a/acceptance/tests/security/cve-2013-3567_yaml_parameter_deserialization.rb b/acceptance/tests/security/cve-2013-3567_yaml_parameter_deserialization.rb
index d4c059018..a2b20e0d7 100644
--- a/acceptance/tests/security/cve-2013-3567_yaml_parameter_deserialization.rb
+++ b/acceptance/tests/security/cve-2013-3567_yaml_parameter_deserialization.rb
@@ -1,36 +1,36 @@
test_name "CVE-2013-3567 Arbitrary YAML Query Parameter Deserialization"
CURL_UNABLE_TO_FETCH_PAGE = 22
require 'uri'
dangerous_yaml = "--- !ruby/object:Puppet::Node::Environment { name: 'manage' }"
submit_bad_yaml_as_parameter = [
- "curl -f -s -S -k -X GET",
+ "curl --tlsv1 -f -s -S -k -X GET",
"--cacert $(puppet master --configprint cacert)",
"--cert $(puppet master --configprint hostcert)",
"--key $(puppet master --configprint hostprivkey)",
"-H 'Accept: yaml'",
"\"https://#{master}:8140/production/file_metadata/modules/testing/tested?links=#{URI.encode(dangerous_yaml)}\""
].join(' ')
modules = master.tmpdir('modules')
apply_manifest_on master, <<MANIFEST
file { "#{modules}": ensure => directory, owner => puppet }
-> file { "#{modules}/testing": ensure => directory, owner => puppet }
-> file { "#{modules}/testing/files": ensure => directory, owner => puppet }
-> file { "#{modules}/testing/files/tested": ensure => file, content => "test", owner => puppet }
MANIFEST
master_opts = {
'master' => {
'modulepath' => modules,
}
}
with_puppet_running_on(master, master_opts) do
step "Expect the master to reject the request"
on master, submit_bad_yaml_as_parameter, :acceptable_exit_codes => [CURL_UNABLE_TO_FETCH_PAGE]
end
diff --git a/acceptance/tests/security/cve-2013-4761_resource_type.rb b/acceptance/tests/security/cve-2013-4761_resource_type.rb
index d33af1af4..24252a1d0 100644
--- a/acceptance/tests/security/cve-2013-4761_resource_type.rb
+++ b/acceptance/tests/security/cve-2013-4761_resource_type.rb
@@ -1,58 +1,58 @@
require 'puppet/acceptance/temp_file_utils'
extend Puppet::Acceptance::TempFileUtils
initialize_temp_dirs
teardown do
remove_temp_dirs
end
test_name "CVE 2013-4761 Remote code execution via REST resource_type" do
confine :except, :platform => 'windows'
create_test_file(master, 'auth.conf', <<-AUTH)
path /resource_type
method find, search
auth any
allow *
AUTH
create_remote_file(master, '/tmp/exploit.rb', <<-EXPLOIT)
::File.open('/tmp/exploited', 'w') { |f| f.puts("exploited") }
EXPLOIT
chmod(master, '777', '/tmp/exploit.rb')
master_opts = {
'master' => {
'autosign' => true,
'rest_authconfig' => get_test_file_path(master, 'auth.conf'),
},
}
with_puppet_running_on(master, master_opts) do
# Ensure each agent has a signed cert
on agents, puppet("agent", "-t", "--server #{master}")
agents.each do |agent|
next if agent['roles'].include?('master')
step "Ensure that the exploit marker is gone" do
on master, "rm -f /tmp/exploited"
end
step "Request a type that maps to the exploit file" do
type_name = "::..::..::..::..::..::tmp::exploit"
payload = "https://#{master}:8140/production/resource_type/#{type_name}"
cert_path = on(agent, puppet("agent", "--configprint hostcert")).stdout.chomp
key_path = on(agent, puppet("agent", "--configprint hostprivkey")).stdout.chomp
- curl_base = "curl -g --cert \"#{cert_path}\" --key \"#{key_path}\" -k -H 'Accept: pson'"
+ curl_base = "curl --tlsv1 -g --cert \"#{cert_path}\" --key \"#{key_path}\" -k -H 'Accept: pson'"
on agent, "#{curl_base} '#{payload}'"
end
step "Check that the exploit marker was not created" do
on master, "test ! -e /tmp/exploited"
end
end
end
end
diff --git a/acceptance/tests/ssl/autosign_command.rb b/acceptance/tests/ssl/autosign_command.rb
index 54c6a969e..b161c1be5 100644
--- a/acceptance/tests/ssl/autosign_command.rb
+++ b/acceptance/tests/ssl/autosign_command.rb
@@ -1,129 +1,128 @@
require 'puppet/acceptance/common_utils'
extend Puppet::Acceptance::CAUtils
test_name "autosign command and csr attributes behavior (#7243,#7244)" do
def assert_key_generated(name)
assert_match(/Creating a new SSL key for #{name}/, stdout, "Expected agent to create a new SSL key for autosigning")
end
testdirs = {}
step "generate tmp dirs on all hosts" do
hosts.each { |host| testdirs[host] = host.tmpdir('autosign_command') }
end
teardown do
step "Remove autosign configuration"
testdirs.each do |host,testdir|
on(host, host_command("rm -rf '#{testdir}'") )
end
reset_agent_ssl
end
hostname = master.execute('facter hostname')
fqdn = master.execute('facter fqdn')
reset_agent_ssl(false)
step "Step 1: ensure autosign command can approve CSRs" do
master_opts = {
'master' => {
'autosign' => '/bin/true',
'dns_alt_names' => "puppet,#{hostname},#{fqdn}",
}
}
with_puppet_running_on(master, master_opts) do
agents.each do |agent|
next if agent == master
on(agent, puppet("agent --test --server #{master} --waitforcert 0 --certname #{agent}-autosign"))
assert_key_generated(agent)
assert_match(/Caching certificate for #{agent}/, stdout, "Expected certificate to be autosigned")
end
end
end
reset_agent_ssl(false)
step "Step 2: ensure autosign command can reject CSRs" do
master_opts = {
'master' => {
'autosign' => '/bin/false',
'dns_alt_names' => "puppet,#{hostname},#{fqdn}",
}
}
with_puppet_running_on(master, master_opts) do
agents.each do |agent|
next if agent == master
on(agent, puppet("agent --test --server #{master} --waitforcert 0 --certname #{agent}-reject"), :acceptable_exit_codes => [1])
assert_key_generated(agent)
assert_match(/no certificate found/, stdout, "Expected certificate to not be autosigned")
end
end
end
autosign_inspect_csr_path = "#{testdirs[master]}/autosign_inspect_csr.rb"
step "Step 3: setup an autosign command that inspects CSR attributes" do
autosign_inspect_csr = <<-END
#!/usr/bin/env ruby
require 'openssl'
def unwrap_attr(attr)
set = attr.value
str = set.value.first
str.value
end
csr_text = STDIN.read
csr = OpenSSL::X509::Request.new(csr_text)
passphrase = csr.attributes.find { |a| a.oid == '1.3.6.1.4.1.34380.2.1' }
# And here we jump through hoops to unwrap ASN1's Attr Set Str
if unwrap_attr(passphrase) == 'my passphrase'
exit 0
end
exit 1
END
create_remote_file(master, autosign_inspect_csr_path, autosign_inspect_csr)
on master, "chmod 777 #{testdirs[master]}"
on master, "chmod 777 #{autosign_inspect_csr_path}"
end
agent_csr_attributes = {}
step "Step 4: create attributes for inclusion on csr on agents" do
csr_attributes = <<-END
custom_attributes:
1.3.6.1.4.1.34380.2.0: hostname.domain.com
1.3.6.1.4.1.34380.2.1: my passphrase
1.3.6.1.4.1.34380.2.2: # system IPs in hex
- 0xC0A80001 # 192.168.0.1
- 0xC0A80101 # 192.168.1.1
END
agents.each do |agent|
agent_csr_attributes[agent] = "#{testdirs[agent]}/csr_attributes.yaml"
create_remote_file(agent, agent_csr_attributes[agent], csr_attributes)
end
end
reset_agent_ssl(false)
step "Step 5: successfully obtain a cert" do
master_opts = {
'master' => {
'autosign' => autosign_inspect_csr_path,
'dns_alt_names' => "puppet,#{hostname},#{fqdn}",
},
- :__commandline_args__ => '--debug --trace',
}
with_puppet_running_on(master, master_opts) do
agents.each do |agent|
next if agent == master
step "attempting to obtain cert for #{agent}"
on(agent, puppet("agent --test --server #{master} --waitforcert 0 --csr_attributes '#{agent_csr_attributes[agent]}' --certname #{agent}-attrs"), :acceptable_exit_codes => [0])
assert_key_generated(agent)
end
end
end
end
diff --git a/acceptance/tests/ssl/certificate_extensions.rb b/acceptance/tests/ssl/certificate_extensions.rb
index 5b8c49f62..1f7832aa2 100644
--- a/acceptance/tests/ssl/certificate_extensions.rb
+++ b/acceptance/tests/ssl/certificate_extensions.rb
@@ -1,74 +1,75 @@
require 'puppet/acceptance/common_utils'
require 'puppet/acceptance/temp_file_utils'
extend Puppet::Acceptance::CAUtils
extend Puppet::Acceptance::TempFileUtils
initialize_temp_dirs
test_name "certificate extensions available as trusted data" do
+
teardown do
reset_agent_ssl
end
hostname = master.execute('facter hostname')
fqdn = master.execute('facter fqdn')
site_pp = get_test_file_path(master, "site.pp")
master_config = {
'master' => {
'autosign' => '/bin/true',
'dns_alt_names' => "puppet,#{hostname},#{fqdn}",
'manifest' => site_pp,
'trusted_node_data' => true,
}
}
csr_attributes = YAML.dump({
'extension_requests' => {
# registered puppet extensions
'pp_uuid' => 'b5e63090-5167-11e3-8f96-0800200c9a66',
'pp_instance_id' => 'i-3fkva',
# private (arbitrary) extensions
'1.3.6.1.4.1.34380.1.2.1' => 'db-server', # node role
'1.3.6.1.4.1.34380.1.2.2' => 'webops' # node group
}
})
create_test_file(master, "site.pp", <<-SITE)
file { "$test_dir/trusted.yaml":
ensure => file,
content => inline_template("<%= YAML.dump(@trusted) %>")
}
SITE
reset_agent_ssl(false)
with_puppet_running_on(master, master_config) do
agents.each do |agent|
next if agent == master
agent_csr_attributes = get_test_file_path(agent, "csr_attributes.yaml")
create_remote_file(agent, agent_csr_attributes, csr_attributes)
on(agent, puppet("agent", "--test",
"--server", master,
"--waitforcert", 0,
"--csr_attributes", agent_csr_attributes,
"--certname #{agent}-extensions",
'ENV' => { "FACTER_test_dir" => get_test_file_path(agent, "") }),
:acceptable_exit_codes => [0, 2])
trusted_data = YAML.load(on(agent, "cat #{get_test_file_path(agent, 'trusted.yaml')}").stdout)
assert_equal({
'authenticated' => 'remote',
'certname' => "#{agent}-extensions",
'extensions' => {
'pp_uuid' => 'b5e63090-5167-11e3-8f96-0800200c9a66',
'pp_instance_id' => 'i-3fkva',
'1.3.6.1.4.1.34380.1.2.1' => 'db-server',
'1.3.6.1.4.1.34380.1.2.2' => 'webops'
}
},
trusted_data)
end
end
end
diff --git a/acceptance/tests/ssl/puppet_cert_generate_and_autosign.rb b/acceptance/tests/ssl/puppet_cert_generate_and_autosign.rb
index 83b912cd9..618b9a000 100644
--- a/acceptance/tests/ssl/puppet_cert_generate_and_autosign.rb
+++ b/acceptance/tests/ssl/puppet_cert_generate_and_autosign.rb
@@ -1,186 +1,191 @@
require 'puppet/acceptance/common_utils'
extend Puppet::Acceptance::CAUtils
test_name "Puppet cert generate behavior (#6112)" do
# This acceptance test documents the behavior of `puppet cert generate` calls
# for three cases:
#
# 1) On a host which has ssl/ca infrastructure. Typically this would be the
# puppet master which is also the CA, and the expectation is that this is the
# host that `puppet cert generate` commands should be issued on.
#
# This case should succeed as it is the documented use case for the command.
#
# 2) On a host which has no ssl/ca infrastructure but has a valid ca.pem from
# the CA cached in ssl/cert. This would be a host (let's say CN=foo) with a
# puppet agent that has checked in and received a signed ca.pem and foo.pem
# certificate from the master CA.
#
# Per discussion with Nick Fagerlund, this behavior is unspecified, although it
# should not result in a certificate. And it currently fails with "Error:
# The certificate retrieved from the master does not match the agent's
# private key." This error messaging is a little misleading, in that it is
# strictly speaking true but does not point out that it is the CA cert and
# local CA keys that are involved.
#
# What happens is `puppet cert generate` starts by creating a local
# CertificateAuthority instance which looks for a locally cached
# ssl/cert/ca.pem, generating ssl/ca/ keys in the process. It finds an
# ssl/cert/ca.pem, because we have the master CA's pem, but this certificate
# does not match the keys in ssl/ca that have just been generated (for the
# local CA instance), and validation of the cert fails with the above error.
#
# 3) On a host which has no ssl infrastructure at all (fresh install, hasn't
# tried to send a CSR to the puppet master yet).
#
# Tracing this case, what happens is that `puppet cert generate` starts by
# creating a local CertificateAuthority instance which looks for a locally
# cached ssl/cert/ca.pem. It does not find one and then proceeds to
# generate_ca_certificate, which populates a local ssl/ca with public/private
# keys for a local ca cert, creates a CSR locally for this cert, which it
# then signs and saves in ssl/ca/ca_crt.pem and caches in ssl/certs/ca.pem.
# (This is the normal bootstrapping case for a CA; same thing happens during an
# initial `puppet master` run).
#
# This case succeeds, but future calls such as `puppet agent -t` fail.
# Haven't fully traced what happens here.
test_cn = "cert.test"
teardown do
step "And try to leave with a good ssl configuration"
reset_agent_ssl
clean_cert(master, test_cn, false)
end
def generate_and_clean_cert(host, cn, autosign)
on(host, puppet('cert', 'generate', cn, '--autosign', autosign))
assert_no_match(/Could not find certificate request for.*cert\.test/i, stderr, "Should not see an error message for a missing certificate request.")
clean_cert(host, cn)
end
def fail_to_generate_cert_on_agent_that_is_not_ca(host, cn, autosign)
return if master.is_pe?
on(host, puppet('cert', 'generate', cn, '--autosign', autosign), :acceptable_exit_codes => [23])
assert_match(/Error: The certificate retrieved from the master does not match the agent's private key./, stderr, "Should not be able to generate a certificate on an agent that is not also the CA, with autosign #{autosign}.")
end
def generate_and_clean_cert_with_dns_alt_names(host, cn, autosign)
on(host, puppet('cert', 'generate', cn, '--autosign', autosign, '--dns_alt_names', 'foo,bar'))
on(master, puppet('cert', 'list', '--all'))
assert_match(/cert.test.*DNS:foo/, stdout, "Should find a dns entry for 'foo' in the cert.test listing.")
assert_match(/cert.test.*DNS:bar/, stdout, "Should find a dns entry for 'bar' in the cert.test listing.")
clean_cert(host, cn)
end
# @return true if the passed host operates in a master role.
def host_is_master?(host)
host['roles'].include?('master')
end
################
# Cases 1 and 2:
step "Case 1 and 2: Tests behavior of `puppet cert generate` on a master node, and on an agent node that has already authenticated to the master. Tests with combinations of autosign and dns_alt_names."
reset_agent_ssl
# User story:
# A root user on the puppet master has a configuration where autosigning is
# explicitly false. They run 'puppet cert generate foo.bar' for a new
# certificate and expect a certificate to be generated and signed because they
# are the root CA, and autosigning should not affect this.
step "puppet cert generate with autosign false"
hosts.each do |host|
if host_is_master?(host)
generate_and_clean_cert(host, test_cn, false)
else
fail_to_generate_cert_on_agent_that_is_not_ca(host, test_cn, false)
end
end
# User story:
# A root user on the puppet master has a configuration where autosigning is
# explicitly true. They run 'puppet cert generate foo.bar' for a new
# certificate and expect a certificate to be generated and signed without
# interference from the autosigning setting. (This succeeds in 3.2.2 and
# earlier but produces an extraneous error message per #6112 because there are
# two attempts to sign the CSR, only the first of which succeeds due to the CSR
# already having been signed and removed.)
step "puppet cert generate with autosign true"
hosts.each do |host|
if host_is_master?(host)
generate_and_clean_cert(host, test_cn, true)
else
fail_to_generate_cert_on_agent_that_is_not_ca(host, test_cn, true)
end
end
# These steps are documenting the current behavior with regard to --dns_alt_names
# flags submitted on the command line with a puppet cert generate.
step "puppet cert generate with autosign false and dns_alt_names"
hosts.each do |host|
if host_is_master?(host)
generate_and_clean_cert_with_dns_alt_names(host, test_cn, false)
else
fail_to_generate_cert_on_agent_that_is_not_ca(host, test_cn, false)
end
end
step "puppet cert generate with autosign true and dns_alt_names"
hosts.each do |host|
if host_is_master?(host)
- generate_and_clean_cert_with_dns_alt_names(host, test_cn, false)
+ on(host, puppet('cert', 'generate', test_cn, '--autosign', 'true', '--dns_alt_names', 'foo,bar'), :acceptable_exit_codes => [24])
+ assert_match(/Error: CSR '#{test_cn}' contains subject alternative names.*Use.*--allow-dns-alt-names/, stderr, "Should not be able to generate a certificate, with autosign true and dns_alt_names without specifying allow_dns_alt_names flag.")
+ # And now sign with allow_dns_alt_names set
+ on(host, puppet('cert', '--allow-dns-alt-names', 'sign', test_cn))
+ assert_match(/Signed certificate request for #{test_cn}/, stdout, "Signed certificate once --allow-dns-alt-names specified")
+ clean_cert(host, test_cn)
else
- fail_to_generate_cert_on_agent_that_is_not_ca(host, test_cn, false)
+ fail_to_generate_cert_on_agent_that_is_not_ca(host, test_cn, true)
end
end
#########
# Case 3:
# Attempting to run this on a Windows machine fails during the CA generation
# while attempting to set the ssl/ca/serial file. Fails inside
# Puppet::Settings#readwritelock because we can't overwrite the lock file in
# Windows.
step "Case 3: A host with no ssl infrastructure makes a `puppet cert generate` call" do
if !master.is_pe?
confine_block :except, :platform => 'windows' do
clear_agent_ssl
step "puppet cert generate"
hosts.each do |host|
generate_and_clean_cert(host, test_cn, false)
# Commenting this out until we can figure out whether this behavior is a bug or
# not, and what the platform issues are.
#
# Need to figure out exactly why this fails, where it fails, and document or
# fix. Can reproduce a failure locally in Ubuntu, and the attempt fails
# 'as expected' in Jenkins acceptance jobs on Lucid and Fedora, but succeeds
# on RHEL and Centos...
#
# Redmine (#21739) captures this.
#
# with_puppet_running_on(master, :master => { :certname => master, :autosign => true }) do
# step "but now unable to authenticate normally as an agent"
#
# on(host, puppet('agent', '-t'), :acceptable_exit_codes => [1])
#
# end
end
end
end
end
##########
# PENDING: Test behavior of `puppet cert generate` with an external ca.
end
diff --git a/acceptance/tests/ticket_15717_puppet_kick.rb b/acceptance/tests/ticket_15717_puppet_kick.rb
index 5daba5d8d..337020721 100644
--- a/acceptance/tests/ticket_15717_puppet_kick.rb
+++ b/acceptance/tests/ticket_15717_puppet_kick.rb
@@ -1,68 +1,68 @@
test_name "#15717: puppet kick"
step "verify puppet kick actually triggers an agent run"
confine :except, :platform => 'windows'
restauth_conf = <<END
path /run
auth yes
allow *
path /
auth any
END
with_puppet_running_on master, {} do
agents.each do |agent|
if agent == master
Log.warn("This test does not support nodes that are both master and agent")
next
end
# kick will verify the SSL server's cert, but since the master and
# agent can be in different domains (ec2, dc1), ask the agent for
# its fqdn, and always kick using that
agentname = on(agent, puppet_agent('--configprint certname')).stdout.chomp
step "create rest auth.conf on agent"
testdir = agent.tmpdir('puppet-kick-auth')
create_remote_file(agent, "#{testdir}/auth.conf", restauth_conf)
step "daemonize the agent"
on(agent, puppet_agent("--debug --daemonize --server #{master} --listen --no-client --rest_authconfig #{testdir}/auth.conf"))
begin
step "wait for agent to start listening"
timeout = 15
begin
Timeout.timeout(timeout) do
loop do
# 7 is "Could not connect to host", which will happen before it's running
- result = on(agent, "curl -k https://#{agent}:8139", :acceptable_exit_codes => [0,7])
+ result = on(agent, "curl --tlsv1 -k https://#{agent}:8139", :acceptable_exit_codes => [0,7])
break if result.exit_code == 0
sleep 1
end
end
rescue Timeout::Error
fail_test "Puppet agent #{agent} failed to start after #{timeout} seconds"
end
step "kick the agent from the master"
on(master, puppet_kick("--host #{agentname}")) do |result|
assert_match(/Puppet kick is deprecated/,
result.stderr,
"Puppet kick did not issue deprecation warning")
assert_match(/status is success/,
result.stdout,
"Puppet kick was successful, " +
"but agent #{agent} did not report success")
end
ensure
step "kill agent"
on(agent, puppet_agent("--configprint pidfile")) do |result|
on(agent, "kill `cat #{result.stdout.chomp}`")
end
end
end
end
diff --git a/acceptance/tests/ticket_17458_puppet_command_prints_help.rb b/acceptance/tests/ticket_17458_puppet_command_prints_help.rb
index 3ba5a31b5..c6ce9d9cf 100644
--- a/acceptance/tests/ticket_17458_puppet_command_prints_help.rb
+++ b/acceptance/tests/ticket_17458_puppet_command_prints_help.rb
@@ -1,5 +1,5 @@
test_name "puppet command with an unknown external command prints help"
-on agents, puppet('unknown') do
+on(agents, puppet('unknown'), :acceptable_exit_codes => [1]) do
assert_match(/See 'puppet help' for help on available puppet subcommands/, stdout)
end
diff --git a/acceptance/tests/ticket_7117_broke_env_criteria_authconf.rb b/acceptance/tests/ticket_7117_broke_env_criteria_authconf.rb
index a6970d1f9..a94165076 100644
--- a/acceptance/tests/ticket_7117_broke_env_criteria_authconf.rb
+++ b/acceptance/tests/ticket_7117_broke_env_criteria_authconf.rb
@@ -1,37 +1,38 @@
# Windows doesn't support Facter fqdn properly
confine :except, :platform => 'windows'
test_name "#7117 Broke the environment criteria in auth.conf"
-testdir = master.tmpdir('env_in_auth_conf')
+
+testdir = create_tmpdir_for_user master, 'env_in_auth_conf'
# add to auth.conf
add_2_authconf = %q{
path /
environment override
auth any
allow *
}
step "Create a temp auth.conf"
create_remote_file master, "#{testdir}/auth.conf", add_2_authconf
on master, "chmod 644 #{testdir}/auth.conf"
on master, "chmod 777 #{testdir}"
with_puppet_running_on master, {'master' => {'rest_authconfig' => "#{testdir}/auth.conf"}}, testdir do
agents.each do |agent|
# Run test on Agents
step "Run agent to upload facts"
on agent, puppet_agent("--test --server #{master}")
certname = master.is_pe? ?
agent.to_s :
on(agent, facter('fqdn')).stdout.chomp
step "Fetch agent facts from Puppet Master"
- on(agent, "curl -k -H \"Accept: yaml\" https://#{master}:8140/override/facts/#{certname}") do
+ on(agent, "curl --tlsv1 -k -H \"Accept: yaml\" https://#{master}:8140/override/facts/#{certname}") do
assert_match(/--- !ruby\/object:Puppet::Node::Facts/, stdout, "Agent Facts not returned for #{agent}")
end
end
end
diff --git a/api/docs/http_certificate_status.md b/api/docs/http_certificate_status.md
index b2411c1b1..41bbdfda4 100644
--- a/api/docs/http_certificate_status.md
+++ b/api/docs/http_certificate_status.md
@@ -1,130 +1,129 @@
Certificate Status
===============
The `certificate status` endpoint allows a client to read or alter the
status of a certificate or pending certificate request. It is only
useful on the CA.
In all requests the `:environment` must be given, but it has no bearing
on the request. Certificates are global.
Find
----
GET /:environment/certificate_status/:certname
Accept: pson
Retrieve information about the specified certificate. Similar to `puppet
cert --list :certname`.
Search
-----
GET /:environment/certificate_statuses/:any_key
Accept: pson
Retrieve information about all known certificates. Similar to `puppet
cert --list --all`. A key is required but is ignored.
Save
----
PUT /:environment/certificate_status/:certname
Content-Type: text/pson
Change the status of the specified certificate. The desired state
is sent in the body of the PUT request as a one-item PSON hash; the two
allowed complete hashes are `{"desired_state":"signed"}` (for signing a
certificate signing request; similar to `puppet cert --sign`) and
`{"desired_state":"revoked"}` (for revoking a certificate; similar to
`puppet cert --revoke`).
-When revoking certificates, you may wish to use a DELETE request
-instead, which will also clean up other info about the host.
-
+Note that revoking a certificate will not clean up other info about the
+host - see the DELETE request for more information.
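For illustration only (not part of the documented examples below), a minimal Ruby sketch of such a PUT request; the host, certname, and SSL file paths are assumptions and must be replaced with values valid for your installation, and the client certificate must be authorized for this endpoint in `auth.conf`:

```ruby
require 'net/http'
require 'openssl'

# Hypothetical host, certname, and paths; substitute your own SSL files.
uri  = URI('https://puppetmaster:8140/production/certificate_status/mycertname')
http = Net::HTTP.new(uri.host, uri.port)
http.use_ssl     = true
http.cert        = OpenSSL::X509::Certificate.new(File.read('/var/lib/puppet/ssl/certs/admin.pem'))
http.key         = OpenSSL::PKey::RSA.new(File.read('/var/lib/puppet/ssl/private_keys/admin.pem'))
http.ca_file     = '/var/lib/puppet/ssl/certs/ca.pem'
http.verify_mode = OpenSSL::SSL::VERIFY_PEER

# Ask the CA to sign the pending CSR, analogous to `puppet cert --sign mycertname`.
request      = Net::HTTP::Put.new(uri.request_uri, 'Content-Type' => 'text/pson')
request.body = '{"desired_state":"signed"}'
puts http.request(request).code
```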
Delete
-----
DELETE /:environment/certificate_status/:hostname
Accept: pson
Cause the certificate authority to discard all SSL information regarding
a host (including any certificates, certificate requests, and keys).
This does not revoke the certificate if one is present; if you wish to
emulate the behavior of `puppet cert --clean`, you must PUT a
`desired_state` of `revoked` before deleting the host’s SSL information.
If the deletion was successful, it returns a string listing the deleted
classes like
"Deleted for myhost: Puppet::SSL::Certificate, Puppet::SSL::Key"
Otherwise it returns
"Nothing was deleted"
### Supported HTTP Methods
This endpoint is disabled in the default configuration. Be careful when
enabling it, as it can allow control over the certificates used by the
puppet master.
GET, PUT, DELETE
### Supported Response Formats
PSON
This endpoint can produce yaml as well, but the returned data is
incomplete.
### Examples
#### Certificate information
GET /env/certificate_status/mycertname
HTTP/1.1 200 OK
Content-Type: text/pson
{
"name":"mycertname",
"state":"signed",
"fingerprint":"A6:44:08:A6:38:62:88:5B:32:97:20:49:8A:4A:4A:AD:65:C3:3E:A2:4C:30:72:73:02:C5:F3:D4:0E:B7:FC:2F",
"fingerprints":{
"default":"A6:44:08:A6:38:62:88:5B:32:97:20:49:8A:4A:4A:AD:65:C3:3E:A2:4C:30:72:73:02:C5:F3:D4:0E:B7:FC:2F",
"SHA1":"77:E6:5A:7E:DD:83:78:DC:F8:51:E3:8B:12:71:F4:57:F1:C2:34:AE",
"SHA256":"A6:44:08:A6:38:62:88:5B:32:97:20:49:8A:4A:4A:AD:65:C3:3E:A2:4C:30:72:73:02:C5:F3:D4:0E:B7:FC:2F",
"SHA512":"CA:A0:8C:B9:FE:9D:C2:72:18:57:08:E9:4B:11:B7:BC:4E:F7:52:C8:9C:76:03:45:B4:B6:C5:D2:DC:E8:79:43:D7:71:1F:5C:97:FA:B2:F3:ED:AE:19:BD:A9:3B:DB:9F:A5:B4:8D:57:3F:40:34:29:50:AA:AA:0A:93:D8:D7:54"
},
"dns_alt_names":["DNS:puppet","DNS:mycertname"]
}
#### Revoking a certificate
PUT /production/certificate_status/mycertname HTTP/1.1
Content-Type: text/pson
Content-Length: 27
{"desired_state":"revoked"}
This has no meaningful return value.
#### Deleting the certificate information
DELETE /production/certificate_status/mycertname HTTP/1.1
Gets the response:
"Deleted for mycertname: Puppet::SSL::Certificate, Puppet::SSL::Key"
Schema
-----
Find and search operations return objects which
conform to the json schema at {file:api/schemas/host.json
api/schemas/host.json}.
diff --git a/api/docs/http_environments.md b/api/docs/http_environments.md
index bf7dce576..2a0bb6904 100644
--- a/api/docs/http_environments.md
+++ b/api/docs/http_environments.md
@@ -1,41 +1,46 @@
Environments
============
-The `environments` endpoint allows for enumeration of the environments known to the master, along with the modules available in each.
+The `environments` endpoint allows for enumeration of the environments known to the master, along with information about each
+environment, such as its modulepath, manifest directory, environment timeout, and config version.
This endpoint is by default accessible to any client with a valid certificate, though this may be changed by `auth.conf`.
Get
---
Get the list of known environments.
GET /v2.0/environments
### Parameters
None
### Example Request & Response
GET /v2.0/environments
HTTP 200 OK
Content-Type: application/json
{
"search_paths": ["/etc/puppet/environments"]
"environments": {
"production": {
"settings": {
- "modulepath": ["/first/module/directory", "/second/module/directory"],
- "manifest": ["/location/of/manifests"]
+ "modulepath": ["/etc/puppetlabs/puppet/environments/production/modules", "/etc/puppetlabs/puppet/environments/development/modules"],
+ "manifest": ["/etc/puppetlabs/puppet/environments/production/manifests"]
+ "environment_timeout": 180,
+ "config_version": "/version/of/config"
}
}
}
}
+The `environment_timeout` attribute could also be the string "unlimited".
+
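For illustration, a minimal Ruby sketch of reading this endpoint with an agent's certificate; the host and SSL file paths are assumptions:

```ruby
require 'net/http'
require 'openssl'
require 'json'

# Hypothetical host and SSL paths; use your agent's own cert, key, and CA file.
uri  = URI('https://puppetmaster:8140/v2.0/environments')
http = Net::HTTP.new(uri.host, uri.port)
http.use_ssl     = true
http.cert        = OpenSSL::X509::Certificate.new(File.read('/var/lib/puppet/ssl/certs/agent1.pem'))
http.key         = OpenSSL::PKey::RSA.new(File.read('/var/lib/puppet/ssl/private_keys/agent1.pem'))
http.ca_file     = '/var/lib/puppet/ssl/certs/ca.pem'
http.verify_mode = OpenSSL::SSL::VERIFY_PEER

body = JSON.parse(http.get(uri.request_uri, 'Accept' => 'application/json').body)
body['environments'].each do |name, env|
  puts "#{name}: environment_timeout=#{env['settings']['environment_timeout']}"
end
```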
Schema
------
An environments response body adheres to the {file:api/schemas/environments.json
api/schemas/environments.json} schema.
diff --git a/api/schemas/environments.json b/api/schemas/environments.json
index 3b99869d1..58931180a 100644
--- a/api/schemas/environments.json
+++ b/api/schemas/environments.json
@@ -1,39 +1,61 @@
{
"$schema": "http://json-schema.org/draft-04/schema#",
"title": "Environment Enumeration",
"description": "An enumeration of environments and their settings",
"type": "object",
"properties": {
"search_paths": {
"type": "array",
"items": {
"type": "string"
},
"minItems": 1,
"description": "An array of the paths where the master looked for environments."
},
"environments": {
"type": "object",
"patternProperties": {
"^[a-z0-9_]+$": {
"type": "object",
"properties": {
"settings" : {
"type": "object",
"properties": {
"manifest": { "type": "string" },
"modulepath": {
"type": "array",
"items": { "type": "string" }
- }
+ },
+ "config_version": { "type": "string" },
+ "environment_timeout": { "type": ["integer", "string"] }
},
- "required": ["modulepath", "manifest"]
+ "required": ["modulepath", "manifest", "environment_timeout", "config_version"],
+ "oneOf": [
+ {
+ "title": "numeric timeout",
+ "properties": {
+ "environment_timeout": {
+ "type": "integer",
+ "minimum": 0
+ }
+ }
+ },
+ {
+ "title": "unlimited timeout",
+ "properties": {
+ "environment_timeout": {
+ "type": "string",
+ "enum": ["unlimited"]
+ }
+ }
+ }
+ ]
}
},
"required": ["settings"]
}
}
}
},
"required": ["search_paths", "environments"]
}
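A sketch of checking a response document against this schema using the `json-schema` gem; the gem's availability and the sample values are assumptions, not part of the schema itself:

```ruby
require 'json'
require 'json-schema'  # assumption: the json-schema gem is installed

schema = JSON.parse(File.read('api/schemas/environments.json'))

# A minimal document exercising the "unlimited timeout" branch of the oneOf.
response = {
  'search_paths' => ['/etc/puppet/environments'],
  'environments' => {
    'production' => {
      'settings' => {
        'modulepath'          => ['/etc/puppet/environments/production/modules'],
        'manifest'            => '/etc/puppet/environments/production/manifests',
        'environment_timeout' => 'unlimited',
        'config_version'      => '/version/of/config'
      }
    }
  }
}

puts JSON::Validator.validate(schema, response)  # => true
```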
diff --git a/benchmarks/defined_types/benchmarker.rb b/benchmarks/defined_types/benchmarker.rb
index 77d46e71e..a54e228b0 100644
--- a/benchmarks/defined_types/benchmarker.rb
+++ b/benchmarks/defined_types/benchmarker.rb
@@ -1,73 +1,73 @@
require 'erb'
require 'ostruct'
require 'fileutils'
require 'json'
class Benchmarker
include FileUtils
def initialize(target, size)
@target = target
@size = size
end
def setup
require 'puppet'
config = File.join(@target, 'puppet.conf')
Puppet.initialize_settings(['--config', config])
end
- def run
+ def run(args=nil)
env = Puppet.lookup(:environments).get('benchmarking')
node = Puppet::Node.new("testing", :environment => env)
Puppet::Resource::Catalog.indirection.find("testing", :use_node => node)
end
def generate
environment = File.join(@target, 'environments', 'benchmarking')
templates = File.join('benchmarks', 'defined_types')
mkdir_p(File.join(environment, 'modules'))
mkdir_p(File.join(environment, 'manifests'))
render(File.join(templates, 'site.pp.erb'),
File.join(environment, 'manifests', 'site.pp'),
:size => @size)
@size.times do |i|
module_name = "module#{i}"
module_base = File.join(environment, 'modules', module_name)
manifests = File.join(module_base, 'manifests')
mkdir_p(manifests)
File.open(File.join(module_base, 'metadata.json'), 'w') do |f|
JSON.dump({
"types" => [],
"source" => "",
"author" => "Defined Types Benchmark",
"license" => "Apache 2.0",
"version" => "1.0.0",
"description" => "Defined Types benchmark module #{i}",
"summary" => "Just this benchmark module, you know?",
"dependencies" => [],
}, f)
end
render(File.join(templates, 'module', 'testing.pp.erb'),
File.join(manifests, 'testing.pp'),
:name => module_name)
end
render(File.join(templates, 'puppet.conf.erb'),
File.join(@target, 'puppet.conf'),
:location => @target)
end
def render(erb_file, output_file, bindings)
site = ERB.new(File.read(erb_file))
File.open(output_file, 'w') do |fh|
fh.write(site.result(OpenStruct.new(bindings).instance_eval { binding }))
end
end
end
diff --git a/benchmarks/defined_types/benchmarker.rb b/benchmarks/defined_types4/benchmarker.rb
similarity index 93%
copy from benchmarks/defined_types/benchmarker.rb
copy to benchmarks/defined_types4/benchmarker.rb
index 77d46e71e..5b0669291 100644
--- a/benchmarks/defined_types/benchmarker.rb
+++ b/benchmarks/defined_types4/benchmarker.rb
@@ -1,73 +1,73 @@
require 'erb'
require 'ostruct'
require 'fileutils'
require 'json'
class Benchmarker
include FileUtils
def initialize(target, size)
@target = target
@size = size
end
def setup
require 'puppet'
config = File.join(@target, 'puppet.conf')
Puppet.initialize_settings(['--config', config])
end
- def run
+ def run(args=nil)
env = Puppet.lookup(:environments).get('benchmarking')
node = Puppet::Node.new("testing", :environment => env)
Puppet::Resource::Catalog.indirection.find("testing", :use_node => node)
end
def generate
environment = File.join(@target, 'environments', 'benchmarking')
- templates = File.join('benchmarks', 'defined_types')
+ templates = File.join('benchmarks', 'defined_types4')
mkdir_p(File.join(environment, 'modules'))
mkdir_p(File.join(environment, 'manifests'))
render(File.join(templates, 'site.pp.erb'),
File.join(environment, 'manifests', 'site.pp'),
:size => @size)
@size.times do |i|
module_name = "module#{i}"
module_base = File.join(environment, 'modules', module_name)
manifests = File.join(module_base, 'manifests')
mkdir_p(manifests)
File.open(File.join(module_base, 'metadata.json'), 'w') do |f|
JSON.dump({
"types" => [],
"source" => "",
- "author" => "Defined Types Benchmark",
+ "author" => "Defined Types Benchmark Future Parser",
"license" => "Apache 2.0",
"version" => "1.0.0",
"description" => "Defined Types benchmark module #{i}",
"summary" => "Just this benchmark module, you know?",
"dependencies" => [],
}, f)
end
render(File.join(templates, 'module', 'testing.pp.erb'),
File.join(manifests, 'testing.pp'),
:name => module_name)
end
render(File.join(templates, 'puppet.conf.erb'),
File.join(@target, 'puppet.conf'),
:location => @target)
end
def render(erb_file, output_file, bindings)
site = ERB.new(File.read(erb_file))
File.open(output_file, 'w') do |fh|
fh.write(site.result(OpenStruct.new(bindings).instance_eval { binding }))
end
end
end
diff --git a/benchmarks/defined_types4/description b/benchmarks/defined_types4/description
new file mode 100644
index 000000000..f24e9bba4
--- /dev/null
+++ b/benchmarks/defined_types4/description
@@ -0,0 +1,4 @@
+Benchmark scenario: heavy use of defined types with the future parser
+Benchmark target: catalog compilation
+Parser: Future
+
diff --git a/benchmarks/defined_types4/module/testing.pp.erb b/benchmarks/defined_types4/module/testing.pp.erb
new file mode 100644
index 000000000..e723561d5
--- /dev/null
+++ b/benchmarks/defined_types4/module/testing.pp.erb
@@ -0,0 +1,3 @@
+define <%= name %>::testing {
+ notify { "in <%= name %>: $title": }
+}
diff --git a/benchmarks/defined_types4/puppet.conf.erb b/benchmarks/defined_types4/puppet.conf.erb
new file mode 100644
index 000000000..618ea6e2f
--- /dev/null
+++ b/benchmarks/defined_types4/puppet.conf.erb
@@ -0,0 +1,4 @@
+confdir = <%= location %>
+vardir = <%= location %>
+environmentpath = <%= File.join(location, 'environments') %>
+parser = future
diff --git a/benchmarks/defined_types4/site.pp.erb b/benchmarks/defined_types4/site.pp.erb
new file mode 100644
index 000000000..338b635b2
--- /dev/null
+++ b/benchmarks/defined_types4/site.pp.erb
@@ -0,0 +1,4 @@
+<% size.times do |i| %>
+ module<%= i %>::testing { "first": }
+ module<%= i %>::testing{ "second": }
+<% end %>
diff --git a/benchmarks/empty_catalog/benchmarker.rb b/benchmarks/empty_catalog/benchmarker.rb
new file mode 100644
index 000000000..aee97d26e
--- /dev/null
+++ b/benchmarks/empty_catalog/benchmarker.rb
@@ -0,0 +1,52 @@
+require 'erb'
+require 'ostruct'
+require 'fileutils'
+require 'json'
+
+class Benchmarker
+ include FileUtils
+
+ def initialize(target, size)
+ @target = target
+ @size = size
+ end
+
+ def setup
+ end
+
+ def run(args=nil)
+ unless @initialized
+ require 'puppet'
+ config = File.join(@target, 'puppet.conf')
+ Puppet.initialize_settings(['--config', config])
+ @initialized = true
+ end
+ env = Puppet.lookup(:environments).get('benchmarking')
+ node = Puppet::Node.new("testing", :environment => env)
+ # Mimic what apply does (or the benchmark will in part run for the *root* environment)
+ Puppet.push_context({:current_environment => env},'current env for benchmark')
+ Puppet::Resource::Catalog.indirection.find("testing", :use_node => node)
+ end
+
+ def generate
+ environment = File.join(@target, 'environments', 'benchmarking')
+ templates = File.join('benchmarks', 'empty_catalog')
+
+ mkdir_p(File.join(environment, 'modules'))
+ mkdir_p(File.join(environment, 'manifests'))
+
+ render(File.join(templates, 'site.pp.erb'),
+ File.join(environment, 'manifests', 'site.pp'),{})
+
+ render(File.join(templates, 'puppet.conf.erb'),
+ File.join(@target, 'puppet.conf'),
+ :location => @target)
+ end
+
+ def render(erb_file, output_file, bindings)
+ site = ERB.new(File.read(erb_file))
+ File.open(output_file, 'w') do |fh|
+ fh.write(site.result(OpenStruct.new(bindings).instance_eval { binding }))
+ end
+ end
+end
diff --git a/benchmarks/empty_catalog/description b/benchmarks/empty_catalog/description
new file mode 100644
index 000000000..d06e41b82
--- /dev/null
+++ b/benchmarks/empty_catalog/description
@@ -0,0 +1,4 @@
+Benchmark scenario: an empty catalog (only one call to log a message) shows the setup time for env / compiler
+Benchmark target: catalog compilation overhead
+Parser: Future
+
diff --git a/benchmarks/empty_catalog/puppet.conf.erb b/benchmarks/empty_catalog/puppet.conf.erb
new file mode 100644
index 000000000..00e2986bf
--- /dev/null
+++ b/benchmarks/empty_catalog/puppet.conf.erb
@@ -0,0 +1,5 @@
+confdir = <%= location %>
+vardir = <%= location %>
+environmentpath = <%= File.join(location, 'environments') %>
+environment_timeout = '0'
+parser = future
diff --git a/benchmarks/empty_catalog/site.pp.erb b/benchmarks/empty_catalog/site.pp.erb
new file mode 100644
index 000000000..054628183
--- /dev/null
+++ b/benchmarks/empty_catalog/site.pp.erb
@@ -0,0 +1 @@
+notice('hello world')
\ No newline at end of file
diff --git a/benchmarks/evaluations/benchmarker.rb b/benchmarks/evaluations/benchmarker.rb
new file mode 100644
index 000000000..4ed397db7
--- /dev/null
+++ b/benchmarks/evaluations/benchmarker.rb
@@ -0,0 +1,140 @@
+require 'erb'
+require 'ostruct'
+require 'fileutils'
+require 'json'
+
+class Benchmarker
+ include FileUtils
+
+ def initialize(target, size)
+ @target = target
+ @size = size
+ @micro_benchmarks = {}
+ @parsecount = 100
+ @evalcount = 100
+ end
+
+ def setup
+ require 'puppet'
+ require 'puppet/pops'
+ config = File.join(@target, 'puppet.conf')
+ Puppet.initialize_settings(['--config', config])
+ manifests = File.join('benchmarks', 'evaluations', 'manifests')
+ Dir.foreach(manifests) do |f|
+ if f =~ /^(.*)\.pp$/
+ @micro_benchmarks[$1] = File.read(File.join(manifests, f))
+ end
+ end
+ # Run / Evaluate the common puppet logic
+ @env = Puppet.lookup(:environments).get('benchmarking')
+ @node = Puppet::Node.new("testing", :environment => @env)
+ @parser = Puppet::Pops::Parser::EvaluatingParser.new
+ @compiler = Puppet::Parser::Compiler.new(@node)
+ @scope = @compiler.topscope
+
+ # Perform a portion of what a compile does (just enough to evaluate the site.pp logic)
+ @compiler.catalog.environment_instance = @compiler.environment
+ @compiler.send(:evaluate_main)
+
+ # Then pretend we are running as part of a compilation
+ Puppet.push_context(@compiler.context_overrides, "Benchmark masquerading as compiler configured context")
+ end
+
+ def run(args = {})
+ details = args[:detail] || 'all'
+ measurements = []
+ @micro_benchmarks.each do |name, source|
+ # skip all but the wanted benchmark if a single benchmark is requested
+ next unless details == 'all' || match = details.match(/#{name}(?:[\._\s](parse|eval))?$/)
+ # if name ends with .parse or .eval only do that part, else do both parts
+ ending = match ? match[1] : nil # parse, eval or nil ending
+ unless ending == 'eval'
+ measurements << Benchmark.measure("#{name} parse") do
+ @parsecount.times { @parser.parse_string(source, name) }
+ end
+ end
+ unless ending == 'parse'
+ model = @parser.parse_string(source, name)
+ measurements << Benchmark.measure("#{name} eval") do
+ @evalcount.times do
+ begin
+ # Run each in a local scope
+ scope_memo = @scope.ephemeral_level
+ @scope.new_ephemeral(true)
+ @parser.evaluate(@scope, model)
+ ensure
+ # Toss the created local scope
+ @scope.unset_ephemeral_var(scope_memo)
+ end
+ end
+ end
+ end
+ end
+ measurements
+ end
+
+ def generate
+ environment = File.join(@target, 'environments', 'benchmarking')
+ templates = File.join('benchmarks', 'evaluations')
+
+ mkdir_p(File.join(environment, 'modules'))
+ mkdir_p(File.join(environment, 'manifests'))
+
+ render(File.join(templates, 'site.pp.erb'),
+ File.join(environment, 'manifests', 'site.pp'),{})
+
+ render(File.join(templates, 'puppet.conf.erb'),
+ File.join(@target, 'puppet.conf'),
+ :location => @target)
+
+ # Generate one module with a 3x function and a 4x function (namespaces)
+ module_name = "module1"
+ module_base = File.join(environment, 'modules', module_name)
+ manifests = File.join(module_base, 'manifests')
+ mkdir_p(manifests)
+ functions_3x = File.join(module_base, 'lib', 'puppet', 'parser', 'functions')
+ functions_4x = File.join(module_base, 'lib', 'puppet', 'functions')
+ mkdir_p(functions_3x)
+ mkdir_p(functions_4x)
+
+ File.open(File.join(module_base, 'metadata.json'), 'w') do |f|
+ JSON.dump({
+ "types" => [],
+ "source" => "",
+ "author" => "Evaluations Benchmark",
+ "license" => "Apache 2.0",
+ "version" => "1.0.0",
+ "description" => "Evaluations Benchmark module 1",
+ "summary" => "Module with supporting logic for evaluations benchmark",
+ "dependencies" => [],
+ }, f)
+ end
+
+ render(File.join(templates, 'module', 'init.pp.erb'),
+ File.join(manifests, 'init.pp'),
+ :name => module_name)
+
+ render(File.join(templates, 'module', 'func3.rb.erb'),
+ File.join(functions_3x, 'func3.rb'),
+ :name => module_name)
+
+ # namespaced function
+ mkdir_p(File.join(functions_4x, module_name))
+ render(File.join(templates, 'module', 'module1_func4.rb.erb'),
+ File.join(functions_4x, module_name, 'func4.rb'),
+ :name => module_name)
+
+ # non namespaced
+ render(File.join(templates, 'module', 'func4.rb.erb'),
+ File.join(functions_4x, 'func4.rb'),
+ :name => module_name)
+ end
+
+ def render(erb_file, output_file, bindings)
+ site = ERB.new(File.read(erb_file))
+ File.open(output_file, 'w') do |fh|
+ fh.write(site.result(OpenStruct.new(bindings).instance_eval { binding }))
+ end
+ end
+
+end
diff --git a/benchmarks/evaluations/benchmarker_task.rb b/benchmarks/evaluations/benchmarker_task.rb
new file mode 100644
index 000000000..d38ed1088
--- /dev/null
+++ b/benchmarks/evaluations/benchmarker_task.rb
@@ -0,0 +1,11 @@
+# Helper class that is used by the Rake task generator.
+# Currently only supports defining arguments that are passed to run
+# (The rake task generator always passes :warm_up_runs as an Integer when profiling).
+# Other benchmarks, and regular runs that want arguments, must specify them
+# as an Array of symbols.
+#
+class BenchmarkerTask
+ def self.run_args
+ [:detail]
+ end
+end
\ No newline at end of file
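To illustrate the contract, here is a hypothetical sketch of how a generated rake task could wire this together with the evaluations `Benchmarker` defined above; the target directory and the `:detail` value are made up for the sketch, and the real generator lives in the project's Rake tasks:

```ruby
# Hypothetical driver; run from the repository root so the template paths resolve.
require 'benchmark'
require_relative 'benchmarker'
require_relative 'benchmarker_task'

puts BenchmarkerTask.run_args.inspect    # => [:detail]

benchmarker = Benchmarker.new('/tmp/benchmark_evaluations', 1)
benchmarker.generate   # writes the benchmarking environment under the target dir
benchmarker.setup      # initializes Puppet and loads the micro benchmark manifests

# Run only the parse phase of the 'interpolation' micro benchmark.
measurements = benchmarker.run(:detail => 'interpolation.parse')
measurements.each { |m| puts "#{m.label}: #{m.real}" }
```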
diff --git a/benchmarks/evaluations/description b/benchmarks/evaluations/description
new file mode 100644
index 000000000..812037e6d
--- /dev/null
+++ b/benchmarks/evaluations/description
@@ -0,0 +1,13 @@
+Benchmark scenario: evaluates a select set of time critical expressions
+Benchmark target: measuring individual use cases of evaluation
+Parser: Future
+
+Evaluations:
+* fcall_3x - calls sprintf 20 times
+* fcall_4x - calls assert_type 20 times (heavier than sprintf; there is no similarly simple 4x function)
+* interpolation - does 20 interpolations of varying length
+* var_absolute - references a top scope variable 20 times with an absolute reference
+* var_relative - references a top scope variable 20 times with a non-absolute reference
+* var_class_absolute - references a class variable 20 times with an absolute reference
+* var_class_relative - references a class variable 20 times with a non-absolute reference
+
diff --git a/benchmarks/evaluations/manifests/assert_type.pp b/benchmarks/evaluations/manifests/assert_type.pp
new file mode 100644
index 000000000..0be757281
--- /dev/null
+++ b/benchmarks/evaluations/manifests/assert_type.pp
@@ -0,0 +1,6 @@
+$tmp = [
+assert_type(Integer,1), assert_type(Integer,1), assert_type(Integer,1), assert_type(Integer,1), assert_type(Integer,1),
+assert_type(Integer,1), assert_type(Integer,1), assert_type(Integer,1), assert_type(Integer,1), assert_type(Integer,1),
+assert_type(Integer,1), assert_type(Integer,1), assert_type(Integer,1), assert_type(Integer,1), assert_type(Integer,1),
+assert_type(Integer,1), assert_type(Integer,1), assert_type(Integer,1), assert_type(Integer,1), assert_type(Integer,1),
+]
diff --git a/benchmarks/evaluations/manifests/fcall_3x.pp b/benchmarks/evaluations/manifests/fcall_3x.pp
new file mode 100644
index 000000000..9689ca97a
--- /dev/null
+++ b/benchmarks/evaluations/manifests/fcall_3x.pp
@@ -0,0 +1,6 @@
+$tmp = [
+func3(x,y), func3(x,y), func3(x,y), func3(x,y), func3(x,y),
+func3(x,y), func3(x,y), func3(x,y), func3(x,y), func3(x,y),
+func3(x,y), func3(x,y), func3(x,y), func3(x,y), func3(x,y),
+func3(x,y), func3(x,y), func3(x,y), func3(x,y), func3(x,y),
+]
diff --git a/benchmarks/evaluations/manifests/fcall_4x.pp b/benchmarks/evaluations/manifests/fcall_4x.pp
new file mode 100644
index 000000000..a7b7b712a
--- /dev/null
+++ b/benchmarks/evaluations/manifests/fcall_4x.pp
@@ -0,0 +1,6 @@
+$tmp = [
+func4(x,y), func4(x,y), func4(x,y), func4(x,y), func4(x,y),
+func4(x,y), func4(x,y), func4(x,y), func4(x,y), func4(x,y),
+func4(x,y), func4(x,y), func4(x,y), func4(x,y), func4(x,y),
+func4(x,y), func4(x,y), func4(x,y), func4(x,y), func4(x,y),
+]
diff --git a/benchmarks/evaluations/manifests/fcall_ns4x.pp b/benchmarks/evaluations/manifests/fcall_ns4x.pp
new file mode 100644
index 000000000..d59de8539
--- /dev/null
+++ b/benchmarks/evaluations/manifests/fcall_ns4x.pp
@@ -0,0 +1,6 @@
+$tmp = [
+module1::func4(x,y), module1::func4(x,y), module1::func4(x,y), module1::func4(x,y), module1::func4(x,y),
+module1::func4(x,y), module1::func4(x,y), module1::func4(x,y), module1::func4(x,y), module1::func4(x,y),
+module1::func4(x,y), module1::func4(x,y), module1::func4(x,y), module1::func4(x,y), module1::func4(x,y),
+module1::func4(x,y), module1::func4(x,y), module1::func4(x,y), module1::func4(x,y), module1::func4(x,y),
+]
diff --git a/benchmarks/evaluations/manifests/interpolation.pp b/benchmarks/evaluations/manifests/interpolation.pp
new file mode 100644
index 000000000..817770146
--- /dev/null
+++ b/benchmarks/evaluations/manifests/interpolation.pp
@@ -0,0 +1,11 @@
+$tmp = [ "...$x...",
+ "...$x...$x",
+ "...$x...$x...",
+ "...$x...$x...$x...",
+ "...$x...$x...$x...$x",
+ "...$x...$x...$x...$x...",
+ "...$x...$x...$x...$x...$x",
+ "...$x...$x...$x...$x...$x...",
+ "...$x...$x...$x...$x...$x...$x",
+ "...$x...$x...$x...$x...$x...$x...",
+]
\ No newline at end of file
diff --git a/benchmarks/evaluations/manifests/var_absolute.pp b/benchmarks/evaluations/manifests/var_absolute.pp
new file mode 100644
index 000000000..ed4731817
--- /dev/null
+++ b/benchmarks/evaluations/manifests/var_absolute.pp
@@ -0,0 +1,3 @@
+$tmp = [ $::x, $::x, $::x, $::x, $::x, $::x, $::x, $::x, $::x, $::x,
+ $::x, $::x, $::x, $::x, $::x, $::x, $::x, $::x, $::x, $::x,
+]
diff --git a/benchmarks/evaluations/manifests/var_class_absolute.pp b/benchmarks/evaluations/manifests/var_class_absolute.pp
new file mode 100644
index 000000000..e86940f6e
--- /dev/null
+++ b/benchmarks/evaluations/manifests/var_class_absolute.pp
@@ -0,0 +1,5 @@
+$tmp = [ $::testing::param_a, $::testing::param_a, $::testing::param_a, $::testing::param_a, $::testing::param_a,
+ $::testing::param_a, $::testing::param_a, $::testing::param_a, $::testing::param_a, $::testing::param_a,
+ $::testing::param_a, $::testing::param_a, $::testing::param_a, $::testing::param_a, $::testing::param_a,
+ $::testing::param_a, $::testing::param_a, $::testing::param_a, $::testing::param_a, $::testing::param_a,
+]
diff --git a/benchmarks/evaluations/manifests/var_class_relative.pp b/benchmarks/evaluations/manifests/var_class_relative.pp
new file mode 100644
index 000000000..3048db6ad
--- /dev/null
+++ b/benchmarks/evaluations/manifests/var_class_relative.pp
@@ -0,0 +1,5 @@
+$tmp = [ $testing::param_a, $testing::param_a, $testing::param_a, $testing::param_a, $testing::param_a,
+ $testing::param_a, $testing::param_a, $testing::param_a, $testing::param_a, $testing::param_a,
+ $testing::param_a, $testing::param_a, $testing::param_a, $testing::param_a, $testing::param_a,
+ $testing::param_a, $testing::param_a, $testing::param_a, $testing::param_a, $testing::param_a,
+]
diff --git a/benchmarks/evaluations/manifests/var_relative.pp b/benchmarks/evaluations/manifests/var_relative.pp
new file mode 100644
index 000000000..3fba7b6ef
--- /dev/null
+++ b/benchmarks/evaluations/manifests/var_relative.pp
@@ -0,0 +1,3 @@
+$tmp = [$x, $x, $x, $x, $x, $x, $x, $x, $x, $x,
+ $x, $x, $x, $x, $x, $x, $x, $x, $x, $x,
+]
diff --git a/benchmarks/evaluations/module/func3.rb.erb b/benchmarks/evaluations/module/func3.rb.erb
new file mode 100644
index 000000000..6b814a0d0
--- /dev/null
+++ b/benchmarks/evaluations/module/func3.rb.erb
@@ -0,0 +1,8 @@
+Puppet::Parser::Functions::newfunction(:func3,
+ :arity => 2,
+ :doc => "Blah blah, this is a lot of documentation that the ruby parser must deal with
+ because documentation is part of what is loaded at runtime. Some functions have
+ very little documentation, and some have quite a lot. This simulates documentation
+ that is slightly longer than the shortest ones.") do |vals|
+ # produces nil
+end
diff --git a/benchmarks/evaluations/module/func4.rb.erb b/benchmarks/evaluations/module/func4.rb.erb
new file mode 100644
index 000000000..b923ec4f5
--- /dev/null
+++ b/benchmarks/evaluations/module/func4.rb.erb
@@ -0,0 +1,9 @@
+# Blah blah, this is a lot of documentation that the ruby parser must deal with
+# because documentation is part of what is loaded at runtime. Some functions have
+# very little documentation, and some have quite a lot. This simulates documentation
+# that is slightly longer than the shortest ones.
+#
+Puppet::Functions.create_function(:func4) do
+ def func4(x,y)
+ end
+end
\ No newline at end of file
diff --git a/benchmarks/evaluations/module/init.pp.erb b/benchmarks/evaluations/module/init.pp.erb
new file mode 100644
index 000000000..755657c8a
--- /dev/null
+++ b/benchmarks/evaluations/module/init.pp.erb
@@ -0,0 +1 @@
+# empty init (for now)
\ No newline at end of file
diff --git a/benchmarks/evaluations/module/module1_func4.rb.erb b/benchmarks/evaluations/module/module1_func4.rb.erb
new file mode 100644
index 000000000..1891156c0
--- /dev/null
+++ b/benchmarks/evaluations/module/module1_func4.rb.erb
@@ -0,0 +1,9 @@
+# Blah blah, this is a lot of documentation that the ruby parser must deal with
+# because documentation is part of what is loaded at runtime. Some functions have
+# very little documentation, and some have quite a lot. This simulates documentation
+# that is slightly longer than the shortest ones.
+#
+Puppet::Functions.create_function(:'<%= name %>::func4') do
+ def func4(x,y)
+ end
+end
\ No newline at end of file
diff --git a/benchmarks/evaluations/puppet.conf.erb b/benchmarks/evaluations/puppet.conf.erb
new file mode 100644
index 000000000..d0bf4b2f1
--- /dev/null
+++ b/benchmarks/evaluations/puppet.conf.erb
@@ -0,0 +1,6 @@
+confdir = <%= location %>
+vardir = <%= location %>
+environmentpath = <%= File.join(location, 'environments') %>
+environment_timeout = '0'
+parser = future
+strict_variables = true
diff --git a/benchmarks/evaluations/site.pp.erb b/benchmarks/evaluations/site.pp.erb
new file mode 100644
index 000000000..05cc8d8c5
--- /dev/null
+++ b/benchmarks/evaluations/site.pp.erb
@@ -0,0 +1,10 @@
+# Common setup done once for all micro benchmarks
+#
+class testing {
+ $param_a = 10
+ $param_b = 20
+}
+include testing
+$x = 'aaaaaaaa'
+
+
diff --git a/benchmarks/many_modules/benchmarker.rb b/benchmarks/many_modules/benchmarker.rb
index 8540f85b1..4a6d4f463 100644
--- a/benchmarks/many_modules/benchmarker.rb
+++ b/benchmarks/many_modules/benchmarker.rb
@@ -1,77 +1,77 @@
require 'erb'
require 'ostruct'
require 'fileutils'
require 'json'
class Benchmarker
include FileUtils
def initialize(target, size)
@target = target
@size = size
end
def setup
require 'puppet'
config = File.join(@target, 'puppet.conf')
Puppet.initialize_settings(['--config', config])
end
- def run
+ def run(args=nil)
env = Puppet.lookup(:environments).get('benchmarking')
node = Puppet::Node.new("testing", :environment => env)
Puppet::Resource::Catalog.indirection.find("testing", :use_node => node)
end
def generate
environment = File.join(@target, 'environments', 'benchmarking')
templates = File.join('benchmarks', 'many_modules')
mkdir_p(File.join(environment, 'modules'))
mkdir_p(File.join(environment, 'manifests'))
render(File.join(templates, 'site.pp.erb'),
File.join(environment, 'manifests', 'site.pp'),
:size => @size)
@size.times do |i|
module_name = "module#{i}"
module_base = File.join(environment, 'modules', module_name)
manifests = File.join(module_base, 'manifests')
mkdir_p(manifests)
File.open(File.join(module_base, 'metadata.json'), 'w') do |f|
JSON.dump({
"types" => [],
"source" => "",
"author" => "ManyModules Benchmark",
"license" => "Apache 2.0",
"version" => "1.0.0",
"description" => "Many Modules benchmark module #{i}",
"summary" => "Just this benchmark module, you know?",
"dependencies" => [],
}, f)
end
render(File.join(templates, 'module', 'init.pp.erb'),
File.join(manifests, 'init.pp'),
:name => module_name)
render(File.join(templates, 'module', 'internal.pp.erb'),
File.join(manifests, 'internal.pp'),
:name => module_name)
end
render(File.join(templates, 'puppet.conf.erb'),
File.join(@target, 'puppet.conf'),
:location => @target)
end
def render(erb_file, output_file, bindings)
site = ERB.new(File.read(erb_file))
File.open(output_file, 'w') do |fh|
fh.write(site.result(OpenStruct.new(bindings).instance_eval { binding }))
end
end
end
diff --git a/benchmarks/system_startup/benchmarker.rb b/benchmarks/system_startup/benchmarker.rb
index c48a878dd..4a691a35c 100644
--- a/benchmarks/system_startup/benchmarker.rb
+++ b/benchmarks/system_startup/benchmarker.rb
@@ -1,17 +1,17 @@
class Benchmarker
def initialize(target, size)
end
def setup
end
def generate
end
- def run
+ def run(args=nil)
# Just running help is probably a good proxy of a full startup.
# Simply asking for the version might also be good, but it would miss all
# of the app searching and loading parts
`puppet help`
end
end
diff --git a/docs/acceptance_tests.md b/docs/acceptance_tests.md
index 51b8b964a..809ac9c43 100644
--- a/docs/acceptance_tests.md
+++ b/docs/acceptance_tests.md
@@ -1,236 +1,239 @@
Running Acceptance Tests Yourself
=================================
Table of Contents
-----------------
* [General Notes](#general-notes)
* [Running Tests on the vcloud](#running-tests-on-the-vcloud)
* [Running Tests on Vagrant Boxen](#running-tests-on-vagrant-boxen)
General Notes
-------------
The rake tasks for running the tests are defined by the Rakefile in the acceptance test directory.
These tasks come with some documentation: `rake -T` will give short descriptions, and `rake -D` will give full descriptions with information on the required and optional ENV options for the various tasks.
If you are setting up a new repository for acceptance, you will need to bundle install first. This step assumes you have ruby and the bundler gem installed.
```sh
cd /path/to/repo/acceptance
bundle install --path=.bundle/gems
```
### Using Git Mirrors
By default, if you are installing from source, the projects will be cloned from GitHub, from their puppetlabs forks. This can be selectively overridden for all installed projects, or per project, by setting environment variables.
GIT_SERVER => this will be the address of the git server used for all installed projects. Defaults to 'github.com'.
FORK => this will be the fork of the project for all installed projects. Defaults to 'puppetlabs'.
To customize the server or fork for a specific project use PROJECT_NAME_GIT_SERVER and PROJECT_NAME_FORK.
For example, run with these options:
```sh
bundle exec rake ci:test:git CONFIG=config/nodes/win2008r2.yaml SHA=abcd PUPPET_GIT_SERVER=percival.corp.puppetlabs.net GIT_SERVER=github.delivery.puppetlabs.net
```
Beaker will install the following:
```
:install=>
["git://github.delivery.puppetlabs.net/puppetlabs-facter.git#stable",
"git://github.delivery.puppetlabs.net/puppetlabs-hiera.git#stable",
"git://percival.corp.puppetlabs.net/puppetlabs-puppet.git#abcd"],
```
This corresponds to installing facter and hiera stable from our internal mirror, while installing puppet SHA abcd from a git daemon on my local machine percival. See below for details on setting up a local git daemon.
Running Tests on the vcloud
---------------------------
In order to use the Puppet Labs vcloud, you'll need to be a Puppet Labs employee.
-Community members should see the [guide to running the tests on vagrant boxen](#running-tests-on-local-vagrant-boxen).
+Community members should see the [guide to running the tests on vagrant boxen](#running-tests-on-vagrant-boxen).
### Authentication
Normally the ci tasks are called from a prepared Jenkins job.
If you are running this on your laptop, you will need this ssh private key in order for beaker to be able to log into the vms created from the hosts file:
-https://github.com/puppetlabs/puppetlabs-modules/blob/qa/secure/jenkins/id_rsa-acceptance
-https://github.com/puppetlabs/puppetlabs-modules/blob/qa/secure/jenkins/id_rsa-acceptance.pub
+https://github.com/puppetlabs/puppetlabs-modules/blob/production/secure/jenkins/id_rsa-acceptance
+https://github.com/puppetlabs/puppetlabs-modules/blob/production/secure/jenkins/id_rsa-acceptance.pub
-TODO fetch these files directly from github, but am running into rate limits and then would also have to cross the issue of authentication.
+Please note where acceptance/Rakefile defaults the ssh key location: it may look in ~/.ssh/id_rsa-acceptance, or it may expect the key in the working directory (e.g. puppet/acceptance).
-You will also need QA credentials to vsphere in a ~/.fog file. These credentials can be found on any of the Jenkins coordinator hosts.
+You will also need QA credentials for vsphere in a ~/.fog file. These credentials can be found on any of the Jenkins coordinator hosts. Check occasionally that the credentials you have are still valid, as they change periodically.
### Packages
In order to run the tests on hosts provisioned from packages produced by Delivery, you will need to reference a Puppet commit sha that has been packaged using Delivery's pl:jenkins:uber_build task. This is the snippet used by 'Puppet Packaging' Jenkins jobs:
```sh
+# EXAMPLE - DO NOT RUN THIS
rake --trace package:implode
rake --trace package:bootstrap
rake --trace pl:jenkins:uber_build
```
The above Rake tasks were run from the root of a Puppet checkout. They are quoted just for reference. Typically if you are investigating a failure, you will have a SHA from a failed jenkins run which should correspond to a successful pipeline run, and you should not need to run the pipeline manually.
A finished pipeline will have repository information available at http://builds.puppetlabs.lan/puppet/, so you can browse this list and select a recent sha which has repo_configs/ available.
When executing the ci:test:packages task, you must set the SHA, and also set CONFIG to point to a valid Beaker hosts_file. Configurations used in the Jenkins jobs are available under config/nodes
```sh
bundle exec rake ci:test:packages SHA=abcdef CONFIG=config/nodes/rhel.yaml
```
Optionally you may set the TEST (TEST=a/test.rb,and/another/test.rb), and may pass additional OPTIONS to beaker (OPTIONS='--opt foo').
You may also edit a ./local_options.rb hash which will override config/ options, and in turn be overridden by command-line options set in the environment variables CONFIG, TEST and OPTIONS. This file is a ruby file containing a Ruby hash with configuration expected by Beaker. See Beaker source, and examples in config/.
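For example, a minimal `local_options.rb` might look like the following sketch; the option keys shown are illustrative, so check Beaker's source for the full set it accepts:

```ruby
# ./local_options.rb -- a sketch; Beaker merges this hash over the config/ defaults,
# and CONFIG/TEST/OPTIONS from the environment still win over anything set here.
{
  :hosts_file => 'config/nodes/rhel.yaml',          # roughly what CONFIG=... supplies
  :tests      => 'tests/ssl/autosign_command.rb',   # roughly what TEST=... supplies
}
```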
### Git
Alternatively you may provision via git clone by calling the ci:test:git task. Currently we don't have packages for Windows or Solaris from the Delivery pipeline, and must use ci:test:git to provision and test these platforms.
#### Source Checkout for Different Fork
If you have a branch pushed to your fork which you wish to test prior to merging into puppetlabs/puppet, you can do so by setting the FORK environment variable. So, if I have a branch 'issue/master/wonder-if-this-explodes' pushed to my jpartlow puppet fork that I want to test on Windows, I could invoke the following:
```sh
bundle exec rake ci:test:git CONFIG=config/nodes/win2008r2.yaml SHA=issue/master/wonder-if-this-explodes FORK=jpartlow
```
#### Source Checkout for Local Branch
See notes on running acceptance with Vagrant for more details on using a local git daemon.
TODO Fix up the Rakefile's handling of git urls so that there is a simple way to specify both a branch on a github fork, and a branch on some other git server daemon, so that you have fewer steps when serving from a local git daemon.
### Preserving Hosts
If you need to ssh into the hosts after a test run, you can use the following sequence:
bundle exec rake ci:test_and_preserve_hosts CONFIG=some/config.yaml SHA=12345 TEST=a/foo_test.rb
to get the initial templates provisioned, and a local log/latest/preserve_config.yaml created for them.
Then you can log into the hosts, or rerun tests against them by:
```sh
bundle exec rake ci:test_against_preserved_hosts TEST=a/foo_test.rb
```
This will use the existing hosts.
+NOTE: If you want configuration information to be preserved for all runs (potentially allowing you to run ci:test_against_preserved_hosts for any previous run that failed and whose hosts were preserved, regardless of whether you initiated with a ci:test_and_preserve_hosts call), then you should add ':__preserve_config__ => true' to your local_options.rb, as sketched below.
+
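For example, a minimal local_options.rb enabling this could contain just the key quoted above:
```ruby
# local_options.rb -- preserve configuration information for every run
{
  :__preserve_config__ => true,
}
```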
### Cleaning Up Preserved Hosts
If you run a number of jobs with --preserve_hosts or via ci:test_and_preserve_hosts, you may eventually generate a large number of stale vms. They should be reaped automatically by QA infrastructure within a day or so, but you may also run:
```sh
bundle exec rake ci:release_hosts
```
to clean them up sooner and free resources.
There also may be scenarios where you want to specify the host(s) to release, e.g. you may want to release only a subset of the hosts you've created. Or, if a test run terminates early, ci:release_hosts may not be able to derive the name of the vm to delete. In such cases you can specify the host(s) to be deleted using the HOST_NAMES environment variable. E.g.
```sh
HOST_NAMES=lvwwr9tdplg351u bundle exec rake ci:release_hosts
HOST_NAMES=lvwwr9tdplg351u,ylrqjh5l6xvym4t bundle exec rake ci:release_hosts
```
Running Tests on Vagrant Boxen
------------------------------
This guide assumes that you have an acceptable Ruby (i.e. 1.9+) installed along with the bundler gem, that you have the puppet repo checked out locally somewhere, and that the name of the checkout folder is `puppet`.
I used Ruby 1.9.3-p484
Change to the `acceptance` directory in the root of the puppet repo:
```sh
cd /path/to/repo/puppet/acceptance
```
Install the necessary gems with bundler:
```sh
bundle install
```
Now you can get a list of test-related tasks you can run via rake:
```sh
bundle exec rake -T
```
and view detailed information on the tasks with
```sh
bundle exec rake -D
```
As an example, let's try running the acceptance tests using git as the code deployment mechanism.
First, we'll have to create a beaker configuration file for a local vagrant box on which to run the tests.
Here's what such a file could look like:
```yaml
HOSTS:
all-in-one:
roles:
- master
- agent
platform: centos-64-x64
hypervisor: vagrant
ip: 192.168.80.100
box: centos-64-x64-vbox4210-nocm
box_url: http://puppet-vagrant-boxes.puppetlabs.com/centos-64-x64-vbox4210-nocm.box
CONFIG:
```
This defines a 64-bit CentOS 6.4 vagrant box that serves as both a puppet master and a puppet agent for the test roles.
(For more information on beaker config files, see [beaker's README](https://github.com/puppetlabs/beaker/blob/master/README.md).)
Save this file as `config/nodes/centos6-local.yaml`; we'll be needing it later.
Since we have only provided a CentOS box, we don't have anywhere to run Windows tests, so we'll have to skip them.
That means we want to pass beaker a --tests argument that contains every directory and file in the `tests` directory besides the one called `windows`.
We could pass this option on the command line, but it will be gigantic, so instead let's create a `local_options.rb` file that beaker will automatically read in.
This file should contain a Ruby hash mapping beaker's command-line flags to the corresponding flag arguments.
Our hash will only contain the `tests` key, and its value will be a comma-separated list of the other files and directories in `tests`.
Here's an easy way to generate this file:
```sh
echo "{tests: \"$(echo tests/* | sed -e 's| *tests/windows *||' -e 's/ /,/g')\"}" > local_options.rb"
```
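The generated file is a one-line Ruby hash. With a hypothetical tests/ layout it would look something like this (the directory and file names below are illustrative only):
```ruby
# local_options.rb as generated above -- directory and file names are hypothetical
{tests: "tests/agent,tests/environment,tests/resource,tests/some_test.rb"}
```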
The last thing that needs to be done before we can run the tests is to set up a way for the test box to check out our local changes for testing.
We'll do this by starting a git daemon on our host.
In another session, navigate to the folder that contains your checkout of the puppet repo, and then create the following symlink:
```sh
ln -s . puppetlabs-puppet.git
```
This works around the inflexible checkout path used by the test prep code.
Now start the git daemon with
```sh
git daemon --verbose --informative-errors --reuseaddr --export-all --base-path=.
```
after which you should see a message like `[32963] Ready to rumble` echoed to the console.
Now we can finally run the tests!
The rake task that we'll use is `ci:test:git`.
Run
```
bundle exec rake -D ci:test:git
```
to read the full description of this task.
From the description, we can see that we'll need to set a few environment variables:
+ CONFIG should be set to point to the CentOS beaker config file we created above.
+ SHA should be the SHA of the commit we want to test.
+ GIT_SERVER should be the IP address of the host (i.e. your machine) in the vagrant private network created for the test box.
This is derived from the test box's IP by replacing the last octet with 1.
For our example above, the host IP is 192.168.80.1.
+ FORK should be the path to a 'puppetlabs-puppet.git' directory that points to the repo.
In our case, this is the path to the symlink we created before, which is inside your puppet repo checkout, so FORK should just be the name of your checkout.
We'll assume that the name is `puppet`.
Putting it all together, we construct the following command-line invocation to run the tests:
```sh
CONFIG=config/nodes/centos6-local.yaml SHA=#{test-commit-sha} GIT_SERVER='192.168.80.1' FORK='puppet' bundle exec rake --trace ci:test:git
```
Go ahead and run that sucker!
Testing will take some time.
After the testing finishes, you'll either see this line
```
systest completed successfully, thanks.
```
near the end of the output, indicating that all tests completed successfully, or you'll see the end of a stack trace, indicating failed tests further up.
diff --git a/ext/build_defaults.yaml b/ext/build_defaults.yaml
index 90e39249e..d62e70555 100644
--- a/ext/build_defaults.yaml
+++ b/ext/build_defaults.yaml
@@ -1,23 +1,38 @@
---
packaging_url: 'git://github.com/puppetlabs/packaging.git --branch=master'
packaging_repo: 'packaging'
default_cow: 'base-squeeze-i386.cow'
cows: 'base-lucid-i386.cow base-precise-i386.cow base-squeeze-i386.cow base-stable-i386.cow base-testing-i386.cow base-trusty-i386.cow base-wheezy-i386.cow'
pbuild_conf: '/etc/pbuilderrc'
packager: 'puppetlabs'
gpg_name: 'info@puppetlabs.com'
gpg_key: '4BD6EC30'
sign_tar: FALSE
# a space separated list of mock configs
final_mocks: 'pl-el-5-i386 pl-el-6-i386 pl-el-7-x86_64 pl-fedora-19-i386 pl-fedora-20-i386'
yum_host: 'yum.puppetlabs.com'
yum_repo_path: '/opt/repository/yum/'
build_gem: TRUE
build_dmg: TRUE
+build_msi:
+ puppet_for_the_win:
+ ref: 'c61dda253b09bd855488f7b6dd7332fdf84d3039'
+ repo: 'git://github.com/puppetlabs/puppet_for_the_win.git'
+ facter:
+ ref: 'refs/tags/2.2.0'
+ repo: 'git://github.com/puppetlabs/facter.git'
+ hiera:
+ ref: 'refs/tags/1.3.4'
+ repo: 'git://github.com/puppetlabs/hiera.git'
+ sys:
+ ref:
+ x86: '890ad47c3b70de4e4956d2699b54d481d5a1b72e'
+ x64: '2820779a3f4281534a6792b0ab20b2cf213647dc'
+ repo: 'git://github.com/puppetlabs/puppet-win32-ruby.git'
apt_host: 'apt.puppetlabs.com'
apt_repo_url: 'http://apt.puppetlabs.com'
apt_repo_path: '/opt/repository/incoming'
ips_repo: '/var/pkgrepo'
ips_store: '/opt/repository'
ips_host: 'solaris-11-ips-repo.acctest.dc1.puppetlabs.net'
tar_host: 'downloads.puppetlabs.com'
diff --git a/ext/debian/control b/ext/debian/control
index 5cfcf6182..02c1f8f42 100644
--- a/ext/debian/control
+++ b/ext/debian/control
@@ -1,144 +1,144 @@
Source: puppet
Section: admin
Priority: optional
Maintainer: Puppet Labs <info@puppetlabs.com>
Uploaders: Micah Anderson <micah@debian.org>, Andrew Pollock <apollock@debian.org>, Nigel Kersten <nigel@explanatorygap.net>, Stig Sandbeck Mathisen <ssm@debian.org>
Build-Depends-Indep: ruby | ruby-interpreter, libopenssl-ruby | libopenssl-ruby1.9.1 | libruby (>= 1:1.9.3.4), facter (>= 1.7.0), hiera (>= 1.0.0)
Build-Depends: debhelper (>= 7.0.0), openssl
Standards-Version: 3.9.1
Vcs-Git: git://github.com/puppetlabs/puppet
Homepage: http://projects.puppetlabs.com/projects/puppet
Package: puppet-common
Architecture: all
-Depends: ${misc:Depends}, ruby | ruby-interpreter, libopenssl-ruby | libopenssl-ruby1.9.1 | libruby (>= 1:1.9.3.4), ruby-shadow | libshadow-ruby1.8, libaugeas-ruby | libaugeas-ruby1.9.1 | libaugeas-ruby1.8, adduser, lsb-base, sysv-rc (>= 2.86) | file-rc, hiera (>= 1.0.0), facter (>= 1.7.0), ruby-rgen (>= 0.6.5), libjson-ruby | ruby-json
+Depends: ${misc:Depends}, ruby | ruby-interpreter, libopenssl-ruby | libopenssl-ruby1.9.1 | libruby (>= 1:1.9.3.4), ruby-shadow | libshadow-ruby1.8, libaugeas-ruby | libaugeas-ruby1.9.1 | libaugeas-ruby1.8, adduser, lsb-base, sysv-rc (>= 2.86) | file-rc, hiera (>= 1.0.0), facter (>= 1.7.0), libjson-ruby | ruby-json
Recommends: lsb-release, debconf-utils
Suggests: ruby-selinux | libselinux-ruby1.8, librrd-ruby1.9.1 | librrd-ruby1.8
Breaks: puppet (<< 2.6.0~rc2-1), puppetmaster (<< 0.25.4-1)
Provides: hiera-puppet
Conflicts: hiera-puppet, puppet (<< 3.3.0-1puppetlabs1)
Replaces: hiera-puppet
Description: Centralized configuration management
Puppet lets you centrally manage every important aspect of your system
using a cross-platform specification language that manages all the
separate elements normally aggregated in different files, like users,
cron jobs, and hosts, along with obviously discrete elements like
packages, services, and files.
.
Puppet's simple declarative specification language provides powerful
classing abilities for drawing out the similarities between hosts while
allowing them to be as specific as necessary, and it handles dependency
and prerequisite relationships between objects clearly and explicitly.
.
This package contains the puppet software and documentation. For the startup
scripts needed to run the puppet agent and master, see the "puppet" and
"puppetmaster" packages, respectively.
Package: puppet
Architecture: all
Depends: ${misc:Depends}, puppet-common (= ${binary:Version}), ruby | ruby-interpreter
Recommends: rdoc
Suggests: puppet-el, vim-puppet
Conflicts: puppet-common (<< 3.3.0-1puppetlabs1)
Description: Centralized configuration management - agent startup and compatibility scripts
This package contains the startup script and compatibility scripts for the
puppet agent, which is the process responsible for configuring the local node.
.
Puppet lets you centrally manage every important aspect of your system
using a cross-platform specification language that manages all the
separate elements normally aggregated in different files, like users,
cron jobs, and hosts, along with obviously discrete elements like
packages, services, and files.
.
Puppet's simple declarative specification language provides powerful
classing abilities for drawing out the similarities between hosts while
allowing them to be as specific as necessary, and it handles dependency
and prerequisite relationships between objects clearly and explicitly.
Package: puppetmaster-common
Architecture: all
Depends: ${misc:Depends}, ruby | ruby-interpreter, puppet-common (= ${binary:Version}), facter (>= 1.7.0), lsb-base
Breaks: puppet (<< 0.24.7-1), puppetmaster (<< 2.6.1~rc2-1)
Replaces: puppetmaster (<< 2.6.1~rc2-1)
Suggests: apache2 | nginx, puppet-el, vim-puppet, stompserver, ruby-stomp | libstomp-ruby1.8,
rdoc, ruby-ldap | libldap-ruby1.8, puppetdb-terminus
Description: Puppet master common scripts
This package contains common scripts for the puppet master,
which is the server hosting manifests and files for the puppet nodes.
.
Puppet lets you centrally manage every important aspect of your system
using a cross-platform specification language that manages all the
separate elements normally aggregated in different files, like users,
cron jobs, and hosts, along with obviously discrete elements like
packages, services, and files.
.
Puppet's simple declarative specification language provides powerful
classing abilities for drawing out the similarities between hosts while
allowing them to be as specific as necessary, and it handles dependency
and prerequisite relationships between objects clearly and explicitly.
Package: puppetmaster
Architecture: all
Depends: ${misc:Depends}, ruby | ruby-interpreter, puppetmaster-common (= ${source:Version}), facter (>= 1.7.0), lsb-base
Breaks: puppet (<< 0.24.7-1)
Suggests: apache2 | nginx, puppet-el, vim-puppet, stompserver, ruby-stomp | libstomp-ruby1.8,
rdoc, ruby-ldap | libldap-ruby1.8, puppetdb-terminus
Description: Centralized configuration management - master startup and compatibility scripts
This package contains the startup and compatibility scripts for the puppet
master, which is the server hosting manifests and files for the puppet nodes.
.
Puppet lets you centrally manage every important aspect of your system
using a cross-platform specification language that manages all the
separate elements normally aggregated in different files, like users,
cron jobs, and hosts, along with obviously discrete elements like
packages, services, and files.
.
Puppet's simple declarative specification language provides powerful
classing abilities for drawing out the similarities between hosts while
allowing them to be as specific as necessary, and it handles dependency
and prerequisite relationships between objects clearly and explicitly.
Package: puppetmaster-passenger
Architecture: all
Depends: ${misc:Depends}, ruby | ruby-interpreter, puppetmaster-common (= ${source:Version}), facter (>= 1.7.0), lsb-base, apache2, libapache2-mod-passenger
Conflicts: puppetmaster (<< 2.6.1~rc2-1)
Replaces: puppetmaster (<< 2.6.1~rc2-1)
Description: Centralised configuration management - master setup to run under mod passenger
This package provides a puppetmaster running under mod passenger.
This configuration offers better performance and scalability.
.
Puppet lets you centrally manage every important aspect of your system
using a cross-platform specification language that manages all the
separate elements normally aggregated in different files, like users,
cron jobs, and hosts, along with obviously discrete elements like
packages, services, and files.
.
Puppet's simple declarative specification language provides powerful
classing abilities for drawing out the similarities between hosts while
allowing them to be as specific as necessary, and it handles dependency
and prerequisite relationships between objects clearly and explicitly.
.
Package: vim-puppet
Architecture: all
Depends: ${misc:Depends}
Recommends: vim-addon-manager
Conflicts: puppet (<< ${source:Version})
Description: syntax highlighting for puppet manifests in vim
The vim-puppet package provides filetype detection and syntax highlighting for
puppet manifests (files ending with ".pp").
Package: puppet-el
Architecture: all
Depends: ${misc:Depends}, emacsen-common
Conflicts: puppet (<< ${source:Version})
Description: syntax highlighting for puppet manifests in emacs
The puppet-el package provides syntax highlighting for puppet manifests
Package: puppet-testsuite
Architecture: all
Depends: ${misc:Depends}, ruby | ruby-interpreter, puppet-common (= ${source:Version}), facter (>= 1.7.0), lsb-base, rails (>= 1.2.3-2), rdoc, ruby-ldap | libldap-ruby1.8, ruby-rspec | librspec-ruby, git-core, ruby-mocha | libmocha-ruby1.8
Recommends: cron
Description: Centralized configuration management - test suite
This package provides all the tests from the upstream puppet source code.
The tests are used for improving the QA of the puppet package.
diff --git a/ext/debian/puppet-common.dirs b/ext/debian/puppet-common.dirs
index 4f1bbfa8d..7b920412c 100644
--- a/ext/debian/puppet-common.dirs
+++ b/ext/debian/puppet-common.dirs
@@ -1,12 +1,13 @@
etc/puppet
etc/puppet/environments
etc/puppet/environments/example_env
etc/puppet/environments/example_env/modules
etc/puppet/environments/example_env/manifests
etc/puppet/manifests
etc/puppet/templates
etc/puppet/modules
usr/lib/ruby/vendor_ruby
usr/share/puppet/ext
var/lib/puppet
var/log/puppet
+var/run/puppet
diff --git a/ext/debian/puppet-common.postinst b/ext/debian/puppet-common.postinst
index b9021a7bb..f90524c48 100644
--- a/ext/debian/puppet-common.postinst
+++ b/ext/debian/puppet-common.postinst
@@ -1,35 +1,35 @@
-#!/bin/sh
+#!/bin/bash
set -e
if [ "$1" = "configure" ]; then
# Create the "puppet" user
if ! getent passwd puppet > /dev/null; then
adduser --quiet --system --group --home /var/lib/puppet \
--no-create-home \
--gecos "Puppet configuration management daemon" \
puppet
fi
# Set correct permissions and ownership for puppet directories
- if ! dpkg-statoverride --list /var/log/puppet >/dev/null 2>&1; then
- dpkg-statoverride --update --add puppet puppet 0750 /var/log/puppet
- fi
-
- if ! dpkg-statoverride --list /var/lib/puppet >/dev/null 2>&1; then
- dpkg-statoverride --update --add puppet puppet 0750 /var/lib/puppet
- fi
+ for dir in /var/{run,lib,log}/puppet; do
+ if ! dpkg-statoverride --list "$dir" >/dev/null 2>&1; then
+ dpkg-statoverride --update --add puppet puppet 0750 "$dir"
+ fi
+ done
# Create folders common to "puppet" and "puppetmaster", which need
# to be owned by the "puppet" user
install --owner puppet --group puppet --directory \
/var/lib/puppet/state
+ install --owner puppet --group puppet --directory \
+ /var/lib/puppet/reports
# Handle
if [ -d /etc/puppet/ssl ] && [ ! -e /var/lib/puppet/ssl ] && grep -q 'ssldir=/var/lib/puppet/ssl' /etc/puppet/puppet.conf; then
mv /etc/puppet/ssl /var/lib/puppet/ssl
fi
fi
#DEBHELPER#
diff --git a/ext/debian/puppet-common.postrm b/ext/debian/puppet-common.postrm
index 841995675..fa2fff52d 100644
--- a/ext/debian/puppet-common.postrm
+++ b/ext/debian/puppet-common.postrm
@@ -1,32 +1,33 @@
#!/bin/sh -e
case "$1" in
purge)
# Remove puppetd.conf (used in > 0.24)
rm -f /etc/puppet/puppetd.conf
# Remove puppet state directory created by the postinst script.
- # This directory can be removed without causing harm
+ # This directory can be removed without causing harm
# according to upstream documentation.
rm -rf /var/lib/puppet/state
+ rm -rf /var/lib/puppet/reports
if [ -d /var/lib/puppet ]; then
rmdir --ignore-fail-on-non-empty /var/lib/puppet
fi
# Remove puppet log files
rm -rf /var/log/puppet/
;;
remove|upgrade|failed-upgrade|abort-install|abort-upgrade|disappear)
;;
*)
echo "postrm called with unknown argument \`$1'" >&2
exit 1
esac
#DEBHELPER#
exit 0
diff --git a/ext/project_data.yaml b/ext/project_data.yaml
index ee3fc71f5..292d8de26 100644
--- a/ext/project_data.yaml
+++ b/ext/project_data.yaml
@@ -1,46 +1,50 @@
---
project: 'puppet'
author: 'Puppet Labs'
email: 'info@puppetlabs.com'
homepage: 'https://github.com/puppetlabs/puppet'
summary: 'Puppet, an automated configuration management tool'
description: 'Puppet, an automated configuration management tool'
version_file: 'lib/puppet/version.rb'
# files and gem_files are space separated lists
files: '[A-Z]* install.rb bin lib conf man examples ext tasks spec'
# The gem specification bits only work on Puppet >= 3.0rc, NOT 2.7.x and earlier
gem_files: '[A-Z]* install.rb bin lib conf man examples ext tasks spec'
gem_test_files: 'spec/**/*'
gem_executables: 'puppet'
gem_default_executables: 'puppet'
gem_forge_project: 'puppet'
gem_runtime_dependencies:
facter: ['> 1.6', '< 3']
hiera: '~> 1.0'
- rgen: '~> 0.6.5'
json_pure:
gem_rdoc_options:
- --title
- "Puppet - Configuration Management"
- --main
- README.md
- --line-numbers
gem_platform_dependencies:
x86-mingw32:
gem_runtime_dependencies:
# Pinning versions that require native extensions
- ffi: '1.9.0'
- sys-admin: '1.5.6'
- win32-api: '1.4.8'
- win32-dir: '~> 0.4.3'
- win32-eventlog: '~> 0.5.3'
- win32-process: '~> 0.6.5'
- win32-security: '~> 0.1.4'
- win32-service: '0.7.2'
- win32-taskscheduler: '~> 0.2.2'
+ ffi: '1.9.3'
+ win32-dir: '~> 0.4.9'
+ win32-eventlog: '~> 0.6.1'
+ win32-process: '~> 0.7.4'
+ win32-security: '~> 0.2.5'
+ win32-service: '~> 0.8.4'
win32console: '1.3.2'
- windows-api: '~> 0.4.2'
- windows-pr: '~> 1.2.2'
+ minitar: '~> 0.5.4'
+ x64-mingw32:
+ gem_runtime_dependencies:
+ ffi: '1.9.3'
+ win32-dir: '~> 0.4.9'
+ win32-eventlog: '~> 0.6.1'
+ win32-process: '~> 0.7.4'
+ win32-security: '~> 0.2.5'
+ win32-service: '~> 0.8.4'
minitar: '~> 0.5.4'
bundle_platforms:
x86-mingw32: mingw
+ x64-mingw32: x64_mingw
diff --git a/ext/rack/example-passenger-vhost.conf b/ext/rack/example-passenger-vhost.conf
index 7d40b9498..e8c2102e8 100644
--- a/ext/rack/example-passenger-vhost.conf
+++ b/ext/rack/example-passenger-vhost.conf
@@ -1,57 +1,57 @@
# This Apache 2 virtual host config shows how to use Puppet as a Rack
# application via Passenger. See
# http://docs.puppetlabs.com/guides/passenger.html for more information.
# You can also use the included config.ru file to run Puppet with other Rack
# servers instead of Passenger.
# you probably want to tune these settings
PassengerHighPerformance on
PassengerMaxPoolSize 12
PassengerPoolIdleTime 1500
# PassengerMaxRequests 1000
PassengerStatThrottleRate 120
RackAutoDetect Off
RailsAutoDetect Off
Listen 8140
<VirtualHost *:8140>
SSLEngine on
- SSLProtocol ALL -SSLv2
- SSLCipherSuite ALL:!aNULL:!eNULL:!DES:!3DES:!IDEA:!SEED:!DSS:!PSK:!RC4:!MD5:+HIGH:+MEDIUM:!LOW:!SSLv2:!EXP
+ SSLProtocol ALL -SSLv2 -SSLv3
+ SSLCipherSuite EDH+CAMELLIA:EDH+aRSA:EECDH+aRSA+AESGCM:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH:+CAMELLIA256:+AES256:+CAMELLIA128:+AES128:+SSLv3:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!DSS:!RC4:!SEED:!IDEA:!ECDSA:kEDH:CAMELLIA256-SHA:AES256-SHA:CAMELLIA128-SHA:AES128-SHA
SSLHonorCipherOrder on
SSLCertificateFile /etc/puppet/ssl/certs/squigley.namespace.at.pem
SSLCertificateKeyFile /etc/puppet/ssl/private_keys/squigley.namespace.at.pem
SSLCertificateChainFile /etc/puppet/ssl/ca/ca_crt.pem
SSLCACertificateFile /etc/puppet/ssl/ca/ca_crt.pem
# If Apache complains about invalid signatures on the CRL, you can try disabling
# CRL checking by commenting the next line, but this is not recommended.
SSLCARevocationFile /etc/puppet/ssl/ca/ca_crl.pem
# Apache 2.4 introduces the SSLCARevocationCheck directive and sets it to none
# which effectively disables CRL checking; if you are using Apache 2.4+ you must
# specify 'SSLCARevocationCheck chain' to actually use the CRL.
# SSLCARevocationCheck chain
SSLVerifyClient optional
SSLVerifyDepth 1
# The `ExportCertData` option is needed for agent certificate expiration warnings
SSLOptions +StdEnvVars +ExportCertData
# This header needs to be set if using a loadbalancer or proxy
RequestHeader unset X-Forwarded-For
RequestHeader set X-SSL-Subject %{SSL_CLIENT_S_DN}e
RequestHeader set X-Client-DN %{SSL_CLIENT_S_DN}e
RequestHeader set X-Client-Verify %{SSL_CLIENT_VERIFY}e
DocumentRoot /etc/puppet/rack/public/
RackBaseURI /
<Directory /etc/puppet/rack/>
Options None
AllowOverride None
Order allow,deny
allow from all
</Directory>
</VirtualHost>
diff --git a/ext/redhat/puppet.spec.erb b/ext/redhat/puppet.spec.erb
index 34006cff6..6b8b3f9f5 100644
--- a/ext/redhat/puppet.spec.erb
+++ b/ext/redhat/puppet.spec.erb
@@ -1,857 +1,860 @@
# Augeas and SELinux requirements may be disabled at build time by passing
# --without augeas and/or --without selinux to rpmbuild or mock
# Fedora 17 ships with ruby 1.9, RHEL 7 with ruby 2.0, which use vendorlibdir instead
# of sitelibdir. Adjust our target if installing on f17 or rhel7.
-%if 0%{?fedora} >= 17 || 0%{?rhel} >= 7
+%if 0%{?fedora} >= 17 || 0%{?rhel} >= 7 || 0%{?amzn} >= 1
%global puppet_libdir %(ruby -rrbconfig -e 'puts RbConfig::CONFIG["vendorlibdir"]')
%else
%global puppet_libdir %(ruby -rrbconfig -e 'puts RbConfig::CONFIG["sitelibdir"]')
%endif
%if 0%{?fedora} >= 17 || 0%{?rhel} >= 7
%global _with_systemd 1
%else
%global _with_systemd 0
%endif
# VERSION is subbed out during rake srpm process
%global realversion <%= @version %>
%global rpmversion <%= @rpmversion %>
%global confdir ext/redhat
%global pending_upgrade_path %{_localstatedir}/lib/rpm-state/puppet
%global pending_upgrade_file %{pending_upgrade_path}/upgrade_pending
Name: puppet
Version: %{rpmversion}
Release: <%= @rpmrelease -%>%{?dist}
Vendor: %{?_host_vendor}
Summary: A network tool for managing many disparate systems
License: ASL 2.0
URL: http://puppetlabs.com
Source0: http://puppetlabs.com/downloads/%{name}/%{name}-%{realversion}.tar.gz
Group: System Environment/Base
BuildRoot: %{_tmppath}/%{name}-%{version}-%{release}-root-%(%{__id_u} -n)
BuildRequires: facter >= 1:1.7.0
# Puppet 3.x drops ruby 1.8.5 support and adds ruby 1.9 support
BuildRequires: ruby >= 1.8.7
BuildRequires: hiera >= 1.0.0
BuildArch: noarch
Requires: ruby >= 1.8
Requires: ruby-shadow
Requires: rubygem-json
# Pull in ruby selinux bindings where available
%if 0%{?fedora} || 0%{?rhel} >= 6
%{!?_without_selinux:Requires: ruby(selinux), libselinux-utils}
%else
-%if 0%{?rhel} && 0%{?rhel} == 5
+%if ( 0%{?rhel} && 0%{?rhel} == 5 ) || 0%{?amzn} >= 1
%{!?_without_selinux:Requires: libselinux-ruby, libselinux-utils}
%endif
%endif
Requires: facter >= 1:1.7.0
# Puppet 3.x drops ruby 1.8.5 support and adds ruby 1.9 support
# Ruby 1.8.7 available for el5 at: yum.puppetlabs.com/el/5/devel/$ARCH
Requires: ruby >= 1.8.7
Requires: hiera >= 1.0.0
-Requires: ruby-rgen >= 0.6.5
Obsoletes: hiera-puppet < 1.0.0
Provides: hiera-puppet >= 1.0.0
%{!?_without_augeas:Requires: ruby-augeas}
# Required for %%pre
Requires: shadow-utils
%if 0%{?_with_systemd}
# Required for %%post, %%preun, %%postun
Requires: systemd
%if 0%{?fedora} >= 18 || 0%{?rhel} >= 7
BuildRequires: systemd
%else
BuildRequires: systemd-units
%endif
%else
# Required for %%post and %%preun
Requires: chkconfig
# Required for %%preun and %%postun
Requires: initscripts
%endif
%description
Puppet lets you centrally manage every important aspect of your system using a
cross-platform specification language that manages all the separate elements
normally aggregated in different files, like users, cron jobs, and hosts,
along with obviously discrete elements like packages, services, and files.
%package server
Group: System Environment/Base
Summary: Server for the puppet system management tool
Requires: puppet = %{version}-%{release}
# chkconfig (%%post, %%preun) and initscripts (%%preun %%postun) are required for non systemd
# and systemd (%%post, %%preun, and %%postun) are required for systems with systemd as default
# They come along transitively with puppet-%{version}-%{release}.
%description server
Provides the central puppet server daemon which provides manifests to clients.
The server can also function as a certificate authority and file server.
%prep
%setup -q -n %{name}-%{realversion}
%build
for f in external/nagios.rb relationship.rb; do
sed -i -e '1d' lib/puppet/$f
done
find examples/ -type f | xargs --no-run-if-empty chmod a-x
%install
rm -rf %{buildroot}
ruby install.rb --destdir=%{buildroot} --quick --no-rdoc --sitelibdir=%{puppet_libdir}
install -d -m0755 %{buildroot}%{_sysconfdir}/puppet/environments/example_env/manifests
install -d -m0755 %{buildroot}%{_sysconfdir}/puppet/environments/example_env/modules
install -d -m0755 %{buildroot}%{_sysconfdir}/puppet/manifests
install -d -m0755 %{buildroot}%{_datadir}/%{name}/modules
install -d -m0755 %{buildroot}%{_localstatedir}/lib/puppet
+install -d -m0755 %{buildroot}%{_localstatedir}/lib/puppet/state
+install -d -m0755 %{buildroot}%{_localstatedir}/lib/puppet/reports
install -d -m0755 %{buildroot}%{_localstatedir}/run/puppet
# As per redhat bz #495096
install -d -m0750 %{buildroot}%{_localstatedir}/log/puppet
%if 0%{?_with_systemd}
# Systemd for fedora >= 17 or el 7
%{__install} -d -m0755 %{buildroot}%{_unitdir}
install -Dp -m0644 ext/systemd/puppet.service %{buildroot}%{_unitdir}/puppet.service
ln -s %{_unitdir}/puppet.service %{buildroot}%{_unitdir}/puppetagent.service
install -Dp -m0644 ext/systemd/puppetmaster.service %{buildroot}%{_unitdir}/puppetmaster.service
%else
# Otherwise init.d for fedora < 17 or el 5, 6
install -Dp -m0644 %{confdir}/client.sysconfig %{buildroot}%{_sysconfdir}/sysconfig/puppet
install -Dp -m0755 %{confdir}/client.init %{buildroot}%{_initrddir}/puppet
install -Dp -m0644 %{confdir}/server.sysconfig %{buildroot}%{_sysconfdir}/sysconfig/puppetmaster
install -Dp -m0755 %{confdir}/server.init %{buildroot}%{_initrddir}/puppetmaster
install -Dp -m0755 %{confdir}/queue.init %{buildroot}%{_initrddir}/puppetqueue
%endif
install -Dp -m0644 %{confdir}/fileserver.conf %{buildroot}%{_sysconfdir}/puppet/fileserver.conf
install -Dp -m0644 %{confdir}/puppet.conf %{buildroot}%{_sysconfdir}/puppet/puppet.conf
install -Dp -m0644 %{confdir}/logrotate %{buildroot}%{_sysconfdir}/logrotate.d/puppet
install -Dp -m0644 ext/README.environment %{buildroot}%{_sysconfdir}/puppet/environments/example_env/README.environment
# Install the ext/ directory to %%{_datadir}/%%{name}
install -d %{buildroot}%{_datadir}/%{name}
cp -a ext/ %{buildroot}%{_datadir}/%{name}
# emacs and vim bits are installed elsewhere
rm -rf %{buildroot}%{_datadir}/%{name}/ext/{emacs,vim}
# remove misc packaging artifacts not applicable to rpms
rm -rf %{buildroot}%{_datadir}/%{name}/ext/{gentoo,freebsd,solaris,suse,windows,osx,ips,debian}
rm -f %{buildroot}%{_datadir}/%{name}/ext/redhat/*.init
rm -f %{buildroot}%{_datadir}/%{name}/ext/{build_defaults.yaml,project_data.yaml}
# Rpmlint fixup
chmod 755 %{buildroot}%{_datadir}/%{name}/ext/regexp_nodes/regexp_nodes.rb
chmod 755 %{buildroot}%{_datadir}/%{name}/ext/puppet-load.rb
# Install emacs mode files
emacsdir=%{buildroot}%{_datadir}/emacs/site-lisp
install -Dp -m0644 ext/emacs/puppet-mode.el $emacsdir/puppet-mode.el
install -Dp -m0644 ext/emacs/puppet-mode-init.el \
$emacsdir/site-start.d/puppet-mode-init.el
# Install vim syntax files
vimdir=%{buildroot}%{_datadir}/vim/vimfiles
install -Dp -m0644 ext/vim/ftdetect/puppet.vim $vimdir/ftdetect/puppet.vim
install -Dp -m0644 ext/vim/syntax/puppet.vim $vimdir/syntax/puppet.vim
%if 0%{?fedora} >= 15 || 0%{?rhel} >= 7
# Setup tmpfiles.d config
mkdir -p %{buildroot}%{_sysconfdir}/tmpfiles.d
echo "D /var/run/%{name} 0755 %{name} %{name} -" > \
%{buildroot}%{_sysconfdir}/tmpfiles.d/%{name}.conf
%endif
# Create puppet modules directory for puppet module tool
mkdir -p %{buildroot}%{_sysconfdir}/%{name}/modules
# Install a NetworkManager dispatcher script to pickup changes to
# # /etc/resolv.conf and such (https://bugzilla.redhat.com/532085).
mkdir -p %{buildroot}%{_sysconfdir}/NetworkManager/dispatcher.d
cp -pr ext/puppet-nm-dispatcher \
%{buildroot}%{_sysconfdir}/NetworkManager/dispatcher.d/98-%{name}
%files
%defattr(-, root, root, 0755)
%doc LICENSE README.md examples
%{_bindir}/puppet
%{_bindir}/extlookup2hiera
%{puppet_libdir}/*
%dir %{_sysconfdir}/NetworkManager
%dir %{_sysconfdir}/NetworkManager/dispatcher.d
%{_sysconfdir}/NetworkManager/dispatcher.d/98-puppet
%if 0%{?_with_systemd}
%{_unitdir}/puppet.service
%{_unitdir}/puppetagent.service
%else
%{_initrddir}/puppet
%config(noreplace) %{_sysconfdir}/sysconfig/puppet
%endif
%dir %{_sysconfdir}/puppet
%dir %{_sysconfdir}/%{name}/modules
%if 0%{?fedora} >= 15 || 0%{?rhel} >= 7
%config(noreplace) %{_sysconfdir}/tmpfiles.d/%{name}.conf
%endif
%config(noreplace) %{_sysconfdir}/puppet/puppet.conf
%config(noreplace) %{_sysconfdir}/puppet/auth.conf
%config(noreplace) %{_sysconfdir}/logrotate.d/puppet
# We don't want to require emacs or vim, so we need to own these dirs
%{_datadir}/emacs
%{_datadir}/vim
%{_datadir}/%{name}
# man pages
%{_mandir}/man5/puppet.conf.5.gz
%{_mandir}/man8/puppet.8.gz
%{_mandir}/man8/puppet-agent.8.gz
%{_mandir}/man8/puppet-apply.8.gz
%{_mandir}/man8/puppet-catalog.8.gz
%{_mandir}/man8/puppet-describe.8.gz
%{_mandir}/man8/puppet-ca.8.gz
%{_mandir}/man8/puppet-cert.8.gz
%{_mandir}/man8/puppet-certificate.8.gz
%{_mandir}/man8/puppet-certificate_request.8.gz
%{_mandir}/man8/puppet-certificate_revocation_list.8.gz
%{_mandir}/man8/puppet-config.8.gz
%{_mandir}/man8/puppet-device.8.gz
%{_mandir}/man8/puppet-doc.8.gz
%{_mandir}/man8/puppet-facts.8.gz
%{_mandir}/man8/puppet-file.8.gz
%{_mandir}/man8/puppet-filebucket.8.gz
%{_mandir}/man8/puppet-help.8.gz
%{_mandir}/man8/puppet-inspect.8.gz
%{_mandir}/man8/puppet-instrumentation_data.8.gz
%{_mandir}/man8/puppet-instrumentation_listener.8.gz
%{_mandir}/man8/puppet-instrumentation_probe.8.gz
%{_mandir}/man8/puppet-key.8.gz
%{_mandir}/man8/puppet-kick.8.gz
%{_mandir}/man8/puppet-man.8.gz
%{_mandir}/man8/puppet-module.8.gz
%{_mandir}/man8/puppet-node.8.gz
%{_mandir}/man8/puppet-parser.8.gz
%{_mandir}/man8/puppet-plugin.8.gz
%{_mandir}/man8/puppet-queue.8.gz
%{_mandir}/man8/puppet-report.8.gz
%{_mandir}/man8/puppet-resource.8.gz
%{_mandir}/man8/puppet-resource_type.8.gz
%{_mandir}/man8/puppet-secret_agent.8.gz
%{_mandir}/man8/puppet-status.8.gz
%{_mandir}/man8/extlookup2hiera.8.gz
# These need to be owned by puppet so the server can
# write to them. The separate %defattr's are required
# to work around RH Bugzilla 681540
%defattr(-, puppet, puppet, 0755)
%{_localstatedir}/run/puppet
%defattr(-, puppet, puppet, 0750)
%{_localstatedir}/log/puppet
%{_localstatedir}/lib/puppet
+%{_localstatedir}/lib/puppet/state
+%{_localstatedir}/lib/puppet/reports
# Return the default attributes to 0755 to
# prevent incorrect permission assignment on EL6
%defattr(-, root, root, 0755)
%files server
%defattr(-, root, root, 0755)
%if 0%{?_with_systemd}
%{_unitdir}/puppetmaster.service
%else
%{_initrddir}/puppetmaster
%{_initrddir}/puppetqueue
%config(noreplace) %{_sysconfdir}/sysconfig/puppetmaster
%endif
%config(noreplace) %{_sysconfdir}/puppet/fileserver.conf
%dir %{_sysconfdir}/puppet/manifests
%dir %{_sysconfdir}/puppet/environments
%dir %{_sysconfdir}/puppet/environments/example_env
%dir %{_sysconfdir}/puppet/environments/example_env/manifests
%dir %{_sysconfdir}/puppet/environments/example_env/modules
%{_sysconfdir}/puppet/environments/example_env/README.environment
%{_mandir}/man8/puppet-ca.8.gz
%{_mandir}/man8/puppet-master.8.gz
# Fixed uid/gid were assigned in bz 472073 (Fedora), 471918 (RHEL-5),
# and 471919 (RHEL-4)
%pre
getent group puppet &>/dev/null || groupadd -r puppet -g 52 &>/dev/null
getent passwd puppet &>/dev/null || \
useradd -r -u 52 -g puppet -d %{_localstatedir}/lib/puppet -s /sbin/nologin \
-c "Puppet" puppet &>/dev/null
# ensure that old setups have the right puppet home dir
if [ $1 -gt 1 ] ; then
usermod -d %{_localstatedir}/lib/puppet puppet &>/dev/null
fi
exit 0
%post
%if 0%{?_with_systemd}
/bin/systemctl daemon-reload >/dev/null 2>&1 || :
if [ "$1" -ge 1 ]; then
# The pidfile changed from 0.25.x to 2.6.x, handle upgrades without leaving
# the old process running.
oldpid="%{_localstatedir}/run/puppet/puppetd.pid"
newpid="%{_localstatedir}/run/puppet/agent.pid"
if [ -s "$oldpid" -a ! -s "$newpid" ]; then
(kill $(< "$oldpid") && rm -f "$oldpid" && \
/bin/systemctl start puppet.service) >/dev/null 2>&1 || :
fi
fi
%else
/sbin/chkconfig --add puppet || :
if [ "$1" -ge 1 ]; then
# The pidfile changed from 0.25.x to 2.6.x, handle upgrades without leaving
# the old process running.
oldpid="%{_localstatedir}/run/puppet/puppetd.pid"
newpid="%{_localstatedir}/run/puppet/agent.pid"
if [ -s "$oldpid" -a ! -s "$newpid" ]; then
(kill $(< "$oldpid") && rm -f "$oldpid" && \
/sbin/service puppet start) >/dev/null 2>&1 || :
fi
# If an old puppet process (one whose binary is located in /sbin) is running,
# kill it and then start up a fresh with the new binary.
if [ -e "$newpid" ]; then
if ps aux | grep `cat "$newpid"` | grep -v grep | awk '{ print $12 }' | grep -q sbin; then
(kill $(< "$newpid") && rm -f "$newpid" && \
/sbin/service puppet start) >/dev/null 2>&1 || :
fi
fi
fi
%endif
%post server
%if 0%{?_with_systemd}
/bin/systemctl daemon-reload >/dev/null 2>&1 || :
if [ "$1" -ge 1 ]; then
# The pidfile changed from 0.25.x to 2.6.x, handle upgrades without leaving
# the old process running.
oldpid="%{_localstatedir}/run/puppet/puppetmasterd.pid"
newpid="%{_localstatedir}/run/puppet/master.pid"
if [ -s "$oldpid" -a ! -s "$newpid" ]; then
(kill $(< "$oldpid") && rm -f "$oldpid" && \
/bin/systemctl start puppetmaster.service) > /dev/null 2>&1 || :
fi
fi
%else
/sbin/chkconfig --add puppetmaster || :
if [ "$1" -ge 1 ]; then
# The pidfile changed from 0.25.x to 2.6.x, handle upgrades without leaving
# the old process running.
oldpid="%{_localstatedir}/run/puppet/puppetmasterd.pid"
newpid="%{_localstatedir}/run/puppet/master.pid"
if [ -s "$oldpid" -a ! -s "$newpid" ]; then
(kill $(< "$oldpid") && rm -f "$oldpid" && \
/sbin/service puppetmaster start) >/dev/null 2>&1 || :
fi
fi
%endif
%preun
%if 0%{?_with_systemd}
if [ "$1" -eq 0 ] ; then
# Package removal, not upgrade
/bin/systemctl --no-reload disable puppetagent.service > /dev/null 2>&1 || :
/bin/systemctl --no-reload disable puppet.service > /dev/null 2>&1 || :
/bin/systemctl stop puppetagent.service > /dev/null 2>&1 || :
/bin/systemctl stop puppet.service > /dev/null 2>&1 || :
/bin/systemctl daemon-reload >/dev/null 2>&1 || :
fi
if [ "$1" == "1" ]; then
/bin/systemctl is-enabled puppetagent.service > /dev/null 2>&1
if [ "$?" == "0" ]; then
/bin/systemctl --no-reload disable puppetagent.service > /dev/null 2>&1 ||:
/bin/systemctl stop puppetagent.service > /dev/null 2>&1 ||:
/bin/systemctl daemon-reload >/dev/null 2>&1 ||:
if [ ! -d %{pending_upgrade_path} ]; then
mkdir -p %{pending_upgrade_path}
fi
if [ ! -e %{pending_upgrade_file} ]; then
touch %{pending_upgrade_file}
fi
fi
fi
%else
if [ "$1" = 0 ] ; then
/sbin/service puppet stop > /dev/null 2>&1
/sbin/chkconfig --del puppet || :
fi
%endif
%preun server
%if 0%{?_with_systemd}
if [ $1 -eq 0 ] ; then
# Package removal, not upgrade
/bin/systemctl --no-reload disable puppetmaster.service > /dev/null 2>&1 || :
/bin/systemctl stop puppetmaster.service > /dev/null 2>&1 || :
/bin/systemctl daemon-reload >/dev/null 2>&1 || :
fi
%else
if [ "$1" = 0 ] ; then
/sbin/service puppetmaster stop > /dev/null 2>&1
/sbin/chkconfig --del puppetmaster || :
fi
%endif
%postun
%if 0%{?_with_systemd}
if [ $1 -ge 1 ] ; then
if [ -e %{pending_upgrade_file} ]; then
/bin/systemctl --no-reload enable puppet.service > /dev/null 2>&1 ||:
/bin/systemctl start puppet.service > /dev/null 2>&1 ||:
/bin/systemctl daemon-reload >/dev/null 2>&1 ||:
rm %{pending_upgrade_file}
fi
# Package upgrade, not uninstall
/bin/systemctl try-restart puppetagent.service >/dev/null 2>&1 || :
fi
%else
if [ "$1" -ge 1 ]; then
/sbin/service puppet condrestart >/dev/null 2>&1 || :
fi
%endif
%postun server
%if 0%{?_with_systemd}
if [ $1 -ge 1 ] ; then
# Package upgrade, not uninstall
/bin/systemctl try-restart puppetmaster.service >/dev/null 2>&1 || :
fi
%else
if [ "$1" -ge 1 ]; then
/sbin/service puppetmaster condrestart >/dev/null 2>&1 || :
fi
%endif
%clean
rm -rf %{buildroot}
%changelog
* <%= Time.now.strftime("%a %b %d %Y") %> Puppet Labs Release <info@puppetlabs.com> - <%= @rpmversion %>-<%= @rpmrelease %>
- Build for <%= @version %>
* Wed Oct 2 2013 Jason Antman <jason@jasonantman.com>
- Move systemd service and unit file names back to "puppet" from erroneous "puppetagent"
- Add symlink to puppetagent unit file for compatibility with current bug
- Alter package removal actions to deactivate and stop both service names
* Thu Jun 27 2013 Matthaus Owens <matthaus@puppetlabs.com> - 3.2.3-0.1rc0
- Bump requires on ruby-rgen to 0.6.5
* Fri Apr 12 2013 Matthaus Owens <matthaus@puppetlabs.com> - 3.2.0-0.1rc0
- Add requires on ruby-rgen for new parser in Puppet 3.2
* Fri Jan 25 2013 Matthaus Owens <matthaus@puppetlabs.com> - 3.1.0-0.1rc1
- Add extlookup2hiera.8.gz to the files list
* Wed Jan 9 2013 Ryan Uber <ru@ryanuber.com> - 3.1.0-0.1rc1
- Work-around for RH Bugzilla 681540
* Fri Dec 28 2012 Michael Stahnke <stahnma@puppetlabs.com> - 3.0.2-2
- Added a script for Network Manager for bug https://bugzilla.redhat.com/532085
* Tue Dec 18 2012 Matthaus Owens <matthaus@puppetlabs.com>
- Remove for loop on examples/ code which no longer exists. Add --no-run-if-empty to xargs invocations.
* Sat Dec 1 2012 Ryan Uber <ryuber@cisco.com>
- Fix for logdir perms regression (#17866)
* Wed Aug 29 2012 Moses Mendoza <moses@puppetlabs.com> - 3.0.0-0.1rc5
- Update for 3.0.0 rc5
* Fri Aug 24 2012 Eric Sorenson <eric0@puppetlabs.com> - 3.0.0-0.1rc4
- Facter requirement is 1.6.11, not 2.0
- Update for 3.0.0 rc4
* Tue Aug 21 2012 Moses Mendoza <moses@puppetlabs.com> - 2.7.19-1
- Update for 2.7.19
* Tue Aug 14 2012 Moses Mendoza <moses@puppetlabs.com> - 2.7.19-0.1rc3
- Update for 2.7.19rc3
* Tue Aug 7 2012 Moses Mendoza <moses@puppetlabs.com> - 2.7.19-0.1rc2
- Update for 2.7.19rc2
* Wed Aug 1 2012 Moses Mendoza <moses@puppetlabs.com> - 2.7.19-0.1rc1
- Update for 2.7.19rc1
* Wed Jul 11 2012 William Hopper <whopper@puppetlabs.com> - 2.7.18-2
- (#15221) Create /etc/puppet/modules for puppet module tool
* Mon Jul 9 2012 Moses Mendoza <moses@puppetlabs.com> - 2.7.18-1
- Update for 2.7.18
* Tue Jun 19 2012 Matthaus Litteken <matthaus@puppetlabs.com> - 2.7.17-1
- Update for 2.7.17
* Wed Jun 13 2012 Matthaus Litteken <matthaus@puppetlabs.com> - 2.7.16-1
- Update for 2.7.16
* Fri Jun 08 2012 Moses Mendoza <moses@puppetlabs.com> - 2.7.16-0.1rc1.2
- Updated facter 2.0 dep to include epoch 1
* Wed Jun 06 2012 Matthaus Litteken <matthaus@puppetlabs.com> - 2.7.16-0.1rc1
- Update for 2.7.16rc1, added generated manpages
* Fri Jun 01 2012 Matthaus Litteken <matthaus@puppetlabs.com> - 3.0.0-0.1rc3
- Puppet 3.0.0rc3 Release
* Fri Jun 01 2012 Matthaus Litteken <matthaus@puppetlabs.com> - 2.7.15-0.1rc4
- Update for 2.7.15rc4
* Tue May 29 2012 Moses Mendoza <moses@puppetlabs.com> - 2.7.15-0.1rc3
- Update for 2.7.15rc3
* Tue May 22 2012 Matthaus Litteken <matthaus@puppetlabs.com> - 3.0.0-0.1rc2
- Puppet 3.0.0rc2 Release
* Thu May 17 2012 Matthaus Litteken <matthaus@puppetlabs.com> - 3.0.0-0.1rc1
- Puppet 3.0.0rc1 Release
* Wed May 16 2012 Moses Mendoza <moses@puppetlabs.com> - 2.7.15-0.1rc2
- Update for 2.7.15rc2
* Tue May 15 2012 Moses Mendoza <moses@puppetlabs.com> - 2.7.15-0.1rc1
- Update for 2.7.15rc1
* Wed May 02 2012 Moses Mendoza <moses@puppetlabs.com> - 2.7.14-1
- Update for 2.7.14
* Tue Apr 10 2012 Matthaus Litteken <matthaus@puppetlabs.com> - 2.7.13-1
- Update for 2.7.13
* Mon Mar 12 2012 Michael Stahnke <stahnma@puppetlabs.com> - 2.7.12-1
- Update for 2.7.12
* Fri Feb 24 2012 Matthaus Litteken <matthaus@puppetlabs.com> - 2.7.11-2
- Update 2.7.11 from proper tag, including #12572
* Wed Feb 22 2012 Michael Stahnke <stahnma@puppetlabs.com> - 2.7.11-1
- Update for 2.7.11
* Wed Jan 25 2012 Michael Stahnke <stahnma@puppetlabs.com> - 2.7.10-1
- Update for 2.7.10
* Fri Dec 9 2011 Matthaus Litteken <matthaus@puppetlabs.com> - 2.7.9-1
- Update for 2.7.9
* Thu Dec 8 2011 Matthaus Litteken <matthaus@puppetlabs.com> - 2.7.8-1
- Update for 2.7.8
* Wed Nov 30 2011 Michael Stahnke <stahnma@puppetlabs.com> - 2.7.8-0.1rc1
- Update for 2.7.8rc1
* Mon Nov 21 2011 Michael Stahnke <stahnma@puppetlabs.com> - 2.7.7-1
- Release 2.7.7
* Tue Nov 01 2011 Michael Stahnke <stahnma@puppetlabs.com> - 2.7.7-0.1rc1
- Update for 2.7.7rc1
* Fri Oct 21 2011 Michael Stahnke <stahnma@puppetlabs.com> - 2.7.6-1
- 2.7.6 final
* Thu Oct 13 2011 Michael Stahnke <stahnma@puppetlabs.com> - 2.7.6-.1rc3
- New RC
* Fri Oct 07 2011 Michael Stahnke <stahnma@puppetlabs.com> - 2.7.6-0.1rc2
- New RC
* Mon Oct 03 2011 Michael Stahnke <stahnma@puppetlabs.com> - 2.7.6-0.1rc1
- New RC
* Fri Sep 30 2011 Michael Stahnke <stahnma@puppetlabs.com> - 2.7.5-1
- Fixes for CVE-2011-3869, 3870, 3871
* Wed Sep 28 2011 Michael Stahnke <stahnma@puppetlabs.com> - 2.7.4-1
- Fix for CVE-2011-3484
* Wed Jul 06 2011 Michael Stahnke <stahnma@puppetlabs.com> - 2.7.2-0.2.rc1
- Clean up rpmlint errors
- Put man pages in correct package
* Wed Jul 06 2011 Michael Stahnke <stahnma@puppetlabs.com> - 2.7.2-0.1.rc1
- Update to 2.7.2rc1
* Wed Jun 15 2011 Todd Zullinger <tmz@pobox.com> - 2.6.9-0.1.rc1
- Update rc versioning to ensure 2.6.9 final is newer to rpm
- sync changes with Fedora/EPEL
* Tue Jun 14 2011 Michael Stahnke <stahnma@puppetlabs.com> - 2.6.9rc1-1
- Update to 2.6.9rc1
* Thu Apr 14 2011 Todd Zullinger <tmz@pobox.com> - 2.6.8-1
- Update to 2.6.8
* Thu Mar 24 2011 Todd Zullinger <tmz@pobox.com> - 2.6.7-1
- Update to 2.6.7
* Wed Mar 16 2011 Todd Zullinger <tmz@pobox.com> - 2.6.6-1
- Update to 2.6.6
- Ensure %%pre exits cleanly
- Fix License tag, puppet is now GPLv2 only
- Create and own /usr/share/puppet/modules (#615432)
- Properly restart puppet agent/master daemons on upgrades from 0.25.x
- Require libselinux-utils when selinux support is enabled
- Support tmpfiles.d for Fedora >= 15 (#656677)
* Wed Feb 09 2011 Fedora Release Engineering <rel-eng@lists.fedoraproject.org> - 0.25.5-2
- Rebuilt for https://fedoraproject.org/wiki/Fedora_15_Mass_Rebuild
* Mon May 17 2010 Todd Zullinger <tmz@pobox.com> - 0.25.5-1
- Update to 0.25.5
- Adjust selinux conditional for EL-6
- Apply rundir-perms patch from tarball rather than including it separately
- Update URL's to reflect the new puppetlabs.com domain
* Fri Jan 29 2010 Todd Zullinger <tmz@pobox.com> - 0.25.4-1
- Update to 0.25.4
* Tue Jan 19 2010 Todd Zullinger <tmz@pobox.com> - 0.25.3-2
- Apply upstream patch to fix cron resources (upstream #2845)
* Mon Jan 11 2010 Todd Zullinger <tmz@pobox.com> - 0.25.3-1
- Update to 0.25.3
* Tue Jan 05 2010 Todd Zullinger <tmz@pobox.com> - 0.25.2-1.1
- Replace %%define with %%global for macros
* Tue Jan 05 2010 Todd Zullinger <tmz@pobox.com> - 0.25.2-1
- Update to 0.25.2
- Fixes CVE-2010-0156, tmpfile security issue (#502881)
- Install auth.conf, puppetqd manpage, and queuing examples/docs
* Wed Nov 25 2009 Jeroen van Meeuwen <j.van.meeuwen@ogd.nl> - 0.25.1-1
- New upstream version
* Tue Oct 27 2009 Todd Zullinger <tmz@pobox.com> - 0.25.1-0.3
- Update to 0.25.1
- Include the pi program and man page (R.I.Pienaar)
* Sat Oct 17 2009 Todd Zullinger <tmz@pobox.com> - 0.25.1-0.2.rc2
- Update to 0.25.1rc2
* Tue Sep 22 2009 Todd Zullinger <tmz@pobox.com> - 0.25.1-0.1.rc1
- Update to 0.25.1rc1
- Move puppetca to puppet package, it has uses on client systems
- Drop redundant %%doc from manpage %%file listings
* Fri Sep 04 2009 Todd Zullinger <tmz@pobox.com> - 0.25.0-1
- Update to 0.25.0
- Fix permissions on /var/log/puppet (#495096)
- Install emacs mode and vim syntax files (#491437)
- Install ext/ directory in %%{_datadir}/%%{name} (/usr/share/puppet)
* Mon May 04 2009 Todd Zullinger <tmz@pobox.com> - 0.25.0-0.1.beta1
- Update to 0.25.0beta1
- Make Augeas and SELinux requirements build time options
* Mon Mar 23 2009 Todd Zullinger <tmz@pobox.com> - 0.24.8-1
- Update to 0.24.8
- Quiet output from %%pre
- Use upstream install script
- Increase required facter version to >= 1.5
* Tue Dec 16 2008 Todd Zullinger <tmz@pobox.com> - 0.24.7-4
- Remove redundant useradd from %%pre
* Tue Dec 16 2008 Jeroen van Meeuwen <kanarip@kanarip.com> - 0.24.7-3
- New upstream version
- Set a static uid and gid (#472073, #471918, #471919)
- Add a conditional requirement on libselinux-ruby for Fedora >= 9
- Add a dependency on ruby-augeas
* Wed Oct 22 2008 Todd Zullinger <tmz@pobox.com> - 0.24.6-1
- Update to 0.24.6
- Require ruby-shadow on Fedora and RHEL >= 5
- Simplify Fedora/RHEL version checks for ruby(abi) and BuildArch
- Require chkconfig and initstripts for preun, post, and postun scripts
- Conditionally restart puppet in %%postun
- Ensure %%preun, %%post, and %%postun scripts exit cleanly
- Create puppet user/group according to Fedora packaging guidelines
- Quiet a few rpmlint complaints
- Remove useless %%pbuild macro
- Make specfile more like the Fedora/EPEL template
* Mon Jul 28 2008 David Lutterkort <dlutter@redhat.com> - 0.24.5-1
- Add /usr/bin/puppetdoc
* Thu Jul 24 2008 Brenton Leanhardt <bleanhar@redhat.com>
- New version
- man pages now ship with tarball
- examples/code moved to root examples dir in upstream tarball
* Tue Mar 25 2008 David Lutterkort <dlutter@redhat.com> - 0.24.4-1
- Add man pages (from separate tarball, upstream will fix to
include in main tarball)
* Mon Mar 24 2008 David Lutterkort <dlutter@redhat.com> - 0.24.3-1
- New version
* Wed Mar 5 2008 David Lutterkort <dlutter@redhat.com> - 0.24.2-1
- New version
* Sat Dec 22 2007 David Lutterkort <dlutter@redhat.com> - 0.24.1-1
- New version
* Mon Dec 17 2007 David Lutterkort <dlutter@redhat.com> - 0.24.0-2
- Use updated upstream tarball that contains yumhelper.py
* Fri Dec 14 2007 David Lutterkort <dlutter@redhat.com> - 0.24.0-1
- Fixed license
- Munge examples/ to make rpmlint happier
* Wed Aug 22 2007 David Lutterkort <dlutter@redhat.com> - 0.23.2-1
- New version
* Thu Jul 26 2007 David Lutterkort <dlutter@redhat.com> - 0.23.1-1
- Remove old config files
* Wed Jun 20 2007 David Lutterkort <dlutter@redhat.com> - 0.23.0-1
- Install one puppet.conf instead of old config files, keep old configs
around to ease update
- Use plain shell commands in install instead of macros
* Wed May 2 2007 David Lutterkort <dlutter@redhat.com> - 0.22.4-1
- New version
* Thu Mar 29 2007 David Lutterkort <dlutter@redhat.com> - 0.22.3-1
- Claim ownership of _sysconfdir/puppet (bz 233908)
* Mon Mar 19 2007 David Lutterkort <dlutter@redhat.com> - 0.22.2-1
- Set puppet's homedir to /var/lib/puppet, not /var/puppet
- Remove no-lockdir patch, not needed anymore
* Mon Feb 12 2007 David Lutterkort <dlutter@redhat.com> - 0.22.1-2
- Fix bogus config parameter in puppetd.conf
* Sat Feb 3 2007 David Lutterkort <dlutter@redhat.com> - 0.22.1-1
- New version
* Fri Jan 5 2007 David Lutterkort <dlutter@redhat.com> - 0.22.0-1
- New version
* Mon Nov 20 2006 David Lutterkort <dlutter@redhat.com> - 0.20.1-2
- Make require ruby(abi) and buildarch: noarch conditional for fedora 5 or
later to allow building on older fedora releases
* Mon Nov 13 2006 David Lutterkort <dlutter@redhat.com> - 0.20.1-1
- New version
* Mon Oct 23 2006 David Lutterkort <dlutter@redhat.com> - 0.20.0-1
- New version
* Tue Sep 26 2006 David Lutterkort <dlutter@redhat.com> - 0.19.3-1
- New version
* Mon Sep 18 2006 David Lutterkort <dlutter@redhat.com> - 0.19.1-1
- New version
* Thu Sep 7 2006 David Lutterkort <dlutter@redhat.com> - 0.19.0-1
- New version
* Tue Aug 1 2006 David Lutterkort <dlutter@redhat.com> - 0.18.4-2
- Use /usr/bin/ruby directly instead of /usr/bin/env ruby in
executables. Otherwise, initscripts break since pidof can't find the
right process
* Tue Aug 1 2006 David Lutterkort <dlutter@redhat.com> - 0.18.4-1
- New version
* Fri Jul 14 2006 David Lutterkort <dlutter@redhat.com> - 0.18.3-1
- New version
* Wed Jul 5 2006 David Lutterkort <dlutter@redhat.com> - 0.18.2-1
- New version
* Wed Jun 28 2006 David Lutterkort <dlutter@redhat.com> - 0.18.1-1
- Removed lsb-config.patch and yumrepo.patch since they are upstream now
* Mon Jun 19 2006 David Lutterkort <dlutter@redhat.com> - 0.18.0-1
- Patch config for LSB compliance (lsb-config.patch)
- Changed config moves /var/puppet to /var/lib/puppet, /etc/puppet/ssl
to /var/lib/puppet, /etc/puppet/clases.txt to /var/lib/puppet/classes.txt,
/etc/puppet/localconfig.yaml to /var/lib/puppet/localconfig.yaml
* Fri May 19 2006 David Lutterkort <dlutter@redhat.com> - 0.17.2-1
- Added /usr/bin/puppetrun to server subpackage
- Backported patch for yumrepo type (yumrepo.patch)
* Wed May 3 2006 David Lutterkort <dlutter@redhat.com> - 0.16.4-1
- Rebuilt
* Fri Apr 21 2006 David Lutterkort <dlutter@redhat.com> - 0.16.0-1
- Fix default file permissions in server subpackage
- Run puppetmaster as user puppet
- rebuilt for 0.16.0
* Mon Apr 17 2006 David Lutterkort <dlutter@redhat.com> - 0.15.3-2
- Don't create empty log files in post-install scriptlet
* Fri Apr 7 2006 David Lutterkort <dlutter@redhat.com> - 0.15.3-1
- Rebuilt for new version
* Wed Mar 22 2006 David Lutterkort <dlutter@redhat.com> - 0.15.1-1
- Patch0: Run puppetmaster as root; running as puppet is not ready
for primetime
* Mon Mar 13 2006 David Lutterkort <dlutter@redhat.com> - 0.15.0-1
- Commented out noarch; requires fix for bz184199
* Mon Mar 6 2006 David Lutterkort <dlutter@redhat.com> - 0.14.0-1
- Added BuildRequires for ruby
* Wed Mar 1 2006 David Lutterkort <dlutter@redhat.com> - 0.13.5-1
- Removed use of fedora-usermgmt. It is not required for Fedora Extras and
makes it unnecessarily hard to use this rpm outside of Fedora. Just
allocate the puppet uid/gid dynamically
* Sun Feb 19 2006 David Lutterkort <dlutter@redhat.com> - 0.13.0-4
- Use fedora-usermgmt to create puppet user/group. Use uid/gid 24. Fixed
problem with listing fileserver.conf and puppetmaster.conf twice
* Wed Feb 8 2006 David Lutterkort <dlutter@redhat.com> - 0.13.0-3
- Fix puppetd.conf
* Wed Feb 8 2006 David Lutterkort <dlutter@redhat.com> - 0.13.0-2
- Changes to run puppetmaster as user puppet
* Mon Feb 6 2006 David Lutterkort <dlutter@redhat.com> - 0.13.0-1
- Don't mark initscripts as config files
* Mon Feb 6 2006 David Lutterkort <dlutter@redhat.com> - 0.12.0-2
- Fix BuildRoot. Add dist to release
* Tue Jan 17 2006 David Lutterkort <dlutter@redhat.com> - 0.11.0-1
- Rebuild
* Thu Jan 12 2006 David Lutterkort <dlutter@redhat.com> - 0.10.2-1
- Updated for 0.10.2 Fixed minor kink in how Source is given
* Wed Jan 11 2006 David Lutterkort <dlutter@redhat.com> - 0.10.1-3
- Added basic fileserver.conf
* Wed Jan 11 2006 David Lutterkort <dlutter@redhat.com> - 0.10.1-1
- Updated. Moved installation of library files to sitelibdir. Pulled
initscripts into separate files. Folded tools rpm into server
* Thu Nov 24 2005 Duane Griffin <d.griffin@psenterprise.com>
- Added init scripts for the client
* Wed Nov 23 2005 Duane Griffin <d.griffin@psenterprise.com>
- First packaging
diff --git a/ext/windows/service/daemon.rb b/ext/windows/service/daemon.rb
index 84e4a283d..45f8cd050 100755
--- a/ext/windows/service/daemon.rb
+++ b/ext/windows/service/daemon.rb
@@ -1,169 +1,175 @@
#!/usr/bin/env ruby
require 'fileutils'
require 'win32/daemon'
require 'win32/dir'
require 'win32/process'
require 'win32/eventlog'
-require 'windows/synchronize'
-require 'windows/handle'
-
class WindowsDaemon < Win32::Daemon
- include Windows::Synchronize
- include Windows::Handle
- include Windows::Process
+ CREATE_NEW_CONSOLE = 0x00000010
+ EVENTLOG_ERROR_TYPE = 0x0001
+ EVENTLOG_WARNING_TYPE = 0x0002
+ EVENTLOG_INFORMATION_TYPE = 0x0004
+ @run_thread = nil
@LOG_TO_FILE = false
LOG_FILE = File.expand_path(File.join(Dir::COMMON_APPDATA, 'PuppetLabs', 'puppet', 'var', 'log', 'windows.log'))
LEVELS = [:debug, :info, :notice, :err]
LEVELS.each do |level|
define_method("log_#{level}") do |msg|
log(msg, level)
end
end
def service_init
end
def service_main(*argsv)
argsv = (argsv << ARGV).flatten.compact
args = argsv.join(' ')
@loglevel = LEVELS.index(argsv.index('--debug') ? :debug : :notice)
@LOG_TO_FILE = (argsv.index('--logtofile') ? true : false)
if (@LOG_TO_FILE)
FileUtils.mkdir_p(File.dirname(LOG_FILE))
args = args.gsub("--logtofile","")
end
basedir = File.expand_path(File.join(File.dirname(__FILE__), '..'))
# The puppet installer registers a 'Puppet' event source. For the moment events will be logged with this key, but
# it may be a good idea to split the Service and Puppet events later so it's easier to read in the windows Event Log.
#
# Example code to register an event source;
# eventlogdll = File.expand_path(File.join(basedir, 'puppet', 'ext', 'windows', 'eventlog', 'puppetres.dll'))
# if (File.exists?(eventlogdll))
# Win32::EventLog.add_event_source(
# 'source' => "Application",
# 'key_name' => "Puppet Agent",
# 'category_count' => 3,
# 'event_message_file' => eventlogdll,
# 'category_message_file' => eventlogdll
# )
# end
puppet = File.join(basedir, 'bin', 'puppet.bat')
unless File.exists?(puppet)
log_err("File not found: '#{puppet}'")
return
end
log_debug("Using '#{puppet}'")
log_notice('Service started')
- while running? do
+ service = self
+ @run_thread = Thread.new do
begin
- runinterval = %x{ "#{puppet}" agent --configprint runinterval }.to_i
- if runinterval == 0
- runinterval = 1800
- log_err("Failed to determine runinterval, defaulting to #{runinterval} seconds")
+ while service.running? do
+ runinterval = service.parse_runinterval(puppet)
+ if service.state == RUNNING or service.state == IDLE
+ service.log_notice("Executing agent with arguments: #{args}")
+ pid = Process.create(:command_line => "\"#{puppet}\" agent --onetime #{args}", :creation_flags => CREATE_NEW_CONSOLE).process_id
+ service.log_debug("Process created: #{pid}")
+ else
+ service.log_debug("Service is paused. Not invoking Puppet agent")
+ end
+
+ service.log_debug("Service worker thread waiting for #{runinterval} seconds")
+ sleep(runinterval)
+ service.log_debug('Service worker thread woken up')
end
rescue Exception => e
- log_exception(e)
- runinterval = 1800
- end
-
- if state == RUNNING or state == IDLE
- log_notice("Executing agent with arguments: #{args}")
- pid = Process.create(:command_line => "\"#{puppet}\" agent --onetime #{args}", :creation_flags => Process::CREATE_NEW_CONSOLE).process_id
- log_debug("Process created: #{pid}")
- else
- log_debug("Service is paused. Not invoking Puppet agent")
+ service.log_exception(e)
end
-
- log_debug("Service waiting for #{runinterval} seconds")
- sleep(runinterval)
- log_debug('Service woken up')
end
+ @run_thread.join
- log_notice('Service stopped')
rescue Exception => e
log_exception(e)
+ ensure
+ log_notice('Service stopped')
end
def service_stop
- log_notice('Service stopping')
- Thread.main.wakeup
+ log_notice('Service stopping / killing worker thread')
+ @run_thread.kill if @run_thread
end
def service_pause
- # The service will not stay in a paused stated, instead it will go back into a running state after a short period of time. This is an issue in the Win32-Service ruby code
- # Raised bug https://github.com/djberg96/win32-service/issues/11 and is fixed in version 0.8.3.
- # Because the Pause feature is so rarely used, there is no point in creating a workaround until puppet uses 0.8.3.
- log_notice('Service pausing. The service will not stay paused. See Puppet Issue PUP-1471 for more information')
+ log_notice('Service pausing')
end
def service_resume
log_notice('Service resuming')
end
def service_shutdown
log_notice('Host shutting down')
end
# Interrogation handler is just for debug. Can be commented out or removed entirely.
# def service_interrogate
# log_debug('Service is being interrogated')
# end
def log_exception(e)
log_err(e.message)
log_err(e.backtrace.join("\n"))
end
def log(msg, level)
if LEVELS.index(level) >= @loglevel
if (@LOG_TO_FILE)
File.open(LOG_FILE, 'a') { |f| f.puts("#{Time.now} Puppet (#{level}): #{msg}") }
end
case level
- when :debug
- report_windows_event(Win32::EventLog::INFO,0x01,msg.to_s)
- when :info
- report_windows_event(Win32::EventLog::INFO,0x01,msg.to_s)
- when :notice
- report_windows_event(Win32::EventLog::INFO,0x01,msg.to_s)
+ when :debug, :info, :notice
+ report_windows_event(EVENTLOG_INFORMATION_TYPE,0x01,msg.to_s)
when :err
- report_windows_event(Win32::EventLog::ERR,0x03,msg.to_s)
+ report_windows_event(EVENTLOG_ERROR_TYPE,0x03,msg.to_s)
else
- report_windows_event(Win32::EventLog::WARN,0x02,msg.to_s)
+ report_windows_event(EVENTLOG_WARNING_TYPE,0x02,msg.to_s)
end
end
end
def report_windows_event(type,id,message)
begin
eventlog = nil
eventlog = Win32::EventLog.open("Application")
eventlog.report_event(
:source => "Puppet",
- :event_type => type, # Win32::EventLog::INFO or WARN, ERROR
+ :event_type => type, # EVENTLOG_ERROR_TYPE, etc
:event_id => id, # 0x01 or 0x02, 0x03 etc.
:data => message # "the message"
)
rescue Exception => e
# Ignore all errors
ensure
if (!eventlog.nil?)
eventlog.close
end
end
end
+
+ def parse_runinterval(puppet_path)
+ begin
+ runinterval = %x{ "#{puppet_path}" agent --configprint runinterval }.to_i
+ if runinterval == 0
+ runinterval = 1800
+ log_err("Failed to determine runinterval, defaulting to #{runinterval} seconds")
+ end
+ rescue Exception => e
+ log_exception(e)
+ runinterval = 1800
+ end
+
+ runinterval
+ end
end
if __FILE__ == $0
WindowsDaemon.mainloop
end
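Reviewer note (not part of the patch): the reworked service_main above moves the agent loop onto a background worker thread that service_stop can kill, instead of waking the main thread. A minimal, self-contained sketch of that pattern follows; `Worker`, the `puts` logging, and the sleep-based loop are illustrative stand-ins for WindowsDaemon, its event-log helpers, and the Process.create call that launches "puppet agent --onetime".

````
# Sketch of the worker-thread pattern used by service_main/service_stop.
class Worker
  def initialize(interval)
    @interval = interval
    @running  = true
  end

  def running?
    @running
  end

  def start
    @run_thread = Thread.new do
      while running?
        puts "doing one unit of work"   # stands in for launching the agent
        sleep(@interval)                # stands in for the runinterval wait
      end
    end
    @run_thread.join                    # service_main blocks here until stopped
  end

  def stop
    @running = false
    @run_thread.kill if @run_thread     # mirrors service_stop killing the worker
  end
end

# Usage: start the worker, then stop it from another thread after a few seconds.
w = Worker.new(1)
Thread.new { sleep(3); w.stop }
w.start
puts "worker stopped"
````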
diff --git a/install.rb b/install.rb
index 1663686f2..9bdfa2586 100755
--- a/install.rb
+++ b/install.rb
@@ -1,429 +1,429 @@
#! /usr/bin/env ruby
#--
# Copyright 2004 Austin Ziegler <ruby-install@halostatue.ca>
# Install utility. Based on the original installation script for rdoc by the
# Pragmatic Programmers.
#
# This program is free software. It may be redistributed and/or modified under
# the terms of the GPL version 2 (or later) or the Ruby licence.
#
# Usage
# -----
# In most cases, if you have a typical project layout, you will need to do
# absolutely nothing to make this work for you. This layout is:
#
# bin/ # executable files -- "commands"
# lib/ # the source of the library
#
# The default behaviour:
# 1) Build Rdoc documentation from all files in bin/ (excluding .bat and .cmd),
# all .rb files in lib/, ./README, ./ChangeLog, and ./Install.
# 2) Build ri documentation from all files in bin/ (excluding .bat and .cmd),
# and all .rb files in lib/. This is disabled by default on Microsoft Windows.
# 3) Install commands from bin/ into the Ruby bin directory. On Windows, if
# a corresponding batch file (.bat or .cmd) exists in the bin directory,
# it will be copied over as well. Otherwise, a batch file (always .bat) will
# be created to run the specified command.
# 4) Install all library files ending in .rb from lib/ into Ruby's
# site_lib/version directory.
#
#++
require 'rbconfig'
require 'find'
require 'fileutils'
require 'tempfile'
begin
require 'ftools' # apparently on some systems ftools doesn't get loaded
$haveftools = true
rescue LoadError
puts "ftools not found. Using FileUtils instead.."
$haveftools = false
end
require 'optparse'
require 'ostruct'
begin
require 'rdoc/rdoc'
$haverdoc = true
rescue LoadError
puts "Missing rdoc; skipping documentation"
$haverdoc = false
end
PREREQS = %w{openssl facter cgi hiera}
MIN_FACTER_VERSION = 1.5
InstallOptions = OpenStruct.new
def glob(list)
g = list.map { |i| Dir.glob(i) }
g.flatten!
g.compact!
g
end
def do_configs(configs, target, strip = 'conf/')
Dir.mkdir(target) unless File.directory? target
configs.each do |cf|
ocf = File.join(InstallOptions.config_dir, cf.gsub(/#{strip}/, ''))
if $haveftools
File.install(cf, ocf, 0644, true)
else
FileUtils.install(cf, ocf, {:mode => 0644, :preserve => true, :verbose => true})
end
end
if $operatingsystem == 'windows'
src_dll = 'ext/windows/eventlog/puppetres.dll'
dst_dll = File.join(InstallOptions.bin_dir, 'puppetres.dll')
if $haveftools
File.install(src_dll, dst_dll, 0644, true)
else
FileUtils.install(src_dll, dst_dll, {:mode => 0644, :preserve => true, :verbose => true})
end
require 'win32/registry'
include Win32::Registry::Constants
begin
Win32::Registry::HKEY_LOCAL_MACHINE.create('SYSTEM\CurrentControlSet\services\eventlog\Application\Puppet', KEY_ALL_ACCESS | 0x0100) do |reg|
reg.write_s('EventMessageFile', dst_dll.tr('/', '\\'))
reg.write_i('TypesSupported', 0x7)
end
rescue Win32::Registry::Error => e
warn "Failed to create puppet eventlog registry key: #{e}"
end
end
end
def do_bins(bins, target, strip = 's?bin/')
Dir.mkdir(target) unless File.directory? target
bins.each do |bf|
obf = bf.gsub(/#{strip}/, '')
install_binfile(bf, obf, target)
end
end
def do_libs(libs, strip = 'lib/')
libs.each do |lf|
next if File.directory? lf
olf = File.join(InstallOptions.site_dir, lf.sub(/^#{strip}/, ''))
op = File.dirname(olf)
if $haveftools
File.makedirs(op, true)
File.chmod(0755, op)
File.install(lf, olf, 0644, true)
else
FileUtils.makedirs(op, {:mode => 0755, :verbose => true})
FileUtils.chmod(0755, op)
FileUtils.install(lf, olf, {:mode => 0644, :preserve => true, :verbose => true})
end
end
end
def do_man(man, strip = 'man/')
man.each do |mf|
omf = File.join(InstallOptions.man_dir, mf.gsub(/#{strip}/, ''))
om = File.dirname(omf)
if $haveftools
File.makedirs(om, true)
File.chmod(0755, om)
File.install(mf, omf, 0644, true)
else
FileUtils.makedirs(om, {:mode => 0755, :verbose => true})
FileUtils.chmod(0755, om)
FileUtils.install(mf, omf, {:mode => 0644, :preserve => true, :verbose => true})
end
gzip = %x{which gzip}
gzip.chomp!
%x{#{gzip} -f #{omf}}
end
end
# Verify that all of the prereqs are installed
def check_prereqs
PREREQS.each { |pre|
begin
require pre
if pre == "facter"
# to_f isn't quite exact for strings like "1.5.1" but is good
# enough for this purpose.
facter_version = Facter.version.to_f
if facter_version < MIN_FACTER_VERSION
puts "Facter version: #{facter_version}; minimum required: #{MIN_FACTER_VERSION}; cannot install"
exit -1
end
end
rescue LoadError
puts "Could not load #{pre}; cannot install"
exit -1
end
}
end
##
# Prepare the file installation.
#
def prepare_installation
$operatingsystem = Facter["operatingsystem"].value
InstallOptions.configs = true
# Only try to do docs if we're sure they have rdoc
if $haverdoc
InstallOptions.rdoc = true
InstallOptions.ri = $operatingsystem != "windows"
else
InstallOptions.rdoc = false
InstallOptions.ri = false
end
ARGV.options do |opts|
opts.banner = "Usage: #{File.basename($0)} [options]"
opts.separator ""
opts.on('--[no-]rdoc', 'Prevents the creation of RDoc output.', 'Default on.') do |onrdoc|
InstallOptions.rdoc = onrdoc
end
opts.on('--[no-]ri', 'Prevents the creation of RI output.', 'Default off on mswin32.') do |onri|
InstallOptions.ri = onri
end
opts.on('--[no-]tests', 'Prevents the execution of unit tests.', 'Default off.') do |ontest|
InstallOptions.tests = ontest
warn "The tests flag is no longer functional in Puppet and is deprecated as of Dec 19, 2012. It will be removed in a future version of Puppet."
end
opts.on('--[no-]configs', 'Prevents the installation of config files', 'Default off.') do |ontest|
InstallOptions.configs = ontest
end
opts.on('--destdir[=OPTIONAL]', 'Installation prefix for all targets', 'Default essentially /') do |destdir|
InstallOptions.destdir = destdir
end
opts.on('--configdir[=OPTIONAL]', 'Installation directory for config files', 'Default /etc/puppet') do |configdir|
InstallOptions.configdir = configdir
end
opts.on('--bindir[=OPTIONAL]', 'Installation directory for binaries', 'overrides RbConfig::CONFIG["bindir"]') do |bindir|
InstallOptions.bindir = bindir
end
opts.on('--ruby[=OPTIONAL]', 'Ruby interpreter to use with installation', 'overrides ruby used to call install.rb') do |ruby|
InstallOptions.ruby = ruby
end
opts.on('--sitelibdir[=OPTIONAL]', 'Installation directory for libraries', 'overrides RbConfig::CONFIG["sitelibdir"]') do |sitelibdir|
InstallOptions.sitelibdir = sitelibdir
end
opts.on('--mandir[=OPTIONAL]', 'Installation directory for man pages', 'overrides RbConfig::CONFIG["mandir"]') do |mandir|
InstallOptions.mandir = mandir
end
opts.on('--quick', 'Performs a quick installation. Only the', 'installation is done.') do |quick|
InstallOptions.rdoc = false
InstallOptions.ri = false
InstallOptions.configs = true
end
opts.on('--full', 'Performs a full installation. All', 'optional installation steps are run.') do |full|
InstallOptions.rdoc = true
InstallOptions.ri = true
InstallOptions.configs = true
end
opts.separator("")
opts.on_tail('--help', "Shows this help text.") do
$stderr.puts opts
exit
end
opts.parse!
end
version = [RbConfig::CONFIG["MAJOR"], RbConfig::CONFIG["MINOR"]].join(".")
libdir = File.join(RbConfig::CONFIG["libdir"], "ruby", version)
# Mac OS X 10.5 and higher declare bindir
# /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin
# which is not generally where people expect executables to be installed
# These settings are appropriate defaults for all OS X versions.
if RUBY_PLATFORM =~ /^universal-darwin[\d\.]+$/
RbConfig::CONFIG['bindir'] = "/usr/bin"
end
if not InstallOptions.configdir.nil?
configdir = InstallOptions.configdir
elsif $operatingsystem == "windows"
begin
require 'win32/dir'
rescue LoadError => e
- puts "Cannot run on Microsoft Windows without the sys-admin, win32-process, win32-dir & win32-service gems: #{e}"
+ puts "Cannot run on Microsoft Windows without the win32-process, win32-dir & win32-service gems: #{e}"
exit -1
end
configdir = File.join(Dir::COMMON_APPDATA, "PuppetLabs", "puppet", "etc")
else
configdir = "/etc/puppet"
end
if not InstallOptions.bindir.nil?
bindir = InstallOptions.bindir
else
bindir = RbConfig::CONFIG['bindir']
end
if not InstallOptions.sitelibdir.nil?
sitelibdir = InstallOptions.sitelibdir
else
sitelibdir = RbConfig::CONFIG["sitelibdir"]
if sitelibdir.nil?
sitelibdir = $LOAD_PATH.find { |x| x =~ /site_ruby/ }
if sitelibdir.nil?
sitelibdir = File.join(libdir, "site_ruby")
elsif sitelibdir !~ Regexp.quote(version)
sitelibdir = File.join(sitelibdir, version)
end
end
end
if not InstallOptions.mandir.nil?
mandir = InstallOptions.mandir
else
mandir = RbConfig::CONFIG['mandir']
end
# This is the new way forward
if not InstallOptions.destdir.nil?
destdir = InstallOptions.destdir
# To be deprecated once people move over to using --destdir option
elsif not ENV['DESTDIR'].nil?
destdir = ENV['DESTDIR']
warn "DESTDIR is deprecated. Use --destdir instead."
else
destdir = ''
end
configdir = join(destdir, configdir)
bindir = join(destdir, bindir)
mandir = join(destdir, mandir)
sitelibdir = join(destdir, sitelibdir)
FileUtils.makedirs(configdir) if InstallOptions.configs
FileUtils.makedirs(bindir)
FileUtils.makedirs(mandir)
FileUtils.makedirs(sitelibdir)
InstallOptions.site_dir = sitelibdir
InstallOptions.config_dir = configdir
InstallOptions.bin_dir = bindir
InstallOptions.lib_dir = libdir
InstallOptions.man_dir = mandir
end
##
# Join two paths. On Windows, dir must be converted to a relative path,
# by stripping the drive letter, but only if the basedir is not empty.
#
def join(basedir, dir)
return "#{basedir}#{dir[2..-1]}" if $operatingsystem == "windows" and basedir.length > 0 and dir.length > 2
"#{basedir}#{dir}"
end
##
# Build the rdoc documentation. Also, try to build the RI documentation.
#
def build_rdoc(files)
return unless $haverdoc
begin
r = RDoc::RDoc.new
r.document(["--main", "README", "--title", "Puppet -- Site Configuration Management", "--line-numbers"] + files)
rescue RDoc::RDocError => e
$stderr.puts e.message
rescue Exception => e
$stderr.puts "Couldn't build RDoc documentation\n#{e.message}"
end
end
def build_ri(files)
return unless $haverdoc
begin
ri = RDoc::RDoc.new
#ri.document(["--ri-site", "--merge"] + files)
ri.document(["--ri-site"] + files)
rescue RDoc::RDocError => e
$stderr.puts e.message
rescue Exception => e
$stderr.puts "Couldn't build Ri documentation\n#{e.message}"
$stderr.puts "Continuing with install..."
end
end
##
# Install file(s) from ./bin to RbConfig::CONFIG['bindir']. Patch it on the way
# to insert a #! line; on a Unix install, the command is named as expected
# (e.g., bin/rdoc becomes rdoc); the shebang line handles running it. Under
# windows, we add an '.rb' extension and let file associations do their stuff.
def install_binfile(from, op_file, target)
tmp_file = Tempfile.new('puppet-binfile')
if not InstallOptions.ruby.nil?
ruby = InstallOptions.ruby
else
ruby = File.join(RbConfig::CONFIG['bindir'], RbConfig::CONFIG['ruby_install_name'])
end
File.open(from) do |ip|
File.open(tmp_file.path, "w") do |op|
op.puts "#!#{ruby}"
contents = ip.readlines
contents.shift if contents[0] =~ /^#!/
op.write contents.join
end
end
if $operatingsystem == "windows"
installed_wrapper = false
if File.exists?("#{from}.bat")
FileUtils.install("#{from}.bat", File.join(target, "#{op_file}.bat"), :mode => 0755, :preserve => true, :verbose => true)
installed_wrapper = true
end
if File.exists?("#{from}.cmd")
FileUtils.install("#{from}.cmd", File.join(target, "#{op_file}.cmd"), :mode => 0755, :preserve => true, :verbose => true)
installed_wrapper = true
end
if not installed_wrapper
tmp_file2 = Tempfile.new('puppet-wrapper')
cwv = <<-EOS
@echo off
setlocal
set RUBY_BIN=%~dp0
set RUBY_BIN=%RUBY_BIN:\\=/%
"%RUBY_BIN%ruby.exe" -x "%RUBY_BIN%puppet" %*
EOS
File.open(tmp_file2.path, "w") { |cw| cw.puts cwv }
FileUtils.install(tmp_file2.path, File.join(target, "#{op_file}.bat"), :mode => 0755, :preserve => true, :verbose => true)
tmp_file2.unlink
installed_wrapper = true
end
end
FileUtils.install(tmp_file.path, File.join(target, op_file), :mode => 0755, :preserve => true, :verbose => true)
tmp_file.unlink
end
# Change directory into the puppet root so we don't get the wrong files for install.
FileUtils.cd File.dirname(__FILE__) do
# Set these values to what you want installed.
configs = glob(%w{conf/auth.conf})
bins = glob(%w{bin/*})
rdoc = glob(%w{bin/* lib/**/*.rb README* }).reject { |e| e=~ /\.(bat|cmd)$/ }
ri = glob(%w{bin/*.rb lib/**/*.rb}).reject { |e| e=~ /\.(bat|cmd)$/ }
man = glob(%w{man/man[0-9]/*})
libs = glob(%w{lib/**/*})
check_prereqs
prepare_installation
#build_rdoc(rdoc) if InstallOptions.rdoc
#build_ri(ri) if InstallOptions.ri
do_configs(configs, InstallOptions.config_dir) if InstallOptions.configs
do_bins(bins, InstallOptions.bin_dir)
do_libs(libs)
do_man(man) unless $operatingsystem == "windows"
end
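Reviewer note (not part of the patch): the join helper above strips the drive letter when installing into a destdir on Windows, so the target path nests under the destdir rather than jumping back to the drive root. A small standalone illustration, with made-up example paths and a plain boolean in place of the $operatingsystem check:

````
# Sketch of install.rb's join(basedir, dir) destdir handling.
def join(basedir, dir, windows = false)
  return "#{basedir}#{dir[2..-1]}" if windows && basedir.length > 0 && dir.length > 2
  "#{basedir}#{dir}"
end

puts join('', '/etc/puppet')                 # => "/etc/puppet"
puts join('/tmp/pkg', '/etc/puppet')         # => "/tmp/pkg/etc/puppet"
puts join('C:/build/root', 'C:/ProgramData/PuppetLabs', true)
# => "C:/build/root/ProgramData/PuppetLabs"  ("C:" stripped from the target dir)
````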
diff --git a/lib/puppet.rb b/lib/puppet.rb
index bf54dd764..2a52e9887 100644
--- a/lib/puppet.rb
+++ b/lib/puppet.rb
@@ -1,267 +1,277 @@
require 'puppet/version'
# see the bottom of the file for further inclusions
# Also see the new Vendor support - towards the end
#
require 'facter'
require 'puppet/error'
require 'puppet/util'
require 'puppet/util/autoload'
require 'puppet/settings'
require 'puppet/util/feature'
require 'puppet/util/suidmanager'
require 'puppet/util/run_mode'
require 'puppet/external/pson/common'
require 'puppet/external/pson/version'
require 'puppet/external/pson/pure'
#------------------------------------------------------------
# the top-level module
#
# all this really does is dictate how the whole system behaves, through
# preferences for things like debugging
#
# it's also a place to find top-level commands like 'debug'
# The main Puppet class. Everything is contained here.
#
# @api public
module Puppet
require 'puppet/file_system'
require 'puppet/context'
require 'puppet/environments'
class << self
include Puppet::Util
attr_reader :features
attr_writer :name
end
# the hash that determines how our system behaves
@@settings = Puppet::Settings.new
+ # Note: It's important that these accessors (`self.settings`, `self.[]`) are
+ # defined before we try to load any "features" (which happens a few lines below),
+ # because the implementation of the features loading may examine the values of
+ # settings.
+ def self.settings
+ @@settings
+ end
+
+ # Get the value for a setting
+ #
+ # @param [Symbol] param the setting to retrieve
+ #
+ # @api public
+ def self.[](param)
+ if param == :debug
+ return Puppet::Util::Log.level == :debug
+ else
+ return @@settings[param]
+ end
+ end
+
# The services running in this process.
@services ||= []
require 'puppet/util/logging'
extend Puppet::Util::Logging
# The feature collection
@features = Puppet::Util::Feature.new('puppet/feature')
# Load the base features.
require 'puppet/feature/base'
# Store a new default value.
def self.define_settings(section, hash)
@@settings.define_settings(section, hash)
end
- # Get the value for a setting
- #
- # @param [Symbol] param the setting to retrieve
- #
- # @api public
- def self.[](param)
- if param == :debug
- return Puppet::Util::Log.level == :debug
- else
- return @@settings[param]
- end
- end
-
# setting access and stuff
def self.[]=(param,value)
@@settings[param] = value
end
def self.clear
@@settings.clear
end
def self.debug=(value)
if value
Puppet::Util::Log.level=(:debug)
else
Puppet::Util::Log.level=(:notice)
end
end
- def self.settings
- @@settings
- end
-
-
def self.run_mode
# This sucks (the existence of this method); there are a lot of places in our code that branch based on the value of
# "run mode", but there used to be some really confusing code paths that made it almost impossible to determine
# when during the lifecycle of a puppet application run the value would be set properly. A lot of the lifecycle
# stuff has been cleaned up now, but it still seems frightening that we rely so heavily on this value.
#
# I'd like to see about getting rid of the concept of "run_mode" entirely, but there are just too many places in
# the code that call this method at the moment... so I've settled for isolating it inside of the Settings class
# (rather than using a global variable, as we did previously...). Would be good to revisit this at some point.
#
# --cprice 2012-03-16
Puppet::Util::RunMode[@@settings.preferred_run_mode]
end
# Load all of the settings.
require 'puppet/defaults'
def self.genmanifest
if Puppet[:genmanifest]
puts Puppet.settings.to_manifest
exit(0)
end
end
# Parse the config file for this process.
# @deprecated Use {initialize_settings}
def self.parse_config()
Puppet.deprecation_warning("Puppet.parse_config is deprecated; please use Faces API (which will handle settings and state management for you), or (less desirable) call Puppet.initialize_settings")
Puppet.initialize_settings
end
# Initialize puppet's settings. This is intended only for use by external tools that are not
# built off of the Faces API or the Puppet::Util::Application class. It may also be used
# to initialize state so that a Face may be used programmatically, rather than as a stand-alone
# command-line tool.
#
# @api public
# @param args [Array<String>] the command line arguments to use for initialization
# @return [void]
def self.initialize_settings(args = [])
do_initialize_settings_for_run_mode(:user, args)
end
# Initialize puppet's settings for a specified run_mode.
#
# @deprecated Use {initialize_settings}
def self.initialize_settings_for_run_mode(run_mode)
Puppet.deprecation_warning("initialize_settings_for_run_mode may be removed in a future release, as may run_mode itself")
do_initialize_settings_for_run_mode(run_mode, [])
end
# private helper method to provide the implementation details of initializing for a run mode,
# but allowing us to control where the deprecation warning is issued
def self.do_initialize_settings_for_run_mode(run_mode, args)
Puppet.settings.initialize_global_settings(args)
run_mode = Puppet::Util::RunMode[run_mode]
Puppet.settings.initialize_app_defaults(Puppet::Settings.app_defaults_for_run_mode(run_mode))
Puppet.push_context(Puppet.base_context(Puppet.settings), "Initial context after settings initialization")
Puppet::Parser::Functions.reset
Puppet::Util::Log.level = Puppet[:log_level]
end
private_class_method :do_initialize_settings_for_run_mode
# Create a new type. Just proxy to the Type class. The mirroring query
# code was deprecated in 2008, but this is still in heavy use. I suppose
# this can count as a soft deprecation for the next dev. --daniel 2011-04-12
def self.newtype(name, options = {}, &block)
Puppet::Type.newtype(name, options, &block)
end
# Load vendored (setup paths, and load what is needed upfront).
# See the Vendor class for how to add additional vendored gems/code
require "puppet/vendor"
Puppet::Vendor.load_vendored
# Set default for YAML.load to unsafe so we don't affect programs
# requiring puppet -- in puppet we will call safe explicitly
SafeYAML::OPTIONS[:default_mode] = :unsafe
# The bindings used for initialization of puppet
# @api private
def self.base_context(settings)
environments = settings[:environmentpath]
modulepath = Puppet::Node::Environment.split_path(settings[:basemodulepath])
if environments.empty?
loaders = [Puppet::Environments::Legacy.new]
else
loaders = Puppet::Environments::Directories.from_path(environments, modulepath)
# in case the configured environment (used for the default sometimes)
# doesn't exist
- loaders << Puppet::Environments::StaticPrivate.new(
- Puppet::Node::Environment.create(Puppet[:environment].to_sym,
- [],
- Puppet::Node::Environment::NO_MANIFEST))
+ default_environment = Puppet[:environment].to_sym
+ if default_environment == :production
+ loaders << Puppet::Environments::StaticPrivate.new(
+ Puppet::Node::Environment.create(Puppet[:environment].to_sym,
+ [],
+ Puppet::Node::Environment::NO_MANIFEST))
+ end
end
{
- :environments => Puppet::Environments::Cached.new(*loaders)
+ :environments => Puppet::Environments::Cached.new(*loaders),
+ :http_pool => proc {
+ require 'puppet/network/http'
+ Puppet::Network::HTTP::NoCachePool.new
+ }
}
end
# A simple set of bindings that is just enough to limp along to
# initialization where the {base_context} bindings are put in place
# @api private
def self.bootstrap_context
- root_environment = Puppet::Node::Environment.create(:'*root*', [], '')
+ root_environment = Puppet::Node::Environment.create(:'*root*', [], Puppet::Node::Environment::NO_MANIFEST)
{
:current_environment => root_environment,
:root_environment => root_environment
}
end
# @param overrides [Hash] A hash of bindings to be merged with the parent context.
# @param description [String] A description of the context.
# @api private
def self.push_context(overrides, description = "")
@context.push(overrides, description)
end
# Return to the previous context.
# @raise [StackUnderflow] if the current context is the root
# @api private
def self.pop_context
@context.pop
end
# Lookup a binding by name or return a default value provided by a passed block (if given).
# @api private
def self.lookup(name, &block)
@context.lookup(name, &block)
end
# @param bindings [Hash] A hash of bindings to be merged with the parent context.
# @param description [String] A description of the context.
# @yield [] A block executed in the context of the temporarily pushed bindings.
# @api private
def self.override(bindings, description = "", &block)
@context.override(bindings, description, &block)
end
# @api private
def self.mark_context(name)
@context.mark(name)
end
# @api private
def self.rollback_context(name)
@context.rollback(name)
end
require 'puppet/node'
# The single instance used for normal operation
@context = Puppet::Context.new(bootstrap_context)
end
# This feels weird to me; I would really like for us to get to a state where there is never a "require" statement
# anywhere besides the very top of a file. That would not be possible at the moment without a great deal of
# effort, but I think we should strive for it and revisit this at some point. --cprice 2012-03-16
require 'puppet/indirector'
require 'puppet/type'
require 'puppet/resource'
require 'puppet/parser'
require 'puppet/network'
require 'puppet/ssl'
require 'puppet/module'
require 'puppet/data_binding'
require 'puppet/util/storage'
require 'puppet/status'
require 'puppet/file_bucket/file'
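Reviewer note (not part of the patch): the comment added in lib/puppet.rb explains that Puppet.settings and Puppet.[] are now defined before the feature collection is created, because feature loading may read settings. A stripped-down illustration of that ordering constraint; `Demo`, `Features`, and the `:log_level` key are hypothetical stand-ins, not Puppet classes:

````
# Stand-in for the load-order issue: Features#initialize reads a setting,
# so the settings accessors must exist before Features.new is called.
module Demo
  @@settings = { :log_level => :notice }

  def self.settings
    @@settings
  end

  def self.[](param)
    @@settings[param]
  end

  class Features
    def initialize
      # This call only works because Demo.[] was defined above.
      @level = Demo[:log_level]
    end
    attr_reader :level
  end

  # Created after the accessors, mirroring the reordering in the patch.
  @features = Features.new
  def self.features
    @features
  end
end

puts Demo.features.level   # => notice
````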
diff --git a/lib/puppet/application.rb b/lib/puppet/application.rb
index 9f533b12d..1d9611093 100644
--- a/lib/puppet/application.rb
+++ b/lib/puppet/application.rb
@@ -1,483 +1,516 @@
require 'optparse'
require 'puppet/util/command_line'
require 'puppet/util/plugins'
require 'puppet/util/constant_inflector'
require 'puppet/error'
module Puppet
# This class handles all the aspects of a Puppet application/executable
# * setting up options
# * setting up logs
# * choosing what to run
# * representing execution status
#
# === Usage
# An application is a subclass of Puppet::Application.
#
# For legacy compatibility,
# Puppet::Application[:example].run
# is equivalent to
# Puppet::Application::Example.new.run
#
#
# class Puppet::Application::Example < Puppet::Application
#
# def preinit
# # perform some pre initialization
# @all = false
# end
#
# # run_command is called to actually run the specified command
# def run_command
# send Puppet::Util::CommandLine.new.args.shift
# end
#
# # option uses metaprogramming to create a method
# # and also tells the option parser how to invoke that method
# option("--arg ARGUMENT") do |v|
# @args << v
# end
#
# option("--debug", "-d") do |v|
# @debug = v
# end
#
# option("--all", "-a") do |v|
# @all = v
# end
#
# def handle_unknown(opt,arg)
# # last chance to manage an option
# ...
# # let's say to the framework we finally handle this option
# true
# end
#
# def read
# # read action
# end
#
# def write
# # write action
# end
#
# end
#
# === Preinit
# The preinit block is the first code to be called in your application, before option parsing,
# setup or command execution.
#
# === Options
# Puppet::Application uses +OptionParser+ to manage the application options.
# Options are defined with the +option+ method to which are passed various
# arguments, including the long option, the short option, a description...
# Refer to +OptionParser+ documentation for the exact format.
# * If the option method is given a block, this one will be called whenever
# the option is encountered in the command-line argument.
# * If the option method has no block, a default functionality will be used that
# stores the argument (or true/false if the option doesn't require an argument) in
# the global (to the application) options array.
# * If a given option was not defined by the +option+ method, but it exists as a Puppet setting:
# * if +unknown+ was used with a block, it will be called with the option name and argument
# * if +unknown+ wasn't used, then the option/argument is handed to Puppet.settings.handlearg for
# a default behavior
#
# --help is managed directly by the Puppet::Application class, but can be overridden.
#
# === Setup
# Applications can use the setup block to perform any initialization.
# The default +setup+ behaviour is to: read Puppet configuration and manage log level and destination
#
# === What and how to run
# If the +dispatch+ block is defined it is called. This block should return the name of the registered command
# to be run.
# If it doesn't exist, it defaults to executing the +main+ command if defined.
#
# === Execution state
# The class attributes/methods of Puppet::Application serve as a global place to set and query the execution
# status of the application: stopping, restarting, etc. The setting of the application status does not directly
# affect its running status; it's assumed that the various components within the application will consult these
# settings appropriately and affect their own processing accordingly. Control operations (signal handlers and
# the like) should set the status appropriately to indicate to the overall system that it's in the process of
# stopping or restarting (or just running as usual).
#
# So, if something in your application needs to stop the process, for some reason, you might consider:
#
# def stop_me!
# # indicate that we're stopping
# Puppet::Application.stop!
# # ...do stuff...
# end
#
# And, if you have some component that involves a long-running process, you might want to consider:
#
# def my_long_process(giant_list_to_munge)
# giant_list_to_munge.collect do |member|
# # bail if we're stopping
# return if Puppet::Application.stop_requested?
# process_member(member)
# end
# end
class Application
require 'puppet/util'
include Puppet::Util
DOCPATTERN = ::File.expand_path(::File.dirname(__FILE__) + "/util/command_line/*" )
CommandLineArgs = Struct.new(:subcommand_name, :args)
@loader = Puppet::Util::Autoload.new(self, 'puppet/application')
class << self
include Puppet::Util
attr_accessor :run_status
def clear!
self.run_status = nil
end
def stop!
self.run_status = :stop_requested
end
def restart!
self.run_status = :restart_requested
end
# Indicates that Puppet::Application.restart! has been invoked and components should
# do what is necessary to facilitate a restart.
def restart_requested?
:restart_requested == run_status
end
# Indicates that Puppet::Application.stop! has been invoked and components should do what is necessary
# for a clean stop.
def stop_requested?
:stop_requested == run_status
end
# Indicates that one of stop! or start! was invoked on Puppet::Application, and some kind of process
# shutdown/short-circuit may be necessary.
def interrupted?
[:restart_requested, :stop_requested].include? run_status
end
# Indicates that Puppet::Application believes that it's in its usual running mode (no stop/restart request
# currently active).
def clear?
run_status.nil?
end
# Only executes the given block if the run status of Puppet::Application is clear (no restarts, stops,
# etc. requested).
# Upon block execution, checks the run status again; if a restart has been requested during the block's
# execution, then controlled_run will send a new HUP signal to the current process.
# Thus, long-running background processes can potentially finish their work before a restart.
def controlled_run(&block)
return unless clear?
result = block.call
Process.kill(:HUP, $PID) if restart_requested?
result
end
SHOULD_PARSE_CONFIG_DEPRECATION_MSG = "is no longer supported; config file parsing " +
"is now controlled by the puppet engine, rather than by individual applications. This " +
"method will be removed in a future version of puppet."
def should_parse_config
Puppet.deprecation_warning("should_parse_config " + SHOULD_PARSE_CONFIG_DEPRECATION_MSG)
end
def should_not_parse_config
Puppet.deprecation_warning("should_not_parse_config " + SHOULD_PARSE_CONFIG_DEPRECATION_MSG)
end
def should_parse_config?
Puppet.deprecation_warning("should_parse_config? " + SHOULD_PARSE_CONFIG_DEPRECATION_MSG)
true
end
# used to declare code that handles an option
def option(*options, &block)
long = options.find { |opt| opt =~ /^--/ }.gsub(/^--(?:\[no-\])?([^ =]+).*$/, '\1' ).gsub('-','_')
fname = "handle_#{long}".intern
if (block_given?)
define_method(fname, &block)
else
define_method(fname) do |value|
self.options["#{long}".to_sym] = value
end
end
self.option_parser_commands << [options, fname]
end
def banner(banner = nil)
@banner ||= banner
end
def option_parser_commands
@option_parser_commands ||= (
superclass.respond_to?(:option_parser_commands) ? superclass.option_parser_commands.dup : []
)
@option_parser_commands
end
# @return [Array<String>] the names of available applications
# @api public
def available_application_names
@loader.files_to_load.map do |fn|
::File.basename(fn, '.rb')
end.uniq
end
# Finds the class for a given application and loads the class. This does
# not create an instance of the application, it only gets a handle to the
# class. The code for the application is expected to live in a ruby file
# `puppet/application/#{name}.rb` that is available on the `$LOAD_PATH`.
#
# @param application_name [String] the name of the application to find (eg. "apply").
# @return [Class] the Class instance of the application that was found.
# @raise [Puppet::Error] if the application class was not found.
# @raise [LoadError] if there was a problem loading the application file.
# @api public
def find(application_name)
begin
require @loader.expand(application_name.to_s.downcase)
rescue LoadError => e
Puppet.log_and_raise(e, "Unable to find application '#{application_name}'. #{e}")
end
class_name = Puppet::Util::ConstantInflector.file2constant(application_name.to_s)
clazz = try_load_class(class_name)
################################################################
#### Begin 2.7.x backward compatibility hack;
#### eventually we need to issue a deprecation warning here,
#### and then get rid of this stanza in a subsequent release.
################################################################
if (clazz.nil?)
class_name = application_name.capitalize
clazz = try_load_class(class_name)
end
################################################################
#### End 2.7.x backward compatibility hack
################################################################
if clazz.nil?
raise Puppet::Error.new("Unable to load application class '#{class_name}' from file 'puppet/application/#{application_name}.rb'")
end
return clazz
end
# Given the fully qualified name of a class, attempt to get the class instance.
# @param [String] class_name the fully qualified name of the class to try to load
# @return [Class] the Class instance, or nil if it could not be loaded.
def try_load_class(class_name)
return self.const_defined?(class_name) ? const_get(class_name) : nil
end
private :try_load_class
def [](name)
find(name).new
end
# Sets or gets the run_mode name. Sets the run_mode name if a mode_name is
# passed. Otherwise, gets the run_mode or a default run_mode
#
def run_mode( mode_name = nil)
if mode_name
Puppet.settings.preferred_run_mode = mode_name
end
return @run_mode if @run_mode and not mode_name
require 'puppet/util/run_mode'
@run_mode = Puppet::Util::RunMode[ mode_name || Puppet.settings.preferred_run_mode ]
end
# This is for testing only
def clear_everything_for_tests
@run_mode = @banner = @run_status = @option_parser_commands = nil
end
end
attr_reader :options, :command_line
# Every app responds to --version
# See also `lib/puppet/util/command_line.rb` for some special case early
# handling of this.
option("--version", "-V") do |arg|
puts "#{Puppet.version}"
exit
end
# Every app responds to --help
option("--help", "-h") do |v|
puts help
exit
end
def app_defaults()
Puppet::Settings.app_defaults_for_run_mode(self.class.run_mode).merge(
:name => name
)
end
def initialize_app_defaults()
Puppet.settings.initialize_app_defaults(app_defaults)
end
# override to execute code before running anything else
def preinit
end
def initialize(command_line = Puppet::Util::CommandLine.new)
@command_line = CommandLineArgs.new(command_line.subcommand_name, command_line.args.dup)
@options = {}
end
# Execute the application.
# @api public
# @return [void]
def run
# I don't really like the names of these lifecycle phases. It would be nice to change them to some more meaningful
# names, and make deprecated aliases. Also, Daniel suggests that we can probably get rid of this "plugin_hook"
# pattern, but we need to check with PE and the community first. --cprice 2012-03-16
#
exit_on_fail("get application-specific default settings") do
plugin_hook('initialize_app_defaults') { initialize_app_defaults }
end
Puppet.push_context(Puppet.base_context(Puppet.settings), "Update for application settings (#{self.class.run_mode})")
- configured_environment = Puppet.lookup(:environments).get(Puppet[:environment])
+ # This use of configured environment is correct, this is used to establish
+ # the defaults for an application that does not override, or where an override
+ # has not been made from the command line.
+ #
+ configured_environment_name = Puppet[:environment]
+ if self.class.run_mode.name != :agent
+ configured_environment = Puppet.lookup(:environments).get(configured_environment_name)
+ if configured_environment.nil?
+ fail(Puppet::Environments::EnvironmentNotFound, configured_environment_name)
+ end
+ else
+ configured_environment = Puppet::Node::Environment.remote(configured_environment_name)
+ end
configured_environment = configured_environment.override_from_commandline(Puppet.settings)
# Setup a new context using the app's configuration
Puppet.push_context({ :current_environment => configured_environment },
"Update current environment from application's configuration")
require 'puppet/util/instrumentation'
Puppet::Util::Instrumentation.init
exit_on_fail("initialize") { plugin_hook('preinit') { preinit } }
exit_on_fail("parse application options") { plugin_hook('parse_options') { parse_options } }
exit_on_fail("prepare for execution") { plugin_hook('setup') { setup } }
exit_on_fail("configure routes from #{Puppet[:route_file]}") { configure_indirector_routes }
+ exit_on_fail("log runtime debug info") { log_runtime_environment }
exit_on_fail("run") { plugin_hook('run_command') { run_command } }
end
def main
raise NotImplementedError, "No valid command or main"
end
def run_command
main
end
def setup
setup_logs
end
def setup_logs
if options[:debug] || options[:verbose]
Puppet::Util::Log.newdestination(:console)
end
set_log_level
Puppet::Util::Log.setup_default unless options[:setdest]
end
def set_log_level
if options[:debug]
Puppet::Util::Log.level = :debug
elsif options[:verbose]
Puppet::Util::Log.level = :info
end
end
def handle_logdest_arg(arg)
begin
Puppet::Util::Log.newdestination(arg)
options[:setdest] = true
rescue => detail
Puppet.log_exception(detail)
end
end
def configure_indirector_routes
route_file = Puppet[:route_file]
if Puppet::FileSystem.exist?(route_file)
routes = YAML.load_file(route_file)
application_routes = routes[name.to_s]
Puppet::Indirector.configure_routes(application_routes) if application_routes
end
end
+ # Output basic information about the runtime environment for debugging
+ # purposes.
+ #
+ # @api public
+ #
+ # @param extra_info [Hash{String => #to_s}] a flat hash of extra information
+ # to log. Intended to be passed to super by subclasses.
+ # @return [void]
+ def log_runtime_environment(extra_info=nil)
+ runtime_info = {
+ 'puppet_version' => Puppet.version,
+ 'ruby_version' => RUBY_VERSION,
+ 'run_mode' => self.class.run_mode.name,
+ }
+ runtime_info['default_encoding'] = Encoding.default_external if RUBY_VERSION >= '1.9.3'
+ runtime_info.merge!(extra_info) unless extra_info.nil?
+
+ Puppet.debug 'Runtime environment: ' + runtime_info.map{|k,v| k + '=' + v.to_s}.join(', ')
+ end
+
def parse_options
# Create an option parser
option_parser = OptionParser.new(self.class.banner)
# Here we're building up all of the options that the application may need to handle. The main
# puppet settings defined in "defaults.rb" have already been parsed once (in command_line.rb) by
# the time we get here; however, our app may wish to handle some of them specially, so we need to
# make the parser aware of them again. We might be able to make this a bit more efficient by
# re-using the parser object that gets built up in command_line.rb. --cprice 2012-03-16
# Add all global options to it.
Puppet.settings.optparse_addargs([]).each do |option|
option_parser.on(*option) do |arg|
handlearg(option[0], arg)
end
end
# Add options that are local to this application, which were
# created using the "option()" metaprogramming method. If there
# are any conflicts, this application's options will be favored.
self.class.option_parser_commands.each do |options, fname|
option_parser.on(*options) do |value|
# Call the method that "option()" created.
self.send(fname, value)
end
end
# Scan command line. We just hand any exceptions to our upper levels,
# rather than printing help and exiting, so that we can meaningfully
# respond with context-sensitive help if we want to. --daniel 2011-04-12
option_parser.parse!(self.command_line.args)
end
def handlearg(opt, val)
opt, val = Puppet::Settings.clean_opt(opt, val)
send(:handle_unknown, opt, val) if respond_to?(:handle_unknown)
end
# this is used for testing
def self.exit(code)
exit(code)
end
def name
self.class.to_s.sub(/.*::/,"").downcase.to_sym
end
def help
"No help available for puppet #{name}"
end
def plugin_hook(step,&block)
Puppet::Plugins.send("before_application_#{step}",:application_object => self)
x = yield
Puppet::Plugins.send("after_application_#{step}",:application_object => self, :return_value => x)
x
end
private :plugin_hook
end
end
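Reviewer note (not part of the patch): the new log_runtime_environment hook is documented as taking an extra_info hash that subclasses pass through to super. A hedged standalone sketch of that pass-through pattern; `BaseApp`, `ExampleApp`, and the 'manifest' key are hypothetical and stand in for Puppet::Application and a subclass:

````
# Sketch of the extra_info pass-through used by log_runtime_environment:
# a subclass adds its own keys and calls super.
class BaseApp
  def log_runtime_environment(extra_info = nil)
    runtime_info = { 'ruby_version' => RUBY_VERSION }
    runtime_info.merge!(extra_info) unless extra_info.nil?
    puts 'Runtime environment: ' + runtime_info.map { |k, v| "#{k}=#{v}" }.join(', ')
  end
end

class ExampleApp < BaseApp
  def log_runtime_environment(extra_info = {})
    # Hypothetical extra key; a real subclass would report something it owns.
    super(extra_info.merge('manifest' => '/tmp/example.pp'))
  end
end

ExampleApp.new.log_runtime_environment
# => Runtime environment: ruby_version=2.x.x, manifest=/tmp/example.pp
````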
diff --git a/lib/puppet/application/agent.rb b/lib/puppet/application/agent.rb
index 7c7c11f4e..d13a3f983 100644
--- a/lib/puppet/application/agent.rb
+++ b/lib/puppet/application/agent.rb
@@ -1,479 +1,480 @@
require 'puppet/application'
require 'puppet/run'
require 'puppet/daemon'
require 'puppet/util/pidlock'
class Puppet::Application::Agent < Puppet::Application
run_mode :agent
def app_defaults
super.merge({
:catalog_terminus => :rest,
:catalog_cache_terminus => :json,
:node_terminus => :rest,
:facts_terminus => :facter,
})
end
def preinit
# Do an initial trap, so that cancels don't get a stack trace.
Signal.trap(:INT) do
$stderr.puts "Cancelling startup"
exit(0)
end
{
:waitforcert => nil,
:detailed_exitcodes => false,
:verbose => false,
:debug => false,
:setdest => false,
:enable => false,
:disable => false,
:client => true,
:fqdn => nil,
:serve => [],
:digest => 'SHA256',
:graph => true,
:fingerprint => false,
}.each do |opt,val|
options[opt] = val
end
@argv = ARGV.dup
end
option("--disable [MESSAGE]") do |message|
options[:disable] = true
options[:disable_message] = message
end
option("--enable")
option("--debug","-d")
option("--fqdn FQDN","-f")
option("--test","-t")
option("--verbose","-v")
option("--fingerprint")
option("--digest DIGEST")
option("--no-client") do |arg|
options[:client] = false
end
option("--detailed-exitcodes") do |arg|
options[:detailed_exitcodes] = true
end
option("--logdest DEST", "-l DEST") do |arg|
handle_logdest_arg(arg)
end
option("--waitforcert WAITFORCERT", "-w") do |arg|
options[:waitforcert] = arg.to_i
end
def help
<<-'HELP'
puppet-agent(8) -- The puppet agent daemon
========
SYNOPSIS
--------
Retrieves the client configuration from the puppet master and applies it to
the local host.
This service may be run as a daemon, run periodically using cron (or something
similar), or run interactively for testing purposes.
USAGE
-----
-puppet agent [--certname <name>] [-D|--daemonize|--no-daemonize]
- [-d|--debug] [--detailed-exitcodes] [--digest <digest>] [--disable [message]] [--enable]
- [--fingerprint] [-h|--help] [-l|--logdest syslog|<file>|console]
- [--no-client] [--noop] [-o|--onetime] [-t|--test]
- [-v|--verbose] [-V|--version] [-w|--waitforcert <seconds>]
+puppet agent [--certname <NAME>] [-D|--daemonize|--no-daemonize]
+ [-d|--debug] [--detailed-exitcodes] [--digest <DIGEST>] [--disable [MESSAGE]] [--enable]
+ [--fingerprint] [-h|--help] [-l|--logdest syslog|eventlog|<FILE>|console]
+ [--masterport <PORT>] [--no-client] [--noop] [-o|--onetime] [-t|--test]
+ [-v|--verbose] [-V|--version] [-w|--waitforcert <SECONDS>]
DESCRIPTION
-----------
This is the main puppet client. Its job is to retrieve the local
machine's configuration from a remote server and apply it. In order to
successfully communicate with the remote server, the client must have a
certificate signed by a certificate authority that the server trusts;
the recommended method for this, at the moment, is to run a certificate
authority as part of the puppet server (which is the default). The
client will connect and request a signed certificate, and will continue
connecting until it receives one.
Once the client has a signed certificate, it will retrieve its
configuration and apply it.
USAGE NOTES
-----------
'puppet agent' does its best to find a compromise between interactive
use and daemon use. Run with no arguments and no configuration, it will
go into the background, attempt to get a signed certificate, and retrieve
and apply its configuration every 30 minutes.
Some flags are meant specifically for interactive use -- in particular,
'test', 'tags' or 'fingerprint' are useful. 'test' enables verbose
logging, causes the daemon to stay in the foreground, exits if the
server's configuration is invalid (this happens if, for instance, you've
left a syntax error on the server), and exits after running the
configuration once (rather than hanging around as a long-running
process).
'tags' allows you to specify what portions of a configuration you want
to apply. Puppet elements are tagged with all of the class or definition
names that contain them, and you can use the 'tags' flag to specify one
of these names, causing only configuration elements contained within
that class or definition to be applied. This is very useful when you are
testing new configurations -- for instance, if you are just starting to
manage 'ntpd', you would put all of the new elements into an 'ntpd'
class, and call puppet with '--tags ntpd', which would only apply that
small portion of the configuration during your testing, rather than
applying the whole thing.
'fingerprint' is a one-time flag. In this mode 'puppet agent' will run
once and display on the console (and in the log) the current certificate
(or certificate request) fingerprint. Providing the '--digest' option
allows the use of a different digest algorithm to generate the fingerprint.
The main use is to verify that before signing a certificate request on
the master, the certificate request the master received is the same as
the one the client sent (to protect against man-in-the-middle attacks
when signing certificates).
OPTIONS
-------
Note that any Puppet setting that's valid in the configuration file is also a
valid long argument. For example, 'server' is a valid setting, so you can
specify '--server <servername>' as an argument. Boolean settings translate into
'--setting' and '--no-setting' pairs.
See the configuration file documentation at
http://docs.puppetlabs.com/references/stable/configuration.html for the
full list of acceptable settings. A commented list of all settings can also be
generated by running puppet agent with '--genconfig'.
* --certname:
Set the certname (unique ID) of the client. The master reads this
unique identifying string, which is usually set to the node's
fully-qualified domain name, to determine which configurations the
node will receive. Use this option to debug setup problems or
implement unusual node identification schemes.
(This is a Puppet setting, and can go in puppet.conf.)
* --daemonize:
Send the process into the background. This is the default.
(This is a Puppet setting, and can go in puppet.conf. Note the special 'no-'
prefix for boolean settings on the command line.)
* --no-daemonize:
Do not send the process into the background.
(This is a Puppet setting, and can go in puppet.conf. Note the special 'no-'
prefix for boolean settings on the command line.)
* --debug:
Enable full debugging.
* --detailed-exitcodes:
Provide transaction information via exit codes. If this is enabled, an exit
code of '2' means there were changes, an exit code of '4' means there were
failures during the transaction, and an exit code of '6' means there were both
changes and failures.
* --digest:
Change the certificate fingerprinting digest algorithm. The default is
SHA256. Valid values depends on the version of OpenSSL installed, but
will likely contain MD5, MD2, SHA1 and SHA256.
* --disable:
Disable working on the local system. This puts a lock file in place,
causing 'puppet agent' not to work on the system until the lock file
is removed. This is useful if you are testing a configuration and do
not want the central configuration to override the local state until
everything is tested and committed.
Disable can also take an optional message that will be reported by the
'puppet agent' at the next disabled run.
'puppet agent' uses the same lock file while it is running, so no more
than one 'puppet agent' process is working at a time.
'puppet agent' exits after executing this.
* --enable:
Enable working on the local system. This removes any lock file,
causing 'puppet agent' to start managing the local system again
(although it will continue to use its normal scheduling, so it might
not start for another half hour).
'puppet agent' exits after executing this.
* --fingerprint:
Display the current certificate or certificate signing request
fingerprint and then exit. Use the '--digest' option to change the
digest algorithm used.
* --help:
Print this help message
* --logdest:
- Where to send messages. Choose between syslog, the console, and a log
- file. Defaults to sending messages to syslog, or the console if
- debugging or verbosity is enabled.
+ Where to send log messages. Choose between 'syslog' (the POSIX syslog
+ service), 'eventlog' (the Windows Event Log), 'console', or the path to a log
+ file. If debugging or verbosity is enabled, this defaults to 'console'.
+ Otherwise, it defaults to 'syslog' on POSIX systems and 'eventlog' on Windows.
* --masterport:
The port on which to contact the puppet master.
(This is a Puppet setting, and can go in puppet.conf.)
* --no-client:
Do not create a config client. This will cause the daemon to start
but not check configuration unless it is triggered with `puppet
kick`. This only makes sense when puppet agent is being run with
listen = true in puppet.conf or was started with the `--listen` option.
* --noop:
Use 'noop' mode where the daemon runs in a no-op or dry-run mode. This
is useful for seeing what changes Puppet will make without actually
executing the changes.
(This is a Puppet setting, and can go in puppet.conf. Note the special 'no-'
prefix for boolean settings on the command line.)
* --onetime:
Run the configuration once. Runs a single (normally daemonized) Puppet
run. Useful for interactively running puppet agent when used in
conjunction with the --no-daemonize option.
(This is a Puppet setting, and can go in puppet.conf. Note the special 'no-'
prefix for boolean settings on the command line.)
* --test:
Enable the most common options used for testing. These are 'onetime',
'verbose', 'ignorecache', 'no-daemonize', 'no-usecacheonfailure',
'detailed-exitcodes', 'no-splay', and 'show_diff'.
* --verbose:
Turn on verbose reporting.
* --version:
Print the puppet version number and exit.
* --waitforcert:
This option only matters for daemons that do not yet have certificates
and it is enabled by default, with a value of 120 (seconds). This
causes 'puppet agent' to connect to the server every 2 minutes and ask
it to sign a certificate request. This is useful for the initial setup
of a puppet client. You can turn off waiting for certificates by
specifying a time of 0.
(This is a Puppet setting, and can go in puppet.conf. Note the special 'no-'
prefix for boolean settings on the command line.)
EXAMPLE
-------
$ puppet agent --server puppet.domain.com
DIAGNOSTICS
-----------
Puppet agent accepts the following signals:
* SIGHUP:
Restart the puppet agent daemon.
* SIGINT and SIGTERM:
Shut down the puppet agent daemon.
* SIGUSR1:
Immediately retrieve and apply configurations from the puppet master.
* SIGUSR2:
Close file descriptors for log files and reopen them. Used with logrotate.
AUTHOR
------
Luke Kanies
COPYRIGHT
---------
Copyright (c) 2011 Puppet Labs, LLC Licensed under the Apache 2.0 License
HELP
end
def run_command
if options[:fingerprint]
fingerprint
else
# It'd be nice to daemonize later, but we have to daemonize before
# waiting for certificates so that we don't block
daemon = daemonize_process_when(Puppet[:daemonize])
wait_for_certificates
if Puppet[:onetime]
onetime(daemon)
else
main(daemon)
end
end
end
def fingerprint
host = Puppet::SSL::Host.new
unless cert = host.certificate || host.certificate_request
$stderr.puts "Fingerprint asked but no certificate nor certificate request have yet been issued"
exit(1)
return
end
unless digest = cert.digest(options[:digest].to_s)
raise ArgumentError, "Could not get fingerprint for digest '#{options[:digest]}'"
end
puts digest.to_s
end
def onetime(daemon)
if Puppet[:listen]
Puppet.notice "Ignoring --listen on onetime run"
end
unless options[:client]
Puppet.err "onetime is specified but there is no client"
exit(43)
return
end
daemon.set_signal_traps
begin
exitstatus = daemon.agent.run
rescue => detail
Puppet.log_exception(detail)
end
daemon.stop(:exit => false)
if not exitstatus
exit(1)
elsif options[:detailed_exitcodes] then
exit(exitstatus)
else
exit(0)
end
end
def main(daemon)
if Puppet[:listen]
setup_listen(daemon)
end
Puppet.notice "Starting Puppet client version #{Puppet.version}"
daemon.start
end
# Enable all of the most common test options.
def setup_test
Puppet.settings.handlearg("--ignorecache")
Puppet.settings.handlearg("--no-usecacheonfailure")
Puppet.settings.handlearg("--no-splay")
Puppet.settings.handlearg("--show_diff")
Puppet.settings.handlearg("--no-daemonize")
options[:verbose] = true
Puppet[:onetime] = true
options[:detailed_exitcodes] = true
end
def setup
setup_test if options[:test]
setup_logs
exit(Puppet.settings.print_configs ? 0 : 1) if Puppet.settings.print_configs?
if options[:fqdn]
Puppet[:certname] = options[:fqdn]
end
Puppet.settings.use :main, :agent, :ssl
# Always ignoreimport for agent. It really shouldn't even try to import,
# but this is just a temporary band-aid.
Puppet[:ignoreimport] = true
Puppet::Transaction::Report.indirection.terminus_class = :rest
# we want the last report to be persisted locally
Puppet::Transaction::Report.indirection.cache_class = :yaml
if Puppet[:catalog_cache_terminus]
Puppet::Resource::Catalog.indirection.cache_class = Puppet[:catalog_cache_terminus]
end
if options[:fingerprint]
# in fingerprint mode we just need
# access to the local files and we don't need a ca
Puppet::SSL::Host.ca_location = :none
else
Puppet::SSL::Host.ca_location = :remote
setup_agent
end
end
private
def enable_disable_client(agent)
if options[:enable]
agent.enable
elsif options[:disable]
agent.disable(options[:disable_message] || 'reason not specified')
end
exit(0)
end
def setup_listen(daemon)
Puppet.warning "Puppet --listen / kick is deprecated. See http://links.puppetlabs.com/puppet-kick-deprecation"
unless Puppet::FileSystem.exist?(Puppet[:rest_authconfig])
Puppet.err "Will not start without authorization file #{Puppet[:rest_authconfig]}"
exit(14)
end
require 'puppet/network/server'
# No REST handlers yet.
server = Puppet::Network::Server.new(Puppet[:bindaddress], Puppet[:puppetport])
daemon.server = server
end
def setup_agent
# We need to make the client either way, we just don't start it
# if --no-client is set.
require 'puppet/agent'
require 'puppet/configurer'
agent = Puppet::Agent.new(Puppet::Configurer, (not(Puppet[:onetime])))
enable_disable_client(agent) if options[:enable] or options[:disable]
@agent = agent if options[:client]
end
def daemonize_process_when(should_daemonize)
daemon = Puppet::Daemon.new(Puppet::Util::Pidlock.new(Puppet[:pidfile]))
daemon.argv = @argv
daemon.agent = @agent
daemon.daemonize if should_daemonize
daemon
end
def wait_for_certificates
host = Puppet::SSL::Host.new
waitforcert = options[:waitforcert] || (Puppet[:onetime] ? 0 : Puppet[:waitforcert])
host.wait_for_cert(waitforcert)
end
end
diff --git a/lib/puppet/application/apply.rb b/lib/puppet/application/apply.rb
index 22f29558b..f7f85fcc5 100644
--- a/lib/puppet/application/apply.rb
+++ b/lib/puppet/application/apply.rb
@@ -1,290 +1,299 @@
require 'puppet/application'
require 'puppet/configurer'
+require 'puppet/util/profiler/aggregate'
class Puppet::Application::Apply < Puppet::Application
option("--debug","-d")
option("--execute EXECUTE","-e") do |arg|
options[:code] = arg
end
option("--loadclasses","-L")
option("--test","-t")
option("--verbose","-v")
option("--use-nodes")
option("--detailed-exitcodes")
option("--write-catalog-summary")
option("--catalog catalog", "-c catalog") do |arg|
options[:catalog] = arg
end
option("--logdest LOGDEST", "-l") do |arg|
handle_logdest_arg(arg)
end
option("--parseonly") do |args|
puts "--parseonly has been removed. Please use 'puppet parser validate <manifest>'"
exit 1
end
def help
<<-'HELP'
puppet-apply(8) -- Apply Puppet manifests locally
========
SYNOPSIS
--------
Applies a standalone Puppet manifest to the local system.
USAGE
-----
puppet apply [-h|--help] [-V|--version] [-d|--debug] [-v|--verbose]
- [-e|--execute] [--detailed-exitcodes] [-l|--logdest <file>] [--noop]
+ [-e|--execute] [--detailed-exitcodes] [-L|--loadclasses]
+ [-l|--logdest syslog|eventlog|<FILE>|console] [--noop]
[--catalog <catalog>] [--write-catalog-summary] <file>
DESCRIPTION
-----------
This is the standalone puppet execution tool; use it to apply
individual manifests.
When provided with a modulepath, via command line or config file, puppet
apply can effectively mimic the catalog that would be served by puppet
master with access to the same modules, although there are some subtle
differences. When combined with scheduling and an automated system for
pushing manifests, this can be used to implement a serverless Puppet
site.
Most users should use 'puppet agent' and 'puppet master' for site-wide
manifests.
OPTIONS
-------
Note that any setting that's valid in the configuration
file is also a valid long argument. For example, 'tags' is a
valid setting, so you can specify '--tags <class>,<tag>'
as an argument.
See the configuration file documentation at
http://docs.puppetlabs.com/references/stable/configuration.html for the
full list of acceptable parameters. A commented list of all
configuration options can also be generated by running puppet with
'--genconfig'.
* --debug:
Enable full debugging.
* --detailed-exitcodes:
Provide transaction information via exit codes. If this is enabled, an exit
code of '2' means there were changes, an exit code of '4' means there were
failures during the transaction, and an exit code of '6' means there were both
changes and failures.
* --help:
Print this help message
* --loadclasses:
Load any stored classes. 'puppet agent' caches configured classes
(usually at /etc/puppet/classes.txt), and setting this option causes
all of those classes to be set in your puppet manifest.
* --logdest:
- Where to send messages. Choose between syslog, the console, and a log
- file. Defaults to sending messages to the console.
+ Where to send log messages. Choose between 'syslog' (the POSIX syslog
+ service), 'eventlog' (the Windows Event Log), 'console', or the path to a log
+ file. Defaults to 'console'.
* --noop:
Use 'noop' mode where Puppet runs in a no-op or dry-run mode. This
is useful for seeing what changes Puppet will make without actually
executing the changes.
* --execute:
Execute a specific piece of Puppet code
* --test:
Enable the most common options used for testing. These are 'verbose',
'detailed-exitcodes' and 'show_diff'.
* --verbose:
Print extra information.
* --catalog:
Apply a JSON catalog (such as one generated with 'puppet master --compile'). You can
either specify a JSON file or pipe in JSON from standard input.
* --write-catalog-summary:
After compiling the catalog, save the resource list and class list to files
named resources.txt and classes.txt in the node's state directory.
EXAMPLE
-------
$ puppet apply -l /tmp/manifest.log manifest.pp
$ puppet apply --modulepath=/root/dev/modules -e "include ntpd::server"
$ puppet apply --catalog catalog.json
AUTHOR
------
Luke Kanies
COPYRIGHT
---------
Copyright (c) 2011 Puppet Labs, LLC Licensed under the Apache 2.0 License
HELP
end
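The --detailed-exitcodes section above documents the 2/4/6 mapping. A minimal sketch, assuming a site.pp manifest in the current directory, of driving puppet apply from Ruby and decoding that exit status (the wrapper is hypothetical, not part of Puppet):

```ruby
# Run puppet apply with --detailed-exitcodes and decode the exit status
# using the mapping documented above: bit 2 = changes, bit 4 = failures.
manifest = 'site.pp'   # hypothetical manifest path
system('puppet', 'apply', '--detailed-exitcodes', manifest)
status = $? ? $?.exitstatus : 127   # 127 if puppet could not be spawned

puts 'changes were applied'       if (status & 2) != 0
puts 'some resources failed'      if (status & 4) != 0
puts 'no changes and no failures' if status == 0
```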
def app_defaults
super.merge({
:default_file_terminus => :file_server,
})
end
def run_command
if options[:catalog]
apply
else
main
end
end
def apply
if options[:catalog] == "-"
text = $stdin.read
else
text = ::File.read(options[:catalog])
end
catalog = read_catalog(text)
apply_catalog(catalog)
end
def main
# Set our code or file to use.
if options[:code] or command_line.args.length == 0
Puppet[:code] = options[:code] || STDIN.read
else
manifest = command_line.args.shift
raise "Could not find file #{manifest}" unless Puppet::FileSystem.exist?(manifest)
Puppet.warning("Only one file can be applied per run. Skipping #{command_line.args.join(', ')}") if command_line.args.size > 0
end
unless Puppet[:node_name_fact].empty?
# Collect our facts.
unless facts = Puppet::Node::Facts.indirection.find(Puppet[:node_name_value])
raise "Could not find facts for #{Puppet[:node_name_value]}"
end
Puppet[:node_name_value] = facts.values[Puppet[:node_name_fact]]
facts.name = Puppet[:node_name_value]
end
configured_environment = Puppet.lookup(:current_environment)
apply_environment = manifest ?
configured_environment.override_with(:manifest => manifest) :
configured_environment
- Puppet.override(:environments => Puppet::Environments::Static.new(apply_environment)) do
+ Puppet.override({:current_environment => apply_environment}, "For puppet apply") do
# Find our Node
unless node = Puppet::Node.indirection.find(Puppet[:node_name_value])
raise "Could not find node #{Puppet[:node_name_value]}"
end
# Merge in the facts.
node.merge(facts.values) if facts
# Allow users to load the classes that puppet agent creates.
if options[:loadclasses]
file = Puppet[:classfile]
if Puppet::FileSystem.exist?(file)
unless FileTest.readable?(file)
$stderr.puts "#{file} is not readable"
exit(63)
end
node.classes = ::File.read(file).split(/[\s\n]+/)
end
end
begin
# Compile our catalog
starttime = Time.now
catalog = Puppet::Resource::Catalog.indirection.find(node.name, :use_node => node)
# Translate it to a RAL catalog
catalog = catalog.to_ral
catalog.finalize
catalog.retrieval_duration = Time.now - starttime
if options[:write_catalog_summary]
catalog.write_class_file
catalog.write_resource_file
end
exit_status = apply_catalog(catalog)
if not exit_status
exit(1)
elsif options[:detailed_exitcodes] then
exit(exit_status)
else
exit(0)
end
rescue => detail
Puppet.log_exception(detail)
exit(1)
end
end
+
+ ensure
+ if @profiler
+ Puppet::Util::Profiler.remove_profiler(@profiler)
+ @profiler.shutdown
+ end
end
# Enable all of the most common test options.
def setup_test
Puppet.settings.handlearg("--show_diff")
options[:verbose] = true
options[:detailed_exitcodes] = true
end
def setup
setup_test if options[:test]
exit(Puppet.settings.print_configs ? 0 : 1) if Puppet.settings.print_configs?
Puppet::Util::Log.newdestination(:console) unless options[:setdest]
Signal.trap(:INT) do
$stderr.puts "Exiting"
exit(1)
end
# we want the last report to be persisted locally
Puppet::Transaction::Report.indirection.cache_class = :yaml
set_log_level
if Puppet[:profile]
- Puppet::Util::Profiler.current = Puppet::Util::Profiler::WallClock.new(Puppet.method(:debug), "apply")
+ @profiler = Puppet::Util::Profiler.add_profiler(Puppet::Util::Profiler::Aggregate.new(Puppet.method(:debug), "apply"))
end
end
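setup above now registers an Aggregate profiler when Puppet[:profile] is set, and the ensure clause added to main earlier in this hunk removes and shuts it down. A minimal sketch of the same add/remove pattern around an arbitrary block of work, using only the calls visible in this diff:

```ruby
# Wrap a block of work in an Aggregate profiler, then always detach it.
# Mirrors the add_profiler / remove_profiler / shutdown calls shown above.
require 'puppet'
require 'puppet/util/profiler/aggregate'

profiler = Puppet::Util::Profiler.add_profiler(
  Puppet::Util::Profiler::Aggregate.new(Puppet.method(:debug), 'example')
)
begin
  # ... profiled work goes here ...
ensure
  Puppet::Util::Profiler.remove_profiler(profiler)
  profiler.shutdown
end
```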
private
def read_catalog(text)
begin
catalog = Puppet::Resource::Catalog.convert_from(Puppet::Resource::Catalog.default_format,text)
catalog = Puppet::Resource::Catalog.pson_create(catalog) unless catalog.is_a?(Puppet::Resource::Catalog)
rescue => detail
raise Puppet::Error, "Could not deserialize catalog from pson: #{detail}", detail.backtrace
end
catalog.to_ral
end
def apply_catalog(catalog)
configurer = Puppet::Configurer.new
configurer.run(:catalog => catalog, :pluginsync => false)
end
end
diff --git a/lib/puppet/application/doc.rb b/lib/puppet/application/doc.rb
index 866619231..e45d4d0e0 100644
--- a/lib/puppet/application/doc.rb
+++ b/lib/puppet/application/doc.rb
@@ -1,280 +1,273 @@
require 'puppet/application'
class Puppet::Application::Doc < Puppet::Application
run_mode :master
attr_accessor :unknown_args, :manifest
def preinit
{:references => [], :mode => :text, :format => :to_markdown }.each do |name,value|
options[name] = value
end
@unknown_args = []
@manifest = false
end
option("--all","-a")
option("--outputdir OUTPUTDIR","-o")
option("--verbose","-v")
option("--debug","-d")
option("--charset CHARSET")
option("--format FORMAT", "-f") do |arg|
method = "to_#{arg}"
require 'puppet/util/reference'
if Puppet::Util::Reference.method_defined?(method)
options[:format] = method
else
raise "Invalid output format #{arg}"
end
end
option("--mode MODE", "-m") do |arg|
require 'puppet/util/reference'
if Puppet::Util::Reference.modes.include?(arg) or arg.intern==:rdoc
options[:mode] = arg.intern
else
raise "Invalid output mode #{arg}"
end
end
option("--list", "-l") do |arg|
require 'puppet/util/reference'
puts Puppet::Util::Reference.references.collect { |r| Puppet::Util::Reference.reference(r).doc }.join("\n")
exit(0)
end
option("--reference REFERENCE", "-r") do |arg|
options[:references] << arg.intern
end
def help
<<-'HELP'
puppet-doc(8) -- Generate Puppet documentation and references
========
SYNOPSIS
--------
Generates a reference for all Puppet types. Largely meant for internal
Puppet Labs use.
-WARNING: RDoc support is only available under Ruby 1.8.7 and earlier.
-
USAGE
-----
-puppet doc [-a|--all] [-h|--help] [-o|--outputdir <rdoc-outputdir>]
+puppet doc [-a|--all] [-h|--help] [-l|--list] [-o|--outputdir <rdoc-outputdir>]
[-m|--mode text|pdf|rdoc] [-r|--reference <reference-name>]
[--charset <charset>] [<manifest-file>]
DESCRIPTION
-----------
If mode is not 'rdoc', then this command generates a Markdown document
describing all installed Puppet types or all allowable arguments to
puppet executables. It is largely meant for internal use and is used to
generate the reference document available on the Puppet Labs web site.
In 'rdoc' mode, this command generates an html RDoc hierarchy describing
the manifests that are in 'manifestdir' and 'modulepath' configuration
directives. The generated documentation directory is doc by default but
can be changed with the 'outputdir' option.
If the command is run with the name of a manifest file as an argument,
puppet doc will output a single manifest's documentation on stdout.
-WARNING: RDoc support is only available under Ruby 1.8.7 and earlier.
-The internal API used to support manifest documentation has changed
-radically in newer versions, and support is not yet available for
-using those versions of RDoc.
-
OPTIONS
-------
* --all:
Output the docs for all of the reference types. In 'rdoc' mode, this also
outputs documentation for all resources.
* --help:
Print this help message
* --outputdir:
Used only in 'rdoc' mode. The directory to which the rdoc output should
be written.
* --mode:
Determine the output mode. Valid modes are 'text', 'pdf' and 'rdoc'. The 'pdf'
mode creates PDF formatted files in the /tmp directory. The default mode is
'text'.
* --reference:
Build a particular reference. Get a list of references by running
'puppet doc --list'.
* --charset:
Used only in 'rdoc' mode. It sets the charset used in the html files produced.
* --manifestdir:
Used only in 'rdoc' mode. The directory to scan for stand-alone manifests.
If not supplied, puppet doc will use the manifestdir from puppet.conf.
* --modulepath:
Used only in 'rdoc' mode. The directory or directories to scan for modules.
If not supplied, puppet doc will use the modulepath from puppet.conf.
* --environment:
Used only in 'rdoc' mode. The configuration environment from which
to read the modulepath and manifestdir settings, when reading said settings
from puppet.conf.
EXAMPLE
-------
$ puppet doc -r type > /tmp/type_reference.markdown
or
$ puppet doc --outputdir /tmp/rdoc --mode rdoc /path/to/manifests
or
$ puppet doc /etc/puppet/manifests/site.pp
or
$ puppet doc -m pdf -r configuration
AUTHOR
------
Luke Kanies
COPYRIGHT
---------
Copyright (c) 2011 Puppet Labs, LLC Licensed under the Apache 2.0 License
HELP
end
def handle_unknown( opt, arg )
@unknown_args << {:opt => opt, :arg => arg }
true
end
def run_command
- return[:rdoc].include?(options[:mode]) ? send(options[:mode]) : other
+ return [:rdoc].include?(options[:mode]) ? send(options[:mode]) : other
end
def rdoc
exit_code = 0
files = []
unless @manifest
- env = Puppet.lookup(:environments).get(Puppet[:environment])
+ env = Puppet.lookup(:current_environment)
files += env.modulepath
- files << ::File.dirname(env.manifest)
+ files << ::File.dirname(env.manifest) if env.manifest != Puppet::Node::Environment::NO_MANIFEST
end
files += command_line.args
Puppet.info "scanning: #{files.inspect}"
Puppet.settings[:document_all] = options[:all] || false
begin
require 'puppet/util/rdoc'
if @manifest
Puppet::Util::RDoc.manifestdoc(files)
else
options[:outputdir] = "doc" unless options[:outputdir]
Puppet::Util::RDoc.rdoc(options[:outputdir], files, options[:charset])
end
rescue => detail
Puppet.log_exception(detail, "Could not generate documentation: #{detail}")
exit_code = 1
end
exit exit_code
end
def other
text = ""
with_contents = options[:references].length <= 1
exit_code = 0
require 'puppet/util/reference'
options[:references].sort { |a,b| a.to_s <=> b.to_s }.each do |name|
raise "Could not find reference #{name}" unless section = Puppet::Util::Reference.reference(name)
begin
# Add the per-section text, but with no ToC
text += section.send(options[:format], with_contents)
rescue => detail
Puppet.log_exception(detail, "Could not generate reference #{name}: #{detail}")
exit_code = 1
next
end
end
text += Puppet::Util::Reference.footer unless with_contents # We've only got one reference
if options[:mode] == :pdf
Puppet::Util::Reference.pdf(text)
else
puts text
end
exit exit_code
end
def setup
# sole manifest documentation
if command_line.args.size > 0
options[:mode] = :rdoc
@manifest = true
end
if options[:mode] == :rdoc
setup_rdoc
else
setup_reference
end
setup_logging
end
def setup_reference
if options[:all]
# Don't add dynamic references to the "all" list.
require 'puppet/util/reference'
options[:references] = Puppet::Util::Reference.references.reject do |ref|
Puppet::Util::Reference.reference(ref).dynamic?
end
end
options[:references] << :type if options[:references].empty?
end
def setup_rdoc(dummy_argument=:work_arround_for_ruby_GC_bug)
# consume the unknown options
# and feed them as settings
if @unknown_args.size > 0
@unknown_args.each do |option|
# force absolute path for modulepath when passed on commandline
if option[:opt]=="--modulepath" or option[:opt] == "--manifestdir"
option[:arg] = option[:arg].split(::File::PATH_SEPARATOR).collect { |p| ::File.expand_path(p) }.join(::File::PATH_SEPARATOR)
end
Puppet.settings.handlearg(option[:opt], option[:arg])
end
end
end
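setup_rdoc above forces --modulepath and --manifestdir entries to absolute paths before handing them to Puppet.settings. A standalone sketch of just that normalization step (the input path list is illustrative):

```ruby
# Expand each entry of a PATH_SEPARATOR-delimited module path to an
# absolute path, as setup_rdoc does for --modulepath and --manifestdir.
raw_modulepath = ['modules', '../shared/modules'].join(File::PATH_SEPARATOR)

absolute_modulepath = raw_modulepath.
  split(File::PATH_SEPARATOR).
  map { |p| File.expand_path(p) }.
  join(File::PATH_SEPARATOR)

puts absolute_modulepath
```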
def setup_logging
# Handle the logging settings.
if options[:debug]
Puppet::Util::Log.level = :debug
elsif options[:verbose]
Puppet::Util::Log.level = :info
else
Puppet::Util::Log.level = :warning
end
Puppet::Util::Log.newdestination(:console)
end
end
diff --git a/lib/puppet/application/master.rb b/lib/puppet/application/master.rb
index 0c6a780a7..a238ff4b5 100644
--- a/lib/puppet/application/master.rb
+++ b/lib/puppet/application/master.rb
@@ -1,302 +1,304 @@
require 'puppet/application'
require 'puppet/daemon'
require 'puppet/util/pidlock'
class Puppet::Application::Master < Puppet::Application
run_mode :master
option("--debug", "-d")
option("--verbose", "-v")
# internal option, only to be used by ext/rack/config.ru
option("--rack")
option("--compile host", "-c host") do |arg|
options[:node] = arg
end
option("--logdest DEST", "-l DEST") do |arg|
handle_logdest_arg(arg)
end
option("--parseonly") do |args|
puts "--parseonly has been removed. Please use 'puppet parser validate <manifest>'"
exit 1
end
def help
<<-'HELP'
puppet-master(8) -- The puppet master daemon
========
SYNOPSIS
--------
The central puppet server. Functions as a certificate authority by
default.
USAGE
-----
puppet master [-D|--daemonize|--no-daemonize] [-d|--debug] [-h|--help]
- [-l|--logdest <file>|console|syslog] [-v|--verbose] [-V|--version]
- [--compile <node-name>]
+ [-l|--logdest syslog|<FILE>|console] [-v|--verbose] [-V|--version]
+ [--compile <NODE-NAME>]
DESCRIPTION
-----------
This command starts an instance of puppet master, running as a daemon
and using Ruby's built-in Webrick webserver. Puppet master can also be
managed by other application servers; when this is the case, this
executable is not used.
OPTIONS
-------
Note that any Puppet setting that's valid in the configuration file is also a
valid long argument. For example, 'server' is a valid setting, so you can
specify '--server <servername>' as an argument. Boolean settings translate into
'--setting' and '--no-setting' pairs.
See the configuration file documentation at
http://docs.puppetlabs.com/references/stable/configuration.html for the
full list of acceptable settings. A commented list of all settings can also be
generated by running puppet master with '--genconfig'.
* --daemonize:
Send the process into the background. This is the default.
(This is a Puppet setting, and can go in puppet.conf. Note the special 'no-'
prefix for boolean settings on the command line.)
* --no-daemonize:
Do not send the process into the background.
(This is a Puppet setting, and can go in puppet.conf. Note the special 'no-'
prefix for boolean settings on the command line.)
* --debug:
Enable full debugging.
* --help:
Print this help message.
* --logdest:
- Where to send messages. Choose between syslog, the console, and a log
- file. Defaults to sending messages to syslog, or the console if
- debugging or verbosity is enabled.
+ Where to send log messages. Choose between 'syslog' (the POSIX syslog
+ service), 'console', or the path to a log file. If debugging or verbosity is
+ enabled, this defaults to 'console'. Otherwise, it defaults to 'syslog'.
* --masterport:
The port on which to listen for traffic.
(This is a Puppet setting, and can go in puppet.conf.)
* --verbose:
Enable verbosity.
* --version:
Print the puppet version number and exit.
* --compile:
Compile a catalog and output it in JSON from the puppet master. Uses
facts contained in the $vardir/yaml/ directory to compile the catalog.
EXAMPLE
-------
puppet master
DIAGNOSTICS
-----------
When running as a standalone daemon, puppet master accepts the
following signals:
* SIGHUP:
Restart the puppet master server.
* SIGINT and SIGTERM:
Shut down the puppet master server.
* SIGUSR2:
Close file descriptors for log files and reopen them. Used with logrotate.
AUTHOR
------
Luke Kanies
COPYRIGHT
---------
Copyright (c) 2012 Puppet Labs, LLC Licensed under the Apache 2.0 License
HELP
end
# Sets up the 'node_cache_terminus' default to use the Write Only Yaml terminus :write_only_yaml.
# If this is not wanted, the setting `node_cache_terminus` should be set to nil.
# @see Puppet::Node::WriteOnlyYaml
# @see #setup_node_cache
# @see puppet issue 16753
#
def app_defaults
super.merge({
:node_cache_terminus => :write_only_yaml,
:facts_terminus => 'yaml'
})
end
def preinit
Signal.trap(:INT) do
$stderr.puts "Canceling startup"
exit(0)
end
# save ARGV to protect us from it being smashed later by something
@argv = ARGV.dup
end
def run_command
if options[:node]
compile
else
main
end
end
def compile
- Puppet::Util::Log.newdestination :console
begin
unless catalog = Puppet::Resource::Catalog.indirection.find(options[:node])
raise "Could not compile catalog for #{options[:node]}"
end
puts PSON::pretty_generate(catalog.to_resource, :allow_nan => true, :max_nesting => false)
rescue => detail
Puppet.log_exception(detail, "Failed to compile catalog for node #{options[:node]}: #{detail}")
exit(30)
end
exit(0)
end
def main
require 'etc'
# Make sure we've got a localhost ssl cert
Puppet::SSL::Host.localhost
# And now configure our server to *only* hit the CA for data, because that's
# all it will have write access to.
Puppet::SSL::Host.ca_location = :only if Puppet::SSL::CertificateAuthority.ca?
if Puppet.features.root?
begin
Puppet::Util.chuser
rescue => detail
Puppet.log_exception(detail, "Could not change user to #{Puppet[:user]}: #{detail}")
exit(39)
end
end
if options[:rack]
start_rack_master
else
start_webrick_master
end
end
def setup_logs
- # Handle the logging settings.
- if options[:debug] or options[:verbose]
- if options[:debug]
- Puppet::Util::Log.level = :debug
- else
- Puppet::Util::Log.level = :info
- end
+ set_log_level
- unless Puppet[:daemonize] or options[:rack]
+ if !options[:setdest]
+ if options[:node]
+ # We are compiling a catalog for a single node with '--compile' and logging
+ # has not already been configured via '--logdest' so log to the console.
+ Puppet::Util::Log.newdestination(:console)
+ elsif !(Puppet[:daemonize] or options[:rack])
+ # We are running a webrick master which has been explicitly foregrounded
+ # and '--logdest' has not been passed, assume users want to see logging
+ # and log to the console.
Puppet::Util::Log.newdestination(:console)
- options[:setdest] = true
+ else
+ # No explicit log destination has been given with '--logdest' and we're
+ # either a daemonized webrick master or running under rack, log to syslog.
+ Puppet::Util::Log.newdestination(:syslog)
end
end
-
- Puppet::Util::Log.newdestination(:syslog) unless options[:setdest]
end
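The reworked setup_logs above chooses a default log destination only when --logdest was not given: the console for --compile runs and for a foregrounded WEBrick master, syslog otherwise. A small hypothetical helper (not Puppet API) restating that decision table:

```ruby
# Restates the decision implemented in setup_logs above.
def default_master_logdest(explicit_dest, compiling, daemonize, rack)
  return nil      if explicit_dest         # --logdest was given; honour it
  return :console if compiling             # puppet master --compile <node>
  return :console unless daemonize || rack # foregrounded WEBrick master
  :syslog                                  # daemonized or rack-hosted master
end

p default_master_logdest(false, false, true, false)   # => :syslog
p default_master_logdest(false, true,  true, false)   # => :console
```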
def setup_terminuses
require 'puppet/file_serving/content'
require 'puppet/file_serving/metadata'
Puppet::FileServing::Content.indirection.terminus_class = :file_server
Puppet::FileServing::Metadata.indirection.terminus_class = :file_server
Puppet::FileBucket::File.indirection.terminus_class = :file
end
def setup_ssl
# Configure all of the SSL stuff.
if Puppet::SSL::CertificateAuthority.ca?
Puppet::SSL::Host.ca_location = :local
Puppet.settings.use :ca
Puppet::SSL::CertificateAuthority.instance
else
Puppet::SSL::Host.ca_location = :none
end
end
# Sets up a special node cache "write only yaml" that collects and stores node data in yaml
# but never finds or reads anything (this since a real cache causes stale data to be served
# in circumstances when the cache can not be cleared).
# @see puppet issue 16753
# @see Puppet::Node::WriteOnlyYaml
# @return [void]
def setup_node_cache
Puppet::Node.indirection.cache_class = Puppet[:node_cache_terminus]
end
def setup
raise Puppet::Error.new("Puppet master is not supported on Microsoft Windows") if Puppet.features.microsoft_windows?
setup_logs
exit(Puppet.settings.print_configs ? 0 : 1) if Puppet.settings.print_configs?
Puppet.settings.use :main, :master, :ssl, :metrics
setup_terminuses
setup_node_cache
setup_ssl
end
private
# Start a master that will be using WeBrick.
#
# This method will block until the master exits.
def start_webrick_master
require 'puppet/network/server'
daemon = Puppet::Daemon.new(Puppet::Util::Pidlock.new(Puppet[:pidfile]))
daemon.argv = @argv
daemon.server = Puppet::Network::Server.new(Puppet[:bindaddress], Puppet[:masterport])
daemon.daemonize if Puppet[:daemonize]
announce_start_of_master
daemon.start
end
# Start a master that will be used for a Rack container.
#
# This method immediately returns the Rack handler that must be returned to
# the calling Rack container
def start_rack_master
require 'puppet/network/http/rack'
announce_start_of_master
return Puppet::Network::HTTP::Rack.new()
end
def announce_start_of_master
Puppet.notice "Starting Puppet master version #{Puppet.version}"
end
end
diff --git a/lib/puppet/application/queue.rb b/lib/puppet/application/queue.rb
index 1efca9be2..33155fc7f 100644
--- a/lib/puppet/application/queue.rb
+++ b/lib/puppet/application/queue.rb
@@ -1,161 +1,161 @@
require 'puppet/application'
require 'puppet/util'
require 'puppet/daemon'
require 'puppet/util/pidlock'
class Puppet::Application::Queue < Puppet::Application
attr_accessor :daemon
def app_defaults()
super.merge( :pidfile => "$rundir/queue.pid" )
end
def preinit
@argv = ARGV.dup
# Do an initial trap, so that cancels don't get a stack trace.
# This exits with exit code 1
Signal.trap(:INT) do
$stderr.puts "Caught SIGINT; shutting down"
exit(1)
end
# This is a normal shutdown, so code 0
Signal.trap(:TERM) do
$stderr.puts "Caught SIGTERM; shutting down"
exit(0)
end
{
:verbose => false,
:debug => false
}.each do |opt,val|
options[opt] = val
end
end
option("--debug","-d")
option("--verbose","-v")
def help
<<-HELP
puppet-queue(8) -- Deprecated queuing daemon for asynchronous storeconfigs
========
SYNOPSIS
--------
Retrieves serialized storeconfigs records from a queue and processes
them in order. THIS FEATURE IS DEPRECATED; use PuppetDB instead.
USAGE
-----
-puppet queue [-d|--debug] [-v|--verbose]
+puppet queue [-d|--debug] [--help] [-v|--verbose] [--version]
DESCRIPTION
-----------
This application runs as a daemon and processes storeconfigs data,
retrieving the data from a stomp server message queue and writing it to
a database. It was once necessary as a workaround for the poor performance
of ActiveRecord-based storeconfigs, but has been supplanted by the PuppetDB
service, which gives better performance with less complexity.
For more information, see the PuppetDB documentation at
http://docs.puppetlabs.com/puppetdb/latest
OPTIONS
-------
Note that any setting that's valid in the configuration
file is also a valid long argument. For example, 'server' is a valid
setting, so you can specify '--server <servername>' as
an argument.
See the configuration file documentation at
http://docs.puppetlabs.com/references/stable/configuration.html for the
full list of acceptable parameters. A commented list of all
configuration options can also be generated by running puppet queue with
'--genconfig'.
* --debug:
Enable full debugging.
* --help:
Print this help message
* --verbose:
Turn on verbose reporting.
* --version:
Print the puppet version number and exit.
EXAMPLE
-------
$ puppet queue
AUTHOR
------
Luke Kanies
COPYRIGHT
---------
Copyright (c) 2011 Puppet Labs, LLC Licensed under the Apache 2.0 License
HELP
end
option("--logdest DEST", "-l DEST") do |arg|
handle_logdest_arg(arg)
end
def main
require 'puppet/indirector/catalog/queue' # provides Puppet::Indirector::Queue.subscribe
Puppet.notice "Starting puppet queue #{Puppet.version}"
Puppet::Resource::Catalog::Queue.subscribe do |catalog|
# Once you have a Puppet::Resource::Catalog instance, passing it to save should suffice
# to put it through to the database via its active_record indirector (which is determined
# by the terminus_class = :active_record setting above)
Puppet::Util.benchmark(:notice, "Processing queued catalog for #{catalog.name}") do
begin
Puppet::Resource::Catalog.indirection.save(catalog)
rescue => detail
Puppet.log_exception(detail, "Could not save queued catalog for #{catalog.name}: #{detail}")
end
end
end
Thread.list.each { |thread| thread.join }
end
def setup
Puppet.warning "Puppet queue is deprecated. See http://links.puppetlabs.com/puppet-queue-deprecation"
unless Puppet.features.stomp?
raise ArgumentError, "Could not load the 'stomp' library, which must be present for queueing to work. You must install the required library."
end
setup_logs
exit(Puppet.settings.print_configs ? 0 : 1) if Puppet.settings.print_configs?
require 'puppet/resource/catalog'
Puppet::Resource::Catalog.indirection.terminus_class = :store_configs
daemon = Puppet::Daemon.new(Puppet::Util::Pidlock.new(Puppet[:pidfile]))
daemon.argv = @argv
daemon.daemonize if Puppet[:daemonize]
# We want to make sure that we don't have a cache
# class set up, because if storeconfigs is enabled,
# we'll get a loop of continually caching the catalog
# for storage again.
Puppet::Resource::Catalog.indirection.cache_class = nil
end
end
diff --git a/lib/puppet/application/resource.rb b/lib/puppet/application/resource.rb
index c03181671..4a3a4491a 100644
--- a/lib/puppet/application/resource.rb
+++ b/lib/puppet/application/resource.rb
@@ -1,229 +1,228 @@
require 'puppet/application'
class Puppet::Application::Resource < Puppet::Application
attr_accessor :host, :extra_params
def preinit
@extra_params = []
- Facter.loadfacts
end
option("--debug","-d")
option("--verbose","-v")
option("--edit","-e")
option("--host HOST","-H") do |arg|
Puppet.warning("Accessing resources on the network is deprecated. See http://links.puppetlabs.com/deprecate-networked-resource")
@host = arg
end
option("--types", "-t") do |arg|
types = []
Puppet::Type.loadall
Puppet::Type.eachtype do |t|
next if t.name == :component
types << t.name.to_s
end
puts types.sort
exit
end
option("--param PARAM", "-p") do |arg|
@extra_params << arg.to_sym
end
def help
<<-'HELP'
puppet-resource(8) -- The resource abstraction layer shell
========
SYNOPSIS
--------
Uses the Puppet RAL to directly interact with the system.
USAGE
-----
puppet resource [-h|--help] [-d|--debug] [-v|--verbose] [-e|--edit]
[-H|--host <host>] [-p|--param <parameter>] [-t|--types] <type>
[<name>] [<attribute>=<value> ...]
DESCRIPTION
-----------
This command provides simple facilities for converting current system
state into Puppet code, along with some ability to modify the current
state using Puppet's RAL.
By default, you must at least provide a type to list, in which case
puppet resource will tell you everything it knows about all resources of
that type. You can optionally specify an instance name, and puppet
resource will only describe that single instance.
If given a type, a name, and a series of <attribute>=<value> pairs,
puppet resource will modify the state of the specified resource.
Alternately, if given a type, a name, and the '--edit' flag, puppet
resource will write its output to a file, open that file in an editor,
and then apply the saved file as a Puppet transaction.
OPTIONS
-------
Note that any setting that's valid in the configuration
file is also a valid long argument. For example, 'ssldir' is a valid
setting, so you can specify '--ssldir <directory>' as an
argument.
See the configuration file documentation at
http://docs.puppetlabs.com/references/stable/configuration.html for the
full list of acceptable parameters. A commented list of all
configuration options can also be generated by running puppet with
'--genconfig'.
* --debug:
Enable full debugging.
* --edit:
Write the results of the query to a file, open the file in an editor,
and read the file back in as an executable Puppet manifest.
* --host:
When specified, connect to the resource server on the named host
and retrieve the list of resources of the type specified.
* --help:
Print this help message.
* --param:
Add more parameters to be output from queries.
* --types:
List all available types.
* --verbose:
Print extra information.
EXAMPLE
-------
This example uses `puppet resource` to return a Puppet configuration for
the user `luke`:
$ puppet resource user luke
user { 'luke':
home => '/home/luke',
uid => '100',
ensure => 'present',
comment => 'Luke Kanies,,,',
gid => '1000',
shell => '/bin/bash',
groups => ['sysadmin','audio','video','puppet']
}
AUTHOR
------
Luke Kanies
COPYRIGHT
---------
Copyright (c) 2011 Puppet Labs, LLC Licensed under the Apache 2.0 License
HELP
end
def main
type, name, params = parse_args(command_line.args)
raise "You cannot edit a remote host" if options[:edit] and @host
resources = find_or_save_resources(type, name, params)
text = resources.
map { |resource| resource.prune_parameters(:parameters_to_include => @extra_params).to_manifest }.
join("\n")
options[:edit] ?
handle_editing(text) :
(puts text)
end
def setup
Puppet::Util::Log.newdestination(:console)
set_log_level
end
private
def remote_key(type, name)
Puppet::Resource.indirection.terminus_class = :rest
port = Puppet[:puppetport]
["https://#{@host}:#{port}", "production", "resources", type, name].join('/')
end
def local_key(type, name)
[type, name].join('/')
end
def handle_editing(text)
require 'tempfile'
# Prefer the current directory, which is more likely to be secure
# and, in the case of interactive use, accessible to the user.
tmpfile = Tempfile.new('x2puppet', Dir.pwd)
begin
# sync write, so nothing buffers before we invoke the editor.
tmpfile.sync = true
tmpfile.puts text
# edit the content
system(ENV["EDITOR"] || 'vi', tmpfile.path)
# ...and, now, pass that file to puppet to apply. Because
# many editors rename or replace the original file we need to
# feed the pathname, not the file content itself, to puppet.
system('puppet apply -v ' + tmpfile.path)
ensure
# The temporary file will be safely removed.
tmpfile.close(true)
end
end
def parse_args(args)
type = args.shift or raise "You must specify the type to display"
Puppet::Type.type(type) or raise "Could not find type #{type}"
name = args.shift
params = {}
args.each do |setting|
if setting =~ /^(\w+)=(.+)$/
params[$1] = $2
else
raise "Invalid parameter setting #{setting}"
end
end
[type, name, params]
end
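parse_args above accepts trailing arguments of the form <attribute>=<value>, as described in the USAGE section. A standalone sketch of just that parsing step (the helper name is illustrative):

```ruby
# Turn "home=/home/luke shell=/bin/bash" style arguments into a hash,
# rejecting anything that is not of the form <word>=<value>, using the
# same regex as parse_args above.
def parse_attribute_args(args)
  args.inject({}) do |params, setting|
    raise "Invalid parameter setting #{setting}" unless setting =~ /^(\w+)=(.+)$/
    params.merge($1 => $2)
  end
end

p parse_attribute_args(%w[home=/home/luke shell=/bin/bash])
# => {"home"=>"/home/luke", "shell"=>"/bin/bash"}
```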
def find_or_save_resources(type, name, params)
key = @host ? remote_key(type, name) : local_key(type, name)
if name
if params.empty?
[ Puppet::Resource.indirection.find( key ) ]
else
resource = Puppet::Resource.new( type, name, :parameters => params )
# save returns [resource that was saved, transaction log from applying the resource]
save_result = Puppet::Resource.indirection.save(resource, key)
[ save_result.first ]
end
else
if type == "file"
raise "Listing all file instances is not supported. Please specify a file or directory, e.g. puppet resource file /etc"
end
Puppet::Resource.indirection.search( key, {} )
end
end
end
diff --git a/lib/puppet/configurer.rb b/lib/puppet/configurer.rb
index 4638f611c..bfd4b518b 100644
--- a/lib/puppet/configurer.rb
+++ b/lib/puppet/configurer.rb
@@ -1,266 +1,303 @@
# The client for interacting with the puppetmaster config server.
require 'sync'
require 'timeout'
require 'puppet/network/http_pool'
require 'puppet/util'
require 'securerandom'
class Puppet::Configurer
require 'puppet/configurer/fact_handler'
require 'puppet/configurer/plugin_handler'
+ require 'puppet/configurer/downloader_factory'
include Puppet::Configurer::FactHandler
- include Puppet::Configurer::PluginHandler
# For benchmarking
include Puppet::Util
attr_reader :compile_time, :environment
# Provide more helpful strings to the logging that the Agent does
def self.to_s
"Puppet configuration client"
end
def execute_postrun_command
execute_from_setting(:postrun_command)
end
def execute_prerun_command
execute_from_setting(:prerun_command)
end
# Initialize and load storage
def init_storage
Puppet::Util::Storage.load
@compile_time ||= Puppet::Util::Storage.cache(:configuration)[:compile_time]
rescue => detail
Puppet.log_exception(detail, "Removing corrupt state file #{Puppet[:statefile]}: #{detail}")
begin
Puppet::FileSystem.unlink(Puppet[:statefile])
retry
rescue => detail
raise Puppet::Error.new("Cannot remove #{Puppet[:statefile]}: #{detail}", detail)
end
end
- def initialize
+ def initialize(factory = Puppet::Configurer::DownloaderFactory.new)
Puppet.settings.use(:main, :ssl, :agent)
@running = false
@splayed = false
@environment = Puppet[:environment]
@transaction_uuid = SecureRandom.uuid
+ @handler = Puppet::Configurer::PluginHandler.new(factory)
end
# Get the remote catalog, yo. Returns nil if no catalog can be found.
def retrieve_catalog(query_options)
query_options ||= {}
# First try it with no cache, then with the cache.
unless (Puppet[:use_cached_catalog] and result = retrieve_catalog_from_cache(query_options)) or result = retrieve_new_catalog(query_options)
if ! Puppet[:usecacheonfailure]
Puppet.warning "Not using cache on failed catalog"
return nil
end
result = retrieve_catalog_from_cache(query_options)
end
return nil unless result
convert_catalog(result, @duration)
end
# Convert a plain resource catalog into our full host catalog.
def convert_catalog(result, duration)
catalog = result.to_ral
catalog.finalize
catalog.retrieval_duration = duration
catalog.write_class_file
catalog.write_resource_file
catalog
end
def get_facts(options)
if options[:pluginsync]
remote_environment_for_plugins = Puppet::Node::Environment.remote(@environment)
download_plugins(remote_environment_for_plugins)
end
if Puppet::Resource::Catalog.indirection.terminus_class == :rest
# This is a bit complicated. We need the serialized and escaped facts,
# and we need to know which format they're encoded in. Thus, we
# get a hash with both of these pieces of information.
#
# facts_for_uploading may set Puppet[:node_name_value] as a side effect
return facts_for_uploading
end
end
def prepare_and_retrieve_catalog(options, query_options)
# set report host name now that we have the fact
options[:report].host = Puppet[:node_name_value]
unless catalog = (options.delete(:catalog) || retrieve_catalog(query_options))
Puppet.err "Could not retrieve catalog; skipping run"
return
end
catalog
end
# Retrieve (optionally) and apply a catalog. If a catalog is passed in
# the options, then apply that one, otherwise retrieve it.
def apply_catalog(catalog, options)
report = options[:report]
report.configuration_version = catalog.version
benchmark(:notice, "Finished catalog run") do
catalog.apply(options)
end
report.finalize_report
report
end
# The code that actually runs the catalog.
# This just passes any options on to the catalog,
# which accepts :tags and :ignoreschedules.
def run(options = {})
+ pool = Puppet::Network::HTTP::Pool.new(Puppet[:http_keepalive_timeout])
+ begin
+ Puppet.override(:http_pool => pool) do
+ run_internal(options)
+ end
+ ensure
+ pool.close
+ end
+ end
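run above now creates one keepalive HTTP connection pool per run, publishes it via Puppet.override for the duration of run_internal, and closes it in the ensure clause. A minimal sketch of that pattern around an arbitrary block, using the same calls (and assuming the pool class is already loaded, as it is in this file):

```ruby
# Create a keepalive pool, expose it through Puppet.override for the
# duration of the block, and always close it afterwards (same calls as
# Configurer#run above).
require 'puppet'

pool = Puppet::Network::HTTP::Pool.new(Puppet[:http_keepalive_timeout])
begin
  Puppet.override(:http_pool => pool) do
    # ... anything issuing HTTP requests during this run ...
  end
ensure
  pool.close
end
```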
+
+ def run_internal(options)
# We create the report pre-populated with default settings for
# environment and transaction_uuid very early; this is to ensure
# they are sent regardless of any catalog compilation failures or
# exceptions.
options[:report] ||= Puppet::Transaction::Report.new("apply", nil, @environment, @transaction_uuid)
report = options[:report]
init_storage
Puppet::Util::Log.newdestination(report)
begin
unless Puppet[:node_name_fact].empty?
query_options = get_facts(options)
end
# We only need to find out the environment to run in if we don't already have a catalog
unless options[:catalog]
begin
if node = Puppet::Node.indirection.find(Puppet[:node_name_value],
:environment => @environment, :ignore_cache => true, :transaction_uuid => @transaction_uuid,
:fail_on_404 => true)
+
+ # If we have deserialized a node from a rest call, we want to set
+ # an environment instance as a simple 'remote' environment reference.
+ if !node.has_environment_instance? && node.environment_name
+ node.environment = Puppet::Node::Environment.remote(node.environment_name)
+ end
+
if node.environment.to_s != @environment
Puppet.warning "Local environment: \"#{@environment}\" doesn't match server specified node environment \"#{node.environment}\", switching agent to \"#{node.environment}\"."
@environment = node.environment.to_s
report.environment = @environment
query_options = nil
end
end
rescue SystemExit,NoMemoryError
raise
rescue Exception => detail
Puppet.warning("Unable to fetch my node definition, but the agent run will continue:")
Puppet.warning(detail)
end
end
+ current_environment = Puppet.lookup(:current_environment)
+ local_node_environment =
+ if current_environment.name == @environment.intern
+ current_environment
+ else
+ Puppet::Node::Environment.create(@environment,
+ current_environment.modulepath,
+ current_environment.manifest,
+ current_environment.config_version)
+ end
+ Puppet.push_context({:current_environment => local_node_environment}, "Local node environment for configurer transaction")
+
query_options = get_facts(options) unless query_options
# get_facts returns nil during puppet apply
query_options ||= {}
query_options[:transaction_uuid] = @transaction_uuid
unless catalog = prepare_and_retrieve_catalog(options, query_options)
return nil
end
# Here we set the local environment based on what we get from the
# catalog. Since a change in environment means a change in facts, and
# facts may be used to determine which catalog we get, we need to
# rerun the process if the environment is changed.
tries = 0
while catalog.environment and not catalog.environment.empty? and catalog.environment != @environment
if tries > 3
raise Puppet::Error, "Catalog environment didn't stabilize after #{tries} fetches, aborting run"
end
Puppet.warning "Local environment: \"#{@environment}\" doesn't match server specified environment \"#{catalog.environment}\", restarting agent run with environment \"#{catalog.environment}\""
@environment = catalog.environment
report.environment = @environment
return nil unless catalog = prepare_and_retrieve_catalog(options, query_options)
tries += 1
end
execute_prerun_command or return nil
apply_catalog(catalog, options)
report.exit_status
rescue => detail
Puppet.log_exception(detail, "Failed to apply catalog: #{detail}")
return nil
ensure
execute_postrun_command or return nil
end
ensure
# Between Puppet runs we need to forget the cached values. This lets us
# pick up on new functions installed by gems or new modules being added
# without the daemon being restarted.
$env_module_directories = nil
Puppet::Util::Log.close(report)
send_report(report)
+ Puppet.pop_context
end
+ private :run_internal
def send_report(report)
puts report.summary if Puppet[:summarize]
save_last_run_summary(report)
Puppet::Transaction::Report.indirection.save(report, nil, :environment => @environment) if Puppet[:report]
rescue => detail
Puppet.log_exception(detail, "Could not send report: #{detail}")
end
def save_last_run_summary(report)
mode = Puppet.settings.setting(:lastrunfile).mode
Puppet::Util.replace_file(Puppet[:lastrunfile], mode) do |fh|
fh.print YAML.dump(report.raw_summary)
end
rescue => detail
Puppet.log_exception(detail, "Could not save last run local report: #{detail}")
end
private
def execute_from_setting(setting)
return true if (command = Puppet[setting]) == ""
begin
Puppet::Util::Execution.execute([command])
true
rescue => detail
Puppet.log_exception(detail, "Could not run command from #{setting}: #{detail}")
false
end
end
def retrieve_catalog_from_cache(query_options)
result = nil
@duration = thinmark do
result = Puppet::Resource::Catalog.indirection.find(Puppet[:node_name_value],
query_options.merge(:ignore_terminus => true, :environment => @environment))
end
Puppet.notice "Using cached catalog"
result
rescue => detail
Puppet.log_exception(detail, "Could not retrieve catalog from cache: #{detail}")
return nil
end
def retrieve_new_catalog(query_options)
result = nil
@duration = thinmark do
result = Puppet::Resource::Catalog.indirection.find(Puppet[:node_name_value],
query_options.merge(:ignore_cache => true, :environment => @environment, :fail_on_404 => true))
end
result
rescue SystemExit,NoMemoryError
raise
rescue Exception => detail
Puppet.log_exception(detail, "Could not retrieve catalog from remote server: #{detail}")
return nil
end
+
+ def download_plugins(remote_environment_for_plugins)
+ @handler.download_plugins(remote_environment_for_plugins)
+ end
end
diff --git a/lib/puppet/configurer/downloader.rb b/lib/puppet/configurer/downloader.rb
index c8554dcde..319546b03 100644
--- a/lib/puppet/configurer/downloader.rb
+++ b/lib/puppet/configurer/downloader.rb
@@ -1,71 +1,66 @@
require 'puppet/configurer'
require 'puppet/resource/catalog'
class Puppet::Configurer::Downloader
attr_reader :name, :path, :source, :ignore
# Evaluate our download, returning the list of changed values.
def evaluate
Puppet.info "Retrieving #{name}"
files = []
begin
- ::Timeout.timeout(Puppet[:configtimeout]) do
- catalog.apply do |trans|
- trans.changed?.find_all do |resource|
- yield resource if block_given?
- files << resource[:path]
- end
+ catalog.apply do |trans|
+ trans.changed?.each do |resource|
+ yield resource if block_given?
+ files << resource[:path]
end
end
- rescue Puppet::Error, Timeout::Error => detail
+ rescue Puppet::Error => detail
Puppet.log_exception(detail, "Could not retrieve #{name}: #{detail}")
end
-
files
end
def initialize(name, path, source, ignore = nil, environment = nil, source_permissions = :ignore)
@name, @path, @source, @ignore, @environment, @source_permissions = name, path, source, ignore, environment, source_permissions
end
def catalog
catalog = Puppet::Resource::Catalog.new("PluginSync", @environment)
catalog.host_config = false
catalog.add_resource(file)
catalog
end
def file
args = default_arguments.merge(:path => path, :source => source)
args[:ignore] = ignore.split if ignore
Puppet::Type.type(:file).new(args)
end
private
- require 'sys/admin' if Puppet.features.microsoft_windows?
-
def default_arguments
defargs = {
:path => path,
:recurse => true,
:source => source,
:source_permissions => @source_permissions,
:tag => name,
:purge => true,
:force => true,
:backup => false,
:noop => false
}
if !Puppet.features.microsoft_windows?
defargs.merge!(
{
:owner => Process.uid,
:group => Process.gid
}
)
end
return defargs
end
end
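The Downloader above can also be driven directly: construct it with a name, destination, source and ignore list, and evaluate returns the paths that changed. A sketch with illustrative destination, source URL and ignore list:

```ruby
# Sync a remote file mount into a local directory and report what changed.
# The destination path, source URL and ignore list are illustrative only.
require 'puppet'
require 'puppet/configurer/downloader'

downloader = Puppet::Configurer::Downloader.new(
  'plugin',                                      # name, used for logging/tagging
  '/tmp/plugin-dest',                            # local destination path
  'puppet:///plugins',                           # remote file source
  '.svn CVS',                                    # ignore patterns (split on whitespace)
  Puppet::Node::Environment.remote('production') # environment to request from
)
changed = downloader.evaluate
puts "changed files: #{changed.join(', ')}"
```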
diff --git a/lib/puppet/configurer/downloader_factory.rb b/lib/puppet/configurer/downloader_factory.rb
new file mode 100644
index 000000000..65aac8b3d
--- /dev/null
+++ b/lib/puppet/configurer/downloader_factory.rb
@@ -0,0 +1,34 @@
+require 'puppet/configurer'
+
+# Factory for <tt>Puppet::Configurer::Downloader</tt> objects.
+#
+# Puppet's pluginsync facilities can be used to download modules
+# and external facts, each with a different destination directory
+# and related settings.
+#
+# @api private
+#
+class Puppet::Configurer::DownloaderFactory
+ def create_plugin_downloader(environment)
+ plugin_downloader = Puppet::Configurer::Downloader.new(
+ "plugin",
+ Puppet[:plugindest],
+ Puppet[:pluginsource],
+ Puppet[:pluginsignore],
+ environment
+ )
+ end
+
+ def create_plugin_facts_downloader(environment)
+ source_permissions = Puppet.features.microsoft_windows? ? :ignore : :use
+
+ plugin_fact_downloader = Puppet::Configurer::Downloader.new(
+ "pluginfacts",
+ Puppet[:pluginfactdest],
+ Puppet[:pluginfactsource],
+ Puppet[:pluginsignore],
+ environment,
+ source_permissions
+ )
+ end
+end
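Because Configurer#initialize (earlier in this diff) now takes the factory as an argument, callers and tests can substitute their own implementation; the factory only needs the two creation methods used by the plugin handler. A hypothetical null implementation as a sketch:

```ruby
# A do-nothing factory/downloader pair (hypothetical, for illustration only):
# injecting it into Puppet::Configurer.new skips real pluginsync downloads.
require 'puppet'
require 'puppet/configurer'

class NullDownloader
  def evaluate
    []   # Downloader#evaluate returns the list of changed file paths
  end
end

class NullDownloaderFactory
  def create_plugin_downloader(_environment)
    NullDownloader.new
  end

  def create_plugin_facts_downloader(_environment)
    NullDownloader.new
  end
end

configurer = Puppet::Configurer.new(NullDownloaderFactory.new)
# configurer.run(...) would now perform no plugin or pluginfacts downloads.
```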
diff --git a/lib/puppet/configurer/plugin_handler.rb b/lib/puppet/configurer/plugin_handler.rb
index b8aeb95a5..908e49dfd 100644
--- a/lib/puppet/configurer/plugin_handler.rb
+++ b/lib/puppet/configurer/plugin_handler.rb
@@ -1,29 +1,23 @@
# Break out the code related to plugins. This module is
# just included into the agent, but having it here makes it
# easier to test.
-module Puppet::Configurer::PluginHandler
+require 'puppet/configurer'
+
+class Puppet::Configurer::PluginHandler
+ def initialize(factory)
+ @factory = factory
+ end
+
# Retrieve facts from the central server.
def download_plugins(environment)
- plugin_downloader = Puppet::Configurer::Downloader.new(
- "plugin",
- Puppet[:plugindest],
- Puppet[:pluginsource],
- Puppet[:pluginsignore],
- environment
- )
+ plugin_downloader = @factory.create_plugin_downloader(environment)
+
if Puppet.features.external_facts?
- plugin_fact_downloader = Puppet::Configurer::Downloader.new(
- "pluginfacts",
- Puppet[:pluginfactdest],
- Puppet[:pluginfactsource],
- Puppet[:pluginsignore],
- environment,
- :use
- )
- plugin_fact_downloader.evaluate
+ plugin_fact_downloader = @factory.create_plugin_facts_downloader(environment)
+ plugin_fact_downloader.evaluate
end
plugin_downloader.evaluate
Puppet::Util::Autoload.reload_changed
end
end
diff --git a/lib/puppet/defaults.rb b/lib/puppet/defaults.rb
index 97a9a07ed..b2cb92975 100644
--- a/lib/puppet/defaults.rb
+++ b/lib/puppet/defaults.rb
@@ -1,1966 +1,2091 @@
module Puppet
def self.default_diffargs
if (Facter.value(:kernel) == "AIX" && Facter.value(:kernelmajversion) == "5300")
""
else
"-u"
end
end
############################################################################################
# NOTE: For information about the available values for the ":type" property of settings,
# see the docs for Settings.define_settings
############################################################################################
AS_DURATION = %q{This setting can be a time interval in seconds (30 or 30s), minutes (30m), hours (6h), days (2d), or years (5y).}
STORECONFIGS_ONLY = %q{This setting is only used by the ActiveRecord storeconfigs and inventory backends, which are deprecated.}
+ # This is defined first so that the facter implementation is replaced before other setting defaults are evaluated.
+ define_settings(:main,
+ :cfacter => {
+ :default => false,
+ :type => :boolean,
+ :desc => 'Whether or not to use the native facter (cfacter) implementation instead of the Ruby one (facter). Defaults to false.',
+ :hook => proc do |value|
+ return unless value
+ raise ArgumentError, 'facter has already evaluated facts.' if Facter.instance_variable_get(:@collection)
+ raise ArgumentError, 'cfacter version 0.2.0 or later is not installed.' unless Puppet.features.cfacter?
+ CFacter.initialize
+ end
+ }
+ )
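The :cfacter entry above also shows the general shape of a setting definition: a default, a type, a doc string, and an optional :hook proc that runs when the value is assigned. A minimal sketch of the same pattern for a made-up setting (the name and behaviour are purely illustrative, and it assumes define_settings is reachable the same way it is in this file):

```ruby
# Illustrative only: a boolean setting whose hook fires when it is set.
Puppet.define_settings(:main,
  :example_flag => {                 # hypothetical setting name
    :default => false,
    :type    => :boolean,
    :desc    => 'Example setting used only to illustrate the hook mechanism.',
    :hook    => proc do |value|
      Puppet.notice "example_flag was set to #{value}" if value
    end
  }
)
```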
+
define_settings(:main,
:confdir => {
:default => nil,
:type => :directory,
:desc => "The main Puppet configuration directory. The default for this setting
is calculated based on the user. If the process is running as root or
the user that Puppet is supposed to run as, it defaults to a system
directory, but if it's running as any other user, it defaults to being
in the user's home directory.",
},
:vardir => {
:default => nil,
:type => :directory,
:owner => "service",
:group => "service",
:desc => "Where Puppet stores dynamic and growing data. The default for this
setting is calculated specially, like `confdir`_.",
},
### NOTE: this setting is usually being set to a symbol value. We don't officially have a
### setting type for that yet, but we might want to consider creating one.
:name => {
:default => nil,
:desc => "The name of the application, if we are running as one. The
default is essentially $0 without the path or `.rb`.",
}
)
define_settings(:main,
:logdir => {
:default => nil,
:type => :directory,
- :mode => 0750,
+ :mode => "0750",
:owner => "service",
:group => "service",
:desc => "The directory in which to store log files",
},
:log_level => {
:default => 'notice',
:type => :enum,
:values => ["debug","info","notice","warning","err","alert","emerg","crit"],
- :desc => "Default logging level",
+ :desc => "Default logging level for messages from Puppet. Allowed values are:
+
+ * debug
+ * info
+ * notice
+ * warning
+ * err
+ * alert
+ * emerg
+ * crit
+ ",
},
:disable_warnings => {
:default => [],
:type => :array,
- :desc => "A list of warning types to disable. Currently the only warning type that can be
- disabled are deprecations, but more warning types may be added later.",
+ :desc => "A comma-separated list of warning types to suppress. If large numbers
+ of warnings are making Puppet's logs too large or difficult to use, you
+ can temporarily silence them with this setting.
+
+ If you are preparing to upgrade Puppet to a new major version, you
+ should re-enable all warnings for a while.
+
+ Valid values for this setting are:
+
+ * `deprecations` --- disables deprecation warnings.",
:hook => proc do |value|
values = munge(value)
valid = %w[deprecations]
invalid = values - (values & valid)
if not invalid.empty?
raise ArgumentError, "Cannot disable unrecognized warning types #{invalid.inspect}. Valid values are #{valid.inspect}."
end
end
}
)
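The disable_warnings entry above accepts a list whose only valid member is currently `deprecations`. A short sketch of suppressing deprecation warnings from inside a Puppet process, with the equivalent puppet.conf entry shown as a comment:

```ruby
# Suppress deprecation warnings, as described by the disable_warnings
# setting above.
require 'puppet'

Puppet.settings.handlearg('--disable_warnings', 'deprecations')

# Equivalent puppet.conf entry:
#   [main]
#   disable_warnings = deprecations
```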
define_settings(:main,
:priority => {
:default => nil,
:type => :priority,
:desc => "The scheduling priority of the process. Valid values are 'high',
'normal', 'low', or 'idle', which are mapped to platform-specific
values. The priority can also be specified as an integer value and
will be passed as is, e.g. -5. Puppet must be running as a privileged
user in order to increase scheduling priority.",
},
:trace => {
:default => false,
:type => :boolean,
:desc => "Whether to print stack traces on some errors",
},
:profile => {
:default => false,
:type => :boolean,
:desc => "Whether to enable experimental performance profiling",
},
:autoflush => {
:default => true,
:type => :boolean,
:desc => "Whether log files should always flush to disk.",
:hook => proc { |value| Log.autoflush = value }
},
:syslogfacility => {
:default => "daemon",
:desc => "What syslog facility to use when logging to syslog.
Syslog has a fixed list of valid facilities, and you must
choose one of those; you cannot just make one up."
},
:statedir => {
:default => "$vardir/state",
:type => :directory,
- :mode => 01755,
+ :mode => "01755",
:desc => "The directory where Puppet state is stored. Generally,
this directory can be removed without causing harm (although it
might result in spurious service restarts)."
},
:rundir => {
:default => nil,
:type => :directory,
- :mode => 0755,
+ :mode => "0755",
:owner => "service",
:group => "service",
:desc => "Where Puppet PID files are kept."
},
:genconfig => {
:default => false,
:type => :boolean,
:desc => "When true, causes Puppet applications to print an example config file
to stdout and exit. The example will include descriptions of each
setting, and the current (or default) value of each setting,
incorporating any settings overridden on the CLI (with the exception
of `genconfig` itself). This setting only makes sense when specified
on the command line as `--genconfig`.",
},
:genmanifest => {
:default => false,
:type => :boolean,
:desc => "Whether to just print a manifest to stdout and exit. Only makes
sense when specified on the command line as `--genmanifest`. Takes into account arguments specified
on the CLI.",
},
:configprint => {
:default => "",
:desc => "Print the value of a specific configuration setting. If the name of a
setting is provided for this, then the value is printed and puppet
exits. Comma-separate multiple values. For a list of all values,
specify 'all'.",
},
:color => {
:default => "ansi",
:type => :string,
:desc => "Whether to use colors when logging to the console. Valid values are
`ansi` (equivalent to `true`), `html`, and `false`, which produces no color.
Defaults to false on Windows, as its console does not support ansi colors.",
},
:mkusers => {
:default => false,
:type => :boolean,
:desc => "Whether to create the necessary user and group that puppet agent will run as.",
},
:manage_internal_file_permissions => {
:default => true,
:type => :boolean,
:desc => "Whether Puppet should manage the owner, group, and mode of files it uses internally",
},
:onetime => {
:default => false,
:type => :boolean,
:desc => "Perform one configuration run and exit, rather than spawning a long-running
daemon. This is useful for interactively running puppet agent, or
running puppet agent from cron.",
:short => 'o',
},
:path => {
:default => "none",
:desc => "The shell search path. Defaults to whatever is inherited
from the parent process.",
:call_hook => :on_define_and_write,
:hook => proc do |value|
ENV["PATH"] = "" if ENV["PATH"].nil?
ENV["PATH"] = value unless value == "none"
paths = ENV["PATH"].split(File::PATH_SEPARATOR)
Puppet::Util::Platform.default_paths.each do |path|
ENV["PATH"] += File::PATH_SEPARATOR + path unless paths.include?(path)
end
value
end
},
:libdir => {
:type => :directory,
:default => "$vardir/lib",
:desc => "An extra search path for Puppet. This is only useful
for those files that Puppet will load on demand, and is only
guaranteed to work for those cases. In fact, the autoload
mechanism is responsible for making sure this directory
is in Ruby's search path\n",
:call_hook => :on_initialize_and_write,
:hook => proc do |value|
$LOAD_PATH.delete(@oldlibdir) if defined?(@oldlibdir) and $LOAD_PATH.include?(@oldlibdir)
@oldlibdir = value
$LOAD_PATH << value
end
},
:ignoreimport => {
:default => false,
:type => :boolean,
:desc => "If true, allows the parser to continue without requiring
all files referenced with `import` statements to exist. This setting was primarily
designed for use with commit hooks for parse-checking.",
},
:environment => {
:default => "production",
:desc => "The environment Puppet is running in. For clients
(e.g., `puppet agent`) this determines the environment itself, which
is used to find modules and much more. For servers (i.e., `puppet master`)
this provides the default environment for nodes we know nothing about."
},
:environmentpath => {
:default => "",
:desc => "A search path for directory environments, as a list of directories
separated by the system path separator character. (The POSIX path separator
is ':', and the Windows path separator is ';'.)
This setting must have a value set to enable **directory environments.** The
recommended value is `$confdir/environments`. For more details, see
http://docs.puppetlabs.com/puppet/latest/reference/environments.html",
:type => :path,
},
+ :always_cache_features => {
+ :type => :boolean,
+ :default => false,
+ :desc => <<-'EOT'
+ Affects how we cache attempts to load Puppet 'features'. If false, then
+ calls to `Puppet.features.<feature>?` will always attempt to load the
+ feature (which can be an expensive operation) unless it has already been
+ loaded successfully. This makes it possible for a single agent run to,
+ e.g., install a package that provides the underlying capabilities for
+ a feature, and then later load that feature during the same run (even if
+ the feature had been tested earlier and had not been available).
+
+ If this setting is set to true, then features will only be checked once,
+ and if they are not available, the negative result is cached and returned
+ for all subsequent attempts to load the feature. This behavior is almost
+ always appropriate for the server, and can result in a significant performance
+ improvement for features that are checked frequently.
+ EOT
+ },
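# Illustrative sketch, not a setting definition: the call pattern the setting
# above controls. Feature checks go through Puppet.features; the feature name
# used here is invented:
#
#   if Puppet.features.some_optional_library?   # invented feature name
#     # use the library-backed code path
#   end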
:diff_args => {
- :default => default_diffargs,
+ :default => lambda { default_diffargs },
:desc => "Which arguments to pass to the diff command when printing differences between
files. The command to use can be chosen with the `diff` setting.",
},
:diff => {
:default => (Puppet.features.microsoft_windows? ? "" : "diff"),
:desc => "Which diff command to use when printing differences between files. This setting
has no default value on Windows, as standard `diff` is not available, but Puppet can use many
third-party diff tools.",
},
:show_diff => {
:type => :boolean,
:default => false,
:desc => "Whether to log and report a contextual diff when files are being replaced.
This causes partial file contents to pass through Puppet's normal
logging and reporting system, so this setting should be used with
caution if you are sending Puppet's reports to an insecure
destination. This feature currently requires the `diff/lcs` Ruby
library.",
},
:daemonize => {
:type => :boolean,
:default => (Puppet.features.microsoft_windows? ? false : true),
:desc => "Whether to send the process into the background. This defaults
to true on POSIX systems, and to false on Windows (where Puppet
currently cannot daemonize).",
:short => "D",
:hook => proc do |value|
if value and Puppet.features.microsoft_windows?
raise "Cannot daemonize on Windows"
end
end
},
:maximum_uid => {
:default => 4294967290,
:desc => "The maximum allowed UID. Some platforms use negative UIDs
but then ship with tools that do not know how to handle signed ints,
so the UIDs show up as huge numbers that can then not be fed back into
the system. This is a hackish way to fail in a slightly more useful
way when that happens.",
},
:route_file => {
:default => "$confdir/routes.yaml",
:desc => "The YAML file containing indirector route configuration.",
},
:node_terminus => {
:type => :terminus,
:default => "plain",
:desc => "Where to find information about nodes.",
},
:node_cache_terminus => {
:type => :terminus,
:default => nil,
:desc => "How to store cached nodes.
Valid values are (none), 'json', 'msgpack', 'yaml', or write-only yaml ('write_only_yaml').
The master application defaults to 'write_only_yaml'; all others default to none.",
},
:data_binding_terminus => {
:type => :terminus,
:default => "hiera",
:desc => "Where to retrive information about data.",
},
:hiera_config => {
:default => "$confdir/hiera.yaml",
:desc => "The hiera configuration file. Puppet only reads this file on startup, so you must restart the puppet master every time you edit it.",
:type => :file,
},
:binder => {
:default => false,
:desc => "Turns the binding system on or off. This includes bindings in modules.
The binding system aggregates data from modules and other locations and makes them available for lookup.
The binding system is experimental and any or all of it may change.",
:type => :boolean,
},
:binder_config => {
:default => nil,
:desc => "The binder configuration file. Puppet reads this file on each request to configure the bindings system.
If set to nil (the default), a $confdir/binder_config.yaml is optionally loaded. If it does not exist, a default configuration
is used. If the setting :binder_config is specified, it must reference a valid and existing yaml file.",
:type => :file,
},
:catalog_terminus => {
:type => :terminus,
:default => "compiler",
:desc => "Where to get node catalogs. This is useful to change if, for instance,
you'd like to pre-compile catalogs and store them in memcached or some other easily-accessed store.",
},
:catalog_cache_terminus => {
:type => :terminus,
:default => nil,
:desc => "How to store cached catalogs. Valid values are 'json', 'msgpack' and 'yaml'. The agent application defaults to 'json'."
},
:facts_terminus => {
:default => 'facter',
:desc => "The node facts terminus.",
:call_hook => :on_initialize_and_write,
:hook => proc do |value|
require 'puppet/node/facts'
# Cache to YAML if we're uploading facts away
if %w[rest inventory_service].include? value.to_s
Puppet.info "configuring the YAML fact cache because a remote terminus is active"
Puppet::Node::Facts.indirection.cache_class = :yaml
end
end
},
:inventory_terminus => {
:type => :terminus,
:default => "$facts_terminus",
:desc => "Should usually be the same as the facts terminus",
},
:default_file_terminus => {
:type => :terminus,
:default => "rest",
:desc => "The default source for files if no server is given in a
uri, e.g. puppet:///file. The default of `rest` causes the file to be
retrieved using the `server` setting. When running `apply` the default
is `file_server`, causing requests to be filled locally."
},
:httplog => {
:default => "$logdir/http.log",
:type => :file,
:owner => "root",
- :mode => 0640,
+ :mode => "0640",
:desc => "Where the puppet agent web server logs.",
},
:http_proxy_host => {
:default => "none",
:desc => "The HTTP proxy host to use for outgoing connections. Note: You
- may need to use a FQDN for the server hostname when using a proxy.",
+ may need to use a FQDN for the server hostname when using a proxy. The
+ http_proxy or HTTP_PROXY environment variable will override this value.",
},
:http_proxy_port => {
:default => 3128,
:desc => "The HTTP proxy port to use for outgoing connections",
},
+ :http_proxy_user => {
+ :default => "none",
+ :desc => "The user name for an authenticated HTTP proxy. Requires the `http_proxy_host` setting.",
+ },
+ :http_proxy_password => {
+ :default => "none",
+ :hook => proc do |value|
+ if value =~ /[@!# \/]/
+ raise "Passwords set in the http_proxy_password setting must be valid as part of a URL, and any reserved characters must be URL-encoded. We received: #{value}"
+ end
+ end,
+ :desc => "The password for the user of an authenticated HTTP proxy.
+ Requires the `http_proxy_user` setting.
+
+ Note that passwords must be valid when used as part of a URL. If a password
+ contains any characters with special meanings in URLs (as specified by RFC 3986
+ section 2.2), they must be URL-encoded. (For example, `#` would become `%23`.)",
+ },
+ :http_keepalive_timeout => {
+ :default => "4s",
+ :type => :duration,
+ :desc => "The maximum amount of time a persistent HTTP connection can remain idle in the connection pool, before it is closed. This timeout should be shorter than the keepalive timeout used on the HTTP server, e.g. Apache KeepAliveTimeout directive.
+ #{AS_DURATION}"
+ },
+ :http_debug => {
+ :default => false,
+ :type => :boolean,
+ :desc => "Whether to write HTTP request and responses to stderr. This should never be used in a production environment."
+ },
:filetimeout => {
:default => "15s",
:type => :duration,
:desc => "The minimum time to wait between checking for updates in
configuration files. This timeout determines how quickly Puppet checks whether
a file (such as manifests or templates) has changed on disk. #{AS_DURATION}",
},
:environment_timeout => {
:default => "3m",
:type => :ttl,
- :desc => "The time to live for a cached environment. The time is either given #{AS_DURATION}, or
- the word 'unlimited' which causes the environment to be cached until the master is restarted."
+ :desc => "The time to live for a cached environment.
+ #{AS_DURATION}
+ This setting can also be set to `unlimited`, which causes the environment to
+ be cached until the master is restarted."
},
:queue_type => {
:default => "stomp",
:desc => "Which type of queue to use for asynchronous processing.",
},
:queue_source => {
:default => "stomp://localhost:61613/",
:desc => "Which type of queue to use for asynchronous processing. If your stomp server requires
authentication, you can include it in the URI as long as your stomp client library is at least 1.1.1",
},
:async_storeconfigs => {
:default => false,
:type => :boolean,
:desc => "Whether to use a queueing system to provide asynchronous database integration.
Requires that `puppet queue` be running.",
:hook => proc do |value|
if value
# This reconfigures the termini for Node, Facts, and Catalog
Puppet.settings.override_default(:storeconfigs, true)
# But then we modify the configuration
Puppet::Resource::Catalog.indirection.cache_class = :queue
Puppet.settings.override_default(:catalog_cache_terminus, :queue)
else
raise "Cannot disable asynchronous storeconfigs in a running process"
end
end
},
:thin_storeconfigs => {
:default => false,
:type => :boolean,
:desc =>
"Boolean; whether Puppet should store only facts and exported resources in the storeconfigs
database. This will improve the performance of exported resources with the older
`active_record` backend, but will disable external tools that search the storeconfigs database.
Thinning catalogs is generally unnecessary when using PuppetDB to store catalogs.",
:hook => proc do |value|
Puppet.settings.override_default(:storeconfigs, true) if value
end
},
:config_version => {
:default => "",
:desc => "How to determine the configuration version. By default, it will be the
time that the configuration is parsed, but you can provide a shell script to override how the
version is determined. The output of this script will be added to every log message in the
reports, allowing you to correlate changes on your hosts to the source version on the server.
Setting a global value for config_version in puppet.conf is deprecated. Please set a
per-environment value in environment.conf instead. For more info, see
http://docs.puppetlabs.com/puppet/latest/reference/environments.html",
:deprecated => :allowed_on_commandline,
},
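# Illustrative sketch, not a setting definition: one way to satisfy the
# per-environment recommendation above is to point `config_version` in an
# environment's environment.conf at a command whose output identifies the
# deployed code. The git invocation and the /etc/puppet/environments path are
# assumptions for the example, not defaults:
#
#   # environment.conf
#   config_version = '/usr/bin/git --git-dir /etc/puppet/environments/production/.git rev-parse HEAD'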
:zlib => {
:default => true,
:type => :boolean,
:desc => "Boolean; whether to use the zlib library",
},
:prerun_command => {
:default => "",
:desc => "A command to run before every agent run. If this command returns a non-zero
return code, the entire Puppet run will fail.",
},
:postrun_command => {
:default => "",
:desc => "A command to run after every agent run. If this command returns a non-zero
return code, the entire Puppet run will be considered to have failed, even though it might have
performed work during the normal run.",
},
:freeze_main => {
:default => false,
:type => :boolean,
:desc => "Freezes the 'main' class, disallowing any code to be added to it. This
essentially means that you can't have any code outside of a node,
class, or definition other than in the site manifest.",
},
:stringify_facts => {
:default => true,
:type => :boolean,
:desc => "Flatten fact values to strings using #to_s. Means you can't have arrays or
- hashes as fact values.",
+ hashes as fact values. (DEPRECATED) This option will be removed in Puppet 4.0.",
},
:trusted_node_data => {
:default => false,
:type => :boolean,
:desc => "Stores trusted node data in a hash called $trusted.
When true also prevents $trusted from being overridden in any scope.",
},
:immutable_node_data => {
:default => '$trusted_node_data',
:type => :boolean,
:desc => "When true, also prevents $trusted and $facts from being overridden in any scope",
}
)
Puppet.define_settings(:module_tool,
:module_repository => {
:default => 'https://forgeapi.puppetlabs.com',
:desc => "The module repository",
},
:module_working_dir => {
:default => '$vardir/puppet-module',
:desc => "The directory into which module tool data is stored",
},
:module_skeleton_dir => {
:default => '$module_working_dir/skeleton',
:desc => "The directory which the skeleton for module tool generate is stored.",
+ },
+ :forge_authorization => {
+ :default => nil,
+ :desc => "The authorization key to connect to the Puppet Forge. Leave blank for unauthorized or license based connections",
+ },
+ :module_groups => {
+ :default => nil,
+ :desc => "Extra module groups to request from the Puppet Forge",
}
)
Puppet.define_settings(
:main,
# We have to downcase the fqdn, because the current ssl stuff (as opposed to in master) doesn't have good facilities for
# manipulating naming.
:certname => {
- :default => Puppet::Settings.default_certname.downcase, :desc => "The name to use when handling certificates. Defaults
- to the fully qualified domain name.",
- :call_hook => :on_define_and_write, # Call our hook with the default value, so we're always downcased
+ :default => lambda { Puppet::Settings.default_certname.downcase },
+ :desc => "The name to use when handling certificates. When a node
+ requests a certificate from the CA puppet master, it uses the value of the
+ `certname` setting as its requested Subject CN.
+
+ This is the name used when managing a node's permissions in
+ [auth.conf](http://docs.puppetlabs.com/puppet/latest/reference/config_file_auth.html).
+ In most cases, it is also used as the node's name when matching
+ [node definitions](http://docs.puppetlabs.com/puppet/latest/reference/lang_node_definitions.html)
+ and requesting data from an ENC. (This can be changed with the `node_name_value`
+ and `node_name_fact` settings, although you should only do so if you have
+ a compelling reason.)
+
+ A node's certname is available in Puppet manifests as `$trusted['certname']`. (See
+ [Facts and Built-In Variables](http://docs.puppetlabs.com/puppet/latest/reference/lang_facts_and_builtin_vars.html)
+ for more details.)
+
+ * For best compatibility, you should limit the value of `certname` to
+ only use letters, numbers, periods, underscores, and dashes. (That is,
+ it should match `/\A[a-z0-9._-]+\Z/`.)
+ * The special value `ca` is reserved, and can't be used as the certname
+ for a normal node.
+
+ Defaults to the node's fully qualified domain name.",
:hook => proc { |value| raise(ArgumentError, "Certificate names must be lower case; see #1168") unless value == value.downcase }},
:certdnsnames => {
:default => '',
:hook => proc do |value|
unless value.nil? or value == '' then
Puppet.warning <<WARN
The `certdnsnames` setting is no longer functional,
after CVE-2011-3872. We ignore the value completely.
For your own certificate request you can set `dns_alt_names` in the
configuration and it will apply locally. There is no configuration option to
set DNS alt names, or any other `subjectAltName` value, for another node's
certificate.
Alternately you can use the `--dns_alt_names` command line option to set the
labels added while generating your own CSR.
WARN
end
end,
:desc => <<EOT
The `certdnsnames` setting is no longer functional,
after CVE-2011-3872. We ignore the value completely.
For your own certificate request you can set `dns_alt_names` in the
configuration and it will apply locally. There is no configuration option to
set DNS alt names, or any other `subjectAltName` value, for another node's
certificate.
Alternately you can use the `--dns_alt_names` command line option to set the
labels added while generating your own CSR.
EOT
},
:dns_alt_names => {
:default => '',
:desc => <<EOT,
The comma-separated list of alternative DNS names to use for the local host.
When the node generates a CSR for itself, these are added to the request
as the desired `subjectAltName` in the certificate: additional DNS labels
that the certificate is also valid answering as.
This is generally required if you use a non-hostname `certname`, or if you
want to use `puppet kick` or `puppet resource -H` and the primary certname
does not match the DNS name you use to communicate with the host.
This is unnecessary for agents, unless you intend to use them as a server for
`puppet kick` or remote `puppet resource` management.
It is rarely necessary for servers; it is usually helpful only if you need to
have a pool of multiple load balanced masters, or for the same master to
respond on two physically separate networks under different names.
EOT
},
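# Illustrative sketch, not a setting definition: requesting extra DNS names in
# the node's own CSR, as described above. The names shown are invented:
#
#   # puppet.conf
#   dns_alt_names = puppet,puppet.example.com,master01.example.com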
:csr_attributes => {
:default => "$confdir/csr_attributes.yaml",
:type => :file,
:desc => <<EOT
An optional file containing custom attributes to add to certificate signing
requests (CSRs). You should ensure that this file does not exist on your CA
puppet master; if it does, unwanted certificate extensions may leak into
certificates created with the `puppet cert generate` command.
If present, this file must be a YAML hash containing a `custom_attributes` key
and/or an `extension_requests` key. The value of each key must be a hash, where
each key is a valid OID and each value is an object that can be cast to a string.
Custom attributes can be used by the CA when deciding whether to sign the
certificate, but are then discarded. Attribute OIDs can be any OID value except
the standard CSR attributes (i.e. attributes described in RFC 2985 section 5.4).
This is useful for embedding a pre-shared key for autosigning policy executables
(see the `autosign` setting), often by using the `1.2.840.113549.1.9.7`
("challenge password") OID.
Extension requests will be permanently embedded in the final certificate.
Extension OIDs must be in the "ppRegCertExt" (`1.3.6.1.4.1.34380.1.1`) or
"ppPrivCertExt" (`1.3.6.1.4.1.34380.1.2`) OID arcs. The ppRegCertExt arc is
reserved for four of the most common pieces of data to embed: `pp_uuid` (`.1`),
`pp_instance_id` (`.2`), `pp_image_name` (`.3`), and `pp_preshared_key` (`.4`)
--- in the YAML file, these can be referred to by their short descriptive names
instead of their full OID. The ppPrivCertExt arc is unregulated, and can be used
for site-specific extensions.
EOT
},
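# Illustrative sketch, not a setting definition: a csr_attributes.yaml shaped
# as the description above requires -- a hash with `custom_attributes` and/or
# `extension_requests` keys, each mapping OIDs (or ppRegCertExt short names)
# to string-castable values. The concrete values are invented:
#
#   custom_attributes:
#     1.2.840.113549.1.9.7: "a-preshared-autosign-secret"
#   extension_requests:
#     pp_uuid: "ED803750-E3C7-44F5-BB08-41A04433FE2E"
#     pp_image_name: "base-web"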
:certdir => {
:default => "$ssldir/certs",
:type => :directory,
- :mode => 0755,
+ :mode => "0755",
:owner => "service",
:group => "service",
:desc => "The certificate directory."
},
:ssldir => {
:default => "$confdir/ssl",
:type => :directory,
- :mode => 0771,
+ :mode => "0771",
:owner => "service",
:group => "service",
:desc => "Where SSL certificates are kept."
},
:publickeydir => {
:default => "$ssldir/public_keys",
:type => :directory,
- :mode => 0755,
+ :mode => "0755",
:owner => "service",
:group => "service",
:desc => "The public key directory."
},
:requestdir => {
:default => "$ssldir/certificate_requests",
:type => :directory,
- :mode => 0755,
+ :mode => "0755",
:owner => "service",
:group => "service",
:desc => "Where host certificate requests are stored."
},
:privatekeydir => {
:default => "$ssldir/private_keys",
:type => :directory,
- :mode => 0750,
+ :mode => "0750",
:owner => "service",
:group => "service",
:desc => "The private key directory."
},
:privatedir => {
:default => "$ssldir/private",
:type => :directory,
- :mode => 0750,
+ :mode => "0750",
:owner => "service",
:group => "service",
:desc => "Where the client stores private certificate information."
},
:passfile => {
:default => "$privatedir/password",
:type => :file,
- :mode => 0640,
+ :mode => "0640",
:owner => "service",
:group => "service",
:desc => "Where puppet agent stores the password for its private key.
Generally unused."
},
:hostcsr => {
:default => "$ssldir/csr_$certname.pem",
:type => :file,
- :mode => 0644,
+ :mode => "0644",
:owner => "service",
:group => "service",
:desc => "Where individual hosts store and look for their certificate requests."
},
:hostcert => {
:default => "$certdir/$certname.pem",
:type => :file,
- :mode => 0644,
+ :mode => "0644",
:owner => "service",
:group => "service",
:desc => "Where individual hosts store and look for their certificates."
},
:hostprivkey => {
:default => "$privatekeydir/$certname.pem",
:type => :file,
- :mode => 0640,
+ :mode => "0640",
:owner => "service",
:group => "service",
:desc => "Where individual hosts store and look for their private key."
},
:hostpubkey => {
:default => "$publickeydir/$certname.pem",
:type => :file,
- :mode => 0644,
+ :mode => "0644",
:owner => "service",
:group => "service",
:desc => "Where individual hosts store and look for their public key."
},
:localcacert => {
:default => "$certdir/ca.pem",
:type => :file,
- :mode => 0644,
+ :mode => "0644",
:owner => "service",
:group => "service",
:desc => "Where each client stores the CA certificate."
},
:ssl_client_ca_auth => {
:type => :file,
- :mode => 0644,
+ :mode => "0644",
:owner => "service",
:group => "service",
:desc => "Certificate authorities who issue server certificates. SSL servers will not be
considered authentic unless they possess a certificate issued by an authority
listed in this file. If this setting has no value then the Puppet master's CA
certificate (localcacert) will be used."
},
:ssl_server_ca_auth => {
:type => :file,
- :mode => 0644,
+ :mode => "0644",
:owner => "service",
:group => "service",
:desc => "Certificate authorities who issue client certificates. SSL clients will not be
considered authentic unless they possess a certificate issued by an authority
listed in this file. If this setting has no value then the Puppet master's CA
certificate (localcacert) will be used."
},
:hostcrl => {
:default => "$ssldir/crl.pem",
:type => :file,
- :mode => 0644,
+ :mode => "0644",
:owner => "service",
:group => "service",
:desc => "Where the host's certificate revocation list can be found.
This is distinct from the certificate authority's CRL."
},
:certificate_revocation => {
:default => true,
:type => :boolean,
:desc => "Whether certificate revocation should be supported by downloading a
Certificate Revocation List (CRL)
to all clients. If enabled, CA chaining will almost definitely not work.",
},
:certificate_expire_warning => {
:default => "60d",
:type => :duration,
:desc => "The window of time leading up to a certificate's expiration that a notification
will be logged. This applies to CA, master, and agent certificates. #{AS_DURATION}"
},
:digest_algorithm => {
:default => 'md5',
:type => :enum,
:values => ["md5", "sha256"],
:desc => 'Which digest algorithm to use for file resources and the filebucket.
Valid values are md5, sha256. Default is md5.',
}
)
define_settings(
:ca,
:ca_name => {
:default => "Puppet CA: $certname",
:desc => "The name to use the Certificate Authority certificate.",
},
:cadir => {
:default => "$ssldir/ca",
:type => :directory,
:owner => "service",
:group => "service",
- :mode => 0755,
+ :mode => "0755",
:desc => "The root directory for the certificate authority."
},
:cacert => {
:default => "$cadir/ca_crt.pem",
:type => :file,
:owner => "service",
:group => "service",
- :mode => 0644,
+ :mode => "0644",
:desc => "The CA certificate."
},
:cakey => {
:default => "$cadir/ca_key.pem",
:type => :file,
:owner => "service",
:group => "service",
- :mode => 0640,
+ :mode => "0640",
:desc => "The CA private key."
},
:capub => {
:default => "$cadir/ca_pub.pem",
:type => :file,
:owner => "service",
:group => "service",
- :mode => 0644,
+ :mode => "0644",
:desc => "The CA public key."
},
:cacrl => {
:default => "$cadir/ca_crl.pem",
:type => :file,
:owner => "service",
:group => "service",
- :mode => 0644,
+ :mode => "0644",
:desc => "The certificate revocation list (CRL) for the CA. Will be used if present but otherwise ignored.",
},
:caprivatedir => {
:default => "$cadir/private",
:type => :directory,
:owner => "service",
:group => "service",
- :mode => 0750,
+ :mode => "0750",
:desc => "Where the CA stores private certificate information."
},
:csrdir => {
:default => "$cadir/requests",
:type => :directory,
:owner => "service",
:group => "service",
- :mode => 0755,
+ :mode => "0755",
:desc => "Where the CA stores certificate requests"
},
:signeddir => {
:default => "$cadir/signed",
:type => :directory,
:owner => "service",
:group => "service",
- :mode => 0755,
+ :mode => "0755",
:desc => "Where the CA stores signed certificates."
},
:capass => {
:default => "$caprivatedir/ca.pass",
:type => :file,
:owner => "service",
:group => "service",
- :mode => 0640,
+ :mode => "0640",
:desc => "Where the CA stores the password for the private key."
},
:serial => {
:default => "$cadir/serial",
:type => :file,
:owner => "service",
:group => "service",
- :mode => 0644,
+ :mode => "0644",
:desc => "Where the serial number for certificates is stored."
},
:autosign => {
:default => "$confdir/autosign.conf",
:type => :autosign,
:desc => "Whether (and how) to autosign certificate requests. This setting
is only relevant on a puppet master acting as a certificate authority (CA).
Valid values are true (autosigns all certificate requests; not recommended),
false (disables autosigning certificates), or the absolute path to a file.
The file specified in this setting may be either a **configuration file**
or a **custom policy executable.** Puppet will automatically determine
what it is: If the Puppet user (see the `user` setting) can execute the
file, it will be treated as a policy executable; otherwise, it will be
treated as a config file.
If a custom policy executable is configured, the CA puppet master will run it
every time it receives a CSR. The executable will be passed the subject CN of the
request _as a command line argument,_ and the contents of the CSR in PEM format
_on stdin._ It should exit with a status of 0 if the cert should be autosigned
and non-zero if the cert should not be autosigned.
If a certificate request is not autosigned, it will persist for review. An admin
user can use the `puppet cert sign` command to manually sign it, or can delete
the request.
For info on autosign configuration files, see
[the guide to Puppet's config files](http://docs.puppetlabs.com/guides/configuring.html).",
},
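# Illustrative sketch, not a setting definition: a minimal custom autosign
# policy executable of the kind described above. Puppet passes the requested
# certname as the first command line argument and the CSR (PEM) on stdin;
# exit 0 to sign, non-zero to leave the request pending. The allowed-domain
# check is an invented example policy:
#
#   #!/usr/bin/env ruby
#   certname = ARGV[0]
#   csr_pem  = $stdin.read   # available if the policy needs to inspect the CSR
#   exit(certname.to_s.end_with?('.example.com') ? 0 : 1)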
:allow_duplicate_certs => {
:default => false,
:type => :boolean,
:desc => "Whether to allow a new certificate
request to overwrite an existing certificate.",
},
:ca_ttl => {
:default => "5y",
:type => :duration,
:desc => "The default TTL for new certificates.
#{AS_DURATION}"
},
:req_bits => {
:default => 4096,
:desc => "The bit length of the certificates.",
},
:keylength => {
:default => 4096,
:desc => "The bit length of keys.",
},
:cert_inventory => {
:default => "$cadir/inventory.txt",
:type => :file,
- :mode => 0644,
+ :mode => "0644",
:owner => "service",
:group => "service",
:desc => "The inventory file. This is a text file to which the CA writes a
complete listing of all certificates."
}
)
# Define the config default.
define_settings(:application,
:config_file_name => {
:type => :string,
:default => Puppet::Settings.default_config_file_name,
:desc => "The name of the puppet config file.",
},
:config => {
:type => :file,
:default => "$confdir/${config_file_name}",
:desc => "The configuration file for the current puppet application.",
},
:pidfile => {
:type => :file,
:default => "$rundir/${run_mode}.pid",
:desc => "The file containing the PID of a running process.
This file is intended to be used by service management frameworks
and monitoring systems to determine if a puppet process is still in
the process table.",
},
:bindaddress => {
:default => "0.0.0.0",
:desc => "The address a listening server should bind to.",
}
)
define_settings(:master,
:user => {
:default => "puppet",
:desc => "The user puppet master should run as.",
},
:group => {
:default => "puppet",
:desc => "The group puppet master should run as.",
},
:manifestdir => {
:default => "$confdir/manifests",
:type => :directory,
:desc => "Used to build the default value of the `manifest` setting. Has no other purpose.
This setting is deprecated.",
:deprecated => :completely,
},
:manifest => {
:default => "$manifestdir/site.pp",
:type => :file_or_directory,
:desc => "The entry-point manifest for puppet master. This can be one file
or a directory of manifests to be evaluated in alphabetical order. Puppet manages
this path as a directory if one exists or if the path ends with a / or \\.
Setting a global value for `manifest` in puppet.conf is deprecated. Please use
directory environments instead. If you need to use something other than the
environment's `manifests` directory as the main manifest, you can set
`manifest` in environment.conf. For more info, see
http://docs.puppetlabs.com/puppet/latest/reference/environments.html",
:deprecated => :allowed_on_commandline,
},
+ :default_manifest => {
+ :default => "./manifests",
+ :type => :string,
+ :desc => "The default main manifest for directory environments. Any environment that
+ doesn't set the `manifest` setting in its `environment.conf` file will use
+ this manifest.
+
+ This setting's value can be an absolute or relative path. An absolute path
+ will make all environments default to the same main manifest; a relative
+ path will allow each environment to use its own manifest, and Puppet will
+ resolve the path relative to each environment's main directory.
+
+ In either case, the path can point to a single file or to a directory of
+ manifests to be evaluated in alphabetical order.",
+ :hook => proc do |value|
+ uninterpolated_value = self.value(true)
+ if uninterpolated_value =~ /\$environment/ || value =~ /\$environment/ then
+ raise(Puppet::Settings::ValidationError,
+ "You cannot interpolate '$environment' within the 'default_manifest' setting.")
+ end
+ end
+ },
+ :disable_per_environment_manifest => {
+ :default => false,
+ :type => :boolean,
+ :desc => "Whether to disallow an environment-specific main manifest. When set
+ to `true`, Puppet will use the manifest specified in the `default_manifest` setting
+ for all environments. If an environment specifies a different main manifest in its
+ `environment.conf` file, catalog requests for that environment will fail with an error.
+
+ This setting requires `default_manifest` to be set to an absolute path.",
+ :hook => proc do |value|
+ if value && !Pathname.new(Puppet[:default_manifest]).absolute?
+ raise(Puppet::Settings::ValidationError,
+ "The 'default_manifest' setting must be set to an absolute path when 'disable_per_environment_manifest' is true")
+ end
+ end,
+ },
:code => {
:default => "",
:desc => "Code to parse directly. This is essentially only used
by `puppet`, and should only be set if you're writing your own Puppet
executable.",
},
:masterlog => {
:default => "$logdir/puppetmaster.log",
:type => :file,
:owner => "service",
:group => "service",
- :mode => 0660,
- :desc => "Where puppet master logs. This is generally not used,
- since syslog is the default log destination."
+ :mode => "0660",
+ :desc => "This file is literally never used, although Puppet may create it
+ as an empty file. For more context, see the `puppetdlog` setting and
+ puppet master's `--logdest` command line option.
+
+ This setting is deprecated and will be removed in a future version of Puppet.",
+ :deprecated => :completely
},
:masterhttplog => {
:default => "$logdir/masterhttp.log",
:type => :file,
:owner => "service",
:group => "service",
- :mode => 0660,
+ :mode => "0660",
:create => true,
- :desc => "Where the puppet master web server logs."
+ :desc => "Where the puppet master web server saves its access log. This is
+ only used when running a WEBrick puppet master. When puppet master is
+ running under a Rack server like Passenger, that web server will have
+ its own logging behavior."
},
:masterport => {
:default => 8140,
:desc => "The port for puppet master traffic. For puppet master,
this is the port to listen on; for puppet agent, this is the port
to make requests on. Both applications use this setting to get the port.",
},
:node_name => {
:default => "cert",
:desc => "How the puppet master determines the client's identity
and sets the 'hostname', 'fqdn' and 'domain' facts for use in the manifest,
in particular for determining which 'node' statement applies to the client.
Possible values are 'cert' (use the subject's CN in the client's
certificate) and 'facter' (use the hostname that the client
reported in its facts)",
},
:bucketdir => {
:default => "$vardir/bucket",
:type => :directory,
- :mode => 0750,
+ :mode => "0750",
:owner => "service",
:group => "service",
:desc => "Where FileBucket files are stored."
},
:rest_authconfig => {
:default => "$confdir/auth.conf",
:type => :file,
:desc => "The configuration file that defines the rights to the different
rest indirections. This can be used as a fine-grained
authorization system for `puppet master`.",
},
:ca => {
:default => true,
:type => :boolean,
:desc => "Whether the master should function as a certificate authority.",
},
:basemodulepath => {
:default => "$confdir/modules#{File::PATH_SEPARATOR}/usr/share/puppet/modules",
:type => :path,
:desc => "The search path for **global** modules. Should be specified as a
list of directories separated by the system path separator character. (The
POSIX path separator is ':', and the Windows path separator is ';'.)
If you are using directory environments, these are the modules that will
be used by _all_ environments. Note that the `modules` directory of the active
environment will have priority over any global directories. For more info, see
http://docs.puppetlabs.com/puppet/latest/reference/environments.html
This setting also provides the default value for the deprecated `modulepath`
setting, which is used when directory environments are disabled.",
},
:modulepath => {
:default => "$basemodulepath",
:type => :path,
:desc => "The search path for modules, as a list of directories separated by the system
path separator character. (The POSIX path separator is ':', and the
Windows path separator is ';'.)
Setting a global value for `modulepath` in puppet.conf is deprecated. Please use
directory environments instead. If you need to use something other than the
default modulepath of `<ACTIVE ENVIRONMENT'S MODULES DIR>:$basemodulepath`,
you can set `modulepath` in environment.conf. For more info, see
http://docs.puppetlabs.com/puppet/latest/reference/environments.html",
:deprecated => :allowed_on_commandline,
},
:ssl_client_header => {
:default => "HTTP_X_CLIENT_DN",
:desc => "The header containing an authenticated client's SSL DN.
This header must be set by the proxy to the authenticated client's SSL
DN (e.g., `/CN=puppet.puppetlabs.com`). Puppet will parse out the Common
Name (CN) from the Distinguished Name (DN) and use the value of the CN
field for authorization.
Note that the name of the HTTP header gets munged by the web server
common gateway interface: an `HTTP_` prefix is added, dashes are converted
to underscores, and all letters are uppercased. Thus, to use the
`X-Client-DN` header, this setting should be `HTTP_X_CLIENT_DN`.",
},
:ssl_client_verify_header => {
:default => "HTTP_X_CLIENT_VERIFY",
:desc => "The header containing the status message of the client
verification. This header must be set by the proxy to 'SUCCESS' if the
client successfully authenticated, and anything else otherwise.
Note that the name of the HTTP header gets munged by the web server
common gateway interface: an `HTTP_` prefix is added, dashes are converted
to underscores, and all letters are uppercased. Thus, to use the
`X-Client-Verify` header, this setting should be
`HTTP_X_CLIENT_VERIFY`.",
},
# To make sure this directory is created before we try to use it on the server, we need
# it to be in the server section (#1138).
:yamldir => {
:default => "$vardir/yaml",
:type => :directory,
:owner => "service",
:group => "service",
- :mode => "750",
+ :mode => "0750",
:desc => "The directory in which YAML data is stored, usually in a subdirectory."},
:server_datadir => {
:default => "$vardir/server_data",
:type => :directory,
:owner => "service",
:group => "service",
- :mode => "750",
+ :mode => "0750",
:desc => "The directory in which serialized data is stored, usually in a subdirectory."},
:reports => {
:default => "store",
:desc => "The list of report handlers to use. When using multiple report handlers,
their names should be comma-separated, with whitespace allowed. (For example,
`reports = http, tagmail`.)
This setting is relevant to puppet master and puppet apply. The puppet
master will call these report handlers with the reports it receives from
agent nodes, and puppet apply will call them with its own report. (In
all cases, the node applying the catalog must have `report = true`.)
See the report reference for information on the built-in report
handlers; custom report handlers can also be loaded from modules.
(Report handlers are loaded from the lib directory, at
`puppet/reports/NAME.rb`.)",
},
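# Illustrative sketch, not a setting definition: the shape of a custom report
# handler loaded from a module's lib directory at puppet/reports/NAME.rb, per
# the description above. The :logsummary name and log message are invented:
#
#   # <module>/lib/puppet/reports/logsummary.rb
#   require 'puppet'
#   require 'puppet/reports'
#
#   Puppet::Reports.register_report(:logsummary) do
#     desc "Log a one-line summary for each processed report."
#
#     def process
#       Puppet.notice("report from #{self.host}: status #{self.status}")
#     end
#   end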
:reportdir => {
:default => "$vardir/reports",
:type => :directory,
- :mode => 0750,
+ :mode => "0750",
:owner => "service",
:group => "service",
:desc => "The directory in which to store reports. Each node gets
a separate subdirectory in this directory. This setting is only
used when the `store` report processor is enabled (see the
`reports` setting)."},
:reporturl => {
:default => "http://localhost:3000/reports/upload",
:desc => "The URL that reports should be forwarded to. This setting
is only used when the `http` report processor is enabled (see the
`reports` setting).",
},
:fileserverconfig => {
:default => "$confdir/fileserver.conf",
:type => :file,
:desc => "Where the fileserver configuration is stored.",
},
:strict_hostname_checking => {
:default => false,
:desc => "Whether to only search for the complete
hostname as it is in the certificate when searching for node information
in the catalogs.",
}
)
define_settings(:metrics,
:rrddir => {
:type => :directory,
:default => "$vardir/rrd",
- :mode => 0750,
+ :mode => "0750",
:owner => "service",
:group => "service",
:desc => "The directory where RRD database files are stored.
Directories for each reporting host will be created under
this directory."
},
:rrdinterval => {
:default => "$runinterval",
:type => :duration,
:desc => "How often RRD should expect data.
This should match how often the hosts report back to the server. #{AS_DURATION}",
}
)
define_settings(:device,
:devicedir => {
:default => "$vardir/devices",
:type => :directory,
- :mode => "750",
+ :mode => "0750",
:desc => "The root directory of devices' $vardir.",
},
:deviceconfig => {
:default => "$confdir/device.conf",
:desc => "Path to the device config file for puppet device.",
}
)
define_settings(:agent,
:node_name_value => {
:default => "$certname",
:desc => "The explicit value used for the node name for all requests the agent
makes to the master. WARNING: This setting is mutually exclusive with
node_name_fact. Changing this setting also requires changes to the default
auth.conf configuration on the Puppet Master. Please see
http://links.puppetlabs.com/node_name_value for more information."
},
:node_name_fact => {
:default => "",
:desc => "The fact name used to determine the node name used for all requests the agent
makes to the master. WARNING: This setting is mutually exclusive with
node_name_value. Changing this setting also requires changes to the default
auth.conf configuration on the Puppet Master. Please see
http://links.puppetlabs.com/node_name_fact for more information.",
:hook => proc do |value|
if !value.empty? and Puppet[:node_name_value] != Puppet[:certname]
raise "Cannot specify both the node_name_value and node_name_fact settings"
end
end
},
:localconfig => {
:default => "$statedir/localconfig",
:type => :file,
:owner => "root",
- :mode => 0660,
+ :mode => "0660",
:desc => "Where puppet agent caches the local configuration. An
extension indicating the cache format is added automatically."},
:statefile => {
:default => "$statedir/state.yaml",
:type => :file,
- :mode => 0660,
+ :mode => "0660",
:desc => "Where puppet agent and puppet master store state associated
with the running configuration. In the case of puppet master,
this file reflects the state discovered through interacting
with clients."
},
:clientyamldir => {
:default => "$vardir/client_yaml",
:type => :directory,
- :mode => "750",
+ :mode => "0750",
:desc => "The directory in which client-side YAML data is stored."
},
:client_datadir => {
:default => "$vardir/client_data",
:type => :directory,
- :mode => "750",
+ :mode => "0750",
:desc => "The directory in which serialized data is stored on the client."
},
:classfile => {
:default => "$statedir/classes.txt",
:type => :file,
:owner => "root",
- :mode => 0640,
+ :mode => "0640",
:desc => "The file in which puppet agent stores a list of the classes
associated with the retrieved configuration. Can be loaded in
the separate `puppet` executable using the `--loadclasses`
option."},
:resourcefile => {
:default => "$statedir/resources.txt",
:type => :file,
:owner => "root",
- :mode => 0640,
+ :mode => "0640",
:desc => "The file in which puppet agent stores a list of the resources
associated with the retrieved configuration." },
:puppetdlog => {
:default => "$logdir/puppetd.log",
:type => :file,
:owner => "root",
- :mode => 0640,
- :desc => "The log file for puppet agent. This is generally not used."
+ :mode => "0640",
+ :desc => "The fallback log file. This is only used when the `--logdest` option
+ is not specified AND Puppet is running on an operating system where both
+ the POSIX syslog service and the Windows Event Log are unavailable. (Currently,
+ no supported operating systems match that description.)
+
+ Despite the name, both puppet agent and puppet master will use this file
+ as the fallback logging destination.
+
+ For control over logging destinations, see the `--logdest` command line
+ option in the manual pages for puppet master, puppet agent, and puppet
+ apply. You can see man pages by running `puppet <SUBCOMMAND> --help`,
+ or read them online at http://docs.puppetlabs.com/references/latest/man/."
},
:server => {
:default => "puppet",
:desc => "The puppet master server to which the puppet agent should connect."
},
:use_srv_records => {
:default => false,
:type => :boolean,
:desc => "Whether the server will search for SRV records in DNS for the current domain.",
},
:srv_domain => {
- :default => "#{Puppet::Settings.domain_fact}",
+ :default => lambda { Puppet::Settings.domain_fact },
:desc => "The domain which will be queried to find the SRV records of servers to use.",
},
:ignoreschedules => {
:default => false,
:type => :boolean,
:desc => "Boolean; whether puppet agent should ignore schedules. This is useful
for initial puppet agent runs.",
},
:default_schedules => {
:default => true,
:type => :boolean,
:desc => "Boolean; whether to generate the default schedule resources. Setting this to
false is useful for keeping external report processors clean of skipped schedule resources.",
},
:puppetport => {
:default => 8139,
:desc => "Which port puppet agent listens on.",
},
:noop => {
:default => false,
:type => :boolean,
:desc => "Whether to apply catalogs in noop mode, which allows Puppet to
partially simulate a normal run. This setting affects puppet agent and
puppet apply.
When running in noop mode, Puppet will check whether each resource is in sync,
like it does when running normally. However, if a resource attribute is not in
the desired state (as declared in the catalog), Puppet will take no
action, and will instead report the changes it _would_ have made. These
simulated changes will appear in the report sent to the puppet master, or
be shown on the console if running puppet agent or puppet apply in the
foreground. The simulated changes will not send refresh events to any
subscribing or notified resources, although Puppet will log that a refresh
event _would_ have been sent.
**Important note:**
[The `noop` metaparameter](http://docs.puppetlabs.com/references/latest/metaparameter.html#noop)
allows you to apply individual resources in noop mode, and will override
the global value of the `noop` setting. This means a resource with
`noop => false` _will_ be changed if necessary, even when running puppet
agent with `noop = true` or `--noop`. (Conversely, a resource with
`noop => true` will only be simulated, even when noop mode is globally disabled.)",
},
:runinterval => {
:default => "30m",
:type => :duration,
:desc => "How often puppet agent applies the catalog.
Note that a runinterval of 0 means \"run continuously\" rather than
\"never run.\" If you want puppet agent to never run, you should start
it with the `--no-client` option. #{AS_DURATION}",
},
:listen => {
:default => false,
:type => :boolean,
:desc => "Whether puppet agent should listen for
connections. If this is true, then puppet agent will accept incoming
REST API requests, subject to the default ACLs and the ACLs set in
the `rest_authconfig` file. Puppet agent can respond usefully to
requests on the `run`, `facts`, `certificate`, and `resource` endpoints.",
},
:ca_server => {
:default => "$server",
:desc => "The server to use for certificate
authority requests. It's a separate server because it cannot
and does not need to horizontally scale.",
},
:ca_port => {
:default => "$masterport",
:desc => "The port to use for the certificate authority.",
},
:catalog_format => {
:default => "",
:desc => "(Deprecated for 'preferred_serialization_format') What format to
use to dump the catalog. Only supports 'marshal' and 'yaml'. Only
matters on the client, since it asks the server for a specific format.",
:hook => proc { |value|
if value
Puppet.deprecation_warning "Setting 'catalog_format' is deprecated; use 'preferred_serialization_format' instead."
Puppet.settings.override_default(:preferred_serialization_format, value)
end
}
},
:preferred_serialization_format => {
:default => "pson",
:desc => "The preferred means of serializing
ruby instances for passing over the wire. This won't guarantee that all
instances will be serialized using this method, since not all classes
can be guaranteed to support this format, but it will be used for all
classes that support it.",
},
:report_serialization_format => {
:default => "pson",
:type => :enum,
:values => ["pson", "yaml"],
:desc => "The serialization format to use when sending reports to the
`report_server`. Possible values are `pson` and `yaml`. This setting
affects puppet agent, but not puppet apply (which processes its own
reports).
This should almost always be set to `pson`. It can be temporarily set to
`yaml` to let agents using this Puppet version connect to a puppet master
running Puppet 3.0.0 through 3.2.x.
Note that this is set to 'yaml' automatically if the agent detects an
older master, so should never need to be set explicitly."
},
:legacy_query_parameter_serialization => {
:default => false,
:type => :boolean,
:desc => "The serialization format to use when sending file_metadata
query parameters. Older versions of puppet master expect certain query
parameters to be serialized as yaml, which is deprecated.
This should almost always be false. It can be temporarily set to true
to let agents using this Puppet version connect to a puppet master
running Puppet 3.0.0 through 3.2.x.
Note that this is set to true automatically if the agent detects an
older master, so should never need to be set explicitly."
},
:agent_catalog_run_lockfile => {
:default => "$statedir/agent_catalog_run.lock",
:type => :string, # (#2888) Ensure this file is not added to the settings catalog.
:desc => "A lock file to indicate that a puppet agent catalog run is currently in progress.
The file contains the pid of the process that holds the lock on the catalog run.",
},
:agent_disabled_lockfile => {
:default => "$statedir/agent_disabled.lock",
:type => :file,
:desc => "A lock file to indicate that puppet agent runs have been administratively
disabled. File contains a JSON object with state information.",
},
:usecacheonfailure => {
:default => true,
:type => :boolean,
:desc => "Whether to use the cached configuration when the remote
configuration will not compile. This option is useful for testing
new configurations, where you want to fix the broken configuration
rather than reverting to a known-good one.",
},
:use_cached_catalog => {
:default => false,
:type => :boolean,
:desc => "Whether to only use the cached catalog rather than compiling a new catalog
on every run. Puppet can be run with this enabled by default and then selectively
disabled when a recompile is desired.",
},
:ignoremissingtypes => {
:default => false,
:type => :boolean,
:desc => "Skip searching for classes and definitions that were missing during a
prior compilation. The list of missing objects is maintained per-environment and
persists until the environment is cleared or the master is restarted.",
},
:ignorecache => {
:default => false,
:type => :boolean,
:desc => "Ignore cache and always recompile the configuration. This is
useful for testing new configurations, where the local cache may in
fact be stale even if the timestamps are up to date - if the facts
change or if the server changes.",
},
:dynamicfacts => {
:default => "memorysize,memoryfree,swapsize,swapfree",
:desc => "(Deprecated) Facts that are dynamic; these facts will be ignored when deciding whether
changed facts should result in a recompile. Multiple facts should be
comma-separated.",
:hook => proc { |value|
if value
Puppet.deprecation_warning "The dynamicfacts setting is deprecated and will be ignored."
end
}
},
:splaylimit => {
:default => "$runinterval",
:type => :duration,
:desc => "The maximum time to delay before runs. Defaults to being the same as the
run interval. #{AS_DURATION}",
},
:splay => {
:default => false,
:type => :boolean,
:desc => "Whether to sleep for a pseudo-random (but consistent) amount of time before
a run.",
},
:clientbucketdir => {
:default => "$vardir/clientbucket",
:type => :directory,
- :mode => 0750,
+ :mode => "0750",
:desc => "Where FileBucket files are stored locally."
},
:configtimeout => {
:default => "2m",
:type => :duration,
:desc => "How long the client should wait for the configuration to be retrieved
before considering it a failure. This can help reduce flapping if too
many clients contact the server at one time. #{AS_DURATION}",
},
:report_server => {
:default => "$server",
:desc => "The server to send transaction reports to.",
},
:report_port => {
:default => "$masterport",
:desc => "The port to communicate with the report_server.",
},
:inventory_server => {
:default => "$server",
:desc => "The server to send facts to.",
},
:inventory_port => {
:default => "$masterport",
:desc => "The port to communicate with the inventory_server.",
},
:report => {
:default => true,
:type => :boolean,
:desc => "Whether to send reports after every transaction.",
},
:lastrunfile => {
:default => "$statedir/last_run_summary.yaml",
:type => :file,
- :mode => 0644,
+ :mode => "0644",
:desc => "Where puppet agent stores the last run report summary in yaml format."
},
:lastrunreport => {
:default => "$statedir/last_run_report.yaml",
:type => :file,
- :mode => 0640,
+ :mode => "0640",
:desc => "Where puppet agent stores the last run report in yaml format."
},
:graph => {
:default => false,
:type => :boolean,
:desc => "Whether to create dot graph files for the different
configuration graphs. These dot files can be interpreted by tools
like OmniGraffle or dot (which is part of Graphviz).",
},
:graphdir => {
:default => "$statedir/graphs",
:type => :directory,
:desc => "Where to store dot-outputted graphs.",
},
:http_compression => {
:default => false,
:type => :boolean,
:desc => "Allow http compression in REST communication with the master.
This setting might improve performance for agent -> master
communications over slow WANs. Your puppet master needs to support
compression (usually by activating some settings in a reverse-proxy in
front of the puppet master, which rules out webrick). It is harmless to
activate this setting if your master doesn't support compression, but
if it supports it, this setting might reduce performance on high-speed LANs.",
},
:waitforcert => {
:default => "2m",
:type => :duration,
:desc => "How frequently puppet agent should ask for a signed certificate.
When starting for the first time, puppet agent will submit a certificate
signing request (CSR) to the server named in the `ca_server` setting
(usually the puppet master); this may be autosigned, or may need to be
approved by a human, depending on the CA server's configuration.
Puppet agent cannot apply configurations until its approved certificate is
available. Since the certificate may or may not be available immediately,
puppet agent will repeatedly try to fetch it at this interval. You can
turn off waiting for certificates by specifying a time of 0, in which case
puppet agent will exit if it cannot get a cert.
#{AS_DURATION}",
},
:ordering => {
:type => :enum,
:values => ["manifest", "title-hash", "random"],
:default => "title-hash",
:desc => "How unrelated resources should be ordered when applying a catalog.
Allowed values are `title-hash`, `manifest`, and `random`. This
setting affects puppet agent and puppet apply, but not puppet master.
* `title-hash` (the default) will order resources randomly, but will use
the same order across runs and across nodes.
* `manifest` will use the order in which the resources were declared in
their manifest files.
* `random` will order resources randomly and change their order with each
run. This can work like a fuzzer for shaking out undeclared dependencies.
Regardless of this setting's value, Puppet will always obey explicit
dependencies set with the before/require/notify/subscribe metaparameters
and the `->`/`~>` chaining arrows; this setting only affects the relative
ordering of _unrelated_ resources."
}
)
define_settings(:inspect,
:archive_files => {
:type => :boolean,
:default => false,
:desc => "During an inspect run, whether to archive files whose contents are audited to a file bucket.",
},
:archive_file_server => {
:default => "$server",
:desc => "During an inspect run, the file bucket server to archive files to if archive_files is set.",
}
)
# Plugin information.
define_settings(
:main,
:plugindest => {
:type => :directory,
:default => "$libdir",
:desc => "Where Puppet should store plugins that it pulls down from the central
server.",
},
:pluginsource => {
:default => "puppet://$server/plugins",
:desc => "From where to retrieve plugins. The standard Puppet `file` type
is used for retrieval, so anything that is a valid file source can
be used here.",
},
:pluginfactdest => {
:type => :directory,
:default => "$vardir/facts.d",
:desc => "Where Puppet should store external facts that are being handled by pluginsync",
},
:pluginfactsource => {
:default => "puppet://$server/pluginfacts",
:desc => "Where to retrieve external facts for pluginsync",
},
:pluginsync => {
:default => true,
:type => :boolean,
:desc => "Whether plugins should be synced with the central server.",
},
:pluginsignore => {
:default => ".svn CVS .git",
:desc => "What files to ignore when pulling down plugins.",
}
)
# Central fact information.
define_settings(
:main,
:factpath => {
:type => :path,
:default => "$vardir/lib/facter#{File::PATH_SEPARATOR}$vardir/facts",
:desc => "Where Puppet should look for facts. Multiple directories should
be separated by the system path separator character. (The POSIX path
separator is ':', and the Windows path separator is ';'.)",
:call_hook => :on_initialize_and_write, # Call our hook with the default value, so we always get the value added to facter.
:hook => proc do |value|
paths = value.split(File::PATH_SEPARATOR)
Facter.search(*paths)
end
}
)
define_settings(
:tagmail,
:tagmap => {
:default => "$confdir/tagmail.conf",
:desc => "The mapping between reporting tags and email addresses.",
},
:sendmail => {
:default => which('sendmail') || '',
:desc => "Where to find the sendmail binary with which to send email.",
},
:reportfrom => {
- :default => "report@" + [Facter["hostname"].value,Facter["domain"].value].join("."),
+ :default => lambda { "report@#{Puppet::Settings.default_certname.downcase}" },
:desc => "The 'from' email address for the reports.",
},
:smtpserver => {
:default => "none",
:desc => "The server through which to send email reports.",
},
:smtpport => {
:default => 25,
:desc => "The TCP port through which to send email reports.",
},
:smtphelo => {
- :default => Facter["fqdn"].value,
+ :default => lambda { Facter.value 'fqdn' },
:desc => "The name by which we identify ourselves in SMTP HELO for reports.
If you send to a smtpserver which does strict HELO checking (as with Postfix's
`smtpd_helo_restrictions` access controls), you may need to ensure this resolves.",
}
)
define_settings(
:rails,
:dblocation => {
:default => "$statedir/clientconfigs.sqlite3",
:type => :file,
- :mode => 0660,
+ :mode => "0660",
:owner => "service",
:group => "service",
:desc => "The sqlite database file. #{STORECONFIGS_ONLY}"
},
:dbadapter => {
:default => "sqlite3",
:desc => "The type of database to use. #{STORECONFIGS_ONLY}",
},
:dbmigrate => {
:default => false,
:type => :boolean,
:desc => "Whether to automatically migrate the database. #{STORECONFIGS_ONLY}",
},
:dbname => {
:default => "puppet",
:desc => "The name of the database to use. #{STORECONFIGS_ONLY}",
},
:dbserver => {
:default => "localhost",
:desc => "The database server for caching. Only
used when networked databases are used.",
},
:dbport => {
:default => "",
:desc => "The database password for caching. Only
used when networked databases are used. #{STORECONFIGS_ONLY}",
},
:dbuser => {
:default => "puppet",
:desc => "The database user for caching. Only
used when networked databases are used. #{STORECONFIGS_ONLY}",
},
:dbpassword => {
:default => "puppet",
:desc => "The database password for caching. Only
used when networked databases are used. #{STORECONFIGS_ONLY}",
},
:dbconnections => {
:default => '',
:desc => "The number of database connections for networked
databases. Will be ignored unless the value is a positive integer. #{STORECONFIGS_ONLY}",
},
:dbsocket => {
:default => "",
:desc => "The database socket location. Only used when networked
databases are used. Will be ignored if the value is an empty string. #{STORECONFIGS_ONLY}",
},
:railslog => {
:default => "$logdir/rails.log",
:type => :file,
- :mode => 0600,
+ :mode => "0600",
:owner => "service",
:group => "service",
:desc => "Where Rails-specific logs are sent. #{STORECONFIGS_ONLY}"
},
:rails_loglevel => {
:default => "info",
:desc => "The log level for Rails connections. The value must be
a valid log level within Rails. Production environments normally use `info`
and other environments normally use `debug`. #{STORECONFIGS_ONLY}",
}
)
define_settings(
:couchdb,
:couchdb_url => {
:default => "http://127.0.0.1:5984/puppet",
:desc => "The url where the puppet couchdb database will be created.
Only used when `facts_terminus` is set to `couch`.",
}
)
define_settings(
:transaction,
:tags => {
:default => "",
:desc => "Tags to use to find resources. If this is set, then
only resources tagged with the specified tags will be applied.
Values must be comma-separated.",
},
:evaltrace => {
:default => false,
:type => :boolean,
:desc => "Whether each resource should log when it is
being evaluated. This allows you to interactively see exactly
what is being done.",
},
:summarize => {
:default => false,
:type => :boolean,
:desc => "Whether to print a transaction summary.",
}
)
define_settings(
:main,
:external_nodes => {
:default => "none",
:desc => "An external command that can produce node information. The command's output
must be a YAML dump of a hash, and that hash must have a `classes` key and/or
a `parameters` key, where `classes` is an array or hash and
`parameters` is a hash. For unknown nodes, the command should
exit with a non-zero exit code.
This command makes it straightforward to store your node mapping
information in other data sources like databases.",
}
)
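# Illustrative only: a minimal external node classifier of the kind
# :external_nodes can point at. The data file is hypothetical; the contract is
# just the one described above -- print a YAML hash with 'classes' and/or
# 'parameters' keys and exit non-zero for unknown nodes.
#
#   #!/usr/bin/env ruby
#   require 'yaml'
#   nodes = YAML.load_file('/etc/puppet/enc_nodes.yaml')   # hypothetical data source
#   node  = nodes[ARGV[0]] or exit(1)                      # unknown node => non-zero exit
#   puts({ 'classes' => node['classes'], 'parameters' => node['parameters'] }.to_yaml)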
define_settings(
:ldap,
:ldapssl => {
:default => false,
:type => :boolean,
:desc => "Whether SSL should be used when searching for nodes.
Defaults to false because SSL usually requires certificates
to be set up on the client side.",
},
:ldaptls => {
:default => false,
:type => :boolean,
:desc => "Whether TLS should be used when searching for nodes.
Defaults to false because TLS usually requires certificates
to be set up on the client side.",
},
:ldapserver => {
:default => "ldap",
:desc => "The LDAP server. Only used if `node_terminus` is set to `ldap`.",
},
:ldapport => {
:default => 389,
:desc => "The LDAP port. Only used if `node_terminus` is set to `ldap`.",
},
:ldapstring => {
:default => "(&(objectclass=puppetClient)(cn=%s))",
:desc => "The search string used to find an LDAP node.",
},
:ldapclassattrs => {
:default => "puppetclass",
:desc => "The LDAP attributes to use to define Puppet classes. Values
should be comma-separated.",
},
:ldapstackedattrs => {
:default => "puppetvar",
:desc => "The LDAP attributes that should be stacked to arrays by adding
the values in all hierarchy elements of the tree. Values
should be comma-separated.",
},
:ldapattrs => {
:default => "all",
:desc => "The LDAP attributes to include when querying LDAP for nodes. All
returned attributes are set as variables in the top-level scope.
Multiple values should be comma-separated. The value 'all' returns
all attributes.",
},
:ldapparentattr => {
:default => "parentnode",
:desc => "The attribute to use to define the parent node.",
},
:ldapuser => {
:default => "",
:desc => "The user to use to connect to LDAP. Must be specified as a
full DN.",
},
:ldappassword => {
:default => "",
:desc => "The password to use to connect to LDAP.",
},
:ldapbase => {
:default => "",
:desc => "The search base for LDAP searches. It's impossible to provide
a meaningful default here, although the LDAP libraries might
have one already set. Generally, it should be the 'ou=Hosts'
branch under your main directory.",
}
)
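# Illustrative only: the node name is substituted into :ldapstring before the
# search is run against :ldapbase, roughly:
#
#   "(&(objectclass=puppetClient)(cn=%s))" % "web01"
#   #=> "(&(objectclass=puppetClient)(cn=web01))"
#
# This substitution sketch comes from the setting descriptions above, not from
# the LDAP terminus implementation itself.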
define_settings(:master,
:storeconfigs => {
:default => false,
:type => :boolean,
:desc => "Whether to store each client's configuration, including catalogs, facts,
and related data. This also enables the import and export of resources in
the Puppet language - a mechanism for exchanging resources between nodes.
By default this uses ActiveRecord and an SQL database to store and query
the data; this, in turn, will depend on Rails being available.
You can adjust the backend using the storeconfigs_backend setting.",
# Call our hook with the default value, so we always get the libdir set.
:call_hook => :on_initialize_and_write,
:hook => proc do |value|
require 'puppet/node'
require 'puppet/node/facts'
if value
if not Puppet.settings[:async_storeconfigs]
Puppet::Resource::Catalog.indirection.cache_class = :store_configs
Puppet.settings.override_default(:catalog_cache_terminus, :store_configs)
end
Puppet::Node::Facts.indirection.cache_class = :store_configs
Puppet::Resource.indirection.terminus_class = :store_configs
end
end
},
:storeconfigs_backend => {
:type => :terminus,
:default => "active_record",
:desc => "Configure the backend terminus used for StoreConfigs.
By default, this uses the ActiveRecord store, which directly talks to the
database from within the Puppet Master process."
}
)
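# Illustrative only: enabling storeconfigs is normally done from puppet.conf;
# the values below are examples (the backend shown is simply the default).
#
#   [master]
#   storeconfigs         = true
#   storeconfigs_backend = active_record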
define_settings(:parser,
:templatedir => {
:default => "$vardir/templates",
:type => :directory,
:desc => "Where Puppet looks for template files. Can be a list of colon-separated
directories.
This setting is deprecated. Please put your templates in modules instead.",
:deprecated => :completely,
},
:allow_variables_with_dashes => {
:default => false,
:desc => <<-'EOT'
Permit hyphens (`-`) in variable names and issue deprecation warnings about
them. This setting **should always be `false`;** setting it to `true`
will cause subtle and wide-ranging bugs. It will be removed in a future version.
Hyphenated variables caused major problems in the language, but were allowed
between Puppet 2.7.3 and 2.7.14. If you used them during this window, we
apologize for the inconvenience --- you can temporarily set this to `true`
in order to upgrade, and can rename your variables at your leisure. Please
revert it to `false` after you have renamed all affected variables.
EOT
},
:parser => {
:default => "current",
:desc => <<-'EOT'
Selects the parser to use for parsing puppet manifests (in puppet DSL
language/'.pp' files). Available choices are `current` (the default)
and `future`.
- The `curent` parser means that the released version of the parser should
+ The `current` parser means that the released version of the parser should
be used.
The `future` parser is a "time travel to the future" allowing early
exposure to new language features. What these features are will vary from
release to release, and they may be individually configurable.
Available Since Puppet 3.2.
EOT
},
- :evaluator => {
- :default => "future",
- :hook => proc do |value|
- if !['future', 'current'].include?(value)
- raise "evaluator can only be set to 'future' or 'current', got '#{value}'"
- end
- end,
- :desc => <<-'EOT'
- Which evaluator to use when compiling Puppet manifests. Valid values
- are `current` and `future` (the default).
-
- **Note:** This setting is only used when `parser = future`. It allows
- testers to turn off the `future` evaluator when doing detailed tests and
- comparisons of the new compilation system.
-
- Evaluation is the second stage of catalog compilation. After the parser
- converts a manifest to a model of expressions, the evaluator processes
- each expression. (For example, a resource declaration signals the
- evaluator to add a resource to the catalog).
-
- The `future` parser and evaluator are slated to become default in Puppet
- 4. Their purpose is to add new features and improve consistency
- and reliability.
-
- Available Since Puppet 3.5.
- EOT
- },
- :biff => {
- :default => false,
- :type => :boolean,
- :hook => proc do |value|
- if Puppet.settings[:parser] != 'future'
- Puppet.settings.override_default(:parser, 'future')
- end
- if Puppet.settings[:evaluator] != 'future'
- Puppet.settings.override_default(:evaluator, 'future')
- end
- end,
- :desc => <<-EOT
- Turns on Biff the catalog builder, future parser, and future evaluator.
- This is an experimental feature - and this setting may go away before
- release of Pupet 3.6.
- EOT
- },
:max_errors => {
:default => 10,
:desc => <<-'EOT'
Sets the max number of logged/displayed parser validation errors in case
- multiple errors have been detected. A value of 0 is the same as value 1.
- The count is per manifest.
+ multiple errors have been detected. A value of 0 is the same as a value of 1; a
+ minimum of one error is always raised. The count is per manifest.
EOT
},
:max_warnings => {
:default => 10,
:desc => <<-'EOT'
Sets the max number of logged/displayed parser validation warnings in
- case multiple errors have been detected. A value of 0 is the same as
- value 1. The count is per manifest.
+ case multiple warnings have been detected. A value of 0 blocks logging of
+ warnings. The count is per manifest.
EOT
},
:max_deprecations => {
:default => 10,
:desc => <<-'EOT'
Sets the max number of logged/displayed parser validation deprecation
- warnings in case multiple errors have been detected. A value of 0 is the
- same as value 1. The count is per manifest.
+ warnings in case multiple deprecation warnings have been detected. A value of 0
+ blocks the logging of deprecation warnings. The count is per manifest.
EOT
},
:strict_variables => {
:default => false,
:type => :boolean,
:desc => <<-'EOT'
Makes the parser raise errors when referencing unknown variables. (This does not affect
referencing variables that are explicitly set to undef).
EOT
}
)
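# Illustrative only: these parser settings are normally set from puppet.conf;
# the values below are examples, not shipped defaults.
#
#   [main]
#   parser           = future
#   strict_variables = true
#   max_errors       = 20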
define_settings(:puppetdoc,
:document_all => {
:default => false,
:type => :boolean,
:desc => "Whether to document all resources when using `puppet doc` to
generate manifest documentation.",
}
)
end
diff --git a/lib/puppet/environments.rb b/lib/puppet/environments.rb
index b77fc337c..9f7ac5c31 100644
--- a/lib/puppet/environments.rb
+++ b/lib/puppet/environments.rb
@@ -1,359 +1,367 @@
# @api private
module Puppet::Environments
+
+ class EnvironmentNotFound < Puppet::Error
+ def initialize(environment_name, original = nil)
+ environmentpath = Puppet[:environmentpath]
+ super("Could not find a directory environment named '#{environment_name}' anywhere in the path: #{environmentpath}. Does the directory exist?", original)
+ end
+ end
+
# @api private
module EnvironmentCreator
# Create an anonymous environment.
#
# @param module_path [String] A list of module directories separated by the
# PATH_SEPARATOR
# @param manifest [String] The path to the manifest
# @return A new environment with the `name` `:anonymous`
#
# @api private
def for(module_path, manifest)
Puppet::Node::Environment.create(:anonymous,
module_path.split(File::PATH_SEPARATOR),
manifest)
end
end
# @!macro [new] loader_search_paths
# A list of indicators of where the loader is getting its environments from.
# @return [Array<String>] The URIs of the load locations
#
# @!macro [new] loader_list
# @return [Array<Puppet::Node::Environment>] All of the environments known
# to the loader
#
# @!macro [new] loader_get
# Find a named environment
#
# @param name [String,Symbol] The name of environment to find
# @return [Puppet::Node::Environment, nil] the requested environment or nil
# if it wasn't found
#
# @!macro [new] loader_get_conf
# Attempt to obtain the initial configuration for the environment. Not all
# loaders can provide this.
#
# @param name [String,Symbol] The name of the environment whose configuration
# we are looking up
# @return [Puppet::Setting::EnvironmentConf, nil] the configuration for the
# requested environment, or nil if not found or no configuration is available
# A source of pre-defined environments.
#
# @api private
class Static
include EnvironmentCreator
def initialize(*environments)
@environments = environments
end
# @!macro loader_search_paths
def search_paths
["data:text/plain,internal"]
end
# @!macro loader_list
def list
@environments
end
# @!macro loader_get
def get(name)
@environments.find do |env|
env.name == name.intern
end
end
# Returns a basic environment configuration object tied to the environment's
# implementation values. Will not interpolate.
#
# @!macro loader_get_conf
def get_conf(name)
env = get(name)
if env
Puppet::Settings::EnvironmentConf.static_for(env)
else
nil
end
end
end
# A source of unlisted pre-defined environments.
#
# Used only for internal bootstrapping environments which are not relevant
# to an end user (such as the fall back 'configured' environment).
#
# @api private
class StaticPrivate < Static
# Unlisted
#
# @!macro loader_list
def list
[]
end
end
# Old-style environments that come either from explicit stanzas in
# puppet.conf or from dynamic environments created from use of `$environment`
# in puppet.conf.
#
# @example Explicit Stanza
# [environment_name]
# modulepath=/var/my_env/modules
#
# @example Dynamic Environments
# [master]
# modulepath=/var/$environment/modules
#
# @api private
class Legacy
include EnvironmentCreator
# @!macro loader_search_paths
def search_paths
["file://#{Puppet[:config]}"]
end
# @note The list of environments for the Legacy environments is always
# empty.
#
# @!macro loader_list
def list
[]
end
# @note Because the Legacy system cannot list out all of its environments,
# get is able to return environments that are not returned by a call to
# {#list}.
#
# @!macro loader_get
def get(name)
Puppet::Node::Environment.new(name)
end
# @note we could return something here, but since legacy environments
# are deprecated, there is no point.
#
# @!macro loader_get_conf
def get_conf(name)
nil
end
end
# Reads environments from a directory on disk. Each environment is
# represented as a sub-directory. The environment's manifest setting is the
# `manifest` directory of the environment directory. The environment's
# modulepath setting is the global modulepath (from the `[master]` section
# for the master) prepended with the `modules` directory of the environment
# directory.
#
# @api private
class Directories
def initialize(environment_dir, global_module_path)
@environment_dir = environment_dir
@global_module_path = global_module_path
end
# Generate an array of directory loaders from a path string.
# @param path [String] path to environment directories
# @param global_module_path [Array<String>] the global modulepath setting
# @return [Array<Puppet::Environments::Directories>] An array
# of configured directory loaders.
def self.from_path(path, global_module_path)
environments = path.split(File::PATH_SEPARATOR)
environments.map do |dir|
Puppet::Environments::Directories.new(dir, global_module_path)
end
end
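# Illustrative use of .from_path (the directories below are hypothetical):
#
#   Directories.from_path(
#     "/etc/puppet/environments#{File::PATH_SEPARATOR}/opt/environments",
#     ["/etc/puppet/modules"])
#   #=> one Directories loader per listed directory, all sharing the global modulepath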
# @!macro loader_search_paths
def search_paths
["file://#{@environment_dir}"]
end
# @!macro loader_list
def list
valid_directories.collect do |envdir|
name = Puppet::FileSystem.basename_string(envdir)
setting_values = Puppet.settings.values(name, Puppet.settings.preferred_run_mode)
env = Puppet::Node::Environment.create(
name.intern,
Puppet::Node::Environment.split_path(setting_values.interpolate(:modulepath)),
setting_values.interpolate(:manifest),
setting_values.interpolate(:config_version)
)
env.watching = false
env
end
end
# @!macro loader_get
def get(name)
list.find { |env| env.name == name.intern }
end
# @!macro loader_get_conf
def get_conf(name)
valid_directories.each do |envdir|
envname = Puppet::FileSystem.basename_string(envdir)
if envname == name.to_s
return Puppet::Settings::EnvironmentConf.load_from(envdir, @global_module_path)
end
end
nil
end
private
def valid_directories
if Puppet::FileSystem.directory?(@environment_dir)
Puppet::FileSystem.children(@environment_dir).select do |child|
name = Puppet::FileSystem.basename_string(child)
Puppet::FileSystem.directory?(child) &&
Puppet::Node::Environment.valid_name?(name)
end
else
[]
end
end
end
# Combine together multiple loaders to act as one.
# @api private
class Combined
def initialize(*loaders)
@loaders = loaders
end
# @!macro loader_search_paths
def search_paths
@loaders.collect(&:search_paths).flatten
end
# @!macro loader_list
def list
@loaders.collect(&:list).flatten
end
# @!macro loader_get
def get(name)
@loaders.each do |loader|
if env = loader.get(name)
return env
end
end
nil
end
# @!macro loader_get_conf
def get_conf(name)
@loaders.each do |loader|
if conf = loader.get_conf(name)
return conf
end
end
nil
end
end
class Cached < Combined
INFINITY = 1.0 / 0.0
def initialize(*loaders)
super
@cache = {}
end
def get(name)
evict_if_expired(name)
if result = @cache[name]
return result.value
elsif (result = super(name))
@cache[name] = entry(result)
result
end
end
# Clears the cache of the environment with the given name.
# (The intention is that this could be used from a MANUAL cache eviction command (TBD))
def clear(name)
@cache.delete(name)
end
# Clears all cached environments.
# (The intention is that this could be used from a MANUAL cache eviction command (TBD))
def clear_all()
@cache = {}
end
# This implementation evicts the cache, and always gets the current configuration of the environment
# TODO: While this is wasteful since it needs to go on a search for the conf, it is too disruptive to optimize
# this.
#
def get_conf(name)
evict_if_expired(name)
super name
end
# Creates a suitable cache entry given the time to live for one environment
#
def entry(env)
ttl = (conf = get_conf(env.name)) ? conf.environment_timeout : Puppet.settings.value(:environment_timeout)
case ttl
when 0
NotCachedEntry.new(env) # Entry that is always expired (avoids syscall to get time)
when INFINITY
Entry.new(env) # Entry that never expires (avoids syscall to get time)
else
TTLEntry.new(env, ttl)
end
end
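# Illustrative mapping from environment_timeout to the entry classes defined
# below (the timeout values are examples, not defaults):
#
#   timeout 0         => NotCachedEntry  # always expired, re-fetched on every #get
#   timeout 300       => TTLEntry        # expires 300 seconds after creation
#   timeout INFINITY  => Entry           # never expires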
# Evicts the entry if it has expired
#
def evict_if_expired(name)
if (result = @cache[name]) && result.expired?
@cache.delete(name)
end
end
# Never evicting entry
class Entry
attr_reader :value
def initialize(value)
@value = value
end
def expired?
false
end
end
# Always evicting entry
class NotCachedEntry < Entry
def expired?
true
end
end
# Time to Live eviction policy entry
class TTLEntry < Entry
def initialize(value, ttl_seconds)
super value
@ttl = Time.now + ttl_seconds
end
def expired?
Time.now > @ttl
end
end
end
end
diff --git a/lib/puppet/external/nagios/base.rb b/lib/puppet/external/nagios/base.rb
index 0aa50b411..06f6987ab 100644
--- a/lib/puppet/external/nagios/base.rb
+++ b/lib/puppet/external/nagios/base.rb
@@ -1,472 +1,472 @@
# The base class for all of our Nagios object types. Everything else
# is mostly just data.
class Nagios::Base
class UnknownNagiosType < RuntimeError # When an unknown type is asked for by name.
end
include Enumerable
class << self
attr_accessor :parameters, :derivatives, :ocs, :name, :att
attr_accessor :ldapbase
attr_writer :namevar
attr_reader :superior
end
# Attach one class to another.
def self.attach(hash)
@attach ||= {}
hash.each do |n, v| @attach[n] = v end
end
# Convert a parameter to camelcase
def self.camelcase(param)
param.gsub(/_./) do |match|
match.sub(/_/,'').capitalize
end
end
# Uncamelcase a parameter.
def self.decamelcase(param)
param.gsub(/[A-Z]/) do |match|
"_#{match.downcase}"
end
end
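# Illustrative round trip between the two helpers above:
#
#   camelcase("max_check_attempts")   #=> "maxCheckAttempts"
#   decamelcase("maxCheckAttempts")   #=> "max_check_attempts"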
# Create a new instance of a given class.
def self.create(name, args = {})
name = name.intern if name.is_a? String
if @types.include?(name)
@types[name].new(args)
else
raise UnknownNagiosType, "Unknown type #{name}"
end
end
# Yield each type in turn.
def self.eachtype
@types.each do |name, type|
yield [name, type]
end
end
# Create a mapping.
def self.map(hash)
@map ||= {}
hash.each do |n, v| @map[n] = v end
end
# Return a mapping (or nil) for a param
def self.mapping(name)
name = name.intern if name.is_a? String
if defined?(@map)
@map[name]
else
nil
end
end
# Return the namevar for the canonical name.
def self.namevar
if defined?(@namevar)
return @namevar
else
if parameter?(:name)
return :name
elsif tmp = (self.name.to_s + "_name").intern and parameter?(tmp)
@namevar = tmp
return @namevar
else
raise "Type #{self.name} has no name var"
end
end
end
# Create a new type.
def self.newtype(name, &block)
name = name.intern if name.is_a? String
@types ||= {}
# Create the class, with the correct name.
t = Class.new(self)
t.name = name
# Everyone gets this. There should probably be a better way, and I
# should probably hack the attribute system to look things up based on
# this "use" setting, but, eh.
t.parameters = [:use]
const_set(name.to_s.capitalize,t)
# Evaluate the passed block. This should usually define all of the work.
t.class_eval(&block)
@types[name] = t
end
# Define both the normal case and camelcase method for a parameter
def self.paramattr(name)
camel = camelcase(name)
param = name
[name, camel].each do |method|
define_method(method) do
@parameters[param]
end
define_method(method.to_s + "=") do |value|
@parameters[param] = value
end
end
end
# Is the specified name a valid parameter?
def self.parameter?(name)
name = name.intern if name.is_a? String
@parameters.include?(name)
end
# Manually set the namevar
def self.setnamevar(name)
name = name.intern if name.is_a? String
@namevar = name
end
# Set the valid parameters for this class
def self.setparameters(*array)
@parameters += array
end
# Set the superior ldap object class. Seems silly to include this
# in this class, but, eh.
def self.setsuperior(name)
@superior = name
end
# Parameters to suppress in output.
def self.suppress(name)
@suppress ||= []
@suppress << name
end
# Whether a given parameter is suppressed.
def self.suppress?(name)
defined?(@suppress) and @suppress.include?(name)
end
# Return our name as the string.
def self.to_s
self.name.to_s
end
# Return a type by name.
def self.type(name)
name = name.intern if name.is_a? String
@types[name]
end
# Convenience methods.
def [](param)
send(param)
end
# Convenience methods.
def []=(param,value)
send(param.to_s + "=", value)
end
# Iterate across all of our set parameters.
def each
@parameters.each { |param,value|
yield(param,value)
}
end
# Initialize our object, optionally with a list of parameters.
def initialize(args = {})
@parameters = {}
args.each { |param,value|
self[param] = value
}
if @namevar == :_naginator_name
self['_naginator_name'] = self['name']
end
end
# Handle parameters like attributes.
def method_missing(mname, *args)
pname = mname.to_s
pname.sub!(/=/, '')
if self.class.parameter?(pname)
if pname =~ /A-Z/
pname = self.class.decamelcase(pname)
end
self.class.paramattr(pname)
# Now access the parameters directly, to make it at least less
# likely we'll end up in an infinite recursion.
if mname.to_s =~ /=$/
@parameters[pname] = args.first
else
return @parameters[mname]
end
else
super
end
end
# Retrieve our name, through a bit of redirection.
def name
send(self.class.namevar)
end
# This is probably a bad idea.
def name=(value)
unless self.class.namevar.to_s == "name"
send(self.class.namevar.to_s + "=", value)
end
end
def namevar
(self.type + "_name").intern
end
def parammap(param)
unless defined?(@map)
map = {
self.namevar => "cn"
}
map.update(self.class.map) if self.class.map
end
if map.include?(param)
return map[param]
else
return "nagios-" + param.id2name.gsub(/_/,'-')
end
end
def parent
unless defined?(self.class.attached)
puts "Duh, you called parent on an unattached class"
return
end
klass,param = self.class.attached
unless @parameters.include?(param)
puts "Huh, no attachment param"
return
end
klass[@parameters[param]]
end
# okay, this sucks
# how do i get my list of ocs?
def to_ldif
str = self.dn + "\n"
ocs = Array.new
if self.class.ocs
# i'm storing an array, so i have to flatten it and stuff
kocs = self.class.ocs
ocs.push(*kocs)
end
ocs.push "top"
oc = self.class.to_s
oc.sub!(/Nagios/,'nagios')
oc.sub!(/::/,'')
ocs.push oc
ocs.each { |oc|
str += "objectclass: #{oc}\n"
}
@parameters.each { |name,value|
next if self.class.suppress.include?(name)
ldapname = self.parammap(name)
str += ldapname + ": #{value}\n"
}
str += "\n"
end
def to_s
str = "define #{self.type} {\n"
@parameters.keys.sort.each { |param|
value = @parameters[param]
str += %{\t%-30s %s\n} % [ param,
if value.is_a? Array
value.join(",").sub(';', '\;')
else
- value.sub(';', '\;')
+ value.to_s.sub(';', '\;')
end
]
}
str += "}\n"
str
end
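# Illustrative output of #to_s for a small host object (the values are made
# up; each parameter line starts with a literal tab):
#
#   define host {
#   	address                        192.168.0.1
#   	host_name                      web01
#   }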
# The type of object we are.
def type
self.class.name
end
# object types
newtype :host do
setparameters :host_name, :alias, :display_name, :address, :parents,
:hostgroups, :check_command, :initial_state, :max_check_attempts,
:check_interval, :retry_interval, :active_checks_enabled,
:passive_checks_enabled, :check_period, :obsess_over_host,
:check_freshness, :freshness_threshold, :event_handler,
:event_handler_enabled, :low_flap_threshold, :high_flap_threshold,
:flap_detection_enabled, :flap_detection_options,
:failure_prediction_enabled, :process_perf_data,
:retain_status_information, :retain_nonstatus_information, :contacts,
:contact_groups, :notification_interval, :first_notification_delay,
:notification_period, :notification_options, :notifications_enabled,
:stalking_options, :notes, :notes_url, :action_url, :icon_image,
:icon_image_alt, :vrml_image, :statusmap_image, "2d_coords".intern,
"3d_coords".intern,
:register, :use,
:realm, :poller_tag, :business_impact
setsuperior "person"
map :address => "ipHostNumber"
end
newtype :hostgroup do
setparameters :hostgroup_name, :alias, :members, :hostgroup_members, :notes,
:notes_url, :action_url,
:register, :use,
:realm
end
newtype :service do
attach :host => :host_name
setparameters :host_name, :hostgroup_name, :service_description,
:display_name, :servicegroups, :is_volatile, :check_command,
:initial_state, :max_check_attempts, :check_interval, :retry_interval,
:normal_check_interval, :retry_check_interval, :active_checks_enabled,
:passive_checks_enabled, :parallelize_check, :check_period,
:obsess_over_service, :check_freshness, :freshness_threshold,
:event_handler, :event_handler_enabled, :low_flap_threshold,
:high_flap_threshold, :flap_detection_enabled,:flap_detection_options,
:process_perf_data, :failure_prediction_enabled, :retain_status_information,
:retain_nonstatus_information, :notification_interval,
:first_notification_delay, :notification_period, :notification_options,
:notifications_enabled, :contacts, :contact_groups, :stalking_options,
:notes, :notes_url, :action_url, :icon_image, :icon_image_alt,
:register, :use,
:_naginator_name,
:poller_tag, :business_impact
suppress :host_name
setnamevar :_naginator_name
end
newtype :servicegroup do
setparameters :servicegroup_name, :alias, :members, :servicegroup_members,
:notes, :notes_url, :action_url,
:register, :use
end
newtype :contact do
setparameters :contact_name, :alias, :contactgroups,
:host_notifications_enabled, :service_notifications_enabled,
:host_notification_period, :service_notification_period,
:host_notification_options, :service_notification_options,
:host_notification_commands, :service_notification_commands,
:email, :pager, :address1, :address2, :address3, :address4,
:address5, :address6, :can_submit_commands, :retain_status_information,
:retain_nonstatus_information,
:register, :use
setsuperior "person"
end
newtype :contactgroup do
setparameters :contactgroup_name, :alias, :members, :contactgroup_members,
:register, :use
end
# TODO - We should support generic time periods here eg "day 1 - 15"
newtype :timeperiod do
setparameters :timeperiod_name, :alias, :sunday, :monday, :tuesday,
:wednesday, :thursday, :friday, :saturday, :exclude,
:register, :use
end
newtype :command do
setparameters :command_name, :command_line,
:poller_tag
end
newtype :servicedependency do
setparameters :dependent_host_name, :dependent_hostgroup_name,
:dependent_service_description, :host_name, :hostgroup_name,
:service_description, :inherits_parent, :execution_failure_criteria,
:notification_failure_criteria, :dependency_period,
:register, :use,
:_naginator_name
setnamevar :_naginator_name
end
newtype :serviceescalation do
setparameters :host_name, :hostgroup_name, :servicegroup_name,
:service_description, :contacts, :contact_groups,
:first_notification, :last_notification, :notification_interval,
:escalation_period, :escalation_options,
:register, :use,
:_naginator_name
setnamevar :_naginator_name
end
newtype :hostdependency do
setparameters :dependent_host_name, :dependent_hostgroup_name, :host_name,
:hostgroup_name, :inherits_parent, :execution_failure_criteria,
:notification_failure_criteria, :dependency_period,
:register, :use,
:_naginator_name
setnamevar :_naginator_name
end
newtype :hostescalation do
setparameters :host_name, :hostgroup_name, :contacts, :contact_groups,
:first_notification, :last_notification, :notification_interval,
:escalation_period, :escalation_options,
:register, :use,
:_naginator_name
setnamevar :_naginator_name
end
newtype :hostextinfo do
setparameters :host_name, :notes, :notes_url, :icon_image, :icon_image_alt,
:vrml_image, :statusmap_image, "2d_coords".intern, "3d_coords".intern,
:register, :use
setnamevar :host_name
end
newtype :serviceextinfo do
setparameters :host_name, :service_description, :notes, :notes_url,
:action_url, :icon_image, :icon_image_alt,
:register, :use,
:_naginator_name
setnamevar :_naginator_name
end
end
diff --git a/lib/puppet/external/pson/pure/generator.rb b/lib/puppet/external/pson/pure/generator.rb
index bcf2fde2a..17c98d58c 100644
--- a/lib/puppet/external/pson/pure/generator.rb
+++ b/lib/puppet/external/pson/pure/generator.rb
@@ -1,401 +1,394 @@
module PSON
MAP = {
"\x0" => '\u0000',
"\x1" => '\u0001',
"\x2" => '\u0002',
"\x3" => '\u0003',
"\x4" => '\u0004',
"\x5" => '\u0005',
"\x6" => '\u0006',
"\x7" => '\u0007',
"\b" => '\b',
"\t" => '\t',
"\n" => '\n',
"\xb" => '\u000b',
"\f" => '\f',
"\r" => '\r',
"\xe" => '\u000e',
"\xf" => '\u000f',
"\x10" => '\u0010',
"\x11" => '\u0011',
"\x12" => '\u0012',
"\x13" => '\u0013',
"\x14" => '\u0014',
"\x15" => '\u0015',
"\x16" => '\u0016',
"\x17" => '\u0017',
"\x18" => '\u0018',
"\x19" => '\u0019',
"\x1a" => '\u001a',
"\x1b" => '\u001b',
"\x1c" => '\u001c',
"\x1d" => '\u001d',
"\x1e" => '\u001e',
"\x1f" => '\u001f',
'"' => '\"',
'\\' => '\\\\',
} # :nodoc:
# Convert a UTF8 encoded Ruby string _string_ to a PSON string, encoded with
# UTF16 big endian characters as \u????, and return it.
if String.method_defined?(:force_encoding)
def utf8_to_pson(string) # :nodoc:
string = string.dup
string << '' # XXX workaround: avoid buffer sharing
string.force_encoding(Encoding::ASCII_8BIT)
string.gsub!(/["\\\x0-\x1f]/) { MAP[$MATCH] }
string
rescue => e
raise GeneratorError, "Caught #{e.class}: #{e}", e.backtrace
end
else
def utf8_to_pson(string) # :nodoc:
string.gsub(/["\\\x0-\x1f]/n) { MAP[$MATCH] }
end
end
module_function :utf8_to_pson
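# Illustrative behaviour of utf8_to_pson (escaping quotes, backslashes and
# control characters through the MAP table above):
#
#   PSON.utf8_to_pson(%Q{say "hi"\n})   #=> 'say \"hi\"\n'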
module Pure
module Generator
# This class is used to create State instances, that are use to hold data
# while generating a PSON text from a Ruby data structure.
class State
# Creates a State object from _opts_, which ought to be Hash to create
# a new State instance configured by _opts_, something else to create
# an unconfigured instance. If _opts_ is a State object, it is just
# returned.
def self.from_state(opts)
case opts
when self
opts
when Hash
new(opts)
else
new
end
end
# Instantiates a new State object, configured by _opts_.
#
# _opts_ can have the following keys:
#
# * *indent*: a string used to indent levels (default: ''),
# * *space*: a string that is put after a : or , delimiter (default: ''),
# * *space_before*: a string that is put before a : pair delimiter (default: ''),
# * *object_nl*: a string that is put at the end of a PSON object (default: ''),
# * *array_nl*: a string that is put at the end of a PSON array (default: ''),
# * *check_circular*: true if checking for circular data structures
# should be done (the default), false otherwise.
# * *allow_nan*: true if NaN, Infinity, and -Infinity should be
# generated; otherwise an exception is thrown if these values are
# encountered. This option defaults to false.
def initialize(opts = {})
@seen = {}
@indent = ''
@space = ''
@space_before = ''
@object_nl = ''
@array_nl = ''
@check_circular = true
@allow_nan = false
configure opts
end
# This string is used to indent levels in the PSON text.
attr_accessor :indent
# This string is used to insert a space between the tokens in a PSON
# string.
attr_accessor :space
# This string is used to insert a space before the ':' in PSON objects.
attr_accessor :space_before
# This string is put at the end of a line that holds a PSON object (or
# Hash).
attr_accessor :object_nl
# This string is put at the end of a line that holds a PSON array.
attr_accessor :array_nl
# The maximum level of data structure nesting allowed in the generated
# PSON; max_nesting = 0 means no maximum is checked.
attr_accessor :max_nesting
def check_max_nesting(depth) # :nodoc:
return if @max_nesting.zero?
current_nesting = depth + 1
current_nesting > @max_nesting and
raise NestingError, "nesting of #{current_nesting} is too deep"
end
# Returns true, if circular data structures should be checked,
# otherwise returns false.
def check_circular?
@check_circular
end
# Returns true if NaN, Infinity, and -Infinity should be considered as
# valid PSON and output.
def allow_nan?
@allow_nan
end
# Returns _true_, if _object_ was already seen during this generating
# run.
def seen?(object)
@seen.key?(object.__id__)
end
# Remember _object_, to find out if it was already encountered (if a
# cyclic data structure is rendered).
def remember(object)
@seen[object.__id__] = true
end
# Forget _object_ for this generating run.
def forget(object)
@seen.delete object.__id__
end
# Configure this State instance with the Hash _opts_, and return
# itself.
def configure(opts)
@indent = opts[:indent] if opts.key?(:indent)
@space = opts[:space] if opts.key?(:space)
@space_before = opts[:space_before] if opts.key?(:space_before)
@object_nl = opts[:object_nl] if opts.key?(:object_nl)
@array_nl = opts[:array_nl] if opts.key?(:array_nl)
@check_circular = !!opts[:check_circular] if opts.key?(:check_circular)
@allow_nan = !!opts[:allow_nan] if opts.key?(:allow_nan)
if !opts.key?(:max_nesting) # defaults to 19
@max_nesting = 19
elsif opts[:max_nesting]
@max_nesting = opts[:max_nesting]
else
@max_nesting = 0
end
self
end
# Returns the configuration instance variables as a hash, that can be
# passed to the configure method.
def to_h
result = {}
for iv in %w{indent space space_before object_nl array_nl check_circular allow_nan max_nesting}
result[iv.intern] = instance_variable_get("@#{iv}")
end
result
end
end
module GeneratorMethods
module Object
# Converts this object to a string (calling #to_s), converts
# it to a PSON string, and returns the result. This is a fallback, if no
# special method #to_pson was defined for some object.
def to_pson(*) to_s.to_pson end
end
module Hash
# Returns a PSON string containing a PSON object, that is unparsed from
# this Hash instance.
# _state_ is a PSON::State object, that can also be used to configure the
# produced PSON string output further.
# _depth_ is used to find out nesting depth, to indent accordingly.
def to_pson(state = nil, depth = 0, *)
if state
state = PSON.state.from_state(state)
state.check_max_nesting(depth)
pson_check_circular(state) { pson_transform(state, depth) }
else
pson_transform(state, depth)
end
end
private
def pson_check_circular(state)
if state and state.check_circular?
state.seen?(self) and raise PSON::CircularDatastructure,
"circular data structures not supported!"
state.remember self
end
yield
ensure
state and state.forget self
end
def pson_shift(state, depth)
state and not state.object_nl.empty? or return ''
state.indent * depth
end
def pson_transform(state, depth)
delim = ','
if state
delim << state.object_nl
result = '{'
result << state.object_nl
result << map { |key,value|
s = pson_shift(state, depth + 1)
s << key.to_s.to_pson(state, depth + 1)
s << state.space_before
s << ':'
s << state.space
s << value.to_pson(state, depth + 1)
}.join(delim)
result << state.object_nl
result << pson_shift(state, depth)
result << '}'
else
result = '{'
result << map { |key,value|
key.to_s.to_pson << ':' << value.to_pson
}.join(delim)
result << '}'
end
result
end
end
module Array
# Returns a PSON string containing a PSON array, that is unparsed from
# this Array instance.
# _state_ is a PSON::State object, that can also be used to configure the
# produced PSON string output further.
# _depth_ is used to find out nesting depth, to indent accordingly.
def to_pson(state = nil, depth = 0, *)
if state
state = PSON.state.from_state(state)
state.check_max_nesting(depth)
pson_check_circular(state) { pson_transform(state, depth) }
else
pson_transform(state, depth)
end
end
private
def pson_check_circular(state)
if state and state.check_circular?
state.seen?(self) and raise PSON::CircularDatastructure,
"circular data structures not supported!"
state.remember self
end
yield
ensure
state and state.forget self
end
def pson_shift(state, depth)
state and not state.array_nl.empty? or return ''
state.indent * depth
end
def pson_transform(state, depth)
delim = ','
if state
delim << state.array_nl
result = '['
result << state.array_nl
result << map { |value|
pson_shift(state, depth + 1) << value.to_pson(state, depth + 1)
}.join(delim)
result << state.array_nl
result << pson_shift(state, depth)
result << ']'
else
'[' << map { |value| value.to_pson }.join(delim) << ']'
end
end
end
module Integer
# Returns a PSON string representation for this Integer number.
def to_pson(*) to_s end
end
module Float
# Returns a PSON string representation for this Float number.
def to_pson(state = nil, *)
- case
- when infinite?
- if !state || state.allow_nan?
- to_s
- else
- raise GeneratorError, "#{self} not allowed in PSON"
- end
- when nan?
+ if infinite? || nan?
if !state || state.allow_nan?
to_s
else
raise GeneratorError, "#{self} not allowed in PSON"
end
else
to_s
end
end
end
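# Illustrative behaviour of the branch above (assuming these GeneratorMethods
# are mixed into the core Float class, as the pson library does when loaded):
#
#   (1.0 / 0).to_pson
#   #=> "Infinity"    # no state given, so the value passes through
#   (1.0 / 0).to_pson(PSON::Pure::Generator::State.new)
#   # raises GeneratorError, because State defaults allow_nan to false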
module String
# This string should be encoded with UTF-8. A call to this method
# returns a PSON string encoded with UTF16 big endian characters as
# \u????.
def to_pson(*)
'"' << PSON.utf8_to_pson(self) << '"'
end
# Module that holds the extending methods if the String module is
# included.
module Extend
# Raw Strings are PSON Objects (the raw bytes are stored in an array for the
# key "raw"). The Ruby String can be created by this module method.
def pson_create(o)
o['raw'].pack('C*')
end
end
# Extends _modul_ with the String::Extend module.
def self.included(modul)
modul.extend Extend
end
# This method creates a raw object hash, that can be nested into
# other data structures and will be unparsed as a raw string. This
# method should be used, if you want to convert raw strings to PSON
# instead of UTF-8 strings, e.g. binary data.
def to_pson_raw_object
{
PSON.create_id => self.class.name,
'raw' => self.unpack('C*'),
}
end
# This method creates a PSON text from the result of
# a call to to_pson_raw_object of this String.
def to_pson_raw(*args)
to_pson_raw_object.to_pson(*args)
end
end
module TrueClass
# Returns a PSON string for true: 'true'.
def to_pson(*) 'true' end
end
module FalseClass
# Returns a PSON string for false: 'false'.
def to_pson(*) 'false' end
end
module NilClass
# Returns a PSON string for nil: 'null'.
def to_pson(*) 'null' end
end
end
end
end
end
diff --git a/lib/puppet/face/ca.rb b/lib/puppet/face/ca.rb
index 55475de87..9b7cc5eae 100644
--- a/lib/puppet/face/ca.rb
+++ b/lib/puppet/face/ca.rb
@@ -1,247 +1,254 @@
require 'puppet/face'
Puppet::Face.define(:ca, '0.1.0') do
copyright "Puppet Labs", 2011
license "Apache 2 license; see COPYING"
summary "Local Puppet Certificate Authority management."
description <<-TEXT
This provides local management of the Puppet Certificate Authority.
You can use this subcommand to sign outstanding certificate requests, list
and manage local certificates, and inspect the state of the CA.
TEXT
action :list do
summary "List certificates and/or certificate requests."
description <<-TEXT
This will list the current certificates and certificate signing requests
in the Puppet CA. You will also get the fingerprint, and any certificate
verification failure reported.
TEXT
option "--[no-]all" do
summary "Include all certificates and requests."
end
option "--[no-]pending" do
summary "Include pending certificate signing requests."
end
option "--[no-]signed" do
summary "Include signed certificates."
end
option "--digest ALGORITHM" do
summary "The hash algorithm to use when displaying the fingerprint"
end
option "--subject PATTERN" do
summary "Only list if the subject matches PATTERN."
description <<-TEXT
Only include certificates or requests where subject matches PATTERN.
PATTERN is interpreted as a regular expression, allowing complex
filtering of the content.
TEXT
end
when_invoked do |options|
raise "Not a CA" unless Puppet::SSL::CertificateAuthority.ca?
unless ca = Puppet::SSL::CertificateAuthority.instance
raise "Unable to fetch the CA"
end
Puppet::SSL::Host.ca_location = :only
pattern = options[:subject].nil? ? nil :
Regexp.new(options[:subject], Regexp::IGNORECASE)
pending = options[:pending].nil? ? options[:all] : options[:pending]
signed = options[:signed].nil? ? options[:all] : options[:signed]
# By default we list pending, so if nothing at all was requested...
unless pending or signed then pending = true end
hosts = []
pending and hosts += ca.waiting?
signed and hosts += ca.list
pattern and hosts = hosts.select {|hostname| pattern.match hostname }
hosts.sort.map {|host| Puppet::SSL::Host.new(host) }
end
when_rendering :console do |hosts, options|
unless ca = Puppet::SSL::CertificateAuthority.instance
raise "Unable to fetch the CA"
end
length = hosts.map{|x| x.name.length }.max.to_i + 1
hosts.map do |host|
name = host.name.ljust(length)
if host.certificate_request then
" #{name} #{host.certificate_request.digest(options[:digest])}"
else
begin
ca.verify(host.name)
"+ #{name} #{host.certificate.digest(options[:digest])}"
rescue Puppet::SSL::CertificateAuthority::CertificateVerificationError => e
"- #{name} #{host.certificate.digest(options[:digest])} (#{e.to_s})"
end
end
end.join("\n")
end
end
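# Illustrative invocations of the list action above (the host pattern is made up):
#
#   $ puppet ca list                          # pending requests only (the default)
#   $ puppet ca list --all --subject '^web'   # signed and pending, filtered by regex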
action :destroy do
+ summary "Destroy named certificate or pending certificate request."
when_invoked do |host, options|
raise "Not a CA" unless Puppet::SSL::CertificateAuthority.ca?
unless ca = Puppet::SSL::CertificateAuthority.instance
raise "Unable to fetch the CA"
end
Puppet::SSL::Host.ca_location = :local
ca.destroy host
end
end
action :revoke do
+ summary "Add certificate to certificate revocation list."
when_invoked do |host, options|
raise "Not a CA" unless Puppet::SSL::CertificateAuthority.ca?
unless ca = Puppet::SSL::CertificateAuthority.instance
raise "Unable to fetch the CA"
end
Puppet::SSL::Host.ca_location = :only
begin
ca.revoke host
rescue ArgumentError => e
# This is a bit naff, but it makes the behaviour consistent with the
# destroy action. The underlying tools could be nicer for that sort
# of thing; they have fairly inconsistent reporting of failures.
raise unless e.to_s =~ /Could not find a serial number for /
"Nothing was revoked"
end
end
end
action :generate do
+ summary "Generate a certificate for a named client."
option "--dns-alt-names NAMES" do
summary "Additional DNS names to add to the certificate request"
description Puppet.settings.setting(:dns_alt_names).desc
end
when_invoked do |host, options|
raise "Not a CA" unless Puppet::SSL::CertificateAuthority.ca?
unless ca = Puppet::SSL::CertificateAuthority.instance
raise "Unable to fetch the CA"
end
Puppet::SSL::Host.ca_location = :local
begin
ca.generate(host, :dns_alt_names => options[:dns_alt_names])
rescue RuntimeError => e
if e.to_s =~ /already has a requested certificate/
"#{host} already has a certificate request; use sign instead"
else
raise
end
rescue ArgumentError => e
if e.to_s =~ /A Certificate already exists for /
"#{host} already has a certificate"
else
raise
end
end
end
end
action :sign do
+ summary "Sign an outstanding certificate request."
option("--[no-]allow-dns-alt-names") do
summary "Whether or not to accept DNS alt names in the certificate request"
end
when_invoked do |host, options|
raise "Not a CA" unless Puppet::SSL::CertificateAuthority.ca?
unless ca = Puppet::SSL::CertificateAuthority.instance
raise "Unable to fetch the CA"
end
Puppet::SSL::Host.ca_location = :only
begin
ca.sign(host, options[:allow_dns_alt_names])
rescue ArgumentError => e
if e.to_s =~ /Could not find certificate request/
e.to_s
else
raise
end
end
end
end
action :print do
+ summary "Print the full-text version of a host's certificate."
when_invoked do |host, options|
raise "Not a CA" unless Puppet::SSL::CertificateAuthority.ca?
unless ca = Puppet::SSL::CertificateAuthority.instance
raise "Unable to fetch the CA"
end
Puppet::SSL::Host.ca_location = :only
ca.print host
end
end
action :fingerprint do
+ summary "Print the DIGEST (defaults to the signing algorithm) fingerprint of a host's certificate."
option "--digest ALGORITHM" do
summary "The hash algorithm to use when displaying the fingerprint"
end
when_invoked do |host, options|
raise "Not a CA" unless Puppet::SSL::CertificateAuthority.ca?
unless Puppet::SSL::CertificateAuthority.instance
raise "Unable to fetch the CA"
end
Puppet::SSL::Host.ca_location = :only
if cert = (Puppet::SSL::Certificate.indirection.find(host) || Puppet::SSL::CertificateRequest.indirection.find(host))
cert.digest(options[:digest]).to_s
else
nil
end
end
end
action :verify do
+ summary "Verify the named certificate against the local CA certificate."
when_invoked do |host, options|
raise "Not a CA" unless Puppet::SSL::CertificateAuthority.ca?
unless ca = Puppet::SSL::CertificateAuthority.instance
raise "Unable to fetch the CA"
end
Puppet::SSL::Host.ca_location = :only
begin
ca.verify host
{ :host => host, :valid => true }
rescue ArgumentError => e
raise unless e.to_s =~ /Could not find a certificate for/
{ :host => host, :valid => false, :error => e.to_s }
rescue Puppet::SSL::CertificateAuthority::CertificateVerificationError => e
{ :host => host, :valid => false, :error => e.to_s }
end
end
when_rendering :console do |value|
if value[:valid]
nil
else
"Could not verify #{value[:host]}: #{value[:error]}"
end
end
end
end
diff --git a/lib/puppet/face/file/download.rb b/lib/puppet/face/file/download.rb
index aae318565..3ab28b151 100644
--- a/lib/puppet/face/file/download.rb
+++ b/lib/puppet/face/file/download.rb
@@ -1,54 +1,57 @@
# Download a specified file into the local filebucket.
Puppet::Face.define(:file, '0.0.1') do
action :download do |*args|
summary "Download a file into the local filebucket."
arguments "( {md5}<checksum> | <puppet_url> )"
returns "Nothing."
description <<-EOT
Downloads a file from the puppet master's filebucket and duplicates it in
the local filebucket. This action's checksum syntax differs from `find`'s,
and it can accept a <puppet:///> URL.
EOT
examples <<-'EOT'
Download a file by URL:
$ puppet file download puppet:///modules/editors/vim/.vimrc
Download a file by MD5 sum:
$ puppet file download {md5}8f798d4e754db0ac89186bbaeaf0af18
EOT
when_invoked do |sum, options|
if sum =~ /^puppet:\/\// # it's a puppet url
require 'puppet/file_serving'
require 'puppet/file_serving/content'
- raise "Could not find metadata for #{sum}" unless content = Puppet::FileServing::Content.indirection.find(sum)
- file = Puppet::FileBucket::File.new(content.content)
+ unless content = Puppet::FileServing::Content.indirection.find(sum)
+ raise "Could not find metadata for #{sum}"
+ end
+ pathname = Puppet::FileSystem.pathname(content.full_path())
+ file = Puppet::FileBucket::File.new(pathname)
else
tester = Object.new
tester.extend(Puppet::Util::Checksums)
type = tester.sumtype(sum)
sumdata = tester.sumdata(sum)
key = "#{type}/#{sumdata}"
Puppet::FileBucket::File.indirection.terminus_class = :file
if Puppet::FileBucket::File.indirection.head(key)
Puppet.info "Content for '#{sum}' already exists"
return
end
Puppet::FileBucket::File.indirection.terminus_class = :rest
raise "Could not download content for '#{sum}'" unless file = Puppet::FileBucket::File.indirection.find(key)
end
Puppet::FileBucket::File.indirection.terminus_class = :file
Puppet.notice "Saved #{sum} to filebucket"
Puppet::FileBucket::File.indirection.save file
return nil
end
end
end
diff --git a/lib/puppet/face/file/store.rb b/lib/puppet/face/file/store.rb
index dc43fee04..dc4c73b44 100644
--- a/lib/puppet/face/file/store.rb
+++ b/lib/puppet/face/file/store.rb
@@ -1,21 +1,21 @@
# Store a specified file in our filebucket.
Puppet::Face.define(:file, '0.0.1') do
action :store do |*args|
summary "Store a file in the local filebucket."
arguments "<file>"
returns "Nothing."
examples <<-EOT
Store a file:
$ puppet file store /root/.bashrc
EOT
when_invoked do |path, options|
- file = Puppet::FileBucket::File.new(Puppet::FileSystem.binread(path))
+ file = Puppet::FileBucket::File.new(Puppet::FileSystem.pathname(path))
Puppet::FileBucket::File.indirection.terminus_class = :file
Puppet::FileBucket::File.indirection.save file
file.checksum
end
end
end
diff --git a/lib/puppet/face/instrumentation_data.rb b/lib/puppet/face/instrumentation_data.rb
index c091f5542..05eb4fdb6 100644
--- a/lib/puppet/face/instrumentation_data.rb
+++ b/lib/puppet/face/instrumentation_data.rb
@@ -1,29 +1,30 @@
require 'puppet/indirector/face'
require 'puppet/util/instrumentation/data'
Puppet::Indirector::Face.define(:instrumentation_data, '0.0.1') do
copyright "Puppet Labs", 2011
license "Apache 2 license; see COPYING"
- summary "Manage instrumentation listener accumulated data."
+ summary "Manage instrumentation listener accumulated data. DEPRECATED."
description <<-EOT
This subcommand allows you to retrieve the various listener data.
+ (DEPRECATED) This subcommand will be removed in Puppet 4.0.
EOT
get_action(:destroy).summary "Invalid for this subcommand."
get_action(:save).summary "Invalid for this subcommand."
get_action(:search).summary "Invalid for this subcommand."
find = get_action(:find)
find.summary "Retrieve listener data."
find.render_as = :pson
find.returns <<-EOT
The data of an instrumentation listener
EOT
find.examples <<-EOT
Retrieve listener data:
$ puppet instrumentation_data find performance --terminus rest
EOT
end
diff --git a/lib/puppet/face/instrumentation_listener.rb b/lib/puppet/face/instrumentation_listener.rb
index 53b61bef7..e11ef407f 100644
--- a/lib/puppet/face/instrumentation_listener.rb
+++ b/lib/puppet/face/instrumentation_listener.rb
@@ -1,97 +1,98 @@
require 'puppet/indirector/face'
require 'puppet/util/instrumentation/listener'
Puppet::Indirector::Face.define(:instrumentation_listener, '0.0.1') do
copyright "Puppet Labs", 2011
license "Apache 2 license; see COPYING"
- summary "Manage instrumentation listeners."
+ summary "Manage instrumentation listeners. DEPRECATED."
description <<-EOT
This subcommand enables/disables or lists instrumentation listeners.
+ (DEPRECATED) This subcommand will be removed in Puppet 4.0.
EOT
get_action(:destroy).summary "Invalid for this subcommand."
find = get_action(:find)
find.summary "Retrieve a single listener."
find.render_as = :pson
find.returns <<-EOT
The status of an instrumentation listener
EOT
find.examples <<-EOT
Retrieve a given listener:
$ puppet instrumentation_listener find performance --terminus rest
EOT
search = get_action(:search)
search.summary "Retrieve all instrumentation listeners statuses."
search.arguments "<dummy_text>"
search.render_as = :pson
search.returns <<-EOT
The statuses of all instrumentation listeners
EOT
search.short_description <<-EOT
This retrieves all instrumentation listeners
EOT
search.notes <<-EOT
Although this action always returns all instrumentation listeners, it requires a dummy search
key; this is a known bug.
EOT
search.examples <<-EOT
Retrieve the state of the listeners running in the remote puppet master:
$ puppet instrumentation_listener search x --terminus rest
EOT
def manage(name, activate)
Puppet::Util::Instrumentation::Listener.indirection.terminus_class = :rest
listener = Puppet::Face[:instrumentation_listener, '0.0.1'].find(name)
if listener
listener.enabled = activate
Puppet::Face[:instrumentation_listener, '0.0.1'].save(listener)
end
end
action :enable do
summary "Enable a given instrumentation listener."
arguments "<listener>"
returns "Nothing."
description <<-EOT
Enable a given instrumentation listener. After being enabled the listener
will start receiving instrumentation notifications from the probes if those
are enabled.
EOT
examples <<-EOT
Enable the "performance" listener in the running master:
$ puppet instrumentation_listener enable performance --terminus rest
EOT
when_invoked do |name, options|
manage(name, true)
end
end
action :disable do
summary "Disable a given instrumentation listener."
arguments "<listener>"
returns "Nothing."
description <<-EOT
Disable a given instrumentation listener. After being disabled the listener
will stop receiving instrumentation notifications from the probes.
EOT
examples <<-EOT
Disable the "performance" listener in the running master:
$ puppet instrumentation_listener disable performance --terminus rest
EOT
when_invoked do |name, options|
manage(name, false)
end
end
get_action(:save).summary "API only: modify an instrumentation listener status."
get_action(:save).arguments "<listener>"
end
diff --git a/lib/puppet/face/instrumentation_probe.rb b/lib/puppet/face/instrumentation_probe.rb
index 840eac70a..b77f5e977 100644
--- a/lib/puppet/face/instrumentation_probe.rb
+++ b/lib/puppet/face/instrumentation_probe.rb
@@ -1,78 +1,79 @@
require 'puppet/indirector/face'
require 'puppet/util/instrumentation/indirection_probe'
Puppet::Indirector::Face.define(:instrumentation_probe, '0.0.1') do
copyright "Puppet Labs", 2011
license "Apache 2 license; see COPYING"
- summary "Manage instrumentation probes."
+ summary "Manage instrumentation probes. Deprecated"
description <<-EOT
This subcommand enables/disables or lists instrumentation probes.
+ (DEPRECATED) This subcommand will be removed in Puppet 4.0.
EOT
get_action(:find).summary "Invalid for this subcommand."
search = get_action(:search)
search.summary "Retrieve all probe statuses."
search.arguments "<dummy_text>"
search.render_as = :pson
search.returns <<-EOT
The statuses of all instrumentation probes
EOT
search.short_description <<-EOT
This retrieves all instrumentation probes
EOT
search.notes <<-EOT
Although this action always returns all instrumentation probes, it requires a dummy search
key; this is a known bug.
EOT
search.examples <<-EOT
Retrieve the state of the probes running in the remote puppet master:
$ puppet instrumentation_probe search x --terminus rest
EOT
action :enable do
summary "Enable all instrumentation probes."
arguments "<dummy>"
returns "Nothing."
description <<-EOT
Enable all instrumentation probes. After being enabled, all enabled listeners
will start receiving instrumentation notifications from the probes.
EOT
examples <<-EOT
Enable the probes for the running master:
$ puppet instrumentation_probe enable x --terminus rest
EOT
when_invoked do |name, options|
Puppet::Face[:instrumentation_probe, '0.0.1'].save(nil)
end
end
action :disable do
summary "Disable all instrumentation probes."
arguments "<dummy>"
returns "Nothing."
description <<-EOT
Disable all instrumentation probes. After being disabled, no listeners
will receive instrumentation notifications.
EOT
examples <<-EOT
Disable the probes for the running master:
$ puppet instrumentation_probe disable x --terminus rest
EOT
when_invoked do |name, options|
Puppet::Face[:instrumentation_probe, '0.0.1'].destroy(nil)
end
end
get_action(:save).summary "API only: enable all instrumentation probes."
get_action(:save).arguments "<dummy>"
get_action(:destroy).summary "API only: disable all instrumentation probes."
get_action(:destroy).arguments "<dummy>"
end
diff --git a/lib/puppet/face/module/build.rb b/lib/puppet/face/module/build.rb
index f1d006a74..5ee8b2363 100644
--- a/lib/puppet/face/module/build.rb
+++ b/lib/puppet/face/module/build.rb
@@ -1,63 +1,63 @@
Puppet::Face.define(:module, '1.0.0') do
action(:build) do
summary "Build a module release package."
description <<-EOT
Prepares a local module for release on the Puppet Forge by building a
ready-to-upload archive file.
This action uses the Modulefile in the module directory to set metadata
used by the Forge. See <http://links.puppetlabs.com/modulefile> for more
about writing modulefiles.
After being built, the release archive file can be found in the module's
`pkg` directory.
EOT
returns "Pathname object representing the path to the release archive."
examples <<-EOT
Build a module release:
$ puppet module build puppetlabs-apache
notice: Building /Users/kelseyhightower/puppetlabs-apache for release
Module built: /Users/kelseyhightower/puppetlabs-apache/pkg/puppetlabs-apache-0.0.1.tar.gz
Build the module in the current working directory:
$ cd /Users/kelseyhightower/puppetlabs-apache
$ puppet module build
notice: Building /Users/kelseyhightower/puppetlabs-apache for release
Module built: /Users/kelseyhightower/puppetlabs-apache/pkg/puppetlabs-apache-0.0.1.tar.gz
EOT
arguments "[<path>]"
when_invoked do |*args|
options = args.pop
if options.nil? or args.length > 1 then
raise ArgumentError, "puppet module build only accepts 0 or 1 arguments"
end
module_path = args.first
if module_path.nil?
pwd = Dir.pwd
module_path = Puppet::ModuleTool.find_module_root(pwd)
if module_path.nil?
- raise "Unable to find module root at #{pwd} or parent directories"
+ raise "Unable to find metadata.json or Modulefile in module root #{pwd} or parent directories. See <http://links.puppetlabs.com/modulefile> for required file format."
end
else
unless Puppet::ModuleTool.is_module_root?(module_path)
- raise "Unable to find module root at #{module_path}"
+ raise "Unable to find metadata.json or Modulefile in module root #{module_path}. See <http://links.puppetlabs.com/modulefile> for required file format."
end
end
Puppet::ModuleTool.set_option_defaults options
Puppet::ModuleTool::Applications::Builder.run(module_path, options)
end
when_rendering :console do |return_value|
# Get the string representation of the Pathname object.
"Module built: " + return_value.expand_path.to_s
end
end
end
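The new error messages describe the lookup that precedes the build: starting from the given directory and walking up through its parents until a metadata.json or Modulefile is found. A standalone sketch of that search, under the assumption that this is roughly what Puppet::ModuleTool.find_module_root does (locate_module_root is a hypothetical name, not Puppet's API):

require 'pathname'

# Walk from +start+ toward the filesystem root and return the first directory
# containing metadata.json or Modulefile, or nil if none is found.
def locate_module_root(start)
  dir = Pathname.new(start).expand_path
  until dir.root?
    return dir if (dir + 'metadata.json').exist? || (dir + 'Modulefile').exist?
    dir = dir.parent
  end
  nil
end

locate_module_root(Dir.pwd)  # => a Pathname such as /home/user/puppetlabs-apache, or nil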
diff --git a/lib/puppet/face/module/generate.rb b/lib/puppet/face/module/generate.rb
index f25e504cb..9b88a8c65 100644
--- a/lib/puppet/face/module/generate.rb
+++ b/lib/puppet/face/module/generate.rb
@@ -1,243 +1,251 @@
Puppet::Face.define(:module, '1.0.0') do
action(:generate) do
summary "Generate boilerplate for a new module."
description <<-EOT
Generates boilerplate for a new module by creating the directory
structure and files recommended for the Puppet community's best practices.
A module may need additional directories beyond this boilerplate
if it provides plugins, files, or templates.
EOT
returns "Array of Pathname objects representing paths of generated files."
examples <<-EOT
Generate a new module in the current directory:
$ puppet module generate puppetlabs-ssh
We need to create a metadata.json file for this module. Please answer the
following questions; if the question is not applicable to this module, feel free
to leave it blank.
Puppet uses Semantic Versioning (semver.org) to version modules.
What version is this module? [0.1.0]
-->
Who wrote this module? [puppetlabs]
-->
What license does this module code fall under? [Apache 2.0]
-->
How would you describe this module in a single sentence?
-->
Where is this module's source code repository?
-->
Where can others go to learn more about this module?
-->
Where can others go to file issues about this module?
-->
----------------------------------------
{
"name": "puppetlabs-ssh",
"version": "0.1.0",
"author": "puppetlabs",
"summary": null,
"license": "Apache 2.0",
"source": "",
"project_page": null,
"issues_url": null,
"dependencies": [
{
"name": "puppetlabs-stdlib",
- "version_range": ">= 1.0.0"
+ "version_requirement": ">= 1.0.0"
}
]
}
----------------------------------------
About to generate this metadata; continue? [n/Y]
-->
Notice: Generating module at /Users/username/Projects/puppet/puppetlabs-ssh...
Notice: Populating ERB templates...
Finished; module generated in puppetlabs-ssh.
puppetlabs-ssh/manifests
puppetlabs-ssh/manifests/init.pp
puppetlabs-ssh/metadata.json
puppetlabs-ssh/README.md
puppetlabs-ssh/spec
puppetlabs-ssh/spec/spec_helper.rb
puppetlabs-ssh/tests
puppetlabs-ssh/tests/init.pp
EOT
option "--skip-interview" do
summary "Bypass the interactive metadata interview"
description <<-EOT
Do not attempt to perform a metadata interview. Primarily useful for automatic
execution of `puppet module generate`.
EOT
end
arguments "<name>"
when_invoked do |name, options|
# Since we only want to interview if it's being rendered to the console
# (i.e. when invoked with `puppet module generate`), we can't do any work
# here in the when_invoked block. The result of this block is then
# passed to each renderer, which will handle it appropriately; by
# returning a simple message like this, every renderer will simply output
# the string.
# Our `when_rendering :console` handler will ignore this value and
# actually generate the module.
#
# All this is necessary because it is not possible at this point in time
# to know what the destination of the output is.
"This format is not supported by this action."
end
when_rendering :console do |_, name, options|
Puppet::ModuleTool.set_option_defaults options
begin
# A default dependency for all newly generated modules is being
# introduced as a substitute for the comments we used to include in the
# Modulefile. While introducing a default dependency is less than
# perfectly desirable, the cost is low, and the syntax is obtuse enough
# to justify its inclusion.
metadata = Puppet::ModuleTool::Metadata.new.update(
'name' => name,
'version' => '0.1.0',
'dependencies' => [
- { :name => 'puppetlabs-stdlib', :version_range => '>= 1.0.0' }
+ { 'name' => 'puppetlabs-stdlib', 'version_requirement' => '>= 1.0.0' }
]
)
rescue ArgumentError
msg = "Could not generate directory #{name.inspect}, you must specify a dash-separated username and module name."
raise ArgumentError, msg, $!.backtrace
end
dest = Puppet::ModuleTool::Generate.destination(metadata)
result = Puppet::ModuleTool::Generate.generate(metadata, options[:skip_interview])
path = dest.relative_path_from(Pathname.pwd)
puts "Finished; module generated in #{path}."
result.join("\n")
end
end
end
module Puppet::ModuleTool::Generate
module_function
def generate(metadata, skip_interview = false)
interview(metadata) unless skip_interview
destination = duplicate_skeleton(metadata)
all_files = destination.basename + '**/*'
return Dir[all_files.to_s]
end
def interview(metadata)
puts "We need to create a metadata.json file for this module. Please answer the"
puts "following questions; if the question is not applicable to this module, feel free"
puts "to leave it blank."
begin
puts
puts "Puppet uses Semantic Versioning (semver.org) to version modules."
puts "What version is this module? [#{metadata.version}]"
metadata.update 'version' => user_input(metadata.version)
rescue
Puppet.err "We're sorry, we could not parse that as a Semantic Version."
retry
end
puts
puts "Who wrote this module? [#{metadata.author}]"
metadata.update 'author' => user_input(metadata.author)
puts
puts "What license does this module code fall under? [#{metadata.license}]"
metadata.update 'license' => user_input(metadata.license)
puts
puts "How would you describe this module in a single sentence?"
metadata.update 'summary' => user_input(metadata.summary)
puts
puts "Where is this module's source code repository?"
metadata.update 'source' => user_input(metadata.source)
puts
puts "Where can others go to learn more about this module?#{ metadata.project_page && " [#{metadata.project_page}]" }"
metadata.update 'project_page' => user_input(metadata.project_page)
puts
puts "Where can others go to file issues about this module?#{ metadata.issues_url && " [#{metadata.issues_url}]" }"
metadata.update 'issues_url' => user_input(metadata.issues_url)
puts
puts '-' * 40
puts metadata.to_json
puts '-' * 40
puts
puts "About to generate this metadata; continue? [n/Y]"
if user_input('Y') !~ /^y(es)?$/i
puts "Aborting..."
exit 0
end
end
def user_input(default=nil)
print '--> '
input = STDIN.gets.chomp.strip
input = default if input == ''
return input
end
def destination(metadata)
return @dest if defined? @dest
@dest = Pathname.pwd + metadata.dashed_name
raise ArgumentError, "#{@dest} already exists." if @dest.exist?
return @dest
end
def duplicate_skeleton(metadata)
dest = destination(metadata)
puts
Puppet.notice "Generating module at #{dest}..."
FileUtils.cp_r skeleton_path, dest
- populate_erb_templates(metadata, dest)
+ populate_templates(metadata, dest)
return dest
end
- def populate_erb_templates(metadata, destination)
- Puppet.notice "Populating ERB templates..."
+ def populate_templates(metadata, destination)
+ Puppet.notice "Populating templates..."
- templates = destination + '**/*.erb'
- Dir[templates.to_s].each do |erb|
- path = Pathname.new(erb)
- content = ERB.new(path.read).result(binding)
+ formatters = {
+ :erb => proc { |data, ctx| ERB.new(data).result(ctx) },
+ :template => proc { |data, _| data },
+ }
- target = path.parent + path.basename('.erb')
- target.open('w') { |f| f.write(content) }
- path.unlink
+ formatters.each do |type, block|
+ templates = destination + "**/*.#{type}"
+
+ Dir.glob(templates.to_s, File::FNM_DOTMATCH).each do |erb|
+ path = Pathname.new(erb)
+ content = block[path.read, binding]
+
+ target = path.parent + path.basename(".#{type}")
+ target.open('w') { |f| f.write(content) }
+ path.unlink
+ end
end
end
def skeleton_path
return @path if defined? @path
path = Pathname(Puppet.settings[:module_skeleton_dir])
path = Pathname(__FILE__).dirname + '../../module_tool/skeleton/templates/generator' unless path.directory?
@path = path
end
end
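The populate_templates rewrite above replaces the ERB-only pass with a small table of formatters: files ending in .erb are rendered through ERB, files ending in .template are copied verbatim, and in both cases the suffix is stripped and the source file removed. A self-contained sketch of the same dispatch against a scratch directory (the file names and the module_name variable are illustrative, not part of Puppet's skeleton):

require 'erb'
require 'pathname'
require 'tmpdir'

formatters = {
  :erb      => proc { |data, ctx| ERB.new(data).result(ctx) },  # render ERB
  :template => proc { |data, _|   data },                       # copy verbatim
}

Dir.mktmpdir do |dir|
  module_name = 'example'  # visible to the ERB template through +binding+
  File.write(File.join(dir, 'README.md.erb'), "# <%= module_name %>\n")
  File.write(File.join(dir, 'Gemfile.template'), "source 'https://rubygems.org'\n")

  formatters.each do |type, block|
    Dir.glob(File.join(dir, "**/*.#{type}"), File::FNM_DOTMATCH).each do |source|
      path    = Pathname.new(source)
      content = block[path.read, binding]
      target  = path.parent + path.basename(".#{type}")  # strip the .erb/.template suffix
      target.open('w') { |f| f.write(content) }
      path.unlink
    end
  end

  puts File.read(File.join(dir, 'README.md'))  # => "# example"
end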
diff --git a/lib/puppet/face/module/install.rb b/lib/puppet/face/module/install.rb
index 929ff9db8..0644d0518 100644
--- a/lib/puppet/face/module/install.rb
+++ b/lib/puppet/face/module/install.rb
@@ -1,144 +1,145 @@
# encoding: UTF-8
require 'puppet/forge'
require 'puppet/module_tool/install_directory'
require 'pathname'
Puppet::Face.define(:module, '1.0.0') do
action(:install) do
summary "Install a module from the Puppet Forge or a release archive."
description <<-EOT
Installs a module from the Puppet Forge or from a release archive file.
The specified module will be installed into the directory
specified with the `--target-dir` option, which defaults to the first
directory in the modulepath.
EOT
returns "Pathname object representing the path to the installed module."
examples <<-'EOT'
Install a module:
$ puppet module install puppetlabs-vcsrepo
Preparing to install into /etc/puppet/modules ...
Downloading from http://forgeapi.puppetlabs.com ...
Installing -- do not interrupt ...
/etc/puppet/modules
└── puppetlabs-vcsrepo (v0.0.4)
Install a module to a specific environment:
$ puppet module install puppetlabs-vcsrepo --environment development
Preparing to install into /etc/puppet/environments/development/modules ...
Downloading from http://forgeapi.puppetlabs.com ...
Installing -- do not interrupt ...
/etc/puppet/environments/development/modules
└── puppetlabs-vcsrepo (v0.0.4)
Install a specific module version:
$ puppet module install puppetlabs-vcsrepo -v 0.0.4
Preparing to install into /etc/puppet/modules ...
Downloading from http://forgeapi.puppetlabs.com ...
Installing -- do not interrupt ...
/etc/puppet/modules
└── puppetlabs-vcsrepo (v0.0.4)
Install a module into a specific directory:
$ puppet module install puppetlabs-vcsrepo --target-dir=/usr/share/puppet/modules
Preparing to install into /usr/share/puppet/modules ...
Downloading from http://forgeapi.puppetlabs.com ...
Installing -- do not interrupt ...
/usr/share/puppet/modules
└── puppetlabs-vcsrepo (v0.0.4)
Install a module into a specific directory and check for dependencies in other directories:
$ puppet module install puppetlabs-vcsrepo --target-dir=/usr/share/puppet/modules --modulepath /etc/puppet/modules
Preparing to install into /usr/share/puppet/modules ...
Downloading from http://forgeapi.puppetlabs.com ...
Installing -- do not interrupt ...
/usr/share/puppet/modules
└── puppetlabs-vcsrepo (v0.0.4)
Install a module from a release archive:
$ puppet module install puppetlabs-vcsrepo-0.0.4.tar.gz
Preparing to install into /etc/puppet/modules ...
Downloading from http://forgeapi.puppetlabs.com ...
Installing -- do not interrupt ...
/etc/puppet/modules
└── puppetlabs-vcsrepo (v0.0.4)
Install a module from a release archive and ignore dependencies:
$ puppet module install puppetlabs-vcsrepo-0.0.4.tar.gz --ignore-dependencies
Preparing to install into /etc/puppet/modules ...
Installing -- do not interrupt ...
/etc/puppet/modules
└── puppetlabs-vcsrepo (v0.0.4)
EOT
arguments "<name>"
option "--force", "-f" do
- summary "Force overwrite of existing module, if any."
+ summary "Force overwrite of existing module, if any. (Implies --ignore-dependencies.)"
description <<-EOT
Force overwrite of existing module, if any.
+ Implies --ignore-dependencies.
EOT
end
option "--target-dir DIR", "-i DIR" do
summary "The directory into which modules are installed."
description <<-EOT
The directory into which modules are installed; defaults to the first
directory in the modulepath.
Specifying this option will change the installation directory, and
will use the existing modulepath when checking for dependencies. If
you wish to check a different set of directories for dependencies, you
must also use the `--environment` or `--modulepath` options.
EOT
end
option "--ignore-dependencies" do
- summary "Do not attempt to install dependencies"
+ summary "Do not attempt to install dependencies. (Implied by --force.)"
description <<-EOT
- Do not attempt to install dependencies. (Implied by --force.)
+ Do not attempt to install dependencies. Implied by --force.
EOT
end
option "--version VER", "-v VER" do
summary "Module version to install."
description <<-EOT
Module version to install; can be an exact version or a requirement string,
eg '>= 1.0.3'. Defaults to latest version.
EOT
end
when_invoked do |name, options|
Puppet::ModuleTool.set_option_defaults options
Puppet.notice "Preparing to install into #{options[:target_dir]} ..."
install_dir = Puppet::ModuleTool::InstallDirectory.new(Pathname.new(options[:target_dir]))
Puppet::ModuleTool::Applications::Installer.run(name, install_dir, options)
end
when_rendering :console do |return_value, name, options|
if return_value[:result] == :noop
Puppet.notice "Module #{name} #{return_value[:version]} is already installed."
exit 0
elsif return_value[:result] == :failure
Puppet.err(return_value[:error][:multiline])
exit 1
else
tree = Puppet::ModuleTool.build_tree(return_value[:graph], return_value[:install_dir])
"#{return_value[:install_dir]}\n" +
Puppet::ModuleTool.format_tree(tree)
end
end
end
end
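Because install is a face action, it can also be invoked programmatically; the return value is the same hash that the console renderer above turns into a tree. A hedged sketch, assuming an initialized Puppet runtime, network access to the Forge, and that the option keys mirror the CLI flags shown above:

require 'puppet'
require 'puppet/face'

Puppet.initialize_settings  # assumed: running outside a Puppet application

# Roughly equivalent to: puppet module install puppetlabs-vcsrepo -v 0.0.4
result = Puppet::Face[:module, '1.0.0'].install(
  'puppetlabs-vcsrepo',
  :version    => '0.0.4',
  :target_dir => '/etc/puppet/modules'  # defaults to the first modulepath entry
)

# The when_rendering block above branches on result[:result]
# (:noop, :failure, or anything else for a successful install).
puts result[:result]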
diff --git a/lib/puppet/face/module/upgrade.rb b/lib/puppet/face/module/upgrade.rb
index 7a6bd66bc..f671887e1 100644
--- a/lib/puppet/face/module/upgrade.rb
+++ b/lib/puppet/face/module/upgrade.rb
@@ -1,85 +1,86 @@
# encoding: UTF-8
Puppet::Face.define(:module, '1.0.0') do
action(:upgrade) do
summary "Upgrade a puppet module."
description <<-EOT
Upgrades a puppet module.
EOT
returns "Hash"
examples <<-EOT
Upgrade an installed module to the latest version:
$ puppet module upgrade puppetlabs-apache
/etc/puppet/modules
└── puppetlabs-apache (v1.0.0 -> v2.4.0)
Upgrade an installed module to a specific version:
$ puppet module upgrade puppetlabs-apache --version 2.1.0
/etc/puppet/modules
└── puppetlabs-apache (v1.0.0 -> v2.1.0)
Upgrade an installed module for a specific environment:
$ puppet module upgrade puppetlabs-apache --environment test
/usr/share/puppet/environments/test/modules
└── puppetlabs-apache (v1.0.0 -> v2.4.0)
EOT
arguments "<name>"
option "--force", "-f" do
- summary "Force upgrade of an installed module."
+ summary "Force upgrade of an installed module. (Implies --ignore-dependencies.)"
description <<-EOT
Force the upgrade of an installed module even if there are local
changes or the possibility of causing broken dependencies.
+ Implies --ignore-dependencies.
EOT
end
option "--ignore-dependencies" do
- summary "Do not attempt to install dependencies"
+ summary "Do not attempt to install dependencies. (Implied by --force.)"
description <<-EOT
- Do not attempt to install dependencies. (Implied by --force.)
+ Do not attempt to install dependencies. Implied by --force.
EOT
end
option "--ignore-changes", "-c" do
summary "Ignore and overwrite any local changes made. (Implied by --force.)"
description <<-EOT
Upgrade an installed module even if there are local changes to it. (Implied by --force.)
EOT
end
option "--version=" do
summary "The version of the module to upgrade to."
description <<-EOT
The version of the module to upgrade to.
EOT
end
when_invoked do |name, options|
name = name.gsub('/', '-')
Puppet.notice "Preparing to upgrade '#{name}' ..."
Puppet::ModuleTool.set_option_defaults options
Puppet::ModuleTool::Applications::Upgrader.new(name, options).run
end
when_rendering :console do |return_value|
if return_value[:result] == :noop
Puppet.notice return_value[:error][:multiline]
exit 0
elsif return_value[:result] == :failure
Puppet.err(return_value[:error][:multiline])
exit 1
else
tree = Puppet::ModuleTool.build_tree(return_value[:graph], return_value[:base_dir])
"#{return_value[:base_dir]}\n" +
Puppet::ModuleTool.format_tree(tree)
end
end
end
end
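Both install and upgrade document the same option implication: --force also turns on --ignore-dependencies. As a plain illustration of that relationship (this is not Puppet's own option handling):

# If --force was given, behave as though --ignore-dependencies was given too.
options = { :force => true }
options[:ignore_dependencies] = true if options[:force]
options  # => {:force=>true, :ignore_dependencies=>true}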
diff --git a/lib/puppet/face/node/clean.rb b/lib/puppet/face/node/clean.rb
index 903e93819..f1c3c34b8 100644
--- a/lib/puppet/face/node/clean.rb
+++ b/lib/puppet/face/node/clean.rb
@@ -1,159 +1,159 @@
Puppet::Face.define(:node, '0.0.1') do
action(:clean) do
option "--[no-]unexport" do
summary "Whether to remove this node's exported resources from other nodes"
end
summary "Clean up everything a puppetmaster knows about a node."
arguments "<host1> [<host2> ...]"
description <<-'EOT'
Clean up everything a puppet master knows about a node, including certificates
and storeconfigs data.
The full list of info cleaned by this action is:
<Signed certificates> - ($vardir/ssl/ca/signed/node.domain.pem)
<Cached facts> - ($vardir/yaml/facts/node.domain.yaml)
<Cached node objects> - ($vardir/yaml/node/node.domain.yaml)
<Reports> - ($vardir/reports/node.domain)
<Stored configs> - (in database) The clean action can either remove all
data from a host in your storeconfigs database, or, with the
<--unexport> option, turn every exported resource supporting ensure to
absent so that any other host that collected those resources can remove
them. Without unexporting, a removed node's exported resources become
unmanaged by Puppet, and may linger as cruft unless you are purging
that resource type.
EOT
when_invoked do |*args|
nodes = args[0..-2]
options = args.last
raise "At least one node should be passed" if nodes.empty? || nodes == options
# This seems really bad; run_mode should be set as part of a class
# definition, and should not be modifiable beyond that. This is one of
# the only places left in the code that tries to manipulate it. Other
# parts of code that handle certificates behave differently if the
# run_mode is master. Those other behaviors are needed for cleaning the
# certificates correctly.
Puppet.settings.preferred_run_mode = "master"
if Puppet::SSL::CertificateAuthority.ca?
Puppet::SSL::Host.ca_location = :local
else
Puppet::SSL::Host.ca_location = :none
end
Puppet::Node::Facts.indirection.terminus_class = :yaml
Puppet::Node::Facts.indirection.cache_class = :yaml
Puppet::Node.indirection.terminus_class = :yaml
Puppet::Node.indirection.cache_class = :yaml
nodes.each { |node| cleanup(node.downcase, options[:unexport]) }
end
end
def cleanup(node, unexport)
clean_cert(node)
clean_cached_facts(node)
clean_cached_node(node)
clean_reports(node)
clean_storeconfigs(node, unexport)
end
# clean signed cert for +host+
def clean_cert(node)
if Puppet::SSL::CertificateAuthority.ca?
Puppet::Face[:ca, :current].revoke(node)
Puppet::Face[:ca, :current].destroy(node)
Puppet.info "#{node} certificates removed from ca"
else
Puppet.info "Not managing #{node} certs as this host is not a CA"
end
end
# clean facts for +host+
def clean_cached_facts(node)
Puppet::Node::Facts.indirection.destroy(node)
Puppet.info "#{node}'s facts removed"
end
# clean cached node +host+
def clean_cached_node(node)
Puppet::Node.indirection.destroy(node)
Puppet.info "#{node}'s cached node removed"
end
# clean node reports for +host+
def clean_reports(node)
Puppet::Transaction::Report.indirection.destroy(node)
Puppet.info "#{node}'s reports removed"
end
# clean storeconfig for +node+
def clean_storeconfigs(node, do_unexport=false)
return unless Puppet[:storeconfigs] && Puppet.features.rails?
require 'puppet/rails'
Puppet::Rails.connect
unless rails_node = Puppet::Rails::Host.find_by_name(node)
Puppet.notice "No entries found for #{node} in storedconfigs."
return
end
if do_unexport
unexport(rails_node)
Puppet.notice "Force #{node}'s exported resources to absent"
Puppet.warning "Please wait until all other hosts have checked out their configuration before finishing the cleanup with:"
Puppet.warning "$ puppet node clean #{node}"
else
rails_node.destroy
Puppet.notice "#{node} storeconfigs removed"
end
end
def unexport(node)
# fetch all exported resources
query = {:include => {:param_values => :param_name}}
query[:conditions] = [ "exported=? AND host_id=?", true, node.id ]
Puppet::Rails::Resource.find(:all, query).each do |resource|
if type_is_ensurable(resource)
line = 0
param_name = Puppet::Rails::ParamName.find_or_create_by_name("ensure")
if ensure_param = resource.param_values.find(
:first,
:conditions => [ 'param_name_id = ?', param_name.id ]
)
line = ensure_param.line.to_i
Puppet::Rails::ParamValue.delete(ensure_param.id);
end
# force ensure parameter to "absent"
resource.param_values.create(
:value => "absent",
:line => line,
:param_name => param_name
)
Puppet.info("#{resource.name} has been marked as \"absent\"")
end
end
end
def environment
- @environment ||= Puppet.lookup(:environments).get(Puppet[:environment])
+ @environment ||= Puppet.lookup(:current_environment)
end
def type_is_ensurable(resource)
if (type = Puppet::Type.type(resource.restype)) && type.validattr?(:ensure)
return true
else
type = environment.known_resource_types.find_definition('', resource.restype)
return true if type && type.arguments.keys.include?('ensure')
end
return false
end
end
diff --git a/lib/puppet/face/parser.rb b/lib/puppet/face/parser.rb
index a3835bfa0..b3dec4ed7 100644
--- a/lib/puppet/face/parser.rb
+++ b/lib/puppet/face/parser.rb
@@ -1,68 +1,160 @@
require 'puppet/face'
require 'puppet/parser'
Puppet::Face.define(:parser, '0.0.1') do
copyright "Puppet Labs", 2011
license "Apache 2 license; see COPYING"
summary "Interact directly with the parser."
action :validate do
summary "Validate the syntax of one or more Puppet manifests."
arguments "[<manifest>] [<manifest> ...]"
returns "Nothing, or the first syntax error encountered."
description <<-'EOT'
This action validates Puppet DSL syntax without compiling a catalog or
syncing any resources. If no manifest files are provided, it will
validate the default site manifest.
+
+ When validating with --parser current, validation stops at the first issue
+ encountered.
+
+ When validating with --parser future, multiple issues per file are reported, up
+ to the limits set by the max_errors and max_warnings settings. Processing stops
+ after issues have been reported for the first file that contains errors.
EOT
examples <<-'EOT'
Validate the default site manifest at /etc/puppet/manifests/site.pp:
$ puppet parser validate
Validate two arbitrary manifest files:
$ puppet parser validate init.pp vhost.pp
Validate from STDIN:
$ cat init.pp | puppet parser validate
EOT
when_invoked do |*args|
args.pop
files = args
if files.empty?
if not STDIN.tty?
Puppet[:code] = STDIN.read
validate_manifest
else
manifest = Puppet.lookup(:current_environment).manifest
files << manifest
Puppet.notice "No manifest specified. Validating the default manifest #{manifest}"
end
end
missing_files = []
files.each do |file|
- missing_files << file if ! Puppet::FileSystem.exist?(file)
- validate_manifest(file)
+ if Puppet::FileSystem.exist?(file)
+ validate_manifest(file)
+ else
+ missing_files << file
+ end
+ end
+ unless missing_files.empty?
+ raise Puppet::Error, "One or more file(s) specified did not exist:\n#{missing_files.collect {|f| " " * 3 + f + "\n"}}"
end
- raise Puppet::Error, "One or more file(s) specified did not exist:\n#{missing_files.collect {|f| " " * 3 + f + "\n"}}" if ! missing_files.empty?
nil
end
end
+
+ action (:dump) do
+ summary "Outputs a dump of the internal parse tree for debugging"
+ arguments "-e <source>| [<manifest> ...] "
+ returns "A dump of the resulting AST model unless there are syntax or validation errors."
+ description <<-'EOT'
+ This action parses and validates the Puppet DSL syntax without compiling a catalog
+ or syncing any resources. It automatically turns on the future parser for the parsing.
+
+ The command accepts one or more manifest (.pp) files, or -e followed by Puppet
+ source text.
+ If no arguments are given, stdin is read (unless it is attached to a terminal).
+
+ The output format of the dumped tree is not a supported API and may change from time to time.
+ EOT
+
+ option "--e <source>" do
+ default_to { nil }
+ summary "dump one source expression given on the command line."
+ end
+
+ option("--[no-]validate") do
+ summary "Whether or not to validate the parsed result, if no-validate only syntax errors are reported"
+ end
+
+ when_invoked do |*args|
+ require 'puppet/pops'
+ options = args.pop
+ if options[:e]
+ dump_parse(options[:e], 'command-line-string', options, false)
+ elsif args.empty?
+ if ! STDIN.tty?
+ dump_parse(STDIN.read, 'stdin', options, false)
+ else
+ raise Puppet::Error, "No input to parse given on command line or stdin"
+ end
+ else
+ missing_files = []
+ files = args
+ available_files = files.select do |file|
+ Puppet::FileSystem.exist?(file)
+ end
+ missing_files = files - available_files
+
+ dumps = available_files.collect do |file|
+ dump_parse(File.read(file), file, options)
+ end.join("")
+
+ if missing_files.empty?
+ dumps
+ else
+ dumps + "One or more file(s) specified did not exist:\n" + missing_files.collect { |f| " #{f}" }.join("\n")
+ end
+ end
+ end
+ end
+
+ def dump_parse(source, filename, options, show_filename = true)
+ output = ""
+ dumper = Puppet::Pops::Model::ModelTreeDumper.new
+ evaluating_parser = Puppet::Pops::Parser::EvaluatingParser.new
+ begin
+ if options[:validate]
+ parse_result = evaluating_parser.parse_string(source, filename)
+ else
+ # side step the assert_and_report step
+ parse_result = evaluating_parser.parser.parse_string(source)
+ end
+ if show_filename
+ output << "--- #{filename}"
+ end
+ output << dumper.dump(parse_result) << "\n"
+ rescue Puppet::ParseError => detail
+ if show_filename
+ Puppet.err("--- #{filename}")
+ end
+ Puppet.err(detail.message)
+ ""
+ end
+ end
+
# @api private
def validate_manifest(manifest = nil)
- configured_environment = Puppet.lookup(:environments).get(Puppet[:environment])
- validation_environment = manifest ?
- configured_environment.override_with(:manifest => manifest) :
- configured_environment
+ env = Puppet.lookup(:current_environment)
+ validation_environment = manifest ? env.override_with(:manifest => manifest) : env
+ validation_environment.check_for_reparse
validation_environment.known_resource_types.clear
rescue => detail
Puppet.log_exception(detail)
exit(1)
end
end
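The new dump action is a thin front end over Puppet::Pops: the source is parsed with an EvaluatingParser and the resulting model is pretty-printed by ModelTreeDumper, exactly as dump_parse does above. A minimal standalone sketch of the same round trip (assumes the puppet gem is installed so puppet/pops is loadable):

require 'puppet'
require 'puppet/pops'

Puppet.initialize_settings  # assumed: running outside a Puppet application

source = "notify { 'hello': message => 'world' }"

parser = Puppet::Pops::Parser::EvaluatingParser.new
dumper = Puppet::Pops::Model::ModelTreeDumper.new

# parse_string validates the result and raises Puppet::ParseError on bad input,
# mirroring the --validate path of the dump action above.
model = parser.parse_string(source, 'example.pp')
puts dumper.dump(model)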
diff --git a/lib/puppet/feature/base.rb b/lib/puppet/feature/base.rb
index e30aed313..b410297fb 100644
--- a/lib/puppet/feature/base.rb
+++ b/lib/puppet/feature/base.rb
@@ -1,88 +1,94 @@
require 'puppet/util/feature'
# Add the simple features, all in one file.
# Order is important as some features depend on others
# We have a syslog implementation
Puppet.features.add(:syslog, :libs => ["syslog"])
# We can use POSIX user functions
Puppet.features.add(:posix) do
require 'etc'
!Etc.getpwuid(0).nil? && Puppet.features.syslog?
end
# We can use Microsoft Windows functions
Puppet.features.add(:microsoft_windows) do
begin
# ruby
require 'Win32API' # case matters in this require!
require 'win32ole'
- require 'win32/registry'
# gems
- require 'sys/admin'
require 'win32/process'
require 'win32/dir'
require 'win32/service'
- require 'win32/api'
- require 'win32/taskscheduler'
true
rescue LoadError => err
- warn "Cannot run on Microsoft Windows without the sys-admin, win32-process, win32-dir, win32-service and win32-taskscheduler gems: #{err}" unless Puppet.features.posix?
+ warn "Cannot run on Microsoft Windows without the win32-process, win32-dir and win32-service gems: #{err}" unless Puppet.features.posix?
end
end
raise Puppet::Error,"Cannot determine basic system flavour" unless Puppet.features.posix? or Puppet.features.microsoft_windows?
# We've got LDAP available.
Puppet.features.add(:ldap, :libs => ["ldap"])
# We have the Rdoc::Usage library.
Puppet.features.add(:usage, :libs => %w{rdoc/ri/ri_paths rdoc/usage})
# We have libshadow, useful for managing passwords.
Puppet.features.add(:libshadow, :libs => ["shadow"])
# We're running as root.
Puppet.features.add(:root) { require 'puppet/util/suidmanager'; Puppet::Util::SUIDManager.root? }
# We have lcs diff
Puppet.features.add :diff, :libs => %w{diff/lcs diff/lcs/hunk}
# We have augeas
Puppet.features.add(:augeas, :libs => ["augeas"])
# We have RRD available
Puppet.features.add(:rrd_legacy, :libs => ["RRDtool"])
Puppet.features.add(:rrd, :libs => ["RRD"])
# We have OpenSSL
Puppet.features.add(:openssl, :libs => ["openssl"])
# We have CouchDB
Puppet.features.add(:couchdb, :libs => ["couchrest"])
# We have sqlite
Puppet.features.add(:sqlite, :libs => ["sqlite3"])
# We have Hiera
Puppet.features.add(:hiera, :libs => ["hiera"])
Puppet.features.add(:minitar, :libs => ["archive/tar/minitar"])
# We can manage symlinks
Puppet.features.add(:manages_symlinks) do
if ! Puppet::Util::Platform.windows?
true
else
- begin
- require 'Win32API'
- Win32API.new('kernel32', 'CreateSymbolicLink', 'SSL', 'B')
- true
- rescue LoadError => err
- Puppet.debug("CreateSymbolicLink is not available")
- false
+ module WindowsSymlink
+ require 'ffi'
+ extend FFI::Library
+
+ def self.is_implemented
+ begin
+ ffi_lib :kernel32
+ attach_function :CreateSymbolicLinkW, [:lpwstr, :lpwstr, :dword], :win32_bool
+
+ true
+ rescue LoadError => err
+ Puppet.debug("CreateSymbolicLink is not available")
+ false
+ end
+ end
end
+
+ WindowsSymlink.is_implemented
end
end
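The feature file above uses both registration styles that Puppet.features supports: a :libs list (the feature is present when the libraries can be required) and a probe block (the feature is present when the block returns a truthy value). Each registration also defines a <name>? query method. A small sketch with made-up feature names (:example_lib and :running_as_root_example are hypothetical):

require 'puppet'
require 'puppet/util/feature'

# Present only if the named library can be required.
Puppet.features.add(:example_lib, :libs => ['example_lib'])

# Present only if the probe block returns a truthy value.
Puppet.features.add(:running_as_root_example) do
  Process.euid == 0
end

puts Puppet.features.example_lib?            # false unless such a library exists
puts Puppet.features.running_as_root_example?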
diff --git a/lib/puppet/feature/cfacter.rb b/lib/puppet/feature/cfacter.rb
new file mode 100644
index 000000000..18924a951
--- /dev/null
+++ b/lib/puppet/feature/cfacter.rb
@@ -0,0 +1,14 @@
+require 'facter'
+require 'puppet/util/feature'
+
+Puppet.features.add :cfacter do
+ begin
+ require 'cfacter'
+
+ # The first release of cfacter didn't have the necessary interface to work with Puppet
+ # Therefore, if the version is 0.1.0, treat the feature as not present
+ CFacter.version != '0.1.0'
+ rescue LoadError
+ false
+ end
+end
diff --git a/lib/puppet/feature/pe_license.rb b/lib/puppet/feature/pe_license.rb
new file mode 100644
index 000000000..cdb66c19b
--- /dev/null
+++ b/lib/puppet/feature/pe_license.rb
@@ -0,0 +1,4 @@
+require 'puppet/util/feature'
+
+# Is the pe_license library installed, providing the ability to read licenses?
+Puppet.features.add(:pe_license, :libs => %{pe_license})
diff --git a/lib/puppet/file_bucket/dipper.rb b/lib/puppet/file_bucket/dipper.rb
index 33c800b47..b627ce02d 100644
--- a/lib/puppet/file_bucket/dipper.rb
+++ b/lib/puppet/file_bucket/dipper.rb
@@ -1,112 +1,119 @@
require 'pathname'
require 'puppet/file_bucket'
require 'puppet/file_bucket/file'
require 'puppet/indirector/request'
class Puppet::FileBucket::Dipper
include Puppet::Util::Checksums
# This is a transitional implementation that uses REST
# to access remote filebucket files.
attr_accessor :name
- # Create our bucket client
+ # Creates a bucket client
def initialize(hash = {})
# Emulate the XMLRPC client
server = hash[:Server]
port = hash[:Port] || Puppet[:masterport]
environment = Puppet[:environment]
if hash.include?(:Path)
@local_path = hash[:Path]
@rest_path = nil
else
@local_path = nil
@rest_path = "https://#{server}:#{port}/#{environment}/file_bucket_file/"
end
@checksum_type = Puppet[:digest_algorithm].to_sym
@digest = method(@checksum_type)
end
def local?
!! @local_path
end
- # Back up a file to our bucket
+ # Backs up a file to the file bucket
def backup(file)
file_handle = Puppet::FileSystem.pathname(file)
raise(ArgumentError, "File #{file} does not exist") unless Puppet::FileSystem.exist?(file_handle)
- contents = Puppet::FileSystem.binread(file_handle)
begin
- file_bucket_file = Puppet::FileBucket::File.new(contents, :bucket_path => @local_path)
+ file_bucket_file = Puppet::FileBucket::File.new(file_handle, :bucket_path => @local_path)
files_original_path = absolutize_path(file)
dest_path = "#{@rest_path}#{file_bucket_file.name}/#{files_original_path}"
file_bucket_path = "#{@rest_path}#{file_bucket_file.checksum_type}/#{file_bucket_file.checksum_data}/#{files_original_path}"
# Make a HEAD request for the file so that we don't waste time
# uploading it if it already exists in the bucket.
unless Puppet::FileBucket::File.indirection.head(file_bucket_path)
Puppet::FileBucket::File.indirection.save(file_bucket_file, dest_path)
end
return file_bucket_file.checksum_data
rescue => detail
message = "Could not back up #{file}: #{detail}"
Puppet.log_exception(detail, message)
raise Puppet::Error, message, detail.backtrace
end
end
- # Retrieve a file by sum.
+ # Retrieves a file by sum.
def getfile(sum)
+ get_bucket_file(sum).to_s
+ end
+
+ # Retrieves a FileBucket::File by sum.
+ def get_bucket_file(sum)
source_path = "#{@rest_path}#{@checksum_type}/#{sum}"
file_bucket_file = Puppet::FileBucket::File.indirection.find(source_path, :bucket_path => @local_path)
raise Puppet::Error, "File not found" unless file_bucket_file
- file_bucket_file.to_s
+ file_bucket_file
end
- # Restore the file
- def restore(file,sum)
+ # Restores the file
+ def restore(file, sum)
restore = true
file_handle = Puppet::FileSystem.pathname(file)
if Puppet::FileSystem.exist?(file_handle)
- cursum = @digest.call(Puppet::FileSystem.binread(file_handle))
+ cursum = Puppet::FileBucket::File.new(file_handle).checksum_data()
# if the checksum has changed...
# this might be extra effort
if cursum == sum
restore = false
end
end
if restore
- if newcontents = getfile(sum)
- newsum = @digest.call(newcontents)
+ if newcontents = get_bucket_file(sum)
+ newsum = newcontents.checksum_data
changed = nil
if Puppet::FileSystem.exist?(file_handle) and ! Puppet::FileSystem.writable?(file_handle)
changed = Puppet::FileSystem.stat(file_handle).mode
::File.chmod(changed | 0200, file)
end
::File.open(file, ::File::WRONLY|::File::TRUNC|::File::CREAT) { |of|
of.binmode
- of.print(newcontents)
+ source_stream = newcontents.stream do |source_stream|
+ FileUtils.copy_stream(source_stream, of)
+ end
+ #of.print(newcontents)
}
::File.chmod(changed, file) if changed
else
Puppet.err "Could not find file with checksum #{sum}"
return nil
end
return newsum
else
return nil
end
end
private
def absolutize_path( path )
Pathname.new(path).realpath
end
end
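The Dipper changes above switch backup and restore over to the streaming FileBucket::File API, but the public interface is unchanged. A hedged sketch of a local (on-disk) bucket round trip, assuming an initialized Puppet runtime and a writable scratch directory:

require 'puppet'
require 'puppet/file_bucket/dipper'
require 'tmpdir'

Puppet.initialize_settings  # assumed: running outside a Puppet application

Dir.mktmpdir do |scratch|
  bucket_dir = File.join(scratch, 'bucket')
  target     = File.join(scratch, 'motd')
  File.write(target, "welcome\n")

  # :Path selects the local code path; without it the REST path to a master is used.
  dipper = Puppet::FileBucket::Dipper.new(:Path => bucket_dir)

  sum = dipper.backup(target)        # stores the file and returns its checksum data
  File.write(target, "tampered\n")   # simulate an unwanted change
  dipper.restore(target, sum)        # stream the backed-up contents back into place

  puts File.read(target)             # => "welcome\n"
end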
diff --git a/lib/puppet/file_bucket/file.rb b/lib/puppet/file_bucket/file.rb
index cb8ecd699..9c9d2ebe3 100644
--- a/lib/puppet/file_bucket/file.rb
+++ b/lib/puppet/file_bucket/file.rb
@@ -1,94 +1,157 @@
require 'puppet/file_bucket'
require 'puppet/indirector'
require 'puppet/util/checksums'
require 'digest/md5'
require 'stringio'
class Puppet::FileBucket::File
# This class handles the abstract notion of a file in a filebucket.
# There are mechanisms to save and load this file locally and remotely in puppet/indirector/filebucketfile/*
# There is a compatibility class that emulates pre-indirector filebuckets in Puppet::FileBucket::Dipper
extend Puppet::Indirector
indirects :file_bucket_file, :terminus_class => :selector
- attr :contents
attr :bucket_path
def self.supported_formats
[:s, :pson]
end
def self.default_format
# This should really be :raw, like is done for Puppet::FileServing::Content
# but this class hasn't historically supported `from_raw`, so switching
# would break compatibility between newer 3.x agents talking to older 3.x
# masters. However, to/from_s has been supported and achieves the desired
# result without breaking compatibility.
:s
end
def initialize(contents, options = {})
- raise ArgumentError.new("contents must be a String, got a #{contents.class}") unless contents.is_a?(String)
- @contents = contents
+ case contents
+ when String
+ @contents = StringContents.new(contents)
+ when Pathname
+ @contents = FileContents.new(contents)
+ else
+ raise ArgumentError.new("contents must be a String or Pathname, got a #{contents.class}")
+ end
@bucket_path = options.delete(:bucket_path)
@checksum_type = Puppet[:digest_algorithm].to_sym
raise ArgumentError.new("Unknown option(s): #{options.keys.join(', ')}") unless options.empty?
end
# @return [Num] The size of the contents
def size
- contents.size
+ @contents.size()
end
# @return [IO] A stream that reads the contents
- def stream
- StringIO.new(contents)
+ def stream(&block)
+ @contents.stream(&block)
end
def checksum_type
@checksum_type.to_s
end
def checksum
"{#{checksum_type}}#{checksum_data}"
end
def checksum_data
- algorithm = Puppet::Util::Checksums.method(@checksum_type)
- @checksum_data ||= algorithm.call(contents)
+ @checksum_data ||= @contents.checksum_data(@checksum_type)
end
def to_s
- contents
+ @contents.to_s
+ end
+
+ def contents
+ to_s
end
def name
"#{checksum_type}/#{checksum_data}"
end
def self.from_s(contents)
self.new(contents)
end
def to_data_hash
- { "contents" => contents }
+ # Note that this serializes the entire data to a string and places it in a hash.
+ { "contents" => contents.to_s }
end
def self.from_data_hash(data)
self.new(data["contents"])
end
def to_pson
Puppet.deprecation_warning("Serializing Puppet::FileBucket::File objects to pson is deprecated.")
to_data_hash.to_pson
end
# This method is deprecated, but cannot be removed for awhile, otherwise
# older agents sending pson couldn't backup to filebuckets on newer masters
def self.from_pson(pson)
Puppet.deprecation_warning("Deserializing Puppet::FileBucket::File objects from pson is deprecated. Upgrade to a newer version.")
self.from_data_hash(pson)
end
+ private
+
+ class StringContents
+ def initialize(content)
+ @contents = content;
+ end
+
+ def stream(&block)
+ s = StringIO.new(@contents)
+ begin
+ block.call(s)
+ ensure
+ s.close
+ end
+ end
+
+ def size
+ @contents.size
+ end
+
+ def checksum_data(base_method)
+ Puppet.info("Computing checksum on string")
+ Puppet::Util::Checksums.method(base_method).call(@contents)
+ end
+
+ def to_s
+ # This is not so horrible as for FileContent, but still possible to mutate the content that the
+ # checksum is based on... so semi horrible...
+ return @contents;
+ end
+ end
+
+ class FileContents
+ def initialize(path)
+ @path = path
+ end
+
+ def stream(&block)
+ Puppet::FileSystem.open(@path, nil, 'rb', &block)
+ end
+
+ def size
+ Puppet::FileSystem.size(@path)
+ end
+
+ def checksum_data(base_method)
+ Puppet.info("Computing checksum on file #{@path}")
+ Puppet::Util::Checksums.method(:"#{base_method}_file").call(@path)
+ end
+
+ def to_s
+ Puppet::FileSystem::binread(@path)
+ end
+ end
end
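The reworked constructor picks a content holder by argument type: a String becomes StringContents (checksummed in memory) and a Pathname becomes FileContents (checksummed and streamed from disk), both exposing the same size/stream/checksum_data interface. A short sketch of the two paths, assuming an initialized Puppet runtime so Puppet[:digest_algorithm] resolves:

require 'puppet'
require 'puppet/file_bucket/file'
require 'pathname'
require 'tempfile'

Puppet.initialize_settings  # assumed: running outside a Puppet application

# String contents: held and checksummed in memory.
in_memory = Puppet::FileBucket::File.new("hello bucket\n")
puts in_memory.checksum   # e.g. "{md5}..." depending on the digest_algorithm setting
puts in_memory.size       # => 13

# Pathname contents: checksummed and streamed from disk without slurping the file.
tmp = Tempfile.new('bucket-demo')
tmp.write("hello bucket\n")
tmp.close
on_disk = Puppet::FileBucket::File.new(Pathname.new(tmp.path))
on_disk.stream { |io| print io.read }   # stream now yields an IO to the block
tmp.unlink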
diff --git a/lib/puppet/file_serving/configuration/parser.rb b/lib/puppet/file_serving/configuration/parser.rb
index ccb6b3568..2ae8313b1 100644
--- a/lib/puppet/file_serving/configuration/parser.rb
+++ b/lib/puppet/file_serving/configuration/parser.rb
@@ -1,121 +1,121 @@
require 'puppet/file_serving/configuration'
require 'puppet/util/watched_file'
class Puppet::FileServing::Configuration::Parser
Mount = Puppet::FileServing::Mount
MODULES = 'modules'
# Parse our configuration file.
def parse
raise("File server configuration #{@file} does not exist") unless Puppet::FileSystem.exist?(@file)
raise("Cannot read file server configuration #{@file}") unless FileTest.readable?(@file)
@mounts = {}
@count = 0
File.open(@file) { |f|
mount = nil
f.each_line { |line|
# Have the count increment at the top, in case we throw exceptions.
@count += 1
case line
when /^\s*#/; next # skip comments
when /^\s*$/; next # skip blank lines
when /\[([-\w]+)\]/
mount = newmount($1)
when /^\s*(\w+)\s+(.+?)(\s*#.*)?$/
var = $1
value = $2
value.strip!
raise(ArgumentError, "Fileserver configuration file does not use '=' as a separator") if value =~ /^=/
case var
when "path"
path(mount, value)
when "allow"
allow(mount, value)
when "deny"
deny(mount, value)
else
raise ArgumentError.new("Invalid argument '#{var}' in #{@file.filename}, line #{@count}")
end
else
raise ArgumentError.new("Invalid line '#{line.chomp}' at #{@file.filename}, line #{@count}")
end
}
}
validate
@mounts
end
def initialize(filename)
@file = Puppet::Util::WatchedFile.new(filename)
end
def changed?
@file.changed?
end
private
# Allow a given pattern access to a mount.
def allow(mount, value)
value.split(/\s*,\s*/).each { |val|
begin
mount.info "allowing #{val} access"
mount.allow(val)
rescue Puppet::AuthStoreError => detail
- raise ArgumentError.new(detail.to_s, @count, @file)
+ raise ArgumentError.new("#{detail.to_s} in #{@file}, line #{@count}")
end
}
end
# Deny a given pattern access to a mount.
def deny(mount, value)
value.split(/\s*,\s*/).each { |val|
begin
mount.info "denying #{val} access"
mount.deny(val)
rescue Puppet::AuthStoreError => detail
- raise ArgumentError.new(detail.to_s, @count, @file)
+ raise ArgumentError.new("#{detail.to_s} in #{@file}, line #{@count}")
end
}
end
# Create a new mount.
def newmount(name)
- raise ArgumentError, "#{@mounts[name]} is already mounted at #{name}", @count, @file if @mounts.include?(name)
+ raise ArgumentError.new("#{@mounts[name]} is already mounted at #{name} in #{@file}, line #{@count}") if @mounts.include?(name)
case name
when "modules"
mount = Mount::Modules.new(name)
when "plugins"
mount = Mount::Plugins.new(name)
else
mount = Mount::File.new(name)
end
@mounts[name] = mount
mount
end
# Set the path for a mount.
def path(mount, value)
if mount.respond_to?(:path=)
begin
mount.path = value
rescue ArgumentError => detail
Puppet.log_exception(detail, "Removing mount \"#{mount.name}\": #{detail}")
@mounts.delete(mount.name)
end
else
Puppet.warning "The '#{mount.name}' module can not have a path. Ignoring attempt to set it"
end
end
# Make sure all of our mounts are valid. We have to do this after the fact
# because details are added over time as the file is parsed.
def validate
@mounts.each { |name, mount| mount.validate }
end
end
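The parser above accepts a small INI-like fileserver.conf: [mount] headers, path/allow/deny directives, comments, and blank lines; anything else raises an ArgumentError that now includes the file and line number. A hedged round-trip sketch using a scratch directory (the mount name and paths are invented for illustration; assumes an initialized Puppet runtime):

require 'puppet'
require 'puppet/file_serving/configuration/parser'
require 'tmpdir'

Puppet.initialize_settings  # assumed: running outside a Puppet application

Dir.mktmpdir do |dir|
  served = File.join(dir, 'files')
  Dir.mkdir(served)                      # the mount's path must be an existing directory

  conf_path = File.join(dir, 'fileserver.conf')
  File.write(conf_path, <<-CONF)
# an extra mount served by the master
[extra_files]
  path #{served}
  allow *.example.com
  deny untrusted.example.com
CONF

  mounts = Puppet::FileServing::Configuration::Parser.new(conf_path).parse
  puts mounts.keys.inspect           # => ["extra_files"]
  puts mounts['extra_files'].class   # => Puppet::FileServing::Mount::File
end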
diff --git a/lib/puppet/file_system.rb b/lib/puppet/file_system.rb
index 6db15ee37..8a37e7e30 100644
--- a/lib/puppet/file_system.rb
+++ b/lib/puppet/file_system.rb
@@ -1,366 +1,366 @@
module Puppet::FileSystem
require 'puppet/file_system/path_pattern'
require 'puppet/file_system/file_impl'
require 'puppet/file_system/memory_file'
require 'puppet/file_system/memory_impl'
- require 'puppet/file_system/tempfile'
+ require 'puppet/file_system/uniquefile'
# create instance of the file system implementation to use for the current platform
@impl = if RUBY_VERSION =~ /^1\.8/
require 'puppet/file_system/file18'
Puppet::FileSystem::File18
elsif Puppet::Util::Platform.windows?
require 'puppet/file_system/file19windows'
Puppet::FileSystem::File19Windows
else
require 'puppet/file_system/file19'
Puppet::FileSystem::File19
end.new()
# Allows overriding the filesystem for the duration of the given block.
# The filesystem will only contain the given file(s).
#
# @param files [Puppet::FileSystem::MemoryFile] the files to have available
#
# @api private
#
def self.overlay(*files, &block)
old_impl = @impl
@impl = Puppet::FileSystem::MemoryImpl.new(*files)
yield
ensure
@impl = old_impl
end
# Opens the given path with given mode, and options and optionally yields it to the given block.
#
# @api public
#
def self.open(path, mode, options, &block)
@impl.open(assert_path(path), mode, options, &block)
end
# @return [Object] The directory of this file as an opaque handle
#
# @api public
#
def self.dir(path)
@impl.dir(assert_path(path))
end
# @return [String] The directory of this file as a String
#
# @api public
#
def self.dir_string(path)
@impl.path_string(@impl.dir(assert_path(path)))
end
# @return [Boolean] Does the directory of the given path exist?
def self.dir_exist?(path)
@impl.exist?(@impl.dir(assert_path(path)))
end
# Creates all directories down to (inclusive) the dir of the given path
def self.dir_mkpath(path)
@impl.mkpath(@impl.dir(assert_path(path)))
end
# @return [Object] the name of the file as an opaque handle
#
# @api public
#
def self.basename(path)
@impl.basename(assert_path(path))
end
# @return [String] the name of the file
#
# @api public
#
def self.basename_string(path)
@impl.path_string(@impl.basename(assert_path(path)))
end
# @return [Integer] the size of the file
#
# @api public
#
def self.size(path)
@impl.size(assert_path(path))
end
# Allows exclusive updates to a file to be made by excluding concurrent
# access using flock. This means that if the file is on a filesystem that
# does not support flock, this method will provide no protection.
#
# While polling to acquire the lock, the process will wait ever-increasing
# amounts of time in order to prevent multiple processes from wasting
# resources.
#
# @param path [Pathname] the path to the file to operate on
# @param mode [Integer] The mode to apply to the file if it is created
# @param options [Integer] Extra file operation mode information to use
# (defaults to read-only mode)
# @param timeout [Integer] Number of seconds to wait for the lock (defaults to 300)
# @yield The file handle, in read-write mode
# @return [Void]
# @raise [Timeout::Error] If the timeout is exceeded while waiting to acquire the lock
#
# @api public
#
def self.exclusive_open(path, mode, options = 'r', timeout = 300, &block)
@impl.exclusive_open(assert_path(path), mode, options, timeout, &block)
end
# Processes each line of the file by yielding it to the given block
#
# @api public
#
def self.each_line(path, &block)
@impl.each_line(assert_path(path), &block)
end
# @return [String] The contents of the file
#
# @api public
#
def self.read(path)
@impl.read(assert_path(path))
end
# @return [String] The binary contents of the file
#
# @api public
#
def self.binread(path)
@impl.binread(assert_path(path))
end
# Determines if a file exists by verifying that the file can be stat'd.
# Will follow symlinks and verify that the actual target path exists.
#
# @return [Boolean] true if the named file exists.
#
# @api public
#
def self.exist?(path)
@impl.exist?(assert_path(path))
end
# Determines if a file is a directory.
#
# @return [Boolean] true if the given file is a directory.
#
# @api public
def self.directory?(path)
@impl.directory?(assert_path(path))
end
# Determines if a file is a file.
#
# @return [Boolean] true if the given file is a file.
#
# @api public
def self.file?(path)
@impl.file?(assert_path(path))
end
# Determines if a file is executable.
#
# @todo Should this take into account extensions on the windows platform?
#
# @return [Boolean] true if this file can be executed
#
# @api public
#
def self.executable?(path)
@impl.executable?(assert_path(path))
end
# @return [Boolean] Whether the file is writable by the current process
#
# @api public
#
def self.writable?(path)
@impl.writable?(assert_path(path))
end
# Touches the file. On most systems this updates the mtime of the file.
#
# @api public
#
def self.touch(path)
@impl.touch(assert_path(path))
end
# Creates directories for all parts of the given path.
#
# @api public
#
def self.mkpath(path)
@impl.mkpath(assert_path(path))
end
# @return [Array<Object>] references to all of the children of the given
# directory path, excluding `.` and `..`.
# @api public
def self.children(path)
@impl.children(assert_path(path))
end
# Creates a symbolic link dest which points to the current file.
# If dest already exists:
#
# * and is a file, will raise Errno::EEXIST
# * and is a directory, will return 0 but perform no action
# * and is a symlink referencing a file, will raise Errno::EEXIST
# * and is a symlink referencing a directory, will return 0 but perform no action
#
# With the :force option set to true, when dest already exists:
#
# * and is a file, will replace the existing file with a symlink (DANGEROUS)
# * and is a directory, will return 0 but perform no action
# * and is a symlink referencing a file, will modify the existing symlink
# * and is a symlink referencing a directory, will return 0 but perform no action
#
# @param dest [String] The path to create the new symlink at
# @param [Hash] options the options to create the symlink with
# @option options [Boolean] :force overwrite dest
# @option options [Boolean] :noop do not perform the operation
# @option options [Boolean] :verbose verbose output
#
# @raise [Errno::EEXIST] dest already exists as a file and, :force is not set
#
# @return [Integer] 0
#
# @api public
#
def self.symlink(path, dest, options = {})
@impl.symlink(assert_path(path), dest, options)
end
# @return [Boolean] true if the file is a symbolic link.
#
# @api public
#
def self.symlink?(path)
@impl.symlink?(assert_path(path))
end
# @return [String] the name of the file referenced by the given link.
#
# @api public
#
def self.readlink(path)
@impl.readlink(assert_path(path))
end
# Deletes the given paths, returning the number of names passed as arguments.
# See also Dir::rmdir.
#
# @raise an exception on any error.
#
# @return [Integer] the number of paths passed as arguments
#
# @api public
#
def self.unlink(*paths)
@impl.unlink(*(paths.map {|p| assert_path(p) }))
end
# @return [File::Stat] object for the named file.
#
# @api public
#
def self.stat(path)
@impl.stat(assert_path(path))
end
# @return [Integer] the size of the file
#
# @api public
#
def self.size(path)
@impl.size(assert_path(path))
end
# @return [File::Stat] Same as stat, but does not follow the last symbolic
# link. Instead, reports on the link itself.
#
# @api public
#
def self.lstat(path)
@impl.lstat(assert_path(path))
end
# Compares the contents of this file against the contents of a stream.
#
# @param stream [IO] The stream to compare the contents against
# @return [Boolean] Whether the contents were the same
#
# @api public
#
def self.compare_stream(path, stream)
@impl.compare_stream(assert_path(path), stream)
end
# Produces an opaque pathname "handle" object representing the given path.
# Different implementations of the underlying file system may use different runtime
# objects. The produced "handle" should be used in all other operations
# that take a "path". No operation should be directly invoked on the returned opaque object
#
# @param path [String] The string representation of the path
# @return [Object] An opaque path handle on which no operations should be directly performed
#
# @api public
#
def self.pathname(path)
@impl.pathname(path)
end
# Asserts that the given path is of the expected type produced by #pathname
#
# @raise [ArgumentError] when path is not of the expected type
#
# @api public
#
def self.assert_path(path)
@impl.assert_path(path)
end
# Produces a string representation of the opaque path handle.
#
# @param path [Object] a path handle produced by {#pathname}
# @return [String] a string representation of the path
#
def self.path_string(path)
@impl.path_string(path)
end
# Create and open a file for write only if it doesn't exist.
#
# @see Puppet::FileSystem::open
#
# @raise [Errno::EEXIST] path already exists.
#
# @api public
#
def self.exclusive_create(path, mode, &block)
@impl.exclusive_create(assert_path(path), mode, &block)
end
# Changes permission bits on the named path to the bit pattern represented
# by mode.
#
# @param mode [Integer] The mode to apply to the file if it is created
# @param path [String] The path to the file, can also accept [PathName]
#
# @raise [Errno::ENOENT]: path doesn't exist
#
# @api public
#
def self.chmod(mode, path)
@impl.chmod(mode, path)
end
end
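Most of the module's operations take the opaque handle produced by Puppet::FileSystem.pathname. The exclusive_open call documented above adds flock-based serialization with a timeout; a short sketch of using it to update a shared counter file (the counter semantics are invented for illustration):

require 'puppet'
require 'puppet/file_system'
require 'tmpdir'

Dir.mktmpdir do |dir|
  counter = Puppet::FileSystem.pathname(File.join(dir, 'counter'))
  Puppet::FileSystem.touch(counter)

  # Hold the advisory lock while reading the current value and writing it back
  # incremented; other cooperating processes doing the same will serialize here.
  Puppet::FileSystem.exclusive_open(counter, 0644, 'r+', 30) do |f|
    value = f.read.to_i
    f.rewind
    f.truncate(0)
    f.write((value + 1).to_s)
  end

  puts Puppet::FileSystem.read(counter)   # => "1"
end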
diff --git a/lib/puppet/file_system/file19.rb b/lib/puppet/file_system/file19.rb
index fce9a6a82..8872ba6fc 100644
--- a/lib/puppet/file_system/file19.rb
+++ b/lib/puppet/file_system/file19.rb
@@ -1,5 +1,46 @@
class Puppet::FileSystem::File19 < Puppet::FileSystem::FileImpl
def binread(path)
path.binread
end
+
+ # Provide an encoding agnostic version of compare_stream
+ #
+ # The FileUtils implementation in Ruby 2.0+ was modified in a manner where
+ # it cannot properly compare File and StringIO instances. To sidestep that
+ # issue this method reimplements the faster 2.0 version that will correctly
+ # compare binary File and StringIO streams.
+ def compare_stream(path, stream)
+ open(path, 0, 'rb') do |this|
+ bsize = stream_blksize(this, stream)
+ sa = "".force_encoding('ASCII-8BIT')
+ sb = "".force_encoding('ASCII-8BIT')
+ begin
+ this.read(bsize, sa)
+ stream.read(bsize, sb)
+ return true if sa.empty? && sb.empty?
+ end while sa == sb
+ false
+ end
+ end
+
+ private
+ def stream_blksize(*streams)
+ streams.each do |s|
+ next unless s.respond_to?(:stat)
+ size = blksize(s.stat)
+ return size if size
+ end
+ default_blksize()
+ end
+
+ def blksize(st)
+ s = st.blksize
+ return nil unless s
+ return nil if s == 0
+ s
+ end
+
+ def default_blksize
+ 1024
+ end
end
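compare_stream above reads both sides in blksize-sized binary chunks, which is what lets a File be compared against a StringIO on Ruby 2.0+. It is reached through the generic Puppet::FileSystem.compare_stream front end; a small sketch (the file contents are arbitrary):

require 'puppet'
require 'puppet/file_system'
require 'stringio'
require 'tmpdir'

Dir.mktmpdir do |dir|
  path = Puppet::FileSystem.pathname(File.join(dir, 'payload'))
  Puppet::FileSystem.open(path, 0644, 'wb') { |f| f.write('master contents') }

  puts Puppet::FileSystem.compare_stream(path, StringIO.new('master contents'))  # => true
  puts Puppet::FileSystem.compare_stream(path, StringIO.new('agent contents'))   # => false
end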
diff --git a/lib/puppet/file_system/file19windows.rb b/lib/puppet/file_system/file19windows.rb
index 7ebba2cf4..7ae984f48 100644
--- a/lib/puppet/file_system/file19windows.rb
+++ b/lib/puppet/file_system/file19windows.rb
@@ -1,108 +1,107 @@
require 'puppet/file_system/file19'
require 'puppet/util/windows'
class Puppet::FileSystem::File19Windows < Puppet::FileSystem::File19
def exist?(path)
if ! Puppet.features.manages_symlinks?
return ::File.exist?(path)
end
path = path.to_str if path.respond_to?(:to_str) # support WatchedFile
path = path.to_s # support String and Pathname
begin
if Puppet::Util::Windows::File.symlink?(path)
path = Puppet::Util::Windows::File.readlink(path)
end
! Puppet::Util::Windows::File.stat(path).nil?
rescue # generally INVALID_HANDLE_VALUE which means 'file not found'
false
end
end
def symlink(path, dest, options = {})
raise_if_symlinks_unsupported
dest_exists = exist?(dest) # returns false on dangling symlink
dest_stat = Puppet::Util::Windows::File.stat(dest) if dest_exists
- dest_symlink = Puppet::Util::Windows::File.symlink?(dest)
# silent fail to preserve semantics of original FileUtils
return 0 if dest_exists && dest_stat.ftype == 'directory'
if dest_exists && dest_stat.ftype == 'file' && options[:force] != true
raise(Errno::EEXIST, "#{dest} already exists and the :force option was not specified")
end
if options[:noop] != true
::File.delete(dest) if dest_exists # can only be file
Puppet::Util::Windows::File.symlink(path, dest)
end
0
end
def symlink?(path)
return false if ! Puppet.features.manages_symlinks?
Puppet::Util::Windows::File.symlink?(path)
end
def readlink(path)
raise_if_symlinks_unsupported
Puppet::Util::Windows::File.readlink(path)
end
def unlink(*file_names)
if ! Puppet.features.manages_symlinks?
return ::File.unlink(*file_names)
end
file_names.each do |file_name|
file_name = file_name.to_s # handle PathName
stat = Puppet::Util::Windows::File.stat(file_name) rescue nil
# sigh, Ruby + Windows :(
if stat && stat.ftype == 'directory'
if Puppet::Util::Windows::File.symlink?(file_name)
Dir.rmdir(file_name)
else
raise Errno::EPERM.new(file_name)
end
else
::File.unlink(file_name)
end
end
file_names.length
end
def stat(path)
Puppet::Util::Windows::File.stat(path)
end
def lstat(path)
if ! Puppet.features.manages_symlinks?
return Puppet::Util::Windows::File.stat(path)
end
Puppet::Util::Windows::File.lstat(path)
end
def chmod(mode, path)
Puppet::Util::Windows::Security.set_mode(mode, path.to_s)
end
private
def raise_if_symlinks_unsupported
if ! Puppet.features.manages_symlinks?
msg = "This version of Windows does not support symlinks. Windows Vista / 2008 or higher is required."
raise Puppet::Util::Windows::Error.new(msg)
end
if ! Puppet::Util::Windows::Process.process_privilege_symlink?
Puppet.warning "The current user does not have the necessary permission to manage symlinks."
end
end
end
diff --git a/lib/puppet/file_system/tempfile.rb b/lib/puppet/file_system/tempfile.rb
deleted file mode 100644
index 6766a7aec..000000000
--- a/lib/puppet/file_system/tempfile.rb
+++ /dev/null
@@ -1,20 +0,0 @@
-require 'tempfile'
-
-class Puppet::FileSystem::Tempfile
-
- # Variation of Tempfile.open which ensures that the tempfile is closed and
- # unlinked before returning
- #
- # @param identifier [String] additional part of generated pathname
- # @yieldparam file [File] the temporary file object
- # @return result of the passed block
- # @api private
- def self.open(identifier)
- file = ::Tempfile.new(identifier)
-
- yield file
-
- ensure
- file.close!
- end
-end
diff --git a/lib/puppet/file_system/uniquefile.rb b/lib/puppet/file_system/uniquefile.rb
new file mode 100644
index 000000000..2b1311509
--- /dev/null
+++ b/lib/puppet/file_system/uniquefile.rb
@@ -0,0 +1,190 @@
+require 'puppet/file_system'
+require 'delegate'
+require 'tmpdir'
+
+# A class that provides `Tempfile`-like capabilities, but does not attempt to
+# manage the deletion of the file for you. API is identical to the
+# normal `Tempfile` class.
+#
+# @api public
+class Puppet::FileSystem::Uniquefile < DelegateClass(File)
+ # Convenience method which ensures that the file is closed and
+ # unlinked before returning
+ #
+ # @param identifier [String] additional part of generated pathname
+ # @yieldparam file [File] the temporary file object
+ # @return result of the passed block
+ # @api private
+ def self.open_tmp(identifier)
+ f = new(identifier)
+ yield f
+ ensure
+ if f
+ f.close!
+ end
+ end
+
+ def initialize(basename, *rest)
+ create_tmpname(basename, *rest) do |tmpname, n, opts|
+ mode = File::RDWR|File::CREAT|File::EXCL
+ perm = 0600
+ if opts
+ mode |= opts.delete(:mode) || 0
+ opts[:perm] = perm
+ perm = nil
+ else
+ opts = perm
+ end
+ self.class.locking(tmpname) do
+ @tmpfile = File.open(tmpname, mode, opts)
+ @tmpname = tmpname
+ end
+ @mode = mode & ~(File::CREAT|File::EXCL)
+ perm or opts.freeze
+ @opts = opts
+ end
+
+ super(@tmpfile)
+ end
+
+ # Opens or reopens the file with mode "r+".
+ def open
+ @tmpfile.close if @tmpfile
+ @tmpfile = File.open(@tmpname, @mode, @opts)
+ __setobj__(@tmpfile)
+ end
+
+ def _close
+ begin
+ @tmpfile.close if @tmpfile
+ ensure
+ @tmpfile = nil
+ end
+ end
+ protected :_close
+
+ def close(unlink_now=false)
+ if unlink_now
+ close!
+ else
+ _close
+ end
+ end
+
+ def close!
+ _close
+ unlink
+ end
+
+ def unlink
+ return unless @tmpname
+ begin
+ File.unlink(@tmpname)
+ rescue Errno::ENOENT
+ rescue Errno::EACCES
+ # may not be able to unlink on Windows; just ignore
+ return
+ end
+ @tmpname = nil
+ end
+ alias delete unlink
+
+ # Returns the full path name of the temporary file.
+ # This will be nil if #unlink has been called.
+ def path
+ @tmpname
+ end
+
+ private
+
+ def make_tmpname(prefix_suffix, n)
+ case prefix_suffix
+ when String
+ prefix = prefix_suffix
+ suffix = ""
+ when Array
+ prefix = prefix_suffix[0]
+ suffix = prefix_suffix[1]
+ else
+ raise ArgumentError, "unexpected prefix_suffix: #{prefix_suffix.inspect}"
+ end
+ t = Time.now.strftime("%Y%m%d")
+ path = "#{prefix}#{t}-#{$$}-#{rand(0x100000000).to_s(36)}"
+ path << "-#{n}" if n
+ path << suffix
+ end
+
+ def create_tmpname(basename, *rest)
+ if opts = try_convert_to_hash(rest[-1])
+ opts = opts.dup if rest.pop.equal?(opts)
+ max_try = opts.delete(:max_try)
+ opts = [opts]
+ else
+ opts = []
+ end
+ tmpdir, = *rest
+ if $SAFE > 0 and tmpdir.tainted?
+ tmpdir = '/tmp'
+ else
+ tmpdir ||= tmpdir()
+ end
+ n = nil
+ begin
+ path = File.expand_path(make_tmpname(basename, n), tmpdir)
+ yield(path, n, *opts)
+ rescue Errno::EEXIST
+ n ||= 0
+ n += 1
+ retry if !max_try or n < max_try
+ raise "cannot generate temporary name using `#{basename}' under `#{tmpdir}'"
+ end
+ path
+ end
+
+ def try_convert_to_hash(h)
+ begin
+ h.to_hash
+ rescue NoMethodError => e
+ nil
+ end
+ end
+
+ @@systmpdir ||= defined?(Etc.systmpdir) ? Etc.systmpdir : '/tmp'
+
+ def tmpdir
+ tmp = '.'
+ if $SAFE > 0
+ tmp = @@systmpdir
+ else
+ for dir in [ENV['TMPDIR'], ENV['TMP'], ENV['TEMP'], @@systmpdir, '/tmp']
+ if dir and stat = File.stat(dir) and stat.directory? and stat.writable?
+ tmp = dir
+ break
+ end rescue nil
+ end
+ File.expand_path(tmp)
+ end
+ end
+
+
+ class << self
+ # yields with locking for +tmpname+ and returns the result of the
+ # block.
+ def locking(tmpname)
+ lock = tmpname + '.lock'
+ mkdir(lock)
+ yield
+ ensure
+ rmdir(lock) if lock
+ end
+
+ def mkdir(*args)
+ Dir.mkdir(*args)
+ end
+
+ def rmdir(*args)
+ Dir.rmdir(*args)
+ end
+ end
+
+end
\ No newline at end of file
diff --git a/lib/puppet/forge.rb b/lib/puppet/forge.rb
index d43114b80..a7cdd42a4 100644
--- a/lib/puppet/forge.rb
+++ b/lib/puppet/forge.rb
@@ -1,199 +1,226 @@
require 'puppet/vendor'
Puppet::Vendor.load_vendored
require 'net/http'
require 'tempfile'
require 'uri'
require 'pathname'
require 'json'
require 'semantic'
class Puppet::Forge < Semantic::Dependency::Source
require 'puppet/forge/cache'
require 'puppet/forge/repository'
require 'puppet/forge/errors'
include Puppet::Forge::Errors
USER_AGENT = "PMT/1.1.1 (v3; Net::HTTP)".freeze
attr_reader :host, :repository
def initialize(host = Puppet[:module_repository])
@host = host
@repository = Puppet::Forge::Repository.new(host, USER_AGENT)
end
# Return a list of module metadata hashes that match the search query.
# This return value is used by the module_tool face install search,
# and displayed to on the console.
#
# Example return value:
#
# [
# {
# "author" => "puppetlabs",
# "name" => "bacula",
# "tag_list" => ["backup", "bacula"],
# "releases" => [{"version"=>"0.0.1"}, {"version"=>"0.0.2"}],
# "full_name" => "puppetlabs/bacula",
# "version" => "0.0.2",
# "project_url" => "http://github.com/puppetlabs/puppetlabs-bacula",
# "desc" => "bacula"
# }
# ]
#
# @param term [String] search term
# @return [Array] modules found
# @raise [Puppet::Forge::Errors::CommunicationError] if there is a network
# related error
# @raise [Puppet::Forge::Errors::SSLVerifyError] if there is a problem
# verifying the remote SSL certificate
# @raise [Puppet::Forge::Errors::ResponseError] if the repository returns a
# bad HTTP response
def search(term)
matches = []
uri = "/v3/modules?query=#{URI.escape(term)}"
+ if Puppet[:module_groups]
+ uri += "&module_groups=#{Puppet[:module_groups]}"
+ end
while uri
response = make_http_request(uri)
if response.code == '200'
result = JSON.parse(response.body)
uri = result['pagination']['next']
matches.concat result['results']
else
- raise ResponseError.new(:uri => URI.parse(@host).merge(uri) , :input => term, :response => response)
+ raise ResponseError.new(:uri => URI.parse(@host).merge(uri), :response => response)
end
end
matches.each do |mod|
mod['author'] = mod['owner']['username']
mod['tag_list'] = mod['current_release']['tags']
mod['full_name'] = "#{mod['author']}/#{mod['name']}"
mod['version'] = mod['current_release']['version']
mod['project_url'] = mod['homepage_url']
mod['desc'] = mod['current_release']['metadata']['summary'] || ''
end
end
# Fetches {ModuleRelease} entries for each release of the named module.
#
# @param input [String] the module name to look up
# @return [Array<Semantic::Dependency::ModuleRelease>] a list of releases for
# the given name
# @see Semantic::Dependency::Source#fetch
def fetch(input)
name = input.tr('/', '-')
uri = "/v3/releases?module=#{name}"
+ if Puppet[:module_groups]
+ uri += "&module_groups=#{Puppet[:module_groups]}"
+ end
releases = []
while uri
response = make_http_request(uri)
if response.code == '200'
response = JSON.parse(response.body)
else
- raise ResponseError.new(:uri => URI.parse(@host).merge(uri), :input => input, :response => response)
+ raise ResponseError.new(:uri => URI.parse(@host).merge(uri), :response => response)
end
releases.concat(process(response['results']))
uri = response['pagination']['next']
end
return releases
end
def make_http_request(*args)
@repository.make_http_request(*args)
end
class ModuleRelease < Semantic::Dependency::ModuleRelease
attr_reader :install_dir, :metadata
def initialize(source, data)
@data = data
@metadata = meta = data['metadata']
name = meta['name'].tr('/', '-')
version = Semantic::Version.parse(meta['version'])
release = "#{name}@#{version}"
- dependencies = (meta['dependencies'] || [])
- dependencies.map! do |dep|
- Puppet::ModuleTool.parse_module_dependency(release, dep)[0..1]
+ if meta['dependencies']
+ dependencies = meta['dependencies'].collect do |dep|
+ begin
+ Puppet::ModuleTool::Metadata.new.add_dependency(dep['name'], dep['version_requirement'], dep['repository'])
+ Puppet::ModuleTool.parse_module_dependency(release, dep)[0..1]
+ rescue ArgumentError => e
+ raise ArgumentError, "Malformed dependency: #{dep['name']}. Exception was: #{e}"
+ end
+ end
+ else
+ dependencies = []
end
super(source, name, version, Hash[dependencies])
end
def install(dir)
staging_dir = self.prepare
module_dir = dir + name[/-(.*)/, 1]
module_dir.rmtree if module_dir.exist?
# Make sure unpacked module has the same ownership as the folder we are moving it into.
Puppet::ModuleTool::Applications::Unpacker.harmonize_ownership(dir, staging_dir)
FileUtils.mv(staging_dir, module_dir)
@install_dir = dir
# Return the Pathname object representing the directory where the
# module release archive was unpacked the to.
return module_dir
ensure
staging_dir.rmtree if staging_dir.exist?
end
def prepare
return @unpacked_into if @unpacked_into
download(@data['file_uri'], tmpfile)
validate_checksum(tmpfile, @data['file_md5'])
unpack(tmpfile, tmpdir)
@unpacked_into = Pathname.new(tmpdir)
end
private
# Obtain a suitable temporary path for unpacking tarballs
#
# @return [Pathname] path to temporary unpacking location
def tmpdir
@dir ||= Dir.mktmpdir(name, Puppet::Forge::Cache.base_path)
end
def tmpfile
@file ||= Tempfile.new(name, Puppet::Forge::Cache.base_path).tap do |f|
f.binmode
end
end
def download(uri, destination)
- @source.make_http_request(uri, destination)
+ response = @source.make_http_request(uri, destination)
destination.flush and destination.close
+ unless response.code == '200'
+ raise Puppet::Forge::Errors::ResponseError.new(:uri => uri, :response => response)
+ end
end
def validate_checksum(file, checksum)
if Digest::MD5.file(file.path).hexdigest != checksum
raise RuntimeError, "Downloaded release for #{name} did not match expected checksum"
end
end
def unpack(file, destination)
begin
Puppet::ModuleTool::Applications::Unpacker.unpack(file.path, destination)
rescue Puppet::ExecutionFailure => e
raise RuntimeError, "Could not extract contents of module archive: #{e.message}"
end
end
end
private
def process(list)
- list.map { |release| ModuleRelease.new(self, release) }
+ l = list.map do |release|
+ metadata = release['metadata']
+ begin
+ ModuleRelease.new(self, release)
+ rescue ArgumentError => e
+ Puppet.warning "Cannot consider release #{metadata['name']}-#{metadata['version']}: #{e}"
+ false
+ end
+ end
+
+ l.select { |r| r }
end
end
diff --git a/lib/puppet/forge/errors.rb b/lib/puppet/forge/errors.rb
index 211c63fe8..54041f592 100644
--- a/lib/puppet/forge/errors.rb
+++ b/lib/puppet/forge/errors.rb
@@ -1,113 +1,112 @@
require 'json'
require 'puppet/error'
require 'puppet/forge'
# Puppet::Forge specific exceptions
module Puppet::Forge::Errors
# This exception is the parent for all Forge API errors
class ForgeError < Puppet::Error
# This is normally set by the child class, but if it is not this will
# fall back to displaying the message as a multiline.
#
# @return [String] the multiline version of the error message
def multiline
self.message
end
end
# This exception is raised when there is an SSL verification error when
# communicating with the forge.
class SSLVerifyError < ForgeError
# @option options [String] :uri The URI that failed
# @option options [String] :original the original exception
def initialize(options)
@uri = options[:uri]
original = options[:original]
super("Unable to verify the SSL certificate at #{@uri}", original)
end
# Return a multiline version of the error message
#
# @return [String] the multiline version of the error message
def multiline
<<-EOS.chomp
Could not connect via HTTPS to #{@uri}
Unable to verify the SSL certificate
The certificate may not be signed by a valid CA
The CA bundle included with OpenSSL may not be valid or up to date
EOS
end
end
# This exception is raised when there is a communication error when connecting
# to the forge
class CommunicationError < ForgeError
# @option options [String] :uri The URI that failed
# @option options [String] :original the original exception
def initialize(options)
@uri = options[:uri]
original = options[:original]
@detail = original.message
message = "Unable to connect to the server at #{@uri}. Detail: #{@detail}."
super(message, original)
end
# Return a multiline version of the error message
#
# @return [String] the multiline version of the error message
def multiline
<<-EOS.chomp
Could not connect to #{@uri}
There was a network communications problem
The error we caught said '#{@detail}'
Check your network connection and try again
EOS
end
end
# This exception is raised when there is a bad HTTP response from the forge
# and optionally a message in the response.
class ResponseError < ForgeError
# @option options [String] :uri The URI that failed
# @option options [String] :input The user's input (e.g. module name)
# @option options [String] :message Error from the API response (optional)
# @option options [Net::HTTPResponse] :response The original HTTP response
def initialize(options)
@uri = options[:uri]
- @input = options[:input]
@message = options[:message]
response = options[:response]
@response = "#{response.code} #{response.message.strip}"
begin
body = JSON.parse(response.body)
if body['message']
@message ||= body['message'].strip
end
rescue JSON::ParserError
end
- message = "Could not execute operation for '#{@input}'. Detail: "
+ message = "Request to Puppet Forge failed. Detail: "
message << @message << " / " if @message
message << @response << "."
super(message, original)
end
# Return a multiline version of the error message
#
# @return [String] the multiline version of the error message
def multiline
- message = <<-EOS
-Could not execute operation for '#{@input}'
+ message = <<-EOS.chomp
+Request to Puppet Forge failed.
The server being queried was #{@uri}
The HTTP response we received was '#{@response}'
EOS
- message << " The message we received said '#{@message}'\n" if @message
- message << " Check the author and module names are correct."
+ message << "\n The message we received said '#{@message}'" if @message
+ message
end
end
end
diff --git a/lib/puppet/forge/repository.rb b/lib/puppet/forge/repository.rb
index 9043f6bdc..a96faf309 100644
--- a/lib/puppet/forge/repository.rb
+++ b/lib/puppet/forge/repository.rb
@@ -1,170 +1,182 @@
require 'net/https'
require 'digest/sha1'
require 'uri'
require 'puppet/util/http_proxy'
require 'puppet/forge'
require 'puppet/forge/errors'
if Puppet.features.zlib? && Puppet[:zlib]
require 'zlib'
end
class Puppet::Forge
# = Repository
#
# This class is a file for accessing remote repositories with modules.
class Repository
include Puppet::Forge::Errors
attr_reader :uri, :cache
# List of Net::HTTP exceptions to catch
NET_HTTP_EXCEPTIONS = [
EOFError,
Errno::ECONNABORTED,
Errno::ECONNREFUSED,
Errno::ECONNRESET,
Errno::EINVAL,
Errno::ETIMEDOUT,
Net::HTTPBadResponse,
Net::HTTPHeaderSyntaxError,
Net::ProtocolError,
SocketError,
]
if Puppet.features.zlib? && Puppet[:zlib]
NET_HTTP_EXCEPTIONS << Zlib::GzipFile::Error
end
# Instantiate a new repository instance rooted at the +url+.
# The library will report +for_agent+ in the User-Agent to the repository.
def initialize(host, for_agent)
@host = host
@agent = for_agent
@cache = Cache.new(self)
@uri = URI.parse(host)
end
# Return a Net::HTTPResponse read for this +path+.
def make_http_request(path, io = nil)
Puppet.debug "HTTP GET #{@host}#{path}"
request = get_request_object(path)
return read_response(request, io)
end
+ def forge_authorization
+ if Puppet[:forge_authorization]
+ Puppet[:forge_authorization]
+ elsif Puppet.features.pe_license?
+ PELicense.load_license_key.authorization_token
+ end
+ end
+
def get_request_object(path)
headers = {
"User-Agent" => user_agent,
}
if Puppet.features.zlib? && Puppet[:zlib] && RUBY_VERSION >= "1.9"
headers = headers.merge({
"Accept-Encoding" => "gzip;q=1.0,deflate;q=0.6,identity;q=0.3",
})
end
+ if forge_authorization
+ headers = headers.merge({"Authorization" => forge_authorization})
+ end
+
request = Net::HTTP::Get.new(URI.escape(path), headers)
- unless @uri.user.nil? || @uri.password.nil?
+ unless @uri.user.nil? || @uri.password.nil? || forge_authorization
request.basic_auth(@uri.user, @uri.password)
end
return request
end
# Return a Net::HTTPResponse read from this HTTPRequest +request+.
#
# @param request [Net::HTTPRequest] request to make
# @return [Net::HTTPResponse] response from request
# @raise [Puppet::Forge::Errors::CommunicationError] if there is a network
# related error
# @raise [Puppet::Forge::Errors::SSLVerifyError] if there is a problem
# verifying the remote SSL certificate
def read_response(request, io = nil)
http_object = get_http_object
http_object.start do |http|
response = http.request(request)
if Puppet.features.zlib? && Puppet[:zlib]
if response && response.key?("content-encoding")
case response["content-encoding"]
when "gzip"
response.body = Zlib::GzipReader.new(StringIO.new(response.read_body), :encoding => "ASCII-8BIT").read
response.delete("content-encoding")
when "deflate"
response.body = Zlib::Inflate.inflate(response.read_body)
response.delete("content-encoding")
end
end
end
io.write(response.body) if io.respond_to? :write
response
end
rescue *NET_HTTP_EXCEPTIONS => e
raise CommunicationError.new(:uri => @uri.to_s, :original => e)
rescue OpenSSL::SSL::SSLError => e
if e.message =~ /certificate verify failed/
raise SSLVerifyError.new(:uri => @uri.to_s, :original => e)
else
raise e
end
end
# Return a Net::HTTP::Proxy object constructed from the settings provided
# accessing the repository.
#
# This method optionally configures SSL correctly if the URI scheme is
# 'https', including setting up the root certificate store so remote server
# SSL certificates can be validated.
#
# @return [Net::HTTP::Proxy] object constructed from repo settings
def get_http_object
- proxy_class = Net::HTTP::Proxy(Puppet::Util::HttpProxy.http_proxy_host, Puppet::Util::HttpProxy.http_proxy_port)
+ proxy_class = Net::HTTP::Proxy(Puppet::Util::HttpProxy.http_proxy_host, Puppet::Util::HttpProxy.http_proxy_port, Puppet::Util::HttpProxy.http_proxy_user, Puppet::Util::HttpProxy.http_proxy_password)
proxy = proxy_class.new(@uri.host, @uri.port)
if @uri.scheme == 'https'
cert_store = OpenSSL::X509::Store.new
cert_store.set_default_paths
proxy.use_ssl = true
proxy.verify_mode = OpenSSL::SSL::VERIFY_PEER
proxy.cert_store = cert_store
end
proxy
end
# Return the local file name containing the data downloaded from the
# repository at +release+ (e.g. "myuser-mymodule").
def retrieve(release)
path = @host.chomp('/') + release
return cache.retrieve(path)
end
# Return the URI string for this repository.
def to_s
"#<#{self.class} #{@host}>"
end
# Return the cache key for this repository, this a hashed string based on
# the URI.
def cache_key
return @cache_key ||= [
@host.to_s.gsub(/[^[:alnum:]]+/, '_').sub(/_$/, ''),
Digest::SHA1.hexdigest(@host.to_s)
].join('-').freeze
end
private
def user_agent
@user_agent ||= [
@agent,
"Puppet/#{Puppet.version}",
"Ruby/#{RUBY_VERSION}-p#{RUBY_PATCHLEVEL} (#{RUBY_PLATFORM})",
].join(' ').freeze
end
end
end
diff --git a/lib/puppet/functions.rb b/lib/puppet/functions.rb
index a15e166f6..965d7b8f1 100644
--- a/lib/puppet/functions.rb
+++ b/lib/puppet/functions.rb
@@ -1,548 +1,555 @@
# @note WARNING: This new function API is still under development and may change at any time
#
# Functions in the puppet language can be written in Ruby and distributed in
# puppet modules. The function is written by creating a file in the module's
# `lib/puppet/functions/<modulename>` directory, where `<modulename>` is
# replaced with the module's name. The file should have the name of the function.
# For example, to create a function named `min` in a module named `math` create
# a file named `lib/puppet/functions/math/min.rb` in the module.
#
# A function is implemented by calling {Puppet::Functions.create_function}, and
# passing it a block that defines the implementation of the function.
#
# Functions are namespaced inside the module that contains them. The name of
# the function is prefixed with the name of the module. For example,
# `math::min`.
#
# @example A simple function
# Puppet::Functions.create_function('math::min') do
# def min(a, b)
# a <= b ? a : b
# end
# end
#
# Anatomy of a function
# ---
#
# Functions are composed of four parts: the name, the implementation methods,
# the signatures, and the dispatches.
#
# The name is the string given to the {Puppet::Functions.create_function}
# method. It specifies the name to use when calling the function in the puppet
# language, or from other functions.
#
# The implementation methods are ruby methods (there can be one or more) that
# provide that actual implementation of the function's behavior. In the
# simplest case the name of the function (excluding any namespace) and the name
# of the method are the same. When that is done no other parts (signatures and
# dispatches) need to be used.
#
# Signatures are a way of specifying the types of the function's parameters.
# The types of any arguments will be checked against the types declared in the
# signature and an error will be produced if they don't match. The types are
# defined by using the same syntax for types as in the puppet language.
#
# Dispatches are how signatures and implementation methods are tied together.
# When the function is called, puppet searches the signatures for one that
# matches the supplied arguments. Each signature is part of a dispatch, which
# specifies the method that should be called for that signature. When a
# matching signature is found, the corrosponding method is called.
#
# Documentation for the function should be placed as comments to the
# implementation method(s).
#
# @todo Documentation for individual instances of these new functions is not
# yet tied into the puppet doc system.
#
# @example Dispatching to different methods by type
# Puppet::Functions.create_function('math::min') do
# dispatch :numeric_min do
# param 'Numeric', 'a'
# param 'Numeric', 'b'
# end
#
# dispatch :string_min do
# param 'String', 'a'
# param 'String', 'b'
# end
#
# def numeric_min(a, b)
# a <= b ? a : b
# end
#
# def string_min(a, b)
# a.downcase <= b.downcase ? a : b
# end
# end
#
# Specifying Signatures
# ---
#
# If nothing is specified, the number of arguments given to the function must
# be the same as the number of parameters, and all of the parameters are of
-# type 'Object'.
+# type 'Any'.
#
# To express that the last parameter captures the rest, the method
# `last_captures_rest` can be called. This indicates that the last parameter is
# a varargs parameter and will be passed to the implementing method as an array
# of the given type.
#
# When defining a dispatch for a function, the resulting dispatch matches
# against the specified argument types and min/max occurrence of optional
# entries. When the dispatch makes the call to the implementation method the
# arguments are simply passed and it is the responsibility of the method's
# implementor to ensure it can handle those arguments (i.e. there is no check
# that what was declared as optional actually has a default value, and that
# a "captures rest" is declared using a `*`).
#
# @example Varargs
# Puppet::Functions.create_function('foo') do
# dispatch :foo do
# param 'Numeric', 'first'
# param 'Numeric', 'values'
# last_captures_rest
# end
#
# def foo(first, *values)
# # do something
# end
# end
#
# Access to Scope
# ---
# In general, functions should not need access to scope; they should be
# written to act on their given input only. If they absolutely must look up
# variable values, they should do so via the closure scope (the scope where
# they are defined) - this is done by calling `closure_scope()`.
#
# Calling other Functions
# ---
# Calling other functions by name is directly supported via
# {Puppet::Pops::Functions::Function#call_function}. This allows a function to
# call other functions visible from its loader.
#
# @api public
module Puppet::Functions
# @param func_name [String, Symbol] a simple or qualified function name
# @param block [Proc] the block that defines the methods and dispatch of the
# Function to create
# @return [Class<Function>] the newly created Function class
#
# @api public
def self.create_function(func_name, function_base = Function, &block)
if function_base.ancestors.none? { |s| s == Puppet::Pops::Functions::Function }
raise ArgumentError, "Functions must be based on Puppet::Pops::Functions::Function. Got #{function_base}"
end
func_name = func_name.to_s
# Creates an anonymous class to represent the function
# The idea being that it is garbage collected when there are no more
# references to it.
#
the_class = Class.new(function_base, &block)
# Make the anonymous class appear to have the class-name <func_name>
# Even if this class is not bound to such a symbol in a global ruby scope and
# must be resolved via the loader.
# This also overrides any attempt to define a name method in the given block
# (Since it redefines it)
#
# TODO, enforce name in lower case (to further make it stand out since Ruby
# class names are upper case)
#
the_class.instance_eval do
@func_name = func_name
def name
@func_name
end
end
# Automatically create an object dispatcher based on introspection if the
# loaded user code did not define any dispatchers. Fail if function name
# does not match a given method name in user code.
#
if the_class.dispatcher.empty?
simple_name = func_name.split(/::/)[-1]
type, names = default_dispatcher(the_class, simple_name)
last_captures_rest = (type.size_range[1] == Puppet::Pops::Types::INFINITY)
the_class.dispatcher.add_dispatch(type, simple_name, names, nil, nil, nil, last_captures_rest)
end
# The function class is returned as the result of the create function method
the_class
end
# Creates a default dispatcher configured from a method with the same name as the function
#
# @api private
def self.default_dispatcher(the_class, func_name)
unless the_class.method_defined?(func_name)
raise ArgumentError, "Function Creation Error, cannot create a default dispatcher for function '#{func_name}', no method with this name found"
end
- object_signature(*min_max_param(the_class.instance_method(func_name)))
+ any_signature(*min_max_param(the_class.instance_method(func_name)))
end
# @api private
def self.min_max_param(method)
# Ruby 1.8.7 does not have support for details about parameters
if method.respond_to?(:parameters)
result = {:req => 0, :opt => 0, :rest => 0 }
# TODO: Optimize into one map iteration that produces names map, and sets
# count as side effect
method.parameters.each { |p| result[p[0]] += 1 }
from = result[:req]
to = result[:rest] > 0 ? :default : from + result[:opt]
names = method.parameters.map {|p| p[1].to_s }
else
# Cannot correctly compute the signature in Ruby 1.8.7 because arity for
# optional values is screwed up (there is no way to get the upper limit),
# an optional looks the same as a varargs In this case - the failure will
# simply come later when the call fails
#
arity = method.arity
from = arity >= 0 ? arity : -arity -1
to = arity >= 0 ? arity : :default # i.e. infinite (which is wrong when there are optional - flaw in 1.8.7)
names = [] # no names available
end
[from, to, names]
end
# Construct a signature consisting of Object type, with min, and max, and given names.
- # (there is only one type entry). Note that this signature is Object, not Optional[Object].
+ # (there is only one type entry).
#
# @api private
- def self.object_signature(from, to, names)
+ def self.any_signature(from, to, names)
# Construct the type for the signature
# Tuple[Object, from, to]
factory = Puppet::Pops::Types::TypeFactory
- [factory.callable(factory.object, from, to), names]
+ [factory.callable(factory.any, from, to), names]
end
# Function
# ===
# This class is the base class for all Puppet 4x Function API functions. A
# specialized class is created for each puppet function.
#
# @api public
class Function < Puppet::Pops::Functions::Function
# @api private
def self.builder
@type_parser ||= Puppet::Pops::Types::TypeParser.new
@all_callables ||= Puppet::Pops::Types::TypeFactory.all_callables
DispatcherBuilder.new(dispatcher, @type_parser, @all_callables)
end
# Dispatch any calls that match the signature to the provided method name.
#
# @param meth_name [Symbol] The name of the implementation method to call
# when the signature defined in the block matches the arguments to a call
# to the function.
# @return [Void]
#
# @api public
def self.dispatch(meth_name, &block)
builder().instance_eval do
dispatch(meth_name, &block)
end
end
end
# Public api methods of the DispatcherBuilder are available within dispatch()
# blocks declared in a Puppet::Function.create_function() call.
#
# @api public
class DispatcherBuilder
# @api private
def initialize(dispatcher, type_parser, all_callables)
@type_parser = type_parser
@all_callables = all_callables
@dispatcher = dispatcher
end
# Defines a positional parameter with type and name
#
# @param type [String] The type specification for the parameter.
# @param name [String] The name of the parameter. This is primarily used
# for error message output and does not have to match the name of the
# parameter on the implementation method.
# @return [Void]
#
# @api public
def param(type, name)
if type.is_a?(String)
@types << type
@names << name
# mark what should be picked for this position when dispatching
@weaving << @names.size()-1
else
raise ArgumentError, "Type signature argument must be a String reference to a Puppet Data Type. Got #{type.class}"
end
end
# Defines one required block parameter that may appear last. If type and name is missing the
# default type is "Callable", and the name is "block". If only one
# parameter is given, then that is the name and the type is "Callable".
#
# @api public
def required_block_param(*type_and_name)
case type_and_name.size
when 0
- type = @all_callables
+ # the type must be an independent instance since it will be contained in another type
+ type = @all_callables.copy
name = 'block'
when 1
- type = @all_callables
+ # the type must be an independent instance since it will be contained in another type
+ type = @all_callables.copy
name = type_and_name[0]
when 2
type_string, name = type_and_name
type = @type_parser.parse(type_string)
else
raise ArgumentError, "block_param accepts max 2 arguments (type, name), got #{type_and_name.size}."
end
- unless type.is_a?(Puppet::Pops::Types::PCallableType)
- raise ArgumentError, "Expected PCallableType, got #{type.class}"
+ unless Puppet::Pops::Types::TypeCalculator.is_kind_of_callable?(type, false)
+ raise ArgumentError, "Expected PCallableType or PVariantType thereof, got #{type.class}"
end
- unless name.is_a?(String)
- raise ArgumentError, "Expected block_param name to be a String, got #{name.class}"
+ unless name.is_a?(String) || name.is_a?(Symbol)
+ raise ArgumentError, "Expected block_param name to be a String or Symbol, got #{name.class}"
end
if @block_type.nil?
@block_type = type
@block_name = name
else
raise ArgumentError, "Attempt to redefine block"
end
end
# Defines one optional block parameter that may appear last. If type or name is missing the
# defaults are "any callable", and the name is "block". The implementor of the dispatch target
# must use block = nil when it is optional (or an error is raised when the call is made).
#
# @api public
def optional_block_param(*type_and_name)
# same as required, only wrap the result in an optional type
required_block_param(*type_and_name)
@block_type = Puppet::Pops::Types::TypeFactory.optional(@block_type)
end
# Specifies the min and max occurance of arguments (of the specified types)
# if something other than the exact count from the number of specified
- # types). The max value may be specified as -1 if an infinite number of
+ # types). The max value may be specified as :default if an infinite number of
# arguments are supported. When max is > than the number of specified
# types, the last specified type repeats.
#
# @api public
def arg_count(min_occurs, max_occurs)
@min = min_occurs
@max = max_occurs
unless min_occurs.is_a?(Integer) && min_occurs >= 0
raise ArgumentError, "min arg_count of function parameter must be an Integer >=0, got #{min_occurs.class} '#{min_occurs}'"
end
unless max_occurs == :default || (max_occurs.is_a?(Integer) && max_occurs >= 0)
raise ArgumentError, "max arg_count of function parameter must be an Integer >= 0, or :default, got #{max_occurs.class} '#{max_occurs}'"
end
unless max_occurs == :default || (max_occurs.is_a?(Integer) && max_occurs >= min_occurs)
raise ArgumentError, "max arg_count must be :default (infinite) or >= min arg_count, got min: '#{min_occurs}, max: '#{max_occurs}'"
end
end
# Specifies that the last argument captures the rest.
#
# @api public
def last_captures_rest
@last_captures = true
end
private
# @api private
def dispatch(meth_name, &block)
# an array of either an index into names/types, or an array with
# injection information [type, name, injection_name] used when the call
# is being made to weave injections into the given arguments.
#
@types = []
@names = []
@weaving = []
@injections = []
@min = nil
@max = nil
@last_captures = false
@block_type = nil
@block_name = nil
self.instance_eval &block
callable_t = create_callable(@types, @block_type, @min, @max)
@dispatcher.add_dispatch(callable_t, meth_name, @names, @block_name, @injections, @weaving, @last_captures)
end
# Handles creation of a callable type from strings specifications of puppet
# types and allows the min/max occurs of the given types to be given as one
# or two integer values at the end. The given block_type should be
# Optional[Callable], Callable, or nil.
#
# @api private
def create_callable(types, block_type, from, to)
mapped_types = types.map do |t|
@type_parser.parse(t)
end
if !(from.nil? && to.nil?)
mapped_types << from
mapped_types << to
end
if block_type
mapped_types << block_type
end
Puppet::Pops::Types::TypeFactory.callable(*mapped_types)
end
end
private
# @note WARNING: This style of creating functions is not public. It is a system
# under development that will be used for creating "system" functions.
#
# This is a private, internal, system for creating functions. It supports
# everything that the public function definition system supports as well as a
# few extra features.
#
# Injection Support
# ===
# The Function API supports injection of data and services. It is possible to
# make injection that takes effect when the function is loaded (for services
# and runtime configuration that does not change depending on how/from where
# in what context the function is called. It is also possible to inject and
# weave argument values into a call.
#
# Injection of attributes
# ---
# Injection of attributes is performed by one of the methods `attr_injected`,
# and `attr_injected_producer`. The injected attributes are available via
# accessor method calls.
#
# @example using injected attributes
# Puppet::Functions.create_function('test') do
# attr_injected String, :larger, 'message_larger'
# attr_injected String, :smaller, 'message_smaller'
# def test(a, b)
# a > b ? larger() : smaller()
# end
# end
#
# @api private
class InternalFunction < Function
# @api private
def self.builder
@type_parser ||= Puppet::Pops::Types::TypeParser.new
@all_callables ||= Puppet::Pops::Types::TypeFactory.all_callables
InternalDispatchBuilder.new(dispatcher, @type_parser, @all_callables)
end
# Defines class level injected attribute with reader method
#
# @api private
def self.attr_injected(type, attribute_name, injection_name = nil)
define_method(attribute_name) do
ivar = :"@#{attribute_name.to_s}"
unless instance_variable_defined?(ivar)
injector = Puppet.lookup(:injector)
instance_variable_set(ivar, injector.lookup(closure_scope, type, injection_name))
end
instance_variable_get(ivar)
end
end
# Defines class level injected producer attribute with reader method
#
# @api private
def self.attr_injected_producer(type, attribute_name, injection_name = nil)
define_method(attribute_name) do
ivar = :"@#{attribute_name.to_s}"
unless instance_variable_defined?(ivar)
injector = Puppet.lookup(:injector)
instance_variable_set(ivar, injector.lookup_producer(closure_scope, type, injection_name))
end
instance_variable_get(ivar)
end
end
end
# @note WARNING: This style of creating functions is not public. It is a system
# under development that will be used for creating "system" functions.
#
# Injection and Weaving of parameters
# ---
# It is possible to inject and weave parameters into a call. These extra
# parameters are not part of the parameters passed from the Puppet logic, and
# they can not be overridden by parameters given as arguments in the call.
# They are invisible to the Puppet Language.
#
# @example using injected parameters
# Puppet::Functions.create_function('test') do
# dispatch :test do
# param 'Scalar', 'a'
# param 'Scalar', 'b'
# injected_param 'String', 'larger', 'message_larger'
# injected_param 'String', 'smaller', 'message_smaller'
# end
# def test(a, b, larger, smaller)
# a > b ? larger : smaller
# end
# end
#
# The function in the example above is called like this:
#
# test(10, 20)
#
# Using injected value as default
# ---
# Default value assignment is handled by using the regular Ruby mechanism (a
# value is assigned to the variable). The dispatch simply indicates that the
# value is optional. If the default value should be injected, it can be
# handled different ways depending on what is desired:
#
# * by calling the accessor method for an injected Function class attribute.
# This is suitable if the value is constant across all instantiations of the
# function, and across all calls.
# * by injecting a parameter into the call
# to the left of the parameter, and then assigning that as the default value.
# * One of the above forms, but using an injected producer instead of a
# directly injected value.
#
# @example method with injected default values
# Puppet::Functions.create_function('test') do
# dispatch :test do
# injected_param String, 'b_default', 'b_default_value_key'
# param 'Scalar', 'a'
# param 'Scalar', 'b'
# end
# def test(b_default, a, b = b_default)
# # ...
# end
# end
#
# @api private
class InternalDispatchBuilder < DispatcherBuilder
+ def scope_param()
+ @injections << [:scope, 'scope', '', :dispatcher_internal]
+ # mark what should be picked for this position when dispatching
+ @weaving << [@injections.size()-1]
+ end
# TODO: is param name really needed? Perhaps for error messages? (it is unused now)
#
# @api private
def injected_param(type, name, injection_name = '')
@injections << [type, name, injection_name]
# mark what should be picked for this position when dispatching
@weaving << [@injections.size() -1]
end
# TODO: is param name really needed? Perhaps for error messages? (it is unused now)
#
# @api private
def injected_producer_param(type, name, injection_name = '')
@injections << [type, name, injection_name, :producer]
# mark what should be picked for this position when dispatching
@weaving << [@injections.size()-1]
end
end
end
diff --git a/lib/puppet/functions/assert_type.rb b/lib/puppet/functions/assert_type.rb
index 4e9f33fb1..7165dec70 100644
--- a/lib/puppet/functions/assert_type.rb
+++ b/lib/puppet/functions/assert_type.rb
@@ -1,42 +1,59 @@
# Returns the given value if it is an instance of the given type, and raises an error otherwise.
+# Optionally, if a block is given (accepting two parameters), it will be called instead of raising
+# an error. This to enable giving the user richer feedback, or to supply a default value.
#
# @example how to assert type
# # assert that `$b` is a non empty `String` and assign to `$a`
# $a = assert_type(String[1], $b)
#
+# @example using custom error message
+# $a = assert_type(String[1], $b) |$expected, $actual| { fail("The name cannot be empty") }
+#
+# @example using a warning and a default
+# $a = assert_type(String[1], $b) |$expected, $actual| { warning("Name is empty, using default") 'anonymous' }
+#
# See the documentation for "The Puppet Type System" for more information about types.
#
Puppet::Functions.create_function(:assert_type) do
dispatch :assert_type do
param 'Type', 'type'
- param 'Optional[Object]', 'value'
+ param 'Any', 'value'
+ optional_block_param 'Callable[Type, Type]', 'block'
end
dispatch :assert_type_s do
param 'String', 'type_string'
- param 'Optional[Object]', 'value'
+ param 'Any', 'value'
+ optional_block_param 'Callable[Type, Type]', 'block'
end
# @param type [Type] the type the value must be an instance of
- # @param value [Optional[Object]] the value to assert
+ # @param value [Object] the value to assert
#
- def assert_type(type, value)
+ def assert_type(type, value, block=nil)
unless Puppet::Pops::Types::TypeCalculator.instance?(type,value)
inferred_type = Puppet::Pops::Types::TypeCalculator.infer(value)
- # Do not give all the details - i.e. format as Integer, instead of Integer[n, n] for exact value, which
- # is just confusing. (OTOH: may need to revisit, or provide a better "type diff" output.
- #
- actual = Puppet::Pops::Types::TypeCalculator.generalize!(inferred_type)
- raise Puppet::ParseError, "assert_type(): Expected type #{type} does not match actual: #{actual}"
+ if block
+ # Give the inferred type to allow richer comparisson in the given block (if generalized
+ # information is lost).
+ #
+ value = block.call(nil, type, inferred_type)
+ else
+ # Do not give all the details - i.e. format as Integer, instead of Integer[n, n] for exact value, which
+ # is just confusing. (OTOH: may need to revisit, or provide a better "type diff" output.
+ #
+ actual = Puppet::Pops::Types::TypeCalculator.generalize!(inferred_type)
+ raise Puppet::ParseError, "assert_type(): Expected type #{type} does not match actual: #{actual}"
+ end
end
value
end
# @param type_string [String] the type the value must be an instance of given in String form
- # @param value [Optional[Object]] the value to assert
+ # @param value [Object] the value to assert
#
def assert_type_s(type_string, value)
t = Puppet::Pops::Types::TypeParser.new.parse(type_string)
assert_type(t, value)
end
end
diff --git a/lib/puppet/functions/each.rb b/lib/puppet/functions/each.rb
new file mode 100644
index 000000000..1b9ac937c
--- /dev/null
+++ b/lib/puppet/functions/each.rb
@@ -0,0 +1,111 @@
+# Applies a parameterized block to each element in a sequence of selected entries from the first
+# argument and returns the first argument.
+#
+# This function takes two mandatory arguments: the first should be an Array or a Hash or something that is
+# of enumerable type (integer, Integer range, or String), and the second
+# a parameterized block as produced by the puppet syntax:
+#
+# $a.each |$x| { ... }
+# each($a) |$x| { ... }
+#
+# When the first argument is an Array (or of enumerable type other than Hash), the parameterized block
+# should define one or two block parameters.
+# For each application of the block, the next element from the array is selected, and it is passed to
+# the block if the block has one parameter. If the block has two parameters, the first is the elements
+# index, and the second the value. The index starts from 0.
+#
+# $a.each |$index, $value| { ... }
+# each($a) |$index, $value| { ... }
+#
+# When the first argument is a Hash, the parameterized block should define one or two parameters.
+# When one parameter is defined, the iteration is performed with each entry as an array of `[key, value]`,
+# and when two parameters are defined the iteration is performed with key and value.
+#
+# $a.each |$entry| { ..."key ${$entry[0]}, value ${$entry[1]}" }
+# $a.each |$key, $value| { ..."key ${key}, value ${value}" }
+#
+# @example using each
+#
+# [1,2,3].each |$val| { ... } # 1, 2, 3
+# [5,6,7].each |$index, $val| { ... } # (0, 5), (1, 6), (2, 7)
+# {a=>1, b=>2, c=>3}].each |$val| { ... } # ['a', 1], ['b', 2], ['c', 3]
+# {a=>1, b=>2, c=>3}.each |$key, $val| { ... } # ('a', 1), ('b', 2), ('c', 3)
+# Integer[ 10, 20 ].each |$index, $value| { ... } # (0, 10), (1, 11) ...
+# "hello".each |$char| { ... } # 'h', 'e', 'l', 'l', 'o'
+# 3.each |$number| { ... } # 0, 1, 2
+#
+# @since 3.2 for Array and Hash
+# @since 3.5 for other enumerables
+# @note requires `parser = future`
+#
+Puppet::Functions.create_function(:each) do
+ dispatch :foreach_Hash_2 do
+ param 'Hash[Any, Any]', :hash
+ required_block_param 'Callable[2,2]', :block
+ end
+
+ dispatch :foreach_Hash_1 do
+ param 'Hash[Any, Any]', :hash
+ required_block_param 'Callable[1,1]', :block
+ end
+
+ dispatch :foreach_Enumerable_2 do
+ param 'Any', :enumerable
+ required_block_param 'Callable[2,2]', :block
+ end
+
+ dispatch :foreach_Enumerable_1 do
+ param 'Any', :enumerable
+ required_block_param 'Callable[1,1]', :block
+ end
+
+ def foreach_Hash_1(hash, pblock)
+ enumerator = hash.each_pair
+ hash.size.times do
+ pblock.call(nil, enumerator.next)
+ end
+ # produces the receiver
+ hash
+ end
+
+ def foreach_Hash_2(hash, pblock)
+ enumerator = hash.each_pair
+ hash.size.times do
+ pblock.call(nil, *enumerator.next)
+ end
+ # produces the receiver
+ hash
+ end
+
+ def foreach_Enumerable_1(enumerable, pblock)
+ enum = asserted_enumerable(enumerable)
+ begin
+ loop { pblock.call(nil, enum.next) }
+ rescue StopIteration
+ end
+ # produces the receiver
+ enumerable
+ end
+
+ def foreach_Enumerable_2(enumerable, pblock)
+ enum = asserted_enumerable(enumerable)
+ index = 0
+ begin
+ loop do
+ pblock.call(nil, index, enum.next)
+ index += 1
+ end
+ rescue StopIteration
+ end
+ # produces the receiver
+ enumerable
+ end
+
+ def asserted_enumerable(obj)
+ unless enum = Puppet::Pops::Types::Enumeration.enumerator(obj)
+ raise ArgumentError, ("#{self.class.name}(): wrong argument type (#{obj.class}; must be something enumerable.")
+ end
+ enum
+ end
+
+end
diff --git a/lib/puppet/functions/epp.rb b/lib/puppet/functions/epp.rb
new file mode 100644
index 000000000..baa0c1bed
--- /dev/null
+++ b/lib/puppet/functions/epp.rb
@@ -0,0 +1,54 @@
+# Evaluates an Embedded Puppet Template (EPP) file and returns the rendered text result as a String.
+#
+# The first argument to this function should be a `<MODULE NAME>/<TEMPLATE FILE>`
+# reference, which will load `<TEMPLATE FILE>` from a module's `templates`
+# directory. (For example, the reference `apache/vhost.conf.epp` will load the
+# file `<MODULES DIRECTORY>/apache/templates/vhost.conf.epp`.)
+#
+# The second argument is optional; if present, it should be a hash containing parameters for the
+# template. (See below.)
+#
+# EPP supports the following tags:
+#
+# * `<%= puppet expression %>` - This tag renders the value of the expression it contains.
+# * `<% puppet expression(s) %>` - This tag will execute the expression(s) it contains, but renders nothing.
+# * `<%# comment %>` - The tag and its content renders nothing.
+# * `<%%` or `%%>` - Renders a literal `<%` or `%>` respectively.
+# * `<%-` - Same as `<%` but suppresses any leading whitespace.
+# * `-%>` - Same as `%>` but suppresses any trailing whitespace on the same line (including line break).
+# * `<%- |parameters| -%>` - When placed as the first tag declares the template's parameters.
+#
+# File based EPP supports the following visibilities of variables in scope:
+#
+# * Global scope (i.e. top + node scopes) - global scope is always visible
+# * Global + all given arguments - if the EPP template does not declare parameters, and arguments are given
+# * Global + declared parameters - if the EPP declares parameters, given argument names must match
+#
+# EPP supports parameters by placing an optional parameter list as the very first element in the EPP. As an example,
+# `<%- |$x, $y, $z = 'unicorn'| -%>` when placed first in the EPP text declares that the parameters `x` and `y` must be
+# given as template arguments when calling `inline_epp`, and that `z` if not given as a template argument
+# defaults to `'unicorn'`. Template parameters are available as variables, e.g.arguments `$x`, `$y` and `$z` in the example.
+# Note that `<%-` must be used or any leading whitespace will be interpreted as text
+#
+# Arguments are passed to the template by calling `epp` with a Hash as the last argument, where parameters
+# are bound to values, e.g. `epp('...', {'x'=>10, 'y'=>20})`. Excess arguments may be given
+# (i.e. undeclared parameters) only if the EPP templates does not declare any parameters at all.
+# Template parameters shadow variables in outer scopes. File based epp does never have access to variables in the
+# scope where the `epp` function is called from.
+#
+# @see function inline_epp for examples of EPP
+# @since 3.5
+# @note Requires Future Parser
+Puppet::Functions.create_function(:epp, Puppet::Functions::InternalFunction) do
+
+ dispatch :epp do
+ scope_param()
+ param 'String', 'path'
+ param 'Hash[Pattern[/^\w+$/], Any]', 'parameters'
+ arg_count(1, 2)
+ end
+
+ def epp(scope, path, parameters = nil)
+ Puppet::Pops::Evaluator::EppEvaluator.epp(scope, path, scope.compiler.environment, parameters)
+ end
+end
\ No newline at end of file
diff --git a/lib/puppet/functions/filter.rb b/lib/puppet/functions/filter.rb
new file mode 100644
index 000000000..0654d9c9c
--- /dev/null
+++ b/lib/puppet/functions/filter.rb
@@ -0,0 +1,113 @@
+# Applies a parameterized block to each element in a sequence of entries from the first
+# argument and returns an array or hash (same type as left operand for array/hash, and array for
+# other enumerable types) with the entries for which the block evaluates to `true`.
+#
+# This function takes two mandatory arguments: the first should be an Array, a Hash, or an
+# Enumerable object (integer, Integer range, or String),
+# and the second a parameterized block as produced by the puppet syntax:
+#
+# $a.filter |$x| { ... }
+# filter($a) |$x| { ... }
+#
+# When the first argument is something other than a Hash, the block is called with each entry in turn.
+# When the first argument is a Hash the entry is an array with `[key, value]`.
+#
+# @example Using filter with one parameter
+#
+# # selects all that end with berry
+# $a = ["raspberry", "blueberry", "orange"]
+# $a.filter |$x| { $x =~ /berry$/ } # rasberry, blueberry
+#
+# If the block defines two parameters, they will be set to `index, value` (with index starting at 0) for all
+# enumerables except Hash, and to `key, value` for a Hash.
+#
+# @example Using filter with two parameters
+#
+# # selects all that end with 'berry' at an even numbered index
+# $a = ["raspberry", "blueberry", "orange"]
+# $a.filter |$index, $x| { $index % 2 == 0 and $x =~ /berry$/ } # raspberry
+#
+# # selects all that end with 'berry' and value >= 1
+# $a = {"raspberry"=>0, "blueberry"=>1, "orange"=>1}
+# $a.filter |$key, $x| { $x =~ /berry$/ and $x >= 1 } # blueberry
+#
+# @since 3.4 for Array and Hash
+# @since 3.5 for other enumerables
+# @note requires `parser = future`
+#
+Puppet::Functions.create_function(:filter) do
+ dispatch :filter_Hash_2 do
+ param 'Hash[Any, Any]', :hash
+ required_block_param 'Callable[2,2]', :block
+ end
+
+ dispatch :filter_Hash_1 do
+ param 'Hash[Any, Any]', :hash
+ required_block_param 'Callable[1,1]', :block
+ end
+
+ dispatch :filter_Enumerable_2 do
+ param 'Any', :enumerable
+ required_block_param 'Callable[2,2]', :block
+ end
+
+ dispatch :filter_Enumerable_1 do
+ param 'Any', :enumerable
+ required_block_param 'Callable[1,1]', :block
+ end
+
+ def filter_Hash_1(hash, pblock)
+ result = hash.select {|x, y| pblock.call(self, [x, y]) }
+ # Ruby 1.8.7 returns Array
+ result = Hash[result] unless result.is_a? Hash
+ result
+ end
+
+ def filter_Hash_2(hash, pblock)
+ result = hash.select {|x, y| pblock.call(self, x, y) }
+ # Ruby 1.8.7 returns Array
+ result = Hash[result] unless result.is_a? Hash
+ result
+ end
+
+ def filter_Enumerable_1(enumerable, pblock)
+ result = []
+ index = 0
+ enum = asserted_enumerable(enumerable)
+ begin
+ loop do
+ it = enum.next
+ if pblock.call(nil, it) == true
+ result << it
+ end
+ end
+ rescue StopIteration
+ end
+ result
+ end
+
+ def filter_Enumerable_2(enumerable, pblock)
+ result = []
+ index = 0
+ enum = asserted_enumerable(enumerable)
+ begin
+ loop do
+ it = enum.next
+ if pblock.call(nil, index, it) == true
+ result << it
+ end
+ index += 1
+ end
+ rescue StopIteration
+ end
+ result
+ end
+
+ def asserted_enumerable(obj)
+ unless enum = Puppet::Pops::Types::Enumeration.enumerator(obj)
+ raise ArgumentError, ("#{self.class.name}(): wrong argument type (#{obj.class}; must be something enumerable.")
+ end
+ enum
+ end
+
+end
diff --git a/lib/puppet/functions/inline_epp.rb b/lib/puppet/functions/inline_epp.rb
new file mode 100644
index 000000000..31a966334
--- /dev/null
+++ b/lib/puppet/functions/inline_epp.rb
@@ -0,0 +1,88 @@
+# Evaluates an Embedded Puppet Template (EPP) string and returns the rendered text result as a String.
+#
+# EPP support the following tags:
+#
+# * `<%= puppet expression %>` - This tag renders the value of the expression it contains.
+# * `<% puppet expression(s) %>` - This tag will execute the expression(s) it contains, but renders nothing.
+# * `<%# comment %>` - The tag and its content renders nothing.
+# * `<%%` or `%%>` - Renders a literal `<%` or `%>` respectively.
+# * `<%-` - Same as `<%` but suppresses any leading whitespace.
+# * `-%>` - Same as `%>` but suppresses any trailing whitespace on the same line (including line break).
+# * `<%- |parameters| -%>` - When placed as the first tag declares the template's parameters.
+#
+# Inline EPP supports the following visibilities of variables in scope which depends on how EPP parameters
+# are used - see further below:
+#
+# * Global scope (i.e. top + node scopes) - global scope is always visible
+# * Global + Enclosing scope - if the EPP template does not declare parameters, and no arguments are given
+# * Global + all given arguments - if the EPP template does not declare parameters, and arguments are given
+# * Global + declared parameters - if the EPP declares parameters, given argument names must match
+#
+# EPP supports parameters by placing an optional parameter list as the very first element in the EPP. As an example,
+# `<%- |$x, $y, $z='unicorn'| -%>` when placed first in the EPP text declares that the parameters `x` and `y` must be
+# given as template arguments when calling `inline_epp`, and that `z` if not given as a template argument
+# defaults to `'unicorn'`. Template parameters are available as variables, e.g.arguments `$x`, `$y` and `$z` in the example.
+# Note that `<%-` must be used or any leading whitespace will be interpreted as text
+#
+# Arguments are passed to the template by calling `inline_epp` with a Hash as the last argument, where parameters
+# are bound to values, e.g. `inline_epp('...', {'x'=>10, 'y'=>20})`. Excess arguments may be given
+# (i.e. undeclared parameters) only if the EPP templates does not declare any parameters at all.
+# Template parameters shadow variables in outer scopes.
+#
+# Note: An inline template is best stated using a single-quoted string, or a heredoc since a double-quoted string
+# is subject to expression interpolation before the string is parsed as an EPP template. Here are examples
+# (using heredoc to define the EPP text):
+#
+# @example Various Examples using `inline_epp`
+#
+# # produces 'Hello local variable world!'
+# $x ='local variable'
+# inline_epptemplate(@(END:epp))
+# <%- |$x| -%>
+# Hello <%= $x %> world!
+# END
+#
+# # produces 'Hello given argument world!'
+# $x ='local variable world'
+# inline_epptemplate(@(END:epp), { x =>'given argument'})
+# <%- |$x| -%>
+# Hello <%= $x %> world!
+# END
+#
+# # produces 'Hello given argument world!'
+# $x ='local variable world'
+# inline_epptemplate(@(END:epp), { x =>'given argument'})
+# <%- |$x| -%>
+# Hello <%= $x %>!
+# END
+#
+# # results in error, missing value for y
+# $x ='local variable world'
+# inline_epptemplate(@(END:epp), { x =>'given argument'})
+# <%- |$x, $y| -%>
+# Hello <%= $x %>!
+# END
+#
+# # Produces 'Hello given argument planet'
+# $x ='local variable world'
+# inline_epptemplate(@(END:epp), { x =>'given argument'})
+# <%- |$x, $y=planet| -%>
+# Hello <%= $x %> <%= $y %>!
+# END
+#
+# @since 3.5
+# @note Requires Future Parser
+#
+Puppet::Functions.create_function(:inline_epp, Puppet::Functions::InternalFunction) do
+
+ dispatch :inline_epp do
+ scope_param()
+ param 'String', 'template'
+ param 'Hash[Pattern[/^\w+$/], Any]', 'parameters'
+ arg_count(1, 2)
+ end
+
+ def inline_epp(scope, template, parameters = nil)
+ Puppet::Pops::Evaluator::EppEvaluator.inline_epp(scope, template, parameters)
+ end
+end
diff --git a/lib/puppet/functions/map.rb b/lib/puppet/functions/map.rb
new file mode 100644
index 000000000..2141d1e81
--- /dev/null
+++ b/lib/puppet/functions/map.rb
@@ -0,0 +1,97 @@
+# Applies a parameterized block to each element in a sequence of entries from the first
+# argument and returns an array with the result of each invocation of the parameterized block.
+#
+# This function takes two mandatory arguments: the first should be an Array, Hash, or of Enumerable type
+# (integer, Integer range, or String), and the second a parameterized block as produced by the puppet syntax:
+#
+# $a.map |$x| { ... }
+# map($a) |$x| { ... }
+#
+# When the first argument `$a` is an Array or of enumerable type, the block is called with each entry in turn.
+# When the first argument is a hash, each entry is given to the block as an array with `[key, value]`.
+#
+# @example Using map with a one-parameter block
+#
+#   # Turns hash into array of values
+#   $a.map |$x| { $x[1] }
+#
+#   # Turns hash into array of keys
+#   $a.map |$x| { $x[0] }
+#
+# When using a block with two parameters, the element's index (starting from 0) for an array, or the key for a hash,
+# is given to the block's first parameter, and the value is given to the block's second parameter.
+#
+# @example Using map with a two-parameter block
+#
+#   # Turns hash into array of values
+#   $a.map |$key, $val| { $val }
+#
+#   # Turns hash into array of keys
+#   $a.map |$key, $val| { $key }
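+#
+# Since other enumerable types are also supported (see the @since note below), a range can
+# be mapped as well; a minimal sketch:
+#
+# @example Using map with an Integer range
+#
+#   Integer[1,3].map |$x| { $x * 10 } # => [10, 20, 30]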
+#
+# @since 3.4 for Array and Hash
+# @since 3.5 for other enumerables, and support for blocks with 2 parameters
+# @note requires `parser = future`
+#
+Puppet::Functions.create_function(:map) do
+ dispatch :map_Hash_2 do
+ param 'Hash[Any, Any]', :hash
+ required_block_param 'Callable[2,2]', :block
+ end
+
+ dispatch :map_Hash_1 do
+ param 'Hash[Any, Any]', :hash
+ required_block_param 'Callable[1,1]', :block
+ end
+
+ dispatch :map_Enumerable_2 do
+ param 'Any', :enumerable
+ required_block_param 'Callable[2,2]', :block
+ end
+
+ dispatch :map_Enumerable_1 do
+ param 'Any', :enumerable
+ required_block_param 'Callable[1,1]', :block
+ end
+
+ def map_Hash_1(hash, pblock)
+ hash.map {|x, y| pblock.call(nil, [x, y]) }
+ end
+
+ def map_Hash_2(hash, pblock)
+ hash.map {|x, y| pblock.call(nil, x, y) }
+ end
+
+ def map_Enumerable_1(enumerable, pblock)
+ result = []
+ index = 0
+ enum = asserted_enumerable(enumerable)
+ begin
+ loop { result << pblock.call(nil, enum.next) }
+ rescue StopIteration
+ end
+ result
+ end
+
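+  # Two-parameter block over an enumerable: the 0-based element index is passed as the
+  # block's first argument and the element itself as the second.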
+ def map_Enumerable_2(enumerable, pblock)
+ result = []
+ index = 0
+ enum = asserted_enumerable(enumerable)
+ begin
+ loop do
+ result << pblock.call(nil, index, enum.next)
+        index = index + 1
+ end
+ rescue StopIteration
+ end
+ result
+ end
+
+ def asserted_enumerable(obj)
+ unless enum = Puppet::Pops::Types::Enumeration.enumerator(obj)
+      raise ArgumentError, "#{self.class.name}(): wrong argument type (#{obj.class}); must be something enumerable."
+ end
+ enum
+ end
+
+end
diff --git a/lib/puppet/functions/match.rb b/lib/puppet/functions/match.rb
new file mode 100644
index 000000000..8808a29b6
--- /dev/null
+++ b/lib/puppet/functions/match.rb
@@ -0,0 +1,102 @@
+# Returns the match result of matching a String or Array[String] with one of:
+#
+# * Regexp
+# * String - transformed to a Regexp
+# * Pattern type
+# * Regexp type
+#
+# Returns an Array with the entire match at index 0, and each subsequent submatch at index 1-n.
+# If there was no match, nil (i.e. undef) is returned. If the value to match is an Array, an array
+# with mapped match results is returned.
+#
+# @example matching
+#   "abc123".match(/([a-z]+)[1-9]+/) # => ["abc123", "abc"]
+#   "abc123".match(/([a-z]+)([1-9]+)/) # => ["abc123", "abc", "123"]
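+#
+# A String pattern is compiled to a Regexp, and matching an Array maps the result per
+# entry (a sketch derived from the behavior described above):
+#
+#   "abc123".match('[a-z]+') # => ["abc"]
+#   ["abc", "123"].match(/[a-z]+/) # => [["abc"], nil]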
+#
+# See the documentation for "The Puppet Type System" for more information about types.
+# @since 3.7.0
+#
+Puppet::Functions.create_function(:match) do
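+  # The pattern parameter accepts a String, a Regexp, a Regexp or Pattern type, or an Array
+  # of these; anything else raises an ArgumentError (see match_Object below).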
+ dispatch :match do
+ param 'String', 'string'
+ param 'Variant[Any, Type]', 'pattern'
+ end
+
+ dispatch :enumerable_match do
+ param 'Array[String]', 'string'
+ param 'Variant[Any, Type]', 'pattern'
+ end
+
+ def initialize(closure_scope, loader)
+ super
+
+    # Make this visitor shared among all instantiations of this function since that is faster.
+    # This is safe because it is not possible to replace the puppet runtime (where this
+    # function lives) without a restart. If you model a function in a module on this class,
+    # use a regular instance variable instead, so the module can be reloaded without a restart.
+ #
+ @@match_visitor ||= Puppet::Pops::Visitor.new(self, "match", 1, 1)
+ end
+
+ # Matches given string against given pattern and returns an Array with matches.
+ # @param string [String] the string to match
+ # @param pattern [String, Regexp, Puppet::Pops::Types::PPatternType, Puppet::Pops::PRegexpType, Array] the pattern
+ # @return [Array<String>] matches where first match is the entire match, and index 1-n are captures from left to right
+ #
+ def match(string, pattern)
+ @@match_visitor.visit_this_1(self, pattern, string)
+ end
+
+ # Matches given Array[String] against given pattern and returns an Array with mapped match results.
+ #
+ # @param array [Array<String>] the array of strings to match
+ # @param pattern [String, Regexp, Puppet::Pops::Types::PPatternType, Puppet::Pops::PRegexpType, Array] the pattern
+ # @return [Array<Array<String, nil>>] Array with matches (see {#match}), non matching entries produce a nil entry
+ #
+ def enumerable_match(array, pattern)
+ array.map {|s| match(s, pattern) }
+ end
+
+ protected
+
+ def match_Object(obj, s)
+ msg = "match() expects pattern of T, where T is String, Regexp, Regexp[r], Pattern[p], or Array[T]. Got #{obj.class}"
+ raise ArgumentError, msg
+ end
+
+ def match_String(pattern_string, s)
+ do_match(s, Regexp.new(pattern_string))
+ end
+
+ def match_Regexp(regexp, s)
+ do_match(s, regexp)
+ end
+
+ def match_PRegexpType(regexp_t, s)
+ raise ArgumentError, "Given Regexp Type has no regular expression" unless regexp_t.pattern
+ do_match(s, regexp_t.regexp)
+ end
+
+ def match_PPatternType(pattern_t, s)
+ # Since we want the actual match result (not just a boolean), an iteration over
+ # Pattern's regular expressions is needed. (They are of PRegexpType)
+ result = nil
+ pattern_t.patterns.find {|pattern| result = match(s, pattern) }
+ result
+ end
+
+ # Returns the first matching entry
+ def match_Array(array, s)
+ result = nil
+ array.flatten.find {|entry| result = match(s, entry) }
+ result
+ end
+
+ private
+
+ def do_match(s, regexp)
+ if result = regexp.match(s)
+ result.to_a
+ end
+ end
+end
diff --git a/lib/puppet/functions/reduce.rb b/lib/puppet/functions/reduce.rb
new file mode 100644
index 000000000..5b54e41c5
--- /dev/null
+++ b/lib/puppet/functions/reduce.rb
@@ -0,0 +1,94 @@
+# Applies a parameterized block to each element in a sequence of entries from the first
+# argument (_the enumerable_) and returns the result of the last invocation of the parameterized block.
+#
+# This function takes two mandatory arguments: the first should be an Array, Hash, or something of
+# enumerable type, and the last a parameterized block as produced by the puppet syntax:
+#
+# $a.reduce |$memo, $x| { ... }
+# reduce($a) |$memo, $x| { ... }
+#
+# When the first argument is an Array or something of an enumerable type, the block is called with each entry in turn.
+# When the first argument is a hash each entry is converted to an array with `[key, value]` before being
+# fed to the block. An optional 'start memo' value may be supplied as an argument between the array/hash
+# and mandatory block.
+#
+# $a.reduce(start) |$memo, $x| { ... }
+# reduce($a, start) |$memo, $x| { ... }
+#
+# If no 'start memo' is given, the first invocation of the parameterized block will be given the first and second
+# elements of the enumeration, and if the enumerable has fewer than 2 elements, the first
+# element is produced as the result of the reduction without invocation of the block.
+#
+# On each subsequent invocation, the produced value of the invoked parameterized block is given as the memo in the
+# next invocation.
+#
+# @example Using reduce
+#
+# # Reduce an array
+# $a = [1,2,3]
+# $a.reduce |$memo, $entry| { $memo + $entry }
+# #=> 6
+#
+# # Reduce hash values
+# $a = {a => 1, b => 2, c => 3}
+# $a.reduce |$memo, $entry| { [sum, $memo[1]+$entry[1]] }
+# #=> [sum, 6]
+#
+# # reverse a string
+# "abc".reduce |$memo, $char| { "$char$memo" }
+#   #=> "cba"
+#
+# It is possible to provide a starting 'memo' as an argument.
+#
+# @example Using reduce with given start 'memo'
+#
+# # Reduce an array
+# $a = [1,2,3]
+# $a.reduce(4) |$memo, $entry| { $memo + $entry }
+# #=> 10
+#
+# # Reduce hash values
+# $a = {a => 1, b => 2, c => 3}
+# $a.reduce([na, 4]) |$memo, $entry| { [sum, $memo[1]+$entry[1]] }
+# #=> [sum, 10]
+#
+# @example Using reduce with an Integer range
+#
+# Integer[1,4].reduce |$memo, $x| { $memo + $x }
+# #=> 10
+#
+# @since 3.2 for Array and Hash
+# @since 3.5 for additional enumerable types
+# @note requires `parser = future`.
+#
+Puppet::Functions.create_function(:reduce) do
+
+ dispatch :reduce_without_memo do
+ param 'Any', :enumerable
+ required_block_param 'Callable[2,2]', :block
+ end
+
+ dispatch :reduce_with_memo do
+ param 'Any', :enumerable
+ param 'Any', :memo
+ required_block_param 'Callable[2,2]', :block
+ end
+
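+  # Ruby's Enumerable#reduce already provides the "no start memo" semantics described above:
+  # with fewer than two elements the single element (or nil) is returned without calling the block.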
+ def reduce_without_memo(enumerable, pblock)
+ enum = asserted_enumerable(enumerable)
+ enum.reduce {|memo, x| pblock.call(nil, memo, x) }
+ end
+
+ def reduce_with_memo(enumerable, given_memo, pblock)
+ enum = asserted_enumerable(enumerable)
+ enum.reduce(given_memo) {|memo, x| pblock.call(nil, memo, x) }
+ end
+
+ def asserted_enumerable(obj)
+ unless enum = Puppet::Pops::Types::Enumeration.enumerator(obj)
+      raise ArgumentError, "#{self.class.name}(): wrong argument type (#{obj.class}); must be something enumerable."
+ end
+ enum
+ end
+
+end
diff --git a/lib/puppet/functions/slice.rb b/lib/puppet/functions/slice.rb
new file mode 100644
index 000000000..ef3a2932a
--- /dev/null
+++ b/lib/puppet/functions/slice.rb
@@ -0,0 +1,126 @@
+# Applies a parameterized block to each _slice_ of elements in a sequence of selected entries from the first
+# argument and returns the first argument, or if no block is given returns a new array with a concatenation of
+# the slices.
+#
+# This function takes two mandatory arguments: the first, `$a`, should be an Array, Hash, or something of
+# enumerable type (Integer, Integer range, or String), and the second, `$n`, the number of elements to include
+# in each slice. The optional third argument should be a parameterized block as produced by the puppet syntax:
+#
+# $a.slice($n) |$x| { ... }
+#   slice($a, $n) |$x| { ... }
+#
+# The parameterized block should have either one parameter (receiving an array with the slice), or the same number
+# of parameters as specified by the slice size (each parameter receiving its part of the slice).
+# If the last slice has fewer elements than the slice size, it will contain only the remaining
+# elements. When the block has multiple parameters, excess parameters are set to undef for an array or
+# enumerable type, and to empty arrays for a Hash.
+#
+# $a.slice(2) |$first, $second| { ... }
+#
+# When the first argument is a Hash, each `key,value` entry is counted as one, e.g., a slice size of 2 will produce
+# an array of two arrays, each holding a key and its value.
+#
+# @example Using slice with Hash
+#
+# $a.slice(2) |$entry| { notice "first ${$entry[0]}, second ${$entry[1]}" }
+# $a.slice(2) |$first, $second| { notice "first ${first}, second ${second}" }
+#
+# When called without a block, the function produces a concatenated result of the slices.
+#
+# @example Using slice without a block
+#
+# slice([1,2,3,4,5,6], 2) # produces [[1,2], [3,4], [5,6]]
+# slice(Integer[1,6], 2) # produces [[1,2], [3,4], [5,6]]
+# slice(4,2) # produces [[0,1], [2,3]]
+# slice('hello',2) # produces [[h, e], [l, l], [o]]
+#
+# @since 3.2 for Array and Hash
+# @since 3.5 for additional enumerable types
+# @note requires `parser = future`.
+#
+Puppet::Functions.create_function(:slice) do
+ dispatch :slice_Hash do
+ param 'Hash[Any, Any]', :hash
+    param 'Integer[1, default]', :slice_size
+ optional_block_param
+ end
+
+ dispatch :slice_Enumerable do
+ param 'Any', :enumerable
+    param 'Integer[1, default]', :slice_size
+ optional_block_param
+ end
+
+ def slice_Hash(hash, slice_size, pblock = nil)
+ result = slice_Common(hash, slice_size, [], pblock)
+ pblock ? hash : result
+ end
+
+ def slice_Enumerable(enumerable, slice_size, pblock = nil)
+ enum = asserted_enumerable(enumerable)
+ result = slice_Common(enum, slice_size, nil, pblock)
+ pblock ? enumerable : result
+ end
+
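+  # Shared slicing logic. When the block takes one parameter per slice element and the last
+  # slice is short, it is padded with `filler` (undef for enumerables, [] for a Hash) to
+  # match the block's arity.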
+ def slice_Common(o, slice_size, filler, pblock)
+ serving_size = asserted_slice_serving_size(pblock, slice_size)
+
+ enumerator = o.each_slice(slice_size)
+ result = []
+ if serving_size == 1
+ begin
+ if pblock
+ loop do
+ pblock.call(nil, enumerator.next)
+ end
+ else
+ loop do
+ result << enumerator.next
+ end
+ end
+ rescue StopIteration
+ end
+ else
+ begin
+ loop do
+ a = enumerator.next
+ if a.size < serving_size
+ a = a.dup.fill(filler, a.length...serving_size)
+ end
+ pblock.call(nil, *a)
+ end
+ rescue StopIteration
+ end
+ end
+ if pblock
+ o
+ else
+ result
+ end
+ end
+
+ def asserted_slice_serving_size(pblock, slice_size)
+ if pblock
+ serving_size = pblock.last_captures_rest? ? slice_size : pblock.parameter_count
+ else
+ serving_size = 1
+ end
+ if serving_size == 0
+ raise ArgumentError, "slice(): block must define at least one parameter. Block has 0."
+ end
+ unless serving_size == 1 || serving_size == slice_size
+ raise ArgumentError, "slice(): block must define one parameter, or " +
+ "the same number of parameters as the given size of the slice (#{slice_size}). Block has #{serving_size}; "+
+ pblock.parameter_names.join(', ')
+ end
+ serving_size
+ end
+
+ def asserted_enumerable(obj)
+ unless enum = Puppet::Pops::Types::Enumeration.enumerator(obj)
+      raise ArgumentError, "#{self.class.name}(): wrong argument type (#{obj.class}); must be something enumerable."
+ end
+ enum
+ end
+
+end
diff --git a/lib/puppet/functions/with.rb b/lib/puppet/functions/with.rb
new file mode 100644
index 000000000..6dd51ec18
--- /dev/null
+++ b/lib/puppet/functions/with.rb
@@ -0,0 +1,23 @@
+# Call a lambda with the given arguments. Since the parameters of the lambda
+# are local to the lambda's scope, this can be used to create private sections
+# of logic in a class so that the variables are not visible outside of the
+# class.
+#
+# @example Using with
+#
+# # notices the array [1, 2, 'foo']
+# with(1, 2, 'foo') |$x, $y, $z| { notice [$x, $y, $z] }
+#
+# @since 3.7.0
+#
+Puppet::Functions.create_function(:with) do
+ dispatch :with do
+ param 'Any', 'arg'
+ arg_count(0, :default)
+ required_block_param
+ end
+
+ def with(*args)
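+    # The required block arrives as the last element of args; forward the remaining
+    # arguments to it.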
+ args[-1].call({}, *args[0..-2])
+ end
+end
diff --git a/lib/puppet/indirector/catalog/compiler.rb b/lib/puppet/indirector/catalog/compiler.rb
index 0804d1819..6f4e2f3e4 100644
--- a/lib/puppet/indirector/catalog/compiler.rb
+++ b/lib/puppet/indirector/catalog/compiler.rb
@@ -1,182 +1,182 @@
require 'puppet/node'
require 'puppet/resource/catalog'
require 'puppet/indirector/code'
require 'puppet/util/profiler'
require 'yaml'
class Puppet::Resource::Catalog::Compiler < Puppet::Indirector::Code
desc "Compiles catalogs on demand using Puppet's compiler."
include Puppet::Util
attr_accessor :code
def extract_facts_from_request(request)
return unless text_facts = request.options[:facts]
unless format = request.options[:facts_format]
raise ArgumentError, "Facts but no fact format provided for #{request.key}"
end
- Puppet::Util::Profiler.profile("Found facts") do
+ Puppet::Util::Profiler.profile("Found facts", [:compiler, :find_facts]) do
# If the facts were encoded as yaml, then the param reconstitution system
# in Network::HTTP::Handler will automagically deserialize the value.
if text_facts.is_a?(Puppet::Node::Facts)
facts = text_facts
else
# We unescape here because the corresponding code in Puppet::Configurer::FactHandler escapes
facts = Puppet::Node::Facts.convert_from(format, CGI.unescape(text_facts))
end
unless facts.name == request.key
raise Puppet::Error, "Catalog for #{request.key.inspect} was requested with fact definition for the wrong node (#{facts.name.inspect})."
end
facts.add_timestamp
options = {
:environment => request.environment,
:transaction_uuid => request.options[:transaction_uuid],
}
Puppet::Node::Facts.indirection.save(facts, nil, options)
end
end
# Compile a node's catalog.
def find(request)
extract_facts_from_request(request)
node = node_from_request(request)
node.trusted_data = Puppet.lookup(:trusted_information) { Puppet::Context::TrustedInformation.local(node) }.to_h
if catalog = compile(node)
return catalog
else
# This shouldn't actually happen; we should either return
# a config or raise an exception.
return nil
end
end
# filter-out a catalog to remove exported resources
def filter(catalog)
return catalog.filter { |r| r.virtual? } if catalog.respond_to?(:filter)
catalog
end
def initialize
- Puppet::Util::Profiler.profile("Setup server facts for compiling") do
+ Puppet::Util::Profiler.profile("Setup server facts for compiling", [:compiler, :init_server_facts]) do
set_server_facts
end
end
# Is our compiler part of a network, or are we just local?
def networked?
Puppet.run_mode.master?
end
private
# Add any extra data necessary to the node.
def add_node_data(node)
# Merge in our server-side facts, so they can be used during compilation.
node.merge(@server_facts)
end
# Compile the actual catalog.
def compile(node)
str = "Compiled catalog for #{node.name}"
str += " in environment #{node.environment}" if node.environment
config = nil
benchmark(:notice, str) do
- Puppet::Util::Profiler.profile(str) do
+ Puppet::Util::Profiler.profile(str, [:compiler, :compile, node.environment, node.name]) do
begin
config = Puppet::Parser::Compiler.compile(node)
rescue Puppet::Error => detail
Puppet.err(detail.to_s) if networked?
raise
end
end
end
config
end
# Turn our host name into a node object.
def find_node(name, environment, transaction_uuid)
- Puppet::Util::Profiler.profile("Found node information") do
+ Puppet::Util::Profiler.profile("Found node information", [:compiler, :find_node]) do
node = nil
begin
node = Puppet::Node.indirection.find(name, :environment => environment,
:transaction_uuid => transaction_uuid)
rescue => detail
message = "Failed when searching for node #{name}: #{detail}"
Puppet.log_exception(detail, message)
raise Puppet::Error, message, detail.backtrace
end
# Add any external data to the node.
if node
add_node_data(node)
end
node
end
end
# Extract the node from the request, or use the request
# to find the node.
def node_from_request(request)
if node = request.options[:use_node]
if request.remote?
raise Puppet::Error, "Invalid option use_node for a remote request"
else
return node
end
end
# We rely on our authorization system to determine whether the connected
# node is allowed to compile the catalog's node referenced by key.
# By default the REST authorization system makes sure only the connected node
# can compile his catalog.
# This allows for instance monitoring systems or puppet-load to check several
# node's catalog with only one certificate and a modification to auth.conf
# If no key is provided we can only compile the currently connected node.
name = request.key || request.node
if node = find_node(name, request.environment, request.options[:transaction_uuid])
return node
end
raise ArgumentError, "Could not find node '#{name}'; cannot compile"
end
# Initialize our server fact hash; we add these to each client, and they
# won't change while we're running, so it's safe to cache the values.
def set_server_facts
@server_facts = {}
# Add our server version to the fact list
@server_facts["serverversion"] = Puppet.version.to_s
# And then add the server name and IP
{"servername" => "fqdn",
"serverip" => "ipaddress"
}.each do |var, fact|
if value = Facter.value(fact)
@server_facts[var] = value
else
Puppet.warning "Could not retrieve fact #{fact}"
end
end
if @server_facts["servername"].nil?
host = Facter.value(:hostname)
if domain = Facter.value(:domain)
@server_facts["servername"] = [host, domain].join(".")
else
@server_facts["servername"] = host
end
end
end
end
diff --git a/lib/puppet/indirector/data_binding/hiera.rb b/lib/puppet/indirector/data_binding/hiera.rb
index 7fbc63782..e6e609d55 100644
--- a/lib/puppet/indirector/data_binding/hiera.rb
+++ b/lib/puppet/indirector/data_binding/hiera.rb
@@ -1,50 +1,7 @@
-require 'puppet/indirector/code'
+require 'puppet/indirector/hiera'
require 'hiera/scope'
-class Puppet::DataBinding::Hiera < Puppet::Indirector::Code
+class Puppet::DataBinding::Hiera < Puppet::Indirector::Hiera
desc "Retrieve data using Hiera."
-
- def initialize(*args)
- if ! Puppet.features.hiera?
- raise "Hiera terminus not supported without hiera library"
- end
- super
- end
-
- if defined?(::Psych::SyntaxError)
- DataBindingExceptions = [::StandardError, ::Psych::SyntaxError]
- else
- DataBindingExceptions = [::StandardError]
- end
-
- def find(request)
- hiera.lookup(request.key, nil, Hiera::Scope.new(request.options[:variables]), nil, nil)
- rescue *DataBindingExceptions => detail
- raise Puppet::DataBinding::LookupError.new(detail.message, detail)
- end
-
- private
-
- def self.hiera_config
- hiera_config = Puppet.settings[:hiera_config]
- config = {}
-
- if Puppet::FileSystem.exist?(hiera_config)
- config = Hiera::Config.load(hiera_config)
- else
- Puppet.warning "Config file #{hiera_config} not found, using Hiera defaults"
- end
-
- config[:logger] = 'puppet'
- config
- end
-
- def self.hiera
- @hiera ||= Hiera.new(:config => hiera_config)
- end
-
- def hiera
- self.class.hiera
- end
end
diff --git a/lib/puppet/indirector/facts/couch.rb b/lib/puppet/indirector/facts/couch.rb
index 54980eb14..9cbd0a3dd 100644
--- a/lib/puppet/indirector/facts/couch.rb
+++ b/lib/puppet/indirector/facts/couch.rb
@@ -1,34 +1,36 @@
require 'puppet/node/facts'
require 'puppet/indirector/couch'
class Puppet::Node::Facts::Couch < Puppet::Indirector::Couch
- desc "Store facts in CouchDB. This should not be used with the inventory service;
+ desc "DEPRECATED. This terminus will be removed in Puppet 4.0.
+
+ Store facts in CouchDB. This should not be used with the inventory service;
it is for more obscure custom integrations. If you are wondering whether you
should use it, you shouldn't; use PuppetDB instead."
# Return the facts object or nil if there is no document
def find(request)
doc = super
doc ? model.new(doc['_id'], doc['facts']) : nil
end
private
# Facts values are stored to the document's 'facts' attribute. Hostname is
# stored to 'name'
#
def hash_from(request)
super.merge('facts' => request.instance.values)
end
# Facts are stored to the 'node' document.
def document_type_for(request)
'node'
end
# The id used to store the object in couchdb.
def id_for(request)
request.key.to_s
end
end
diff --git a/lib/puppet/indirector/facts/facter.rb b/lib/puppet/indirector/facts/facter.rb
index e22977568..7f65e3488 100644
--- a/lib/puppet/indirector/facts/facter.rb
+++ b/lib/puppet/indirector/facts/facter.rb
@@ -1,91 +1,78 @@
require 'puppet/node/facts'
require 'puppet/indirector/code'
class Puppet::Node::Facts::Facter < Puppet::Indirector::Code
desc "Retrieve facts from Facter. This provides a somewhat abstract interface
between Puppet and Facter. It's only `somewhat` abstract because it always
returns the local host's facts, regardless of what you attempt to find."
- private
-
- def self.reload_facter
- Facter.clear
- Facter.loadfacts
+ def destroy(facts)
+ raise Puppet::DevError, 'You cannot destroy facts in the code store; it is only used for getting facts from Facter'
end
- def self.load_fact_plugins
- # Add any per-module fact directories to the factpath
- module_fact_dirs = Puppet.lookup(:current_environment).modulepath.collect do |d|
- ["lib", "plugins"].map do |subdirectory|
- Dir.glob("#{d}/*/#{subdirectory}/facter")
- end
- end.flatten
- dirs = module_fact_dirs + Puppet[:factpath].split(File::PATH_SEPARATOR)
- dirs.uniq.each do |dir|
- load_facts_in_dir(dir)
- end
+ def save(facts)
+ raise Puppet::DevError, 'You cannot save facts to the code store; it is only used for getting facts from Facter'
end
- def self.setup_external_facts(request)
- # Add any per-module fact directories to the factpath
- external_facts_dirs = []
- request.environment.modules.each do |m|
- if m.has_external_facts?
- Puppet.info "Loading external facts from #{m.plugin_fact_directory}"
- external_facts_dirs << m.plugin_fact_directory
- end
- end
-
- # Add system external fact directory if it exists
- if File.directory?(Puppet[:pluginfactdest])
- external_facts_dirs << Puppet[:pluginfactdest]
- end
-
- # Add to facter config
- Facter.search_external external_facts_dirs
+  # Look up a host's facts in Facter.
+ def find(request)
+ Facter.reset
+ self.class.setup_external_search_paths(request) if Puppet.features.external_facts?
+ self.class.setup_search_paths(request)
+ result = Puppet::Node::Facts.new(request.key, Facter.to_hash)
+ result.add_local_facts
+ Puppet[:stringify_facts] ? result.stringify : result.sanitize
+ result
end
- def self.load_facts_in_dir(dir)
- return unless FileTest.directory?(dir)
+ private
- Dir.chdir(dir) do
- Dir.glob("*.rb").each do |file|
- fqfile = ::File.join(dir, file)
- begin
- Puppet.info "Loading facts in #{fqfile}"
- ::Timeout::timeout(Puppet[:configtimeout]) do
- load File.join('.', file)
- end
- rescue SystemExit,NoMemoryError
- raise
- rescue Exception => detail
- Puppet.warning "Could not load fact file #{fqfile}: #{detail}"
+ def self.setup_search_paths(request)
+ # Add any per-module fact directories to facter's search path
+ dirs = request.environment.modulepath.collect do |dir|
+ ['lib', 'plugins'].map do |subdirectory|
+ Dir.glob("#{dir}/*/#{subdirectory}/facter")
+ end
+ end.flatten + Puppet[:factpath].split(File::PATH_SEPARATOR)
+
+ dirs = dirs.select do |dir|
+ next false unless FileTest.directory?(dir)
+
+      # Even though we no longer directly load facts in the terminus,
+ # print out each .rb in the facts directory as module
+ # developers may find that information useful for debugging purposes
+ if Puppet::Util::Log.sendlevel?(:info)
+ Puppet.info "Loading facts"
+ Dir.glob("#{dir}/*.rb").each do |file|
+ Puppet.debug "Loading facts from #{file}"
end
end
- end
- end
- public
+ true
+ end
- def destroy(facts)
- raise Puppet::DevError, "You cannot destroy facts in the code store; it is only used for getting facts from Facter"
+ Facter.search *dirs
end
- # Look a host's facts up in Facter.
- def find(request)
- self.class.setup_external_facts(request) if Puppet.features.external_facts?
- self.class.reload_facter
- self.class.load_fact_plugins
- result = Puppet::Node::Facts.new(request.key, Facter.to_hash)
-
- result.add_local_facts
- Puppet[:stringify_facts] ? result.stringify : result.sanitize
+ def self.setup_external_search_paths(request)
+ # Add any per-module external fact directories to facter's external search path
+ dirs = []
+ request.environment.modules.each do |m|
+ if m.has_external_facts?
+ dir = m.plugin_fact_directory
+ Puppet.debug "Loading external facts from #{dir}"
+ dirs << dir
+ end
+ end
- result
- end
+ # Add system external fact directory if it exists
+ if FileTest.directory?(Puppet[:pluginfactdest])
+ dir = Puppet[:pluginfactdest]
+ Puppet.debug "Loading external facts from #{dir}"
+ dirs << dir
+ end
- def save(facts)
- raise Puppet::DevError, "You cannot save facts to the code store; it is only used for getting facts from Facter"
+ Facter.search_external dirs
end
end
diff --git a/lib/puppet/indirector/file_bucket_file/file.rb b/lib/puppet/indirector/file_bucket_file/file.rb
index 14d9f1087..a356033b6 100644
--- a/lib/puppet/indirector/file_bucket_file/file.rb
+++ b/lib/puppet/indirector/file_bucket_file/file.rb
@@ -1,139 +1,142 @@
require 'puppet/indirector/code'
require 'puppet/file_bucket/file'
require 'puppet/util/checksums'
require 'fileutils'
module Puppet::FileBucketFile
class File < Puppet::Indirector::Code
include Puppet::Util::Checksums
desc "Store files in a directory set based on their checksums."
def find(request)
checksum, files_original_path = request_to_checksum_and_path(request)
contents_file = path_for(request.options[:bucket_path], checksum, 'contents')
paths_file = path_for(request.options[:bucket_path], checksum, 'paths')
if Puppet::FileSystem.exist?(contents_file) && matches(paths_file, files_original_path)
if request.options[:diff_with]
other_contents_file = path_for(request.options[:bucket_path], request.options[:diff_with], 'contents')
raise "could not find diff_with #{request.options[:diff_with]}" unless Puppet::FileSystem.exist?(other_contents_file)
return `diff #{Puppet::FileSystem.path_string(contents_file).inspect} #{Puppet::FileSystem.path_string(other_contents_file).inspect}`
else
Puppet.info "FileBucket read #{checksum}"
model.new(Puppet::FileSystem.binread(contents_file))
end
else
nil
end
end
def head(request)
checksum, files_original_path = request_to_checksum_and_path(request)
contents_file = path_for(request.options[:bucket_path], checksum, 'contents')
paths_file = path_for(request.options[:bucket_path], checksum, 'paths')
Puppet::FileSystem.exist?(contents_file) && matches(paths_file, files_original_path)
end
def save(request)
instance = request.instance
_, files_original_path = request_to_checksum_and_path(request)
contents_file = path_for(instance.bucket_path, instance.checksum_data, 'contents')
paths_file = path_for(instance.bucket_path, instance.checksum_data, 'paths')
save_to_disk(instance, files_original_path, contents_file, paths_file)
# don't echo the request content back to the agent
model.new('')
end
def validate_key(request)
# There are no ACLs on filebucket files so validating key is not important
end
private
# @param paths_file [Object] Opaque file path
# @param files_original_path [String]
#
def matches(paths_file, files_original_path)
Puppet::FileSystem.open(paths_file, 0640, 'a+') do |f|
path_match(f, files_original_path)
end
end
def path_match(file_handle, files_original_path)
return true unless files_original_path # if no path was provided, it's a match
file_handle.rewind
file_handle.each_line do |line|
return true if line.chomp == files_original_path
end
return false
end
# @param contents_file [Object] Opaque file path
# @param paths_file [Object] Opaque file path
#
def save_to_disk(bucket_file, files_original_path, contents_file, paths_file)
Puppet::Util.withumask(0007) do
unless Puppet::FileSystem.dir_exist?(paths_file)
Puppet::FileSystem.dir_mkpath(paths_file)
end
Puppet::FileSystem.exclusive_open(paths_file, 0640, 'a+') do |f|
if Puppet::FileSystem.exist?(contents_file)
verify_identical_file!(contents_file, bucket_file)
Puppet::FileSystem.touch(contents_file)
else
Puppet::FileSystem.open(contents_file, 0440, 'wb') do |of|
- of.write(bucket_file.contents)
+ # PUP-1044 writes all of the contents
+ bucket_file.stream() do |src|
+ FileUtils.copy_stream(src, of)
+ end
end
end
unless path_match(f, files_original_path)
f.seek(0, IO::SEEK_END)
f.puts(files_original_path)
end
end
end
end
def request_to_checksum_and_path(request)
checksum_type, checksum, path = request.key.split(/\//, 3)
if path == '' # Treat "md5/<checksum>/" like "md5/<checksum>"
path = nil
end
raise ArgumentError, "Unsupported checksum type #{checksum_type.inspect}" if checksum_type != Puppet[:digest_algorithm]
expected = method(checksum_type + "_hex_length").call
raise "Invalid checksum #{checksum.inspect}" if checksum !~ /^[0-9a-f]{#{expected}}$/
[checksum, path]
end
# @return [Object] Opaque path as constructed by the Puppet::FileSystem
#
def path_for(bucket_path, digest, subfile = nil)
bucket_path ||= Puppet[:bucketdir]
dir = ::File.join(digest[0..7].split(""))
basedir = ::File.join(bucket_path, dir, digest)
Puppet::FileSystem.pathname(subfile ? ::File.join(basedir, subfile) : basedir)
end
# @param contents_file [Object] Opaque file path
# @param bucket_file [IO]
def verify_identical_file!(contents_file, bucket_file)
- if bucket_file.contents.size == Puppet::FileSystem.size(contents_file)
- if Puppet::FileSystem.compare_stream(contents_file, bucket_file.stream)
+ if bucket_file.size == Puppet::FileSystem.size(contents_file)
+ if bucket_file.stream() {|s| Puppet::FileSystem.compare_stream(contents_file, s) }
Puppet.info "FileBucket got a duplicate file #{bucket_file.checksum}"
return
end
end
# If the contents or sizes don't match, then we've found a conflict.
# Unlikely, but quite bad.
raise Puppet::FileBucket::BucketError, "Got passed new contents for sum #{bucket_file.checksum}"
end
end
end
diff --git a/lib/puppet/indirector/data_binding/hiera.rb b/lib/puppet/indirector/hiera.rb
similarity index 88%
copy from lib/puppet/indirector/data_binding/hiera.rb
copy to lib/puppet/indirector/hiera.rb
index 7fbc63782..a552d569b 100644
--- a/lib/puppet/indirector/data_binding/hiera.rb
+++ b/lib/puppet/indirector/hiera.rb
@@ -1,50 +1,48 @@
-require 'puppet/indirector/code'
+require 'puppet/indirector/terminus'
require 'hiera/scope'
-class Puppet::DataBinding::Hiera < Puppet::Indirector::Code
- desc "Retrieve data using Hiera."
-
+class Puppet::Indirector::Hiera < Puppet::Indirector::Terminus
def initialize(*args)
if ! Puppet.features.hiera?
raise "Hiera terminus not supported without hiera library"
end
super
end
if defined?(::Psych::SyntaxError)
DataBindingExceptions = [::StandardError, ::Psych::SyntaxError]
else
DataBindingExceptions = [::StandardError]
end
def find(request)
hiera.lookup(request.key, nil, Hiera::Scope.new(request.options[:variables]), nil, nil)
rescue *DataBindingExceptions => detail
raise Puppet::DataBinding::LookupError.new(detail.message, detail)
end
private
def self.hiera_config
hiera_config = Puppet.settings[:hiera_config]
config = {}
if Puppet::FileSystem.exist?(hiera_config)
config = Hiera::Config.load(hiera_config)
else
Puppet.warning "Config file #{hiera_config} not found, using Hiera defaults"
end
config[:logger] = 'puppet'
config
end
def self.hiera
@hiera ||= Hiera.new(:config => hiera_config)
end
def hiera
self.class.hiera
end
end
diff --git a/lib/puppet/indirector/indirection.rb b/lib/puppet/indirector/indirection.rb
index a22f465ac..26a33543c 100644
--- a/lib/puppet/indirector/indirection.rb
+++ b/lib/puppet/indirector/indirection.rb
@@ -1,336 +1,336 @@
require 'puppet/util/docs'
require 'puppet/util/profiler'
require 'puppet/util/methodhelper'
require 'puppet/indirector/envelope'
require 'puppet/indirector/request'
require 'puppet/util/instrumentation/instrumentable'
# The class that connects functional classes with their different collection
# back-ends. Each indirection has a set of associated terminus classes,
# each of which is a subclass of Puppet::Indirector::Terminus.
class Puppet::Indirector::Indirection
include Puppet::Util::MethodHelper
include Puppet::Util::Docs
extend Puppet::Util::Instrumentation::Instrumentable
attr_accessor :name, :model
attr_reader :termini
probe :find, :label => Proc.new { |parent, key, *args| "find_#{parent.name}_#{parent.terminus_class}" }, :data => Proc.new { |parent, key, *args| { :key => key }}
probe :save, :label => Proc.new { |parent, key, *args| "save_#{parent.name}_#{parent.terminus_class}" }, :data => Proc.new { |parent, key, *args| { :key => key }}
probe :search, :label => Proc.new { |parent, key, *args| "search_#{parent.name}_#{parent.terminus_class}" }, :data => Proc.new { |parent, key, *args| { :key => key }}
probe :destroy, :label => Proc.new { |parent, key, *args| "destroy_#{parent.name}_#{parent.terminus_class}" }, :data => Proc.new { |parent, key, *args| { :key => key }}
@@indirections = []
# Find an indirection by name. This is provided so that Terminus classes
# can specifically hook up with the indirections they are associated with.
def self.instance(name)
@@indirections.find { |i| i.name == name }
end
# Return a list of all known indirections. Used to generate the
# reference.
def self.instances
@@indirections.collect { |i| i.name }
end
# Find an indirected model by name. This is provided so that Terminus classes
# can specifically hook up with the indirections they are associated with.
def self.model(name)
return nil unless match = @@indirections.find { |i| i.name == name }
match.model
end
# Create and return our cache terminus.
def cache
raise(Puppet::DevError, "Tried to cache when no cache class was set") unless cache_class
terminus(cache_class)
end
# Should we use a cache?
def cache?
cache_class ? true : false
end
attr_reader :cache_class
# Define a terminus class to be used for caching.
def cache_class=(class_name)
validate_terminus_class(class_name) if class_name
@cache_class = class_name
end
# This is only used for testing.
def delete
@@indirections.delete(self) if @@indirections.include?(self)
end
# Set the time-to-live for instances created through this indirection.
def ttl=(value)
raise ArgumentError, "Indirection TTL must be an integer" unless value.is_a?(Fixnum)
@ttl = value
end
# Default to the runinterval for the ttl.
def ttl
@ttl ||= Puppet[:runinterval]
end
# Calculate the expiration date for a returned instance.
def expiration
Time.now + ttl
end
# Generate the full doc string.
def doc
text = ""
text << scrub(@doc) << "\n\n" if @doc
text << "* **Indirected Class**: `#{@indirected_class}`\n";
if terminus_setting
text << "* **Terminus Setting**: #{terminus_setting}\n"
end
text
end
def initialize(model, name, options = {})
@model = model
@name = name
@termini = {}
@cache_class = nil
@terminus_class = nil
raise(ArgumentError, "Indirection #{@name} is already defined") if @@indirections.find { |i| i.name == @name }
@@indirections << self
@indirected_class = options.delete(:indirected_class)
if mod = options[:extend]
extend(mod)
options.delete(:extend)
end
# This is currently only used for cache_class and terminus_class.
set_options(options)
end
# Set up our request object.
def request(*args)
Puppet::Indirector::Request.new(self.name, *args)
end
# Return the singleton terminus for this indirection.
def terminus(terminus_name = nil)
# Get the name of the terminus.
raise Puppet::DevError, "No terminus specified for #{self.name}; cannot redirect" unless terminus_name ||= terminus_class
termini[terminus_name] ||= make_terminus(terminus_name)
end
# This can be used to select the terminus class.
attr_accessor :terminus_setting
# Determine the terminus class.
def terminus_class
unless @terminus_class
if setting = self.terminus_setting
self.terminus_class = Puppet.settings[setting]
else
raise Puppet::DevError, "No terminus class nor terminus setting was provided for indirection #{self.name}"
end
end
@terminus_class
end
def reset_terminus_class
@terminus_class = nil
end
# Specify the terminus class to use.
def terminus_class=(klass)
validate_terminus_class(klass)
@terminus_class = klass
end
# This is used by terminus_class= and cache=.
def validate_terminus_class(terminus_class)
raise ArgumentError, "Invalid terminus name #{terminus_class.inspect}" unless terminus_class and terminus_class.to_s != ""
unless Puppet::Indirector::Terminus.terminus_class(self.name, terminus_class)
raise ArgumentError, "Could not find terminus #{terminus_class} for indirection #{self.name}"
end
end
# Expire a cached object, if one is cached. Note that we don't actually
# remove it, we expire it and write it back out to disk. This way people
# can still use the expired object if they want.
def expire(key, options={})
request = request(:expire, key, nil, options)
return nil unless cache?
return nil unless instance = cache.find(request(:find, key, nil, options))
Puppet.info "Expiring the #{self.name} cache of #{instance.name}"
# Set an expiration date in the past
instance.expiration = Time.now - 60
cache.save(request(:save, nil, instance, options))
end
def allow_remote_requests?
terminus.allow_remote_requests?
end
# Search for an instance in the appropriate terminus, caching the
# results if caching is configured..
def find(key, options={})
request = request(:find, key, nil, options)
terminus = prepare(request)
result = find_in_cache(request)
if not result.nil?
result
elsif request.ignore_terminus?
nil
else
# Otherwise, return the result from the terminus, caching if
# appropriate.
result = terminus.find(request)
if not result.nil?
result.expiration ||= self.expiration if result.respond_to?(:expiration)
if cache?
Puppet.info "Caching #{self.name} for #{request.key}"
cache.save request(:save, key, result, options)
end
filtered = result
if terminus.respond_to?(:filter)
- Puppet::Util::Profiler.profile("Filtered result for #{self.name} #{request.key}") do
+ Puppet::Util::Profiler.profile("Filtered result for #{self.name} #{request.key}", [:indirector, :filter, self.name, request.key]) do
filtered = terminus.filter(result)
end
end
filtered
end
end
end
# Search for an instance in the appropriate terminus, and return a
# boolean indicating whether the instance was found.
def head(key, options={})
request = request(:head, key, nil, options)
terminus = prepare(request)
# Look in the cache first, then in the terminus. Force the result
# to be a boolean.
!!(find_in_cache(request) || terminus.head(request))
end
def find_in_cache(request)
# See if our instance is in the cache and up to date.
return nil unless cache? and ! request.ignore_cache? and cached = cache.find(request)
if cached.expired?
Puppet.info "Not using expired #{self.name} for #{request.key} from cache; expired at #{cached.expiration}"
return nil
end
Puppet.debug "Using cached #{self.name} for #{request.key}"
cached
rescue => detail
Puppet.log_exception(detail, "Cached #{self.name} for #{request.key} failed: #{detail}")
nil
end
# Remove something via the terminus.
def destroy(key, options={})
request = request(:destroy, key, nil, options)
terminus = prepare(request)
result = terminus.destroy(request)
if cache? and cache.find(request(:find, key, nil, options))
# Reuse the existing request, since it's equivalent.
cache.destroy(request)
end
result
end
# Search for more than one instance. Should always return an array.
def search(key, options={})
request = request(:search, key, nil, options)
terminus = prepare(request)
if result = terminus.search(request)
raise Puppet::DevError, "Search results from terminus #{terminus.name} are not an array" unless result.is_a?(Array)
result.each do |instance|
next unless instance.respond_to? :expiration
instance.expiration ||= self.expiration
end
return result
end
end
# Save the instance in the appropriate terminus. This method is
# normally an instance method on the indirected class.
def save(instance, key = nil, options={})
request = request(:save, key, instance, options)
terminus = prepare(request)
result = terminus.save(request)
# If caching is enabled, save our document there
cache.save(request) if cache?
result
end
private
# Check authorization if there's a hook available; fail if there is one
# and it returns false.
def check_authorization(request, terminus)
# At this point, we're assuming authorization makes no sense without
# client information.
return unless request.node
# This is only to authorize via a terminus-specific authorization hook.
return unless terminus.respond_to?(:authorized?)
unless terminus.authorized?(request)
msg = "Not authorized to call #{request.method} on #{request}"
msg += " with #{request.options.inspect}" unless request.options.empty?
raise ArgumentError, msg
end
end
# Setup a request, pick the appropriate terminus, check the request's authorization, and return it.
def prepare(request)
# Pick our terminus.
if respond_to?(:select_terminus)
unless terminus_name = select_terminus(request)
raise ArgumentError, "Could not determine appropriate terminus for #{request}"
end
else
terminus_name = terminus_class
end
dest_terminus = terminus(terminus_name)
check_authorization(request, dest_terminus)
dest_terminus.validate(request)
dest_terminus
end
# Create a new terminus instance.
def make_terminus(terminus_class)
# Load our terminus class.
unless klass = Puppet::Indirector::Terminus.terminus_class(self.name, terminus_class)
raise ArgumentError, "Could not find terminus #{terminus_class} for indirection #{self.name}"
end
klass.new
end
end
diff --git a/lib/puppet/indirector/request.rb b/lib/puppet/indirector/request.rb
index a67753f68..04c55f9a3 100644
--- a/lib/puppet/indirector/request.rb
+++ b/lib/puppet/indirector/request.rb
@@ -1,312 +1,318 @@
require 'cgi'
require 'uri'
require 'puppet/indirector'
require 'puppet/util/pson'
require 'puppet/network/resolver'
# This class encapsulates all of the information you need to make an
# Indirection call, and as a result also handles REST calls. It's somewhat
# analogous to an HTTP Request object, except tuned for our Indirector.
class Puppet::Indirector::Request
attr_accessor :key, :method, :options, :instance, :node, :ip, :authenticated, :ignore_cache, :ignore_terminus
attr_accessor :server, :port, :uri, :protocol
attr_reader :indirection_name
# trusted_information is specifically left out because we can't serialize it
# and keep it "trusted"
OPTION_ATTRIBUTES = [:ip, :node, :authenticated, :ignore_terminus, :ignore_cache, :instance, :environment]
::PSON.register_document_type('IndirectorRequest',self)
def self.from_data_hash(data)
raise ArgumentError, "No indirection name provided in data" unless indirection_name = data['type']
raise ArgumentError, "No method name provided in data" unless method = data['method']
raise ArgumentError, "No key provided in data" unless key = data['key']
request = new(indirection_name, method, key, nil, data['attributes'])
if instance = data['instance']
klass = Puppet::Indirector::Indirection.instance(request.indirection_name).model
if instance.is_a?(klass)
request.instance = instance
else
request.instance = klass.from_data_hash(instance)
end
end
request
end
def self.from_pson(json)
Puppet.deprecation_warning("from_pson is being removed in favour of from_data_hash.")
self.from_data_hash(json)
end
def to_data_hash
result = {
'type' => indirection_name,
'method' => method,
'key' => key
}
attributes = {}
OPTION_ATTRIBUTES.each do |key|
next unless value = send(key)
attributes[key] = value
end
options.each do |opt, value|
attributes[opt] = value
end
result['attributes'] = attributes unless attributes.empty?
result['instance'] = instance if instance
result
end
def to_pson_data_hash
{
'document_type' => 'IndirectorRequest',
'data' => to_data_hash,
}
end
def to_pson(*args)
to_pson_data_hash.to_pson(*args)
end
# Is this an authenticated request?
def authenticated?
# Double negative, so we just get true or false
! ! authenticated
end
def environment
- @environment ||= Puppet.lookup(:environments).get(Puppet[:environment])
+ # If environment has not been set directly, we should use the application's
+ # current environment
+ @environment ||= Puppet.lookup(:current_environment)
end
def environment=(env)
- @environment = if env.is_a?(Puppet::Node::Environment)
+ @environment =
+ if env.is_a?(Puppet::Node::Environment)
env
+ elsif (current_environment = Puppet.lookup(:current_environment)).name == env
+ current_environment
else
- Puppet.lookup(:environments).get(env)
+ Puppet.lookup(:environments).get(env) ||
+ raise(Puppet::Environments::EnvironmentNotFound, env)
end
end
def escaped_key
URI.escape(key)
end
# LAK:NOTE This is a messy interface to the cache, and it's only
# used by the Configurer class. I decided it was better to implement
# it now and refactor later, when we have a better design, than
# to spend another month coming up with a design now that might
# not be any better.
def ignore_cache?
ignore_cache
end
def ignore_terminus?
ignore_terminus
end
def initialize(indirection_name, method, key, instance, options = {})
@instance = instance
options ||= {}
self.indirection_name = indirection_name
self.method = method
options = options.inject({}) { |hash, ary| hash[ary[0].to_sym] = ary[1]; hash }
set_attributes(options)
@options = options
if key
# If the request key is a URI, then we need to treat it specially,
# because it rewrites the key. We could otherwise strip server/port/etc
# info out in the REST class, but it seemed bad design for the REST
# class to rewrite the key.
if key.to_s =~ /^\w+:\// and not Puppet::Util.absolute_path?(key.to_s) # it's a URI
set_uri_key(key)
else
@key = key
end
end
@key = @instance.name if ! @key and @instance
end
# Look up the indirection based on the name provided.
def indirection
Puppet::Indirector::Indirection.instance(indirection_name)
end
def indirection_name=(name)
@indirection_name = name.to_sym
end
def model
raise ArgumentError, "Could not find indirection '#{indirection_name}'" unless i = indirection
i.model
end
# Are we trying to interact with multiple resources, or just one?
def plural?
method == :search
end
# Create the query string, if options are present.
def query_string
return "" if options.nil? || options.empty?
# For backward compatibility with older (pre-3.3) masters,
# this puppet option allows serialization of query parameter
# arrays as yaml. This can be removed when we remove yaml
# support entirely.
if Puppet.settings[:legacy_query_parameter_serialization]
replace_arrays_with_yaml
end
"?" + encode_params(expand_into_parameters(options.to_a))
end
def replace_arrays_with_yaml
options.each do |key, value|
case value
when Array
options[key] = YAML.dump(value)
end
end
end
def expand_into_parameters(data)
data.inject([]) do |params, key_value|
key, value = key_value
expanded_value = case value
when Array
value.collect { |val| [key, val] }
else
[key_value]
end
params.concat(expand_primitive_types_into_parameters(expanded_value))
end
end
def expand_primitive_types_into_parameters(data)
data.inject([]) do |params, key_value|
key, value = key_value
case value
when nil
params
when true, false, String, Symbol, Fixnum, Bignum, Float
params << [key, value]
else
raise ArgumentError, "HTTP REST queries cannot handle values of type '#{value.class}'"
end
end
end
def encode_params(params)
params.collect do |key, value|
"#{key}=#{CGI.escape(value.to_s)}"
end.join("&")
end
def to_hash
result = options.dup
OPTION_ATTRIBUTES.each do |attribute|
if value = send(attribute)
result[attribute] = value
end
end
result
end
def to_s
return(uri ? uri : "/#{indirection_name}/#{key}")
end
def do_request(srv_service=:puppet, default_server=Puppet.settings[:server], default_port=Puppet.settings[:masterport], &block)
# We were given a specific server to use, so just use that one.
# This happens if someone does something like specifying a file
# source using a puppet:// URI with a specific server.
return yield(self) if !self.server.nil?
if Puppet.settings[:use_srv_records]
Puppet::Network::Resolver.each_srv_record(Puppet.settings[:srv_domain], srv_service) do |srv_server, srv_port|
begin
self.server = srv_server
self.port = srv_port
return yield(self)
rescue SystemCallError => e
Puppet.warning "Error connecting to #{srv_server}:#{srv_port}: #{e.message}"
end
end
end
# ... Fall back onto the default server.
Puppet.debug "No more servers left, falling back to #{default_server}:#{default_port}" if Puppet.settings[:use_srv_records]
self.server = default_server
self.port = default_port
return yield(self)
end
def remote?
self.node or self.ip
end
private
def set_attributes(options)
OPTION_ATTRIBUTES.each do |attribute|
if options.include?(attribute.to_sym)
send(attribute.to_s + "=", options[attribute])
options.delete(attribute)
end
end
end
# Parse the key as a URI, setting attributes appropriately.
def set_uri_key(key)
@uri = key
begin
uri = URI.parse(URI.escape(key))
rescue => detail
raise ArgumentError, "Could not understand URL #{key}: #{detail}", detail.backtrace
end
# Just short-circuit these to full paths
if uri.scheme == "file"
@key = Puppet::Util.uri_to_path(uri)
return
end
@server = uri.host if uri.host
# If the URI class can look up the scheme, it will provide a port,
# otherwise it will default to '0'.
if uri.port.to_i == 0 and uri.scheme == "puppet"
@port = Puppet.settings[:masterport].to_i
else
@port = uri.port.to_i
end
@protocol = uri.scheme
if uri.scheme == 'puppet'
@key = URI.unescape(uri.path.sub(/^\//, ''))
return
end
env, indirector, @key = URI.unescape(uri.path.sub(/^\//, '')).split('/',3)
@key ||= ''
self.environment = env unless env == ''
end
end
diff --git a/lib/puppet/indirector/resource/ral.rb b/lib/puppet/indirector/resource/ral.rb
index 5a366a329..350d722db 100644
--- a/lib/puppet/indirector/resource/ral.rb
+++ b/lib/puppet/indirector/resource/ral.rb
@@ -1,64 +1,64 @@
require 'puppet/indirector/resource/validator'
class Puppet::Resource::Ral < Puppet::Indirector::Code
include Puppet::Resource::Validator
desc "Manipulate resources with the resource abstraction layer. Only used internally."
def allow_remote_requests?
Puppet.deprecation_warning("Accessing resources on the network is deprecated. See http://links.puppetlabs.com/deprecate-networked-resource")
super
end
def find( request )
# find by name
res = type(request).instances.find { |o| o.name == resource_name(request) }
res ||= type(request).new(:name => resource_name(request), :audit => type(request).properties.collect { |s| s.name })
res.to_resource
end
def search( request )
conditions = request.options.dup
conditions[:name] = resource_name(request) if resource_name(request)
type(request).instances.map do |res|
res.to_resource
end.find_all do |res|
conditions.all? {|property, value| res.to_resource[property].to_s == value.to_s}
end.sort do |a,b|
a.title <=> b.title
end
end
def save( request )
# In RAL-land, to "save" means to actually try to change machine state
res = request.instance
ral_res = res.to_ral
catalog = Puppet::Resource::Catalog.new
catalog.add_resource ral_res
transaction = catalog.apply
[ral_res.to_resource, transaction.report]
end
private
# {type,resource}_name: the resource name may contain slashes:
# File["/etc/hosts"]. To handle, assume the type name does
# _not_ have any slashes in it, and split only on the first.
def type_name( request )
request.key.split('/', 2)[0]
end
def resource_name( request )
name = request.key.split('/', 2)[1]
name unless name == ""
end
def type( request )
- Puppet::Type.type(type_name(request)) or raise Puppet::Error, "Could not find type #{type}"
+ Puppet::Type.type(type_name(request)) or raise Puppet::Error, "Could not find type #{type_name(request)}"
end
end
diff --git a/lib/puppet/indirector/rest.rb b/lib/puppet/indirector/rest.rb
index 8c10c33e0..d1f2f4d05 100644
--- a/lib/puppet/indirector/rest.rb
+++ b/lib/puppet/indirector/rest.rb
@@ -1,263 +1,267 @@
require 'net/http'
require 'uri'
require 'puppet/network/http'
require 'puppet/network/http_pool'
require 'puppet/network/http/api/v1'
require 'puppet/network/http/compression'
# Access objects via REST
class Puppet::Indirector::REST < Puppet::Indirector::Terminus
include Puppet::Network::HTTP::Compression.module
class << self
attr_reader :server_setting, :port_setting
end
# Specify the setting that we should use to get the server name.
def self.use_server_setting(setting)
@server_setting = setting
end
# Specify the setting that we should use to get the port.
def self.use_port_setting(setting)
@port_setting = setting
end
# Specify the service to use when doing SRV record lookup
def self.use_srv_service(service)
@srv_service = service
end
def self.srv_service
@srv_service || :puppet
end
def self.server
Puppet.settings[server_setting || :server]
end
def self.port
Puppet.settings[port_setting || :masterport].to_i
end
# Provide appropriate headers.
def headers
add_accept_encoding({"Accept" => model.supported_formats.join(", ")})
end
def add_profiling_header(headers)
if (Puppet[:profile])
headers[Puppet::Network::HTTP::HEADER_ENABLE_PROFILING] = "true"
end
headers
end
def network(request)
Puppet::Network::HttpPool.http_instance(request.server || self.class.server,
request.port || self.class.port)
end
def http_get(request, path, headers = nil, *args)
http_request(:get, request, path, add_profiling_header(headers), *args)
end
def http_post(request, path, data, headers = nil, *args)
http_request(:post, request, path, data, add_profiling_header(headers), *args)
end
def http_head(request, path, headers = nil, *args)
http_request(:head, request, path, add_profiling_header(headers), *args)
end
def http_delete(request, path, headers = nil, *args)
http_request(:delete, request, path, add_profiling_header(headers), *args)
end
def http_put(request, path, data, headers = nil, *args)
http_request(:put, request, path, data, add_profiling_header(headers), *args)
end
def http_request(method, request, *args)
conn = network(request)
conn.send(method, *args)
end
def find(request)
uri, body = Puppet::Network::HTTP::API::V1.request_to_uri_and_body(request)
uri_with_query_string = "#{uri}?#{body}"
response = do_request(request) do |request|
# WEBrick in Ruby 1.9.1 only supports up to 1024 character lines in an HTTP request
# http://redmine.ruby-lang.org/issues/show/3991
if "GET #{uri_with_query_string} HTTP/1.1\r\n".length > 1024
http_post(request, uri, body, headers)
else
http_get(request, uri_with_query_string, headers)
end
end
if is_http_200?(response)
check_master_version(response)
content_type, body = parse_response(response)
result = deserialize_find(content_type, body)
result.name = request.key if result.respond_to?(:name=)
result
elsif is_http_404?(response)
return nil unless request.options[:fail_on_404]
# 404 can get special treatment, as the indirector API cannot produce a meaningful
# reason why something is not found - the missing thing may not be what the user
# expected to find, but something else entirely (like the environment).
# While this way of handling the issue is not perfect, there is at least an error
# that makes the user aware of the reason for the failure.
#
content_type, body = parse_response(response)
- msg = "Find #{uri_with_query_string} resulted in 404 with the message: #{body}"
+ msg = "Find #{elide(uri_with_query_string, 100)} resulted in 404 with the message: #{body}"
raise Puppet::Error, msg
else
nil
end
end
def head(request)
response = do_request(request) do |request|
http_head(request, Puppet::Network::HTTP::API::V1.indirection2uri(request), headers)
end
if is_http_200?(response)
check_master_version(response)
true
else
false
end
end
def search(request)
response = do_request(request) do |request|
http_get(request, Puppet::Network::HTTP::API::V1.indirection2uri(request), headers)
end
if is_http_200?(response)
check_master_version(response)
content_type, body = parse_response(response)
deserialize_search(content_type, body) || []
else
[]
end
end
def destroy(request)
raise ArgumentError, "DELETE does not accept options" unless request.options.empty?
response = do_request(request) do |request|
http_delete(request, Puppet::Network::HTTP::API::V1.indirection2uri(request), headers)
end
if is_http_200?(response)
check_master_version(response)
content_type, body = parse_response(response)
deserialize_destroy(content_type, body)
else
nil
end
end
def save(request)
raise ArgumentError, "PUT does not accept options" unless request.options.empty?
response = do_request(request) do |request|
http_put(request, Puppet::Network::HTTP::API::V1.indirection2uri(request), request.instance.render, headers.merge({ "Content-Type" => request.instance.mime }))
end
if is_http_200?(response)
check_master_version(response)
content_type, body = parse_response(response)
deserialize_save(content_type, body)
else
nil
end
end
# Encapsulate call to request.do_request with the arguments from this class
# Then yield to the code block this method was called with.
# We certainly could have retained the full request.do_request(...) { |r| ... },
# but this keeps the code much cleaner: the call to request.do_request is only
# made from here, so if we change what we pass or how we get it, we only need
# to change it in one place.
def do_request(request)
request.do_request(self.class.srv_service, self.class.server, self.class.port) { |request| yield(request) }
end
def validate_key(request)
# Validation happens on the remote end
end
private
def is_http_200?(response)
case response.code
when "404"
false
when /^2/
true
else
# Raise the http error if we didn't get a 'success' of some kind.
raise convert_to_http_error(response)
end
end
def is_http_404?(response)
response.code == "404"
end
def convert_to_http_error(response)
message = "Error #{response.code} on SERVER: #{(response.body||'').empty? ? response.message : uncompress_body(response)}"
Net::HTTPError.new(message, response)
end
def check_master_version response
if !response[Puppet::Network::HTTP::HEADER_PUPPET_VERSION] &&
(Puppet[:legacy_query_parameter_serialization] == false || Puppet[:report_serialization_format] != "yaml")
Puppet.notice "Using less secure serialization of reports and query parameters for compatibility"
Puppet.notice "with older puppet master. To remove this notice, please upgrade your master(s) "
Puppet.notice "to Puppet 3.3 or newer."
Puppet.notice "See http://links.puppetlabs.com/deprecate_yaml_on_network for more information."
Puppet[:legacy_query_parameter_serialization] = true
Puppet[:report_serialization_format] = "yaml"
end
end
# Returns the content_type, stripping any appended charset, and the
# body, decompressed if necessary (content-encoding is checked inside
# uncompress_body)
def parse_response(response)
if response['content-type']
[ response['content-type'].gsub(/\s*;.*$/,''),
body = uncompress_body(response) ]
else
raise "No content type in http response; cannot parse"
end
end
def deserialize_find(content_type, body)
model.convert_from(content_type, body)
end
def deserialize_search(content_type, body)
model.convert_from_multiple(content_type, body)
end
def deserialize_destroy(content_type, body)
model.convert_from(content_type, body)
end
def deserialize_save(content_type, body)
nil
end
- def environment
- Puppet.lookup(:environments).get(Puppet[:environment])
+ def elide(string, length)
+ if Puppet::Util::Log.level == :debug || string.length <= length
+ string
+ else
+ string[0, length - 3] + "..."
+ end
end
end
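The `elide` helper introduced above trims long URIs out of 404 error messages unless debug logging is active. A minimal standalone sketch of the same truncation logic (hypothetical method name, no Puppet dependencies assumed):

````
# Sketch of the truncation behaviour added in find/elide above: keep the
# string intact in debug mode or when it is already short enough, otherwise
# cut it down and append "..." so the message stays readable.
def elide_example(string, length, debug = false)
  if debug || string.length <= length
    string
  else
    string[0, length - 3] + "..."
  end
end

elide_example("a" * 120, 100).length  # => 100 (ends in "...")
elide_example("short", 100)           # => "short"
````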
diff --git a/lib/puppet/loaders.rb b/lib/puppet/loaders.rb
index e85b5e3fc..3d150bdcf 100644
--- a/lib/puppet/loaders.rb
+++ b/lib/puppet/loaders.rb
@@ -1,20 +1,19 @@
module Puppet
module Pops
require 'puppet/pops/loaders'
module Loader
require 'puppet/pops/loader/loader'
require 'puppet/pops/loader/base_loader'
require 'puppet/pops/loader/gem_support'
require 'puppet/pops/loader/module_loaders'
require 'puppet/pops/loader/dependency_loader'
require 'puppet/pops/loader/null_loader'
require 'puppet/pops/loader/static_loader'
require 'puppet/pops/loader/ruby_function_instantiator'
- require 'puppet/pops/loader/ruby_legacy_function_instantiator'
require 'puppet/pops/loader/loader_paths'
require 'puppet/pops/loader/simple_environment_loader'
end
end
end
diff --git a/lib/puppet/module.rb b/lib/puppet/module.rb
index 422d66d87..3a3435c35 100644
--- a/lib/puppet/module.rb
+++ b/lib/puppet/module.rb
@@ -1,338 +1,339 @@
require 'puppet/util/logging'
require 'semver'
require 'json'
# Support for modules
class Puppet::Module
class Error < Puppet::Error; end
class MissingModule < Error; end
class IncompatibleModule < Error; end
class UnsupportedPlatform < Error; end
class IncompatiblePlatform < Error; end
class MissingMetadata < Error; end
class InvalidName < Error; end
class InvalidFilePattern < Error; end
include Puppet::Util::Logging
FILETYPES = {
"manifests" => "manifests",
"files" => "files",
"templates" => "templates",
"plugins" => "lib",
"pluginfacts" => "facts.d",
}
# Find and return the module with the given +modname+ in the given
# +environment+ (defaulting to the current environment). Returns +nil+ if
# +modname+ is nil or no matching module is found.
def self.find(modname, environment = nil)
return nil unless modname
- env = Puppet.lookup(:environments).get(environment || Puppet[:environment])
+ # Unless a specific environment is given, use the current environment
+ env = environment ? Puppet.lookup(:environments).get(environment) : Puppet.lookup(:current_environment)
env.module(modname)
end
attr_reader :name, :environment, :path, :metadata
attr_writer :environment
attr_accessor :dependencies, :forge_name
attr_accessor :source, :author, :version, :license, :puppetversion, :summary, :description, :project_page
def initialize(name, path, environment)
@name = name
@path = path
@environment = environment
assert_validity
load_metadata if has_metadata?
validate_puppet_version
@absolute_path_to_manifests = Puppet::FileSystem::PathPattern.absolute(manifests)
end
def has_metadata?
return false unless metadata_file
return false unless Puppet::FileSystem.exist?(metadata_file)
begin
metadata = JSON.parse(File.read(metadata_file))
rescue JSON::JSONError => e
Puppet.debug("#{name} has an invalid and unparsable metadata.json file. The parse error: #{e.message}")
return false
end
return metadata.is_a?(Hash) && !metadata.keys.empty?
end
FILETYPES.each do |type, location|
# A boolean method to let external callers determine if
# we have files of a given type.
define_method(type +'?') do
type_subpath = subpath(location)
unless Puppet::FileSystem.exist?(type_subpath)
Puppet.debug("No #{type} found in subpath '#{type_subpath}' " +
"(file / directory does not exist)")
return false
end
return true
end
# A method for returning a given file of a given type.
# e.g., file = mod.manifest("my/manifest.pp")
#
# If the file name is nil, then the base directory for the
# file type is passed; this is used for fileserving.
define_method(type.sub(/s$/, '')) do |file|
# If 'file' is nil then they're asking for the base path.
# This is used for things like fileserving.
if file
full_path = File.join(subpath(location), file)
else
full_path = subpath(location)
end
return nil unless Puppet::FileSystem.exist?(full_path)
return full_path
end
# Return the base directory for the given type
define_method(type) do
subpath(location)
end
end
def license_file
return @license_file if defined?(@license_file)
return @license_file = nil unless path
@license_file = File.join(path, "License")
end
def load_metadata
@metadata = data = JSON.parse(File.read(metadata_file))
@forge_name = data['name'].gsub('-', '/') if data['name']
[:source, :author, :version, :license, :puppetversion, :dependencies].each do |attr|
unless value = data[attr.to_s]
unless attr == :puppetversion
raise MissingMetadata, "No #{attr} module metadata provided for #{self.name}"
end
end
# NOTICE: The fallback to `versionRequirement` is something we'd like to
# not have to support, but we have a reasonable number of releases that
# don't use `version_requirement`. When we can deprecate this, we should.
if attr == :dependencies
value.tap do |dependencies|
dependencies.each do |dep|
dep['version_requirement'] ||= dep['versionRequirement'] || '>= 0.0.0'
end
end
end
send(attr.to_s + "=", value)
end
end
# Return the list of manifests matching the given glob pattern,
# defaulting to 'init.{pp,rb}' for empty modules.
def match_manifests(rest)
if rest
wanted_manifests = wanted_manifests_from(rest)
searched_manifests = wanted_manifests.glob.reject { |f| FileTest.directory?(f) }
else
searched_manifests = []
end
# (#4220) Always ensure init.pp in case class is defined there.
init_manifests = [manifest("init.pp"), manifest("init.rb")].compact
init_manifests + searched_manifests
end
def all_manifests
return [] unless Puppet::FileSystem.exist?(manifests)
Dir.glob(File.join(manifests, '**', '*.{rb,pp}'))
end
def metadata_file
return @metadata_file if defined?(@metadata_file)
return @metadata_file = nil unless path
@metadata_file = File.join(path, "metadata.json")
end
def modulepath
File.dirname(path) if path
end
# Find all plugin directories. This is used by the Plugins fileserving mount.
def plugin_directory
subpath("lib")
end
def plugin_fact_directory
subpath("facts.d")
end
def has_external_facts?
File.directory?(plugin_fact_directory)
end
def supports(name, version = nil)
@supports ||= []
@supports << [name, version]
end
def to_s
result = "Module #{name}"
result += "(#{path})" if path
result
end
def dependencies_as_modules
dependent_modules = []
dependencies and dependencies.each do |dep|
author, dep_name = dep["name"].split('/')
found_module = environment.module(dep_name)
dependent_modules << found_module if found_module
end
dependent_modules
end
def required_by
environment.module_requirements[self.forge_name] || {}
end
def has_local_changes?
Puppet.deprecation_warning("This method is being removed.")
require 'puppet/module_tool/applications'
changes = Puppet::ModuleTool::Applications::Checksummer.run(path)
!changes.empty?
end
def local_changes
Puppet.deprecation_warning("This method is being removed.")
require 'puppet/module_tool/applications'
Puppet::ModuleTool::Applications::Checksummer.run(path)
end
# Identify and mark unmet dependencies. A dependency will be marked unmet
# for the following reasons:
#
# * not installed and is thus considered missing
# * installed and does not meet the version requirements for this module
# * installed and doesn't use semantic versioning
#
# Returns a list of hashes representing the details of an unmet dependency.
#
# Example:
#
# [
# {
# :reason => :missing,
# :name => 'puppetlabs-mysql',
# :version_constraint => 'v0.0.1',
# :mod_details => {
# :installed_version => '0.0.1'
# }
# :parent => {
# :name => 'puppetlabs-bacula',
# :version => 'v1.0.0'
# }
# }
# ]
#
def unmet_dependencies
unmet_dependencies = []
return unmet_dependencies unless dependencies
dependencies.each do |dependency|
forge_name = dependency['name']
version_string = dependency['version_requirement'] || '>= 0.0.0'
dep_mod = begin
environment.module_by_forge_name(forge_name)
rescue
nil
end
error_details = {
:name => forge_name,
:version_constraint => version_string.gsub(/^(?=\d)/, "v"),
:parent => {
:name => self.forge_name,
:version => self.version.gsub(/^(?=\d)/, "v")
},
:mod_details => {
:installed_version => dep_mod.nil? ? nil : dep_mod.version
}
}
unless dep_mod
error_details[:reason] = :missing
unmet_dependencies << error_details
next
end
if version_string
begin
required_version_semver_range = SemVer[version_string]
actual_version_semver = SemVer.new(dep_mod.version)
rescue ArgumentError
error_details[:reason] = :non_semantic_version
unmet_dependencies << error_details
next
end
unless required_version_semver_range.include? actual_version_semver
error_details[:reason] = :version_mismatch
unmet_dependencies << error_details
next
end
end
end
unmet_dependencies
end
def validate_puppet_version
return unless puppetversion and puppetversion != Puppet.version
raise IncompatibleModule, "Module #{self.name} is only compatible with Puppet version #{puppetversion}, not #{Puppet.version}"
end
private
def wanted_manifests_from(pattern)
begin
extended = File.extname(pattern).empty? ? "#{pattern}.{pp,rb}" : pattern
relative_pattern = Puppet::FileSystem::PathPattern.relative(extended)
rescue Puppet::FileSystem::PathPattern::InvalidPattern => error
raise Puppet::Module::InvalidFilePattern.new(
"The pattern \"#{pattern}\" to find manifests in the module \"#{name}\" " +
"is invalid and potentially unsafe.", error)
end
relative_pattern.prefix_with(@absolute_path_to_manifests)
end
def subpath(type)
File.join(path, type)
end
def assert_validity
raise InvalidName, "Invalid module name #{name}; module names must be alphanumeric (plus '-'), not '#{name}'" unless name =~ /^[-\w]+$/
end
def ==(other)
self.name == other.name &&
self.version == other.version &&
self.path == other.path &&
self.environment == other.environment
end
end
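With the change to `Puppet::Module.find` above, omitting the environment argument now uses the environment already in effect instead of re-resolving `Puppet[:environment]`. A hedged usage sketch (assumes a configured Puppet context; module and environment names are illustrative):

````
# Searches the current environment when no environment name is given.
mod = Puppet::Module.find('stdlib')
# Searches the named environment when one is given explicitly.
mod = Puppet::Module.find('stdlib', 'production')
# nil is returned when the module (or the module name) is missing.
mod.version if mod
````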
diff --git a/lib/puppet/module_tool.rb b/lib/puppet/module_tool.rb
index 8a34d26d1..8f462f6d8 100644
--- a/lib/puppet/module_tool.rb
+++ b/lib/puppet/module_tool.rb
@@ -1,192 +1,194 @@
# encoding: UTF-8
# Load standard libraries
require 'pathname'
require 'fileutils'
require 'puppet/util/colors'
module Puppet
module ModuleTool
require 'puppet/module_tool/tar'
extend Puppet::Util::Colors
# Directory and names that should not be checksummed.
ARTIFACTS = ['pkg', /^\./, /^~/, /^#/, 'coverage', 'checksums.json', 'REVISION']
FULL_MODULE_NAME_PATTERN = /\A([^-\/|.]+)[-|\/](.+)\z/
REPOSITORY_URL = Puppet.settings[:module_repository]
# Is this a directory that shouldn't be checksummed?
#
# TODO: Should this be part of Checksums?
# TODO: Rename this method to reflect its purpose?
# TODO: Shouldn't this be used when building packages too?
def self.artifact?(path)
case File.basename(path)
when *ARTIFACTS
true
else
false
end
end
# Return the +username+ and +modname+ for a given +full_module_name+, or raise an
# ArgumentError if the argument isn't parseable.
def self.username_and_modname_from(full_module_name)
if matcher = full_module_name.match(FULL_MODULE_NAME_PATTERN)
return matcher.captures
else
raise ArgumentError, "Not a valid full name: #{full_module_name}"
end
end
# Find the module root when given a path by checking each directory up from
# its current location until it finds one that contains a file called
# 'Modulefile'.
#
# @param path [Pathname, String] path to start from
# @return [Pathname, nil] the root path of the module directory or nil if
# we cannot find one
def self.find_module_root(path)
path = Pathname.new(path) if path.class == String
path.expand_path.ascend do |p|
return p if is_module_root?(p)
end
nil
end
# Analyse path to see if it is a module root directory by detecting a
# file named 'metadata.json' or 'Modulefile' in the directory.
#
# @param path [Pathname, String] path to analyse
# @return [Boolean] true if the path is a module root, false otherwise
def self.is_module_root?(path)
path = Pathname.new(path) if path.class == String
FileTest.file?(path + 'metadata.json') || FileTest.file?(path + 'Modulefile')
end
# Builds a formatted tree from a list of node hashes containing +:text+
# and +:dependencies+ keys.
def self.format_tree(nodes, level = 0)
str = ''
nodes.each_with_index do |node, i|
last_node = nodes.length - 1 == i
deps = node[:dependencies] || []
str << (indent = " " * level)
str << (last_node ? "└" : "├")
str << "─"
str << (deps.empty? ? "─" : "┬")
str << " #{node[:text]}\n"
branch = format_tree(deps, level + 1)
branch.gsub!(/^#{indent} /, indent + '│') unless last_node
str << branch
end
return str
end
def self.build_tree(mods, dir)
mods.each do |mod|
version_string = mod[:version].to_s.sub(/^(?!v)/, 'v')
if mod[:action] == :upgrade
previous_version = mod[:previous_version].to_s.sub(/^(?!v)/, 'v')
version_string = "#{previous_version} -> #{version_string}"
end
mod[:text] = "#{mod[:name]} (#{colorize(:cyan, version_string)})"
mod[:text] += " [#{mod[:path]}]" unless mod[:path].to_s == dir.to_s
deps = (mod[:dependencies] || [])
deps.sort! { |a, b| a[:name] <=> b[:name] }
build_tree(deps, dir)
end
end
# @param options [Hash<Symbol,String>] This hash will contain any
# command-line arguments that are not Settings, as those will have already
# been extracted by the underlying application code.
#
# @note Unfortunately the whole point of this method is the side effect of
# modifying the options parameter. This same hash is referenced by both
# when_invoked and when_rendering. For this reason, we are not returning
# a duplicate.
# @todo Validate the above note...
#
# An :environment_instance and a :target_dir are added/updated in the
# options parameter.
#
# @api private
def self.set_option_defaults(options)
current_environment = environment_from_options(options)
modulepath = [options[:target_dir]] + current_environment.full_modulepath
face_environment = current_environment.override_with(:modulepath => modulepath.compact)
options[:environment_instance] = face_environment
# Note: environment will have expanded the path
options[:target_dir] = face_environment.full_modulepath.first
end
# Given a hash of options, we should discover or create a
# {Puppet::Node::Environment} instance that reflects the provided options.
#
# Generally speaking, the `:modulepath` parameter should supersede all
# others, the `:environment` parameter should follow after that, and we
# should default to Puppet's current environment.
#
# @param options [{Symbol => Object}] the options to derive environment from
# @return [Puppet::Node::Environment] the environment described by the options
def self.environment_from_options(options)
if options[:modulepath]
path = options[:modulepath].split(File::PATH_SEPARATOR)
Puppet::Node::Environment.create(:anonymous, path, '')
elsif options[:environment].is_a?(Puppet::Node::Environment)
options[:environment]
elsif options[:environment]
+ # This use of looking up an environment is correct since it honours
+ # a request to get a particular environment via its environment name.
Puppet.lookup(:environments).get(options[:environment])
else
Puppet.lookup(:current_environment)
end
end
# Handles parsing of module dependency expressions into proper
# {Semantic::VersionRange}s, including reasonable error handling.
#
# @param where [String] a description of the thing we're parsing the
# dependency expression for
# @param dep [Hash] the dependency description to parse
# @return [Array(String, Semantic::VersionRange, String)] a tuple of the
# dependent module's name, the version range dependency, and the
# unparsed range expression.
def self.parse_module_dependency(where, dep)
dep_name = dep['name'].tr('/', '-')
range = dep['version_requirement'] || dep['versionRequirement'] || '>= 0.0.0'
begin
parsed_range = Semantic::VersionRange.parse(range)
rescue ArgumentError => e
Puppet.debug "Error in #{where} parsing dependency #{dep_name} (#{e.message}); using empty range."
parsed_range = Semantic::VersionRange::EMPTY_RANGE
end
[ dep_name, parsed_range, range ]
end
end
end
# Load remaining libraries
require 'puppet/module_tool/errors'
require 'puppet/module_tool/applications'
require 'puppet/module_tool/checksums'
require 'puppet/module_tool/contents_description'
require 'puppet/module_tool/dependency'
require 'puppet/module_tool/metadata'
require 'puppet/module_tool/modulefile'
require 'puppet/forge/cache'
require 'puppet/forge'
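`format_tree` renders a nested list of node hashes (each carrying `:text` and an optional `:dependencies` array) as a box-drawing tree, which the module face actions use to print dependency trees. A small illustrative input and its expected rendering (indentation approximate):

````
# Illustrative node hashes; each node carries :text and an optional
# :dependencies array of further nodes.
nodes = [
  { :text => 'puppetlabs-apache (v1.0.0)',
    :dependencies => [
      { :text => 'puppetlabs-stdlib (v4.1.0)' }
    ] }
]

puts Puppet::ModuleTool.format_tree(nodes)
# └─┬ puppetlabs-apache (v1.0.0)
#   └── puppetlabs-stdlib (v4.1.0)
````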
diff --git a/lib/puppet/module_tool/applications/application.rb b/lib/puppet/module_tool/applications/application.rb
index d175f563c..c94990b56 100644
--- a/lib/puppet/module_tool/applications/application.rb
+++ b/lib/puppet/module_tool/applications/application.rb
@@ -1,99 +1,98 @@
require 'net/http'
require 'semver'
require 'json'
require 'puppet/util/colors'
module Puppet::ModuleTool
module Applications
class Application
include Puppet::Util::Colors
def self.run(*args)
new(*args).run
end
attr_accessor :options
def initialize(options = {})
@options = options
end
def run
raise NotImplementedError, "Should be implemented in child classes."
end
def discuss(response, success, failure)
case response
when Net::HTTPOK, Net::HTTPCreated
Puppet.notice success
else
errors = JSON.parse(response.body)['error'] rescue "HTTP #{response.code}, #{response.body}"
Puppet.warning "#{failure} (#{errors})"
end
end
def metadata(require_metadata = false)
return @metadata if @metadata
@metadata = Puppet::ModuleTool::Metadata.new
unless @path
raise ArgumentError, "Could not determine module path"
end
+ if require_metadata && !Puppet::ModuleTool.is_module_root?(@path)
+ raise ArgumentError, "Unable to find metadata.json or Modulefile in module root at #{@path} See http://links.puppetlabs.com/modulefile for required file format."
+ end
+
modulefile_path = File.join(@path, 'Modulefile')
metadata_path = File.join(@path, 'metadata.json')
if File.file?(metadata_path)
File.open(metadata_path) do |f|
begin
@metadata.update(JSON.load(f))
rescue JSON::ParserError => ex
raise ArgumentError, "Could not parse JSON #{metadata_path}", ex.backtrace
end
end
end
if File.file?(modulefile_path)
if File.file?(metadata_path)
Puppet.warning "Modulefile is deprecated. Merging your Modulefile and metadata.json."
else
Puppet.warning "Modulefile is deprecated. Building metadata.json from Modulefile."
end
Puppet::ModuleTool::ModulefileReader.evaluate(@metadata, modulefile_path)
end
- has_metadata = File.file?(modulefile_path) || File.file?(metadata_path)
- if !has_metadata && require_metadata
- raise ArgumentError, "No metadata found for module #{@path}"
- end
-
return @metadata
end
def load_metadata!
@metadata = nil
metadata(true)
end
def parse_filename(filename)
if match = /^((.*?)-(.*?))-(\d+\.\d+\.\d+.*?)$/.match(File.basename(filename,'.tar.gz'))
module_name, author, shortname, version = match.captures
else
raise ArgumentError, "Could not parse filename to obtain the username, module name and version. (#{@release_name})"
end
unless SemVer.valid?(version)
raise ArgumentError, "Invalid version format: #{version} (Semantic Versions are acceptable: http://semver.org)"
end
return {
:module_name => module_name,
:author => author,
:dir_name => shortname,
:version => version
}
end
end
end
end
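`parse_filename` splits a release tarball name of the form `<author>-<module>-<semver>.tar.gz` into its parts and rejects versions that do not parse as SemVer. A hedged example of the expected result (hypothetical `app` application instance, illustrative filename):

````
app.parse_filename('puppetlabs-apache-0.1.0.tar.gz')
# => { :module_name => 'puppetlabs-apache',
#      :author      => 'puppetlabs',
#      :dir_name    => 'apache',
#      :version     => '0.1.0' }
````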
diff --git a/lib/puppet/module_tool/applications/builder.rb b/lib/puppet/module_tool/applications/builder.rb
index 6ca9cab65..edfd2c9ef 100644
--- a/lib/puppet/module_tool/applications/builder.rb
+++ b/lib/puppet/module_tool/applications/builder.rb
@@ -1,93 +1,148 @@
require 'fileutils'
require 'json'
+require 'puppet/file_system'
+require 'pathspec'
module Puppet::ModuleTool
module Applications
class Builder < Application
def initialize(path, options = {})
@path = File.expand_path(path)
@pkg_path = File.join(@path, 'pkg')
super(options)
end
def run
load_metadata!
create_directory
copy_contents
write_json
Puppet.notice "Building #{@path} for release"
pack
relative = Pathname.new(archive_file).relative_path_from(Pathname.new(File.expand_path(Dir.pwd)))
# Return the Pathname object representing the path to the release
# archive just created. This return value is used by the module_tool
# face build action, and displayed on the console using the to_s
# method.
#
# Example return value:
#
# <Pathname:puppetlabs-apache/pkg/puppetlabs-apache-0.0.1.tar.gz>
#
relative
end
private
def archive_file
File.join(@pkg_path, "#{metadata.release_name}.tar.gz")
end
def pack
FileUtils.rm archive_file rescue nil
tar = Puppet::ModuleTool::Tar.instance
Dir.chdir(@pkg_path) do
tar.pack(metadata.release_name, archive_file)
end
end
def create_directory
FileUtils.mkdir(@pkg_path) rescue nil
if File.directory?(build_path)
FileUtils.rm_rf(build_path, :secure => true)
end
FileUtils.mkdir(build_path)
end
+ def ignored_files
+ if @ignored_files
+ return @ignored_files
+ else
+ pmtignore = File.join(@path, '.pmtignore')
+ gitignore = File.join(@path, '.gitignore')
+
+ if File.file? pmtignore
+ @ignored_files = PathSpec.new File.read(pmtignore)
+ elsif File.file? gitignore
+ @ignored_files = PathSpec.new File.read(gitignore)
+ else
+ @ignored_files = PathSpec.new
+ end
+ end
+ end
+
def copy_contents
- Dir[File.join(@path, '*')].each do |path|
- case File.basename(path)
- when *Puppet::ModuleTool::ARTIFACTS
+ symlinks = []
+ Find.find(File.join(@path)) do |path|
+ # because Find.find finds the path itself
+ if path == @path
next
+ end
+
+ # Needed because pathspec looks for a trailing slash in the path to
+ # determine if a path is a directory
+ path = path.to_s + '/' if File.directory? path
+
+ # if it matches, then prune it with fire
+ unless ignored_files.match_paths([path], @path).empty?
+ Find.prune
+ end
+
+ # don't copy all the Puppet ARTIFACTS
+ rel = Pathname.new(path).relative_path_from(Pathname.new(@path))
+ case rel.to_s
+ when *Puppet::ModuleTool::ARTIFACTS
+ Find.prune
+ end
+
+ # make dir tree, copy files, and add symlinks to the symlinks list
+ dest = "#{build_path}/#{rel.to_s}"
+ if File.directory? path
+ FileUtils.mkdir dest, :mode => File.stat(path).mode
+ elsif Puppet::FileSystem.symlink? path
+ symlinks << path
else
- FileUtils.cp_r path, build_path, :preserve => true
+ FileUtils.cp path, dest, :preserve => true
end
end
+
+ # send a message about each symlink and raise an error if they exist
+ unless symlinks.empty?
+ symlinks.each do |s|
+ s = Pathname.new s
+ mpath = Pathname.new @path
+ Puppet.warning "Symlinks in modules are unsupported. Please investigate symlink #{s.relative_path_from mpath} -> #{s.realpath.relative_path_from mpath}."
+ end
+
+ raise Puppet::ModuleTool::Errors::ModuleToolError, "Found symlinks. Symlinks in modules are not allowed, please remove them."
+ end
end
def write_json
metadata_path = File.join(build_path, 'metadata.json')
if metadata.to_hash.include? 'checksums'
Puppet.warning "A 'checksums' field was found in metadata.json. This field will be ignored and can safely be removed."
end
# TODO: This may unavoidably change the order in which the metadata.json
# file is packaged from what was written by the user. This is
# regrettable, but required for now.
File.open(metadata_path, 'w') do |f|
f.write(metadata.to_json)
end
File.open(File.join(build_path, 'checksums.json'), 'w') do |f|
f.write(PSON.pretty_generate(Checksums.new(build_path)))
end
end
def build_path
@build_path ||= File.join(@pkg_path, metadata.release_name)
end
end
end
end
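The rewritten `copy_contents` walks the whole module tree and prunes anything matched by a `.pmtignore` file (falling back to `.gitignore`) via the pathspec gem, in addition to the usual `ARTIFACTS`; symlinks are reported and abort the build. A hedged sketch of the matching call, with inline ignore rules as illustrative values (expected results noted in comments):

````
require 'pathspec'

# Build a spec from gitignore-style rules (inline here; the builder reads
# them from .pmtignore or .gitignore in the module root).
spec = PathSpec.new "spec/\n*.swp\n"

# match_paths returns the subset of paths that the rules match, so a
# non-empty result means "prune this path from the package".
spec.match_paths(['/module/spec/'], '/module').empty?             # expected: false (pruned)
spec.match_paths(['/module/manifests/init.pp'], '/module').empty? # expected: true  (copied)
````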
diff --git a/lib/puppet/module_tool/applications/unpacker.rb b/lib/puppet/module_tool/applications/unpacker.rb
index 6673c9b12..8ffef3d64 100644
--- a/lib/puppet/module_tool/applications/unpacker.rb
+++ b/lib/puppet/module_tool/applications/unpacker.rb
@@ -1,86 +1,100 @@
require 'pathname'
require 'tmpdir'
require 'json'
+require 'puppet/file_system'
module Puppet::ModuleTool
module Applications
class Unpacker < Application
def self.unpack(filename, target)
app = self.new(filename, :target_dir => target)
app.unpack
+ app.sanity_check
app.move_into(target)
end
def self.harmonize_ownership(source, target)
unless Puppet.features.microsoft_windows?
source = Pathname.new(source) unless source.respond_to?(:stat)
target = Pathname.new(target) unless target.respond_to?(:stat)
FileUtils.chown_R(source.stat.uid, source.stat.gid, target)
end
end
def initialize(filename, options = {})
@filename = Pathname.new(filename)
super(options)
@module_path = Pathname(options[:target_dir])
end
def run
unpack
+ sanity_check
module_dir = @module_path + module_name
move_into(module_dir)
# Return the Pathname object representing the directory where the
# module release archive was unpacked to.
return module_dir
end
+ # @api private
+ # Error on symlinks and other junk
+ def sanity_check
+ symlinks = Dir.glob("#{tmpdir}/**/*", File::FNM_DOTMATCH).map { |f| Pathname.new(f) }.select {|p| Puppet::FileSystem.symlink? p}
+ tmpdirpath = Pathname.new tmpdir
+
+ symlinks.each do |s|
+ Puppet.warning "Symlinks in modules are unsupported. Please investigate symlink #{s.relative_path_from tmpdirpath}->#{s.realpath.relative_path_from tmpdirpath}."
+ end
+ end
+
# @api private
def unpack
begin
Puppet::ModuleTool::Tar.instance.unpack(@filename.to_s, tmpdir, [@module_path.stat.uid, @module_path.stat.gid].join(':'))
rescue Puppet::ExecutionFailure => e
raise RuntimeError, "Could not extract contents of module archive: #{e.message}"
end
end
# @api private
def root_dir
return @root_dir if @root_dir
# Grab the first directory containing a metadata.json file
metadata_file = Dir["#{tmpdir}/**/metadata.json"].sort_by(&:length)[0]
if metadata_file
@root_dir = Pathname.new(metadata_file).dirname
else
raise "No valid metadata.json found!"
end
end
# @api private
def module_name
metadata = JSON.parse((root_dir + 'metadata.json').read)
name = metadata['name'][/-(.*)/, 1]
end
# @api private
def move_into(dir)
dir = Pathname.new(dir)
dir.rmtree if dir.exist?
FileUtils.mv(root_dir, dir)
ensure
FileUtils.rmtree(tmpdir)
end
# Obtain a suitable temporary path for unpacking tarballs
#
# @api private
# @return [String] path to temporary unpacking location
def tmpdir
@dir ||= Dir.mktmpdir('tmp-unpacker', Puppet::Forge::Cache.base_path)
end
end
end
end
diff --git a/lib/puppet/module_tool/applications/upgrader.rb b/lib/puppet/module_tool/applications/upgrader.rb
index 9ea2e1e8d..d901f86a8 100644
--- a/lib/puppet/module_tool/applications/upgrader.rb
+++ b/lib/puppet/module_tool/applications/upgrader.rb
@@ -1,270 +1,279 @@
require 'pathname'
require 'puppet/forge'
require 'puppet/module_tool'
require 'puppet/module_tool/shared_behaviors'
require 'puppet/module_tool/install_directory'
require 'puppet/module_tool/installed_modules'
module Puppet::ModuleTool
module Applications
class Upgrader < Application
include Puppet::ModuleTool::Errors
def initialize(name, options)
super(options)
@action = :upgrade
@environment = options[:environment_instance]
@name = name
@ignore_changes = forced? || options[:ignore_changes]
@ignore_dependencies = forced? || options[:ignore_dependencies]
Semantic::Dependency.add_source(installed_modules_source)
Semantic::Dependency.add_source(module_repository)
end
def run
name = @name.tr('/', '-')
version = options[:version] || '>= 0.0.0'
results = {
:action => :upgrade,
:requested_version => options[:version] || :latest,
}
begin
all_modules = @environment.modules_by_path.values.flatten
matching_modules = all_modules.select do |x|
x.forge_name && x.forge_name.tr('/', '-') == name
end
if matching_modules.empty?
raise NotInstalledError, results.merge(:module_name => name)
elsif matching_modules.length > 1
raise MultipleInstalledError, results.merge(:module_name => name, :installed_modules => matching_modules)
end
- mod = installed_modules[name]
+ installed_release = installed_modules[name]
# `priority` is an attribute of a `Semantic::Dependency::Source`,
# which is delegated through `ModuleRelease` instances for the sake of
# comparison (sorting). By default, the `InstalledModules` source has
# a priority of 10 (making it the most preferable source, so that
# already installed versions of modules are selected in preference to
# modules from e.g. the Forge). Since we are specifically looking to
# upgrade this module, we don't want the installed version of this
# module to be chosen in preference to those with higher versions.
#
# This implementation is suboptimal, and since we can expect this sort
# of behavior to be reasonably common in Semantic, we should probably
# see about implementing a `ModuleRelease#override_priority` method
# (or something similar).
- def mod.priority
+ def installed_release.priority
0
end
- mod = mod.mod
+ mod = installed_release.mod
results[:installed_version] = Semantic::Version.parse(mod.version)
dir = Pathname.new(mod.modulepath)
vstring = mod.version ? "v#{mod.version}" : '???'
Puppet.notice "Found '#{name}' (#{colorize(:cyan, vstring)}) in #{dir} ..."
unless @ignore_changes
changes = Checksummer.run(mod.path) rescue []
if mod.has_metadata? && !changes.empty?
raise LocalChangesError,
:action => :upgrade,
:module_name => name,
:requested_version => results[:requested_version],
:installed_version => mod.version
end
end
Puppet::Forge::Cache.clean
+ # Ensure that there is at least one candidate release available
+ # for the target package.
+ available_versions = module_repository.fetch(name)
+ if available_versions.empty?
+ raise NoCandidateReleasesError, results.merge(:module_name => name, :source => module_repository.host)
+ elsif results[:requested_version] != :latest
+ requested = Semantic::VersionRange.parse(results[:requested_version])
+ unless available_versions.any? {|m| requested.include? m.version}
+ raise NoCandidateReleasesError, results.merge(:module_name => name, :source => module_repository.host)
+ end
+ end
+
Puppet.notice "Downloading from #{module_repository.host} ..."
if @ignore_dependencies
graph = build_single_module_graph(name, version)
else
graph = build_dependency_graph(name, version)
end
unless forced?
add_module_name_constraints_to_graph(graph)
end
installed_modules.each do |mod, release|
mod = mod.tr('/', '-')
next if mod == name
version = release.version
unless forced?
# Since upgrading already installed modules can be troublesome,
# we'll place constraints on the graph for each installed
# module, locking it to upgrades within the same major version.
">=#{version} #{version.major}.x".tap do |range|
graph.add_constraint('installed', mod, range) do |node|
Semantic::VersionRange.parse(range).include? node.version
end
end
release.mod.dependencies.each do |dep|
dep_name = dep['name'].tr('/', '-')
dep['version_requirement'].tap do |range|
graph.add_constraint("#{mod} constraint", dep_name, range) do |node|
Semantic::VersionRange.parse(range).include? node.version
end
end
end
end
end
- # Ensure that there is at least one candidate release available
- # for the target package.
- if graph.dependencies[name].empty? || graph.dependencies[name] == SortedSet.new([ installed_modules[name] ])
- if results[:requested_version] == :latest || !Semantic::VersionRange.parse(results[:requested_version]).include?(results[:installed_version])
- raise NoCandidateReleasesError, results.merge(:module_name => name, :source => module_repository.host)
- end
- end
-
begin
Puppet.info "Resolving dependencies ..."
releases = Semantic::Dependency.resolve(graph)
rescue Semantic::Dependency::UnsatisfiableGraph
raise NoVersionsSatisfyError, results.merge(:requested_name => name)
end
releases.each do |rel|
if mod = installed_modules_source.by_name[rel.name.split('-').last]
next if mod.has_metadata? && mod.forge_name.tr('/', '-') == rel.name
if rel.name != name
dependency = {
:name => rel.name,
:version => rel.version
}
end
raise InstallConflictError,
:requested_module => name,
:requested_version => options[:version] || 'latest',
:dependency => dependency,
:directory => mod.path,
:metadata => mod.metadata
end
end
child = releases.find { |x| x.name == name }
unless forced?
- if child.version <= results[:installed_version]
+ if child.version == results[:installed_version]
versions = graph.dependencies[name].map { |r| r.version }
newer_versions = versions.select { |v| v > results[:installed_version] }
raise VersionAlreadyInstalledError,
:module_name => name,
:requested_version => results[:requested_version],
:installed_version => results[:installed_version],
:newer_versions => newer_versions,
:possible_culprits => installed_modules_source.fetched.reject { |x| x == name }
+ elsif child.version < results[:installed_version]
+ raise DowngradingUnsupportedError,
+ :module_name => name,
+ :requested_version => results[:requested_version],
+ :installed_version => results[:installed_version]
end
end
Puppet.info "Preparing to upgrade ..."
releases.each { |release| release.prepare }
Puppet.notice 'Upgrading -- do not interrupt ...'
releases.each do |release|
if installed = installed_modules[release.name]
release.install(Pathname.new(installed.mod.modulepath))
else
release.install(dir)
end
end
results[:result] = :success
results[:base_dir] = releases.first.install_dir
results[:affected_modules] = releases
results[:graph] = [ build_install_graph(releases.first, releases) ]
rescue VersionAlreadyInstalledError => e
results[:result] = (e.newer_versions.empty? ? :noop : :failure)
results[:error] = { :oneline => e.message, :multiline => e.multiline }
rescue => e
results[:error] = {
:oneline => e.message,
:multiline => e.respond_to?(:multiline) ? e.multiline : [e.to_s, e.backtrace].join("\n")
}
ensure
results[:result] ||= :failure
end
results
end
private
def module_repository
@repo ||= Puppet::Forge.new
end
def installed_modules_source
@installed ||= Puppet::ModuleTool::InstalledModules.new(@environment)
end
def installed_modules
installed_modules_source.modules
end
def build_single_module_graph(name, version)
range = Semantic::VersionRange.parse(version)
graph = Semantic::Dependency::Graph.new(name => range)
releases = Semantic::Dependency.fetch_releases(name)
releases.each { |release| release.dependencies.clear }
graph << releases
end
def build_dependency_graph(name, version)
Semantic::Dependency.query(name => version)
end
def build_install_graph(release, installed, graphed = [])
previous = installed_modules[release.name]
previous = previous.version if previous
action = :upgrade
unless previous && previous != release.version
action = :install
end
graphed << release
dependencies = release.dependencies.values.map do |deps|
dep = (deps & installed).first
if dep == installed_modules[dep.name]
next
end
if dep && !graphed.include?(dep)
build_install_graph(dep, installed, graphed)
end
end.compact
return {
:release => release,
:name => release.name,
:path => release.install_dir,
:dependencies => dependencies.compact,
:version => release.version,
:previous_version => previous,
:action => action,
}
end
include Puppet::ModuleTool::Shared
end
end
end
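For every installed module other than the one being upgraded, the code above pins the dependency graph to the installed major version by adding a range such as `">=1.2.3 1.x"`. A hedged illustration of what that range accepts, using the same Semantic library this code already relies on:

````
range = Semantic::VersionRange.parse('>=1.2.3 1.x')

range.include?(Semantic::Version.parse('1.4.0'))  # => true  (newer, same major)
range.include?(Semantic::Version.parse('2.0.0'))  # => false (next major excluded)
range.include?(Semantic::Version.parse('1.2.2'))  # => false (older than installed)
````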
diff --git a/lib/puppet/module_tool/dependency.rb b/lib/puppet/module_tool/dependency.rb
index c213e55ce..2f0995e58 100644
--- a/lib/puppet/module_tool/dependency.rb
+++ b/lib/puppet/module_tool/dependency.rb
@@ -1,30 +1,42 @@
require 'puppet/module_tool'
require 'puppet/network/format_support'
module Puppet::ModuleTool
class Dependency
include Puppet::Network::FormatSupport
alias :to_json :to_pson
attr_reader :full_module_name, :username, :name, :version_requirement, :repository
# Instantiates a new module dependency with a +full_module_name+ (e.g.
# "myuser-mymodule"), and optional +version_requirement+ (e.g. "0.0.1") and
# optional repository (a URL string).
def initialize(full_module_name, version_requirement = nil, repository = nil)
@full_module_name = full_module_name
# TODO: add error checking, the next line raises ArgumentError when +full_module_name+ is invalid
@username, @name = Puppet::ModuleTool.username_and_modname_from(full_module_name)
@version_requirement = version_requirement
@repository = repository ? Puppet::Forge::Repository.new(repository) : nil
end
+ # We override Object's ==, eql?, and hash so we can more easily find identical
+ # dependencies.
+ def ==(o)
+ self.hash == o.hash
+ end
+
+ alias :eql? :==
+
+ def hash
+ [@full_module_name, @version_requirement, @repository].hash
+ end
+
def to_data_hash
result = { :name => @full_module_name }
result[:version_requirement] = @version_requirement if @version_requirement && ! @version_requirement.nil?
result[:repository] = @repository.to_s if @repository && ! @repository.nil?
result
end
end
end
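Because `==`, `eql?`, and `hash` are now derived from the module name, version requirement, and repository, two identical dependency declarations compare as equal and collapse to a single entry when kept in a Set (as the Metadata class below does). A hedged sketch, assuming the Puppet libraries are loaded:

````
require 'set'

a = Puppet::ModuleTool::Dependency.new('puppetlabs-stdlib', '>= 4.1.0')
b = Puppet::ModuleTool::Dependency.new('puppetlabs-stdlib', '>= 4.1.0')

a == b                # => true, same name/requirement/repository
Set.new([a, b]).size  # => 1, duplicates collapse
````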
diff --git a/lib/puppet/module_tool/errors/upgrader.rb b/lib/puppet/module_tool/errors/upgrader.rb
index 28c0304ed..0247f79ae 100644
--- a/lib/puppet/module_tool/errors/upgrader.rb
+++ b/lib/puppet/module_tool/errors/upgrader.rb
@@ -1,43 +1,63 @@
module Puppet::ModuleTool::Errors
class UpgradeError < ModuleToolError
def initialize(msg)
@action = :upgrade
super
end
end
class VersionAlreadyInstalledError < UpgradeError
attr_reader :newer_versions
def initialize(options)
@module_name = options[:module_name]
@requested_version = options[:requested_version]
@installed_version = options[:installed_version]
@dependency_name = options[:dependency_name]
@newer_versions = options[:newer_versions]
@possible_culprits = options[:possible_culprits]
super "Could not upgrade '#{@module_name}'; more recent versions not found"
end
def multiline
message = []
message << "Could not upgrade module '#{@module_name}' (#{vstring})"
if @newer_versions.empty?
message << " The installed version is already the latest version matching #{vstring}"
else
message << " There are #{@newer_versions.length} newer versions"
message << " No combination of dependency upgrades would satisfy all dependencies"
unless @possible_culprits.empty?
message << " Dependencies will not be automatically upgraded across major versions"
message << " Upgrading one or more of these modules may permit the upgrade to succeed:"
@possible_culprits.each do |name|
message << " - #{name}"
end
end
end
message << " Use `puppet module upgrade --force` to upgrade only this module"
message.join("\n")
end
end
+
+ class DowngradingUnsupportedError < UpgradeError
+ def initialize(options)
+ @module_name = options[:module_name]
+ @requested_version = options[:requested_version]
+ @installed_version = options[:installed_version]
+ @conditions = options[:conditions]
+ @action = options[:action]
+
+ super "Could not #{@action} '#{@module_name}' (#{vstring}); downgrades are not allowed"
+ end
+
+ def multiline
+ message = []
+ message << "Could not #{@action} module '#{@module_name}' (#{vstring})"
+ message << " Downgrading is not allowed."
+
+ message.join("\n")
+ end
+ end
end
diff --git a/lib/puppet/module_tool/installed_modules.rb b/lib/puppet/module_tool/installed_modules.rb
index 0e82b6bd0..de9421132 100644
--- a/lib/puppet/module_tool/installed_modules.rb
+++ b/lib/puppet/module_tool/installed_modules.rb
@@ -1,93 +1,98 @@
require 'pathname'
require 'puppet/forge'
require 'puppet/module_tool'
module Puppet::ModuleTool
class InstalledModules < Semantic::Dependency::Source
attr_reader :modules, :by_name
def priority
10
end
def initialize(env)
@env = env
modules = env.modules_by_path
@fetched = []
@modules = {}
@by_name = {}
env.modulepath.each do |path|
modules[path].each do |mod|
@by_name[mod.name] = mod
next unless mod.has_metadata?
release = ModuleRelease.new(self, mod)
@modules[release.name] ||= release
end
end
@modules.freeze
end
# Fetches {ModuleRelease} entries for each release of the named module.
#
# @param name [String] the module name to look up
# @return [Array<Semantic::Dependency::ModuleRelease>] a list of releases for
# the given name
# @see Semantic::Dependency::Source#fetch
def fetch(name)
name = name.tr('/', '-')
if @modules.key? name
@fetched << name
[ @modules[name] ]
else
[ ]
end
end
def fetched
@fetched
end
class ModuleRelease < Semantic::Dependency::ModuleRelease
attr_reader :mod, :metadata
def initialize(source, mod)
@mod = mod
@metadata = mod.metadata
name = mod.forge_name.tr('/', '-')
- version = Semantic::Version.parse(mod.version)
+ begin
+ version = Semantic::Version.parse(mod.version)
+ rescue Semantic::Version::ValidationFailure => e
+ Puppet.warning "#{mod.name} (#{mod.path}) has an invalid version number (#{mod.version}). The version has been set to 0.0.0. If you are the maintainer for this module, please update the metadata.json with a valid Semantic Version (http://semver.org)."
+ version = Semantic::Version.parse("0.0.0")
+ end
release = "#{name}@#{version}"
super(source, name, version, {})
if mod.dependencies
mod.dependencies.each do |dep|
results = Puppet::ModuleTool.parse_module_dependency(release, dep)
dep_name, parsed_range, range = results
dep.tap do |dep|
add_constraint('initialize', dep_name, range.to_s) do |node|
parsed_range === node.version
end
end
end
end
end
def install_dir
Pathname.new(@mod.path).dirname
end
def install(dir)
# If we're already installed, there's no need for us to faff about.
end
def prepare
# We're already installed; what preparation remains?
end
end
end
end
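`ModuleRelease` now tolerates an installed module whose version is not valid SemVer: it warns and treats the module as version 0.0.0 instead of aborting. A hedged sketch of the rescue path (illustrative version string):

````
begin
  version = Semantic::Version.parse('1.0')    # missing the patch component
rescue Semantic::Version::ValidationFailure
  version = Semantic::Version.parse('0.0.0')  # degrade gracefully, as above
end

version.to_s  # => "0.0.0"
````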
diff --git a/lib/puppet/module_tool/metadata.rb b/lib/puppet/module_tool/metadata.rb
index e625c3fdb..0237555ad 100644
--- a/lib/puppet/module_tool/metadata.rb
+++ b/lib/puppet/module_tool/metadata.rb
@@ -1,159 +1,209 @@
require 'puppet/util/methodhelper'
require 'puppet/module_tool'
require 'puppet/network/format_support'
require 'uri'
require 'json'
+require 'set'
module Puppet::ModuleTool
# This class provides a data structure representing a module's metadata.
# @api private
class Metadata
include Puppet::Network::FormatSupport
attr_accessor :module_name
DEFAULTS = {
'name' => nil,
'version' => nil,
'author' => nil,
'summary' => nil,
'license' => 'Apache 2.0',
'source' => '',
'project_page' => nil,
'issues_url' => nil,
- 'dependencies' => [].freeze,
+ 'dependencies' => Set.new.freeze,
}
def initialize
@data = DEFAULTS.dup
@data['dependencies'] = @data['dependencies'].dup
end
# Returns a filesystem-friendly version of this module name.
def dashed_name
@data['name'].tr('/', '-') if @data['name']
end
# Returns a string that uniquely represents this version of this module.
def release_name
return nil unless @data['name'] && @data['version']
[ dashed_name, @data['version'] ].join('-')
end
alias :name :module_name
alias :full_module_name :dashed_name
# Merges the current set of metadata with another metadata hash. This
# method also handles the validation of module names and versions, in an
# effort to be proactive about module publishing constraints.
def update(data)
process_name(data) if data['name']
process_version(data) if data['version']
process_source(data) if data['source']
+ merge_dependencies(data) if data['dependencies']
@data.merge!(data)
return self
end
+ # Validates the name and version_requirement for a dependency, then creates
+ # the Dependency and adds it.
+ # Returns the Dependency that was added.
+ def add_dependency(name, version_requirement=nil, repository=nil)
+ validate_name(name)
+ validate_version_range(version_requirement) if version_requirement
+
+ if dup = @data['dependencies'].find { |d| d.full_module_name == name && d.version_requirement != version_requirement }
+ raise ArgumentError, "Dependency conflict for #{full_module_name}: Dependency #{name} was given conflicting version requirements #{version_requirement} and #{dup.version_requirement}. Verify that there are no duplicates in the metadata.json or the Modulefile."
+ end
+
+ dep = Dependency.new(name, version_requirement, repository)
+ @data['dependencies'].add(dep)
+
+ dep
+ end
+
+ # Provides an accessor for the now defunct 'description' property. This
+ # addresses a regression in Puppet 3.6.x where previously valid templates
+ # referring to the 'description' property were broken.
+ # @deprecated
+ def description
+ @data['description']
+ end
+
+ def dependencies
+ @data['dependencies'].to_a
+ end
+
# Returns a hash of the module's metadata. Used by Puppet's automated
# serialization routines.
#
# @see Puppet::Network::FormatSupport#to_data_hash
def to_hash
@data
end
alias :to_data_hash :to_hash
def to_json
+ data = @data.dup.merge('dependencies' => dependencies)
+
# This is used to simulate an ordered hash. In particular, some keys
# are promoted to the top of the serialized hash (while others are
# demoted) for human-friendliness.
#
# This particularly works around the lack of ordered hashes in 1.8.7.
promoted_keys = %w[ name version author summary license source ]
demoted_keys = %w[ dependencies ]
- keys = @data.keys
+ keys = data.keys
keys -= promoted_keys
keys -= demoted_keys
contents = (promoted_keys + keys + demoted_keys).map do |k|
- value = (JSON.pretty_generate(@data[k]) rescue @data[k].to_json)
+ value = (JSON.pretty_generate(data[k]) rescue data[k].to_json)
"#{k.to_json}: #{value}"
end
"{\n" + contents.join(",\n").gsub(/^/, ' ') + "\n}\n"
end
# Expose any metadata keys as callable reader methods.
def method_missing(name, *args)
return @data[name.to_s] if @data.key? name.to_s
super
end
private
# Do basic validation and parsing of the name parameter.
def process_name(data)
validate_name(data['name'])
author, @module_name = data['name'].split(/[-\/]/, 2)
data['author'] ||= author if @data['author'] == DEFAULTS['author']
end
# Do basic validation on the version parameter.
def process_version(data)
validate_version(data['version'])
end
# Do basic parsing of the source parameter. If the source is hosted on
# GitHub, we can predict sensible defaults for both project_page and
# issues_url.
def process_source(data)
if data['source'] =~ %r[://]
source_uri = URI.parse(data['source'])
else
source_uri = URI.parse("http://#{data['source']}")
end
if source_uri.host =~ /^(www\.)?github\.com$/
source_uri.scheme = 'https'
source_uri.path.sub!(/\.git$/, '')
data['project_page'] ||= @data['project_page'] || source_uri.to_s
data['issues_url'] ||= @data['issues_url'] || source_uri.to_s.sub(/\/*$/, '') + '/issues'
end
rescue URI::Error
return
end
+ # Validates and parses the dependencies.
+ def merge_dependencies(data)
+ data['dependencies'].each do |dep|
+ add_dependency(dep['name'], dep['version_requirement'], dep['repository'])
+ end
+
+ # Clear dependencies so @data dependencies are not overwritten
+ data.delete 'dependencies'
+ end
+
# Validates that the given module name is both namespaced and well-formed.
def validate_name(name)
return if name =~ /\A[a-z0-9]+[-\/][a-z][a-z0-9_]*\Z/i
namespace, modname = name.split(/[-\/]/, 2)
modname = :namespace_missing if namespace == ''
err = case modname
when nil, '', :namespace_missing
"the field must be a namespaced module name"
when /[^a-z0-9_]/i
"the module name contains non-alphanumeric (or underscore) characters"
when /^[^a-z]/i
"the module name must begin with a letter"
else
"the namespace contains non-alphanumeric characters"
end
raise ArgumentError, "Invalid 'name' field in metadata.json: #{err}"
end
# Validates that the version string can be parsed as per SemVer.
def validate_version(version)
return if SemVer.valid?(version)
err = "version string cannot be parsed as a valid Semantic Version"
raise ArgumentError, "Invalid 'version' field in metadata.json: #{err}"
end
+
+ # Validates that the version range can be parsed by Semantic.
+ def validate_version_range(version_range)
+ Semantic::VersionRange.parse(version_range)
+ rescue ArgumentError => e
+ raise ArgumentError, "Invalid 'version_range' field in metadata.json: #{e}"
+ end
end
end
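`add_dependency` validates the name and version range, raises on conflicting requirements for the same module, and relies on the Set-backed storage (plus the Dependency equality methods above) to drop exact duplicates. A hedged usage sketch with illustrative module names:

````
metadata = Puppet::ModuleTool::Metadata.new
metadata.update('name' => 'examplecorp-demo', 'version' => '0.1.0')

metadata.add_dependency('puppetlabs-stdlib', '>= 4.1.0')
metadata.add_dependency('puppetlabs-stdlib', '>= 4.1.0')  # exact duplicate, kept once
metadata.dependencies.length                              # => 1

# A different requirement for the same module is a conflict:
# metadata.add_dependency('puppetlabs-stdlib', '>= 5.0.0')  # => ArgumentError
````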
diff --git a/lib/puppet/module_tool/modulefile.rb b/lib/puppet/module_tool/modulefile.rb
index 07f578312..31eb9c14c 100644
--- a/lib/puppet/module_tool/modulefile.rb
+++ b/lib/puppet/module_tool/modulefile.rb
@@ -1,78 +1,78 @@
require 'puppet/module_tool'
require 'puppet/module_tool/dependency'
module Puppet::ModuleTool
# = Modulefile
#
# This class provides the DSL used for evaluating the module's 'Modulefile'.
# These methods are used to concisely define this module's attributes, which
# are later rendered as PSON into a 'metadata.json' file.
class ModulefileReader
# Read the +filename+ and eval its Ruby code to set values in the Metadata
# +metadata+ instance.
def self.evaluate(metadata, filename)
builder = new(metadata)
if File.file?(filename)
builder.instance_eval(File.read(filename.to_s), filename.to_s, 1)
else
Puppet.warning "No Modulefile: #{filename}"
end
return builder
end
# Instantiate with the Metadata +metadata+ instance.
def initialize(metadata)
@metadata = metadata
end
# Set the +full_module_name+ (e.g. "myuser-mymodule"), which will also set the
# +username+ and module +name+. Required.
def name(name)
@metadata.update('name' => name)
end
# Set the module +version+ (e.g., "0.1.0"). Required.
def version(version)
@metadata.update('version' => version)
end
# Add a dependency with the full_module_name +name+ (e.g. "myuser-mymodule"), an
# optional +version_requirement+ (e.g. "0.1.0") and +repository+ (a URL
# string). Optional. Can be called multiple times to add many dependencies.
def dependency(name, version_requirement = nil, repository = nil)
- @metadata.to_hash['dependencies'] << Dependency.new(name, version_requirement, repository)
+ @metadata.add_dependency(name, version_requirement, repository)
end
# Set the source
def source(source)
@metadata.update('source' => source)
end
# Set the author or default to +username+
def author(author)
@metadata.update('author' => author)
end
# Set the license
def license(license)
@metadata.update('license' => license)
end
# Set the summary
def summary(summary)
@metadata.update('summary' => summary)
end
# Set the description
def description(description)
@metadata.update('description' => description)
end
# Set the project page
def project_page(project_page)
@metadata.update('project_page' => project_page)
end
end
end
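A Modulefile is plain Ruby evaluated against the DSL above, so each call maps onto a Metadata method; with this change, `dependency` now flows through `add_dependency` and gets the same validation. An illustrative Modulefile (all values are examples only):

````
name         'examplecorp-demo'
version      '0.1.0'
author       'Example Corp'
summary      'Demonstration module'
license      'Apache 2.0'
source       'github.com/examplecorp/examplecorp-demo'
project_page 'https://github.com/examplecorp/examplecorp-demo'

dependency 'puppetlabs-stdlib', '>= 4.1.0'
````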
diff --git a/lib/puppet/module_tool/skeleton/templates/generator/.fixtures.yml.erb b/lib/puppet/module_tool/skeleton/templates/generator/.fixtures.yml.erb
new file mode 100644
index 000000000..ff33af1eb
--- /dev/null
+++ b/lib/puppet/module_tool/skeleton/templates/generator/.fixtures.yml.erb
@@ -0,0 +1,7 @@
+fixtures:
+ repositories:
+ stdlib:
+ repo: 'git://github.com/puppetlabs/puppetlabs-stdlib.git'
+ ref: '4.1.0'
+ symlinks:
+ <%= metadata.name %>: "#{source_dir}"
diff --git a/lib/puppet/module_tool/skeleton/templates/generator/Gemfile b/lib/puppet/module_tool/skeleton/templates/generator/Gemfile
new file mode 100644
index 000000000..7bd34cda7
--- /dev/null
+++ b/lib/puppet/module_tool/skeleton/templates/generator/Gemfile
@@ -0,0 +1,7 @@
+source 'https://rubygems.org'
+
+puppetversion = ENV.key?('PUPPET_VERSION') ? "= #{ENV['PUPPET_VERSION']}" : ['>= 3.3']
+gem 'puppet', puppetversion
+gem 'puppetlabs_spec_helper', '>= 0.1.0'
+gem 'puppet-lint', '>= 0.3.2'
+gem 'facter', '>= 1.7.0'
diff --git a/lib/puppet/module_tool/skeleton/templates/generator/manifests/init.pp.erb b/lib/puppet/module_tool/skeleton/templates/generator/manifests/init.pp.erb
index d9d0df0b5..d7c5f4aa7 100644
--- a/lib/puppet/module_tool/skeleton/templates/generator/manifests/init.pp.erb
+++ b/lib/puppet/module_tool/skeleton/templates/generator/manifests/init.pp.erb
@@ -1,41 +1,41 @@
# == Class: <%= metadata.name %>
#
# Full description of class <%= metadata.name %> here.
#
# === Parameters
#
# Document parameters here.
#
# [*sample_parameter*]
# Explanation of what this parameter affects and what it defaults to.
# e.g. "Specify one or more upstream ntp servers as an array."
#
# === Variables
#
# Here you should define a list of variables that this module would require.
#
# [*sample_variable*]
# Explanation of how this variable affects the function of this class and if
# it has a default. e.g. "The parameter enc_ntp_servers must be set by the
# External Node Classifier as a comma separated list of hostnames." (Note,
# global variables should be avoided in favor of class parameters as
# of Puppet 2.6.)
#
# === Examples
#
-# class { <%= metadata.name %>:
+# class { '<%= metadata.name %>':
# servers => [ 'pool.ntp.org', 'ntp.local.company.com' ],
# }
#
# === Authors
#
# Author Name <author@domain.com>
#
# === Copyright
#
# Copyright <%= Time.now.year %> Your name here, unless otherwise noted.
#
class <%= metadata.name %> {
}
diff --git a/lib/puppet/module_tool/skeleton/templates/generator/spec/spec_helper.rb b/lib/puppet/module_tool/skeleton/templates/generator/spec/spec_helper.rb
index 5fda58875..2c6f56649 100644
--- a/lib/puppet/module_tool/skeleton/templates/generator/spec/spec_helper.rb
+++ b/lib/puppet/module_tool/skeleton/templates/generator/spec/spec_helper.rb
@@ -1,17 +1 @@
-dir = File.expand_path(File.dirname(__FILE__))
-$LOAD_PATH.unshift File.join(dir, 'lib')
-
-require 'mocha'
-require 'puppet'
-require 'rspec'
-require 'spec/autorun'
-
-Spec::Runner.configure do |config|
- config.mock_with :mocha
-end
-
-# We need this because the RAL uses 'should' as a method. This
-# allows us the same behaviour but with a different method name.
-class Object
- alias :must :should
-end
+require 'puppetlabs_spec_helper/module_spec_helper'
diff --git a/lib/puppet/module_tool/tar/mini.rb b/lib/puppet/module_tool/tar/mini.rb
index ef60e3717..4f1518247 100644
--- a/lib/puppet/module_tool/tar/mini.rb
+++ b/lib/puppet/module_tool/tar/mini.rb
@@ -1,35 +1,53 @@
class Puppet::ModuleTool::Tar::Mini
def unpack(sourcefile, destdir, _)
Zlib::GzipReader.open(sourcefile) do |reader|
- Archive::Tar::Minitar.unpack(reader, destdir) do |action, name, stats|
+ Archive::Tar::Minitar.unpack(reader, destdir, find_valid_files(reader)) do |action, name, stats|
case action
when :file_done
File.chmod(0444, "#{destdir}/#{name}")
when :dir, :file_start
validate_entry(destdir, name)
- Puppet.debug("extracting #{destdir}/#{name}")
+ Puppet.debug("Extracting: #{destdir}/#{name}")
end
end
end
end
def pack(sourcedir, destfile)
Zlib::GzipWriter.open(destfile) do |writer|
Archive::Tar::Minitar.pack(sourcedir, writer)
end
end
private
+ # Find all the valid files in tarfile.
+ #
+ # This check was mainly added to ignore 'x' and 'g' flags from the PAX
+ # standard but will also ignore any other non-standard tar flags.
+ # tar format info: http://pic.dhe.ibm.com/infocenter/zos/v1r13/index.jsp?topic=%2Fcom.ibm.zos.r13.bpxa500%2Ftaf.htm
+ # pax format info: http://pic.dhe.ibm.com/infocenter/zos/v1r13/index.jsp?topic=%2Fcom.ibm.zos.r13.bpxa500%2Fpxarchfm.htm
+ def find_valid_files(tarfile)
+ Archive::Tar::Minitar.open(tarfile).collect do |entry|
+ flag = entry.typeflag
+ if flag.nil? || flag =~ /[[:digit:]]/ && (0..7).include?(flag.to_i)
+ entry.name
+ else
+ Puppet.debug "Invalid tar flag '#{flag}' will not be extracted: #{entry.name}"
+ next
+ end
+ end
+ end
+
def validate_entry(destdir, path)
if Pathname.new(path).absolute?
raise Puppet::ModuleTool::Errors::InvalidPathInPackageError, :entry_path => path, :directory => destdir
end
path = File.expand_path File.join(destdir, path)
if path !~ /\A#{Regexp.escape destdir}/
raise Puppet::ModuleTool::Errors::InvalidPathInPackageError, :entry_path => path, :directory => destdir
end
end
end
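
A rough illustration of the typeflag test used by find_valid_files above (a standalone sketch in plain Ruby, not Puppet code): standard tar entries carry a nil typeflag or a digit in the range 0-7, while PAX extended headers use 'x' and 'g' and are skipped.

# Standalone sketch: mirrors the flag check in find_valid_files above.
# A nil flag (old tar archives) or a digit 0-7 marks a standard entry;
# anything else (e.g. PAX 'x'/'g' records) is ignored.
def standard_tar_entry?(flag)
  flag.nil? || (flag =~ /[[:digit:]]/ && (0..7).include?(flag.to_i))
end

[nil, '0', '5', 'x', 'g'].each do |flag|
  puts "#{flag.inspect} -> #{standard_tar_entry?(flag)}"
end
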
diff --git a/lib/puppet/network/http.rb b/lib/puppet/network/http.rb
index a2712b8d5..68a345e63 100644
--- a/lib/puppet/network/http.rb
+++ b/lib/puppet/network/http.rb
@@ -1,15 +1,20 @@
module Puppet::Network::HTTP
HEADER_ENABLE_PROFILING = "X-Puppet-Profiling"
HEADER_PUPPET_VERSION = "X-Puppet-Version"
require 'puppet/network/http/issues'
require 'puppet/network/http/error'
require 'puppet/network/http/route'
require 'puppet/network/http/api'
require 'puppet/network/http/api/v1'
require 'puppet/network/http/api/v2'
require 'puppet/network/http/handler'
require 'puppet/network/http/response'
require 'puppet/network/http/request'
+ require 'puppet/network/http/site'
+ require 'puppet/network/http/session'
+ require 'puppet/network/http/factory'
+ require 'puppet/network/http/nocache_pool'
+ require 'puppet/network/http/pool'
require 'puppet/network/http/memory_response'
end
diff --git a/lib/puppet/network/http/api/v1.rb b/lib/puppet/network/http/api/v1.rb
index 54fa9af23..24b7d910f 100644
--- a/lib/puppet/network/http/api/v1.rb
+++ b/lib/puppet/network/http/api/v1.rb
@@ -1,222 +1,222 @@
require 'puppet/network/authorization'
class Puppet::Network::HTTP::API::V1
include Puppet::Network::Authorization
# How we map http methods and the indirection name in the URI
# to an indirection method.
METHOD_MAP = {
"GET" => {
:plural => :search,
:singular => :find
},
"POST" => {
:singular => :find,
},
"PUT" => {
:singular => :save
},
"DELETE" => {
:singular => :destroy
},
"HEAD" => {
:singular => :head
}
}
def self.routes
Puppet::Network::HTTP::Route.path(/.*/).any(new)
end
# handle an HTTP request
def call(request, response)
indirection_name, method, key, params = uri2indirection(request.method, request.path, request.params)
certificate = request.client_cert
check_authorization(method, "/#{indirection_name}/#{key}", params)
indirection = Puppet::Indirector::Indirection.instance(indirection_name.to_sym)
raise ArgumentError, "Could not find indirection '#{indirection_name}'" unless indirection
if !indirection.allow_remote_requests?
# TODO: should we tell the user we found an indirection but it doesn't
# allow remote requests, or just pretend there's no handler at all? what
# are the security implications for the former?
raise Puppet::Network::HTTP::Error::HTTPNotFoundError.new("No handler for #{indirection.name}", :NO_INDIRECTION_REMOTE_REQUESTS)
end
trusted = Puppet::Context::TrustedInformation.remote(params[:authenticated], params[:node], certificate)
Puppet.override(:trusted_information => trusted) do
send("do_#{method}", indirection, key, params, request, response)
end
rescue Puppet::Network::HTTP::Error::HTTPError => e
return do_http_control_exception(response, e)
rescue Exception => e
return do_exception(response, e)
end
def uri2indirection(http_method, uri, params)
environment, indirection, key = uri.split("/", 4)[1..-1] # the first field is always nil because of the leading slash
raise ArgumentError, "The environment must be purely alphanumeric, not '#{environment}'" unless Puppet::Node::Environment.valid_name?(environment)
raise ArgumentError, "The indirection name must be purely alphanumeric, not '#{indirection}'" unless indirection =~ /^\w+$/
method = indirection_method(http_method, indirection)
configured_environment = Puppet.lookup(:environments).get(environment)
if configured_environment.nil?
raise Puppet::Network::HTTP::Error::HTTPNotFoundError.new("Could not find environment '#{environment}'", Puppet::Network::HTTP::Issues::ENVIRONMENT_NOT_FOUND)
else
configured_environment = configured_environment.override_from_commandline(Puppet.settings)
params[:environment] = configured_environment
end
params.delete(:bucket_path)
raise ArgumentError, "No request key specified in #{uri}" if key == "" or key.nil?
key = URI.unescape(key)
[indirection, method, key, params]
end
private
def do_http_control_exception(response, exception)
msg = exception.message
Puppet.info(msg)
response.respond_with(exception.status, "text/plain", msg)
end
def do_exception(response, exception, status=400)
if exception.is_a?(Puppet::Network::AuthorizationError)
# make sure we return the correct status code
# for authorization issues
status = 403 if status == 400
end
Puppet.log_exception(exception)
response.respond_with(status, "text/plain", exception.to_s)
end
# Execute our find.
def do_find(indirection, key, params, request, response)
unless result = indirection.find(key, params)
raise Puppet::Network::HTTP::Error::HTTPNotFoundError.new("Could not find #{indirection.name} #{key}", Puppet::Network::HTTP::Issues::RESOURCE_NOT_FOUND)
end
format = accepted_response_formatter_for(indirection.model, request)
rendered_result = result
if result.respond_to?(:render)
- Puppet::Util::Profiler.profile("Rendered result in #{format}") do
+ Puppet::Util::Profiler.profile("Rendered result in #{format}", [:http, :v1_render, format]) do
rendered_result = result.render(format)
end
end
- Puppet::Util::Profiler.profile("Sent response") do
+ Puppet::Util::Profiler.profile("Sent response", [:http, :v1_response]) do
response.respond_with(200, format, rendered_result)
end
end
# Execute our head.
def do_head(indirection, key, params, request, response)
unless indirection.head(key, params)
raise Puppet::Network::HTTP::Error::HTTPNotFoundError.new("Could not find #{indirection.name} #{key}", Puppet::Network::HTTP::Issues::RESOURCE_NOT_FOUND)
end
# No need to set a response because no response is expected from a
# HEAD request. All we need to do is not die.
end
# Execute our search.
def do_search(indirection, key, params, request, response)
result = indirection.search(key, params)
if result.nil?
raise Puppet::Network::HTTP::Error::HTTPNotFoundError.new("Could not find instances in #{indirection.name} with '#{key}'", Puppet::Network::HTTP::Issues::RESOURCE_NOT_FOUND)
end
format = accepted_response_formatter_for(indirection.model, request)
response.respond_with(200, format, indirection.model.render_multiple(format, result))
end
# Execute our destroy.
def do_destroy(indirection, key, params, request, response)
formatter = accepted_response_formatter_or_yaml_for(indirection.model, request)
result = indirection.destroy(key, params)
response.respond_with(200, formatter, formatter.render(result))
end
# Execute our save.
def do_save(indirection, key, params, request, response)
formatter = accepted_response_formatter_or_yaml_for(indirection.model, request)
sent_object = read_body_into_model(indirection.model, request)
result = indirection.save(sent_object, key)
response.respond_with(200, formatter, formatter.render(result))
end
def accepted_response_formatter_for(model_class, request)
accepted_formats = request.headers['accept'] or raise Puppet::Network::HTTP::Error::HTTPNotAcceptableError.new("Missing required Accept header", Puppet::Network::HTTP::Issues::MISSING_HEADER_FIELD)
request.response_formatter_for(model_class.supported_formats, accepted_formats)
end
def accepted_response_formatter_or_yaml_for(model_class, request)
accepted_formats = request.headers['accept'] || "yaml"
request.response_formatter_for(model_class.supported_formats, accepted_formats)
end
def read_body_into_model(model_class, request)
data = request.body.to_s
format = request.format
model_class.convert_from(format, data)
end
def indirection_method(http_method, indirection)
raise ArgumentError, "No support for http method #{http_method}" unless METHOD_MAP[http_method]
unless method = METHOD_MAP[http_method][plurality(indirection)]
raise ArgumentError, "No support for plurality #{plurality(indirection)} for #{http_method} operations"
end
method
end
def self.indirection2uri(request)
indirection = request.method == :search ? pluralize(request.indirection_name.to_s) : request.indirection_name.to_s
"/#{request.environment.to_s}/#{indirection}/#{request.escaped_key}#{request.query_string}"
end
def self.request_to_uri_and_body(request)
indirection = request.method == :search ? pluralize(request.indirection_name.to_s) : request.indirection_name.to_s
["/#{request.environment.to_s}/#{indirection}/#{request.escaped_key}", request.query_string.sub(/^\?/,'')]
end
def self.pluralize(indirection)
return(indirection == "status" ? "statuses" : indirection + "s")
end
def plurality(indirection)
# NOTE This specific hook for facts is ridiculous, but it's a *many*-line
# fix to not need this, and our goal is to move away from the complication
# that leads to the fix being too long.
return :singular if indirection == "facts"
return :singular if indirection == "status"
return :singular if indirection == "certificate_status"
return :plural if indirection == "inventory"
result = (indirection =~ /s$|_search$/) ? :plural : :singular
indirection.sub!(/s$|_search$/, '')
indirection.sub!(/statuse$/, 'status')
result
end
end
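
A quick sketch of how uri2indirection above decomposes a V1 request path (plain Ruby; environment validation, the METHOD_MAP lookup, and plurality handling are omitted, and the URI value is illustrative):

# Standalone sketch of the URI split performed by uri2indirection above.
uri = "/production/node/agent01.example.com"
environment, indirection, key = uri.split("/", 4)[1..-1]
puts environment  # => "production"
puts indirection  # => "node"
puts key          # => "agent01.example.com"
# A GET on the singular "node" maps to :find; a GET on a pluralized name
# such as "nodes" would map to :search via METHOD_MAP and plurality.
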
diff --git a/lib/puppet/network/http/api/v2/environments.rb b/lib/puppet/network/http/api/v2/environments.rb
index 6331be857..51a4e04f2 100644
--- a/lib/puppet/network/http/api/v2/environments.rb
+++ b/lib/puppet/network/http/api/v2/environments.rb
@@ -1,21 +1,35 @@
require 'json'
class Puppet::Network::HTTP::API::V2::Environments
def initialize(env_loader)
@env_loader = env_loader
end
def call(request, response)
response.respond_with(200, "application/json", JSON.dump({
"search_paths" => @env_loader.search_paths,
"environments" => Hash[@env_loader.list.collect do |env|
[env.name, {
"settings" => {
"modulepath" => env.full_modulepath,
- "manifest" => env.manifest
+ "manifest" => env.manifest,
+ "environment_timeout" => timeout(env),
+ "config_version" => env.config_version || '',
}
}]
end]
}))
end
+
+ private
+
+ def timeout(env)
+ ttl = @env_loader.get_conf(env.name).environment_timeout
+ if ttl == 1.0 / 0.0 # INFINITY
+ "unlimited"
+ else
+ ttl
+ end
+ end
+
end
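
The timeout helper above reports an infinite environment TTL as the string "unlimited" and passes any other value through. A minimal standalone sketch of that translation:

# Standalone sketch of the timeout translation used in the response above.
def reported_timeout(ttl)
  ttl == Float::INFINITY ? "unlimited" : ttl
end

puts reported_timeout(1.0 / 0.0).inspect  # => "unlimited"
puts reported_timeout(180).inspect        # => 180
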
diff --git a/lib/puppet/network/http/connection.rb b/lib/puppet/network/http/connection.rb
index a2ce6eef3..8ffb1dda1 100644
--- a/lib/puppet/network/http/connection.rb
+++ b/lib/puppet/network/http/connection.rb
@@ -1,254 +1,240 @@
require 'net/https'
require 'puppet/ssl/host'
require 'puppet/ssl/configuration'
require 'puppet/ssl/validator'
require 'puppet/network/authentication'
+require 'puppet/network/http'
require 'uri'
module Puppet::Network::HTTP
# This will be raised if too many redirects happen for a given HTTP request
class RedirectionLimitExceededException < Puppet::Error ; end
# This class provides simple methods for issuing various types of HTTP
# requests. Its interface is intended to mirror Ruby's Net::HTTP
# object, but it provides a few important bits of additional
# functionality. Notably:
#
# * Any HTTPS requests made using this class will use Puppet's SSL
# certificate configuration for their authentication, and
# * Provides some useful error handling for any SSL errors that occur
# during a request.
# @api public
class Connection
include Puppet::Network::Authentication
OPTION_DEFAULTS = {
:use_ssl => true,
:verify => nil,
:redirect_limit => 10,
}
- @@openssl_initialized = false
-
# Creates a new HTTP client connection to `host`:`port`.
# @param host [String] the host to which this client will connect
# @param port [Fixnum] the port to which this client will connect
# @param options [Hash] options influencing the properties of the created
# connection,
# @option options [Boolean] :use_ssl true to connect with SSL, false
# otherwise, defaults to true
# @option options [#setup_connection] :verify An object that will configure
# any verification to do on the connection
# @option options [Fixnum] :redirect_limit the number of allowed
# redirections, defaults to 10 passing any other option in the options
# hash results in a Puppet::Error exception
#
# @note the HTTP connection itself happens lazily only when {#request}, or
# one of the {#get}, {#post}, {#delete}, {#head} or {#put} is called
# @note The correct way to obtain a connection is to use one of the factory
# methods on {Puppet::Network::HttpPool}
# @api private
def initialize(host, port, options = {})
@host = host
@port = port
unknown_options = options.keys - OPTION_DEFAULTS.keys
raise Puppet::Error, "Unrecognized option(s): #{unknown_options.map(&:inspect).sort.join(', ')}" unless unknown_options.empty?
options = OPTION_DEFAULTS.merge(options)
@use_ssl = options[:use_ssl]
@verify = options[:verify]
@redirect_limit = options[:redirect_limit]
+ @site = Puppet::Network::HTTP::Site.new(@use_ssl ? 'https' : 'http', host, port)
+ @pool = Puppet.lookup(:http_pool)
end
# @!macro [new] common_options
# @param options [Hash] options influencing the request made
# @option options [Hash{Symbol => String}] :basic_auth The basic auth
# :username and :password to use for the request
# @param path [String]
# @param headers [Hash{String => String}]
# @!macro common_options
# @api public
def get(path, headers = {}, options = {})
request_with_redirects(Net::HTTP::Get.new(path, headers), options)
end
# @param path [String]
# @param data [String]
# @param headers [Hash{String => String}]
# @!macro common_options
# @api public
def post(path, data, headers = nil, options = {})
request = Net::HTTP::Post.new(path, headers)
request.body = data
request_with_redirects(request, options)
end
# @param path [String]
# @param headers [Hash{String => String}]
# @!macro common_options
# @api public
def head(path, headers = {}, options = {})
request_with_redirects(Net::HTTP::Head.new(path, headers), options)
end
# @param path [String]
# @param headers [Hash{String => String}]
# @!macro common_options
# @api public
def delete(path, headers = {'Depth' => 'Infinity'}, options = {})
request_with_redirects(Net::HTTP::Delete.new(path, headers), options)
end
# @param path [String]
# @param data [String]
# @param headers [Hash{String => String}]
# @!macro common_options
# @api public
def put(path, data, headers = nil, options = {})
request = Net::HTTP::Put.new(path, headers)
request.body = data
request_with_redirects(request, options)
end
def request(method, *args)
self.send(method, *args)
end
# TODO: These are proxies for the Net::HTTP#request_* methods, which are
# almost the same as the "get", "post", etc. methods that we've ported above,
- # but they are able to accept a code block and will yield to it. For now
+ # but they are able to accept a code block and will yield to it, which is
+ # necessary to stream responses, e.g. file content. For now
# we're not funneling these proxy implementations through our #request
# method above, so they will not inherit the same error handling. In the
# future we may want to refactor these so that they are funneled through
# that method and do inherit the error handling.
def request_get(*args, &block)
- connection.request_get(*args, &block)
+ with_connection(@site) do |connection|
+ connection.request_get(*args, &block)
+ end
end
def request_head(*args, &block)
- connection.request_head(*args, &block)
+ with_connection(@site) do |connection|
+ connection.request_head(*args, &block)
+ end
end
def request_post(*args, &block)
- connection.request_post(*args, &block)
+ with_connection(@site) do |connection|
+ connection.request_post(*args, &block)
+ end
end
# end of Net::HTTP#request_* proxies
+ # The address to connect to.
def address
- connection.address
+ @site.host
end
+ # The port to connect to.
def port
- connection.port
+ @site.port
end
+ # Whether to use ssl
def use_ssl?
- connection.use_ssl?
+ @site.use_ssl?
end
private
def request_with_redirects(request, options)
current_request = request
- @redirect_limit.times do |redirection|
- apply_options_to(current_request, options)
+ current_site = @site
+ response = nil
- response = execute_request(current_request)
- return response unless [301, 302, 307].include?(response.code.to_i)
+ 0.upto(@redirect_limit) do |redirection|
+ return response if response
- # handle the redirection
- location = URI.parse(response['location'])
- @connection = initialize_connection(location.host, location.port, location.scheme == 'https')
+ with_connection(current_site) do |connection|
+ apply_options_to(current_request, options)
- # update to the current request path
- current_request = current_request.class.new(location.path)
- current_request.body = request.body
- request.each do |header, value|
- current_request[header] = value
+ current_response = execute_request(connection, current_request)
+
+ if [301, 302, 307].include?(current_response.code.to_i)
+
+ # handle the redirection
+ location = URI.parse(current_response['location'])
+ current_site = current_site.move_to(location)
+
+ # update to the current request path
+ current_request = current_request.class.new(location.path)
+ current_request.body = request.body
+ request.each do |header, value|
+ current_request[header] = value
+ end
+ else
+ response = current_response
+ end
end
# and try again...
end
+
raise RedirectionLimitExceededException, "Too many HTTP redirections for #{@host}:#{@port}"
end
def apply_options_to(request, options)
if options[:basic_auth]
request.basic_auth(options[:basic_auth][:user], options[:basic_auth][:password])
end
end
- def connection
- @connection || initialize_connection(@host, @port, @use_ssl)
- end
-
- def execute_request(request)
+ def execute_request(connection, request)
response = connection.request(request)
# Check the peer certs and warn if they're nearing expiration.
warn_if_near_expiration(*@verify.peer_certs)
+ response
+ end
+
+ def with_connection(site, &block)
+ response = nil
+ @pool.with_connection(site, @verify) do |conn|
+ response = yield conn
+ end
response
rescue OpenSSL::SSL::SSLError => error
if error.message.include? "certificate verify failed"
msg = error.message
msg << ": [" + @verify.verify_errors.join('; ') + "]"
raise Puppet::Error, msg, error.backtrace
elsif error.message =~ /hostname.*not match.*server certificate/
leaf_ssl_cert = @verify.peer_certs.last
valid_certnames = [leaf_ssl_cert.name, *leaf_ssl_cert.subject_alt_names].uniq
msg = valid_certnames.length > 1 ? "one of #{valid_certnames.join(', ')}" : valid_certnames.first
- msg = "Server hostname '#{connection.address}' did not match server certificate; expected #{msg}"
+ msg = "Server hostname '#{site.host}' did not match server certificate; expected #{msg}"
raise Puppet::Error, msg, error.backtrace
else
raise
end
end
-
- def initialize_connection(host, port, use_ssl)
- args = [host, port]
- if Puppet[:http_proxy_host] == "none"
- args << nil << nil
- else
- args << Puppet[:http_proxy_host] << Puppet[:http_proxy_port]
- end
-
- @connection = create_connection(*args)
-
- # Pop open the http client a little; older versions of Net::HTTP(s) didn't
- # give us a reader for ca_file... Grr...
- class << @connection; attr_accessor :ca_file; end
-
- @connection.use_ssl = use_ssl
- # Use configured timeout (#1176)
- @connection.read_timeout = Puppet[:configtimeout]
- @connection.open_timeout = Puppet[:configtimeout]
-
- cert_setup
-
- @connection
- end
-
- # Use cert information from a Puppet client to set up the http object.
- def cert_setup
- # PUP-1411, make sure that openssl is initialized before we try to connect
- if ! @@openssl_initialized
- OpenSSL::SSL::SSLContext.new
- @@openssl_initialized = true
- end
-
- @verify.setup_connection(@connection)
- end
-
- # This method largely exists for testing purposes, so that we can
- # mock the actual HTTP connection.
- def create_connection(*args)
- Net::HTTP.new(*args)
- end
end
end
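
The Connection now borrows a connection from the pool obtained via Puppet.lookup(:http_pool) for every request instead of holding its own Net::HTTP instance. A generic, standalone sketch of that borrow/yield/release pattern (TrivialPool is purely illustrative and not a Puppet class):

# Standalone sketch of the "borrow, yield, release" pattern the Connection
# now delegates to: the pool owns the sockets, the caller only sees one
# connection for the duration of the block.
class TrivialPool
  def initialize
    @cache = Hash.new { |h, k| h[k] = [] }
  end

  def with_connection(site)
    conn = @cache[site].shift || "new connection to #{site}"
    yield conn
  ensure
    @cache[site].unshift(conn)  # release back for reuse
  end
end

pool = TrivialPool.new
pool.with_connection("https://puppet.example.com:8140") { |c| puts c }
pool.with_connection("https://puppet.example.com:8140") { |c| puts c }  # reused
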
diff --git a/lib/puppet/network/http/factory.rb b/lib/puppet/network/http/factory.rb
new file mode 100644
index 000000000..8e60ad2c6
--- /dev/null
+++ b/lib/puppet/network/http/factory.rb
@@ -0,0 +1,44 @@
+require 'openssl'
+require 'net/http'
+
+# Factory for <tt>Net::HTTP</tt> objects.
+#
+# Encapsulates the logic for creating a <tt>Net::HTTP</tt> object based on the
+# specified {Puppet::Network::HTTP::Site Site} and puppet settings.
+#
+# @api private
+#
+class Puppet::Network::HTTP::Factory
+ @@openssl_initialized = false
+
+ def initialize
+ # PUP-1411, make sure that openssl is initialized before we try to connect
+ if ! @@openssl_initialized
+ OpenSSL::SSL::SSLContext.new
+ @@openssl_initialized = true
+ end
+ end
+
+ def create_connection(site)
+ Puppet.debug("Creating new connection for #{site}")
+
+ args = [site.host, site.port]
+ if Puppet[:http_proxy_host] == "none"
+ args << nil << nil
+ else
+ args << Puppet[:http_proxy_host] << Puppet[:http_proxy_port]
+ end
+
+ http = Net::HTTP.new(*args)
+ http.use_ssl = site.use_ssl?
+ # Use configured timeout (#1176)
+ http.read_timeout = Puppet[:configtimeout]
+ http.open_timeout = Puppet[:configtimeout]
+
+ if Puppet[:http_debug]
+ http.set_debug_output($stderr)
+ end
+
+ http
+ end
+end
diff --git a/lib/puppet/network/http/handler.rb b/lib/puppet/network/http/handler.rb
index 82e873ea0..a8aa1aaeb 100644
--- a/lib/puppet/network/http/handler.rb
+++ b/lib/puppet/network/http/handler.rb
@@ -1,180 +1,186 @@
module Puppet::Network::HTTP
end
require 'puppet/network/http'
require 'puppet/network/http/api/v1'
require 'puppet/network/authentication'
require 'puppet/network/rights'
require 'puppet/util/profiler'
+require 'puppet/util/profiler/aggregate'
require 'resolv'
module Puppet::Network::HTTP::Handler
include Puppet::Network::Authentication
include Puppet::Network::HTTP::Issues
# These shouldn't be allowed to be set by clients
# in the query string, for security reasons.
DISALLOWED_KEYS = ["node", "ip"]
def register(routes)
# There's got to be a simpler way to do this, right?
dupes = {}
routes.each { |r| dupes[r.path_matcher] = (dupes[r.path_matcher] || 0) + 1 }
dupes = dupes.collect { |pm, count| pm if count > 1 }.compact
if dupes.count > 0
raise ArgumentError, "Given multiple routes with identical path regexes: #{dupes.map{ |rgx| rgx.inspect }.join(', ')}"
end
@routes = routes
Puppet.debug("Routes Registered:")
@routes.each do |route|
Puppet.debug(route.inspect)
end
end
# Retrieve all headers from the http request, as a hash with the header names
# (lower-cased) as the keys
def headers(request)
raise NotImplementedError
end
def format_to_mime(format)
format.is_a?(Puppet::Network::Format) ? format.mime : format
end
# handle an HTTP request
def process(request, response)
new_response = Puppet::Network::HTTP::Response.new(self, response)
request_headers = headers(request)
request_params = params(request)
request_method = http_method(request)
request_path = path(request)
new_request = Puppet::Network::HTTP::Request.new(request_headers, request_params, request_method, request_path, request_path, client_cert(request), body(request))
response[Puppet::Network::HTTP::HEADER_PUPPET_VERSION] = Puppet.version
- configure_profiler(request_headers, request_params)
- warn_if_near_expiration(new_request.client_cert)
+ profiler = configure_profiler(request_headers, request_params)
- Puppet::Util::Profiler.profile("Processed request #{request_method} #{request_path}") do
+ Puppet::Util::Profiler.profile("Processed request #{request_method} #{request_path}", [:http, request_method, request_path]) do
if route = @routes.find { |route| route.matches?(new_request) }
route.process(new_request, new_response)
else
raise Puppet::Network::HTTP::Error::HTTPNotFoundError.new("No route for #{new_request.method} #{new_request.path}", HANDLER_NOT_FOUND)
end
end
rescue Puppet::Network::HTTP::Error::HTTPError => e
Puppet.info(e.message)
new_response.respond_with(e.status, "application/json", e.to_json)
rescue Exception => e
http_e = Puppet::Network::HTTP::Error::HTTPServerError.new(e)
Puppet.err(http_e.message)
new_response.respond_with(http_e.status, "application/json", http_e.to_json)
ensure
+ if profiler
+ remove_profiler(profiler)
+ end
cleanup(request)
end
# Set the response up, with the body and status.
def set_response(response, body, status = 200)
raise NotImplementedError
end
# Set the specified format as the content type of the response.
def set_content_type(response, format)
raise NotImplementedError
end
# resolve node name from peer's ip address
# this is used when the request is unauthenticated
def resolve_node(result)
begin
return Resolv.getname(result[:ip])
rescue => detail
Puppet.err "Could not resolve #{result[:ip]}: #{detail}"
end
result[:ip]
end
private
# methods to be overridden by the including web server class
def http_method(request)
raise NotImplementedError
end
def path(request)
raise NotImplementedError
end
def request_key(request)
raise NotImplementedError
end
def body(request)
raise NotImplementedError
end
def params(request)
raise NotImplementedError
end
def client_cert(request)
raise NotImplementedError
end
def cleanup(request)
# By default, there is nothing to cleanup.
end
def decode_params(params)
params.select { |key, _| allowed_parameter?(key) }.inject({}) do |result, ary|
param, value = ary
result[param.to_sym] = parse_parameter_value(param, value)
result
end
end
def allowed_parameter?(name)
not (name.nil? || name.empty? || DISALLOWED_KEYS.include?(name))
end
def parse_parameter_value(param, value)
case value
when /^---/
Puppet.debug("Found YAML while processing request parameter #{param} (value: <#{value}>)")
Puppet.deprecation_warning("YAML in network requests is deprecated and will be removed in a future version. See http://links.puppetlabs.com/deprecate_yaml_on_network")
YAML.load(value, :safe => true, :deserialize_symbols => true)
when Array
value.collect { |v| parse_primitive_parameter_value(v) }
else
parse_primitive_parameter_value(value)
end
end
def parse_primitive_parameter_value(value)
case value
when "true"
true
when "false"
false
when /^\d+$/
Integer(value)
when /^\d+\.\d+$/
value.to_f
else
value
end
end
def configure_profiler(request_headers, request_params)
if (request_headers.has_key?(Puppet::Network::HTTP::HEADER_ENABLE_PROFILING.downcase) or Puppet[:profile])
- Puppet::Util::Profiler.current = Puppet::Util::Profiler::WallClock.new(Puppet.method(:debug), request_params.object_id)
- else
- Puppet::Util::Profiler.current = Puppet::Util::Profiler::NONE
+ Puppet::Util::Profiler.add_profiler(Puppet::Util::Profiler::Aggregate.new(Puppet.method(:debug), request_params.object_id))
end
end
+
+ def remove_profiler(profiler)
+ profiler.shutdown
+ Puppet::Util::Profiler.remove_profiler(profiler)
+ end
end
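
A rough sketch of the per-request profiler lifecycle the handler now follows. The names come from this diff; that add_profiler returns the registered profiler is assumed from the handler code above, and the surrounding request plumbing is omitted.

# Sketch only: register an aggregate profiler for one request, profile the
# work with metric identifiers, and always unregister it again.
profiler = Puppet::Util::Profiler.add_profiler(
  Puppet::Util::Profiler::Aggregate.new(Puppet.method(:debug), "request-1"))
begin
  Puppet::Util::Profiler.profile("Processed request GET /production/node/x",
                                 [:http, "GET", "/production/node/x"]) do
    # ... route and process the request here ...
  end
ensure
  profiler.shutdown
  Puppet::Util::Profiler.remove_profiler(profiler)
end
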
diff --git a/lib/puppet/network/http/nocache_pool.rb b/lib/puppet/network/http/nocache_pool.rb
new file mode 100644
index 000000000..d80e0409a
--- /dev/null
+++ b/lib/puppet/network/http/nocache_pool.rb
@@ -0,0 +1,21 @@
+# A pool that does not cache HTTP connections.
+#
+# @api private
+class Puppet::Network::HTTP::NoCachePool
+ def initialize(factory = Puppet::Network::HTTP::Factory.new)
+ @factory = factory
+ end
+
+ # Yields a <tt>Net::HTTP</tt> connection.
+ #
+ # @yieldparam http [Net::HTTP] An HTTP connection
+ def with_connection(site, verify, &block)
+ http = @factory.create_connection(site)
+ verify.setup_connection(http)
+ yield http
+ end
+
+ def close
+ # do nothing
+ end
+end
diff --git a/lib/puppet/network/http/pool.rb b/lib/puppet/network/http/pool.rb
new file mode 100644
index 000000000..b817b472b
--- /dev/null
+++ b/lib/puppet/network/http/pool.rb
@@ -0,0 +1,120 @@
+# A pool for persistent <tt>Net::HTTP</tt> connections. Connections are
+# stored in the pool indexed by their {Puppet::Network::HTTP::Site Site}.
+# Connections are borrowed from the pool, yielded to the caller, and
+# released back into the pool. If a connection is expired, it will be
+# closed either when a connection to that site is requested, or when
+# the pool is closed. The pool can store multiple connections to the
+# same site, and connections are reused in MRU order.
+#
+# @api private
+#
+class Puppet::Network::HTTP::Pool
+ FIFTEEN_SECONDS = 15
+
+ attr_reader :factory
+
+ def initialize(keepalive_timeout = FIFTEEN_SECONDS)
+ @pool = {}
+ @factory = Puppet::Network::HTTP::Factory.new
+ @keepalive_timeout = keepalive_timeout
+ end
+
+ def with_connection(site, verify, &block)
+ reuse = true
+
+ http = borrow(site, verify)
+ begin
+ if http.use_ssl? && http.verify_mode != OpenSSL::SSL::VERIFY_PEER
+ reuse = false
+ end
+
+ yield http
+ rescue => detail
+ reuse = false
+ raise detail
+ ensure
+ if reuse
+ release(site, http)
+ else
+ close_connection(site, http)
+ end
+ end
+ end
+
+ def close
+ @pool.each_pair do |site, sessions|
+ sessions.each do |session|
+ close_connection(site, session.connection)
+ end
+ end
+ @pool.clear
+ end
+
+ # @api private
+ def pool
+ @pool
+ end
+
+ # Safely close a persistent connection.
+ #
+ # @api private
+ def close_connection(site, http)
+ Puppet.debug("Closing connection for #{site}")
+ http.finish
+ rescue => detail
+ Puppet.log_exception(detail, "Failed to close connection for #{site}: #{detail}")
+ end
+
+ # Borrow and take ownership of a persistent connection. If a new
+ # connection is created, it will be started prior to being returned.
+ #
+ # @api private
+ def borrow(site, verify)
+ @pool[site] = active_sessions(site)
+ session = @pool[site].shift
+ if session
+ Puppet.debug("Using cached connection for #{site}")
+ session.connection
+ else
+ http = @factory.create_connection(site)
+ verify.setup_connection(http)
+
+ Puppet.debug("Starting connection for #{site}")
+ http.start
+ http
+ end
+ end
+
+ # Release a connection back into the pool.
+ #
+ # @api private
+ def release(site, http)
+ expiration = Time.now + @keepalive_timeout
+ session = Puppet::Network::HTTP::Session.new(http, expiration)
+ Puppet.debug("Caching connection for #{site}")
+
+ sessions = @pool[site]
+ if sessions
+ sessions.unshift(session)
+ else
+ @pool[site] = [session]
+ end
+ end
+
+ # Returns an Array of sessions whose connections are not expired.
+ #
+ # @api private
+ def active_sessions(site)
+ now = Time.now
+
+ sessions = @pool[site] || []
+ sessions.select do |session|
+ if session.expired?(now)
+ close_connection(site, session.connection)
+ false
+ else
+ true
+ end
+ end
+ end
+end
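
Released connections are unshifted onto the front of the per-site list and borrowed with shift, which gives the MRU reuse order described in the class comment. A trivially small standalone sketch:

# Standalone sketch of the MRU ordering: the most recently released
# connection sits at the head of the list and is borrowed first.
cached = []
cached.unshift(:conn_a)  # release conn_a
cached.unshift(:conn_b)  # release conn_b later
puts cached.shift        # => conn_b  (borrowed first)
puts cached.shift        # => conn_a
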
diff --git a/lib/puppet/network/http/rack/rest.rb b/lib/puppet/network/http/rack/rest.rb
index 23d73bf93..3ca79c8ae 100644
--- a/lib/puppet/network/http/rack/rest.rb
+++ b/lib/puppet/network/http/rack/rest.rb
@@ -1,136 +1,138 @@
require 'openssl'
require 'cgi'
require 'puppet/network/http/handler'
require 'puppet/util/ssl'
class Puppet::Network::HTTP::RackREST
include Puppet::Network::HTTP::Handler
ContentType = 'Content-Type'.freeze
CHUNK_SIZE = 8192
class RackFile
def initialize(file)
@file = file
end
def each
while chunk = @file.read(CHUNK_SIZE)
yield chunk
end
end
def close
@file.close
end
end
def initialize(args={})
super()
register([Puppet::Network::HTTP::API::V2.routes, Puppet::Network::HTTP::API::V1.routes])
end
def set_content_type(response, format)
response[ContentType] = format_to_mime(format)
end
# produce the body of the response
def set_response(response, result, status = 200)
response.status = status
unless result.is_a?(File)
response.write result
else
response["Content-Length"] = result.stat.size.to_s
response.body = RackFile.new(result)
end
end
# Retrieve all headers from the http request, as a map.
def headers(request)
headers = request.env.select {|k,v| k.start_with? 'HTTP_'}.inject({}) do |m, (k,v)|
m[k.sub(/^HTTP_/, '').gsub('_','-').downcase] = v
m
end
headers['content-type'] = request.content_type
headers
end
# Return which HTTP verb was used in this request.
def http_method(request)
request.request_method
end
# Return the query params for this request.
def params(request)
if request.post?
params = request.params
else
# rack doesn't support multi-valued query parameters,
# e.g. ignore, so parse them ourselves
params = CGI.parse(request.query_string)
convert_singular_arrays_to_value(params)
end
result = decode_params(params)
result.merge(extract_client_info(request))
end
# what path was requested? (that is, without any query parameters)
def path(request)
request.path
end
# return the request body
def body(request)
request.body.read
end
def client_cert(request)
# This environment variable is set by mod_ssl, note that it
# requires the `+ExportCertData` option in the `SSLOptions` directive
cert = request.env['SSL_CLIENT_CERT']
# NOTE: The SSL_CLIENT_CERT environment variable will be the empty string
# when Puppet agent nodes have not yet obtained a signed certificate.
if cert.nil? || cert.empty?
nil
else
- Puppet::SSL::Certificate.from_instance(OpenSSL::X509::Certificate.new(cert))
+ cert = Puppet::SSL::Certificate.from_instance(OpenSSL::X509::Certificate.new(cert))
+ warn_if_near_expiration(cert)
+ cert
end
end
# Passenger freaks out if we finish handling the request without reading any
# part of the body, so make sure we have.
def cleanup(request)
request.body.read(1)
nil
end
def extract_client_info(request)
result = {}
result[:ip] = request.ip
# if we find SSL info in the headers, use them to get a hostname from the CN.
# try this with :ssl_client_header, which defaults should work for
# Apache with StdEnvVars.
subj_str = request.env[Puppet[:ssl_client_header]]
subject = Puppet::Util::SSL.subject_from_dn(subj_str || "")
if cn = Puppet::Util::SSL.cn_from_subject(subject)
result[:node] = cn
result[:authenticated] = (request.env[Puppet[:ssl_client_verify_header]] == 'SUCCESS')
else
result[:node] = resolve_node(result)
result[:authenticated] = false
end
result
end
def convert_singular_arrays_to_value(hash)
hash.each do |key, value|
if value.size == 1
hash[key] = value.first
end
end
end
end
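
The headers method above recovers HTTP header names from Rack's CGI-style env keys; a standalone sketch of that conversion (the env hash is illustrative):

# Standalone sketch of the Rack header extraction above: HTTP_* env keys are
# mapped back to lower-cased, dash-separated header names.
env = { 'HTTP_X_PUPPET_VERSION' => '3.6.0', 'HTTP_ACCEPT' => 'pson', 'rack.input' => nil }
headers = env.select { |k, _| k.start_with?('HTTP_') }.inject({}) do |m, (k, v)|
  m[k.sub(/^HTTP_/, '').gsub('_', '-').downcase] = v
  m
end
p headers  # => {"x-puppet-version"=>"3.6.0", "accept"=>"pson"}
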
diff --git a/lib/puppet/network/http/session.rb b/lib/puppet/network/http/session.rb
new file mode 100644
index 000000000..21136a12d
--- /dev/null
+++ b/lib/puppet/network/http/session.rb
@@ -0,0 +1,17 @@
+# An HTTP session that references a persistent HTTP connection and
+# an expiration time for the connection.
+#
+# @api private
+#
+class Puppet::Network::HTTP::Session
+ attr_reader :connection
+
+ def initialize(connection, expiration_time)
+ @connection = connection
+ @expiration_time = expiration_time
+ end
+
+ def expired?(now)
+ @expiration_time <= now
+ end
+end
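
A session simply pairs a connection with its expiration timestamp; the pool calls expired? to decide whether a cached connection is still usable. A standalone sketch (a Struct stands in for the class, and the 15-second keepalive matches Pool::FIFTEEN_SECONDS above):

# Standalone sketch of the expiration rule used by the pool.
Session = Struct.new(:connection, :expiration_time) do
  def expired?(now)
    expiration_time <= now
  end
end

keepalive = 15  # seconds
session = Session.new(:http, Time.now + keepalive)
puts session.expired?(Time.now)       # => false
puts session.expired?(Time.now + 16)  # => true
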
diff --git a/lib/puppet/network/http/site.rb b/lib/puppet/network/http/site.rb
new file mode 100644
index 000000000..c219e6021
--- /dev/null
+++ b/lib/puppet/network/http/site.rb
@@ -0,0 +1,39 @@
+# Represents a site to which HTTP connections are made. It is a value
+# object, and is suitable for use in a hash. If two sites are equal,
+# then a persistent connection made to the first site can be re-used
+# for the second.
+#
+# @api private
+#
+class Puppet::Network::HTTP::Site
+ attr_reader :scheme, :host, :port
+
+ def initialize(scheme, host, port)
+ @scheme = scheme
+ @host = host
+ @port = port.to_i
+ end
+
+ def addr
+ "#{@scheme}://#{@host}:#{@port.to_s}"
+ end
+ alias to_s addr
+
+ def ==(rhs)
+ (@scheme == rhs.scheme) && (@host == rhs.host) && (@port == rhs.port)
+ end
+
+ alias eql? ==
+
+ def hash
+ [@scheme, @host, @port].hash
+ end
+
+ def use_ssl?
+ @scheme == 'https'
+ end
+
+ def move_to(uri)
+ self.class.new(uri.scheme, uri.host, uri.port)
+ end
+end
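
Because Site defines ==, eql? and hash over the scheme, host and integer-normalized port, it works as a hash key: two sites describing the same place find the same pooled connections. A sketch, assuming the puppet tree from this diff is on the load path:

require 'puppet'
require 'puppet/network/http'

a = Puppet::Network::HTTP::Site.new('https', 'puppet.example.com', 8140)
b = Puppet::Network::HTTP::Site.new('https', 'puppet.example.com', '8140')

puts a == b                      # => true  (port is normalized with to_i)
puts({ a => :cached }.key?(b))   # => true  (usable as a pool hash key)
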
diff --git a/lib/puppet/network/http/webrick/rest.rb b/lib/puppet/network/http/webrick/rest.rb
index 64c859a10..ae0867dfb 100644
--- a/lib/puppet/network/http/webrick/rest.rb
+++ b/lib/puppet/network/http/webrick/rest.rb
@@ -1,102 +1,104 @@
require 'puppet/network/http/handler'
require 'resolv'
require 'webrick'
require 'puppet/util/ssl'
class Puppet::Network::HTTP::WEBrickREST < WEBrick::HTTPServlet::AbstractServlet
include Puppet::Network::HTTP::Handler
def self.mutex
@mutex ||= Mutex.new
end
def initialize(server)
raise ArgumentError, "server is required" unless server
register([Puppet::Network::HTTP::API::V2.routes, Puppet::Network::HTTP::API::V1.routes])
super(server)
end
# Retrieve the request parameters, including authentication information.
def params(request)
params = request.query || {}
params = Hash[params.collect do |key, value|
all_values = value.list
[key, all_values.length == 1 ? value : all_values]
end]
params = decode_params(params)
params.merge(client_information(request))
end
# WEBrick uses a service method to respond to requests. Simply delegate to
# the handler response method.
def service(request, response)
self.class.mutex.synchronize do
process(request, response)
end
end
def headers(request)
result = {}
request.each do |k, v|
result[k.downcase] = v
end
result
end
def http_method(request)
request.request_method
end
def path(request)
request.path
end
def body(request)
request.body
end
def client_cert(request)
if cert = request.client_cert
- Puppet::SSL::Certificate.from_instance(cert)
+ cert = Puppet::SSL::Certificate.from_instance(cert)
+ warn_if_near_expiration(cert)
+ cert
else
nil
end
end
# Set the specified format as the content type of the response.
def set_content_type(response, format)
response["content-type"] = format_to_mime(format)
end
def set_response(response, result, status = 200)
response.status = status
if status >= 200 and status != 304
response.body = result
response["content-length"] = result.stat.size if result.is_a?(File)
end
end
# Retrieve node/cert/ip information from the request object.
def client_information(request)
result = {}
if peer = request.peeraddr and ip = peer[3]
result[:ip] = ip
end
# If they have a certificate (which will almost always be true)
# then we get the hostname from the cert, instead of via IP
# info
result[:authenticated] = false
if cert = request.client_cert and cn = Puppet::Util::SSL.cn_from_subject(cert.subject)
result[:node] = cn
result[:authenticated] = true
else
result[:node] = resolve_node(result)
end
result
end
end
diff --git a/lib/puppet/network/http_pool.rb b/lib/puppet/network/http_pool.rb
index 8c2783d36..5bab3803e 100644
--- a/lib/puppet/network/http_pool.rb
+++ b/lib/puppet/network/http_pool.rb
@@ -1,62 +1,61 @@
require 'puppet/network/http/connection'
module Puppet::Network; end
# This module contains the factory methods that should be used for getting a
-# {Puppet::Network::HTTP::Connection} instance.
-#
-# @note The name "HttpPool" is a misnomer, and a leftover of history, but we would
-# like to make this cache connections in the future.
+# {Puppet::Network::HTTP::Connection} instance. The pool may return a new
+# connection or a persistent cached connection, depending on the underlying
+# pool implementation in use.
#
# @api public
#
module Puppet::Network::HttpPool
@http_client_class = Puppet::Network::HTTP::Connection
def self.http_client_class
@http_client_class
end
def self.http_client_class=(klass)
@http_client_class = klass
end
# Retrieve a connection for the given host and port.
#
# @param host [String] The hostname to connect to
# @param port [Integer] The port on the host to connect to
# @param use_ssl [Boolean] Whether to use an SSL connection
# @param verify_peer [Boolean] Whether to verify the peer credentials, if possible. Verification will not take place if the CA certificate is missing.
# @return [Puppet::Network::HTTP::Connection]
#
# @api public
#
def self.http_instance(host, port, use_ssl = true, verify_peer = true)
verifier = if verify_peer
Puppet::SSL::Validator.default_validator()
else
Puppet::SSL::Validator.no_validator()
end
http_client_class.new(host, port,
:use_ssl => use_ssl,
:verify => verifier)
end
# Get an http connection that will be secured with SSL and have the
# connection verified with the given verifier
#
# @param host [String] the DNS name to connect to
# @param port [Integer] the port to connect to
# @param verifier [#setup_connection, #peer_certs, #verify_errors] An object that will setup the appropriate
# verification on a Net::HTTP instance and report any errors and the certificates used.
# @return [Puppet::Network::HTTP::Connection]
#
# @api public
#
def self.http_ssl_instance(host, port, verifier = Puppet::SSL::Validator.default_validator())
http_client_class.new(host, port,
:use_ssl => true,
:verify => verifier)
end
end
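
A sketch of how a caller obtains a connection through these factory methods. The host, port, and request path are illustrative, and a working SSL setup with signed certificates is assumed, as on any configured agent.

# Sketch only: fetch a connection from the factory and issue a GET.
conn = Puppet::Network::HttpPool.http_instance('puppet.example.com', 8140)
response = conn.get('/production/node/agent01.example.com', 'Accept' => 'pson')
puts response.code
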
diff --git a/lib/puppet/node.rb b/lib/puppet/node.rb
index fbb57b2dd..d61d49385 100644
--- a/lib/puppet/node.rb
+++ b/lib/puppet/node.rb
@@ -1,171 +1,184 @@
require 'puppet/indirector'
# A class for managing nodes, including their facts and environment.
class Puppet::Node
require 'puppet/node/facts'
require 'puppet/node/environment'
# Set up indirection, so that nodes can be looked for in
# the node sources.
extend Puppet::Indirector
# Use the node source as the indirection terminus.
indirects :node, :terminus_setting => :node_terminus, :doc => "Where to find node information.
A node is composed of its name, its facts, and its environment."
- attr_accessor :name, :classes, :source, :ipaddress, :parameters, :trusted_data
+ attr_accessor :name, :classes, :source, :ipaddress, :parameters, :trusted_data, :environment_name
attr_reader :time, :facts
::PSON.register_document_type('Node',self)
def self.from_data_hash(data)
raise ArgumentError, "No name provided in serialized data" unless name = data['name']
node = new(name)
node.classes = data['classes']
node.parameters = data['parameters']
- node.environment = data['environment']
+ node.environment_name = data['environment']
node
end
def self.from_pson(pson)
Puppet.deprecation_warning("from_pson is being removed in favour of from_data_hash.")
self.from_data_hash(pson)
end
def to_data_hash
result = {
'name' => name,
'environment' => environment.name,
}
result['classes'] = classes unless classes.empty?
result['parameters'] = parameters unless parameters.empty?
result
end
def to_pson_data_hash(*args)
{
'document_type' => "Node",
'data' => to_data_hash,
}
end
def to_pson(*args)
to_pson_data_hash.to_pson(*args)
end
def environment
if @environment
@environment
- elsif env = parameters["environment"]
- self.environment = env
- @environment
else
- Puppet.lookup(:environments).get(Puppet[:environment])
+ if env = parameters["environment"]
+ self.environment = env
+ elsif environment_name
+ self.environment = environment_name
+ else
+ # This should not be :current_environment; it is the default
+ # for a node when it has not specified its environment.
+ # It will be used to establish what the current environment is.
+ #
+ self.environment = Puppet.lookup(:environments).get(Puppet[:environment])
+ end
+
+ @environment
end
end
def environment=(env)
if env.is_a?(String) or env.is_a?(Symbol)
@environment = Puppet.lookup(:environments).get(env)
else
@environment = env
end
end
+ def has_environment_instance?
+ !@environment.nil?
+ end
+
def initialize(name, options = {})
raise ArgumentError, "Node names cannot be nil" unless name
@name = name
if classes = options[:classes]
if classes.is_a?(String)
@classes = [classes]
else
@classes = classes
end
else
@classes = []
end
@parameters = options[:parameters] || {}
@facts = options[:facts]
if env = options[:environment]
self.environment = env
end
@time = Time.now
end
# Merge the node facts with parameters from the node source.
def fact_merge
if @facts = Puppet::Node::Facts.indirection.find(name, :environment => environment)
@facts.sanitize
merge(@facts.values)
end
rescue => detail
error = Puppet::Error.new("Could not retrieve facts for #{name}: #{detail}")
error.set_backtrace(detail.backtrace)
raise error
end
# Merge any random parameters into our parameter list.
def merge(params)
params.each do |name, value|
@parameters[name] = value unless @parameters.include?(name)
end
@parameters["environment"] ||= self.environment.name.to_s
end
# Calculate the list of names we might use for looking
# up our node. This is only used for AST nodes.
def names
return [name] if Puppet.settings[:strict_hostname_checking]
names = []
names += split_name(name) if name.include?(".")
# First, get the fqdn
unless fqdn = parameters["fqdn"]
if parameters["hostname"] and parameters["domain"]
fqdn = parameters["hostname"] + "." + parameters["domain"]
else
Puppet.warning "Host is missing hostname and/or domain: #{name}"
end
end
# Now that we (might) have the fqdn, add each piece to the name
# list to search, in order of longest to shortest.
names += split_name(fqdn) if fqdn
# And make sure the node name is first, since that's the most
# likely usage.
# The name is usually the Certificate CN, but it can be
# set to the 'facter' hostname instead.
if Puppet[:node_name] == 'cert'
names.unshift name
else
names.unshift parameters["hostname"]
end
names.uniq
end
def split_name(name)
list = name.split(".")
tmp = []
list.each_with_index do |short, i|
tmp << list[0..i].join(".")
end
tmp.reverse
end
# Ensures the data is frozen
#
def trusted_data=(data)
Puppet.warning("Trusted node data modified for node #{name}") unless @trusted_data.nil?
@trusted_data = data.freeze
end
end
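
The reworked Node#environment resolves the environment in a fixed order: an explicitly set instance wins, then the "environment" parameter, then the environment_name carried over from serialized data, and finally the configured default. A standalone sketch of just that ordering (the method and values are illustrative):

# Standalone sketch of the resolution order in Node#environment above.
def resolve_environment(explicit, parameters, environment_name, default)
  explicit || parameters["environment"] || environment_name || default
end

puts resolve_environment(nil, {}, "test", "production")                          # => test
puts resolve_environment(nil, { "environment" => "dev" }, "test", "production")  # => dev
puts resolve_environment(nil, {}, nil, "production")                             # => production
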
diff --git a/lib/puppet/node/environment.rb b/lib/puppet/node/environment.rb
index 109fc9c57..5a0eed5e1 100644
--- a/lib/puppet/node/environment.rb
+++ b/lib/puppet/node/environment.rb
@@ -1,569 +1,595 @@
require 'puppet/util'
require 'puppet/util/cacher'
require 'monitor'
require 'puppet/parser/parser_factory'
# Just define it, so this class has fewer load dependencies.
class Puppet::Node
end
# Puppet::Node::Environment acts as a container for all configuration
# that is expected to vary between environments.
#
# ## The root environment
#
# In addition to normal environments that are defined by the user, there is a
# special 'root' environment. It is defined as an instance variable on the
# Puppet::Node::Environment metaclass. The environment name is `*root*` and can
# be accessed by looking up the `:root_environment` using {Puppet.lookup}.
#
# The primary purpose of the root environment is to contain parser functions
# that are not bound to a specific environment. The main case for this is for
# logging functions. Logging functions are attached to the 'root' environment
# when {Puppet::Parser::Functions.reset} is called.
class Puppet::Node::Environment
include Puppet::Util::Cacher
NO_MANIFEST = :no_manifest
# @api private
def self.seen
@seen ||= {}
end
# Create a new environment with the given name, or return an existing one
#
# The environment class memoizes instances so that attempts to instantiate an
# environment with the same name with an existing environment will return the
# existing environment.
#
# @overload self.new(environment)
# @param environment [Puppet::Node::Environment]
# @return [Puppet::Node::Environment] the environment passed as the param,
# this is implemented so that a calling class can use strings or
# environments interchangeably.
#
# @overload self.new(string)
# @param string [String, Symbol]
# @return [Puppet::Node::Environment] An existing environment if it exists,
# else a new environment with that name
#
# @overload self.new()
# @return [Puppet::Node::Environment] The environment as set by
# Puppet.settings[:environment]
#
# @api public
def self.new(name = nil)
return name if name.is_a?(self)
name ||= Puppet.settings.value(:environment)
raise ArgumentError, "Environment name must be specified" unless name
symbol = name.to_sym
return seen[symbol] if seen[symbol]
obj = self.create(symbol,
split_path(Puppet.settings.value(:modulepath, symbol)),
Puppet.settings.value(:manifest, symbol),
Puppet.settings.value(:config_version, symbol))
seen[symbol] = obj
end
# Create a new environment with the given name
#
# @param name [Symbol] the name of the environment
# @param modulepath [Array<String>] the list of paths from which to load modules
# @param manifest [String] the path to the manifest for the environment or
# the constant Puppet::Node::Environment::NO_MANIFEST if there is none.
# @param config_version [String] path to a script whose output will be added
# to report logs (optional)
# @return [Puppet::Node::Environment]
#
# @api public
def self.create(name, modulepath, manifest = NO_MANIFEST, config_version = nil)
obj = self.allocate
obj.send(:initialize,
name,
expand_dirs(extralibs() + modulepath),
manifest == NO_MANIFEST ? manifest : File.expand_path(manifest),
config_version)
obj
end
# A "reference" to a remote environment. The created environment instance
# isn't expected to exist on the local system, but is instead a reference to
# environment information on a remote system. For instance when a catalog is
# being applied, this will be used on the agent.
#
# @note This does not provide access to the information of the remote
# environment's modules, manifest, or anything else. It is simply a value
# object to pass around and use as an environment.
#
# @param name [Symbol] The name of the remote environment
#
def self.remote(name)
create(name, [], NO_MANIFEST)
end
# Instantiate a new environment
#
# @note {Puppet::Node::Environment.new} is overridden to return memoized
# objects, so this will not be invoked with the normal Ruby initialization
# semantics.
#
# @param name [Symbol] The environment name
def initialize(name, modulepath, manifest, config_version)
@name = name
@modulepath = modulepath
@manifest = manifest
@config_version = config_version
# set watching to true for legacy environments - the directory based environment loaders will set this to
# false for directory based environments after the environment has been created.
@watching = true
end
# Returns if files are being watched or not.
# @api private
#
def watching?
@watching
end
# Turns watching of files on or off
# @param flag [TrueClass, FalseClass] if files should be watched or not
# @api private
def watching=(flag)
@watching = flag
end
# Creates a new Puppet::Node::Environment instance, overriding any of the passed
# parameters.
#
# @param env_params [Hash<{Symbol => String,Array<String>}>] new environment
# parameters (:modulepath, :manifest, :config_version)
# @return [Puppet::Node::Environment]
def override_with(env_params)
return self.class.create(name,
env_params[:modulepath] || modulepath,
env_params[:manifest] || manifest,
env_params[:config_version] || config_version)
end
# Creates a new Puppet::Node::Environment instance, overriding manifest,
# modulepath, or :config_version from the passed settings if they were
# originally set from the commandline, or returns self if there is nothing to
# override.
#
# @param settings [Puppet::Settings] an initialized puppet settings instance
# @return [Puppet::Node::Environment] new overridden environment or self if
# there are no commandline changes from settings.
def override_from_commandline(settings)
overrides = {}
if settings.set_by_cli?(:modulepath)
overrides[:modulepath] = self.class.split_path(settings.value(:modulepath))
end
if settings.set_by_cli?(:config_version)
overrides[:config_version] = settings.value(:config_version)
end
if settings.set_by_cli?(:manifest) ||
(settings.set_by_cli?(:manifestdir) && settings.value(:manifest).start_with?(settings.value(:manifestdir)))
overrides[:manifest] = settings.value(:manifest)
end
overrides.empty? ?
self :
self.override_with(overrides)
end
# Retrieve the environment for the current process.
#
# @note This should only used when a catalog is being compiled.
#
# @api private
#
# @return [Puppet::Node::Environment] the currently set environment if one
# has been explicitly set, else it will return the '*root*' environment
def self.current
Puppet.deprecation_warning("Puppet::Node::Environment.current has been replaced by Puppet.lookup(:current_environment), see http://links.puppetlabs.com/current-env-deprecation")
Puppet.lookup(:current_environment)
end
# @param [String] name Environment name to check for valid syntax.
# @return [Boolean] true if name is valid
# @api public
def self.valid_name?(name)
!!name.match(/\A\w+\Z/)
end
# Clear all memoized environments and the 'current' environment
#
# @api private
def self.clear
seen.clear
end
# @!attribute [r] name
# @api public
# @return [Symbol] the human readable environment name that serves as the
# environment identifier
attr_reader :name
# @api public
# @return [Array<String>] All directories present on disk in the modulepath
def modulepath
@modulepath.find_all do |p|
Puppet::FileSystem.directory?(p)
end
end
# @api public
# @return [Array<String>] All directories in the modulepath (even if they are not present on disk)
def full_modulepath
@modulepath
end
# @!attribute [r] manifest
# @api public
# @return [String] path to the manifest file or directory.
attr_reader :manifest
# @!attribute [r] config_version
# @api public
# @return [String] path to a script whose output will be added to report logs
# (optional)
attr_reader :config_version
+ # Checks to make sure that this environment did not have a manifest set in
+ # its original environment.conf if Puppet is configured with
+ # +disable_per_environment_manifest+ set true. If it did, the environment's
+ # modules may not function as intended by the original authors, and we may
+ # seek to halt a puppet compilation for a node in this environment.
+ #
+ # The only exception to this would be if the environment.conf manifest is an exact,
+ # uninterpolated match for the current +default_manifest+ setting.
+ #
+ # @return [Boolean] true if using directory environments, and
+ # Puppet[:disable_per_environment_manifest] is true, and this environment's
+ # original environment.conf had a manifest setting that is not the
+ # Puppet[:default_manifest].
+ # @api public
+ def conflicting_manifest_settings?
+ return false if Puppet[:environmentpath].empty? || !Puppet[:disable_per_environment_manifest]
+ environment_conf = Puppet.lookup(:environments).get_conf(name)
+ original_manifest = environment_conf.raw_setting(:manifest)
+ !original_manifest.nil? && !original_manifest.empty? && original_manifest != Puppet[:default_manifest]
+ end
+
# Return an environment-specific Puppet setting.
#
# @api public
#
# @param param [String, Symbol] The environment setting to look up
# @return [Object] The resolved setting value
def [](param)
Puppet.settings.value(param, self.name)
end
# @api public
# @return [Puppet::Resource::TypeCollection] The current global TypeCollection
def known_resource_types
if @known_resource_types.nil?
@known_resource_types = Puppet::Resource::TypeCollection.new(self)
@known_resource_types.import_ast(perform_initial_import(), '')
end
@known_resource_types
end
# Yields each modules' plugin directory if the plugin directory (modulename/lib)
# is present on the filesystem.
#
# @yield [String] Yields the plugin directory from each module to the block.
# @api public
def each_plugin_directory(&block)
modules.map(&:plugin_directory).each do |lib|
lib = Puppet::Util::Autoload.cleanpath(lib)
yield lib if File.directory?(lib)
end
end
# Locate a module instance by the module name alone.
#
# @api public
#
# @param name [String] The module name
# @return [Puppet::Module, nil] The module if found, else nil
def module(name)
modules.find {|mod| mod.name == name}
end
# Locate a module instance by the full forge name (EG authorname/module)
#
# @api public
#
# @param forge_name [String] The module name
# @return [Puppet::Module, nil] The module if found, else nil
def module_by_forge_name(forge_name)
author, modname = forge_name.split('/')
found_mod = self.module(modname)
found_mod and found_mod.forge_name == forge_name ?
found_mod :
nil
end
# @!attribute [r] modules
# Return all modules for this environment in the order they appear in the
# modulepath.
# @note If multiple modules with the same name are present they will
# both be added, but methods like {#module} and {#module_by_forge_name}
# will return the first matching entry in this list.
# @note This value is cached so that the filesystem doesn't have to be
# re-enumerated every time this method is invoked, since that
# enumeration could be a costly operation and this method is called
# frequently. The cache expiry is determined by `Puppet[:filetimeout]`.
# @see Puppet::Util::Cacher.cached_attr
# @api public
# @return [Array<Puppet::Module>] All modules for this environment
cached_attr(:modules, Puppet[:filetimeout]) do
module_references = []
seen_modules = {}
modulepath.each do |path|
Dir.entries(path).each do |name|
warn_about_mistaken_path(path, name)
next if module_references.include?(name)
if not seen_modules[name]
module_references << {:name => name, :path => File.join(path, name)}
seen_modules[name] = true
end
end
end
module_references.collect do |reference|
begin
Puppet::Module.new(reference[:name], reference[:path], self)
rescue Puppet::Module::Error
nil
end
end.compact
end
# Generate a warning if the given directory in a module path entry is named `lib`.
#
# @api private
#
# @param path [String] The module directory containing the given directory
# @param name [String] The directory name
def warn_about_mistaken_path(path, name)
if name == "lib"
Puppet.debug("Warning: Found directory named 'lib' in module path ('#{path}/lib'); unless " +
"you are expecting to load a module named 'lib', your module path may be set " +
"incorrectly.")
end
end
# Modules broken out by directory in the modulepath
#
# @note This method _changes_ the current working directory while enumerating
# the modules. This seems rather dangerous.
#
# @api public
#
# @return [Hash<String, Array<Puppet::Module>>] A hash whose keys are file
# paths, and whose values are arrays of Puppet Modules for that path
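# @example Illustrative result shape (path and module names assumed)
#   env.modules_by_path.keys                      # => ["/etc/puppet/modules"]
#   env.modules_by_path.values.first.map(&:name)  # => ["apache", "ntp"]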
def modules_by_path
modules_by_path = {}
modulepath.each do |path|
Dir.chdir(path) do
module_names = Dir.glob('*').select do |d|
FileTest.directory?(d) && (File.basename(d) =~ /\A\w+(-\w+)*\Z/)
end
modules_by_path[path] = module_names.sort.map do |name|
Puppet::Module.new(name, File.join(path, name), self)
end
end
end
modules_by_path
end
# All module requirements for all modules in the environment modulepath
#
# @api public
#
# @comment This has nothing to do with an environment. It seems like it was
# stuffed into the first convenient class that vaguely involved modules.
#
# @example
# environment.module_requirements
# # => {
# # 'username/amodule' => [
# # {
# # 'name' => 'username/moduledep',
# # 'version' => '1.2.3',
# # 'version_requirement' => '>= 1.0.0',
# # },
# # {
# # 'name' => 'username/anotherdep',
# # 'version' => '4.5.6',
# # 'version_requirement' => '>= 3.0.0',
# # }
# # ]
# # }
# #
#
# @return [Hash<String, Array<Hash<String, String>>>] See the method example
# for an explanation of the return value.
def module_requirements
deps = {}
modules.each do |mod|
next unless mod.forge_name
deps[mod.forge_name] ||= []
mod.dependencies and mod.dependencies.each do |mod_dep|
dep_name = mod_dep['name'].tr('-', '/')
(deps[dep_name] ||= []) << {
'name' => mod.forge_name,
'version' => mod.version,
'version_requirement' => mod_dep['version_requirement']
}
end
end
deps.each do |mod, mod_deps|
deps[mod] = mod_deps.sort_by { |d| d['name'] }
end
deps
end
# Set a periodic watcher on the file, so we can tell if it has changed.
# If watching has been turned off, this call has no effect.
# @param file [File, String] File instance or filename
# @api private
def watch_file(file)
if watching?
known_resource_types.watch_file(file.to_s)
end
end
# Checks if a reparse is required (cache of files is stale).
# A change to the global 'code' setting always triggers a reparse; watched files are only checked when watching is enabled.
#
def check_for_reparse
- if watching?
- if (Puppet[:code] != @parsed_code) || (@known_resource_types && @known_resource_types.require_reparse?)
- @parsed_code = nil
- @known_resource_types = nil
- end
+ if (Puppet[:code] != @parsed_code) || (watching? && @known_resource_types && @known_resource_types.require_reparse?)
+ @parsed_code = nil
+ @known_resource_types = nil
end
end
# @return [String] The stringified value of the `name` instance variable
# @api public
def to_s
name.to_s
end
# @return [Symbol] The `name` value, cast to a string, then cast to a symbol.
#
# @api public
#
# @note the `name` instance variable is a Symbol, but this casts the value
# to a String and then converts it back into a Symbol which will needlessly
# create an object that needs to be garbage collected
def to_sym
to_s.to_sym
end
# Return only the environment name when serializing.
#
# The only thing we care about when serializing an environment is its
# identity; everything else is ephemeral and should not be stored or
# transmitted.
#
# @api public
def to_zaml(z)
self.to_s.to_zaml(z)
end
def self.split_path(path_string)
path_string.split(File::PATH_SEPARATOR)
end
def ==(other)
return true if other.kind_of?(Puppet::Node::Environment) &&
self.name == other.name &&
self.full_modulepath == other.full_modulepath &&
self.manifest == other.manifest
end
alias eql? ==
def hash
[self.class, name, full_modulepath, manifest].hash
end
private
def self.extralibs()
if ENV["PUPPETLIB"]
split_path(ENV["PUPPETLIB"])
else
[]
end
end
def self.expand_dirs(dirs)
dirs.collect do |dir|
File.expand_path(dir)
end
end
# Reparse the manifests for the given environment
#
# There are two sources that can be used for the initial parse:
#
# 1. The value of `Puppet.settings[:code]`: Puppet can take a string from
# its settings and parse that as a manifest. This is used by various
# Puppet applications to read in a manifest and pass it to the
# environment as a side effect. This is attempted first.
# 2. The contents of `Puppet.settings[:manifest]`: Puppet will try to load
# the environment manifest. By default this is `$manifestdir/site.pp`
#
# @note This method will return an empty hostclass if
# `Puppet.settings[:ignoreimport]` is set to true.
#
# @return [Puppet::Parser::AST::Hostclass] The AST hostclass object
# representing the 'main' hostclass
def perform_initial_import
return empty_parse_result if Puppet[:ignoreimport]
parser = Puppet::Parser::ParserFactory.parser(self)
@parsed_code = Puppet[:code]
if @parsed_code != ""
parser.string = @parsed_code
parser.parse
else
file = self.manifest
# if the manifest file is a reference to a directory, parse and combine all .pp files in that
# directory
if file == NO_MANIFEST
Puppet::Parser::AST::Hostclass.new('')
elsif File.directory?(file)
- parse_results = Dir.entries(file).find_all { |f| f =~ /\.pp$/ }.sort.map do |pp_file|
- parser.file = File.join(file, pp_file)
- parser.parse
+ if Puppet[:parser] == 'future'
+ parse_results = Puppet::FileSystem::PathPattern.absolute(File.join(file, '**/*.pp')).glob.sort.map do | file_to_parse |
+ parser.file = file_to_parse
+ parser.parse
+ end
+ else
+ parse_results = Dir.entries(file).find_all { |f| f =~ /\.pp$/ }.sort.map do |file_to_parse|
+ parser.file = File.join(file, file_to_parse)
+ parser.parse
+ end
end
# Use a parser type specific merger to concatenate the results
Puppet::Parser::AST::Hostclass.new('', :code => Puppet::Parser::ParserFactory.code_merger.concatenate(parse_results))
else
parser.file = file
parser.parse
end
end
rescue => detail
@known_resource_types.parse_failed = true
msg = "Could not parse for environment #{self}: #{detail}"
error = Puppet::Error.new(msg)
error.set_backtrace(detail.backtrace)
raise error
end
# Return an empty toplevel hostclass to indicate that no file was loaded
#
# This is used as the return value of {#perform_initial_import} when
# `Puppet.settings[:ignoreimport]` is true.
#
# @return [Puppet::Parser::AST::Hostclass]
def empty_parse_result
return Puppet::Parser::AST::Hostclass.new('')
end
# A special "null" environment
#
# This environment should be used when there is no specific environment in
# effect.
NONE = create(:none, [])
end
diff --git a/lib/puppet/parser/ast/collection.rb b/lib/puppet/parser/ast/collection.rb
index 12f73281e..0f4b17500 100644
--- a/lib/puppet/parser/ast/collection.rb
+++ b/lib/puppet/parser/ast/collection.rb
@@ -1,49 +1,53 @@
require 'puppet'
require 'puppet/parser/ast/branch'
require 'puppet/parser/collector'
# An object that collects stored objects from the central cache and returns
# them to the current host, yo.
class Puppet::Parser::AST
class Collection < AST::Branch
attr_accessor :type, :query, :form
attr_reader :override
associates_doc
# We return an object that does a late-binding evaluation.
def evaluate(scope)
match, code = query && query.safeevaluate(scope)
+ if @type == 'class'
+ fail "Classes cannot be collected"
+ end
+
resource_type = scope.find_resource_type(@type)
fail "Resource type #{@type} doesn't exist" unless resource_type
newcoll = Puppet::Parser::Collector.new(scope, resource_type.name, match, code, self.form)
scope.compiler.add_collection(newcoll)
# overrides if any
# Evaluate all of the specified params.
if @override
params = @override.collect { |param| param.safeevaluate(scope) }
newcoll.add_override(
:parameters => params,
:file => @file,
:line => @line,
:source => scope.source,
:scope => scope
)
end
newcoll
end
# Handle our parameter ourselves
def override=(override)
@override = if override.is_a?(AST::ASTArray)
override
else
AST::ASTArray.new(:line => override.line,:file => override.file,:children => [override])
end
end
end
end
diff --git a/lib/puppet/parser/ast/collexpr.rb b/lib/puppet/parser/ast/collexpr.rb
index 481c14b20..92f9cc10f 100644
--- a/lib/puppet/parser/ast/collexpr.rb
+++ b/lib/puppet/parser/ast/collexpr.rb
@@ -1,109 +1,109 @@
require 'puppet'
require 'puppet/parser/ast/branch'
require 'puppet/parser/collector'
# An object that collects stored objects from the central cache and returns
# them to the current host, yo.
class Puppet::Parser::AST
class CollExpr < AST::Branch
attr_accessor :test1, :test2, :oper, :form, :type, :parens
def evaluate(scope)
if Puppet[:parser] == 'future'
evaluate4x(scope)
else
evaluate3x(scope)
end
end
# We return an object that does a late-binding evaluation.
def evaluate3x(scope)
# Make sure our contained expressions have all the info they need.
[@test1, @test2].each do |t|
if t.is_a?(self.class)
t.form ||= self.form
t.type ||= self.type
end
end
# The code is only used for virtual lookups
match1, code1 = @test1.safeevaluate scope
match2, code2 = @test2.safeevaluate scope
# First build up the virtual code.
# If we're a conjunction operator, then we're calling code. I did
# some speed comparisons, and it's at least twice as fast doing these
# case statements as doing an eval here.
code = proc do |resource|
case @oper
when "and"; code1.call(resource) and code2.call(resource)
when "or"; code1.call(resource) or code2.call(resource)
when "=="
if match1 == "tag"
resource.tagged?(match2)
else
if resource[match1].is_a?(Array)
resource[match1].include?(match2)
else
resource[match1] == match2
end
end
when "!="; resource[match1] != match2
end
end
match = [match1, @oper, match2]
return match, code
end
# Late binding evaluation of a collect expression (as done in 3x), but with proper Puppet Language
# semantics for equals and include
#
def evaluate4x(scope)
# Make sure our contained expressions have all the info they need.
[@test1, @test2].each do |t|
if t.is_a?(self.class)
t.form ||= self.form
t.type ||= self.type
end
end
# The code is only used for virtual lookups
match1, code1 = @test1.safeevaluate scope
match2, code2 = @test2.safeevaluate scope
# First build up the virtual code.
# If we're a conjunction operator, then we're calling code. I did
# some speed comparisons, and it's at least twice as fast doing these
# case statements as doing an eval here.
code = proc do |resource|
case @oper
when "and"; code1.call(resource) and code2.call(resource)
when "or"; code1.call(resource) or code2.call(resource)
when "=="
if match1 == "tag"
resource.tagged?(match2)
else
if resource[match1].is_a?(Array)
- @@compare_operator.include?(resource[match1], match2)
+ @@compare_operator.include?(resource[match1], match2, scope)
else
@@compare_operator.equals(resource[match1], match2)
end
end
when "!="; ! @@compare_operator.equals(resource[match1], match2)
end
end
match = [match1, @oper, match2]
return match, code
end
def initialize(hash = {})
super
if Puppet[:parser] == "future"
@@compare_operator ||= Puppet::Pops::Evaluator::CompareOperator.new
end
raise ArgumentError, "Invalid operator #{@oper}" unless %w{== != and or}.include?(@oper)
end
end
end
diff --git a/lib/puppet/parser/ast/node.rb b/lib/puppet/parser/ast/node.rb
index b69a5c4e0..fd6443327 100644
--- a/lib/puppet/parser/ast/node.rb
+++ b/lib/puppet/parser/ast/node.rb
@@ -1,20 +1,25 @@
require 'puppet/parser/ast/top_level_construct'
class Puppet::Parser::AST::Node < Puppet::Parser::AST::TopLevelConstruct
attr_accessor :names, :context
def initialize(names, context = {}, &ruby_code)
raise ArgumentError, "names should be an array" unless names.is_a? Array
+ if context[:parent]
+ msg = "Deprecation notice: Node inheritance is not supported in Puppet >= 4.0.0. See http://links.puppetlabs.com/puppet-node-inheritance-deprecation"
+ Puppet.puppet_deprecation_warning(msg, :key => "node-inheritance-#{names.join}", :file => context[:file], :line => context[:line])
+ end
+
@names = names
@context = context
@ruby_code = ruby_code
end
def instantiate(modname)
@names.collect do |name|
new_node = Puppet::Resource::Type.new(:node, name, @context.merge(:module_name => modname))
new_node.ruby_code = @ruby_code if @ruby_code
new_node
end
end
end
diff --git a/lib/puppet/parser/ast/pops_bridge.rb b/lib/puppet/parser/ast/pops_bridge.rb
index 77d06da9d..9c6a31744 100644
--- a/lib/puppet/parser/ast/pops_bridge.rb
+++ b/lib/puppet/parser/ast/pops_bridge.rb
@@ -1,209 +1,252 @@
require 'puppet/parser/ast/top_level_construct'
require 'puppet/pops'
# The AST::Bridge contains classes that bridges between the new Pops based model
# and the 3.x AST. This is required to be able to reuse the Puppet::Resource::Type which is
# fundamental for the rest of the logic.
#
class Puppet::Parser::AST::PopsBridge
# Bridges to one Pops Model Expression
# The @value is the expression
# This is used to represent the body of a class, definition, or node, and for each parameter's default value
# expression.
#
class Expression < Puppet::Parser::AST::Leaf
def initialize args
super
- @@evaluator ||= Puppet::Pops::Parser::EvaluatingParser::Transitional.new()
+ @@evaluator ||= Puppet::Pops::Parser::EvaluatingParser.new()
end
def to_s
Puppet::Pops::Model::ModelTreeDumper.new.dump(@value)
end
def evaluate(scope)
@@evaluator.evaluate(scope, @value)
end
# Adapts to 3x where top level constructs need to have each to iterate over children. Short-circuit this
# by yielding self. By adding this there is no need to wrap a pops expression inside an AST::BlockExpression
#
def each
yield self
end
def sequence_with(other)
if value.nil?
# This happens when testing and not having a complete setup
other
else
# When does this happen ? Ever ?
raise "sequence_with called on Puppet::Parser::AST::PopsBridge::Expression - please report use case"
# What should be done if the above happens (We don't want this to happen).
# Puppet::Parser::AST::BlockExpression.new(:children => [self] + other.children)
end
end
# The 3x AST requires code plugged into it to have this method in certain positions in the tree. The purpose
# is to either print the content, or to look for things that need to be defined. This implementation
# cheats by always returning an empty array. (This allows simple files to not require a "Program" at the top.)
#
def children
[]
end
end
class NilAsUndefExpression < Expression
def evaluate(scope)
result = super
result.nil? ? :undef : result
end
end
# Bridges the top level "Program" produced by the pops parser.
# Its main purpose is to give one point where all definitions are instantiated (actually defined since the
# Puppet 3x terminology is somewhat misleading - the definitions are instantiated, but instances of the created types
# are not created, that happens when classes are included / required, nodes are matched and when resources are instantiated
# by a resource expression (which is also used to instantiate a host class).
#
class Program < Puppet::Parser::AST::TopLevelConstruct
attr_reader :program_model, :context
def initialize(program_model, context = {})
@program_model = program_model
@context = context
@ast_transformer ||= Puppet::Pops::Model::AstTransformer.new(@context[:file])
- @@evaluator ||= Puppet::Pops::Parser::EvaluatingParser::Transitional.new()
+ @@evaluator ||= Puppet::Pops::Parser::EvaluatingParser.new()
end
# This is the 3x API, the 3x AST searches through all code to find the instructions that can be instantiated.
# This Pops-model based instantiation relies on the parser to build this list while parsing (which is more
# efficient as it avoids one full scan of all logic via recursive enumeration/yield)
#
def instantiate(modname)
@program_model.definitions.collect do |d|
case d
when Puppet::Pops::Model::HostClassDefinition
instantiate_HostClassDefinition(d, modname)
when Puppet::Pops::Model::ResourceTypeDefinition
instantiate_ResourceTypeDefinition(d, modname)
when Puppet::Pops::Model::NodeDefinition
instantiate_NodeDefinition(d, modname)
else
raise Puppet::ParseError, "Internal Error: Unknown type of definition - got '#{d.class}'"
end
end.flatten().compact() # flatten since node definition may have returned an array
# Compact since functions are not understood by compiler
end
def evaluate(scope)
@@evaluator.evaluate(scope, program_model)
end
# Adapts to 3x where top level constructs need to have each to iterate over children. Short-circuit this
# by yielding self. This means that the HostClass container will call this bridge instance with `instantiate`.
#
def each
yield self
end
private
def instantiate_Parameter(o)
# 3x needs parameters as an array of `[name]` or `[name, value_expr]`
# One problem is that the parameter evaluation takes place in the wrong context in 3x (the caller's) and
# can thus reference all sorts of information. Here the value expression is wrapped in an AST Bridge to a Pops
# expression, since the Pops side cannot control the evaluation.
if o.value
- [ o.name, NilAsUndefExpression.new(:value => o.value) ]
+ [o.name, NilAsUndefExpression.new(:value => o.value)]
else
- [ o.name ]
+ [o.name]
end
end
+ def create_type_map(definition)
+ result = {}
+ # No need to do anything if there are no parameters
+ return result unless definition.parameters.size > 0
+
+ # No need to do anything if there are no typed parameters
+ typed_parameters = definition.parameters.select {|p| p.type_expr }
+ return result if typed_parameters.empty?
+
+ # If there are typed parameters, they need to be evaluated to produce the corresponding type
+ # instances. This evaluation requires a scope. A scope is not available when doing deserialization
+ # (there is also no initialized evaluator). When running apply and test however, the environment is
+ # reused and we may reenter without a scope (which is fine). A debug message is then output in case
+ # there is a need to track down the odd corner case. See {#obtain_scope}.
+ #
+ if scope = obtain_scope
+ typed_parameters.each do |p|
+ result[p.name] = @@evaluator.evaluate(scope, p.type_expr)
+ end
+ end
+ result
+ end
+
+ # Obtains the scope or issues a warning if :global_scope is not bound
+ def obtain_scope
+ scope = Puppet.lookup(:global_scope) do
+ # This occurs when testing and when applying a catalog (there is no scope available then), and
+ # when running tests that run a partial setup.
+ # This is bad if the logic is trying to compile, but a warning cannot be issued since it is a normal
+ # use case that there is no scope when requesting the type in order to just get the parameters.
+ Puppet.debug("Instantiating Resource with type checked parameters - scope is missing, skipping type checking.")
+ nil
+ end
+ scope
+ end
+
# Produces a hash with data for Definition and HostClass
def args_from_definition(o, modname)
args = {
:arguments => o.parameters.collect {|p| instantiate_Parameter(p) },
+ :argument_types => create_type_map(o),
:module_name => modname
}
unless is_nop?(o.body)
args[:code] = Expression.new(:value => o.body)
end
@ast_transformer.merge_location(args, o)
end
def instantiate_HostClassDefinition(o, modname)
args = args_from_definition(o, modname)
- args[:parent] = o.parent_class
+ args[:parent] = absolute_reference(o.parent_class)
Puppet::Resource::Type.new(:hostclass, o.name, @context.merge(args))
end
def instantiate_ResourceTypeDefinition(o, modname)
Puppet::Resource::Type.new(:definition, o.name, @context.merge(args_from_definition(o, modname)))
end
def instantiate_NodeDefinition(o, modname)
args = { :module_name => modname }
unless is_nop?(o.body)
args[:code] = Expression.new(:value => o.body)
end
unless is_nop?(o.parent)
args[:parent] = @ast_transformer.hostname(o.parent)
end
host_matches = @ast_transformer.hostname(o.host_matches)
@ast_transformer.merge_location(args, o)
host_matches.collect do |name|
Puppet::Resource::Type.new(:node, name, @context.merge(args))
end
end
# Propagates a found Function to the appropriate loader.
# This is for 4x future-evaluator/loader
#
def instantiate_FunctionDefinition(function_definition, modname)
loaders = (Puppet.lookup(:loaders) { nil })
unless loaders
raise Puppet::ParseError, "Internal Error: Puppet Context ':loaders' missing - cannot define any functions"
end
loader =
if modname.nil? || modname == ""
# TODO : Later when functions can be private, a decision is needed regarding what that means.
# A private environment loader could be used for logic outside of modules, then only that logic
# would see the function.
#
# Use the private loader, this function may see the environment's dependencies (currently, all modules)
loaders.private_environment_loader()
else
# TODO : Later check if function is private, and then add it to
# private_loader_for_module
#
loaders.public_loader_for_module(modname)
end
unless loader
raise Puppet::ParseError, "Internal Error: did not find public loader for module: '#{modname}'"
end
# Instantiate Function, and store it in the environment loader
typed_name, f = Puppet::Pops::Loader::PuppetFunctionInstantiator.create_from_model(function_definition, loader)
loader.set_entry(typed_name, f, Puppet::Pops::Adapters::SourcePosAdapter.adapt(function_definition).to_uri)
nil # do not want the function to inadvertently leak into 3x
end
def code()
Expression.new(:value => @value)
end
def is_nop?(o)
@ast_transformer.is_nop?(o)
end
+ def absolute_reference(ref)
+ if ref.nil? || ref.empty? || ref.start_with?('::')
+ ref
+ else
+ "::#{ref}"
+ end
+ end
end
-
end
diff --git a/lib/puppet/parser/compiler.rb b/lib/puppet/parser/compiler.rb
index 7f4e82298..5df2916fc 100644
--- a/lib/puppet/parser/compiler.rb
+++ b/lib/puppet/parser/compiler.rb
@@ -1,598 +1,626 @@
require 'forwardable'
require 'puppet/node'
require 'puppet/resource/catalog'
require 'puppet/util/errors'
require 'puppet/resource/type_collection_helper'
# Maintain a graph of scopes, along with a bunch of data
# about the individual catalog we're compiling.
class Puppet::Parser::Compiler
extend Forwardable
include Puppet::Util
include Puppet::Util::Errors
include Puppet::Util::MethodHelper
include Puppet::Resource::TypeCollectionHelper
def self.compile(node)
$env_module_directories = nil
node.environment.check_for_reparse
+ if node.environment.conflicting_manifest_settings?
+ errmsg = [
+ "The 'disable_per_environment_manifest' setting is true, and this '#{node.environment}'",
+ "has an environment.conf manifest that conflicts with the 'default_manifest' setting.",
+ "Compilation has been halted in order to avoid running a catalog which may be using",
+ "unexpected manifests. For more information, see",
+ "http://docs.puppetlabs.com/puppet/latest/reference/environments.html",
+ ]
+ raise(Puppet::Error, errmsg.join(' '))
+ end
+
new(node).compile.to_resource
rescue => detail
message = "#{detail} on node #{node.name}"
Puppet.log_exception(detail, message)
raise Puppet::Error, message, detail.backtrace
end
attr_reader :node, :facts, :collections, :catalog, :resources, :relationships, :topscope
# The injector that provides lookup services, or nil if accessed before the compiler has started compiling and
# bootstrapped. The injector is initialized and available before any manifests are evaluated.
#
# @return [Puppet::Pops::Binder::Injector, nil] The injector that provides lookup services for this compiler/environment
# @api public
#
attr_accessor :injector
# Access to the configured loaders for 4x
# @return [Puppet::Pops::Loader::Loaders] the configured loaders
# @api private
attr_reader :loaders
# The injector that provides lookup services during the creation of the {#injector}.
# @return [Puppet::Pops::Binder::Injector, nil] The injector that provides lookup services during injector creation
# for this compiler/environment
#
# @api private
#
attr_accessor :boot_injector
# Add a collection to the global list.
def_delegator :@collections, :<<, :add_collection
def_delegator :@relationships, :<<, :add_relationship
# Store a resource override.
def add_override(override)
# If possible, merge the override in immediately.
if resource = @catalog.resource(override.ref)
resource.merge(override)
else
# Otherwise, store the override for later; these
# get evaluated in Resource#finish.
@resource_overrides[override.ref] << override
end
end
def add_resource(scope, resource)
@resources << resource
# Note that this will fail if the resource is not unique.
@catalog.add_resource(resource)
if not resource.class? and resource[:stage]
raise ArgumentError, "Only classes can set 'stage'; normal resources like #{resource} cannot change run stage"
end
# Stages should not be inside of classes. They are always a
# top-level container, regardless of where they appear in the
# manifest.
return if resource.stage?
# This adds a resource to the class it lexically appears in in the
# manifest.
unless resource.class?
return @catalog.add_edge(scope.resource, resource)
end
end
# Do we use nodes found in the code, vs. the external node sources?
def_delegator :known_resource_types, :nodes?, :ast_nodes?
# Store the fact that we've evaluated a class
def add_class(name)
@catalog.add_class(name) unless name == ""
end
# Return a list of all of the defined classes.
def_delegator :@catalog, :classes, :classlist
# Compile our catalog. This mostly revolves around finding and evaluating classes.
# This is the main entry into our catalog.
def compile
Puppet.override( @context_overrides , "For compiling #{node.name}") do
@catalog.environment_instance = environment
# Set the client's parameters into the top scope.
- Puppet::Util::Profiler.profile("Compile: Set node parameters") { set_node_parameters }
+ Puppet::Util::Profiler.profile("Compile: Set node parameters", [:compiler, :set_node_params]) { set_node_parameters }
- Puppet::Util::Profiler.profile("Compile: Created settings scope") { create_settings_scope }
+ Puppet::Util::Profiler.profile("Compile: Created settings scope", [:compiler, :create_settings_scope]) { create_settings_scope }
if is_binder_active?
# create injector, if not already created - this is for 3x that does not trigger
# lazy loading of injector via context
- Puppet::Util::Profiler.profile("Compile: Created injector") { injector }
+ Puppet::Util::Profiler.profile("Compile: Created injector", [:compiler, :create_injector]) { injector }
end
- Puppet::Util::Profiler.profile("Compile: Evaluated main") { evaluate_main }
+ Puppet::Util::Profiler.profile("Compile: Evaluated main", [:compiler, :evaluate_main]) { evaluate_main }
- Puppet::Util::Profiler.profile("Compile: Evaluated AST node") { evaluate_ast_node }
+ Puppet::Util::Profiler.profile("Compile: Evaluated AST node", [:compiler, :evaluate_ast_node]) { evaluate_ast_node }
- Puppet::Util::Profiler.profile("Compile: Evaluated node classes") { evaluate_node_classes }
+ Puppet::Util::Profiler.profile("Compile: Evaluated node classes", [:compiler, :evaluate_node_classes]) { evaluate_node_classes }
- Puppet::Util::Profiler.profile("Compile: Evaluated generators") { evaluate_generators }
+ Puppet::Util::Profiler.profile("Compile: Evaluated generators", [:compiler, :evaluate_generators]) { evaluate_generators }
- Puppet::Util::Profiler.profile("Compile: Finished catalog") { finish }
+ Puppet::Util::Profiler.profile("Compile: Finished catalog", [:compiler, :finish_catalog]) { finish }
fail_on_unevaluated
@catalog
end
end
# Constructs the overrides for the context
def context_overrides()
if Puppet[:parser] == 'future'
require 'puppet/loaders'
{
:current_environment => environment,
:global_scope => @topscope, # 4x placeholder for new global scope
:loaders => lambda {|| loaders() }, # 4x loaders
:injector => lambda {|| injector() } # 4x API - via context instead of via compiler
}
else
{
:current_environment => environment,
}
end
end
def_delegator :@collections, :delete, :delete_collection
# Return the node's environment.
def environment
- unless node.environment.is_a? Puppet::Node::Environment
- raise Puppet::DevError, "node #{node} has an invalid environment!"
- end
node.environment
end
# Evaluate all of the classes specified by the node.
# Classes with parameters are evaluated as if they were declared.
# Classes without parameters or with an empty set of parameters are evaluated
# as if they were included. This means classes with an empty set of
# parameters won't conflict even if the class has already been included.
def evaluate_node_classes
if @node.classes.is_a? Hash
classes_with_params, classes_without_params = @node.classes.partition {|name,params| params and !params.empty?}
# The results from Hash#partition are arrays of pairs rather than hashes,
# so we have to convert to the forms evaluate_classes expects (Hash, and
# Array of class names)
classes_with_params = Hash[classes_with_params]
classes_without_params.map!(&:first)
else
classes_with_params = {}
classes_without_params = @node.classes
end
evaluate_classes(classes_without_params, @node_scope || topscope)
evaluate_classes(classes_with_params, @node_scope || topscope)
end
# Evaluate each specified class in turn. If there are any classes we can't
# find, raise an error. This method really just creates resource objects
# that point back to the classes, and then the resources are themselves
# evaluated later in the process.
#
# Sometimes we evaluate classes with a fully qualified name already, in which
# case, we tell scope.find_hostclass we've pre-qualified the name so it
# doesn't need to search its namespaces again. This gets around a weird
# edge case of duplicate class names, one at top scope and one nested in our
# namespace and the wrong one (or both!) getting selected. See ticket #13349
# for more detail. --jeffweiss 26 apr 2012
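# @example Illustrative calls (class names and parameters are assumed)
#   compiler.evaluate_classes(['ntp', 'ssh'], scope)                               # include-like, no parameters
#   compiler.evaluate_classes({'ntp' => {'servers' => ['0.pool.ntp.org']}}, scope) # declaration-like, with parameters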
def evaluate_classes(classes, scope, lazy_evaluate = true, fqname = false)
raise Puppet::DevError, "No source for scope passed to evaluate_classes" unless scope.source
class_parameters = nil
# if we are a param class, save the classes hash
# and transform classes to be the keys
if classes.class == Hash
class_parameters = classes
classes = classes.keys
end
- classes.each do |name|
- # If we can find the class, then make a resource that will evaluate it.
- if klass = scope.find_hostclass(name, :assume_fqname => fqname)
-
- # If parameters are passed, then attempt to create a duplicate resource
- # so the appropriate error is thrown.
- if class_parameters
- resource = klass.ensure_in_catalog(scope, class_parameters[name] || {})
- else
- next if scope.class_scope(klass)
- resource = klass.ensure_in_catalog(scope)
- end
- # If they've disabled lazy evaluation (which the :include function does),
- # then evaluate our resource immediately.
- resource.evaluate unless lazy_evaluate
- else
- raise Puppet::Error, "Could not find class #{name} for #{node.name}"
+ hostclasses = classes.collect do |name|
+ scope.find_hostclass(name, :assume_fqname => fqname) or raise Puppet::Error, "Could not find class #{name} for #{node.name}"
+ end
+
+ if class_parameters
+ resources = ensure_classes_with_parameters(scope, hostclasses, class_parameters)
+ if !lazy_evaluate
+ resources.each(&:evaluate)
end
+
+ resources
+ else
+ already_included, newly_included = ensure_classes_without_parameters(scope, hostclasses)
+ if !lazy_evaluate
+ newly_included.each(&:evaluate)
+ end
+
+ already_included + newly_included
end
end
def evaluate_relationships
@relationships.each { |rel| rel.evaluate(catalog) }
end
# Return a resource by either its ref or its type and title.
def_delegator :@catalog, :resource, :findresource
def initialize(node, options = {})
@node = node
set_options(options)
initvars
end
# Create a new scope, with either a specified parent scope or
# using the top scope.
def newscope(parent, options = {})
parent ||= topscope
scope = Puppet::Parser::Scope.new(self, options)
scope.parent = parent
scope
end
# Return any overrides for the given resource.
def resource_overrides(resource)
@resource_overrides[resource.ref]
end
def injector
create_injector if @injector.nil?
@injector
end
def loaders
@loaders ||= Puppet::Pops::Loaders.new(environment)
end
def boot_injector
create_boot_injector(nil) if @boot_injector.nil?
@boot_injector
end
# Creates the boot injector from registered system, default, and injector config.
# @return [Puppet::Pops::Binder::Injector] the created boot injector
# @api private Cannot be 'private' since it is called from the BindingsComposer.
#
def create_boot_injector(env_boot_bindings)
assert_binder_active()
pb = Puppet::Pops::Binder
boot_contribution = pb::SystemBindings.injector_boot_contribution(env_boot_bindings)
final_contribution = pb::SystemBindings.final_contribution
binder = pb::Binder.new(pb::BindingsFactory.layered_bindings(final_contribution, boot_contribution))
@boot_injector = pb::Injector.new(binder)
end
# Answers if Puppet Binder should be active or not, and if it should and is not active, then it is activated.
# @return [Boolean] true if the Puppet Binder should be activated
def is_binder_active?
should_be_active = Puppet[:binder] || Puppet[:parser] == 'future'
if should_be_active
# TODO: this should be in a central place, not just for ParserFactory anymore...
Puppet::Parser::ParserFactory.assert_rgen_installed()
@@binder_loaded ||= false
unless @@binder_loaded
require 'puppet/pops'
require 'puppetx'
@@binder_loaded = true
end
end
should_be_active
end
private
+ def ensure_classes_with_parameters(scope, hostclasses, parameters)
+ hostclasses.collect do |klass|
+ klass.ensure_in_catalog(scope, parameters[klass.name] || {})
+ end
+ end
+
+ def ensure_classes_without_parameters(scope, hostclasses)
+ already_included = []
+ newly_included = []
+ hostclasses.each do |klass|
+ class_scope = scope.class_scope(klass)
+ if class_scope
+ already_included << class_scope.resource
+ else
+ newly_included << klass.ensure_in_catalog(scope)
+ end
+ end
+
+ [already_included, newly_included]
+ end
+
# If ast nodes are enabled, then see if we can find and evaluate one.
def evaluate_ast_node
return unless ast_nodes?
# Now see if we can find the node.
astnode = nil
@node.names.each do |name|
break if astnode = known_resource_types.node(name.to_s.downcase)
end
unless (astnode ||= known_resource_types.node("default"))
raise Puppet::ParseError, "Could not find default node or by name with '#{node.names.join(", ")}'"
end
# Create a resource to model this node, and then add it to the list
# of resources.
resource = astnode.ensure_in_catalog(topscope)
resource.evaluate
@node_scope = topscope.class_scope(astnode)
end
# Evaluate our collections and return true if anything returned an object.
# The 'true' is used to continue a loop, so it's important.
def evaluate_collections
return false if @collections.empty?
exceptwrap do
# We have to iterate over a dup of the array because
# collections can delete themselves from the list, which
# changes its length and causes some collections to get missed.
- Puppet::Util::Profiler.profile("Evaluated collections") do
+ Puppet::Util::Profiler.profile("Evaluated collections", [:compiler, :evaluate_collections]) do
found_something = false
@collections.dup.each do |collection|
found_something = true if collection.evaluate
end
found_something
end
end
end
# Make sure all of our resources have been evaluated into native resources.
# We return true if any resources have, so that we know to continue the
# evaluate_generators loop.
def evaluate_definitions
exceptwrap do
- Puppet::Util::Profiler.profile("Evaluated definitions") do
+ Puppet::Util::Profiler.profile("Evaluated definitions", [:compiler, :evaluate_definitions]) do
!unevaluated_resources.each do |resource|
- Puppet::Util::Profiler.profile("Evaluated resource #{resource}") do
+ Puppet::Util::Profiler.profile("Evaluated resource #{resource}", [:compiler, :evaluate_resource, resource]) do
resource.evaluate
end
end.empty?
end
end
end
# Iterate over collections and resources until we're sure that the whole
# compile is evaluated. This is necessary because both collections
# and defined resources can generate new resources, which themselves could
# be defined resources.
def evaluate_generators
count = 0
loop do
done = true
- Puppet::Util::Profiler.profile("Iterated (#{count + 1}) on generators") do
+ Puppet::Util::Profiler.profile("Iterated (#{count + 1}) on generators", [:compiler, :iterate_on_generators]) do
# Call collections first, then definitions.
done = false if evaluate_collections
done = false if evaluate_definitions
end
break if done
count += 1
if count > 1000
raise Puppet::ParseError, "Somehow looped more than 1000 times while evaluating host catalog"
end
end
end
# Find and evaluate our main object, if possible.
def evaluate_main
@main = known_resource_types.find_hostclass([""], "") || known_resource_types.add(Puppet::Resource::Type.new(:hostclass, ""))
@topscope.source = @main
@main_resource = Puppet::Parser::Resource.new("class", :main, :scope => @topscope, :source => @main)
@topscope.resource = @main_resource
add_resource(@topscope, @main_resource)
@main_resource.evaluate
end
# Make sure the entire catalog is evaluated.
def fail_on_unevaluated
fail_on_unevaluated_overrides
fail_on_unevaluated_resource_collections
end
# If there are any resource overrides remaining, then we could
# not find the resource they were supposed to override, so we
# want to throw an exception.
def fail_on_unevaluated_overrides
remaining = @resource_overrides.values.flatten.collect(&:ref)
if !remaining.empty?
fail Puppet::ParseError,
"Could not find resource(s) #{remaining.join(', ')} for overriding"
end
end
# Make sure we don't have any remaining collections that specifically
# look for resources, because we want to consider those to be
# parse errors.
def fail_on_unevaluated_resource_collections
remaining = @collections.collect(&:resources).flatten.compact
if !remaining.empty?
raise Puppet::ParseError, "Failed to realize virtual resources #{remaining.join(', ')}"
end
end
# Make sure all of our resources and such have done any last work
# necessary.
def finish
evaluate_relationships
resources.each do |resource|
# Add in any resource overrides.
if overrides = resource_overrides(resource)
overrides.each do |over|
resource.merge(over)
end
# Remove the overrides, so that the configuration knows there
# are none left.
overrides.clear
end
resource.finish if resource.respond_to?(:finish)
end
add_resource_metaparams
end
def add_resource_metaparams
unless main = catalog.resource(:class, :main)
raise "Couldn't find main"
end
names = Puppet::Type.metaparams.select do |name|
!Puppet::Parser::Resource.relationship_parameter?(name)
end
data = {}
catalog.walk(main, :out) do |source, target|
if source_data = data[source] || metaparams_as_data(source, names)
# only store anything in the data hash if we've actually got
# data
data[source] ||= source_data
source_data.each do |param, value|
target[param] = value if target[param].nil?
end
data[target] = source_data.merge(metaparams_as_data(target, names))
end
target.tag(*(source.tags))
end
end
def metaparams_as_data(resource, params)
data = nil
params.each do |param|
unless resource[param].nil?
# Because we could be creating a hash for every resource,
# and we actually probably don't often have any data here at all,
# we're optimizing a bit by only creating a hash if there's
# any data to put in it.
data ||= {}
data[param] = resource[param]
end
end
data
end
# Set up all of our internal variables.
def initvars
# The list of overrides. This is used to cache overrides on objects
# that don't exist yet. We store an array of each override.
@resource_overrides = Hash.new do |overs, ref|
overs[ref] = []
end
# The list of collections that have been created. This is a global list,
# but they each refer back to the scope that created them.
@collections = []
# The list of relationships to evaluate.
@relationships = []
# For maintaining the relationship between scopes and their resources.
@catalog = Puppet::Resource::Catalog.new(@node.name, @node.environment)
# MOVED HERE - SCOPE IS NEEDED (MOVE-SCOPE)
# Create the initial scope, it is needed early
@topscope = Puppet::Parser::Scope.new(self)
# Need to compute overrides here, and remember them, because we are about to
# enter the magic zone of known_resource_types and initial import.
# Expensive entries in the context are bound lazily.
@context_overrides = context_overrides()
# This construct ensures that initial import (triggered by instantiating
# the structure 'known_resource_types') has a configured context
# It cannot survive the initvars method, and is later reinstated
# as part of compiling...
#
Puppet.override( @context_overrides , "For initializing compiler") do
# THE MAGIC STARTS HERE ! This triggers parsing, loading etc.
@catalog.version = known_resource_types.version
end
@catalog.add_resource(Puppet::Parser::Resource.new("stage", :main, :scope => @topscope))
# local resource array to maintain resource ordering
@resources = []
# Make sure any external node classes are in our class list
if @node.classes.class == Hash
@catalog.add_class(*@node.classes.keys)
else
@catalog.add_class(*@node.classes)
end
end
# Set the node's parameters into the top-scope as variables.
def set_node_parameters
node.parameters.each do |param, value|
@topscope[param.to_s] = value
end
# These might be nil.
catalog.client_version = node.parameters["clientversion"]
catalog.server_version = node.parameters["serverversion"]
if Puppet[:trusted_node_data]
@topscope.set_trusted(node.trusted_data)
end
if(Puppet[:immutable_node_data])
facts_hash = node.facts.nil? ? {} : node.facts.values
@topscope.set_facts(facts_hash)
end
end
def create_settings_scope
- unless settings_type = environment.known_resource_types.hostclass("settings")
- settings_type = Puppet::Resource::Type.new :hostclass, "settings"
- environment.known_resource_types.add(settings_type)
- end
+ settings_type = Puppet::Resource::Type.new :hostclass, "settings"
+ environment.known_resource_types.add(settings_type)
settings_resource = Puppet::Parser::Resource.new("class", "settings", :scope => @topscope)
@catalog.add_resource(settings_resource)
settings_type.evaluate_code(settings_resource)
scope = @topscope.class_scope(settings_type)
+ env = environment
Puppet.settings.each do |name, setting|
- next if name.to_s == "name"
- scope[name.to_s] = environment[name]
+ next if name == :name
+ scope[name.to_s] = env[name]
end
end
# Return an array of all of the unevaluated resources. These will be definitions,
# which need to get evaluated into native resources.
def unevaluated_resources
# The order of these is significant for speed due to short-circuiting
resources.reject { |resource| resource.evaluated? or resource.virtual? or resource.builtin_type? }
end
# Creates the injector from bindings found in the current environment.
# @return [void]
# @api private
#
def create_injector
assert_binder_active()
composer = Puppet::Pops::Binder::BindingsComposer.new()
layered_bindings = composer.compose(topscope)
@injector = Puppet::Pops::Binder::Injector.new(Puppet::Pops::Binder::Binder.new(layered_bindings))
end
def assert_binder_active
unless is_binder_active?
raise ArgumentError, "The Puppet Binder is only available when either '--binder true' or '--parser future' is used"
end
end
end
diff --git a/lib/puppet/parser/e4_parser_adapter.rb b/lib/puppet/parser/e4_parser_adapter.rb
index 01822db13..00911b5d5 100644
--- a/lib/puppet/parser/e4_parser_adapter.rb
+++ b/lib/puppet/parser/e4_parser_adapter.rb
@@ -1,81 +1,81 @@
require 'puppet/pops'
module Puppet; module Parser; end; end;
# Adapts an egrammar/eparser to respond to the public API of the classic parser
# and makes use of the new evaluator.
#
class Puppet::Parser::E4ParserAdapter
# Empty adapter fulfills watch_file contract without doing anything.
# @api private
class NullFileWatcher
def watch_file(file)
#nop
end
end
# @param file_watcher [#watch_file] something that can watch a file
def initialize(file_watcher = nil)
@file_watcher = file_watcher || NullFileWatcher.new
@file = ''
@string = ''
- @use = :undefined
- @@evaluating_parser ||= Puppet::Pops::Parser::EvaluatingParser::Transitional.new()
+ @use = :unspecified
+ @@evaluating_parser ||= Puppet::Pops::Parser::EvaluatingParser.new()
end
def file=(file)
@file = file
@use = :file
# watch if possible, but only if the file is something worth watching
@file_watcher.watch_file(file) if !file.nil? && file != ''
end
def parse(string = nil)
self.string= string if string
if @file =~ /\.rb$/ && @use != :string
# Will throw an error
parse_ruby_file
end
parse_result =
if @use == :string
# Parse with a source_file to set in created AST objects (it was either given, or it may be unknown
# if the caller did not set a file and then presented a string).
#
@@evaluating_parser.parse_string(@string, @file || "unknown-source-location")
else
@@evaluating_parser.parse_file(@file)
end
# the parse_result may be
# * empty / nil (no input)
# * a Model::Program
# * a Model::Expression
#
model = parse_result.nil? ? nil : parse_result.current
args = {}
Puppet::Pops::Model::AstTransformer.new(@file).merge_location(args, model)
ast_code =
if model.is_a? Puppet::Pops::Model::Program
Puppet::Parser::AST::PopsBridge::Program.new(model, args)
else
args[:value] = model
Puppet::Parser::AST::PopsBridge::Expression.new(args)
end
# Create the "main" class for the content - this content will get merged with all other "main" content
Puppet::Parser::AST::Hostclass.new('', :code => ast_code)
end
def string=(string)
@string = string
@use = :string
end
def parse_ruby_file
raise Puppet::ParseError, "Ruby DSL is no longer supported. Attempt to parse #{@file}"
end
end
diff --git a/lib/puppet/parser/e_parser_adapter.rb b/lib/puppet/parser/e_parser_adapter.rb
deleted file mode 100644
index fe0e28b55..000000000
--- a/lib/puppet/parser/e_parser_adapter.rb
+++ /dev/null
@@ -1,119 +0,0 @@
-require 'puppet/pops'
-
-module Puppet; module Parser; end; end;
-# Adapts an egrammar/eparser to respond to the public API of the classic parser
-#
-class Puppet::Parser::EParserAdapter
-
- def initialize(classic_parser)
- @classic_parser = classic_parser
- @file = ''
- @string = ''
- @use = :undefined
- end
-
- def file=(file)
- @classic_parser.file = file
- @file = file
- @use = :file
- end
-
- def parse(string = nil)
- if @file =~ /\.rb$/
- return parse_ruby_file
- else
- self.string= string if string
- parser = Puppet::Pops::Parser::Parser.new()
- parse_result = if @use == :string
- parser.parse_string(@string)
- else
- parser.parse_file(@file)
- end
- # Compute the source_file to set in created AST objects (it was either given, or it may be unknown
- # if caller did not set a file and the present a string.
- #
- source_file = @file || "unknown-source-location"
-
- # Validate
- validate(parse_result)
- # Transform the result, but only if not nil
- parse_result = Puppet::Pops::Model::AstTransformer.new(source_file, @classic_parser).transform(parse_result) if parse_result
- if parse_result && !parse_result.is_a?(Puppet::Parser::AST::BlockExpression)
- # Need to transform again, if result is not wrapped in something iterable when handed off to
- # a new Hostclass as its code.
- parse_result = Puppet::Parser::AST::BlockExpression.new(:children => [parse_result]) if parse_result
- end
- end
-
- Puppet::Parser::AST::Hostclass.new('', :code => parse_result)
- end
-
- def validate(parse_result)
- # TODO: This is too many hoops to jump through... ugly API
- # could reference a ValidatorFactory.validator_3_1(acceptor) instead.
- # and let the factory abstract the rest.
- #
- return unless parse_result
-
- acceptor = Puppet::Pops::Validation::Acceptor.new
- validator = Puppet::Pops::Validation::ValidatorFactory_3_1.new().validator(acceptor)
- validator.validate(parse_result)
-
- max_errors = Puppet[:max_errors]
- max_warnings = Puppet[:max_warnings] + 1
- max_deprecations = Puppet[:max_deprecations] + 1
-
- # If there are warnings output them
- warnings = acceptor.warnings
- if warnings.size > 0
- formatter = Puppet::Pops::Validation::DiagnosticFormatterPuppetStyle.new
- emitted_w = 0
- emitted_dw = 0
- acceptor.warnings.each {|w|
- if w.severity == :deprecation
- # Do *not* call Puppet.deprecation_warning it is for internal deprecation, not
- # deprecation of constructs in manifests! (It is not designed for that purpose even if
- # used throughout the code base).
- #
- Puppet.warning(formatter.format(w)) if emitted_dw < max_deprecations
- emitted_dw += 1
- else
- Puppet.warning(formatter.format(w)) if emitted_w < max_warnings
- emitted_w += 1
- end
- break if emitted_w > max_warnings && emitted_dw > max_deprecations # but only then
- }
- end
-
- # If there were errors, report the first found. Use a puppet style formatter.
- errors = acceptor.errors
- if errors.size > 0
- formatter = Puppet::Pops::Validation::DiagnosticFormatterPuppetStyle.new
- if errors.size == 1 || max_errors <= 1
- # raise immediately
- raise Puppet::ParseError.new(formatter.format(errors[0]))
- end
- emitted = 0
- errors.each do |e|
- Puppet.err(formatter.format(e))
- emitted += 1
- break if emitted >= max_errors
- end
- warnings_message = warnings.size > 0 ? ", and #{warnings.size} warnings" : ""
- giving_up_message = "Found #{errors.size} errors#{warnings_message}. Giving up"
- exception = Puppet::ParseError.new(giving_up_message)
- exception.file = errors[0].file
- raise exception
- end
- end
-
- def string=(string)
- @classic_parser.string = string
- @string = string
- @use = :string
- end
-
- def parse_ruby_file
- @classic_parser.parse
- end
-end
diff --git a/lib/puppet/parser/files.rb b/lib/puppet/parser/files.rb
index 605bbeb69..81c523ffd 100644
--- a/lib/puppet/parser/files.rb
+++ b/lib/puppet/parser/files.rb
@@ -1,94 +1,137 @@
require 'puppet/module'
-module Puppet; module Parser; module Files
+module Puppet::Parser::Files
module_function
- # Return a list of manifests as absolute filenames matching the given
+ # Return a list of manifests as absolute filenames matching the given
# pattern.
#
# @param pattern [String] A reference for a file in a module. It is the format "<modulename>/<file glob>"
# @param environment [Puppet::Node::Environment] the environment of modules
#
# @return [Array(String, Array<String>)] the module name and the list of files found
# @api private
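# @example Illustrative call (module name and resulting path are assumed)
#   find_manifests_in_modules('mymod/init.pp', env)
#   # => ["mymod", ["/etc/puppet/modules/mymod/manifests/init.pp"]]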
def find_manifests_in_modules(pattern, environment)
module_name, file_pattern = split_file_path(pattern)
begin
if mod = environment.module(module_name)
return [mod.name, mod.match_manifests(file_pattern)]
end
rescue Puppet::Module::InvalidName
# one of the modules being loaded might have an invalid name and so
# looking for one might blow up since we load them lazily.
end
[nil, []]
end
- # Find the concrete file denoted by +file+. If +file+ is absolute,
- # return it directly. Otherwise try to find relative to the +templatedir+
- # config param. If that fails try to find it as a template in a
- # module.
- # In all cases, an absolute path is returned, which does not
- # necessarily refer to an existing file
+ # Find the path to the given file selector. Files can be selected in
+ # one of two ways:
+ # * absolute path: the path is simply returned
+ # * modulename/filename selector: a file is found in the file directory
+ # of the named module.
+ #
+ # In the second case a nil is returned if there isn't a file found. In the
+ # first case (absolute path), there is no existence check done and so the
+ # path will be returned even if there isn't a file available.
+ #
+ # @param template [String] the file selector
+ # @param environment [Puppet::Node::Environment] the environment in which to search
+ # @return [String, nil] the absolute path to the file or nil if there is no file found
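+ # @example Illustrative selectors (module and file names assumed)
+ #   find_file('mymod/data.txt', env)  # path to data.txt in mymod's files directory, or nil
+ #   find_file('/etc/motd', env)       # absolute path, returned unchanged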
#
# @api private
- def find_template(template, environment)
- if template == File.expand_path(template)
- return template
- end
+ def find_file(file, environment)
+ if Puppet::Util.absolute_path?(file)
+ file
+ else
+ path, module_file = split_file_path(file)
+ mod = environment.module(path)
- if template_paths = templatepath(environment)
- # If we can find the template in :templatedir, we return that.
- template_paths.collect { |path|
- File::join(path, template)
- }.each do |f|
- return f if Puppet::FileSystem.exist?(f)
+ if module_file && mod
+ mod.file(module_file)
+ else
+ nil
end
end
+ end
- # check in the default template dir, if there is one
- if td_file = find_template_in_module(template, environment)
- return td_file
+ # Find the path to the given template selector. Templates can be selected in
+ # a number of ways:
+ # * absolute path: the path is simply returned
+ # * path relative to the templatepath setting: a file is found and the path
+ # is returned
+ # * modulename/filename selector: a file is found in the template directory
+ # of the named module.
+ #
+ # In the last two cases a nil is returned if there isn't a file found. In the
+ # first case (absolute path), there is no existence check done and so the
+ # path will be returned even if there isn't a file available.
+ #
+ # @param template [String] the template selector
+ # @param environment [Puppet::Node::Environment] the environment in which to search
+ # @return [String, nil] the absolute path to the template file or nil if there is no file found
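+ # @example Illustrative selectors (module and file names assumed)
+ #   find_template('mymod/motd.erb', env)     # path to motd.erb in mymod's templates directory, or nil
+ #   find_template('/var/tmp/motd.erb', env)  # absolute path, returned unchanged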
+ #
+ # @api private
+ def find_template(template, environment)
+ if Puppet::Util.absolute_path?(template)
+ template
+ else
+ in_templatepath = find_template_in_templatepath(template, environment)
+ if in_templatepath
+ in_templatepath
+ else
+ find_template_in_module(template, environment)
+ end
end
+ end
- nil
+ # Templatepaths are deprecated functionality; they will be going away in
+ # Puppet 4.
+ #
+ # @api private
+ def find_template_in_templatepath(template, environment)
+ template_paths = templatepath(environment)
+ if template_paths
+ template_paths.collect do |path|
+ File::join(path, template)
+ end.find do |f|
+ Puppet::FileSystem.exist?(f)
+ end
+ else
+ nil
+ end
end
# @api private
def find_template_in_module(template, environment)
path, file = split_file_path(template)
+ mod = environment.module(path)
- # Because templates don't have an assumed template name, like manifests do,
- # we treat templates with no name as being templates in the main template
- # directory.
- return nil unless file
-
- if mod = environment.module(path) and t = mod.template(file)
- return t
+ if file && mod
+ mod.template(file)
+ else
+ nil
end
- nil
end
# Return an array of paths by splitting the +templatedir+ config
# parameter.
# @api private
def templatepath(environment)
dirs = Puppet.settings.value(:templatedir, environment.to_s).split(File::PATH_SEPARATOR)
dirs.select do |p|
File::directory?(p)
end
end
# Split the path into the module and the rest of the path, or return
# nil if the path is empty or absolute (starts with a /).
# @api private
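# @example Illustrative splits (inputs assumed)
#   split_file_path('mymod/manifests/init.pp')  # => ["mymod", "manifests/init.pp"]
#   split_file_path('/etc/puppet/site.pp')      # => nil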
def split_file_path(path)
- if path == "" or Puppet::Util.absolute_path?(path)
+ if path == "" || Puppet::Util.absolute_path?(path)
nil
else
path.split(File::SEPARATOR, 2)
end
end
-
-end; end; end
+end
diff --git a/lib/puppet/parser/functions.rb b/lib/puppet/parser/functions.rb
index 6ce9c641a..be104d810 100644
--- a/lib/puppet/parser/functions.rb
+++ b/lib/puppet/parser/functions.rb
@@ -1,268 +1,262 @@
require 'puppet/util/autoload'
require 'puppet/parser/scope'
# A module for managing parser functions. Each specified function
# is added to a central module that then gets included into the Scope
# class.
#
# @api public
module Puppet::Parser::Functions
Environment = Puppet::Node::Environment
class << self
include Puppet::Util
end
# Reset the list of loaded functions.
#
# @api private
def self.reset
@modules = {}
# Runs a newfunction to create a function for each of the log levels
Puppet::Util::Log.levels.each do |level|
newfunction(level,
:environment => Puppet.lookup(:root_environment),
:doc => "Log a message on the server at level #{level.to_s}.") do |vals|
send(level, vals.join(" "))
end
end
end
# Accessor for singleton autoloader
#
# @api private
def self.autoloader
@autoloader ||= Puppet::Util::Autoload.new(
self, "puppet/parser/functions", :wrap => false
)
end
# Get the module that functions are mixed into corresponding to an
# environment
#
# @api private
def self.environment_module(env)
@modules[env.name] ||= Module.new do
@metadata = {}
def self.all_function_info
@metadata
end
def self.get_function_info(name)
@metadata[name]
end
def self.add_function_info(name, info)
@metadata[name] = info
end
end
end
# Create a new Puppet DSL function.
#
# **The {newfunction} method provides a public API.**
#
# This method is used both internally inside of Puppet to define parser
# functions. For example, template() is defined in
# {file:lib/puppet/parser/functions/template.rb template.rb} using the
# {newfunction} method. Third party Puppet modules such as
# [stdlib](https://forge.puppetlabs.com/puppetlabs/stdlib) use this method to
# extend the behavior and functionality of Puppet.
#
# See also [Docs: Custom
# Functions](http://docs.puppetlabs.com/guides/custom_functions.html)
#
# @example Define a new Puppet DSL Function
# >> Puppet::Parser::Functions.newfunction(:double, :arity => 1,
# :doc => "Doubles an object, typically a number or string.",
# :type => :rvalue) {|i| i[0]*2 }
# => {:arity=>1, :type=>:rvalue,
# :name=>"function_double",
# :doc=>"Doubles an object, typically a number or string."}
#
# @example Invoke the double function from irb as is done in RSpec examples:
# >> require 'puppet_spec/scope'
# >> scope = PuppetSpec::Scope.create_test_scope_for_node('example')
# => Scope()
# >> scope.function_double([2])
# => 4
# >> scope.function_double([4])
# => 8
# >> scope.function_double([])
# ArgumentError: double(): Wrong number of arguments given (0 for 1)
# >> scope.function_double([4,8])
# ArgumentError: double(): Wrong number of arguments given (2 for 1)
# >> scope.function_double(["hello"])
# => "hellohello"
#
# @param [Symbol] name the name of the function represented as a ruby Symbol.
# The {newfunction} method will define a Ruby method based on this name on
# the parser scope instance.
#
# @param [Proc] block the block provided to the {newfunction} method will be
# executed when the Puppet DSL function is evaluated during catalog
# compilation. The arguments to the function will be passed as an array to
# the first argument of the block. The return value of the block will be
# the return value of the Puppet DSL function for `:rvalue` functions.
#
# @option options [:rvalue, :statement] :type (:statement) the type of function.
# Either `:rvalue` for functions that return a value, or `:statement` for
# functions that do not return a value.
#
# @option options [String] :doc ('') the documentation for the function.
# This string will be extracted by documentation generation tools.
#
# @option options [Integer] :arity (-1) the
# [arity](http://en.wikipedia.org/wiki/Arity) of the function. When
# specified as a positive integer the function is expected to receive
# _exactly_ the specified number of arguments. When specified as a
# negative number, the function is expected to receive _at least_
# `(arity + 1).abs` arguments.
# For example, a function with an arity of `-4` is expected to receive at
# minimum 3 arguments. A function with the default arity of `-1` accepts
# zero or more arguments. A function with an arity of 2 must be provided
# with exactly two arguments, no more and no less. Added in Puppet 3.1.0.
#
# @option options [Puppet::Node::Environment] :environment (nil) can
# explicitly pass the environment we wanted the function added to. Only used
# to set logging functions in root environment
#
# @return [Hash] describing the function.
#
# @api public
def self.newfunction(name, options = {}, &block)
- # Short circuit this call when 4x "biff" is in effect to allow the new loader system to load
- # and define the function a different way.
- #
- if Puppet[:biff]
- return Puppet::Pops::Loader::RubyLegacyFunctionInstantiator.legacy_newfunction(name, options, &block)
- end
name = name.intern
environment = options[:environment] || Puppet.lookup(:current_environment)
Puppet.warning "Overwriting previous definition for function #{name}" if get_function(name, environment)
arity = options[:arity] || -1
ftype = options[:type] || :statement
unless ftype == :statement or ftype == :rvalue
raise Puppet::DevError, "Invalid statement type #{ftype.inspect}"
end
# the block must be installed as a method because it may use "return",
# which is not allowed from procs.
real_fname = "real_function_#{name}"
environment_module(environment).send(:define_method, real_fname, &block)
fname = "function_#{name}"
env_module = environment_module(environment)
env_module.send(:define_method, fname) do |*args|
- Puppet::Util::Profiler.profile("Called #{name}") do
+ Puppet::Util::Profiler.profile("Called #{name}", [:functions, name]) do
if args[0].is_a? Array
if arity >= 0 and args[0].size != arity
raise ArgumentError, "#{name}(): Wrong number of arguments given (#{args[0].size} for #{arity})"
elsif arity < 0 and args[0].size < (arity+1).abs
raise ArgumentError, "#{name}(): Wrong number of arguments given (#{args[0].size} for minimum #{(arity+1).abs})"
end
self.send(real_fname, args[0])
else
raise ArgumentError, "custom functions must be called with a single array that contains the arguments. For example, function_example([1]) instead of function_example(1)"
end
end
end
func = {:arity => arity, :type => ftype, :name => fname}
func[:doc] = options[:doc] if options[:doc]
env_module.add_function_info(name, func)
func
end
# Determine if a function is defined
#
# @param [Symbol] name the function
# @param [Puppet::Node::Environment] environment the environment to find the function in
#
# @return [Symbol, false] The name of the function if it's defined,
# otherwise false.
#
# @api public
def self.function(name, environment = Puppet.lookup(:current_environment))
name = name.intern
func = nil
unless func = get_function(name, environment)
autoloader.load(name, environment)
func = get_function(name, environment)
end
if func
func[:name]
else
false
end
end
def self.functiondocs(environment = Puppet.lookup(:current_environment))
autoloader.loadall
ret = ""
merged_functions(environment).sort { |a,b| a[0].to_s <=> b[0].to_s }.each do |name, hash|
ret << "#{name}\n#{"-" * name.to_s.length}\n"
if hash[:doc]
ret << Puppet::Util::Docs.scrub(hash[:doc])
else
ret << "Undocumented.\n"
end
ret << "\n\n- *Type*: #{hash[:type]}\n\n"
end
ret
end
# Determine whether a given function returns a value.
#
# @param [Symbol] name the function
# @param [Puppet::Node::Environment] environment The environment to find the function in
# @return [Boolean] whether it is an rvalue function
#
# @api public
def self.rvalue?(name, environment = Puppet.lookup(:current_environment))
func = get_function(name, environment)
func ? func[:type] == :rvalue : false
end
# Return the number of arguments a function expects.
#
# @param [Symbol] name the function
# @param [Puppet::Node::Environment] environment The environment to find the function in
# @return [Integer] The arity of the function. See {newfunction} for
# the meaning of negative values.
#
# @api public
def self.arity(name, environment = Puppet.lookup(:current_environment))
func = get_function(name, environment)
func ? func[:arity] : -1
end
class << self
private
def merged_functions(environment)
root = environment_module(Puppet.lookup(:root_environment))
env = environment_module(environment)
root.all_function_info.merge(env.all_function_info)
end
def get_function(name, environment)
environment_module(environment).get_function_info(name.intern) || environment_module(Puppet.lookup(:root_environment)).get_function_info(name.intern)
end
end
end
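A hedged sketch of registering a function through the newfunction API above, mirroring the @example block in the doc comment; `scope` stands in for a parser scope instance that has the environment module mixed in.

    Puppet::Parser::Functions.newfunction(:double, :type => :rvalue, :arity => 1,
      :doc => "Doubles an object, typically a number or string.") do |args|
      args[0] * 2
    end

    # The generated wrapper enforces the declared arity before dispatching:
    scope.function_double([2])     # => 4
    scope.function_double([2, 3])  # ArgumentError: double(): Wrong number of arguments given (2 for 1)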
diff --git a/lib/puppet/parser/functions/assert_type.rb b/lib/puppet/parser/functions/assert_type.rb
new file mode 100644
index 000000000..577697420
--- /dev/null
+++ b/lib/puppet/parser/functions/assert_type.rb
@@ -0,0 +1,31 @@
+Puppet::Parser::Functions::newfunction(
+ :assert_type,
+ :type => :rvalue,
+ :arity => -3,
+ :doc => "Returns the given value if it is an instance of the given type, and raises an error otherwise.
+Optionally, if a block is given (accepting two parameters), it will be called instead of raising
+an error. This is to enable giving the user richer feedback, or to supply a default value.
+
+Example: assert that `$b` is a non-empty `String` and assign to `$a`:
+
+ $a = assert_type(String[1], $b)
+
+Example using custom error message:
+
+ $a = assert_type(String[1], $b) |$expected, $actual| {
+ fail('The name cannot be empty')
+ }
+
+Example, using a warning and a default:
+
+ $a = assert_type(String[1], $b) |$expected, $actual| {
+ warning('Name is empty, using default')
+ 'anonymous'
+ }
+
+See the documentation for 'The Puppet Type System' for more information about types.
+- since Puppet 3.7
+- requires future parser/evaluator
+") do |args|
+ function_fail(["assert_type() is only available when parser/evaluator future is in effect"])
+end
diff --git a/lib/puppet/parser/functions/collect.rb b/lib/puppet/parser/functions/collect.rb
deleted file mode 100644
index fea42a4df..000000000
--- a/lib/puppet/parser/functions/collect.rb
+++ /dev/null
@@ -1,15 +0,0 @@
-Puppet::Parser::Functions::newfunction(
-:collect,
-:type => :rvalue,
-:arity => 2,
-:doc => <<-'ENDHEREDOC') do |args|
- The 'collect' function has been renamed to 'map'. Please update your manifests.
-
- The collect function is reserved for future use.
- - Removed as of 3.4
- - requires `parser = future`.
- ENDHEREDOC
-
- raise NotImplementedError,
- "The 'collect' function has been renamed to 'map'. Please update your manifests."
-end
diff --git a/lib/puppet/parser/functions/contain.rb b/lib/puppet/parser/functions/contain.rb
index 8eb514561..9c9447661 100644
--- a/lib/puppet/parser/functions/contain.rb
+++ b/lib/puppet/parser/functions/contain.rb
@@ -1,26 +1,36 @@
# Called within a class definition, establishes a containment
# relationship with another class
Puppet::Parser::Functions::newfunction(
:contain,
:arity => -2,
:doc => "Contain one or more classes inside the current class. If any of
these classes are undeclared, they will be declared as if called with the
`include` function. Accepts a class name, an array of class names, or a
comma-separated list of class names.
A contained class will not be applied before the containing class is
begun, and will be finished before the containing class is finished.
+
+When the future parser is used, you must use the class's full name;
+relative names are no longer allowed. In addition to names in string form,
+you may also directly use Class and Resource Type values that are produced by
+the future parser's resource and relationship expressions.
"
) do |classes|
scope = self
- scope.function_include(classes)
+ # Make call patterns uniform and protected against nested arrays, also make
+ # names absolute if so desired.
+ classes = transform_and_assert_classnames(classes.is_a?(Array) ? classes.flatten : [classes])
+
+ containing_resource = scope.resource
- classes.each do |class_name|
- class_resource = scope.catalog.resource("Class", class_name)
- if ! scope.catalog.edge?(scope.resource, class_resource)
- scope.catalog.add_edge(scope.resource, class_resource)
+ # This is the same as calling the include function, but faster, and it does not rely on the include
+ # function (which is a statement) to return something (it should not).
+ (compiler.evaluate_classes(classes, self, false) || []).each do |resource|
+ if ! scope.catalog.edge?(containing_resource, resource)
+ scope.catalog.add_edge(containing_resource, resource)
end
end
end
diff --git a/lib/puppet/parser/functions/create_resources.rb b/lib/puppet/parser/functions/create_resources.rb
index 1c3b910b0..d0ac165c2 100644
--- a/lib/puppet/parser/functions/create_resources.rb
+++ b/lib/puppet/parser/functions/create_resources.rb
@@ -1,78 +1,82 @@
Puppet::Parser::Functions::newfunction(:create_resources, :arity => -3, :doc => <<-'ENDHEREDOC') do |args|
Converts a hash into a set of resources and adds them to the catalog.
This function takes two mandatory arguments: a resource type, and a hash describing
a set of resources. The hash should be in the form `{title => {parameters} }`:
# A hash of user resources:
$myusers = {
'nick' => { uid => '1330',
gid => allstaff,
groups => ['developers', 'operations', 'release'], },
'dan' => { uid => '1308',
gid => allstaff,
groups => ['developers', 'prosvc', 'release'], },
}
create_resources(user, $myusers)
A third, optional parameter may be given, also as a hash:
$defaults = {
'ensure' => present,
'provider' => 'ldap',
}
create_resources(user, $myusers, $defaults)
The values given on the third argument are added to the parameters of each resource
present in the set given on the second argument. If a parameter is present on both
the second and third arguments, the one on the second argument takes precedence.
This function can be used to create defined resources and classes, as well
as native resources.
Virtual and Exported resources may be created by prefixing the type name
with @ or @@ respectively. For example, the $myusers hash may be exported
in the following manner:
create_resources("@@user", $myusers)
The $myusers may be declared as virtual resources using:
create_resources("@user", $myusers)
ENDHEREDOC
raise ArgumentError, ("create_resources(): wrong number of arguments (#{args.length}; must be 2 or 3)") if args.length > 3
raise ArgumentError, ('create_resources(): second argument must be a hash') unless args[1].is_a?(Hash)
if args.length == 3
raise ArgumentError, ('create_resources(): third argument, if provided, must be a hash') unless args[2].is_a?(Hash)
end
type, instances, defaults = args
defaults ||= {}
resource = Puppet::Parser::AST::Resource.new(:type => type.sub(/^@{1,2}/, '').downcase, :instances =>
instances.collect do |title, params|
Puppet::Parser::AST::ResourceInstance.new(
:title => Puppet::Parser::AST::Leaf.new(:value => title),
:parameters => defaults.merge(params).collect do |name, value|
Puppet::Parser::AST::ResourceParam.new(
:param => name,
:value => Puppet::Parser::AST::Leaf.new(:value => value))
end)
end)
if type.start_with? '@@'
resource.exported = true
elsif type.start_with? '@'
resource.virtual = true
end
begin
resource.safeevaluate(self)
rescue Puppet::ParseError => internal_error
- raise internal_error.original
+ if internal_error.original.nil?
+ raise internal_error
+ else
+ raise internal_error.original
+ end
end
end
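The precedence rule described above comes down to a plain Hash#merge, with the per-resource parameters winning over the defaults (as in the `defaults.merge(params)` call in the body); a rough illustration with made-up hash contents:

    defaults = { 'ensure' => 'present', 'provider' => 'ldap' }
    params   = { 'uid' => '1330', 'provider' => 'local' }
    defaults.merge(params)
    # => {"ensure"=>"present", "provider"=>"local", "uid"=>"1330"}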
diff --git a/lib/puppet/parser/functions/digest.rb b/lib/puppet/parser/functions/digest.rb
new file mode 100644
index 000000000..8cd75c3fb
--- /dev/null
+++ b/lib/puppet/parser/functions/digest.rb
@@ -0,0 +1,5 @@
+require 'puppet/util/checksums'
+Puppet::Parser::Functions::newfunction(:digest, :type => :rvalue, :arity => 1, :doc => "Returns a hash value from a provided string using the digest_algorithm setting from the Puppet config file.") do |args|
+ algo = Puppet[:digest_algorithm]
+ Puppet::Util::Checksums.method(algo.intern).call args[0]
+end
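For comparison, a sketch of the underlying checksum call made by digest('abc'), assuming the default digest_algorithm of md5:

    require 'puppet/util/checksums'

    Puppet::Util::Checksums.method(:md5).call('abc')
    # => "900150983cd24fb0d6963f7d28e17f72"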
diff --git a/lib/puppet/parser/functions/each.rb b/lib/puppet/parser/functions/each.rb
index 39d26ba38..3f4437eef 100644
--- a/lib/puppet/parser/functions/each.rb
+++ b/lib/puppet/parser/functions/each.rb
@@ -1,109 +1,48 @@
Puppet::Parser::Functions::newfunction(
-:each,
-:type => :rvalue,
-:arity => 2,
-:doc => <<-'ENDHEREDOC') do |args|
- Applies a parameterized block to each element in a sequence of selected entries from the first
- argument and returns the first argument.
-
- This function takes two mandatory arguments: the first should be an Array or a Hash or something that is
- of enumerable type (integer, Integer range, or String), and the second
- a parameterized block as produced by the puppet syntax:
-
- $a.each |$x| { ... }
- each($a) |$x| { ... }
-
- When the first argument is an Array (or of enumerable type other than Hash), the parameterized block
- should define one or two block parameters.
- For each application of the block, the next element from the array is selected, and it is passed to
- the block if the block has one parameter. If the block has two parameters, the first is the elements
- index, and the second the value. The index starts from 0.
-
- $a.each |$index, $value| { ... }
- each($a) |$index, $value| { ... }
-
- When the first argument is a Hash, the parameterized block should define one or two parameters.
- When one parameter is defined, the iteration is performed with each entry as an array of `[key, value]`,
- and when two parameters are defined the iteration is performed with key and value.
-
- $a.each |$entry| { ..."key ${$entry[0]}, value ${$entry[1]}" }
- $a.each |$key, $value| { ..."key ${key}, value ${value}" }
-
- *Examples*
-
- [1,2,3].each |$val| { ... } # 1, 2, 3
- [5,6,7].each |$index, $val| { ... } # (0, 5), (1, 6), (2, 7)
- {a=>1, b=>2, c=>3}].each |$val| { ... } # ['a', 1], ['b', 2], ['c', 3]
- {a=>1, b=>2, c=>3}.each |$key, $val| { ... } # ('a', 1), ('b', 2), ('c', 3)
- Integer[ 10, 20 ].each |$index, $value| { ... } # (0, 10), (1, 11) ...
- "hello".each |$char| { ... } # 'h', 'e', 'l', 'l', 'o'
- 3.each |$number| { ... } # 0, 1, 2
-
- - Since 3.2 for Array and Hash
- - Since 3.5 for other enumerables
- - requires `parser = future`.
- ENDHEREDOC
- require 'puppet/parser/ast/lambda'
-
- def foreach_Hash(o, scope, pblock, serving_size)
- enumerator = o.each_pair
- if serving_size == 1
- (o.size).times do
- pblock.call(scope, enumerator.next)
- end
- else
- (o.size).times do
- pblock.call(scope, *enumerator.next)
- end
- end
- end
-
- def foreach_Enumerator(enumerator, scope, pblock, serving_size)
- index = 0
- if serving_size == 1
- begin
- loop { pblock.call(scope, enumerator.next) }
- rescue StopIteration
- end
- else
- begin
- loop do
- pblock.call(scope, index, enumerator.next)
- index = index +1
- end
- rescue StopIteration
- end
- end
- end
-
- raise ArgumentError, ("each(): wrong number of arguments (#{args.length}; expected 2, got #{args.length})") if args.length != 2
- receiver = args[0]
- pblock = args[1]
- raise ArgumentError, ("each(): wrong argument type (#{args[1].class}; must be a parameterized block.") unless pblock.respond_to?(:puppet_lambda)
-
- serving_size = pblock.parameter_count
- if serving_size == 0
- raise ArgumentError, "each(): block must define at least one parameter; value. Block has 0."
- end
-
- case receiver
- when Hash
- if serving_size > 2
- raise ArgumentError, "each(): block must define at most two parameters; key, value. Block has #{serving_size}; "+
- pblock.parameter_names.join(', ')
- end
- foreach_Hash(receiver, self, pblock, serving_size)
- else
- if serving_size > 2
- raise ArgumentError, "each(): block must define at most two parameters; index, value. Block has #{serving_size}; "+
- pblock.parameter_names.join(', ')
- end
- enum = Puppet::Pops::Types::Enumeration.enumerator(receiver)
- unless enum
- raise ArgumentError, ("each(): wrong argument type (#{receiver.class}; must be something enumerable.")
- end
- foreach_Enumerator(enum, self, pblock, serving_size)
- end
- # each always produces the receiver
- receiver
+ :each,
+ :type => :rvalue,
+ :arity => -3,
+ :doc => <<-DOC
+Applies a parameterized block to each element in a sequence of selected entries from the first
+argument and returns the first argument.
+
+This function takes two mandatory arguments: the first should be an Array or a Hash or something that is
+of enumerable type (integer, Integer range, or String), and the second
+a parameterized block as produced by the puppet syntax:
+
+ $a.each |$x| { ... }
+ each($a) |$x| { ... }
+
+When the first argument is an Array (or of enumerable type other than Hash), the parameterized block
+should define one or two block parameters.
+For each application of the block, the next element from the array is selected, and it is passed to
+the block if the block has one parameter. If the block has two parameters, the first is the element's
+index, and the second the value. The index starts from 0.
+
+ $a.each |$index, $value| { ... }
+ each($a) |$index, $value| { ... }
+
+When the first argument is a Hash, the parameterized block should define one or two parameters.
+When one parameter is defined, the iteration is performed with each entry as an array of `[key, value]`,
+and when two parameters are defined the iteration is performed with key and value.
+
+ $a.each |$entry| { ..."key ${$entry[0]}, value ${$entry[1]}" }
+ $a.each |$key, $value| { ..."key ${key}, value ${value}" }
+
+Example using each:
+
+ [1,2,3].each |$val| { ... } # 1, 2, 3
+ [5,6,7].each |$index, $val| { ... } # (0, 5), (1, 6), (2, 7)
+ {a=>1, b=>2, c=>3}.each |$val| { ... } # ['a', 1], ['b', 2], ['c', 3]
+ {a=>1, b=>2, c=>3}.each |$key, $val| { ... } # ('a', 1), ('b', 2), ('c', 3)
+ Integer[ 10, 20 ].each |$index, $value| { ... } # (0, 10), (1, 11) ...
+ "hello".each |$char| { ... } # 'h', 'e', 'l', 'l', 'o'
+ 3.each |$number| { ... } # 0, 1, 2
+
+- since 3.2 for Array and Hash
+- since 3.5 for other enumerables
+- note requires `parser = future`
+DOC
+) do |args|
+ function_fail(["each() is only available when parser/evaluator future is in effect"])
end
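The iteration forms documented above roughly parallel plain Ruby enumeration; an analogue (not the future-parser implementation) of the one- and two-parameter block shapes:

    [1, 2, 3].each { |val| val }                                  # block sees 1, 2, 3
    [5, 6, 7].each_with_index { |val, index| [index, val] }      # block sees (0, 5), (1, 6), (2, 7)
    { 'a' => 1, 'b' => 2 }.each_pair { |key, val| [key, val] }   # block sees ('a', 1), ('b', 2)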
diff --git a/lib/puppet/parser/functions/epp.rb b/lib/puppet/parser/functions/epp.rb
index 616d61422..5127aec71 100644
--- a/lib/puppet/parser/functions/epp.rb
+++ b/lib/puppet/parser/functions/epp.rb
@@ -1,41 +1,45 @@
Puppet::Parser::Functions::newfunction(:epp, :type => :rvalue, :arity => -2, :doc =>
"Evaluates an Embedded Puppet Template (EPP) file and returns the rendered text result as a String.
-EPP support the following tags:
+The first argument to this function should be a `<MODULE NAME>/<TEMPLATE FILE>`
+reference, which will load `<TEMPLATE FILE>` from a module's `templates`
+directory. (For example, the reference `apache/vhost.conf.epp` will load the
+file `<MODULES DIRECTORY>/apache/templates/vhost.conf.epp`.)
+
+The second argument is optional; if present, it should be a hash containing parameters for the
+template. (See below.)
+
+EPP supports the following tags:
* `<%= puppet expression %>` - This tag renders the value of the expression it contains.
* `<% puppet expression(s) %>` - This tag will execute the expression(s) it contains, but renders nothing.
* `<%# comment %>` - The tag and its content renders nothing.
* `<%%` or `%%>` - Renders a literal `<%` or `%>` respectively.
* `<%-` - Same as `<%` but suppresses any leading whitespace.
* `-%>` - Same as `%>` but suppresses any trailing whitespace on the same line (including line break).
-* `<%-( parameters )-%>` - When placed as the first tag declares the template's parameters.
+* `<%- |parameters| -%>` - When placed as the first tag declares the template's parameters.
File based EPP supports the following visibilities of variables in scope:
* Global scope (i.e. top + node scopes) - global scope is always visible
* Global + all given arguments - if the EPP template does not declare parameters, and arguments are given
* Global + declared parameters - if the EPP declares parameters, given argument names must match
EPP supports parameters by placing an optional parameter list as the very first element in the EPP. As an example,
-`<%- ($x, $y, $z='unicorn') -%>` when placed first in the EPP text declares that the parameters `x` and `y` must be
+`<%- |$x, $y, $z = 'unicorn'| -%>` when placed first in the EPP text declares that the parameters `x` and `y` must be
given as template arguments when calling `inline_epp`, and that `z` if not given as a template argument
defaults to `'unicorn'`. Template parameters are available as variables, e.g. arguments `$x`, `$y` and `$z` in the example.
Note that `<%-` must be used or any leading whitespace will be interpreted as text.
Arguments are passed to the template by calling `epp` with a Hash as the last argument, where parameters
are bound to values, e.g. `epp('...', {'x'=>10, 'y'=>20})`. Excess arguments may be given
(i.e. undeclared parameters) only if the EPP template does not declare any parameters at all.
Template parameters shadow variables in outer scopes. File-based EPP never has access to variables in the
scope where the `epp` function is called from.
- See function inline_epp for examples of EPP
- Since 3.5
-- Requires Future Parser") do |arguments|
- # Requires future parser
- unless Puppet[:parser] == "future"
- raise ArgumentError, "epp(): function is only available when --parser future is in effect"
- end
- Puppet::Pops::Evaluator::EppEvaluator.epp(self, arguments[0], self.compiler.environment, arguments[1])
+- Requires Future Parser") do |args|
+ function_fail(["epp() is only available when parser/evaluator future is in effect"])
end
diff --git a/lib/puppet/parser/functions/file.rb b/lib/puppet/parser/functions/file.rb
index 17401fc8b..cde496ab4 100644
--- a/lib/puppet/parser/functions/file.rb
+++ b/lib/puppet/parser/functions/file.rb
@@ -1,23 +1,31 @@
-# Returns the contents of a file
-
Puppet::Parser::Functions::newfunction(
:file, :arity => -2, :type => :rvalue,
- :doc => "Return the contents of a file. Multiple files
- can be passed, and the first file that exists will be read in."
+ :doc => "Loads a file from a module and returns its contents as a string.
+
+ The argument to this function should be a `<MODULE NAME>/<FILE>`
+ reference, which will load `<FILE>` from a module's `files`
+ directory. (For example, the reference `mysql/mysqltuner.pl` will load the
+ file `<MODULES DIRECTORY>/mysql/files/mysqltuner.pl`.)
+
+ This function can also accept:
+
+ * An absolute path, which can load a file from anywhere on disk.
+ * Multiple arguments, which will return the contents of the **first** file
+ found, skipping any files that don't exist.
+ "
) do |vals|
- ret = nil
+ path = nil
vals.each do |file|
- unless Puppet::Util.absolute_path?(file)
- raise Puppet::ParseError, "Files must be fully qualified"
- end
- if Puppet::FileSystem.exist?(file)
- ret = File.read(file)
+ found = Puppet::Parser::Files.find_file(file, compiler.environment)
+ if found && Puppet::FileSystem.exist?(found)
+ path = found
break
end
end
- if ret
- ret
+
+ if path
+ File.read(path)
else
raise Puppet::ParseError, "Could not find any files from #{vals.join(", ")}"
end
end
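A minimal sketch of the module-reference lookup the rewritten function body performs; the module reference is the one from the doc string, and the environment lookup here is a simplification (the function itself uses compiler.environment):

    env   = Puppet.lookup(:current_environment)
    found = Puppet::Parser::Files.find_file('mysql/mysqltuner.pl', env)
    File.read(found) if found && Puppet::FileSystem.exist?(found)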
diff --git a/lib/puppet/parser/functions/filter.rb b/lib/puppet/parser/functions/filter.rb
index 5760a8a75..365de9fcf 100644
--- a/lib/puppet/parser/functions/filter.rb
+++ b/lib/puppet/parser/functions/filter.rb
@@ -1,100 +1,44 @@
-require 'puppet/parser/ast/lambda'
-
Puppet::Parser::Functions::newfunction(
-:filter,
-:type => :rvalue,
-:arity => 2,
-:doc => <<-'ENDHEREDOC') do |args|
- Applies a parameterized block to each element in a sequence of entries from the first
- argument and returns an array or hash (same type as left operand for array/hash, and array for
- other enumerable types) with the entries for which the block evaluates to `true`.
-
- This function takes two mandatory arguments: the first should be an Array, a Hash, or an
- Enumerable object (integer, Integer range, or String),
- and the second a parameterized block as produced by the puppet syntax:
-
- $a.filter |$x| { ... }
- filter($a) |$x| { ... }
-
- When the first argument is something other than a Hash, the block is called with each entry in turn.
- When the first argument is a Hash the entry is an array with `[key, value]`.
-
- *Examples*
+ :filter,
+ :arity => -3,
+ :doc => <<-DOC
+ Applies a parameterized block to each element in a sequence of entries from the first
+ argument and returns an array or hash (same type as left operand for array/hash, and array for
+ other enumerable types) with the entries for which the block evaluates to `true`.
- # selects all that end with berry
- $a = ["raspberry", "blueberry", "orange"]
- $a.filter |$x| { $x =~ /berry$/ } # rasberry, blueberry
+ This function takes two mandatory arguments: the first should be an Array, a Hash, or an
+ Enumerable object (integer, Integer range, or String),
+ and the second a parameterized block as produced by the puppet syntax:
- If the block defines two parameters, they will be set to `index, value` (with index starting at 0) for all
- enumerables except Hash, and to `key, value` for a Hash.
+ $a.filter |$x| { ... }
+ filter($a) |$x| { ... }
- *Examples*
+ When the first argument is something other than a Hash, the block is called with each entry in turn.
+ When the first argument is a Hash the entry is an array with `[key, value]`.
- # selects all that end with 'berry' at an even numbered index
- $a = ["raspberry", "blueberry", "orange"]
- $a.filter |$index, $x| { $index % 2 == 0 and $x =~ /berry$/ } # raspberry
+ Example Using filter with one parameter
- # selects all that end with 'berry' and value >= 1
- $a = {"raspberry"=>0, "blueberry"=>1, "orange"=>1}
- $a.filter |$key, $x| { $x =~ /berry$/ and $x >= 1 } # blueberry
+ # selects all that end with berry
+ $a = ["raspberry", "blueberry", "orange"]
+ $a.filter |$x| { $x =~ /berry$/ } # raspberry, blueberry
- - Since 3.4 for Array and Hash
- - Since 3.5 for other enumerables
- - requires `parser = future`
- ENDHEREDOC
+ If the block defines two parameters, they will be set to `index, value` (with index starting at 0) for all
+ enumerables except Hash, and to `key, value` for a Hash.
- def filter_Enumerator(enumerator, scope, pblock, serving_size)
- result = []
- index = 0
- if serving_size == 1
- begin
- loop { pblock.call(scope, it = enumerator.next) == true ? result << it : nil }
- rescue StopIteration
- end
- else
- begin
- loop do
- pblock.call(scope, index, it = enumerator.next) == true ? result << it : nil
- index = index +1
- end
- rescue StopIteration
- end
- end
- result
- end
+Example Using filter with two parameters
- receiver = args[0]
- pblock = args[1]
+ # selects all that end with 'berry' at an even numbered index
+ $a = ["raspberry", "blueberry", "orange"]
+ $a.filter |$index, $x| { $index % 2 == 0 and $x =~ /berry$/ } # raspberry
- raise ArgumentError, ("filter(): wrong argument type (#{pblock.class}; must be a parameterized block.") unless pblock.respond_to?(:puppet_lambda)
- serving_size = pblock.parameter_count
- if serving_size == 0
- raise ArgumentError, "filter(): block must define at least one parameter; value. Block has 0."
- end
+ # selects all that end with 'berry' and value >= 1
+ $a = {"raspberry"=>0, "blueberry"=>1, "orange"=>1}
+ $a.filter |$key, $x| { $x =~ /berry$/ and $x >= 1 } # blueberry
- case receiver
- when Hash
- if serving_size > 2
- raise ArgumentError, "filter(): block must define at most two parameters; key, value. Block has #{serving_size}; "+
- pblock.parameter_names.join(', ')
- end
- if serving_size == 1
- result = receiver.select {|x, y| pblock.call(self, [x, y]) }
- else
- result = receiver.select {|x, y| pblock.call(self, x, y) }
- end
- # Ruby 1.8.7 returns Array
- result = Hash[result] unless result.is_a? Hash
- result
- else
- if serving_size > 2
- raise ArgumentError, "filter(): block must define at most two parameters; index, value. Block has #{serving_size}; "+
- pblock.parameter_names.join(', ')
- end
- enum = Puppet::Pops::Types::Enumeration.enumerator(receiver)
- unless enum
- raise ArgumentError, ("filter(): wrong argument type (#{receiver.class}; must be something enumerable.")
- end
- filter_Enumerator(enum, self, pblock, serving_size)
- end
+- since 3.4 for Array and Hash
+- since 3.5 for other enumerables
+- note requires `parser = future`
+DOC
+) do |args|
+ function_fail(["filter() is only available when parser/evaluator future is in effect"])
end
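The documented behaviour maps onto Ruby's Enumerable#select; a rough analogue of the one-parameter example above:

    ["raspberry", "blueberry", "orange"].select { |x| x =~ /berry$/ }
    # => ["raspberry", "blueberry"]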
diff --git a/lib/puppet/parser/functions/include.rb b/lib/puppet/parser/functions/include.rb
index 29ef45d40..2bebbba8b 100644
--- a/lib/puppet/parser/functions/include.rb
+++ b/lib/puppet/parser/functions/include.rb
@@ -1,47 +1,35 @@
# Include the specified classes
Puppet::Parser::Functions::newfunction(:include, :arity => -2, :doc =>
"Declares one or more classes, causing the resources in them to be
evaluated and added to the catalog. Accepts a class name, an array of class
names, or a comma-separated list of class names.
The `include` function can be used multiple times on the same class and will
only declare a given class once. If a class declared with `include` has any
parameters, Puppet will automatically look up values for them in Hiera, using
`<class name>::<parameter name>` as the lookup key.
Contrast this behavior with resource-like class declarations
(`class {'name': parameter => 'value',}`), which must be used in only one place
per class and can directly set parameters. You should avoid using both `include`
and resource-like declarations with the same class.
The `include` function does not cause classes to be contained in the class
where they are declared. For that, see the `contain` function. It also
does not create a dependency relationship between the declared class and the
-surrounding class; for that, see the `require` function.") do |vals|
- if vals.is_a?(Array)
- # Protect against array inside array
- vals = vals.flatten
- else
- vals = [vals]
- end
+surrounding class; for that, see the `require` function.
- # The 'false' disables lazy evaluation.
- klasses = compiler.evaluate_classes(vals, self, false)
+When the future parser is used, you must use the class's full name;
+relative names are no longer allowed. In addition to names in string form,
+you may also directly use Class and Resource Type values that are produced by
+the future parser's resource and relationship expressions.
- missing = vals.find_all do |klass|
- ! klasses.include?(klass)
- end
+") do |vals|
- unless missing.empty?
- # Throw an error if we didn't evaluate all of the classes.
- str = "Could not find class"
- str += "es" if missing.length > 1
-
- str += " " + missing.join(", ")
-
- if n = namespaces and ! n.empty? and n != [""]
- str += " in namespaces #{@namespaces.join(", ")}"
- end
- self.fail Puppet::ParseError, str
- end
+ # Unify call patterns (if called with nested arrays), make names absolute if
+ # wanted and evaluate the classes
+ compiler.evaluate_classes(
+ transform_and_assert_classnames(
+ vals.is_a?(Array) ? vals.flatten : [vals]),
+ self, false)
end
diff --git a/lib/puppet/parser/functions/inline_epp.rb b/lib/puppet/parser/functions/inline_epp.rb
index cfc1597a1..1e8af8fe4 100644
--- a/lib/puppet/parser/functions/inline_epp.rb
+++ b/lib/puppet/parser/functions/inline_epp.rb
@@ -1,79 +1,76 @@
Puppet::Parser::Functions::newfunction(:inline_epp, :type => :rvalue, :arity => -2, :doc =>
"Evaluates an Embedded Puppet Template (EPP) string and returns the rendered text result as a String.
EPP supports the following tags:
* `<%= puppet expression %>` - This tag renders the value of the expression it contains.
* `<% puppet expression(s) %>` - This tag will execute the expression(s) it contains, but renders nothing.
* `<%# comment %>` - The tag and its content renders nothing.
* `<%%` or `%%>` - Renders a literal `<%` or `%>` respectively.
* `<%-` - Same as `<%` but suppresses any leading whitespace.
* `-%>` - Same as `%>` but suppresses any trailing whitespace on the same line (including line break).
-* `<%-( parameters )-%>` - When placed as the first tag declares the template's parameters.
+* `<%- |parameters| -%>` - When placed as the first tag declares the template's parameters.
Inline EPP supports the following visibilities of variables in scope which depends on how EPP parameters
are used - see further below:
* Global scope (i.e. top + node scopes) - global scope is always visible
* Global + Enclosing scope - if the EPP template does not declare parameters, and no arguments are given
* Global + all given arguments - if the EPP template does not declare parameters, and arguments are given
* Global + declared parameters - if the EPP declares parameters, given argument names must match
EPP supports parameters by placing an optional parameter list as the very first element in the EPP. As an example,
-`<%-( $x, $y, $z='unicorn' )-%>` when placed first in the EPP text declares that the parameters `x` and `y` must be
+`<%- |$x, $y, $z='unicorn'| -%>` when placed first in the EPP text declares that the parameters `x` and `y` must be
given as template arguments when calling `inline_epp`, and that `z` if not given as a template argument
defaults to `'unicorn'`. Template parameters are available as variables, e.g. arguments `$x`, `$y` and `$z` in the example.
Note that `<%-` must be used or any leading whitespace will be interpreted as text.
Arguments are passed to the template by calling `inline_epp` with a Hash as the last argument, where parameters
are bound to values, e.g. `inline_epp('...', {'x'=>10, 'y'=>20})`. Excess arguments may be given
(i.e. undeclared parameters) only if the EPP template does not declare any parameters at all.
Template parameters shadow variables in outer scopes.
Note: An inline template is best stated using a single-quoted string, or a heredoc since a double-quoted string
is subject to expression interpolation before the string is parsed as an EPP template. Here are examples
(using heredoc to define the EPP text):
# produces 'Hello local variable world!'
$x ='local variable'
inline_epptemplate(@(END:epp))
- <%-( $x )-%>
+ <%- |$x| -%>
Hello <%= $x %> world!
END
# produces 'Hello given argument world!'
$x ='local variable world'
inline_epptemplate(@(END:epp), { x =>'given argument'})
- <%-( $x )-%>
+ <%- |$x| -%>
Hello <%= $x %> world!
END
# produces 'Hello given argument world!'
$x ='local variable world'
inline_epptemplate(@(END:epp), { x =>'given argument'})
- <%-( $x )-%>
+ <%- |$x| -%>
Hello <%= $x %>!
END
# results in error, missing value for y
$x ='local variable world'
inline_epptemplate(@(END:epp), { x =>'given argument'})
- <%-( $x, $y )-%>
+ <%- |$x, $y| -%>
Hello <%= $x %>!
END
# Produces 'Hello given argument planet'
$x ='local variable world'
inline_epptemplate(@(END:epp), { x =>'given argument'})
- <%-( $x, $y=planet)-%>
+ <%- |$x, $y=planet| -%>
Hello <%= $x %> <%= $y %>!
END
- Since 3.5
- Requires Future Parser") do |arguments|
- # Requires future parser
- unless Puppet[:parser] == "future"
- raise ArgumentError, "inline_epp(): function is only available when --parser future is in effect"
- end
- Puppet::Pops::Evaluator::EppEvaluator.inline_epp(self, arguments[0], arguments[1])
+
+ function_fail(["inline_epp() is only available when parser/evaluator future is in effect"])
end
diff --git a/lib/puppet/parser/functions/lookup.rb b/lib/puppet/parser/functions/lookup.rb
index 55d56f452..e36c4c378 100644
--- a/lib/puppet/parser/functions/lookup.rb
+++ b/lib/puppet/parser/functions/lookup.rb
@@ -1,144 +1,144 @@
Puppet::Parser::Functions.newfunction(:lookup, :type => :rvalue, :arity => -2, :doc => <<-'ENDHEREDOC') do |args|
Looks up data defined using Puppet Bindings and Hiera.
The function is callable with one to three arguments and optionally with a code block to further process the result.
The lookup function can be called in one of these ways:
lookup(name)
lookup(name, type)
lookup(name, type, default)
lookup(options_hash)
lookup(name, options_hash)
The function may optionally be called with a code block / lambda with the following signatures:
lookup(...) |$result| { ... }
lookup(...) |$name, $result| { ... }
lookup(...) |$name, $result, $default| { ... }
The longer signatures are useful when the block needs to raise an error (it can report the name), or
if it needs to know if the given default value was selected.
The code block receives the following three arguments:
* The `$name` is the last name that was looked up (*the* name if only one name was looked up)
* The `$result` is the looked up value (or the default value if not found).
* The `$default` is the given default value (`undef` if not given).
The block, if present, is called with the result from the lookup. The value produced by the block is also what is
produced by the `lookup` function.
When a block is used, it is the user's responsibility to call `error` if the result does not meet additional
criteria, or if an undef value is not acceptable. If a value is not found, and a default has been
specified, the default value is given to the block.
The content of the options hash is:
* `name` - The name or array of names to lookup (first found is returned)
* `type` - The type to assert (a Type or a type specification in string form)
* `default` - The default value if there was no value found (must comply with the data type)
* `accept_undef` - (default `false`) An `undef` result is accepted if this option is set to `true`.
* `override` - a hash mapping names to values that are used instead of the underlying bindings. If the name
is found here it wins. Defaults to an empty hash.
* `extra` - a hash mapping names to values that are used as a last resort to obtain a value. Defaults to an
empty hash.
When the call is of the form `lookup(name, options_hash)`, or `lookup(name, type, options_hash)`, the given name
argument wins over the `options_hash['name']`.
The search order is `override` (if given), then `binder`, then `hiera` and finally `extra` (if given). The first to produce
a value other than undef for a given name wins.
The type specification is one of:
* A type in the Puppet Type System, e.g.:
* `Integer`, an integral value with optional range e.g.:
* `Integer[0, default]` - 0 or positive
* `Integer[default, -1]` - negative,
* `Integer[1,100]` - value between 1 and 100 inclusive
* `String`- any string
* `Float` - floating point number (same signature as for Integer for `Integer` ranges)
* `Boolean` - true or false (strict)
* `Array` - an array (of Data by default), or parameterized as `Array[<element_type>]`, where
`<element_type>` is the expected type of elements
* `Hash`, - a hash (of default `Literal` keys and `Data` values), or parameterized as
`Hash[<value_type>]`, `Hash[<key_type>, <value_type>]`, where `<key_type>`, and
`<value_type>` are the types of the keys and values respectively
(key is `Literal` by default).
* `Data` - abstract type representing any `Literal`, `Array[Data]`, or `Hash[Literal, Data]`
* `Pattern[<p1>, <p2>, ..., <pn>]` - an enumeration of valid patterns (one or more) where
a pattern is a regular expression string or regular expression,
e.g. `Pattern['.com$', '.net$']`, `Pattern[/[a-z]+[0-9]+/]`
* `Enum[<s1>, <s2>, ..., <sn>]`, - an enumeration of exact string values (one or more)
e.g. `Enum[blue, red, green]`.
* `Variant[<t1>, <t2>,...<tn>]` - matches one of the listed types (at least one must be given)
e.g. `Variant[Integer[8000,8999], Integer[20000, 99999]]` to accept a value in either range
* `Regexp`- a regular expression (i.e. the result is a regular expression, not a string
matching a regular expression).
* A string containing a type description - one of the types as shown above but in string form.
If the function is called without specifying a default value, and nothing is bound to the given name
an error is raised unless the option `accept_undef` is true. If a block is given it must produce an acceptable
value (or call `error`). If the block does not produce an acceptable value an error is
raised.
Examples:
When called with one argument; **the name**, it
returns the bound value with the given name after having asserted it has the default datatype `Data`:
lookup('the_name')
When called with two arguments; **the name**, and **the expected type**, it
returns the bound value with the given name after having asserted it has the given data
type ('String' in the example):
lookup('the_name', 'String') # 3.x
lookup('the_name', String) # parser future
When called with three arguments, **the name**, the **expected type**, and a **default**, it
returns the bound value with the given name, or the default after having asserted the value
has the given data type (`String` in the example above):
lookup('the_name', 'String', 'Fred') # 3x
lookup('the_name', String, 'Fred') # parser future
Using a lambda to process the looked up result - asserting that it starts with an upper case letter:
# only with parser future
lookup('the_size', Integer[1,100]) |$result| {
if $large_value_allowed and $result > 10
{ error 'Values larger than 10 are not allowed'}
$result
}
Including the name in the error
# only with parser future
lookup('the_size', Integer[1,100]) |$name, $result| {
if $large_value_allowed and $result > 10
{ error 'The bound value for '${name}' can not be larger than 10 in this configuration'}
$result
}
When using a block, the value it produces is also asserted against the given type, and it may not be
`undef` unless the option `'accept_undef'` is `true`.
All options work as the corresponding (direct) argument. The `first_found` option and
`accept_undef` are however only available as options.
Using first_found semantics option to return the first name that has a bound value:
lookup(['apache::port', 'nginx::port'], 'Integer', 80)
If you want to make lookup return undef when no value was found instead of raising an error:
$are_you_there = lookup('peekaboo', { accept_undef => true} )
$are_you_there = lookup('peekaboo', { accept_undef => true}) |$result| { $result }
ENDHEREDOC
unless Puppet[:binder] || Puppet[:parser] == 'future'
- raise Puppet::ParseError, "The lookup function is only available with settings --binder true, or --parser future"
+ raise Puppet::ParseError, "The lookup function is only available with settings --binder true, or --parser future"
end
Puppet::Pops::Binder::Lookup.lookup(self, args)
end
diff --git a/lib/puppet/parser/functions/map.rb b/lib/puppet/parser/functions/map.rb
index 8bc9fd383..c2ca3aae6 100644
--- a/lib/puppet/parser/functions/map.rb
+++ b/lib/puppet/parser/functions/map.rb
@@ -1,96 +1,43 @@
-require 'puppet/parser/ast/lambda'
-
Puppet::Parser::Functions::newfunction(
-:map,
-:type => :rvalue,
-:arity => 2,
-:doc => <<-'ENDHEREDOC') do |args|
- Applies a parameterized block to each element in a sequence of entries from the first
- argument and returns an array with the result of each invocation of the parameterized block.
-
- This function takes two mandatory arguments: the first should be an Array, Hash, or of Enumerable type
- (integer, Integer range, or String), and the second a parameterized block as produced by the puppet syntax:
-
- $a.map |$x| { ... }
- map($a) |$x| { ... }
-
- When the first argument `$a` is an Array or of enumerable type, the block is called with each entry in turn.
- When the first argument is a hash the entry is an array with `[key, value]`.
-
- *Examples*
+ :map,
+ :type => :rvalue,
+ :arity => -3,
+ :doc => <<-DOC
+Applies a parameterized block to each element in a sequence of entries from the first
+argument and returns an array with the result of each invocation of the parameterized block.
- # Turns hash into array of values
- $a.map |$x|{ $x[1] }
+This function takes two mandatory arguments: the first should be an Array, Hash, or of Enumerable type
+(integer, Integer range, or String), and the second a parameterized block as produced by the puppet syntax:
- # Turns hash into array of keys
- $a.map |$x| { $x[0] }
+ $a.map |$x| { ... }
+ map($a) |$x| { ... }
- When using a block with 2 parameters, the element's index (starting from 0) for an array, and the key for a hash
- is given to the block's first parameter, and the value is given to the block's second parameter.args.
+When the first argument `$a` is an Array or of enumerable type, the block is called with each entry in turn.
+When the first argument is a hash the entry is an array with `[key, value]`.
- *Examples*
+Example Using map with one parameter
- # Turns hash into array of values
- $a.map |$key,$val|{ $val }
+ # Turns hash into array of values
+ $a.map |$x|{ $x[1] }
- # Turns hash into array of keys
- $a.map |$key,$val|{ $key }
+ # Turns hash into array of keys
+ $a.map |$x| { $x[0] }
- - Since 3.4 for Array and Hash
- - Since 3.5 for other enumerables, and support for blocks with 2 parameters
- - requires `parser = future`
- ENDHEREDOC
+When using a block with 2 parameters, the element's index (starting from 0) for an array, and the key for a hash
+is given to the block's first parameter, and the value is given to the block's second parameter.
- def map_Enumerator(enumerator, scope, pblock, serving_size)
- result = []
- index = 0
- if serving_size == 1
- begin
- loop { result << pblock.call(scope, enumerator.next) }
- rescue StopIteration
- end
- else
- begin
- loop do
- result << pblock.call(scope, index, enumerator.next)
- index = index +1
- end
- rescue StopIteration
- end
- end
- result
- end
+Example Using map with two parameters
- receiver = args[0]
- pblock = args[1]
+ # Turns hash into array of values
+ $a.map |$key,$val|{ $val }
- raise ArgumentError, ("map(): wrong argument type (#{pblock.class}; must be a parameterized block.") unless pblock.respond_to?(:puppet_lambda)
- serving_size = pblock.parameter_count
- if serving_size == 0
- raise ArgumentError, "map(): block must define at least one parameter; value. Block has 0."
- end
- case receiver
- when Hash
- if serving_size > 2
- raise ArgumentError, "map(): block must define at most two parameters; key, value.args Block has #{serving_size}; "+
- pblock.parameter_names.join(', ')
- end
- if serving_size == 1
- result = receiver.map {|x, y| pblock.call(self, [x, y]) }
- else
- result = receiver.map {|x, y| pblock.call(self, x, y) }
- end
- else
- if serving_size > 2
- raise ArgumentError, "map(): block must define at most two parameters; index, value. Block has #{serving_size}; "+
- pblock.parameter_names.join(', ')
- end
+ # Turns hash into array of keys
+ $a.map |$key,$val|{ $key }
- enum = Puppet::Pops::Types::Enumeration.enumerator(receiver)
- unless enum
- raise ArgumentError, ("map(): wrong argument type (#{receiver.class}; must be something enumerable.")
- end
- result = map_Enumerator(enum, self, pblock, serving_size)
- end
- result
+- since 3.4 for Array and Hash
+- since 3.5 for other enumerables, and support for blocks with 2 parameters
+- note requires `parser = future`
+DOC
+) do |args|
+ function_fail(["map() is only available when parser/evaluator future is in effect"])
end
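Again, a rough Ruby analogue of the documented hash-mapping behaviour (not the future-parser implementation):

    h = { 'a' => 1, 'b' => 2, 'c' => 3 }
    h.map { |entry| entry[1] }   # values => [1, 2, 3]
    h.map { |key, val| key }     # keys   => ["a", "b", "c"]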
diff --git a/lib/puppet/parser/functions/match.rb b/lib/puppet/parser/functions/match.rb
new file mode 100644
index 000000000..33bdf3e88
--- /dev/null
+++ b/lib/puppet/parser/functions/match.rb
@@ -0,0 +1,28 @@
+Puppet::Parser::Functions::newfunction(
+ :match,
+ :arity => 2,
+ :doc => <<-DOC
+Returns the match result of matching a String or Array[String] with one of:
+
+* Regexp
+* String - transformed to a Regexp
+* Pattern type
+* Regexp type
+
+Returns an Array with the entire match at index 0, and each subsequent submatch at index 1-n.
+If there was no match, `undef` is returned. If the value to match is an Array, an array
+with mapped match results is returned.
+
+Example matching:
+
+ "abc123".match(/([a-z]+)[1-9]+/) # => ["abc"]
+ "abc123".match(/([a-z]+)([1-9]+)/) # => ["abc", "123"]
+
+See the documentation for "The Puppet Type System" for more information about types.
+
+- since 3.7.0
+- note requires future parser
+DOC
+) do |args|
+ function_fail(["match() is only available when parser/evaluator future is in effect"])
+end
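For comparison, plain Ruby's MatchData places the full match at index 0 followed by the capture groups, which is the layout the description above refers to:

    "abc123".match(/([a-z]+)([1-9]+)/).to_a
    # => ["abc123", "abc", "123"]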
diff --git a/lib/puppet/parser/functions/reduce.rb b/lib/puppet/parser/functions/reduce.rb
index 078ebc2e9..4fe90a239 100644
--- a/lib/puppet/parser/functions/reduce.rb
+++ b/lib/puppet/parser/functions/reduce.rb
@@ -1,100 +1,71 @@
Puppet::Parser::Functions::newfunction(
-:reduce,
-:type => :rvalue,
-:arity => -2,
-:doc => <<-'ENDHEREDOC') do |args|
- Applies a parameterized block to each element in a sequence of entries from the first
- argument (_the enumerable_) and returns the last result of the invocation of the parameterized block.
-
- This function takes two mandatory arguments: the first should be an Array, Hash, or something of
- enumerable type, and the last a parameterized block as produced by the puppet syntax:
-
- $a.reduce |$memo, $x| { ... }
- reduce($a) |$memo, $x| { ... }
-
- When the first argument is an Array or someting of an enumerable type, the block is called with each entry in turn.
- When the first argument is a hash each entry is converted to an array with `[key, value]` before being
- fed to the block. An optional 'start memo' value may be supplied as an argument between the array/hash
- and mandatory block.
-
- $a.reduce(start) |$memo, $x| { ... }
- reduce($a, start) |$memo, $x| { ... }
-
- If no 'start memo' is given, the first invocation of the parameterized block will be given the first and second
- elements of the enumeration, and if the enumerable has fewer than 2 elements, the first
- element is produced as the result of the reduction without invocation of the block.
-
- On each subsequent invocation, the produced value of the invoked parameterized block is given as the memo in the
- next invocation.
-
- *Examples*
-
- # Reduce an array
- $a = [1,2,3]
- $a.reduce |$memo, $entry| { $memo + $entry }
- #=> 6
-
- # Reduce hash values
- $a = {a => 1, b => 2, c => 3}
- $a.reduce |$memo, $entry| { [sum, $memo[1]+$entry[1]] }
- #=> [sum, 6]
-
- # reverse a string
- "abc".reduce |$memo, $char| { "$char$memo" }
- #=>"cbe"
-
- It is possible to provide a starting 'memo' as an argument.
-
- *Examples*
-
- # Reduce an array
- $a = [1,2,3]
- $a.reduce(4) |$memo, $entry| { $memo + $entry }
- #=> 10
-
- # Reduce hash values
- $a = {a => 1, b => 2, c => 3}
- $a.reduce([na, 4]) |$memo, $entry| { [sum, $memo[1]+$entry[1]] }
- #=> [sum, 10]
-
- *Examples*
-
- Integer[1,4].reduce |$memo, $x| { $memo + $x }
- #=> 10
-
- - Since 3.2 for Array and Hash
- - Since 3.5 for additional enumerable types
- - requires `parser = future`.
- ENDHEREDOC
-
- require 'puppet/parser/ast/lambda'
-
- case args.length
- when 2
- pblock = args[1]
- when 3
- pblock = args[2]
- else
- raise ArgumentError, ("reduce(): wrong number of arguments (#{args.length}; expected 2 or 3, got #{args.length})")
- end
- unless pblock.respond_to?(:puppet_lambda)
- raise ArgumentError, ("reduce(): wrong argument type (#{pblock.class}; must be a parameterized block.")
- end
- receiver = args[0]
- enum = Puppet::Pops::Types::Enumeration.enumerator(receiver)
- unless enum
- raise ArgumentError, ("reduce(): wrong argument type (#{receiver.class}; must be something enumerable.")
- end
-
- serving_size = pblock.parameter_count
- if serving_size != 2
- raise ArgumentError, "reduce(): block must define 2 parameters; memo, value. Block has #{serving_size}; "+
- pblock.parameter_names.join(', ')
- end
-
- if args.length == 3
- enum.reduce(args[1]) {|memo, x| pblock.call(self, memo, x) }
- else
- enum.reduce {|memo, x| pblock.call(self, memo, x) }
- end
+ :reduce,
+ :type => :rvalue,
+ :arity => -3,
+ :doc => <<-DOC
+Applies a parameterized block to each element in a sequence of entries from the first
+argument (_the enumerable_) and returns the last result of the invocation of the parameterized block.
+
+This function takes two mandatory arguments: the first should be an Array, Hash, or something of
+enumerable type, and the last a parameterized block as produced by the puppet syntax:
+
+ $a.reduce |$memo, $x| { ... }
+ reduce($a) |$memo, $x| { ... }
+
+When the first argument is an Array or something of an enumerable type, the block is called with each entry in turn.
+When the first argument is a hash each entry is converted to an array with `[key, value]` before being
+fed to the block. An optional 'start memo' value may be supplied as an argument between the array/hash
+and mandatory block.
+
+ $a.reduce(start) |$memo, $x| { ... }
+ reduce($a, start) |$memo, $x| { ... }
+
+If no 'start memo' is given, the first invocation of the parameterized block will be given the first and second
+elements of the enumeration, and if the enumerable has fewer than 2 elements, the first
+element is produced as the result of the reduction without invocation of the block.
+
+On each subsequent invocation, the produced value of the invoked parameterized block is given as the memo in the
+next invocation.
+
+Example Using reduce
+
+ # Reduce an array
+ $a = [1,2,3]
+ $a.reduce |$memo, $entry| { $memo + $entry }
+ #=> 6
+
+ # Reduce hash values
+ $a = {a => 1, b => 2, c => 3}
+ $a.reduce |$memo, $entry| { [sum, $memo[1]+$entry[1]] }
+ #=> [sum, 6]
+
+ # reverse a string
+ "abc".reduce |$memo, $char| { "$char$memo" }
+ #=>"cba"
+
+It is possible to provide a starting 'memo' as an argument.
+
+Example Using reduce with given start 'memo'
+
+ # Reduce an array
+ $a = [1,2,3]
+ $a.reduce(4) |$memo, $entry| { $memo + $entry }
+ #=> 10
+
+ # Reduce hash values
+ $a = {a => 1, b => 2, c => 3}
+ $a.reduce([na, 4]) |$memo, $entry| { [sum, $memo[1]+$entry[1]] }
+ #=> [sum, 10]
+
+Example Using reduce with an Integer range
+
+ Integer[1,4].reduce |$memo, $x| { $memo + $x }
+ #=> 10
+
+- since 3.2 for Array and Hash
+- since 3.5 for additional enumerable types
+- note requires `parser = future`.
+DOC
+) do |args|
+ function_fail(["reduce() is only available when parser/evaluator future is in effect"])
end
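As an illustrative aside, the reduction semantics documented above map closely onto Ruby's own Enumerable#reduce; the sketch below is hypothetical helper code, not part of the patch:

    # reduce_like mirrors the documented behaviour: without a start memo the
    # first two elements seed the first call, and a single-element enumerable
    # is returned as-is without invoking the block.
    def reduce_like(enumerable, memo = nil, &block)
      memo.nil? ? enumerable.reduce(&block) : enumerable.reduce(memo, &block)
    end

    reduce_like([1, 2, 3]) { |m, x| m + x }      #=> 6
    reduce_like([1, 2, 3], 4) { |m, x| m + x }   #=> 10
    # Hash entries arrive as [key, value] pairs
    reduce_like({ a: 1, b: 2, c: 3 }.to_a) { |m, x| [:sum, m[1] + x[1]] }  #=> [:sum, 6]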
diff --git a/lib/puppet/parser/functions/require.rb b/lib/puppet/parser/functions/require.rb
index 819b82619..d6350fc4b 100644
--- a/lib/puppet/parser/functions/require.rb
+++ b/lib/puppet/parser/functions/require.rb
@@ -1,53 +1,61 @@
# Requires the specified classes
Puppet::Parser::Functions::newfunction(
:require,
:arity => -2,
:doc =>"Evaluate one or more classes, adding the required class as a dependency.
The relationship metaparameters work well for specifying relationships
between individual resources, but they can be clumsy for specifying
relationships between classes. This function is a superset of the
'include' function, adding a class relationship so that the requiring
class depends on the required class.
Warning: using require in place of include can lead to unwanted dependency cycles.
For instance, the following manifest, with 'require' instead of 'include', would produce a nasty dependency cycle, because notify imposes a 'before' relationship between File[/foo] and Service[foo]:
class myservice {
service { foo: ensure => running }
}
class otherstuff {
include myservice
file { '/foo': notify => Service[foo] }
}
Note that this function only works with clients 0.25 and later, and it will
fail if used with earlier clients.
+When the future parser is used, you must use the class's full name;
+relative names are no longer allowed. In addition to names in string form,
+you may also directly use Class and Resource Type values that are produced by
+the future parser's resource and relationship expressions.
") do |vals|
- # Verify that the 'include' function is loaded
- method = Puppet::Parser::Functions.function(:include)
-
- send(method, vals)
- vals = [vals] unless vals.is_a?(Array)
+ # Make call patterns uniform and protected against nested arrays, also make
+ # names absolute if so desired.
+ vals = transform_and_assert_classnames(vals.is_a?(Array) ? vals.flatten : [vals])
+
+ # This is the same as calling the include function (but faster), since include
+ # would otherwise need to perform the optional absolute-name transformation again
+ # (for no reason, as the names have already been made absolute here).
+ #
+ compiler.evaluate_classes(vals, self, false)
vals.each do |klass|
# lookup the class in the scopes
if classobj = find_hostclass(klass)
klass = classobj.name
else
raise Puppet::ParseError, "Could not find class #{klass}"
end
# This is a bit hackish, in some ways, but it's the only way
# to configure a dependency that will make it to the client.
# The 'obvious' way is just to add an edge in the catalog,
# but that is considered a containment edge, not a dependency
# edge, so it usually gets lost on the client.
ref = Puppet::Resource.new(:class, klass)
resource.set_parameter(:require, [resource[:require]].flatten.compact << ref)
end
end
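The argument normalisation added above (wrap a scalar, flatten nested arrays) is easy to see in isolation; a minimal Ruby sketch with hypothetical class names:

    # Mirror of the call-pattern normalisation: a scalar becomes a one-element
    # array and nested arrays are flattened before the class names are used.
    def normalize(vals)
      vals.is_a?(Array) ? vals.flatten : [vals]
    end

    normalize('apache')               #=> ["apache"]
    normalize([['apache'], 'ntp'])    #=> ["apache", "ntp"]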
diff --git a/lib/puppet/parser/functions/search.rb b/lib/puppet/parser/functions/search.rb
index 04c7579d7..8055e38b1 100644
--- a/lib/puppet/parser/functions/search.rb
+++ b/lib/puppet/parser/functions/search.rb
@@ -1,7 +1,12 @@
Puppet::Parser::Functions::newfunction(:search, :arity => -2, :doc => "Add another namespace for this class to search.
This allows you to create classes with sets of definitions and add
- those classes to another class's search path.") do |vals|
+ those classes to another class's search path.
+
+ Deprecated in Puppet 3.7.0, to be removed in Puppet 4.0.0.") do |vals|
+
+ Puppet.deprecation_warning("The 'search' function is deprecated. See http://links.puppetlabs.com/search-function-deprecation")
+
vals.each do |val|
add_namespace(val)
end
end
diff --git a/lib/puppet/parser/functions/select.rb b/lib/puppet/parser/functions/select.rb
deleted file mode 100644
index 93924f9d0..000000000
--- a/lib/puppet/parser/functions/select.rb
+++ /dev/null
@@ -1,15 +0,0 @@
-Puppet::Parser::Functions::newfunction(
-:select,
-:type => :rvalue,
-:arity => 2,
-:doc => <<-'ENDHEREDOC') do |args|
- The 'select' function has been renamed to 'filter'. Please update your manifests.
-
- The select function is reserved for future use.
- - Removed as of 3.4
- - requires `parser = future`.
- ENDHEREDOC
-
- raise NotImplementedError,
- "The 'select' function has been renamed to 'filter'. Please update your manifests."
-end
diff --git a/lib/puppet/parser/functions/slice.rb b/lib/puppet/parser/functions/slice.rb
index bcc830f74..d57b99cff 100644
--- a/lib/puppet/parser/functions/slice.rb
+++ b/lib/puppet/parser/functions/slice.rb
@@ -1,116 +1,48 @@
Puppet::Parser::Functions::newfunction(
-:slice,
-:type => :rvalue,
-:arity => -2,
-:doc => <<-'ENDHEREDOC') do |args|
- Applies a parameterized block to each _slice_ of elements in a sequence of selected entries from the first
- argument and returns the first argument, or if no block is given returns a new array with a concatenation of
- the slices.
+ :slice,
+ :type => :rvalue,
+ :arity => -3,
+ :doc => <<-DOC
+Applies a parameterized block to each _slice_ of elements in a sequence of selected entries from the first
+argument and returns the first argument, or if no block is given returns a new array with a concatenation of
+the slices.
- This function takes two mandatory arguments: the first, `$a`, should be an Array, Hash, or something of
- enumerable type (integer, Integer range, or String), and the second, `$n`, the number of elements to include
- in each slice. The optional third argument should be a a parameterized block as produced by the puppet syntax:
+This function takes two mandatory arguments: the first, `$a`, should be an Array, Hash, or something of
+enumerable type (integer, Integer range, or String), and the second, `$n`, the number of elements to include
+in each slice. The optional third argument should be a parameterized block as produced by the puppet syntax:
- $a.slice($n) |$x| { ... }
- slice($a) |$x| { ... }
+ $a.slice($n) |$x| { ... }
+ slice($a) |$x| { ... }
- The parameterized block should have either one parameter (receiving an array with the slice), or the same number
- of parameters as specified by the slice size (each parameter receiving its part of the slice).
- In case there are fewer remaining elements than the slice size for the last slice it will contain the remaining
- elements. When the block has multiple parameters, excess parameters are set to :undef for an array or
- enumerable type, and to empty arrays for a Hash.
+The parameterized block should have either one parameter (receiving an array with the slice), or the same number
+of parameters as specified by the slice size (each parameter receiving its part of the slice).
+If the last slice has fewer remaining elements than the slice size, it will contain only the remaining
+elements. When the block has multiple parameters, excess parameters are set to undef for an array or
+enumerable type, and to empty arrays for a Hash.
- $a.slice(2) |$first, $second| { ... }
+ $a.slice(2) |$first, $second| { ... }
- When the first argument is a Hash, each `key,value` entry is counted as one, e.g, a slice size of 2 will produce
- an array of two arrays with key, and value.
+When the first argument is a Hash, each `key,value` entry is counted as one, e.g., a slice size of 2 will produce
+an array of two arrays, each holding a key and a value.
- $a.slice(2) |$entry| { notice "first ${$entry[0]}, second ${$entry[1]}" }
- $a.slice(2) |$first, $second| { notice "first ${first}, second ${second}" }
+Example Using slice with Hash
- When called without a block, the function produces a concatenated result of the slices.
+ $a.slice(2) |$entry| { notice "first ${$entry[0]}, second ${$entry[1]}" }
+ $a.slice(2) |$first, $second| { notice "first ${first}, second ${second}" }
- slice([1,2,3,4,5,6], 2) # produces [[1,2], [3,4], [5,6]]
- slice(Integer[1,6], 2) # produces [[1,2], [3,4], [5,6]]
- slice(4,2) # produces [[0,1], [2,3]]
- slice('hello',2) # produces [[h, e], [l, l], [o]]
+When called without a block, the function produces a concatenated result of the slices.
- - Since 3.2 for Array and Hash
- - Since 3.5 for additional enumerable types
- - requires `parser = future`.
- ENDHEREDOC
- require 'puppet/parser/ast/lambda'
- require 'puppet/parser/scope'
+Example Using slice without a block
- def each_Common(o, slice_size, filler, scope, pblock)
- serving_size = pblock ? pblock.parameter_count : 1
- if serving_size == 0
- raise ArgumentError, "slice(): block must define at least one parameter. Block has 0."
- end
- unless serving_size == 1 || serving_size == slice_size
- raise ArgumentError, "slice(): block must define one parameter, or " +
- "the same number of parameters as the given size of the slice (#{slice_size}). Block has #{serving_size}; "+
- pblock.parameter_names.join(', ')
- end
- enumerator = o.each_slice(slice_size)
- result = []
- if serving_size == 1
- begin
- if pblock
- loop do
- pblock.call(scope, enumerator.next)
- end
- else
- loop do
- result << enumerator.next
- end
- end
- rescue StopIteration
- end
- else
- begin
- loop do
- a = enumerator.next
- if a.size < serving_size
- a = a.dup.fill(filler, a.length...serving_size)
- end
- pblock.call(scope, *a)
- end
- rescue StopIteration
- end
- end
- if pblock
- o
- else
- result
- end
- end
+ slice([1,2,3,4,5,6], 2) # produces [[1,2], [3,4], [5,6]]
+ slice(Integer[1,6], 2) # produces [[1,2], [3,4], [5,6]]
+ slice(4,2) # produces [[0,1], [2,3]]
+ slice('hello',2) # produces [[h, e], [l, l], [o]]
- raise ArgumentError, ("slice(): wrong number of arguments (#{args.length}; must be 2 or 3)") unless args.length == 2 || args.length == 3
- if args.length >= 2
- begin
- slice_size = Puppet::Parser::Scope.number?(args[1])
- rescue
- raise ArgumentError, ("slice(): wrong argument type (#{args[1]}; must be number.")
- end
- end
- raise ArgumentError, ("slice(): wrong argument type (#{args[1]}; must be number.") unless slice_size
- raise ArgumentError, ("slice(): wrong argument value: #{slice_size}; is not a positive integer number > 0") unless slice_size.is_a?(Fixnum) && slice_size > 0
- receiver = args[0]
-
- # the block is optional, ok if nil, function then produces an array
- pblock = args[2]
- raise ArgumentError, ("slice(): wrong argument type (#{args[2].class}; must be a parameterized block.") unless pblock.respond_to?(:puppet_lambda) || args.length == 2
-
- case receiver
- when Hash
- each_Common(receiver, slice_size, [], self, pblock)
- else
- enum = Puppet::Pops::Types::Enumeration.enumerator(receiver)
- if enum.nil?
- raise ArgumentError, ("slice(): given type '#{tc.string(receiver)}' is not enumerable")
- end
- result = each_Common(enum, slice_size, :undef, self, pblock)
- pblock ? receiver : result
- end
+- since 3.2 for Array and Hash
+- since 3.5 for additional enumerable types
+- note requires `parser = future`.
+DOC
+) do |args|
+ function_fail(["slice() is only available when parser/evaluator future is in effect"])
end
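As an illustrative aside (plain Ruby, hypothetical helper name), the slicing described above corresponds to Enumerable#each_slice, with the last slice padded only when a multi-parameter block needs a filler:

    # Slices of size n; the last slice is padded with a filler value when it
    # is shorter than n, mirroring the multi-parameter block case above.
    def slices_with_filler(enumerable, n, filler = nil)
      enumerable.each_slice(n).map do |a|
        a.size < n ? a + [filler] * (n - a.size) : a
      end
    end

    [1, 2, 3, 4, 5, 6].each_slice(2).to_a   #=> [[1, 2], [3, 4], [5, 6]]
    'hello'.chars.each_slice(2).to_a        #=> [["h", "e"], ["l", "l"], ["o"]]
    slices_with_filler([1, 2, 3, 4, 5], 2)  #=> [[1, 2], [3, 4], [5, nil]]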
diff --git a/lib/puppet/parser/functions/template.rb b/lib/puppet/parser/functions/template.rb
index 0bd5a5424..c789347ac 100644
--- a/lib/puppet/parser/functions/template.rb
+++ b/lib/puppet/parser/functions/template.rb
@@ -1,23 +1,30 @@
Puppet::Parser::Functions::newfunction(:template, :type => :rvalue, :arity => -2, :doc =>
- "Evaluate a template and return its value. See
- [the templating docs](http://docs.puppetlabs.com/guides/templating.html) for
- more information.
+ "Loads an ERB template from a module, evaluates it, and returns the resulting
+ value as a string.
- Note that if multiple templates are specified, their output is all
- concatenated and returned as the output of the function.") do |vals|
+ The argument to this function should be a `<MODULE NAME>/<TEMPLATE FILE>`
+ reference, which will load `<TEMPLATE FILE>` from a module's `templates`
+ directory. (For example, the reference `apache/vhost.conf.erb` will load the
+ file `<MODULES DIRECTORY>/apache/templates/vhost.conf.erb`.)
+
+ This function can also accept:
+
+ * An absolute path, which can load a template file from anywhere on disk.
+ * Multiple arguments, which will evaluate all of the specified templates and
+ return their outputs concatenated into a single string.") do |vals|
vals.collect do |file|
# Use a wrapper, so the template can't get access to the full
# Scope object.
debug "Retrieving template #{file}"
wrapper = Puppet::Parser::TemplateWrapper.new(self)
wrapper.file = file
begin
wrapper.result
rescue => detail
info = detail.backtrace.first.split(':')
raise Puppet::ParseError,
"Failed to parse template #{file}:\n Filepath: #{info[0]}\n Line: #{info[1]}\n Detail: #{detail}\n"
end
end.join("")
end
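The multi-template concatenation described in the new docstring can be approximated with Ruby's standard-library ERB; this sketch uses inline template strings rather than Puppet's module-based TemplateWrapper lookup:

    require 'erb'

    # Evaluate each template and join the results into a single string,
    # mirroring how the function concatenates the output of several templates.
    templates = ["port=<%= 8080 %>\n", "workers=<%= 2 * 2 %>\n"]
    output = templates.collect { |t| ERB.new(t).result(binding) }.join("")
    puts output   # => "port=8080\nworkers=4\n"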
diff --git a/lib/puppet/parser/functions/with.rb b/lib/puppet/parser/functions/with.rb
new file mode 100644
index 000000000..07beff6fd
--- /dev/null
+++ b/lib/puppet/parser/functions/with.rb
@@ -0,0 +1,21 @@
+Puppet::Parser::Functions::newfunction(
+ :with,
+ :type => :rvalue,
+ :arity => -1,
+ :doc => <<-DOC
+Call a lambda code block with the given arguments. Since the parameters of the lambda
+are local to the lambda's scope, this can be used to create private sections
+of logic in a class so that the variables are not visible outside of the
+class.
+
+Example:
+
+ # notices the array [1, 2, 'foo']
+ with(1, 2, 'foo') |$x, $y, $z| { notice [$x, $y, $z] }
+
+- since 3.7.0
+- note requires future parser
+DOC
+) do |args|
+ function_fail(["with() is only available when parser/evaluator future is in effect"])
+end
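The locality that the `with` docstring describes is essentially that of a Ruby lambda: the parameters exist only inside the block. A minimal Ruby analogue (not Puppet code):

    # x, y and z are local to the lambda and are not visible after the call,
    # which is the "private section of logic" effect described above.
    show = ->(x, y, z) { puts [x, y, z].inspect }
    show.call(1, 2, 'foo')   # prints [1, 2, "foo"]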
diff --git a/lib/puppet/parser/lexer.rb b/lib/puppet/parser/lexer.rb
index 986183241..de1249991 100644
--- a/lib/puppet/parser/lexer.rb
+++ b/lib/puppet/parser/lexer.rb
@@ -1,608 +1,608 @@
# the scanner/lexer
require 'forwardable'
require 'strscan'
require 'puppet'
require 'puppet/util/methodhelper'
module Puppet
class LexError < RuntimeError; end
end
module Puppet::Parser; end
class Puppet::Parser::Lexer
extend Forwardable
attr_reader :last, :file, :lexing_context, :token_queue
attr_accessor :line, :indefine
alias :indefine? :indefine
# Returns the position on the line.
# This implementation always returns nil. It is here for API reasons in Puppet::Error
# which needs to support both --parser current, and --parser future.
#
def pos
# Make the lexer comply with newer API. It does not produce a pos...
nil
end
-
+
def lex_error msg
raise Puppet::LexError.new(msg)
end
class Token
ALWAYS_ACCEPTABLE = Proc.new { |context| true }
include Puppet::Util::MethodHelper
attr_accessor :regex, :name, :string, :skip, :incr_line, :skip_text, :accumulate
alias skip? skip
alias accumulate? accumulate
def initialize(string_or_regex, name, options = {})
if string_or_regex.is_a?(String)
@name, @string = name, string_or_regex
@regex = Regexp.new(Regexp.escape(string_or_regex))
else
@name, @regex = name, string_or_regex
end
set_options(options)
@acceptable_when = ALWAYS_ACCEPTABLE
end
def to_s
string or @name.to_s
end
def acceptable?(context={})
@acceptable_when.call(context)
end
# Define when the token is able to match.
# This provides context that cannot be expressed otherwise, such as feature flags.
#
# @param block [Proc] a proc that given a context returns a boolean
def acceptable_when(block)
@acceptable_when = block
end
end
# Maintain a list of tokens.
class TokenList
extend Forwardable
attr_reader :regex_tokens, :string_tokens
def_delegator :@tokens, :[]
# Create a new token.
def add_token(name, regex, options = {}, &block)
raise(ArgumentError, "Token #{name} already exists") if @tokens.include?(name)
token = Token.new(regex, name, options)
@tokens[token.name] = token
if token.string
@string_tokens << token
@tokens_by_string[token.string] = token
else
@regex_tokens << token
end
token.meta_def(:convert, &block) if block_given?
token
end
def initialize
@tokens = {}
@regex_tokens = []
@string_tokens = []
@tokens_by_string = {}
end
# Look up a token by its value, rather than name.
def lookup(string)
@tokens_by_string[string]
end
# Define more tokens.
def add_tokens(hash)
hash.each do |regex, name|
add_token(name, regex)
end
end
# Sort our tokens by length, so we know once we match, we're done.
# This helps us avoid the O(n^2) nature of token matching.
def sort_tokens
@string_tokens.sort! { |a, b| b.string.length <=> a.string.length }
end
# Yield each token name and value in turn.
def each
@tokens.each {|name, value| yield name, value }
end
end
TOKENS = TokenList.new
TOKENS.add_tokens(
'[' => :LBRACK,
']' => :RBRACK,
'{' => :LBRACE,
'}' => :RBRACE,
'(' => :LPAREN,
')' => :RPAREN,
'=' => :EQUALS,
'+=' => :APPENDS,
'==' => :ISEQUAL,
'>=' => :GREATEREQUAL,
'>' => :GREATERTHAN,
'<' => :LESSTHAN,
'<=' => :LESSEQUAL,
'!=' => :NOTEQUAL,
'!' => :NOT,
',' => :COMMA,
'.' => :DOT,
':' => :COLON,
'@' => :AT,
'<<|' => :LLCOLLECT,
'|>>' => :RRCOLLECT,
'->' => :IN_EDGE,
'<-' => :OUT_EDGE,
'~>' => :IN_EDGE_SUB,
'<~' => :OUT_EDGE_SUB,
'<|' => :LCOLLECT,
'|>' => :RCOLLECT,
';' => :SEMIC,
'?' => :QMARK,
'\\' => :BACKSLASH,
'=>' => :FARROW,
'+>' => :PARROW,
'+' => :PLUS,
'-' => :MINUS,
'/' => :DIV,
'*' => :TIMES,
'%' => :MODULO,
'<<' => :LSHIFT,
'>>' => :RSHIFT,
'=~' => :MATCH,
'!~' => :NOMATCH,
%r{((::){0,1}[A-Z][-\w]*)+} => :CLASSREF,
"<string>" => :STRING,
"<dqstring up to first interpolation>" => :DQPRE,
"<dqstring between two interpolations>" => :DQMID,
"<dqstring after final interpolation>" => :DQPOST,
"<boolean>" => :BOOLEAN
)
module Contextual
QUOTE_TOKENS = [:DQPRE,:DQMID]
REGEX_INTRODUCING_TOKENS = [:NODE,:LBRACE,:RBRACE,:MATCH,:NOMATCH,:COMMA]
NOT_INSIDE_QUOTES = Proc.new do |context|
!QUOTE_TOKENS.include? context[:after]
end
INSIDE_QUOTES = Proc.new do |context|
QUOTE_TOKENS.include? context[:after]
end
IN_REGEX_POSITION = Proc.new do |context|
REGEX_INTRODUCING_TOKENS.include? context[:after]
end
IN_STRING_INTERPOLATION = Proc.new do |context|
context[:string_interpolation_depth] > 0
end
DASHED_VARIABLES_ALLOWED = Proc.new do |context|
Puppet[:allow_variables_with_dashes]
end
VARIABLE_AND_DASHES_ALLOWED = Proc.new do |context|
Contextual::DASHED_VARIABLES_ALLOWED.call(context) and TOKENS[:VARIABLE].acceptable?(context)
end
end
# Numbers are treated separately from names, so that they may contain dots.
TOKENS.add_token :NUMBER, %r{\b(?:0[xX][0-9A-Fa-f]+|0?\d+(?:\.\d+)?(?:[eE]-?\d+)?)\b} do |lexer, value|
[TOKENS[:NAME], value]
end
TOKENS[:NUMBER].acceptable_when Contextual::NOT_INSIDE_QUOTES
TOKENS.add_token :NAME, %r{((::)?[a-z0-9][-\w]*)(::[a-z0-9][-\w]*)*} do |lexer, value|
string_token = self
# we're looking for keywords here
if tmp = KEYWORDS.lookup(value)
string_token = tmp
if [:TRUE, :FALSE].include?(string_token.name)
value = eval(value)
string_token = TOKENS[:BOOLEAN]
end
end
[string_token, value]
end
[:NAME, :CLASSREF].each do |name_token|
TOKENS[name_token].acceptable_when Contextual::NOT_INSIDE_QUOTES
end
TOKENS.add_token :COMMENT, %r{#.*}, :accumulate => true, :skip => true do |lexer,value|
value.sub!(/# ?/,'')
[self, value]
end
TOKENS.add_token :MLCOMMENT, %r{/\*(.*?)\*/}m, :accumulate => true, :skip => true do |lexer, value|
lexer.line += value.count("\n")
value.sub!(/^\/\* ?/,'')
value.sub!(/ ?\*\/$/,'')
[self,value]
end
TOKENS.add_token :REGEX, %r{/[^/\n]*/} do |lexer, value|
# Make sure we haven't matched an escaped /
while value[-2..-2] == '\\'
other = lexer.scan_until(%r{/})
value += other
end
regex = value.sub(%r{\A/}, "").sub(%r{/\Z}, '').gsub("\\/", "/")
[self, Regexp.new(regex)]
end
TOKENS[:REGEX].acceptable_when Contextual::IN_REGEX_POSITION
TOKENS.add_token :RETURN, "\n", :skip => true, :incr_line => true, :skip_text => true
TOKENS.add_token :SQUOTE, "'" do |lexer, value|
[TOKENS[:STRING], lexer.slurpstring(value,["'"],:ignore_invalid_escapes).first ]
end
DQ_initial_token_types = {'$' => :DQPRE,'"' => :STRING}
DQ_continuation_token_types = {'$' => :DQMID,'"' => :DQPOST}
TOKENS.add_token :DQUOTE, /"/ do |lexer, value|
lexer.tokenize_interpolated_string(DQ_initial_token_types)
end
TOKENS.add_token :DQCONT, /\}/ do |lexer, value|
lexer.tokenize_interpolated_string(DQ_continuation_token_types)
end
TOKENS[:DQCONT].acceptable_when Contextual::IN_STRING_INTERPOLATION
TOKENS.add_token :DOLLAR_VAR_WITH_DASH, %r{\$(?:::)?(?:[-\w]+::)*[-\w]+} do |lexer, value|
lexer.warn_if_variable_has_hyphen(value)
[TOKENS[:VARIABLE], value[1..-1]]
end
TOKENS[:DOLLAR_VAR_WITH_DASH].acceptable_when Contextual::DASHED_VARIABLES_ALLOWED
TOKENS.add_token :DOLLAR_VAR, %r{\$(::)?(\w+::)*\w+} do |lexer, value|
[TOKENS[:VARIABLE],value[1..-1]]
end
TOKENS.add_token :VARIABLE_WITH_DASH, %r{(?:::)?(?:[-\w]+::)*[-\w]+} do |lexer, value|
lexer.warn_if_variable_has_hyphen(value)
[TOKENS[:VARIABLE], value]
end
TOKENS[:VARIABLE_WITH_DASH].acceptable_when Contextual::VARIABLE_AND_DASHES_ALLOWED
TOKENS.add_token :VARIABLE, %r{(::)?(\w+::)*\w+}
TOKENS[:VARIABLE].acceptable_when Contextual::INSIDE_QUOTES
TOKENS.sort_tokens
@@pairs = {
"{" => "}",
"(" => ")",
"[" => "]",
"<|" => "|>",
"<<|" => "|>>"
}
KEYWORDS = TokenList.new
KEYWORDS.add_tokens(
"case" => :CASE,
"class" => :CLASS,
"default" => :DEFAULT,
"define" => :DEFINE,
"import" => :IMPORT,
"if" => :IF,
"elsif" => :ELSIF,
"else" => :ELSE,
"inherits" => :INHERITS,
"node" => :NODE,
"and" => :AND,
"or" => :OR,
"undef" => :UNDEF,
"false" => :FALSE,
"true" => :TRUE,
"in" => :IN,
"unless" => :UNLESS
)
def clear
initvars
end
def expected
return nil if @expected.empty?
name = @expected[-1]
TOKENS.lookup(name) or lex_error "Could not find expected token #{name}"
end
# scan the whole file
# basically just used for testing
def fullscan
array = []
self.scan { |token, str|
# Ignore any definition nesting problems
@indefine = false
array.push([token,str])
}
array
end
def file=(file)
@file = file
@line = 1
contents = Puppet::FileSystem.exist?(file) ? Puppet::FileSystem.read(file) : ""
@scanner = StringScanner.new(contents)
end
def_delegator :@token_queue, :shift, :shift_token
def find_string_token
# We know our longest string token is three chars, so try each size in turn
# until we either match or run out of chars. This way our worst-case is three
# tries, where it is otherwise the number of string tokens we have. Also,
# the lookups are optimized hash lookups, instead of regex scans.
#
s = @scanner.peek(3)
token = TOKENS.lookup(s[0,3]) || TOKENS.lookup(s[0,2]) || TOKENS.lookup(s[0,1])
[ token, token && @scanner.scan(token.regex) ]
end
# Find the next token that matches a regex. We look for these first.
def find_regex_token
best_token = nil
best_length = 0
# I tried optimizing based on the first char, but it had
# a slightly negative effect and was a good bit more complicated.
TOKENS.regex_tokens.each do |token|
if length = @scanner.match?(token.regex) and token.acceptable?(lexing_context)
# We've found a longer match
if length > best_length
best_length = length
best_token = token
end
end
end
return best_token, @scanner.scan(best_token.regex) if best_token
end
# Find the next token, returning the string and the token.
def find_token
shift_token || find_regex_token || find_string_token
end
def initialize
initvars
end
def initvars
@line = 1
@previous_token = nil
@scanner = nil
@file = nil
# AAARRGGGG! okay, regexes in ruby are bloody annoying
# no one else has "\n" =~ /\s/
@skip = %r{[ \t\r]+}
@namestack = []
@token_queue = []
@indefine = false
@expected = []
@commentstack = [ ['', @line] ]
@lexing_context = {
:after => nil,
:start_of_line => true,
:string_interpolation_depth => 0
}
end
# Make any necessary changes to the token and/or value.
def munge_token(token, value)
@line += 1 if token.incr_line
skip if token.skip_text
return if token.skip and not token.accumulate?
token, value = token.convert(self, value) if token.respond_to?(:convert)
return unless token
if token.accumulate?
comment = @commentstack.pop
comment[0] << value + "\n"
@commentstack.push(comment)
end
return if token.skip
return token, { :value => value, :line => @line }
end
# Handling the namespace stack
def_delegator :@namestack, :pop, :namepop
# This value might have :: in it, but we don't care -- it'll be handled
# normally when joining, and when popping we want to pop this full value,
# however long the namespace is.
def_delegator :@namestack, :<<, :namestack
# Collect the current namespace.
def namespace
@namestack.join("::")
end
def_delegator :@scanner, :rest
# this is the heart of the lexer
def scan
#Puppet.debug("entering scan")
lex_error "Invalid or empty string" unless @scanner
# Skip any initial whitespace.
skip
until token_queue.empty? and @scanner.eos? do
matched_token, value = find_token
# error out if we didn't match anything at all
lex_error "Could not match #{@scanner.rest[/^(\S+|\s+|.*)/]}" unless matched_token
newline = matched_token.name == :RETURN
# this matches a blank line; eat the previously accumulated comments
getcomment if lexing_context[:start_of_line] and newline
lexing_context[:start_of_line] = newline
final_token, token_value = munge_token(matched_token, value)
unless final_token
skip
next
end
final_token_name = final_token.name
lexing_context[:after] = final_token_name unless newline
lexing_context[:string_interpolation_depth] += 1 if final_token_name == :DQPRE
lexing_context[:string_interpolation_depth] -= 1 if final_token_name == :DQPOST
value = token_value[:value]
if match = @@pairs[value] and final_token_name != :DQUOTE and final_token_name != :SQUOTE
@expected << match
elsif exp = @expected[-1] and exp == value and final_token_name != :DQUOTE and final_token_name != :SQUOTE
@expected.pop
end
if final_token_name == :LBRACE or final_token_name == :LPAREN
commentpush
end
if final_token_name == :RPAREN
commentpop
end
yield [final_token_name, token_value]
if @previous_token
namestack(value) if @previous_token.name == :CLASS and value != '{'
if @previous_token.name == :DEFINE
if indefine?
msg = "Cannot nest definition #{value} inside #{@indefine}"
self.indefine = false
raise Puppet::ParseError, msg
end
@indefine = value
end
end
@previous_token = final_token
skip
end
@scanner = nil
# This indicates that we're done parsing.
yield [false,false]
end
# Skip any skipchars in our remaining string.
def skip
@scanner.skip(@skip)
end
# Provide some limited access to the scanner, for those
# tokens that need it.
def_delegator :@scanner, :scan_until
# we've encountered the start of a string...
# slurp in the rest of the string and return it
def slurpstring(terminators,escapes=%w{ \\ $ ' " r n t s }+["\n"],ignore_invalid_escapes=false)
# we search for the next quote that isn't preceded by a
# backslash; the caret is there to match empty strings
str = @scanner.scan_until(/([^\\]|^|[^\\])([\\]{2})*[#{terminators}]/) or lex_error "Unclosed quote after '#{last}' in '#{rest}'"
@line += str.count("\n") # literal newlines add to the line count.
str.gsub!(/\\(.)/m) {
ch = $1
if escapes.include? ch
case ch
when 'r'; "\r"
when 'n'; "\n"
when 't'; "\t"
when 's'; " "
when "\n"; ''
else ch
end
else
Puppet.warning "Unrecognised escape sequence '\\#{ch}'#{file && " in file #{file}"}#{line && " at line #{line}"}" unless ignore_invalid_escapes
"\\#{ch}"
end
}
[ str[0..-2],str[-1,1] ]
end
def tokenize_interpolated_string(token_type,preamble='')
value,terminator = slurpstring('"$')
token_queue << [TOKENS[token_type[terminator]],preamble+value]
variable_regex = if Puppet[:allow_variables_with_dashes]
TOKENS[:VARIABLE_WITH_DASH].regex
else
TOKENS[:VARIABLE].regex
end
if terminator != '$' or @scanner.scan(/\{/)
token_queue.shift
elsif var_name = @scanner.scan(variable_regex)
warn_if_variable_has_hyphen(var_name)
token_queue << [TOKENS[:VARIABLE],var_name]
tokenize_interpolated_string(DQ_continuation_token_types)
else
tokenize_interpolated_string(token_type,token_queue.pop.last + terminator)
end
end
# just parse a string, not a whole file
def string=(string)
@scanner = StringScanner.new(string)
end
# returns the content of the currently accumulated content cache
def commentpop
@commentstack.pop[0]
end
def getcomment(line = nil)
comment = @commentstack.last
if line.nil? or comment[1] <= line
@commentstack.pop
@commentstack.push(['', @line])
return comment[0]
end
''
end
def commentpush
@commentstack.push(['', @line])
end
def warn_if_variable_has_hyphen(var_name)
if var_name.include?('-')
Puppet.deprecation_warning("Using `-` in variable names is deprecated at #{file || '<string>'}:#{line}. See http://links.puppetlabs.com/puppet-hyphenated-variable-deprecation")
end
end
end
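The longest-match strategy described in the lexer's find_string_token comment (peek three characters, then fall back through hash lookups instead of regex scans) can be shown with a stand-alone StringScanner sketch; the operator table here is abbreviated and hypothetical:

    require 'strscan'

    # Operator tokens keyed by their literal string; the longest entry is 3 chars.
    OPS = { '<<|' => :LLCOLLECT, '<<' => :LSHIFT, '<' => :LESSTHAN }

    def find_string_token(scanner)
      s = scanner.peek(3)
      # Worst case is three hash lookups, longest candidate first.
      str = [s[0, 3], s[0, 2], s[0, 1]].find { |candidate| OPS.key?(candidate) }
      str && [OPS[str], scanner.scan(Regexp.new(Regexp.escape(str)))]
    end

    find_string_token(StringScanner.new('<<| Foo |>>'))  #=> [:LLCOLLECT, "<<|"]
    find_string_token(StringScanner.new('<< 2'))         #=> [:LSHIFT, "<<"]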
diff --git a/lib/puppet/parser/parser_factory.rb b/lib/puppet/parser/parser_factory.rb
index 576bc8755..adc0e578a 100644
--- a/lib/puppet/parser/parser_factory.rb
+++ b/lib/puppet/parser/parser_factory.rb
@@ -1,88 +1,76 @@
module Puppet; end
module Puppet::Parser
# The ParserFactory makes selection of parser possible.
# Currently, it is possible to switch between two different parsers:
# * classic_parser, the parser in 3.1
# * eparser, the Expression Based Parser
#
class ParserFactory
# Produces a parser instance for the given environment
def self.parser(environment)
- case Puppet[:parser]
- when 'future'
- if Puppet[:evaluator] == 'future'
- evaluating_parser(environment)
- else
- eparser(environment)
- end
+ if Puppet[:parser] == 'future'
+ evaluating_parser(environment)
else
classic_parser(environment)
end
end
# Creates an instance of the classic parser.
#
def self.classic_parser(environment)
- require 'puppet/parser'
-
+ # avoid expensive require if already loaded
+ require 'puppet/parser' unless defined? Puppet::Parser::Parser
Puppet::Parser::Parser.new(environment)
end
- # Returns an instance of an EvaluatingParser
+ # Creates an instance of an E4ParserAdapter that adapts an
+ # EvaluatingParser to the 3x way of parsing.
+ #
def self.evaluating_parser(file_watcher)
# Since RGen is optional, test that it is installed
- @@asserted ||= false
- assert_rgen_installed() unless @@asserted
- @@asserted = true
- require 'puppet/parser/e4_parser_adapter'
- require 'puppet/pops/parser/code_merger'
+ assert_rgen_installed()
+ unless defined?(Puppet::Pops::Parser::E4ParserAdapter)
+ require 'puppet/parser/e4_parser_adapter'
+ require 'puppet/pops/parser/code_merger'
+ end
E4ParserAdapter.new(file_watcher)
end
- # Creates an instance of the expression based parser 'eparser'
+ # Asserts that RGen >= 0.6.6 is installed by checking that certain behavior is available.
+ # Note that this assert is expensive as it also requires puppet/pops (if not already loaded).
#
- def self.eparser(environment)
- # Since RGen is optional, test that it is installed
+ def self.assert_rgen_installed
@@asserted ||= false
- assert_rgen_installed() unless @@asserted
+ return if @@asserted
@@asserted = true
- require 'puppet/parser'
- require 'puppet/parser/e_parser_adapter'
- EParserAdapter.new(Puppet::Parser::Parser.new(environment))
- end
-
- private
-
- def self.assert_rgen_installed
begin
require 'rgen/metamodel_builder'
rescue LoadError
- raise Puppet::DevError.new("The gem 'rgen' version >= 0.6.1 is required when using the setting '--parser future'. Please install 'rgen'.")
+ raise Puppet::DevError.new("The gem 'rgen' version >= 0.7.0 is required when using the setting '--parser future'. Please install 'rgen'.")
end
# Since RGen is optional, there is nothing specifying its version.
# It is not installed in any controlled way, so not possible to use gems to check (it may be installed some other way).
# Instead check that "eContainer, and eContainingFeature" has been installed.
require 'puppet/pops'
begin
litstring = Puppet::Pops::Model::LiteralString.new();
container = Puppet::Pops::Model::ArithmeticExpression.new();
container.left_expr = litstring
raise "no eContainer" if litstring.eContainer() != container
raise "no eContainingFeature" if litstring.eContainingFeature() != :left_expr
- rescue
- raise Puppet::DevError.new("The gem 'rgen' version >= 0.6.1 is required when using '--parser future'. An older version is installed, please update.")
+ rescue => e
+ # TODO: RGen can raise exceptions for other reasons!
+ raise Puppet::DevError.new("The gem 'rgen' version >= 0.7.0 is required when using '--parser future'. An older version is installed, please update.")
end
end
def self.code_merger
- if Puppet[:parser] == 'future' && Puppet[:evaluator] == 'future'
+ if Puppet[:parser] == 'future'
Puppet::Pops::Parser::CodeMerger.new
else
Puppet::Parser::CodeMerger.new
end
end
-
end
-
end
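The reworked assert_rgen_installed above memoizes its expensive verification behind a class-variable guard so it runs at most once per process. The guard pattern in isolation, with a hypothetical stand-in check:

    class OneTimeCheck
      @@asserted = false

      # The costly verification runs only on the first call; later calls
      # return immediately because the guard has already been set.
      def self.assert_dependency_installed
        return if @@asserted
        @@asserted = true
        require 'set'   # stand-in for the real require + feature probing
      end
    end

    OneTimeCheck.assert_dependency_installed
    OneTimeCheck.assert_dependency_installed   # no-op on the second call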
diff --git a/lib/puppet/parser/resource.rb b/lib/puppet/parser/resource.rb
index 506f8a27e..cde664a33 100644
--- a/lib/puppet/parser/resource.rb
+++ b/lib/puppet/parser/resource.rb
@@ -1,278 +1,277 @@
require 'puppet/resource'
# The primary difference between this class and its
# parent is that this class has rules on who can set
# parameters
class Puppet::Parser::Resource < Puppet::Resource
require 'puppet/parser/resource/param'
require 'puppet/util/tagging'
require 'puppet/parser/yaml_trimmer'
require 'puppet/resource/type_collection_helper'
include Puppet::Resource::TypeCollectionHelper
include Puppet::Util
include Puppet::Util::MethodHelper
include Puppet::Util::Errors
include Puppet::Util::Logging
include Puppet::Parser::YamlTrimmer
attr_accessor :source, :scope, :collector_id
attr_accessor :virtual, :override, :translated, :catalog, :evaluated
attr_accessor :file, :line
attr_reader :exported, :parameters
# Determine whether the provided parameter name is a relationship parameter.
def self.relationship_parameter?(name)
@relationship_names ||= Puppet::Type.relationship_params.collect { |p| p.name }
@relationship_names.include?(name)
end
# Set up some boolean test methods
def translated?; !!@translated; end
def override?; !!@override; end
def evaluated?; !!@evaluated; end
def [](param)
param = param.intern
if param == :title
return self.title
end
if @parameters.has_key?(param)
@parameters[param].value
else
nil
end
end
def eachparam
@parameters.each do |name, param|
yield param
end
end
def environment
scope.environment
end
# Process the stage metaparameter for a class. A containment edge
# is drawn from the class to the stage. The stage for containment
# defaults to main, if none is specified.
def add_edge_to_stage
return unless self.class?
unless stage = catalog.resource(:stage, self[:stage] || (scope && scope.resource && scope.resource[:stage]) || :main)
raise ArgumentError, "Could not find stage #{self[:stage] || :main} specified by #{self}"
end
self[:stage] ||= stage.title unless stage.title == :main
catalog.add_edge(stage, self)
end
# Retrieve the associated definition and evaluate it.
def evaluate
return if evaluated?
@evaluated = true
if klass = resource_type and ! builtin_type?
finish
evaluated_code = klass.evaluate_code(self)
return evaluated_code
elsif builtin?
devfail "Cannot evaluate a builtin type (#{type})"
else
self.fail "Cannot find definition #{type}"
end
end
# Mark this resource as both exported and virtual,
# or remove the exported mark.
def exported=(value)
if value
@virtual = true
@exported = value
else
@exported = value
end
end
# Do any finishing work on this object, called before evaluation or
# before storage/translation.
def finish
return if finished?
@finished = true
add_defaults
add_scope_tags
validate
end
# Has this resource already been finished?
def finished?
@finished
end
def initialize(*args)
raise ArgumentError, "Resources require a hash as last argument" unless args.last.is_a? Hash
raise ArgumentError, "Resources require a scope" unless args.last[:scope]
super
@source ||= scope.source
end
# Is this resource modeling an isomorphic resource type?
def isomorphic?
if builtin_type?
return resource_type.isomorphic?
else
return true
end
end
# Merge an override resource in. This will throw exceptions if
# any overrides aren't allowed.
def merge(resource)
# Test the resource scope, to make sure the resource is even allowed
# to override.
unless self.source.object_id == resource.source.object_id || resource.source.child_of?(self.source)
raise Puppet::ParseError.new("Only subclasses can override parameters", resource.line, resource.file)
end
# Some of these might fail, but they'll fail in the way we want.
resource.parameters.each do |name, param|
override_parameter(param)
end
end
# This only mattered for clients < 0.25, which we don't support any longer.
# ...but, since this hasn't been deprecated, and at least some functions
# used it, deprecate now rather than just eliminate. --daniel 2012-07-15
def metaparam_compatibility_mode?
Puppet.deprecation_warning "metaparam_compatibility_mode? is obsolete since < 0.25 clients are really, really not supported any more"
false
end
def name
self[:name] || self.title
end
# A temporary occasion, until I get paths in the scopes figured out.
alias path to_s
# Define a parameter in our resource.
# if we ever receive a parameter named 'tag', set
# the resource tags with its value.
def set_parameter(param, value = nil)
if ! value.nil?
param = Puppet::Parser::Resource::Param.new(
:name => param, :value => value, :source => self.source
)
elsif ! param.is_a?(Puppet::Parser::Resource::Param)
raise ArgumentError, "Received incomplete information - no value provided for parameter #{param}"
end
tag(*param.value) if param.name == :tag
# And store it in our parameter hash.
@parameters[param.name] = param
end
alias []= set_parameter
def to_hash
@parameters.inject({}) do |hash, ary|
param = ary[1]
- # Skip "undef" values.
- hash[param.name] = param.value if param.value != :undef
+ # Skip "undef" and nil values.
+ hash[param.name] = param.value if param.value != :undef && !param.value.nil?
hash
end
end
# Convert this resource to a RAL resource.
def to_ral
copy_as_resource.to_ral
end
# Is the receiver tagged with the given tags?
# This match takes into account the tags that a resource will inherit from its container
# but have not been set yet.
# It does *not* take tags set via resource defaults as these will *never* be set on
# the resource itself since all resources always have tags that are automatically
# assigned.
#
def tagged?(*tags)
super || ((scope_resource = scope.resource) && scope_resource != self && scope_resource.tagged?(tags))
end
private
# Add default values from our definition.
def add_defaults
scope.lookupdefaults(self.type).each do |name, param|
unless @parameters.include?(name)
self.debug "Adding default for #{name}"
@parameters[name] = param.dup
end
end
end
def add_scope_tags
if scope_resource = scope.resource
tag(*scope_resource.tags)
end
end
# Accept a parameter from an override.
def override_parameter(param)
# This can happen if the override is defining a new parameter, rather
# than replacing an existing one.
(set_parameter(param) and return) unless current = @parameters[param.name]
# The parameter is already set. Fail if they're not allowed to override it.
unless param.source.child_of?(current.source)
msg = "Parameter '#{param.name}' is already set on #{self}"
msg += " by #{current.source}" if current.source.to_s != ""
if current.file or current.line
fields = []
fields << current.file if current.file
fields << current.line.to_s if current.line
msg += " at #{fields.join(":")}"
end
msg += "; cannot redefine"
- Puppet.log_exception(ArgumentError.new(), msg)
raise Puppet::ParseError.new(msg, param.line, param.file)
end
# If we've gotten this far, we're allowed to override.
# Merge with previous value, if the parameter was generated with the +>
# syntax. It's important that we use a copy of the new param instance
# here, not the old one, and not the original new one, so that the source
# is registered correctly for later overrides but the values aren't
# implicitly shared when multiple resources are overridden at once (see
# ticket #3556).
if param.add
param = param.dup
param.value = [current.value, param.value].flatten
end
set_parameter(param)
end
# Make sure the resource's parameters are all valid for the type.
def validate
@parameters.each do |name, param|
validate_parameter(name)
end
rescue => detail
self.fail Puppet::ParseError, detail.to_s + " on #{self}", detail
end
def extract_parameters(params)
params.each do |param|
# Don't set the same parameter twice
self.fail Puppet::ParseError, "Duplicate parameter '#{param.name}' for on #{self}" if @parameters[param.name]
set_parameter(param)
end
end
end
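The to_hash change above now skips parameters whose value is :undef or nil. A small hash-filtering sketch with hypothetical parameter data shows the effect:

    # Values of :undef or nil are dropped, matching the updated to_hash.
    params = { 'ensure' => 'present', 'owner' => :undef, 'group' => nil, 'mode' => '0644' }
    hash = params.inject({}) do |h, (name, value)|
      h[name] = value if value != :undef && !value.nil?
      h
    end
    hash   #=> {"ensure"=>"present", "mode"=>"0644"}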
diff --git a/lib/puppet/parser/scope.rb b/lib/puppet/parser/scope.rb
index bfdd67e79..c8cfa6060 100644
--- a/lib/puppet/parser/scope.rb
+++ b/lib/puppet/parser/scope.rb
@@ -1,836 +1,906 @@
# The scope class, which handles storing and retrieving variables and types and
# such.
require 'forwardable'
require 'puppet/parser'
require 'puppet/parser/templatewrapper'
require 'puppet/resource/type_collection_helper'
require 'puppet/util/methodhelper'
# This class is part of the internal parser/evaluator/compiler functionality of Puppet.
# It is passed between the various classes that participate in evaluation.
# None of its methods are API except those that are clearly marked as such.
#
# @api public
class Puppet::Parser::Scope
extend Forwardable
include Puppet::Util::MethodHelper
include Puppet::Resource::TypeCollectionHelper
require 'puppet/parser/resource'
AST = Puppet::Parser::AST
Puppet::Util.logmethods(self)
include Puppet::Util::Errors
attr_accessor :source, :resource
attr_accessor :compiler
attr_accessor :parent
attr_reader :namespaces
+ # Hash of hashes of default values per type name
+ attr_reader :defaults
+
# Add some alias methods that forward to the compiler, since we reference
# them frequently enough to justify the extra method call.
def_delegators :compiler, :catalog, :environment
# Abstract base class for LocalScope and MatchScope
#
class Ephemeral
attr_reader :parent
def initialize(parent = nil)
@parent = parent
end
def is_local_scope?
false
end
def [](name)
if @parent
@parent[name]
end
end
def include?(name)
(@parent and @parent.include?(name))
end
def bound?(name)
false
end
def add_entries_to(target = {})
@parent.add_entries_to(target) unless @parent.nil?
# do not include match data ($0-$n)
target
end
end
class LocalScope < Ephemeral
def initialize(parent=nil)
super parent
@symbols = {}
end
def [](name)
if @symbols.include?(name)
@symbols[name]
else
super
end
end
def is_local_scope?
true
end
def []=(name, value)
@symbols[name] = value
end
def include?(name)
bound?(name) || super
end
def delete(name)
@symbols.delete(name)
end
def bound?(name)
@symbols.include?(name)
end
def add_entries_to(target = {})
super
@symbols.each do |k, v|
- if v == :undef
+ if v == :undef || v.nil?
target.delete(k)
else
target[ k ] = v
end
end
target
end
end
class MatchScope < Ephemeral
attr_accessor :match_data
def initialize(parent = nil, match_data = nil)
super parent
@match_data = match_data
end
def is_local_scope?
false
end
def [](name)
if bound?(name)
@match_data[name.to_i]
else
super
end
end
def include?(name)
bound?(name) or super
end
def bound?(name)
# A "match variables" scope reports all numeric variables to be bound if the scope has
# match_data. Without match data the scope is transparent.
#
@match_data && name =~ /^\d+$/
end
def []=(name, value)
# TODO: Bad choice of exception
raise Puppet::ParseError, "Numerical variables cannot be changed. Attempt to set $#{name}"
end
def delete(name)
# TODO: Bad choice of exception
raise Puppet::ParseError, "Numerical variables cannot be deleted: Attempt to delete: $#{name}"
end
def add_entries_to(target = {})
# do not include match data ($0-$n)
super
end
end
# Returns true if the variable of the given name has a non nil value.
# TODO: This has vague semantics - does the variable exist or not?
# use ['name'] to get nil or value, and if nil check with exist?('name')
# this include? is only useful because of checking against the boolean value false.
#
def include?(name)
! self[name].nil?
end
# Returns true if the variable of the given name is set to any value (including nil)
#
def exist?(name)
next_scope = inherited_scope || enclosing_scope
effective_symtable(true).include?(name) || next_scope && next_scope.exist?(name)
end
# Returns true if the given name is bound in the current (most nested) scope for assignments.
#
def bound?(name)
# Do not look in ephemeral (match scope), the semantics is to answer if an assignable variable is bound
effective_symtable(false).bound?(name)
end
# Is the value true? This allows us to control the definition of truth
# in one place.
def self.true?(value)
case value
when ''
false
when :undef
false
else
!!value
end
end
# Coerce value to a number, or return `nil` if it isn't one.
def self.number?(value)
case value
when Numeric
value
when /^-?\d+(:?\.\d+|(:?\.\d+)?e\d+)$/
value.to_f
when /^0x[0-9a-f]+$/i
value.to_i(16)
when /^0[0-7]+$/
value.to_i(8)
when /^-?\d+$/
value.to_i
else
nil
end
end
# Add to our list of namespaces.
def add_namespace(ns)
return false if @namespaces.include?(ns)
if @namespaces == [""]
@namespaces = [ns]
else
@namespaces << ns
end
end
def find_hostclass(name, options = {})
known_resource_types.find_hostclass(namespaces, name, options)
end
def find_definition(name)
known_resource_types.find_definition(namespaces, name)
end
def find_global_scope()
# walk upwards until first found node_scope or top_scope
if is_nodescope? || is_topscope?
self
else
next_scope = inherited_scope || enclosing_scope
if next_scope.nil?
# this happens when testing, and there is only a single test scope and no link to any
# other scopes
self
else
next_scope.find_global_scope()
end
end
end
# This just delegates directly.
def_delegator :compiler, :findresource
# Initialize our new scope. Defaults to having no parent.
def initialize(compiler, options = {})
if compiler.is_a? Puppet::Parser::Compiler
self.compiler = compiler
else
raise Puppet::DevError, "you must pass a compiler instance to a new scope object"
end
if n = options.delete(:namespace)
@namespaces = [n]
else
@namespaces = [""]
end
raise Puppet::DevError, "compiler passed in options" if options.include? :compiler
set_options(options)
extend_with_functions_module
# The symbol table for this scope. This is where we store variables.
# @symtable = Ephemeral.new(nil, true)
@symtable = LocalScope.new(nil)
@ephemeral = [ MatchScope.new(@symtable, nil) ]
# All of the defaults set for types. It's a hash of hashes,
# with the first key being the type, then the second key being
# the parameter.
@defaults = Hash.new { |dhash,type|
dhash[type] = {}
}
# The table for storing class singletons. This will only actually
# be used by top scopes and node scopes.
@class_scopes = {}
@enable_immutable_data = Puppet[:immutable_node_data]
end
# Store the fact that we've evaluated a class, and store a reference to
# the scope in which it was evaluated, so that we can look it up later.
def class_set(name, scope)
if parent
parent.class_set(name, scope)
else
@class_scopes[name] = scope
end
end
# Return the scope associated with a class. This is just here so
# that subclasses can set their parent scopes to be the scope of
# their parent class, and it's also used when looking up qualified
# variables.
def class_scope(klass)
# They might pass in either the class or class name
k = klass.respond_to?(:name) ? klass.name : klass
@class_scopes[k] || (parent && parent.class_scope(k))
end
# Collect all of the defaults set at any higher scopes.
- # This is a different type of lookup because it's additive --
- # it collects all of the defaults, with defaults in closer scopes
- # overriding those in later scopes.
+ # This is a different type of lookup because it's
+ # additive -- it collects all of the defaults, with defaults
+ # in closer scopes overriding those in later scopes.
+ #
+ # The lookupdefaults searches in the following order:
+ #
+ # * inherited
+ # * contained (recursive)
+ # * self
+ #
def lookupdefaults(type)
values = {}
# first collect the values from the parents
if parent
parent.lookupdefaults(type).each { |var,value|
values[var] = value
}
end
# then override them with any current values
# this should probably be done differently
if @defaults.include?(type)
@defaults[type].each { |var,value|
values[var] = value
}
end
values
end
# Look up a defined type.
def lookuptype(name)
find_definition(name) || find_hostclass(name)
end
def undef_as(x,v)
if v.nil? or v == :undef
x
else
v
end
end
# Lookup a variable within this scope using the Puppet language's
# scoping rules. Variables can be qualified using just as in a
# manifest.
#
# @param [String] name the variable name to lookup
#
# @return Object the value of the variable, or nil if it's not found
#
# @api public
def lookupvar(name, options = {})
unless name.is_a? String
raise Puppet::ParseError, "Scope variable name #{name.inspect} is a #{name.class}, not a string"
end
table = @ephemeral.last
if name =~ /^(.*)::(.+)$/
class_name = $1
variable_name = $2
lookup_qualified_variable(class_name, variable_name, options)
# TODO: optimize with an assoc instead, this searches through scopes twice for a hit
elsif table.include?(name)
table[name]
else
next_scope = inherited_scope || enclosing_scope
if next_scope
next_scope.lookupvar(name, options)
else
variable_not_found(name)
end
end
end
def variable_not_found(name, reason=nil)
if Puppet[:strict_variables]
- if Puppet[:evaluator] == 'future' && Puppet[:parser] == 'future'
+ if Puppet[:parser] == 'future'
throw :undefined_variable
else
reason_msg = reason.nil? ? '' : "; #{reason}"
raise Puppet::ParseError, "Undefined variable #{name.inspect}#{reason_msg}"
end
else
nil
end
end
+
# Retrieves the variable value assigned to the name given as an argument. The name must be a String,
# and namespace can be qualified with '::'. The value is looked up in this scope, its parent scopes,
# or in a specific visible named scope.
#
# @param varname [String] the name of the variable (may be a qualified name using `(ns'::')*varname`
# @param options [Hash] Additional options, not part of api.
# @return [Object] the value assigned to the given varname
# @see #[]=
# @api public
#
def [](varname, options={})
lookupvar(varname, options)
end
# The scope of the inherited thing of this scope's resource. This could
# either be a node that was inherited or the class.
#
# @return [Puppet::Parser::Scope] The scope or nil if there is not an inherited scope
def inherited_scope
if has_inherited_class?
qualified_scope(resource.resource_type.parent)
else
nil
end
end
# The enclosing scope (topscope or nodescope) of this scope.
# The enclosing scopes are produced when a class or define is included at
# some point. The parent scope of the included class or define becomes the
# scope in which it was included. The chain of parent scopes is followed
# until a node scope or the topscope is found
#
# @return [Puppet::Parser::Scope] The scope or nil if there is no enclosing scope
def enclosing_scope
if has_enclosing_scope?
if parent.is_topscope? or parent.is_nodescope?
parent
else
parent.enclosing_scope
end
else
nil
end
end
def is_classscope?
resource and resource.type == "Class"
end
def is_nodescope?
resource and resource.type == "Node"
end
def is_topscope?
compiler and self == compiler.topscope
end
def lookup_qualified_variable(class_name, variable_name, position)
begin
if lookup_as_local_name?(class_name, variable_name)
self[variable_name]
else
qualified_scope(class_name).lookupvar(variable_name, position)
end
rescue RuntimeError => e
unless Puppet[:strict_variables]
# Do not issue warning if strict variables are on, as an error will be raised by variable_not_found
location = if position[:lineproc]
" at #{position[:lineproc].call}"
elsif position[:file] && position[:line]
" at #{position[:file]}:#{position[:line]}"
else
""
end
warning "Could not look up qualified variable '#{class_name}::#{variable_name}'; #{e.message}#{location}"
end
variable_not_found("#{class_name}::#{variable_name}", e.message)
end
end
# Handles the special case of looking up fully qualified variable in not yet evaluated top scope
# This is ok if the lookup request originated in topscope (this happens when evaluating
# bindings; using the top scope to provide the values for facts.
# @param class_name [String] the classname part of a variable name, may be special ""
# @param variable_name [String] the variable name without the absolute leading '::'
# @return [Boolean] true if the given variable name should be looked up directly in this scope
#
def lookup_as_local_name?(class_name, variable_name)
# not a local if name has more than one segment
return nil if variable_name =~ /::/
# partial only if the class for "" cannot be found
return nil unless class_name == "" && klass = find_hostclass(class_name) && class_scope(klass).nil?
is_topscope?
end
def has_inherited_class?
is_classscope? and resource.resource_type.parent
end
private :has_inherited_class?
def has_enclosing_scope?
not parent.nil?
end
private :has_enclosing_scope?
def qualified_scope(classname)
raise "class #{classname} could not be found" unless klass = find_hostclass(classname)
raise "class #{classname} has not been evaluated" unless kscope = class_scope(klass)
kscope
end
private :qualified_scope
# Returns a Hash containing all variables and their values, optionally (and
# by default) including the values defined in parent. Local values
# shadow parent values. Ephemeral scopes for match results ($0 - $n) are not included.
#
# This is currently a wrapper for to_hash_legacy or to_hash_future.
#
# @see to_hash_future
#
# @see to_hash_legacy
def to_hash(recursive = true)
@parser ||= Puppet[:parser]
if @parser == 'future'
to_hash_future(recursive)
else
to_hash_legacy(recursive)
end
end
# Fixed version of to_hash that implements scoping correctly (i.e., with
# dynamic scoping disabled #28200 / PUP-1220
#
# @see to_hash
def to_hash_future(recursive)
if recursive and has_enclosing_scope?
target = enclosing_scope.to_hash_future(recursive)
if !(inherited = inherited_scope).nil?
target.merge!(inherited.to_hash_future(recursive))
end
else
target = Hash.new
end
# add all local scopes
@ephemeral.last.add_entries_to(target)
target
end
# The old broken implementation of to_hash that retains the dynamic scoping
# semantics
#
# @see to_hash
def to_hash_legacy(recursive = true)
if recursive and parent
target = parent.to_hash_legacy(recursive)
else
target = Hash.new
end
# add all local scopes
@ephemeral.last.add_entries_to(target)
target
end
def namespaces
@namespaces.dup
end
# Create a new scope and set these options.
def newscope(options = {})
compiler.newscope(self, options)
end
def parent_module_name
return nil unless @parent
return nil unless @parent.source
@parent.source.module_name
end
# Set defaults for a type. The typename should already be downcased,
# so that the syntax is isolated. We don't do any kind of type-checking
# here; instead we let the resource do it when the defaults are used.
def define_settings(type, params)
table = @defaults[type]
# if we got a single param, it'll be in its own array
params = [params] unless params.is_a?(Array)
params.each { |param|
if table.include?(param.name)
raise Puppet::ParseError.new("Default already defined for #{type} { #{param.name} }; cannot redefine", param.line, param.file)
end
table[param.name] = param
}
end
RESERVED_VARIABLE_NAMES = ['trusted', 'facts'].freeze
# Set a variable in the current scope. This will override settings
# in scopes above, but will not allow variables in the current scope
# to be reassigned.
# It's preferred that you use self[]= instead of this; only use this
# when you need to set options.
def setvar(name, value, options = {})
if name =~ /^[0-9]+$/
raise Puppet::ParseError.new("Cannot assign to a numeric match result variable '$#{name}'") # unless options[:ephemeral]
end
unless name.is_a? String
raise Puppet::ParseError, "Scope variable name #{name.inspect} is a #{name.class}, not a string"
end
# Check for reserved variable names
if @enable_immutable_data && !options[:privileged] && RESERVED_VARIABLE_NAMES.include?(name)
raise Puppet::ParseError, "Attempt to assign to a reserved variable name: '#{name}'"
end
- table = effective_symtable options[:ephemeral]
+ table = effective_symtable(options[:ephemeral])
if table.bound?(name)
if options[:append]
error = Puppet::ParseError.new("Cannot append, variable #{name} is defined in this scope")
else
error = Puppet::ParseError.new("Cannot reassign variable #{name}")
end
error.file = options[:file] if options[:file]
error.line = options[:line] if options[:line]
raise error
end
if options[:append]
- table[name] = append_value(undef_as('', self[name]), value)
+ # The produced result (value) is the resulting appended value; note that table[]= does not return the value
+ table[name] = (value = append_value(undef_as('', self[name]), value))
else
table[name] = value
end
- table[name]
+ value
end
def set_trusted(hash)
setvar('trusted', deep_freeze(hash), :privileged => true)
end
def set_facts(hash)
+ # Remove _timestamp (it has an illegal data type). The given hash must not be mutated
+ # since it contains the facts.
+ hash = hash.dup
+ hash.delete('_timestamp')
setvar('facts', deep_freeze(hash), :privileged => true)
end
# Deeply freezes the given object. The object and its content must be of the types:
# Array, Hash, Numeric, Boolean, Symbol, Regexp, NilClass, or String. All other types raise an Error.
# (i.e. if they are assignable to Puppet::Pops::Types::Data type).
#
def deep_freeze(object)
case object
when Array
object.each {|v| deep_freeze(v) }
object.freeze
when Hash
object.each {|k, v| deep_freeze(k); deep_freeze(v) }
object.freeze
when NilClass, Numeric, TrueClass, FalseClass
# do nothing
when String
object.freeze
else
raise Puppet::Error, "Unsupported data type: '#{object.class}'"
end
object
end
private :deep_freeze
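# Sketch of the effect (hypothetical hash); deep_freeze is used internally by
# set_facts and set_trusted:
#
#   frozen = deep_freeze({'role' => ['web', 'db']})
#   frozen.frozen?             # => true
#   frozen['role'].frozen?     # => true
#   frozen['role'][0].frozen?  # => true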
# Return the effective "table" for setting variables.
# This method returns the first ephemeral "table" that acts as a local scope, or this
# scope's symtable. If the parameter `use_ephemeral` is true, the "top most" ephemeral "table"
# will be returned (irrespective of it being a match scope or a local scope).
#
# @param use_ephemeral [Boolean] whether the top most ephemeral (of any kind) should be used or not
- def effective_symtable use_ephemeral
+ def effective_symtable(use_ephemeral)
s = @ephemeral.last
- return s || @symtable if use_ephemeral
-
- # Why check if ephemeral is a Hash ??? Not needed, a hash cannot be a parent scope ???
- while s && !(s.is_a?(Hash) || s.is_local_scope?())
- s = s.parent
+ if use_ephemeral
+ return s || @symtable
+ else
+ while s && !s.is_local_scope?()
+ s = s.parent
+ end
+ s || @symtable
end
- s ? s : @symtable
end
# Sets the variable value of the name given as an argument to the given value. The value is
# set in the current scope and may shadow a variable with the same name in a visible outer scope.
# It is illegal to re-assign a variable in the same scope. It is illegal to set a variable in some other
# scope/namespace than the scope passed to a method.
#
# @param varname [String] The variable name to which the value is assigned. Must not contain `::`
# @param value [String] The value to assign to the given variable name.
# @param options [Hash] Additional options, not part of api.
#
# @api public
#
def []=(varname, value, options = {})
setvar(varname, value, options)
end
def append_value(bound_value, new_value)
case new_value
when Array
bound_value + new_value
when Hash
bound_value.merge(new_value)
else
if bound_value.is_a?(Hash)
raise ArgumentError, "Trying to append to a hash with something which is not a hash is unsupported"
end
bound_value + new_value
end
end
private :append_value
# Return the tags associated with this scope.
def_delegator :resource, :tags
# Used mainly for logging
def to_s
"Scope(#{@resource})"
end
# remove ephemeral scope up to level
# TODO: Who uses :all ? Remove ??
#
def unset_ephemeral_var(level=:all)
if level == :all
@ephemeral = [ MatchScope.new(@symtable, nil)]
else
@ephemeral.pop(@ephemeral.size - level)
end
end
def ephemeral_level
@ephemeral.size
end
# TODO: Who calls this?
def new_ephemeral(local_scope = false)
if local_scope
@ephemeral.push(LocalScope.new(@ephemeral.last))
else
@ephemeral.push(MatchScope.new(@ephemeral.last, nil))
end
end
# Sets match data in the most nested scope (which always is a MatchScope), it clobbers match data already set there
#
def set_match_data(match_data)
@ephemeral.last.match_data = match_data
end
# Nests a match data scope
def new_match_scope(match_data)
@ephemeral.push(MatchScope.new(@ephemeral.last, match_data))
end
def ephemeral_from(match, file = nil, line = nil)
case match
when Hash
# Create local scope ephemeral and set all values from hash
new_ephemeral(true)
match.each {|k,v| setvar(k, v, :file => file, :line => line, :ephemeral => true) }
# Must always have an inner match data scope (that starts out as transparent)
# In 3x slightly wasteful, since a new nested scope is created for a match
# (TODO: Fix that problem)
new_ephemeral(false)
else
raise(ArgumentError,"Invalid regex match data. Got a #{match.class}") unless match.is_a?(MatchData)
# Create a match ephemeral and set values from match data
new_match_scope(match)
end
end
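# Sketch (hypothetical values): giving ephemeral_from a MatchData makes $0, $1, ...
# resolvable in the current scope, while a Hash creates a local ephemeral scope
# with one variable per key:
#
#   md = /(\w+)@(\w+)/.match('user@example')
#   scope.ephemeral_from(md)
#   scope['1']   # => 'user'
#   scope['2']   # => 'example'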
def find_resource_type(type)
# It still works fine without the type == 'class' short-cut, but it is a lot slower.
return nil if ["class", "node"].include? type.to_s.downcase
find_builtin_resource_type(type) || find_defined_resource_type(type)
end
def find_builtin_resource_type(type)
Puppet::Type.type(type.to_s.downcase.to_sym)
end
def find_defined_resource_type(type)
known_resource_types.find_definition(namespaces, type.to_s.downcase)
end
def method_missing(method, *args, &block)
method.to_s =~ /^function_(.*)$/
name = $1
super unless name
super unless Puppet::Parser::Functions.function(name)
# In odd circumstances, this might not end up defined by the previous
# method, so we might as well be certain.
if respond_to? method
send(method, *args)
else
raise Puppet::DevError, "Function #{name} not defined despite being loaded!"
end
end
def resolve_type_and_titles(type, titles)
raise ArgumentError, "titles must be an array" unless titles.is_a?(Array)
case type.downcase
when "class"
# resolve the titles
titles = titles.collect do |a_title|
hostclass = find_hostclass(a_title)
hostclass ? hostclass.name : a_title
end
when "node"
# no-op
else
# resolve the type
resource_type = find_resource_type(type)
type = resource_type.name if resource_type
end
return [type, titles]
end
+ # Transforms references to classes to the form suitable for
+ # lookup in the compiler.
+ #
+ # Makes names passed in the names array absolute if they are relative
+ # Names are currently made absolute only if Puppet[:parser] == 'future'; this will
+ # be the default behavior in Puppet 4.0.
+ #
+ # Transforms Class[] and Resource[] type references to class names,
+ # or raises an error if a Class[] is unspecific, if a Resource is not
+ # a 'class' resource, or if it is unspecific (has no title).
+ #
+ # TODO: Change this for 4.0 to always make names absolute
+ #
+ # @param names [Array<String>] names to (optionally) make absolute
+ # @return [Array<String>] names after transformation
+ #
+ def transform_and_assert_classnames(names)
+ if Puppet[:parser] == 'future'
+ names.map do |name|
+ case name
+ when String
+ name.sub(/^([^:]{1,2})/, '::\1')
+
+ when Puppet::Resource
+ assert_class_and_title(name.type, name.title)
+ name.title.sub(/^([^:]{1,2})/, '::\1')
+
+ when Puppet::Pops::Types::PHostClassType
+ raise ArgumentError, "Cannot use an unspecific Class[] Type" unless name.class_name
+ name.class_name.sub(/^([^:]{1,2})/, '::\1')
+
+ when Puppet::Pops::Types::PResourceType
+ assert_class_and_title(name.type_name, name.title)
+ name.title.sub(/^([^:]{1,2})/, '::\1')
+ end
+ end
+ else
+ names
+ end
+ end
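# Sketch of the transformation applied per name when the future parser is in effect
# (hypothetical values):
#
#   'foo::bar'.sub(/^([^:]{1,2})/, '::\1')   # => "::foo::bar"
#   '::foo'.sub(/^([^:]{1,2})/, '::\1')      # => "::foo" (already absolute, unchanged)
#
# so transform_and_assert_classnames(['foo::bar', Puppet::Resource.new('class', 'baz')])
# would produce ['::foo::bar', '::baz'].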
+
private
+ def assert_class_and_title(type_name, title)
+ if type_name.nil? || type_name == ''
+ raise ArgumentError, "Cannot use an unspecific Resource[] where a Resource['class', name] is expected"
+ end
+ unless type_name =~ /^[Cc]lass$/
+ raise ArgumentError, "Cannot use a Resource[#{type_name}] where a Resource['class', name] is expected"
+ end
+ if title.nil?
+ raise ArgumentError, "Cannot use an unspecific Resource['class'] where a Resource['class', name] is expected"
+ end
+ end
+
def extend_with_functions_module
root = Puppet.lookup(:root_environment)
extend Puppet::Parser::Functions.environment_module(root)
extend Puppet::Parser::Functions.environment_module(environment) if environment != root
end
end
diff --git a/lib/puppet/pops.rb b/lib/puppet/pops.rb
index 3c06c5d71..744e58d24 100644
--- a/lib/puppet/pops.rb
+++ b/lib/puppet/pops.rb
@@ -1,120 +1,118 @@
module Puppet
# The Pops language system. This includes the parser, evaluator, AST model, and
# Binder.
#
# @todo Explain how a user should use this to parse and evaluate the puppet
# language.
#
# @note Warning: Pops is still considered experimental, as such the API may
# change at any time.
#
# @api public
module Pops
require 'puppet/pops/patterns'
require 'puppet/pops/utils'
require 'puppet/pops/adaptable'
require 'puppet/pops/adapters'
require 'puppet/pops/visitable'
require 'puppet/pops/visitor'
require 'puppet/pops/containment'
require 'puppet/pops/issues'
require 'puppet/pops/semantic_error'
require 'puppet/pops/label_provider'
require 'puppet/pops/validation'
require 'puppet/pops/issue_reporter'
require 'puppet/pops/model/model'
- module Types
- require 'puppet/pops/types/types'
- require 'puppet/pops/types/type_calculator'
- require 'puppet/pops/types/type_factory'
- require 'puppet/pops/types/type_parser'
- require 'puppet/pops/types/class_loader'
- require 'puppet/pops/types/enumeration'
- end
+ # (the Types module initializes itself)
+ require 'puppet/pops/types/types'
+ require 'puppet/pops/types/type_calculator'
+ require 'puppet/pops/types/type_factory'
+ require 'puppet/pops/types/type_parser'
+ require 'puppet/pops/types/class_loader'
+ require 'puppet/pops/types/enumeration'
+
module Model
require 'puppet/pops/model/tree_dumper'
require 'puppet/pops/model/ast_transformer'
require 'puppet/pops/model/ast_tree_dumper'
require 'puppet/pops/model/factory'
require 'puppet/pops/model/model_tree_dumper'
require 'puppet/pops/model/model_label_provider'
end
module Binder
module SchemeHandler
# the handlers are auto loaded via bindings
end
module Producers
require 'puppet/pops/binder/producers'
end
require 'puppet/pops/binder/binder'
require 'puppet/pops/binder/bindings_model'
require 'puppet/pops/binder/binder_issues'
require 'puppet/pops/binder/bindings_checker'
require 'puppet/pops/binder/bindings_factory'
require 'puppet/pops/binder/bindings_label_provider'
require 'puppet/pops/binder/bindings_validator_factory'
require 'puppet/pops/binder/injector_entry'
require 'puppet/pops/binder/key_factory'
require 'puppet/pops/binder/injector'
require 'puppet/pops/binder/bindings_composer'
require 'puppet/pops/binder/bindings_model_dumper'
require 'puppet/pops/binder/system_bindings'
require 'puppet/pops/binder/bindings_loader'
require 'puppet/pops/binder/lookup'
module Config
require 'puppet/pops/binder/config/binder_config'
require 'puppet/pops/binder/config/binder_config_checker'
require 'puppet/pops/binder/config/issues'
require 'puppet/pops/binder/config/diagnostic_producer'
end
end
module Parser
require 'puppet/pops/parser/eparser'
require 'puppet/pops/parser/parser_support'
require 'puppet/pops/parser/locator'
require 'puppet/pops/parser/locatable'
- require 'puppet/pops/parser/lexer'
require 'puppet/pops/parser/lexer2'
require 'puppet/pops/parser/evaluating_parser'
require 'puppet/pops/parser/epp_parser'
end
module Validation
- require 'puppet/pops/validation/checker3_1'
- require 'puppet/pops/validation/validator_factory_3_1'
require 'puppet/pops/validation/checker4_0'
require 'puppet/pops/validation/validator_factory_4_0'
end
module Evaluator
require 'puppet/pops/evaluator/callable_signature'
require 'puppet/pops/evaluator/runtime3_support'
require 'puppet/pops/evaluator/evaluator_impl'
require 'puppet/pops/evaluator/epp_evaluator'
+ require 'puppet/pops/evaluator/callable_mismatch_describer'
end
# Subsystem for puppet functions defined in ruby.
#
# @api public
module Functions
require 'puppet/pops/functions/function'
require 'puppet/pops/functions/dispatch'
require 'puppet/pops/functions/dispatcher'
end
end
require 'puppet/parser/ast/pops_bridge'
require 'puppet/bindings'
require 'puppet/functions'
end
diff --git a/lib/puppet/pops/adapters.rb b/lib/puppet/pops/adapters.rb
index 294639172..05af1172f 100644
--- a/lib/puppet/pops/adapters.rb
+++ b/lib/puppet/pops/adapters.rb
@@ -1,108 +1,109 @@
# The Adapters module contains adapters for Documentation, Origin, SourcePosition, and Loader.
#
module Puppet::Pops::Adapters
# A documentation adapter adapts an object with a documentation string.
# (The intended use is for a source text parser to extract documentation and store this
# in DocumentationAdapter instances).
#
class DocumentationAdapter < Puppet::Pops::Adaptable::Adapter
# @return [String] The documentation associated with an object
attr_accessor :documentation
end
# A SourcePosAdapter holds a reference to a *Positioned* object (object that has offset and length).
# This somewhat complex structure makes it possible to correctly refer to a source position
# in source that is embedded in some resource; a parser only sees the embedded snippet of source text
# and does not know where it was embedded. It also enables lazy evaluation of source positions (they are
# rarely needed - typically just when there is an error to report).
#
# @note It is relatively expensive to compute line and position on line - it is not something that
# should be done for every token or model object.
#
# @see Puppet::Pops::Utils#find_adapter, Puppet::Pops::Utils#find_closest_positioned
#
class SourcePosAdapter < Puppet::Pops::Adaptable::Adapter
attr_accessor :locator
def self.create_adapter(o)
new(o)
end
def initialize(o)
@adapted = o
end
def locator
# The locator is always the parent locator, all positioned objects are positioned within their
# parent. If a positioned object also has a locator that locator is for its children!
#
@locator ||= find_locator(@adapted.eContainer)
end
def find_locator(o)
if o.nil?
raise ArgumentError, "InternalError: SourcePosAdapter for something that has no locator among parents"
end
case
when o.is_a?(Puppet::Pops::Model::Program)
return o.locator
# TODO_HEREDOC use case of SubLocator instead
when o.is_a?(Puppet::Pops::Model::SubLocatedExpression) && !(found_locator = o.locator).nil?
return found_locator
when adapter = self.class.get(o)
return adapter.locator
else
find_locator(o.eContainer)
end
end
private :find_locator
def offset
@adapted.offset
end
def length
@adapted.length
end
# Produces the line number for the given offset.
# @note This is an expensive operation
#
def line
- locator.line_for_offset(offset)
+ # Optimization: manual inlining of locator accessor since this method is frequently called
+ (@locator ||= find_locator(@adapted.eContainer)).line_for_offset(offset)
end
# Produces the position on the line of the given offset.
# @note This is an expensive operation
#
def pos
locator.pos_on_line(offset)
end
# Extracts the text represented by this source position (the string is obtained from the locator)
def extract_text
locator.string.slice(offset, length)
end
# Produces an URI with path?line=n&pos=n. If origin is unknown the URI is string:?line=n&pos=n
def to_uri
f = locator.file
f = 'string:' if f.nil? || f.empty?
URI("#{f}?line=#{line.to_s}&pos=#{pos.to_s}")
end
end
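# Usage sketch (assumes `expr` is a positioned object taken from a parsed model):
#
#   adapter = Puppet::Pops::Adapters::SourcePosAdapter.adapt(expr)
#   adapter.line          # computed lazily via the containing locator
#   adapter.pos           # position on that line
#   adapter.extract_text  # the source text covered by the object
#   adapter.to_uri        # e.g. manifest.pp?line=3&pos=7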
# A LoaderAdapter adapts an object with a {Puppet::Pops::Loader}. This is used to make further loading from the
# perspective of the adapted object take place in the perspective of this Loader.
#
# It is typically enough to adapt the root of a model as a search is made towards the root of the model
# until a loader is found, but there is no harm in duplicating this information provided a contained
# object is adapted with the correct loader.
#
# @see Puppet::Pops::Utils#find_adapter
#
class LoaderAdapter < Puppet::Pops::Adaptable::Adapter
# @return [Puppet::Pops::Loader::Loader] the loader
attr_accessor :loader
end
end
diff --git a/lib/puppet/pops/binder/bindings_checker.rb b/lib/puppet/pops/binder/bindings_checker.rb
index 6e436db72..11d97a6d2 100644
--- a/lib/puppet/pops/binder/bindings_checker.rb
+++ b/lib/puppet/pops/binder/bindings_checker.rb
@@ -1,197 +1,197 @@
# A validator/checker of a bindings model
# @api public
#
class Puppet::Pops::Binder::BindingsChecker
Bindings = Puppet::Pops::Binder::Bindings
Issues = Puppet::Pops::Binder::BinderIssues
Types = Puppet::Pops::Types
attr_reader :type_calculator
attr_reader :acceptor
# @api public
def initialize(diagnostics_producer)
@@check_visitor ||= Puppet::Pops::Visitor.new(nil, "check", 0, 0)
@type_calculator = Puppet::Pops::Types::TypeCalculator.new()
- @expression_validator = Puppet::Pops::Validation::ValidatorFactory_3_1.new().checker(diagnostics_producer)
+ @expression_validator = Puppet::Pops::Validation::ValidatorFactory_4_0.new().checker(diagnostics_producer)
@acceptor = diagnostics_producer
end
# Validates the entire model by visiting each model element and calling `check`.
# The result is collected (or acted on immediately) by the configured diagnostic provider/acceptor
# given when creating this Checker.
#
# @api public
#
def validate(b)
check(b)
b.eAllContents.each {|c| check(c) }
end
# Performs binding validity check
# @api private
def check(b)
- @@check_visitor.visit_this(self, b)
+ @@check_visitor.visit_this_0(self, b)
end
# Checks that a binding has a producer and a type
# @api private
def check_Binding(b)
# Must have a type
- acceptor.accept(Issues::MISSING_TYPE, b) unless b.type.is_a?(Types::PObjectType)
+ acceptor.accept(Issues::MISSING_TYPE, b) unless b.type.is_a?(Types::PAnyType)
# Must have a producer
acceptor.accept(Issues::MISSING_PRODUCER, b) unless b.producer.is_a?(Bindings::ProducerDescriptor)
end
# Checks that the producer is a Multibind producer and that the type is a PCollectionType
# @api private
def check_Multibinding(b)
# id is optional (empty id blocks contributions)
# A multibinding must have PCollectionType
acceptor.accept(Issues::MULTIBIND_TYPE_ERROR, b, {:actual_type => b.type}) unless b.type.is_a?(Types::PCollectionType)
# if the producer is nil, a suitable producer will be picked automatically
unless b.producer.nil? || b.producer.is_a?(Bindings::MultibindProducerDescriptor)
acceptor.accept(Issues::MULTIBIND_NOT_COLLECTION_PRODUCER, b, {:actual_producer => b.producer})
end
end
# Checks that the bindings object contains at least one binding. Then checks each binding in turn
# @api private
def check_Bindings(b)
acceptor.accept(Issues::MISSING_BINDINGS, b) unless has_entries?(b.bindings)
end
# Checks that a name has been associated with the bindings
# @api private
def check_NamedBindings(b)
acceptor.accept(Issues::MISSING_BINDINGS_NAME, b) unless has_chars?(b.name)
check_Bindings(b)
end
# Check layer has a name
# @api private
def check_NamedLayer(l)
acceptor.accept(Issues::MISSING_LAYER_NAME, binding_parent(l)) unless has_chars?(l.name)
end
# Checks that the binding has layers and that each layer has a name and at least one binding
# @api private
def check_LayeredBindings(b)
acceptor.accept(Issues::MISSING_LAYERS, b) unless has_entries?(b.layers)
end
# Checks that the non caching producer has a producer to delegate to
# @api private
def check_NonCachingProducerDescriptor(p)
acceptor.accept(Issues::PRODUCER_MISSING_PRODUCER, p) unless p.producer.is_a?(Bindings::ProducerDescriptor)
end
# Checks that a constant value has been declared in the producer and that the type
# of the value is compatible with the type declared in the binding
# @api private
def check_ConstantProducerDescriptor(p)
# the product must be of compatible type
# TODO: Likely to change when value becomes a typed Puppet Object
b = binding_parent(p)
if p.value.nil?
acceptor.accept(Issues::MISSING_VALUE, p, {:binding => b})
else
infered = type_calculator.infer(p.value)
unless type_calculator.assignable?(b.type, infered)
acceptor.accept(Issues::INCOMPATIBLE_TYPE, p, {:binding => b, :expected_type => b.type, :actual_type => infered})
end
end
end
# Checks that an expression has been declared in the producer
# @api private
def check_EvaluatingProducerDescriptor(p)
unless p.expression.is_a?(Puppet::Pops::Model::Expression)
acceptor.accept(Issues::MISSING_EXPRESSION, p, {:binding => binding_parent(p)})
end
end
# Checks that a class name has been declared in the producer
# @api private
def check_InstanceProducerDescriptor(p)
acceptor.accept(Issues::MISSING_CLASS_NAME, p, {:binding => binding_parent(p)}) unless has_chars?(p.class_name)
end
# Checks that a type and a name has been declared. The type must be assignable to the type
# declared in the binding. The name can be an empty string to denote 'no name'
# @api private
def check_LookupProducerDescriptor(p)
b = binding_parent(p)
unless type_calculator.assignable?(b.type, p.type)
acceptor.accept(Issues::INCOMPATIBLE_TYPE, p, {:binding => b, :expected_type => b.type, :actual_type => p.type })
end
acceptor.accept(Issues::MISSING_NAME, p, {:binding => b}) if p.name.nil? # empty string is OK
end
# Checks that a key has been declared, then calls producer_LookupProducerDescriptor to perform
# checks associated with the super class
# @api private
def check_HashLookupProducerDescriptor(p)
acceptor.accept(Issues::MISSING_KEY, p, {:binding => binding_parent(p)}) unless has_chars?(p.key)
check_LookupProducerDescriptor(p)
end
# Checks that the type declared in the binder is a PArrayType
# @api private
def check_ArrayMultibindProducerDescriptor(p)
b = binding_parent(p)
acceptor.accept(Issues::MULTIBIND_INCOMPATIBLE_TYPE, p, {:binding => b, :actual_type => b.type}) unless b.type.is_a?(Types::PArrayType)
end
# Checks that the type declared in the binder is a PHashType
# @api private
def check_HashMultibindProducerDescriptor(p)
b = binding_parent(p)
acceptor.accept(Issues::MULTIBIND_INCOMPATIBLE_TYPE, p, {:binding => b, :actual_type => b.type}) unless b.type.is_a?(Types::PHashType)
end
# Checks that the producer that this producer delegates to is declared
# @api private
def check_ProducerProducerDescriptor(p)
unless p.producer.is_a?(Bindings::ProducerDescriptor)
acceptor.accept(Issues::PRODUCER_MISSING_PRODUCER, p, {:binding => binding_parent(p)})
end
end
# @api private
def check_Expression(t)
@expression_validator.validate(t)
end
# @api private
- def check_PObjectType(t)
+ def check_PAnyType(t)
# Do nothing
end
# Returns true if the argument is a non empty string
# @api private
def has_chars?(s)
s.is_a?(String) && !s.empty?
end
# @api private
def has_entries?(s)
!(s.nil? || s.empty?)
end
# @api private
def binding_parent(p)
begin
x = p.eContainer
if x.nil?
acceptor.accept(Issues::MODEL_OBJECT_IS_UNBOUND, p)
return nil
end
p = x
end while !p.is_a?(Bindings::AbstractBinding)
p
end
end
diff --git a/lib/puppet/pops/binder/bindings_factory.rb b/lib/puppet/pops/binder/bindings_factory.rb
index ca01a7119..e3c4445a7 100644
--- a/lib/puppet/pops/binder/bindings_factory.rb
+++ b/lib/puppet/pops/binder/bindings_factory.rb
@@ -1,805 +1,805 @@
# A helper class that makes it easier to construct a Bindings model.
#
# The Bindings Model
# ------------------
# The BindingsModel (defined in {Puppet::Pops::Binder::Bindings}) is a model that is intended to be generally free from Ruby concerns.
# This means that it is possible for system integrators to create and serialize such models using other technologies than
# Ruby. This manifests itself in the model in that producers are described using instances of a `ProducerDescriptor` rather than
# describing Ruby classes directly. This is also true of the type system where type is expressed using the {Puppet::Pops::Types} model
# to describe all types.
#
# This class, the `BindingsFactory` is a concrete Ruby API for constructing instances of classes in the model.
#
# Named Bindings
# --------------
# The typical usage of the factory is to call {named_bindings} which creates a container of bindings wrapped in a *build object*
# equipped with convenience methods to define the details of the just created named bindings.
# The returned builder is an instance of {Puppet::Pops::Binder::BindingsFactory::BindingsContainerBuilder BindingsContainerBuilder}.
#
# Binding
# -------
# A Binding binds a type/name key to a producer of a value. A binding is conveniently created by calling `bind` on a
# `BindingsContainerBuilder`. The call to bind, produces a binding wrapped in a build object equipped with convenience methods
# to define the details of the just created binding. The returned builder is an instance of
# {Puppet::Pops::Binder::BindingsFactory::BindingsBuilder BindingsBuilder}.
#
# Multibinding
# ------------
# A multibinding works like a binding, but it requires an additional ID. It also places constraints on the type of the binding;
# it must be a collection type (Hash or Array).
#
# Constructing and Contributing Bindings from Ruby
# ------------------------------------------------
# The bindings system is used by referencing bindings symbolically; these are then specified in a Ruby file which is autoloaded
# by Puppet. The entry point for user code that creates bindings is described in {Puppet::Bindings Bindings}.
# That class makes use of a BindingsFactory, and the builder objects to make it easy to construct bindings.
#
# It is intended that a user defining bindings in Ruby should be able to use the builder object methods for the majority of tasks.
# If something advanced is wanted, use one of the helper class methods on the BindingsFactory, and/or the
# {Puppet::Pops::Types::TypeCalculator TypeCalculator} will be required to create and configure objects that are not handled by
# the methods in the builder objects.
#
# Chaining of calls
# ------------------
# Since all the build methods return the build object it is easy to stack on additional calls. The intention is to
# do this in an order that is readable from left to right: `bind.string.name('thename').to(42)`, but there is nothing preventing
# making the calls in some other order e.g. `bind.to(42).name('thename').string`, the second is quite unreadable but produces
# the same result.
#
# For the sake of human readability, the method `name` is also available as `named`, with the intention that it is used after a type,
# e.g. `bind.integer.named('the meaning of life').to(42)`
#
# Methods taking blocks
# ----------------------
# Several methods take an optional block. The block evaluates with the builder object as `self`. This means that there is no
# need to chain the methods calls, they can instead be made in sequence - e.g.
#
# bind do
# integer
# named 'the meaning of life'
# to 42
# end
#
# or mix the two styles
#
# bind do
# integer.named 'the meaning of life'
# to 42
# end
#
# Unwrapping the result
# ---------------------
# The result from all methods is a builder object. Call the method `model` to unwrap the constructed bindings model object.
#
# bindings = BindingsFactory.named_bindings('my named bindings') do
# # bind things
# end.model
#
# @example Create a NamedBinding with content
# result = Puppet::Pops::Binder::BindingsFactory.named_bindings("mymodule::mybindings") do
# bind.name("foo").to(42)
# bind.string.name("site url").to("http://www.example.com")
# end
# result.model()
#
# @api public
#
module Puppet::Pops::Binder::BindingsFactory
# Alias for the {Puppet::Pops::Types::TypeFactory TypeFactory}. This is also available as the method
# `type_factory`.
#
T = Puppet::Pops::Types::TypeFactory
# Abstract base class for bindings object builders.
# Supports delegation of method calls to the BindingsFactory class methods for all methods not implemented
# by a concrete builder.
#
# @abstract
#
class AbstractBuilder
# The built model object.
attr_reader :model
# @param binding [Puppet::Pops::Binder::Bindings::AbstractBinding] The binding to build.
# @api public
def initialize(binding)
@model = binding
end
# Provides convenient access to the Bindings Factory class methods. The intent is to provide access to the
# methods that return producers for the purpose of composing more elaborate things than the builder convenience
# methods support directly.
# @api private
#
def method_missing(meth, *args, &block)
factory = Puppet::Pops::Binder::BindingsFactory
if factory.respond_to?(meth)
factory.send(meth, *args, &block)
else
super
end
end
end
# A bindings builder for an AbstractBinding containing other AbstractBinding instances.
# @api public
class BindingsContainerBuilder < AbstractBuilder
# Adds an empty binding to the container, and returns a builder for it for further detailing.
# An optional block may be given which is evaluated using `instance_eval`.
# @return [BindingsBuilder] the builder for the created binding
# @api public
#
def bind(&block)
binding = Puppet::Pops::Binder::Bindings::Binding.new()
model.addBindings(binding)
builder = BindingsBuilder.new(binding)
builder.instance_eval(&block) if block_given?
builder
end
# Binds a multibind with the given identity where later, the looked up result contains all
# contributions to this key. An optional block may be given which is evaluated using `instance_eval`.
# @param id [String] the multibind's id used when adding contributions
# @return [MultibindingsBuilder] the builder for the created multibinding
# @api public
#
def multibind(id, &block)
binding = Puppet::Pops::Binder::Bindings::Multibinding.new()
binding.id = id
model.addBindings(binding)
builder = MultibindingsBuilder.new(binding)
builder.instance_eval(&block) if block_given?
builder
end
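# Sketch of a multibind with two contributions (hypothetical names and values):
#
#   Puppet::Pops::Binder::BindingsFactory.named_bindings('example') do
#     multibind('collected').name('all_the_things').hash_of_data
#     bind.in_multibind('collected').name('a').to(1)
#     bind.in_multibind('collected').name('b').to(2)
#   end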
end
# Builds a Binding via convenience methods.
#
# @api public
#
class BindingsBuilder < AbstractBuilder
# @param binding [Puppet::Pops::Binder::Bindings::AbstractBinding] the binding to build.
# @api public
def initialize(binding)
super binding
data()
end
# Sets the name of the binding.
# @param name [String] the name to bind.
# @api public
def name(name)
model.name = name
self
end
# Same as {#name}, but reads better in certain combinations.
# @api public
alias_method :named, :name
# Sets the binding to be abstract (it must be overridden)
# @api public
def abstract
model.abstract = true
self
end
# Sets the binding to be override (it must override something)
# @api public
def override
model.override = true
self
end
# Sets the binding to be final (it may not be overridden)
# @api public
def final
model.final = true
self
end
# Makes the binding a multibind contribution to the given multibind id
# @param id [String] the multibind id to contribute this binding to
# @api public
def in_multibind(id)
model.multibind_id = id
self
end
# Sets the type of the binding to the given type.
# @note
# This is only needed if something other than the default type `Data` is wanted, or if the wanted type is
# not provided by one of the convenience methods {#array_of_data}, {#boolean}, {#float}, {#hash_of_data},
# {#integer}, {#scalar}, {#pattern}, {#string}, or one of the collection methods {#array_of}, or {#hash_of}.
#
# To create a type, use the method {#type_factory}, to obtain the type.
# @example creating a Hash with Integer key and Array[Integer] element type
# tc = type_factory
# type(tc.hash_of(tc.array_of(tc.integer), tc.integer))
- # @param type [Puppet::Pops::Types::PObjectType] the type to set for the binding
+ # @param type [Puppet::Pops::Types::PAnyType] the type to set for the binding
# @api public
#
def type(type)
model.type = type
self
end
# Sets the type of the binding to Integer.
# @return [Puppet::Pops::Types::PIntegerType] the type
# @api public
def integer()
type(T.integer())
end
# Sets the type of the binding to Float.
# @return [Puppet::Pops::Types::PFloatType] the type
# @api public
def float()
type(T.float())
end
# Sets the type of the binding to Boolean.
# @return [Puppet::Pops::Types::PBooleanType] the type
# @api public
def boolean()
type(T.boolean())
end
# Sets the type of the binding to String.
# @return [Puppet::Pops::Types::PStringType] the type
# @api public
def string()
type(T.string())
end
# Sets the type of the binding to Pattern.
# @return [Puppet::Pops::Types::PRegexpType] the type
# @api public
def pattern()
type(T.pattern())
end
# Sets the type of the binding to the abstract type Scalar.
# @return [Puppet::Pops::Types::PScalarType] the type
# @api public
def scalar()
type(T.scalar())
end
# Sets the type of the binding to the abstract type Data.
# @return [Puppet::Pops::Types::PDataType] the type
# @api public
def data()
type(T.data())
end
# Sets the type of the binding to Array[Data].
# @return [Puppet::Pops::Types::PArrayType] the type
# @api public
def array_of_data()
type(T.array_of_data())
end
# Sets the type of the binding to Array[T], where T is given.
- # @param t [Puppet::Pops::Types::PObjectType] the type of the elements of the array
+ # @param t [Puppet::Pops::Types::PAnyType] the type of the elements of the array
# @return [Puppet::Pops::Types::PArrayType] the type
# @api public
def array_of(t)
type(T.array_of(t))
end
# Sets the type of the binding to Hash[Literal, Data].
# @return [Puppet::Pops::Types::PHashType] the type
# @api public
def hash_of_data()
type(T.hash_of_data())
end
# Sets type of the binding to `Hash[Literal, t]`.
# To also limit the key type, use {#type} and give it a fully specified
# hash using {#type_factory} and then `hash_of(value_type, key_type)`.
# @return [Puppet::Pops::Types::PHashType] the type
# @api public
def hash_of(t)
type(T.hash_of(t))
end
# Sets the type of the binding based on the given argument.
# @overload instance_of(t)
# The same as calling {#type} with `t`.
- # @param t [Puppet::Pops::Types::PObjectType] the type
+ # @param t [Puppet::Pops::Types::PAnyType] the type
# @overload instance_of(o)
# Infers the type from the given Ruby object and sets that as the type - i.e. "set the type
# of the binding to be that of the given data object".
# @param o [Object] the object to infer the type from
# @overload instance_of(c)
# @param c [Class] the Class to base the type on.
# Sets the type based on the given ruby class. The result is one of the specific puppet types
- # if the class can be represented by a specific type, or the open ended PRubyType otherwise.
+ # if the class can be represented by a specific type, or the open ended PRuntimeType otherwise.
# @overload instance_of(s)
# The same as using a class, but instead of giving a class instance, the class is expressed using its fully
# qualified name. This method of specifying the type allows late binding (the class does not have to be loaded
# before it can be used in a binding).
# @param s [String] the fully qualified classname to base the type on.
# @return the resulting type
# @api public
#
def instance_of(t)
type(T.type_of(t))
end
# Provides convenient access to the type factory.
# This is intended to be used when methods taking a type as argument i.e. {#type}, {#array_of}, {#hash_of}, and {#instance_of}.
# @note
# The type factory is also available via the constant {T}.
# @api public
def type_factory
Puppet::Pops::Types::TypeFactory
end
# Sets the binding's producer to a singleton producer, if given argument is a value, a literal producer is created for it.
# To create a producer producing an instance of a class with lazy loading of the class, use {#to_instance}.
#
# @overload to(a_literal)
# Sets a constant producer in the binding.
# @overload to(a_class, *args)
# Sets an Instantiating producer (producing an instance of the given class)
# @overload to(a_producer_descriptor)
# Sets the producer from the given producer descriptor
# @return [BindingsBuilder] self
# @api public
#
def to(producer, *args)
case producer
when Class
producer = Puppet::Pops::Binder::BindingsFactory.instance_producer(producer.name, *args)
when Puppet::Pops::Model::Program
# program is not an expression
producer = Puppet::Pops::Binder::BindingsFactory.evaluating_producer(producer.body)
when Puppet::Pops::Model::Expression
producer = Puppet::Pops::Binder::BindingsFactory.evaluating_producer(producer)
when Puppet::Pops::Binder::Bindings::ProducerDescriptor
else
# If given producer is not a producer, create a literal producer
producer = Puppet::Pops::Binder::BindingsFactory.literal_producer(producer)
end
model.producer = producer
self
end
# Sets the binding's producer to a producer of an instance of given class (a String class name, or a Class instance).
# Use a string class name when lazy loading of the class is wanted.
#
# @overload to_instance(class_name, *args)
# @param class_name [String] the name of the class to instantiate
# @param args [Object] optional arguments to the constructor
# @overload to_instance(a_class)
# @param a_class [Class] the class to instantiate
# @param args [Object] optional arguments to the constructor
#
def to_instance(type, *args)
class_name = case type
when Class
type.name
when String
type
else
raise ArgumentError, "to_instance accepts String (a class name), or a Class.*args got: #{type.class}."
end
model.producer = Puppet::Pops::Binder::BindingsFactory.instance_producer(class_name, *args)
end
# Sets the binding's producer to a singleton producer
# @overload to_producer(a_producer)
# Sets the producer to an instantiated producer. The resulting model can not be serialized as a consequence as there
# is no meta-model describing the specialized producer. Use this only in exceptional cases, or where there is never the
# need to serialize the model.
# @param a_producer [Puppet::Pops::Binder::Producers::Producer] an instantiated producer, not serializeable !
#
# @overload to_producer(a_class, *args)
# @param a_class [Class] the class to create an instance of
# @param args [Object] the arguments to the given class' new
#
# @overload to_producer(a_producer_descriptor)
# @param a_producer_descriptor [Puppet::Pops::Binder::Bindings::ProducerDescriptor] a descriptor
# producing Puppet::Pops::Binder::Producers::Producer
#
# @api public
#
def to_producer(producer, *args)
case producer
when Class
producer = Puppet::Pops::Binder::BindingsFactory.instance_producer(producer.name, *args)
when Puppet::Pops::Binder::Bindings::ProducerDescriptor
when Puppet::Pops::Binder::Producers::Producer
# a custom producer instance
producer = Puppet::Pops::Binder::BindingsFactory.literal_producer(producer)
else
raise ArgumentError, "Given producer argument is none of a producer descriptor, a class, or a producer"
end
metaproducer = Puppet::Pops::Binder::BindingsFactory.producer_producer(producer)
model.producer = metaproducer
self
end
# Sets the binding's producer to a series of producers.
# Use this when you want to produce a different producer on each request for a producer
#
# @overload to_producer(a_producer)
# Sets the producer to an instantiated producer. The resulting model can not be serialized as a consequence as there
# is no meta-model describing the specialized producer. Use this only in exceptional cases, or where there is never the
# need to serialize the model.
# @param a_producer [Puppet::Pops::Binder::Producers::Producer] an instantiated producer, not serializeable !
#
# @overload to_producer(a_class, *args)
# @param a_class [Class] the class to create an instance of
# @param args [Object] the arguments to the given class' new
#
# @overload to_producer(a_producer_descriptor)
# @param a_producer_descriptor [Puppet::Pops::Binder::Bindings::ProducerDescriptor] a descriptor
# producing Puppet::Pops::Binder::Producers::Producer
#
# @api public
#
def to_producer_series(producer, *args)
case producer
when Class
producer = Puppet::Pops::Binder::BindingsFactory.instance_producer(producer.name, *args)
when Puppet::Pops::Binder::Bindings::ProducerDescriptor
when Puppet::Pops::Binder::Producers::Producer
# a custom producer instance
producer = Puppet::Pops::Binder::BindingsFactory.literal_producer(producer)
else
raise ArgumentError, "Given producer argument is none of a producer descriptor, a class, or a producer"
end
non_caching = Puppet::Pops::Binder::Bindings::NonCachingProducerDescriptor.new()
non_caching.producer = producer
metaproducer = Puppet::Pops::Binder::BindingsFactory.producer_producer(non_caching)
non_caching = Puppet::Pops::Binder::Bindings::NonCachingProducerDescriptor.new()
non_caching.producer = metaproducer
model.producer = non_caching
self
end
# Sets the binding's producer to a "non singleton" producer (each call to produce produces a new instance/copy).
# @overload to_series_of(a_literal)
# a constant producer
# @overload to_series_of(a_class, *args)
# Instantiating producer
# @overload to_series_of(a_producer_descriptor)
# a given producer
#
# @api public
#
def to_series_of(producer, *args)
case producer
when Class
producer = Puppet::Pops::Binder::BindingsFactory.instance_producer(producer.name, *args)
when Puppet::Pops::Binder::Bindings::ProducerDescriptor
else
# If given producer is not a producer, create a literal producer
producer = Puppet::Pops::Binder::BindingsFactory.literal_producer(producer)
end
non_caching = Puppet::Pops::Binder::Bindings::NonCachingProducerDescriptor.new()
non_caching.producer = producer
model.producer = non_caching
self
end
# Sets the binding's producer to one that performs a lookup of another key
# @overload to_lookup_of(type, name)
# @overload to_lookup_of(name)
# @api public
#
def to_lookup_of(type, name=nil)
unless name
name = type
type = Puppet::Pops::Types::TypeFactory.data()
end
model.producer = Puppet::Pops::Binder::BindingsFactory.lookup_producer(type, name)
self
end
# Sets the binding's producer to one that performs a lookup of another key and then applies a hash lookup on
# the result.
#
# @overload to_hash_lookup_of(type, name, key)
# @api public
#
def to_hash_lookup_of(type, name, key)
model.producer = Puppet::Pops::Binder::BindingsFactory.hash_lookup_producer(type, name, key)
self
end
# Sets the binding's producer to one that produces the first found lookup of another key
# @param list_of_lookups [Array] array of arrays [type name], or just name (implies data)
# @example
# binder.bind().name('foo').to_first_found('fee', 'fum', 'extended-bar')
# binder.bind().name('foo').to_first_found(
# [T.ruby(ThisClass), 'fee'],
# [T.ruby(ThatClass), 'fum'],
# 'extended-bar')
# @api public
#
def to_first_found(*list_of_lookups)
producers = list_of_lookups.collect do |entry|
if entry.is_a?(Array)
case entry.size
when 2
Puppet::Pops::Binder::BindingsFactory.lookup_producer(entry[0], entry[1])
when 1
Puppet::Pops::Binder::BindingsFactory.lookup_producer(Puppet::Pops::Types::TypeFactory.data(), entry[0])
else
raise ArgumentError, "Not an array of [type, name], name, or [name]"
end
else
Puppet::Pops::Binder::BindingsFactory.lookup_producer(T.data(), entry)
end
end
model.producer = Puppet::Pops::Binder::BindingsFactory.first_found_producer(*producers)
self
end
# Sets options to the producer.
# See the respective producer for the options it supports. All producers supports the option `:transformer`, a
# puppet or ruby lambda that is evaluated with the produced result as an argument. The ruby lambda gets scope and
# value as arguments.
# @note
# A Ruby lambda is not cross platform safe. Use a puppet lambda if you want a bindings model that is.
#
# @api public
def producer_options(options)
options.each do |k, v|
arg = Puppet::Pops::Binder::Bindings::NamedArgument.new()
arg.name = k.to_s
arg.value = v
model.addProducer_args(arg)
end
self
end
end
# A builder specialized for multibind - checks that type is Array or Hash based. A new builder sets the
# multibinding to be of type Hash[Data].
#
# @api public
class MultibindingsBuilder < BindingsBuilder
# Constraints type to be one of {Puppet::Pops::Types::PArrayType PArrayType}, or {Puppet::Pops::Types::PHashType PHashType}.
# @raise [ArgumentError] if type constraint is not met.
# @api public
def type(type)
unless type.class == Puppet::Pops::Types::PArrayType || type.class == Puppet::Pops::Types::PHashType
raise ArgumentError, "Wrong type; only PArrayType, or PHashType allowed, got '#{type.to_s}'"
end
model.type = type
self
end
# Overrides the default implementation that will raise an exception as a multibind requires a hash type.
# Thus, if nothing else is requested, a multibind will be configured as Hash[Data].
#
def data()
hash_of_data()
end
end
# Produces a ContributedBindings.
# A ContributedBindings is used by bindings providers to return a set of named bindings.
#
# @param name [String] the name of the contributed bindings (for human use in messages/logs only)
# @param named_bindings [Puppet::Pops::Binder::Bindings::NamedBindings, Array<Puppet::Pops::Binder::Bindings::NamedBindings>] the
# named bindings to include
#
def self.contributed_bindings(name, named_bindings)
cb = Puppet::Pops::Binder::Bindings::ContributedBindings.new()
cb.name = name
named_bindings = [named_bindings] unless named_bindings.is_a?(Array)
named_bindings.each {|b| cb.addBindings(b) }
cb
end
# Creates a named binding container, the top bindings model object.
# A NamedBindings is typically produced by a bindings provider.
#
# The created container is wrapped in a BindingsContainerBuilder for further detailing.
# Unwrap the built result when done.
# @api public
#
def self.named_bindings(name, &block)
binding = Puppet::Pops::Binder::Bindings::NamedBindings.new()
binding.name = name
builder = BindingsContainerBuilder.new(binding)
builder.instance_eval(&block) if block_given?
builder
end
# This variant of {named_bindings} evaluates the given block as a method on an anonymous class,
# thus, if the block defines methods or does something with the class itself, this does not pollute
# the base class (BindingsContainerBuilder).
# @api private
#
def self.safe_named_bindings(name, scope, &block)
binding = Puppet::Pops::Binder::Bindings::NamedBindings.new()
binding.name = name
anon = Class.new(BindingsContainerBuilder) do
def initialize(b)
super b
end
end
anon.send(:define_method, :_produce, block)
builder = anon.new(binding)
case block.arity
when 0
builder._produce()
when 1
builder._produce(scope)
end
builder
end
# Creates a literal/constant producer
# @param value [Object] the value to produce
# @return [Puppet::Pops::Binder::Bindings::ProducerDescriptor] a producer description
# @api public
#
def self.literal_producer(value)
producer = Puppet::Pops::Binder::Bindings::ConstantProducerDescriptor.new()
producer.value = value
producer
end
# Creates a non caching producer
# @param producer [Puppet::Pops::Binder::Bindings::Producer] the producer to make non caching
# @return [Puppet::Pops::Binder::Bindings::ProducerDescriptor] a producer description
# @api public
#
def self.non_caching_producer(producer)
p = Puppet::Pops::Binder::Bindings::NonCachingProducerDescriptor.new()
p.producer = producer
p
end
# Creates a producer producer
# @param producer [Puppet::Pops::Binder::Bindings::Producer] a producer producing a Producer.
# @return [Puppet::Pops::Binder::Bindings::ProducerDescriptor] a producer description
# @api public
#
def self.producer_producer(producer)
p = Puppet::Pops::Binder::Bindings::ProducerProducerDescriptor.new()
p.producer = producer
p
end
# Creates an instance producer
# An instance producer creates a new instance of a class.
# If the class implements the class method `inject` this method is called instead of `new` to allow further lookups
# to take place. This is referred to as *assisted inject*. If the class method `inject` is missing, the regular `new` method
# is called.
#
# @param class_name [String] the name of the class
# @param args[Object] arguments to the class' `new` method.
# @return [Puppet::Pops::Binder::Bindings::ProducerDescriptor] a producer description
# @api public
#
def self.instance_producer(class_name, *args)
p = Puppet::Pops::Binder::Bindings::InstanceProducerDescriptor.new()
p.class_name = class_name
args.each {|a| p.addArguments(a) }
p
end
# Creates a Producer that looks up a value.
- # @param type [Puppet::Pops::Types::PObjectType] the type to lookup
+ # @param type [Puppet::Pops::Types::PAnyType] the type to lookup
# @param name [String] the name to lookup
# @return [Puppet::Pops::Binder::Bindings::ProducerDescriptor] a producer description
# @api public
def self.lookup_producer(type, name)
p = Puppet::Pops::Binder::Bindings::LookupProducerDescriptor.new()
p.type = type
p.name = name
p
end
# Creates a Hash lookup producer that looks up a hash value, and then a key in the hash.
#
# @return [Puppet::Pops::Binder::Bindings::ProducerDescriptor] a producer description
- # @param type [Puppet::Pops::Types::PObjectType] the type to lookup (i.e. a Hash of some key/value type).
+ # @param type [Puppet::Pops::Types::PAnyType] the type to lookup (i.e. a Hash of some key/value type).
# @param name [String] the name to lookup
# @param key [Object] the key to lookup in the looked up hash (type should comply with given key type).
# @api public
#
def self.hash_lookup_producer(type, name, key)
p = Puppet::Pops::Binder::Bindings::HashLookupProducerDescriptor.new()
p.type = type
p.name = name
p.key = key
p
end
# Creates a first-found producer that looks up from a given series of keys. The first found looked up
# value will be produced.
# @param producers [Array<Puppet::Pops::Binder::Bindings::ProducerDescriptor>] the producers to consult in given order
# @return [Puppet::Pops::Binder::Bindings::ProducerDescriptor] a producer descriptor
# @api public
def self.first_found_producer(*producers)
p = Puppet::Pops::Binder::Bindings::FirstFoundProducerDescriptor.new()
producers.each {|p2| p.addProducers(p2) }
p
end
# Creates an evaluating producer that evaluates a puppet expression.
# A puppet expression is most conveniently created by using the {Puppet::Pops::Parser::EvaluatingParser EvaluatingParser} as it performs
# all set up and validation of the parsed source. Two convenience methods are used to parse an expression, or parse a ruby string
# as a puppet string. See methods {puppet_expression}, {puppet_string} and {parser} for more information.
#
# @example producing a puppet expression
# expr = puppet_string("Interpolated $fqdn", __FILE__)
#
# @param expression [Puppet::Pops::Model::Expression] a puppet DSL expression as produced by the eparser.
# @return [Puppet::Pops::Binder::Bindings::ProducerDescriptor] a producer descriptor
# @api public
#
def self.evaluating_producer(expression)
p = Puppet::Pops::Binder::Bindings::EvaluatingProducerDescriptor.new()
p.expression = expression
p
end
# Creates a NamedLayer. This is used by the bindings system to create a model of the layers.
#
# @api public
#
def self.named_layer(name, *bindings)
result = Puppet::Pops::Binder::Bindings::NamedLayer.new()
result.name = name
bindings.each { |b| result.addBindings(b) }
result
end
# Create a LayeredBindings. This is used by the bindings system to create a model of all given layers.
# @param named_layers [Puppet::Pops::Binder::Bindings::NamedLayer] one or more named layers
# @return [Puppet::Pops::Binder::Bindings::LayeredBindings] the constructed layered bindings.
# @api public
#
def self.layered_bindings(*named_layers)
result = Puppet::Pops::Binder::Bindings::LayeredBindings.new()
named_layers.each {|b| result.addLayers(b) }
result
end
# @return [Puppet::Pops::Parser::EvaluatingParser] a parser for puppet expressions
def self.parser
@parser ||= Puppet::Pops::Parser::EvaluatingParser.new()
end
# Parses and produces a puppet expression from the given string.
# @param string [String] puppet source e.g. "1 + 2"
# @param source_file [String] the source location, typically `__FILE__`
# @return [Puppet::Pops::Model::Expression] an expression (that can be bound)
# @api public
#
def self.puppet_expression(string, source_file)
parser.parse_string(string, source_file).current
end
# Parses and produces a puppet string expression from the given string.
# The string will automatically be quoted and special characters escaped.
# As an example, if given the (ruby) string "Hi\nMary" it is transformed to
# the puppet string (illustrated with a ruby string) "\"Hi\\nMary\"" before being
# parsed.
#
# @param string [String] puppet source e.g. "On node $!{fqdn}"
# @param source_file [String] the source location, typically `__FILE__`
# @return [Puppet::Pops::Model::Expression] an expression (that can be bound)
# @api public
#
def self.puppet_string(string, source_file)
parser.parse_string(parser.quote(string), source_file).current
end
end
diff --git a/lib/puppet/pops/binder/bindings_label_provider.rb b/lib/puppet/pops/binder/bindings_label_provider.rb
index ac10f7677..0d531c827 100644
--- a/lib/puppet/pops/binder/bindings_label_provider.rb
+++ b/lib/puppet/pops/binder/bindings_label_provider.rb
@@ -1,43 +1,43 @@
# A provider of labels for bindings model object, producing a human name for the model object.
# @api private
#
class Puppet::Pops::Binder::BindingsLabelProvider < Puppet::Pops::LabelProvider
def initialize
@@label_visitor ||= Puppet::Pops::Visitor.new(self,"label",0,0)
end
# Produces a label for the given object without article.
# @return [String] a human readable label
#
def label o
@@label_visitor.visit(o)
end
- def label_PObjectType o ; "#{Puppet::Pops::Types::TypeFactory.label(o)}" end
+ def label_PAnyType o ; "#{Puppet::Pops::Types::TypeFactory.label(o)}" end
def label_ProducerDescriptor o ; "Producer" end
def label_NonCachingProducerDescriptor o ; "Non Caching Producer" end
def label_ConstantProducerDescriptor o ; "Producer['#{o.value}']" end
def label_EvaluatingProducerDescriptor o ; "Evaluating Producer" end
def label_InstanceProducerDescriptor o ; "Producer[#{o.class_name}]" end
def label_LookupProducerDescriptor o ; "Lookup Producer[#{o.name}]" end
def label_HashLookupProducerDescriptor o ; "Hash Lookup Producer[#{o.name}][#{o.key}]" end
def label_FirstFoundProducerDescriptor o ; "First Found Producer" end
def label_ProducerProducerDescriptor o ; "Producer[Producer]" end
def label_MultibindProducerDescriptor o ; "Multibind Producer" end
def label_ArrayMultibindProducerDescriptor o ; "Array Multibind Producer" end
def label_HashMultibindProducerDescriptor o ; "Hash Multibind Producer" end
def label_Bindings o ; "Bindings" end
def label_NamedBindings o ; "Named Bindings" end
def label_LayeredBindings o ; "Layered Bindings" end
def label_NamedLayer o ; "Layer '#{o.name}'" end
def label_ContributedBindings o ; "Contributed Bindings" end
def label_NamedArgument o ; "Named Argument" end
def label_Binding(o)
'Binding' + (o.multibind_id.nil? ? '' : ' In Multibind')
end
def label_Multibinding(o)
'Multibinding' + (o.multibind_id.nil? ? '' : ' In Multibind')
end
end
diff --git a/lib/puppet/pops/binder/bindings_loader.rb b/lib/puppet/pops/binder/bindings_loader.rb
index 84a4051b8..c57bb3cb6 100644
--- a/lib/puppet/pops/binder/bindings_loader.rb
+++ b/lib/puppet/pops/binder/bindings_loader.rb
@@ -1,88 +1,88 @@
require 'rgen/metamodel_builder'
# The BindingsLoader provides a bindings model given a bindings name.
# If the bindings are not already loaded, they are loaded using the Puppet Autoloader.
# This means bindings can be loaded from a gem, or from puppet modules.
#
class Puppet::Pops::Binder::BindingsLoader
@confdir = Puppet.settings[:confdir]
# @autoloader = Puppet::Util::Autoload.new("BindingsLoader", "puppet/bindings", :wrap => false)
# Returns the named bindings registered under the given fully qualified name.
# Lookup of the name is never relative to the calling namespace.
- # @param name [String, Array<String>, Array<Symbol>, Puppet::Pops::Types::PObjectType] A fully qualified
- # class name String (e.g. '::Foo::Bar', 'Foo::Bar'), a PObjectType, or a fully qualified name in Array form where each part
+ # @param name [String, Array<String>, Array<Symbol>, Puppet::Pops::Types::PAnyType] A fully qualified
+ # class name String (e.g. '::Foo::Bar', 'Foo::Bar'), a PAnyType, or a fully qualified name in Array form where each part
# is either a String or a Symbol, e.g. `%w{Puppetx Puppetlabs SomeExtension}`.
# @return [Class, nil] the looked up class or nil if no such class is loaded
# @raise ArgumentError If the given argument has the wrong type
# @api public
#
def self.provide(scope, name)
case name
when String
provide_from_string(scope, name)
when Array
provide_from_name_path(scope, name.join('::'), name)
else
raise ArgumentError, "Cannot provide a bindings from a '#{name.class.name}'"
end
end
# Whether a loadable name exists relative to a basedir. If it does, the loadable path is returned.
# @return [String, nil] a loadable path for the given name, or nil
#
def self.loadable?(basedir, name)
# note, "lib" is added by the autoloader
#
paths_for_name(name).find {|p| Puppet::FileSystem.exist?(File.join(basedir, "lib/puppet/bindings", p)+'.rb') }
end
private
def self.loader()
unless Puppet.settings[:confdir] == @confdir
@confdir = Puppet.settings[:confdir]
@autoloader = Puppet::Util::Autoload.new("BindingsLoader", "puppet/bindings", :wrap => false)
end
@autoloader
end
def self.provide_from_string(scope, name)
name_path = name.split('::')
# always from the root, so remove an empty first segment
if name_path[0].empty?
name_path = name_path[1..-1]
end
provide_from_name_path(scope, name, name_path)
end
def self.provide_from_name_path(scope, name, name_path)
# If bindings is already loaded, try this first
result = Puppet::Bindings.resolve(scope, name)
unless result
# Attempt to load it using the auto loader
paths_for_name(name).find {|path| loader.load(path) }
result = Puppet::Bindings.resolve(scope, name)
end
result
end
def self.paths_for_name(fq_name)
[de_camel(fq_name), downcased_path(fq_name)]
end
def self.downcased_path(fq_name)
fq_name.to_s.gsub(/::/, '/').downcase
end
def self.de_camel(fq_name)
fq_name.to_s.gsub(/::/, '/').
gsub(/([A-Z]+)([A-Z][a-z])/,'\1_\2').
gsub(/([a-z\d])([A-Z])/,'\1_\2').
tr("-", "_").
downcase
end
end
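# Sketch of the candidate load paths tried for a bindings name (hypothetical name):
#
#   de_camel('MyModule::DefaultBindings')        # => "my_module/default_bindings"
#   downcased_path('MyModule::DefaultBindings')  # => "mymodule/defaultbindings"
#
# paths_for_name tries both forms below lib/puppet/bindings/ via the autoloader.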
diff --git a/lib/puppet/pops/binder/bindings_model.rb b/lib/puppet/pops/binder/bindings_model.rb
index 4d041fca1..9e935ae65 100644
--- a/lib/puppet/pops/binder/bindings_model.rb
+++ b/lib/puppet/pops/binder/bindings_model.rb
@@ -1,201 +1,68 @@
require 'rgen/metamodel_builder'
# The Bindings model is a model of Key to Producer mappings (bindings).
-# The central concept is that a Bindings is a nested structure of bindings.
-# A top level Bindings should be a NamedBindings (the name is used primarily
-# in error messages). A Key is a Type/Name combination.
-#
-# TODO: In this version, references to "any object" uses the class Object,
-# but this is only temporary. The intent is to use specific Puppet Objects
-# that are typed using the Puppet Type System (to enable serialization).
+# It is composed of a meta-model part (bindings_model_meta.rb), and
+# an implementation part (this file).
#
# @see Puppet::Pops::Binder::BindingsFactory The BindingsFactory for more details on how to create model instances.
# @api public
-module Puppet::Pops::Binder::Bindings
-
- # @abstract
- # @api public
- #
- class AbstractBinding < Puppet::Pops::Model::PopsObject
- abstract
- end
-
- # An abstract producer
- # @abstract
- # @api public
- #
- class ProducerDescriptor < Puppet::Pops::Model::PopsObject
- abstract
- contains_one_uni 'transformer', Puppet::Pops::Model::LambdaExpression
- end
-
- # All producers are singleton producers unless wrapped in a non caching producer
- # where each lookup produces a new instance. It is an error to have a nesting level > 1
- # and to nest a NonCachingProducerDescriptor.
- #
- # @api public
- #
- class NonCachingProducerDescriptor < ProducerDescriptor
- contains_one_uni 'producer', ProducerDescriptor
- end
-
- # Produces a constant value (i.e. something of {Puppet::Pops::Types::PDataType PDataType})
- # @api public
- #
- class ConstantProducerDescriptor < ProducerDescriptor
- # TODO: This should be a typed Puppet Object
- has_attr 'value', Object
- end
-
- # Produces a value by evaluating a Puppet DSL expression.
- # Note that the expression is not contained as it is part of a Puppet::Pops::Model::Program.
- # To include the expression in the serialization, the Program it is contained in must be
- # contained in the same serialization. This can be achieved by containing it in the
- # ContributedBindings that is the top of a BindingsModel produced and given to the Injector.
- #
- # @api public
- #
- class EvaluatingProducerDescriptor < ProducerDescriptor
- has_one 'expression', Puppet::Pops::Model::Expression
- end
-
- # An InstanceProducer creates an instance of the given class
- # Arguments are passed to the class' `new` operator in the order they are given.
- # @api public
- #
- class InstanceProducerDescriptor < ProducerDescriptor
- # TODO: This should be a typed Puppet Object ??
- has_many_attr 'arguments', Object, :upperBound => -1
- has_attr 'class_name', String
- end
-
- # A ProducerProducerDescriptor, describes that the produced instance is itself a Producer
- # that should be used to produce the value.
- # @api public
- #
- class ProducerProducerDescriptor < ProducerDescriptor
- contains_one_uni 'producer', ProducerDescriptor, :lowerBound => 1
- end
-
- # Produces a value by looking up another key (type/name)
- # @api public
- #
- class LookupProducerDescriptor < ProducerDescriptor
- contains_one_uni 'type', Puppet::Pops::Types::PObjectType
- has_attr 'name', String
- end
-
- # Produces a value by looking up another multibound key, and then looking up
- # the detail using a detail_key.
- # This is used to produce a specific service of a given type (such as a SyntaxChecker for the syntax "json").
- # @api public
- #
- class HashLookupProducerDescriptor < LookupProducerDescriptor
- has_attr 'key', String
- end
-
- # Produces a value by looking up each producer in turn. The first existing producer wins.
- # @api public
- #
- class FirstFoundProducerDescriptor < ProducerDescriptor
- contains_many_uni 'producers', LookupProducerDescriptor
- end
+module Puppet::Pops::Binder
+ require 'puppet/pops/binder/bindings_model_meta'
+
+ # TODO: See PUP-2978 for possible performance optimization
+
+ # Mix in implementation into the generated code
+ module Bindings
+ class BindingsModelObject
+ include Puppet::Pops::Visitable
+ include Puppet::Pops::Adaptable
+ include Puppet::Pops::Containment
+ end
+
+ class ConstantProducerDescriptor
+ module ClassModule
+ def setValue(v)
+ @value = v
+ end
+ def getValue()
+ @value
+ end
+ def value=(v)
+ @value = v
+ end
+ end
+ end
+
+ class NamedArgument
+ module ClassModule
+ def setValue(v)
+ @value = v
+ end
+ def getValue()
+ @value
+ end
+ def value=(v)
+ @value = v
+ end
+ end
+ end
+
+ class InstanceProducerDescriptor
+ module ClassModule
+ def addArguments(val, index = -1)
+ @arguments ||= []
+ @arguments.insert(index, val)
+ end
+ def removeArguments(val)
+ raise "unsupported operation"
+ end
+ def setArguments(values)
+ @arguments = []
+ values.each {|v| addArguments(v) }
+ end
+ end
+ end
- # @api public
- # @abstract
- class MultibindProducerDescriptor < ProducerDescriptor
- abstract
end
- # Used in a Multibind of Array type unless it has a producer. May explicitly be used as well.
- # @api public
- #
- class ArrayMultibindProducerDescriptor < MultibindProducerDescriptor
- end
-
- # Used in a Multibind of Hash type unless it has a producer. May explicitly be used as well.
- # @api public
- #
- class HashMultibindProducerDescriptor < MultibindProducerDescriptor
- end
-
- # Plays the role of "Hash[String, Object] entry" but with keys in defined order.
- #
- # @api public
- #
- class NamedArgument < Puppet::Pops::Model::PopsObject
- has_attr 'name', String, :lowerBound => 1
- has_attr 'value', Object, :lowerBound => 1
- end
-
- # Binds a type/name combination to a producer. Optionally marking the binding as being abstract, or being an
- # override of another binding. Optionally, the binding defines producer arguments passed to the producer when
- # it is created.
- #
- # @api public
- class Binding < AbstractBinding
- contains_one_uni 'type', Puppet::Pops::Types::PObjectType
- has_attr 'name', String
- has_attr 'override', Boolean
- has_attr 'abstract', Boolean
- has_attr 'final', Boolean
- # If set is a contribution in a multibind
- has_attr 'multibind_id', String, :lowerBound => 0
- # Invariant: Only multibinds may have lowerBound 0, all regular Binding must have a producer.
- contains_one_uni 'producer', ProducerDescriptor, :lowerBound => 0
- contains_many_uni 'producer_args', NamedArgument, :lowerBound => 0
- end
-
-
- # A multibinding is a binding other bindings can contribute to.
- #
- # @api public
- class Multibinding < Binding
- has_attr 'id', String
- end
-
- # A container of Binding instances
- # @api public
- #
- class Bindings < AbstractBinding
- contains_many_uni 'bindings', AbstractBinding
- end
-
- # The top level container of bindings can have a name (for error messages, logging, tracing).
- # May be nested.
- # @api public
- #
- class NamedBindings < Bindings
- has_attr 'name', String
- end
-
- # A named layer of bindings having the same priority.
- # @api public
- class NamedLayer < Puppet::Pops::Model::PopsObject
- has_attr 'name', String, :lowerBound => 1
- contains_many_uni 'bindings', NamedBindings
- end
-
- # A list of layers with bindings in descending priority order.
- # @api public
- #
- class LayeredBindings < Puppet::Pops::Model::PopsObject
- contains_many_uni 'layers', NamedLayer
- end
-
- # ContributedBindings is a named container of one or more NamedBindings.
- # The intent is that a bindings producer returns a ContributedBindings that identifies the contributor
- # as opposed to the names of the different sets of bindings. The ContributedBindings name is typically
- # a technical name that indicates their source (a service).
- #
- # When EvaluatingProducerDescriptor is used, it holds a reference to an Expression. That expression
- # should be contained in the programs referenced in the ContributedBindings that contains that producer.
- # While the bindings model will still work if this is not the case, it will not serialize and deserialize
- # correctly.
- #
- # @api public
- #
- class ContributedBindings < NamedLayer
- contains_many_uni 'programs', Puppet::Pops::Model::Program
- end
end
diff --git a/lib/puppet/pops/binder/bindings_model_dumper.rb b/lib/puppet/pops/binder/bindings_model_dumper.rb
index 1abfd0675..98b2049fd 100644
--- a/lib/puppet/pops/binder/bindings_model_dumper.rb
+++ b/lib/puppet/pops/binder/bindings_model_dumper.rb
@@ -1,187 +1,187 @@
# Dumps a Pops::Binder::Bindings model in reverse polish notation; i.e. LISP style
# The intention is to use this for debugging output
# TODO: BAD NAME - A DUMP is a Ruby Serialization
# NOTE: use :break, :indent, :dedent in lists to do just that
#
class Puppet::Pops::Binder::BindingsModelDumper < Puppet::Pops::Model::TreeDumper
Bindings = Puppet::Pops::Binder::Bindings
attr_reader :type_calculator
attr_reader :expression_dumper
def initialize
super
@type_calculator = Puppet::Pops::Types::TypeCalculator.new()
@expression_dumper = Puppet::Pops::Model::ModelTreeDumper.new()
end
def dump_BindingsFactory o
do_dump(o.model)
end
def dump_BindingsBuilder o
do_dump(o.model)
end
def dump_BindingsContainerBuilder o
do_dump(o.model)
end
def dump_NamedLayer o
result = ['named-layer', (o.name.nil? ? '<no-name>': o.name), :indent]
if o.bindings
o.bindings.each do |b|
result << :break
result << do_dump(b)
end
end
result << :dedent
result
end
def dump_Array o
o.collect {|e| do_dump(e) }
end
def dump_ASTArray o
["[]"] + o.children.collect {|x| do_dump(x)}
end
def dump_ASTHash o
["{}"] + o.value.sort_by{|k,v| k.to_s}.collect {|x| [do_dump(x[0]), do_dump(x[1])]}
end
def dump_Integer o
o.to_s
end
# Dump a Ruby String in single quotes unless it is a number.
def dump_String o
"'#{o}'"
end
def dump_NilClass o
"()"
end
def dump_Object o
['dev-error-no-polymorph-dump-for:', o.class.to_s, o.to_s]
end
def is_nop? o
o.nil? || o.is_a?(Model::Nop) || o.is_a?(AST::Nop)
end
def dump_ProducerDescriptor o
result = [o.class.name]
result << expression_dumper.dump(o.transformer) if o.transformer
result
end
def dump_NonCachingProducerDescriptor o
dump_ProducerDescriptor(o) + do_dump(o.producer)
end
def dump_ConstantProducerDescriptor o
['constant', do_dump(o.value)]
end
def dump_EvaluatingProducerDescriptor o
result = dump_ProducerDescriptor(o)
result << expression_dumper.dump(o.expression)
end
def dump_InstanceProducerDescriptor o
# TODO: o.arguments, o. transformer
['instance', o.class_name]
end
def dump_ProducerProducerDescriptor o
# skip the transformer lambda...
result = ['producer-producer', do_dump(o.producer)]
result << expression_dumper.dump(o.transformer) if o.transformer
result
end
def dump_LookupProducerDescriptor o
['lookup', do_dump(o.type), o.name]
end
- def dump_PObjectType o
+ def dump_PAnyType o
type_calculator.string(o)
end
def dump_HashLookupProducerDescriptor o
# TODO: transformer lambda
result = ['hash-lookup', do_dump(o.type), o.name, "[#{do_dump(o.key)}]"]
result << expression_dumper.dump(o.transformer) if o.transformer
result
end
def dump_FirstFoundProducerDescriptor o
# TODO: transformer lambda
['first-found', do_dump(o.producers)]
end
def dump_ArrayMultibindProducerDescriptor o
['multibind-array']
end
def dump_HashMultibindProducerDescriptor o
['multibind-hash']
end
def dump_NamedArgument o
"#{o.name} => #{do_dump(o.value)}"
end
def dump_Binding o
result = ['bind']
result << 'override' if o.override
result << 'abstract' if o.abstract
result.concat([do_dump(o.type), o.name])
result << "(in #{o.multibind_id})" if o.multibind_id
result << ['to', do_dump(o.producer)] + do_dump(o.producer_args)
result
end
def dump_Multibinding o
result = ['multibind', o.id]
result << 'override' if o.override
result << 'abstract' if o.abstract
result.concat([do_dump(o.type), o.name])
result << "(in #{o.multibind_id})" if o.multibind_id
result << ['to', do_dump(o.producer)] + do_dump(o.producer_args)
result
end
def dump_Bindings o
do_dump(o.bindings)
end
def dump_NamedBindings o
result = ['named-bindings', o.name, :indent]
o.bindings.each do |b|
result << :break
result << do_dump(b)
end
result << :dedent
result
end
def dump_LayeredBindings o
result = ['layers', :indent]
o.layers.each do |layer|
result << :break
result << do_dump(layer)
end
result << :dedent
result
end
def dump_ContributedBindings o
['contributed', o.name, do_dump(o.bindings)]
end
end
diff --git a/lib/puppet/pops/binder/bindings_model.rb b/lib/puppet/pops/binder/bindings_model_meta.rb
similarity index 86%
copy from lib/puppet/pops/binder/bindings_model.rb
copy to lib/puppet/pops/binder/bindings_model_meta.rb
index 4d041fca1..a4b874997 100644
--- a/lib/puppet/pops/binder/bindings_model.rb
+++ b/lib/puppet/pops/binder/bindings_model_meta.rb
@@ -1,201 +1,215 @@
require 'rgen/metamodel_builder'
# The Bindings model is a model of Key to Producer mappings (bindings).
# The central concept is that a Bindings is a nested structure of bindings.
# A top level Bindings should be a NamedBindings (the name is used primarily
# in error messages). A Key is a Type/Name combination.
#
# TODO: In this version, references to "any object" uses the class Object,
# but this is only temporary. The intent is to use specific Puppet Objects
# that are typed using the Puppet Type System (to enable serialization).
#
# @see Puppet::Pops::Binder::BindingsFactory The BindingsFactory for more details on how to create model instances.
# @api public
module Puppet::Pops::Binder::Bindings
+ extend RGen::MetamodelBuilder::ModuleExtension
+
+ # This declaration is used to overcome bugs in RGen. What is really wanted is an Opaque Object
+ # type that does not serialize its values, but such a type does not work when recreating the
+ # meta model from a dump.
+ # Instead, after loading the model, the generated code for type validation must be patched
+ #
+ FakeObject = String
+
+ # @abstract
+ # @api public
+ class BindingsModelObject < RGen::MetamodelBuilder::MMBase
+ abstract
+ end
# @abstract
# @api public
#
- class AbstractBinding < Puppet::Pops::Model::PopsObject
+ class AbstractBinding < BindingsModelObject
abstract
end
# An abstract producer
# @abstract
# @api public
#
- class ProducerDescriptor < Puppet::Pops::Model::PopsObject
+ class ProducerDescriptor < BindingsModelObject
abstract
contains_one_uni 'transformer', Puppet::Pops::Model::LambdaExpression
end
-
# All producers are singleton producers unless wrapped in a non caching producer
# where each lookup produces a new instance. It is an error to have a nesting level > 1
# and to nest a NonCachingProducerDescriptor.
#
# @api public
#
class NonCachingProducerDescriptor < ProducerDescriptor
contains_one_uni 'producer', ProducerDescriptor
end
# Produces a constant value (i.e. something of {Puppet::Pops::Types::PDataType PDataType})
# @api public
#
class ConstantProducerDescriptor < ProducerDescriptor
# TODO: This should be a typed Puppet Object
- has_attr 'value', Object
+ has_attr 'value', FakeObject
end
# Produces a value by evaluating a Puppet DSL expression.
# Note that the expression is not contained as it is part of a Puppet::Pops::Model::Program.
# To include the expression in the serialization, the Program it is contained in must be
# contained in the same serialization. This can be achieved by containing it in the
# ContributedBindings that is the top of a BindingsModel produced and given to the Injector.
#
# @api public
#
class EvaluatingProducerDescriptor < ProducerDescriptor
has_one 'expression', Puppet::Pops::Model::Expression
end
# An InstanceProducer creates an instance of the given class
# Arguments are passed to the class' `new` operator in the order they are given.
# @api public
#
class InstanceProducerDescriptor < ProducerDescriptor
# TODO: This should be a typed Puppet Object ??
- has_many_attr 'arguments', Object, :upperBound => -1
+ has_many_attr 'arguments', FakeObject, :upperBound => -1
has_attr 'class_name', String
end
# A ProducerProducerDescriptor, describes that the produced instance is itself a Producer
# that should be used to produce the value.
# @api public
#
class ProducerProducerDescriptor < ProducerDescriptor
contains_one_uni 'producer', ProducerDescriptor, :lowerBound => 1
end
# Produces a value by looking up another key (type/name)
# @api public
#
class LookupProducerDescriptor < ProducerDescriptor
- contains_one_uni 'type', Puppet::Pops::Types::PObjectType
+ contains_one_uni 'type', Puppet::Pops::Types::PAnyType
has_attr 'name', String
end
# Produces a value by looking up another multibound key, and then looking up
# the detail using a detail_key.
# This is used to produce a specific service of a given type (such as a SyntaxChecker for the syntax "json").
# @api public
#
class HashLookupProducerDescriptor < LookupProducerDescriptor
has_attr 'key', String
end
# Produces a value by looking up each producer in turn. The first existing producer wins.
# @api public
#
class FirstFoundProducerDescriptor < ProducerDescriptor
contains_many_uni 'producers', LookupProducerDescriptor
end
# @api public
# @abstract
class MultibindProducerDescriptor < ProducerDescriptor
abstract
end
# Used in a Multibind of Array type unless it has a producer. May explicitly be used as well.
# @api public
#
class ArrayMultibindProducerDescriptor < MultibindProducerDescriptor
end
# Used in a Multibind of Hash type unless it has a producer. May explicitly be used as well.
# @api public
#
class HashMultibindProducerDescriptor < MultibindProducerDescriptor
end
# Plays the role of "Hash[String, Object] entry" but with keys in defined order.
#
# @api public
#
- class NamedArgument < Puppet::Pops::Model::PopsObject
+ class NamedArgument < BindingsModelObject
has_attr 'name', String, :lowerBound => 1
- has_attr 'value', Object, :lowerBound => 1
+ has_attr 'value', FakeObject
end
# Binds a type/name combination to a producer. Optionally marking the binding as being abstract, or being an
# override of another binding. Optionally, the binding defines producer arguments passed to the producer when
# it is created.
#
# @api public
class Binding < AbstractBinding
- contains_one_uni 'type', Puppet::Pops::Types::PObjectType
+ contains_one_uni 'type', Puppet::Pops::Types::PAnyType
has_attr 'name', String
has_attr 'override', Boolean
has_attr 'abstract', Boolean
has_attr 'final', Boolean
# If set is a contribution in a multibind
has_attr 'multibind_id', String, :lowerBound => 0
# Invariant: Only multibinds may have lowerBound 0, all regular Binding must have a producer.
contains_one_uni 'producer', ProducerDescriptor, :lowerBound => 0
contains_many_uni 'producer_args', NamedArgument, :lowerBound => 0
end
# A multibinding is a binding other bindings can contribute to.
#
# @api public
class Multibinding < Binding
has_attr 'id', String
end
# A container of Binding instances
# @api public
#
class Bindings < AbstractBinding
contains_many_uni 'bindings', AbstractBinding
end
# The top level container of bindings can have a name (for error messages, logging, tracing).
# May be nested.
# @api public
#
class NamedBindings < Bindings
has_attr 'name', String
end
# A named layer of bindings having the same priority.
# @api public
- class NamedLayer < Puppet::Pops::Model::PopsObject
+ class NamedLayer < BindingsModelObject
has_attr 'name', String, :lowerBound => 1
contains_many_uni 'bindings', NamedBindings
end
# A list of layers with bindings in descending priority order.
# @api public
#
- class LayeredBindings < Puppet::Pops::Model::PopsObject
+ class LayeredBindings < BindingsModelObject
contains_many_uni 'layers', NamedLayer
end
# ContributedBindings is a named container of one or more NamedBindings.
# The intent is that a bindings producer returns a ContributedBindings that identifies the contributor
# as opposed to the names of the different sets of bindings. The ContributedBindings name is typically
# a technical name that indicates their source (a service).
#
# When EvaluatingProducerDescriptor is used, it holds a reference to an Expression. That expression
# should be contained in the programs referenced in the ContributedBindings that contains that producer.
# While the bindings model will still work if this is not the case, it will not serialize and deserialize
# correctly.
#
# @api public
#
class ContributedBindings < NamedLayer
contains_many_uni 'programs', Puppet::Pops::Model::Program
end
+
end
diff --git a/lib/puppet/pops/binder/injector.rb b/lib/puppet/pops/binder/injector.rb
index 994394975..fb11c3cc1 100644
--- a/lib/puppet/pops/binder/injector.rb
+++ b/lib/puppet/pops/binder/injector.rb
@@ -1,767 +1,767 @@
# The injector is the "lookup service" class
#
# Initialization
# --------------
# The injector is initialized with a configured {Puppet::Pops::Binder::Binder Binder}. The Binder instance contains a resolved set of
# `key => "binding information"` that is used to setup the injector.
#
# Lookup
# ------
# It is possible to lookup either the value, or a producer of the value. The {#lookup} method looks up a value, and the
# {#lookup_producer} looks up a producer.
# Both of these methods can be called with three different signatures; `lookup(key)`, `lookup(type, name)`, and `lookup(name)`,
# with the corresponding calls to obtain a producer; `lookup_producer(key)`, `lookup_producer(type, name)`, and `lookup_producer(name)`.
#
# It is possible to pass a block to {#lookup} and {#lookup_producer}, the block is passed the result of the lookup
# and the result of the block is returned as the value of the lookup. This is useful in order to provide a default value.
#
# @example Lookup with default value
# injector.lookup('favourite_food') {|x| x.nil? ? 'bacon' : x }
#
# Singleton or Not
# ----------------
# The lookup of a value is always based on the lookup of a producer. For *singleton producers* this means that the value is
# determined by the first value lookup. Subsequent lookups via `lookup` or `lookup_producer` will produce the same instance.
#
# *Non singleton producers* will produce a new instance on each request for a value. For constant value producers this
# means that a new deep-clone is produced for mutable objects (but not for immutable objects as this is not needed).
# Custom producers should have non singleton behavior, or if this is not possible ensure that the produced result is
# immutable. (The behavior is undefined if a custom producer hands out a mutable value and that value is then mutated.)
#
# When a custom bound producer capable of producing a series of objects is bound as a singleton, it is the producer
# that is a singleton, not the values it produces. If such a producer is bound as non singleton, each `lookup` will get a new
# producer (hence, typically, restarting the series). However, the producer returned from `lookup_producer` will not
# recreate the producer on each call to `produce`; i.e. each `lookup_producer` returns a producer capable of returning
# a series of objects.
#
# @see Puppet::Pops::Binder::Binder Binder, for details about how to bind keys to producers
# @see Puppet::Pops::Binder::BindingsFactory BindingsFactory, for a convenient way to create a Binder and bindings
#
# Assisted Inject
# ---------------
# The injector supports lookup of instances of classes *even if the requested class is not explicitly bound*.
# This is possible for classes that have a zero argument `initialize` method, or that have a class method called
# `inject` that takes two arguments; `injector`, and `scope`.
# This is useful in ruby logic as a class can then use the given injector to inject details.
# An `inject` class method wins over a zero argument `initialize` in all cases.
#
# @example Using assisted inject
# # Class with assisted inject support
# class Duck
# attr_reader :name, :year_of_birth
#
# def self.inject(injector, scope, binding, *args)
# # lookup default name and year of birth, and use defaults if not present
# name = injector.lookup(scope,'default-duck-name') {|x| x ? x : 'Donald Duck' }
# year_of_birth = injector.lookup(scope,'default-duck-year_of_birth') {|x| x ? x : 1934 }
# self.new(name, year_of_birth)
# end
#
# def initialize(name, year_of_birth)
# @name = name
# @year_of_birth = year_of_birth
# end
# end
#
# injector.lookup(scope, Duck)
# # Produces a Duck named 'Donald Duck' or named after the binding 'default-duck-name' (and with similar treatment of
# # year_of_birth).
# @see Puppet::Pops::Binder::Producers::AssistedInjectProducer AssistedInjectProducer, for more details on assisted injection
#
# Access to key factory and type calculator
# -----------------------------------------
# It is important to use the same key factory and type calculator as the binder. It is therefore possible to obtain
# these with the methods {#key_factory}, and {#type_calculator}.
#
# Special support for producers
# -----------------------------
# There is one method specially designed for producers. The {#get_contributions} method returns an array of all contributions
# to a given *contributions key*. This key is obtained from the {#key_factory} for a given multibinding. The returned set of
# contributed bindings is sorted in descending precedence order. Any conflict resolution, merging, etc. is performed by the multibinding
# producer configured for a multibinding.
#
# @api public
#
class Puppet::Pops::Binder::Injector
Producers = Puppet::Pops::Binder::Producers
def self.create_from_model(layered_bindings_model)
self.new(Puppet::Pops::Binder::Binder.new(layered_bindings_model))
end
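# Creates an injector with a single bindings layer created with the given name, where each
# key/value pair in the given hash is bound as a named binding of the key to the value.
# The call below is an illustrative sketch (the layer name and the entry are assumed):
#
# @example Creating an injector from a hash
#   injector = Puppet::Pops::Binder::Injector.create_from_hash('mysettings', 'favourite_food' => 'bacon')
#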
def self.create_from_hash(name, key_value_hash)
factory = Puppet::Pops::Binder::BindingsFactory
named_bindings = factory.named_bindings(name) { key_value_hash.each {|k,v| bind.name(k).to(v) }}
layered_bindings = factory.layered_bindings(factory.named_layer(name+'-layer',named_bindings.model))
self.new(Puppet::Pops::Binder::Binder.new(layered_bindings))
end
# Creates an injector with a single bindings layer created with the given name, and the bindings
# produced by the given block. The block is evaluated with self bound to a BindingsContainerBuilder.
#
# @example
# Injector.create('mysettings') do
# bind('name').to(42)
# end
#
# @api public
#
def self.create(name, &block)
factory = Puppet::Pops::Binder::BindingsFactory
layered_bindings = factory.layered_bindings(factory.named_layer(name+'-layer',factory.named_bindings(name, &block).model))
self.new(Puppet::Pops::Binder::Binder.new(layered_bindings))
end
# Creates an overriding injector with a single bindings layer
# created with the given name, and the bindings produced by the given block.
# The block is evaluated with self bound to a BindingsContainerBuilder.
#
# @example
# an_injector.override('myoverrides') do
# bind('name').to(43)
# end
#
# @api public
#
def override(name, &block)
factory = Puppet::Pops::Binder::BindingsFactory
layered_bindings = factory.layered_bindings(factory.named_layer(name+'-layer',factory.named_bindings(name, &block).model))
self.class.new(Puppet::Pops::Binder::Binder.new(layered_bindings, @impl.binder))
end
# Creates an overriding injector with bindings from a bindings model (a LayeredBindings) which
# may consists of multiple layers of bindings.
#
# @api public
#
def override_with_model(layered_bindings)
unless layered_bindings.is_a?(Puppet::Pops::Binder::Bindings::LayeredBindings)
raise ArgumentError, "Expected a LayeredBindings model, got '#{bindings_model.class}'"
end
self.class.new(Puppet::Pops::Binder::Binder.new(layered_bindings, @impl.binder))
end
# Creates an overriding injector with a single bindings layer
# created with the given name, and the bindings given in the key_value_hash
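# The call below is an illustrative sketch (the injector and the overridden entry are assumed):
#
# @example Overriding a single binding with a hash
#   overridden = an_injector.override_with_hash('myoverrides', 'favourite_food' => 'tofu')
#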
# @api public
#
def override_with_hash(name, key_value_hash)
factory = Puppet::Pops::Binder::BindingsFactory
named_bindings = factory.named_bindings(name) { key_value_hash.each {|k,v| bind.name(k).to(v) }}
layered_bindings = factory.layered_bindings(factory.named_layer(name+'-layer',named_bindings.model))
self.class.new(Puppet::Pops::Binder::Binder.new(layered_bindings, @impl.binder))
end
# An Injector is initialized with a configured {Puppet::Pops::Binder::Binder Binder}.
#
# @param configured_binder [Puppet::Pops::Binder::Binder,nil] The configured binder containing effective bindings. A given value
# of nil creates an injector that returns or yields nil on all lookup requests.
# @raise ArgumentError if the given binder is not fully configured
#
# @api public
#
def initialize(configured_binder, parent_injector = nil)
if configured_binder.nil?
@impl = Private::NullInjectorImpl.new()
else
@impl = Private::InjectorImpl.new(configured_binder, parent_injector)
end
end
# The KeyFactory used to produce keys in this injector.
# The factory is shared with the Binder to ensure consistent translation to keys.
# A compatible type calculator can also be obtained from the key factory.
# @return [Puppet::Pops::Binder::KeyFactory] the key factory in use
#
# @api public
#
def key_factory()
@impl.key_factory
end
# Returns the TypeCalculator in use for keys. The same calculator (as used for keys) should be used if there is a need
# to check type conformance, or infer the type of Ruby objects.
#
# @return [Puppet::Pops::Types::TypeCalculator] the type calculator that is in use for keys
# @api public
#
def type_calculator()
@impl.type_calculator()
end
# Lookup (a.k.a "inject") of a value given a key.
# The lookup may be called with different parameters. This method is a convenience method that
# dispatches to one of #lookup_key or #lookup_type depending on the arguments. It also provides
# the ability to use an optional block that is called with the looked up value, or scope and value if the
# block takes two parameters. This is useful to provide a default value, or to perform other transformations or
# calculations based on the result of the lookup.
#
# @overload lookup(scope, key)
# (see #lookup_key)
# @param scope [Puppet::Parser::Scope] the scope to use for evaluation
# @param key [Object] an opaque object being the full key
#
# @overload lookup(scope, type, name = '')
# (see #lookup_type)
# @param scope [Puppet::Parser::Scope] the scope to use for evaluation
- # @param type [Puppet::Pops::Types::PObjectType] the type of what to lookup
+ # @param type [Puppet::Pops::Types::PAnyType] the type of what to lookup
# @param name [String] the name to use, defaults to empty string (for unnamed)
#
# @overload lookup(scope, name)
# Lookup of Data type with given name.
# @see #lookup_type
# @param scope [Puppet::Parser::Scope] the scope to use for evaluation
# @param name [String] the Data/name to lookup
#
# @yield [value] passes the looked up value to an optional block and returns what this block returns
# @yield [scope, value] passes scope and value to the block and returns what this block returns
# @yieldparam scope [Puppet::Parser::Scope] the scope given to lookup
# @yieldparam value [Object, nil] the looked up value or nil if nothing was found
#
# @raise [ArgumentError] if the block has an arity that is not 1 or 2
#
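# The calls below are an illustrative sketch of the three signatures (the scope, the shown names,
# and their bindings are assumed):
#
# @example The three ways of calling lookup
#   injector.lookup(scope, 'favourite_food')                                    # Data entry, by name
#   injector.lookup(scope, Puppet::Pops::Types::TypeFactory.string(), 'motd')   # by type/name
#   injector.lookup(scope, injector.key_factory.data_key('favourite_food'))     # by opaque key
#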
# @api public
#
def lookup(scope, *args, &block)
@impl.lookup(scope, *args, &block)
end
# Looks up a (typesafe) value based on a type/name combination.
# Creates a key for the type/name combination using a KeyFactory. Specialization of the Data type are transformed
# to a Data key, and the result is type checked to conform with the given key.
#
- # @param type [Puppet::Pops::Types::PObjectType] the type to lookup as defined by Puppet::Pops::Types::TypeFactory
+ # @param type [Puppet::Pops::Types::PAnyType] the type to lookup as defined by Puppet::Pops::Types::TypeFactory
# @param name [String] the (optional for non `Data` types) name of the entry to lookup.
# The name may be an empty String (the default), but not nil. The name is required for lookup for subtypes of
# `Data`.
# @return [Object, nil] the looked up bound object, or nil if not found (type conformance with given type is guaranteed)
# @raise [ArgumentError] if the produced value does not conform with the given type
#
# @api public
#
def lookup_type(scope, type, name='')
@impl.lookup_type(scope, type, name)
end
# Looks up the key and returns the entry, or nil if no entry is found.
# Produced type is checked for type conformance with its binding, but not with the lookup key.
# (This is because all subtypes of PDataType are looked up using a key based on PDataType.)
# Use the Puppet::Pops::Types::TypeCalculator#instance? method to check for conformance of the result
# if this is wanted, or use #lookup_type.
#
# @param key [Object] lookup of key as produced by the key factory
# @return [Object, nil] produced value of type that conforms with bound type (type conformance with key not guaranteed).
# @raise [ArgumentError] if the produced value does not conform with the bound type
#
# @api public
#
def lookup_key(scope, key)
@impl.lookup_key(scope, key)
end
# Lookup (a.k.a "inject") producer of a value given a key.
# The producer lookup may be called with different parameters. This method is a convenience method that
# dispatches to one of #lookup_producer_key or #lookup_producer_type depending on the arguments. It also provides
# the ability to use an optional block that is called with the looked up producer, or scope and producer if the
# block takes two parameters. This is useful to provide a default value, call a custom producer method,
# or perform other transformations or calculations based on the result of the lookup.
#
# @overload lookup_producer(scope, key)
# (see #lookup_producer_key)
# @param scope [Puppet::Parser::Scope] the scope to use for evaluation
# @param key [Object] an opaque object being the full key
#
# @overload lookup_producer(scope, type, name = '')
# (see #lookup_type)
# @param scope [Puppet::Parser::Scope] the scope to use for evaluation
- # @param type [Puppet::Pops::Types::PObjectType], the type of what to lookup
+ # @param type [Puppet::Pops::Types::PAnyType], the type of what to lookup
# @param name [String], the name to use, defaults to empty string (for unnamed)
#
# @overload lookup_producer(scope, name)
# Lookup of Data type with given name.
# @see #lookup_type
# @param scope [Puppet::Parser::Scope] the scope to use for evaluation
# @param name [String], the Data/name to lookup
#
# @return [Puppet::Pops::Binder::Producers::Producer, Object, nil] a producer, or what the optional block returns
#
# @yield [producer] passes the looked up producer to an optional block and returns what this block returns
# @yield [scope, producer] passes scope and producer to the block and returns what this block returns
# @yieldparam producer [Puppet::Pops::Binder::Producers::Producer, nil] the looked up producer or nil if nothing was bound
# @yieldparam scope [Puppet::Parser::Scope] the scope given to lookup
#
# @raise [ArgumentError] if the block has an arity that is not 1 or 2
#
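# The example below is an illustrative sketch (the scope and a binding named 'random_number' are assumed):
#
# @example Looking up a producer and producing a value
#   producer = injector.lookup_producer(scope, 'random_number')
#   value = producer.produce(scope) unless producer.nil?
#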
# @api public
#
def lookup_producer(scope, *args, &block)
@impl.lookup_producer(scope, *args, &block)
end
# Looks up a Producer given an opaque binder key.
# @return [Puppet::Pops::Binder::Producers::Producer, nil] the bound producer, or nil if no such producer was found.
#
# @api public
#
def lookup_producer_key(scope, key)
@impl.lookup_producer_key(scope, key)
end
# Looks up a Producer given a type/name key.
# @note The result is not type checked (it cannot be until the producer has produced an instance).
# @return [Puppet::Pops::Binder::Producers::Producer, nil] the bound producer, or nil if no such producer was found
#
# @api public
#
def lookup_producer_type(scope, type, name='')
@impl.lookup_producer_type(scope, type, name)
end
# Returns the contributions to a multibind given its contribution key (as produced by the KeyFactory).
# This method is typically used by multibind value producers, but may be used for introspection of the injector's state.
#
# @param scope [Puppet::Parser::Scope] the scope to use
# @param contributions_key [Object] Opaque key as produced by KeyFactory as the contributions key for a multibinding
# @return [Array<Puppet::Pops::Binder::InjectorEntry>] the contributions sorted in deecending order of precedence
#
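# The example below is an illustrative sketch (a multibind with id 'plugins' is assumed):
#
# @example Getting the contributions for a multibind
#   contributions_key = injector.key_factory.multibind_contributions('plugins')
#   injector.get_contributions(scope, contributions_key)
#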
# @api public
#
def get_contributions(scope, contributions_key)
@impl.get_contributions(scope, contributions_key)
end
# Returns an Injector that returns (or yields) nil on all lookups, and produces an empty structure for contributions
# This method is intended for testing purposes.
#
def self.null_injector
self.new(nil)
end
# The implementation of the Injector is private.
# @see Puppet::Pops::Binder::Injector The public API this module implements.
# @api private
#
module Private
# This is a mocking "Null" implementation of Injector. It never finds anything
# @api private
class NullInjectorImpl
attr_reader :entries
attr_reader :key_factory
attr_reader :type_calculator
def initialize
@entries = []
@key_factory = Puppet::Pops::Binder::KeyFactory.new()
@type_calculator = @key_factory.type_calculator
end
def lookup(scope, *args, &block)
raise ArgumentError, "lookup should be called with two or three arguments, got: #{args.size()+1}" unless args.size.between?(1,2)
# call block with result if given
if block
case block.arity
when 1
- block.call(:undef)
+ block.call(nil)
when 2
- block.call(scope, :undef)
+ block.call(scope, nil)
else
raise ArgumentError, "The block should have arity 1 or 2"
end
else
nil
end
end
# @api private
def binder
nil
end
# @api private
def lookup_key(scope, key)
nil
end
# @api private
def lookup_producer(scope, *args, &block)
lookup(scope, *args, &block)
end
# @api private
def lookup_producer_key(scope, key)
nil
end
# @api private
def lookup_producer_type(scope, type, name='')
nil
end
def get_contributions(scope, contributions_key)
[]
end
end
# @api private
#
class InjectorImpl
# Hash of key => InjectorEntry
# @api private
#
attr_reader :entries
attr_reader :key_factory
attr_reader :type_calculator
attr_reader :binder
def initialize(configured_binder, parent_injector = nil)
@binder = configured_binder
@parent = parent_injector
# TODO: Different error message
raise ArgumentError, "Given Binder is not configured" unless configured_binder #&& configured_binder.configured?()
@entries = configured_binder.injector_entries()
# It is essential that the injector uses the same key factory as the binder since keys must be
# represented the same (but still opaque) way.
#
@key_factory = configured_binder.key_factory()
@type_calculator = key_factory.type_calculator()
@@transform_visitor ||= Puppet::Pops::Visitor.new(nil,"transform", 2, 2)
@recursion_lock = [ ]
end
# @api private
def lookup(scope, *args, &block)
raise ArgumentError, "lookup should be called with two or three arguments, got: #{args.size()+1}" unless args.size.between?(1,2)
val = case args[ 0 ]
- when Puppet::Pops::Types::PObjectType
+ when Puppet::Pops::Types::PAnyType
lookup_type(scope, *args)
when String
raise ArgumentError, "lookup of name should only pass the name" unless args.size == 1
lookup_key(scope, key_factory.data_key(args[ 0 ]))
else
raise ArgumentError, 'lookup using a key should only pass a single key' unless args.size == 1
lookup_key(scope, args[ 0 ])
end
# call block with result if given
if block
case block.arity
when 1
block.call(val)
when 2
block.call(scope, val)
else
raise ArgumentError, "The block should have arity 1 or 2"
end
else
val
end
end
# Produces a key for a type/name combination.
# @api private
def named_key(type, name)
key_factory.named_key(type, name)
end
# Produces a key for a PDataType/name combination
# @api private
def data_key(name)
key_factory.data_key(name)
end
# @api private
def lookup_type(scope, type, name='')
val = lookup_key(scope, named_key(type, name))
return nil if val.nil?
unless key_factory.type_calculator.instance?(type, val)
raise ArgumentError, "Type error: incompatible type, #{type_error_detail(type, val)}"
end
val
end
# @api private
def type_error_detail(expected, actual)
actual_t = type_calculator.infer(actual)
"expected: #{type_calculator.string(expected)}, got: #{type_calculator.string(actual_t)}"
end
# @api private
def lookup_key(scope, key)
if @recursion_lock.include?(key)
raise ArgumentError, "Lookup loop detected for key: #{key}"
end
begin
@recursion_lock.push(key)
case entry = get_entry(key)
when NilClass
@parent ? @parent.lookup_key(scope, key) : nil
when Puppet::Pops::Binder::InjectorEntry
val = produce(scope, entry)
return nil if val.nil?
unless key_factory.type_calculator.instance?(entry.binding.type, val)
raise "Type error: incompatible type returned by producer, #{type_error_detail(entry.binding.type, val)}"
end
val
when Producers::AssistedInjectProducer
entry.produce(scope)
else
# internal, direct entries
entry
end
ensure
@recursion_lock.pop()
end
end
# Should be used to get entries as it converts missing entries to NotFound entries or AssistedInject entries
#
# @api private
def get_entry(key)
case entry = entries[ key ]
when NilClass
# not found, is this an assisted inject?
if clazz = assistable_injected_class(key)
entry = Producers::AssistedInjectProducer.new(self, clazz)
entries[ key ] = entry
else
entries[ key ] = NotFound.new()
entry = nil
end
when NotFound
entry = nil
end
entry
end
# Returns contributions to a multibind in precedence order; highest first.
# Returns an Array of the form [ [key, entry], [key, entry]] where the key is intended to be used to look up the value
# (or a producer) for that entry.
# @api private
def get_contributions(scope, contributions_key)
result = {}
return [] unless contributions = lookup_key(scope, contributions_key)
contributions.each { |k| result[k] = get_entry(k) }
result.sort {|a, b| a[0] <=> b[0] }
#result.sort_by {|key, entry| entry }
end
# Produces an injectable class given a key, or nil if key does not represent an injectable class
# @api private
#
def assistable_injected_class(key)
kt = key_factory.get_type(key)
- return nil unless kt.is_a?(Puppet::Pops::Types::PRubyType) && !key_factory.is_named?(key)
+ return nil unless kt.is_a?(Puppet::Pops::Types::PRuntimeType) && kt.runtime == :ruby && !key_factory.is_named?(key)
type_calculator.injectable_class(kt)
end
def lookup_producer(scope, *args, &block)
raise ArgumentError, "lookup_producer should be called with two or three arguments, got: #{args.size()+1}" unless args.size <= 2
p = case args[ 0 ]
- when Puppet::Pops::Types::PObjectType
+ when Puppet::Pops::Types::PAnyType
lookup_producer_type(scope, *args)
when String
raise ArgumentError, "lookup_producer of name should only pass the name" unless args.size == 1
lookup_producer_key(scope, key_factory.data_key(args[ 0 ]))
else
raise ArgumentError, "lookup_producer using a key should only pass a single key" unless args.size == 1
lookup_producer_key(scope, args[ 0 ])
end
# call block with result if given
if block
case block.arity
when 1
block.call(p)
when 2
block.call(scope, p)
else
raise ArgumentError, "The block should have arity 1 or 2"
end
else
p
end
end
# @api private
def lookup_producer_key(scope, key)
if @recursion_lock.include?(key)
raise ArgumentError, "Lookup loop detected for key: #{key}"
end
begin
@recursion_lock.push(key)
producer(scope, get_entry(key), :multiple_use)
ensure
@recursion_lock.pop()
end
end
# @api private
def lookup_producer_type(scope, type, name='')
lookup_producer_key(scope, named_key(type, name))
end
# Returns the producer for the entry
# @return [Puppet::Pops::Binder::Producers::Producer] the entry's producer.
#
# @api private
#
def producer(scope, entry, use)
return nil unless entry # not found
return entry.producer(scope) if entry.is_a?(Producers::AssistedInjectProducer)
unless entry.cached_producer
entry.cached_producer = transform(entry.binding.producer, scope, entry)
end
unless entry.cached_producer
raise ArgumentError, "Injector entry without a producer #{format_binding(entry.binding)}"
end
entry.cached_producer.producer(scope)
end
# @api private
def transform(producer_descriptor, scope, entry)
- @@transform_visitor.visit_this(self, producer_descriptor, scope, entry)
+ @@transform_visitor.visit_this_2(self, producer_descriptor, scope, entry)
end
# Returns the produced instance
# @return [Object] the produced instance
# @api private
#
def produce(scope, entry)
return nil unless entry # not found
producer(scope, entry, :single_use).produce(scope)
end
# @api private
def named_arguments_to_hash(named_args)
nb = named_args.nil? ? [] : named_args
result = {}
nb.each {|arg| result[ :"#{arg.name}" ] = arg.value }
result
end
# @api private
def merge_producer_options(binding, options)
named_arguments_to_hash(binding.producer_args).merge(options)
end
# @api private
def format_binding(b)
Puppet::Pops::Binder::Binder.format_binding(b)
end
# Handles a missing producer (which is valid for a Multibinding where one is selected automatically)
# @api private
#
def transform_NilClass(descriptor, scope, entry)
unless entry.binding.is_a?(Puppet::Pops::Binder::Bindings::Multibinding)
raise ArgumentError, "Binding without producer detected, #{format_binding(entry.binding)}"
end
case entry.binding.type
when Puppet::Pops::Types::PArrayType
transform(Puppet::Pops::Binder::Bindings::ArrayMultibindProducerDescriptor.new(), scope, entry)
when Puppet::Pops::Types::PHashType
transform(Puppet::Pops::Binder::Bindings::HashMultibindProducerDescriptor.new(), scope, entry)
else
raise ArgumentError, "Unsupported multibind type, must be an array or hash type, #{format_binding(entry.binding)}"
end
end
# @api private
def transform_ArrayMultibindProducerDescriptor(descriptor, scope, entry)
make_producer(Producers::ArrayMultibindProducer, descriptor, scope, entry, named_arguments_to_hash(entry.binding.producer_args))
end
# @api private
def transform_HashMultibindProducerDescriptor(descriptor, scope, entry)
make_producer(Producers::HashMultibindProducer, descriptor, scope, entry, named_arguments_to_hash(entry.binding.producer_args))
end
# @api private
def transform_ConstantProducerDescriptor(descriptor, scope, entry)
producer_class = singleton?(descriptor) ? Producers::SingletonProducer : Producers::DeepCloningProducer
producer_class.new(self, entry.binding, scope, merge_producer_options(entry.binding, {:value => descriptor.value}))
end
# @api private
def transform_InstanceProducerDescriptor(descriptor, scope, entry)
make_producer(Producers::InstantiatingProducer, descriptor, scope, entry,
merge_producer_options(entry.binding, {:class_name => descriptor.class_name, :init_args => descriptor.arguments}))
end
# @api private
def transform_EvaluatingProducerDescriptor(descriptor, scope, entry)
make_producer(Producers::EvaluatingProducer, descriptor, scope, entry,
merge_producer_options(entry.binding, {:expression => descriptor.expression}))
end
# @api private
def make_producer(clazz, descriptor, scope, entry, options)
singleton_wrapped(descriptor, scope, entry, clazz.new(self, entry.binding, scope, options))
end
# @api private
def singleton_wrapped(descriptor, scope, entry, producer)
return producer unless singleton?(descriptor)
Producers::SingletonProducer.new(self, entry.binding, scope,
merge_producer_options(entry.binding, {:value => producer.produce(scope)}))
end
# @api private
def transform_ProducerProducerDescriptor(descriptor, scope, entry)
p = transform(descriptor.producer, scope, entry)
clazz = singleton?(descriptor) ? Producers::SingletonProducerProducer : Producers::ProducerProducer
clazz.new(self, entry.binding, scope, merge_producer_options(entry.binding,
merge_producer_options(entry.binding, { :producer_producer => p })))
end
# @api private
def transform_LookupProducerDescriptor(descriptor, scope, entry)
make_producer(Producers::LookupProducer, descriptor, scope, entry,
merge_producer_options(entry.binding, {:type => descriptor.type, :name => descriptor.name}))
end
# @api private
def transform_HashLookupProducerDescriptor(descriptor, scope, entry)
make_producer(Producers::LookupKeyProducer, descriptor, scope, entry,
merge_producer_options(entry.binding, {:type => descriptor.type, :name => descriptor.name, :key => descriptor.key}))
end
# @api private
def transform_NonCachingProducerDescriptor(descriptor, scope, entry)
# simply delegates to the wrapped producer
transform(descriptor.producer, scope, entry)
end
# @api private
def transform_FirstFoundProducerDescriptor(descriptor, scope, entry)
make_producer(Producers::FirstFoundProducer, descriptor, scope, entry,
merge_producer_options(entry.binding, {:producers => descriptor.producers.collect {|p| transform(p, scope, entry) }}))
end
# @api private
def singleton?(descriptor)
! descriptor.eContainer().is_a?(Puppet::Pops::Binder::Bindings::NonCachingProducerDescriptor)
end
# Special marker class used in entries
# @api private
class NotFound
end
end
end
end
diff --git a/lib/puppet/pops/binder/key_factory.rb b/lib/puppet/pops/binder/key_factory.rb
index 0b45d4f02..3dccd5184 100644
--- a/lib/puppet/pops/binder/key_factory.rb
+++ b/lib/puppet/pops/binder/key_factory.rb
@@ -1,67 +1,67 @@
# The KeyFactory is responsible for creating keys used for lookup of bindings.
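# The calls below are an illustrative sketch of creating and inspecting a key:
#
# @example Creating a key for a named piece of Data
#   key_factory = Puppet::Pops::Binder::KeyFactory.new
#   key = key_factory.data_key('favourite_food')
#   key_factory.is_data?(key)    # => true
#   key_factory.is_named?(key)   # => true
#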
# @api public
#
class Puppet::Pops::Binder::KeyFactory
attr_reader :type_calculator
# @api public
def initialize(type_calculator = Puppet::Pops::Types::TypeCalculator.new())
@type_calculator = type_calculator
end
# @api public
def binding_key(binding)
named_key(binding.type, binding.name)
end
# @api public
def named_key(type, name)
[(@type_calculator.assignable?(@type_calculator.data, type) ? @type_calculator.data : type), name]
end
# @api public
def data_key(name)
[@type_calculator.data, name]
end
# @api public
def is_contributions_key?(s)
return false unless s.is_a?(String)
s.start_with?('mc_')
end
# @api public
def multibind_contributions(multibind_id)
"mc_#{multibind_id}"
end
# @api public
def multibind_contribution_key_to_id(contributions_key)
# removes the leading "mc_" from the key to get the multibind_id
contributions_key[3..-1]
end
# @api public
def is_named?(key)
key.is_a?(Array) && key[1] && !key[1].empty?
end
# @api public
def is_data?(key)
- return false unless key.is_a?(Array) && key[0].is_a?(Puppet::Pops::Types::PObjectType)
+ return false unless key.is_a?(Array) && key[0].is_a?(Puppet::Pops::Types::PAnyType)
type_calculator.assignable?(type_calculator.data(), key[0])
end
# @api public
def is_ruby?(key)
- return key.is_a?(Array) && key[0].is_a?(Puppet::Pops::Types::PRubyType)
+ key.is_a?(Array) && key[0].is_a?(Puppet::Pops::Types::PRuntimeType) && key[0].runtime == :ruby
end
# Returns the type of the key
# @api public
#
def get_type(key)
return nil unless key.is_a?(Array)
key[0]
end
end
diff --git a/lib/puppet/pops/binder/lookup.rb b/lib/puppet/pops/binder/lookup.rb
index d44d5269c..07980ce62 100644
--- a/lib/puppet/pops/binder/lookup.rb
+++ b/lib/puppet/pops/binder/lookup.rb
@@ -1,191 +1,199 @@
# This class is the backing implementation of the Puppet function 'lookup'.
# See puppet/parser/functions/lookup.rb for documentation.
#
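# The calls below are an illustrative sketch of how the function's arguments are parsed into options:
#
# @example Parsing lookup arguments
#   Puppet::Pops::Binder::Lookup.parse_lookup_args(['apache::port'])
#   # => {:name => 'apache::port', :pblock => nil}
#   Puppet::Pops::Binder::Lookup.parse_lookup_args(['apache::port', 'Integer', 80])
#   # => {:name => 'apache::port', :type => 'Integer', :default => 80, :pblock => nil}
#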
class Puppet::Pops::Binder::Lookup
def self.parse_lookup_args(args)
options = {}
pblock = if args[-1].respond_to?(:puppet_lambda)
args.pop
end
case args.size
when 1
# name, or all options
if args[ 0 ].is_a?(Hash)
options = to_symbolic_hash(args[ 0 ])
else
options[ :name ] = args[ 0 ]
end
when 2
# name and type, or name and options
if args[ 1 ].is_a?(Hash)
options = to_symbolic_hash(args[ 1 ])
options[:name] = args[ 0 ] # silently overwrite option with given name
else
options[:name] = args[ 0 ]
options[:type] = args[ 1 ]
end
when 3
# name, type, default (no options)
options[ :name ] = args[ 0 ]
options[ :type ] = args[ 1 ]
options[ :default ] = args[ 2 ]
else
raise Puppet::ParseError, "The lookup function accepts 1-3 arguments, got #{args.size}"
end
options[:pblock] = pblock
options
end
def self.to_symbolic_hash(input)
names = [:name, :type, :default, :accept_undef, :extra, :override]
options = {}
names.each {|n| options[n] = undef_as_nil(input[n.to_s] || input[n]) }
options
end
def self.type_mismatch(type_calculator, expected, got)
"has wrong type, expected #{type_calculator.string(expected)}, got #{type_calculator.string(got)}"
end
def self.fail(msg)
raise Puppet::ParseError, "Function lookup() " + msg
end
def self.fail_lookup(names)
name_part = if names.size == 1
"the name '#{names[0]}'"
else
"any of the names ['" + names.join(', ') + "']"
end
fail("did not find a value for #{name_part}")
end
def self.validate_options(options, type_calculator)
type_parser = Puppet::Pops::Types::TypeParser.new
name_type = type_parser.parse('Variant[Array[String], String]')
if is_nil_or_undef?(options[:name]) || options[:name].is_a?(Array) && options[:name].empty?
fail ("requires a name, or array of names. Got nothing to lookup.")
end
t = type_calculator.infer(options[:name])
if ! type_calculator.assignable?(name_type, t)
fail("given 'name' argument, #{type_mismatch(type_calculator, options[:name], t)}")
end
# unless a type is already given (future case), parse the type (or default 'Data'), fails if invalid type is given
- unless options[:type].is_a?(Puppet::Pops::Types::PAbstractType)
+ unless options[:type].is_a?(Puppet::Pops::Types::PAnyType)
options[:type] = type_parser.parse(options[:type] || 'Data')
end
# default value must comply with the given type
if options[:default]
t = type_calculator.infer(options[:default])
if ! type_calculator.assignable?(options[:type], t)
fail("'default' value #{type_mismatch(type_calculator, options[:type], t)}")
end
end
if options[:extra] && !options[:extra].is_a?(Hash)
# do not perform inference here, it is enough to know that it is not a hash
fail("'extra' value must be a Hash, got #{options[:extra].class}")
end
options[:extra] = {} unless options[:extra]
if options[:override] && !options[:override].is_a?(Hash)
# do not perform inference here, it is enough to know that it is not a hash
fail("'override' value must be a Hash, got #{options[:extra].class}")
end
options[:override] = {} unless options[:override]
end
def self.nil_as_undef(x)
x.nil? ? :undef : x
end
def self.undef_as_nil(x)
is_nil_or_undef?(x) ? nil : x
end
def self.is_nil_or_undef?(x)
x.nil? || x == :undef
end
# This is used as a marker - a value that cannot (at least not easily) be found by mistake in
# hiera data.
#
class PrivateNotFoundMarker; end
def self.search_for(scope, type, name, options)
# search in order, override, injector, hiera, then extra
if !(result = options[:override][name]).nil?
result
elsif !(result = scope.compiler.injector.lookup(scope, type, name)).nil?
result
else
result = scope.function_hiera([name, PrivateNotFoundMarker])
if !result.nil? && result != PrivateNotFoundMarker
result
else
options[:extra][name]
end
end
end
# This is the method called from the puppet/parser/functions/lookup.rb
# @param args [Array] array following the puppet function call conventions
def self.lookup(scope, args)
type_calculator = Puppet::Pops::Types::TypeCalculator.new
options = parse_lookup_args(args)
validate_options(options, type_calculator)
names = [options[:name]].flatten
type = options[:type]
result_with_name = names.reduce([]) do |memo, name|
break memo if !memo[1].nil?
[name, search_for(scope, type, name, options)]
end
result = if result_with_name[1].nil?
# not found, use default (which may be nil), the default is already type checked
options[:default]
else
# injector.lookup is type-safe already do no need to type check the result
result_with_name[1]
end
+ # If a block is given it is called with :undef converted to nil, since the lookup function
+ # is available from 3x with --binder turned on, while the block is always evaluated with 4x semantics.
+ # TODO PUPPET4: Simply pass the value
+ #
result = if pblock = options[:pblock]
result2 = case pblock.parameter_count
when 1
- pblock.call(scope, nil_as_undef(result))
+ pblock.call(scope, undef_as_nil(result))
when 2
- pblock.call(scope, result_with_name[ 0 ], nil_as_undef(result))
+ pblock.call(scope, result_with_name[ 0 ], undef_as_nil(result))
else
- pblock.call(scope, result_with_name[ 0 ], nil_as_undef(result), nil_as_undef(options[ :default ]))
+ pblock.call(scope, result_with_name[ 0 ], undef_as_nil(result), undef_as_nil(options[ :default ]))
end
- # if the given result was returned, there is not need to type-check it again
+ # if the given result was returned, there is no need to type-check it again
if !result2.equal?(result)
t = type_calculator.infer(undef_as_nil(result2))
if !type_calculator.assignable?(type, t)
fail "the value produced by the given code block #{type_mismatch(type_calculator, type, t)}"
end
end
result2
else
result
end
# Finally, the result if nil must be acceptable or an error is raised
if is_nil_or_undef?(result) && !options[:accept_undef]
fail_lookup(names)
else
- nil_as_undef(result)
+ # Since the function may be used without the future parser being in effect, nil is not handled in a good
+ # way, and should instead be turned into :undef.
+ # TODO PUPPET4: Simply return the result
+ #
+ Puppet[:parser] == 'future' ? result : nil_as_undef(result)
end
end
end
diff --git a/lib/puppet/pops/binder/producers.rb b/lib/puppet/pops/binder/producers.rb
index 98016f144..8302ebf4c 100644
--- a/lib/puppet/pops/binder/producers.rb
+++ b/lib/puppet/pops/binder/producers.rb
@@ -1,829 +1,826 @@
# This module contains the various producers used by Puppet Bindings.
# The main (abstract) class is {Puppet::Pops::Binder::Producers::Producer} which documents the
# Producer API and serves as a base class for all other producers.
# It is required that custom producers inherit from this producer (directly or indirectly).
#
# The selection of a Producer is typically performed by the Injector when it configures itself
# from a Bindings model where a {Puppet::Pops::Binder::Bindings::ProducerDescriptor} describes
# which producer to use. The configuration uses this to create the concrete producer.
# It is possible to describe that a particular producer class is to be used, and also to describe that
# a custom producer (derived from Producer) should be used. This is available for both regular
# bindings as well as multi-bindings.
#
#
# @api public
#
module Puppet::Pops::Binder::Producers
# Producer is an abstract base class representing the base contract for a bound producer.
# Typically, when a lookup is performed it is the value that is returned (via a producer), but
# it is also possible to lookup the producer, and ask it to produce the value (the producer may
# return a series of values, which makes this especially useful).
#
# When looking up a producer, it is important to only use the API of the Producer class
# unless it is known that a particular custom producer class has been bound.
#
# Custom Producers
# ----------------
# The intent is that this class is derived for custom producers that require additional
# options/arguments when producing an instance. Such a custom producer may raise an error if called
# with too few arguments, or may implement specific `produce` methods and always raise an
# error on #produce indicating that this producer requires custom calls and that it can not
# be used as an implicit producer.
#
# Features of Producer
# --------------------
# The Producer class is abstract, but offers the ability to transform the produced result
# by passing the option `:transformer` which should be a Puppet Lambda Expression taking one argument
# and producing the transformed (wanted) result.
#
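# @example Using a transformer Proc (illustrative; the surrounding injector, binding, and scope are assumed)
#   # A Proc transformer must accept two arguments, scope and value
#   upcase = Proc.new { |scope, value| value.upcase }
#   producer = Puppet::Pops::Binder::Producers::SingletonProducer.new(
#     injector, binding, scope, :value => 'hello', :transformer => upcase)
#   producer.produce(scope)   #=> 'HELLO'
#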
# @abstract
# @api public
#
class Producer
# A Puppet 3 AST Lambda Expression
# @api public
#
attr_reader :transformer
# Creates a Producer.
# Derived classes should call this constructor to get support for transformer lambda.
#
# @param injector [Puppet::Pops::Binder::Injector] The injector where the lookup originates
# @param binding [Puppet::Pops::Binder::Bindings::Binding, nil] The binding using this producer
# @param scope [Puppet::Parser::Scope] The scope to use for evaluation
# @option options [Puppet::Pops::Model::LambdaExpression] :transformer (nil) a transformer of produced value
# @api public
#
def initialize(injector, binding, scope, options)
if transformer_lambda = options[:transformer]
if transformer_lambda.is_a?(Proc)
raise ArgumentError, "Transformer Proc must take two arguments; scope, value." unless transformer_lambda.arity == 2
@transformer = transformer_lambda
else
raise ArgumentError, "Transformer must be a LambdaExpression" unless transformer_lambda.is_a?(Puppet::Pops::Model::LambdaExpression)
raise ArgumentError, "Transformer lambda must take one argument; value." unless transformer_lambda.parameters.size() == 1
- # NOTE: This depends on Puppet 3 AST Lambda
- @transformer = Puppet::Pops::Model::AstTransformer.new().transform(transformer_lambda)
+ @transformer = Puppet::Pops::Parser::EvaluatingParser.new.closure(transformer_lambda, scope)
end
end
end
# Produces an instance.
# @param scope [Puppet::Parser:Scope] the scope to use for evaluation
# @param args [Object] arguments to custom producers, always empty for implicit productions
# @return [Object] the produced instance (should never be nil).
# @api public
#
def produce(scope, *args)
do_transformation(scope, internal_produce(scope))
end
# Returns the producer after possibly having recreated an internal/wrapped producer.
# This implementation returns `self`. A derived class may want to override this method
# to perform initialization/refresh of its internal state. This method is called when
# a producer is requested.
# @see Puppet::Pops::Binder::ProducerProducer for an example of implementation.
# @param scope [Puppet::Parser:Scope] the scope to use for evaluation
# @return [Puppet::Pops::Binder::Producer] the producer to use
# @api public
#
def producer(scope)
self
end
protected
# Derived classes should implement this method to do the production of a value
# @param scope [Puppet::Parser::Scope] the scope to use when performing lookup and evaluation
# @raise [NotImplementedError] this implementation always raises an error
# @abstract
# @api private
#
def internal_produce(scope)
raise NotImplementedError, "Producer-class '#{self.class.name}' should implement #internal_produce(scope)"
end
# Transforms the produced value if a transformer has been defined.
# @param scope [Puppet::Parser::Scope] the scope used for evaluation
# @param produced_value [Object, nil] the produced value (possibly nil)
# @return [Object] the transformed value if a transformer is defined, else the given `produced_value`
# @api private
#
def do_transformation(scope, produced_value)
return produced_value unless transformer
- produced_value = :undef if produced_value.nil?
transformer.call(scope, produced_value)
end
end
# Abstract Producer holding a value
# @abstract
# @api public
#
class AbstractValueProducer < Producer
# @api public
attr_reader :value
# @param injector [Puppet::Pops::Binder::Injector] The injector where the lookup originates
# @param binding [Puppet::Pops::Binder::Bindings::Binding, nil] The binding using this producer
# @param scope [Puppet::Parser::Scope] The scope to use for evaluation
# @option options [Puppet::Pops::Model::LambdaExpression] :transformer (nil) a transformer of produced value
# @option options [Puppet::Pops::Model::LambdaExpression, nil] :value (nil) the value to produce
# @api public
#
def initialize(injector, binding, scope, options)
super
# nil is ok here, as an abstract value producer may be used to signal "not found"
@value = options[:value]
end
end
# Produces the same/singleton value on each production
# @api public
#
class SingletonProducer < AbstractValueProducer
protected
# @api private
def internal_produce(scope)
value()
end
end
# Produces a deep clone of its value on each production.
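# @example (illustrative; injector, binding, and scope are assumed) each production is a fresh copy
#   producer = DeepCloningProducer.new(injector, binding, scope, :value => { 'a' => [1, 2] })
#   x = producer.produce(scope)
#   y = producer.produce(scope)
#   x == y        #=> true  (equal content)
#   x.equal?(y)   #=> false (different objects)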
# @api public
#
class DeepCloningProducer < AbstractValueProducer
protected
# @api private
def internal_produce(scope)
case value
when Integer, Float, TrueClass, FalseClass, Symbol
# These are immutable
return value
when String
# ok if frozen, else fall through to default
return value() if value.frozen?
end
# The default: serialize/deserialize to get a deep copy
Marshal.load(Marshal.dump(value()))
end
end
# This abstract producer class remembers the injector and binding.
# @abstract
# @api public
#
class AbstractArgumentedProducer < Producer
# @api public
attr_reader :injector
# @api public
attr_reader :binding
# @param injector [Puppet::Pops::Binder::Injector] The injector where the lookup originates
# @param binding [Puppet::Pops::Binder::Bindings::Binding, nil] The binding using this producer
# @param scope [Puppet::Parser::Scope] The scope to use for evaluation
# @option options [Puppet::Pops::Model::LambdaExpression] :transformer (nil) a transformer of produced value
# @api public
#
def initialize(injector, binding, scope, options)
super
@injector = injector
@binding = binding
end
end
# @api public
class InstantiatingProducer < AbstractArgumentedProducer
# @api public
attr_reader :the_class
# @api public
attr_reader :init_args
# @param injector [Puppet::Pops::Binder::Injector] The injector where the lookup originates
# @param binding [Puppet::Pops::Binder::Bindings::Binding, nil] The binding using this producer
# @param scope [Puppet::Parser::Scope] The scope to use for evaluation
# @option options [Puppet::Pops::Model::LambdaExpression] :transformer (nil) a transformer of produced value
# @option options [String] :class_name The name of the class to create instance of
# @option options [Array<Object>] :init_args ([]) Optional arguments to class constructor
# @api public
#
def initialize(injector, binding, scope, options)
# Call super even though transforming a created instance is an odd thing to do - one can imagine
# sending it to a function for further detailing.
#
super
class_name = options[:class_name]
raise ArgumentError, "Option 'class_name' must be given for an InstantiatingProducer" unless class_name
# get class by name
@the_class = Puppet::Pops::Types::ClassLoader.provide(class_name)
@init_args = options[:init_args] || []
raise ArgumentError, "Can not load the class #{class_name} specified in binding named: '#{binding.name}'" unless @the_class
end
protected
# Performs initialization the same way as Assisted Inject does (but handles arguments to the
# constructor)
# @api private
#
def internal_produce(scope)
result = nil
# A class :inject method wins over an instance :initialize if it is present, unless a more specific
# constructor exists. (i.e do not pick :inject from superclass if class has a constructor).
#
if the_class.respond_to?(:inject)
inject_method = the_class.method(:inject)
initialize_method = the_class.instance_method(:initialize)
if inject_method.owner <= initialize_method.owner
result = the_class.inject(injector, scope, binding, *init_args)
end
end
if result.nil?
result = the_class.new(*init_args)
end
result
end
end
# @api public
class FirstFoundProducer < Producer
# @api public
attr_reader :producers
# @param injector [Puppet::Pops::Binder::Injector] The injector where the lookup originates
# @param binding [Puppet::Pops::Binder::Bindings::Binding, nil] The binding using this producer
# @param scope [Puppet::Parser::Scope] The scope to use for evaluation
# @option options [Puppet::Pops::Model::LambdaExpression] :transformer (nil) a transformer of produced value
# @option options [Array<Puppet::Pops::Binder::Producers::Producer>] :producers list of producers to consult. Required.
# @api public
#
def initialize(injector, binding, scope, options)
super
@producers = options[:producers]
raise ArgumentError, "Option :producers' must be set to a list of producers." if @producers.nil?
raise ArgumentError, "Given 'producers' option is not an Array" unless @producers.is_a?(Array)
end
protected
# @api private
def internal_produce(scope)
# return the first produced value that is non-nil (unfortunately there is no such enumerable method)
producers.reduce(nil) {|memo, p| break memo unless memo.nil?; p.produce(scope)}
end
end
# Evaluates a Puppet Expression and returns the result.
# This is typically used for strings with interpolated expressions.
# @api public
#
class EvaluatingProducer < Producer
# A Puppet 3 AST Expression
# @api public
#
attr_reader :expression
# @param injector [Puppet::Pops::Binder::Injector] The injector where the lookup originates
# @param binding [Puppet::Pops::Binder::Bindings::Binding, nil] The binding using this producer
# @param scope [Puppet::Parser::Scope] The scope to use for evaluation
# @option options [Puppet::Pops::Model::LambdaExpression] :transformer (nil) a transformer of produced value
# @option options [Array<Puppet::Pops::Model::Expression>] :expression The expression to evaluate
# @api public
#
def initialize(injector, binding, scope, options)
super
- expr = options[:expression]
- raise ArgumentError, "Option 'expression' must be given to an EvaluatingProducer." unless expr
- @expression = Puppet::Pops::Model::AstTransformer.new().transform(expr)
+ @expression = options[:expression]
+ raise ArgumentError, "Option 'expression' must be given to an EvaluatingProducer." unless @expression
end
# @api private
def internal_produce(scope)
- expression.evaluate(scope)
+ Puppet::Pops::Parser::EvaluatingParser.new.evaluate(scope, expression)
end
end
# @api public
class LookupProducer < AbstractArgumentedProducer
# @api public
attr_reader :type
# @api public
attr_reader :name
# @param injector [Puppet::Pops::Binder::Injector] The injector where the lookup originates
# @param binder [Puppet::Pops::Binder::Bindings::Binding, nil] The binding using this producer
# @param scope [Puppet::Parser::Scope] The scope to use for evaluation
# @option options [Puppet::Pops::Model::LambdaExpression] :transformer (nil) a transformer of produced value
- # @option options [Puppet::Pops::Types::PObjectType] :type The type to lookup
+ # @option options [Puppet::Pops::Types::PAnyType] :type The type to lookup
# @option options [String] :name ('') The name to lookup
# @api public
#
def initialize(injector, binder, scope, options)
super
@type = options[:type]
@name = options[:name] || ''
raise ArgumentError, "Option 'type' must be given in a LookupProducer." unless @type
end
protected
# @api private
def internal_produce(scope)
injector.lookup_type(scope, type, name)
end
end
# @api public
class LookupKeyProducer < LookupProducer
# @api public
attr_reader :key
# @param injector [Puppet::Pops::Binder::Injector] The injector where the lookup originates
# @param binder [Puppet::Pops::Binder::Bindings::Binding, nil] The binding using this producer
# @param scope [Puppet::Parser::Scope] The scope to use for evaluation
# @option options [Puppet::Pops::Model::LambdaExpression] :transformer (nil) a transformer of produced value
- # @option options [Puppet::Pops::Types::PObjectType] :type The type to lookup
+ # @option options [Puppet::Pops::Types::PAnyType] :type The type to lookup
# @option options [String] :name ('') The name to lookup
- # @option options [Puppet::Pops::Types::PObjectType] :key The key to lookup in the hash
+ # @option options [Puppet::Pops::Types::PAnyType] :key The key to lookup in the hash
# @api public
#
def initialize(injector, binder, scope, options)
super
@key = options[:key]
raise ArgumentError, "Option 'key' must be given in a LookupKeyProducer." if key.nil?
end
protected
# @api private
def internal_produce(scope)
result = super
result.is_a?(Hash) ? result[key] : nil
end
end
# Produces the given producer, then uses that producer.
# @see ProducerProducer for the non singleton version
# @api public
#
class SingletonProducerProducer < Producer
# @api public
attr_reader :value_producer
# @param injector [Puppet::Pops::Binder::Injector] The injector where the lookup originates
# @param binding [Puppet::Pops::Binder::Bindings::Binding, nil] The binding using this producer
# @param scope [Puppet::Parser::Scope] The scope to use for evaluation
# @option options [Puppet::Pops::Model::LambdaExpression] :transformer (nil) a transformer of produced value
# @option options [Puppet::Pops::Model::LambdaExpression] :producer_producer a producer of a value producer (required)
# @api public
#
def initialize(injector, binding, scope, options)
super
p = options[:producer_producer]
raise ArgumentError, "Option :producer_producer must be given in a SingletonProducerProducer" unless p
@value_producer = p.produce(scope)
end
protected
# @api private
def internal_produce(scope)
value_producer.produce(scope)
end
end
# A ProducerProducer creates a producer via another producer, and then uses this created producer
# to produce values. This is useful for custom production of series of values.
# On each request for a producer, this producer will reset its internal producer (i.e. restarting
# the series).
#
# @param producer_producer [#produce(scope)] the producer of the producer
#
# @api public
#
class ProducerProducer < Producer
# @api public
attr_reader :producer_producer
# @api public
attr_reader :value_producer
# Creates new ProducerProducer given a producer.
#
# @param injector [Puppet::Pops::Binder::Injector] The injector where the lookup originates
# @param binding [Puppet::Pops::Binder::Bindings::Binding, nil] The binding using this producer
# @param scope [Puppet::Parser::Scope] The scope to use for evaluation
# @option options [Puppet::Pops::Model::LambdaExpression] :transformer (nil) a transformer of produced value
# @option options [Puppet::Pops::Binder::Producer] :producer_producer a producer of a value producer (required)
#
# @api public
#
def initialize(injector, binding, scope, options)
super
unless producer_producer = options[:producer_producer]
raise ArgumentError, "The option :producer_producer must be set in a ProducerProducer"
end
raise ArgumentError, "Argument must be a Producer" unless producer_producer.is_a?(Producer)
@producer_producer = producer_producer
@value_producer = nil
end
# Updates the internal state to use a new instance of the wrapped producer.
# @api public
#
def producer(scope)
@value_producer = @producer_producer.produce(scope)
self
end
protected
# Produces a value after having created an instance of the wrapped producer (if not already created).
# @api private
#
def internal_produce(scope, *args)
producer() unless value_producer
value_producer.produce(scope)
end
end
# This type of producer should only be created by the Injector.
#
# @api private
#
class AssistedInjectProducer < Producer
# An Assisted Inject Producer is created when a lookup is made of a type that is
# not bound. It does not support a transformer lambda.
# @note This initializer has a different signature than all others. Do not use in regular logic.
# @api private
#
def initialize(injector, clazz)
raise ArgumentError, "class must be given" unless clazz.is_a?(Class)
@injector = injector
@clazz = clazz
@inst = nil
end
def produce(scope, *args)
producer(scope, *args) unless @inst
@inst
end
# @api private
def producer(scope, *args)
@inst = nil
# A class :inject method wins over an instance :initialize if it is present, unless a more specific zero args
# constructor exists. (i.e do not pick :inject from superclass if class has a zero args constructor).
#
if @clazz.respond_to?(:inject)
inject_method = @clazz.method(:inject)
initialize_method = @clazz.instance_method(:initialize)
if inject_method.owner <= initialize_method.owner || initialize_method.arity != 0
@inst = @clazz.inject(@injector, scope, nil, *args)
end
end
if @inst.nil?
unless args.empty?
raise ArgumentError, "Assisted Inject can not pass arguments to no-args constructor when there is no class inject method."
end
@inst = @clazz.new()
end
self
end
end
# Abstract base class for multibind producers.
# Is suitable as base class for custom implementations of multibind producers.
# @abstract
# @api public
#
class MultibindProducer < AbstractArgumentedProducer
attr_reader :contributions_key
# @param injector [Puppet::Pops::Binder::Injector] The injector where the lookup originates
# @param binding [Puppet::Pops::Binder::Bindings::Binding, nil] The binding using this producer
# @param scope [Puppet::Parser::Scope] The scope to use for evaluation
# @option options [Puppet::Pops::Model::LambdaExpression] :transformer (nil) a transformer of produced value
#
# @api public
#
def initialize(injector, binding, scope, options)
super
@contributions_key = injector.key_factory.multibind_contributions(binding.id)
end
- # @param expected [Array<Puppet::Pops::Types::PObjectType>, Puppet::Pops::Types::PObjectType] expected type or types
- # @param actual [Object, Puppet::Pops::Types::PObjectType> the actual value (or its type)
+ # @param expected [Array<Puppet::Pops::Types::PAnyType>, Puppet::Pops::Types::PAnyType] expected type or types
+ # @param actual [Object, Puppet::Pops::Types::PAnyType] the actual value (or its type)
# @return [String] a formatted string for inclusion as detail in an error message
# @api private
#
def type_error_detail(expected, actual)
tc = injector.type_calculator
expected = [expected] unless expected.is_a?(Array)
actual_t = tc.is_ptype?(actual) ? actual : tc.infer(actual)
expstrs = expected.collect {|t| tc.string(t) }
"expected: #{expstrs.join(', or ')}, got: #{tc.string(actual_t)}"
end
end
# A configurable multibind producer for Array type multibindings.
#
# This implementation collects all contributions to the multibind and then combines them using the following rules:
#
# - all *unnamed* entries are added unless the option `:priority_on_unnamed` is set to true, in which case the unnamed
# contribution with the highest priority is added, and the rest are ignored (unless they have the same priority in which
# case an error is raised).
# - all *named* entries are handled the same way as *unnamed* but the option `:priority_on_named` controls their handling.
# - the option `:uniq` post processes the result to only contain unique entries
# - the option `:flatten` post processes the result by flattening all nested arrays.
# - If both `:flatten` and `:uniq` are true, flattening is done first.
#
# @note
# Collection accepts elements that comply with the array's element type, or the entire type (i.e. Array[element_type]).
# If the type is restrictive - e.g. Array[String] and an Array[String] is contributed, the result will not be type
# compliant without also using the `:flatten` option, and a type error will be raised. For an array with relaxed typing
# i.e. Array[Data], it is valid to produce a result such as `['a', ['b', 'c'], 'd']` and no flattening is required
# and no error is raised (but code using the array needs to be aware of potential array/non-array entries).
# The use of the option `:flatten` controls how the result is flattened.
#
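# @example (illustrative) post-processing of collected contributions
#   # given collected contributions ['a'], ['b', ['c']], ['a'] (in precedence order):
#   #   with :flatten => true and :uniq => true the produced result is ['a', 'b', 'c']
#   #   without those options the produced result is [['a'], ['b', ['c']], ['a']]
#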
# @api public
#
class ArrayMultibindProducer < MultibindProducer
# @return [Boolean] whether the result should be made to contain unique (non-equal) entries or not
# @api public
attr_reader :uniq
# @return [Boolean, Integer] If result should be flattened (true), or not (false), or flattened to given level (0 = none, -1 = all)
# @api public
attr_reader :flatten
# @return [Boolean] whether priority should be considered for named contributions
# @api public
attr_reader :priority_on_named
# @return [Boolean] whether priority should be considered for unnamed contributions
# @api public
attr_reader :priority_on_unnamed
# @param injector [Puppet::Pops::Binder::Injector] The injector where the lookup originates
# @param binding [Puppet::Pops::Binder::Bindings::Binding, nil] The binding using this producer
# @param scope [Puppet::Parser::Scope] The scope to use for evaluation
# @option options [Puppet::Pops::Model::LambdaExpression] :transformer (nil) a transformer of produced value
# @option options [Boolean] :uniq (false) if collected result should be post-processed to contain only unique entries
# @option options [Boolean, Integer] :flatten (false) if collected result should be post-processed so all contained arrays
# are flattened. May be set to an Integer value to indicate the level of recursion (-1 is endless, 0 is none).
# @option options [Boolean] :priority_on_named (true) if the highest priority named element should win or if all should be included
# @option options [Boolean] :priority_on_unnamed (false) if the highest priority unnamed element should win or if all should be included
# @api public
#
def initialize(injector, binding, scope, options)
super
@uniq = !!options[:uniq]
@flatten = options[:flatten]
@priority_on_named = options[:priority_on_named].nil? ? true : options[:priority_on_named]
@priority_on_unnamed = !!options[:priority_on_unnamed]
case @flatten
when Integer
when true
@flatten = -1
when false
@flatten = nil
when NilClass
@flatten = nil
else
raise ArgumentError, "Option :flatten must be nil, Boolean, or an integer value" unless @flatten.is_a?(Integer)
end
end
protected
# @api private
def internal_produce(scope)
seen = {}
included_keys = []
injector.get_contributions(scope, contributions_key).each do |element|
key = element[0]
entry = element[1]
name = entry.binding.name
existing = seen[name]
empty_name = name.nil? || name.empty?
if existing
if empty_name && priority_on_unnamed
if (seen[name] <=> entry) >= 0
raise ArgumentError, "Duplicate key (same priority) contributed to Array Multibinding '#{binding.name}' with unnamed entry."
end
next
elsif !empty_name && priority_on_named
if (seen[name] <=> entry) >= 0
raise ArgumentError, "Duplicate key (same priority) contributed to Array Multibinding '#{binding.name}', key: '#{name}'."
end
next
end
else
seen[name] = entry
end
included_keys << key
end
result = included_keys.collect do |k|
x = injector.lookup_key(scope, k)
assert_type(binding(), injector.type_calculator(), x)
x
end
result.flatten!(flatten) if flatten
result.uniq! if uniq
result
end
# @api private
def assert_type(binding, tc, value)
infered = tc.infer(value)
unless tc.assignable?(binding.type.element_type, infered) || tc.assignable?(binding.type, infered)
raise ArgumentError, ["Type Error: contribution to '#{binding.name}' does not match type of multibind, ",
"#{type_error_detail([binding.type.element_type, binding.type], value)}"].join()
end
end
end
# @api public
class HashMultibindProducer < MultibindProducer
# @return [Symbol] One of `:error`, `:merge`, `:append`, `:priority`, `:ignore`
# @api public
attr_reader :conflict_resolution
# @return [Boolean]
# @api public
attr_reader :uniq
# @return [Boolean, Integer] Flatten all if true, or none if false, or to given level (0 = none, -1 = all)
# @api public
attr_reader :flatten
# The hash multibind producer provides options to control conflict resolution.
# By default, the hash is produced using `:priority` resolution - the highest entry is selected, the rest are
# ignored unless they have the same priority which is an error.
#
# @param injector [Puppet::Pops::Binder::Injector] The injector where the lookup originates
# @param binding [Puppet::Pops::Binder::Bindings::Binding, nil] The binding using this producer
# @param scope [Puppet::Parser::Scope] The scope to use for evaluation
# @option options [Puppet::Pops::Model::LambdaExpression] :transformer (nil) a transformer of produced value
# @option options [Symbol, String] :conflict_resolution (:priority) One of `:error`, `:merge`, `:append`, `:priority`, `:ignore`
# <ul><li> `ignore` the first found highest priority contribution is used, the rest are ignored</li>
# <li>`error` any duplicate key is an error</li>
# <li>`append` element type must be compatible with Array, makes elements be arrays and appends all found</li>
# <li>`merge` element type must be compatible with hash, merges hashes with retention of highest priority hash content</li>
# <li>`priority` the first found highest priority contribution is used, duplicates with the same priority raise an error, the rest are
# ignored.</li></ul>
# @option options [Boolean, Integer] :flatten (false) If appended should be flattened. Also see {#flatten}.
# @option options [Boolean] :uniq (false) If appended result should be made unique.
#
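# @example (illustrative) two contributions named 'port': 8080 (higher priority) and 80 (lower)
#   #   :conflict_resolution => :priority  produces { 'port' => 8080 } (equal priorities raise an error)
#   #   :conflict_resolution => :ignore    produces { 'port' => 8080 }
#   #   :conflict_resolution => :append    produces { 'port' => [8080, 80] }
#   #   :conflict_resolution => :error     raises an ArgumentError
#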
# @api public
#
def initialize(injector, binding, scope, options)
super
@conflict_resolution = options[:conflict_resolution].nil? ? :priority : options[:conflict_resolution]
@uniq = !!options[:uniq]
@flatten = options[:flatten]
unless [:error, :merge, :append, :priority, :ignore].include?(@conflict_resolution)
raise ArgumentError, "Unknown conflict_resolution for Multibind Hash: '#{@conflict_resolution}."
end
case @flatten
when Integer
when true
@flatten = -1
when false
@flatten = nil
when NilClass
@flatten = nil
else
raise ArgumentError, "Option :flatten must be nil, Boolean, or an integer value" unless @flatten.is_a?(Integer)
end
if uniq || flatten || conflict_resolution.to_s == 'append'
etype = binding.type.element_type
unless etype.class == Puppet::Pops::Types::PDataType || etype.is_a?(Puppet::Pops::Types::PArrayType)
detail = []
detail << ":uniq" if uniq
detail << ":flatten" if flatten
detail << ":conflict_resolution => :append" if conflict_resolution.to_s == 'append'
raise ArgumentError, ["Options #{detail.join(', and ')} cannot be used with a Multibind ",
"of type #{injector.type_calculator.string(binding.type)}"].join()
end
end
end
protected
# @api private
def internal_produce(scope)
seen = {}
included_entries = []
injector.get_contributions(scope, contributions_key).each do |element|
key = element[0]
entry = element[1]
name = entry.binding.name
raise ArgumentError, "A Hash Multibind contribution to '#{binding.name}' must have a name." if name.nil? || name.empty?
existing = seen[name]
if existing
case conflict_resolution.to_s
when 'priority'
# skip if duplicate has lower prio
if (comparison = (seen[name] <=> entry)) <= 0
raise ArgumentError, "Internal Error: contributions not given in decreasing precedence order" unless comparison == 0
raise ArgumentError, "Duplicate key (same priority) contributed to Hash Multibinding '#{binding.name}', key: '#{name}'."
end
next
when 'ignore'
# skip, ignore conflict if prio is the same
next
when 'error'
raise ArgumentError, "Duplicate key contributed to Hash Multibinding '#{binding.name}', key: '#{name}'."
end
else
seen[name] = entry
end
included_entries << [key, entry]
end
result = {}
included_entries.each do |element|
k = element[ 0 ]
entry = element[ 1 ]
x = injector.lookup_key(scope, k)
name = entry.binding.name
assert_type(binding(), injector.type_calculator(), name, x)
if result[ name ]
merge(result, name, result[ name ], x)
else
result[ name ] = conflict_resolution().to_s == 'append' ? [x] : x
end
end
result
end
# @api private
def merge(result, name, higher, lower)
case conflict_resolution.to_s
when 'append'
unless higher.is_a?(Array)
higher = [higher]
end
tmp = higher + [lower]
tmp.flatten!(flatten) if flatten
tmp.uniq! if uniq
result[name] = tmp
when 'merge'
result[name] = lower.merge(higher)
end
end
# @api private
def assert_type(binding, tc, key, value)
unless tc.instance?(binding.type.key_type, key)
raise ArgumentError, ["Type Error: key contribution to #{binding.name}['#{key}'] ",
"is incompatible with key type: #{tc.label(binding.type)}, ",
type_error_detail(binding.type.key_type, key)].join()
end
if key.nil? || !key.is_a?(String) || key.empty?
raise ArgumentError, "Entry contributing to multibind hash with id '#{binding.id}' must have a name."
end
unless tc.instance?(binding.type.element_type, value)
raise ArgumentError, ["Type Error: value contribution to #{binding.name}['#{key}'] ",
"is incompatible, ",
type_error_detail(binding.type.element_type, value)].join()
end
end
end
end
diff --git a/lib/puppet/pops/evaluator/access_operator.rb b/lib/puppet/pops/evaluator/access_operator.rb
index 29a981538..e849a2c4e 100644
--- a/lib/puppet/pops/evaluator/access_operator.rb
+++ b/lib/puppet/pops/evaluator/access_operator.rb
@@ -1,598 +1,604 @@
# AccessOperator handles operator []
# This operator is part of evaluation.
#
class Puppet::Pops::Evaluator::AccessOperator
# Provides access to the Puppet 3.x runtime (scope, etc.)
# This separation has been made to make it easier to later migrate the evaluator to an improved runtime.
#
include Puppet::Pops::Evaluator::Runtime3Support
Issues = Puppet::Pops::Issues
TYPEFACTORY = Puppet::Pops::Types::TypeFactory
+ EMPTY_STRING = ''.freeze
attr_reader :semantic
# Initialize with AccessExpression to enable reporting issues
# @param access_expression [Puppet::Pops::Model::AccessExpression] the semantic object being evaluated
# @return [void]
#
def initialize(access_expression)
@@access_visitor ||= Puppet::Pops::Visitor.new(self, "access", 2, nil)
@semantic = access_expression
end
def access (o, scope, *keys)
@@access_visitor.visit_this_2(self, o, scope, keys)
end
protected
def access_Object(o, scope, keys)
fail(Issues::OPERATOR_NOT_APPLICABLE, @semantic.left_expr, :operator=>'[]', :left_value => o)
end
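# Evaluates <str>[] with one key (index) or two keys (index, length). Negative values count from the
# end, and out-of-range access produces the empty string. Illustrative Puppet expressions:
#   'hello'[0]       # => 'h'
#   'hello'[1, 3]    # => 'ell'
#   'hello'[-2, 2]   # => 'lo'
#   'hello'[10]      # => ''
#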
def access_String(o, scope, keys)
keys.flatten!
result = case keys.size
when 0
fail(Puppet::Pops::Issues::BAD_STRING_SLICE_ARITY, @semantic.left_expr, {:actual => keys.size})
when 1
# Note that Ruby 1.8.7 requires a length of 1 to produce a String
k1 = coerce_numeric(keys[0], @semantic.keys, scope)
bad_access_key_type(o, 0, k1, Integer) unless k1.is_a?(Integer)
k2 = 1
k1 = k1 < 0 ? o.length + k1 : k1 # abs pos
# if k1 is outside, a length of 1 always produces an empty string
if k1 < 0
- ''
+ EMPTY_STRING
else
o[ k1, k2 ]
end
when 2
k1 = coerce_numeric(keys[0], @semantic.keys, scope)
k2 = coerce_numeric(keys[1], @semantic.keys, scope)
[k1, k2].each_with_index { |k,i| bad_access_key_type(o, i, k, Integer) unless k.is_a?(Integer) }
k1 = k1 < 0 ? o.length + k1 : k1 # abs pos (negative is count from end)
k2 = k2 < 0 ? o.length - k1 + k2 + 1 : k2 # abs length (negative k2 is length from pos to end count)
# if k1 is outside, adjust to first position, and adjust length
if k1 < 0
k2 = k2 + k1
k1 = 0
end
o[ k1, k2 ]
else
fail(Puppet::Pops::Issues::BAD_STRING_SLICE_ARITY, @semantic.left_expr, {:actual => keys.size})
end
# Specified as: an index outside of range, or empty result == empty string
- (result.nil? || result.empty?) ? '' : result
+ (result.nil? || result.empty?) ? EMPTY_STRING : result
end
# Parameterizes a PRegexp Type with a pattern string or a Ruby regexp
#
def access_PRegexpType(o, scope, keys)
keys.flatten!
unless keys.size == 1
blamed = keys.size == 0 ? @semantic : @semantic.keys[1]
fail(Puppet::Pops::Issues::BAD_TYPE_SLICE_ARITY, blamed, :base_type => o, :min=>1, :actual => keys.size)
end
assert_keys(keys, o, 1, 1, String, Regexp)
Puppet::Pops::Types::TypeFactory.regexp(*keys)
end
# Evaluates <ary>[] with 1 or 2 arguments. One argument is an index lookup, two arguments is a slice from/to.
#
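# Illustrative Puppet expressions:
#   [1, 2, 3, 4][1]       # => 2
#   [1, 2, 3, 4][1, 2]    # => [2, 3]
#   [1, 2, 3, 4][-2, 2]   # => [3, 4]
#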
def access_Array(o, scope, keys)
keys.flatten!
case keys.size
when 0
fail(Puppet::Pops::Issues::BAD_ARRAY_SLICE_ARITY, @semantic.left_expr, {:actual => keys.size})
when 1
k = coerce_numeric(keys[0], @semantic.keys[0], scope)
unless k.is_a?(Integer)
bad_access_key_type(o, 0, k, Integer)
end
o[k]
when 2
# A slice [from, to] with support for -1 to mean start, or end respectively.
k1 = coerce_numeric(keys[0], @semantic.keys[0], scope)
k2 = coerce_numeric(keys[1], @semantic.keys[1], scope)
[k1, k2].each_with_index { |k,i| bad_access_key_type(o, i, k, Integer) unless k.is_a?(Integer) }
# Help confused Ruby do the right thing (it truncates to the right, but negative index + length can never overlap
# the available range).
k1 = k1 < 0 ? o.length + k1 : k1 # abs pos (negative is count from end)
k2 = k2 < 0 ? o.length - k1 + k2 + 1 : k2 # abs length (negative k2 is length from pos to end count)
# if k1 is outside, adjust to first position, and adjust length
if k1 < 0
k2 = k2 + k1
k1 = 0
end
# Help ruby always return empty array when asking for a sub array
result = o[ k1, k2 ]
result.nil? ? [] : result
else
fail(Puppet::Pops::Issues::BAD_ARRAY_SLICE_ARITY, @semantic.left_expr, {:actual => keys.size})
end
end
# Evaluates <hsh>[] with support for one or more arguments. If more than one argument is used, the result
# is an array with each lookup.
# @note
# Does not flatten its keys to enable looking up with a structure
#
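# Illustrative Puppet expressions:
#   {'a' => 1, 'b' => 2}['a']        # => 1
#   {'a' => 1, 'b' => 2}['a', 'b']   # => [1, 2]
#   {'a' => 1, 'b' => 2}['a', 'x']   # => [1] (missing keys are dropped)
#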
def access_Hash(o, scope, keys)
- # Look up key in hash, if key is nil or :undef, try alternate form before giving up.
- # This makes :undef and nil "be the same key". (The alternative is to always only write one or the other
- # in all hashes - that is much harder to guarantee since the Hash is a regular Ruby hash.
- #
+ # Look up key in hash, if key is nil, try alternate form (:undef) before giving up.
+ # This is done because the hash may have been produced by 3x logic and may thus contain :undef.
result = keys.collect do |k|
- o.fetch(k) do |key|
- case key
- when nil
- o[:undef]
- when :undef
- o[:nil]
- else
- nil
- end
- end
+ o.fetch(k) { |key| key.nil? ? o[:undef] : nil }
end
case result.size
when 0
fail(Puppet::Pops::Issues::BAD_HASH_SLICE_ARITY, @semantic.left_expr, {:actual => keys.size})
when 1
result.pop
else
# remove nil elements and return
result.compact!
result
end
end
# Ruby does not have an infinity constant. TODO: Consider having one constant in Puppet. Now it is in several places.
INFINITY = 1.0 / 0.0
def access_PEnumType(o, scope, keys)
keys.flatten!
assert_keys(keys, o, 1, INFINITY, String)
Puppet::Pops::Types::TypeFactory.enum(*keys)
end
def access_PVariantType(o, scope, keys)
keys.flatten!
- assert_keys(keys, o, 1, INFINITY, Puppet::Pops::Types::PAbstractType)
+ assert_keys(keys, o, 1, INFINITY, Puppet::Pops::Types::PAnyType)
Puppet::Pops::Types::TypeFactory.variant(*keys)
end
def access_PTupleType(o, scope, keys)
keys.flatten!
if TYPEFACTORY.is_range_parameter?(keys[-2]) && TYPEFACTORY.is_range_parameter?(keys[-1])
size_type = TYPEFACTORY.range(keys[-2], keys[-1])
keys = keys[0, keys.size - 2]
elsif TYPEFACTORY.is_range_parameter?(keys[-1])
size_type = TYPEFACTORY.range(keys[-1], :default)
keys = keys[0, keys.size - 1]
end
- assert_keys(keys, o, 1, INFINITY, Puppet::Pops::Types::PAbstractType)
+ assert_keys(keys, o, 1, INFINITY, Puppet::Pops::Types::PAnyType)
t = Puppet::Pops::Types::TypeFactory.tuple(*keys)
# set size type, or nil for default (exactly 1)
t.size_type = size_type
t
end
def access_PCallableType(o, scope, keys)
TYPEFACTORY.callable(*keys)
end
def access_PStructType(o, scope, keys)
assert_keys(keys, o, 1, 1, Hash)
TYPEFACTORY.struct(keys[0])
end
def access_PStringType(o, scope, keys)
keys.flatten!
case keys.size
when 1
size_t = collection_size_t(0, keys[0])
when 2
size_t = collection_size_t(0, keys[0], keys[1])
else
fail(Puppet::Pops::Issues::BAD_STRING_SLICE_ARITY, @semantic, {:actual => keys.size})
end
string_t = Puppet::Pops::Types::TypeFactory.string()
string_t.size_type = size_t
string_t
end
# Asserts type of each key and calls fail with BAD_TYPE_SPECIFICATION
# @param keys [Array<Object>] the evaluated keys
# @param o [Object] evaluated LHS reported as :base_type
# @param min [Integer] the minimum number of keys (typically 1)
# @param max [Numeric] the maximum number of keys (use same as min, specific number, or INFINITY)
# @param allowed_classes [Class] a variable number of classes that each key must be an instance of (any)
# @api private
#
def assert_keys(keys, o, min, max, *allowed_classes)
size = keys.size
unless size.between?(min, max || INFINITY)
fail(Puppet::Pops::Issues::BAD_TYPE_SLICE_ARITY, @semantic, :base_type => o, :min=>1, :max => max, :actual => keys.size)
end
keys.each_with_index do |k, i|
unless allowed_classes.any? {|clazz| k.is_a?(clazz) }
bad_type_specialization_key_type(o, i, k, *allowed_classes)
end
end
end
def bad_access_key_type(lhs, key_index, actual, *expected_classes)
fail(Puppet::Pops::Issues::BAD_SLICE_KEY_TYPE, @semantic.keys[key_index], {
:left_value => lhs,
:actual => bad_key_type_name(actual),
:expected_classes => expected_classes
})
end
def bad_key_type_name(actual)
case actual
- when nil, :undef
+ when nil
'Undef'
when :default
'Default'
else
actual.class.name
end
end
def bad_type_specialization_key_type(type, key_index, actual, *expected_classes)
label_provider = Puppet::Pops::Model::ModelLabelProvider.new()
expected = expected_classes.map {|c| label_provider.label(c) }.join(' or ')
fail(Puppet::Pops::Issues::BAD_TYPE_SPECIALIZATION, @semantic.keys[key_index], {
:type => type,
:message => "Cannot use #{bad_key_type_name(actual)} where #{expected} is expected"
})
end
def access_PPatternType(o, scope, keys)
keys.flatten!
assert_keys(keys, o, 1, INFINITY, String, Regexp, Puppet::Pops::Types::PPatternType, Puppet::Pops::Types::PRegexpType)
Puppet::Pops::Types::TypeFactory.pattern(*keys)
end
def access_POptionalType(o, scope, keys)
keys.flatten!
if keys.size == 1
- unless keys[0].is_a?(Puppet::Pops::Types::PAbstractType)
+ unless keys[0].is_a?(Puppet::Pops::Types::PAnyType)
fail(Puppet::Pops::Issues::BAD_TYPE_SLICE_TYPE, @semantic.keys[0], {:base_type => 'Optional-Type', :actual => keys[0].class})
end
result = Puppet::Pops::Types::POptionalType.new()
result.optional_type = keys[0]
result
else
fail(Puppet::Pops::Issues::BAD_TYPE_SLICE_ARITY, @semantic, {:base_type => 'Optional-Type', :min => 1, :actual => keys.size})
end
end
def access_PType(o, scope, keys)
keys.flatten!
if keys.size == 1
- unless keys[0].is_a?(Puppet::Pops::Types::PAbstractType)
+ unless keys[0].is_a?(Puppet::Pops::Types::PAnyType)
fail(Puppet::Pops::Issues::BAD_TYPE_SLICE_TYPE, @semantic.keys[0], {:base_type => 'Type-Type', :actual => keys[0].class})
end
result = Puppet::Pops::Types::PType.new()
result.type = keys[0]
result
else
fail(Puppet::Pops::Issues::BAD_TYPE_SLICE_ARITY, @semantic, {:base_type => 'Type-Type', :min => 1, :actual => keys.size})
end
end
- def access_PRubyType(o, scope, keys)
+ def access_PRuntimeType(o, scope, keys)
keys.flatten!
- assert_keys(keys, o, 1, 1, String)
- # create ruby type based on name of class, not inference of key's type
- Puppet::Pops::Types::TypeFactory.ruby_type(keys[0])
+ assert_keys(keys, o, 2, 2, String, String)
+ # create runtime type based on runtime and name of class, (not inference of key's type)
+ Puppet::Pops::Types::TypeFactory.runtime(*keys)
end
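# Produces a range-constrained Integer type. Illustrative Puppet expressions:
#   Integer[1, 5]         # => integers 1 to 5
#   Integer[default, 10]  # => integers up to 10 (no lower bound)
#   Integer[0]            # => integers from 0 and up
#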
def access_PIntegerType(o, scope, keys)
keys.flatten!
unless keys.size.between?(1, 2)
fail(Puppet::Pops::Issues::BAD_INTEGER_SLICE_ARITY, @semantic, {:actual => keys.size})
end
keys.each_with_index do |x, index|
fail(Puppet::Pops::Issues::BAD_INTEGER_SLICE_TYPE, @semantic.keys[index],
{:actual => x.class}) unless (x.is_a?(Integer) || x == :default)
end
ranged_integer = Puppet::Pops::Types::PIntegerType.new()
from, to = keys
ranged_integer.from = from == :default ? nil : from
ranged_integer.to = to == :default ? nil : to
ranged_integer
end
def access_PFloatType(o, scope, keys)
keys.flatten!
unless keys.size.between?(1, 2)
fail(Puppet::Pops::Issues::BAD_FLOAT_SLICE_ARITY, @semantic, {:actual => keys.size})
end
keys.each_with_index do |x, index|
fail(Puppet::Pops::Issues::BAD_FLOAT_SLICE_TYPE, @semantic.keys[index],
{:actual => x.class}) unless (x.is_a?(Float) || x.is_a?(Integer) || x == :default)
end
ranged_float = Puppet::Pops::Types::PFloatType.new()
from, to = keys
ranged_float.from = from == :default || from.nil? ? nil : Float(from)
ranged_float.to = to == :default || to.nil? ? nil : Float(to)
ranged_float
end
# A Hash can create a new Hash type, one arg sets value type, two args sets key and value type in new type.
# With 3 or 4 arguments, these are used to create a size constraint.
# It is not possible to create a collection of Hash types directly.
#
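# Illustrative Puppet expressions:
#   Hash[String]              # => value type String, key type taken from the base Hash type
#   Hash[String, Integer]     # => String keys, Integer values
#   Hash[String, Integer, 1]  # => as above, with at least one entry
#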
def access_PHashType(o, scope, keys)
keys.flatten!
keys[0,2].each_with_index do |k, index|
- unless k.is_a?(Puppet::Pops::Types::PAbstractType)
+ unless k.is_a?(Puppet::Pops::Types::PAnyType)
fail(Puppet::Pops::Issues::BAD_TYPE_SLICE_TYPE, @semantic.keys[index], {:base_type => 'Hash-Type', :actual => k.class})
end
end
case keys.size
when 1
result = Puppet::Pops::Types::PHashType.new()
result.key_type = o.key_type.copy
result.element_type = keys[0]
result
when 2
result = Puppet::Pops::Types::PHashType.new()
result.key_type = keys[0]
result.element_type = keys[1]
result
when 3
result = Puppet::Pops::Types::PHashType.new()
result.key_type = keys[0]
result.element_type = keys[1]
size_t = collection_size_t(1, keys[2])
result
when 4
result = Puppet::Pops::Types::PHashType.new()
result.key_type = keys[0]
result.element_type = keys[1]
size_t = collection_size_t(1, keys[2], keys[3])
result
else
fail(Puppet::Pops::Issues::BAD_TYPE_SLICE_ARITY, @semantic, {
:base_type => 'Hash-Type', :min => 1, :max => 4, :actual => keys.size
})
end
result.size_type = size_t if size_t
result
end
# CollectionType is parameterized with a range
def access_PCollectionType(o, scope, keys)
keys.flatten!
case keys.size
when 1
size_t = collection_size_t(1, keys[0])
when 2
size_t = collection_size_t(1, keys[0], keys[1])
else
fail(Puppet::Pops::Issues::BAD_TYPE_SLICE_ARITY, @semantic,
{:base_type => 'Collection-Type', :min => 1, :max => 2, :actual => keys.size})
end
result = Puppet::Pops::Types::PCollectionType.new()
result.size_type = size_t
result
end
# An Array can create a new Array type. It is not possible to create a collection of Array types.
#
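# Illustrative Puppet expressions:
#   Array[String]         # => array of String
#   Array[String, 1]      # => array of String with at least one element
#   Array[String, 1, 5]   # => array of String with one to five elements
#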
def access_PArrayType(o, scope, keys)
keys.flatten!
case keys.size
when 1
size_t = nil
when 2
size_t = collection_size_t(1, keys[1])
when 3
size_t = collection_size_t(1, keys[1], keys[2])
else
fail(Puppet::Pops::Issues::BAD_TYPE_SLICE_ARITY, @semantic,
{:base_type => 'Array-Type', :min => 1, :max => 3, :actual => keys.size})
end
- unless keys[0].is_a?(Puppet::Pops::Types::PAbstractType)
+ unless keys[0].is_a?(Puppet::Pops::Types::PAnyType)
fail(Puppet::Pops::Issues::BAD_TYPE_SLICE_TYPE, @semantic.keys[0], {:base_type => 'Array-Type', :actual => keys[0].class})
end
result = Puppet::Pops::Types::PArrayType.new()
result.element_type = keys[0]
result.size_type = size_t
result
end
# Produces an PIntegerType (range) given one or two keys.
def collection_size_t(start_index, *keys)
if keys.size == 1 && keys[0].is_a?(Puppet::Pops::Types::PIntegerType)
keys[0].copy
else
keys.each_with_index do |x, index|
fail(Puppet::Pops::Issues::BAD_COLLECTION_SLICE_TYPE, @semantic.keys[start_index + index],
{:actual => x.class}) unless (x.is_a?(Integer) || x == :default)
end
ranged_integer = Puppet::Pops::Types::PIntegerType.new()
from, to = keys
ranged_integer.from = from == :default ? nil : from
ranged_integer.to = to == :default ? nil : to
ranged_integer
end
end
+ # A Puppet::Resource represents either just a type (no title), or is a fully qualified type/title.
+ #
+ def access_Resource(o, scope, keys)
+ # To access a Puppet::Resource as if it was a PResourceType, simply infer it, and take the type of
+ # the parameterized meta type (i.e. Type[Resource[the_resource_type, the_resource_title]])
+ t = Puppet::Pops::Types::TypeCalculator.infer(o).type
+ # must map "undefined title" from resource to nil
+ t.title = nil if t.title == EMPTY_STRING
+ access(t, scope, *keys)
+ end
+
# A Resource can create a new more specific Resource type, and/or an array of resource types
# If the given type has title set, it can not be specified further.
# @example
# Resource[File] # => File
# Resource[File, 'foo'] # => File[foo]
# Resource[File, 'foo', 'bar'] # => [File[foo], File[bar]]
# File['foo', 'bar'] # => [File[foo], File[bar]]
# File['foo']['bar'] # => Value of the 'bar' parameter in the File['foo'] resource
# Resource[File]['foo', 'bar'] # => [File[Foo], File[bar]]
# Resource[File, 'foo', 'bar'] # => [File[foo], File[bar]]
# Resource[File, 'foo']['bar'] # => Value of the 'bar' parameter in the File['foo'] resource
#
def access_PResourceType(o, scope, keys)
blamed = keys.size == 0 ? @semantic : @semantic.keys[0]
if keys.size == 0
fail(Puppet::Pops::Issues::BAD_TYPE_SLICE_ARITY, blamed,
:base_type => Puppet::Pops::Types::TypeCalculator.new().string(o), :min => 1, :max => -1, :actual => 0)
end
# Must know which concrete resource type to operate on in all cases.
# It is not allowed to specify the type in an array arg - e.g. Resource[[File, 'foo']]
# type_name is LHS type_name if set, else the first given arg
type_name = o.type_name || keys.shift
type_name = case type_name
when Puppet::Pops::Types::PResourceType
type_name.type_name
when String
type_name.downcase
else
# blame given left expression if it defined the type, else the first given key expression
blame = o.type_name.nil? ? @semantic.keys[0] : @semantic.left_expr
fail(Puppet::Pops::Issues::ILLEGAL_RESOURCE_SPECIALIZATION, blame, {:actual => type_name.class})
end
+ # type name must conform
+ if type_name !~ Puppet::Pops::Patterns::CLASSREF
+ fail(Puppet::Pops::Issues::ILLEGAL_CLASSREF, blamed, {:name=>type_name})
+ end
+
# The result is an array if multiple titles are given, or if titles are specified with an array
# (possibly multiple arrays, and nested arrays).
result_type_array = keys.size > 1 || keys[0].is_a?(Array)
keys_orig_size = keys.size
keys.flatten!
keys.compact!
# If the given keys were just a mix of empty/nil values (resulting in an empty array), as opposed
# to calling the function the wrong way without any arguments (a configurable issue),
# return an empty array.
#
if keys.empty? && keys_orig_size > 0
optionally_fail(Puppet::Pops::Issues::EMPTY_RESOURCE_SPECIALIZATION, blamed)
return result_type_array ? [] : nil
end
if !o.title.nil?
# lookup resource and return one or more parameter values
resource = find_resource(scope, o.type_name, o.title)
unless resource
fail(Puppet::Pops::Issues::UNKNOWN_RESOURCE, @semantic, {:type_name => o.type_name, :title => o.title})
end
result = keys.map do |k|
unless is_parameter_of_resource?(scope, resource, k)
fail(Puppet::Pops::Issues::UNKNOWN_RESOURCE_PARAMETER, @semantic,
{:type_name => o.type_name, :title => o.title, :param_name=>k})
end
get_resource_parameter_value(scope, resource, k)
end
return result_type_array ? result : result.pop
end
keys = [:no_title] if keys.size < 1 # if there was only a type_name and it was consumed
result = keys.each_with_index.map do |t, i|
unless t.is_a?(String) || t == :no_title
type_to_report = case t
- when nil, :undef
+ when nil
'Undef'
when :default
'Default'
else
t.class.name
end
index = keys_orig_size != keys.size ? i+1 : i
fail(Puppet::Pops::Issues::BAD_TYPE_SPECIALIZATION, @semantic.keys[index], {
:type => o,
:message => "Cannot use #{type_to_report} where String is expected"
})
end
rtype = Puppet::Pops::Types::PResourceType.new()
rtype.type_name = type_name
rtype.title = (t == :no_title ? nil : t)
rtype
end
# returns single type if request was for a single entity, else an array of types (possibly empty)
return result_type_array ? result : result.pop
end
def access_PHostClassType(o, scope, keys)
blamed = keys.size == 0 ? @semantic : @semantic.keys[0]
keys_orig_size = keys.size
if keys_orig_size == 0
fail(Puppet::Pops::Issues::BAD_TYPE_SLICE_ARITY, blamed,
:base_type => Puppet::Pops::Types::TypeCalculator.new().string(o), :min => 1, :max => -1, :actual => 0)
end
# The result is an array if multiple classnames are given, or if classnames are specified with an array
# (possibly multiple arrays, and nested arrays).
result_type_array = keys.size > 1 || keys[0].is_a?(Array)
keys.flatten!
keys.compact!
# If the given keys were just a mix of empty/nil values (resulting in an empty array), as opposed
# to calling the function the wrong way without any arguments (a configurable issue),
# return an empty array.
#
if keys.empty? && keys_orig_size > 0
optionally_fail(Puppet::Pops::Issues::EMPTY_RESOURCE_SPECIALIZATION, blamed)
return result_type_array ? [] : nil
end
if o.class_name.nil?
# The type argument may be a Resource Type - the Puppet Language allows a reference such as
# Class[Foo], and this is interpreted as Class[Resource[Foo]] - which is ok as long as the resource
# does not have a title. This should probably be deprecated.
#
result = keys.each_with_index.map do |c, i|
name = if c.is_a?(Puppet::Pops::Types::PResourceType) && !c.type_name.nil? && c.title.nil?
# type_name is already downcase. Don't waste time trying to downcase again
c.type_name
elsif c.is_a?(String)
c.downcase
else
fail(Puppet::Pops::Issues::ILLEGAL_HOSTCLASS_NAME, @semantic.keys[i], {:name => c})
end
if name =~ Puppet::Pops::Patterns::NAME
ctype = Puppet::Pops::Types::PHostClassType.new()
# Remove leading '::' since all references are global, and 3x runtime does the wrong thing
- ctype.class_name = name.sub(/^::/, '')
+ ctype.class_name = name.sub(/^::/, EMPTY_STRING)
ctype
else
fail(Issues::ILLEGAL_NAME, @semantic.keys[i], {:name=>c})
end
end
else
# lookup class resource and return one or more parameter values
resource = find_resource(scope, 'class', o.class_name)
if resource
result = keys.map do |k|
if is_parameter_of_resource?(scope, resource, k)
get_resource_parameter_value(scope, resource, k)
else
fail(Puppet::Pops::Issues::UNKNOWN_RESOURCE_PARAMETER, @semantic,
{:type_name => 'Class', :title => o.class_name, :param_name=>k})
end
end
else
fail(Puppet::Pops::Issues::UNKNOWN_RESOURCE, @semantic, {:type_name => 'Class', :title => o.class_name})
end
end
# returns single type as type, else an array of types
return result_type_array ? result : result.pop
end
end
diff --git a/lib/puppet/pops/functions/dispatcher.rb b/lib/puppet/pops/evaluator/callable_mismatch_describer.rb
similarity index 54%
copy from lib/puppet/pops/functions/dispatcher.rb
copy to lib/puppet/pops/evaluator/callable_mismatch_describer.rb
index a4f912dc4..9d1108f8d 100644
--- a/lib/puppet/pops/functions/dispatcher.rb
+++ b/lib/puppet/pops/evaluator/callable_mismatch_describer.rb
@@ -1,237 +1,175 @@
-# Evaluate the dispatches defined as {Puppet::Pops::Functions::Dispatch}
-# instances to call the appropriate method on the
-# {Puppet::Pops::Functions::Function} instance.
-#
# @api private
-class Puppet::Pops::Functions::Dispatcher
- attr_reader :dispatchers
-
-# @api private
- def initialize()
- @dispatchers = [ ]
- end
-
- # Answers if dispatching has been defined
- # @return [Boolean] true if dispatching has been defined
- #
- # @api private
- def empty?
- @dispatchers.empty?
- end
-
- # Dispatches the call to the first found signature (entry with matching type).
- #
- # @param instance [Puppet::Functions::Function] - the function to call
- # @param calling_scope [T.B.D::Scope] - the scope of the caller
- # @param args [Array<Object>] - the given arguments in the form of an Array
- # @return [Object] - what the called function produced
- #
- # @api private
- def dispatch(instance, calling_scope, args)
- tc = Puppet::Pops::Types::TypeCalculator
- actual = tc.infer_set(args)
- found = @dispatchers.find { |d| tc.callable?(d.type, actual) }
- if found
- found.invoke(instance, calling_scope, args)
- else
- raise ArgumentError, "function '#{instance.class.name}' called with mis-matched arguments\n#{diff_string(instance.class.name, actual)}"
- end
- end
-
- # Adds a regular dispatch for one method name
- #
- # @param type [Puppet::Pops::Types::PArrayType, Puppet::Pops::Types::PTupleType] - type describing signature
- # @param method_name [String] - the name of the method that will be called when type matches given arguments
- # @param names [Array<String>] - array with names matching the number of parameters specified by type (or empty array)
- #
- # @api private
- def add_dispatch(type, method_name, param_names, block_name, injections, weaving, last_captures)
- @dispatchers << Puppet::Pops::Functions::Dispatch.new(type, method_name, param_names, block_name, injections, weaving, last_captures)
- end
-
- # Produces a CallableType for a single signature, and a Variant[<callables>] otherwise
- #
- # @api private
- def to_type()
- # make a copy to make sure it can be contained by someone else (even if it is not contained here, it
- # should be treated as immutable).
- #
- callables = dispatchers.map { | dispatch | dispatch.type.copy }
-
- # multiple signatures, produce a Variant type of Callable1-n (must copy them)
- # single signature, produce single Callable
- callables.size > 1 ? Puppet::Pops::Types::TypeFactory.variant(*callables) : callables.pop
- end
-
- # @api private
- def signatures
- @dispatchers
- end
-
- private
-
+module Puppet::Pops::Evaluator::CallableMismatchDescriber
# Produces a string with the difference between the given arguments and supported signature(s).
#
+ # @param name [String] The name of the callable to describe
+ # @param args_type [Puppet::Pops::Types::Tuple] The tuple of argument types.
+ # @param supported_signatures [Array<Puppet::Pops::Types::Callable>] The available signatures that were available for calling.
+ #
# @api private
- def diff_string(name, args_type)
+ def self.diff_string(name, args_type, supported_signatures)
result = [ ]
- if @dispatchers.size < 2
- dispatch = @dispatchers[ 0 ]
- params_type = dispatch.type.param_types
- block_type = dispatch.type.block_type
- params_names = dispatch.param_names
- result << "expected:\n #{name}(#{signature_string(dispatch)}) - #{arg_count_string(dispatch.type)}"
+ if supported_signatures.size == 1
+ signature = supported_signatures[0]
+ params_type = signature.type.param_types
+ block_type = signature.type.block_type
+ params_names = signature.parameter_names
+ result << "expected:\n #{name}(#{signature_string(signature)}) - #{arg_count_string(signature.type)}"
else
result << "expected one of:\n"
- result << (@dispatchers.map do |d|
- params_type = d.type.param_types
- " #{name}(#{signature_string(d)}) - #{arg_count_string(d.type)}"
- end.join("\n"))
+ result << supported_signatures.map do |signature|
+ params_type = signature.type.param_types
+ " #{name}(#{signature_string(signature)}) - #{arg_count_string(signature.type)}"
+ end.join("\n")
end
+
result << "\nactual:\n #{name}(#{arg_types_string(args_type)}) - #{arg_count_string(args_type)}"
+
result.join('')
end
+ private
+
# Produces a string for the signature(s)
#
# @api private
- def signature_string(dispatch) # args_type, param_names
- param_types = dispatch.type.param_types
- block_type = dispatch.type.block_type
- param_names = dispatch.param_names
+ def self.signature_string(signature)
+ param_types = signature.type.param_types
+ block_type = signature.type.block_type
+ param_names = signature.parameter_names
from, to = param_types.size_range
if from == 0 && to == 0
# No parameters function
return ''
end
required_count = from
# there may be more names than there are types, and the count of the preceding named parameters
# needs to be subtracted from the size range to make it correct for the last named element
adjust = max(0, param_names.size() -1)
last_range = [max(0, (from - adjust)), (to - adjust)]
types =
case param_types
when Puppet::Pops::Types::PTupleType
param_types.types
when Puppet::Pops::Types::PArrayType
[ param_types.element_type ]
end
tc = Puppet::Pops::Types::TypeCalculator
# join type with names (types are always present, names are optional)
# separate entries with comma
#
result =
if param_names.empty?
types.each_with_index.map {|t, index| tc.string(t) + opt_value_indicator(index, required_count, 0) }
else
limit = param_names.size
result = param_names.each_with_index.map do |name, index|
[tc.string(types[index] || types[-1]), name].join(' ') + opt_value_indicator(index, required_count, limit)
end
end.join(', ')
# Add {from, to} for the last type
# This works for both Array and Tuple since it describes the allowed count of the "last" type element
# for both. It does not show anything when the range is {1,1}.
#
result += range_string(last_range)
# If there is a block, include it with its own optional count {0,1}
- case dispatch.type.block_type
+ case signature.type.block_type
when Puppet::Pops::Types::POptionalType
result << ', ' unless result == ''
- result << "#{tc.string(dispatch.type.block_type.optional_type)} #{dispatch.block_name} {0,1}"
+ result << "#{tc.string(signature.type.block_type.optional_type)} #{signature.block_name} {0,1}"
when Puppet::Pops::Types::PCallableType
result << ', ' unless result == ''
- result << "#{tc.string(dispatch.type.block_type)} #{dispatch.block_name}"
+ result << "#{tc.string(signature.type.block_type)} #{signature.block_name}"
when NilClass
# nothing
end
result
end
# Why oh why Ruby do you not have a standard Math.max ?
# @api private
- def max(a, b)
+ def self.max(a, b)
a >= b ? a : b
end
# @api private
- def opt_value_indicator(index, required_count, limit)
+ def self.opt_value_indicator(index, required_count, limit)
count = index + 1
(count > required_count && count < limit) ? '?' : ''
end
# @api private
- def arg_count_string(args_type)
+ def self.arg_count_string(args_type)
if args_type.is_a?(Puppet::Pops::Types::PCallableType)
size_range = args_type.param_types.size_range # regular parameters
adjust_range=
case args_type.block_type
when Puppet::Pops::Types::POptionalType
size_range[1] += 1
when Puppet::Pops::Types::PCallableType
size_range[0] += 1
size_range[1] += 1
when NilClass
# nothing
else
raise ArgumentError, "Internal Error, only nil, Callable, and Optional[Callable] supported by Callable block type"
end
else
size_range = args_type.size_range
end
"arg count #{range_string(size_range, false)}"
end
# @api private
- def arg_types_string(args_type)
+ def self.arg_types_string(args_type)
types =
case args_type
when Puppet::Pops::Types::PTupleType
last_range = args_type.repeat_last_range
args_type.types
when Puppet::Pops::Types::PArrayType
last_range = args_type.size_range
[ args_type.element_type ]
end
# stringify generalized versions or it will display Integer[10,10] for "10", String['the content'] etc.
# note that type must be copied since generalize is a mutating operation
tc = Puppet::Pops::Types::TypeCalculator
result = types.map { |t| tc.string(tc.generalize!(t.copy)) }.join(', ')
# Add {from, to} for the last type
# This works for both Array and Tuple since it describes the allowed count of the "last" type element
# for both. It does not show anything when the range is {1,1}.
#
result += range_string(last_range)
result
end
# Formats a range into a string of the form: `{from, to}`
#
# The following cases are optimized:
#
# * from and to are equal => `{from}`
# * from and to are both 1 and squelch_one == true => `''`
# * from is 0 and to is 1 => `'?'`
# * to is INFINITY => `{from, }`
#
# @api private
- def range_string(size_range, squelch_one = true)
+ def self.range_string(size_range, squelch_one = true)
from, to = size_range
if from == to
(squelch_one && from == 1) ? '' : "{#{from}}"
elsif to == Puppet::Pops::Types::INFINITY
"{#{from},}"
elsif from == 0 && to == 1
'?'
else
"{#{from},#{to}}"
end
end
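# Rough examples of the resulting formatting (following the rules above; INFINITY is
# Puppet::Pops::Types::INFINITY):
#
# @example (hypothetical)
#   range_string([2, 2])        # => "{2}"
#   range_string([1, 1])        # => ""     (squelched)
#   range_string([1, 1], false) # => "{1}"
#   range_string([0, 1])        # => "?"
#   range_string([0, Puppet::Pops::Types::INFINITY]) # => "{0,}"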
end
diff --git a/lib/puppet/pops/evaluator/callable_signature.rb b/lib/puppet/pops/evaluator/callable_signature.rb
index e953a4409..044fc533b 100644
--- a/lib/puppet/pops/evaluator/callable_signature.rb
+++ b/lib/puppet/pops/evaluator/callable_signature.rb
@@ -1,101 +1,100 @@
# CallableSignature
# ===
# A CallableSignature describes how something callable expects to be called.
# Different implementations of this class are used for different types of callables.
#
# @api public
#
class Puppet::Pops::Evaluator::CallableSignature
# Returns the names of the parameters as an array of strings. This does not include the name
# of an optional block parameter.
#
# Implementations are not required to supply names for parameters. If present, names may be used
# to provide user feedback in errors etc., but they are not authoritative about the number of
# required arguments, optional arguments, etc.
#
# A derived class must implement this method.
#
# @return [Array<String>] - an array of names (that may be empty if names are unavailable)
#
# @api public
#
def parameter_names
raise NotImplementedError.new
end
# Returns a PCallableType with the type information, required and optional count, and type information about
# an optional block.
#
# A derived class must implement this method.
#
# @return [Puppet::Pops::Types::PCallableType]
# @api public
#
def type
raise NotImplementedError.new
end
# Returns the expected type for an optional block. The type may be nil, which means that the callable does
# not accept a block. If a type is returned it is one of Callable, Optional[Callable], Variant[Callable,...],
# or Optional[Variant[Callable, ...]]. The Variant type is used when multiple signatures are acceptable.
# The Optional type is used when the block is optional.
#
- # @return [Puppet::Pops::Types::PAbstractType, nil] the expected type of a block given as the last parameter in a call.
+ # @return [Puppet::Pops::Types::PAnyType, nil] the expected type of a block given as the last parameter in a call.
#
# @api public
#
def block_type
type.block_type
end
# Returns the name of the block parameter if the callable accepts a block.
# @return [String] the name of the block parameter
# A derived class must implement this method.
# @api public
#
def block_name
raise NotImplementedError.new
end
# Returns a range indicating the optionality of a block. One of [0,0] (does not accept block), [0,1] (optional
# block), and [1,1] (block required)
#
# @return [Array(Integer, Integer)] the range of the block parameter
#
def block_range
type.block_range
end
# Returns the range of required/optional argument values as an array of [min, max], where an infinite
# end is given as INFINITY. To test against infinity, use the infinity? method.
#
# @return [Array[Integer, Numeric]] - an Array with [min, max]
#
# @api public
#
def args_range
type.size_range
end
# Returns true if the last parameter captures the rest of the arguments, with a possible cap, as indicated
# by the `args_range` method.
# A derived class must implement this method.
#
# @return [Boolean] true if last parameter captures the rest of the given arguments (up to a possible cap)
# @api public
#
def last_captures_rest?
raise NotImplementedError.new
end
# Returns true if the given x is infinity
# @return [Boolean] true, if given value represents infinity
#
# @api public
#
def infinity?(x)
x == Puppet::Pops::Types::INFINITY
end
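# A sketch of how a caller might query a signature in an implementation agnostic way
# (hypothetical; assumes `signature` is an instance of a concrete subclass such as a Closure):
#
# @example (hypothetical)
#   from, to = signature.args_range
#   max_args = signature.infinity?(to) ? 'any number of' : to
#   "#{signature.parameter_names.join(', ')} accepts #{from} to #{max_args} arguments"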
-
end
diff --git a/lib/puppet/pops/evaluator/closure.rb b/lib/puppet/pops/evaluator/closure.rb
index 974d77cd4..0aca525c1 100644
--- a/lib/puppet/pops/evaluator/closure.rb
+++ b/lib/puppet/pops/evaluator/closure.rb
@@ -1,112 +1,228 @@
# A Closure represents logic bound to a particular scope.
# As long as the runtime (basically the scope implementation) behaves like Puppet 3x, it is not
# safe to use this closure after the scope it was initialized with has gone "out of scope".
#
# Note that the implementation is backwards compatible in that the call method accepts a scope, but this
# scope is not used.
#
# Note that this class is a CallableSignature, and the methods defined there should be used
# as the API for obtaining information in a callable implementation agnostic way.
#
class Puppet::Pops::Evaluator::Closure < Puppet::Pops::Evaluator::CallableSignature
attr_reader :evaluator
attr_reader :model
attr_reader :enclosing_scope
def initialize(evaluator, model, scope)
@evaluator = evaluator
@model = model
@enclosing_scope = scope
end
# marker method checked with respond_to :puppet_lambda
# @api private
# @deprecated Use the type system to query if an object is of Callable type, then use its signatures method for info
def puppet_lambda()
true
end
# compatible with 3x AST::Lambda
# @api public
def call(scope, *args)
- @evaluator.call(self, args, @enclosing_scope)
+ variable_bindings = combine_values_with_parameters(args)
+
+ tc = Puppet::Pops::Types::TypeCalculator
+ final_args = tc.infer_set(parameters.inject([]) do |final_args, param|
+ if param.captures_rest
+ final_args.concat(variable_bindings[param.name])
+ else
+ final_args << variable_bindings[param.name]
+ end
+ end)
+
+ if tc.callable?(type, final_args)
+ @evaluator.evaluate_block_with_bindings(@enclosing_scope, variable_bindings, @model.body)
+ else
+ raise ArgumentError, "lambda called with mis-matched arguments\n#{Puppet::Pops::Evaluator::CallableMismatchDescriber.diff_string('lambda', final_args, [self])}"
+ end
end
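# A minimal usage sketch (hypothetical; assumes `closure` was produced for a lambda such as
# `|$x, $y| { $x + $y }`; the scope argument is accepted for 3x compatibility but not used):
#
# @example (hypothetical)
#   closure.call(scope, 1, 2)   # => 3
#   closure.call(scope, 1)      # raises ArgumentError with a diff from CallableMismatchDescriber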
# Call closure with argument assignment by name
- def call_by_name(scope, args_hash, spill_over = false)
- @evaluator.call_by_name(self, args_hash, @enclosing_scope, spill_over)
+ def call_by_name(scope, args_hash, enforce_parameters)
+ if enforce_parameters
+ if args_hash.size > parameters.size
+ raise ArgumentError, "Too many arguments: #{args_hash.size} for #{parameters.size}"
+ end
+
+ # associate values with parameters
+ scope_hash = {}
+ parameters.each do |p|
+ name = p.name
+ if (arg_value = args_hash[name]).nil?
+ # only set result of default expr if it is defined (it is otherwise not possible to differentiate
+ # between explicit undef and no default expression)
+ unless p.value.nil?
+ scope_hash[name] = @evaluator.evaluate(p.value, @enclosing_scope)
+ end
+ else
+ scope_hash[name] = arg_value
+ end
+ end
+
+ missing = parameters.select { |p| !scope_hash.include?(p.name) }
+ if missing.any?
+ raise ArgumentError, "Too few arguments; no value given for required parameters #{missing.collect(&:name).join(" ,")}"
+ end
+
+ tc = Puppet::Pops::Types::TypeCalculator
+ final_args = tc.infer_set(parameter_names.collect { |param| scope_hash[param] })
+ if !tc.callable?(type, final_args)
+ raise ArgumentError, "lambda called with mis-matched arguments\n#{Puppet::Pops::Evaluator::CallableMismatchDescriber.diff_string('lambda', final_args, [self])}"
+ end
+ else
+ scope_hash = args_hash
+ end
+
+ @evaluator.evaluate_block_with_bindings(@enclosing_scope, scope_hash, @model.body)
end
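# A minimal sketch of calling by name (hypothetical; assumes a lambda with parameters $x and $y,
# where $y has a default; with enforce_parameters true, surplus or missing names raise ArgumentError):
#
# @example (hypothetical)
#   closure.call_by_name(scope, { 'x' => 1 }, true)                        # $y gets its default value
#   closure.call_by_name(scope, { 'x' => 1, 'y' => 2, 'z' => 3 }, true)    # raises ArgumentError (too many arguments)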
- # incompatible with 3x except that it is an array of the same size
- def parameters()
- @model.parameters || []
+ def parameters
+ @model.parameters
end
# Returns the number of parameters (required and optional)
# @return [Integer] the total number of accepted parameters
def parameter_count
# yes, this is duplication of code, but it saves a method call
- (@model.parameters || []).size
- end
-
- # Returns the number of optional parameters.
- # @return [Integer] the number of optional accepted parameters
- def optional_parameter_count
- @model.parameters.count { |p| !p.value.nil? }
+ @model.parameters.size
end
# @api public
def parameter_names
- @model.parameters.collect {|p| p.name }
+ @model.parameters.collect(&:name)
end
# @api public
def type
- @callable || create_callable_type
+ @callable ||= create_callable_type
end
# @api public
def last_captures_rest?
- # TODO: No support for this yet
- false
+ last = @model.parameters[-1]
+ last && last.captures_rest
end
# @api public
def block_name
# TODO: Lambda's does not support blocks yet. This is a placeholder
'unsupported_block'
end
private
+ def combine_values_with_parameters(args)
+ variable_bindings = {}
+
+ parameters.each_with_index do |parameter, index|
+ param_captures = parameter.captures_rest
+ default_expression = parameter.value
+
+ if index >= args.size
+ if default_expression
+ # not given, has default
+ value = @evaluator.evaluate(default_expression, @enclosing_scope)
+ if param_captures && !value.is_a?(Array)
+ # correct non array default value
+ value = [value]
+ end
+ else
+ # not given, does not have default
+ if param_captures
+ # default for captures rest is an empty array
+ value = []
+ else
+ @evaluator.fail(Puppet::Pops::Issues::MISSING_REQUIRED_PARAMETER, parameter, { :param_name => parameter.name })
+ end
+ end
+ else
+ given_argument = args[index]
+ if param_captures
+ # get excess arguments
+ value = args[(parameter_count-1)..-1]
+ # If the input was a single nil, or undef, and there is a default, use the default
+ # This supports :undef in case it was used in a 3x data structure and it is passed as an arg
+ #
+ if value.size == 1 && (given_argument.nil? || given_argument == :undef) && default_expression
+ value = @evaluator.evaluate(default_expression, @enclosing_scope)
+ # and ensure it is an array
+ value = [value] unless value.is_a?(Array)
+ end
+ else
+ value = given_argument
+ end
+ end
+
+ variable_bindings[parameter.name] = value
+ end
+
+ variable_bindings
+ end
+
def create_callable_type()
- t = Puppet::Pops::Types::PCallableType.new()
- tuple_t = Puppet::Pops::Types::PTupleType.new()
- # since closure lambdas are currently untyped, each parameter becomes Optional[Object]
- parameter_names.each do |name|
- # TODO: Change when Closure supports typed parameters
- tuple_t.addTypes(Puppet::Pops::Types::TypeFactory.optional_object())
+ types = []
+ range = [0, 0]
+ in_optional_parameters = false
+ parameters.each do |param|
+ type = if param.type_expr
+ @evaluator.evaluate(param.type_expr, @enclosing_scope)
+ else
+ Puppet::Pops::Types::TypeFactory.any()
+ end
+
+ if param.captures_rest && type.is_a?(Puppet::Pops::Types::PArrayType)
+ # An array on a slurp parameter is how a size range is defined for a
+ # slurp (Array[Integer, 1, 3] *$param). However, the callable that is
+ # created can't have the array in that position or else type checking
+ # will require the parameters to be arrays, which isn't what is
+ # intended. The array type contains the intended information and needs
+ # to be unpacked.
+ param_range = type.size_range
+ type = type.element_type
+ elsif param.captures_rest && !type.is_a?(Puppet::Pops::Types::PArrayType)
+ param_range = ANY_NUMBER_RANGE
+ elsif param.value
+ param_range = OPTIONAL_SINGLE_RANGE
+ else
+ param_range = REQUIRED_SINGLE_RANGE
+ end
+
+ types << type
+
+ if param_range[0] == 0
+ in_optional_parameters = true
+ elsif param_range[0] != 0 && in_optional_parameters
+ @evaluator.fail(Puppet::Pops::Issues::REQUIRED_PARAMETER_AFTER_OPTIONAL, param, { :param_name => param.name })
+ end
+
+ range[0] += param_range[0]
+ range[1] += param_range[1]
end
- # TODO: A Lambda can not currently declare varargs
- to = parameter_count
- from = to - optional_parameter_count
- if from != to
- size_t = Puppet::Pops::Types::PIntegerType.new()
- size_t.from = size
- size_t.to = size
- tuple_t.size_type = size_t
+ if range[1] == Puppet::Pops::Types::INFINITY
+ range[1] = :default
end
- t.param_types = tuple_t
- # TODO: A Lambda can not currently declare that it accepts a lambda, except as an explicit parameter
- # being a Callable
- t
+
+ Puppet::Pops::Types::TypeFactory.callable(*(types + range))
end
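# Rough illustration of how parameter declarations map to the produced callable type
# (the Puppet source fragments are hypothetical; string forms are approximate):
#
# @example (hypothetical)
#   |$a, $b = 10|              # => roughly Callable[Any, Any, 1, 2]
#   |Integer $a, *$rest|       # => roughly Callable[Integer, Any, 1, default]
#   |Array[Integer, 1, 3] *$r| # => roughly Callable[Integer, 1, 3]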
# Produces information about parameters compatible with a 4x Function (which can have multiple signatures)
def signatures
[ self ]
end
+ ANY_NUMBER_RANGE = [0, Puppet::Pops::Types::INFINITY]
+ OPTIONAL_SINGLE_RANGE = [0, 1]
+ REQUIRED_SINGLE_RANGE = [1, 1]
end
diff --git a/lib/puppet/pops/evaluator/compare_operator.rb b/lib/puppet/pops/evaluator/compare_operator.rb
index 38575b5a4..900a717b6 100644
--- a/lib/puppet/pops/evaluator/compare_operator.rb
+++ b/lib/puppet/pops/evaluator/compare_operator.rb
@@ -1,168 +1,172 @@
# Compares the puppet DSL way
#
# ==Equality
# All string vs. numeric equalities check for numeric equality first, then string equality
# Arrays are equal to arrays if they have the same length, and each element #equals
# Hashes are equal to hashes if they have the same size and keys and values #equals.
# All other objects are equal if they are ruby #== equal
#
class Puppet::Pops::Evaluator::CompareOperator
include Puppet::Pops::Utils
+ # Provides access to the Puppet 3.x runtime (scope, etc.)
+ # This separation has been made to make it easier to later migrate the evaluator to an improved runtime.
+ #
+ include Puppet::Pops::Evaluator::Runtime3Support
+
def initialize
@@equals_visitor ||= Puppet::Pops::Visitor.new(self, "equals", 1, 1)
@@compare_visitor ||= Puppet::Pops::Visitor.new(self, "cmp", 1, 1)
- @@include_visitor ||= Puppet::Pops::Visitor.new(self, "include", 1, 1)
+ @@include_visitor ||= Puppet::Pops::Visitor.new(self, "include", 2, 2)
@type_calculator = Puppet::Pops::Types::TypeCalculator.new()
end
def equals (a, b)
@@equals_visitor.visit_this_1(self, a, b)
end
# Performs a comparison of a and b, and return > 0 if a is bigger, 0 if equal, and < 0 if b is bigger.
# Comparison of String vs. Numeric always compares using numeric.
def compare(a, b)
@@compare_visitor.visit_this_1(self, a, b)
end
# Answers whether b is included in a
- def include?(a, b)
- @@include_visitor.visit_this_1(self, a, b)
+ def include?(a, b, scope)
+ @@include_visitor.visit_this_2(self, a, b, scope)
end
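# Rough examples of the `in` semantics implemented here (hypothetical; assumes a 3.x Scope in `scope`;
# a Regexp match also sets $0..$n in that scope as ephemeral match data):
#
# @example (hypothetical)
#   op = Puppet::Pops::Evaluator::CompareOperator.new
#   op.include?('Hello World', 'world', scope)   # => true (case independent substring)
#   op.include?([1, 2, 3], 2, scope)             # => true
#   op.include?({'a' => 1}, 'a', scope)          # => true (checks hash keys)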
protected
def cmp_String(a, b)
# if both are numerics in string form, compare as number
n1 = Puppet::Pops::Utils.to_n(a)
n2 = Puppet::Pops::Utils.to_n(b)
# Numeric is always lexically smaller than a string, even if the string is empty.
return n1 <=> n2 if n1 && n2
return -1 if n1 && b.is_a?(String)
return 1 if n2
return a.casecmp(b) if b.is_a?(String)
raise ArgumentError.new("A String is not comparable to a non String or Number")
end
# Equality is case independent.
def equals_String(a, b)
if n1 = Puppet::Pops::Utils.to_n(a)
if n2 = Puppet::Pops::Utils.to_n(b)
n1 == n2
else
false
end
else
return false unless b.is_a?(String)
a.casecmp(b) == 0
end
end
def cmp_Numeric(a, b)
if n2 = Puppet::Pops::Utils.to_n(b)
a <=> n2
elsif b.kind_of?(String)
# Numeric is always lexicographically smaller than a string, even if the string is empty.
-1
else
raise ArgumentError.new("A Numeric is not comparable to non Numeric or String")
end
end
def equals_Numeric(a, b)
if n2 = Puppet::Pops::Utils.to_n(b)
a == n2
else
false
end
end
def equals_Array(a, b)
return false unless b.is_a?(Array) && a.size == b.size
a.each_index {|i| return false unless equals(a.slice(i), b.slice(i)) }
true
end
def equals_Hash(a, b)
return false unless b.is_a?(Hash) && a.size == b.size
a.each {|ak, av| return false unless equals(b[ak], av)}
true
end
def cmp_Symbol(a, b)
if b.is_a?(Symbol)
a <=> b
else
raise ArgumentError.new("Symbol not comparable to non Symbol")
end
end
def cmp_Object(a, b)
raise ArgumentError.new("Only Strings and Numbers are comparable")
end
def equals_Object(a, b)
a == b
end
def equals_NilClass(a, b)
+ # :undef supported in case it is passed from a 3x data structure
b.nil? || b == :undef
end
def equals_Symbol(a, b)
+ # :undef supported in case it is passed from a 3x data structure
a == b || a == :undef && b.nil?
end
- def include_Object(a, b)
+ def include_Object(a, b, scope)
false
end
- def include_String(a, b)
+ def include_String(a, b, scope)
case b
when String
# substring search, downcased
a.downcase.include?(b.downcase)
when Regexp
- # match (convert to boolean)
- !!(a =~ b)
+ matched = a.match(b) # nil, or MatchData
+ set_match_data(matched, scope) # creates ephemeral
+ !!matched # match (convert to boolean)
when Numeric
# convert string to number, true if ==
equals(a, b)
- when Puppet::Pops::Types::PStringType
- # is there a string in a string? (yes, each char is a string, and an empty string contains an empty string)
- true
else
- if b == Puppet::Pops::Types::PDataType || b == Puppet::Pops::Types::PObjectType
- # A String is Data and Object (but not of all subtypes of those types).
- true
- else
- false
- end
+ false
end
end
- def include_Array(a, b)
+ def include_Array(a, b, scope)
case b
when Regexp
+ matched = nil
a.each do |element|
next unless element.is_a? String
- return true if element =~ b
+ matched = element.match(b) # nil, or MatchData
+ break if matched
end
- return false
- when Puppet::Pops::Types::PAbstractType
+ # Always set match data, a "not found" should not keep old match data visible
+ set_match_data(matched, scope) # creates ephemeral
+ return !!matched
+ when Puppet::Pops::Types::PAnyType
a.each {|element| return true if @type_calculator.instance?(b, element) }
return false
else
a.each {|element| return true if equals(element, b) }
return false
end
end
- def include_Hash(a, b)
- include?(a.keys, b)
+ def include_Hash(a, b, scope)
+ include?(a.keys, b, scope)
end
end
diff --git a/lib/puppet/pops/evaluator/epp_evaluator.rb b/lib/puppet/pops/evaluator/epp_evaluator.rb
index 31e59aea7..57a95bc71 100644
--- a/lib/puppet/pops/evaluator/epp_evaluator.rb
+++ b/lib/puppet/pops/evaluator/epp_evaluator.rb
@@ -1,87 +1,88 @@
# Handler of Epp call/evaluation from the epp and inline_epp functions
#
class Puppet::Pops::Evaluator::EppEvaluator
def self.inline_epp(scope, epp_source, template_args = nil)
- unless epp_source.is_a? String
+ unless epp_source.is_a?(String)
raise ArgumentError, "inline_epp(): the first argument must be a String with the epp source text, got a #{epp_source.class}"
end
# Parse and validate the source
parser = Puppet::Pops::Parser::EvaluatingParser::EvaluatingEppParser.new
begin
- result = parser.parse_string(epp_source, 'inlined-epp-text')
+ result = parser.parse_string(epp_source, 'inlined-epp-text')
rescue Puppet::ParseError => e
raise ArgumentError, "inline_epp(): Invalid EPP: #{e.message}"
end
# Evaluate (and check template_args)
evaluate(parser, 'inline_epp', scope, false, result, template_args)
end
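# A minimal usage sketch (hypothetical; assumes `scope` is a Puppet::Parser::Scope and the
# template text is made up):
#
# @example (hypothetical)
#   Puppet::Pops::Evaluator::EppEvaluator.inline_epp(scope, 'Hello <%= $name %>', { 'name' => 'world' })
#   # => "Hello world"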
def self.epp(scope, file, env_name, template_args = nil)
- unless file.is_a? String
+ unless file.is_a?(String)
raise ArgumentError, "epp(): the first argument must be a String with the filename, got a #{file.class}"
end
file = file + ".epp" unless file =~ /\.epp$/
scope.debug "Retrieving epp template #{file}"
template_file = Puppet::Parser::Files.find_template(file, env_name)
unless template_file
raise Puppet::ParseError, "Could not find template '#{file}'"
end
# Parse and validate the source
parser = Puppet::Pops::Parser::EvaluatingParser::EvaluatingEppParser.new
begin
- result = parser.parse_file(template_file)
+ result = parser.parse_file(template_file)
rescue Puppet::ParseError => e
raise ArgumentError, "epp(): Invalid EPP: #{e.message}"
end
# Evaluate (and check template_args)
evaluate(parser, 'epp', scope, true, result, template_args)
end
private
def self.evaluate(parser, func_name, scope, use_global_scope_only, parse_result, template_args)
template_args, template_args_set = handle_template_args(func_name, template_args)
body = parse_result.body
unless body.is_a?(Puppet::Pops::Model::LambdaExpression)
raise ArgumentError, "#{func_name}(): the parser did not produce a LambdaExpression, got '#{body.class}'"
end
unless body.body.is_a?(Puppet::Pops::Model::EppExpression)
raise ArgumentError, "#{func_name}(): the parser did not produce an EppExpression, got '#{body.body.class}'"
end
unless parse_result.definitions.empty?
raise ArgumentError, "#{func_name}(): The EPP template contains illegal expressions (definitions)"
end
- see_scope = body.body.see_scope
- if see_scope && !template_args_set
- # no epp params and no arguments were given => inline_epp logic sees all local variables, epp all global
- closure_scope = use_global_scope_only ? scope.find_global_scope : scope
- spill_over = false
- else
- # no epp params or user provided arguments in a hash, epp logic only sees global + what was given
+ parameters_specified = body.body.parameters_specified
+ if parameters_specified || template_args_set
+ # epp parameters are specified or the user provided arguments in a hash; epp() logic
+ # only sees global scope + what was given
closure_scope = scope.find_global_scope
- # given spill over if there are no params (e.g. replace closure scope by a new scope with the given args)
- spill_over = see_scope
+ enforce_parameters = parameters_specified
+ else
+ # no epp params and no arguments were given => inline_epp() logic
+ # sees all local variables, epp() all global
+ closure_scope = use_global_scope_only ? scope.find_global_scope : scope
+ enforce_parameters = true
end
- evaluated_result = parser.closure(body, closure_scope).call_by_name(scope, template_args, spill_over)
+ evaluated_result = parser.closure(body, closure_scope).call_by_name(scope, template_args, enforce_parameters)
evaluated_result
end
def self.handle_template_args(func_name, template_args)
if template_args.nil?
[{}, false]
else
unless template_args.is_a?(Hash)
raise ArgumentError, "#{func_name}(): the template_args must be a Hash, got a #{template_args.class}"
end
[template_args, true]
end
end
-end
\ No newline at end of file
+end
diff --git a/lib/puppet/pops/evaluator/evaluator_impl.rb b/lib/puppet/pops/evaluator/evaluator_impl.rb
index fe60cc0d9..02098d674 100644
--- a/lib/puppet/pops/evaluator/evaluator_impl.rb
+++ b/lib/puppet/pops/evaluator/evaluator_impl.rb
@@ -1,1067 +1,1115 @@
require 'rgen/ecore/ecore'
require 'puppet/pops/evaluator/compare_operator'
require 'puppet/pops/evaluator/relationship_operator'
require 'puppet/pops/evaluator/access_operator'
require 'puppet/pops/evaluator/closure'
require 'puppet/pops/evaluator/external_syntax_support'
# This implementation of {Puppet::Pops::Evaluator} performs evaluation using the puppet 3.x runtime system
# in a manner largely compatible with Puppet 3.x, but adds new features and introduces constraints.
#
# The evaluation uses _polymorphic dispatch_ which works by dispatching to the first found method named after
# the class or one of its super-classes. The EvaluatorImpl itself mainly deals with evaluation (it currently
# also handles assignment), and it uses a delegation pattern to more specialized handlers of some operators
# that in turn use polymorphic dispatch; this is to avoid cluttering EvaluatorImpl with too much responsibility.
#
# Since a pattern is used, only the main entry points are fully documented. The parameters _o_ and _scope_ are
# the same in all the polymorphic methods, (the type of the parameter _o_ is reflected in the method's name;
# either the actual class, or one of its super classes). The _scope_ parameter is always the scope in which
# the evaluation takes place. If nothing else is mentioned, the return is always the result of evaluation.
#
# See {Puppet::Pops::Visitable} and {Puppet::Pops::Visitor} for more information about
# polymorphic calling.
#
class Puppet::Pops::Evaluator::EvaluatorImpl
include Puppet::Pops::Utils
# Provides access to the Puppet 3.x runtime (scope, etc.)
# This separation has been made to make it easier to later migrate the evaluator to an improved runtime.
#
include Puppet::Pops::Evaluator::Runtime3Support
include Puppet::Pops::Evaluator::ExternalSyntaxSupport
# This constant is not defined as Float::INFINITY in Ruby 1.8.7 (but is available in later versions).
# Refactor when support is dropped for Ruby 1.8.7.
#
INFINITY = 1.0 / 0.0
+ EMPTY_STRING = ''.freeze
+ COMMA_SEPARATOR = ', '.freeze
# Reference to Issues name space makes it easier to refer to issues
# (Issues are shared with the validator).
#
Issues = Puppet::Pops::Issues
def initialize
- @@eval_visitor ||= Puppet::Pops::Visitor.new(self, "eval", 1, 1)
+ @@eval_visitor ||= Puppet::Pops::Visitor.new(self, "eval", 1, 1)
@@lvalue_visitor ||= Puppet::Pops::Visitor.new(self, "lvalue", 1, 1)
@@assign_visitor ||= Puppet::Pops::Visitor.new(self, "assign", 3, 3)
@@string_visitor ||= Puppet::Pops::Visitor.new(self, "string", 1, 1)
@@type_calculator ||= Puppet::Pops::Types::TypeCalculator.new()
@@type_parser ||= Puppet::Pops::Types::TypeParser.new()
@@compare_operator ||= Puppet::Pops::Evaluator::CompareOperator.new()
@@relationship_operator ||= Puppet::Pops::Evaluator::RelationshipOperator.new()
# Initialize the runtime module
Puppet::Pops::Evaluator::Runtime3Support.instance_method(:initialize).bind(self).call()
end
# @api private
def type_calculator
@@type_calculator
end
- # Polymorphic evaluate - calls eval_TYPE
+ # Evaluates the given _target_ object in the given scope.
#
- # ## Polymorphic evaluate
- # Polymorphic evaluate calls a method on the format eval_TYPE where classname is the last
- # part of the class of the given _target_. A search is performed starting with the actual class, continuing
- # with each of the _target_ class's super classes until a matching method is found.
- #
- # # Description
- # Evaluates the given _target_ object in the given scope, optionally passing a block which will be
- # called with the result of the evaluation.
- #
- # @overload evaluate(target, scope, {|result| block})
+ # @overload evaluate(target, scope)
# @param target [Object] evaluation target - see methods on the pattern eval_TYPE for actual supported types.
# @param scope [Object] the runtime specific scope class where evaluation should take place
# @return [Object] the result of the evaluation
#
- # @api
+ # @api public
#
def evaluate(target, scope)
begin
@@eval_visitor.visit_this_1(self, target, scope)
rescue Puppet::Pops::SemanticError => e
# a raised issue may not know the semantic target
fail(e.issue, e.semantic || target, e.options, e)
rescue StandardError => e
if e.is_a? Puppet::ParseError
# ParseError's are supposed to be fully configured with location information
raise e
end
fail(Issues::RUNTIME_ERROR, target, {:detail => e.message}, e)
end
end
- # Polymorphic assign - calls assign_TYPE
- #
- # ## Polymorphic assign
- # Polymorphic assign calls a method on the format assign_TYPE where TYPE is the last
- # part of the class of the given _target_. A search is performed starting with the actual class, continuing
- # with each of the _target_ class's super classes until a matching method is found.
- #
- # # Description
# Assigns the given _value_ to the given _target_. The additional argument _o_ is the instruction that
# produced the target/value tuple and it is used to set the origin of the result.
+ #
# @param target [Object] assignment target - see methods on the pattern assign_TYPE for actual supported types.
# @param value [Object] the value to assign to `target`
# @param o [Puppet::Pops::Model::PopsObject] originating instruction
# @param scope [Object] the runtime specific scope where evaluation should take place
#
- # @api
+ # @api private
#
def assign(target, value, o, scope)
@@assign_visitor.visit_this_3(self, target, value, o, scope)
end
+ # Computes a value that can be used as the LHS in an assignment.
+ # @param o [Object] the expression to evaluate as a left (assignable) entity
+ # @param scope [Object] the runtime specific scope where evaluation should take place
+ #
+ # @api private
+ #
def lvalue(o, scope)
@@lvalue_visitor.visit_this_1(self, o, scope)
end
+ # Produces a String representation of the given object _o_ as used in interpolation.
+ # @param o [Object] the expression of which a string representation is wanted
+ # @param scope [Object] the runtime specific scope where evaluation should take place
+ #
+ # @api public
+ #
def string(o, scope)
@@string_visitor.visit_this_1(self, o, scope)
end
- # Call a closure matching arguments by name - Can only be called with a Closure (for now), may be refactored later
- # to also handle other types of calls (function calls are also handled by CallNamedFunction and CallMethod, they
- # could create similar objects to Closure, wait until other types of defines are instantiated - they may behave
- # as special cases of calls - i.e. 'new').
+ # Evaluate a BlockExpression in a new scope with variables bound to the
+ # given values.
#
- # Call by name supports a "spill_over" mode where extra arguments in the given args_hash are introduced
- # as variables in the resulting scope.
+ # @param scope [Puppet::Parser::Scope] the parent scope
+ # @param variable_bindings [Hash{String => Object}] the variable names and values to bind (names are keys, bound values are values)
+ # @param block [Puppet::Pops::Model::BlockExpression] the sequence of expressions to evaluate in the new scope
#
- # @raise ArgumentError, if there are to many or too few arguments
- # @raise ArgumentError, if given closure is not a Puppet::Pops::Evaluator::Closure
- #
- def call_by_name(closure, args_hash, scope, spill_over = false)
- raise ArgumentError, "Can only call a Lambda" unless closure.is_a?(Puppet::Pops::Evaluator::Closure)
- pblock = closure.model
- parameters = pblock.parameters || []
-
- if !spill_over && args_hash.size > parameters.size
- raise ArgumentError, "Too many arguments: #{args_hash.size} for #{parameters.size}"
- end
-
- # associate values with parameters
- scope_hash = {}
- parameters.each do |p|
- scope_hash[p.name] = args_hash[p.name] || evaluate(p.value, scope)
- end
- missing = scope_hash.reduce([]) {|memo, entry| memo << entry[0] if entry[1].nil?; memo }
- unless missing.empty?
- optional = parameters.count { |p| !p.value.nil? }
- raise ArgumentError, "Too few arguments; no value given for required parameters #{missing.join(" ,")}"
- end
- if spill_over
- # all args from given hash should be used, nil entries replaced by default values should win
- scope_hash = args_hash.merge(scope_hash)
- end
-
- # Store the evaluated name => value associations in a new inner/local/ephemeral scope
- # (This is made complicated due to the fact that the implementation of scope is overloaded with
- # functionality and an inner ephemeral scope must be used (as opposed to just pushing a local scope
- # on a scope "stack").
-
- # Ensure variable exists with nil value if error occurs.
- # Some ruby implementations does not like creating variable on return
- result = nil
- begin
- scope_memo = get_scope_nesting_level(scope)
- # change to create local scope_from - cannot give it file and line - that is the place of the call, not
- # "here"
- create_local_scope_from(scope_hash, scope)
- result = evaluate(pblock.body, scope)
- ensure
- set_scope_nesting_level(scope, scope_memo)
- end
- result
- end
-
- # Call a closure - Can only be called with a Closure (for now), may be refactored later
- # to also handle other types of calls (function calls are also handled by CallNamedFunction and CallMethod, they
- # could create similar objects to Closure, wait until other types of defines are instantiated - they may behave
- # as special cases of calls - i.e. 'new')
- #
- # @raise ArgumentError, if there are to many or too few arguments
- # @raise ArgumentError, if given closure is not a Puppet::Pops::Evaluator::Closure
+ # @api private
#
- def call(closure, args, scope)
- raise ArgumentError, "Can only call a Lambda" unless closure.is_a?(Puppet::Pops::Evaluator::Closure)
- pblock = closure.model
- parameters = pblock.parameters || []
-
- raise ArgumentError, "Too many arguments: #{args.size} for #{parameters.size}" unless args.size <= parameters.size
-
- # associate values with parameters
- merged = parameters.zip(args)
- # calculate missing arguments
- missing = parameters.slice(args.size, parameters.size - args.size).select {|p| p.value.nil? }
- unless missing.empty?
- optional = parameters.count { |p| !p.value.nil? }
- raise ArgumentError, "Too few arguments; #{args.size} for #{optional > 0 ? ' min ' : ''}#{parameters.size - optional}"
- end
-
- evaluated = merged.collect do |m|
- # m can be one of
- # m = [Parameter{name => "name", value => nil], "given"]
- # | [Parameter{name => "name", value => Expression}, "given"]
- #
- # "given" is always an optional entry. If a parameter was provided then
- # the entry will be in the array, otherwise the m array will be a
- # single element.
- given_argument = m[1]
- argument_name = m[0].name
- default_expression = m[0].value
-
- value = if default_expression
- evaluate(default_expression, scope)
- else
- given_argument
- end
- [argument_name, value]
- end
-
- # Store the evaluated name => value associations in a new inner/local/ephemeral scope
- # (This is made complicated due to the fact that the implementation of scope is overloaded with
- # functionality and an inner ephemeral scope must be used (as opposed to just pushing a local scope
- # on a scope "stack").
-
- # Ensure variable exists with nil value if error occurs.
- # Some ruby implementations does not like creating variable on return
- result = nil
- begin
- scope_memo = get_scope_nesting_level(scope)
- # change to create local scope_from - cannot give it file and line - that is the place of the call, not
- # "here"
- create_local_scope_from(Hash[evaluated], scope)
- result = evaluate(pblock.body, scope)
- ensure
- set_scope_nesting_level(scope, scope_memo)
+ def evaluate_block_with_bindings(scope, variable_bindings, block_expr)
+ with_guarded_scope(scope) do
+ # change to create local scope_from - cannot give it file and line -
+ # that is the place of the call, not "here"
+ create_local_scope_from(variable_bindings, scope)
+ evaluate(block_expr, scope)
end
- result
end
protected
def lvalue_VariableExpression(o, scope)
# evaluate the name
evaluate(o.expr, scope)
end
# Catches all illegal lvalues
#
def lvalue_Object(o, scope)
fail(Issues::ILLEGAL_ASSIGNMENT, o)
end
# Assign value to named variable.
# The '$' sign is never part of the name.
# @example In Puppet DSL
# $name = value
# @param name [String] name of variable without $
# @param value [Object] value to assign to the variable
# @param o [Puppet::Pops::Model::PopsObject] originating instruction
# @param scope [Object] the runtime specific scope where evaluation should take place
# @return [value<Object>]
#
def assign_String(name, value, o, scope)
if name =~ /::/
fail(Issues::CROSS_SCOPE_ASSIGNMENT, o.left_expr, {:name => name})
end
set_variable(name, value, o, scope)
value
end
def assign_Numeric(n, value, o, scope)
fail(Issues::ILLEGAL_NUMERIC_ASSIGNMENT, o.left_expr, {:varname => n.to_s})
end
# Catches all illegal assignment (e.g. 1 = 2, {'a'=>1} = 2, etc)
#
def assign_Object(name, value, o, scope)
fail(Issues::ILLEGAL_ASSIGNMENT, o)
end
def eval_Factory(o, scope)
evaluate(o.current, scope)
end
# Evaluates any object not evaluated to something else to itself.
def eval_Object o, scope
o
end
- # Allows nil to be used as a Nop.
- # Evaluates to nil
- # TODO: What is the difference between literal undef, nil, and nop?
- #
+ # Allows nil to be used as a Nop; evaluates to nil.
def eval_NilClass(o, scope)
nil
end
# Evaluates Nop to nil.
- # TODO: or is this the same as :undef
- # TODO: is this even needed as a separate instruction when there is a literal undef?
def eval_Nop(o, scope)
nil
end
# Captures all LiteralValues not handled elsewhere.
#
def eval_LiteralValue(o, scope)
o.value
end
+ # Reserved Words fail to evaluate
+ #
+ def eval_ReservedWord(o, scope)
+ fail(Puppet::Pops::Issues::RESERVED_WORD, o, {:word => o.word})
+ end
+
def eval_LiteralDefault(o, scope)
:default
end
def eval_LiteralUndef(o, scope)
- :undef # TODO: or just use nil for this?
+ nil
end
# A QualifiedReference (i.e. a capitalized qualified name such as Foo, or Foo::Bar) evaluates to a PType
#
def eval_QualifiedReference(o, scope)
@@type_parser.interpret(o)
end
def eval_NotExpression(o, scope)
! is_true?(evaluate(o.expr, scope))
end
def eval_UnaryMinusExpression(o, scope)
- coerce_numeric(evaluate(o.expr, scope), o, scope)
end
+ def eval_UnfoldExpression(o, scope)
+ candidate = evaluate(o.expr, scope)
+ case candidate
+ when Array
+ candidate
+ when Hash
+ candidate.to_a
+ else
+ # turns anything else into an array (so result can be unfolded)
+ [candidate]
+ end
+ end
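+ # Rough illustration of unfold results (the Puppet source fragments are hypothetical):
+ #
+ # @example (hypothetical)
+ #   *[1, 2, 3]    # evaluates to [1, 2, 3]
+ #   *{'a' => 1}   # evaluates to [['a', 1]]
+ #   *'hello'      # evaluates to ['hello']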
+
# Abstract evaluation, returns array [left, right] with the evaluated result of left_expr and
# right_expr
# @return <Array<Object, Object>> array with result of evaluating left and right expressions
#
def eval_BinaryExpression o, scope
[ evaluate(o.left_expr, scope), evaluate(o.right_expr, scope) ]
end
# Evaluates assignment with operators =, +=, -= and
#
# @example Puppet DSL
# $a = 1
# $a += 1
# $a -= 1
#
def eval_AssignmentExpression(o, scope)
name = lvalue(o.left_expr, scope)
value = evaluate(o.right_expr, scope)
- case o.operator
- when :'=' # regular assignment
+ if o.operator == :'='
assign(name, value, o, scope)
-
- when :'+='
- # if value does not exist and strict is on, looking it up fails, else it is nil or :undef
- existing_value = get_variable_value(name, o, scope)
- begin
- if existing_value.nil? || existing_value == :undef
- assign(name, value, o, scope)
- else
- # Delegate to calculate function to deal with check of LHS, and perform ´+´ as arithmetic or concatenation the
- # same way as ArithmeticExpression performs `+`.
- assign(name, calculate(existing_value, value, :'+', o.left_expr, o.right_expr, scope), o, scope)
- end
- rescue ArgumentError => e
- fail(Issues::APPEND_FAILED, o, {:message => e.message})
- end
-
- when :'-='
- # If an attempt is made to delete values from something that does not exists, the value is :undef (it is guaranteed to not
- # include any values the user wants deleted anyway :-)
- #
- # if value does not exist and strict is on, looking it up fails, else it is nil or :undef
- existing_value = get_variable_value(name, o, scope)
- begin
- if existing_value.nil? || existing_value == :undef
- assign(name, :undef, o, scope)
- else
- # Delegate to delete function to deal with check of LHS, and perform deletion
- assign(name, delete(get_variable_value(name, o, scope), value), o, scope)
- end
- rescue ArgumentError => e
- fail(Issues::APPEND_FAILED, o, {:message => e.message}, e)
- end
else
fail(Issues::UNSUPPORTED_OPERATOR, o, {:operator => o.operator})
end
value
end
ARITHMETIC_OPERATORS = [:'+', :'-', :'*', :'/', :'%', :'<<', :'>>']
COLLECTION_OPERATORS = [:'+', :'-', :'<<']
# Handles binary expression where lhs and rhs are array/hash or numeric and operator is +, - , *, % / << >>
#
def eval_ArithmeticExpression(o, scope)
- left, right = eval_BinaryExpression(o, scope)
+ left = evaluate(o.left_expr, scope)
+ right = evaluate(o.right_expr, scope)
+
begin
result = calculate(left, right, o.operator, o.left_expr, o.right_expr, scope)
rescue ArgumentError => e
fail(Issues::RUNTIME_ERROR, o, {:detail => e.message}, e)
end
result
end
# Handles binary expression where lhs and rhs are array/hash or numeric and operator is +, - , *, % / << >>
#
def calculate(left, right, operator, left_o, right_o, scope)
unless ARITHMETIC_OPERATORS.include?(operator)
fail(Issues::UNSUPPORTED_OPERATOR, left_o.eContainer, {:operator => operator})
end
if (left.is_a?(Array) || left.is_a?(Hash)) && COLLECTION_OPERATORS.include?(operator)
# Handle operation on collections
case operator
when :'+'
concatenate(left, right)
when :'-'
delete(left, right)
when :'<<'
unless left.is_a?(Array)
fail(Issues::OPERATOR_NOT_APPLICABLE, left_o, {:operator => operator, :left_value => left})
end
left + [right]
end
else
# Handle operation on numeric
left = coerce_numeric(left, left_o, scope)
right = coerce_numeric(right, right_o, scope)
begin
if operator == :'%' && (left.is_a?(Float) || right.is_a?(Float))
# Deny users the fun of seeing severe rounding errors and confusing results
fail(Issues::OPERATOR_NOT_APPLICABLE, left_o, {:operator => operator, :left_value => left})
end
result = left.send(operator, right)
rescue NoMethodError => e
fail(Issues::OPERATOR_NOT_APPLICABLE, left_o, {:operator => operator, :left_value => left})
rescue ZeroDivisionError => e
fail(Issues::DIV_BY_ZERO, right_o)
end
if result == INFINITY || result == -INFINITY
fail(Issues::RESULT_IS_INFINITY, left_o, {:operator => operator})
end
result
end
end
def eval_EppExpression(o, scope)
scope["@epp"] = []
evaluate(o.body, scope)
- result = scope["@epp"].join('')
+ result = scope["@epp"].join
result
end
def eval_RenderStringExpression(o, scope)
scope["@epp"] << o.value.dup
nil
end
def eval_RenderExpression(o, scope)
scope["@epp"] << string(evaluate(o.expr, scope), scope)
nil
end
# Evaluates Puppet DSL ->, ~>, <-, and <~
def eval_RelationshipExpression(o, scope)
# First level evaluation, reduction to basic data types or puppet types, the relationship operator then translates this
# to the final set of references (turning strings into references, which cannot naturally be done by the main evaluator since
# not all strings should be turned into references).
#
real = eval_BinaryExpression(o, scope)
@@relationship_operator.evaluate(real, o, scope)
end
# Evaluates x[key, key, ...]
#
def eval_AccessExpression(o, scope)
left = evaluate(o.left_expr, scope)
keys = o.keys.nil? ? [] : o.keys.collect {|key| evaluate(key, scope) }
Puppet::Pops::Evaluator::AccessOperator.new(o).access(left, scope, *keys)
end
# Evaluates <, <=, >, >=, and ==
#
def eval_ComparisonExpression o, scope
- left, right = eval_BinaryExpression o, scope
+ left = evaluate(o.left_expr, scope)
+ right = evaluate(o.right_expr, scope)
begin
# Left is a type
- if left.is_a?(Puppet::Pops::Types::PAbstractType)
+ if left.is_a?(Puppet::Pops::Types::PAnyType)
case o.operator
when :'=='
@@type_calculator.equals(left,right)
when :'!='
!@@type_calculator.equals(left,right)
when :'<'
# left can be assigned to right, but they are not equal
@@type_calculator.assignable?(right, left) && ! @@type_calculator.equals(left,right)
when :'<='
# left can be assigned to right
@@type_calculator.assignable?(right, left)
when :'>'
# right can be assigned to left, but they are not equal
@@type_calculator.assignable?(left,right) && ! @@type_calculator.equals(left,right)
when :'>='
# right can be assigned to left
@@type_calculator.assignable?(left, right)
else
fail(Issues::UNSUPPORTED_OPERATOR, o, {:operator => o.operator})
end
else
case o.operator
when :'=='
@@compare_operator.equals(left,right)
when :'!='
! @@compare_operator.equals(left,right)
when :'<'
@@compare_operator.compare(left,right) < 0
when :'<='
@@compare_operator.compare(left,right) <= 0
when :'>'
@@compare_operator.compare(left,right) > 0
when :'>='
@@compare_operator.compare(left,right) >= 0
else
fail(Issues::UNSUPPORTED_OPERATOR, o, {:operator => o.operator})
end
end
rescue ArgumentError => e
fail(Issues::COMPARISON_NOT_POSSIBLE, o, {
:operator => o.operator,
:left_value => left,
:right_value => right,
:detail => e.message}, e)
end
end
end
# Evaluates matching expressions with type, string or regexp rhs expression.
- # If RHS is a type, the =~ matches compatible (assignable?) type.
+ # If RHS is a type, the =~ matches compatible (instance? of) type.
#
# @example
# x =~ /abc.*/
# @example
# x =~ "abc.*/"
# @example
# y = "abc"
# x =~ "${y}.*"
# @example
# [1,2,3] =~ Array[Integer[1,10]]
+ #
+ # Note that a String is not an instance of Regexp; only regular expressions are.
+ # Use the Pattern type instead, as it is specified as a subtype of String.
+ #
# @return [Boolean] if a match was made or not. Also sets $0..$n to matchdata in current scope.
#
def eval_MatchExpression o, scope
- left, pattern = eval_BinaryExpression o, scope
+ left = evaluate(o.left_expr, scope)
+ pattern = evaluate(o.right_expr, scope)
+
# matches RHS types as instance of for all types except a parameterized Regexp[R]
- if pattern.is_a?(Puppet::Pops::Types::PAbstractType)
- if pattern.is_a?(Puppet::Pops::Types::PRegexpType) && pattern.pattern
- # A qualified PRegexpType, get its ruby regexp
- pattern = pattern.regexp
- else
- # evaluate as instance?
- matched = @@type_calculator.instance?(pattern, left)
- # convert match result to Boolean true, or false
- return o.operator == :'=~' ? !!matched : !matched
- end
+ if pattern.is_a?(Puppet::Pops::Types::PAnyType)
+ # evaluate as instance? of type check
+ matched = @@type_calculator.instance?(pattern, left)
+ # convert match result to Boolean true, or false
+ return o.operator == :'=~' ? !!matched : !matched
end
begin
pattern = Regexp.new(pattern) unless pattern.is_a?(Regexp)
rescue StandardError => e
fail(Issues::MATCH_NOT_REGEXP, o.right_expr, {:detail => e.message}, e)
end
unless left.is_a?(String)
fail(Issues::MATCH_NOT_STRING, o.left_expr, {:left_value => left})
end
matched = pattern.match(left) # nil, or MatchData
- set_match_data(matched, o, scope) # creates ephemeral
+ set_match_data(matched, scope) # creates ephemeral
# convert match result to Boolean true, or false
o.operator == :'=~' ? !!matched : !matched
end
# Evaluates Puppet DSL `in` expression
#
def eval_InExpression o, scope
- left, right = eval_BinaryExpression o, scope
- @@compare_operator.include?(right, left)
+ left = evaluate(o.left_expr, scope)
+ right = evaluate(o.right_expr, scope)
+ @@compare_operator.include?(right, left, scope)
end
# @example
# $a and $b
# b is only evaluated if a is true
#
def eval_AndExpression o, scope
is_true?(evaluate(o.left_expr, scope)) ? is_true?(evaluate(o.right_expr, scope)) : false
end
# @example
# a or b
# b is only evaluated if a is false
#
def eval_OrExpression o, scope
is_true?(evaluate(o.left_expr, scope)) ? true : is_true?(evaluate(o.right_expr, scope))
end
# Evaluates each entry of the literal list and creates a new Array
+ # Supports unfolding of entries
# @return [Array] with the evaluated content
#
def eval_LiteralList o, scope
- o.values.collect {|expr| evaluate(expr, scope)}
+ unfold([], o.values, scope)
end
# Evaluates each entry of the literal hash and creates a new Hash.
# @return [Hash] with the evaluated content
#
def eval_LiteralHash o, scope
- h = Hash.new
- o.entries.each {|entry| h[ evaluate(entry.key, scope)]= evaluate(entry.value, scope)}
- h
+ # optimized
+ o.entries.reduce({}) {|h,entry| h[evaluate(entry.key, scope)] = evaluate(entry.value, scope); h }
end
# Evaluates all statements and produces the last evaluated value
#
def eval_BlockExpression o, scope
r = nil
o.statements.each {|s| r = evaluate(s, scope)}
r
end
# Performs optimized search over case option values, lazily evaluating each
# until there is a match. If no match is found, the case expression's default expression
# is evaluated (it may be nil or Nop if there is no default, thus producing nil).
# If an option matches, the result of evaluating that option is returned.
# @return [Object, nil] what a matched option returns, or nil if nothing matched.
#
def eval_CaseExpression(o, scope)
# memo scope level before evaluating test - don't want a match in the case test to leak $n match vars
# to expressions after the case expression.
#
with_guarded_scope(scope) do
test = evaluate(o.test, scope)
result = nil
the_default = nil
if o.options.find do |co|
# the first case option that matches
if co.values.find do |c|
- the_default = co.then_expr if c.is_a? Puppet::Pops::Model::LiteralDefault
- is_match?(test, evaluate(c, scope), c, scope)
+ case c
+ when Puppet::Pops::Model::LiteralDefault
+ the_default = co.then_expr
+ is_match?(test, evaluate(c, scope), c, scope)
+ when Puppet::Pops::Model::UnfoldExpression
+ # not ideal for error reporting, since it is not known which unfolded result
+ # caused an error - the entire unfold expression is blamed (i.e. the var c, passed to is_match?)
+ evaluate(c, scope).any? {|v| is_match?(test, v, c, scope) }
+ else
+ is_match?(test, evaluate(c, scope), c, scope)
+ end
end
result = evaluate(co.then_expr, scope)
true # the option was picked
end
end
result # an option was picked, and produced a result
else
evaluate(the_default, scope) # evaluate the default (should be a nop/nil if there is no default)
end
end
end
# Evaluates a CollectExpression by transforming it into a 3x AST::Collection and then evaluating that.
# This is done because of the complex API between compiler, indirector, backends, and difference between
# collecting virtual resources and exported resources.
#
def eval_CollectExpression o, scope
# The Collect Expression and its contained query expressions are implemented in such a way in
# 3x that it is almost impossible to do anything about them (the AST objects are lazily evaluated,
# and the built structure consists of both higher order functions and arrays with query expressions
# that are either used as a predicate filter, or given to an indirection terminus (such as the Puppet DB
# resource terminus). Unfortunately, the 3x implementation has many inconsistencies that the implementation
# below carries forward.
#
collect_3x = Puppet::Pops::Model::AstTransformer.new().transform(o)
collected = collect_3x.evaluate(scope)
# the 3x returns an instance of Parser::Collector (but it is only registered with the compiler at this
# point and does not contain any valuable information (like the result)
# Dilemma: If this object is returned, it is a first class value in the Puppet Language and we
# need to be able to perform operations on it. We can forbid it from leaking by making CollectExpression
# a non R-value. This makes it possible for the evaluator logic to make use of the Collector.
collected
end
def eval_ParenthesizedExpression(o, scope)
evaluate(o.expr, scope)
end
# This evaluates classes, nodes and resource type definitions to nil, since 3x:
# instantiates them, and evaluates their parameters and body. This is achieved by
# providing bridge AST classes in Puppet::Parser::AST::PopsBridge that bridges a
# Pops Program and a Pops Expression.
#
# Since all Definitions are handled "out of band", they are treated as a no-op when
# evaluated.
#
def eval_Definition(o, scope)
nil
end
def eval_Program(o, scope)
evaluate(o.body, scope)
end
- # Produces Array[PObjectType], an array of resource references
+ # Produces Array[PAnyType], an array of resource references
#
def eval_ResourceExpression(o, scope)
exported = o.exported
virtual = o.virtual
- type_name = evaluate(o.type_name, scope)
- o.bodies.map do |body|
- titles = [evaluate(body.title, scope)].flatten
- evaluated_parameters = body.operations.map {|op| evaluate(op, scope) }
- create_resources(o, scope, virtual, exported, type_name, titles, evaluated_parameters)
+
+ # Get the type name
+ type_name =
+ if (tmp_name = o.type_name).is_a?(Puppet::Pops::Model::QualifiedName)
+ tmp_name.value # already validated as a name
+ else
+ type_name_acceptable =
+ case o.type_name
+ when Puppet::Pops::Model::QualifiedReference
+ true
+ when Puppet::Pops::Model::AccessExpression
+ o.type_name.left_expr.is_a?(Puppet::Pops::Model::QualifiedReference)
+ end
+
+ evaluated_name = evaluate(tmp_name, scope)
+ unless type_name_acceptable
+ actual = type_calculator.generalize!(type_calculator.infer(evaluated_name)).to_s
+ fail(Puppet::Pops::Issues::ILLEGAL_RESOURCE_TYPE, o.type_name, {:actual => actual})
+ end
+
+ # must be a CatalogEntry subtype
+ case evaluated_name
+ when Puppet::Pops::Types::PHostClassType
+ unless evaluated_name.class_name.nil?
+ fail(Puppet::Pops::Issues::ILLEGAL_RESOURCE_TYPE, o.type_name, {:actual=> evaluated_name.to_s})
+ end
+ 'class'
+
+ when Puppet::Pops::Types::PResourceType
+ unless evaluated_name.title().nil?
+ fail(Puppet::Pops::Issues::ILLEGAL_RESOURCE_TYPE, o.type_name, {:actual=> evaluated_name.to_s})
+ end
+ evaluated_name.type_name # assume validated
+
+ else
+ actual = type_calculator.generalize!(type_calculator.infer(evaluated_name)).to_s
+ fail(Puppet::Pops::Issues::ILLEGAL_RESOURCE_TYPE, o.type_name, {:actual=>actual})
+ end
+ end
+
+ # This is a runtime check - the model is valid, but will have runtime issues when evaluated
+ # and storeconfigs is not set.
+ if(o.exported)
+ optionally_fail(Puppet::Pops::Issues::RT_NO_STORECONFIGS_EXPORT, o);
+ end
+
+ titles_to_body = {}
+ body_to_titles = {}
+ body_to_params = {}
+
+ # titles are evaluated before attribute operations
+ o.bodies.map do | body |
+ titles = evaluate(body.title, scope)
+
+ # Title may not be nil
+ # Titles may be given as an array, it is ok if it is empty, but not if it contains nil entries
+ # Titles may not be an empty String
+ # Titles must be unique in the same resource expression
+ # There may be a :default entry, its entries apply with lower precedence
+ #
+ if titles.nil?
+ fail(Puppet::Pops::Issues::MISSING_TITLE, body.title)
+ end
+ titles = [titles].flatten
+
+ # Check types of evaluated titles and duplicate entries
+ titles.each_with_index do |title, index|
+ if title.nil?
+ fail(Puppet::Pops::Issues::MISSING_TITLE_AT, body.title, {:index => index})
+
+ elsif !title.is_a?(String) && title != :default
+ actual = type_calculator.generalize!(type_calculator.infer(title)).to_s
+ fail(Puppet::Pops::Issues::ILLEGAL_TITLE_TYPE_AT, body.title, {:index => index, :actual => actual})
+
+ elsif title == EMPTY_STRING
+ fail(Puppet::Pops::Issues::EMPTY_STRING_TITLE_AT, body.title, {:index => index})
+
+ elsif titles_to_body[title]
+ fail(Puppet::Pops::Issues::DUPLICATE_TITLE, o, {:title => title})
+ end
+ titles_to_body[title] = body
+ end
+
+ # Do not create a real instance from the :default case
+ titles.delete(:default)
+
+ body_to_titles[body] = titles
+
+ # Store evaluated parameters in a hash associated with the body, but do not yet create resource
+ # since the entry containing :defaults may appear later
+ body_to_params[body] = body.operations.reduce({}) do |param_memo, op|
+ params = evaluate(op, scope)
+ params = [params] unless params.is_a?(Array)
+ params.each do |p|
+ if param_memo.include? p.name
+ fail(Puppet::Pops::Issues::DUPLICATE_ATTRIBUTE, o, {:attribute => p.name})
+ end
+ param_memo[p.name] = p
+ end
+ param_memo
+ end
+ end
+
+ # Titles and Operations have now been evaluated and resources can be created
+ # Each production is a PResource, and an array of all is produced as the result of
+ # evaluating the ResourceExpression.
+ #
+ defaults_hash = body_to_params[titles_to_body[:default]] || {}
+ o.bodies.map do | body |
+ titles = body_to_titles[body]
+ params = defaults_hash.merge(body_to_params[body] || {})
+ create_resources(o, scope, virtual, exported, type_name, titles, params.values)
end.flatten.compact
end
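# Illustrative sketch (plain Ruby, not part of the diffed source) of the precedence
# applied above: attributes evaluated for a body override entries coming from the
# :default body. Example values are assumed.
defaults    = { 'ensure' => 'present', 'owner' => 'root' }
body_params = { 'owner' => 'puppet' }
defaults.merge(body_params)   # => {"ensure"=>"present", "owner"=>"puppet"}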
def eval_ResourceOverrideExpression(o, scope)
evaluated_resources = evaluate(o.resources, scope)
evaluated_parameters = o.operations.map { |op| evaluate(op, scope) }
create_resource_overrides(o, scope, [evaluated_resources].flatten, evaluated_parameters)
evaluated_resources
end
- # Produces 3x array of parameters
+ # Produces 3x parameter
def eval_AttributeOperation(o, scope)
create_resource_parameter(o, scope, o.attribute_name, evaluate(o.value_expr, scope), o.operator)
end
+ def eval_AttributesOperation(o, scope)
+ hashed_params = evaluate(o.expr, scope)
+ unless hashed_params.is_a?(Hash)
+ actual = type_calculator.generalize!(type_calculator.infer(hashed_params)).to_s
+ fail(Puppet::Pops::Issues::TYPE_MISMATCH, o.expr, {:expected => 'Hash', :actual => actual})
+ end
+ hashed_params.map { |k,v| create_resource_parameter(o, scope, k, v, :'=>') }
+ end
+
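# Illustrative sketch (plain Ruby, not part of the diffed source): the hash form of
# attribute operations yields one parameter per key/value pair, each created with the
# :'=>' operator. The hash literal is an assumed example.
{ 'owner' => 'root', 'mode' => '0644' }.map { |k, v| [k, v, :'=>'] }
# => [["owner", "root", :"=>"], ["mode", "0644", :"=>"]]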
# Sets default parameter values for a type, produces the type
#
def eval_ResourceDefaultsExpression(o, scope)
- type_name = o.type_ref.value # a QualifiedName's string value
+ type = evaluate(o.type_ref, scope)
+ type_name =
+ if type.is_a?(Puppet::Pops::Types::PResourceType) && !type.type_name.nil? && type.title.nil?
+ type.type_name # assume it is a valid name
+ else
+ actual = type_calculator.generalize!(type_calculator.infer(type))
+ fail(Issues::ILLEGAL_RESOURCE_TYPE, o.type_ref, {:actual => actual})
+ end
evaluated_parameters = o.operations.map {|op| evaluate(op, scope) }
create_resource_defaults(o, scope, type_name, evaluated_parameters)
# Produce the type
- evaluate(o.type_ref, scope)
+ type
end
# Evaluates function call by name.
#
def eval_CallNamedFunctionExpression(o, scope)
# The functor expression is not evaluated, it is not possible to select the function to call
# via an expression like $a()
case o.functor_expr
when Puppet::Pops::Model::QualifiedName
# ok
when Puppet::Pops::Model::RenderStringExpression
# helpful to point out this easy to make Epp error
fail(Issues::ILLEGAL_EPP_PARAMETERS, o)
else
fail(Issues::ILLEGAL_EXPRESSION, o.functor_expr, {:feature=>'function name', :container => o})
end
name = o.functor_expr.value
- evaluated_arguments = o.arguments.collect {|arg| evaluate(arg, scope) }
+
+ evaluated_arguments = unfold([], o.arguments, scope)
+
# wrap lambda in a callable block if it is present
evaluated_arguments << Puppet::Pops::Evaluator::Closure.new(self, o.lambda, scope) if o.lambda
call_function(name, evaluated_arguments, o, scope)
end
# Evaluation of CallMethodExpression handles a NamedAccessExpression functor (receiver.function_name)
#
def eval_CallMethodExpression(o, scope)
unless o.functor_expr.is_a? Puppet::Pops::Model::NamedAccessExpression
fail(Issues::ILLEGAL_EXPRESSION, o.functor_expr, {:feature=>'function accessor', :container => o})
end
receiver = evaluate(o.functor_expr.left_expr, scope)
name = o.functor_expr.right_expr
unless name.is_a? Puppet::Pops::Model::QualifiedName
fail(Issues::ILLEGAL_EXPRESSION, o.functor_expr, {:feature=>'function name', :container => o})
end
name = name.value # the string function name
- evaluated_arguments = [receiver] + (o.arguments || []).collect {|arg| evaluate(arg, scope) }
+
+ evaluated_arguments = unfold([receiver], o.arguments || [], scope)
+
+ # wrap lambda in a callable block if it is present
evaluated_arguments << Puppet::Pops::Evaluator::Closure.new(self, o.lambda, scope) if o.lambda
call_function(name, evaluated_arguments, o, scope)
end
# @example
# $x ? { 10 => true, 20 => false, default => 0 }
#
def eval_SelectorExpression o, scope
# memo scope level before evaluating test - don't want a match in the case test to leak $n match vars
# to expressions after the selector expression.
#
with_guarded_scope(scope) do
test = evaluate(o.left_expr, scope)
+ the_default = nil
selected = o.selectors.find do |s|
- candidate = evaluate(s.matching_expr, scope)
- candidate == :default || is_match?(test, candidate, s.matching_expr, scope)
+ me = s.matching_expr
+ case me
+ when Puppet::Pops::Model::LiteralDefault
+ the_default = s.value_expr
+ false
+ when Puppet::Pops::Model::UnfoldExpression
+ # not ideal for error reporting, since it is not known which unfolded result
+ # caused an error - the entire unfold expression is blamed (i.e. the value passed to is_match?)
+ evaluate(me, scope).any? {|v| is_match?(test, v, me, scope) }
+ else
+ is_match?(test, evaluate(me, scope), me, scope)
+ end
end
if selected
evaluate(selected.value_expr, scope)
+ elsif the_default
+ evaluate(the_default, scope)
else
- nil
+ fail(Issues::UNMATCHED_SELECTOR, o.left_expr, :param_value => test)
end
end
end
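# Illustrative sketch (plain Ruby, not part of the diffed source) of the default-deferral
# pattern used above: the default option is remembered but never matched directly, and is
# only consulted when no other entry matches. The option list is an assumed example.
the_default = nil
selected = [[:default, 0], [:b, 2]].find do |key, value|
  if key == :default
    the_default = value
    false
  else
    key == :b
  end
end
selected ? selected[1] : the_default   # => 2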
# A SubLocatedExpression is simply an expression that holds location information
def eval_SubLocatedExpression o, scope
evaluate(o.expr, scope)
end
# Evaluates Puppet DSL Heredoc
def eval_HeredocExpression o, scope
result = evaluate(o.text_expr, scope)
assert_external_syntax(scope, result, o.syntax, o.text_expr)
result
end
# Evaluates Puppet DSL `if`
def eval_IfExpression o, scope
with_guarded_scope(scope) do
if is_true?(evaluate(o.test, scope))
evaluate(o.then_expr, scope)
else
evaluate(o.else_expr, scope)
end
end
end
# Evaluates Puppet DSL `unless`
def eval_UnlessExpression o, scope
with_guarded_scope(scope) do
unless is_true?(evaluate(o.test, scope))
evaluate(o.then_expr, scope)
else
evaluate(o.else_expr, scope)
end
end
end
# Evaluates a variable (getting its value)
# The evaluator is lenient; any expression producing a String is used as a name
# of a variable.
#
def eval_VariableExpression o, scope
# Evaluator is not too fussy about what constitutes a name as long as the result
# is a String and a valid variable name
#
name = evaluate(o.expr, scope)
# Should be caught by validation, but make this explicit here as well, or mysterious evaluation issues
- # may occur.
+ # may occur for some evaluation use cases.
case name
when String
when Numeric
else
fail(Issues::ILLEGAL_VARIABLE_EXPRESSION, o.expr)
end
- # TODO: Check for valid variable name (Task for validator)
- # TODO: semantics of undefined variable in scope, this just returns what scope does == value or nil
get_variable_value(name, o, scope)
end
# Evaluates double quoted strings that may contain interpolation
#
def eval_ConcatenatedString o, scope
o.segments.collect {|expr| string(evaluate(expr, scope), scope)}.join
end
# If the wrapped expression is a QualifiedName, it is taken as the name of a variable in scope.
# Note that this is different from the 3.x implementation, where an initial qualified name
# is accepted (e.g. `"---${var + 1}---"` is legal). This implementation requires such concrete
# syntax to be expressed in the model as `(TextExpression (+ (Variable var) 1))` - i.e. moving the decision to
# the parser.
#
# Semantics: the result of an expression is turned into a string; nil is silently transformed to the empty
# string.
# @return [String] the interpolated result
#
def eval_TextExpression o, scope
if o.expr.is_a?(Puppet::Pops::Model::QualifiedName)
- # TODO: formalize, when scope returns nil, vs error
string(get_variable_value(o.expr.value, o, scope), scope)
else
string(evaluate(o.expr, scope), scope)
end
end
def string_Object(o, scope)
o.to_s
end
def string_Symbol(o, scope)
- case o
- when :undef
- ''
+ if :undef == o # optimized comparison 1.44 vs 1.95
+ EMPTY_STRING
else
o.to_s
end
end
- def string_Array(o, scope)
- ['[', o.map {|e| string(e, scope)}.join(', '), ']'].join()
+ def string_Array(o, scope)
+ "[#{o.map {|e| string(e, scope)}.join(COMMA_SEPARATOR)}]"
end
def string_Hash(o, scope)
- ['{', o.map {|k,v| string(k, scope) + " => " + string(v, scope)}.join(', '), '}'].join()
+ "{#{o.map {|k,v| "#{string(k, scope)} => #{string(v, scope)}"}.join(COMMA_SEPARATOR)}}"
end
def string_Regexp(o, scope)
- ['/', o.source, '/'].join()
+ "/#{o.source}/"
end
- def string_PAbstractType(o, scope)
+ def string_PAnyType(o, scope)
@@type_calculator.string(o)
end
# Produces concatenation / merge of x and y.
#
# When x is an Array, the type of y determines the result:
#
# * Array => concatenation `[1,2], [3,4] => [1,2,3,4]`
# * Hash => concatenation of the hash converted to an array of `[key, value]` pairs
# * any other => concatenation of single value
#
# When x is a Hash, the type of y determines the result:
#
# * Array => merge of array interpreted as `[key, value, key, value,...]`
# * Hash => a merge, where entries in `y` overrides
# * any other => error
#
# When x is something else, wrap it in an array first.
#
# When x is nil, an empty array is used instead.
#
# @note to concatenate an Array, nest the array - i.e. `[1,2], [[2,3]]`
#
# @overload concatenate(obj_x, obj_y)
# @param obj_x [Object] object to wrap in an array and concatenate to; see other overloaded methods for return type
# @param ary_y [Object] array to concatenate at end of `ary_x`
# @return [Object] wraps obj_x in array before using other overloaded option based on type of obj_y
# @overload concatenate(ary_x, ary_y)
# @param ary_x [Array] array to concatenate to
# @param ary_y [Array] array to concatenate at end of `ary_x`
# @return [Array] new array with `ary_x` + `ary_y`
# @overload concatenate(ary_x, hsh_y)
# @param ary_x [Array] array to concatenate to
# @param hsh_y [Hash] converted to array form, and concatenated to array
# @return [Array] new array with `ary_x` + `hsh_y` converted to array
# @overload concatenate (ary_x, obj_y)
# @param ary_x [Array] array to concatenate to
# @param obj_y [Object] non array or hash object to add to array
# @return [Array] new array with `ary_x` + `obj_y` added as last entry
# @overload concatenate(hsh_x, ary_y)
# @param hsh_x [Hash] the hash to merge with
# @param ary_y [Array] array interpreted as even numbered sequence of key, value merged with `hsh_x`
# @return [Hash] new hash with `hsh_x` merged with `ary_y` interpreted as hash in array form
# @overload concatenate(hsh_x, hsh_y)
# @param hsh_x [Hash] the hash to merge to
# @param hsh_y [Hash] hash merged with `hsh_x`
# @return [Hash] new hash with `hsh_x` merged with `hsh_y`
# @raise [ArgumentError] when `xxx_x` is neither an Array nor a Hash
# @raise [ArgumentError] when `xxx_x` is a Hash, and `xxx_y` is neither Array nor Hash.
#
def concatenate(x, y)
x = [x] unless x.is_a?(Array) || x.is_a?(Hash)
case x
when Array
y = case y
when Array then y
when Hash then y.to_a
else
[y]
end
x + y # new array with concatenation
when Hash
y = case y
when Hash then y
when Array
# Hash[[a, 1, b, 2]] => {}
# Hash[a,1,b,2] => {a => 1, b => 2}
# Hash[[a,1], [b,2]] => {[a,1] => [b,2]}
# Hash[[[a,1], [b,2]]] => {a => 1, b => 2}
# Use the type calculator to determine if the array is Array[Array[?]], and if so use the second form
# of call
t = @@type_calculator.infer(y)
if t.element_type.is_a? Puppet::Pops::Types::PArrayType
Hash[y]
else
Hash[*y]
end
else
raise ArgumentError.new("Can only append Array or Hash to a Hash")
end
x.merge y # new hash with overwrite
else
raise ArgumentError.new("Can only append to an Array or a Hash.")
end
end
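# Illustrative sketch (plain Ruby, not part of the diffed source) of the building blocks
# used by concatenate above. Values are assumed examples.
[1, 2] + [3, 4]                  # Array + Array            => [1, 2, 3, 4]
[1, 2] + { 'a' => 1 }.to_a       # Array + Hash (as pairs)  => [1, 2, ["a", 1]]
{ 'a' => 1 }.merge('b' => 2)     # Hash + Hash              => {"a"=>1, "b"=>2}
Hash[*['a', 1, 'b', 2]]          # flat key/value array     => {"a"=>1, "b"=>2}
Hash[[['a', 1], ['b', 2]]]       # array of [key, value]    => {"a"=>1, "b"=>2}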
# Produces the result x \ y (set difference)
# When `x` is an Array, `y` is transformed to an array and then all matching elements removed from x.
# When `x` is a Hash, all contained keys are removed from x as listed in `y` if it is an Array, or all its keys if it is a Hash.
# The difference is returned. The given `x` and `y` are not modified by this operation.
# @raise [ArgumentError] when `x` is neither an Array nor a Hash
#
def delete(x, y)
result = x.dup
case x
when Array
y = case y
when Array then y
when Hash then y.to_a
else
[y]
end
y.each {|e| result.delete(e) }
when Hash
y = case y
when Array then y
when Hash then y.keys
else
[y]
end
y.each {|e| result.delete(e) }
else
raise ArgumentError.new("Can only delete from an Array or Hash.")
end
result
end
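# Illustrative sketch (plain Ruby, not part of the diffed source) of the non-destructive
# difference computed above. Values are assumed examples.
a = [1, 2, 3].dup
[2].each { |e| a.delete(e) }       # a => [1, 3]
h = { 'a' => 1, 'b' => 2 }.dup
['a'].each { |k| h.delete(k) }     # h => {"b"=>2}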
# Implementation of case option matching.
#
# This is the type of matching performed in a case option, using == for every type
# of value except regular expression where a match is performed.
#
def is_match? left, right, o, scope
if right.is_a?(Regexp)
return false unless left.is_a? String
matched = right.match(left)
- set_match_data(matched, o, scope) # creates or clears ephemeral
+ set_match_data(matched, scope) # creates or clears ephemeral
!!matched # convert to boolean
- elsif right.is_a?(Puppet::Pops::Types::PAbstractType)
+ elsif right.is_a?(Puppet::Pops::Types::PAnyType)
# right is a type and left is not - check if left is an instance of the given type
# (The reverse is not terribly meaningful - computing which of the case options first produces
# an instance of a given type).
#
@@type_calculator.instance?(right, left)
else
# Handle equality the same way as the language '==' operator (case insensitive etc.)
@@compare_operator.equals(left,right)
end
end
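# Illustrative sketch (plain Ruby, not part of the diffed source): a Regexp match yields
# MatchData or nil, which is turned into a Boolean with !! as done above.
m = /ell/.match('hello')     # => #<MatchData "ell">
!!m                          # => true
!!(/xyz/.match('hello'))     # => false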
def with_guarded_scope(scope)
scope_memo = get_scope_nesting_level(scope)
begin
yield
ensure
set_scope_nesting_level(scope, scope_memo)
end
end
+ # Maps the expressions in the given array to their evaluated values, except for UnfoldExpressions, which are first unfolded.
+ # The result is added to the given result Array.
+ # @param result [Array] where to add the result (may already contain entries)
+ # @param array [Array<Puppet::Pops::Model::Expression>] the expressions to map
+ # @param scope [Puppet::Parser::Scope] the scope to evaluate in
+ # @return [Array] the given result array with content added from the operation
+ #
+ def unfold(result, array, scope)
+ array.each do |x|
+ if x.is_a?(Puppet::Pops::Model::UnfoldExpression)
+ result.concat(evaluate(x, scope))
+ else
+ result << evaluate(x, scope)
+ end
+ end
+ result
+ end
+ private :unfold
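# Illustrative sketch (plain Ruby, not part of the diffed source) of the concat-vs-append
# distinction above, with plain Arrays standing in for evaluated UnfoldExpressions.
result = []
[[1, 2], 3].each { |x| x.is_a?(Array) ? result.concat(x) : (result << x) }
result   # => [1, 2, 3]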
+
end
diff --git a/lib/puppet/pops/evaluator/relationship_operator.rb b/lib/puppet/pops/evaluator/relationship_operator.rb
index 7ffd20c8c..866afa8ef 100644
--- a/lib/puppet/pops/evaluator/relationship_operator.rb
+++ b/lib/puppet/pops/evaluator/relationship_operator.rb
@@ -1,156 +1,157 @@
# The RelationshipOperator implements the semantics of the -> <- ~> <~ operators creating relationships or notification
# relationships between the left and right hand side's references to resources.
#
# This is a separate class since a second level of evaluation is required that transforms strings on the left or right hand side
# to type references. The task of "making a relationship" is delegated to the "runtime support" class that is included.
# This is done to separate the concerns of the new evaluator from the 3x runtime; messy logic goes into the runtime support
# module. Later when more is cleaned up this can be simplified further.
#
class Puppet::Pops::Evaluator::RelationshipOperator
# Provides access to the Puppet 3.x runtime (scope, etc.)
# This separation has been made to make it easier to later migrate the evaluator to an improved runtime.
#
include Puppet::Pops::Evaluator::Runtime3Support
Issues = Puppet::Pops::Issues
class IllegalRelationshipOperandError < RuntimeError
attr_reader :operand
def initialize operand
@operand = operand
end
end
class NotCatalogTypeError < RuntimeError
attr_reader :type
def initialize type
@type = type
end
end
def initialize
@type_transformer_visitor = Puppet::Pops::Visitor.new(self, "transform", 1, 1)
@type_calculator = Puppet::Pops::Types::TypeCalculator.new()
@type_parser = Puppet::Pops::Types::TypeParser.new()
- @catalog_type = Puppet::Pops::Types::TypeFactory.catalog_entry()
+ tf = Puppet::Pops::Types::TypeFactory
+ @catalog_type = tf.variant(tf.catalog_entry, tf.type_type(tf.catalog_entry))
end
def transform(o, scope)
@type_transformer_visitor.visit_this_1(self, o, scope)
end
# Catch all non transformable objects
# @api private
def transform_Object(o, scope)
raise IllegalRelationshipOperandError.new(o)
end
# A string must be a type reference in string format
# @api private
def transform_String(o, scope)
assert_catalog_type(@type_parser.parse(o), scope)
end
# A qualified name is shorthand for a class with this name
# @api private
def transform_QualifiedName(o, scope)
Puppet::Pops::Types::TypeFactory.host_class(o.value)
end
# Types are what they are, just check the type
# @api private
- def transform_PAbstractType(o, scope)
+ def transform_PAnyType(o, scope)
assert_catalog_type(o, scope)
end
# This transforms a 3x Collector (the result of evaluating a 3x AST::Collection).
# It is passed through verbatim since it is evaluated late by the compiler. At the point
# where the relationship is evaluated, it is simply recorded with the compiler for later evaluation.
# If one of the sides of the relationship is a Collector it is evaluated before the actual
# relationship is formed. (All of this happens at a later point in time.)
#
def transform_Collector(o, scope)
o
end
# Array content needs to be transformed
def transform_Array(o, scope)
o.map{|x| transform(x, scope) }
end
# Asserts (and returns) the type if it is a PCatalogEntryType
# (A PCatalogEntryType is the base class of PHostClassType, and PResourceType).
#
def assert_catalog_type(o, scope)
unless @type_calculator.assignable?(@catalog_type, o)
raise NotCatalogTypeError.new(o)
end
# TODO must check if this is an abstract PResourceType (i.e. without a type_name) - which should fail ?
# e.g. File -> File (and other similar constructs) - maybe the catalog protects against this since references
# may be to future objects...
o
end
RELATIONSHIP_OPERATORS = [:'->', :'~>', :'<-', :'<~']
REVERSE_OPERATORS = [:'<-', :'<~']
RELATION_TYPE = {
:'->' => :relationship,
:'<-' => :relationship,
:'~>' => :subscription,
:'<~' => :subscription
}
# Evaluate a relationship.
# TODO: The error reporting is not fine grained since evaluation has already taken place
# There are no references to the original source expressions at this point, only the overall
# relationship expression. (e.g. the expression may be ['string', func_call(), etc.] -> func_call())
# To implement this, the general evaluator needs to be able to track each evaluation result and associate
# it with a corresponding expression. This structure should then be passed to the relationship operator.
#
def evaluate (left_right_evaluated, relationship_expression, scope)
# assert operator (should have been validated, but this logic makes assumptions which would
# screw things up royally). Better safe than sorry.
unless RELATIONSHIP_OPERATORS.include?(relationship_expression.operator)
fail(Issues::UNSUPPORTED_OPERATOR, relationship_expression, {:operator => relationship_expression.operator})
end
begin
# Turn each side into an array of types (this also asserts their type)
# (note wrap in array first if value is not already an array)
#
# TODO: Later when objects are Puppet Runtime Objects and know their type, it will be more efficient to check/infer
# the type first since a chained operation then does not have to visit each element again. This is not meaningful now
# since inference needs to visit each object each time, and this is what the transformation does anyway).
#
# real is [left, right], and both the left and right may be a single value or an array. In each case all content
# should be flattened, and then transformed to a type. left or right may also be a value that is transformed
# into an array, and thus the resulting left and right must be flattened individually
# Once flattened, the operands should be sets (to remove duplicate entries)
#
real = left_right_evaluated.collect {|x| [x].flatten.collect {|x| transform(x, scope) }}
real[0].flatten!
real[1].flatten!
real[0].uniq!
real[1].uniq!
# reverse order if operator is Right to Left
source, target = reverse_operator?(relationship_expression) ? real.reverse : real
# Add the relationships to the catalog
source.each {|s| target.each {|t| add_relationship(s, t, RELATION_TYPE[relationship_expression.operator], scope) }}
# Produce the transformed source RHS (if this is a chain, this does not need to be done again)
real.slice(1)
rescue NotCatalogTypeError => e
fail(Issues::ILLEGAL_RELATIONSHIP_OPERAND_TYPE, relationship_expression, {:type => @type_calculator.string(e.type)})
rescue IllegalRelationshipOperandError => e
fail(Issues::ILLEGAL_RELATIONSHIP_OPERAND_TYPE, relationship_expression, {:operand => e.operand})
end
end
def reverse_operator?(o)
REVERSE_OPERATORS.include?(o.operator)
end
end
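# Illustrative sketch (plain Ruby, not part of the diffed source) of the flatten/uniq and
# reverse steps described in evaluate above, with symbols standing in for transformed types.
real = [[:a, [:a, :b]], [:c]].map { |side| [side].flatten.uniq }
source, target = real.reverse   # for the right-to-left operators '<-' and '<~'
# source => [:c], target => [:a, :b]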
diff --git a/lib/puppet/pops/evaluator/runtime3_support.rb b/lib/puppet/pops/evaluator/runtime3_support.rb
index ed5c124d1..127c7d6f5 100644
--- a/lib/puppet/pops/evaluator/runtime3_support.rb
+++ b/lib/puppet/pops/evaluator/runtime3_support.rb
@@ -1,540 +1,573 @@
# A module with bindings between the new evaluator and the 3x runtime.
# The intention is to separate all calls into scope, compiler, resource, etc. in this module
# to make it easier to later refactor the evaluator for better implementations of the 3x classes.
#
# @api private
module Puppet::Pops::Evaluator::Runtime3Support
+ NAME_SPACE_SEPARATOR = '::'.freeze
+
# Fails the evaluation of _semantic_ with a given issue.
#
# @param issue [Puppet::Pops::Issue] the issue to report
# @param semantic [Puppet::Pops::ModelPopsObject] the object for which evaluation failed in some way. Used to determine origin.
# @param options [Hash] hash of optional named data elements for the given issue
# @return [!] this method does not return
# @raise [Puppet::ParseError] an evaluation error initialized from the arguments (TODO: Change to EvaluationError?)
#
def fail(issue, semantic, options={}, except=nil)
optionally_fail(issue, semantic, options, except)
# an error should have been raised since fail always fails
raise ArgumentError, "Internal Error: Configuration of runtime error handling wrong: should have raised exception"
end
# Optionally (based on severity) fails the evaluation of _semantic_ with a given issue.
# If the given issue is configured to be of severity < :error it is only reported, and the function returns.
#
# @param issue [Puppet::Pops::Issue] the issue to report
# @param semantic [Puppet::Pops::ModelPopsObject] the object for which evaluation failed in some way. Used to determine origin.
# @param options [Hash] hash of optional named data elements for the given issue
# @return [!] this method does not return
# @raise [Puppet::ParseError] an evaluation error initialized from the arguments (TODO: Change to EvaluationError?)
#
def optionally_fail(issue, semantic, options={}, except=nil)
if except.nil?
# Want a stacktrace, and it must be passed as an exception
begin
raise EvaluationError.new()
rescue EvaluationError => e
except = e
end
end
diagnostic_producer.accept(issue, semantic, options, except)
end
# Binds the given variable name to the given value in the given scope.
# The reference object `o` is intended to be used for origin information - the 3x scope implementation
# only makes use of location when there is an error. This is now handled by other mechanisms; first a check
# is made if a variable exists and an error is raised if attempting to change an immutable value. Errors
# in name, numeric variable assignment etc. have also been validated prior to this call. In the event the
# scope.setvar still raises an error, the general exception handling for evaluation of the assignment
# expression knows about its location. Because of this, there is no need to extract the location for each
# setting (extraction is somewhat expensive since 3x requires line instead of offset).
#
def set_variable(name, value, o, scope)
# Scope also checks this but requires that location information are passed as options.
# Those are expensive to calculate and a test is instead made here to enable failing with better information.
# The error is not specific enough to allow catching it - need to check the actual message text.
# TODO: Improve the messy implementation in Scope.
#
if scope.bound?(name)
if Puppet::Parser::Scope::RESERVED_VARIABLE_NAMES.include?(name)
fail(Puppet::Pops::Issues::ILLEGAL_RESERVED_ASSIGNMENT, o, {:name => name} )
else
fail(Puppet::Pops::Issues::ILLEGAL_REASSIGNMENT, o, {:name => name} )
end
end
scope.setvar(name, value)
end
# Returns the value of the variable (nil is returned if variable has no value, or if variable does not exist)
#
def get_variable_value(name, o, scope)
# Puppet 3x stores all variables as strings (then converts them back to numeric with a regexp... to see if it is a match variable)
# Not ideal, scope should support numeric lookup directly instead.
# TODO: consider fixing scope
catch(:undefined_variable) {
- return scope.lookupvar(name.to_s)
+ x = scope.lookupvar(name.to_s)
+ # Must convert :undef back to nil - this can happen when an undefined variable is used in a
+ # parameter's default value expression - there, nil must be :undef to work with the rest of 3x.
+ # Now that the value comes back to 4x, it is changed back to nil.
+ return (x == :undef) ? nil : x
}
# It is always ok to reference numeric variables even if they are not assigned. They are always undef
# if not set by a match expression.
#
unless name =~ Puppet::Pops::Patterns::NUMERIC_VAR_NAME
fail(Puppet::Pops::Issues::UNKNOWN_VARIABLE, o, {:name => name})
end
end
# Returns true if the variable of the given name is set in the given most nested scope. True is returned even if
# variable is bound to nil.
#
def variable_bound?(name, scope)
scope.bound?(name.to_s)
end
# Returns true if the variable is bound to a value or nil, in the scope or its parent scopes.
#
def variable_exists?(name, scope)
scope.exist?(name.to_s)
end
- def set_match_data(match_data, o, scope)
+ def set_match_data(match_data, scope)
# See set_variable for rationale for not passing file and line to ephemeral_from.
# NOTE: The 3x scope adds one ephemeral(match) to its internal stack per match that succeeds ! It never
# clears anything. Thus a context that performs many matches will get very deep (there simply is no way to
# clear the match variables without rolling back the ephemeral stack.)
# This implementation does not attempt to fix this, it behaves the same bad way.
unless match_data.nil?
scope.ephemeral_from(match_data)
end
end
# Creates a local scope with variables set from a hash of variable name to value
#
def create_local_scope_from(hash, scope)
# two dummy values are needed since the scope tries to give an error message (which cannot happen in this
# case - it is simply wrong; the error should be reported by the caller, who knows in more detail where it
# is in the source).
#
raise ArgumentError, "Internal error - attempt to create a local scope without a hash" unless hash.is_a?(Hash)
scope.ephemeral_from(hash)
end
# Creates a nested match scope
def create_match_scope_from(scope)
# Create a transparent match scope (for future matches)
scope.new_match_scope(nil)
end
def get_scope_nesting_level(scope)
scope.ephemeral_level
end
def set_scope_nesting_level(scope, level)
# Yup, 3x uses this method to reset the level, it also supports passing :all to destroy all
# ephemeral/local scopes - which is a sure way to create havoc.
#
scope.unset_ephemeral_var(level)
end
# Adds a relationship between the given `source` and `target` of the given `relationship_type`
# @param source [Puppet:Pops::Types::PCatalogEntryType] the source end of the relationship (from)
# @param target [Puppet:Pops::Types::PCatalogEntryType] the target end of the relationship (to)
# @param relationship_type [:relationship, :subscription] the type of the relationship
#
def add_relationship(source, target, relationship_type, scope)
# The 3x way is to record a Puppet::Parser::Relationship that is evaluated at the end of the compilation.
# This means it is not possible to detect any duplicates at this point (and signal where an attempt is made to
# add a duplicate). There is also no location information to signal the original place in the logic. The user will have
# to go fish.
# The 3.x implementation is based on Strings :-o, so the source and target must be transformed. The resolution is
# done by Catalog#resource(type, title). To do that, it creates a Puppet::Resource since it is responsible for
# translating the name/type/title and creating index-keys used by the catalog. The Puppet::Resource has bizarre parsing of
# the type and title (scanning for [], which is interpreted as type/title - but it gets it wrong).
# Moreover, if the type is "" or "component", the type is Class, and if the type is :main, it is :main; all other cases
# undergo capitalization of name-segments (foo::bar becomes Foo::Bar). (This was earlier done in the reverse by the parser).
# Further, the title undergoes the same munging !!!
#
# That bug infested nest of messy logic needs serious Exorcism!
#
# Unfortunately it is not easy to simply call more intelligent methods at a lower level as the compiler evaluates the recorded
# Relationship object at a much later point, and it is responsible for invoking all the messy logic.
#
# TODO: Revisit the below logic when there is a sane implementation of the catalog, compiler and resource. For now
# concentrate on transforming the type references to what is expected by the wacky logic.
#
# HOWEVER, the Compiler only records the Relationships, and the only method it calls is @relationships.each{|x| x.evaluate(catalog) }
# Which means a smarter Relationship class could do this right. Instead of obtaining the resource from the catalog using
# the borked resource(type, title) which creates a resource for the purpose of looking it up, it needs to instead
# scan the catalog's resources
#
# GAAAH, it is even worse!
# It starts in the parser, which parses "File['foo']" into an AST::ResourceReference with type = File, and title = foo
# This AST is evaluated by looking up the type/title in the scope - causing it to be loaded if it exists, and if not, the given
# type name/title is used. It does not search for resource instances, only classes and types. It returns symbolic information
# [type, [title, title]]. From this, instances of Puppet::Resource are created and returned. These only have type/title information
# filled out. One or an array of resources are returned.
# This set of evaluated (empty reference) Resource instances are then passed to the relationship operator. It creates a
# Puppet::Parser::Relationship giving it a source and a target that are (empty reference) Resource instances. These are then remembered
# until the relationship is evaluated by the compiler (at the end). When evaluation takes place, the (empty reference) Resource instances
# are converted to String (!?! WTF) on the simple format "#{type}[#{title}]", and the catalog is told to find a resource, by giving
# it this string. If it cannot find the resource it fails, else the before/notify parameter is appended with the target.
# The search for the resource begins with (you guessed it) again creating an (empty reference) resource from type and title (WTF?!?!).
# The catalog now uses the reference resource to compute a key [r.type, r.title.to_s] and also gets a uniqueness key from the
# resource (This is only a reference type created from title and type). If it cannot find it with the first key, it uses the
# uniqueness key to lookup.
#
# This is probably done to allow a resource type to munge/translate the title in some way (but it is quite unclear from the long
# and convoluted path of evaluation).
# In order to do this in a way that is similar to 3.x, two resources are created to be used as keys.
#
- #
- # TODO: logic that creates a PCatalogEntryType should resolve it to ensure it is loaded (to the best of known_resource_types knowledge).
- # If this is not done, the order in which things are done may be different? OTOH, it probably works anyway :-)
- # TODO: Not sure if references needs to be resolved via the scope?
- #
# And if that is not enough, a source/target may be a Collector (a baked query that will be evaluated by the
# compiler - it is simply passed through here for processing by the compiler at the right time).
#
if source.is_a?(Puppet::Parser::Collector)
# use verbatim - behavior defined by 3x
source_resource = source
else
# transform into the wonderful String representation in 3x
type, title = catalog_type_to_split_type_title(source)
source_resource = Puppet::Resource.new(type, title)
end
if target.is_a?(Puppet::Parser::Collector)
# use verbatim - behavior defined by 3x
target_resource = target
else
# transform into the wonderful String representation in 3x
type, title = catalog_type_to_split_type_title(target)
target_resource = Puppet::Resource.new(type, title)
end
# Add the relationship to the compiler for later evaluation.
scope.compiler.add_relationship(Puppet::Parser::Relationship.new(source_resource, target_resource, relationship_type))
end
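# Illustrative sketch (plain Ruby, not part of the diffed source) of the "Type[title]"
# string form that the 3x relationship evaluation described above ends up using.
type, title = 'File', '/tmp/example'   # assumed example values
"#{type}[#{title}]"                    # => "File[/tmp/example]"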
# Coerce value `v` to numeric or fails.
# The given value `v` is coerced to Numeric, and if that fails the operation
# calls {#fail}.
# @param v [Object] the value to convert
# @param o [Object] originating instruction
# @param scope [Object] the (runtime specific) scope where evaluation of o takes place
# @return [Numeric] value `v` converted to Numeric.
#
def coerce_numeric(v, o, scope)
unless n = Puppet::Pops::Utils.to_n(v)
fail(Puppet::Pops::Issues::NOT_NUMERIC, o, {:value => v})
end
n
end
def call_function(name, args, o, scope)
- # Call via 4x API if it is available, and the function exists
- #
- if loaders = Puppet.lookup(:loaders) {nil}
- # find the loader that loaded the code, or use the private_environment_loader (sees env + all modules)
- adapter = Puppet::Pops::Utils.find_adapter(o, Puppet::Pops::Adapters::LoaderAdapter)
- loader = adapter.nil? ? loaders.private_environment_loader : adapter.loader
- if loader && func = loader.load(:function, name)
- return func.call(scope, *args)
- end
+ # Call via 4x API if the function exists there
+ loaders = scope.compiler.loaders
+ # find the loader that loaded the code, or use the private_environment_loader (sees env + all modules)
+ adapter = Puppet::Pops::Utils.find_adapter(o, Puppet::Pops::Adapters::LoaderAdapter)
+ loader = adapter.nil? ? loaders.private_environment_loader : adapter.loader
+ if loader && func = loader.load(:function, name)
+ return func.call(scope, *args)
end
+ # Call via 3x API if function exists there
fail(Puppet::Pops::Issues::UNKNOWN_FUNCTION, o, {:name => name}) unless Puppet::Parser::Functions.function(name)
- # TODO: if Puppet[:biff] == true, then 3x functions should be called via loaders above
# Arguments must be mapped since functions are unaware of the new and magical creatures in 4x.
- # NOTE: Passing an empty string last converts :undef to empty string
+ # NOTE: Passing an empty string last converts nil/:undef to empty string
mapped_args = args.map {|a| convert(a, scope, '') }
result = scope.send("function_#{name}", mapped_args)
# Prevent non r-value functions from leaking their result (they are not written to care about this)
Puppet::Parser::Functions.rvalue?(name) ? result : nil
end
# The o is used for source reference
def create_resource_parameter(o, scope, name, value, operator)
file, line = extract_file_line(o)
Puppet::Parser::Resource::Param.new(
:name => name,
+ # Here we must convert nil values to :undef for the 3x logic to work
:value => convert(value, scope, :undef), # converted to 3x since 4x supports additional objects / types
:source => scope.source, :line => line, :file => file,
:add => operator == :'+>'
)
end
+ CLASS_STRING = 'class'.freeze
+
def create_resources(o, scope, virtual, exported, type_name, resource_titles, evaluated_parameters)
# TODO: Unknown resource causes creation of Resource to fail with ArgumentError, should give
# a proper Issue. Now the result is "Error while evaluating a Resource Statement" with the message
# from the raised exception. (It may be good enough).
# resolve in scope.
fully_qualified_type, resource_titles = scope.resolve_type_and_titles(type_name, resource_titles)
# Not 100% accurate as this is the resource expression location and each title is processed separately
# The titles are however the result of evaluation and they have no location at this point (an array
# of positions for the source expressions would be required for this to work).
# TODO: Revisit and possibly improve the accuracy.
#
file, line = extract_file_line(o)
# Build a resource for each title
resource_titles.map do |resource_title|
resource = Puppet::Parser::Resource.new(
fully_qualified_type, resource_title,
:parameters => evaluated_parameters,
:file => file,
:line => line,
:exported => exported,
:virtual => virtual,
# WTF is this? Which source is this? The file? The name of the context ?
:source => scope.source,
:scope => scope,
:strict => true
)
if resource.resource_type.is_a? Puppet::Resource::Type
resource.resource_type.instantiate_resource(scope, resource)
end
scope.compiler.add_resource(scope, resource)
- scope.compiler.evaluate_classes([resource_title], scope, false, true) if fully_qualified_type == 'class'
+ scope.compiler.evaluate_classes([resource_title], scope, false, true) if fully_qualified_type == CLASS_STRING
# Turn the resource into a PType (a reference to a resource type)
# weed out nils
resource_to_ptype(resource)
end
end
# Defines default parameters for a type with the given name.
#
def create_resource_defaults(o, scope, type_name, evaluated_parameters)
# Note that name must be capitalized in this 3x call
# The 3x impl creates a Resource instance with a bogus title and then asks the created resource
# for the type of the name.
# Note, locations are available per parameter.
#
scope.define_settings(capitalize_qualified_name(type_name), evaluated_parameters)
end
# Capitalizes each segment of a qualified name
#
def capitalize_qualified_name(name)
- name.split(/::/).map(&:capitalize).join('::')
+ name.split(/::/).map(&:capitalize).join(NAME_SPACE_SEPARATOR)
end
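# Illustrative example (plain Ruby, not part of the diffed source) of the per-segment
# capitalization performed above.
'foo::bar'.split(/::/).map(&:capitalize).join('::')   # => "Foo::Bar"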
# Creates resource overrides for all resource type objects in evaluated_resources. The same set of
# evaluated parameters are applied to all.
#
def create_resource_overrides(o, scope, evaluated_resources, evaluated_parameters)
# Not 100% accurate as this is the resource expression location and each title is processed separately
# The titles are however the result of evaluation and they have no location at this point (an array
# of positions for the source expressions would be required for this to work).
# TODO: Revisit and possibly improve the accuracy.
#
file, line = extract_file_line(o)
evaluated_resources.each do |r|
+ unless r.is_a?(Puppet::Pops::Types::PResourceType) && r.type_name != 'class'
+ fail(Puppet::Pops::Issues::ILLEGAL_OVERRIDEN_TYPE, o, {:actual => r} )
+ end
resource = Puppet::Parser::Resource.new(
r.type_name, r.title,
:parameters => evaluated_parameters,
:file => file,
:line => line,
# WTF is this? Which source is this? The file? The name of the context ?
:source => scope.source,
:scope => scope
)
scope.compiler.add_override(resource)
end
end
# Finds a resource given a type and a title.
#
def find_resource(scope, type_name, title)
scope.compiler.findresource(type_name, title)
end
# Returns the value of a resource's parameter by first looking up the parameter in the resource
# and then in the defaults for the resource. Since the resource exists (it must in order to look up its
# parameters, any overrides have already been applied). Defaults are not applied to a resource until it
# has been finished (which typically has not taken place when this is evaluated; hence the dual lookup).
#
def get_resource_parameter_value(scope, resource, parameter_name)
# This gets the parameter value, or nil (for both valid parameters and parameters that do not exist).
val = resource[parameter_name]
- if val.nil? && defaults = scope.lookupdefaults(resource.type)
- # NOTE: 3x resource keeps defaults as hash using symbol for name as key to Parameter which (again) holds
- # name and value.
- # NOTE: meta parameters that are unset ends up here, and there are no defaults for those encoded
- # in the defaults, they may receive hardcoded defaults later (e.g. 'tag').
- param = defaults[parameter_name.to_sym]
- # Some parameters (meta parameters like 'tag') does not return a param from which the value can be obtained
- # at all times. Instead, they return a nil param until a value has been set.
- val = param.nil? ? nil : param.value
+
+ # Sometimes the resource is a Puppet::Parser::Resource and sometimes it is
+ # a Puppet::Resource. The Puppet::Resource case occurs when puppet language
+ # is evaluated against an already completed catalog (where all instances of
+ # Puppet::Parser::Resource are converted to Puppet::Resource instances).
+ # Evaluating against an already completed catalog is really only found in
+ # the language specification tests, where the puppet language is used to
+ # test itself.
+ if resource.is_a?(Puppet::Parser::Resource)
+ # The defaults must be looked up in the scope where the resource was created (not in the given
+ # scope where the lookup takes place).
+ resource_scope = resource.scope
+ if val.nil? && resource_scope && defaults = resource_scope.lookupdefaults(resource.type)
+ # NOTE: 3x resource keeps defaults as hash using symbol for name as key to Parameter which (again) holds
+ # name and value.
+ # NOTE: meta parameters that are unset end up here, and there are no defaults for those encoded
+ # in the defaults, they may receive hardcoded defaults later (e.g. 'tag').
+ param = defaults[parameter_name.to_sym]
+ # Some parameters (meta parameters like 'tag') do not return a param from which the value can be obtained
+ # at all times. Instead, they return a nil param until a value has been set.
+ val = param.nil? ? nil : param.value
+ end
end
val
end
# Returns true, if the given name is the name of a resource parameter.
#
def is_parameter_of_resource?(scope, resource, name)
resource.valid_parameter?(name)
end
def resource_to_ptype(resource)
return nil if resource.nil?
- type_calculator.infer(resource)
+ # inference returns the meta type since the 3x Resource is an alternate way to describe a type
+ type_calculator.infer(resource).type
end
# This is the same type of "truth" as used in the current Puppet DSL.
#
def is_true? o
# Is the value true? This allows us to control the definition of truth
# in one place.
case o
- when ''
- false
+ # Support :undef since it may come from a 3x structure
when :undef
false
else
!!o
end
end
# Utility method for TrueClass || FalseClass
# @param x [Object] the object to test if it is instance of TrueClass or FalseClass
def is_boolean? x
x.is_a?(TrueClass) || x.is_a?(FalseClass)
end
def initialize
@@convert_visitor ||= Puppet::Pops::Visitor.new(self, "convert", 2, 2)
end
# Converts 4x supported values to 3x values. This is required because
# resources and other objects do not know about the new type system, and do not support
# regular expressions. Unfortunately this has to be done for array and hash as well.
# A complication is that catalog types need to be resolved against the scope.
#
def convert(o, scope, undef_value)
@@convert_visitor.visit_this_2(self, o, scope, undef_value)
end
def convert_NilClass(o, scope, undef_value)
undef_value
end
+ def convert_String(o, scope, undef_value)
+ # although wasteful, needed because user code may mutate these strings in Resources
+ o.frozen? ? o.dup : o
+ end
+
def convert_Object(o, scope, undef_value)
o
end
def convert_Array(o, scope, undef_value)
o.map {|x| convert(x, scope, undef_value) }
end
def convert_Hash(o, scope, undef_value)
result = {}
o.each {|k,v| result[convert(k, scope, undef_value)] = convert(v, scope, undef_value) }
result
end
def convert_Regexp(o, scope, undef_value)
# Puppet 3x cannot handle parameter values that are regular expressions. Turn into regexp string in
# source form
o.inspect
end
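# Illustrative example (plain Ruby, not part of the diffed source): Regexp#inspect yields
# the source-form string handed over to the 3x side.
/^foo.*/.inspect   # => "/^foo.*/"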
def convert_Symbol(o, scope, undef_value)
case o
+ # Support :undef since it may come from a 3x structure
when :undef
- undef_value # 3x wants :undef as empty string in function
+ undef_value # 3x wants undef as either empty string or :undef
else
o # :default, and all others are verbatim since they are new in future evaluator
end
end
- def convert_PAbstractType(o, scope, undef_value)
+ def convert_PAnyType(o, scope, undef_value)
o
end
- def convert_PResourceType(o,scope, undef_value)
- # Needs conversion by calling scope to resolve the name and possibly return a different name
- # Resolution can only be called with an array, and returns an array. Here there is only one name
- type, titles = scope.resolve_type_and_titles(o.type_name, [o.title])
- # Note: a title of nil makes Resource class throw error with information that is wrong
- Puppet::Resource.new(type, titles[0].nil? ? '' : titles[0] )
- end
+ def convert_PCatalogEntryType(o, scope, undef_value)
+ # Since 4x does not support dynamic scoping, all names are absolute and can be
+ # used as is (with some check/transformation/mangling between absolute/relative form
+ # due to Puppet::Resource's idiosyncratic behavior where some references must be
+ # absolute and others cannot be).
+ # Thus there is no need to call scope.resolve_type_and_titles to do dynamic lookup.
- def convert_PHostClassType(o, scope, undef_value)
- # Needs conversion by calling scope to resolve the name and possibly return a different name
- # Resolution can only be called with an array, and returns an array. Here there is only one name
- type, titles = scope.resolve_type_and_titles('class', [o.class_name])
- # Note: a title of nil makes Resource class throw error with information that is wrong
- Puppet::Resource.new(type, titles[0].nil? ? '' : titles[0] )
+ Puppet::Resource.new(*catalog_type_to_split_type_title(o))
end
private
# Produces an array with [type, title] from a PCatalogEntryType
- # Used to produce reference resource instances (used when 3x is operating on a resource).
+ # This method is used to produce the arguments for creation of reference resource instances
+ # (used when 3x is operating on a resource).
+ # Ensures that resources are *not* absolute.
#
def catalog_type_to_split_type_title(catalog_type)
- case catalog_type
+ split_type = catalog_type.is_a?(Puppet::Pops::Types::PType) ? catalog_type.type : catalog_type
+ case split_type
when Puppet::Pops::Types::PHostClassType
- return ['Class', catalog_type.class_name]
+ class_name = split_type.class_name
+ ['class', class_name.nil? ? nil : class_name.sub(/^::/, '')]
when Puppet::Pops::Types::PResourceType
- return [catalog_type.type_name, catalog_type.title]
+ type_name = split_type.type_name
+ title = split_type.title
+ if type_name =~ /^(::)?[Cc]lass/
+ ['class', title.nil? ? nil : title.sub(/^::/, '')]
+ else
+ # Ensure that title is '' if nil
+ # Resources with an absolute name always result in an error because tagging does not support leading ::
+ [type_name.nil? ? nil : type_name.sub(/^::/, ''), title.nil? ? '' : title]
+ end
else
- raise ArgumentError, "Cannot split the type #{catalog_type.class}, it is neither a PHostClassType, nor a PResourceClass."
+ raise ArgumentError, "Cannot split the type #{catalog_type.class}, it represents neither a PHostClassType, nor a PResourceType."
end
end
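# Illustrative example (plain Ruby, not part of the diffed source) of the absolute-name
# stripping applied above before handing names to Puppet::Resource.
'::apache::vhost'.sub(/^::/, '')   # => "apache::vhost"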
def extract_file_line(o)
source_pos = Puppet::Pops::Utils.find_closest_positioned(o)
return [nil, -1] unless source_pos
[source_pos.locator.file, source_pos.line]
end
def find_closest_positioned(o)
return nil if o.nil? || o.is_a?(Puppet::Pops::Model::Program)
o.offset.nil? ? find_closest_positioned(o.eContainer) : Puppet::Pops::Adapters::SourcePosAdapter.adapt(o)
end
# Creates a diagnostic producer
def diagnostic_producer
Puppet::Pops::Validation::DiagnosticProducer.new(
ExceptionRaisingAcceptor.new(), # Raises exception on all issues
SeverityProducer.new(), # All issues are errors
-# Puppet::Pops::Validation::SeverityProducer.new(), # All issues are errors
Puppet::Pops::Model::ModelLabelProvider.new())
end
# Configure the severity of failures
class SeverityProducer < Puppet::Pops::Validation::SeverityProducer
Issues = Puppet::Pops::Issues
def initialize
super
p = self
# Issues triggering warning only if --debug is on
if Puppet[:debug]
p[Issues::EMPTY_RESOURCE_SPECIALIZATION] = :warning
else
p[Issues::EMPTY_RESOURCE_SPECIALIZATION] = :ignore
end
+
+ # Store config issues, ignore or warning
+ p[Issues::RT_NO_STORECONFIGS_EXPORT] = Puppet[:storeconfigs] ? :ignore : :warning
+ p[Issues::RT_NO_STORECONFIGS] = Puppet[:storeconfigs] ? :ignore : :warning
end
end
# An acceptor of diagnostics that immediately raises an exception.
class ExceptionRaisingAcceptor < Puppet::Pops::Validation::Acceptor
def accept(diagnostic)
super
Puppet::Pops::IssueReporter.assert_and_report(self, {:message => "Evaluation Error:", :emit_warnings => true })
if errors?
raise ArgumentError, "Internal Error: Configuration of runtime error handling wrong: should have raised exception"
end
end
end
class EvaluationError < StandardError
end
end
diff --git a/lib/puppet/pops/functions/dispatch.rb b/lib/puppet/pops/functions/dispatch.rb
index 2a6508e08..29a58e036 100644
--- a/lib/puppet/pops/functions/dispatch.rb
+++ b/lib/puppet/pops/functions/dispatch.rb
@@ -1,71 +1,76 @@
# Defines a connection between an implementation method and the signature that
# the method will handle.
#
# This interface should not be used directly. Instead dispatches should be
# constructed using the DSL defined in {Puppet::Functions}.
#
# @api private
class Puppet::Pops::Functions::Dispatch < Puppet::Pops::Evaluator::CallableSignature
# @api public
attr_reader :type
# TODO: refactor to parameter_names since that makes it API
attr_reader :param_names
attr_reader :injections
# Describes how arguments are woven if there are injections; a regular argument is a given arg index, an array
# is an injection description.
#
attr_reader :weaving
# @api public
attr_reader :block_name
# @api private
def initialize(type, method_name, param_names, block_name, injections, weaving, last_captures)
@type = type
@method_name = method_name
@param_names = param_names || []
@block_name = block_name
@injections = injections || []
@weaving = weaving
@last_captures = last_captures
end
# @api private
def parameter_names
@param_names
end
# @api private
def last_captures_rest?
!! @last_captures
end
# @api private
def invoke(instance, calling_scope, args)
instance.send(@method_name, *weave(calling_scope, args))
end
# @api private
def weave(scope, args)
# no need to weave if there are no injections
if injections.empty?
args
else
- injector = Puppet.lookup(:injector)
+ injector = nil # lazy lookup of the injector via Puppet.lookup(:injector)
weaving.map do |knit|
if knit.is_a?(Array)
injection_data = @injections[knit[0]]
- # inject
- if injection_data[3] == :producer
+ case injection_data[3]
+ when :dispatcher_internal
+ # currently only supports :scope injection
+ scope
+ when :producer
+ injector ||= Puppet.lookup(:injector)
injector.lookup_producer(scope, injection_data[0], injection_data[2])
else
+ injector ||= Puppet.lookup(:injector)
injector.lookup(scope, injection_data[0], injection_data[2])
end
else
- # pick that argument
+ # pick that argument (injection of static value)
args[knit]
end
end
end
end
end
diff --git a/lib/puppet/pops/functions/dispatcher.rb b/lib/puppet/pops/functions/dispatcher.rb
index a4f912dc4..f15ad373b 100644
--- a/lib/puppet/pops/functions/dispatcher.rb
+++ b/lib/puppet/pops/functions/dispatcher.rb
@@ -1,237 +1,70 @@
# Evaluate the dispatches defined as {Puppet::Pops::Functions::Dispatch}
# instances to call the appropriate method on the
# {Puppet::Pops::Functions::Function} instance.
#
# @api private
class Puppet::Pops::Functions::Dispatcher
attr_reader :dispatchers
-# @api private
+ # @api private
def initialize()
@dispatchers = [ ]
end
# Answers if dispatching has been defined
# @return [Boolean] true if dispatching has been defined
#
# @api private
def empty?
@dispatchers.empty?
end
# Dispatches the call to the first found signature (entry with matching type).
#
# @param instance [Puppet::Functions::Function] - the function to call
# @param calling_scope [T.B.D::Scope] - the scope of the caller
# @param args [Array<Object>] - the given arguments in the form of an Array
# @return [Object] - what the called function produced
#
# @api private
def dispatch(instance, calling_scope, args)
tc = Puppet::Pops::Types::TypeCalculator
actual = tc.infer_set(args)
found = @dispatchers.find { |d| tc.callable?(d.type, actual) }
if found
found.invoke(instance, calling_scope, args)
else
- raise ArgumentError, "function '#{instance.class.name}' called with mis-matched arguments\n#{diff_string(instance.class.name, actual)}"
+ raise ArgumentError, "function '#{instance.class.name}' called with mis-matched arguments\n#{Puppet::Pops::Evaluator::CallableMismatchDescriber.diff_string(instance.class.name, actual, @dispatchers)}"
end
end
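# Illustrative sketch (plain Ruby, not part of the diffed source) of the first-match
# selection above, with simple predicates standing in for callable-type checks.
signatures = [->(a) { a.is_a?(String) }, ->(a) { a.is_a?(Integer) }]
found = signatures.find { |sig| sig.call(42) }
found.call(42)   # => true (the Integer signature was selected)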
# Adds a regular dispatch for one method name
#
# @param type [Puppet::Pops::Types::PArrayType, Puppet::Pops::Types::PTupleType] - type describing signature
# @param method_name [String] - the name of the method that will be called when type matches given arguments
# @param names [Array<String>] - array with names matching the number of parameters specified by type (or empty array)
#
# @api private
def add_dispatch(type, method_name, param_names, block_name, injections, weaving, last_captures)
@dispatchers << Puppet::Pops::Functions::Dispatch.new(type, method_name, param_names, block_name, injections, weaving, last_captures)
end
# Produces a CallableType for a single signature, and a Variant[<callables>] otherwise
#
# @api private
def to_type()
# make a copy to make sure it can be contained by someone else (even if it is not contained here, it
# should be treated as immutable).
#
callables = dispatchers.map { | dispatch | dispatch.type.copy }
# multiple signatures, produce a Variant type of Callable1-n (must copy them)
# single signature, produce single Callable
callables.size > 1 ? Puppet::Pops::Types::TypeFactory.variant(*callables) : callables.pop
end
# @api private
def signatures
@dispatchers
end
-
- private
-
- # Produces a string with the difference between the given arguments and support signature(s).
- #
- # @api private
- def diff_string(name, args_type)
- result = [ ]
- if @dispatchers.size < 2
- dispatch = @dispatchers[ 0 ]
- params_type = dispatch.type.param_types
- block_type = dispatch.type.block_type
- params_names = dispatch.param_names
- result << "expected:\n #{name}(#{signature_string(dispatch)}) - #{arg_count_string(dispatch.type)}"
- else
- result << "expected one of:\n"
- result << (@dispatchers.map do |d|
- params_type = d.type.param_types
- " #{name}(#{signature_string(d)}) - #{arg_count_string(d.type)}"
- end.join("\n"))
- end
- result << "\nactual:\n #{name}(#{arg_types_string(args_type)}) - #{arg_count_string(args_type)}"
- result.join('')
- end
-
- # Produces a string for the signature(s)
- #
- # @api private
- def signature_string(dispatch) # args_type, param_names
- param_types = dispatch.type.param_types
- block_type = dispatch.type.block_type
- param_names = dispatch.param_names
-
- from, to = param_types.size_range
- if from == 0 && to == 0
- # No parameters function
- return ''
- end
-
- required_count = from
- # there may be more names than there are types, and count needs to be subtracted from the count
- # to make it correct for the last named element
- adjust = max(0, param_names.size() -1)
- last_range = [max(0, (from - adjust)), (to - adjust)]
-
- types =
- case param_types
- when Puppet::Pops::Types::PTupleType
- param_types.types
- when Puppet::Pops::Types::PArrayType
- [ param_types.element_type ]
- end
- tc = Puppet::Pops::Types::TypeCalculator
-
- # join type with names (types are always present, names are optional)
- # separate entries with comma
- #
- result =
- if param_names.empty?
- types.each_with_index.map {|t, index| tc.string(t) + opt_value_indicator(index, required_count, 0) }
- else
- limit = param_names.size
- result = param_names.each_with_index.map do |name, index|
- [tc.string(types[index] || types[-1]), name].join(' ') + opt_value_indicator(index, required_count, limit)
- end
- end.join(', ')
-
- # Add {from, to} for the last type
- # This works for both Array and Tuple since it describes the allowed count of the "last" type element
- # for both. It does not show anything when the range is {1,1}.
- #
- result += range_string(last_range)
-
- # If there is a block, include it with its own optional count {0,1}
- case dispatch.type.block_type
- when Puppet::Pops::Types::POptionalType
- result << ', ' unless result == ''
- result << "#{tc.string(dispatch.type.block_type.optional_type)} #{dispatch.block_name} {0,1}"
- when Puppet::Pops::Types::PCallableType
- result << ', ' unless result == ''
- result << "#{tc.string(dispatch.type.block_type)} #{dispatch.block_name}"
- when NilClass
- # nothing
- end
- result
- end
-
- # Why oh why Ruby do you not have a standard Math.max ?
- # @api private
- def max(a, b)
- a >= b ? a : b
- end
-
- # @api private
- def opt_value_indicator(index, required_count, limit)
- count = index + 1
- (count > required_count && count < limit) ? '?' : ''
- end
-
- # @api private
- def arg_count_string(args_type)
- if args_type.is_a?(Puppet::Pops::Types::PCallableType)
- size_range = args_type.param_types.size_range # regular parameters
- adjust_range=
- case args_type.block_type
- when Puppet::Pops::Types::POptionalType
- size_range[1] += 1
- when Puppet::Pops::Types::PCallableType
- size_range[0] += 1
- size_range[1] += 1
- when NilClass
- # nothing
- else
- raise ArgumentError, "Internal Error, only nil, Callable, and Optional[Callable] supported by Callable block type"
- end
- else
- size_range = args_type.size_range
- end
- "arg count #{range_string(size_range, false)}"
- end
-
- # @api private
- def arg_types_string(args_type)
- types =
- case args_type
- when Puppet::Pops::Types::PTupleType
- last_range = args_type.repeat_last_range
- args_type.types
- when Puppet::Pops::Types::PArrayType
- last_range = args_type.size_range
- [ args_type.element_type ]
- end
- # stringify generalized versions or it will display Integer[10,10] for "10", String['the content'] etc.
- # note that type must be copied since generalize is a mutating operation
- tc = Puppet::Pops::Types::TypeCalculator
- result = types.map { |t| tc.string(tc.generalize!(t.copy)) }.join(', ')
-
- # Add {from, to} for the last type
- # This works for both Array and Tuple since it describes the allowed count of the "last" type element
- # for both. It does not show anything when the range is {1,1}.
- #
- result += range_string(last_range)
- result
- end
-
- # Formats a range into a string of the form: `{from, to}`
- #
- # The following cases are optimized:
- #
- # * from and to are equal => `{from}`
- # * from and to are both and 1 and squelch_one == true => `''`
- # * from is 0 and to is 1 => `'?'`
- # * to is INFINITY => `{from, }`
- #
- # @api private
- def range_string(size_range, squelch_one = true)
- from, to = size_range
- if from == to
- (squelch_one && from == 1) ? '' : "{#{from}}"
- elsif to == Puppet::Pops::Types::INFINITY
- "{#{from},}"
- elsif from == 0 && to == 1
- '?'
- else
- "{#{from},#{to}}"
- end
- end
end
diff --git a/lib/puppet/pops/issue_reporter.rb b/lib/puppet/pops/issue_reporter.rb
index cac2efcbc..02b173c14 100644
--- a/lib/puppet/pops/issue_reporter.rb
+++ b/lib/puppet/pops/issue_reporter.rb
@@ -1,79 +1,87 @@
class Puppet::Pops::IssueReporter
# @param acceptor [Puppet::Pops::Validation::Acceptor] the acceptor containing reported issues
- # @option options [String] :message (nil) A message text to use as prefix in a single Error message
- # @option options [Boolean] :emit_warnings (false) A message text to use as prefix in a single Error message
- # @option options [Boolean] :emit_errors (true) whether errors should be emitted or only given message
+ # @option options [String] :message (nil) A message text to use as prefix in
+ # a single Error message
+ # @option options [Boolean] :emit_warnings (false) whether warnings should be emitted
+ # @option options [Boolean] :emit_errors (true) whether errors should be
+ # emitted or only the given message
# @option options [Exception] :exception_class (Puppet::ParseError) The exception to raise
#
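# The call below is an illustrative sketch (not from the original source); `acceptor` is assumed to
# come from a prior validation run.
# @example Reporting accumulated validation results
#   Puppet::Pops::IssueReporter.assert_and_report(acceptor,
#     :message => 'Error while validating', :emit_warnings => true)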
def self.assert_and_report(acceptor, options)
return unless acceptor
max_errors = Puppet[:max_errors]
- max_warnings = Puppet[:max_warnings] + 1
- max_deprecations = Puppet[:max_deprecations] + 1
+ max_warnings = Puppet[:max_warnings]
+ max_deprecations =
+ if Puppet[:disable_warnings].include?('deprecations')
+ 0
+ else
+ Puppet[:max_deprecations]
+ end
+
emit_warnings = options[:emit_warnings] || false
emit_errors = options[:emit_errors].nil? ? true : !!options[:emit_errors]
emit_message = options[:message]
emit_exception = options[:exception_class] || Puppet::ParseError
# If there are warnings output them
warnings = acceptor.warnings
if emit_warnings && warnings.size > 0
formatter = Puppet::Pops::Validation::DiagnosticFormatterPuppetStyle.new
emitted_w = 0
emitted_dw = 0
acceptor.warnings.each do |w|
if w.severity == :deprecation
# Do *not* call Puppet.deprecation_warning; it is for internal deprecation, not
# deprecation of constructs in manifests! (It is not designed for that purpose even if
# used throughout the code base).
#
Puppet.warning(formatter.format(w)) if emitted_dw < max_deprecations
emitted_dw += 1
else
Puppet.warning(formatter.format(w)) if emitted_w < max_warnings
emitted_w += 1
end
- break if emitted_w > max_warnings && emitted_dw > max_deprecations # but only then
+ break if emitted_w >= max_warnings && emitted_dw >= max_deprecations # but only then
end
end
# If there were errors, report the first found. Use a puppet style formatter.
errors = acceptor.errors
if errors.size > 0
unless emit_errors
raise emit_exception.new(emit_message)
end
formatter = Puppet::Pops::Validation::DiagnosticFormatterPuppetStyle.new
if errors.size == 1 || max_errors <= 1
# raise immediately
exception = emit_exception.new(format_with_prefix(emit_message, formatter.format(errors[0])))
# if an exception was given as cause, use its backtrace instead of the one indicating "here"
if errors[0].exception
exception.set_backtrace(errors[0].exception.backtrace)
end
raise exception
end
emitted = 0
if emit_message
Puppet.err(emit_message)
end
errors.each do |e|
Puppet.err(formatter.format(e))
emitted += 1
break if emitted >= max_errors
end
warnings_message = (emit_warnings && warnings.size > 0) ? ", and #{warnings.size} warnings" : ""
giving_up_message = "Found #{errors.size} errors#{warnings_message}. Giving up"
exception = emit_exception.new(giving_up_message)
exception.file = errors[0].file
raise exception
end
end
def self.format_with_prefix(prefix, message)
return message unless prefix
[prefix, message].join(' ')
end
end
diff --git a/lib/puppet/pops/issues.rb b/lib/puppet/pops/issues.rb
index e37de30e8..ba53ddc0e 100644
--- a/lib/puppet/pops/issues.rb
+++ b/lib/puppet/pops/issues.rb
@@ -1,473 +1,548 @@
# Defines classes to deal with issues and message formatting, and defines constants for Issues.
# @api public
#
module Puppet::Pops::Issues
# Describes an issue, and can produce a message for an occurrence of the issue.
#
class Issue
# The issue code
# @return [Symbol]
attr_reader :issue_code
# A block producing the message
# @return [Proc]
attr_reader :message_block
# Names that must be bound in an occurrence of the issue to be able to produce a message.
# These are the names in addition to the requirements stipulated by the Issue formatter contract; i.e. `:label`
# and `:semantic`.
#
attr_reader :arg_names
# If this issue can have its severity lowered to :warning, :deprecation, or :ignored
attr_writer :demotable
# Configures the Issue with required arguments (bound by occurrence), and a block producing a message.
def initialize issue_code, *args, &block
@issue_code = issue_code
@message_block = block
@arg_names = args
@demotable = true
end
# Returns true if it is allowed to demote this issue
def demotable?
@demotable
end
# Formats a message for an occurrence of the issue with argument bindings passed in a hash.
# The hash must contain a LabelProvider bound to the key `label` and the semantic model element
# bound to the key `semantic`. All required arguments as specified by `arg_names` must be bound
# in the given `hash`.
# @api public
#
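# The call below is an illustrative sketch (not from the original source); `an_issue`, `labels` and
# `model` are assumed bindings for an Issue, a LabelProvider, and a model element.
# @example Formatting an occurrence of an issue
#   an_issue.format(:label => labels, :semantic => model, :name => 'foo-bar')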
def format(hash ={})
# Create a Message Data where all hash keys become methods for convenient interpolation
# in issue text.
msgdata = MessageData.new(*arg_names)
begin
# Evaluate the message block in the msg data's binding
msgdata.format(hash, &message_block)
rescue StandardError => e
Puppet::Pops::Issues::MessageData
raise RuntimeError, "Error while reporting issue: #{issue_code}. #{e.message}", caller
end
end
end
# Provides a binding of arguments passed to Issue.format to method names available
# in the issue's message producing block.
# @api private
#
class MessageData
def initialize *argnames
singleton = class << self; self end
argnames.each do |name|
singleton.send(:define_method, name) do
@data[name]
end
end
end
def format(hash, &block)
@data = hash
instance_eval &block
end
# Returns the label provider given as a key in the hash passed to #format.
# If given an argument, calls #label on the label provider (caller would otherwise have to
# call label.label(it)).
#
def label(it = nil)
raise "Label provider key :label must be set to produce the text of the message!" unless @data[:label]
it.nil? ? @data[:label] : @data[:label].label(it)
end
# Returns the semantic model element given as a key in the hash passed to #format.
#
def semantic
raise "Label provider key :semantic must be set to produce the text of the message!" unless @data[:semantic]
@data[:semantic]
end
end
# Defines an issue with the given `issue_code`, additional required parameters, and a block producing a message.
# The block is evaluated in the context of a MessageData which provides convenient access to all required arguments
# via accessor methods. In addition to accessors for specified arguments, these are also available:
# * `label` - a `LabelProvider` that provides human understandable names for model elements and production of article (a/an/the).
# * `semantic` - the model element for which the issue is reported
#
# @param issue_code [Symbol] the issue code for the issue used as an identifier, should be the same as the constant
# the issue is bound to.
# @param args [Symbol] required arguments that must be passed when formatting the message, may be empty
# @param block [Proc] a block producing the message string, evaluated in a MessageData scope. The produced string
# should not end with a period as additional information may be appended.
#
# @see MessageData
# @api public
#
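# The definition below is an illustrative sketch (not from the original source); MY_ISSUE is not a
# real Puppet issue code.
# @example Defining an issue with one required argument
#   MY_ISSUE = issue :MY_ISSUE, :name do
#     "The given name '#{name}' is not acceptable"
#   end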
def self.issue (issue_code, *args, &block)
Issue.new(issue_code, *args, &block)
end
# Creates a non-demotable issue.
# @see Issue.issue
#
def self.hard_issue(issue_code, *args, &block)
result = Issue.new(issue_code, *args, &block)
result.demotable = false
result
end
# @comment Here follows definitions of issues. The intent is to provide a list from which yardoc can be generated
# containing more detailed information / explanation of the issue.
# These issues are set as constants, but it is unfortunately not possible for the created object to easily know which
# name it is bound to. Instead the constant has to be repeated. (Alternatively, it could be done by instead calling
# #const_set on the module, but the extra work required to get yardoc output vs. the extra effort to repeat the name
# twice makes it not worth it (if doable at all, since there is no tag to artificially construct a constant, and
# the parse tag does not produce any result for a constant assignment).
# This is allowed (3.1) and has not yet been deprecated.
# @todo configuration
#
NAME_WITH_HYPHEN = issue :NAME_WITH_HYPHEN, :name do
"#{label.a_an_uc(semantic)} may not have a name containing a hyphen. The name '#{name}' is not legal"
end
# When a variable name contains a hyphen and these are illegal.
# It is possible to control if a hyphen is legal in a name or not using the setting TODO
# @todo describe the setting
# @api public
# @todo configuration if this is error or warning
#
VAR_WITH_HYPHEN = issue :VAR_WITH_HYPHEN, :name do
"A variable name may not contain a hyphen. The name '#{name}' is not legal"
end
# A class, definition, or node may only appear at top level or inside other classes
# @todo Is this really true for nodes? Can they be inside classes? Isn't that too late?
# @api public
#
NOT_TOP_LEVEL = hard_issue :NOT_TOP_LEVEL do
"Classes, definitions, and nodes may only appear at toplevel or inside other classes"
end
CROSS_SCOPE_ASSIGNMENT = hard_issue :CROSS_SCOPE_ASSIGNMENT, :name do
"Illegal attempt to assign to '#{name}'. Cannot assign to variables in other namespaces"
end
# Assignment can only be made to certain types of left hand expressions such as variables.
ILLEGAL_ASSIGNMENT = hard_issue :ILLEGAL_ASSIGNMENT do
"Illegal attempt to assign to '#{label.a_an(semantic)}'. Not an assignable reference"
end
# Variables are immutable, cannot reassign in the same assignment scope
ILLEGAL_REASSIGNMENT = hard_issue :ILLEGAL_REASSIGNMENT, :name do
"Cannot reassign variable #{name}"
end
ILLEGAL_RESERVED_ASSIGNMENT = hard_issue :ILLEGAL_RESERVED_ASSIGNMENT, :name do
"Attempt to assign to a reserved variable name: '#{name}'"
end
# Assignment cannot be made to numeric match result variables
ILLEGAL_NUMERIC_ASSIGNMENT = issue :ILLEGAL_NUMERIC_ASSIGNMENT, :varname do
"Illegal attempt to assign to the numeric match result variable '$#{varname}'. Numeric variables are not assignable"
end
- APPEND_FAILED = issue :APPEND_FAILED, :message do
- "Append assignment += failed with error: #{message}"
- end
-
- DELETE_FAILED = issue :DELETE_FAILED, :message do
- "'Delete' assignment -= failed with error: #{message}"
- end
-
# parameters cannot have numeric names, clashes with match result variables
ILLEGAL_NUMERIC_PARAMETER = issue :ILLEGAL_NUMERIC_PARAMETER, :name do
- "The numeric parameter name '$#{varname}' cannot be used (clashes with numeric match result variables)"
+ "The numeric parameter name '$#{name}' cannot be used (clashes with numeric match result variables)"
end
# In certain versions of Puppet it may be allowed to assign to a key that has not already been assigned
# in an array or a hash. This is an optional validation that may be turned on to prevent accidental
# mutation.
#
ILLEGAL_INDEXED_ASSIGNMENT = issue :ILLEGAL_INDEXED_ASSIGNMENT do
"Illegal attempt to assign via [index/key]. Not an assignable reference"
end
# When indexed assignment ($x[]=) is allowed, the leftmost expression must be
# a variable expression.
#
ILLEGAL_ASSIGNMENT_VIA_INDEX = hard_issue :ILLEGAL_ASSIGNMENT_VIA_INDEX do
"Illegal attempt to assign to #{label.a_an(semantic)} via [index/key]. Not an assignable reference"
end
- # For unsupported operators (e.g. -= in puppet 3).
+ APPENDS_DELETES_NO_LONGER_SUPPORTED = hard_issue :APPENDS_DELETES_NO_LONGER_SUPPORTED, :operator do
+ "The operator '#{operator}' is no longer supported. See http://links.puppetlabs.com/remove-plus-equals"
+ end
+
+ # For unsupported operators (e.g. += and -= in puppet 4).
#
UNSUPPORTED_OPERATOR = hard_issue :UNSUPPORTED_OPERATOR, :operator do
+ "The operator '#{operator}' is not supported."
+ end
+
+ # For operators that are not supported in specific contexts (e.g. '* =>' in
+ # resource defaults)
+ #
+ UNSUPPORTED_OPERATOR_IN_CONTEXT = hard_issue :UNSUPPORTED_OPERATOR_IN_CONTEXT, :operator do
"The operator '#{operator}' in #{label.a_an(semantic)} is not supported."
end
# For non applicable operators (e.g. << on Hash).
#
OPERATOR_NOT_APPLICABLE = hard_issue :OPERATOR_NOT_APPLICABLE, :operator, :left_value do
"Operator '#{operator}' is not applicable to #{label.a_an(left_value)}."
end
COMPARISON_NOT_POSSIBLE = hard_issue :COMPARISON_NOT_POSSIBLE, :operator, :left_value, :right_value, :detail do
"Comparison of: #{label(left_value)} #{operator} #{label(right_value)}, is not possible. Caused by '#{detail}'."
end
MATCH_NOT_REGEXP = hard_issue :MATCH_NOT_REGEXP, :detail do
"Can not convert right match operand to a regular expression. Caused by '#{detail}'."
end
MATCH_NOT_STRING = hard_issue :MATCH_NOT_STRING, :left_value do
"Left match operand must result in a String value. Got #{label.a_an(left_value)}."
end
# Some expressions/statements may not produce a value (known as right-value, or rvalue).
# This may vary between puppet versions.
#
NOT_RVALUE = issue :NOT_RVALUE do
"Invalid use of expression. #{label.a_an_uc(semantic)} does not produce a value"
end
# Appending to attributes is only allowed in certain types of resource expressions.
#
ILLEGAL_ATTRIBUTE_APPEND = hard_issue :ILLEGAL_ATTRIBUTE_APPEND, :name, :parent do
"Illegal +> operation on attribute #{name}. This operator can not be used in #{label.a_an(parent)}"
end
ILLEGAL_NAME = hard_issue :ILLEGAL_NAME, :name do
"Illegal name. The given name #{name} does not conform to the naming rule /^((::)?[a-z_]\w*)(::[a-z]\w*)*$/"
end
ILLEGAL_VAR_NAME = hard_issue :ILLEGAL_VAR_NAME, :name do
"Illegal variable name, The given name '#{name}' does not conform to the naming rule /^((::)?[a-z]\w*)*((::)?[a-z_]\w*)$/"
end
ILLEGAL_NUMERIC_VAR_NAME = hard_issue :ILLEGAL_NUMERIC_VAR_NAME, :name do
"Illegal numeric variable name, The given name '#{name}' must be a decimal value if it starts with a digit 0-9"
end
# In case a model is constructed programmatically, it must create valid type references.
#
ILLEGAL_CLASSREF = hard_issue :ILLEGAL_CLASSREF, :name do
"Illegal type reference. The given name '#{name}' does not conform to the naming rule"
end
# This is a runtime issue - storeconfigs must be on in order to collect exported resources. This issue should be
# set to :ignore when just checking syntax.
# @todo should be a :warning by default
#
RT_NO_STORECONFIGS = issue :RT_NO_STORECONFIGS do
"You cannot collect exported resources without storeconfigs being set; the collection will be ignored"
end
# This is a runtime issue - storeconfigs must be on in order to export a resource. This issue should be
# set to :ignore when just checking syntax.
# @todo should be a :warning by default
#
RT_NO_STORECONFIGS_EXPORT = issue :RT_NO_STORECONFIGS_EXPORT do
"You cannot collect exported resources without storeconfigs being set; the export is ignored"
end
# A hostname may only contain letters, digits, '_', '-', and '.'.
#
ILLEGAL_HOSTNAME_CHARS = hard_issue :ILLEGAL_HOSTNAME_CHARS, :hostname do
"The hostname '#{hostname}' contains illegal characters (only letters, digits, '_', '-', and '.' are allowed)"
end
# A hostname may only contain letters, digits, '_', '-', and '.'.
#
ILLEGAL_HOSTNAME_INTERPOLATION = hard_issue :ILLEGAL_HOSTNAME_INTERPOLATION do
"An interpolated expression is not allowed in a hostname of a node"
end
# Issues when an expression is used where it is not legal.
# E.g. an arithmetic expression where a hostname is expected.
#
ILLEGAL_EXPRESSION = hard_issue :ILLEGAL_EXPRESSION, :feature, :container do
"Illegal expression. #{label.a_an_uc(semantic)} is unacceptable as #{feature} in #{label.a_an(container)}"
end
- # Issues when an expression is used where it is not legal.
- # E.g. an arithmetic expression where a hostname is expected.
+ # Issues when a variable is not a NAME
#
ILLEGAL_VARIABLE_EXPRESSION = hard_issue :ILLEGAL_VARIABLE_EXPRESSION do
"Illegal variable expression. #{label.a_an_uc(semantic)} did not produce a variable name (String or Numeric)."
end
# Issues when an expression is used illegally in a query.
# query only supports == and !=, and not <, > etc.
#
ILLEGAL_QUERY_EXPRESSION = hard_issue :ILLEGAL_QUERY_EXPRESSION do
"Illegal query expression. #{label.a_an_uc(semantic)} cannot be used in a query"
end
# If an attempt is made to make a resource default virtual or exported.
#
NOT_VIRTUALIZEABLE = hard_issue :NOT_VIRTUALIZEABLE do
"Resource Defaults are not virtualizable"
end
# When an attempt is made to use multiple keys (to produce a range in Ruby - e.g. $arr[2,-1]).
# This is not supported in 3x, but is allowed in 4x.
#
UNSUPPORTED_RANGE = issue :UNSUPPORTED_RANGE, :count do
"Attempt to use unsupported range in #{label.a_an(semantic)}, #{count} values given for max 1"
end
- DEPRECATED_NAME_AS_TYPE = issue :DEPRECATED_NAME_AS_TYPE, :name do
- "Resource references should now be capitalized. The given '#{name}' does not have the correct form"
- end
-
ILLEGAL_RELATIONSHIP_OPERAND_TYPE = issue :ILLEGAL_RELATIONSHIP_OPERAND_TYPE, :operand do
"Illegal relationship operand, can not form a relationship with #{label.a_an(operand)}. A Catalog type is required."
end
NOT_CATALOG_TYPE = issue :NOT_CATALOG_TYPE, :type do
"Illegal relationship operand, can not form a relationship with something of type #{type}. A Catalog type is required."
end
BAD_STRING_SLICE_ARITY = issue :BAD_STRING_SLICE_ARITY, :actual do
"String supports [] with one or two arguments. Got #{actual}"
end
BAD_STRING_SLICE_TYPE = issue :BAD_STRING_SLICE_TYPE, :actual do
"String-Type [] requires all arguments to be integers (or default). Got #{actual}"
end
BAD_ARRAY_SLICE_ARITY = issue :BAD_ARRAY_SLICE_ARITY, :actual do
"Array supports [] with one or two arguments. Got #{actual}"
end
BAD_HASH_SLICE_ARITY = issue :BAD_HASH_SLICE_ARITY, :actual do
"Hash supports [] with one or more arguments. Got #{actual}"
end
BAD_INTEGER_SLICE_ARITY = issue :BAD_INTEGER_SLICE_ARITY, :actual do
"Integer-Type supports [] with one or two arguments (from, to). Got #{actual}"
end
BAD_INTEGER_SLICE_TYPE = issue :BAD_INTEGER_SLICE_TYPE, :actual do
"Integer-Type [] requires all arguments to be integers (or default). Got #{actual}"
end
BAD_COLLECTION_SLICE_TYPE = issue :BAD_COLLECTION_SLICE_TYPE, :actual do
"A Type's size constraint arguments must be a single Integer type, or 1-2 integers (or default). Got #{label.a_an(actual)}"
end
BAD_FLOAT_SLICE_ARITY = issue :BAD_FLOAT_SLICE_ARITY, :actual do
"Float-Type supports [] with one or two arguments (from, to). Got #{actual}"
end
BAD_FLOAT_SLICE_TYPE = issue :BAD_FLOAT_SLICE_TYPE, :actual do
"Float-Type [] requires all arguments to be floats, or integers (or default). Got #{actual}"
end
BAD_SLICE_KEY_TYPE = issue :BAD_SLICE_KEY_TYPE, :left_value, :expected_classes, :actual do
expected_text = if expected_classes.size > 1
"one of #{expected_classes.join(', ')} are"
else
"#{expected_classes[0]} is"
end
"#{label.a_an_uc(left_value)}[] cannot use #{actual} where #{expected_text} expected"
end
BAD_TYPE_SLICE_TYPE = issue :BAD_TYPE_SLICE_TYPE, :base_type, :actual do
"#{base_type}[] arguments must be types. Got #{actual}"
end
BAD_TYPE_SLICE_ARITY = issue :BAD_TYPE_SLICE_ARITY, :base_type, :min, :max, :actual do
base_type_label = base_type.is_a?(String) ? base_type : label.a_an_uc(base_type)
if max == -1 || max == 1.0 / 0.0 # Infinity
"#{base_type_label}[] accepts #{min} or more arguments. Got #{actual}"
elsif max && max != min
"#{base_type_label}[] accepts #{min} to #{max} arguments. Got #{actual}"
else
"#{base_type_label}[] accepts #{min} #{label.plural_s(min, 'argument')}. Got #{actual}"
end
end
BAD_TYPE_SPECIALIZATION = hard_issue :BAD_TYPE_SPECIALIZATION, :type, :message do
"Error creating type specialization of #{label.a_an(type)}, #{message}"
end
ILLEGAL_TYPE_SPECIALIZATION = issue :ILLEGAL_TYPE_SPECIALIZATION, :kind do
"Cannot specialize an already specialized #{kind} type"
end
ILLEGAL_RESOURCE_SPECIALIZATION = issue :ILLEGAL_RESOURCE_SPECIALIZATION, :actual do
"First argument to Resource[] must be a resource type or a String. Got #{actual}."
end
EMPTY_RESOURCE_SPECIALIZATION = issue :EMPTY_RESOURCE_SPECIALIZATION do
"Arguments to Resource[] are all empty/undefined"
end
ILLEGAL_HOSTCLASS_NAME = hard_issue :ILLEGAL_HOSTCLASS_NAME, :name do
"Illegal Class name in class reference. #{label.a_an_uc(name)} cannot be used where a String is expected"
end
- # Issues when an expression is used where it is not legal.
- # E.g. an arithmetic expression where a hostname is expected.
- #
ILLEGAL_DEFINITION_NAME = hard_issue :ILLEGAL_DEFINITION_NAME, :name do
"Unacceptable name. The name '#{name}' is unacceptable as the name of #{label.a_an(semantic)}"
end
- NON_NAMESPACED_FUNCTION = hard_issue :NON_NAMESPACED_FUNCTION, :name do
- "A Puppet Function must be defined within a module name-space. The name '#{name}' is unacceptable."
+ CAPTURES_REST_NOT_LAST = hard_issue :CAPTURES_REST_NOT_LAST, :param_name do
+ "Parameter $#{param_name} is not last, and has 'captures rest'"
+ end
+
+ CAPTURES_REST_NOT_SUPPORTED = hard_issue :CAPTURES_REST_NOT_SUPPORTED, :container, :param_name do
+ "Parameter $#{param_name} has 'captures rest' - not supported in #{label.a_an(container)}"
+ end
+
+ REQUIRED_PARAMETER_AFTER_OPTIONAL = hard_issue :REQUIRED_PARAMETER_AFTER_OPTIONAL, :param_name do
+ "Parameter $#{param_name} is required but appears after optional parameters"
+ end
+
+ MISSING_REQUIRED_PARAMETER = hard_issue :MISSING_REQUIRED_PARAMETER, :param_name do
+ "Parameter $#{param_name} is required but no value was given"
end
NOT_NUMERIC = issue :NOT_NUMERIC, :value do
"The value '#{value}' cannot be converted to Numeric."
end
UNKNOWN_FUNCTION = issue :UNKNOWN_FUNCTION, :name do
"Unknown function: '#{name}'."
end
UNKNOWN_VARIABLE = issue :UNKNOWN_VARIABLE, :name do
"Unknown variable: '#{name}'."
end
RUNTIME_ERROR = issue :RUNTIME_ERROR, :detail do
"Error while evaluating #{label.a_an(semantic)}, #{detail}"
end
UNKNOWN_RESOURCE_TYPE = issue :UNKNOWN_RESOURCE_TYPE, :type_name do
"Resource type not found: #{type_name.capitalize}"
end
+ ILLEGAL_RESOURCE_TYPE = hard_issue :ILLEGAL_RESOURCE_TYPE, :actual do
+ "Illegal Resource Type expression, expected result to be a type name, or untitled Resource, got #{actual}"
+ end
+
+ DUPLICATE_TITLE = issue :DUPLICATE_TITLE, :title do
+ "The title '#{title}' has already been used in this resource expression"
+ end
+
+ DUPLICATE_ATTRIBUTE = issue :DUPLICATE_ATTRIBUTE, :attribute do
+ "The attribute '#{attribute}' has already been set in this resource body"
+ end
+
+ MISSING_TITLE = hard_issue :MISSING_TITLE do
+ "Missing title. The title expression resulted in undef"
+ end
+
+ MISSING_TITLE_AT = hard_issue :MISSING_TITLE_AT, :index do
+ "Missing title at index #{index}. The title expression resulted in an undef title"
+ end
+
+ ILLEGAL_TITLE_TYPE_AT = hard_issue :ILLEGAL_TITLE_TYPE_AT, :index, :actual do
+ "Illegal title type at index #{index}. Expected String, got #{actual}"
+ end
+
+ EMPTY_STRING_TITLE_AT = hard_issue :EMPTY_STRING_TITLE_AT, :index do
+ "Empty string title at #{index}. Title strings must have a length greater than zero."
+ end
+
UNKNOWN_RESOURCE = issue :UNKNOWN_RESOURCE, :type_name, :title do
"Resource not found: #{type_name.capitalize}['#{title}']"
end
UNKNOWN_RESOURCE_PARAMETER = issue :UNKNOWN_RESOURCE_PARAMETER, :type_name, :title, :param_name do
"The resource #{type_name.capitalize}['#{title}'] does not have a parameter called '#{param_name}'"
end
DIV_BY_ZERO = hard_issue :DIV_BY_ZERO do
"Division by 0"
end
RESULT_IS_INFINITY = hard_issue :RESULT_IS_INFINITY, :operator do
"The result of the #{operator} expression is Infinity"
end
# TODO_HEREDOC
EMPTY_HEREDOC_SYNTAX_SEGMENT = issue :EMPTY_HEREDOC_SYNTAX_SEGMENT, :syntax do
"Heredoc syntax specification has empty segment between '+' : '#{syntax}'"
end
ILLEGAL_EPP_PARAMETERS = issue :ILLEGAL_EPP_PARAMETERS do
"Ambiguous EPP parameter expression. Probably missing '<%-' before parameters to remove leading whitespace"
end
DISCONTINUED_IMPORT = hard_issue :DISCONTINUED_IMPORT do
"Use of 'import' has been discontinued in favor of a manifest directory. See http://links.puppetlabs.com/puppet-import-deprecation"
end
+
+ IDEM_EXPRESSION_NOT_LAST = issue :IDEM_EXPRESSION_NOT_LAST do
+ "This #{label.label(semantic)} is not productive. A non productive construct may only be placed last in a block/sequence"
+ end
+
+ IDEM_NOT_ALLOWED_LAST = hard_issue :IDEM_NOT_ALLOWED_LAST, :container do
+ "This #{label.label(semantic)} is not productive. #{label.a_an_uc(container)} can not end with a non productive construct"
+ end
+
+ RESERVED_WORD = hard_issue :RESERVED_WORD, :word do
+ "Use of reserved word: #{word}, must be quoted if intended to be a String value"
+ end
+
+ RESERVED_TYPE_NAME = hard_issue :RESERVED_TYPE_NAME, :name do
+ "The name: '#{name}' is already defined by Puppet and can not be used as the name of #{label.a_an(semantic)}."
+ end
+
+ UNMATCHED_SELECTOR = hard_issue :UNMATCHED_SELECTOR, :param_value do
+ "No matching entry for selector parameter with value '#{param_value}'"
+ end
+
+ ILLEGAL_NODE_INHERITANCE = issue :ILLEGAL_NODE_INHERITANCE do
+ "Node inheritance is not supported in Puppet >= 4.0.0. See http://links.puppetlabs.com/puppet-node-inheritance-deprecation"
+ end
+
+ ILLEGAL_OVERRIDEN_TYPE = issue :ILLEGAL_OVERRIDEN_TYPE, :actual do
+ "Resource Override can only operate on resources, got: #{label.label(actual)}"
+ end
+
+ RESERVED_PARAMETER = hard_issue :RESERVED_PARAMETER, :container, :param_name do
+ "The parameter $#{param_name} redefines a built in parameter in #{label.the(container)}"
+ end
+
+ TYPE_MISMATCH = hard_issue :TYPE_MISMATCH, :expected, :actual do
+ "Expected value of type #{expected}, got #{actual}"
+ end
+
+ MULTIPLE_ATTRIBUTES_UNFOLD = hard_issue :MULTIPLE_ATTRIBUTES_UNFOLD do
+ "Unfolding of attributes from Hash can only be used once per resource body"
+ end
end
diff --git a/lib/puppet/pops/loader/base_loader.rb b/lib/puppet/pops/loader/base_loader.rb
index f7b34ece7..cd916865b 100644
--- a/lib/puppet/pops/loader/base_loader.rb
+++ b/lib/puppet/pops/loader/base_loader.rb
@@ -1,102 +1,102 @@
# BaseLoader
# ===
# An abstract implementation of Puppet::Pops::Loader::Loader
#
# A derived class should implement `find(typed_name)` and set entries, and possibly handle "miss caching".
#
# @api private
#
class Puppet::Pops::Loader::BaseLoader < Puppet::Pops::Loader::Loader
# The parent loader
attr_reader :parent
# An internal name used for debugging and error message purposes
attr_reader :loader_name
def initialize(parent_loader, loader_name)
@parent = parent_loader # the higher priority loader to consult
@named_values = {} # hash name => NamedEntry
@last_name = nil # the last name asked for (optimization)
@last_result = nil # the value of the last name (optimization)
@loader_name = loader_name # the name of the loader (not the name-space it is a loader for)
end
# @api public
#
def load_typed(typed_name)
# The check for "last queried name" is an optimization when a module searches. First it checks up its parent
# chain, then itself, and then delegates to modules it depends on.
# These modules are typically parented by the same
# loader as the one initiating the search. It is inefficient to again try to search the same loader for
# the same name.
if typed_name == @last_name
@last_result
else
@last_name = typed_name
@last_result = internal_load(typed_name)
end
end
# This method is final (subclasses should not override it)
#
# @api private
#
def get_entry(typed_name)
@named_values[typed_name]
end
# @api private
#
def set_entry(typed_name, value, origin = nil)
if entry = @named_values[typed_name] then fail_redefine(entry); end
@named_values[typed_name] = Puppet::Pops::Loader::Loader::NamedEntry.new(typed_name, value, origin)
end
# @api private
#
def add_entry(type, name, value, origin)
set_entry(Puppet::Pops::Loader::Loader::TypedName.new(type, name), value, origin)
end
# Promotes an already created entry (typically from another loader) to this loader
#
# @api private
#
def promote_entry(named_entry)
typed_name = named_entry.typed_name
- if entry = @named_values[typed_name] then fail_redefined(entry); end
+ if entry = @named_values[typed_name] then fail_redefine(entry); end
@named_values[typed_name] = named_entry
end
private
def fail_redefine(entry)
origin_info = entry.origin ? origin_label(entry.origin) : "an unknown location"
- raise ArgumentError, "Attempt to redefine entity '#{entry.typed_name}' originally set at #{origin_label(origin)}.#{origin_info}"
+ raise ArgumentError, "Attempt to redefine entity '#{entry.typed_name}'. Originally set at #{origin_info}."
end
# TODO: Should not really be here?? - TODO: A Label provider ? semantics for the URI?
#
def origin_label(origin)
if origin && origin.is_a?(URI)
origin.to_s
elsif origin.respond_to?(:uri)
origin.uri.to_s
else
- nil
+ origin
end
end
# loads in priority order:
# 1. already loaded here
# 2. load from parent
# 3. find it here
# 4. give up
#
def internal_load(typed_name)
# avoid calling get_entry, by looking it up
@named_values[typed_name] || parent.load_typed(typed_name) || find(typed_name)
end
end
diff --git a/lib/puppet/pops/loader/loader.rb b/lib/puppet/pops/loader/loader.rb
index 256fc373e..37c912c2d 100644
--- a/lib/puppet/pops/loader/loader.rb
+++ b/lib/puppet/pops/loader/loader.rb
@@ -1,180 +1,180 @@
# Loader
# ===
# A Loader is responsible for loading "entities" ("instantiable and executable objects in the puppet language", which
# are type, hostclass, definition, function, and bindings).
#
# The main methods for users of a Loader are `load` and `load_typed`, which return a previously loaded entity
# of a given type/name, searching for and loading the entity if it is not already loaded.
#
# private entities
# ---
# TODO: handle loading of entities that are private. Suggest that all calls pass an origin_loader (the loader
# where the request originated, or the symbol :public). A module loader has one (or possibly a list) of what is
# considered to represent private loader - i.e. the dependency loader for a module. If an entity is private
# it should be stored with this status, and an error should be raised if the origin_loader is not on the list
# of accepted "private" loaders.
# The private loaders can not be given at creation time (they are parented by the loader in question). Another
# alternative is to check if the origin_loader is a child loader, but this requires bidirectional links
# between loaders, or a search to determine whether the loader holding the private entity is a parent of the origin_loader.
#
# @api public
#
class Puppet::Pops::Loader::Loader
# Produces the value associated with the given name if already loaded, or available for loading
# by this loader, one of its parents, or other loaders visible to this loader.
# This is the method an external party should use to "get" the named element.
#
# An implementor of this method should first check if the given name is already loaded by self, or a parent
# loader, and if so return that result. If not, it should call `find` to perform the loading.
#
# @param type [Symbol] the type to load
# @param name [String, Symbol] the name of the entity to load
# @return [Object, nil] the value or nil if not found
#
# @api public
#
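# The call below is an illustrative sketch (not from the original source); 'mymodule::square' is a
# made-up function name and `loader` an assumed Loader instance.
# @example Loading a function by name
#   func = loader.load(:function, 'mymodule::square')
#   # => the loaded function, or nil if it cannot be found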
def load(type, name)
if result = load_typed(TypedName.new(type, name.to_s))
result.value
end
end
# Loads the given typed name, and returns a NamedEntry if found, else returns nil.
# This is the same as `load`, but returns a NamedEntry with origin/value information.
#
# @param typed_name [TypedName] - the type, name combination to lookup
# @return [NamedEntry, nil] the entry containing the loaded value, or nil if not found
#
# @api public
#
def load_typed(typed_name)
raise NotImplementedError.new
end
# Produces the value associated with the given name if defined **in this loader**, or nil if not defined.
# This lookup does not trigger any loading, or search of the given name.
# An implementor of this method may not search or look up in any other loader, and it may not
# define the name.
#
# @param typed_name [TypedName] - the type, name combination to lookup
#
# @api private
#
def [] (typed_name)
if found = get_entry(typed_name)
found.value
else
nil
end
end
# Searches for the given name in this loader's context (parents should already have searched their context(s) without
# producing a result when this method is called).
# An implementation of find typically caches the result.
#
# @param typed_name [TypedName] the type, name combination to lookup
# @return [NamedEntry, nil] the entry for the loaded entry, or nil if not found
#
# @api private
#
def find(typed_name)
raise NotImplementedError.new
end
# Returns the parent of the loader, or nil, if this is the top most loader. This implementation returns nil.
def parent
nil
end
# Produces the private loader for loaders that have one (the visibility given to loaded entities).
# For loaders that do not provide a private loader, self is returned.
#
# @api private
def private_loader
self
end
# Binds a value to a name. The name should not start with '::', but may contain multiple segments.
#
# @param type [Symbol] the type of the entity being set
# @param name [String, Symbol] the name of the entity being set
# @param origin [URI, #uri, String] the origin of the set entity, a URI, or provider of URI, or URI in string form
# @return [NamedEntry, nil] the created entry
#
# @api private
#
def set_entry(type, name, value, origin = nil)
raise NotImplementedError.new
end
# Produces a NamedEntry if a value is bound to the given name, or nil if nothing is bound.
#
# @param typed_name [TypedName] the type, name combination to lookup
# @return [NamedEntry, nil] the value bound in an entry
#
# @api private
#
def get_entry(typed_name)
raise NotImplementedError.new
end
# An entry for one entity loaded by the loader.
#
class NamedEntry
attr_reader :typed_name
attr_reader :value
attr_reader :origin
def initialize(typed_name, value, origin)
- @name = typed_name
+ @typed_name = typed_name
@value = value
@origin = origin
freeze()
end
end
# A name/type combination that can be used as a compound hash key
#
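# The example below is an illustrative sketch (not from the original source), based on the
# normalization performed in the initializer.
# @example Name normalization
#   tn = TypedName.new(:function, '::foo::bar')
#   tn.name       # => "foo::bar"
#   tn.qualified  # => true
#   tn.to_s       # => "function/foo::bar"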
class TypedName
attr_reader :type
attr_reader :name
attr_reader :name_parts
# True if name is qualified (more than a single segment)
attr_reader :qualified
def initialize(type, name)
@type = type
# relativize the name (get rid of leading ::), and make the split string available
@name_parts = name.to_s.split(/::/)
@name_parts.shift if name_parts[0].empty?
@name = name_parts.join('::')
@qualified = name_parts.size > 1
# precompute hash - the name is frozen, so this is safe to do
@hash = [self.class, type, @name].hash
# Not allowed to have numeric names - 0, 010, 0x10, 1.2 etc
if Puppet::Pops::Utils.is_numeric?(@name)
raise ArgumentError, "Illegal attempt to use a numeric name '#{name}' at #{origin_label(origin)}."
end
freeze()
end
def hash
@hash
end
def ==(o)
o.class == self.class && type == o.type && name == o.name
end
alias eql? ==
def to_s
"#{type}/#{name}"
end
end
end
diff --git a/lib/puppet/pops/loader/loader_paths.rb b/lib/puppet/pops/loader/loader_paths.rb
index a431a4801..09bb7e5b0 100644
--- a/lib/puppet/pops/loader/loader_paths.rb
+++ b/lib/puppet/pops/loader/loader_paths.rb
@@ -1,137 +1,118 @@
# LoaderPaths
# ===
# The central knowledge about loader paths: what they represent and how to instantiate from them.
# Contains helpers (*smart paths*) to deal with lazy resolution of paths.
#
# TODO: Currently only supports loading of functions (3 kinds)
#
module Puppet::Pops::Loader::LoaderPaths
# Returns an array of SmartPath, each instantiated with a reference to the given loader (for root path resolution
# and existence checks). The smart paths in the array appear in precedence order. The returned array may be
# mutated.
#
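# The example below is an illustrative sketch (not from the original source); `loader` is an assumed
# loader instance.
# @example Expected results per type
#   relative_paths_for_type(:function, loader)  # => [a FunctionPath4x instance]
#   relative_paths_for_type(:unknown, loader)   # => []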
- def self.relative_paths_for_type(type, loader) #, start_index_in_name)
+ def self.relative_paths_for_type(type, loader)
result =
- case type # typed_name.type
+ case type
when :function
- if Puppet[:biff] == true
- [FunctionPath4x.new(loader), FunctionPath3x.new(loader)]
- else
[FunctionPath4x.new(loader)]
- end
-
- # when :xxx # TODO: Add all other types
-
else
# unknown types, simply produce an empty result; no paths to check, nothing to find... move along...
[]
end
result
end
# # DO NOT REMOVE YET. needed later? when there is the need to decamel a classname
# def de_camel(fq_name)
# fq_name.to_s.gsub(/::/, '/').
# gsub(/([A-Z]+)([A-Z][a-z])/,'\1_\2').
# gsub(/([a-z\d])([A-Z])/,'\1_\2').
# tr("-", "_").
# downcase
# end
class SmartPath
# Generic path, in the sense of "if there are any entities of this kind to load, where are they?"
attr_reader :generic_path
# Creates SmartPath for the given loader (loader knows how to check for existence etc.)
def initialize(loader)
@loader = loader
end
def generic_path()
return @generic_path unless @generic_path.nil?
root_path = @loader.path
@generic_path = (root_path.nil? ? relative_path : File.join(root_path, relative_path))
end
# Effective path is the generic path + the name part(s) + extension.
#
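# The example below is an illustrative sketch (not from the original source); '/modroot' is a made-up
# loader root.
# @example For a FunctionPath4x and TypedName(:function, 'mymod::square')
#   # effective_path => /modroot/lib/puppet/functions/mymod/square.rb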
def effective_path(typed_name, start_index_in_name)
"#{File.join(generic_path, typed_name.name_parts)}#{extension}"
end
def relative_path()
raise NotImplementedError.new
end
def instantiator()
raise NotImplementedError.new
end
end
class RubySmartPath < SmartPath
def extension
".rb"
end
# Duplication of extension information, but avoids one call
def effective_path(typed_name, start_index_in_name)
"#{File.join(generic_path, typed_name.name_parts)}.rb"
end
end
class FunctionPath4x < RubySmartPath
FUNCTION_PATH_4X = File.join('lib', 'puppet', 'functions')
def relative_path
FUNCTION_PATH_4X
end
def instantiator()
Puppet::Pops::Loader::RubyFunctionInstantiator
end
end
- class FunctionPath3x < RubySmartPath
- FUNCTION_PATH_3X = File.join('lib', 'puppet', 'parser', 'functions')
-
- def relative_path
- FUNCTION_PATH_3X
- end
-
- def instantiator()
- Puppet::Pops::Loader::RubyLegacyFunctionInstantiator
- end
- end
-
# SmartPaths
# ===
# Holds effective SmartPath instances per type
#
class SmartPaths
def initialize(path_based_loader)
@loader = path_based_loader
@smart_paths = {}
end
# Ensures that the paths for the type have been probed and pruned to what exists relative to
# the given root.
#
# @param type [Symbol] the entity type to load
# @return [Array<SmartPath>] array of effective paths for type (may be empty)
#
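# The example below is an illustrative sketch (not from the original source); `loader` is an assumed
# path-based loader.
# @example Lazily resolving the searchable paths for functions
#   smart_paths = SmartPaths.new(loader)
#   smart_paths.effective_paths(:function)  # => e.g. [a FunctionPath4x instance], or [] when nothing is searchable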
def effective_paths(type)
smart_paths = @smart_paths
loader = @loader
unless effective_paths = smart_paths[type]
# type not yet processed, do the various directories for the type exist?
# Get the relative dirs for the type
paths_for_type = Puppet::Pops::Loader::LoaderPaths.relative_paths_for_type(type, loader)
# Check which directories exist in the loader's content/index
effective_paths = smart_paths[type] = paths_for_type.select { |sp| loader.meaningful_to_search?(sp) }
end
effective_paths
end
end
end
diff --git a/lib/puppet/pops/loader/ruby_function_instantiator.rb b/lib/puppet/pops/loader/ruby_function_instantiator.rb
index 6c2b716cf..de372ab27 100644
--- a/lib/puppet/pops/loader/ruby_function_instantiator.rb
+++ b/lib/puppet/pops/loader/ruby_function_instantiator.rb
@@ -1,34 +1,34 @@
# The RubyFunctionInstantiator instantiates a Puppet::Functions::Function given the ruby source
# that calls Puppet::Functions.create_function.
#
class Puppet::Pops::Loader::RubyFunctionInstantiator
# Produces an instance of the Function class with the given typed_name, or fails with an error if the
# given ruby source does not produce this instance when evaluated.
#
# @param loader [Puppet::Pops::Loader::Loader] The loader the function is associated with
# @param typed_name [Puppet::Pops::Loader::TypedName] the type / name of the function to load
# @param source_ref [URI, String] a reference to the source / origin of the ruby code to evaluate
# @param ruby_code_string [String] ruby code in a string
#
# @return [Puppet::Pops::Functions.Function] - an instantiated function with global scope closure associated with the given loader
#
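# The source below is an illustrative sketch (not from the original code) of the kind of file this
# instantiator expects to evaluate; 'square' is a made-up function name.
# @example A minimal 4x API function definition
#   # lib/puppet/functions/square.rb
#   Puppet::Functions.create_function(:square) do
#     dispatch :square do
#       param 'Integer', 'x'
#     end
#     def square(x)
#       x * x
#     end
#   end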
def self.create(loader, typed_name, source_ref, ruby_code_string)
unless ruby_code_string.is_a?(String) && ruby_code_string =~ /Puppet\:\:Functions\.create_function/
raise ArgumentError, "The code loaded from #{source_ref} does not seem to be a Puppet 4x API function - no create_function call."
end
- created = eval(ruby_code_string)
+ created = eval(ruby_code_string, nil, source_ref, 1)
unless created.is_a?(Class)
raise ArgumentError, "The code loaded from #{source_ref} did not produce a Function class when evaluated. Got '#{created.class}'"
end
unless created.name.to_s == typed_name.name()
raise ArgumentError, "The code loaded from #{source_ref} produced mis-matched name, expected '#{typed_name.name}', got #{created.name}"
end
# create the function instance - it needs closure (scope), and loader (i.e. where it should start searching for things
# when calling functions etc.).
# It should be bound to global scope
# TODO: Cheating wrt. scope - assuming it is found in the context
closure_scope = Puppet.lookup(:global_scope) { {} }
created.new(closure_scope, loader.private_loader)
end
end
diff --git a/lib/puppet/pops/loader/ruby_legacy_function_instantiator.rb b/lib/puppet/pops/loader/ruby_legacy_function_instantiator.rb
deleted file mode 100644
index 589b05c05..000000000
--- a/lib/puppet/pops/loader/ruby_legacy_function_instantiator.rb
+++ /dev/null
@@ -1,109 +0,0 @@
-# The RubyLegacyFunctionInstantiator loads a 3x function and turns it into a 4x function
-# that is called with 3x semantics (values are transformed to be 3x compliant).
-#
-# The code is loaded from a string obtained by reading the 3x function ruby code into a string
-# and then passing it to the loaders class method `create`. When Puppet[:biff] == true, the
-# 3x Puppet::Parser::Function.newfunction method relays back to this function loader's
-# class method legacy_newfunction which creates a Puppet::Functions class wrapping the
-# 3x function's block into a method in a function class derived from Puppet::Function.
-# This class is then returned, and the Legacy loader continues the same way as it does
-# for a 4x function.
-#
-# TODO: Wrapping of Scope
-# The 3x function expects itself to be Scope. It passes itself as scope to other parts of the runtime,
-# it expects to find all sorts of information in itself, get/set variables, get compiler, get environment
-# etc.
-# TODO: Transformation of arguments to 3x compliant objects
-#
-class Puppet::Pops::Loader::RubyLegacyFunctionInstantiator
-
- # Produces an instance of the Function class with the given typed_name, or fails with an error if the
- # given ruby source does not produce this instance when evaluated.
- #
- # @param loader [Puppet::Pops::Loader::Loader] The loader the function is associated with
- # @param typed_name [Puppet::Pops::Loader::TypedName] the type / name of the function to load
- # @param source_ref [URI, String] a reference to the source / origin of the ruby code to evaluate
- # @param ruby_code_string [String] ruby code in a string
- #
- # @return [Puppet::Pops::Functions.Function] - an instantiated function with global scope closure associated with the given loader
- #
- def self.create(loader, typed_name, source_ref, ruby_code_string)
- # Old Ruby API supports calling a method via ::
- # this must also be checked as well as call with '.'
- #
- unless ruby_code_string.is_a?(String) && ruby_code_string =~ /Puppet\:\:Parser\:\:Functions(?:\.|\:\:)newfunction/
- raise ArgumentError, "The code loaded from #{source_ref} does not seem to be a Puppet 3x API function - no newfunction call."
- end
-
- # The evaluation of the 3x function creation source should result in a call to the legacy_newfunction
- #
- created = eval(ruby_code_string)
- unless created.is_a?(Class)
- raise ArgumentError, "The code loaded from #{source_ref} did not produce a Function class when evaluated. Got '#{created.class}'"
- end
- unless created.name.to_s == typed_name.name()
- raise ArgumentError, "The code loaded from #{source_ref} produced mis-matched name, expected '#{typed_name.name}', got #{created.name}"
- end
- # create the function instance - it needs closure (scope), and loader (i.e. where it should start searching for things
- # when calling functions etc.
- # It should be bound to global scope
-
- # TODO: Cheating wrt. scope - assuming it is found in the context
- closure_scope = Puppet.lookup(:global_scope) { {} }
- created.new(closure_scope, loader)
- end
-
- # This is a new implementation of the method that is used in 3x to create a function.
- # The arguments are the same as those passed to Puppet::Parser::Functions.newfunction, hence its
- # deviation from regular method naming practice.
- #
- def self.legacy_newfunction(name, options, &block)
-
- # 3x api allows arity to be specified, if unspecified it is 0 or more arguments
- # arity >= 0, is an exact count
- # airty < 0 is the number of required arguments -1 (i.e. -1 is 0 or more)
- # (there is no upper cap, there is no support for optional values, or defaults)
- #
- arity = options[:arity] || -1
- if arity >= 0
- min_arg_count = arity
- max_arg_count = arity
- else
- min_arg_count = (arity + 1).abs
- # infinity
- max_arg_count = :default
- end
-
- # Create a 4x function wrapper around the 3x Function
- created_function_class = Puppet::Functions.create_function(name) do
- # define a method on the new Function class with the same name as the function, but
- # padded with __ because the function may represent a ruby method with the same name that
- # expects to have inherited from Kernel, and then Object.
- # (This can otherwise lead to infinite recursion, or that an ArgumentError is raised).
- #
- __name__ = :"__#{name}__"
- define_method(__name__, &block)
-
- # Define the method that is called from dispatch - this method just changes a call
- # with multiple unknown arguments to passing all in an array (since this is expected in the 3x API).
- # We want the call to be checked for type and number of arguments so cannot call the function
- # defined by the block directly since it is defined to take a single argument.
- #
- define_method(:__relay__call__) do |*args|
- # dup the args since the function may destroy them
- # TODO: Should convert arguments to 3x, now :undef is send to the function
- send(__name__, args.dup)
- end
-
- # Define a dispatch that performs argument type/count checking
- #
- dispatch :__relay__call__ do
- # Use Puppet Type Object (not Optional[Object] since the 3x API passes undef as empty string).
- param 'Object', 'args'
- # Specify arg count (transformed from 3x function arity specification).
- arg_count(min_arg_count, max_arg_count)
- end
- end
- created_function_class
- end
-end
diff --git a/lib/puppet/pops/loader/static_loader.rb b/lib/puppet/pops/loader/static_loader.rb
index 27cfbe462..50b6a15f9 100644
--- a/lib/puppet/pops/loader/static_loader.rb
+++ b/lib/puppet/pops/loader/static_loader.rb
@@ -1,69 +1,79 @@
# Static Loader contains constants, basic data types and other types required for the system
# to boot.
#
class Puppet::Pops::Loader::StaticLoader < Puppet::Pops::Loader::Loader
attr_reader :loaded
def initialize
@loaded = {}
create_logging_functions()
end
def load_typed(typed_name)
load_constant(typed_name)
end
def get_entry(typed_name)
load_constant(typed_name)
end
def find(name)
# There is nothing to search for, everything this loader knows about is already available
nil
end
def parent
nil # at top of the hierarchy
end
def to_s()
"(StaticLoader)"
end
private
def load_constant(typed_name)
@loaded[typed_name]
end
private
# Creates a function for each of the specified log levels
#
def create_logging_functions()
Puppet::Util::Log.levels.each do |level|
fc = Puppet::Functions.create_function(level) do
# create empty dispatcher to stop it from complaining about missing method since
# an override of :call is made instead of using dispatch.
dispatch(:log) { }
# Logs per the specified level, outputs formatted information for arrays, hashes etc.
# Overrides the implementation in Function that uses dispatching. This is not needed here
- # since it accepts 0-n Optional[Object]
+ # since it accepts 0-n Object.
#
define_method(:call) do |scope, *vals|
# NOTE: 3x, does this: vals.join(" ")
# New implementation uses the evaluator to get proper formatting per type
# TODO: uses a fake scope (nil) - fix when :scopes are available via settings
mapped = vals.map {|v| Puppet::Pops::Evaluator::EvaluatorImpl.new.string(v, nil) }
- Puppet.send(level, mapped.join(" "))
+
+ # Bypass Puppet.<level> call since it picks up source from "self" which is not applicable in the 4x
+ # Function API.
+ # TODO: When a function can obtain the file, line, pos of the call merge those in (3x supports
+ # options :file, :line. (These were never output when calling the 3x logging functions since
+ # 3x scope does not know about the calling location at that detailed level, nor do they
+ # appear in a report to stdout/error when included). Now, the output simply uses scope (like 3x)
+ # as this is good enough, but does not reflect the true call-stack; it is only a rough estimate
+ # of where the logging call originates from).
+ #
+ Puppet::Util::Log.create({:level => level, :source => scope, :message => mapped.join(" ")})
end
end
typed_name = Puppet::Pops::Loader::Loader::TypedName.new(:function, level)
# TODO:closure scope is fake (an empty hash) - waiting for new global scope to be available via lookup of :scopes
func = fc.new({},self)
@loaded[ typed_name ] = Puppet::Pops::Loader::Loader::NamedEntry.new(typed_name, func, __FILE__)
end
end
end
diff --git a/lib/puppet/pops/model/ast_transformer.rb b/lib/puppet/pops/model/ast_transformer.rb
index 4016cc266..2898534dd 100644
--- a/lib/puppet/pops/model/ast_transformer.rb
+++ b/lib/puppet/pops/model/ast_transformer.rb
@@ -1,663 +1,643 @@
require 'puppet/parser/ast'
# Transforms a Pops::Model to classic Puppet AST.
# The importer given to the constructor is the receiver of `import(file)` calls; it is called once per
# imported file, or is nil if imports are ignored.
# TODO: Documentation is currently skipped completely (it is only used for Rdoc)
#
class Puppet::Pops::Model::AstTransformer
AST = Puppet::Parser::AST
Model = Puppet::Pops::Model
attr_reader :importer
def initialize(source_file = "unknown-file", importer=nil)
@@transform_visitor ||= Puppet::Pops::Visitor.new(nil,"transform",0,0)
@@query_transform_visitor ||= Puppet::Pops::Visitor.new(nil,"query",0,0)
@@hostname_transform_visitor ||= Puppet::Pops::Visitor.new(nil,"hostname",0,0)
@importer = importer
@source_file = source_file
end
# Initialize klass from o (location) and hash (options to created instance).
# The object o is used to compute a source location. It may be nil. Source position is merged into
# the given options (non surgically). If o is non-nil, the first found source position going up
# the containment hierarchy is set. I.e. callers should pass nil if a source position is not wanted
# or known to be unobtainable for the object.
#
# @param o [Object, nil] object from which source position / location is obtained, may be nil
# @param klass [Class<Puppet::Parser::AST>] the ast class to create an instance of
# @param hash [Hash] hash with options for the class to create
#
def ast(o, klass, hash={})
# create and pass hash with file and line information
klass.new(merge_location(hash, o))
end
# THIS IS AN EXPENSIVE OPERATION
# The 3x AST requires line, pos etc. to be recorded directly in the AST nodes and this information
# must be computed.
# (Newer implementation only computes the information that is actually needed; typically when raising an
# exception).
#
def merge_location(hash, o)
if o
pos = {}
source_pos = Puppet::Pops::Utils.find_closest_positioned(o)
if source_pos
pos[:line] = source_pos.line
pos[:pos] = source_pos.pos
end
pos[:file] = @source_file if @source_file
hash = hash.merge(pos)
end
hash
end
# Transforms pops expressions into AST 3.1 statements/expressions
def transform(o)
begin
@@transform_visitor.visit_this(self,o)
rescue StandardError => e
loc_data = {}
merge_location(loc_data, o)
raise Puppet::ParseError.new("Error while transforming to Puppet 3 AST: #{e.message}",
loc_data[:file], loc_data[:line], loc_data[:pos], e)
end
end
# Transforms pops expressions into AST 3.1 query expressions
def query(o)
@@query_transform_visitor.visit_this(self, o)
end
# Transforms pops expressions into AST 3.1 hostnames
def hostname(o)
@@hostname_transform_visitor.visit_this(self, o)
end
def transform_LiteralFloat(o)
# Numbers are Names in the AST !! (Name a.k.a BareWord)
ast o, AST::Name, :value => o.value.to_s
end
def transform_LiteralInteger(o)
s = case o.radix
when 10
o.value.to_s
when 8
"0%o" % o.value
when 16
"0x%X" % o.value
else
"bad radix:" + o.value.to_s
end
# Numbers are Names in the AST !! (Name a.k.a BareWord)
ast o, AST::Name, :value => s
end
# Transforms all literal values to string (override for those that should not be AST::String)
#
def transform_LiteralValue(o)
ast o, AST::String, :value => o.value.to_s
end
def transform_LiteralBoolean(o)
ast o, AST::Boolean, :value => o.value
end
def transform_Factory(o)
transform(o.current)
end
def transform_ArithmeticExpression(o)
ast o, AST::ArithmeticOperator2, :lval => transform(o.left_expr), :rval=>transform(o.right_expr),
:operator => o.operator.to_s
end
def transform_Array(o)
ast nil, AST::ASTArray, :children => o.collect {|x| transform(x) }
end
# Puppet AST only allows:
# * variable[expression] => Hasharray Access
# * NAME [expressions] => Resource Reference(s)
# * type [expressions] => Resource Reference(s)
# * HashArrayAccesses[expression] => HasharrayAccesses
#
# i.e. it is not possible to do `func()[3]`, `[1,2,3][$x]`, `{foo=>10, bar=>20}[$x]` etc. since
# LHS is not an expression
#
# Validation for 3.x semantics should validate the illegal cases. This transformation may fail,
# or ignore excess information if the expressions are not correct.
# This means that the transformation does not have to evaluate the lhs to detect the target expression.
#
# Hm, this seems to have changed: the LHS (variable) is evaluated if evaluable, else it is used as is.
#
def transform_AccessExpression(o)
case o.left_expr
when Model::QualifiedName
ast o, AST::ResourceReference, :type => o.left_expr.value, :title => transform(o.keys)
when Model::QualifiedReference
ast o, AST::ResourceReference, :type => o.left_expr.value, :title => transform(o.keys)
when Model::VariableExpression
ast o, AST::HashOrArrayAccess, :variable => transform(o.left_expr), :key => transform(o.keys()[0])
else
ast o, AST::HashOrArrayAccess, :variable => transform(o.left_expr), :key => transform(o.keys()[0])
end
end
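# For illustration, the 3.x-legal access forms the comment above refers to, written
# as Puppet DSL fragments (a sketch, not an exhaustive list):
#
#   $data[1]                   # variable[expression]   => HashOrArrayAccess
#   file['/tmp/a']             # NAME[expressions]      => resource reference (lowercase, deprecated)
#   File['/tmp/a', '/tmp/b']   # Type[expressions]      => resource reference(s)
#   $data['a']['b']            # chained access         => HashOrArrayAccess of a HashOrArrayAccess
#
# whereas forms such as func()[3] or [1,2,3][$x] are rejected by the 3.x validation
# mentioned above.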
# Puppet AST has a complicated structure
# LHS can not be an expression, it must be a type (which is downcased).
# type = a downcased QualifiedName
#
def transform_CollectExpression(o)
raise "LHS is not a type" unless o.type_expr.is_a? Model::QualifiedReference
type = o.type_expr.value().downcase()
args = { :type => type }
# This somewhat peculiar encoding is used by the 3.1 AST.
query = transform(o.query)
if query.is_a? Symbol
args[:form] = query
else
args[:form] = query.form
args[:query] = query
query.type = type
end
if o.operations.size > 0
args[:override] = transform(o.operations)
end
ast o, AST::Collection, args
end
def transform_EppExpression(o)
# TODO: Not supported in 3x TODO_EPP
parameters = o.parameters.collect {|p| transform(p) }
args = { :parameters => parameters }
args[:children] = transform(o.body) unless is_nop?(o.body)
Puppet::Parser::AST::Epp.new(merge_location(args, o))
end
def transform_ExportedQuery(o)
if is_nop?(o.expr)
result = :exported
else
result = query(o.expr)
result.form = :exported
end
result
end
def transform_VirtualQuery(o)
if is_nop?(o.expr)
result = :virtual
else
result = query(o.expr)
result.form = :virtual
end
result
end
# Ensures the transformation fails if an object not supported by 3.1 is encountered in a query expression
#
def query_Object(o)
raise "Not a valid expression in a collection query: "+o.class.name
end
# Puppet AST only allows == and !=, and left expr is restricted, but right value is an expression
#
def query_ComparisonExpression(o)
if [:'==', :'!='].include? o.operator
ast o, AST::CollExpr, :test1 => query(o.left_expr), :oper => o.operator.to_s, :test2 => transform(o.right_expr)
else
raise "Not a valid comparison operator in a collection query: " + o.operator.to_s
end
end
def query_AndExpression(o)
ast o, AST::CollExpr, :test1 => query(o.left_expr), :oper => 'and', :test2 => query(o.right_expr)
end
def query_OrExpression(o)
ast o, AST::CollExpr, :test1 => query(o.left_expr), :oper => 'or', :test2 => query(o.right_expr)
end
def query_ParenthesizedExpression(o)
result = query(o.expr) # produces CollExpr
result.parens = true
result
end
def query_VariableExpression(o)
transform(o)
end
def query_QualifiedName(o)
transform(o)
end
def query_LiteralNumber(o)
transform(o) # number to string in correct radix
end
def query_LiteralString(o)
transform(o)
end
def query_LiteralBoolean(o)
transform(o)
end
def transform_QualifiedName(o)
ast o, AST::Name, :value => o.value
end
def transform_QualifiedReference(o)
ast o, AST::Type, :value => o.value
end
def transform_ComparisonExpression(o)
ast o, AST::ComparisonOperator, :operator => o.operator.to_s, :lval => transform(o.left_expr), :rval => transform(o.right_expr)
end
def transform_AndExpression(o)
ast o, AST::BooleanOperator, :operator => 'and', :lval => transform(o.left_expr), :rval => transform(o.right_expr)
end
def transform_OrExpression(o)
ast o, AST::BooleanOperator, :operator => 'or', :lval => transform(o.left_expr), :rval => transform(o.right_expr)
end
def transform_InExpression(o)
ast o, AST::InOperator, :lval => transform(o.left_expr), :rval => transform(o.right_expr)
end
# Assignment in AST 3.1 is to variable or hasharray accesses !!! See Bug #16116
def transform_AssignmentExpression(o)
args = {:value => transform(o.right_expr) }
case o.operator
when :'+='
args[:append] = true
when :'='
else
raise "The operator #{o.operator} is not supported by Puppet 3."
end
args[:name] = case o.left_expr
when Model::VariableExpression
ast o, AST::Name, {:value => o.left_expr.expr.value }
when Model::AccessExpression
transform(o.left_expr)
else
raise "LHS is not an expression that can be assigned to"
end
ast o, AST::VarDef, args
end
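# A sketch of the encoding above in Puppet DSL terms:
#
#   $a = 10      # => AST::VarDef with :name => 'a', :value => 10
#   $a += [1]    # => AST::VarDef with :name => 'a', :value => [1], :append => true
#   $a -= 1      # raises, since Puppet 3 has no AST form for this operator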
# Produces (name => expr) or (name +> expr)
def transform_AttributeOperation(o)
args = { :value => transform(o.value_expr) }
args[:add] = true if o.operator == :'+>'
args[:param] = o.attribute_name
ast o, AST::ResourceParam, args
end
def transform_LiteralList(o)
# Uses default transform of Ruby Array to ASTArray
transform(o.values)
end
# Literal hash has strange behavior in Puppet 3.1. See Bug #19426, and this implementation is bug
# compatible
def transform_LiteralHash(o)
if o.entries.size == 0
ast o, AST::ASTHash, {:value=> {}}
else
value = {}
o.entries.each {|x| value.merge! transform(x) }
ast o, AST::ASTHash, {:value=> value}
end
end
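# A small, self-contained Ruby sketch of why the entry-by-entry merge above is only
# "bug compatible" (Bug #19426) - a duplicate key silently overwrites the earlier
# entry instead of being reported:
entries = [{'a' => 1}, {'b' => 2}, {'a' => 3}]
value = {}
entries.each { |e| value.merge!(e) }
puts value.inspect # => {"a"=>3, "b"=>2} - the first 'a' entry is lost without warning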
# Transforms entry into a hash (they are later merged with strange effects: Bug #19426).
# Puppet 3.x only allows:
# * NAME
# * quotedtext
# As keys (quoted text can be an interpolated string which is compared as a key in a less than satisfactory way).
#
def transform_KeyedEntry(o)
value = transform(o.value)
key = case o.key
when Model::QualifiedName
o.key.value
when Model::LiteralString
transform o.key
when Model::LiteralNumber
transform o.key
when Model::ConcatenatedString
transform o.key
else
raise "Illegal hash key expression of type (#{o.key.class})"
end
{key => value}
end
def transform_MatchExpression(o)
ast o, AST::MatchOperator, :operator => o.operator.to_s, :lval => transform(o.left_expr), :rval => transform(o.right_expr)
end
def transform_LiteralString(o)
ast o, AST::String, :value => o.value
end
def transform_LambdaExpression(o)
astargs = { :parameters => o.parameters.collect {|p| transform(p) } }
astargs.merge!({ :children => transform(o.body) }) if o.body # do not want children if it is nil/nop
ast o, AST::Lambda, astargs
end
def transform_LiteralDefault(o)
ast o, AST::Default, :value => :default
end
def transform_LiteralUndef(o)
ast o, AST::Undef, :value => :undef
end
def transform_LiteralRegularExpression(o)
ast o, AST::Regex, :value => o.value
end
def transform_Nop(o)
ast o, AST::Nop
end
# In the 3.1 grammar this is a hash that is merged with other elements to form a method call.
# Also in the 3.1 grammar there are restrictions on the LHS (that exist only because of grammar issues).
#
def transform_NamedAccessExpression(o)
receiver = transform(o.left_expr)
name = o.right_expr
raise "Unacceptable function/method name" unless name.is_a? Model::QualifiedName
{:receiver => receiver, :name => name.value}
end
def transform_NilClass(o)
ast o, AST::Nop, {}
end
def transform_NotExpression(o)
ast o, AST::Not, :value => transform(o.expr)
end
def transform_VariableExpression(o)
# assumes the expression is a QualifiedName
ast o, AST::Variable, :value => o.expr.value
end
# In Puppet 3.1, the ConcatenatedString is responsible for the evaluation and stringification of
# expression segments. Expressions and Strings are kept in an array.
def transform_TextExpression(o)
transform(o.expr)
end
def transform_UnaryMinusExpression(o)
ast o, AST::Minus, :value => transform(o.expr)
end
# Puppet 3.1 representation of a BlockExpression is an AST::Array - this makes it impossible to differentiate
# between a LiteralArray and a Sequence. (Should it return the collected array, or the last expression?)
# (A BlockExpression has now been introduced in the AST to solve this).
#
def transform_BlockExpression(o)
children = []
# remove nops resulting from import
o.statements.each {|s| r = transform(s); children << r unless is_nop?(r) }
ast o, AST::BlockExpression, :children => children # o.statements.collect {|s| transform(s) }
end
# Interpolated strings are kept in an array of AST (string or other expression).
def transform_ConcatenatedString(o)
ast o, AST::Concat, :value => o.segments.collect {|x| transform(x)}
end
def transform_HostClassDefinition(o)
parameters = o.parameters.collect {|p| transform(p) }
args = {
:arguments => parameters,
:parent => o.parent_class,
}
args[:code] = transform(o.body) unless is_nop?(o.body)
Puppet::Parser::AST::Hostclass.new(o.name, merge_location(args, o))
end
def transform_HeredocExpression(o)
# TODO_HEREDOC Not supported in 3x
args = {:syntax=> o.syntax(), :expr => transform(o.text_expr()) }
Puppet::Parser::AST::Heredoc.new(merge_location(args, o))
end
def transform_NodeDefinition(o)
# o.host_matches are expressions, and 3.1 AST requires special object AST::HostName
# where a HostName is one of NAME, STRING, DEFAULT or Regexp - all of these are strings except regexp
#
args = {
:code => transform(o.body)
}
args[:parent] = hostname(o.parent) unless is_nop?(o.parent)
if(args[:parent].is_a?(Array))
raise "Illegal expression - unacceptable as a node parent"
end
Puppet::Parser::AST::Node.new(hostname(o.host_matches), merge_location(args, o))
end
# Transforms Array of host matching expressions into a (Ruby) array of AST::HostName
def hostname_Array(o)
o.collect {|x| ast x, AST::HostName, :value => hostname(x) }
end
def hostname_LiteralValue(o)
return o.value
end
def hostname_QualifiedName(o)
return o.value
end
def hostname_LiteralNumber(o)
transform(o) # Number to string with correct radix
end
def hostname_LiteralDefault(o)
return 'default'
end
def hostname_LiteralRegularExpression(o)
ast o, AST::Regex, :value => o.value
end
def hostname_Object(o)
raise "Illegal expression - unacceptable as a node name"
end
def transform_RelationshipExpression(o)
Puppet::Parser::AST::Relationship.new(transform(o.left_expr), transform(o.right_expr), o.operator.to_s, merge_location({}, o))
end
def transform_RenderStringExpression(o)
# TODO_EPP Not supported in 3x
ast o, AST::RenderString, :value => o.value
end
def transform_RenderExpression(o)
# TODO_EPP Not supported in 3x
ast o, AST::RenderExpression, :value => transform(o.expr)
end
def transform_ResourceTypeDefinition(o)
parameters = o.parameters.collect {|p| transform(p) }
args = { :arguments => parameters }
args[:code] = transform(o.body) unless is_nop?(o.body)
Puppet::Parser::AST::Definition.new(o.name, merge_location(args, o))
end
# Transformation of ResourceOverrideExpression is slightly more involved than a straightforward
# transformation.
# A ResourceOverrideExpression has "resources" which should be an AccessExpression
# of the form QualifiedName[expressions], or QualifiedReference[expressions] to be valid.
# It also has a set of attribute operations.
#
# The AST equivalence is an AST::ResourceOverride with a ResourceReference as its LHS, and
# a set of Parameters.
# ResourceReference has type as a string, and the expressions representing
# the "titles" to be an ASTArray.
#
def transform_ResourceOverrideExpression(o)
- resource_ref = o.resources
- raise "Unacceptable expression for resource override" unless resource_ref.is_a? Model::AccessExpression
-
- type = case resource_ref.left_expr
- when Model::QualifiedName
- # This is deprecated "Resource references should now be capitalized" - this is caught elsewhere
- resource_ref.left_expr.value
- when Model::QualifiedReference
- resource_ref.left_expr.value
- else
- raise "Unacceptable expression for resource override; need NAME or CLASSREF"
- end
-
- result_ref = ast o, AST::ResourceReference, :type => type, :title => transform(resource_ref.keys)
-
- # title is one or more expressions, if more than one it should be an ASTArray
- ast o, AST::ResourceOverride, :object => result_ref, :parameters => transform(o.operations)
+ raise "Unsupported transformation - use the new evaluator"
end
# Parameter is a parameter in a definition of some kind.
# It is transformed to an array of the form `[name]`, or `[name, value]`.
def transform_Parameter(o)
if o.value
[o.name, transform(o.value)]
else
[o.name]
end
end
# For non query expressions, parentheses can be dropped in the resulting AST.
def transform_ParenthesizedExpression(o)
transform(o.expr)
end
def transform_Program(o)
transform(o.body)
end
def transform_IfExpression(o)
args = { :test => transform(o.test), :statements => transform(o.then_expr) }
args[:else] = transform(o.else_expr) # Tests say Nop should be there (unless is_nop? o.else_expr), probably not needed
ast o, AST::IfStatement, args
end
# Unless is not an AST object, instead an AST::IfStatement is used with an AST::Not around the test
#
def transform_UnlessExpression(o)
args = { :test => ast(o, AST::Not, :value => transform(o.test)),
:statements => transform(o.then_expr) }
# AST 3.1 does not allow else on unless in the grammar, but it is ok since unless is encoded as an if !x
args.merge!({:else => transform(o.else_expr)}) unless is_nop?(o.else_expr)
ast o, AST::IfStatement, args
end
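# A sketch of what the encoding above means at the 3.1 AST level:
#
#   unless $x { notice('no x') } else { notice('x') }
#
# is represented exactly like
#
#   if !$x { notice('no x') } else { notice('x') }
#
# i.e. an AST::IfStatement whose :test is an AST::Not wrapped around the original test.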
# Puppet 3.1 AST only supports calling a function by name (it is not possible to produce a function
# that is then called).
# rval_required (for an expression)
# functor_expr (lhs - the "name" expression)
# arguments - list of arguments
#
def transform_CallNamedFunctionExpression(o)
name = o.functor_expr
raise "Unacceptable expression for name of function" unless name.is_a? Model::QualifiedName
args = {
:name => name.value,
:arguments => transform(o.arguments),
:ftype => o.rval_required ? :rvalue : :statement
}
args[:pblock] = transform(o.lambda) if o.lambda
ast o, AST::Function, args
end
# Transformation of CallMethodExpression handles a NamedAccessExpression functor and
# turns this into a 3.1 AST::MethodCall.
#
def transform_CallMethodExpression(o)
name = o.functor_expr
raise "Unacceptable expression for name of function" unless name.is_a? Model::NamedAccessExpression
# transform of NamedAccess produces a hash, add arguments to it
astargs = transform(name).merge(:arguments => transform(o.arguments))
astargs.merge!(:lambda => transform(o.lambda)) if o.lambda # do not want a Nop as the lambda
ast o, AST::MethodCall, astargs
end
def transform_CaseExpression(o)
# Expects expression, AST::ASTArray of AST
ast o, AST::CaseStatement, :test => transform(o.test), :options => transform(o.options)
end
def transform_CaseOption(o)
ast o, AST::CaseOpt, :value => transform(o.values), :statements => transform(o.then_expr)
end
def transform_ResourceBody(o)
- # expects AST, AST::ASTArray of AST
- ast o, AST::ResourceInstance, :title => transform(o.title), :parameters => transform(o.operations)
+ raise "Unsupported transformation - use the new evaluator"
end
def transform_ResourceDefaultsExpression(o)
- ast o, AST::ResourceDefaults, :type => o.type_ref.value, :parameters => transform(o.operations)
+ raise "Unsupported transformation - use the new evaluator"
end
# Transformation of ResourceExpression requires calling a method on the resulting
# AST::Resource if it is virtual or exported
#
def transform_ResourceExpression(o)
- raise "Unacceptable type name expression" unless o.type_name.is_a? Model::QualifiedName
- resource = ast o, AST::Resource, :type => o.type_name.value, :instances => transform(o.bodies)
- resource.send("#{o.form}=", true) unless o.form == :regular
- resource
+ raise "Unsupported transformation - use the new evaluator"
end
# Transformation of SelectorExpression is limited to certain types of expressions.
# This is probably due to constraints in the old grammar rather than any real concerns.
def transform_SelectorExpression(o)
case o.left_expr
when Model::CallNamedFunctionExpression
when Model::AccessExpression
when Model::VariableExpression
when Model::ConcatenatedString
else
raise "Unacceptable select expression" unless o.left_expr.kind_of? Model::Literal
end
ast o, AST::Selector, :param => transform(o.left_expr), :values => transform(o.selectors)
end
def transform_SelectorEntry(o)
ast o, AST::ResourceParam, :param => transform(o.matching_expr), :value => transform(o.value_expr)
end
def transform_Object(o)
raise "Unacceptable transform - found an Object without a rule: #{o.class}"
end
# Nil, nop
# Bee bopp a luh-lah, a bop bop boom.
#
def is_nop?(o)
o.nil? || o.is_a?(Model::Nop)
end
end
diff --git a/lib/puppet/pops/model/factory.rb b/lib/puppet/pops/model/factory.rb
index 43eab49a3..cf77d9185 100644
--- a/lib/puppet/pops/model/factory.rb
+++ b/lib/puppet/pops/model/factory.rb
@@ -1,978 +1,1040 @@
# Factory is a helper class that makes construction of a Pops Model
# much more convenient. It can be viewed as a small internal DSL for model
# constructions.
# For usage see tests using the factory.
#
# @todo All those uppercase methods ... they look bad in one way, but stand out nicely in the grammar...
# decide if they should change into lower case names (some of them are lower case)...
#
class Puppet::Pops::Model::Factory
Model = Puppet::Pops::Model
attr_accessor :current
alias_method :model, :current
# Shared build_visitor, since there are many instances of Factory being used
@@build_visitor = Puppet::Pops::Visitor.new(self, "build")
@@interpolation_visitor = Puppet::Pops::Visitor.new(self, "interpolate")
# Initialize a factory with a single object, or a class with arguments applied to build of
# created instance
#
def initialize o, *args
@current = case o
when Model::PopsObject
o
when Puppet::Pops::Model::Factory
o.current
else
build(o, *args)
end
end
# Polymorphic build
def build(o, *args)
begin
@@build_visitor.visit_this(self, o, *args)
rescue =>e
# debug here when in trouble...
raise e
end
end
# Polymorphic interpolate
def interpolate()
begin
@@interpolation_visitor.visit_this_0(self, current)
rescue =>e
# debug here when in trouble...
raise e
end
end
# Building of Model classes
def build_ArithmeticExpression(o, op, a, b)
o.operator = op
build_BinaryExpression(o, a, b)
end
def build_AssignmentExpression(o, op, a, b)
o.operator = op
build_BinaryExpression(o, a, b)
end
def build_AttributeOperation(o, name, op, value)
o.operator = op
o.attribute_name = name.to_s # BOOLEAN is allowed in the grammar
o.value_expr = build(value)
o
end
+ def build_AttributesOperation(o, value)
+ o.expr = build(value)
+ o
+ end
+
def build_AccessExpression(o, left, *keys)
o.left_expr = to_ops(left)
keys.each {|expr| o.addKeys(to_ops(expr)) }
o
end
def build_BinaryExpression(o, left, right)
o.left_expr = to_ops(left)
o.right_expr = to_ops(right)
o
end
def build_BlockExpression(o, *args)
args.each {|expr| o.addStatements(to_ops(expr)) }
o
end
def build_CollectExpression(o, type_expr, query_expr, attribute_operations)
o.type_expr = to_ops(type_expr)
o.query = build(query_expr)
attribute_operations.each {|op| o.addOperations(build(op)) }
o
end
def build_ComparisonExpression(o, op, a, b)
o.operator = op
build_BinaryExpression(o, a, b)
end
def build_ConcatenatedString(o, *args)
args.each {|expr| o.addSegments(build(expr)) }
o
end
def build_CreateTypeExpression(o, name, super_name = nil)
o.name = name
o.super_name = super_name
o
end
def build_CreateEnumExpression(o, *args)
o.name = args.slice(0) if args.size == 2
o.values = build(args.last)
o
end
def build_CreateAttributeExpression(o, name, datatype_expr)
o.name = name
o.type = to_ops(datatype_expr)
o
end
def build_HeredocExpression(o, name, expr)
o.syntax = name
o.text_expr = build(expr)
o
end
# @param name [String] a valid classname
# @param parameters [Array<Model::Parameter>] may be empty
# @param parent_class_name [String, nil] a valid classname referencing a parent class, optional.
# @param body [Array<Expression>, Expression, nil] expression that constitute the body
# @return [Model::HostClassDefinition] configured from the parameters
#
def build_HostClassDefinition(o, name, parameters, parent_class_name, body)
build_NamedDefinition(o, name, parameters, body)
o.parent_class = parent_class_name if parent_class_name
o
end
def build_ResourceOverrideExpression(o, resources, attribute_operations)
o.resources = build(resources)
attribute_operations.each {|ao| o.addOperations(build(ao)) }
o
end
+ def build_ReservedWord(o, name)
+ o.word = name
+ o
+ end
+
def build_KeyedEntry(o, k, v)
o.key = to_ops(k)
o.value = to_ops(v)
o
end
def build_LiteralHash(o, *keyed_entries)
keyed_entries.each {|entry| o.addEntries build(entry) }
o
end
def build_LiteralList(o, *values)
values.each {|v| o.addValues build(v) }
o
end
def build_LiteralFloat(o, val)
o.value = val
o
end
def build_LiteralInteger(o, val, radix)
o.value = val
o.radix = radix
o
end
def build_IfExpression(o, t, ift, els)
o.test = build(t)
o.then_expr = build(ift)
o.else_expr= build(els)
o
end
def build_MatchExpression(o, op, a, b)
o.operator = op
build_BinaryExpression(o, a, b)
end
# Builds body :) from different kinds of input
# @overload f_build_body(nothing)
# @param nothing [nil] unchanged, produces nil
# @overload f_build_body(array)
# @param array [Array<Expression>] turns into a BlockExpression
# @overload f_build_body(expr)
# @param expr [Expression] produces the given expression
# @overload f_build_body(obj)
# @param obj [Object] produces the result of calling #build with body as argument
def f_build_body(body)
case body
when NilClass
nil
when Array
Puppet::Pops::Model::Factory.new(Model::BlockExpression, *body)
else
build(body)
end
end
def build_LambdaExpression(o, parameters, body)
parameters.each {|p| o.addParameters(build(p)) }
b = f_build_body(body)
o.body = to_ops(b) if b
o
end
def build_NamedDefinition(o, name, parameters, body)
parameters.each {|p| o.addParameters(build(p)) }
b = f_build_body(body)
o.body = b.current if b
o.name = name
o
end
# @param o [Model::NodeDefinition]
# @param hosts [Array<Expression>] host matches
# @param parent [Expression] parent node matcher
# @param body [Object] see {#f_build_body}
def build_NodeDefinition(o, hosts, parent, body)
hosts.each {|h| o.addHost_matches(build(h)) }
o.parent = build(parent) if parent # no nop here
b = f_build_body(body)
o.body = b.current if b
o
end
def build_Parameter(o, name, expr)
o.name = name
o.value = build(expr) if expr # don't build a nil/nop
o
end
def build_QualifiedReference(o, name)
o.value = name.to_s.downcase
o
end
def build_RelationshipExpression(o, op, a, b)
o.operator = op
build_BinaryExpression(o, a, b)
end
def build_ResourceExpression(o, type_name, bodies)
o.type_name = build(type_name)
bodies.each {|b| o.addBodies(build(b)) }
o
end
def build_RenderStringExpression(o, string)
o.value = string;
o
end
def build_ResourceBody(o, title_expression, attribute_operations)
o.title = build(title_expression)
attribute_operations.each {|ao| o.addOperations(build(ao)) }
o
end
def build_ResourceDefaultsExpression(o, type_ref, attribute_operations)
o.type_ref = build(type_ref)
attribute_operations.each {|ao| o.addOperations(build(ao)) }
o
end
def build_SelectorExpression(o, left, *selectors)
o.left_expr = to_ops(left)
selectors.each {|s| o.addSelectors(build(s)) }
o
end
# Builds a SubLocatedExpression - this wraps the expression in a sublocation configured
# from the given token
# A SubLocated holds its own locator that is used for subexpressions holding positions relative
# to what it describes.
#
def build_SubLocatedExpression(o, token, expression)
o.expr = build(expression)
o.offset = token.offset
o.length = token.length
locator = token.locator
o.locator = locator
o.leading_line_count = locator.leading_line_count
o.leading_line_offset = locator.leading_line_offset
# Index is held in sublocator's parent locator - needed to be able to reconstruct
o.line_offsets = locator.locator.line_index
o
end
def build_SelectorEntry(o, matching, value)
o.matching_expr = build(matching)
o.value_expr = build(value)
o
end
def build_QueryExpression(o, expr)
ops = to_ops(expr)
o.expr = ops unless Puppet::Pops::Model::Factory.nop? ops
o
end
def build_UnaryExpression(o, expr)
ops = to_ops(expr)
o.expr = ops unless Puppet::Pops::Model::Factory.nop? ops
o
end
def build_Program(o, body, definitions, locator)
o.body = to_ops(body)
# non containment
definitions.each { |d| o.addDefinitions(d) }
o.source_ref = locator.file
o.source_text = locator.string
o.line_offsets = locator.line_index
o.locator = locator
o
end
def build_QualifiedName(o, name)
o.value = name.to_s
o
end
+ def build_TokenValue(o)
+ raise "Factory can not deal with a Lexer Token. Got token: #{o}. Probably caused by wrong index in grammar val[n]."
+ end
+
# Puppet::Pops::Model::Factory helpers
def f_build_unary(klazz, expr)
Puppet::Pops::Model::Factory.new(build(klazz.new, expr))
end
def f_build_binary_op(klazz, op, left, right)
Puppet::Pops::Model::Factory.new(build(klazz.new, op, left, right))
end
def f_build_binary(klazz, left, right)
Puppet::Pops::Model::Factory.new(build(klazz.new, left, right))
end
def f_build_vararg(klazz, left, *arg)
Puppet::Pops::Model::Factory.new(build(klazz.new, left, *arg))
end
def f_arithmetic(op, r)
f_build_binary_op(Model::ArithmeticExpression, op, current, r)
end
def f_comparison(op, r)
f_build_binary_op(Model::ComparisonExpression, op, current, r)
end
def f_match(op, r)
f_build_binary_op(Model::MatchExpression, op, current, r)
end
# Operator helpers
def in(r) f_build_binary(Model::InExpression, current, r); end
def or(r) f_build_binary(Model::OrExpression, current, r); end
def and(r) f_build_binary(Model::AndExpression, current, r); end
def not(); f_build_unary(Model::NotExpression, self); end
def minus(); f_build_unary(Model::UnaryMinusExpression, self); end
+ def unfold(); f_build_unary(Model::UnfoldExpression, self); end
+
def text(); f_build_unary(Model::TextExpression, self); end
def var(); f_build_unary(Model::VariableExpression, self); end
def [](*r); f_build_vararg(Model::AccessExpression, current, *r); end
def dot r; f_build_binary(Model::NamedAccessExpression, current, r); end
def + r; f_arithmetic(:+, r); end
def - r; f_arithmetic(:-, r); end
def / r; f_arithmetic(:/, r); end
def * r; f_arithmetic(:*, r); end
def % r; f_arithmetic(:%, r); end
def << r; f_arithmetic(:<<, r); end
def >> r; f_arithmetic(:>>, r); end
def < r; f_comparison(:<, r); end
def <= r; f_comparison(:<=, r); end
def > r; f_comparison(:>, r); end
def >= r; f_comparison(:>=, r); end
def == r; f_comparison(:==, r); end
def ne r; f_comparison(:'!=', r); end
def =~ r; f_match(:'=~', r); end
def mne r; f_match(:'!~', r); end
def paren(); f_build_unary(Model::ParenthesizedExpression, current); end
def relop op, r
f_build_binary_op(Model::RelationshipExpression, op.to_sym, current, r)
end
def select *args
Puppet::Pops::Model::Factory.new(build(Model::SelectorExpression, current, *args))
end
# For CaseExpression, setting the default for an already built CaseExpression
def default r
current.addOptions(Puppet::Pops::Model::Factory.WHEN(:default, r).current)
self
end
def lambda=(lambda)
current.lambda = lambda.current
self
end
# Assignment =
def set(r)
f_build_binary_op(Model::AssignmentExpression, :'=', current, r)
end
# Assignment +=
def plus_set(r)
f_build_binary_op(Model::AssignmentExpression, :'+=', current, r)
end
# Assignment -=
def minus_set(r)
f_build_binary_op(Model::AssignmentExpression, :'-=', current, r)
end
def attributes(*args)
args.each {|a| current.addAttributes(build(a)) }
self
end
# Catch all delegation to current
def method_missing(meth, *args, &block)
if current.respond_to?(meth)
current.send(meth, *args, &block)
else
super
end
end
def respond_to?(meth, include_all=false)
current.respond_to?(meth, include_all) || super
end
def self.record_position(o, start_locatable, end_locateable)
new(o).record_position(start_locatable, end_locateable)
end
# Records the position (start -> end) and computes the resulting length.
#
def record_position(start_locatable, end_locatable)
from = start_locatable.is_a?(Puppet::Pops::Model::Factory) ? start_locatable.current : start_locatable
to = end_locatable.is_a?(Puppet::Pops::Model::Factory) ? end_locatable.current : end_locatable
to = from if to.nil? || to.offset.nil?
o = current
# record information directly in the Model::Positioned object
o.offset = from.offset
o.length ||= to.offset - from.offset + to.length
self
end
# @return [Puppet::Pops::Adapters::SourcePosAdapter] with location information
def loc()
Puppet::Pops::Adapters::SourcePosAdapter.adapt(current)
end
- # Returns symbolic information about an expected share of a resource expression given the LHS of a resource expr.
+ # Sets the form of the resource expression (:regular (the default), :virtual, or :exported).
+ # Produces true if the expression was a resource expression, false otherwise.
+ #
+ def self.set_resource_form(expr, form)
+ expr = expr.current if expr.is_a?(Puppet::Pops::Model::Factory)
+ # Note: Validation handles illegal combinations
+ return false unless expr.is_a?(Puppet::Pops::Model::AbstractResource)
+ expr.form = form
+ return true
+ end
+
+ # Returns symbolic information about an expected shape of a resource expression given the LHS of a resource expr.
#
# * `name { }` => `:resource`, create a resource of the given type
# * `Name { }` => `:defaults`, set defaults for the referenced type
# * `Name[] { }` => `:override`, overrides instances referenced by LHS
# * _any other_ => `:error`, all others are considered illegal
#
def self.resource_shape(expr)
expr = expr.current if expr.is_a?(Puppet::Pops::Model::Factory)
case expr
when Model::QualifiedName
:resource
when Model::QualifiedReference
:defaults
when Model::AccessExpression
- :override
+ # if Resource[e], then it is not resource specific
+ if expr.left_expr.is_a?(Model::QualifiedReference) && expr.left_expr.value == 'resource' && expr.keys.size == 1
+ :defaults
+ else
+ :override
+ end
when 'class'
:class
else
:error
end
end
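# For illustration, what resource_shape returns for typical LHS expressions (the
# LHS is what precedes the `{ ... }` body); a sketch, not an exhaustive list:
#
#   file { '/tmp/a': }                   # QualifiedName            => :resource
#   File { owner => 'root' }             # QualifiedReference       => :defaults
#   File['/tmp/a'] { owner => 'root' }   # AccessExpression         => :override
#   Resource[File] { owner => 'root' }   # Resource[T] with one key => :defaults
#   class { 'motd': }                    # the literal 'class'      => :class
#   $x { }                               # anything else            => :error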
# Factory starting points
def self.literal(o); new(o); end
def self.minus(o); new(o).minus; end
+ def self.unfold(o); new(o).unfold; end
+
def self.var(o); new(o).var; end
def self.block(*args); new(Model::BlockExpression, *args); end
def self.string(*args); new(Model::ConcatenatedString, *args); end
def self.text(o); new(o).text; end
def self.IF(test_e,then_e,else_e); new(Model::IfExpression, test_e, then_e, else_e); end
def self.UNLESS(test_e,then_e,else_e); new(Model::UnlessExpression, test_e, then_e, else_e); end
def self.CASE(test_e,*options); new(Model::CaseExpression, test_e, *options); end
def self.WHEN(values_list, block); new(Model::CaseOption, values_list, block); end
def self.MAP(match, value); new(Model::SelectorEntry, match, value); end
def self.TYPE(name, super_name=nil); new(Model::CreateTypeExpression, name, super_name); end
def self.ATTR(name, type_expr=nil); new(Model::CreateAttributeExpression, name, type_expr); end
def self.ENUM(*args); new(Model::CreateEnumExpression, *args); end
def self.KEY_ENTRY(key, val); new(Model::KeyedEntry, key, val); end
def self.HASH(entries); new(Model::LiteralHash, *entries); end
- # TODO_HEREDOC
def self.HEREDOC(name, expr); new(Model::HeredocExpression, name, expr); end
def self.SUBLOCATE(token, expr) new(Model::SubLocatedExpression, token, expr); end
def self.LIST(entries); new(Model::LiteralList, *entries); end
def self.PARAM(name, expr=nil); new(Model::Parameter, name, expr); end
def self.NODE(hosts, parent, body); new(Model::NodeDefinition, hosts, parent, body); end
+
+ # Parameters
+
+ # Mark parameter as capturing the rest of arguments
+ def captures_rest()
+ current.captures_rest = true
+ end
+
+ # Set Expression that should evaluate to the parameter's type
+ def type_expr(o)
+ current.type_expr = to_ops(o)
+ end
+
# Creates a QualifiedName representation of o, unless o already represents a QualifiedName in which
# case it is returned.
#
def self.fqn(o)
o = o.current if o.is_a?(Puppet::Pops::Model::Factory)
o = new(Model::QualifiedName, o) unless o.is_a? Model::QualifiedName
o
end
# Creates a QualifiedReference representation of o, unless o already represents a QualifiedReference in which
# case it is returned.
#
def self.fqr(o)
o = o.current if o.is_a?(Puppet::Pops::Model::Factory)
o = new(Model::QualifiedReference, o) unless o.is_a? Model::QualifiedReference
o
end
def self.TEXT(expr)
new(Model::TextExpression, new(expr).interpolate)
end
# TODO_EPP
def self.RENDER_STRING(o)
new(Model::RenderStringExpression, o)
end
def self.RENDER_EXPR(expr)
new(Model::RenderExpression, expr)
end
def self.EPP(parameters, body)
- see_scope = false
- params = parameters
if parameters.nil?
params = []
- see_scope = true
+ parameters_specified = false
+ else
+ params = parameters
+ parameters_specified = true
end
- LAMBDA(params, new(Model::EppExpression, see_scope, body))
+ LAMBDA(params, new(Model::EppExpression, parameters_specified, body))
+ end
+
+ def self.RESERVED(name)
+ new(Model::ReservedWord, name)
end
# TODO: This is the same as the fqn factory method; don't know if callers to fqn and QNAME can live with the
# same result or not yet - refactor into one method when decided.
#
def self.QNAME(name)
new(Model::QualifiedName, name)
end
def self.NUMBER(name_or_numeric)
if n_radix = Puppet::Pops::Utils.to_n_with_radix(name_or_numeric)
val, radix = n_radix
if val.is_a?(Float)
new(Model::LiteralFloat, val)
else
new(Model::LiteralInteger, val, radix)
end
else
# Bad number should already have been caught by lexer - this should never happen
raise ArgumentError, "Internal Error, NUMBER token does not contain a valid number, #{name_or_numeric}"
end
end
# Convert input string to either a qualified name, a LiteralInteger with radix, or a LiteralFloat
#
def self.QNAME_OR_NUMBER(name)
if n_radix = Puppet::Pops::Utils.to_n_with_radix(name)
val, radix = n_radix
if val.is_a?(Float)
new(Model::LiteralFloat, val)
else
new(Model::LiteralInteger, val, radix)
end
else
new(Model::QualifiedName, name)
end
end
def self.QREF(name)
new(Model::QualifiedReference, name)
end
def self.VIRTUAL_QUERY(query_expr)
new(Model::VirtualQuery, query_expr)
end
def self.EXPORTED_QUERY(query_expr)
new(Model::ExportedQuery, query_expr)
end
def self.ATTRIBUTE_OP(name, op, expr)
new(Model::AttributeOperation, name, op, expr)
end
+ def self.ATTRIBUTES_OP(expr)
+ new(Model::AttributesOperation, expr)
+ end
+
def self.CALL_NAMED(name, rval_required, argument_list)
unless name.kind_of?(Model::PopsObject)
name = Puppet::Pops::Model::Factory.fqn(name) unless name.is_a?(Puppet::Pops::Model::Factory)
end
new(Model::CallNamedFunctionExpression, name, rval_required, *argument_list)
end
def self.CALL_METHOD(functor, argument_list)
new(Model::CallMethodExpression, functor, true, nil, *argument_list)
end
def self.COLLECT(type_expr, query_expr, attribute_operations)
new(Model::CollectExpression, type_expr, query_expr, attribute_operations)
end
def self.NAMED_ACCESS(type_name, bodies)
new(Model::NamedAccessExpression, type_name, bodies)
end
def self.RESOURCE(type_name, bodies)
new(Model::ResourceExpression, type_name, bodies)
end
def self.RESOURCE_DEFAULTS(type_name, attribute_operations)
new(Model::ResourceDefaultsExpression, type_name, attribute_operations)
end
def self.RESOURCE_OVERRIDE(resource_ref, attribute_operations)
new(Model::ResourceOverrideExpression, resource_ref, attribute_operations)
end
def self.RESOURCE_BODY(resource_title, attribute_operations)
new(Model::ResourceBody, resource_title, attribute_operations)
end
def self.PROGRAM(body, definitions, locator)
new(Model::Program, body, definitions, locator)
end
# Builds a BlockExpression if args size > 1, else the single expression/value in args
def self.block_or_expression(*args)
if args.size > 1
new(Model::BlockExpression, *args)
else
new(args[0])
end
end
def self.HOSTCLASS(name, parameters, parent, body)
new(Model::HostClassDefinition, name, parameters, parent, body)
end
def self.DEFINITION(name, parameters, body)
new(Model::ResourceTypeDefinition, name, parameters, body)
end
def self.LAMBDA(parameters, body)
new(Model::LambdaExpression, parameters, body)
end
def self.nop? o
o.nil? || o.is_a?(Puppet::Pops::Model::Nop)
end
STATEMENT_CALLS = {
'require' => true,
'realize' => true,
'include' => true,
'contain' => true,
+ 'tag' => true,
'debug' => true,
'info' => true,
'notice' => true,
'warning' => true,
'error' => true,
'fail' => true,
'import' => true # discontinued, but transform it to make it call error reporting function
}
# Returns true if the given name is a "statement keyword" (require, include, contain,
# error, notice, info, debug) - see STATEMENT_CALLS above for the full list
#
def name_is_statement(name)
STATEMENT_CALLS[name]
end
# Transforms an array of expressions containing literal name expressions to calls if followed by an
# expression, or expression list.
#
def self.transform_calls(expressions)
expressions.reduce([]) do |memo, expr|
expr = expr.current if expr.is_a?(Puppet::Pops::Model::Factory)
name = memo[-1]
if name.is_a?(Model::QualifiedName) && STATEMENT_CALLS[name.value]
the_call = Puppet::Pops::Model::Factory.CALL_NAMED(name, false, expr.is_a?(Array) ? expr : [expr])
# last positioned is last arg if there are several
record_position(the_call, name, expr.is_a?(Array) ? expr[-1] : expr)
memo[-1] = the_call
if expr.is_a?(Model::CallNamedFunctionExpression)
# Patch statement function call to expression style
# This is needed because it is first parsed as a "statement" and the requirement changes as it becomes
# an argument to the call produced by the name-to-call transform above.
expr.rval_required = true
end
else
memo << expr
if expr.is_a?(Model::CallNamedFunctionExpression)
# Patch rvalue expression function call to statement style.
# This is not really required but done to be AST model compliant
expr.rval_required = false
end
end
memo
end
end
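# A sketch of the rewrite above: a bare statement keyword followed by arguments is
# turned into a named function call, so
#
#   notice 'hello'
#
# becomes the equivalent of notice('hello') - a CallNamedFunctionExpression with
# rval_required = false - while a non-statement name such as `foo 'hello'` is left
# as two separate expressions.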
# Transforms a left expression followed by an untitled resource (in the form of attribute_operations)
# @param left [Factory, Expression] the lhs, followed by what may be a hash
def self.transform_resource_wo_title(left, attribute_ops)
+ # Returning nil means accepting the given as a potential resource expression
return nil unless attribute_ops.is_a? Array
-# return nil if attribute_ops.find { |ao| ao.operator == :'+>' }
+ return nil unless left.current.is_a?(Puppet::Pops::Model::QualifiedName)
keyed_entries = attribute_ops.map do |ao|
return nil if ao.operator == :'+>'
KEY_ENTRY(ao.attribute_name, ao.value_expr)
end
result = block_or_expression(*transform_calls([left, HASH(keyed_entries)]))
result
end
# Building model equivalences of Ruby objects
# Allows passing regular ruby objects to the factory to produce instructions
# that when evaluated produce the same thing.
def build_String(o)
x = Model::LiteralString.new
x.value = o;
x
end
def build_NilClass(o)
x = Model::Nop.new
x
end
def build_TrueClass(o)
x = Model::LiteralBoolean.new
x.value = o
x
end
def build_FalseClass(o)
x = Model::LiteralBoolean.new
x.value = o
x
end
def build_Fixnum(o)
x = Model::LiteralInteger.new
x.value = o;
x
end
def build_Float(o)
x = Model::LiteralFloat.new
x.value = o;
x
end
def build_Regexp(o)
x = Model::LiteralRegularExpression.new
x.value = o;
x
end
- def build_EppExpression(o, see_scope, body)
- o.see_scope = see_scope
+ def build_EppExpression(o, parameters_specified, body)
+ o.parameters_specified = parameters_specified
b = f_build_body(body)
o.body = b.current if b
o
end
# If building a factory, simply unwrap the model object contained in the factory.
def build_Factory(o)
o.current
end
# Creates a String literal, unless the symbol is one of the special :undef or :default,
# which instead create a LiteralUndef or a LiteralDefault.
+ # Supports :undef because nil creates a no-op instruction.
def build_Symbol(o)
case o
when :undef
Model::LiteralUndef.new
when :default
Model::LiteralDefault.new
else
build_String(o.to_s)
end
end
# Creates a LiteralList instruction from an Array, where the entries are built.
def build_Array(o)
x = Model::LiteralList.new
o.each { |v| x.addValues(build(v)) }
x
end
# Create a LiteralHash instruction from a hash, where keys and values are built
# The hash entries are added in sorted order based on key.to_s
#
def build_Hash(o)
x = Model::LiteralHash.new
(o.sort_by {|k,v| k.to_s}).each {|k,v| x.addEntries(build(Model::KeyedEntry.new, k, v)) }
x
end
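# A small, self-contained Ruby sketch of the sort applied above: entries end up in
# the model ordered by key.to_s, not by insertion order.
h = {'b' => 2, 'a' => 1, 'c' => 3}
puts h.sort_by { |k, _| k.to_s }.map { |k, _| k }.inspect # => ["a", "b", "c"]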
# @param rval_required [Boolean] if the call must produce a value
def build_CallExpression(o, functor, rval_required, *args)
o.functor_expr = to_ops(functor)
o.rval_required = rval_required
args.each {|x| o.addArguments(to_ops(x)) }
o
end
def build_CallMethodExpression(o, functor, rval_required, lambda, *args)
build_CallExpression(o, functor, rval_required, *args)
o.lambda = lambda
o
end
def build_CaseExpression(o, test, *args)
o.test = build(test)
args.each {|opt| o.addOptions(build(opt)) }
o
end
def build_CaseOption(o, value_list, then_expr)
value_list = [value_list] unless value_list.is_a? Array
value_list.each { |v| o.addValues(build(v)) }
b = f_build_body(then_expr)
o.then_expr = to_ops(b) if b
o
end
# Build a Class by creating an instance of it, and then calling build on the created instance
# with the given arguments
def build_Class(o, *args)
build(o.new(), *args)
end
def interpolate_Factory(o)
interpolate(o.current)
end
def interpolate_LiteralInteger(o)
# convert number to a variable
self.class.new(o).var
end
def interpolate_Object(o)
o
end
def interpolate_QualifiedName(o)
self.class.new(o).var
end
# Rewrites the left expression to a variable if it is a name or a number, and recurses if it is an access expression.
# This is for interpolation support in the new lexer (${NAME}, ${NAME[]}, ${NUMBER}, ${NUMBER[]}) - all
# other expressions require variables to be preceded with $
#
def interpolate_AccessExpression(o)
if is_interop_rewriteable?(o.left_expr)
o.left_expr = to_ops(self.class.new(o.left_expr).interpolate)
end
o
end
def interpolate_NamedAccessExpression(o)
if is_interop_rewriteable?(o.left_expr)
o.left_expr = to_ops(self.class.new(o.left_expr).interpolate)
end
o
end
# Rewrite method calls of the form ${x.each ...} to ${$x.each}
def interpolate_CallMethodExpression(o)
if is_interop_rewriteable?(o.functor_expr)
o.functor_expr = to_ops(self.class.new(o.functor_expr).interpolate)
end
o
end
def is_interop_rewriteable?(o)
case o
when Model::AccessExpression, Model::QualifiedName,
Model::NamedAccessExpression, Model::CallMethodExpression
true
when Model::LiteralInteger
# Only decimal integers can represent variables, else it is a number
o.radix == 10
else
false
end
end
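# Taken together, the interpolation rewrites above mean that inside "${...}":
#
#   "${name}"     is treated as "${$name}"
#   "${1}"        is treated as "${$1}"        (decimal integers only)
#   "${name[0]}"  is treated as "${$name[0]}"  (only the left side is rewritten)
#
# while "${0x10}" stays a literal number, because its radix is not 10.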
# Checks if the object is already a model object, or builds it
def to_ops(o, *args)
case o
when Model::PopsObject
o
when Puppet::Pops::Model::Factory
o.current
else
build(o, *args)
end
end
def self.concat(*args)
new(args.map do |e|
e = e.current if e.is_a?(self)
case e
when Model::LiteralString
e.value
when String
e
else
raise ArgumentError, "can only concatenate strings, got #{e.class}"
end
end.join(''))
end
+
+ def to_s
+ Puppet::Pops::Model::ModelTreeDumper.new.dump(self)
+ end
end
diff --git a/lib/puppet/pops/model/model.rb b/lib/puppet/pops/model/model.rb
index b186aca3f..9e2a079b4 100644
--- a/lib/puppet/pops/model/model.rb
+++ b/lib/puppet/pops/model/model.rb
@@ -1,606 +1,114 @@
#
-# The Puppet Pops Metamodel
+# The Puppet Pops Metamodel Implementation
#
-# This module contains a formal description of the Puppet Pops (*P*uppet *OP*eration instruction*S*).
-# It describes a Metamodel containing DSL instructions, a description of PuppetType and related
-# classes needed to evaluate puppet logic.
-# The metamodel resembles the existing AST model, but it is a semantic model of instructions and
-# the types that they operate on rather than an Abstract Syntax Tree, although closely related.
-#
-# The metamodel is anemic (has no behavior) except basic datatype and type
-# assertions and reference/containment assertions.
-# The metamodel is also a generalized description of the Puppet DSL to enable the
-# same metamodel to be used to express Puppet DSL models (instances) with different semantics as
-# the language evolves.
-#
-# The metamodel is concretized by a validator for a particular version of
-# the Puppet DSL language.
-#
-# This metamodel is expressed using RGen.
+# The Puppet Pops Metamodel consists of two parts; the metamodel expressed with RGen in model_meta.rb,
+# and this file which mixes in implementation details.
#
require 'rgen/metamodel_builder'
+require 'rgen/ecore/ecore'
+require 'rgen/ecore/ecore_ext'
+require 'rgen/ecore/ecore_to_ruby'
-module Puppet::Pops::Model
- extend RGen::MetamodelBuilder::ModuleExtension
-
- # A base class for modeled objects that makes them Visitable, and Adaptable.
- #
- class PopsObject < RGen::MetamodelBuilder::MMBase
- include Puppet::Pops::Visitable
- include Puppet::Pops::Adaptable
- include Puppet::Pops::Containment
- abstract
- end
-
- # A Positioned object has an offset measured in an opaque unit (representing characters) from the start
- # of a source text (starting
- # from 0), and a length measured in the same opaque unit. The resolution of the opaque unit requires the
- # aid of a Locator instance that knows about the measure. This information is stored in the model's
- # root node - a Program.
- #
- # The offset and length are optional if the source of the model is not from parsed text.
- #
- class Positioned < PopsObject
- abstract
- has_attr 'offset', Integer
- has_attr 'length', Integer
- end
-
- # @abstract base class for expressions
- class Expression < Positioned
- abstract
- end
-
- # A Nop - the "no op" expression.
- # @note not really needed since the evaluator can evaluate nil with the meaning of NoOp
- # @todo deprecate? May be useful if there is the need to differentiate between nil and Nop when transforming model.
- #
- class Nop < Expression
- end
-
- # A binary expression is abstract and has a left and a right expression. The order of evaluation
- # and semantics are determined by the concrete subclass.
- #
- class BinaryExpression < Expression
- abstract
- #
- # @!attribute [rw] left_expr
- # @return [Expression]
- contains_one_uni 'left_expr', Expression, :lowerBound => 1
- contains_one_uni 'right_expr', Expression, :lowerBound => 1
- end
-
- # An unary expression is abstract and contains one expression. The semantics are determined by
- # a concrete subclass.
- #
- class UnaryExpression < Expression
- abstract
- contains_one_uni 'expr', Expression, :lowerBound => 1
- end
-
- # A class that simply evaluates to the contained expression.
- # It is of value in order to preserve user entered parentheses in transformations, and
- # transformations from model to source.
- #
- class ParenthesizedExpression < UnaryExpression; end
-
- # A boolean not expression, reversing the truth of the unary expr.
- #
- class NotExpression < UnaryExpression; end
-
- # An arithmetic expression reversing the polarity of the numeric unary expr.
- #
- class UnaryMinusExpression < UnaryExpression; end
-
- # An assignment expression assigns a value to the lval() of the left_expr.
- #
- class AssignmentExpression < BinaryExpression
- has_attr 'operator', RGen::MetamodelBuilder::DataTypes::Enum.new([:'=', :'+=', :'-=']), :lowerBound => 1
- end
-
- # An arithmetic expression applies an arithmetic operator on left and right expressions.
- #
- class ArithmeticExpression < BinaryExpression
- has_attr 'operator', RGen::MetamodelBuilder::DataTypes::Enum.new([:'+', :'-', :'*', :'%', :'/', :'<<', :'>>' ]), :lowerBound => 1
- end
-
- # A relationship expression associates the left and right expressions
- #
- class RelationshipExpression < BinaryExpression
- has_attr 'operator', RGen::MetamodelBuilder::DataTypes::Enum.new([:'->', :'<-', :'~>', :'<~']), :lowerBound => 1
- end
-
- # A binary expression, that accesses the value denoted by right in left. i.e. typically
- # expressed concretely in a language as left[right].
- #
- class AccessExpression < Expression
- contains_one_uni 'left_expr', Expression, :lowerBound => 1
- contains_many_uni 'keys', Expression, :lowerBound => 1
- end
-
- # A comparison expression compares left and right using a comparison operator.
- #
- class ComparisonExpression < BinaryExpression
- has_attr 'operator', RGen::MetamodelBuilder::DataTypes::Enum.new([:'==', :'!=', :'<', :'>', :'<=', :'>=' ]), :lowerBound => 1
- end
-
- # A match expression matches left and right using a matching operator.
- #
- class MatchExpression < BinaryExpression
- has_attr 'operator', RGen::MetamodelBuilder::DataTypes::Enum.new([:'!~', :'=~']), :lowerBound => 1
- end
-
- # An 'in' expression checks if left is 'in' right
- #
- class InExpression < BinaryExpression; end
-
- # A boolean expression applies a logical connective operator (and, or) to left and right expressions.
- #
- class BooleanExpression < BinaryExpression
- abstract
- end
-
- # An and expression applies the logical connective operator and to left and right expression
- # and does not evaluate the right expression if the left expression is false.
- #
- class AndExpression < BooleanExpression; end
-
- # An or expression applies the logical connective operator or to the left and right expression
- # and does not evaluate the right expression if the left expression is true
- #
- class OrExpression < BooleanExpression; end
-
- # A literal list / array containing 0:M expressions.
- #
- class LiteralList < Expression
- contains_many_uni 'values', Expression
- end
-
- # A Keyed entry has a key and a value expression. It is typically used as an entry in a Hash.
- #
- class KeyedEntry < Positioned
- contains_one_uni 'key', Expression, :lowerBound => 1
- contains_one_uni 'value', Expression, :lowerBound => 1
- end
-
- # A literal hash is a collection of KeyedEntry objects
- #
- class LiteralHash < Expression
- contains_many_uni 'entries', KeyedEntry
- end
-
- # A block contains a list of expressions
- #
- class BlockExpression < Expression
- contains_many_uni 'statements', Expression
- end
+module Puppet::Pops
+ require 'puppet/pops/model/model_meta'
- # A case option entry in a CaseStatement
- #
- class CaseOption < Expression
- contains_many_uni 'values', Expression, :lowerBound => 1
- contains_one_uni 'then_expr', Expression, :lowerBound => 1
- end
+ # TODO: See PUP-2978 for possible performance optimization
- # A case expression has a test, a list of options (multi values => block map).
- # One CaseOption may contain a LiteralDefault as value. This option will be picked if nothing
- # else matched.
- #
- class CaseExpression < Expression
- contains_one_uni 'test', Expression, :lowerBound => 1
- contains_many_uni 'options', CaseOption
- end
-
- # A query expression is an expression that is applied to some collection.
- # The contained optional expression may contain different types of relational expressions depending
- # on what the query is applied to.
- #
- class QueryExpression < Expression
- abstract
- contains_one_uni 'expr', Expression, :lowerBound => 0
- end
+ # Mix in implementation into the generated code
+ module Model
- # An exported query is a special form of query that searches for exported objects.
- #
- class ExportedQuery < QueryExpression
- end
-
- # A virtual query is a special form of query that searches for virtual objects.
- #
- class VirtualQuery < QueryExpression
- end
-
- # An attribute operation sets or appends a value to a named attribute.
- #
- class AttributeOperation < Positioned
- has_attr 'attribute_name', String, :lowerBound => 1
- has_attr 'operator', RGen::MetamodelBuilder::DataTypes::Enum.new([:'=>', :'+>', ]), :lowerBound => 1
- contains_one_uni 'value_expr', Expression, :lowerBound => 1
- end
-
- # An object that collects stored objects from the central cache and returns
- # them to the current host. Operations may optionally be applied.
- #
- class CollectExpression < Expression
- contains_one_uni 'type_expr', Expression, :lowerBound => 1
- contains_one_uni 'query', QueryExpression, :lowerBound => 1
- contains_many_uni 'operations', AttributeOperation
- end
-
- class Parameter < Positioned
- has_attr 'name', String, :lowerBound => 1
- contains_one_uni 'value', Expression
- end
-
- # Abstract base class for definitions.
- #
- class Definition < Expression
- abstract
- end
-
- # Abstract base class for named and parameterized definitions.
- class NamedDefinition < Definition
- abstract
- has_attr 'name', String, :lowerBound => 1
- contains_many_uni 'parameters', Parameter
- contains_one_uni 'body', Expression
- end
-
- # A resource type definition (a 'define' in the DSL).
- #
- class ResourceTypeDefinition < NamedDefinition
- end
-
- # A node definition matches hosts using Strings, or Regular expressions. It may inherit from
- # a parent node (also using a String or Regular expression).
- #
- class NodeDefinition < Definition
- contains_one_uni 'parent', Expression
- contains_many_uni 'host_matches', Expression, :lowerBound => 1
- contains_one_uni 'body', Expression
- end
-
- class LocatableExpression < Expression
- has_many_attr 'line_offsets', Integer
- has_attr 'locator', Object, :lowerBound => 1, :transient => true
+ class PopsObject
+ include Puppet::Pops::Visitable
+ include Puppet::Pops::Adaptable
+ include Puppet::Pops::Containment
+ end
- module ClassModule
- # Go through the gymnastics of making either value or pattern settable
- # with synchronization to the other form. A derived value cannot be serialized
- # and we want to serialize the pattern. When recreating the object we need to
- # recreate it from the pattern string.
- # The below sets both values if one is changed.
- #
- def locator
- unless result = getLocator
- setLocator(result = Puppet::Pops::Parser::Locator.locator(source_text, source_ref(), line_offsets))
+ class LocatableExpression
+ module ClassModule
+ # Go through the gymnastics of making either value or pattern settable
+ # with synchronization to the other form. A derived value cannot be serialized
+ # and we want to serialize the pattern. When recreating the object we need to
+ # recreate it from the pattern string.
+ # The below sets both values if one is changed.
+ #
+ def locator
+ unless result = getLocator
+ setLocator(result = Puppet::Pops::Parser::Locator.locator(source_text, source_ref(), line_offsets))
+ end
+ result
end
- result
end
end
- end
-
- # Contains one expression which has offsets reported virtually (offset against the Program's
- # overall locator).
- #
- class SubLocatedExpression < Expression
- contains_one_uni 'expr', Expression, :lowerBound => 1
-
- # line offset index for contained expressions
- has_many_attr 'line_offsets', Integer
-
- # Number of preceding lines (before the line_offsets)
- has_attr 'leading_line_count', Integer
-
- # The offset of the leading source line (i.e. size of "left margin").
- has_attr 'leading_line_offset', Integer
-
- # The locator for the sub-locatable's children (not for the sublocator itself)
- # The locator is not serialized and is recreated on demand from the indexing information
- # in self.
- #
- has_attr 'locator', Object, :lowerBound => 1, :transient => true
-
- module ClassModule
- def locator
- unless result = getLocator
- # Adapt myself to get the Locator for me
- adapter = Puppet::Pops::Adapters::SourcePosAdapter.adapt(self)
- # Get the program (root), and deal with case when not contained in a program
- program = eAllContainers.find {|c| c.is_a?(Program) }
- source_ref = program.nil? ? '' : program.source_ref
-
- # An outer locator is needed since SubLocator only deals with offsets. This outer locator
- # has 0,0 as origin.
- outer_locator = Puppet::Pops::Parser::Locator.locator(adpater.extract_text, source_ref, line_offsets)
- # Create a sublocator that describes an offset from the outer
- # NOTE: the offset of self is the same as the sublocator's leading_offset
- result = Puppet::Pops::Parser::Locator::SubLocator.new(outer_locator,
- leading_line_count, offset, leading_line_offset)
- setLocator(result)
+ class SubLocatedExpression
+ module ClassModule
+ def locator
+ unless result = getLocator
+ # Adapt myself to get the Locator for me
+ adapter = Puppet::Pops::Adapters::SourcePosAdapter.adapt(self)
+ # Get the program (root), and deal with case when not contained in a program
+ program = eAllContainers.find {|c| c.is_a?(Program) }
+ source_ref = program.nil? ? '' : program.source_ref
+
+ # An outer locator is needed since SubLocator only deals with offsets. This outer locator
+ # has 0,0 as origin.
+ outer_locator = Puppet::Pops::Parser::Locator.locator(adapter.extract_text, source_ref, line_offsets)
+
+ # Create a sublocator that describes an offset from the outer
+ # NOTE: the offset of self is the same as the sublocator's leading_offset
+ result = Puppet::Pops::Parser::Locator::SubLocator.new(outer_locator,
+ leading_line_count, offset, leading_line_offset)
+ setLocator(result)
+ end
+ result
end
- result
end
end
- end
-
- # A heredoc is a wrapper around a LiteralString or a ConcatenatedStringExpression with a specification
- # of syntax. The expectation is that "syntax" has meaning to a validator. A syntax of nil or '' means
- # "unspecified syntax".
- #
- class HeredocExpression < Expression
- has_attr 'syntax', String
- contains_one_uni 'text_expr', Expression, :lowerBound => 1
- end
-
- # A class definition
- #
- class HostClassDefinition < NamedDefinition
- has_attr 'parent_class', String
- end
-
- # i.e {|parameters| body }
- class LambdaExpression < Expression
- contains_many_uni 'parameters', Parameter
- contains_one_uni 'body', Expression
- end
-
- # If expression. If test is true, the then_expr part should be evaluated, else the (optional)
- # else_expr. An 'elsif' is simply an else_expr = IfExpression, and 'else' is simply else == Block.
- # a 'then' is typically a Block.
- #
- class IfExpression < Expression
- contains_one_uni 'test', Expression, :lowerBound => 1
- contains_one_uni 'then_expr', Expression, :lowerBound => 1
- contains_one_uni 'else_expr', Expression
- end
-
- # An if expression with boolean reversed test.
- #
- class UnlessExpression < IfExpression
- end
-
- # An abstract call.
- #
- class CallExpression < Expression
- abstract
- # A bit of a crutch; functions are either procedures (void return) or has an rvalue
- # this flag tells the evaluator that it is a failure to call a function that is void/procedure
- # where a value is expected.
- #
- has_attr 'rval_required', Boolean, :defaultValueLiteral => "false"
- contains_one_uni 'functor_expr', Expression, :lowerBound => 1
- contains_many_uni 'arguments', Expression
- contains_one_uni 'lambda', Expression
- end
-
- # A function call where the functor_expr should evaluate to something callable.
- #
- class CallFunctionExpression < CallExpression; end
-
- # A function call where the given functor_expr should evaluate to the name
- # of a function.
- #
- class CallNamedFunctionExpression < CallExpression; end
-
- # A method/function call where the function expr is a NamedAccess and with support for
- # an optional lambda block
- #
- class CallMethodExpression < CallExpression
- end
-
- # Abstract base class for literals.
- #
- class Literal < Expression
- abstract
- end
- # A literal value is an abstract value holder. The type of the contained value is
- # determined by the concrete subclass.
- #
- class LiteralValue < Literal
- abstract
- end
-
- # A Regular Expression Literal.
- #
- class LiteralRegularExpression < LiteralValue
- has_attr 'value', Object, :lowerBound => 1, :transient => true
- has_attr 'pattern', String, :lowerBound => 1
-
- module ClassModule
- # Go through the gymnastics of making either value or pattern settable
- # with synchronization to the other form. A derived value cannot be serialized
- # and we want to serialize the pattern. When recreating the object we need to
- # recreate it from the pattern string.
- # The below sets both values if one is changed.
- #
- def value= regexp
- setValue regexp
- setPattern regexp.to_s
- end
+ class LiteralRegularExpression
+ module ClassModule
+ # Go through the gymnastics of making either value or pattern settable
+ # with synchronization to the other form. A derived value cannot be serialized
+ # and we want to serialize the pattern. When recreating the object we need to
+ # recreate it from the pattern string.
+ # The below sets both values if one is changed.
+ #
+ def value= regexp
+ setValue regexp
+ setPattern regexp.to_s
+ end
- def pattern= regexp_string
- setPattern regexp_string
- setValue Regexp.new(regexp_string)
+ def pattern= regexp_string
+ setPattern regexp_string
+ setValue Regexp.new(regexp_string)
+ end
end
end
- end
-
- # A Literal String
- #
- class LiteralString < LiteralValue
- has_attr 'value', String, :lowerBound => 1
- end
-
- class LiteralNumber < LiteralValue
- abstract
- end
-
- # A literal number has a radix of decimal (10), octal (8), or hex (16) to enable string conversion with the input radix.
- # By default, a radix of 10 is used.
- #
- class LiteralInteger < LiteralNumber
- has_attr 'radix', Integer, :lowerBound => 1, :defaultValueLiteral => "10"
- has_attr 'value', Integer, :lowerBound => 1
- end
-
- class LiteralFloat < LiteralNumber
- has_attr 'value', Float, :lowerBound => 1
- end
-
- # The DSL `undef`.
- #
- class LiteralUndef < Literal; end
-
- # The DSL `default`
- class LiteralDefault < Literal; end
-
- # DSL `true` or `false`
- class LiteralBoolean < LiteralValue
- has_attr 'value', Boolean, :lowerBound => 1
- end
-
- # A text expression is an interpolation of an expression. If the embedded expression is
- # a QualifiedName, it is taken as a variable name and resolved. All other expressions are evaluated.
- # The result is transformed to a string.
- #
- class TextExpression < UnaryExpression; end
-
- # An interpolated/concatenated string. The contained segments are expressions. Verbatim sections
- # should be LiteralString instances, and interpolated expressions should either be
- # TextExpression instances (if QualifiedNames should be turned into variables), or any other expression
- # if such treatment is not needed.
- #
- class ConcatenatedString < Expression
- contains_many_uni 'segments', Expression
- end
-
- # A DSL NAME (one or multiple parts separated by '::').
- #
- class QualifiedName < LiteralValue
- has_attr 'value', String, :lowerBound => 1
- end
-
- # A DSL CLASSREF (one or multiple parts separated by '::' where (at least) the first part starts with an upper case letter).
- #
- class QualifiedReference < LiteralValue
- has_attr 'value', String, :lowerBound => 1
- end
-
- # A Variable expression looks up value of expr (some kind of name) in scope.
- # The expression is typically a QualifiedName, or QualifiedReference.
- #
- class VariableExpression < UnaryExpression; end
-
- # Epp start
- class EppExpression < Expression
- has_attr 'see_scope', Boolean
- contains_one_uni 'body', Expression
- end
-
- # A string to render
- class RenderStringExpression < LiteralString
- end
-
- # An expression to evluate and render
- class RenderExpression < UnaryExpression
- end
-
- # A resource body describes one resource instance
- #
- class ResourceBody < Positioned
- contains_one_uni 'title', Expression
- contains_many_uni 'operations', AttributeOperation
- end
-
- # An abstract resource describes the form of the resource (regular, virtual or exported)
- # and adds convenience methods to ask if it is virtual or exported.
- # All derived classes may not support all forms, and these needs to be validated
- #
- class AbstractResource < Expression
- abstract
- has_attr 'form', RGen::MetamodelBuilder::DataTypes::Enum.new([:regular, :virtual, :exported ]), :lowerBound => 1, :defaultValueLiteral => "regular"
- has_attr 'virtual', Boolean, :derived => true
- has_attr 'exported', Boolean, :derived => true
-
- module ClassModule
- def virtual_derived
- form == :virtual || form == :exported
- end
+ class AbstractResource
+ module ClassModule
+ def virtual_derived
+ form == :virtual || form == :exported
+ end
- def exported_derived
- form == :exported
+ def exported_derived
+ form == :exported
+ end
end
end
- end
-
- # A resource expression is used to instantiate one or many resource. Resources may optionally
- # be virtual or exported, an exported resource is always virtual.
- #
- class ResourceExpression < AbstractResource
- contains_one_uni 'type_name', Expression, :lowerBound => 1
- contains_many_uni 'bodies', ResourceBody
- end
-
- # A resource defaults sets defaults for a resource type. This class inherits from AbstractResource
- # but does only support the :regular form (this is intentional to be able to produce better error messages
- # when illegal forms are applied to a model.
- #
- class ResourceDefaultsExpression < AbstractResource
- contains_one_uni 'type_ref', QualifiedReference
- contains_many_uni 'operations', AttributeOperation
- end
-
- # A resource override overrides already set values.
- #
- class ResourceOverrideExpression < Expression
- contains_one_uni 'resources', Expression, :lowerBound => 1
- contains_many_uni 'operations', AttributeOperation
- end
-
- # A selector entry describes a map from matching_expr to value_expr.
- #
- class SelectorEntry < Positioned
- contains_one_uni 'matching_expr', Expression, :lowerBound => 1
- contains_one_uni 'value_expr', Expression, :lowerBound => 1
- end
-
- # A selector expression represents a mapping from a left_expr to a matching SelectorEntry.
- #
- class SelectorExpression < Expression
- contains_one_uni 'left_expr', Expression, :lowerBound => 1
- contains_many_uni 'selectors', SelectorEntry
- end
-
- # A named access expression looks up a named part. (e.g. $a.b)
- #
- class NamedAccessExpression < BinaryExpression; end
-
- # A Program is the top level construct returned by the parser
- # it contains the parsed result in the body, and has a reference to the full source text,
- # and its origin. The line_offset's is an array with the start offset of each line.
- #
- class Program < PopsObject
- contains_one_uni 'body', Expression
- has_many 'definitions', Definition
- has_attr 'source_text', String
- has_attr 'source_ref', String
- has_many_attr 'line_offsets', Integer
- has_attr 'locator', Object, :lowerBound => 1, :transient => true
-
- module ClassModule
- def locator
- unless result = getLocator
- setLocator(result = Puppet::Pops::Parser::Locator.locator(source_text, source_ref(), line_offsets))
+ class Program < PopsObject
+ module ClassModule
+ def locator
+ unless result = getLocator
+ setLocator(result = Puppet::Pops::Parser::Locator.locator(source_text, source_ref(), line_offsets))
+ end
+ result
end
- result
end
end
end
+
end
diff --git a/lib/puppet/pops/model/model_label_provider.rb b/lib/puppet/pops/model/model_label_provider.rb
index 979aa4bc5..543d83d49 100644
--- a/lib/puppet/pops/model/model_label_provider.rb
+++ b/lib/puppet/pops/model/model_label_provider.rb
@@ -1,104 +1,106 @@
# A provider of labels for model objects, producing a human name for the model object.
# As an example, if object is an ArithmeticExpression with operator +, `#a_an(o)` produces "a '+' Expression",
# #the(o) produces "the + Expression", and #label produces "+ Expression".
#
class Puppet::Pops::Model::ModelLabelProvider < Puppet::Pops::LabelProvider
def initialize
@@label_visitor ||= Puppet::Pops::Visitor.new(self,"label",0,0)
end
# Produces a label for the given objects type/operator without article.
# If a Class is given, its name is used as label
#
def label o
@@label_visitor.visit(o)
end
def label_Factory o ; label(o.current) end
def label_Array o ; "Array" end
def label_LiteralInteger o ; "Literal Integer" end
def label_LiteralFloat o ; "Literal Float" end
def label_ArithmeticExpression o ; "'#{o.operator}' expression" end
def label_AccessExpression o ; "'[]' expression" end
def label_MatchExpression o ; "'#{o.operator}' expression" end
def label_CollectExpression o ; label(o.query) end
def label_EppExpression o ; "Epp Template" end
def label_ExportedQuery o ; "Exported Query" end
def label_VirtualQuery o ; "Virtual Query" end
def label_QueryExpression o ; "Collect Query" end
def label_ComparisonExpression o ; "'#{o.operator}' expression" end
def label_AndExpression o ; "'and' expression" end
def label_OrExpression o ; "'or' expression" end
def label_InExpression o ; "'in' expression" end
def label_AssignmentExpression o ; "'#{o.operator}' expression" end
def label_AttributeOperation o ; "'#{o.operator}' expression" end
def label_LiteralList o ; "Array Expression" end
def label_LiteralHash o ; "Hash Expression" end
def label_KeyedEntry o ; "Hash Entry" end
def label_LiteralBoolean o ; "Boolean" end
def label_TrueClass o ; "Boolean" end
def label_FalseClass o ; "Boolean" end
def label_LiteralString o ; "String" end
def label_LambdaExpression o ; "Lambda" end
def label_LiteralDefault o ; "'default' expression" end
def label_LiteralUndef o ; "'undef' expression" end
def label_LiteralRegularExpression o ; "Regular Expression" end
def label_Nop o ; "Nop Expression" end
def label_NamedAccessExpression o ; "'.' expression" end
def label_NilClass o ; "Nil Object" end
def label_NotExpression o ; "'not' expression" end
def label_VariableExpression o ; "Variable" end
def label_TextExpression o ; "Expression in Interpolated String" end
def label_UnaryMinusExpression o ; "Unary Minus" end
+ def label_UnfoldExpression o ; "Unfold" end
def label_BlockExpression o ; "Block Expression" end
def label_ConcatenatedString o ; "Double Quoted String" end
def label_HeredocExpression o ; "'@(#{o.syntax})' expression" end
def label_HostClassDefinition o ; "Host Class Definition" end
def label_NodeDefinition o ; "Node Definition" end
def label_ResourceTypeDefinition o ; "'define' expression" end
def label_ResourceOverrideExpression o ; "Resource Override" end
def label_Parameter o ; "Parameter Definition" end
def label_ParenthesizedExpression o ; "Parenthesized Expression" end
def label_IfExpression o ; "'if' statement" end
def label_UnlessExpression o ; "'unless' Statement" end
def label_CallNamedFunctionExpression o ; "Function Call" end
def label_CallMethodExpression o ; "Method call" end
def label_CaseExpression o ; "'case' statement" end
def label_CaseOption o ; "Case Option" end
def label_RenderStringExpression o ; "Epp Text" end
def label_RenderExpression o ; "Epp Interpolated Expression" end
def label_RelationshipExpression o ; "'#{o.operator}' expression" end
def label_ResourceBody o ; "Resource Instance Definition" end
def label_ResourceDefaultsExpression o ; "Resource Defaults Expression" end
def label_ResourceExpression o ; "Resource Statement" end
def label_SelectorExpression o ; "Selector Expression" end
def label_SelectorEntry o ; "Selector Option" end
def label_Integer o ; "Integer" end
def label_Fixnum o ; "Integer" end
def label_Bignum o ; "Integer" end
def label_Float o ; "Float" end
def label_String o ; "String" end
def label_Regexp o ; "Regexp" end
def label_Object o ; "Object" end
def label_Hash o ; "Hash" end
def label_QualifiedName o ; "Name" end
def label_QualifiedReference o ; "Type-Name" end
- def label_PAbstractType o ; "#{Puppet::Pops::Types::TypeCalculator.string(o)}-Type" end
+ def label_PAnyType o ; "#{Puppet::Pops::Types::TypeCalculator.string(o)}-Type" end
+ def label_ReservedWord o ; "Reserved Word '#{o.word}'" end
def label_PResourceType o
if o.title
"#{Puppet::Pops::Types::TypeCalculator.string(o)} Resource-Reference"
else
"#{Puppet::Pops::Types::TypeCalculator.string(o)}-Type"
end
end
def label_Class o
- if o <= Puppet::Pops::Types::PAbstractType
+ if o <= Puppet::Pops::Types::PAnyType
simple_name = o.name.split('::').last
simple_name[1..-5] + "-Type"
else
o.name
end
end
end
diff --git a/lib/puppet/pops/model/model.rb b/lib/puppet/pops/model/model_meta.rb
similarity index 81%
copy from lib/puppet/pops/model/model.rb
copy to lib/puppet/pops/model/model_meta.rb
index b186aca3f..246216aa9 100644
--- a/lib/puppet/pops/model/model.rb
+++ b/lib/puppet/pops/model/model_meta.rb
@@ -1,606 +1,576 @@
#
# The Puppet Pops Metamodel
#
# This module contains a formal description of the Puppet Pops (*P*uppet *OP*eration instruction*S*).
# It describes a Metamodel containing DSL instructions, a description of PuppetType and related
# classes needed to evaluate puppet logic.
# The metamodel resembles the existing AST model, but it is a semantic model of instructions and
# the types that they operate on rather than an Abstract Syntax Tree, although closely related.
#
# The metamodel is anemic (has no behavior) except basic datatype and type
# assertions and reference/containment assertions.
# The metamodel is also a generalized description of the Puppet DSL to enable the
# same metamodel to be used to express Puppet DSL models (instances) with different semantics as
# the language evolves.
#
# The metamodel is concretized by a validator for a particular version of
# the Puppet DSL language.
#
# This metamodel is expressed using RGen.
#
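# As a reading aid (not itself part of the metamodel): classes below are declared
# declaratively, e.g. further down
#
#   class ArithmeticExpression < BinaryExpression
#     has_attr 'operator', OpArithmetic, :lowerBound => 1
#   end
#
# and RGen generates the corresponding attribute accessors and containment helpers
# for instances of such a class.
#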
require 'rgen/metamodel_builder'
module Puppet::Pops::Model
extend RGen::MetamodelBuilder::ModuleExtension
# A base class for modeled objects that makes them Visitable, and Adaptable.
#
class PopsObject < RGen::MetamodelBuilder::MMBase
- include Puppet::Pops::Visitable
- include Puppet::Pops::Adaptable
- include Puppet::Pops::Containment
abstract
end
# A Positioned object has an offset measured in an opaque unit (representing characters) from the start
# of a source text (starting
# from 0), and a length measured in the same opaque unit. The resolution of the opaque unit requires the
# aid of a Locator instance that knows about the measure. This information is stored in the model's
# root node - a Program.
#
# The offset and length are optional if the source of the model is not from parsed text.
#
class Positioned < PopsObject
abstract
has_attr 'offset', Integer
has_attr 'length', Integer
end
# @abstract base class for expressions
class Expression < Positioned
abstract
end
# A Nop - the "no op" expression.
# @note not really needed since the evaluator can evaluate nil with the meaning of NoOp
# @todo deprecate? May be useful if there is the need to differentiate between nil and Nop when transforming model.
#
class Nop < Expression
end
# A binary expression is abstract and has a left and a right expression. The order of evaluation
# and semantics are determined by the concrete subclass.
#
class BinaryExpression < Expression
abstract
#
# @!attribute [rw] left_expr
# @return [Expression]
contains_one_uni 'left_expr', Expression, :lowerBound => 1
contains_one_uni 'right_expr', Expression, :lowerBound => 1
end
# A unary expression is abstract and contains one expression. The semantics are determined by
# a concrete subclass.
#
class UnaryExpression < Expression
abstract
contains_one_uni 'expr', Expression, :lowerBound => 1
end
# A class that simply evaluates to the contained expression.
# It is of value in order to preserve user-entered parentheses in transformations, and
# transformations from model to source.
#
class ParenthesizedExpression < UnaryExpression; end
# A boolean not expression, reversing the truth of the unary expr.
#
class NotExpression < UnaryExpression; end
# An arithmetic expression reversing the polarity of the numeric unary expr.
#
class UnaryMinusExpression < UnaryExpression; end
+ # Unfolds an array (a.k.a 'splat')
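+ # e.g. in `f(*$args)` the splat is meant to spread the elements of $args into
+ # individual arguments.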
+ class UnfoldExpression < UnaryExpression; end
+
+ OpAssignment = RGen::MetamodelBuilder::DataTypes::Enum.new(
+ :literals => [:'=', :'+=', :'-='],
+ :name => 'OpAssignment')
+
# An assignment expression assigns a value to the lval() of the left_expr.
#
class AssignmentExpression < BinaryExpression
- has_attr 'operator', RGen::MetamodelBuilder::DataTypes::Enum.new([:'=', :'+=', :'-=']), :lowerBound => 1
+ has_attr 'operator', OpAssignment, :lowerBound => 1
end
+ OpArithmetic = RGen::MetamodelBuilder::DataTypes::Enum.new(
+ :literals => [:'+', :'-', :'*', :'%', :'/', :'<<', :'>>' ],
+ :name => 'OpArithmetic')
+
# An arithmetic expression applies an arithmetic operator on left and right expressions.
#
class ArithmeticExpression < BinaryExpression
- has_attr 'operator', RGen::MetamodelBuilder::DataTypes::Enum.new([:'+', :'-', :'*', :'%', :'/', :'<<', :'>>' ]), :lowerBound => 1
+ has_attr 'operator', OpArithmetic, :lowerBound => 1
end
+ OpRelationship = RGen::MetamodelBuilder::DataTypes::Enum.new(
+ :literals => [:'->', :'<-', :'~>', :'<~'],
+ :name => 'OpRelationship')
+
# A relationship expression associates the left and right expressions
#
class RelationshipExpression < BinaryExpression
- has_attr 'operator', RGen::MetamodelBuilder::DataTypes::Enum.new([:'->', :'<-', :'~>', :'<~']), :lowerBound => 1
+ has_attr 'operator', OpRelationship, :lowerBound => 1
end
# A binary expression that accesses the value denoted by right in left, i.e. typically
# expressed concretely in a language as left[right].
#
class AccessExpression < Expression
contains_one_uni 'left_expr', Expression, :lowerBound => 1
contains_many_uni 'keys', Expression, :lowerBound => 1
end
+ OpComparison = RGen::MetamodelBuilder::DataTypes::Enum.new(
+ :literals => [:'==', :'!=', :'<', :'>', :'<=', :'>=' ],
+ :name => 'OpComparison')
+
# A comparison expression compares left and right using a comparison operator.
#
class ComparisonExpression < BinaryExpression
- has_attr 'operator', RGen::MetamodelBuilder::DataTypes::Enum.new([:'==', :'!=', :'<', :'>', :'<=', :'>=' ]), :lowerBound => 1
+ has_attr 'operator', OpComparison, :lowerBound => 1
end
+ OpMatch = RGen::MetamodelBuilder::DataTypes::Enum.new(
+ :literals => [:'!~', :'=~'],
+ :name => 'OpMatch')
+
# A match expression matches left and right using a matching operator.
#
class MatchExpression < BinaryExpression
- has_attr 'operator', RGen::MetamodelBuilder::DataTypes::Enum.new([:'!~', :'=~']), :lowerBound => 1
+ has_attr 'operator', OpMatch, :lowerBound => 1
end
# An 'in' expression checks if left is 'in' right
#
class InExpression < BinaryExpression; end
# A boolean expression applies a logical connective operator (and, or) to left and right expressions.
#
class BooleanExpression < BinaryExpression
abstract
end
# An and expression applies the logical connective operator and to left and right expression
# and does not evaluate the right expression if the left expression is false.
#
class AndExpression < BooleanExpression; end
# An or expression applies the logical connective operator or to the left and right expression
# and does not evaluate the right expression if the left expression is true
#
class OrExpression < BooleanExpression; end
# A literal list / array containing 0:M expressions.
#
class LiteralList < Expression
contains_many_uni 'values', Expression
end
# A Keyed entry has a key and a value expression. It is typically used as an entry in a Hash.
#
class KeyedEntry < Positioned
contains_one_uni 'key', Expression, :lowerBound => 1
contains_one_uni 'value', Expression, :lowerBound => 1
end
# A literal hash is a collection of KeyedEntry objects
#
class LiteralHash < Expression
contains_many_uni 'entries', KeyedEntry
end
# A block contains a list of expressions
#
class BlockExpression < Expression
contains_many_uni 'statements', Expression
end
# A case option entry in a CaseStatement
#
class CaseOption < Expression
contains_many_uni 'values', Expression, :lowerBound => 1
contains_one_uni 'then_expr', Expression, :lowerBound => 1
end
# A case expression has a test, a list of options (multi values => block map).
# One CaseOption may contain a LiteralDefault as value. This option will be picked if nothing
# else matched.
#
class CaseExpression < Expression
contains_one_uni 'test', Expression, :lowerBound => 1
contains_many_uni 'options', CaseOption
end
# A query expression is an expression that is applied to some collection.
# The contained optional expression may contain different types of relational expressions depending
# on what the query is applied to.
#
class QueryExpression < Expression
abstract
contains_one_uni 'expr', Expression, :lowerBound => 0
end
# An exported query is a special form of query that searches for exported objects.
#
class ExportedQuery < QueryExpression
end
# A virtual query is a special form of query that searches for virtual objects.
#
class VirtualQuery < QueryExpression
end
+ OpAttribute = RGen::MetamodelBuilder::DataTypes::Enum.new(
+ :literals => [:'=>', :'+>', ],
+ :name => 'OpAttribute')
+
+ class AbstractAttributeOperation < Positioned
+ end
+
# An attribute operation sets or appends a value to a named attribute.
#
- class AttributeOperation < Positioned
+ class AttributeOperation < AbstractAttributeOperation
has_attr 'attribute_name', String, :lowerBound => 1
- has_attr 'operator', RGen::MetamodelBuilder::DataTypes::Enum.new([:'=>', :'+>', ]), :lowerBound => 1
+ has_attr 'operator', OpAttribute, :lowerBound => 1
contains_one_uni 'value_expr', Expression, :lowerBound => 1
end
+ # An attribute operation containing an expression that must evaluate to a Hash
+ #
+ class AttributesOperation < AbstractAttributeOperation
+ contains_one_uni 'expr', Expression, :lowerBound => 1
+ end
+
# An object that collects stored objects from the central cache and returns
# them to the current host. Operations may optionally be applied.
#
class CollectExpression < Expression
contains_one_uni 'type_expr', Expression, :lowerBound => 1
contains_one_uni 'query', QueryExpression, :lowerBound => 1
contains_many_uni 'operations', AttributeOperation
end
class Parameter < Positioned
has_attr 'name', String, :lowerBound => 1
contains_one_uni 'value', Expression
+ contains_one_uni 'type_expr', Expression, :lowerBound => 0
+ has_attr 'captures_rest', Boolean
end
# Abstract base class for definitions.
#
class Definition < Expression
abstract
end
# Abstract base class for named and parameterized definitions.
class NamedDefinition < Definition
abstract
has_attr 'name', String, :lowerBound => 1
contains_many_uni 'parameters', Parameter
contains_one_uni 'body', Expression
end
# A resource type definition (a 'define' in the DSL).
#
class ResourceTypeDefinition < NamedDefinition
end
# A node definition matches hosts using Strings, or Regular expressions. It may inherit from
# a parent node (also using a String or Regular expression).
#
class NodeDefinition < Definition
contains_one_uni 'parent', Expression
contains_many_uni 'host_matches', Expression, :lowerBound => 1
contains_one_uni 'body', Expression
end
class LocatableExpression < Expression
has_many_attr 'line_offsets', Integer
has_attr 'locator', Object, :lowerBound => 1, :transient => true
-
- module ClassModule
- # Go through the gymnastics of making either value or pattern settable
- # with synchronization to the other form. A derived value cannot be serialized
- # and we want to serialize the pattern. When recreating the object we need to
- # recreate it from the pattern string.
- # The below sets both values if one is changed.
- #
- def locator
- unless result = getLocator
- setLocator(result = Puppet::Pops::Parser::Locator.locator(source_text, source_ref(), line_offsets))
- end
- result
- end
- end
end
# Contains one expression which has offsets reported virtually (offset against the Program's
# overall locator).
#
class SubLocatedExpression < Expression
contains_one_uni 'expr', Expression, :lowerBound => 1
# line offset index for contained expressions
has_many_attr 'line_offsets', Integer
# Number of preceding lines (before the line_offsets)
has_attr 'leading_line_count', Integer
# The offset of the leading source line (i.e. size of "left margin").
has_attr 'leading_line_offset', Integer
# The locator for the sub-locatable's children (not for the sublocator itself)
# The locator is not serialized and is recreated on demand from the indexing information
# in self.
#
has_attr 'locator', Object, :lowerBound => 1, :transient => true
-
- module ClassModule
- def locator
- unless result = getLocator
- # Adapt myself to get the Locator for me
- adapter = Puppet::Pops::Adapters::SourcePosAdapter.adapt(self)
- # Get the program (root), and deal with case when not contained in a program
- program = eAllContainers.find {|c| c.is_a?(Program) }
- source_ref = program.nil? ? '' : program.source_ref
-
- # An outer locator is needed since SubLocator only deals with offsets. This outer locator
- # has 0,0 as origin.
- outer_locator = Puppet::Pops::Parser::Locator.locator(adpater.extract_text, source_ref, line_offsets)
-
- # Create a sublocator that describes an offset from the outer
- # NOTE: the offset of self is the same as the sublocator's leading_offset
- result = Puppet::Pops::Parser::Locator::SubLocator.new(outer_locator,
- leading_line_count, offset, leading_line_offset)
- setLocator(result)
- end
- result
- end
- end
end
# A heredoc is a wrapper around a LiteralString or a ConcatenatedStringExpression with a specification
# of syntax. The expectation is that "syntax" has meaning to a validator. A syntax of nil or '' means
# "unspecified syntax".
#
class HeredocExpression < Expression
has_attr 'syntax', String
contains_one_uni 'text_expr', Expression, :lowerBound => 1
end
# A class definition
#
class HostClassDefinition < NamedDefinition
has_attr 'parent_class', String
end
# i.e {|parameters| body }
class LambdaExpression < Expression
contains_many_uni 'parameters', Parameter
contains_one_uni 'body', Expression
end
# If expression. If test is true, the then_expr part should be evaluated, else the (optional)
# else_expr. An 'elsif' is simply an else_expr = IfExpression, and 'else' is simply else == Block.
# A 'then' is typically a Block.
#
class IfExpression < Expression
contains_one_uni 'test', Expression, :lowerBound => 1
contains_one_uni 'then_expr', Expression, :lowerBound => 1
contains_one_uni 'else_expr', Expression
end
# An if expression with boolean reversed test.
#
class UnlessExpression < IfExpression
end
# An abstract call.
#
class CallExpression < Expression
abstract
# A bit of a crutch; functions are either procedures (void return) or have an rvalue.
# This flag tells the evaluator that it is a failure to call a function that is void/procedure
# where a value is expected.
#
has_attr 'rval_required', Boolean, :defaultValueLiteral => "false"
contains_one_uni 'functor_expr', Expression, :lowerBound => 1
contains_many_uni 'arguments', Expression
contains_one_uni 'lambda', Expression
end
# A function call where the functor_expr should evaluate to something callable.
#
class CallFunctionExpression < CallExpression; end
# A function call where the given functor_expr should evaluate to the name
# of a function.
#
class CallNamedFunctionExpression < CallExpression; end
# A method/function call where the function expr is a NamedAccess, with support for
# an optional lambda block
#
class CallMethodExpression < CallExpression
end
# Abstract base class for literals.
#
class Literal < Expression
abstract
end
# A literal value is an abstract value holder. The type of the contained value is
# determined by the concrete subclass.
#
class LiteralValue < Literal
abstract
end
# A Regular Expression Literal.
#
class LiteralRegularExpression < LiteralValue
has_attr 'value', Object, :lowerBound => 1, :transient => true
has_attr 'pattern', String, :lowerBound => 1
-
- module ClassModule
- # Go through the gymnastics of making either value or pattern settable
- # with synchronization to the other form. A derived value cannot be serialized
- # and we want to serialize the pattern. When recreating the object we need to
- # recreate it from the pattern string.
- # The below sets both values if one is changed.
- #
- def value= regexp
- setValue regexp
- setPattern regexp.to_s
- end
-
- def pattern= regexp_string
- setPattern regexp_string
- setValue Regexp.new(regexp_string)
- end
- end
-
end
# A Literal String
#
class LiteralString < LiteralValue
has_attr 'value', String, :lowerBound => 1
end
class LiteralNumber < LiteralValue
abstract
end
# A literal number has a radix of decimal (10), octal (8), or hex (16) to enable string conversion with the input radix.
# By default, a radix of 10 is used.
#
class LiteralInteger < LiteralNumber
has_attr 'radix', Integer, :lowerBound => 1, :defaultValueLiteral => "10"
has_attr 'value', Integer, :lowerBound => 1
end
class LiteralFloat < LiteralNumber
has_attr 'value', Float, :lowerBound => 1
end
# The DSL `undef`.
#
class LiteralUndef < Literal; end
# The DSL `default`
class LiteralDefault < Literal; end
# DSL `true` or `false`
class LiteralBoolean < LiteralValue
has_attr 'value', Boolean, :lowerBound => 1
end
# A text expression is an interpolation of an expression. If the embedded expression is
# a QualifiedName, it is taken as a variable name and resolved. All other expressions are evaluated.
# The result is transformed to a string.
#
class TextExpression < UnaryExpression; end
# An interpolated/concatenated string. The contained segments are expressions. Verbatim sections
# should be LiteralString instances, and interpolated expressions should either be
# TextExpression instances (if QualifiedNames should be turned into variables), or any other expression
# if such treatment is not needed.
#
class ConcatenatedString < Expression
contains_many_uni 'segments', Expression
end
# A DSL NAME (one or multiple parts separated by '::').
#
class QualifiedName < LiteralValue
has_attr 'value', String, :lowerBound => 1
end
+ # Represents a parsed reserved word
+ class ReservedWord < LiteralValue
+ has_attr 'word', String, :lowerBound => 1
+ end
+
# A DSL CLASSREF (one or multiple parts separated by '::' where (at least) the first part starts with an upper case letter).
#
class QualifiedReference < LiteralValue
has_attr 'value', String, :lowerBound => 1
end
# A Variable expression looks up the value of expr (some kind of name) in scope.
# The expression is typically a QualifiedName, or QualifiedReference.
#
class VariableExpression < UnaryExpression; end
# Epp start
class EppExpression < Expression
- has_attr 'see_scope', Boolean
+ # EPP can be specified without giving any parameter specification.
+ # However, the parameters of the lambda in that case are the empty
+ # array, which is the same as when the parameters are explicitly
+ # specified as empty. This attribute tracks that difference.
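+ # (e.g. a template beginning with an empty parameter tag such as `<%- | | -%>` is
+ # still considered to have its parameters specified.)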
+ has_attr 'parameters_specified', Boolean
contains_one_uni 'body', Expression
end
# A string to render
class RenderStringExpression < LiteralString
end
# An expression to evaluate and render
class RenderExpression < UnaryExpression
end
# A resource body describes one resource instance
#
class ResourceBody < Positioned
contains_one_uni 'title', Expression
- contains_many_uni 'operations', AttributeOperation
+ contains_many_uni 'operations', AbstractAttributeOperation
end
+ ResourceFormEnum = RGen::MetamodelBuilder::DataTypes::Enum.new(
+ :literals => [:regular, :virtual, :exported ],
+ :name => 'ResourceFormEnum')
+
# An abstract resource describes the form of the resource (regular, virtual or exported)
# and adds convenience methods to ask if it is virtual or exported.
# Not all derived classes support all forms, and this needs to be validated
#
class AbstractResource < Expression
abstract
- has_attr 'form', RGen::MetamodelBuilder::DataTypes::Enum.new([:regular, :virtual, :exported ]), :lowerBound => 1, :defaultValueLiteral => "regular"
+ has_attr 'form', ResourceFormEnum, :lowerBound => 1, :defaultValueLiteral => "regular"
has_attr 'virtual', Boolean, :derived => true
has_attr 'exported', Boolean, :derived => true
-
- module ClassModule
- def virtual_derived
- form == :virtual || form == :exported
- end
-
- def exported_derived
- form == :exported
- end
- end
-
end
# A resource expression is used to instantiate one or many resources. Resources may optionally
# be virtual or exported; an exported resource is always virtual.
#
class ResourceExpression < AbstractResource
contains_one_uni 'type_name', Expression, :lowerBound => 1
contains_many_uni 'bodies', ResourceBody
end
# A resource defaults expression sets defaults for a resource type. This class inherits from AbstractResource
# but only supports the :regular form (this is intentional, to be able to produce better error messages
# when illegal forms are applied to a model).
#
class ResourceDefaultsExpression < AbstractResource
- contains_one_uni 'type_ref', QualifiedReference
- contains_many_uni 'operations', AttributeOperation
+ contains_one_uni 'type_ref', Expression
+ contains_many_uni 'operations', AbstractAttributeOperation
end
# A resource override overrides already set values.
#
- class ResourceOverrideExpression < Expression
+ class ResourceOverrideExpression < AbstractResource
contains_one_uni 'resources', Expression, :lowerBound => 1
- contains_many_uni 'operations', AttributeOperation
+ contains_many_uni 'operations', AbstractAttributeOperation
end
# A selector entry describes a map from matching_expr to value_expr.
#
class SelectorEntry < Positioned
contains_one_uni 'matching_expr', Expression, :lowerBound => 1
contains_one_uni 'value_expr', Expression, :lowerBound => 1
end
# A selector expression represents a mapping from a left_expr to a matching SelectorEntry.
#
class SelectorExpression < Expression
contains_one_uni 'left_expr', Expression, :lowerBound => 1
contains_many_uni 'selectors', SelectorEntry
end
# A named access expression looks up a named part. (e.g. $a.b)
#
class NamedAccessExpression < BinaryExpression; end
# A Program is the top level construct returned by the parser.
# It contains the parsed result in the body, has a reference to the full source text
# and its origin, and line_offsets: an array with the start offset of each line.
#
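# (For example, for a two line source "a\nbb" the line_offsets would be [0, 2].)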
class Program < PopsObject
contains_one_uni 'body', Expression
has_many 'definitions', Definition
has_attr 'source_text', String
has_attr 'source_ref', String
has_many_attr 'line_offsets', Integer
has_attr 'locator', Object, :lowerBound => 1, :transient => true
-
- module ClassModule
- def locator
- unless result = getLocator
- setLocator(result = Puppet::Pops::Parser::Locator.locator(source_text, source_ref(), line_offsets))
- end
- result
- end
- end
-
end
end
diff --git a/lib/puppet/pops/model/model_tree_dumper.rb b/lib/puppet/pops/model/model_tree_dumper.rb
index 0366a421a..4455414c1 100644
--- a/lib/puppet/pops/model/model_tree_dumper.rb
+++ b/lib/puppet/pops/model/model_tree_dumper.rb
@@ -1,385 +1,407 @@
# Dumps a Pops::Model in Polish (prefix) notation; i.e. LISP style
# The intention is to use this for debugging output
# TODO: BAD NAME - A DUMP is a Ruby Serialization
#
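# As an informal illustration of the output format: the model for `1 + 2` is
# expected to dump as (+ 1 2), and a sequence of statements as a (block ...) list.
#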
class Puppet::Pops::Model::ModelTreeDumper < Puppet::Pops::Model::TreeDumper
def dump_Array o
o.collect {|e| do_dump(e) }
end
def dump_LiteralFloat o
o.value.to_s
end
def dump_LiteralInteger o
case o.radix
when 10
o.value.to_s
when 8
"0%o" % o.value
when 16
"0x%X" % o.value
else
"bad radix:" + o.value.to_s
end
end
def dump_LiteralValue o
o.value.to_s
end
def dump_Factory o
do_dump(o.current)
end
def dump_ArithmeticExpression o
[o.operator.to_s, do_dump(o.left_expr), do_dump(o.right_expr)]
end
# x[y] prints as (slice x y)
def dump_AccessExpression o
if o.keys.size <= 1
["slice", do_dump(o.left_expr), do_dump(o.keys[0])]
else
["slice", do_dump(o.left_expr), do_dump(o.keys)]
end
end
def dump_MatchesExpression o
[o.operator.to_s, do_dump(o.left_expr), do_dump(o.right_expr)]
end
def dump_CollectExpression o
result = ["collect", do_dump(o.type_expr), :indent, :break, do_dump(o.query), :indent]
o.operations do |ao|
result << :break << do_dump(ao)
end
result += [:dedent, :dedent ]
result
end
def dump_EppExpression o
result = ["epp"]
# result << ["parameters"] + o.parameters.collect {|p| do_dump(p) } if o.parameters.size() > 0
if o.body
result << do_dump(o.body)
else
result << []
end
result
end
def dump_ExportedQuery o
result = ["<<| |>>"]
result += dump_QueryExpression(o) unless is_nop?(o.expr)
result
end
def dump_VirtualQuery o
result = ["<| |>"]
result += dump_QueryExpression(o) unless is_nop?(o.expr)
result
end
def dump_QueryExpression o
[do_dump(o.expr)]
end
def dump_ComparisonExpression o
[o.operator.to_s, do_dump(o.left_expr), do_dump(o.right_expr)]
end
def dump_AndExpression o
["&&", do_dump(o.left_expr), do_dump(o.right_expr)]
end
def dump_OrExpression o
["||", do_dump(o.left_expr), do_dump(o.right_expr)]
end
def dump_InExpression o
["in", do_dump(o.left_expr), do_dump(o.right_expr)]
end
def dump_AssignmentExpression o
[o.operator.to_s, do_dump(o.left_expr), do_dump(o.right_expr)]
end
# Produces (name => expr) or (name +> expr)
def dump_AttributeOperation o
[o.attribute_name, o.operator, do_dump(o.value_expr)]
end
+ def dump_AttributesOperation o
+ ['* =>', do_dump(o.expr)]
+ end
+
def dump_LiteralList o
["[]"] + o.values.collect {|x| do_dump(x)}
end
def dump_LiteralHash o
["{}"] + o.entries.collect {|x| do_dump(x)}
end
def dump_KeyedEntry o
[do_dump(o.key), do_dump(o.value)]
end
def dump_MatchExpression o
[o.operator.to_s, do_dump(o.left_expr), do_dump(o.right_expr)]
end
def dump_LiteralString o
"'#{o.value}'"
end
def dump_LambdaExpression o
result = ["lambda"]
result << ["parameters"] + o.parameters.collect {|p| do_dump(p) } if o.parameters.size() > 0
if o.body
result << do_dump(o.body)
else
result << []
end
result
end
def dump_LiteralDefault o
":default"
end
def dump_LiteralUndef o
":undef"
end
def dump_LiteralRegularExpression o
"/#{o.value.source}/"
end
def dump_Nop o
":nop"
end
def dump_NamedAccessExpression o
[".", do_dump(o.left_expr), do_dump(o.right_expr)]
end
def dump_NilClass o
"()"
end
def dump_NotExpression o
['!', dump(o.expr)]
end
def dump_VariableExpression o
"$#{dump(o.expr)}"
end
# Interpolation (to string) shown as (str expr)
def dump_TextExpression o
["str", do_dump(o.expr)]
end
def dump_UnaryMinusExpression o
['-', do_dump(o.expr)]
end
+ def dump_UnfoldExpression o
+ ['unfold', do_dump(o.expr)]
+ end
+
def dump_BlockExpression o
- ["block"] + o.statements.collect {|x| do_dump(x) }
+ result = ["block", :indent]
+ o.statements.each {|x| result << :break; result << do_dump(x) }
+ result << :dedent << :break
+ result
end
# Interpolated strings are shown as (cat seg0 seg1 ... segN)
def dump_ConcatenatedString o
["cat"] + o.segments.collect {|x| do_dump(x)}
end
def dump_HeredocExpression(o)
result = ["@(#{o.syntax})", :indent, :break, do_dump(o.text_expr), :dedent, :break]
end
def dump_HostClassDefinition o
result = ["class", o.name]
result << ["inherits", o.parent_class] if o.parent_class
result << ["parameters"] + o.parameters.collect {|p| do_dump(p) } if o.parameters.size() > 0
if o.body
result << do_dump(o.body)
else
result << []
end
result
end
def dump_NodeDefinition o
result = ["node"]
result << ["matches"] + o.host_matches.collect {|m| do_dump(m) }
result << ["parent", do_dump(o.parent)] if o.parent
if o.body
result << do_dump(o.body)
else
result << []
end
result
end
def dump_NamedDefinition o
# the nil must be replaced with a string
result = [nil, o.name]
result << ["parameters"] + o.parameters.collect {|p| do_dump(p) } if o.parameters.size() > 0
if o.body
result << do_dump(o.body)
else
result << []
end
result
end
def dump_ResourceTypeDefinition o
result = dump_NamedDefinition(o)
result[0] = 'define'
result
end
def dump_ResourceOverrideExpression o
- result = ["override", do_dump(o.resources), :indent]
+ form = o.form == :regular ? '' : o.form.to_s + "-"
+ result = [form+"override", do_dump(o.resources), :indent]
o.operations.each do |p|
result << :break << do_dump(p)
end
result << :dedent
result
end
+ def dump_ReservedWord o
+ [ 'reserved', o.word ]
+ end
+
# Produces parameters as name, (= name value), (t type name), or (=t type name value);
# a captures-rest parameter has its name prefixed with '*'
def dump_Parameter o
- name_part = "#{o.name}"
- if o.value
+ name_prefix = o.captures_rest ? '*' : ''
+ name_part = "#{name_prefix}#{o.name}"
+ if o.value && o.type_expr
+ ["=t", do_dump(o.type_expr), name_part, do_dump(o.value)]
+ elsif o.value
["=", name_part, do_dump(o.value)]
+ elsif o.type_expr
+ ["t", do_dump(o.type_expr), name_part]
else
name_part
end
end
def dump_ParenthesizedExpression o
do_dump(o.expr)
end
# Hides that Program exists in the output (only its body is shown); the definitions are just
# references to contained classes, resource types, and nodes
def dump_Program(o)
dump(o.body)
end
def dump_IfExpression o
result = ["if", do_dump(o.test), :indent, :break,
["then", :indent, do_dump(o.then_expr), :dedent]]
result +=
[:break,
["else", :indent, do_dump(o.else_expr), :dedent],
:dedent] unless is_nop? o.else_expr
result
end
def dump_UnlessExpression o
result = ["unless", do_dump(o.test), :indent, :break,
["then", :indent, do_dump(o.then_expr), :dedent]]
result +=
[:break,
["else", :indent, do_dump(o.else_expr), :dedent],
:dedent] unless is_nop? o.else_expr
result
end
# Produces (invoke name args...) when not required to produce an rvalue, and
# (call name args ... ) otherwise.
#
def dump_CallNamedFunctionExpression o
result = [o.rval_required ? "call" : "invoke", do_dump(o.functor_expr)]
o.arguments.collect {|a| result << do_dump(a) }
result
end
# def dump_CallNamedFunctionExpression o
# result = [o.rval_required ? "call" : "invoke", do_dump(o.functor_expr)]
# o.arguments.collect {|a| result << do_dump(a) }
# result
# end
def dump_CallMethodExpression o
result = [o.rval_required ? "call-method" : "invoke-method", do_dump(o.functor_expr)]
o.arguments.collect {|a| result << do_dump(a) }
result << do_dump(o.lambda) if o.lambda
result
end
def dump_CaseExpression o
result = ["case", do_dump(o.test), :indent]
o.options.each do |s|
result << :break << do_dump(s)
end
result << :dedent
end
def dump_CaseOption o
result = ["when"]
result << o.values.collect {|x| do_dump(x) }
result << ["then", do_dump(o.then_expr) ]
result
end
def dump_RelationshipExpression o
[o.operator.to_s, do_dump(o.left_expr), do_dump(o.right_expr)]
end
def dump_RenderStringExpression o
["render-s", " '#{o.value}'"]
end
def dump_RenderExpression o
["render", do_dump(o.expr)]
end
def dump_ResourceBody o
result = [do_dump(o.title), :indent]
o.operations.each do |p|
result << :break << do_dump(p)
end
result << :dedent
result
end
def dump_ResourceDefaultsExpression o
- result = ["resource-defaults", do_dump(o.type_ref), :indent]
+ form = o.form == :regular ? '' : o.form.to_s + "-"
+ result = [form+"resource-defaults", do_dump(o.type_ref), :indent]
o.operations.each do |p|
result << :break << do_dump(p)
end
result << :dedent
result
end
def dump_ResourceExpression o
form = o.form == :regular ? '' : o.form.to_s + "-"
result = [form+"resource", do_dump(o.type_name), :indent]
o.bodies.each do |b|
result << :break << do_dump(b)
end
result << :dedent
result
end
def dump_SelectorExpression o
["?", do_dump(o.left_expr)] + o.selectors.collect {|x| do_dump(x) }
end
def dump_SelectorEntry o
[do_dump(o.matching_expr), "=>", do_dump(o.value_expr)]
end
def dump_SubLocatedExpression o
["sublocated", do_dump(o.expr)]
end
def dump_Object o
[o.class.to_s, o.to_s]
end
def is_nop? o
o.nil? || o.is_a?(Puppet::Pops::Model::Nop)
end
end
diff --git a/lib/puppet/pops/parser/egrammar.ra b/lib/puppet/pops/parser/egrammar.ra
index a1d639cc2..54b183a81 100644
--- a/lib/puppet/pops/parser/egrammar.ra
+++ b/lib/puppet/pops/parser/egrammar.ra
@@ -1,769 +1,757 @@
# vim: syntax=ruby
# Parser using the Pops model, expression based
class Puppet::Pops::Parser::Parser
token STRING DQPRE DQMID DQPOST
token WORD
token LBRACK RBRACK LBRACE RBRACE SYMBOL FARROW COMMA TRUE
token FALSE EQUALS APPENDS DELETES LESSEQUAL NOTEQUAL DOT COLON LLCOLLECT RRCOLLECT
token QMARK LPAREN RPAREN ISEQUAL GREATEREQUAL GREATERTHAN LESSTHAN
token IF ELSE
token DEFINE ELSIF VARIABLE CLASS INHERITS NODE BOOLEAN
token NAME SEMIC CASE DEFAULT AT ATAT LCOLLECT RCOLLECT CLASSREF
token NOT OR AND UNDEF PARROW PLUS MINUS TIMES DIV LSHIFT RSHIFT UMINUS
token MATCH NOMATCH REGEX IN_EDGE OUT_EDGE IN_EDGE_SUB OUT_EDGE_SUB
token IN UNLESS PIPE
token LAMBDA SELBRACE
token NUMBER
token HEREDOC SUBLOCATE
token RENDER_STRING RENDER_EXPR EPP_START EPP_END EPP_END_TRIM
token FUNCTION
+token PRIVATE ATTR TYPE
token LOW
prechigh
left HIGH
left SEMIC
left PIPE
left LPAREN
left RPAREN
- left AT ATAT
left DOT
- left CALL
nonassoc EPP_START
left LBRACK LISTSTART
left RBRACK
left QMARK
left LCOLLECT LLCOLLECT
right NOT
+ nonassoc SPLAT
nonassoc UMINUS
left IN
left MATCH NOMATCH
left TIMES DIV MODULO
left MINUS PLUS
left LSHIFT RSHIFT
left NOTEQUAL ISEQUAL
left GREATEREQUAL GREATERTHAN LESSTHAN LESSEQUAL
left AND
left OR
- right APPENDS DELETES EQUALS
left LBRACE
left SELBRACE
left RBRACE
+ right AT ATAT
+ right APPENDS DELETES EQUALS
left IN_EDGE OUT_EDGE IN_EDGE_SUB OUT_EDGE_SUB
- left TITLE_COLON
- left CASE_COLON
left FARROW
left COMMA
nonassoc RENDER_EXPR
nonassoc RENDER_STRING
left LOW
preclow
rule
-# Produces [Model::BlockExpression, Model::Expression, nil] depending on multiple statements, single statement or empty
+# Produces [Model::Program] with a body containing what was parsed
program
- : statements { result = create_program(Factory.block_or_expression(*val[0])) }
+ : statements { result = create_program(Factory.block_or_expression(*val[0])) }
| epp_expression { result = create_program(Factory.block_or_expression(*val[0])) }
- | nil
+ | { result = create_empty_program() }
# Produces a semantic model (non validated, but semantically adjusted).
statements
: syntactic_statements { result = transform_calls(val[0]) }
-# Change may have issues with nil; i.e. program is a sequence of nils/nops
-# Simplified from original which had validation for top level constructs - see statement rule
+# Collects a sequence of elements into a list that the statements rule can transform
+# (needed because the language supports function calls without parentheses around arguments).
# Produces Array<Model::Expression>
+#
syntactic_statements
: syntactic_statement { result = [val[0]]}
| syntactic_statements SEMIC syntactic_statement { result = val[0].push val[2] }
| syntactic_statements syntactic_statement { result = val[0].push val[1] }
# Produces a single expression or Array of expressions
+# This exists to handle multiple arguments to a non parenthesized function call. If e is an expression,
+# then a program can consist of e [e,e,e] where the first may be the name of a function to call.
+#
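+# e.g. `notice 'a', 'b'` reaches this rule as the name `notice` followed by its comma
+# separated arguments; transform_calls (invoked from the statements rule above) is then
+# expected to fold such a sequence into a single function call.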
syntactic_statement
- : any_expression { result = val[0] }
- | syntactic_statement COMMA any_expression { result = aryfy(val[0]).push val[2] }
+ : assignment =LOW { result = val[0] }
+ | syntactic_statement COMMA assignment =LOW { result = aryfy(val[0]).push val[2] }
+
+# Assignment (is right recursive since assignment is right associative)
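+# e.g. `$a = $b = 10` parses as `$a = ($b = 10)`.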
+assignment
+ : relationship =LOW
+ | relationship EQUALS assignment { result = val[0].set(val[2]) ; loc result, val[1] }
+ | relationship APPENDS assignment { result = val[0].plus_set(val[2]) ; loc result, val[1] }
+ | relationship DELETES assignment { result = val[0].minus_set(val[2]); loc result, val[1] }
+
+assignments
+ : assignment { result = [val[0]] }
+ | assignments COMMA assignment { result = val[0].push(val[2]) }
+
+relationship
+ : resource =LOW
+ | relationship IN_EDGE resource { result = val[0].relop(val[1][:value], val[2]); loc result, val[1] }
+ | relationship IN_EDGE_SUB resource { result = val[0].relop(val[1][:value], val[2]); loc result, val[1] }
+ | relationship OUT_EDGE resource { result = val[0].relop(val[1][:value], val[2]); loc result, val[1] }
+ | relationship OUT_EDGE_SUB resource { result = val[0].relop(val[1][:value], val[2]); loc result, val[1] }
+
+#-- RESOURCE
+#
+resource
+ : expression = LOW
-any_expression
- : relationship_expression
+ #---VIRTUAL
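+ # e.g. `@file { '/tmp/x': ensure => file }` is expected to produce a ResourceExpression
+ # with form == :virtual (and `@@...` an exported one, see below).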
+ | AT resource {
+ result = val[1]
+ unless Factory.set_resource_form(result, :virtual)
+ # This is equivalent to a syntax error - additional semantic restrictions apply
+ error val[0], "Virtual (@) can only be applied to a Resource Expression"
+ end
+ # relocate the result
+ loc result, val[0], val[1]
+ }
-relationship_expression
- : resource_expression =LOW { result = val[0] }
- | relationship_expression IN_EDGE relationship_expression { result = val[0].relop(val[1][:value], val[2]); loc result, val[1] }
- | relationship_expression IN_EDGE_SUB relationship_expression { result = val[0].relop(val[1][:value], val[2]); loc result, val[1] }
- | relationship_expression OUT_EDGE relationship_expression { result = val[0].relop(val[1][:value], val[2]); loc result, val[1] }
- | relationship_expression OUT_EDGE_SUB relationship_expression { result = val[0].relop(val[1][:value], val[2]); loc result, val[1] }
+ #---EXPORTED
+ | ATAT resource {
+ result = val[1]
+ unless Factory.set_resource_form(result, :exported)
+ # This is equivalent to a syntax error - additional semantic restrictions apply
+ error val[0], "Exported (@@) can only be applied to a Resource Expression"
+ end
+ # relocate the result
+ loc result, val[0], val[1]
+ }
-#---EXPRESSION
+ #---RESOURCE TITLED 3x and 4x
+ | resource LBRACE expression COLON attribute_operations additional_resource_bodies RBRACE {
+ bodies = [Factory.RESOURCE_BODY(val[2], val[4])] + val[5]
+ result = Factory.RESOURCE(val[0], bodies)
+ loc result, val[0], val[6]
+ }
+
+ #---CLASS RESOURCE
+ | CLASS LBRACE resource_bodies endsemi RBRACE {
+ result = Factory.RESOURCE(Factory.fqn(token_text(val[0])), val[2])
+ loc result, val[0], val[4]
+ }
+
+ # --RESOURCE 3X Expression
+ # Handles both 3x overrides and defaults (i.e. single resource_body without title colon)
+ # Slated for possible deprecation since it requires transformation and mixes static/evaluation checks
+ #
+ | resource LBRACE attribute_operations endcomma RBRACE {
+ result = case Factory.resource_shape(val[0])
+ when :resource, :class
+ # This catches deprecated syntax.
+ # If the attribute operations do not include +>, then the found expression
+ # is actually a LEFT followed by LITERAL_HASH
+ #
+ unless tmp = transform_resource_wo_title(val[0], val[2])
+ error val[1], "Syntax error resource body without title or hash with +>"
+ end
+ tmp
+ when :defaults
+ Factory.RESOURCE_DEFAULTS(val[0], val[2])
+ when :override
+ # This was only done for override in original - TODO should it be here at all
+ Factory.RESOURCE_OVERRIDE(val[0], val[2])
+ else
+ error val[0], "Expression is not valid as a resource, resource-default, or resource-override"
+ end
+ loc result, val[0], val[4]
+ }
+
+ resource_body
+ : expression COLON attribute_operations endcomma { result = Factory.RESOURCE_BODY(val[0], val[2]) }
+
+ resource_bodies
+ : resource_body =HIGH { result = [val[0]] }
+ | resource_bodies SEMIC resource_body =HIGH { result = val[0].push val[2] }
+
+ # This is a rule for the intermediate state where RACC has seen enough tokens to understand that
+ # what is expressed is a Resource Expression; it now has to get to the finishing line
+ #
+ additional_resource_bodies
+ : endcomma { result = [] }
+ | endcomma SEMIC { result = [] }
+ | endcomma SEMIC resource_bodies endsemi { result = val[2] }
+
+#-- EXPRESSION
#
-# Produces Model::Expression
expression
- : higher_precedence
+ : primary_expression
+ | call_function_expression
| expression LBRACK expressions RBRACK =LBRACK { result = val[0][*val[2]] ; loc result, val[0], val[3] }
| expression IN expression { result = val[0].in val[2] ; loc result, val[1] }
| expression MATCH expression { result = val[0] =~ val[2] ; loc result, val[1] }
| expression NOMATCH expression { result = val[0].mne val[2] ; loc result, val[1] }
| expression PLUS expression { result = val[0] + val[2] ; loc result, val[1] }
| expression MINUS expression { result = val[0] - val[2] ; loc result, val[1] }
| expression DIV expression { result = val[0] / val[2] ; loc result, val[1] }
| expression TIMES expression { result = val[0] * val[2] ; loc result, val[1] }
| expression MODULO expression { result = val[0] % val[2] ; loc result, val[1] }
| expression LSHIFT expression { result = val[0] << val[2] ; loc result, val[1] }
| expression RSHIFT expression { result = val[0] >> val[2] ; loc result, val[1] }
| MINUS expression =UMINUS { result = val[1].minus() ; loc result, val[0] }
+ | TIMES expression =SPLAT { result = val[1].unfold() ; loc result, val[0] }
| expression NOTEQUAL expression { result = val[0].ne val[2] ; loc result, val[1] }
| expression ISEQUAL expression { result = val[0] == val[2] ; loc result, val[1] }
| expression GREATERTHAN expression { result = val[0] > val[2] ; loc result, val[1] }
| expression GREATEREQUAL expression { result = val[0] >= val[2] ; loc result, val[1] }
| expression LESSTHAN expression { result = val[0] < val[2] ; loc result, val[1] }
| expression LESSEQUAL expression { result = val[0] <= val[2] ; loc result, val[1] }
| NOT expression { result = val[1].not ; loc result, val[0] }
| expression AND expression { result = val[0].and val[2] ; loc result, val[1] }
| expression OR expression { result = val[0].or val[2] ; loc result, val[1] }
- | expression EQUALS expression { result = val[0].set(val[2]) ; loc result, val[1] }
- | expression APPENDS expression { result = val[0].plus_set(val[2]) ; loc result, val[1] }
- | expression DELETES expression { result = val[0].minus_set(val[2]); loc result, val[1] }
| expression QMARK selector_entries { result = val[0].select(*val[2]) ; loc result, val[0] }
- | LPAREN expression RPAREN { result = val[1].paren() ; loc result, val[0] }
+ | LPAREN assignment RPAREN { result = val[1].paren() ; loc result, val[0] }
+
#---EXPRESSIONS
-# (e.g. argument list)
+# (i.e. "argument list")
#
# This expression list can not contain function calls without parentheses around arguments
# Produces Array<Model::Expression>
+#
expressions
: expression { result = [val[0]] }
| expressions COMMA expression { result = val[0].push(val[2]) }
-# These go through a chain of left recursion, ending with primary_expression
-higher_precedence
- : call_function_expression
-
primary_expression
- : literal_expression
- | variable
+ : variable
| call_method_with_lambda_expression
| collection_expression
| case_expression
| if_expression
| unless_expression
| definition_expression
| hostclass_expression
| node_definition_expression
| epp_render_expression
- | function_definition
-
-# Allways have the same value
-literal_expression
- : array
- | boolean
- | default
+ | reserved_word
+ | array
| hash
| regex
- | text_or_name
- | number
+ | quotedtext
| type
- | undef
+ | NUMBER { result = Factory.NUMBER(val[0][:value]) ; loc result, val[0] }
+ | BOOLEAN { result = Factory.literal(val[0][:value]) ; loc result, val[0] }
+ | DEFAULT { result = Factory.literal(:default) ; loc result, val[0] }
+ | UNDEF { result = Factory.literal(:undef) ; loc result, val[0] }
+ | NAME { result = Factory.QNAME_OR_NUMBER(val[0][:value]) ; loc result, val[0] }
-text_or_name
- : name { result = val[0] }
- | quotedtext { result = val[0] }
#---CALL FUNCTION
#
# Produces Model::CallNamedFunction
call_function_expression
- : primary_expression LPAREN expressions endcomma RPAREN {
+ : expression LPAREN assignments endcomma RPAREN {
result = Factory.CALL_NAMED(val[0], true, val[2])
loc result, val[0], val[4]
}
- | primary_expression LPAREN RPAREN {
+ | expression LPAREN RPAREN {
result = Factory.CALL_NAMED(val[0], true, [])
loc result, val[0], val[2]
}
- | primary_expression LPAREN expressions endcomma RPAREN lambda {
+ | expression LPAREN assignments endcomma RPAREN lambda {
result = Factory.CALL_NAMED(val[0], true, val[2])
loc result, val[0], val[4]
result.lambda = val[5]
}
- | primary_expression LPAREN RPAREN lambda {
+ | expression LPAREN RPAREN lambda {
result = Factory.CALL_NAMED(val[0], true, [])
loc result, val[0], val[2]
result.lambda = val[3]
}
- | primary_expression = LOW { result = val[0] }
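# Illustrative only (assumed examples): the call rules above accept parenthesized calls with
# or without a lambda, e.g.
#   sprintf("%d", $count)
#   map([1, 2, 3]) |$x| { $x * 2 }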
#---CALL METHOD
#
call_method_with_lambda_expression
: call_method_expression =LOW { result = val[0] }
- | call_method_expression lambda { result = val[0]; val[0].lambda = val[1] }
+ | call_method_expression lambda { result = val[0]; val[0].lambda = val[1] }
call_method_expression
- : named_access LPAREN expressions RPAREN { result = Factory.CALL_METHOD(val[0], val[2]); loc result, val[1], val[3] }
+ : named_access LPAREN assignments RPAREN { result = Factory.CALL_METHOD(val[0], val[2]); loc result, val[1], val[3] }
| named_access LPAREN RPAREN { result = Factory.CALL_METHOD(val[0], []); loc result, val[1], val[3] }
| named_access =LOW { result = Factory.CALL_METHOD(val[0], []); loc result, val[0] }
- # TODO: It may be of value to access named elements of types too
named_access
: expression DOT NAME {
result = val[0].dot(Factory.fqn(val[2][:value]))
loc result, val[1], val[2]
}
#---LAMBDA
#
-# This is a temporary switch while experimenting with concrete syntax
-# One should be picked for inclusion in puppet.
-
-# Lambda with parameters to the left of the body
lambda
: lambda_parameter_list lambda_rest {
- result = Factory.LAMBDA(val[0], val[1])
-# loc result, val[1] # TODO
+ result = Factory.LAMBDA(val[0][:value], val[1][:value])
+ loc result, val[0][:start], val[1][:end]
}
lambda_rest
- : LBRACE statements RBRACE { result = val[1] }
- | LBRACE RBRACE { result = nil }
+ : LBRACE statements RBRACE { result = {:end => val[2], :value =>val[1] } }
+ | LBRACE RBRACE { result = {:end => val[1], :value => nil } }
+
-# Produces Array<Model::Parameter>
lambda_parameter_list
- : PIPE PIPE { result = [] }
- | PIPE parameters endcomma PIPE { result = val[1] }
+ : PIPE PIPE { result = {:start => val[0], :value => [] } }
+ | PIPE parameters endcomma PIPE { result = {:start => val[0], :value => val[1] } }
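# Illustrative only (assumed example): a lambda as parsed above, attached to a method call, e.g.
#   $packages.each |$pkg| { package { $pkg: ensure => installed } }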
#---CONDITIONALS
-#
#--IF
#
-# Produces Model::IfExpression
if_expression
: IF if_part {
result = val[1]
loc(result, val[0], val[1])
}
# Produces Model::IfExpression
if_part
: expression LBRACE statements RBRACE else {
result = Factory.IF(val[0], Factory.block_or_expression(*val[2]), val[4])
loc(result, val[0], (val[4] ? val[4] : val[3]))
}
| expression LBRACE RBRACE else {
result = Factory.IF(val[0], nil, val[3])
loc(result, val[0], (val[3] ? val[3] : val[2]))
}
# Produces [Model::Expression, nil] - nil if there is no else or elsif part
else
: # nothing
| ELSIF if_part {
result = val[1]
loc(result, val[0], val[1])
}
| ELSE LBRACE statements RBRACE {
result = Factory.block_or_expression(*val[2])
loc result, val[0], val[3]
}
| ELSE LBRACE RBRACE {
result = nil # don't think a nop is needed here either
}
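# Illustrative only (assumed example): the if/elsif/else rules above accept, e.g.
#   if $memory > 2048 { $tuning = 'large' }
#   elsif $memory > 1024 { $tuning = 'medium' }
#   else { $tuning = 'small' }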
#--UNLESS
#
-# Changed from Puppet 3x where there is no else part on unless
-#
unless_expression
: UNLESS expression LBRACE statements RBRACE unless_else {
result = Factory.UNLESS(val[1], Factory.block_or_expression(*val[3]), val[5])
loc result, val[0], val[4]
}
| UNLESS expression LBRACE RBRACE unless_else {
result = Factory.UNLESS(val[1], nil, nil)
loc result, val[0], val[4]
}
# Different from the else part of if: "elsif" is not supported here, but 'else' is
#
# Produces [Model::Expression, nil] - nil if there is no else or elsif part
unless_else
: # nothing
| ELSE LBRACE statements RBRACE {
result = Factory.block_or_expression(*val[2])
loc result, val[0], val[3]
}
| ELSE LBRACE RBRACE {
result = nil # don't think a nop is needed here either
}
#--- CASE EXPRESSION
#
-# Produces Model::CaseExpression
case_expression
: CASE expression LBRACE case_options RBRACE {
result = Factory.CASE(val[1], *val[3])
loc result, val[0], val[4]
}
# Produces Array<Model::CaseOption>
case_options
: case_option { result = [val[0]] }
- | case_options case_option { result = val[0].push val[1] }
+ | case_options case_option { result = val[0].push val[1] }
# Produces Model::CaseOption (aka When)
case_option
- : expressions case_colon LBRACE statements RBRACE {
- result = Factory.WHEN(val[0], val[3])
- loc result, val[1], val[4]
- }
- | expressions case_colon LBRACE RBRACE = LOW {
- result = Factory.WHEN(val[0], nil)
- loc result, val[1], val[3]
+ : expressions COLON LBRACE options_statements RBRACE {
+ result = Factory.WHEN(val[0], val[3]); loc result, val[1], val[4]
}
- case_colon: COLON =CASE_COLON { result = val[0] }
+ options_statements
+ : nil
+ | statements
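# Illustrative only (assumed example): a case expression as parsed above; an option body may
# be empty, e.g.
#   case $osfamily {
#     'RedHat', 'Debian': { include profile::base }
#     default:            { }
#   }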
# This special construct is required or racc will produce the wrong result when the selector entry
# LHS is generalized to any expression (LBRACE looks like a hash). Thus it is not possible to write
# a selector with a single entry where the entry LHS is a hash.
# The SELBRACE token is a LBRACE that follows a QMARK, and this is produced by the lexer with a lookback
# Produces Array<Model::SelectorEntry>
#
selector_entries
: selector_entry
| SELBRACE selector_entry_list endcomma RBRACE {
result = val[1]
}
# Produces Array<Model::SelectorEntry>
selector_entry_list
: selector_entry { result = [val[0]] }
| selector_entry_list COMMA selector_entry { result = val[0].push val[2] }
# Produces a Model::SelectorEntry
# This FARROW wins over FARROW in Hash
selector_entry
: expression FARROW expression { result = Factory.MAP(val[0], val[2]) ; loc result, val[1] }
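# Illustrative only (assumed example): a selector as parsed above, e.g.
#   $pkg = $osfamily ? { 'Debian' => 'apache2', default => 'httpd' }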
-#---RESOURCE
-#
-# Produces [Model::ResourceExpression, Model::ResourceDefaultsExpression]
-
-# The resource expression parses a generalized syntax and then selects the correct
-# resulting model based on the combinatoin of the LHS and what follows.
-# It also handled exported and virtual resources, and the class case
-#
-resource_expression
- : expression =LOW {
- result = val[0]
- }
- | at expression LBRACE resourceinstances endsemi RBRACE {
- result = case Factory.resource_shape(val[1])
- when :resource, :class
- tmp = Factory.RESOURCE(Factory.fqn(token_text(val[1])), val[3])
- tmp.form = val[0]
- tmp
- when :defaults
- error val[1], "A resource default can not be virtual or exported"
- when :override
- error val[1], "A resource override can not be virtual or exported"
- else
- error val[1], "Expression is not valid as a resource, resource-default, or resource-override"
- end
- loc result, val[1], val[4]
- }
- | at expression LBRACE attribute_operations endcomma RBRACE {
- result = case Factory.resource_shape(val[1])
- when :resource, :class, :defaults, :override
- error val[1], "Defaults are not virtualizable"
- else
- error val[1], "Expression is not valid as a resource, resource-default, or resource-override"
- end
- }
- | expression LBRACE resourceinstances endsemi RBRACE {
- result = case Factory.resource_shape(val[0])
- when :resource, :class
- Factory.RESOURCE(Factory.fqn(token_text(val[0])), val[2])
- when :defaults
- error val[1], "A resource default can not specify a resource name"
- when :override
- error val[1], "A resource override does not allow override of name of resource"
- else
- error val[1], "Expression is not valid as a resource, resource-default, or resource-override"
- end
- loc result, val[0], val[4]
- }
- | expression LBRACE attribute_operations endcomma RBRACE {
- result = case Factory.resource_shape(val[0])
- when :resource, :class
- # This catches deprecated syntax.
- # If the attribute operations does not include +>, then the found expression
- # is actually a LEFT followed by LITERAL_HASH
- #
- unless tmp = transform_resource_wo_title(val[0], val[2])
- error val[1], "Syntax error resource body without title or hash with +>"
- end
- tmp
- when :defaults
- Factory.RESOURCE_DEFAULTS(val[0], val[2])
- when :override
- # This was only done for override in original - TODO shuld it be here at all
- Factory.RESOURCE_OVERRIDE(val[0], val[2])
- else
- error val[0], "Expression is not valid as a resource, resource-default, or resource-override"
- end
- loc result, val[0], val[4]
- }
- | at CLASS LBRACE resourceinstances endsemi RBRACE {
- result = Factory.RESOURCE(Factory.fqn(token_text(val[1])), val[3])
- result.form = val[0]
- loc result, val[1], val[5]
- }
- | CLASS LBRACE resourceinstances endsemi RBRACE {
- result = Factory.RESOURCE(Factory.fqn(token_text(val[0])), val[2])
- loc result, val[0], val[4]
- }
-
- resourceinst
- : expression title_colon attribute_operations endcomma { result = Factory.RESOURCE_BODY(val[0], val[2]) }
-
- title_colon : COLON =TITLE_COLON { result = val[0] }
-
- resourceinstances
- : resourceinst { result = [val[0]] }
- | resourceinstances SEMIC resourceinst { result = val[0].push val[2] }
-
- # Produces Symbol corresponding to resource form
- #
- at
- : AT { result = :virtual }
- | AT AT { result = :exported }
- | ATAT { result = :exported }
-
#---COLLECTION
#
# A Collection is a predicate applied to a set of objects with an implied context (used variables are
# attributes of the object).
-# i.e. this is equivalent for source.select(QUERY).apply(ATTRIBUTE_OPERATIONS)
-#
-# Produces Model::CollectExpression
+# i.e. this is equivalent to source.select(QUERY).apply(ATTRIBUTE_OPERATIONS)
#
collection_expression
: expression collect_query LBRACE attribute_operations endcomma RBRACE {
result = Factory.COLLECT(val[0], val[1], val[3])
loc result, val[0], val[5]
}
| expression collect_query =LOW {
result = Factory.COLLECT(val[0], val[1], [])
loc result, val[0], val[1]
}
collect_query
: LCOLLECT optional_query RCOLLECT { result = Factory.VIRTUAL_QUERY(val[1]) ; loc result, val[0], val[2] }
| LLCOLLECT optional_query RRCOLLECT { result = Factory.EXPORTED_QUERY(val[1]) ; loc result, val[0], val[2] }
optional_query
: nil
| expression
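# Illustrative only (assumed examples): virtual and exported collectors as parsed above, e.g.
#   User <| groups == 'admin' |>
#   Nagios_service <<| tag == 'web' |>> { notify => Service['nagios'] }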
-#---ATTRIBUTE OPERATIONS
-#
-# (Not an expression)
-#
-# Produces Array<Model::AttributeOperation>
+#---ATTRIBUTE OPERATIONS (Not an expression)
#
attribute_operations
: { result = [] }
| attribute_operation { result = [val[0]] }
| attribute_operations COMMA attribute_operation { result = val[0].push(val[2]) }
# Produces String
# QUESTION: Why is BOOLEAN valid as an attribute name?
#
attribute_name
: NAME
| keyword
- | BOOLEAN
+# | BOOLEAN
# In this version, illegal combinations are validated instead of producing syntax errors
# (Can give nicer error message "+> is not applicable to...")
# Produces Model::AttributeOperation
#
attribute_operation
: attribute_name FARROW expression {
result = Factory.ATTRIBUTE_OP(val[0][:value], :'=>', val[2])
loc result, val[0], val[2]
}
| attribute_name PARROW expression {
result = Factory.ATTRIBUTE_OP(val[0][:value], :'+>', val[2])
loc result, val[0], val[2]
}
+ | TIMES FARROW expression {
+ result = Factory.ATTRIBUTES_OP(val[2]) ; loc result, val[0], val[2]
+ }
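# Illustrative only (assumed example): attribute operations as parsed above, including the
# new splat form that expands a hash of attribute/value pairs, e.g.
#   file { '/etc/app.conf':
#     ensure => file,
#     *      => $default_file_attributes,
#   }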
#---DEFINE
#
# Produces Model::Definition
#
definition_expression
: DEFINE classname parameter_list LBRACE opt_statements RBRACE {
result = add_definition(Factory.DEFINITION(classname(val[1][:value]), val[2], val[4]))
loc result, val[0], val[5]
# The new lexer does not keep track of this; it is done in validation
if @lexer.respond_to?(:'indefine=')
@lexer.indefine = false
end
}
#---HOSTCLASS
#
# Produces Model::HostClassDefinition
#
hostclass_expression
: CLASS stacked_classname parameter_list classparent LBRACE opt_statements RBRACE {
# Remove this class' name from the namestack as all nested classes have been parsed
namepop
result = add_definition(Factory.HOSTCLASS(classname(val[1][:value]), val[2], token_text(val[3]), val[5]))
loc result, val[0], val[6]
}
# Record the classname so that nested classes get a fully qualified name at parse-time
# This is a separate rule since racc does not support intermediate actions.
#
stacked_classname
: classname { namestack(val[0][:value]) ; result = val[0] }
opt_statements
: statements
| nil
# Produces String, name or nil result
classparent
: nil
| INHERITS classnameordefault { result = val[1] }
# Produces String (this construct allows a class to be named "default" and to be referenced as
# the parent class).
# TODO: Investigate the validity
# Produces a String (classname), or a token (DEFAULT).
#
classnameordefault
: classname
| DEFAULT
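# Illustrative only (assumed examples): definitions and classes as parsed by the rules above, e.g.
#   define mymodule::vhost($port = 80) { }
#   class mymodule::server($version) inherits mymodule::params { }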
#---NODE
#
# Produces Model::NodeDefinition
#
node_definition_expression
- : NODE hostnames nodeparent LBRACE statements RBRACE {
- result = add_definition(Factory.NODE(val[1], val[2], val[4]))
- loc result, val[0], val[5]
+ : NODE hostnames endcomma nodeparent LBRACE statements RBRACE {
+ result = add_definition(Factory.NODE(val[1], val[3], val[5]))
+ loc result, val[0], val[6]
}
- | NODE hostnames nodeparent LBRACE RBRACE {
- result = add_definition(Factory.NODE(val[1], val[2], nil))
- loc result, val[0], val[4]
+ | NODE hostnames endcomma nodeparent LBRACE RBRACE {
+ result = add_definition(Factory.NODE(val[1], val[3], nil))
+ loc result, val[0], val[5]
}
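# Illustrative only (assumed example): a node definition as parsed above; a trailing comma
# after the hostname list is now accepted, e.g.
#   node 'web01.example.com', 'web02.example.com', { include profile::web }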
# Hostnames is not a list of names; it is a list of name matchers (including a Regexp).
# (The old implementation had a special "Hostname" object with some minimal validation)
#
# Produces Array<Model::LiteralExpression>
#
hostnames
: hostname { result = [result] }
| hostnames COMMA hostname { result = val[0].push(val[2]) }
# Produces a LiteralExpression (string, :default, or regexp)
# Strings with interpolation are validated to give a better error message
hostname
- : dotted_name { result = val[0] }
- | quotedtext { result = val[0] }
+ : dotted_name
+ | quotedtext
| DEFAULT { result = Factory.literal(:default); loc result, val[0] }
| regex
dotted_name
- : NAME { result = Factory.literal(val[0][:value]); loc result, val[0] }
- | dotted_name DOT NAME { result = Factory.concat(val[0], '.', val[2][:value]); loc result, val[0], val[2] }
+ : name_or_number { result = Factory.literal(val[0][:value]); loc result, val[0] }
+ | dotted_name DOT name_or_number { result = Factory.concat(val[0], '.', val[2][:value]); loc result, val[0], val[2] }
+
+ name_or_number
+ : NAME
+ | NUMBER
# Produces Expression, since hostname is an Expression
nodeparent
: nil
| INHERITS hostname { result = val[1] }
#---FUNCTION DEFINITION
#
-function_definition
- : FUNCTION { result = Factory.QNAME(val[0][:value]) ; loc result, val[0] }
+#function_definition
# For now the function word will just be reserved; in the future it will
# produce a function definition
# FUNCTION classname parameter_list LBRACE opt_statements RBRACE {
# result = add_definition(Factory.FUNCTION(val[1][:value], val[2], val[4]))
# loc result, val[0], val[5]
# }
#---NAMES AND PARAMETERS COMMON TO SEVERAL RULES
# Produces String
# TODO: The error that "class" is not a valid classname is bad - classname rule is also used for other things
classname
- : NAME { result = val[0] }
+ : NAME
| CLASS { error val[0], "'class' is not a valid classname" }
# Produces Array<Model::Parameter>
parameter_list
: nil { result = [] }
| LPAREN RPAREN { result = [] }
| LPAREN parameters endcomma RPAREN { result = val[1] }
# Produces Array<Model::Parameter>
parameters
: parameter { result = [val[0]] }
| parameters COMMA parameter { result = val[0].push(val[2]) }
# Produces Model::Parameter
parameter
+ : untyped_parameter
+ | typed_parameter
+
+untyped_parameter
+ : regular_parameter
+ | splat_parameter
+
+regular_parameter
: VARIABLE EQUALS expression { result = Factory.PARAM(val[0][:value], val[2]) ; loc result, val[0] }
| VARIABLE { result = Factory.PARAM(val[0][:value]); loc result, val[0] }
-#--RESTRICTED EXPRESSIONS
-# i.e. where one could have expected an expression, but the set is limited
+splat_parameter
+ : TIMES regular_parameter { result = val[1]; val[1].captures_rest() }
-## What is allowed RHS of match operators (see expression)
-#match_rvalue
-# : regex
-# | text_or_name
+typed_parameter
+ : parameter_type untyped_parameter { val[1].type_expr(val[0]) ; result = val[1] }
+
+parameter_type
+ : type { result = val[0] }
+ | type LBRACK expressions RBRACK { result = val[0][*val[2]] ; loc result, val[0], val[3] }
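# Illustrative only (assumed examples): parameter forms accepted by the rules above, e.g.
#   Integer[0, 65535] $port = 8080    # typed parameter with a default
#   *$remaining                       # splat parameter capturing remaining arguments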
#--VARIABLE
#
variable
: VARIABLE { result = Factory.fqn(val[0][:value]).var ; loc result, val[0] }
+#---RESERVED WORDS
+#
+reserved_word
+ : FUNCTION { result = Factory.RESERVED(val[0][:value]) ; loc result, val[0] }
+ | PRIVATE { result = Factory.RESERVED(val[0][:value]) ; loc result, val[0] }
+ | TYPE { result = Factory.RESERVED(val[0][:value]) ; loc result, val[0] }
+ | ATTR { result = Factory.RESERVED(val[0][:value]) ; loc result, val[0] }
+
#---LITERALS (dynamic and static)
#
array
- : LBRACK expressions RBRACK { result = Factory.LIST(val[1]); loc result, val[0], val[2] }
- | LBRACK expressions COMMA RBRACK { result = Factory.LIST(val[1]); loc result, val[0], val[3] }
- | LBRACK RBRACK { result = Factory.literal([]) ; loc result, val[0] }
- | LISTSTART expressions RBRACK { result = Factory.LIST(val[1]); loc result, val[0], val[2] }
- | LISTSTART expressions COMMA RBRACK { result = Factory.LIST(val[1]); loc result, val[0], val[3] }
- | LISTSTART RBRACK { result = Factory.literal([]) ; loc result, val[0] }
+ : LISTSTART assignments endcomma RBRACK { result = Factory.LIST(val[1]); loc result, val[0], val[3] }
+ | LISTSTART RBRACK { result = Factory.literal([]) ; loc result, val[0] }
+ | LBRACK assignments endcomma RBRACK { result = Factory.LIST(val[1]); loc result, val[0], val[3] }
+ | LBRACK RBRACK { result = Factory.literal([]) ; loc result, val[0] }
hash
: LBRACE hashpairs RBRACE { result = Factory.HASH(val[1]); loc result, val[0], val[2] }
| LBRACE hashpairs COMMA RBRACE { result = Factory.HASH(val[1]); loc result, val[0], val[3] }
| LBRACE RBRACE { result = Factory.literal({}) ; loc result, val[0], val[3] }
hashpairs
: hashpair { result = [val[0]] }
| hashpairs COMMA hashpair { result = val[0].push val[2] }
hashpair
- : expression FARROW expression { result = Factory.KEY_ENTRY(val[0], val[2]); loc result, val[1] }
+ : assignment FARROW assignment { result = Factory.KEY_ENTRY(val[0], val[2]); loc result, val[1] }
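# Illustrative only (assumed examples): array and hash literals as parsed above; a trailing
# comma is accepted in arrays, e.g.
#   $ports  = [80, 443,]
#   $owners = { 'app' => 'deploy', 'logs' => 'syslog' }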
quotedtext
: string
| dq_string
| heredoc
string
: STRING { result = Factory.literal(val[0][:value]) ; loc result, val[0] }
| WORD { result = Factory.literal(val[0][:value]) ; loc result, val[0] }
dq_string : dqpre dqrval { result = Factory.string(val[0], *val[1]) ; loc result, val[0], val[1][-1] }
dqpre : DQPRE { result = Factory.literal(val[0][:value]); loc result, val[0] }
dqpost : DQPOST { result = Factory.literal(val[0][:value]); loc result, val[0] }
dqmid : DQMID { result = Factory.literal(val[0][:value]); loc result, val[0] }
dqrval : text_expression dqtail { result = [val[0]] + val[1] }
-text_expression : expression { result = Factory.TEXT(val[0]) }
+text_expression : assignment { result = Factory.TEXT(val[0]) }
dqtail
: dqpost { result = [val[0]] }
| dqmid dqrval { result = [val[0]] + val[1] }
heredoc
: HEREDOC sublocated_text { result = Factory.HEREDOC(val[0][:value], val[1]); loc result, val[0] }
sublocated_text
: SUBLOCATE string { result = Factory.SUBLOCATE(val[0], val[1]); loc result, val[0] }
| SUBLOCATE dq_string { result = Factory.SUBLOCATE(val[0], val[1]); loc result, val[0] }
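# Illustrative only (assumed example): a heredoc with interpolation and a margin marker as
# parsed above, e.g.
#   $motd = @("EOT")
#       Welcome to ${::hostname}
#       | EOT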
epp_expression
- : EPP_START epp_parameters_list statements { result = Factory.EPP(val[1], val[2]); loc result, val[0] }
+ : EPP_START epp_parameters_list optional_statements { result = Factory.EPP(val[1], val[2]); loc result, val[0] }
+
+optional_statements
+ :
+ | statements
epp_parameters_list
: =LOW{ result = nil }
| PIPE PIPE { result = [] }
| PIPE parameters endcomma PIPE { result = val[1] }
epp_render_expression
: RENDER_STRING { result = Factory.RENDER_STRING(val[0][:value]); loc result, val[0] }
| RENDER_EXPR expression epp_end { result = Factory.RENDER_EXPR(val[1]); loc result, val[0], val[2] }
| RENDER_EXPR LBRACE statements RBRACE epp_end { result = Factory.RENDER_EXPR(Factory.block_or_expression(*val[2])); loc result, val[0], val[4] }
epp_end
: EPP_END
| EPP_END_TRIM
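# Illustrative only (assumed example): an EPP template as parsed by the epp rules above, e.g.
#   <%- | $listen_port = 8080 | -%>
#   Listen <%= $listen_port %>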
-number : NUMBER { result = Factory.NUMBER(val[0][:value]) ; loc result, val[0] }
-name : NAME { result = Factory.QNAME_OR_NUMBER(val[0][:value]) ; loc result, val[0] }
type : CLASSREF { result = Factory.QREF(val[0][:value]) ; loc result, val[0] }
-undef : UNDEF { result = Factory.literal(:undef); loc result, val[0] }
-default : DEFAULT { result = Factory.literal(:default); loc result, val[0] }
-
- # Assumes lexer produces a Boolean value for booleans, or this will go wrong and produce a literal string
- # with the text 'true'.
- #TODO: could be changed to a specific boolean literal factory method to prevent this possible glitch.
-boolean : BOOLEAN { result = Factory.literal(val[0][:value]) ; loc result, val[0] }
regex
: REGEX { result = Factory.literal(val[0][:value]); loc result, val[0] }
#---MARKERS, SPECIAL TOKENS, SYNTACTIC SUGAR, etc.
endcomma
: #
| COMMA { result = nil }
endsemi
: #
| SEMIC
keyword
: AND
| CASE
| CLASS
| DEFAULT
| DEFINE
| ELSE
| ELSIF
| IF
| IN
| INHERITS
| NODE
| OR
| UNDEF
| UNLESS
+ | TYPE
+ | ATTR
+ | FUNCTION
+ | PRIVATE
nil
: { result = nil}
end
---- header ----
require 'puppet'
require 'puppet/pops'
module Puppet
class ParseError < Puppet::Error; end
class ImportError < Racc::ParseError; end
class AlreadyImportedError < ImportError; end
end
---- inner ----
# Make emacs happy
# Local Variables:
# mode: ruby
# End:
diff --git a/lib/puppet/pops/parser/eparser.rb b/lib/puppet/pops/parser/eparser.rb
index 31b5bbee0..26ff778fe 100644
--- a/lib/puppet/pops/parser/eparser.rb
+++ b/lib/puppet/pops/parser/eparser.rb
@@ -1,2627 +1,2655 @@
#
# DO NOT MODIFY!!!!
# This file is automatically generated by Racc 1.4.9
# from Racc grammer file "".
#
require 'racc/parser.rb'
require 'puppet'
require 'puppet/pops'
module Puppet
class ParseError < Puppet::Error; end
class ImportError < Racc::ParseError; end
class AlreadyImportedError < ImportError; end
end
module Puppet
module Pops
module Parser
class Parser < Racc::Parser
-module_eval(<<'...end egrammar.ra/module_eval...', 'egrammar.ra', 765)
+module_eval(<<'...end egrammar.ra/module_eval...', 'egrammar.ra', 753)
# Make emacs happy
# Local Variables:
# mode: ruby
# End:
...end egrammar.ra/module_eval...
##### State transition tables begin ###
clist = [
-'59,62,241,278,60,53,316,55,-131,268,-219,-133,268,-228,227,227,117,267',
-'356,59,62,227,268,60,14,250,301,243,251,129,42,238,49,128,52,46,366',
-'50,72,68,331,44,71,47,48,279,275,69,13,261,-131,70,-219,-133,12,-228',
-'224,322,138,59,62,136,73,60,53,248,55,398,43,246,247,334,67,63,245,65',
-'66,64,59,62,51,73,60,14,54,351,129,350,238,42,128,49,63,52,46,129,50',
-'72,68,128,44,71,47,48,59,62,69,13,60,336,70,129,129,12,223,128,128,138',
-'59,62,136,73,60,53,351,55,350,43,263,264,338,67,63,77,65,66,234,59,62',
-'51,73,60,14,54,78,80,79,81,42,300,49,63,52,46,299,50,72,68,75,44,71',
-'47,48,277,129,69,13,117,128,70,253,252,12,343,344,345,138,59,62,136',
-'73,60,53,227,55,396,43,214,348,315,67,63,352,65,66,125,59,62,51,73,60',
-'14,54,354,293,190,275,42,277,49,63,52,46,275,50,72,68,362,44,71,47,48',
-'363,129,69,13,292,128,70,299,77,12,157,154,152,138,59,62,136,73,60,53',
-'373,55,394,43,244,291,375,67,63,277,65,66,277,130,275,51,73,378,14,54',
-'117,118,318,299,42,382,49,63,52,46,354,50,72,68,384,44,71,47,48,385',
-'386,69,13,387,388,70,114,390,12,391,392,319,77,59,62,74,73,60,53,399',
-'55,400,43,401,402,,67,63,82,65,66,,,,51,,,14,54,,,,105,42,109,49,104',
-'52,111,,50,72,68,,44,71,,,,,69,13,,,70,,,12,108,,,,59,62,,73,60,53,',
-'55,,43,,,,67,63,82,65,66,83,,,51,,,14,54,,,,105,42,109,49,104,52,111',
-',50,72,68,,44,71,,,,,69,13,,,70,,,12,108,,,,59,62,,73,60,53,,55,,43',
-',84,85,67,63,82,65,66,83,,,51,,,14,54,,,,105,42,109,49,104,52,111,,50',
-'72,68,,44,71,,,,,69,13,,,70,,,12,108,,,,59,62,,73,60,53,,55,,43,,84',
-'85,67,63,82,65,66,83,,,51,,,14,54,,,,105,42,109,49,104,52,111,,50,72',
-'68,,44,71,,,,,69,13,,,70,,,12,108,,,,59,62,,73,60,53,,55,,43,,,,67,63',
-'82,65,66,83,,,51,,,14,54,,,,105,42,109,49,104,52,46,,50,72,68,,44,71',
-'47,48,,,69,13,,,70,,,12,108,,,,59,62,,73,60,53,,55,,43,,84,85,67,63',
-'82,65,66,83,,,51,,,14,54,,,,105,42,109,49,104,52,111,,50,72,68,,44,71',
-',,,,69,13,,,70,,,12,108,,,,59,62,,73,60,53,,55,,43,,,,67,63,82,65,66',
-',,,51,,,14,54,,,,105,42,109,49,104,52,111,,50,72,68,,44,71,,,,,69,13',
-',,70,,,12,108,,,,59,62,,73,60,53,,55,,43,,,,67,63,82,65,66,,,,51,,,14',
-'54,,,,105,42,109,49,104,52,111,,50,72,68,,44,71,,,,,69,13,,,70,,,12',
-'108,,,,59,62,,73,60,53,,55,,43,,,,67,63,,65,66,,,,51,,,14,54,,,,,42',
-',49,,52,111,,50,72,68,,44,71,,,,,69,13,,,70,,,12,,,,,59,62,,73,60,53',
-',55,,43,,,,67,63,,65,66,,,,51,,,14,54,,,,,42,,49,,52,111,,50,72,68,',
-'44,71,,,,,69,13,,,70,,,12,,,,,59,62,,73,60,53,,55,,43,,,,67,63,,65,66',
-',,,51,,,14,54,,,,,42,,49,,52,124,,50,72,68,,44,71,,,,,69,13,,,70,,,12',
-',,,,59,62,,73,60,53,,55,,43,,,,67,63,,65,66,,,,51,,,14,54,,,,,42,,49',
-',52,111,,50,72,68,,44,71,,,,,69,13,,,70,,,12,,,,,59,62,,73,60,53,,55',
-',43,,,,67,63,,65,66,,,,51,,,14,54,,,,,42,,49,,52,111,,50,72,68,,44,71',
-',,,,69,13,,,70,,,12,,,,,59,62,,73,60,53,,55,,43,,,,67,63,,65,66,,,,51',
-',,14,54,,,,,42,,49,,52,111,,50,72,68,,44,71,,,,,69,13,,,70,,,12,,,,',
-'59,62,,73,60,53,,55,297,43,,,,67,63,,65,66,,,,51,,,14,54,,,,,42,,49',
-',52,46,,50,72,68,,44,71,47,48,,,69,13,,,70,,,12,,,,,59,62,,73,60,53',
-'141,55,,43,,,,67,63,,65,66,,,,51,,,14,54,,,,,42,,49,,52,111,,50,72,68',
-',44,71,,,,,69,13,,,70,,,12,,,,,59,62,,73,60,53,143,55,,43,,,,67,63,',
-'65,66,,,,51,,,14,54,,,,,42,,49,,52,111,,50,72,68,,44,71,,,,,69,13,,',
-'70,,,12,,,,,59,62,,73,60,53,,55,146,43,,,,67,63,,65,66,,,,51,,,14,54',
-',,,,42,,49,,52,111,,50,72,68,,44,71,,,,,69,13,,,70,,,12,,,,,59,62,,73',
-'60,53,,55,,43,,,,67,63,,65,66,,,,51,,,14,54,,,,,42,,49,,52,111,,50,72',
-'68,,44,71,,,,,69,13,,,70,,,12,,,,,59,62,,73,60,53,,55,303,43,,,,67,63',
-',65,66,,,,51,,,14,54,,,,,42,,49,,52,46,,50,72,68,,44,71,47,48,,,69,13',
-',,70,,,12,,,,,59,62,,73,60,53,,55,146,43,,,,67,63,,65,66,,,,51,,,14',
-'54,,,,,42,,49,,52,46,,50,72,68,,44,71,47,48,,,69,13,,,70,,,12,,,,,59',
-'62,,73,60,53,,156,,43,,,,67,63,,65,66,,,,51,,,14,54,,,,,42,,49,,52,111',
-',50,72,68,,44,71,,,,,69,13,,,70,,,12,,,,,59,62,,73,60,53,,55,372,43',
-',,,67,63,,65,66,,,,51,,,14,54,,,,,42,,49,,52,46,,50,72,68,,44,71,47',
-'48,,,69,13,,,70,,,12,,,,,59,62,,73,60,53,,55,,43,,,,67,63,,65,66,,,',
-'51,,,14,54,,,,,42,,49,,52,46,,50,72,68,,44,71,47,48,,,69,13,,,70,,,12',
-',,,,59,62,,73,60,53,,55,,43,,,,67,63,,65,66,,,,51,,,14,54,,,,,42,,49',
-',52,46,,50,72,68,,44,71,47,48,,,69,13,,,70,,,12,,,,,59,62,,73,60,53',
-',55,,43,,,,67,63,,65,66,,,,51,,,14,54,,,,,42,,49,,52,46,,50,72,68,,44',
-'71,47,48,,,69,13,,,70,,,12,,,,,59,62,,73,60,53,,55,,43,,,,67,63,,65',
-'66,,,,51,,,14,54,,,,,42,,49,,52,46,,50,72,68,,44,71,47,48,,,69,13,,',
-'70,,,12,,,,,59,62,,73,60,53,,55,,43,,,,67,63,,65,66,,,,51,,,14,54,,',
-',,42,,49,,52,46,,50,72,68,,44,71,47,48,,,69,13,,,70,,,12,,,,,59,62,',
-'73,60,53,,55,,43,,,,67,63,,65,66,,,,51,,,14,54,,,,,42,,49,,52,46,,50',
-'72,68,,44,71,47,48,,,69,13,,,70,,,12,,,,,59,62,,73,60,53,,55,,43,,,',
-'67,63,,65,66,,,,51,,,14,54,,,,,42,,49,,52,46,,50,72,68,,44,71,47,48',
-',,69,13,,,70,,,12,,,,,59,62,,73,60,53,,55,,43,,,,67,63,,65,66,,,,51',
-',,14,54,,,,,42,,49,,52,111,,50,72,68,,44,71,,,,,69,13,,,70,,,12,,,,',
-'59,62,,73,60,53,,55,,43,,,,67,63,,65,66,,,,51,,,14,54,,,,,42,,49,,52',
-'111,,50,72,68,,44,71,,,,,69,13,,,70,,,12,,,,,59,62,,73,60,53,,55,,43',
-',,,67,63,,65,66,,,,51,,,14,54,,,,,42,,49,,52,111,,50,72,68,,44,71,,',
-',,69,13,,,70,,,12,,,,,59,62,,73,60,53,,55,,43,,,,67,63,,65,66,,,,51',
-',,14,54,,,,,42,,49,,52,111,,50,72,68,,44,71,,,,,69,13,,,70,,,12,,,,',
-'59,62,,73,60,53,,55,,43,,,,67,63,,65,66,,,,51,,,14,54,,,,,42,,49,,52',
-'111,,50,72,68,,44,71,,,,,69,13,,,70,,,12,,,,,59,62,,73,60,53,,55,,43',
-',,,67,63,,65,66,,,,51,,,14,54,,,,,42,,49,,52,111,,50,72,68,,44,71,,',
-',,69,13,,,70,,,12,,,,,59,62,,73,60,53,,55,,43,,,,67,63,,65,66,,,,51',
-',,14,54,,,,,42,,49,,52,111,,50,72,68,,44,71,,,,,69,13,,,70,,,12,,,,',
-'59,62,,73,60,53,,55,,43,,,,67,63,,65,66,,,,51,,,14,54,,,,,42,,49,,52',
-'111,,50,72,68,,44,71,,,,,69,13,,,70,,,12,,,,,59,62,,73,60,53,,55,,43',
-',,,67,63,,65,66,,,,51,,,14,54,,,,,42,,49,,52,111,,50,72,68,,44,71,,',
-',,69,13,,,70,,,12,,,,,59,62,,73,60,53,,55,,43,,,,67,63,,65,66,,,,51',
-',,14,54,,,,,42,,49,,52,111,,50,72,68,,44,71,,,,,69,13,,,70,,,12,,,,',
-'59,62,,73,60,53,,55,,43,,,,67,63,,65,66,,,,51,,,14,54,,,,,42,,49,,52',
-'111,,50,72,68,,44,71,,,,,69,13,,,70,,,12,,,,,59,62,,73,60,53,,55,,43',
-',,,67,63,,65,66,,,,51,,,14,54,,,,,42,,49,,52,111,,50,72,68,,44,71,,',
-',,69,13,,,70,,,12,,,,,59,62,,73,60,53,,55,,43,,,,67,63,,65,66,,,,51',
-',,14,54,,,,,42,,49,,52,111,,50,72,68,,44,71,,,,,69,13,,,70,,,12,,,,',
-'59,62,,73,60,53,,55,,43,,,,67,63,,65,66,,,,51,,,14,54,,,,,42,,49,,52',
-'111,,50,72,68,,44,71,,,,,69,13,,,70,,,12,,,,,59,62,,73,60,53,,55,,43',
-',,,67,63,,65,66,,,,51,,,14,54,,,,,42,,49,,52,111,,50,72,68,,44,71,,',
-',,69,13,,,70,,,12,,,,,59,62,,73,60,53,,55,,43,,,,67,63,,65,66,,,,51',
-',,14,54,,,,,42,,49,,52,111,,50,72,68,,44,71,,,,,69,13,,,70,,,12,,,,',
-'59,62,,73,60,53,,55,,43,,,,67,63,,65,66,,,,51,,,14,54,,,,,42,,49,,52',
-'111,,50,72,68,,44,71,,,,,69,13,,,70,,,12,,,,,59,62,,73,60,53,,55,,43',
-',,,67,63,,65,66,,,,51,,,14,54,,,,,42,,49,,52,111,,50,72,68,,44,71,,',
-',,69,13,,,70,,,12,,,,,59,62,,73,60,53,,55,,43,,,,67,63,,65,66,,,,51',
-',,14,54,,,,,42,,49,,52,111,,50,72,68,,44,71,,,,,69,13,,,70,,,12,,,,',
-'59,62,,73,60,53,,55,,43,,,,67,63,,65,66,,,,51,,,14,54,,,,,42,,49,,52',
-'111,,50,72,68,,44,71,,,,,69,13,,,70,,,12,,,,,59,62,,73,60,53,,55,,43',
-',,,67,63,,65,66,,,,51,,,14,54,,,,,42,,49,,52,111,,50,72,68,,44,71,,',
-',,69,13,,,70,,,12,,,,,59,62,,73,60,53,,55,,43,,,,67,63,,65,66,,,,51',
-',,14,54,,,,,42,,49,,52,111,,50,72,68,,44,71,,,,,69,13,,,70,,,12,,,,',
-'59,62,,73,60,53,,55,,43,,,,67,63,,65,66,,,,51,,,14,54,,,,,42,,49,,52',
-'111,,50,72,68,,44,71,,,,,69,13,,,70,,,12,,,,,59,62,,73,60,53,,55,357',
-'43,,,189,67,63,,65,66,,,,51,,,14,54,,,,,42,,49,,52,111,,50,72,68,,44',
-'71,,,,,69,13,,,70,,,12,,,,,59,62,,73,60,53,,55,,43,,,,67,63,,65,66,',
-',,51,,,14,54,,,,,192,209,203,210,52,204,212,205,201,199,,194,207,,,',
-',69,13,213,208,206,,,12,,,,,59,62,,73,60,53,,55,211,193,,,,67,63,,65',
-'66,,,,51,,,14,54,,,,,42,,49,,52,111,,50,72,68,,44,71,,,,,69,13,,,70',
-',,12,,,,,59,62,,73,60,53,,55,,43,,,,67,63,,65,66,,,,51,,,14,54,,,,,42',
-',49,,52,111,,50,72,68,,44,71,,,,,69,13,,,70,,,12,,,,,59,62,,73,60,53',
-',55,,43,,,,67,63,,65,66,,,,51,,,14,54,,,,,42,,49,,52,111,,50,72,68,',
-'44,71,,,,,69,13,,,70,,,12,,,,,59,62,,73,60,53,,55,,43,,,,67,63,,65,66',
-',,,51,,,14,54,,,,,42,,49,,52,111,,50,72,68,,44,71,,,,,69,13,,,70,,,12',
-',,,,59,62,,73,60,53,,55,,43,,,,67,63,,65,66,,,,51,,,14,54,,,,,42,,49',
-',52,111,,50,72,68,,44,71,,,,,69,13,,,70,,,12,,,,,59,62,,73,60,53,,55',
-',43,,,,67,63,,65,66,,,,51,,,14,54,,,,,42,,49,,52,111,,50,72,68,,44,71',
-',,,,69,13,,,70,,,12,,,,,59,62,,73,60,53,,55,305,43,,,,67,63,82,65,66',
-',,,51,,,14,54,,,,105,42,109,49,104,52,46,,50,72,68,,44,71,47,48,,,69',
-'13,,,70,,,12,108,,,,,,,73,,,89,88,,43,,84,85,67,63,,65,66,83,59,62,51',
-',60,53,54,55,,,,,,,,,,90,,,,,,,14,221,,,,,42,,49,,52,111,,50,72,68,',
-'44,71,,,,,69,13,,,70,,,12,,,,,59,62,,73,60,53,,55,,43,,,,67,63,,65,66',
-',,,51,,,14,54,,,,,42,,49,,52,111,,50,72,68,,44,71,,,,,69,13,,,70,,,12',
-',,,,59,62,,73,60,53,,55,,43,,,,67,63,82,65,66,,,,51,,,14,54,,,,105,42',
-'109,49,104,52,111,,50,72,68,,44,71,,,,,69,13,,,70,,,12,108,,,,,,,73',
-',,89,88,,43,,84,85,67,63,,65,66,83,59,62,51,,60,53,54,55,,,,,,,,,,90',
-',,,,,,14,229,,,,,42,,49,,52,111,,50,72,68,,44,71,,,,,69,13,,,70,,,12',
-',,,,59,62,,73,60,53,,55,,43,,,,67,63,,65,66,,,,51,,,14,54,,,,,42,,49',
-',52,46,,50,72,68,,44,71,47,48,,,69,13,,,70,,,12,,,,,59,62,,73,60,53',
-',55,,43,,,,67,63,,65,66,,,,51,,,14,54,,,,,42,,49,,52,111,,50,72,68,',
-'44,71,,,,,69,13,,,70,,,12,,,,,59,62,,73,60,53,325,55,,43,,,,67,63,,65',
-'66,,,,51,,,14,54,,,,,42,,49,,52,111,,50,72,68,,44,71,,,,,69,13,,,70',
-',,12,,,,,59,62,,73,60,53,,55,,43,,,,67,63,,65,66,,,,51,,,14,54,,,,,42',
-',49,,52,111,,50,72,68,,44,71,,,,,69,13,,,70,,,12,,,,,59,62,,73,60,53',
-',55,,43,,,,67,63,,65,66,,,,51,,,14,54,,,,,42,,49,,52,111,,50,72,68,',
-'44,71,,,,,69,13,,,70,,,12,,,,,59,62,,73,60,53,,55,,43,,,,67,63,,65,66',
-',,,51,,,14,54,,,,,42,,49,,52,111,,50,72,68,,44,71,,,,,69,13,,,70,,,12',
-',,,,59,62,,73,60,53,324,55,,43,,,,67,63,,65,66,,,,51,,,14,54,,,,,42',
-',49,,52,111,,50,72,68,,44,71,,,,,69,13,,,70,,,12,,,,,59,62,,73,60,53',
-',55,,43,,,,67,63,,65,66,,,,51,,,14,54,,,,,42,,49,,52,111,,50,72,68,',
-'44,71,,,,,69,13,,,70,,,12,,,,,59,62,,73,60,53,,55,327,43,,,,67,63,,65',
-'66,,,,51,,,14,54,,,,,42,,49,,52,111,,50,72,68,,44,71,,,,,69,13,,,70',
-',,12,,,,,59,62,,73,60,53,,55,,43,,,,67,63,,65,66,,,,51,,,14,54,,,,,42',
-',49,,52,111,,50,72,68,,44,71,,,,,69,13,,,70,,,12,,,,,59,62,,73,60,53',
-',55,,43,,,,67,63,,65,66,,,,51,,,14,54,,,,,192,209,203,210,52,204,212',
-'205,201,199,,194,207,,,,,69,13,213,208,206,,,12,,,,,,,,73,,,,,211,193',
-',,,67,63,,65,66,82,,,51,,,,54,,101,102,103,98,93,105,,109,,104,,,94',
-'96,95,97,,,,,,,,,,,,,,,,108,,,,100,99,,,86,87,89,88,91,92,,84,85,,,',
-',82,83,106,,,249,,,,101,102,103,98,93,105,,109,,104,90,,94,96,95,97',
-',,,,,,,,,,,,,,,108,,,,100,99,,,86,87,89,88,91,92,82,84,85,,,249,,,83',
-'101,102,103,98,93,105,,109,,104,,,94,96,95,97,,90,,,,,,,,,,,,,,108,',
-',,100,99,,,86,87,89,88,91,92,,84,85,,,,,82,83,233,,,,,,,101,102,103',
-'98,93,105,,109,,104,90,,94,96,95,97,,,,,,,,,,,,,,,,108,,,,100,99,,,86',
-'87,89,88,91,92,82,84,85,,,,,,83,101,102,103,98,93,105,,109,,104,,,94',
-'96,95,97,,90,,,,,,,,,,,,,,108,,,,100,99,,,86,87,89,88,91,92,,84,85,',
-',,,82,83,232,,,,,,,101,102,103,98,93,105,,109,,104,90,,94,96,95,97,',
-',,,,,,,,,,,,,,108,,,,100,99,,,86,87,89,88,91,92,,84,85,,,,,82,83,231',
-',,,,,,101,102,103,98,93,105,,109,,104,90,,94,96,95,97,,,,,,,,,,,,,,',
-',108,,,,100,99,,,86,87,89,88,91,92,,84,85,,,,,82,83,230,,,,,,,101,102',
-'103,98,93,105,,109,,104,90,,94,96,95,97,,,,,,,,,,,,,,,,108,,,,100,99',
-',,86,87,89,88,91,92,82,84,85,,,,,,83,101,102,103,98,93,105,,109,,104',
-',219,94,96,95,97,,90,,,,,,,,,,,,,,108,,,,100,99,,,86,87,89,88,91,92',
-'82,84,85,,,,,,83,101,102,103,98,93,105,,109,,104,,,94,96,95,97,,90,',
-',,,,,,,,,,,,108,,,,100,99,,,86,87,89,88,91,92,82,84,85,,,,,,83,101,102',
-'103,98,93,105,,109,,104,263,264,94,96,95,97,,90,,,,,,,,,,,,,,108,,,',
-'100,99,,,86,87,89,88,91,92,82,84,85,,,,,,83,101,102,103,98,93,105,,109',
-',104,,,94,96,95,97,,90,,,,,,,,,,,,,,108,,,,100,99,,,86,87,89,88,91,92',
-'82,84,85,,,,,,83,101,102,103,98,93,105,,109,,104,,,94,96,95,97,,90,',
-',,,,,,,,,,,,108,,,,100,99,,,86,87,89,88,91,92,82,84,85,,,,,,83,101,102',
-'103,98,93,105,,109,,104,,,94,96,95,97,,90,,,,,,,,,,,,,,108,,,,100,99',
-',,86,87,89,88,91,92,82,84,85,,,,,,83,101,102,103,98,93,105,,109,,104',
-',,94,96,95,97,,90,,,,,,,,,,,,,,108,,,,100,99,,,86,87,89,88,91,92,82',
-'84,85,,,,,,83,101,102,103,98,93,105,,109,,104,,,94,96,95,97,,90,,,,',
-',,,,,,,,,108,,,,100,99,,,86,87,89,88,91,92,82,84,85,,,,,,83,101,102',
-'103,98,93,105,,109,,104,,,94,96,95,97,,90,,,,,,,,,,,,,,108,,,,100,99',
-',,86,87,89,88,91,92,82,84,85,,,,,,83,101,102,103,98,93,105,273,109,',
-'104,,,94,96,95,97,,90,,,,,,,,,,,,,,108,,,,100,99,,,86,87,89,88,91,92',
-',84,85,,,,,82,83,106,,,,,,,101,102,103,98,93,105,,109,82,104,90,,94',
-'96,95,97,,,,,,,105,,109,,104,,,,,108,,,,100,99,,,86,87,89,88,91,92,',
-'84,85,108,,,82,,83,,,86,87,89,88,,,,84,85,105,,109,82,104,83,90,,,,',
-',,,,,,105,,109,,104,,90,,,108,,,,,,,,86,87,89,88,,,,84,85,108,,,82,',
-'83,,,86,87,89,88,91,92,,84,85,105,,109,82,104,83,90,,,,,,,,,,93,105',
-',109,,104,,90,94,,108,,,,,,,,86,87,89,88,91,92,,84,85,108,,,,,83,,,86',
-'87,89,88,91,92,82,84,85,,,,,,83,90,,,,93,105,,109,82,104,,,94,,,,,90',
-',,,93,105,,109,,104,,,94,,108,,,,,,,,86,87,89,88,91,92,,84,85,108,,',
-',,83,,,86,87,89,88,91,92,82,84,85,,,,,,83,90,,,,93,105,,109,,104,,82',
-'94,,,,,90,,,,,,98,93,105,,109,,104,,108,94,96,95,97,,,,86,87,89,88,91',
-'92,,84,85,,,,108,,83,,,82,,,86,87,89,88,91,92,,84,85,98,93,105,90,109',
-'83,104,,82,94,96,95,97,,,,,101,102,103,98,93,105,90,109,,104,,108,94',
-'96,95,97,99,,,86,87,89,88,91,92,,84,85,,,,108,,83,,100,99,,,86,87,89',
-'88,91,92,82,84,85,,,269,90,,83,101,102,103,98,93,105,,109,,104,,,94',
-'96,95,97,,90,,,,,,,,,,,,,,108,,,,100,99,,,86,87,89,88,91,92,82,84,85',
-',,,,,83,101,102,103,98,93,105,,109,,104,,,94,96,95,97,,90,,,,,,,,,,',
-',,,108,,,,100,99,,,86,87,89,88,91,92,82,84,85,,,,,,83,101,102,103,98',
-'93,105,,109,,104,,,94,96,95,97,,90,,,,,,,,,,,,,,108,,,,100,99,,,86,87',
-'89,88,91,92,,84,85,,287,209,286,210,83,284,212,288,282,281,,283,285',
-',,,,,,213,208,289,90,287,209,286,210,,284,212,288,282,281,,283,285,',
-'211,290,,,,213,208,289,287,209,286,210,,284,212,288,282,281,,283,285',
-',,211,290,,,213,208,289,,,,,,,,,,,,,,,,211,290' ]
- racc_action_table = arr = ::Array.new(6559, nil)
+'58,61,388,275,59,53,317,54,-236,80,-238,-237,236,133,-129,-234,-239',
+'-225,256,255,318,392,278,101,18,104,278,99,100,333,42,373,45,237,47',
+'12,111,46,36,39,110,44,37,10,11,276,134,66,17,103,-236,38,-238,-237',
+'15,16,-129,-234,-239,-225,58,61,67,334,59,53,236,54,43,277,79,81,35',
+'62,278,64,65,63,107,66,48,49,51,50,18,111,52,237,79,110,42,236,45,79',
+'47,113,236,46,36,39,252,44,37,253,66,312,111,66,17,66,110,38,237,254',
+'15,16,369,237,368,111,58,61,67,110,59,53,229,54,43,340,111,265,35,62',
+'110,64,65,359,267,268,48,49,51,50,18,111,52,307,71,110,42,369,45,368',
+'47,12,236,46,36,39,69,44,37,10,11,342,273,66,17,66,329,38,58,61,15,16',
+'59,237,254,326,58,61,67,249,59,53,249,54,43,72,73,74,35,62,350,64,65',
+'351,273,274,48,49,51,50,18,353,52,248,247,356,42,316,45,312,47,12,361',
+'46,36,39,362,44,37,10,11,236,225,66,17,228,226,38,366,313,15,16,370',
+'372,75,77,76,78,67,312,249,225,379,79,43,381,299,273,35,62,79,64,65',
+'215,214,71,48,49,51,50,58,61,52,153,59,53,385,54,310,150,119,79,273',
+'148,391,306,119,302,120,395,372,397,398,399,18,58,61,119,402,59,42,403',
+'45,404,47,12,300,46,36,39,79,44,37,10,11,71,412,66,17,68,414,38,415',
+'416,15,16,302,,,,,,67,,133,,,130,43,,,,35,62,,64,65,,,,48,49,51,50,58',
+'61,52,67,59,53,,54,408,80,,,,134,62,,,,,,,,,101,18,104,,99,100,,42,',
+'45,,47,12,,46,36,39,,44,37,10,11,,,66,17,103,,38,,,15,16,,,,,58,61,67',
+',59,53,,54,43,,,81,35,62,,64,65,,,,48,49,51,50,18,,52,,,,42,,45,,47',
+'113,,46,36,39,,44,37,,,,,66,17,,,38,,,15,16,,,,,58,61,67,,59,53,,54',
+'43,,,,35,62,,64,65,,,,48,49,51,50,18,,52,,,,42,,45,,47,12,,46,36,39',
+',44,37,10,11,,,66,17,,,38,,,15,16,,,,,58,61,67,,59,53,,54,43,,,,35,62',
+',64,65,,,,48,49,51,50,18,,52,,,,42,,45,,47,12,,46,36,39,,44,37,10,11',
+',,66,17,,,38,,,15,16,,,,,,,67,,,,,,43,,,,35,62,,64,65,,,,48,49,51,50',
+'58,61,52,,59,53,,54,406,80,,,,,,,,,,,,,,101,18,104,,99,100,,42,,45,',
+'47,12,,46,36,39,,44,37,10,11,,,66,17,103,,38,,,15,16,,,,,58,61,67,,59',
+'53,,54,43,,,,35,62,,64,65,,,,48,49,51,50,18,,52,,,,42,,45,,47,113,,46',
+'36,39,,44,37,,,,,66,17,,,38,,,15,16,,,,,58,61,67,,59,53,,54,43,,,,35',
+'62,,64,65,,,,48,49,51,50,18,,52,,,,42,,45,,47,113,,46,36,39,,44,37,',
+',,,66,17,,,38,,,15,16,,,,,58,61,67,,59,53,,54,43,,,,35,62,,64,65,,,',
+'48,49,51,50,18,,52,,,,42,,45,,47,113,,46,36,39,,44,37,,,,,66,17,,,38',
+',,15,16,,,,,58,61,67,,59,53,,54,43,,,,35,62,,64,65,,,,48,49,51,50,18',
+',52,,,,42,,45,,47,12,,46,36,39,,44,37,10,11,,,66,17,,,38,,,15,16,,,',
+',,,67,,,,,,43,,,,35,62,,64,65,,,,48,49,51,50,58,61,52,,59,53,,54,401',
+'80,,,,,,,,,,,,,,101,18,104,,99,100,,42,,45,,47,12,,46,36,39,,44,37,10',
+'11,,,66,17,103,,38,,,15,16,,,,,58,61,67,,59,53,,54,43,,,,35,62,,64,65',
+',,,48,49,51,50,18,,52,,,,42,,45,,47,113,,46,36,39,,44,37,,,,,66,17,',
+',38,,,15,16,,,,,58,61,67,,59,53,,54,43,,,,35,62,,64,65,,,,48,49,51,50',
+'18,,52,,,,42,,45,,47,113,,46,36,39,,44,37,,,,,66,17,,,38,,,15,16,,,',
+',58,61,67,,59,53,,54,43,,,,35,62,,64,65,,,,48,49,51,50,18,,52,,,,42',
+',45,,47,113,,46,36,39,,44,37,,,,,66,17,,,38,,,15,16,,,,,58,61,67,,59',
+'53,,54,43,,,,35,62,,64,65,,,,48,49,51,50,18,,52,,,,42,,45,,47,113,,46',
+'36,39,,44,37,,,,,66,17,,,38,,,15,16,,,,,,,67,,,,,,43,,,,35,62,,64,65',
+',,,48,49,51,50,58,61,52,,59,53,,54,320,,,,,,,,,,,,,,,,18,58,61,,,59',
+'42,,45,,47,12,,46,36,39,,44,37,10,11,,,66,17,,,38,,,15,16,,,,,,,67,',
+'133,,,130,43,,,,35,62,,64,65,,,,48,49,51,50,58,61,52,67,59,53,,54,322',
+'80,,,,134,62,,,,,,,,,101,18,104,,99,100,,42,,45,,47,12,,46,36,39,,44',
+'37,10,11,,,66,17,103,,38,,,15,16,,,,,58,61,67,,59,53,137,54,43,,,,35',
+'62,,64,65,,,,48,49,51,50,18,,52,,,,42,,45,,47,12,,46,36,39,,44,37,10',
+'11,,,66,17,,,38,,,15,16,,,,,58,61,67,,59,53,139,54,43,,,,35,62,,64,65',
+',,,48,49,51,50,18,,52,,,,42,,45,,47,12,,46,36,39,,44,37,10,11,,,66,17',
+',,38,,,15,16,,,,,,,67,,,,,,43,,,,35,62,,64,65,,,,48,49,51,50,58,61,52',
+',59,53,,54,141,80,,,,,,,,,,,,,,101,18,104,,99,100,,42,,45,,47,12,,46',
+'36,39,,44,37,10,11,,,66,17,103,,38,,,15,16,,,,,58,61,67,,59,53,,54,43',
+',,,35,62,,64,65,,,,48,49,51,50,18,,52,,,,42,,45,,47,12,,46,36,39,,44',
+'37,10,11,,,66,17,,,38,,,15,16,,,,,58,61,67,,59,53,,54,43,,,,35,62,,64',
+'65,,,,48,49,51,50,18,,52,,,,42,,45,,47,12,,46,36,39,,44,37,10,11,,,66',
+'17,,,38,,,15,16,,,,,58,61,67,,59,53,,54,43,,,,35,62,,64,65,,,,48,49',
+'51,50,18,,52,,,,42,,45,,47,113,,46,36,39,,44,37,,,,,66,17,,,38,,,15',
+'16,,,,,58,61,67,,59,53,,152,43,,,,35,62,,64,65,,,,48,49,51,50,18,,52',
+',,,42,,45,,47,113,,46,36,39,,44,37,,,,,66,17,,,38,,,15,16,,,,,58,61',
+'67,,59,53,,54,43,,,,35,62,,64,65,,,,48,49,51,50,18,,52,,,,42,,45,,47',
+'113,,46,36,39,,44,37,,,,,66,17,,,38,,,15,16,,,,,58,61,67,,59,53,,54',
+'43,,,,35,62,,64,65,,,,48,49,51,50,18,,52,,,,42,,45,,47,12,,46,36,39',
+',44,37,10,11,,,66,17,,,38,,,15,16,,,,,58,61,67,,59,53,,54,43,,,,35,62',
+',64,65,,,,48,49,51,50,18,,52,,,,42,,45,,47,113,,46,36,39,,44,37,,,,',
+'66,17,,,38,,,15,16,,,,,58,61,67,,59,53,,54,43,,,,35,62,,64,65,,,,48',
+'49,51,50,18,,52,,,,42,,45,,47,12,,46,36,39,,44,37,10,11,,,66,17,,,38',
+',,15,16,,,,,58,61,67,,59,53,,54,43,,,,35,62,,64,65,,,,48,49,51,50,18',
+',52,,,,42,,45,,47,12,,46,36,39,,44,37,10,11,,,66,17,,,38,,,15,16,,,',
+',58,61,67,,59,53,,54,43,,,,35,62,,64,65,,,,48,49,51,50,18,,52,,,,42',
+',45,,47,12,,46,36,39,,44,37,10,11,,,66,17,,,38,,,15,16,,,,,58,61,67',
+',59,53,,54,43,,,,35,62,,64,65,,,,48,49,51,50,18,,52,,,,42,,45,,47,12',
+',46,36,39,,44,37,10,11,,,66,17,,,38,,,15,16,,,,,58,61,67,,59,53,,54',
+'43,,,,35,62,,64,65,,,,48,49,51,50,18,,52,,,,42,,45,,47,12,,46,36,39',
+',44,37,10,11,,,66,17,,,38,,,15,16,,,,,58,61,67,,59,53,,54,43,,,,35,62',
+',64,65,,,,48,49,51,50,18,,52,,,,42,,45,,47,12,,46,36,39,,44,37,10,11',
+',,66,17,,,38,,,15,16,,,,,58,61,67,,59,53,,54,43,,,,35,62,,64,65,,,,48',
+'49,51,50,18,,52,,,,42,,45,,47,12,,46,36,39,,44,37,10,11,,,66,17,,,38',
+',,15,16,,,,,58,61,67,,59,53,,54,43,,,,35,62,,64,65,,,,48,49,51,50,18',
+',52,,,,42,,45,,47,12,,46,36,39,,44,37,10,11,,,66,17,,,38,,,15,16,,,',
+',58,61,67,,59,53,,54,43,,,,35,62,,64,65,,,,48,49,51,50,18,,52,,,,169',
+'183,175,184,47,176,186,177,36,168,,171,166,,,,,66,17,187,182,167,,,15',
+'165,,,,,,,67,,,,,185,170,,,,35,62,,64,65,,,,178,179,181,180,58,61,52',
+',59,53,,54,,,,,,,,,,,,,,,,,18,,,,,,42,,45,,47,113,,46,36,39,,44,37,',
+',,,66,17,,,38,,,15,16,,,,,58,61,67,,59,53,,54,43,,,,35,62,,64,65,,,',
+'48,49,51,50,18,,52,,,,42,,45,,47,113,,46,36,39,,44,37,,,,,66,17,,,38',
+',,15,16,,,,,58,61,67,,59,53,,54,43,,,,35,62,,64,65,,,,48,49,51,50,18',
+',52,,,,42,,45,,47,113,,46,36,39,,44,37,,,,,66,17,,,38,,,15,16,,,,,58',
+'61,67,,59,53,,54,43,,,,35,62,,64,65,,,,48,49,51,50,18,,52,,,,42,,45',
+',47,113,,46,36,39,,44,37,,,,,66,17,,,38,,,15,16,,,,,58,61,67,,59,53',
+',54,43,,,,35,62,,64,65,,,,48,49,51,50,18,,52,,,,42,,45,,47,113,,46,36',
+'39,,44,37,,,,,66,17,,,38,,,15,16,,,,,58,61,67,,59,53,,54,43,,,,35,62',
+',64,65,,,,48,49,51,50,18,,52,,,,42,,45,,47,113,,46,36,39,,44,37,,,,',
+'66,17,,,38,,,15,16,,,,,58,61,67,,59,53,,54,43,,,,35,62,,64,65,,,,48',
+'49,51,50,18,,52,,,,42,,45,,47,113,,46,36,39,,44,37,,,,,66,17,,,38,,',
+'15,16,,,,,58,61,67,,59,53,,54,43,,,,35,62,,64,65,,,,48,49,51,50,18,',
+'52,,,,42,,45,,47,113,,46,36,39,,44,37,,,,,66,17,,,38,,,15,16,,,,,58',
+'61,67,,59,53,,54,43,,,,35,62,,64,65,,,,48,49,51,50,18,,52,,,,42,,45',
+',47,113,,46,36,39,,44,37,,,,,66,17,,,38,,,15,16,,,,,58,61,67,,59,53',
+',54,43,,,,35,62,,64,65,,,,48,49,51,50,18,,52,,,,42,,45,,47,113,,46,36',
+'39,,44,37,,,,,66,17,,,38,,,15,16,,,,,58,61,67,,59,53,,54,43,,,,35,62',
+',64,65,,,,48,49,51,50,18,,52,,,,42,,45,,47,113,,46,36,39,,44,37,,,,',
+'66,17,,,38,,,15,16,,,,,58,61,67,,59,53,,54,43,,,,35,62,,64,65,,,,48',
+'49,51,50,18,,52,,,,42,,45,,47,113,,46,36,39,,44,37,,,,,66,17,,,38,,',
+'15,16,,,,,58,61,67,,59,53,,54,43,,,,35,62,,64,65,,,,48,49,51,50,18,',
+'52,,,,42,,45,,47,113,,46,36,39,,44,37,,,,,66,17,,,38,,,15,16,,,,,58',
+'61,67,,59,53,,54,43,,,,35,62,,64,65,,,,48,49,51,50,18,,52,,,,42,,45',
+',47,113,,46,36,39,,44,37,,,,,66,17,,,38,,,15,16,,,,,58,61,67,,59,53',
+',54,43,,,,35,62,,64,65,,,,48,49,51,50,18,,52,,,,42,,45,,47,113,,46,36',
+'39,,44,37,,,,,66,17,,,38,,,15,16,,,,,58,61,67,,59,53,,54,43,,,,35,62',
+',64,65,,,,48,49,51,50,18,,52,,,,42,,45,,47,113,,46,36,39,,44,37,,,,',
+'66,17,,,38,,,15,16,,,,,58,61,67,,59,53,,54,43,,,,35,62,,64,65,,,,48',
+'49,51,50,18,,52,,,,42,,45,,47,113,,46,36,39,,44,37,,,,,66,17,,,38,,',
+'15,16,,,,,58,61,67,,59,53,,54,43,,,,35,62,,64,65,,,,48,49,51,50,18,',
+'52,,,,42,,45,,47,113,,46,36,39,,44,37,,,,,66,17,,,38,,,15,16,,,,,58',
+'61,67,,59,53,,54,43,,,,35,62,,64,65,,,,48,49,51,50,18,,52,,,,42,,45',
+',47,113,,46,36,39,,44,37,,,,,66,17,,,38,,,15,16,,,,,58,61,67,,59,53',
+',54,43,,,,35,62,,64,65,,,,48,49,51,50,18,,52,,,,42,,45,,47,113,,46,36',
+'39,,44,37,,,,,66,17,,,38,,,15,16,,,,,58,61,67,,59,53,,54,43,,,211,35',
+'62,,64,65,,,,48,49,51,50,18,213,52,,,,42,,45,,47,12,,46,36,39,,44,37',
+'10,11,,,66,17,,,38,,,15,16,,,,,58,61,67,,59,53,,54,43,,,,35,62,,64,65',
+',,,48,49,51,50,18,,52,,,,42,,45,,47,113,,46,36,39,,44,37,,,,,66,17,',
+',38,,,15,16,,,,,58,61,67,,59,53,,54,43,,,,35,62,,64,65,,,,48,49,51,50',
+'18,,52,,,,42,,45,,47,113,,46,36,39,,44,37,,,,,66,17,,,38,,,15,16,,,',
+',58,61,67,,59,53,,54,43,,,,35,62,,64,65,,,,48,49,51,50,18,,52,,,,42',
+',45,,47,113,,46,36,39,,44,37,,,,,66,17,,,38,,,15,16,,,,,58,61,67,,59',
+'53,,54,43,,,,35,62,,64,65,,,,48,49,51,50,18,,52,,,,42,,45,,47,113,,46',
+'36,39,,44,37,,,,,66,17,,,38,,,15,16,,,,,58,61,67,,59,53,,54,43,,274',
+',35,62,,64,65,,,,48,49,51,50,18,,52,,,,42,,45,,47,113,,46,36,39,,44',
+'37,,,,,66,17,,,38,,,15,16,,,,,58,61,67,,59,53,,54,43,,,,35,62,,64,65',
+',,,48,49,51,50,18,,52,,,,42,,45,,47,12,,46,36,39,,44,37,10,11,,,66,17',
+',,38,,,15,16,,,,,58,61,67,,59,53,,54,43,,,,35,62,,64,65,,,,48,49,51',
+'50,18,,52,,,,42,,45,,47,113,,46,36,39,,44,37,,,,,66,17,,,38,,,15,16',
+',,,,58,61,67,,59,53,,54,43,,,,35,62,,64,65,,,,48,49,51,50,18,,52,,,',
+'42,,45,,47,12,,46,36,39,,44,37,10,11,,,66,17,,,38,,,15,16,,,,,,,67,',
+',,,,43,,,,35,62,,64,65,,,,48,49,51,50,58,61,52,,59,53,,54,335,,,,,,',
+',,,,,,,,,18,58,61,,,59,42,,45,,47,12,,46,36,39,,44,37,10,11,,,66,17',
+',,38,,,15,16,,,,,,,67,,133,,,130,43,,,,35,62,,64,65,,,,48,49,51,50,58',
+'61,52,67,59,53,,54,374,,,,,134,62,,,,,,,,,,18,,,,,,42,,45,,47,113,,46',
+'36,39,,44,37,,,,,66,17,,,38,,,15,16,,,,,58,61,67,,59,53,,54,43,,,,35',
+'62,,64,65,,,,48,49,51,50,18,,52,,,,42,,45,,47,12,,46,36,39,,44,37,10',
+'11,,,66,17,,,38,,,15,16,,,,,58,61,67,,59,53,,54,43,,,,35,62,,64,65,',
+',,48,49,51,50,18,,52,,,,42,,45,,47,12,,46,36,39,,44,37,10,11,,,66,17',
+',,38,,,15,16,,,,,58,61,67,,59,53,,54,43,,,,35,62,,64,65,,,,48,49,51',
+'50,18,,52,,,,42,,45,,47,12,,46,36,39,,44,37,10,11,,,66,17,,,38,,,15',
+'16,,,,,58,61,67,,59,53,,54,43,,,,35,62,,64,65,,,,48,49,51,50,18,,52',
+',,,42,,45,,47,113,,46,36,39,,44,37,,,,,66,17,,,38,,,15,16,,,,,,,67,',
+',,,,43,,,,35,62,,64,65,,,,48,49,51,50,58,61,52,,59,53,,54,141,,,,,,',
+',,,,,,,,,18,,,,,,42,,45,,47,12,,46,36,39,,44,37,10,11,,,66,17,,,38,',
+',15,16,,,,,58,61,67,,59,53,,54,43,,,,35,62,,64,65,,,,48,49,51,50,18',
+'241,52,,,,42,,45,,47,12,,46,36,39,,44,37,10,11,,,66,17,,,38,,,15,16',
+',,,,58,61,67,,59,53,,54,43,,,,35,62,,64,65,,,,48,49,51,50,18,,52,,,',
+'42,,45,,47,12,,46,36,39,,44,37,10,11,,,66,17,,,38,,,15,16,,,,,58,61',
+'67,,59,53,,54,43,,,,35,62,,64,65,,,,48,49,51,50,18,,52,,,,42,,45,,47',
+'113,,46,36,39,,44,37,,,,,66,17,,,38,,,15,16,,,,,58,61,67,,59,53,,54',
+'43,,,,35,62,,64,65,,,,48,49,51,50,18,,52,,,,42,,45,,47,113,,46,36,39',
+',44,37,,,,,66,17,,,38,,,15,16,,,,,58,61,67,,59,53,,54,43,,,,35,62,,64',
+'65,,,,48,49,51,50,18,,52,,,,42,,45,,47,113,,46,36,39,,44,37,,,,,66,17',
+',,38,,,15,16,,,,,58,61,67,,59,53,,54,43,,,,35,62,,64,65,,,,48,49,51',
+'50,18,,52,,,,42,,45,,47,113,,46,36,39,,44,37,,,,,66,17,,,38,,,15,16',
+',,,,58,61,67,,59,53,,54,43,,,,35,62,,64,65,,,,48,49,51,50,18,,52,,,',
+'42,,45,,47,113,,46,36,39,,44,37,,,,,66,17,,,38,,,15,16,,,,,58,61,67',
+',59,53,,54,43,,,,35,62,,64,65,,,,48,49,51,50,18,,52,,,,42,,45,,47,113',
+',46,36,39,,44,37,,,,,66,17,,,38,,,15,16,,,,,58,61,67,,59,53,,54,43,',
+',,35,62,,64,65,,,,48,49,51,50,18,,52,,,,42,,45,,47,113,,46,36,39,,44',
+'37,,,,,66,17,,,38,,,15,16,,,,,,,67,,,,,,43,,,,35,62,,64,65,80,,,48,49',
+'51,50,,,52,,,96,91,101,,104,,99,100,,92,94,93,95,,58,61,,,59,,,,,,,',
+',,103,,,,98,97,,,84,85,87,86,89,90,,82,83,80,,244,,,81,,,133,,,130,96',
+'91,101,,104,,99,100,,92,94,93,95,,88,,,,,67,,,,,,,,,103,134,62,,98,97',
+',,84,85,87,86,89,90,,82,83,80,,243,,,81,,,,,,,96,91,101,,104,80,99,100',
+',92,94,93,95,,88,,,,,101,,104,,99,100,,,,103,,,,98,97,,,84,85,87,86',
+'89,90,,82,83,103,,,,,81,,,,,87,86,,,,82,83,80,,242,,,81,,,,88,,,96,91',
+'101,,104,80,99,100,,92,94,93,95,,88,,,,,101,,104,,99,100,,,,103,,,,98',
+'97,,80,84,85,87,86,89,90,,82,83,103,,96,91,101,81,104,,99,100,,92,94',
+'93,95,82,83,,,,,,81,,,,88,,,,103,,,,98,97,,80,84,85,87,86,89,90,,82',
+'83,,,96,91,101,81,104,,99,100,,92,94,93,95,,,,,,,,,,,,88,,,,103,,,,98',
+'97,,,84,85,87,86,89,90,,82,83,,,,,,81,80,,,,,,,,,,267,268,96,91,101',
+'303,104,80,99,100,88,92,94,93,95,,,,,,,101,,104,,99,100,,,,103,,,,98',
+'97,,,84,85,87,86,89,90,,82,83,103,,,,,81,,,84,85,87,86,,,,82,83,80,',
+',,,81,,,,88,,,96,91,101,,104,,99,100,,92,94,93,95,,88,,,,,,,,,,,,,,103',
+',,,98,97,,,84,85,87,86,89,90,80,82,83,,,279,,,81,,,,96,91,101,,104,80',
+'99,100,,92,94,93,95,,,,,88,,101,,104,,99,100,,,,103,,,,98,97,,80,84',
+'85,87,86,89,90,,82,83,103,,96,91,101,81,104,,99,100,,92,94,93,95,82',
+'83,,,,,,81,,,,88,,,,103,,,,,97,,80,84,85,87,86,89,90,,82,83,,,96,91',
+'101,81,104,,99,100,,92,94,93,95,,,,,,,,,,,,88,,,,103,,,,98,97,,,84,85',
+'87,86,89,90,80,82,83,,,,,,81,,,,96,91,101,271,104,80,99,100,,92,94,93',
+'95,,,,,88,,101,,104,,99,100,,,,103,,,,98,97,,80,84,85,87,86,89,90,,82',
+'83,103,,96,91,101,81,104,,99,100,,92,94,93,95,82,83,,,,,,81,,,,88,,',
+',103,,,,98,97,,80,84,85,87,86,89,90,,82,83,,,96,91,101,81,104,,99,100',
+',92,94,93,95,,,,,,,,,,,,88,,,,103,,,,98,97,,80,84,85,87,86,89,90,,82',
+'83,,,96,91,101,81,104,,99,100,,92,94,93,95,80,,,,,,,,,,,88,,91,101,103',
+'104,,99,100,80,92,,84,85,87,86,89,90,,82,83,,,101,,104,81,99,100,103',
+',,,,,,,84,85,87,86,89,90,,82,83,,88,,103,,81,,,,,,84,85,87,86,80,,,82',
+'83,,,,,,81,88,96,91,101,,104,,99,100,80,92,94,93,95,,,,,,,88,,,101,',
+'104,,99,100,103,,,,98,97,,,84,85,87,86,89,90,,82,83,,,,103,,81,,,,,',
+',,87,86,80,,,82,83,,,,,,81,88,96,91,101,,104,,99,100,80,92,94,93,95',
+',,,,,,88,,91,101,,104,,99,100,103,92,,,98,97,,,84,85,87,86,89,90,,82',
+'83,,,,103,,81,,,,,,84,85,87,86,89,90,80,82,83,,,,,,81,88,,,96,91,101',
+',104,80,99,100,,92,94,93,95,,,,,88,,101,,104,,99,100,,,,103,,,,98,97',
+',,84,85,87,86,89,90,,82,83,103,,,,,81,,,84,85,87,86,89,90,80,82,83,',
+',,,,81,,,,88,,101,,104,80,99,100,,,,,,,,,,88,91,101,,104,,99,100,,92',
+',103,,,,,,,,84,85,87,86,89,90,,82,83,103,,,,,81,,,84,85,87,86,89,90',
+'80,82,83,,,,,,81,,,,88,91,101,,104,,99,100,,92,,,,,,,,88,,,,,,,,,,,103',
+',,,,,,,84,85,87,86,89,90,,82,83,,,,,,81,,,291,183,290,184,,288,186,292',
+',285,,287,289,,,,,,88,187,182,293,,,,286,,,,,,,,,,,,185,294,,,,,,,,',
+',,,297,298,296,295,291,183,290,184,,288,186,292,,285,,287,289,,,,,,',
+'187,182,293,,,,286,,,,,,,,,,,,185,294,,,,,,,,,,,,297,298,296,295,291',
+'183,290,184,,288,186,292,,285,,287,289,,,,,,,187,182,293,,,,286,,,,',
+',,,,,,,185,294,,,,,,,,,,,,297,298,296,295,291,183,290,184,,288,186,292',
+',285,,287,289,,,,,,,187,182,293,,,,286,,,,,,,,,,,,185,294,,,,,,,,,,',
+',297,298,296,295' ]
+ racc_action_table = arr = ::Array.new(6809, nil)
idx = 0
clist.each do |str|
str.split(',', -1).each do |i|
arr[idx] = i.to_i unless i.empty?
idx += 1
end
end
clist = [
-'0,0,132,202,0,0,238,0,199,306,207,201,228,206,154,238,40,164,306,241',
-'241,117,164,241,0,145,228,132,145,315,0,126,0,315,0,0,315,0,0,0,266',
-'0,0,0,0,202,235,0,0,154,199,0,207,201,0,206,117,244,241,385,385,241',
-'0,385,385,142,385,385,0,140,142,270,0,0,140,0,0,0,205,205,0,241,205',
-'385,0,348,204,348,131,385,204,385,241,385,385,49,385,385,385,49,385',
-'385,385,385,152,152,385,385,152,274,385,203,111,385,116,203,111,205',
-'5,5,205,385,5,5,303,5,303,385,331,331,276,385,385,158,385,385,124,243',
-'243,385,205,243,5,385,8,8,8,8,5,227,5,205,5,5,225,5,5,5,5,5,5,5,5,280',
-'124,5,5,221,124,5,150,150,5,294,296,298,243,384,384,243,5,384,384,299',
-'384,384,5,107,302,236,5,5,304,5,5,46,50,50,5,243,50,384,5,305,220,105',
-'309,384,310,384,243,384,384,311,384,384,384,312,384,384,384,384,313',
-'46,384,384,218,46,384,317,76,384,74,64,63,50,382,382,50,384,382,382',
-'330,382,382,384,134,216,333,384,384,196,384,384,335,47,195,384,50,342',
-'382,384,343,41,239,260,382,351,382,50,382,382,352,382,382,382,354,382',
-'382,382,382,355,359,382,382,360,361,382,39,367,382,368,371,240,6,189',
-'189,1,382,189,189,389,189,393,382,395,397,,382,382,167,382,382,,,,382',
-',,189,382,,,,167,189,167,189,167,189,189,,189,189,189,,189,189,,,,,189',
-'189,,,189,,,189,167,,,,12,12,,189,12,12,,12,,189,,,,189,189,170,189',
-'189,167,,,189,,,12,189,,,,170,12,170,12,170,12,12,,12,12,12,,12,12,',
-',,,12,12,,,12,,,12,170,,,,13,13,,12,13,13,,13,,12,,170,170,12,12,171',
-'12,12,170,,,12,,,13,12,,,,171,13,171,13,171,13,13,,13,13,13,,13,13,',
-',,,13,13,,,13,,,13,171,,,,14,14,,13,14,14,,14,,13,,171,171,13,13,166',
-'13,13,171,,,13,,,14,13,,,,166,14,166,14,166,14,14,,14,14,14,,14,14,',
-',,,14,14,,,14,,,14,166,,,,363,363,,14,363,363,,363,,14,,,,14,14,172',
-'14,14,166,,,14,,,363,14,,,,172,363,172,363,172,363,363,,363,363,363',
-',363,363,363,363,,,363,363,,,363,,,363,172,,,,350,350,,363,350,350,',
-'350,,363,,172,172,363,363,165,363,363,172,,,363,,,350,363,,,,165,350',
-'165,350,165,350,350,,350,350,350,,350,350,,,,,350,350,,,350,,,350,165',
-',,,192,192,,350,192,192,,192,,350,,,,350,350,112,350,350,,,,350,,,192',
-'350,,,,112,192,112,192,112,192,192,,192,192,192,,192,192,,,,,192,192',
-',,192,,,192,112,,,,42,42,,192,42,42,,42,,192,,,,192,192,110,192,192',
-',,,192,,,42,192,,,,110,42,110,42,110,42,42,,42,42,42,,42,42,,,,,42,42',
-',,42,,,42,110,,,,43,43,,42,43,43,,43,,42,,,,42,42,,42,42,,,,42,,,43',
-'42,,,,,43,,43,,43,43,,43,43,43,,43,43,,,,,43,43,,,43,,,43,,,,,44,44',
-',43,44,44,,44,,43,,,,43,43,,43,43,,,,43,,,44,43,,,,,44,,44,,44,44,,44',
-'44,44,,44,44,,,,,44,44,,,44,,,44,,,,,45,45,,44,45,45,,45,,44,,,,44,44',
-',44,44,,,,44,,,45,44,,,,,45,,45,,45,45,,45,45,45,,45,45,,,,,45,45,,',
-'45,,,45,,,,,193,193,,45,193,193,,193,,45,,,,45,45,,45,45,,,,45,,,193',
-'45,,,,,193,,193,,193,193,,193,193,193,,193,193,,,,,193,193,,,193,,,193',
-',,,,194,194,,193,194,194,,194,,193,,,,193,193,,193,193,,,,193,,,194',
-'193,,,,,194,,194,,194,194,,194,194,194,,194,194,,,,,194,194,,,194,,',
-'194,,,,,334,334,,194,334,334,,334,,194,,,,194,194,,194,194,,,,194,,',
-'334,194,,,,,334,,334,,334,334,,334,334,334,,334,334,,,,,334,334,,,334',
-',,334,,,,,223,223,,334,223,223,,223,223,334,,,,334,334,,334,334,,,,334',
-',,223,334,,,,,223,,223,,223,223,,223,223,223,,223,223,223,223,,,223',
-'223,,,223,,,223,,,,,53,53,,223,53,53,53,53,,223,,,,223,223,,223,223',
-',,,223,,,53,223,,,,,53,,53,,53,53,,53,53,53,,53,53,,,,,53,53,,,53,,',
-'53,,,,,54,54,,53,54,54,54,54,,53,,,,53,53,,53,53,,,,53,,,54,53,,,,,54',
-',54,,54,54,,54,54,54,,54,54,,,,,54,54,,,54,,,54,,,,,55,55,,54,55,55',
-',55,55,54,,,,54,54,,54,54,,,,54,,,55,54,,,,,55,,55,,55,55,,55,55,55',
-',55,55,,,,,55,55,,,55,,,55,,,,,61,61,,55,61,61,,61,,55,,,,55,55,,55',
-'55,,,,55,,,61,55,,,,,61,,61,,61,61,,61,61,61,,61,61,,,,,61,61,,,61,',
-',61,,,,,230,230,,61,230,230,,230,230,61,,,,61,61,,61,61,,,,61,,,230',
-'61,,,,,230,,230,,230,230,,230,230,230,,230,230,230,230,,,230,230,,,230',
-',,230,,,,,156,156,,230,156,156,,156,156,230,,,,230,230,,230,230,,,,230',
-',,156,230,,,,,156,,156,,156,156,,156,156,156,,156,156,156,156,,,156',
-'156,,,156,,,156,,,,,66,66,,156,66,66,,66,,156,,,,156,156,,156,156,,',
-',156,,,66,156,,,,,66,,66,,66,66,,66,66,66,,66,66,,,,,66,66,,,66,,,66',
-',,,,319,319,,66,319,319,,319,319,66,,,,66,66,,66,66,,,,66,,,319,66,',
-',,,319,,319,,319,319,,319,319,319,,319,319,319,319,,,319,319,,,319,',
-',319,,,,,75,75,,319,75,75,,75,,319,,,,319,319,,319,319,,,,319,,,75,319',
-',,,,75,,75,,75,75,,75,75,75,,75,75,75,75,,,75,75,,,75,,,75,,,,,318,318',
-',75,318,318,,318,,75,,,,75,75,,75,75,,,,75,,,318,75,,,,,318,,318,,318',
-'318,,318,318,318,,318,318,318,318,,,318,318,,,318,,,318,,,,,77,77,,318',
-'77,77,,77,,318,,,,318,318,,318,318,,,,318,,,77,318,,,,,77,,77,,77,77',
-',77,77,77,,77,77,77,77,,,77,77,,,77,,,77,,,,,78,78,,77,78,78,,78,,77',
-',,,77,77,,77,77,,,,77,,,78,77,,,,,78,,78,,78,78,,78,78,78,,78,78,78',
-'78,,,78,78,,,78,,,78,,,,,79,79,,78,79,79,,79,,78,,,,78,78,,78,78,,,',
-'78,,,79,78,,,,,79,,79,,79,79,,79,79,79,,79,79,79,79,,,79,79,,,79,,,79',
-',,,,80,80,,79,80,80,,80,,79,,,,79,79,,79,79,,,,79,,,80,79,,,,,80,,80',
-',80,80,,80,80,80,,80,80,80,80,,,80,80,,,80,,,80,,,,,81,81,,80,81,81',
-',81,,80,,,,80,80,,80,80,,,,80,,,81,80,,,,,81,,81,,81,81,,81,81,81,,81',
-'81,81,81,,,81,81,,,81,,,81,,,,,82,82,,81,82,82,,82,,81,,,,81,81,,81',
-'81,,,,81,,,82,81,,,,,82,,82,,82,82,,82,82,82,,82,82,,,,,82,82,,,82,',
-',82,,,,,83,83,,82,83,83,,83,,82,,,,82,82,,82,82,,,,82,,,83,82,,,,,83',
-',83,,83,83,,83,83,83,,83,83,,,,,83,83,,,83,,,83,,,,,84,84,,83,84,84',
-',84,,83,,,,83,83,,83,83,,,,83,,,84,83,,,,,84,,84,,84,84,,84,84,84,,84',
-'84,,,,,84,84,,,84,,,84,,,,,85,85,,84,85,85,,85,,84,,,,84,84,,84,84,',
-',,84,,,85,84,,,,,85,,85,,85,85,,85,85,85,,85,85,,,,,85,85,,,85,,,85',
-',,,,86,86,,85,86,86,,86,,85,,,,85,85,,85,85,,,,85,,,86,85,,,,,86,,86',
-',86,86,,86,86,86,,86,86,,,,,86,86,,,86,,,86,,,,,87,87,,86,87,87,,87',
-',86,,,,86,86,,86,86,,,,86,,,87,86,,,,,87,,87,,87,87,,87,87,87,,87,87',
-',,,,87,87,,,87,,,87,,,,,88,88,,87,88,88,,88,,87,,,,87,87,,87,87,,,,87',
-',,88,87,,,,,88,,88,,88,88,,88,88,88,,88,88,,,,,88,88,,,88,,,88,,,,,89',
-'89,,88,89,89,,89,,88,,,,88,88,,88,88,,,,88,,,89,88,,,,,89,,89,,89,89',
-',89,89,89,,89,89,,,,,89,89,,,89,,,89,,,,,90,90,,89,90,90,,90,,89,,,',
-'89,89,,89,89,,,,89,,,90,89,,,,,90,,90,,90,90,,90,90,90,,90,90,,,,,90',
-'90,,,90,,,90,,,,,91,91,,90,91,91,,91,,90,,,,90,90,,90,90,,,,90,,,91',
-'90,,,,,91,,91,,91,91,,91,91,91,,91,91,,,,,91,91,,,91,,,91,,,,,92,92',
-',91,92,92,,92,,91,,,,91,91,,91,91,,,,91,,,92,91,,,,,92,,92,,92,92,,92',
-'92,92,,92,92,,,,,92,92,,,92,,,92,,,,,93,93,,92,93,93,,93,,92,,,,92,92',
-',92,92,,,,92,,,93,92,,,,,93,,93,,93,93,,93,93,93,,93,93,,,,,93,93,,',
-'93,,,93,,,,,94,94,,93,94,94,,94,,93,,,,93,93,,93,93,,,,93,,,94,93,,',
-',,94,,94,,94,94,,94,94,94,,94,94,,,,,94,94,,,94,,,94,,,,,95,95,,94,95',
-'95,,95,,94,,,,94,94,,94,94,,,,94,,,95,94,,,,,95,,95,,95,95,,95,95,95',
-',95,95,,,,,95,95,,,95,,,95,,,,,96,96,,95,96,96,,96,,95,,,,95,95,,95',
-'95,,,,95,,,96,95,,,,,96,,96,,96,96,,96,96,96,,96,96,,,,,96,96,,,96,',
-',96,,,,,97,97,,96,97,97,,97,,96,,,,96,96,,96,96,,,,96,,,97,96,,,,,97',
-',97,,97,97,,97,97,97,,97,97,,,,,97,97,,,97,,,97,,,,,98,98,,97,98,98',
-',98,,97,,,,97,97,,97,97,,,,97,,,98,97,,,,,98,,98,,98,98,,98,98,98,,98',
-'98,,,,,98,98,,,98,,,98,,,,,99,99,,98,99,99,,99,,98,,,,98,98,,98,98,',
-',,98,,,99,98,,,,,99,,99,,99,99,,99,99,99,,99,99,,,,,99,99,,,99,,,99',
-',,,,100,100,,99,100,100,,100,,99,,,,99,99,,99,99,,,,99,,,100,99,,,,',
-'100,,100,,100,100,,100,100,100,,100,100,,,,,100,100,,,100,,,100,,,,',
-'101,101,,100,101,101,,101,,100,,,,100,100,,100,100,,,,100,,,101,100',
-',,,,101,,101,,101,101,,101,101,101,,101,101,,,,,101,101,,,101,,,101',
-',,,,102,102,,101,102,102,,102,,101,,,,101,101,,101,101,,,,101,,,102',
-'101,,,,,102,,102,,102,102,,102,102,102,,102,102,,,,,102,102,,,102,,',
-'102,,,,,103,103,,102,103,103,,103,,102,,,,102,102,,102,102,,,,102,,',
-'103,102,,,,,103,,103,,103,103,,103,103,103,,103,103,,,,,103,103,,,103',
-',,103,,,,,104,104,,103,104,104,,104,,103,,,,103,103,,103,103,,,,103',
-',,104,103,,,,,104,,104,,104,104,,104,104,104,,104,104,,,,,104,104,,',
-'104,,,104,,,,,307,307,,104,307,307,,307,307,104,,,104,104,104,,104,104',
-',,,104,,,307,104,,,,,307,,307,,307,307,,307,307,307,,307,307,,,,,307',
-'307,,,307,,,307,,,,,106,106,,307,106,106,,106,,307,,,,307,307,,307,307',
-',,,307,,,106,307,,,,,106,106,106,106,106,106,106,106,106,106,,106,106',
-',,,,106,106,106,106,106,,,106,,,,,300,300,,106,300,300,,300,106,106',
-',,,106,106,,106,106,,,,106,,,300,106,,,,,300,,300,,300,300,,300,300',
-'300,,300,300,,,,,300,300,,,300,,,300,,,,,108,108,,300,108,108,,108,',
-'300,,,,300,300,,300,300,,,,300,,,108,300,,,,,108,,108,,108,108,,108',
-'108,108,,108,108,,,,,108,108,,,108,,,108,,,,,109,109,,108,109,109,,109',
-',108,,,,108,108,,108,108,,,,108,,,109,108,,,,,109,,109,,109,109,,109',
-'109,109,,109,109,,,,,109,109,,,109,,,109,,,,,293,293,,109,293,293,,293',
-',109,,,,109,109,,109,109,,,,109,,,293,109,,,,,293,,293,,293,293,,293',
-'293,293,,293,293,,,,,293,293,,,293,,,293,,,,,279,279,,293,279,279,,279',
-',293,,,,293,293,,293,293,,,,293,,,279,293,,,,,279,,279,,279,279,,279',
-'279,279,,279,279,,,,,279,279,,,279,,,279,,,,,278,278,,279,278,278,,278',
-',279,,,,279,279,,279,279,,,,279,,,278,279,,,,,278,,278,,278,278,,278',
-'278,278,,278,278,,,,,278,278,,,278,,,278,,,,,231,231,,278,231,231,,231',
-'231,278,,,,278,278,168,278,278,,,,278,,,231,278,,,,168,231,168,231,168',
-'231,231,,231,231,231,,231,231,231,231,,,231,231,,,231,,,231,168,,,,',
-',,231,,,168,168,,231,,168,168,231,231,,231,231,168,114,114,231,,114',
-'114,231,114,,,,,,,,,,168,,,,,,,114,114,,,,,114,,114,,114,114,,114,114',
-'114,,114,114,,,,,114,114,,,114,,,114,,,,,275,275,,114,275,275,,275,',
-'114,,,,114,114,,114,114,,,,114,,,275,114,,,,,275,,275,,275,275,,275',
-'275,275,,275,275,,,,,275,275,,,275,,,275,,,,,269,269,,275,269,269,,269',
-',275,,,,275,275,169,275,275,,,,275,,,269,275,,,,169,269,169,269,169',
-'269,269,,269,269,269,,269,269,,,,,269,269,,,269,,,269,169,,,,,,,269',
-',,169,169,,269,,169,169,269,269,,269,269,169,118,118,269,,118,118,269',
-'118,,,,,,,,,,169,,,,,,,118,118,,,,,118,,118,,118,118,,118,118,118,,118',
-'118,,,,,118,118,,,118,,,118,,,,,153,153,,118,153,153,,153,,118,,,,118',
-'118,,118,118,,,,118,,,153,118,,,,,153,,153,,153,153,,153,153,153,,153',
-'153,153,153,,,153,153,,,153,,,153,,,,,232,232,,153,232,232,,232,,153',
-',,,153,153,,153,153,,,,153,,,232,153,,,,,232,,232,,232,232,,232,232',
-'232,,232,232,,,,,232,232,,,232,,,232,,,,,247,247,,232,247,247,247,247',
-',232,,,,232,232,,232,232,,,,232,,,247,232,,,,,247,,247,,247,247,,247',
-'247,247,,247,247,,,,,247,247,,,247,,,247,,,,,234,234,,247,234,234,,234',
-',247,,,,247,247,,247,247,,,,247,,,234,247,,,,,234,,234,,234,234,,234',
-'234,234,,234,234,,,,,234,234,,,234,,,234,,,,,268,268,,234,268,268,,268',
-',234,,,,234,234,,234,234,,,,234,,,268,234,,,,,268,,268,,268,268,,268',
-'268,268,,268,268,,,,,268,268,,,268,,,268,,,,,125,125,,268,125,125,,125',
-',268,,,,268,268,,268,268,,,,268,,,125,268,,,,,125,,125,,125,125,,125',
-'125,125,,125,125,,,,,125,125,,,125,,,125,,,,,245,245,,125,245,245,245',
-'245,,125,,,,125,125,,125,125,,,,125,,,245,125,,,,,245,,245,,245,245',
-',245,245,245,,245,245,,,,,245,245,,,245,,,245,,,,,256,256,,245,256,256',
-',256,,245,,,,245,245,,245,245,,,,245,,,256,245,,,,,256,,256,,256,256',
-',256,256,256,,256,256,,,,,256,256,,,256,,,256,,,,,251,251,,256,251,251',
-',251,251,256,,,,256,256,,256,256,,,,256,,,251,256,,,,,251,,251,,251',
-'251,,251,251,251,,251,251,,,,,251,251,,,251,,,251,,,,,249,249,,251,249',
-'249,,249,,251,,,,251,251,,251,251,,,,251,,,249,251,,,,,249,,249,,249',
-'249,,249,249,249,,249,249,,,,,249,249,,,249,,,249,,,,,233,233,,249,233',
-'233,,233,,249,,,,249,249,,249,249,,,,249,,,233,249,,,,,233,233,233,233',
-'233,233,233,233,233,233,,233,233,,,,,233,233,233,233,233,,,233,,,,,',
-',,233,,,,,233,233,,,,233,233,,233,233,139,,,233,,,,233,,139,139,139',
-'139,139,139,,139,,139,,,139,139,139,139,,,,,,,,,,,,,,,,139,,,,139,139',
-',,139,139,139,139,139,139,,139,139,,,,,265,139,265,,,265,,,,265,265',
-'265,265,265,265,,265,,265,139,,265,265,265,265,,,,,,,,,,,,,,,,265,,',
-',265,265,,,265,265,265,265,265,265,144,265,265,,,144,,,265,144,144,144',
-'144,144,144,,144,,144,,,144,144,144,144,,265,,,,,,,,,,,,,,144,,,,144',
-'144,,,144,144,144,144,144,144,,144,144,,,,,123,144,123,,,,,,,123,123',
-'123,123,123,123,,123,,123,144,,123,123,123,123,,,,,,,,,,,,,,,,123,,',
-',123,123,,,123,123,123,123,123,123,148,123,123,,,,,,123,148,148,148',
-'148,148,148,,148,,148,,,148,148,148,148,,123,,,,,,,,,,,,,,148,,,,148',
-'148,,,148,148,148,148,148,148,,148,148,,,,,122,148,122,,,,,,,122,122',
-'122,122,122,122,,122,,122,148,,122,122,122,122,,,,,,,,,,,,,,,,122,,',
-',122,122,,,122,122,122,122,122,122,,122,122,,,,,121,122,121,,,,,,,121',
-'121,121,121,121,121,,121,,121,122,,121,121,121,121,,,,,,,,,,,,,,,,121',
-',,,121,121,,,121,121,121,121,121,121,,121,121,,,,,119,121,119,,,,,,',
-'119,119,119,119,119,119,,119,,119,121,,119,119,119,119,,,,,,,,,,,,,',
-',,119,,,,119,119,,,119,119,119,119,119,119,113,119,119,,,,,,119,113',
-'113,113,113,113,113,,113,,113,,113,113,113,113,113,,119,,,,,,,,,,,,',
-',113,,,,113,113,,,113,113,113,113,113,113,155,113,113,,,,,,113,155,155',
-'155,155,155,155,,155,,155,,,155,155,155,155,,113,,,,,,,,,,,,,,155,,',
-',155,155,,,155,155,155,155,155,155,323,155,155,,,,,,155,323,323,323',
-'323,323,323,,323,,323,155,155,323,323,323,323,,155,,,,,,,,,,,,,,323',
-',,,323,323,,,323,323,323,323,323,323,326,323,323,,,,,,323,326,326,326',
-'326,326,326,,326,,326,,,326,326,326,326,,323,,,,,,,,,,,,,,326,,,,326',
-'326,,,326,326,326,326,326,326,332,326,326,,,,,,326,332,332,332,332,332',
-'332,,332,,332,,,332,332,332,332,,326,,,,,,,,,,,,,,332,,,,332,332,,,332',
-'332,332,332,332,332,215,332,332,,,,,,332,215,215,215,215,215,215,,215',
-',215,,,215,215,215,215,,332,,,,,,,,,,,,,,215,,,,215,215,,,215,215,215',
-'215,215,215,340,215,215,,,,,,215,340,340,340,340,340,340,,340,,340,',
-',340,340,340,340,,215,,,,,,,,,,,,,,340,,,,340,340,,,340,340,340,340',
-'340,340,341,340,340,,,,,,340,341,341,341,341,341,341,,341,,341,,,341',
-'341,341,341,,340,,,,,,,,,,,,,,341,,,,341,341,,,341,341,341,341,341,341',
-'347,341,341,,,,,,341,347,347,347,347,347,347,,347,,347,,,347,347,347',
-'347,,341,,,,,,,,,,,,,,347,,,,347,347,,,347,347,347,347,347,347,191,347',
-'347,,,,,,347,191,191,191,191,191,191,191,191,,191,,,191,191,191,191',
-',347,,,,,,,,,,,,,,191,,,,191,191,,,191,191,191,191,191,191,,191,191',
-',,,,11,191,11,,,,,,,11,11,11,11,11,11,,11,173,11,191,,11,11,11,11,,',
-',,,,173,,173,,173,,,,,11,,,,11,11,,,11,11,11,11,11,11,,11,11,173,,,174',
-',11,,,173,173,173,173,,,,173,173,174,,174,175,174,173,11,,,,,,,,,,,175',
-',175,,175,,173,,,174,,,,,,,,174,174,174,174,,,,174,174,175,,,176,,174',
-',,175,175,175,175,175,175,,175,175,176,,176,177,176,175,174,,,,,,,,',
-',177,177,,177,,177,,175,177,,176,,,,,,,,176,176,176,176,176,176,,176',
-'176,177,,,,,176,,,177,177,177,177,177,177,178,177,177,,,,,,177,176,',
-',,178,178,,178,179,178,,,178,,,,,177,,,,179,179,,179,,179,,,179,,178',
-',,,,,,,178,178,178,178,178,178,,178,178,179,,,,,178,,,179,179,179,179',
-'179,179,180,179,179,,,,,,179,178,,,,180,180,,180,,180,,181,180,,,,,179',
-',,,,,181,181,181,,181,,181,,180,181,181,181,181,,,,180,180,180,180,180',
-'180,,180,180,,,,181,,180,,,182,,,181,181,181,181,181,181,,181,181,182',
-'182,182,180,182,181,182,,183,182,182,182,182,,,,,183,183,183,183,183',
-'183,181,183,,183,,182,183,183,183,183,182,,,182,182,182,182,182,182',
-',182,182,,,,183,,182,,183,183,,,183,183,183,183,183,183,186,183,183',
-',,186,182,,183,186,186,186,186,186,186,,186,,186,,,186,186,186,186,',
-'183,,,,,,,,,,,,,,186,,,,186,186,,,186,186,186,186,186,186,185,186,186',
-',,,,,186,185,185,185,185,185,185,,185,,185,,,185,185,185,185,,186,,',
-',,,,,,,,,,,185,,,,185,185,,,185,185,185,185,185,185,184,185,185,,,,',
-',185,184,184,184,184,184,184,,184,,184,,,184,184,184,184,,185,,,,,,',
-',,,,,,,184,,,,184,184,,,184,184,184,184,184,184,,184,184,,277,277,277',
-'277,184,277,277,277,277,277,,277,277,,,,,,,277,277,277,184,272,272,272',
-'272,,272,272,272,272,272,,272,272,,277,277,,,,272,272,272,214,214,214',
-'214,,214,214,214,214,214,,214,214,,,272,272,,,214,214,214,,,,,,,,,,',
-',,,,,214,214' ]
- racc_action_check = arr = ::Array.new(6559, nil)
+'0,0,352,174,0,0,240,0,180,191,178,181,238,248,168,167,179,166,145,145',
+'240,365,323,191,0,191,365,191,191,250,0,323,0,238,0,0,113,0,0,0,113',
+'0,0,0,0,174,248,0,0,191,180,0,178,181,0,0,168,167,179,166,403,403,0',
+'251,403,403,312,403,0,189,161,191,0,0,189,0,0,0,12,312,0,0,0,0,403,45',
+'0,312,160,45,403,119,403,159,403,403,150,403,403,403,140,403,403,140',
+'119,264,12,403,403,150,12,403,119,269,403,403,320,150,320,175,4,4,403',
+'175,4,4,119,4,403,270,306,150,403,403,306,403,403,306,340,340,403,403',
+'403,403,4,176,403,225,154,176,4,366,4,366,4,4,225,4,4,4,4,4,4,4,4,272',
+'164,4,4,225,246,4,148,148,4,4,148,225,143,245,398,398,4,138,398,398',
+'136,398,4,7,7,7,4,4,280,4,4,282,284,286,4,4,4,4,398,301,4,128,126,304',
+'398,239,398,308,398,398,309,398,398,398,311,398,398,398,398,237,125',
+'398,398,118,116,398,319,236,398,398,321,322,7,7,7,7,398,230,212,108',
+'327,106,398,339,217,341,398,398,105,398,398,102,101,70,398,398,398,398',
+'228,228,398,68,228,228,349,228,228,63,351,162,355,62,360,223,213,220',
+'41,369,370,372,373,376,228,177,177,40,383,177,228,384,228,390,228,228',
+'219,228,228,228,8,228,228,228,228,5,400,228,228,1,405,228,407,409,228',
+'228,413,,,,,,228,,177,,,177,228,,,,228,228,,228,228,,,,228,228,228,228',
+'397,397,228,177,397,397,,397,397,192,,,,177,177,,,,,,,,,192,397,192',
+',192,192,,397,,397,,397,397,,397,397,397,,397,397,397,397,,,397,397',
+'192,,397,,,397,397,,,,,211,211,397,,211,211,,211,397,,,192,397,397,',
+'397,397,,,,397,397,397,397,211,,397,,,,211,,211,,211,211,,211,211,211',
+',211,211,,,,,211,211,,,211,,,211,211,,,,,10,10,211,,10,10,,10,211,,',
+',211,211,,211,211,,,,211,211,211,211,10,,211,,,,10,,10,,10,10,,10,10',
+'10,,10,10,10,10,,,10,10,,,10,,,10,10,,,,,11,11,10,,11,11,,11,10,,,,10',
+'10,,10,10,,,,10,10,10,10,11,,10,,,,11,,11,,11,11,,11,11,11,,11,11,11',
+'11,,,11,11,,,11,,,11,11,,,,,,,11,,,,,,11,,,,11,11,,11,11,,,,11,11,11',
+'11,395,395,11,,395,395,,395,395,114,,,,,,,,,,,,,,114,395,114,,114,114',
+',395,,395,,395,395,,395,395,395,,395,395,395,395,,,395,395,114,,395',
+',,395,395,,,,,15,15,395,,15,15,,15,395,,,,395,395,,395,395,,,,395,395',
+'395,395,15,,395,,,,15,,15,,15,15,,15,15,15,,15,15,,,,,15,15,,,15,,,15',
+'15,,,,,16,16,15,,16,16,,16,15,,,,15,15,,15,15,,,,15,15,15,15,16,,15',
+',,,16,,16,,16,16,,16,16,16,,16,16,,,,,16,16,,,16,,,16,16,,,,,17,17,16',
+',17,17,,17,16,,,,16,16,,16,16,,,,16,16,16,16,17,,16,,,,17,,17,,17,17',
+',17,17,17,,17,17,,,,,17,17,,,17,,,17,17,,,,,18,18,17,,18,18,,18,17,',
+',,17,17,,17,17,,,,17,17,17,17,18,,17,,,,18,,18,,18,18,,18,18,18,,18',
+'18,18,18,,,18,18,,,18,,,18,18,,,,,,,18,,,,,,18,,,,18,18,,18,18,,,,18',
+'18,18,18,379,379,18,,379,379,,379,379,115,,,,,,,,,,,,,,115,379,115,',
+'115,115,,379,,379,,379,379,,379,379,379,,379,379,379,379,,,379,379,115',
+',379,,,379,379,,,,,368,368,379,,368,368,,368,379,,,,379,379,,379,379',
+',,,379,379,379,379,368,,379,,,,368,,368,,368,368,,368,368,368,,368,368',
+',,,,368,368,,,368,,,368,368,,,,,42,42,368,,42,42,,42,368,,,,368,368',
+',368,368,,,,368,368,368,368,42,,368,,,,42,,42,,42,42,,42,42,42,,42,42',
+',,,,42,42,,,42,,,42,42,,,,,43,43,42,,43,43,,43,42,,,,42,42,,42,42,,',
+',42,42,42,42,43,,42,,,,43,,43,,43,43,,43,43,43,,43,43,,,,,43,43,,,43',
+',,43,43,,,,,44,44,43,,44,44,,44,43,,,,43,43,,43,43,,,,43,43,43,43,44',
+',43,,,,44,,44,,44,44,,44,44,44,,44,44,,,,,44,44,,,44,,,44,44,,,,,,,44',
+',,,,,44,,,,44,44,,44,44,,,,44,44,44,44,242,242,44,,242,242,,242,242',
+',,,,,,,,,,,,,,,242,46,46,,,46,242,,242,,242,242,,242,242,242,,242,242',
+'242,242,,,242,242,,,242,,,242,242,,,,,,,242,,46,,,46,242,,,,242,242',
+',242,242,,,,242,242,242,242,243,243,242,46,243,243,,243,243,190,,,,46',
+'46,,,,,,,,,190,243,190,,190,190,,243,,243,,243,243,,243,243,243,,243',
+'243,243,243,,,243,243,190,,243,,,243,243,,,,,52,52,243,,52,52,52,52',
+'243,,,,243,243,,243,243,,,,243,243,243,243,52,,243,,,,52,,52,,52,52',
+',52,52,52,,52,52,52,52,,,52,52,,,52,,,52,52,,,,,53,53,52,,53,53,53,53',
+'52,,,,52,52,,52,52,,,,52,52,52,52,53,,52,,,,53,,53,,53,53,,53,53,53',
+',53,53,53,53,,,53,53,,,53,,,53,53,,,,,,,53,,,,,,53,,,,53,53,,53,53,',
+',,53,53,53,53,54,54,53,,54,54,,54,54,112,,,,,,,,,,,,,,112,54,112,,112',
+'112,,54,,54,,54,54,,54,54,54,,54,54,54,54,,,54,54,112,,54,,,54,54,,',
+',,60,60,54,,60,60,,60,54,,,,54,54,,54,54,,,,54,54,54,54,60,,54,,,,60',
+',60,,60,60,,60,60,60,,60,60,60,60,,,60,60,,,60,,,60,60,,,,,356,356,60',
+',356,356,,356,60,,,,60,60,,60,60,,,,60,60,60,60,356,,60,,,,356,,356',
+',356,356,,356,356,356,,356,356,356,356,,,356,356,,,356,,,356,356,,,',
+',350,350,356,,350,350,,350,356,,,,356,356,,356,356,,,,356,356,356,356',
+'350,,356,,,,350,,350,,350,350,,350,350,350,,350,350,,,,,350,350,,,350',
+',,350,350,,,,,65,65,350,,65,65,,65,350,,,,350,350,,350,350,,,,350,350',
+'350,350,65,,350,,,,65,,65,,65,65,,65,65,65,,65,65,,,,,65,65,,,65,,,65',
+'65,,,,,244,244,65,,244,244,,244,65,,,,65,65,,65,65,,,,65,65,65,65,244',
+',65,,,,244,,244,,244,244,,244,244,244,,244,244,,,,,244,244,,,244,,,244',
+'244,,,,,69,69,244,,69,69,,69,244,,,,244,244,,244,244,,,,244,244,244',
+'244,69,,244,,,,69,,69,,69,69,,69,69,69,,69,69,69,69,,,69,69,,,69,,,69',
+'69,,,,,171,171,69,,171,171,,171,69,,,,69,69,,69,69,,,,69,69,69,69,171',
+',69,,,,171,,171,,171,171,,171,171,171,,171,171,,,,,171,171,,,171,,,171',
+'171,,,,,71,71,171,,71,71,,71,171,,,,171,171,,171,171,,,,171,171,171',
+'171,71,,171,,,,71,,71,,71,71,,71,71,71,,71,71,71,71,,,71,71,,,71,,,71',
+'71,,,,,72,72,71,,72,72,,72,71,,,,71,71,,71,71,,,,71,71,71,71,72,,71',
+',,,72,,72,,72,72,,72,72,72,,72,72,72,72,,,72,72,,,72,,,72,72,,,,,73',
+'73,72,,73,73,,73,72,,,,72,72,,72,72,,,,72,72,72,72,73,,72,,,,73,,73',
+',73,73,,73,73,73,,73,73,73,73,,,73,73,,,73,,,73,73,,,,,74,74,73,,74',
+'74,,74,73,,,,73,73,,73,73,,,,73,73,73,73,74,,73,,,,74,,74,,74,74,,74',
+'74,74,,74,74,74,74,,,74,74,,,74,,,74,74,,,,,75,75,74,,75,75,,75,74,',
+',,74,74,,74,74,,,,74,74,74,74,75,,74,,,,75,,75,,75,75,,75,75,75,,75',
+'75,75,75,,,75,75,,,75,,,75,75,,,,,76,76,75,,76,76,,76,75,,,,75,75,,75',
+'75,,,,75,75,75,75,76,,75,,,,76,,76,,76,76,,76,76,76,,76,76,76,76,,,76',
+'76,,,76,,,76,76,,,,,77,77,76,,77,77,,77,76,,,,76,76,,76,76,,,,76,76',
+'76,76,77,,76,,,,77,,77,,77,77,,77,77,77,,77,77,77,77,,,77,77,,,77,,',
+'77,77,,,,,78,78,77,,78,78,,78,77,,,,77,77,,77,77,,,,77,77,77,77,78,',
+'77,,,,78,,78,,78,78,,78,78,78,,78,78,78,78,,,78,78,,,78,,,78,78,,,,',
+'79,79,78,,79,79,,79,78,,,,78,78,,78,78,,,,78,78,78,78,79,,78,,,,79,79',
+'79,79,79,79,79,79,79,79,,79,79,,,,,79,79,79,79,79,,,79,79,,,,,,,79,',
+',,,79,79,,,,79,79,,79,79,,,,79,79,79,79,80,80,79,,80,80,,80,,,,,,,,',
+',,,,,,,,80,,,,,,80,,80,,80,80,,80,80,80,,80,80,,,,,80,80,,,80,,,80,80',
+',,,,81,81,80,,81,81,,81,80,,,,80,80,,80,80,,,,80,80,80,80,81,,80,,,',
+'81,,81,,81,81,,81,81,81,,81,81,,,,,81,81,,,81,,,81,81,,,,,82,82,81,',
+'82,82,,82,81,,,,81,81,,81,81,,,,81,81,81,81,82,,81,,,,82,,82,,82,82',
+',82,82,82,,82,82,,,,,82,82,,,82,,,82,82,,,,,83,83,82,,83,83,,83,82,',
+',,82,82,,82,82,,,,82,82,82,82,83,,82,,,,83,,83,,83,83,,83,83,83,,83',
+'83,,,,,83,83,,,83,,,83,83,,,,,84,84,83,,84,84,,84,83,,,,83,83,,83,83',
+',,,83,83,83,83,84,,83,,,,84,,84,,84,84,,84,84,84,,84,84,,,,,84,84,,',
+'84,,,84,84,,,,,85,85,84,,85,85,,85,84,,,,84,84,,84,84,,,,84,84,84,84',
+'85,,84,,,,85,,85,,85,85,,85,85,85,,85,85,,,,,85,85,,,85,,,85,85,,,,',
+'86,86,85,,86,86,,86,85,,,,85,85,,85,85,,,,85,85,85,85,86,,85,,,,86,',
+'86,,86,86,,86,86,86,,86,86,,,,,86,86,,,86,,,86,86,,,,,87,87,86,,87,87',
+',87,86,,,,86,86,,86,86,,,,86,86,86,86,87,,86,,,,87,,87,,87,87,,87,87',
+'87,,87,87,,,,,87,87,,,87,,,87,87,,,,,88,88,87,,88,88,,88,87,,,,87,87',
+',87,87,,,,87,87,87,87,88,,87,,,,88,,88,,88,88,,88,88,88,,88,88,,,,,88',
+'88,,,88,,,88,88,,,,,89,89,88,,89,89,,89,88,,,,88,88,,88,88,,,,88,88',
+'88,88,89,,88,,,,89,,89,,89,89,,89,89,89,,89,89,,,,,89,89,,,89,,,89,89',
+',,,,90,90,89,,90,90,,90,89,,,,89,89,,89,89,,,,89,89,89,89,90,,89,,,',
+'90,,90,,90,90,,90,90,90,,90,90,,,,,90,90,,,90,,,90,90,,,,,91,91,90,',
+'91,91,,91,90,,,,90,90,,90,90,,,,90,90,90,90,91,,90,,,,91,,91,,91,91',
+',91,91,91,,91,91,,,,,91,91,,,91,,,91,91,,,,,92,92,91,,92,92,,92,91,',
+',,91,91,,91,91,,,,91,91,91,91,92,,91,,,,92,,92,,92,92,,92,92,92,,92',
+'92,,,,,92,92,,,92,,,92,92,,,,,93,93,92,,93,93,,93,92,,,,92,92,,92,92',
+',,,92,92,92,92,93,,92,,,,93,,93,,93,93,,93,93,93,,93,93,,,,,93,93,,',
+'93,,,93,93,,,,,94,94,93,,94,94,,94,93,,,,93,93,,93,93,,,,93,93,93,93',
+'94,,93,,,,94,,94,,94,94,,94,94,94,,94,94,,,,,94,94,,,94,,,94,94,,,,',
+'95,95,94,,95,95,,95,94,,,,94,94,,94,94,,,,94,94,94,94,95,,94,,,,95,',
+'95,,95,95,,95,95,95,,95,95,,,,,95,95,,,95,,,95,95,,,,,96,96,95,,96,96',
+',96,95,,,,95,95,,95,95,,,,95,95,95,95,96,,95,,,,96,,96,,96,96,,96,96',
+'96,,96,96,,,,,96,96,,,96,,,96,96,,,,,97,97,96,,97,97,,97,96,,,,96,96',
+',96,96,,,,96,96,96,96,97,,96,,,,97,,97,,97,97,,97,97,97,,97,97,,,,,97',
+'97,,,97,,,97,97,,,,,98,98,97,,98,98,,98,97,,,,97,97,,97,97,,,,97,97',
+'97,97,98,,97,,,,98,,98,,98,98,,98,98,98,,98,98,,,,,98,98,,,98,,,98,98',
+',,,,99,99,98,,99,99,,99,98,,,,98,98,,98,98,,,,98,98,98,98,99,,98,,,',
+'99,,99,,99,99,,99,99,99,,99,99,,,,,99,99,,,99,,,99,99,,,,,100,100,99',
+',100,100,,100,99,,,99,99,99,,99,99,,,,99,99,99,99,100,100,99,,,,100',
+',100,,100,100,,100,100,100,,100,100,100,100,,,100,100,,,100,,,100,100',
+',,,,278,278,100,,278,278,,278,100,,,,100,100,,100,100,,,,100,100,100',
+'100,278,,100,,,,278,,278,,278,278,,278,278,278,,278,278,,,,,278,278',
+',,278,,,278,278,,,,,169,169,278,,169,169,,169,278,,,,278,278,,278,278',
+',,,278,278,278,278,169,,278,,,,169,,169,,169,169,,169,169,169,,169,169',
+',,,,169,169,,,169,,,169,169,,,,,103,103,169,,103,103,,103,169,,,,169',
+'169,,169,169,,,,169,169,169,169,103,,169,,,,103,,103,,103,103,,103,103',
+'103,,103,103,,,,,103,103,,,103,,,103,103,,,,,104,104,103,,104,104,,104',
+'103,,,,103,103,,103,103,,,,103,103,103,103,104,,103,,,,104,,104,,104',
+'104,,104,104,104,,104,104,,,,,104,104,,,104,,,104,104,,,,,165,165,104',
+',165,165,,165,104,,165,,104,104,,104,104,,,,104,104,104,104,165,,104',
+',,,165,,165,,165,165,,165,165,165,,165,165,,,,,165,165,,,165,,,165,165',
+',,,,249,249,165,,249,249,,249,165,,,,165,165,,165,165,,,,165,165,165',
+'165,249,,165,,,,249,,249,,249,249,,249,249,249,,249,249,249,249,,,249',
+'249,,,249,,,249,249,,,,,107,107,249,,107,107,,107,249,,,,249,249,,249',
+'249,,,,249,249,249,249,107,,249,,,,107,,107,,107,107,,107,107,107,,107',
+'107,,,,,107,107,,,107,,,107,107,,,,,326,326,107,,326,326,,326,107,,',
+',107,107,,107,107,,,,107,107,107,107,326,,107,,,,326,,326,,326,326,',
+'326,326,326,,326,326,326,326,,,326,326,,,326,,,326,326,,,,,,,326,,,',
+',,326,,,,326,326,,326,326,,,,326,326,326,326,253,253,326,,253,253,,253',
+'253,,,,,,,,,,,,,,,,253,247,247,,,247,253,,253,,253,253,,253,253,253',
+',253,253,253,253,,,253,253,,,253,,,253,253,,,,,,,253,,247,,,247,253',
+',,,253,253,,253,253,,,,253,253,253,253,324,324,253,247,324,324,,324',
+'324,,,,,247,247,,,,,,,,,,324,,,,,,324,,324,,324,324,,324,324,324,,324',
+'324,,,,,324,324,,,324,,,324,324,,,,,254,254,324,,254,254,,254,324,,',
+',324,324,,324,324,,,,324,324,324,324,254,,324,,,,254,,254,,254,254,',
+'254,254,254,,254,254,254,254,,,254,254,,,254,,,254,254,,,,,259,259,254',
+',259,259,,259,254,,,,254,254,,254,254,,,,254,254,254,254,259,,254,,',
+',259,,259,,259,259,,259,259,259,,259,259,259,259,,,259,259,,,259,,,259',
+'259,,,,,317,317,259,,317,317,,317,259,,,,259,259,,259,259,,,,259,259',
+'259,259,317,,259,,,,317,,317,,317,317,,317,317,317,,317,317,317,317',
+',,317,317,,,317,,,317,317,,,,,316,316,317,,316,316,,316,317,,,,317,317',
+',317,317,,,,317,317,317,317,316,,317,,,,316,,316,,316,316,,316,316,316',
+',316,316,,,,,316,316,,,316,,,316,316,,,,,,,316,,,,,,316,,,,316,316,',
+'316,316,,,,316,316,316,316,152,152,316,,152,152,,152,152,,,,,,,,,,,',
+',,,,152,,,,,,152,,152,,152,152,,152,152,152,,152,152,152,152,,,152,152',
+',,152,,,152,152,,,,,120,120,152,,120,120,,120,152,,,,152,152,,152,152',
+',,,152,152,152,152,120,120,152,,,,120,,120,,120,120,,120,120,120,,120',
+'120,120,120,,,120,120,,,120,,,120,120,,,,,149,149,120,,149,149,,149',
+'120,,,,120,120,,120,120,,,,120,120,120,120,149,,120,,,,149,,149,,149',
+'149,,149,149,149,,149,149,149,149,,,149,149,,,149,,,149,149,,,,,274',
+'274,149,,274,274,,274,149,,,,149,149,,149,149,,,,149,149,149,149,274',
+',149,,,,274,,274,,274,274,,274,274,274,,274,274,,,,,274,274,,,274,,',
+'274,274,,,,,275,275,274,,275,275,,275,274,,,,274,274,,274,274,,,,274',
+'274,274,274,275,,274,,,,275,,275,,275,275,,275,275,275,,275,275,,,,',
+'275,275,,,275,,,275,275,,,,,313,313,275,,313,313,,313,275,,,,275,275',
+',275,275,,,,275,275,275,275,313,,275,,,,313,,313,,313,313,,313,313,313',
+',313,313,,,,,313,313,,,313,,,313,313,,,,,276,276,313,,276,276,,276,313',
+',,,313,313,,313,313,,,,313,313,313,313,276,,313,,,,276,,276,,276,276',
+',276,276,276,,276,276,,,,,276,276,,,276,,,276,276,,,,,302,302,276,,302',
+'302,,302,276,,,,276,276,,276,276,,,,276,276,276,276,302,,276,,,,302',
+',302,,302,302,,302,302,302,,302,302,,,,,302,302,,,302,,,302,302,,,,',
+'279,279,302,,279,279,,279,302,,,,302,302,,302,302,,,,302,302,302,302',
+'279,,302,,,,279,,279,,279,279,,279,279,279,,279,279,,,,,279,279,,,279',
+',,279,279,,,,,170,170,279,,170,170,,170,279,,,,279,279,,279,279,,,,279',
+'279,279,279,170,,279,,,,170,,170,,170,170,,170,170,170,,170,170,,,,',
+'170,170,,,170,,,170,170,,,,,,,170,,,,,,170,,,,170,170,,170,170,346,',
+',170,170,170,170,,,170,,,346,346,346,,346,,346,346,,346,346,346,346',
+',329,329,,,329,,,,,,,,,,346,,,,346,346,,,346,346,346,346,346,346,,346',
+'346,124,,124,,,346,,,329,,,329,124,124,124,,124,,124,124,,124,124,124',
+'124,,346,,,,,329,,,,,,,,,124,329,329,,124,124,,,124,124,124,124,124',
+'124,,124,124,123,,123,,,124,,,,,,,123,123,123,,123,193,123,123,,123',
+'123,123,123,,124,,,,,193,,193,,193,193,,,,123,,,,123,123,,,123,123,123',
+'123,123,123,,123,123,193,,,,,123,,,,,193,193,,,,193,193,121,,121,,,193',
+',,,123,,,121,121,121,,121,195,121,121,,121,121,121,121,,193,,,,,195',
+',195,,195,195,,,,121,,,,121,121,,216,121,121,121,121,121,121,,121,121',
+'195,,216,216,216,121,216,,216,216,,216,216,216,216,195,195,,,,,,195',
+',,,121,,,,216,,,,216,216,,151,216,216,216,216,216,216,,216,216,,,151',
+'151,151,216,151,,151,151,,151,151,151,151,,,,,,,,,,,,216,,,,151,,,,151',
+'151,,,151,151,151,151,151,151,,151,151,,,,,,151,221,,,,,,,,,,151,151',
+'221,221,221,221,221,199,221,221,151,221,221,221,221,,,,,,,199,,199,',
+'199,199,,,,221,,,,221,221,,,221,221,221,221,221,221,,221,221,199,,,',
+',221,,,199,199,199,199,,,,199,199,9,,,,,199,,,,221,,,9,9,9,,9,,9,9,',
+'9,9,9,9,,199,,,,,,,,,,,,,,9,,,,9,9,,,9,9,9,9,9,9,208,9,9,,,208,,,9,',
+',,208,208,208,,208,196,208,208,,208,208,208,208,,,,,9,,196,,196,,196',
+'196,,,,208,,,,208,208,,207,208,208,208,208,208,208,,208,208,196,,207',
+'207,207,208,207,,207,207,,207,207,207,207,196,196,,,,,,196,,,,208,,',
+',207,,,,,207,,364,207,207,207,207,207,207,,207,207,,,364,364,364,207',
+'364,,364,364,,364,364,364,364,,,,,,,,,,,,207,,,,364,,,,364,364,,,364',
+'364,364,364,364,364,163,364,364,,,,,,364,,,,163,163,163,163,163,197',
+'163,163,,163,163,163,163,,,,,364,,197,,197,,197,197,,,,163,,,,163,163',
+',188,163,163,163,163,163,163,,163,163,197,,188,188,188,163,188,,188',
+'188,,188,188,188,188,197,197,,,,,,197,,,,163,,,,188,,,,188,188,,344',
+'188,188,188,188,188,188,,188,188,,,344,344,344,188,344,,344,344,,344',
+'344,344,344,,,,,,,,,,,,188,,,,344,,,,344,344,,206,344,344,344,344,344',
+'344,,344,344,,,206,206,206,344,206,,206,206,,206,206,206,206,205,,,',
+',,,,,,,344,,205,205,206,205,,205,205,198,205,,206,206,206,206,206,206',
+',206,206,,,198,,198,206,198,198,205,,,,,,,,205,205,205,205,205,205,',
+'205,205,,206,,198,,205,,,,,,198,198,198,198,345,,,198,198,,,,,,198,205',
+'345,345,345,,345,,345,345,194,345,345,345,345,,,,,,,198,,,194,,194,',
+'194,194,345,,,,345,345,,,345,345,345,345,345,345,,345,345,,,,194,,345',
+',,,,,,,194,194,347,,,194,194,,,,,,194,345,347,347,347,,347,,347,347',
+'203,347,347,347,347,,,,,,,194,,203,203,,203,,203,203,347,203,,,347,347',
+',,347,347,347,347,347,347,,347,347,,,,203,,347,,,,,,203,203,203,203',
+'203,203,348,203,203,,,,,,203,347,,,348,348,348,,348,200,348,348,,348',
+'348,348,348,,,,,203,,200,,200,,200,200,,,,348,,,,348,348,,,348,348,348',
+'348,348,348,,348,348,200,,,,,348,,,200,200,200,200,200,200,201,200,200',
+',,,,,200,,,,348,,201,,201,202,201,201,,,,,,,,,,200,202,202,,202,,202',
+'202,,202,,201,,,,,,,,201,201,201,201,201,201,,201,201,202,,,,,201,,',
+'202,202,202,202,202,202,204,202,202,,,,,,202,,,,201,204,204,,204,,204',
+'204,,204,,,,,,,,202,,,,,,,,,,,204,,,,,,,,204,204,204,204,204,204,,204',
+'204,,,,,,204,,,271,271,271,271,,271,271,271,,271,,271,271,,,,,,204,271',
+'271,271,,,,271,,,,,,,,,,,,271,271,,,,,,,,,,,,271,271,271,271,273,273',
+'273,273,,273,273,273,,273,,273,273,,,,,,,273,273,273,,,,273,,,,,,,,',
+',,,273,273,,,,,,,,,,,,273,273,273,273,303,303,303,303,,303,303,303,',
+'303,,303,303,,,,,,,303,303,303,,,,303,,,,,,,,,,,,303,303,,,,,,,,,,,',
+'303,303,303,303,215,215,215,215,,215,215,215,,215,,215,215,,,,,,,215',
+'215,215,,,,215,,,,,,,,,,,,215,215,,,,,,,,,,,,215,215,215,215' ]
+ racc_action_check = arr = ::Array.new(6809, nil)
idx = 0
clist.each do |str|
str.split(',', -1).each do |i|
arr[idx] = i.to_i unless i.empty?
idx += 1
end
end
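# Aside (not part of the generated tables above): a minimal standalone sketch of
# how the packed table strings decode, assuming only what the loop above shows.
# Racc emits each sparse table as an array of comma-separated strings; empty
# fields are left as nil "holes" so the table stays compact in source form.
# The names `packed` and `table` below are illustrative, not from the generated
# parser.
packed = [
  '0,0,352,,0,0',   # hypothetical fragment in the same format as the clist data
  ',240,,8'
]
table = Array.new(10, nil)  # size matches the ::Array.new(...) call above
pos = 0
packed.each do |str|
  str.split(',', -1).each do |field|
    table[pos] = field.to_i unless field.empty?  # empty field => keep the nil hole
    pos += 1
  end
end
# table => [0, 0, 352, nil, 0, 0, nil, 240, nil, 8]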
racc_action_pointer = [
- -2, 297, nil, nil, nil, 116, 281, nil, 79, nil,
- nil, 5901, 352, 411, 470, nil, nil, nil, nil, nil,
+ -2, 313, nil, nil, 118, 296, nil, 173, 295, 5793,
+ 466, 526, 69, nil, nil, 670, 730, 790, 850, nil,
nil, nil, nil, nil, nil, nil, nil, nil, nil, nil,
- nil, nil, nil, nil, nil, nil, nil, nil, nil, 262,
- -55, 237, 706, 765, 824, 883, 186, 210, nil, 58,
- 194, nil, nil, 1178, 1237, 1296, nil, nil, nil, nil,
- nil, 1355, nil, 158, 162, nil, 1532, nil, nil, nil,
- nil, nil, nil, nil, 232, 1650, 217, 1768, 1827, 1886,
- 1945, 2004, 2063, 2122, 2181, 2240, 2299, 2358, 2417, 2476,
- 2535, 2594, 2653, 2712, 2771, 2830, 2889, 2948, 3007, 3066,
- 3125, 3184, 3243, 3302, 3361, 164, 3479, 178, 3597, 3656,
- 716, 75, 657, 5354, 3970, nil, 105, -15, 4166, 5300,
- nil, 5239, 5178, 5063, 127, 4520, 5, nil, nil, nil,
- nil, 62, -11, nil, 225, nil, nil, nil, nil, 4887,
- 61, nil, 57, nil, 5002, 15, nil, nil, 5117, nil,
- 166, nil, 102, 4225, -22, 5408, 1473, nil, 120, nil,
- nil, nil, nil, nil, 9, 598, 480, 303, 3902, 4098,
- 362, 421, 539, 5918, 5961, 5978, 6021, 6038, 6092, 6109,
- 6163, 6183, 6228, 6248, 6410, 6356, 6302, nil, nil, 293,
- nil, 5840, 647, 942, 1001, 214, 238, nil, nil, -4,
- nil, -1, -9, 74, 49, 76, 1, -2, nil, nil,
- nil, nil, nil, nil, 6488, 5624, 199, nil, 202, nil,
- 191, 96, nil, 1119, nil, 141, nil, 133, -1, nil,
- 1414, 3892, 4284, 4815, 4402, 4, 151, nil, -21, 255,
- 284, 17, nil, 135, 16, 4579, nil, 4343, nil, 4756,
- nil, 4697, nil, nil, nil, nil, 4638, nil, nil, nil,
- 252, nil, nil, nil, nil, 4948, 30, nil, 4461, 4088,
- 58, nil, 6466, nil, 99, 4029, 120, 6443, 3833, 3774,
- 150, nil, nil, nil, nil, nil, nil, nil, nil, nil,
- nil, nil, nil, 3715, 146, nil, 164, nil, 104, 147,
- 3538, nil, 178, 91, 182, 170, -4, 3420, nil, 164,
- 195, 171, 207, 213, nil, -8, nil, 216, 1709, 1591,
- nil, nil, nil, 5462, nil, nil, 5516, nil, nil, nil,
- 171, 48, 5570, 238, 1060, 241, nil, nil, nil, nil,
- 5678, 5732, 249, 191, nil, nil, nil, 5786, 52, nil,
- 588, 258, 239, nil, 267, 272, nil, nil, nil, 272,
- 275, 276, nil, 529, nil, nil, nil, 262, 281, nil,
- nil, 282, nil, nil, nil, nil, nil, nil, nil, nil,
- nil, nil, 234, nil, 175, 57, nil, nil, nil, 291,
- nil, nil, nil, 293, nil, 295, nil, 296, nil, nil,
- nil, nil, nil ]
+ nil, nil, nil, nil, nil, nil, nil, nil, nil, nil,
+ 220, 256, 1054, 1114, 1174, 48, 1283, nil, nil, nil,
+ nil, nil, 1402, 1462, 1546, nil, nil, nil, nil, nil,
+ 1606, nil, 201, 202, nil, 1786, nil, nil, 267, 1906,
+ 246, 2026, 2086, 2146, 2206, 2266, 2326, 2386, 2446, 2506,
+ 2590, 2650, 2710, 2770, 2830, 2890, 2950, 3010, 3070, 3130,
+ 3190, 3250, 3310, 3370, 3430, 3490, 3550, 3610, 3670, 3730,
+ 3790, 217, 248, 3970, 4030, 245, 238, 4210, 219, nil,
+ nil, nil, 1550, -1, 614, 938, 203, nil, 220, 55,
+ 4822, 5562, nil, 5488, 5431, 200, 195, nil, 186, nil,
+ nil, nil, nil, nil, nil, nil, 173, nil, 170, nil,
+ 90, nil, nil, 166, nil, 14, nil, nil, 170, 4882,
+ 60, 5656, 4762, nil, 135, nil, nil, nil, nil, 84,
+ 79, 61, 266, 5995, 153, 4090, 5, 3, 2, 3910,
+ 5302, 1966, nil, nil, -9, 82, 108, 287, -2, 4,
+ -4, -1, nil, nil, nil, nil, nil, nil, 6042, 61,
+ 1346, 2, 350, 5505, 6253, 5579, 5864, 6012, 6181, 5736,
+ 6396, 6450, 6467, 6325, 6521, 6161, 6136, 5894, 5847, nil,
+ nil, 406, 231, 209, nil, 6723, 5609, 202, nil, 276,
+ 239, 5719, nil, 241, nil, 120, nil, nil, 262, nil,
+ 230, nil, nil, nil, nil, nil, 217, 189, -24, 204,
+ -7, nil, 1258, 1342, 1846, 170, 132, 4379, -28, 4150,
+ 21, 55, nil, 4354, 4498, nil, nil, nil, nil, 4558,
+ nil, nil, nil, nil, 92, nil, nil, nil, nil, 101,
+ 119, 6561, 155, 6615, 4942, 5002, 5122, nil, 3850, 5242,
+ 181, nil, 170, nil, 185, nil, 187, nil, nil, nil,
+ nil, nil, nil, nil, nil, nil, nil, nil, nil, nil,
+ nil, 195, 5182, 6669, 200, nil, 93, nil, 200, 206,
+ nil, 149, 30, 5062, nil, nil, 4678, 4618, nil, 222,
+ 83, 226, 204, 9, 4438, nil, 4270, 237, nil, 5405,
+ nil, nil, nil, nil, nil, nil, nil, nil, nil, 178,
+ 58, 238, nil, nil, 6089, 6233, 5374, 6305, 6379, 260,
+ 1726, 203, -8, nil, nil, 263, 1666, nil, nil, nil,
+ 251, nil, nil, nil, 5941, 13, 118, nil, 994, 274,
+ 251, nil, 276, 277, nil, nil, 277, nil, nil, 934,
+ nil, nil, nil, 282, 253, nil, nil, nil, nil, nil,
+ 287, nil, nil, nil, nil, 610, nil, 346, 178, nil,
+ 300, nil, nil, 58, nil, 304, nil, 306, nil, 307,
+ nil, nil, nil, 278, nil, nil, nil, nil ]
racc_action_default = [
- -230, -231, -1, -2, -3, -4, -5, -8, -10, -11,
- -16, -108, -231, -231, -231, -45, -46, -47, -48, -49,
- -50, -51, -52, -53, -54, -55, -56, -57, -58, -59,
- -60, -61, -62, -63, -64, -65, -66, -67, -68, -73,
- -74, -78, -231, -231, -231, -231, -231, -119, -121, -231,
- -231, -157, -167, -231, -231, -231, -180, -181, -182, -183,
- -184, -231, -186, -231, -197, -200, -231, -205, -206, -207,
- -208, -209, -210, -211, -231, -231, -7, -231, -231, -231,
- -231, -231, -231, -231, -231, -231, -231, -231, -231, -231,
- -231, -231, -231, -231, -231, -231, -231, -231, -231, -231,
- -231, -231, -231, -231, -231, -231, -128, -123, -230, -230,
- -28, -231, -35, -231, -231, -75, -231, -231, -231, -231,
- -85, -231, -231, -231, -231, -231, -230, -138, -158, -159,
- -120, -230, -230, -147, -149, -150, -151, -152, -153, -43,
- -231, -170, -231, -173, -231, -231, -176, -177, -190, -185,
- -231, -193, -231, -231, -231, -231, -231, 403, -6, -9,
- -12, -13, -14, -15, -231, -18, -19, -20, -21, -22,
- -23, -24, -25, -26, -27, -29, -30, -31, -32, -33,
- -34, -36, -37, -38, -39, -40, -231, -41, -103, -231,
- -79, -231, -223, -229, -217, -214, -212, -117, -129, -206,
- -132, -210, -231, -220, -218, -226, -208, -209, -216, -221,
- -222, -224, -225, -227, -128, -127, -231, -126, -231, -42,
- -212, -70, -80, -231, -83, -212, -163, -166, -231, -77,
- -231, -231, -231, -128, -231, -214, -230, -160, -231, -231,
- -231, -231, -155, -231, -231, -231, -168, -231, -171, -231,
- -174, -231, -187, -188, -189, -191, -231, -194, -195, -196,
- -212, -198, -201, -203, -204, -108, -231, -17, -231, -231,
- -212, -105, -128, -116, -231, -215, -231, -213, -231, -231,
- -212, -131, -133, -217, -218, -219, -220, -223, -226, -228,
- -229, -124, -125, -213, -231, -72, -231, -82, -231, -213,
- -231, -76, -231, -88, -231, -94, -231, -231, -98, -214,
- -212, -214, -231, -231, -141, -231, -161, -212, -230, -231,
- -148, -156, -154, -44, -169, -172, -179, -175, -178, -192,
- -231, -231, -107, -231, -213, -212, -111, -118, -112, -130,
- -134, -135, -231, -69, -81, -84, -164, -165, -88, -87,
- -231, -231, -94, -93, -231, -231, -102, -97, -99, -231,
- -231, -231, -114, -230, -142, -143, -144, -231, -231, -139,
- -140, -231, -146, -199, -202, -104, -106, -115, -122, -71,
- -86, -89, -231, -92, -231, -231, -109, -110, -113, -231,
- -162, -136, -145, -231, -91, -231, -96, -231, -101, -137,
- -90, -95, -100 ]
+ -3, -241, -1, -2, -4, -5, -8, -10, -16, -21,
+ -241, -241, -241, -33, -34, -241, -241, -241, -241, -61,
+ -62, -63, -64, -65, -66, -67, -68, -69, -70, -71,
+ -72, -73, -74, -75, -76, -77, -78, -79, -80, -81,
+ -86, -90, -241, -241, -241, -241, -241, -174, -175, -176,
+ -177, -178, -241, -241, -241, -189, -190, -191, -192, -193,
+ -241, -195, -241, -208, -211, -241, -216, -217, -241, -241,
+ -7, -241, -241, -241, -241, -241, -241, -241, -241, -126,
+ -241, -241, -241, -241, -241, -241, -241, -241, -241, -241,
+ -241, -241, -241, -241, -241, -241, -241, -241, -241, -241,
+ -241, -241, -121, -240, -240, -22, -23, -241, -240, -136,
+ -157, -158, -46, -241, -47, -54, -241, -87, -241, -241,
+ -241, -241, -97, -241, -241, -240, -218, -145, -147, -148,
+ -149, -150, -151, -153, -154, -14, -218, -180, -218, -182,
+ -241, -185, -186, -241, -194, -241, -199, -202, -241, -206,
+ -241, -241, -241, 418, -6, -9, -11, -12, -13, -17,
+ -18, -19, -20, -241, -218, -241, -79, -80, -81, -229,
+ -235, -223, -127, -130, -241, -226, -224, -232, -175, -176,
+ -177, -178, -222, -227, -228, -230, -231, -233, -59, -241,
+ -36, -37, -38, -39, -40, -41, -42, -43, -44, -45,
+ -48, -49, -50, -51, -52, -53, -55, -56, -241, -57,
+ -115, -241, -218, -83, -91, -126, -125, -241, -124, -241,
+ -220, -241, -28, -240, -159, -241, -58, -92, -241, -95,
+ -218, -162, -164, -165, -166, -167, -169, -241, -241, -172,
+ -241, -89, -241, -241, -241, -241, -240, -219, -241, -219,
+ -241, -241, -183, -241, -241, -196, -197, -198, -200, -241,
+ -203, -204, -205, -207, -218, -209, -212, -214, -215, -8,
+ -241, -126, -241, -219, -241, -241, -241, -35, -241, -241,
+ -218, -117, -241, -85, -218, -129, -241, -223, -224, -225,
+ -226, -229, -232, -234, -235, -236, -237, -238, -239, -122,
+ -123, -241, -221, -126, -241, -139, -241, -160, -218, -241,
+ -94, -241, -219, -241, -170, -171, -241, -241, -88, -241,
+ -100, -241, -106, -241, -241, -110, -240, -241, -155, -241,
+ -146, -152, -15, -179, -181, -184, -187, -188, -201, -241,
+ -241, -218, -26, -128, -133, -131, -132, -60, -119, -241,
+ -219, -82, -241, -25, -29, -218, -240, -140, -141, -142,
+ -241, -93, -96, -163, -168, -241, -100, -99, -241, -241,
+ -106, -105, -241, -241, -109, -111, -241, -137, -138, -241,
+ -156, -210, -213, -241, -30, -116, -118, -84, -120, -27,
+ -241, -161, -173, -98, -101, -241, -104, -241, -240, -134,
+ -241, -144, -24, -31, -135, -241, -103, -241, -108, -241,
+ -113, -114, -143, -220, -102, -107, -112, -32 ]
racc_goto_table = [
- 2, 115, 4, 131, 110, 112, 113, 149, 137, 274,
- 196, 262, 188, 135, 195, 225, 353, 236, 187, 368,
- 349, 320, 239, 321, 308, 160, 161, 162, 163, 216,
- 218, 159, 337, 235, 119, 121, 122, 123, 276, 76,
- 272, 355, 140, 142, 339, 139, 139, 144, 270, 312,
- 307, 381, 260, 148, 313, 364, 240, 222, 155, 346,
- 328, 257, 294, 383, 389, 380, 258, 298, 3, 255,
- 256, 164, 254, 151, 139, 165, 166, 167, 168, 169,
- 170, 171, 172, 173, 174, 175, 176, 177, 178, 179,
- 180, 181, 182, 183, 184, 185, 186, 271, 191, 358,
- 215, 215, 330, 220, 153, 1, 139, 228, nil, 158,
- 139, nil, 333, nil, nil, nil, nil, 191, 280, nil,
- nil, nil, 342, 359, nil, 361, nil, nil, 237, nil,
- nil, nil, nil, 237, 242, nil, 317, 310, nil, nil,
- nil, 309, 311, nil, nil, nil, nil, nil, 265, nil,
- nil, nil, 360, 259, nil, nil, 266, 131, nil, 367,
- nil, nil, nil, 137, nil, nil, nil, nil, 135, nil,
- nil, nil, nil, nil, nil, nil, 335, 377, nil, nil,
- nil, 186, 295, nil, 119, 121, 122, 374, nil, nil,
- nil, nil, nil, nil, nil, nil, nil, nil, nil, 137,
- nil, 137, 329, nil, 135, nil, 135, nil, nil, nil,
+ 2, 112, 114, 115, 117, 116, 224, 220, 125, 129,
+ 131, 189, 210, 144, 301, 266, 330, 230, 367, 325,
+ 376, 239, 409, 224, 246, 164, 324, 70, 121, 123,
+ 124, 280, 223, 394, 250, 343, 251, 105, 106, 135,
+ 135, 143, 227, 136, 138, 209, 371, 146, 264, 245,
+ 390, 151, 239, 217, 219, 354, 304, 357, 155, 156,
+ 157, 158, 272, 327, 393, 163, 188, 190, 191, 192,
+ 193, 194, 195, 196, 197, 198, 199, 200, 201, 202,
+ 203, 204, 205, 206, 207, 208, 383, 135, 331, 216,
+ 216, 212, 154, 221, 396, 363, 315, 314, 380, 375,
+ 336, 260, 159, 160, 161, 162, 261, 135, 3, 258,
+ 282, 240, 259, 257, 147, 149, 262, 1, nil, nil,
+ nil, 305, nil, 308, 281, nil, nil, 239, 311, nil,
+ nil, nil, nil, nil, nil, nil, nil, nil, 125, 269,
+ 129, 131, nil, nil, 328, nil, nil, nil, nil, 263,
+ nil, 114, 270, nil, nil, 121, 123, 124, nil, nil,
+ nil, 284, 339, nil, nil, nil, nil, nil, nil, nil,
+ nil, nil, nil, nil, nil, nil, nil, 283, 349, nil,
+ nil, nil, 352, nil, nil, nil, nil, nil, nil, nil,
+ nil, nil, nil, nil, nil, nil, nil, 208, nil, nil,
+ nil, nil, nil, nil, 382, nil, 360, 417, nil, nil,
+ 129, 131, 338, nil, 239, nil, nil, 341, nil, nil,
+ nil, nil, nil, nil, 378, nil, nil, nil, 309, nil,
+ 188, nil, nil, nil, nil, nil, 332, nil, nil, 384,
+ 143, 337, 319, 321, nil, nil, 146, 365, nil, 355,
+ nil, nil, nil, 389, 378, nil, nil, nil, nil, nil,
+ 344, 345, 346, 386, 347, 348, nil, nil, nil, 358,
nil, nil, nil, nil, nil, nil, nil, nil, nil, nil,
- nil, nil, nil, 296, 139, 191, 191, nil, nil, nil,
- 302, 304, nil, nil, nil, nil, nil, 323, 314, 323,
- nil, 326, 376, 144, nil, nil, nil, nil, 148, nil,
+ nil, nil, nil, nil, nil, nil, nil, nil, 221, nil,
+ nil, nil, 129, 131, nil, nil, 410, nil, nil, 364,
+ nil, nil, 188, 413, 332, nil, nil, nil, nil, nil,
+ 188, nil, nil, nil, nil, 387, nil, nil, nil, nil,
nil, nil, nil, nil, nil, nil, nil, nil, nil, nil,
- 323, 332, nil, nil, nil, nil, nil, 191, nil, 365,
- 340, 341, nil, nil, nil, nil, nil, nil, nil, nil,
- nil, nil, nil, nil, nil, 323, nil, nil, nil, nil,
- nil, nil, 347, nil, nil, nil, nil, nil, nil, 139,
- nil, nil, nil, nil, 379, nil, nil, nil, nil, nil,
- nil, nil, nil, nil, nil, nil, nil, nil, nil, 371,
- 370, nil, nil, nil, nil, nil, 186, nil, nil, nil,
+ nil, nil, nil, nil, nil, nil, 208, nil, nil, nil,
nil, nil, nil, nil, nil, nil, nil, nil, nil, nil,
- nil, nil, 119, nil, nil, nil, nil, nil, nil, nil,
+ nil, nil, nil, nil, 121, nil, nil, nil, nil, nil,
nil, nil, nil, nil, nil, nil, nil, nil, nil, nil,
- nil, nil, nil, nil, nil, 370, nil, nil, nil, nil,
- nil, nil, nil, nil, nil, nil, nil, nil, nil, nil,
- nil, nil, 393, nil, 395, 397 ]
+ nil, nil, nil, nil, nil, nil, nil, nil, nil, 400,
+ nil, nil, nil, nil, nil, nil, nil, nil, nil, 221,
+ nil, nil, nil, nil, nil, 405, nil, 407, 411 ]
racc_goto_check = [
- 2, 40, 4, 65, 10, 10, 10, 82, 32, 56,
- 57, 89, 52, 38, 55, 45, 48, 66, 13, 67,
- 47, 73, 66, 73, 50, 8, 8, 8, 8, 61,
- 61, 7, 58, 55, 10, 10, 10, 10, 39, 6,
- 59, 51, 12, 12, 62, 10, 10, 10, 53, 56,
- 49, 46, 45, 10, 69, 70, 72, 44, 10, 75,
- 77, 78, 39, 48, 67, 47, 79, 39, 3, 83,
- 84, 12, 86, 87, 10, 10, 10, 10, 10, 10,
+ 2, 10, 10, 10, 37, 6, 49, 13, 57, 35,
+ 34, 19, 50, 80, 14, 88, 65, 42, 44, 47,
+ 59, 36, 48, 49, 15, 11, 46, 5, 10, 10,
+ 10, 51, 58, 43, 15, 54, 15, 9, 9, 6,
+ 6, 6, 41, 8, 8, 20, 45, 6, 42, 58,
+ 59, 10, 36, 53, 53, 16, 61, 62, 6, 6,
+ 6, 6, 15, 64, 44, 10, 10, 10, 10, 10,
10, 10, 10, 10, 10, 10, 10, 10, 10, 10,
- 10, 10, 10, 10, 10, 10, 10, 52, 10, 50,
- 10, 10, 39, 12, 88, 1, 10, 12, nil, 6,
- 10, nil, 39, nil, nil, nil, nil, 10, 57, nil,
- nil, nil, 39, 56, nil, 56, nil, nil, 4, nil,
- nil, nil, nil, 4, 4, nil, 45, 57, nil, nil,
- nil, 55, 55, nil, nil, nil, nil, nil, 10, nil,
- nil, nil, 39, 2, nil, nil, 2, 65, nil, 39,
- nil, nil, nil, 32, nil, nil, nil, nil, 38, nil,
- nil, nil, nil, nil, nil, nil, 57, 39, nil, nil,
- nil, 10, 40, nil, 10, 10, 10, 89, nil, nil,
- nil, nil, nil, nil, nil, nil, nil, nil, nil, 32,
- nil, 32, 82, nil, 38, nil, 38, nil, nil, nil,
- nil, nil, nil, nil, nil, nil, nil, nil, nil, nil,
- nil, nil, nil, 2, 10, 10, 10, nil, nil, nil,
- 2, 2, nil, nil, nil, nil, nil, 10, 4, 10,
- nil, 10, 52, 10, nil, nil, nil, nil, 10, nil,
+ 10, 10, 10, 10, 10, 10, 12, 6, 67, 10,
+ 10, 8, 5, 10, 45, 68, 69, 71, 65, 47,
+ 75, 76, 9, 9, 9, 9, 77, 6, 3, 81,
+ 15, 8, 82, 84, 85, 86, 87, 1, nil, nil,
+ nil, 49, nil, 42, 50, nil, nil, 36, 15, nil,
+ nil, nil, nil, nil, nil, nil, nil, nil, 57, 6,
+ 35, 34, nil, nil, 49, nil, nil, nil, nil, 2,
+ nil, 10, 2, nil, nil, 10, 10, 10, nil, nil,
+ nil, 11, 15, nil, nil, nil, nil, nil, nil, nil,
+ nil, nil, nil, nil, nil, nil, nil, 37, 15, nil,
+ nil, nil, 15, nil, nil, nil, nil, nil, nil, nil,
+ nil, nil, nil, nil, nil, nil, nil, 10, nil, nil,
+ nil, nil, nil, nil, 88, nil, 15, 14, nil, nil,
+ 35, 34, 80, nil, 36, nil, nil, 11, nil, nil,
+ nil, nil, nil, nil, 49, nil, nil, nil, 2, nil,
+ 10, nil, nil, nil, nil, nil, 6, nil, nil, 15,
+ 6, 6, 2, 2, nil, nil, 6, 19, nil, 11,
+ nil, nil, nil, 15, 49, nil, nil, nil, nil, nil,
+ 10, 10, 10, 50, 10, 10, nil, nil, nil, 57,
nil, nil, nil, nil, nil, nil, nil, nil, nil, nil,
- 10, 10, nil, nil, nil, nil, nil, 10, nil, 65,
- 10, 10, nil, nil, nil, nil, nil, nil, nil, nil,
- nil, nil, nil, nil, nil, 10, nil, nil, nil, nil,
- nil, nil, 10, nil, nil, nil, nil, nil, nil, 10,
- nil, nil, nil, nil, 40, nil, nil, nil, nil, nil,
- nil, nil, nil, nil, nil, nil, nil, nil, nil, 2,
- 4, nil, nil, nil, nil, nil, 10, nil, nil, nil,
+ nil, nil, nil, nil, nil, nil, nil, nil, 10, nil,
+ nil, nil, 35, 34, nil, nil, 49, nil, nil, 10,
+ nil, nil, 10, 13, 6, nil, nil, nil, nil, nil,
+ 10, nil, nil, nil, nil, 37, nil, nil, nil, nil,
nil, nil, nil, nil, nil, nil, nil, nil, nil, nil,
- nil, nil, 10, nil, nil, nil, nil, nil, nil, nil,
+ nil, nil, nil, nil, nil, nil, 10, nil, nil, nil,
nil, nil, nil, nil, nil, nil, nil, nil, nil, nil,
- nil, nil, nil, nil, nil, 4, nil, nil, nil, nil,
+ nil, nil, nil, nil, 10, nil, nil, nil, nil, nil,
nil, nil, nil, nil, nil, nil, nil, nil, nil, nil,
- nil, nil, 2, nil, 2, 2 ]
+ nil, nil, nil, nil, nil, nil, nil, nil, nil, 2,
+ nil, nil, nil, nil, nil, nil, nil, nil, nil, 10,
+ nil, nil, nil, nil, nil, 2, nil, 2, 2 ]
racc_goto_pointer = [
- nil, 105, 0, 68, 2, nil, 34, -46, -53, nil,
- -8, nil, -11, -86, nil, nil, nil, nil, nil, nil,
- nil, nil, nil, nil, nil, nil, nil, nil, nil, nil,
- nil, nil, -42, nil, nil, nil, nil, nil, -37, -158,
- -39, nil, nil, nil, -59, -102, -299, -283, -289, -182,
- -208, -265, -92, -141, nil, -92, -186, -96, -243, -151,
- nil, -79, -233, nil, nil, -46, -109, -299, nil, -182,
- -260, nil, -76, -220, nil, -240, nil, -191, -91, -86,
- nil, nil, -54, -81, -80, nil, -78, 10, 40, -144 ]
+ nil, 117, 0, 108, nil, 23, -13, nil, -9, 27,
+ -14, -54, -255, -100, -206, -102, -247, nil, nil, -69,
+ -54, nil, nil, nil, nil, nil, nil, nil, nil, nil,
+ nil, nil, nil, nil, -36, -37, -98, -36, nil, nil,
+ nil, -76, -102, -335, -302, -276, -218, -225, -376, -102,
+ -87, -180, nil, -50, -238, nil, nil, -37, -76, -306,
+ nil, -167, -249, nil, -183, -231, nil, -160, -217, -142,
+ nil, -140, nil, nil, nil, -153, -47, -42, nil, nil,
+ -47, -36, -33, nil, -32, 52, 52, -33, -136 ]
racc_goto_default = [
- nil, nil, 369, nil, 217, 5, 6, 7, 8, 9,
- 11, 10, 306, nil, 15, 39, 16, 17, 18, 19,
- 20, 21, 22, 23, 24, 25, 26, 27, 28, 29,
- 30, 31, 32, 33, 34, 35, 36, 37, 38, nil,
- nil, 40, 41, 116, nil, nil, 120, nil, nil, nil,
- nil, nil, nil, nil, 45, nil, nil, nil, 197, nil,
- 107, nil, 198, 202, 200, 127, nil, nil, 126, nil,
- nil, 132, nil, 133, 134, 226, 145, 147, 56, 57,
- 58, 61, nil, nil, nil, 150, nil, nil, nil, nil ]
+ nil, nil, 377, nil, 4, 5, 6, 7, nil, 8,
+ 9, nil, nil, nil, nil, nil, 222, 13, 14, 323,
+ nil, 19, 20, 21, 22, 23, 24, 25, 26, 27,
+ 28, 29, 30, 31, 32, 33, 34, nil, 40, 41,
+ 118, nil, nil, 122, nil, nil, nil, nil, nil, 218,
+ nil, nil, 102, nil, 172, 174, 173, 109, nil, nil,
+ 108, nil, nil, 126, nil, 127, 128, 132, 231, 232,
+ 233, 234, 235, 238, 140, 142, 55, 56, 57, 60,
+ nil, nil, nil, 145, nil, nil, nil, nil, nil ]
racc_reduce_table = [
0, 0, :racc_error,
- 1, 91, :_reduce_1,
- 1, 91, :_reduce_2,
- 1, 91, :_reduce_none,
- 1, 92, :_reduce_4,
+ 1, 92, :_reduce_1,
+ 1, 92, :_reduce_2,
+ 0, 92, :_reduce_3,
+ 1, 93, :_reduce_4,
1, 95, :_reduce_5,
3, 95, :_reduce_6,
2, 95, :_reduce_7,
1, 96, :_reduce_8,
3, 96, :_reduce_9,
1, 97, :_reduce_none,
- 1, 98, :_reduce_11,
- 3, 98, :_reduce_12,
- 3, 98, :_reduce_13,
- 3, 98, :_reduce_14,
- 3, 98, :_reduce_15,
+ 3, 97, :_reduce_11,
+ 3, 97, :_reduce_12,
+ 3, 97, :_reduce_13,
+ 1, 99, :_reduce_14,
+ 3, 99, :_reduce_15,
+ 1, 98, :_reduce_none,
+ 3, 98, :_reduce_17,
+ 3, 98, :_reduce_18,
+ 3, 98, :_reduce_19,
+ 3, 98, :_reduce_20,
1, 100, :_reduce_none,
- 4, 100, :_reduce_17,
- 3, 100, :_reduce_18,
- 3, 100, :_reduce_19,
- 3, 100, :_reduce_20,
- 3, 100, :_reduce_21,
- 3, 100, :_reduce_22,
- 3, 100, :_reduce_23,
- 3, 100, :_reduce_24,
- 3, 100, :_reduce_25,
- 3, 100, :_reduce_26,
- 3, 100, :_reduce_27,
- 2, 100, :_reduce_28,
- 3, 100, :_reduce_29,
- 3, 100, :_reduce_30,
- 3, 100, :_reduce_31,
- 3, 100, :_reduce_32,
- 3, 100, :_reduce_33,
- 3, 100, :_reduce_34,
- 2, 100, :_reduce_35,
- 3, 100, :_reduce_36,
- 3, 100, :_reduce_37,
- 3, 100, :_reduce_38,
- 3, 100, :_reduce_39,
- 3, 100, :_reduce_40,
- 3, 100, :_reduce_41,
- 3, 100, :_reduce_42,
- 1, 102, :_reduce_43,
- 3, 102, :_reduce_44,
+ 2, 100, :_reduce_22,
+ 2, 100, :_reduce_23,
+ 7, 100, :_reduce_24,
+ 5, 100, :_reduce_25,
+ 5, 100, :_reduce_26,
+ 4, 107, :_reduce_27,
+ 1, 104, :_reduce_28,
+ 3, 104, :_reduce_29,
+ 1, 103, :_reduce_30,
+ 2, 103, :_reduce_31,
+ 4, 103, :_reduce_32,
1, 101, :_reduce_none,
- 1, 105, :_reduce_none,
- 1, 105, :_reduce_none,
- 1, 105, :_reduce_none,
- 1, 105, :_reduce_none,
- 1, 105, :_reduce_none,
- 1, 105, :_reduce_none,
- 1, 105, :_reduce_none,
- 1, 105, :_reduce_none,
- 1, 105, :_reduce_none,
- 1, 105, :_reduce_none,
- 1, 105, :_reduce_none,
- 1, 105, :_reduce_none,
- 1, 106, :_reduce_none,
- 1, 106, :_reduce_none,
- 1, 106, :_reduce_none,
- 1, 106, :_reduce_none,
- 1, 106, :_reduce_none,
- 1, 106, :_reduce_none,
- 1, 106, :_reduce_none,
- 1, 106, :_reduce_none,
- 1, 106, :_reduce_none,
- 1, 123, :_reduce_67,
- 1, 123, :_reduce_68,
- 5, 104, :_reduce_69,
- 3, 104, :_reduce_70,
- 6, 104, :_reduce_71,
- 4, 104, :_reduce_72,
- 1, 104, :_reduce_73,
- 1, 108, :_reduce_74,
- 2, 108, :_reduce_75,
- 4, 131, :_reduce_76,
- 3, 131, :_reduce_77,
- 1, 131, :_reduce_78,
- 3, 132, :_reduce_79,
- 2, 130, :_reduce_80,
- 3, 134, :_reduce_81,
- 2, 134, :_reduce_82,
- 2, 133, :_reduce_83,
- 4, 133, :_reduce_84,
- 2, 111, :_reduce_85,
- 5, 136, :_reduce_86,
- 4, 136, :_reduce_87,
- 0, 137, :_reduce_none,
- 2, 137, :_reduce_89,
- 4, 137, :_reduce_90,
- 3, 137, :_reduce_91,
- 6, 112, :_reduce_92,
- 5, 112, :_reduce_93,
- 0, 138, :_reduce_none,
- 4, 138, :_reduce_95,
- 3, 138, :_reduce_96,
- 5, 110, :_reduce_97,
- 1, 139, :_reduce_98,
- 2, 139, :_reduce_99,
- 5, 140, :_reduce_100,
- 4, 140, :_reduce_101,
- 1, 141, :_reduce_102,
- 1, 103, :_reduce_none,
- 4, 103, :_reduce_104,
- 1, 143, :_reduce_105,
- 3, 143, :_reduce_106,
- 3, 142, :_reduce_107,
- 1, 99, :_reduce_108,
- 6, 99, :_reduce_109,
- 6, 99, :_reduce_110,
- 5, 99, :_reduce_111,
- 5, 99, :_reduce_112,
- 6, 99, :_reduce_113,
- 5, 99, :_reduce_114,
- 4, 148, :_reduce_115,
- 1, 149, :_reduce_116,
- 1, 145, :_reduce_117,
- 3, 145, :_reduce_118,
- 1, 144, :_reduce_119,
- 2, 144, :_reduce_120,
- 1, 144, :_reduce_121,
- 6, 109, :_reduce_122,
- 2, 109, :_reduce_123,
- 3, 150, :_reduce_124,
- 3, 150, :_reduce_125,
- 1, 151, :_reduce_none,
- 1, 151, :_reduce_none,
- 0, 147, :_reduce_128,
- 1, 147, :_reduce_129,
- 3, 147, :_reduce_130,
- 1, 153, :_reduce_none,
+ 1, 101, :_reduce_none,
+ 4, 101, :_reduce_35,
+ 3, 101, :_reduce_36,
+ 3, 101, :_reduce_37,
+ 3, 101, :_reduce_38,
+ 3, 101, :_reduce_39,
+ 3, 101, :_reduce_40,
+ 3, 101, :_reduce_41,
+ 3, 101, :_reduce_42,
+ 3, 101, :_reduce_43,
+ 3, 101, :_reduce_44,
+ 3, 101, :_reduce_45,
+ 2, 101, :_reduce_46,
+ 2, 101, :_reduce_47,
+ 3, 101, :_reduce_48,
+ 3, 101, :_reduce_49,
+ 3, 101, :_reduce_50,
+ 3, 101, :_reduce_51,
+ 3, 101, :_reduce_52,
+ 3, 101, :_reduce_53,
+ 2, 101, :_reduce_54,
+ 3, 101, :_reduce_55,
+ 3, 101, :_reduce_56,
+ 3, 101, :_reduce_57,
+ 3, 101, :_reduce_58,
+ 1, 110, :_reduce_59,
+ 3, 110, :_reduce_60,
+ 1, 108, :_reduce_none,
+ 1, 108, :_reduce_none,
+ 1, 108, :_reduce_none,
+ 1, 108, :_reduce_none,
+ 1, 108, :_reduce_none,
+ 1, 108, :_reduce_none,
+ 1, 108, :_reduce_none,
+ 1, 108, :_reduce_none,
+ 1, 108, :_reduce_none,
+ 1, 108, :_reduce_none,
+ 1, 108, :_reduce_none,
+ 1, 108, :_reduce_none,
+ 1, 108, :_reduce_none,
+ 1, 108, :_reduce_none,
+ 1, 108, :_reduce_none,
+ 1, 108, :_reduce_none,
+ 1, 108, :_reduce_77,
+ 1, 108, :_reduce_78,
+ 1, 108, :_reduce_79,
+ 1, 108, :_reduce_80,
+ 1, 108, :_reduce_81,
+ 5, 109, :_reduce_82,
+ 3, 109, :_reduce_83,
+ 6, 109, :_reduce_84,
+ 4, 109, :_reduce_85,
+ 1, 113, :_reduce_86,
+ 2, 113, :_reduce_87,
+ 4, 129, :_reduce_88,
+ 3, 129, :_reduce_89,
+ 1, 129, :_reduce_90,
+ 3, 130, :_reduce_91,
+ 2, 128, :_reduce_92,
+ 3, 132, :_reduce_93,
+ 2, 132, :_reduce_94,
+ 2, 131, :_reduce_95,
+ 4, 131, :_reduce_96,
+ 2, 116, :_reduce_97,
+ 5, 134, :_reduce_98,
+ 4, 134, :_reduce_99,
+ 0, 135, :_reduce_none,
+ 2, 135, :_reduce_101,
+ 4, 135, :_reduce_102,
+ 3, 135, :_reduce_103,
+ 6, 117, :_reduce_104,
+ 5, 117, :_reduce_105,
+ 0, 136, :_reduce_none,
+ 4, 136, :_reduce_107,
+ 3, 136, :_reduce_108,
+ 5, 115, :_reduce_109,
+ 1, 137, :_reduce_110,
+ 2, 137, :_reduce_111,
+ 5, 138, :_reduce_112,
+ 1, 139, :_reduce_none,
+ 1, 139, :_reduce_none,
+ 1, 111, :_reduce_none,
+ 4, 111, :_reduce_116,
+ 1, 142, :_reduce_117,
+ 3, 142, :_reduce_118,
+ 3, 141, :_reduce_119,
+ 6, 114, :_reduce_120,
+ 2, 114, :_reduce_121,
+ 3, 143, :_reduce_122,
+ 3, 143, :_reduce_123,
+ 1, 144, :_reduce_none,
+ 1, 144, :_reduce_none,
+ 0, 102, :_reduce_126,
+ 1, 102, :_reduce_127,
+ 3, 102, :_reduce_128,
+ 1, 146, :_reduce_none,
+ 1, 146, :_reduce_none,
+ 3, 145, :_reduce_131,
+ 3, 145, :_reduce_132,
+ 3, 145, :_reduce_133,
+ 6, 118, :_reduce_134,
+ 7, 119, :_reduce_135,
+ 1, 151, :_reduce_136,
+ 1, 150, :_reduce_none,
+ 1, 150, :_reduce_none,
+ 1, 152, :_reduce_none,
+ 2, 152, :_reduce_140,
1, 153, :_reduce_none,
1, 153, :_reduce_none,
- 3, 152, :_reduce_134,
- 3, 152, :_reduce_135,
- 6, 113, :_reduce_136,
- 7, 114, :_reduce_137,
- 1, 158, :_reduce_138,
- 1, 157, :_reduce_none,
- 1, 157, :_reduce_none,
+ 7, 120, :_reduce_143,
+ 6, 120, :_reduce_144,
+ 1, 154, :_reduce_145,
+ 3, 154, :_reduce_146,
+ 1, 156, :_reduce_none,
+ 1, 156, :_reduce_none,
+ 1, 156, :_reduce_149,
+ 1, 156, :_reduce_none,
+ 1, 157, :_reduce_151,
+ 3, 157, :_reduce_152,
+ 1, 158, :_reduce_none,
+ 1, 158, :_reduce_none,
+ 1, 155, :_reduce_none,
+ 2, 155, :_reduce_156,
+ 1, 148, :_reduce_none,
+ 1, 148, :_reduce_158,
+ 1, 149, :_reduce_159,
+ 2, 149, :_reduce_160,
+ 4, 149, :_reduce_161,
+ 1, 133, :_reduce_162,
+ 3, 133, :_reduce_163,
+ 1, 159, :_reduce_none,
1, 159, :_reduce_none,
- 2, 159, :_reduce_142,
1, 160, :_reduce_none,
1, 160, :_reduce_none,
- 6, 115, :_reduce_145,
- 5, 115, :_reduce_146,
- 1, 161, :_reduce_147,
- 3, 161, :_reduce_148,
- 1, 163, :_reduce_149,
- 1, 163, :_reduce_150,
- 1, 163, :_reduce_151,
- 1, 163, :_reduce_none,
- 1, 164, :_reduce_153,
- 3, 164, :_reduce_154,
- 1, 162, :_reduce_none,
- 2, 162, :_reduce_156,
- 1, 117, :_reduce_157,
- 1, 155, :_reduce_158,
- 1, 155, :_reduce_159,
- 1, 156, :_reduce_160,
- 2, 156, :_reduce_161,
- 4, 156, :_reduce_162,
- 1, 135, :_reduce_163,
- 3, 135, :_reduce_164,
- 3, 165, :_reduce_165,
- 1, 165, :_reduce_166,
- 1, 107, :_reduce_167,
- 3, 118, :_reduce_168,
- 4, 118, :_reduce_169,
- 2, 118, :_reduce_170,
- 3, 118, :_reduce_171,
- 4, 118, :_reduce_172,
- 2, 118, :_reduce_173,
- 3, 121, :_reduce_174,
- 4, 121, :_reduce_175,
- 2, 121, :_reduce_176,
- 1, 166, :_reduce_177,
- 3, 166, :_reduce_178,
- 3, 167, :_reduce_179,
- 1, 128, :_reduce_none,
- 1, 128, :_reduce_none,
- 1, 128, :_reduce_none,
- 1, 168, :_reduce_183,
- 1, 168, :_reduce_184,
- 2, 169, :_reduce_185,
- 1, 171, :_reduce_186,
- 1, 173, :_reduce_187,
- 1, 174, :_reduce_188,
- 2, 172, :_reduce_189,
- 1, 175, :_reduce_190,
- 1, 176, :_reduce_191,
- 2, 176, :_reduce_192,
- 2, 170, :_reduce_193,
- 2, 177, :_reduce_194,
- 2, 177, :_reduce_195,
- 3, 93, :_reduce_196,
- 0, 178, :_reduce_197,
- 2, 178, :_reduce_198,
- 4, 178, :_reduce_199,
- 1, 116, :_reduce_200,
- 3, 116, :_reduce_201,
- 5, 116, :_reduce_202,
+ 3, 162, :_reduce_168,
+ 1, 162, :_reduce_169,
+ 2, 163, :_reduce_170,
+ 2, 161, :_reduce_171,
+ 1, 164, :_reduce_172,
+ 4, 164, :_reduce_173,
+ 1, 112, :_reduce_174,
+ 1, 122, :_reduce_175,
+ 1, 122, :_reduce_176,
+ 1, 122, :_reduce_177,
+ 1, 122, :_reduce_178,
+ 4, 123, :_reduce_179,
+ 2, 123, :_reduce_180,
+ 4, 123, :_reduce_181,
+ 2, 123, :_reduce_182,
+ 3, 124, :_reduce_183,
+ 4, 124, :_reduce_184,
+ 2, 124, :_reduce_185,
+ 1, 165, :_reduce_186,
+ 3, 165, :_reduce_187,
+ 3, 166, :_reduce_188,
+ 1, 126, :_reduce_none,
+ 1, 126, :_reduce_none,
+ 1, 126, :_reduce_none,
+ 1, 167, :_reduce_192,
+ 1, 167, :_reduce_193,
+ 2, 168, :_reduce_194,
+ 1, 170, :_reduce_195,
+ 1, 172, :_reduce_196,
+ 1, 173, :_reduce_197,
+ 2, 171, :_reduce_198,
+ 1, 174, :_reduce_199,
+ 1, 175, :_reduce_200,
+ 2, 175, :_reduce_201,
+ 2, 169, :_reduce_202,
+ 2, 176, :_reduce_203,
+ 2, 176, :_reduce_204,
+ 3, 94, :_reduce_205,
+ 0, 178, :_reduce_none,
+ 1, 178, :_reduce_none,
+ 0, 177, :_reduce_208,
+ 2, 177, :_reduce_209,
+ 4, 177, :_reduce_210,
+ 1, 121, :_reduce_211,
+ 3, 121, :_reduce_212,
+ 5, 121, :_reduce_213,
1, 179, :_reduce_none,
1, 179, :_reduce_none,
- 1, 124, :_reduce_205,
- 1, 127, :_reduce_206,
- 1, 125, :_reduce_207,
- 1, 126, :_reduce_208,
- 1, 120, :_reduce_209,
- 1, 119, :_reduce_210,
- 1, 122, :_reduce_211,
- 0, 129, :_reduce_none,
- 1, 129, :_reduce_213,
- 0, 146, :_reduce_none,
- 1, 146, :_reduce_none,
- 1, 154, :_reduce_none,
- 1, 154, :_reduce_none,
- 1, 154, :_reduce_none,
- 1, 154, :_reduce_none,
- 1, 154, :_reduce_none,
- 1, 154, :_reduce_none,
- 1, 154, :_reduce_none,
- 1, 154, :_reduce_none,
- 1, 154, :_reduce_none,
- 1, 154, :_reduce_none,
- 1, 154, :_reduce_none,
- 1, 154, :_reduce_none,
- 1, 154, :_reduce_none,
- 1, 154, :_reduce_none,
- 0, 94, :_reduce_230 ]
-
-racc_reduce_n = 231
-
-racc_shift_n = 403
+ 1, 127, :_reduce_216,
+ 1, 125, :_reduce_217,
+ 0, 106, :_reduce_none,
+ 1, 106, :_reduce_219,
+ 0, 105, :_reduce_none,
+ 1, 105, :_reduce_none,
+ 1, 147, :_reduce_none,
+ 1, 147, :_reduce_none,
+ 1, 147, :_reduce_none,
+ 1, 147, :_reduce_none,
+ 1, 147, :_reduce_none,
+ 1, 147, :_reduce_none,
+ 1, 147, :_reduce_none,
+ 1, 147, :_reduce_none,
+ 1, 147, :_reduce_none,
+ 1, 147, :_reduce_none,
+ 1, 147, :_reduce_none,
+ 1, 147, :_reduce_none,
+ 1, 147, :_reduce_none,
+ 1, 147, :_reduce_none,
+ 1, 147, :_reduce_none,
+ 1, 147, :_reduce_none,
+ 1, 147, :_reduce_none,
+ 1, 147, :_reduce_none,
+ 0, 140, :_reduce_240 ]
+
+racc_reduce_n = 241
+
+racc_shift_n = 418
racc_token_table = {
false => 0,
:error => 1,
:STRING => 2,
:DQPRE => 3,
:DQMID => 4,
:DQPOST => 5,
:WORD => 6,
:LBRACK => 7,
:RBRACK => 8,
:LBRACE => 9,
:RBRACE => 10,
:SYMBOL => 11,
:FARROW => 12,
:COMMA => 13,
:TRUE => 14,
:FALSE => 15,
:EQUALS => 16,
:APPENDS => 17,
:DELETES => 18,
:LESSEQUAL => 19,
:NOTEQUAL => 20,
:DOT => 21,
:COLON => 22,
:LLCOLLECT => 23,
:RRCOLLECT => 24,
:QMARK => 25,
:LPAREN => 26,
:RPAREN => 27,
:ISEQUAL => 28,
:GREATEREQUAL => 29,
:GREATERTHAN => 30,
:LESSTHAN => 31,
:IF => 32,
:ELSE => 33,
:DEFINE => 34,
:ELSIF => 35,
:VARIABLE => 36,
:CLASS => 37,
:INHERITS => 38,
:NODE => 39,
:BOOLEAN => 40,
:NAME => 41,
:SEMIC => 42,
:CASE => 43,
:DEFAULT => 44,
:AT => 45,
:ATAT => 46,
:LCOLLECT => 47,
:RCOLLECT => 48,
:CLASSREF => 49,
:NOT => 50,
:OR => 51,
:AND => 52,
:UNDEF => 53,
:PARROW => 54,
:PLUS => 55,
:MINUS => 56,
:TIMES => 57,
:DIV => 58,
:LSHIFT => 59,
:RSHIFT => 60,
:UMINUS => 61,
:MATCH => 62,
:NOMATCH => 63,
:REGEX => 64,
:IN_EDGE => 65,
:OUT_EDGE => 66,
:IN_EDGE_SUB => 67,
:OUT_EDGE_SUB => 68,
:IN => 69,
:UNLESS => 70,
:PIPE => 71,
:LAMBDA => 72,
:SELBRACE => 73,
:NUMBER => 74,
:HEREDOC => 75,
:SUBLOCATE => 76,
:RENDER_STRING => 77,
:RENDER_EXPR => 78,
:EPP_START => 79,
:EPP_END => 80,
:EPP_END_TRIM => 81,
:FUNCTION => 82,
- :LOW => 83,
- :HIGH => 84,
- :CALL => 85,
- :LISTSTART => 86,
- :MODULO => 87,
- :TITLE_COLON => 88,
- :CASE_COLON => 89 }
+ :PRIVATE => 83,
+ :ATTR => 84,
+ :TYPE => 85,
+ :LOW => 86,
+ :HIGH => 87,
+ :LISTSTART => 88,
+ :SPLAT => 89,
+ :MODULO => 90 }
-racc_nt_base = 90
+racc_nt_base = 91
racc_use_result_var = true
Racc_arg = [
racc_action_table,
racc_action_check,
racc_action_default,
racc_action_pointer,
racc_goto_table,
racc_goto_check,
racc_goto_default,
racc_goto_pointer,
racc_nt_base,
racc_reduce_table,
racc_token_table,
racc_shift_n,
racc_reduce_n,
racc_use_result_var ]
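# The constants above are the data the Racc runtime consumes: racc_reduce_table is a
# flat list of triples (rule length, nonterminal id, reduce callback), and Racc_arg
# bundles it with the action/goto tables and racc_token_table, which maps lexer token
# symbols such as :NAME or :FARROW to numeric ids. A minimal sketch of how a generated
# parser like this one is typically driven (illustrative only; Puppet supplies its own
# lexer and parser support code elsewhere, so the token stream below is an assumption):
#
#   class TinyDriver < Puppet::Pops::Parser::Parser
#     def parse(tokens)
#       @tokens = tokens.dup
#       do_parse                         # the Racc runtime walks the tables in Racc_arg
#     end
#     def next_token
#       @tokens.shift || [false, false]  # [false, false] signals end of input
#     end
#   end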
Racc_token_to_s_table = [
"$end",
"error",
"STRING",
"DQPRE",
"DQMID",
"DQPOST",
"WORD",
"LBRACK",
"RBRACK",
"LBRACE",
"RBRACE",
"SYMBOL",
"FARROW",
"COMMA",
"TRUE",
"FALSE",
"EQUALS",
"APPENDS",
"DELETES",
"LESSEQUAL",
"NOTEQUAL",
"DOT",
"COLON",
"LLCOLLECT",
"RRCOLLECT",
"QMARK",
"LPAREN",
"RPAREN",
"ISEQUAL",
"GREATEREQUAL",
"GREATERTHAN",
"LESSTHAN",
"IF",
"ELSE",
"DEFINE",
"ELSIF",
"VARIABLE",
"CLASS",
"INHERITS",
"NODE",
"BOOLEAN",
"NAME",
"SEMIC",
"CASE",
"DEFAULT",
"AT",
"ATAT",
"LCOLLECT",
"RCOLLECT",
"CLASSREF",
"NOT",
"OR",
"AND",
"UNDEF",
"PARROW",
"PLUS",
"MINUS",
"TIMES",
"DIV",
"LSHIFT",
"RSHIFT",
"UMINUS",
"MATCH",
"NOMATCH",
"REGEX",
"IN_EDGE",
"OUT_EDGE",
"IN_EDGE_SUB",
"OUT_EDGE_SUB",
"IN",
"UNLESS",
"PIPE",
"LAMBDA",
"SELBRACE",
"NUMBER",
"HEREDOC",
"SUBLOCATE",
"RENDER_STRING",
"RENDER_EXPR",
"EPP_START",
"EPP_END",
"EPP_END_TRIM",
"FUNCTION",
+ "PRIVATE",
+ "ATTR",
+ "TYPE",
"LOW",
"HIGH",
- "CALL",
"LISTSTART",
+ "SPLAT",
"MODULO",
- "TITLE_COLON",
- "CASE_COLON",
"$start",
"program",
"statements",
"epp_expression",
- "nil",
"syntactic_statements",
"syntactic_statement",
- "any_expression",
- "relationship_expression",
- "resource_expression",
+ "assignment",
+ "relationship",
+ "assignments",
+ "resource",
"expression",
- "higher_precedence",
+ "attribute_operations",
+ "additional_resource_bodies",
+ "resource_bodies",
+ "endsemi",
+ "endcomma",
+ "resource_body",
+ "primary_expression",
+ "call_function_expression",
"expressions",
"selector_entries",
- "call_function_expression",
- "primary_expression",
- "literal_expression",
"variable",
"call_method_with_lambda_expression",
"collection_expression",
"case_expression",
"if_expression",
"unless_expression",
"definition_expression",
"hostclass_expression",
"node_definition_expression",
"epp_render_expression",
- "function_definition",
+ "reserved_word",
"array",
- "boolean",
- "default",
"hash",
"regex",
- "text_or_name",
- "number",
- "type",
- "undef",
- "name",
"quotedtext",
- "endcomma",
+ "type",
"lambda",
"call_method_expression",
"named_access",
"lambda_parameter_list",
"lambda_rest",
"parameters",
"if_part",
"else",
"unless_else",
"case_options",
"case_option",
- "case_colon",
+ "options_statements",
+ "nil",
"selector_entry",
"selector_entry_list",
- "at",
- "resourceinstances",
- "endsemi",
- "attribute_operations",
- "resourceinst",
- "title_colon",
"collect_query",
"optional_query",
"attribute_operation",
"attribute_name",
"keyword",
"classname",
"parameter_list",
"opt_statements",
"stacked_classname",
"classparent",
"classnameordefault",
"hostnames",
"nodeparent",
"hostname",
"dotted_name",
+ "name_or_number",
"parameter",
+ "untyped_parameter",
+ "typed_parameter",
+ "regular_parameter",
+ "splat_parameter",
+ "parameter_type",
"hashpairs",
"hashpair",
"string",
"dq_string",
"heredoc",
"dqpre",
"dqrval",
"dqpost",
"dqmid",
"text_expression",
"dqtail",
"sublocated_text",
"epp_parameters_list",
+ "optional_statements",
"epp_end" ]
Racc_debug_parser = false
##### State transition tables end #####
# reduce 0 omitted
-module_eval(<<'.,.,', 'egrammar.ra', 66)
+module_eval(<<'.,.,', 'egrammar.ra', 65)
def _reduce_1(val, _values, result)
result = create_program(Factory.block_or_expression(*val[0]))
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 67)
+module_eval(<<'.,.,', 'egrammar.ra', 66)
def _reduce_2(val, _values, result)
result = create_program(Factory.block_or_expression(*val[0]))
result
end
.,.,
-# reduce 3 omitted
+module_eval(<<'.,.,', 'egrammar.ra', 67)
+ def _reduce_3(val, _values, result)
+ result = create_empty_program()
+ result
+ end
+.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 72)
+module_eval(<<'.,.,', 'egrammar.ra', 71)
def _reduce_4(val, _values, result)
result = transform_calls(val[0])
result
end
.,.,
module_eval(<<'.,.,', 'egrammar.ra', 78)
def _reduce_5(val, _values, result)
result = [val[0]]
result
end
.,.,
module_eval(<<'.,.,', 'egrammar.ra', 79)
def _reduce_6(val, _values, result)
result = val[0].push val[2]
result
end
.,.,
module_eval(<<'.,.,', 'egrammar.ra', 80)
def _reduce_7(val, _values, result)
result = val[0].push val[1]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 84)
+module_eval(<<'.,.,', 'egrammar.ra', 87)
def _reduce_8(val, _values, result)
result = val[0]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 85)
+module_eval(<<'.,.,', 'egrammar.ra', 88)
def _reduce_9(val, _values, result)
result = aryfy(val[0]).push val[2]
result
end
.,.,
# reduce 10 omitted
-module_eval(<<'.,.,', 'egrammar.ra', 91)
+module_eval(<<'.,.,', 'egrammar.ra', 93)
def _reduce_11(val, _values, result)
- result = val[0]
+ result = val[0].set(val[2]) ; loc result, val[1]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 92)
+module_eval(<<'.,.,', 'egrammar.ra', 94)
def _reduce_12(val, _values, result)
- result = val[0].relop(val[1][:value], val[2]); loc result, val[1]
+ result = val[0].plus_set(val[2]) ; loc result, val[1]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 93)
+module_eval(<<'.,.,', 'egrammar.ra', 95)
def _reduce_13(val, _values, result)
- result = val[0].relop(val[1][:value], val[2]); loc result, val[1]
+ result = val[0].minus_set(val[2]); loc result, val[1]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 94)
+module_eval(<<'.,.,', 'egrammar.ra', 98)
def _reduce_14(val, _values, result)
- result = val[0].relop(val[1][:value], val[2]); loc result, val[1]
+ result = [val[0]]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 95)
+module_eval(<<'.,.,', 'egrammar.ra', 99)
def _reduce_15(val, _values, result)
- result = val[0].relop(val[1][:value], val[2]); loc result, val[1]
+ result = val[0].push(val[2])
result
end
.,.,
# reduce 16 omitted
-module_eval(<<'.,.,', 'egrammar.ra', 102)
+module_eval(<<'.,.,', 'egrammar.ra', 103)
def _reduce_17(val, _values, result)
- result = val[0][*val[2]] ; loc result, val[0], val[3]
+ result = val[0].relop(val[1][:value], val[2]); loc result, val[1]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 103)
+module_eval(<<'.,.,', 'egrammar.ra', 104)
def _reduce_18(val, _values, result)
- result = val[0].in val[2] ; loc result, val[1]
+ result = val[0].relop(val[1][:value], val[2]); loc result, val[1]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 104)
+module_eval(<<'.,.,', 'egrammar.ra', 105)
def _reduce_19(val, _values, result)
- result = val[0] =~ val[2] ; loc result, val[1]
+ result = val[0].relop(val[1][:value], val[2]); loc result, val[1]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 105)
+module_eval(<<'.,.,', 'egrammar.ra', 106)
def _reduce_20(val, _values, result)
- result = val[0].mne val[2] ; loc result, val[1]
+ result = val[0].relop(val[1][:value], val[2]); loc result, val[1]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 106)
- def _reduce_21(val, _values, result)
- result = val[0] + val[2] ; loc result, val[1]
- result
- end
-.,.,
+# reduce 21 omitted
-module_eval(<<'.,.,', 'egrammar.ra', 107)
+module_eval(<<'.,.,', 'egrammar.ra', 115)
def _reduce_22(val, _values, result)
- result = val[0] - val[2] ; loc result, val[1]
+ result = val[1]
+ unless Factory.set_resource_form(result, :virtual)
+ # This is equivalent to a syntax error - additional semantic restrictions apply
+ error val[0], "Virtual (@) can only be applied to a Resource Expression"
+ end
+ # relocate the result
+ loc result, val[0], val[1]
+
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 108)
+module_eval(<<'.,.,', 'egrammar.ra', 126)
def _reduce_23(val, _values, result)
- result = val[0] / val[2] ; loc result, val[1]
+ result = val[1]
+ unless Factory.set_resource_form(result, :exported)
+ # This is equivalent to a syntax error - additional semantic restrictions apply
+ error val[0], "Exported (@@) can only be applied to a Resource Expression"
+ end
+ # relocate the result
+ loc result, val[0], val[1]
+
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 109)
+module_eval(<<'.,.,', 'egrammar.ra', 137)
def _reduce_24(val, _values, result)
- result = val[0] * val[2] ; loc result, val[1]
+ bodies = [Factory.RESOURCE_BODY(val[2], val[4])] + val[5]
+ result = Factory.RESOURCE(val[0], bodies)
+ loc result, val[0], val[6]
+
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 110)
+module_eval(<<'.,.,', 'egrammar.ra', 144)
def _reduce_25(val, _values, result)
- result = val[0] % val[2] ; loc result, val[1]
+ result = Factory.RESOURCE(Factory.fqn(token_text(val[0])), val[2])
+ loc result, val[0], val[4]
+
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 111)
+module_eval(<<'.,.,', 'egrammar.ra', 153)
def _reduce_26(val, _values, result)
- result = val[0] << val[2] ; loc result, val[1]
+ result = case Factory.resource_shape(val[0])
+ when :resource, :class
+ # This catches deprecated syntax.
+ # If the attribute operations do not include +>, then the found expression
+ # is actually a LEFT followed by LITERAL_HASH
+ #
+ unless tmp = transform_resource_wo_title(val[0], val[2])
+ error val[1], "Syntax error resource body without title or hash with +>"
+ end
+ tmp
+ when :defaults
+ Factory.RESOURCE_DEFAULTS(val[0], val[2])
+ when :override
+ # This was only done for override in original - TODO should it be here at all
+ Factory.RESOURCE_OVERRIDE(val[0], val[2])
+ else
+ error val[0], "Expression is not valid as a resource, resource-default, or resource-override"
+ end
+ loc result, val[0], val[4]
+
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 112)
+module_eval(<<'.,.,', 'egrammar.ra', 175)
def _reduce_27(val, _values, result)
- result = val[0] >> val[2] ; loc result, val[1]
+ result = Factory.RESOURCE_BODY(val[0], val[2])
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 113)
+module_eval(<<'.,.,', 'egrammar.ra', 178)
def _reduce_28(val, _values, result)
- result = val[1].minus() ; loc result, val[0]
+ result = [val[0]]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 114)
+module_eval(<<'.,.,', 'egrammar.ra', 179)
def _reduce_29(val, _values, result)
- result = val[0].ne val[2] ; loc result, val[1]
+ result = val[0].push val[2]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 115)
+module_eval(<<'.,.,', 'egrammar.ra', 185)
def _reduce_30(val, _values, result)
- result = val[0] == val[2] ; loc result, val[1]
+ result = []
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 116)
+module_eval(<<'.,.,', 'egrammar.ra', 186)
def _reduce_31(val, _values, result)
- result = val[0] > val[2] ; loc result, val[1]
+ result = []
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 117)
+module_eval(<<'.,.,', 'egrammar.ra', 187)
def _reduce_32(val, _values, result)
- result = val[0] >= val[2] ; loc result, val[1]
+ result = val[2]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 118)
- def _reduce_33(val, _values, result)
- result = val[0] < val[2] ; loc result, val[1]
- result
- end
-.,.,
+# reduce 33 omitted
-module_eval(<<'.,.,', 'egrammar.ra', 119)
- def _reduce_34(val, _values, result)
- result = val[0] <= val[2] ; loc result, val[1]
- result
- end
-.,.,
+# reduce 34 omitted
-module_eval(<<'.,.,', 'egrammar.ra', 120)
+module_eval(<<'.,.,', 'egrammar.ra', 194)
def _reduce_35(val, _values, result)
- result = val[1].not ; loc result, val[0]
+ result = val[0][*val[2]] ; loc result, val[0], val[3]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 121)
+module_eval(<<'.,.,', 'egrammar.ra', 195)
def _reduce_36(val, _values, result)
- result = val[0].and val[2] ; loc result, val[1]
+ result = val[0].in val[2] ; loc result, val[1]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 122)
+module_eval(<<'.,.,', 'egrammar.ra', 196)
def _reduce_37(val, _values, result)
- result = val[0].or val[2] ; loc result, val[1]
+ result = val[0] =~ val[2] ; loc result, val[1]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 123)
+module_eval(<<'.,.,', 'egrammar.ra', 197)
def _reduce_38(val, _values, result)
- result = val[0].set(val[2]) ; loc result, val[1]
+ result = val[0].mne val[2] ; loc result, val[1]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 124)
+module_eval(<<'.,.,', 'egrammar.ra', 198)
def _reduce_39(val, _values, result)
- result = val[0].plus_set(val[2]) ; loc result, val[1]
+ result = val[0] + val[2] ; loc result, val[1]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 125)
+module_eval(<<'.,.,', 'egrammar.ra', 199)
def _reduce_40(val, _values, result)
- result = val[0].minus_set(val[2]); loc result, val[1]
+ result = val[0] - val[2] ; loc result, val[1]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 126)
+module_eval(<<'.,.,', 'egrammar.ra', 200)
def _reduce_41(val, _values, result)
- result = val[0].select(*val[2]) ; loc result, val[0]
+ result = val[0] / val[2] ; loc result, val[1]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 127)
+module_eval(<<'.,.,', 'egrammar.ra', 201)
def _reduce_42(val, _values, result)
- result = val[1].paren() ; loc result, val[0]
+ result = val[0] * val[2] ; loc result, val[1]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 135)
+module_eval(<<'.,.,', 'egrammar.ra', 202)
def _reduce_43(val, _values, result)
- result = [val[0]]
+ result = val[0] % val[2] ; loc result, val[1]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 136)
+module_eval(<<'.,.,', 'egrammar.ra', 203)
def _reduce_44(val, _values, result)
- result = val[0].push(val[2])
+ result = val[0] << val[2] ; loc result, val[1]
result
end
.,.,
-# reduce 45 omitted
-
-# reduce 46 omitted
-
-# reduce 47 omitted
-
-# reduce 48 omitted
-
-# reduce 49 omitted
-
-# reduce 50 omitted
-
-# reduce 51 omitted
-
-# reduce 52 omitted
-
-# reduce 53 omitted
-
-# reduce 54 omitted
-
-# reduce 55 omitted
-
-# reduce 56 omitted
-
-# reduce 57 omitted
-
-# reduce 58 omitted
-
-# reduce 59 omitted
-
-# reduce 60 omitted
-
-# reduce 61 omitted
-
-# reduce 62 omitted
-
-# reduce 63 omitted
-
-# reduce 64 omitted
-
-# reduce 65 omitted
-
-# reduce 66 omitted
+module_eval(<<'.,.,', 'egrammar.ra', 204)
+ def _reduce_45(val, _values, result)
+ result = val[0] >> val[2] ; loc result, val[1]
+ result
+ end
+.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 169)
- def _reduce_67(val, _values, result)
- result = val[0]
+module_eval(<<'.,.,', 'egrammar.ra', 205)
+ def _reduce_46(val, _values, result)
+ result = val[1].minus() ; loc result, val[0]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 170)
- def _reduce_68(val, _values, result)
- result = val[0]
+module_eval(<<'.,.,', 'egrammar.ra', 206)
+ def _reduce_47(val, _values, result)
+ result = val[1].unfold() ; loc result, val[0]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 178)
- def _reduce_69(val, _values, result)
- result = Factory.CALL_NAMED(val[0], true, val[2])
- loc result, val[0], val[4]
-
+module_eval(<<'.,.,', 'egrammar.ra', 207)
+ def _reduce_48(val, _values, result)
+ result = val[0].ne val[2] ; loc result, val[1]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 182)
- def _reduce_70(val, _values, result)
- result = Factory.CALL_NAMED(val[0], true, [])
- loc result, val[0], val[2]
-
+module_eval(<<'.,.,', 'egrammar.ra', 208)
+ def _reduce_49(val, _values, result)
+ result = val[0] == val[2] ; loc result, val[1]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 186)
- def _reduce_71(val, _values, result)
- result = Factory.CALL_NAMED(val[0], true, val[2])
- loc result, val[0], val[4]
- result.lambda = val[5]
-
+module_eval(<<'.,.,', 'egrammar.ra', 209)
+ def _reduce_50(val, _values, result)
+ result = val[0] > val[2] ; loc result, val[1]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 191)
- def _reduce_72(val, _values, result)
- result = Factory.CALL_NAMED(val[0], true, [])
- loc result, val[0], val[2]
- result.lambda = val[3]
-
+module_eval(<<'.,.,', 'egrammar.ra', 210)
+ def _reduce_51(val, _values, result)
+ result = val[0] >= val[2] ; loc result, val[1]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 195)
- def _reduce_73(val, _values, result)
- result = val[0]
+module_eval(<<'.,.,', 'egrammar.ra', 211)
+ def _reduce_52(val, _values, result)
+ result = val[0] < val[2] ; loc result, val[1]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 200)
- def _reduce_74(val, _values, result)
- result = val[0]
+module_eval(<<'.,.,', 'egrammar.ra', 212)
+ def _reduce_53(val, _values, result)
+ result = val[0] <= val[2] ; loc result, val[1]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 201)
- def _reduce_75(val, _values, result)
- result = val[0]; val[0].lambda = val[1]
+module_eval(<<'.,.,', 'egrammar.ra', 213)
+ def _reduce_54(val, _values, result)
+ result = val[1].not ; loc result, val[0]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 204)
- def _reduce_76(val, _values, result)
- result = Factory.CALL_METHOD(val[0], val[2]); loc result, val[1], val[3]
+module_eval(<<'.,.,', 'egrammar.ra', 214)
+ def _reduce_55(val, _values, result)
+ result = val[0].and val[2] ; loc result, val[1]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 205)
- def _reduce_77(val, _values, result)
- result = Factory.CALL_METHOD(val[0], []); loc result, val[1], val[3]
+module_eval(<<'.,.,', 'egrammar.ra', 215)
+ def _reduce_56(val, _values, result)
+ result = val[0].or val[2] ; loc result, val[1]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 206)
- def _reduce_78(val, _values, result)
- result = Factory.CALL_METHOD(val[0], []); loc result, val[0]
+module_eval(<<'.,.,', 'egrammar.ra', 216)
+ def _reduce_57(val, _values, result)
+ result = val[0].select(*val[2]) ; loc result, val[0]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 211)
- def _reduce_79(val, _values, result)
- result = val[0].dot(Factory.fqn(val[2][:value]))
- loc result, val[1], val[2]
-
+module_eval(<<'.,.,', 'egrammar.ra', 217)
+ def _reduce_58(val, _values, result)
+ result = val[1].paren() ; loc result, val[0]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 223)
- def _reduce_80(val, _values, result)
- result = Factory.LAMBDA(val[0], val[1])
-# loc result, val[1] # TODO
-
+module_eval(<<'.,.,', 'egrammar.ra', 227)
+ def _reduce_59(val, _values, result)
+ result = [val[0]]
result
end
.,.,
module_eval(<<'.,.,', 'egrammar.ra', 228)
- def _reduce_81(val, _values, result)
- result = val[1]
+ def _reduce_60(val, _values, result)
+ result = val[0].push(val[2])
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 229)
- def _reduce_82(val, _values, result)
- result = nil
+# reduce 61 omitted
+
+# reduce 62 omitted
+
+# reduce 63 omitted
+
+# reduce 64 omitted
+
+# reduce 65 omitted
+
+# reduce 66 omitted
+
+# reduce 67 omitted
+
+# reduce 68 omitted
+
+# reduce 69 omitted
+
+# reduce 70 omitted
+
+# reduce 71 omitted
+
+# reduce 72 omitted
+
+# reduce 73 omitted
+
+# reduce 74 omitted
+
+# reduce 75 omitted
+
+# reduce 76 omitted
+
+module_eval(<<'.,.,', 'egrammar.ra', 247)
+ def _reduce_77(val, _values, result)
+ result = Factory.NUMBER(val[0][:value]) ; loc result, val[0]
+ result
+ end
+.,.,
+
+module_eval(<<'.,.,', 'egrammar.ra', 248)
+ def _reduce_78(val, _values, result)
+ result = Factory.literal(val[0][:value]) ; loc result, val[0]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 233)
+module_eval(<<'.,.,', 'egrammar.ra', 249)
+ def _reduce_79(val, _values, result)
+ result = Factory.literal(:default) ; loc result, val[0]
+ result
+ end
+.,.,
+
+module_eval(<<'.,.,', 'egrammar.ra', 250)
+ def _reduce_80(val, _values, result)
+ result = Factory.literal(:undef) ; loc result, val[0]
+ result
+ end
+.,.,
+
+module_eval(<<'.,.,', 'egrammar.ra', 251)
+ def _reduce_81(val, _values, result)
+ result = Factory.QNAME_OR_NUMBER(val[0][:value]) ; loc result, val[0]
+ result
+ end
+.,.,
+
+module_eval(<<'.,.,', 'egrammar.ra', 260)
+ def _reduce_82(val, _values, result)
+ result = Factory.CALL_NAMED(val[0], true, val[2])
+ loc result, val[0], val[4]
+
+ result
+ end
+.,.,
+
+module_eval(<<'.,.,', 'egrammar.ra', 264)
def _reduce_83(val, _values, result)
- result = []
+ result = Factory.CALL_NAMED(val[0], true, [])
+ loc result, val[0], val[2]
+
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 234)
+module_eval(<<'.,.,', 'egrammar.ra', 268)
def _reduce_84(val, _values, result)
- result = val[1]
+ result = Factory.CALL_NAMED(val[0], true, val[2])
+ loc result, val[0], val[4]
+ result.lambda = val[5]
+
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 244)
+module_eval(<<'.,.,', 'egrammar.ra', 273)
def _reduce_85(val, _values, result)
- result = val[1]
- loc(result, val[0], val[1])
+ result = Factory.CALL_NAMED(val[0], true, [])
+ loc result, val[0], val[2]
+ result.lambda = val[3]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 251)
+module_eval(<<'.,.,', 'egrammar.ra', 281)
def _reduce_86(val, _values, result)
- result = Factory.IF(val[0], Factory.block_or_expression(*val[2]), val[4])
- loc(result, val[0], (val[4] ? val[4] : val[3]))
-
+ result = val[0]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 255)
+module_eval(<<'.,.,', 'egrammar.ra', 282)
def _reduce_87(val, _values, result)
- result = Factory.IF(val[0], nil, val[3])
- loc(result, val[0], (val[3] ? val[3] : val[2]))
-
+ result = val[0]; val[0].lambda = val[1]
result
end
.,.,
-# reduce 88 omitted
+module_eval(<<'.,.,', 'egrammar.ra', 285)
+ def _reduce_88(val, _values, result)
+ result = Factory.CALL_METHOD(val[0], val[2]); loc result, val[1], val[3]
+ result
+ end
+.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 263)
+module_eval(<<'.,.,', 'egrammar.ra', 286)
def _reduce_89(val, _values, result)
- result = val[1]
- loc(result, val[0], val[1])
-
+ result = Factory.CALL_METHOD(val[0], []); loc result, val[1], val[3]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 267)
+module_eval(<<'.,.,', 'egrammar.ra', 287)
def _reduce_90(val, _values, result)
- result = Factory.block_or_expression(*val[2])
- loc result, val[0], val[3]
-
+ result = Factory.CALL_METHOD(val[0], []); loc result, val[0]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 271)
+module_eval(<<'.,.,', 'egrammar.ra', 291)
def _reduce_91(val, _values, result)
- result = nil # don't think a nop is needed here either
+ result = val[0].dot(Factory.fqn(val[2][:value]))
+ loc result, val[1], val[2]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 280)
+module_eval(<<'.,.,', 'egrammar.ra', 299)
def _reduce_92(val, _values, result)
- result = Factory.UNLESS(val[1], Factory.block_or_expression(*val[3]), val[5])
- loc result, val[0], val[4]
+ result = Factory.LAMBDA(val[0][:value], val[1][:value])
+ loc result, val[0][:start], val[1][:end]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 284)
+module_eval(<<'.,.,', 'egrammar.ra', 304)
def _reduce_93(val, _values, result)
- result = Factory.UNLESS(val[1], nil, nil)
- loc result, val[0], val[4]
-
+ result = {:end => val[2], :value =>val[1] }
result
end
.,.,
-# reduce 94 omitted
+module_eval(<<'.,.,', 'egrammar.ra', 305)
+ def _reduce_94(val, _values, result)
+ result = {:end => val[1], :value => nil }
+ result
+ end
+.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 294)
+module_eval(<<'.,.,', 'egrammar.ra', 309)
def _reduce_95(val, _values, result)
- result = Factory.block_or_expression(*val[2])
- loc result, val[0], val[3]
-
+ result = {:start => val[0], :value => [] }
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 298)
+module_eval(<<'.,.,', 'egrammar.ra', 310)
def _reduce_96(val, _values, result)
- result = nil # don't think a nop is needed here either
-
+ result = {:start => val[0], :value => val[1] }
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 306)
+module_eval(<<'.,.,', 'egrammar.ra', 318)
def _reduce_97(val, _values, result)
- result = Factory.CASE(val[1], *val[3])
- loc result, val[0], val[4]
+ result = val[1]
+ loc(result, val[0], val[1])
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 312)
+module_eval(<<'.,.,', 'egrammar.ra', 325)
def _reduce_98(val, _values, result)
- result = [val[0]]
+ result = Factory.IF(val[0], Factory.block_or_expression(*val[2]), val[4])
+ loc(result, val[0], (val[4] ? val[4] : val[3]))
+
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 313)
+module_eval(<<'.,.,', 'egrammar.ra', 329)
def _reduce_99(val, _values, result)
- result = val[0].push val[1]
- result
- end
-.,.,
-
-module_eval(<<'.,.,', 'egrammar.ra', 318)
- def _reduce_100(val, _values, result)
- result = Factory.WHEN(val[0], val[3])
- loc result, val[1], val[4]
+ result = Factory.IF(val[0], nil, val[3])
+ loc(result, val[0], (val[3] ? val[3] : val[2]))
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 322)
+# reduce 100 omitted
+
+module_eval(<<'.,.,', 'egrammar.ra', 337)
def _reduce_101(val, _values, result)
- result = Factory.WHEN(val[0], nil)
- loc result, val[1], val[3]
+ result = val[1]
+ loc(result, val[0], val[1])
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 326)
+module_eval(<<'.,.,', 'egrammar.ra', 341)
def _reduce_102(val, _values, result)
- result = val[0]
+ result = Factory.block_or_expression(*val[2])
+ loc result, val[0], val[3]
+
result
end
.,.,
-# reduce 103 omitted
-
-module_eval(<<'.,.,', 'egrammar.ra', 337)
- def _reduce_104(val, _values, result)
- result = val[1]
+module_eval(<<'.,.,', 'egrammar.ra', 345)
+ def _reduce_103(val, _values, result)
+ result = nil # don't think a nop is needed here either
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 342)
- def _reduce_105(val, _values, result)
- result = [val[0]]
+module_eval(<<'.,.,', 'egrammar.ra', 352)
+ def _reduce_104(val, _values, result)
+ result = Factory.UNLESS(val[1], Factory.block_or_expression(*val[3]), val[5])
+ loc result, val[0], val[4]
+
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 343)
- def _reduce_106(val, _values, result)
- result = val[0].push val[2]
+module_eval(<<'.,.,', 'egrammar.ra', 356)
+ def _reduce_105(val, _values, result)
+ result = Factory.UNLESS(val[1], nil, nil)
+ loc result, val[0], val[4]
+
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 348)
+# reduce 106 omitted
+
+module_eval(<<'.,.,', 'egrammar.ra', 366)
def _reduce_107(val, _values, result)
- result = Factory.MAP(val[0], val[2]) ; loc result, val[1]
+ result = Factory.block_or_expression(*val[2])
+ loc result, val[0], val[3]
+
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 360)
+module_eval(<<'.,.,', 'egrammar.ra', 370)
def _reduce_108(val, _values, result)
- result = val[0]
-
+ result = nil # don't think a nop is needed here either
+
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 363)
+module_eval(<<'.,.,', 'egrammar.ra', 377)
def _reduce_109(val, _values, result)
- result = case Factory.resource_shape(val[1])
- when :resource, :class
- tmp = Factory.RESOURCE(Factory.fqn(token_text(val[1])), val[3])
- tmp.form = val[0]
- tmp
- when :defaults
- error val[1], "A resource default can not be virtual or exported"
- when :override
- error val[1], "A resource override can not be virtual or exported"
- else
- error val[1], "Expression is not valid as a resource, resource-default, or resource-override"
- end
- loc result, val[1], val[4]
+ result = Factory.CASE(val[1], *val[3])
+ loc result, val[0], val[4]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 378)
+module_eval(<<'.,.,', 'egrammar.ra', 383)
def _reduce_110(val, _values, result)
- result = case Factory.resource_shape(val[1])
- when :resource, :class, :defaults, :override
- error val[1], "Defaults are not virtualizable"
- else
- error val[1], "Expression is not valid as a resource, resource-default, or resource-override"
- end
-
+ result = [val[0]]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 386)
+module_eval(<<'.,.,', 'egrammar.ra', 384)
def _reduce_111(val, _values, result)
- result = case Factory.resource_shape(val[0])
- when :resource, :class
- Factory.RESOURCE(Factory.fqn(token_text(val[0])), val[2])
- when :defaults
- error val[1], "A resource default can not specify a resource name"
- when :override
- error val[1], "A resource override does not allow override of name of resource"
- else
- error val[1], "Expression is not valid as a resource, resource-default, or resource-override"
- end
- loc result, val[0], val[4]
-
+ result = val[0].push val[1]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 399)
+module_eval(<<'.,.,', 'egrammar.ra', 389)
def _reduce_112(val, _values, result)
- result = case Factory.resource_shape(val[0])
- when :resource, :class
- # This catches deprecated syntax.
- # If the attribute operations does not include +>, then the found expression
- # is actually a LEFT followed by LITERAL_HASH
- #
- unless tmp = transform_resource_wo_title(val[0], val[2])
- error val[1], "Syntax error resource body without title or hash with +>"
- end
- tmp
- when :defaults
- Factory.RESOURCE_DEFAULTS(val[0], val[2])
- when :override
- # This was only done for override in original - TODO should it be here at all
- Factory.RESOURCE_OVERRIDE(val[0], val[2])
- else
- error val[0], "Expression is not valid as a resource, resource-default, or resource-override"
- end
- loc result, val[0], val[4]
-
+ result = Factory.WHEN(val[0], val[3]); loc result, val[1], val[4]
+
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 420)
- def _reduce_113(val, _values, result)
- result = Factory.RESOURCE(Factory.fqn(token_text(val[1])), val[3])
- result.form = val[0]
- loc result, val[1], val[5]
-
- result
- end
-.,.,
+# reduce 113 omitted
-module_eval(<<'.,.,', 'egrammar.ra', 425)
- def _reduce_114(val, _values, result)
- result = Factory.RESOURCE(Factory.fqn(token_text(val[0])), val[2])
- loc result, val[0], val[4]
-
- result
- end
-.,.,
+# reduce 114 omitted
-module_eval(<<'.,.,', 'egrammar.ra', 430)
- def _reduce_115(val, _values, result)
- result = Factory.RESOURCE_BODY(val[0], val[2])
- result
- end
-.,.,
+# reduce 115 omitted
-module_eval(<<'.,.,', 'egrammar.ra', 432)
+module_eval(<<'.,.,', 'egrammar.ra', 405)
def _reduce_116(val, _values, result)
- result = val[0]
+ result = val[1]
+
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 435)
+module_eval(<<'.,.,', 'egrammar.ra', 410)
def _reduce_117(val, _values, result)
result = [val[0]]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 436)
+module_eval(<<'.,.,', 'egrammar.ra', 411)
def _reduce_118(val, _values, result)
result = val[0].push val[2]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 441)
+module_eval(<<'.,.,', 'egrammar.ra', 416)
def _reduce_119(val, _values, result)
- result = :virtual
+ result = Factory.MAP(val[0], val[2]) ; loc result, val[1]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 442)
+module_eval(<<'.,.,', 'egrammar.ra', 426)
def _reduce_120(val, _values, result)
- result = :exported
- result
- end
-.,.,
-
-module_eval(<<'.,.,', 'egrammar.ra', 443)
- def _reduce_121(val, _values, result)
- result = :exported
- result
- end
-.,.,
-
-module_eval(<<'.,.,', 'egrammar.ra', 455)
- def _reduce_122(val, _values, result)
result = Factory.COLLECT(val[0], val[1], val[3])
loc result, val[0], val[5]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 459)
- def _reduce_123(val, _values, result)
+module_eval(<<'.,.,', 'egrammar.ra', 430)
+ def _reduce_121(val, _values, result)
result = Factory.COLLECT(val[0], val[1], [])
loc result, val[0], val[1]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 464)
- def _reduce_124(val, _values, result)
+module_eval(<<'.,.,', 'egrammar.ra', 435)
+ def _reduce_122(val, _values, result)
result = Factory.VIRTUAL_QUERY(val[1]) ; loc result, val[0], val[2]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 465)
- def _reduce_125(val, _values, result)
+module_eval(<<'.,.,', 'egrammar.ra', 436)
+ def _reduce_123(val, _values, result)
result = Factory.EXPORTED_QUERY(val[1]) ; loc result, val[0], val[2]
result
end
.,.,
-# reduce 126 omitted
+# reduce 124 omitted
-# reduce 127 omitted
+# reduce 125 omitted
-module_eval(<<'.,.,', 'egrammar.ra', 478)
- def _reduce_128(val, _values, result)
+module_eval(<<'.,.,', 'egrammar.ra', 445)
+ def _reduce_126(val, _values, result)
result = []
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 479)
- def _reduce_129(val, _values, result)
+module_eval(<<'.,.,', 'egrammar.ra', 446)
+ def _reduce_127(val, _values, result)
result = [val[0]]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 480)
- def _reduce_130(val, _values, result)
+module_eval(<<'.,.,', 'egrammar.ra', 447)
+ def _reduce_128(val, _values, result)
result = val[0].push(val[2])
result
end
.,.,
-# reduce 131 omitted
+# reduce 129 omitted
-# reduce 132 omitted
+# reduce 130 omitted
-# reduce 133 omitted
-
-module_eval(<<'.,.,', 'egrammar.ra', 496)
- def _reduce_134(val, _values, result)
+module_eval(<<'.,.,', 'egrammar.ra', 463)
+ def _reduce_131(val, _values, result)
result = Factory.ATTRIBUTE_OP(val[0][:value], :'=>', val[2])
loc result, val[0], val[2]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 500)
- def _reduce_135(val, _values, result)
+module_eval(<<'.,.,', 'egrammar.ra', 467)
+ def _reduce_132(val, _values, result)
result = Factory.ATTRIBUTE_OP(val[0][:value], :'+>', val[2])
loc result, val[0], val[2]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 510)
- def _reduce_136(val, _values, result)
+module_eval(<<'.,.,', 'egrammar.ra', 471)
+ def _reduce_133(val, _values, result)
+ result = Factory.ATTRIBUTES_OP(val[2]) ; loc result, val[0], val[2]
+
+ result
+ end
+.,.,
+
+module_eval(<<'.,.,', 'egrammar.ra', 480)
+ def _reduce_134(val, _values, result)
result = add_definition(Factory.DEFINITION(classname(val[1][:value]), val[2], val[4]))
loc result, val[0], val[5]
# New lexer does not keep track of this, this is done in validation
if @lexer.respond_to?(:'indefine=')
@lexer.indefine = false
end
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 524)
- def _reduce_137(val, _values, result)
+module_eval(<<'.,.,', 'egrammar.ra', 494)
+ def _reduce_135(val, _values, result)
# Remove this class' name from the namestack as all nested classes have been parsed
namepop
result = add_definition(Factory.HOSTCLASS(classname(val[1][:value]), val[2], token_text(val[3]), val[5]))
loc result, val[0], val[6]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 534)
- def _reduce_138(val, _values, result)
+module_eval(<<'.,.,', 'egrammar.ra', 504)
+ def _reduce_136(val, _values, result)
namestack(val[0][:value]) ; result = val[0]
result
end
.,.,
-# reduce 139 omitted
+# reduce 137 omitted
-# reduce 140 omitted
+# reduce 138 omitted
-# reduce 141 omitted
+# reduce 139 omitted
-module_eval(<<'.,.,', 'egrammar.ra', 543)
- def _reduce_142(val, _values, result)
+module_eval(<<'.,.,', 'egrammar.ra', 513)
+ def _reduce_140(val, _values, result)
result = val[1]
result
end
.,.,
-# reduce 143 omitted
+# reduce 141 omitted
-# reduce 144 omitted
+# reduce 142 omitted
-module_eval(<<'.,.,', 'egrammar.ra', 560)
- def _reduce_145(val, _values, result)
- result = add_definition(Factory.NODE(val[1], val[2], val[4]))
- loc result, val[0], val[5]
+module_eval(<<'.,.,', 'egrammar.ra', 530)
+ def _reduce_143(val, _values, result)
+ result = add_definition(Factory.NODE(val[1], val[3], val[5]))
+ loc result, val[0], val[6]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 564)
- def _reduce_146(val, _values, result)
- result = add_definition(Factory.NODE(val[1], val[2], nil))
- loc result, val[0], val[4]
+module_eval(<<'.,.,', 'egrammar.ra', 534)
+ def _reduce_144(val, _values, result)
+ result = add_definition(Factory.NODE(val[1], val[3], nil))
+ loc result, val[0], val[5]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 574)
- def _reduce_147(val, _values, result)
+module_eval(<<'.,.,', 'egrammar.ra', 544)
+ def _reduce_145(val, _values, result)
result = [result]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 575)
- def _reduce_148(val, _values, result)
+module_eval(<<'.,.,', 'egrammar.ra', 545)
+ def _reduce_146(val, _values, result)
result = val[0].push(val[2])
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 580)
- def _reduce_149(val, _values, result)
- result = val[0]
- result
- end
-.,.,
+# reduce 147 omitted
-module_eval(<<'.,.,', 'egrammar.ra', 581)
- def _reduce_150(val, _values, result)
- result = val[0]
- result
- end
-.,.,
+# reduce 148 omitted
-module_eval(<<'.,.,', 'egrammar.ra', 582)
- def _reduce_151(val, _values, result)
+module_eval(<<'.,.,', 'egrammar.ra', 552)
+ def _reduce_149(val, _values, result)
result = Factory.literal(:default); loc result, val[0]
result
end
.,.,
-# reduce 152 omitted
+# reduce 150 omitted
-module_eval(<<'.,.,', 'egrammar.ra', 586)
- def _reduce_153(val, _values, result)
+module_eval(<<'.,.,', 'egrammar.ra', 556)
+ def _reduce_151(val, _values, result)
result = Factory.literal(val[0][:value]); loc result, val[0]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 587)
- def _reduce_154(val, _values, result)
+module_eval(<<'.,.,', 'egrammar.ra', 557)
+ def _reduce_152(val, _values, result)
result = Factory.concat(val[0], '.', val[2][:value]); loc result, val[0], val[2]
result
end
.,.,
+# reduce 153 omitted
+
+# reduce 154 omitted
+
# reduce 155 omitted
-module_eval(<<'.,.,', 'egrammar.ra', 592)
+module_eval(<<'.,.,', 'egrammar.ra', 566)
def _reduce_156(val, _values, result)
result = val[1]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 597)
- def _reduce_157(val, _values, result)
- result = Factory.QNAME(val[0][:value]) ; loc result, val[0]
- result
- end
-.,.,
+# reduce 157 omitted
-module_eval(<<'.,.,', 'egrammar.ra', 609)
+module_eval(<<'.,.,', 'egrammar.ra', 583)
def _reduce_158(val, _values, result)
- result = val[0]
+ error val[0], "'class' is not a valid classname"
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 610)
+module_eval(<<'.,.,', 'egrammar.ra', 587)
def _reduce_159(val, _values, result)
- error val[0], "'class' is not a valid classname"
+ result = []
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 614)
+module_eval(<<'.,.,', 'egrammar.ra', 588)
def _reduce_160(val, _values, result)
result = []
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 615)
+module_eval(<<'.,.,', 'egrammar.ra', 589)
def _reduce_161(val, _values, result)
- result = []
- result
- end
-.,.,
-
-module_eval(<<'.,.,', 'egrammar.ra', 616)
- def _reduce_162(val, _values, result)
result = val[1]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 620)
- def _reduce_163(val, _values, result)
+module_eval(<<'.,.,', 'egrammar.ra', 593)
+ def _reduce_162(val, _values, result)
result = [val[0]]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 621)
- def _reduce_164(val, _values, result)
+module_eval(<<'.,.,', 'egrammar.ra', 594)
+ def _reduce_163(val, _values, result)
result = val[0].push(val[2])
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 625)
- def _reduce_165(val, _values, result)
- result = Factory.PARAM(val[0][:value], val[2]) ; loc result, val[0]
- result
- end
-.,.,
+# reduce 164 omitted
-module_eval(<<'.,.,', 'egrammar.ra', 626)
- def _reduce_166(val, _values, result)
- result = Factory.PARAM(val[0][:value]); loc result, val[0]
- result
- end
-.,.,
+# reduce 165 omitted
-module_eval(<<'.,.,', 'egrammar.ra', 639)
- def _reduce_167(val, _values, result)
- result = Factory.fqn(val[0][:value]).var ; loc result, val[0]
- result
- end
-.,.,
+# reduce 166 omitted
+
+# reduce 167 omitted
-module_eval(<<'.,.,', 'egrammar.ra', 645)
+module_eval(<<'.,.,', 'egrammar.ra', 606)
def _reduce_168(val, _values, result)
- result = Factory.LIST(val[1]); loc result, val[0], val[2]
+ result = Factory.PARAM(val[0][:value], val[2]) ; loc result, val[0]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 646)
+module_eval(<<'.,.,', 'egrammar.ra', 607)
def _reduce_169(val, _values, result)
- result = Factory.LIST(val[1]); loc result, val[0], val[3]
+ result = Factory.PARAM(val[0][:value]); loc result, val[0]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 647)
+module_eval(<<'.,.,', 'egrammar.ra', 610)
def _reduce_170(val, _values, result)
- result = Factory.literal([]) ; loc result, val[0]
+ result = val[1]; val[1].captures_rest()
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 648)
+module_eval(<<'.,.,', 'egrammar.ra', 613)
def _reduce_171(val, _values, result)
- result = Factory.LIST(val[1]); loc result, val[0], val[2]
+ val[1].type_expr(val[0]) ; result = val[1]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 649)
+module_eval(<<'.,.,', 'egrammar.ra', 616)
def _reduce_172(val, _values, result)
- result = Factory.LIST(val[1]); loc result, val[0], val[3]
+ result = val[0]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 650)
+module_eval(<<'.,.,', 'egrammar.ra', 617)
def _reduce_173(val, _values, result)
- result = Factory.literal([]) ; loc result, val[0]
+ result = val[0][*val[2]] ; loc result, val[0], val[3]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 653)
+module_eval(<<'.,.,', 'egrammar.ra', 622)
def _reduce_174(val, _values, result)
- result = Factory.HASH(val[1]); loc result, val[0], val[2]
+ result = Factory.fqn(val[0][:value]).var ; loc result, val[0]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 654)
+module_eval(<<'.,.,', 'egrammar.ra', 627)
def _reduce_175(val, _values, result)
- result = Factory.HASH(val[1]); loc result, val[0], val[3]
+ result = Factory.RESERVED(val[0][:value]) ; loc result, val[0]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 655)
+module_eval(<<'.,.,', 'egrammar.ra', 628)
def _reduce_176(val, _values, result)
- result = Factory.literal({}) ; loc result, val[0], val[3]
+ result = Factory.RESERVED(val[0][:value]) ; loc result, val[0]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 658)
+module_eval(<<'.,.,', 'egrammar.ra', 629)
def _reduce_177(val, _values, result)
- result = [val[0]]
+ result = Factory.RESERVED(val[0][:value]) ; loc result, val[0]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 659)
+module_eval(<<'.,.,', 'egrammar.ra', 630)
def _reduce_178(val, _values, result)
- result = val[0].push val[2]
+ result = Factory.RESERVED(val[0][:value]) ; loc result, val[0]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 662)
+module_eval(<<'.,.,', 'egrammar.ra', 636)
def _reduce_179(val, _values, result)
- result = Factory.KEY_ENTRY(val[0], val[2]); loc result, val[1]
+ result = Factory.LIST(val[1]); loc result, val[0], val[3]
result
end
.,.,
-# reduce 180 omitted
-
-# reduce 181 omitted
-
-# reduce 182 omitted
-
-module_eval(<<'.,.,', 'egrammar.ra', 670)
- def _reduce_183(val, _values, result)
- result = Factory.literal(val[0][:value]) ; loc result, val[0]
+module_eval(<<'.,.,', 'egrammar.ra', 637)
+ def _reduce_180(val, _values, result)
+ result = Factory.literal([]) ; loc result, val[0]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 671)
- def _reduce_184(val, _values, result)
- result = Factory.literal(val[0][:value]) ; loc result, val[0]
+module_eval(<<'.,.,', 'egrammar.ra', 638)
+ def _reduce_181(val, _values, result)
+ result = Factory.LIST(val[1]); loc result, val[0], val[3]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 673)
- def _reduce_185(val, _values, result)
- result = Factory.string(val[0], *val[1]) ; loc result, val[0], val[1][-1]
+module_eval(<<'.,.,', 'egrammar.ra', 639)
+ def _reduce_182(val, _values, result)
+ result = Factory.literal([]) ; loc result, val[0]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 674)
- def _reduce_186(val, _values, result)
- result = Factory.literal(val[0][:value]); loc result, val[0]
+module_eval(<<'.,.,', 'egrammar.ra', 642)
+ def _reduce_183(val, _values, result)
+ result = Factory.HASH(val[1]); loc result, val[0], val[2]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 675)
- def _reduce_187(val, _values, result)
- result = Factory.literal(val[0][:value]); loc result, val[0]
+module_eval(<<'.,.,', 'egrammar.ra', 643)
+ def _reduce_184(val, _values, result)
+ result = Factory.HASH(val[1]); loc result, val[0], val[3]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 676)
- def _reduce_188(val, _values, result)
- result = Factory.literal(val[0][:value]); loc result, val[0]
+module_eval(<<'.,.,', 'egrammar.ra', 644)
+ def _reduce_185(val, _values, result)
+ result = Factory.literal({}) ; loc result, val[0], val[3]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 677)
- def _reduce_189(val, _values, result)
- result = [val[0]] + val[1]
+module_eval(<<'.,.,', 'egrammar.ra', 647)
+ def _reduce_186(val, _values, result)
+ result = [val[0]]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 678)
- def _reduce_190(val, _values, result)
- result = Factory.TEXT(val[0])
+module_eval(<<'.,.,', 'egrammar.ra', 648)
+ def _reduce_187(val, _values, result)
+ result = val[0].push val[2]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 681)
- def _reduce_191(val, _values, result)
- result = [val[0]]
+module_eval(<<'.,.,', 'egrammar.ra', 651)
+ def _reduce_188(val, _values, result)
+ result = Factory.KEY_ENTRY(val[0], val[2]); loc result, val[1]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 682)
+# reduce 189 omitted
+
+# reduce 190 omitted
+
+# reduce 191 omitted
+
+module_eval(<<'.,.,', 'egrammar.ra', 659)
def _reduce_192(val, _values, result)
- result = [val[0]] + val[1]
+ result = Factory.literal(val[0][:value]) ; loc result, val[0]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 685)
+module_eval(<<'.,.,', 'egrammar.ra', 660)
def _reduce_193(val, _values, result)
- result = Factory.HEREDOC(val[0][:value], val[1]); loc result, val[0]
+ result = Factory.literal(val[0][:value]) ; loc result, val[0]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 688)
+module_eval(<<'.,.,', 'egrammar.ra', 662)
def _reduce_194(val, _values, result)
- result = Factory.SUBLOCATE(val[0], val[1]); loc result, val[0]
+ result = Factory.string(val[0], *val[1]) ; loc result, val[0], val[1][-1]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 689)
+module_eval(<<'.,.,', 'egrammar.ra', 663)
def _reduce_195(val, _values, result)
- result = Factory.SUBLOCATE(val[0], val[1]); loc result, val[0]
+ result = Factory.literal(val[0][:value]); loc result, val[0]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 692)
+module_eval(<<'.,.,', 'egrammar.ra', 664)
def _reduce_196(val, _values, result)
- result = Factory.EPP(val[1], val[2]); loc result, val[0]
+ result = Factory.literal(val[0][:value]); loc result, val[0]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 695)
+module_eval(<<'.,.,', 'egrammar.ra', 665)
def _reduce_197(val, _values, result)
- result = nil
+ result = Factory.literal(val[0][:value]); loc result, val[0]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 696)
+module_eval(<<'.,.,', 'egrammar.ra', 666)
def _reduce_198(val, _values, result)
- result = []
+ result = [val[0]] + val[1]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 697)
+module_eval(<<'.,.,', 'egrammar.ra', 667)
def _reduce_199(val, _values, result)
- result = val[1]
+ result = Factory.TEXT(val[0])
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 700)
+module_eval(<<'.,.,', 'egrammar.ra', 670)
def _reduce_200(val, _values, result)
- result = Factory.RENDER_STRING(val[0][:value]); loc result, val[0]
+ result = [val[0]]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 701)
+module_eval(<<'.,.,', 'egrammar.ra', 671)
def _reduce_201(val, _values, result)
- result = Factory.RENDER_EXPR(val[1]); loc result, val[0], val[2]
+ result = [val[0]] + val[1]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 702)
+module_eval(<<'.,.,', 'egrammar.ra', 674)
def _reduce_202(val, _values, result)
- result = Factory.RENDER_EXPR(Factory.block_or_expression(*val[2])); loc result, val[0], val[4]
+ result = Factory.HEREDOC(val[0][:value], val[1]); loc result, val[0]
result
end
.,.,
-# reduce 203 omitted
-
-# reduce 204 omitted
-
-module_eval(<<'.,.,', 'egrammar.ra', 708)
- def _reduce_205(val, _values, result)
- result = Factory.NUMBER(val[0][:value]) ; loc result, val[0]
+module_eval(<<'.,.,', 'egrammar.ra', 677)
+ def _reduce_203(val, _values, result)
+ result = Factory.SUBLOCATE(val[0], val[1]); loc result, val[0]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 709)
- def _reduce_206(val, _values, result)
- result = Factory.QNAME_OR_NUMBER(val[0][:value]) ; loc result, val[0]
+module_eval(<<'.,.,', 'egrammar.ra', 678)
+ def _reduce_204(val, _values, result)
+ result = Factory.SUBLOCATE(val[0], val[1]); loc result, val[0]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 710)
- def _reduce_207(val, _values, result)
- result = Factory.QREF(val[0][:value]) ; loc result, val[0]
+module_eval(<<'.,.,', 'egrammar.ra', 681)
+ def _reduce_205(val, _values, result)
+ result = Factory.EPP(val[1], val[2]); loc result, val[0]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 711)
+# reduce 206 omitted
+
+# reduce 207 omitted
+
+module_eval(<<'.,.,', 'egrammar.ra', 688)
def _reduce_208(val, _values, result)
- result = Factory.literal(:undef); loc result, val[0]
+ result = nil
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 712)
+module_eval(<<'.,.,', 'egrammar.ra', 689)
def _reduce_209(val, _values, result)
- result = Factory.literal(:default); loc result, val[0]
+ result = []
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 717)
+module_eval(<<'.,.,', 'egrammar.ra', 690)
def _reduce_210(val, _values, result)
- result = Factory.literal(val[0][:value]) ; loc result, val[0]
+ result = val[1]
result
end
.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 720)
+module_eval(<<'.,.,', 'egrammar.ra', 693)
def _reduce_211(val, _values, result)
- result = Factory.literal(val[0][:value]); loc result, val[0]
+ result = Factory.RENDER_STRING(val[0][:value]); loc result, val[0]
result
end
.,.,
-# reduce 212 omitted
+module_eval(<<'.,.,', 'egrammar.ra', 694)
+ def _reduce_212(val, _values, result)
+ result = Factory.RENDER_EXPR(val[1]); loc result, val[0], val[2]
+ result
+ end
+.,.,
-module_eval(<<'.,.,', 'egrammar.ra', 726)
+module_eval(<<'.,.,', 'egrammar.ra', 695)
def _reduce_213(val, _values, result)
- result = nil
+ result = Factory.RENDER_EXPR(Factory.block_or_expression(*val[2])); loc result, val[0], val[4]
result
end
.,.,
# reduce 214 omitted
# reduce 215 omitted
-# reduce 216 omitted
+module_eval(<<'.,.,', 'egrammar.ra', 701)
+ def _reduce_216(val, _values, result)
+ result = Factory.QREF(val[0][:value]) ; loc result, val[0]
+ result
+ end
+.,.,
-# reduce 217 omitted
+module_eval(<<'.,.,', 'egrammar.ra', 704)
+ def _reduce_217(val, _values, result)
+ result = Factory.literal(val[0][:value]); loc result, val[0]
+ result
+ end
+.,.,
# reduce 218 omitted
-# reduce 219 omitted
+module_eval(<<'.,.,', 'egrammar.ra', 710)
+ def _reduce_219(val, _values, result)
+ result = nil
+ result
+ end
+.,.,
# reduce 220 omitted
# reduce 221 omitted
# reduce 222 omitted
# reduce 223 omitted
# reduce 224 omitted
# reduce 225 omitted
# reduce 226 omitted
# reduce 227 omitted
# reduce 228 omitted
# reduce 229 omitted
-module_eval(<<'.,.,', 'egrammar.ra', 749)
- def _reduce_230(val, _values, result)
+# reduce 230 omitted
+
+# reduce 231 omitted
+
+# reduce 232 omitted
+
+# reduce 233 omitted
+
+# reduce 234 omitted
+
+# reduce 235 omitted
+
+# reduce 236 omitted
+
+# reduce 237 omitted
+
+# reduce 238 omitted
+
+# reduce 239 omitted
+
+module_eval(<<'.,.,', 'egrammar.ra', 737)
+ def _reduce_240(val, _values, result)
result = nil
result
end
.,.,
def _reduce_none(val, _values, result)
val[0]
end
end # class Parser
end # module Parser
end # module Pops
end # module Puppet
diff --git a/lib/puppet/pops/parser/evaluating_parser.rb b/lib/puppet/pops/parser/evaluating_parser.rb
index 22fe53720..596f549bc 100644
--- a/lib/puppet/pops/parser/evaluating_parser.rb
+++ b/lib/puppet/pops/parser/evaluating_parser.rb
@@ -1,200 +1,140 @@
# Does not support "import" and parsing ruby files
#
class Puppet::Pops::Parser::EvaluatingParser
attr_reader :parser
def initialize()
@parser = Puppet::Pops::Parser::Parser.new()
end
def parse_string(s, file_source = 'unknown')
@file_source = file_source
clear()
# Handling of syntax error can be much improved (in general), now it bails out of the parser
# and does not have as rich information (when parsing a string), need to update it with the file source
# (ideally, a syntax error should be entered as an issue, and not just thrown - but that is a general problem
# and an improvement that can be made in the eparser (rather than here).
# Also a possible improvement (if the YAML parser returns positions) is to provide correct output of position.
#
begin
assert_and_report(parser.parse_string(s))
rescue Puppet::ParseError => e
# TODO: This is not quite right, why does not the exception have the correct file?
e.file = @file_source unless e.file.is_a?(String) && !e.file.empty?
raise e
end
end
def parse_file(file)
@file_source = file
clear()
assert_and_report(parser.parse_file(file))
end
def evaluate_string(scope, s, file_source='unknown')
evaluate(scope, parse_string(s, file_source))
end
def evaluate_file(file)
evaluate(parse_file(file))
end
def clear()
@acceptor = nil
end
+ # Create a closure that can be called in the given scope
+ def closure(model, scope)
+ Puppet::Pops::Evaluator::Closure.new(evaluator, model, scope)
+ end
+
def evaluate(scope, model)
return nil unless model
- ast = Puppet::Pops::Model::AstTransformer.new(@file_source, nil).transform(model)
- return nil unless ast
- ast.safeevaluate(scope)
+ evaluator.evaluate(model, scope)
+ end
+
+ def evaluator
+ @@evaluator ||= Puppet::Pops::Evaluator::EvaluatorImpl.new()
+ @@evaluator
end
def validate(parse_result)
resulting_acceptor = acceptor()
validator(resulting_acceptor).validate(parse_result)
resulting_acceptor
end
def acceptor()
Puppet::Pops::Validation::Acceptor.new
end
def validator(acceptor)
- Puppet::Pops::Validation::ValidatorFactory_3_1.new().validator(acceptor)
+ Puppet::Pops::Validation::ValidatorFactory_4_0.new().validator(acceptor)
end
def assert_and_report(parse_result)
return nil unless parse_result
if parse_result.source_ref.nil? or parse_result.source_ref == ''
parse_result.source_ref = @file_source
end
validation_result = validate(parse_result)
- max_errors = Puppet[:max_errors]
- max_warnings = Puppet[:max_warnings] + 1
- max_deprecations = Puppet[:max_deprecations] + 1
-
- # If there are warnings output them
- warnings = validation_result.warnings
- if warnings.size > 0
- formatter = Puppet::Pops::Validation::DiagnosticFormatterPuppetStyle.new
- emitted_w = 0
- emitted_dw = 0
- validation_result.warnings.each {|w|
- if w.severity == :deprecation
- # Do *not* call Puppet.deprecation_warning it is for internal deprecation, not
- # deprecation of constructs in manifests! (It is not designed for that purpose even if
- # used throughout the code base).
- #
- Puppet.warning(formatter.format(w)) if emitted_dw < max_deprecations
- emitted_dw += 1
- else
- Puppet.warning(formatter.format(w)) if emitted_w < max_warnings
- emitted_w += 1
- end
- break if emitted_w > max_warnings && emitted_dw > max_deprecations # but only then
- }
- end
-
- # If there were errors, report the first found. Use a puppet style formatter.
- errors = validation_result.errors
- if errors.size > 0
- formatter = Puppet::Pops::Validation::DiagnosticFormatterPuppetStyle.new
- if errors.size == 1 || max_errors <= 1
- # raise immediately
- raise Puppet::ParseError.new(formatter.format(errors[0]))
- end
- emitted = 0
- errors.each do |e|
- Puppet.err(formatter.format(e))
- emitted += 1
- break if emitted >= max_errors
- end
- warnings_message = warnings.size > 0 ? ", and #{warnings.size} warnings" : ""
- giving_up_message = "Found #{errors.size} errors#{warnings_message}. Giving up"
- exception = Puppet::ParseError.new(giving_up_message)
- exception.file = errors[0].file
- raise exception
- end
+ Puppet::Pops::IssueReporter.assert_and_report(validation_result,
+ :emit_warnings => true)
parse_result
end
def quote(x)
self.class.quote(x)
end
# Translates an already parsed string that contains control characters, quotes
# and backslashes into a quoted string where all such constructs have been escaped.
# Parsing the return value of this method using the puppet parser should yield
# exactly the same string as the argument passed to this method
#
# The method makes an exception for the two character sequences \$ and \s. They
# will not be escaped since they have a special meaning in puppet syntax.
#
# TODO: Handle \uXXXX characters ??
#
# @param x [String] The string to quote and "unparse"
# @return [String] The quoted string
#
def self.quote(x)
escaped = '"'
p = nil
x.each_char do |c|
case p
when nil
# do nothing
when "\t"
escaped << '\\t'
when "\n"
escaped << '\\n'
when "\f"
escaped << '\\f'
# TODO: \cx is a range of characters - skip for now
# when "\c"
# escaped << '\\c'
when '"'
escaped << '\\"'
when '\\'
escaped << if c == '$' || c == 's'; p; else '\\\\'; end # don't escape \ when followed by s or $
else
escaped << p
end
p = c
end
escaped << p unless p.nil?
escaped << '"'
end
- # This is a temporary solution to making it possible to use the new evaluator. The main class
- # will eventually have this behavior instead of using transformation to Puppet 3.x AST
- class Transitional < Puppet::Pops::Parser::EvaluatingParser
-
- def evaluator
- @@evaluator ||= Puppet::Pops::Evaluator::EvaluatorImpl.new()
- @@evaluator
- end
-
- def evaluate(scope, model)
- return nil unless model
- evaluator.evaluate(model, scope)
- end
-
- def validator(acceptor)
- Puppet::Pops::Validation::ValidatorFactory_4_0.new().validator(acceptor)
- end
-
- # Create a closure that can be called in the given scope
- def closure(model, scope)
- Puppet::Pops::Evaluator::Closure.new(evaluator, model, scope)
- end
- end
-
- class EvaluatingEppParser < Transitional
+ class EvaluatingEppParser < Puppet::Pops::Parser::EvaluatingParser
def initialize()
@parser = Puppet::Pops::Parser::EppParser.new()
end
end
end
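
A brief, hypothetical usage sketch (not part of the patch) of the class as it looks after this change; the manifest text is made up, and `scope` is assumed to be an ordinary Puppet::Parser::Scope obtained from a compiler.

```ruby
require 'puppet/pops'

parser = Puppet::Pops::Parser::EvaluatingParser.new
# `scope` is assumed to exist (e.g. a Puppet::Parser::Scope from a compiler)
result = parser.evaluate_string(scope, '$a = 5 * 4', 'example.pp')
# result => 20, the value produced by the last evaluated expression
```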
diff --git a/lib/puppet/pops/parser/lexer.rb b/lib/puppet/pops/parser/lexer.rb
deleted file mode 100644
index 97a330a56..000000000
--- a/lib/puppet/pops/parser/lexer.rb
+++ /dev/null
@@ -1,753 +0,0 @@
-# the scanner/lexer
-
-require 'forwardable'
-require 'strscan'
-require 'puppet'
-require 'puppet/util/methodhelper'
-
-module Puppet
- class LexError < RuntimeError; end
-end
-
-class Puppet::Pops::Parser::Lexer
- extend Forwardable
-
- attr_reader :file, :lexing_context, :token_queue
-
- attr_reader :locator
-
- attr_accessor :indefine
- alias :indefine? :indefine
-
- def lex_error msg
- raise Puppet::LexError.new(msg)
- end
-
- class Token
- ALWAYS_ACCEPTABLE = Proc.new { |context| true }
-
- include Puppet::Util::MethodHelper
-
- attr_accessor :regex, :name, :string, :skip, :skip_text
- alias skip? skip
-
- # @overload initialize(string)
- # @param string [String] a literal string token matcher
- # @param name [String] the token name (what it is known as in the grammar)
- # @param options [Hash] see {#set_options}
- # @overload initialize(regex)
- # @param regex [Regexp] a regular expression token text matcher
- # @param name [String] the token name (what it is known as in the grammar)
- # @param options [Hash] see {#set_options}
- #
- def initialize(string_or_regex, name, options = {})
- if string_or_regex.is_a?(String)
- @name, @string = name, string_or_regex
- @regex = Regexp.new(Regexp.escape(string_or_regex))
- else
- @name, @regex = name, string_or_regex
- end
-
- set_options(options)
- @acceptable_when = ALWAYS_ACCEPTABLE
- end
-
- # @return [String] human readable token reference; the String if literal, else the token name
- def to_s
- string or @name.to_s
- end
-
- # @return [Boolean] if the token is acceptable in the given context or not.
- # @param context [Hash] the lexing context
- #
- def acceptable?(context={})
- @acceptable_when.call(context)
- end
-
-
- # Defines when the token is able to match.
- # This provides context that cannot be expressed otherwise, such as feature flags.
- #
- # @param block [Proc] a proc that given a context returns a boolean
- def acceptable_when(block)
- @acceptable_when = block
- end
- end
-
- # Maintains a list of tokens.
- class TokenList
- extend Forwardable
-
- attr_reader :regex_tokens, :string_tokens
- def_delegator :@tokens, :[]
- # Adds a new token to the set of recognized tokens
- # @param name [String] the token name
- # @param regex [Regexp, String] source text token matcher, a litral string or regular expression
- # @param options [Hash] see {Token::set_options}
- # @param block [Proc] optional block set as the created tokens `convert` method
- # @raise [ArgumentError] if the token with the given name is already defined
- #
- def add_token(name, regex, options = {}, &block)
- raise(ArgumentError, "Token #{name} already exists") if @tokens.include?(name)
- token = Token.new(regex, name, options)
- @tokens[token.name] = token
- if token.string
- @string_tokens << token
- @tokens_by_string[token.string] = token
- else
- @regex_tokens << token
- end
-
- token.meta_def(:convert, &block) if block_given?
-
- token
- end
-
- # Creates an empty token list
- #
- def initialize
- @tokens = {}
- @regex_tokens = []
- @string_tokens = []
- @tokens_by_string = {}
- end
-
- # Look up a token by its literal (match) value, rather than name.
- # @param string [String, nil] the literal match string to obtain a {Token} for, or nil if it does not exist.
- def lookup(string)
- @tokens_by_string[string]
- end
-
- # Adds tokens from a hash where key is a matcher (literal string or regexp) and the
- # value is the token's name
- # @param hash [Hash<{String => Symbol}, Hash<{Regexp => Symbol}] map token text matcher to token name
- # @return [void]
- #
- def add_tokens(hash)
- hash.each do |regex, name|
- add_token(name, regex)
- end
- end
-
- # Sort literal (string-) tokens by length, so we know once we match, we're done.
- # This helps avoid the O(n^2) nature of token matching.
- # The tokens are sorted in place.
- # @return [void]
- def sort_tokens
- @string_tokens.sort! { |a, b| b.string.length <=> a.string.length }
- end
-
- # Yield each token name and value in turn.
- def each
- @tokens.each {|name, value| yield name, value }
- end
- end
-
- TOKENS = TokenList.new
- TOKENS.add_tokens(
- '[' => :LBRACK,
- ']' => :RBRACK,
- # '{' => :LBRACE, # Specialized to handle lambda and brace count
- # '}' => :RBRACE, # Specialized to handle brace count
- '(' => :LPAREN,
- ')' => :RPAREN,
- '=' => :EQUALS,
- '+=' => :APPENDS,
- '-=' => :DELETES,
- '==' => :ISEQUAL,
- '>=' => :GREATEREQUAL,
- '>' => :GREATERTHAN,
- '<' => :LESSTHAN,
- '<=' => :LESSEQUAL,
- '!=' => :NOTEQUAL,
- '!' => :NOT,
- ',' => :COMMA,
- '.' => :DOT,
- ':' => :COLON,
- '@' => :AT,
- '|' => :PIPE,
- '<<|' => :LLCOLLECT,
- '|>>' => :RRCOLLECT,
- '->' => :IN_EDGE,
- '<-' => :OUT_EDGE,
- '~>' => :IN_EDGE_SUB,
- '<~' => :OUT_EDGE_SUB,
- '<|' => :LCOLLECT,
- '|>' => :RCOLLECT,
- ';' => :SEMIC,
- '?' => :QMARK,
- '\\' => :BACKSLASH,
- '=>' => :FARROW,
- '+>' => :PARROW,
- '+' => :PLUS,
- '-' => :MINUS,
- '/' => :DIV,
- '*' => :TIMES,
- '%' => :MODULO,
- '<<' => :LSHIFT,
- '>>' => :RSHIFT,
- '=~' => :MATCH,
- '!~' => :NOMATCH,
- %r{((::){0,1}[A-Z][-\w]*)+} => :CLASSREF,
- "<string>" => :STRING,
- "<dqstring up to first interpolation>" => :DQPRE,
- "<dqstring between two interpolations>" => :DQMID,
- "<dqstring after final interpolation>" => :DQPOST,
- "<boolean>" => :BOOLEAN,
- "<select start>" => :SELBRACE # A QMARK followed by '{'
- )
-
- module Contextual
- QUOTE_TOKENS = [:DQPRE,:DQMID]
- REGEX_INTRODUCING_TOKENS = [:NODE,:LBRACE, :SELBRACE, :RBRACE,:MATCH,:NOMATCH,:COMMA]
-
- NOT_INSIDE_QUOTES = Proc.new do |context|
- !QUOTE_TOKENS.include? context[:after]
- end
-
- INSIDE_QUOTES = Proc.new do |context|
- QUOTE_TOKENS.include? context[:after]
- end
-
- IN_REGEX_POSITION = Proc.new do |context|
- REGEX_INTRODUCING_TOKENS.include? context[:after]
- end
-
-# DASHED_VARIABLES_ALLOWED = Proc.new do |context|
-# Puppet[:allow_variables_with_dashes]
-# end
-#
-# VARIABLE_AND_DASHES_ALLOWED = Proc.new do |context|
-# Contextual::DASHED_VARIABLES_ALLOWED.call(context) and TOKENS[:VARIABLE].acceptable?(context)
-# end
- end
-
- # Numbers are treated separately from names, so that they may contain dots.
- TOKENS.add_token :NUMBER, %r{\b(?:0[xX][0-9A-Fa-f]+|0?\d+(?:\.\d+)?(?:[eE]-?\d+)?)\b} do |lexer, value|
- lexer.assert_numeric(value)
- [TOKENS[:NAME], value]
- end
- TOKENS[:NUMBER].acceptable_when Contextual::NOT_INSIDE_QUOTES
-
- TOKENS.add_token :NAME, %r{((::)?[a-z0-9][-\w]*)(::[a-z0-9][-\w]*)*} do |lexer, value|
- # A name starting with a number must be a valid numeric string (not that
- # NUMBER token captures those names that do not comply with the name rule.
- if value =~ /^[0-9].*$/
- lexer.assert_numeric(value)
- end
-
- string_token = self
- # we're looking for keywords here
- if tmp = KEYWORDS.lookup(value)
- string_token = tmp
- if [:TRUE, :FALSE].include?(string_token.name)
- value = eval(value)
- string_token = TOKENS[:BOOLEAN]
- end
- end
- [string_token, value]
- end
- [:NAME, :CLASSREF].each do |name_token|
- TOKENS[name_token].acceptable_when Contextual::NOT_INSIDE_QUOTES
- end
-
- TOKENS.add_token :COMMENT, %r{#.*}, :skip => true do |lexer,value|
-# value.sub!(/# ?/,'')
- [self, ""]
- end
-
- TOKENS.add_token :MLCOMMENT, %r{/\*(.*?)\*/}m, :skip => true do |lexer, value|
-# value.sub!(/^\/\* ?/,'')
-# value.sub!(/ ?\*\/$/,'')
- [self, ""]
- end
-
- TOKENS.add_token :REGEX, %r{/[^/\n]*/} do |lexer, value|
- # Make sure we haven't matched an escaped /
- while value[-2..-2] == '\\'
- other = lexer.scan_until(%r{/})
- value += other
- end
- regex = value.sub(%r{\A/}, "").sub(%r{/\Z}, '').gsub("\\/", "/")
- [self, Regexp.new(regex)]
- end
- TOKENS[:REGEX].acceptable_when Contextual::IN_REGEX_POSITION
-
- TOKENS.add_token :RETURN, "\n", :skip => true, :skip_text => true
-
- TOKENS.add_token :SQUOTE, "'" do |lexer, value|
- [TOKENS[:STRING], lexer.slurpstring(value,["'"],:ignore_invalid_escapes).first ]
- end
-
- DQ_initial_token_types = {'$' => :DQPRE,'"' => :STRING}
- DQ_continuation_token_types = {'$' => :DQMID,'"' => :DQPOST}
-
- TOKENS.add_token :DQUOTE, /"/ do |lexer, value|
- lexer.tokenize_interpolated_string(DQ_initial_token_types)
- end
-
-
- # LBRACE needs look ahead to differentiate between '{' and a '{'
- # followed by a '|' (start of lambda) The racc grammar can only do one
- # token lookahead.
- #
- TOKENS.add_token :LBRACE, "{" do |lexer, value|
- lexer.lexing_context[:brace_count] += 1
- if lexer.lexing_context[:after] == :QMARK
- [TOKENS[:SELBRACE], value]
- else
- [TOKENS[:LBRACE], value]
- end
- end
-
- # RBRACE needs to differentiate between a regular brace that is part of
- # syntax and one that is the ending of a string interpolation.
- TOKENS.add_token :RBRACE, "}" do |lexer, value|
- context = lexer.lexing_context
- if context[:interpolation_stack].empty? || context[:brace_count] != context[:interpolation_stack][-1]
- context[:brace_count] -= 1
- [TOKENS[:RBRACE], value]
- else
- lexer.tokenize_interpolated_string(DQ_continuation_token_types)
- end
- end
-
- TOKENS.add_token :DOLLAR_VAR, %r{\$(::)?(\w+::)*\w+} do |lexer, value|
- [TOKENS[:VARIABLE],value[1..-1]]
- end
-
- TOKENS.add_token :VARIABLE, %r{(::)?(\w+::)*\w+} do |lexer, value|
- # If the varname (following $, or ${ is followed by (, it is a function call, and not a variable
- # reference.
- #
- if lexer.match?(%r{[ \t\r]*\(})
- # followed by ( is a function call
- [TOKENS[:NAME], value]
-
- elsif kwd_token = KEYWORDS.lookup(value)
- # true, false, if, unless, case, and undef are keywords that cannot be used as variables
- # but node, and several others are variables
- if [ :TRUE, :FALSE ].include?(kwd_token.name)
- [ TOKENS[:BOOLEAN], eval(value) ]
- elsif [ :IF, :UNLESS, :CASE, :UNDEF ].include?(kwd_token.name)
- [kwd_token, value]
- else
- [TOKENS[:VARIABLE], value]
- end
- else
- [TOKENS[:VARIABLE], value]
- end
-
- end
- TOKENS[:VARIABLE].acceptable_when Contextual::INSIDE_QUOTES
-
- TOKENS.sort_tokens
-
- @@pairs = {
- "{" => "}",
- "(" => ")",
- "[" => "]",
- "<|" => "|>",
- "<<|" => "|>>",
- "|" => "|"
- }
-
- KEYWORDS = TokenList.new
- KEYWORDS.add_tokens(
- "case" => :CASE,
- "class" => :CLASS,
- "default" => :DEFAULT,
- "define" => :DEFINE,
- # "import" => :IMPORT,
- "if" => :IF,
- "elsif" => :ELSIF,
- "else" => :ELSE,
- "inherits" => :INHERITS,
- "node" => :NODE,
- "and" => :AND,
- "or" => :OR,
- "undef" => :UNDEF,
- "false" => :FALSE,
- "true" => :TRUE,
- "in" => :IN,
- "unless" => :UNLESS
- )
-
- def clear
- initvars
- end
-
- def expected
- return nil if @expected.empty?
- name = @expected[-1]
- TOKENS.lookup(name) or lex_error "Internal Lexer Error: Could not find expected token #{name}"
- end
-
- # scan the whole file
- # basically just used for testing
- def fullscan
- array = []
-
- self.scan { |token, str|
- # Ignore any definition nesting problems
- @indefine = false
- array.push([token,str])
- }
- array
- end
-
- def file=(file)
- @file = file
- contents = Puppet::FileSystem.exist?(file) ? Puppet::FileSystem.read(file) : ""
- @scanner = StringScanner.new(contents.freeze)
- @locator = Puppet::Pops::Parser::Locator.locator(contents, file)
- end
-
- def_delegator :@token_queue, :shift, :shift_token
-
- def find_string_token
- # We know our longest string token is three chars, so try each size in turn
- # until we either match or run out of chars. This way our worst-case is three
- # tries, where it is otherwise the number of string token we have. Also,
- # the lookups are optimized hash lookups, instead of regex scans.
- #
- _scn = @scanner
- s = _scn.peek(3)
- token = TOKENS.lookup(s[0,3]) || TOKENS.lookup(s[0,2]) || TOKENS.lookup(s[0,1])
- unless token
- return [nil, nil]
- end
- [ token, _scn.scan(token.regex) ]
- end
-
- # Find the next token that matches a regex. We look for these first.
- def find_regex_token
- best_token = nil
- best_length = 0
-
- # I tried optimizing based on the first char, but it had
- # a slightly negative affect and was a good bit more complicated.
- _lxc = @lexing_context
- _scn = @scanner
- TOKENS.regex_tokens.each do |token|
- if length = _scn.match?(token.regex) and token.acceptable?(_lxc)
- # We've found a longer match
- if length > best_length
- best_length = length
- best_token = token
- end
- end
- end
-
- return best_token, _scn.scan(best_token.regex) if best_token
- end
-
- # Find the next token, returning the string and the token.
- def find_token
- shift_token || find_regex_token || find_string_token
- end
-
- MULTIBYTE = Puppet::Pops::Parser::Locator::MULTIBYTE
- SKIPPATTERN = MULTIBYTE ? %r{[[:blank:]\r]+} : %r{[ \t\r]+}
-
- def initialize
- initvars
- end
-
- def assert_numeric(value)
- if value =~ /^0[xX].*$/
- lex_error (positioned_message("Not a valid hex number #{value}")) unless value =~ /^0[xX][0-9A-Fa-f]+$/
- elsif value =~ /^0[^.].*$/
- lex_error(positioned_message("Not a valid octal number #{value}")) unless value =~ /^0[0-7]+$/
- else
- lex_error(positioned_message("Not a valid decimal number #{value}")) unless value =~ /0?\d+(?:\.\d+)?(?:[eE]-?\d+)?/
- end
- end
-
- def initvars
- @previous_token = nil
- @scanner = nil
- @file = nil
-
- # AAARRGGGG! okay, regexes in ruby are bloody annoying
- # no one else has "\n" =~ /\s/
-
- @namestack = []
- @token_queue = []
- @indefine = false
- @expected = []
- @lexing_context = {
- :after => nil,
- :start_of_line => true,
- :offset => 0, # byte offset before where token starts
- :end_offset => 0, # byte offset after scanned token
- :brace_count => 0, # nested depth of braces
- :interpolation_stack => [] # matching interpolation brace level
- }
- end
-
- # Make any necessary changes to the token and/or value.
- def munge_token(token, value)
- # A token may already have been munged (converted and positioned)
- #
- return token, value if value.is_a? Hash
-
- @scanner.skip(SKIPPATTERN) if token.skip_text
-
- return if token.skip
-
- token, value = token.convert(self, value) if token.respond_to?(:convert)
-
- return unless token
-
- return if token.skip
-
- # If the conversion performed the munging/positioning
- return token, value if value.is_a? Hash
-
- return token, positioned_value(value)
- end
-
- # Returns a hash with the current position in source based on the current lexing context
- #
- def positioned_value(value)
- {
- :value => value,
- :locator => @locator,
- :offset => @lexing_context[:offset],
- :end_offset => @lexing_context[:end_offset]
- }
- end
-
- def pos
- @locator.pos_on_line(@lexing_context[:offset])
- end
-
- # Handling the namespace stack
- def_delegator :@namestack, :pop, :namepop
-
- # This value might have :: in it, but we don't care -- it'll be handled
- # normally when joining, and when popping we want to pop this full value,
- # however long the namespace is.
- def_delegator :@namestack, :<<, :namestack
-
- # Collect the current namespace.
- def namespace
- @namestack.join("::")
- end
-
- def_delegator :@scanner, :rest
-
- LBRACE_CHAR = '{'
-
- # this is the heart of the lexer
- def scan
- _scn = @scanner
- #Puppet.debug("entering scan")
- lex_error "Internal Error: No string or file given to lexer to process." unless _scn
-
- # Skip any initial whitespace.
- _scn.skip(SKIPPATTERN)
- _lbrace = '{'.freeze # faster to compare against a frozen string in
-
- until token_queue.empty? and _scn.eos? do
- offset = _scn.pos
- matched_token, value = find_token
- end_offset = _scn.pos
-
- # error out if we didn't match anything at all
- lex_error "Could not match #{_scn.rest[/^(\S+|\s+|.*)/]}" unless matched_token
-
- newline = matched_token.name == :RETURN
-
- _lxc = @lexing_context
- _lxc[:start_of_line] = newline
- _lxc[:offset] = offset
- _lxc[:end_offset] = end_offset
-
- final_token, token_value = munge_token(matched_token, value)
- # update end position since munging may have moved the end offset
- _lxc[:end_offset] = _scn.pos
-
- unless final_token
- _scn.skip(SKIPPATTERN)
- next
- end
-
- _lxc[:after] = final_token.name unless newline
- if final_token.name == :DQPRE
- _lxc[:interpolation_stack] << _lxc[:brace_count]
- elsif final_token.name == :DQPOST
- _lxc[:interpolation_stack].pop
- end
-
- value = token_value[:value]
-
- _expected = @expected
- if match = @@pairs[value] and final_token.name != :DQUOTE and final_token.name != :SQUOTE
- _expected << match
- elsif exp = _expected[-1] and exp == value and final_token.name != :DQUOTE and final_token.name != :SQUOTE
- _expected.pop
- end
-
- yield [final_token.name, token_value]
-
- _prv = @previous_token
- if _prv
- namestack(value) if _prv.name == :CLASS and value != LBRACE_CHAR
-
- # TODO: Lexer has no business dealing with this - it is semantic
- if _prv.name == :DEFINE
- if indefine?
- msg = "Cannot nest definition #{value} inside #{@indefine}"
- self.indefine = false
- raise Puppet::ParseError, msg
- end
-
- @indefine = value
- end
- end
- @previous_token = final_token
- _scn.skip(SKIPPATTERN)
- end
- # Cannot reset @scanner to nil here - it is needed to answer questions about context after
- # completed parsing.
- # Seems meaningless to do this. Everything will be gc anyway.
- #@scanner = nil
-
- # This indicates that we're done parsing.
- yield [false,false]
- end
-
- def match? r
- @scanner.match?(r)
- end
-
- # Provide some limited access to the scanner, for those
- # tokens that need it.
- def_delegator :@scanner, :scan_until
-
- # we've encountered the start of a string...
- # slurp in the rest of the string and return it
- def slurpstring(terminators,escapes=%w{ \\ $ ' " r n t s }+["\n"],ignore_invalid_escapes=false)
- # we search for the next quote that isn't preceded by a
- # backslash; the caret is there to match empty strings
- last = @scanner.matched
- str = @scanner.scan_until(/([^\\]|^|[^\\])([\\]{2})*[#{terminators}]/) || lex_error(positioned_message("Unclosed quote after #{format_quote(last)} followed by '#{followed_by}'"))
- str.gsub!(/\\(.)/m) {
- ch = $1
- if escapes.include? ch
- case ch
- when 'r'; "\r"
- when 'n'; "\n"
- when 't'; "\t"
- when 's'; " "
- when "\n"; ''
- else ch
- end
- else
- Puppet.warning(positioned_message("Unrecognized escape sequence '\\#{ch}'")) unless ignore_invalid_escapes
- "\\#{ch}"
- end
- }
- [ str[0..-2],str[-1,1] ]
- end
-
- # Formats given message by appending file, line and position if available.
- def positioned_message msg
- result = [msg]
- result << "in file #{file}" if file
- result << "at line #{line}:#{pos}" if line
- result.join(" ")
- end
-
- # Returns "<eof>" if at end of input, else the following 5 characters with \n \r \t escaped
- def followed_by
- return "<eof>" if @scanner.eos?
- result = @scanner.rest[0,5] + "..."
- result.gsub!("\t", '\t')
- result.gsub!("\n", '\n')
- result.gsub!("\r", '\r')
- result
- end
-
- def format_quote q
- if q == "'"
- '"\'"'
- else
- "'#{q}'"
- end
- end
-
- def tokenize_interpolated_string(token_type,preamble='')
- # Expecting a (possibly empty) stretch of text terminated by end of string ", a variable $, or expression ${
- # The length of this part includes the start and terminating characters.
- value,terminator = slurpstring('"$')
-
- # Advanced after '{' if this is in expression ${} interpolation
- braced = terminator == '$' && @scanner.scan(/\{/)
- # make offset to end_ofset be the length of the pre expression string including its start and terminating chars
- lxc = @lexing_context
- lxc[:end_offset] = @scanner.pos
-
- token_queue << [TOKENS[token_type[terminator]],positioned_value(preamble+value)]
- variable_regex = if Puppet[:allow_variables_with_dashes]
- TOKENS[:VARIABLE_WITH_DASH].regex
- else
- TOKENS[:VARIABLE].regex
- end
- if terminator != '$' or braced
- return token_queue.shift
- end
-
- tmp_offset = @scanner.pos
- if var_name = @scanner.scan(variable_regex)
- lxc[:offset] = tmp_offset
- lxc[:end_offset] = @scanner.pos
- warn_if_variable_has_hyphen(var_name)
- # If the varname after ${ is followed by (, it is a function call, and not a variable
- # reference.
- #
- if braced && @scanner.match?(%r{[ \t\r]*\(})
- token_queue << [TOKENS[:NAME], positioned_value(var_name)]
- else
- token_queue << [TOKENS[:VARIABLE],positioned_value(var_name)]
- end
- lxc[:offset] = @scanner.pos
- tokenize_interpolated_string(DQ_continuation_token_types)
- else
- tokenize_interpolated_string(token_type, replace_false_start_with_text(terminator))
- end
- end
-
- def replace_false_start_with_text(appendix)
- last_token = token_queue.pop
- value = last_token.last
- if value.is_a? Hash
- value[:value] + appendix
- else
- value + appendix
- end
- end
-
- # just parse a string, not a whole file
- def string=(string, path='')
- @scanner = StringScanner.new(string.freeze)
- @locator = Puppet::Pops::Parser::Locator.locator(string, path)
- end
-
- def warn_if_variable_has_hyphen(var_name)
- if var_name.include?('-')
- Puppet.deprecation_warning("Using `-` in variable names is deprecated at #{file || '<string>'}:#{line}. See http://links.puppetlabs.com/puppet-hyphenated-variable-deprecation")
- end
- end
-
- # Returns the line number (starting from 1) for the current position
- # in the scanned text (at the end of the last produced, but not necessarily
- # consumed.
- #
- def line
- return 1 unless @lexing_context && locator
- locator.line_for_offset(@lexing_context[:end_offset])
- end
-end
diff --git a/lib/puppet/pops/parser/lexer2.rb b/lib/puppet/pops/parser/lexer2.rb
index aee0f6678..fda745f98 100644
--- a/lib/puppet/pops/parser/lexer2.rb
+++ b/lib/puppet/pops/parser/lexer2.rb
@@ -1,692 +1,696 @@
# The Lexer is responsible for turning source text into tokens.
# This version is a performance enhanced lexer (in comparison to the 3.x and earlier "future parser" lexer).
#
# Old returns tokens [:KEY, value, { locator = }
# Could return [[token], locator]
# or Token.new([token], locator) with the same API x[0] = token_symbol, x[1] = self, x[:key] = (:value, :file, :line, :pos) etc
require 'strscan'
require 'puppet/pops/parser/lexer_support'
require 'puppet/pops/parser/heredoc_support'
require 'puppet/pops/parser/interpolation_support'
require 'puppet/pops/parser/epp_support'
require 'puppet/pops/parser/slurp_support'
class Puppet::Pops::Parser::Lexer2
include Puppet::Pops::Parser::LexerSupport
include Puppet::Pops::Parser::HeredocSupport
include Puppet::Pops::Parser::InterpolationSupport
include Puppet::Pops::Parser::SlurpSupport
include Puppet::Pops::Parser::EppSupport
# All tokens have three slots: the token name (a Symbol), the token text (String), and the token text length.
# All operator and punctuation tokens reuse singleton arrays. Tokens that require unique values create
# a unique array per token.
#
# PERFORMANCE NOTES:
# This construct reduces the number of objects that need to be created for operators and punctuation.
# The length is pre-calculated for all singleton tokens. The length is used both to signal the length of
# the token, and to advance the scanner position (without having to advance it with a scan(regexp)).
#
TOKEN_LBRACK = [:LBRACK, '['.freeze, 1].freeze
TOKEN_LISTSTART = [:LISTSTART, '['.freeze, 1].freeze
TOKEN_RBRACK = [:RBRACK, ']'.freeze, 1].freeze
TOKEN_LBRACE = [:LBRACE, '{'.freeze, 1].freeze
TOKEN_RBRACE = [:RBRACE, '}'.freeze, 1].freeze
TOKEN_SELBRACE = [:SELBRACE, '{'.freeze, 1].freeze
TOKEN_LPAREN = [:LPAREN, '('.freeze, 1].freeze
TOKEN_RPAREN = [:RPAREN, ')'.freeze, 1].freeze
TOKEN_EQUALS = [:EQUALS, '='.freeze, 1].freeze
TOKEN_APPENDS = [:APPENDS, '+='.freeze, 2].freeze
TOKEN_DELETES = [:DELETES, '-='.freeze, 2].freeze
TOKEN_ISEQUAL = [:ISEQUAL, '=='.freeze, 2].freeze
TOKEN_NOTEQUAL = [:NOTEQUAL, '!='.freeze, 2].freeze
TOKEN_MATCH = [:MATCH, '=~'.freeze, 2].freeze
TOKEN_NOMATCH = [:NOMATCH, '!~'.freeze, 2].freeze
TOKEN_GREATEREQUAL = [:GREATEREQUAL, '>='.freeze, 2].freeze
TOKEN_GREATERTHAN = [:GREATERTHAN, '>'.freeze, 1].freeze
TOKEN_LESSEQUAL = [:LESSEQUAL, '<='.freeze, 2].freeze
TOKEN_LESSTHAN = [:LESSTHAN, '<'.freeze, 1].freeze
TOKEN_FARROW = [:FARROW, '=>'.freeze, 2].freeze
TOKEN_PARROW = [:PARROW, '+>'.freeze, 2].freeze
TOKEN_LSHIFT = [:LSHIFT, '<<'.freeze, 2].freeze
TOKEN_LLCOLLECT = [:LLCOLLECT, '<<|'.freeze, 3].freeze
TOKEN_LCOLLECT = [:LCOLLECT, '<|'.freeze, 2].freeze
TOKEN_RSHIFT = [:RSHIFT, '>>'.freeze, 2].freeze
TOKEN_RRCOLLECT = [:RRCOLLECT, '|>>'.freeze, 3].freeze
TOKEN_RCOLLECT = [:RCOLLECT, '|>'.freeze, 2].freeze
TOKEN_PLUS = [:PLUS, '+'.freeze, 1].freeze
TOKEN_MINUS = [:MINUS, '-'.freeze, 1].freeze
TOKEN_DIV = [:DIV, '/'.freeze, 1].freeze
TOKEN_TIMES = [:TIMES, '*'.freeze, 1].freeze
TOKEN_MODULO = [:MODULO, '%'.freeze, 1].freeze
TOKEN_NOT = [:NOT, '!'.freeze, 1].freeze
TOKEN_DOT = [:DOT, '.'.freeze, 1].freeze
TOKEN_PIPE = [:PIPE, '|'.freeze, 1].freeze
TOKEN_AT = [:AT , '@'.freeze, 1].freeze
TOKEN_ATAT = [:ATAT , '@@'.freeze, 2].freeze
TOKEN_COLON = [:COLON, ':'.freeze, 1].freeze
TOKEN_COMMA = [:COMMA, ','.freeze, 1].freeze
TOKEN_SEMIC = [:SEMIC, ';'.freeze, 1].freeze
TOKEN_QMARK = [:QMARK, '?'.freeze, 1].freeze
TOKEN_TILDE = [:TILDE, '~'.freeze, 1].freeze # lexed but not an operator in Puppet
TOKEN_REGEXP = [:REGEXP, nil, 0].freeze
TOKEN_IN_EDGE = [:IN_EDGE, '->'.freeze, 2].freeze
TOKEN_IN_EDGE_SUB = [:IN_EDGE_SUB, '~>'.freeze, 2].freeze
TOKEN_OUT_EDGE = [:OUT_EDGE, '<-'.freeze, 2].freeze
TOKEN_OUT_EDGE_SUB = [:OUT_EDGE_SUB, '<~'.freeze, 2].freeze
# Tokens that are always unique to what has been lexed
TOKEN_STRING = [:STRING, nil, 0].freeze
TOKEN_WORD = [:WORD, nil, 0].freeze
TOKEN_DQPRE = [:DQPRE, nil, 0].freeze
TOKEN_DQMID = [:DQPRE, nil, 0].freeze
TOKEN_DQPOS = [:DQPRE, nil, 0].freeze
TOKEN_NUMBER = [:NUMBER, nil, 0].freeze
TOKEN_VARIABLE = [:VARIABLE, nil, 1].freeze
TOKEN_VARIABLE_EMPTY = [:VARIABLE, ''.freeze, 1].freeze
# HEREDOC has syntax as an argument.
TOKEN_HEREDOC = [:HEREDOC, nil, 0].freeze
# EPP_START is currently a marker token, may later get syntax
TOKEN_EPPSTART = [:EPP_START, nil, 0].freeze
TOKEN_EPPEND = [:EPP_END, '%>', 2].freeze
TOKEN_EPPEND_TRIM = [:EPP_END_TRIM, '-%>', 3].freeze
# This is used for unrecognized tokens, will always be a single character. This particular instance
# is not used, but is kept here for documentation purposes.
TOKEN_OTHER = [:OTHER, nil, 0]
# Keywords are all singleton tokens with pre calculated lengths.
# Booleans are pre-calculated (rather than evaluating the strings "false" and "true" repeatedly).
#
KEYWORDS = {
"case" => [:CASE, 'case', 4],
"class" => [:CLASS, 'class', 5],
"default" => [:DEFAULT, 'default', 7],
"define" => [:DEFINE, 'define', 6],
"if" => [:IF, 'if', 2],
"elsif" => [:ELSIF, 'elsif', 5],
"else" => [:ELSE, 'else', 4],
"inherits" => [:INHERITS, 'inherits', 8],
"node" => [:NODE, 'node', 4],
"and" => [:AND, 'and', 3],
"or" => [:OR, 'or', 2],
"undef" => [:UNDEF, 'undef', 5],
"false" => [:BOOLEAN, false, 5],
"true" => [:BOOLEAN, true, 4],
"in" => [:IN, 'in', 2],
"unless" => [:UNLESS, 'unless', 6],
"function" => [:FUNCTION, 'function', 8],
+ "type" => [:TYPE, 'type', 4],
+ "attr" => [:ATTR, 'attr', 4],
+ "private" => [:PRIVATE, 'private', 7],
}
KEYWORDS.each {|k,v| v[1].freeze; v.freeze }
KEYWORDS.freeze
# Reverse lookup of keyword name to string
KEYWORD_NAMES = {}
KEYWORDS.each {|k, v| KEYWORD_NAMES[v[0]] = k }
KEYWORD_NAMES.freeze
PATTERN_WS = %r{[[:blank:]\r]+}
# The single line comment includes the line ending.
PATTERN_COMMENT = %r{#.*\r?}
PATTERN_MLCOMMENT = %r{/\*(.*?)\*/}m
PATTERN_REGEX = %r{/[^/\n]*/}
PATTERN_REGEX_END = %r{/}
PATTERN_REGEX_A = %r{\A/} # for replacement to ""
PATTERN_REGEX_Z = %r{/\Z} # for replacement to ""
PATTERN_REGEX_ESC = %r{\\/} # for replacement to "/"
# The 3x patterns:
# PATTERN_CLASSREF = %r{((::){0,1}[A-Z][-\w]*)+}
# PATTERN_NAME = %r{((::)?[a-z0-9][-\w]*)(::[a-z0-9][-\w]*)*}
# The NAME and CLASSREF in 4x are strict. Each segment must start with
# a letter a-z and may not contain dashes (\w includes letters, digits and _).
#
PATTERN_CLASSREF = %r{((::){0,1}[A-Z][\w]*)+}
PATTERN_NAME = %r{((::)?[a-z][\w]*)(::[a-z][\w]*)*}
PATTERN_BARE_WORD = %r{[a-z_](?:[\w-]*[\w])?}
PATTERN_DOLLAR_VAR = %r{\$(::)?(\w+::)*\w+}
PATTERN_NUMBER = %r{\b(?:0[xX][0-9A-Fa-f]+|0?\d+(?:\.\d+)?(?:[eE]-?\d+)?)\b}
# PERFORMANCE NOTE:
# Comparison against a frozen string is faster (than unfrozen).
#
STRING_BSLASH_BSLASH = '\\'.freeze
attr_reader :locator
def initialize()
end
# Clears the lexer state (it is not required to call this as it will be garbage collected,
# and the next lex call (lex_string, lex_file) will reset the internal state).
#
def clear()
# not really needed, but if someone wants to ensure garbage is collected as early as possible
@scanner = nil
@locator = nil
@lexing_context = nil
end
# Convenience method, and for compatibility with older lexer. Use the lex_string instead which allows
# passing the path to use without first having to call file= (which reads the file if it exists).
# (Bad form to use overloading of assignment operator for something that is not really an assignment. Also,
# overloading of = does not allow passing more than one argument).
#
def string=(string)
lex_string(string, '')
end
def lex_string(string, path='')
initvars
@scanner = StringScanner.new(string)
@locator = Puppet::Pops::Parser::Locator.locator(string, path)
end
# Lexes an unquoted string.
# @param string [String] the string to lex
# @param locator [Puppet::Pops::Parser::Locator] the locator to use (a default is used if nil is given)
# @param escapes [Array<String>] array of character strings representing the escape sequences to transform
# @param interpolate [Boolean] whether interpolation of expressions should be made or not.
#
def lex_unquoted_string(string, locator, escapes, interpolate)
initvars
@scanner = StringScanner.new(string)
@locator = locator || Puppet::Pops::Parser::Locator.locator(string, '')
@lexing_context[:escapes] = escapes || UQ_ESCAPES
@lexing_context[:uq_slurp_pattern] = (interpolate || !escapes.empty?) ? SLURP_UQ_PATTERN : SLURP_ALL_PATTERN
end
# Convenience method, and for compatibility with older lexer. Use the lex_file instead.
# (Bad form to use overloading of assignment operator for something that is not really an assignment).
#
def file=(file)
lex_file(file)
end
# TODO: This method should not be used, callers should get the locator since it is most likely required to
# compute line, position etc given offsets.
#
def file
@locator ? @locator.file : nil
end
# Initializes lexing of the content of the given file. An empty string is used if the file does not exist.
#
def lex_file(file)
initvars
contents = Puppet::FileSystem.exist?(file) ? Puppet::FileSystem.read(file) : ""
@scanner = StringScanner.new(contents.freeze)
@locator = Puppet::Pops::Parser::Locator.locator(contents, file)
end
def initvars
@token_queue = []
# NOTE: additional keys are used; :escapes, :uq_slurp_pattern, :newline_jump, :epp_*
@lexing_context = {
:brace_count => 0,
:after => nil,
}
end
# Scans all of the content and returns it in an array
# Note that the terminating [false, false] token is included in the result.
#
def fullscan
result = []
scan {|token, value| result.push([token, value]) }
result
end
# A block must be passed to scan. It will be called with two arguments, a symbol for the token,
# and an instance of LexerSupport::TokenValue
# PERFORMANCE NOTE: The TokenValue is designed to reduce the amount of garbage / temporary data
# and to only convert the lexer's internal tokens on demand. It is slightly more costly to create an
# instance of a class defined in Ruby than an Array or Hash, but the gain is much bigger since transformation
# logic is avoided for many of its members (most are never used, e.g. line/pos information, which is in general only of
# value for error messages and for some expressions which the lexer does not know about).
#
def scan
# PERFORMANCE note: it is faster to access local variables than instance variables.
# This makes a small but notable difference since instance member access is avoided for
# every token in the lexed content.
#
scn = @scanner
ctx = @lexing_context
queue = @token_queue
lex_error_without_pos("Internal Error: No string or file given to lexer to process.") unless scn
scn.skip(PATTERN_WS)
# This is the lexer's main loop
until queue.empty? && scn.eos? do
if token = queue.shift || lex_token
- yield [ ctx[:after] = token[0], token[1] ]
+ ctx[:after] = token[0]
+ yield token
end
end
# Signals end of input
yield [false, false]
end
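
A short, hypothetical driving example (not part of the patch) showing the shape of what `scan`/`fullscan` yield; the token names and positions are those the lexer would produce for this one-line input.

```ruby
require 'puppet/pops'

lexer = Puppet::Pops::Parser::Lexer2.new
lexer.lex_string('$x = 10', 'example.pp')
lexer.fullscan.each do |name, value|
  next if name == false              # skip the terminating [false, false] marker
  puts "#{name} #{value[:value].inspect} line #{value[:line]} pos #{value[:pos]}"
end
# VARIABLE "x" line 1 pos 1
# EQUALS "=" line 1 pos 4
# NUMBER "10" line 1 pos 6
```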
# This lexes one token at the current position of the scanner.
# PERFORMANCE NOTE: Any change to this logic should be performance measured.
#
def lex_token
# Using three char look ahead (it may be faster to do 2 char look ahead since only 2 tokens require a third).
scn = @scanner
ctx = @lexing_context
before = @scanner.pos
# A look ahead of 3 characters is used since the longest operator ambiguity is resolved at that point.
# PERFORMANCE NOTE: It is faster to peek once and use three separate variables for lookahead 0, 1 and 2.
#
la = scn.peek(3)
return nil if la.empty?
# Ruby 1.8.7 requires using offset and length (or integers are returned).
# PERFORMANCE NOTE.
# It is slightly faster to use these local variables than accessing la[0], la[1] etc. in ruby 1.9.3
# But not big enough to warrant two completely different implementations.
#
la0 = la[0,1]
la1 = la[1,1]
la2 = la[2,1]
# PERFORMANCE NOTE:
# A case when, where all the cases are literal values is the fastest way to map from data to code.
# It is much faster than using a hash with lambdas, hash with symbol used to then invoke send etc.
# This case statement is evaluated for most character positions in puppet source, and great care must
# be taken to not introduce performance regressions.
#
case la0
when '.'
emit(TOKEN_DOT, before)
when ','
emit(TOKEN_COMMA, before)
when '['
- if ctx[:after] == :NAME && (before == 0 || scn.string[before-1,1] =~ /[[:blank:]\r\n]+/)
+ if (before == 0 || scn.string[before-1,1] =~ /[[:blank:]\r\n]+/)
emit(TOKEN_LISTSTART, before)
else
emit(TOKEN_LBRACK, before)
end
when ']'
emit(TOKEN_RBRACK, before)
when '('
emit(TOKEN_LPAREN, before)
when ')'
emit(TOKEN_RPAREN, before)
when ';'
emit(TOKEN_SEMIC, before)
when '?'
emit(TOKEN_QMARK, before)
when '*'
emit(TOKEN_TIMES, before)
when '%'
if la1 == '>' && ctx[:epp_mode]
scn.pos += 2
if ctx[:epp_mode] == :expr
enqueue_completed(TOKEN_EPPEND, before)
end
ctx[:epp_mode] = :text
interpolate_epp
else
emit(TOKEN_MODULO, before)
end
when '{'
# The lexer needs to help the parser since the technology used cannot deal with
# lookahead of same token with different precedence. This is solved by making left brace
# after ? into a separate token.
#
ctx[:brace_count] += 1
emit(if ctx[:after] == :QMARK
TOKEN_SELBRACE
else
TOKEN_LBRACE
end, before)
when '}'
ctx[:brace_count] -= 1
emit(TOKEN_RBRACE, before)
# TOKENS @, @@, @(
when '@'
case la1
when '@'
emit(TOKEN_ATAT, before) # TODO; Check if this is good for the grammar
when '('
heredoc
else
emit(TOKEN_AT, before)
end
# TOKENS |, |>, |>>
when '|'
emit(case la1
when '>'
la2 == '>' ? TOKEN_RRCOLLECT : TOKEN_RCOLLECT
else
TOKEN_PIPE
end, before)
# TOKENS =, =>, ==, =~
when '='
emit(case la1
when '='
TOKEN_ISEQUAL
when '>'
TOKEN_FARROW
when '~'
TOKEN_MATCH
else
TOKEN_EQUALS
end, before)
# TOKENS '+', '+=', and '+>'
when '+'
emit(case la1
when '='
TOKEN_APPENDS
when '>'
TOKEN_PARROW
else
TOKEN_PLUS
end, before)
# TOKENS '-', '->', and epp '-%>' (end of interpolation with trim)
when '-'
if ctx[:epp_mode] && la1 == '%' && la2 == '>'
scn.pos += 3
if ctx[:epp_mode] == :expr
enqueue_completed(TOKEN_EPPEND_TRIM, before)
end
interpolate_epp(:with_trim)
else
emit(case la1
when '>'
TOKEN_IN_EDGE
when '='
TOKEN_DELETES
else
TOKEN_MINUS
end, before)
end
# TOKENS !, !=, !~
when '!'
emit(case la1
when '='
TOKEN_NOTEQUAL
when '~'
TOKEN_NOMATCH
else
TOKEN_NOT
end, before)
# TOKENS ~>, ~
when '~'
emit(la1 == '>' ? TOKEN_IN_EDGE_SUB : TOKEN_TILDE, before)
when '#'
scn.skip(PATTERN_COMMENT)
nil
# TOKENS '/', '/*' and '/ regexp /'
when '/'
case la1
when '*'
scn.skip(PATTERN_MLCOMMENT)
nil
else
# regexp position is a regexp, else a div
if regexp_acceptable? && value = scn.scan(PATTERN_REGEX)
# Ensure an escaped / was not matched
while value[-2..-2] == STRING_BSLASH_BSLASH # i.e. \\
value += scn.scan_until(PATTERN_REGEX_END)
end
regex = value.sub(PATTERN_REGEX_A, '').sub(PATTERN_REGEX_Z, '').gsub(PATTERN_REGEX_ESC, '/')
emit_completed([:REGEX, Regexp.new(regex), scn.pos-before], before)
else
emit(TOKEN_DIV, before)
end
end
# TOKENS <, <=, <|, <<|, <<, <-, <~
when '<'
emit(case la1
when '<'
if la2 == '|'
TOKEN_LLCOLLECT
else
TOKEN_LSHIFT
end
when '='
TOKEN_LESSEQUAL
when '|'
TOKEN_LCOLLECT
when '-'
TOKEN_OUT_EDGE
when '~'
TOKEN_OUT_EDGE_SUB
else
TOKEN_LESSTHAN
end, before)
# TOKENS >, >=, >>
when '>'
emit(case la1
when '>'
TOKEN_RSHIFT
when '='
TOKEN_GREATEREQUAL
else
TOKEN_GREATERTHAN
end, before)
# TOKENS :, ::CLASSREF, ::NAME
when ':'
if la1 == ':'
before = scn.pos
# PERFORMANCE NOTE: This could potentially be speeded up by using a case/when listing all
# upper case letters. Alternatively, the 'A', and 'Z' comparisons may be faster if they are
# frozen.
#
if la2 >= 'A' && la2 <= 'Z'
# CLASSREF or error
value = scn.scan(PATTERN_CLASSREF)
if value
after = scn.pos
- emit_completed([:CLASSREF, value, after-before], before)
+ emit_completed([:CLASSREF, value.freeze, after-before], before)
else
# move to faulty position ('::<uc-letter>' was ok)
scn.pos = scn.pos + 3
lex_error("Illegal fully qualified class reference")
end
else
# NAME or error
value = scn.scan(PATTERN_NAME)
if value
- emit_completed([:NAME, value, scn.pos-before], before)
+ emit_completed([:NAME, value.freeze, scn.pos-before], before)
else
# move to faulty position ('::' was ok)
scn.pos = scn.pos + 2
lex_error("Illegal fully qualified name")
end
end
else
emit(TOKEN_COLON, before)
end
when '$'
if value = scn.scan(PATTERN_DOLLAR_VAR)
- emit_completed([:VARIABLE, value[1..-1], scn.pos - before], before)
+ emit_completed([:VARIABLE, value[1..-1].freeze, scn.pos - before], before)
else
# consume the $ and let higher layer complain about the error instead of getting a syntax error
emit(TOKEN_VARIABLE_EMPTY, before)
end
when '"'
# Recursive string interpolation, 'interpolate' either returns a STRING token, or
# a DQPRE with the rest of the string's tokens placed in the @token_queue
interpolate_dq
when "'"
- emit_completed([:STRING, slurp_sqstring, before-scn.pos], before)
+ emit_completed([:STRING, slurp_sqstring.freeze, scn.pos - before], before)
when '0', '1', '2', '3', '4', '5', '6', '7', '8', '9'
value = scn.scan(PATTERN_NUMBER)
if value
length = scn.pos - before
assert_numeric(value, length)
- emit_completed([:NUMBER, value, length], before)
+ emit_completed([:NUMBER, value.freeze, length], before)
else
# move to faulty position ([0-9] was ok)
scn.pos = scn.pos + 1
lex_error("Illegal number")
end
when 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm',
'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z', '_'
value = scn.scan(PATTERN_NAME)
# NAME or false start because followed by hyphen(s), underscore or word
if value && !scn.match?(/^-+\w/)
- emit_completed(KEYWORDS[value] || [:NAME, value, scn.pos - before], before)
+ emit_completed(KEYWORDS[value] || [:NAME, value.freeze, scn.pos - before], before)
else
# Restart and check entire pattern (for ease of detecting non allowed trailing hyphen)
scn.pos = before
value = scn.scan(PATTERN_BARE_WORD)
# If the WORD continues with :: it must be a correct fully qualified name
if value && !(fully_qualified = scn.match?(/::/))
- emit_completed([:WORD, value, scn.pos - before], before)
+ emit_completed([:WORD, value.freeze, scn.pos - before], before)
else
# move to faulty position ([a-z_] was ok)
scn.pos = scn.pos + 1
if fully_qualified
lex_error("Illegal fully qualified name")
else
lex_error("Illegal name or bare word")
end
end
end
when 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M',
'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z'
value = scn.scan(PATTERN_CLASSREF)
if value
- emit_completed([:CLASSREF, value, scn.pos - before], before)
+ emit_completed([:CLASSREF, value.freeze, scn.pos - before], before)
else
# move to faulty position ([A-Z] was ok)
scn.pos = scn.pos + 1
lex_error("Illegal class reference")
end
when "\n"
# If heredoc_cont is in effect there are heredoc text lines to skip over
# otherwise just skip the newline.
#
if ctx[:newline_jump]
scn.pos = ctx[:newline_jump]
ctx[:newline_jump] = nil
else
scn.pos += 1
end
return nil
when ' ', "\t", "\r"
scn.skip(PATTERN_WS)
return nil
else
# In case of unicode spaces of various kinds that are captured by a regexp, but not by the
# simpler case expression above (not worth handling those special cases with better performance).
if scn.skip(PATTERN_WS)
nil
else
# "unrecognized char"
emit([:OTHER, la0, 1], before)
end
end
end
# Emits (produces) a token [:tokensymbol, TokenValue] and moves the scanner's position past the token
#
def emit(token, byte_offset)
@scanner.pos = byte_offset + token[2]
[token[0], TokenValue.new(token, byte_offset, @locator)]
end
# Emits the completed token on the form [:tokensymbol, TokenValue. This method does not alter
# the scanner's position.
#
def emit_completed(token, byte_offset)
[token[0], TokenValue.new(token, byte_offset, @locator)]
end
# Enqueues a completed token at the given offset
def enqueue_completed(token, byte_offset)
@token_queue << emit_completed(token, byte_offset)
end
# Allows subprocessors for heredoc etc to enqueue tokens that are tokenized by a different lexer instance
#
def enqueue(emitted_token)
@token_queue << emitted_token
end
# Answers after which tokens it is acceptable to lex a regular expression.
# PERFORMANCE NOTE:
# It may be beneficial to turn this into a hash with default value of true for missing entries.
# A case expression with literal values will however create a hash internally. Since a reference is
# always needed to the hash, this access is almost as costly as a method call.
#
def regexp_acceptable?
case @lexing_context[:after]
# Ends of (potential) R-value generating expressions
when :RPAREN, :RBRACK, :RRCOLLECT, :RCOLLECT
false
# End of (potential) R-value - but must be allowed because of case expressions
# Called out here to not be mistaken for a bug.
when :RBRACE
true
# Operands (that can be followed by DIV (even if illegal in grammar)
when :NAME, :CLASSREF, :NUMBER, :STRING, :BOOLEAN, :DQPRE, :DQMID, :DQPOST, :HEREDOC, :REGEX
false
else
true
end
end
end
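
As a hypothetical illustration of the context test above (not part of the patch), the same `/` character lexes as a regular expression after an R-value-introducing token, but as division after an operand:

```ruby
require 'puppet/pops'

lexer = Puppet::Pops::Parser::Lexer2.new

lexer.lex_string('$x = /foo/', '')
lexer.fullscan.map(&:first)   # => [:VARIABLE, :EQUALS, :REGEX, false]

lexer.lex_string('$x = 10 / 2', '')
lexer.fullscan.map(&:first)   # => [:VARIABLE, :EQUALS, :NUMBER, :DIV, :NUMBER, false]
```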
diff --git a/lib/puppet/pops/parser/lexer_support.rb b/lib/puppet/pops/parser/lexer_support.rb
index c769255a5..5b296e49c 100644
--- a/lib/puppet/pops/parser/lexer_support.rb
+++ b/lib/puppet/pops/parser/lexer_support.rb
@@ -1,107 +1,113 @@
# This is an integral part of the Lexer. It is broken out into a separate module
# for maintainability of the code, and to keep the various parts of the lexer focused.
#
module Puppet::Pops::Parser::LexerSupport
# Formats given message by appending file, line and position if available.
def positioned_message(msg, pos = nil)
result = [msg]
file = @locator.file
line = @locator.line_for_offset(pos || @scanner.pos)
pos = @locator.pos_on_line(pos || @scanner.pos)
result << "in file #{file}" if file && file.is_a?(String) && !file.empty?
result << "at line #{line}:#{pos}"
result.join(" ")
end
# Returns "<eof>" if at end of input, else the following 5 characters with \n \r \t escaped
def followed_by
return "<eof>" if @scanner.eos?
result = @scanner.rest[0,5] + "..."
result.gsub!("\t", '\t')
result.gsub!("\n", '\n')
result.gsub!("\r", '\r')
result
end
# Returns a quoted string using " or ' depending on the given string's content
def format_quote(q)
if q == "'"
'"\'"'
else
"'#{q}'"
end
end
# Raises a Puppet::LexError with the given message
def lex_error_without_pos msg
raise Puppet::LexError.new(msg)
end
# Raises a Puppet::LexError with the given message
def lex_error(msg, pos=nil)
raise Puppet::LexError.new(positioned_message(msg, pos))
end
# Asserts that the given string value is a float, or an integer in decimal, octal or hex form.
# An error is raised if the given value does not comply.
#
def assert_numeric(value, length)
if value =~ /^0[xX].*$/
lex_error("Not a valid hex number #{value}", length) unless value =~ /^0[xX][0-9A-Fa-f]+$/
elsif value =~ /^0[^.].*$/
lex_error("Not a valid octal number #{value}", length) unless value =~ /^0[0-7]+$/
else
lex_error("Not a valid decimal number #{value}", length) unless value =~ /0?\d+(?:\.\d+)?(?:[eE]-?\d+)?/
end
end
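
A few hypothetical values, for illustration only, that pass or fail the check above when called from within a lexer instance (failures raise Puppet::LexError):

```ruby
assert_numeric('0x1F', 4)   # ok: hexadecimal
assert_numeric('0755', 4)   # ok: octal
assert_numeric('3.14', 4)   # ok: decimal float
assert_numeric('1e-3', 4)   # ok: decimal with exponent
assert_numeric('0x1G', 4)   # raises: "Not a valid hex number 0x1G"
assert_numeric('089', 3)    # raises: "Not a valid octal number 089"
```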
# A TokenValue keeps track of the token symbol, the lexed text for the token, its length
# and its position in its source container. There is a cost associated with computing the
# line and position on line information.
#
class TokenValue < Puppet::Pops::Parser::Locatable
attr_reader :token_array
attr_reader :offset
attr_reader :locator
def initialize(token_array, offset, locator)
@token_array = token_array
@offset = offset
@locator = locator
end
def length
@token_array[2]
end
def [](key)
case key
when :value
@token_array[1]
when :file
@locator.file
when :line
@locator.line_for_offset(@offset)
when :pos
@locator.pos_on_line(@offset)
when :length
@token_array[2]
when :locator
@locator
when :offset
@offset
else
nil
end
end
+ def to_s
+ # This format is very compact and is intended for debugging output from the racc parser in
+ # debug mode. If this is made more elaborate the output from a debug run becomes very hard to read.
+ #
+ "'#{self[:value]} #{@token_array[0]}'"
+ end
# TODO: Make this comparable for testing
# vs symbolic, vs array with symbol and non hash, array with symbol and hash)
#
end
end
diff --git a/lib/puppet/pops/parser/locator.rb b/lib/puppet/pops/parser/locator.rb
index 526126aca..c46c38ee9 100644
--- a/lib/puppet/pops/parser/locator.rb
+++ b/lib/puppet/pops/parser/locator.rb
@@ -1,291 +1,291 @@
# Helper class that keeps track of where line breaks are located and can answer questions about positions.
#
class Puppet::Pops::Parser::Locator
RUBY_1_9_3 = (1 << 16 | 9 << 8 | 3)
RUBY_2_0_0 = (2 << 16 | 0 << 8 | 0)
RUBYVER_ARRAY = RUBY_VERSION.split(".").collect {|s| s.to_i }
RUBYVER = (RUBYVER_ARRAY[0] << 16 | RUBYVER_ARRAY[1] << 8 | RUBYVER_ARRAY[2])
# Computes a symbol representing which ruby runtime this is running on
# This implementation will fail if there are more than 255 minor or micro versions of ruby
#
def self.locator_version
if RUBYVER >= RUBY_2_0_0
:ruby20
elsif RUBYVER >= RUBY_1_9_3
:ruby19
else
:ruby18
end
end
LOCATOR_VERSION = locator_version
# Constant set to true if multibyte is supported (includes multibyte extended regular expressions)
MULTIBYTE = !!(LOCATOR_VERSION == :ruby19 || LOCATOR_VERSION == :ruby20)
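Illustrative arithmetic only (not part of the patch): the version triple is packed into a single integer so that one numeric comparison selects the Locator implementation.

```ruby
"1.9.3".split('.').map { |s| s.to_i }.inject(0) { |acc, part| (acc << 8) | part }
# => 67843, i.e. (1 << 16) | (9 << 8) | 3; any RUBYVER >= this selects Locator19
```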
# Creates, or recreates, a Locator. A Locator is created if index is not given (a scan is then
# performed of the given source string).
#
def self.locator(string, file, index = nil)
case LOCATOR_VERSION
when :ruby20, :ruby19
Locator19.new(string, file, index)
else
Locator18.new(string, file, index)
end
end
# Returns the file name associated with the string content
def file
end
# Returns the string content
def string
end
# Returns the position on line (first position on a line is 1)
def pos_on_line(offset)
end
# Returns the line number (first line is 1) for the given offset
def line_for_offset(offset)
end
# Returns the offset on line (first offset on a line is 0).
#
def offset_on_line(offset)
end
# Returns the character offset for a given reported offset
def char_offset(byte_offset)
end
- # Returns the length measured in number of characters from the given start and end reported offseta
+ # Returns the length measured in number of characters from the given start and end reported offset
def char_length(offset, end_offset)
end
# Returns the line index - an array of line offsets for the start position of each line, starting at 0 for
# the first line.
#
def line_index()
end
# A Sublocator locates a concrete locator (subspace) in a virtual space.
# The `leading_line_count` is the (virtual) number of lines preceding the first line in the concrete locator.
# The `leading_offset` is the (virtual) byte offset of the first byte in the concrete locator.
# The `leading_line_offset` is the (virtual) offset / margin in characters for each line.
#
# This illustrates characters in the sublocator (`.`) inside the subspace (`X`):
#
# 1:XXXXXXXX
# 2:XXXX.... .. ... ..
# 3:XXXX. . .... ..
# 4:XXXX............
#
# This sublocator would be configured with leading_line_count = 1,
# leading_offset=8, and leading_line_offset=4
#
# Note that leading_offset must be the same for all lines and measured in characters.
#
class SubLocator < Puppet::Pops::Parser::Locator
attr_reader :locator
attr_reader :leading_line_count
attr_reader :leading_offset
attr_reader :leading_line_offset
def self.sub_locator(string, file, leading_line_count, leading_offset, leading_line_offset)
self.new(Puppet::Pops::Parser::Locator.locator(string, file),
leading_line_count,
leading_offset,
leading_line_offset)
end
def initialize(locator, leading_line_count, leading_offset, leading_line_offset)
@locator = locator
@leading_line_count = leading_line_count
@leading_offset = leading_offset
@leading_line_offset = leading_line_offset
end
def file
@locator.file
end
def string
@locator.string
end
# Given offset is offset in the subspace
def line_for_offset(offset)
@locator.line_for_offset(offset) + @leading_line_count
end
# Given offset is offset in the subspace
def offset_on_line(offset)
@locator.offset_on_line(offset) + @leading_line_offset
end
# Given offset is offset in the subspace
def char_offset(offset)
effective_line = @locator.line_for_offset(offset)
locator.char_offset(offset) + (effective_line * @leading_line_offset) + @leading_offset
end
# Given offsets are offsets in the subspace
def char_length(offset, end_offset)
effective_line = @locator.line_for_offset(end_offset) - @locator.line_for_offset(offset)
locator.char_length(offset, end_offset) + (effective_line * @leading_line_offset)
end
def pos_on_line(offset)
offset_on_line(offset) +1
end
end
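
A hypothetical configuration, for illustration only, matching the drawing above: a sublocator for a heredoc-like body that starts one virtual line down, 8 bytes in, with a 4-character margin.

```ruby
require 'puppet/pops'

sub = Puppet::Pops::Parser::Locator::SubLocator.sub_locator(
  "hello\nworld\n", 'example.pp',
  1,   # leading_line_count:  one virtual line precedes the body
  8,   # leading_offset:      virtual byte offset of the body's first byte
  4)   # leading_line_offset: 4-character margin on every line

sub.line_for_offset(0)   # => 2  (line 1 of the body + 1 leading line)
sub.pos_on_line(0)       # => 5  (column 1 of the body + the 4-character margin)
```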
private
class AbstractLocator < Puppet::Pops::Parser::Locator
attr_accessor :line_index
attr_accessor :string
attr_accessor :prev_offset
attr_accessor :prev_line
attr_reader :string
attr_reader :file
# Create a locator based on a content string, and a boolean indicating if the ruby version supports multi-byte strings
# or not.
#
def initialize(string, file, index = nil)
@string = string.freeze
@file = file.freeze
@prev_offset = nil
@prev_line = nil
@line_index = index
compute_line_index unless !index.nil?
end
# Returns the position on line (first position on a line is 1)
def pos_on_line(offset)
offset_on_line(offset) +1
end
def to_location_hash(reported_offset, end_offset)
pos = pos_on_line(reported_offset)
offset = char_offset(reported_offset)
length = char_length(reported_offset, end_offset)
start_line = line_for_offset(reported_offset)
{ :line => start_line, :pos => pos, :offset => offset, :length => length}
end
# Returns the index of the smallest item for which the item > the given value
# This is a min binary search. Although written in Ruby it is only slightly slower than
# the corresponding method in C in Ruby 2.0.0 - the main benefit of using this method over
# the Ruby C version is that it returns the index (not the value), which means there is no need
# to have an additional structure to get the index (or record the index in the structure). This
# saves both memory and CPU. It also does not require passing a block that is called since this
# method is specialized to search the line index.
#
def ary_bsearch_i(ary, value)
low = 0
high = ary.length
mid = nil
smaller = false
satisfied = false
v = nil
while low < high do
mid = low + ((high - low) / 2)
v = (ary[mid] > value)
if v == true
satisfied = true
smaller = true
elsif !v
smaller = false
else
raise TypeError, "wrong argument, must be boolean or nil, got '#{v.class}'"
end
if smaller
high = mid
else
low = mid + 1;
end
end
return nil if low == ary.length
return nil if !satisfied
return low
end
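
For illustration only (not part of the patch), the effect of the min binary search when looking up lines through the public locator API:

```ruby
require 'puppet/pops'

loc = Puppet::Pops::Parser::Locator.locator("a\nbb\nccc", 'example.pp')
loc.line_index            # => [0, 2, 5]  (byte offset of each line start)
loc.line_for_offset(3)    # => 2  (the first line start greater than 3 is at index 2)
loc.line_for_offset(7)    # => 3  (no line start is greater, so it is the last line)
```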
# Common impl for 18 and 19 since scanner is byte based
def compute_line_index
scanner = StringScanner.new(string)
result = [0] # first line starts at 0
while scanner.scan_until(/\n/)
result << scanner.pos
end
self.line_index = result.freeze
end
# Returns the line number (first line is 1) for the given offset
def line_for_offset(offset)
if prev_offset == offset
# use cache
return prev_line
end
if line_nbr = ary_bsearch_i(line_index, offset)
# cache
prev_offset = offset
prev_line = line_nbr
return line_nbr
end
# If not found it is after last
# clear cache
prev_offset = prev_line = nil
return line_index.size
end
end
class Locator18 < AbstractLocator
def offset_on_line(offset)
line_offset = line_index[ line_for_offset(offset)-1 ]
offset - line_offset
end
def char_offset(char_offset)
char_offset
end
def char_length(offset, end_offset)
end_offset - offset
end
end
# This implementation is for Ruby19 and Ruby20. It uses byteslice to get strings from byte based offsets.
# For Ruby20 this is faster than using the StringScanner.charpos method (byteslice outperforms it when
# strings are frozen).
#
class Locator19 < AbstractLocator
# Returns the offset on line (first offset on a line is 0).
# Ruby 19 is multibyte but has no character position methods, must use byteslice
def offset_on_line(offset)
line_offset = line_index[ line_for_offset(offset)-1 ]
string.byteslice(line_offset, offset-line_offset).length
end
# Returns the character offset for a given byte offset
# Ruby 19 is multibyte but has no character position methods, must use byteslice
def char_offset(byte_offset)
string.byteslice(0, byte_offset).length
end
# Returns the length measured in number of characters from the given start and end byte offsets
def char_length(offset, end_offset)
string.byteslice(offset, end_offset - offset).length
end
end
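# Illustrative sketch (not part of the patch): why byte and character offsets differ for
# multi-byte input, and how byteslice bridges them. "ä" is two bytes in UTF-8 but one
# character, so byte offset 3 into "päron" is character offset 2:
#
#   s = "päron".freeze           # 6 bytes, 5 characters
#   s.byteslice(0, 3).length     # => 2, i.e. what char_offset(3) returns
#   s.byteslice(3, 3).length     # => 3, i.e. char_length(3, 6) for the trailing "ron"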
end
diff --git a/lib/puppet/pops/parser/makefile b/lib/puppet/pops/parser/makefile
deleted file mode 100644
index 802382dd8..000000000
--- a/lib/puppet/pops/parser/makefile
+++ /dev/null
@@ -1,6 +0,0 @@
-
-eparser.rb: egrammar.ra
- racc -o$@ egrammar.ra
-
-egrammar.output: egrammar.ra
- racc -v -o$@ egrammar.ra
diff --git a/lib/puppet/pops/parser/parser_support.rb b/lib/puppet/pops/parser/parser_support.rb
index a351048d3..cb1c83aa2 100644
--- a/lib/puppet/pops/parser/parser_support.rb
+++ b/lib/puppet/pops/parser/parser_support.rb
@@ -1,231 +1,213 @@
require 'puppet/parser/functions'
require 'puppet/parser/files'
require 'puppet/resource/type_collection'
require 'puppet/resource/type_collection_helper'
require 'puppet/resource/type'
require 'monitor'
# Supporting logic for the parser.
# This supporting logic has slightly different responsibilities compared to the original Puppet::Parser::Parser.
# It is only concerned with parsing.
#
class Puppet::Pops::Parser::Parser
# Note that the name of the contained class and the file name (currently parser_support.rb)
# need to be different, as the class is generated by Racc and this file (parser_support.rb) is included as a mixin.
#
# Simplify access to the Model factory
# Note that the parser/parser support does not have direct knowledge about the Model.
# All model construction/manipulation is made by the Factory.
#
Factory = Puppet::Pops::Model::Factory
Model = Puppet::Pops::Model
include Puppet::Resource::TypeCollectionHelper
attr_accessor :lexer
attr_reader :definitions
# Returns the token text of the given lexer token, or nil, if token is nil
def token_text t
return t if t.nil?
t = t.current if t.respond_to?(:current)
return t.value if t.is_a? Model::QualifiedName
# else it is a lexer token
t[:value]
end
# Produces the fully qualified name, with the full (current) namespace for a given name.
#
# This is needed because class bodies are lazily evaluated and an inner class' container(s) may not
# have been evaluated before some external reference is made to the inner class; it must therefore know its complete name
# before evaluation-time.
#
def classname(name)
- [namespace, name].join("::").sub(/^::/, '')
+ [namespace, name].join('::').sub(/^::/, '')
end
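# Illustrative sketch (not part of the patch), using a hypothetical name stack ['foo', 'bar']
# so that namespace is "foo::bar":
#
#   ['foo::bar', 'baz'].join('::').sub(/^::/, '')   # => "foo::bar::baz"
#   ['', 'baz'].join('::').sub(/^::/, '')           # => "baz" (top level, leading '::' stripped)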
-# # Reinitializes variables (i.e. creates a new lexer instance
-# #
-# def clear
-# initvars
-# end
-
# Raises a Parse error.
def error(value, message, options = {})
except = Puppet::ParseError.new(message)
except.line = options[:line] || value[:line]
- except.file = options[:file] || value[:file] # @lexer.file
- except.pos = options[:pos] || value[:pos] # @lexer.pos
+ except.file = options[:file] || value[:file]
+ except.pos = options[:pos] || value[:pos]
raise except
end
# Parses a file expected to contain pp DSL logic.
def parse_file(file)
unless Puppet::FileSystem.exist?(file)
unless file =~ /\.pp$/
file = file + ".pp"
end
end
@lexer.file = file
_parse()
end
def initialize()
- # Since the parser is not responsible for importing (removed), and does not perform linking,
- # and there is no syntax that requires knowing if something referenced exists, it is safe
- # to assume that no environment is needed when parsing. (All that comes later).
- #
@lexer = Puppet::Pops::Parser::Lexer2.new
@namestack = []
@definitions = []
end
-# # Initializes the parser support by creating a new instance of {Puppet::Pops::Parser::Lexer}
-# # @return [void]
-# #
-# def initvars
-# end
-
- # This is a callback from the generated grammar (when an error occurs while parsing)
- # TODO Picks up origin information from the lexer, probably needs this from the caller instead
- # (for code strings, and when start line is not line 1 in a code string (or file), etc.)
+ # This is a callback from the generated parser (when an error occurs while parsing)
#
def on_error(token,value,stack)
if token == 0 # denotes end of file
value_at = 'end of file'
else
value_at = "'#{value[:value]}'"
end
- error = "Syntax error at #{value_at}"
+ if @yydebug
+ error = "Syntax error at #{value_at}, token: #{token}"
+ else
+ error = "Syntax error at #{value_at}"
+ end
+ # Note, old parser had processing of "expected token here" - do not try to reinstate:
# The 'expected' is only of value at end of input, otherwise any parse error involving a
# start of a pair will be reported as expecting the close of the pair - e.g. for "$x.each |$x {|" it would
# report that, having seen the '{', a '}' is expected. That would be wrong.
# Real "expected" tokens are very difficult to compute (would require parsing of racc output data). Output of the stack
# could help, but can require extensive backtracking and produce many options.
#
# The lexer should handle the "expected instead of end of file" cases for strings and interpolation; other expectancies
# must be handled by the grammar. The lexer may have enqueued tokens far ahead - the lexer's opinion about this
# is not trustworthy.
#
-# if token == 0 && brace = @lexer.expected
-# error += "; expected '#{brace}'"
-# end
except = Puppet::ParseError.new(error)
if token != 0
path = value[:file]
except.line = value[:line]
except.pos = value[:pos]
else
# At end of input, use what the lexer thinks is the source file
path = lexer.file
end
except.file = path if path.is_a?(String) && !path.empty?
raise except
end
# Parses a String of pp DSL code.
# @todo make it possible to pass a given origin
#
def parse_string(code)
@lexer.string = code
_parse()
end
# Mark the factory wrapped model object with location information
- # @todo the lexer produces :line for token, but no offset or length
# @return [Puppet::Pops::Model::Factory] the given factory
# @api private
#
def loc(factory, start_locateable, end_locateable = nil)
factory.record_position(start_locateable, end_locateable)
end
- def heredoc_loc(factory, start_locateabke, end_locateable = nil)
- factory.record_heredoc_position(start_locatable, end_locatable)
- end
-
- # Associate documentation with the factory wrapped model object.
+ # Mark the factory wrapped heredoc model object with location information
# @return [Puppet::Pops::Model::Factory] the given factory
# @api private
- def doc factory, doc_string
- factory.doc = doc_string
+ #
+ def heredoc_loc(factory, start_locateable, end_locateable = nil)
+ factory.record_heredoc_position(start_locateable, end_locateable)
end
def aryfy(o)
o = [o] unless o.is_a?(Array)
o
end
def namespace
@namestack.join('::')
end
def namestack(name)
@namestack << name
end
def namepop()
@namestack.pop
end
def add_definition(definition)
@definitions << definition.current
definition
end
# Transforms an array of expressions containing literal name expressions to calls if followed by an
# expression, or expression list
#
def transform_calls(expressions)
Factory.transform_calls(expressions)
end
# Transforms a LEFT followed by the result of attribute_operations; this may be a call or an invalid sequence
def transform_resource_wo_title(left, resource)
Factory.transform_resource_wo_title(left, resource)
end
- # If there are definitions that require initialization a Program is produced, else the body
+ # Creates a program with the given body.
+ #
def create_program(body)
locator = @lexer.locator
Factory.PROGRAM(body, definitions, locator)
end
+ # Creates an empty program with a single No-op at the input's EOF offset with 0 length.
+ #
+ def create_empty_program()
+ locator = @lexer.locator
+ no_op = Factory.literal(nil)
+ # Create a synthetic NOOP token at EOF offset with 0 size. The lexer does not produce an EOF token that is
+ # visible to the grammar rules. Creating this token is mainly to reuse the positioning logic as it
+ # expects a token decorated with location information.
+ token_sym, token = @lexer.emit_completed([:NOOP,'',0], locator.string.bytesize)
+ loc(no_op, token)
+ # Program with a Noop
+ program = Factory.PROGRAM(no_op, [], locator)
+ program
+ end
+
# Performs the parsing and returns the resulting model.
# The lexer holds state, and this is setup with {#parse_string}, or {#parse_file}.
#
- # TODO: Drop support for parsing a ruby file this way (should be done where it is decided
- # which file to load/run (i.e. loaders), and initial file to run
- # TODO: deal with options containing origin (i.e. parsing a string from externally known location).
- # TODO: should return the model, not a Hostclass
- #
# @api private
#
def _parse()
begin
@yydebug = false
main = yyparse(@lexer,:scan)
- # #Commented out now because this hides problems in the racc grammar while developing
- # # TODO include this when test coverage is good enough.
- # rescue Puppet::ParseError => except
- # except.line ||= @lexer.line
- # except.file ||= @lexer.file
- # except.pos ||= @lexer.pos
- # raise except
- # rescue => except
- # raise Puppet::ParseError.new(except.message, @lexer.file, @lexer.line, @lexer.pos, except)
end
return main
ensure
@lexer.clear
@namestack = []
@definitions = []
end
end
diff --git a/lib/puppet/pops/patterns.rb b/lib/puppet/pops/patterns.rb
index a2534774d..aa46d9d06 100644
--- a/lib/puppet/pops/patterns.rb
+++ b/lib/puppet/pops/patterns.rb
@@ -1,44 +1,44 @@
# The Patterns module contains common regular expression patterns for the Puppet DSL language
module Puppet::Pops::Patterns
- # NUMERIC matches hex, octal, decimal, and floating point and captures three parts
- # 0 = entire matched number, leading and trailing whitespace included
- # 1 = hexadecimal number
- # 2 = non hex integer portion, possibly with leading 0 (octal)
- # 3 = floating point part, starts with ".", decimals and optional exponent
+ # NUMERIC matches hex, octal, decimal, and floating point and captures several parts
+ # 0 = entire matched number, leading and trailing whitespace and sign included
+ # 1 = sign, +, - or nothing
+ # 2 = entire numeric part
+ # 3 = hexadecimal number
+ # 4 = non hex integer portion, possibly with leading 0 (octal)
+ # 5 = floating point part, starts with ".", decimals and optional exponent
#
- # Thus, a hex number has group 1 value, an octal value has group 2 (if it starts with 0), and no group 3
- # and a floating point value has group 2 and group 3.
+ # Thus, a hex number has a group 3 value, an octal value has group 4 (if it starts with 0) and no group 5,
+ # and a floating point value has group 4 and group 5.
#
- NUMERIC = %r{^\s*(?:(0[xX][0-9A-Fa-f]+)|(0?\d+)((?:\.\d+)?(?:[eE]-?\d+)?))\s*$}
+ NUMERIC = %r{\A[[:blank:]]*([-+]?)[[:blank:]]*((0[xX][0-9A-Fa-f]+)|(0?\d+)((?:\.\d+)?(?:[eE]-?\d+)?))[[:blank:]]*\z}
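# Illustrative sketch (not part of the patch): capture groups produced by the new NUMERIC pattern.
#
#   m = NUMERIC.match(' -0x1f ')
#   m[1]   # => "-"       (sign)
#   m[2]   # => "0x1f"    (entire numeric part)
#   m[3]   # => "0x1f"    (hexadecimal number)
#   m = NUMERIC.match('3.14e-2')
#   m[4]   # => "3"       (integer portion)
#   m[5]   # => ".14e-2"  (fraction and optional exponent)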
# ILLEGAL_P3_1_HOSTNAME matches if a hostname contains illegal characters.
# This check does not prevent pathological names like 'a....b', '.....', "---". etc.
ILLEGAL_HOSTNAME_CHARS = %r{[^-\w.]}
# NAME matches a name the same way as the lexer.
NAME = %r{\A((::)?[a-z]\w*)(::[a-z]\w*)*\z}
# CLASSREF_EXT matches a class reference the same way as the lexer - i.e. the external source form
# where each part must start with a capital letter A-Z.
- # This name includes hyphen, which may be illegal in some cases.
#
CLASSREF_EXT = %r{\A((::){0,1}[A-Z][\w]*)+\z}
# CLASSREF matches a class reference the way it is represented internally in the
# model (i.e. in lower case).
- # This name includes hyphen, which may be illegal in some cases.
#
CLASSREF = %r{\A((::){0,1}[a-z][\w]*)+\z}
# DOLLAR_VAR matches a variable name including the initial $ character
DOLLAR_VAR = %r{\$(::)?(\w+::)*\w+}
# VAR_NAME matches the name part of a variable (The $ character is not included)
# Note, that only the final segment may start with an underscore.
VAR_NAME = %r{\A(:?(::)?[a-z]\w*)*(:?(::)?[a-z_]\w*)\z}
# A Numeric var name must be the decimal number 0, or a decimal number not starting with 0
NUMERIC_VAR_NAME = %r{\A(?:0|(?:[1-9][0-9]*))\z}
end
diff --git a/lib/puppet/pops/semantic_error.rb b/lib/puppet/pops/semantic_error.rb
index 3cfa120ba..58026025f 100644
--- a/lib/puppet/pops/semantic_error.rb
+++ b/lib/puppet/pops/semantic_error.rb
@@ -1,17 +1,17 @@
# Error that is used to raise an Issue. See {Puppet::Pops::Issues}.
#
class Puppet::Pops::SemanticError < RuntimeError
attr_accessor :issue
attr_accessor :semantic
attr_accessor :options
# @param issue [Puppet::Pops::Issues::Issue] the issue describing the severity and message
# @param semantic [Puppet::Pops::Model::Locatable, nil] the expression causing the failure, or nil if unknown
- # @param options [Hash] an options hash with Symbol to valu mapping - these are the arguments to the issue
+ # @param options [Hash] an options hash with Symbol to value mapping - these are the arguments to the issue
#
def initialize(issue, semantic=nil, options = {})
@issue = issue
@semantic = semantic
@options = options
end
end
diff --git a/lib/puppet/pops/types/class_loader.rb b/lib/puppet/pops/types/class_loader.rb
index 0cd1b8c2f..1011f4715 100644
--- a/lib/puppet/pops/types/class_loader.rb
+++ b/lib/puppet/pops/types/class_loader.rb
@@ -1,118 +1,129 @@
require 'rgen/metamodel_builder'
# The ClassLoader provides a Class instance given a class name or a meta-type.
# If the class is not already loaded, it is loaded using the Puppet Autoloader.
# This means it can load a class from a gem, or from puppet modules.
#
class Puppet::Pops::Types::ClassLoader
@autoloader = Puppet::Util::Autoload.new("ClassLoader", "", :wrap => false)
# Returns a Class given a fully qualified class name.
# Lookup of class is never relative to the calling namespace.
- # @param name [String, Array<String>, Array<Symbol>, Puppet::Pops::Types::PObjectType] A fully qualified
- # class name String (e.g. '::Foo::Bar', 'Foo::Bar'), a PObjectType, or a fully qualified name in Array form where each part
+ # @param name [String, Array<String>, Array<Symbol>, Puppet::Pops::Types::PAnyType] A fully qualified
+ # class name String (e.g. '::Foo::Bar', 'Foo::Bar'), a PAnyType, or a fully qualified name in Array form where each part
# is either a String or a Symbol, e.g. `%w{Puppetx Puppetlabs SomeExtension}`.
# @return [Class, nil] the looked up class or nil if no such class is loaded
# @raise ArgumentError If the given argument has the wrong type
# @api public
#
def self.provide(name)
case name
when String
provide_from_string(name)
when Array
provide_from_name_path(name.join('::'), name)
- when Puppet::Pops::Types::PObjectType, Puppet::Pops::Types::PType
+ when Puppet::Pops::Types::PAnyType, Puppet::Pops::Types::PType
provide_from_type(name)
else
raise ArgumentError, "Cannot provide a class from a '#{name.class.name}'"
end
end
private
def self.provide_from_type(type)
case type
- when Puppet::Pops::Types::PRubyType
- provide_from_string(type.ruby_class)
+ when Puppet::Pops::Types::PRuntimeType
+ raise ArgumentError.new("Only Runtime type 'ruby' is supported, got #{type.runtime}") unless type.runtime == :ruby
+ provide_from_string(type.runtime_type_name)
when Puppet::Pops::Types::PBooleanType
# There is no other thing to load except this Enum meta type
RGen::MetamodelBuilder::MMBase::Boolean
when Puppet::Pops::Types::PType
- # TODO: PType should have a type argument (a PObjectType)
+ # TODO: PType should have a type argument (a PAnyType) so the Class' class could be returned
+ # (but this only matters in special circumstances when meta programming has been used).
Class
+ when Puppet::Pops::Types::POptionalType
+ # cannot make a distinction between optional and its type
+ provide_from_type(type.optional_type)
+
# Although not expected to be the first choice for getting a concrete class for these
# types, these are of value if the calling logic just has a reference to type.
#
- when Puppet::Pops::Types::PArrayType ; Array
- when Puppet::Pops::Types::PHashType ; Hash
- when Puppet::Pops::Types::PRegexpType ; Regexp
- when Puppet::Pops::Types::PIntegerType ; Integer
- when Puppet::Pops::Types::PStringType ; String
- when Puppet::Pops::Types::PFloatType ; Float
- when Puppet::Pops::Types::PNilType ; NilClass
+ when Puppet::Pops::Types::PArrayType ; Array
+ when Puppet::Pops::Types::PTupleType ; Array
+ when Puppet::Pops::Types::PHashType ; Hash
+ when Puppet::Pops::Types::PStructType ; Hash
+ when Puppet::Pops::Types::PRegexpType ; Regexp
+ when Puppet::Pops::Types::PIntegerType ; Integer
+ when Puppet::Pops::Types::PStringType ; String
+ when Puppet::Pops::Types::PPatternType ; String
+ when Puppet::Pops::Types::PEnumType ; String
+ when Puppet::Pops::Types::PFloatType ; Float
+ when Puppet::Pops::Types::PNilType ; NilClass
+ when Puppet::Pops::Types::PCallableType ; Proc
else
nil
end
end
def self.provide_from_string(name)
name_path = name.split('::')
# always from the root, so remove an empty first segment
if name_path[0].empty?
name_path = name_path[1..-1]
end
provide_from_name_path(name, name_path)
end
def self.provide_from_name_path(name, name_path)
# If class is already loaded, try this first
result = find_class(name_path)
unless result.is_a?(Class)
# Attempt to load it using the auto loader
loaded_path = nil
if paths_for_name(name).find {|path| loaded_path = path; @autoloader.load(path) }
result = find_class(name_path)
unless result.is_a?(Class)
raise RuntimeError, "Loading of #{name} using relative path: '#{loaded_path}' did not create expected class"
end
end
end
return nil unless result.is_a?(Class)
result
end
def self.find_class(name_path)
name_path.reduce(Object) do |ns, name|
begin
ns.const_get(name)
rescue NameError
return nil
end
end
end
def self.paths_for_name(fq_name)
[de_camel(fq_name), downcased_path(fq_name)]
end
def self.downcased_path(fq_name)
fq_name.to_s.gsub(/::/, '/').downcase
end
def self.de_camel(fq_name)
fq_name.to_s.gsub(/::/, '/').
gsub(/([A-Z]+)([A-Z][a-z])/,'\1_\2').
gsub(/([a-z\d])([A-Z])/,'\1_\2').
tr("-", "_").
downcase
end
end
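# Illustrative sketch (not part of the patch): the two candidate load paths tried by the
# autoloader for a fully qualified name, reproduced with plain String operations.
#
#   fq = 'Puppetx::MyModule::SomeExtension'
#   fq.gsub(/::/, '/').
#     gsub(/([A-Z]+)([A-Z][a-z])/, '\1_\2').
#     gsub(/([a-z\d])([A-Z])/, '\1_\2').
#     tr('-', '_').downcase          # => "puppetx/my_module/some_extension" (de_camel)
#   fq.gsub(/::/, '/').downcase      # => "puppetx/mymodule/someextension"   (downcased_path)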
diff --git a/lib/puppet/pops/types/type_calculator.rb b/lib/puppet/pops/types/type_calculator.rb
index 5df01a82a..644d007cd 100644
--- a/lib/puppet/pops/types/type_calculator.rb
+++ b/lib/puppet/pops/types/type_calculator.rb
@@ -1,1597 +1,1698 @@
# The TypeCalculator can answer questions about puppet types.
#
# The Puppet type system is primarily based on sub-classing. When asking the type calculator to infer types from Ruby in general, it
# may not provide the wanted answer; it does not for instance take module inclusions and extensions into account. In general the type
# system should be unsurprising for anyone being exposed to the notion of type. The type `Data` may require a bit more explanation; this
# is an abstract type that includes all scalar types, as well as Array with an element type compatible with Data, and Hash with key
# compatible with scalar and elements compatible with Data. Expressed differently: Data is what you typically express using JSON (with
# the exception that the Puppet type system also includes Pattern (regular expression) as a scalar).
#
# Inference
# ---------
# The `infer(o)` method infers a Puppet type for scalar Ruby objects, and for Arrays and Hashes.
# The inference result is instance specific for single typed collections
# and allows answering questions about its embedded type. It does not however preserve multiple types in
# a collection, and can thus not answer questions like `[1,a].infer() =~ Array[Integer, String]` since the inference
# computes the common type Scalar when combining Integer and String.
#
# The `infer_generic(o)` method infers a generic Puppet type for scalar Ruby object, Arrays and Hashes.
# This inference result does not contain instance specific information; e.g. Array[Integer] where the integer
# range is the generic default. Just like `infer`, it also combines types into a common type.
#
# The `infer_set(o)` method works like `infer` but preserves all type information. It does not do any
# reduction into common types or ranges. This method of inference is best suited for answering questions
# about an object being an instance of a type. It correctly answers: `[1,a].infer_set() =~ Array[Integer, String]`
#
# The `generalize!(t)` method modifies an instance specific inference result to a generic. The method mutates
# the given argument. Basically, this removes string instances from String, and range from Integer and Float.
#
# Assignability
# -------------
# The `assignable?(t1, t2)` method answers if t2 conforms to t1. The type t2 may be an instance, in which case
# its type is inferred, or a type.
#
# Instance?
# ---------
# The `instance?(t, o)` method answers if the given object (instance) is an instance that is assignable to the given type.
#
# String
# ------
# Creates a string representation of a type.
#
# Creation of Type instances
# --------------------------
# Instance of the classes in the {Puppet::Pops::Types type model} are used to denote a specific type. It is most convenient
# to use the {Puppet::Pops::Types::TypeFactory TypeFactory} when creating instances.
#
# @note
# In general, new instances of the wanted type should be created as they are assigned to models using containment, and a
# contained object can only be in one container at a time. Also, the type system may include more details in each type
# instance, such as if it may be nil, be empty, contain a certain count etc. Or put differently, the puppet types are not
# singletons.
#
# All types support `copy` which should be used when assigning a type where it is unknown if it is bound or not
# to a parent type. A check can be made with `t.eContainer().nil?`
#
# Equality and Hash
# -----------------
# Type instances are equal in terms of Ruby eql? and `==` if they describe the same type, but they are not `equal?` if they are not
# the same type instance. Two types that describe the same type have identical hash - this makes them usable as hash keys.
#
# Types and Subclasses
# --------------------
# In general, the type calculator should be used to answer questions about whether a type is a subtype of another (using {#assignable?}),
# or whether a given object is an instance of a given type or a subtype thereof (using {#instance?}).
# Many of the types also have a Ruby subtype relationship; e.g. PHashType and PArrayType are both subtypes of PCollectionType, and
# PIntegerType, PFloatType, PStringType,... are subtypes of PScalarType. Even if it is possible to answer certain questions about
# type by looking at the Ruby class of the types this is considered an implementation detail, and such checks should in general
# be performed by the type_calculator which implements the type system semantics.
#
-# The PRubyType
+# The PRuntimeType
# -------------
-# The PRubyType corresponds to a Ruby Class, except for the puppet types that are specialized (i.e. PRubyType should not be
-# used for Integer, String, etc. since there are specialized types for those).
-# When the type calculator deals with PRubyTypes and checks for assignability, it determines the "common ancestor class" of two classes.
-# This check is made based on the superclasses of the two classes being compared. In order to perform this, the classes must be present
-# (i.e. they are resolved from the string form in the PRubyType to a loaded, instantiated Ruby Class). In general this is not a problem,
-# since the question to produce the common super type for two objects means that the classes must be present or there would have been
-# no instances present in the first place. If however the classes are not present, the type calculator will fall back and state that
-# the two types at least have Object in common.
+# The PRuntimeType corresponds to a type in the runtime system (currently only supported runtime is 'ruby'). The
+# type has a runtime_type_name that corresponds to a Ruby Class name.
+# A Runtime[ruby] type can be used to describe any ruby class except for the puppet types that are specialized
+# (i.e. PRuntimeType should not be used for Integer, String, etc. since there are specialized types for those).
+# When the type calculator deals with PRuntimeTypes and checks for assignability, it determines the
+# "common ancestor class" of two classes.
+# This check is made based on the superclasses of the two classes being compared. In order to perform this, the
+# classes must be present (i.e. they are resolved from the string form in the PRuntimeType to a
+# loaded, instantiated Ruby Class). In general this is not a problem, since the question to produce the common
+# super type for two objects means that the classes must be present or there would have been
+# no instances present in the first place. If however the classes are not present, the type
+# calculator will fall back and state that the two types at least have Any in common.
#
# @see Puppet::Pops::Types::TypeFactory TypeFactory for how to create instances of types
# @see Puppet::Pops::Types::TypeParser TypeParser how to construct a type instance from a String
# @see Puppet::Pops::Types Types for details about the type model
#
# Using the Type Calculator
# -----
# The type calculator can be directly used via its class methods. If doing time critical work and doing many
# calls to the type calculator, it is more performant to create an instance and invoke the corresponding
-# instance methods. Note that inference is an expensive operation, rather than infering the same thing
+# instance methods. Note that inference is an expensive operation, rather than inferring the same thing
# several times, it is in general better to infer once and then copy the result if mutation to a more generic form is
# required.
#
# @api public
#
class Puppet::Pops::Types::TypeCalculator
Types = Puppet::Pops::Types
TheInfinity = 1.0 / 0.0 # because the Infinity symbol is not defined
# @api public
def self.assignable?(t1, t2)
singleton.assignable?(t1,t2)
end
# Answers, does the given callable accept the arguments given in args (an array or a tuple)
# @param callable [Puppet::Pops::Types::PCallableType] - the callable
# @param args [Puppet::Pops::Types::PArrayType, Puppet::Pops::Types::PTupleType] args optionally including a lambda callable at the end
# @return [Boolean] true if the callable accepts the arguments
#
# @api public
def self.callable?(callable, args)
singleton.callable?(callable, args)
end
# Produces a String representation of the given type.
- # @param t [Puppet::Pops::Types::PAbstractType] the type to produce a string form
+ # @param t [Puppet::Pops::Types::PAnyType] the type to produce a string form
# @return [String] the type in string form
#
# @api public
#
def self.string(t)
singleton.string(t)
end
# @api public
def self.infer(o)
singleton.infer(o)
end
# @api public
def self.generalize!(o)
singleton.generalize!(o)
end
# @api public
def self.infer_set(o)
singleton.infer_set(o)
end
# @api public
def self.debug_string(t)
singleton.debug_string(t)
end
# @api public
def self.enumerable(t)
singleton.enumerable(t)
end
# @api private
def self.singleton()
@tc_instance ||= new
end
# @api public
#
def initialize
@@assignable_visitor ||= Puppet::Pops::Visitor.new(nil,"assignable",1,1)
@@infer_visitor ||= Puppet::Pops::Visitor.new(nil,"infer",0,0)
@@infer_set_visitor ||= Puppet::Pops::Visitor.new(nil,"infer_set",0,0)
@@instance_of_visitor ||= Puppet::Pops::Visitor.new(nil,"instance_of",1,1)
@@string_visitor ||= Puppet::Pops::Visitor.new(nil,"string",0,0)
@@inspect_visitor ||= Puppet::Pops::Visitor.new(nil,"debug_string",0,0)
@@enumerable_visitor ||= Puppet::Pops::Visitor.new(nil,"enumerable",0,0)
@@extract_visitor ||= Puppet::Pops::Visitor.new(nil,"extract",0,0)
@@generalize_visitor ||= Puppet::Pops::Visitor.new(nil,"generalize",0,0)
@@callable_visitor ||= Puppet::Pops::Visitor.new(nil,"callable",1,1)
da = Types::PArrayType.new()
da.element_type = Types::PDataType.new()
@data_array = da
h = Types::PHashType.new()
h.element_type = Types::PDataType.new()
h.key_type = Types::PScalarType.new()
@data_hash = h
@data_t = Types::PDataType.new()
@scalar_t = Types::PScalarType.new()
@numeric_t = Types::PNumericType.new()
- @t = Types::PObjectType.new()
+ @t = Types::PAnyType.new()
# Data accepts a Tuple that has 0-infinity Data compatible entries (e.g. a Tuple equivalent to Array).
data_tuple = Types::PTupleType.new()
data_tuple.addTypes(Types::PDataType.new())
data_tuple.size_type = Types::PIntegerType.new()
data_tuple.size_type.from = 0
data_tuple.size_type.to = nil # infinity
@data_tuple_t = data_tuple
# Variant type compatible with Data
data_variant = Types::PVariantType.new()
data_variant.addTypes(@data_hash.copy)
data_variant.addTypes(@data_array.copy)
data_variant.addTypes(Types::PScalarType.new)
data_variant.addTypes(Types::PNilType.new)
data_variant.addTypes(@data_tuple_t.copy)
@data_variant_t = data_variant
collection_default_size = Types::PIntegerType.new()
collection_default_size.from = 0
collection_default_size.to = nil # infinity
@collection_default_size_t = collection_default_size
non_empty_string = Types::PStringType.new
non_empty_string.size_type = Types::PIntegerType.new()
non_empty_string.size_type.from = 1
non_empty_string.size_type.to = nil # infinity
@non_empty_string_t = non_empty_string
@nil_t = Types::PNilType.new
end
# Convenience method to get a data type for comparisons
# @api private the returned value may not be contained in another element
#
def data
@data_t
end
# Convenience method to get a variant compatible with the Data type.
# @api private the returned value may not be contained in another element
#
def data_variant
@data_variant_t
end
def self.data_variant
singleton.data_variant
end
# Answers the question 'is it possible to inject an instance of the given class'
# A class is injectable if it has a special *assisted inject* class method called `inject` taking
# an injector and a scope as argument, or if it has a zero args `initialize` method.
#
- # @param klazz [Class, PRubyType] the class/type to check if it is injectable
+ # @param klazz [Class, PRuntimeType] the class/type to check if it is injectable
# @return [Class, nil] the injectable Class, or nil if not injectable
# @api public
#
def injectable_class(klazz)
# Handle case when we get a PType instead of a class
- if klazz.is_a?(Types::PRubyType)
+ if klazz.is_a?(Types::PRuntimeType)
klazz = Puppet::Pops::Types::ClassLoader.provide(klazz)
end
- # data types can not be injected (check again, it is not safe to assume that given RubyType klazz arg was ok)
- return false unless type(klazz).is_a?(Types::PRubyType)
+ # data types can not be injected (check again, it is not safe to assume that the given Runtime klazz arg was ok)
+ return false unless type(klazz).is_a?(Types::PRuntimeType)
if (klazz.respond_to?(:inject) && klazz.method(:inject).arity() == -4) || klazz.instance_method(:initialize).arity() == 0
klazz
else
nil
end
end
# Answers 'can an instance of type t2 be assigned to a variable of type t'.
# Does not accept nil/undef unless the type accepts it.
#
# @api public
#
def assignable?(t, t2)
if t.is_a?(Class)
t = type(t)
end
if t2.is_a?(Class)
t2 = type(t2)
end
+ # Unit can be assigned to anything
+ return true if t2.class == Types::PUnitType
@@assignable_visitor.visit_this_1(self, t, t2)
end
# Returns an enumerable if the t represents something that can be iterated
def enumerable(t)
@@enumerable_visitor.visit_this_0(self, t)
end
# Answers, does the given callable accept the arguments given in args (an array or a tuple)
#
def callable?(callable, args)
- return false if !callable.is_a?(Types::PCallableType)
+ return false if !self.class.is_kind_of_callable?(callable)
# Note that polymorphism is for the args type, the callable is always a callable
@@callable_visitor.visit_this_1(self, args, callable)
end
# Answers if the two given types describe the same type
def equals(left, right)
- return false unless left.is_a?(Types::PAbstractType) && right.is_a?(Types::PAbstractType)
+ return false unless left.is_a?(Types::PAnyType) && right.is_a?(Types::PAnyType)
# Types compare per class only - an extra test must be made if they are mutually assignable
# to find all types that represent the same type of instance
#
- left == right || (assignable?(right, left) && assignable?(left, right))
+ left == right || (assignable?(right, left) && assignable?(left, right))
end
# Answers 'what is the Puppet Type corresponding to the given Ruby class'
# @param c [Class] the class for which a puppet type is wanted
# @api public
#
def type(c)
raise ArgumentError, "Argument must be a Class" unless c.is_a? Class
# Can't use a visitor here since we don't have an instance of the class
case
when c <= Integer
type = Types::PIntegerType.new()
when c == Float
type = Types::PFloatType.new()
when c == Numeric
type = Types::PNumericType.new()
when c == String
type = Types::PStringType.new()
when c == Regexp
type = Types::PRegexpType.new()
when c == NilClass
type = Types::PNilType.new()
when c == FalseClass, c == TrueClass
type = Types::PBooleanType.new()
when c == Class
type = Types::PType.new()
when c == Array
# Assume array of data values
type = Types::PArrayType.new()
type.element_type = Types::PDataType.new()
when c == Hash
# Assume hash with scalar keys and data values
type = Types::PHashType.new()
type.key_type = Types::PScalarType.new()
type.element_type = Types::PDataType.new()
else
- type = Types::PRubyType.new()
- type.ruby_class = c.name
+ type = Types::PRuntimeType.new(:runtime => :ruby, :runtime_type_name => c.name)
end
type
end
# Generalizes value specific types. The given type is mutated and returned.
# @api public
def generalize!(o)
@@generalize_visitor.visit_this_0(self, o)
o.eAllContents.each { |x| @@generalize_visitor.visit_this_0(self, x) }
o
end
def generalize_Object(o)
# do nothing, there is nothing to change for most types
end
def generalize_PStringType(o)
o.values = []
o.size_type = nil
[]
end
def generalize_PCollectionType(o)
# erase the size constraint from Array and Hash (if one exists, it is transformed to -Infinity .. +Infinity, which is
# not desirable).
o.size_type = nil
end
def generalize_PFloatType(o)
o.to = nil
o.from = nil
end
def generalize_PIntegerType(o)
o.to = nil
o.from = nil
end
# Answers 'what is the single common Puppet Type describing o', or if o is an Array or Hash, what is the
# single common type of the elements (or keys and elements for a Hash).
# @api public
#
def infer(o)
@@infer_visitor.visit_this_0(self, o)
end
def infer_generic(o)
result = generalize!(infer(o))
result
end
# Answers 'what is the set of Puppet Types of o'
# @api public
#
def infer_set(o)
@@infer_set_visitor.visit_this_0(self, o)
end
def instance_of(t, o)
@@instance_of_visitor.visit_this_1(self, t, o)
end
def instance_of_Object(t, o)
- # Undef is Undef and Object, but nothing else when checking instance?
- return false if (o.nil? || o == :undef) && t.class != Types::PObjectType
+ # Undef is Undef and Any, but nothing else when checking instance?
+ return false if (o.nil?) && t.class != Types::PAnyType
assignable?(t, infer(o))
end
+ # Anything is an instance of Unit
+ # @api private
+ def instance_of_PUnitType(t, o)
+ true
+ end
+
def instance_of_PArrayType(t, o)
return false unless o.is_a?(Array)
return false unless o.all? {|element| instance_of(t.element_type, element) }
size_t = t.size_type || @collection_default_size_t
size_t2 = size_as_type(o)
- assignable?(size_t, size_t2)
+ # optimize by calling directly
+ assignable_PIntegerType(size_t, size_t2)
end
def instance_of_PTupleType(t, o)
return false unless o.is_a?(Array)
# compute the tuple's min/max size, and check if that size matches
size_t = t.size_type || Puppet::Pops::Types::TypeFactory.range(*t.size_range)
# compute the array's size as type
size_t2 = size_as_type(o)
return false unless assignable?(size_t, size_t2)
o.each_with_index do |element, index|
return false unless instance_of(t.types[index] || t.types[-1], element)
end
true
end
def instance_of_PStructType(t, o)
return false unless o.is_a?(Hash)
h = t.hashed_elements
# all keys must be present and have a value (even if nil/undef)
(o.keys - h.keys).empty? && h.all? { |k,v| instance_of(v, o[k]) }
end
def instance_of_PHashType(t, o)
return false unless o.is_a?(Hash)
key_t = t.key_type
element_t = t.element_type
return false unless o.keys.all? {|key| instance_of(key_t, key) } && o.values.all? {|value| instance_of(element_t, value) }
size_t = t.size_type || @collection_default_size_t
size_t2 = size_as_type(o)
- assignable?(size_t, size_t2)
+ # optimize by calling directly
+ assignable_PIntegerType(size_t, size_t2)
end
def instance_of_PDataType(t, o)
instance_of(@data_variant_t, o)
end
def instance_of_PNilType(t, o)
- return o.nil? || o == :undef
+ return o.nil?
end
def instance_of_POptionalType(t, o)
- return true if (o.nil? || o == :undef)
+ return true if (o.nil?)
instance_of(t.optional_type, o)
end
def instance_of_PVariantType(t, o)
# instance of variant if o is instance? of any of variant's types
t.types.any? { |option_t| instance_of(option_t, o) }
end
# Answers 'is o an instance of type t'
# @api public
#
def self.instance?(t, o)
singleton.instance_of(t,o)
end
# Answers 'is o an instance of type t'
# @api public
#
def instance?(t, o)
instance_of(t,o)
end
# Answers if t is a puppet type
# @api public
#
def is_ptype?(t)
- return t.is_a?(Types::PAbstractType)
+ return t.is_a?(Types::PAnyType)
end
# Answers if t represents the puppet type PNilType
# @api public
#
def is_pnil?(t)
return t.nil? || t.is_a?(Types::PNilType)
end
# Answers, 'What is the common type of t1 and t2?'
#
# TODO: The current implementation should be optimized for performance
#
# @api public
#
def common_type(t1, t2)
raise ArgumentError, 'two types expected' unless (is_ptype?(t1) || is_pnil?(t1)) && (is_ptype?(t2) || is_pnil?(t2))
+ # TODO: This is not right since Scalar U Undef is Any
# if either is nil, the common type is the other
if is_pnil?(t1)
return t2
elsif is_pnil?(t2)
return t1
end
+ # If either side is Unit, it is the other type
+ if t1.is_a?(Types::PUnitType)
+ return t2
+ elsif t2.is_a?(Types::PUnitType)
+ return t1
+ end
+
# Simple case, one is assignable to the other
if assignable?(t1, t2)
return t1
elsif assignable?(t2, t1)
return t2
end
# when both are arrays, return an array with common element type
if t1.is_a?(Types::PArrayType) && t2.is_a?(Types::PArrayType)
type = Types::PArrayType.new()
type.element_type = common_type(t1.element_type, t2.element_type)
return type
end
# when both are hashes, return a hash with common key- and element type
if t1.is_a?(Types::PHashType) && t2.is_a?(Types::PHashType)
type = Types::PHashType.new()
type.key_type = common_type(t1.key_type, t2.key_type)
type.element_type = common_type(t1.element_type, t2.element_type)
return type
end
# when both are host-classes, reduce to PHostClass[] (since one was not assignable to the other)
if t1.is_a?(Types::PHostClassType) && t2.is_a?(Types::PHostClassType)
return Types::PHostClassType.new()
end
# when both are resources, reduce to Resource[T] or Resource[] (since one was not assignable to the other)
if t1.is_a?(Types::PResourceType) && t2.is_a?(Types::PResourceType)
result = Types::PResourceType.new()
# only Resource[] unless the type name is the same
if t1.type_name == t2.type_name then result.type_name = t1.type_name end
# the cross assignability test above has already determined that they do not have the same type and title
return result
end
# Integers have range, expand the range to the common range
if t1.is_a?(Types::PIntegerType) && t2.is_a?(Types::PIntegerType)
t1range = from_to_ordered(t1.from, t1.to)
t2range = from_to_ordered(t2.from, t2.to)
t = Types::PIntegerType.new()
from = [t1range[0], t2range[0]].min
to = [t1range[1], t2range[1]].max
t.from = from unless from == TheInfinity
t.to = to unless to == TheInfinity
return t
end
# Floats have range, expand the range to the common range
if t1.is_a?(Types::PFloatType) && t2.is_a?(Types::PFloatType)
t1range = from_to_ordered(t1.from, t1.to)
t2range = from_to_ordered(t2.from, t2.to)
t = Types::PFloatType.new()
from = [t1range[0], t2range[0]].min
to = [t1range[1], t2range[1]].max
t.from = from unless from == TheInfinity
t.to = to unless to == TheInfinity
return t
end
if t1.is_a?(Types::PStringType) && t2.is_a?(Types::PStringType)
t = Types::PStringType.new()
t.values = t1.values | t2.values
return t
end
if t1.is_a?(Types::PPatternType) && t2.is_a?(Types::PPatternType)
t = Types::PPatternType.new()
# must make copies since patterns are contained types, not data-types
- t.patterns = (t1.patterns | t2.patterns).map {|p| p.copy }
+ t.patterns = (t1.patterns | t2.patterns).map(&:copy)
return t
end
if t1.is_a?(Types::PEnumType) && t2.is_a?(Types::PEnumType)
# The common type is one that complies with either set
t = Types::PEnumType.new
t.values = t1.values | t2.values
return t
end
if t1.is_a?(Types::PVariantType) && t2.is_a?(Types::PVariantType)
# The common type is one that complies with either set
t = Types::PVariantType.new
- t.types = (t1.types | t2.types).map {|opt_t| opt_t.copy }
+ t.types = (t1.types | t2.types).map(&:copy)
return t
end
if t1.is_a?(Types::PRegexpType) && t2.is_a?(Types::PRegexpType)
# if they were identical, the general rule would return a parameterized regexp
# since they were not, the result is a generic regexp type
return Types::PPatternType.new()
end
if t1.is_a?(Types::PCallableType) && t2.is_a?(Types::PCallableType)
# They do not have the same signature, and one is not assignable to the other,
# what remains is the most general form of Callable
return Types::PCallableType.new()
end
# Common abstract types, from most specific to most general
if common_numeric?(t1, t2)
return Types::PNumericType.new()
end
if common_scalar?(t1, t2)
return Types::PScalarType.new()
end
if common_data?(t1,t2)
return Types::PDataType.new()
end
# Meta types Type[Integer] + Type[String] => Type[Data]
if t1.is_a?(Types::PType) && t2.is_a?(Types::PType)
type = Types::PType.new()
type.type = common_type(t1.type, t2.type)
return type
end
- if t1.is_a?(Types::PRubyType) && t2.is_a?(Types::PRubyType)
- if t1.ruby_class == t2.ruby_class
+ # If both are Runtime types
+ if t1.is_a?(Types::PRuntimeType) && t2.is_a?(Types::PRuntimeType)
+ if t1.runtime == t2.runtime && t1.runtime_type_name == t2.runtime_type_name
return t1
end
# finding the common super class requires that names are resolved to class
+ # NOTE: This only supports runtime type of :ruby
c1 = Types::ClassLoader.provide_from_type(t1)
c2 = Types::ClassLoader.provide_from_type(t2)
if c1 && c2
c2_superclasses = superclasses(c2)
superclasses(c1).each do|c1_super|
c2_superclasses.each do |c2_super|
if c1_super == c2_super
- result = Types::PRubyType.new()
- result.ruby_class = c1_super.name
- return result
+ return Types::PRuntimeType.new(:runtime => :ruby, :runtime_type_name => c1_super.name)
end
end
end
end
end
- # If both are RubyObjects
- if common_pobject?(t1, t2)
- return Types::PObjectType.new()
+ # They had better both be Any type, or the wrong thing was asked and nil is returned
+ if t1.is_a?(Types::PAnyType) && t2.is_a?(Types::PAnyType)
+ return Types::PAnyType.new()
end
end
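# Illustrative sketch (not part of the patch): a few reductions performed by common_type,
# written in Puppet type notation.
#
#   common_type(Integer[1, 3], Integer[5, 8])  # => Integer[1, 8] (ranges are merged)
#   common_type(Integer, Float)                # => Numeric
#   common_type(String, Integer)               # => Scalar
#   common_type(Unit, String)                  # => String (Unit defers to the other type)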
# Produces the superclasses of the given class, including the class
def superclasses(c)
result = [c]
while s = c.superclass
result << s
c = s
end
result
end
# Produces a string representing the type
# @api public
#
def string(t)
@@string_visitor.visit_this_0(self, t)
end
# Produces a debug string representing the type (possibly with more information than the regular string format)
# @api public
#
def debug_string(t)
@@inspect_visitor.visit_this_0(self, t)
end
# Reduces an enumerable of types to a single common type.
# @api public
#
def reduce_type(enumerable)
enumerable.reduce(nil) {|memo, t| common_type(memo, t) }
end
# Reduce an enumerable of objects to a single common type
# @api public
#
def infer_and_reduce_type(enumerable)
reduce_type(enumerable.collect() {|o| infer(o) })
end
# The type of all classes is PType
# @api private
#
def infer_Class(o)
Types::PType.new()
end
# @api private
def infer_Closure(o)
o.type()
end
# @api private
def infer_Function(o)
o.class.dispatcher.to_type
end
# @api private
def infer_Object(o)
- type = Types::PRubyType.new()
- type.ruby_class = o.class.name
- type
+ Types::PRuntimeType.new(:runtime => :ruby, :runtime_type_name => o.class.name)
end
# The type of all types is PType
# @api private
#
- def infer_PAbstractType(o)
+ def infer_PAnyType(o)
type = Types::PType.new()
type.type = o.copy
type
end
# The type of all types is PType
# This is the metatype short circuit.
# @api private
#
def infer_PType(o)
type = Types::PType.new()
type.type = o.copy
type
end
# @api private
def infer_String(o)
t = Types::PStringType.new()
t.addValues(o)
t.size_type = size_as_type(o)
t
end
# @api private
def infer_Float(o)
t = Types::PFloatType.new()
t.from = o
t.to = o
t
end
# @api private
def infer_Integer(o)
t = Types::PIntegerType.new()
t.from = o
t.to = o
t
end
# @api private
def infer_Regexp(o)
t = Types::PRegexpType.new()
t.pattern = o.source
t
end
# @api private
def infer_NilClass(o)
Types::PNilType.new()
end
- # Inference of :undef as PNilType, all other are Ruby[Symbol]
+ # Inference of :default as PDefaultType, and all other are Ruby[Symbol]
# @api private
def infer_Symbol(o)
- o == :undef ? infer_NilClass(o) : infer_Object(o)
+ case o
+ when :default
+ Types::PDefaultType.new()
+
+ else
+ infer_Object(o)
+ end
end
# @api private
def infer_TrueClass(o)
Types::PBooleanType.new()
end
# @api private
def infer_FalseClass(o)
Types::PBooleanType.new()
end
# @api private
# A Puppet::Parser::Resource, or Puppet::Resource
#
def infer_Resource(o)
t = Types::PResourceType.new()
- t.type_name = o.type.to_s
+ t.type_name = o.type.to_s.downcase
# Only Puppet::Resource can have a title that is a symbol :undef, a PResource cannot.
# A mapping must be made to empty string. A nil value will result in an error later
title = o.title
- t.title = (title == :undef ? '' : title)
- t
+ t.title = (:undef == title ? '' : title)
+ type = Types::PType.new()
+ type.type = t
+ type
end
# @api private
def infer_Array(o)
type = Types::PArrayType.new()
type.element_type =
if o.empty?
Types::PNilType.new()
else
infer_and_reduce_type(o)
end
type.size_type = size_as_type(o)
type
end
# @api private
def infer_Hash(o)
type = Types::PHashType.new()
if o.empty?
ktype = Types::PNilType.new()
etype = Types::PNilType.new()
else
ktype = infer_and_reduce_type(o.keys())
etype = infer_and_reduce_type(o.values())
end
type.key_type = ktype
type.element_type = etype
type.size_type = size_as_type(o)
type
end
def size_as_type(collection)
size = collection.size
t = Types::PIntegerType.new()
t.from = size
t.to = size
t
end
# Common case for everything that intrinsically only has a single type
def infer_set_Object(o)
infer(o)
end
def infer_set_Array(o)
if o.empty?
type = Types::PArrayType.new()
type.element_type = Types::PNilType.new()
type.size_type = size_as_type(o)
else
type = Types::PTupleType.new()
type.types = o.map() {|x| infer_set(x) }
end
type
end
def infer_set_Hash(o)
type = Types::PHashType.new()
if o.empty?
ktype = Types::PNilType.new()
- etype = Types::PNilType.new()
+ vtype = Types::PNilType.new()
else
ktype = Types::PVariantType.new()
ktype.types = o.keys.map() {|k| infer_set(k) }
etype = Types::PVariantType.new()
etype.types = o.values.map() {|e| infer_set(e) }
end
type.key_type = unwrap_single_variant(ktype)
- type.element_type = unwrap_single_variant(vtype)
+ type.element_type = unwrap_single_variant(etype)
type.size_type = size_as_type(o)
type
end
def unwrap_single_variant(possible_variant)
if possible_variant.is_a?(Types::PVariantType) && possible_variant.types.size == 1
possible_variant.types[0]
else
possible_variant
end
end
+
# False in general type calculator
# @api private
def assignable_Object(t, t2)
false
end
# @api private
- def assignable_PObjectType(t, t2)
- t2.is_a?(Types::PObjectType)
+ def assignable_PAnyType(t, t2)
+ t2.is_a?(Types::PAnyType)
end
# @api private
def assignable_PNilType(t, t2)
# Only undef/nil is assignable to nil type
t2.is_a?(Types::PNilType)
end
+ # Anything is assignable to a Unit type
+ # @api private
+ def assignable_PUnitType(t, t2)
+ true
+ end
+
+ # @api private
+ def assignable_PDefaultType(t, t2)
+ # Only default is assignable to default type
+ t2.is_a?(Types::PDefaultType)
+ end
+
# @api private
def assignable_PScalarType(t, t2)
t2.is_a?(Types::PScalarType)
end
# @api private
def assignable_PNumericType(t, t2)
t2.is_a?(Types::PNumericType)
end
# @api private
def assignable_PIntegerType(t, t2)
return false unless t2.is_a?(Types::PIntegerType)
trange = from_to_ordered(t.from, t.to)
t2range = from_to_ordered(t2.from, t2.to)
# If t2 min and max are within the range of t
trange[0] <= t2range[0] && trange[1] >= t2range[1]
end
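# Illustrative sketch (not part of the patch): Integer[2, 5] is assignable from Integer[3, 4]
# because its range contains the other's (2 <= 3 and 5 >= 4), but not from Integer[0, 4]:
#
#   trange, t2range = [2, 5], [3, 4]
#   trange[0] <= t2range[0] && trange[1] >= t2range[1]   # => true
#   trange, t2range = [2, 5], [0, 4]
#   trange[0] <= t2range[0] && trange[1] >= t2range[1]   # => false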
# Transform int range to a size constraint
# if range == nil the constraint is 1,1
# if range.from == nil min size = 1
# if range.to == nil max size == Infinity
#
def size_range(range)
return [1,1] if range.nil?
from = range.from
to = range.to
x = from.nil? ? 1 : from
y = to.nil? ? TheInfinity : to
if x < y
[x, y]
else
[y, x]
end
end
# @api private
def from_to_ordered(from, to)
x = (from.nil? || from == :default) ? -TheInfinity : from
y = (to.nil? || to == :default) ? TheInfinity : to
if x < y
[x, y]
else
[y, x]
end
end
# @api private
def assignable_PVariantType(t, t2)
# Data is a specific variant
t2 = @data_variant_t if t2.is_a?(Types::PDataType)
if t2.is_a?(Types::PVariantType)
# A variant is assignable if all of its options are assignable to one of this type's options
return true if t == t2
t2.types.all? do |other|
- # if the other is a Variant, all if its options, but be assignable to one of this type's options
+ # if the other is a Variant, all of its options must be assignable to one of this type's options
other = other.is_a?(Types::PDataType) ? @data_variant_t : other
if other.is_a?(Types::PVariantType)
assignable?(t, other)
else
t.types.any? {|option_t| assignable?(option_t, other) }
end
end
else
# A variant is assignable if t2 is assignable to any of its types
t.types.any? { |option_t| assignable?(option_t, t2) }
end
end
# Catch all not callable combinations
def callable_Object(o, callable_t)
false
end
def callable_PTupleType(args_tuple, callable_t)
if args_tuple.size_type
raise ArgumentError, "Callable tuple may not have a size constraint when used as args"
end
# Assume no block was given - i.e. it is nil, and its type is PNilType
block_t = @nil_t
- if args_tuple.types.last.is_a?(Types::PCallableType)
+ if self.class.is_kind_of_callable?(args_tuple.types.last)
# a split is needed to make it possible to use required, optional, and varargs semantics
# of the tuple type.
#
args_tuple = args_tuple.copy
# to drop the callable, it must be removed explicitly since this is an rgen array
args_tuple.removeTypes(block_t = args_tuple.types.last())
else
# no block was given, if it is required, the below will fail
end
# unless argument types match parameter types
return false unless assignable?(callable_t.param_types, args_tuple)
- # unless given block (or no block) matches expected block (or no block)
+ # can the given block be *called* with a signature requirement specified by callable_t?
assignable?(callable_t.block_type || @nil_t, block_t)
end
+ # @api private
+ def self.is_kind_of_callable?(t, optional = true)
+ case t
+ when Types::PCallableType
+ true
+ when Types::POptionalType
+ optional && is_kind_of_callable?(t.optional_type, optional)
+ when Types::PVariantType
+ t.types.all? {|t2| is_kind_of_callable?(t2, optional) }
+ else
+ false
+ end
+ end
+
+
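# Illustrative sketch (not part of the patch), written in Puppet type notation: the recursion
# above accepts a Callable possibly wrapped in Optional (a block that may be omitted), and a
# Variant where every option is itself callable, but nothing else.
#
#   is_kind_of_callable?(Callable)                                # => true
#   is_kind_of_callable?(Optional[Callable])                      # => true
#   is_kind_of_callable?(Variant[Callable, Optional[Callable]])   # => true
#   is_kind_of_callable?(Optional[String])                        # => false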
def callable_PArrayType(args_array, callable_t)
return false unless assignable?(callable_t.param_types, args_array)
- # does not support calling with a block, but have to check that callable expects it
+ # does not support calling with a block, but we have to check that the callable is ok with a missing block
assignable?(callable_t.block_type || @nil_t, @nil_t)
end
+ def callable_PNilType(nil_t, callable_t)
+ # if callable_t is Optional (or indeed PNilType), this means that 'missing callable' is accepted
+ assignable?(callable_t, nil_t)
+ end
+
+ def callable_PCallableType(given_callable_t, required_callable_t)
+ # If the required callable is equal to or more specific than the given, the given is callable
+ assignable?(required_callable_t, given_callable_t)
+ end
+
def max(a,b)
a >=b ? a : b
end
def min(a,b)
a <= b ? a : b
end
def assignable_PTupleType(t, t2)
return true if t == t2 || t.types.empty? && (t2.is_a?(Types::PArrayType))
size_t = t.size_type || Puppet::Pops::Types::TypeFactory.range(*t.size_range)
if t2.is_a?(Types::PTupleType)
size_t2 = t2.size_type || Puppet::Pops::Types::TypeFactory.range(*t2.size_range)
# not assignable if the number of types in t2 is outside number of types in t1
- return false unless assignable?(size_t, size_t2)
- max(t.types.size, t2.types.size).times do |index|
- return false unless assignable?((t.types[index] || t.types[-1]), (t2.types[index] || t2.types[-1]))
+ if assignable?(size_t, size_t2)
+ t2.types.size.times do |index|
+ return false unless assignable?((t.types[index] || t.types[-1]), t2.types[index])
+ end
+ return true
+ else
+ return false
end
- true
-
elsif t2.is_a?(Types::PArrayType)
t2_entry = t2.element_type
# Array of anything can not be assigned (unless tuple is tuple of anything) - this case
# was handled at the top of this method.
#
return false if t2_entry.nil?
size_t = t.size_type || Puppet::Pops::Types::TypeFactory.range(*t.size_range)
size_t2 = t2.size_type || @collection_default_size_t
return false unless assignable?(size_t, size_t2)
min(t.types.size, size_t2.range()[1]).times do |index|
return false unless assignable?((t.types[index] || t.types[-1]), t2_entry)
end
true
else
false
end
end
# Produces the tuple entry at the given index, given a tuple type and its from/to constraints on the last
# type.
# Produces nil if the index is out of bounds
# from must be less than to, and from may not be less than 0
#
# @api private
#
def tuple_entry_at(tuple_t, from, to, index)
regular = (tuple_t.types.size - 1)
if index < regular
tuple_t.types[index]
elsif index < regular + to
# in the varargs part
tuple_t.types[-1]
else
nil
end
end
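# Illustrative sketch (not part of the patch): with tuple_t.types == [String, Integer] and the
# last type allowed to repeat up to to == 3 times, the regular part has size 1 and:
#
#   tuple_entry_at(tuple_t, 1, 3, 0)   # => String  (regular part)
#   tuple_entry_at(tuple_t, 1, 3, 2)   # => Integer (varargs part)
#   tuple_entry_at(tuple_t, 1, 3, 5)   # => nil     (out of bounds)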
# @api private
#
def assignable_PStructType(t, t2)
return true if t == t2 || t.elements.empty? && (t2.is_a?(Types::PHashType))
h = t.hashed_elements
if t2.is_a?(Types::PStructType)
h2 = t2.hashed_elements
h.size == h2.size && h.all? {|k, v| assignable?(v, h2[k]) }
elsif t2.is_a?(Types::PHashType)
size_t2 = t2.size_type || @collection_default_size_t
size_t = Types::PIntegerType.new
size_t.from = size_t.to = h.size
# compatible size
# hash key type must be string of min 1 size
# hash value t must be assignable to each key
element_type = t2.element_type
- assignable?(size_t, size_t2) &&
+ assignable_PIntegerType(size_t, size_t2) &&
assignable?(@non_empty_string_t, t2.key_type) &&
h.all? {|k,v| assignable?(v, element_type) }
else
false
end
end
# @api private
def assignable_POptionalType(t, t2)
return true if t2.is_a?(Types::PNilType)
if t2.is_a?(Types::POptionalType)
assignable?(t.optional_type, t2.optional_type)
else
assignable?(t.optional_type, t2)
end
end
# @api private
def assignable_PEnumType(t, t2)
return true if t == t2 || (t.values.empty? && (t2.is_a?(Types::PStringType) || t2.is_a?(Types::PEnumType)))
- if t2.is_a?(Types::PStringType)
+ case t2
+ when Types::PStringType
# if the set of strings are all found in the set of enums
t2.values.all? { |s| t.values.any? { |e| e == s }}
+ when Types::PVariantType
+ t2.types.all? {|variant_t| assignable_PEnumType(t, variant_t) }
+ when Types::PEnumType
+ t2.values.all? { |s| t.values.any? {|e| e == s }}
else
false
end
end
# @api private
def assignable_PStringType(t, t2)
if t.values.empty?
# A general string is assignable by any other string or pattern-restricted string.
# If the string has a size constraint, a pattern does not match it since there is no reasonable
# way to compute the min/max length a pattern will match. For an enum, it is possible to test
# that each enumerator value is within range.
size_t = t.size_type || @collection_default_size_t
case t2
when Types::PStringType
# true if size compliant
size_t2 = t2.size_type || @collection_default_size_t
- assignable?(size_t, size_t2)
+ assignable_PIntegerType(size_t, size_t2)
when Types::PPatternType
# true if size constraint is at least 0 to +Infinity (which is the same as the default)
- assignable?(size_t, @collection_default_size_t)
+ assignable_PIntegerType(size_t, @collection_default_size_t)
when Types::PEnumType
if t2.values
# true if all enum values are within range
min, max = t2.values.map(&:size).minmax
trange = from_to_ordered(size_t.from, size_t.to)
t2range = [min, max]
# If t2 min and max are within the range of t
trange[0] <= t2range[0] && trange[1] >= t2range[1]
else
# no string can match this enum anyway since it does not accept anything
false
end
else
# no other type matches string
false
end
elsif t2.is_a?(Types::PStringType)
# A specific string acts as a set of strings - must have exactly the same strings
# In this case, size does not matter since the definition is very precise anyway
Set.new(t.values) == Set.new(t2.values)
else
# All others are false, since no other type describes the same set of specific strings
false
end
end
# @api private
def assignable_PPatternType(t, t2)
return true if t == t2
- return false unless t2.is_a?(Types::PStringType) || t2.is_a?(Types::PEnumType)
+ case t2
+ when Types::PStringType, Types::PEnumType
+ values = t2.values
+ when Types::PVariantType
+ return t2.types.all? {|variant_t| assignable_PPatternType(t, variant_t) }
+ else
+ return false
+ end
if t2.values.empty?
# Strings / Enums (unknown which ones) cannot all match a pattern, but if there is no pattern it is ok
# (There should really always be a pattern, but better safe than sorry).
return t.patterns.empty? ? true : false
end
# all strings in String/Enum type must match one of the patterns in Pattern type
regexps = t.patterns.map {|p| p.regexp }
t2.values.all? { |v| regexps.any? {|re| re.match(v) } }
end
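# Illustrative examples (hypothetical values):
#   assignable?(Pattern[/^a/], Enum['abc', 'all'])  # => true  (every value matches a pattern)
#   assignable?(Pattern[/^a/], Enum['abc', 'b'])    # => false ('b' matches no pattern)
#   assignable?(Pattern[/^a/], String)              # => false (an unconstrained String has no values to test)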
# @api private
def assignable_PFloatType(t, t2)
return false unless t2.is_a?(Types::PFloatType)
trange = from_to_ordered(t.from, t.to)
t2range = from_to_ordered(t2.from, t2.to)
# If t2 min and max are within the range of t
trange[0] <= t2range[0] && trange[1] >= t2range[1]
end
# @api private
def assignable_PBooleanType(t, t2)
t2.is_a?(Types::PBooleanType)
end
# @api private
def assignable_PRegexpType(t, t2)
t2.is_a?(Types::PRegexpType) && (t.pattern.nil? || t.pattern == t2.pattern)
end
# @api private
def assignable_PCallableType(t, t2)
return false unless t2.is_a?(Types::PCallableType)
# nil param_types means that any other Callable is assignable
return true if t.param_types.nil?
- return false unless assignable?(t.param_types, t2.param_types)
+
+ # NOTE: these tests are made in reverse as it is calling the callable that is constrained
+ # (its lower bound), not its upper bound
+ return false unless assignable?(t2.param_types, t.param_types)
# names are ignored, they are just information
# Blocks must be compatible
this_block_t = t.block_type || @nil_t
that_block_t = t2.block_type || @nil_t
- assignable?(this_block_t, that_block_t)
+ assignable?(that_block_t, this_block_t)
+
end
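# Illustrative sketch of the reversed (contravariant) parameter check (Puppet type notation):
#   assignable?(Callable[Scalar], Callable[Numeric])  # => false (accepts only Numeric, but any Scalar may be passed)
#   assignable?(Callable[Numeric], Callable[Scalar])  # => true  (accepting the wider Scalar is fine)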
# @api private
def assignable_PCollectionType(t, t2)
size_t = t.size_type || @collection_default_size_t
case t2
when Types::PCollectionType
size_t2 = t2.size_type || @collection_default_size_t
- assignable?(size_t, size_t2)
+ assignable_PIntegerType(size_t, size_t2)
when Types::PTupleType
# compute the tuple's min/max size, and check if that size matches
from, to = size_range(t2.size_type)
t2s = Types::PIntegerType.new()
t2s.from = t2.types.size - 1 + from
t2s.to = t2.types.size - 1 + to
- assignable?(size_t, t2s)
+ assignable_PIntegerType(size_t, t2s)
when Types::PStructType
from = to = t2.elements.size
t2s = Types::PIntegerType.new()
t2s.from = from
t2s.to = to
- assignable?(size_t, t2s)
+ assignable_PIntegerType(size_t, t2s)
else
false
end
end
# @api private
def assignable_PType(t, t2)
return false unless t2.is_a?(Types::PType)
return true if t.type.nil? # wide enough to handle all types
return false if t2.type.nil? # wider than t
assignable?(t.type, t2.type)
end
# Array is assignable if t2 is an Array and t2's element type is assignable, or if t2 is a Tuple
# where all entry types are assignable to the Array's element type (and the sizes are compatible)
# @api private
def assignable_PArrayType(t, t2)
if t2.is_a?(Types::PArrayType)
return false unless assignable?(t.element_type, t2.element_type)
assignable_PCollectionType(t, t2)
elsif t2.is_a?(Types::PTupleType)
return false unless t2.types.all? {|t2_element| assignable?(t.element_type, t2_element) }
t2_regular = t2.types[0..-2]
t2_ranged = t2.types[-1]
t2_from, t2_to = size_range(t2.size_type)
t2_required = t2_regular.size + t2_from
t_entry = t.element_type
# Tuple of anything can not be assigned (unless array is tuple of anything) - this case
# was handled at the top of this method.
#
return false if t_entry.nil?
# array type may be size constrained
size_t = t.size_type || @collection_default_size_t
min, max = size_t.range
# Tuple with fewer min entries can not be assigned
return false if t2_required < min
# Tuple with more optionally available entries can not be assigned
return false if t2_regular.size + t2_to > max
# each tuple type must be assignable to the element type
t2_required.times do |index|
t2_entry = tuple_entry_at(t2, t2_from, t2_to, index)
return false unless assignable?(t_entry, t2_entry)
end
# ... and so must the last, possibly optional (ranged) type
return assignable?(t_entry, t2_ranged)
else
false
end
end
# Hash is assignable if t2 is a Hash and t2's key and element types are assignable
# @api private
def assignable_PHashType(t, t2)
case t2
when Types::PHashType
return false unless assignable?(t.key_type, t2.key_type) && assignable?(t.element_type, t2.element_type)
assignable_PCollectionType(t, t2)
when Types::PStructType
# hash must accept String as key type
# hash must accept all value types
# hash must accept the size of the struct
size_t = t.size_type || @collection_default_size_t
min, max = size_t.range
struct_size = t2.elements.size
element_type = t.element_type
( struct_size >= min && struct_size <= max &&
assignable?(t.key_type, @non_empty_string_t) &&
t2.hashed_elements.all? {|k,v| assignable?(element_type, v) })
else
false
end
end
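# Illustrative examples (hypothetical values):
#   assignable?(Hash[String, Integer], Struct[{'port' => Integer}])   # => true
#   assignable?(Hash[String, Integer], Struct[{'port' => String}])    # => false (member type not accepted)
#   assignable?(Hash[Integer, Integer], Struct[{'port' => Integer}])  # => false (key type must accept String)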
# @api private
def assignable_PCatalogEntryType(t1, t2)
t2.is_a?(Types::PCatalogEntryType)
end
# @api private
def assignable_PHostClassType(t1, t2)
return false unless t2.is_a?(Types::PHostClassType)
# Class = Class[name], Class[name] != Class
return true if t1.class_name.nil?
# Class[name] = Class[name]
return t1.class_name == t2.class_name
end
# @api private
def assignable_PResourceType(t1, t2)
return false unless t2.is_a?(Types::PResourceType)
return true if t1.type_name.nil?
return false if t1.type_name != t2.type_name
return true if t1.title.nil?
return t1.title == t2.title
end
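# Illustrative examples (hypothetical values):
#   assignable?(Resource, File['/tmp/x'])        # => true  (nil type_name matches any resource)
#   assignable?(File, File['/tmp/x'])            # => true  (nil title matches any title)
#   assignable?(File['/tmp/x'], File['/tmp/y'])  # => false (titles differ)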
# Data is assignable by other Data and by Array[Data] and Hash[Scalar, Data]
# @api private
def assignable_PDataType(t, t2)
t2.is_a?(Types::PDataType) || assignable?(@data_variant_t, t2)
end
- # Assignable if t2's ruby class is same or subclass of t1's ruby class
+ # Assignable if t2 has the same runtime and its runtime type name resolves to
+ # a class that is the same as, or a subclass of, the class t1's runtime type name resolves to
# @api private
- def assignable_PRubyType(t1, t2)
- return false unless t2.is_a?(Types::PRubyType)
- return true if t1.ruby_class.nil? # t1 is wider
- return false if t2.ruby_class.nil? # t1 not nil, so t2 can not be wider
- c1 = class_from_string(t1.ruby_class)
- c2 = class_from_string(t2.ruby_class)
+ def assignable_PRuntimeType(t1, t2)
+ return false unless t2.is_a?(Types::PRuntimeType)
+ return false unless t1.runtime == t2.runtime
+ return true if t1.runtime_type_name.nil? # t1 is wider
+ return false if t2.runtime_type_name.nil? # t1 not nil, so t2 can not be wider
+
+ # NOTE: This only supports Ruby, must change when/if the set of runtimes is expanded
+ c1 = class_from_string(t1.runtime_type_name)
+ c2 = class_from_string(t2.runtime_type_name)
return false unless c1.is_a?(Class) && c2.is_a?(Class)
!!(c2 <= c1)
end
# @api private
def debug_string_Object(t)
string(t)
end
# @api private
def string_PType(t)
if t.type.nil?
"Type"
else
"Type[#{string(t.type)}]"
end
end
# @api private
def string_NilClass(t) ; '?' ; end
# @api private
def string_String(t) ; t ; end
# @api private
- def string_PObjectType(t) ; "Object" ; end
+ def string_Symbol(t) ; t.to_s ; end
+
+ def string_PAnyType(t) ; "Any" ; end
# @api private
def string_PNilType(t) ; 'Undef' ; end
+ # @api private
+ def string_PDefaultType(t) ; 'Default' ; end
+
# @api private
def string_PBooleanType(t) ; "Boolean" ; end
# @api private
- def string_PScalarType(t) ; "Scalar" ; end
+ def string_PScalarType(t) ; "Scalar" ; end
# @api private
def string_PDataType(t) ; "Data" ; end
# @api private
def string_PNumericType(t) ; "Numeric" ; end
# @api private
def string_PIntegerType(t)
range = range_array_part(t)
unless range.empty?
"Integer[#{range.join(', ')}]"
else
"Integer"
end
end
# Produces a string from an Integer range type that is used inside other type strings
# @api private
def range_array_part(t)
return [] if t.nil? || (t.from.nil? && t.to.nil?)
[t.from.nil? ? 'default' : t.from , t.to.nil? ? 'default' : t.to ]
end
# @api private
def string_PFloatType(t)
range = range_array_part(t)
unless range.empty?
"Float[#{range.join(', ')}]"
else
"Float"
end
end
# @api private
def string_PRegexpType(t)
t.pattern.nil? ? "Regexp" : "Regexp[#{t.regexp.inspect}]"
end
# @api private
def string_PStringType(t)
# skip values in regular output - see debug_string
range = range_array_part(t.size_type)
unless range.empty?
"String[#{range.join(', ')}]"
else
"String"
end
end
# @api private
def debug_string_PStringType(t)
range = range_array_part(t.size_type)
range_part = range.empty? ? '' : '[' << range.join(' ,') << '], '
"String[" << range_part << (t.values.map {|s| "'#{s}'" }).join(', ') << ']'
end
# @api private
def string_PEnumType(t)
return "Enum" if t.values.empty?
"Enum[" << t.values.map {|s| "'#{s}'" }.join(', ') << ']'
end
# @api private
def string_PVariantType(t)
return "Variant" if t.types.empty?
"Variant[" << t.types.map {|t2| string(t2) }.join(', ') << ']'
end
# @api private
def string_PTupleType(t)
range = range_array_part(t.size_type)
return "Tuple" if t.types.empty?
s = "Tuple[" << t.types.map {|t2| string(t2) }.join(', ')
unless range.empty?
s << ", " << range.join(', ')
end
s << "]"
s
end
# @api private
def string_PCallableType(t)
# generic
return "Callable" if t.param_types.nil?
if t.param_types.types.empty?
range = [0, 0]
else
range = range_array_part(t.param_types.size_type)
end
- types = t.param_types.types.map {|t2| string(t2) }
+ # translate to string, and skip Unit types
+ types = t.param_types.types.map {|t2| string(t2) unless t2.class == Types::PUnitType }.compact
params_part= types.join(', ')
s = "Callable[" << types.join(', ')
unless range.empty?
(s << ', ') unless types.empty?
s << range.join(', ')
end
# Add the block type last (after min, max) if present
#
unless t.block_type.nil?
(s << ', ') unless types.empty? && range.empty?
s << string(t.block_type)
end
s << "]"
s
end
# @api private
def string_PStructType(t)
return "Struct" if t.elements.empty?
"Struct[{" << t.elements.map {|element| string(element) }.join(', ') << "}]"
end
def string_PStructElement(t)
"'#{t.name}'=>#{string(t.type)}"
end
# @api private
def string_PPatternType(t)
return "Pattern" if t.patterns.empty?
"Pattern[" << t.patterns.map {|s| "#{s.regexp.inspect}" }.join(', ') << ']'
end
# @api private
def string_PCollectionType(t)
range = range_array_part(t.size_type)
unless range.empty?
"Collection[#{range.join(', ')}]"
else
"Collection"
end
end
# @api private
- def string_PRubyType(t) ; "Ruby[#{string(t.ruby_class)}]" ; end
+ def string_PUnitType(t)
+ "Unit"
+ end
+
+ # @api private
+ def string_PRuntimeType(t) ; "Runtime[#{string(t.runtime)}, #{string(t.runtime_type_name)}]" ; end
# @api private
def string_PArrayType(t)
parts = [string(t.element_type)] + range_array_part(t.size_type)
"Array[#{parts.join(', ')}]"
end
# @api private
def string_PHashType(t)
parts = [string(t.key_type), string(t.element_type)] + range_array_part(t.size_type)
"Hash[#{parts.join(', ')}]"
end
# @api private
def string_PCatalogEntryType(t)
"CatalogEntry"
end
# @api private
def string_PHostClassType(t)
if t.class_name
"Class[#{t.class_name}]"
else
"Class"
end
end
# @api private
def string_PResourceType(t)
if t.type_name
if t.title
- "#{t.type_name.capitalize}['#{t.title}']"
+ "#{capitalize_segments(t.type_name)}['#{t.title}']"
else
- "#{t.type_name.capitalize}"
+ capitalize_segments(t.type_name)
end
else
"Resource"
end
end
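# Illustrative output (hypothetical values): a resource type with type_name 'foo::bar' and
# title 'x' is rendered as "Foo::Bar['x']" - every name segment is capitalized, not just the first.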
def string_POptionalType(t)
if t.optional_type.nil?
"Optional"
else
"Optional[#{string(t.optional_type)}]"
end
end
# Catches all non enumerable types
# @api private
def enumerable_Object(o)
nil
end
# @api private
def enumerable_PIntegerType(t)
# Not enumerable if representing an infinite range
return nil if t.size == TheInfinity
t
end
def self.copy_as_tuple(t)
case t
when Types::PTupleType
t.copy
when Types::PArrayType
# transform array to tuple
result = Types::PTupleType.new
result.addTypes(t.element_type.copy)
result.size_type = t.size_type.nil? ? nil : t.size_type.copy
result
else
raise ArgumentError, "Internal Error: Only Array and Tuple can be given to copy_as_tuple"
end
end
private
+ NAME_SEGMENT_SEPARATOR = '::'.freeze
+
+ def capitalize_segments(s)
+ s.split(NAME_SEGMENT_SEPARATOR).map(&:capitalize).join(NAME_SEGMENT_SEPARATOR)
+ end
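+ # Illustrative example (hypothetical input):
+ #   capitalize_segments('foo::bar_baz')  # => 'Foo::Bar_baz'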
+
def class_from_string(str)
begin
- str.split('::').inject(Object) do |memo, name_segment|
+ str.split(NAME_SEGMENT_SEPARATOR).inject(Object) do |memo, name_segment|
memo.const_get(name_segment)
end
rescue NameError
return nil
end
end
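# Illustrative examples (hypothetical input):
#   class_from_string('Puppet::Pops::Types::TypeCalculator')  # => the TypeCalculator class
#   class_from_string('No::Such::Name')                       # => nil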
def common_data?(t1, t2)
assignable?(@data_t, t1) && assignable?(@data_t, t2)
end
def common_scalar?(t1, t2)
assignable?(@scalar_t, t1) && assignable?(@scalar_t, t2)
end
def common_numeric?(t1, t2)
assignable?(@numeric_t, t1) && assignable?(@numeric_t, t2)
end
- def common_pobject?(t1, t2)
- assignable?(@t, t1) && assignable?(@t, t2)
- end
end
diff --git a/lib/puppet/pops/types/type_factory.rb b/lib/puppet/pops/types/type_factory.rb
index e01494f8f..45979dd96 100644
--- a/lib/puppet/pops/types/type_factory.rb
+++ b/lib/puppet/pops/types/type_factory.rb
@@ -1,406 +1,432 @@
# Helper module that makes creation of type objects simpler.
# @api public
#
module Puppet::Pops::Types::TypeFactory
@type_calculator = Puppet::Pops::Types::TypeCalculator.new()
Types = Puppet::Pops::Types
# Produces the Integer type
# @api public
#
def self.integer()
Types::PIntegerType.new()
end
# Produces an Integer range type
# @api public
#
def self.range(from, to)
t = Types::PIntegerType.new()
- t.from = from unless (from == :default || from == 'default')
- t.to = to unless (to == :default || to == 'default')
+ # optimize eq with symbol (faster when it is left)
+ t.from = from unless (:default == from || from == 'default')
+ t.to = to unless (:default == to || to == 'default')
t
end
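# Illustrative usage (hypothetical bounds):
#   Puppet::Pops::Types::TypeFactory.range(1, 10)        # Integer[1, 10]
#   Puppet::Pops::Types::TypeFactory.range(0, :default)  # Integer[0, default] (no upper bound)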
# Produces a Float range type
# @api public
#
def self.float_range(from, to)
t = Types::PFloatType.new()
- t.from = Float(from) unless from == :default || from.nil?
- t.to = Float(to) unless to == :default || to.nil?
+ # optimize eq with symbol (faster when it is left)
+ t.from = Float(from) unless :default == from || from.nil?
+ t.to = Float(to) unless :default == to || to.nil?
t
end
# Produces the Float type
# @api public
#
def self.float()
Types::PFloatType.new()
end
# Produces the Numeric type
# @api public
#
def self.numeric()
Types::PNumericType.new()
end
# Produces a string representation of the type
# @api public
#
def self.label(t)
@type_calculator.string(t)
end
# Produces the String type, optionally with specific string values
# @api public
#
def self.string(*values)
t = Types::PStringType.new()
values.each {|v| t.addValues(v) }
t
end
# Produces the Optional type, i.e. a shorthand for Variant[T, Undef]
def self.optional(optional_type = nil)
t = Types::POptionalType.new
t.optional_type = type_of(optional_type)
t
end
- # Convenience method to produce an Optional[Object] type
- def self.optional_object()
- optional(object())
- end
-
# Produces the Enum type, optionally with specific string values
# @api public
#
def self.enum(*values)
t = Types::PEnumType.new()
values.each {|v| t.addValues(v) }
t
end
# Produces the Variant type, optionally with the "one of" types
# @api public
#
def self.variant(*types)
t = Types::PVariantType.new()
types.each {|v| t.addTypes(type_of(v)) }
t
end
- # Produces the Struct type, either a non parameterized instance representing all structs (i.e. all hashes)
- # or a hash with a given set of keys of String type (names), bound to a value of a given type. Type may be
- # a Ruby Class, a Puppet Type, or an instance from which the type is inferred.
+ # Produces the Struct type, either a non parameterized instance representing
+ # all structs (i.e. all hashes) or a hash with a given set of keys of String
+ # type (names), bound to a value of a given type. Type may be a Ruby Class, a
+ # Puppet Type, or an instance from which the type is inferred.
#
def self.struct(name_type_hash = {})
t = Types::PStructType.new
name_type_hash.map do |name, type|
elem = Types::PStructElement.new
if name.is_a?(String) && name.empty?
raise ArgumentError, "An empty String can not be used where a String[1, default] is expected"
end
elem.name = name
elem.type = type_of(type)
elem
end.each {|elem| t.addElements(elem) }
t
end
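# Illustrative usage (hypothetical keys and types):
#   TF = Puppet::Pops::Types::TypeFactory
#   TF.struct('mode' => TF.enum('ro', 'rw'), 'path' => TF.string)  # => Struct[{'mode'=>Enum['ro', 'rw'], 'path'=>String}]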
def self.tuple(*types)
t = Types::PTupleType.new
types.each {|elem| t.addTypes(type_of(elem)) }
t
end
# Produces the Boolean type
# @api public
#
def self.boolean()
Types::PBooleanType.new()
end
- # Produces the Object type
+ # Produces the Any type
# @api public
#
- def self.object()
- Types::PObjectType.new()
+ def self.any()
+ Types::PAnyType.new()
end
# Produces the Regexp type
- # @param pattern [Regexp, String, nil] (nil) The regular expression object or a regexp source string, or nil for bare type
+ # @param pattern [Regexp, String, nil] (nil) The regular expression object or
+ # a regexp source string, or nil for bare type
# @api public
#
def self.regexp(pattern = nil)
t = Types::PRegexpType.new()
if pattern
t.pattern = pattern.is_a?(Regexp) ? pattern.inspect[1..-2] : pattern
end
t.regexp() unless pattern.nil? # compile pattern to catch errors
t
end
def self.pattern(*regular_expressions)
t = Types::PPatternType.new()
regular_expressions.each do |re|
case re
when String
re_T = Types::PRegexpType.new()
re_T.pattern = re
re_T.regexp() # compile it to catch errors
t.addPatterns(re_T)
when Regexp
re_T = Types::PRegexpType.new()
# Regexp.to_s includes options the user did not enter and does not escape the source
# to work either as a string or as a // regexp. The inspect method does a better
# job, but includes the //
re_T.pattern = re.inspect[1..-2]
t.addPatterns(re_T)
when Types::PRegexpType
t.addPatterns(re.copy)
when Types::PPatternType
re.patterns.each do |p|
t.addPatterns(p.copy)
end
else
raise ArgumentError, "Only String, Regexp, Pattern-Type, and Regexp-Type are allowed: got '#{re.class}"
end
end
t
end
# Produces the Scalar type
# @api public
#
def self.scalar()
Types::PScalarType.new()
end
# Produces a CallableType matching all callables
# @api public
#
def self.all_callables()
return Puppet::Pops::Types::PCallableType.new
end
# Produces a Callable type with one signature without support for a block
# Use #with_block, or #with_optional_block to add a block to the callable
# If no parameters are given, the Callable will describe a signature
# that does not accept parameters. To create a Callable that matches all callables
# use {#all_callables}.
#
# The params argument is a list of types, which may be
- # optionally followed by min, max count, and a Callable which is taken as the block_type.
+ # optionally followed by min, max count, and a Callable which is taken as the
+ # block_type.
# If neither min nor max is specified, the parameters must match exactly.
# A min < params.size means that the difference is optional.
# A max > params.size means that the last type repeats.
# if max is :default, the max value is unbound (infinity).
- #
+ #
# Params are given as a sequence of arguments to {#type_of}.
#
def self.callable(*params)
- case params.last
- when Types::PCallableType
+ if Puppet::Pops::Types::TypeCalculator.is_kind_of_callable?(params.last)
last_callable = true
- when Types::POptionalType
- last_callable = true if params.last.optional_type.is_a?(Types::PCallableType)
end
block_t = last_callable ? params.pop : nil
# compute a size_type for the signature based on the two last parameters
if is_range_parameter?(params[-2]) && is_range_parameter?(params[-1])
size_type = range(params[-2], params[-1])
params = params[0, params.size - 2]
elsif is_range_parameter?(params[-1])
size_type = range(params[-1], :default)
params = params[0, params.size - 1]
end
types = params.map {|p| type_of(p) }
+ # If the specification requires types, and none were given, a Unit type is used
+ if types.empty? && !size_type.nil? && size_type.range[1] > 0
+ types << Types::PUnitType.new
+ end
# create a signature
callable_t = Types::PCallableType.new()
tuple_t = tuple(*types)
tuple_t.size_type = size_type unless size_type.nil?
callable_t.param_types = tuple_t
callable_t.block_type = block_t
callable_t
end
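# Illustrative usage (hypothetical signatures, types given as Ruby classes):
#   TF = Puppet::Pops::Types::TypeFactory
#   TF.callable(String, Integer)  # Callable[String, Integer]
#   TF.callable(String, 1, 2)     # Callable[String, 1, 2] (one required, one optional String)
#   TF.callable(0, 0)             # Callable[0, 0] (accepts no arguments)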
def self.with_block(callable, *block_params)
callable.block_type = callable(*block_params)
callable
end
def self.with_optional_block(callable, *block_params)
callable.block_type = optional(callable(*block_params))
callable
end
# Produces the abstract type Collection
# @api public
#
def self.collection()
Types::PCollectionType.new()
end
# Produces the Data type
# @api public
#
def self.data()
Types::PDataType.new()
end
# Creates an instance of the Undef type
# @api public
def self.undef()
Types::PNilType.new()
end
+ # Creates an instance of the Default type
+ # @api public
+ def self.default()
+ Types::PDefaultType.new()
+ end
+
# Produces an instance of the abstract type PCatalogEntryType
def self.catalog_entry()
Types::PCatalogEntryType.new()
end
- # Produces a PResourceType with a String type_name
- # A PResourceType with a nil or empty name is compatible with any other PResourceType.
- # A PResourceType with a given name is only compatible with a PResourceType with the same name.
- # (There is no resource-type subtyping in Puppet (yet)).
+ # Produces a PResourceType with a String type_name. A PResourceType with a nil
+ # or empty name is compatible with any other PResourceType. A PResourceType
+ # with a given name is only compatible with a PResourceType with the same
+ # name. (There is no resource-type subtyping in Puppet (yet)).
#
def self.resource(type_name = nil, title = nil)
type = Types::PResourceType.new()
type_name = type_name.type_name if type_name.is_a?(Types::PResourceType)
- type.type_name = type_name.downcase unless type_name.nil?
+ type_name = type_name.downcase unless type_name.nil?
+ type.type_name = type_name
+ unless type_name.nil? || type_name =~ Puppet::Pops::Patterns::CLASSREF
+ raise ArgumentError, "Illegal type name '#{type.type_name}'"
+ end
+ if type_name.nil? && !title.nil?
+ raise ArgumentError, "The type name cannot be nil, if title is given"
+ end
type.title = title
type
end
- # Produces PHostClassType with a string class_name.
- # A PHostClassType with nil or empty name is compatible with any other PHostClassType.
- # A PHostClassType with a given name is only compatible with a PHostClassType with the same name.
+ # Produces PHostClassType with a string class_name. A PHostClassType with
+ # nil or empty name is compatible with any other PHostClassType. A
+ # PHostClassType with a given name is only compatible with a PHostClassType
+ # with the same name.
#
def self.host_class(class_name = nil)
type = Types::PHostClassType.new()
unless class_name.nil?
type.class_name = class_name.sub(/^::/, '')
end
type
end
- # Produces a type for Array[o] where o is either a type, or an instance for which a type is inferred.
+ # Produces a type for Array[o] where o is either a type, or an instance for
+ # which a type is inferred.
# @api public
#
def self.array_of(o)
type = Types::PArrayType.new()
type.element_type = type_of(o)
type
end
- # Produces a type for Hash[Scalar, o] where o is either a type, or an instance for which a type is inferred.
+ # Produces a type for Hash[Scalar, o] where o is either a type, or an
+ # instance for which a type is inferred.
# @api public
#
def self.hash_of(value, key = scalar())
type = Types::PHashType.new()
type.key_type = type_of(key)
type.element_type = type_of(value)
type
end
# Produces a type for Array[Data]
# @api public
#
def self.array_of_data()
type = Types::PArrayType.new()
type.element_type = data()
type
end
# Produces a type for Hash[Scalar, Data]
# @api public
#
def self.hash_of_data()
type = Types::PHashType.new()
type.key_type = scalar()
type.element_type = data()
type
end
# Produces a type for Type[T]
# @api public
#
def self.type_type(inst_type = nil)
type = Types::PType.new()
type.type = inst_type
type
end
- # Produce a type corresponding to the class of given unless given is a String, Class or a PAbstractType.
- # When a String is given this is taken as a classname.
+ # Produce a type corresponding to the class of given unless given is a
+ # String, Class or a PAnyType. When a String is given this is taken as
+ # a classname.
#
def self.type_of(o)
if o.is_a?(Class)
@type_calculator.type(o)
- elsif o.is_a?(Types::PAbstractType)
+ elsif o.is_a?(Types::PAnyType)
o
elsif o.is_a?(String)
- type = Types::PRubyType.new()
- type.ruby_class = o
- type
+ Types::PRuntimeType.new(:runtime => :ruby, :runtime_type_name => o)
else
@type_calculator.infer_generic(o)
end
end
- # Produces a type for a class or infers a type for something that is not a class
+ # Produces a type for a class or infers a type for something that is not a
+ # class
# @note
# To get the type for the class' class use `TypeCalculator.infer(c)`
#
# @overload ruby(o)
- # @param o [Class] produces the type corresponding to the class (e.g. Integer becomes PIntegerType)
+ # @param o [Class] produces the type corresponding to the class (e.g.
+ # Integer becomes PIntegerType)
# @overload ruby(o)
- # @param o [Object] produces the type corresponding to the instance class (e.g. 3 becomes PIntegerType)
+ # @param o [Object] produces the type corresponding to the instance class
+ # (e.g. 3 becomes PIntegerType)
#
# @api public
#
def self.ruby(o)
if o.is_a?(Class)
@type_calculator.type(o)
else
- type = Types::PRubyType.new()
- type.ruby_class = o.class.name
- type
+ Types::PRuntimeType.new(:runtime => :ruby, :runtime_type_name => o.class.name)
end
end
- # Generic creator of a RubyType - allows creating the Ruby type with nil name, or String name.
- # Also see ruby(o) which performs inference, or mapps a Ruby Class to its name.
+ # Generic creator of a RuntimeType["ruby"] - allows creating the Ruby type
+ # with nil name, or String name. Also see ruby(o) which performs inference,
+ # or maps a Ruby Class to its name.
#
def self.ruby_type(class_name = nil)
- type = Types::PRubyType.new()
- type.ruby_class = class_name
- type
+ Types::PRuntimeType.new(:runtime => :ruby, :runtime_type_name => class_name)
+ end
+
+ # Generic creator of a RuntimeType - allows creating the type with nil or
+ # String runtime_type_name. Also see ruby_type(o) and ruby(o).
+ #
+ def self.runtime(runtime=nil, runtime_type_name = nil)
+ runtime = runtime.to_sym if runtime.is_a?(String)
+ Types::PRuntimeType.new(:runtime => runtime, :runtime_type_name => runtime_type_name)
end
- # Sets the accepted size range of a collection if something other than the default 0 to Infinity
- # is wanted. The semantics for from/to are the same as for #range
+ # Sets the accepted size range of a collection if something other than the
+ # default 0 to Infinity is wanted. The semantics for from/to are the same as
+ # for #range
#
def self.constrain_size(collection_t, from, to)
collection_t.size_type = range(from, to)
collection_t
end
- # Returns true if the given type t is of valid range parameter type (integer or literal default).
+ # Returns true if the given type t is a valid range parameter type (integer
+ # or literal default).
def self.is_range_parameter?(t)
- t.is_a?(Integer) || t == 'default' || t == :default
+ t.is_a?(Integer) || t == 'default' || :default == t
end
end
diff --git a/lib/puppet/pops/types/type_parser.rb b/lib/puppet/pops/types/type_parser.rb
index 782598f2d..1b65e147e 100644
--- a/lib/puppet/pops/types/type_parser.rb
+++ b/lib/puppet/pops/types/type_parser.rb
@@ -1,469 +1,475 @@
# This class provides parsing of Type Specification from a string into the Type
# Model that is produced by the Puppet::Pops::Types::TypeFactory.
#
# The Type Specifications that are parsed are the same as the stringified forms
# of types produced by the {Puppet::Pops::Types::TypeCalculator TypeCalculator}.
#
# @api public
class Puppet::Pops::Types::TypeParser
# @api private
TYPES = Puppet::Pops::Types::TypeFactory
# @api public
def initialize
@parser = Puppet::Pops::Parser::Parser.new()
@type_transformer = Puppet::Pops::Visitor.new(nil, "interpret", 0, 0)
end
# Produces a *puppet type* based on the given string.
#
# @example
# parser.parse('Integer')
# parser.parse('Array[String]')
# parser.parse('Hash[Integer, Array[String]]')
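# Further illustrative examples (hypothetical type expressions):
# parser.parse("Struct[{'mode' => Enum[ro, rw]}]")
# parser.parse("Runtime[ruby, 'MyModule::MyClass']")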
#
# @param string [String] a string with the type expressed in stringified form as produced by the
# {Puppet::Pops::Types::TypeCalculator#string TypeCalculator#string} method.
- # @return [Puppet::Pops::Types::PObjectType] a specialization of the PObjectType representing the type.
+ # @return [Puppet::Pops::Types::PAnyType] a specialization of the PAnyType representing the type.
#
# @api public
#
def parse(string)
# TODO: This state (@string) can be removed since the parse result of newer future parser
# contains a Locator in its SourcePosAdapter and the Locator keeps the string.
# This way, there is no difference between a parsed "string" and something that has been parsed
# earlier and fed to 'interpret'
#
@string = string
model = @parser.parse_string(@string)
if model
interpret(model.current)
else
raise_invalid_type_specification_error
end
end
# @api private
def interpret(ast)
result = @type_transformer.visit_this_0(self, ast)
result = result.body if result.is_a?(Puppet::Pops::Model::Program)
- raise_invalid_type_specification_error unless result.is_a?(Puppet::Pops::Types::PAbstractType)
+ raise_invalid_type_specification_error unless result.is_a?(Puppet::Pops::Types::PAnyType)
result
end
# @api private
def interpret_any(ast)
@type_transformer.visit_this_0(self, ast)
end
# @api private
def interpret_Object(o)
raise_invalid_type_specification_error
end
# @api private
def interpret_Program(o)
interpret(o.body)
end
# @api private
def interpret_QualifiedName(o)
o.value
end
# @api private
def interpret_LiteralString(o)
o.value
end
+ def interpret_LiteralRegularExpression(o)
+ o.value
+ end
+
# @api private
def interpret_String(o)
o
end
# @api private
def interpret_LiteralDefault(o)
:default
end
# @api private
def interpret_LiteralInteger(o)
o.value
end
# @api private
def interpret_LiteralFloat(o)
o.value
end
# @api private
def interpret_LiteralHash(o)
result = {}
o.entries.each do |entry|
result[@type_transformer.visit_this_0(self, entry.key)] = @type_transformer.visit_this_0(self, entry.value)
end
result
end
# @api private
def interpret_QualifiedReference(name_ast)
case name_ast.value
when "integer"
TYPES.integer
when "float"
TYPES.float
when "numeric"
TYPES.numeric
when "string"
TYPES.string
when "enum"
TYPES.enum
when "boolean"
TYPES.boolean
when "pattern"
TYPES.pattern
when "regexp"
TYPES.regexp
when "data"
TYPES.data
when "array"
TYPES.array_of_data
when "hash"
TYPES.hash_of_data
when "class"
TYPES.host_class()
when "resource"
TYPES.resource()
when "collection"
TYPES.collection()
when "scalar"
TYPES.scalar()
when "catalogentry"
TYPES.catalog_entry()
when "undef"
- # Should not be interpreted as Resource type
TYPES.undef()
- when "object"
- TYPES.object()
+ when "default"
+ TYPES.default()
+
+ when "any"
+ TYPES.any()
when "variant"
TYPES.variant()
when "optional"
TYPES.optional()
- when "ruby"
- TYPES.ruby_type()
+ when "runtime"
+ TYPES.runtime()
when "type"
TYPES.type_type()
when "tuple"
TYPES.tuple()
when "struct"
TYPES.struct()
when "callable"
# A generic callable as opposed to one that does not accept arguments
TYPES.all_callables()
else
TYPES.resource(name_ast.value)
end
end
# @api private
def interpret_AccessExpression(parameterized_ast)
parameters = parameterized_ast.keys.collect { |param| interpret_any(param) }
unless parameterized_ast.left_expr.is_a?(Puppet::Pops::Model::QualifiedReference)
raise_invalid_type_specification_error
end
case parameterized_ast.left_expr.value
when "array"
case parameters.size
when 1
when 2
size_type =
if parameters[1].is_a?(Puppet::Pops::Types::PIntegerType)
parameters[1].copy
else
assert_range_parameter(parameters[1])
TYPES.range(parameters[1], :default)
end
when 3
assert_range_parameter(parameters[1])
assert_range_parameter(parameters[2])
size_type = TYPES.range(parameters[1], parameters[2])
else
raise_invalid_parameters_error("Array", "1 to 3", parameters.size)
end
assert_type(parameters[0])
t = TYPES.array_of(parameters[0])
t.size_type = size_type if size_type
t
when "hash"
result = case parameters.size
when 1
assert_type(parameters[0])
TYPES.hash_of(parameters[0])
when 2
assert_type(parameters[0])
assert_type(parameters[1])
TYPES.hash_of(parameters[1], parameters[0])
when 3
size_type =
if parameters[2].is_a?(Puppet::Pops::Types::PIntegerType)
parameters[2].copy
else
assert_range_parameter(parameters[2])
TYPES.range(parameters[2], :default)
end
assert_type(parameters[0])
assert_type(parameters[1])
TYPES.hash_of(parameters[1], parameters[0])
when 4
assert_range_parameter(parameters[2])
assert_range_parameter(parameters[3])
size_type = TYPES.range(parameters[2], parameters[3])
assert_type(parameters[0])
assert_type(parameters[1])
TYPES.hash_of(parameters[1], parameters[0])
else
raise_invalid_parameters_error("Hash", "1 to 4", parameters.size)
end
result.size_type = size_type if size_type
result
when "collection"
size_type = case parameters.size
when 1
if parameters[0].is_a?(Puppet::Pops::Types::PIntegerType)
parameters[0].copy
else
assert_range_parameter(parameters[0])
TYPES.range(parameters[0], :default)
end
when 2
assert_range_parameter(parameters[0])
assert_range_parameter(parameters[1])
TYPES.range(parameters[0], parameters[1])
else
raise_invalid_parameters_error("Collection", "1 to 2", parameters.size)
end
result = TYPES.collection
result.size_type = size_type
result
when "class"
if parameters.size != 1
raise_invalid_parameters_error("Class", 1, parameters.size)
end
TYPES.host_class(parameters[0])
when "resource"
if parameters.size == 1
TYPES.resource(parameters[0])
elsif parameters.size != 2
raise_invalid_parameters_error("Resource", "1 or 2", parameters.size)
else
TYPES.resource(parameters[0], parameters[1])
end
when "regexp"
# 1 parameter being a string, or regular expression
raise_invalid_parameters_error("Regexp", "1", parameters.size) unless parameters.size == 1
TYPES.regexp(parameters[0])
when "enum"
# 1..m parameters being strings
- raise_invalid_parameters_error("Enum", "1 or more", parameters.size) unless parameters.size > 1
+ raise_invalid_parameters_error("Enum", "1 or more", parameters.size) unless parameters.size >= 1
TYPES.enum(*parameters)
when "pattern"
# 1..m parameters being strings or regular expressions
- raise_invalid_parameters_error("Pattern", "1 or more", parameters.size) unless parameters.size > 1
+ raise_invalid_parameters_error("Pattern", "1 or more", parameters.size) unless parameters.size >= 1
TYPES.pattern(*parameters)
when "variant"
# 1..m parameters, each being a type
- raise_invalid_parameters_error("Variant", "1 or more", parameters.size) unless parameters.size > 1
+ raise_invalid_parameters_error("Variant", "1 or more", parameters.size) unless parameters.size >= 1
TYPES.variant(*parameters)
when "tuple"
# 1..m parameters being types (the last two may optionally be integer or literal default)
- raise_invalid_parameters_error("Tuple", "1 or more", parameters.size) unless parameters.size > 1
+ raise_invalid_parameters_error("Tuple", "1 or more", parameters.size) unless parameters.size >= 1
length = parameters.size
if TYPES.is_range_parameter?(parameters[-2])
# min, max specification
min = parameters[-2]
min = (min == :default || min == 'default') ? 0 : min
assert_range_parameter(parameters[-1])
max = parameters[-1]
max = max == :default ? nil : max
parameters = parameters[0, length-2]
elsif TYPES.is_range_parameter?(parameters[-1])
min = parameters[-1]
min = (min == :default || min == 'default') ? 0 : min
max = nil
parameters = parameters[0, length-1]
end
t = TYPES.tuple(*parameters)
if min || max
TYPES.constrain_size(t, min, max)
end
t
when "callable"
# 1..m parameters being types (last three optionally integer or literal default, and a callable)
TYPES.callable(*parameters)
when "struct"
# 1 parameter that must be a hash of element names to types
raise_invalid_parameters_error("Struct", "1", parameters.size) unless parameters.size == 1
assert_struct_parameter(parameters[0])
TYPES.struct(parameters[0])
when "integer"
if parameters.size == 1
case parameters[0]
when Integer
TYPES.range(parameters[0], parameters[0])
when :default
TYPES.integer # unbound
end
elsif parameters.size != 2
raise_invalid_parameters_error("Integer", "1 or 2", parameters.size)
else
TYPES.range(parameters[0] == :default ? nil : parameters[0], parameters[1] == :default ? nil : parameters[1])
end
when "float"
if parameters.size == 1
case parameters[0]
when Integer, Float
TYPES.float_range(parameters[0], parameters[0])
when :default
TYPES.float # unbound
end
elsif parameters.size != 2
raise_invalid_parameters_error("Float", "1 or 2", parameters.size)
else
TYPES.float_range(parameters[0] == :default ? nil : parameters[0], parameters[1] == :default ? nil : parameters[1])
end
when "string"
size_type =
case parameters.size
when 1
if parameters[0].is_a?(Puppet::Pops::Types::PIntegerType)
parameters[0].copy
else
assert_range_parameter(parameters[0])
TYPES.range(parameters[0], :default)
end
when 2
assert_range_parameter(parameters[0])
assert_range_parameter(parameters[1])
TYPES.range(parameters[0], parameters[1])
else
raise_invalid_parameters_error("String", "1 to 2", parameters.size)
end
result = TYPES.string
result.size_type = size_type
result
when "optional"
if parameters.size != 1
raise_invalid_parameters_error("Optional", 1, parameters.size)
end
assert_type(parameters[0])
TYPES.optional(parameters[0])
- when "object", "data", "catalogentry", "boolean", "scalar", "undef", "numeric"
+ when "any", "data", "catalogentry", "boolean", "scalar", "undef", "numeric", "default"
raise_unparameterized_type_error(parameterized_ast.left_expr)
when "type"
if parameters.size != 1
raise_invalid_parameters_error("Type", 1, parameters.size)
end
assert_type(parameters[0])
TYPES.type_type(parameters[0])
- when "ruby"
- raise_invalid_parameters_error("Ruby", "1", parameters.size) unless parameters.size == 1
- TYPES.ruby_type(parameters[0])
+ when "runtime"
+ raise_invalid_parameters_error("Runtime", "2", parameters.size) unless parameters.size == 2
+ TYPES.runtime(*parameters)
else
# It is a resource such as File['/tmp/foo']
type_name = parameterized_ast.left_expr.value
if parameters.size != 1
raise_invalid_parameters_error(type_name.capitalize, 1, parameters.size)
end
TYPES.resource(type_name, parameters[0])
end
end
private
def assert_type(t)
- raise_invalid_type_specification_error unless t.is_a?(Puppet::Pops::Types::PObjectType)
+ raise_invalid_type_specification_error unless t.is_a?(Puppet::Pops::Types::PAnyType)
true
end
def assert_range_parameter(t)
raise_invalid_type_specification_error unless TYPES.is_range_parameter?(t)
end
def assert_struct_parameter(h)
raise_invalid_type_specification_error unless h.is_a?(Hash)
h.each do |k,v|
# TODO: Should have stricter name rule
raise_invalid_type_specification_error unless k.is_a?(String) && !k.empty?
assert_type(v)
end
end
def raise_invalid_type_specification_error
raise Puppet::ParseError,
"The expression <#{@string}> is not a valid type specification."
end
def raise_invalid_parameters_error(type, required, given)
raise Puppet::ParseError,
"Invalid number of type parameters specified: #{type} requires #{required}, #{given} provided"
end
def raise_unparameterized_type_error(ast)
raise Puppet::ParseError, "Not a parameterized type <#{original_text_of(ast)}>"
end
def raise_unknown_type_error(ast)
raise Puppet::ParseError, "Unknown type <#{original_text_of(ast)}>"
end
def original_text_of(ast)
position = Puppet::Pops::Adapters::SourcePosAdapter.adapt(ast)
position.extract_text()
end
end
diff --git a/lib/puppet/pops/types/types.rb b/lib/puppet/pops/types/types.rb
index f6e374eb0..331357f80 100644
--- a/lib/puppet/pops/types/types.rb
+++ b/lib/puppet/pops/types/types.rb
@@ -1,506 +1,397 @@
require 'rgen/metamodel_builder'
# The Types model is a model of Puppet Language types.
-#
-# The exact relationship between types is not visible in this model wrt. the PDataType which is an abstraction
-# of Scalar, Array[Data], and Hash[Scalar, Data] nested to any depth. This means it is not possible to
-# infer the type by simply looking at the inheritance hierarchy. The {Puppet::Pops::Types::TypeCalculator} should
-# be used to answer questions about types. The {Puppet::Pops::Types::TypeFactory} should be used to create an instance
-# of a type whenever one is needed.
-#
-# The implementation of the Types model contains methods that are required for the type objects to behave as
-# expected when comparing them and using them as keys in hashes. (No other logic is, or should be included directly in
-# the model's classes).
+# It consists of two parts: the meta-model expressed using RGen (in types_meta.rb) and this file, which
+# mixes in the implementation.
#
# @api public
#
-module Puppet::Pops::Types
- # Used as end in a range
- INFINITY = 1.0 / 0.0
- NEGATIVE_INFINITY = -INFINITY
-
- class PAbstractType < Puppet::Pops::Model::PopsObject
- abstract
- module ClassModule
- # Produce a deep copy of the type
- def copy
- Marshal.load(Marshal.dump(self))
- end
+module Puppet::Pops
+ require 'puppet/pops/types/types_meta'
- def hash
- self.class.hash
- end
+ # TODO: See PUP-2978 for possible performance optimization
- def ==(o)
- self.class == o.class
- end
+ # Mix in the implementation part of the Types module
+ module Types
+ # Used as end in a range
+ INFINITY = 1.0 / 0.0
+ NEGATIVE_INFINITY = -INFINITY
- alias eql? ==
-
- def to_s
- Puppet::Pops::Types::TypeCalculator.string(self)
- end
+ class TypeModelObject < RGen::MetamodelBuilder::MMBase
+ include Puppet::Pops::Visitable
+ include Puppet::Pops::Adaptable
+ include Puppet::Pops::Containment
end
- end
- # The type of types.
- # @api public
- class PType < PAbstractType
- contains_one_uni 'type', PAbstractType
- module ClassModule
- def hash
- [self.class, type].hash
- end
-
- def ==(o)
- self.class == o.class && type == o.type
- end
- end
- end
-
- # Base type for all types except {Puppet::Pops::Types::PType PType}, the type of types.
- # @api public
- class PObjectType < PAbstractType
+ class PAnyType < TypeModelObject
+ module ClassModule
+ # Produce a deep copy of the type
+ def copy
+ Marshal.load(Marshal.dump(self))
+ end
- module ClassModule
- end
+ def hash
+ self.class.hash
+ end
- end
+ def ==(o)
+ self.class == o.class
+ end
- # @api public
- class PNilType < PObjectType
- end
+ alias eql? ==
- # A flexible data type, being assignable to its subtypes as well as PArrayType and PHashType with element type assignable to PDataType.
- #
- # @api public
- class PDataType < PObjectType
- module ClassModule
- def ==(o)
- self.class == o.class ||
- o.class == PVariantType && o == Puppet::Pops::Types::TypeCalculator.data_variant()
+ def to_s
+ Puppet::Pops::Types::TypeCalculator.string(self)
+ end
end
end
- end
- # A flexible type describing an any? of other types
- # @api public
- class PVariantType < PObjectType
- contains_many_uni 'types', PAbstractType, :lowerBound => 1
-
- module ClassModule
+ class PType < PAnyType
+ module ClassModule
+ def hash
+ [self.class, type].hash
+ end
- def hash
- [self.class, Set.new(self.types)].hash
+ def ==(o)
+ self.class == o.class && type == o.type
+ end
end
+ end
- def ==(o)
- (self.class == o.class && Set.new(types) == Set.new(o.types)) ||
- (o.class == PDataType && self == Puppet::Pops::Types::TypeCalculator.data_variant())
+ class PDataType < PAnyType
+ module ClassModule
+ def ==(o)
+ self.class == o.class ||
+ o.class == PVariantType && o == Puppet::Pops::Types::TypeCalculator.data_variant()
+ end
end
end
- end
-
- # Type that is PDataType compatible, but is not a PCollectionType.
- # @api public
- class PScalarType < PObjectType
- end
- # A string type describing the set of strings having one of the given values
- #
- class PEnumType < PScalarType
- has_many_attr 'values', String, :lowerBound => 1
+ class PVariantType < PAnyType
+ module ClassModule
- module ClassModule
- def hash
- [self.class, Set.new(self.values)].hash
- end
+ def hash
+ [self.class, Set.new(self.types)].hash
+ end
- def ==(o)
- self.class == o.class && Set.new(values) == Set.new(o.values)
+ def ==(o)
+ (self.class == o.class && Set.new(types) == Set.new(o.types)) ||
+ (o.class == PDataType && self == Puppet::Pops::Types::TypeCalculator.data_variant())
+ end
end
end
- end
- # @api public
- class PNumericType < PScalarType
- end
+ class PEnumType < PScalarType
+ module ClassModule
+ def hash
+ [self.class, Set.new(self.values)].hash
+ end
- # @api public
- class PIntegerType < PNumericType
- has_attr 'from', Integer, :lowerBound => 0
- has_attr 'to', Integer, :lowerBound => 0
+ def ==(o)
+ self.class == o.class && Set.new(values) == Set.new(o.values)
+ end
+ end
+ end
- module ClassModule
- # The integer type is enumerable when it defines a range
- include Enumerable
+ class PIntegerType < PNumericType
+ module ClassModule
+ # The integer type is enumerable when it defines a range
+ include Enumerable
- # Returns Float.Infinity if one end of the range is unbound
- def size
- return INFINITY if from.nil? || to.nil?
- 1+(to-from).abs
- end
+ # Returns Float.Infinity if one end of the range is unbound
+ def size
+ return INFINITY if from.nil? || to.nil?
+ 1+(to-from).abs
+ end
- # Returns the range as an array ordered so the smaller number is always first.
- # The number may be Infinity or -Infinity.
- def range
- f = from || NEGATIVE_INFINITY
- t = to || INFINITY
- if f < t
- [f, t]
- else
- [t,f]
+ # Returns the range as an array ordered so the smaller number is always first.
+ # The number may be Infinity or -Infinity.
+ def range
+ f = from || NEGATIVE_INFINITY
+ t = to || INFINITY
+ if f < t
+ [f, t]
+ else
+ [t,f]
+ end
end
- end
- # Returns Enumerator if no block is given
- # Returns self if size is infinity (does not yield)
- def each
- return self.to_enum unless block_given?
- return nil if from.nil? || to.nil?
- if to < from
- from.downto(to) {|x| yield x }
- else
- from.upto(to) {|x| yield x }
+ # Returns Enumerator if no block is given
+ # Returns self if size is infinity (does not yield)
+ def each
+ return self.to_enum unless block_given?
+ return nil if from.nil? || to.nil?
+ if to < from
+ from.downto(to) {|x| yield x }
+ else
+ from.upto(to) {|x| yield x }
+ end
end
- end
- def hash
- [self.class, from, to].hash
- end
+ def hash
+ [self.class, from, to].hash
+ end
- def ==(o)
- self.class == o.class && from == o.from && to == o.to
+ def ==(o)
+ self.class == o.class && from == o.from && to == o.to
+ end
end
end
- end
- # @api public
- class PFloatType < PNumericType
- has_attr 'from', Float, :lowerBound => 0
- has_attr 'to', Float, :lowerBound => 0
-
- module ClassModule
- def hash
- [self.class, from, to].hash
- end
+ class PFloatType < PNumericType
+ module ClassModule
+ def hash
+ [self.class, from, to].hash
+ end
- def ==(o)
- self.class == o.class && from == o.from && to == o.to
+ def ==(o)
+ self.class == o.class && from == o.from && to == o.to
+ end
end
end
- end
-
- # @api public
- class PStringType < PScalarType
- has_many_attr 'values', String, :lowerBound => 0, :upperBound => -1, :unique => true
- contains_one_uni 'size_type', PIntegerType
- module ClassModule
+ class PStringType < PScalarType
+ module ClassModule
- def hash
- [self.class, self.size_type, Set.new(self.values)].hash
- end
+ def hash
+ [self.class, self.size_type, Set.new(self.values)].hash
+ end
- def ==(o)
- self.class == o.class && self.size_type == o.size_type && Set.new(values) == Set.new(o.values)
+ def ==(o)
+ self.class == o.class && self.size_type == o.size_type && Set.new(values) == Set.new(o.values)
+ end
end
end
- end
-
- # @api public
- class PRegexpType < PScalarType
- has_attr 'pattern', String, :lowerBound => 1
- has_attr 'regexp', Object, :derived => true
- module ClassModule
- def regexp_derived
- @_regexp = Regexp.new(pattern) unless @_regexp && @_regexp.source == pattern
- @_regexp
- end
+ class PRegexpType < PScalarType
+ module ClassModule
+ def regexp_derived
+ @_regexp = Regexp.new(pattern) unless @_regexp && @_regexp.source == pattern
+ @_regexp
+ end
- def hash
- [self.class, pattern].hash
- end
+ def hash
+ [self.class, pattern].hash
+ end
- def ==(o)
- self.class == o.class && pattern == o.pattern
+ def ==(o)
+ self.class == o.class && pattern == o.pattern
+ end
end
end
- end
- # Represents a subtype of String that narrows the string to those matching the patterns
- # If specified without a pattern it is basically the same as the String type.
- #
- # @api public
- class PPatternType < PScalarType
- contains_many_uni 'patterns', PRegexpType
+ class PPatternType < PScalarType
+ module ClassModule
- module ClassModule
-
- def hash
- [self.class, Set.new(patterns)].hash
- end
+ def hash
+ [self.class, Set.new(patterns)].hash
+ end
- def ==(o)
- self.class == o.class && Set.new(patterns) == Set.new(o.patterns)
+ def ==(o)
+ self.class == o.class && Set.new(patterns) == Set.new(o.patterns)
+ end
end
end
- end
-
- # @api public
- class PBooleanType < PScalarType
- end
- # @api public
- class PCollectionType < PObjectType
- contains_one_uni 'element_type', PAbstractType
- contains_one_uni 'size_type', PIntegerType
-
- module ClassModule
- # Returns an array with from (min) size to (max) size
- # A negative range value in from is
- def size_range
- return [0, INFINITY] if size_type.nil?
- f = size_type.from || 0
- t = size_type.to || INFINITY
- if f < t
- [f, t]
- else
- [t,f]
+ class PCollectionType < PAnyType
+ module ClassModule
+ # Returns an array with from (min) size to (max) size
+ def size_range
+ return [0, INFINITY] if size_type.nil?
+ f = size_type.from || 0
+ t = size_type.to || INFINITY
+ if f < t
+ [f, t]
+ else
+ [t,f]
+ end
end
- end
- def hash
- [self.class, element_type, size_type].hash
- end
+ def hash
+ [self.class, element_type, size_type].hash
+ end
- def ==(o)
- self.class == o.class && element_type == o.element_type && size_type == o.size_type
+ def ==(o)
+ self.class == o.class && element_type == o.element_type && size_type == o.size_type
+ end
end
end
- end
-
- class PStructElement < Puppet::Pops::Model::PopsObject
- has_attr 'name', String, :lowerBound => 1
- contains_one_uni 'type', PAbstractType
- module ClassModule
- def hash
- [self.class, type, name].hash
- end
+ class PStructElement < TypeModelObject
+ module ClassModule
+ def hash
+ [self.class, type, name].hash
+ end
- def ==(o)
- self.class == o.class && type == o.type && name == o.name
+ def ==(o)
+ self.class == o.class && type == o.type && name == o.name
+ end
end
end
- end
- # @api public
- class PStructType < PObjectType
- contains_many_uni 'elements', PStructElement, :lowerBound => 1
- has_attr 'hashed_elements', Object, :derived => true
- module ClassModule
- def hashed_elements_derived
- @_hashed ||= elements.reduce({}) {|memo, e| memo[e.name] = e.type; memo }
- @_hashed
- end
+ class PStructType < PAnyType
+ module ClassModule
+ def hashed_elements_derived
+ @_hashed ||= elements.reduce({}) {|memo, e| memo[e.name] = e.type; memo }
+ @_hashed
+ end
- def clear_hashed_elements
- @_hashed = nil
- end
+ def clear_hashed_elements
+ @_hashed = nil
+ end
- def hash
- [self.class, Set.new(elements)].hash
- end
+ def hash
+ [self.class, Set.new(elements)].hash
+ end
- def ==(o)
- self.class == o.class && hashed_elements == o.hashed_elements
+ def ==(o)
+ self.class == o.class && hashed_elements == o.hashed_elements
+ end
end
end
- end
- # @api public
- class PTupleType < PObjectType
- contains_many_uni 'types', PAbstractType, :lowerBound => 1
- # If set, describes min and max required of the given types - if max > size of
- # types, the last type entry repeats
- #
- contains_one_uni 'size_type', PIntegerType, :lowerBound => 0
-
- module ClassModule
- # Returns the number of elements accepted [min, max] in the tuple
- def size_range
- types_size = types.size
- size_type.nil? ? [types_size, types_size] : size_type.range
- end
+ class PTupleType < PAnyType
+ module ClassModule
+ # Returns the number of elements accepted [min, max] in the tuple
+ def size_range
+ types_size = types.size
+ size_type.nil? ? [types_size, types_size] : size_type.range
+ end
- # Returns the number of accepted occurrences [min, max] of the last type in the tuple
- # The defaults is [1,1]
- #
- def repeat_last_range
- types_size = types.size
- if size_type.nil?
- return [1, 1]
- end
- from, to = size_type.range()
- min = from - (types_size-1)
- min = min <= 0 ? 0 : min
- max = to - (types_size-1)
- [min, max]
- end
+ # Returns the number of accepted occurrences [min, max] of the last type in the tuple
+ # The default is [1, 1]
+ #
+ def repeat_last_range
+ types_size = types.size
+ if size_type.nil?
+ return [1, 1]
+ end
+ from, to = size_type.range()
+ min = from - (types_size-1)
+ min = min <= 0 ? 0 : min
+ max = to - (types_size-1)
+ [min, max]
+ end
- def hash
- [self.class, size_type, Set.new(types)].hash
- end
+ def hash
+ [self.class, size_type, Set.new(types)].hash
+ end
- def ==(o)
- self.class == o.class && types == o.types && size_type == o.size_type
+ def ==(o)
+ self.class == o.class && types == o.types && size_type == o.size_type
+ end
end
end
- end
-
- class PCallableType < PObjectType
- # Types of parameters and required/optional count
- contains_one_uni 'param_types', PTupleType, :lowerBound => 1
- # Although being an abstract type reference, only PAbstractCallable, and Optional[Callable] are supported
- # If not set, the meaning is that block is not supported.
- #
- contains_one_uni 'block_type', PAbstractType, :lowerBound => 0
-
- module ClassModule
- # Returns the number of accepted arguments [min, max]
- def size_range
- param_types.size_range
- end
+ class PCallableType < PAnyType
+ module ClassModule
+ # Returns the number of accepted arguments [min, max]
+ def size_range
+ param_types.size_range
+ end
- # Returns the number of accepted arguments for the last parameter type [min, max]
- #
- def last_range
- param_types.repeat_last_range
- end
+ # Returns the number of accepted arguments for the last parameter type [min, max]
+ #
+ def last_range
+ param_types.repeat_last_range
+ end
- # Range [0,0], [0,1], or [1,1] for the block
- #
- def block_range
- case block_type
- when Puppet::Pops::Types::POptionalType
- [0,1]
- when Puppet::Pops::Types::PVariantType, Puppet::Pops::Types::PCallableType
- [1,1]
- else
- [0,0]
+ # Range [0,0], [0,1], or [1,1] for the block
+ #
+ def block_range
+ case block_type
+ when Puppet::Pops::Types::POptionalType
+ [0,1]
+ when Puppet::Pops::Types::PVariantType, Puppet::Pops::Types::PCallableType
+ [1,1]
+ else
+ [0,0]
+ end
end
- end
- def hash
- [self.class, Set.new(param_types), block_type].hash
- end
+ def hash
+ [self.class, Set.new(param_types), block_type].hash
+ end
- def ==(o)
- self.class == o.class && args_type == o.args_type && block_type == o.block_type
+ def ==(o)
+ self.class == o.class && args_type == o.args_type && block_type == o.block_type
+ end
end
end
- end
- # @api public
- class PArrayType < PCollectionType
- module ClassModule
- def hash
- [self.class, self.element_type, self.size_type].hash
- end
+ class PArrayType < PCollectionType
+ module ClassModule
+ def hash
+ [self.class, self.element_type, self.size_type].hash
+ end
- def ==(o)
- self.class == o.class && self.element_type == o.element_type && self.size_type == o.size_type
+ def ==(o)
+ self.class == o.class && self.element_type == o.element_type && self.size_type == o.size_type
+ end
end
end
- end
- # @api public
- class PHashType < PCollectionType
- contains_one_uni 'key_type', PAbstractType
- module ClassModule
- def hash
- [self.class, key_type, self.element_type, self.size_type].hash
- end
+ class PHashType < PCollectionType
+ module ClassModule
+ def hash
+ [self.class, key_type, self.element_type, self.size_type].hash
+ end
- def ==(o)
- self.class == o.class &&
- key_type == o.key_type &&
- self.element_type == o.element_type &&
- self.size_type == o.size_type
+ def ==(o)
+ self.class == o.class &&
+ key_type == o.key_type &&
+ self.element_type == o.element_type &&
+ self.size_type == o.size_type
+ end
end
end
- end
- # @api public
- class PRubyType < PObjectType
- has_attr 'ruby_class', String
- module ClassModule
- def hash
- [self.class, ruby_class].hash
- end
- def ==(o)
- self.class == o.class && ruby_class == o.ruby_class
+ class PRuntimeType < PAnyType
+ module ClassModule
+ def hash
+ [self.class, runtime, runtime_type_name].hash
+ end
+
+ def ==(o)
+ self.class == o.class && runtime == o.runtime && runtime_type_name == o.runtime_type_name
+ end
end
end
- end
-
- # Abstract representation of a type that can be placed in a Catalog.
- # @api public
- #
- class PCatalogEntryType < PObjectType
- end
- # Represents a (host-) class in the Puppet Language.
- # @api public
- #
- class PHostClassType < PCatalogEntryType
- has_attr 'class_name', String
- # contains_one_uni 'super_type', PHostClassType
- module ClassModule
- def hash
- [self.class, class_name].hash
- end
- def ==(o)
- self.class == o.class && class_name == o.class_name
+ class PHostClassType < PCatalogEntryType
+ module ClassModule
+ def hash
+ [self.class, class_name].hash
+ end
+ def ==(o)
+ self.class == o.class && class_name == o.class_name
+ end
end
end
- end
- # Represents a Resource Type in the Puppet Language
- # @api public
- #
- class PResourceType < PCatalogEntryType
- has_attr 'type_name', String
- has_attr 'title', String
- module ClassModule
- def hash
- [self.class, type_name, title].hash
- end
- def ==(o)
- self.class == o.class && type_name == o.type_name && title == o.title
+ class PResourceType < PCatalogEntryType
+ module ClassModule
+ def hash
+ [self.class, type_name, title].hash
+ end
+ def ==(o)
+ self.class == o.class && type_name == o.type_name && title == o.title
+ end
end
end
- end
- # Represents a type that accept PNilType instead of the type parameter
- # required_type - is a short hand for Variant[T, Undef]
- #
- class POptionalType < PAbstractType
- contains_one_uni 'optional_type', PAbstractType
- module ClassModule
- def hash
- [self.class, optional_type].hash
- end
+ class POptionalType < PAnyType
+ module ClassModule
+ def hash
+ [self.class, optional_type].hash
+ end
- def ==(o)
- self.class == o.class && optional_type == o.optional_type
+ def ==(o)
+ self.class == o.class && optional_type == o.optional_type
+ end
end
end
end
-
end
diff --git a/lib/puppet/pops/types/types_meta.rb b/lib/puppet/pops/types/types_meta.rb
new file mode 100644
index 000000000..3e7d80ad7
--- /dev/null
+++ b/lib/puppet/pops/types/types_meta.rb
@@ -0,0 +1,223 @@
+require 'rgen/metamodel_builder'
+
+# The Types model is a model of Puppet Language types.
+#
+# The exact relationship between types is not visible in this model wrt. the PDataType which is an abstraction
+# of Scalar, Array[Data], and Hash[Scalar, Data] nested to any depth. This means it is not possible to
+# infer the type by simply looking at the inheritance hierarchy. The {Puppet::Pops::Types::TypeCalculator} should
+# be used to answer questions about types. The {Puppet::Pops::Types::TypeFactory} should be used to create an instance
+# of a type whenever one is needed.
+#
+# The implementation of the Types model contains methods that are required for the type objects to behave as
+# expected when comparing them and using them as keys in hashes. (No other logic is, or should be included directly in
+# the model's classes).
+#
+# @api public
+#
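+# A minimal usage sketch (assuming the TypeFactory and TypeCalculator APIs referred to above,
+# e.g. TypeFactory.integer and TypeCalculator#instance?):
+#
+#   int_t = Puppet::Pops::Types::TypeFactory.integer
+#   Puppet::Pops::Types::TypeCalculator.new.instance?(int_t, 42)   # => true
+#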
+module Puppet::Pops::Types
+ extend RGen::MetamodelBuilder::ModuleExtension
+
+ class TypeModelObject < RGen::MetamodelBuilder::MMBase
+ abstract
+ end
+
+ # Base type for all types except {Puppet::Pops::Types::PType PType}, the type of types.
+ # @api public
+ #
+ class PAnyType < TypeModelObject
+ end
+
+ # The type of types.
+ # @api public
+ #
+ class PType < PAnyType
+ contains_one_uni 'type', PAnyType
+ end
+
+ # @api public
+ #
+ class PNilType < PAnyType
+ end
+
+
+ # A type private to the type system that describes "ignored type" - i.e. "I am what you are"
+ # @api private
+ #
+ class PUnitType < PAnyType
+ end
+
+ # @api public
+ #
+ class PDefaultType < PAnyType
+ end
+
+ # A flexible data type, being assignable to its subtypes as well as PArrayType and PHashType with element type assignable to PDataType.
+ #
+ # @api public
+ #
+ class PDataType < PAnyType
+ end
+
+ # A flexible type describing a union ("any one of") of other types
+ # @api public
+ #
+ class PVariantType < PAnyType
+ contains_many_uni 'types', PAnyType, :lowerBound => 1
+ end
+
+ # Type that is PDataType compatible, but is not a PCollectionType.
+ # @api public
+ #
+ class PScalarType < PAnyType
+ end
+
+ # A string type describing the set of strings having one of the given values
+ # @api public
+ #
+ class PEnumType < PScalarType
+ has_many_attr 'values', String, :lowerBound => 1
+ end
+
+ # @api public
+ #
+ class PNumericType < PScalarType
+ end
+
+ # @api public
+ #
+ class PIntegerType < PNumericType
+ has_attr 'from', Integer, :lowerBound => 0
+ has_attr 'to', Integer, :lowerBound => 0
+ end
+
+ # @api public
+ #
+ class PFloatType < PNumericType
+ has_attr 'from', Float, :lowerBound => 0
+ has_attr 'to', Float, :lowerBound => 0
+ end
+
+ # @api public
+ #
+ class PStringType < PScalarType
+ has_many_attr 'values', String, :lowerBound => 0, :upperBound => -1, :unique => true
+ contains_one_uni 'size_type', PIntegerType
+ end
+
+ # @api public
+ #
+ class PRegexpType < PScalarType
+ has_attr 'pattern', String, :lowerBound => 1
+ has_attr 'regexp', Object, :derived => true
+ end
+
+ # Represents a subtype of String that narrows the string to those matching the patterns
+ # If specified without a pattern it is basically the same as the String type.
+ #
+ # @api public
+ #
+ class PPatternType < PScalarType
+ contains_many_uni 'patterns', PRegexpType
+ end
+
+ # @api public
+ #
+ class PBooleanType < PScalarType
+ end
+
+ # @api public
+ #
+ class PCollectionType < PAnyType
+ contains_one_uni 'element_type', PAnyType
+ contains_one_uni 'size_type', PIntegerType
+ end
+
+ # @api public
+ #
+ class PStructElement < TypeModelObject
+ has_attr 'name', String, :lowerBound => 1
+ contains_one_uni 'type', PAnyType
+ end
+
+ # @api public
+ #
+ class PStructType < PAnyType
+ contains_many_uni 'elements', PStructElement, :lowerBound => 1
+ has_attr 'hashed_elements', Object, :derived => true
+ end
+
+ # @api public
+ #
+ class PTupleType < PAnyType
+ contains_many_uni 'types', PAnyType, :lowerBound => 1
+ # If set, describes min and max required of the given types - if max > size of
+ # types, the last type entry repeats
+ #
+ contains_one_uni 'size_type', PIntegerType, :lowerBound => 0
+ end
+
+ # @api public
+ #
+ class PCallableType < PAnyType
+ # Types of parameters as a Tuple with required/optional count, or an Integer with min (required), max count
+ contains_one_uni 'param_types', PAnyType, :lowerBound => 1
+
+ # Although this is an abstract type reference, only Callable, or Callables wrapped in
+ # Optional or Variant, are supported.
+ # If not set, it means that a block is not supported.
+ #
+ contains_one_uni 'block_type', PAnyType, :lowerBound => 0
+ end
+
+ # @api public
+ #
+ class PArrayType < PCollectionType
+ end
+
+ # @api public
+ #
+ class PHashType < PCollectionType
+ contains_one_uni 'key_type', PAnyType
+ end
+
+ RuntimeEnum = RGen::MetamodelBuilder::DataTypes::Enum.new(
+ :name => 'RuntimeEnum',
+ :literals => [:'ruby', ])
+
+ # @api public
+ #
+ class PRuntimeType < PAnyType
+ has_attr 'runtime', RuntimeEnum, :lowerBound => 1
+ has_attr 'runtime_type_name', String
+ end
+
+ # Abstract representation of a type that can be placed in a Catalog.
+ # @api public
+ #
+ class PCatalogEntryType < PAnyType
+ end
+
+ # Represents a (host-) class in the Puppet Language.
+ # @api public
+ #
+ class PHostClassType < PCatalogEntryType
+ has_attr 'class_name', String
+ end
+
+ # Represents a Resource Type in the Puppet Language
+ # @api public
+ #
+ class PResourceType < PCatalogEntryType
+ has_attr 'type_name', String
+ has_attr 'title', String
+ end
+
+ # Represents a type that accepts PNilType in addition to the contained type
+ # (i.e. Optional[T] is shorthand for Variant[T, Undef])
+ # @api public
+ #
+ class POptionalType < PAnyType
+ contains_one_uni 'optional_type', PAnyType
+ end
+
+end
diff --git a/lib/puppet/pops/utils.rb b/lib/puppet/pops/utils.rb
index 45e166984..7fd98c58f 100644
--- a/lib/puppet/pops/utils.rb
+++ b/lib/puppet/pops/utils.rb
@@ -1,120 +1,122 @@
# Provides utility methods
module Puppet::Pops::Utils
# Can the given o be converted to numeric? (or is numeric already)
# Accepts a leading '::'
# Returns a boolean indicating whether the value is numeric
# If testing whether a value can be converted, it is more efficient to call {#to_n} or {#to_n_with_radix} directly
# and check if the result is nil.
def self.is_numeric?(o)
case o
- when Numeric, Integer, Fixnum, Float
- !!o
+ when Numeric
+ true
else
!!Puppet::Pops::Patterns::NUMERIC.match(relativize_name(o.to_s))
end
end
# To Numeric with radix, or nil if not a number.
# If the value is already Numeric it is returned verbatim with a radix of 10.
# @param o [String, Number] a string containing a number in octal, hex, integer (decimal) or floating point form
+ # with optional sign +/-
# @return [Array<Number, Integer>, nil] array with converted number and radix, or nil if not possible to convert
# @api public
#
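# Illustrative examples (capture-group behavior follows Puppet::Pops::Patterns::NUMERIC):
#   to_n_with_radix('0x10')  # => [16, 16]
#   to_n_with_radix('010')   # => [8, 8]
#   to_n_with_radix('-2.0')  # => [-2.0, 10]
#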
def self.to_n_with_radix o
begin
case o
when String
match = Puppet::Pops::Patterns::NUMERIC.match(relativize_name(o))
if !match
nil
- elsif match[3].to_s.length > 0
+ elsif match[5].to_s.length > 0
# Use default radix (default is decimal == 10) for floats
- [Float(match[0]), 10]
+ match[1] == '-' ? [-Float(match[2]), 10] : [Float(match[2]), 10]
else
# Set radix (default is decimal == 10)
radix = 10
- if match[1].to_s.length > 0
+ if match[3].to_s.length > 0
radix = 16
- elsif match[2].to_s.length > 1 && match[2][0,1] == '0'
+ elsif match[4].to_s.length > 1 && match[4][0,1] == '0'
radix = 8
end
# Ruby 1.8.7 does not accept a second (radix) argument to the Kernel method that creates an
# integer from a string; it relies on the prefix 0x, 0X, or 0 (the binary prefix 'b' is not supported in puppet)
- # We have the correct string here, match[0] is safe to parse without passing on radix
- [Integer(match[0]), radix]
+ # We have the correct string here, match[2] is safe to parse without passing on radix
+ match[1] == '-' ? [-Integer(match[2]), radix] : [Integer(match[2]), radix]
end
- when Numeric, Fixnum, Integer, Float
+ when Numeric
# Impossible to calculate radix, assume decimal
[o, 10]
else
nil
end
rescue ArgumentError
nil
end
end
# To Numeric (or already numeric)
- # Returns nil if value is not numeric, else an Integer or Float
+ # Returns nil if value is not numeric, else an Integer or Float. A String may have an optional sign.
+ #
# A leading '::' is accepted (and ignored)
#
def self.to_n o
begin
case o
when String
match = Puppet::Pops::Patterns::NUMERIC.match(relativize_name(o))
if !match
nil
- elsif match[3].to_s.length > 0
- Float(match[0])
+ elsif match[5].to_s.length > 0
+ match[1] == '-' ? -Float(match[2]) : Float(match[2])
else
- Integer(match[0])
+ match[1] == '-' ? -Integer(match[2]) : Integer(match[2])
end
- when Numeric, Fixnum, Integer, Float
+ when Numeric
o
else
nil
end
rescue ArgumentError
nil
end
end
# is the name absolute (i.e. starts with ::)
def self.is_absolute? name
name.start_with? "::"
end
def self.name_to_segments name
name.split("::")
end
def self.relativize_name name
is_absolute?(name) ? name[2..-1] : name
end
# Finds an existing adapter for o or for one of its containers, or nil, if none of the containers
# was adapted with the given adapter.
# This method can only be used with objects for which `respond_to?(:eContainer)` is true.
#
# @see #find_closest_positioned
#
def self.find_adapter(o, adapter)
return nil if o.nil? || (o.is_a?(Array) && o.empty?)
a = adapter.get(o)
return a if a
return find_adapter(o.eContainer, adapter)
end
# Finds the closest positioned Puppet::Pops::Model::Positioned object, or object decorated with
# a SourcePosAdapter, and returns
# a SourcePosAdapter for the first found, or nil if not found.
#
def self.find_closest_positioned(o)
return nil if o.nil? || o.is_a?(Puppet::Pops::Model::Program) || (o.is_a?(Array) && o.empty?)
return find_adapter(o, Puppet::Pops::Adapters::SourcePosAdapter) unless o.is_a?(Puppet::Pops::Model::Positioned)
o.offset.nil? ? find_closest_positioned(o.eContainer) : Puppet::Pops::Adapters::SourcePosAdapter.adapt(o)
end
end
diff --git a/lib/puppet/pops/validation/checker3_1.rb b/lib/puppet/pops/validation/checker3_1.rb
deleted file mode 100644
index 77f643fb1..000000000
--- a/lib/puppet/pops/validation/checker3_1.rb
+++ /dev/null
@@ -1,558 +0,0 @@
-# A Validator validates a model.
-#
-# Validation is performed on each model element in isolation. Each method should validate the model element's state
-# but not validate its referenced/contained elements except to check their validity in their respective role.
-# The intent is to drive the validation with a tree iterator that visits all elements in a model.
-#
-#
-# TODO: Add validation of multiplicities - this is a general validation that can be checked for all
-# Model objects via their metamodel. (I.e an extra call to multiplicity check in polymorph check).
-# This is however mostly valuable when validating model to model transformations, and is therefore T.B.D
-#
-class Puppet::Pops::Validation::Checker3_1
- Issues = Puppet::Pops::Issues
- Model = Puppet::Pops::Model
-
- attr_reader :acceptor
- # Initializes the validator with a diagnostics producer. This object must respond to
- # `:will_accept?` and `:accept`.
- #
- def initialize(diagnostics_producer)
- @@check_visitor ||= Puppet::Pops::Visitor.new(nil, "check", 0, 0)
- @@rvalue_visitor ||= Puppet::Pops::Visitor.new(nil, "rvalue", 0, 0)
- @@hostname_visitor ||= Puppet::Pops::Visitor.new(nil, "hostname", 1, 2)
- @@assignment_visitor ||= Puppet::Pops::Visitor.new(nil, "assign", 0, 1)
- @@query_visitor ||= Puppet::Pops::Visitor.new(nil, "query", 0, 0)
- @@top_visitor ||= Puppet::Pops::Visitor.new(nil, "top", 1, 1)
- @@relation_visitor ||= Puppet::Pops::Visitor.new(nil, "relation", 1, 1)
-
- @acceptor = diagnostics_producer
- end
-
- # Validates the entire model by visiting each model element and calling `check`.
- # The result is collected (or acted on immediately) by the configured diagnostic provider/acceptor
- # given when creating this Checker.
- #
- def validate(model)
- # tree iterate the model, and call check for each element
- check(model)
- model.eAllContents.each {|m| check(m) }
- end
-
- # Performs regular validity check
- def check(o)
- @@check_visitor.visit_this_0(self, o)
- end
-
- # Performs check if this is a vaid hostname expression
- # @param single_feature_name [String, nil] the name of a single valued hostname feature of the value's container. e.g. 'parent'
- def hostname(o, semantic, single_feature_name = nil)
- @@hostname_visitor.visit_this_2(self, o, semantic, single_feature_name)
- end
-
- # Performs check if this is valid as a query
- def query(o)
- @@query_visitor.visit_this_0(self, o)
- end
-
- # Performs check if this is valid as a relationship side
- def relation(o, container)
- @@relation_visitor.visit_this_1(self, o, container)
- end
-
- # Performs check if this is valid as a rvalue
- def rvalue(o)
- @@rvalue_visitor.visit_this_0(self, o)
- end
-
- # Performs check if this is valid as a container of a definition (class, define, node)
- def top(o, definition)
- @@top_visitor.visit_this_1(self, o, definition)
- end
-
- # Checks the LHS of an assignment (is it assignable?).
- # If args[0] is true, assignment via index is checked.
- #
- def assign(o, via_index = false)
- @@assignment_visitor.visit_this_1(self, o, via_index)
- end
-
- #---ASSIGNMENT CHECKS
-
- def assign_VariableExpression(o, via_index)
- varname_string = varname_to_s(o.expr)
- if varname_string =~ /^[0-9]+$/
- acceptor.accept(Issues::ILLEGAL_NUMERIC_ASSIGNMENT, o, :varname => varname_string)
- end
- # Can not assign to something in another namespace (i.e. a '::' in the name is not legal)
- if acceptor.will_accept? Issues::CROSS_SCOPE_ASSIGNMENT
- if varname_string =~ /::/
- acceptor.accept(Issues::CROSS_SCOPE_ASSIGNMENT, o, :name => varname_string)
- end
- end
- # TODO: Could scan for reassignment of the same variable if done earlier in the same container
- # Or if assigning to a parameter (more work).
- # TODO: Investigate if there are invalid cases for += assignment
- end
-
- def assign_AccessExpression(o, via_index)
- # Are indexed assignments allowed at all ? $x[x] = '...'
- if acceptor.will_accept? Issues::ILLEGAL_INDEXED_ASSIGNMENT
- acceptor.accept(Issues::ILLEGAL_INDEXED_ASSIGNMENT, o)
- else
- # Then the left expression must be assignable-via-index
- assign(o.left_expr, true)
- end
- end
-
- def assign_Object(o, via_index)
- # Can not assign to anything else (differentiate if this is via index or not)
- # i.e. 10 = 'hello' vs. 10['x'] = 'hello' (the root is reported as being in error in both cases)
- #
- acceptor.accept(via_index ? Issues::ILLEGAL_ASSIGNMENT_VIA_INDEX : Issues::ILLEGAL_ASSIGNMENT, o)
- end
-
- #---CHECKS
-
- def check_Object(o)
- end
-
- def check_Factory(o)
- check(o.current)
- end
-
- def check_AccessExpression(o)
- # Check multiplicity of keys
- case o.left_expr
- when Model::QualifiedName
- # allows many keys, but the name should really be a QualifiedReference
- acceptor.accept(Issues::DEPRECATED_NAME_AS_TYPE, o, :name => o.left_expr.value)
- when Model::QualifiedReference
- # ok, allows many - this is a resource reference
-
- else
- # i.e. for any other expression that may produce an array or hash
- if o.keys.size > 1
- acceptor.accept(Issues::UNSUPPORTED_RANGE, o, :count => o.keys.size)
- end
- if o.keys.size < 1
- acceptor.accept(Issues::MISSING_INDEX, o)
- end
- end
- end
-
- def check_AssignmentExpression(o)
- acceptor.accept(Issues::UNSUPPORTED_OPERATOR, o, {:operator => o.operator}) unless [:'=', :'+='].include? o.operator
- assign(o.left_expr)
- rvalue(o.right_expr)
- end
-
- # Checks that operation with :+> is contained in a ResourceOverride or Collector.
- #
- # Parent of an AttributeOperation can be one of:
- # * CollectExpression
- # * ResourceOverride
- # * ResourceBody (ILLEGAL this is a regular resource expression)
- # * ResourceDefaults (ILLEGAL)
- #
- def check_AttributeOperation(o)
- if o.operator == :'+>'
- # Append operator use is constrained
- parent = o.eContainer
- unless parent.is_a?(Model::CollectExpression) || parent.is_a?(Model::ResourceOverrideExpression)
- acceptor.accept(Issues::ILLEGAL_ATTRIBUTE_APPEND, o, {:name=>o.attribute_name, :parent=>parent})
- end
- end
- rvalue(o.value_expr)
- end
-
- def check_BinaryExpression(o)
- rvalue(o.left_expr)
- rvalue(o.right_expr)
- end
-
- def check_CallNamedFunctionExpression(o)
- unless o.functor_expr.is_a? Model::QualifiedName
- acceptor.accept(Issues::ILLEGAL_EXPRESSION, o.functor_expr, :feature => 'function name', :container => o)
- end
- end
-
- def check_MethodCallExpression(o)
- unless o.functor_expr.is_a? Model::QualifiedName
- acceptor.accept(Issues::ILLEGAL_EXPRESSION, o.functor_expr, :feature => 'function name', :container => o)
- end
- end
-
- def check_CaseExpression(o)
- # There should only be one LiteralDefault case option value
- # TODO: Implement this check
- end
-
- def check_CollectExpression(o)
- unless o.type_expr.is_a? Model::QualifiedReference
- acceptor.accept(Issues::ILLEGAL_EXPRESSION, o.type_expr, :feature=> 'type name', :container => o)
- end
-
- # If a collect expression tries to collect exported resources and storeconfigs is not on
- # then it will not work... This was checked in the parser previously. This is a runtime checking
- # thing as opposed to a language thing.
- if acceptor.will_accept?(Issues::RT_NO_STORECONFIGS) && o.query.is_a?(Model::ExportedQuery)
- acceptor.accept(Issues::RT_NO_STORECONFIGS, o)
- end
- end
-
- # Only used for function names, grammar should not be able to produce something faulty, but
- # check anyway if model is created programatically (it will fail in transformation to AST for sure).
- def check_NamedAccessExpression(o)
- name = o.right_expr
- unless name.is_a? Model::QualifiedName
- acceptor.accept(Issues::ILLEGAL_EXPRESSION, name, :feature=> 'function name', :container => o.eContainer)
- end
- end
-
- # for 'class' and 'define'
- def check_NamedDefinition(o)
- top(o.eContainer, o)
- if (acceptor.will_accept? Issues::NAME_WITH_HYPHEN) && o.name.include?('-')
- acceptor.accept(Issues::NAME_WITH_HYPHEN, o, {:name => o.name})
- end
- end
-
- def check_ImportExpression(o)
- o.files.each do |f|
- unless f.is_a? Model::LiteralString
- acceptor.accept(Issues::ILLEGAL_EXPRESSION, f, :feature => 'file name', :container => o)
- end
- end
- end
-
- def check_InstanceReference(o)
- # TODO: Original warning is :
- # Puppet.warning addcontext("Deprecation notice: Resource references should now be capitalized")
- # This model element is not used in the egrammar.
- # Either implement checks or deprecate the use of InstanceReference (the same is acheived by
- # transformation of AccessExpression when used where an Instance/Resource reference is allowed.
- #
- end
-
- # Restrictions on hash key are because of the strange key comparisons/and merge rules in the AST evaluation
- # (Even the allowed ones are handled in a strange way).
- #
- def transform_KeyedEntry(o)
- case o.key
- when Model::QualifiedName
- when Model::LiteralString
- when Model::LiteralNumber
- when Model::ConcatenatedString
- else
- acceptor.accept(Issues::ILLEGAL_EXPRESSION, o.key, :feature => 'hash key', :container => o.eContainer)
- end
- end
-
- # A Lambda is a Definition, but it may appear in other scopes that top scope (Which check_Definition asserts).
- #
- def check_LambdaExpression(o)
- end
-
- def check_NodeDefinition(o)
- # Check that hostnames are valid hostnames (or regular expressons)
- hostname(o.host_matches, o)
- hostname(o.parent, o, 'parent') unless o.parent.nil?
- top(o.eContainer, o)
- end
-
- # No checking takes place - all expressions using a QualifiedName need to check. This because the
- # rules are slightly different depending on the container (A variable allows a numeric start, but not
- # other names). This means that (if the lexer/parser so chooses) a QualifiedName
- # can be anything when it represents a Bare Word and evaluates to a String.
- #
- def check_QualifiedName(o)
- end
-
- # Checks that the value is a valid UpperCaseWord (a CLASSREF), and optionally if it contains a hypen.
- # DOH: QualifiedReferences are created with LOWER CASE NAMES at parse time
- def check_QualifiedReference(o)
- # Is this a valid qualified name?
- if o.value !~ Puppet::Pops::Patterns::CLASSREF
- acceptor.accept(Issues::ILLEGAL_CLASSREF, o, {:name=>o.value})
- elsif (acceptor.will_accept? Issues::NAME_WITH_HYPHEN) && o.value.include?('-')
- acceptor.accept(Issues::NAME_WITH_HYPHEN, o, {:name => o.value})
- end
- end
-
- def check_QueryExpression(o)
- query(o.expr) if o.expr # is optional
- end
-
- def relation_Object(o, rel_expr)
- acceptor.accept(Issues::ILLEGAL_EXPRESSION, o, {:feature => o.eContainingFeature, :container => rel_expr})
- end
-
- def relation_AccessExpression(o, rel_expr); end
-
- def relation_CollectExpression(o, rel_expr); end
-
- def relation_VariableExpression(o, rel_expr); end
-
- def relation_LiteralString(o, rel_expr); end
-
- def relation_ConcatenatedStringExpression(o, rel_expr); end
-
- def relation_SelectorExpression(o, rel_expr); end
-
- def relation_CaseExpression(o, rel_expr); end
-
- def relation_ResourceExpression(o, rel_expr); end
-
- def relation_RelationshipExpression(o, rel_expr); end
-
- def check_Parameter(o)
- if o.name =~ /^[0-9]+$/
- acceptor.accept(Issues::ILLEGAL_NUMERIC_PARAMETER, o, :name => o.name)
- end
- end
-
- #relationship_side: resource
- # | resourceref
- # | collection
- # | variable
- # | quotedtext
- # | selector
- # | casestatement
- # | hasharrayaccesses
-
- def check_RelationshipExpression(o)
- relation(o.left_expr, o)
- relation(o.right_expr, o)
- end
-
- def check_ResourceExpression(o)
- # A resource expression must have a lower case NAME as its type e.g. 'file { ... }'
- unless o.type_name.is_a? Model::QualifiedName
- acceptor.accept(Issues::ILLEGAL_EXPRESSION, o.type_name, :feature => 'resource type', :container => o)
- end
-
- # This is a runtime check - the model is valid, but will have runtime issues when evaluated
- # and storeconfigs is not set.
- if acceptor.will_accept?(Issues::RT_NO_STORECONFIGS) && o.exported
- acceptor.accept(Issues::RT_NO_STORECONFIGS_EXPORT, o)
- end
- end
-
- def check_ResourceDefaultsExpression(o)
- if o.form && o.form != :regular
- acceptor.accept(Issues::NOT_VIRTUALIZEABLE, o)
- end
- end
-
- # Transformation of SelectorExpression is limited to certain types of expressions.
- # This is probably due to constraints in the old grammar rather than any real concerns.
- def select_SelectorExpression(o)
- case o.left_expr
- when Model::CallNamedFunctionExpression
- when Model::AccessExpression
- when Model::VariableExpression
- when Model::ConcatenatedString
- else
- acceptor.accept(Issues::ILLEGAL_EXPRESSION, o.left_expr, :feature => 'left operand', :container => o)
- end
- end
-
- def check_UnaryExpression(o)
- rvalue(o.expr)
- end
-
- def check_UnlessExpression(o)
- # TODO: Unless may not have an elsif
- # TODO: 3.x unless may not have an else
- end
-
- def check_VariableExpression(o)
- # The expression must be a qualified name
- if !o.expr.is_a? Model::QualifiedName
- acceptor.accept(Issues::ILLEGAL_EXPRESSION, o, :feature => 'name', :container => o)
- else
- # Note, that if it later becomes illegal with hyphen in any name, this special check
- # can be skipped in favor of the check in QualifiedName, which is now not done if contained in
- # a VariableExpression
- name = o.expr.value
- if (acceptor.will_accept? Issues::VAR_WITH_HYPHEN) && name.include?('-')
- acceptor.accept(Issues::VAR_WITH_HYPHEN, o, {:name => name})
- end
- end
- end
-
- #--- HOSTNAME CHECKS
-
- # Transforms Array of host matching expressions into a (Ruby) array of AST::HostName
- def hostname_Array(o, semantic, single_feature_name)
- if single_feature_name
- acceptor.accept(Issues::ILLEGAL_EXPRESSION, o, {:feature=>single_feature_name, :container=>semantic})
- end
- o.each {|x| hostname(x, semantic, false) }
- end
-
- def hostname_String(o, semantic, single_feature_name)
- # The 3.x checker only checks for illegal characters - if matching /[^-\w.]/ the name is invalid,
- # but this allows pathological names like "a..b......c", "----"
- # TODO: Investigate if more illegal hostnames should be flagged.
- #
- if o =~ Puppet::Pops::Patterns::ILLEGAL_HOSTNAME_CHARS
- acceptor.accept(Issues::ILLEGAL_HOSTNAME_CHARS, semantic, :hostname => o)
- end
- end
-
- def hostname_LiteralValue(o, semantic, single_feature_name)
- hostname_String(o.value.to_s, o, single_feature_name)
- end
-
- def hostname_ConcatenatedString(o, semantic, single_feature_name)
- # Puppet 3.1. only accepts a concatenated string without interpolated expressions
- if the_expr = o.segments.index {|s| s.is_a?(Model::TextExpression) }
- acceptor.accept(Issues::ILLEGAL_HOSTNAME_INTERPOLATION, o.segments[the_expr].expr)
- elsif o.segments.size() != 1
- # corner case, bad model, concatenation of several plain strings
- acceptor.accept(Issues::ILLEGAL_HOSTNAME_INTERPOLATION, o)
- else
- # corner case, may be ok, but lexer may have replaced with plain string, this is
- # here if it does not
- hostname_String(o.segments[0], o.segments[0], false)
- end
- end
-
- def hostname_QualifiedName(o, semantic, single_feature_name)
- hostname_String(o.value.to_s, o, single_feature_name)
- end
-
- def hostname_QualifiedReference(o, semantic, single_feature_name)
- hostname_String(o.value.to_s, o, single_feature_name)
- end
-
- def hostname_LiteralNumber(o, semantic, single_feature_name)
- # always ok
- end
-
- def hostname_LiteralDefault(o, semantic, single_feature_name)
- # always ok
- end
-
- def hostname_LiteralRegularExpression(o, semantic, single_feature_name)
- # always ok
- end
-
- def hostname_Object(o, semantic, single_feature_name)
- acceptor.accept(Issues::ILLEGAL_EXPRESSION, o, {:feature=> single_feature_name || 'hostname', :container=>semantic})
- end
-
- #---QUERY CHECKS
-
- # Anything not explicitly allowed is flagged as error.
- def query_Object(o)
- acceptor.accept(Issues::ILLEGAL_QUERY_EXPRESSION, o)
- end
-
- # Puppet AST only allows == and !=
- #
- def query_ComparisonExpression(o)
- acceptor.accept(Issues::ILLEGAL_QUERY_EXPRESSION, o) unless [:'==', :'!='].include? o.operator
- end
-
- # Allows AND, OR, and checks if left/right are allowed in query.
- def query_BooleanExpression(o)
- query o.left_expr
- query o.right_expr
- end
-
- def query_ParenthesizedExpression(o)
- query(o.expr)
- end
-
- def query_VariableExpression(o); end
-
- def query_QualifiedName(o); end
-
- def query_LiteralNumber(o); end
-
- def query_LiteralString(o); end
-
- def query_LiteralBoolean(o); end
-
- #---RVALUE CHECKS
-
- # By default, all expressions are reported as being rvalues
- # Implement specific rvalue checks for those that are not.
- #
- def rvalue_Expression(o); end
-
- def rvalue_ImportExpression(o) ; acceptor.accept(Issues::NOT_RVALUE, o) ; end
-
- def rvalue_BlockExpression(o) ; acceptor.accept(Issues::NOT_RVALUE, o) ; end
-
- def rvalue_CaseExpression(o) ; acceptor.accept(Issues::NOT_RVALUE, o) ; end
-
- def rvalue_IfExpression(o) ; acceptor.accept(Issues::NOT_RVALUE, o) ; end
-
- def rvalue_UnlessExpression(o) ; acceptor.accept(Issues::NOT_RVALUE, o) ; end
-
- def rvalue_ResourceExpression(o) ; acceptor.accept(Issues::NOT_RVALUE, o) ; end
-
- def rvalue_ResourceDefaultsExpression(o); acceptor.accept(Issues::NOT_RVALUE, o) ; end
-
- def rvalue_ResourceOverrideExpression(o); acceptor.accept(Issues::NOT_RVALUE, o) ; end
-
- def rvalue_CollectExpression(o) ; acceptor.accept(Issues::NOT_RVALUE, o) ; end
-
- def rvalue_Definition(o) ; acceptor.accept(Issues::NOT_RVALUE, o) ; end
-
- def rvalue_NodeDefinition(o) ; acceptor.accept(Issues::NOT_RVALUE, o) ; end
-
- def rvalue_UnaryExpression(o) ; rvalue o.expr ; end
-
- #---TOP CHECK
-
- def top_NilClass(o, definition)
- # ok, reached the top, no more parents
- end
-
- def top_Object(o, definition)
- # fail, reached a container that is not top level
- acceptor.accept(Issues::NOT_TOP_LEVEL, definition)
- end
-
- def top_BlockExpression(o, definition)
- # ok, if this is a block representing the body of a class, or is top level
- top o.eContainer, definition
- end
-
- def top_HostClassDefinition(o, definition)
- # ok, stop scanning parents
- end
-
- def top_Program(o, definition)
- # ok
- end
-
- # A LambdaExpression is a BlockExpression, and this method is needed to prevent the polymorph method for BlockExpression
- # to accept a lambda.
- # A lambda can not iteratively create classes, nodes or defines as the lambda does not have a closure.
- #
- def top_LambdaExpression(o, definition)
- # fail, stop scanning parents
- acceptor.accept(Issues::NOT_TOP_LEVEL, definition)
- end
-
- #--- NON POLYMORPH, NON CHECKING CODE
-
- # Produces string part of something named, or nil if not a QualifiedName or QualifiedReference
- #
- def varname_to_s(o)
- case o
- when Model::QualifiedName
- o.value
- when Model::QualifiedReference
- o.value
- else
- nil
- end
- end
-end
diff --git a/lib/puppet/pops/validation/checker4_0.rb b/lib/puppet/pops/validation/checker4_0.rb
index fa2587620..079710d59 100644
--- a/lib/puppet/pops/validation/checker4_0.rb
+++ b/lib/puppet/pops/validation/checker4_0.rb
@@ -1,514 +1,760 @@
# A Validator validates a model.
#
# Validation is performed on each model element in isolation. Each method should validate the model element's state
# but not validate its referenced/contained elements except to check their validity in their respective role.
# The intent is to drive the validation with a tree iterator that visits all elements in a model.
#
#
# TODO: Add validation of multiplicities - this is a general validation that can be checked for all
# Model objects via their metamodel. (I.e an extra call to multiplicity check in polymorph check).
# This is however mostly valuable when validating model to model transformations, and is therefore T.B.D
#
class Puppet::Pops::Validation::Checker4_0
Issues = Puppet::Pops::Issues
Model = Puppet::Pops::Model
attr_reader :acceptor
# Initializes the validator with a diagnostics producer. This object must respond to
# `:will_accept?` and `:accept`.
#
def initialize(diagnostics_producer)
@@check_visitor ||= Puppet::Pops::Visitor.new(nil, "check", 0, 0)
@@rvalue_visitor ||= Puppet::Pops::Visitor.new(nil, "rvalue", 0, 0)
@@hostname_visitor ||= Puppet::Pops::Visitor.new(nil, "hostname", 1, 2)
@@assignment_visitor ||= Puppet::Pops::Visitor.new(nil, "assign", 0, 1)
@@query_visitor ||= Puppet::Pops::Visitor.new(nil, "query", 0, 0)
@@top_visitor ||= Puppet::Pops::Visitor.new(nil, "top", 1, 1)
@@relation_visitor ||= Puppet::Pops::Visitor.new(nil, "relation", 0, 0)
+ @@idem_visitor ||= Puppet::Pops::Visitor.new(self, "idem", 0, 0)
@acceptor = diagnostics_producer
end
# Validates the entire model by visiting each model element and calling `check`.
# The result is collected (or acted on immediately) by the configured diagnostic provider/acceptor
# given when creating this Checker.
#
def validate(model)
# tree iterate the model, and call check for each element
check(model)
model.eAllContents.each {|m| check(m) }
end
# Performs regular validity check
def check(o)
@@check_visitor.visit_this_0(self, o)
end
# Performs check if this is a valid hostname expression
# @param single_feature_name [String, nil] the name of a single valued hostname feature of the value's container. e.g. 'parent'
def hostname(o, semantic, single_feature_name = nil)
@@hostname_visitor.visit_this_2(self, o, semantic, single_feature_name)
end
# Performs check if this is valid as a query
def query(o)
@@query_visitor.visit_this_0(self, o)
end
# Performs check if this is valid as a relationship side
def relation(o)
@@relation_visitor.visit_this_0(self, o)
end
# Performs check if this is valid as a rvalue
def rvalue(o)
@@rvalue_visitor.visit_this_0(self, o)
end
# Performs check if this is valid as a container of a definition (class, define, node)
def top(o, definition)
@@top_visitor.visit_this_1(self, o, definition)
end
# Checks the LHS of an assignment (is it assignable?).
# If args[0] is true, assignment via index is checked.
#
def assign(o, via_index = false)
@@assignment_visitor.visit_this_1(self, o, via_index)
end
+ # Checks if the expression has a side effect ('idem' is Latin for 'the same', here meaning that the evaluation state
+ # is known to be unchanged after the expression has been evaluated). The result is not 100% authoritative for
+ # negative answers since analysis of function behavior is not possible.
+ # @return [Boolean] true if expression is known to have no effect on evaluation state
+ #
+ def idem(o)
+ @@idem_visitor.visit_this_0(self, o)
+ end
+
+ # Returns the last expression in a block (or the expression itself) if that expression is idem, otherwise nil
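+ # For example, a body ending in a bare '1 + 1' ends with an idem expression (BinaryExpression
+ # is idem), whereas one ending in a function call does not (calls fall back to idem_Object,
+ # which returns false).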
+ def ends_with_idem(o)
+ if o.is_a?(Puppet::Pops::Model::BlockExpression)
+ last = o.statements[-1]
+ idem(last) ? last : nil
+ else
+ idem(o) ? o : nil
+ end
+ end
+
#---ASSIGNMENT CHECKS
def assign_VariableExpression(o, via_index)
varname_string = varname_to_s(o.expr)
if varname_string =~ Puppet::Pops::Patterns::NUMERIC_VAR_NAME
acceptor.accept(Issues::ILLEGAL_NUMERIC_ASSIGNMENT, o, :varname => varname_string)
end
# Can not assign to something in another namespace (i.e. a '::' in the name is not legal)
if acceptor.will_accept? Issues::CROSS_SCOPE_ASSIGNMENT
if varname_string =~ /::/
acceptor.accept(Issues::CROSS_SCOPE_ASSIGNMENT, o, :name => varname_string)
end
end
# TODO: Could scan for reassignment of the same variable if done earlier in the same container
# Or if assigning to a parameter (more work).
# TODO: Investigate if there are invalid cases for += assignment
end
def assign_AccessExpression(o, via_index)
# Are indexed assignments allowed at all ? $x[x] = '...'
if acceptor.will_accept? Issues::ILLEGAL_INDEXED_ASSIGNMENT
acceptor.accept(Issues::ILLEGAL_INDEXED_ASSIGNMENT, o)
else
# Then the left expression must be assignable-via-index
assign(o.left_expr, true)
end
end
def assign_Object(o, via_index)
# Can not assign to anything else (differentiate if this is via index or not)
# i.e. 10 = 'hello' vs. 10['x'] = 'hello' (the root is reported as being in error in both cases)
#
acceptor.accept(via_index ? Issues::ILLEGAL_ASSIGNMENT_VIA_INDEX : Issues::ILLEGAL_ASSIGNMENT, o)
end
#---CHECKS
def check_Object(o)
end
def check_Factory(o)
check(o.current)
end
def check_AccessExpression(o)
# Only min range is checked, all other checks are RT checks as they depend on the resulting type
# of the LHS.
if o.keys.size < 1
acceptor.accept(Issues::MISSING_INDEX, o)
end
end
def check_AssignmentExpression(o)
- acceptor.accept(Issues::UNSUPPORTED_OPERATOR, o, {:operator => o.operator}) unless [:'=', :'+=', :'-='].include? o.operator
- assign(o.left_expr)
- rvalue(o.right_expr)
+ case o.operator
+ when :'='
+ assign(o.left_expr)
+ rvalue(o.right_expr)
+ when :'+=', :'-='
+ acceptor.accept(Issues::APPENDS_DELETES_NO_LONGER_SUPPORTED, o, {:operator => o.operator})
+ else
+ acceptor.accept(Issues::UNSUPPORTED_OPERATOR, o, {:operator => o.operator})
+ end
end
# Checks that operation with :+> is contained in a ResourceOverride or Collector.
#
# Parent of an AttributeOperation can be one of:
# * CollectExpression
# * ResourceOverride
# * ResourceBody (ILLEGAL this is a regular resource expression)
# * ResourceDefaults (ILLEGAL)
#
def check_AttributeOperation(o)
if o.operator == :'+>'
# Append operator use is constrained
parent = o.eContainer
unless parent.is_a?(Model::CollectExpression) || parent.is_a?(Model::ResourceOverrideExpression)
acceptor.accept(Issues::ILLEGAL_ATTRIBUTE_APPEND, o, {:name=>o.attribute_name, :parent=>parent})
end
end
rvalue(o.value_expr)
end
+ def check_AttributesOperation(o)
+ # The '* =>' (attributes unfold) operation is only allowed in a resource body (inside a resource expression)
+ parent = o.eContainer
+ parent = parent.eContainer unless parent.nil?
+ unless parent.is_a?(Model::ResourceExpression)
+ acceptor.accept(Issues::UNSUPPORTED_OPERATOR_IN_CONTEXT, o, :operator=>'* =>')
+ end
+
+ rvalue(o.expr)
+ end
+
def check_BinaryExpression(o)
rvalue(o.left_expr)
rvalue(o.right_expr)
end
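+ # Flags idem statements that appear before the last position in a block; e.g. a bare variable
+ # reference such as '$x' in the middle of a block triggers IDEM_EXPRESSION_NOT_LAST, since it
+ # has no effect on the evaluation state.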
+ def check_BlockExpression(o)
+ o.statements[0..-2].each do |statement|
+ if idem(statement)
+ acceptor.accept(Issues::IDEM_EXPRESSION_NOT_LAST, statement)
+ break # only flag the first
+ end
+ end
+ end
+
def check_CallNamedFunctionExpression(o)
case o.functor_expr
when Puppet::Pops::Model::QualifiedName
# ok
nil
when Puppet::Pops::Model::RenderStringExpression
# helpful to point out this easy to make Epp error
acceptor.accept(Issues::ILLEGAL_EPP_PARAMETERS, o)
else
acceptor.accept(Issues::ILLEGAL_EXPRESSION, o.functor_expr, {:feature=>'function name', :container => o})
end
end
+ def check_EppExpression(o)
+ if o.eContainer.is_a?(Puppet::Pops::Model::LambdaExpression)
+ internal_check_no_capture(o.eContainer, o)
+ end
+ end
+
def check_MethodCallExpression(o)
unless o.functor_expr.is_a? Model::QualifiedName
acceptor.accept(Issues::ILLEGAL_EXPRESSION, o.functor_expr, :feature => 'function name', :container => o)
end
end
def check_CaseExpression(o)
rvalue(o.test)
# There should only be one LiteralDefault case option value
# TODO: Implement this check
end
def check_CaseOption(o)
o.values.each { |v| rvalue(v) }
end
def check_CollectExpression(o)
unless o.type_expr.is_a? Model::QualifiedReference
acceptor.accept(Issues::ILLEGAL_EXPRESSION, o.type_expr, :feature=> 'type name', :container => o)
end
# If a collect expression tries to collect exported resources and storeconfigs is not on
# then it will not work... This was checked in the parser previously. This is a runtime checking
# thing as opposed to a language thing.
if acceptor.will_accept?(Issues::RT_NO_STORECONFIGS) && o.query.is_a?(Model::ExportedQuery)
acceptor.accept(Issues::RT_NO_STORECONFIGS, o)
end
end
# Only used for function names, grammar should not be able to produce something faulty, but
# check anyway if model is created programmatically (it will fail in transformation to AST for sure).
def check_NamedAccessExpression(o)
name = o.right_expr
unless name.is_a? Model::QualifiedName
acceptor.accept(Issues::ILLEGAL_EXPRESSION, name, :feature=> 'function name', :container => o.eContainer)
end
end
+ RESERVED_TYPE_NAMES = {
+ 'type' => true,
+ 'any' => true,
+ 'unit' => true,
+ 'scalar' => true,
+ 'boolean' => true,
+ 'numeric' => true,
+ 'integer' => true,
+ 'float' => true,
+ 'collection' => true,
+ 'array' => true,
+ 'hash' => true,
+ 'tuple' => true,
+ 'struct' => true,
+ 'variant' => true,
+ 'optional' => true,
+ 'enum' => true,
+ 'regexp' => true,
+ 'pattern' => true,
+ 'runtime' => true,
+ }
+
# for 'class', 'define', and function
def check_NamedDefinition(o)
top(o.eContainer, o)
if o.name !~ Puppet::Pops::Patterns::CLASSREF
acceptor.accept(Issues::ILLEGAL_DEFINITION_NAME, o, {:name=>o.name})
end
+
+ if RESERVED_TYPE_NAMES[o.name()]
+ acceptor.accept(Issues::RESERVED_TYPE_NAME, o, {:name => o.name})
+ end
+
+ if violator = ends_with_idem(o.body)
+ acceptor.accept(Issues::IDEM_NOT_ALLOWED_LAST, violator, {:container => o})
+ end
+ end
+
+ def check_HostClassDefinition(o)
+ check_NamedDefinition(o)
+ internal_check_no_capture(o)
+ internal_check_reserved_params(o)
+ end
+
+ def check_ResourceTypeDefinition(o)
+ check_NamedDefinition(o)
+ internal_check_no_capture(o)
+ internal_check_reserved_params(o)
+ end
+
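+ # Checks that a captures-rest parameter (e.g. the '*$args' in '|*$args, $x|', illustrative) is
+ # the last parameter; any earlier position is flagged as CAPTURES_REST_NOT_LAST.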
+ def internal_check_capture_last(o)
+ accepted_index = o.parameters.size() -1
+ o.parameters.each_with_index do |p, index|
+ if p.captures_rest && index != accepted_index
+ acceptor.accept(Issues::CAPTURES_REST_NOT_LAST, p, {:param_name => p.name})
+ end
+ end
+ end
+
+ def internal_check_no_capture(o, container = o)
+ o.parameters.each do |p|
+ if p.captures_rest
+ acceptor.accept(Issues::CAPTURES_REST_NOT_SUPPORTED, p, {:container => container, :param_name => p.name})
+ end
+ end
+ end
+
+ RESERVED_PARAMETERS = {
+ 'name' => true,
+ 'title' => true,
+ }
+
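+ # Flags parameters whose names are reserved; e.g. 'define mytype($title) { }' (illustrative)
+ # is reported because 'title' (like 'name') is a reserved parameter name.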
+ def internal_check_reserved_params(o)
+ o.parameters.each do |p|
+ if RESERVED_PARAMETERS[p.name]
+ acceptor.accept(Issues::RESERVED_PARAMETER, p, {:container => o, :param_name => p.name})
+ end
+ end
end
def check_IfExpression(o)
rvalue(o.test)
end
def check_KeyedEntry(o)
rvalue(o.key)
rvalue(o.value)
# In case there are additional things to forbid than non-rvalues
# acceptor.accept(Issues::ILLEGAL_EXPRESSION, o.key, :feature => 'hash key', :container => o.eContainer)
end
- # A Lambda is a Definition, but it may appear in other scopes than top scope (Which check_Definition asserts).
- #
def check_LambdaExpression(o)
+ internal_check_capture_last(o)
end
def check_LiteralList(o)
o.values.each {|v| rvalue(v) }
end
def check_NodeDefinition(o)
# Check that hostnames are valid hostnames (or regular expressions)
hostname(o.host_matches, o)
hostname(o.parent, o, 'parent') unless o.parent.nil?
top(o.eContainer, o)
+ if violator = ends_with_idem(o.body)
+ acceptor.accept(Issues::IDEM_NOT_ALLOWED_LAST, violator, {:container => o})
+ end
+ unless o.parent.nil?
+ acceptor.accept(Issues::ILLEGAL_NODE_INHERITANCE, o.parent)
+ end
end
# No checking takes place - all expressions using a QualifiedName need to perform their own checks. This is because the
# rules are slightly different depending on the container (A variable allows a numeric start, but not
# other names). This means that (if the lexer/parser so chooses) a QualifiedName
# can be anything when it represents a Bare Word and evaluates to a String.
#
def check_QualifiedName(o)
end
# Checks that the value is a valid UpperCaseWord (a CLASSREF), and optionally if it contains a hyphen.
# DOH: QualifiedReferences are created with LOWER CASE NAMES at parse time
def check_QualifiedReference(o)
# Is this a valid qualified name?
if o.value !~ Puppet::Pops::Patterns::CLASSREF
acceptor.accept(Issues::ILLEGAL_CLASSREF, o, {:name=>o.value})
end
end
def check_QueryExpression(o)
query(o.expr) if o.expr # is optional
end
def relation_Object(o)
rvalue(o)
end
def relation_CollectExpression(o); end
def relation_RelationshipExpression(o); end
def check_Parameter(o)
- if o.name =~ /^[0-9]+$/
+ if o.name =~ /^(?:0x)?[0-9]+$/
acceptor.accept(Issues::ILLEGAL_NUMERIC_PARAMETER, o, :name => o.name)
end
end
#relationship_side: resource
# | resourceref
# | collection
# | variable
# | quotedtext
# | selector
# | casestatement
# | hasharrayaccesses
def check_RelationshipExpression(o)
relation(o.left_expr)
relation(o.right_expr)
end
def check_ResourceExpression(o)
- # A resource expression must have a lower case NAME as its type e.g. 'file { ... }'
- unless o.type_name.is_a? Model::QualifiedName
- acceptor.accept(Issues::ILLEGAL_EXPRESSION, o.type_name, :feature => 'resource type', :container => o)
+ # The expression for the type name cannot be statically checked - this is instead done at runtime
+ # to enable a better error message based on the result of the expression rather than on the static instruction.
+ # (This can be revised, as there are static constructs that are illegal, but doing so would require updating many
+ # tests that expect the detailed reporting.)
+ end
+
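+ # A resource body may contain at most one attributes unfold ('* =>') operation; e.g. (illustrative)
+ # 'file { "/tmp/x": * => $defaults, * => $more }' is flagged with MULTIPLE_ATTRIBUTES_UNFOLD.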
+ def check_ResourceBody(o)
+ seenUnfolding = false
+ o.operations.each do |ao|
+ if ao.is_a?(Puppet::Pops::Model::AttributesOperation)
+ if seenUnfolding
+ acceptor.accept(Issues::MULTIPLE_ATTRIBUTES_UNFOLD, ao)
+ else
+ seenUnfolding = true
+ end
+ end
end
+ end
- # This is a runtime check - the model is valid, but will have runtime issues when evaluated
- # and storeconfigs is not set.
- if acceptor.will_accept?(Issues::RT_NO_STORECONFIGS) && o.exported
- acceptor.accept(Issues::RT_NO_STORECONFIGS_EXPORT, o)
+ def check_ResourceDefaultsExpression(o)
+ if o.form && o.form != :regular
+ acceptor.accept(Issues::NOT_VIRTUALIZEABLE, o)
end
end
- def check_ResourceDefaultsExpression(o)
+ def check_ResourceOverrideExpression(o)
if o.form && o.form != :regular
acceptor.accept(Issues::NOT_VIRTUALIZEABLE, o)
end
end
+ def check_ReservedWord(o)
+ acceptor.accept(Issues::RESERVED_WORD, o, :word => o.word)
+ end
+
def check_SelectorExpression(o)
rvalue(o.left_expr)
end
def check_SelectorEntry(o)
rvalue(o.matching_expr)
end
def check_UnaryExpression(o)
rvalue(o.expr)
end
def check_UnlessExpression(o)
rvalue(o.test)
# TODO: Unless may not have an else part that is an IfExpression (grammar denies this though)
end
# Checks that the variable name is either strictly 0, a decimal number that does not start with 0, or a valid VAR_NAME
def check_VariableExpression(o)
# The expression must be a qualified name
if !o.expr.is_a?(Model::QualifiedName)
acceptor.accept(Issues::ILLEGAL_EXPRESSION, o, :feature => 'name', :container => o)
else
# name must be either a decimal value, or a valid NAME
name = o.expr.value
if name[0,1] =~ /[0-9]/
unless name =~ Puppet::Pops::Patterns::NUMERIC_VAR_NAME
acceptor.accept(Issues::ILLEGAL_NUMERIC_VAR_NAME, o, :name => name)
end
else
unless name =~ Puppet::Pops::Patterns::VAR_NAME
acceptor.accept(Issues::ILLEGAL_VAR_NAME, o, :name => name)
end
end
end
end
#--- HOSTNAME CHECKS
# Transforms Array of host matching expressions into a (Ruby) array of AST::HostName
def hostname_Array(o, semantic, single_feature_name)
if single_feature_name
acceptor.accept(Issues::ILLEGAL_EXPRESSION, o, {:feature=>single_feature_name, :container=>semantic})
end
o.each {|x| hostname(x, semantic, false) }
end
def hostname_String(o, semantic, single_feature_name)
# The 3.x checker only checks for illegal characters - if matching /[^-\w.]/ the name is invalid,
# but this allows pathological names like "a..b......c", "----"
# TODO: Investigate if more illegal hostnames should be flagged.
#
if o =~ Puppet::Pops::Patterns::ILLEGAL_HOSTNAME_CHARS
acceptor.accept(Issues::ILLEGAL_HOSTNAME_CHARS, semantic, :hostname => o)
end
end
def hostname_LiteralValue(o, semantic, single_feature_name)
hostname_String(o.value.to_s, o, single_feature_name)
end
def hostname_ConcatenatedString(o, semantic, single_feature_name)
# Puppet 3.1. only accepts a concatenated string without interpolated expressions
if the_expr = o.segments.index {|s| s.is_a?(Model::TextExpression) }
acceptor.accept(Issues::ILLEGAL_HOSTNAME_INTERPOLATION, o.segments[the_expr].expr)
elsif o.segments.size() != 1
# corner case, bad model, concatenation of several plain strings
acceptor.accept(Issues::ILLEGAL_HOSTNAME_INTERPOLATION, o)
else
# corner case, may be ok, but lexer may have replaced with plain string, this is
# here if it does not
hostname_String(o.segments[0], o.segments[0], false)
end
end
def hostname_QualifiedName(o, semantic, single_feature_name)
hostname_String(o.value.to_s, o, single_feature_name)
end
def hostname_QualifiedReference(o, semantic, single_feature_name)
hostname_String(o.value.to_s, o, single_feature_name)
end
def hostname_LiteralNumber(o, semantic, single_feature_name)
# always ok
end
def hostname_LiteralDefault(o, semantic, single_feature_name)
# always ok
end
def hostname_LiteralRegularExpression(o, semantic, single_feature_name)
# always ok
end
def hostname_Object(o, semantic, single_feature_name)
acceptor.accept(Issues::ILLEGAL_EXPRESSION, o, {:feature=> single_feature_name || 'hostname', :container=>semantic})
end
#---QUERY CHECKS
# Anything not explicitly allowed is flagged as error.
def query_Object(o)
acceptor.accept(Issues::ILLEGAL_QUERY_EXPRESSION, o)
end
# Puppet AST only allows == and !=
#
def query_ComparisonExpression(o)
acceptor.accept(Issues::ILLEGAL_QUERY_EXPRESSION, o) unless [:'==', :'!='].include? o.operator
end
# Allows AND, OR, and checks if left/right are allowed in query.
def query_BooleanExpression(o)
query o.left_expr
query o.right_expr
end
def query_ParenthesizedExpression(o)
query(o.expr)
end
def query_VariableExpression(o); end
def query_QualifiedName(o); end
def query_LiteralNumber(o); end
def query_LiteralString(o); end
def query_LiteralBoolean(o); end
#---RVALUE CHECKS
# By default, all expressions are reported as being rvalues
# Implement specific rvalue checks for those that are not.
#
def rvalue_Expression(o); end
- def rvalue_ResourceDefaultsExpression(o); acceptor.accept(Issues::NOT_RVALUE, o) ; end
-
- def rvalue_ResourceOverrideExpression(o); acceptor.accept(Issues::NOT_RVALUE, o) ; end
-
def rvalue_CollectExpression(o) ; acceptor.accept(Issues::NOT_RVALUE, o) ; end
def rvalue_Definition(o) ; acceptor.accept(Issues::NOT_RVALUE, o) ; end
def rvalue_NodeDefinition(o) ; acceptor.accept(Issues::NOT_RVALUE, o) ; end
def rvalue_UnaryExpression(o) ; rvalue o.expr ; end
#---TOP CHECK
def top_NilClass(o, definition)
# ok, reached the top, no more parents
end
def top_Object(o, definition)
# fail, reached a container that is not top level
acceptor.accept(Issues::NOT_TOP_LEVEL, definition)
end
def top_BlockExpression(o, definition)
# ok, if this is a block representing the body of a class, or is top level
top o.eContainer, definition
end
def top_HostClassDefinition(o, definition)
# ok, stop scanning parents
end
def top_Program(o, definition)
# ok
end
# A LambdaExpression is a BlockExpression, and this method is needed to prevent the polymorph method for BlockExpression
# from accepting a lambda.
# A lambda cannot iteratively create classes, nodes or defines, as the lambda does not have a closure.
#
def top_LambdaExpression(o, definition)
# fail, stop scanning parents
acceptor.accept(Issues::NOT_TOP_LEVEL, definition)
end
+ #--IDEM CHECK
+ def idem_Object(o)
+ false
+ end
+
+ def idem_Nop(o)
+ true
+ end
+
+ def idem_NilClass(o)
+ true
+ end
+
+ def idem_Literal(o)
+ true
+ end
+
+ def idem_LiteralList(o)
+ true
+ end
+
+ def idem_LiteralHash(o)
+ true
+ end
+
+ def idem_Factory(o)
+ idem(o.current)
+ end
+
+ def idem_AccessExpression(o)
+ true
+ end
+
+ def idem_BinaryExpression(o)
+ true
+ end
+
+ def idem_RelationshipExpression(o)
+ # Always side effect
+ false
+ end
+
+ def idem_AssignmentExpression(o)
+ # Always side effect
+ false
+ end
+
+ # Handles UnaryMinusExpression, NotExpression, VariableExpression
+ def idem_UnaryExpression(o)
+ true
+ end
+
+ # Allow (no-effect parentheses) to be used around a productive expression
+ def idem_ParenthesizedExpression(o)
+ idem(o.expr)
+ end
+
+ def idem_RenderExpression(o)
+ false
+ end
+
+ def idem_RenderStringExpression(o)
+ false
+ end
+
+ def idem_BlockExpression(o)
+ # the block is productive (not idem) if at least one statement is productive
+ ! o.statements.any? {|expr| !idem(expr) }
+ end
+
+ # Returns true even though there may be interpolated expressions that have side effects.
+ # Report as idem anyway, as it is very bad design to evaluate an interpolated string for its
+ # side effect only.
+ def idem_ConcatenatedString(o)
+ true
+ end
+
+ # A heredoc is just a string, but it may contain interpolated expressions (which may have side effects).
+ # This is still bad design and should be reported as idem.
+ def idem_HeredocExpression(o)
+ true
+ end
+
+ # May technically have side effects inside the Selector, but this is bad design - treat as idem
+ def idem_SelectorExpression(o)
+ true
+ end
+
+ def idem_IfExpression(o)
+ [o.test, o.then_expr, o.else_expr].all? {|e| idem(e) }
+ end
+
+ # Case expression is idem, if test, and all options are idem
+ def idem_CaseExpression(o)
+ return false if !idem(o.test)
+ ! o.options.any? {|opt| !idem(opt) }
+ end
+
+ # An option is idem if values and the then_expression are idem
+ def idem_CaseOption(o)
+ return false if o.values.any? { |value| !idem(value) }
+ idem(o.then_expr)
+ end
+
#--- NON POLYMORPH, NON CHECKING CODE
# Produces string part of something named, or nil if not a QualifiedName or QualifiedReference
#
def varname_to_s(o)
case o
when Model::QualifiedName
o.value
when Model::QualifiedReference
o.value
else
nil
end
end
end
diff --git a/lib/puppet/pops/validation/validator_factory_3_1.rb b/lib/puppet/pops/validation/validator_factory_3_1.rb
deleted file mode 100644
index 738eac404..000000000
--- a/lib/puppet/pops/validation/validator_factory_3_1.rb
+++ /dev/null
@@ -1,31 +0,0 @@
-# Configures validation suitable for 3.1 + iteration
-#
-class Puppet::Pops::Validation::ValidatorFactory_3_1 < Puppet::Pops::Validation::Factory
- Issues = Puppet::Pops::Issues
-
- # Produces the checker to use
- def checker diagnostic_producer
- Puppet::Pops::Validation::Checker3_1.new(diagnostic_producer)
- end
-
- # Produces the label provider to use
- def label_provider
- Puppet::Pops::Model::ModelLabelProvider.new()
- end
-
- # Produces the severity producer to use
- def severity_producer
- p = super
-
- # Configure each issue that should **not** be an error
- #
- # Validate as per the current runtime configuration
- p[Issues::RT_NO_STORECONFIGS_EXPORT] = Puppet[:storeconfigs] ? :ignore : :warning
- p[Issues::RT_NO_STORECONFIGS] = Puppet[:storeconfigs] ? :ignore : :warning
-
- p[Issues::NAME_WITH_HYPHEN] = :deprecation
- p[Issues::DEPRECATED_NAME_AS_TYPE] = :deprecation
-
- p
- end
-end
diff --git a/lib/puppet/pops/validation/validator_factory_4_0.rb b/lib/puppet/pops/validation/validator_factory_4_0.rb
index 7eae59351..783164016 100644
--- a/lib/puppet/pops/validation/validator_factory_4_0.rb
+++ b/lib/puppet/pops/validation/validator_factory_4_0.rb
@@ -1,31 +1,30 @@
# Configures validation suitable for 4.0
#
class Puppet::Pops::Validation::ValidatorFactory_4_0 < Puppet::Pops::Validation::Factory
Issues = Puppet::Pops::Issues
# Produces the checker to use
def checker diagnostic_producer
Puppet::Pops::Validation::Checker4_0.new(diagnostic_producer)
end
# Produces the label provider to use
def label_provider
Puppet::Pops::Model::ModelLabelProvider.new()
end
# Produces the severity producer to use
def severity_producer
p = super
# Configure each issue that should **not** be an error
#
# Validate as per the current runtime configuration
p[Issues::RT_NO_STORECONFIGS_EXPORT] = Puppet[:storeconfigs] ? :ignore : :warning
p[Issues::RT_NO_STORECONFIGS] = Puppet[:storeconfigs] ? :ignore : :warning
p[Issues::NAME_WITH_HYPHEN] = :error
- p[Issues::DEPRECATED_NAME_AS_TYPE] = :error
p[Issues::EMPTY_RESOURCE_SPECIALIZATION] = :ignore
p
end
end
diff --git a/lib/puppet/pops/visitor.rb b/lib/puppet/pops/visitor.rb
index c35fa0b93..baae887f7 100644
--- a/lib/puppet/pops/visitor.rb
+++ b/lib/puppet/pops/visitor.rb
@@ -1,192 +1,89 @@
# A Visitor performs delegation to a given receiver based on the configuration of the Visitor.
# A new visitor is created with a given receiver, a method prefix, min, and max argument counts.
# e.g.
# visitor = Visitor.new(self, "visit_from", 1, 1)
# will make the visitor call "self.visit_from_CLASS(x)" where CLASS is resolved to the given
# object's class, or one of its ancestors; the first class for which there is an implementation of
# a method will be selected.
#
# Raises RuntimeError if there are too few or too many arguments, or if the receiver is not
# configured to handle a given visiting object.
#
class Puppet::Pops::Visitor
attr_reader :receiver, :message, :min_args, :max_args, :cache
def initialize(receiver, message, min_args=0, max_args=nil)
raise ArgumentError.new("min_args must be >= 0") if min_args < 0
raise ArgumentError.new("max_args must be >= min_args or nil") if max_args && max_args < min_args
@receiver = receiver
@message = message
@min_args = min_args
@max_args = max_args
@cache = Hash.new
end
# Visit the configured receiver
def visit(thing, *args)
visit_this(@receiver, thing, *args)
end
# Visit an explicit receiver
def visit_this(receiver, thing, *args)
raise "Visitor Error: Too few arguments passed. min = #{@min_args}" unless args.length >= @min_args
if @max_args
raise "Visitor Error: Too many arguments passed. max = #{@max_args}" unless args.length <= @max_args
end
if method_name = @cache[thing.class]
return receiver.send(method_name, thing, *args)
else
thing.class.ancestors().each do |ancestor|
method_name = :"#{@message}_#{ancestor.name.split(/::/).last}"
next unless receiver.respond_to?(method_name, true)
@cache[thing.class] = method_name
return receiver.send(method_name, thing, *args)
end
end
raise "Visitor Error: the configured receiver (#{receiver.class}) can't handle instance of: #{thing.class}"
end
# Visit an explicit receiver with 0 args
# (This is ~30% faster than calling the general method)
#
def visit_this_0(receiver, thing)
if method_name = @cache[thing.class]
return receiver.send(method_name, thing)
end
visit_this(receiver, thing)
end
# Visit an explicit receiver with 1 arg
# (This is ~30% faster than calling the general method)
#
def visit_this_1(receiver, thing, arg)
if method_name = @cache[thing.class]
return receiver.send(method_name, thing, arg)
end
visit_this(receiver, thing, arg)
end
# Visit an explicit receiver with 2 args
# (This is ~30% faster than calling the general method)
#
def visit_this_2(receiver, thing, arg1, arg2)
if method_name = @cache[thing.class]
return receiver.send(method_name, thing, arg1, arg2)
end
visit_this(receiver, thing, arg1, arg2)
end
# Visit an explicit receiver with 3 args
# (This is ~30% faster than calling the general method)
#
def visit_this_3(receiver, thing, arg1, arg2, arg3)
if method_name = @cache[thing.class]
return receiver.send(method_name, thing, arg1, arg2, arg3)
end
visit_this(receiver, thing, arg1, arg2, arg3)
end
- # This is an alternative implementation that separates the finding of method names
- # (Cached in the Visitor2 class), and bound methods (in an inner Delegator class) that
- # are cached for this receiver instance. This is based on micro benchmarks measuring that a send is slower
- # that directly calling a bound method.
- # Larger benchmark however show that the overhead is fractional. Additional (larger) tests may
- # show otherwise.
- # To use this class instead of the regular Visitor.
- # @@the_visitor_c = Visitor2.new(...)
- # @@the_visitor = @@the_visitor_c.instance(self)
- # then visit with one of the Delegator's visit methods.
- #
- # Performance Note: there are still issues with this implementation (although cleaner) since it requires
- # holding on to the first instance in order to compute respond_do?. This is required if the class
- # is using method_missing? which cannot be computed by introspection of the class (which would be
- # ideal). Another approach is to pre-scan all the available methods starting with the pattern for
- # the visitor, scan the class, and just check if the class has this method. (This will not work
- # for dispatch to methods that requires method missing. (Maybe that does not matter)
- # Further experiments could try looking up unbound methods via the class, cloning and binding them
- # instead of again looking them up with #method(name)
- # Also note that this implementation does not check min/max args on each call - there was not much gain
- # from skipping this. It is safe to skip, but produces less friendly errors if there is an error in the
- # implementation.
- #
- class Visitor2
- attr_reader :receiver, :message, :min_args, :max_args, :cache
-
- def initialize(receiver, message, min_args=0, max_args=nil)
- raise ArgumentError.new("receiver can not be nil") if receiver.nil?
- raise ArgumentError.new("min_args must be >= 0") if min_args < 0
- raise ArgumentError.new("max_args must be >= min_args or nil") if max_args && max_args < min_args
-
- @receiver = receiver
- @message = message
- @min_args = min_args
- @max_args = max_args
- @cache = Hash.new
- end
-
- def instance(receiver)
- # Create a visitable instance for the receiver
- Delegator.new(receiver, self)
- end
-
- # Produce the name of the method to use
- # @return [Symbol, nil] the method name symbol, or nil if there is no method to call for thing
- #
- def method_name_for(thing)
- if method_name = @cache[thing.class]
- return method_name
- else
- thing.class.ancestors().each do |ancestor|
- method_name = :"#{@message}_#{ancestor.name.split(/::/).last}"
- next unless receiver.respond_to?(method_name, true)
- @cache[thing.class] = method_name
- return method_name
- end
- end
- end
-
- class Delegator
- attr_reader :receiver, :visitor, :cache
- def initialize(receiver, visitor)
- @receiver = receiver
- @visitor = visitor
- @cache = Hash.new
- end
-
- # Visit
- def visit(thing, *args)
- if method = @cache[thing.class]
- return method.call(thing, *args)
- else
- method_name = visitor.method_name_for(thing)
- method = receiver.method(method_name)
- unless method
- raise "Visitor Error: the configured receiver (#{receiver.class}) can't handle instance of: #{thing.class}"
- end
- @cache[thing.class] = method
- method.call(thing, *args)
- end
- end
-
- # Visit an explicit receiver with 0 args
- # (This is ~30% faster than calling the general method)
- #
- def visit_0(thing)
- (method = @cache[thing.class]) ? method.call(thing) : visit(thing)
- end
-
- def visit_1(thing, arg)
- (method = @cache[thing.class]) ? method.call(thing, arg) : visit(thing, arg)
- end
-
- def visit_2(thing, arg1, arg2)
- (method = @cache[thing.class]) ? method.call(thing, arg1, arg2) : visit(thing, arg1, arg2)
- end
-
- def visit_3(thing, arg1, arg2, arg3)
- (method = @cache[thing.class]) ? method.call(thing, arg1, arg2, arg3) : visit(thing, arg1, arg2, arg3)
- end
-
- end
- end
end
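For orientation, here is a minimal usage sketch of the visitor above. The receiver class below is hypothetical (not part of Puppet); it only needs methods named <message>_<ClassName>, and dispatch walks the visited object's ancestor chain until one of them matches.

class Stringifier
  def initialize
    @visitor = Puppet::Pops::Visitor.new(self, "string", 0, 0)
  end

  def stringify(thing)
    @visitor.visit_this_0(self, thing)
  end

  # Matches Integer and Float via their Numeric ancestor
  def string_Numeric(o)
    "number(#{o})"
  end

  # Fallback for everything else
  def string_Object(o)
    "object(#{o.class})"
  end
end

Stringifier.new.stringify(42)    # => "number(42)"
Stringifier.new.stringify(:sym)  # => "object(Symbol)"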
diff --git a/lib/puppet/provider/exec.rb b/lib/puppet/provider/exec.rb
index e5b90b6e2..70bcda8c0 100644
--- a/lib/puppet/provider/exec.rb
+++ b/lib/puppet/provider/exec.rb
@@ -1,91 +1,100 @@
require 'puppet/provider'
require 'puppet/util/execution'
class Puppet::Provider::Exec < Puppet::Provider
include Puppet::Util::Execution
def run(command, check = false)
output = nil
status = nil
dir = nil
checkexe(command)
if dir = resource[:cwd]
unless File.directory?(dir)
if check
dir = nil
else
self.fail "Working directory '#{dir}' does not exist"
end
end
end
dir ||= Dir.pwd
debug "Executing#{check ? " check": ""} '#{command}'"
begin
# Do our chdir
Dir.chdir(dir) do
environment = {}
environment[:PATH] = resource[:path].join(File::PATH_SEPARATOR) if resource[:path]
if envlist = resource[:environment]
envlist = [envlist] unless envlist.is_a? Array
envlist.each do |setting|
if setting =~ /^(\w+)=((.|\n)+)$/
env_name = $1
value = $2
if environment.include?(env_name) || environment.include?(env_name.to_sym)
warning "Overriding environment setting '#{env_name}' with '#{value}'"
end
environment[env_name] = value
else
warning "Cannot understand environment setting #{setting.inspect}"
end
end
end
+ if Puppet.features.microsoft_windows?
+ exec_user = resource[:user]
+ # Etc.getpwuid() returns nil on Windows
+ elsif resource.current_username == resource[:user]
+ exec_user = nil
+ else
+ exec_user = resource[:user]
+ end
+
Timeout::timeout(resource[:timeout]) do
# note that we are passing "false" for the "override_locale" parameter, which ensures that the user's
# default/system locale will be respected. Callers may override this behavior by setting locale-related
# environment variables (LANG, LC_ALL, etc.) in their 'environment' configuration.
output = Puppet::Util::Execution.execute(command, :failonfail => false, :combine => true,
- :uid => resource[:user], :gid => resource[:group],
+ :uid => exec_user, :gid => resource[:group],
:override_locale => false,
:custom_environment => environment)
end
# The shell returns 127 if the command is missing.
if output.exitstatus == 127
raise ArgumentError, output
end
end
rescue Errno::ENOENT => detail
self.fail Puppet::Error, detail.to_s, detail
end
# Return output twice as a process status object was returned before, but only exitstatus was ever called.
# Output has the exitstatus on it so it is returned instead. This is here twice as changing this
# would result in a change to the underlying API.
return output, output
end
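# For reference (derived from the branches below), extractexe pulls the
# executable out of a command so it can be validated, e.g.:
#   extractexe(['/bin/echo', 'hi'])          # => "/bin/echo"        (array form)
#   extractexe('"/opt/my tool/run" --fast')  # => "/opt/my tool/run" (quoted form)
#   extractexe('echo hello world')           # => "echo"             (first word)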
def extractexe(command)
if command.is_a? Array
command.first
elsif match = /^"([^"]+)"|^'([^']+)'/.match(command)
# extract whichever of the two sides matched the content.
match[1] or match[2]
else
command.split(/ /)[0]
end
end
def validatecmd(command)
exe = extractexe(command)
# if we're not fully qualified, require a path
self.fail "'#{command}' is not qualified and no path was specified. Please qualify the command or specify a path." if !absolute_path?(exe) and resource[:path].nil?
end
end
diff --git a/lib/puppet/provider/file/windows.rb b/lib/puppet/provider/file/windows.rb
index 5c8038db1..c0ef68cd9 100644
--- a/lib/puppet/provider/file/windows.rb
+++ b/lib/puppet/provider/file/windows.rb
@@ -1,105 +1,104 @@
Puppet::Type.type(:file).provide :windows do
desc "Uses Microsoft Windows functionality to manage file ownership and permissions."
confine :operatingsystem => :windows
has_feature :manages_symlinks if Puppet.features.manages_symlinks?
include Puppet::Util::Warnings
if Puppet.features.microsoft_windows?
require 'puppet/util/windows'
- require 'puppet/util/adsi'
include Puppet::Util::Windows::Security
end
# Determine if the account is valid, and if so, return the UID
def name2id(value)
- Puppet::Util::Windows::Security.name_to_sid(value)
+ Puppet::Util::Windows::SID.name_to_sid(value)
end
# If it's a valid SID, get the name. Otherwise, it's already a name,
# so just return it.
def id2name(id)
- if Puppet::Util::Windows::Security.valid_sid?(id)
- Puppet::Util::Windows::Security.sid_to_name(id)
+ if Puppet::Util::Windows::SID.valid_sid?(id)
+ Puppet::Util::Windows::SID.sid_to_name(id)
else
id
end
end
# We use users and groups interchangeably, so use the same methods for both
# (the type expects different methods, so we have to oblige).
alias :uid2name :id2name
alias :gid2name :id2name
alias :name2gid :name2id
alias :name2uid :name2id
def owner
return :absent unless resource.stat
get_owner(resource[:path])
end
def owner=(should)
begin
set_owner(should, resolved_path)
rescue => detail
raise Puppet::Error, "Failed to set owner to '#{should}': #{detail}", detail.backtrace
end
end
def group
return :absent unless resource.stat
get_group(resource[:path])
end
def group=(should)
begin
set_group(should, resolved_path)
rescue => detail
raise Puppet::Error, "Failed to set group to '#{should}': #{detail}", detail.backtrace
end
end
def mode
if resource.stat
mode = get_mode(resource[:path])
mode ? mode.to_s(8) : :absent
else
:absent
end
end
def mode=(value)
begin
set_mode(value.to_i(8), resource[:path])
rescue => detail
error = Puppet::Error.new("failed to set mode #{mode} on #{resource[:path]}: #{detail.message}")
error.set_backtrace detail.backtrace
raise error
end
:file_changed
end
def validate
if [:owner, :group, :mode].any?{|p| resource[p]} and !supports_acl?(resource[:path])
resource.fail("Can only manage owner, group, and mode on filesystems that support Windows ACLs, such as NTFS")
end
end
attr_reader :file
private
def file
@file ||= Puppet::FileSystem.pathname(resource[:path])
end
def resolved_path
path = file()
# under POSIX, :manage means use lchown - i.e. operate on the link
return path.to_s if resource[:links] == :manage
# otherwise, use chown -- that will resolve the link IFF it is a link
# otherwise it will operate on the path
Puppet::FileSystem.symlink?(path) ? Puppet::FileSystem.readlink(path) : path.to_s
end
end
diff --git a/lib/puppet/provider/group/windows_adsi.rb b/lib/puppet/provider/group/windows_adsi.rb
index 56c0c175b..c6db2af92 100644
--- a/lib/puppet/provider/group/windows_adsi.rb
+++ b/lib/puppet/provider/group/windows_adsi.rb
@@ -1,86 +1,86 @@
-require 'puppet/util/adsi'
+require 'puppet/util/windows'
Puppet::Type.type(:group).provide :windows_adsi do
desc "Local group management for Windows. Group members can be both users and groups.
Additionally, local groups can contain domain users."
defaultfor :operatingsystem => :windows
confine :operatingsystem => :windows
has_features :manages_members
def members_insync?(current, should)
return false unless current
# By comparing account SIDs we don't have to worry about case
# sensitivity, or canonicalization of account names.
# Cannot use munge of the group property to canonicalize @should
# since the default array_matching comparison is not commutative
should_empty = should.nil? or should.empty?
return false if current.empty? != should_empty
# dupes automatically weeded out when hashes built
- Puppet::Util::ADSI::Group.name_sid_hash(current) == Puppet::Util::ADSI::Group.name_sid_hash(should)
+ Puppet::Util::Windows::ADSI::Group.name_sid_hash(current) == Puppet::Util::Windows::ADSI::Group.name_sid_hash(should)
end
def members_to_s(users)
return '' if users.nil? or !users.kind_of?(Array)
users = users.map do |user_name|
- sid = Puppet::Util::Windows::Security.name_to_sid_object(user_name)
+ sid = Puppet::Util::Windows::SID.name_to_sid_object(user_name)
if sid.account =~ /\\/
- account, _ = Puppet::Util::ADSI::User.parse_name(sid.account)
+ account, _ = Puppet::Util::Windows::ADSI::User.parse_name(sid.account)
else
account = sid.account
end
resource.debug("#{sid.domain}\\#{account} (#{sid.to_s})")
"#{sid.domain}\\#{account}"
end
return users.join(',')
end
def group
- @group ||= Puppet::Util::ADSI::Group.new(@resource[:name])
+ @group ||= Puppet::Util::Windows::ADSI::Group.new(@resource[:name])
end
def members
group.members
end
def members=(members)
group.set_members(members)
end
def create
- @group = Puppet::Util::ADSI::Group.create(@resource[:name])
+ @group = Puppet::Util::Windows::ADSI::Group.create(@resource[:name])
@group.commit
self.members = @resource[:members]
end
def exists?
- Puppet::Util::ADSI::Group.exists?(@resource[:name])
+ Puppet::Util::Windows::ADSI::Group.exists?(@resource[:name])
end
def delete
- Puppet::Util::ADSI::Group.delete(@resource[:name])
+ Puppet::Util::Windows::ADSI::Group.delete(@resource[:name])
end
# Only flush if we created or modified a group, not deleted
def flush
@group.commit if @group
end
def gid
- Puppet::Util::Windows::Security.name_to_sid(@resource[:name])
+ Puppet::Util::Windows::SID.name_to_sid(@resource[:name])
end
def gid=(value)
fail "gid is read-only"
end
def self.instances
- Puppet::Util::ADSI::Group.map { |g| new(:ensure => :present, :name => g.name) }
+ Puppet::Util::Windows::ADSI::Group.map { |g| new(:ensure => :present, :name => g.name) }
end
end
diff --git a/lib/puppet/provider/nameservice/directoryservice.rb b/lib/puppet/provider/nameservice/directoryservice.rb
index 0dd4113a0..9f3108911 100644
--- a/lib/puppet/provider/nameservice/directoryservice.rb
+++ b/lib/puppet/provider/nameservice/directoryservice.rb
@@ -1,589 +1,588 @@
require 'puppet'
require 'puppet/provider/nameservice'
require 'facter/util/plist'
require 'fileutils'
class Puppet::Provider::NameService::DirectoryService < Puppet::Provider::NameService
# JJM: Dive into the singleton_class
class << self
# JJM: This allows us to pass information when calling
# Puppet::Type.type
# e.g. Puppet::Type.type(:user).provide :directoryservice, :ds_path => "Users"
# This is referenced in the get_ds_path class method
attr_writer :ds_path
attr_writer :macosx_version_major
end
initvars
commands :dscl => "/usr/bin/dscl"
commands :dseditgroup => "/usr/sbin/dseditgroup"
commands :sw_vers => "/usr/bin/sw_vers"
commands :plutil => '/usr/bin/plutil'
confine :operatingsystem => :darwin
defaultfor :operatingsystem => :darwin
# JJM 2007-07-25: This map is used to map NameService attributes to their
# corresponding DirectoryService attribute names.
# See: http://images.apple.com/server/docs.Open_Directory_v10.4.pdf
# JJM: Note, this is de-coupled from the Puppet::Type, and must
# be actively maintained. There may also be collisions with different
# types (Users, Groups, Mounts, Hosts, etc...)
def ds_to_ns_attribute_map; self.class.ds_to_ns_attribute_map; end
def self.ds_to_ns_attribute_map
{
'RecordName' => :name,
'PrimaryGroupID' => :gid,
'NFSHomeDirectory' => :home,
'UserShell' => :shell,
'UniqueID' => :uid,
'RealName' => :comment,
'Password' => :password,
'GeneratedUID' => :guid,
'IPAddress' => :ip_address,
'ENetAddress' => :en_address,
'GroupMembership' => :members,
}
end
# JJM The same table as above, inverted.
def ns_to_ds_attribute_map; self.class.ns_to_ds_attribute_map end
def self.ns_to_ds_attribute_map
@ns_to_ds_attribute_map ||= ds_to_ns_attribute_map.invert
end
def self.password_hash_dir
'/var/db/shadow/hash'
end
def self.users_plist_dir
'/var/db/dslocal/nodes/Default/users'
end
def self.instances
# JJM Class method that provides an array of instance objects of this
# type.
# JJM: Properties are dependent on the Puppet::Type we're managing.
type_property_array = [:name] + @resource_type.validproperties
# Create a new instance of this Puppet::Type for each object present
# on the system.
list_all_present.collect do |name_string|
self.new(single_report(name_string, *type_property_array))
end
end
def self.get_ds_path
# JJM: 2007-07-24 This method dynamically returns the DS path we're concerned with.
# For example, if we're working with a user type, this will be /Users
# with a group type, this will be /Groups.
# @ds_path is an attribute of the class itself.
return @ds_path if defined?(@ds_path)
# JJM: "Users" or "Groups" etc ... (Based on the Puppet::Type)
# Remember this is a class method, so self.class is Class
# Also, @resource_type seems to be the reference to the
# Puppet::Type this class object is providing for.
@resource_type.name.to_s.capitalize + "s"
end
def self.get_macosx_version_major
return @macosx_version_major if defined?(@macosx_version_major)
begin
- # Make sure we've loaded all of the facts
- Facter.loadfacts
-
product_version_major = Facter.value(:macosx_productversion_major)
fail("#{product_version_major} is not supported by the directoryservice provider") if %w{10.0 10.1 10.2 10.3 10.4}.include?(product_version_major)
@macosx_version_major = product_version_major
return @macosx_version_major
rescue Puppet::ExecutionFailure => detail
fail("Could not determine OS X version: #{detail}")
end
end
def self.list_all_present
# JJM: List all objects of this Puppet::Type already present on the system.
begin
dscl_output = execute(get_exec_preamble("-list"))
rescue Puppet::ExecutionFailure
fail("Could not get #{@resource_type.name} list from DirectoryService")
end
dscl_output.split("\n")
end
def self.parse_dscl_plist_data(dscl_output)
Plist.parse_xml(dscl_output)
end
def self.generate_attribute_hash(input_hash, *type_properties)
attribute_hash = {}
input_hash.keys.each do |key|
ds_attribute = key.sub("dsAttrTypeStandard:", "")
next unless (ds_to_ns_attribute_map.keys.include?(ds_attribute) and type_properties.include? ds_to_ns_attribute_map[ds_attribute])
ds_value = input_hash[key]
case ds_to_ns_attribute_map[ds_attribute]
when :members
ds_value = ds_value # only members uses arrays so far
when :gid, :uid
# OS X stores objects like uid/gid as strings.
# Try casting to an integer for these cases to be
# consistent with the other providers and the group type
# validation
begin
ds_value = Integer(ds_value[0])
rescue ArgumentError
ds_value = ds_value[0]
end
else ds_value = ds_value[0]
end
attribute_hash[ds_to_ns_attribute_map[ds_attribute]] = ds_value
end
# NBK: need to read the existing password here as it's not actually
# stored in the user record. It is stored at a path that involves the
# UUID of the user record for non-Mobile local accounts.
# Mobile Accounts are out of scope for this provider for now
attribute_hash[:password] = self.get_password(attribute_hash[:guid], attribute_hash[:name]) if @resource_type.validproperties.include?(:password) and Puppet.features.root?
attribute_hash
end
def self.single_report(resource_name, *type_properties)
# JJM 2007-07-24:
# Given the name of an object and a list of properties of that
# object, return all property values in a hash.
#
# This class method returns nil if the object doesn't exist
# Otherwise, it returns a hash of the object properties.
all_present_str_array = list_all_present
# NBK: shortcut the process if the resource is missing
return nil unless all_present_str_array.include? resource_name
dscl_vector = get_exec_preamble("-read", resource_name)
begin
dscl_output = execute(dscl_vector)
rescue Puppet::ExecutionFailure
fail("Could not get report; command execution failed.")
end
# (#11593) Remove support for OS X 10.4 and earlier
fail_if_wrong_version
dscl_plist = self.parse_dscl_plist_data(dscl_output)
self.generate_attribute_hash(dscl_plist, *type_properties)
end
def self.fail_if_wrong_version
- fail("Puppet does not support OS X versions < 10.5") unless self.get_macosx_version_major >= "10.5"
+ if (Puppet::Util::Package.versioncmp(self.get_macosx_version_major, '10.5') == -1)
+ fail("Puppet does not support OS X versions < 10.5")
+ end
end
def self.get_exec_preamble(ds_action, resource_name = nil)
# JJM 2007-07-24
# DSCL commands are often repetitive and contain the same positional
# arguments over and over. See http://developer.apple.com/documentation/Porting/Conceptual/PortingUnix/additionalfeatures/chapter_10_section_9.html
# for an example of what I mean.
# This method spits out proper DSCL commands for us.
# We EXPECT name to be @resource[:name] when called from an instance object.
# (#11593) Remove support for OS X 10.4 and earlier
fail_if_wrong_version
command_vector = [ command(:dscl), "-plist", "." ]
# JJM: The actual action to perform. See "man dscl"
# Common actions: -create, -delete, -merge, -append, -passwd
command_vector << ds_action
# JJM: get_ds_path will spit back "Users" or "Groups",
# etc... Depending on the Puppet::Type of our self.
if resource_name
command_vector << "/#{get_ds_path}/#{resource_name}"
else
command_vector << "/#{get_ds_path}"
end
# JJM: This returns most of the preamble of the command.
# e.g. 'dscl / -create /Users/mccune'
command_vector
end
def self.set_password(resource_name, guid, password_hash)
# Use Puppet::Util::Package.versioncmp() to catch the scenario where a
# version '10.10' would be < '10.7' with simple string comparison. This
# if-statement only executes if the current version is less-than 10.7
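# For example (lexicographic String comparison vs. versioncmp):
#   '10.10' < '10.7'                                   # => true (wrong for versions)
#   Puppet::Util::Package.versioncmp('10.10', '10.7')  # => 1    (10.10 is newer)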
if (Puppet::Util::Package.versioncmp(get_macosx_version_major, '10.7') == -1)
password_hash_file = "#{password_hash_dir}/#{guid}"
begin
File.open(password_hash_file, 'w') { |f| f.write(password_hash)}
rescue Errno::EACCES => detail
fail("Could not write to password hash file: #{detail}")
end
# NBK: For shadow hashes, the user AuthenticationAuthority must contain a value of
# ";ShadowHash;". The LKDC in 10.5 makes this more interesting though as it
# will dynamically generate ;Kerberosv5;;username@LKDC:SHA1 attributes if
# missing. Thus we make sure we only set ;ShadowHash; if it is missing, and
# we can do this with the merge command. This allows people to continue to
# use other custom AuthenticationAuthority attributes without stomping on them.
#
# There is a potential problem here in that we're only doing this when setting
# the password, and the attribute could get modified at other times while the
# hash doesn't change and so this doesn't get called at all... but
# without switching all the other attributes to merge instead of create I can't
# see a simple enough solution for this that doesn't modify the user record
# every single time. This should be a rather rare edge case. (famous last words)
dscl_vector = self.get_exec_preamble("-merge", resource_name)
dscl_vector << "AuthenticationAuthority" << ";ShadowHash;"
begin
execute(dscl_vector)
rescue Puppet::ExecutionFailure => detail
fail("Could not set AuthenticationAuthority.")
end
else
# 10.7 uses salted SHA512 password hashes which are 128 characters plus
# an 8 character salt. Previous versions used a SHA1 hash padded with
# zeroes. If someone attempts to use a password hash that worked with
# a previous version of OS X, we will fail early and warn them.
if password_hash.length != 136
fail("OS X 10.7 requires a Salted SHA512 hash password of 136 characters. \
Please check your password and try again.")
end
if Puppet::FileSystem.exist?("#{users_plist_dir}/#{resource_name}.plist")
# If a plist already exists in /var/db/dslocal/nodes/Default/users, then
# we will need to extract the binary plist from the 'ShadowHashData'
# key, log the new password into the resultant plist's 'SALTED-SHA512'
# key, and then save the entire structure back.
users_plist = Plist::parse_xml(plutil( '-convert', 'xml1', '-o', '/dev/stdout', \
"#{users_plist_dir}/#{resource_name}.plist"))
# users_plist['ShadowHashData'][0].string is actually a binary plist
# that's nested INSIDE the user's plist (which itself is a binary
# plist). If we encounter a user plist that DOESN'T have a
# ShadowHashData field, create one.
if users_plist['ShadowHashData']
password_hash_plist = users_plist['ShadowHashData'][0].string
converted_hash_plist = convert_binary_to_xml(password_hash_plist)
else
users_plist['ShadowHashData'] = [StringIO.new]
converted_hash_plist = {'SALTED-SHA512' => StringIO.new}
end
# converted_hash_plist['SALTED-SHA512'].string expects a Base64 encoded
# string. The password_hash provided as a resource attribute is a
# hex value. We need to convert the provided hex value to a Base64
# encoded string to nest it in the converted hash plist.
converted_hash_plist['SALTED-SHA512'].string = \
password_hash.unpack('a2'*(password_hash.size/2)).collect { |i| i.hex.chr }.join
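# (For example, the hex string "6465" unpacks to ["64", "65"] and joins to the raw bytes "de".)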
# Finally, we can convert the nested plist back to binary, embed it
# into the user's plist, and convert the resultant plist back to
# a binary plist.
changed_plist = convert_xml_to_binary(converted_hash_plist)
users_plist['ShadowHashData'][0].string = changed_plist
Plist::Emit.save_plist(users_plist, "#{users_plist_dir}/#{resource_name}.plist")
plutil('-convert', 'binary1', "#{users_plist_dir}/#{resource_name}.plist")
end
end
end
def self.get_password(guid, username)
# Use Puppet::Util::Package.versioncmp() to catch the scenario where a
# version '10.10' would be < '10.7' with simple string comparison. This
# if-statement only executes if the current version is less-than 10.7
if (Puppet::Util::Package.versioncmp(get_macosx_version_major, '10.7') == -1)
password_hash = nil
password_hash_file = "#{password_hash_dir}/#{guid}"
if Puppet::FileSystem.exist?(password_hash_file) and File.file?(password_hash_file)
fail("Could not read password hash file at #{password_hash_file}") if not File.readable?(password_hash_file)
f = File.new(password_hash_file)
password_hash = f.read
f.close
end
password_hash
else
if Puppet::FileSystem.exist?("#{users_plist_dir}/#{username}.plist")
# If a plist exists in /var/db/dslocal/nodes/Default/users, we will
# extract the binary plist from the 'ShadowHashData' key, decode the
# salted-SHA512 password hash, and then return it.
users_plist = Plist::parse_xml(plutil('-convert', 'xml1', '-o', '/dev/stdout', "#{users_plist_dir}/#{username}.plist"))
if users_plist['ShadowHashData']
# users_plist['ShadowHashData'][0].string is actually a binary plist
# that's nested INSIDE the user's plist (which itself is a binary
# plist).
password_hash_plist = users_plist['ShadowHashData'][0].string
converted_hash_plist = convert_binary_to_xml(password_hash_plist)
# converted_hash_plist['SALTED-SHA512'].string is a Base64 encoded
# string. The password_hash provided as a resource attribute is a
# hex value. We need to convert the Base64 encoded string to a
# hex value and provide it back to Puppet.
password_hash = converted_hash_plist['SALTED-SHA512'].string.unpack("H*")[0]
password_hash
end
end
end
end
# This method will accept a hash that has been returned from Plist::parse_xml
# and convert it to a binary plist (string value).
def self.convert_xml_to_binary(plist_data)
Puppet.debug('Converting XML plist to binary')
Puppet.debug('Executing: \'plutil -convert binary1 -o - -\'')
IO.popen('plutil -convert binary1 -o - -', 'r+') do |io|
io.write plist_data.to_plist
io.close_write
@converted_plist = io.read
end
@converted_plist
end
# This method will accept a binary plist (as a string) and convert it to a
# hash via Plist::parse_xml.
def self.convert_binary_to_xml(plist_data)
Puppet.debug('Converting binary plist to XML')
Puppet.debug('Executing: \'plutil -convert xml1 -o - -\'')
IO.popen('plutil -convert xml1 -o - -', 'r+') do |io|
io.write plist_data
io.close_write
@converted_plist = io.read
end
Puppet.debug('Converting XML values to a hash.')
@plist_hash = Plist::parse_xml(@converted_plist)
@plist_hash
end
# Unlike most other *nixes, OS X doesn't provide built in functionality
# for automatically assigning uids and gids to accounts, so we set up these
# methods for consumption by functionality like --mkusers
# By default we restrict to a reasonably sane range for system accounts
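# For example, if the existing uids are [20, 21, 22, 30], next_system_id('uid')
# returns 23 (the first unused id at or above min_id).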
def self.next_system_id(id_type, min_id=20)
dscl_args = ['.', '-list']
if id_type == 'uid'
dscl_args << '/Users' << 'uid'
elsif id_type == 'gid'
dscl_args << '/Groups' << 'gid'
else
fail("Invalid id_type #{id_type}. Only 'uid' and 'gid' supported")
end
dscl_out = dscl(dscl_args)
# We're ok with throwing away negative uids here.
ids = dscl_out.split.compact.collect { |l| l.to_i if l.match(/^\d+$/) }
ids.compact!.sort! { |a,b| a.to_f <=> b.to_f }
# We're just looking for an unused id in our sorted array.
ids.each_index do |i|
next_id = ids[i] + 1
return next_id if ids[i+1] != next_id and next_id >= min_id
end
end
def ensure=(ensure_value)
super
# We need to loop over all valid properties for the type we're
# managing and call the method which sets that property value
# dscl can't create everything at once unfortunately.
if ensure_value == :present
@resource.class.validproperties.each do |name|
next if name == :ensure
# LAK: We use property.sync here rather than directly calling
# the setter method because the properties might do some kind
# of conversion. In particular, the user gid property might
# have a string and need to convert it to a number
if @resource.should(name)
@resource.property(name).sync
elsif value = autogen(name)
self.send(name.to_s + "=", value)
else
next
end
end
end
end
def password=(passphrase)
exec_arg_vector = self.class.get_exec_preamble("-read", @resource.name)
exec_arg_vector << ns_to_ds_attribute_map[:guid]
begin
guid_output = execute(exec_arg_vector)
guid_plist = Plist.parse_xml(guid_output)
# Although GeneratedUID like all DirectoryService values can be multi-valued
# according to the schema, in practice user accounts cannot have multiple UUIDs
# otherwise Bad Things Happen, so we just deal with the first value.
guid = guid_plist["dsAttrTypeStandard:#{ns_to_ds_attribute_map[:guid]}"][0]
self.class.set_password(@resource.name, guid, passphrase)
rescue Puppet::ExecutionFailure => detail
fail("Could not set #{param} on #{@resource.class.name}[#{@resource.name}]: #{detail}")
end
end
# NBK: we override @parent.set as we need to execute a series of commands
# to deal with array values, rather than the single command nameservice.rb
# expects to be returned by modifycmd. Thus we don't bother defining modifycmd.
def set(param, value)
self.class.validate(param, value)
current_members = @property_value_cache_hash[:members]
if param == :members
# If we are meant to be authoritative for the group membership
# then remove all existing members who haven't been specified
# in the manifest.
remove_unwanted_members(current_members, value) if @resource[:auth_membership] and not current_members.nil?
# if they're not a member, make them one.
add_members(current_members, value)
else
exec_arg_vector = self.class.get_exec_preamble("-create", @resource[:name])
# JJM: The following line just maps the NS name to the DS name
# e.g. { :uid => 'UniqueID' }
exec_arg_vector << ns_to_ds_attribute_map[param.intern]
# JJM: The following line sends the actual value to set the property to
exec_arg_vector << value.to_s
begin
execute(exec_arg_vector)
rescue Puppet::ExecutionFailure => detail
fail("Could not set #{param} on #{@resource.class.name}[#{@resource.name}]: #{detail}")
end
end
end
# NBK: we override @parent.create as we need to execute a series of commands
# to create objects with dscl, rather than the single command nameservice.rb
# expects to be returned by addcmd. Thus we don't bother defining addcmd.
def create
if exists?
info "already exists"
return nil
end
# NBK: First we create the object with a known guid so we can set the contents
# of the password hash if required
# Shelling out sucks, but for a single use case it doesn't seem worth
# requiring people to install a UUID library that doesn't come with the system.
# This should be revisited if Puppet starts managing UUIDs for other platform
# user records.
guid = %x{/usr/bin/uuidgen}.chomp
exec_arg_vector = self.class.get_exec_preamble("-create", @resource[:name])
exec_arg_vector << ns_to_ds_attribute_map[:guid] << guid
begin
execute(exec_arg_vector)
rescue Puppet::ExecutionFailure => detail
fail("Could not set GeneratedUID for #{@resource.class.name} #{@resource.name}: #{detail}")
end
if value = @resource.should(:password) and value != ""
self.class.set_password(@resource[:name], guid, value)
end
# Now we create all the standard properties
Puppet::Type.type(@resource.class.name).validproperties.each do |property|
next if property == :ensure
value = @resource.should(property)
if property == :gid and value.nil?
value = self.class.next_system_id(id_type='gid')
end
if property == :uid and value.nil?
value = self.class.next_system_id(id_type='uid')
end
if value != "" and not value.nil?
if property == :members
add_members(nil, value)
else
exec_arg_vector = self.class.get_exec_preamble("-create", @resource[:name])
exec_arg_vector << ns_to_ds_attribute_map[property.intern]
next if property == :password # skip setting the password here
exec_arg_vector << value.to_s
begin
execute(exec_arg_vector)
rescue Puppet::ExecutionFailure => detail
fail("Could not create #{@resource.class.name} #{@resource.name}: #{detail}")
end
end
end
end
end
def remove_unwanted_members(current_members, new_members)
current_members.each do |member|
if not new_members.flatten.include?(member)
cmd = [:dseditgroup, "-o", "edit", "-n", ".", "-d", member, @resource[:name]]
begin
execute(cmd)
rescue Puppet::ExecutionFailure => detail
# TODO: We're falling back to removing the member using dscl due to rdar://8481241
# This bug causes dseditgroup to fail to remove a member if that member doesn't exist
cmd = [:dscl, ".", "-delete", "/Groups/#{@resource.name}", "GroupMembership", member]
begin
execute(cmd)
rescue Puppet::ExecutionFailure => detail
fail("Could not remove #{member} from group: #{@resource.name}, #{detail}")
end
end
end
end
end
def add_members(current_members, new_members)
new_members.flatten.each do |new_member|
if current_members.nil? or not current_members.include?(new_member)
cmd = [:dseditgroup, "-o", "edit", "-n", ".", "-a", new_member, @resource[:name]]
begin
execute(cmd)
rescue Puppet::ExecutionFailure => detail
fail("Could not add #{new_member} to group: #{@resource.name}, #{detail}")
end
end
end
end
def deletecmd
# JJM: Like addcmd, only called when deleting the object itself
# Note, this isn't used to delete properties of the object,
# at least that's how I understand it...
self.class.get_exec_preamble("-delete", @resource[:name])
end
def getinfo(refresh = false)
# JJM 2007-07-24:
# Override the getinfo method, which is also defined in nameservice.rb
# This method returns and sets @infohash
# I'm not re-factoring the name "getinfo" because this method will most
# likely be called by nameservice.rb, which I didn't write.
if refresh or (! defined?(@property_value_cache_hash) or ! @property_value_cache_hash)
# JJM 2007-07-24: OK, there's a bit of magic that's about to
# happen... Let's see how strong my grip has become... =)
#
# self is a provider instance of some Puppet::Type, like
# Puppet::Type::User::ProviderDirectoryservice for the case of the
# user type and this provider.
#
# self.class looks like "user provider directoryservice", if that
# helps you ...
#
# self.class.resource_type is a reference to the Puppet::Type class,
# probably Puppet::Type::User or Puppet::Type::Group, etc...
#
# self.class.resource_type.validproperties is a class method,
# returning an Array of the valid properties of that specific
# Puppet::Type.
#
# So... something like [:comment, :home, :password, :shell, :uid,
# :groups, :ensure, :gid]
#
# Ultimately, we add :name to the list, delete :ensure from the
# list, then report on the remaining list. Pretty whacky, ehh?
type_properties = [:name] + self.class.resource_type.validproperties
type_properties.delete(:ensure) if type_properties.include? :ensure
type_properties << :guid # append GeneratedUID so we just get the report here
@property_value_cache_hash = self.class.single_report(@resource[:name], *type_properties)
[:uid, :gid].each do |param|
@property_value_cache_hash[param] = @property_value_cache_hash[param].to_i if @property_value_cache_hash and @property_value_cache_hash.include?(param)
end
end
@property_value_cache_hash
end
end
diff --git a/lib/puppet/provider/package/apt.rb b/lib/puppet/provider/package/apt.rb
index 63187cdda..f43f51446 100644
--- a/lib/puppet/provider/package/apt.rb
+++ b/lib/puppet/provider/package/apt.rb
@@ -1,112 +1,116 @@
Puppet::Type.type(:package).provide :apt, :parent => :dpkg, :source => :dpkg do
# Provide sorting functionality
include Puppet::Util::Package
- desc "Package management via `apt-get`."
+ desc "Package management via `apt-get`.
+
+ This provider supports the `install_options` attribute, which allows command-line flags to be passed to apt-get.
+ These options should be specified as a string (e.g. '--flag'), a hash (e.g. {'--flag' => 'value'}),
+ or an array where each element is either a string or a hash."
has_feature :versionable, :install_options
commands :aptget => "/usr/bin/apt-get"
commands :aptcache => "/usr/bin/apt-cache"
commands :preseed => "/usr/bin/debconf-set-selections"
defaultfor :operatingsystem => [:debian, :ubuntu]
ENV['DEBIAN_FRONTEND'] = "noninteractive"
# disable common apt helpers to allow non-interactive package installs
ENV['APT_LISTBUGS_FRONTEND'] = "none"
ENV['APT_LISTCHANGES_FRONTEND'] = "none"
# A derivative of DPKG; this is how most people actually manage
# Debian boxes, and the only thing that differs is that it can
# install packages from remote sites.
def checkforcdrom
have_cdrom = begin
!!(File.read("/etc/apt/sources.list") =~ /^[^#]*cdrom:/)
rescue
# This is basically pathological...
false
end
if have_cdrom and @resource[:allowcdrom] != :true
raise Puppet::Error,
"/etc/apt/sources.list contains a cdrom source; not installing. Use 'allowcdrom' to override this failure."
end
end
# Install a package using 'apt-get'. This function needs to support
# installing a specific version.
def install
self.run_preseed if @resource[:responsefile]
should = @resource[:ensure]
checkforcdrom
cmd = %w{-q -y}
if config = @resource[:configfiles]
if config == :keep
cmd << "-o" << 'DPkg::Options::=--force-confold'
else
cmd << "-o" << 'DPkg::Options::=--force-confnew'
end
end
str = @resource[:name]
case should
when true, false, Symbol
# pass
else
# Add the package version and --force-yes option
str += "=#{should}"
cmd << "--force-yes"
end
cmd += install_options if @resource[:install_options]
cmd << :install << str
aptget(*cmd)
end
# What's the latest package version available?
def latest
output = aptcache :policy, @resource[:name]
if output =~ /Candidate:\s+(\S+)\s/
return $1
else
self.err "Could not find latest version"
return nil
end
end
#
# preseeds answers to debconf-set-selections from the "responsefile"
#
def run_preseed
if response = @resource[:responsefile] and Puppet::FileSystem.exist?(response)
self.info("Preseeding #{response} to debconf-set-selections")
preseed response
else
self.info "No responsefile specified or file is nonexistent, not preseeding anything"
end
end
def uninstall
self.run_preseed if @resource[:responsefile]
aptget "-y", "-q", :remove, @resource[:name]
end
def purge
self.run_preseed if @resource[:responsefile]
aptget '-y', '-q', :remove, '--purge', @resource[:name]
# work around a "bug" in apt where already-removed packages are not purged
super
end
def install_options
join_options(@resource[:install_options])
end
end
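To illustrate the option shapes described above (the same wording appears in the gem, OpenBSD, pacman, and rpm providers below), here is a rough, self-contained sketch of how string, hash, and array values can be flattened into command-line arguments. The helper name and exact rendering are illustrative only; the real providers delegate this to the shared join_options utility.

# Hypothetical helper, for illustration only.
def flatten_install_options(options)
  options = [options] unless options.is_a?(Array)
  options.flat_map do |opt|
    # Hashes become "flag=value" strings; plain strings pass through.
    opt.is_a?(Hash) ? opt.map { |flag, value| "#{flag}=#{value}" } : opt
  end
end

flatten_install_options('--force-yes')
# => ["--force-yes"]
flatten_install_options([{'-o' => 'Dpkg::Options::=--force-confnew'}, '--force-yes'])
# => ["-o=Dpkg::Options::=--force-confnew", "--force-yes"]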
diff --git a/lib/puppet/provider/package/gem.rb b/lib/puppet/provider/package/gem.rb
index 380e81ef7..7cd61f4f4 100644
--- a/lib/puppet/provider/package/gem.rb
+++ b/lib/puppet/provider/package/gem.rb
@@ -1,131 +1,135 @@
require 'puppet/provider/package'
require 'uri'
# Ruby gems support.
Puppet::Type.type(:package).provide :gem, :parent => Puppet::Provider::Package do
desc "Ruby Gem support. If a URL is passed via `source`, then that URL is used as the
remote gem repository; if a source is present but is not a valid URL, it will be
interpreted as the path to a local gem file. If source is not present at all,
- the gem will be installed from the default gem repositories."
+ the gem will be installed from the default gem repositories.
+
+ This provider supports the `install_options` attribute, which allows command-line flags to be passed to the gem command.
+ These options should be specified as a string (e.g. '--flag'), a hash (e.g. {'--flag' => 'value'}),
+ or an array where each element is either a string or a hash."
has_feature :versionable, :install_options
commands :gemcmd => "gem"
def self.gemlist(options)
gem_list_command = [command(:gemcmd), "list"]
if options[:local]
gem_list_command << "--local"
else
gem_list_command << "--remote"
end
if options[:source]
gem_list_command << "--source" << options[:source]
end
if name = options[:justme]
- gem_list_command << name + "$"
+ gem_list_command << "^" + name + "$"
end
begin
list = execute(gem_list_command).lines.
map {|set| gemsplit(set) }.
reject {|x| x.nil? }
rescue Puppet::ExecutionFailure => detail
raise Puppet::Error, "Could not list gems: #{detail}", detail.backtrace
end
if options[:justme]
return list.shift
else
return list
end
end
def self.gemsplit(desc)
# When `gem list` writes to a console, its output includes a line like:
# *** LOCAL GEMS ***
# but when the output is not a console, that line
# and all blank lines are stripped,
# so we don't need to check for them
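# e.g. 'rake (10.3.2, 10.1.0)' parses to
# {:name => 'rake', :ensure => ['10.3.2', '10.1.0'], :provider => :gem}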
if desc =~ /^(\S+)\s+\((.+)\)/
name = $1
versions = $2.split(/,\s*/)
{
:name => name,
:ensure => versions.map{|v| v.split[0]},
:provider => :gem
}
else
Puppet.warning "Could not match #{desc}" unless desc.chomp.empty?
nil
end
end
def self.instances(justme = false)
gemlist(:local => true).collect do |hash|
new(hash)
end
end
def install(useversion = true)
command = [command(:gemcmd), "install"]
command << "-v" << resource[:ensure] if (! resource[:ensure].is_a? Symbol) and useversion
if source = resource[:source]
begin
uri = URI.parse(source)
rescue => detail
self.fail Puppet::Error, "Invalid source '#{uri}': #{detail}", detail
end
case uri.scheme
when nil
# no URI scheme => interpret the source as a local file
command << source
when /file/i
command << uri.path
when 'puppet'
# we don't support puppet:// URLs (yet)
raise Puppet::Error.new("puppet:// URLs are not supported as gem sources")
else
# interpret it as a gem repository
command << "--source" << "#{source}" << resource[:name]
end
else
command << "--no-rdoc" << "--no-ri" << resource[:name]
end
command += install_options if resource[:install_options]
output = execute(command)
# Apparently some stupid gem versions don't exit non-0 on failure
self.fail "Could not install: #{output.chomp}" if output.include?("ERROR")
end
def latest
# This always gets the latest version available.
gemlist_options = {:justme => resource[:name]}
gemlist_options.merge!({:source => resource[:source]}) unless resource[:source].nil?
hash = self.class.gemlist(gemlist_options)
hash[:ensure][0]
end
def query
self.class.gemlist(:justme => resource[:name], :local => true)
end
def uninstall
gemcmd "uninstall", "-x", "-a", resource[:name]
end
def update
self.install(false)
end
def install_options
join_options(resource[:install_options])
end
-end
\ No newline at end of file
+end
diff --git a/lib/puppet/provider/package/openbsd.rb b/lib/puppet/provider/package/openbsd.rb
index 4437537c8..acf9b42f0 100644
--- a/lib/puppet/provider/package/openbsd.rb
+++ b/lib/puppet/provider/package/openbsd.rb
@@ -1,151 +1,228 @@
require 'puppet/provider/package'
# Packaging on OpenBSD. Doesn't work anywhere else that I know of.
Puppet::Type.type(:package).provide :openbsd, :parent => Puppet::Provider::Package do
- desc "OpenBSD's form of `pkg_add` support."
+ desc "OpenBSD's form of `pkg_add` support.
- commands :pkginfo => "pkg_info", :pkgadd => "pkg_add", :pkgdelete => "pkg_delete"
+ This provider supports the `install_options` and `uninstall_options`
+ attributes, which allow command-line flags to be passed to pkg_add and pkg_delete.
+ These options should be specified as a string (e.g. '--flag'), a hash (e.g. {'--flag' => 'value'}),
+ or an array where each element is either a string or a hash."
+
+ commands :pkginfo => "pkg_info",
+ :pkgadd => "pkg_add",
+ :pkgdelete => "pkg_delete"
defaultfor :operatingsystem => :openbsd
confine :operatingsystem => :openbsd
has_feature :versionable
has_feature :install_options
has_feature :uninstall_options
+ has_feature :upgradeable
def self.instances
packages = []
begin
execpipe(listcmd) do |process|
# our regex for matching pkg_info output
regex = /^(.*)-(\d[^-]*)[-]?(\w*)(.*)$/
fields = [:name, :ensure, :flavor ]
hash = {}
# now turn each returned line into a package object
process.each_line { |line|
if match = regex.match(line.split[0])
fields.zip(match.captures) { |field,value|
hash[field] = value
}
hash[:provider] = self.name
packages << new(hash)
hash = {}
else
unless line =~ /Updating the pkgdb/
# Print a warning on lines we can't match, but move
# on, since it should be non-fatal
warning("Failed to match line #{line}")
end
end
}
end
return packages
rescue Puppet::ExecutionFailure
return nil
end
end
def self.listcmd
[command(:pkginfo), "-a"]
end
+ def latest
+ parse_pkgconf
+
+ if @resource[:source][-1,1] == ::File::SEPARATOR
+ e_vars = { 'PKG_PATH' => @resource[:source] }
+ else
+ e_vars = {}
+ end
+
+ if @resource[:flavor]
+ query = "#{@resource[:name]}--#{@resource[:flavor]}"
+ else
+ query = @resource[:name]
+ end
+
+ output = Puppet::Util.withenv(e_vars) {pkginfo "-Q", query}
+
+ if output.nil? or output.size == 0 or output =~ /Error from /
+ debug "Failed to query for #{resource[:name]}"
+ return properties[:ensure]
+ else
+ # Remove all fuzzy matches first.
+ output = output.split.select {|p| p =~ /^#{resource[:name]}-(\d[^-]*)[-]?(\w*)/ }.join
+ debug "pkg_info -Q for #{resource[:name]}: #{output}"
+ end
+
+ if output =~ /^#{resource[:name]}-(\d[^-]*)[-]?(\w*) \(installed\)$/
+ debug "Package is already the latest available"
+ return properties[:ensure]
+ else
+ match = /^(.*)-(\d[^-]*)[-]?(\w*)$/.match(output)
+ debug "Latest available for #{resource[:name]}: #{match[2]}"
+
+ if properties[:ensure].to_sym == :absent
+ return match[2]
+ end
+
+ vcmp = properties[:ensure].split('.').map{|s|s.to_i} <=> match[2].split('.').map{|s|s.to_i}
+ if vcmp > 0
+ debug "ensure: #{properties[:ensure]}"
+ # The locally installed package may actually be newer than what a mirror
+ # has. Log it at debug, but ignore it otherwise.
+ debug "Package #{resource[:name]} #{properties[:ensure]} newer than available #{match[2]}"
+ return properties[:ensure]
+ else
+ return match[2]
+ end
+ end
+ end
+
+ def update
+ self.install(true)
+ end
+
def parse_pkgconf
unless @resource[:source]
if Puppet::FileSystem.exist?("/etc/pkg.conf")
File.open("/etc/pkg.conf", "rb").readlines.each do |line|
if matchdata = line.match(/^installpath\s*=\s*(.+)\s*$/i)
@resource[:source] = matchdata[1]
elsif matchdata = line.match(/^installpath\s*\+=\s*(.+)\s*$/i)
if @resource[:source].nil?
@resource[:source] = matchdata[1]
else
@resource[:source] += ":" + matchdata[1]
end
end
end
unless @resource[:source]
raise Puppet::Error,
"No valid installpath found in /etc/pkg.conf and no source was set"
end
else
raise Puppet::Error,
"You must specify a package source or configure an installpath in /etc/pkg.conf"
end
end
end
- def install
+ def install(latest = false)
cmd = []
parse_pkgconf
if @resource[:source][-1,1] == ::File::SEPARATOR
e_vars = { 'PKG_PATH' => @resource[:source] }
- full_name = [ @resource[:name], get_version || @resource[:ensure], @resource[:flavor] ].join('-').chomp('-').chomp('-')
+ # For a real update (i.e., the package already exists), pkg_add(8)
+ # can handle the flavors. However, if we're actually installing
+ # with 'latest', we do need to handle the flavors, so we always
+ # handle them ourselves so as not to break installs.
+ if latest and resource[:flavor]
+ full_name = "#{resource[:name]}--#{resource[:flavor]}"
+ elsif latest
+ # Don't depend on get_version for updates.
+ full_name = @resource[:name]
+ else
+ full_name = [ @resource[:name], get_version || @resource[:ensure], @resource[:flavor] ].join('-').chomp('-').chomp('-')
+ end
else
e_vars = {}
full_name = @resource[:source]
end
cmd << install_options
cmd << full_name
- Puppet::Util.withenv(e_vars) { pkgadd cmd.flatten.compact.join(' ') }
+ if latest
+ cmd.unshift('-rz')
+ end
+
+ Puppet::Util.withenv(e_vars) { pkgadd cmd.flatten.compact }
end
def get_version
execpipe([command(:pkginfo), "-I", @resource[:name]]) do |process|
# our regex for matching pkg_info output
- regex = /^(.*)-(\d[^-]*)[-]?(\D*)(.*)$/
+ regex = /^(.*)-(\d[^-]*)[-]?(\w*)(.*)$/
master_version = 0
version = -1
process.each_line do |line|
if match = regex.match(line.split[0])
# now we return the first version, unless ensure is latest
version = match.captures[1]
return version unless @resource[:ensure] == "latest"
master_version = version unless master_version > version
end
end
return master_version unless master_version == 0
return '' if version == -1
raise Puppet::Error, "#{version} is not available for this package"
end
rescue Puppet::ExecutionFailure
return nil
end
def query
# Search for the version info
if pkginfo(@resource[:name]) =~ /Information for (inst:)?#{@resource[:name]}-(\S+)/
return { :ensure => $2 }
else
return nil
end
end
def install_options
join_options(resource[:install_options])
end
def uninstall_options
join_options(resource[:uninstall_options])
end
def uninstall
- pkgdelete uninstall_options.flatten.compact.join(' '), @resource[:name]
+ pkgdelete uninstall_options.flatten.compact, @resource[:name]
end
def purge
pkgdelete "-c", "-q", @resource[:name]
end
end
diff --git a/lib/puppet/provider/package/pacman.rb b/lib/puppet/provider/package/pacman.rb
index 0c721c9d4..4ca356b16 100644
--- a/lib/puppet/provider/package/pacman.rb
+++ b/lib/puppet/provider/package/pacman.rb
@@ -1,209 +1,234 @@
require 'puppet/provider/package'
require 'set'
require 'uri'
Puppet::Type.type(:package).provide :pacman, :parent => Puppet::Provider::Package do
- desc "Support for the Package Manager Utility (pacman) used in Archlinux."
+ desc "Support for the Package Manager Utility (pacman) used in Archlinux.
+
+ This provider supports the `install_options` attribute, which allows command-line flags to be passed to pacman.
+ These options should be specified as a string (e.g. '--flag'), a hash (e.g. {'--flag' => 'value'}),
+ or an array where each element is either a string or a hash."
commands :pacman => "/usr/bin/pacman"
# Yaourt is a common AUR helper which, if installed, we can use to query the AUR
commands :yaourt => "/usr/bin/yaourt" if Puppet::FileSystem.exist? '/usr/bin/yaourt'
confine :operatingsystem => :archlinux
defaultfor :operatingsystem => :archlinux
+ has_feature :install_options
+ has_feature :uninstall_options
has_feature :upgradeable
# If yaourt is installed, we can make use of it
def yaourt?
return Puppet::FileSystem.exist?('/usr/bin/yaourt')
end
# Install a package using 'pacman', or 'yaourt' if available.
# Installs quietly, without confirmation or progress bar, and updates the
# package list from the servers defined in pacman.conf.
def install
if @resource[:source]
install_from_file
else
install_from_repo
end
unless self.query
raise Puppet::ExecutionFailure.new("Could not find package %s" % self.name)
end
end
def install_from_repo
if yaourt?
- yaourt "--noconfirm", "-S", @resource[:name]
+ cmd = %w{--noconfirm}
+ cmd += install_options if @resource[:install_options]
+ cmd << "-S" << @resource[:name]
+ yaourt *cmd
else
- pacman "--noconfirm", "--noprogressbar", "-Sy", @resource[:name]
+ cmd = %w{--noconfirm --noprogressbar}
+ cmd += install_options if @resource[:install_options]
+ cmd << "-Sy" << @resource[:name]
+ pacman *cmd
end
end
private :install_from_repo
def install_from_file
source = @resource[:source]
begin
source_uri = URI.parse source
rescue => detail
self.fail Puppet::Error, "Invalid source '#{source}': #{detail}", detail
end
source = case source_uri.scheme
when nil then source
when /https?/i then source
when /ftp/i then source
when /file/i then source_uri.path
when /puppet/i
fail "puppet:// URL is not supported by pacman"
else
fail "Source #{source} is not supported by pacman"
end
pacman "--noconfirm", "--noprogressbar", "-Sy"
pacman "--noconfirm", "--noprogressbar", "-U", source
end
private :install_from_file
def self.listcmd
[command(:pacman), "-Q"]
end
# Pacman has a concept of package groups as well.
# Package groups have no versions.
def self.listgroupcmd
[command(:pacman), "-Qg"]
end
# Get installed packages (pacman -Q)
def self.installedpkgs
packages = []
begin
execpipe(listcmd()) do |process|
# pacman -Q output is 'packagename version-rel'
regex = %r{^(\S+)\s(\S+)}
fields = [:name, :ensure]
hash = {}
process.each_line { |line|
if match = regex.match(line)
fields.zip(match.captures) { |field,value|
hash[field] = value
}
hash[:provider] = self.name
packages << new(hash)
hash = {}
else
warning("Failed to match line %s" % line)
end
}
end
rescue Puppet::ExecutionFailure
return nil
end
packages
end
# Get installed groups (pacman -Qg)
def self.installedgroups
packages = []
begin
execpipe(listgroupcmd()) do |process|
# pacman -Qg output is 'groupname packagename'
# Groups need to be deduplicated
groups = Set[]
process.each_line { |line|
groups.add(line.split[0])
}
groups.each { |line|
hash = {
:name => line,
:ensure => "1", # Groups don't have versions, so ensure => latest
# will still cause a reinstall.
:provider => self.name
}
packages << new(hash)
}
end
rescue Puppet::ExecutionFailure
return nil
end
packages
end
# Fetch the list of packages currently installed on the system.
def self.instances
packages = self.installedpkgs
groups = self.installedgroups
result = nil
if (!packages && !groups)
nil
elsif (packages && groups)
packages.concat(groups)
else
packages
end
end
# Because Archlinux is a rolling release based distro, installing a package
# should always result in the newest release.
def update
# Install in pacman can be used for update, too
self.install
end
# If the check against the main pacman repos fails, we fall back to checking the AUR with yaourt, if installed
def latest
pacman "-Sy"
pacman_check = true # Query the main repos first
begin
if pacman_check
output = pacman "-Sp", "--print-format", "%v", @resource[:name]
return output.chomp
else
output = yaourt "-Qma", @resource[:name]
output.split("\n").each do |line|
return line.split[1].chomp if line =~ /^aur/
end
end
rescue Puppet::ExecutionFailure
if pacman_check and self.yaourt?
pacman_check = false # now try the AUR
retry
else
raise
end
end
end
# Queries the pacman master list for information about the package.
def query
begin
output = pacman("-Qi", @resource[:name])
if output =~ /Version.*:\s(.+)/
return { :ensure => $1 }
end
rescue Puppet::ExecutionFailure
return {
:ensure => :purged,
:status => 'missing',
:name => @resource[:name],
:error => 'ok',
}
end
nil
end
# Removes a package from the system.
def uninstall
- pacman "--noconfirm", "--noprogressbar", "-R", @resource[:name]
+ cmd = %w{--noconfirm --noprogressbar}
+ cmd += uninstall_options if @resource[:uninstall_options]
+ cmd << "-R" << @resource[:name]
+ pacman *cmd
+ end
+
+ private
+
+ def install_options
+ join_options(@resource[:install_options])
+ end
+
+ def uninstall_options
+ join_options(@resource[:uninstall_options])
end
end
diff --git a/lib/puppet/provider/package/rpm.rb b/lib/puppet/provider/package/rpm.rb
index fad6b4845..3b4b6388a 100644
--- a/lib/puppet/provider/package/rpm.rb
+++ b/lib/puppet/provider/package/rpm.rb
@@ -1,189 +1,187 @@
require 'puppet/provider/package'
# RPM packaging. Should work anywhere that has rpm installed.
Puppet::Type.type(:package).provide :rpm, :source => :rpm, :parent => Puppet::Provider::Package do
desc "RPM packaging support; should work anywhere with a working `rpm`
binary.
This provider supports the `install_options` and `uninstall_options`
- attributes, which allow command-line flags to be passed to the RPM binary.
- These options should be specified as an array, where each element is either
- a string or a `{'--flag' => 'value'}` hash. (That hash example would be
- equivalent to a `'--flag=value'` string; the hash syntax is available as a
- convenience.)"
+ attributes, which allow command-line flags to be passed to rpm.
+ These options should be specified as a string (e.g. '--flag'), a hash (e.g. {'--flag' => 'value'}),
+ or an array where each element is either a string or a hash."
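# A hypothetical manifest exercising these options (package name, source path and flag are illustrative only):
#   package { 'mypackage':
#     ensure          => installed,
#     provider        => 'rpm',
#     source          => '/tmp/mypackage-1.0-1.x86_64.rpm',
#     install_options => [{ '--prefix' => '/opt/mypackage' }],
#   }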
has_feature :versionable
has_feature :install_options
has_feature :uninstall_options
has_feature :virtual_packages
# Note: self:: is required here to keep these constants in the context of what will
# eventually become this Puppet::Type::Package::ProviderRpm class.
# The query format by which we identify installed packages
self::NEVRA_FORMAT = %Q{%{NAME} %|EPOCH?{%{EPOCH}}:{0}| %{VERSION} %{RELEASE} %{ARCH}\\n}
self::NEVRA_REGEX = %r{^(\S+) (\S+) (\S+) (\S+) (\S+)$}
self::NEVRA_FIELDS = [:name, :epoch, :version, :release, :arch]
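# For example (values illustrative), a query line "bash 0 4.2.46 31.el7 x86_64"
# matches NEVRA_REGEX and is mapped via NEVRA_FIELDS to
# :name => "bash", :epoch => "0", :version => "4.2.46", :release => "31.el7", :arch => "x86_64".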
commands :rpm => "rpm"
if command('rpm')
confine :true => begin
rpm('--version')
rescue Puppet::ExecutionFailure
false
else
true
end
end
def self.current_version
return @current_version unless @current_version.nil?
output = rpm "--version"
@current_version = output.gsub('RPM version ', '').strip
end
- # rpm < 4.1 don't support --nosignature
+ # rpm < 4.1 does not support --nosignature
def self.nosignature
'--nosignature' unless Puppet::Util::Package.versioncmp(current_version, '4.1') < 0
end
- # rpm < 4.0.2 don't support --nodigest
+ # rpm < 4.0.2 does not support --nodigest
def self.nodigest
'--nodigest' unless Puppet::Util::Package.versioncmp(current_version, '4.0.2') < 0
end
def self.instances
packages = []
# list out all of the packages
begin
execpipe("#{command(:rpm)} -qa #{nosignature} #{nodigest} --qf '#{self::NEVRA_FORMAT}'") { |process|
# now turn each returned line into a package object
process.each_line { |line|
hash = nevra_to_hash(line)
packages << new(hash) unless hash.empty?
}
}
rescue Puppet::ExecutionFailure
raise Puppet::Error, "Failed to list packages", $!.backtrace
end
packages
end
# Find the fully versioned package name and the version alone. Returns
# a hash with entries :instance => fully versioned package name, and
# :ensure => version-release
def query
#NOTE: Prior to a fix for issue 1243, this method potentially returned a cached value
#IF YOU CALL THIS METHOD, IT WILL CALL RPM
#Use get(:property) to check if cached values are available
cmd = ["-q", @resource[:name], "#{self.class.nosignature}", "#{self.class.nodigest}", "--qf", self.class::NEVRA_FORMAT]
begin
output = rpm(*cmd)
rescue Puppet::ExecutionFailure
return nil unless @resource.allow_virtual?
# rpm -q exits 1 if package not found
# retry the query for virtual packages
cmd << '--whatprovides'
begin
output = rpm(*cmd)
rescue Puppet::ExecutionFailure
# couldn't find a virtual package either
return nil
end
end
# FIXME: We could actually be getting back multiple packages
# for multilib and this will only return the first such package
@property_hash.update(self.class.nevra_to_hash(output))
@property_hash.dup
end
# Here we just retrieve the version from the file specified in the source.
def latest
unless source = @resource[:source]
@resource.fail "RPMs must specify a package source"
end
cmd = [command(:rpm), "-q", "--qf", self.class::NEVRA_FORMAT, "-p", source]
h = self.class.nevra_to_hash(execfail(cmd, Puppet::Error))
h[:ensure]
end
def install
unless source = @resource[:source]
@resource.fail "RPMs must specify a package source"
end
# RPM complains if you try to install an already
# installed package
if @resource.should(:ensure) == @property_hash[:ensure] or
@resource.should(:ensure) == :latest && @property_hash[:ensure] == latest
return
end
flag = ["-i"]
flag = ["-U", "--oldpackage"] if @property_hash[:ensure] and @property_hash[:ensure] != :absent
flag += install_options if resource[:install_options]
rpm flag, source
end
def uninstall
query if get(:arch) == :absent
nvr = "#{get(:name)}-#{get(:version)}-#{get(:release)}"
arch = ".#{get(:arch)}"
# If they specified an arch in the manifest, erase that. Otherwise,
# erase the arch we got back from the query. If multiple arches are
# installed and only the package name is specified (without the
# arch), this will uninstall all of them on successive runs of the
# client, one after the other.
# Versions of RPM prior to 4.2.1 can't accept the architecture as
# part of the package name.
unless Puppet::Util::Package.versioncmp(self.class.current_version, '4.2.1') < 0
if @resource[:name][-arch.size, arch.size] == arch
nvr += arch
else
nvr += ".#{get(:arch)}"
end
end
flag = ['-e']
flag += uninstall_options if resource[:uninstall_options]
rpm flag, nvr
end
def update
self.install
end
def install_options
join_options(resource[:install_options])
end
def uninstall_options
join_options(resource[:uninstall_options])
end
private
# @param line [String] one line of rpm package query information
# @return [Hash] of NEVRA_FIELDS strings parsed from package info
# or an empty hash if we failed to parse
# @api private
def self.nevra_to_hash(line)
line.strip!
hash = {}
if match = self::NEVRA_REGEX.match(line)
self::NEVRA_FIELDS.zip(match.captures) { |f, v| hash[f] = v }
hash[:provider] = self.name
hash[:ensure] = "#{hash[:version]}-#{hash[:release]}"
else
Puppet.debug("Failed to match rpm line #{line}")
end
return hash
end
end
diff --git a/lib/puppet/provider/package/sun.rb b/lib/puppet/provider/package/sun.rb
index 87a13a2a5..15b78392d 100644
--- a/lib/puppet/provider/package/sun.rb
+++ b/lib/puppet/provider/package/sun.rb
@@ -1,128 +1,132 @@
# Sun packaging.
require 'puppet/provider/package'
Puppet::Type.type(:package).provide :sun, :parent => Puppet::Provider::Package do
desc "Sun's packaging system. Requires that you specify the source for
- the packages you're managing."
+ the packages you're managing.
+
+ This provider supports the `install_options` attribute, which allows command-line flags to be passed to pkgadd.
+ These options should be specified as a string (e.g. '--flag'), a hash (e.g. {'--flag' => 'value'}),
+ or an array where each element is either a string or a hash."
commands :pkginfo => "/usr/bin/pkginfo",
:pkgadd => "/usr/sbin/pkgadd",
:pkgrm => "/usr/sbin/pkgrm"
confine :osfamily => :solaris
defaultfor :osfamily => :solaris
has_feature :install_options
self::Namemap = {
"PKGINST" => :name,
"CATEGORY" => :category,
"ARCH" => :platform,
"VERSION" => :ensure,
"BASEDIR" => :root,
"VENDOR" => :vendor,
"DESC" => :description,
}
def self.namemap(hash)
self::Namemap.keys.inject({}) do |hsh,k|
hsh.merge(self::Namemap[k] => hash[k])
end
end
def self.parse_pkginfo(out)
# collect all the lines with : in them, and separate them out by ^$
pkgs = []
pkg = {}
out.each_line do |line|
case line.chomp
when /^\s*$/
pkgs << pkg unless pkg.empty?
pkg = {}
when /^\s*([^:]+):\s+(.+)$/
pkg[$1] = $2
end
end
pkgs << pkg unless pkg.empty?
pkgs
end
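# Illustrative `pkginfo -l` fragment (values hypothetical):
#      PKGINST:  SUNWzlib
#      VERSION:  11.10.0,REV=2005.01.21.15.53
#
#      PKGINST:  SUNWbash
# Blank lines separate packages, so the above parses into two hashes keyed by
# the field names ('PKGINST', 'VERSION', ...).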
def self.instances
parse_pkginfo(pkginfo('-l')).collect do |p|
hash = namemap(p)
hash[:provider] = :sun
new(hash)
end
end
# Get info on a package, optionally specifying a device.
def info2hash(device = nil)
args = ['-l']
args << '-d' << device if device
args << @resource[:name]
begin
pkgs = self.class.parse_pkginfo(pkginfo(*args))
errmsg = case pkgs.size
when 0
'No message'
when 1
pkgs[0]['ERROR']
end
return self.class.namemap(pkgs[0]) if errmsg.nil?
# according to commit 41356a7 some errors do not raise an exception
# so even though pkginfo passed, we have to check the actual output
raise Puppet::Error, "Unable to get information about package #{@resource[:name]} because of: #{errmsg}"
rescue Puppet::ExecutionFailure
return {:ensure => :absent}
end
end
# Retrieve the version from the current package file.
def latest
info2hash(@resource[:source])[:ensure]
end
def query
info2hash
end
# only looking for -G now
def install
raise Puppet::Error, "Sun packages must specify a package source" unless @resource[:source]
options = {
:adminfile => @resource[:adminfile],
:responsefile => @resource[:responsefile],
:source => @resource[:source],
:cmd_options => @resource[:install_options]
}
pkgadd prepare_cmd(options)
end
def uninstall
pkgrm prepare_cmd(:adminfile => @resource[:adminfile])
end
# Remove the old package, and install the new one. This will often
# fail.
def update
self.uninstall if (@property_hash[:ensure] || info2hash[:ensure]) != :absent
self.install
end
def prepare_cmd(opt)
[if_have_value('-a', opt[:adminfile]),
if_have_value('-r', opt[:responsefile]),
if_have_value('-d', opt[:source]),
opt[:cmd_options] || [],
['-n', @resource[:name]]].flatten
end
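# Illustrative only: prepare_cmd(:adminfile => '/tmp/admin', :source => '/tmp/pkgs',
# :cmd_options => ['-G']) returns
#   ['-a', '/tmp/admin', '-d', '/tmp/pkgs', '-G', '-n', <package name>]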
def if_have_value(prefix, value)
if value
[prefix, value]
else
[]
end
end
end
diff --git a/lib/puppet/provider/package/windows.rb b/lib/puppet/provider/package/windows.rb
index c91460f45..143d1c1e6 100644
--- a/lib/puppet/provider/package/windows.rb
+++ b/lib/puppet/provider/package/windows.rb
@@ -1,108 +1,113 @@
require 'puppet/provider/package'
require 'puppet/util/windows'
require 'puppet/provider/package/windows/package'
Puppet::Type.type(:package).provide(:windows, :parent => Puppet::Provider::Package) do
desc "Windows package management.
This provider supports either MSI or self-extracting executable installers.
This provider requires a `source` attribute when installing the package.
It accepts paths to local files, mapped drives, or UNC paths.
+ This provider supports the `install_options` and `uninstall_options`
+ attributes, which allow command-line flags to be passed to the installer.
+ These options should be specified as a string (e.g. '--flag'), a hash (e.g. {'--flag' => 'value'}),
+ or an array where each element is either a string or a hash.
+
If the executable requires special arguments to perform a silent install or
uninstall, then the appropriate arguments should be specified using the
`install_options` or `uninstall_options` attributes, respectively. Puppet
will automatically quote any option that contains spaces."
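# A hypothetical manifest using install_options with an MSI (package name, path and property are illustrative only):
#   package { 'mysql':
#     ensure          => installed,
#     source          => 'N:/packages/mysql-5.5.28-winx64.msi',
#     install_options => ['INSTALLDIR=C:\mysql-5.5'],
#   }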
confine :operatingsystem => :windows
defaultfor :operatingsystem => :windows
has_feature :installable
has_feature :uninstallable
has_feature :install_options
has_feature :uninstall_options
has_feature :versionable
attr_accessor :package
# Return an array of provider instances
def self.instances
Puppet::Provider::Package::Windows::Package.map do |pkg|
provider = new(to_hash(pkg))
provider.package = pkg
provider
end
end
def self.to_hash(pkg)
{
:name => pkg.name,
:ensure => pkg.version || :installed,
:provider => :windows
}
end
# Query for the provider hash for the current resource. The provider we
# are querying may not have existed during prefetch.
def query
Puppet::Provider::Package::Windows::Package.find do |pkg|
if pkg.match?(resource)
return self.class.to_hash(pkg)
end
end
nil
end
def install
installer = Puppet::Provider::Package::Windows::Package.installer_class(resource)
command = [installer.install_command(resource), install_options].flatten.compact.join(' ')
output = execute(command, :failonfail => false, :combine => true)
check_result(output.exitstatus)
end
def uninstall
command = [package.uninstall_command, uninstall_options].flatten.compact.join(' ')
output = execute(command, :failonfail => false, :combine => true)
check_result(output.exitstatus)
end
# http://msdn.microsoft.com/en-us/library/windows/desktop/aa368542(v=vs.85).aspx
self::ERROR_SUCCESS = 0
self::ERROR_SUCCESS_REBOOT_INITIATED = 1641
self::ERROR_SUCCESS_REBOOT_REQUIRED = 3010
# (Un)install may "fail" because the package requested a reboot, the system requested a
# reboot, or something else entirely. Reboot requests mean the package was installed
# successfully, but we warn since we don't have a good reboot strategy.
def check_result(hr)
operation = resource[:ensure] == :absent ? 'uninstall' : 'install'
case hr
when self.class::ERROR_SUCCESS
# yeah
when self.class::ERROR_SUCCESS_REBOOT_INITIATED
warning("The package #{operation}ed successfully and the system is rebooting now.")
when self.class::ERROR_SUCCESS_REBOOT_REQUIRED
warning("The package #{operation}ed successfully, but the system must be rebooted.")
else
raise Puppet::Util::Windows::Error.new("Failed to #{operation}", hr)
end
end
- # This only get's called if there is a value to validate, but not if it's absent
+ # This only gets called if there is a value to validate, but not if it's absent
def validate_source(value)
fail("The source parameter cannot be empty when using the Windows provider.") if value.empty?
end
def install_options
join_options(resource[:install_options])
end
def uninstall_options
join_options(resource[:uninstall_options])
end
end
diff --git a/lib/puppet/provider/package/windows/exe_package.rb b/lib/puppet/provider/package/windows/exe_package.rb
index 5525ca0c6..9c9e46fbc 100644
--- a/lib/puppet/provider/package/windows/exe_package.rb
+++ b/lib/puppet/provider/package/windows/exe_package.rb
@@ -1,70 +1,70 @@
require 'puppet/provider/package/windows/package'
class Puppet::Provider::Package::Windows
class ExePackage < Puppet::Provider::Package::Windows::Package
attr_reader :uninstall_string
# Return an instance of the package from the registry, or nil
def self.from_registry(name, values)
if valid?(name, values)
ExePackage.new(
values['DisplayName'],
values['DisplayVersion'],
values['UninstallString']
)
end
end
# Is this a valid executable package we should manage?
def self.valid?(name, values)
# See http://community.spiceworks.com/how_to/show/2238
!!(values['DisplayName'] and values['DisplayName'].length > 0 and
values['UninstallString'] and values['UninstallString'].length > 0 and
values['SystemComponent'] != 1 and # DWORD
values['WindowsInstaller'] != 1 and # DWORD
name !~ /^KB[0-9]{6}/ and
values['ParentKeyName'] == nil and
values['Security Update'] == nil and
values['Update Rollup'] == nil and
values['Hotfix'] == nil)
end
def initialize(name, version, uninstall_string)
super(name, version)
@uninstall_string = uninstall_string
end
# Does this package match the resource?
def match?(resource)
resource[:name] == name
end
def self.install_command(resource)
- ['cmd.exe', '/c', 'start', '"puppet-install"', '/w', quote(resource[:source])]
+ ['cmd.exe', '/c', 'start', '"puppet-install"', '/w', munge(resource[:source])]
end
def uninstall_command
# 1. Launch using cmd /c start because if the executable is a console
# application Windows will automatically display its console window
# 2. Specify a quoted title, otherwise if uninstall_string is quoted,
# start will interpret that to be the title, and get confused
# 3. Specify /w (wait) to wait for uninstall to finish
command = ['cmd.exe', '/c', 'start', '"puppet-uninstall"', '/w']
# Only quote bare uninstall strings, e.g.
# C:\Program Files (x86)\Notepad++\uninstall.exe
# Don't quote uninstall strings that are already quoted, e.g.
# "c:\ruby187\unins000.exe"
# Don't quote uninstall strings that contain arguments:
# "C:\Program Files (x86)\Git\unins000.exe" /SILENT
if uninstall_string =~ /\A[^"]*.exe\Z/i
command << "\"#{uninstall_string}\""
else
command << uninstall_string
end
command
end
end
end
diff --git a/lib/puppet/provider/package/windows/msi_package.rb b/lib/puppet/provider/package/windows/msi_package.rb
index 9245d847d..1b8b04dfb 100644
--- a/lib/puppet/provider/package/windows/msi_package.rb
+++ b/lib/puppet/provider/package/windows/msi_package.rb
@@ -1,62 +1,62 @@
require 'puppet/provider/package/windows/package'
class Puppet::Provider::Package::Windows
class MsiPackage < Puppet::Provider::Package::Windows::Package
attr_reader :productcode, :packagecode
# From msi.h
INSTALLSTATE_DEFAULT = 5 # product is installed for the current user
INSTALLUILEVEL_NONE = 2 # completely silent installation
# Get the COM installer object; it's in a separate method for testing
def self.installer
# REMIND: when does the COM release happen?
WIN32OLE.new("WindowsInstaller.Installer")
end
# Return an instance of the package from the registry, or nil
def self.from_registry(name, values)
if valid?(name, values)
inst = installer
if inst.ProductState(name) == INSTALLSTATE_DEFAULT
MsiPackage.new(values['DisplayName'],
values['DisplayVersion'],
name, # productcode
inst.ProductInfo(name, 'PackageCode'))
end
end
end
# Is this a valid MSI package we should manage?
def self.valid?(name, values)
# See http://community.spiceworks.com/how_to/show/2238
!!(values['DisplayName'] and values['DisplayName'].length > 0 and
values['SystemComponent'] != 1 and # DWORD
values['WindowsInstaller'] == 1 and # DWORD
name =~ /\A\{[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}\}\Z/i)
end
def initialize(name, version, productcode, packagecode)
super(name, version)
@productcode = productcode
@packagecode = packagecode
end
# Does this package match the resource?
def match?(resource)
resource[:name].casecmp(packagecode) == 0 ||
resource[:name].casecmp(productcode) == 0 ||
resource[:name] == name
end
def self.install_command(resource)
- ['msiexec.exe', '/qn', '/norestart', '/i', quote(resource[:source])]
+ ['msiexec.exe', '/qn', '/norestart', '/i', munge(resource[:source])]
end
def uninstall_command
['msiexec.exe', '/qn', '/norestart', '/x', productcode]
end
end
end
diff --git a/lib/puppet/provider/package/windows/package.rb b/lib/puppet/provider/package/windows/package.rb
index 07fb64d25..a1526a886 100644
--- a/lib/puppet/provider/package/windows/package.rb
+++ b/lib/puppet/provider/package/windows/package.rb
@@ -1,80 +1,92 @@
require 'puppet/provider/package'
require 'puppet/util/windows'
class Puppet::Provider::Package::Windows
class Package
extend Enumerable
extend Puppet::Util::Errors
include Puppet::Util::Windows::Registry
extend Puppet::Util::Windows::Registry
attr_reader :name, :version
# Enumerate each package. The appropriate package subclass
# will be yielded.
def self.each(&block)
with_key do |key, values|
name = key.name.match(/^.+\\([^\\]+)$/).captures[0]
[MsiPackage, ExePackage].find do |klass|
if pkg = klass.from_registry(name, values)
yield pkg
end
end
end
end
# Yield each registry key and its values associated with an
# installed package. This searches both per-machine and current
# user contexts, as well as packages associated with 64 and
# 32-bit installers.
def self.with_key(&block)
%w[HKEY_LOCAL_MACHINE HKEY_CURRENT_USER].each do |hive|
[KEY64, KEY32].each do |mode|
mode |= KEY_READ
begin
open(hive, 'Software\Microsoft\Windows\CurrentVersion\Uninstall', mode) do |uninstall|
uninstall.each_key do |name, wtime|
open(hive, "#{uninstall.keyname}\\#{name}", mode) do |key|
yield key, values(key)
end
end
end
rescue Puppet::Util::Windows::Error => e
- raise e unless e.code == Windows::Error::ERROR_FILE_NOT_FOUND
+ raise e unless e.code == Puppet::Util::Windows::Error::ERROR_FILE_NOT_FOUND
end
end
end
end
# Get the class that knows how to install this resource
def self.installer_class(resource)
fail("The source parameter is required when using the Windows provider.") unless resource[:source]
case resource[:source]
when /\.msi"?\Z/i
# REMIND: can we install from URL?
# REMIND: what about msp, etc
MsiPackage
when /\.exe"?\Z/i
fail("The source does not exist: '#{resource[:source]}'") unless Puppet::FileSystem.exist?(resource[:source])
ExePackage
else
fail("Don't know how to install '#{resource[:source]}'")
end
end
+ def self.munge(value)
+ quote(replace_forward_slashes(value))
+ end
+
+ def self.replace_forward_slashes(value)
+ if value.include?('/')
+ value.gsub!('/', "\\")
+ Puppet.debug('Package source parameter contained /s - replaced with \\s')
+ end
+ value
+ end
+
def self.quote(value)
value.include?(' ') ? %Q["#{value.gsub(/"/, '\"')}"] : value
end
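# Illustrative behaviour of the helpers above (paths hypothetical):
#   munge('C:/temp/setup.msi')       #=> 'C:\temp\setup.msi'
#   munge('C:\some dir\setup.msi')   #=> '"C:\some dir\setup.msi"'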
def initialize(name, version)
@name = name
@version = version
end
end
end
require 'puppet/provider/package/windows/msi_package'
require 'puppet/provider/package/windows/exe_package'
diff --git a/lib/puppet/provider/package/yum.rb b/lib/puppet/provider/package/yum.rb
index 1a6218c86..bfbe9df16 100644
--- a/lib/puppet/provider/package/yum.rb
+++ b/lib/puppet/provider/package/yum.rb
@@ -1,195 +1,199 @@
require 'puppet/util/package'
Puppet::Type.type(:package).provide :yum, :parent => :rpm, :source => :rpm do
desc "Support via `yum`.
Using this provider's `uninstallable` feature will not remove dependent packages. To
remove dependent packages with this provider use the `purgeable` feature, but note this
- feature is destructive and should be used with the utmost care."
+ feature is destructive and should be used with the utmost care.
+
+ This provider supports the `install_options` attribute, which allows command-line flags to be passed to yum.
+ These options should be specified as a string (e.g. '--flag'), a hash (e.g. {'--flag' => 'value'}),
+ or an array where each element is either a string or a hash."
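# A hypothetical manifest (repository name illustrative only):
#   package { 'nginx':
#     ensure          => latest,
#     install_options => [{ '--enablerepo' => 'epel' }],
#   }
# The --enablerepo/--disablerepo hashes are also picked up by the repo scanning further down in this provider.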
has_feature :install_options, :versionable, :virtual_packages
commands :yum => "yum", :rpm => "rpm", :python => "python"
self::YUMHELPER = File::join(File::dirname(__FILE__), "yumhelper.py")
if command('rpm')
confine :true => begin
rpm('--version')
rescue Puppet::ExecutionFailure
false
else
true
end
end
defaultfor :osfamily => :redhat
def self.prefetch(packages)
raise Puppet::Error, "The yum provider can only be used as root" if Process.euid != 0
super
end
# Retrieve the latest package version information for a given package name
# and combination of repos to enable and disable.
#
# @note If multiple package versions are defined (such as in the case where a
# package is built for multiple architectures), the first package found
# will be used.
#
# @api private
# @param package [String] The name of the package to query
# @param enablerepo [Array<String>] A list of repositories to enable for this query
# @param disablerepo [Array<String>] A list of repositories to disable for this query
# @return [Hash<Symbol, String>]
def self.latest_package_version(package, enablerepo, disablerepo)
key = [enablerepo, disablerepo]
@latest_versions ||= {}
if @latest_versions[key].nil?
@latest_versions[key] = fetch_latest_versions(enablerepo, disablerepo)
end
if @latest_versions[key][package]
@latest_versions[key][package].first
end
end
# Search for all installed packages that have newer versions, given a
# combination of repositories to enable and disable.
#
# @api private
# @param enablerepo [Array<String>] A list of repositories to enable for this query
# @param disablerepo [Array<String>] A list of repositories to disable for this query
# @return [Hash<String, Array<Hash<String, String>>>] All packages that were
# found with a list of found versions for each package.
def self.fetch_latest_versions(enablerepo, disablerepo)
latest_versions = Hash.new {|h, k| h[k] = []}
args = [self::YUMHELPER]
args.concat(enablerepo.map { |repo| ['-e', repo] }.flatten)
args.concat(disablerepo.map { |repo| ['-d', repo] }.flatten)
python(args).scan(/^_pkg (.*)$/) do |match|
hash = nevra_to_hash(match[0])
# Create entries keyed by both the bare package name and the name.arch
# form, since yum considers those as mostly interchangeable.
short_name = hash[:name]
long_name = "#{hash[:name]}.#{hash[:arch]}"
latest_versions[short_name] << hash
latest_versions[long_name] << hash
end
latest_versions
end
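# For reference, yumhelper.py emits lines such as "_pkg bash 0 4.2.46 31.el7 x86_64"
# (values illustrative), which nevra_to_hash parses into the usual
# name/epoch/version/release/arch hash.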
def self.clear
@latest_versions = nil
end
def install
wanted = @resource[:name]
# If not allowing virtual packages, do a query to ensure a real package exists
unless @resource.allow_virtual?
- yum '-d', '0', '-e', '0', '-y', :list, wanted
+ yum *['-d', '0', '-e', '0', '-y', install_options, :list, wanted].compact
end
should = @resource.should(:ensure)
self.debug "Ensuring => #{should}"
operation = :install
case should
when true, false, Symbol
# pass
should = nil
else
# Add the package version
wanted += "-#{should}"
is = self.query
if is && Puppet::Util::Package.versioncmp(should, is[:ensure]) < 0
self.debug "Downgrading package #{@resource[:name]} from version #{is[:ensure]} to #{should}"
operation = :downgrade
end
end
args = ["-d", "0", "-e", "0", "-y", install_options, operation, wanted].compact
yum *args
# If a version was specified, query again to see if it is a matching version
if should
is = self.query
raise Puppet::Error, "Could not find package #{self.name}" unless is
# FIXME: Should we raise an exception even if should == :latest
# and yum updated us to a version other than @param_hash[:ensure] ?
raise Puppet::Error, "Failed to update to version #{should}, got version #{is[:ensure]} instead" if should != is[:ensure]
end
end
# What's the latest package version available?
def latest
upd = self.class.latest_package_version(@resource[:name], enablerepo, disablerepo)
unless upd.nil?
# FIXME: there could be more than one update for a package
# because of multiarch
return "#{upd[:epoch]}:#{upd[:version]}-#{upd[:release]}"
else
# Yum didn't find updates, pretend the current
# version is the latest
raise Puppet::DevError, "Tried to get latest on a missing package" if properties[:ensure] == :absent
return properties[:ensure]
end
end
def update
# Install in yum can be used for update, too
self.install
end
def purge
yum "-y", :erase, @resource[:name]
end
# @deprecated
def latest_info
Puppet.deprecation_warning("#{self.class}##{__method__} is deprecated and is no longer used.")
@latest_info
end
# @deprecated
def latest_info=(latest)
Puppet.deprecation_warning("#{self.class}##{__method__} is deprecated and is no longer used.")
@latest_info = latest
end
private
def enablerepo
scan_options(resource[:install_options], '--enablerepo')
end
def disablerepo
scan_options(resource[:install_options], '--disablerepo')
end
# Scan a structure that looks like the package type 'install_options'
# structure for all hashes that have a specific key.
#
# @api private
# @param options [Array<String | Hash>, nil] The options structure. If the
# options are nil an empty array will be returned.
# @param key [String] The key to look for in all contained hashes
# @return [Array<String>] All hash values with the given key.
def scan_options(options, key)
return [] if options.nil?
options.inject([]) do |repos, opt|
if opt.is_a? Hash and opt[key]
repos << opt[key]
end
repos
end
end
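# Illustrative only:
#   scan_options([{ '--enablerepo' => 'epel' }, '--nogpgcheck'], '--enablerepo') #=> ['epel']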
end
diff --git a/lib/puppet/provider/package/zypper.rb b/lib/puppet/provider/package/zypper.rb
index c0af6a59e..ddfd34084 100644
--- a/lib/puppet/provider/package/zypper.rb
+++ b/lib/puppet/provider/package/zypper.rb
@@ -1,86 +1,90 @@
Puppet::Type.type(:package).provide :zypper, :parent => :rpm do
- desc "Support for SuSE `zypper` package manager. Found in SLES10sp2+ and SLES11"
+ desc "Support for SuSE `zypper` package manager. Found in SLES10sp2+ and SLES11.
+
+ This provider supports the `install_options` attribute, which allows command-line flags to be passed to zypper.
+ These options should be specified as a string (e.g. '--flag'), a hash (e.g. {'--flag' => 'value'}),
+ or an array where each element is either a string or a hash."
has_feature :versionable, :install_options, :virtual_packages
commands :zypper => "/usr/bin/zypper"
confine :operatingsystem => [:suse, :sles, :sled, :opensuse]
#on zypper versions <1.0, the version option returns 1
#some versions of zypper output on stderr
def zypper_version
cmd = [self.class.command(:zypper),"--version"]
execute(cmd, { :failonfail => false, :combine => true})
end
# Install a package using 'zypper'.
def install
should = @resource.should(:ensure)
self.debug "Ensuring => #{should}"
wanted = @resource[:name]
# XXX: We don't actually deal with epochs here.
case should
when true, false, Symbol
should = nil
else
# Add the package version
wanted = "#{wanted}-#{should}"
end
#This has been tested with following zypper versions
#SLE 10.2: 0.6.104
#SLE 11.0: 1.0.8
#OpenSuse 10.2: 0.6.13
#OpenSuse 11.2: 1.2.8
#Assume that this will work on newer zypper versions
#extract version numbers and convert to integers
major, minor, patch = zypper_version.scan(/\d+/).map{ |x| x.to_i }
self.debug "Detected zypper version #{major}.#{minor}.#{patch}"
#zypper version < 1.0 does not support --quiet flag
if major < 1
quiet = '--terse'
else
quiet = '--quiet'
end
options = [quiet, :install]
#zypper 0.6.13 (OpenSuSE 10.2) does not support auto agree with licenses
options << '--auto-agree-with-licenses' unless major < 1 and minor <= 6 and patch <= 13
options << '--no-confirm'
options << '--name' unless @resource.allow_virtual? || should
options += install_options if resource[:install_options]
options << wanted
zypper *options
unless self.query
raise Puppet::ExecutionFailure.new(
"Could not find package #{self.name}"
)
end
end
# What's the latest package version available?
def latest
#zypper can only get a list of *all* available packages?
output = zypper "list-updates"
if output =~ /#{Regexp.escape @resource[:name]}\s*\|.*?\|\s*([^\s\|]+)/
return $1
else
# zypper didn't find updates, pretend the current
# version is the latest
return @property_hash[:ensure]
end
end
def update
# zypper install can be used for update, too
self.install
end
end
diff --git a/lib/puppet/provider/parsedfile.rb b/lib/puppet/provider/parsedfile.rb
index 03ad1e194..37d0ec483 100644
--- a/lib/puppet/provider/parsedfile.rb
+++ b/lib/puppet/provider/parsedfile.rb
@@ -1,443 +1,459 @@
require 'puppet'
require 'puppet/util/filetype'
require 'puppet/util/fileparsing'
# This provider can be used as the parent class for a provider that
# parses and generates files. Its content must be loaded via the
# 'prefetch' method, and the file will be written when 'flush' is called
# on the provider instance. At this point, the file is written once
# for every provider instance.
#
# Once the provider prefetches the data, it's the resource's job to copy
# that data over to the @is variables.
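#
# A minimal sketch of a provider built on this class (type name, target and
# fields are hypothetical; real parsed-file providers follow this same shape):
#
#   Puppet::Type.type(:mytype).provide(:parsed,
#     :parent         => Puppet::Provider::ParsedFile,
#     :default_target => '/etc/myfile',
#     :filetype       => :flat
#   ) do
#     record_line :parsed, :fields => %w{name value}
#   end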
class Puppet::Provider::ParsedFile < Puppet::Provider
extend Puppet::Util::FileParsing
class << self
attr_accessor :default_target, :target
end
attr_accessor :property_hash
def self.clean(hash)
newhash = hash.dup
[:record_type, :on_disk].each do |p|
newhash.delete(p) if newhash.include?(p)
end
newhash
end
def self.clear
@target_objects.clear
@records.clear
end
def self.filetype
@filetype ||= Puppet::Util::FileType.filetype(:flat)
end
def self.filetype=(type)
if type.is_a?(Class)
@filetype = type
elsif klass = Puppet::Util::FileType.filetype(type)
@filetype = klass
else
raise ArgumentError, "Invalid filetype #{type}"
end
end
# Flush all of the targets for which there are modified records. The only
# reason we pass a record here is so that we can add it to the stack if
# necessary -- it's passed from the instance calling 'flush'.
def self.flush(record)
# Make sure this record is on the list to be flushed.
unless record[:on_disk]
record[:on_disk] = true
@records << record
# If we've just added the record, then make sure our
# target will get flushed.
modified(record[:target] || default_target)
end
return unless defined?(@modified) and ! @modified.empty?
flushed = []
begin
@modified.sort { |a,b| a.to_s <=> b.to_s }.uniq.each do |target|
Puppet.debug "Flushing #{@resource_type.name} provider target #{target}"
flushed << target
flush_target(target)
end
ensure
@modified.reject! { |t| flushed.include?(t) }
end
end
# Make sure our file is backed up, but only back it up once per transaction.
# We cheat and rely on the fact that @records is created on each prefetch.
def self.backup_target(target)
return nil unless target_object(target).respond_to?(:backup)
@backup_stats ||= {}
return nil if @backup_stats[target] == @records.object_id
target_object(target).backup
@backup_stats[target] = @records.object_id
end
# Flush all of the records relating to a specific target.
def self.flush_target(target)
backup_target(target)
records = target_records(target).reject { |r|
r[:ensure] == :absent
}
target_object(target).write(to_file(records))
end
# Return the header placed at the top of each generated file, warning
# users that modifying this file manually is probably a bad idea.
def self.header
%{# HEADER: This file was autogenerated at #{Time.now}
# HEADER: by puppet. While it can still be managed manually, it
# HEADER: is definitely not recommended.\n}
end
# An optional regular expression matched by third party headers.
#
# For example, this can be used to filter the vixie cron headers as
# erroneously exported by older cron versions.
#
# @api private
# @abstract Providers based on ParsedFile may implement this to make it
# possible to identify a header maintained by a third party tool.
# The provider can then allow that header to remain near the top of the
# written file, or remove it after composing the file content.
# If implemented, the function must return a Regexp object.
# The expression must be tailored to match exactly one third party header.
# @see drop_native_header
# @note When specifying regular expressions in multiline mode, avoid
# greedy repetitions such as '.*' (use .*? instead). Otherwise, the
# provider may drop file content between sparse headers.
def self.native_header_regex
nil
end
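# A hypothetical implementation guarding a vixie-cron style header might be:
#   def self.native_header_regex
#     /^# DO NOT EDIT THIS FILE.*?\n/m
#   end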
# How to handle third party headers.
# @api private
# @abstract Providers based on ParsedFile that make use of the support for
# third party headers may override this method to return +true+.
# When this is done, headers that are matched by the native_header_regex
# are not written back to disk.
# @see native_header_regex
def self.drop_native_header
false
end
# Add another type var.
def self.initvars
@records = []
@target_objects = {}
@target = nil
# Default to flat files
@filetype ||= Puppet::Util::FileType.filetype(:flat)
super
end
# Return a list of all of the records we can find.
def self.instances
targets.collect do |target|
prefetch_target(target)
end.flatten.reject { |r| skip_record?(r) }.collect do |record|
new(record)
end
end
# Override the default method with a lot more functionality.
def self.mk_resource_methods
[resource_type.validproperties, resource_type.parameters].flatten.each do |attr|
attr = attr.intern
define_method(attr) do
# If it's not a valid field for this record type (which can happen
# when different platforms support different fields), then just
# return the should value, so the resource shuts up.
if @property_hash[attr] or self.class.valid_attr?(self.class.name, attr)
@property_hash[attr] || :absent
else
if defined?(@resource)
@resource.should(attr)
else
nil
end
end
end
define_method(attr.to_s + "=") do |val|
mark_target_modified
@property_hash[attr] = val
end
end
end
# Always make the resource methods.
def self.resource_type=(resource)
super
mk_resource_methods
end
# Mark a target as modified so we know to flush it. This only gets
# used within the attr= methods.
def self.modified(target)
@modified ||= []
@modified << target unless @modified.include?(target)
end
# Retrieve all of the data from disk. There are three ways to know
# which files to retrieve: We might have a list of file objects already
# set up, there might be instances of our associated resource and they
# will have a path parameter set, and we will have a default path
# set. We need to turn those three locations into a list of files,
# prefetch each one, and make sure they're associated with each appropriate
# resource instance.
def self.prefetch(resources = nil)
# Reset the record list.
@records = prefetch_all_targets(resources)
match_providers_with_resources(resources)
end
# Match a list of catalog resources with provider instances
#
# @api private
#
# @param [Array<Puppet::Resource>] resources A list of resources using this class as a provider
def self.match_providers_with_resources(resources)
return unless resources
matchers = resources.dup
@records.each do |record|
# Skip things like comments and blank lines
next if skip_record?(record)
if (resource = resource_for_record(record, resources))
resource.provider = new(record)
elsif respond_to?(:match)
if resource = match(record, matchers)
matchers.delete(resource.title)
record[:name] = resource[:name]
resource.provider = new(record)
end
end
end
end
# Look up a resource based on a parsed file record
#
# @api private
#
# @param [Hash<Symbol, Object>] record
# @param [Array<Puppet::Resource>] resources
#
# @return [Puppet::Resource, nil] The resource if found, else nil
def self.resource_for_record(record, resources)
name = record[:name]
if name
resources[name]
end
end
def self.prefetch_all_targets(resources)
records = []
targets(resources).each do |target|
records += prefetch_target(target)
end
records
end
# Prefetch an individual target.
def self.prefetch_target(target)
begin
target_records = retrieve(target)
rescue Puppet::Util::FileType::FileReadError => detail
puts detail.backtrace if Puppet[:trace]
Puppet.err "Could not prefetch #{self.resource_type.name} provider '#{self.name}' target '#{target}': #{detail}. Treating as empty"
target_records = []
end
target_records.each do |r|
r[:on_disk] = true
r[:target] = target
r[:ensure] = :present
end
target_records = prefetch_hook(target_records) if respond_to?(:prefetch_hook)
raise Puppet::DevError, "Prefetching #{target} for provider #{self.name} returned nil" unless target_records
target_records
end
# Is there an existing record with this name?
def self.record?(name)
return nil unless @records
@records.find { |r| r[:name] == name }
end
# Retrieve and parse the text of the file. Returns an empty array in
# the unlikely event that the file doesn't exist.
def self.retrieve(path)
# XXX We need to be doing something special here in case of failure.
text = target_object(path).read
if text.nil? or text == ""
# there is no file
return []
else
# Set the target, for logging.
old = @target
begin
@target = path
return self.parse(text)
rescue Puppet::Error => detail
detail.file = @target if detail.respond_to?(:file=)
raise detail
ensure
@target = old
end
end
end
# Should we skip the record? Basically, we skip text records.
# This is only here so subclasses can override it.
def self.skip_record?(record)
record_type(record[:record_type]).text?
end
+ # The mode for generated files if they are newly created.
+ # No mode will be set on existing files.
+ #
+ # @abstract Providers inheriting parsedfile can override this method
+ # to provide a mode. The value should be suitable for File.chmod
+ def self.default_mode
+ nil
+ end
+
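# A subclass that wants newly created target files to be unreadable by other
# users could, for example, override this with:
#   def self.default_mode
#     0600
#   end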
# Initialize the object if necessary.
def self.target_object(target)
- @target_objects[target] ||= filetype.new(target)
+ # only send the default mode if the actual provider defined it,
+ # because certain filetypes (e.g. the crontab variants) do not
+ # expect it in their initialize method
+ if default_mode
+ @target_objects[target] ||= filetype.new(target, default_mode)
+ else
+ @target_objects[target] ||= filetype.new(target)
+ end
@target_objects[target]
end
# Find all of the records for a given target
def self.target_records(target)
@records.find_all { |r| r[:target] == target }
end
# Find a list of all of the targets that we should be reading. This is
# used to figure out what targets we need to prefetch.
def self.targets(resources = nil)
targets = []
# First get the default target
raise Puppet::DevError, "Parsed Providers must define a default target" unless self.default_target
targets << self.default_target
# Then get each of the file objects
targets += @target_objects.keys
# Lastly, check the file from any resource instances
if resources
resources.each do |name, resource|
if value = resource.should(:target)
targets << value
end
end
end
targets.uniq.compact
end
# Compose file contents from the set of records.
#
# If self.native_header_regex is not nil, possible vendor headers are
# identified by matching the return value against the expression.
# If one (or several consecutive) such headers, are found, they are
# either moved in front of the self.header if self.drop_native_header
# is false (this is the default), or removed from the return value otherwise.
#
# @api private
def self.to_file(records)
text = super
if native_header_regex and (match = text.match(native_header_regex))
if drop_native_header
# concatenate the text in front of and after the native header
text = match.pre_match + match.post_match
else
native_header = match[0]
return native_header + header + match.pre_match + match.post_match
end
end
header + text
end
def create
@resource.class.validproperties.each do |property|
if value = @resource.should(property)
@property_hash[property] = value
end
end
mark_target_modified
(@resource.class.name.to_s + "_created").intern
end
def destroy
# We use the method here so it marks the target as modified.
self.ensure = :absent
(@resource.class.name.to_s + "_deleted").intern
end
def exists?
!(@property_hash[:ensure] == :absent or @property_hash[:ensure].nil?)
end
# Write our data to disk.
def flush
# Make sure we've got a target and name set.
# If the target isn't set, then this is our first modification, so
# mark it for flushing.
unless @property_hash[:target]
@property_hash[:target] = @resource.should(:target) || self.class.default_target
self.class.modified(@property_hash[:target])
end
@resource.class.key_attributes.each do |attr|
@property_hash[attr] ||= @resource[attr]
end
self.class.flush(@property_hash)
end
def initialize(record)
super
# The 'record' could be a resource or a record, depending on how the provider
# is initialized. If we got an empty property hash (probably because the resource
# is just being initialized), then we want to set up some defaults.
@property_hash = self.class.record?(resource[:name]) || {:record_type => self.class.name, :ensure => :absent} if @property_hash.empty?
end
# Retrieve the current state from disk.
def prefetch
raise Puppet::DevError, "Somehow got told to prefetch with no resource set" unless @resource
self.class.prefetch(@resource[:name] => @resource)
end
def record_type
@property_hash[:record_type]
end
private
# Mark both the resource and provider target as modified.
def mark_target_modified
if defined?(@resource) and restarget = @resource.should(:target) and restarget != @property_hash[:target]
self.class.modified(restarget)
end
self.class.modified(@property_hash[:target]) if @property_hash[:target] != :absent and @property_hash[:target]
end
end
diff --git a/lib/puppet/provider/scheduled_task/win32_taskscheduler.rb b/lib/puppet/provider/scheduled_task/win32_taskscheduler.rb
index 5a8741122..91526d7fa 100644
--- a/lib/puppet/provider/scheduled_task/win32_taskscheduler.rb
+++ b/lib/puppet/provider/scheduled_task/win32_taskscheduler.rb
@@ -1,565 +1,559 @@
require 'puppet/parameter'
if Puppet.features.microsoft_windows?
- require 'win32/taskscheduler'
- require 'puppet/util/adsi'
+ require 'puppet/util/windows/taskscheduler'
end
Puppet::Type.type(:scheduled_task).provide(:win32_taskscheduler) do
- desc %q{This provider uses the win32-taskscheduler gem to manage scheduled
- tasks on Windows.
-
- Puppet requires version 0.2.1 or later of the win32-taskscheduler gem;
- previous versions can cause "Could not evaluate: The operation completed
- successfully" errors.}
+ desc %q{This provider manages scheduled tasks on Windows.}
defaultfor :operatingsystem => :windows
confine :operatingsystem => :windows
def self.instances
Win32::TaskScheduler.new.tasks.collect do |job_file|
job_title = File.basename(job_file, '.job')
new(
:provider => :win32_taskscheduler,
:name => job_title
)
end
end
def exists?
Win32::TaskScheduler.new.exists? resource[:name]
end
def task
return @task if @task
@task ||= Win32::TaskScheduler.new
@task.activate(resource[:name] + '.job') if exists?
@task
end
def clear_task
@task = nil
@triggers = nil
end
def enabled
task.flags & Win32::TaskScheduler::DISABLED == 0 ? :true : :false
end
def command
task.application_name
end
def arguments
task.parameters
end
def working_dir
task.working_directory
end
def user
account = task.account_information
return 'system' if account == ''
account
end
def trigger
return @triggers if @triggers
@triggers = []
task.trigger_count.times do |i|
trigger = begin
task.trigger(i)
rescue Win32::TaskScheduler::Error
# Win32::TaskScheduler can't handle all of the
# trigger types Windows uses, so we need to skip the
# unhandled types to prevent "puppet resource" from
# blowing up.
nil
end
next unless trigger and scheduler_trigger_types.include?(trigger['trigger_type'])
puppet_trigger = {}
case trigger['trigger_type']
when Win32::TaskScheduler::TASK_TIME_TRIGGER_DAILY
puppet_trigger['schedule'] = 'daily'
puppet_trigger['every'] = trigger['type']['days_interval'].to_s
when Win32::TaskScheduler::TASK_TIME_TRIGGER_WEEKLY
puppet_trigger['schedule'] = 'weekly'
puppet_trigger['every'] = trigger['type']['weeks_interval'].to_s
puppet_trigger['on'] = days_of_week_from_bitfield(trigger['type']['days_of_week'])
when Win32::TaskScheduler::TASK_TIME_TRIGGER_MONTHLYDATE
puppet_trigger['schedule'] = 'monthly'
puppet_trigger['months'] = months_from_bitfield(trigger['type']['months'])
puppet_trigger['on'] = days_from_bitfield(trigger['type']['days'])
when Win32::TaskScheduler::TASK_TIME_TRIGGER_MONTHLYDOW
puppet_trigger['schedule'] = 'monthly'
puppet_trigger['months'] = months_from_bitfield(trigger['type']['months'])
puppet_trigger['which_occurrence'] = occurrence_constant_to_name(trigger['type']['weeks'])
puppet_trigger['day_of_week'] = days_of_week_from_bitfield(trigger['type']['days_of_week'])
when Win32::TaskScheduler::TASK_TIME_TRIGGER_ONCE
puppet_trigger['schedule'] = 'once'
end
puppet_trigger['start_date'] = self.class.normalized_date("#{trigger['start_year']}-#{trigger['start_month']}-#{trigger['start_day']}")
puppet_trigger['start_time'] = self.class.normalized_time("#{trigger['start_hour']}:#{trigger['start_minute']}")
puppet_trigger['enabled'] = trigger['flags'] & Win32::TaskScheduler::TASK_TRIGGER_FLAG_DISABLED == 0
puppet_trigger['index'] = i
@triggers << puppet_trigger
end
@triggers = @triggers[0] if @triggers.length == 1
@triggers
end
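# An illustrative hash produced for a daily trigger (values hypothetical):
#   { 'schedule' => 'daily', 'every' => '2', 'start_date' => '2014-3-1',
#     'start_time' => '08:00', 'enabled' => true, 'index' => 0 }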
def user_insync?(current, should)
return false unless current
# Win32::TaskScheduler can return the 'SYSTEM' account as the
# empty string.
current = 'system' if current == ''
# By comparing account SIDs we don't have to worry about case
# sensitivity, or canonicalization of the account name.
- Puppet::Util::Windows::Security.name_to_sid(current) == Puppet::Util::Windows::Security.name_to_sid(should[0])
+ Puppet::Util::Windows::SID.name_to_sid(current) == Puppet::Util::Windows::SID.name_to_sid(should[0])
end
def trigger_insync?(current, should)
should = [should] unless should.is_a?(Array)
current = [current] unless current.is_a?(Array)
return false unless current.length == should.length
current_in_sync = current.all? do |c|
should.any? {|s| triggers_same?(c, s)}
end
should_in_sync = should.all? do |s|
current.any? {|c| triggers_same?(c,s)}
end
current_in_sync && should_in_sync
end
def command=(value)
task.application_name = value
end
def arguments=(value)
task.parameters = value
end
def working_dir=(value)
task.working_directory = value
end
def enabled=(value)
if value == :true
task.flags = task.flags & ~Win32::TaskScheduler::DISABLED
else
task.flags = task.flags | Win32::TaskScheduler::DISABLED
end
end
def trigger=(value)
desired_triggers = value.is_a?(Array) ? value : [value]
current_triggers = trigger.is_a?(Array) ? trigger : [trigger]
extra_triggers = []
desired_to_search = desired_triggers.dup
current_triggers.each do |current|
if found = desired_to_search.find {|desired| triggers_same?(current, desired)}
desired_to_search.delete(found)
else
extra_triggers << current['index']
end
end
needed_triggers = []
current_to_search = current_triggers.dup
desired_triggers.each do |desired|
if found = current_to_search.find {|current| triggers_same?(current, desired)}
current_to_search.delete(found)
else
needed_triggers << desired
end
end
extra_triggers.reverse_each do |index|
task.delete_trigger(index)
end
needed_triggers.each do |trigger_hash|
# Even though this is an assignment, the API for
# Win32::TaskScheduler ends up appending this trigger to the
# list of triggers for the task, while #add_trigger is only able
# to replace existing triggers. *shrug*
task.trigger = translate_hash_to_trigger(trigger_hash)
end
end
def user=(value)
- self.fail("Invalid user: #{value}") unless Puppet::Util::Windows::Security.name_to_sid(value)
+ self.fail("Invalid user: #{value}") unless Puppet::Util::Windows::SID.name_to_sid(value)
if value.to_s.downcase != 'system'
task.set_account_information(value, resource[:password])
else
# Win32::TaskScheduler treats a nil/empty username & password as
# requesting the SYSTEM account.
task.set_account_information(nil, nil)
end
end
def create
clear_task
@task = Win32::TaskScheduler.new(resource[:name], dummy_time_trigger)
self.command = resource[:command]
[:arguments, :working_dir, :enabled, :trigger, :user].each do |prop|
send("#{prop}=", resource[prop]) if resource[prop]
end
end
def destroy
Win32::TaskScheduler.new.delete(resource[:name] + '.job')
end
def flush
unless resource[:ensure] == :absent
self.fail('Parameter command is required.') unless resource[:command]
task.save
@task = nil
end
end
def triggers_same?(current_trigger, desired_trigger)
return false unless current_trigger['schedule'] == desired_trigger['schedule']
return false if current_trigger.has_key?('enabled') && !current_trigger['enabled']
desired = desired_trigger.dup
desired['every'] ||= current_trigger['every'] if current_trigger.has_key?('every')
desired['months'] ||= current_trigger['months'] if current_trigger.has_key?('months')
desired['on'] ||= current_trigger['on'] if current_trigger.has_key?('on')
desired['day_of_week'] ||= current_trigger['day_of_week'] if current_trigger.has_key?('day_of_week')
translate_hash_to_trigger(current_trigger) == translate_hash_to_trigger(desired)
end
def self.normalized_date(date_string)
date = Date.parse("#{date_string}")
"#{date.year}-#{date.month}-#{date.day}"
end
def self.normalized_time(time_string)
Time.parse("#{time_string}").strftime('%H:%M')
end
def dummy_time_trigger
now = Time.now
{
'flags' => 0,
'random_minutes_interval' => 0,
'end_day' => 0,
"end_year" => 0,
"trigger_type" => 0,
"minutes_interval" => 0,
"end_month" => 0,
"minutes_duration" => 0,
'start_year' => now.year,
'start_month' => now.month,
'start_day' => now.day,
'start_hour' => now.hour,
'start_minute' => now.min,
'trigger_type' => Win32::TaskScheduler::ONCE,
}
end
def translate_hash_to_trigger(puppet_trigger, user_provided_input=false)
trigger = dummy_time_trigger
if user_provided_input
- self.fail "'enabled' is read-only on triggers" if puppet_trigger.has_key?('enabled')
- self.fail "'index' is read-only on triggers" if puppet_trigger.has_key?('index')
+ self.fail "'enabled' is read-only on scheduled_task triggers and should be removed ('enabled' is usually provided in puppet resource scheduled_task)." if puppet_trigger.has_key?('enabled')
+ self.fail "'index' is read-only on scheduled_task triggers and should be removed ('index' is usually provided in puppet resource scheduled_task)." if puppet_trigger.has_key?('index')
end
puppet_trigger.delete('index')
if puppet_trigger.delete('enabled') == false
trigger['flags'] |= Win32::TaskScheduler::TASK_TRIGGER_FLAG_DISABLED
else
trigger['flags'] &= ~Win32::TaskScheduler::TASK_TRIGGER_FLAG_DISABLED
end
extra_keys = puppet_trigger.keys.sort - ['schedule', 'start_date', 'start_time', 'every', 'months', 'on', 'which_occurrence', 'day_of_week']
self.fail "Unknown trigger option(s): #{Puppet::Parameter.format_value_for_display(extra_keys)}" unless extra_keys.empty?
self.fail "Must specify 'start_time' when defining a trigger" unless puppet_trigger['start_time']
case puppet_trigger['schedule']
when 'daily'
trigger['trigger_type'] = Win32::TaskScheduler::DAILY
trigger['type'] = {
'days_interval' => Integer(puppet_trigger['every'] || 1)
}
when 'weekly'
trigger['trigger_type'] = Win32::TaskScheduler::WEEKLY
trigger['type'] = {
'weeks_interval' => Integer(puppet_trigger['every'] || 1)
}
trigger['type']['days_of_week'] = if puppet_trigger['day_of_week']
bitfield_from_days_of_week(puppet_trigger['day_of_week'])
else
scheduler_days_of_week.inject(0) {|day_flags,day| day_flags |= day}
end
when 'monthly'
trigger['type'] = {
'months' => bitfield_from_months(puppet_trigger['months'] || (1..12).to_a),
}
if puppet_trigger.keys.include?('on')
if puppet_trigger.has_key?('day_of_week') or puppet_trigger.has_key?('which_occurrence')
self.fail "Neither 'day_of_week' nor 'which_occurrence' can be specified when creating a monthly date-based trigger"
end
trigger['trigger_type'] = Win32::TaskScheduler::MONTHLYDATE
trigger['type']['days'] = bitfield_from_days(puppet_trigger['on'])
elsif puppet_trigger.keys.include?('which_occurrence') or puppet_trigger.keys.include?('day_of_week')
self.fail 'which_occurrence cannot be specified as an array' if puppet_trigger['which_occurrence'].is_a?(Array)
%w{day_of_week which_occurrence}.each do |field|
self.fail "#{field} must be specified when creating a monthly day-of-week based trigger" unless puppet_trigger.has_key?(field)
end
trigger['trigger_type'] = Win32::TaskScheduler::MONTHLYDOW
trigger['type']['weeks'] = occurrence_name_to_constant(puppet_trigger['which_occurrence'])
trigger['type']['days_of_week'] = bitfield_from_days_of_week(puppet_trigger['day_of_week'])
else
self.fail "Don't know how to create a 'monthly' schedule with the options: #{puppet_trigger.keys.sort.join(', ')}"
end
when 'once'
self.fail "Must specify 'start_date' when defining a one-time trigger" unless puppet_trigger['start_date']
trigger['trigger_type'] = Win32::TaskScheduler::ONCE
else
self.fail "Unknown schedule type: #{puppet_trigger["schedule"].inspect}"
end
if start_date = puppet_trigger['start_date']
start_date = Date.parse(start_date)
self.fail "start_date must be on or after 1753-01-01" unless start_date >= Date.new(1753, 1, 1)
trigger['start_year'] = start_date.year
trigger['start_month'] = start_date.month
trigger['start_day'] = start_date.day
end
start_time = Time.parse(puppet_trigger['start_time'])
trigger['start_hour'] = start_time.hour
trigger['start_minute'] = start_time.min
trigger
end
def validate_trigger(value)
value = [value] unless value.is_a?(Array)
# translate_hash_to_trigger handles the same validation that we
# would be doing here at the individual trigger level.
value.each {|t| translate_hash_to_trigger(t, true)}
true
end
private
def bitfield_from_months(months)
bitfield = 0
months = [months] unless months.is_a?(Array)
months.each do |month|
integer_month = Integer(month) rescue nil
self.fail 'Month must be specified as an integer in the range 1-12' unless integer_month == month.to_f and integer_month.between?(1,12)
bitfield |= scheduler_months[integer_month - 1]
end
bitfield
end
def bitfield_from_days(days)
bitfield = 0
days = [days] unless days.is_a?(Array)
days.each do |day|
# The special "day" of 'last' is represented by day "number"
# 32. 'last' has the special meaning of "the last day of the
# month", no matter how many days there are in the month.
day = 32 if day == 'last'
integer_day = Integer(day)
self.fail "Day must be specified as an integer in the range 1-31, or as 'last'" unless integer_day = day.to_f and integer_day.between?(1,32)
bitfield |= 1 << integer_day - 1
end
bitfield
end
def bitfield_from_days_of_week(days_of_week)
bitfield = 0
days_of_week = [days_of_week] unless days_of_week.is_a?(Array)
days_of_week.each do |day_of_week|
bitfield |= day_of_week_name_to_constant(day_of_week)
end
bitfield
end
def months_from_bitfield(bitfield)
months = []
scheduler_months.each do |month|
if bitfield & month != 0
months << month_constant_to_number(month)
end
end
months
end
def days_from_bitfield(bitfield)
days = []
i = 0
while bitfield > 0
if bitfield & 1 > 0
# Day 32 has the special meaning of "the last day of the
# month", no matter how many days there are in the month.
days << (i == 31 ? 'last' : i + 1)
end
bitfield = bitfield >> 1
i += 1
end
days
end
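# Illustrative round trip: bitfield_from_days([1, 15, 'last']) sets bits 0, 14 and 31,
# and days_from_bitfield of that value yields [1, 15, 'last'].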
def days_of_week_from_bitfield(bitfield)
days_of_week = []
scheduler_days_of_week.each do |day_of_week|
if bitfield & day_of_week != 0
days_of_week << day_of_week_constant_to_name(day_of_week)
end
end
days_of_week
end
def scheduler_trigger_types
[
Win32::TaskScheduler::TASK_TIME_TRIGGER_DAILY,
Win32::TaskScheduler::TASK_TIME_TRIGGER_WEEKLY,
Win32::TaskScheduler::TASK_TIME_TRIGGER_MONTHLYDATE,
Win32::TaskScheduler::TASK_TIME_TRIGGER_MONTHLYDOW,
Win32::TaskScheduler::TASK_TIME_TRIGGER_ONCE
]
end
def scheduler_days_of_week
[
Win32::TaskScheduler::SUNDAY,
Win32::TaskScheduler::MONDAY,
Win32::TaskScheduler::TUESDAY,
Win32::TaskScheduler::WEDNESDAY,
Win32::TaskScheduler::THURSDAY,
Win32::TaskScheduler::FRIDAY,
Win32::TaskScheduler::SATURDAY
]
end
def scheduler_months
[
Win32::TaskScheduler::JANUARY,
Win32::TaskScheduler::FEBRUARY,
Win32::TaskScheduler::MARCH,
Win32::TaskScheduler::APRIL,
Win32::TaskScheduler::MAY,
Win32::TaskScheduler::JUNE,
Win32::TaskScheduler::JULY,
Win32::TaskScheduler::AUGUST,
Win32::TaskScheduler::SEPTEMBER,
Win32::TaskScheduler::OCTOBER,
Win32::TaskScheduler::NOVEMBER,
Win32::TaskScheduler::DECEMBER
]
end
def scheduler_occurrences
[
Win32::TaskScheduler::FIRST_WEEK,
Win32::TaskScheduler::SECOND_WEEK,
Win32::TaskScheduler::THIRD_WEEK,
Win32::TaskScheduler::FOURTH_WEEK,
Win32::TaskScheduler::LAST_WEEK
]
end
def day_of_week_constant_to_name(constant)
case constant
when Win32::TaskScheduler::SUNDAY; 'sun'
when Win32::TaskScheduler::MONDAY; 'mon'
when Win32::TaskScheduler::TUESDAY; 'tues'
when Win32::TaskScheduler::WEDNESDAY; 'wed'
when Win32::TaskScheduler::THURSDAY; 'thurs'
when Win32::TaskScheduler::FRIDAY; 'fri'
when Win32::TaskScheduler::SATURDAY; 'sat'
end
end
def day_of_week_name_to_constant(name)
case name
when 'sun'; Win32::TaskScheduler::SUNDAY
when 'mon'; Win32::TaskScheduler::MONDAY
when 'tues'; Win32::TaskScheduler::TUESDAY
when 'wed'; Win32::TaskScheduler::WEDNESDAY
when 'thurs'; Win32::TaskScheduler::THURSDAY
when 'fri'; Win32::TaskScheduler::FRIDAY
when 'sat'; Win32::TaskScheduler::SATURDAY
end
end
def month_constant_to_number(constant)
month_num = 1
while constant >> month_num - 1 > 1
month_num += 1
end
month_num
end
def occurrence_constant_to_name(constant)
case constant
when Win32::TaskScheduler::FIRST_WEEK; 'first'
when Win32::TaskScheduler::SECOND_WEEK; 'second'
when Win32::TaskScheduler::THIRD_WEEK; 'third'
when Win32::TaskScheduler::FOURTH_WEEK; 'fourth'
when Win32::TaskScheduler::LAST_WEEK; 'last'
end
end
def occurrence_name_to_constant(name)
case name
when 'first'; Win32::TaskScheduler::FIRST_WEEK
when 'second'; Win32::TaskScheduler::SECOND_WEEK
when 'third'; Win32::TaskScheduler::THIRD_WEEK
when 'fourth'; Win32::TaskScheduler::FOURTH_WEEK
when 'last'; Win32::TaskScheduler::LAST_WEEK
end
end
end
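The day, month and day-of-week helpers above all pack user-supplied lists into the bitmasks Win32::TaskScheduler expects. A minimal standalone sketch of the day encoding (hypothetical helper name, not part of the provider):

# Each day of the month maps to bit (day - 1); the special value 'last'
# is treated as day 32 and lands in bit 31.
def days_to_bitfield(days)
  Array(days).inject(0) do |bitfield, day|
    day = 32 if day == 'last'
    bitfield | (1 << (Integer(day) - 1))
  end
end

days_to_bitfield([1, 15, 'last'])  # => 0x80004001
# Decoding walks the bits back out in the same order, emitting 'last' for bit 31.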
diff --git a/lib/puppet/provider/service/freebsd.rb b/lib/puppet/provider/service/freebsd.rb
index 1e59fc47d..91a3e0244 100644
--- a/lib/puppet/provider/service/freebsd.rb
+++ b/lib/puppet/provider/service/freebsd.rb
@@ -1,143 +1,143 @@
Puppet::Type.type(:service).provide :freebsd, :parent => :init do
desc "Provider for FreeBSD and DragonFly BSD. Uses the `rcvar` argument of init scripts and parses/edits rc files."
confine :operatingsystem => [:freebsd, :dragonfly]
defaultfor :operatingsystem => [:freebsd, :dragonfly]
def rcconf() '/etc/rc.conf' end
def rcconf_local() '/etc/rc.conf.local' end
def rcconf_dir() '/etc/rc.conf.d' end
def self.defpath
superclass.defpath
end
def error(msg)
raise Puppet::Error, msg
end
# Executing an init script with the 'rcvar' argument returns
# the service name, rcvar name and whether it's enabled/disabled
def rcvar
rcvar = execute([self.initscript, :rcvar], :failonfail => true, :combine => false, :squelch => false)
rcvar = rcvar.split("\n")
rcvar.delete_if {|str| str =~ /^#\s*$/}
rcvar[1] = rcvar[1].gsub(/^\$/, '')
rcvar
end
+ # Extract value name from service or rcvar
+ def extract_value_name(name, rc_index, regex, regex_index)
+ value_name = self.rcvar[rc_index]
+ self.error("No #{name} name found in rcvar") if value_name.nil?
+ value_name = value_name.gsub!(regex, regex_index)
+ self.error("#{name} name is empty") if value_name.nil?
+ self.debug("#{name} name is #{value_name}")
+ value_name
+ end
+
# Extract service name
def service_name
- name = self.rcvar[0]
- self.error("No service name found in rcvar") if name.nil?
- name = name.gsub!(/# (.*)/, '\1')
- self.error("Service name is empty") if name.nil?
- self.debug("Service name is #{name}")
- name
+ extract_value_name('service', 0, /# (.*)/, '\1')
end
# Extract rcvar name
def rcvar_name
- name = self.rcvar[1]
- self.error("No rcvar name found in rcvar") if name.nil?
- name = name.gsub!(/(.*?)(_enable)?=(.*)/, '\1')
- self.error("rcvar name is empty") if name.nil?
- self.debug("rcvar name is #{name}")
- name
+ extract_value_name('rcvar', 1, /(.*?)(_enable)?=(.*)/, '\1')
end
# Extract rcvar value
def rcvar_value
value = self.rcvar[1]
self.error("No rcvar value found in rcvar") if value.nil?
value = value.gsub!(/(.*)(_enable)?="?(\w+)"?/, '\3')
self.error("rcvar value is empty") if value.nil?
self.debug("rcvar value is #{value}")
value
end
# Edit rc files and set the service to yes/no
def rc_edit(yesno)
service = self.service_name
rcvar = self.rcvar_name
self.debug("Editing rc files: setting #{rcvar} to #{yesno} for #{service}")
self.rc_add(service, rcvar, yesno) if not self.rc_replace(service, rcvar, yesno)
end
# Try to find an existing setting in the rc files
# and replace the value
def rc_replace(service, rcvar, yesno)
success = false
# Replace in all files, not just in the first found with a match
[rcconf, rcconf_local, rcconf_dir + "/#{service}"].each do |filename|
if Puppet::FileSystem.exist?(filename)
s = File.read(filename)
if s.gsub!(/^(#{rcvar}(_enable)?)=\"?(YES|NO)\"?/, "\\1=\"#{yesno}\"")
File.open(filename, File::WRONLY) { |f| f << s }
self.debug("Replaced in #{filename}")
success = true
end
end
end
success
end
# Add a new setting to the rc files
def rc_add(service, rcvar, yesno)
append = "\# Added by Puppet\n#{rcvar}_enable=\"#{yesno}\"\n"
# First, try the one-file-per-service style
if Puppet::FileSystem.exist?(rcconf_dir)
File.open(rcconf_dir + "/#{service}", File::WRONLY | File::APPEND | File::CREAT, 0644) {
|f| f << append
self.debug("Appended to #{f.path}")
}
else
# Else, check the local rc file first, but don't create it
if Puppet::FileSystem.exist?(rcconf_local)
File.open(rcconf_local, File::WRONLY | File::APPEND) {
|f| f << append
self.debug("Appended to #{f.path}")
}
else
# At last use the standard rc.conf file
File.open(rcconf, File::WRONLY | File::APPEND | File::CREAT, 0644) {
|f| f << append
self.debug("Appended to #{f.path}")
}
end
end
end
def enabled?
if /YES$/ =~ self.rcvar_value
self.debug("Is enabled")
return :true
end
self.debug("Is disabled")
:false
end
def enable
self.debug("Enabling")
self.rc_edit("YES")
end
def disable
self.debug("Disabling")
self.rc_edit("NO")
end
def startcmd
[self.initscript, :onestart]
end
def stopcmd
[self.initscript, :onestop]
end
def statuscmd
[self.initscript, :onestatus]
end
end
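For context, extract_value_name above applies a regex to one line of the init script's `rcvar` output. A rough illustration with assumed sample strings (real output varies by service and FreeBSD version):

rcvar_output = ['# ntpd', 'ntpd_enable="YES"']

service_name = rcvar_output[0].gsub(/# (.*)/, '\1')                # => "ntpd"
rcvar_name   = rcvar_output[1].gsub(/(.*?)(_enable)?=(.*)/, '\1')  # => "ntpd"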
diff --git a/lib/puppet/provider/service/init.rb b/lib/puppet/provider/service/init.rb
index 19e171571..387212f11 100644
--- a/lib/puppet/provider/service/init.rb
+++ b/lib/puppet/provider/service/init.rb
@@ -1,164 +1,169 @@
# The standard init-based service type. Many other service types are
# customizations of this module.
Puppet::Type.type(:service).provide :init, :parent => :base do
desc "Standard `init`-style service management."
def self.defpath
case Facter.value(:operatingsystem)
when "FreeBSD", "DragonFly"
["/etc/rc.d", "/usr/local/etc/rc.d"]
when "HP-UX"
"/sbin/init.d"
when "Archlinux"
"/etc/rc.d"
else
"/etc/init.d"
end
end
# We can't confine this here, because the init path can be overridden.
#confine :exists => defpath
# some init scripts are not safe to execute, e.g. we do not want
# to suddenly run /etc/init.d/reboot.sh status and reboot our system. The
# exclude list could be platform agnostic but I assume an invalid init script
# on system A will never be a valid init script on system B
def self.excludes
excludes = []
# this exclude list was found with grep -L '\/sbin\/runscript' /etc/init.d/* on gentoo
excludes += %w{functions.sh reboot.sh shutdown.sh}
# this exclude list is all from /sbin/service (5.x), but I did not exclude kudzu
excludes += %w{functions halt killall single linuxconf reboot boot}
# 'wait-for-state' and 'portmap-wait' are excluded from instances here
# because they take parameters that have unclear meaning. It looks like
# 'wait-for-state' is a generic waiter mainly used internally for other
# upstart services as a 'sleep until something happens'
# (http://lists.debian.org/debian-devel/2012/02/msg01139.html), while
# 'portmap-wait' is a specific instance of a waiter. There is an open
# launchpad bug
# (https://bugs.launchpad.net/ubuntu/+source/upstart/+bug/962047) that may
# eventually explain how to use the wait-for-state service or perhaps why
# it should remain excluded. When that bug is addressed this should be
# reexamined.
excludes += %w{wait-for-state portmap-wait}
# these excludes were found with grep -r -L start /etc/init.d
excludes += %w{rcS module-init-tools}
# Prevent puppet failing to get status of the new service introduced
# by the fix for this (bug https://bugs.launchpad.net/ubuntu/+source/lightdm/+bug/982889)
# due to puppet's inability to deal with upstart services with instances.
excludes += %w{plymouth-ready}
# Prevent puppet failing to get status of these services, which need parameters
# passed in (see https://bugs.launchpad.net/ubuntu/+source/puppet/+bug/1276766).
excludes += %w{idmapd-mounting startpar-bridge}
+ # Prevent puppet from failing to get status of these services, which are
+ # additional upstart services with instances
+ excludes += %w{cryptdisks-udev}
+ excludes += %w{statd-mounting}
+ excludes += %w{gssd-mounting}
end
# List all services of this type.
def self.instances
get_services(self.defpath)
end
def self.get_services(defpath, exclude = self.excludes)
defpath = [defpath] unless defpath.is_a? Array
instances = []
defpath.each do |path|
unless FileTest.directory?(path)
Puppet.debug "Service path #{path} does not exist"
next
end
check = [:ensure]
check << :enable if public_method_defined? :enabled?
Dir.entries(path).each do |name|
fullpath = File.join(path, name)
next if name =~ /^\./
next if exclude.include? name
next if not FileTest.executable?(fullpath)
next if not is_init?(fullpath)
instances << new(:name => name, :path => path, :hasstatus => true)
end
end
instances
end
# Mark that our init script supports 'status' commands.
def hasstatus=(value)
case value
when true, "true"; @parameters[:hasstatus] = true
when false, "false"; @parameters[:hasstatus] = false
else
raise Puppet::Error, "Invalid 'hasstatus' value #{value.inspect}"
end
end
# Where is our init script?
def initscript
@initscript ||= self.search(@resource[:name])
end
def paths
@paths ||= @resource[:path].find_all do |path|
if File.directory?(path)
true
else
if Puppet::FileSystem.exist?(path)
self.debug "Search path #{path} is not a directory"
else
self.debug "Search path #{path} does not exist"
end
false
end
end
end
def search(name)
paths.each do |path|
fqname = File.join(path,name)
if Puppet::FileSystem.exist? fqname
return fqname
else
self.debug("Could not find #{name} in #{path}")
end
end
paths.each do |path|
fqname_sh = File.join(path,"#{name}.sh")
if Puppet::FileSystem.exist? fqname_sh
return fqname_sh
else
self.debug("Could not find #{name}.sh in #{path}")
end
end
raise Puppet::Error, "Could not find init script for '#{name}'"
end
# The start command is just the init script with 'start'.
def startcmd
[initscript, :start]
end
# The stop command is just the init script with 'stop'.
def stopcmd
[initscript, :stop]
end
def restartcmd
(@resource[:hasrestart] == :true) && [initscript, :restart]
end
# If it was specified that the init script has a 'status' command, then
# we just return that; otherwise, we return false, which causes it to
# fall back to other mechanisms.
def statuscmd
(@resource[:hasstatus] == :true) && [initscript, :status]
end
private
def self.is_init?(script = initscript)
file = Puppet::FileSystem.pathname(script)
!Puppet::FileSystem.symlink?(file) || Puppet::FileSystem.readlink(file) != "/lib/init/upstart-job"
end
end
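A minimal sketch of the filtering that get_services and is_init? perform above, with an assumed search path; hidden files, excluded names, non-executables and upstart-job symlinks are all skipped before a provider instance is built:

path    = '/etc/init.d'                                # assumed search path
exclude = %w{functions halt reboot plymouth-ready}     # abbreviated exclude list
Dir.entries(path).each do |name|
  fullpath = File.join(path, name)
  next if name.start_with?('.')
  next if exclude.include?(name)
  next unless File.executable?(fullpath)
  # upstart jobs masquerading as init scripts are symlinks to /lib/init/upstart-job
  next if File.symlink?(fullpath) && File.readlink(fullpath) == '/lib/init/upstart-job'
  puts name                                            # would become new(:name => name, ...)
end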
diff --git a/lib/puppet/provider/service/launchd.rb b/lib/puppet/provider/service/launchd.rb
index 49f14d619..b65c79261 100644
--- a/lib/puppet/provider/service/launchd.rb
+++ b/lib/puppet/provider/service/launchd.rb
@@ -1,356 +1,353 @@
require 'facter/util/plist'
Puppet::Type.type(:service).provide :launchd, :parent => :base do
desc <<-'EOT'
This provider manages jobs with `launchd`, which is the default service
framework for Mac OS X (and may be available for use on other platforms).
For `launchd` documentation, see:
* <http://developer.apple.com/macosx/launchd.html>
* <http://launchd.macosforge.org/>
This provider reads plists out of the following directories:
* `/System/Library/LaunchDaemons`
* `/System/Library/LaunchAgents`
* `/Library/LaunchDaemons`
* `/Library/LaunchAgents`
...and builds up a list of services based upon each plist's "Label" entry.
This provider supports:
* ensure => running/stopped,
* enable => true/false
* status
* restart
Here is how the Puppet states correspond to `launchd` states:
* stopped --- job unloaded
* started --- job loaded
* enabled --- 'Disable' removed from job plist file
* disabled --- 'Disable' added to job plist file
Note that this allows you to do something `launchctl` can't do, which is to
be in a state of "stopped/enabled" or "running/disabled".
Note that this provider does not support overriding 'restart' or 'status'.
EOT
include Puppet::Util::Warnings
commands :launchctl => "/bin/launchctl"
commands :sw_vers => "/usr/bin/sw_vers"
commands :plutil => "/usr/bin/plutil"
defaultfor :operatingsystem => :darwin
confine :operatingsystem => :darwin
has_feature :enableable
has_feature :refreshable
mk_resource_methods
# These are the paths in OS X where a launchd service plist could
# exist. This is a helper method, versus a constant, for easy testing
# and mocking
#
# @api private
def self.launchd_paths
[
"/Library/LaunchAgents",
"/Library/LaunchDaemons",
"/System/Library/LaunchAgents",
"/System/Library/LaunchDaemons"
]
end
# Defines the path to the overrides plist file where service enabling
# behavior is defined in 10.6 and greater.
#
# @api private
def self.launchd_overrides
"/var/db/launchd.db/com.apple.launchd/overrides.plist"
end
# Caching is enabled through the following three methods. Self.prefetch will
# call self.instances to create an instance for each service. Self.flush will
# clear out our cache when we're done.
def self.prefetch(resources)
instances.each do |prov|
if resource = resources[prov.name]
resource.provider = prov
end
end
end
# Self.instances will return an array with each element being a hash
# containing the name, provider, path, and status of each service on the
# system.
def self.instances
jobs = self.jobsearch
@job_list ||= self.job_list
jobs.keys.collect do |job|
job_status = @job_list.has_key?(job) ? :running : :stopped
new(:name => job, :provider => :launchd, :path => jobs[job], :status => job_status)
end
end
# This method will return a list of files in the passed directory. This method
# does not go recursively down the tree and does not return directories
#
# @param path [String] The directory to glob
#
# @api private
#
# @return [Array] of String instances modeling file paths
def self.return_globbed_list_of_file_paths(path)
array_of_files = Dir.glob(File.join(path, '*')).collect do |filepath|
File.file?(filepath) ? filepath : nil
end
array_of_files.compact
end
# Get a hash of all launchd plists, keyed by label. This value is cached, but
# the cache will be refreshed if refresh is true.
#
# @api private
def self.make_label_to_path_map(refresh=false)
return @label_to_path_map if @label_to_path_map and not refresh
@label_to_path_map = {}
launchd_paths.each do |path|
return_globbed_list_of_file_paths(path).each do |filepath|
job = read_plist(filepath)
next if job.nil?
if job.has_key?("Label")
@label_to_path_map[job["Label"]] = filepath
else
Puppet.warning("The #{filepath} plist does not contain a 'label' key; " +
"Puppet is skipping it")
next
end
end
end
@label_to_path_map
end
# Sets a class instance variable with a hash of all launchd plist files that
# are found on the system. The key of the hash is the job id and the value
# is the path to the file. If a label is passed, we return the job id and
# path for that specific job.
def self.jobsearch(label=nil)
by_label = make_label_to_path_map
if label
if by_label.has_key? label
return { label => by_label[label] }
else
# try refreshing the map, in case a plist has been added in the interim
by_label = make_label_to_path_map(true)
if by_label.has_key? label
return { label => by_label[label] }
else
raise Puppet::Error, "Unable to find launchd plist for job: #{label}"
end
end
else
# caller wants the whole map
by_label
end
end
# This status method lists out all currently running services.
# This hash is returned at the end of the method.
def self.job_list
@job_list = Hash.new
begin
output = launchctl :list
raise Puppet::Error.new("launchctl list failed to return any data.") if output.nil?
output.split("\n").each do |line|
@job_list[line.split(/\s/).last] = :running
end
rescue Puppet::ExecutionFailure
raise Puppet::Error.new("Unable to determine status of #{resource[:name]}", $!)
end
@job_list
end
# Launchd implemented plist overrides in version 10.6.
# This method checks the major_version of OS X and returns true if
# it is 10.6 or greater. This allows us to implement different plist
# behavior for versions >= 10.6
def has_macosx_plist_overrides?
@product_version ||= self.class.get_macosx_version_major
# (#11593) Remove support for OS X 10.4 & earlier
# leaving this as is because 10.5 still didn't have plist support
return true unless /^10\.[0-5]/.match(@product_version)
return false
end
# Read a plist, whether its format is XML or in Apple's "binary1"
# format.
def self.read_plist(path)
begin
Plist::parse_xml(plutil('-convert', 'xml1', '-o', '/dev/stdout', path))
rescue Puppet::ExecutionFailure => detail
Puppet.warning("Cannot read file #{path}; Puppet is skipping it. \n" +
"Details: #{detail}")
return nil
end
end
# Clean out the @property_hash variable containing the cached list of services
def flush
@property_hash.clear
end
def exists?
Puppet.debug("Puppet::Provider::Launchd:Ensure for #{@property_hash[:name]}: #{@property_hash[:ensure]}")
@property_hash[:ensure] != :absent
end
def self.get_macosx_version_major
return @macosx_version_major if @macosx_version_major
begin
- # Make sure we've loaded all of the facts
- Facter.loadfacts
-
product_version_major = Facter.value(:macosx_productversion_major)
fail("#{product_version_major} is not supported by the launchd provider") if %w{10.0 10.1 10.2 10.3 10.4}.include?(product_version_major)
@macosx_version_major = product_version_major
return @macosx_version_major
rescue Puppet::ExecutionFailure => detail
self.fail Puppet::Error, "Could not determine OS X version: #{detail}", detail
end
end
# finds the path for a given label and returns the path and parsed plist
# as an array of [path, plist]. Note plist is really a Hash here.
def plist_from_label(label)
job = self.class.jobsearch(label)
job_path = job[label]
if FileTest.file?(job_path)
job_plist = self.class.read_plist(job_path)
else
raise Puppet::Error.new("Unable to parse launchd plist at path: #{job_path}")
end
[job_path, job_plist]
end
# start the service. To get to a state of running/enabled, we need to
# conditionally enable at load, then disable by modifying the plist file
# directly.
def start
return ucommand(:start) if resource[:start]
job_path, job_plist = plist_from_label(resource[:name])
did_enable_job = false
cmds = []
cmds << :launchctl << :load
if self.enabled? == :false || self.status == :stopped # launchctl won't load disabled jobs
cmds << "-w"
did_enable_job = true
end
cmds << job_path
begin
execute(cmds)
rescue Puppet::ExecutionFailure
raise Puppet::Error.new("Unable to start service: #{resource[:name]} at path: #{job_path}", $!)
end
# As load -w clears the Disabled flag, we need to add it in after
self.disable if did_enable_job and resource[:enable] == :false
end
def stop
return ucommand(:stop) if resource[:stop]
job_path, job_plist = plist_from_label(resource[:name])
did_disable_job = false
cmds = []
cmds << :launchctl << :unload
if self.enabled? == :true # keepalive jobs can't be stopped without disabling
cmds << "-w"
did_disable_job = true
end
cmds << job_path
begin
execute(cmds)
rescue Puppet::ExecutionFailure
raise Puppet::Error.new("Unable to stop service: #{resource[:name]} at path: #{job_path}", $!)
end
# As unload -w sets the Disabled flag, we need to add it in after
self.enable if did_disable_job and resource[:enable] == :true
end
def restart
Puppet.debug("A restart has been triggered for the #{resource[:name]} service")
Puppet.debug("Stopping the #{resource[:name]} service")
self.stop
Puppet.debug("Starting the #{resource[:name]} service")
self.start
end
# launchd jobs are enabled by default. They are only disabled if the key
# "Disabled" is set to true, but it can also be set to false to enable it.
# Starting in 10.6, the Disabled key in the job plist is consulted, but only
# if there is no entry in the global overrides plist. We need to draw a
# distinction between undefined, true and false for both locations where the
# Disabled flag can be defined.
def enabled?
job_plist_disabled = nil
overrides_disabled = nil
job_path, job_plist = plist_from_label(resource[:name])
job_plist_disabled = job_plist["Disabled"] if job_plist.has_key?("Disabled")
if has_macosx_plist_overrides?
if FileTest.file?(self.class.launchd_overrides) and overrides = self.class.read_plist(self.class.launchd_overrides)
if overrides.has_key?(resource[:name])
overrides_disabled = overrides[resource[:name]]["Disabled"] if overrides[resource[:name]].has_key?("Disabled")
end
end
end
if overrides_disabled.nil?
if job_plist_disabled.nil? or job_plist_disabled == false
return :true
end
elsif overrides_disabled == false
return :true
end
:false
end
# enable and disable are a bit hacky. We write out the plist with the appropriate value
# rather than dealing with launchctl as it is unable to change the Disabled flag
# without actually loading/unloading the job.
# Starting in 10.6 we need to write out a disabled key to the global
# overrides plist, in earlier versions this is stored in the job plist itself.
def enable
if has_macosx_plist_overrides?
overrides = self.class.read_plist(self.class.launchd_overrides)
overrides[resource[:name]] = { "Disabled" => false }
Plist::Emit.save_plist(overrides, self.class.launchd_overrides)
else
job_path, job_plist = plist_from_label(resource[:name])
if self.enabled? == :false
job_plist.delete("Disabled")
Plist::Emit.save_plist(job_plist, job_path)
end
end
end
def disable
if has_macosx_plist_overrides?
overrides = self.class.read_plist(self.class.launchd_overrides)
overrides[resource[:name]] = { "Disabled" => true }
Plist::Emit.save_plist(overrides, self.class.launchd_overrides)
else
job_path, job_plist = plist_from_label(resource[:name])
job_plist["Disabled"] = true
Plist::Emit.save_plist(job_plist, job_path)
end
end
end
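The enabled? logic above combines two Disabled flags. A pure-function sketch of that decision (method and argument names are hypothetical): the overrides plist wins whenever it has an entry, otherwise the job plist's Disabled key is consulted, and a missing key means enabled.

def enabled_state(job_disabled, overrides_disabled)
  if overrides_disabled.nil?
    (job_disabled.nil? || job_disabled == false) ? :true : :false
  else
    overrides_disabled == false ? :true : :false
  end
end

enabled_state(nil, nil)     # => :true   (no Disabled key anywhere)
enabled_state(true, false)  # => :true   (override re-enables the job)
enabled_state(false, true)  # => :false  (override disables the job)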
diff --git a/lib/puppet/provider/service/openbsd.rb b/lib/puppet/provider/service/openbsd.rb
index 55b521785..521462b78 100644
--- a/lib/puppet/provider/service/openbsd.rb
+++ b/lib/puppet/provider/service/openbsd.rb
@@ -1,341 +1,342 @@
Puppet::Type.type(:service).provide :openbsd, :parent => :init do
desc "Provider for OpenBSD's rc.d daemon control scripts"
confine :operatingsystem => :openbsd
defaultfor :operatingsystem => :openbsd
has_feature :flaggable
def self.rcconf() '/etc/rc.conf' end
def self.rcconf_local() '/etc/rc.conf.local' end
def self.defpath
["/etc/rc.d"]
end
def startcmd
[self.initscript, "-f", :start]
end
def restartcmd
(@resource[:hasrestart] == :true) && [self.initscript, "-f", :restart]
end
def statuscmd
[self.initscript, :check]
end
# Fetch the state of all service resources
def self.prefetch(resources)
services = instances
resources.keys.each do |name|
if provider = services.find { |svc| svc.name == name }
resources[name].provider = provider
end
end
end
# Return the list of rc scripts
# @api private
def self.rclist
unless @rclist
Puppet.debug "Building list of rc scripts"
@rclist = []
defpath.each do |p|
Dir.glob(p + '/*').each do |item|
@rclist << item if File.executable?(item)
end
end
end
@rclist
end
# Return a hash where the keys are the rc script names as symbols with flags
# as values
# @api private
def self.rcflags
unless @flag_hash
Puppet.debug "Retrieving flags for all discovered services"
Puppet.debug "Reading the contents of the rc conf files"
if File.exists?(rcconf_local())
rcconf_local_contents = File.readlines(rcconf_local())
else
rcconf_local_contents = []
end
if File.exists?(rcconf())
rcconf_contents = File.readlines(rcconf())
else
rcconf_contents = []
end
@flag_hash = {}
rclist().each do |rcitem|
rcname = rcitem.split('/').last
if flagline = rcconf_local_contents.find {|l| l =~ /^#{rcname}_flags/ }
flag = parse_rc_line(flagline)
@flag_hash[rcname.to_sym] ||= flag
end
# For the defaults, if the flags are set to 'NO', we skip setting the
# flag here, since it will already be disabled, and this makes the
# output of `puppet resource service` a bit more correct.
if flagline = rcconf_contents.find {|l| l =~ /^#{rcname}_flags/ }
flag = parse_rc_line(flagline)
unless flag == "NO"
@flag_hash[rcname.to_sym] ||= flag
end
end
@flag_hash[rcname.to_sym] ||= nil
end
end
@flag_hash
end
# @api private
def self.parse_rc_line(rc_line)
rc_line.sub!(/\s*#(.*)$/,'')
regex = /\w+_flags=(.*)/
rc_line.match(regex)[1].gsub(/^"/,'').gsub(/"$/,'')
end
# Read the rc.conf* files and determine the value of the flags
# @api private
def self.get_flags(rcname)
rcflags()
@flag_hash[rcname.to_sym]
end
def self.instances
instances = []
defpath.each do |path|
unless File.directory?(path)
Puppet.debug "Service path #{path} does not exist"
next
end
rclist().each do |d|
instances << new(
:name => File.basename(d),
:path => path,
:flags => get_flags(File.basename(d)),
:hasstatus => true
)
end
end
instances
end
# @api private
def rcvar_name
self.name + '_flags'
end
# @api private
def read_rcconf_local_text()
if File.exists?(self.class.rcconf_local())
File.read(self.class.rcconf_local())
else
[]
end
end
# @api private
def load_rcconf_local_array
if File.exists?(self.class.rcconf_local())
File.readlines(self.class.rcconf_local()).map {|l|
l.chomp!
}
else
[]
end
end
# @api private
def write_rc_contents(file, text)
Puppet::Util.replace_file(file, 0644) do |f|
f.write(text)
end
end
# @api private
def set_content_flags(content,flags)
unless content.is_a? Array
debug "content must be an array at flags"
return ""
else
content.reject! {|l| l.nil? }
end
- if flags.nil?
- append = resource[:name] + '_flags=""'
+ if flags.nil? or flags.size == 0
+ if in_base?
+ append = resource[:name] + '_flags=""'
+ end
else
append = resource[:name] + '_flags="' + flags + '"'
end
if content.find {|l| l =~ /#{resource[:name]}_flags/ }.nil?
content << append
else
content.map {|l| l.gsub!(/^#{resource[:name]}_flags="(.*)?"(.*)?$/, append) }
end
content
end
# @api private
def remove_content_flags(content)
content.reject {|l| l =~ /#{resource[:name]}_flags/ }
end
# return an array of the currently enabled pkg_scripts
# @api private
def pkg_scripts
current = load_rcconf_local_array()
if scripts = current.find{|l| l =~ /^pkg_scripts/ }
if match = scripts.match(/^pkg_scripts="(.*)?"(.*)?$/)
match[1].split(' ')
else
[]
end
else
[]
end
end
# return the array with the current resource added
# @api private
def pkg_scripts_append
- [pkg_scripts(), resource[:name]].flatten.sort.uniq
+ [pkg_scripts(), resource[:name]].flatten.uniq
end
# return the array without the current resource
# @api private
def pkg_scripts_remove
pkg_scripts().reject {|s| s == resource[:name] }
end
# Modify the content array to contain the requested pkg_scripts line and return
# the resulting array
# @api private
def set_content_scripts(content,scripts)
unless content.is_a? Array
debug "content must be an array at scripts"
return ""
else
content.reject! {|l| l.nil? }
end
scripts_line = 'pkg_scripts="' + scripts.join(' ') + '"'
if content.find {|l| l =~ /^pkg_scripts/ }.nil?
content << scripts_line
else
# Replace the found pkg_scripts line with our own
content.each_with_index {|l,i|
if l =~ /^pkg_scripts/
content[i] = scripts_line
end
}
end
content
end
- # Determine if the rc script is included in base, or if it exists as a result
- # of a package installation.
+ # Determine if the rc script is included in base
# @api private
def in_base?
- system("/usr/sbin/pkg_info -qE /etc/rc.d/#{self.name}> /dev/null")
- $?.exitstatus == 1
+ script = File.readlines(self.class.rcconf).find {|s| s =~ /^#{rcvar_name}/ }
+ !script.nil?
end
# @api private
def default_disabled?
line = File.readlines(self.class.rcconf).find {|l| l =~ /#{rcvar_name}/ }
self.class.parse_rc_line(line) == 'NO'
end
def enabled?
if in_base?
if (@property_hash[:flags].nil? or @property_hash[:flags] == 'NO')
:false
else
:true
end
else
if (pkg_scripts().include?(@property_hash[:name]))
:true
else
:false
end
end
end
def enable
self.debug("Enabling #{self.name}")
end
# We should also check for default state
def disable
self.debug("Disabling #{self.name}")
end
def flags
@property_hash[:flags]
end
def flags=(value)
@property_hash[:flags] = value
end
def flush
debug "Flusing resource for #{self.name}"
# Here we load the contents of the rc.conf.local file into the contents
# variable, modify it if needed, and then compare that to the original. If
# they are different, we write it out.
original = load_rcconf_local_array()
content = original
debug @property_hash.inspect
if resource[:enable] == :true
#set_flags(resource[:flags])
content = set_content_flags(content, resource[:flags])
# We need only append the resource name to the pkg_scripts if the
# package is not found in the base system.
if not in_base?
content = set_content_scripts(content,pkg_scripts_append())
end
elsif resource[:enable] == :false
# By virtue of being excluded from the base system, all packages are
# disabled by default and need not be set in the rc.conf.local at all.
if not in_base?
content = remove_content_flags(content)
content = set_content_scripts(content,pkg_scripts_remove())
else
if default_disabled?
content = remove_content_flags(content)
else
content = set_content_flags(content, "NO")
end
end
end
# Make sure to append a newline to the end of the file
unless content[-1] == ""
content << ""
end
output = content.join("\n")
# Write the contents only if necessary, and only once
write_rc_contents(self.class.rcconf_local(), output)
end
end
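parse_rc_line above strips trailing comments and surrounding quotes from an rc.conf flags line. A non-destructive copy run on assumed sample lines:

def parse_flags(rc_line)
  line = rc_line.sub(/\s*#(.*)$/, '')                            # drop trailing comment
  line.match(/\w+_flags=(.*)/)[1].gsub(/^"/, '').gsub(/"$/, '')  # unquote the value
end

parse_flags('ntpd_flags="-s"  # keep time')  # => "-s"
parse_flags('httpd_flags=NO')                # => "NO"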
diff --git a/lib/puppet/provider/ssh_authorized_key/parsed.rb b/lib/puppet/provider/ssh_authorized_key/parsed.rb
index af0e082ea..67403c9d6 100644
--- a/lib/puppet/provider/ssh_authorized_key/parsed.rb
+++ b/lib/puppet/provider/ssh_authorized_key/parsed.rb
@@ -1,89 +1,89 @@
require 'puppet/provider/parsedfile'
Puppet::Type.type(:ssh_authorized_key).provide(
:parsed,
:parent => Puppet::Provider::ParsedFile,
:filetype => :flat,
:default_target => ''
) do
desc "Parse and generate authorized_keys files for SSH."
text_line :comment, :match => /^\s*#/
text_line :blank, :match => /^\s*$/
record_line :parsed,
:fields => %w{options type key name},
:optional => %w{options},
:rts => /^\s+/,
- :match => /^(?:(.+) )?(ssh-dss|ssh-ed25519|ssh-rsa|ecdsa-sha2-nistp256|ecdsa-sha2-nistp384|ecdsa-sha2-nistp521) ([^ ]+) ?(.*)$/,
+ :match => Puppet::Type.type(:ssh_authorized_key).keyline_regex,
:post_parse => proc { |h|
h[:name] = "" if h[:name] == :absent
h[:options] ||= [:absent]
h[:options] = Puppet::Type::Ssh_authorized_key::ProviderParsed.parse_options(h[:options]) if h[:options].is_a? String
},
:pre_gen => proc { |h|
h[:options] = [] if h[:options].include?(:absent)
h[:options] = h[:options].join(',')
}
record_line :key_v1,
:fields => %w{options bits exponent modulus name},
:optional => %w{options},
:rts => /^\s+/,
:match => /^(?:(.+) )?(\d+) (\d+) (\d+)(?: (.+))?$/
def dir_perm
0700
end
def file_perm
0600
end
def user
uid = Puppet::FileSystem.stat(target).uid
Etc.getpwuid(uid).name
end
def flush
raise Puppet::Error, "Cannot write SSH authorized keys without user" unless @resource.should(:user)
raise Puppet::Error, "User '#{@resource.should(:user)}' does not exist" unless Puppet::Util.uid(@resource.should(:user))
# ParsedFile usually calls backup_target much later in the flush process,
# but our SUID makes that fail to open filebucket files for writing.
# Fortunately, there's already logic to make sure it only ever happens once,
# so calling it here suppresses the later attempt by our superclass's flush method.
self.class.backup_target(target)
Puppet::Util::SUIDManager.asuser(@resource.should(:user)) do
unless Puppet::FileSystem.exist?(dir = File.dirname(target))
Puppet.debug "Creating #{dir}"
Dir.mkdir(dir, dir_perm)
end
super
File.chmod(file_perm, target)
end
end
# parse sshv2 option strings, which is a comma-separated list of
# either key="values" elements or bare-word elements
def self.parse_options(options)
result = []
scanner = StringScanner.new(options)
while !scanner.eos?
scanner.skip(/[ \t]*/)
# scan a long option
- if out = scanner.scan(/[-a-z0-9A-Z_]+=\".*?\"/) or out = scanner.scan(/[-a-z0-9A-Z_]+/)
+ if out = scanner.scan(/[-a-z0-9A-Z_]+=\".*?[^\\]\"/) or out = scanner.scan(/[-a-z0-9A-Z_]+/)
result << out
else
# found an unscannable token, let's abort
break
end
# eat a comma
scanner.skip(/[ \t]*,[ \t]*/)
end
result
end
end
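The quoted-option pattern in parse_options above was tightened so the closing quote must be unescaped. A quick illustration with an assumed options string: the old /[-a-z0-9A-Z_]+=\".*?\"/ pattern would stop at the escaped quote inside the value, while the new one captures the whole option.

require 'strscan'

scanner = StringScanner.new('command="echo \"hi\"",no-pty')
scanner.scan(/[-a-z0-9A-Z_]+=\".*?[^\\]\"/)
# => 'command="echo \"hi\""'   (the old pattern returned only 'command="echo \"')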
diff --git a/lib/puppet/provider/sshkey/parsed.rb b/lib/puppet/provider/sshkey/parsed.rb
index f874683b7..29f345916 100644
--- a/lib/puppet/provider/sshkey/parsed.rb
+++ b/lib/puppet/provider/sshkey/parsed.rb
@@ -1,35 +1,40 @@
require 'puppet/provider/parsedfile'
known = nil
case Facter.value(:operatingsystem)
when "Darwin"; known = "/etc/ssh_known_hosts"
else
known = "/etc/ssh/ssh_known_hosts"
end
Puppet::Type.type(:sshkey).provide(
:parsed,
:parent => Puppet::Provider::ParsedFile,
:default_target => known,
:filetype => :flat
) do
desc "Parse and generate host-wide known hosts files for SSH."
text_line :comment, :match => /^#/
text_line :blank, :match => /^\s+/
record_line :parsed, :fields => %w{name type key},
:post_parse => proc { |hash|
names = hash[:name].split(",", -1)
hash[:name] = names.shift
hash[:host_aliases] = names
},
:pre_gen => proc { |hash|
if hash[:host_aliases]
hash[:name] = [hash[:name], hash[:host_aliases]].flatten.join(",")
hash.delete(:host_aliases)
end
}
+
+ # Make sure to use mode 644 if ssh_known_hosts is newly created
+ def self.default_mode
+ 0644
+ end
end
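The post_parse and pre_gen hooks above split the comma-separated host field into a primary name plus aliases and rejoin them on generation; a small round-trip with an assumed known_hosts name field:

names  = "web01,web01.example.com,10.0.0.5".split(",", -1)
record = { :name => names.shift, :host_aliases => names }
# => {:name=>"web01", :host_aliases=>["web01.example.com", "10.0.0.5"]}

[record[:name], record[:host_aliases]].flatten.join(",")
# => "web01,web01.example.com,10.0.0.5"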
diff --git a/lib/puppet/provider/user/user_role_add.rb b/lib/puppet/provider/user/user_role_add.rb
index 2abec5f45..f3100e04f 100644
--- a/lib/puppet/provider/user/user_role_add.rb
+++ b/lib/puppet/provider/user/user_role_add.rb
@@ -1,209 +1,210 @@
require 'puppet/util'
require 'puppet/util/user_attr'
require 'date'
Puppet::Type.type(:user).provide :user_role_add, :parent => :useradd, :source => :useradd do
desc "User and role management on Solaris, via `useradd` and `roleadd`."
defaultfor :osfamily => :solaris
commands :add => "useradd", :delete => "userdel", :modify => "usermod", :password => "passwd", :role_add => "roleadd", :role_delete => "roledel", :role_modify => "rolemod"
options :home, :flag => "-d", :method => :dir
options :comment, :method => :gecos
options :groups, :flag => "-G"
options :roles, :flag => "-R"
options :auths, :flag => "-A"
options :profiles, :flag => "-P"
options :password_min_age, :flag => "-n"
options :password_max_age, :flag => "-x"
verify :gid, "GID must be an integer" do |value|
value.is_a? Integer
end
verify :groups, "Groups must be comma-separated" do |value|
value !~ /\s/
end
has_features :manages_homedir, :allows_duplicates, :manages_solaris_rbac, :manages_passwords, :manages_password_age, :manages_shell
#must override this to hand the keyvalue pairs
def add_properties
cmd = []
Puppet::Type.type(:user).validproperties.each do |property|
#skip the password because we can't create it with the solaris useradd
next if [:ensure, :password, :password_min_age, :password_max_age].include?(property)
# 1680 Now you can set the hashed passwords on solaris:lib/puppet/provider/user/user_role_add.rb
# the value needs to be quoted, mostly because -c might
# have spaces in it
if value = @resource.should(property) and value != ""
if property == :keys
cmd += build_keys_cmd(value)
else
cmd << flag(property) << value
end
end
end
cmd
end
def user_attributes
@user_attributes ||= UserAttr.get_attributes_by_name(@resource[:name])
end
def flush
@user_attributes = nil
end
def command(cmd)
cmd = ("role_#{cmd}").intern if is_role? or (!exists? and @resource[:ensure] == :role)
super(cmd)
end
def is_role?
user_attributes and user_attributes[:type] == "role"
end
def run(cmd, msg)
execute(cmd)
rescue Puppet::ExecutionFailure => detail
raise Puppet::Error, "Could not #{msg} #{@resource.class.name} #{@resource.name}: #{detail}", detail.backtrace
end
def transition(type)
cmd = [command(:modify)]
cmd << "-K" << "type=#{type}"
cmd += add_properties
cmd << @resource[:name]
end
def create
if is_role?
run(transition("normal"), "transition role to")
else
run(addcmd, "create")
if cmd = passcmd
run(cmd, "change password policy for")
end
end
# added to handle case when password is specified
self.password = @resource[:password] if @resource[:password]
end
def destroy
run(deletecmd, "delete "+ (is_role? ? "role" : "user"))
end
def create_role
if exists? and !is_role?
run(transition("role"), "transition user to")
else
run(addcmd, "create role")
end
end
def roles
user_attributes[:roles] if user_attributes
end
def auths
user_attributes[:auths] if user_attributes
end
def profiles
user_attributes[:profiles] if user_attributes
end
def project
user_attributes[:project] if user_attributes
end
def managed_attributes
[:name, :type, :roles, :auths, :profiles, :project]
end
def remove_managed_attributes
managed = managed_attributes
user_attributes.select { |k,v| !managed.include?(k) }.inject({}) { |hash, array| hash[array[0]] = array[1]; hash }
end
def keys
if user_attributes
#we have to get rid of all the keys we are managing another way
remove_managed_attributes
end
end
def build_keys_cmd(keys_hash)
cmd = []
keys_hash.each do |k,v|
cmd << "-K" << "#{k}=#{v}"
end
cmd
end
def keys=(keys_hash)
run([command(:modify)] + build_keys_cmd(keys_hash) << @resource[:name], "modify attribute key pairs")
end
# This helper makes it possible to test this on stub data without having to
# do too many crazy things!
def target_file_path
"/etc/shadow"
end
private :target_file_path
#Read in /etc/shadow, find the line for this user (skipping comments, because who knows) and return it
#No abstraction, all esoteric knowledge of file formats, yay
def shadow_entry
return @shadow_entry if defined? @shadow_entry
@shadow_entry = File.readlines(target_file_path).
reject { |r| r =~ /^[^\w]/ }.
- collect { |l| l.chomp.split(':') }.
+ # PUP-229 don't suppress the empty fields
+ collect { |l| l.chomp.split(':', -1) }.
find { |user, _| user == @resource[:name] }
end
def password
shadow_entry[1] if shadow_entry
end
def password_min_age
- shadow_entry ? shadow_entry[3] : :absent
+ return :absent unless shadow_entry
+ shadow_entry[3].empty? ? -1 : shadow_entry[3]
end
def password_max_age
return :absent unless shadow_entry
- shadow_entry[4] || -1
+ shadow_entry[4].empty? ? -1 : shadow_entry[4]
end
# Read in /etc/shadow, find the line for our user and rewrite it with the
# new pw. Smooth like 80 grit sandpaper.
#
# Now uses the `replace_file` mechanism to minimize the chance that we lose
# data, but it is still terrible. We still skip platform locking, so a
# concurrent `vipw -s` session will have no idea we risk data loss.
def password=(cryptopw)
begin
shadow = File.read(target_file_path)
# Go Mifune loves the race here where we can lose data because
# /etc/shadow changed between reading it and writing it.
# --daniel 2012-02-05
Puppet::Util.replace_file(target_file_path, 0640) do |fh|
shadow.each_line do |line|
line_arr = line.split(':')
if line_arr[0] == @resource[:name]
line_arr[1] = cryptopw
line_arr[2] = (Date.today - Date.new(1970,1,1)).to_i.to_s
line = line_arr.join(':')
end
fh.print line
end
end
rescue => detail
self.fail Puppet::Error, "Could not write replace #{target_file_path}: #{detail}", detail
end
end
end
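The split(':', -1) change in shadow_entry above (PUP-229) matters because Ruby's default split drops trailing empty fields; with an illustrative shadow line the age fields would otherwise disappear entirely:

line = "jane:ab3XYZ:15000:::::"
line.chomp.split(':').length      # => 3  (trailing empty fields discarded)
line.chomp.split(':', -1).length  # => 8  (empty fields preserved for indexes 3 and 4)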
diff --git a/lib/puppet/provider/user/windows_adsi.rb b/lib/puppet/provider/user/windows_adsi.rb
index 14b905d3b..fbca8744b 100644
--- a/lib/puppet/provider/user/windows_adsi.rb
+++ b/lib/puppet/provider/user/windows_adsi.rb
@@ -1,99 +1,99 @@
-require 'puppet/util/adsi'
+require 'puppet/util/windows'
Puppet::Type.type(:user).provide :windows_adsi do
desc "Local user management for Windows."
defaultfor :operatingsystem => :windows
confine :operatingsystem => :windows
has_features :manages_homedir, :manages_passwords
def user
- @user ||= Puppet::Util::ADSI::User.new(@resource[:name])
+ @user ||= Puppet::Util::Windows::ADSI::User.new(@resource[:name])
end
def groups
user.groups.join(',')
end
def groups=(groups)
user.set_groups(groups, @resource[:membership] == :minimum)
end
def create
- @user = Puppet::Util::ADSI::User.create(@resource[:name])
+ @user = Puppet::Util::Windows::ADSI::User.create(@resource[:name])
@user.password = @resource[:password]
@user.commit
[:comment, :home, :groups].each do |prop|
send("#{prop}=", @resource[prop]) if @resource[prop]
end
if @resource.managehome?
Puppet::Util::Windows::User.load_profile(@resource[:name], @resource[:password])
end
end
def exists?
- Puppet::Util::ADSI::User.exists?(@resource[:name])
+ Puppet::Util::Windows::ADSI::User.exists?(@resource[:name])
end
def delete
# lookup sid before we delete account
sid = uid if @resource.managehome?
- Puppet::Util::ADSI::User.delete(@resource[:name])
+ Puppet::Util::Windows::ADSI::User.delete(@resource[:name])
if sid
- Puppet::Util::ADSI::UserProfile.delete(sid)
+ Puppet::Util::Windows::ADSI::UserProfile.delete(sid)
end
end
# Only flush if we created or modified a user, not deleted
def flush
@user.commit if @user
end
def comment
user['Description']
end
def comment=(value)
user['Description'] = value
end
def home
user['HomeDirectory']
end
def home=(value)
user['HomeDirectory'] = value
end
def password
user.password_is?( @resource[:password] ) ? @resource[:password] : :absent
end
def password=(value)
user.password = value
end
def uid
- Puppet::Util::Windows::Security.name_to_sid(@resource[:name])
+ Puppet::Util::Windows::SID.name_to_sid(@resource[:name])
end
def uid=(value)
fail "uid is read-only"
end
[:gid, :shell].each do |prop|
define_method(prop) { nil }
define_method("#{prop}=") do |v|
fail "No support for managing property #{prop} of user #{@resource[:name]} on Windows"
end
end
def self.instances
- Puppet::Util::ADSI::User.map { |u| new(:ensure => :present, :name => u.name) }
+ Puppet::Util::Windows::ADSI::User.map { |u| new(:ensure => :present, :name => u.name) }
end
end
diff --git a/lib/puppet/provider/zone/solaris.rb b/lib/puppet/provider/zone/solaris.rb
index e681aca75..cbf07eec8 100644
--- a/lib/puppet/provider/zone/solaris.rb
+++ b/lib/puppet/provider/zone/solaris.rb
@@ -1,361 +1,361 @@
Puppet::Type.type(:zone).provide(:solaris) do
desc "Provider for Solaris Zones."
commands :adm => "/usr/sbin/zoneadm", :cfg => "/usr/sbin/zonecfg"
defaultfor :osfamily => :solaris
mk_resource_methods
# Convert the output of a list into a hash
def self.line2hash(line)
fields = [:id, :name, :ensure, :path, :uuid, :brand, :iptype]
properties = Hash[fields.zip(line.split(':'))]
del_id = [:brand, :uuid]
# Configured but not installed zones do not have IDs
del_id << :id if properties[:id] == "-"
del_id.each { |p| properties.delete(p) }
properties[:ensure] = properties[:ensure].intern
properties[:iptype] = 'exclusive' if properties[:iptype] == 'excl'
properties
end
def self.instances
adm(:list, "-cp").split("\n").collect do |line|
new(line2hash(line))
end
end
def multi_conf(name, should, &action)
has = properties[name]
- has = [] if has == :absent
+ has = [] if !has || has == :absent
rms = has - should
adds = should - has
(rms.map{|o| action.call(:rm,o)} + adds.map{|o| action.call(:add,o)}).join("\n")
end
def self.def_prop(var, str)
define_method('%s_conf' % var.to_s) do |v|
str % v
end
define_method('%s=' % var.to_s) do |v|
setconfig self.send( ('%s_conf'% var).intern, v)
end
end
def self.def_multiprop(var, &conf)
define_method(var.to_s) do |v|
o = properties[var]
return '' if o.nil? or o == :absent
o.join(' ')
end
define_method('%s=' % var.to_s) do |v|
setconfig self.send( ('%s_conf'% var).intern, v)
end
define_method('%s_conf' % var.to_s) do |v|
multi_conf(var, v, &conf)
end
end
def_prop :iptype, "set ip-type=%s"
def_prop :autoboot, "set autoboot=%s"
def_prop :path, "set zonepath=%s"
def_prop :pool, "set pool=%s"
def_prop :shares, "add rctl\nset name=zone.cpu-shares\nadd value (priv=privileged,limit=%s,action=none)\nend"
def_multiprop :ip do |action, str|
interface, ip, defrouter = str.split(':')
case action
when :add
cmd = ["add net"]
cmd << "set physical=#{interface}" if interface
cmd << "set address=#{ip}" if ip
cmd << "set defrouter=#{defrouter}" if defrouter
cmd << "end"
cmd.join("\n")
when :rm
if ip
"remove net address=#{ip}"
elsif interface
"remove net physical=#{interface}"
else
raise ArgumentError, "can not remove network based on default router"
end
else self.fail action
end
end
def_multiprop :dataset do |action, str|
case action
when :add; ['add dataset',"set name=#{str}",'end'].join("\n")
when :rm; "remove dataset name=#{str}"
else self.fail action
end
end
def_multiprop :inherit do |action, str|
case action
when :add; ['add inherit-pkg-dir', "set dir=#{str}",'end'].join("\n")
when :rm; "remove inherit-pkg-dir dir=#{str}"
else self.fail action
end
end
def my_properties
[:path, :iptype, :autoboot, :pool, :shares, :ip, :dataset, :inherit]
end
# Perform all of our configuration steps.
def configure
self.fail "Path is required" unless @resource[:path]
arr = ["create -b #{@resource[:create_args]}"]
# Then perform all of our configuration steps. It's annoying
# that we need this much internal info on the resource.
self.resource.properties.each do |property|
next unless my_properties.include? property.name
method = (property.name.to_s + '_conf').intern
arr << self.send(method ,@resource[property.name]) unless property.safe_insync?(properties[property.name])
end
setconfig(arr.join("\n"))
end
def destroy
zonecfg :delete, "-F"
end
def add_cmd(cmd)
@cmds = [] if @cmds.nil?
@cmds << cmd
end
def exists?
properties[:ensure] != :absent
end
# We cannot use the execpipe in util because the pipe is not opened in
# read/write mode.
def exec_cmd(var)
# In bash, the exit value of the last command is the exit value of the
# entire pipeline
out = execute("echo \"#{var[:input]}\" | #{var[:cmd]}", :failonfail => false, :combine => true)
st = $?.exitstatus
{:out => out, :exit => st}
end
# Clear out the cached values.
def flush
return if @cmds.nil? || @cmds.empty?
str = (@cmds << "commit" << "exit").join("\n")
@cmds = []
@property_hash.clear
command = "#{command(:cfg)} -z #{@resource[:name]} -f -"
r = exec_cmd(:cmd => command, :input => str)
if r[:exit] != 0 or r[:out] =~ /not allowed/
raise ArgumentError, "Failed to apply configuration"
end
end
def install(dummy_argument=:work_arround_for_ruby_GC_bug)
if @resource[:clone] # TODO: add support for "-s snapshot"
zoneadm :clone, @resource[:clone]
elsif @resource[:install_args]
zoneadm :install, @resource[:install_args].split(" ")
else
zoneadm :install
end
end
# Look up the current status.
def properties
if @property_hash.empty?
@property_hash = status || {}
if @property_hash.empty?
@property_hash[:ensure] = :absent
else
@resource.class.validproperties.each do |name|
@property_hash[name] ||= :absent
end
end
end
@property_hash.dup
end
# We need a way to test whether a zone is in process. Our 'ensure'
# property models the static states, but we need to handle the temporary ones.
def processing?
hash = status
return false unless hash
["incomplete", "ready", "shutting_down"].include? hash[:ensure]
end
# Collect the configuration of the zone. The output looks like:
# zonename: z1
# zonepath: /export/z1
# brand: native
# autoboot: true
# bootargs:
# pool:
# limitpriv:
# scheduling-class:
# ip-type: shared
# hostid:
# net:
# address: 192.168.1.1
# physical: eg0001
# defrouter not specified
# net:
# address: 192.168.1.3
# physical: eg0002
# defrouter not specified
#
def getconfig
output = zonecfg :info
name = nil
current = nil
hash = {}
output.split("\n").each do |line|
case line
when /^(\S+):\s*$/
name = $1
current = nil # reset it
when /^(\S+):\s*(\S+)$/
hash[$1.intern] = $2
when /^\s+(\S+):\s*(.+)$/
if name
hash[name] ||= []
unless current
current = {}
hash[name] << current
end
current[$1.intern] = $2
else
err "Ignoring '#{line}'"
end
else
debug "Ignoring zone output '#{line}'"
end
end
hash
end
# Execute a configuration string. Can't be private because it's called
# by the properties.
def setconfig(str)
add_cmd str
end
def start
# Check the sysidcfg stuff
if cfg = @resource[:sysidcfg]
self.fail "Path is required" unless @resource[:path]
zoneetc = File.join(@resource[:path], "root", "etc")
sysidcfg = File.join(zoneetc, "sysidcfg")
# if the zone root isn't present "ready" the zone
# which makes zoneadmd mount the zone root
zoneadm :ready unless File.directory?(zoneetc)
unless Puppet::FileSystem.exist?(sysidcfg)
begin
File.open(sysidcfg, "w", 0600) do |f|
f.puts cfg
end
rescue => detail
puts detail.stacktrace if Puppet[:debug]
raise Puppet::Error, "Could not create sysidcfg: #{detail}", detail.backtrace
end
end
end
zoneadm :boot
end
# Return a hash of the current status of this zone.
def status
begin
output = adm "-z", @resource[:name], :list, "-p"
rescue Puppet::ExecutionFailure
return nil
end
main = self.class.line2hash(output.chomp)
# Now add in the configuration information
config_status.each do |name, value|
main[name] = value
end
main
end
def ready
zoneadm :ready
end
def stop
zoneadm :halt
end
def unconfigure
zonecfg :delete, "-F"
end
def uninstall
zoneadm :uninstall, "-F"
end
private
# Turn the results of getconfig into status information.
def config_status
config = getconfig
result = {}
result[:autoboot] = config[:autoboot] ? config[:autoboot].intern : :true
result[:pool] = config[:pool]
result[:shares] = config[:shares]
if dir = config["inherit-pkg-dir"]
result[:inherit] = dir.collect { |dirs| dirs[:dir] }
end
if datasets = config["dataset"]
result[:dataset] = datasets.collect { |dataset| dataset[:name] }
end
result[:iptype] = config[:'ip-type'] if config[:'ip-type']
if net = config["net"]
result[:ip] = net.collect do |params|
if params[:defrouter]
"#{params[:physical]}:#{params[:address]}:#{params[:defrouter]}"
elsif params[:address]
"#{params[:physical]}:#{params[:address]}"
else
params[:physical]
end
end
end
result
end
def zoneadm(*cmd)
adm("-z", @resource[:name], *cmd)
rescue Puppet::ExecutionFailure => detail
self.fail Puppet::Error, "Could not #{cmd[0]} zone: #{detail}", detail
end
def zonecfg(*cmd)
# You apparently can't get the configuration of the global zone (strictly in solaris11)
return "" if self.name == "global"
begin
cfg("-z", self.name, *cmd)
rescue Puppet::ExecutionFailure => detail
self.fail Puppet::Error, "Could not #{cmd[0]} zone: #{detail}", detail
end
end
end
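self.line2hash above turns one colon-separated line of `zoneadm list -cp` output into a property hash. An illustrative run on an assumed line for a configured-but-not-installed zone:

fields = [:id, :name, :ensure, :path, :uuid, :brand, :iptype]
line   = "-:z1:configured:/export/z1::native:shared"

properties = Hash[fields.zip(line.split(':'))]
properties.delete(:brand)
properties.delete(:uuid)
properties.delete(:id) if properties[:id] == "-"   # no id until installed
properties[:ensure] = properties[:ensure].intern
# => {:name=>"z1", :ensure=>:configured, :path=>"/export/z1", :iptype=>"shared"}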
diff --git a/lib/puppet/reference/metaparameter.rb b/lib/puppet/reference/metaparameter.rb
index 7c701a110..cd8d6d44f 100644
--- a/lib/puppet/reference/metaparameter.rb
+++ b/lib/puppet/reference/metaparameter.rb
@@ -1,42 +1,44 @@
Puppet::Util::Reference.newreference :metaparameter, :doc => "All Puppet metaparameters and all their details" do
types = {}
Puppet::Type.loadall
Puppet::Type.eachtype { |type|
next if type.name == :puppet
next if type.name == :component
types[type.name] = type
}
str = %{
-# Metaparameters
-
-Metaparameters are parameters that work with any resource type; they are part of the
-Puppet framework itself rather than being part of the implementation of any
-given instance. Thus, any defined metaparameter can be used with any instance
-in your manifest, including defined components.
+Metaparameters are attributes that work with any resource type, including custom
+types and defined types.
+
+In general, they affect _Puppet's_ behavior rather than the desired state of the
+resource. Metaparameters do things like add metadata to a resource (`alias`,
+`tag`), set limits on when the resource should be synced (`require`, `schedule`,
+etc.), prevent Puppet from making changes (`noop`), and change logging verbosity
+(`loglevel`).
## Available Metaparameters
}
begin
params = []
Puppet::Type.eachmetaparam { |param|
params << param
}
params.sort { |a,b|
a.to_s <=> b.to_s
}.each { |param|
str << markdown_header(param.to_s, 3)
str << scrub(Puppet::Type.metaparamdoc(param))
str << "\n\n"
}
rescue => detail
Puppet.log_exception(detail, "incorrect metaparams: #{detail}")
exit(1)
end
str
end
diff --git a/lib/puppet/reports/store.rb b/lib/puppet/reports/store.rb
index 5b82f6d0f..528f9dba8 100644
--- a/lib/puppet/reports/store.rb
+++ b/lib/puppet/reports/store.rb
@@ -1,73 +1,68 @@
require 'puppet'
require 'fileutils'
-require 'tempfile'
+require 'puppet/util'
SEPARATOR = [Regexp.escape(File::SEPARATOR.to_s), Regexp.escape(File::ALT_SEPARATOR.to_s)].join
Puppet::Reports.register_report(:store) do
desc "Store the yaml report on disk. Each host sends its report as a YAML dump
and this just stores the file on disk, in the `reportdir` directory.
These files collect quickly -- one every half hour -- so it is a good idea
to perform some maintenance on them if you use this report (it's the only
default report)."
def process
validate_host(host)
dir = File.join(Puppet[:reportdir], host)
if ! Puppet::FileSystem.exist?(dir)
FileUtils.mkdir_p(dir)
FileUtils.chmod_R(0750, dir)
end
# Now store the report.
now = Time.now.gmtime
name = %w{year month day hour min}.collect do |method|
# Make sure we're at least two digits everywhere
"%02d" % now.send(method).to_s
end.join("") + ".yaml"
file = File.join(dir, name)
- f = Tempfile.new(name, dir)
begin
- begin
- f.chmod(0640)
- f.print to_yaml
- ensure
- f.close
+ Puppet::Util.replace_file(file, 0640) do |fh|
+ fh.print to_yaml
end
- FileUtils.mv(f.path, file)
rescue => detail
- Puppet.log_exception(detail, "Could not write report for #{host} at #{file}: #{detail}")
+ Puppet.log_exception(detail, "Could not write report for #{host} at #{file}: #{detail}")
end
# Only testing cares about the return value
file
end
# removes all reports for a given host?
def self.destroy(host)
validate_host(host)
dir = File.join(Puppet[:reportdir], host)
if Puppet::FileSystem.exist?(dir)
Dir.entries(dir).each do |file|
next if ['.','..'].include?(file)
file = File.join(dir, file)
Puppet::FileSystem.unlink(file) if File.file?(file)
end
Dir.rmdir(dir)
end
end
def validate_host(host)
if host =~ Regexp.union(/[#{SEPARATOR}]/, /\A\.\.?\Z/)
raise ArgumentError, "Invalid node name #{host.inspect}"
end
end
module_function :validate_host
end
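The report filename built in process above is a zero-padded UTC timestamp, so reports sort lexically by time within each host's directory; a sketch with an assumed timestamp:

now  = Time.gm(2014, 3, 7, 9, 5)
name = %w{year month day hour min}.collect { |m| "%02d" % now.send(m) }.join("") + ".yaml"
# => "201403070905.yaml"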
diff --git a/lib/puppet/resource.rb b/lib/puppet/resource.rb
index 40f12b5d2..7e7a6ab2c 100644
--- a/lib/puppet/resource.rb
+++ b/lib/puppet/resource.rb
@@ -1,546 +1,610 @@
require 'puppet'
require 'puppet/util/tagging'
require 'puppet/util/pson'
require 'puppet/parameter'
# The simplest resource class. Eventually it will function as the
# base class for all resource-like behaviour.
#
# @api public
class Puppet::Resource
# This stub class is only needed for serialization compatibility with 0.25.x.
# Specifically, it exists to provide a compatibility API when using YAML
# serialized objects loaded from StoreConfigs.
Reference = Puppet::Resource
include Puppet::Util::Tagging
extend Puppet::Util::Pson
include Enumerable
attr_accessor :file, :line, :catalog, :exported, :virtual, :validate_parameters, :strict
attr_reader :type, :title
require 'puppet/indirector'
extend Puppet::Indirector
indirects :resource, :terminus_class => :ral
ATTRIBUTES = [:file, :line, :exported]
def self.from_data_hash(data)
raise ArgumentError, "No resource type provided in serialized data" unless type = data['type']
raise ArgumentError, "No resource title provided in serialized data" unless title = data['title']
resource = new(type, title)
if params = data['parameters']
params.each { |param, value| resource[param] = value }
end
if tags = data['tags']
tags.each { |tag| resource.tag(tag) }
end
ATTRIBUTES.each do |a|
if value = data[a.to_s]
resource.send(a.to_s + "=", value)
end
end
resource
end
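# For reference, a hash of the shape from_data_hash above expects (values are
# illustrative, not taken from the source): 'type' and 'title' are mandatory,
# while parameters, tags and the file/line/exported attributes are applied
# only when present.
#
#   Puppet::Resource.from_data_hash(
#     'type'       => 'File',
#     'title'      => '/tmp/example',
#     'parameters' => { 'ensure' => 'present', 'mode' => '0644' },
#     'tags'       => ['example'],
#     'exported'   => false
#   )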
def self.from_pson(pson)
Puppet.deprecation_warning("from_pson is being removed in favour of from_data_hash.")
self.from_data_hash(pson)
end
def inspect
"#{@type}[#{@title}]#{to_hash.inspect}"
end
def to_data_hash
data = ([:type, :title, :tags] + ATTRIBUTES).inject({}) do |hash, param|
next hash unless value = self.send(param)
hash[param.to_s] = value
hash
end
data["exported"] ||= false
params = self.to_hash.inject({}) do |hash, ary|
param, value = ary
# Don't duplicate the title as the namevar
next hash if param == namevar and value == title
hash[param] = Puppet::Resource.value_to_pson_data(value)
hash
end
data["parameters"] = params unless params.empty?
data
end
# This doesn't include document type as it is part of a catalog
def to_pson_data_hash
to_data_hash
end
def self.value_to_pson_data(value)
if value.is_a? Array
value.map{|v| value_to_pson_data(v) }
elsif value.is_a? Puppet::Resource
value.to_s
else
value
end
end
def yaml_property_munge(x)
case x
when Hash
x.inject({}) { |h,kv|
k,v = kv
h[k] = self.class.value_to_pson_data(v)
h
}
else self.class.value_to_pson_data(x)
end
end
YAML_ATTRIBUTES = [:@file, :@line, :@exported, :@type, :@title, :@tags, :@parameters]
# Explicitly list the instance variables that should be serialized when
# converting to YAML.
#
# @api private
# @return [Array<Symbol>] The intersection of our explicit variable list and
# all of the instance variables defined on this class.
def to_yaml_properties
YAML_ATTRIBUTES & super
end
def to_pson(*args)
to_data_hash.to_pson(*args)
end
# Proxy these methods to the parameters hash. It's likely they'll
# be overridden at some point, but this works for now.
%w{has_key? keys length delete empty? <<}.each do |method|
define_method(method) do |*args|
parameters.send(method, *args)
end
end
# Set a given parameter. Converts all passed names
# to lower-case symbols.
def []=(param, value)
validate_parameter(param) if validate_parameters
parameters[parameter_name(param)] = value
end
# Return a given parameter's value. Converts all passed names
# to lower-case symbols.
def [](param)
parameters[parameter_name(param)]
end
def ==(other)
return false unless other.respond_to?(:title) and self.type == other.type and self.title == other.title
return false unless to_hash == other.to_hash
true
end
# Compatibility method.
def builtin?
builtin_type?
end
# Is this a builtin resource type?
def builtin_type?
resource_type.is_a?(Class)
end
# Iterate over each param/value pair, as required for Enumerable.
def each
parameters.each { |p,v| yield p, v }
end
def include?(parameter)
super || parameters.keys.include?( parameter_name(parameter) )
end
%w{exported virtual strict}.each do |m|
define_method(m+"?") do
self.send(m)
end
end
def class?
@is_class ||= @type == "Class"
end
def stage?
@is_stage ||= @type.to_s.downcase == "stage"
end
- # Create our resource.
+ # Cache to reduce respond_to? lookups
+ @@nondeprecating_type = {}
+
+ # Construct a resource from data.
+ #
+ # Constructs a resource instance with the given `type` and `title`. Multiple
+ # type signatures are possible for these arguments and most will result in an
+ # expensive call to {Puppet::Node::Environment#known_resource_types} in order
+ # to resolve `String` and `Symbol` Types to actual Ruby classes.
+ #
+ # @param type [Symbol, String] The name of the Puppet Type, as a string or
+ # symbol. The actual Type will be looked up using
+ # {Puppet::Node::Environment#known_resource_types}. This lookup is expensive.
+ # @param type [String] The full resource name in the form of
+ # `"Type[Title]"`. This method of calling should only be used when
+ # `title` is `nil`.
+ # @param type [nil] If a `nil` is passed, the title argument must be a string
+ # of the form `"Type[Title]"`.
+ # @param type [Class] A class that inherits from `Puppet::Type`. This method
+ # of construction is much more efficient as it skips calls to
+ # {Puppet::Node::Environment#known_resource_types}.
+ #
+ # @param title [String, :main, nil] The title of the resource. If type is `nil`, may also
+ # be the full resource name in the form of `"Type[Title]"`.
+ #
+ # @api public
def initialize(type, title = nil, attributes = {})
@parameters = {}
+ if type.is_a?(Class) && type < Puppet::Type
+ # Set the resource type to avoid an expensive `known_resource_types`
+ # lookup.
+ self.resource_type = type
+ # From this point on, the constructor behaves the same as if `type` had
+ # been passed as a symbol.
+ type = type.name
+ end
# Set things like strictness first.
attributes.each do |attr, value|
next if attr == :parameters
send(attr.to_s + "=", value)
end
@type, @title = extract_type_and_title(type, title)
@type = munge_type_name(@type)
if self.class?
@title = :main if @title == ""
@title = munge_type_name(@title)
end
if params = attributes[:parameters]
extract_parameters(params)
end
+ if resource_type and ! @@nondeprecating_type[resource_type]
+ if resource_type.respond_to?(:deprecate_params)
+ resource_type.deprecate_params(title, attributes[:parameters])
+ else
+ @@nondeprecating_type[resource_type] = true
+ end
+ end
+
tag(self.type)
tag(self.title) if valid_tag?(self.title)
@reference = self # for serialization compatibility with 0.25.x
if strict? and ! resource_type
if self.class?
raise ArgumentError, "Could not find declared class #{title}"
else
raise ArgumentError, "Invalid resource type #{type}"
end
end
end
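An illustrative sketch of the constructor forms documented above; the file path is a placeholder:

Puppet::Resource.new("File", "/tmp/example")                   # type and title strings
Puppet::Resource.new("File[/tmp/example]")                     # combined reference, title omitted
Puppet::Resource.new(nil, "File[/tmp/example]")                # nil type, reference passed as the title
Puppet::Resource.new(Puppet::Type.type(:file), "/tmp/example") # Puppet::Type class, skips known_resource_types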
def ref
to_s
end
# Find our resource.
def resolve
catalog ? catalog.resource(to_s) : nil
end
# The resource's type implementation
# @return [Puppet::Type, Puppet::Resource::Type]
# @api private
def resource_type
@rstype ||= case type
when "Class"; environment.known_resource_types.hostclass(title == :main ? "" : title)
when "Node"; environment.known_resource_types.node(title)
else
Puppet::Type.type(type) || environment.known_resource_types.definition(type)
end
end
# Set the resource's type implementation
# @param type [Puppet::Type, Puppet::Resource::Type]
# @api private
def resource_type=(type)
@rstype = type
end
def environment
@environment ||= if catalog
catalog.environment_instance
else
- Puppet::Node::Environment::NONE
+ Puppet.lookup(:current_environment) { Puppet::Node::Environment::NONE }
end
end
def environment=(environment)
@environment = environment
end
# Produce a simple hash of our parameters.
def to_hash
parse_title.merge parameters
end
def to_s
"#{type}[#{title}]"
end
def uniqueness_key
# Temporary kludge to deal with inconsistent use patterns
h = self.to_hash
h[namevar] ||= h[:name]
h[:name] ||= h[namevar]
h.values_at(*key_attributes.sort_by { |k| k.to_s })
end
def key_attributes
resource_type.respond_to?(:key_attributes) ? resource_type.key_attributes : [:name]
end
# Convert our resource to Puppet code.
def to_manifest
# Collect list of attributes to align => and move ensure first
attr = parameters.keys
attr_max = attr.inject(0) { |max,k| k.to_s.length > max ? k.to_s.length : max }
attr.sort!
if attr.first != :ensure && attr.include?(:ensure)
attr.delete(:ensure)
attr.unshift(:ensure)
end
attributes = attr.collect { |k|
v = parameters[k]
" %-#{attr_max}s => %s,\n" % [k, Puppet::Parameter.format_value_for_display(v)]
}.join
"%s { '%s':\n%s}" % [self.type.to_s.downcase, self.title, attributes]
end
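A rough usage sketch of the rendering above, assuming string values are quoted by Puppet::Parameter.format_value_for_display:

res = Puppet::Resource.new("File", "/tmp/example",
  :parameters => { :owner => "root", :ensure => "file" })
puts res.to_manifest
# Renders roughly as:
#   file { '/tmp/example':
#     ensure => 'file',
#     owner  => 'root',
#   }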
def to_ref
ref
end
# Convert our resource to a RAL resource instance. Creates component
# instances for resource types that don't exist.
def to_ral
typeklass = Puppet::Type.type(self.type) || Puppet::Type.type(:component)
typeklass.new(self)
end
def name
# this is a potential namespace conflict
# between the notion of an "indirector name"
# and a "resource name"
[ type, title ].join('/')
end
def missing_arguments
resource_type.arguments.select do |param, default|
param = param.to_sym
parameters[param].nil? || parameters[param].value == :undef
end
end
private :missing_arguments
# Consult external data bindings for class parameter values which must be
# namespaced in the backend.
#
# Example:
#
# class foo($port=0){ ... }
#
# We make a request to the backend for the key 'foo::port' not 'foo'
#
def lookup_external_default_for(param, scope)
# Only lookup parameters for host classes
return nil unless resource_type.type == :hostclass
name = "#{resource_type.name}::#{param}"
lookup_with_databinding(name, scope)
end
private :lookup_external_default_for
def lookup_with_databinding(name, scope)
begin
Puppet::DataBinding.indirection.find(
name,
:environment => scope.environment.to_s,
:variables => scope)
rescue Puppet::DataBinding::LookupError => e
raise Puppet::Error.new("Error from DataBinding '#{Puppet[:data_binding_terminus]}' while looking up '#{name}': #{e.message}", e)
end
end
private :lookup_with_databinding
def set_default_parameters(scope)
return [] unless resource_type and resource_type.respond_to?(:arguments)
unless is_a?(Puppet::Parser::Resource)
fail Puppet::DevError, "Cannot evaluate default parameters for #{self} - not a parser resource"
end
missing_arguments.collect do |param, default|
external_value = lookup_external_default_for(param, scope)
if external_value.nil? && default.nil?
next
elsif external_value.nil?
value = default.safeevaluate(scope)
else
value = external_value
end
self[param.to_sym] = value
param
end.compact
end
def copy_as_resource
result = Puppet::Resource.new(type, title)
result.file = self.file
result.line = self.line
result.exported = self.exported
result.virtual = self.virtual
result.tag(*self.tags)
result.environment = environment
result.instance_variable_set(:@rstype, resource_type)
to_hash.each do |p, v|
if v.is_a?(Puppet::Resource)
v = Puppet::Resource.new(v.type, v.title)
elsif v.is_a?(Array)
# flatten resource references arrays
v = v.flatten if v.flatten.find { |av| av.is_a?(Puppet::Resource) }
v = v.collect do |av|
av = Puppet::Resource.new(av.type, av.title) if av.is_a?(Puppet::Resource)
av
end
end
- # If the value is an array with only one value, then
- # convert it to a single value. This is largely so that
- # the database interaction doesn't have to worry about
- # whether it returns an array or a string.
- result[p] = if v.is_a?(Array) and v.length == 1
- v[0]
- else
- v
- end
+ if Puppet[:parser] == 'current'
+ # If the value is an array with only one value, then
+ # convert it to a single value. This is largely so that
+ # the database interaction doesn't have to worry about
+ # whether it returns an array or a string.
+ #
+ # This behavior is not done in the future parser, but we can't issue a
+ # deprecation warning either since there isn't anything that a user can
+ # do about it.
+ result[p] = if v.is_a?(Array) and v.length == 1
+ v[0]
+ else
+ v
+ end
+ else
+ result[p] = v
+ end
end
result
end
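A small sketch of the parser-dependent collapsing described above; `original` stands for any resource whose parameter value is a one-element array such as ['wheel']:

Puppet[:parser] = 'current'
original.copy_as_resource[:groups]   # => 'wheel', the single-element array is collapsed

Puppet[:parser] = 'future'
original.copy_as_resource[:groups]   # => ['wheel'], the array is preserved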
def valid_parameter?(name)
resource_type.valid_parameter?(name)
end
# Verify that all required arguments are either present or
# have been provided with defaults.
# Must be called after 'set_default_parameters'. We can't join the methods
# because Type#set_parameters needs specifically ordered behavior.
def validate_complete
return unless resource_type and resource_type.respond_to?(:arguments)
resource_type.arguments.each do |param, default|
param = param.to_sym
fail Puppet::ParseError, "Must pass #{param} to #{self}" unless parameters.include?(param)
end
+
+ # Perform optional type checking
+ if Puppet[:parser] == 'future'
+ # Perform type checking
+ arg_types = resource_type.argument_types
+ # Parameters is a map from name to parameter, and the parameter again has name and value
+ parameters.each do |name, value|
+ next unless t = arg_types[name.to_s] # untyped, and parameters are symbols here (aargh, strings in the type)
+ unless Puppet::Pops::Types::TypeCalculator.instance?(t, value.value)
+ inferred_type = Puppet::Pops::Types::TypeCalculator.infer(value.value)
+ actual = Puppet::Pops::Types::TypeCalculator.generalize!(inferred_type)
+ fail Puppet::ParseError, "Expected parameter '#{name}' of '#{self}' to have type #{t.to_s}, got #{actual.to_s}"
+ end
+ end
+ end
end
def validate_parameter(name)
raise ArgumentError, "Invalid parameter #{name}" unless valid_parameter?(name)
end
def prune_parameters(options = {})
properties = resource_type.properties.map(&:name)
dup.collect do |attribute, value|
if value.to_s.empty? or Array(value).empty?
delete(attribute)
elsif value.to_s == "absent" and attribute.to_s != "ensure"
delete(attribute)
end
parameters_to_include = options[:parameters_to_include] || []
delete(attribute) unless properties.include?(attribute) || parameters_to_include.include?(attribute)
end
self
end
private
# Produce a canonical method name.
def parameter_name(param)
param = param.to_s.downcase.to_sym
if param == :name and namevar
param = namevar
end
param
end
# The namevar for our resource type. If the type doesn't exist,
# always use :name.
def namevar
if builtin_type? and t = resource_type and t.key_attributes.length == 1
t.key_attributes.first
else
:name
end
end
def extract_parameters(params)
params.each do |param, value|
validate_parameter(param) if strict?
self[param] = value
end
end
def extract_type_and_title(argtype, argtitle)
if (argtitle || argtype) =~ /^([^\[\]]+)\[(.+)\]$/m then [ $1, $2 ]
elsif argtitle then [ argtype, argtitle ]
elsif argtype.is_a?(Puppet::Type) then [ argtype.class.name, argtype.title ]
elsif argtype.is_a?(Hash) then
raise ArgumentError, "Puppet::Resource.new does not take a hash as the first argument. "+
"Did you mean (#{(argtype[:type] || argtype["type"]).inspect}, #{(argtype[:title] || argtype["title"]).inspect }) ?"
else raise ArgumentError, "No title provided and #{argtype.inspect} is not a valid resource reference"
end
end
def munge_type_name(value)
return :main if value == :main
return "Class" if value == "" or value.nil? or value.to_s.downcase == "component"
value.to_s.split("::").collect { |s| s.capitalize }.join("::")
end
def parse_title
h = {}
type = resource_type
if type.respond_to? :title_patterns
type.title_patterns.each { |regexp, symbols_and_lambdas|
if captures = regexp.match(title.to_s)
symbols_and_lambdas.zip(captures[1..-1]).each do |symbol_and_lambda,capture|
symbol, proc = symbol_and_lambda
# Many types pass "identity" as the proc; we might as well give
# them a shortcut to delivering that without the extra cost.
#
# Especially because the global type defines title_patterns and
# uses the identity patterns.
#
# This was worth about 8MB of memory allocation saved in my
# testing, so is worth the complexity for the API.
if proc then
h[symbol] = proc.call(capture)
else
h[symbol] = capture
end
end
return h
end
}
# If we've gotten this far, then none of the provided title patterns
# matched. Since there's no way to determine the title, the
# resource should fail here.
raise Puppet::Error, "No set of title patterns matched the title \"#{title}\"."
else
return { :name => title.to_s }
end
end
def parameters
# @parameters could have been loaded from YAML, causing it to be nil (by
# bypassing initialize).
@parameters ||= {}
end
end
diff --git a/lib/puppet/resource/catalog.rb b/lib/puppet/resource/catalog.rb
index 5a67f2574..832d88eab 100644
--- a/lib/puppet/resource/catalog.rb
+++ b/lib/puppet/resource/catalog.rb
@@ -1,552 +1,554 @@
require 'puppet/node'
require 'puppet/indirector'
require 'puppet/transaction'
require 'puppet/util/pson'
require 'puppet/util/tagging'
require 'puppet/graph'
# This class models a node catalog. It is the thing meant to be passed
# from server to client, and it contains all of the information in the
# catalog, including the resources and the relationships between them.
#
# @api public
class Puppet::Resource::Catalog < Puppet::Graph::SimpleGraph
class DuplicateResourceError < Puppet::Error
include Puppet::ExternalFileError
end
extend Puppet::Indirector
indirects :catalog, :terminus_setting => :catalog_terminus
include Puppet::Util::Tagging
extend Puppet::Util::Pson
# The host name this is a catalog for.
attr_accessor :name
# The catalog version. Used for testing whether a catalog
# is up to date.
attr_accessor :version
# How long this catalog took to retrieve. Used for reporting stats.
attr_accessor :retrieval_duration
# Whether this is a host catalog, which behaves very differently.
# In particular, reports are sent, graphs are made, and state is
# stored in the state database. If this is set incorrectly, then you often
# end up in infinite loops, because catalogs are used to make things
# that the host catalog needs.
attr_accessor :host_config
# Whether this catalog was retrieved from the cache, which affects
# whether it is written back out again.
attr_accessor :from_cache
# Some metadata to help us compile and generally respond to the current state.
attr_accessor :client_version, :server_version
# A String representing the environment for this catalog
attr_accessor :environment
# The actual environment instance that was used during compilation
attr_accessor :environment_instance
# Add classes to our class list.
def add_class(*classes)
classes.each do |klass|
@classes << klass
end
# Add the class names as tags, too.
tag(*classes)
end
def title_key_for_ref( ref )
ref =~ /^([-\w:]+)\[(.*)\]$/m
[$1, $2]
end
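For example, on any catalog instance:

catalog.title_key_for_ref("File[/etc/motd]")   # => ["File", "/etc/motd"]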
def add_resource(*resources)
resources.each do |resource|
add_one_resource(resource)
end
end
# @param resource [A Resource] a resource in the catalog
# @return [A Resource, nil] the resource that contains the given resource
# @api public
def container_of(resource)
adjacent(resource, :direction => :in)[0]
end
def add_one_resource(resource)
- fail_on_duplicate_type_and_title(resource)
+ title_key = title_key_for_ref(resource.ref)
+ if @resource_table[title_key]
+ fail_on_duplicate_type_and_title(resource, title_key)
+ end
- add_resource_to_table(resource)
+ add_resource_to_table(resource, title_key)
create_resource_aliases(resource)
resource.catalog = self if resource.respond_to?(:catalog=)
add_resource_to_graph(resource)
end
private :add_one_resource
- def add_resource_to_table(resource)
- title_key = title_key_for_ref(resource.ref)
+ def add_resource_to_table(resource, title_key)
@resource_table[title_key] = resource
@resources << title_key
end
private :add_resource_to_table
def add_resource_to_graph(resource)
add_vertex(resource)
@relationship_graph.add_vertex(resource) if @relationship_graph
end
private :add_resource_to_graph
def create_resource_aliases(resource)
if resource.respond_to?(:isomorphic?) and resource.isomorphic? and resource.name != resource.title
self.alias(resource, resource.uniqueness_key)
end
end
private :create_resource_aliases
# Create an alias for a resource.
def alias(resource, key)
resource.ref =~ /^(.+)\[/
class_name = $1 || resource.class.name
newref = [class_name, key].flatten
if key.is_a? String
ref_string = "#{class_name}[#{key}]"
return if ref_string == resource.ref
end
# LAK:NOTE It's important that we directly compare the references,
# because sometimes an alias is created before the resource is
# added to the catalog, so comparing inside the below if block
# isn't sufficient.
if existing = @resource_table[newref]
return if existing == resource
resource_declaration = " at #{resource.file}:#{resource.line}" if resource.file and resource.line
existing_declaration = " at #{existing.file}:#{existing.line}" if existing.file and existing.line
msg = "Cannot alias #{resource.ref} to #{key.inspect}#{resource_declaration}; resource #{newref.inspect} already declared#{existing_declaration}"
raise ArgumentError, msg
end
@resource_table[newref] = resource
@aliases[resource.ref] ||= []
@aliases[resource.ref] << newref
end
# Apply our catalog to the local host.
# @param options [Hash{Symbol => Object}] a hash of options
# @option options [Puppet::Transaction::Report] :report
# The report object to log this transaction to. This is optional,
# and the resulting transaction will create a report if not
# supplied.
# @option options [Array[String]] :tags
# Tags used to filter the transaction. If supplied then only
# resources tagged with any of these tags will be evaluated.
# @option options [Boolean] :ignoreschedules
# Ignore schedules when evaluating resources
# @option options [Boolean] :for_network_device
# Whether this catalog is for a network device
#
# @return [Puppet::Transaction] the transaction created for this
# application
#
# @api public
def apply(options = {})
Puppet::Util::Storage.load if host_config?
transaction = create_transaction(options)
begin
transaction.report.as_logging_destination do
transaction.evaluate
end
rescue Puppet::Error => detail
Puppet.log_exception(detail, "Could not apply complete catalog: #{detail}")
rescue => detail
Puppet.log_exception(detail, "Got an uncaught exception of type #{detail.class}: #{detail}")
ensure
# Don't try to store state unless we're a host config
# too recursive.
Puppet::Util::Storage.store if host_config?
end
yield transaction if block_given?
transaction
end
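A hedged usage sketch of the options documented above, assuming `catalog` is an already-compiled RAL catalog and the tag is a placeholder:

report = Puppet::Transaction::Report.new("apply")
catalog.apply(
  :report          => report,
  :tags            => %w[webserver],
  :ignoreschedules => true
)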
# The relationship_graph form of the catalog. This contains all of the
# dependency edges that are used for determining order.
#
# @param given_prioritizer [Puppet::Graph::Prioritizer] The prioritization
# strategy to use when constructing the relationship graph. Defaults the
# being determined by the `ordering` setting.
# @return [Puppet::Graph::RelationshipGraph]
# @api public
def relationship_graph(given_prioritizer = nil)
if @relationship_graph.nil?
@relationship_graph = Puppet::Graph::RelationshipGraph.new(given_prioritizer || prioritizer)
@relationship_graph.populate_from(self)
end
@relationship_graph
end
def clear(remove_resources = true)
super()
# We have to do this so that the resources clean themselves up.
@resource_table.values.each { |resource| resource.remove } if remove_resources
@resource_table.clear
@resources = []
if @relationship_graph
@relationship_graph.clear
@relationship_graph = nil
end
end
def classes
@classes.dup
end
# Create a new resource and register it in the catalog.
def create_resource(type, options)
unless klass = Puppet::Type.type(type)
raise ArgumentError, "Unknown resource type #{type}"
end
return unless resource = klass.new(options)
add_resource(resource)
resource
end
# Make sure all of our resources are "finished".
def finalize
make_default_resources
@resource_table.values.each { |resource| resource.finish }
write_graph(:resources)
end
def host_config?
host_config
end
def initialize(name = nil, environment = Puppet::Node::Environment::NONE)
super()
@name = name
@classes = []
@resource_table = {}
@resources = []
@relationship_graph = nil
@host_config = true
@environment_instance = environment
@environment = environment.to_s
@aliases = {}
if block_given?
yield(self)
finalize
end
end
# Make the default objects necessary for function.
def make_default_resources
# We have to add the resources to the catalog, or else they won't get cleaned up after
# the transaction.
# First create the default scheduling objects
Puppet::Type.type(:schedule).mkdefaultschedules.each { |res| add_resource(res) unless resource(res.ref) }
# And filebuckets
if bucket = Puppet::Type.type(:filebucket).mkdefaultbucket
add_resource(bucket) unless resource(bucket.ref)
end
end
# Remove the resource from our catalog. Notice that we also call
# 'remove' on the resource, at least until resource classes no longer maintain
# references to the resource instances.
def remove_resource(*resources)
resources.each do |resource|
title_key = title_key_for_ref(resource.ref)
@resource_table.delete(title_key)
if aliases = @aliases[resource.ref]
aliases.each { |res_alias| @resource_table.delete(res_alias) }
@aliases.delete(resource.ref)
end
remove_vertex!(resource) if vertex?(resource)
@relationship_graph.remove_vertex!(resource) if @relationship_graph and @relationship_graph.vertex?(resource)
@resources.delete(title_key)
resource.remove
end
end
# Look a resource up by its reference (e.g., File[/etc/passwd]).
def resource(type, title = nil)
# Always create a resource reference, so that it always
# canonicalizes how we are referring to them.
if title
res = Puppet::Resource.new(type, title)
else
# If they didn't provide a title, then we expect the first
# argument to be of the form 'Class[name]', which our
# Reference class canonicalizes for us.
res = Puppet::Resource.new(nil, type)
end
res.catalog = self
title_key = [res.type, res.title.to_s]
uniqueness_key = [res.type, res.uniqueness_key].flatten
@resource_table[title_key] || @resource_table[uniqueness_key]
end
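Both lookup forms resolve the same table entry, for example:

catalog.resource("File[/etc/motd]")     # full reference form
catalog.resource("File", "/etc/motd")   # type and title form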
def resource_refs
resource_keys.collect{ |type, name| name.is_a?( String ) ? "#{type}[#{name}]" : nil}.compact
end
def resource_keys
@resource_table.keys
end
def resources
@resources.collect do |key|
@resource_table[key]
end
end
def self.from_data_hash(data)
result = new(data['name'], Puppet::Node::Environment::NONE)
if tags = data['tags']
result.tag(*tags)
end
if version = data['version']
result.version = version
end
if environment = data['environment']
result.environment = environment
result.environment_instance = Puppet::Node::Environment.remote(environment.to_sym)
end
if resources = data['resources']
result.add_resource(*resources.collect do |res|
Puppet::Resource.from_data_hash(res)
end)
end
if edges = data['edges']
edges.each do |edge_hash|
edge = Puppet::Relationship.from_data_hash(edge_hash)
unless source = result.resource(edge.source)
raise ArgumentError, "Could not intern from data: Could not find relationship source #{edge.source.inspect}"
end
edge.source = source
unless target = result.resource(edge.target)
raise ArgumentError, "Could not intern from data: Could not find relationship target #{edge.target.inspect}"
end
edge.target = target
result.add_edge(edge)
end
end
if classes = data['classes']
result.add_class(*classes)
end
result
end
def self.from_pson(data)
Puppet.deprecation_warning("from_pson is being removed in favour of from_data_hash.")
self.from_data_hash(data)
end
def to_data_hash
{
'tags' => tags,
'name' => name,
'version' => version,
'environment' => environment.to_s,
'resources' => @resources.collect { |v| @resource_table[v].to_pson_data_hash },
'edges' => edges. collect { |e| e.to_pson_data_hash },
'classes' => classes
}
end
PSON.register_document_type('Catalog',self)
def to_pson_data_hash
{
'document_type' => 'Catalog',
'data' => to_data_hash,
'metadata' => {
'api_version' => 1
}
}
end
def to_pson(*args)
to_pson_data_hash.to_pson(*args)
end
# Convert our catalog into a RAL catalog.
def to_ral
to_catalog :to_ral
end
# Convert our catalog into a catalog of Puppet::Resource instances.
def to_resource
to_catalog :to_resource
end
# filter out the catalog, applying +block+ to each resource.
# If the block result is false, the resource will
# be kept; otherwise it will be skipped.
def filter(&block)
to_catalog :to_resource, &block
end
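A usage sketch of the inverted block semantics noted above:

# Keeps every resource for which the block returns false,
# i.e. this drops exported resources and keeps the rest.
pruned = catalog.filter { |resource| resource.exported? }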
# Store the classes in the classfile.
def write_class_file
::File.open(Puppet[:classfile], "w") do |f|
f.puts classes.join("\n")
end
rescue => detail
Puppet.err "Could not create class file #{Puppet[:classfile]}: #{detail}"
end
# Store the list of resources we manage
def write_resource_file
::File.open(Puppet[:resourcefile], "w") do |f|
to_print = resources.map do |resource|
next unless resource.managed?
if resource.name_var
"#{resource.type}[#{resource[resource.name_var]}]"
else
"#{resource.ref.downcase}"
end
end.compact
f.puts to_print.join("\n")
end
rescue => detail
Puppet.err "Could not create resource file #{Puppet[:resourcefile]}: #{detail}"
end
# Produce the graph files if requested.
def write_graph(name)
# We only want to graph the main host catalog.
return unless host_config?
super
end
private
def prioritizer
@prioritizer ||= case Puppet[:ordering]
when "title-hash"
Puppet::Graph::TitleHashPrioritizer.new
when "manifest"
Puppet::Graph::SequentialPrioritizer.new
when "random"
Puppet::Graph::RandomPrioritizer.new
else
raise Puppet::DevError, "Unknown ordering type #{Puppet[:ordering]}"
end
end
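For context, the case statement above is driven by the `ordering` setting, for example:

Puppet[:ordering] = 'manifest'   # SequentialPrioritizer: apply resources in manifest order
# the other accepted values above are 'title-hash' and 'random'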
def create_transaction(options)
transaction = Puppet::Transaction.new(self, options[:report], prioritizer)
transaction.tags = options[:tags] if options[:tags]
transaction.ignoreschedules = true if options[:ignoreschedules]
transaction.for_network_device = options[:network_device]
transaction
end
# Verify that the given resource isn't declared elsewhere.
- def fail_on_duplicate_type_and_title(resource)
+ def fail_on_duplicate_type_and_title(resource, title_key)
# Short-circuit the common case.
- return unless existing_resource = @resource_table[title_key_for_ref(resource.ref)]
+ return unless existing_resource = @resource_table[title_key]
# If we've gotten this far, it's a real conflict
msg = "Duplicate declaration: #{resource.ref} is already declared"
msg << " in file #{existing_resource.file}:#{existing_resource.line}" if existing_resource.file and existing_resource.line
msg << "; cannot redeclare"
raise DuplicateResourceError.new(msg, resource.file, resource.line)
end
# An abstracted method for converting one catalog into another type of catalog.
# This pretty much just converts all of the resources from one class to another, using
# a conversion method.
def to_catalog(convert)
result = self.class.new(self.name, self.environment_instance)
result.version = self.version
map = {}
resources.each do |resource|
next if virtual_not_exported?(resource)
next if block_given? and yield resource
newres = resource.copy_as_resource
newres.catalog = result
if convert != :to_resource
newres = newres.to_ral
end
# We can't guarantee that resources don't munge their names
# (like files do with trailing slashes), so we have to keep track
# of what a resource got converted to.
map[resource.ref] = newres
result.add_resource newres
end
message = convert.to_s.gsub "_", " "
edges.each do |edge|
# Skip edges between virtual resources.
next if virtual_not_exported?(edge.source)
next if block_given? and yield edge.source
next if virtual_not_exported?(edge.target)
next if block_given? and yield edge.target
unless source = map[edge.source.ref]
raise Puppet::DevError, "Could not find resource #{edge.source.ref} when converting #{message} resources"
end
unless target = map[edge.target.ref]
raise Puppet::DevError, "Could not find resource #{edge.target.ref} when converting #{message} resources"
end
result.add_edge(source, target, edge.label)
end
map.clear
result.add_class(*self.classes)
result.tag(*self.tags)
result
end
def virtual_not_exported?(resource)
- resource.respond_to?(:virtual?) and resource.virtual? and (resource.respond_to?(:exported?) and not resource.exported?)
+ resource.virtual && !resource.exported
end
end
diff --git a/lib/puppet/resource/type.rb b/lib/puppet/resource/type.rb
index 27884df9f..04d1f1f03 100644
--- a/lib/puppet/resource/type.rb
+++ b/lib/puppet/resource/type.rb
@@ -1,386 +1,413 @@
require 'puppet/parser'
require 'puppet/util/warnings'
require 'puppet/util/errors'
require 'puppet/util/inline_docs'
require 'puppet/parser/ast/leaf'
require 'puppet/parser/ast/block_expression'
require 'puppet/dsl'
# Puppet::Resource::Type represents nodes, classes and defined types.
#
# It has a standard format for external consumption, usable from the
# resource_type indirection via rest and the resource_type face. See the
# {file:api_docs/http_resource_type.md#Schema resource type schema
# description}.
#
# @api public
class Puppet::Resource::Type
Puppet::ResourceType = self
include Puppet::Util::InlineDocs
include Puppet::Util::Warnings
include Puppet::Util::Errors
RESOURCE_KINDS = [:hostclass, :node, :definition]
# Map the names used in our documentation to the names used internally
RESOURCE_KINDS_TO_EXTERNAL_NAMES = {
:hostclass => "class",
:node => "node",
:definition => "defined_type",
}
RESOURCE_EXTERNAL_NAMES_TO_KINDS = RESOURCE_KINDS_TO_EXTERNAL_NAMES.invert
attr_accessor :file, :line, :doc, :code, :ruby_code, :parent, :resource_type_collection
attr_reader :namespace, :arguments, :behaves_like, :module_name
+ # Map from argument (aka parameter) names to Puppet Type
+ # @return [Hash<Symbol, Puppet::Pops::Types::PAnyType] map from name to type
+ #
+ attr_reader :argument_types
+
# This should probably be renamed to 'kind' eventually, in accordance with the changes
# made for serialization and API usability (#14137). At the moment that seems like
# it would touch a whole lot of places in the code, though. --cprice 2012-04-23
attr_reader :type
RESOURCE_KINDS.each do |t|
define_method("#{t}?") { self.type == t }
end
require 'puppet/indirector'
extend Puppet::Indirector
indirects :resource_type, :terminus_class => :parser
def self.from_data_hash(data)
name = data.delete('name') or raise ArgumentError, "Resource Type names must be specified"
kind = data.delete('kind') || "definition"
unless type = RESOURCE_EXTERNAL_NAMES_TO_KINDS[kind]
raise ArgumentError, "Unsupported resource kind '#{kind}'"
end
data = data.inject({}) { |result, ary| result[ary[0].intern] = ary[1]; result }
# External documentation uses "parameters" but the internal name
# is "arguments"
data[:arguments] = data.delete(:parameters)
new(type, name, data)
end
def self.from_pson(data)
Puppet.deprecation_warning("from_pson is being removed in favour of from_data_hash.")
self.from_data_hash(data)
end
def to_data_hash
data = [:doc, :line, :file, :parent].inject({}) do |hash, param|
next hash unless (value = self.send(param)) and (value != "")
hash[param.to_s] = value
hash
end
# External documentation uses "parameters" but the internal name
# is "arguments"
data['parameters'] = arguments.dup unless arguments.empty?
data['name'] = name
unless RESOURCE_KINDS_TO_EXTERNAL_NAMES.has_key?(type)
raise ArgumentError, "Unsupported resource kind '#{type}'"
end
data['kind'] = RESOURCE_KINDS_TO_EXTERNAL_NAMES[type]
data
end
# Are we a child of the passed class? Do a recursive search up our
# parentage tree to figure it out.
def child_of?(klass)
return false unless parent
return(klass == parent_type ? true : parent_type.child_of?(klass))
end
# Now evaluate the code associated with this class or definition.
def evaluate_code(resource)
static_parent = evaluate_parent_type(resource)
scope = static_parent || resource.scope
scope = scope.newscope(:namespace => namespace, :source => self, :resource => resource) unless resource.title == :main
scope.compiler.add_class(name) unless definition?
set_resource_parameters(resource, scope)
resource.add_edge_to_stage
if code
if @match # Only bother setting up the ephemeral scope if there are match variables to add into it
begin
elevel = scope.ephemeral_level
scope.ephemeral_from(@match, file, line)
code.safeevaluate(scope)
ensure
scope.unset_ephemeral_var(elevel)
end
else
code.safeevaluate(scope)
end
end
evaluate_ruby_code(resource, scope) if ruby_code
end
def initialize(type, name, options = {})
@type = type.to_s.downcase.to_sym
raise ArgumentError, "Invalid resource supertype '#{type}'" unless RESOURCE_KINDS.include?(@type)
name = convert_from_ast(name) if name.is_a?(Puppet::Parser::AST::HostName)
set_name_and_namespace(name)
[:code, :doc, :line, :file, :parent].each do |param|
next unless value = options[param]
send(param.to_s + "=", value)
end
set_arguments(options[:arguments])
+ set_argument_types(options[:argument_types])
@match = nil
@module_name = options[:module_name]
end
# This is only used for node names, and really only when the node name
# is a regexp.
def match(string)
return string.to_s.downcase == name unless name_is_regex?
@match = @name.match(string)
end
# Add code from a new instance to our code.
def merge(other)
fail "#{name} is not a class; cannot add code to it" unless type == :hostclass
fail "#{other.name} is not a class; cannot add code from it" unless other.type == :hostclass
fail "Cannot have code outside of a class/node/define because 'freeze_main' is enabled" if name == "" and Puppet.settings[:freeze_main]
if parent and other.parent and parent != other.parent
fail "Cannot merge classes with different parent classes (#{name} => #{parent} vs. #{other.name} => #{other.parent})"
end
# We know they're either equal or only one is set, so keep whichever parent is specified.
self.parent ||= other.parent
if other.doc
self.doc ||= ""
self.doc += other.doc
end
# This might just be an empty, stub class.
return unless other.code
unless self.code
self.code = other.code
return
end
self.code = Puppet::Parser::ParserFactory.code_merger.concatenate([self, other])
# self.code = self.code.sequence_with(other.code)
end
# Make an instance of the resource type, and place it in the catalog
# if it isn't in the catalog already. This is only possible for
# classes and nodes. No parameters need be supplied--if this is a
# parameterized class, then all parameters take on their default
# values.
def ensure_in_catalog(scope, parameters=nil)
type == :definition and raise ArgumentError, "Cannot create resources for defined resource types"
resource_type = type == :hostclass ? :class : :node
# Do nothing if the resource already exists; this makes sure we don't
# get multiple copies of the class resource, which helps provide the
# singleton nature of classes.
# we should not do this for classes with parameters
# if parameters are passed, we should still try to create the resource
# even if it exists so that we can fail
# this prevents us from being able to combine param classes with include
if resource = scope.catalog.resource(resource_type, name) and !parameters
return resource
end
resource = Puppet::Parser::Resource.new(resource_type, name, :scope => scope, :source => self)
assign_parameter_values(parameters, resource)
instantiate_resource(scope, resource)
scope.compiler.add_resource(scope, resource)
resource
end
def instantiate_resource(scope, resource)
# Make sure our parent class has been evaluated, if we have one.
if parent && !scope.catalog.resource(resource.type, parent)
parent_type(scope).ensure_in_catalog(scope)
end
if ['Class', 'Node'].include? resource.type
scope.catalog.tag(*resource.tags)
end
end
def name
return @name unless @name.is_a?(Regexp)
@name.source.downcase.gsub(/[^-\w:.]/,'').sub(/^\.+/,'')
end
def name_is_regex?
@name.is_a?(Regexp)
end
def assign_parameter_values(parameters, resource)
return unless parameters
# It'd be nice to assign default parameter values here,
# but we can't because they often rely on local variables
# created during set_resource_parameters.
parameters.each do |name, value|
resource.set_parameter name, value
end
end
# MQR TODO:
#
# The change(s) introduced by the fix for #4270 are mostly silly & should be
# removed, though we didn't realize it at the time. If it can be established/
# ensured that nodes never call parent_type and that resource_types are always
# (as they should be) members of exactly one resource_type_collection the
# following method could / should be replaced with:
#
# def parent_type
# @parent_type ||= parent && (
# resource_type_collection.find_or_load([name],parent,type.to_sym) ||
# fail Puppet::ParseError, "Could not find parent resource type '#{parent}' of type #{type} in #{resource_type_collection.environment}"
# )
# end
#
# ...and then the rest of the changes around passing in scope reverted.
#
def parent_type(scope = nil)
return nil unless parent
unless @parent_type
raise "Must pass scope to parent_type when called first time" unless scope
unless @parent_type = scope.environment.known_resource_types.send("find_#{type}", [name], parent)
fail Puppet::ParseError, "Could not find parent resource type '#{parent}' of type #{type} in #{scope.environment}"
end
end
@parent_type
end
# Set any arguments passed by the resource as variables in the scope.
def set_resource_parameters(resource, scope)
set = {}
resource.to_hash.each do |param, value|
param = param.to_sym
fail Puppet::ParseError, "#{resource.ref} does not accept attribute #{param}" unless valid_parameter?(param)
exceptwrap { scope[param.to_s] = value }
set[param] = true
end
if @type == :hostclass
scope["title"] = resource.title.to_s.downcase unless set.include? :title
scope["name"] = resource.name.to_s.downcase unless set.include? :name
else
scope["title"] = resource.title unless set.include? :title
scope["name"] = resource.name unless set.include? :name
end
scope["module_name"] = module_name if module_name and ! set.include? :module_name
if caller_name = scope.parent_module_name and ! set.include?(:caller_module_name)
scope["caller_module_name"] = caller_name
end
scope.class_set(self.name,scope) if hostclass? or node?
# Evaluate the default parameters, now that all other variables are set
default_params = resource.set_default_parameters(scope)
default_params.each { |param| scope[param] = resource[param] }
# This has to come after the above parameters so that default values
# can use their values
resource.validate_complete
end
# Check whether a given argument is valid.
def valid_parameter?(param)
param = param.to_s
return true if param == "name"
return true if Puppet::Type.metaparam?(param)
return false unless defined?(@arguments)
return(arguments.include?(param) ? true : false)
end
def set_arguments(arguments)
@arguments = {}
return if arguments.nil?
arguments.each do |arg, default|
arg = arg.to_s
warn_if_metaparam(arg, default)
@arguments[arg] = default
end
end
+ # Sets the argument name to Puppet Type hash used for type checking.
+ # Names must correspond to available arguments (they must be defined first).
+ # Arguments not mentioned will not be type-checked. Only supported when parser == "future"
+ #
+ def set_argument_types(name_to_type_hash)
+ @argument_types = {}
+ # Stop here if not running under the future parser; the rest requires pops to be initialized
+ # and that the type system is available
+ return unless Puppet[:parser] == 'future' && name_to_type_hash
+ name_to_type_hash.each do |name, t|
+ # catch internal errors
+ unless @arguments.include?(name)
+ raise Puppet::DevError, "Parameter '#{name}' is given a type, but is not a valid parameter."
+ end
+ unless t.is_a? Puppet::Pops::Types::PAnyType
+ raise Puppet::DevError, "Parameter '#{name}' is given a type that is not a Puppet Type, got #{t.class}"
+ end
+ @argument_types[name] = t
+ end
+ end
+
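A hedged sketch of feeding this hook, assuming `resource_type` is a Puppet::Resource::Type whose arguments already include 'port'; the name and type are illustrative and only take effect under the future parser:

types = { 'port' => Puppet::Pops::Types::TypeFactory.integer }
resource_type.set_argument_types(types)   # raises Puppet::DevError for unknown names or non-Pops types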
private
def convert_from_ast(name)
value = name.value
if value.is_a?(Puppet::Parser::AST::Regex)
name = value.value
else
name = value
end
end
def evaluate_parent_type(resource)
return unless klass = parent_type(resource.scope) and parent_resource = resource.scope.compiler.catalog.resource(:class, klass.name) || resource.scope.compiler.catalog.resource(:node, klass.name)
parent_resource.evaluate unless parent_resource.evaluated?
parent_scope(resource.scope, klass)
end
def evaluate_ruby_code(resource, scope)
Puppet::DSL::ResourceAPI.new(resource, scope, ruby_code).evaluate
end
# Split an fq name into a namespace and name
def namesplit(fullname)
ary = fullname.split("::")
n = ary.pop || ""
ns = ary.join("::")
return ns, n
end
def parent_scope(scope, klass)
scope.class_scope(klass) || raise(Puppet::DevError, "Could not find scope for #{klass.name}")
end
def set_name_and_namespace(name)
if name.is_a?(Regexp)
@name = name
@namespace = ""
else
@name = name.to_s.downcase
# Note we're doing something somewhat weird here -- we're setting
# the class's namespace to its fully qualified name. This means
# anything inside that class starts looking in that namespace first.
@namespace, ignored_shortname = @type == :hostclass ? [@name, ''] : namesplit(@name)
end
end
def warn_if_metaparam(param, default)
return unless Puppet::Type.metaparamclass(param)
if default
warnonce "#{param} is a metaparam; this value will inherit to all contained resources in the #{self.name} definition"
else
raise Puppet::ParseError, "#{param} is a metaparameter; please choose another parameter name in the #{self.name} definition"
end
end
end
diff --git a/lib/puppet/settings.rb b/lib/puppet/settings.rb
index c651f3aee..3201b125f 100644
--- a/lib/puppet/settings.rb
+++ b/lib/puppet/settings.rb
@@ -1,1372 +1,1392 @@
require 'puppet'
require 'getoptlong'
require 'puppet/util/watched_file'
require 'puppet/util/command_line/puppet_option_parser'
+require 'forwardable'
# The class for handling configuration files.
class Puppet::Settings
+ extend Forwardable
include Enumerable
require 'puppet/settings/errors'
require 'puppet/settings/base_setting'
require 'puppet/settings/string_setting'
require 'puppet/settings/enum_setting'
require 'puppet/settings/array_setting'
require 'puppet/settings/file_setting'
require 'puppet/settings/directory_setting'
require 'puppet/settings/file_or_directory_setting'
require 'puppet/settings/path_setting'
require 'puppet/settings/boolean_setting'
require 'puppet/settings/terminus_setting'
require 'puppet/settings/duration_setting'
require 'puppet/settings/ttl_setting'
require 'puppet/settings/priority_setting'
require 'puppet/settings/autosign_setting'
require 'puppet/settings/config_file'
require 'puppet/settings/value_translator'
require 'puppet/settings/environment_conf'
# local reference for convenience
PuppetOptionParser = Puppet::Util::CommandLine::PuppetOptionParser
attr_accessor :files
attr_reader :timer
# These are the settings that every app is required to specify; there are reasonable defaults defined in application.rb.
REQUIRED_APP_SETTINGS = [:logdir, :confdir, :vardir]
# This method is intended for puppet internal use only; it is a convenience method that
# returns reasonable application default settings values for a given run_mode.
def self.app_defaults_for_run_mode(run_mode)
{
:name => run_mode.to_s,
:run_mode => run_mode.name,
:confdir => run_mode.conf_dir,
:vardir => run_mode.var_dir,
:rundir => run_mode.run_dir,
:logdir => run_mode.log_dir,
}
end
def self.default_certname()
hostname = hostname_fact
domain = domain_fact
if domain and domain != ""
fqdn = [hostname, domain].join(".")
else
fqdn = hostname
end
fqdn.to_s.gsub(/\.$/, '')
end
def self.hostname_fact()
Facter["hostname"].value
end
def self.domain_fact()
Facter["domain"].value
end
def self.default_config_file_name
"puppet.conf"
end
# Create a new collection of config settings.
def initialize
@config = {}
@shortnames = {}
@created = []
# Keep track of set values.
@value_sets = {
:cli => Values.new(:cli, @config),
:memory => Values.new(:memory, @config),
:application_defaults => Values.new(:application_defaults, @config),
:overridden_defaults => Values.new(:overridden_defaults, @config),
}
@configuration_file = nil
# And keep a per-environment cache
@cache = Hash.new { |hash, key| hash[key] = {} }
+ @values = Hash.new { |hash, key| hash[key] = {} }
# The list of sections we've used.
@used = []
@hooks_to_call_on_application_initialization = []
@deprecated_setting_names = []
@deprecated_settings_that_have_been_configured = []
@translate = Puppet::Settings::ValueTranslator.new
@config_file_parser = Puppet::Settings::ConfigFile.new(@translate)
end
# @param name [Symbol] The name of the setting to fetch
# @return [Puppet::Settings::BaseSetting] The setting object
def setting(name)
@config[name]
end
# Retrieve a config value
# @param param [Symbol] the name of the setting
# @return [Object] the value of the setting
# @api private
def [](param)
if @deprecated_setting_names.include?(param)
issue_deprecation_warning(setting(param), "Accessing '#{param}' as a setting is deprecated.")
end
value(param)
end
# Set a config value. This doesn't set the defaults, it sets the value itself.
# @param param [Symbol] the name of the setting
# @param value [Object] the new value of the setting
# @api private
def []=(param, value)
if @deprecated_setting_names.include?(param)
issue_deprecation_warning(setting(param), "Modifying '#{param}' as a setting is deprecated.")
end
@value_sets[:memory].set(param, value)
unsafe_flush_cache
end
# Create a new default value for the given setting. The default overrides are
# higher precedence than the defaults given in defaults.rb, but lower
# precedence than any other values for the setting. This allows one setting
# `a` to change the default of setting `b`, but still allow a user to provide
# a value for setting `b`.
#
# @param param [Symbol] the name of the setting
# @param value [Object] the new default value for the setting
# @api private
def override_default(param, value)
@value_sets[:overridden_defaults].set(param, value)
unsafe_flush_cache
end
# Generate the list of valid arguments, in a format that GetoptLong can
# understand, and add them to the passed option list.
def addargs(options)
# Add all of the settings as valid options.
self.each { |name, setting|
setting.getopt_args.each { |args| options << args }
}
options
end
# Generate the list of valid arguments, in a format that OptionParser can
# understand, and add them to the passed option list.
def optparse_addargs(options)
# Add all of the settings as valid options.
self.each { |name, setting|
options << setting.optparse_args
}
options
end
# Is our setting a boolean setting?
def boolean?(param)
param = param.to_sym
@config.include?(param) and @config[param].kind_of?(BooleanSetting)
end
# Remove all set values, potentially skipping cli values.
def clear
unsafe_clear
end
# Remove all set values, potentially skipping cli values.
def unsafe_clear(clear_cli = true, clear_application_defaults = false)
if clear_application_defaults
@value_sets[:application_defaults] = Values.new(:application_defaults, @config)
@app_defaults_initialized = false
end
if clear_cli
@value_sets[:cli] = Values.new(:cli, @config)
# Only clear the 'used' values if we were explicitly asked to clear out
# :cli values; otherwise, it may be just a config file reparse,
# and we want to retain these cli values.
@used = []
end
@value_sets[:memory] = Values.new(:memory, @config)
@value_sets[:overridden_defaults] = Values.new(:overridden_defaults, @config)
+ @deprecated_settings_that_have_been_configured.clear
+ @values.clear
@cache.clear
end
private :unsafe_clear
# Clear @cache, @used and the Environment.
#
# Whenever an object is returned by Settings, a copy is stored in @cache.
# As long as Setting attributes that determine the content of returned
# objects remain unchanged, Settings can keep returning objects from @cache
# without re-fetching or re-generating them.
#
# Whenever a Settings attribute changes, such as @values or @preferred_run_mode,
# this method must be called to clear out the caches so that updated
# objects will be returned.
def flush_cache
unsafe_flush_cache
end
def unsafe_flush_cache
clearused
# Clear the list of environments, because they cache, at least, the module path.
# We *could* preferentially just clear them if the modulepath is changed,
# but we don't really know if, say, the vardir is changed and the modulepath
# is defined relative to it. We need the defined?(stuff) because of loading
# order issues.
Puppet::Node::Environment.clear if defined?(Puppet::Node) and defined?(Puppet::Node::Environment)
end
private :unsafe_flush_cache
def clearused
@cache.clear
@used = []
end
def global_defaults_initialized?()
@global_defaults_initialized
end
def initialize_global_settings(args = [])
raise Puppet::DevError, "Attempting to initialize global default settings more than once!" if global_defaults_initialized?
# The first two phases of the lifecycle of a puppet application are:
# 1) Parse the command line options and handle any of them that are
# registered, defined "global" puppet settings (mostly from defaults.rb).
# 2) Parse the puppet config file(s).
parse_global_options(args)
parse_config_files
@global_defaults_initialized = true
end
# This method is called during application bootstrapping. It is responsible for parsing all of the
# command line options and initializing the settings accordingly.
#
# It will ignore options that are not defined in the global puppet settings list, because they may
# be valid options for the specific application that we are about to launch... however, at this point
# in the bootstrapping lifecycle, we don't yet know what that application is.
def parse_global_options(args)
# Create an option parser
option_parser = PuppetOptionParser.new
option_parser.ignore_invalid_options = true
# Add all global options to it.
self.optparse_addargs([]).each do |option|
option_parser.on(*option) do |arg|
opt, val = Puppet::Settings.clean_opt(option[0], arg)
handlearg(opt, val)
end
end
option_parser.on('--run_mode',
"The effective 'run mode' of the application: master, agent, or user.",
:REQUIRED) do |arg|
Puppet.settings.preferred_run_mode = arg
end
option_parser.parse(args)
# remove run_mode options from the arguments so that later parses don't think
# it is an unknown option.
while option_index = args.index('--run_mode') do
args.delete_at option_index
args.delete_at option_index
end
args.reject! { |arg| arg.start_with? '--run_mode=' }
end
private :parse_global_options
# A utility method (public, is used by application.rb and perhaps elsewhere) that munges a command-line
# option string into the format that Puppet.settings expects. (This mostly has to deal with handling the
# "no-" prefix on flag/boolean options).
#
# @param [String] opt the command line option that we are munging
# @param [String, TrueClass, FalseClass] val the value for the setting (as determined by the OptionParser)
def self.clean_opt(opt, val)
# rewrite --[no-]option to --no-option if that's what was given
if opt =~ /\[no-\]/ and !val
opt = opt.gsub(/\[no-\]/,'no-')
end
# otherwise remove the [no-] prefix to not confuse everybody
opt = opt.gsub(/\[no-\]/, '')
[opt, val]
end
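A few illustrative calls with hypothetical values:

Puppet::Settings.clean_opt('--[no-]daemonize', false)   # => ['--no-daemonize', false]
Puppet::Settings.clean_opt('--[no-]daemonize', true)    # => ['--daemonize', true]
Puppet::Settings.clean_opt('--certname', 'agent1')      # => ['--certname', 'agent1']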
def app_defaults_initialized?
@app_defaults_initialized
end
def initialize_app_defaults(app_defaults)
REQUIRED_APP_SETTINGS.each do |key|
raise SettingsError, "missing required app default setting '#{key}'" unless app_defaults.has_key?(key)
end
app_defaults.each do |key, value|
if key == :run_mode
self.preferred_run_mode = value
else
@value_sets[:application_defaults].set(key, value)
unsafe_flush_cache
end
end
apply_metadata
call_hooks_deferred_to_application_initialization
issue_deprecations
@app_defaults_initialized = true
end
def call_hooks_deferred_to_application_initialization(options = {})
@hooks_to_call_on_application_initialization.each do |setting|
begin
setting.handle(self.value(setting.name))
rescue InterpolationError => err
raise InterpolationError, err, err.backtrace unless options[:ignore_interpolation_dependency_errors]
#swallow. We're not concerned if we can't call hooks because dependencies don't exist yet
#we'll get another chance after application defaults are initialized
end
end
end
private :call_hooks_deferred_to_application_initialization
# Return a value's description.
def description(name)
if obj = @config[name.to_sym]
obj.desc
else
nil
end
end
- def each
- @config.each { |name, object|
- yield name, object
- }
- end
+ def_delegator :@config, :each
# Iterate over each section name.
def eachsection
yielded = []
@config.each do |name, object|
section = object.section
unless yielded.include? section
yield section
yielded << section
end
end
end
# Return an object by name.
def setting(param)
param = param.to_sym
@config[param]
end
# Handle a command-line argument.
def handlearg(opt, value = nil)
@cache.clear
if value.is_a?(FalseClass)
value = "false"
elsif value.is_a?(TrueClass)
value = "true"
end
value &&= @translate[value]
str = opt.sub(/^--/,'')
bool = true
newstr = str.sub(/^no-/, '')
if newstr != str
str = newstr
bool = false
end
str = str.intern
if @config[str].is_a?(Puppet::Settings::BooleanSetting)
if value == "" or value.nil?
value = bool
end
end
if s = @config[str]
@deprecated_settings_that_have_been_configured << s if s.completely_deprecated?
end
@value_sets[:cli].set(str, value)
unsafe_flush_cache
end
def include?(name)
name = name.intern if name.is_a? String
@config.include?(name)
end
# check to see if a short name is already defined
def shortinclude?(short)
short = short.intern if name.is_a? String
@shortnames.include?(short)
end
# Prints the contents of a config file with the available config settings, or it
# prints a single value of a config setting.
def print_config_options
env = value(:environment)
val = value(:configprint)
if val == "all"
hash = {}
each do |name, obj|
val = value(name,env)
val = val.inspect if val == ""
hash[name] = val
end
hash.sort { |a,b| a[0].to_s <=> b[0].to_s }.each do |name, val|
puts "#{name} = #{val}"
end
else
val.split(/\s*,\s*/).sort.each do |v|
if include?(v)
#if there is only one value, just print it for back compatibility
if v == val
puts value(val,env)
break
end
puts "#{v} = #{value(v,env)}"
else
puts "invalid setting: #{v}"
return false
end
end
end
true
end
def generate_config
puts to_config
true
end
def generate_manifest
puts to_manifest
true
end
def print_configs
return print_config_options if value(:configprint) != ""
return generate_config if value(:genconfig)
generate_manifest if value(:genmanifest)
end
def print_configs?
(value(:configprint) != "" || value(:genconfig) || value(:genmanifest)) && true
end
# Return a given object's file metadata.
def metadata(param)
if obj = @config[param.to_sym] and obj.is_a?(FileSetting)
{
:owner => obj.owner,
:group => obj.group,
:mode => obj.mode
}.delete_if { |key, value| value.nil? }
else
nil
end
end
# Make a directory with the appropriate user, group, and mode
def mkdir(default)
obj = get_config_file_default(default)
Puppet::Util::SUIDManager.asuser(obj.owner, obj.group) do
mode = obj.mode || 0750
Dir.mkdir(obj.value, mode)
end
end
# The currently configured run mode that is preferred for constructing the application configuration.
def preferred_run_mode
@preferred_run_mode_name || :user
end
# PRIVATE! This only exists because we need a hook to validate the run mode when it's being set, and
# it should never, ever, ever, ever be called from outside of this file.
# This method is also called when --run_mode MODE is used on the command line to set the default run mode.
#
# @param mode [String|Symbol] the name of the mode to have in effect
# @api private
def preferred_run_mode=(mode)
mode = mode.to_s.downcase.intern
raise ValidationError, "Invalid run mode '#{mode}'" unless [:master, :agent, :user].include?(mode)
@preferred_run_mode_name = mode
# Changing the run mode has far-reaching consequences. Flush any cached
# settings so they will be re-generated.
flush_cache
mode
end
# Return all of the settings associated with a given section.
def params(section = nil)
if section
section = section.intern if section.is_a? String
@config.find_all { |name, obj|
obj.section == section
}.collect { |name, obj|
name
}
else
@config.keys
end
end
def parse_config(text, file = "text")
begin
data = @config_file_parser.parse_file(file, text)
rescue => detail
Puppet.log_exception(detail, "Could not parse #{file}: #{detail}")
return
end
# If we get here and don't have any data, we just return and don't muck with the current state of the world.
return if data.nil?
- record_deprecations_from_puppet_conf(data)
-
- # If we get here then we have some data, so we need to clear out any previous settings that may have come from
- # config files.
+ # If we get here then we have some data, so we need to clear out any
+ # previous settings that may have come from config files.
unsafe_clear(false, false)
+ record_deprecations_from_puppet_conf(data)
+
# And now we can repopulate with the values from our last parsing of the config files.
@configuration_file = data
# Determine our environment, if we have one.
if @config[:environment]
env = self.value(:environment).to_sym
else
env = "none"
end
# Call any hooks we should be calling.
@config.values.select(&:has_hook?).each do |setting|
value_sets_for(env, self.preferred_run_mode).each do |source|
if source.include?(setting.name)
# We still have to use value to retrieve the value, since
# we want the fully interpolated value, not $vardir/lib or whatever.
# This results in extra work, but so few of the settings
# will have associated hooks that it ends up being less work this
# way overall.
if setting.call_hook_on_initialize?
@hooks_to_call_on_application_initialization << setting
else
setting.handle(self.value(setting.name, env))
end
break
end
end
end
call_hooks_deferred_to_application_initialization :ignore_interpolation_dependency_errors => true
apply_metadata
end
# Parse the configuration file. Just provides thread safety.
def parse_config_files
file = which_configuration_file
if Puppet::FileSystem.exist?(file)
begin
text = read_file(file)
rescue => detail
Puppet.log_exception(detail, "Could not load #{file}: #{detail}")
return
end
else
return
end
parse_config(text, file)
end
private :parse_config_files
def main_config_file
if explicit_config_file?
return self[:config]
else
return File.join(Puppet::Util::RunMode[:master].conf_dir, config_file_name)
end
end
private :main_config_file
def user_config_file
return File.join(Puppet::Util::RunMode[:user].conf_dir, config_file_name)
end
private :user_config_file
# This method is here to get around some life-cycle issues. We need to be
# able to determine the config file name before the settings / defaults are
# fully loaded. However, we also need to respect any overrides of this value
# that the user may have specified on the command line.
#
# The easiest way to do this is to attempt to read the setting, and if we
# catch an error (meaning that it hasn't been set yet), we'll fall back to
# the default value.
def config_file_name
begin
return self[:config_file_name] if self[:config_file_name]
rescue SettingsError
# This just means that the setting wasn't explicitly set on the command line, so we will ignore it and
# fall through to the default name.
end
return self.class.default_config_file_name
end
private :config_file_name
def apply_metadata
# We have to do it in the reverse of the search path,
# because multiple sections could set the same value
# and I'm too lazy to only set the metadata once.
if @configuration_file
searchpath.reverse.each do |source|
source = preferred_run_mode if source == :run_mode
if section = @configuration_file.sections[source]
apply_metadata_from_section(section)
end
end
end
end
private :apply_metadata
def apply_metadata_from_section(section)
section.settings.each do |setting|
if setting.has_metadata? && type = @config[setting.name]
type.set_meta(setting.meta)
end
end
end
SETTING_TYPES = {
:string => StringSetting,
:file => FileSetting,
:directory => DirectorySetting,
:file_or_directory => FileOrDirectorySetting,
:path => PathSetting,
:boolean => BooleanSetting,
:terminus => TerminusSetting,
:duration => DurationSetting,
:ttl => TTLSetting,
:array => ArraySetting,
:enum => EnumSetting,
:priority => PrioritySetting,
:autosign => AutosignSetting,
}
# Create a new setting. The value is passed in because it's used to determine
# what kind of setting we're creating, but the value itself might be either
# a default or a value, so we can't actually assign it.
#
# See #define_settings for documentation on the legal values for the ":type" option.
def newsetting(hash)
klass = nil
hash[:section] = hash[:section].to_sym if hash[:section]
if type = hash[:type]
unless klass = SETTING_TYPES[type]
raise ArgumentError, "Invalid setting type '#{type}'"
end
hash.delete(:type)
else
# The only implicit typing we still do for settings is to fall back to "String" type if they didn't explicitly
# specify a type. Personally I'd like to get rid of this too, and make the "type" option mandatory... but
# there was a little resistance to taking things quite that far for now. --cprice 2012-03-19
klass = StringSetting
end
hash[:settings] = self
setting = klass.new(hash)
setting
end
# This has to be private, because it doesn't add the settings to @config
private :newsetting
# Iterate across all of the objects in a given section.
def persection(section)
section = section.to_sym
self.each { |name, obj|
if obj.section == section
yield obj
end
}
end
# Reparse our config file, if necessary.
def reparse_config_files
if files
if filename = any_files_changed?
Puppet.notice "Config file #{filename} changed; triggering re-parse of all config files."
parse_config_files
reuse
end
end
end
def files
return @files if @files
@files = []
[main_config_file, user_config_file].each do |path|
if Puppet::FileSystem.exist?(path)
@files << Puppet::Util::WatchedFile.new(path)
end
end
@files
end
private :files
# Checks to see if any of the config files have been modified
# @return the filename of the first file that is found to have changed, or
# nil if no files have changed
def any_files_changed?
files.each do |file|
return file.to_str if file.changed?
end
nil
end
private :any_files_changed?
def reuse
return unless defined?(@used)
new = @used
@used = []
self.use(*new)
end
# The order in which to search for values.
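# @example (illustrative, derived from the list below)
#   searchpath(:production)
#   # => [:memory, :cli, :production, :run_mode, :main, :application_defaults, :overridden_defaults]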
def searchpath(environment = nil)
[:memory, :cli, environment, :run_mode, :main, :application_defaults, :overridden_defaults].compact
end
# Get a list of objects per section
def sectionlist
sectionlist = []
sections = {}
self.each { |name, obj|
section = obj.section || "puppet"
sections[section] ||= []
sectionlist << section unless sectionlist.include?(section)
sections[section] << obj
}
return sectionlist, sections
end
def service_user_available?
return @service_user_available if defined?(@service_user_available)
if self[:user]
user = Puppet::Type.type(:user).new :name => self[:user], :audit => :ensure
@service_user_available = user.exists?
else
@service_user_available = false
end
end
def service_group_available?
return @service_group_available if defined?(@service_group_available)
if self[:group]
group = Puppet::Type.type(:group).new :name => self[:group], :audit => :ensure
@service_group_available = group.exists?
else
@service_group_available = false
end
end
# Allow later inspection to determine if the setting was set on the
# command line, or through some other code path. Used for the
# `dns_alt_names` option during cert generate. --daniel 2011-10-18
def set_by_cli?(param)
param = param.to_sym
!@value_sets[:cli].lookup(param).nil?
end
def set_value(param, value, type, options = {})
Puppet.deprecation_warning("Puppet.settings.set_value is deprecated. Use Puppet[]= instead.")
if @value_sets[type]
@value_sets[type].set(param, value)
unsafe_flush_cache
end
end
# Deprecated; use #define_settings instead
def setdefaults(section, defs)
Puppet.deprecation_warning("'setdefaults' is deprecated and will be removed; please call 'define_settings' instead")
define_settings(section, defs)
end
# Define a group of settings.
#
# @param [Symbol] section a symbol to use for grouping multiple settings together into a conceptual unit. This value
# (and the conceptual separation) is not used very often; the main place where it will have a potential impact
# is when code calls the Settings#use method. See the docs on that method for further details, but basically that method
# just attempts to do any preparation that may be necessary before code attempts to leverage the value of a particular
# setting. This has the most impact for file/directory settings, where #use will attempt to "ensure" those
# files / directories.
# @param [Hash[Hash]] defs the settings to be defined. This argument is a hash of hashes; each key should be a symbol,
# which is basically the name of the setting that you are defining. The value should be another hash that specifies
# the parameters for the particular setting. Legal values include:
- # [:default] => required; this is a string value that will be used as a default value for a setting if no other
- # value is specified (via cli, config file, etc.) This string may include "variables", demarcated with $ or ${},
- # which will be interpolated with values of other settings.
+ # [:default] => not required; this is the value for the setting if no other value is specified (via cli, config file, etc.)
+ # For string settings this may include "variables", demarcated with $ or ${} which will be interpolated with values of other settings.
+ # The default value may also be a Proc that will be called only once to evaluate the default when the setting's value is retrieved.
# [:desc] => required; a description of the setting, used in documentation / help generation
# [:type] => not required, but highly encouraged! This specifies the data type that the setting represents. If
# you do not specify it, it will default to "string". Legal values include:
# :string - A generic string setting
# :boolean - A boolean setting; values are expected to be "true" or "false"
# :file - A (single) file path; puppet may attempt to create this file depending on how the settings are used. This type
# also supports additional options such as "mode", "owner", "group"
# :directory - A (single) directory path; puppet may attempt to create this directory depending on how the settings are used. This type
# also supports additional options such as "mode", "owner", "group"
# :path - This is intended to be used for settings whose value can contain multiple directory paths, represented
# as strings separated by the system path separator (e.g. system path, module path, etc.).
# [:mode] => an (optional) octal value to be used as the permissions/mode for :file and :directory settings
# [:owner] => optional owner username/uid for :file and :directory settings
# [:group] => optional group name/gid for :file and :directory settings
#
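# @example A minimal sketch; the setting name :sample_dir and its values are hypothetical, not from this diff
#   Puppet.settings.define_settings(:main,
#     :sample_dir => {
#       :type    => :directory,
#       :default => "$vardir/sample",
#       :mode    => "0750",
#       :desc    => "An example directory setting.",
#     }
#   )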
def define_settings(section, defs)
section = section.to_sym
call = []
defs.each do |name, hash|
raise ArgumentError, "setting definition for '#{name}' is not a hash!" unless hash.is_a? Hash
name = name.to_sym
hash[:name] = name
hash[:section] = section
raise ArgumentError, "Setting #{name} is already defined" if @config.include?(name)
tryconfig = newsetting(hash)
if short = tryconfig.short
if other = @shortnames[short]
raise ArgumentError, "Setting #{other.name} is already using short name '#{short}'"
end
@shortnames[short] = tryconfig
end
@config[name] = tryconfig
# Collect the settings that need to have their hooks called immediately.
# We have to collect them so that we can be sure we're fully initialized before
# the hook is called.
if tryconfig.has_hook?
if tryconfig.call_hook_on_define?
call << tryconfig
elsif tryconfig.call_hook_on_initialize?
@hooks_to_call_on_application_initialization << tryconfig
end
end
@deprecated_setting_names << name if tryconfig.deprecated?
end
call.each do |setting|
setting.handle(self.value(setting.name))
end
end
# Convert the settings we manage into a catalog full of resources that model those settings.
def to_catalog(*sections)
sections = nil if sections.empty?
catalog = Puppet::Resource::Catalog.new("Settings", Puppet::Node::Environment::NONE)
@config.keys.find_all { |key| @config[key].is_a?(FileSetting) }.each do |key|
file = @config[key]
next unless (sections.nil? or sections.include?(file.section))
next unless resource = file.to_resource
next if catalog.resource(resource.ref)
Puppet.debug("Using settings: adding file resource '#{key}': '#{resource.inspect}'")
catalog.add_resource(resource)
end
add_user_resources(catalog, sections)
+ add_environment_resources(catalog, sections)
catalog
end
# Convert our list of config settings into a configuration file.
def to_config
str = %{The configuration file for #{Puppet.run_mode.name}. Note that this file
is likely to have unused settings in it; any setting that's
valid anywhere in Puppet can be in any config file, even if it's not used.
Every section can specify three special parameters: owner, group, and mode.
These parameters affect the required permissions of any files specified after
their specification. Puppet will sometimes use these parameters to check its
own configured state, so they can be used to make Puppet a bit more self-managing.
The file format supports octothorpe-commented lines, but not partial-line comments.
Generated on #{Time.now}.
}.gsub(/^/, "# ")
# Add a section heading that matches our name.
str += "[#{preferred_run_mode}]\n"
eachsection do |section|
persection(section) do |obj|
str += obj.to_config + "\n" unless obj.name == :genconfig
end
end
return str
end
# Convert to a parseable manifest
def to_manifest
catalog = to_catalog
catalog.resource_refs.collect do |ref|
catalog.resource(ref).to_manifest
end.join("\n\n")
end
# Create the necessary objects to use a section. This is idempotent;
# you can 'use' a section as many times as you want.
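# @example (the section names below match this diff's own usage in the CA code)
#   Puppet.settings.use(:main, :ssl, :ca)   # ensures the files/directories for those sections exist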
def use(*sections)
sections = sections.collect { |s| s.to_sym }
sections = sections.reject { |s| @used.include?(s) }
return if sections.empty?
begin
catalog = to_catalog(*sections).to_ral
rescue => detail
Puppet.log_and_raise(detail, "Could not create resources for managing Puppet's files and directories in sections #{sections.inspect}: #{detail}")
end
catalog.host_config = false
catalog.apply do |transaction|
if transaction.any_failed?
report = transaction.report
status_failures = report.resource_statuses.values.select { |r| r.failed? }
status_fail_msg = status_failures.
collect(&:events).
flatten.
select { |event| event.status == 'failure' }.
collect { |event| "#{event.resource}: #{event.message}" }.join("; ")
raise "Got #{status_failures.length} failure(s) while initializing: #{status_fail_msg}"
end
end
sections.each { |s| @used << s }
@used.uniq!
end
def valid?(param)
param = param.to_sym
@config.has_key?(param)
end
def uninterpolated_value(param, environment = nil)
Puppet.deprecation_warning("Puppet.settings.uninterpolated_value is deprecated. Use Puppet.settings.value instead")
param = param.to_sym
environment &&= environment.to_sym
values(environment, self.preferred_run_mode).lookup(param)
end
# Retrieve an object that can be used for looking up values of configuration
# settings.
#
# @param environment [Symbol] The name of the environment in which to lookup
# @param section [Symbol] The name of the configuration section in which to lookup
# @return [Puppet::Settings::ChainedValues] An object to perform lookups
# @api public
def values(environment, section)
- ChainedValues.new(
+ @values[environment][section] ||= ChainedValues.new(
section,
environment,
value_sets_for(environment, section),
@config)
end
# Find the correct value using our search path.
#
# @param param [String, Symbol] The value to look up
# @param environment [String, Symbol] The environment to check for the value
# @param bypass_interpolation [true, false] Whether to skip interpolation
#
# @return [Object] The looked up value
#
# @raise [InterpolationError]
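# @example Illustrative lookups; the setting names are assumptions, not from this diff
#   Puppet.settings.value(:certname)                        # fully interpolated value
#   Puppet.settings.value(:manifest, "production")          # environment-specific lookup
#   Puppet.settings.value(:manifest, "production", true)    # raw, uninterpolated value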
def value(param, environment = nil, bypass_interpolation = false)
param = param.to_sym
environment &&= environment.to_sym
setting = @config[param]
# Short circuit to nil for undefined settings.
return nil if setting.nil?
# Check the cache first. It needs to be a per-environment
# cache so that we don't spread values from one env
# to another.
if @cache[environment||"none"].has_key?(param)
return @cache[environment||"none"][param]
elsif bypass_interpolation
val = values(environment, self.preferred_run_mode).lookup(param)
else
val = values(environment, self.preferred_run_mode).interpolate(param)
end
@cache[environment||"none"][param] = val
val
end
##
# (#15337) All of the logic to determine the configuration file to use
# should be centralized into this method. The simplified approach is:
#
# 1. If there is an explicit configuration file, use that. (--confdir or
# --config)
# 2. If we're running as a root process, use the system puppet.conf
# (usually /etc/puppet/puppet.conf)
# 3. Otherwise, use the user puppet.conf (usually ~/.puppet/puppet.conf)
#
# @api private
# @todo this code duplicates {Puppet::Util::RunMode#which_dir} as described
# in {http://projects.puppetlabs.com/issues/16637 #16637}
def which_configuration_file
if explicit_config_file? or Puppet.features.root? then
return main_config_file
else
return user_config_file
end
end
# This method just turns a file into a new ConfigFile::Conf instance
# @param file [String] absolute path to the configuration file
# @return [Puppet::Settings::ConfigFile::Conf]
# @api private
def parse_file(file)
@config_file_parser.parse_file(file, read_file(file))
end
private
DEPRECATION_REFS = {
[:manifest, :modulepath, :config_version, :templatedir, :manifestdir] =>
"See http://links.puppetlabs.com/env-settings-deprecations"
}.freeze
# Record that we want to issue a deprecation warning later in the application
# initialization cycle when we have settings bootstrapped to the point where
# we can read the Puppet[:disable_warnings] setting.
#
# We are only recording warnings applicable to settings set in puppet.conf
# itself.
def record_deprecations_from_puppet_conf(puppet_conf)
conf_sections = puppet_conf.sections.inject([]) do |accum,entry|
accum << entry[1] if [:main, :master, :agent, :user].include?(entry[0])
accum
end
conf_sections.each do |section|
section.settings.each do |conf_setting|
if setting = self.setting(conf_setting.name)
@deprecated_settings_that_have_been_configured << setting if setting.deprecated?
end
end
end
end
def issue_deprecations
@deprecated_settings_that_have_been_configured.each do |setting|
issue_deprecation_warning(setting)
end
end
def issue_deprecation_warning(setting, msg = nil)
name = setting.name
ref = DEPRECATION_REFS.find { |params,reference| params.include?(name) }
ref = ref[1] if ref
case
when msg
msg << " #{ref}" if ref
Puppet.deprecation_warning(msg)
when setting.completely_deprecated?
Puppet.deprecation_warning("Setting #{name} is deprecated. #{ref}", "setting-#{name}")
when setting.allowed_on_commandline?
Puppet.deprecation_warning("Setting #{name} is deprecated in puppet.conf. #{ref}", "puppet-conf-setting-#{name}")
end
end
def get_config_file_default(default)
obj = nil
unless obj = @config[default]
raise ArgumentError, "Unknown default #{default}"
end
raise ArgumentError, "Default #{default} is not a file" unless obj.is_a? FileSetting
obj
end
+ def add_environment_resources(catalog, sections)
+ path = self[:environmentpath]
+ envdir = path.split(File::PATH_SEPARATOR).first if path
+ configured_environment = self[:environment]
+ if configured_environment == "production" && envdir && Puppet::FileSystem.exist?(envdir)
+ configured_environment_path = File.join(envdir, configured_environment)
+ catalog.add_resource(
+ Puppet::Resource.new(:file,
+ configured_environment_path,
+ :parameters => { :ensure => 'directory' })
+ )
+ end
+ end
+
def add_user_resources(catalog, sections)
return unless Puppet.features.root?
return if Puppet.features.microsoft_windows?
return unless self[:mkusers]
@config.each do |name, setting|
next unless setting.respond_to?(:owner)
next unless sections.nil? or sections.include?(setting.section)
if user = setting.owner and user != "root" and catalog.resource(:user, user).nil?
resource = Puppet::Resource.new(:user, user, :parameters => {:ensure => :present})
resource[:gid] = self[:group] if self[:group]
catalog.add_resource resource
end
if group = setting.group and ! %w{root wheel}.include?(group) and catalog.resource(:group, group).nil?
catalog.add_resource Puppet::Resource.new(:group, group, :parameters => {:ensure => :present})
end
end
end
# Yield each search source in turn.
def value_sets_for(environment, mode)
searchpath(environment).collect do |name|
case name
when :cli, :memory, :application_defaults, :overridden_defaults
@value_sets[name]
when :run_mode
if @configuration_file
section = @configuration_file.sections[mode]
if section
ValuesFromSection.new(mode, section)
end
end
else
values_from_section = nil
if @configuration_file
if section = @configuration_file.sections[name]
values_from_section = ValuesFromSection.new(name, section)
end
end
if values_from_section.nil? && global_defaults_initialized?
values_from_section = ValuesFromEnvironmentConf.new(name)
end
values_from_section
end
end.compact
end
# Read the file in.
# @api private
def read_file(file)
return Puppet::FileSystem.read(file)
end
# Private method for internal test use only; allows a comprehensive clear of all settings between tests.
#
# @return nil
def clear_everything_for_tests()
unsafe_clear(true, true)
+ @configuration_file = nil
@global_defaults_initialized = false
@app_defaults_initialized = false
end
private :clear_everything_for_tests
def explicit_config_file?
# Figure out if the user has provided an explicit configuration file. If
# so, return the path to the file, if not return nil.
#
# The easiest way to determine whether an explicit one has been specified
# is to simply attempt to evaluate the value of ":config". This will
# obviously be successful if they've passed an explicit value for :config,
# but it will also result in successful interpolation if they've only
# passed an explicit value for :confdir.
#
# If they've specified neither, then the interpolation will fail and we'll
# get an exception.
#
begin
return true if self[:config]
rescue InterpolationError
# This means we failed to interpolate, which means that they didn't
# explicitly specify either :config or :confdir... so we'll fall out to
# the default value.
return false
end
end
private :explicit_config_file?
# Lookup configuration setting value through a chain of different value sources.
#
# @api public
class ChainedValues
+ ENVIRONMENT_SETTING = "environment".freeze
+
# @see Puppet::Settings.values
# @api private
def initialize(mode, environment, value_sets, defaults)
@mode = mode
@environment = environment
@value_sets = value_sets
@defaults = defaults
end
# Lookup the uninterpolated value.
#
# @param name [Symbol] The configuration setting name to look up
# @return [Object] The configuration setting value or nil if the setting is not known
# @api public
def lookup(name)
set = @value_sets.find do |set|
set.include?(name)
end
if set
value = set.lookup(name)
if !value.nil?
return value
end
end
@defaults[name].default
end
# Lookup the interpolated value. All instances of `$name` in the value will
# be replaced by performing a lookup of `name` and substituting the text
# for `$name` in the original value. This interpolation is only performed
# if the looked up value is a String.
#
# @param name [Symbol] The configuration setting name to look up
# @return [Object] The configuration setting value or nil if the setting is not known
# @api public
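# @example Illustrative only; the raw value and the resulting path are assumptions
#   chained = Puppet.settings.values(:production, :main)
#   chained.lookup(:ssldir)       # => "$confdir/ssl"   (raw, uninterpolated value)
#   chained.interpolate(:ssldir)  # => "/etc/puppet/ssl" (after substituting $confdir)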
def interpolate(name)
setting = @defaults[name]
if setting
val = lookup(name)
# if we interpolate code, all hell breaks loose.
if name == :code
val
else
# Convert it if necessary
begin
val = convert(val)
rescue InterpolationError => err
# This happens because we don't have access to the param name when the
# exception is originally raised, but we want it in the message
raise InterpolationError, "Error converting value for param '#{name}': #{err}", err.backtrace
end
setting.munge(val)
end
else
nil
end
end
private
def convert(value)
- return nil if value.nil?
- return value unless value.is_a? String
- value.gsub(/\$(\w+)|\$\{(\w+)\}/) do |value|
- varname = $2 || $1
- if varname == "environment" && @environment
- @environment
- elsif varname == "run_mode"
- @mode
- elsif !(pval = interpolate(varname.to_sym)).nil?
- pval
- else
- raise InterpolationError, "Could not find value for #{value}"
+ case value
+ when nil
+ nil
+ when String
+ value.gsub(/\$(\w+)|\$\{(\w+)\}/) do |value|
+ varname = $2 || $1
+ if varname == ENVIRONMENT_SETTING && @environment
+ @environment
+ elsif varname == "run_mode"
+ @mode
+ elsif !(pval = interpolate(varname.to_sym)).nil?
+ pval
+ else
+ raise InterpolationError, "Could not find value for #{value}"
+ end
end
+ else
+ value
end
end
end
class Values
+ extend Forwardable
+
def initialize(name, defaults)
@name = name
@values = {}
@defaults = defaults
end
- def include?(name)
- @values.include?(name)
- end
+ def_delegator :@values, :include?
+ def_delegator :@values, :[], :lookup
def set(name, value)
- if !@defaults[name]
+ default = @defaults[name]
+
+ if !default
raise ArgumentError,
"Attempt to assign a value to unknown setting #{name.inspect}"
end
- if @defaults[name].has_hook?
- @defaults[name].handle(value)
+ if default.has_hook?
+ default.handle(value)
end
@values[name] = value
end
-
- def lookup(name)
- @values[name]
- end
end
class ValuesFromSection
def initialize(name, section)
@name = name
@section = section
end
def include?(name)
!@section.setting(name).nil?
end
def lookup(name)
setting = @section.setting(name)
if setting
setting.value
end
end
end
# @api private
class ValuesFromEnvironmentConf
def initialize(environment_name)
@environment_name = environment_name
end
def include?(name)
if Puppet::Settings::EnvironmentConf::VALID_SETTINGS.include?(name) && conf
return true
end
false
end
def lookup(name)
return nil unless Puppet::Settings::EnvironmentConf::VALID_SETTINGS.include?(name)
conf.send(name) if conf
end
def conf
- unless @conf
- if environments = Puppet.lookup(:environments)
- @conf = environments.get_conf(@environment_name)
- end
- end
- return @conf
+ @conf ||= if environments = Puppet.lookup(:environments)
+ environments.get_conf(@environment_name)
+ end
end
end
end
diff --git a/lib/puppet/settings/base_setting.rb b/lib/puppet/settings/base_setting.rb
index dcc7d9545..e7c8cf17f 100644
--- a/lib/puppet/settings/base_setting.rb
+++ b/lib/puppet/settings/base_setting.rb
@@ -1,189 +1,195 @@
require 'puppet/settings/errors'
# The base setting type
class Puppet::Settings::BaseSetting
attr_accessor :name, :desc, :section, :default, :call_on_define, :call_hook
attr_reader :short, :deprecated
def self.available_call_hook_values
[:on_define_and_write, :on_initialize_and_write, :on_write_only]
end
def call_on_define
Puppet.deprecation_warning "call_on_define has been deprecated. Please use call_hook_on_define?"
call_hook_on_define?
end
def call_on_define=(value)
if value
Puppet.deprecation_warning ":call_on_define has been changed to :call_hook => :on_define_and_write. Please change #{name}."
@call_hook = :on_define_and_write
else
Puppet.deprecation_warning ":call_on_define => :false has been changed to :call_hook => :on_write_only. Please change #{name}."
@call_hook = :on_write_only
end
end
def call_hook=(value)
if value.nil?
Puppet.warning "Setting :#{name} :call_hook is nil, defaulting to :on_write_only"
value ||= :on_write_only
end
raise ArgumentError, "Invalid option #{value} for call_hook" unless self.class.available_call_hook_values.include? value
@call_hook = value
end
def call_hook_on_define?
call_hook == :on_define_and_write
end
def call_hook_on_initialize?
call_hook == :on_initialize_and_write
end
#added as a proper method, only to generate a deprecation warning
#and return the value from @settings.set_by_cli?
def setbycli
Puppet.deprecation_warning "Puppet.settings.setting(#{name}).setbycli is deprecated. Use Puppet.settings.set_by_cli?(#{name}) instead."
@settings.set_by_cli?(name)
end
def setbycli=(value)
Puppet.deprecation_warning "Puppet.settings.setting(#{name}).setbycli= is deprecated. You should not manually set that values were specified on the command line."
@settings.set_value(name, @settings[name], :cli) if value
raise ArgumentError, "Cannot unset setbycli" unless value
end
# get the arguments in getopt format
def getopt_args
if short
[["--#{name}", "-#{short}", GetoptLong::REQUIRED_ARGUMENT]]
else
[["--#{name}", GetoptLong::REQUIRED_ARGUMENT]]
end
end
# get the arguments in OptionParser format
def optparse_args
if short
["--#{name}", "-#{short}", desc, :REQUIRED]
else
["--#{name}", desc, :REQUIRED]
end
end
def hook=(block)
@has_hook = true
meta_def :handle, &block
end
def has_hook?
@has_hook
end
# Create the new element. Pretty much just sets the name.
def initialize(args = {})
unless @settings = args.delete(:settings)
raise ArgumentError.new("You must refer to a settings object")
end
# explicitly set name prior to calling other param= methods to provide meaningful feedback during
# other warnings
@name = args[:name] if args.include? :name
#set the default value for call_hook
@call_hook = :on_write_only if args[:hook] and not args[:call_hook]
@has_hook = false
raise ArgumentError, "Cannot reference :call_hook for :#{@name} if no :hook is defined" if args[:call_hook] and not args[:hook]
args.each do |param, value|
method = param.to_s + "="
raise ArgumentError, "#{self.class} (setting '#{args[:name]}') does not accept #{param}" unless self.respond_to? method
self.send(method, value)
end
raise ArgumentError, "You must provide a description for the #{self.name} config option" unless self.desc
end
def iscreated
@iscreated = true
end
def iscreated?
@iscreated
end
# short name for the setting
def short=(value)
raise ArgumentError, "Short names can only be one character." if value.to_s.length != 1
@short = value.to_s
end
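# A default may also be given as a Proc, which is evaluated lazily (and only
# once) the first time the default is requested. Illustrative sketch only; the
# setting definition below is hypothetical:
#   :sample_host => { :default => lambda { Facter.value(:fqdn) }, :desc => "..." }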
def default(check_application_defaults_first = false)
+ if @default.is_a? Proc
+ @default = @default.call
+ end
return @default unless check_application_defaults_first
return @settings.value(name, :application_defaults, true) || @default
end
# Convert the object to a config statement.
def to_config
require 'puppet/util/docs'
# Scrub any funky indentation; comment out description.
str = Puppet::Util::Docs.scrub(@desc).gsub(/^/, "# ") + "\n"
# Add in a statement about the default.
str << "# The default value is '#{default(true)}'.\n" if default(true)
# If the value has not been overridden, then print it out commented
# and unconverted, so it's clear that that's the default and how it
# works.
value = @settings.value(self.name)
if value != @default
line = "#{@name} = #{value}"
else
line = "# #{@name} = #{@default}"
end
str << (line + "\n")
# Indent
str.gsub(/^/, " ")
end
- # Retrieves the value, or if it's not set, retrieves the default.
- def value
- @settings.value(self.name)
+ # @param bypass_interpolation [Boolean] Set this true to skip the
+ # interpolation step, returning the raw setting value. Defaults to false.
+ # @return [String] Retrieves the value, or if it's not set, retrieves the default.
+ # @api public
+ def value(bypass_interpolation = false)
+ @settings.value(self.name, nil, bypass_interpolation)
end
# Modify the value when it is first evaluated
def munge(value)
value
end
def set_meta(meta)
Puppet.notice("#{name} does not support meta data. Ignoring.")
end
def deprecated=(deprecation)
raise(ArgumentError, "'#{deprecation}' is an unknown setting deprecation state. Must be either :completely or :allowed_on_commandline") unless [:completely, :allowed_on_commandline].include?(deprecation)
@deprecated = deprecation
end
def deprecated?
!!@deprecated
end
# True if we should raise a deprecation_warning if the setting is submitted
# on the commandline or is set in puppet.conf.
def completely_deprecated?
@deprecated == :completely
end
# True if we should raise a deprecation_warning if the setting is found in
# puppet.conf, but not if the user sets it on the commandline
def allowed_on_commandline?
@deprecated == :allowed_on_commandline
end
end
diff --git a/lib/puppet/settings/environment_conf.rb b/lib/puppet/settings/environment_conf.rb
index 187f963d7..c6fe42c66 100644
--- a/lib/puppet/settings/environment_conf.rb
+++ b/lib/puppet/settings/environment_conf.rb
@@ -1,147 +1,175 @@
# Configuration settings for a single directory Environment.
# @api private
class Puppet::Settings::EnvironmentConf
VALID_SETTINGS = [:modulepath, :manifest, :config_version, :environment_timeout].freeze
# Given a path to a directory environment, attempts to load and parse an
# environment.conf in ini format, and return an EnvironmentConf instance.
#
# An environment.conf is optional, so if the file itself is missing, or
# empty, an EnvironmentConf with default values will be returned.
#
# @note logs warnings if the environment.conf contains any ini sections,
# or has settings other than those handled for directory environments
# (:manifest, :modulepath, :config_version, :environment_timeout)
#
# @param path_to_env [String] path to the directory environment
# @param global_module_path [Array<String>] the installation's base modulepath
# setting, appended to default environment modulepaths
# @return [EnvironmentConf] the parsed EnvironmentConf object
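# @example Illustrative only; the path is hypothetical and no global modulepath is passed
#   conf = Puppet::Settings::EnvironmentConf.load_from("/etc/puppet/environments/production", [])
#   conf.modulepath   # "<path_to_env>/modules" plus any global module path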
def self.load_from(path_to_env, global_module_path)
path_to_env = File.expand_path(path_to_env)
conf_file = File.join(path_to_env, 'environment.conf')
config = nil
begin
config = Puppet.settings.parse_file(conf_file)
validate(conf_file, config)
section = config.sections[:main]
rescue Errno::ENOENT
# environment.conf is an optional file
end
new(path_to_env, section, global_module_path)
end
# Provides a configuration object tied directly to the passed environment.
# Configuration values are exactly those returned by the environment object,
# without interpolation. This is a special case for the default configured
# environment returned by the Puppet::Environments::StaticPrivate loader.
def self.static_for(environment)
Static.new(environment)
end
attr_reader :section
# Create through EnvironmentConf.load_from()
def initialize(path_to_env, section, global_module_path)
@path_to_env = path_to_env
@section = section
@global_module_path = global_module_path
end
def manifest
- get_setting(:manifest, File.join(@path_to_env, "manifests")) do |manifest|
- absolute(manifest)
+ puppet_conf_manifest = Pathname.new(Puppet.settings.value(:default_manifest))
+ disable_per_environment_manifest = Puppet.settings.value(:disable_per_environment_manifest)
+
+ fallback_manifest_directory =
+ if puppet_conf_manifest.absolute?
+ puppet_conf_manifest.to_s
+ else
+ File.join(@path_to_env, puppet_conf_manifest.to_s)
+ end
+
+ if disable_per_environment_manifest
+ environment_conf_manifest = absolute(raw_setting(:manifest))
+ if environment_conf_manifest && fallback_manifest_directory != environment_conf_manifest
+ errmsg = ["The 'disable_per_environment_manifest' setting is true, but the",
+ "environment located at #{@path_to_env} has a manifest setting in its",
+ "environment.conf of '#{environment_conf_manifest}' which does not match",
+ "the default_manifest setting '#{puppet_conf_manifest}'. If this",
+ "environment is expecting to find modules in",
+ "'#{environment_conf_manifest}', they will not be available!"]
+ Puppet.err(errmsg.join(' '))
+ end
+ fallback_manifest_directory.to_s
+ else
+ get_setting(:manifest, fallback_manifest_directory) do |manifest|
+ absolute(manifest)
+ end
end
end
def environment_timeout
# get the env-specific config or use the default value
get_setting(:environment_timeout, Puppet.settings.value(:environment_timeout)) do |ttl|
# munges the string form statically without really needing the settings system, only
# its ability to munge "4s", "3m", "5d", and "unlimited" into seconds - if already munged into
# numeric form, the TTLSetting handles that.
Puppet::Settings::TTLSetting.munge(ttl, 'environment_timeout')
end
end
def modulepath
default_modulepath = [File.join(@path_to_env, "modules")] + @global_module_path
get_setting(:modulepath, default_modulepath) do |modulepath|
path = modulepath.kind_of?(String) ?
modulepath.split(File::PATH_SEPARATOR) :
modulepath
path.map { |p| absolute(p) }.join(File::PATH_SEPARATOR)
end
end
def config_version
get_setting(:config_version) do |config_version|
absolute(config_version)
end
end
+ def raw_setting(setting_name)
+ setting = section.setting(setting_name) if section
+ setting.value if setting
+ end
+
private
def self.validate(path_to_conf_file, config)
valid = true
section_keys = config.sections.keys
main = config.sections[:main]
if section_keys.size > 1
Puppet.warning("Invalid sections in environment.conf at '#{path_to_conf_file}'. Environment conf may not have sections. The following sections are being ignored: '#{(section_keys - [:main]).join(',')}'")
valid = false
end
extraneous_settings = main.settings.map(&:name) - VALID_SETTINGS
if !extraneous_settings.empty?
Puppet.warning("Invalid settings in environment.conf at '#{path_to_conf_file}'. The following unknown setting(s) are being ignored: #{extraneous_settings.join(', ')}")
valid = false
end
return valid
end
def get_setting(setting_name, default = nil)
- setting = section.setting(setting_name) if section
- value = setting.value if setting
+ value = raw_setting(setting_name)
value ||= default
yield value
end
def absolute(path)
return nil if path.nil?
if path =~ /^\$/
# Path begins with $something interpolatable
path
else
File.expand_path(path, @path_to_env)
end
end
# Models configuration for an environment that is not loaded from a directory.
#
# @api private
class Static
def initialize(environment)
@environment = environment
end
def manifest
@environment.manifest
end
def modulepath
@environment.modulepath.join(File::PATH_SEPARATOR)
end
def config_version
@environment.config_version
end
def environment_timeout
0
end
end
end
diff --git a/lib/puppet/settings/file_setting.rb b/lib/puppet/settings/file_setting.rb
index e20767374..800a9b7c4 100644
--- a/lib/puppet/settings/file_setting.rb
+++ b/lib/puppet/settings/file_setting.rb
@@ -1,226 +1,234 @@
# A file.
class Puppet::Settings::FileSetting < Puppet::Settings::StringSetting
class SettingError < StandardError; end
# An unspecified user or group
#
# @api private
class Unspecified
def value
nil
end
end
# A "root" user or group
#
# @api private
class Root
def value
"root"
end
end
# A "service" user or group that picks up values from settings when the
# referenced user or group is safe to use (it exists or will be created), and
# uses the given fallback value when not safe.
#
# @api private
class Service
# @param name [Symbol] the name of the setting to use as the service value
# @param fallback [String, nil] the value to use when the service value cannot be used
# @param settings [Puppet::Settings] the puppet settings object
# @param available_method [Symbol] the name of the method to call on
# settings to determine if the value in settings is available on the system
#
def initialize(name, fallback, settings, available_method)
@settings = settings
@available_method = available_method
@name = name
@fallback = fallback
end
def value
if safe_to_use_settings_value?
@settings[@name]
else
@fallback
end
end
private
def safe_to_use_settings_value?
@settings[:mkusers] or @settings.send(@available_method)
end
end
attr_accessor :mode, :create
def initialize(args)
@group = Unspecified.new
@owner = Unspecified.new
super(args)
end
# Should we create files, rather than just directories?
def create_files?
create
end
# @param value [String] the group to use on the created file (can only be "root" or "service")
# @api public
def group=(value)
@group = case value
when "root"
Root.new
when "service"
# Group falls back to `nil` because we cannot assume that a "root" group exists.
# Some systems have a root group, others have wheel, and others have something else.
Service.new(:group, nil, @settings, :service_group_available?)
else
unknown_value(':group', value)
end
end
# @param value [String] the owner to use on the created file (can only be "root" or "service")
# @api public
def owner=(value)
@owner = case value
when "root"
Root.new
when "service"
Service.new(:user, "root", @settings, :service_user_available?)
else
unknown_value(':owner', value)
end
end
# @return [String, nil] the name of the group to use for the file or nil if the group should not be managed
# @api public
def group
@group.value
end
# @return [String, nil] the name of the user to use for the file or nil if the user should not be managed
# @api public
def owner
@owner.value
end
def set_meta(meta)
self.owner = meta.owner if meta.owner
self.group = meta.group if meta.group
self.mode = meta.mode if meta.mode
end
def munge(value)
if value.is_a?(String) and value != ':memory:' # for sqlite3 in-memory tests
value = File.expand_path(value)
end
value
end
def type
:file
end
# Turn our setting thing into a Puppet::Resource instance.
def to_resource
return nil unless type = self.type
path = self.value
return nil unless path.is_a?(String)
# Make sure the paths are fully qualified.
path = File.expand_path(path)
return nil unless type == :directory or create_files? or Puppet::FileSystem.exist?(path)
return nil if path =~ /^\/dev/ or path =~ /^[A-Z]:\/dev/i
resource = Puppet::Resource.new(:file, path)
if Puppet[:manage_internal_file_permissions]
if self.mode
# This ends up mimicking the munge method of the mode
# parameter to make sure that we're always passing the string
# version of the octal number. If we were setting the
# 'should' value for mode rather than the 'is', then the munge
# method would be called for us automatically. Normally, one
# wouldn't need to call the munge method manually, since
# 'should' gets set by the provider and it should be able to
# provide the data in the appropriate format.
mode = self.mode
mode = mode.to_i(8) if mode.is_a?(String)
mode = mode.to_s(8)
resource[:mode] = mode
end
# REMIND fails on Windows because chown/chgrp functionality not supported yet
if Puppet.features.root? and !Puppet.features.microsoft_windows?
resource[:owner] = self.owner if self.owner
resource[:group] = self.group if self.group
end
end
resource[:ensure] = type
resource[:loglevel] = :debug
resource[:links] = :follow
resource[:backup] = false
resource.tag(self.section, self.name, "settings")
resource
end
# Make sure any provided variables look up to something.
def validate(value)
return true unless value.is_a? String
value.scan(/\$(\w+)/) { |name|
name = $1
unless @settings.include?(name)
raise ArgumentError,
"Settings parameter '#{name}' is undefined"
end
}
end
# @api private
def exclusive_open(option = 'r', &block)
controlled_access do |mode|
Puppet::FileSystem.exclusive_open(file(), mode, option, &block)
end
end
# @api private
def open(option = 'r', &block)
controlled_access do |mode|
Puppet::FileSystem.open(file, mode, option, &block)
end
end
private
def file
Puppet::FileSystem.pathname(value)
end
def unknown_value(parameter, value)
raise SettingError, "The #{parameter} parameter for the setting '#{name}' must be either 'root' or 'service', not '#{value}'"
end
def controlled_access(&block)
chown = nil
if Puppet.features.root?
chown = [owner, group]
else
chown = [nil, nil]
end
Puppet::Util::SUIDManager.asuser(*chown) do
# Update the umask to make non-executable files
Puppet::Util.withumask(File.umask ^ 0111) do
- yield mode ? mode.to_i : 0640
+ mode = case self.mode
+ when String
+ self.mode.to_i(8)
+ when NilClass
+ 0640
+ else
+ self.mode
+ end
+ yield mode
end
end
end
end
diff --git a/lib/puppet/settings/priority_setting.rb b/lib/puppet/settings/priority_setting.rb
index 707b8ab82..66443398f 100644
--- a/lib/puppet/settings/priority_setting.rb
+++ b/lib/puppet/settings/priority_setting.rb
@@ -1,42 +1,42 @@
require 'puppet/settings/base_setting'
# A setting that represents a scheduling priority, and evaluates to an
# OS-specific priority level.
class Puppet::Settings::PrioritySetting < Puppet::Settings::BaseSetting
PRIORITY_MAP =
if Puppet::Util::Platform.windows?
- require 'win32/process'
+ require 'puppet/util/windows/process'
{
- :high => Process::HIGH_PRIORITY_CLASS,
- :normal => Process::NORMAL_PRIORITY_CLASS,
- :low => Process::BELOW_NORMAL_PRIORITY_CLASS,
- :idle => Process::IDLE_PRIORITY_CLASS
+ :high => Puppet::Util::Windows::Process::HIGH_PRIORITY_CLASS,
+ :normal => Puppet::Util::Windows::Process::NORMAL_PRIORITY_CLASS,
+ :low => Puppet::Util::Windows::Process::BELOW_NORMAL_PRIORITY_CLASS,
+ :idle => Puppet::Util::Windows::Process::IDLE_PRIORITY_CLASS
}
else
{
:high => -10,
:normal => 0,
:low => 10,
:idle => 19
}
end
def type
:priority
end
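# Illustrative munging results on a POSIX platform, derived from PRIORITY_MAP above:
#   munge("high")  # => -10
#   munge("5")     # => 5
#   munge(3)       # => 3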
def munge(value)
return unless value
case
when value.is_a?(Integer)
value
when (value.is_a?(String) and value =~ /\d+/)
value.to_i
when (value.is_a?(String) and PRIORITY_MAP[value.to_sym])
PRIORITY_MAP[value.to_sym]
else
raise Puppet::Settings::ValidationError, "Invalid priority format '#{value.inspect}' for parameter: #{@name}"
end
end
end
diff --git a/lib/puppet/ssl.rb b/lib/puppet/ssl.rb
index 596feb933..f22c82d0e 100644
--- a/lib/puppet/ssl.rb
+++ b/lib/puppet/ssl.rb
@@ -1,12 +1,13 @@
# Just to make the constants work out.
require 'puppet'
require 'openssl'
module Puppet::SSL # :nodoc:
CA_NAME = "ca"
+ require 'puppet/ssl/configuration'
require 'puppet/ssl/host'
require 'puppet/ssl/oids'
require 'puppet/ssl/validator'
require 'puppet/ssl/validator/no_validator'
require 'puppet/ssl/validator/default_validator'
end
diff --git a/lib/puppet/ssl/certificate_authority.rb b/lib/puppet/ssl/certificate_authority.rb
index 7caf44978..3ffc58463 100644
--- a/lib/puppet/ssl/certificate_authority.rb
+++ b/lib/puppet/ssl/certificate_authority.rb
@@ -1,508 +1,517 @@
require 'puppet/ssl/host'
require 'puppet/ssl/certificate_request'
require 'puppet/ssl/certificate_signer'
require 'puppet/util'
# The class that knows how to sign certificates. It creates
# a 'special' SSL::Host whose name is 'ca', thus indicating
# that, well, it's the CA. There's some magic in the
# indirector/ssl_file terminus base class that does that
# for us.
# This class mostly just signs certs for us, but
# it can also be seen as a general interface into all of the
# SSL stuff.
class Puppet::SSL::CertificateAuthority
# We will only sign extensions on this whitelist, ever. Any CSR with a
# requested extension that we don't recognize is rejected, against the risk
# that it will introduce some security issue through our ignorance of it.
#
# Adding an extension to this whitelist simply means we will consider it
# further, not that we will always accept a certificate with an extension
# requested on this list.
RequestExtensionWhitelist = %w{subjectAltName}
require 'puppet/ssl/certificate_factory'
require 'puppet/ssl/inventory'
require 'puppet/ssl/certificate_revocation_list'
require 'puppet/ssl/certificate_authority/interface'
require 'puppet/ssl/certificate_authority/autosign_command'
require 'puppet/network/authstore'
class CertificateVerificationError < RuntimeError
attr_accessor :error_code
def initialize(code)
@error_code = code
end
end
def self.singleton_instance
@singleton_instance ||= new
end
class CertificateSigningError < RuntimeError
attr_accessor :host
def initialize(host)
@host = host
end
end
def self.ca?
# running as ca? - ensure boolean answer
!!(Puppet[:ca] && Puppet.run_mode.master?)
end
# If this process can function as a CA, then return a singleton instance.
def self.instance
ca? ? singleton_instance : nil
end
attr_reader :name, :host
# If autosign is configured, autosign the csr we are passed.
# @param csr [Puppet::SSL::CertificateRequest] The csr to sign.
# @return [Void]
# @api private
def autosign(csr)
if autosign?(csr)
Puppet.info "Autosigning #{csr.name}"
sign(csr.name)
end
end
# Determine if a CSR can be autosigned by the autosign store or autosign command
#
# @param csr [Puppet::SSL::CertificateRequest] The CSR to check
# @return [true, false]
# @api private
def autosign?(csr)
auto = Puppet[:autosign]
decider = case auto
when false
AutosignNever.new
when true
AutosignAlways.new
else
file = Puppet::FileSystem.pathname(auto)
if Puppet::FileSystem.executable?(file)
Puppet::SSL::CertificateAuthority::AutosignCommand.new(auto)
elsif Puppet::FileSystem.exist?(file)
AutosignConfig.new(file)
else
AutosignNever.new
end
end
decider.allowed?(csr)
end
# Retrieves (or creates, if necessary) the certificate revocation list.
def crl
unless defined?(@crl)
unless @crl = Puppet::SSL::CertificateRevocationList.indirection.find(Puppet::SSL::CA_NAME)
@crl = Puppet::SSL::CertificateRevocationList.new(Puppet::SSL::CA_NAME)
@crl.generate(host.certificate.content, host.key.content)
Puppet::SSL::CertificateRevocationList.indirection.save(@crl)
end
end
@crl
end
# Delegates this to our Host class.
def destroy(name)
Puppet::SSL::Host.destroy(name)
end
# Generates a new certificate.
# @return Puppet::SSL::Certificate
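# @example Illustrative only; the hostname and alt names are assumptions
#   ca = Puppet::SSL::CertificateAuthority.instance
#   ca.generate("agent1.example.com", :dns_alt_names => "agent1, agent1.example.com")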
def generate(name, options = {})
raise ArgumentError, "A Certificate already exists for #{name}" if Puppet::SSL::Certificate.indirection.find(name)
# Pass on any requested subjectAltName field.
san = options[:dns_alt_names]
host = Puppet::SSL::Host.new(name)
host.generate_certificate_request(:dns_alt_names => san)
# CSR may have been implicitly autosigned, generating a certificate
# Or sign explicitly
host.certificate || sign(name, !!san)
end
# Generate our CA certificate.
def generate_ca_certificate
generate_password unless password?
host.generate_key unless host.key
# Create a new cert request. We do this specially, because we don't want
# to actually save the request anywhere.
request = Puppet::SSL::CertificateRequest.new(host.name)
# We deliberately do not put any subjectAltName in here: the CA
# certificate absolutely does not need them. --daniel 2011-10-13
request.generate(host.key)
# Create a self-signed certificate.
@certificate = sign(host.name, false, request)
# And make sure we initialize our CRL.
crl
end
def initialize
Puppet.settings.use :main, :ssl, :ca
@name = Puppet[:certname]
@host = Puppet::SSL::Host.new(Puppet::SSL::Host.ca_name)
setup
end
# Retrieve (or create, if necessary) our inventory manager.
def inventory
@inventory ||= Puppet::SSL::Inventory.new
end
# Generate a new password for the CA.
def generate_password
pass = ""
20.times { pass += (rand(74) + 48).chr }
begin
Puppet.settings.setting(:capass).open('w') { |f| f.print pass }
rescue Errno::EACCES => detail
raise Puppet::Error, "Could not write CA password: #{detail}", detail.backtrace
end
@password = pass
pass
end
# Lists the names of all signed certificates.
#
# @param name [Array<String>] filter for certificate names
#
# @return [Array<String>]
def list(name='*')
list_certificates(name).collect { |c| c.name }
end
# Return all the certificate objects as found by the indirector
# API for PE license checking.
#
# Created to prevent the case of reading all certs from disk, getting
# just their names and verifying the cert for each name, which then
# causes the cert to again be read from disk.
#
# @author Jeff Weiss <jeff.weiss@puppetlabs.com>
# @api Puppet Enterprise Licensing
#
# @param name [Array<String>] filter for certificate names
#
# @return [Array<Puppet::SSL::Certificate>]
def list_certificates(name='*')
Puppet::SSL::Certificate.indirection.search(name)
end
# Read the next serial from the serial file, and increment the value
# stored in the file so this one is considered used.
def next_serial
serial = 1
Puppet.settings.setting(:serial).exclusive_open('a+') do |f|
f.rewind
serial = f.read.chomp.hex
if serial == 0
serial = 1
end
f.truncate(0)
f.rewind
# We store the next valid serial, not the one we just used.
f << "%04X" % (serial + 1)
end
serial
end
# Does the password file exist?
def password?
Puppet::FileSystem.exist?(Puppet[:capass])
end
# Print a given host's certificate as text.
def print(name)
(cert = Puppet::SSL::Certificate.indirection.find(name)) ? cert.to_text : nil
end
# Revoke a given certificate.
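# @example Illustrative only; the certname and serial are assumptions
#   ca.revoke("agent1.example.com")   # revoke by certificate name
#   ca.revoke("0x002f")               # revoke by hexadecimal serial number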
def revoke(name)
raise ArgumentError, "Cannot revoke certificates when the CRL is disabled" unless crl
- if cert = Puppet::SSL::Certificate.indirection.find(name)
- serial = cert.content.serial
- elsif name =~ /^0x[0-9A-Fa-f]+$/
- serial = name.hex
- elsif ! serial = inventory.serial(name)
+ cert = Puppet::SSL::Certificate.indirection.find(name)
+
+ serials = if cert
+ [cert.content.serial]
+ elsif name =~ /^0x[0-9A-Fa-f]+$/
+ [name.hex]
+ else
+ inventory.serials(name)
+ end
+
+ if serials.empty?
raise ArgumentError, "Could not find a serial number for #{name}"
end
- crl.revoke(serial, host.key.content)
+
+ serials.each do |s|
+ crl.revoke(s, host.key.content)
+ end
end
# This initializes our CA so it actually works. This should be a private
# method, except that you can't any-instance stub private methods, which is
# *awesome*. This method only really exists to provide a stub-point during
# testing.
def setup
generate_ca_certificate unless @host.certificate
end
# Sign a given certificate request.
def sign(hostname, allow_dns_alt_names = false, self_signing_csr = nil)
# This is a self-signed certificate
if self_signing_csr
# # This is a self-signed certificate, which is for the CA. Since this
# # forces the certificate to be self-signed, anyone who manages to trick
# # the system into going through this path gets a certificate they could
# # generate anyway. There should be no security risk from that.
csr = self_signing_csr
cert_type = :ca
issuer = csr.content
else
allow_dns_alt_names = true if hostname == Puppet[:certname].downcase
unless csr = Puppet::SSL::CertificateRequest.indirection.find(hostname)
raise ArgumentError, "Could not find certificate request for #{hostname}"
end
cert_type = :server
issuer = host.certificate.content
# Make sure that the CSR conforms to our internal signing policies.
# This will raise if the CSR doesn't conform, but just in case...
check_internal_signing_policies(hostname, csr, allow_dns_alt_names) or
raise CertificateSigningError.new(hostname), "CSR had an unknown failure checking internal signing policies, will not sign!"
end
cert = Puppet::SSL::Certificate.new(hostname)
cert.content = Puppet::SSL::CertificateFactory.
build(cert_type, csr, issuer, next_serial)
signer = Puppet::SSL::CertificateSigner.new
signer.sign(cert.content, host.key.content)
Puppet.notice "Signed certificate request for #{hostname}"
# Add the cert to the inventory before we save it, since
# otherwise we could end up with it being duplicated, if
# this is the first time we build the inventory file.
inventory.add(cert)
# Save the now-signed cert. This should get routed correctly depending
# on the certificate type.
Puppet::SSL::Certificate.indirection.save(cert)
# And remove the CSR if this wasn't self signed.
Puppet::SSL::CertificateRequest.indirection.destroy(csr.name) unless self_signing_csr
cert
end
def check_internal_signing_policies(hostname, csr, allow_dns_alt_names)
# Reject unknown request extensions.
unknown_req = csr.request_extensions.reject do |x|
RequestExtensionWhitelist.include? x["oid"] or
Puppet::SSL::Oids.subtree_of?('ppRegCertExt', x["oid"], true) or
Puppet::SSL::Oids.subtree_of?('ppPrivCertExt', x["oid"], true)
end
if unknown_req and not unknown_req.empty?
names = unknown_req.map {|x| x["oid"] }.sort.uniq.join(", ")
raise CertificateSigningError.new(hostname), "CSR has request extensions that are not permitted: #{names}"
end
# Do not sign misleading CSRs
cn = csr.content.subject.to_a.assoc("CN")[1]
if hostname != cn
raise CertificateSigningError.new(hostname), "CSR subject common name #{cn.inspect} does not match expected certname #{hostname.inspect}"
end
if hostname !~ Puppet::SSL::Base::VALID_CERTNAME
raise CertificateSigningError.new(hostname), "CSR #{hostname.inspect} subject contains unprintable or non-ASCII characters"
end
# Wildcards: we don't allow 'em at any point.
#
# The stringification here makes the content visible, and saves us having
# to scrobble through the content of the CSR subject field to make sure it
# is what we expect where we expect it.
if csr.content.subject.to_s.include? '*'
raise CertificateSigningError.new(hostname), "CSR subject contains a wildcard, which is not allowed: #{csr.content.subject.to_s}"
end
unless csr.content.verify(csr.content.public_key)
raise CertificateSigningError.new(hostname), "CSR contains a public key that does not correspond to the signing key"
end
unless csr.subject_alt_names.empty?
# If alt names are allowed, they are required. Otherwise they are
# disallowed. Self-signed certs are implicitly trusted, however.
unless allow_dns_alt_names
raise CertificateSigningError.new(hostname), "CSR '#{csr.name}' contains subject alternative names (#{csr.subject_alt_names.join(', ')}), which are disallowed. Use `puppet cert --allow-dns-alt-names sign #{csr.name}` to sign this request."
end
# If subjectAltNames are present, validate that they are only for DNS
# labels, not any other kind.
unless csr.subject_alt_names.all? {|x| x =~ /^DNS:/ }
raise CertificateSigningError.new(hostname), "CSR '#{csr.name}' contains a subjectAltName outside the DNS label space: #{csr.subject_alt_names.join(', ')}. To continue, this CSR needs to be cleaned."
end
# Check for wildcards in the subjectAltName fields too.
if csr.subject_alt_names.any? {|x| x.include? '*' }
raise CertificateSigningError.new(hostname), "CSR '#{csr.name}' subjectAltName contains a wildcard, which is not allowed: #{csr.subject_alt_names.join(', ')} To continue, this CSR needs to be cleaned."
end
end
return true # good enough for us!
end
# Utility method for optionally caching the X509 Store for verifying a
# large number of certificates in a short amount of time--exactly the
# case we have during PE license checking.
#
# @example Use the cached X509 store
# x509store(:cache => true)
#
# @example Use a freshly created X509 store
# x509store
# x509store(:cache => false)
#
# @param [Hash] options the options used for retrieving the X509 Store
# @option options [Boolean] :cache whether or not to use a cached version
# of the X509 Store
#
# @return [OpenSSL::X509::Store]
def x509_store(options = {})
if (options[:cache])
return @x509store unless @x509store.nil?
@x509store = create_x509_store
else
create_x509_store
end
end
private :x509_store
# Creates a brand new OpenSSL::X509::Store with the appropriate
# Certificate Revocation List and flags
#
# @return [OpenSSL::X509::Store]
def create_x509_store
store = OpenSSL::X509::Store.new()
store.add_file(Puppet[:cacert])
store.add_crl(crl.content) if self.crl
store.purpose = OpenSSL::X509::PURPOSE_SSL_CLIENT
if Puppet.settings[:certificate_revocation]
store.flags = OpenSSL::X509::V_FLAG_CRL_CHECK_ALL | OpenSSL::X509::V_FLAG_CRL_CHECK
end
store
end
private :create_x509_store
# Utility method which is API for PE license checking.
# This is used rather than `verify` because
# 1) We have already read the certificate from disk into memory.
# To read the certificate from disk again is just wasteful.
# 2) Because we're checking a large number of certificates against
# a transient CertificateAuthority, we can relatively safely cache
# the X509 Store that actually does the verification.
#
# Long running instances of CertificateAuthority will certainly
# want to use `verify` because it will recreate the X509 Store with
# the absolutely latest CRL.
#
# Additionally, this method explicitly returns a boolean whereas
# `verify` will raise an error if the certificate has been revoked.
#
# @author Jeff Weiss <jeff.weiss@puppetlabs.com>
# @api Puppet Enterprise Licensing
#
# @param cert [Puppet::SSL::Certificate] the certificate to check validity of
#
# @return [Boolean] true if signed, false if unsigned or revoked
def certificate_is_alive?(cert)
x509_store(:cache => true).verify(cert.content)
end
# Verify a given host's certificate. The certname is passed in, and
# the indirector will be used to locate the actual contents of the
# certificate with that name.
#
# @param name [String] certificate name to verify
#
# @raise [ArgumentError] if the certificate name cannot be found
# (i.e. doesn't exist or is unsigned)
# @raise [CertificateVerificationError] if the certificate has been revoked
#
# @return [Boolean] true if signed; there are no cases where false is returned
def verify(name)
unless cert = Puppet::SSL::Certificate.indirection.find(name)
raise ArgumentError, "Could not find a certificate for #{name}"
end
store = x509_store
raise CertificateVerificationError.new(store.error), store.error_string unless store.verify(cert.content)
end
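# Hedged usage sketch contrasting the two verification paths above; the
# certificate name is illustrative, not part of this changeset.
#
# @example Verifying a certificate (illustrative)
#   ca = Puppet::SSL::CertificateAuthority.new
#   ca.verify("agent01.example.com")   # raises CertificateVerificationError if revoked
#   cert = Puppet::SSL::Certificate.indirection.find("agent01.example.com")
#   ca.certificate_is_alive?(cert)     #=> true or false, never raises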
def fingerprint(name, md = :SHA256)
unless cert = Puppet::SSL::Certificate.indirection.find(name) || Puppet::SSL::CertificateRequest.indirection.find(name)
raise ArgumentError, "Could not find a certificate or csr for #{name}"
end
cert.fingerprint(md)
end
# List the waiting certificate requests.
def waiting?
Puppet::SSL::CertificateRequest.indirection.search("*").collect { |r| r.name }
end
# @api private
class AutosignAlways
def allowed?(csr)
true
end
end
# @api private
class AutosignNever
def allowed?(csr)
false
end
end
# @api private
class AutosignConfig
def initialize(config_file)
@config = config_file
end
def allowed?(csr)
autosign_store.allowed?(csr.name, '127.1.1.1')
end
private
def autosign_store
auth = Puppet::Network::AuthStore.new
Puppet::FileSystem.each_line(@config) do |line|
next if line =~ /^\s*#/
next if line =~ /^\s*$/
auth.allow(line.chomp)
end
auth
end
end
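# Hedged illustration of how AutosignConfig reads its file: each non-comment,
# non-blank line becomes an AuthStore allow() entry matched against the CSR
# name. The path and host names below are hypothetical.
#
# @example A whitelist-style config file and check (illustrative)
#   # autosign.conf contains:
#   #   *.dev.example.com
#   #   build01.example.com
#   policy = AutosignConfig.new('/etc/puppetlabs/autosign.conf')
#   policy.allowed?(csr)   #=> true when csr.name matches an entry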
end
diff --git a/lib/puppet/ssl/certificate_authority/autosign_command.rb b/lib/puppet/ssl/certificate_authority/autosign_command.rb
index d4533bab9..822b374ff 100644
--- a/lib/puppet/ssl/certificate_authority/autosign_command.rb
+++ b/lib/puppet/ssl/certificate_authority/autosign_command.rb
@@ -1,44 +1,45 @@
require 'puppet/ssl/certificate_authority'
+require 'puppet/file_system/uniquefile'
# This class wraps a given command and invokes it with a CSR name and body to
# determine if the given CSR should be autosigned
#
# @api private
class Puppet::SSL::CertificateAuthority::AutosignCommand
class CheckFailure < Puppet::Error; end
def initialize(path)
@path = path
end
# Run the autosign command with the given CSR name as an argument and the
# CSR body on stdin.
#
# @param csr [String] The CSR name to check for autosigning
# @return [true, false] If the CSR should be autosigned
def allowed?(csr)
name = csr.name
cmd = [@path, name]
- output = Puppet::FileSystem::Tempfile.open('puppet-csr') do |csr_file|
+ output = Puppet::FileSystem::Uniquefile.open_tmp('puppet-csr') do |csr_file|
csr_file.write(csr.to_s)
csr_file.flush
execute_options = {:stdinfile => csr_file.path, :combine => true, :failonfail => false}
Puppet::Util::Execution.execute(cmd, execute_options)
end
output.chomp!
Puppet.debug "Autosign command '#{@path}' exit status: #{output.exitstatus}"
Puppet.debug "Autosign command '#{@path}' output: #{output}"
case output.exitstatus
when 0
true
else
false
end
end
end
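# Hedged usage sketch; the command path is hypothetical. The policy
# executable receives the CSR name as its argument and the CSR body on
# stdin, and its exit status decides the outcome.
#
# @example Checking a CSR with a custom policy executable (illustrative)
#   policy = Puppet::SSL::CertificateAuthority::AutosignCommand.new('/usr/local/bin/check_csr')
#   policy.allowed?(csr)   #=> true only when the command exits 0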
diff --git a/lib/puppet/ssl/host.rb b/lib/puppet/ssl/host.rb
index 566ec18f5..442c648bf 100644
--- a/lib/puppet/ssl/host.rb
+++ b/lib/puppet/ssl/host.rb
@@ -1,371 +1,372 @@
require 'puppet/indirector'
require 'puppet/ssl'
require 'puppet/ssl/key'
require 'puppet/ssl/certificate'
require 'puppet/ssl/certificate_request'
require 'puppet/ssl/certificate_revocation_list'
require 'puppet/ssl/certificate_request_attributes'
# The class that manages all aspects of our SSL certificates --
# private keys, public keys, requests, etc.
class Puppet::SSL::Host
# Yay, ruby's strange constant lookups.
Key = Puppet::SSL::Key
CA_NAME = Puppet::SSL::CA_NAME
Certificate = Puppet::SSL::Certificate
CertificateRequest = Puppet::SSL::CertificateRequest
CertificateRevocationList = Puppet::SSL::CertificateRevocationList
extend Puppet::Indirector
indirects :certificate_status, :terminus_class => :file, :doc => <<DOC
This indirection represents the host that ties a key, certificate, and certificate request together.
The indirection key is the certificate CN (generally a hostname).
DOC
attr_reader :name
attr_accessor :ca
attr_writer :key, :certificate, :certificate_request
# This accessor is used in instances for indirector requests to hold desired state
attr_accessor :desired_state
def self.localhost
return @localhost if @localhost
@localhost = new
@localhost.generate unless @localhost.certificate
@localhost.key
@localhost
end
def self.reset
@localhost = nil
end
# This is the constant that people will use to mark that a given host is
# a certificate authority.
def self.ca_name
CA_NAME
end
class << self
attr_reader :ca_location
end
# Configure how our various classes interact with their various terminuses.
def self.configure_indirection(terminus, cache = nil)
Certificate.indirection.terminus_class = terminus
CertificateRequest.indirection.terminus_class = terminus
CertificateRevocationList.indirection.terminus_class = terminus
host_map = {:ca => :file, :disabled_ca => nil, :file => nil, :rest => :rest}
if term = host_map[terminus]
self.indirection.terminus_class = term
else
self.indirection.reset_terminus_class
end
if cache
# This is weird; we don't actually cache our keys, we
# use what would otherwise be the cache as our normal
# terminus.
Key.indirection.terminus_class = cache
else
Key.indirection.terminus_class = terminus
end
if cache
Certificate.indirection.cache_class = cache
CertificateRequest.indirection.cache_class = cache
CertificateRevocationList.indirection.cache_class = cache
else
# Make sure we have no cache configured. puppet master
# switches the configurations around a bit, so it's important
# that we specify the configs for absolutely everything, every
# time.
Certificate.indirection.cache_class = nil
CertificateRequest.indirection.cache_class = nil
CertificateRevocationList.indirection.cache_class = nil
end
end
CA_MODES = {
# Our ca is local, so we use it as the ultimate source of information
# And we cache files locally.
:local => [:ca, :file],
# We're a remote CA client.
:remote => [:rest, :file],
# We are the CA, so we don't have read/write access to the normal certificates.
:only => [:ca],
# We have no CA, so we just look in the local file store.
:none => [:disabled_ca]
}
# Specify how we expect to interact with our certificate authority.
def self.ca_location=(mode)
modes = CA_MODES.collect { |m, vals| m.to_s }.join(", ")
raise ArgumentError, "CA Mode can only be one of: #{modes}" unless CA_MODES.include?(mode)
@ca_location = mode
configure_indirection(*CA_MODES[@ca_location])
end
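# Hedged sketch of selecting a CA mode; the symbols come from the CA_MODES
# table above.
#
# @example Configuring how this process talks to the CA (illustrative)
#   Puppet::SSL::Host.ca_location = :remote   # agent using a remote CA over REST
#   Puppet::SSL::Host.ca_location = :only     # this process is the CA itself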
# Puppet::SSL::Host is actually indirected now so the original implementation
# has been moved into the certificate_status indirector. This method is in-use
# in `puppet cert -c <certname>`.
def self.destroy(name)
indirection.destroy(name)
end
def self.from_data_hash(data)
instance = new(data["name"])
if data["desired_state"]
instance.desired_state = data["desired_state"]
end
instance
end
def self.from_pson(pson)
Puppet.deprecation_warning("from_pson is being removed in favour of from_data_hash.")
self.from_data_hash(pson)
end
# Puppet::SSL::Host is actually indirected now so the original implementation
# has been moved into the certificate_status indirector. This method does not
# appear to be in use in `puppet cert -l`.
def self.search(options = {})
indirection.search("*", options)
end
# Is this a ca host, meaning that all of its files go in the CA location?
def ca?
ca
end
def key
@key ||= Key.indirection.find(name)
end
# This is the private key; we can create it from scratch
# with no inputs.
def generate_key
@key = Key.new(name)
@key.generate
begin
Key.indirection.save(@key)
rescue
@key = nil
raise
end
true
end
def certificate_request
@certificate_request ||= CertificateRequest.indirection.find(name)
end
# Our certificate request requires the key but that's all.
def generate_certificate_request(options = {})
generate_key unless key
# If this CSR is for the current machine...
if name == Puppet[:certname].downcase
# ...add our configured dns_alt_names
if Puppet[:dns_alt_names] and Puppet[:dns_alt_names] != ''
options[:dns_alt_names] ||= Puppet[:dns_alt_names]
elsif Puppet::SSL::CertificateAuthority.ca? and fqdn = Facter.value(:fqdn) and domain = Facter.value(:domain)
options[:dns_alt_names] = "puppet, #{fqdn}, puppet.#{domain}"
end
end
csr_attributes = Puppet::SSL::CertificateRequestAttributes.new(Puppet[:csr_attributes])
if csr_attributes.load
options[:csr_attributes] = csr_attributes.custom_attributes
options[:extension_requests] = csr_attributes.extension_requests
end
@certificate_request = CertificateRequest.new(name)
@certificate_request.generate(key.content, options)
begin
CertificateRequest.indirection.save(@certificate_request)
rescue
@certificate_request = nil
raise
end
true
end
def certificate
unless @certificate
generate_key unless key
# get the CA cert first, since it's required for the normal cert
# to be of any use.
return nil unless Certificate.indirection.find("ca", :fail_on_404 => true) unless ca?
return nil unless @certificate = Certificate.indirection.find(name)
validate_certificate_with_key
end
@certificate
end
def validate_certificate_with_key
raise Puppet::Error, "No certificate to validate." unless certificate
raise Puppet::Error, "No private key with which to validate certificate with fingerprint: #{certificate.fingerprint}" unless key
unless certificate.content.check_private_key(key.content)
raise Puppet::Error, <<ERROR_STRING
The certificate retrieved from the master does not match the agent's private key.
Certificate fingerprint: #{certificate.fingerprint}
To fix this, remove the certificate from both the master and the agent and then start a puppet run, which will automatically regenerate a certificate.
On the master:
puppet cert clean #{Puppet[:certname]}
On the agent:
- rm -f #{Puppet[:hostcert]}
- puppet agent -t
+ 1a. On most platforms: find #{Puppet[:ssldir]} -name #{Puppet[:certname]}.pem -delete
+ 1b. On Windows: del "#{Puppet[:ssldir]}/#{Puppet[:certname]}.pem" /f
+ 2. puppet agent -t
ERROR_STRING
end
end
# Generate all necessary parts of our ssl host.
def generate
generate_key unless key
generate_certificate_request unless certificate_request
# If we can get a CA instance, then we're a valid CA, and we
# should use it to sign our request; else, just try to read
# the cert.
if ! certificate and ca = Puppet::SSL::CertificateAuthority.instance
ca.sign(self.name, true)
end
end
def initialize(name = nil)
@name = (name || Puppet[:certname]).downcase
Puppet::SSL::Base.validate_certname(@name)
@key = @certificate = @certificate_request = nil
@ca = (name == self.class.ca_name)
end
# Extract the public key from the private key.
def public_key
key.content.public_key
end
# Create/return a store that uses our SSL info to validate
# connections.
def ssl_store(purpose = OpenSSL::X509::PURPOSE_ANY)
unless @ssl_store
@ssl_store = OpenSSL::X509::Store.new
@ssl_store.purpose = purpose
# Use the file path here, because we don't want to cause
# a lookup in the middle of setting our ssl connection.
@ssl_store.add_file(Puppet[:localcacert])
# If we're doing revocation and there's a CRL, add it to our store.
if Puppet.settings[:certificate_revocation]
if crl = Puppet::SSL::CertificateRevocationList.indirection.find(CA_NAME)
@ssl_store.flags = OpenSSL::X509::V_FLAG_CRL_CHECK_ALL|OpenSSL::X509::V_FLAG_CRL_CHECK
@ssl_store.add_crl(crl.content)
end
end
return @ssl_store
end
@ssl_store
end
def to_data_hash
my_cert = Puppet::SSL::Certificate.indirection.find(name)
result = { :name => name }
my_state = state
result[:state] = my_state
result[:desired_state] = desired_state if desired_state
thing_to_use = (my_state == 'requested') ? certificate_request : my_cert
# this is for backwards-compatibility
# we should deprecate it and transition people to using
# pson[:fingerprints][:default]
# It appears that we have no internal consumers of this api
# --jeffweiss 30 aug 2012
result[:fingerprint] = thing_to_use.fingerprint
# The above fingerprint doesn't tell us what message digest algorithm was used
# No problem, except that the default is changing between 2.7 and 3.0. Also, as
# we move to FIPS 140-2 compliance, MD5 is no longer allowed (and, gasp, will
# segfault in rubies older than 1.9.3)
# So, when we add the newer fingerprints, we're explicit about the hashing
# algorithm used.
# --jeffweiss 31 july 2012
result[:fingerprints] = {}
result[:fingerprints][:default] = thing_to_use.fingerprint
suitable_message_digest_algorithms.each do |md|
result[:fingerprints][md] = thing_to_use.fingerprint md
end
result[:dns_alt_names] = thing_to_use.subject_alt_names
result
end
# eventually we'll probably want to move this somewhere else or make it
# configurable
# --jeffweiss 29 aug 2012
def suitable_message_digest_algorithms
[:SHA1, :SHA256, :SHA512]
end
# Attempt to retrieve a cert, if we don't already have one.
def wait_for_cert(time)
begin
return if certificate
generate
return if certificate
rescue SystemExit,NoMemoryError
raise
rescue Exception => detail
Puppet.log_exception(detail, "Could not request certificate: #{detail.message}")
if time < 1
puts "Exiting; failed to retrieve certificate and waitforcert is disabled"
exit(1)
else
sleep(time)
end
retry
end
if time < 1
puts "Exiting; no certificate found and waitforcert is disabled"
exit(1)
end
while true
sleep time
begin
break if certificate
Puppet.notice "Did not receive certificate"
rescue StandardError => detail
Puppet.log_exception(detail, "Could not request certificate: #{detail.message}")
end
end
end
def state
if certificate_request
return 'requested'
end
begin
Puppet::SSL::CertificateAuthority.new.verify(name)
return 'signed'
rescue Puppet::SSL::CertificateAuthority::CertificateVerificationError
return 'revoked'
end
end
end
require 'puppet/ssl/certificate_authority'
diff --git a/lib/puppet/ssl/inventory.rb b/lib/puppet/ssl/inventory.rb
index e3ad3121f..83b53889a 100644
--- a/lib/puppet/ssl/inventory.rb
+++ b/lib/puppet/ssl/inventory.rb
@@ -1,50 +1,55 @@
require 'puppet/ssl'
require 'puppet/ssl/certificate'
# Keep track of all of our known certificates.
class Puppet::SSL::Inventory
attr_reader :path
# Add a certificate to our inventory.
def add(cert)
cert = cert.content if cert.is_a?(Puppet::SSL::Certificate)
Puppet.settings.setting(:cert_inventory).open("a") do |f|
f.print format(cert)
end
end
# Format our certificate for output.
def format(cert)
iso = '%Y-%m-%dT%H:%M:%S%Z'
"0x%04x %s %s %s\n" % [cert.serial, cert.not_before.strftime(iso), cert.not_after.strftime(iso), cert.subject]
end
def initialize
@path = Puppet[:cert_inventory]
end
# Rebuild the inventory from scratch. This should happen if
# the file is entirely missing or if it's somehow corrupted.
def rebuild
Puppet.notice "Rebuilding inventory file"
Puppet.settings.setting(:cert_inventory).open('w') do |f|
Puppet::SSL::Certificate.indirection.search("*").each do |cert|
f.print format(cert.content)
end
end
end
# Find the serial number for a given certificate.
def serial(name)
+ Puppet.deprecation_warning 'Inventory#serial is deprecated, use Inventory#serials instead.'
return nil unless Puppet::FileSystem.exist?(@path)
+ serials(name).first
+ end
- File.readlines(@path).each do |line|
- next unless line =~ /^(\S+).+\/CN=#{name}$/
-
- return Integer($1)
- end
+ # Find all serial numbers for a given certificate. If none can be found, returns
+ # an empty array.
+ def serials(name)
+ return [] unless Puppet::FileSystem.exist?(@path)
- return nil
+ File.readlines(@path).collect do |line|
+ /^(\S+).+\/CN=#{name}$/.match(line)
+ end.compact.map { |m| Integer(m[1]) }
end
+
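+ # Hedged illustration of the inventory format parsed above; the serial,
+ # dates, and host name are made up. Given an inventory line such as
+ # 0x0002 2015-01-01T00:00:00UTC 2020-01-01T00:00:00UTC /CN=agent01.example.com
+ # a call like Puppet::SSL::Inventory.new.serials("agent01.example.com")
+ # would return [2].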
end
diff --git a/lib/puppet/ssl/validator/default_validator.rb b/lib/puppet/ssl/validator/default_validator.rb
index e8e1d16e1..1f31499e2 100644
--- a/lib/puppet/ssl/validator/default_validator.rb
+++ b/lib/puppet/ssl/validator/default_validator.rb
@@ -1,153 +1,154 @@
require 'openssl'
+require 'puppet/ssl'
# Perform peer certificate verification against the known CA.
# If there is no CA information known, then no verification is performed
#
# @api private
#
class Puppet::SSL::Validator::DefaultValidator #< class Puppet::SSL::Validator
attr_reader :peer_certs
attr_reader :verify_errors
attr_reader :ssl_configuration
# Creates a new DefaultValidator, optionally with an SSL Configuration and SSL Host.
#
# @param ssl_configuration [Puppet::SSL::Configuration] (a default configuration) ssl_configuration the SSL configuration to use
# @param ssl_host [Puppet::SSL::Host] (Puppet::SSL::Host.localhost) the SSL host to use
#
# @api private
#
def initialize(
ssl_configuration = Puppet::SSL::Configuration.new(
Puppet[:localcacert], {
:ca_chain_file => Puppet[:ssl_client_ca_chain],
:ca_auth_file => Puppet[:ssl_client_ca_auth]
}),
ssl_host = Puppet::SSL::Host.localhost)
reset!
@ssl_configuration = ssl_configuration
@ssl_host = ssl_host
end
# Resets this validator to its initial validation state. The ssl configuration is not changed.
#
# @api private
#
def reset!
@peer_certs = []
@verify_errors = []
end
# Performs verification of the SSL connection and collection of the
# certificates for use in constructing the error message if the verification
# failed. This callback will be executed once for each certificate in a
# chain being verified.
#
# From the [OpenSSL
# documentation](http://www.openssl.org/docs/ssl/SSL_CTX_set_verify.html):
# The `verify_callback` function is used to control the behaviour when the
# SSL_VERIFY_PEER flag is set. It must be supplied by the application and
# receives two arguments: preverify_ok indicates, whether the verification of
# the certificate in question was passed (preverify_ok=1) or not
# (preverify_ok=0). x509_ctx is a pointer to the complete context used for
# the certificate chain verification.
#
# See {Puppet::Network::HTTP::Connection} for more information and where this
# class is intended to be used.
#
# @param [Boolean] preverify_ok indicates whether the verification of the
# certificate in question was passed (preverify_ok=true)
# @param [OpenSSL::SSL::SSLContext] ssl_context holds the SSLContext for the
# chain being verified.
#
# @return [Boolean] false if the peer is invalid, true otherwise.
#
# @api private
#
def call(preverify_ok, ssl_context)
# We must make a copy since the scope of the ssl_context will be lost
# across invocations of this method.
current_cert = ssl_context.current_cert
@peer_certs << Puppet::SSL::Certificate.from_instance(current_cert)
if preverify_ok
# If we've copied all of the certs in the chain out of the SSL library
if @peer_certs.length == ssl_context.chain.length
# (#20027) The peer cert must be issued by a specific authority
preverify_ok = valid_peer?
end
else
if ssl_context.error_string
@verify_errors << "#{ssl_context.error_string} for #{current_cert.subject}"
end
end
preverify_ok
rescue => ex
@verify_errors << ex.message
false
end
# Registers the instance's call method with the connection.
#
# @param [Net::HTTP] connection The connection to validate
#
# @return [void]
#
# @api private
#
def setup_connection(connection)
if ssl_certificates_are_present?
connection.cert_store = @ssl_host.ssl_store
connection.ca_file = @ssl_configuration.ca_auth_file
connection.cert = @ssl_host.certificate.content
connection.key = @ssl_host.key.content
connection.verify_mode = OpenSSL::SSL::VERIFY_PEER
connection.verify_callback = self
else
connection.verify_mode = OpenSSL::SSL::VERIFY_NONE
end
end
# Validates the peer certificates against the authorized certificates.
#
# @api private
#
def valid_peer?
descending_cert_chain = @peer_certs.reverse.map {|c| c.content }
authz_ca_certs = ssl_configuration.ca_auth_certificates
if not has_authz_peer_cert(descending_cert_chain, authz_ca_certs)
msg = "The server presented a SSL certificate chain which does not include a " <<
"CA listed in the ssl_client_ca_auth file. "
msg << "Authorized Issuers: #{authz_ca_certs.collect {|c| c.subject}.join(', ')} " <<
"Peer Chain: #{descending_cert_chain.collect {|c| c.subject}.join(' => ')}"
@verify_errors << msg
false
else
true
end
end
# Checks if the set of peer_certs contains at least one certificate issued
# by a certificate listed in authz_certs
#
# @return [Boolean]
#
# @api private
#
def has_authz_peer_cert(peer_certs, authz_certs)
peer_certs.any? do |peer_cert|
authz_certs.any? do |authz_cert|
peer_cert.verify(authz_cert.public_key)
end
end
end
# @api private
#
def ssl_certificates_are_present?
Puppet::FileSystem.exist?(Puppet[:hostcert]) && Puppet::FileSystem.exist?(@ssl_configuration.ca_auth_file)
end
end
diff --git a/lib/puppet/ssl/validator/no_validator.rb b/lib/puppet/ssl/validator/no_validator.rb
index 1141b6952..b019369cc 100644
--- a/lib/puppet/ssl/validator/no_validator.rb
+++ b/lib/puppet/ssl/validator/no_validator.rb
@@ -1,17 +1,20 @@
+require 'openssl'
+require 'puppet/ssl'
+
# Performs no SSL verification
# @api private
#
class Puppet::SSL::Validator::NoValidator < Puppet::SSL::Validator
def setup_connection(connection)
connection.verify_mode = OpenSSL::SSL::VERIFY_NONE
end
def peer_certs
[]
end
def verify_errors
[]
end
end
diff --git a/lib/puppet/transaction.rb b/lib/puppet/transaction.rb
index 983dbd307..53118755e 100644
--- a/lib/puppet/transaction.rb
+++ b/lib/puppet/transaction.rb
@@ -1,340 +1,369 @@
require 'puppet'
require 'puppet/util/tagging'
require 'puppet/application'
require 'digest/sha1'
require 'set'
# the class that actually walks our resource/property tree, collects the changes,
# and performs them
#
# @api private
class Puppet::Transaction
require 'puppet/transaction/additional_resource_generator'
require 'puppet/transaction/event'
require 'puppet/transaction/event_manager'
require 'puppet/transaction/resource_harness'
require 'puppet/resource/status'
attr_accessor :catalog, :ignoreschedules, :for_network_device
# The report, once generated.
attr_reader :report
# Routes and stores any events and subscriptions.
attr_reader :event_manager
# Handles most of the actual interacting with resources
attr_reader :resource_harness
attr_reader :prefetched_providers
include Puppet::Util
include Puppet::Util::Tagging
def initialize(catalog, report, prioritizer)
@catalog = catalog
@report = report || Puppet::Transaction::Report.new("apply", catalog.version, catalog.environment)
@prioritizer = prioritizer
@report.add_times(:config_retrieval, @catalog.retrieval_duration || 0)
@event_manager = Puppet::Transaction::EventManager.new(self)
@resource_harness = Puppet::Transaction::ResourceHarness.new(self)
@prefetched_providers = Hash.new { |h,k| h[k] = {} }
end
+ # Invoke the pre_run_check hook in every resource in the catalog.
+ # This should (only) be called by Transaction#evaluate before applying
+ # the catalog.
+ #
+ # @see Puppet::Transaction#evaluate
+ # @see Puppet::Type#pre_run_check
+ # @raise [Puppet::Error] If any pre-run checks failed.
+ # @return [void]
+ def perform_pre_run_checks
+ prerun_errors = {}
+
+ @catalog.vertices.each do |res|
+ begin
+ res.pre_run_check
+ rescue Puppet::Error => detail
+ prerun_errors[res] = detail
+ end
+ end
+
+ unless prerun_errors.empty?
+ prerun_errors.each do |res, detail|
+ res.log_exception(detail)
+ end
+ raise Puppet::Error, "Some pre-run checks failed"
+ end
+ end
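+ # Hedged sketch of the hook this method drives; the type and check below are
+ # hypothetical. A resource type can veto the whole run by raising
+ # Puppet::Error from its pre_run_check:
+ #
+ # @example A resource type failing its pre-run check (illustrative)
+ #   Puppet::Type.newtype(:demo) do
+ #     newparam(:name, :namevar => true)
+ #     def pre_run_check
+ #       raise Puppet::Error, "demo resources are not allowed on this host" if self[:name] == 'forbidden'
+ #     end
+ #   end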
+
# This method does all the actual work of running a transaction. It
# collects all of the changes, executes them, and responds to any
# necessary events.
def evaluate(&block)
block ||= method(:eval_resource)
generator = AdditionalResourceGenerator.new(@catalog, relationship_graph, @prioritizer)
@catalog.vertices.each { |resource| generator.generate_additional_resources(resource) }
+ perform_pre_run_checks
+
Puppet.info "Applying configuration version '#{catalog.version}'" if catalog.version
continue_while = lambda { !stop_processing? }
post_evalable_providers = Set.new
pre_process = lambda do |resource|
prov_class = resource.provider.class
post_evalable_providers << prov_class if prov_class.respond_to?(:post_resource_eval)
prefetch_if_necessary(resource)
# If we generated resources, we don't know what they are now
# blocking, so we opt to recompute it, rather than try to track every
# change that would affect the number.
relationship_graph.clear_blockers if generator.eval_generate(resource)
end
providerless_types = []
overly_deferred_resource_handler = lambda do |resource|
# We don't automatically assign unsuitable providers, so if there
# is one, it must have been selected by the user.
if resource.provider
resource.err "Provider #{resource.provider.class.name} is not functional on this host"
else
providerless_types << resource.type
end
resource_status(resource).failed = true
end
canceled_resource_handler = lambda do |resource|
resource_status(resource).skipped = true
resource.debug "Transaction canceled, skipping"
end
teardown = lambda do
# Just once per type. No need to punish the user.
providerless_types.uniq.each do |type|
Puppet.err "Could not find a suitable provider for #{type}"
end
post_evalable_providers.each do |provider|
begin
provider.post_resource_eval
rescue => detail
Puppet.log_exception(detail, "post_resource_eval failed for provider #{provider}")
end
end
end
relationship_graph.traverse(:while => continue_while,
:pre_process => pre_process,
:overly_deferred_resource_handler => overly_deferred_resource_handler,
:canceled_resource_handler => canceled_resource_handler,
:teardown => teardown) do |resource|
if resource.is_a?(Puppet::Type::Component)
Puppet.warning "Somehow left a component in the relationship graph"
else
resource.info "Starting to evaluate the resource" if Puppet[:evaltrace] and @catalog.host_config?
seconds = thinmark { block.call(resource) }
resource.info "Evaluated in %0.2f seconds" % seconds if Puppet[:evaltrace] and @catalog.host_config?
end
end
Puppet.debug "Finishing transaction #{object_id}"
end
# Wraps application run state check to flag need to interrupt processing
def stop_processing?
Puppet::Application.stop_requested? && catalog.host_config?
end
# Are there any failed resources in this transaction?
def any_failed?
report.resource_statuses.values.detect { |status| status.failed? }
end
# Find all of the changed resources.
def changed?
report.resource_statuses.values.find_all { |status| status.changed }.collect { |status| catalog.resource(status.resource) }
end
def relationship_graph
catalog.relationship_graph(@prioritizer)
end
def resource_status(resource)
report.resource_statuses[resource.to_s] || add_resource_status(Puppet::Resource::Status.new(resource))
end
# The tags we should be checking.
def tags
self.tags = Puppet[:tags] unless defined?(@tags)
super
end
def prefetch_if_necessary(resource)
provider_class = resource.provider.class
return unless provider_class.respond_to?(:prefetch) and !prefetched_providers[resource.type][provider_class.name]
resources = resources_by_provider(resource.type, provider_class.name)
if provider_class == resource.class.defaultprovider
providerless_resources = resources_by_provider(resource.type, nil)
providerless_resources.values.each {|res| res.provider = provider_class.name}
resources.merge! providerless_resources
end
prefetch(provider_class, resources)
end
private
# Apply all changes for a resource
def apply(resource, ancestor = nil)
status = resource_harness.evaluate(resource)
add_resource_status(status)
event_manager.queue_events(ancestor || resource, status.events) unless status.failed?
rescue => detail
resource.err "Could not evaluate: #{detail}"
end
# Evaluate a single resource.
def eval_resource(resource, ancestor = nil)
if skip?(resource)
resource_status(resource).skipped = true
else
resource_status(resource).scheduled = true
apply(resource, ancestor)
end
# Check to see if there are any events queued for this resource
event_manager.process_events(resource)
end
def failed?(resource)
s = resource_status(resource) and s.failed?
end
# Does this resource have any failed dependencies?
def failed_dependencies?(resource)
# First make sure there are no failed dependencies. To do this,
# we check for failures in any of the vertexes above us. It's not
# enough to check the immediate dependencies, which is why we use
# a tree from the reversed graph.
found_failed = false
# When we introduced the :whit into the graph, to reduce the combinatorial
# explosion of edges, we also ended up reporting failures for containers
# like class and stage. This is undesirable; while just skipping the
# output isn't perfect, it is RC-safe. --daniel 2011-06-07
suppress_report = (resource.class == Puppet::Type.type(:whit))
relationship_graph.dependencies(resource).each do |dep|
next unless failed?(dep)
found_failed = true
# See above. --daniel 2011-06-06
unless suppress_report then
resource.notice "Dependency #{dep} has failures: #{resource_status(dep).failed}"
end
end
found_failed
end
# A general method for recursively generating new resources from a
# resource.
def generate_additional_resources(resource)
return unless resource.respond_to?(:generate)
begin
made = resource.generate
rescue => detail
resource.log_exception(detail, "Failed to generate additional resources using 'generate': #{detail}")
end
return unless made
made = [made] unless made.is_a?(Array)
made.uniq.each do |res|
begin
res.tag(*resource.tags)
@catalog.add_resource(res)
res.finish
add_conditional_directed_dependency(resource, res)
generate_additional_resources(res)
rescue Puppet::Resource::Catalog::DuplicateResourceError
res.info "Duplicate generated resource; skipping"
end
end
end
# Should we ignore tags?
def ignore_tags?
! @catalog.host_config?
end
def resources_by_provider(type_name, provider_name)
unless @resources_by_provider
@resources_by_provider = Hash.new { |h, k| h[k] = Hash.new { |h, k| h[k] = {} } }
@catalog.vertices.each do |resource|
if resource.class.attrclass(:provider)
prov = resource.provider && resource.provider.class.name
@resources_by_provider[resource.type][prov][resource.name] = resource
end
end
end
@resources_by_provider[type_name][provider_name] || {}
end
# Prefetch any providers that support it, yo. We don't support prefetching
# types, just providers.
def prefetch(provider_class, resources)
type_name = provider_class.resource_type.name
return if @prefetched_providers[type_name][provider_class.name]
Puppet.debug "Prefetching #{provider_class.name} resources for #{type_name}"
begin
provider_class.prefetch(resources)
rescue => detail
Puppet.log_exception(detail, "Could not prefetch #{type_name} provider '#{provider_class.name}': #{detail}")
end
@prefetched_providers[type_name][provider_class.name] = true
end
def add_resource_status(status)
report.add_resource_status(status)
end
# Is the resource currently scheduled?
def scheduled?(resource)
self.ignoreschedules or resource_harness.scheduled?(resource)
end
# Should this resource be skipped?
def skip?(resource)
if missing_tags?(resource)
resource.debug "Not tagged with #{tags.join(", ")}"
elsif ! scheduled?(resource)
resource.debug "Not scheduled"
elsif failed_dependencies?(resource)
# When we introduced the :whit into the graph, to reduce the combinatorial
# explosion of edges, we also ended up reporting failures for containers
# like class and stage. This is undesirable; while just skipping the
# output isn't perfect, it is RC-safe. --daniel 2011-06-07
unless resource.class == Puppet::Type.type(:whit) then
resource.warning "Skipping because of failed dependencies"
end
elsif resource.virtual?
resource.debug "Skipping because virtual"
elsif !host_and_device_resource?(resource) && resource.appliable_to_host? && for_network_device
resource.debug "Skipping host resources because running on a device"
elsif !host_and_device_resource?(resource) && resource.appliable_to_device? && !for_network_device
resource.debug "Skipping device resources because running on a posix host"
else
return false
end
true
end
def host_and_device_resource?(resource)
resource.appliable_to_host? && resource.appliable_to_device?
end
def handle_qualified_tags( qualified )
# The default behavior of Puppet::Util::Tagging is
# to split qualified tags into parts. That would cause
# qualified tags to match too broadly here.
return
end
# Is this resource tagged appropriately?
def missing_tags?(resource)
return false if ignore_tags?
return false if tags.empty?
not resource.tagged?(*tags)
end
end
require 'puppet/transaction/report'
diff --git a/lib/puppet/transaction/resource_harness.rb b/lib/puppet/transaction/resource_harness.rb
index 3f02c1436..f57677c26 100644
--- a/lib/puppet/transaction/resource_harness.rb
+++ b/lib/puppet/transaction/resource_harness.rb
@@ -1,238 +1,251 @@
require 'puppet/resource/status'
class Puppet::Transaction::ResourceHarness
NO_ACTION = Object.new
extend Forwardable
def_delegators :@transaction, :relationship_graph
attr_reader :transaction
def initialize(transaction)
@transaction = transaction
end
def evaluate(resource)
status = Puppet::Resource::Status.new(resource)
begin
context = ResourceApplicationContext.from_resource(resource, status)
perform_changes(resource, context)
if status.changed? && ! resource.noop?
cache(resource, :synced, Time.now)
resource.flush if resource.respond_to?(:flush)
end
rescue => detail
status.failed_because(detail)
ensure
status.evaluation_time = Time.now - status.time
end
status
end
def scheduled?(resource)
return true if Puppet[:ignoreschedules]
return true unless schedule = schedule(resource)
# We use 'checked' here instead of 'synced' because otherwise we'll
# end up checking most resources most times, because they will generally
# have been synced a long time ago (e.g., a file only gets updated
# once a month on the server and its schedule is daily; the last sync time
# will have been a month ago, so we'd end up checking every run).
schedule.match?(cached(resource, :checked).to_i)
end
def schedule(resource)
unless resource.catalog
resource.warning "Cannot schedule without a schedule-containing catalog"
return nil
end
return nil unless name = resource[:schedule]
resource.catalog.resource(:schedule, name) || resource.fail("Could not find schedule #{name}")
end
# Used mostly for scheduling and auditing at this point.
def cached(resource, name)
Puppet::Util::Storage.cache(resource)[name]
end
# Used mostly for scheduling and auditing at this point.
def cache(resource, name, value)
Puppet::Util::Storage.cache(resource)[name] = value
end
private
def perform_changes(resource, context)
cache(resource, :checked, Time.now)
return [] if ! allow_changes?(resource)
# Record the current state in state.yml.
context.audited_params.each do |param|
cache(resource, param, context.current_values[param])
end
ensure_param = resource.parameter(:ensure)
if ensure_param && ensure_param.should
ensure_event = sync_if_needed(ensure_param, context)
else
ensure_event = NO_ACTION
end
if ensure_event == NO_ACTION
if context.resource_present?
resource.properties.each do |param|
sync_if_needed(param, context)
end
else
resource.debug("Nothing to manage: no ensure and the resource doesn't exist")
end
end
capture_audit_events(resource, context)
end
def allow_changes?(resource)
if resource.purging? and resource.deleting? and deps = relationship_graph.dependents(resource) \
and ! deps.empty? and deps.detect { |d| ! d.deleting? }
deplabel = deps.collect { |r| r.ref }.join(",")
plurality = deps.length > 1 ? "":"s"
resource.warning "#{deplabel} still depend#{plurality} on me -- not purging"
false
else
true
end
end
def sync_if_needed(param, context)
historical_value = context.historical_values[param.name]
current_value = context.current_values[param.name]
do_audit = context.audited_params.include?(param.name)
begin
if param.should && !param.safe_insync?(current_value)
event = create_change_event(param, current_value, historical_value)
if do_audit
event = audit_event(event, param)
end
brief_audit_message = audit_message(param, do_audit, historical_value, current_value)
if param.noop
noop(event, param, current_value, brief_audit_message)
else
sync(event, param, current_value, brief_audit_message)
end
event
else
NO_ACTION
end
rescue => detail
# Execution will continue on StandardErrors, just store the event
Puppet.log_exception(detail)
event = create_change_event(param, current_value, historical_value)
event.status = "failure"
event.message = "change from #{param.is_to_s(current_value)} to #{param.should_to_s(param.should)} failed: #{detail}"
event
rescue Exception => detail
# Execution will halt on Exceptions, they get raised to the application
event = create_change_event(param, current_value, historical_value)
event.status = "failure"
event.message = "change from #{param.is_to_s(current_value)} to #{param.should_to_s(param.should)} failed: #{detail}"
raise
ensure
if event
context.record(event)
event.send_log
context.synced_params << param.name
end
end
end
def create_change_event(property, current_value, historical_value)
event = property.event
event.previous_value = current_value
event.desired_value = property.should
event.historical_value = historical_value
event
end
+ # This method is an ugly hack because, given a Time object with nanosecond
+ # resolution, roundtripped through YAML serialization, the Time object will
+ # be truncated to microseconds.
+ # For audit purposes, this code special cases this comparison, and compares
+ # the two objects by their second and microsecond components. tv_sec is the
+ # number of seconds since the epoch, and tv_usec is only the microsecond
+ # portion of time. This compare satisfies compatibility requirements for
+ # Ruby 1.8.7, where to_r does not exist on the Time class.
+ def are_audited_values_equal(a, b)
+ a == b || (a.is_a?(Time) && b.is_a?(Time) && a.tv_sec == b.tv_sec && a.tv_usec == b.tv_usec)
+ end
+ private :are_audited_values_equal
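+ # Hedged illustration of the comparison above; the timestamps are made up.
+ # A Time captured with extra nanoseconds and the same Time truncated to
+ # microseconds (as after a YAML round trip) still compare equal here:
+ #
+ #   recorded = Time.at(1400000000, 123456) + 0.000000789
+ #   restored = Time.at(1400000000, 123456)
+ #   are_audited_values_equal(recorded, restored)   #=> true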
+
def audit_event(event, property)
event.audited = true
event.status = "audit"
- if event.historical_value != event.previous_value
+ if !are_audited_values_equal(event.historical_value, event.previous_value)
event.message = "audit change: previously recorded value #{property.is_to_s(event.historical_value)} has been changed to #{property.is_to_s(event.previous_value)}"
end
event
end
def audit_message(param, do_audit, historical_value, current_value)
- if do_audit && historical_value && historical_value != current_value
+ if do_audit && historical_value && !are_audited_values_equal(historical_value, current_value)
" (previously recorded value was #{param.is_to_s(historical_value)})"
else
""
end
end
def noop(event, param, current_value, audit_message)
event.message = "current_value #{param.is_to_s(current_value)}, should be #{param.should_to_s(param.should)} (noop)#{audit_message}"
event.status = "noop"
end
def sync(event, param, current_value, audit_message)
param.sync
event.message = "#{param.change_to_s(current_value, param.should)}#{audit_message}"
event.status = "success"
end
def capture_audit_events(resource, context)
context.audited_params.each do |param_name|
if context.historical_values.include?(param_name)
- if context.historical_values[param_name] != context.current_values[param_name] && !context.synced_params.include?(param_name)
+ if !are_audited_values_equal(context.historical_values[param_name], context.current_values[param_name]) && !context.synced_params.include?(param_name)
parameter = resource.parameter(param_name)
event = audit_event(create_change_event(parameter,
context.current_values[param_name],
context.historical_values[param_name]),
parameter)
event.send_log
context.record(event)
end
else
resource.property(param_name).notice "audit change: newly-recorded value #{context.current_values[param_name]}"
end
end
end
# @api private
ResourceApplicationContext = Struct.new(:resource,
:current_values,
:historical_values,
:audited_params,
:synced_params,
:status) do
def self.from_resource(resource, status)
ResourceApplicationContext.new(resource,
resource.retrieve_resource.to_hash,
Puppet::Util::Storage.cache(resource).dup,
(resource[:audit] || []).map { |p| p.to_sym },
[],
status)
end
def resource_present?
resource.present?(current_values)
end
def record(event)
status << event
end
end
end
diff --git a/lib/puppet/type.rb b/lib/puppet/type.rb
index f382f30ca..208d860f0 100644
--- a/lib/puppet/type.rb
+++ b/lib/puppet/type.rb
@@ -1,2430 +1,2452 @@
require 'puppet'
require 'puppet/util/log'
require 'puppet/util/metric'
require 'puppet/property'
require 'puppet/parameter'
require 'puppet/util'
require 'puppet/util/autoload'
require 'puppet/metatype/manager'
require 'puppet/util/errors'
require 'puppet/util/logging'
require 'puppet/util/tagging'
# see the bottom of the file for the rest of the inclusions
module Puppet
# The base class for all Puppet types.
#
# A type describes:
#--
# * **Attributes** - properties, parameters, and meta-parameters are different types of attributes of a type.
# * **Properties** - these are the properties of the managed resource (attributes of the entity being managed; like
# a file's owner, group and mode). A property describes two states; the 'is' (current state) and the 'should' (wanted
# state).
# * **Ensurable** - a set of traits that control the lifecycle (create, remove, etc.) of a managed entity.
# There is a default set of operations associated with being _ensurable_, but this can be changed.
# * **Name/Identity** - one property is the name/identity of a resource, the _namevar_ that uniquely identifies
# one instance of a type from all others.
# * **Parameters** - additional attributes of the type (that do not directly relate to an instance of the managed
# resource; whether an operation is recursive or not, where to look for things, etc.). A Parameter (in contrast to Property)
# has one current value where a Property has two (current-state and wanted-state).
# * **Meta-Parameters** - parameters that are available across all types. A meta-parameter typically has
# additional semantics; like the `require` meta-parameter. A new type typically does not add new meta-parameters,
# but you need to be aware of their existence so you do not inadvertently shadow an existing meta-parameter.
# * **Parent** - a type can have a super type (that it inherits from).
# * **Validation** - If not just a basic data type, or an enumeration of symbolic values, it is possible to provide
# validation logic for a type, properties and parameters.
# * **Munging** - munging/unmunging is the process of turning a value in external representation (as used
# by a provider) into an internal representation and vice versa. A Type supports adding custom logic for these.
# * **Auto Requirements** - a type can specify automatic relationships to resources to ensure that if they are being
# managed, they will be processed before this type.
# * **Providers** - a provider is an implementation of a type's behavior - the management of a resource in the
# system being managed. A provider is often platform specific and is selected at runtime based on
# criteria/predicates specified in the configured providers. See {Puppet::Provider} for details.
# * **Device Support** - A type has some support for being applied to a device; i.e. something that is managed
# by running logic external to the device itself. There are several methods that deal with type
# applicability for these special cases such as {apply_to_device}.
#
# Additional Concepts:
# --
# * **Resource-type** - A _resource type_ is a term used to denote the type of a resource; internally a resource
# is really an instance of a Ruby class i.e. {Puppet::Resource} which defines its behavior as "resource data".
# Conceptually however, a resource is an instance of a subclass of Type (e.g. File), where such a class describes
# its interface (what can be said/what is known about a resource of this type),
# * **Managed Entity** - This is not a term in general use, but is used here when there is a need to make
# a distinction between a resource (a description of what/how something should be managed), and what it is
# managing (a file in the file system). The term _managed entity_ is a reference to the "file in the file system"
# * **Isomorphism** - the quality of being _isomorphic_ means that two resource instances with the same name
# refers to the same managed entity. Or put differently; _an isomorphic name is the identity of a resource_.
# As an example, `exec` resources (that executes some command) have the command (i.e. the command line string) as
# their name, and these resources are said to be non-isomorphic.
#
# @note The Type class deals with multiple concerns; some methods provide an internal DSL for convenient definition
# of types, other methods deal with various aspects while running; wiring up a resource (expressed in Puppet DSL
# or Ruby DSL) with its _resource type_ (i.e. an instance of Type) to enable validation, transformation of values
# (munge/unmunge), etc. Lastly, Type is also responsible for dealing with Providers; the concrete implementations
# of the behavior that constitutes how a particular Type behaves on a particular type of system (e.g. how
# commands are executed on a flavor of Linux, on Windows, etc.). This means that as you are reading through the
# documentation of this class, you will be switching between these concepts, as well as switching between
# the conceptual level "a resource is an instance of a resource-type" and the actual implementation classes
# (Type, Resource, Provider, and various utility and helper classes).
#
# @api public
#
#
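# The following is a hedged sketch of declaring a type with this DSL; the
# type name, property, and values are illustrative only.
#
# @example A minimal custom type (illustrative)
#   Puppet::Type.newtype(:widget) do
#     @doc = "Manages a hypothetical widget."
#     ensurable
#     newparam(:name, :namevar => true)
#     newproperty(:color) do
#       newvalue(:red)
#       newvalue(:blue)
#     end
#   end
#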
class Type
include Puppet::Util
include Puppet::Util::Errors
include Puppet::Util::Logging
include Puppet::Util::Tagging
# Comparing type instances.
include Comparable
# Compares this type against the given _other_ (type) and returns -1, 0, or +1 depending on the order.
# @param other [Object] the object to compare against (produces nil if not a kind of Type)
# @return [-1, 0, +1, nil] produces -1 if this type is before the given _other_ type, 0 if equals, and 1 if after.
# Returns nil, if the given _other_ is not a kind of Type.
# @see Comparable
#
def <=>(other)
# We only order against other types, not arbitrary objects.
return nil unless other.is_a? Puppet::Type
# Our natural order is based on the reference name we use when comparing
# against other type instances.
self.ref <=> other.ref
end
# Code related to resource type attributes.
class << self
include Puppet::Util::ClassGen
include Puppet::Util::Warnings
# @return [Array<Puppet::Property>] The list of declared properties for the resource type.
# The returned list contains instances of Puppet::Property or its subclasses.
attr_reader :properties
end
# Returns all the attribute names of the type in the appropriate order.
# The {key_attributes} come first, then the {provider}, then the {properties}, and finally
# the {parameters} and {metaparams},
# all in the order they were specified in the respective files.
# @return [Array<String>] all type attribute names in a defined order.
#
def self.allattrs
key_attributes | (parameters & [:provider]) | properties.collect { |property| property.name } | parameters | metaparams
end
# Returns the class associated with the given attribute name.
# @param name [String] the name of the attribute to obtain the class for
# @return [Class, nil] the class for the given attribute, or nil if the name does not refer to an existing attribute
#
def self.attrclass(name)
@attrclasses ||= {}
# We cache the value, since this method gets called such a huge number
# of times (as in, hundreds of thousands in a given run).
unless @attrclasses.include?(name)
@attrclasses[name] = case self.attrtype(name)
when :property; @validproperties[name]
when :meta; @@metaparamhash[name]
when :param; @paramhash[name]
end
end
@attrclasses[name]
end
# Returns the attribute type (`:property`, `:param`, `:meta`).
# @comment What type of parameter are we dealing with? Cache the results, because
# this method gets called so many times.
# @return [Symbol] a symbol describing the type of attribute (`:property`, `:param`, `:meta`)
#
def self.attrtype(attr)
@attrtypes ||= {}
unless @attrtypes.include?(attr)
@attrtypes[attr] = case
when @validproperties.include?(attr); :property
when @paramhash.include?(attr); :param
when @@metaparamhash.include?(attr); :meta
end
end
@attrtypes[attr]
end
# Provides iteration over meta-parameters.
# @yieldparam p [Puppet::Parameter] each meta parameter
# @return [void]
#
def self.eachmetaparam
@@metaparams.each { |p| yield p.name }
end
# Creates a new `ensure` property with configured default values or with configuration by an optional block.
# This method is a convenience method for creating a property `ensure` with default accepted values.
# If no block is specified, the new `ensure` property will accept the default symbolic
# values `:present`, and `:absent` - see {Puppet::Property::Ensure}.
# If something else is wanted, pass a block and make calls to {Puppet::Property.newvalue} from this block
# to define each possible value. If a block is passed, the defaults are not automatically added to the set of
# valid values.
#
# @note This method will be automatically called without a block if the type implements the methods
# specified by {ensurable?}. It is recommended to always call this method and not rely on this automatic
# specification to clearly state that the type is ensurable.
#
# @overload ensurable()
# @overload ensurable({|| ... })
# @yield [ ] A block evaluated in scope of the new Parameter
# @yieldreturn [void]
# @return [void]
# @dsl type
# @api public
#
def self.ensurable(&block)
if block_given?
self.newproperty(:ensure, :parent => Puppet::Property::Ensure, &block)
else
self.newproperty(:ensure, :parent => Puppet::Property::Ensure) do
self.defaultvalues
end
end
end
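# Hedged sketch of the two forms described above; the value names and
# provider calls are illustrative.
#
# @example Default ensurable vs. a custom block (illustrative)
#   ensurable                          # accepts :present and :absent
#   ensurable do
#     newvalue(:running) { provider.start }
#     newvalue(:stopped) { provider.stop }
#   end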
# Returns true if the type implements the default behavior expected by being _ensurable_ "by default".
# A type is _ensurable_ by default if it responds to `:exists`, `:create`, and `:destroy`.
# If a type implements these methods and have not already specified that it is _ensurable_, it will be
# made so with the defaults specified in {ensurable}.
# @return [Boolean] whether the type is _ensurable_ or not.
#
def self.ensurable?
# If the class has all three of these methods defined, then it's
# ensurable.
[:exists?, :create, :destroy].all? { |method|
self.public_method_defined?(method)
}
end
# @comment These `apply_to` methods are horrible. They should really be implemented
# as part of the usual system of constraints that apply to a type and
# provider pair, but were implemented as a separate shadow system.
#
# @comment We should rip them out in favour of a real constraint pattern around the
# target device - whatever that looks like - and not have this additional
# magic here. --daniel 2012-03-08
#
# Makes this type applicable to `:device`.
# @return [Symbol] Returns `:device`
# @api private
#
def self.apply_to_device
@apply_to = :device
end
# Makes this type applicable to `:host`.
# @return [Symbol] Returns `:host`
# @api private
#
def self.apply_to_host
@apply_to = :host
end
# Makes this type applicable to `:both` (i.e. `:host` and `:device`).
# @return [Symbol] Returns `:both`
# @api private
#
def self.apply_to_all
@apply_to = :both
end
# Makes this type apply to `:host` if not already applied to something else.
# @return [Symbol] a `:device`, `:host`, or `:both` enumeration
# @api private
def self.apply_to
@apply_to ||= :host
end
# Returns true if this type is applicable to the given target.
# @param target [Symbol] should be :device, :host or :target, if anything else, :host is enforced
# @return [Boolean] true
# @api private
#
def self.can_apply_to(target)
[ target == :device ? :device : :host, :both ].include?(apply_to)
end
# Processes the options for a named parameter.
# @param name [String] the name of a parameter
# @param options [Hash] a hash of options
# @option options [Boolean] :boolean if option set to true, an access method on the form _name_? is added for the param
# @return [void]
#
def self.handle_param_options(name, options)
# If it's a boolean parameter, create a method to test the value easily
if options[:boolean]
define_method(name.to_s + "?") do
val = self[name]
if val == :true or val == true
return true
end
end
end
end
# Is the given parameter a meta-parameter?
# @return [Boolean] true if the given parameter is a meta-parameter.
#
def self.metaparam?(param)
@@metaparamhash.include?(param.intern)
end
# Returns the meta-parameter class associated with the given meta-parameter name.
# Accepts a `nil` name, and returns nil.
# @param name [String, nil] the name of a meta-parameter
# @return [Class,nil] the class for the given meta-parameter, or `nil` if no such meta-parameter exists (or if
# the given meta-parameter name is `nil`).
#
def self.metaparamclass(name)
return nil if name.nil?
@@metaparamhash[name.intern]
end
# Returns all meta-parameter names.
# @return [Array<String>] all meta-parameter names
#
def self.metaparams
@@metaparams.collect { |param| param.name }
end
# Returns the documentation for a given meta-parameter of this type.
- # @todo the type for the param metaparam
- # @param metaparam [??? Puppet::Parameter] the meta-parameter to get documentation for.
- # @return [String] the documentation associated with the given meta-parameter, or nil of not such documentation
+ # @param metaparam [Puppet::Parameter] the meta-parameter to get documentation for.
+ # @return [String] the documentation associated with the given meta-parameter, or nil if no such documentation
# exists.
# @raise if the given metaparam is not a meta-parameter in this type
#
def self.metaparamdoc(metaparam)
@@metaparamhash[metaparam].doc
end
# Creates a new meta-parameter.
- # This creates a new meta-parameter that is added to all types.
+ # This creates a new meta-parameter that is added to this and all inheriting types.
# @param name [Symbol] the name of the parameter
# @param options [Hash] a hash with options.
# @option options [Class<inherits Puppet::Parameter>] :parent (Puppet::Parameter) the super class of this parameter
# @option options [Hash{String => Object}] :attributes a hash that is applied to the generated class
# by calling setter methods corresponding to this hash's key/value pairs. This is done before the given
# block is evaluated.
# @option options [Boolean] :boolean (false) specifies if this is a boolean parameter
# @option options [Boolean] :namevar (false) specifies if this parameter is the namevar
# @option options [Symbol, Array<Symbol>] :required_features specifies required provider features by name
# @return [Class<inherits Puppet::Parameter>] the created parameter
# @yield [ ] a required block that is evaluated in the scope of the new meta-parameter
# @api public
# @dsl type
# @todo Verify that this description is ok
#
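# @example Illustrative sketch of a hypothetical meta-parameter added to all types
#   Puppet::Type.newmetaparam(:note) do
#     desc "A free-form note that can be attached to any resource."
#   end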
def self.newmetaparam(name, options = {}, &block)
@@metaparams ||= []
@@metaparamhash ||= {}
name = name.intern
param = genclass(
name,
:parent => options[:parent] || Puppet::Parameter,
:prefix => "MetaParam",
:hash => @@metaparamhash,
:array => @@metaparams,
:attributes => options[:attributes],
&block
)
# Grr.
param.required_features = options[:required_features] if options[:required_features]
handle_param_options(name, options)
param.metaparam = true
param
end
- # Returns parameters that act as a key.
+ # Returns the list of parameters that comprise the composite key / "uniqueness key".
# All parameters that return true from #isnamevar? or are named `:name` are included in the returned result.
- # @todo would like a better explanation
+ # @see uniqueness_key
# @return [Array<Puppet::Parameter>] WARNING: this return type is uncertain
def self.key_attribute_parameters
@key_attribute_parameters ||= (
@parameters.find_all { |param|
param.isnamevar? or param.name == :name
}
)
end
- # Returns cached {key_attribute_parameters} names
- # @todo what is a 'key_attribute' ?
+ # Returns cached {key_attribute_parameters} names.
+ # Key attributes are properties and parameters that comprise a composite key
+ # or "uniqueness key".
# @return [Array<String>] cached key_attribute names
#
def self.key_attributes
# This is a cache miss around 0.05 percent of the time. --daniel 2012-07-17
@key_attributes_cache ||= key_attribute_parameters.collect { |p| p.name }
end
# Returns a mapping from the title string to setting of attribute value(s).
# This default implementation provides a mapping of title to the one and only _namevar_ present
# in the type's definition.
# @note Advanced: some logic requires this mapping to be done differently, using a different
# validation/pattern, breaking up the title
# into several parts assigning each to an individual attribute, or even use a composite identity where
# all namevars are seen as part of the unique identity (such computation is done by the {#uniqueness_key} method).
# These advanced options are rarely used (only one of the built-in puppet types uses this, and then only
# a small part of the available functionality), and the support for these advanced mappings is not
# implemented in a straightforward way. For these reasons, this method has been marked as private.
#
# @raise [Puppet::DevError] if there is no title pattern and there are two or more key attributes
# @return [Array<Array<Regexp, Array<Array <Symbol, Proc>>>>, nil] a structure pairing a regexp with the key attribute(s) its captures are assigned to
# @comment This wonderful piece of logic creates a structure used by Resource.parse_title which
# has the capability to assign parts of the title to one or more attributes; It looks like an implementation
# of a composite identity key (all parts of the key_attributes array are in the key). This can also
# be seen in the method uniqueness_key.
# The implementation in this method simply assigns the title to the one and only namevar (which is name
# or a variable marked as namevar).
# If there are multiple namevars (any in addition to :name?) then this method MUST be implemented
# as it raises an exception if there is more than 1. Note that in puppet, it is only File that uses this
# to create a different pattern for assigning to the :path attribute
# This requires further digging.
# The entire construct is somewhat strange, since resource checks if the method "title_patterns" is
# implemented (it seems it always is) - why take this more expensive regexp matching route for all
# other types?
# @api private
#
def self.title_patterns
case key_attributes.length
when 0; []
when 1;
[ [ /(.*)/m, [ [key_attributes.first] ] ] ]
else
raise Puppet::DevError,"you must specify title patterns when there are two or more key attributes"
end
end
- # Produces a _uniqueness_key_
- # @todo Explain what a uniqueness_key is
+ # Produces a resource's _uniqueness_key_ (or composite key).
+ # This key is an array of the values of all key attributes; each such tuple must be unique among the resources of a given type.
+ # @see key_attributes
# @return [Object] an object that is a _uniqueness_key_ for this object
#
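# @example Illustrative value for a resource whose only key attribute is its namevar
#   resource = Puppet::Type.type(:file).new(:path => '/tmp/demo')
#   resource.uniqueness_key   #=> ['/tmp/demo']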
def uniqueness_key
self.class.key_attributes.sort_by { |attribute_name| attribute_name.to_s }.map{ |attribute_name| self[attribute_name] }
end
# Creates a new parameter.
# @param name [Symbol] the name of the parameter
# @param options [Hash] a hash with options.
# @option options [Class<inherits Puppet::Parameter>] :parent (Puppet::Parameter) the super class of this parameter
# @option options [Hash{String => Object}] :attributes a hash that is applied to the generated class
# by calling setter methods corresponding to this hash's key/value pairs. This is done before the given
# block is evaluated.
# @option options [Boolean] :boolean (false) specifies if this is a boolean parameter
# @option options [Boolean] :namevar (false) specifies if this parameter is the namevar
# @option options [Symbol, Array<Symbol>] :required_features specifies required provider features by name
# @return [Class<inherits Puppet::Parameter>] the created parameter
# @yield [ ] a required block that is evaluated in the scope of the new parameter
# @api public
# @dsl type
#
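# @example Illustrative sketch of declaring a parameter inside a type definition (the names are hypothetical)
#   newparam(:source) do
#     desc "Where to fetch the thing from."
#     validate do |value|
#       raise ArgumentError, "source must be a String" unless value.is_a?(String)
#     end
#   end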
def self.newparam(name, options = {}, &block)
options[:attributes] ||= {}
param = genclass(
name,
:parent => options[:parent] || Puppet::Parameter,
:attributes => options[:attributes],
:block => block,
:prefix => "Parameter",
:array => @parameters,
:hash => @paramhash
)
handle_param_options(name, options)
# Grr.
param.required_features = options[:required_features] if options[:required_features]
param.isnamevar if options[:namevar]
param
end
# Creates a new property.
# @param name [Symbol] the name of the property
# @param options [Hash] a hash with options.
# @option options [Symbol] :array_matching (:first) specifies how the current state is matched against
# the wanted state. Use `:first` if the property is single valued, and `:all` otherwise.
# @option options [Class<inherits Puppet::Property>] :parent (Puppet::Property) the super class of this property
# @option options [Hash{String => Object}] :attributes a hash that is applied to the generated class
# by calling setter methods corresponding to this hash's key/value pairs. This is done before the given
# block is evaluated.
# @option options [Boolean] :boolean (false) specifies if this is a boolean parameter
# @option options [Symbol] :retrieve the method to call on the provider (or `parent` if `provider` is not set)
# to retrieve the current value of this property.
# @option options [Symbol, Array<Symbol>] :required_features specifies required provider features by name
# @return [Class<inherits Puppet::Property>] the created property
# @yield [ ] a required block that is evaluated in the scope of the new property
# @api public
# @dsl type
#
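# @example Illustrative sketch of a property with a custom retrieve method (the names, including the
#   provider's `fetch_mode` method, are hypothetical)
#   newproperty(:mode, :retrieve => :fetch_mode) do
#     desc "The desired mode, in octal."
#     newvalues(/^[0-7]+$/)
#   end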
def self.newproperty(name, options = {}, &block)
name = name.intern
# This is here for types that might still have the old method of defining
# a parent class.
unless options.is_a? Hash
raise Puppet::DevError,
"Options must be a hash, not #{options.inspect}"
end
raise Puppet::DevError, "Class #{self.name} already has a property named #{name}" if @validproperties.include?(name)
if parent = options[:parent]
options.delete(:parent)
else
parent = Puppet::Property
end
# We have to create our own, new block here because we want to define
# an initial :retrieve method, if told to, and then eval the passed
# block if available.
prop = genclass(name, :parent => parent, :hash => @validproperties, :attributes => options) do
# If they've passed a retrieve method, then override the retrieve
# method on the class.
if options[:retrieve]
define_method(:retrieve) do
provider.send(options[:retrieve])
end
end
class_eval(&block) if block
end
# If it's the 'ensure' property, always put it first.
if name == :ensure
@properties.unshift prop
else
@properties << prop
end
prop
end
def self.paramdoc(param)
@paramhash[param].doc
end
# @return [Array<String>] Returns the parameter names
def self.parameters
return [] unless defined?(@parameters)
@parameters.collect { |klass| klass.name }
end
# @return [Puppet::Parameter] Returns the parameter class associated with the given parameter name.
def self.paramclass(name)
@paramhash[name]
end
# @return [Puppet::Property] Returns the property class associated with the given property name
def self.propertybyname(name)
@validproperties[name]
end
# Returns whether or not the given name is the name of a property, parameter or meta-parameter
# @return [Boolean] true if the given attribute name is the name of an existing property, parameter or meta-parameter
#
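# @example Illustrative queries against the File type
#   Puppet::Type.type(:file).validattr?(:mode)      #=> true  (a property)
#   Puppet::Type.type(:file).validattr?(:noop)      #=> true  (a meta-parameter)
#   Puppet::Type.type(:file).validattr?(:no_such)   #=> false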
def self.validattr?(name)
name = name.intern
return true if name == :name
@validattrs ||= {}
unless @validattrs.include?(name)
@validattrs[name] = !!(self.validproperty?(name) or self.validparameter?(name) or self.metaparam?(name))
end
@validattrs[name]
end
# @return [Boolean] Returns true if the given name is the name of an existing property
def self.validproperty?(name)
name = name.intern
@validproperties.include?(name) && @validproperties[name]
end
# @return [Array<Symbol>, {}] Returns a list of valid property names, or an empty hash if there are none.
# @todo An empty hash is returned if there are no defined parameters (not an empty array). This looks like
# a bug.
#
def self.validproperties
return {} unless defined?(@parameters)
@validproperties.keys
end
# @return [Boolean] Returns true if the given name is the name of an existing parameter
def self.validparameter?(name)
raise Puppet::DevError, "Class #{self} has not defined parameters" unless defined?(@parameters)
!!(@paramhash.include?(name) or @@metaparamhash.include?(name))
end
# (see validattr?)
# @note see comment in code - how should this be documented? Are some of the other query methods deprecated?
# (or should be).
# @comment This is a forward-compatibility method - it's the validity interface we'll use in Puppet::Resource.
def self.valid_parameter?(name)
validattr?(name)
end
# @return [Boolean] Returns true if the wanted state of the resource is that it should be absent (i.e. to be deleted).
def deleting?
obj = @parameters[:ensure] and obj.should == :absent
end
# Creates a new property value holder for the resource if it is valid and does not already exist
# @return [Boolean] true if a new parameter was added, false otherwise
def add_property_parameter(prop_name)
if self.class.validproperty?(prop_name) && !@parameters[prop_name]
self.newattr(prop_name)
return true
end
false
end
# @return [Symbol, Boolean] Returns the name of the namevar if there is only one or false otherwise.
# @comment This is really convoluted and part of the support for multiple namevars (?).
# If there is only one namevar, the produced value is naturally this namevar, but if there are several?
# The logic caches the name of the namevar if it is a single name, but otherwise always
# calls key_attributes, and then caches the first if there was only one, otherwise it returns
# false and caches this (which is then subsequently returned as a cache hit).
#
def name_var
return @name_var_cache unless @name_var_cache.nil?
key_attributes = self.class.key_attributes
@name_var_cache = (key_attributes.length == 1) && key_attributes.first
end
# Gets the 'should' (wanted state) value of a parameter or property by name.
# To explicitly get the 'is' (current state) value use `o.is(:name)`, and to explicitly get the 'should' value
# use `o.should(:name)`
# @param name [String] the name of the attribute to obtain the 'should' value for.
# @return [Object] 'should'/wanted value of the given attribute
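# @example Illustrative read of attribute values (the file resource shown is hypothetical)
#   resource = Puppet::Type.type(:file).new(:path => '/tmp/demo', :ensure => 'file')
#   resource[:path]   #=> '/tmp/demo'
#   resource[:name]   #=> '/tmp/demo' (:name is translated to the namevar, :path)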
def [](name)
name = name.intern
fail("Invalid parameter #{name}(#{name.inspect})") unless self.class.validattr?(name)
if name == :name && nv = name_var
name = nv
end
if obj = @parameters[name]
# Note that if this is a property, then the value is the "should" value,
# not the current value.
obj.value
else
return nil
end
end
# Sets the 'should' (wanted state) value of a property, or the value of a parameter.
# @return [void]
# @raise [Puppet::Error] if the setting of the value fails, or if the given name is nil.
# @raise [Puppet::ResourceError] when the parameter validation raises Puppet::Error or
# ArgumentError
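# @example Illustrative writes (the resource shown is hypothetical)
#   resource[:ensure] = :present    # records :present as the wanted state
#   resource[:ensure] = nil         # raises Puppet::Error ("Got nil value for ensure")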
def []=(name,value)
name = name.intern
fail("Invalid parameter #{name}") unless self.class.validattr?(name)
if name == :name && nv = name_var
name = nv
end
raise Puppet::Error.new("Got nil value for #{name}") if value.nil?
property = self.newattr(name)
if property
begin
# make sure the parameter doesn't have any errors
property.value = value
rescue Puppet::Error, ArgumentError => detail
error = Puppet::ResourceError.new("Parameter #{name} failed on #{ref}: #{detail}")
adderrorcontext(error, detail)
raise error
end
end
nil
end
- # Removes a property from the object; useful in testing or in cleanup
+ # Removes an attribute from the object; useful in testing or in cleanup
# when an error has been encountered
- # @todo Incomprehensible - the comment says "Remove a property", the code refers to @parameters, and
- # the method parameter is called "attr" - What is it, property, parameter, both (i.e an attribute) or what?
# @todo Don't know what the attr is (name or Property/Parameter?). Guessing it is a String name...
# @todo Is it possible to delete a meta-parameter?
# @todo What does delete mean? Is it deleted from the type or is its value state 'is'/'should' deleted?
# @param attr [Symbol, String] the name of the attribute to remove from this object.
# @raise [Puppet::DevError] when an attempt is made to delete an attribute that does not exist.
#
def delete(attr)
attr = attr.intern
if @parameters.has_key?(attr)
@parameters.delete(attr)
else
raise Puppet::DevError.new("Undefined attribute '#{attr}' in #{self}")
end
end
- # Iterates over the existing properties.
- # @todo what does this mean? As opposed to iterating over the "non existing properties" ??? Is it an
- # iteration over those properties that have state? CONFUSING.
+ # Iterates over the properties that were set on this resource.
# @yieldparam property [Puppet::Property] each property
# @return [void]
def eachproperty
# properties is a private method
properties.each { |property|
yield property
}
end
# Return the parameters, metaparams, and properties that have a value or were set by a default. Properties are
# included since they are a subclass of parameter.
# @return [Array<Puppet::Parameter>] Array of parameter objects ( or subclass thereof )
def parameters_with_value
self.class.allattrs.collect { |attr| parameter(attr) }.compact
end
# Iterates over all parameters with value currently set.
# @yieldparam parameter [Puppet::Parameter] or a subclass thereof
# @return [void]
def eachparameter
parameters_with_value.each { |parameter| yield parameter }
end
# Creates a transaction event.
# Called by Transaction or by a property.
# Merges the given options with the options `:resource`, `:file`, `:line`, and `:tags`, initialized from
# values in this object. For possible options to pass, see {Puppet::Transaction::Event}.
# @todo Needs a better explanation "Why should I care who is calling this method?", What do I need to know
# about events and how they work? Where can I read about them?
# @param options [Hash] options merged with a fixed set of options defined by this method, passed on to {Puppet::Transaction::Event}.
# @return [Puppet::Transaction::Event] the created event
def event(options = {})
Puppet::Transaction::Event.new({:resource => self, :file => file, :line => line, :tags => tags}.merge(options))
end
# @return [Object, nil] Returns the 'should' (wanted state) value for a specified property, or nil if the
# given attribute name is not a property (i.e. if it is a parameter, meta-parameter, or does not exist).
def should(name)
name = name.intern
(prop = @parameters[name] and prop.is_a?(Puppet::Property)) ? prop.should : nil
end
- # Creates an instance to represent/manage the given attribute.
- # Requires either the attribute name or class as the first argument, then an optional hash of
- # attributes to set during initialization.
- # @todo The original comment is just wrong - the method does not accept a hash of options
- # @todo Detective work required; this method interacts with provider to ask if it supports a parameter of
- # the given class. it then returns the parameter if it exists, otherwise creates a parameter
- # with its :resource => self.
+ # Registers an attribute with this resource type instance.
+ # Requires either the attribute name or class as its argument.
+ # This is a noop if the named property/parameter is not supported
+ # by this resource. Otherwise, an attribute instance is created
+ # and kept in this resource's parameters hash.
# @overload newattr(name)
- # @param name [String] Unclear what name is (probably a symbol) - Needs investigation.
+ # @param name [Symbol] symbolic name of the attribute
# @overload newattr(klass)
- # @param klass [Class] a class supported as an attribute class - Needs clarification what that means.
- # @return [???] Probably returns a new instance of the class - Needs investigation.
+ # @param klass [Class] a class supported as an attribute class, i.e. a subclass of
+ # Parameter or Property
+ # @return [Object] An instance of the named Parameter or Property class associated
+ # to this resource type instance, or nil if the attribute is not supported
#
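# @example Illustrative use on a hypothetical resource instance
#   param = resource.newattr(:ensure)    # creates and caches the attribute instance
#   resource.newattr(:ensure)            # returns the same cached instance on later calls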
def newattr(name)
if name.is_a?(Class)
klass = name
name = klass.name
end
unless klass = self.class.attrclass(name)
raise Puppet::Error, "Resource type #{self.class.name} does not support parameter #{name}"
end
if provider and ! provider.class.supports_parameter?(klass)
missing = klass.required_features.find_all { |f| ! provider.class.feature?(f) }
debug "Provider %s does not support features %s; not managing attribute %s" % [provider.class.name, missing.join(", "), name]
return nil
end
return @parameters[name] if @parameters.include?(name)
@parameters[name] = klass.new(:resource => self)
end
# Returns a string representation of the resource's containment path in
# the catalog.
# @return [String]
def path
@path ||= '/' + pathbuilder.join('/')
end
# Returns this object's parameter (or property) instance with the given name
# @param name [String] the name of the parameter
# @return [Puppet::Parameter, nil] the parameter instance, or nil if this resource has no such parameter
def parameter(name)
@parameters[name.to_sym]
end
- # Returns a shallow copy of this object's hash of parameters.
- # @todo Add that this is not only "parameters", but also "properties" and "meta-parameters" ?
+ # Returns a shallow copy of this object's hash of attributes by name.
+ # Note that this comprises not only parameters, but also properties and meta-parameters.
# Changes to the contained parameters will have an effect on the parameters of this type, but changes to
# the returned hash itself do not.
- # @return [Hash{String => Puppet:???Parameter}] a new hash being a shallow copy of the parameters map name to parameter
+ # @return [Hash{String => Object}] a new hash that is a shallow copy of the parameters, mapping attribute name to attribute instance
def parameters
@parameters.dup
end
- # @return [Boolean] Returns whether the property given by name is defined or not.
- # @todo what does it mean to be defined?
+ # @return [Boolean] Returns whether the attribute given by name has been added
+ # to this resource or not.
def propertydefined?(name)
name = name.intern unless name.is_a? Symbol
@parameters.include?(name)
end
# Returns a {Puppet::Property} instance by name.
# To return the value, use 'resource[param]'
# @todo LAK:NOTE(20081028) Since the 'parameter' method is now a superset of this method,
# this one should probably go away at some point. - Does this mean it should be deprecated ?
# @return [Puppet::Property] the property with the given name, or nil if not a property or does not exist.
def property(name)
(obj = @parameters[name.intern] and obj.is_a?(Puppet::Property)) ? obj : nil
end
# @todo comment says "For any parameters or properties that have defaults and have not yet been
# set, set them now. This method can be handed a list of attributes,
# and if so it will only set defaults for those attributes."
# @todo Needs a better explanation, and investigation about the claim an array can be passed (it is passed
# to self.class.attrclass to produce a class on which a check is made if it has a method class :default (does
# not seem to support an array...
# @return [void]
#
def set_default(attr)
return unless klass = self.class.attrclass(attr)
return unless klass.method_defined?(:default)
return if @parameters.include?(klass.name)
return unless parameter = newattr(klass.name)
if value = parameter.default and ! value.nil?
parameter.value = value
else
@parameters.delete(parameter.name)
end
end
# Converts this resource into a hash of attribute name to value.
# The hash includes all attributes that have been set on this resource (parameters, properties, and
# meta-parameters alike).
#
# @return [Hash{Symbol => Object}] a hash mapping attribute name to value. The hash is a shallow copy; any changes to the
# objects returned in this hash will be reflected in the original resource having these attributes.
#
def to_hash
rethash = {}
@parameters.each do |name, obj|
rethash[name] = obj.value
end
rethash
end
# @return [Symbol] the name of this resource's type, e.g. `:file` for an instance of the File type
#
def type
self.class.name
end
# @todo Comment says "Return a specific value for an attribute.", as opposed to what "An upspecific value"???
# @todo is this the 'is' or the 'should' value?
# @todo why is the return restricted to things that respond to :value? (Only non structural basic data types
# supported?
#
# @return [Object, nil] the value of the attribute having the given name, or nil if the given name is not
# an attribute, or the referenced attribute does not respond to `:value`.
def value(name)
name = name.intern
(obj = @parameters[name] and obj.respond_to?(:value)) ? obj.value : nil
end
# @todo What is this used for? Needs a better explanation.
# @return [???] the version of the catalog or 0 if there is no catalog.
def version
return 0 unless catalog
catalog.version
end
# @return [Array<Puppet::Property>] Returns all of the property objects, in the order specified in the
# class.
# @todo "what does the 'order specified in the class' mean? The order the properties where added in the
# ruby file adding a new type with new properties?
#
def properties
self.class.properties.collect { |prop| @parameters[prop.name] }.compact
end
# Returns true if the type's notion of name is the identity of a resource.
# See the overview of this class for a longer explanation of the concept _isomorphism_.
# Defaults to true.
#
# @return [Boolean] true, if this type's name is isomorphic with the object
def self.isomorphic?
if defined?(@isomorphic)
return @isomorphic
else
return true
end
end
# @todo check that this gets documentation (it is at the class level as well as instance).
# (see isomorphic?)
def isomorphic?
self.class.isomorphic?
end
# Returns true if the instance is a managed instance.
# A 'yes' here means that the instance was created from the language, vs. being created
# in order to resolve other questions, such as finding a package in a list.
# @note An object that is managed always stays managed, but an object that is not managed
# may become managed later in its lifecycle.
# @return [Boolean] true if the object is managed
def managed?
# Once an object is managed, it always stays managed; but an object
# that is listed as unmanaged might become managed later in the process,
# so we have to check that every time
if @managed
return @managed
else
@managed = false
properties.each { |property|
s = property.should
if s and ! property.class.unmanaged
@managed = true
break
end
}
return @managed
end
end
###############################
# Code related to the container behaviour.
# Returns true if the search should be done in depth-first order.
# This implementation always returns false.
# @todo What is this used for?
#
# @return [Boolean] true if the search should be done in depth first order.
#
def depthfirst?
false
end
# Removes all of this resource's parameters and clears its parent and provider references.
# @overload remove(rmdeps)
# @deprecated Use remove()
# @param rmdeps [Boolean] intended to indicate that all subscriptions should also be removed, ignored.
# @overload remove()
# @return [void]
#
def remove(rmdeps = true)
# This is hackish (mmm, cut and paste), but it works for now, and it's
# better than warnings.
@parameters.each do |name, obj|
obj.remove
end
@parameters.clear
@parent = nil
# Remove the reference to the provider.
if self.provider
@provider.clear
@provider = nil
end
end
###############################
# Code related to evaluating the resources.
# Returns the ancestors of this resource.
# This implementation always returns an empty list.
# @return [Array] an empty list of ancestors
def ancestors
[]
end
+ # Lifecycle method for a resource. This is called during graph creation.
+ # It should perform any consistency checking of the catalog and raise a
+ # Puppet::Error if the transaction should be aborted.
+ #
+ # It differs from the validate method, since it is called later during
+ # initialization and can rely on self.catalog to have references to all
+ # resources that comprise the catalog.
+ #
+ # @see Puppet::Transaction#add_vertex
+ # @raise [Puppet::Error] If the pre-run check failed.
+ # @return [void]
+ # @abstract a resource type may implement this method to perform
+ # validation checks that can query the complete catalog
+ def pre_run_check
+ end
+
# Flushes the provider if supported by the provider, else no action.
# This is called by the transaction.
# @todo What does Flushing the provider mean? Why is it interesting to know that this is
# called by the transaction? (It is not explained anywhere what a transaction is).
#
# @return [void]
def flush
self.provider.flush if self.provider and self.provider.respond_to?(:flush)
end
# Returns true if all of this resource's properties are in sync with the given current ("is") values.
# @param is [Hash] a hash mapping property instances to their current values
#
# @todo deal with the comment _"FIXME I don't think this is used on the type instances any more,
# it's really only used for testing"_
# @return [Boolean] true if in sync, false otherwise.
#
def insync?(is)
insync = true
if property = @parameters[:ensure]
unless is.include? property
raise Puppet::DevError,
"The is value is not in the is array for '#{property.name}'"
end
ensureis = is[property]
if property.safe_insync?(ensureis) and property.should == :absent
return true
end
end
properties.each { |property|
unless is.include? property
raise Puppet::DevError,
"The is value is not in the is array for '#{property.name}'"
end
propis = is[property]
unless property.safe_insync?(propis)
property.debug("Not in sync: #{propis.inspect} vs #{property.should.inspect}")
insync = false
#else
# property.debug("In sync")
end
}
#self.debug("#{self} sync status is #{insync}")
insync
end
# Retrieves the current ("is") values of all of this resource's properties.
# Parameters and meta-parameters are not included in the result.
# @return [Puppet::Resource] a resource holding the retrieved current values of the properties
# @raise [Puppet::Error] if there is a provider and it is not functional on the host this is evaluated for.
def retrieve
fail "Provider #{provider.class.name} is not functional on this host" if self.provider.is_a?(Puppet::Provider) and ! provider.class.suitable?
- result = Puppet::Resource.new(type, title)
+ result = Puppet::Resource.new(self.class, title)
# Provide the name, so we know we'll always refer to a real thing
result[:name] = self[:name] unless self[:name] == title
if ensure_prop = property(:ensure) or (self.class.validattr?(:ensure) and ensure_prop = newattr(:ensure))
result[:ensure] = ensure_state = ensure_prop.retrieve
else
ensure_state = nil
end
properties.each do |property|
next if property.name == :ensure
if ensure_state == :absent
result[property] = :absent
else
result[property] = property.retrieve
end
end
result
end
# Retrieve the current state of the system as a Puppet::Resource. For
# the base Puppet::Type this does the same thing as #retrieve, but
# specific types are free to implement #retrieve as returning a hash,
# and this will call #retrieve and convert the hash to a resource.
# This is used when determining whether a resource needs to be synced.
#
# @return [Puppet::Resource] A resource representing the current state
# of the system.
#
# @api private
def retrieve_resource
resource = retrieve
- resource = Resource.new(type, title, :parameters => resource) if resource.is_a? Hash
+ resource = Resource.new(self.class, title, :parameters => resource) if resource.is_a? Hash
resource
end
# Given the hash of current properties, should this resource be treated as if it
# currently exists on the system. May need to be overridden by types that offer up
# more than just :absent and :present.
def present?(current_values)
current_values[:ensure] != :absent
end
# Returns a hash of the current properties and their values.
# If a resource is absent, its value is the symbol `:absent`
# @return [Hash{Puppet::Property => Object}] mapping of property instance to its value
#
def currentpropvalues
# It's important to use the 'properties' method here, as it follows the order
# in which they're defined in the class. It also guarantees that 'ensure'
# is the first property, which is important for skipping 'retrieve' on
# all the properties if the resource is absent.
ensure_state = false
return properties.inject({}) do | prophash, property|
if property.name == :ensure
ensure_state = property.retrieve
prophash[property] = ensure_state
else
if ensure_state == :absent
prophash[property] = :absent
else
prophash[property] = property.retrieve
end
end
prophash
end
end
# Returns the `noop` run mode status of this.
# @return [Boolean] true if running in noop mode.
def noop?
# If we're not a host_config, we're almost certainly part of
# Settings, and we want to ignore 'noop'
return false if catalog and ! catalog.host_config?
if defined?(@noop)
@noop
else
Puppet[:noop]
end
end
# (see #noop?)
def noop
noop?
end
# Retrieves all instances of this type that are known to the system, as enumerated by this type's providers.
# Either requires providers or must be overridden.
# @raise [Puppet::DevError] when there are no providers and the implementation has not overridden this method.
def self.instances
raise Puppet::DevError, "#{self.name} has no providers and has not overridden 'instances'" if provider_hash.empty?
# Put the default provider first, then the rest of the suitable providers.
provider_instances = {}
providers_by_source.collect do |provider|
self.properties.find_all do |property|
provider.supports_parameter?(property)
end.collect do |property|
property.name
end
provider.instances.collect do |instance|
# We always want to use the "first" provider instance we find, unless the resource
# is already managed and has a different provider set
if other = provider_instances[instance.name]
Puppet.debug "%s %s found in both %s and %s; skipping the %s version" %
[self.name.to_s.capitalize, instance.name, other.class.name, instance.class.name, instance.class.name]
next
end
provider_instances[instance.name] = instance
result = new(:name => instance.name, :provider => instance)
properties.each { |name| result.newattr(name) }
result
end
end.flatten.compact
end
# Returns a list of one suitable provider per source, with the default provider first.
# @todo Needs better explanation; what does "source" mean in this context?
# @return [Array<Puppet::Provider>] list of providers
#
def self.providers_by_source
# Put the default provider first (can be nil), then the rest of the suitable providers.
sources = []
[defaultprovider, suitableprovider].flatten.uniq.collect do |provider|
next if provider.nil?
next if sources.include?(provider.source)
sources << provider.source
provider
end.compact
end
# Converts a hash of attribute names to values into a Puppet::Resource instance.
# @param [Hash{Symbol, String => Object}] hash resource attribute to value map to initialize the created resource from
# @return [Puppet::Resource] the resource created from the hash
# @raise [Puppet::Error] if a title is missing in the given hash
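# @example Illustrative conversion (the attribute values shown are hypothetical)
#   Puppet::Type.type(:file).hash2resource(:title => '/tmp/demo', :ensure => 'file')
#   #=> a Puppet::Resource for File['/tmp/demo'] carrying ensure => 'file'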
def self.hash2resource(hash)
hash = hash.inject({}) { |result, ary| result[ary[0].to_sym] = ary[1]; result }
title = hash.delete(:title)
title ||= hash[:name]
title ||= hash[key_attributes.first] if key_attributes.length == 1
raise Puppet::Error, "Title or name must be provided" unless title
# Now create our resource.
- resource = Puppet::Resource.new(self.name, title)
+ resource = Puppet::Resource.new(self, title)
resource.catalog = hash.delete(:catalog)
- resource.resource_type = self
hash.each do |param, value|
resource[param] = value
end
resource
end
# Returns an array of strings representing the containment hierarchy
# (types/classes) that make up the path to the resource from the root
# of the catalog. This is mostly used for logging purposes.
#
# @api private
def pathbuilder
if p = parent
[p.pathbuilder, self.ref].flatten
else
[self.ref]
end
end
###############################
# Add all of the meta-parameters.
newmetaparam(:noop) do
desc "Whether to apply this resource in noop mode.
When applying a resource in noop mode, Puppet will check whether it is in sync,
like it does when running normally. However, if a resource attribute is not in
the desired state (as declared in the catalog), Puppet will take no
action, and will instead report the changes it _would_ have made. These
simulated changes will appear in the report sent to the puppet master, or
be shown on the console if running puppet agent or puppet apply in the
foreground. The simulated changes will not send refresh events to any
subscribing or notified resources, although Puppet will log that a refresh
event _would_ have been sent.
**Important note:**
[The `noop` setting](http://docs.puppetlabs.com/references/latest/configuration.html#noop)
allows you to globally enable or disable noop mode, but it will _not_ override
the `noop` metaparameter on individual resources. That is, the value of the
global `noop` setting will _only_ affect resources that do not have an explicit
value set for their `noop` attribute."
newvalues(:true, :false)
munge do |value|
case value
when true, :true, "true"; @resource.noop = true
when false, :false, "false"; @resource.noop = false
end
end
end
newmetaparam(:schedule) do
desc "A schedule to govern when Puppet is allowed to manage this resource.
The value of this metaparameter must be the `name` of a `schedule`
resource. This means you must declare a schedule resource, then
refer to it by name; see
[the docs for the `schedule` type](http://docs.puppetlabs.com/references/latest/type.html#schedule)
for more info.
schedule { 'everyday':
period => daily,
range => \"2-4\"
}
exec { \"/usr/bin/apt-get update\":
schedule => 'everyday'
}
Note that you can declare the schedule resource anywhere in your
manifests, as long as it ends up in the final compiled catalog."
end
newmetaparam(:audit) do
desc "Marks a subset of this resource's unmanaged attributes for auditing. Accepts an
attribute name, an array of attribute names, or `all`.
Auditing a resource attribute has two effects: First, whenever a catalog
is applied with puppet apply or puppet agent, Puppet will check whether
that attribute of the resource has been modified, comparing its current
value to the previous run; any change will be logged alongside any actions
performed by Puppet while applying the catalog.
Secondly, marking a resource attribute for auditing will include that
attribute in inspection reports generated by puppet inspect; see the
puppet inspect documentation for more details.
Managed attributes for a resource can also be audited, but note that
changes made by Puppet will be logged as additional modifications. (I.e.
if a user manually edits a file whose contents are audited and managed,
puppet agent's next two runs will both log an audit notice: the first run
will log the user's edit and then revert the file to the desired state,
and the second run will log the edit made by Puppet.)"
validate do |list|
list = Array(list).collect {|p| p.to_sym}
unless list == [:all]
list.each do |param|
next if @resource.class.validattr?(param)
fail "Cannot audit #{param}: not a valid attribute for #{resource}"
end
end
end
munge do |args|
properties_to_audit(args).each do |param|
next unless resource.class.validproperty?(param)
resource.newattr(param)
end
end
def all_properties
resource.class.properties.find_all do |property|
resource.provider.nil? or resource.provider.class.supports_parameter?(property)
end.collect do |property|
property.name
end
end
def properties_to_audit(list)
if !list.kind_of?(Array) && list.to_sym == :all
list = all_properties
else
list = Array(list).collect { |p| p.to_sym }
end
end
end
newmetaparam(:loglevel) do
desc "Sets the level that information will be logged.
The log levels have the biggest impact when logs are sent to
- syslog (which is currently the default)."
+ syslog (which is currently the default).
+
+ The order of the log levels, in decreasing priority, is:
+
+ * `crit`
+ * `emerg`
+ * `alert`
+ * `err`
+ * `warning`
+ * `notice`
+ * `info` / `verbose`
+ * `debug`
+ "
defaultto :notice
newvalues(*Puppet::Util::Log.levels)
newvalues(:verbose)
munge do |loglevel|
val = super(loglevel)
if val == :verbose
val = :info
end
val
end
end
newmetaparam(:alias) do
desc %q{Creates an alias for the resource. Puppet uses this internally when you
provide a symbolic title and an explicit namevar value:
file { 'sshdconfig':
path => $operatingsystem ? {
solaris => '/usr/local/etc/ssh/sshd_config',
default => '/etc/ssh/sshd_config',
},
source => '...'
}
service { 'sshd':
subscribe => File['sshdconfig'],
}
When you use this feature, the parser sets `sshdconfig` as the title,
and the library sets that as an alias for the file so the dependency
lookup in `Service['sshd']` works. You can use this metaparameter yourself,
but note that aliases generally only work for creating relationships; anything
else that refers to an existing resource (such as amending or overriding
resource attributes in an inherited class) must use the resource's exact
title. For example, the following code will not work:
file { '/etc/ssh/sshd_config':
owner => root,
group => root,
alias => 'sshdconfig',
}
File['sshdconfig'] {
mode => 644,
}
There's no way here for the Puppet parser to know that these two stanzas
should be affecting the same file.
}
munge do |aliases|
aliases = [aliases] unless aliases.is_a?(Array)
raise(ArgumentError, "Cannot add aliases without a catalog") unless @resource.catalog
aliases.each do |other|
if obj = @resource.catalog.resource(@resource.class.name, other)
unless obj.object_id == @resource.object_id
self.fail("#{@resource.title} can not create alias #{other}: object already exists")
end
next
end
# Newschool, add it to the catalog.
@resource.catalog.alias(@resource, other)
end
end
end
newmetaparam(:tag) do
desc "Add the specified tags to the associated resource. While all resources
are automatically tagged with as much information as possible
(e.g., each class and definition containing the resource), it can
be useful to add your own tags to a given resource.
Multiple tags can be specified as an array:
file {'/etc/hosts':
ensure => file,
source => 'puppet:///modules/site/hosts',
mode => 0644,
tag => ['bootstrap', 'minimumrun', 'mediumrun'],
}
Tags are useful for things like applying a subset of a host's configuration
with [the `tags` setting](/references/latest/configuration.html#tags)
(e.g. `puppet agent --test --tags bootstrap`) or filtering alerts with
[the `tagmail` report processor](http://docs.puppetlabs.com/references/latest/report.html#tagmail)."
munge do |tags|
tags = [tags] unless tags.is_a? Array
tags.each do |tag|
@resource.tag(tag)
end
end
end
# RelationshipMetaparam is an implementation supporting the meta-parameters `:require`, `:subscribe`,
# `:notify`, and `:before`.
#
#
class RelationshipMetaparam < Puppet::Parameter
class << self
attr_accessor :direction, :events, :callback, :subclasses
end
@subclasses = []
def self.inherited(sub)
@subclasses << sub
end
# @return [Array<Puppet::Resource>] turns attribute value(s) into list of resources
def munge(references)
references = [references] unless references.is_a?(Array)
references.collect do |ref|
if ref.is_a?(Puppet::Resource)
ref
else
Puppet::Resource.new(ref)
end
end
end
# Checks each reference to assert that what it references exists in the catalog.
#
# @raise [Puppet::ResourceError] if a referenced resource cannot be found in the catalog
# @return [void]
def validate_relationship
@value.each do |ref|
unless @resource.catalog.resource(ref.to_s)
description = self.class.direction == :in ? "dependency" : "dependent"
fail ResourceError, "Could not find #{description} #{ref} for #{resource.ref}"
end
end
end
# Creates edges for all relationships.
# The `:in` relationships are specified by the event-receivers, and `:out`
# relationships are specified by the event generator.
# @todo references to "event-receivers" and "event generator" means in this context - are those just
# the resources at the two ends of the relationship?
# This way 'source' and 'target' are consistent terms in both edges
# and events, i.e. an event targets edges whose source matches
# the event's source. The direction of the relationship determines
# which resource is applied first and which resource is considered
# to be the event generator.
# @return [Array<Puppet::Relationship>]
# @raise [Puppet::Error] when a reference cannot be resolved
#
def to_edges
@value.collect do |reference|
reference.catalog = resource.catalog
# Either of the two retrieval attempts could have returned
# nil.
unless related_resource = reference.resolve
self.fail "Could not retrieve dependency '#{reference}' of #{@resource.ref}"
end
# Are we requiring them, or vice versa? See the method docs
# for further info on this.
if self.class.direction == :in
source = related_resource
target = @resource
else
source = @resource
target = related_resource
end
if method = self.class.callback
subargs = {
:event => self.class.events,
:callback => method
}
self.debug("subscribes to #{related_resource.ref}")
else
# If there's no callback, there's no point in even adding
# a label.
subargs = nil
self.debug("requires #{related_resource.ref}")
end
Puppet::Relationship.new(source, target, subargs)
end
end
end
# Returns the subclasses of RelationshipMetaparam, i.e. the relationship meta-parameters
# (`:require`, `:subscribe`, `:notify`, and `:before`).
#
def self.relationship_params
RelationshipMetaparam.subclasses
end
# Note that the order in which the relationship params are defined
# matters. The labelled params (notify and subscribe) must come later,
# so that if both params are used, those ones win. It's a hackish
# solution, but it works.
newmetaparam(:require, :parent => RelationshipMetaparam, :attributes => {:direction => :in, :events => :NONE}) do
desc "One or more resources that this resource depends on, expressed as
[resource references](http://docs.puppetlabs.com/puppet/latest/reference/lang_datatypes.html#resource-references).
Multiple resources can be specified as an array of references. When this
attribute is present:
* The required resource(s) will be applied **before** this resource.
This is one of the four relationship metaparameters, along with
`before`, `notify`, and `subscribe`. For more context, including the
alternate chaining arrow (`->` and `~>`) syntax, see
[the language page on relationships](http://docs.puppetlabs.com/puppet/latest/reference/lang_relationships.html)."
end
newmetaparam(:subscribe, :parent => RelationshipMetaparam, :attributes => {:direction => :in, :events => :ALL_EVENTS, :callback => :refresh}) do
desc "One or more resources that this resource depends on, expressed as
[resource references](http://docs.puppetlabs.com/puppet/latest/reference/lang_datatypes.html#resource-references).
Multiple resources can be specified as an array of references. When this
attribute is present:
* The subscribed resource(s) will be applied _before_ this resource.
* If Puppet makes changes to any of the subscribed resources, it will cause
this resource to _refresh._ (Refresh behavior varies by resource
type: services will restart, mounts will unmount and re-mount, etc. Not
all types can refresh.)
This is one of the four relationship metaparameters, along with
`before`, `require`, and `notify`. For more context, including the
alternate chaining arrow (`->` and `~>`) syntax, see
[the language page on relationships](http://docs.puppetlabs.com/puppet/latest/reference/lang_relationships.html)."
end
newmetaparam(:before, :parent => RelationshipMetaparam, :attributes => {:direction => :out, :events => :NONE}) do
desc "One or more resources that depend on this resource, expressed as
[resource references](http://docs.puppetlabs.com/puppet/latest/reference/lang_datatypes.html#resource-references).
Multiple resources can be specified as an array of references. When this
attribute is present:
* This resource will be applied _before_ the dependent resource(s).
This is one of the four relationship metaparameters, along with
`require`, `notify`, and `subscribe`. For more context, including the
alternate chaining arrow (`->` and `~>`) syntax, see
[the language page on relationships](http://docs.puppetlabs.com/puppet/latest/reference/lang_relationships.html)."
end
newmetaparam(:notify, :parent => RelationshipMetaparam, :attributes => {:direction => :out, :events => :ALL_EVENTS, :callback => :refresh}) do
desc "One or more resources that depend on this resource, expressed as
[resource references](http://docs.puppetlabs.com/puppet/latest/reference/lang_datatypes.html#resource-references).
Multiple resources can be specified as an array of references. When this
attribute is present:
* This resource will be applied _before_ the notified resource(s).
* If Puppet makes changes to this resource, it will cause all of the
notified resources to _refresh._ (Refresh behavior varies by resource
type: services will restart, mounts will unmount and re-mount, etc. Not
all types can refresh.)
This is one of the four relationship metaparameters, along with
`before`, `require`, and `subscribe`. For more context, including the
alternate chaining arrow (`->` and `~>`) syntax, see
[the language page on relationships](http://docs.puppetlabs.com/puppet/latest/reference/lang_relationships.html)."
end
newmetaparam(:stage) do
desc %{Which run stage this class should reside in.
**Note: This metaparameter can only be used on classes,** and only when
declaring them with the resource-like syntax. It cannot be used on normal
resources or on classes declared with `include`.
By default, all classes are declared in the `main` stage. To assign a class
to a different stage, you must:
* Declare the new stage as a [`stage` resource](http://docs.puppetlabs.com/references/latest/type.html#stage).
* Declare an order relationship between the new stage and the `main` stage.
* Use the resource-like syntax to declare the class, and set the `stage`
metaparameter to the name of the desired stage.
For example:
stage { 'pre':
before => Stage['main'],
}
class { 'apt-updates':
stage => 'pre',
}
}
end
###############################
# All of the provider plumbing for the resource types.
require 'puppet/provider'
require 'puppet/util/provider_features'
# Add the feature handling module.
extend Puppet::Util::ProviderFeatures
# The provider that has been selected for the instance of the resource type.
# @return [Puppet::Provider,nil] the selected provider or nil, if none has been selected
#
attr_reader :provider
# the Type class attribute accessors
class << self
# The loader of providers to use when loading providers from disk.
# Although it looks like this attribute provides a way to operate with different loaders of
# providers that is not the case; the attribute is written when a new type is created,
# and should not be changed thereafter.
# @api private
#
attr_accessor :providerloader
# @todo Don't know if this is a name, or a reference to a Provider instance (now marked up as an instance
# of Provider).
# @return [Puppet::Provider, nil] The default provider for this type, or nil if none is defined
#
attr_writer :defaultprovider
end
# The default provider, or the most suitable provider if no default provider was set.
# @note a warning will be issued if no default provider has been configured and a search for the most
# suitable provider returns more than one equally suitable provider.
# @return [Puppet::Provider, nil] the default or most suitable provider, or nil if no provider was found
#
def self.defaultprovider
return @defaultprovider if @defaultprovider
suitable = suitableprovider
# Find which providers are a default for this system.
defaults = suitable.find_all { |provider| provider.default? }
# If we don't have any default we use suitable providers
defaults = suitable if defaults.empty?
max = defaults.collect { |provider| provider.specificity }.max
defaults = defaults.find_all { |provider| provider.specificity == max }
if defaults.length > 1
Puppet.warning(
"Found multiple default providers for #{self.name}: #{defaults.collect { |i| i.name.to_s }.join(", ")}; using #{defaults[0].name}"
)
end
@defaultprovider = defaults.shift unless defaults.empty?
end
# @return [Hash{Symbol => Puppet::Provider}] Returns the hash mapping provider name to provider class for the given type name
def self.provider_hash_by_type(type)
@provider_hashes ||= {}
@provider_hashes[type] ||= {}
end
# @return [Hash{Symbol => Puppet::Provider}] Returns the hash mapping provider name to provider class for this type.
# @see provider_hash_by_type method to get the same for some other type
def self.provider_hash
Puppet::Type.provider_hash_by_type(self.name)
end
# Returns the provider having the given name.
# This will load a provider if it is not already loaded. The returned provider is the first found provider
# having the given name, where "first found" semantics is defined by the {providerloader} in use.
#
# @param name [String] the name of the provider to get
# @return [Puppet::Provider, nil] the found provider, or nil if no provider of the given name was found
#
def self.provider(name)
name = name.intern
# If we don't have it yet, try loading it.
@providerloader.load(name) unless provider_hash.has_key?(name)
provider_hash[name]
end
# Returns a list of loaded providers by name.
# This method will not load/search for available providers.
# @return [Array<String>] list of loaded provider names
#
def self.providers
provider_hash.keys
end
# Returns true if the given name is a reference to a provider and if this is a suitable provider for
# this type.
# @todo How does the provider know if it is suitable for the type? Is it just suitable for the platform/
# environment where this method is executing?
# @param name [String] the name of the provider for which validity is checked
# @return [Boolean] true if the given name references a provider that is suitable
#
def self.validprovider?(name)
name = name.intern
(provider_hash.has_key?(name) && provider_hash[name].suitable?)
end
# Creates a new provider of a type.
# This method must be called directly on the type that it's implementing.
# A provider implements the management of resources of this type on a particular platform or with a
# particular tool; the created provider class applies to instances (resources) of this type.
# @param name [String, Symbol] the name of the provider to create
# @param options [Hash{Symbol => Object}] a hash of options, used by this method, and passed on to {#genclass}, (see
# it for additional options to pass).
# @option options [Puppet::Provider] :parent the parent provider class to inherit from (defaults to Puppet::Provider)
# @option options [Puppet::Type] :resource_type the resource type, defaults to this type if unspecified
# @return [Puppet::Provider] the created provider class
# @raise [Puppet::DevError] when the parent provider could not be found.
#
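# @example Illustrative sketch - both the `thing` type and the `thingctl` command are hypothetical
#   Puppet::Type.type(:thing).provide(:shell) do
#     desc "Manages thing resources using the thingctl command line tool."
#     commands :thingctl => 'thingctl'
#   end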
def self.provide(name, options = {}, &block)
name = name.intern
if unprovide(name)
Puppet.debug "Reloading #{name} #{self.name} provider"
end
parent = if pname = options[:parent]
options.delete(:parent)
if pname.is_a? Class
pname
else
if provider = self.provider(pname)
provider
else
raise Puppet::DevError,
"Could not find parent provider #{pname} of #{name}"
end
end
else
Puppet::Provider
end
options[:resource_type] ||= self
self.providify
provider = genclass(
name,
:parent => parent,
:hash => provider_hash,
:prefix => "Provider",
:block => block,
:include => feature_module,
:extend => feature_module,
:attributes => options
)
provider
end
# Ensures there is a `:provider` parameter defined.
# Should only be called if there are providers.
# @return [void]
def self.providify
return if @paramhash.has_key? :provider
newparam(:provider) do
# We're using a hacky way to get the name of our type, since there doesn't
# seem to be a correct way to introspect this at the time this code is run.
# We expect that the class in which this code is executed will be something
# like Puppet::Type::Ssh_authorized_key::ParameterProvider.
desc <<-EOT
The specific backend to use for this `#{self.to_s.split('::')[2].downcase}`
resource. You will seldom need to specify this --- Puppet will usually
discover the appropriate provider for your platform.
EOT
# This is so we can refer back to the type to get a list of
# providers for documentation.
class << self
# The reference to a parent type for the parameter `:provider` used to get a list of
# providers for documentation purposes.
#
attr_accessor :parenttype
end
# Provides the ability to add documentation to a provider.
#
def self.doc
# Since we're mixing @doc with text from other sources, we must normalize
# its indentation with scrub. But we don't need to manually scrub the
# provider's doc string, since markdown_definitionlist sanitizes its inputs.
scrub(@doc) + "Available providers are:\n\n" + parenttype.providers.sort { |a,b|
a.to_s <=> b.to_s
}.collect { |i|
markdown_definitionlist( i, scrub(parenttype().provider(i).doc) )
}.join
end
- # @todo this does what? where and how?
- # @return [String] the name of the provider
+ # For each resource, the provider param defaults to
+ # the type's default provider
defaultto {
prov = @resource.class.defaultprovider
prov.name if prov
}
validate do |provider_class|
provider_class = provider_class[0] if provider_class.is_a? Array
provider_class = provider_class.class.name if provider_class.is_a?(Puppet::Provider)
unless @resource.class.provider(provider_class)
raise ArgumentError, "Invalid #{@resource.class.name} provider '#{provider_class}'"
end
end
munge do |provider|
provider = provider[0] if provider.is_a? Array
provider = provider.intern if provider.is_a? String
@resource.provider = provider
if provider.is_a?(Puppet::Provider)
provider.class.name
else
provider
end
end
end.parenttype = self
end
# @todo this needs a better explanation
# Removes the implementation class of a given provider.
# @return [Object] returns what {Puppet::Util::ClassGen#rmclass} returns
def self.unprovide(name)
if @defaultprovider and @defaultprovider.name == name
@defaultprovider = nil
end
rmclass(name, :hash => provider_hash, :prefix => "Provider")
end
# Returns a list of suitable providers for this type.
# A call to this method will load all providers if not already loaded and ask each whether it is
# suitable - those that are suitable are included in the result.
# @note This method also does some special processing which rejects a provider named `:fake` (for testing purposes).
# @return [Array<Puppet::Provider>] Returns an array of all suitable providers.
#
def self.suitableprovider
providerloader.loadall if provider_hash.empty?
provider_hash.find_all { |name, provider|
provider.suitable?
}.collect { |name, provider|
provider
}.reject { |p| p.name == :fake } # For testing
end
# @return [Boolean] Returns true if this type does not use providers, if the resource's provider is
# suitable, or if there is a default provider that can be used. Otherwise, false is returned.
#
def suitable?
# If we don't use providers, then we consider it suitable.
return true unless self.class.paramclass(:provider)
# We have a provider and it is suitable.
return true if provider && provider.class.suitable?
# We're using the default provider and there is one.
if !provider and self.class.defaultprovider
self.provider = self.class.defaultprovider.name
return true
end
# We specified an unsuitable provider, or there isn't any suitable
# provider.
false
end
# Sets the provider to the given provider/name.
# @overload provider=(name)
# Sets the provider to the result of resolving the name to an instance of Provider.
# @param name [String] the name of the provider
# @overload provider=(provider)
# Sets the provider to the given instance of Provider.
# @param provider [Puppet::Provider] the provider to set
# @return [Puppet::Provider] the provider set
# @raise [ArgumentError] if the provider could not be found/resolved.
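# @example Setting the provider by name (an illustrative sketch; `resource` and the provider
#   name are hypothetical):
#   resource.provider = :posix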
#
def provider=(name)
if name.is_a?(Puppet::Provider)
@provider = name
@provider.resource = self
elsif klass = self.class.provider(name)
@provider = klass.new(self)
else
raise ArgumentError, "Could not find #{name} provider of #{self.class.name}"
end
end
###############################
# All of the relationship code.
# Adds a block producing a single name (or list of names) of the given resource type name to autorequire.
+ # Resources in the catalog that have the named type and a title that is included in the result will be linked
+ # to the calling resource as a requirement.
+ #
# @example Autorequire the files File['foo', 'bar']
# autorequire(:file) { ['foo', 'bar'] }
#
- # @todo original = _"Specify a block for generating a list of objects to autorequire.
- # This makes it so that you don't have to manually specify things that you clearly require."_
# @param name [String] the name of a type of which one or several resources should be autorequired e.g. "file"
# @yield [ ] a block returning a list of names of the given type to autorequire
# @yieldreturn [String, Array<String>] one or several resource names for the named type
# @return [void]
# @dsl type
# @api public
#
def self.autorequire(name, &block)
@autorequires ||= {}
@autorequires[name] = block
end
# Provides iteration over added auto-requirements (see {autorequire}).
# @yieldparam type [String] the name of the type to autorequire an instance of
# @yieldparam block [Proc] a block producing one or several dependencies to auto require (see {autorequire}).
# @yieldreturn [void]
# @return [void]
def self.eachautorequire
@autorequires ||= {}
@autorequires.each { |type, block|
yield(type, block)
}
end
# Adds dependencies to the catalog from added autorequirements.
# See {autorequire} for how to add an auto-requirement.
# @todo needs details - see the param rel_catalog, and type of this param
# @param rel_catalog [Puppet::Resource::Catalog, nil] the catalog to
# add dependencies to. Defaults to the current catalog (set when the
# type instance was added to a catalog)
# @raise [Puppet::DevError] if there is no catalog
#
def autorequire(rel_catalog = nil)
rel_catalog ||= catalog
raise(Puppet::DevError, "You cannot add relationships without a catalog") unless rel_catalog
reqs = []
self.class.eachautorequire { |type, block|
# Ignore any types we can't find, although that would be a bit odd.
next unless Puppet::Type.type(type)
# Retrieve the list of names from the block.
next unless list = self.instance_eval(&block)
list = [list] unless list.is_a?(Array)
# Collect the current prereqs
list.each { |dep|
# Support them passing objects directly, to save some effort.
unless dep.is_a? Puppet::Type
# Skip autorequires that we aren't managing
unless dep = rel_catalog.resource(type, dep)
next
end
end
reqs << Puppet::Relationship.new(dep, self)
}
}
reqs
end
- # Builds the dependencies associated with an individual object.
- # @todo Which object is the "individual object", as opposed to "object as a group?" or should it simply
- # be "this object" as in "this resource" ?
- # @todo Does this method "build dependencies" or "build what it depends on" ... CONFUSING
+ # Builds the dependencies associated with this resource.
#
- # @return [Array<???>] list of WHAT? resources? edges?
+ # @return [Array<Puppet::Relationship>] list of relationships to other resources
def builddepends
# Handle the requires
self.class.relationship_params.collect do |klass|
if param = @parameters[klass.name]
param.to_edges
end
end.flatten.reject { |r| r.nil? }
end
- # Sets the initial list of tags...
- # @todo The initial list of tags, that ... that what?
+ # Sets the initial list of tags to associate to this resource.
+ #
# @return [void]
def tags=(list)
tag(self.class.name)
tag(*list)
end
# @comment - these two comments were floating around here, and turned up as documentation
# for the attribute "title", much to my surprise and amusement. Clearly these comments
# are orphaned ... I think they can just be removed as what they say should be covered
# by the now added yardoc. <irony>(Yo! to quote some of the other actual awesome specific comments applicable
# to objects called from elsewhere, or not. ;-)</irony>
#
# @comment Types (which map to resources in the languages) are entirely composed of
# attribute value pairs. Generally, Puppet calls any of these things an
# 'attribute', but these attributes always take one of three specific
# forms: parameters, metaparams, or properties.
# @comment In naming methods, I have tried to consistently name the method so
# that it is clear whether it operates on all attributes (thus has 'attr' in
# the method name), or whether it operates on a specific type of attribute.
# Sets the title of this resource instance.
# @return [String] the title
attr_writer :title
# Sets the noop flag of this resource instance. When true, the resource is
# evaluated but no changes are applied.
# @return [Boolean] the noop flag
#
attr_writer :noop
include Enumerable
# class methods dealing with Type management
public
# The Type class attribute accessors
class << self
# @return [String] the name of the resource type; e.g., "File"
#
attr_reader :name
# @return [Boolean] true if the type should send itself a refresh event on change.
#
attr_accessor :self_refresh
include Enumerable, Puppet::Util::ClassGen
include Puppet::MetaType::Manager
include Puppet::Util
include Puppet::Util::Logging
end
# Initializes all of the variables that must be initialized for each subclass.
# @todo Does the explanation make sense?
# @return [void]
def self.initvars
# all of the instances of this class
@objects = Hash.new
@aliases = Hash.new
@defaults = {}
@parameters ||= []
@validproperties = {}
@properties = []
@parameters = []
@paramhash = {}
@paramdoc = Hash.new { |hash,key|
key = key.intern if key.is_a?(String)
if hash.include?(key)
hash[key]
else
"Param Documentation for #{key} not found"
end
}
@doc ||= ""
end
# Returns the name of this type (if specified) or the parent type #to_s.
# The returned name is in the form "Puppet::Type::<name>", where the first letter of name is
# capitalized.
# @return [String] the fully qualified name Puppet::Type::<name> where the first letter of name is capitalized
#
def self.to_s
if defined?(@name)
"Puppet::Type::#{@name.to_s.capitalize}"
else
super
end
end
# Creates a `validate` method that is used to validate a resource before it is operated on.
# The validation should raise exceptions if the validation finds errors. (It is not recommended to
# issue warnings as this typically just ends up in a logfile - you should fail if a validation fails).
# The easiest way to raise an appropriate exception is to call the method {Puppet::Util::Errors.fail} with
# the message as an argument.
#
# @yield [ ] a required block called with self set to the instance of a Type class representing a resource.
# @return [void]
# @dsl type
# @api public
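# @example A minimal sketch (hypothetical attribute names) of a type-level validation:
#   validate do
#     fail "You cannot specify both content and source" if self[:content] && self[:source]
#   end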
#
def self.validate(&block)
define_method(:validate, &block)
end
# @return [String] The file from which this type originates
attr_accessor :file
# @return [Integer] The line in {#file} from which this type originates
attr_accessor :line
# @return [Puppet::Resource::Catalog] The catalog that this resource instance is stored in.
attr_accessor :catalog
# @return [Boolean] Flag indicating if this type is exported
attr_accessor :exported
# @return [Boolean] Flag indicating if the resource is virtual (a virtual resource is not applied unless it is realized).
attr_accessor :virtual
# Creates a log entry with the given message at the log level specified by the parameter `loglevel`.
# @param msg [String] the message to log
# @return [void]
#
def log(msg)
Puppet::Util::Log.create(
:level => @parameters[:loglevel].value,
:message => msg,
:source => self
)
end
# instance methods related to instance intrinsics
# e.g., initialize and name
public
# @return [Hash] hash of parameters originally defined
# @api private
attr_reader :original_parameters
# Creates an instance of Type from a hash or a {Puppet::Resource}.
# @todo Unclear if this is a new Type or a new instance of a given type (the initialization ends
# with calling validate - which seems like validation of an instance of a given type, not a new
# meta type.
#
# @todo Explain what the Hash and Resource are. There seems to be two different types of
# resources; one that causes the title to be set to resource.title, and one that
# causes the title to be resource.ref ("for components") - what is a component?
#
# @overload initialize(hash)
# @param [Hash] hash
# @raise [Puppet::ResourceError] when the type validation raises
# Puppet::Error or ArgumentError
# @overload initialize(resource)
# @param resource [Puppet::Resource]
# @raise [Puppet::ResourceError] when the type validation raises
# Puppet::Error or ArgumentError
#
def initialize(resource)
resource = self.class.hash2resource(resource) unless resource.is_a?(Puppet::Resource)
# The list of parameter/property instances.
@parameters = {}
# Set the title first, so any failures print correctly.
if resource.type.to_s.downcase.to_sym == self.class.name
self.title = resource.title
else
# This should only ever happen for components
self.title = resource.ref
end
[:file, :line, :catalog, :exported, :virtual].each do |getter|
setter = getter.to_s + "="
if val = resource.send(getter)
self.send(setter, val)
end
end
@tags = resource.tags
@original_parameters = resource.to_hash
set_name(@original_parameters)
set_default(:provider)
set_parameters(@original_parameters)
begin
self.validate if self.respond_to?(:validate)
rescue Puppet::Error, ArgumentError => detail
error = Puppet::ResourceError.new("Validation of #{ref} failed: #{detail}")
adderrorcontext(error, detail)
raise error
end
end
private
# Sets the name of the resource from a hash containing a mapping of `name_var` to value.
# Sets the value of the property/parameter appointed by the `name_var` (if it is defined). The value set is
# given by the corresponding entry in the given hash - e.g. if name_var appoints the name `:path` the value
# of `:path` is set to the value at the key `:path` in the given hash. As a side effect this key/value is then
# removed from the given hash.
#
# @note This method mutates the given hash by removing the entry with a key equal to the value
# returned from name_var!
# @param hash [Hash] a hash of attribute names to values
# @return [void]
def set_name(hash)
self[name_var] = hash.delete(name_var) if name_var
end
# Sets parameters from the given hash.
# Values are set in _attribute order_ i.e. higher priority attributes before others, otherwise in
# the order they were specified (as opposed to just setting them in the order they happen to appear in
# when iterating over the given hash).
#
# Attributes that are not included in the given hash are set to their default value.
#
# @todo Is this description accurate? Is "ensure" an example of such a higher priority attribute?
# @return [void]
# @raise [Puppet::DevError] when impossible to set the value due to some problem
# @raise [ArgumentError, TypeError, Puppet::Error] when faulty arguments have been passed
#
def set_parameters(hash)
# Use the order provided by allattrs, but add in any
# extra attributes from the resource so we get failures
# on invalid attributes.
no_values = []
(self.class.allattrs + hash.keys).uniq.each do |attr|
begin
# Set any defaults immediately. This is mostly done so
# that the default provider is available for any other
# property validation.
if hash.has_key?(attr)
self[attr] = hash[attr]
else
no_values << attr
end
rescue ArgumentError, Puppet::Error, TypeError
raise
rescue => detail
error = Puppet::DevError.new( "Could not set #{attr} on #{self.class.name}: #{detail}")
error.set_backtrace(detail.backtrace)
raise error
end
end
no_values.each do |attr|
set_default(attr)
end
end
public
# Finishes any outstanding processing.
# This method should be called as a final step in setup,
# to allow the parameters that have associated auto-require needs to be processed.
#
# @todo what is the expected sequence here - who is responsible for calling this? When?
# Is the returned type correct?
# @return [Array<Puppet::Parameter>] the validated list/set of attributes
#
def finish
# Call post_compile hook on every parameter that implements it. This includes all subclasses
# of parameter including, but not limited to, regular parameters, metaparameters, relationship
# parameters, and properties.
eachparameter do |parameter|
parameter.post_compile if parameter.respond_to? :post_compile
end
# Make sure all of our relationships are valid. Again, must be done
# when the entire catalog is instantiated.
self.class.relationship_params.collect do |klass|
if param = @parameters[klass.name]
param.validate_relationship
end
end.flatten.reject { |r| r.nil? }
end
# @comment For now, leave the 'name' method functioning like it used to. Once 'title'
# works everywhere, I'll switch it.
# Returns the resource's name
# @todo There is a comment in source that this is not quite the same as ':title' and that a switch should
# be made...
# @return [String] the name of a resource
def name
self[:name]
end
# Returns the parent of this in the catalog. In case of an erroneous catalog
# where multiple parents have been produced, the first found (non
# deterministic) parent is returned.
# @return [Puppet::Type, nil] the
# containing resource or nil if there is no catalog or no containing
# resource.
def parent
return nil unless catalog
@parent ||=
if parents = catalog.adjacent(self, :direction => :in)
parents.shift
else
nil
end
end
# Returns a reference to this as a string in "Type[name]" format.
# @return [String] a reference to this object on the form 'Type[name]'
#
def ref
# memoizing this is worthwhile ~ 3 percent of calls are the "first time
# around" in an average run of Puppet. --daniel 2012-07-17
@ref ||= "#{self.class.name.to_s.capitalize}[#{self.title}]"
end
# (see self_refresh)
# @return [Boolean] true if the type should send itself a refresh event on change
#   (delegates to self.class.self_refresh).
#
def self_refresh?
self.class.self_refresh
end
# Marks the object as "being purged".
# This method is used by transactions to forbid deletion when there are dependencies.
# @todo what does this mean; "mark that we are purging" (purging what from where). How to use/when?
# Is this internal API in transactions?
# @see purging?
def purging
@purging = true
end
# Returns whether this resource is being purged or not.
# This method is used by transactions to forbid deletion when there are dependencies.
# @return [Boolean] the current "purging" state
#
def purging?
if defined?(@purging)
@purging
else
false
end
end
# Returns the title of this object, or its name if title was not explicitly set.
# If the title is not already set, it will be computed by looking up the {#name_var} and using
# that value as the title.
# @todo it is somewhat confusing that if the name_var is a valid parameter, it is assumed to
# be the name_var called :name, but if it is a property, it uses the name_var.
# It is further confusing as Type in some respects supports multiple namevars.
#
# @return [String] Returns the title of this object, or its name if title was not explicitly set.
# @raise [Puppet::DevError] if title is not set and name_var can not be found (via devfail).
def title
unless @title
if self.class.validparameter?(name_var)
@title = self[:name]
elsif self.class.validproperty?(name_var)
@title = self.should(name_var)
else
self.devfail "Could not find namevar #{name_var} for #{self.class.name}"
end
end
@title
end
# Produces a reference to this in reference format.
# @see #ref
#
def to_s
self.ref
end
- # @todo What to resource? Which one of the resource forms is prroduced? returned here?
- # @return [??? Resource] a resource that WHAT???
+ # Convert this resource type instance to a Puppet::Resource.
+ # @return [Puppet::Resource] Returns a serializable representation of this resource
#
def to_resource
resource = self.retrieve_resource
resource.tag(*self.tags)
@parameters.each do |name, param|
# Avoid adding each instance name twice
next if param.class.isnamevar? and param.value == self.title
# We've already got property values
next if param.is_a?(Puppet::Property)
resource[name] = param.value
end
resource
end
# @return [Boolean] Returns whether the resource is virtual or not
def virtual?; !!@virtual; end
# @return [Boolean] Returns whether the resource is exported or not
def exported?; !!@exported; end
# @return [Boolean] Returns whether the resource is applicable to `:device`
# Returns true if a resource of this type can be evaluated on a 'network device' kind
# of host.
# @api private
def appliable_to_device?
self.class.can_apply_to(:device)
end
# @return [Boolean] Returns whether the resource is applicable to `:host`
# Returns true if a resource of this type can be evaluated on a regular generalized computer (i.e., not an appliance like a network device)
# @api private
def appliable_to_host?
self.class.can_apply_to(:host)
end
end
end
require 'puppet/provider'
diff --git a/lib/puppet/type/exec.rb b/lib/puppet/type/exec.rb
index 732a0c1c5..325219bf4 100644
--- a/lib/puppet/type/exec.rb
+++ b/lib/puppet/type/exec.rb
@@ -1,564 +1,592 @@
module Puppet
newtype(:exec) do
include Puppet::Util::Execution
require 'timeout'
@doc = "Executes external commands.
Any command in an `exec` resource **must** be able to run multiple times
without causing harm --- that is, it must be *idempotent*. There are three
main ways for an exec to be idempotent:
* The command itself is already idempotent. (For example, `apt-get update`.)
* The exec has an `onlyif`, `unless`, or `creates` attribute, which prevents
Puppet from running the command unless some condition is met.
* The exec has `refreshonly => true`, which only allows Puppet to run the
command when some other resource is changed. (See the notes on refreshing
below.)
A caution: There's a widespread tendency to use collections of execs to
manage resources that aren't covered by an existing resource type. This
works fine for simple tasks, but once your exec pile gets complex enough
that you really have to think to understand what's happening, you should
consider developing a custom resource type instead, as it will be much
more predictable and maintainable.
**Refresh:** `exec` resources can respond to refresh events (via
`notify`, `subscribe`, or the `~>` arrow). The refresh behavior of execs
is non-standard, and can be affected by the `refresh` and
`refreshonly` attributes:
* If `refreshonly` is set to true, the exec will _only_ run when it receives an
event. This is the most reliable way to use refresh with execs.
* If the exec already would have run and receives an event, it will run its
command **up to two times.** (If an `onlyif`, `unless`, or `creates` condition
is no longer met after the first run, the second run will not occur.)
* If the exec already would have run, has a `refresh` command, and receives an
event, it will run its normal command, then run its `refresh` command
(as long as any `onlyif`, `unless`, or `creates` conditions are still met
after the normal command finishes).
* If the exec would **not** have run (due to an `onlyif`, `unless`, or `creates`
attribute) and receives an event, it still will not run.
* If the exec has `noop => true`, would otherwise have run, and receives
an event from a non-noop resource, it will run once (or run its `refresh`
command instead, if it has one).
In short: If there's a possibility of your exec receiving refresh events,
it becomes doubly important to make sure the run conditions are restricted.
**Autorequires:** If Puppet is managing an exec's cwd or the executable
file used in an exec's command, the exec resource will autorequire those
files. If Puppet is managing the user that an exec should run as, the
exec resource will autorequire that user."
# Create a new check mechanism. It's basically just a parameter that
# provides one extra 'check' method.
def self.newcheck(name, options = {}, &block)
@checks ||= {}
check = newparam(name, options, &block)
@checks[name] = check
end
def self.checks
@checks.keys
end
newproperty(:returns, :array_matching => :all, :event => :executed_command) do |property|
include Puppet::Util::Execution
munge do |value|
value.to_s
end
def event_name
:executed_command
end
defaultto "0"
attr_reader :output
- desc "The expected return code(s). An error will be returned if the
- executed command returns something else. Defaults to 0. Can be
- specified as an array of acceptable return codes or a single value."
+ desc "The expected exit code(s). An error will be returned if the
+ executed command has some other exit code. Defaults to 0. Can be
+ specified as an array of acceptable exit codes or a single value.
+
+ On POSIX systems, exit codes are always integers between 0 and 255.
+
+ On Windows, **most** exit codes should be integers between 0
+ and 2147483647.
+
+ Larger exit codes on Windows can behave inconsistently across different
+ tools. The Win32 APIs define exit codes as 32-bit unsigned integers, but
+ both the cmd.exe shell and the .NET runtime cast them to signed
+ integers. This means some tools will report negative numbers for exit
+ codes above 2147483647. (For example, cmd.exe reports 4294967295 as -1.)
+ Since Puppet uses the plain Win32 APIs, it will report the very large
+ number instead of the negative number, which might not be what you
+ expect if you got the exit code from a cmd.exe session.
+
+ Microsoft recommends against using negative/very large exit codes, and
+ you should avoid them when possible. To convert a negative exit code to
+ the positive one Puppet will use, add it to 4294967296."
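+ # For example (illustrative arithmetic): an exit code that cmd.exe reports as -1
+ # corresponds to -1 + 4294967296 = 4294967295, which is the value to list in `returns`.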
# Make output a bit prettier
def change_to_s(currentvalue, newvalue)
"executed successfully"
end
# First verify that all of our checks pass.
def retrieve
# We need to return :notrun to trigger evaluation; when that isn't
# true, we *LIE* about what happened and return a "success" for the
# value, which causes us to be treated as in_sync?, which means we
# don't actually execute anything. I think. --daniel 2011-03-10
if @resource.check_all_attributes
return :notrun
else
return self.should
end
end
# Actually execute the command.
def sync
event = :executed_command
tries = self.resource[:tries]
try_sleep = self.resource[:try_sleep]
begin
tries.times do |try|
# Only add debug messages for tries > 1 to reduce log spam.
debug("Exec try #{try+1}/#{tries}") if tries > 1
@output, @status = provider.run(self.resource[:command])
break if self.should.include?(@status.exitstatus.to_s)
if try_sleep > 0 and tries > 1
debug("Sleeping for #{try_sleep} seconds between tries")
sleep try_sleep
end
end
rescue Timeout::Error
self.fail Puppet::Error, "Command exceeded timeout", $!
end
if log = @resource[:logoutput]
case log
when :true
log = @resource[:loglevel]
when :on_failure
unless self.should.include?(@status.exitstatus.to_s)
log = @resource[:loglevel]
else
log = :false
end
end
unless log == :false
@output.split(/\n/).each { |line|
self.send(log, line)
}
end
end
unless self.should.include?(@status.exitstatus.to_s)
self.fail("#{self.resource[:command]} returned #{@status.exitstatus} instead of one of [#{self.should.join(",")}]")
end
event
end
end
newparam(:command) do
isnamevar
desc "The actual command to execute. Must either be fully qualified
or a search path for the command must be provided. If the command
succeeds, any output produced will be logged at the instance's
normal log level (usually `notice`), but if the command fails
(meaning its return code does not match the specified code) then
any output is logged at the `err` log level."
validate do |command|
raise ArgumentError, "Command must be a String, got value of class #{command.class}" unless command.is_a? String
end
end
newparam(:path) do
desc "The search path used for command execution.
Commands must be fully qualified if no path is specified. Paths
can be specified as an array or as a '#{File::PATH_SEPARATOR}' separated list."
# Support both arrays and colon-separated fields.
def value=(*values)
@value = values.flatten.collect { |val|
val.split(File::PATH_SEPARATOR)
}.flatten
end
end
newparam(:user) do
desc "The user to run the command as. Note that if you
use this then any error output is not currently captured. This
is because of a bug within Ruby. If you are using Puppet to
create this user, the exec will automatically require the user,
as long as it is specified by name.
Please note that the $HOME environment variable is not automatically set
when using this attribute."
- # Most validation is handled by the SUIDManager class.
validate do |user|
- self.fail "Only root can execute commands as other users" unless Puppet.features.root?
- self.fail "Unable to execute commands as other users on Windows" if Puppet.features.microsoft_windows?
+ if Puppet.features.microsoft_windows?
+ self.fail "Unable to execute commands as other users on Windows"
+ elsif !Puppet.features.root? && resource.current_username() != user
+ self.fail "Only root can execute commands as other users"
+ end
end
end
newparam(:group) do
desc "The group to run the command as. This seems to work quite
haphazardly on different platforms -- it is a platform issue
not a Ruby or Puppet one, since the same variety exists when
running commands as different users in the shell."
# Validation is handled by the SUIDManager class.
end
newparam(:cwd, :parent => Puppet::Parameter::Path) do
desc "The directory from which to run the command. If
this directory does not exist, the command will fail."
end
newparam(:logoutput) do
desc "Whether to log command output in addition to logging the
exit code. Defaults to `on_failure`, which only logs the output
when the command has an exit code that does not match any value
specified by the `returns` attribute. As with any resource type,
the log level can be controlled with the `loglevel` metaparameter."
defaultto :on_failure
newvalues(:true, :false, :on_failure)
end
newparam(:refresh) do
desc "How to refresh this command. By default, the exec is just
called again when it receives an event from another resource,
but this parameter allows you to define a different command
for refreshing."
validate do |command|
provider.validatecmd(command)
end
end
newparam(:environment) do
desc "Any additional environment variables you want to set for a
command. Note that if you use this to set PATH, it will override
the `path` attribute. Multiple environment variables should be
specified as an array."
validate do |values|
values = [values] unless values.is_a? Array
values.each do |value|
unless value =~ /\w+=/
raise ArgumentError, "Invalid environment setting '#{value}'"
end
end
end
end
newparam(:umask, :required_feature => :umask) do
desc "Sets the umask to be used while executing this command"
munge do |value|
if value =~ /^0?[0-7]{1,4}$/
return value.to_i(8)
else
raise Puppet::Error, "The umask specification is invalid: #{value.inspect}"
end
end
end
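# A worked example of the umask munge above (illustrative): the string "022" matches the
# pattern and is converted with to_i(8), yielding the integer 18.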
newparam(:timeout) do
desc "The maximum time the command should take. If the command takes
longer than the timeout, the command is considered to have failed
and will be stopped. The timeout is specified in seconds. The default
timeout is 300 seconds and you can set it to 0 to disable the timeout."
munge do |value|
value = value.shift if value.is_a?(Array)
begin
value = Float(value)
rescue ArgumentError
raise ArgumentError, "The timeout must be a number.", $!.backtrace
end
[value, 0.0].max
end
defaultto 300
end
newparam(:tries) do
desc "The number of times execution of the command should be tried.
Defaults to '1'. This many attempts will be made to execute
the command until an acceptable return code is returned.
Note that the timeout parameter applies to each try rather than
to the complete set of tries."
munge do |value|
if value.is_a?(String)
unless value =~ /^[\d]+$/
raise ArgumentError, "Tries must be an integer"
end
value = Integer(value)
end
raise ArgumentError, "Tries must be an integer >= 1" if value < 1
value
end
defaultto 1
end
newparam(:try_sleep) do
desc "The time to sleep in seconds between 'tries'."
munge do |value|
if value.is_a?(String)
unless value =~ /^[-\d.]+$/
raise ArgumentError, "try_sleep must be a number"
end
value = Float(value)
end
raise ArgumentError, "try_sleep cannot be a negative number" if value < 0
value
end
defaultto 0
end
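# An illustrative manifest (hypothetical command and values) combining tries and try_sleep,
# retrying up to three times with a ten-second pause between attempts:
#   exec { '/usr/local/bin/flaky-reload':
#     tries     => 3,
#     try_sleep => 10,
#   }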
newcheck(:refreshonly) do
desc <<-'EOT'
The command should only be run as a
refresh mechanism for when a dependent object is changed. It only
makes sense to use this option when this command depends on some
other object; it is useful for triggering an action:
# Pull down the main aliases file
file { "/etc/aliases":
source => "puppet://server/module/aliases"
}
# Rebuild the database, but only when the file changes
exec { newaliases:
path => ["/usr/bin", "/usr/sbin"],
subscribe => File["/etc/aliases"],
refreshonly => true
}
Note that only `subscribe` and `notify` can trigger actions, not `require`,
so it only makes sense to use `refreshonly` with `subscribe` or `notify`.
EOT
newvalues(:true, :false)
# We always fail this test, because we're only supposed to run
# on refresh.
def check(value)
# We have to invert the values.
if value == :true
false
else
true
end
end
end
newcheck(:creates, :parent => Puppet::Parameter::Path) do
desc <<-'EOT'
A file to look for before running the command. The command will
only run if the file **doesn't exist.**
This parameter doesn't cause Puppet to create a file; it is only
useful if **the command itself** creates a file.
exec { "tar -xf /Volumes/nfs02/important.tar":
cwd => "/var/tmp",
creates => "/var/tmp/myfile",
path => ["/usr/bin", "/usr/sbin"]
}
In this example, `myfile` is assumed to be a file inside
`important.tar`. If it is ever deleted, the exec will bring it
back by re-extracting the tarball. If `important.tar` does **not**
actually contain `myfile`, the exec will keep running every time
Puppet runs.
EOT
accept_arrays
# If the file exists, return false (i.e., don't run the command),
# else return true
def check(value)
! Puppet::FileSystem.exist?(value)
end
end
newcheck(:unless) do
desc <<-'EOT'
If this parameter is set, then this `exec` will run unless
- the command returns 0. For example:
+ the command has an exit code of 0. For example:
exec { "/bin/echo root >> /usr/lib/cron/cron.allow":
path => "/usr/bin:/usr/sbin:/bin",
unless => "grep root /usr/lib/cron/cron.allow 2>/dev/null"
}
This would add `root` to the cron.allow file (on Solaris) unless
`grep` determines it's already there.
Note that this command follows the same rules as the main command,
which is to say that it must be fully qualified if the path is not set.
+ It also uses the same provider as the main command, so any behavior
+ that differs by provider will match.
EOT
validate do |cmds|
cmds = [cmds] unless cmds.is_a? Array
cmds.each do |command|
provider.validatecmd(command)
end
end
# Return true if the command does not return 0.
def check(value)
begin
output, status = provider.run(value, true)
rescue Timeout::Error
err "Check #{value.inspect} exceeded timeout"
return false
end
output.split(/\n/).each { |line|
self.debug(line)
}
status.exitstatus != 0
end
end
newcheck(:onlyif) do
desc <<-'EOT'
If this parameter is set, then this `exec` will only run if
- the command returns 0. For example:
+ the command has an exit code of 0. For example:
exec { "logrotate":
path => "/usr/bin:/usr/sbin:/bin",
onlyif => "test `du /var/log/messages | cut -f1` -gt 100000"
}
This would run `logrotate` only if that test returned true.
Note that this command follows the same rules as the main command,
which is to say that it must be fully qualified if the path is not set.
+ It also uses the same provider as the main command, so any behavior
+ that differs by provider will match.
Also note that onlyif can take an array as its value, e.g.:
onlyif => ["test -f /tmp/file1", "test -f /tmp/file2"]
This will only run the exec if _all_ conditions in the array return true.
EOT
validate do |cmds|
cmds = [cmds] unless cmds.is_a? Array
cmds.each do |command|
provider.validatecmd(command)
end
end
# Return true if the command returns 0.
def check(value)
begin
output, status = provider.run(value, true)
rescue Timeout::Error
err "Check #{value.inspect} exceeded timeout"
return false
end
output.split(/\n/).each { |line|
self.debug(line)
}
status.exitstatus == 0
end
end
# Exec names are not isomorphic with the objects.
@isomorphic = false
validate do
provider.validatecmd(self[:command])
end
# FIXME exec should autorequire any exec that 'creates' our cwd
autorequire(:file) do
reqs = []
# Stick the cwd in there if we have it
reqs << self[:cwd] if self[:cwd]
file_regex = Puppet.features.microsoft_windows? ? %r{^([a-zA-Z]:[\\/]\S+)} : %r{^(/\S+)}
self[:command].scan(file_regex) { |str|
reqs << str
}
self[:command].scan(/^"([^"]+)"/) { |str|
reqs << str
}
[:onlyif, :unless].each { |param|
next unless tmp = self[param]
tmp = [tmp] unless tmp.is_a? Array
tmp.each do |line|
# And search the command line for files, adding any we
# find. This will also catch the command itself if it's
# fully qualified. It might not be a bad idea to add
# unqualified files, but, well, that's a bit more annoying
# to do.
reqs += line.scan(file_regex)
end
}
# For some reason, the += isn't causing a flattening
reqs.flatten!
reqs
end
autorequire(:user) do
# Autorequire users if they are specified by name
if user = self[:user] and user !~ /^\d+$/
user
end
end
def self.instances
[]
end
# Verify that we pass all of the checks. The argument determines whether
# we skip the :refreshonly check, which is necessary because we now check
# within refresh
def check_all_attributes(refreshing = false)
self.class.checks.each { |check|
next if refreshing and check == :refreshonly
if @parameters.include?(check)
val = @parameters[check].value
val = [val] unless val.is_a? Array
val.each do |value|
return false unless @parameters[check].check(value)
end
end
}
true
end
def output
if self.property(:returns).nil?
return nil
else
return self.property(:returns).output
end
end
# Run the command, or optionally run a separately-specified command.
def refresh
if self.check_all_attributes(true)
if cmd = self[:refresh]
provider.run(cmd)
else
self.property(:returns).sync
end
end
end
+
+ def current_username
+ Etc.getpwuid(Process.uid).name
+ end
end
end
diff --git a/lib/puppet/type/file.rb b/lib/puppet/type/file.rb
index 9f2d46b8d..851b97312 100644
--- a/lib/puppet/type/file.rb
+++ b/lib/puppet/type/file.rb
@@ -1,893 +1,933 @@
require 'digest/md5'
require 'cgi'
require 'etc'
require 'uri'
require 'fileutils'
require 'enumerator'
require 'pathname'
require 'puppet/parameter/boolean'
require 'puppet/util/diff'
require 'puppet/util/checksums'
require 'puppet/util/backups'
require 'puppet/util/symbolic_file_mode'
Puppet::Type.newtype(:file) do
include Puppet::Util::MethodHelper
include Puppet::Util::Checksums
include Puppet::Util::Backups
include Puppet::Util::SymbolicFileMode
@doc = "Manages files, including their content, ownership, and permissions.
The `file` type can manage normal files, directories, and symlinks; the
- type should be specified in the `ensure` attribute. Note that symlinks cannot
- be managed on Windows systems.
+ type should be specified in the `ensure` attribute.
File contents can be managed directly with the `content` attribute, or
downloaded from a remote source using the `source` attribute; the latter
can also be used to recursively serve directories (when the `recurse`
attribute is set to `true` or `local`). On Windows, note that file
contents are managed in binary mode; Puppet never automatically translates
line endings.
**Autorequires:** If Puppet is managing the user or group that owns a
file, the file resource will autorequire them. If Puppet is managing any
parent directories of a file, the file resource will autorequire them."
feature :manages_symlinks,
"The provider can manage symbolic links."
def self.title_patterns
[ [ /^(.*?)\/*\Z/m, [ [ :path ] ] ] ]
end
newparam(:path) do
desc <<-'EOT'
The path to the file to manage. Must be fully qualified.
On Windows, the path should include the drive letter and should use `/` as
the separator character (rather than `\\`).
EOT
isnamevar
validate do |value|
unless Puppet::Util.absolute_path?(value)
fail Puppet::Error, "File paths must be fully qualified, not '#{value}'"
end
end
munge do |value|
if value.start_with?('//') and ::File.basename(value) == "/"
# This is a UNC path pointing to a share, so don't add a trailing slash
::File.expand_path(value)
else
::File.join(::File.split(::File.expand_path(value)))
end
end
end
newparam(:backup) do
desc <<-EOT
Whether (and how) file content should be backed up before being replaced.
This attribute works best as a resource default in the site manifest
(`File { backup => main }`), so it can affect all file resources.
* If set to `false`, file content won't be backed up.
* If set to a string beginning with `.` (e.g., `.puppet-bak`), Puppet will
copy the file in the same directory, using that value as the extension
of the backup. (A value of `true` is a synonym for `.puppet-bak`.)
* If set to any other string, Puppet will try to back up to a filebucket
with that title. See the `filebucket` resource type for more details.
(This is the preferred method for backup, since it can be centralized
and queried.)
Default value: `puppet`, which backs up to a filebucket of the same name.
(Puppet automatically creates a **local** filebucket named `puppet` if one
doesn't already exist.)
Backing up to a local filebucket isn't particularly useful. If you want
to make organized use of backups, you will generally want to use the
puppet master server's filebucket service. This requires declaring a
filebucket resource and a resource default for the `backup` attribute
in site.pp:
# /etc/puppet/manifests/site.pp
filebucket { 'main':
path => false, # This is required for remote filebuckets.
server => 'puppet.example.com', # Optional; defaults to the configured puppet master.
}
File { backup => main, }
If you are using multiple puppet master servers, you will want to
centralize the contents of the filebucket. Either configure your load
balancer to direct all filebucket traffic to a single master, or use
something like an out-of-band rsync task to synchronize the content on all
masters.
EOT
defaultto "puppet"
munge do |value|
# I don't really know how this is happening.
value = value.shift if value.is_a?(Array)
case value
when false, "false", :false
false
when true, "true", ".puppet-bak", :true
".puppet-bak"
when String
value
else
self.fail "Invalid backup type #{value.inspect}"
end
end
end
newparam(:recurse) do
- desc "Whether and how to do recursive file management. Options are:
-
- * `inf,true` --- Regular style recursion on both remote and local
- directory structure. See `recurselimit` to specify a limit to the
- recursion depth.
- * `remote` --- Descends recursively into the remote (source) directory
- but not the local (destination) directory. Allows copying of
- a few files into a directory containing many
- unmanaged files without scanning all the local files.
- This can only be used when a source parameter is specified.
- * `false` --- Default of no recursion.
+ desc "Whether to recursively manage the _contents_ of a directory. This attribute
+ is only used when `ensure => directory` is set. The allowed values are:
+
+ * `false` --- The default behavior. The contents of the directory will not be
+ automatically managed.
+ * `remote` --- If the `source` attribute is set, Puppet will automatically
+ manage the contents of the source directory (or directories), ensuring
+ that equivalent files and directories exist on the target system and
+ that their contents match.
+
+ Using `remote` will disable the `purge` attribute, but results in faster
+ catalog application than `recurse => true`.
+
+ The `source` attribute is mandatory when `recurse => remote`.
+ * `true` --- If the `source` attribute is set, this behaves similarly to
+ `recurse => remote`, automatically managing files from the source directory.
+
+ This also enables the `purge` attribute, which can delete unmanaged
+ files from a directory. See the description of `purge` for more details.
+
+ The `source` attribute is not mandatory when using `recurse => true`, so you
+ can enable purging in directories where all files are managed individually.
+
+ (Note: `inf` is a deprecated synonym for `true`.)
+
+ By default, setting recurse to `remote` or `true` will manage _all_
+ subdirectories. You can use the `recurselimit` attribute to limit the
+ recursion depth.
"
newvalues(:true, :false, :inf, :remote)
validate { |arg| }
munge do |value|
newval = super(value)
case newval
when :true, :inf; true
when :false; false
when :remote; :remote
else
self.fail "Invalid recurse value #{value.inspect}"
end
end
end
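# An illustrative manifest (hypothetical paths) using the remote recursion described above
# to copy a directory's contents without enabling purging:
#   file { '/etc/myapp/conf.d':
#     ensure  => directory,
#     source  => 'puppet:///modules/myapp/conf.d',
#     recurse => remote,
#   }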
newparam(:recurselimit) do
- desc "How deeply to do recursive management."
+ desc "How far Puppet should descend into subdirectories, when using
+ `ensure => directory` and either `recurse => true` or `recurse => remote`.
+ The recursion limit affects which files will be copied from the `source`
+ directory, as well as which files can be purged when `purge => true`.
+
+ Setting `recurselimit => 0` is the same as setting `recurse => false` ---
+ Puppet will manage the directory, but all of its contents will be treated
+ as unmanaged.
+
+ Setting `recurselimit => 1` will manage files and directories that are
+ directly inside the directory, but will not manage the contents of any
+ subdirectories.
+
+ Setting `recurselimit => 2` will manage the direct contents of the
+ directory, as well as the contents of the _first_ level of subdirectories.
+
+ And so on --- 3 will manage the contents of the second level of
+ subdirectories, etc."
newvalues(/^[0-9]+$/)
munge do |value|
newval = super(value)
case newval
when Integer, Fixnum, Bignum; value
when /^\d+$/; Integer(value)
else
self.fail "Invalid recurselimit value #{value.inspect}"
end
end
end
newparam(:replace, :boolean => true, :parent => Puppet::Parameter::Boolean) do
desc "Whether to replace a file or symlink that already exists on the local system but
whose content doesn't match what the `source` or `content` attribute
specifies. Setting this to false allows file resources to initialize files
without overwriting future changes. Note that this only affects content;
Puppet will still manage ownership and permissions. Defaults to `true`."
defaultto :true
end
newparam(:force, :boolean => true, :parent => Puppet::Parameter::Boolean) do
desc "Perform the file operation even if it will destroy one or more directories.
You must use `force` in order to:
* `purge` subdirectories
* Replace directories with files or links
* Remove a directory when `ensure => absent`"
defaultto false
end
newparam(:ignore) do
desc "A parameter which omits action on files matching
specified patterns during recursion. Uses Ruby's builtin globbing
engine, so shell metacharacters are fully supported, e.g. `[a-z]*`.
Matches that would descend into the directory structure are ignored,
e.g., `*/*`."
validate do |value|
unless value.is_a?(Array) or value.is_a?(String) or value == false
self.devfail "Ignore must be a string or an Array"
end
end
end
newparam(:links) do
desc "How to handle links during file actions. During file copying,
`follow` will copy the target file instead of the link, `manage`
will copy the link itself, and `ignore` will just pass it by.
When not copying, `manage` and `ignore` behave equivalently
(because you cannot really ignore links entirely during local
recursion), and `follow` will manage the file to which the link points."
newvalues(:follow, :manage)
defaultto :manage
end
newparam(:purge, :boolean => true, :parent => Puppet::Parameter::Boolean) do
desc "Whether unmanaged files should be purged. This option only makes
- sense when managing directories with `recurse => true`.
+ sense when `ensure => directory` and `recurse => true`.
* When recursively duplicating an entire directory with the `source`
attribute, `purge => true` will automatically purge any files
that are not in the source directory.
* When managing files in a directory as individual resources,
setting `purge => true` will purge any files that aren't being
specifically managed.
If you have a filebucket configured, the purged files will be uploaded,
- but if you do not, this will destroy data."
+ but if you do not, this will destroy data.
+
+ Unless `force => true` is set, purging will **not** delete directories,
+ although it will delete the files they contain.
+
+ If `recurselimit` is set and you aren't using `force => true`, purging
+ will obey the recursion limit; files in any subdirectories deeper than the
+ limit will be treated as unmanaged and left alone."
defaultto :false
end
newparam(:sourceselect) do
desc "Whether to copy all valid sources, or just the first one. This parameter
only affects recursive directory copies; by default, the first valid
source is the only one used, but if this parameter is set to `all`, then
all valid sources will have all of their contents copied to the local
system. If a given file exists in more than one source, the version from
the earliest source in the list will be used."
defaultto :first
newvalues(:first, :all)
end
newparam(:show_diff, :boolean => true, :parent => Puppet::Parameter::Boolean) do
desc "Whether to display differences when the file changes, defaulting to
true. This parameter is useful for files that may contain passwords or
other secret data, which might otherwise be included in Puppet reports or
other insecure outputs. If the global `show_diff` setting
is false, then no diffs will be shown even if this parameter is true."
defaultto :true
end
newparam(:validate_cmd) do
desc "A command for validating the file's syntax before replacing it. If
Puppet would need to rewrite a file due to new `source` or `content`, it
will check the new content's validity first. If validation fails, the file
resource will fail.
This command must have a fully qualified path, and should contain a
percent (`%`) token where it would expect an input file. It must exit `0`
if the syntax is correct, and non-zero otherwise. The command will be
run on the target system while applying the catalog, not on the puppet master.
Example:
file { '/etc/apache2/apache2.conf':
content => 'example',
validate_cmd => '/usr/sbin/apache2 -t -f %',
}
This would replace apache2.conf only if the test returned true.
Note that if a validation command requires a `%` as part of its text,
you can specify a different placeholder token with the
`validate_replacement` attribute."
end
newparam(:validate_replacement) do
desc "The replacement string in a `validate_cmd` that will be replaced
with an input file name. Defaults to: `%`"
defaultto '%'
end
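# An illustrative pairing (hypothetical file and module) of validate_cmd with a custom
# validate_replacement placeholder, for commands where a literal `%` would be ambiguous:
#   file { '/etc/sudoers':
#     content              => template('sudo/sudoers.erb'),
#     validate_cmd         => '/usr/sbin/visudo -c -f ^',
#     validate_replacement => '^',
#   }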
# Autorequire the nearest ancestor directory found in the catalog.
autorequire(:file) do
req = []
path = Pathname.new(self[:path])
if !path.root?
# Start at our parent, to avoid autorequiring ourself
parents = path.parent.enum_for(:ascend)
if found = parents.find { |p| catalog.resource(:file, p.to_s) }
req << found.to_s
end
end
# if the resource is a link, make sure the target is created first
req << self[:target] if self[:target]
req
end
# Autorequire the owner and group of the file.
{:user => :owner, :group => :group}.each do |type, property|
autorequire(type) do
if @parameters.include?(property)
# The user/group property automatically converts to IDs
next unless should = @parameters[property].shouldorig
val = should[0]
if val.is_a?(Integer) or val =~ /^\d+$/
nil
else
val
end
end
end
end
CREATORS = [:content, :source, :target]
SOURCE_ONLY_CHECKSUMS = [:none, :ctime, :mtime]
validate do
creator_count = 0
CREATORS.each do |param|
creator_count += 1 if self.should(param)
end
creator_count += 1 if @parameters.include?(:source)
self.fail "You cannot specify more than one of #{CREATORS.collect { |p| p.to_s}.join(", ")}" if creator_count > 1
self.fail "You cannot specify a remote recursion without a source" if !self[:source] and self[:recurse] == :remote
self.fail "You cannot specify source when using checksum 'none'" if self[:checksum] == :none && !self[:source].nil?
SOURCE_ONLY_CHECKSUMS.each do |checksum_type|
self.fail "You cannot specify content when using checksum '#{checksum_type}'" if self[:checksum] == checksum_type && !self[:content].nil?
end
self.warning "Possible error: recurselimit is set but not recurse, no recursion will happen" if !self[:recurse] and self[:recurselimit]
provider.validate if provider.respond_to?(:validate)
end
def self.[](path)
return nil unless path
super(path.gsub(/\/+/, '/').sub(/\/$/, ''))
end
def self.instances
return []
end
# Determine the user to write files as.
def asuser
if self.should(:owner) and ! self.should(:owner).is_a?(Symbol)
writeable = Puppet::Util::SUIDManager.asuser(self.should(:owner)) {
FileTest.writable?(::File.dirname(self[:path]))
}
# If the parent directory is writeable, then we execute
# as the user in question. Otherwise we'll rely on
# the 'owner' property to do things.
asuser = self.should(:owner) if writeable
end
asuser
end
def bucket
return @bucket if @bucket
backup = self[:backup]
return nil unless backup
return nil if backup =~ /^\./
unless catalog or backup == "puppet"
fail "Can not find filebucket for backups without a catalog"
end
unless catalog and filebucket = catalog.resource(:filebucket, backup) or backup == "puppet"
fail "Could not find filebucket #{backup} specified in backup"
end
return default_bucket unless filebucket
@bucket = filebucket.bucket
@bucket
end
def default_bucket
Puppet::Type.type(:filebucket).mkdefaultbucket.bucket
end
# Does the file currently exist? Just checks for whether
# we have a stat
def exist?
stat ? true : false
end
def present?(current_values)
super && current_values[:ensure] != :false
end
# We have to do some extra finishing, to retrieve our bucket if
# there is one.
def finish
# Look up our bucket, if there is one
bucket
super
end
# Create any children via recursion or whatever.
def eval_generate
return [] unless self.recurse?
recurse
end
def ancestors
ancestors = Pathname.new(self[:path]).enum_for(:ascend).map(&:to_s)
ancestors.delete(self[:path])
ancestors
end
def flush
# We want to make sure we retrieve metadata anew on each transaction.
@parameters.each do |name, param|
param.flush if param.respond_to?(:flush)
end
@stat = :needs_stat
end
def initialize(hash)
# Used for caching clients
@clients = {}
super
# If they've specified a source, we get our 'should' values
# from it.
unless self[:ensure]
if self[:target]
self[:ensure] = :link
elsif self[:content]
self[:ensure] = :file
end
end
@stat = :needs_stat
end
# Configure discovered resources to be purged.
def mark_children_for_purging(children)
children.each do |name, child|
next if child[:source]
child[:ensure] = :absent
end
end
# Create a new file or directory object as a child to the current
# object.
def newchild(path)
full_path = ::File.join(self[:path], path)
# Add some new values to our original arguments -- these are the ones
# set at initialization. We specifically want to exclude any param
# values set by the :source property or any default values.
# LAK:NOTE This is kind of silly, because the whole point here is that
# the values set at initialization should live as long as the resource
# but values set by default or by :source should only live for the transaction
# or so. Unfortunately, we don't have a straightforward way to manage
# the different lifetimes of this data, so we kludge it like this.
# The right-side hash wins in the merge.
options = @original_parameters.merge(:path => full_path).reject { |param, value| value.nil? }
# These should never be passed to our children.
[:parent, :ensure, :recurse, :recurselimit, :target, :alias, :source].each do |param|
options.delete(param) if options.include?(param)
end
self.class.new(options)
end
# Files handle paths specially, because they just lengthen their
# path names, rather than including the full parent's title each
# time.
def pathbuilder
# We specifically need to call the method here, so it looks
# up our parent in the catalog graph.
if parent = parent()
# We only need to behave specially when our parent is also
# a file
if parent.is_a?(self.class)
# Remove the parent file name
list = parent.pathbuilder
list.pop # remove the parent's path info
return list << self.ref
else
return super
end
else
return [self.ref]
end
end
# Recursively generate a list of file resources, which will
# be used to copy remote files, manage local files, and/or make links
# to map to another directory.
def recurse
children = (self[:recurse] == :remote) ? {} : recurse_local
if self[:target]
recurse_link(children)
elsif self[:source]
recurse_remote(children)
end
# If we're purging resources, then delete any resource that isn't on the
# remote system.
mark_children_for_purging(children) if self.purge?
# REVISIT: sort_by is more efficient?
result = children.values.sort { |a, b| a[:path] <=> b[:path] }
remove_less_specific_files(result)
end
# This is to fix bug #2296, where two files recurse over the same
# set of files. It's a rare case, and when it does happen you're
# not likely to have many actual conflicts, which is good, because
# this is a pretty inefficient implementation.
def remove_less_specific_files(files)
# REVISIT: is this Windows safe? AltSeparator?
mypath = self[:path].split(::File::Separator)
other_paths = catalog.vertices.
select { |r| r.is_a?(self.class) and r[:path] != self[:path] }.
collect { |r| r[:path].split(::File::Separator) }.
select { |p| p[0,mypath.length] == mypath }
return files if other_paths.empty?
files.reject { |file|
path = file[:path].split(::File::Separator)
other_paths.any? { |p| path[0,p.length] == p }
}
end
# A simple method for determining whether we should be recursing.
def recurse?
self[:recurse] == true or self[:recurse] == :remote
end
# Recurse the target of the link.
def recurse_link(children)
perform_recursion(self[:target]).each do |meta|
if meta.relative_path == "."
self[:ensure] = :directory
next
end
children[meta.relative_path] ||= newchild(meta.relative_path)
if meta.ftype == "directory"
children[meta.relative_path][:ensure] = :directory
else
children[meta.relative_path][:ensure] = :link
children[meta.relative_path][:target] = meta.full_path
end
end
children
end
# Recurse the file itself, returning a Metadata instance for every found file.
def recurse_local
result = perform_recursion(self[:path])
return {} unless result
result.inject({}) do |hash, meta|
next hash if meta.relative_path == "."
hash[meta.relative_path] = newchild(meta.relative_path)
hash
end
end
# Recurse against our remote file.
def recurse_remote(children)
sourceselect = self[:sourceselect]
total = self[:source].collect do |source|
next unless result = perform_recursion(source)
return if top = result.find { |r| r.relative_path == "." } and top.ftype != "directory"
result.each { |data| data.source = "#{source}/#{data.relative_path}" }
break result if result and ! result.empty? and sourceselect == :first
result
end.flatten.compact
# This only happens if we have sourceselect == :all
unless sourceselect == :first
found = []
total.reject! do |data|
result = found.include?(data.relative_path)
found << data.relative_path unless found.include?(data.relative_path)
result
end
end
total.each do |meta|
if meta.relative_path == "."
parameter(:source).metadata = meta
next
end
children[meta.relative_path] ||= newchild(meta.relative_path)
children[meta.relative_path][:source] = meta.source
if meta.ftype == "file"
children[meta.relative_path][:checksum] = Puppet[:digest_algorithm].to_sym
end
children[meta.relative_path].parameter(:source).metadata = meta
end
children
end
def perform_recursion(path)
Puppet::FileServing::Metadata.indirection.search(
path,
:links => self[:links],
:recurse => (self[:recurse] == :remote ? true : self[:recurse]),
:recurselimit => self[:recurselimit],
:ignore => self[:ignore],
:checksum_type => (self[:source] || self[:content]) ? self[:checksum] : :none,
:environment => catalog.environment
)
end
# Back up and remove the file or directory at `self[:path]`.
#
# @param [Symbol] should The file type replacing the current content.
# @return [Boolean] True if the file was removed, else False
# @raise [Puppet::Error] If the current file isn't one of %w{file link directory} and can't be removed.
def remove_existing(should)
wanted_type = should.to_s
current_type = read_current_type
if current_type.nil?
return false
end
if can_backup?(current_type)
backup_existing
end
if wanted_type != "link" and current_type == wanted_type
return false
end
case current_type
when "directory"
return remove_directory(wanted_type)
when "link", "file"
return remove_file(current_type, wanted_type)
else
self.fail "Could not back up files of type #{current_type}"
end
end
def retrieve
if source = parameter(:source)
source.copy_source_values
end
super
end
# Set the checksum, from another property. There are multiple
# properties that modify the contents of a file, and they need the
# ability to make sure that the checksum value is in sync.
def setchecksum(sum = nil)
if @parameters.include? :checksum
if sum
@parameters[:checksum].checksum = sum
else
# If they didn't pass in a sum, then tell checksum to
# figure it out.
currentvalue = @parameters[:checksum].retrieve
@parameters[:checksum].checksum = currentvalue
end
end
end
# Should this thing be a normal file? This is a relatively complex
# way of determining whether we're trying to create a normal file,
# and it's here so that the logic isn't visible in the content property.
def should_be_file?
return true if self[:ensure] == :file
# I.e., it's set to something like "directory"
return false if e = self[:ensure] and e != :present
# The user doesn't really care, apparently
if self[:ensure] == :present
return true unless s = stat
return(s.ftype == "file" ? true : false)
end
# If we've gotten here, then :ensure isn't set
return true if self[:content]
return true if stat and stat.ftype == "file"
false
end
# Stat our file. Depending on the value of the 'links' attribute, we
# use either 'stat' or 'lstat', and we expect the properties to use the
# resulting stat object accordingly (mostly by testing the 'ftype'
# value).
#
# We use the initial value :needs_stat to ensure we only stat the file once,
# but can also keep track of a failed stat (@stat == nil). This also allows
# us to re-stat on demand by setting @stat = :needs_stat.
def stat
return @stat unless @stat == :needs_stat
method = :stat
# Files are the only types that support links
if (self.class.name == :file and self[:links] != :follow) or self.class.name == :tidy
method = :lstat
end
@stat = begin
Puppet::FileSystem.send(method, self[:path])
rescue Errno::ENOENT => error
nil
rescue Errno::ENOTDIR => error
nil
rescue Errno::EACCES => error
warning "Could not stat; permission denied"
nil
end
end
def to_resource
resource = super
resource.delete(:target) if resource[:target] == :notlink
resource
end
# Write out the file. Requires the property name for logging.
# Write will be done by the content property, along with checksum computation
def write(property)
remove_existing(:file)
mode = self.should(:mode) # might be nil
mode_int = mode ? symbolic_mode_to_int(mode, Puppet::Util::DEFAULT_POSIX_MODE) : nil
if write_temporary_file?
Puppet::Util.replace_file(self[:path], mode_int) do |file|
file.binmode
content_checksum = write_content(file)
file.flush
fail_if_checksum_is_wrong(file.path, content_checksum) if validate_checksum?
if self[:validate_cmd]
output = Puppet::Util::Execution.execute(self[:validate_cmd].gsub(self[:validate_replacement], file.path), :failonfail => true, :combine => true)
output.split(/\n/).each { |line|
self.debug(line)
}
end
end
else
umask = mode ? 000 : 022
Puppet::Util.withumask(umask) { ::File.open(self[:path], 'wb', mode_int ) { |f| write_content(f) } }
end
# make sure all of the modes are actually correct
property_fix
end
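# A hedged manifest sketch of the validate_cmd flow exercised in #write above
# (the path, template, and command are illustrative; '%' is assumed to be the
# validate_replacement token in use):
#
#   file { '/etc/sudoers':
#     ensure       => file,
#     content      => template('sudo/sudoers.erb'),
#     validate_cmd => '/usr/sbin/visudo -c -f %',
#   }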
private
# @return [String] The type of the current file, cast to a string.
def read_current_type
stat_info = stat
if stat_info
stat_info.ftype.to_s
else
nil
end
end
# @return [Boolean] If the current file can be backed up and needs to be backed up.
def can_backup?(type)
if type == "directory" and not force?
# (#18110) Directories cannot be removed without :force, so it doesn't
# make sense to back them up.
false
else
true
end
end
# @return [Boolean] True if the directory was removed
# @api private
def remove_directory(wanted_type)
if force?
debug "Removing existing directory for replacement with #{wanted_type}"
FileUtils.rmtree(self[:path])
stat_needed
true
else
notice "Not removing directory; use 'force' to override"
false
end
end
# @return [Boolean] if the file was removed (which is always true currently)
# @api private
def remove_file(current_type, wanted_type)
debug "Removing existing #{current_type} for replacement with #{wanted_type}"
Puppet::FileSystem.unlink(self[:path])
stat_needed
true
end
def stat_needed
@stat = :needs_stat
end
# Back up the existing file at `self[:path]` before it is removed
# @api private
# @raise [Puppet::Error] if the file backup failed
# @return [void]
def backup_existing
unless perform_backup
raise Puppet::Error, "Could not back up; will not replace"
end
end
# Should we validate the checksum of the file we're writing?
def validate_checksum?
self[:checksum] !~ /time/
end
# Make sure the file we wrote out is what we think it is.
def fail_if_checksum_is_wrong(path, content_checksum)
newsum = parameter(:checksum).sum_file(path)
return if [:absent, nil, content_checksum].include?(newsum)
self.fail "File written to disk did not match checksum; discarding changes (#{content_checksum} vs #{newsum})"
end
# Write the current content. Note that if there is no content property,
# simply opening the file with 'w' as done in write is enough to truncate
# the file or write a zero-length file.
def write_content(file)
(content = property(:content)) && content.write(file)
end
def write_temporary_file?
# unfortunately we don't know the source file size before fetching it
# so let's assume the file won't be empty
(c = property(:content) and c.length) || @parameters[:source]
end
# There are some cases where all of the work does not get done on
# file creation/modification, so we have to do some extra checking.
def property_fix
properties.each do |thing|
next unless [:mode, :owner, :group, :seluser, :selrole, :seltype, :selrange].include?(thing.name)
# Make sure we get a new stat object
@stat = :needs_stat
currentvalue = thing.retrieve
thing.sync unless thing.safe_insync?(currentvalue)
end
end
end
# We put all of the properties in separate files, because there are so many
# of them. The order these are loaded is important, because it determines
# the order they are in the property list.
require 'puppet/type/file/checksum'
require 'puppet/type/file/content' # can create the file
require 'puppet/type/file/source' # can create the file
require 'puppet/type/file/target' # creates a different type of file
require 'puppet/type/file/ensure' # can create the file
require 'puppet/type/file/owner'
require 'puppet/type/file/group'
require 'puppet/type/file/mode'
require 'puppet/type/file/type'
require 'puppet/type/file/selcontext' # SELinux file context
require 'puppet/type/file/ctime'
require 'puppet/type/file/mtime'
diff --git a/lib/puppet/type/file/content.rb b/lib/puppet/type/file/content.rb
index 87231a95f..2ac13becb 100644
--- a/lib/puppet/type/file/content.rb
+++ b/lib/puppet/type/file/content.rb
@@ -1,240 +1,241 @@
require 'net/http'
require 'uri'
require 'tempfile'
require 'puppet/util/checksums'
require 'puppet/network/http'
require 'puppet/network/http/compression'
module Puppet
Puppet::Type.type(:file).newproperty(:content) do
include Puppet::Util::Diff
include Puppet::Util::Checksums
include Puppet::Network::HTTP::Compression.module
attr_reader :actual_content
desc <<-'EOT'
The desired contents of a file, as a string. This attribute is mutually
exclusive with `source` and `target`.
Newlines and tabs can be specified in double-quoted strings using
standard escaped syntax --- \n for a newline, and \t for a tab.
With very small files, you can construct content strings directly in
the manifest...
define resolve(nameserver1, nameserver2, domain, search) {
$str = "search $search
domain $domain
nameserver $nameserver1
nameserver $nameserver2
"
file { "/etc/resolv.conf":
content => "$str",
}
}
...but for larger files, this attribute is more useful when combined with the
[template](http://docs.puppetlabs.com/references/latest/function.html#template)
+ or [file](http://docs.puppetlabs.com/references/latest/function.html#file)
function.
EOT
# Store a checksum as the value, rather than the actual content.
# Simplifies everything.
munge do |value|
if value == :absent
value
elsif checksum?(value)
# XXX This is potentially dangerous because it means users can't write a file whose
# entire contents are a plain checksum
value
else
@actual_content = value
resource.parameter(:checksum).sum(value)
end
end
# Checksums need to invert how changes are printed.
def change_to_s(currentvalue, newvalue)
# Our "new" checksum value is provided by the source.
if source = resource.parameter(:source) and tmp = source.checksum
newvalue = tmp
end
if currentvalue == :absent
return "defined content as '#{newvalue}'"
elsif newvalue == :absent
return "undefined content from '#{currentvalue}'"
else
return "content changed '#{currentvalue}' to '#{newvalue}'"
end
end
def checksum_type
if source = resource.parameter(:source)
result = source.checksum
else
result = resource[:checksum]
end
if result =~ /^\{(\w+)\}.+/
return $1.to_sym
else
return result
end
end
def length
(actual_content and actual_content.length) || 0
end
def content
self.should
end
# Override this method to provide diffs if asked for.
# Also, fix #872: when content is used, and replace is true, the file
# should be insync when it exists
def insync?(is)
if resource.should_be_file?
return false if is == :absent
else
if resource[:ensure] == :present and resource[:content] and s = resource.stat
resource.warning "Ensure set to :present but file type is #{s.ftype} so no content will be synced"
end
return true
end
return true if ! @resource.replace?
result = super
if ! result and Puppet[:show_diff] and resource.show_diff?
write_temporarily do |path|
send @resource[:loglevel], "\n" + diff(@resource[:path], path)
end
end
result
end
def retrieve
return :absent unless stat = @resource.stat
ftype = stat.ftype
# Don't even try to manage the content on directories or links
return nil if ["directory","link"].include?(ftype)
begin
resource.parameter(:checksum).sum_file(resource[:path])
rescue => detail
raise Puppet::Error, "Could not read #{ftype} #{@resource.title}: #{detail}", detail.backtrace
end
end
# Make sure we're also managing the checksum property.
def should=(value)
# treat the value as a bytestring, in Ruby versions that support it, regardless of the encoding
# in which it has been supplied
value = value.clone.force_encoding(Encoding::ASCII_8BIT) if value.respond_to?(:force_encoding)
@resource.newattr(:checksum) unless @resource.parameter(:checksum)
super
end
# Just write our content out to disk.
def sync
return_event = @resource.stat ? :file_changed : :file_created
# We're safe not testing for the 'source' if there's no 'should'
# because we wouldn't have gotten this far if there weren't at least
# one valid value somewhere.
@resource.write(:content)
return_event
end
def write_temporarily
tempfile = Tempfile.new("puppet-file")
tempfile.open
write(tempfile)
tempfile.close
yield tempfile.path
tempfile.delete
end
def write(file)
resource.parameter(:checksum).sum_stream { |sum|
each_chunk_from(actual_content || resource.parameter(:source)) { |chunk|
sum << chunk
file.print chunk
}
}
end
# The content is munged, so if it's a checksum then source_or_content is nil
# unless the checksum indirectly comes from source.
def each_chunk_from(source_or_content)
if source_or_content.is_a?(String)
yield source_or_content
elsif content_is_really_a_checksum? && source_or_content.nil?
yield read_file_from_filebucket
elsif source_or_content.nil?
yield ''
elsif Puppet[:default_file_terminus] == :file_server
yield source_or_content.content
elsif source_or_content.local?
chunk_file_from_disk(source_or_content) { |chunk| yield chunk }
else
chunk_file_from_source(source_or_content) { |chunk| yield chunk }
end
end
private
def content_is_really_a_checksum?
checksum?(should)
end
def chunk_file_from_disk(source_or_content)
File.open(source_or_content.full_path, "rb") do |src|
while chunk = src.read(8192)
yield chunk
end
end
end
def get_from_source(source_or_content, &block)
source = source_or_content.metadata.source
request = Puppet::Indirector::Request.new(:file_content, :find, source, nil, :environment => resource.catalog.environment)
request.do_request(:fileserver) do |req|
connection = Puppet::Network::HttpPool.http_instance(req.server, req.port)
connection.request_get(Puppet::Network::HTTP::API::V1.indirection2uri(req), add_accept_encoding({"Accept" => "raw"}), &block)
end
end
def chunk_file_from_source(source_or_content)
get_from_source(source_or_content) do |response|
case response.code
when /^2/; uncompress(response) { |uncompressor| response.read_body { |chunk| yield uncompressor.uncompress(chunk) } }
else
# Raise the http error if we didn't get a 'success' of some kind.
message = "Error #{response.code} on SERVER: #{(response.body||'').empty? ? response.message : uncompress_body(response)}"
raise Net::HTTPError.new(message, response)
end
end
end
def read_file_from_filebucket
raise "Could not get filebucket from file" unless dipper = resource.bucket
sum = should.sub(/\{\w+\}/, '')
dipper.getfile(sum)
rescue => detail
self.fail Puppet::Error, "Could not retrieve content for #{should} from filebucket: #{detail}", detail
end
end
end
diff --git a/lib/puppet/type/file/mode.rb b/lib/puppet/type/file/mode.rb
index 682e744cd..6f104947b 100644
--- a/lib/puppet/type/file/mode.rb
+++ b/lib/puppet/type/file/mode.rb
@@ -1,159 +1,168 @@
# Manage file modes. This state should support different formats
# for specification (e.g., u+rwx, or -0011), but for now only supports
# specifying the full mode.
module Puppet
Puppet::Type.type(:file).newproperty(:mode) do
require 'puppet/util/symbolic_file_mode'
include Puppet::Util::SymbolicFileMode
desc <<-'EOT'
The desired permissions mode for the file, in symbolic or numeric
- notation. Puppet uses traditional Unix permission schemes and translates
+ notation. This value should be specified as a quoted string; do not use
+ un-quoted numbers to represent file modes.
+
+ The `file` type uses traditional Unix permission schemes and translates
them to equivalent permissions for systems which represent permissions
- differently, including Windows.
+ differently, including Windows. For detailed ACL controls on Windows,
+ you can leave `mode` unmanaged and use
+ [the puppetlabs/acl module.](https://forge.puppetlabs.com/puppetlabs/acl)
Numeric modes should use the standard four-digit octal notation of
`<setuid/setgid/sticky><owner><group><other>` (e.g. 0644). Each of the
"owner," "group," and "other" digits should be a sum of the
permissions for that class of users, where read = 4, write = 2, and
execute/search = 1. When setting numeric permissions for
directories, Puppet sets the search permission wherever the read
permission is set.
Symbolic modes should be represented as a string of comma-separated
permission clauses, in the form `<who><op><perm>`:
* "Who" should be u (user), g (group), o (other), and/or a (all)
* "Op" should be = (set exact permissions), + (add select permissions),
or - (remove select permissions)
* "Perm" should be one or more of:
* r (read)
* w (write)
* x (execute/search)
* t (sticky)
* s (setuid/setgid)
* X (execute/search if directory or if any one user can execute)
* u (user's current permissions)
* g (group's current permissions)
* o (other's current permissions)
Thus, mode `0664` could be represented symbolically as either `a=r,ug+w`
or `ug=rw,o=r`. However, symbolic modes are more expressive than numeric
modes: a mode only affects the specified bits, so `mode => 'ug+w'` will
set the user and group write bits, without affecting any other bits.
See the manual page for GNU or BSD `chmod` for more details
on numeric and symbolic modes.
On Windows, permissions are translated as follows:
* Owner and group names are mapped to Windows SIDs
* The "other" class of users maps to the "Everyone" SID
* The read/write/execute permissions map to the `FILE_GENERIC_READ`,
`FILE_GENERIC_WRITE`, and `FILE_GENERIC_EXECUTE` access rights; a
file's owner always has the `FULL_CONTROL` right
* "Other" users can't have any permissions a file's group lacks,
and its group can't have any permissions its owner lacks; that is, 0644
is an acceptable mode, but 0464 is not.
EOT
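# A hedged manifest sketch of the numeric and symbolic forms described above
# (paths are illustrative; both resources request the same 0664 permissions):
#
#   file { '/tmp/example-octal':
#     ensure => file,
#     mode   => '0664',
#   }
#   file { '/tmp/example-symbolic':
#     ensure => file,
#     mode   => 'ug=rw,o=r',
#   }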
validate do |value|
+ if !value.is_a?(String)
+ Puppet.deprecation_warning("Non-string values for the file mode property are deprecated. It must be a string, " \
+ "either a symbolic mode like 'o+w,a+r' or an octal representation like '0644' or '755'.")
+ end
unless value.nil? or valid_symbolic_mode?(value)
raise Puppet::Error, "The file mode specification is invalid: #{value.inspect}"
end
end
munge do |value|
return nil if value.nil?
unless valid_symbolic_mode?(value)
raise Puppet::Error, "The file mode specification is invalid: #{value.inspect}"
end
normalize_symbolic_mode(value)
end
def desired_mode_from_current(desired, current)
current = current.to_i(8) if current.is_a? String
- is_a_directory = @resource.stat and @resource.stat.directory?
+ is_a_directory = @resource.stat && @resource.stat.directory?
symbolic_mode_to_int(desired, current, is_a_directory)
end
# If we're a directory, we need to be executable for all cases
# that are readable. This should probably be selectable, but eh.
def dirmask(value)
if FileTest.directory?(resource[:path]) and value =~ /^\d+$/ then
value = value.to_i(8)
value |= 0100 if value & 0400 != 0
value |= 010 if value & 040 != 0
value |= 01 if value & 04 != 0
value = value.to_s(8)
end
value
end
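# Worked example of the masking above (assuming the managed path is a
# directory): a desired mode of '0644' gains the search bit wherever read is
# set, so the value handed back is '755' (owner rwx, group r-x, other r-x).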
# If we're not following links and we're a link, then we just turn
# off mode management entirely.
def insync?(currentvalue)
if stat = @resource.stat and stat.ftype == "link" and @resource[:links] != :follow
self.debug "Not managing symlink mode"
return true
else
return super(currentvalue)
end
end
def property_matches?(current, desired)
return false unless current
current_bits = normalize_symbolic_mode(current)
desired_bits = desired_mode_from_current(desired, current).to_s(8)
current_bits == desired_bits
end
# Ideally, dirmask'ing could be done at munge time, but we don't know if 'ensure'
# will eventually be a directory or something else. And unfortunately, that logic
# depends on the ensure, source, and target properties. So rather than duplicate
# that logic, and get it wrong, we do dirmask during retrieve, after 'ensure' has
# been synced.
def retrieve
if @resource.stat
@should &&= @should.collect { |s| self.dirmask(s) }
end
super
end
# Finally, when we sync the mode out we need to transform it; since we
# don't have access to the calculated "desired" value or the "current"
# value here, only the "should" value, we need to retrieve the current mode again.
def sync
current = @resource.stat ? @resource.stat.mode : 0644
set(desired_mode_from_current(@should[0], current).to_s(8))
end
def change_to_s(old_value, desired)
return super if desired =~ /^\d+$/
old_bits = normalize_symbolic_mode(old_value)
new_bits = normalize_symbolic_mode(desired_mode_from_current(desired, old_bits))
super(old_bits, new_bits) + " (#{desired})"
end
def should_to_s(should_value)
should_value.rjust(4, "0")
end
def is_to_s(currentvalue)
if currentvalue == :absent
# This can occur during audits---if a file is transitioning from
# present to absent the mode will have a value of `:absent`.
super
else
currentvalue.rjust(4, "0")
end
end
end
end
diff --git a/lib/puppet/type/file/source.rb b/lib/puppet/type/file/source.rb
index ce704c5be..a7ebfc6a2 100644
--- a/lib/puppet/type/file/source.rb
+++ b/lib/puppet/type/file/source.rb
@@ -1,256 +1,259 @@
require 'puppet/file_serving/content'
require 'puppet/file_serving/metadata'
module Puppet
# Copy files from a local or remote source. This state *only* does any work
# when the remote file is an actual file; in that case, this state copies
# the file down. If the remote file is a dir or a link or whatever, then
# this state, during retrieval, modifies the appropriate other states
# so that things get taken care of appropriately.
Puppet::Type.type(:file).newparam(:source) do
include Puppet::Util::Diff
attr_accessor :source, :local
desc <<-'EOT'
A source file, which will be copied into place on the local system.
Values can be URIs pointing to remote files, or fully qualified paths to
files available on the local system (including files on NFS shares or
Windows mapped drives). This attribute is mutually exclusive with
`content` and `target`.
The available URI schemes are *puppet* and *file*. *Puppet*
URIs will retrieve files from Puppet's built-in file server, and are
usually formatted as:
`puppet:///modules/name_of_module/filename`
This will fetch a file from a module on the puppet master (or from a
local module when using puppet apply). Given a `modulepath` of
`/etc/puppetlabs/puppet/modules`, the example above would resolve to
`/etc/puppetlabs/puppet/modules/name_of_module/files/filename`.
Unlike `content`, the `source` attribute can be used to recursively copy
directories if the `recurse` attribute is set to `true` or `remote`. If
a source directory contains symlinks, use the `links` attribute to
specify whether to recreate links or follow them.
Multiple `source` values can be specified as an array, and Puppet will
use the first source that exists. This can be used to serve different
files to different system types:
file { "/etc/nfs.conf":
source => [
"puppet:///modules/nfs/conf.$host",
"puppet:///modules/nfs/conf.$operatingsystem",
"puppet:///modules/nfs/conf"
]
}
Alternately, when serving directories recursively, multiple sources can
be combined by setting the `sourceselect` attribute to `all`.
EOT
validate do |sources|
sources = [sources] unless sources.is_a?(Array)
sources.each do |source|
next if Puppet::Util.absolute_path?(source)
begin
uri = URI.parse(URI.escape(source))
rescue => detail
self.fail Puppet::Error, "Could not understand source #{source}: #{detail}", detail
end
self.fail "Cannot use relative URLs '#{source}'" unless uri.absolute?
self.fail "Cannot use opaque URLs '#{source}'" unless uri.hierarchical?
self.fail "Cannot use URLs of type '#{uri.scheme}' as source for fileserving" unless %w{file puppet}.include?(uri.scheme)
end
end
SEPARATOR_REGEX = [Regexp.escape(File::SEPARATOR.to_s), Regexp.escape(File::ALT_SEPARATOR.to_s)].join
munge do |sources|
sources = [sources] unless sources.is_a?(Array)
sources.map do |source|
source = source.sub(/[#{SEPARATOR_REGEX}]+$/, '')
if Puppet::Util.absolute_path?(source)
URI.unescape(Puppet::Util.path_to_uri(source).to_s)
else
source
end
end
end
def change_to_s(currentvalue, newvalue)
# newvalue = "{md5}#{@metadata.checksum}"
if resource.property(:ensure).retrieve == :absent
return "creating from source #{metadata.source} with contents #{metadata.checksum}"
else
return "replacing from source #{metadata.source} with contents #{metadata.checksum}"
end
end
def checksum
metadata && metadata.checksum
end
# Look up (if necessary) and return remote content.
def content
return @content if @content
raise Puppet::DevError, "No source for content was stored with the metadata" unless metadata.source
unless tmp = Puppet::FileServing::Content.indirection.find(metadata.source, :environment => resource.catalog.environment, :links => resource[:links])
self.fail "Could not find any content at %s" % metadata.source
end
@content = tmp.content
end
# Copy the values from the source to the resource. Yay.
def copy_source_values
devfail "Somehow got asked to copy source values without any metadata" unless metadata
# conditionally copy :checksum
if metadata.ftype != "directory" && !(metadata.ftype == "link" && metadata.links == :manage)
copy_source_value(:checksum)
end
# Take each of the stats and set them as states on the local file
# if a value has not already been provided.
[:owner, :mode, :group].each do |metadata_method|
next if metadata_method == :owner and !Puppet.features.root?
next if metadata_method == :group and !Puppet.features.root?
if Puppet.features.microsoft_windows?
# Warn on Windows if source permissions are being used and the file resource
# does not have mode owner and group all set (which would take precedence).
if [:use, :use_when_creating].include?(resource[:source_permissions]) &&
(resource[:owner] == nil || resource[:group] == nil || resource[:mode] == nil)
warning = "Copying %s from the source" <<
" file on Windows is deprecated;" <<
" use source_permissions => ignore."
Puppet.deprecation_warning(warning % 'owner/mode/group')
resource.debug(warning % metadata_method.to_s)
end
# But never try to copy remote owner/group on Windows
next if [:owner, :group].include?(metadata_method) && !local?
end
case resource[:source_permissions]
when :ignore
next
when :use_when_creating
next if Puppet::FileSystem.exist?(resource[:path])
end
copy_source_value(metadata_method)
end
if resource[:ensure] == :absent
# We know all we need to
elsif metadata.ftype != "link"
resource[:ensure] = metadata.ftype
elsif resource[:links] == :follow
resource[:ensure] = :present
else
resource[:ensure] = "link"
resource[:target] = metadata.destination
end
end
attr_writer :metadata
# Provide, and retrieve if necessary, the metadata for this file. Fail
# if we can't find data about this host, and fail if there are any
# problems in our query.
def metadata
return @metadata if @metadata
return nil unless value
value.each do |source|
begin
options = {
:environment => resource.catalog.environment,
:links => resource[:links],
:source_permissions => resource[:source_permissions]
}
if data = Puppet::FileServing::Metadata.indirection.find(source, options)
@metadata = data
@metadata.source = source
break
end
rescue => detail
self.fail Puppet::Error, "Could not retrieve file metadata for #{source}: #{detail}", detail
end
end
self.fail "Could not retrieve information from environment #{resource.catalog.environment} source(s) #{value.join(", ")}" unless @metadata
@metadata
end
def local?
found? and scheme == "file"
end
def full_path
Puppet::Util.uri_to_path(uri) if found?
end
def server?
uri and uri.host
end
def server
(uri and uri.host) or Puppet.settings[:server]
end
def port
(uri and uri.port) or Puppet.settings[:masterport]
end
def uri
@uri ||= URI.parse(URI.escape(metadata.source))
end
private
def scheme
(uri and uri.scheme)
end
def found?
! (metadata.nil? or metadata.ftype.nil?)
end
def copy_source_value(metadata_method)
param_name = (metadata_method == :checksum) ? :content : metadata_method
if resource[param_name].nil? or resource[param_name] == :absent
- resource[param_name] = metadata.send(metadata_method)
+ value = metadata.send(metadata_method)
+ # Force the mode value in file resources to be a string containing octal.
+ value = value.to_s(8) if param_name == :mode && value.is_a?(Numeric)
+ resource[param_name] = value
end
end
end
Puppet::Type.type(:file).newparam(:source_permissions) do
desc <<-'EOT'
Whether (and how) Puppet should copy owner, group, and mode permissions from
the `source` to `file` resources when the permissions are not explicitly
specified. (In all cases, explicit permissions will take precedence.)
Valid values are `use`, `use_when_creating`, and `ignore`:
* `use` (the default) will cause Puppet to apply the owner, group,
and mode from the `source` to any files it is managing.
* `use_when_creating` will only apply the owner, group, and mode from the
`source` when creating a file; existing files will not have their permissions
overwritten.
* `ignore` will never apply the owner, group, or mode from the `source` when
managing a file. When creating new files without explicit permissions,
the permissions they receive will depend on platform-specific behavior.
On POSIX, Puppet will use the umask of the user it is running as. On
Windows, Puppet will use the default DACL associated with the user it is
running as.
EOT
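# A hedged manifest sketch of the `ignore` behavior described above
# (the module path and file name are illustrative):
#
#   file { '/etc/motd':
#     ensure             => file,
#     source             => 'puppet:///modules/motd/motd',
#     source_permissions => ignore,
#   }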
defaultto :use
newvalues(:use, :use_when_creating, :ignore)
end
end
diff --git a/lib/puppet/type/group.rb b/lib/puppet/type/group.rb
index cf9eb2382..d5adaf455 100644
--- a/lib/puppet/type/group.rb
+++ b/lib/puppet/type/group.rb
@@ -1,188 +1,188 @@
require 'etc'
require 'facter'
require 'puppet/property/keyvalue'
require 'puppet/parameter/boolean'
module Puppet
newtype(:group) do
@doc = "Manage groups. On most platforms this can only create groups.
Group membership must be managed on individual users.
On some platforms such as OS X, group membership is managed as an
attribute of the group, not the user record. Providers must have
the feature 'manages_members' to manage the 'members' property of
a group record."
feature :manages_members,
"For directories where membership is an attribute of groups not users."
feature :manages_aix_lam,
"The provider can manage AIX Loadable Authentication Module (LAM) system."
feature :system_groups,
"The provider allows you to create system groups with lower GIDs."
feature :libuser,
"Allows local groups to be managed on systems that also use some other
remote NSS method of managing accounts."
ensurable do
desc "Create or remove the group."
newvalue(:present) do
provider.create
end
newvalue(:absent) do
provider.delete
end
end
newproperty(:gid) do
desc "The group ID. Must be specified numerically. If no group ID is
specified when creating a new group, then one will be chosen
automatically according to local system standards. This will likely
result in the same group having different GIDs on different systems,
which is not recommended.
On Windows, this property is read-only and will return the group's security
identifier (SID)."
def retrieve
provider.gid
end
def sync
if self.should == :absent
raise Puppet::DevError, "GID cannot be deleted"
else
provider.gid = self.should
end
end
munge do |gid|
case gid
when String
if gid =~ /^[-0-9]+$/
gid = Integer(gid)
else
self.fail "Invalid GID #{gid}"
end
when Symbol
unless gid == :absent
self.devfail "Invalid GID #{gid}"
end
end
return gid
end
end
newproperty(:members, :array_matching => :all, :required_features => :manages_members) do
desc "The members of the group. For directory services where group
membership is stored in the group objects, not the users."
def change_to_s(currentvalue, newvalue)
currentvalue = currentvalue.join(",") if currentvalue != :absent
newvalue = newvalue.join(",")
super(currentvalue, newvalue)
end
def insync?(current)
if provider.respond_to?(:members_insync?)
return provider.members_insync?(current, @should)
end
super(current)
end
def is_to_s(currentvalue)
if provider.respond_to?(:members_to_s)
currentvalue = '' if currentvalue.nil?
return provider.members_to_s(currentvalue.split(','))
end
super(currentvalue)
end
alias :should_to_s :is_to_s
end
newparam(:auth_membership) do
desc "whether the provider is authoritative for group membership."
defaultto true
end
newparam(:name) do
desc "The group name. While naming limitations vary by operating system,
it is advisable to restrict names to the lowest common denominator,
which is a maximum of 8 characters beginning with a letter.
Note that Puppet considers group names to be case-sensitive, regardless
of the platform's own rules; be sure to always use the same case when
referring to a given group."
isnamevar
end
newparam(:allowdupe, :boolean => true, :parent => Puppet::Parameter::Boolean) do
desc "Whether to allow duplicate GIDs. Defaults to `false`."
defaultto false
end
newparam(:ia_load_module, :required_features => :manages_aix_lam) do
desc "The name of the I&A module to use to manage this user"
end
newproperty(:attributes, :parent => Puppet::Property::KeyValue, :required_features => :manages_aix_lam) do
desc "Specify group AIX attributes in an array of `key=value` pairs."
def membership
:attribute_membership
end
def delimiter
" "
end
validate do |value|
raise ArgumentError, "Attributes value pairs must be separated by an =" unless value.include?("=")
end
end
newparam(:attribute_membership) do
desc "Whether specified attribute value pairs should be treated as the only attributes
of the user or whether they should merely
be treated as the minimum list."
newvalues(:inclusive, :minimum)
defaultto :minimum
end
newparam(:system, :boolean => true, :parent => Puppet::Parameter::Boolean) do
desc "Whether the group is a system group with lower GID."
defaultto false
end
newparam(:forcelocal, :boolean => true,
:required_features => :libuser,
:parent => Puppet::Parameter::Boolean) do
- desc "Forces the mangement of local accounts when accounts are also
+ desc "Forces the management of local accounts when accounts are also
being managed by some other NSS"
defaultto false
end
# This method has been exposed for puppet to manage users and groups of
# files in its settings and should not be considered available outside of
# puppet.
#
# (see Puppet::Settings#service_group_available?)
#
# @return [Boolean] if the group exists on the system
# @api private
def exists?
provider.exists?
end
end
end
diff --git a/lib/puppet/type/mount.rb b/lib/puppet/type/mount.rb
index 8111a0d4a..620ea8dfc 100644
--- a/lib/puppet/type/mount.rb
+++ b/lib/puppet/type/mount.rb
@@ -1,288 +1,286 @@
require 'puppet/property/boolean'
module Puppet
# We want the mount to refresh when it changes.
newtype(:mount, :self_refresh => true) do
@doc = "Manages mounted filesystems, including putting mount
information into the mount table. The actual behavior depends
on the value of the 'ensure' parameter.
**Refresh:** `mount` resources can respond to refresh events (via
`notify`, `subscribe`, or the `~>` arrow). If a `mount` receives an event
from another resource **and** its `ensure` attribute is set to `mounted`,
Puppet will try to unmount then remount that filesystem.
**Autorequires:** If Puppet is managing any parents of a mount resource ---
that is, other mount points higher up in the filesystem --- the child
mount will autorequire them."
feature :refreshable, "The provider can remount the filesystem.",
:methods => [:remount]
# Use the normal parent class, because we actually want to
# call code when sync is called.
newproperty(:ensure) do
desc "Control what to do with this mount. Set this attribute to
`unmounted` to make sure the filesystem is in the filesystem table
but not mounted (if the filesystem is currently mounted, it will be
unmounted). Set it to `absent` to unmount (if necessary) and remove
the filesystem from the fstab. Set to `mounted` to add it to the
fstab and mount it. Set to `present` to add to fstab but not change
mount/unmount status."
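# A hedged manifest sketch of the `mounted` state described above (device,
# filesystem type, and options are illustrative): an fstab entry is ensured
# and the filesystem is mounted.
#
#   mount { '/mnt/data':
#     ensure  => mounted,
#     device  => '/dev/sdb1',
#     fstype  => 'ext4',
#     options => 'defaults',
#   }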
# IS -> SHOULD In Sync Action
# ghost -> present NO create
# absent -> present NO create
# (mounted -> present YES)
# (unmounted -> present YES)
newvalue(:defined) do
provider.create
return :mount_created
end
aliasvalue :present, :defined
# IS -> SHOULD In Sync Action
# ghost -> unmounted NO create, unmount
# absent -> unmounted NO create
# mounted -> unmounted NO unmount
newvalue(:unmounted) do
case self.retrieve
when :ghost # (not in fstab but mounted)
provider.create
@resource.flush
provider.unmount
return :mount_unmounted
when nil, :absent # (not in fstab and not mounted)
provider.create
return :mount_created
when :mounted # (in fstab and mounted)
provider.unmount
syncothers # I guess it's more likely that the mount was originally mounted with
# the wrong attributes so I sync AFTER the umount
return :mount_unmounted
else
raise Puppet::Error, "Unexpected change from #{current_value} to unmounted}"
end
end
# IS -> SHOULD In Sync Action
# ghost -> absent NO unmount
# mounted -> absent NO provider.destroy AND unmount
# unmounted -> absent NO provider.destroy
newvalue(:absent, :event => :mount_deleted) do
current_value = self.retrieve
provider.unmount if provider.mounted?
provider.destroy unless current_value == :ghost
end
# IS -> SHOULD In Sync Action
# ghost -> mounted NO provider.create
# absent -> mounted NO provider.create AND mount
# unmounted -> mounted NO mount
newvalue(:mounted, :event => :mount_mounted) do
# Create the mount point if it does not already exist.
current_value = self.retrieve
currently_mounted = provider.mounted?
provider.create if [nil, :absent, :ghost].include?(current_value)
syncothers
# The fs can be already mounted if it was absent but mounted
provider.property_hash[:needs_mount] = true unless currently_mounted
end
# insync: mounted -> present
# unmounted -> present
def insync?(is)
if should == :defined and [:mounted,:unmounted].include?(is)
true
else
super
end
end
def syncothers
# We have to flush any changes to disk.
currentvalues = @resource.retrieve_resource
# Determine if there are any out-of-sync properties.
oos = @resource.send(:properties).find_all do |prop|
unless currentvalues.include?(prop)
raise Puppet::DevError, "Parent has property %s but it doesn't appear in the current values", [prop.name]
end
if prop.name == :ensure
false
else
! prop.safe_insync?(currentvalues[prop])
end
end.each { |prop| prop.sync }.length
@resource.flush if oos > 0
end
end
newproperty(:device) do
desc "The device providing the mount. This can be whatever
device is supported by the mount, including network
devices or devices specified by UUID rather than device
path, depending on the operating system."
validate do |value|
raise Puppet::Error, "device must not contain whitespace: #{value}" if value =~ /\s/
end
end
# Solaris specifies two devices, not just one.
newproperty(:blockdevice) do
desc "The device to fsck. This is property is only valid
on Solaris, and in most cases will default to the correct
value."
# Default to the device but with "dsk" replaced with "rdsk".
defaultto do
if Facter.value(:osfamily) == "Solaris"
if device = resource[:device] and device =~ %r{/dsk/}
device.sub(%r{/dsk/}, "/rdsk/")
elsif fstype = resource[:fstype] and fstype == 'nfs'
'-'
else
nil
end
else
nil
end
end
validate do |value|
raise Puppet::Error, "blockdevice must not contain whitespace: #{value}" if value =~ /\s/
end
end
newproperty(:fstype) do
desc "The mount type. Valid values depend on the
operating system. This is a required option."
validate do |value|
raise Puppet::Error, "fstype must not contain whitespace: #{value}" if value =~ /\s/
end
end
newproperty(:options) do
desc "Mount options for the mounts, as they would
appear in the fstab."
validate do |value|
raise Puppet::Error, "option must not contain whitespace: #{value}" if value =~ /\s/
end
end
newproperty(:pass) do
desc "The pass in which the mount is checked."
defaultto {
if @resource.managed?
if Facter.value(:osfamily) == 'Solaris'
'-'
else
0
end
end
}
end
newproperty(:atboot, :parent => Puppet::Property::Boolean) do
desc "Whether to mount the mount at boot. Not all platforms
support this."
def munge(value)
munged = super
if munged
:yes
else
:no
end
end
end
newproperty(:dump) do
desc "Whether to dump the mount. Not all platform support this.
- Valid values are `1` or `0`. or `2` on FreeBSD, Default is `0`."
+ Valid values are `1` or `0` (or `2` on FreeBSD). Default is `0`."
if Facter.value(:operatingsystem) == "FreeBSD"
newvalue(%r{(0|1|2)})
else
newvalue(%r{(0|1)})
end
- newvalue(%r{(0|1)})
-
defaultto {
0 if @resource.managed?
}
end
newproperty(:target) do
desc "The file in which to store the mount table. Only used by
those providers that write to disk."
defaultto { if @resource.class.defaultprovider.ancestors.include?(Puppet::Provider::ParsedFile)
@resource.class.defaultprovider.default_target
else
nil
end
}
end
newparam(:name) do
desc "The mount path for the mount."
isnamevar
validate do |value|
raise Puppet::Error, "name must not contain whitespace: #{value}" if value =~ /\s/
end
munge do |value|
value.gsub(/^(.+?)\/*$/, '\1')
end
end
newparam(:remounts) do
desc "Whether the mount can be remounted `mount -o remount`. If
this is false, then the filesystem will be unmounted and remounted
manually, which is prone to failure."
newvalues(:true, :false)
defaultto do
case Facter.value(:operatingsystem)
when "FreeBSD", "Darwin", "AIX", "DragonFly", "OpenBSD"
false
else
true
end
end
end
def refresh
# Only remount if we're supposed to be mounted.
provider.remount if self.should(:fstype) != "swap" and provider.mounted?
end
def value(name)
name = name.intern
if property = @parameters[name]
return property.value
end
end
# Ensure that mounts higher up in the filesystem are mounted first
autorequire(:mount) do
dependencies = []
Pathname.new(@parameters[:name].value).ascend do |parent|
dependencies.unshift parent.to_s
end
dependencies[0..-2]
end
end
end
diff --git a/lib/puppet/type/resources.rb b/lib/puppet/type/resources.rb
index e8dd08a92..7a4ffd8f3 100644
--- a/lib/puppet/type/resources.rb
+++ b/lib/puppet/type/resources.rb
@@ -1,163 +1,187 @@
require 'puppet'
require 'puppet/parameter/boolean'
Puppet::Type.newtype(:resources) do
@doc = "This is a metatype that can manage other resource types. Any
metaparams specified here will be passed on to any generated resources,
so you can purge unmanaged resources but set `noop` to true so the
purging is only logged and does not actually happen."
newparam(:name) do
desc "The name of the type to be managed."
validate do |name|
raise ArgumentError, "Could not find resource type '#{name}'" unless Puppet::Type.type(name)
end
munge { |v| v.to_s }
end
newparam(:purge, :boolean => true, :parent => Puppet::Parameter::Boolean) do
- desc "Purge unmanaged resources. This will delete any resource
- that is not specified in your configuration
- and is not required by any specified resources.
- Purging ssh_authorized_keys this way is deprecated; see the
- purge_ssh_keys parameter of the user type for a better alternative."
+ desc "Whether to purge unmanaged resources. When set to `true`, this will
+ delete any resource that is not specified in your configuration and is not
+ autorequired by any managed resources. **Note:** The `ssh_authorized_key`
+ resource type can't be purged this way; instead, see the `purge_ssh_keys`
+ attribute of the `user` type."
defaultto :false
validate do |value|
if munge(value)
unless @resource.resource_type.respond_to?(:instances)
raise ArgumentError, "Purging resources of type #{@resource[:name]} is not supported, since they cannot be queried from the system"
end
raise ArgumentError, "Purging is only supported on types that accept 'ensure'" unless @resource.resource_type.validproperty?(:ensure)
end
end
end
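# A hedged manifest sketch of purging with this parameter (the managed type
# and the use of noop are illustrative; per the type documentation, noop is
# forwarded as a metaparam to the generated resources, so deletions are only
# logged):
#
#   resources { 'host':
#     purge => true,
#     noop  => true,
#   }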
newparam(:unless_system_user) do
desc "This keeps system users from being purged. By default, it
- does not purge users whose UIDs are less than or equal to 500, but you can specify
+ does not purge users whose UIDs are less than the minimum UID for the system (typically 500 or 1000), but you can specify
a different UID as the inclusive limit."
newvalues(:true, :false, /^\d+$/)
munge do |value|
case value
when /^\d+/
Integer(value)
when :true, true
- 500
+ @resource.class.system_users_max_uid
when :false, false
false
when Integer; value
else
raise ArgumentError, "Invalid value #{value.inspect}"
end
end
defaultto {
if @resource[:name] == "user"
- 500
+ @resource.class.system_users_max_uid
else
nil
end
}
end
newparam(:unless_uid) do
- desc "This keeps specific uids or ranges of uids from being purged when purge is true.
- Accepts ranges, integers and (mixed) arrays of both."
-
- munge do |value|
- case value
- when /^\d+/
- [Integer(value)]
- when Integer
- [value]
- when Range
- [value]
- when Array
- value
- when /^\[\d+/
- value.split(',').collect{|x| x.include?('..') ? Integer(x.split('..')[0])..Integer(x.split('..')[1]) : Integer(x) }
- else
- raise ArgumentError, "Invalid value #{value.inspect}"
- end
- end
- end
+ desc 'This keeps specific uids or ranges of uids from being purged when purge is true.
+ Accepts integers, integer strings, and arrays of integers or integer strings.
+ To specify a range of uids, consider using the range() function from stdlib.'
+
+ munge do |value|
+ value = [value] unless value.is_a? Array
+ value.flatten.collect do |v|
+ case v
+ when Integer
+ v
+ when String
+ Integer(v)
+ else
+ raise ArgumentError, "Invalid value #{v.inspect}."
+ end
+ end
+ end
+ end
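# A hedged manifest sketch combining the purge guards above (the UID values
# are illustrative):
#
#   resources { 'user':
#     purge              => true,
#     unless_system_user => true,            # protect accounts below the system UID threshold
#     unless_uid         => [1050, '1100'],  # additionally protect these specific UIDs
#   }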
def check(resource)
@checkmethod ||= "#{self[:name]}_check"
@hascheck ||= respond_to?(@checkmethod)
if @hascheck
return send(@checkmethod, resource)
else
return true
end
end
def able_to_ensure_absent?(resource)
resource[:ensure] = :absent
rescue ArgumentError, Puppet::Error
err "The 'ensure' attribute on #{self[:name]} resources does not accept 'absent' as a value"
false
end
# Generate any new resources we need to manage. This is pretty hackish
# right now, because it only supports purging.
def generate
return [] unless self.purge?
resource_type.instances.
reject { |r| catalog.resource_refs.include? r.ref }.
select { |r| check(r) }.
select { |r| r.class.validproperty?(:ensure) }.
select { |r| able_to_ensure_absent?(r) }.
each { |resource|
@parameters.each do |name, param|
resource[name] = param.value if param.metaparam?
end
# Mark that we're purging, so transactions can handle relationships
# correctly
resource.purging
}
end
def resource_type
unless defined?(@resource_type)
unless type = Puppet::Type.type(self[:name])
raise Puppet::DevError, "Could not find resource type"
end
@resource_type = type
end
@resource_type
end
+ def self.deprecate_params(title,params)
+ return unless params
+ if title == 'cron' and ! params.select { |param| param.name.intern == :purge and param.value == true }.empty?
+ Puppet.deprecation_warning("Change notice: purging cron entries will be more aggressive in future versions, take care when updating your agents. See http://links.puppetlabs.com/puppet-aggressive-cron-purge")
+ end
+ end
+
# Make sure we don't purge users with specific uids
def user_check(resource)
return true unless self[:name] == "user"
return true unless self[:unless_system_user]
resource[:audit] = :uid
current_values = resource.retrieve_resource
current_uid = current_values[resource.property(:uid)]
unless_uids = self[:unless_uid]
return false if system_users.include?(resource[:name])
-
- if unless_uids && unless_uids.length > 0
- unless_uids.each do |unless_uid|
- return false if unless_uid == current_uid
- return false if unless_uid.respond_to?('include?') && unless_uid.include?(current_uid)
- end
- end
+ return false if unless_uids && unless_uids.include?(current_uid)
current_uid > self[:unless_system_user]
end
def system_users
%w{root nobody bin noaccess daemon sys}
end
+
+ def self.system_users_max_uid
+ return @system_users_max_uid if @system_users_max_uid
+
+ # First try to read the minimum user id from login.defs
+ if Puppet::FileSystem.exist?('/etc/login.defs')
+ @system_users_max_uid = Puppet::FileSystem.each_line '/etc/login.defs' do |line|
+ break $1.to_i - 1 if line =~ /^\s*UID_MIN\s+(\d+)(\s*#.*)?$/
+ end
+ end
+
+ # Otherwise, use a sensible default based on the OS family
+ @system_users_max_uid ||= case Facter.value(:osfamily)
+ when 'OpenBSD', 'FreeBSD'
+ 999
+ else
+ 499
+ end
+
+ @system_users_max_uid
+ end
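+ # Worked example of the lookup above: given an /etc/login.defs line of
+ # "UID_MIN   1000", the loop breaks with 1000 - 1, so system users are taken
+ # to be those with UIDs up to 999; if no UID_MIN is found, the osfamily
+ # fallback applies (999 on OpenBSD/FreeBSD, otherwise 499).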
+
+ def self.reset_system_users_max_uid!
+ @system_users_max_uid = nil
+ end
end
diff --git a/lib/puppet/type/ssh_authorized_key.rb b/lib/puppet/type/ssh_authorized_key.rb
index 12c8294b1..8b40907f3 100644
--- a/lib/puppet/type/ssh_authorized_key.rb
+++ b/lib/puppet/type/ssh_authorized_key.rb
@@ -1,115 +1,153 @@
module Puppet
newtype(:ssh_authorized_key) do
- @doc = "Manages SSH authorized keys. Currently only type 2 keys are
- supported.
+ @doc = "Manages SSH authorized keys. Currently only type 2 keys are supported.
- **Autorequires:** If Puppet is managing the user account in which this
- SSH key should be installed, the `ssh_authorized_key` resource will autorequire
- that user."
+ In their native habitat, SSH keys usually appear as a single long line. This
+ resource type requires you to split that line into several attributes. Thus, a
+ key that appears in your `~/.ssh/id_rsa.pub` file like this...
+
+ ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAy5mtOAMHwA2ZAIfW6Ap70r+I4EclYHEec5xIN59ROUjss23Skb1OtjzYpVPaPH8mSdSmsN0JHaBLiRcu7stl4O8D8zA4mz/vw32yyQ/Kqaxw8l0K76k6t2hKOGqLTY4aFbFISV6GDh7MYLn8KU7cGp96J+caO5R5TqtsStytsUhSyqH+iIDh4e4+BrwTc6V4Y0hgFxaZV5d18mLA4EPYKeG5+zyBCVu+jueYwFqM55E0tHbfiaIN9IzdLV+7NEEfdLkp6w2baLKPqWUBmuvPF1Mn3FwaFLjVsMT3GQeMue6b3FtUdTDeyAYoTxrsRo/WnDkS6Pa3YhrFwjtUqXfdaQ== nick@magpie.puppetlabs.lan
+
+ ...would translate to the following resource:
+
+ ssh_authorized_key { 'nick@magpie.puppetlabs.lan':
+ user => 'nick',
+ type => 'ssh-rsa',
+ key => 'AAAAB3NzaC1yc2EAAAABIwAAAQEAy5mtOAMHwA2ZAIfW6Ap70r+I4EclYHEec5xIN59ROUjss23Skb1OtjzYpVPaPH8mSdSmsN0JHaBLiRcu7stl4O8D8zA4mz/vw32yyQ/Kqaxw8l0K76k6t2hKOGqLTY4aFbFISV6GDh7MYLn8KU7cGp96J+caO5R5TqtsStytsUhSyqH+iIDh4e4+BrwTc6V4Y0hgFxaZV5d18mLA4EPYKeG5+zyBCVu+jueYwFqM55E0tHbfiaIN9IzdLV+7NEEfdLkp6w2baLKPqWUBmuvPF1Mn3FwaFLjVsMT3GQeMue6b3FtUdTDeyAYoTxrsRo/WnDkS6Pa3YhrFwjtUqXfdaQ==',
+ }
+
+ To ensure that only the currently approved keys are present, you can purge
+ unmanaged SSH keys on a per-user basis. Do this with the `user` resource
+ type's `purge_ssh_keys` attribute:
+
+ user { 'nick':
+ ensure => present,
+ purge_ssh_keys => true,
+ }
+
+ This will remove any keys in `~/.ssh/authorized_keys` that aren't being
+ managed with `ssh_authorized_key` resources. See the documentation of the
+ `user` type for more details.
+
+ **Autorequires:** If Puppet is managing the user account in which this
+ SSH key should be installed, the `ssh_authorized_key` resource will autorequire
+ that user."
ensurable
newparam(:name) do
desc "The SSH key comment. This attribute is currently used as a
- system-wide primary key and therefore has to be unique."
+ system-wide primary key and therefore has to be unique."
isnamevar
end
newproperty(:type) do
- desc "The encryption type used: ssh-dss or ssh-rsa."
+ desc "The encryption type used."
newvalues :'ssh-dss', :'ssh-rsa', :'ecdsa-sha2-nistp256', :'ecdsa-sha2-nistp384', :'ecdsa-sha2-nistp521', :'ssh-ed25519'
aliasvalue(:dsa, :'ssh-dss')
aliasvalue(:ed25519, :'ssh-ed25519')
aliasvalue(:rsa, :'ssh-rsa')
end
newproperty(:key) do
- desc "The public key itself; generally a long string of hex characters. The key attribute
- may not contain whitespace: Omit key headers (e.g. 'ssh-rsa') and key identifiers
- (e.g. 'joe@joescomputer.local') found in the public key file."
+ desc "The public key itself; generally a long string of hex characters. The `key`
+ attribute may not contain whitespace.
+
+ Make sure to omit the following in this attribute (and specify them in
+ other attributes):
+
+ * Key headers (e.g. 'ssh-rsa') --- put these in the `type` attribute.
+ * Key identifiers / comments (e.g. 'joe@joescomputer.local') --- put these in
+ the `name` attribute/resource title."
validate do |value|
raise Puppet::Error, "Key must not contain whitespace: #{value}" if value =~ /\s/
end
end
newproperty(:user) do
- desc "The user account in which the SSH key should be installed.
- The resource will automatically depend on this user."
+ desc "The user account in which the SSH key should be installed. The resource
+ will autorequire this user if it is being managed as a `user` resource."
end
newproperty(:target) do
desc "The absolute filename in which to store the SSH key. This
- property is optional and should only be used in cases where keys
- are stored in a non-standard location (i.e.` not in
- `~user/.ssh/authorized_keys`)."
+ property is optional and should only be used in cases where keys
+ are stored in a non-standard location (i.e. not in
+ `~user/.ssh/authorized_keys`)."
defaultto :absent
def should
return super if defined?(@should) and @should[0] != :absent
return nil unless user = resource[:user]
begin
return File.expand_path("~#{user}/.ssh/authorized_keys")
rescue
Puppet.debug "The required user is not yet present on the system"
return nil
end
end
def insync?(is)
is == should
end
end
newproperty(:options, :array_matching => :all) do
- desc "Key options, see sshd(8) for possible values. Multiple values
+ desc "Key options; see sshd(8) for possible values. Multiple values
should be specified as an array."
defaultto do :absent end
def is_to_s(value)
if value == :absent or value.include?(:absent)
super
else
value.join(",")
end
end
def should_to_s(value)
if value == :absent or value.include?(:absent)
super
else
value.join(",")
end
end
validate do |value|
unless value == :absent or value =~ /^[-a-z0-9A-Z_]+(?:=\".*?\")?$/
raise Puppet::Error, "Option #{value} is not valid. A single option must either be of the form 'option' or 'option=\"value\". Multiple options must be provided as an array"
end
end
end
autorequire(:user) do
should(:user) if should(:user)
end
validate do
# Go ahead if target attribute is defined
return if @parameters[:target].shouldorig[0] != :absent
# Go ahead if user attribute is defined
return if @parameters.include?(:user)
# If neither target nor user is defined, this is an error
raise Puppet::Error, "Attribute 'user' or 'target' is mandatory"
end
+
+ # regular expression suitable for use by a ParsedFile based provider
+ REGEX = /^(?:(.+) )?(ssh-dss|ssh-ed25519|ssh-rsa|ecdsa-sha2-nistp256|ecdsa-sha2-nistp384|ecdsa-sha2-nistp521) ([^ ]+) ?(.*)$/
+ def self.keyline_regex
+ REGEX
+ end
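# Hedged illustration of the capture groups above (the sample line is
# illustrative): matching 'command="uptime" ssh-rsa AAAAB3Nza... root@example.com'
# yields options, type, key, and comment as $1..$4 respectively; a line with
# no leading options leaves $1 as nil.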
end
end
diff --git a/lib/puppet/type/sshkey.rb b/lib/puppet/type/sshkey.rb
index db3c52c85..2a3d5fedf 100644
--- a/lib/puppet/type/sshkey.rb
+++ b/lib/puppet/type/sshkey.rb
@@ -1,73 +1,73 @@
module Puppet
newtype(:sshkey) do
@doc = "Installs and manages ssh host keys. At this point, this type
only knows how to install keys into `/etc/ssh/ssh_known_hosts`. See
the `ssh_authorized_key` type to manage authorized keys."
ensurable
newproperty(:type) do
desc "The encryption type used. Probably ssh-dss or ssh-rsa."
newvalues :'ssh-dss', :'ssh-ed25519', :'ssh-rsa', :'ecdsa-sha2-nistp256', :'ecdsa-sha2-nistp384', :'ecdsa-sha2-nistp521'
aliasvalue(:dsa, :'ssh-dss')
aliasvalue(:ed25519, :'ssh-ed25519')
aliasvalue(:rsa, :'ssh-rsa')
end
newproperty(:key) do
- desc "The key itself; generally a long string of hex digits."
+ desc "The key itself; generally a long string of uuencoded characters."
end
# FIXME This should automagically check for aliases to the hosts, just
# to see if we can automatically glean any aliases.
newproperty(:host_aliases) do
desc 'Any aliases the host might have. Multiple values must be
specified as an array.'
attr_accessor :meta
def insync?(is)
is == @should
end
# We actually want to return the whole array here, not just the first
# value.
def should
defined?(@should) ? @should : nil
end
validate do |value|
if value =~ /\s/
raise Puppet::Error, "Aliases cannot include whitespace"
end
if value =~ /,/
raise Puppet::Error, "Aliases must be provided as an array, not a comma-separated list"
end
end
end
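# A hedged manifest sketch using host_aliases (the host name, key, and alias
# values are illustrative; note the array form required by the validation
# above):
#
#   sshkey { 'git.example.com':
#     ensure       => present,
#     type         => 'ssh-rsa',
#     key          => 'AAAAB3Nza...',
#     host_aliases => ['git', '192.0.2.10'],
#   }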
newparam(:name) do
desc "The host name that the key is associated with."
isnamevar
validate do |value|
raise Puppet::Error, "Resourcename cannot include whitespaces" if value =~ /\s/
raise Puppet::Error, "No comma in resourcename allowed. If you want to specify aliases use the host_aliases property" if value.include?(',')
end
end
newproperty(:target) do
desc "The file in which to store the ssh key. Only used by
the `parsed` provider."
defaultto { if @resource.class.defaultprovider.ancestors.include?(Puppet::Provider::ParsedFile)
@resource.class.defaultprovider.default_target
else
nil
end
}
end
end
end
diff --git a/lib/puppet/type/user.rb b/lib/puppet/type/user.rb
index 15099566c..b9b26295f 100644
--- a/lib/puppet/type/user.rb
+++ b/lib/puppet/type/user.rb
@@ -1,682 +1,688 @@
require 'etc'
require 'facter'
require 'puppet/parameter/boolean'
require 'puppet/property/list'
require 'puppet/property/ordered_list'
require 'puppet/property/keyvalue'
module Puppet
newtype(:user) do
@doc = "Manage users. This type is mostly built to manage system
users, so it is lacking some features useful for managing normal
users.
This resource type uses the prescribed native tools for creating
groups and generally uses POSIX APIs for retrieving information
about them. It does not directly modify `/etc/passwd` or similar files.
**Autorequires:** If Puppet is managing the user's primary group (as
provided in the `gid` attribute), the user resource will autorequire
that group. If Puppet is managing any role accounts corresponding to the
user's roles, the user resource will autorequire those role accounts."
feature :allows_duplicates,
"The provider supports duplicate users with the same UID."
feature :manages_homedir,
"The provider can create and remove home directories."
feature :manages_passwords,
"The provider can modify user passwords, by accepting a password
hash."
feature :manages_password_age,
"The provider can set age requirements and restrictions for
passwords."
feature :manages_password_salt,
"The provider can set a password salt. This is for providers that
implement PBKDF2 passwords with salt properties."
feature :manages_solaris_rbac,
"The provider can manage roles and normal users"
feature :manages_expiry,
"The provider can manage the expiry date for a user."
feature :system_users,
"The provider allows you to create system users with lower UIDs."
feature :manages_aix_lam,
"The provider can manage AIX Loadable Authentication Module (LAM) system."
feature :libuser,
"Allows local users to be managed on systems that also use some other
remote NSS method of managing accounts."
feature :manages_shell,
"The provider allows for setting shell and validates if possible"
newproperty(:ensure, :parent => Puppet::Property::Ensure) do
newvalue(:present, :event => :user_created) do
provider.create
end
newvalue(:absent, :event => :user_removed) do
provider.delete
end
newvalue(:role, :event => :role_created, :required_features => :manages_solaris_rbac) do
provider.create_role
end
desc "The basic state that the object should be in."
# If they're talking about the thing at all, they generally want to
# say it should exist.
defaultto do
if @resource.managed?
:present
else
nil
end
end
def retrieve
if provider.exists?
if provider.respond_to?(:is_role?) and provider.is_role?
return :role
else
return :present
end
else
return :absent
end
end
end
newproperty(:home) do
desc "The home directory of the user. The directory must be created
separately and is not currently checked for existence."
end
newproperty(:uid) do
desc "The user ID; must be specified numerically. If no user ID is
specified when creating a new user, then one will be chosen
automatically. This will likely result in the same user having
different UIDs on different systems, which is not recommended. This is
especially noteworthy when managing the same user on both Darwin and
other platforms, since Puppet does UID generation on Darwin, but
the underlying tools do so on other platforms.
On Windows, this property is read-only and will return the user's
security identifier (SID)."
munge do |value|
case value
when String
if value =~ /^[-0-9]+$/
value = Integer(value)
end
end
return value
end
end
newproperty(:gid) do
desc "The user's primary group. Can be specified numerically or by name.
This attribute is not supported on Windows systems; use the `groups`
attribute instead. (On Windows, designating a primary group is only
meaningful for domain accounts, which Puppet does not currently manage.)"
munge do |value|
if value.is_a?(String) and value =~ /^[-0-9]+$/
Integer(value)
else
value
end
end
def insync?(is)
# We know the 'is' is a number, so we need to convert the 'should' to a number,
# too.
@should.each do |value|
return true if number = Puppet::Util.gid(value) and is == number
end
false
end
def sync
found = false
@should.each do |value|
if number = Puppet::Util.gid(value)
provider.gid = number
found = true
break
end
end
fail "Could not find group(s) #{@should.join(",")}" unless found
# Use the default event.
end
end
newproperty(:comment) do
desc "A description of the user. Generally the user's full name."
munge do |v|
v.respond_to?(:force_encoding) ? v.force_encoding(Encoding::ASCII_8BIT) : v
end
end
newproperty(:shell, :required_features => :manages_shell) do
desc "The user's login shell. The shell must exist and be
executable.
This attribute cannot be managed on Windows systems."
end
newproperty(:password, :required_features => :manages_passwords) do
desc %q{The user's password, in whatever encrypted format the local
system requires.
* Most modern Unix-like systems use salted SHA1 password hashes. You can use
Puppet's built-in `sha1` function to generate a hash from a password.
* Mac OS X 10.5 and 10.6 also use salted SHA1 hashes.
* Mac OS X 10.7 (Lion) uses salted SHA512 hashes. The Puppet Labs [stdlib][]
module contains a `str2saltedsha512` function which can generate password
hashes for Lion.
* Mac OS X 10.8 and higher use salted SHA512 PBKDF2 hashes. When
managing passwords on these systems the salt and iterations properties
need to be specified as well as the password.
* Windows passwords can only be managed in cleartext, as there is no Windows API
for setting the password hash.
[stdlib]: https://github.com/puppetlabs/puppetlabs-stdlib/
Be sure to enclose any value that includes a dollar sign ($) in single
quotes (') to avoid accidental variable interpolation.}
validate do |value|
raise ArgumentError, "Passwords cannot include ':'" if value.is_a?(String) and value.include?(":")
end
def change_to_s(currentvalue, newvalue)
if currentvalue == :absent
return "created password"
else
return "changed password"
end
end
def is_to_s( currentvalue )
return '[old password hash redacted]'
end
def should_to_s( newvalue )
return '[new password hash redacted]'
end
end
newproperty(:password_min_age, :required_features => :manages_password_age) do
desc "The minimum number of days a password must be used before it may be changed."
munge do |value|
case value
when String
Integer(value)
else
value
end
end
validate do |value|
if value.to_s !~ /^-?\d+$/
raise ArgumentError, "Password minimum age must be provided as a number."
end
end
end
newproperty(:password_max_age, :required_features => :manages_password_age) do
desc "The maximum number of days a password may be used before it must be changed."
munge do |value|
case value
when String
Integer(value)
else
value
end
end
validate do |value|
if value.to_s !~ /^-?\d+$/
raise ArgumentError, "Password maximum age must be provided as a number."
end
end
end
newproperty(:groups, :parent => Puppet::Property::List) do
desc "The groups to which the user belongs. The primary group should
not be listed, and groups should be identified by name rather than by
GID. Multiple groups should be specified as an array."
validate do |value|
if value =~ /^\d+$/
raise ArgumentError, "Group names must be provided, not GID numbers."
end
raise ArgumentError, "Group names must be provided as an array, not a comma-separated list." if value.include?(",")
raise ArgumentError, "Group names must not be empty. If you want to specify \"no groups\" pass an empty array" if value.empty?
end
end
newparam(:name) do
desc "The user name. While naming limitations vary by operating system,
it is advisable to restrict names to the lowest common denominator,
which is a maximum of 8 characters beginning with a letter.
Note that Puppet considers user names to be case-sensitive, regardless
of the platform's own rules; be sure to always use the same case when
referring to a given user."
isnamevar
end
newparam(:membership) do
desc "Whether specified groups should be considered the **complete list**
(`inclusive`) or the **minimum list** (`minimum`) of groups to which
the user belongs. Defaults to `minimum`."
newvalues(:inclusive, :minimum)
defaultto :minimum
end
newparam(:system, :boolean => true, :parent => Puppet::Parameter::Boolean) do
desc "Whether the user is a system user, according to the OS's criteria;
on most platforms, a UID less than or equal to 500 indicates a system
- user. Defaults to `false`."
+ user. This parameter is only used when the resource is created and will
+ not affect the UID when the user is present. Defaults to `false`."
defaultto false
end
newparam(:allowdupe, :boolean => true, :parent => Puppet::Parameter::Boolean) do
desc "Whether to allow duplicate UIDs. Defaults to `false`."
defaultto false
end
newparam(:managehome, :boolean => true, :parent => Puppet::Parameter::Boolean) do
desc "Whether to manage the home directory when managing the user.
This will create the home directory when `ensure => present`, and
delete the home directory when `ensure => absent`. Defaults to `false`."
defaultto false
validate do |val|
if munge(val)
raise ArgumentError, "User provider #{provider.class.name} can not manage home directories" if provider and not provider.class.manages_homedir?
end
end
end
newproperty(:expiry, :required_features => :manages_expiry) do
desc "The expiry date for this user. Must be provided in
a zero-padded YYYY-MM-DD format --- e.g. 2010-02-19.
If you want to make sure the user account never
expires, you can pass the special value `absent`."
newvalues :absent
newvalues /^\d{4}-\d{2}-\d{2}$/
validate do |value|
if value.intern != :absent and value !~ /^\d{4}-\d{2}-\d{2}$/
raise ArgumentError, "Expiry dates must be YYYY-MM-DD or the string \"absent\""
end
end
end
# Autorequire the group, if it's around
autorequire(:group) do
autos = []
if obj = @parameters[:gid] and groups = obj.shouldorig
groups = groups.collect { |group|
if group =~ /^\d+$/
Integer(group)
else
group
end
}
groups.each { |group|
case group
when Integer
if resource = catalog.resources.find { |r| r.is_a?(Puppet::Type.type(:group)) and r.should(:gid) == group }
autos << resource
end
else
autos << group
end
}
end
if obj = @parameters[:groups] and groups = obj.should
autos += groups.split(",")
end
autos
end
# This method has been exposed for puppet to manage users and groups of
# files in its settings and should not be considered available outside of
# puppet.
#
# (see Puppet::Settings#service_user_available?)
#
# @return [Boolean] if the user exists on the system
# @api private
def exists?
provider.exists?
end
def retrieve
absent = false
properties.inject({}) { |prophash, property|
current_value = :absent
if absent
prophash[property] = :absent
else
current_value = property.retrieve
prophash[property] = current_value
end
if property.name == :ensure and current_value == :absent
absent = true
end
prophash
}
end
newproperty(:roles, :parent => Puppet::Property::List, :required_features => :manages_solaris_rbac) do
desc "The roles the user has. Multiple roles should be
specified as an array."
def membership
:role_membership
end
validate do |value|
if value =~ /^\d+$/
raise ArgumentError, "Role names must be provided, not numbers"
end
raise ArgumentError, "Role names must be provided as an array, not a comma-separated list" if value.include?(",")
end
end
#autorequire the roles that the user has
autorequire(:user) do
reqs = []
if roles_property = @parameters[:roles] and roles = roles_property.should
reqs += roles.split(',')
end
reqs
end
newparam(:role_membership) do
desc "Whether specified roles should be considered the **complete list**
(`inclusive`) or the **minimum list** (`minimum`) of roles the user
has. Defaults to `minimum`."
newvalues(:inclusive, :minimum)
defaultto :minimum
end
newproperty(:auths, :parent => Puppet::Property::List, :required_features => :manages_solaris_rbac) do
desc "The auths the user has. Multiple auths should be
specified as an array."
def membership
:auth_membership
end
validate do |value|
if value =~ /^\d+$/
raise ArgumentError, "Auth names must be provided, not numbers"
end
raise ArgumentError, "Auth names must be provided as an array, not a comma-separated list" if value.include?(",")
end
end
newparam(:auth_membership) do
desc "Whether specified auths should be considered the **complete list**
(`inclusive`) or the **minimum list** (`minimum`) of auths the user
has. Defaults to `minimum`."
newvalues(:inclusive, :minimum)
defaultto :minimum
end
newproperty(:profiles, :parent => Puppet::Property::OrderedList, :required_features => :manages_solaris_rbac) do
desc "The profiles the user has. Multiple profiles should be
specified as an array."
def membership
:profile_membership
end
validate do |value|
if value =~ /^\d+$/
raise ArgumentError, "Profile names must be provided, not numbers"
end
raise ArgumentError, "Profile names must be provided as an array, not a comma-separated list" if value.include?(",")
end
end
newparam(:profile_membership) do
desc "Whether specified roles should be treated as the **complete list**
(`inclusive`) or the **minimum list** (`minimum`) of roles
of which the user is a member. Defaults to `minimum`."
newvalues(:inclusive, :minimum)
defaultto :minimum
end
newproperty(:keys, :parent => Puppet::Property::KeyValue, :required_features => :manages_solaris_rbac) do
desc "Specify user attributes in an array of key = value pairs."
def membership
:key_membership
end
validate do |value|
raise ArgumentError, "Key/value pairs must be separated by an =" unless value.include?("=")
end
end
newparam(:key_membership) do
desc "Whether specified key/value pairs should be considered the
**complete list** (`inclusive`) or the **minimum list** (`minimum`) of
the user's attributes. Defaults to `minimum`."
newvalues(:inclusive, :minimum)
defaultto :minimum
end
newproperty(:project, :required_features => :manages_solaris_rbac) do
desc "The name of the project associated with a user."
end
newparam(:ia_load_module, :required_features => :manages_aix_lam) do
desc "The name of the I&A module to use to manage this user."
end
newproperty(:attributes, :parent => Puppet::Property::KeyValue, :required_features => :manages_aix_lam) do
desc "Specify AIX attributes for the user in an array of attribute = value pairs."
def membership
:attribute_membership
end
def delimiter
" "
end
validate do |value|
raise ArgumentError, "Attributes value pairs must be separated by an =" unless value.include?("=")
end
end
newparam(:attribute_membership) do
desc "Whether specified attribute value pairs should be treated as the
**complete list** (`inclusive`) or the **minimum list** (`minimum`) of
attribute/value pairs for the user. Defaults to `minimum`."
newvalues(:inclusive, :minimum)
defaultto :minimum
end
newproperty(:salt, :required_features => :manages_password_salt) do
desc "This is the 32 byte salt used to generate the PBKDF2 password used in
OS X. This field is required for managing passwords on OS X >= 10.8."
end
newproperty(:iterations, :required_features => :manages_password_salt) do
desc "This is the number of iterations of a chained computation of the
password hash (http://en.wikipedia.org/wiki/PBKDF2). This parameter
is used in OS X. This field is required for managing passwords on OS X >= 10.8."
munge do |value|
if value.is_a?(String) and value =~/^[-0-9]+$/
Integer(value)
else
value
end
end
end
newparam(:forcelocal, :boolean => true,
:required_features => :libuser,
:parent => Puppet::Parameter::Boolean) do
- desc "Forces the mangement of local accounts when accounts are also
+ desc "Forces the management of local accounts when accounts are also
being managed by some other NSS method."
defaultto false
end
def generate
return [] if self[:purge_ssh_keys].empty?
find_unmanaged_keys
end
newparam(:purge_ssh_keys) do
- desc "Purge ssh keys authorized for the user
- if they are not managed via ssh_authorized_keys. When true,
- looks for keys in .ssh/authorized_keys in the user's home
- directory. Possible values are true, false, or an array of
- paths to file to search for authorized keys. If a path starts
- with ~ or %h, this token is replaced with the user's home directory."
+ desc "Whether to purge authorized SSH keys for this user if they are not managed
+ with the `ssh_authorized_key` resource type. Allowed values are:
+
+ * `false` (default) --- don't purge SSH keys for this user.
+ * `true` --- look for keys in the `.ssh/authorized_keys` file in the user's
+ home directory. Purge any keys that aren't managed as `ssh_authorized_key`
+ resources.
+ * An array of file paths --- look for keys in all of the files listed. Purge
+ any keys that aren't managed as `ssh_authorized_key` resources. If any of
+ these paths starts with `~` or `%h`, that token will be replaced with
+ the user's home directory."
defaultto :false
# Use Symbols instead of booleans until PUP-1967 is resolved.
newvalues(:true, :false)
validate do |value|
if [ :true, :false ].include? value.to_s.intern
return
end
value = [ value ] if value.is_a?(String)
if value.is_a?(Array)
value.each do |entry|
raise ArgumentError, "Each entry for purge_ssh_keys must be a string, not a #{entry.class}" unless entry.is_a?(String)
valid_home = Puppet::Util.absolute_path?(entry) || entry =~ %r{^~/|^%h/}
raise ArgumentError, "Paths to keyfiles must be absolute, not #{entry}" unless valid_home
end
return
end
raise ArgumentError, "purge_ssh_keys must be true, false, or an array of file names, not #{value.inspect}"
end
munge do |value|
# Resolve string, boolean and symbol forms of true and false to a
# single representation.
test_sym = value.to_s.intern
value = test_sym if [:true, :false].include? test_sym
return [] if value == :false
home = resource[:home]
if value == :true and not home
raise ArgumentError, "purge_ssh_keys can only be true for users with a defined home directory"
end
return [ "#{home}/.ssh/authorized_keys" ] if value == :true
# value is an array - munge each value
[ value ].flatten.map do |entry|
if entry =~ /^~|^%h/ and not home
raise ArgumentError, "purge_ssh_keys value '#{value}' meta character ~ or %h only allowed for users with a defined home directory"
end
entry.gsub!(/^~\//, "#{home}/")
entry.gsub!(/^%h\//, "#{home}/")
entry
end
end
end
# Generate ssh_authorized_keys resources for purging. The key files are
# taken from the purge_ssh_keys parameter. The generated resources inherit
# all metaparameters from the parent user resource.
#
# @return [Array<Puppet::Type::Ssh_authorized_key>] a list of resources
# representing the found keys
# @see generate
# @api private
def find_unmanaged_keys
self[:purge_ssh_keys].
select { |f| File.readable?(f) }.
map { |f| unknown_keys_in_file(f) }.
flatten.each do |res|
res[:ensure] = :absent
res[:user] = self[:name]
@parameters.each do |name, param|
res[name] = param.value if param.metaparam?
end
end
end
# Parse an ssh authorized keys file superficially, extracting the comments
# on the keys. These are considered names of possible ssh_authorized_key
# resources. Keys that are managed by the present catalog are ignored.
#
# @see generate
# @api private
# @return [Array<Puppet::Type::Ssh_authorized_key>] a list of resources
# representing the found keys
def unknown_keys_in_file(keyfile)
names = []
File.new(keyfile).each do |line|
- next if line.strip.empty?
- next if line =~ /^\s*#/
- names << line.strip.split.last
+ next unless line =~ Puppet::Type.type(:ssh_authorized_key).keyline_regex
+ # the name is stored in the 4th capture of the regex
+ names << $4
end
names.map { |keyname|
Puppet::Type.type(:ssh_authorized_key).new(
:name => keyname,
:target => keyfile)
}.reject { |res|
catalog.resource_refs.include? res.ref
}
end
end
end
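
A standalone sketch (not part of the patch) mirroring the purge_ssh_keys munging documented above: `true` expands to the authorized_keys file under the user's home directory, and a leading `~/` or `%h/` in an explicit path is replaced with that home directory. The helper name and sample paths are illustrative, and the real parameter uses the symbols :true/:false rather than booleans.

def expand_keyfiles(value, home)
  return []                               if value == false
  return ["#{home}/.ssh/authorized_keys"] if value == true
  Array(value).map do |entry|
    entry.sub(%r{^~/}, "#{home}/").sub(%r{^%h/}, "#{home}/")
  end
end

expand_keyfiles(true, '/home/alice')
# => ["/home/alice/.ssh/authorized_keys"]
expand_keyfiles(['%h/.ssh/authorized_keys2', '/etc/ssh/keys/alice'], '/home/alice')
# => ["/home/alice/.ssh/authorized_keys2", "/etc/ssh/keys/alice"]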
diff --git a/lib/puppet/type/yumrepo.rb b/lib/puppet/type/yumrepo.rb
index d6de1b369..028717071 100644
--- a/lib/puppet/type/yumrepo.rb
+++ b/lib/puppet/type/yumrepo.rb
@@ -1,312 +1,363 @@
require 'uri'
Puppet::Type.newtype(:yumrepo) do
@doc = "The client-side description of a yum repository. Repository
configurations are found by parsing `/etc/yum.conf` and
the files indicated by the `reposdir` option in that file
(see `yum.conf(5)` for details).
Most parameters are identical to the ones documented
in the `yum.conf(5)` man page.
Continuation lines that yum supports (for the `baseurl`, for example)
are not supported. This type does not attempt to read or verify the
existence of files listed in the `include` attribute."
# Ensure yumrepos can be removed too.
ensurable
# Doc string for properties that can be made 'absent'
ABSENT_DOC="Set this to `absent` to remove it from the file completely."
# False can be false/0/no and True can be true/1/yes in yum.
- YUM_BOOLEAN=/(True|False|0|1|No|Yes)/i
+ YUM_BOOLEAN=/^(True|False|0|1|No|Yes)$/i
YUM_BOOLEAN_DOC="Valid values are: False/0/No or True/1/Yes."
VALID_SCHEMES = %w[file http https ftp]
newparam(:name, :namevar => true) do
desc "The name of the repository. This corresponds to the
`repositoryid` parameter in `yum.conf(5)`."
end
newparam(:target) do
desc "The filename to write the yum repository to."
defaultto :absent
end
newproperty(:descr) do
desc "A human-readable description of the repository.
This corresponds to the name parameter in `yum.conf(5)`.
#{ABSENT_DOC}"
newvalues(/.*/, :absent)
end
newproperty(:mirrorlist) do
desc "The URL that holds the list of mirrors for this repository.
#{ABSENT_DOC}"
newvalues(/.*/, :absent)
validate do |value|
next if value.to_s == 'absent'
parsed = URI.parse(value)
unless VALID_SCHEMES.include?(parsed.scheme)
raise "Must be a valid URL"
end
end
end
newproperty(:baseurl) do
desc "The URL for this repository. #{ABSENT_DOC}"
newvalues(/.*/, :absent)
validate do |value|
next if value.to_s == 'absent'
value.split(/\s+/).each do |uri|
parsed = URI.parse(uri)
unless VALID_SCHEMES.include?(parsed.scheme)
raise "Must be a valid URL"
end
end
end
end
newproperty(:enabled) do
desc "Whether this repository is enabled.
#{YUM_BOOLEAN_DOC}
#{ABSENT_DOC}"
newvalues(YUM_BOOLEAN, :absent)
end
newproperty(:gpgcheck) do
desc "Whether to check the GPG signature on packages installed
from this repository.
#{YUM_BOOLEAN_DOC}
#{ABSENT_DOC}"
newvalues(YUM_BOOLEAN, :absent)
end
newproperty(:repo_gpgcheck) do
desc "Whether to check the GPG signature on repodata.
#{YUM_BOOLEAN_DOC}
#{ABSENT_DOC}"
newvalues(YUM_BOOLEAN, :absent)
end
newproperty(:gpgkey) do
desc "The URL for the GPG key with which packages from this
repository are signed. #{ABSENT_DOC}"
newvalues(/.*/, :absent)
validate do |value|
next if value.to_s == 'absent'
value.split(/\s+/).each do |uri|
parsed = URI.parse(uri)
unless VALID_SCHEMES.include?(parsed.scheme)
raise "Must be a valid URL"
end
end
end
end
+ newproperty(:mirrorlist_expire) do
+ desc "Time (in seconds) after which the mirrorlist locally cached
+ will expire.\n#{ABSENT_DOC}"
+
+ newvalues(/^[0-9]+$/, :absent)
+ end
+
newproperty(:include) do
desc "The URL of a remote file containing additional yum configuration
settings. Puppet does not check for this file's existence or validity.
#{ABSENT_DOC}"
newvalues(/.*/, :absent)
validate do |value|
next if value.to_s == 'absent'
parsed = URI.parse(value)
unless VALID_SCHEMES.include?(parsed.scheme)
raise "Must be a valid URL"
end
end
end
newproperty(:exclude) do
desc "List of shell globs. Matching packages will never be
considered in updates or installs for this repo.
#{ABSENT_DOC}"
newvalues(/.*/, :absent)
end
+ newproperty(:gpgcakey) do
+ desc "The URL for the GPG CA key for this repository. #{ABSENT_DOC}"
+
+ newvalues(/.*/, :absent)
+ validate do |value|
+ next if value.to_s == 'absent'
+ parsed = URI.parse(value)
+
+ unless VALID_SCHEMES.include?(parsed.scheme)
+ raise "Must be a valid URL"
+ end
+ end
+ end
+
newproperty(:includepkgs) do
desc "List of shell globs. If this is set, only packages
matching one of the globs will be considered for
update or install from this repo. #{ABSENT_DOC}"
newvalues(/.*/, :absent)
end
newproperty(:enablegroups) do
desc "Whether yum will allow the use of package groups for this
repository.
#{YUM_BOOLEAN_DOC}
#{ABSENT_DOC}"
newvalues(YUM_BOOLEAN, :absent)
end
newproperty(:failovermethod) do
desc "The failover method for this repository; should be either
`roundrobin` or `priority`. #{ABSENT_DOC}"
- newvalues(/roundrobin|priority/, :absent)
+ newvalues(/^roundrobin|priority$/, :absent)
end
newproperty(:keepalive) do
desc "Whether HTTP/1.1 keepalive should be used with this repository.
#{YUM_BOOLEAN_DOC}
#{ABSENT_DOC}"
newvalues(YUM_BOOLEAN, :absent)
end
+ newproperty(:retries) do
+ desc "Set the number of times any attempt to retrieve a file should
+ retry before returning an error. Setting this to `0` makes yum
+ try forever.\n#{ABSENT_DOC}"
+
+ newvalues(/^[0-9]+$/, :absent)
+ end
+
newproperty(:http_caching) do
desc "What to cache from this repository. #{ABSENT_DOC}"
- newvalues(/(packages|all|none)/, :absent)
+ newvalues(/^(packages|all|none)$/, :absent)
end
newproperty(:timeout) do
desc "Number of seconds to wait for a connection before timing
out. #{ABSENT_DOC}"
- newvalues(/[0-9]+/, :absent)
+ newvalues(/^\d+$/, :absent)
end
newproperty(:metadata_expire) do
desc "Number of seconds after which the metadata will expire.
#{ABSENT_DOC}"
- newvalues(/[0-9]+/, :absent)
+ newvalues(/^([0-9]+[dhm]?|never)$/, :absent)
end
newproperty(:protect) do
desc "Enable or disable protection for this repository. Requires
that the `protectbase` plugin is installed and enabled.
#{YUM_BOOLEAN_DOC}
#{ABSENT_DOC}"
newvalues(YUM_BOOLEAN, :absent)
end
newproperty(:priority) do
desc "Priority of this repository from 1-99. Requires that
the `priorities` plugin is installed and enabled.
#{ABSENT_DOC}"
newvalues(/.*/, :absent)
validate do |value|
next if value.to_s == 'absent'
unless (1..99).include?(value.to_i)
fail("Must be within range 1-99")
end
end
end
+ newproperty(:throttle) do
+ desc "Enable bandwidth throttling for downloads. This option
+ can be expressed as an absolute data rate in bytes/sec or as a
+ percentage (e.g. `60%`). An SI prefix (k, M or G) may be appended
+ to the data rate values.\n#{ABSENT_DOC}"
+
+ newvalues(/^\d+[kMG%]?$/, :absent)
+ end
+
+ newproperty(:bandwidth) do
+ desc "Use to specify the maximum available network bandwidth
+ in bytes/second. Used with the `throttle` option. If `throttle`
+ is a percentage and `bandwidth` is `0` then bandwidth throttling
+ will be disabled. If `throttle` is expressed as a data rate then
+ this option is ignored.\n#{ABSENT_DOC}"
+
+ newvalues(/^\d+[kMG]?$/, :absent)
+ end
+
newproperty(:cost) do
desc "Cost of this repository. #{ABSENT_DOC}"
- newvalues(/\d+/, :absent)
+ newvalues(/^\d+$/, :absent)
end
newproperty(:proxy) do
- desc "URL to the proxy server for this repository. #{ABSENT_DOC}"
+ desc "URL of a proxy server that Yum should use when accessing this repository.
+ This attribute can also be set to `'_none_'`, which will make Yum bypass any
+ global proxy settings when accessing this repository.
+ #{ABSENT_DOC}"
newvalues(/.*/, :absent)
validate do |value|
- next if value.to_s == 'absent'
+ next if value.to_s =~ /^(absent|_none_)$/
parsed = URI.parse(value)
unless VALID_SCHEMES.include?(parsed.scheme)
raise "Must be a valid URL"
end
end
end
newproperty(:proxy_username) do
desc "Username for this proxy. #{ABSENT_DOC}"
newvalues(/.*/, :absent)
end
newproperty(:proxy_password) do
desc "Password for this proxy. #{ABSENT_DOC}"
newvalues(/.*/, :absent)
end
newproperty(:s3_enabled) do
desc "Access the repo via S3.
#{YUM_BOOLEAN_DOC}
#{ABSENT_DOC}"
newvalues(YUM_BOOLEAN, :absent)
end
newproperty(:sslcacert) do
desc "Path to the directory containing the databases of the
certificate authorities yum should use to verify SSL certificates.
#{ABSENT_DOC}"
newvalues(/.*/, :absent)
end
newproperty(:sslverify) do
desc "Should yum verify SSL certificates/hosts at all.
#{YUM_BOOLEAN_DOC}
#{ABSENT_DOC}"
newvalues(YUM_BOOLEAN, :absent)
end
newproperty(:sslclientcert) do
desc "Path to the SSL client certificate yum should use to connect
to repos/remote sites. #{ABSENT_DOC}"
newvalues(/.*/, :absent)
end
newproperty(:sslclientkey) do
desc "Path to the SSL client key yum should use to connect
to repos/remote sites. #{ABSENT_DOC}"
newvalues(/.*/, :absent)
end
newproperty(:metalink) do
desc "Metalink for mirrors. #{ABSENT_DOC}"
newvalues(/.*/, :absent)
validate do |value|
next if value.to_s == 'absent'
parsed = URI.parse(value)
unless VALID_SCHEMES.include?(parsed.scheme)
raise "Must be a valid URL"
end
end
end
newproperty(:skip_if_unavailable) do
desc "Should yum skip this repository if unable to reach it.
#{YUM_BOOLEAN_DOC}
#{ABSENT_DOC}"
newvalues(YUM_BOOLEAN, :absent)
end
end
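
A standalone sketch (not part of the patch) of why the boolean and enumeration regexes above gained ^...$ anchors: an unanchored pattern matches any value that merely contains an accepted word, so anchoring tightens what the yumrepo properties will accept. The sample values are illustrative.

OLD_BOOLEAN = /(True|False|0|1|No|Yes)/i
NEW_BOOLEAN = /^(True|False|0|1|No|Yes)$/i

'Yes please' =~ OLD_BOOLEAN # => 0   (a substring match slips through)
'Yes please' =~ NEW_BOOLEAN # => nil (rejected once the pattern is anchored)
'no'         =~ NEW_BOOLEAN # => 0   (legitimate values still match, case-insensitively)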
diff --git a/lib/puppet/type/zone.rb b/lib/puppet/type/zone.rb
index 09572bdb6..9094aaf6f 100644
--- a/lib/puppet/type/zone.rb
+++ b/lib/puppet/type/zone.rb
@@ -1,385 +1,382 @@
require 'puppet/property/list'
Puppet::Type.newtype(:zone) do
@doc = "Manages Solaris zones.
**Autorequires:** If Puppet is managing the directory specified as the root of
the zone's filesystem (with the `path` attribute), the zone resource will
autorequire that directory."
module Puppet::Zone
class StateMachine
# A silly little state machine.
def initialize
@state = {}
@sequence = []
@state_aliases = {}
@default = nil
end
# The order of calling insert_state is important
def insert_state(name, transitions)
@sequence << name
@state[name] = transitions
end
def alias_state(state, salias)
@state_aliases[state] = salias
end
def name(n)
@state_aliases[n.to_sym] || n.to_sym
end
def index(state)
@sequence.index(name(state))
end
# return all states between fs and ss excluding fs
def sequence(fs, ss)
fi = index(fs)
si = index(ss)
(if fi > si
then @sequence[si .. fi].map{|i| @state[i]}.reverse
else @sequence[fi .. si].map{|i| @state[i]}
end)[1..-1]
end
def cmp?(a,b)
index(a) < index(b)
end
end
end
ensurable do
desc "The running state of the zone. The valid states directly reflect
the states that `zoneadm` provides. The states are linear,
in that a zone must be `configured`, then `installed`, and
only then can be `running`. Note also that `halt` is currently
used to stop zones."
def self.fsm
return @fsm if @fsm
@fsm = Puppet::Zone::StateMachine.new
end
def self.alias_state(values)
values.each do |k,v|
fsm.alias_state(k,v)
end
end
def self.seqvalue(name, hash)
fsm.insert_state(name, hash)
self.newvalue name
end
# This is seq value because the order of declaration is important.
# i.e we go linearly from :absent -> :configured -> :installed -> :running
seqvalue :absent, :down => :destroy
seqvalue :configured, :up => :configure, :down => :uninstall
seqvalue :installed, :up => :install, :down => :stop
seqvalue :running, :up => :start
alias_state :incomplete => :installed, :ready => :installed, :shutting_down => :running
defaultto :running
def self.state_sequence(first, second)
fsm.sequence(first, second)
end
# Why override it? because property/ensure.rb has a default retrieve method
# that knows only about :present and :absent. That method just calls
# provider.exists? and returns :present if a result was returned.
def retrieve
provider.properties[:ensure]
end
def provider_sync_send(method)
warned = false
while provider.processing?
next if warned
info "Waiting for zone to finish processing"
warned = true
sleep 1
end
provider.send(method)
provider.flush()
end
def sync
method = nil
direction = up? ? :up : :down
# We need to get the state we're currently in and just call
# everything between it and us.
self.class.state_sequence(self.retrieve, self.should).each do |state|
method = state[direction]
raise Puppet::DevError, "Cannot move #{direction} from #{st[:name]}" unless method
provider_sync_send(method)
end
("zone_#{self.should}").intern
end
# Are we moving up the property tree?
def up?
self.class.fsm.cmp?(self.retrieve, self.should)
end
end
newparam(:name) do
desc "The name of the zone."
isnamevar
end
newparam(:id) do
desc "The numerical ID of the zone. This number is autogenerated
and cannot be changed."
end
newparam(:clone) do
desc "Instead of installing the zone, clone it from another zone.
If the zone root resides on a zfs file system, a snapshot will be
used to create the clone; if it resides on a ufs filesystem, a copy of the
zone will be used. The zone from which you clone must not be running."
end
newproperty(:ip, :parent => Puppet::Property::List) do
require 'ipaddr'
desc "The IP address of the zone. IP addresses **must** be specified
with an interface, and may optionally be specified with a default router
(sometimes called a defrouter). The interface, IP address, and default
router should be separated by colons to form a complete IP address string.
For example: `bge0:192.168.178.200` would be a valid IP address string
without a default router, and `bge0:192.168.178.200:192.168.178.1` adds a
default router to it.
For zones with multiple interfaces, the value of this attribute should be
an array of IP address strings (each of which must include an interface
and may include a default router)."
# The default `should` of Puppet::Property::List joins the list with ' '. By
# returning @should here, we ensure the value remains an array. If we override
# should, we should also override insync?() -- see property/list.rb
def should
@should
end
# overridden so that we match with self.should
def insync?(is)
- return true unless is
- is = [] if is == :absent
+ is = [] if !is || is == :absent
is.sort == self.should.sort
end
end
newproperty(:iptype) do
desc "The IP stack type of the zone."
defaultto :shared
newvalue :shared
newvalue :exclusive
end
newproperty(:autoboot, :boolean => true) do
desc "Whether the zone should automatically boot."
defaultto true
newvalues(:true, :false)
end
newproperty(:path) do
desc "The root of the zone's filesystem. Must be a fully qualified
file name. If you include `%s` in the path, then it will be
replaced with the zone's name. Currently, you cannot use
Puppet to move a zone. Consequently this is a readonly property."
validate do |value|
raise ArgumentError, "The zone base must be fully qualified" unless value =~ /^\//
end
munge do |value|
if value =~ /%s/
value % @resource[:name]
else
value
end
end
end
newproperty(:pool) do
desc "The resource pool for this zone."
end
newproperty(:shares) do
desc "Number of FSS CPU shares allocated to the zone."
end
newproperty(:dataset, :parent => Puppet::Property::List ) do
desc "The list of datasets delegated to the non-global zone from the
global zone. All datasets must be zfs filesystem names which are
different from the mountpoint."
def should
@should
end
# overridden so that we match with self.should
def insync?(is)
- return true unless is
- is = [] if is == :absent
+ is = [] if !is || is == :absent
is.sort == self.should.sort
end
validate do |value|
if value =~ /^\//
raise ArgumentError, "Datasets must be the name of a zfs filesystem"
end
end
end
newproperty(:inherit, :parent => Puppet::Property::List) do
desc "The list of directories that the zone inherits from the global
zone. All directories must be fully qualified."
def should
@should
end
# overridden so that we match with self.should
def insync?(is)
- return true unless is
- is = [] if is == :absent
+ is = [] if !is || is == :absent
is.sort == self.should.sort
end
validate do |value|
unless value =~ /^\//
raise ArgumentError, "Inherited filesystems must be fully qualified"
end
end
end
# Specify the sysidcfg file. This is pretty hackish, because it's
# only used to boot the zone the very first time.
newparam(:sysidcfg) do
desc %{The text to go into the `sysidcfg` file when the zone is first
booted. The best way is to use a template:
# $confdir/modules/site/templates/sysidcfg.erb
system_locale=en_US
timezone=GMT
terminal=xterms
security_policy=NONE
root_password=<%= password %>
timeserver=localhost
name_service=DNS {domain_name=<%= domain %> name_server=<%= nameserver %>}
network_interface=primary {hostname=<%= realhostname %>
ip_address=<%= ip %>
netmask=<%= netmask %>
protocol_ipv6=no
default_route=<%= defaultroute %>}
nfs4_domain=dynamic
And then call that:
zone { myzone:
ip => "bge0:192.168.0.23",
sysidcfg => template("site/sysidcfg.erb"),
path => "/opt/zones/myzone",
realhostname => "fully.qualified.domain.name"
}
The `sysidcfg` only matters on the first booting of the zone,
so Puppet only checks for it at that time.}
end
newparam(:create_args) do
desc "Arguments to the `zonecfg` create command. This can be used to create branded zones."
end
newparam(:install_args) do
desc "Arguments to the `zoneadm` install command. This can be used to create branded zones."
end
newparam(:realhostname) do
desc "The actual hostname of the zone."
end
# If Puppet is also managing the base dir or its parent dir, list them
# both as prerequisites.
autorequire(:file) do
if @parameters.include? :path
[@parameters[:path].value, ::File.dirname(@parameters[:path].value)]
else
nil
end
end
# If Puppet is also managing the zfs filesystem which is the zone dataset
# then list it as a prerequisite. Zpool's get autorequired by the zfs
# type. We just need to autorequire the dataset zfs itself as the zfs type
# will autorequire all of the zfs parents and zpool.
autorequire(:zfs) do
# Check if we have datasets in our zone configuration and autorequire each dataset
self[:dataset] if @parameters.include? :dataset
end
def validate_ip(ip, name)
IPAddr.new(ip) if ip
rescue ArgumentError
self.fail Puppet::Error, "'#{ip}' is an invalid #{name}", $!
end
def validate_exclusive(interface, address, router)
return if !interface.nil? and address.nil?
self.fail "only interface may be specified when using exclusive IP stack: #{interface}:#{address}"
end
def validate_shared(interface, address, router)
self.fail "ip must contain interface name and ip address separated by a \":\"" if interface.nil? or address.nil?
[address, router].each do |ip|
validate_ip(ip, "IP address") unless ip.nil?
end
end
validate do
return unless self[:ip]
# self[:ip] reflects the type passed from property:ip.should. If we
# override it and pass @should, then we get an array here back.
self[:ip].each do |ip|
interface, address, router = ip.split(':')
if self[:iptype] == :shared
validate_shared(interface, address, router)
else
validate_exclusive(interface, address, router)
end
end
end
def retrieve
provider.flush
hash = provider.properties
return setstatus(hash) unless hash.nil? or hash[:ensure] == :absent
# Return all properties as absent.
return Hash[properties.map{|p| [p, :absent]} ]
end
# Take the results of a listing and set everything appropriately.
def setstatus(hash)
prophash = {}
hash.each do |param, value|
next if param == :name
case self.class.attrtype(param)
when :property
# Only try to provide values for the properties we're managing
prop = self.property(param)
prophash[prop] = value if prop
else
self[param] = value
end
end
prophash
end
end
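
A standalone sketch (not part of the patch) of the colon-separated IP string format the zone type documents above: interface, address, and an optional default router. The helper name is illustrative; the real type splits and validates each entry in its validate block.

require 'ipaddr'

def parse_zone_ip(str)
  interface, address, router = str.split(':')
  IPAddr.new(address) if address # raises on a malformed address
  IPAddr.new(router)  if router
  { :interface => interface, :address => address, :router => router }
end

parse_zone_ip('bge0:192.168.178.200')
# => {:interface=>"bge0", :address=>"192.168.178.200", :router=>nil}
parse_zone_ip('bge0:192.168.178.200:192.168.178.1')
# => {:interface=>"bge0", :address=>"192.168.178.200", :router=>"192.168.178.1"}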
diff --git a/lib/puppet/util.rb b/lib/puppet/util.rb
index a8b78da7c..e05339517 100644
--- a/lib/puppet/util.rb
+++ b/lib/puppet/util.rb
@@ -1,545 +1,554 @@
# A module to collect utility functions.
require 'English'
require 'puppet/error'
require 'puppet/util/execution_stub'
require 'uri'
-require 'tempfile'
require 'pathname'
require 'ostruct'
require 'puppet/util/platform'
require 'puppet/util/symbolic_file_mode'
+require 'puppet/file_system/uniquefile'
require 'securerandom'
module Puppet
module Util
require 'puppet/util/monkey_patches'
require 'benchmark'
# These are all for backward compatibility -- these are methods that used
# to be in Puppet::Util but have been moved into external modules.
require 'puppet/util/posix'
extend Puppet::Util::POSIX
extend Puppet::Util::SymbolicFileMode
def self.activerecord_version
if (defined?(::ActiveRecord) and defined?(::ActiveRecord::VERSION) and defined?(::ActiveRecord::VERSION::MAJOR) and defined?(::ActiveRecord::VERSION::MINOR))
([::ActiveRecord::VERSION::MAJOR, ::ActiveRecord::VERSION::MINOR].join('.').to_f)
else
0
end
end
# Run some code with a specific environment. Resets the environment back to
# what it was at the end of the code.
def self.withenv(hash)
saved = ENV.to_hash
hash.each do |name, val|
ENV[name.to_s] = val
end
yield
ensure
ENV.clear
saved.each do |name, val|
ENV[name] = val
end
end
# Execute a given chunk of code with a new umask.
def self.withumask(mask)
cur = File.umask(mask)
begin
yield
ensure
File.umask(cur)
end
end
# Change the process to a different user
def self.chuser
if group = Puppet[:group]
begin
Puppet::Util::SUIDManager.change_group(group, true)
rescue => detail
Puppet.warning "could not change to group #{group.inspect}: #{detail}"
$stderr.puts "could not change to group #{group.inspect}"
# Don't exit on failed group changes, since it's
# not fatal
#exit(74)
end
end
if user = Puppet[:user]
begin
Puppet::Util::SUIDManager.change_user(user, true)
rescue => detail
$stderr.puts "Could not change to user #{user}: #{detail}"
exit(74)
end
end
end
# Create instance methods for each of the log levels. This allows
# the messages to be a little richer. Most classes will be calling this
# method.
def self.logmethods(klass, useself = true)
Puppet::Util::Log.eachlevel { |level|
klass.send(:define_method, level, proc { |args|
args = args.join(" ") if args.is_a?(Array)
if useself
Puppet::Util::Log.create(
:level => level,
:source => self,
:message => args
)
else
Puppet::Util::Log.create(
:level => level,
:message => args
)
end
})
}
end
# Proxy a bunch of methods to another object.
def self.classproxy(klass, objmethod, *methods)
classobj = class << klass; self; end
methods.each do |method|
classobj.send(:define_method, method) do |*args|
obj = self.send(objmethod)
obj.send(method, *args)
end
end
end
# Proxy a bunch of methods to another object.
def self.proxy(klass, objmethod, *methods)
methods.each do |method|
klass.send(:define_method, method) do |*args|
obj = self.send(objmethod)
obj.send(method, *args)
end
end
end
def benchmark(*args)
msg = args.pop
level = args.pop
object = nil
if args.empty?
if respond_to?(level)
object = self
else
object = Puppet
end
else
object = args.pop
end
raise Puppet::DevError, "Failed to provide level to :benchmark" unless level
unless level == :none or object.respond_to? level
raise Puppet::DevError, "Benchmarked object does not respond to #{level}"
end
# Only benchmark if our log level is high enough
if level != :none and Puppet::Util::Log.sendlevel?(level)
seconds = Benchmark.realtime {
yield
}
object.send(level, msg + (" in %0.2f seconds" % seconds))
return seconds
else
yield
end
end
module_function :benchmark
# Resolve a path for an executable to the absolute path. This tries to behave
# in the same manner as the unix `which` command and uses the `PATH`
# environment variable.
#
# @api public
# @param bin [String] the name of the executable to find.
# @return [String] the absolute path to the found executable.
def which(bin)
if absolute_path?(bin)
return bin if FileTest.file? bin and FileTest.executable? bin
else
ENV['PATH'].split(File::PATH_SEPARATOR).each do |dir|
begin
dest = File.expand_path(File.join(dir, bin))
rescue ArgumentError => e
# if the user's PATH contains a literal tilde (~) character and HOME is not set, we may get
# an ArgumentError here. Let's check to see if that is the case; if not, re-raise whatever error
# was thrown.
if e.to_s =~ /HOME/ and (ENV['HOME'].nil? || ENV['HOME'] == "")
# if we get here they have a tilde in their PATH. We'll issue a single warning about this and then
# ignore this path element and carry on with our lives.
Puppet::Util::Warnings.warnonce("PATH contains a ~ character, and HOME is not set; ignoring PATH element '#{dir}'.")
elsif e.to_s =~ /doesn't exist|can't find user/
# ...otherwise, we just skip the non-existent entry, and do nothing.
Puppet::Util::Warnings.warnonce("Couldn't expand PATH containing a ~ character; ignoring PATH element '#{dir}'.")
else
raise
end
else
if Puppet.features.microsoft_windows? && File.extname(dest).empty?
exts = ENV['PATHEXT']
exts = exts ? exts.split(File::PATH_SEPARATOR) : %w[.COM .EXE .BAT .CMD]
exts.each do |ext|
destext = File.expand_path(dest + ext)
return destext if FileTest.file? destext and FileTest.executable? destext
end
end
return dest if FileTest.file? dest and FileTest.executable? dest
end
end
end
nil
end
module_function :which
# Determine in a platform-specific way whether a path is absolute. This
# defaults to the local platform if none is specified.
#
# Escape once for the string literal, and once for the regex.
slash = '[\\\\/]'
label = '[^\\\\/]+'
AbsolutePathWindows = %r!^(?:(?:[A-Z]:#{slash})|(?:#{slash}#{slash}#{label}#{slash}#{label})|(?:#{slash}#{slash}\?#{slash}#{label}))!io
AbsolutePathPosix = %r!^/!
def absolute_path?(path, platform=nil)
# Ruby only sets File::ALT_SEPARATOR on Windows and the Ruby standard
# library uses that to test what platform it's on. Normally in Puppet we
# would use Puppet.features.microsoft_windows?, but this method needs to
# be called during the initialization of features so it can't depend on
# that.
platform ||= Puppet::Util::Platform.windows? ? :windows : :posix
regex = case platform
when :windows
AbsolutePathWindows
when :posix
AbsolutePathPosix
else
raise Puppet::DevError, "unknown platform #{platform} in absolute_path"
end
!! (path =~ regex)
end
module_function :absolute_path?
# Convert a path to a file URI
def path_to_uri(path)
return unless path
params = { :scheme => 'file' }
if Puppet.features.microsoft_windows?
path = path.gsub(/\\/, '/')
if unc = /^\/\/([^\/]+)(\/.+)/.match(path)
params[:host] = unc[1]
path = unc[2]
elsif path =~ /^[a-z]:\//i
path = '/' + path
end
end
params[:path] = URI.escape(path)
begin
URI::Generic.build(params)
rescue => detail
raise Puppet::Error, "Failed to convert '#{path}' to URI: #{detail}", detail.backtrace
end
end
module_function :path_to_uri
# Get the path component of a URI
def uri_to_path(uri)
return unless uri.is_a?(URI)
path = URI.unescape(uri.path)
if Puppet.features.microsoft_windows? and uri.scheme == 'file'
if uri.host
path = "//#{uri.host}" + path # UNC
else
path.sub!(/^\//, '')
end
end
path
end
module_function :uri_to_path
def safe_posix_fork(stdin=$stdin, stdout=$stdout, stderr=$stderr, &block)
child_pid = Kernel.fork do
$stdin.reopen(stdin)
$stdout.reopen(stdout)
$stderr.reopen(stderr)
3.upto(256){|fd| IO::new(fd).close rescue nil}
block.call if block
end
child_pid
end
module_function :safe_posix_fork
def memory
unless defined?(@pmap)
@pmap = which('pmap')
end
if @pmap
%x{#{@pmap} #{Process.pid}| grep total}.chomp.sub(/^\s*total\s+/, '').sub(/K$/, '').to_i
else
0
end
end
def symbolizehash(hash)
newhash = {}
hash.each do |name, val|
name = name.intern if name.respond_to? :intern
newhash[name] = val
end
newhash
end
module_function :symbolizehash
# Just benchmark, with no logging.
def thinmark
seconds = Benchmark.realtime {
yield
}
seconds
end
module_function :memory, :thinmark
# Because IO#binread is only available in 1.9
def binread(file)
Puppet.deprecation_warning("Puppet::Util.binread is deprecated. Read the file without this method as it will be removed in a future version.")
File.open(file, 'rb') { |f| f.read }
end
module_function :binread
# utility method to get the current call stack and format it to a human-readable string (which some IDEs/editors
# will recognize as links to the line numbers in the trace)
def self.pretty_backtrace(backtrace = caller(1))
backtrace.collect do |line|
_, path, rest = /^(.*):(\d+.*)$/.match(line).to_a
# If the path doesn't exist - like in one test, and like could happen in
# the world - we should just tolerate it and carry on. --daniel 2012-09-05
# Also, if we don't match, just include the whole line.
if path
path = Pathname(path).realpath rescue path
"#{path}:#{rest}"
else
line
end
end.join("\n")
end
# Replace a file, securely. This takes a block, and passes it the file
# handle of a file open for writing. Write the replacement content inside
# the block and it will safely replace the target file.
#
# This method will make no changes to the target file until the content is
# successfully written and the block returns without raising an error.
#
# As far as possible the state of the existing file, such as mode, is
# preserved. This works hard to avoid loss of any metadata, but will result
# in an inode change for the file.
#
# Arguments: `filename`, `default_mode`
#
# The filename is the file we are going to replace.
#
# The default_mode is the mode to use when the target file doesn't already
# exist; if the file is present we copy the existing mode/owner/group values
# across. The default_mode can be expressed as an octal integer, a numeric string (e.g. '0664')
# or a symbolic file mode.
DEFAULT_POSIX_MODE = 0644
DEFAULT_WINDOWS_MODE = nil
def replace_file(file, default_mode, &block)
raise Puppet::DevError, "replace_file requires a block" unless block_given?
if default_mode
unless valid_symbolic_mode?(default_mode)
raise Puppet::DevError, "replace_file default_mode: #{default_mode} is invalid"
end
mode = symbolic_mode_to_int(normalize_symbolic_mode(default_mode))
else
if Puppet.features.microsoft_windows?
mode = DEFAULT_WINDOWS_MODE
else
mode = DEFAULT_POSIX_MODE
end
end
- file = Puppet::FileSystem.pathname(file)
- tempfile = Tempfile.new(Puppet::FileSystem.basename_string(file), Puppet::FileSystem.dir_string(file))
-
- # Set properties of the temporary file before we write the content, because
- # Tempfile doesn't promise to be safe from reading by other people, just
- # that it avoids races around creating the file.
- #
- # Our Windows emulation is pretty limited, and so we have to carefully
- # and specifically handle the platform, which has all sorts of magic.
- # So, unlike Unix, we don't pre-prep security; we use the default "quite
- # secure" tempfile permissions instead. Magic happens later.
- if !Puppet.features.microsoft_windows?
- # Grab the current file mode, and fall back to the defaults.
- effective_mode =
- if Puppet::FileSystem.exist?(file)
- stat = Puppet::FileSystem.lstat(file)
- tempfile.chown(stat.uid, stat.gid)
- stat.mode
- else
- mode
- end
+ begin
+ file = Puppet::FileSystem.pathname(file)
+ tempfile = Puppet::FileSystem::Uniquefile.new(Puppet::FileSystem.basename_string(file), Puppet::FileSystem.dir_string(file))
+
+ # Set properties of the temporary file before we write the content, because
+ # Tempfile doesn't promise to be safe from reading by other people, just
+ # that it avoids races around creating the file.
+ #
+ # Our Windows emulation is pretty limited, and so we have to carefully
+ # and specifically handle the platform, which has all sorts of magic.
+ # So, unlike Unix, we don't pre-prep security; we use the default "quite
+ # secure" tempfile permissions instead. Magic happens later.
+ if !Puppet.features.microsoft_windows?
+ # Grab the current file mode, and fall back to the defaults.
+ effective_mode =
+ if Puppet::FileSystem.exist?(file)
+ stat = Puppet::FileSystem.lstat(file)
+ tempfile.chown(stat.uid, stat.gid)
+ stat.mode
+ else
+ mode
+ end
- if effective_mode
- # We only care about the bottom four slots, which make the real mode,
- # and not the rest of the platform stat call fluff and stuff.
- tempfile.chmod(effective_mode & 07777)
+ if effective_mode
+ # We only care about the bottom four slots, which make the real mode,
+ # and not the rest of the platform stat call fluff and stuff.
+ tempfile.chmod(effective_mode & 07777)
+ end
end
- end
- # OK, now allow the caller to write the content of the file.
- yield tempfile
+ # OK, now allow the caller to write the content of the file.
+ yield tempfile
- # Now, make sure the data (which includes the mode) is safe on disk.
- tempfile.flush
- begin
- tempfile.fsync
- rescue NotImplementedError
- # fsync may not be implemented by Ruby on all platforms, but
- # there is absolutely no recovery path if we detect that. So, we just
- # ignore the return code.
- #
- # However, don't be fooled: that is accepting that we are running in
- # an unsafe fashion. If you are porting to a new platform don't stub
- # that out.
- end
+ # Now, make sure the data (which includes the mode) is safe on disk.
+ tempfile.flush
+ begin
+ tempfile.fsync
+ rescue NotImplementedError
+ # fsync may not be implemented by Ruby on all platforms, but
+ # there is absolutely no recovery path if we detect that. So, we just
+ # ignore the return code.
+ #
+ # However, don't be fooled: that is accepting that we are running in
+ # an unsafe fashion. If you are porting to a new platform don't stub
+ # that out.
+ end
- tempfile.close
+ tempfile.close
- if Puppet.features.microsoft_windows?
- # Windows ReplaceFile needs a file to exist, so touch handles this
- if !Puppet::FileSystem.exist?(file)
- Puppet::FileSystem.touch(file)
- if mode
- Puppet::Util::Windows::Security.set_mode(mode, Puppet::FileSystem.path_string(file))
+ if Puppet.features.microsoft_windows?
+ # Windows ReplaceFile needs a file to exist, so touch handles this
+ if !Puppet::FileSystem.exist?(file)
+ Puppet::FileSystem.touch(file)
+ if mode
+ Puppet::Util::Windows::Security.set_mode(mode, Puppet::FileSystem.path_string(file))
+ end
end
- end
- # Yes, the arguments are reversed compared to the rename in the rest
- # of the world.
- Puppet::Util::Windows::File.replace_file(FileSystem.path_string(file), tempfile.path)
+ # Yes, the arguments are reversed compared to the rename in the rest
+ # of the world.
+ Puppet::Util::Windows::File.replace_file(FileSystem.path_string(file), tempfile.path)
- else
- File.rename(tempfile.path, Puppet::FileSystem.path_string(file))
+ else
+ File.rename(tempfile.path, Puppet::FileSystem.path_string(file))
+ end
+ ensure
+ # in case an error occurred before we renamed the temp file, make sure it
+ # gets deleted
+ if tempfile
+ tempfile.close!
+ end
end
+
# Ideally, we would now fsync the directory as well, but Ruby doesn't
# have support for that, and it doesn't matter /that/ much...
# Return something true, and possibly useful.
file
end
module_function :replace_file
# Executes a block of code, wrapped with some special exception handling. Causes the ruby interpreter to
# exit if the block throws an exception.
#
# @api public
# @param [String] message a message to log if the block fails
# @param [Integer] code the exit code that the ruby interpreter should return if the block fails
# @yield
def exit_on_fail(message, code = 1)
yield
# First, we need to check and see if we are catching a SystemExit error. These will be raised
# when we daemonize/fork, and they do not necessarily indicate a failure case.
rescue SystemExit => err
raise err
# Now we need to catch *any* other kind of exception, because we may be calling third-party
# code (e.g. webrick), and we have no idea what they might throw.
rescue Exception => err
## NOTE: when debugging spec failures, these two lines can be very useful
#puts err.inspect
#puts Puppet::Util.pretty_backtrace(err.backtrace)
Puppet.log_exception(err, "Could not #{message}: #{err}")
Puppet::Util::Log.force_flushqueue()
exit(code)
end
module_function :exit_on_fail
def deterministic_rand(seed,max)
if defined?(Random) == 'constant' && Random.class == Class
Random.new(seed).rand(max).to_s
else
srand(seed)
result = rand(max).to_s
srand()
result
end
end
module_function :deterministic_rand
#######################################################################################################
# Deprecated methods relating to process execution; these have been moved to Puppet::Util::Execution
#######################################################################################################
def execpipe(command, failonfail = true, &block)
Puppet.deprecation_warning("Puppet::Util.execpipe is deprecated; please use Puppet::Util::Execution.execpipe")
Puppet::Util::Execution.execpipe(command, failonfail, &block)
end
module_function :execpipe
def execfail(command, exception)
Puppet.deprecation_warning("Puppet::Util.execfail is deprecated; please use Puppet::Util::Execution.execfail")
Puppet::Util::Execution.execfail(command, exception)
end
module_function :execfail
def execute(*args)
Puppet.deprecation_warning("Puppet::Util.execute is deprecated; please use Puppet::Util::Execution.execute")
Puppet::Util::Execution.execute(*args)
end
module_function :execute
end
end
require 'puppet/util/errors'
require 'puppet/util/methodhelper'
require 'puppet/util/metaid'
require 'puppet/util/classgen'
require 'puppet/util/docs'
require 'puppet/util/execution'
require 'puppet/util/logging'
require 'puppet/util/package'
require 'puppet/util/warnings'
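
A hypothetical usage sketch (not part of the patch), assuming the puppet gem is loadable from plain Ruby: calling Puppet::Util.replace_file as documented above. The block is handed the temporary file, and the target path is only swapped into place once the block returns without raising; the path and content here are made up.

require 'puppet'

Puppet::Util.replace_file('/tmp/example.conf', 0644) do |tempfile|
  tempfile.puts '# managed content is written to a temp file first'
  tempfile.puts 'setting = value'
end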
diff --git a/lib/puppet/util/autoload.rb b/lib/puppet/util/autoload.rb
index f8f7c260c..51ff94e1c 100644
--- a/lib/puppet/util/autoload.rb
+++ b/lib/puppet/util/autoload.rb
@@ -1,227 +1,227 @@
require 'pathname'
require 'puppet/util/rubygems'
require 'puppet/util/warnings'
require 'puppet/util/methodhelper'
# Autoload paths, either based on names or all at once.
class Puppet::Util::Autoload
include Puppet::Util::MethodHelper
@autoloaders = {}
@loaded = {}
class << self
attr_reader :autoloaders
attr_accessor :loaded
private :autoloaders, :loaded
def gem_source
@gem_source ||= Puppet::Util::RubyGems::Source.new
end
# Has a given path been loaded? This is used for testing whether a
# changed file should be loaded or just ignored. This is only
# used in network/client/master, when downloading plugins, to
# see if a given plugin is currently loaded and thus should be
# reloaded.
def loaded?(path)
path = cleanpath(path).chomp('.rb')
loaded.include?(path)
end
# Save the fact that a given path has been loaded. This is so
# we can load downloaded plugins if they've already been loaded
# into memory.
def mark_loaded(name, file)
name = cleanpath(name).chomp('.rb')
ruby_file = name + ".rb"
$LOADED_FEATURES << ruby_file unless $LOADED_FEATURES.include?(ruby_file)
loaded[name] = [file, File.mtime(file)]
end
def changed?(name)
name = cleanpath(name).chomp('.rb')
return true unless loaded.include?(name)
file, old_mtime = loaded[name]
- environment = Puppet.lookup(:environments).get(Puppet[:environment])
+ environment = Puppet.lookup(:current_environment)
return true unless file == get_file(name, environment)
begin
old_mtime.to_i != File.mtime(file).to_i
rescue Errno::ENOENT
true
end
end
# Load a single plugin by name. We use 'load' here so we can reload a
# given plugin.
def load_file(name, env)
file = get_file(name.to_s, env)
return false unless file
begin
mark_loaded(name, file)
Kernel.load file, @wrap
return true
rescue SystemExit,NoMemoryError
raise
rescue Exception => detail
message = "Could not autoload #{name}: #{detail}"
Puppet.log_exception(detail, message)
raise Puppet::Error, message, detail.backtrace
end
end
def loadall(path)
# Load every instance of everything we can find.
files_to_load(path).each do |file|
name = file.chomp(".rb")
load_file(name, nil) unless loaded?(name)
end
end
def reload_changed
loaded.keys.each { |file| load_file(file, nil) if changed?(file) }
end
# Get the correct file to load for a given path
# returns nil if no file is found
def get_file(name, env)
name = name + '.rb' unless name =~ /\.rb$/
path = search_directories(env).find { |dir| Puppet::FileSystem.exist?(File.join(dir, name)) }
path and File.join(path, name)
end
def files_to_load(path)
search_directories(nil).map {|dir| files_in_dir(dir, path) }.flatten.uniq
end
def files_in_dir(dir, path)
dir = Pathname.new(File.expand_path(dir))
Dir.glob(File.join(dir, path, "*.rb")).collect do |file|
Pathname.new(file).relative_path_from(dir).to_s
end
end
def module_directories(env)
# We're using a per-thread cache of module directories so that we don't
# scan the filesystem each time we try to load something. This is reset
# at the beginning of compilation and at the end of an agent run.
$env_module_directories ||= {}
# This is a little bit of a hack. Basically, the autoloader is being
# called indirectly during application bootstrapping when we do things
# such as check "features". However, during bootstrapping, we haven't
# yet parsed all of the command line parameters nor the config files,
# and thus we don't yet know with certainty what the module path is.
# This should be irrelevant during bootstrapping, because anything that
# we are attempting to load during bootstrapping should be something
# that we ship with puppet, and thus the module path is irrelevant.
#
# In the long term, I think the way that we want to handle this is to
# have the autoloader ignore the module path in all cases where it is
# not specifically requested (e.g., by a constructor param or
# something)... because there are very few cases where we should
# actually be loading code from the module path. However, until that
# happens, we at least need a way to prevent the autoloader from
# attempting to access the module path before it is initialized. For
# now we are accomplishing that by calling the
# "app_defaults_initialized?" method on the main puppet Settings object.
# --cprice 2012-03-16
- if Puppet.settings.app_defaults_initialized?
+ if Puppet.settings.app_defaults_initialized? &&
env ||= Puppet.lookup(:environments).get(Puppet[:environment])
# if the app defaults have been initialized then it should be safe to access the module path setting.
$env_module_directories[env] ||= env.modulepath.collect do |dir|
Dir.entries(dir).reject { |f| f =~ /^\./ }.collect { |f| File.join(dir, f, "lib") }
end.flatten.find_all do |d|
FileTest.directory?(d)
end
else
# if we get here, the app defaults have not been initialized, so we basically use an empty module path.
[]
end
end
def libdirs()
# See the comments in #module_directories above. Basically, we need to be careful not to try to access the
# libdir before we know for sure that all of the settings have been initialized (e.g., during bootstrapping).
if (Puppet.settings.app_defaults_initialized?)
Puppet[:libdir].split(File::PATH_SEPARATOR)
else
[]
end
end
def gem_directories
gem_source.directories
end
def search_directories(env)
[gem_directories, module_directories(env), libdirs(), $LOAD_PATH].flatten
end
# Normalize a path. This converts ALT_SEPARATOR to SEPARATOR on Windows
# and eliminates unnecessary parts of a path.
def cleanpath(path)
# There are two cases here because cleanpath does not handle absolute
# paths correctly on windows (c:\ and c:/ are treated as distinct) but
# we don't want to convert relative paths to absolute
if Puppet::Util.absolute_path?(path)
File.expand_path(path)
else
Pathname.new(path).cleanpath.to_s
end
end
end
# Send [] and []= to the @autoloaders hash
Puppet::Util.classproxy self, :autoloaders, "[]", "[]="
attr_accessor :object, :path, :objwarn, :wrap
def initialize(obj, path, options = {})
@path = path.to_s
raise ArgumentError, "Autoload paths cannot be fully qualified" if Puppet::Util.absolute_path?(@path)
@object = obj
self.class[obj] = self
set_options(options)
@wrap = true unless defined?(@wrap)
end
def load(name, env = nil)
self.class.load_file(expand(name), env)
end
# Load all instances from a path of Autoload.search_directories matching the
# relative path this Autoloader was initialized with. For example, if we
# have created a Puppet::Util::Autoload for Puppet::Type::User with a path of
# 'puppet/provider/user', the search_directories path will be searched for
# all ruby files matching puppet/provider/user/*.rb and they will then be
# loaded from the first directory in the search path providing them. So
# earlier entries in the search path may shadow later entries.
#
# This uses require, rather than load, so that already-loaded files don't get
# reloaded unnecessarily.
def loadall
self.class.loadall(@path)
end
def loaded?(name)
self.class.loaded?(expand(name))
end
def changed?(name)
self.class.changed?(expand(name))
end
def files_to_load
self.class.files_to_load(@path)
end
def expand(name)
::File.join(@path, name.to_s)
end
end
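# Illustrative sketch appended here for clarity (not part of this change):
# how a loader is typically constructed and used, per the loadall docs above.
# The owner object and relative path are examples, and an initialized Puppet
# settings/environment context is assumed.
require 'puppet/util/autoload'

loader = Puppet::Util::Autoload.new(Object.new, "puppet/provider/user")
loader.loadall            # load every puppet/provider/user/*.rb on the search path
loader.load(:useradd)     # (re)load a single plugin by name
loader.loaded?(:useradd)  # => true once the file has been marked loaded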
diff --git a/lib/puppet/util/colors.rb b/lib/puppet/util/colors.rb
index 622c3b583..dee607a9f 100644
--- a/lib/puppet/util/colors.rb
+++ b/lib/puppet/util/colors.rb
@@ -1,175 +1,217 @@
require 'puppet/util/platform'
module Puppet::Util::Colors
BLACK = {:console => "\e[0;30m", :html => "color: #FFA0A0" }
RED = {:console => "\e[0;31m", :html => "color: #FFA0A0" }
GREEN = {:console => "\e[0;32m", :html => "color: #00CD00" }
YELLOW = {:console => "\e[0;33m", :html => "color: #FFFF60" }
BLUE = {:console => "\e[0;34m", :html => "color: #80A0FF" }
MAGENTA = {:console => "\e[0;35m", :html => "color: #FFA500" }
CYAN = {:console => "\e[0;36m", :html => "color: #40FFFF" }
WHITE = {:console => "\e[0;37m", :html => "color: #FFFFFF" }
HBLACK = {:console => "\e[1;30m", :html => "color: #FFA0A0" }
HRED = {:console => "\e[1;31m", :html => "color: #FFA0A0" }
HGREEN = {:console => "\e[1;32m", :html => "color: #00CD00" }
HYELLOW = {:console => "\e[1;33m", :html => "color: #FFFF60" }
HBLUE = {:console => "\e[1;34m", :html => "color: #80A0FF" }
HMAGENTA = {:console => "\e[1;35m", :html => "color: #FFA500" }
HCYAN = {:console => "\e[1;36m", :html => "color: #40FFFF" }
HWHITE = {:console => "\e[1;37m", :html => "color: #FFFFFF" }
BG_RED = {:console => "\e[0;41m", :html => "background: #FFA0A0"}
BG_GREEN = {:console => "\e[0;42m", :html => "background: #00CD00"}
BG_YELLOW = {:console => "\e[0;43m", :html => "background: #FFFF60"}
BG_BLUE = {:console => "\e[0;44m", :html => "background: #80A0FF"}
BG_MAGENTA = {:console => "\e[0;45m", :html => "background: #FFA500"}
BG_CYAN = {:console => "\e[0;46m", :html => "background: #40FFFF"}
BG_WHITE = {:console => "\e[0;47m", :html => "background: #FFFFFF"}
BG_HRED = {:console => "\e[1;41m", :html => "background: #FFA0A0"}
BG_HGREEN = {:console => "\e[1;42m", :html => "background: #00CD00"}
BG_HYELLOW = {:console => "\e[1;43m", :html => "background: #FFFF60"}
BG_HBLUE = {:console => "\e[1;44m", :html => "background: #80A0FF"}
BG_HMAGENTA = {:console => "\e[1;45m", :html => "background: #FFA500"}
BG_HCYAN = {:console => "\e[1;46m", :html => "background: #40FFFF"}
BG_HWHITE = {:console => "\e[1;47m", :html => "background: #FFFFFF"}
RESET = {:console => "\e[0m", :html => "" }
Colormap = {
:debug => WHITE,
:info => GREEN,
:notice => CYAN,
:warning => YELLOW,
:err => HMAGENTA,
:alert => RED,
:emerg => HRED,
:crit => HRED,
:black => BLACK,
:red => RED,
:green => GREEN,
:yellow => YELLOW,
:blue => BLUE,
:magenta => MAGENTA,
:cyan => CYAN,
:white => WHITE,
:hblack => HBLACK,
:hred => HRED,
:hgreen => HGREEN,
:hyellow => HYELLOW,
:hblue => HBLUE,
:hmagenta => HMAGENTA,
:hcyan => HCYAN,
:hwhite => HWHITE,
:bg_red => BG_RED,
:bg_green => BG_GREEN,
:bg_yellow => BG_YELLOW,
:bg_blue => BG_BLUE,
:bg_magenta => BG_MAGENTA,
:bg_cyan => BG_CYAN,
:bg_white => BG_WHITE,
:bg_hred => BG_HRED,
:bg_hgreen => BG_HGREEN,
:bg_hyellow => BG_HYELLOW,
:bg_hblue => BG_HBLUE,
:bg_hmagenta => BG_HMAGENTA,
:bg_hcyan => BG_HCYAN,
:bg_hwhite => BG_HWHITE,
:reset => { :console => "\e[m", :html => "" }
}
# We define console_has_color? at load time since it's checking the
# underlying platform which will not change, and we don't want to perform
# the check every time we use logging
- if Puppet::Util::Platform.windows?
- # We're on windows, need win32console for color to work
+ if Puppet::Util::Platform.windows? && RUBY_VERSION =~ /^1\./
+ # We're on Windows with a Ruby older than 2.0,
+ # so we need win32console for color to work

begin
- require 'Win32API'
+ require 'ffi'
require 'win32console'
- require 'puppet/util/windows/string'
# The win32console gem uses ANSI functions for writing to the console
# which don't work for unicode strings, e.g. the module tool. Ruby 1.9
# does the same thing, but doesn't account for ANSI escape sequences
class WideConsole < Win32::Console
- WriteConsole = Win32API.new( "kernel32", "WriteConsoleW", ['l', 'p', 'l', 'p', 'p'], 'l' )
- WriteConsoleOutputCharacter = Win32API.new( "kernel32", "WriteConsoleOutputCharacterW", ['l', 'p', 'l', 'l', 'p'], 'l' )
+ extend FFI::Library
+
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/ms687401(v=vs.85).aspx
+ # BOOL WINAPI WriteConsole(
+ # _In_ HANDLE hConsoleOutput,
+ # _In_ const VOID *lpBuffer,
+ # _In_ DWORD nNumberOfCharsToWrite,
+ # _Out_ LPDWORD lpNumberOfCharsWritten,
+ # _Reserved_ LPVOID lpReserved
+ # );
+ ffi_lib :kernel32
+ attach_function_private :WriteConsoleW,
+ [:handle, :lpcwstr, :dword, :lpdword, :lpvoid], :win32_bool
+
+ # typedef struct _COORD {
+ # SHORT X;
+ # SHORT Y;
+ # } COORD, *PCOORD;
+ class COORD < FFI::Struct
+ layout :X, :short,
+ :Y, :short
+ end
+
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/ms687410(v=vs.85).aspx
+ # BOOL WINAPI WriteConsoleOutputCharacter(
+ # _In_ HANDLE hConsoleOutput,
+ # _In_ LPCTSTR lpCharacter,
+ # _In_ DWORD nLength,
+ # _In_ COORD dwWriteCoord,
+ # _Out_ LPDWORD lpNumberOfCharsWritten
+ # );
+ ffi_lib :kernel32
+ attach_function_private :WriteConsoleOutputCharacterW,
+ [:handle, :lpcwstr, :dword, COORD, :lpdword], :win32_bool
def initialize(t = nil)
super(t)
end
def WriteChar(str, col, row)
- dwWriteCoord = (row << 16) + col
- lpNumberOfCharsWritten = ' ' * 4
- utf16, nChars = string_encode(str)
- WriteConsoleOutputCharacter.call(@handle, utf16, nChars, dwWriteCoord, lpNumberOfCharsWritten)
- lpNumberOfCharsWritten.unpack('L')
+ writeCoord = COORD.new()
+ writeCoord[:X] = row
+ writeCoord[:Y] = col
+
+ chars_written = 0
+ FFI::MemoryPointer.from_string_to_wide_string(str) do |msg_ptr|
+ FFI::MemoryPointer.new(:dword, 1) do |numberOfCharsWritten_ptr|
+ WriteConsoleOutputCharacterW(@handle, msg_ptr,
+ str.length, writeCoord, numberOfCharsWritten_ptr)
+ chars_written = numberOfCharsWritten_ptr.read_dword
+ end
+ end
+
+ chars_written
end
def Write(str)
- written = 0.chr * 4
- reserved = 0.chr * 4
- utf16, nChars = string_encode(str)
- WriteConsole.call(@handle, utf16, nChars, written, reserved)
- end
+ result = false
+ FFI::MemoryPointer.from_string_to_wide_string(str) do |msg_ptr|
+ FFI::MemoryPointer.new(:dword, 1) do |numberOfCharsWritten_ptr|
+ result = WriteConsoleW(@handle, msg_ptr,
+ str.length, FFI::MemoryPointer.new(:dword, 1),
+ FFI::MemoryPointer::NULL) != FFI::WIN32_FALSE
+ end
+ end
- def string_encode(str)
- wstr = Puppet::Util::Windows::String.wide_string(str)
- [wstr, wstr.length - 1]
+ result
end
end
# Override the win32console's IO class so we can supply
# our own Console class
class WideIO < Win32::Console::ANSI::IO
def initialize(fd_std = :stdout)
super(fd_std)
handle = FD_STD_MAP[fd_std][1]
@Out = WideConsole.new(handle)
end
end
$stdout = WideIO.new(:stdout)
$stderr = WideIO.new(:stderr)
rescue LoadError
def console_has_color?
false
end
else
def console_has_color?
true
end
end
else
# On a posix system we can just enable it
def console_has_color?
true
end
end
def colorize(color, str)
case Puppet[:color]
when true, :ansi, "ansi", "yes"
if console_has_color?
console_color(color, str)
else
str
end
when :html, "html"
html_color(color, str)
else
str
end
end
def console_color(color, str)
Colormap[color][:console] +
str.gsub(RESET[:console], Colormap[color][:console]) +
RESET[:console]
end
def html_color(color, str)
span = '<span style="%s">' % Colormap[color][:html]
"#{span}%s</span>" % str.gsub(/<span .*?<\/span>/, "</span>\\0#{span}")
end
end
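# Illustrative sketch appended here for clarity (not part of this change):
# colorize honours the Puppet[:color] setting, so initialized settings are
# assumed; the class and messages below are made up.
require 'puppet'
require 'puppet/util/colors'

class ExampleReporter
  include Puppet::Util::Colors
end

reporter = ExampleReporter.new
reporter.colorize(:hred, "something failed")   # ANSI-wrapped when color is on, plain text otherwise
reporter.html_color(:green, "all good")        # => '<span style="color: #00CD00">all good</span>'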
diff --git a/lib/puppet/util/command_line.rb b/lib/puppet/util/command_line.rb
index 35a38f5c3..22fd3532a 100644
--- a/lib/puppet/util/command_line.rb
+++ b/lib/puppet/util/command_line.rb
@@ -1,182 +1,199 @@
# Bundler and rubygems maintain a set of directories from which to
# load gems. If Bundler is loaded, let it determine what can be
# loaded. If it's not loaded, then use rubygems. But do this before
# loading any puppet code, so that our gem loading system is sane.
if not defined? ::Bundler
begin
require 'rubygems'
rescue LoadError
end
end
require 'puppet'
require 'puppet/util'
require "puppet/util/plugins"
require "puppet/util/rubygems"
require "puppet/util/limits"
+require 'puppet/util/colors'
module Puppet
module Util
# This is the main entry point for all puppet applications / faces; it
# is basically where the bootstrapping process / lifecycle of an app
# begins.
class CommandLine
include Puppet::Util::Limits
OPTION_OR_MANIFEST_FILE = /^-|\.pp$|\.rb$/
# @param zero [String] the name of the executable
# @param argv [Array<String>] the arguments passed on the command line
# @param stdin [IO] (unused)
def initialize(zero = $0, argv = ARGV, stdin = STDIN)
@command = File.basename(zero, '.rb')
@argv = argv
Puppet::Plugins.on_commandline_initialization(:command_line_object => self)
end
# @return [String] name of the subcommand being executed
# @api public
def subcommand_name
return @command if @command != 'puppet'
if @argv.first =~ OPTION_OR_MANIFEST_FILE
nil
else
@argv.first
end
end
# @return [Array<String>] the command line arguments being passed to the subcommand
# @api public
def args
return @argv if @command != 'puppet'
if subcommand_name.nil?
@argv
else
@argv[1..-1]
end
end
# @api private
# @deprecated
def self.available_subcommands
Puppet.deprecation_warning('Puppet::Util::CommandLine.available_subcommands is deprecated; please use Puppet::Application.available_application_names instead.')
Puppet::Application.available_application_names
end
# available_subcommands was previously an instance method, not a class
# method, and we have an unknown number of user-implemented applications
# that depend on that behaviour. Forwarding allows us to preserve a
# backward compatible API. --daniel 2011-04-11
# @api private
# @deprecated
def available_subcommands
Puppet.deprecation_warning('Puppet::Util::CommandLine#available_subcommands is deprecated; please use Puppet::Application.available_application_names instead.')
Puppet::Application.available_application_names
end
# Run the puppet subcommand. If the subcommand is determined to be an
# external executable, this method will never return and the current
# process will be replaced via {Kernel#exec}.
#
# @return [void]
def execute
Puppet::Util.exit_on_fail("initialize global default settings") do
Puppet.initialize_settings(args)
end
setpriority(Puppet[:priority])
find_subcommand.run
end
# @api private
def external_subcommand
Puppet::Util.which("puppet-#{subcommand_name}")
end
private
def find_subcommand
if subcommand_name.nil?
NilSubcommand.new(self)
elsif Puppet::Application.available_application_names.include?(subcommand_name)
ApplicationSubcommand.new(subcommand_name, self)
elsif path_to_subcommand = external_subcommand
ExternalSubcommand.new(path_to_subcommand, self)
else
UnknownSubcommand.new(subcommand_name, self)
end
end
# @api private
class ApplicationSubcommand
def initialize(subcommand_name, command_line)
@subcommand_name = subcommand_name
@command_line = command_line
end
def run
# For most applications, we want to be able to load code from the modulepath,
# such as apply, describe, resource, and faces.
# For agent, we only want to load pluginsync'ed code from libdir.
# For master, we shouldn't ever be loading per-environment code into the master's
# ruby process, but that requires fixing (#17210, #12173, #8750). So for now
# we try to restrict to only code that can be autoloaded from the node's
# environment.
+
+ # PUP-2114 - at this point in the bootstrapping process we do not
+ # have an appropriate application-wide current_environment set.
+ # If we cannot find the configured environment, which may not exist,
+ # we do not attempt to add plugin directories to the load path.
+ #
if @subcommand_name != 'master' and @subcommand_name != 'agent'
- Puppet.lookup(:environments).get(Puppet[:environment]).each_plugin_directory do |dir|
- $LOAD_PATH << dir unless $LOAD_PATH.include?(dir)
+ if configured_environment = Puppet.lookup(:environments).get(Puppet[:environment])
+ configured_environment.each_plugin_directory do |dir|
+ $LOAD_PATH << dir unless $LOAD_PATH.include?(dir)
+ end
end
end
app = Puppet::Application.find(@subcommand_name).new(@command_line)
Puppet::Plugins.on_application_initialization(:application_object => @command_line)
app.run
end
end
# @api private
class ExternalSubcommand
def initialize(path_to_subcommand, command_line)
@path_to_subcommand = path_to_subcommand
@command_line = command_line
end
def run
Kernel.exec(@path_to_subcommand, *@command_line.args)
end
end
# @api private
class NilSubcommand
+ include Puppet::Util::Colors
+
def initialize(command_line)
@command_line = command_line
end
def run
- if @command_line.args.include? "--version" or @command_line.args.include? "-V"
+ args = @command_line.args
+ if args.include? "--version" or args.include? "-V"
puts Puppet.version
+ elsif @command_line.subcommand_name.nil? && args.count > 0
+ # If the subcommand is truly nil and there is an arg, it's an option; print out the invalid option message
+ puts colorize(:hred, "Error: Could not parse application options: invalid option: #{args[0]}")
+ exit 1
else
puts "See 'puppet help' for help on available puppet subcommands"
end
end
end
# @api private
class UnknownSubcommand < NilSubcommand
def initialize(subcommand_name, command_line)
@subcommand_name = subcommand_name
super(command_line)
end
def run
- puts "Error: Unknown Puppet subcommand '#{@subcommand_name}'"
+ puts colorize(:hred, "Error: Unknown Puppet subcommand '#{@subcommand_name}'")
super
+ exit 1
end
end
end
end
end
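# Illustrative sketch appended here for clarity (not part of this change):
# roughly what the `puppet` binary does, with the subcommand and arguments
# normally taken from $0 and ARGV; the explicit values here are examples.
require 'puppet/util/command_line'

Puppet::Util::CommandLine.new("puppet", ["help"]).execute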
diff --git a/lib/puppet/util/execution.rb b/lib/puppet/util/execution.rb
index fb03510a9..556aba50e 100644
--- a/lib/puppet/util/execution.rb
+++ b/lib/puppet/util/execution.rb
@@ -1,315 +1,327 @@
+require 'puppet/file_system/uniquefile'
+
module Puppet
require 'rbconfig'
require 'puppet/error'
# A command failed to execute.
# @api public
class ExecutionFailure < Puppet::Error
end
end
# This module defines methods for execution of system commands. It is intended for inclusion
# in classes that need to execute system commands.
# @api public
module Puppet::Util::Execution
# This is the full output from a process. The object itself (a String) is the
# stdout of the process.
#
# @api public
class ProcessOutput < String
# @return [Integer] The exit status of the process
# @api public
attr_reader :exitstatus
# @api private
def initialize(value,exitstatus)
super(value)
@exitstatus = exitstatus
end
end
# The command can be a simple string, which is executed as-is, or an Array,
# which is treated as a set of command arguments to pass through.
#
# In either case, the command is passed directly to the shell, STDOUT and
# STDERR are connected together, and STDOUT will be streamed to the yielded
# pipe.
#
# @param command [String, Array<String>] the command to execute as one string,
# or as parts in an array. The parts of the array are joined with one
# separating space between each entry when converting to the command line
# string to execute.
# @param failonfail [Boolean] (true) if the execution should fail with
# Exception on failure or not.
# @yield [pipe] to a block executing a subprocess
# @yieldparam pipe [IO] the opened pipe
# @yieldreturn [String] the output to return
# @raise [Puppet::ExecutionFailure] if the executed child process did not
# exit with status == 0 and `failonfail` is `true`.
# @return [String] a string with the output from the subprocess executed by
# the given block
#
# @see Kernel#open for `mode` values
# @api public
def self.execpipe(command, failonfail = true)
# Paste together an array with spaces. We used to paste directly
# together, no spaces, which made for odd invocations; the user had to
# include whitespace between arguments.
#
# Having two spaces is really not a big drama, since this passes to the
# shell anyhow, while no spaces makes for a small developer cost every
# time this is invoked. --daniel 2012-02-13
command_str = command.respond_to?(:join) ? command.join(' ') : command
if respond_to? :debug
debug "Executing '#{command_str}'"
else
Puppet.debug "Executing '#{command_str}'"
end
# force the command to run with
# the user/system locale set to "C" (via environment variables LANG and LC_*)
# so that some commands produce non-localized and therefore
# predictable output
english_env = ENV.to_hash.merge( {'LANG' => 'C', 'LC_ALL' => 'C'} )
output = Puppet::Util.withenv(english_env) do
open("| #{command_str} 2>&1") do |pipe|
yield pipe
end
end
- if failonfail
- unless $CHILD_STATUS == 0
- raise Puppet::ExecutionFailure, output
- end
+ if failonfail && exitstatus != 0
+ raise Puppet::ExecutionFailure, output
end
output
end
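# Illustrative usage sketch (not part of this change): the block receives the
# combined stdout/stderr pipe and its return value becomes execpipe's result;
# with the default failonfail, a non-zero exit raises Puppet::ExecutionFailure.
# The command shown is an example only.
#
#   listing = Puppet::Util::Execution.execpipe(['ls', '-l', '/tmp']) do |pipe|
#     pipe.read
#   end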
+ def self.exitstatus
+ $CHILD_STATUS.exitstatus
+ end
+ private_class_method :exitstatus
+
# Wraps execution of {execute} with mapping of exception to given exception (and output as argument).
# @raise [exception] under same conditions as {execute}, but raises the given `exception` with the output as argument
# @return (see execute)
# @api public
def self.execfail(command, exception)
output = execute(command)
return output
rescue Puppet::ExecutionFailure
raise exception, output, exception.backtrace
end
# Default empty options for {execute}
NoOptionsSpecified = {}
# Executes the desired command, and return the status and output.
# def execute(command, options)
# @param command [Array<String>, String] the command to execute. If it is
# an Array the first element should be the executable and the rest of the
# elements should be the individual arguments to that executable.
# @param options [Hash] a Hash of options
# @option options [Boolean] :failonfail if this value is set to true, then this method will raise an error if the
# command is not executed successfully.
# @option options [Integer, String] :uid (nil) the user id of the user that the process should be run as
# @option options [Integer, String] :gid (nil) the group id of the group that the process should be run as
# @option options [Boolean] :combine sets whether or not to combine stdout/stderr in the output
# @option options [String] :stdinfile (nil) sets a file that can be used for stdin. Passing a string for stdin is not currently
# supported.
# @option options [Boolean] :squelch (true) if true, ignore stdout / stderr completely.
# @option options [Boolean] :override_locale (true) by default (and if this option is set to true), we will temporarily override
# the user/system locale to "C" (via environment variables LANG and LC_*) while we are executing the command.
# This ensures that the output of the command will be formatted consistently, making it predictable for parsing.
# Passing in a value of false for this option will allow the command to be executed using the user/system locale.
# @option options [Hash<{String => String}>] :custom_environment ({}) a hash of key/value pairs to set as environment variables for the duration
# of the command.
# @return [Puppet::Util::Execution::ProcessOutput] output as specified by options
# @raise [Puppet::ExecutionFailure] if the executed child process did not exit with status == 0 and `failonfail` is
# `true`.
# @note Unfortunately, the default behavior for failonfail and combine (since
# 0.22.4 and 0.24.7, respectively) depends on whether options are specified
# or not. If specified, then failonfail and combine default to false (even
# when the options specified are neither failonfail nor combine). If no
# options are specified, then failonfail and combine default to true.
# @comment See commits efe9a833c and d32d7f30
# @api public
#
def self.execute(command, options = NoOptionsSpecified)
# specifying these here rather than in the method signature to allow callers to pass in a partial
# set of overrides without affecting the default values for options that they don't pass in
default_options = {
:failonfail => NoOptionsSpecified.equal?(options),
:uid => nil,
:gid => nil,
:combine => NoOptionsSpecified.equal?(options),
:stdinfile => nil,
:squelch => false,
:override_locale => true,
:custom_environment => {},
}
options = default_options.merge(options)
if command.is_a?(Array)
command = command.flatten.map(&:to_s)
str = command.join(" ")
elsif command.is_a?(String)
str = command
end
if respond_to? :debug
debug "Executing '#{str}'"
else
Puppet.debug "Executing '#{str}'"
end
null_file = Puppet.features.microsoft_windows? ? 'NUL' : '/dev/null'
- stdin = File.open(options[:stdinfile] || null_file, 'r')
- stdout = options[:squelch] ? File.open(null_file, 'w') : Tempfile.new('puppet')
- stderr = options[:combine] ? stdout : File.open(null_file, 'w')
+ begin
+ stdin = File.open(options[:stdinfile] || null_file, 'r')
+ stdout = options[:squelch] ? File.open(null_file, 'w') : Puppet::FileSystem::Uniquefile.new('puppet')
+ stderr = options[:combine] ? stdout : File.open(null_file, 'w')
- exec_args = [command, options, stdin, stdout, stderr]
+ exec_args = [command, options, stdin, stdout, stderr]
- if execution_stub = Puppet::Util::ExecutionStub.current_value
- return execution_stub.call(*exec_args)
- elsif Puppet.features.posix?
- child_pid = execute_posix(*exec_args)
- exit_status = Process.waitpid2(child_pid).last.exitstatus
- elsif Puppet.features.microsoft_windows?
- process_info = execute_windows(*exec_args)
- begin
- exit_status = Puppet::Util::Windows::Process.wait_process(process_info.process_handle)
- ensure
- Puppet::Util::Windows::Process.CloseHandle(process_info.process_handle)
- Puppet::Util::Windows::Process.CloseHandle(process_info.thread_handle)
+ if execution_stub = Puppet::Util::ExecutionStub.current_value
+ return execution_stub.call(*exec_args)
+ elsif Puppet.features.posix?
+ child_pid = execute_posix(*exec_args)
+ exit_status = Process.waitpid2(child_pid).last.exitstatus
+ elsif Puppet.features.microsoft_windows?
+ process_info = execute_windows(*exec_args)
+ begin
+ exit_status = Puppet::Util::Windows::Process.wait_process(process_info.process_handle)
+ ensure
+ FFI::WIN32.CloseHandle(process_info.process_handle)
+ FFI::WIN32.CloseHandle(process_info.thread_handle)
+ end
end
- end
- [stdin, stdout, stderr].each {|io| io.close rescue nil}
+ [stdin, stdout, stderr].each {|io| io.close rescue nil}
- # read output in if required
- unless options[:squelch]
- output = wait_for_output(stdout)
- Puppet.warning "Could not get output" unless output
- end
+ # read output in if required
+ unless options[:squelch]
+ output = wait_for_output(stdout)
+ Puppet.warning "Could not get output" unless output
+ end
- if options[:failonfail] and exit_status != 0
- raise Puppet::ExecutionFailure, "Execution of '#{str}' returned #{exit_status}: #{output.strip}"
+ if options[:failonfail] and exit_status != 0
+ raise Puppet::ExecutionFailure, "Execution of '#{str}' returned #{exit_status}: #{output.strip}"
+ end
+ ensure
+ if !options[:squelch] && stdout
+ # if we opened a temp file for stdout, we need to clean it up.
+ stdout.close!
+ end
end
Puppet::Util::Execution::ProcessOutput.new(output || '', exit_status)
end
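# Illustrative usage sketch (not part of this change). Remember the note
# above: passing any options hash flips the failonfail/combine defaults to
# false, so set them explicitly when you rely on them. The command is an
# example only.
#
#   result = Puppet::Util::Execution.execute(['/bin/hostname', '-f'],
#                                            :failonfail => true,
#                                            :combine    => true)
#   result             # a ProcessOutput (String subclass) holding the output
#   result.exitstatus  # => 0 on success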
# Returns the path to the ruby executable (available via Config object, even if
# it's not in the PATH... so this is slightly safer than just using Puppet::Util.which)
# @return [String] the path to the Ruby executable
# @api private
#
def self.ruby_path()
File.join(RbConfig::CONFIG['bindir'],
RbConfig::CONFIG['ruby_install_name'] + RbConfig::CONFIG['EXEEXT']).
sub(/.*\s.*/m, '"\&"')
end
# Because some modules provide their own version of this method.
class << self
alias util_execute execute
end
# This is a private method.
# @comment see call to private_class_method after method definition
# @api private
#
def self.execute_posix(command, options, stdin, stdout, stderr)
child_pid = Puppet::Util.safe_posix_fork(stdin, stdout, stderr) do
# We can't just call Array(command), and rely on it returning
# things like ['foo'], when passed ['foo'], because
# Array(command) will call command.to_a internally, which when
# given a string can end up doing Very Bad Things(TM), such as
# turning "/tmp/foo;\r\n /bin/echo" into ["/tmp/foo;\r\n", " /bin/echo"]
command = [command].flatten
Process.setsid
begin
Puppet::Util::SUIDManager.change_privileges(options[:uid], options[:gid], true)
# if the caller has requested that we override locale environment variables,
if (options[:override_locale]) then
# loop over them and clear them
Puppet::Util::POSIX::LOCALE_ENV_VARS.each { |name| ENV.delete(name) }
# set LANG and LC_ALL to 'C' so that the command will have consistent, predictable output
# it's OK to manipulate these directly rather than, e.g., via "withenv", because we are in
# a forked process.
ENV['LANG'] = 'C'
ENV['LC_ALL'] = 'C'
end
# unset all of the user-related environment variables so that different methods of starting puppet
# (automatic start during boot, via 'service', via /etc/init.d, etc.) won't have unexpected side
# effects relating to user / home dir environment vars.
# it's OK to manipulate these directly rather than, e.g., via "withenv", because we are in
# a forked process.
Puppet::Util::POSIX::USER_ENV_VARS.each { |name| ENV.delete(name) }
options[:custom_environment] ||= {}
Puppet::Util.withenv(options[:custom_environment]) do
Kernel.exec(*command)
end
rescue => detail
Puppet.log_exception(detail, "Could not execute posix command: #{detail}")
exit!(1)
end
end
child_pid
end
private_class_method :execute_posix
# This is a private method.
# @comment see call to private_class_method after method definition
# @api private
#
def self.execute_windows(command, options, stdin, stdout, stderr)
command = command.map do |part|
part.include?(' ') ? %Q["#{part.gsub(/"/, '\"')}"] : part
end.join(" ") if command.is_a?(Array)
options[:custom_environment] ||= {}
Puppet::Util.withenv(options[:custom_environment]) do
Puppet::Util::Windows::Process.execute(command, options, stdin, stdout, stderr)
end
end
private_class_method :execute_windows
# This is a private method.
# @comment see call to private_class_method after method definition
# @api private
#
def self.wait_for_output(stdout)
# Make sure the file's actually been written. This is basically a race
# condition, and is probably a horrible way to handle it, but, well, oh
# well.
# (If this method were treated as private / inaccessible from outside of this file, we shouldn't have to worry
# about a race condition because all of the places that we call this from are preceded by a call to "waitpid2",
# meaning that the processes responsible for writing the file have completed before we get here.)
2.times do |try|
if Puppet::FileSystem.exist?(stdout.path)
stdout.open
begin
return stdout.read
ensure
stdout.close
stdout.unlink
end
else
time_to_sleep = try / 2.0
Puppet.warning "Waiting for output; will sleep #{time_to_sleep} seconds"
sleep(time_to_sleep)
end
end
nil
end
private_class_method :wait_for_output
end
diff --git a/lib/puppet/util/feature.rb b/lib/puppet/util/feature.rb
index 19c544070..23d00edde 100644
--- a/lib/puppet/util/feature.rb
+++ b/lib/puppet/util/feature.rb
@@ -1,86 +1,97 @@
+require 'puppet'
+
class Puppet::Util::Feature
attr_reader :path
# Create a new feature test. You have to pass the feature name,
# and it must be unique. You can either provide a block that
# will get executed immediately to determine if the feature
# is present, or you can pass an option to determine it.
# Currently, the only supported option is 'libs' (must be
# passed as a symbol), which will make sure that each lib loads
# successfully.
def add(name, options = {})
method = name.to_s + "?"
@results.delete(name)
if block_given?
begin
result = yield
rescue Exception => detail
warn "Failed to load feature test for #{name}: #{detail}"
result = false
end
@results[name] = result
end
meta_def(method) do
- # Positive cache only, except blocks which are executed just once above
- final = @results[name] || block_given?
- @results[name] = test(name, options) unless final
- @results[name]
+ # we return a cached result if:
+ # * if a block is given (and we just evaluated it above)
+ # * if we already have a positive result
+ # * if we've tested this feature before and it failed, but we're
+ # configured to always cache
+ if block_given? ||
+ @results[name] ||
+ (@results.has_key?(name) and Puppet[:always_cache_features])
+ @results[name]
+ else
+ @results[name] = test(name, options)
+ @results[name]
+ end
end
end
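# Illustrative sketch of the two registration styles described above; the
# feature names and checks here are made up for the example.
#
#   Puppet.features.add(:example_root) do
#     require 'etc'
#     !Etc.getpwuid(0).nil?
#   end
#
#   Puppet.features.add(:example_json, :libs => 'json')
#
#   Puppet.features.example_root?  # block result (cached per the rules above)
#   Puppet.features.example_json?  # true only if 'json' can be required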
# Create a new feature collection.
def initialize(path)
@path = path
@results = {}
@loader = Puppet::Util::Autoload.new(self, @path)
end
def load
@loader.loadall
end
def method_missing(method, *args)
return super unless method.to_s =~ /\?$/
feature = method.to_s.sub(/\?$/, '')
@loader.load(feature)
respond_to?(method) && self.send(method)
end
# Actually test whether the feature is present. We only want to test when
# someone asks for the feature, so we don't unnecessarily load
# files.
def test(name, options)
return true unless ary = options[:libs]
ary = [ary] unless ary.is_a?(Array)
ary.each do |lib|
return false unless load_library(lib, name)
end
# We loaded all of the required libraries
true
end
private
def load_library(lib, name)
raise ArgumentError, "Libraries must be passed as strings not #{lib.class}" unless lib.is_a?(String)
@rubygems ||= Puppet::Util::RubyGems::Source.new
@rubygems.clear_paths
begin
require lib
rescue SystemExit,NoMemoryError
raise
rescue Exception
Puppet.debug "Failed to load library '#{lib}' for feature '#{name}'"
return false
end
true
end
end
diff --git a/lib/puppet/util/filetype.rb b/lib/puppet/util/filetype.rb
index 9fc3b289a..08f763ee5 100644
--- a/lib/puppet/util/filetype.rb
+++ b/lib/puppet/util/filetype.rb
@@ -1,299 +1,303 @@
# Basic classes for reading, writing, and emptying files. Not much
# to see here.
require 'puppet/util/selinux'
require 'tempfile'
require 'fileutils'
class Puppet::Util::FileType
attr_accessor :loaded, :path, :synced
class FileReadError < Puppet::Error; end
include Puppet::Util::SELinux
class << self
attr_accessor :name
include Puppet::Util::ClassGen
end
# Create a new filetype.
def self.newfiletype(name, &block)
@filetypes ||= {}
klass = genclass(
name,
:block => block,
:prefix => "FileType",
:hash => @filetypes
)
# Rename the read and write methods, so that we're sure they
# maintain the stats.
klass.class_eval do
# Rename the read method
define_method(:real_read, instance_method(:read))
define_method(:read) do
begin
val = real_read
@loaded = Time.now
if val
return val.gsub(/# HEADER.*\n/,'')
else
return ""
end
rescue Puppet::Error => detail
raise
rescue => detail
message = "#{self.class} could not read #{@path}: #{detail}"
Puppet.log_exception(detail, message)
raise Puppet::Error, message, detail.backtrace
end
end
# And then the write method
define_method(:real_write, instance_method(:write))
define_method(:write) do |text|
begin
val = real_write(text)
@synced = Time.now
return val
rescue Puppet::Error => detail
raise
rescue => detail
message = "#{self.class} could not write #{@path}: #{detail}"
Puppet.log_exception(detail, message)
raise Puppet::Error, message, detail.backtrace
end
end
end
end
def self.filetype(type)
@filetypes[type]
end
# Pick or create a filebucket to use.
def bucket
@bucket ||= Puppet::Type.type(:filebucket).mkdefaultbucket.bucket
end
- def initialize(path)
+ def initialize(path, default_mode = nil)
raise ArgumentError.new("Path is nil") if path.nil?
@path = path
+ @default_mode = default_mode
end
# Arguments that will be passed to the execute method. Will set the uid
# to the target user if the target user and the current user are not
# the same
def cronargs
if uid = Puppet::Util.uid(@path) and uid == Puppet::Util::SUIDManager.uid
{:failonfail => true, :combine => true}
else
{:failonfail => true, :combine => true, :uid => @path}
end
end
# Operate on plain files.
newfiletype(:flat) do
# Back the file up before replacing it.
def backup
bucket.backup(@path) if Puppet::FileSystem.exist?(@path)
end
# Read the file.
def read
if Puppet::FileSystem.exist?(@path)
File.read(@path)
else
return nil
end
end
# Remove the file.
def remove
Puppet::FileSystem.unlink(@path) if Puppet::FileSystem.exist?(@path)
end
# Overwrite the file.
def write(text)
tf = Tempfile.new("puppet")
tf.print text; tf.flush
+ File.chmod(@default_mode, tf.path) if @default_mode
FileUtils.cp(tf.path, @path)
tf.close
# If SELinux is present, we need to ensure the file has its expected context
set_selinux_default_context(@path)
end
end
# Operate on in-memory (RAM) pseudo-files.
newfiletype(:ram) do
@@tabs = {}
def self.clear
@@tabs.clear
end
- def initialize(path)
+ def initialize(path, default_mode = nil)
+ # default_mode is meaningless for this filetype,
+ # supported only for compatibility with :flat
super
@@tabs[@path] ||= ""
end
# Read the file.
def read
Puppet.info "Reading #{@path} from RAM"
@@tabs[@path]
end
# Remove the file.
def remove
Puppet.info "Removing #{@path} from RAM"
@@tabs[@path] = ""
end
# Overwrite the file.
def write(text)
Puppet.info "Writing #{@path} to RAM"
@@tabs[@path] = text
end
end
# Handle Linux-style cron tabs.
newfiletype(:crontab) do
def initialize(user)
self.path = user
end
def path=(user)
begin
@uid = Puppet::Util.uid(user)
rescue Puppet::Error => detail
raise FileReadError, "Could not retrieve user #{user}: #{detail}", detail.backtrace
end
# XXX We have to have the user name, not the uid, because some
# systems *cough*linux*cough* require it that way
@path = user
end
# Read a specific @path's cron tab.
def read
%x{#{cmdbase} -l 2>/dev/null}
end
# Remove a specific @path's cron tab.
def remove
if %w{Darwin FreeBSD DragonFly}.include?(Facter.value("operatingsystem"))
%x{/bin/echo yes | #{cmdbase} -r 2>/dev/null}
else
%x{#{cmdbase} -r 2>/dev/null}
end
end
# Overwrite a specific @path's cron tab; must be passed the @path name
# and the text with which to create the cron tab.
def write(text)
IO.popen("#{cmdbase()} -", "w") { |p|
p.print text
}
end
private
# Only add the -u flag when the @path is different. Fedora apparently
# does not think I should be allowed to set the @path to my own user name
def cmdbase
if @uid == Puppet::Util::SUIDManager.uid || Facter.value(:operatingsystem) == "HP-UX"
return "crontab"
else
return "crontab -u #{@path}"
end
end
end
# SunOS has completely different cron commands; this class implements
# its versions.
newfiletype(:suntab) do
# Read a specific @path's cron tab.
def read
Puppet::Util::Execution.execute(%w{crontab -l}, cronargs)
rescue => detail
case detail.to_s
when /can't open your crontab/
return ""
when /you are not authorized to use cron/
raise FileReadError, "User #{@path} not authorized to use cron", detail.backtrace
else
raise FileReadError, "Could not read crontab for #{@path}: #{detail}", detail.backtrace
end
end
# Remove a specific @path's cron tab.
def remove
Puppet::Util::Execution.execute(%w{crontab -r}, cronargs)
rescue => detail
raise FileReadError, "Could not remove crontab for #{@path}: #{detail}", detail.backtrace
end
# Overwrite a specific @path's cron tab; must be passed the @path name
# and the text with which to create the cron tab.
def write(text)
output_file = Tempfile.new("puppet_suntab")
begin
output_file.print text
output_file.close
# We have to chown the stupid file to the user.
File.chown(Puppet::Util.uid(@path), nil, output_file.path)
Puppet::Util::Execution.execute(["crontab", output_file.path], cronargs)
rescue => detail
raise FileReadError, "Could not write crontab for #{@path}: #{detail}", detail.backtrace
ensure
output_file.close
output_file.unlink
end
end
end
# Support for AIX crontab with output different than suntab's crontab command.
newfiletype(:aixtab) do
# Read a specific @path's cron tab.
def read
Puppet::Util::Execution.execute(%w{crontab -l}, cronargs)
rescue => detail
case detail.to_s
when /Cannot open a file in the .* directory/
return ""
when /You are not authorized to use the cron command/
raise FileReadError, "User #{@path} not authorized to use cron", detail.backtrace
else
raise FileReadError, "Could not read crontab for #{@path}: #{detail}", detail.backtrace
end
end
# Remove a specific @path's cron tab.
def remove
Puppet::Util::Execution.execute(%w{crontab -r}, cronargs)
rescue => detail
raise FileReadError, "Could not remove crontab for #{@path}: #{detail}", detail.backtrace
end
# Overwrite a specific @path's cron tab; must be passed the @path name
# and the text with which to create the cron tab.
def write(text)
output_file = Tempfile.new("puppet_aixtab")
begin
output_file.print text
output_file.close
# We have to chown the stupid file to the user.
File.chown(Puppet::Util.uid(@path), nil, output_file.path)
Puppet::Util::Execution.execute(["crontab", output_file.path], cronargs)
rescue => detail
raise FileReadError, "Could not write crontab for #{@path}: #{detail}", detail.backtrace
ensure
output_file.close
output_file.unlink
end
end
end
end
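# Illustrative sketch appended here for clarity (not part of this change):
# how the :flat filetype is typically used; the path and the new optional
# default_mode argument introduced in this diff are examples.
require 'puppet/util/filetype'

flat = Puppet::Util::FileType.filetype(:flat).new("/tmp/example.conf", 0640)
flat.write("managed = true\n")  # written via a Tempfile, then copied into place
flat.read                       # => "managed = true\n"
flat.remove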
diff --git a/lib/puppet/util/http_proxy.rb b/lib/puppet/util/http_proxy.rb
index 8c979c400..3785d4911 100644
--- a/lib/puppet/util/http_proxy.rb
+++ b/lib/puppet/util/http_proxy.rb
@@ -1,38 +1,65 @@
module Puppet::Util::HttpProxy
def self.http_proxy_env
# Returns a URI object if proxy is set, or nil
proxy_env = ENV["http_proxy"] || ENV["HTTP_PROXY"]
begin
return URI.parse(proxy_env) if proxy_env
rescue URI::InvalidURIError
return nil
end
return nil
end
def self.http_proxy_host
env = self.http_proxy_env
- if env and env.host then
+ if env and env.host
return env.host
end
if Puppet.settings[:http_proxy_host] == 'none'
return nil
end
return Puppet.settings[:http_proxy_host]
end
def self.http_proxy_port
env = self.http_proxy_env
- if env and env.port then
+ if env and env.port
return env.port
end
return Puppet.settings[:http_proxy_port]
end
+ def self.http_proxy_user
+ env = self.http_proxy_env
+
+ if env and env.user
+ return env.user
+ end
+
+ if Puppet.settings[:http_proxy_user] == 'none'
+ return nil
+ end
+
+ return Puppet.settings[:http_proxy_user]
+ end
+
+ def self.http_proxy_password
+ env = self.http_proxy_env
+
+ if env and env.password
+ return env.password
+ end
+
+ if Puppet.settings[:http_proxy_user] == 'none' or Puppet.settings[:http_proxy_password] == 'none'
+ return nil
+ end
+
+ return Puppet.settings[:http_proxy_password]
+ end
end
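# Illustrative sketch appended here for clarity (not part of this change):
# the http_proxy/HTTP_PROXY environment variable wins over the settings, and
# a setting value of 'none' disables it. Initialized Puppet settings are
# assumed; the target host is an example.
require 'net/http'
require 'puppet'
require 'puppet/util/http_proxy'

host = Puppet::Util::HttpProxy.http_proxy_host
port = Puppet::Util::HttpProxy.http_proxy_port
http = Net::HTTP.new("forge.puppetlabs.com", 443, host, port) if host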
diff --git a/lib/puppet/util/lockfile.rb b/lib/puppet/util/lockfile.rb
index 9771f389e..21a1c2d9b 100644
--- a/lib/puppet/util/lockfile.rb
+++ b/lib/puppet/util/lockfile.rb
@@ -1,66 +1,66 @@
# This class provides a simple API for managing a lock file
# whose contents are an (optional) String. In addition
# to querying the basic state (#locked?) of the lock, managing
# the lock (#lock, #unlock), the contents can be retrieved at
# any time while the lock is held (#lock_data). This can be
# used to store pids, messages, etc.
#
# @see Puppet::Util::JsonLockfile
class Puppet::Util::Lockfile
attr_reader :file_path
def initialize(file_path)
@file_path = file_path
end
# Lock the lockfile. You may optionally pass a data object, which will be
# retrievable for the duration of time during which the file is locked.
#
# @param [String] lock_data an optional String data object to associate
# with the lock. This may be used to store pids, descriptive messages,
# etc. The data may be retrieved at any time while the lock is held by
# calling the #lock_data method.
# @return [boolean] true if lock is successfully acquired, false otherwise.
def lock(lock_data = nil)
begin
Puppet::FileSystem.exclusive_create(@file_path, nil) do |fd|
fd.print(lock_data)
end
true
rescue Errno::EEXIST
false
end
end
def unlock
if locked?
Puppet::FileSystem.unlink(@file_path)
true
else
false
end
end
def locked?
# delegate logic to a more explicit private method
file_locked?
end
# Retrieve the (optional) lock data that was specified at the time the file
# was locked.
# @return [String] the data object.
def lock_data
return File.read(@file_path) if file_locked?
end
# Private, internal utility method for encapsulating the logic about
# whether or not the file is locked. This method can be called
# by other methods in this class without as much risk of accidentally
# being overridden by child classes.
# @return [boolean] true if the file is locked, false if it is not.
- def file_locked?()
+ def file_locked?
Puppet::FileSystem.exist? @file_path
end
private :file_locked?
end
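# Illustrative sketch appended here for clarity (not part of this change) of
# the API described in the class comment above; the path and data are examples.
require 'puppet'
require 'puppet/util/lockfile'

lock = Puppet::Util::Lockfile.new("/tmp/example.lock")
if lock.lock("held by pid #{Process.pid}")
  begin
    lock.lock_data   # => "held by pid <pid>"
  ensure
    lock.unlock
  end
end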
diff --git a/lib/puppet/util/log/destinations.rb b/lib/puppet/util/log/destinations.rb
index 932007099..6c5cc7f23 100644
--- a/lib/puppet/util/log/destinations.rb
+++ b/lib/puppet/util/log/destinations.rb
@@ -1,228 +1,232 @@
Puppet::Util::Log.newdesttype :syslog do
def self.suitable?(obj)
Puppet.features.syslog?
end
def close
Syslog.close
end
def initialize
Syslog.close if Syslog.opened?
name = "puppet-#{Puppet.run_mode.name}"
options = Syslog::LOG_PID | Syslog::LOG_NDELAY
# XXX This should really be configurable.
str = Puppet[:syslogfacility]
begin
facility = Syslog.const_get("LOG_#{str.upcase}")
rescue NameError
raise Puppet::Error, "Invalid syslog facility #{str}", $!.backtrace
end
@syslog = Syslog.open(name, options, facility)
end
def handle(msg)
# XXX Syslog currently has a bug that makes it so you
# cannot log a message with a '%' in it. So, we get rid
# of them.
if msg.source == "Puppet"
msg.to_s.split("\n").each do |line|
@syslog.send(msg.level, line.gsub("%", '%%'))
end
else
msg.to_s.split("\n").each do |line|
@syslog.send(msg.level, "(%s) %s" % [msg.source.to_s.gsub("%", ""),
line.gsub("%", '%%')
]
)
end
end
end
end
Puppet::Util::Log.newdesttype :file do
require 'fileutils'
def self.match?(obj)
Puppet::Util.absolute_path?(obj)
end
def close
if defined?(@file)
@file.close
@file = nil
end
end
def flush
@file.flush if defined?(@file)
end
attr_accessor :autoflush
def initialize(path)
@name = path
# first make sure the directory exists
# We can't just use 'Config.use' here, because they've
# specified a "special" destination.
unless Puppet::FileSystem.exist?(Puppet::FileSystem.dir(path))
FileUtils.mkdir_p(File.dirname(path), :mode => 0755)
Puppet.info "Creating log directory #{File.dirname(path)}"
end
# create the log file, if it doesn't already exist
file = File.open(path, File::WRONLY|File::CREAT|File::APPEND)
# Give ownership to the user and group puppet will run as
begin
FileUtils.chown(Puppet[:user], Puppet[:group], path) unless Puppet::Util::Platform.windows?
rescue ArgumentError, Errno::EPERM
Puppet.err "Unable to set ownership of log file"
end
@file = file
@autoflush = Puppet[:autoflush]
end
def handle(msg)
@file.puts("#{msg.time} #{msg.source} (#{msg.level}): #{msg}")
@file.flush if @autoflush
end
end
Puppet::Util::Log.newdesttype :logstash_event do
require 'time'
def format(msg)
# logstash_event format is documented at
# https://logstash.jira.com/browse/LOGSTASH-675
data = {}
data = msg.to_hash
data['version'] = 1
data['@timestamp'] = data['time']
data.delete('time')
data
end
def handle(msg)
message = format(msg)
$stdout.puts message.to_pson
end
end
Puppet::Util::Log.newdesttype :console do
require 'puppet/util/colors'
include Puppet::Util::Colors
def initialize
# Flush output immediately.
$stderr.sync = true
$stdout.sync = true
end
def handle(msg)
levels = {
:emerg => { :name => 'Emergency', :color => :hred, :stream => $stderr },
:alert => { :name => 'Alert', :color => :hred, :stream => $stderr },
:crit => { :name => 'Critical', :color => :hred, :stream => $stderr },
:err => { :name => 'Error', :color => :hred, :stream => $stderr },
:warning => { :name => 'Warning', :color => :hred, :stream => $stderr },
:notice => { :name => 'Notice', :color => :reset, :stream => $stdout },
:info => { :name => 'Info', :color => :green, :stream => $stdout },
:debug => { :name => 'Debug', :color => :cyan, :stream => $stdout },
}
str = msg.respond_to?(:multiline) ? msg.multiline : msg.to_s
str = msg.source == "Puppet" ? str : "#{msg.source}: #{str}"
level = levels[msg.level]
level[:stream].puts colorize(level[:color], "#{level[:name]}: #{str}")
end
end
# Log to a transaction report.
Puppet::Util::Log.newdesttype :report do
attr_reader :report
match "Puppet::Transaction::Report"
def initialize(report)
@report = report
end
def handle(msg)
@report << msg
end
end
# Log to an array, just for testing.
module Puppet::Test
class LogCollector
def initialize(logs)
@logs = logs
end
def <<(value)
@logs << value
end
end
end
Puppet::Util::Log.newdesttype :array do
match "Puppet::Test::LogCollector"
def initialize(messages)
@messages = messages
end
def handle(msg)
@messages << msg
end
end
Puppet::Util::Log.newdesttype :eventlog do
+ Puppet::Util::Log::DestEventlog::EVENTLOG_ERROR_TYPE = 0x0001
+ Puppet::Util::Log::DestEventlog::EVENTLOG_WARNING_TYPE = 0x0002
+ Puppet::Util::Log::DestEventlog::EVENTLOG_INFORMATION_TYPE = 0x0004
+
def self.suitable?(obj)
Puppet.features.eventlog?
end
def initialize
@eventlog = Win32::EventLog.open("Application")
end
def to_native(level)
case level
when :debug,:info,:notice
- [Win32::EventLog::INFO, 0x01]
+ [self.class::EVENTLOG_INFORMATION_TYPE, 0x01]
when :warning
- [Win32::EventLog::WARN, 0x02]
+ [self.class::EVENTLOG_WARNING_TYPE, 0x02]
when :err,:alert,:emerg,:crit
- [Win32::EventLog::ERROR, 0x03]
+ [self.class::EVENTLOG_ERROR_TYPE, 0x03]
end
end
def handle(msg)
native_type, native_id = to_native(msg.level)
@eventlog.report_event(
:source => "Puppet",
:event_type => native_type,
:event_id => native_id,
:data => (msg.source and msg.source != 'Puppet' ? "#{msg.source}: " : '') + msg.to_s
)
end
def close
if @eventlog
@eventlog.close
@eventlog = nil
end
end
end
diff --git a/lib/puppet/util/logging.rb b/lib/puppet/util/logging.rb
index 7541b42b0..84a35ddc2 100644
--- a/lib/puppet/util/logging.rb
+++ b/lib/puppet/util/logging.rb
@@ -1,151 +1,181 @@
# A module to make logging a bit easier.
require 'puppet/util/log'
require 'puppet/error'
module Puppet::Util::Logging
def send_log(level, message)
Puppet::Util::Log.create({:level => level, :source => log_source, :message => message}.merge(log_metadata))
end
# Create a method for each log level.
Puppet::Util::Log.eachlevel do |level|
define_method(level) do |args|
args = args.join(" ") if args.is_a?(Array)
send_log(level, args)
end
end
# Log an exception via Puppet.err. Will also log the backtrace if Puppet[:trace] is set.
# Parameters:
# [exception] an Exception to log
# [message] an optional String overriding the message to be logged; by default, we log Exception.message.
# If you pass a String here, your string will be logged instead. You may also pass nil if you don't
# wish to log a message at all; in this case it is likely that you are only calling this method in order
# to take advantage of the backtrace logging.
def log_exception(exception, message = :default, options = {})
err(format_exception(exception, message, Puppet[:trace] || options[:trace]))
end
def format_exception(exception, message = :default, trace = true)
arr = []
case message
when :default
arr << exception.message
when nil
# don't log anything if they passed a nil; they are just calling for the optional backtrace logging
else
arr << message
end
if trace and exception.backtrace
arr << Puppet::Util.pretty_backtrace(exception.backtrace)
end
if exception.respond_to?(:original) and exception.original
arr << "Wrapped exception:"
arr << format_exception(exception.original, :default, trace)
end
arr.flatten.join("\n")
end
def log_and_raise(exception, message)
log_exception(exception, message)
raise exception, message + "\n" + exception.to_s, exception.backtrace
end
class DeprecationWarning < Exception; end
- # Logs a warning indicating that the code path is deprecated. Note that this
- # method keeps track of the offending lines of code that triggered the
+ # Logs a warning indicating that the Ruby code path is deprecated. Note that
+ # this method keeps track of the offending lines of code that triggered the
# deprecation warning, and will only log a warning once per offending line of
# code. It will also stop logging deprecation warnings altogether after 100
- # unique deprecation warnings have been logged.
+ # unique deprecation warnings have been logged. Finally, if
+ # Puppet[:disable_warnings] includes 'deprecations', it will squelch all
+ # warning calls made via this method.
#
- # @param [String] message The message to log (logs via )
- # @param [String] key Optional key to mark the message as unique. If not
+ # @param message [String] The message to log (logs via warning)
+ # @param key [String] Optional key to mark the message as unique. If not
# passed in, the originating call line will be used instead.
def deprecation_warning(message, key = nil)
- return if Puppet[:disable_warnings].include?('deprecations')
- $deprecation_warnings ||= {}
- if $deprecation_warnings.length < 100 then
- key ||= (offender = get_deprecation_offender)
- if (! $deprecation_warnings.has_key?(key)) then
- $deprecation_warnings[key] = message
- warning("#{message}\n (at #{(offender || get_deprecation_offender).join('; ')})")
- end
- end
+ issue_deprecation_warning(message, key, nil, nil, true)
+ end
+
+ # Logs a warning whose origin comes from Puppet source rather than somewhere
+ # internal within Puppet. Otherwise the same as deprecation_warning()
+ #
+ # @param message [String] The message to log (logs via warning)
+ # @param options [Hash]
+ # @option options [String] :file File we are warning from
+ # @option options [Integer] :line Line number we are warning from
+ # @option options [String] :key (:file + :line) Alternative key used to mark
+ # warning as unique
+ #
+ # Either :file and :line and/or :key must be passed.
+ def puppet_deprecation_warning(message, options = {})
+ key = options[:key]
+ file = options[:file]
+ line = options[:line]
+ raise(Puppet::DevError, "Need either :file and :line, or :key") if (key.nil?) && (file.nil? || line.nil?)
+
+ key ||= "#{file}:#{line}"
+ issue_deprecation_warning(message, key, file, line, false)
end
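# Illustrative usage sketch (not part of this change); the file and line are
# examples. The second form needs either :file and :line, or an explicit :key.
#
#   Puppet.deprecation_warning("Foo.bar is deprecated; use Foo.baz instead")
#
#   Puppet.puppet_deprecation_warning("the 'legacy' setting is deprecated",
#     :file => "/etc/puppet/manifests/site.pp", :line => 12)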
def get_deprecation_offender()
# we have to put this in its own method to simplify testing; we need to be able to mock the offender results in
# order to test this class, and our framework does not appear to enjoy it if you try to mock Kernel.caller
#
# let's find the offending line; we need to jump back up the stack a few steps to find the method that called
# the deprecated method
if Puppet[:trace]
caller()[2..-1]
else
[caller()[2]]
end
end
def clear_deprecation_warnings
$deprecation_warnings.clear if $deprecation_warnings
end
# TODO: determine whether there might be a potential use for adding a puppet configuration option that would
# enable this deprecation logging.
# utility method that can be called, e.g., from spec_helper config.after, when tracking down calls to deprecated
# code.
# Parameters:
# [deprecations_file] relative or absolute path of a file to log the deprecations to
# [pattern] (default nil) if specified, will only log deprecations whose message matches the provided pattern
def log_deprecations_to_file(deprecations_file, pattern = nil)
# this method may get called lots and lots of times (e.g., from spec_helper config.after) without the global
# list of deprecation warnings being cleared out. We don't want to keep logging the same offenders over and over,
# so, we need to keep track of what we've logged.
#
# It'd be nice if we could just clear out the list of deprecation warnings, but then the very next spec might
# find the same offender, and we'd end up logging it again.
$logged_deprecation_warnings ||= {}
File.open(deprecations_file, "a") do |f|
if ($deprecation_warnings) then
$deprecation_warnings.each do |offender, message|
if (! $logged_deprecation_warnings.has_key?(offender)) then
$logged_deprecation_warnings[offender] = true
if ((pattern.nil?) || (message =~ pattern)) then
f.puts(message)
f.puts(offender)
f.puts()
end
end
end
end
end
end
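# Editorial sketch, not part of this patch, of the spec_helper usage mentioned
# above; the output file name and pattern are hypothetical, and we assume the
# method is reachable via the Puppet module like the other helpers in this
# mixin.
#
# @example Dumping collected deprecations after each spec
#   RSpec.configure do |config|
#     config.after :each do
#       Puppet.log_deprecations_to_file('deprecations.txt', /deprecated/)
#     end
#   end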
private
+ def issue_deprecation_warning(message, key, file, line, use_caller)
+ return if Puppet[:disable_warnings].include?('deprecations')
+ $deprecation_warnings ||= {}
+ if $deprecation_warnings.length < 100 then
+ key ||= (offender = get_deprecation_offender)
+ if (! $deprecation_warnings.has_key?(key)) then
+ $deprecation_warnings[key] = message
+ call_trace = use_caller ?
+ (offender || get_deprecation_offender).join('; ') :
+ "#{file || 'unknown'}:#{line || 'unknown'}"
+ warning("#{message}\n (at #{call_trace})")
+ end
+ end
+ end
+
def is_resource?
defined?(Puppet::Type) && is_a?(Puppet::Type)
end
def is_resource_parameter?
defined?(Puppet::Parameter) && is_a?(Puppet::Parameter)
end
def log_metadata
[:file, :line, :tags].inject({}) do |result, attr|
result[attr] = send(attr) if respond_to?(attr)
result
end
end
def log_source
# We need to guard the existence of the constants, since this module is used by the base Puppet module.
(is_resource? or is_resource_parameter?) and respond_to?(:path) and return path.to_s
to_s
end
end
diff --git a/lib/puppet/util/pidlock.rb b/lib/puppet/util/pidlock.rb
index 35e4ad431..3ebb4e0c9 100644
--- a/lib/puppet/util/pidlock.rb
+++ b/lib/puppet/util/pidlock.rb
@@ -1,56 +1,62 @@
require 'fileutils'
require 'puppet/util/lockfile'
class Puppet::Util::Pidlock
def initialize(lockfile)
@lockfile = Puppet::Util::Lockfile.new(lockfile)
end
def locked?
clear_if_stale
@lockfile.locked?
end
def mine?
Process.pid == lock_pid
end
def lock
return mine? if locked?
@lockfile.lock(Process.pid)
end
- def unlock()
+ def unlock
if mine?
return @lockfile.unlock
else
false
end
end
def lock_pid
- @lockfile.lock_data.to_i
+ pid = @lockfile.lock_data
+ begin
+ Integer(pid)
+ rescue ArgumentError, TypeError
+ nil
+ end
end
def file_path
@lockfile.file_path
end
def clear_if_stale
- return if lock_pid.nil?
+ return @lockfile.unlock if lock_pid.nil?
errors = [Errno::ESRCH]
- # Process::Error can only happen, and is only defined, on Windows
- errors << Process::Error if defined? Process::Error
+ # Win32::Process now throws SystemCallError. Since this could be
+ # defined anywhere, only add when on Windows.
+ errors << SystemCallError if Puppet::Util::Platform.windows?
begin
Process.kill(0, lock_pid)
rescue *errors
@lockfile.unlock
end
end
private :clear_if_stale
end
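# Editorial sketch, not part of this patch; the lock file path below is
# hypothetical.
#
# @example Taking and releasing a PID-based lock
#   lock = Puppet::Util::Pidlock.new('/var/run/puppet/agent.lock')
#   if lock.lock
#     begin
#       # ... exclusive work ...
#     ensure
#       lock.unlock
#     end
#   end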
diff --git a/lib/puppet/util/posix.rb b/lib/puppet/util/posix.rb
index 2af0e4b91..fa2c609a7 100644
--- a/lib/puppet/util/posix.rb
+++ b/lib/puppet/util/posix.rb
@@ -1,147 +1,137 @@
# Utility methods for interacting with POSIX objects; mostly user and group
module Puppet::Util::POSIX
# This is a list of environment variables that we will set when we want to override the POSIX locale
LOCALE_ENV_VARS = ['LANG', 'LC_ALL', 'LC_MESSAGES', 'LANGUAGE',
'LC_COLLATE', 'LC_CTYPE', 'LC_MONETARY', 'LC_NUMERIC', 'LC_TIME']
# This is a list of user-related environment variables that we will unset when we want to provide a pristine
# environment for "exec" runs
USER_ENV_VARS = ['HOME', 'USER', 'LOGNAME']
# Retrieve a field from a POSIX Etc object. The id can be either an integer
# or a name. This only works for users and groups. It's also broken on
# some platforms, unfortunately, which is why we fall back to the other
# method search_posix_field in the gid and uid methods if a sanity check
# fails
def get_posix_field(space, field, id)
raise Puppet::DevError, "Did not get id from caller" unless id
if id.is_a?(Integer)
if id > Puppet[:maximum_uid].to_i
Puppet.err "Tried to get #{field} field for silly id #{id}"
return nil
end
method = methodbyid(space)
else
method = methodbyname(space)
end
begin
return Etc.send(method, id).send(field)
rescue NoMethodError, ArgumentError
# ignore it; we couldn't find the object
return nil
end
end
# A degenerate method of retrieving name/id mappings. The job of this method is
# to retrieve all objects of a certain type, search for a specific entry
# and then return a given field from that entry.
def search_posix_field(type, field, id)
idmethod = idfield(type)
integer = false
if id.is_a?(Integer)
integer = true
if id > Puppet[:maximum_uid].to_i
Puppet.err "Tried to get #{field} field for silly id #{id}"
return nil
end
end
Etc.send(type) do |object|
if integer and object.send(idmethod) == id
return object.send(field)
elsif object.name == id
return object.send(field)
end
end
# Apparently the group/passwd methods need to get reset; if we skip
# this call, then new users aren't found.
case type
when :passwd; Etc.send(:endpwent)
when :group; Etc.send(:endgrent)
end
nil
end
# Determine what the field name is for users and groups.
def idfield(space)
case space.intern
when :gr, :group; return :gid
when :pw, :user, :passwd; return :uid
else
raise ArgumentError.new("Can only handle users and groups")
end
end
# Determine what the method is to get users and groups by id
def methodbyid(space)
case space.intern
when :gr, :group; return :getgrgid
when :pw, :user, :passwd; return :getpwuid
else
raise ArgumentError.new("Can only handle users and groups")
end
end
# Determine what the method is to get users and groups by name
def methodbyname(space)
case space.intern
when :gr, :group; return :getgrnam
when :pw, :user, :passwd; return :getpwnam
else
raise ArgumentError.new("Can only handle users and groups")
end
end
- # Get the GID of a given group, provided either a GID or a name
+ # Get the GID
def gid(group)
- begin
- group = Integer(group)
- rescue ArgumentError
- # pass
- end
- if group.is_a?(Integer)
- return nil unless name = get_posix_field(:group, :name, group)
- gid = get_posix_field(:group, :gid, name)
- check_value = gid
- else
- return nil unless gid = get_posix_field(:group, :gid, group)
- name = get_posix_field(:group, :name, gid)
- check_value = name
- end
- if check_value != group
- return search_posix_field(:group, :gid, group)
- else
- return gid
- end
+ get_posix_value(:group, :gid, group)
end
- # Get the UID of a given user, whether a UID or name is provided
+ # Get the UID
def uid(user)
+ get_posix_value(:passwd, :uid, user)
+ end
+
+ private
+
+ # Get the specified id_field of a given field (user or group),
+ # whether an ID or name is provided
+ def get_posix_value(location, id_field, field)
begin
- user = Integer(user)
+ field = Integer(field)
rescue ArgumentError
# pass
end
- if user.is_a?(Integer)
- return nil unless name = get_posix_field(:passwd, :name, user)
- uid = get_posix_field(:passwd, :uid, name)
- check_value = uid
+ if field.is_a?(Integer)
+ return nil unless name = get_posix_field(location, :name, field)
+ id = get_posix_field(location, id_field, name)
+ check_value = id
else
- return nil unless uid = get_posix_field(:passwd, :uid, user)
- name = get_posix_field(:passwd, :name, uid)
+ return nil unless id = get_posix_field(location, id_field, field)
+ name = get_posix_field(location, :name, id)
check_value = name
end
- if check_value != user
- return search_posix_field(:passwd, :uid, user)
+ if check_value != field
+ return search_posix_field(location, id_field, field)
else
- return uid
+ return id
end
end
end
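# Editorial sketch, not part of this patch: both public helpers accept either
# a name or a numeric id, and fall back to search_posix_field when the Etc
# lookup is inconsistent, as described above. Results shown assume a typical
# Unix system.
#
# @example Resolving ids by name or number
#   include Puppet::Util::POSIX
#   uid('root')  #=> 0
#   gid(0)       #=> 0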
diff --git a/lib/puppet/util/profiler.rb b/lib/puppet/util/profiler.rb
index 4246181b4..820231f27 100644
--- a/lib/puppet/util/profiler.rb
+++ b/lib/puppet/util/profiler.rb
@@ -1,45 +1,53 @@
require 'benchmark'
# A simple profiling callback system.
#
# @api public
module Puppet::Util::Profiler
require 'puppet/util/profiler/wall_clock'
require 'puppet/util/profiler/object_counts'
- require 'puppet/util/profiler/none'
+ require 'puppet/util/profiler/around_profiler'
- NONE = Puppet::Util::Profiler::None.new
+ @profiler = Puppet::Util::Profiler::AroundProfiler.new
# Reset the profiling system to the original state
#
# @api private
def self.clear
- @profiler = nil
+ @profiler.clear
end
- # @return This thread's configured profiler
+ # Retrieve the current list of profilers
+ #
# @api private
def self.current
- @profiler || NONE
+ @profiler.current
end
# @param profiler [#profile] A profiler for the current thread
# @api private
- def self.current=(profiler)
- @profiler = profiler
+ def self.add_profiler(profiler)
+ @profiler.add_profiler(profiler)
+ end
+
+ # @param profiler [#profile] A profiler to remove from the current thread
+ # @api private
+ def self.remove_profiler(profiler)
+ @profiler.remove_profiler(profiler)
end
# Profile a block of code and log the time it took to execute.
#
# This outputs log entries to the Puppet master's logging destination,
# providing the time it took, a message describing the profiled code,
# and a leaf location marking where the profile method was called
# in the profiled hierarchy.
#
# @param message [String] A description of the profiled event
+ # @param metric_id [Array] A list of strings making up the ID of a metric to profile
# @param block [Block] The segment of code to profile
# @api public
- def self.profile(message, &block)
- current.profile(message, &block)
+ def self.profile(message, metric_id = nil, &block)
+ @profiler.profile(message, metric_id, &block)
end
end
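# Editorial sketch, not part of this patch, showing the new
# add_profiler/remove_profiler flow; the logger and identifier below are
# hypothetical.
#
# @example Measuring a block with a wall-clock profiler
#   profiler = Puppet::Util::Profiler.add_profiler(
#     Puppet::Util::Profiler::WallClock.new(method(:puts), 'compile-42'))
#   Puppet::Util::Profiler.profile("Compiled catalog", [:compiler, :compile]) do
#     # ... work to measure ...
#   end
#   Puppet::Util::Profiler.remove_profiler(profiler)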
diff --git a/lib/puppet/util/profiler/aggregate.rb b/lib/puppet/util/profiler/aggregate.rb
new file mode 100644
index 000000000..e8a4ca595
--- /dev/null
+++ b/lib/puppet/util/profiler/aggregate.rb
@@ -0,0 +1,85 @@
+require 'puppet/util/profiler'
+require 'puppet/util/profiler/wall_clock'
+
+class Puppet::Util::Profiler::Aggregate < Puppet::Util::Profiler::WallClock
+ def initialize(logger, identifier)
+ super(logger, identifier)
+ @metrics_hash = Metric.new
+ end
+
+ def shutdown()
+ super
+ @logger.call("AGGREGATE PROFILING RESULTS:")
+ @logger.call("----------------------------")
+ print_metrics(@metrics_hash, "")
+ @logger.call("----------------------------")
+ end
+
+ def do_start(description, metric_id)
+ super(description, metric_id)
+ end
+
+ def do_finish(context, description, metric_id)
+ result = super(context, description, metric_id)
+ update_metric(@metrics_hash, metric_id, result[:time])
+ result
+ end
+
+ def update_metric(metrics_hash, metric_id, time)
+ first, *rest = *metric_id
+ if first
+ m = metrics_hash[first]
+ m.increment
+ m.add_time(time)
+ if rest.count > 0
+ update_metric(m, rest, time)
+ end
+ end
+ end
+
+ def values
+ @metrics_hash
+ end
+
+ def print_metrics(metrics_hash, prefix)
+ metrics_hash.sort_by {|k,v| v.time }.reverse.each do |k,v|
+ @logger.call("#{prefix}#{k}: #{v.time} ms (#{v.count} calls)")
+ print_metrics(metrics_hash[k], "#{prefix}#{k} -> ")
+ end
+ end
+
+ class Metric < Hash
+ def initialize
+ super
+ @count = 0
+ @time = 0
+ end
+ attr_reader :count, :time
+
+ def [](key)
+ if !has_key?(key)
+ self[key] = Metric.new
+ end
+ super(key)
+ end
+
+ def increment
+ @count += 1
+ end
+
+ def add_time(time)
+ @time += time
+ end
+ end
+
+ class Timer
+ def initialize
+ @start = Time.now
+ end
+
+ def stop
+ Time.now - @start
+ end
+ end
+end
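# Editorial sketch, not part of this patch: update_metric walks the metric_id
# segments and accumulates a call count and total time at every level of the
# nested Metric hash. The ids and timings below are made up.
#
# @example Aggregating two timings under the same top-level key
#   agg = Puppet::Util::Profiler::Aggregate.new(method(:puts), 'master')
#   agg.update_metric(agg.values, [:functions, :hiera], 12)
#   agg.update_metric(agg.values, [:functions, :template], 30)
#   agg.values[:functions].count  #=> 2
#   agg.values[:functions].time   #=> 42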
+
diff --git a/lib/puppet/util/profiler/around_profiler.rb b/lib/puppet/util/profiler/around_profiler.rb
new file mode 100644
index 000000000..0b408c6d8
--- /dev/null
+++ b/lib/puppet/util/profiler/around_profiler.rb
@@ -0,0 +1,67 @@
+# A Profiler that can be used to wrap around blocks of code. It is configured
+# with other profilers and controls them to start before the block is executed
+# and finish after the block is executed.
+#
+# @api private
+class Puppet::Util::Profiler::AroundProfiler
+
+ def initialize
+ @profilers = []
+ end
+
+ # Reset the profiling system to the original state
+ #
+ # @api private
+ def clear
+ @profilers = []
+ end
+
+ # Retrieve the current list of profilers
+ #
+ # @api private
+ def current
+ @profilers
+ end
+
+ # @param profiler [#profile] A profiler for the current thread
+ # @api private
+ def add_profiler(profiler)
+ @profilers << profiler
+ profiler
+ end
+
+ # @param profiler [#profile] A profiler to remove from the current thread
+ # @api private
+ def remove_profiler(profiler)
+ @profilers.delete(profiler)
+ end
+
+ # Profile a block of code and log the time it took to execute.
+ #
+ # This outputs log entries to the Puppet master's logging destination,
+ # providing the time it took, a message describing the profiled code
+ # and a leaf location marking where the profile method was called
+ # in the profiled hierarchy.
+ #
+ # @param message [String] A description of the profiled event
+ # @param metric_id [Array] A list of strings making up the ID of a metric to profile
+ # @param block [Block] The segment of code to profile
+ # @api private
+ def profile(message, metric_id = nil)
+ retval = nil
+ contexts = {}
+ @profilers.each do |profiler|
+ contexts[profiler] = profiler.start(message, metric_id)
+ end
+
+ begin
+ retval = yield
+ ensure
+ @profilers.each do |profiler|
+ profiler.finish(contexts[profiler], message, metric_id)
+ end
+ end
+
+ retval
+ end
+end
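# Editorial sketch, not part of this patch: anything responding to start and
# finish can be registered. The MyTimer class below is hypothetical.
#
# @example Wrapping a block with a minimal custom profiler
#   class MyTimer
#     def start(message, metric_id); Time.now; end
#     def finish(context, message, metric_id)
#       puts "#{message} took #{Time.now - context} seconds"
#     end
#   end
#
#   around = Puppet::Util::Profiler::AroundProfiler.new
#   around.add_profiler(MyTimer.new)
#   around.profile("sleeping", [:example]) { sleep 0.1 }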
diff --git a/lib/puppet/util/profiler/logging.rb b/lib/puppet/util/profiler/logging.rb
index c0de09e25..5d1fd101c 100644
--- a/lib/puppet/util/profiler/logging.rb
+++ b/lib/puppet/util/profiler/logging.rb
@@ -1,47 +1,48 @@
class Puppet::Util::Profiler::Logging
def initialize(logger, identifier)
@logger = logger
@identifier = identifier
@sequence = Sequence.new
end
- def profile(description, &block)
- retval = nil
+ def start(description, metric_id)
@sequence.next
@sequence.down
- context = start
- begin
- retval = yield
- ensure
- profile_explanation = finish(context)
- @sequence.up
- @logger.call("PROFILE [#{@identifier}] #{@sequence} #{description}: #{profile_explanation}")
- end
- retval
+ do_start(description, metric_id)
+ end
+
+ def finish(context, description, metric_id)
+ profile_explanation = do_finish(context, description, metric_id)[:msg]
+ @sequence.up
+ @logger.call("PROFILE [#{@identifier}] #{@sequence} #{description}: #{profile_explanation}")
+ end
+
+ def shutdown()
+ # nothing to do
end
class Sequence
INITIAL = 0
SEPARATOR = '.'
def initialize
@elements = [INITIAL]
end
def next
@elements[-1] += 1
end
def down
@elements << INITIAL
end
def up
@elements.pop
end
def to_s
@elements.join(SEPARATOR)
end
end
end
diff --git a/lib/puppet/util/profiler/none.rb b/lib/puppet/util/profiler/none.rb
deleted file mode 100644
index 7d4ad716d..000000000
--- a/lib/puppet/util/profiler/none.rb
+++ /dev/null
@@ -1,8 +0,0 @@
-# A no-op profiler. Used when there is no profiling wanted.
-#
-# @api private
-class Puppet::Util::Profiler::None
- def profile(description, &block)
- yield
- end
-end
diff --git a/lib/puppet/util/profiler/wall_clock.rb b/lib/puppet/util/profiler/wall_clock.rb
index 2ba47ca3e..46ac095ec 100644
--- a/lib/puppet/util/profiler/wall_clock.rb
+++ b/lib/puppet/util/profiler/wall_clock.rb
@@ -1,34 +1,35 @@
require 'puppet/util/profiler/logging'
# A profiler implementation that measures the number of seconds a segment of
# code takes to execute and provides a callback with a string representation of
# the profiling information.
#
# @api private
class Puppet::Util::Profiler::WallClock < Puppet::Util::Profiler::Logging
- def start
+ def do_start(description, metric_id)
Timer.new
end
- def finish(context)
- context.stop
- "took #{context} seconds"
+ def do_finish(context, description, metric_id)
+ {:time => context.stop,
+ :msg => "took #{context} seconds"}
end
class Timer
FOUR_DECIMAL_DIGITS = '%0.4f'
def initialize
@start = Time.now
end
def stop
- @finish = Time.now
+ @time = Time.now - @start
+ @time
end
def to_s
- format(FOUR_DECIMAL_DIGITS, @finish - @start)
+ format(FOUR_DECIMAL_DIGITS, @time)
end
end
end
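# Editorial sketch, not part of this patch: subclasses of the Logging base now
# implement do_start/do_finish, with do_finish returning a hash carrying :time
# and :msg as WallClock does above. The logger and identifier are hypothetical.
#
# @example What the logging wrapper emits
#   profiler = Puppet::Util::Profiler::WallClock.new(method(:puts), 'agent')
#   ctx = profiler.start("evaluating File[/tmp/x]", [:resource])
#   profiler.finish(ctx, "evaluating File[/tmp/x]", [:resource])
#   # prints e.g. "PROFILE [agent] 1 evaluating File[/tmp/x]: took 0.0001 seconds"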
diff --git a/lib/puppet/util/rdoc.rb b/lib/puppet/util/rdoc.rb
index 9119784e7..c2b9c15f4 100644
--- a/lib/puppet/util/rdoc.rb
+++ b/lib/puppet/util/rdoc.rb
@@ -1,89 +1,96 @@
require 'puppet/util'
module Puppet::Util::RDoc
module_function
# launch an rdoc documentation process
# with the files/dir passed in +files+
def rdoc(outputdir, files, charset = nil)
Puppet[:ignoreimport] = true
# then rdoc
require 'rdoc/rdoc'
require 'rdoc/options'
# load our parser
require 'puppet/util/rdoc/parser'
r = RDoc::RDoc.new
if Puppet.features.rdoc1?
RDoc::RDoc::GENERATORS["puppet"] = RDoc::RDoc::Generator.new(
"puppet/util/rdoc/generators/puppet_generator.rb",
:PuppetGenerator,
"puppet"
)
end
# specify our own format & where to output
options = [ "--fmt", "puppet",
"--quiet",
"--exclude", "/modules/[^/]*/spec/.*$",
"--exclude", "/modules/[^/]*/files/.*$",
"--exclude", "/modules/[^/]*/tests/.*$",
"--exclude", "/modules/[^/]*/templates/.*$",
"--op", outputdir ]
if !Puppet.features.rdoc1? || ::Options::OptionList.options.any? { |o| o[0] == "--force-update" } # Options is a root object in the rdoc1 namespace...
options << "--force-update"
end
options += [ "--charset", charset] if charset
+ # RDoc's root default is Dir.pwd, but the win32-dir gem monkey patches Dir.pwd
+ # replacing Ruby's normal / with \. When RDoc generates relative paths it
+ # uses relative_path_from that will generate errors when the slashes don't
+ # properly match. This is a workaround for that issue.
+ if Puppet.features.microsoft_windows? && RDoc::VERSION !~ /^[0-3]\./
+ options += [ "--root", Dir.pwd.gsub(/\\/, '/')]
+ end
options += files
# launch the documentation process
r.document(options)
end
# output manifest documentation to the console
def manifestdoc(files)
Puppet[:ignoreimport] = true
files.select { |f| FileTest.file?(f) }.each do |f|
- parser = Puppet::Parser::Parser.new(Puppet.lookup(:environments).get(Puppet[:environment]))
+ parser = Puppet::Parser::Parser.new(Puppet.lookup(:current_environment))
parser.file = f
ast = parser.parse
output(f, ast)
end
end
# Outputs the documentation of a manifest
# to the console
def output(file, ast)
astobj = []
ast.instantiate('').each do |resource_type|
astobj << resource_type if resource_type.file == file
end
astobj.sort! {|a,b| a.line <=> b.line }.each do |k|
output_astnode_doc(k)
end
end
def output_astnode_doc(ast)
puts ast.doc if !ast.doc.nil? and !ast.doc.empty?
if Puppet.settings[:document_all]
# scan each underlying resources to produce documentation
code = ast.code.children if ast.code.is_a?(Puppet::Parser::AST::ASTArray)
code ||= ast.code
output_resource_doc(code) unless code.nil?
end
end
def output_resource_doc(code)
code.sort { |a,b| a.line <=> b.line }.each do |stmt|
output_resource_doc(stmt.children) if stmt.is_a?(Puppet::Parser::AST::ASTArray)
if stmt.is_a?(Puppet::Parser::AST::Resource)
puts stmt.doc if !stmt.doc.nil? and !stmt.doc.empty?
end
end
end
end
diff --git a/lib/puppet/util/rdoc/parser/puppet_parser_core.rb b/lib/puppet/util/rdoc/parser/puppet_parser_core.rb
index baff91e98..4d29cc127 100644
--- a/lib/puppet/util/rdoc/parser/puppet_parser_core.rb
+++ b/lib/puppet/util/rdoc/parser/puppet_parser_core.rb
@@ -1,477 +1,477 @@
# Functionality common to both our RDoc version 1 and 2 parsers.
module RDoc::PuppetParserCore
SITE = "__site__"
def self.included(base)
base.class_eval do
attr_accessor :input_file_name, :top_level
# parser registration into RDoc
parse_files_matching(/\.(rb|pp)$/)
end
end
# called with the top level file
def initialize(top_level, file_name, body, options, stats)
@options = options
@stats = stats
@input_file_name = file_name
@top_level = top_level
@top_level.extend(RDoc::PuppetTopLevel)
@progress = $stderr unless options.quiet
end
# main entry point
def scan
- environment = Puppet.lookup(:environments).get(Puppet[:environment])
+ environment = Puppet.lookup(:current_environment)
known_resource_types = environment.known_resource_types
unless known_resource_types.watching_file?(@input_file_name)
Puppet.info "rdoc: scanning #{@input_file_name}"
if @input_file_name =~ /\.pp$/
@parser = Puppet::Parser::Parser.new(environment)
@parser.file = @input_file_name
@parser.parse.instantiate('').each do |type|
known_resource_types.add type
end
end
end
scan_top_level(@top_level, environment)
@top_level
end
# Due to a bug in RDoc, we need to roll our own find_module_named
# The issue is that RDoc tries harder by asking the parent for a class/module
# of the name. But by doing so, it can mistakenly use a module of the same name
# from which we do not descend.
def find_object_named(container, name)
return container if container.name == name
container.each_classmodule do |m|
return m if m.name == name
end
nil
end
# walk down the namespace and lookup/create container as needed
def get_class_or_module(container, name)
# class ::A -> A is in the top level
if name =~ /^::/
container = @top_level
end
names = name.split('::')
final_name = names.pop
names.each do |name|
prev_container = container
container = find_object_named(container, name)
container ||= prev_container.add_class(RDoc::PuppetClass, name, nil)
end
[container, final_name]
end
# split_module tries to find if +path+ belongs to the module path
# if it does, it returns the module name, otherwise if we are sure
# it is part of the global manifest path, "__site__" is returned.
# And finally if this path couldn't be mapped anywhere, nil is returned.
def split_module(path, environment)
# find a module
fullpath = File.expand_path(path)
Puppet.debug "rdoc: testing #{fullpath}"
if fullpath =~ /(.*)\/([^\/]+)\/(?:manifests|plugins|lib)\/.+\.(pp|rb)$/
modpath = $1
name = $2
Puppet.debug "rdoc: module #{name} into #{modpath} ?"
environment.modulepath.each do |mp|
if File.identical?(modpath,mp)
Puppet.debug "rdoc: found module #{name}"
return name
end
end
end
if fullpath =~ /\.(pp|rb)$/
# there can be paths we don't want to scan under modules
# imagine a ruby or manifest that would be distributed as part of a module
# but we don't want those to be hosted under <site>
environment.modulepath.each do |mp|
# check that fullpath is a descendant of mp
dirname = fullpath
previous = dirname
while (dirname = File.dirname(previous)) != previous
previous = dirname
return nil if File.identical?(dirname,mp)
end
end
end
# we are under the global manifests
Puppet.debug "rdoc: global manifests"
SITE
end
# create documentation for the top level +container+
def scan_top_level(container, environment)
# use the module README as documentation for the module
comment = ""
%w{README README.rdoc}.each do |rfile|
readme = File.join(File.dirname(File.dirname(@input_file_name)), rfile)
comment = File.open(readme,"r") { |f| f.read } if FileTest.readable?(readme)
end
look_for_directives_in(container, comment) unless comment.empty?
# infer module name from directory
name = split_module(@input_file_name, environment)
if name.nil?
# skip .pp files that are not in manifests directories as we can't guarantee they're part
# of a module or the global configuration.
container.document_self = false
return
end
Puppet.debug "rdoc: scanning for #{name}"
container.module_name = name
container.global=true if name == SITE
container, name = get_class_or_module(container,name)
mod = container.add_module(RDoc::PuppetModule, name)
mod.record_location(@top_level)
mod.add_comment(comment, @input_file_name)
if @input_file_name =~ /\.pp$/
parse_elements(mod, environment.known_resource_types)
elsif @input_file_name =~ /\.rb$/
parse_plugins(mod)
end
end
# create documentation for include statements we can find in +code+
# and associate it with +container+
def scan_for_include_or_require(container, code)
code = [code] unless code.is_a?(Array)
code.each do |stmt|
scan_for_include_or_require(container,stmt.children) if stmt.is_a?(Puppet::Parser::AST::BlockExpression)
if stmt.is_a?(Puppet::Parser::AST::Function) and ['include','require'].include?(stmt.name)
stmt.arguments.each do |included|
Puppet.debug "found #{stmt.name}: #{included}"
container.send("add_#{stmt.name}", RDoc::Include.new(included.to_s, stmt.doc))
end
end
end
end
# create documentation for realize statements we can find in +code+
# and associate it with +container+
def scan_for_realize(container, code)
code = [code] unless code.is_a?(Array)
code.each do |stmt|
scan_for_realize(container,stmt.children) if stmt.is_a?(Puppet::Parser::AST::BlockExpression)
if stmt.is_a?(Puppet::Parser::AST::Function) and stmt.name == 'realize'
stmt.arguments.each do |realized|
Puppet.debug "found #{stmt.name}: #{realized}"
container.add_realize( RDoc::Include.new(realized.to_s, stmt.doc))
end
end
end
end
# create documentation for global variable assignments we can find in +code+
# and associate it with +container+
def scan_for_vardef(container, code)
code = [code] unless code.is_a?(Array)
code.each do |stmt|
scan_for_vardef(container,stmt.children) if stmt.is_a?(Puppet::Parser::AST::BlockExpression)
if stmt.is_a?(Puppet::Parser::AST::VarDef)
Puppet.debug "rdoc: found constant: #{stmt.name} = #{stmt.value}"
container.add_constant(RDoc::Constant.new(stmt.name.to_s, stmt.value.to_s, stmt.doc))
end
end
end
# create documentation for resources we can find in +code+
# and associate it with +container+
def scan_for_resource(container, code)
code = [code] unless code.is_a?(Array)
code.each do |stmt|
scan_for_resource(container,stmt.children) if stmt.is_a?(Puppet::Parser::AST::BlockExpression)
if stmt.is_a?(Puppet::Parser::AST::Resource) and !stmt.type.nil?
begin
type = stmt.type.split("::").collect { |s| s.capitalize }.join("::")
stmt.instances.each do |inst|
title = inst.title.is_a?(Puppet::Parser::AST::ASTArray) ? inst.title.to_s.gsub(/\[(.*)\]/,'\1') : inst.title.to_s
Puppet.debug "rdoc: found resource: #{type}[#{title}]"
param = []
inst.parameters.children.each do |p|
res = {}
res["name"] = p.param
res["value"] = "#{p.value.to_s}" unless p.value.nil?
param << res
end
container.add_resource(RDoc::PuppetResource.new(type, title, stmt.doc, param))
end
rescue => detail
raise Puppet::ParseError, "impossible to parse resource in #{stmt.file} at line #{stmt.line}: #{detail}", detail.backtrace
end
end
end
end
# create documentation for a class named +name+
def document_class(name, klass, container)
Puppet.debug "rdoc: found new class #{name}"
container, name = get_class_or_module(container, name)
superclass = klass.parent
superclass = "" if superclass.nil? or superclass.empty?
comment = klass.doc
look_for_directives_in(container, comment) unless comment.empty?
cls = container.add_class(RDoc::PuppetClass, name, superclass)
# it is possible we already encountered this class while parsing namespaces
# from other classes in other files. At that time we couldn't know this
# class's superclass; now we do, so force it.
cls.superclass = superclass
cls.record_location(@top_level)
# scan class code for include
code = klass.code.children if klass.code.is_a?(Puppet::Parser::AST::BlockExpression)
code ||= klass.code
unless code.nil?
scan_for_include_or_require(cls, code)
scan_for_realize(cls, code)
scan_for_resource(cls, code) if Puppet.settings[:document_all]
end
cls.add_comment(comment, klass.file)
rescue => detail
raise Puppet::ParseError, "impossible to parse class '#{name}' in #{klass.file} at line #{klass.line}: #{detail}", detail.backtrace
end
# create documentation for a node
def document_node(name, node, container)
Puppet.debug "rdoc: found new node #{name}"
superclass = node.parent
superclass = "" if superclass.nil? or superclass.empty?
comment = node.doc
look_for_directives_in(container, comment) unless comment.empty?
n = container.add_node(name, superclass)
n.record_location(@top_level)
code = node.code.children if node.code.is_a?(Puppet::Parser::AST::BlockExpression)
code ||= node.code
unless code.nil?
scan_for_include_or_require(n, code)
scan_for_realize(n, code)
scan_for_vardef(n, code)
scan_for_resource(n, code) if Puppet.settings[:document_all]
end
n.add_comment(comment, node.file)
rescue => detail
raise Puppet::ParseError, "impossible to parse node '#{name}' in #{node.file} at line #{node.line}: #{detail}", detail.backtrace
end
# create documentation for a define
def document_define(name, define, container)
Puppet.debug "rdoc: found new definition #{name}"
# find superclass if any
# find the parent
# split define name by :: to find the complete module hierarchy
container, name = get_class_or_module(container,name)
# build up declaration
declaration = ""
define.arguments.each do |arg,value|
declaration << "\$#{arg}"
unless value.nil?
declaration << " => "
case value
when Puppet::Parser::AST::Leaf
declaration << "'#{value.value}'"
when Puppet::Parser::AST::BlockExpression
declaration << "[#{value.children.collect { |v| "'#{v}'" }.join(", ")}]"
else
declaration << "#{value.to_s}"
end
end
declaration << ", "
end
declaration.chop!.chop! if declaration.size > 1
# register method into the container
meth = RDoc::AnyMethod.new(declaration, name)
meth.comment = define.doc
container.add_method(meth)
look_for_directives_in(container, meth.comment) unless meth.comment.empty?
meth.params = "( #{declaration} )"
meth.visibility = :public
meth.document_self = true
meth.singleton = false
rescue => detail
raise Puppet::ParseError, "impossible to parse definition '#{name}' in #{define.file} at line #{define.line}: #{detail}", detail.backtrace
end
# Traverse the AST tree and produce code-object nodes
# that contain the documentation
def parse_elements(container, known_resource_types)
Puppet.debug "rdoc: scanning manifest"
known_resource_types.hostclasses.values.sort { |a,b| a.name <=> b.name }.each do |klass|
name = klass.name
if klass.file == @input_file_name
unless name.empty?
document_class(name,klass,container)
else # on main class document vardefs
code = klass.code.children if klass.code.is_a?(Puppet::Parser::AST::BlockExpression)
code ||= klass.code
scan_for_vardef(container, code) unless code.nil?
end
end
end
known_resource_types.definitions.each do |name, define|
if define.file == @input_file_name
document_define(name,define,container)
end
end
known_resource_types.nodes.each do |name, node|
if node.file == @input_file_name
document_node(name.to_s,node,container)
end
end
end
# create documentation for plugins
def parse_plugins(container)
Puppet.debug "rdoc: scanning plugin or fact"
if @input_file_name =~ /\/facter\/[^\/]+\.rb$/
parse_fact(container)
else
parse_puppet_plugin(container)
end
end
# this is a poor man's custom fact parser :-)
def parse_fact(container)
comments = ""
current_fact = nil
parsed_facts = []
File.open(@input_file_name) do |of|
of.each do |line|
# fetch comments
if line =~ /^[ \t]*# ?(.*)$/
comments += $1 + "\n"
elsif line =~ /^[ \t]*Facter.add\(['"](.*?)['"]\)/
current_fact = RDoc::Fact.new($1,{})
look_for_directives_in(container, comments) unless comments.empty?
current_fact.comment = comments
parsed_facts << current_fact
comments = ""
Puppet.debug "rdoc: found custom fact #{current_fact.name}"
elsif line =~ /^[ \t]*confine[ \t]*:(.*?)[ \t]*=>[ \t]*(.*)$/
current_fact.confine = { :type => $1, :value => $2 } unless current_fact.nil?
else # unknown line type
comments =""
end
end
end
parsed_facts.each do |f|
container.add_fact(f)
f.record_location(@top_level)
end
end
# this is a poor man's puppet plugin parser :-)
# it doesn't extract doc or desc :-(
def parse_puppet_plugin(container)
comments = ""
current_plugin = nil
File.open(@input_file_name) do |of|
of.each do |line|
# fetch comments
if line =~ /^[ \t]*# ?(.*)$/
comments += $1 + "\n"
elsif line =~ /^[ \t]*(?:Puppet::Parser::Functions::)?newfunction[ \t]*\([ \t]*:(.*?)[ \t]*,[ \t]*:type[ \t]*=>[ \t]*(:rvalue|:lvalue)/
current_plugin = RDoc::Plugin.new($1, "function")
look_for_directives_in(container, comments) unless comments.empty?
current_plugin.comment = comments
current_plugin.record_location(@top_level)
container.add_plugin(current_plugin)
comments = ""
Puppet.debug "rdoc: found new function plugins #{current_plugin.name}"
elsif line =~ /^[ \t]*Puppet::Type.newtype[ \t]*\([ \t]*:(.*?)\)/
current_plugin = RDoc::Plugin.new($1, "type")
look_for_directives_in(container, comments) unless comments.empty?
current_plugin.comment = comments
current_plugin.record_location(@top_level)
container.add_plugin(current_plugin)
comments = ""
Puppet.debug "rdoc: found new type plugins #{current_plugin.name}"
elsif line =~ /module Puppet::Parser::Functions/
# skip
else # unknown line type
comments =""
end
end
end
end
# New instance of the appropriate PreProcess for our RDoc version.
def create_rdoc_preprocess
raise(NotImplementedError, "This method must be overwritten for whichever version of RDoc this parser is working with")
end
# look_for_directives_in scans the current +comment+ for RDoc directives
def look_for_directives_in(context, comment)
preprocess = create_rdoc_preprocess
preprocess.handle(comment) do |directive, param|
case directive
when "stopdoc"
context.stop_doc
""
when "startdoc"
context.start_doc
context.force_documentation = true
""
when "enddoc"
#context.done_documenting = true
#""
throw :enddoc
when "main"
options = Options.instance
options.main_page = param
""
when "title"
options = Options.instance
options.title = param
""
when "section"
context.set_current_section(param, comment)
comment.replace("") # 1.8 doesn't support #clear
break
else
warn "Unrecognized directive '#{directive}'"
break
end
end
remove_private_comments(comment)
end
def remove_private_comments(comment)
comment.gsub!(/^#--.*?^#\+\+/m, '')
comment.sub!(/^#--.*/m, '')
end
end
diff --git a/lib/puppet/util/suidmanager.rb b/lib/puppet/util/suidmanager.rb
index 481e3864c..b4d77c7cd 100644
--- a/lib/puppet/util/suidmanager.rb
+++ b/lib/puppet/util/suidmanager.rb
@@ -1,198 +1,191 @@
require 'facter'
require 'puppet/util/warnings'
require 'forwardable'
require 'etc'
module Puppet::Util::SUIDManager
include Puppet::Util::Warnings
extend Forwardable
# Note groups= is handled specially due to a bug in OS X 10.6, 10.7,
# and probably upcoming releases...
to_delegate_to_process = [ :euid=, :euid, :egid=, :egid, :uid=, :uid, :gid=, :gid, :groups ]
to_delegate_to_process.each do |method|
def_delegator Process, method
module_function method
end
def osx_maj_ver
return @osx_maj_ver unless @osx_maj_ver.nil?
- # 'kernel' is available without explicitly loading all facts
- if Facter.value('kernel') != 'Darwin'
- @osx_maj_ver = false
- return @osx_maj_ver
- end
- # But 'macosx_productversion_major' requires it.
- Facter.loadfacts
- @osx_maj_ver = Facter.value('macosx_productversion_major')
+ @osx_maj_ver = Facter.value('macosx_productversion_major') || false
end
module_function :osx_maj_ver
def groups=(grouplist)
begin
return Process.groups = grouplist
rescue Errno::EINVAL => e
# We catch Errno::EINVAL as some operating systems (OS X in particular) can
# cause trouble when using Process#groups= to change *this* user / process
# list of supplementary groups membership. This is done via Ruby's function
# "static VALUE proc_setgroups(VALUE obj, VALUE ary)" which is effectively
# a wrapper for "int setgroups(size_t size, const gid_t *list)" (part of SVr4
# and 4.3BSD but not in POSIX.1-2001) that fails and sets errno to EINVAL.
#
# This does not appear to be a problem with Ruby but rather an issue on the
# operating system side. Therefore we catch the exception and check whether
# we are running under OS X -- if so, we acknowledge the problem and carry on;
# otherwise we re-raise the exception.
if osx_maj_ver and not osx_maj_ver.empty?
return true
else
raise e
end
end
end
module_function :groups=
def self.root?
return Process.uid == 0 unless Puppet.features.microsoft_windows?
require 'puppet/util/windows/user'
Puppet::Util::Windows::User.admin?
end
# Methods to handle changing uid/gid of the running process. In general,
# these will noop or fail on Windows, and require root to change to anything
# but the current uid/gid (which is a noop).
# Runs the block, setting euid and egid if provided, then restores the original ids.
# If running on Windows or without root, the block will be run with the
# current euid/egid.
def asuser(new_uid=nil, new_gid=nil)
return yield if Puppet.features.microsoft_windows?
return yield unless root?
return yield unless new_uid or new_gid
old_euid, old_egid = self.euid, self.egid
begin
change_privileges(new_uid, new_gid, false)
yield
ensure
change_privileges(new_uid ? old_euid : nil, old_egid, false)
end
end
module_function :asuser
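# Editorial sketch, not part of this patch; the uid/gid below are hypothetical.
# On Windows, or when not running as root, the block simply runs with the
# current credentials, as noted above.
#
# @example Running a block as another user
#   Puppet::Util::SUIDManager.asuser(1000, 1000) do
#     File.open('/tmp/example.txt', 'w') { |f| f.puts Process.euid }
#   end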
# If `permanently` is set, will permanently change the uid/gid of the
# process. If not, it will only set the euid/egid. If only uid is supplied,
# the primary group of the supplied gid will be used. If only gid is
# supplied, only gid will be changed. This method will fail if used on
# Windows.
def change_privileges(uid=nil, gid=nil, permanently=false)
return unless uid or gid
unless gid
uid = convert_xid(:uid, uid)
gid = Etc.getpwuid(uid).gid
end
change_group(gid, permanently)
change_user(uid, permanently) if uid
end
module_function :change_privileges
# Changes the egid of the process if `permanently` is not set, otherwise
# changes gid. This method will fail if used on Windows, or attempting to
# change to a different gid without root.
def change_group(group, permanently=false)
gid = convert_xid(:gid, group)
raise Puppet::Error, "No such group #{group}" unless gid
if permanently
Process::GID.change_privilege(gid)
else
Process.egid = gid
end
end
module_function :change_group
# As change_group, but operates on uids. If changing user permanently,
# supplementary groups will be set to the default groups for the new uid.
def change_user(user, permanently=false)
uid = convert_xid(:uid, user)
raise Puppet::Error, "No such user #{user}" unless uid
if permanently
# If changing uid, we must be root. So initgroups first here.
initgroups(uid)
Process::UID.change_privilege(uid)
else
# We must be root to initgroups, so initgroups before dropping euid if
# we're root, otherwise elevate euid before initgroups.
# change euid (to root) first.
if Process.euid == 0
initgroups(uid)
Process.euid = uid
else
Process.euid = uid
initgroups(uid)
end
end
end
module_function :change_user
# Make sure the passed argument is a number.
def convert_xid(type, id)
map = {:gid => :group, :uid => :user}
raise ArgumentError, "Invalid id type #{type}" unless map.include?(type)
ret = Puppet::Util.send(type, id)
if ret == nil
raise Puppet::Error, "Invalid #{map[type]}: #{id}"
end
ret
end
module_function :convert_xid
# Initialize primary and supplemental groups to those of the target user. We
# take the UID and manually look up their details in the system database,
# including username and primary group. This method will fail on Windows, or
# if used without root to initgroups of another user.
def initgroups(uid)
pwent = Etc.getpwuid(uid)
Process.initgroups(pwent.name, pwent.gid)
end
module_function :initgroups
# Run a command and capture the output
# Parameters:
# [command] the command to execute
# [new_uid] (optional) a userid to run the command as
# [new_gid] (optional) a groupid to run the command as
# [options] (optional, defaults to {}) a hash of option key/value pairs; currently supported:
# :override_locale (defaults to true) a flag indicating whether or not puppet should temporarily override the
# system locale for the duration of the command. If true, the locale will be set to 'C' to ensure consistent
# output / formatting from the command, which makes it much easier to parse the output. If false, the system
# locale will be respected.
# :custom_environment (default {}) -- a hash of key/value pairs to set as environment variables for the duration
# of the command
def run_and_capture(command, new_uid=nil, new_gid=nil, options = {})
Puppet.deprecation_warning("Puppet::Util::SUIDManager.run_and_capture is deprecated; please use Puppet::Util::Execution.execute instead.")
# specifying these here rather than in the method signature to allow callers to pass in a partial
# set of overrides without affecting the default values for options that they don't pass in
default_options = {
:override_locale => true,
:custom_environment => {},
}
options = default_options.merge(options)
output = Puppet::Util::Execution.execute(command, :failonfail => false, :combine => true,
:uid => new_uid, :gid => new_gid,
:override_locale => options[:override_locale],
:custom_environment => options[:custom_environment])
[output, $CHILD_STATUS.dup]
end
module_function :run_and_capture
end
diff --git a/lib/puppet/util/windows.rb b/lib/puppet/util/windows.rb
index af8a36995..24aec4934 100644
--- a/lib/puppet/util/windows.rb
+++ b/lib/puppet/util/windows.rb
@@ -1,17 +1,28 @@
module Puppet::Util::Windows
+ module ADSI
+ class User; end
+ class UserProfile; end
+ class Group; end
+ end
+ module Registry
+ end
+
if Puppet::Util::Platform.windows?
# these reference platform specific gems
+ require 'puppet/util/windows/api_types'
+ require 'puppet/util/windows/string'
require 'puppet/util/windows/error'
+ require 'puppet/util/windows/com'
require 'puppet/util/windows/sid'
+ require 'puppet/util/windows/file'
require 'puppet/util/windows/security'
require 'puppet/util/windows/user'
require 'puppet/util/windows/process'
- require 'puppet/util/windows/string'
- require 'puppet/util/windows/file'
require 'puppet/util/windows/root_certs'
require 'puppet/util/windows/access_control_entry'
require 'puppet/util/windows/access_control_list'
require 'puppet/util/windows/security_descriptor'
+ require 'puppet/util/windows/adsi'
+ require 'puppet/util/windows/registry'
end
- require 'puppet/util/windows/registry'
end
diff --git a/lib/puppet/util/windows/access_control_list.rb b/lib/puppet/util/windows/access_control_list.rb
index 88e635ea2..ce0e8e6bd 100644
--- a/lib/puppet/util/windows/access_control_list.rb
+++ b/lib/puppet/util/windows/access_control_list.rb
@@ -1,113 +1,113 @@
# Windows Access Control List
#
# Represents a list of access control entries (ACEs).
#
# @see http://msdn.microsoft.com/en-us/library/windows/desktop/aa374872(v=vs.85).aspx
# @api private
class Puppet::Util::Windows::AccessControlList
include Enumerable
ACCESS_ALLOWED_ACE_TYPE = 0x0
ACCESS_DENIED_ACE_TYPE = 0x1
# Construct an ACL.
#
# @param acl [Enumerable] A list of aces to copy from.
def initialize(acl = nil)
if acl
@aces = acl.map(&:dup)
else
@aces = []
end
end
# Enumerate each ACE in the list.
#
# @yieldparam ace [Hash] the ace
def each
@aces.each {|ace| yield ace}
end
# Allow the +sid+ to access a resource with the specified access +mask+.
#
# @param sid [String] The SID that the ACE is granting access to
# @param mask [int] The access mask granted to the SID
# @param flags [int] The flags assigned to the ACE, e.g. +INHERIT_ONLY_ACE+
def allow(sid, mask, flags = 0)
@aces << Puppet::Util::Windows::AccessControlEntry.new(sid, mask, flags, ACCESS_ALLOWED_ACE_TYPE)
end
# Deny the +sid+ access to a resource with the specified access +mask+.
#
# @param sid [String] The SID that the ACE is denying access to
# @param mask [int] The access mask denied to the SID
# @param flags [int] The flags assigned to the ACE, e.g. +INHERIT_ONLY_ACE+
def deny(sid, mask, flags = 0)
@aces << Puppet::Util::Windows::AccessControlEntry.new(sid, mask, flags, ACCESS_DENIED_ACE_TYPE)
end
# Reassign all ACEs currently assigned to +old_sid+ to +new_sid+ instead.
# If an ACE is inherited or is not assigned to +old_sid+, then it will
# be copied as-is to the new ACL, preserving its order within the ACL.
#
# @param old_sid [String] The old SID, e.g. 'S-1-5-18'
# @param new_sid [String] The new SID
# @return [AccessControlList] The copied ACL.
def reassign!(old_sid, new_sid)
new_aces = []
prepend_needed = false
aces_to_prepend = []
@aces.each do |ace|
new_ace = ace.dup
if ace.sid == old_sid
if ace.inherited?
# create an explicit ACE granting or denying the
# new_sid the rights that the inherited ACE
# granted or denied the old_sid. We mask off all
# flags except those affecting inheritance of the
# ACE we're creating.
- inherit_mask = Windows::Security::CONTAINER_INHERIT_ACE |
- Windows::Security::OBJECT_INHERIT_ACE |
- Windows::Security::INHERIT_ONLY_ACE
+ inherit_mask = Puppet::Util::Windows::AccessControlEntry::CONTAINER_INHERIT_ACE |
+ Puppet::Util::Windows::AccessControlEntry::OBJECT_INHERIT_ACE |
+ Puppet::Util::Windows::AccessControlEntry::INHERIT_ONLY_ACE
explicit_ace = Puppet::Util::Windows::AccessControlEntry.new(new_sid, ace.mask, ace.flags & inherit_mask, ace.type)
aces_to_prepend << explicit_ace
else
new_ace.sid = new_sid
prepend_needed = old_sid == Win32::Security::SID::LocalSystem
end
end
new_aces << new_ace
end
@aces = []
if prepend_needed
- mask = Windows::Security::STANDARD_RIGHTS_ALL | Windows::Security::SPECIFIC_RIGHTS_ALL
+ mask = Puppet::Util::Windows::File::STANDARD_RIGHTS_ALL | Puppet::Util::Windows::File::SPECIFIC_RIGHTS_ALL
ace = Puppet::Util::Windows::AccessControlEntry.new(
Win32::Security::SID::LocalSystem,
mask)
@aces << ace
end
@aces.concat(aces_to_prepend)
@aces.concat(new_aces)
end
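# Editorial sketch, not part of this patch: the SIDs and mask below are
# well-known values used purely for illustration (BUILTIN\Users and the
# built-in Administrators group, with a full-control style mask).
#
# @example Reassigning ACEs from one SID to another
#   acl = Puppet::Util::Windows::AccessControlList.new
#   acl.allow('S-1-5-32-545', 0x1F01FF)           # full control for BUILTIN\Users
#   acl.reassign!('S-1-5-32-545', 'S-1-5-32-544') # hand those ACEs to Administrators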
def inspect
str = ""
@aces.each do |ace|
str << " #{ace.inspect}\n"
end
str
end
def ==(other)
self.class == other.class &&
self.to_a == other.to_a
end
alias eql? ==
end
diff --git a/lib/puppet/util/adsi.rb b/lib/puppet/util/windows/adsi.rb
similarity index 60%
rename from lib/puppet/util/adsi.rb
rename to lib/puppet/util/windows/adsi.rb
index 4c30e18c2..041be7765 100644
--- a/lib/puppet/util/adsi.rb
+++ b/lib/puppet/util/windows/adsi.rb
@@ -1,368 +1,430 @@
-module Puppet::Util::ADSI
+module Puppet::Util::Windows::ADSI
+ require 'ffi'
+
class << self
+ extend FFI::Library
+
def connectable?(uri)
begin
!! connect(uri)
rescue
false
end
end
def connect(uri)
begin
WIN32OLE.connect(uri)
rescue Exception => e
raise Puppet::Error.new( "ADSI connection error: #{e}", e )
end
end
def create(name, resource_type)
- Puppet::Util::ADSI.connect(computer_uri).Create(resource_type, name)
+ Puppet::Util::Windows::ADSI.connect(computer_uri).Create(resource_type, name)
end
def delete(name, resource_type)
- Puppet::Util::ADSI.connect(computer_uri).Delete(resource_type, name)
+ Puppet::Util::Windows::ADSI.connect(computer_uri).Delete(resource_type, name)
end
+ # taken from winbase.h
+ MAX_COMPUTERNAME_LENGTH = 31
+
def computer_name
unless @computer_name
- buf = " " * 128
- Win32API.new('kernel32', 'GetComputerName', ['P','P'], 'I').call(buf, buf.length.to_s)
- @computer_name = buf.unpack("A*")[0]
+ max_length = MAX_COMPUTERNAME_LENGTH + 1 # NULL terminated
+ FFI::MemoryPointer.new(max_length * 2) do |buffer| # wide string
+ FFI::MemoryPointer.new(:dword, 1) do |buffer_size|
+ buffer_size.write_dword(max_length) # length in TCHARs
+
+ if GetComputerNameW(buffer, buffer_size) == FFI::WIN32_FALSE
+ raise Puppet::Util::Windows::Error.new("Failed to get computer name")
+ end
+ @computer_name = buffer.read_wide_string(buffer_size.read_dword)
+ end
+ end
end
@computer_name
end
def computer_uri(host = '.')
"WinNT://#{host}"
end
def wmi_resource_uri( host = '.' )
"winmgmts:{impersonationLevel=impersonate}!//#{host}/root/cimv2"
end
# @api private
def sid_uri_safe(sid)
return sid_uri(sid) if sid.kind_of?(Win32::Security::SID)
begin
sid = Win32::Security::SID.new(Win32::Security::SID.string_to_sid(sid))
sid_uri(sid)
- rescue Win32::Security::SID::Error
- return nil
+ rescue SystemCallError
+ nil
end
end
def sid_uri(sid)
raise Puppet::Error.new( "Must use a valid SID object" ) if !sid.kind_of?(Win32::Security::SID)
"WinNT://#{sid.to_s}"
end
def uri(resource_name, resource_type, host = '.')
"#{computer_uri(host)}/#{resource_name},#{resource_type}"
end
def wmi_connection
connect(wmi_resource_uri)
end
def execquery(query)
wmi_connection.execquery(query)
end
def sid_for_account(name)
- Puppet.deprecation_warning "Puppet::Util::ADSI.sid_for_account is deprecated and will be removed in 3.0, use Puppet::Util::Windows::SID.name_to_sid instead."
+ Puppet.deprecation_warning "Puppet::Util::Windows::ADSI.sid_for_account is deprecated and will be removed in 3.0, use Puppet::Util::Windows::SID.name_to_sid instead."
- Puppet::Util::Windows::Security.name_to_sid(name)
+ Puppet::Util::Windows::SID.name_to_sid(name)
end
+
+ ffi_convention :stdcall
+
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/ms724295(v=vs.85).aspx
+ # BOOL WINAPI GetComputerName(
+ # _Out_ LPTSTR lpBuffer,
+ # _Inout_ LPDWORD lpnSize
+ # );
+ ffi_lib :kernel32
+ attach_function_private :GetComputerNameW,
+ [:lpwstr, :lpdword], :win32_bool
end
class User
extend Enumerable
+ extend FFI::Library
attr_accessor :native_user
attr_reader :name, :sid
def initialize(name, native_user = nil)
@name = name
@native_user = native_user
end
def self.parse_name(name)
if name =~ /\//
raise Puppet::Error.new( "Value must be in DOMAIN\\user style syntax" )
end
matches = name.scan(/((.*)\\)?(.*)/)
domain = matches[0][1] || '.'
account = matches[0][2]
return account, domain
end
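# Editorial sketch, not part of this patch: parse_name splits an optional
# DOMAIN\ prefix from the account, defaulting the domain to '.' (the local
# machine); the names below are hypothetical.
#
# @example
#   Puppet::Util::Windows::ADSI::User.parse_name('MYDOMAIN\\bob')  #=> ["bob", "MYDOMAIN"]
#   Puppet::Util::Windows::ADSI::User.parse_name('bob')            #=> ["bob", "."]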
def native_user
- @native_user ||= Puppet::Util::ADSI.connect(self.class.uri(*self.class.parse_name(@name)))
+ @native_user ||= Puppet::Util::Windows::ADSI.connect(self.class.uri(*self.class.parse_name(@name)))
end
def sid
- @sid ||= Puppet::Util::Windows::Security.octet_string_to_sid_object(native_user.objectSID)
+ @sid ||= Puppet::Util::Windows::SID.octet_string_to_sid_object(native_user.objectSID)
end
def self.uri(name, host = '.')
- if sid_uri = Puppet::Util::ADSI.sid_uri_safe(name) then return sid_uri end
+ if sid_uri = Puppet::Util::Windows::ADSI.sid_uri_safe(name) then return sid_uri end
host = '.' if ['NT AUTHORITY', 'BUILTIN', Socket.gethostname].include?(host)
- Puppet::Util::ADSI.uri(name, 'user', host)
+ Puppet::Util::Windows::ADSI.uri(name, 'user', host)
end
def uri
self.class.uri(sid.account, sid.domain)
end
def self.logon(name, password)
Puppet::Util::Windows::User.password_is?(name, password)
end
def [](attribute)
native_user.Get(attribute)
end
def []=(attribute, value)
native_user.Put(attribute, value)
end
def commit
begin
native_user.SetInfo unless native_user.nil?
rescue Exception => e
raise Puppet::Error.new( "User update failed: #{e}", e )
end
self
end
def password_is?(password)
self.class.logon(name, password)
end
def add_flag(flag_name, value)
flag = native_user.Get(flag_name) rescue 0
native_user.Put(flag_name, flag | value)
commit
end
def password=(password)
native_user.SetPassword(password)
commit
fADS_UF_DONT_EXPIRE_PASSWD = 0x10000
add_flag("UserFlags", fADS_UF_DONT_EXPIRE_PASSWD)
end
def groups
# WIN32OLE objects aren't enumerable, so no map
groups = []
native_user.Groups.each {|g| groups << g.Name} rescue nil
groups
end
def add_to_groups(*group_names)
group_names.each do |group_name|
- Puppet::Util::ADSI::Group.new(group_name).add_member_sids(sid)
+ Puppet::Util::Windows::ADSI::Group.new(group_name).add_member_sids(sid)
end
end
alias add_to_group add_to_groups
def remove_from_groups(*group_names)
group_names.each do |group_name|
- Puppet::Util::ADSI::Group.new(group_name).remove_member_sids(sid)
+ Puppet::Util::Windows::ADSI::Group.new(group_name).remove_member_sids(sid)
end
end
alias remove_from_group remove_from_groups
def set_groups(desired_groups, minimum = true)
return if desired_groups.nil? or desired_groups.empty?
desired_groups = desired_groups.split(',').map(&:strip)
current_groups = self.groups
# First we add the user to all the groups it should be in but isn't
groups_to_add = desired_groups - current_groups
add_to_groups(*groups_to_add)
# Then we remove the user from all groups it is in but shouldn't be, if
# that's been requested
groups_to_remove = current_groups - desired_groups
remove_from_groups(*groups_to_remove) unless minimum
end
def self.create(name)
# Windows error 1379: The specified local group already exists.
- raise Puppet::Error.new( "Cannot create user if group '#{name}' exists." ) if Puppet::Util::ADSI::Group.exists? name
- new(name, Puppet::Util::ADSI.create(name, 'user'))
+ raise Puppet::Error.new( "Cannot create user if group '#{name}' exists." ) if Puppet::Util::Windows::ADSI::Group.exists? name
+ new(name, Puppet::Util::Windows::ADSI.create(name, 'user'))
+ end
+
+ # UNLEN from lmcons.h - http://stackoverflow.com/a/2155176
+ MAX_USERNAME_LENGTH = 256
+ def self.current_user_name
+ user_name = ''
+ max_length = MAX_USERNAME_LENGTH + 1 # NULL terminated
+ FFI::MemoryPointer.new(max_length * 2) do |buffer| # wide string
+ FFI::MemoryPointer.new(:dword, 1) do |buffer_size|
+ buffer_size.write_dword(max_length) # length in TCHARs
+
+ if GetUserNameW(buffer, buffer_size) == FFI::WIN32_FALSE
+ raise Puppet::Util::Windows::Error.new("Failed to get user name")
+ end
+ # buffer_size includes trailing NULL
+ user_name = buffer.read_wide_string(buffer_size.read_dword - 1)
+ end
+ end
+
+ user_name
end
def self.exists?(name)
- Puppet::Util::ADSI::connectable?(User.uri(*User.parse_name(name)))
+ Puppet::Util::Windows::ADSI::connectable?(User.uri(*User.parse_name(name)))
end
def self.delete(name)
- Puppet::Util::ADSI.delete(name, 'user')
+ Puppet::Util::Windows::ADSI.delete(name, 'user')
end
def self.each(&block)
- wql = Puppet::Util::ADSI.execquery('select name from win32_useraccount where localaccount = "TRUE"')
+ wql = Puppet::Util::Windows::ADSI.execquery('select name from win32_useraccount where localaccount = "TRUE"')
users = []
wql.each do |u|
users << new(u.name)
end
users.each(&block)
end
+
+ ffi_convention :stdcall
+
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/ms724432(v=vs.85).aspx
+ # BOOL WINAPI GetUserName(
+ # _Out_ LPTSTR lpBuffer,
+ # _Inout_ LPDWORD lpnSize
+ # );
+ ffi_lib :advapi32
+ attach_function_private :GetUserNameW,
+ [:lpwstr, :lpdword], :win32_bool
end
class UserProfile
def self.delete(sid)
begin
- Puppet::Util::ADSI.wmi_connection.Delete("Win32_UserProfile.SID='#{sid}'")
+ Puppet::Util::Windows::ADSI.wmi_connection.Delete("Win32_UserProfile.SID='#{sid}'")
rescue => e
# http://social.technet.microsoft.com/Forums/en/ITCG/thread/0f190051-ac96-4bf1-a47f-6b864bfacee5
# Prior to Vista SP1, there's no builtin way to programmatically
# delete user profiles (except for delprof.exe). So try to delete
# but warn if we fail
raise e unless e.message.include?('80041010')
Puppet.warning "Cannot delete user profile for '#{sid}' prior to Vista SP1"
end
end
end
class Group
extend Enumerable
attr_accessor :native_group
- attr_reader :name
+ attr_reader :name, :sid
def initialize(name, native_group = nil)
@name = name
@native_group = native_group
end
def uri
self.class.uri(name)
end
def self.uri(name, host = '.')
- if sid_uri = Puppet::Util::ADSI.sid_uri_safe(name) then return sid_uri end
+ if sid_uri = Puppet::Util::Windows::ADSI.sid_uri_safe(name) then return sid_uri end
- Puppet::Util::ADSI.uri(name, 'group', host)
+ Puppet::Util::Windows::ADSI.uri(name, 'group', host)
end
def native_group
- @native_group ||= Puppet::Util::ADSI.connect(uri)
+ @native_group ||= Puppet::Util::Windows::ADSI.connect(uri)
+ end
+
+ def sid
+ @sid ||= Puppet::Util::Windows::SID.octet_string_to_sid_object(native_group.objectSID)
end
def commit
begin
native_group.SetInfo unless native_group.nil?
rescue Exception => e
raise Puppet::Error.new( "Group update failed: #{e}", e )
end
self
end
def self.name_sid_hash(names)
return [] if names.nil? or names.empty?
sids = names.map do |name|
- sid = Puppet::Util::Windows::Security.name_to_sid_object(name)
+ sid = Puppet::Util::Windows::SID.name_to_sid_object(name)
raise Puppet::Error.new( "Could not resolve username: #{name}" ) if !sid
[sid.to_s, sid]
end
Hash[ sids ]
end
def add_members(*names)
- Puppet.deprecation_warning('Puppet::Util::ADSI::Group#add_members is deprecated; please use Puppet::Util::ADSI::Group#add_member_sids')
+ Puppet.deprecation_warning('Puppet::Util::Windows::ADSI::Group#add_members is deprecated; please use Puppet::Util::Windows::ADSI::Group#add_member_sids')
sids = self.class.name_sid_hash(names)
add_member_sids(*sids.values)
end
alias add_member add_members
def remove_members(*names)
- Puppet.deprecation_warning('Puppet::Util::ADSI::Group#remove_members is deprecated; please use Puppet::Util::ADSI::Group#remove_member_sids')
+ Puppet.deprecation_warning('Puppet::Util::Windows::ADSI::Group#remove_members is deprecated; please use Puppet::Util::Windows::ADSI::Group#remove_member_sids')
sids = self.class.name_sid_hash(names)
remove_member_sids(*sids.values)
end
alias remove_member remove_members
def add_member_sids(*sids)
sids.each do |sid|
- native_group.Add(Puppet::Util::ADSI.sid_uri(sid))
+ native_group.Add(Puppet::Util::Windows::ADSI.sid_uri(sid))
end
end
def remove_member_sids(*sids)
sids.each do |sid|
- native_group.Remove(Puppet::Util::ADSI.sid_uri(sid))
+ native_group.Remove(Puppet::Util::Windows::ADSI.sid_uri(sid))
end
end
def members
# WIN32OLE objects aren't enumerable, so no map
members = []
native_group.Members.each {|m| members << m.Name}
members
end
def member_sids
sids = []
native_group.Members.each do |m|
- sids << Puppet::Util::Windows::Security.octet_string_to_sid_object(m.objectSID)
+ sids << Puppet::Util::Windows::SID.octet_string_to_sid_object(m.objectSID)
end
sids
end
def set_members(desired_members)
return if desired_members.nil? or desired_members.empty?
current_hash = Hash[ self.member_sids.map { |sid| [sid.to_s, sid] } ]
desired_hash = self.class.name_sid_hash(desired_members)
# First we add all missing members
members_to_add = (desired_hash.keys - current_hash.keys).map { |sid| desired_hash[sid] }
add_member_sids(*members_to_add)
# Then we remove all extra members
members_to_remove = (current_hash.keys - desired_hash.keys).map { |sid| current_hash[sid] }
remove_member_sids(*members_to_remove)
end
def self.create(name)
# Windows error 2224: The account already exists.
- raise Puppet::Error.new( "Cannot create group if user '#{name}' exists." ) if Puppet::Util::ADSI::User.exists? name
- new(name, Puppet::Util::ADSI.create(name, 'group'))
+ raise Puppet::Error.new( "Cannot create group if user '#{name}' exists." ) if Puppet::Util::Windows::ADSI::User.exists? name
+ new(name, Puppet::Util::Windows::ADSI.create(name, 'group'))
end
def self.exists?(name)
- Puppet::Util::ADSI.connectable?(Group.uri(name))
+ Puppet::Util::Windows::ADSI.connectable?(Group.uri(name))
end
def self.delete(name)
- Puppet::Util::ADSI.delete(name, 'group')
+ Puppet::Util::Windows::ADSI.delete(name, 'group')
end
def self.each(&block)
- wql = Puppet::Util::ADSI.execquery( 'select name from win32_group where localaccount = "TRUE"' )
+ wql = Puppet::Util::Windows::ADSI.execquery( 'select name from win32_group where localaccount = "TRUE"' )
groups = []
wql.each do |g|
groups << new(g.name)
end
groups.each(&block)
end
end
end
diff --git a/lib/puppet/util/windows/api_types.rb b/lib/puppet/util/windows/api_types.rb
new file mode 100644
index 000000000..52645d879
--- /dev/null
+++ b/lib/puppet/util/windows/api_types.rb
@@ -0,0 +1,255 @@
+require 'ffi'
+require 'puppet/util/windows/string'
+
+module Puppet::Util::Windows::APITypes
+ module ::FFI
+ WIN32_FALSE = 0
+
+ # standard Win32 error codes
+ ERROR_SUCCESS = 0
+ end
+
+ module ::FFI::Library
+ # Wrapper method for attach_function + private
+ def attach_function_private(*args)
+ attach_function(*args)
+ private args[0]
+ end
+ end
+
+ class ::FFI::Pointer
+ NULL_HANDLE = 0
+ NULL_TERMINATOR_WCHAR = 0
+
+ def self.from_string_to_wide_string(str, &block)
+ str = Puppet::Util::Windows::String.wide_string(str)
+ FFI::MemoryPointer.new(:byte, str.bytesize) do |ptr|
+ # uchar here is synonymous with byte
+ ptr.put_array_of_uchar(0, str.bytes.to_a)
+
+ yield ptr
+ end
+
+ # ptr has already had free called, so nothing to return
+ nil
+ end
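+
+    # Illustrative usage sketch (SomeFunctionW is a hypothetical W-suffixed API):
+    #   FFI::Pointer.from_string_to_wide_string('some text') do |text_ptr|
+    #     SomeFunctionW(text_ptr)
+    #   end
+    # The pointer is only valid inside the block; the method returns nil.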
+
+ def read_win32_bool
+ # BOOL is always a 32-bit integer in Win32
+ # some Win32 APIs return 1 for true, while others are non-0
+ read_int32 != FFI::WIN32_FALSE
+ end
+
+ alias_method :read_dword, :read_uint32
+ alias_method :read_win32_ulong, :read_uint32
+
+ alias_method :read_hresult, :read_int32
+
+ def read_handle
+ type_size == 4 ? read_uint32 : read_uint64
+ end
+
+ alias_method :read_wchar, :read_uint16
+ alias_method :read_word, :read_uint16
+
+ def read_wide_string(char_length)
+ # char_length is number of wide chars (typically excluding NULLs), *not* bytes
+ str = get_bytes(0, char_length * 2).force_encoding('UTF-16LE')
+ str.encode(Encoding.default_external)
+ end
+
+ def read_arbitrary_wide_string_up_to(max_char_length = 512)
+ # max_char_length is number of wide chars (typically excluding NULLs), *not* bytes
+ # use a pointer to read one UTF-16LE char (2 bytes) at a time
+ wchar_ptr = FFI::Pointer.new(:wchar, address)
+
+ # iterate one wide char (2 bytes) at a time until a NULL terminator is found or max_char_length chars have been read

+ 0.upto(max_char_length - 1) do |i|
+ if wchar_ptr[i].read_wchar == NULL_TERMINATOR_WCHAR
+ return read_wide_string(i)
+ end
+ end
+
+ read_wide_string(max_char_length)
+ end
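+
+    # Illustrative usage sketch: after a Win32 W API has written a NULL-terminated
+    # UTF-16LE string into buffer_ptr, read it back with
+    #   buffer_ptr.read_arbitrary_wide_string_up_to(256)
+    # which returns the Ruby string up to the first NULL wide char (or 256 chars).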
+
+ def read_win32_local_pointer(&block)
+ ptr = nil
+ begin
+ ptr = read_pointer
+ yield ptr
+ ensure
+ if ptr && ! ptr.null?
+ if FFI::WIN32::LocalFree(ptr.address) != FFI::Pointer::NULL_HANDLE
+ Puppet.debug "LocalFree memory leak"
+ end
+ end
+ end
+
+ # ptr has already had LocalFree called, so nothing to return
+ nil
+ end
+
+ def read_com_memory_pointer(&block)
+ ptr = nil
+ begin
+ ptr = read_pointer
+ yield ptr
+ ensure
+ FFI::WIN32::CoTaskMemFree(ptr) if ptr && ! ptr.null?
+ end
+
+ # ptr has already had CoTaskMemFree called, so nothing to return
+ nil
+ end
+
+
+ alias_method :write_dword, :write_uint32
+ alias_method :write_word, :write_uint16
+ end
+
+ # FFI Types
+ # https://github.com/ffi/ffi/wiki/Types
+
+ # Windows - Common Data Types
+ # http://msdn.microsoft.com/en-us/library/cc230309.aspx
+
+ # Windows Data Types
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/aa383751(v=vs.85).aspx
+
+ FFI.typedef :uint16, :word
+ FFI.typedef :uint32, :dword
+ # uintptr_t is defined by FFI as platform specific: either
+ # ulong_long on x64 or just ulong on x86
+ FFI.typedef :uintptr_t, :handle
+ FFI.typedef :uintptr_t, :hwnd
+
+ # buffer_inout is similar to pointer (platform specific), but optimized for buffers
+ FFI.typedef :buffer_inout, :lpwstr
+ # buffer_in is similar to pointer (platform specific), but optimized for CONST read only buffers
+ FFI.typedef :buffer_in, :lpcwstr
+ FFI.typedef :buffer_in, :lpcolestr
+
+ # string is also similar to pointer, but should be used for const char *
+ # NOTE that this is not wide, useful only for A suffixed functions
+ FFI.typedef :string, :lpcstr
+
+ # pointer in FFI is platform specific
+ # NOTE: for API calls with reserved lpvoid parameters, pass a FFI::Pointer::NULL
+ FFI.typedef :pointer, :lpcvoid
+ FFI.typedef :pointer, :lpvoid
+ FFI.typedef :pointer, :lpword
+ FFI.typedef :pointer, :lpdword
+ FFI.typedef :pointer, :pdword
+ FFI.typedef :pointer, :phandle
+ FFI.typedef :pointer, :ulong_ptr
+ FFI.typedef :pointer, :pbool
+ FFI.typedef :pointer, :lpunknown
+
+ # any time LONG / ULONG appears in a Win32 API definition, do NOT use the
+ # platform-specific width that FFI uses by default;
+ # instead use the dedicated aliases below for these special cases
+ # NOTE: not a good idea to redefine FFI :ulong since other typedefs may rely on it
+ FFI.typedef :uint32, :win32_ulong
+ FFI.typedef :int32, :win32_long
+ # FFI bool can be only 1 byte at times, whereas
+ # Win32 BOOL is a signed int and is always 4 bytes, even on x64
+ # http://blogs.msdn.com/b/oldnewthing/archive/2011/03/28/10146459.aspx
+ FFI.typedef :int32, :win32_bool
+
+ # Same as a LONG, a 32-bit signed integer
+ FFI.typedef :int32, :hresult
+
+ # NOTE: FFI already defines (u)short as a 16-bit (un)signed like this:
+ # FFI.typedef :uint16, :ushort
+ # FFI.typedef :int16, :short
+
+ # 8 bits per byte
+ FFI.typedef :uchar, :byte
+ FFI.typedef :uint16, :wchar
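+
+  # Illustrative sketch of a binding written against the aliases above
+  # (GetTickCount is a real kernel32 API, but it is not attached anywhere in this file):
+  #   ffi_convention :stdcall
+  #   ffi_lib :kernel32
+  #   attach_function_private :GetTickCount, [], :dword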
+
+ module ::FFI::WIN32
+ extend ::FFI::Library
+
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/aa373931(v=vs.85).aspx
+ # typedef struct _GUID {
+ # DWORD Data1;
+ # WORD Data2;
+ # WORD Data3;
+ # BYTE Data4[8];
+ # } GUID;
+ class GUID < FFI::Struct
+ layout :Data1, :dword,
+ :Data2, :word,
+ :Data3, :word,
+ :Data4, [:byte, 8]
+
+ def self.[](s)
+ raise 'Bad GUID format.' unless s =~ /^[0-9a-f]{8}-([0-9a-f]{4}-){3}[0-9a-f]{12}$/i
+
+ new.tap do |guid|
+ guid[:Data1] = s[0, 8].to_i(16)
+ guid[:Data2] = s[9, 4].to_i(16)
+ guid[:Data3] = s[14, 4].to_i(16)
+ guid[:Data4][0] = s[19, 2].to_i(16)
+ guid[:Data4][1] = s[21, 2].to_i(16)
+ s[24, 12].split('').each_slice(2).with_index do |a, i|
+ guid[:Data4][i + 2] = a.join('').to_i(16)
+ end
+ end
+ end
+
+ def ==(other) Windows.memcmp(other, self, size) == 0 end
+ end
+
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/ms724950(v=vs.85).aspx
+ # typedef struct _SYSTEMTIME {
+ # WORD wYear;
+ # WORD wMonth;
+ # WORD wDayOfWeek;
+ # WORD wDay;
+ # WORD wHour;
+ # WORD wMinute;
+ # WORD wSecond;
+ # WORD wMilliseconds;
+ # } SYSTEMTIME, *PSYSTEMTIME;
+ class SYSTEMTIME < FFI::Struct
+ layout :wYear, :word,
+ :wMonth, :word,
+ :wDayOfWeek, :word,
+ :wDay, :word,
+ :wHour, :word,
+ :wMinute, :word,
+ :wSecond, :word,
+ :wMilliseconds, :word
+
+ def to_local_time
+ Time.local(self[:wYear], self[:wMonth], self[:wDay],
+ self[:wHour], self[:wMinute], self[:wSecond], self[:wMilliseconds] * 1000)
+ end
+ end
+
+ ffi_convention :stdcall
+
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/aa366730(v=vs.85).aspx
+ # HLOCAL WINAPI LocalFree(
+ # _In_ HLOCAL hMem
+ # );
+ ffi_lib :kernel32
+ attach_function :LocalFree, [:handle], :handle
+
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/ms724211(v=vs.85).aspx
+ # BOOL WINAPI CloseHandle(
+ # _In_ HANDLE hObject
+ # );
+ ffi_lib :kernel32
+ attach_function_private :CloseHandle, [:handle], :win32_bool
+
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/ms680722(v=vs.85).aspx
+ # void CoTaskMemFree(
+ # _In_opt_ LPVOID pv
+ # );
+ ffi_lib :ole32
+ attach_function :CoTaskMemFree, [:lpvoid], :void
+ end
+end
diff --git a/lib/puppet/util/windows/com.rb b/lib/puppet/util/windows/com.rb
new file mode 100644
index 000000000..0033d53c0
--- /dev/null
+++ b/lib/puppet/util/windows/com.rb
@@ -0,0 +1,224 @@
+require 'ffi'
+
+module Puppet::Util::Windows::COM
+ extend FFI::Library
+
+ ffi_convention :stdcall
+
+ S_OK = 0
+ S_FALSE = 1
+
+ def SUCCEEDED(hr) hr >= 0 end
+ def FAILED(hr) hr < 0 end
+
+ module_function :SUCCEEDED, :FAILED
+
+ def raise_if_hresult_failed(name, *args)
+ failed = FAILED(result = send(name, *args)) and raise "#{name} failed (hresult #{format('%#08x', result)})."
+
+ result
+ ensure
+ yield failed if block_given?
+ end
+
+ module_function :raise_if_hresult_failed
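+
+  # Example: InitializeCom near the bottom of this file wraps CoInitialize with this helper, i.e.
+  #   raise_if_hresult_failed(:CoInitialize, FFI::Pointer::NULL)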
+
+ CLSCTX_INPROC_SERVER = 0x1
+ CLSCTX_INPROC_HANDLER = 0x2
+ CLSCTX_LOCAL_SERVER = 0x4
+ CLSCTX_INPROC_SERVER16 = 0x8
+ CLSCTX_REMOTE_SERVER = 0x10
+ CLSCTX_INPROC_HANDLER16 = 0x20
+ CLSCTX_RESERVED1 = 0x40
+ CLSCTX_RESERVED2 = 0x80
+ CLSCTX_RESERVED3 = 0x100
+ CLSCTX_RESERVED4 = 0x200
+ CLSCTX_NO_CODE_DOWNLOAD = 0x400
+ CLSCTX_RESERVED5 = 0x800
+ CLSCTX_NO_CUSTOM_MARSHAL = 0x1000
+ CLSCTX_ENABLE_CODE_DOWNLOAD = 0x2000
+ CLSCTX_NO_FAILURE_LOG = 0x4000
+ CLSCTX_DISABLE_AAA = 0x8000
+ CLSCTX_ENABLE_AAA = 0x10000
+ CLSCTX_FROM_DEFAULT_CONTEXT = 0x20000
+ CLSCTX_ACTIVATE_32_BIT_SERVER = 0x40000
+ CLSCTX_ACTIVATE_64_BIT_SERVER = 0x80000
+ CLSCTX_ENABLE_CLOAKING = 0x100000
+ CLSCTX_PS_DLL = -0x80000000
+ CLSCTX_INPROC = CLSCTX_INPROC_SERVER | CLSCTX_INPROC_HANDLER
+ CLSCTX_ALL = CLSCTX_INPROC_SERVER | CLSCTX_INPROC_HANDLER | CLSCTX_LOCAL_SERVER | CLSCTX_REMOTE_SERVER
+ CLSCTX_SERVER = CLSCTX_INPROC_SERVER | CLSCTX_LOCAL_SERVER | CLSCTX_REMOTE_SERVER
+
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/ms686615(v=vs.85).aspx
+ # HRESULT CoCreateInstance(
+ # _In_ REFCLSID rclsid,
+ # _In_ LPUNKNOWN pUnkOuter,
+ # _In_ DWORD dwClsContext,
+ # _In_ REFIID riid,
+ # _Out_ LPVOID *ppv
+ # );
+ ffi_lib :ole32
+ attach_function_private :CoCreateInstance,
+ [:pointer, :lpunknown, :dword, :pointer, :lpvoid], :hresult
+
+ # code modified from Unknownr project https://github.com/rpeev/Unknownr
+ # licensed under MIT
+ module Interface
+ def self.[](*args)
+ spec, iid, *ifaces = args.reverse
+
+ spec.each { |name, signature| signature[0].unshift(:pointer) }
+
+ Class.new(FFI::Struct) do
+ const_set(:IID, iid)
+
+ vtable = Class.new(FFI::Struct) do
+ vtable_hash = Hash[(ifaces.map { |iface| iface::VTBL::SPEC.to_a } << spec.to_a).flatten(1)]
+ const_set(:SPEC, vtable_hash)
+
+ layout \
+ *self::SPEC.map { |name, signature| [name, callback(*signature)] }.flatten
+ end
+
+ const_set(:VTBL, vtable)
+
+ layout \
+ :lpVtbl, :pointer
+ end
+ end
+ end
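+
+  # Example: IUnknown near the bottom of this file is built with this helper as
+  #   Interface[FFI::WIN32::GUID['...'], { QueryInterface: [...], AddRef: [...], Release: [...] }]
+  # Derived interfaces list their parent interface(s) before the IID and spec hash.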
+
+ module Helpers
+ def QueryInstance(klass)
+ instance = nil
+
+ FFI::MemoryPointer.new(:pointer) do |ppv|
+ QueryInterface(klass::IID, ppv)
+
+ instance = klass.new(ppv.read_pointer)
+ end
+
+ begin
+ yield instance
+ return self
+ ensure
+ instance.Release
+ end if block_given?
+
+ instance
+ end
+
+ def UseInstance(klass, name, *args)
+ instance = nil
+
+ FFI::MemoryPointer.new(:pointer) do |ppv|
+ send(name, *args, ppv)
+
+ yield instance = klass.new(ppv.read_pointer)
+ end
+
+ self
+ ensure
+ instance.Release if instance && ! instance.null?
+ end
+ end
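+
+  # Illustrative usage sketch (ISomeInterface and :GetSomething are hypothetical):
+  #   com_object.UseInstance(ISomeInterface, :GetSomething) do |obj|
+  #     obj.DoWork
+  #   end
+  # QueryInstance behaves similarly but goes through QueryInterface on the receiver.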
+
+ module Instance
+ def self.[](iface)
+ Class.new(iface) do
+ send(:include, Helpers)
+
+ def initialize(pointer)
+ self.pointer = pointer
+
+ @vtbl = self.class::VTBL.new(self[:lpVtbl])
+ end
+
+ attr_reader :vtbl
+
+ self::VTBL.members.each do |name|
+ define_method(name) do |*args|
+ if Puppet::Util::Windows::COM.FAILED(result = @vtbl[name].call(self, *args))
+ raise Puppet::Util::Windows::Error.new("Failed to call #{self}::#{name} with HRESULT: #{result}.", result)
+ end
+ result
+ end
+ end
+
+ layout \
+ :lpVtbl, :pointer
+ end
+ end
+ end
+
+ module Factory
+ def self.[](iface, clsid)
+ Class.new(iface) do
+ send(:include, Helpers)
+
+ const_set(:CLSID, clsid)
+
+ def initialize(opts = {})
+ @opts = opts
+
+ @opts[:clsctx] ||= CLSCTX_INPROC_SERVER
+
+ FFI::MemoryPointer.new(:pointer) do |ppv|
+ hr = Puppet::Util::Windows::COM.CoCreateInstance(self.class::CLSID, FFI::Pointer::NULL, @opts[:clsctx], self.class::IID, ppv)
+ if Puppet::Util::Windows::COM.FAILED(hr)
+ raise "CoCreateInstance failed (#{self.class})."
+ end
+
+ self.pointer = ppv.read_pointer
+ end
+
+ @vtbl = self.class::VTBL.new(self[:lpVtbl])
+ end
+
+ attr_reader :vtbl
+
+ self::VTBL.members.each do |name|
+ define_method(name) do |*args|
+ if Puppet::Util::Windows::COM.FAILED(result = @vtbl[name].call(self, *args))
+ raise Puppet::Util::Windows::Error.new("Failed to call #{self}::#{name} with HRESULT: #{result}.", result)
+ end
+ result
+ end
+ end
+
+ layout \
+ :lpVtbl, :pointer
+ end
+ end
+ end
+
+ IUnknown = Interface[
+ FFI::WIN32::GUID['00000000-0000-0000-C000-000000000046'],
+
+ QueryInterface: [[:pointer, :pointer], :hresult],
+ AddRef: [[], :win32_ulong],
+ Release: [[], :win32_ulong]
+ ]
+
+ Unknown = Instance[IUnknown]
+
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/ms678543(v=vs.85).aspx
+ # HRESULT CoInitialize(
+ # _In_opt_ LPVOID pvReserved
+ # );
+ ffi_lib :ole32
+ attach_function_private :CoInitialize, [:lpvoid], :hresult
+
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/ms688715(v=vs.85).aspx
+ # void CoUninitialize(void);
+ ffi_lib :ole32
+ attach_function_private :CoUninitialize, [], :void
+
+ def InitializeCom
+ raise_if_hresult_failed(:CoInitialize, FFI::Pointer::NULL)
+
+ at_exit { CoUninitialize() }
+ end
+
+ module_function :InitializeCom
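+
+  # Example (sketch): callers invoke Puppet::Util::Windows::COM.InitializeCom once
+  # before creating COM objects; CoUninitialize is registered via at_exit above.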
+end
diff --git a/lib/puppet/util/windows/error.rb b/lib/puppet/util/windows/error.rb
index c6088fa7a..4f54bca45 100644
--- a/lib/puppet/util/windows/error.rb
+++ b/lib/puppet/util/windows/error.rb
@@ -1,16 +1,83 @@
require 'puppet/util/windows'
# represents an error resulting from a Win32 error code
class Puppet::Util::Windows::Error < Puppet::Error
- require 'windows/error'
- include ::Windows::Error
+ require 'ffi'
+ extend FFI::Library
attr_reader :code
- def initialize(message, code = GetLastError.call, original = nil)
- super(message + ": #{get_last_error(code)}", original)
+ # NOTE: FFI.errno only works properly when prior Win32 calls have been made
+ # through FFI bindings. Calls made through Win32API do not have their error
+ # codes captured by FFI.errno
+ def initialize(message, code = FFI.errno, original = nil)
+ super(message + ": #{self.class.format_error_code(code)}", original)
@code = code
end
-end
+ # Helper method that wraps FormatMessage that returns a human readable string.
+ def self.format_error_code(code)
+ # specifying 0 will look for LANGID in the following order:
+ #   1. Language neutral
+ #   2. Thread LANGID, based on the thread's locale value
+ #   3. User default LANGID, based on the user's default locale value
+ #   4. System default LANGID, based on the system default locale value
+ #   5. US English
+ dwLanguageId = 0
+ flags = FORMAT_MESSAGE_ALLOCATE_BUFFER |
+ FORMAT_MESSAGE_FROM_SYSTEM |
+ FORMAT_MESSAGE_ARGUMENT_ARRAY |
+ FORMAT_MESSAGE_IGNORE_INSERTS |
+ FORMAT_MESSAGE_MAX_WIDTH_MASK
+ error_string = ''
+
+ # this pointer actually points to a :lpwstr (pointer) since we're letting Windows allocate for us
+ FFI::MemoryPointer.new(:pointer, 1) do |buffer_ptr|
+ length = FormatMessageW(flags, FFI::Pointer::NULL, code, dwLanguageId,
+ buffer_ptr, 0, FFI::Pointer::NULL)
+
+ if length == FFI::WIN32_FALSE
+ # can't raise same error type here or potentially recurse infinitely
+ raise Puppet::Error.new("FormatMessageW could not format code #{code}")
+ end
+
+ # returns an FFI::Pointer with autorelease set to false, which is what we want
+ buffer_ptr.read_win32_local_pointer do |wide_string_ptr|
+ if wide_string_ptr.null?
+ raise Puppet::Error.new("FormatMessageW failed to allocate buffer for code #{code}")
+ end
+
+ error_string = wide_string_ptr.read_wide_string(length)
+ end
+ end
+
+ error_string
+ end
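+
+  # Illustrative example (the exact message text depends on the system locale):
+  #   Puppet::Util::Windows::Error.format_error_code(ERROR_ACCESS_DENIED)
+  #   #=> "Access is denied."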
+
+ ERROR_FILE_NOT_FOUND = 2
+ ERROR_ACCESS_DENIED = 5
+
+ FORMAT_MESSAGE_ALLOCATE_BUFFER = 0x00000100
+ FORMAT_MESSAGE_IGNORE_INSERTS = 0x00000200
+ FORMAT_MESSAGE_FROM_SYSTEM = 0x00001000
+ FORMAT_MESSAGE_ARGUMENT_ARRAY = 0x00002000
+ FORMAT_MESSAGE_MAX_WIDTH_MASK = 0x000000FF
+
+ ffi_convention :stdcall
+
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/ms679351(v=vs.85).aspx
+ # DWORD WINAPI FormatMessage(
+ # _In_ DWORD dwFlags,
+ # _In_opt_ LPCVOID lpSource,
+ # _In_ DWORD dwMessageId,
+ # _In_ DWORD dwLanguageId,
+ # _Out_ LPTSTR lpBuffer,
+ # _In_ DWORD nSize,
+ # _In_opt_ va_list *Arguments
+ # );
+ # NOTE: since we're not preallocating the buffer, use a :pointer for lpBuffer
+ ffi_lib :kernel32
+ attach_function_private :FormatMessageW,
+ [:dword, :lpcvoid, :dword, :dword, :pointer, :dword, :pointer], :dword
+end
diff --git a/lib/puppet/util/windows/file.rb b/lib/puppet/util/windows/file.rb
index 15a6ef469..07ffc1cf4 100644
--- a/lib/puppet/util/windows/file.rb
+++ b/lib/puppet/util/windows/file.rb
@@ -1,279 +1,401 @@
require 'puppet/util/windows'
module Puppet::Util::Windows::File
require 'ffi'
- require 'windows/api'
+ extend FFI::Library
+ extend Puppet::Util::Windows::String
+
+ FILE_ATTRIBUTE_READONLY = 0x00000001
+
+ SYNCHRONIZE = 0x100000
+ STANDARD_RIGHTS_REQUIRED = 0xf0000
+ STANDARD_RIGHTS_READ = 0x20000
+ STANDARD_RIGHTS_WRITE = 0x20000
+ STANDARD_RIGHTS_EXECUTE = 0x20000
+ STANDARD_RIGHTS_ALL = 0x1F0000
+ SPECIFIC_RIGHTS_ALL = 0xFFFF
+
+ FILE_READ_DATA = 1
+ FILE_WRITE_DATA = 2
+ FILE_APPEND_DATA = 4
+ FILE_READ_EA = 8
+ FILE_WRITE_EA = 16
+ FILE_EXECUTE = 32
+ FILE_DELETE_CHILD = 64
+ FILE_READ_ATTRIBUTES = 128
+ FILE_WRITE_ATTRIBUTES = 256
+
+ FILE_ALL_ACCESS = STANDARD_RIGHTS_REQUIRED | SYNCHRONIZE | 0x1FF
+
+ FILE_GENERIC_READ =
+ STANDARD_RIGHTS_READ |
+ FILE_READ_DATA |
+ FILE_READ_ATTRIBUTES |
+ FILE_READ_EA |
+ SYNCHRONIZE
+
+ FILE_GENERIC_WRITE =
+ STANDARD_RIGHTS_WRITE |
+ FILE_WRITE_DATA |
+ FILE_WRITE_ATTRIBUTES |
+ FILE_WRITE_EA |
+ FILE_APPEND_DATA |
+ SYNCHRONIZE
+
+ FILE_GENERIC_EXECUTE =
+ STANDARD_RIGHTS_EXECUTE |
+ FILE_READ_ATTRIBUTES |
+ FILE_EXECUTE |
+ SYNCHRONIZE
def replace_file(target, source)
- target_encoded = Puppet::Util::Windows::String.wide_string(target.to_s)
- source_encoded = Puppet::Util::Windows::String.wide_string(source.to_s)
+ target_encoded = wide_string(target.to_s)
+ source_encoded = wide_string(source.to_s)
flags = 0x1
backup_file = nil
- result = API.replace_file(
+ result = ReplaceFileW(
target_encoded,
source_encoded,
backup_file,
flags,
- 0,
- 0
+ FFI::Pointer::NULL,
+ FFI::Pointer::NULL
)
- return true if result
+ return true if result != FFI::WIN32_FALSE
raise Puppet::Util::Windows::Error.new("ReplaceFile(#{target}, #{source})")
end
module_function :replace_file
- MoveFileEx = Windows::API.new('MoveFileExW', 'PPL', 'B')
def move_file_ex(source, target, flags = 0)
- result = MoveFileEx.call(Puppet::Util::Windows::String.wide_string(source.to_s),
- Puppet::Util::Windows::String.wide_string(target.to_s),
- flags)
- return true unless result == 0
+ result = MoveFileExW(wide_string(source.to_s),
+ wide_string(target.to_s),
+ flags)
+
+ return true if result != FFI::WIN32_FALSE
raise Puppet::Util::Windows::Error.
new("MoveFileEx(#{source}, #{target}, #{flags.to_s(8)})")
end
module_function :move_file_ex
- module API
- extend FFI::Library
- ffi_lib 'kernel32'
- ffi_convention :stdcall
-
- # http://msdn.microsoft.com/en-us/library/windows/desktop/aa365512(v=vs.85).aspx
- # BOOL WINAPI ReplaceFile(
- # _In_ LPCTSTR lpReplacedFileName,
- # _In_ LPCTSTR lpReplacementFileName,
- # _In_opt_ LPCTSTR lpBackupFileName,
- # _In_ DWORD dwReplaceFlags - 0x1 REPLACEFILE_WRITE_THROUGH,
- # 0x2 REPLACEFILE_IGNORE_MERGE_ERRORS,
- # 0x4 REPLACEFILE_IGNORE_ACL_ERRORS
- # _Reserved_ LPVOID lpExclude,
- # _Reserved_ LPVOID lpReserved
- # );
- attach_function :replace_file, :ReplaceFileW,
- [:buffer_in, :buffer_in, :buffer_in, :uint, :uint, :uint], :bool
-
- # BOOLEAN WINAPI CreateSymbolicLink(
- # _In_ LPTSTR lpSymlinkFileName, - symbolic link to be created
- # _In_ LPTSTR lpTargetFileName, - name of target for symbolic link
- # _In_ DWORD dwFlags - 0x0 target is a file, 0x1 target is a directory
- # );
- # rescue on Windows < 6.0 so that code doesn't explode
- begin
- attach_function :create_symbolic_link, :CreateSymbolicLinkW,
- [:buffer_in, :buffer_in, :uint], :bool
- rescue LoadError
- end
-
- # DWORD WINAPI GetFileAttributes(
- # _In_ LPCTSTR lpFileName
- # );
- attach_function :get_file_attributes, :GetFileAttributesW,
- [:buffer_in], :uint
-
- # HANDLE WINAPI CreateFile(
- # _In_ LPCTSTR lpFileName,
- # _In_ DWORD dwDesiredAccess,
- # _In_ DWORD dwShareMode,
- # _In_opt_ LPSECURITY_ATTRIBUTES lpSecurityAttributes,
- # _In_ DWORD dwCreationDisposition,
- # _In_ DWORD dwFlagsAndAttributes,
- # _In_opt_ HANDLE hTemplateFile
- # );
- attach_function :create_file, :CreateFileW,
- [:buffer_in, :uint, :uint, :pointer, :uint, :uint, :uint], :uint
-
- # http://msdn.microsoft.com/en-us/library/windows/desktop/aa363216(v=vs.85).aspx
- # BOOL WINAPI DeviceIoControl(
- # _In_ HANDLE hDevice,
- # _In_ DWORD dwIoControlCode,
- # _In_opt_ LPVOID lpInBuffer,
- # _In_ DWORD nInBufferSize,
- # _Out_opt_ LPVOID lpOutBuffer,
- # _In_ DWORD nOutBufferSize,
- # _Out_opt_ LPDWORD lpBytesReturned,
- # _Inout_opt_ LPOVERLAPPED lpOverlapped
- # );
- attach_function :device_io_control, :DeviceIoControl,
- [:uint, :uint, :pointer, :uint, :pointer, :uint, :pointer, :pointer], :bool
-
- MAXIMUM_REPARSE_DATA_BUFFER_SIZE = 16384
-
- # REPARSE_DATA_BUFFER
- # http://msdn.microsoft.com/en-us/library/cc232006.aspx
- # http://msdn.microsoft.com/en-us/library/windows/hardware/ff552012(v=vs.85).aspx
- # struct is always MAXIMUM_REPARSE_DATA_BUFFER_SIZE bytes
- class ReparseDataBuffer < FFI::Struct
- layout :reparse_tag, :uint,
- :reparse_data_length, :ushort,
- :reserved, :ushort,
- :substitute_name_offset, :ushort,
- :substitute_name_length, :ushort,
- :print_name_offset, :ushort,
- :print_name_length, :ushort,
- :flags, :uint,
- # max less above fields dword / uint 4 bytes, ushort 2 bytes
- :path_buffer, [:uchar, MAXIMUM_REPARSE_DATA_BUFFER_SIZE - 20]
- end
-
- # BOOL WINAPI CloseHandle(
- # _In_ HANDLE hObject
- # );
- attach_function :close_handle, :CloseHandle, [:uint], :bool
- end
-
def symlink(target, symlink)
flags = File.directory?(target) ? 0x1 : 0x0
- result = API.create_symbolic_link(Puppet::Util::Windows::String.wide_string(symlink.to_s),
- Puppet::Util::Windows::String.wide_string(target.to_s), flags)
- return true if result
+ result = CreateSymbolicLinkW(wide_string(symlink.to_s),
+ wide_string(target.to_s), flags)
+ return true if result != FFI::WIN32_FALSE
raise Puppet::Util::Windows::Error.new(
"CreateSymbolicLink(#{symlink}, #{target}, #{flags.to_s(8)})")
end
module_function :symlink
INVALID_FILE_ATTRIBUTES = 0xFFFFFFFF #define INVALID_FILE_ATTRIBUTES (DWORD (-1))
- def self.get_file_attributes(file_name)
- result = API.get_file_attributes(Puppet::Util::Windows::String.wide_string(file_name.to_s))
+
+ def get_file_attributes(file_name)
+ Puppet.deprecation_warning('Puppet::Util::Windows::File.get_file_attributes is deprecated; please use Puppet::Util::Windows::File.get_attributes')
+ get_attributes(file_name)
+ end
+ module_function :get_file_attributes
+
+ def get_attributes(file_name)
+ result = GetFileAttributesW(wide_string(file_name.to_s))
return result unless result == INVALID_FILE_ATTRIBUTES
raise Puppet::Util::Windows::Error.new("GetFileAttributes(#{file_name})")
end
+ module_function :get_attributes
+
+ def add_attributes(path, flags)
+ oldattrs = get_attributes(path)
+
+ if (oldattrs | flags) != oldattrs
+ set_attributes(path, oldattrs | flags)
+ end
+ end
+ module_function :add_attributes
+
+ def remove_attributes(path, flags)
+ oldattrs = get_attributes(path)
+
+ if (oldattrs & ~flags) != oldattrs
+ set_attributes(path, oldattrs & ~flags)
+ end
+ end
+ module_function :remove_attributes
+
+ def set_attributes(path, flags)
+ success = SetFileAttributesW(wide_string(path), flags) != FFI::WIN32_FALSE
+ raise Puppet::Util::Windows::Error.new("Failed to set file attributes") if !success
- INVALID_HANDLE_VALUE = -1 #define INVALID_HANDLE_VALUE ((HANDLE)(LONG_PTR)-1)
+ success
+ end
+ module_function :set_attributes
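+
+  # Illustrative sketch: temporarily clear the read-only bit on a path
+  #   Puppet::Util::Windows::File.remove_attributes(path, FILE_ATTRIBUTE_READONLY)
+  #   # ... modify the file ...
+  #   Puppet::Util::Windows::File.add_attributes(path, FILE_ATTRIBUTE_READONLY)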
+
+ #define INVALID_HANDLE_VALUE ((HANDLE)(LONG_PTR)-1)
+ INVALID_HANDLE_VALUE = FFI::Pointer.new(-1).address
def self.create_file(file_name, desired_access, share_mode, security_attributes,
creation_disposition, flags_and_attributes, template_file_handle)
- result = API.create_file(Puppet::Util::Windows::String.wide_string(file_name.to_s),
+ result = CreateFileW(wide_string(file_name.to_s),
desired_access, share_mode, security_attributes, creation_disposition,
flags_and_attributes, template_file_handle)
return result unless result == INVALID_HANDLE_VALUE
raise Puppet::Util::Windows::Error.new(
"CreateFile(#{file_name}, #{desired_access.to_s(8)}, #{share_mode.to_s(8)}, " +
"#{security_attributes}, #{creation_disposition.to_s(8)}, " +
"#{flags_and_attributes.to_s(8)}, #{template_file_handle})")
end
+ def self.get_reparse_point_data(handle, &block)
+ # must be multiple of 1024, min 10240
+ FFI::MemoryPointer.new(REPARSE_DATA_BUFFER.size) do |reparse_data_buffer_ptr|
+ device_io_control(handle, FSCTL_GET_REPARSE_POINT, nil, reparse_data_buffer_ptr)
+ yield REPARSE_DATA_BUFFER.new(reparse_data_buffer_ptr)
+ end
+
+ # underlying struct MemoryPointer has been cleaned up by this point, nothing to return
+ nil
+ end
+
def self.device_io_control(handle, io_control_code, in_buffer = nil, out_buffer = nil)
if out_buffer.nil?
raise Puppet::Util::Windows::Error.new("out_buffer is required")
end
- result = API.device_io_control(
- handle,
- io_control_code,
- in_buffer, in_buffer.nil? ? 0 : in_buffer.size,
- out_buffer, out_buffer.size,
- FFI::MemoryPointer.new(:uint, 1),
- nil
- )
+ FFI::MemoryPointer.new(:dword, 1) do |bytes_returned_ptr|
+ result = DeviceIoControl(
+ handle,
+ io_control_code,
+ in_buffer, in_buffer.nil? ? 0 : in_buffer.size,
+ out_buffer, out_buffer.size,
+ bytes_returned_ptr,
+ nil
+ )
+
+ if result == FFI::WIN32_FALSE
+ raise Puppet::Util::Windows::Error.new(
+ "DeviceIoControl(#{handle}, #{io_control_code}, " +
+ "#{in_buffer}, #{in_buffer ? in_buffer.size : ''}, " +
+ "#{out_buffer}, #{out_buffer ? out_buffer.size : ''}")
+ end
+ end
- return out_buffer if result
- raise Puppet::Util::Windows::Error.new(
- "DeviceIoControl(#{handle}, #{io_control_code}, " +
- "#{in_buffer}, #{in_buffer ? in_buffer.size : ''}, " +
- "#{out_buffer}, #{out_buffer ? out_buffer.size : ''}")
+ out_buffer
end
FILE_ATTRIBUTE_REPARSE_POINT = 0x400
def symlink?(file_name)
begin
- attributes = get_file_attributes(file_name)
+ attributes = get_attributes(file_name)
(attributes & FILE_ATTRIBUTE_REPARSE_POINT) == FILE_ATTRIBUTE_REPARSE_POINT
rescue
# raised INVALID_FILE_ATTRIBUTES is equivalent to file not found
false
end
end
module_function :symlink?
GENERIC_READ = 0x80000000
+ GENERIC_WRITE = 0x40000000
+ GENERIC_EXECUTE = 0x20000000
+ GENERIC_ALL = 0x10000000
FILE_SHARE_READ = 1
+ FILE_SHARE_WRITE = 2
OPEN_EXISTING = 3
FILE_FLAG_OPEN_REPARSE_POINT = 0x00200000
FILE_FLAG_BACKUP_SEMANTICS = 0x02000000
def self.open_symlink(link_name)
begin
yield handle = create_file(
- Puppet::Util::Windows::String.wide_string(link_name.to_s),
+ link_name,
GENERIC_READ,
FILE_SHARE_READ,
nil, # security_attributes
OPEN_EXISTING,
FILE_FLAG_OPEN_REPARSE_POINT | FILE_FLAG_BACKUP_SEMANTICS,
0) # template_file
ensure
- API.close_handle(handle) if handle
+ FFI::WIN32.CloseHandle(handle) if handle
end
+
+ # handle has had CloseHandle called against it, so nothing to return
+ nil
end
def readlink(link_name)
+ link = nil
open_symlink(link_name) do |handle|
- resolve_symlink(handle)
+ link = resolve_symlink(handle)
end
+
+ link
end
module_function :readlink
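+
+ # Illustrative sketch for readlink (paths are hypothetical): given a link created via symlink(),
+ #   Puppet::Util::Windows::File.symlink('C:/target', 'C:/link')
+ #   Puppet::Util::Windows::File.readlink('C:/link')  #=> "C:/target"
+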
def stat(file_name)
file_name = file_name.to_s # accommodate Pathname or String
stat = File.stat(file_name)
singleton_class = class << stat; self; end
target_path = file_name
if symlink?(file_name)
target_path = readlink(file_name)
link_ftype = File.stat(target_path).ftype
# sigh, monkey patch instance method for instance, and close over link_ftype
singleton_class.send(:define_method, :ftype) do
link_ftype
end
end
singleton_class.send(:define_method, :mode) do
Puppet::Util::Windows::Security.get_mode(target_path)
end
stat
end
module_function :stat
def lstat(file_name)
file_name = file_name.to_s # accommodate Pathname or String
# monkey'ing around!
stat = File.lstat(file_name)
singleton_class = class << stat; self; end
singleton_class.send(:define_method, :mode) do
Puppet::Util::Windows::Security.get_mode(file_name)
end
if symlink?(file_name)
def stat.ftype
"link"
end
end
stat
end
module_function :lstat
private
# http://msdn.microsoft.com/en-us/library/windows/desktop/aa364571(v=vs.85).aspx
FSCTL_GET_REPARSE_POINT = 0x900a8
def self.resolve_symlink(handle)
- # must be multiple of 1024, min 10240
- out_buffer = FFI::MemoryPointer.new(API::ReparseDataBuffer.size)
- device_io_control(handle, FSCTL_GET_REPARSE_POINT, nil, out_buffer)
+ path = nil
+ get_reparse_point_data(handle) do |reparse_data|
+ offset = reparse_data[:PrintNameOffset]
+ length = reparse_data[:PrintNameLength]
- reparse_data = API::ReparseDataBuffer.new(out_buffer)
- offset = reparse_data[:print_name_offset]
- length = reparse_data[:print_name_length]
+ ptr = reparse_data.pointer + reparse_data.offset_of(:PathBuffer) + offset
+ path = ptr.read_wide_string(length / 2) # length is bytes, need UTF-16 wchars
+ end
+
+ path
+ end
+
+ ffi_convention :stdcall
+
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/aa365512(v=vs.85).aspx
+ # BOOL WINAPI ReplaceFile(
+ # _In_ LPCTSTR lpReplacedFileName,
+ # _In_ LPCTSTR lpReplacementFileName,
+ # _In_opt_ LPCTSTR lpBackupFileName,
+ # _In_ DWORD dwReplaceFlags - 0x1 REPLACEFILE_WRITE_THROUGH,
+ # 0x2 REPLACEFILE_IGNORE_MERGE_ERRORS,
+ # 0x4 REPLACEFILE_IGNORE_ACL_ERRORS
+ # _Reserved_ LPVOID lpExclude,
+ # _Reserved_ LPVOID lpReserved
+ # );
+ ffi_lib :kernel32
+ attach_function_private :ReplaceFileW,
+ [:lpcwstr, :lpcwstr, :lpcwstr, :dword, :lpvoid, :lpvoid], :win32_bool
+
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/aa365240(v=vs.85).aspx
+ # BOOL WINAPI MoveFileEx(
+ # _In_ LPCTSTR lpExistingFileName,
+ # _In_opt_ LPCTSTR lpNewFileName,
+ # _In_ DWORD dwFlags
+ # );
+ ffi_lib :kernel32
+ attach_function_private :MoveFileExW,
+ [:lpcwstr, :lpcwstr, :dword], :win32_bool
+
+ # BOOLEAN WINAPI CreateSymbolicLink(
+ # _In_ LPTSTR lpSymlinkFileName, - symbolic link to be created
+ # _In_ LPTSTR lpTargetFileName, - name of target for symbolic link
+ # _In_ DWORD dwFlags - 0x0 target is a file, 0x1 target is a directory
+ # );
+ # rescue on Windows < 6.0 so that code doesn't explode
+ begin
+ ffi_lib :kernel32
+ attach_function_private :CreateSymbolicLinkW,
+ [:lpwstr, :lpwstr, :dword], :win32_bool
+ rescue LoadError
+ end
- result = reparse_data[:path_buffer].to_a[offset, length].pack('C*')
- result.force_encoding('UTF-16LE').encode(Encoding.default_external)
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/aa364944(v=vs.85).aspx
+ # DWORD WINAPI GetFileAttributes(
+ # _In_ LPCTSTR lpFileName
+ # );
+ ffi_lib :kernel32
+ attach_function_private :GetFileAttributesW,
+ [:lpcwstr], :dword
+
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/aa365535(v=vs.85).aspx
+ # BOOL WINAPI SetFileAttributes(
+ # _In_ LPCTSTR lpFileName,
+ # _In_ DWORD dwFileAttributes
+ # );
+ ffi_lib :kernel32
+ attach_function_private :SetFileAttributesW,
+ [:lpcwstr, :dword], :win32_bool
+
+ # HANDLE WINAPI CreateFile(
+ # _In_ LPCTSTR lpFileName,
+ # _In_ DWORD dwDesiredAccess,
+ # _In_ DWORD dwShareMode,
+ # _In_opt_ LPSECURITY_ATTRIBUTES lpSecurityAttributes,
+ # _In_ DWORD dwCreationDisposition,
+ # _In_ DWORD dwFlagsAndAttributes,
+ # _In_opt_ HANDLE hTemplateFile
+ # );
+ ffi_lib :kernel32
+ attach_function_private :CreateFileW,
+ [:lpcwstr, :dword, :dword, :pointer, :dword, :dword, :handle], :handle
+
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/aa363216(v=vs.85).aspx
+ # BOOL WINAPI DeviceIoControl(
+ # _In_ HANDLE hDevice,
+ # _In_ DWORD dwIoControlCode,
+ # _In_opt_ LPVOID lpInBuffer,
+ # _In_ DWORD nInBufferSize,
+ # _Out_opt_ LPVOID lpOutBuffer,
+ # _In_ DWORD nOutBufferSize,
+ # _Out_opt_ LPDWORD lpBytesReturned,
+ # _Inout_opt_ LPOVERLAPPED lpOverlapped
+ # );
+ ffi_lib :kernel32
+ attach_function_private :DeviceIoControl,
+ [:handle, :dword, :lpvoid, :dword, :lpvoid, :dword, :lpdword, :pointer], :win32_bool
+
+ MAXIMUM_REPARSE_DATA_BUFFER_SIZE = 16384
+
+ # REPARSE_DATA_BUFFER
+ # http://msdn.microsoft.com/en-us/library/cc232006.aspx
+ # http://msdn.microsoft.com/en-us/library/windows/hardware/ff552012(v=vs.85).aspx
+ # struct is always MAXIMUM_REPARSE_DATA_BUFFER_SIZE bytes
+ class REPARSE_DATA_BUFFER < FFI::Struct
+ layout :ReparseTag, :win32_ulong,
+ :ReparseDataLength, :ushort,
+ :Reserved, :ushort,
+ :SubstituteNameOffset, :ushort,
+ :SubstituteNameLength, :ushort,
+ :PrintNameOffset, :ushort,
+ :PrintNameLength, :ushort,
+ :Flags, :win32_ulong,
+ # max less above fields dword / uint 4 bytes, ushort 2 bytes
+ # technically a WCHAR buffer, but we care about size in bytes here
+ :PathBuffer, [:byte, MAXIMUM_REPARSE_DATA_BUFFER_SIZE - 20]
end
end
diff --git a/lib/puppet/util/windows/process.rb b/lib/puppet/util/windows/process.rb
index c1f0bedd4..c6a8c0db5 100644
--- a/lib/puppet/util/windows/process.rb
+++ b/lib/puppet/util/windows/process.rb
@@ -1,238 +1,354 @@
require 'puppet/util/windows'
-require 'windows/process'
-require 'windows/handle'
-require 'windows/synchronize'
+require 'win32/process'
+require 'ffi'
module Puppet::Util::Windows::Process
- extend ::Windows::Process
- extend ::Windows::Handle
- extend ::Windows::Synchronize
-
- module API
- require 'ffi'
- extend FFI::Library
- ffi_convention :stdcall
-
- ffi_lib 'kernel32'
-
- # http://msdn.microsoft.com/en-us/library/windows/desktop/ms683179(v=vs.85).aspx
- # HANDLE WINAPI GetCurrentProcess(void);
- attach_function :get_current_process, :GetCurrentProcess, [], :uint
-
- # BOOL WINAPI CloseHandle(
- # _In_ HANDLE hObject
- # );
- attach_function :close_handle, :CloseHandle, [:uint], :bool
-
- ffi_lib 'advapi32'
-
- # http://msdn.microsoft.com/en-us/library/windows/desktop/aa379295(v=vs.85).aspx
- # BOOL WINAPI OpenProcessToken(
- # _In_ HANDLE ProcessHandle,
- # _In_ DWORD DesiredAccess,
- # _Out_ PHANDLE TokenHandle
- # );
- attach_function :open_process_token, :OpenProcessToken,
- [:uint, :uint, :pointer], :bool
-
-
- # http://msdn.microsoft.com/en-us/library/windows/desktop/aa379261(v=vs.85).aspx
- # typedef struct _LUID {
- # DWORD LowPart;
- # LONG HighPart;
- # } LUID, *PLUID;
- class LUID < FFI::Struct
- layout :low_part, :uint,
- :high_part, :int
- end
-
- # http://msdn.microsoft.com/en-us/library/Windows/desktop/aa379180(v=vs.85).aspx
- # BOOL WINAPI LookupPrivilegeValue(
- # _In_opt_ LPCTSTR lpSystemName,
- # _In_ LPCTSTR lpName,
- # _Out_ PLUID lpLuid
- # );
- attach_function :lookup_privilege_value, :LookupPrivilegeValueA,
- [:string, :string, :pointer], :bool
-
- Token_Information = enum(
- :token_user, 1,
- :token_groups,
- :token_privileges,
- :token_owner,
- :token_primary_group,
- :token_default_dacl,
- :token_source,
- :token_type,
- :token_impersonation_level,
- :token_statistics,
- :token_restricted_sids,
- :token_session_id,
- :token_groups_and_privileges,
- :token_session_reference,
- :token_sandbox_inert,
- :token_audit_policy,
- :token_origin,
- :token_elevation_type,
- :token_linked_token,
- :token_elevation,
- :token_has_restrictions,
- :token_access_information,
- :token_virtualization_allowed,
- :token_virtualization_enabled,
- :token_integrity_level,
- :token_ui_access,
- :token_mandatory_policy,
- :token_logon_sid,
- :max_token_info_class
- )
-
- # http://msdn.microsoft.com/en-us/library/windows/desktop/aa379263(v=vs.85).aspx
- # typedef struct _LUID_AND_ATTRIBUTES {
- # LUID Luid;
- # DWORD Attributes;
- # } LUID_AND_ATTRIBUTES, *PLUID_AND_ATTRIBUTES;
- class LUID_And_Attributes < FFI::Struct
- layout :luid, LUID,
- :attributes, :uint
- end
+ extend Puppet::Util::Windows::String
+ extend FFI::Library
- # http://msdn.microsoft.com/en-us/library/windows/desktop/aa379630(v=vs.85).aspx
- # typedef struct _TOKEN_PRIVILEGES {
- # DWORD PrivilegeCount;
- # LUID_AND_ATTRIBUTES Privileges[ANYSIZE_ARRAY];
- # } TOKEN_PRIVILEGES, *PTOKEN_PRIVILEGES;
- class Token_Privileges < FFI::Struct
- layout :privilege_count, :uint,
- :privileges, [LUID_And_Attributes, 1] # placeholder for offset
- end
-
- # http://msdn.microsoft.com/en-us/library/windows/desktop/aa446671(v=vs.85).aspx
- # BOOL WINAPI GetTokenInformation(
- # _In_ HANDLE TokenHandle,
- # _In_ TOKEN_INFORMATION_CLASS TokenInformationClass,
- # _Out_opt_ LPVOID TokenInformation,
- # _In_ DWORD TokenInformationLength,
- # _Out_ PDWORD ReturnLength
- # );
- attach_function :get_token_information, :GetTokenInformation,
- [:uint, Token_Information, :pointer, :uint, :pointer ], :bool
- end
+ WAIT_TIMEOUT = 0x102
def execute(command, arguments, stdin, stdout, stderr)
Process.create( :command_line => command, :startup_info => {:stdin => stdin, :stdout => stdout, :stderr => stderr}, :close_handles => false )
end
module_function :execute
def wait_process(handle)
- while WaitForSingleObject(handle, 0) == Windows::Synchronize::WAIT_TIMEOUT
+ while WaitForSingleObject(handle, 0) == WAIT_TIMEOUT
sleep(1)
end
- exit_status = [0].pack('L')
- unless GetExitCodeProcess(handle, exit_status)
- raise Puppet::Util::Windows::Error.new("Failed to get child process exit code")
+ exit_status = -1
+ FFI::MemoryPointer.new(:dword, 1) do |exit_status_ptr|
+ if GetExitCodeProcess(handle, exit_status_ptr) == FFI::WIN32_FALSE
+ raise Puppet::Util::Windows::Error.new("Failed to get child process exit code")
+ end
+ exit_status = exit_status_ptr.read_dword
+
+ # $CHILD_STATUS is not set when calling win32/process Process.create
+ # and since it's read-only, we can't set it. But we can execute
+ # a shell that simply returns the desired exit status, which has the
+ # desired effect.
+ %x{#{ENV['COMSPEC']} /c exit #{exit_status}}
end
- exit_status = exit_status.unpack('L').first
-
- # $CHILD_STATUS is not set when calling win32/process Process.create
- # and since it's read-only, we can't set it. But we can execute a
- # a shell that simply returns the desired exit status, which has the
- # desired effect.
- %x{#{ENV['COMSPEC']} /c exit #{exit_status}}
exit_status
end
module_function :wait_process
def get_current_process
# this pseudo-handle does not require closing per MSDN docs
- API.get_current_process
+ GetCurrentProcess()
end
module_function :get_current_process
- def open_process_token(handle, desired_access)
- token_handle_ptr = FFI::MemoryPointer.new(:uint, 1)
- result = API.open_process_token(handle, desired_access, token_handle_ptr)
- if !result
- raise Puppet::Util::Windows::Error.new(
- "OpenProcessToken(#{handle}, #{desired_access.to_s(8)}, #{token_handle_ptr})")
- end
-
+ def open_process_token(handle, desired_access, &block)
+ token_handle = nil
begin
- yield token_handle = token_handle_ptr.read_uint
+ FFI::MemoryPointer.new(:handle, 1) do |token_handle_ptr|
+ result = OpenProcessToken(handle, desired_access, token_handle_ptr)
+ if result == FFI::WIN32_FALSE
+ raise Puppet::Util::Windows::Error.new(
+ "OpenProcessToken(#{handle}, #{desired_access.to_s(8)}, #{token_handle_ptr})")
+ end
+
+ yield token_handle = token_handle_ptr.read_handle
+ end
+
+ token_handle
ensure
- API.close_handle(token_handle)
+ FFI::WIN32.CloseHandle(token_handle) if token_handle
end
+
+ # token_handle has had CloseHandle called against it, so nothing to return
+ nil
end
module_function :open_process_token
- def lookup_privilege_value(name, system_name = '')
- luid = FFI::MemoryPointer.new(API::LUID.size)
- result = API.lookup_privilege_value(
- system_name,
- name.to_s,
- luid
- )
-
- return API::LUID.new(luid) if result
- raise Puppet::Util::Windows::Error.new(
- "LookupPrivilegeValue(#{system_name}, #{name}, #{luid})")
+ # Execute a block with the current process token
+ def with_process_token(access, &block)
+ handle = get_current_process
+ open_process_token(handle, access) do |token_handle|
+ yield token_handle
+ end
+
+ # all handles have been closed, so nothing to safely return
+ nil
+ end
+ module_function :with_process_token
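+
+  # Illustrative sketch combining the helpers above and below:
+  #   with_process_token(TOKEN_QUERY) do |token|
+  #     get_token_information(token, :TokenElevation) do |buf|
+  #       parse_token_information_as_token_elevation(buf)
+  #     end
+  #   end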
+
+ def lookup_privilege_value(name, system_name = '', &block)
+ FFI::MemoryPointer.new(LUID.size) do |luid_ptr|
+ result = LookupPrivilegeValueW(
+ wide_string(system_name),
+ wide_string(name.to_s),
+ luid_ptr
+ )
+
+ if result == FFI::WIN32_FALSE
+ raise Puppet::Util::Windows::Error.new(
+ "LookupPrivilegeValue(#{system_name}, #{name}, #{luid_ptr})")
+ end
+
+ yield LUID.new(luid_ptr)
+ end
+
+ # the underlying MemoryPointer for LUID is cleaned up by this point
+ nil
end
module_function :lookup_privilege_value
- def get_token_information(token_handle, token_information)
+ def get_token_information(token_handle, token_information, &block)
# to determine buffer size
- return_length_ptr = FFI::MemoryPointer.new(:uint, 1)
- result = API.get_token_information(token_handle, token_information, nil, 0, return_length_ptr)
- return_length = return_length_ptr.read_uint
-
- if return_length <= 0
- raise Puppet::Util::Windows::Error.new(
- "GetTokenInformation(#{token_handle}, #{token_information}, nil, 0, #{return_length_ptr})")
+ FFI::MemoryPointer.new(:dword, 1) do |return_length_ptr|
+ result = GetTokenInformation(token_handle, token_information, nil, 0, return_length_ptr)
+ return_length = return_length_ptr.read_dword
+
+ if return_length <= 0
+ raise Puppet::Util::Windows::Error.new(
+ "GetTokenInformation(#{token_handle}, #{token_information}, nil, 0, #{return_length_ptr})")
+ end
+
+ # re-call API with properly sized buffer for all results
+ FFI::MemoryPointer.new(return_length) do |token_information_buf|
+ result = GetTokenInformation(token_handle, token_information,
+ token_information_buf, return_length, return_length_ptr)
+
+ if result == FFI::WIN32_FALSE
+ raise Puppet::Util::Windows::Error.new(
+ "GetTokenInformation(#{token_handle}, #{token_information}, #{token_information_buf}, " +
+ "#{return_length}, #{return_length_ptr})")
+ end
+
+ yield token_information_buf
+ end
end
- # re-call API with properly sized buffer for all results
- token_information_buf = FFI::MemoryPointer.new(return_length)
- result = API.get_token_information(token_handle, token_information,
- token_information_buf, return_length, return_length_ptr)
-
- if !result
- raise Puppet::Util::Windows::Error.new(
- "GetTokenInformation(#{token_handle}, #{token_information}, #{token_information_buf}, " +
- "#{return_length}, #{return_length_ptr})")
- end
+ # GetTokenInformation buffer has been cleaned up by this point, nothing to return
+ nil
+ end
+ module_function :get_token_information
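+
+  # The raw buffer yielded above is typically decoded with one of the
+  # parse_token_information_as_* helpers below, as process_privilege_symlink?
+  # and elevated_security? do later in this file.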
- raw_privileges = API::Token_Privileges.new(token_information_buf)
- privileges = { :count => raw_privileges[:privilege_count], :privileges => [] }
+ def parse_token_information_as_token_privileges(token_information_buf)
+ raw_privileges = TOKEN_PRIVILEGES.new(token_information_buf)
+ privileges = { :count => raw_privileges[:PrivilegeCount], :privileges => [] }
- offset = token_information_buf + API::Token_Privileges.offset_of(:privileges)
- privilege_ptr = FFI::Pointer.new(API::LUID_And_Attributes, offset)
+ offset = token_information_buf + TOKEN_PRIVILEGES.offset_of(:Privileges)
+ privilege_ptr = FFI::Pointer.new(LUID_AND_ATTRIBUTES, offset)
- # extract each instance of LUID_And_Attributes
+ # extract each instance of LUID_AND_ATTRIBUTES
0.upto(privileges[:count] - 1) do |i|
- privileges[:privileges] << API::LUID_And_Attributes.new(privilege_ptr[i])
+ privileges[:privileges] << LUID_AND_ATTRIBUTES.new(privilege_ptr[i])
end
privileges
end
- module_function :get_token_information
+ module_function :parse_token_information_as_token_privileges
+
+ def parse_token_information_as_token_elevation(token_information_buf)
+ TOKEN_ELEVATION.new(token_information_buf)
+ end
+ module_function :parse_token_information_as_token_elevation
TOKEN_ALL_ACCESS = 0xF01FF
ERROR_NO_SUCH_PRIVILEGE = 1313
def process_privilege_symlink?
+ privilege_symlink = false
handle = get_current_process
open_process_token(handle, TOKEN_ALL_ACCESS) do |token_handle|
- luid = lookup_privilege_value('SeCreateSymbolicLinkPrivilege')
- token_info = get_token_information(token_handle, :token_privileges)
- token_info[:privileges].any? { |p| p[:luid].values == luid.values }
+ lookup_privilege_value('SeCreateSymbolicLinkPrivilege') do |luid|
+ get_token_information(token_handle, :TokenPrivileges) do |token_info|
+ token_privileges = parse_token_information_as_token_privileges(token_info)
+ privilege_symlink = token_privileges[:privileges].any? { |p| p[:Luid].values == luid.values }
+ end
+ end
end
+
+ privilege_symlink
rescue Puppet::Util::Windows::Error => e
if e.code == ERROR_NO_SUCH_PRIVILEGE
false # pre-Vista
else
raise e
end
end
module_function :process_privilege_symlink?
+
+ TOKEN_QUERY = 0x0008
+ # Returns whether or not the owner of the current process is running
+ # with elevated security privileges.
+ #
+ # Only supported on Windows Vista or later.
+ #
+ def elevated_security?
+ # default / pre-Vista
+ elevated = false
+ handle = nil
+
+ begin
+ handle = get_current_process
+ open_process_token(handle, TOKEN_QUERY) do |token_handle|
+ get_token_information(token_handle, :TokenElevation) do |token_info|
+ token_elevation = parse_token_information_as_token_elevation(token_info)
+ # TokenIsElevated member of the TOKEN_ELEVATION struct
+ elevated = token_elevation[:TokenIsElevated] != 0
+ end
+ end
+
+ elevated
+ rescue Puppet::Util::Windows::Error => e
+ raise e if e.code != ERROR_NO_SUCH_PRIVILEGE
+ ensure
+ FFI::WIN32.CloseHandle(handle) if handle
+ end
+ end
+ module_function :elevated_security?
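+
+  # Example: Puppet::Util::Windows::Process.elevated_security? #=> true when the
+  # current process token is elevated (UAC); false otherwise or pre-Vista.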
+
+ ABOVE_NORMAL_PRIORITY_CLASS = 0x0008000
+ BELOW_NORMAL_PRIORITY_CLASS = 0x0004000
+ HIGH_PRIORITY_CLASS = 0x0000080
+ IDLE_PRIORITY_CLASS = 0x0000040
+ NORMAL_PRIORITY_CLASS = 0x0000020
+ REALTIME_PRIORITY_CLASS = 0x0000010
+
+ ffi_convention :stdcall
+
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/ms687032(v=vs.85).aspx
+ # DWORD WINAPI WaitForSingleObject(
+ # _In_ HANDLE hHandle,
+ # _In_ DWORD dwMilliseconds
+ # );
+ ffi_lib :kernel32
+ attach_function_private :WaitForSingleObject,
+ [:handle, :dword], :dword
+
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/ms683189(v=vs.85).aspx
+ # BOOL WINAPI GetExitCodeProcess(
+ # _In_ HANDLE hProcess,
+ # _Out_ LPDWORD lpExitCode
+ # );
+ ffi_lib :kernel32
+ attach_function_private :GetExitCodeProcess,
+ [:handle, :lpdword], :win32_bool
+
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/ms683179(v=vs.85).aspx
+ # HANDLE WINAPI GetCurrentProcess(void);
+ ffi_lib :kernel32
+ attach_function_private :GetCurrentProcess, [], :handle
+
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/aa379295(v=vs.85).aspx
+ # BOOL WINAPI OpenProcessToken(
+ # _In_ HANDLE ProcessHandle,
+ # _In_ DWORD DesiredAccess,
+ # _Out_ PHANDLE TokenHandle
+ # );
+ ffi_lib :advapi32
+ attach_function_private :OpenProcessToken,
+ [:handle, :dword, :phandle], :win32_bool
+
+
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/aa379261(v=vs.85).aspx
+ # typedef struct _LUID {
+ # DWORD LowPart;
+ # LONG HighPart;
+ # } LUID, *PLUID;
+ class LUID < FFI::Struct
+ layout :LowPart, :dword,
+ :HighPart, :win32_long
+ end
+
+ # http://msdn.microsoft.com/en-us/library/Windows/desktop/aa379180(v=vs.85).aspx
+ # BOOL WINAPI LookupPrivilegeValue(
+ # _In_opt_ LPCTSTR lpSystemName,
+ # _In_ LPCTSTR lpName,
+ # _Out_ PLUID lpLuid
+ # );
+ ffi_lib :advapi32
+ attach_function_private :LookupPrivilegeValueW,
+ [:lpcwstr, :lpcwstr, :pointer], :win32_bool
+
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/aa379626(v=vs.85).aspx
+ TOKEN_INFORMATION_CLASS = enum(
+ :TokenUser, 1,
+ :TokenGroups,
+ :TokenPrivileges,
+ :TokenOwner,
+ :TokenPrimaryGroup,
+ :TokenDefaultDacl,
+ :TokenSource,
+ :TokenType,
+ :TokenImpersonationLevel,
+ :TokenStatistics,
+ :TokenRestrictedSids,
+ :TokenSessionId,
+ :TokenGroupsAndPrivileges,
+ :TokenSessionReference,
+ :TokenSandBoxInert,
+ :TokenAuditPolicy,
+ :TokenOrigin,
+ :TokenElevationType,
+ :TokenLinkedToken,
+ :TokenElevation,
+ :TokenHasRestrictions,
+ :TokenAccessInformation,
+ :TokenVirtualizationAllowed,
+ :TokenVirtualizationEnabled,
+ :TokenIntegrityLevel,
+ :TokenUIAccess,
+ :TokenMandatoryPolicy,
+ :TokenLogonSid,
+ :TokenIsAppContainer,
+ :TokenCapabilities,
+ :TokenAppContainerSid,
+ :TokenAppContainerNumber,
+ :TokenUserClaimAttributes,
+ :TokenDeviceClaimAttributes,
+ :TokenRestrictedUserClaimAttributes,
+ :TokenRestrictedDeviceClaimAttributes,
+ :TokenDeviceGroups,
+ :TokenRestrictedDeviceGroups,
+ :TokenSecurityAttributes,
+ :TokenIsRestricted,
+ :MaxTokenInfoClass
+ )
+
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/aa379263(v=vs.85).aspx
+ # typedef struct _LUID_AND_ATTRIBUTES {
+ # LUID Luid;
+ # DWORD Attributes;
+ # } LUID_AND_ATTRIBUTES, *PLUID_AND_ATTRIBUTES;
+ class LUID_AND_ATTRIBUTES < FFI::Struct
+ layout :Luid, LUID,
+ :Attributes, :dword
+ end
+
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/aa379630(v=vs.85).aspx
+ # typedef struct _TOKEN_PRIVILEGES {
+ # DWORD PrivilegeCount;
+ # LUID_AND_ATTRIBUTES Privileges[ANYSIZE_ARRAY];
+ # } TOKEN_PRIVILEGES, *PTOKEN_PRIVILEGES;
+ class TOKEN_PRIVILEGES < FFI::Struct
+ layout :PrivilegeCount, :dword,
+ :Privileges, [LUID_AND_ATTRIBUTES, 1] # placeholder for offset
+ end
+
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/bb530717(v=vs.85).aspx
+ # typedef struct _TOKEN_ELEVATION {
+ # DWORD TokenIsElevated;
+ # } TOKEN_ELEVATION, *PTOKEN_ELEVATION;
+ class TOKEN_ELEVATION < FFI::Struct
+ layout :TokenIsElevated, :dword
+ end
+
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/aa446671(v=vs.85).aspx
+ # BOOL WINAPI GetTokenInformation(
+ # _In_ HANDLE TokenHandle,
+ # _In_ TOKEN_INFORMATION_CLASS TokenInformationClass,
+ # _Out_opt_ LPVOID TokenInformation,
+ # _In_ DWORD TokenInformationLength,
+ # _Out_ PDWORD ReturnLength
+ # );
+ ffi_lib :advapi32
+ attach_function_private :GetTokenInformation,
+ [:handle, TOKEN_INFORMATION_CLASS, :lpvoid, :dword, :pdword ], :win32_bool
end
diff --git a/lib/puppet/util/windows/registry.rb b/lib/puppet/util/windows/registry.rb
index 13b931ec0..84cdde793 100644
--- a/lib/puppet/util/windows/registry.rb
+++ b/lib/puppet/util/windows/registry.rb
@@ -1,70 +1,80 @@
require 'puppet/util/windows'
module Puppet::Util::Windows
module Registry
+ require 'ffi'
+ extend FFI::Library
+
# http://msdn.microsoft.com/en-us/library/windows/desktop/aa384129(v=vs.85).aspx
KEY64 = 0x100
KEY32 = 0x200
KEY_READ = 0x20019
KEY_WRITE = 0x20006
KEY_ALL_ACCESS = 0x2003f
def root(name)
Win32::Registry.const_get(name)
rescue NameError
raise Puppet::Error, "Invalid registry key '#{name}'", $!.backtrace
end
def open(name, path, mode = KEY_READ | KEY64, &block)
hkey = root(name)
begin
hkey.open(path, mode) do |subkey|
return yield subkey
end
rescue Win32::Registry::Error => error
raise Puppet::Util::Windows::Error.new("Failed to open registry key '#{hkey.keyname}\\#{path}'", error.code, error)
end
end
def values(subkey)
values = {}
subkey.each_value do |name, type, data|
case type
when Win32::Registry::REG_MULTI_SZ
data.each { |str| force_encoding(str) }
when Win32::Registry::REG_SZ, Win32::Registry::REG_EXPAND_SZ
force_encoding(data)
end
values[name] = data
end
values
end
if defined?(Encoding)
def force_encoding(str)
if @encoding.nil?
# See https://bugs.ruby-lang.org/issues/8943
# Ruby uses ANSI versions of Win32 APIs to read values from the
# registry. The encoding of these strings depends on the active
# code page. However, ruby incorrectly sets the string
# encoding to US-ASCII. So we must force the encoding to the
# correct value.
- require 'windows/national'
begin
- cp = Windows::National::GetACP.call
+ cp = GetACP()
@encoding = Encoding.const_get("CP#{cp}")
rescue
@encoding = Encoding.default_external
end
end
str.force_encoding(@encoding)
end
else
def force_encoding(str, enc)
end
end
private :force_encoding
+
+
+ ffi_convention :stdcall
+
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/dd318070(v=vs.85).aspx
+ # UINT GetACP(void);
+ ffi_lib :kernel32
+ attach_function_private :GetACP, [], :uint32
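+
+    # Illustrative note: GetACP() returns the active ANSI code page, e.g. 1252 on
+    # many Western-locale systems, which force_encoding maps to Encoding::CP1252.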
end
end
diff --git a/lib/puppet/util/windows/root_certs.rb b/lib/puppet/util/windows/root_certs.rb
index 7988ab832..e15a203f0 100644
--- a/lib/puppet/util/windows/root_certs.rb
+++ b/lib/puppet/util/windows/root_certs.rb
@@ -1,101 +1,108 @@
require 'puppet/util/windows'
require 'openssl'
require 'ffi'
# Represents a collection of trusted root certificates.
#
# @api public
class Puppet::Util::Windows::RootCerts
include Enumerable
extend FFI::Library
- typedef :ulong, :dword
- typedef :uintptr_t, :handle
-
def initialize(roots)
@roots = roots
end
# Enumerates each root certificate.
# @yieldparam cert [OpenSSL::X509::Certificate] each root certificate
# @api public
def each
@roots.each {|cert| yield cert}
end
# Returns a new instance.
# @return [Puppet::Util::Windows::RootCerts] object constructed from current root certificates
def self.instance
new(self.load_certs)
end
# Returns an array of root certificates.
#
# @return [Array<[OpenSSL::X509::Certificate]>] an array of root certificates
# @api private
def self.load_certs
certs = []
# This is based on a patch submitted to openssl:
# http://www.mail-archive.com/openssl-dev@openssl.org/msg26958.html
ptr = FFI::Pointer::NULL
store = CertOpenSystemStoreA(nil, "ROOT")
begin
while (ptr = CertEnumCertificatesInStore(store, ptr)) and not ptr.null?
context = CERT_CONTEXT.new(ptr)
cert_buf = context[:pbCertEncoded].read_bytes(context[:cbCertEncoded])
begin
certs << OpenSSL::X509::Certificate.new(cert_buf)
rescue => detail
Puppet.warning("Failed to import root certificate: #{detail.inspect}")
end
end
ensure
CertCloseStore(store, 0)
end
certs
end
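+
+  # Illustrative usage: enumerate the machine's trusted root certificates
+  #   Puppet::Util::Windows::RootCerts.instance.each do |cert|
+  #     puts cert.subject  # each is an OpenSSL::X509::Certificate
+  #   end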
- private
-
- # typedef ULONG_PTR HCRYPTPROV_LEGACY;
+ ffi_convention :stdcall
# typedef void *HCERTSTORE;
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/aa377189(v=vs.85).aspx
+ # typedef struct _CERT_CONTEXT {
+ # DWORD dwCertEncodingType;
+ # BYTE *pbCertEncoded;
+ # DWORD cbCertEncoded;
+ # PCERT_INFO pCertInfo;
+ # HCERTSTORE hCertStore;
+ #  } CERT_CONTEXT, *PCERT_CONTEXT;
+ #  typedef const CERT_CONTEXT *PCCERT_CONTEXT;
class CERT_CONTEXT < FFI::Struct
layout(
:dwCertEncodingType, :dword,
:pbCertEncoded, :pointer,
:cbCertEncoded, :dword,
:pCertInfo, :pointer,
:hCertStore, :handle
)
end
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/aa376560(v=vs.85).aspx
# HCERTSTORE
# WINAPI
# CertOpenSystemStoreA(
# __in_opt HCRYPTPROV_LEGACY hProv,
# __in LPCSTR szSubsystemProtocol
# );
+ # typedef ULONG_PTR HCRYPTPROV_LEGACY;
ffi_lib :crypt32
- attach_function :CertOpenSystemStoreA, [:pointer, :string], :handle
+ attach_function_private :CertOpenSystemStoreA, [:ulong_ptr, :lpcstr], :handle
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/aa376050(v=vs.85).aspx
# PCCERT_CONTEXT
# WINAPI
# CertEnumCertificatesInStore(
# __in HCERTSTORE hCertStore,
# __in_opt PCCERT_CONTEXT pPrevCertContext
# );
ffi_lib :crypt32
- attach_function :CertEnumCertificatesInStore, [:handle, :pointer], :pointer
+ attach_function_private :CertEnumCertificatesInStore, [:handle, :pointer], :pointer
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/aa376026(v=vs.85).aspx
# BOOL
# WINAPI
# CertCloseStore(
# __in_opt HCERTSTORE hCertStore,
# __in DWORD dwFlags
# );
ffi_lib :crypt32
- attach_function :CertCloseStore, [:handle, :dword], :bool
+ attach_function_private :CertCloseStore, [:handle, :dword], :win32_bool
end
diff --git a/lib/puppet/util/windows/security.rb b/lib/puppet/util/windows/security.rb
index d1c1bcd34..8dc03a9a2 100644
--- a/lib/puppet/util/windows/security.rb
+++ b/lib/puppet/util/windows/security.rb
@@ -1,651 +1,920 @@
# This class maps POSIX owner, group, and modes to the Windows
# security model, and back.
#
# The primary goal of this mapping is to ensure that owner, group, and
# modes can be round-tripped in a consistent and deterministic
# way. Otherwise, Puppet might think file resources are out-of-sync
# every time it runs. A secondary goal is to provide equivalent
# permissions for common use-cases. For example, setting the owner to
# "Administrators", group to "Users", and mode to 750 (which also
# denies access to everyone else).
#
# There are some well-known problems mapping windows and POSIX
# permissions due to differences between the two security
# models. Search for "POSIX permission mapping leak". In POSIX, access
# to a file is determined solely based on the most specific class
# (user, group, other). So a mode of 460 would deny write access to
# the owner even if they are a member of the group. But in Windows,
# the entire access control list is walked until the user is
# explicitly denied or allowed (denied take precedence, and if neither
# occurs they are denied). As a result, a user could be allowed access
# based on their group membership. To solve this problem, other people
# have used deny access control entries to more closely model POSIX,
# but this introduces a lot of complexity.
#
# In general, this implementation only supports "typical" permissions,
# where group permissions are a subset of user, and other permissions
# are a subset of group, e.g. 754, but not 467. However, there are
# some Windows quirks to be aware of.
#
# * The owner can be either a user or group SID, and most system files
# are owned by the Administrators group.
# * The group can be either a user or group SID.
# * Unexpected results can occur if the owner and group are the
# same, but the user and group classes are different, e.g. 750. In
# this case, it is not possible to allow write access to the owner,
# but not the group. As a result, the actual permissions set on the
# file would be 770.
# * In general, only privileged users can set the owner, group, or
# change the mode for files they do not own. In 2003, the user must
# be a member of the Administrators group. In Vista/2008, the user
# must be running with elevated privileges.
# * A file/dir can be deleted by anyone with the DELETE access right
# OR by anyone that has the FILE_DELETE_CHILD access right for the
# parent. See http://support.microsoft.com/kb/238018. But on Unix,
# the user must have write access to the file/dir AND execute access
# to all of the parent path components.
# * Many access control entries are inherited from parent directories,
# and it is common for file/dirs to have more than 3 entries,
# e.g. Users, Power Users, Administrators, SYSTEM, etc, which cannot
# be mapped into the 3 class POSIX model. The get_mode method will
# set the S_IEXTRA bit flag indicating that an access control entry
# was found whose SID is neither the owner, group, or other. This
# enables Puppet to detect when file/dirs are out-of-sync,
# especially those that Puppet did not create, but is attempting
# to manage.
# * A special case of this is S_ISYSTEM_MISSING, which is set when the
# SYSTEM permissions are *not* present on the DACL.
# * On Unix, the owner and group can be modified without changing the
# mode. But on Windows, an access control entry specifies which SID
# it applies to. As a result, the set_owner and set_group methods
# automatically rebuild the access control list based on the new
# (and different) owner or group.
require 'puppet/util/windows'
require 'pathname'
require 'ffi'
require 'win32/security'
-require 'windows/file'
-require 'windows/handle'
-require 'windows/security'
-require 'windows/process'
-require 'windows/memory'
-require 'windows/msvcrt/buffer'
-require 'windows/volume'
-
module Puppet::Util::Windows::Security
- include ::Windows::File
- include ::Windows::Handle
- include ::Windows::Security
- include ::Windows::Process
- include ::Windows::Memory
- include ::Windows::MSVCRT::Buffer
- include ::Windows::Volume
-
- include Puppet::Util::Windows::SID
+ include Puppet::Util::Windows::String
extend Puppet::Util::Windows::Security
+ extend FFI::Library
# file modes
S_IRUSR = 0000400
S_IRGRP = 0000040
S_IROTH = 0000004
S_IWUSR = 0000200
S_IWGRP = 0000020
S_IWOTH = 0000002
S_IXUSR = 0000100
S_IXGRP = 0000010
S_IXOTH = 0000001
S_IRWXU = 0000700
S_IRWXG = 0000070
S_IRWXO = 0000007
S_ISVTX = 0001000
S_IEXTRA = 02000000 # represents an extra ace
S_ISYSTEM_MISSING = 04000000
# constants that are missing from Windows::Security
PROTECTED_DACL_SECURITY_INFORMATION = 0x80000000
UNPROTECTED_DACL_SECURITY_INFORMATION = 0x20000000
NO_INHERITANCE = 0x0
SE_DACL_PROTECTED = 0x1000
+ FILE = Puppet::Util::Windows::File
+
+ SE_BACKUP_NAME = 'SeBackupPrivilege'
+ SE_RESTORE_NAME = 'SeRestorePrivilege'
+
+ DELETE = 0x00010000
+ READ_CONTROL = 0x20000
+ WRITE_DAC = 0x40000
+ WRITE_OWNER = 0x80000
+
+ OWNER_SECURITY_INFORMATION = 1
+ GROUP_SECURITY_INFORMATION = 2
+ DACL_SECURITY_INFORMATION = 4
+
# Set the owner of the object referenced by +path+ to the specified
# +owner_sid+. The owner sid should be of the form "S-1-5-32-544"
# and can either be a user or group. Only a user with the
# SE_RESTORE_NAME privilege in their process token can overwrite the
# object's owner to something other than the current user.
def set_owner(owner_sid, path)
sd = get_security_descriptor(path)
if owner_sid != sd.owner
sd.owner = owner_sid
set_security_descriptor(path, sd)
end
end
# Get the owner of the object referenced by +path+. The returned
# value is a SID string, e.g. "S-1-5-32-544". Any user with read
# access to an object can get the owner. Only a user with the
# SE_BACKUP_NAME privilege in their process token can get the owner
# for objects they do not have read access to.
def get_owner(path)
return unless supports_acl?(path)
get_security_descriptor(path).owner
end
# Set the group of the object referenced by +path+ to the specified
# +group_sid+. The group sid should be of the form "S-1-5-32-544"
# and can either be a user or group. Any user with WRITE_OWNER
# access to the object can change the group (regardless of whether
# the current user belongs to that group or not).
def set_group(group_sid, path)
sd = get_security_descriptor(path)
if group_sid != sd.group
sd.group = group_sid
set_security_descriptor(path, sd)
end
end
# Get the group of the object referenced by +path+. The returned
# value is a SID string, e.g. "S-1-5-32-544". Any user with read
# access to an object can get the group. Only a user with the
# SE_BACKUP_NAME privilege in their process token can get the group
# for objects they do not have read access to.
def get_group(path)
return unless supports_acl?(path)
get_security_descriptor(path).group
end
- def supports_acl?(path)
- flags = 0.chr * 4
+ FILE_PERSISTENT_ACLS = 0x00000008
+ def supports_acl?(path)
+ supported = false
root = Pathname.new(path).enum_for(:ascend).to_a.last.to_s
# 'A trailing backslash is required'
root = "#{root}\\" unless root =~ /[\/\\]$/
- unless GetVolumeInformation(root, nil, 0, nil, nil, flags, nil, 0)
- raise Puppet::Util::Windows::Error.new("Failed to get volume information")
+
+ FFI::MemoryPointer.new(:pointer, 1) do |flags_ptr|
+ if GetVolumeInformationW(wide_string(root), FFI::Pointer::NULL, 0,
+ FFI::Pointer::NULL, FFI::Pointer::NULL,
+ flags_ptr, FFI::Pointer::NULL, 0) == FFI::WIN32_FALSE
+ raise Puppet::Util::Windows::Error.new("Failed to get volume information")
+ end
+ supported = flags_ptr.read_dword & FILE_PERSISTENT_ACLS == FILE_PERSISTENT_ACLS
end
- (flags.unpack('L')[0] & Windows::File::FILE_PERSISTENT_ACLS) != 0
+ supported
end
def get_attributes(path)
- attributes = GetFileAttributes(path)
-
- raise Puppet::Util::Windows::Error.new("Failed to get file attributes") if attributes == INVALID_FILE_ATTRIBUTES
-
- attributes
+ Puppet.deprecation_warning('Puppet::Util::Windows::Security.get_attributes is deprecated; please use Puppet::Util::Windows::File.get_attributes')
+ FILE.get_attributes(path)
end
def add_attributes(path, flags)
- oldattrs = get_attributes(path)
-
- if (oldattrs | flags) != oldattrs
- set_attributes(path, oldattrs | flags)
- end
+ Puppet.deprecation_warning('Puppet::Util::Windows::Security.add_attributes is deprecated; please use Puppet::Util::Windows::File.add_attributes')
+ FILE.add_attributes(path, flags)
end
def remove_attributes(path, flags)
- oldattrs = get_attributes(path)
-
- if (oldattrs & ~flags) != oldattrs
- set_attributes(path, oldattrs & ~flags)
- end
+ Puppet.deprecation_warning('Puppet::Util::Windows::Security.remove_attributes is deprecated; please use Puppet::Util::Windows::File.remove_attributes')
+ FILE.remove_attributes(path, flags)
end
def set_attributes(path, flags)
- raise Puppet::Util::Windows::Error.new("Failed to set file attributes") unless SetFileAttributes(path, flags)
+ Puppet.deprecation_warning('Puppet::Util::Windows::Security.set_attributes is deprecated; please use Puppet::Util::Windows::File.set_attributes')
+ FILE.set_attributes(path, flags)
end
MASK_TO_MODE = {
- FILE_GENERIC_READ => S_IROTH,
- FILE_GENERIC_WRITE => S_IWOTH,
- (FILE_GENERIC_EXECUTE & ~FILE_READ_ATTRIBUTES) => S_IXOTH
+ FILE::FILE_GENERIC_READ => S_IROTH,
+ FILE::FILE_GENERIC_WRITE => S_IWOTH,
+ (FILE::FILE_GENERIC_EXECUTE & ~FILE::FILE_READ_ATTRIBUTES) => S_IXOTH
}
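The get_mode method below shifts these per-class bits into the owner (<< 6), group (<< 3), and other positions; a self-contained sketch of that shifting, using placeholder masks rather than the real FILE_GENERIC_* values:

````
# Simplified sketch of the shifting done in get_mode; the mask values here
# are placeholders, not the real Puppet::Util::Windows::File constants.
EXAMPLE_MASK_TO_MODE = { 0x1 => 0004, 0x2 => 0002, 0x4 => 0001 }

def mode_bits_for(mask, shift)
  EXAMPLE_MASK_TO_MODE.reduce(0) do |mode, (k, v)|
    (mask & k) == k ? mode | (v << shift) : mode
  end
end

owner_mask = 0x7                           # pretend the ACE grants read/write/execute
puts mode_bits_for(owner_mask, 6).to_s(8)  # => "700"
````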
def get_aces_for_path_by_sid(path, sid)
get_security_descriptor(path).dacl.select { |ace| ace.sid == sid }
end
# Get the mode of the object referenced by +path+. The returned
# integer value represents the POSIX-style read, write, and execute
# modes for the user, group, and other classes, e.g. 0640. Any user
# with read access to an object can get the mode. Only a user with
# the SE_BACKUP_NAME privilege in their process token can get the
# mode for objects they do not have read access to.
def get_mode(path)
return unless supports_acl?(path)
well_known_world_sid = Win32::Security::SID::Everyone
well_known_nobody_sid = Win32::Security::SID::Nobody
well_known_system_sid = Win32::Security::SID::LocalSystem
mode = S_ISYSTEM_MISSING
sd = get_security_descriptor(path)
sd.dacl.each do |ace|
next if ace.inherit_only?
case ace.sid
when sd.owner
MASK_TO_MODE.each_pair do |k,v|
if (ace.mask & k) == k
mode |= (v << 6)
end
end
when sd.group
MASK_TO_MODE.each_pair do |k,v|
if (ace.mask & k) == k
mode |= (v << 3)
end
end
when well_known_world_sid
MASK_TO_MODE.each_pair do |k,v|
if (ace.mask & k) == k
mode |= (v << 6) | (v << 3) | v
end
end
- if File.directory?(path) && (ace.mask & (FILE_WRITE_DATA | FILE_EXECUTE | FILE_DELETE_CHILD)) == (FILE_WRITE_DATA | FILE_EXECUTE)
+ if File.directory?(path) &&
+ (ace.mask & (FILE::FILE_WRITE_DATA | FILE::FILE_EXECUTE | FILE::FILE_DELETE_CHILD)) == (FILE::FILE_WRITE_DATA | FILE::FILE_EXECUTE)
mode |= S_ISVTX;
end
when well_known_nobody_sid
- if (ace.mask & FILE_APPEND_DATA).nonzero?
+ if (ace.mask & FILE::FILE_APPEND_DATA).nonzero?
mode |= S_ISVTX
end
when well_known_system_sid
else
#puts "Warning, unable to map SID into POSIX mode: #{ace.sid}"
mode |= S_IEXTRA
end
if ace.sid == well_known_system_sid
mode &= ~S_ISYSTEM_MISSING
end
# if owner and group are the same, then user and group modes are the OR of both
if sd.owner == sd.group
mode |= ((mode & S_IRWXG) << 3) | ((mode & S_IRWXU) >> 3)
#puts "owner: #{sd.group}, 0x#{ace.mask.to_s(16)}, #{mode.to_s(8)}"
end
end
#puts "get_mode: #{mode.to_s(8)}"
mode
end
MODE_TO_MASK = {
- S_IROTH => FILE_GENERIC_READ,
- S_IWOTH => FILE_GENERIC_WRITE,
- S_IXOTH => (FILE_GENERIC_EXECUTE & ~FILE_READ_ATTRIBUTES),
+ S_IROTH => FILE::FILE_GENERIC_READ,
+ S_IWOTH => FILE::FILE_GENERIC_WRITE,
+ S_IXOTH => (FILE::FILE_GENERIC_EXECUTE & ~FILE::FILE_READ_ATTRIBUTES),
}
# Set the mode of the object referenced by +path+ to the specified
# +mode+. The mode should be specified as POSIX-style read, write,
# and execute modes for the user, group, and other classes,
# e.g. 0640. The sticky bit, S_ISVTX, is supported, but is only
# meaningful for directories. If set, group and others are not
# allowed to delete child objects for which they are not the owner.
# By default, the DACL is set to protected, meaning it does not
# inherit access control entries from parent objects. This can be
# changed by setting +protected+ to false. The owner of the object
# (with READ_CONTROL and WRITE_DACL access) can always change the
# mode. Only a user with the SE_BACKUP_NAME and SE_RESTORE_NAME
# privileges in their process token can change the mode for objects
# that they do not have read and write access to.
def set_mode(mode, path, protected = true)
sd = get_security_descriptor(path)
well_known_world_sid = Win32::Security::SID::Everyone
well_known_nobody_sid = Win32::Security::SID::Nobody
well_known_system_sid = Win32::Security::SID::LocalSystem
- owner_allow = STANDARD_RIGHTS_ALL | FILE_READ_ATTRIBUTES | FILE_WRITE_ATTRIBUTES
- group_allow = STANDARD_RIGHTS_READ | FILE_READ_ATTRIBUTES | SYNCHRONIZE
- other_allow = STANDARD_RIGHTS_READ | FILE_READ_ATTRIBUTES | SYNCHRONIZE
+ owner_allow = FILE::STANDARD_RIGHTS_ALL |
+ FILE::FILE_READ_ATTRIBUTES |
+ FILE::FILE_WRITE_ATTRIBUTES
+ group_allow = FILE::STANDARD_RIGHTS_READ |
+ FILE::FILE_READ_ATTRIBUTES |
+ FILE::SYNCHRONIZE
+ other_allow = FILE::STANDARD_RIGHTS_READ |
+ FILE::FILE_READ_ATTRIBUTES |
+ FILE::SYNCHRONIZE
nobody_allow = 0
system_allow = 0
MODE_TO_MASK.each do |k,v|
if ((mode >> 6) & k) == k
owner_allow |= v
end
if ((mode >> 3) & k) == k
group_allow |= v
end
if (mode & k) == k
other_allow |= v
end
end
if (mode & S_ISVTX).nonzero?
- nobody_allow |= FILE_APPEND_DATA;
+ nobody_allow |= FILE::FILE_APPEND_DATA;
end
# caller is NOT managing SYSTEM by using group or owner, so set to FULL
if ! [sd.owner, sd.group].include? well_known_system_sid
# we don't check S_ISYSTEM_MISSING bit, but automatically carry over existing SYSTEM perms
# by default set SYSTEM perms to full
- system_allow = FILE_ALL_ACCESS
+ system_allow = FILE::FILE_ALL_ACCESS
end
isdir = File.directory?(path)
if isdir
if (mode & (S_IWUSR | S_IXUSR)) == (S_IWUSR | S_IXUSR)
- owner_allow |= FILE_DELETE_CHILD
+ owner_allow |= FILE::FILE_DELETE_CHILD
end
if (mode & (S_IWGRP | S_IXGRP)) == (S_IWGRP | S_IXGRP) && (mode & S_ISVTX) == 0
- group_allow |= FILE_DELETE_CHILD
+ group_allow |= FILE::FILE_DELETE_CHILD
end
if (mode & (S_IWOTH | S_IXOTH)) == (S_IWOTH | S_IXOTH) && (mode & S_ISVTX) == 0
- other_allow |= FILE_DELETE_CHILD
+ other_allow |= FILE::FILE_DELETE_CHILD
end
end
# if owner and group are the same, then map group permissions to the one owner ACE
isownergroup = sd.owner == sd.group
if isownergroup
owner_allow |= group_allow
end
# if any ACE allows write, then clear readonly bit, but do this before we overwrite
# the DACL and lose our ability to set the attribute
- if ((owner_allow | group_allow | other_allow ) & FILE_WRITE_DATA) == FILE_WRITE_DATA
- remove_attributes(path, FILE_ATTRIBUTE_READONLY)
+ if ((owner_allow | group_allow | other_allow ) & FILE::FILE_WRITE_DATA) == FILE::FILE_WRITE_DATA
+ FILE.remove_attributes(path, FILE::FILE_ATTRIBUTE_READONLY)
end
dacl = Puppet::Util::Windows::AccessControlList.new
dacl.allow(sd.owner, owner_allow)
unless isownergroup
dacl.allow(sd.group, group_allow)
end
dacl.allow(well_known_world_sid, other_allow)
dacl.allow(well_known_nobody_sid, nobody_allow)
# TODO: system should be first?
dacl.allow(well_known_system_sid, system_allow)
# add inherit-only aces for child dirs and files that are created within the dir
+ inherit_only = Puppet::Util::Windows::AccessControlEntry::INHERIT_ONLY_ACE
if isdir
- inherit = INHERIT_ONLY_ACE | CONTAINER_INHERIT_ACE
+ inherit = inherit_only | Puppet::Util::Windows::AccessControlEntry::CONTAINER_INHERIT_ACE
dacl.allow(Win32::Security::SID::CreatorOwner, owner_allow, inherit)
dacl.allow(Win32::Security::SID::CreatorGroup, group_allow, inherit)
- inherit = INHERIT_ONLY_ACE | OBJECT_INHERIT_ACE
- dacl.allow(Win32::Security::SID::CreatorOwner, owner_allow & ~FILE_EXECUTE, inherit)
- dacl.allow(Win32::Security::SID::CreatorGroup, group_allow & ~FILE_EXECUTE, inherit)
+ inherit = inherit_only | Puppet::Util::Windows::AccessControlEntry::OBJECT_INHERIT_ACE
+ dacl.allow(Win32::Security::SID::CreatorOwner, owner_allow & ~FILE::FILE_EXECUTE, inherit)
+ dacl.allow(Win32::Security::SID::CreatorGroup, group_allow & ~FILE::FILE_EXECUTE, inherit)
end
new_sd = Puppet::Util::Windows::SecurityDescriptor.new(sd.owner, sd.group, dacl, protected)
set_security_descriptor(path, new_sd)
nil
end
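A usage sketch of the pair of methods above (Windows only; 'C:/tmp/example.txt' is a hypothetical path the caller is allowed to rewrite the DACL on):

````
# Usage sketch: set a file to 0750 and read the mode back.
require 'puppet/util/windows'

path = 'C:/tmp/example.txt'   # hypothetical existing file
security = Puppet::Util::Windows::Security
security.set_mode(0750, path)
printf("%o\n", security.get_mode(path))   # typically 750 when no extra ACEs exist
````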
+ ACL_REVISION = 2
+
def add_access_allowed_ace(acl, mask, sid, inherit = nil)
inherit ||= NO_INHERITANCE
- string_to_sid_ptr(sid) do |sid_ptr|
- raise Puppet::Util::Windows::Error.new("Invalid SID") unless IsValidSid(sid_ptr)
+ Puppet::Util::Windows::SID.string_to_sid_ptr(sid) do |sid_ptr|
+ if Puppet::Util::Windows::SID.IsValidSid(sid_ptr) == FFI::WIN32_FALSE
+ raise Puppet::Util::Windows::Error.new("Invalid SID")
+ end
- unless AddAccessAllowedAceEx(acl, ACL_REVISION, inherit, mask, sid_ptr)
+ if AddAccessAllowedAceEx(acl, ACL_REVISION, inherit, mask, sid_ptr) == FFI::WIN32_FALSE
raise Puppet::Util::Windows::Error.new("Failed to add access control entry")
end
end
+
+ # ensure this method is void if it doesn't raise
+ nil
end
def add_access_denied_ace(acl, mask, sid, inherit = nil)
inherit ||= NO_INHERITANCE
- string_to_sid_ptr(sid) do |sid_ptr|
- raise Puppet::Util::Windows::Error.new("Invalid SID") unless IsValidSid(sid_ptr)
+ Puppet::Util::Windows::SID.string_to_sid_ptr(sid) do |sid_ptr|
+ if Puppet::Util::Windows::SID.IsValidSid(sid_ptr) == FFI::WIN32_FALSE
+ raise Puppet::Util::Windows::Error.new("Invalid SID")
+ end
- unless AddAccessDeniedAceEx(acl, ACL_REVISION, inherit, mask, sid_ptr)
+ if AddAccessDeniedAceEx(acl, ACL_REVISION, inherit, mask, sid_ptr) == FFI::WIN32_FALSE
raise Puppet::Util::Windows::Error.new("Failed to add access control entry")
end
end
+
+ # ensure this method is void if it doesn't raise
+ nil
end
def parse_dacl(dacl_ptr)
# REMIND: need to handle NULL DACL
- raise Puppet::Util::Windows::Error.new("Invalid DACL") unless IsValidAcl(dacl_ptr)
-
- # ACL structure, size and count are the important parts. The
- # size includes both the ACL structure and all the ACEs.
- #
- # BYTE AclRevision
- # BYTE Padding1
- # WORD AclSize
- # WORD AceCount
- # WORD Padding2
- acl_buf = 0.chr * 8
- memcpy(acl_buf, dacl_ptr, acl_buf.size)
- ace_count = acl_buf.unpack('CCSSS')[3]
+ if IsValidAcl(dacl_ptr) == FFI::WIN32_FALSE
+ raise Puppet::Util::Windows::Error.new("Invalid DACL")
+ end
+
+ dacl_struct = ACL.new(dacl_ptr)
+ ace_count = dacl_struct[:AceCount]
dacl = Puppet::Util::Windows::AccessControlList.new
# deny all
return dacl if ace_count == 0
0.upto(ace_count - 1) do |i|
- ace_ptr = [0].pack('L')
-
- next unless GetAce(dacl_ptr, i, ace_ptr)
-
- # ACE structures vary depending on the type. All structures
- # begin with an ACE header, which specifies the type, flags
- # and size of what follows. We are only concerned with
- # ACCESS_ALLOWED_ACE and ACCESS_DENIED_ACEs, which have the
- # same structure:
- #
- # BYTE C AceType
- # BYTE C AceFlags
- # WORD S AceSize
- # DWORD L ACCESS_MASK
- # DWORD L Sid
- # .. ...
- # DWORD L Sid
-
- ace_buf = 0.chr * 8
- memcpy(ace_buf, ace_ptr.unpack('L')[0], ace_buf.size)
-
- ace_type, ace_flags, size, mask = ace_buf.unpack('CCSL')
-
- case ace_type
- when ACCESS_ALLOWED_ACE_TYPE
- sid_ptr = ace_ptr.unpack('L')[0] + 8 # address of ace_ptr->SidStart
- raise Puppet::Util::Windows::Error.new("Failed to read DACL, invalid SID") unless IsValidSid(sid_ptr)
- sid = sid_ptr_to_string(sid_ptr)
- dacl.allow(sid, mask, ace_flags)
- when ACCESS_DENIED_ACE_TYPE
- sid_ptr = ace_ptr.unpack('L')[0] + 8 # address of ace_ptr->SidStart
- raise Puppet::Util::Windows::Error.new("Failed to read DACL, invalid SID") unless IsValidSid(sid_ptr)
- sid = sid_ptr_to_string(sid_ptr)
- dacl.deny(sid, mask, ace_flags)
- else
- Puppet.warning "Unsupported access control entry type: 0x#{ace_type.to_s(16)}"
+ FFI::MemoryPointer.new(:pointer, 1) do |ace_ptr|
+
+ next if GetAce(dacl_ptr, i, ace_ptr) == FFI::WIN32_FALSE
+
+ # ACE structures vary depending on the type. We are only concerned with
+ # ACCESS_ALLOWED_ACE and ACCESS_DENIED_ACEs, which have the same layout
+ ace = GENERIC_ACCESS_ACE.new(ace_ptr.get_pointer(0)) #deref LPVOID *
+
+ ace_type = ace[:Header][:AceType]
+ if ace_type != Puppet::Util::Windows::AccessControlEntry::ACCESS_ALLOWED_ACE_TYPE &&
+ ace_type != Puppet::Util::Windows::AccessControlEntry::ACCESS_DENIED_ACE_TYPE
+ Puppet.warning "Unsupported access control entry type: 0x#{ace_type.to_s(16)}"
+ next
+ end
+
+ # using pointer addition gives the FFI::Pointer a size, but that's OK here
+ sid = Puppet::Util::Windows::SID.sid_ptr_to_string(ace.pointer + GENERIC_ACCESS_ACE.offset_of(:SidStart))
+ mask = ace[:Mask]
+ ace_flags = ace[:Header][:AceFlags]
+
+ case ace_type
+ when Puppet::Util::Windows::AccessControlEntry::ACCESS_ALLOWED_ACE_TYPE
+ dacl.allow(sid, mask, ace_flags)
+ when Puppet::Util::Windows::AccessControlEntry::ACCESS_DENIED_ACE_TYPE
+ dacl.deny(sid, mask, ace_flags)
+ end
end
end
dacl
end
# Open an existing file with the specified access mode, and execute a
# block with the opened file HANDLE.
- def open_file(path, access)
- handle = CreateFile(
- path,
+ def open_file(path, access, &block)
+ handle = CreateFileW(
+ wide_string(path),
access,
- FILE_SHARE_READ | FILE_SHARE_WRITE,
- 0, # security_attributes
- OPEN_EXISTING,
- FILE_FLAG_OPEN_REPARSE_POINT | FILE_FLAG_BACKUP_SEMANTICS,
- 0) # template
- raise Puppet::Util::Windows::Error.new("Failed to open '#{path}'") if handle == INVALID_HANDLE_VALUE
+ FILE::FILE_SHARE_READ | FILE::FILE_SHARE_WRITE,
+ FFI::Pointer::NULL, # security_attributes
+ FILE::OPEN_EXISTING,
+ FILE::FILE_FLAG_OPEN_REPARSE_POINT | FILE::FILE_FLAG_BACKUP_SEMANTICS,
+ FFI::Pointer::NULL_HANDLE) # template
+
+ if handle == Puppet::Util::Windows::File::INVALID_HANDLE_VALUE
+ raise Puppet::Util::Windows::Error.new("Failed to open '#{path}'")
+ end
+
begin
yield handle
ensure
- CloseHandle(handle)
+ FFI::WIN32.CloseHandle(handle) if handle
end
+
+ # handle has already had CloseHandle called against it, nothing to return
+ nil
end
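The helpers further down (get_security_descriptor, set_security_descriptor) consume open_file in this block form; a minimal sketch of that shape, with the path hypothetical:

````
# Sketch of the block form: the raw HANDLE is only valid inside the block
# and CloseHandle is called on the way out.
security = Puppet::Util::Windows::Security
security.open_file('C:/tmp/example.txt', Puppet::Util::Windows::Security::READ_CONTROL) do |handle|
  # pass handle to GetSecurityInfo / SetSecurityInfo here
end
````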
# Execute a block with the specified privilege enabled
- def with_privilege(privilege)
+ def with_privilege(privilege, &block)
set_privilege(privilege, true)
yield
ensure
set_privilege(privilege, false)
end
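with_privilege is how the rest of this file temporarily enables SeBackupPrivilege/SeRestorePrivilege around descriptor reads and writes; a minimal sketch:

````
# Sketch: enable SeBackupPrivilege only for the duration of the block,
# mirroring how get_security_descriptor uses it below.
security = Puppet::Util::Windows::Security
security.with_privilege(Puppet::Util::Windows::Security::SE_BACKUP_NAME) do
  # privileged reads of otherwise inaccessible security descriptors go here
end
````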
+ SE_PRIVILEGE_ENABLED = 0x00000002
+ TOKEN_ADJUST_PRIVILEGES = 0x0020
+
# Enable or disable a privilege. Note this doesn't add any privileges the
# user doesn't already have, it just enables privileges that are disabled.
def set_privilege(privilege, enable)
return unless Puppet.features.root?
- with_process_token(TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY) do |token|
- tmpLuid = 0.chr * 8
-
- # Get the LUID for specified privilege.
- unless LookupPrivilegeValue("", privilege, tmpLuid)
- raise Puppet::Util::Windows::Error.new("Failed to lookup privilege")
- end
-
- # DWORD + [LUID + DWORD]
- tkp = [1].pack('L') + tmpLuid + [enable ? SE_PRIVILEGE_ENABLED : 0].pack('L')
-
- unless AdjustTokenPrivileges(token, 0, tkp, tkp.length , nil, nil)
- raise Puppet::Util::Windows::Error.new("Failed to adjust process privileges")
+ Puppet::Util::Windows::Process.with_process_token(TOKEN_ADJUST_PRIVILEGES) do |token|
+ Puppet::Util::Windows::Process.lookup_privilege_value(privilege) do |luid|
+ FFI::MemoryPointer.new(Puppet::Util::Windows::Process::LUID_AND_ATTRIBUTES.size) do |luid_and_attributes_ptr|
+ # allocate unmanaged memory for structs that we clean up afterwards
+ luid_and_attributes = Puppet::Util::Windows::Process::LUID_AND_ATTRIBUTES.new(luid_and_attributes_ptr)
+ luid_and_attributes[:Luid] = luid
+ luid_and_attributes[:Attributes] = enable ? SE_PRIVILEGE_ENABLED : 0
+
+ FFI::MemoryPointer.new(Puppet::Util::Windows::Process::TOKEN_PRIVILEGES.size) do |token_privileges_ptr|
+ token_privileges = Puppet::Util::Windows::Process::TOKEN_PRIVILEGES.new(token_privileges_ptr)
+ token_privileges[:PrivilegeCount] = 1
+ token_privileges[:Privileges][0] = luid_and_attributes
+
+ # size is correct given we only have 1 LUID, otherwise would be:
+ # [:PrivilegeCount].size + [:PrivilegeCount] * LUID_AND_ATTRIBUTES.size
+ if AdjustTokenPrivileges(token, FFI::WIN32_FALSE,
+ token_privileges, token_privileges.size,
+ FFI::MemoryPointer::NULL, FFI::MemoryPointer::NULL) == FFI::WIN32_FALSE
+ raise Puppet::Util::Windows::Error.new("Failed to adjust process privileges")
+ end
+ end
+ end
end
end
- end
-
- # Execute a block with the current process token
- def with_process_token(access)
- token = 0.chr * 4
- unless OpenProcessToken(GetCurrentProcess(), access, token)
- raise Puppet::Util::Windows::Error.new("Failed to open process token")
- end
- begin
- token = token.unpack('L')[0]
+ # token / luid structs freed by this point, so return true as nothing raised
+ true
+ end
+ def with_process_token(access, &block)
+ Puppet.deprecation_warning('Puppet::Util::Windows::Security.with_process_token is deprecated; please use Puppet::Util::Windows::Process.with_process_token')
+ Puppet::Util::Windows::Process.with_process_token(access) do |token|
yield token
- ensure
- CloseHandle(token)
end
+
+ nil
end
def get_security_descriptor(path)
sd = nil
with_privilege(SE_BACKUP_NAME) do
open_file(path, READ_CONTROL) do |handle|
- owner_sid = [0].pack('L')
- group_sid = [0].pack('L')
- dacl = [0].pack('L')
- ppsd = [0].pack('L')
-
- rv = GetSecurityInfo(
- handle,
- SE_FILE_OBJECT,
- OWNER_SECURITY_INFORMATION | GROUP_SECURITY_INFORMATION | DACL_SECURITY_INFORMATION,
- owner_sid,
- group_sid,
- dacl,
- nil, #sacl
- ppsd) #sec desc
- raise Puppet::Util::Windows::Error.new("Failed to get security information") unless rv == ERROR_SUCCESS
-
- begin
- owner = sid_ptr_to_string(owner_sid.unpack('L')[0])
- group = sid_ptr_to_string(group_sid.unpack('L')[0])
-
- control = FFI::MemoryPointer.new(:uint16, 1)
- revision = FFI::MemoryPointer.new(:uint32, 1)
- ffsd = FFI::Pointer.new(ppsd.unpack('L')[0])
-
- if ! API.get_security_descriptor_control(ffsd, control, revision)
- raise Puppet::Util::Windows::Error.new("Failed to get security descriptor control")
+ FFI::MemoryPointer.new(:pointer, 1) do |owner_sid_ptr_ptr|
+ FFI::MemoryPointer.new(:pointer, 1) do |group_sid_ptr_ptr|
+ FFI::MemoryPointer.new(:pointer, 1) do |dacl_ptr_ptr|
+ FFI::MemoryPointer.new(:pointer, 1) do |sd_ptr_ptr|
+
+ rv = GetSecurityInfo(
+ handle,
+ :SE_FILE_OBJECT,
+ OWNER_SECURITY_INFORMATION | GROUP_SECURITY_INFORMATION | DACL_SECURITY_INFORMATION,
+ owner_sid_ptr_ptr,
+ group_sid_ptr_ptr,
+ dacl_ptr_ptr,
+ FFI::Pointer::NULL, #sacl
+ sd_ptr_ptr) #sec desc
+ raise Puppet::Util::Windows::Error.new("Failed to get security information") if rv != FFI::ERROR_SUCCESS
+
+ # these 2 convenience params are not freed since they point inside sd_ptr
+ owner = Puppet::Util::Windows::SID.sid_ptr_to_string(owner_sid_ptr_ptr.get_pointer(0))
+ group = Puppet::Util::Windows::SID.sid_ptr_to_string(group_sid_ptr_ptr.get_pointer(0))
+
+ FFI::MemoryPointer.new(:word, 1) do |control|
+ FFI::MemoryPointer.new(:dword, 1) do |revision|
+ sd_ptr_ptr.read_win32_local_pointer do |sd_ptr|
+
+ if GetSecurityDescriptorControl(sd_ptr, control, revision) == FFI::WIN32_FALSE
+ raise Puppet::Util::Windows::Error.new("Failed to get security descriptor control")
+ end
+
+ protect = (control.read_word & SE_DACL_PROTECTED) == SE_DACL_PROTECTED
+ dacl = parse_dacl(dacl_ptr_ptr.get_pointer(0))
+ sd = Puppet::Util::Windows::SecurityDescriptor.new(owner, group, dacl, protect)
+ end
+ end
+ end
+ end
+ end
end
-
- protect = (control.read_uint16 & SE_DACL_PROTECTED) == SE_DACL_PROTECTED
-
- dacl = parse_dacl(dacl.unpack('L')[0])
- sd = Puppet::Util::Windows::SecurityDescriptor.new(owner, group, dacl, protect)
- ensure
- LocalFree(ppsd.unpack('L')[0])
end
end
end
sd
end
+ def get_max_generic_acl_size(ace_count)
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/aa378853(v=vs.85).aspx
+ # To calculate the initial size of an ACL, add the following together, and then align the result to the nearest DWORD:
+ # * Size of the ACL structure.
+ # * Size of each ACE structure that the ACL is to contain minus the SidStart member (DWORD) of the ACE.
+ # * Length of the SID that each ACE is to contain.
+ ACL.size + ace_count * MAXIMUM_GENERIC_ACE_SIZE
+ end
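Given the struct layouts declared further down in this file (ACL is 8 bytes; a generic ACE is 8 bytes up to SidStart plus a 68-byte maximum SID), the arithmetic works out as follows; a back-of-the-envelope check rather than authoritative sizing guidance:

````
# Worked example of the sizing rule above for a DACL with three ACEs.
ace_count = 3
max_ace   = 8 + 68            # offset_of(:SidStart) + MAXIMUM_SID_BYTES_LENGTH
acl_bytes = 8 + ace_count * max_ace
puts acl_bytes                # => 236, the buffer passed to InitializeAcl
````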
+
# setting DACL requires both READ_CONTROL and WRITE_DACL access rights,
# and their respective privileges, SE_BACKUP_NAME and SE_RESTORE_NAME.
def set_security_descriptor(path, sd)
- # REMIND: FFI
- acl = 0.chr * 1024 # This can be increased later as neede
- unless InitializeAcl(acl, acl.size, ACL_REVISION)
- raise Puppet::Util::Windows::Error.new("Failed to initialize ACL")
- end
+ FFI::MemoryPointer.new(:byte, get_max_generic_acl_size(sd.dacl.count)) do |acl_ptr|
+ if InitializeAcl(acl_ptr, acl_ptr.size, ACL_REVISION) == FFI::WIN32_FALSE
+ raise Puppet::Util::Windows::Error.new("Failed to initialize ACL")
+ end
- raise Puppet::Util::Windows::Error.new("Invalid DACL") unless IsValidAcl(acl)
+ if IsValidAcl(acl_ptr) == FFI::WIN32_FALSE
+ raise Puppet::Util::Windows::Error.new("Invalid DACL")
+ end
- with_privilege(SE_BACKUP_NAME) do
- with_privilege(SE_RESTORE_NAME) do
- open_file(path, READ_CONTROL | WRITE_DAC | WRITE_OWNER) do |handle|
- string_to_sid_ptr(sd.owner) do |ownersid|
- string_to_sid_ptr(sd.group) do |groupsid|
- sd.dacl.each do |ace|
- case ace.type
- when ACCESS_ALLOWED_ACE_TYPE
- #puts "ace: allow, sid #{sid_to_name(ace.sid)}, mask 0x#{ace.mask.to_s(16)}"
- add_access_allowed_ace(acl, ace.mask, ace.sid, ace.flags)
- when ACCESS_DENIED_ACE_TYPE
- #puts "ace: deny, sid #{sid_to_name(ace.sid)}, mask 0x#{ace.mask.to_s(16)}"
- add_access_denied_ace(acl, ace.mask, ace.sid, ace.flags)
- else
- raise "We should never get here"
- # TODO: this should have been a warning in an earlier commit
+ with_privilege(SE_BACKUP_NAME) do
+ with_privilege(SE_RESTORE_NAME) do
+ open_file(path, READ_CONTROL | WRITE_DAC | WRITE_OWNER) do |handle|
+ Puppet::Util::Windows::SID.string_to_sid_ptr(sd.owner) do |owner_sid_ptr|
+ Puppet::Util::Windows::SID.string_to_sid_ptr(sd.group) do |group_sid_ptr|
+ sd.dacl.each do |ace|
+ case ace.type
+ when Puppet::Util::Windows::AccessControlEntry::ACCESS_ALLOWED_ACE_TYPE
+ #puts "ace: allow, sid #{Puppet::Util::Windows::SID.sid_to_name(ace.sid)}, mask 0x#{ace.mask.to_s(16)}"
+ add_access_allowed_ace(acl_ptr, ace.mask, ace.sid, ace.flags)
+ when Puppet::Util::Windows::AccessControlEntry::ACCESS_DENIED_ACE_TYPE
+ #puts "ace: deny, sid #{Puppet::Util::Windows::SID.sid_to_name(ace.sid)}, mask 0x#{ace.mask.to_s(16)}"
+ add_access_denied_ace(acl_ptr, ace.mask, ace.sid, ace.flags)
+ else
+ raise "We should never get here"
+ # TODO: this should have been a warning in an earlier commit
+ end
end
- end
- # protected means the object does not inherit aces from its parent
- flags = OWNER_SECURITY_INFORMATION | GROUP_SECURITY_INFORMATION | DACL_SECURITY_INFORMATION
- flags |= sd.protect ? PROTECTED_DACL_SECURITY_INFORMATION : UNPROTECTED_DACL_SECURITY_INFORMATION
-
- rv = SetSecurityInfo(handle,
- SE_FILE_OBJECT,
- flags,
- ownersid,
- groupsid,
- acl,
- nil)
- raise Puppet::Util::Windows::Error.new("Failed to set security information") unless rv == ERROR_SUCCESS
+ # protected means the object does not inherit aces from its parent
+ flags = OWNER_SECURITY_INFORMATION | GROUP_SECURITY_INFORMATION | DACL_SECURITY_INFORMATION
+ flags |= sd.protect ? PROTECTED_DACL_SECURITY_INFORMATION : UNPROTECTED_DACL_SECURITY_INFORMATION
+
+ rv = SetSecurityInfo(handle,
+ :SE_FILE_OBJECT,
+ flags,
+ owner_sid_ptr,
+ group_sid_ptr,
+ acl_ptr,
+ FFI::MemoryPointer::NULL)
+
+ if rv != FFI::ERROR_SUCCESS
+ raise Puppet::Util::Windows::Error.new("Failed to set security information")
+ end
+ end
end
end
end
end
end
+
+ def name_to_sid(name)
+ Puppet.deprecation_warning('Puppet::Util::Windows::Security.name_to_sid is deprecated; please use Puppet::Util::Windows::SID.name_to_sid')
+ Puppet::Util::Windows::SID.name_to_sid(name)
+ end
+
+ def name_to_sid_object(name)
+ Puppet.deprecation_warning('Puppet::Util::Windows::Security.name_to_sid_object is deprecated; please use Puppet::Util::Windows::SID.name_to_sid_object')
+ Puppet::Util::Windows::SID.name_to_sid_object(name)
+ end
+
+ def octet_string_to_sid_object(bytes)
+ Puppet.deprecation_warning('Puppet::Util::Windows::Security.octet_string_to_sid_object is deprecated; please use Puppet::Util::Windows::SID.octet_string_to_sid_object')
+ Puppet::Util::Windows::SID.octet_string_to_sid_object(bytes)
+ end
+
+ def sid_to_name(value)
+ Puppet.deprecation_warning('Puppet::Util::Windows::Security.sid_to_name is deprecated; please use Puppet::Util::Windows::SID.sid_to_name')
+ Puppet::Util::Windows::SID.sid_to_name(value)
+ end
+
+ def sid_ptr_to_string(psid)
+ Puppet.deprecation_warning('Puppet::Util::Windows::Security.sid_ptr_to_string is deprecated; please use Puppet::Util::Windows::SID.sid_ptr_to_string')
+ Puppet::Util::Windows::SID.sid_ptr_to_string(psid)
+ end
+
+ def string_to_sid_ptr(string_sid, &block)
+ Puppet.deprecation_warning('Puppet::Util::Windows::Security.string_to_sid_ptr is deprecated; please use Puppet::Util::Windows::SID.string_to_sid_ptr')
+ Puppet::Util::Windows::SID.string_to_sid_ptr(string_sid, &block)
+ end
+
+ def valid_sid?(string_sid)
+ Puppet.deprecation_warning('Puppet::Util::Windows::Security.valid_sid? is deprecated; please use Puppet::Util::Windows::SID.valid_sid?')
+ Puppet::Util::Windows::SID.valid_sid?(string_sid)
+ end
end
- module API
- extend FFI::Library
- ffi_lib 'kernel32'
- ffi_convention :stdcall
-
- # typedef WORD SECURITY_DESCRIPTOR_CONTROL, *PSECURITY_DESCRIPTOR_CONTROL;
- # BOOL WINAPI GetSecurityDescriptorControl(
- # _In_ PSECURITY_DESCRIPTOR pSecurityDescriptor,
- # _Out_ PSECURITY_DESCRIPTOR_CONTROL pControl,
- # _Out_ LPDWORD lpdwRevision
- # );
- ffi_lib :advapi32
- attach_function :get_security_descriptor_control, :GetSecurityDescriptorControl, [:pointer, :pointer, :pointer], :bool
+ ffi_convention :stdcall
+
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/aa363858(v=vs.85).aspx
+ # HANDLE WINAPI CreateFile(
+ # _In_ LPCTSTR lpFileName,
+ # _In_ DWORD dwDesiredAccess,
+ # _In_ DWORD dwShareMode,
+ # _In_opt_ LPSECURITY_ATTRIBUTES lpSecurityAttributes,
+ # _In_ DWORD dwCreationDisposition,
+ # _In_ DWORD dwFlagsAndAttributes,
+ # _In_opt_ HANDLE hTemplateFile
+ # );
+ ffi_lib :kernel32
+ attach_function_private :CreateFileW,
+ [:lpcwstr, :dword, :dword, :pointer, :dword, :dword, :handle], :handle
+
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/aa364993(v=vs.85).aspx
+ # BOOL WINAPI GetVolumeInformation(
+ # _In_opt_ LPCTSTR lpRootPathName,
+ # _Out_opt_ LPTSTR lpVolumeNameBuffer,
+ # _In_ DWORD nVolumeNameSize,
+ # _Out_opt_ LPDWORD lpVolumeSerialNumber,
+ # _Out_opt_ LPDWORD lpMaximumComponentLength,
+ # _Out_opt_ LPDWORD lpFileSystemFlags,
+ # _Out_opt_ LPTSTR lpFileSystemNameBuffer,
+ # _In_ DWORD nFileSystemNameSize
+ # );
+ ffi_lib :kernel32
+ attach_function_private :GetVolumeInformationW,
+ [:lpcwstr, :lpwstr, :dword, :lpdword, :lpdword, :lpdword, :lpwstr, :dword], :win32_bool
+
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/aa374951(v=vs.85).aspx
+ # BOOL WINAPI AddAccessAllowedAceEx(
+ # _Inout_ PACL pAcl,
+ # _In_ DWORD dwAceRevision,
+ # _In_ DWORD AceFlags,
+ # _In_ DWORD AccessMask,
+ # _In_ PSID pSid
+ # );
+ ffi_lib :advapi32
+ attach_function_private :AddAccessAllowedAceEx,
+ [:pointer, :dword, :dword, :dword, :pointer], :win32_bool
+
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/aa374964(v=vs.85).aspx
+ # BOOL WINAPI AddAccessDeniedAceEx(
+ # _Inout_ PACL pAcl,
+ # _In_ DWORD dwAceRevision,
+ # _In_ DWORD AceFlags,
+ # _In_ DWORD AccessMask,
+ # _In_ PSID pSid
+ # );
+ ffi_lib :advapi32
+ attach_function_private :AddAccessDeniedAceEx,
+ [:pointer, :dword, :dword, :dword, :pointer], :win32_bool
+
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/aa374931(v=vs.85).aspx
+ # typedef struct _ACL {
+ # BYTE AclRevision;
+ # BYTE Sbz1;
+ # WORD AclSize;
+ # WORD AceCount;
+ # WORD Sbz2;
+ # } ACL, *PACL;
+ class ACL < FFI::Struct
+ layout :AclRevision, :byte,
+ :Sbz1, :byte,
+ :AclSize, :word,
+ :AceCount, :word,
+ :Sbz2, :word
end
+
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/aa374912(v=vs.85).aspx
+ # ACE types
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/aa374919(v=vs.85).aspx
+ # typedef struct _ACE_HEADER {
+ # BYTE AceType;
+ # BYTE AceFlags;
+ # WORD AceSize;
+ # } ACE_HEADER, *PACE_HEADER;
+ class ACE_HEADER < FFI::Struct
+ layout :AceType, :byte,
+ :AceFlags, :byte,
+ :AceSize, :word
+ end
+
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/aa374892(v=vs.85).aspx
+ # ACCESS_MASK
+
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/aa374847(v=vs.85).aspx
+ # typedef struct _ACCESS_ALLOWED_ACE {
+ # ACE_HEADER Header;
+ # ACCESS_MASK Mask;
+ # DWORD SidStart;
+ # } ACCESS_ALLOWED_ACE, *PACCESS_ALLOWED_ACE;
+ #
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/aa374879(v=vs.85).aspx
+ # typedef struct _ACCESS_DENIED_ACE {
+ # ACE_HEADER Header;
+ # ACCESS_MASK Mask;
+ # DWORD SidStart;
+ # } ACCESS_DENIED_ACE, *PACCESS_DENIED_ACE;
+ class GENERIC_ACCESS_ACE < FFI::Struct
+ # ACE structures must be aligned on DWORD boundaries. All Windows
+ # memory-management functions return DWORD-aligned handles to memory
+ pack 4
+ layout :Header, ACE_HEADER,
+ :Mask, :dword,
+ :SidStart, :dword
+ end
+
+ # http://stackoverflow.com/a/1792930
+ MAXIMUM_SID_BYTES_LENGTH = 68
+ MAXIMUM_GENERIC_ACE_SIZE = GENERIC_ACCESS_ACE.offset_of(:SidStart) +
+ MAXIMUM_SID_BYTES_LENGTH
+
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/aa446634(v=vs.85).aspx
+ # BOOL WINAPI GetAce(
+ # _In_ PACL pAcl,
+ # _In_ DWORD dwAceIndex,
+ # _Out_ LPVOID *pAce
+ # );
+ ffi_lib :advapi32
+ attach_function_private :GetAce,
+ [:pointer, :dword, :pointer], :win32_bool
+
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/aa375202(v=vs.85).aspx
+ # BOOL WINAPI AdjustTokenPrivileges(
+ # _In_ HANDLE TokenHandle,
+ # _In_ BOOL DisableAllPrivileges,
+ # _In_opt_ PTOKEN_PRIVILEGES NewState,
+ # _In_ DWORD BufferLength,
+ # _Out_opt_ PTOKEN_PRIVILEGES PreviousState,
+ # _Out_opt_ PDWORD ReturnLength
+ # );
+ ffi_lib :advapi32
+ attach_function_private :AdjustTokenPrivileges,
+ [:handle, :win32_bool, :pointer, :dword, :pointer, :pdword], :win32_bool
+
+ # http://msdn.microsoft.com/en-us/library/windows/hardware/ff556610(v=vs.85).aspx
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/aa379561(v=vs.85).aspx
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/aa446647(v=vs.85).aspx
+ # typedef WORD SECURITY_DESCRIPTOR_CONTROL, *PSECURITY_DESCRIPTOR_CONTROL;
+ # BOOL WINAPI GetSecurityDescriptorControl(
+ # _In_ PSECURITY_DESCRIPTOR pSecurityDescriptor,
+ # _Out_ PSECURITY_DESCRIPTOR_CONTROL pControl,
+ # _Out_ LPDWORD lpdwRevision
+ # );
+ ffi_lib :advapi32
+ attach_function_private :GetSecurityDescriptorControl,
+ [:pointer, :lpword, :lpdword], :win32_bool
+
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/aa378853(v=vs.85).aspx
+ # BOOL WINAPI InitializeAcl(
+ # _Out_ PACL pAcl,
+ # _In_ DWORD nAclLength,
+ # _In_ DWORD dwAclRevision
+ # );
+ ffi_lib :advapi32
+ attach_function_private :InitializeAcl,
+ [:pointer, :dword, :dword], :win32_bool
+
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/aa379142(v=vs.85).aspx
+ # BOOL WINAPI IsValidAcl(
+ # _In_ PACL pAcl
+ # );
+ ffi_lib :advapi32
+ attach_function_private :IsValidAcl,
+ [:pointer], :win32_bool
+
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/aa379593(v=vs.85).aspx
+ SE_OBJECT_TYPE = enum(
+ :SE_UNKNOWN_OBJECT_TYPE, 0,
+ :SE_FILE_OBJECT,
+ :SE_SERVICE,
+ :SE_PRINTER,
+ :SE_REGISTRY_KEY,
+ :SE_LMSHARE,
+ :SE_KERNEL_OBJECT,
+ :SE_WINDOW_OBJECT,
+ :SE_DS_OBJECT,
+ :SE_DS_OBJECT_ALL,
+ :SE_PROVIDER_DEFINED_OBJECT,
+ :SE_WMIGUID_OBJECT,
+ :SE_REGISTRY_WOW64_32KEY
+ )
+
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/aa446654(v=vs.85).aspx
+ # DWORD WINAPI GetSecurityInfo(
+ # _In_ HANDLE handle,
+ # _In_ SE_OBJECT_TYPE ObjectType,
+ # _In_ SECURITY_INFORMATION SecurityInfo,
+ # _Out_opt_ PSID *ppsidOwner,
+ # _Out_opt_ PSID *ppsidGroup,
+ # _Out_opt_ PACL *ppDacl,
+ # _Out_opt_ PACL *ppSacl,
+ # _Out_opt_ PSECURITY_DESCRIPTOR *ppSecurityDescriptor
+ # );
+ ffi_lib :advapi32
+ attach_function_private :GetSecurityInfo,
+ [:handle, SE_OBJECT_TYPE, :dword, :pointer, :pointer, :pointer, :pointer, :pointer], :dword
+
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/aa379588(v=vs.85).aspx
+ # DWORD WINAPI SetSecurityInfo(
+ # _In_ HANDLE handle,
+ # _In_ SE_OBJECT_TYPE ObjectType,
+ # _In_ SECURITY_INFORMATION SecurityInfo,
+ # _In_opt_ PSID psidOwner,
+ # _In_opt_ PSID psidGroup,
+ # _In_opt_ PACL pDacl,
+ # _In_opt_ PACL pSacl
+ # );
+ ffi_lib :advapi32
+ # TODO: SECURITY_INFORMATION is actually a bitmask the size of a DWORD
+ attach_function_private :SetSecurityInfo,
+ [:handle, SE_OBJECT_TYPE, :dword, :pointer, :pointer, :pointer, :pointer], :dword
end
diff --git a/lib/puppet/util/windows/sid.rb b/lib/puppet/util/windows/sid.rb
index 90b48d933..68a780f8a 100644
--- a/lib/puppet/util/windows/sid.rb
+++ b/lib/puppet/util/windows/sid.rb
@@ -1,118 +1,162 @@
require 'puppet/util/windows'
module Puppet::Util::Windows
module SID
- require 'windows/security'
- include ::Windows::Security
-
- require 'windows/memory'
- include ::Windows::Memory
-
- require 'windows/msvcrt/string'
- include ::Windows::MSVCRT::String
+ require 'ffi'
+ extend FFI::Library
# missing from Windows::Error
ERROR_NONE_MAPPED = 1332
ERROR_INVALID_SID_STRUCTURE = 1337
# Convert an account name, e.g. 'Administrators' into a SID string,
# e.g. 'S-1-5-32-544'. The name can be specified as 'Administrators',
# 'BUILTIN\Administrators', or 'S-1-5-32-544', and will return the
# SID. Returns nil if the account doesn't exist.
def name_to_sid(name)
sid = name_to_sid_object(name)
sid ? sid.to_s : nil
end
+ module_function :name_to_sid
# Convert an account name, e.g. 'Administrators' into a SID object,
# e.g. 'S-1-5-32-544'. The name can be specified as 'Administrators',
# 'BUILTIN\Administrators', or 'S-1-5-32-544', and will return the
# SID object. Returns nil if the account doesn't exist.
def name_to_sid_object(name)
# Apparently, we accept a symbol..
name = name.to_s.strip if name
# if it's in SID string form, convert to user
parsed_sid = Win32::Security::SID.string_to_sid(name) rescue nil
parsed_sid ? Win32::Security::SID.new(parsed_sid) : Win32::Security::SID.new(name)
rescue
nil
end
+ module_function :name_to_sid_object
# Converts an octet string array of bytes to a SID object,
# e.g. [1, 1, 0, 0, 0, 0, 0, 5, 18, 0, 0, 0] is the representation for
# S-1-5-18, the local 'SYSTEM' account.
# Raises an Error for nil or non-array input.
def octet_string_to_sid_object(bytes)
if !bytes || !bytes.respond_to?('pack') || bytes.empty?
raise Puppet::Util::Windows::Error.new("Octet string must be an array of bytes")
end
Win32::Security::SID.new(bytes.pack('C*'))
end
+ module_function :octet_string_to_sid_object
# Convert a SID string, e.g. "S-1-5-32-544" to a name,
# e.g. 'BUILTIN\Administrators'. Returns nil if an account
# for that SID does not exist.
def sid_to_name(value)
sid = Win32::Security::SID.new(Win32::Security::SID.string_to_sid(value))
if sid.domain and sid.domain.length > 0
"#{sid.domain}\\#{sid.account}"
else
sid.account
end
rescue
nil
end
+ module_function :sid_to_name
+
+ # http://stackoverflow.com/a/1792930 - 68 bytes, 184 characters in a string
+ MAXIMUM_SID_STRING_LENGTH = 184
# Convert a SID pointer to a SID string, e.g. "S-1-5-32-544".
def sid_ptr_to_string(psid)
- sid_buf = 0.chr * 256
- str_ptr = 0.chr * 4
+ if ! psid.instance_of?(FFI::Pointer) || IsValidSid(psid) == FFI::WIN32_FALSE
+ raise Puppet::Util::Windows::Error.new("Invalid SID")
+ end
- raise Puppet::Util::Windows::Error.new("Invalid SID") unless IsValidSid(psid)
+ sid_string = nil
+ FFI::MemoryPointer.new(:pointer, 1) do |buffer_ptr|
+ if ConvertSidToStringSidW(psid, buffer_ptr) == FFI::WIN32_FALSE
+ raise Puppet::Util::Windows::Error.new("Failed to convert binary SID")
+ end
- raise Puppet::Util::Windows::Error.new("Failed to convert binary SID") unless ConvertSidToStringSid(psid, str_ptr)
+ buffer_ptr.read_win32_local_pointer do |wide_string_ptr|
+ if wide_string_ptr.null?
+ raise Puppet::Error.new("ConvertSidToStringSidW failed to allocate buffer for sid")
+ end
- begin
- strncpy(sid_buf, str_ptr.unpack('L')[0], sid_buf.size - 1)
- sid_buf[sid_buf.size - 1] = 0.chr
- return sid_buf.strip
- ensure
- LocalFree(str_ptr.unpack('L')[0])
+ sid_string = wide_string_ptr.read_arbitrary_wide_string_up_to(MAXIMUM_SID_STRING_LENGTH)
+ end
end
+
+ sid_string
end
+ module_function :sid_ptr_to_string
# Convert a SID string, e.g. "S-1-5-32-544" to a pointer (containing the
# address of the binary SID structure). The returned value can be used in
# Win32 APIs that expect a PSID, e.g. IsValidSid. The account for this
# SID may or may not exist.
- def string_to_sid_ptr(string, &block)
- sid_buf = 0.chr * 80
- string_addr = [string].pack('p*').unpack('L')[0]
-
- raise Puppet::Util::Windows::Error.new("Failed to convert string SID: #{string}") unless ConvertStringSidToSid(string_addr, sid_buf)
-
- sid_ptr = sid_buf.unpack('L')[0]
- begin
- yield sid_ptr
- ensure
- LocalFree(sid_ptr)
+ def string_to_sid_ptr(string_sid, &block)
+ FFI::MemoryPointer.from_string_to_wide_string(string_sid) do |lpcwstr|
+ FFI::MemoryPointer.new(:pointer, 1) do |sid_ptr_ptr|
+
+ if ConvertStringSidToSidW(lpcwstr, sid_ptr_ptr) == FFI::WIN32_FALSE
+ raise Puppet::Util::Windows::Error.new("Failed to convert string SID: #{string_sid}")
+ end
+
+ sid_ptr_ptr.read_win32_local_pointer do |sid_ptr|
+ yield sid_ptr
+ end
+ end
end
+
+ # yielded sid_ptr has already had LocalFree called, nothing to return
+ nil
end
+ module_function :string_to_sid_ptr
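Taken together with sid_ptr_to_string above, the block form round-trips a SID string through its binary representation; a usage sketch with a well-known SID:

````
# Usage sketch: S-1-5-18 is the well-known LocalSystem SID.
sid = Puppet::Util::Windows::SID
sid.string_to_sid_ptr('S-1-5-18') do |sid_ptr|
  puts sid.sid_ptr_to_string(sid_ptr)   # => "S-1-5-18"
end
````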
# Return true if the string is a valid SID, e.g. "S-1-5-32-544", false otherwise.
- def valid_sid?(string)
- string_to_sid_ptr(string) { |ptr| true }
- rescue Puppet::Util::Windows::Error => e
- if e.code == ERROR_INVALID_SID_STRUCTURE
- false
- else
- raise
+ def valid_sid?(string_sid)
+ valid = false
+
+ begin
+ string_to_sid_ptr(string_sid) { |ptr| valid = ! ptr.nil? && ! ptr.null? }
+ rescue Puppet::Util::Windows::Error => e
+ raise if e.code != ERROR_INVALID_SID_STRUCTURE
end
+
+ valid
end
+ module_function :valid_sid?
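A short usage sketch of the predicate above; the second input is deliberately malformed:

````
sid = Puppet::Util::Windows::SID
puts sid.valid_sid?('S-1-5-32-544')   # => true  (BUILTIN\Administrators)
puts sid.valid_sid?('not-a-sid')      # => false
````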
+
+ ffi_convention :stdcall
+
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/aa379151(v=vs.85).aspx
+ # BOOL WINAPI IsValidSid(
+ # _In_ PSID pSid
+ # );
+ ffi_lib :advapi32
+ attach_function_private :IsValidSid,
+ [:pointer], :win32_bool
+
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/aa376399(v=vs.85).aspx
+ # BOOL ConvertSidToStringSid(
+ # _In_ PSID Sid,
+ # _Out_ LPTSTR *StringSid
+ # );
+ ffi_lib :advapi32
+ attach_function_private :ConvertSidToStringSidW,
+ [:pointer, :pointer], :win32_bool
+
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/aa376402(v=vs.85).aspx
+ # BOOL WINAPI ConvertStringSidToSid(
+ # _In_ LPCTSTR StringSid,
+ # _Out_ PSID *Sid
+ # );
+ ffi_lib :advapi32
+ attach_function_private :ConvertStringSidToSidW,
+ [:lpcwstr, :pointer], :win32_bool
end
end
diff --git a/lib/puppet/util/windows/string.rb b/lib/puppet/util/windows/string.rb
index 13d9839d1..147c92915 100644
--- a/lib/puppet/util/windows/string.rb
+++ b/lib/puppet/util/windows/string.rb
@@ -1,14 +1,16 @@
require 'puppet/util/windows'
module Puppet::Util::Windows::String
def wide_string(str)
+ # if given a nil string, assume caller wants to pass a nil pointer to win32
+ return nil if str.nil?
# ruby (< 2.1) does not respect multibyte terminators, so it is possible
# for a string to contain a single trailing null byte, followed by garbage
# causing buffer overruns.
#
# See http://svn.ruby-lang.org/cgi-bin/viewvc.cgi?revision=41920&view=revision
newstr = str + "\0".encode(str.encoding)
newstr.encode!('UTF-16LE')
end
module_function :wide_string
end
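A usage sketch of wide_string; the byte count simply reflects eleven UTF-16LE code units (ten characters plus the appended NUL):

````
require 'puppet/util/windows'           # assumed to load the String helper
include Puppet::Util::Windows::String   # or call through a class that mixes it in

s = wide_string("C:\\Windows")
puts s.encoding       # => UTF-16LE
puts s.bytes.length   # => 22
````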
diff --git a/lib/puppet/util/windows/taskscheduler.rb b/lib/puppet/util/windows/taskscheduler.rb
new file mode 100644
index 000000000..ef598ab29
--- /dev/null
+++ b/lib/puppet/util/windows/taskscheduler.rb
@@ -0,0 +1,1241 @@
+require 'puppet/util/windows'
+
+# The Win32 module serves as a namespace only
+module Win32
+ # The TaskScheduler class encapsulates taskscheduler settings and behavior
+ class TaskScheduler
+ include Puppet::Util::Windows::String
+
+ require 'ffi'
+ extend FFI::Library
+
+ # The error class raised if any task scheduler specific calls fail.
+ class Error < Puppet::Util::Windows::Error; end
+
+ private
+
+ class << self
+ attr_accessor :com_initialized
+ end
+
+ # :stopdoc:
+ TASK_TIME_TRIGGER_ONCE = :TASK_TIME_TRIGGER_ONCE
+ TASK_TIME_TRIGGER_DAILY = :TASK_TIME_TRIGGER_DAILY
+ TASK_TIME_TRIGGER_WEEKLY = :TASK_TIME_TRIGGER_WEEKLY
+ TASK_TIME_TRIGGER_MONTHLYDATE = :TASK_TIME_TRIGGER_MONTHLYDATE
+ TASK_TIME_TRIGGER_MONTHLYDOW = :TASK_TIME_TRIGGER_MONTHLYDOW
+ TASK_EVENT_TRIGGER_ON_IDLE = :TASK_EVENT_TRIGGER_ON_IDLE
+ TASK_EVENT_TRIGGER_AT_SYSTEMSTART = :TASK_EVENT_TRIGGER_AT_SYSTEMSTART
+ TASK_EVENT_TRIGGER_AT_LOGON = :TASK_EVENT_TRIGGER_AT_LOGON
+
+ TASK_SUNDAY = 0x1
+ TASK_MONDAY = 0x2
+ TASK_TUESDAY = 0x4
+ TASK_WEDNESDAY = 0x8
+ TASK_THURSDAY = 0x10
+ TASK_FRIDAY = 0x20
+ TASK_SATURDAY = 0x40
+ TASK_FIRST_WEEK = 1
+ TASK_SECOND_WEEK = 2
+ TASK_THIRD_WEEK = 3
+ TASK_FOURTH_WEEK = 4
+ TASK_LAST_WEEK = 5
+ TASK_JANUARY = 0x1
+ TASK_FEBRUARY = 0x2
+ TASK_MARCH = 0x4
+ TASK_APRIL = 0x8
+ TASK_MAY = 0x10
+ TASK_JUNE = 0x20
+ TASK_JULY = 0x40
+ TASK_AUGUST = 0x80
+ TASK_SEPTEMBER = 0x100
+ TASK_OCTOBER = 0x200
+ TASK_NOVEMBER = 0x400
+ TASK_DECEMBER = 0x800
+
+ TASK_FLAG_INTERACTIVE = 0x1
+ TASK_FLAG_DELETE_WHEN_DONE = 0x2
+ TASK_FLAG_DISABLED = 0x4
+ TASK_FLAG_START_ONLY_IF_IDLE = 0x10
+ TASK_FLAG_KILL_ON_IDLE_END = 0x20
+ TASK_FLAG_DONT_START_IF_ON_BATTERIES = 0x40
+ TASK_FLAG_KILL_IF_GOING_ON_BATTERIES = 0x80
+ TASK_FLAG_RUN_ONLY_IF_DOCKED = 0x100
+ TASK_FLAG_HIDDEN = 0x200
+ TASK_FLAG_RUN_IF_CONNECTED_TO_INTERNET = 0x400
+ TASK_FLAG_RESTART_ON_IDLE_RESUME = 0x800
+ TASK_FLAG_SYSTEM_REQUIRED = 0x1000
+ TASK_FLAG_RUN_ONLY_IF_LOGGED_ON = 0x2000
+ TASK_TRIGGER_FLAG_HAS_END_DATE = 0x1
+ TASK_TRIGGER_FLAG_KILL_AT_DURATION_END = 0x2
+ TASK_TRIGGER_FLAG_DISABLED = 0x4
+
+ TASK_MAX_RUN_TIMES = 1440
+ TASKS_TO_RETRIEVE = 5
+
+ # COM
+
+ CLSID_CTask = FFI::WIN32::GUID['148BD520-A2AB-11CE-B11F-00AA00530503']
+ IID_ITask = FFI::WIN32::GUID['148BD524-A2AB-11CE-B11F-00AA00530503']
+ IID_IPersistFile = FFI::WIN32::GUID['0000010b-0000-0000-C000-000000000046']
+
+ SCHED_S_TASK_READY = 0x00041300
+ SCHED_S_TASK_RUNNING = 0x00041301
+ SCHED_S_TASK_HAS_NOT_RUN = 0x00041303
+ SCHED_S_TASK_NOT_SCHEDULED = 0x00041305
+ # HRESULT error codes
+ # http://blogs.msdn.com/b/eldar/archive/2007/04/03/a-lot-of-hresult-codes.aspx
+ # in Ruby, a 0x8XXXXXXX style HRESULT can be resolved to its two's complement
+ # value by using "0x8XXXXXXX".to_i(16) - 0x100000000
+ SCHED_E_ACCOUNT_INFORMATION_NOT_SET = -2147216625 # 0x8004130F
+ SCHED_E_NO_SECURITY_SERVICES = -2147216622 # 0x80041312
+ # No mapping between account names and security IDs was done.
+ ERROR_NONE_MAPPED = -2147023564 # 0x80070534 WIN32 Error CODE 1332 (0x534)
+
+ public
+
+ # :startdoc:
+
+ # Shorthand constants
+ IDLE = Puppet::Util::Windows::Process::IDLE_PRIORITY_CLASS
+ NORMAL = Puppet::Util::Windows::Process::NORMAL_PRIORITY_CLASS
+ HIGH = Puppet::Util::Windows::Process::HIGH_PRIORITY_CLASS
+ REALTIME = Puppet::Util::Windows::Process::REALTIME_PRIORITY_CLASS
+ BELOW_NORMAL = Puppet::Util::Windows::Process::BELOW_NORMAL_PRIORITY_CLASS
+ ABOVE_NORMAL = Puppet::Util::Windows::Process::ABOVE_NORMAL_PRIORITY_CLASS
+
+ ONCE = TASK_TIME_TRIGGER_ONCE
+ DAILY = TASK_TIME_TRIGGER_DAILY
+ WEEKLY = TASK_TIME_TRIGGER_WEEKLY
+ MONTHLYDATE = TASK_TIME_TRIGGER_MONTHLYDATE
+ MONTHLYDOW = TASK_TIME_TRIGGER_MONTHLYDOW
+
+ ON_IDLE = TASK_EVENT_TRIGGER_ON_IDLE
+ AT_SYSTEMSTART = TASK_EVENT_TRIGGER_AT_SYSTEMSTART
+ AT_LOGON = TASK_EVENT_TRIGGER_AT_LOGON
+ FIRST_WEEK = TASK_FIRST_WEEK
+ SECOND_WEEK = TASK_SECOND_WEEK
+ THIRD_WEEK = TASK_THIRD_WEEK
+ FOURTH_WEEK = TASK_FOURTH_WEEK
+ LAST_WEEK = TASK_LAST_WEEK
+ SUNDAY = TASK_SUNDAY
+ MONDAY = TASK_MONDAY
+ TUESDAY = TASK_TUESDAY
+ WEDNESDAY = TASK_WEDNESDAY
+ THURSDAY = TASK_THURSDAY
+ FRIDAY = TASK_FRIDAY
+ SATURDAY = TASK_SATURDAY
+ JANUARY = TASK_JANUARY
+ FEBRUARY = TASK_FEBRUARY
+ MARCH = TASK_MARCH
+ APRIL = TASK_APRIL
+ MAY = TASK_MAY
+ JUNE = TASK_JUNE
+ JULY = TASK_JULY
+ AUGUST = TASK_AUGUST
+ SEPTEMBER = TASK_SEPTEMBER
+ OCTOBER = TASK_OCTOBER
+ NOVEMBER = TASK_NOVEMBER
+ DECEMBER = TASK_DECEMBER
+
+ INTERACTIVE = TASK_FLAG_INTERACTIVE
+ DELETE_WHEN_DONE = TASK_FLAG_DELETE_WHEN_DONE
+ DISABLED = TASK_FLAG_DISABLED
+ START_ONLY_IF_IDLE = TASK_FLAG_START_ONLY_IF_IDLE
+ KILL_ON_IDLE_END = TASK_FLAG_KILL_ON_IDLE_END
+ DONT_START_IF_ON_BATTERIES = TASK_FLAG_DONT_START_IF_ON_BATTERIES
+ KILL_IF_GOING_ON_BATTERIES = TASK_FLAG_KILL_IF_GOING_ON_BATTERIES
+ RUN_ONLY_IF_DOCKED = TASK_FLAG_RUN_ONLY_IF_DOCKED
+ HIDDEN = TASK_FLAG_HIDDEN
+ RUN_IF_CONNECTED_TO_INTERNET = TASK_FLAG_RUN_IF_CONNECTED_TO_INTERNET
+ RESTART_ON_IDLE_RESUME = TASK_FLAG_RESTART_ON_IDLE_RESUME
+ SYSTEM_REQUIRED = TASK_FLAG_SYSTEM_REQUIRED
+ RUN_ONLY_IF_LOGGED_ON = TASK_FLAG_RUN_ONLY_IF_LOGGED_ON
+
+ FLAG_HAS_END_DATE = TASK_TRIGGER_FLAG_HAS_END_DATE
+ FLAG_KILL_AT_DURATION_END = TASK_TRIGGER_FLAG_KILL_AT_DURATION_END
+ FLAG_DISABLED = TASK_TRIGGER_FLAG_DISABLED
+
+ MAX_RUN_TIMES = TASK_MAX_RUN_TIMES
+
+ # Returns a new TaskScheduler object. If a work_item (and possibly the
+ # trigger) are passed as arguments then a new work item is created and
+ # associated with that trigger, although you can still activate other tasks
+ # with the same handle.
+ #
+ # This is really just a bit of convenience. Passing arguments to the
+ # constructor is the same as calling TaskScheduler.new plus
+ # TaskScheduler#new_work_item.
+ #
+ def initialize(work_item=nil, trigger=nil)
+ @pITS = nil
+ @pITask = nil
+
+ if ! self.class.com_initialized
+ Puppet::Util::Windows::COM.InitializeCom()
+ self.class.com_initialized = true
+ end
+
+ @pITS = COM::TaskScheduler.new
+ at_exit do
+ begin
+ @pITS.Release if @pITS && !@pITS.null?
+ @pITS = nil
+ rescue
+ end
+ end
+
+ if work_item
+ if trigger
+ raise TypeError unless trigger.is_a?(Hash)
+ new_work_item(work_item, trigger)
+ end
+ end
+ end
+
+ # Returns an array of scheduled task names.
+ #
+ def enum
+ raise Error.new('No current task scheduler. ITaskScheduler is NULL.') if @pITS.nil?
+ array = []
+
+ @pITS.UseInstance(COM::EnumWorkItems, :Enum) do |pIEnum|
+ FFI::MemoryPointer.new(:pointer) do |names_array_ptr_ptr|
+ FFI::MemoryPointer.new(:win32_ulong) do |fetched_count_ptr|
+ # awkward usage: Next returns S_OK (0) when the requested number of items is available, and S_FALSE (1) when fewer were returned
+ while (pIEnum.Next(TASKS_TO_RETRIEVE, names_array_ptr_ptr, fetched_count_ptr) >= Puppet::Util::Windows::COM::S_OK)
+ count = fetched_count_ptr.read_win32_ulong
+ break if count == 0
+
+ names_array_ptr_ptr.read_com_memory_pointer do |names_array_ptr|
+ # iterate over the array of pointers
+ name_ptr_ptr = FFI::Pointer.new(:pointer, names_array_ptr)
+ for i in 0 ... count
+ name_ptr_ptr[i].read_com_memory_pointer do |name_ptr|
+ array << name_ptr.read_arbitrary_wide_string_up_to(256)
+ end
+ end
+ end
+ end
+ end
+ end
+ end
+
+ array
+ end
+
+ alias :tasks :enum
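A usage sketch of the enumerator above (Windows only; the names printed depend entirely on what is registered in C:\WINDOWS\Tasks):

````
require 'puppet/util/windows/taskscheduler'   # load path assumed from this file's location

ts = Win32::TaskScheduler.new
ts.tasks.each { |name| puts name }   # prints registered .job names, if any
````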
+
+ # Activate the specified task.
+ #
+ def activate(task)
+ raise Error.new('No current task scheduler. ITaskScheduler is NULL.') if @pITS.nil?
+ raise TypeError unless task.is_a?(String)
+
+ FFI::MemoryPointer.new(:pointer) do |ptr|
+ @pITS.Activate(wide_string(task), IID_ITask, ptr)
+
+ reset_current_task
+ @pITask = COM::Task.new(ptr.read_pointer)
+ end
+
+ @pITask
+ end
+
+ # Delete the specified task name.
+ #
+ def delete(task)
+ raise Error.new('No current task scheduler. ITaskScheduler is NULL.') if @pITS.nil?
+ raise TypeError unless task.is_a?(String)
+
+ @pITS.Delete(wide_string(task))
+
+ true
+ end
+
+ # Execute the current task.
+ #
+ def run
+ raise Error.new('No currently active task. ITask is NULL.') if @pITask.nil?
+
+ @pITask.Run
+ end
+
+ # Saves the current task. Tasks must be saved before they can be activated.
+ # The .job file itself is typically stored in the C:\WINDOWS\Tasks folder.
+ #
+ # If +file+ (an absolute path) is specified then the job is saved to that
+ # file instead. A '.job' extension is recommended but not enforced.
+ #
+ def save(file = nil)
+ raise Error.new('No currently active task. ITask is NULL.') if @pITask.nil?
+
+ reset = true
+
+ begin
+ @pITask.QueryInstance(COM::PersistFile) do |pIPersistFile|
+ pIPersistFile.Save(wide_string(file), 1)
+ end
+ rescue
+ reset = false
+ ensure
+ reset_current_task if reset
+ end
+ end
+
+ # Terminate the current task.
+ #
+ def terminate
+ raise Error.new('No currently active task. ITask is NULL.') if @pITask.nil?
+
+ @pITask.Terminate
+ end
+
+ # Set the host on which the various TaskScheduler methods will execute.
+ #
+ def machine=(host)
+ raise Error.new('No current task scheduler. ITaskScheduler is NULL.') if @pITS.nil?
+ raise TypeError unless host.is_a?(String)
+
+ @pITS.SetTargetComputer(wide_string(host))
+
+ host
+ end
+
+ alias :host= :machine=
+
+ # Sets the +user+ and +password+ for the given task. If the user and
+ # password are set properly then true is returned.
+ #
+ # In some cases the job may be created, but the account information was
+ # bad. In this case the task is created but a warning is generated and
+ # false is returned.
+ #
+ def set_account_information(user, password)
+ raise Error.new('No current task scheduler. ITaskScheduler is NULL.') if @pITS.nil?
+ raise Error.new('No currently active task. ITask is NULL.') if @pITask.nil?
+
+ bool = false
+
+ begin
+ if (user.nil? || user=="") && (password.nil? || password=="")
+ @pITask.SetAccountInformation(wide_string(""), FFI::Pointer::NULL)
+ else
+ user = wide_string(user)
+ password = wide_string(password)
+ @pITask.SetAccountInformation(user, password)
+ end
+
+ bool = true
+ rescue Puppet::Util::Windows::Error => e
+ raise e unless e.code == SCHED_E_ACCOUNT_INFORMATION_NOT_SET
+
+ warn 'job created, but password was invalid'
+ end
+
+ bool
+ end
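+
+ # Illustrative calls (editorial sketch; +ts+ and the credentials are
+ # placeholders):
+ #
+ #   ts.set_account_information('DOMAIN\someuser', 'secret')  # run as a specific user
+ #   ts.set_account_information(nil, nil)                     # clear the credentials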
+
+ # Returns the user associated with the task or nil if no user has yet
+ # been associated with the task.
+ #
+ def account_information
+ raise Error.new('No current task scheduler. ITaskScheduler is NULL.') if @pITS.nil?
+ raise Error.new('No currently active task. ITask is NULL.') if @pITask.nil?
+
+ # default under certain failures
+ user = nil
+
+ begin
+ FFI::MemoryPointer.new(:pointer) do |ptr|
+ @pITask.GetAccountInformation(ptr)
+ ptr.read_com_memory_pointer do |str_ptr|
+ user = str_ptr.read_arbitrary_wide_string_up_to(256) if ! str_ptr.null?
+ end
+ end
+ rescue Puppet::Util::Windows::Error => e
+ raise e unless e.code == SCHED_E_ACCOUNT_INFORMATION_NOT_SET ||
+ e.code == SCHED_E_NO_SECURITY_SERVICES ||
+ e.code == ERROR_NONE_MAPPED
+ end
+
+ user
+ end
+
+ # Returns the name of the application associated with the task.
+ #
+ def application_name
+ raise Error.new('No current task scheduler. ITaskScheduler is NULL.') if @pITS.nil?
+ raise Error.new('No currently active task. ITask is NULL.') if @pITask.nil?
+
+ app = nil
+
+ FFI::MemoryPointer.new(:pointer) do |ptr|
+ @pITask.GetApplicationName(ptr)
+
+ ptr.read_com_memory_pointer do |str_ptr|
+ app = str_ptr.read_arbitrary_wide_string_up_to(256) if ! str_ptr.null?
+ end
+ end
+
+ app
+ end
+
+ # Sets the application name associated with the task.
+ #
+ def application_name=(app)
+ raise Error.new('No current task scheduler. ITaskScheduler is NULL.') if @pITS.nil?
+ raise Error.new('No currently active task. ITask is NULL.') if @pITask.nil?
+ raise TypeError unless app.is_a?(String)
+
+ @pITask.SetApplicationName(wide_string(app))
+
+ app
+ end
+
+ # Returns the command line parameters for the task.
+ #
+ def parameters
+ raise Error.new('No current task scheduler. ITaskScheduler is NULL.') if @pITS.nil?
+ raise Error.new('No currently active task. ITask is NULL.') if @pITask.nil?
+
+ param = nil
+
+ FFI::MemoryPointer.new(:pointer) do |ptr|
+ @pITask.GetParameters(ptr)
+
+ ptr.read_com_memory_pointer do |str_ptr|
+ param = str_ptr.read_arbitrary_wide_string_up_to(256) if ! str_ptr.null?
+ end
+ end
+
+ param
+ end
+
+ # Sets the parameters for the task. These parameters are passed as command
+ # line arguments to the application the task will run. To clear the command
+ # line parameters set it to an empty string.
+ #
+ def parameters=(param)
+ raise Error.new('No current task scheduler. ITaskScheduler is NULL.') if @pITS.nil?
+ raise Error.new('No currently active task. ITask is NULL.') if @pITask.nil?
+ raise TypeError unless param.is_a?(String)
+
+ @pITask.SetParameters(wide_string(param))
+
+ param
+ end
+
+ # Returns the working directory for the task.
+ #
+ def working_directory
+ raise Error.new('No current task scheduler. ITaskScheduler is NULL.') if @pITS.nil?
+ raise Error.new('No currently active task. ITask is NULL.') if @pITask.nil?
+
+ dir = nil
+
+ FFI::MemoryPointer.new(:pointer) do |ptr|
+ @pITask.GetWorkingDirectory(ptr)
+
+ ptr.read_com_memory_pointer do |str_ptr|
+ dir = str_ptr.read_arbitrary_wide_string_up_to(256) if ! str_ptr.null?
+ end
+ end
+
+ dir
+ end
+
+ # Sets the working directory for the task.
+ #
+ def working_directory=(dir)
+ raise Error.new('No current task scheduler. ITaskScheduler is NULL.') if @pITS.nil?
+ raise Error.new('No currently active task. ITask is NULL.') if @pITask.nil?
+ raise TypeError unless dir.is_a?(String)
+
+ @pITask.SetWorkingDirectory(wide_string(dir))
+
+ dir
+ end
+
+ # Returns the task's priority level. Possible values are 'idle',
+ # 'normal', 'high', 'realtime', 'below_normal', 'above_normal',
+ # and 'unknown'.
+ #
+ def priority
+ raise Error.new('No current task scheduler. ITaskScheduler is NULL.') if @pITS.nil?
+ raise Error.new('No currently active task. ITask is NULL.') if @pITask.nil?
+
+ # Default to 'unknown'; assigning before the block keeps the local
+ # variable visible after the block returns.
+ priority = 'unknown'
+
+ FFI::MemoryPointer.new(:dword, 1) do |ptr|
+ @pITask.GetPriority(ptr)
+
+ pri = ptr.read_dword
+ if (pri & IDLE) != 0
+ priority = 'idle'
+ elsif (pri & NORMAL) != 0
+ priority = 'normal'
+ elsif (pri & HIGH) != 0
+ priority = 'high'
+ elsif (pri & REALTIME) != 0
+ priority = 'realtime'
+ elsif (pri & BELOW_NORMAL) != 0
+ priority = 'below_normal'
+ elsif (pri & ABOVE_NORMAL) != 0
+ priority = 'above_normal'
+ end
+ end
+
+ priority
+ end
+
+ # Sets the priority of the task. The +priority+ should be a numeric
+ # priority constant value.
+ #
+ def priority=(priority)
+ raise Error.new('No current task scheduler. ITaskScheduler is NULL.') if @pITS.nil?
+ raise Error.new('No currently active task. ITask is NULL.') if @pITask.nil?
+ raise TypeError unless priority.is_a?(Numeric)
+
+ @pITask.SetPriority(priority)
+
+ priority
+ end
+
+ # Creates a new work item (scheduled job) with the given +trigger+. The
+ # trigger variable is a hash of options that define when the scheduled
+ # job should run.
+ #
+ def new_work_item(task, trigger)
+ raise TypeError unless trigger.is_a?(Hash)
+ raise Error.new('No current task scheduler. ITaskScheduler is NULL.') if @pITS.nil?
+
+ # I'm working around github issue #1 here.
+ enum.each{ |name|
+ if name.downcase == task.downcase + '.job'
+ raise Error.new("task '#{task}' already exists")
+ end
+ }
+
+ FFI::MemoryPointer.new(:pointer) do |ptr|
+ @pITS.NewWorkItem(wide_string(task), CLSID_CTask, IID_ITask, ptr)
+
+ reset_current_task
+ @pITask = COM::Task.new(ptr.read_pointer)
+
+ FFI::MemoryPointer.new(:word, 1) do |trigger_index_ptr|
+ # Without the 'enum.include?' check above the code segfaults here if the
+ # task already exists. This should probably be handled properly instead
+ # of simply avoiding the issue.
+
+ @pITask.UseInstance(COM::TaskTrigger, :CreateTrigger, trigger_index_ptr) do |pITaskTrigger|
+ populate_trigger(pITaskTrigger, trigger)
+ end
+ end
+ end
+
+ @pITask
+ end
+
+ alias :new_task :new_work_item
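+
+ # Illustrative trigger hash (editorial sketch; the key names follow
+ # ValidTriggerKeys and ValidTypeKeys below, while +ts+, the job name and
+ # the values are placeholders):
+ #
+ #   trigger = {
+ #     'start_year'   => 2014,
+ #     'start_month'  => 4,
+ #     'start_day'    => 11,
+ #     'start_hour'   => 7,
+ #     'start_minute' => 14,
+ #     'trigger_type' => :TASK_TIME_TRIGGER_DAILY,
+ #     'type'         => { 'days_interval' => 1 }
+ #   }
+ #
+ #   ts.new_work_item('puppet_example', trigger)
+ #   ts.application_name = 'cmd.exe'
+ #   ts.parameters = '/c echo hello'
+ #   ts.save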
+
+ # Returns the number of triggers associated with the active task.
+ #
+ def trigger_count
+ raise Error.new('No current task scheduler. ITaskScheduler is NULL.') if @pITS.nil?
+ raise Error.new('No currently active task. ITask is NULL.') if @pITask.nil?
+
+ count = 0
+
+ FFI::MemoryPointer.new(:word, 1) do |ptr|
+ @pITask.GetTriggerCount(ptr)
+ count = ptr.read_word
+ end
+
+ count
+ end
+
+ # Returns a string that describes the current trigger at the specified
+ # index for the active task.
+ #
+ # Example: "At 7:14 AM every day, starting 4/11/2009"
+ #
+ def trigger_string(index)
+ raise Error.new('No current task scheduler. ITaskScheduler is NULL.') if @pITS.nil?
+ raise Error.new('No currently active task. ITask is NULL.') if @pITask.nil?
+ raise TypeError unless index.is_a?(Numeric)
+
+ # Assign before the block so the value survives outside the block scope.
+ trigger = nil
+
+ FFI::MemoryPointer.new(:pointer) do |ptr|
+ @pITask.GetTriggerString(index, ptr)
+
+ ptr.read_com_memory_pointer do |str_ptr|
+ trigger = str_ptr.read_arbitrary_wide_string_up_to(256)
+ end
+ end
+
+ trigger
+ end
+
+ # Deletes the trigger at the specified index.
+ #
+ def delete_trigger(index)
+ raise Error.new('No current task scheduler. ITaskScheduler is NULL.') if @pITS.nil?
+ raise Error.new('No currently active task. ITask is NULL.') if @pITask.nil?
+
+ @pITask.DeleteTrigger(index)
+ index
+ end
+
+ # Returns a hash that describes the trigger at the given index for the
+ # current task.
+ #
+ def trigger(index)
+ raise Error.new('No current task scheduler. ITaskScheduler is NULL.') if @pITS.nil?
+ raise Error.new('No currently active task. ITask is NULL.') if @pITask.nil?
+
+ trigger = {}
+
+ @pITask.UseInstance(COM::TaskTrigger, :GetTrigger, index) do |pITaskTrigger|
+ FFI::MemoryPointer.new(COM::TASK_TRIGGER.size) do |task_trigger_ptr|
+ pITaskTrigger.GetTrigger(task_trigger_ptr)
+ trigger = populate_hash_from_trigger(COM::TASK_TRIGGER.new(task_trigger_ptr))
+ end
+ end
+
+ trigger
+ end
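+
+ # Illustrative result (editorial sketch; values are placeholders): the hash
+ # mirrors the keys built by populate_hash_from_trigger, e.g.
+ #
+ #   ts.trigger(0)
+ #   # => { 'start_year' => 2014, 'start_month' => 4, ...,
+ #   #      'trigger_type' => :TASK_TIME_TRIGGER_DAILY,
+ #   #      'type' => { 'days_interval' => 1 } }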
+
+ # Sets the trigger for the currently active task.
+ #
+ def trigger=(trigger)
+ raise Error.new('No current task scheduler. ITaskScheduler is NULL.') if @pITS.nil?
+ raise Error.new('No currently active task. ITask is NULL.') if @pITask.nil?
+ raise TypeError unless trigger.is_a?(Hash)
+
+ FFI::MemoryPointer.new(:word, 1) do |trigger_index_ptr|
+ # Without the duplicate-task check performed in new_work_item, the code
+ # segfaults here if the task already exists. This should probably be
+ # handled properly instead of simply avoiding the issue.
+
+ @pITask.UseInstance(COM::TaskTrigger, :CreateTrigger, trigger_index_ptr) do |pITaskTrigger|
+ populate_trigger(pITaskTrigger, trigger)
+ end
+ end
+
+ trigger
+ end
+
+ # Adds a trigger at the specified index.
+ #
+ def add_trigger(index, trigger)
+ raise Error.new('No current task scheduler. ITaskScheduler is NULL.') if @pITS.nil?
+ raise Error.new('No currently active task. ITask is NULL.') if @pITask.nil?
+ raise TypeError unless trigger.is_a?(Hash)
+
+ @pITask.UseInstance(COM::TaskTrigger, :GetTrigger, index) do |pITaskTrigger|
+ populate_trigger(pITaskTrigger, trigger)
+ end
+ end
+
+ # Returns the flags (integer) that modify the behavior of the work item. You
+ # must AND the return value against the individual flag constants to
+ # determine which flags are set.
+ #
+ def flags
+ raise Error.new('No currently active task. ITask is NULL.') if @pITask.nil?
+
+ flags = 0
+
+ FFI::MemoryPointer.new(:dword, 1) do |ptr|
+ @pITask.GetFlags(ptr)
+ flags = ptr.read_dword
+ end
+
+ flags
+ end
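+
+ # Illustrative check (editorial sketch; +ts+ is a placeholder instance):
+ # mask the returned value with the flag constants defined above, e.g.
+ #
+ #   disabled = (ts.flags & DISABLED) != 0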
+
+ # Sets an OR'd value of flags that modify the behavior of the work item.
+ #
+ def flags=(flags)
+ raise Error.new('No current task scheduler. ITaskScheduler is NULL.') if @pITS.nil?
+ raise Error.new('No currently active task. ITask is NULL.') if @pITask.nil?
+
+ @pITask.SetFlags(flags)
+ flags
+ end
+
+ # Returns the status of the currently active task. Possible values are
+ # 'ready', 'running', 'not scheduled' or 'unknown'.
+ #
+ def status
+ raise Error.new('No currently active task. ITask is NULL.') if @pITask.nil?
+
+ st = nil
+
+ FFI::MemoryPointer.new(:hresult, 1) do |ptr|
+ @pITask.GetStatus(ptr)
+ st = ptr.read_hresult
+ end
+
+ case st
+ when SCHED_S_TASK_READY
+ status = 'ready'
+ when SCHED_S_TASK_RUNNING
+ status = 'running'
+ when SCHED_S_TASK_NOT_SCHEDULED
+ status = 'not scheduled'
+ else
+ status = 'unknown'
+ end
+
+ status
+ end
+
+ # Returns the exit code from the last scheduled run.
+ #
+ def exit_code
+ raise Error.new('No currently active task. ITask is NULL.') if @pITask.nil?
+
+ status = 0
+
+ begin
+ FFI::MemoryPointer.new(:dword, 1) do |ptr|
+ @pITask.GetExitCode(ptr)
+ status = ptr.read_dword
+ end
+ rescue Puppet::Util::Windows::Error => e
+ raise e unless e.code == SCHED_S_TASK_HAS_NOT_RUN
+ end
+
+ status
+ end
+
+ # Returns the comment associated with the task, if any.
+ #
+ def comment
+ raise Error.new('No currently active task. ITask is NULL.') if @pITask.nil?
+
+ comment = nil
+
+ FFI::MemoryPointer.new(:pointer) do |ptr|
+ @pITask.GetComment(ptr)
+
+ ptr.read_com_memory_pointer do |str_ptr|
+ comment = str_ptr.read_arbitrary_wide_string_up_to(256) if ! str_ptr.null?
+ end
+ end
+
+ comment
+ end
+
+ # Sets the comment for the task.
+ #
+ def comment=(comment)
+ raise Error.new('No currently active task. ITask is NULL.') if @pITask.nil?
+ raise TypeError unless comment.is_a?(String)
+
+ @pITask.SetComment(wide_string(comment))
+ comment
+ end
+
+ # Returns the name of the user who created the task.
+ #
+ def creator
+ raise Error.new('No currently active task. ITask is NULL.') if @pITask.nil?
+
+ creator = nil
+
+ FFI::MemoryPointer.new(:pointer) do |ptr|
+ @pITask.GetCreator(ptr)
+
+ ptr.read_com_memory_pointer do |str_ptr|
+ creator = str_ptr.read_arbitrary_wide_string_up_to(256) if ! str_ptr.null?
+ end
+ end
+
+ creator
+ end
+
+ # Sets the creator for the task.
+ #
+ def creator=(creator)
+ raise Error.new('No currently active task. ITask is NULL.') if @pITask.nil?
+ raise TypeError unless creator.is_a?(String)
+
+ @pITask.SetCreator(wide_string(creator))
+ creator
+ end
+
+ # Returns a Time object that indicates the next time the task will run.
+ #
+ def next_run_time
+ raise Error.new('No currently active task. ITask is NULL.') if @pITask.nil?
+
+ time = nil
+
+ FFI::MemoryPointer.new(WIN32::SYSTEMTIME.size) do |ptr|
+ @pITask.GetNextRunTime(ptr)
+ time = WIN32::SYSTEMTIME.new(ptr).to_local_time
+ end
+
+ time
+ end
+
+ # Returns a Time object indicating the most recent time the task ran or
+ # nil if the task has never run.
+ #
+ def most_recent_run_time
+ raise Error.new('No currently active task. ITask is NULL.') if @pITask.nil?
+
+ time = nil
+
+ begin
+ FFI::MemoryPointer.new(WIN32::SYSTEMTIME.size) do |ptr|
+ @pITask.GetMostRecentRunTime(ptr)
+ time = WIN32::SYSTEMTIME.new(ptr).to_local_time
+ end
+ rescue Puppet::Util::Windows::Error => e
+ raise e unless e.code == SCHED_S_TASK_HAS_NOT_RUN
+ end
+
+ time
+ end
+
+ # Returns the maximum length of time, in milliseconds, that the task
+ # will run before terminating.
+ #
+ def max_run_time
+ raise Error.new('No currently active task. ITask is NULL.') if @pITask.nil?
+
+ max_run_time = nil
+
+ FFI::MemoryPointer.new(:dword, 1) do |ptr|
+ @pITask.GetMaxRunTime(ptr)
+ max_run_time = ptr.read_dword
+ end
+
+ max_run_time
+ end
+
+ # Sets the maximum length of time, in milliseconds, that the task can run
+ # before terminating. Returns the value you specified if successful.
+ #
+ def max_run_time=(max_run_time)
+ raise Error.new('No currently active task. ITask is NULL.') if @pITask.nil?
+ raise TypeError unless max_run_time.is_a?(Numeric)
+
+ @pITask.SetMaxRunTime(max_run_time)
+ max_run_time
+ end
+
+ # Returns whether or not the scheduled task exists.
+ def exists?(job_name)
+ bool = false
+ Dir.foreach('C:/Windows/Tasks'){ |file|
+ if File.basename(file, '.job') == job_name
+ bool = true
+ break
+ end
+ }
+ bool
+ end
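+
+ # Illustrative check (editorial sketch; the job name is a placeholder):
+ #
+ #   ts.exists?('puppet_example')  # true if C:/Windows/Tasks/puppet_example.job exists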
+
+ private
+
+ # :stopdoc:
+
+ # Used for the new_work_item method
+ ValidTriggerKeys = [
+ 'end_day',
+ 'end_month',
+ 'end_year',
+ 'flags',
+ 'minutes_duration',
+ 'minutes_interval',
+ 'random_minutes_interval',
+ 'start_day',
+ 'start_hour',
+ 'start_minute',
+ 'start_month',
+ 'start_year',
+ 'trigger_type',
+ 'type'
+ ]
+
+ ValidTypeKeys = [
+ 'days_interval',
+ 'weeks_interval',
+ 'days_of_week',
+ 'months',
+ 'days',
+ 'weeks'
+ ]
+
+ # Private method that validates keys, and converts all keys to lowercase
+ # strings.
+ #
+ def transform_and_validate(hash)
+ new_hash = {}
+
+ hash.each{ |key, value|
+ key = key.to_s.downcase
+ if key == 'type'
+ new_type_hash = {}
+ raise ArgumentError unless value.is_a?(Hash)
+ value.each{ |subkey, subvalue|
+ subkey = subkey.to_s.downcase
+ if ValidTypeKeys.include?(subkey)
+ new_type_hash[subkey] = subvalue
+ else
+ raise ArgumentError, "Invalid type key '#{subkey}'"
+ end
+ }
+ new_hash[key] = new_type_hash
+ else
+ if ValidTriggerKeys.include?(key)
+ new_hash[key] = value
+ else
+ raise ArgumentError, "Invalid key '#{key}'"
+ end
+ end
+ }
+
+ new_hash
+ end
+
+ private
+
+ def reset_current_task
+ # Ensure that COM reference is decremented properly
+ @pITask.Release if @pITask && ! @pITask.null?
+ @pITask = nil
+ end
+
+ def populate_trigger(task_trigger, trigger)
+ raise TypeError unless task_trigger.is_a?(COM::TaskTrigger)
+ trigger = transform_and_validate(trigger)
+
+ FFI::MemoryPointer.new(COM::TASK_TRIGGER.size) do |trigger_ptr|
+ FFI::MemoryPointer.new(COM::TRIGGER_TYPE_UNION.size) do |trigger_type_union_ptr|
+ trigger_type_union = COM::TRIGGER_TYPE_UNION.new(trigger_type_union_ptr)
+
+ tmp = trigger['type'].is_a?(Hash) ? trigger['type'] : nil
+ case trigger['trigger_type']
+ when :TASK_TIME_TRIGGER_DAILY
+ if tmp && tmp['days_interval']
+ trigger_type_union[:Daily][:DaysInterval] = tmp['days_interval']
+ end
+ when :TASK_TIME_TRIGGER_WEEKLY
+ if tmp && tmp['weeks_interval'] && tmp['days_of_week']
+ trigger_type_union[:Weekly][:WeeksInterval] = tmp['weeks_interval']
+ trigger_type_union[:Weekly][:rgfDaysOfTheWeek] = tmp['days_of_week']
+ end
+ when :TASK_TIME_TRIGGER_MONTHLYDATE
+ if tmp && tmp['months'] && tmp['days']
+ trigger_type_union[:MonthlyDate][:rgfDays] = tmp['days']
+ trigger_type_union[:MonthlyDate][:rgfMonths] = tmp['months']
+ end
+ when :TASK_TIME_TRIGGER_MONTHLYDOW
+ if tmp && tmp['weeks'] && tmp['days_of_week'] && tmp['months']
+ trigger_type_union[:MonthlyDOW][:wWhichWeek] = tmp['weeks']
+ trigger_type_union[:MonthlyDOW][:rgfDaysOfTheWeek] = tmp['days_of_week']
+ trigger_type_union[:MonthlyDOW][:rgfMonths] = tmp['months']
+ end
+ when :TASK_TIME_TRIGGER_ONCE
+ # Do nothing. The Type member of the TASK_TRIGGER struct is ignored.
+ else
+ raise Error.new("Unknown trigger type #{trigger['trigger_type']}")
+ end
+
+ trigger_struct = COM::TASK_TRIGGER.new(trigger_ptr)
+ trigger_struct[:cbTriggerSize] = COM::TASK_TRIGGER.size
+ trigger_struct[:wBeginYear] = trigger['start_year'] || 0
+ trigger_struct[:wBeginMonth] = trigger['start_month'] || 0
+ trigger_struct[:wBeginDay] = trigger['start_day'] || 0
+ trigger_struct[:wEndYear] = trigger['end_year'] || 0
+ trigger_struct[:wEndMonth] = trigger['end_month'] || 0
+ trigger_struct[:wEndDay] = trigger['end_day'] || 0
+ trigger_struct[:wStartHour] = trigger['start_hour'] || 0
+ trigger_struct[:wStartMinute] = trigger['start_minute'] || 0
+ trigger_struct[:MinutesDuration] = trigger['minutes_duration'] || 0
+ trigger_struct[:MinutesInterval] = trigger['minutes_interval'] || 0
+ trigger_struct[:rgFlags] = trigger['flags'] || 0
+ trigger_struct[:TriggerType] = trigger['trigger_type'] || :TASK_TIME_TRIGGER_ONCE
+ trigger_struct[:Type] = trigger_type_union
+ trigger_struct[:wRandomMinutesInterval] = trigger['random_minutes_interval'] || 0
+
+ task_trigger.SetTrigger(trigger_struct)
+ end
+ end
+ end
+
+ def populate_hash_from_trigger(task_trigger)
+ raise TypeError unless task_trigger.is_a?(COM::TASK_TRIGGER)
+
+ trigger = {
+ 'start_year' => task_trigger[:wBeginYear],
+ 'start_month' => task_trigger[:wBeginMonth],
+ 'start_day' => task_trigger[:wBeginDay],
+ 'end_year' => task_trigger[:wEndYear],
+ 'end_month' => task_trigger[:wEndMonth],
+ 'end_day' => task_trigger[:wEndDay],
+ 'start_hour' => task_trigger[:wStartHour],
+ 'start_minute' => task_trigger[:wStartMinute],
+ 'minutes_duration' => task_trigger[:MinutesDuration],
+ 'minutes_interval' => task_trigger[:MinutesInterval],
+ 'flags' => task_trigger[:rgFlags],
+ 'trigger_type' => task_trigger[:TriggerType],
+ 'random_minutes_interval' => task_trigger[:wRandomMinutesInterval]
+ }
+
+ case task_trigger[:TriggerType]
+ when :TASK_TIME_TRIGGER_DAILY
+ trigger['type'] = { 'days_interval' => task_trigger[:Type][:Daily][:DaysInterval] }
+ when :TASK_TIME_TRIGGER_WEEKLY
+ trigger['type'] = {
+ 'weeks_interval' => task_trigger[:Type][:Weekly][:WeeksInterval],
+ 'days_of_week' => task_trigger[:Type][:Weekly][:rgfDaysOfTheWeek]
+ }
+ when :TASK_TIME_TRIGGER_MONTHLYDATE
+ trigger['type'] = {
+ 'days' => task_trigger[:Type][:MonthlyDate][:rgfDays],
+ 'months' => task_trigger[:Type][:MonthlyDate][:rgfMonths]
+ }
+ when :TASK_TIME_TRIGGER_MONTHLYDOW
+ trigger['type'] = {
+ 'weeks' => task_trigger[:Type][:MonthlyDOW][:wWhichWeek],
+ 'days_of_week' => task_trigger[:Type][:MonthlyDOW][:rgfDaysOfTheWeek],
+ 'months' => task_trigger[:Type][:MonthlyDOW][:rgfMonths]
+ }
+ when :TASK_TIME_TRIGGER_ONCE
+ trigger['type'] = { 'once' => nil }
+ else
+ raise Error.new("Unknown trigger type #{task_trigger[:TriggerType]}")
+ end
+
+ trigger
+ end
+
+ module COM
+ extend FFI::Library
+ private
+
+ com = Puppet::Util::Windows::COM
+
+ public
+
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/aa381811(v=vs.85).aspx
+ ITaskScheduler = com::Interface[com::IUnknown,
+ FFI::WIN32::GUID['148BD527-A2AB-11CE-B11F-00AA00530503'],
+
+ SetTargetComputer: [[:lpcwstr], :hresult],
+ # LPWSTR *
+ GetTargetComputer: [[:pointer], :hresult],
+ # IEnumWorkItems **
+ Enum: [[:pointer], :hresult],
+ # LPCWSTR, REFIID, IUnknown **
+ Activate: [[:lpcwstr, :pointer, :pointer], :hresult],
+ Delete: [[:lpcwstr], :hresult],
+ # LPCWSTR, REFCLSID, REFIID, IUnknown **
+ NewWorkItem: [[:lpcwstr, :pointer, :pointer, :pointer], :hresult],
+ # LPCWSTR, IScheduledWorkItem *
+ AddWorkItem: [[:lpcwstr, :pointer], :hresult],
+ # LPCWSTR, REFIID
+ IsOfType: [[:lpcwstr, :pointer], :hresult]
+ ]
+
+ TaskScheduler = com::Factory[ITaskScheduler,
+ FFI::WIN32::GUID['148BD52A-A2AB-11CE-B11F-00AA00530503']]
+
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/aa380706(v=vs.85).aspx
+ IEnumWorkItems = com::Interface[com::IUnknown,
+ FFI::WIN32::GUID['148BD528-A2AB-11CE-B11F-00AA00530503'],
+
+ # ULONG, LPWSTR **, ULONG *
+ Next: [[:win32_ulong, :pointer, :pointer], :hresult],
+ Skip: [[:win32_ulong], :hresult],
+ Reset: [[], :hresult],
+ # IEnumWorkItems ** ppEnumWorkItems
+ Clone: [[:pointer], :hresult]
+ ]
+
+ EnumWorkItems = com::Instance[IEnumWorkItems]
+
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/aa381216(v=vs.85).aspx
+ IScheduledWorkItem = com::Interface[com::IUnknown,
+ FFI::WIN32::GUID['a6b952f0-a4b1-11d0-997d-00aa006887ec'],
+
+ # WORD *, ITaskTrigger **
+ CreateTrigger: [[:pointer, :pointer], :hresult],
+ DeleteTrigger: [[:word], :hresult],
+ # WORD *
+ GetTriggerCount: [[:pointer], :hresult],
+ # WORD, ITaskTrigger **
+ GetTrigger: [[:word, :pointer], :hresult],
+ # WORD, LPWSTR *
+ GetTriggerString: [[:word, :pointer], :hresult],
+ # LPSYSTEMTIME, LPSYSTEMTIME, WORD *, LPSYSTEMTIME *
+ GetRunTimes: [[:pointer, :pointer, :pointer, :pointer], :hresult],
+ # SYSTEMTIME *
+ GetNextRunTime: [[:pointer], :hresult],
+ SetIdleWait: [[:word, :word], :hresult],
+ # WORD *, WORD *
+ GetIdleWait: [[:pointer, :pointer], :hresult],
+ Run: [[], :hresult],
+ Terminate: [[], :hresult],
+ EditWorkItem: [[:hwnd, :dword], :hresult],
+ # SYSTEMTIME *
+ GetMostRecentRunTime: [[:pointer], :hresult],
+ # HRESULT *
+ GetStatus: [[:pointer], :hresult],
+ GetExitCode: [[:pdword], :hresult],
+ SetComment: [[:lpcwstr], :hresult],
+ # LPWSTR *
+ GetComment: [[:pointer], :hresult],
+ SetCreator: [[:lpcwstr], :hresult],
+ # LPWSTR *
+ GetCreator: [[:pointer], :hresult],
+ # WORD, BYTE[]
+ SetWorkItemData: [[:word, :buffer_in], :hresult],
+ # WORD *, BYTE **
+ GetWorkItemData: [[:pointer, :pointer], :hresult],
+ SetErrorRetryCount: [[:word], :hresult],
+ # WORD *
+ GetErrorRetryCount: [[:pointer], :hresult],
+ SetErrorRetryInterval: [[:word], :hresult],
+ # WORD *
+ GetErrorRetryInterval: [[:pointer], :hresult],
+ SetFlags: [[:dword], :hresult],
+ # WORD *
+ GetFlags: [[:pointer], :hresult],
+ SetAccountInformation: [[:lpcwstr, :lpcwstr], :hresult],
+ # LPWSTR *
+ GetAccountInformation: [[:pointer], :hresult]
+ ]
+
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/aa381311(v=vs.85).aspx
+ ITask = com::Interface[IScheduledWorkItem,
+ FFI::WIN32::GUID['148BD524-A2AB-11CE-B11F-00AA00530503'],
+
+ SetApplicationName: [[:lpcwstr], :hresult],
+ # LPWSTR *
+ GetApplicationName: [[:pointer], :hresult],
+ SetParameters: [[:lpcwstr], :hresult],
+ # LPWSTR *
+ GetParameters: [[:pointer], :hresult],
+ SetWorkingDirectory: [[:lpcwstr], :hresult],
+ # LPWSTR *
+ GetWorkingDirectory: [[:pointer], :hresult],
+ SetPriority: [[:dword], :hresult],
+ # DWORD *
+ GetPriority: [[:pointer], :hresult],
+ SetTaskFlags: [[:dword], :hresult],
+ # DWORD *
+ GetTaskFlags: [[:pointer], :hresult],
+ SetMaxRunTime: [[:dword], :hresult],
+ # DWORD *
+ GetMaxRunTime: [[:pointer], :hresult]
+ ]
+
+ Task = com::Instance[ITask]
+
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/ms688695(v=vs.85).aspx
+ IPersist = com::Interface[com::IUnknown,
+ FFI::WIN32::GUID['0000010c-0000-0000-c000-000000000046'],
+ # CLSID *
+ GetClassID: [[:pointer], :hresult]
+ ]
+
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/ms687223(v=vs.85).aspx
+ IPersistFile = com::Interface[IPersist,
+ FFI::WIN32::GUID['0000010b-0000-0000-C000-000000000046'],
+
+ IsDirty: [[], :hresult],
+ Load: [[:lpcolestr, :dword], :hresult],
+ Save: [[:lpcolestr, :win32_bool], :hresult],
+ SaveCompleted: [[:lpcolestr], :hresult],
+ # LPOLESTR *
+ GetCurFile: [[:pointer], :hresult]
+ ]
+
+ PersistFile = com::Instance[IPersistFile]
+
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/aa381864(v=vs.85).aspx
+ ITaskTrigger = com::Interface[com::IUnknown,
+ FFI::WIN32::GUID['148BD52B-A2AB-11CE-B11F-00AA00530503'],
+
+ SetTrigger: [[:pointer], :hresult],
+ GetTrigger: [[:pointer], :hresult],
+ GetTriggerString: [[:pointer], :hresult]
+ ]
+
+ TaskTrigger = com::Instance[ITaskTrigger]
+
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/aa383620(v=vs.85).aspx
+ # The TASK_TRIGGER_TYPE field of the TASK_TRIGGER structure determines
+ # which member of the TRIGGER_TYPE_UNION field to use.
+ TASK_TRIGGER_TYPE = enum(
+ :TASK_TIME_TRIGGER_ONCE, 0, # Ignore the Type field
+ :TASK_TIME_TRIGGER_DAILY, 1,
+ :TASK_TIME_TRIGGER_WEEKLY, 2,
+ :TASK_TIME_TRIGGER_MONTHLYDATE, 3,
+ :TASK_TIME_TRIGGER_MONTHLYDOW, 4,
+ :TASK_EVENT_TRIGGER_ON_IDLE, 5, # Ignore the Type field
+ :TASK_EVENT_TRIGGER_AT_SYSTEMSTART, 6, # Ignore the Type field
+ :TASK_EVENT_TRIGGER_AT_LOGON, 7 # Ignore the Type field
+ )
+
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/aa446857(v=vs.85).aspx
+ class DAILY < FFI::Struct
+ layout :DaysInterval, :word
+ end
+
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/aa384014(v=vs.85).aspx
+ class WEEKLY < FFI::Struct
+ layout :WeeksInterval, :word,
+ :rgfDaysOfTheWeek, :word
+ end
+
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/aa381918(v=vs.85).aspx
+ class MONTHLYDATE < FFI::Struct
+ layout :rgfDays, :dword,
+ :rgfMonths, :word
+ end
+
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/aa381918(v=vs.85).aspx
+ class MONTHLYDOW < FFI::Struct
+ layout :wWhichWeek, :word,
+ :rgfDaysOfTheWeek, :word,
+ :rgfMonths, :word
+ end
+
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/aa384002(v=vs.85).aspx
+ class TRIGGER_TYPE_UNION < FFI::Union
+ layout :Daily, DAILY,
+ :Weekly, WEEKLY,
+ :MonthlyDate, MONTHLYDATE,
+ :MonthlyDOW, MONTHLYDOW
+ end
+
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/aa383618(v=vs.85).aspx
+ class TASK_TRIGGER < FFI::Struct
+ layout :cbTriggerSize, :word, # Structure size.
+ :Reserved1, :word, # Reserved. Must be zero.
+ :wBeginYear, :word, # Trigger beginning date year.
+ :wBeginMonth, :word, # Trigger beginning date month.
+ :wBeginDay, :word, # Trigger beginning date day.
+ :wEndYear, :word, # Optional trigger ending date year.
+ :wEndMonth, :word, # Optional trigger ending date month.
+ :wEndDay, :word, # Optional trigger ending date day.
+ :wStartHour, :word, # Run bracket start time hour.
+ :wStartMinute, :word, # Run bracket start time minute.
+ :MinutesDuration, :dword, # Duration of run bracket.
+ :MinutesInterval, :dword, # Run bracket repetition interval.
+ :rgFlags, :dword, # Trigger flags.
+ :TriggerType, TASK_TRIGGER_TYPE, # Trigger type.
+ :Type, TRIGGER_TYPE_UNION, # Trigger data.
+ :Reserved2, :word, # Reserved. Must be zero.
+ :wRandomMinutesInterval, :word # Maximum number of random minutes after start time
+ end
+ end
+ end
+end
diff --git a/lib/puppet/util/windows/user.rb b/lib/puppet/util/windows/user.rb
index 51eef779d..a2277f66c 100644
--- a/lib/puppet/util/windows/user.rb
+++ b/lib/puppet/util/windows/user.rb
@@ -1,108 +1,292 @@
require 'puppet/util/windows'
-require 'win32/security'
require 'facter'
+require 'ffi'
module Puppet::Util::Windows::User
- include ::Windows::Security
- extend ::Windows::Security
+ extend Puppet::Util::Windows::String
+ extend FFI::Library
def admin?
majversion = Facter.value(:kernelmajversion)
return false unless majversion
# if Vista or later, check for unrestricted process token
- return Win32::Security.elevated_security? unless majversion.to_f < 6.0
+ return Puppet::Util::Windows::Process.elevated_security? unless majversion.to_f < 6.0
# otherwise 2003 or less
check_token_membership
end
module_function :admin?
+
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/ee207397(v=vs.85).aspx
+ SECURITY_MAX_SID_SIZE = 68
+
def check_token_membership
- sid = 0.chr * 80
- size = [80].pack('L')
- member = 0.chr * 4
+ is_admin = false
+ FFI::MemoryPointer.new(:byte, SECURITY_MAX_SID_SIZE) do |sid_pointer|
+ FFI::MemoryPointer.new(:dword, 1) do |size_pointer|
+ size_pointer.write_uint32(SECURITY_MAX_SID_SIZE)
- unless CreateWellKnownSid(WinBuiltinAdministratorsSid, nil, sid, size)
- raise Puppet::Util::Windows::Error.new("Failed to create administrators SID")
- end
+ if CreateWellKnownSid(:WinBuiltinAdministratorsSid, FFI::Pointer::NULL, sid_pointer, size_pointer) == FFI::WIN32_FALSE
+ raise Puppet::Util::Windows::Error.new("Failed to create administrators SID")
+ end
+ end
- unless IsValidSid(sid)
- raise Puppet::Util::Windows::Error.new("Invalid SID")
- end
+ if IsValidSid(sid_pointer) == FFI::WIN32_FALSE
+ raise Puppet::Util::Windows::Error.new("Invalid SID")
+ end
+
+ FFI::MemoryPointer.new(:win32_bool, 1) do |ismember_pointer|
+ if CheckTokenMembership(FFI::Pointer::NULL_HANDLE, sid_pointer, ismember_pointer) == FFI::WIN32_FALSE
+ raise Puppet::Util::Windows::Error.new("Failed to check membership")
+ end
- unless CheckTokenMembership(nil, sid, member)
- raise Puppet::Util::Windows::Error.new("Failed to check membership")
+ # Is administrators SID enabled in calling thread's access token?
+ is_admin = ismember_pointer.read_win32_bool != FFI::WIN32_FALSE
+ end
end
- # Is administrators SID enabled in calling thread's access token?
- member.unpack('L')[0] == 1
+ is_admin
end
module_function :check_token_membership
def password_is?(name, password)
- logon_user(name, password)
- true
+ logon_user(name, password) { |token| }
rescue Puppet::Util::Windows::Error
false
end
module_function :password_is?
def logon_user(name, password, &block)
fLOGON32_LOGON_NETWORK = 3
fLOGON32_PROVIDER_DEFAULT = 0
- logon_user = Win32API.new("advapi32", "LogonUser", ['P', 'P', 'P', 'L', 'L', 'P'], 'L')
- close_handle = Win32API.new("kernel32", "CloseHandle", ['L'], 'B')
-
- token = 0.chr * 4
- if logon_user.call(name, ".", password, fLOGON32_LOGON_NETWORK, fLOGON32_PROVIDER_DEFAULT, token) == 0
- raise Puppet::Util::Windows::Error.new("Failed to logon user #{name.inspect}")
- end
-
- token = token.unpack('L')[0]
+ token = nil
begin
- yield token if block_given?
+ FFI::MemoryPointer.new(:handle, 1) do |token_pointer|
+ if LogonUserW(wide_string(name), wide_string('.'), wide_string(password),
+ fLOGON32_LOGON_NETWORK, fLOGON32_PROVIDER_DEFAULT, token_pointer) == FFI::WIN32_FALSE
+ raise Puppet::Util::Windows::Error.new("Failed to logon user #{name.inspect}")
+ end
+
+ yield token = token_pointer.read_handle
+ end
ensure
- close_handle.call(token)
+ FFI::WIN32.CloseHandle(token) if token
end
+
+ # token has been closed by this point
+ true
end
module_function :logon_user
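+ # Illustrative call (editorial sketch; the credentials are placeholders).
+ # The token handle yielded to the block is closed before logon_user returns.
+ #
+ #   Puppet::Util::Windows::User.logon_user('someuser', 'secret') do |token|
+ #     # use the raw token handle while it is still open
+ #   end
+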
def load_profile(user, password)
logon_user(user, password) do |token|
- # Set up the PROFILEINFO structure that will be used to load the
- # new user's profile
- # typedef struct _PROFILEINFO {
- # DWORD dwSize;
- # DWORD dwFlags;
- # LPTSTR lpUserName;
- # LPTSTR lpProfilePath;
- # LPTSTR lpDefaultPath;
- # LPTSTR lpServerName;
- # LPTSTR lpPolicyPath;
- # HANDLE hProfile;
- # } PROFILEINFO, *LPPROFILEINFO;
- fPI_NOUI = 1
- profile = 0.chr * 4
- pi = [4 * 8, fPI_NOUI, user, nil, nil, nil, nil, profile].pack('LLPPPPPP')
-
- load_user_profile = Win32API.new('userenv', 'LoadUserProfile', ['L', 'P'], 'L')
- unload_user_profile = Win32API.new('userenv', 'UnloadUserProfile', ['L', 'L'], 'L')
-
- # Load the profile. Since it doesn't exist, it will be created
- if load_user_profile.call(token, pi) == 0
- raise Puppet::Util::Windows::Error.new("Failed to load user profile #{user.inspect}")
- end
+ FFI::MemoryPointer.from_string_to_wide_string(user) do |lpUserName|
+ pi = PROFILEINFO.new
+ pi[:dwSize] = PROFILEINFO.size
+ pi[:dwFlags] = 1 # PI_NOUI - prevents display of profile error msgs
+ pi[:lpUserName] = lpUserName
+
+ # Load the profile. Since it doesn't exist, it will be created
+ if LoadUserProfileW(token, pi.pointer) == FFI::WIN32_FALSE
+ raise Puppet::Util::Windows::Error.new("Failed to load user profile #{user.inspect}")
+ end
- Puppet.debug("Loaded profile for #{user}")
+ Puppet.debug("Loaded profile for #{user}")
- profile = pi.unpack('LLLLLLLL').last
- if unload_user_profile.call(token, profile) == 0
- raise Puppet::Util::Windows::Error.new("Failed to unload user profile #{user.inspect}")
+ if UnloadUserProfile(token, pi[:hProfile]) == FFI::WIN32_FALSE
+ raise Puppet::Util::Windows::Error.new("Failed to unload user profile #{user.inspect}")
+ end
end
end
end
module_function :load_profile
+
+ ffi_convention :stdcall
+
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/aa378184(v=vs.85).aspx
+ # BOOL LogonUser(
+ # _In_ LPTSTR lpszUsername,
+ # _In_opt_ LPTSTR lpszDomain,
+ # _In_opt_ LPTSTR lpszPassword,
+ # _In_ DWORD dwLogonType,
+ # _In_ DWORD dwLogonProvider,
+ # _Out_ PHANDLE phToken
+ # );
+ ffi_lib :advapi32
+ attach_function_private :LogonUserW,
+ [:lpwstr, :lpwstr, :lpwstr, :dword, :dword, :phandle], :win32_bool
+
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/bb773378(v=vs.85).aspx
+ # typedef struct _PROFILEINFO {
+ # DWORD dwSize;
+ # DWORD dwFlags;
+ # LPTSTR lpUserName;
+ # LPTSTR lpProfilePath;
+ # LPTSTR lpDefaultPath;
+ # LPTSTR lpServerName;
+ # LPTSTR lpPolicyPath;
+ # HANDLE hProfile;
+ # } PROFILEINFO, *LPPROFILEINFO;
+ # NOTE: for structs, the buffer_* types (lptstr aliases) cannot be used
+ class PROFILEINFO < FFI::Struct
+ layout :dwSize, :dword,
+ :dwFlags, :dword,
+ :lpUserName, :pointer,
+ :lpProfilePath, :pointer,
+ :lpDefaultPath, :pointer,
+ :lpServerName, :pointer,
+ :lpPolicyPath, :pointer,
+ :hProfile, :handle
+ end
+
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/bb762281(v=vs.85).aspx
+ # BOOL WINAPI LoadUserProfile(
+ # _In_ HANDLE hToken,
+ # _Inout_ LPPROFILEINFO lpProfileInfo
+ # );
+ ffi_lib :userenv
+ attach_function_private :LoadUserProfileW,
+ [:handle, :pointer], :win32_bool
+
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/bb762282(v=vs.85).aspx
+ # BOOL WINAPI UnloadUserProfile(
+ # _In_ HANDLE hToken,
+ # _In_ HANDLE hProfile
+ # );
+ ffi_lib :userenv
+ attach_function_private :UnloadUserProfile,
+ [:handle, :handle], :win32_bool
+
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/aa376389(v=vs.85).aspx
+ # BOOL WINAPI CheckTokenMembership(
+ # _In_opt_ HANDLE TokenHandle,
+ # _In_ PSID SidToCheck,
+ # _Out_ PBOOL IsMember
+ # );
+ ffi_lib :advapi32
+ attach_function_private :CheckTokenMembership,
+ [:handle, :pointer, :pbool], :win32_bool
+
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/aa379650(v=vs.85).aspx
+ WELL_KNOWN_SID_TYPE = enum(
+ :WinNullSid , 0,
+ :WinWorldSid , 1,
+ :WinLocalSid , 2,
+ :WinCreatorOwnerSid , 3,
+ :WinCreatorGroupSid , 4,
+ :WinCreatorOwnerServerSid , 5,
+ :WinCreatorGroupServerSid , 6,
+ :WinNtAuthoritySid , 7,
+ :WinDialupSid , 8,
+ :WinNetworkSid , 9,
+ :WinBatchSid , 10,
+ :WinInteractiveSid , 11,
+ :WinServiceSid , 12,
+ :WinAnonymousSid , 13,
+ :WinProxySid , 14,
+ :WinEnterpriseControllersSid , 15,
+ :WinSelfSid , 16,
+ :WinAuthenticatedUserSid , 17,
+ :WinRestrictedCodeSid , 18,
+ :WinTerminalServerSid , 19,
+ :WinRemoteLogonIdSid , 20,
+ :WinLogonIdsSid , 21,
+ :WinLocalSystemSid , 22,
+ :WinLocalServiceSid , 23,
+ :WinNetworkServiceSid , 24,
+ :WinBuiltinDomainSid , 25,
+ :WinBuiltinAdministratorsSid , 26,
+ :WinBuiltinUsersSid , 27,
+ :WinBuiltinGuestsSid , 28,
+ :WinBuiltinPowerUsersSid , 29,
+ :WinBuiltinAccountOperatorsSid , 30,
+ :WinBuiltinSystemOperatorsSid , 31,
+ :WinBuiltinPrintOperatorsSid , 32,
+ :WinBuiltinBackupOperatorsSid , 33,
+ :WinBuiltinReplicatorSid , 34,
+ :WinBuiltinPreWindows2000CompatibleAccessSid , 35,
+ :WinBuiltinRemoteDesktopUsersSid , 36,
+ :WinBuiltinNetworkConfigurationOperatorsSid , 37,
+ :WinAccountAdministratorSid , 38,
+ :WinAccountGuestSid , 39,
+ :WinAccountKrbtgtSid , 40,
+ :WinAccountDomainAdminsSid , 41,
+ :WinAccountDomainUsersSid , 42,
+ :WinAccountDomainGuestsSid , 43,
+ :WinAccountComputersSid , 44,
+ :WinAccountControllersSid , 45,
+ :WinAccountCertAdminsSid , 46,
+ :WinAccountSchemaAdminsSid , 47,
+ :WinAccountEnterpriseAdminsSid , 48,
+ :WinAccountPolicyAdminsSid , 49,
+ :WinAccountRasAndIasServersSid , 50,
+ :WinNTLMAuthenticationSid , 51,
+ :WinDigestAuthenticationSid , 52,
+ :WinSChannelAuthenticationSid , 53,
+ :WinThisOrganizationSid , 54,
+ :WinOtherOrganizationSid , 55,
+ :WinBuiltinIncomingForestTrustBuildersSid , 56,
+ :WinBuiltinPerfMonitoringUsersSid , 57,
+ :WinBuiltinPerfLoggingUsersSid , 58,
+ :WinBuiltinAuthorizationAccessSid , 59,
+ :WinBuiltinTerminalServerLicenseServersSid , 60,
+ :WinBuiltinDCOMUsersSid , 61,
+ :WinBuiltinIUsersSid , 62,
+ :WinIUserSid , 63,
+ :WinBuiltinCryptoOperatorsSid , 64,
+ :WinUntrustedLabelSid , 65,
+ :WinLowLabelSid , 66,
+ :WinMediumLabelSid , 67,
+ :WinHighLabelSid , 68,
+ :WinSystemLabelSid , 69,
+ :WinWriteRestrictedCodeSid , 70,
+ :WinCreatorOwnerRightsSid , 71,
+ :WinCacheablePrincipalsGroupSid , 72,
+ :WinNonCacheablePrincipalsGroupSid , 73,
+ :WinEnterpriseReadonlyControllersSid , 74,
+ :WinAccountReadonlyControllersSid , 75,
+ :WinBuiltinEventLogReadersGroup , 76,
+ :WinNewEnterpriseReadonlyControllersSid , 77,
+ :WinBuiltinCertSvcDComAccessGroup , 78,
+ :WinMediumPlusLabelSid , 79,
+ :WinLocalLogonSid , 80,
+ :WinConsoleLogonSid , 81,
+ :WinThisOrganizationCertificateSid , 82,
+ :WinApplicationPackageAuthoritySid , 83,
+ :WinBuiltinAnyPackageSid , 84,
+ :WinCapabilityInternetClientSid , 85,
+ :WinCapabilityInternetClientServerSid , 86,
+ :WinCapabilityPrivateNetworkClientServerSid , 87,
+ :WinCapabilityPicturesLibrarySid , 88,
+ :WinCapabilityVideosLibrarySid , 89,
+ :WinCapabilityMusicLibrarySid , 90,
+ :WinCapabilityDocumentsLibrarySid , 91,
+ :WinCapabilitySharedUserCertificatesSid , 92,
+ :WinCapabilityEnterpriseAuthenticationSid , 93,
+ :WinCapabilityRemovableStorageSid , 94
+ )
+
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/aa446585(v=vs.85).aspx
+ # BOOL WINAPI CreateWellKnownSid(
+ # _In_ WELL_KNOWN_SID_TYPE WellKnownSidType,
+ # _In_opt_ PSID DomainSid,
+ # _Out_opt_ PSID pSid,
+ # _Inout_ DWORD *cbSid
+ # );
+ ffi_lib :advapi32
+ attach_function_private :CreateWellKnownSid,
+ [WELL_KNOWN_SID_TYPE, :pointer, :pointer, :lpdword], :win32_bool
+
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/aa379151(v=vs.85).aspx
+ # BOOL WINAPI IsValidSid(
+ # _In_ PSID pSid
+ # );
+ ffi_lib :advapi32
+ attach_function_private :IsValidSid,
+ [:pointer], :win32_bool
end
diff --git a/lib/puppet/vendor.rb b/lib/puppet/vendor.rb
index fdaf5c053..7c6be4e80 100644
--- a/lib/puppet/vendor.rb
+++ b/lib/puppet/vendor.rb
@@ -1,55 +1,57 @@
module Puppet
# Simple module to manage vendored code.
#
# To vendor a library:
#
# * Download its whole git repo or untar into `lib/puppet/vendor/<libname>`
- # * Create a lib/puppetload_libraryname.rb file to add its libdir into the $:.
+ # * Create a vendor/load_<libraryname>.rb file to add its libdir into the $:.
# (Look at existing load_xxx files, they should all follow the same pattern).
+ # * Add a <libname>/PUPPET_README.md file describing what the library is for
+ # and where it comes from.
# * To load the vendored lib upfront, add a `require '<vendorlib>'`line to
# `vendor/require_vendored.rb`.
# * To load the vendored lib on demand, add a comment to `vendor/require_vendored.rb`
# to make it clear it should not be loaded upfront.
#
# At runtime, the #load_vendored method should be called. It will ensure
# all vendored libraries are added to the global `$:` path, and
# will then execute the up-front loading specified in `vendor/require_vendored.rb`.
#
# The intention is to not change vendored libraries and to eventually
# make adding them optional so that distros can simply adjust their
# packaging to exclude this directory and the various load_xxx.rb scripts
# if they wish to install these gems as native packages.
#
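+ # As an example, the vendored pathspec library is made loadable by a
+ # lib/puppet/vendor/load_pathspec.rb file containing only:
+ #
+ #   $: << File.join([File.dirname(__FILE__), "pathspec/lib"])
+ #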
class Vendor
class << self
# @api private
def vendor_dir
File.join([File.dirname(File.expand_path(__FILE__)), "vendor"])
end
# @api private
def load_entry(entry)
Puppet.debug("Loading vendored #{$1}")
load "#{vendor_dir}/#{entry}"
end
# @api private
def require_libs
require 'puppet/vendor/require_vendored'
end
# Configures the path for all vendored libraries and loads required libraries.
# (This is the entry point for loading vendored libraries).
#
def load_vendored
Dir.entries(vendor_dir).each do |entry|
if entry.match(/load_(\w+?)\.rb$/)
load_entry entry
end
end
require_libs
end
end
end
end
diff --git a/lib/puppet/vendor/load_pathspec.rb b/lib/puppet/vendor/load_pathspec.rb
new file mode 100644
index 000000000..afdbc2279
--- /dev/null
+++ b/lib/puppet/vendor/load_pathspec.rb
@@ -0,0 +1 @@
+$: << File.join([File.dirname(__FILE__), "pathspec/lib"])
diff --git a/lib/puppet/vendor/load_rgen.rb b/lib/puppet/vendor/load_rgen.rb
new file mode 100644
index 000000000..ba1f1c5c7
--- /dev/null
+++ b/lib/puppet/vendor/load_rgen.rb
@@ -0,0 +1 @@
+$: << File.join([File.dirname(__FILE__), "rgen/lib"])
diff --git a/lib/puppet/vendor/pathspec/CHANGELOG.md b/lib/puppet/vendor/pathspec/CHANGELOG.md
new file mode 100644
index 000000000..e4fe66d91
--- /dev/null
+++ b/lib/puppet/vendor/pathspec/CHANGELOG.md
@@ -0,0 +1,2 @@
+0.0.2: Misc. Windows/regex fixes.
+0.0.1: Initial version.
diff --git a/lib/puppet/vendor/pathspec/LICENSE b/lib/puppet/vendor/pathspec/LICENSE
new file mode 100644
index 000000000..5c304d1a4
--- /dev/null
+++ b/lib/puppet/vendor/pathspec/LICENSE
@@ -0,0 +1,201 @@
+Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "{}"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright {yyyy} {name of copyright owner}
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
diff --git a/lib/puppet/vendor/pathspec/PUPPET_README.md b/lib/puppet/vendor/pathspec/PUPPET_README.md
new file mode 100644
index 000000000..e7ceddec7
--- /dev/null
+++ b/lib/puppet/vendor/pathspec/PUPPET_README.md
@@ -0,0 +1,6 @@
+Pathspec-ruby - Gitignore parsing in Ruby
+=============================================
+
+Pathspec-ruby version 0.0.2
+
+Copied from https://github.com/highb/pathspec-ruby/releases/tag/0.0.2
diff --git a/lib/puppet/vendor/pathspec/README.md b/lib/puppet/vendor/pathspec/README.md
new file mode 100644
index 000000000..ecbd64d3d
--- /dev/null
+++ b/lib/puppet/vendor/pathspec/README.md
@@ -0,0 +1,53 @@
+pathspec-ruby
+=============
+
+Match Path Specifications, such as .gitignore, in Ruby!
+
+Follows .gitignore syntax defined on [gitscm](http://git-scm.com/docs/gitignore)
+
+.gitignore functionality ported from [Python pathspec](https://pypi.python.org/pypi/pathspec/0.2.2) by [@cpburnz](https://github.com/cpburnz/python-path-specification)
+
+[Travis Status](https://travis-ci.org/highb/pathspec-ruby) ![Travis CI Status](https://travis-ci.org/highb/pathspec-ruby.svg?branch=master)
+
+## Build/Install from Rubygems
+```shell
+gem install pathspec
+```
+
+## Usage
+```ruby
+require 'pathspec'
+
+# Create a .gitignore-style Pathspec by giving it newline separated gitignore
+# lines, an array of gitignore lines, or any other enumerable object that will
+# give strings matching the .gitignore-style (File, etc.)
+gitignore = PathSpec.new File.read('.gitignore')
+
+# Our .gitignore in this example contains:
+# !**/important.txt
+# abc/**
+
+# true, matches "abc/**"
+gitignore.match 'abc/def.rb'
+
+# false, because it has been negated using the line "!**/important.txt"
+gitignore.match 'abc/important.txt'
+
+# Give a path somewhere in the filesystem, and the Pathspec will return all
+# matching files underneath.
+# Returns ['/src/repo/abc/', '/src/repo/abc/123']
+gitignore.match_tree '/src/repo'
+
+# Give an enumerable of paths, and Pathspec will return the ones that match.
+# Returns ['/abc/123', '/abc/']
+gitignore.match_paths ['/abc/123', '/abc/important.txt', '/abc/']
+```
+
+## Building/Installing from Source
+```shell
+git clone git@github.com:highb/pathspec-ruby.git
+cd pathspec-ruby && bash ./build_from_source.sh
+```
+
+## Contributing
+Pull requests, bug reports, and feature requests welcome! :smile: I've tried to write exhaustive tests but who knows what cases I've missed.
diff --git a/lib/puppet/vendor/pathspec/lib/pathspec.rb b/lib/puppet/vendor/pathspec/lib/pathspec.rb
new file mode 100644
index 000000000..9d9a92837
--- /dev/null
+++ b/lib/puppet/vendor/pathspec/lib/pathspec.rb
@@ -0,0 +1,121 @@
+require 'pathspec/gitignorespec'
+require 'pathspec/regexspec'
+require 'find'
+require 'pathname'
+
+class PathSpec
+ attr_reader :specs
+
+ def initialize(lines=nil, type=:git)
+ @specs = []
+
+ if lines
+ add(lines, type)
+ end
+
+ self
+ end
+
+ # Check if a path matches the pathspecs described
+ # Returns true if there is at least one match and all matches are inclusive
+ # Returns false if there are no matches or any match is exclusive
+ def match(path)
+ matches = specs_matching(path.to_s)
+ !matches.empty? && matches.all? {|m| m.inclusive?}
+ end
+
+ def specs_matching(path)
+ @specs.select do |spec|
+ if spec.match(path)
+ spec
+ end
+ end
+ end
+
+ # Check if any files in a given directory or subdirectories match the specs
+ # Returns the matched paths, or an empty array if none matched
+ def match_tree(root)
+ rootpath = Pathname.new(root)
+ matching = []
+
+ Find.find(root) do |path|
+ relpath = Pathname.new(path).relative_path_from(rootpath).to_s
+ relpath += '/' if File.directory? path
+ if match(relpath)
+ matching << path
+ end
+ end
+
+ matching
+ end
+
+ def match_path(path, root='/')
+ rootpath = Pathname.new(drive_letter_to_path(root))
+ relpath = Pathname.new(drive_letter_to_path(path)).relative_path_from(rootpath).to_s
+ relpath = relpath + '/' if path[-1].chr == '/'
+
+ match(relpath)
+ end
+
+ def match_paths(paths, root='/')
+ matching = []
+
+ paths.each do |path|
+ if match_path(path, root)
+ matching << path
+ end
+ end
+
+ matching
+ end
+
+ def drive_letter_to_path(path)
+ path.gsub(/^([a-zA-Z]):\//, '/\1/')
+ end
+
+ # Generate specs from a filename, such as a .gitignore
+ def self.from_filename(filename, type=:git)
+ self.from_lines(File.open(filename, 'r'), type)
+ end
+
+ def self.from_lines(lines, type=:git)
+ self.new lines, type
+ end
+
+ # Generate specs from lines of text
+ def add(obj, type=:git)
+ spec_class = spec_type(type)
+
+ if obj.respond_to?(:each_line)
+ obj.each_line do |l|
+ spec = spec_class.new(l.rstrip)
+
+ if !spec.regex.nil? && !spec.inclusive?.nil?
+ @specs << spec
+ end
+ end
+ elsif obj.respond_to?(:each)
+ obj.each do |l|
+ add(l, type)
+ end
+ else
+ raise 'Cannot make Pathspec from non-string/non-enumerable object.'
+ end
+
+ self
+ end
+
+ def empty?
+ @specs.empty?
+ end
+
+ def spec_type(type)
+ if type == :git
+ GitIgnoreSpec
+ elsif type == :regex
+ RegexSpec
+ else
+ raise "Unknown spec type #{type}"
+ end
+ end
+end
diff --git a/lib/puppet/vendor/pathspec/lib/pathspec/gitignorespec.rb b/lib/puppet/vendor/pathspec/lib/pathspec/gitignorespec.rb
new file mode 100644
index 000000000..f7a94b359
--- /dev/null
+++ b/lib/puppet/vendor/pathspec/lib/pathspec/gitignorespec.rb
@@ -0,0 +1,275 @@
+# encoding: utf-8
+
+require 'pathspec/regexspec'
+
+class GitIgnoreSpec < RegexSpec
+ attr_reader :regex
+
+ def initialize(pattern)
+ pattern = pattern.strip unless pattern.nil?
+
+ # A pattern starting with a hash ('#') serves as a comment
+ # (neither includes nor excludes files). Escape the hash with a
+ # back-slash to match a literal hash (i.e., '\#').
+ if pattern.start_with?('#')
+ @regex = nil
+ @inclusive = nil
+
+ # A blank pattern is a null-operation (neither includes nor
+ # excludes files).
+ elsif pattern.empty?
+ @regex = nil
+ @inclusive = nil
+
+ # Patterns containing three or more consecutive stars are invalid and
+ # will be ignored.
+ elsif pattern =~ /\*\*\*+/
+ @regex = nil
+ @inclusive = nil
+
+ # We have a valid pattern!
+ else
+ # A pattern starting with an exclamation mark ('!') negates the
+ # pattern (exclude instead of include). Escape the exclamation
+ # mark with a back-slash to match a literal exclamation mark
+ # (i.e., '\!').
+ if pattern.start_with?('!')
+ @inclusive = false
+ # Remove leading exclamation mark.
+ pattern = pattern[1..-1]
+ else
+ @inclusive = true
+ end
+
+ # Remove leading back-slash escape for escaped hash ('#') or
+ # exclamation mark ('!').
+ if pattern.start_with?('\\')
+ pattern = pattern[1..-1]
+ end
+
+ # Split pattern into segments. -1 to allow trailing slashes.
+ pattern_segs = pattern.split('/', -1)
+
+ # Normalize pattern to make processing easier.
+
+ # A pattern beginning with a slash ('/') will only match paths
+ # directly on the root directory instead of any descendant
+ # paths. So, remove empty first segment to make pattern relative
+ # to root.
+ if pattern_segs[0].empty?
+ pattern_segs.shift
+ else
+ # A pattern without a beginning slash ('/') will match any
+ # descendant path. This is equivalent to "**/{pattern}". So,
+ # prepend with double-asterisks to make pattern relative to
+ # root.
+ if pattern_segs.length == 1 && pattern_segs[0] != '**'
+ pattern_segs.insert(0, '**')
+ end
+ end
+
+ # A pattern ending with a slash ('/') will match all descendant
+ # paths if it is a directory but not if it is a regular file.
+ # This is equivalent to "{pattern}/**". So, set last segment to
+ # double asterisks to include all descendants.
+ if pattern_segs[-1].empty?
+ pattern_segs[-1] = '**'
+ end
+
+ # Handle platforms with backslash separated paths
+ if File::SEPARATOR == '\\'
+ path_sep = '\\\\'
+ else
+ path_sep = '/'
+ end
+
+
+ # Build regular expression from pattern.
+ regex = '^'
+ need_slash = false
+ regex_end = pattern_segs.size - 1
+ pattern_segs.each_index do |i|
+ seg = pattern_segs[i]
+
+ if seg == '**'
+ # A pattern consisting solely of double-asterisks ('**')
+ # will match every path.
+ if i == 0 && i == regex_end
+ regex.concat('.+')
+
+ # A normalized pattern beginning with double-asterisks
+ # ('**') will match any leading path segments.
+ elsif i == 0
+ regex.concat("(?:.+#{path_sep})?")
+ need_slash = false
+
+ # A normalized pattern ending with double-asterisks ('**')
+ # will match any trailing path segments.
+ elsif i == regex_end
+ regex.concat("#{path_sep}.*")
+
+ # A pattern with inner double-asterisks ('**') will match
+ # multiple (or zero) inner path segments.
+ else
+ regex.concat("(?:#{path_sep}.+)?")
+ need_slash = true
+ end
+
+ # Match single path segment.
+ elsif seg == '*'
+ if need_slash
+ regex.concat(path_sep)
+ end
+
+ regex.concat("[^#{path_sep}]+")
+ need_slash = true
+
+ else
+ # Match segment glob pattern.
+ if need_slash
+ regex.concat(path_sep)
+ end
+
+ regex.concat(translate_segment_glob(seg))
+ need_slash = true
+ end
+ end
+
+ regex.concat('$')
+ super(regex)
+ end
+ end
+
+ def match(path)
+ super(path)
+ end
+
+ # Translates the glob pattern to a regular expression. This is used in
+ # the constructor to translate a path segment glob pattern to its
+ # corresponding regular expression.
+ #
+ # *pattern* (String) is the glob pattern.
+ #
+ # Returns the regular expression (String).
+ def translate_segment_glob(pattern)
+ # NOTE: This is derived from `fnmatch.translate()` and is similar to
+ # the POSIX function `fnmatch()` with the `FNM_PATHNAME` flag set.
+
+ escape = false
+ regex = ''
+ i = 0
+
+ while i < pattern.size
+ # Get next character.
+ char = pattern[i].chr
+ i += 1
+
+ # Escape the character.
+ if escape
+ escape = false
+ regex += Regexp.escape(char)
+
+ # Escape character, escape next character.
+ elsif char == '\\'
+ escape = true
+
+ # Multi-character wildcard. Match any string (except slashes),
+ # including an empty string.
+ elsif char == '*'
+ regex += '[^/]*'
+
+ # Single-character wildcard. Match any single character (except
+ # a slash).
+ elsif char == '?'
+ regex += '[^/]'
+
+ # Bracket expression wildcard. Except for the beginning
+ # exclamation mark, the whole bracket expression can be used
+ # directly as regex, but we have to find where the expression
+ # ends.
+ # - "[][!]" matches ']', '[' and '!'.
+ # - "[]-]" matches ']' and '-'.
+ # - "[!]a-]" matches any character except ']', 'a' and '-'.
+ elsif char == '['
+ j = i
+ # Skip past the bracket expression negation.
+ if j < pattern.size && pattern[j].chr == '!'
+ j += 1
+ end
+
+ # Skip the first closing bracket if it is at the beginning of the
+ # expression.
+ if j < pattern.size && pattern[j].chr == ']'
+ j += 1
+ end
+
+ # Find the closing bracket. Stop once we reach the end or find it.
+ while j < pattern.size && pattern[j].chr != ']'
+ j += 1
+ end
+
+
+ if j < pattern.size
+ expr = '['
+
+ # Bracket expression needs to be negated.
+ if pattern[i].chr == '!'
+ expr += '^'
+ i += 1
+
+ # POSIX declares that the regex bracket expression negation
+ # "[^...]" is undefined in a glob pattern. Python's
+ # `fnmatch.translate()` escapes the caret ('^') as a
+ # literal. To maintain consistency with undefined behavior,
+ # I am escaping the '^' as well.
+ elsif pattern[i].chr == '^'
+ expr += '\\^'
+ i += 1
+ end
+
+ # Escape brackets contained within pattern
+ if pattern[i].chr == ']' && i != j
+ expr += '\]'
+ i += 1
+ end
+
+
+ # Build regex bracket expression. Escape slashes so they are
+ # treated as literal slashes by regex as defined by POSIX.
+ expr += pattern[i..j].sub('\\', '\\\\')
+
+ # Add regex bracket expression to regex result.
+ regex += expr
+
+ # Found end of bracket expression. Increment j to be one past
+ # the closing bracket:
+ #
+ # [...]
+ # ^ ^
+ # i j
+ #
+ j += 1
+ # Set i to one past the closing bracket.
+ i = j
+
+ # Failed to find the closing bracket, treat the opening bracket as a
+ # bracket literal instead of as an expression.
+ else
+ regex += '\['
+ end
+
+ # Regular character, escape it for regex.
+ else
+ regex << Regexp.escape(char)
+ end
+ end
+
+ regex
+ end
+
+ def inclusive?
+ @inclusive
+ end
+end
diff --git a/lib/puppet/vendor/pathspec/lib/pathspec/regexspec.rb b/lib/puppet/vendor/pathspec/lib/pathspec/regexspec.rb
new file mode 100644
index 000000000..af043f445
--- /dev/null
+++ b/lib/puppet/vendor/pathspec/lib/pathspec/regexspec.rb
@@ -0,0 +1,17 @@
+require 'pathspec/spec'
+
+class RegexSpec < Spec
+ def initialize(regex)
+ @regex = Regexp.compile regex
+
+ super
+ end
+
+ def inclusive?
+ true
+ end
+
+ def match(path)
+ @regex.match(path) if @regex
+ end
+end
diff --git a/lib/puppet/vendor/pathspec/lib/pathspec/spec.rb b/lib/puppet/vendor/pathspec/lib/pathspec/spec.rb
new file mode 100644
index 000000000..756588b1f
--- /dev/null
+++ b/lib/puppet/vendor/pathspec/lib/pathspec/spec.rb
@@ -0,0 +1,14 @@
+class Spec
+ attr_reader :regex
+
+ def initialize(*_)
+ end
+
+ def match(files)
+ raise "Unimplemented"
+ end
+
+ def inclusive?
+ true
+ end
+end
diff --git a/lib/puppet/vendor/require_vendored.rb b/lib/puppet/vendor/require_vendored.rb
index 141c810de..d9f38fabc 100644
--- a/lib/puppet/vendor/require_vendored.rb
+++ b/lib/puppet/vendor/require_vendored.rb
@@ -1,7 +1,9 @@
# This adds upfront requirements on vendored code found under lib/vendor/x
# Add one requirement per vendored package (or a comment if it is loaded on demand).
require 'safe_yaml'
require 'puppet/vendor/safe_yaml_patches'
# The vendored library 'semantic' is loaded on demand.
+# The vendored library 'rgen' is loaded on demand.
+# The vendored library 'pathspec' is loaded on demand.
diff --git a/lib/puppet/vendor/rgen/.gitignore b/lib/puppet/vendor/rgen/.gitignore
new file mode 100644
index 000000000..d41f82329
--- /dev/null
+++ b/lib/puppet/vendor/rgen/.gitignore
@@ -0,0 +1,5 @@
+.eprj
+test/metamodel_roundtrip_test/TestModel_Regenerated.rb
+test/metamodel_roundtrip_test/houseMetamodel_Regenerated.ecore
+test/model_builder/ecore_internal.rb
+test/testmodel/ea_testmodel_regenerated.xml
diff --git a/lib/puppet/vendor/rgen/.project b/lib/puppet/vendor/rgen/.project
new file mode 100644
index 000000000..ba7e10352
--- /dev/null
+++ b/lib/puppet/vendor/rgen/.project
@@ -0,0 +1,17 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<projectDescription>
+ <name>rgen-head</name>
+ <comment></comment>
+ <projects>
+ </projects>
+ <buildSpec>
+ <buildCommand>
+ <name>org.rubypeople.rdt.core.rubybuilder</name>
+ <arguments>
+ </arguments>
+ </buildCommand>
+ </buildSpec>
+ <natures>
+ <nature>org.rubypeople.rdt.core.rubynature</nature>
+ </natures>
+</projectDescription>
diff --git a/lib/puppet/vendor/rgen/CHANGELOG b/lib/puppet/vendor/rgen/CHANGELOG
new file mode 100644
index 000000000..bac8f6d8a
--- /dev/null
+++ b/lib/puppet/vendor/rgen/CHANGELOG
@@ -0,0 +1,197 @@
+=0.1.0 (August 3rd, 2006)
+
+* First public release
+
+=0.2.0 (September 3rd, 2006)
+
+* Added model transformation language (Transformer)
+* Now RGen is distributed as a gem
+* More complete documentation
+
+=0.3.0 (October 9th, 2006)
+
+* Improved XML Instantiator (Namespaces, Resolver, Customization)
+* Added many_to_one builder method
+* Added attribute reflection to MMBase (one_attributes, many_attributes)
+* Added +copy+ method to Transformer
+* Added simple model dumper module
+* Fixed mmgen/mmgen.rb
+
+=0.4.0 (Aug 8th, 2007)
+
+* Added ECore metamodel and use it as the core metametamodel
+* Revised and extended MetamodelBuilder language
+* There is an ECore instance describing each metamodel built using MetamodelBuilder now
+* Metamodel generator is now ECore based
+* Added Ruby implementation of Boolean and Enum types
+* Switched XML Instantiator to xmlscan for performance reasons
+* Cleaned up instantiator file structure
+* Renamed RGen::XMLInstantiator into RGen::Instantiator::DefaultXMLInstantiator
+* Included xmlscan as a redistributed module
+* Added support for chardata within XML tags
+* Added (Enterprise Architect) XMI to ECore instantiator
+* Some minor fixes in NameHelper
+* Some fixes to template language
+* Added UML1.3 Metamodel
+* Added tranformation from UML1.3 to ECore
+
+=0.4.1 (Nov 25th, 2007)
+
+* Template language performance improvement
+* Bugfix: use true/false instead of symbols for boolean attribute default values in metamodel classes
+* Minor fixes on metamodel generator and ecore primitive type handling
+* Made transformer implementation non-recursive to prevent "stack level too deep" exception for large models
+* Minor fixes on EAInstantiator
+* Made transformer search for matching rules for superclasses
+* Bugfix: Enums are now added to EPackages created using the "ecore" method on a module
+* Bugfix: Metamodel generator now writes enum names
+* Performance improvement: don't require ecore transformer every time someone calls "ecore"
+* Major performance improvement of template engine (no Regexps to check \n at end of line)
+* Major performance improvement: AbstractXMLInstantiator optionally controls the garbage collector
+* Major performance improvement: ERB templates are reused in metamodel_builder
+* Added delete method to Environment
+
+=0.4.2 (Mar 2nd, 2008)
+
+* Performance improvement: collection feature of array extension uses hashes now to speed up array union
+* Performance improvement: find on environment hashes elements by class
+* Extended Transformer to allow sharing of result maps between several Transformer instances
+* Bugfix: User defined upper bound values are no longer overwritten by -1 in all "many" metamodel builder methods
+
+=0.4.3 (Aug 12th, 2008)
+
+* Performance improvement: significant speed up of metamodel reverse registration
+* Bugfix: Use object identity for metamodel to-many add/remove methods
+* Bugfix: If expand's :for expression evaluates to nil an error is generated (silently used current context before)
+* Template language indentation string can be set on DirectoryTemplateContainer and with the "file" command
+
+=0.4.4 (Sep 10th, 2008)
+
+* Added "abstract" metamodel DSL command
+* Added ecore_ext.rb with convenience methods
+* Added XMI1.1 serializer, revised XMLSerializer super class
+
+=0.4.5 (Nov 17th, 2008)
+
+* Updated XMI1.1 serializer to support explicit placement of elements on content level of the XMI file
+
+=0.4.6 (Mar 1st, 2009)
+
+* Bugfix: expand :foreach silently assumed current context if :foreach evaluated to nil
+* Bugfix: fixed unit test for non-Windows platforms (\r\n)
+* Bugfix: depending on the Ruby version and/or platform constants used in templates could not be resolved
+* Added automatic line ending detection (\n or \r\n) for template language +nl+ command
+
+=0.5.0 (Jun 8th, 2009)
+
+* Added ModelBuilder and ModelSerializer
+* Added template language "define_local" command
+* Added template language "evaluate" command
+* Fixed template language bug: indentation problem when expand continues a non-empty line
+* Fixed template language bug: template content expands several times when a template container is called recursively
+* Fixed template language bug: template resolution problem if a template file has the same name as a template directory
+* Cleaned up EA support
+* Added method to clear ecore metamodel reflection cache
+* Improved overriding of metamodel features in reopened classes
+
+=0.5.1 (Nov 10th, 2009)
+
+* Fixed metamodel builder bug: _register at one-side did not unregister from the element referenced by the old value
+* Added helper class for building simple model comparators
+
+=0.5.2 (Jun 13th, 2010)
+
+* Added has_many_attr to metamodel builder, support for "many" attributes
+* Added JSON support (json instantiator and serializer)
+* Added QualifiedNameResolver instantiation helper
+* Added reference proxy support
+* Added more generic access methods on metaclasses
+* Added ReferenceResolver resolver mixin
+* Fixed ecore xml instantiator and serializer to handle references to builtin datatypes correctly
+* Fixed bug in ecore xml serializer to not output references which are opposites of containment references
+
+=0.5.3 (Aug 13th, 2010)
+
+* Fixed string escaping in JSON instantiator and serializer
+* Fixed order of eClassifiers and eSubpackages within an EPackage created by reflection on a RGen module
+
+=0.5.4
+
+* Fixed non-deterministic order of child elements in ModelSerializer
+* Fixed non-deterministic order of attributes in XMI serializers
+* Fixed ModelSerializer to always serialize the to-one part of bidirectional 1:N references
+* Fixed ModelSerializer to add :as => in case of ambiguous child roles
+* Made JsonInstantiator search subpackages for unqualified class names
+
+=0.6.0
+
+* Added exception when trying to instantiate abstract class
+* Replaced xmlscan by dependency to nokogiri
+* Made RGen work with Ruby 1.9
+* Cleaned up intermediate attribute and reference description, improvement of metamodel load time
+* Added optional data property for MMProxy
+* Added ECoreToRuby which can create Ruby classes and modules from ECore models in memory (without metamodel generator)
+* Refactored out QualifiedNameProvider and OppositeReferenceFilter
+* Added model fragment/fragmented models support
+* Extended Instantiator::ReferenceResolver and changed it into a class
+* Moved utilities into util folder/module
+* Added FileCacheMap
+* Fixed template language bug: indenting not correct after callback into same template container and iinc/idec
+* Added support for fragmented models
+* Added FileChangeDetector utility
+* Added CachedGlob utility
+* Added index parameter to model element add methods
+* Added MMGeneric
+* Modified has_many_attr to allow the same value in the same attribute multiple times
+* Made Environment#delete faster on large models
+* Added type check of ecore defaultValueLiteral content in MetamodelBuilder
+* Many-feature setters can work with an Enumerable instead of an Array
+* Added pattern matcher utility
+* Fixed problem of Ruby hanging when exceptions occur
+* Fixed metamodel generator to quote illegal enum literal symbols
+* Improved UML to ECore transformer and EA support
+
+=0.6.1
+
+* Fixed metamodel builder to not overwrite a model element's 'class' method
+* Added enum type transformation to ECoreToUML13 transformer, primitive type mapping based on instanceClassName
+* Fixed default value appearing on read after setting a feature value to nil
+* Added eIsSet and eUnset methods
+* Added eContainer and eContainingFeature methods
+* Fixed ModelFragment#elements not containing root elements
+* Added optional output of invalidation reason to FileCacheMap#load_data
+
+=0.6.2
+
+* Made qualified name provider work with unidirectional containment references
+* Fixed array_extension breaking the Hash[] method
+
+=0.6.3
+
+* Added BigDecimal support
+
+=0.6.4
+
+* Made FileChangeDetector and FileCacheMap robust against missing files
+
+=0.6.5
+
+* Fixed missing default argument of FragmentedModel#resolve
+* Added to_str to methods which aren't forwarded by array extension on empty arrays
+
+=0.6.6
+
+* Added ModelFragment#mark_resolved and ResolutionHelper
+* Added ReferenceResolver option to output failed resolutions
+* Major performance improvement of FragmentedModel#resolve
+* Fixed a Ruby 2.0 related warning
+
+=0.7.0
+
+* Enforce unique container rule by automatically disconnecting elements from other containers
+* Added support for long typed values (ELong), thanks to Thomas Hallgren;
+ Note that this is merely an EMF compatibility thing, RGen could already handle big integers before
+* Added eContents and eAllContents methods
+* Added setNilOrRemoveGeneric and setNilOrRemoveAllGeneric methods
+* Added disconnectContainer method
+
diff --git a/lib/puppet/vendor/rgen/MIT-LICENSE b/lib/puppet/vendor/rgen/MIT-LICENSE
new file mode 100644
index 000000000..e124b6ce0
--- /dev/null
+++ b/lib/puppet/vendor/rgen/MIT-LICENSE
@@ -0,0 +1,20 @@
+Copyright (c) 2013 Martin Thiede
+
+Permission is hereby granted, free of charge, to any person obtaining
+a copy of this software and associated documentation files (the
+"Software"), to deal in the Software without restriction, including
+without limitation the rights to use, copy, modify, merge, publish,
+distribute, sublicense, and/or sell copies of the Software, and to
+permit persons to whom the Software is furnished to do so, subject to
+the following conditions:
+
+The above copyright notice and this permission notice shall be
+included in all copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
+LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
+OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
+WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
diff --git a/lib/puppet/vendor/rgen/PUPPET_README.md b/lib/puppet/vendor/rgen/PUPPET_README.md
new file mode 100644
index 000000000..2db19c522
--- /dev/null
+++ b/lib/puppet/vendor/rgen/PUPPET_README.md
@@ -0,0 +1,6 @@
+RGen - Ruby Modelling and Generator Framework
+=============================================
+
+RGen version 0.7.0
+
+Copied from https://github.com/mthiede/rgen/tree/3e3c42a470269982ef52972bed82d65f78de496c
diff --git a/lib/puppet/vendor/rgen/README.rdoc b/lib/puppet/vendor/rgen/README.rdoc
new file mode 100644
index 000000000..b6dde344e
--- /dev/null
+++ b/lib/puppet/vendor/rgen/README.rdoc
@@ -0,0 +1,78 @@
+= RGen - Ruby Modelling and Generator Framework
+
+RGen is a framework for Model Driven Software Development (MDSD) in Ruby.
+This means that it helps you build Metamodels, instantiate Models, modify
+and transform Models, and finally generate arbitrary textual content from them.
+
+RGen features include:
+* Supporting Ruby 1.8.6, 1.8.7 and 1.9.x
+* Metamodel definition language (internal Ruby DSL)
+* ECore Meta-metamodel with an ECore instance available for every Metamodel
+* Generator creating the Ruby metamodel definition from an ECore instance
+* Transformer creating Ruby metamodel classes/modules from an ECore instance
+* Instantiation of Metamodels, i.e. creation of Models (e.g. from XML)
+* Model builder, internal Ruby DSL
+* Model fragmentation over several files and per-fragment caching
+* Model Transformation language (internal Ruby DSL)
+* Powerful template based generator language (internal Ruby DSL inside of ERB)
+* UML 1.3 metamodel and XMI 1.1 instantiator included
+* ECore XML support (XMI 2.0)
+* UML-to-ECore and ECore-to-UML transformation (UML class models)
+* Enterprise Architect support (UML1.3/XMI1.1)
+
+
+== Download
+
+Get the latest release from Github: https://github.com/mthiede/rgen
+
+
+== Installation
+
+Install RGen as a Ruby gem:
+
+ gem install rgen
+
+
+== Running the Tests
+
+Change to the 'test' folder and run the test suite:
+
+ cd test
+ ruby rgen_test.rb
+
+
+== Documentation
+
+RDoc documentation is available at Github: http://mthiede.github.com/rgen/
+
+Find the main documentation parts for:
+* RGen::MetamodelBuilder
+* RGen::Transformer
+* RGen::TemplateLanguage
+* RGen::Fragment::FragmentedModel
+
+
+== Examples
+
+There are several examples of using RGen within the framework itself.
+
+Metamodel Definition:
+ lib/rgen/ecore/ecore.rb
+ lib/metamodels/uml13_metamodel.rb
+
+Instantiation:
+ lib/rgen/instantiator/xmi11_instantiator.rb
+ lib/rgen/instantiator/ecore_xml_instantiator.rb
+
+Transformations:
+ lib/rgen/ecore/ruby_to_ecore.rb
+ lib/transformers/uml13_to_ecore.rb
+
+Generators:
+ lib/mmgen/metamodel_generator.rb
+
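+As a quick orientation, a metamodel defined with the MetamodelBuilder DSL looks
+roughly like the sketch below (Person, House and their features are made-up
+names for illustration, not part of RGen):
+
+ require 'rgen/metamodel_builder'
+
+ # Each metamodel class derives from MMBase; attributes and references
+ # are declared with the builder methods.
+ class Person < RGen::MetamodelBuilder::MMBase
+   has_attr 'name', String
+   has_attr 'age', Integer
+ end
+
+ class House < RGen::MetamodelBuilder::MMBase
+   has_attr 'address', String
+ end
+
+ # A to-many reference from Person to House.
+ Person.has_many 'homes', House
+
+ # Instances can be created with attribute values given as a hash.
+ john = Person.new(:name => "John")
+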
+
+== License
+
+RGen is released under the MIT license.
+
diff --git a/lib/puppet/vendor/rgen/Rakefile b/lib/puppet/vendor/rgen/Rakefile
new file mode 100644
index 000000000..5d3e9e77b
--- /dev/null
+++ b/lib/puppet/vendor/rgen/Rakefile
@@ -0,0 +1,41 @@
+require 'rubygems/package_task'
+require 'rdoc/task'
+
+RGenGemSpec = Gem::Specification.new do |s|
+ s.name = %q{rgen}
+ s.version = "0.7.0"
+ s.date = Time.now.strftime("%Y-%m-%d")
+ s.summary = %q{Ruby Modelling and Generator Framework}
+ s.email = %q{martin dot thiede at gmx de}
+ s.homepage = %q{http://ruby-gen.org}
+ s.rubyforge_project = %q{rgen}
+ s.description = %q{RGen is a framework for Model Driven Software Development (MDSD) in Ruby. This means that it helps you build Metamodels, instantiate Models, modify and transform Models and finally generate arbitrary textual content from it.}
+ s.authors = ["Martin Thiede"]
+ gemfiles = Rake::FileList.new
+ gemfiles.include("{lib,test}/**/*")
+ gemfiles.include("README.rdoc", "CHANGELOG", "MIT-LICENSE", "Rakefile")
+ gemfiles.exclude(/\b\.bak\b/)
+ s.files = gemfiles
+ s.rdoc_options = ["--main", "README.rdoc", "-x", "test", "-x", "metamodels", "-x", "ea_support/uml13*"]
+ s.extra_rdoc_files = ["README.rdoc", "CHANGELOG", "MIT-LICENSE"]
+end
+
+RDoc::Task.new do |rd|
+ rd.main = "README.rdoc"
+ rd.rdoc_files.include("README.rdoc", "CHANGELOG", "MIT-LICENSE", "lib/**/*.rb")
+ rd.rdoc_files.exclude("lib/metamodels/*")
+ rd.rdoc_files.exclude("lib/ea_support/uml13*")
+ rd.rdoc_dir = "doc"
+end
+
+RGenPackageTask = Gem::PackageTask.new(RGenGemSpec) do |p|
+ p.need_zip = false
+end
+
+task :prepare_package_rdoc => :rdoc do
+ RGenPackageTask.package_files.include("doc/**/*")
+end
+
+task :release => [:prepare_package_rdoc, :package]
+
+task :clobber => [:clobber_rdoc, :clobber_package]
diff --git a/lib/puppet/vendor/rgen/TODO b/lib/puppet/vendor/rgen/TODO
new file mode 100644
index 000000000..f91158920
--- /dev/null
+++ b/lib/puppet/vendor/rgen/TODO
@@ -0,0 +1,41 @@
+=Known Bugs
+* <% expand ... :indent => 0 %> seems to change behaviour of active template not only expanded subtemplate
+* Ecore built-in types (EString, ...) do not work in the ECore instantiator; define your own EDatatype instead
+* ECore datatypes in RGen::ECore should use Java like instanceClassNames
+* overloading of transformation rules not working correctly
+* with \r\n in templates, empty lines appear in output
+* <%nl%> after <%nows%> creates no indentation (<%nl%> in another template in same file)
+
+=Major issues
+* XML instantiator documentation
+* revise builder datatypes, especially enum implementation using Enum objects as types,
+ also revise ecore metamodel at this point
+* revise documentation of BuilderExtensions
+* further cleanup EA UML import/export
+ - The differences between EA UML and uml13_metamodel.rb seem to be violations by EA, ArgoUML follows the standard much more closely
+ - Enums should be instances of Enumeration class with EnumerationLiterals (UML Standard),
+ for EA convert to Classes with stereotype "enumeration" and attributes as literals
+ (this is what EA 7 creates when clicking on the "New Enumeration" button, EA will reference these classes as type)
+ This is what's missing for Pragma MM generators.
+ - Support primitive types as instances of DataType (which basically have a name) instead of tagged values
+ (this should also be working with EA 7, the tagged values are just add on)
+ - Support more UML metamodel features in the transformers
+* Model Serializer:
+ - make "name" attribute configurable
+ - convert chars in a string into something Ruby compatible (e.g. newline to \n)
+
+=Minor Issues
+* allow definition of templates from within regular code
+* indexed find in environment
+* XMI Instantiator fixmap: add element names to make feature names unique
+* no error for expand '..', :forach => (foreach misspelled)
+* With JRuby (1.3.1) exceptions raised in templates have a short or no backtrace
+
+
+* extended constraint checks (feature bounds)
+* class filter in RText language
+* root classes for RText language
+* command/class aliases in RText language
+* language variants (different root classes depending on file type)
+* reference name in reference_qualifier
+
diff --git a/lib/puppet/vendor/rgen/anounce.txt b/lib/puppet/vendor/rgen/anounce.txt
new file mode 100644
index 000000000..c70de78af
--- /dev/null
+++ b/lib/puppet/vendor/rgen/anounce.txt
@@ -0,0 +1,61 @@
+=RGen - Ruby Modelling and Generator Framework
+
+RGen is a framework to support the "Model Driven Software Development (MDSD)" approach. Some people may want to call just about the same thing "Domain Specific Languages".
+
+The essence is to have a Metamodel which imposes a structure as well as certain other constraints on the Models to be instantiated from it. The Metamodel can be part of the software which is being developed.
+From a formal language point of view, the metamodel could be regarded as a grammar with the model being a sentence of the grammar.
+
+One possible application of MDSD are large domain specific software systems like banking systems. In this case one would define a metamodel which reflects the application domain (e.g. accounts, customers, their relations, ...).
+The metamodel can then serve as a common means of communication between users from the application domain (e.g. the customer) and software developers, as well as between software developers working on different subsystems. In addition, the metamodel can be used to generate recurring parts of the software instead of writing them by hand
+(e.g. database interfaces, application server glue code, etc.). In a particular project a lot more use cases will typically show up.
+
+A very good framework implementing the MDSD approach is the open source Java framework OpenArchitectureWare (http://www.openarchitectureware.org/). Actually OpenArchitectureWare inspired the development of RGen.
+
+RGen implements many features also provided by OpenArchitectureWare:
+* Programmatic (textual) definition of Metamodels
+* Instantiation of models from various sources (XML, UML Models, ...)
+* Support of Model Transformations
+* Powerful template language (based on ERB) to generate arbitrary textual content
+
+In contrast to the mentioned Java framework, RGen is a more lightweight approach.
+It can be used to write powerful generator or transformation scripts quickly.
+However I believe RGen can also be useful in large software development projects.
+
+
+Here are some example usecases of RGen:
+
+Example 1: Generating C Code from an XML description
+* Directly instantiate the XML file using RGen's XMLInstantiator
+ An implicit Metamodel (Ruby classes) will be created automatically
+ The model is now available as Ruby objects in memory
+* Use the template language to navigate the instantiated model and generate C code
+
+Example 2: UML Defined Application specific Metamodel
+* Specify your metamodel as a UML Class Diagram
+* If the tool you use is Enterprise Architect, the UML Class Diagram can directly be instantiated using RGen's XMIClassInstantiator
+* If not, support for your tool should be added. The existing XMI instantiator is basically a transformation from an XMI model to a UML Class model. For a tool producing different XMI output, the transformation has to be adapted.
+* Use the included MetamodelGenerator to generate Ruby classes from the UML model. These classes act as your application specific metamodel. Of course the generated classes can be extended by hand written code, either by subclassing or by just adding methods using Ruby's open classes. The generated code itself should not be touched.
+* Extend RGen with an own instantiator which reads your specific file format and instantiates your application specific metamodel
+* Then go on doing transformations on your model(s) or generate output.
+
+
+RGen could also be useful in combination with Rails.
+One application to Rails could be to generate not only the model, view and controller classes, but also the database schemas and code reflecting associations between Rails model elements (i.e. the has_many, belongs_to, .. code in the ActiveRecord subclasses)
+The base model for such a generation could be a description of the model elements and their associations as a UML class model.
+Another application could be to base new Rails generators (like the Login generator, etc) on RGen.
+
+
+=Major Performance Improvement
+
+RGen 0.4.1 features major performance improvements.
+All applications using the template language will be faster now.
+Applications using the AbstractXMLInstantiator can benefit if explicit garbage collection is enabled.
+(see documentation of AbstractXMLInstantiator)
+Another improvement makes the "ecore" class method of classes and modules much faster.
+Last but not least the loading of metamodels takes about half the time as before.
+
+For large models the metamodel generator is about 20 times faster.
+Reading a 50,000-line ecore file now takes 9 seconds on a Centrino Duo 2GHz (85s with RGen 0.4.0)
+Generating the RGen metamodel code for the ecore model takes 5 seconds (200s with RGen 0.4.0)
+Requiring this RGen metamodel takes about 3 seconds (about 6s with RGen 0.4.0).
+
diff --git a/lib/puppet/vendor/rgen/design_rationale.txt b/lib/puppet/vendor/rgen/design_rationale.txt
new file mode 100644
index 000000000..0c6213c78
--- /dev/null
+++ b/lib/puppet/vendor/rgen/design_rationale.txt
@@ -0,0 +1,71 @@
+=ElementSet vs. Array
+
+Subject:
+Use a special array-like class "ElementSet" with the following properties:
+* can call methods of elements by . notation
+* can use all set and enumerable methods of Array
+* enforce constraints regarding type of elements
+* auto register/unregister with counterpart of an association
+
+Dependencies:
+* Without the constraint and register/unregister functionality of the ElementSet,
+ the API of model elements built by MetamodelBuilder has to be different:
+ instead of "e.myelements << newel" would be "e.addMyelements(newel)"
+ However this can also be an advantage (see Metamodel Many Assoc API)
+
+A1. ElementSet:
++ nice notation for calling methods of elements (.)
++ nice notation for adding/removing elements from a model element
+ (e.myelements << newel; e.myelements.delete newel)
+- complicated to realize
+ if ElementSet inherits from Array:
+ constraints/registration cannot be guaranteed for all add/remove operations
+ input and output of Array methods must be wrapped into ElementSet objects
+ if ElementSet delegates to an Array:
+ all (relevant) methods have to be delegated (methods from including
+ Enumerable do not automatically return ElementSet objects)
+- dot notation for calling methods of elements may lead to errors which are difficult
+ to find
+
+A2. Array:
++ a separate operator like >> makes calling methods of elements more explicit
++ very easy to implement
++ easy to understand by users (no "magic" going on)
+
+Decision: (2006-06-08)
+A2. Array
+Simplicity of implementation and ease of use are more important than a nice notation
+
+
+
+= Metamodel Many Assoc API
+
+Subject:
+How to implement the API to deal with to-many associations of model elements.
+One option is an array like object which is held by the model element for each to-many
+association and which is given to the user for modification (external array).
+The other option is an internal array which is only accessed via add and remove
+methods.
+
+Dependencies:
+If an external array is used, this array must check the association's constraints
+and register/unregister with the other side of the association.
+(see ElementSet vs. Array)
+
+A1.External Array
++ nice API (e.myassocs << newel; e.myassocs.delete newel)
++ this is a Rails like API
+- a reference to the array might be stored somewhere else in the program and
+ accidentally be modified, this would modify the model element it belongs to
+ as well as register/unregister with other model elements leading to errors
+ which are hard to find
+- an external array is complicated to implement (see ElementSet vs. Array)
+
+A2.Internal Array
++ easy to understand for non Ruby/Rails aware users
++ simple implementation
+
+Decision: (2006-06-09)
+A2. Internal Array
+Simplicity of implementation and ease of use are more important than a nice notation
+
\ No newline at end of file
diff --git a/lib/puppet/vendor/rgen/lib/ea_support/ea_support.rb b/lib/puppet/vendor/rgen/lib/ea_support/ea_support.rb
new file mode 100644
index 000000000..a9062b9f6
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/ea_support/ea_support.rb
@@ -0,0 +1,54 @@
+require 'ea_support/uml13_ea_metamodel'
+require 'ea_support/uml13_ea_metamodel_ext'
+require 'ea_support/uml13_to_uml13_ea'
+require 'ea_support/uml13_ea_to_uml13'
+require 'ea_support/id_store'
+require 'rgen/serializer/xmi11_serializer'
+require 'rgen/instantiator/xmi11_instantiator'
+require 'rgen/environment'
+
+module EASupport
+
+ FIXMAP = {
+ :tags => {
+ "EAStub" => proc { |tag, attr|
+ UML13EA::Class.new(:name => attr["name"]) if attr["UMLType"] == "Class"
+ }
+ }
+ }
+
+ INFO = XMI11Instantiator::INFO
+ WARN = XMI11Instantiator::WARN
+ ERROR = XMI11Instantiator::ERROR
+
+ def self.instantiateUML13FromXMI11(envUML, fileName, options={})
+ envUMLEA = RGen::Environment.new
+ xmiInst = XMI11Instantiator.new(envUMLEA, FIXMAP, options[:loglevel] || ERROR)
+ xmiInst.add_metamodel("omg.org/UML1.3", UML13EA)
+ File.open(fileName) do |f|
+ xmiInst.instantiate(f.read)
+ end
+ trans = UML13EAToUML13.new(envUMLEA, envUML)
+ trans.transform
+ trans.cleanModel if options[:clean_model]
+ end
+
+ def self.serializeUML13ToXMI11(envUML, fileName, options={})
+ envUMLEA = RGen::Environment.new
+
+ UML13EA.idStore = options[:keep_ids] ?
+ IdStore.new(File.dirname(fileName)+"/"+File.basename(fileName)+".ids") : IdStore.new
+
+ UML13ToUML13EA.new(envUML, envUMLEA).transform
+
+ File.open(fileName, "w") do |f|
+ xmiSer = RGen::Serializer::XMI11Serializer.new(f)
+ xmiSer.setNamespace("UML","omg.org/UML1.3")
+ xmiSer.serialize(envUMLEA.find(:class => UML13EA::Model).first,
+ {:documentation => {:exporter => "Enterprise Architect", :exporterVersion => "2.5"}})
+ end
+
+ UML13EA.idStore.store
+ end
+
+end
\ No newline at end of file
diff --git a/lib/puppet/vendor/rgen/lib/ea_support/id_store.rb b/lib/puppet/vendor/rgen/lib/ea_support/id_store.rb
new file mode 100644
index 000000000..05af13ebf
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/ea_support/id_store.rb
@@ -0,0 +1,32 @@
+require 'yaml'
+
+class IdStore
+ def initialize(fileName=nil)
+ if fileName
+ raise "Base directory does not exist: #{File.dirname(fileName)}" \
+ unless File.exist?(File.dirname(fileName))
+ @idsFileName = fileName
+ end
+ @idHash = nil
+ end
+
+ def idHash
+ load unless @idHash
+ @idHash
+ end
+
+ def load
+ if @idsFileName && File.exist?(@idsFileName)
+ @idHash = YAML.load_file(@idsFileName) || {}
+ else
+ @idHash = {}
+ end
+ end
+
+ def store
+ return unless @idsFileName
+ File.open(@idsFileName,"w") do |f|
+ YAML.dump(@idHash, f)
+ end
+ end
+end
\ No newline at end of file
diff --git a/lib/puppet/vendor/rgen/lib/ea_support/uml13_ea_metamodel.rb b/lib/puppet/vendor/rgen/lib/ea_support/uml13_ea_metamodel.rb
new file mode 100644
index 000000000..7dc01a02c
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/ea_support/uml13_ea_metamodel.rb
@@ -0,0 +1,562 @@
+require 'rgen/metamodel_builder'
+
+module UML13EA
+ extend RGen::MetamodelBuilder::ModuleExtension
+ include RGen::MetamodelBuilder::DataTypes
+
+ OperationDirectionKind = Enum.new(:name => 'OperationDirectionKind', :literals =>[ ])
+ MessageDirectionKind = Enum.new(:name => 'MessageDirectionKind', :literals =>[ ])
+ ChangeableKind = Enum.new(:name => 'ChangeableKind', :literals =>[ :changeable, :none, :addOnly ])
+ PseudostateKind = Enum.new(:name => 'PseudostateKind', :literals =>[ :initial, :deepHistory, :shallowHistory, :join, :fork, :branch, :junction, :final ])
+ ParameterDirectionKind = Enum.new(:name => 'ParameterDirectionKind', :literals =>[ :in, :inout, :out, :return ])
+ ScopeKind = Enum.new(:name => 'ScopeKind', :literals =>[ :instance, :classifier ])
+ OrderingKind = Enum.new(:name => 'OrderingKind', :literals =>[ :unordered, :ordered, :sorted ])
+ CallConcurrencyKind = Enum.new(:name => 'CallConcurrencyKind', :literals =>[ :sequential, :guarded, :concurrent ])
+ AggregationKind = Enum.new(:name => 'AggregationKind', :literals =>[ :none, :aggregate, :composite, :shared ])
+ VisibilityKind = Enum.new(:name => 'VisibilityKind', :literals =>[ :public, :protected, :private ])
+end
+
+class UML13EA::Expression < RGen::MetamodelBuilder::MMBase
+ has_attr 'language', String
+ has_attr 'body', String
+end
+
+class UML13EA::ActionExpression < UML13EA::Expression
+end
+
+class UML13EA::Element < RGen::MetamodelBuilder::MMBase
+end
+
+class UML13EA::ModelElement < UML13EA::Element
+ has_attr 'name', String
+ has_attr 'visibility', UML13EA::VisibilityKind, :defaultValueLiteral => "public"
+ has_attr 'isSpecification', Boolean
+end
+
+class UML13EA::Namespace < UML13EA::ModelElement
+end
+
+class UML13EA::GeneralizableElement < UML13EA::ModelElement
+ has_attr 'isRoot', Boolean
+ has_attr 'isLeaf', Boolean
+ has_attr 'isAbstract', Boolean
+end
+
+class UML13EA::Classifier < RGen::MetamodelBuilder::MMMultiple(UML13EA::GeneralizableElement, UML13EA::Namespace)
+end
+
+class UML13EA::ClassifierRole < UML13EA::Classifier
+end
+
+class UML13EA::PresentationElement < UML13EA::Element
+end
+
+class UML13EA::DiagramElement < UML13EA::PresentationElement
+ has_attr 'geometry', String
+ has_attr 'style', String
+end
+
+class UML13EA::Feature < UML13EA::ModelElement
+ has_attr 'ownerScope', UML13EA::ScopeKind, :defaultValueLiteral => "instance"
+end
+
+class UML13EA::BehavioralFeature < UML13EA::Feature
+ has_attr 'isQuery', Boolean
+end
+
+class UML13EA::Method < UML13EA::BehavioralFeature
+end
+
+class UML13EA::Actor < UML13EA::Classifier
+end
+
+class UML13EA::DataType < UML13EA::Classifier
+end
+
+class UML13EA::Primitive < UML13EA::DataType
+end
+
+class UML13EA::Action < UML13EA::ModelElement
+ has_attr 'isAsynchronous', Boolean
+end
+
+class UML13EA::SendAction < UML13EA::Action
+end
+
+class UML13EA::Interface < UML13EA::Classifier
+end
+
+class UML13EA::Event < UML13EA::ModelElement
+end
+
+class UML13EA::ChangeEvent < UML13EA::Event
+end
+
+class UML13EA::Partition < UML13EA::ModelElement
+end
+
+class UML13EA::Comment < UML13EA::ModelElement
+ has_attr 'body', String
+end
+
+class UML13EA::ProgrammingLanguageType < UML13EA::DataType
+end
+
+class UML13EA::StateMachine < UML13EA::ModelElement
+end
+
+class UML13EA::Call < RGen::MetamodelBuilder::MMBase
+end
+
+class UML13EA::Operation < UML13EA::BehavioralFeature
+ has_attr 'concurrency', UML13EA::CallConcurrencyKind, :defaultValueLiteral => "sequential"
+ has_attr 'isRoot', Boolean
+ has_attr 'isLeaf', Boolean
+ has_attr 'isAbstract', Boolean
+end
+
+class UML13EA::XmiIdProvider < RGen::MetamodelBuilder::MMBase
+end
+
+class UML13EA::StateVertex < RGen::MetamodelBuilder::MMMultiple(UML13EA::ModelElement, UML13EA::XmiIdProvider)
+end
+
+class UML13EA::SynchState < UML13EA::StateVertex
+ has_attr 'bound', Integer
+end
+
+class UML13EA::ClassifierInState < UML13EA::Classifier
+end
+
+class UML13EA::Link < UML13EA::ModelElement
+end
+
+class UML13EA::ProcedureExpression < UML13EA::Expression
+end
+
+class UML13EA::CallEvent < UML13EA::Event
+end
+
+class UML13EA::AssignmentAction < UML13EA::Action
+end
+
+class UML13EA::Relationship < UML13EA::ModelElement
+end
+
+class UML13EA::Association < RGen::MetamodelBuilder::MMMultiple(UML13EA::GeneralizableElement, UML13EA::Relationship, UML13EA::XmiIdProvider)
+end
+
+class UML13EA::AssociationRole < UML13EA::Association
+end
+
+class UML13EA::Diagram < UML13EA::PresentationElement
+ has_attr 'name', String
+ has_attr 'toolName', String
+ has_attr 'diagramType', String
+ has_attr 'style', String
+end
+
+class UML13EA::MultiplicityRange < RGen::MetamodelBuilder::MMBase
+ has_attr 'lower', String
+ has_attr 'upper', String
+end
+
+class UML13EA::ActionSequence < UML13EA::Action
+end
+
+class UML13EA::Constraint < UML13EA::ModelElement
+end
+
+class UML13EA::Instance < UML13EA::ModelElement
+end
+
+class UML13EA::UseCaseInstance < UML13EA::Instance
+end
+
+class UML13EA::State < UML13EA::StateVertex
+end
+
+class UML13EA::CompositeState < UML13EA::State
+ has_attr 'isConcurrent', Boolean
+end
+
+class UML13EA::SubmachineState < UML13EA::CompositeState
+end
+
+class UML13EA::SubactivityState < UML13EA::SubmachineState
+ has_attr 'isDynamic', Boolean
+end
+
+class UML13EA::StructuralFeature < UML13EA::Feature
+ has_attr 'changeable', UML13EA::ChangeableKind, :defaultValueLiteral => "changeable"
+ has_attr 'targetScope', UML13EA::ScopeKind, :defaultValueLiteral => "instance"
+end
+
+class UML13EA::Attribute < UML13EA::StructuralFeature
+end
+
+class UML13EA::Flow < UML13EA::Relationship
+end
+
+class UML13EA::Class < RGen::MetamodelBuilder::MMMultiple(UML13EA::Classifier, UML13EA::XmiIdProvider)
+ has_attr 'isActive', Boolean
+end
+
+class UML13EA::Guard < UML13EA::ModelElement
+end
+
+class UML13EA::CreateAction < UML13EA::Action
+end
+
+class UML13EA::IterationExpression < UML13EA::Expression
+end
+
+class UML13EA::ReturnAction < UML13EA::Action
+end
+
+class UML13EA::Parameter < UML13EA::ModelElement
+ has_attr 'kind', UML13EA::ParameterDirectionKind, :defaultValueLiteral => "inout"
+end
+
+class UML13EA::Dependency < UML13EA::Relationship
+end
+
+class UML13EA::Binding < UML13EA::Dependency
+end
+
+class UML13EA::Package < RGen::MetamodelBuilder::MMMultiple(UML13EA::Namespace, UML13EA::GeneralizableElement, UML13EA::XmiIdProvider)
+end
+
+class UML13EA::ObjectSetExpression < UML13EA::Expression
+end
+
+class UML13EA::StubState < UML13EA::StateVertex
+ has_attr 'referenceState', String
+end
+
+class UML13EA::Stereotype < UML13EA::GeneralizableElement
+ has_attr 'icon', String
+ has_attr 'baseClass', String
+end
+
+class UML13EA::Object < UML13EA::Instance
+end
+
+class UML13EA::LinkObject < RGen::MetamodelBuilder::MMMultiple(UML13EA::Link, UML13EA::Object)
+end
+
+class UML13EA::ComponentInstance < UML13EA::Instance
+end
+
+class UML13EA::Usage < UML13EA::Dependency
+end
+
+class UML13EA::SignalEvent < UML13EA::Event
+end
+
+class UML13EA::Structure < UML13EA::DataType
+end
+
+class UML13EA::AssociationEnd < RGen::MetamodelBuilder::MMMultiple(UML13EA::ModelElement, UML13EA::XmiIdProvider)
+ has_attr 'isNavigable', Boolean, :defaultValueLiteral => "false"
+ has_attr 'isOrdered', Boolean, :defaultValueLiteral => "false"
+ has_attr 'aggregation', UML13EA::AggregationKind, :defaultValueLiteral => "none"
+ has_attr 'targetScope', UML13EA::ScopeKind, :defaultValueLiteral => "instance"
+ has_attr 'changeable', UML13EA::ChangeableKind, :defaultValueLiteral => "changeable"
+ has_attr 'multiplicity', String
+end
+
+class UML13EA::AssociationEndRole < UML13EA::AssociationEnd
+end
+
+class UML13EA::Signal < UML13EA::Classifier
+end
+
+class UML13EA::Exception < UML13EA::Signal
+end
+
+class UML13EA::Extend < UML13EA::Relationship
+end
+
+class UML13EA::Argument < UML13EA::ModelElement
+end
+
+class UML13EA::TemplateParameter < RGen::MetamodelBuilder::MMBase
+end
+
+class UML13EA::PseudoState < UML13EA::StateVertex
+ has_attr 'kind', UML13EA::PseudostateKind, :defaultValueLiteral => "initial"
+end
+
+class UML13EA::SimpleState < UML13EA::State
+end
+
+class UML13EA::ActionState < UML13EA::SimpleState
+ has_attr 'isDynamic', Boolean
+end
+
+class UML13EA::TypeExpression < UML13EA::Expression
+end
+
+class UML13EA::DestroyAction < UML13EA::Action
+end
+
+class UML13EA::TerminateAction < UML13EA::Action
+end
+
+class UML13EA::Generalization < RGen::MetamodelBuilder::MMMultiple(UML13EA::Relationship, UML13EA::XmiIdProvider)
+ has_attr 'discriminator', String
+end
+
+class UML13EA::FinalState < UML13EA::State
+end
+
+class UML13EA::Subsystem < RGen::MetamodelBuilder::MMMultiple(UML13EA::Package, UML13EA::Classifier)
+ has_attr 'isInstantiable', Boolean
+end
+
+class UML13EA::TimeExpression < UML13EA::Expression
+end
+
+class UML13EA::TaggedValue < UML13EA::Element
+ has_attr 'tag', String
+ has_attr 'value', String
+end
+
+class UML13EA::DataValue < UML13EA::Instance
+end
+
+class UML13EA::Transition < UML13EA::ModelElement
+end
+
+class UML13EA::NodeInstance < UML13EA::Instance
+end
+
+class UML13EA::Component < UML13EA::Classifier
+end
+
+class UML13EA::Message < UML13EA::ModelElement
+end
+
+class UML13EA::Enumeration < UML13EA::DataType
+end
+
+class UML13EA::Reception < UML13EA::BehavioralFeature
+ has_attr 'isPolymorphic', Boolean
+ has_attr 'specification', String
+end
+
+class UML13EA::Include < UML13EA::Relationship
+end
+
+class UML13EA::CallState < UML13EA::ActionState
+end
+
+class UML13EA::ElementResidence < RGen::MetamodelBuilder::MMBase
+ has_attr 'visibility', UML13EA::VisibilityKind, :defaultValueLiteral => "public"
+end
+
+class UML13EA::UninterpretedAction < UML13EA::Action
+end
+
+class UML13EA::ArgListsExpression < UML13EA::Expression
+end
+
+class UML13EA::Stimulus < UML13EA::ModelElement
+end
+
+class UML13EA::AssociationClass < RGen::MetamodelBuilder::MMMultiple(UML13EA::Class, UML13EA::Association)
+end
+
+class UML13EA::Node < UML13EA::Classifier
+end
+
+class UML13EA::ElementImport < RGen::MetamodelBuilder::MMBase
+ has_attr 'visibility', UML13EA::VisibilityKind, :defaultValueLiteral => "public"
+ has_attr 'alias', String
+end
+
+class UML13EA::BooleanExpression < UML13EA::Expression
+end
+
+class UML13EA::Collaboration < RGen::MetamodelBuilder::MMMultiple(UML13EA::GeneralizableElement, UML13EA::Namespace)
+end
+
+class UML13EA::CallAction < UML13EA::Action
+end
+
+class UML13EA::UseCase < UML13EA::Classifier
+end
+
+class UML13EA::ActivityModel < UML13EA::StateMachine
+end
+
+class UML13EA::Permission < UML13EA::Dependency
+end
+
+class UML13EA::Interaction < UML13EA::ModelElement
+end
+
+class UML13EA::EnumerationLiteral < RGen::MetamodelBuilder::MMBase
+ has_attr 'name', String
+end
+
+class UML13EA::Model < UML13EA::Package
+end
+
+class UML13EA::LinkEnd < UML13EA::ModelElement
+end
+
+class UML13EA::ExtensionPoint < UML13EA::ModelElement
+ has_attr 'location', String
+end
+
+class UML13EA::Multiplicity < RGen::MetamodelBuilder::MMBase
+end
+
+class UML13EA::ObjectFlowState < UML13EA::SimpleState
+ has_attr 'isSynch', Boolean
+end
+
+class UML13EA::AttributeLink < UML13EA::ModelElement
+end
+
+class UML13EA::MappingExpression < UML13EA::Expression
+end
+
+class UML13EA::TimeEvent < UML13EA::Event
+end
+
+class UML13EA::Abstraction < UML13EA::Dependency
+end
+
+class UML13EA::ActionInstance < RGen::MetamodelBuilder::MMBase
+end
+
+
+UML13EA::ClassifierRole.contains_one_uni 'multiplicity', UML13EA::Multiplicity
+UML13EA::ClassifierRole.has_many 'availableContents', UML13EA::ModelElement
+UML13EA::ClassifierRole.has_many 'availableFeature', UML13EA::Feature
+UML13EA::ClassifierRole.has_one 'base', UML13EA::Classifier, :lowerBound => 1
+UML13EA::Diagram.contains_many 'element', UML13EA::DiagramElement, 'diagram'
+UML13EA::Method.many_to_one 'specification', UML13EA::Operation, 'method'
+UML13EA::Method.contains_one_uni 'body', UML13EA::ProcedureExpression
+UML13EA::SendAction.has_one 'signal', UML13EA::Signal, :lowerBound => 1
+UML13EA::ChangeEvent.contains_one_uni 'changeExpression', UML13EA::BooleanExpression
+UML13EA::Partition.has_many 'contents', UML13EA::ModelElement
+UML13EA::Comment.many_to_many 'annotatedElement', UML13EA::ModelElement, 'comment'
+UML13EA::ProgrammingLanguageType.contains_one_uni 'type', UML13EA::TypeExpression
+UML13EA::Action.contains_one_uni 'recurrence', UML13EA::IterationExpression
+UML13EA::Action.contains_one_uni 'target', UML13EA::ObjectSetExpression
+UML13EA::Action.contains_one_uni 'script', UML13EA::ActionExpression
+UML13EA::Action.contains_many_uni 'actualArgument', UML13EA::Argument
+UML13EA::StateMachine.many_to_one 'context', UML13EA::ModelElement, 'behavior'
+UML13EA::StateMachine.contains_many_uni 'transitions', UML13EA::Transition
+UML13EA::StateMachine.contains_one_uni 'top', UML13EA::State, :lowerBound => 1
+UML13EA::Operation.one_to_many 'occurrence', UML13EA::CallEvent, 'operation'
+UML13EA::ClassifierInState.has_one 'type', UML13EA::Classifier, :lowerBound => 1
+UML13EA::ClassifierInState.has_many 'inState', UML13EA::State
+UML13EA::Link.contains_many_uni 'connection', UML13EA::LinkEnd, :lowerBound => 2
+UML13EA::Link.has_one 'association', UML13EA::Association, :lowerBound => 1
+UML13EA::PresentationElement.many_to_many 'subject', UML13EA::ModelElement, 'presentation'
+UML13EA::AssociationRole.contains_one_uni 'multiplicity', UML13EA::Multiplicity
+UML13EA::AssociationRole.has_one 'base', UML13EA::Association
+UML13EA::Diagram.has_one 'owner', UML13EA::ModelElement, :lowerBound => 1
+UML13EA::ActionSequence.contains_many_uni 'action', UML13EA::Action
+UML13EA::Constraint.contains_one_uni 'body', UML13EA::BooleanExpression
+UML13EA::Constraint.many_to_many 'constrainedElement', UML13EA::ModelElement, 'constraint', :lowerBound => 1
+UML13EA::SubactivityState.contains_one_uni 'dynamicArguments', UML13EA::ArgListsExpression
+UML13EA::AssociationEnd.contains_many 'qualifier', UML13EA::Attribute, 'associationEnd'
+UML13EA::Attribute.contains_one_uni 'initialValue', UML13EA::Expression
+UML13EA::Flow.many_to_many 'source', UML13EA::ModelElement, 'sourceFlow'
+UML13EA::Flow.many_to_many 'target', UML13EA::ModelElement, 'targetFlow'
+UML13EA::Guard.contains_one_uni 'expression', UML13EA::BooleanExpression
+UML13EA::CreateAction.has_one 'instantiation', UML13EA::Classifier, :lowerBound => 1
+UML13EA::Namespace.contains_many 'ownedElement', UML13EA::ModelElement, 'namespace'
+UML13EA::Parameter.contains_one_uni 'defaultValue', UML13EA::Expression
+UML13EA::Parameter.many_to_many 'state', UML13EA::ObjectFlowState, 'parameter'
+UML13EA::Parameter.has_one 'type', UML13EA::Classifier, :lowerBound => 1
+UML13EA::Binding.has_many 'argument', UML13EA::ModelElement, :lowerBound => 1
+UML13EA::Event.contains_many_uni 'parameters', UML13EA::Parameter
+UML13EA::Dependency.many_to_many 'supplier', UML13EA::ModelElement, 'supplierDependency', :opposite_lowerBound => 1
+UML13EA::Dependency.many_to_many 'client', UML13EA::ModelElement, 'clientDependency', :opposite_lowerBound => 1
+UML13EA::Package.contains_many 'importedElement', UML13EA::ElementImport, 'package'
+UML13EA::Classifier.contains_many 'feature', UML13EA::Feature, 'owner'
+UML13EA::Stereotype.one_to_many 'extendedElement', UML13EA::ModelElement, 'stereotype'
+UML13EA::Stereotype.has_many 'requiredTag', UML13EA::TaggedValue
+UML13EA::ComponentInstance.has_many 'resident', UML13EA::Instance
+UML13EA::SignalEvent.many_to_one 'signal', UML13EA::Signal, 'occurrence', :lowerBound => 1
+UML13EA::Instance.contains_many_uni 'slot', UML13EA::AttributeLink
+UML13EA::Instance.one_to_many 'linkEnd', UML13EA::LinkEnd, 'instance'
+UML13EA::Instance.has_many 'classifier', UML13EA::Classifier, :lowerBound => 1
+UML13EA::AssociationEndRole.has_many 'availableQualifier', UML13EA::Attribute
+UML13EA::AssociationEndRole.has_one 'base', UML13EA::AssociationEnd
+UML13EA::Extend.many_to_one 'extension', UML13EA::UseCase, 'extend'
+UML13EA::Extend.contains_one_uni 'condition', UML13EA::BooleanExpression
+UML13EA::Extend.has_many 'extensionPoint', UML13EA::ExtensionPoint, :lowerBound => 1
+UML13EA::Extend.has_one 'base', UML13EA::UseCase, :lowerBound => 1
+UML13EA::Argument.contains_one_uni 'value', UML13EA::Expression
+UML13EA::TemplateParameter.has_one 'modelElement', UML13EA::ModelElement
+UML13EA::TemplateParameter.has_one 'defaultElement', UML13EA::ModelElement
+UML13EA::ActionState.contains_one_uni 'dynamicArguments', UML13EA::ArgListsExpression
+UML13EA::GeneralizableElement.one_to_many 'specialization', UML13EA::Generalization, 'supertype'
+UML13EA::GeneralizableElement.one_to_many 'generalization', UML13EA::Generalization, 'subtype'
+UML13EA::StateVertex.one_to_many 'incoming', UML13EA::Transition, 'target', :opposite_lowerBound => 1
+UML13EA::StateVertex.one_to_many 'outgoing', UML13EA::Transition, 'source', :opposite_lowerBound => 1
+UML13EA::CompositeState.contains_many 'substate', UML13EA::StateVertex, 'container', :lowerBound => 1
+UML13EA::ModelElement.contains_many 'taggedValue', UML13EA::TaggedValue, 'modelElement'
+UML13EA::StructuralFeature.contains_one_uni 'multiplicity', UML13EA::Multiplicity
+UML13EA::StructuralFeature.has_one 'type', UML13EA::Classifier, :lowerBound => 1
+UML13EA::Transition.has_one 'trigger', UML13EA::Event
+UML13EA::Transition.contains_one_uni 'effect', UML13EA::Action
+UML13EA::Transition.contains_one_uni 'guard', UML13EA::Guard
+UML13EA::NodeInstance.has_many 'resident', UML13EA::ComponentInstance
+UML13EA::Component.contains_many 'residentElement', UML13EA::ElementResidence, 'implementationLocation'
+UML13EA::Component.many_to_many 'deploymentLocation', UML13EA::Node, 'resident'
+UML13EA::Message.has_one 'action', UML13EA::Action, :lowerBound => 1
+UML13EA::Message.has_one 'communicationConnection', UML13EA::AssociationRole
+UML13EA::Message.has_many 'predecessor', UML13EA::Message
+UML13EA::Message.has_one 'receiver', UML13EA::ClassifierRole, :lowerBound => 1
+UML13EA::Message.has_one 'sender', UML13EA::ClassifierRole, :lowerBound => 1
+UML13EA::Message.has_one 'activator', UML13EA::Message
+UML13EA::Interaction.contains_many 'message', UML13EA::Message, 'interaction', :lowerBound => 1
+UML13EA::ModelElement.one_to_many 'elementResidence', UML13EA::ElementResidence, 'resident'
+UML13EA::ModelElement.contains_many_uni 'templateParameter', UML13EA::TemplateParameter
+UML13EA::ModelElement.one_to_many 'elementImport', UML13EA::ElementImport, 'modelElement'
+UML13EA::Enumeration.contains_many_uni 'literal', UML13EA::EnumerationLiteral, :lowerBound => 1
+UML13EA::Reception.many_to_one 'signal', UML13EA::Signal, 'reception'
+UML13EA::Association.contains_many 'connection', UML13EA::AssociationEnd, 'association', :lowerBound => 2
+UML13EA::Include.many_to_one 'base', UML13EA::UseCase, 'include'
+UML13EA::Include.has_one 'addition', UML13EA::UseCase, :lowerBound => 1
+UML13EA::Classifier.many_to_many 'participant', UML13EA::AssociationEnd, 'specification'
+UML13EA::Classifier.one_to_many 'associationEnd', UML13EA::AssociationEnd, 'type'
+UML13EA::Stimulus.has_one 'dispatchAction', UML13EA::Action, :lowerBound => 1
+UML13EA::Stimulus.has_one 'communicationLink', UML13EA::Link
+UML13EA::Stimulus.has_one 'receiver', UML13EA::Instance, :lowerBound => 1
+UML13EA::Stimulus.has_one 'sender', UML13EA::Instance, :lowerBound => 1
+UML13EA::Stimulus.has_many 'argument', UML13EA::Instance
+UML13EA::State.contains_one_uni 'doActivity', UML13EA::Action
+UML13EA::State.contains_many_uni 'internalTransition', UML13EA::Transition
+UML13EA::State.has_many 'deferrableEvent', UML13EA::Event
+UML13EA::State.contains_one_uni 'exit', UML13EA::Action
+UML13EA::State.contains_one_uni 'entry', UML13EA::Action
+UML13EA::Collaboration.has_one 'representedOperation', UML13EA::Operation
+UML13EA::Collaboration.has_one 'representedClassifier', UML13EA::Classifier
+UML13EA::Collaboration.has_many 'constrainingElement', UML13EA::ModelElement
+UML13EA::Collaboration.contains_many 'interaction', UML13EA::Interaction, 'context'
+UML13EA::CallAction.has_one 'operation', UML13EA::Operation, :lowerBound => 1
+UML13EA::UseCase.has_many 'extensionPoint', UML13EA::ExtensionPoint
+UML13EA::ActivityModel.contains_many_uni 'partition', UML13EA::Partition
+UML13EA::Interaction.contains_many_uni 'link', UML13EA::Link
+UML13EA::LinkEnd.has_one 'associationEnd', UML13EA::AssociationEnd, :lowerBound => 1
+UML13EA::LinkEnd.has_one 'participant', UML13EA::Instance, :lowerBound => 1
+UML13EA::BehavioralFeature.many_to_many 'raisedSignal', UML13EA::Signal, 'context'
+UML13EA::BehavioralFeature.contains_many_uni 'parameter', UML13EA::Parameter
+UML13EA::SubmachineState.has_one 'submachine', UML13EA::StateMachine, :lowerBound => 1
+UML13EA::Multiplicity.contains_many_uni 'range', UML13EA::MultiplicityRange, :lowerBound => 1
+UML13EA::ObjectFlowState.has_one 'type', UML13EA::Classifier, :lowerBound => 1
+UML13EA::ObjectFlowState.has_one 'available', UML13EA::Parameter, :lowerBound => 1
+UML13EA::AttributeLink.has_one 'value', UML13EA::Instance, :lowerBound => 1
+UML13EA::AttributeLink.has_one 'attribute', UML13EA::Attribute, :lowerBound => 1
+UML13EA::TimeEvent.contains_one_uni 'when', UML13EA::TimeExpression
+UML13EA::Abstraction.contains_one_uni 'mapping', UML13EA::MappingExpression
diff --git a/lib/puppet/vendor/rgen/lib/ea_support/uml13_ea_metamodel_ext.rb b/lib/puppet/vendor/rgen/lib/ea_support/uml13_ea_metamodel_ext.rb
new file mode 100644
index 000000000..6c338c6aa
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/ea_support/uml13_ea_metamodel_ext.rb
@@ -0,0 +1,45 @@
+module UML13EA
+ class << self
+ attr_accessor :idStore
+ end
+ module ModelElement::ClassModule
+ def qualifiedName
+ _name = (respond_to?(:_name) ? self._name : name) || "unnamed"
+ _namespace = respond_to?(:_namespace) ? self._namespace : namespace
+ _namespace && _namespace.qualifiedName ? _namespace.qualifiedName+"::"+_name : _name
+ end
+ end
+ module XmiIdProvider::ClassModule
+ def _xmi_id
+ UML13EA.idStore.idHash[qualifiedName] ||= "EAID_"+object_id.to_s
+ end
+ end
+ module Package::ClassModule
+ def _xmi_id
+ UML13EA.idStore.idHash[qualifiedName] ||= "EAPK_"+object_id.to_s
+ end
+ end
+ module Generalization::ClassModule
+ def _name
+ "#{subtype.name}_#{supertype.name}"
+ end
+ end
+ module Association::ClassModule
+ def _name
+ connection.collect{|c| "#{c.getType.name}_#{c.name}"}.sort.join("_")
+ end
+ end
+ module AssociationEnd::ClassModule
+ def _name
+ "#{getType.name}_#{name}"
+ end
+ def _namespace
+ association
+ end
+ end
+ module StateVertex::ClassModule
+ def _namespace
+ container
+ end
+ end
+end
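
These ClassModule extensions give every EA model element a qualified name (its namespace path joined with "::") and, via XmiIdProvider, an EAID_/EAPK_ identifier memoized under that name in UML13EA.idStore. A minimal sketch of the effect, assuming rgen's lib directory is on the load path; the package and class names below are invented for illustration:

````
require 'ea_support/uml13_ea_metamodel'
require 'ea_support/uml13_ea_metamodel_ext'

pkg = UML13EA::Package.new(:name => "Core")
cls = UML13EA::Class.new(:name => "Widget")
pkg.addOwnedElement(cls)          # sets cls.namespace to pkg via the opposite reference

# qualifiedName walks the namespace chain added by the extension above
puts cls.qualifiedName            # => "Core::Widget"
````

Because _xmi_id is keyed by the qualified name, an element keeps the same EA identifier across serializations as long as the same id store is reused and the element is neither renamed nor moved.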
diff --git a/lib/puppet/vendor/rgen/lib/ea_support/uml13_ea_metamodel_generator.rb b/lib/puppet/vendor/rgen/lib/ea_support/uml13_ea_metamodel_generator.rb
new file mode 100644
index 000000000..725d601a3
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/ea_support/uml13_ea_metamodel_generator.rb
@@ -0,0 +1,43 @@
+require 'metamodels/uml13_metamodel'
+require 'mmgen/metamodel_generator'
+require 'rgen/transformer'
+require 'rgen/environment'
+require 'rgen/ecore/ecore'
+
+include MMGen::MetamodelGenerator
+
+class ECoreCopyTransformer < RGen::Transformer
+ copy_all RGen::ECore
+end
+
+eaMMRoot = ECoreCopyTransformer.new.trans(UML13.ecore)
+
+eaMMRoot.name = "UML13EA"
+eaMMRoot.eClassifiers.find{|c| c.name == "ActivityGraph"}.name = "ActivityModel"
+eaMMRoot.eClassifiers.find{|c| c.name == "Pseudostate"}.name = "PseudoState"
+
+compositeState = eaMMRoot.eClassifiers.find{|c| c.name == "CompositeState"}
+compositeState.eReferences.find{|r| r.name == "subvertex"}.name = "substate"
+
+generalization = eaMMRoot.eClassifiers.find{|c| c.name == "Generalization"}
+generalization.eReferences.find{|r| r.name == "parent"}.name = "supertype"
+generalization.eReferences.find{|r| r.name == "child"}.name = "subtype"
+
+assocEnd = eaMMRoot.eClassifiers.find{|c| c.name == "AssociationEnd"}
+assocEnd.eAttributes.find{|r| r.name == "ordering"}.name = "isOrdered"
+assocEnd.eAttributes.find{|r| r.name == "changeability"}.name = "changeable"
+assocEnd.eAttributes.find{|r| r.name == "isOrdered"}.eType = RGen::ECore::EBoolean
+assocEnd.eAttributes.find{|r| r.name == "changeable"}.eType.eLiterals.find{|l| l.name == "frozen"}.name = "none"
+multRef = assocEnd.eStructuralFeatures.find{|f| f.name == "multiplicity"}
+multRef.eType = nil
+assocEnd.removeEStructuralFeatures(multRef)
+assocEnd.addEStructuralFeatures(RGen::ECore::EAttribute.new(:name => "multiplicity", :eType => RGen::ECore::EString))
+
+xmiIdProvider = RGen::ECore::EClass.new(:name => "XmiIdProvider", :ePackage => eaMMRoot)
+eaMMRoot.eClassifiers.each do |c|
+ if %w(Package Class Generalization Association AssociationEnd StateVertex).include?(c.name)
+ c.addESuperTypes(xmiIdProvider)
+ end
+end
+
+generateMetamodel(eaMMRoot, File.dirname(__FILE__)+"/uml13_ea_metamodel.rb")
diff --git a/lib/puppet/vendor/rgen/lib/ea_support/uml13_ea_to_uml13.rb b/lib/puppet/vendor/rgen/lib/ea_support/uml13_ea_to_uml13.rb
new file mode 100644
index 000000000..d857bc293
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/ea_support/uml13_ea_to_uml13.rb
@@ -0,0 +1,103 @@
+require 'rgen/transformer'
+require 'metamodels/uml13_metamodel'
+require 'ea_support/uml13_ea_metamodel'
+
+class UML13EAToUML13 < RGen::Transformer
+ include UML13EA
+
+ def transform
+ trans(:class => Package)
+ trans(:class => Class)
+ @env_out.find(:class => UML13::Attribute).each do |me|
+ # remove tagged values used internally by EA which have been converted to UML
+ me.taggedValue = me.taggedValue.reject{|tv| ["lowerBound", "upperBound"].include?(tv.tag)}
+ end
+ end
+
+ def cleanModel
+ @env_out.find(:class => UML13::ModelElement).each do |me|
+ me.taggedValue = []
+ end
+ end
+
+ copy_all UML13EA, :to => UML13, :except => %w(
+ XmiIdProvider
+ AssociationEnd AssociationEndRole
+ StructuralFeature
+ Attribute
+ Generalization
+ ActivityModel
+ CompositeState
+ PseudoState
+ Dependency
+ )
+
+ transform AssociationEndRole, :to => UML13::AssociationEndRole do
+ copyAssociationEnd
+ end
+
+ transform AssociationEnd, :to => UML13::AssociationEnd do
+ copyAssociationEnd
+ end
+
+ def copyAssociationEnd
+ copy_features :except => [:isOrdered, :changeable] do
+ {:ordering => isOrdered ? :ordered : :unordered,
+ :changeability => {:none => :frozen}[changeable] || changeable,
+ :aggregation => {:shared => :aggregate}[aggregation] || aggregation,
+ :multiplicity => UML13::Multiplicity.new(
+ :range => [UML13::MultiplicityRange.new(
+ :lower => multiplicity && multiplicity.split("..").first,
+ :upper => multiplicity && multiplicity.split("..").last)])}
+ end
+ end
+
+ transform StructuralFeature, :to => UML13::StructuralFeature,
+ :if => lambda{|c| !@current_object.is_a?(UML13EA::Attribute)} do
+ copy_features :except => [:changeable] do
+ {:changeability => {:none => :frozen}[changeable] }
+ end
+ end
+
+ transform StructuralFeature, :to => UML13::Attribute,
+ :if => lambda{|c| @current_object.is_a?(UML13EA::Attribute)} do
+ _lowerBound = taggedValue.find{|tv| tv.tag == "lowerBound"}
+ _upperBound = taggedValue.find{|tv| tv.tag == "upperBound"}
+ if _lowerBound || _upperBound
+ _multiplicity = UML13::Multiplicity.new(
+ :range => [UML13::MultiplicityRange.new(
+ :lower => (_lowerBound && _lowerBound.value) || "0",
+ :upper => (_upperBound && _upperBound.value) || "1"
+ )])
+ end
+ copy_features :except => [:changeable] do
+ {:changeability => {:none => :frozen}[changeable],
+ :multiplicity => _multiplicity }
+ end
+ end
+
+ transform Generalization, :to => UML13::Generalization do
+ copy_features :except => [:subtype, :supertype] do
+ { :child => trans(subtype),
+ :parent => trans(supertype) }
+ end
+ end
+
+ copy ActivityModel, :to => UML13::ActivityGraph
+
+ transform CompositeState, :to => UML13::CompositeState do
+ copy_features :except => [:substate] do
+ { :subvertex => trans(substate) }
+ end
+ end
+
+ copy PseudoState, :to => UML13::Pseudostate
+
+ transform Dependency, :to => UML13::Dependency do
+ _name_tag = taggedValue.find{|tv| tv.tag == "dst_name"}
+ copy_features do
+ { :name => (_name_tag && _name_tag.value) || "Anonymous" }
+ end
+ end
+
+end
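
This transformer follows the usual RGen pattern: construct it with an input and an output RGen::Environment, then call transform, which starts from Package and Class elements and pulls everything reachable through the copy/transform rules into the output environment. A rough sketch under that assumption; in practice env_in would be filled from an EA XMI export rather than left empty as here:

````
require 'rgen/environment'
require 'ea_support/uml13_ea_to_uml13'

env_in  = RGen::Environment.new   # normally populated by the EA XMI instantiator
env_out = RGen::Environment.new

UML13EAToUML13.new(env_in, env_out).transform
puts env_out.find(:class => UML13::Class).size   # => 0 for the empty input used here
````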
diff --git a/lib/puppet/vendor/rgen/lib/ea_support/uml13_to_uml13_ea.rb b/lib/puppet/vendor/rgen/lib/ea_support/uml13_to_uml13_ea.rb
new file mode 100644
index 000000000..2d8dc7cf6
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/ea_support/uml13_to_uml13_ea.rb
@@ -0,0 +1,89 @@
+require 'rgen/transformer'
+require 'metamodels/uml13_metamodel'
+require 'ea_support/uml13_ea_metamodel'
+require 'ea_support/uml13_ea_metamodel_ext'
+
+class UML13ToUML13EA < RGen::Transformer
+ include UML13
+
+ def transform
+ trans(:class => Package)
+ trans(:class => Class)
+ end
+
+ copy_all UML13, :to => UML13EA, :except => %w(
+ ActivityGraph
+ CompositeState SimpleState
+ Class
+ Association AssociationEnd AssociationEndRole
+ Generalization
+ Pseudostate
+ Attribute
+ )
+
+ copy ActivityGraph, :to => UML13EA::ActivityModel
+
+ copy Pseudostate, :to => UML13EA::PseudoState
+
+ transform CompositeState, :to => UML13EA::CompositeState do
+ copy_features :except => [:subvertex] do
+ { :substate => trans(subvertex) }
+ end
+ end
+
+ transform SimpleState, :to => UML13EA::SimpleState do
+ copy_features :except => [:container] do
+ { :taggedValue => trans(taggedValue) +
+ [@env_out.new(UML13EA::TaggedValue, :tag => "ea_stype", :value => "State")] +
+ (container ? [ @env_out.new(UML13EA::TaggedValue, :tag => "owner", :value => trans(container)._xmi_id)] : []) }
+ end
+ end
+
+ transform Class, :to => UML13EA::Class do
+ copy_features do
+ { :taggedValue => trans(taggedValue) + [@env_out.new(UML13EA::TaggedValue, :tag => "ea_stype", :value => "Class")]}
+ end
+ end
+
+ transform Association, :to => UML13EA::Association do
+ copy_features do
+ { :connection => trans(connection[1].isNavigable ? [connection[0], connection[1]] : [connection[1], connection[0]]),
+ :taggedValue => trans(taggedValue) + [
+ @env_out.new(UML13EA::TaggedValue, :tag => "ea_type", :value => "Association"),
+ @env_out.new(UML13EA::TaggedValue, :tag => "direction", :value =>
+ connection.all?{|c| c.isNavigable} ? "Bi-Directional" : "Source -&gt; Destination")] }
+ end
+ end
+
+ transform AssociationEnd, :to => UML13EA::AssociationEnd do
+ copyAssociationEnd
+ end
+
+ transform AssociationEndRole, :to => UML13EA::AssociationEndRole do
+ copyAssociationEnd
+ end
+
+ def copyAssociationEnd
+ _lower = multiplicity && multiplicity.range.first.lower
+ _upper = multiplicity && multiplicity.range.first.upper
+ copy_features :except => [:multiplicity, :ordering, :changeability] do
+ { :multiplicity => _lower == _upper ? _lower : "#{_lower}..#{_upper}",
+ :isOrdered => ordering == :ordered,
+ :changeable => :none } #{:frozen => :none}[changeability] || changeability}
+ end
+ end
+
+ transform Attribute, :to => UML13EA::Attribute do
+ copy_features :except => [:changeability] do
+ { :changeable => {:frozen => :none}[changeability] }
+ end
+ end
+
+ transform Generalization, :to => UML13EA::Generalization do
+ copy_features :except => [:child, :parent] do
+ { :taggedValue => trans(taggedValue) + [@env_out.new(UML13EA::TaggedValue, :tag => "ea_type", :value => "Generalization")],
+ :subtype => trans(child),
+ :supertype => trans(parent)}
+ end
+ end
+end
diff --git a/lib/puppet/vendor/rgen/lib/metamodels/uml13_metamodel.rb b/lib/puppet/vendor/rgen/lib/metamodels/uml13_metamodel.rb
new file mode 100644
index 000000000..01157df96
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/metamodels/uml13_metamodel.rb
@@ -0,0 +1,559 @@
+require 'rgen/metamodel_builder'
+
+module UML13
+ extend RGen::MetamodelBuilder::ModuleExtension
+ include RGen::MetamodelBuilder::DataTypes
+
+ AggregationKind = Enum.new(:name => "AggregationKind", :literals =>[ :none, :aggregate, :composite ])
+ ChangeableKind = Enum.new(:name => "ChangeableKind", :literals =>[ :changeable, :frozen, :addOnly ])
+ OperationDirectionKind = Enum.new(:name => "OperationDirectionKind", :literals =>[ ])
+ ParameterDirectionKind = Enum.new(:name => "ParameterDirectionKind", :literals =>[ :in, :inout, :out, :return ])
+ MessageDirectionKind = Enum.new(:name => "MessageDirectionKind", :literals =>[ ])
+ ScopeKind = Enum.new(:name => "ScopeKind", :literals =>[ :instance, :classifier ])
+ VisibilityKind = Enum.new(:name => "VisibilityKind", :literals =>[ :public, :protected, :private ])
+ PseudostateKind = Enum.new(:name => "PseudostateKind", :literals =>[ :initial, :deepHistory, :shallowHistory, :join, :fork, :branch, :junction, :final ])
+ CallConcurrencyKind = Enum.new(:name => "CallConcurrencyKind", :literals =>[ :sequential, :guarded, :concurrent ])
+ OrderingKind = Enum.new(:name => "OrderingKind", :literals =>[ :unordered, :ordered, :sorted ])
+
+ class Element < RGen::MetamodelBuilder::MMBase
+ end
+
+ class ModelElement < Element
+ has_attr 'name', String
+ has_attr 'visibility', UML13::VisibilityKind, :defaultValueLiteral => "public"
+ has_attr 'isSpecification', Boolean
+ end
+
+ class Namespace < ModelElement
+ end
+
+ class GeneralizableElement < ModelElement
+ has_attr 'isRoot', Boolean
+ has_attr 'isLeaf', Boolean
+ has_attr 'isAbstract', Boolean
+ end
+
+ class Classifier < RGen::MetamodelBuilder::MMMultiple(GeneralizableElement, Namespace)
+ end
+
+ class Class < Classifier
+ has_attr 'isActive', Boolean
+ end
+
+ class DataType < Classifier
+ end
+
+ class Feature < ModelElement
+ has_attr 'ownerScope', UML13::ScopeKind, :defaultValueLiteral => "instance"
+ end
+
+ class StructuralFeature < Feature
+ has_attr 'changeability', UML13::ChangeableKind, :defaultValueLiteral => "changeable"
+ has_attr 'targetScope', UML13::ScopeKind, :defaultValueLiteral => "instance"
+ end
+
+ class AssociationEnd < ModelElement
+ has_attr 'isNavigable', Boolean, :defaultValueLiteral => "false"
+ has_attr 'ordering', UML13::OrderingKind, :defaultValueLiteral => "unordered"
+ has_attr 'aggregation', UML13::AggregationKind, :defaultValueLiteral => "none"
+ has_attr 'targetScope', UML13::ScopeKind, :defaultValueLiteral => "instance"
+ has_attr 'changeability', UML13::ChangeableKind, :defaultValueLiteral => "changeable"
+ end
+
+ class Interface < Classifier
+ end
+
+ class Constraint < ModelElement
+ end
+
+ class Relationship < ModelElement
+ end
+
+ class Association < RGen::MetamodelBuilder::MMMultiple(GeneralizableElement, Relationship)
+ end
+
+ class Attribute < StructuralFeature
+ end
+
+ class BehavioralFeature < Feature
+ has_attr 'isQuery', Boolean
+ end
+
+ class Operation < BehavioralFeature
+ has_attr 'concurrency', UML13::CallConcurrencyKind, :defaultValueLiteral => "sequential"
+ has_attr 'isRoot', Boolean
+ has_attr 'isLeaf', Boolean
+ has_attr 'isAbstract', Boolean
+ end
+
+ class Parameter < ModelElement
+ has_attr 'kind', UML13::ParameterDirectionKind, :defaultValueLiteral => "inout"
+ end
+
+ class Method < BehavioralFeature
+ end
+
+ class Generalization < Relationship
+ has_attr 'discriminator', String
+ end
+
+ class AssociationClass < RGen::MetamodelBuilder::MMMultiple(Class, Association)
+ end
+
+ class Dependency < Relationship
+ end
+
+ class Abstraction < Dependency
+ end
+
+ class PresentationElement < Element
+ end
+
+ class Usage < Dependency
+ end
+
+ class Binding < Dependency
+ end
+
+ class Component < Classifier
+ end
+
+ class Node < Classifier
+ end
+
+ class Permission < Dependency
+ end
+
+ class Comment < ModelElement
+ has_attr 'body', String
+ end
+
+ class Flow < Relationship
+ end
+
+ class TemplateParameter < RGen::MetamodelBuilder::MMBase
+ end
+
+ class ElementResidence < RGen::MetamodelBuilder::MMBase
+ has_attr 'visibility', UML13::VisibilityKind, :defaultValueLiteral => "public"
+ end
+
+ class Multiplicity < RGen::MetamodelBuilder::MMBase
+ end
+
+ class Expression < RGen::MetamodelBuilder::MMBase
+ has_attr 'language', String
+ has_attr 'body', String
+ end
+
+ class ObjectSetExpression < Expression
+ end
+
+ class TimeExpression < Expression
+ end
+
+ class BooleanExpression < Expression
+ end
+
+ class ActionExpression < Expression
+ end
+
+ class MultiplicityRange < RGen::MetamodelBuilder::MMBase
+ has_attr 'lower', String
+ has_attr 'upper', String
+ end
+
+ class Structure < DataType
+ end
+
+ class Primitive < DataType
+ end
+
+ class Enumeration < DataType
+ end
+
+ class EnumerationLiteral < RGen::MetamodelBuilder::MMBase
+ has_attr 'name', String
+ end
+
+ class ProgrammingLanguageType < DataType
+ end
+
+ class IterationExpression < Expression
+ end
+
+ class TypeExpression < Expression
+ end
+
+ class ArgListsExpression < Expression
+ end
+
+ class MappingExpression < Expression
+ end
+
+ class ProcedureExpression < Expression
+ end
+
+ class Stereotype < GeneralizableElement
+ has_attr 'icon', String
+ has_attr 'baseClass', String
+ end
+
+ class TaggedValue < Element
+ has_attr 'tag', String
+ has_attr 'value', String
+ end
+
+ class UseCase < Classifier
+ end
+
+ class Actor < Classifier
+ end
+
+ class Instance < ModelElement
+ end
+
+ class UseCaseInstance < Instance
+ end
+
+ class Extend < Relationship
+ end
+
+ class Include < Relationship
+ end
+
+ class ExtensionPoint < ModelElement
+ has_attr 'location', String
+ end
+
+ class StateMachine < ModelElement
+ end
+
+ class Event < ModelElement
+ end
+
+ class StateVertex < ModelElement
+ end
+
+ class State < StateVertex
+ end
+
+ class TimeEvent < Event
+ end
+
+ class CallEvent < Event
+ end
+
+ class SignalEvent < Event
+ end
+
+ class Transition < ModelElement
+ end
+
+ class CompositeState < State
+ has_attr 'isConcurrent', Boolean
+ end
+
+ class ChangeEvent < Event
+ end
+
+ class Guard < ModelElement
+ end
+
+ class Pseudostate < StateVertex
+ has_attr 'kind', UML13::PseudostateKind, :defaultValueLiteral => "initial"
+ end
+
+ class SimpleState < State
+ end
+
+ class SubmachineState < CompositeState
+ end
+
+ class SynchState < StateVertex
+ has_attr 'bound', Integer
+ end
+
+ class StubState < StateVertex
+ has_attr 'referenceState', String
+ end
+
+ class FinalState < State
+ end
+
+ class Collaboration < RGen::MetamodelBuilder::MMMultiple(GeneralizableElement, Namespace)
+ end
+
+ class ClassifierRole < Classifier
+ end
+
+ class AssociationRole < Association
+ end
+
+ class AssociationEndRole < AssociationEnd
+ end
+
+ class Message < ModelElement
+ end
+
+ class Interaction < ModelElement
+ end
+
+ class Signal < Classifier
+ end
+
+ class Action < ModelElement
+ has_attr 'isAsynchronous', Boolean
+ end
+
+ class CreateAction < Action
+ end
+
+ class DestroyAction < Action
+ end
+
+ class UninterpretedAction < Action
+ end
+
+ class AttributeLink < ModelElement
+ end
+
+ class Object < Instance
+ end
+
+ class Link < ModelElement
+ end
+
+ class LinkObject < RGen::MetamodelBuilder::MMMultiple(Object, Link)
+ end
+
+ class DataValue < Instance
+ end
+
+ class CallAction < Action
+ end
+
+ class SendAction < Action
+ end
+
+ class ActionSequence < Action
+ end
+
+ class Argument < ModelElement
+ end
+
+ class Reception < BehavioralFeature
+ has_attr 'isPolymorphic', Boolean
+ has_attr 'specification', String
+ end
+
+ class LinkEnd < ModelElement
+ end
+
+ class Call < RGen::MetamodelBuilder::MMBase
+ end
+
+ class ReturnAction < Action
+ end
+
+ class TerminateAction < Action
+ end
+
+ class Stimulus < ModelElement
+ end
+
+ class ActionInstance < RGen::MetamodelBuilder::MMBase
+ end
+
+ class Exception < Signal
+ end
+
+ class AssignmentAction < Action
+ end
+
+ class ComponentInstance < Instance
+ end
+
+ class NodeInstance < Instance
+ end
+
+ class ActivityGraph < StateMachine
+ end
+
+ class Partition < ModelElement
+ end
+
+ class SubactivityState < SubmachineState
+ has_attr 'isDynamic', Boolean
+ end
+
+ class ActionState < SimpleState
+ has_attr 'isDynamic', Boolean
+ end
+
+ class CallState < ActionState
+ end
+
+ class ObjectFlowState < SimpleState
+ has_attr 'isSynch', Boolean
+ end
+
+ class ClassifierInState < Classifier
+ end
+
+ class Package < RGen::MetamodelBuilder::MMMultiple(GeneralizableElement, Namespace)
+ end
+
+ class Model < Package
+ end
+
+ class Subsystem < RGen::MetamodelBuilder::MMMultiple(Classifier, Package)
+ has_attr 'isInstantiable', Boolean
+ end
+
+ class ElementImport < RGen::MetamodelBuilder::MMBase
+ has_attr 'visibility', UML13::VisibilityKind, :defaultValueLiteral => "public"
+ has_attr 'alias', String
+ end
+
+ class DiagramElement < PresentationElement
+ has_attr 'geometry', String
+ has_attr 'style', String
+ end
+
+ class Diagram < PresentationElement
+ has_attr 'name', String
+ has_attr 'toolName', String
+ has_attr 'diagramType', String
+ has_attr 'style', String
+ end
+
+end
+
+UML13::Classifier.many_to_many 'participant', UML13::AssociationEnd, 'specification'
+UML13::Classifier.one_to_many 'associationEnd', UML13::AssociationEnd, 'type'
+UML13::Classifier.contains_many 'feature', UML13::Feature, 'owner'
+UML13::StructuralFeature.contains_one_uni 'multiplicity', UML13::Multiplicity
+UML13::StructuralFeature.has_one 'type', UML13::Classifier, :lowerBound => 1
+UML13::Namespace.contains_many 'ownedElement', UML13::ModelElement, 'namespace'
+UML13::AssociationEnd.contains_one_uni 'multiplicity', UML13::Multiplicity
+UML13::AssociationEnd.contains_many 'qualifier', UML13::Attribute, 'associationEnd'
+UML13::Association.contains_many 'connection', UML13::AssociationEnd, 'association', :lowerBound => 2
+UML13::Constraint.contains_one_uni 'body', UML13::BooleanExpression
+UML13::Constraint.many_to_many 'constrainedElement', UML13::ModelElement, 'constraint', :lowerBound => 1
+UML13::GeneralizableElement.one_to_many 'specialization', UML13::Generalization, 'parent'
+UML13::GeneralizableElement.one_to_many 'generalization', UML13::Generalization, 'child'
+UML13::Attribute.contains_one_uni 'initialValue', UML13::Expression
+UML13::Operation.one_to_many 'occurrence', UML13::CallEvent, 'operation'
+UML13::Operation.one_to_many 'method', UML13::Method, 'specification'
+UML13::Parameter.contains_one_uni 'defaultValue', UML13::Expression
+UML13::Parameter.many_to_many 'state', UML13::ObjectFlowState, 'parameter'
+UML13::Parameter.has_one 'type', UML13::Classifier, :lowerBound => 1
+UML13::Method.contains_one_uni 'body', UML13::ProcedureExpression
+UML13::BehavioralFeature.many_to_many 'raisedSignal', UML13::Signal, 'context'
+UML13::BehavioralFeature.contains_many_uni 'parameter', UML13::Parameter
+UML13::ModelElement.one_to_many 'behavior', UML13::StateMachine, 'context'
+UML13::ModelElement.many_to_one 'stereotype', UML13::Stereotype, 'extendedElement'
+UML13::ModelElement.one_to_many 'elementResidence', UML13::ElementResidence, 'resident'
+UML13::ModelElement.many_to_many 'sourceFlow', UML13::Flow, 'source'
+UML13::ModelElement.many_to_many 'targetFlow', UML13::Flow, 'target'
+UML13::ModelElement.many_to_many 'presentation', UML13::PresentationElement, 'subject'
+UML13::ModelElement.many_to_many 'supplierDependency', UML13::Dependency, 'supplier', :lowerBound => 1
+UML13::ModelElement.contains_many 'taggedValue', UML13::TaggedValue, 'modelElement'
+UML13::ModelElement.contains_many_uni 'templateParameter', UML13::TemplateParameter
+UML13::ModelElement.many_to_many 'clientDependency', UML13::Dependency, 'client', :lowerBound => 1
+UML13::ModelElement.many_to_many 'comment', UML13::Comment, 'annotatedElement'
+UML13::ModelElement.one_to_many 'elementImport', UML13::ElementImport, 'modelElement'
+UML13::Abstraction.contains_one_uni 'mapping', UML13::MappingExpression
+UML13::Binding.has_many 'argument', UML13::ModelElement, :lowerBound => 1
+UML13::Component.contains_many 'residentElement', UML13::ElementResidence, 'implementationLocation'
+UML13::Component.many_to_many 'deploymentLocation', UML13::Node, 'resident'
+UML13::TemplateParameter.has_one 'modelElement', UML13::ModelElement
+UML13::TemplateParameter.has_one 'defaultElement', UML13::ModelElement
+UML13::Multiplicity.contains_many_uni 'range', UML13::MultiplicityRange, :lowerBound => 1
+UML13::Enumeration.contains_many_uni 'literal', UML13::EnumerationLiteral, :lowerBound => 1
+UML13::ProgrammingLanguageType.contains_one_uni 'type', UML13::TypeExpression
+UML13::Stereotype.has_many 'requiredTag', UML13::TaggedValue
+UML13::UseCase.has_many 'extensionPoint', UML13::ExtensionPoint
+UML13::UseCase.one_to_many 'include', UML13::Include, 'base'
+UML13::UseCase.one_to_many 'extend', UML13::Extend, 'extension'
+UML13::Extend.contains_one_uni 'condition', UML13::BooleanExpression
+UML13::Extend.has_many 'extensionPoint', UML13::ExtensionPoint, :lowerBound => 1
+UML13::Extend.has_one 'base', UML13::UseCase, :lowerBound => 1
+UML13::Include.has_one 'addition', UML13::UseCase, :lowerBound => 1
+UML13::StateMachine.contains_many_uni 'transitions', UML13::Transition
+UML13::StateMachine.contains_one_uni 'top', UML13::State, :lowerBound => 1
+UML13::Event.contains_many_uni 'parameters', UML13::Parameter
+UML13::State.contains_one_uni 'doActivity', UML13::Action
+UML13::State.contains_many_uni 'internalTransition', UML13::Transition
+UML13::State.has_many 'deferrableEvent', UML13::Event
+UML13::State.contains_one_uni 'exit', UML13::Action
+UML13::State.contains_one_uni 'entry', UML13::Action
+UML13::TimeEvent.contains_one_uni 'when', UML13::TimeExpression
+UML13::SignalEvent.many_to_one 'signal', UML13::Signal, 'occurrence', :lowerBound => 1
+UML13::Transition.many_to_one 'target', UML13::StateVertex, 'incoming', :lowerBound => 1
+UML13::Transition.many_to_one 'source', UML13::StateVertex, 'outgoing', :lowerBound => 1
+UML13::Transition.has_one 'trigger', UML13::Event
+UML13::Transition.contains_one_uni 'effect', UML13::Action
+UML13::Transition.contains_one_uni 'guard', UML13::Guard
+UML13::CompositeState.contains_many 'subvertex', UML13::StateVertex, 'container', :lowerBound => 1
+UML13::ChangeEvent.contains_one_uni 'changeExpression', UML13::BooleanExpression
+UML13::Guard.contains_one_uni 'expression', UML13::BooleanExpression
+UML13::SubmachineState.has_one 'submachine', UML13::StateMachine, :lowerBound => 1
+UML13::Collaboration.has_one 'representedOperation', UML13::Operation
+UML13::Collaboration.has_one 'representedClassifier', UML13::Classifier
+UML13::Collaboration.has_many 'constrainingElement', UML13::ModelElement
+UML13::Collaboration.contains_many 'interaction', UML13::Interaction, 'context'
+UML13::ClassifierRole.contains_one_uni 'multiplicity', UML13::Multiplicity
+UML13::ClassifierRole.has_many 'availableContents', UML13::ModelElement
+UML13::ClassifierRole.has_many 'availableFeature', UML13::Feature
+UML13::ClassifierRole.has_one 'base', UML13::Classifier, :lowerBound => 1
+UML13::AssociationRole.contains_one_uni 'multiplicity', UML13::Multiplicity
+UML13::AssociationRole.has_one 'base', UML13::Association
+UML13::AssociationEndRole.has_many 'availableQualifier', UML13::Attribute
+UML13::AssociationEndRole.has_one 'base', UML13::AssociationEnd
+UML13::Message.has_one 'action', UML13::Action, :lowerBound => 1
+UML13::Message.has_one 'communicationConnection', UML13::AssociationRole
+UML13::Message.has_many 'predecessor', UML13::Message
+UML13::Message.has_one 'receiver', UML13::ClassifierRole, :lowerBound => 1
+UML13::Message.has_one 'sender', UML13::ClassifierRole, :lowerBound => 1
+UML13::Message.has_one 'activator', UML13::Message
+UML13::Interaction.contains_many 'message', UML13::Message, 'interaction', :lowerBound => 1
+UML13::Interaction.contains_many_uni 'link', UML13::Link
+UML13::Instance.contains_many_uni 'slot', UML13::AttributeLink
+UML13::Instance.one_to_many 'linkEnd', UML13::LinkEnd, 'instance'
+UML13::Instance.has_many 'classifier', UML13::Classifier, :lowerBound => 1
+UML13::Signal.one_to_many 'reception', UML13::Reception, 'signal'
+UML13::CreateAction.has_one 'instantiation', UML13::Classifier, :lowerBound => 1
+UML13::Action.contains_one_uni 'recurrence', UML13::IterationExpression
+UML13::Action.contains_one_uni 'target', UML13::ObjectSetExpression
+UML13::Action.contains_one_uni 'script', UML13::ActionExpression
+UML13::Action.contains_many_uni 'actualArgument', UML13::Argument
+UML13::AttributeLink.has_one 'value', UML13::Instance, :lowerBound => 1
+UML13::AttributeLink.has_one 'attribute', UML13::Attribute, :lowerBound => 1
+UML13::CallAction.has_one 'operation', UML13::Operation, :lowerBound => 1
+UML13::SendAction.has_one 'signal', UML13::Signal, :lowerBound => 1
+UML13::ActionSequence.contains_many_uni 'action', UML13::Action
+UML13::Argument.contains_one_uni 'value', UML13::Expression
+UML13::Link.contains_many_uni 'connection', UML13::LinkEnd, :lowerBound => 2
+UML13::Link.has_one 'association', UML13::Association, :lowerBound => 1
+UML13::LinkEnd.has_one 'associationEnd', UML13::AssociationEnd, :lowerBound => 1
+UML13::LinkEnd.has_one 'participant', UML13::Instance, :lowerBound => 1
+UML13::Stimulus.has_one 'dispatchAction', UML13::Action, :lowerBound => 1
+UML13::Stimulus.has_one 'communicationLink', UML13::Link
+UML13::Stimulus.has_one 'receiver', UML13::Instance, :lowerBound => 1
+UML13::Stimulus.has_one 'sender', UML13::Instance, :lowerBound => 1
+UML13::Stimulus.has_many 'argument', UML13::Instance
+UML13::ComponentInstance.has_many 'resident', UML13::Instance
+UML13::NodeInstance.has_many 'resident', UML13::ComponentInstance
+UML13::ActivityGraph.contains_many_uni 'partition', UML13::Partition
+UML13::Partition.has_many 'contents', UML13::ModelElement
+UML13::SubactivityState.contains_one_uni 'dynamicArguments', UML13::ArgListsExpression
+UML13::ObjectFlowState.has_one 'type', UML13::Classifier, :lowerBound => 1
+UML13::ObjectFlowState.has_one 'available', UML13::Parameter, :lowerBound => 1
+UML13::ClassifierInState.has_one 'type', UML13::Classifier, :lowerBound => 1
+UML13::ClassifierInState.has_many 'inState', UML13::State
+UML13::ActionState.contains_one_uni 'dynamicArguments', UML13::ArgListsExpression
+UML13::Package.contains_many 'importedElement', UML13::ElementImport, 'package'
+UML13::Diagram.contains_many 'element', UML13::DiagramElement, 'diagram'
+UML13::Diagram.has_one 'owner', UML13::ModelElement, :lowerBound => 1
diff --git a/lib/puppet/vendor/rgen/lib/metamodels/uml13_metamodel_ext.rb b/lib/puppet/vendor/rgen/lib/metamodels/uml13_metamodel_ext.rb
new file mode 100644
index 000000000..bc7eb7178
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/metamodels/uml13_metamodel_ext.rb
@@ -0,0 +1,26 @@
+# Some handy extensions to the UML13 metamodel
+#
+module UML13
+
+ module AssociationEnd::ClassModule
+ def otherEnd
+ association.connection.find{|c| c != self}
+ end
+ end
+
+ module Classifier::ClassModule
+ def localCompositeEnd
+ associationEnd.select{|e| e.aggregation == :composite}
+ end
+ def remoteCompositeEnd
+ associationEnd.otherEnd.select{|e| e.aggregation == :composite}
+ end
+ def localNavigableEnd
+ associationEnd.select{|e| e.isNavigable}
+ end
+ def remoteNavigableEnd
+ associationEnd.otherEnd.select{|e| e.isNavigable}
+ end
+ end
+
+end
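
A small illustration of these navigation helpers, assuming rgen's lib directory is on the load path; the association and end names are made up:

````
require 'metamodels/uml13_metamodel'
require 'metamodels/uml13_metamodel_ext'

assoc = UML13::Association.new
left  = UML13::AssociationEnd.new(:name => "left")
right = UML13::AssociationEnd.new(:name => "right")
assoc.addConnection(left)
assoc.addConnection(right)

# otherEnd looks up the opposite end through the owning association
puts left.otherEnd.name   # => "right"
````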
diff --git a/lib/puppet/vendor/rgen/lib/mmgen/metamodel_generator.rb b/lib/puppet/vendor/rgen/lib/mmgen/metamodel_generator.rb
new file mode 100644
index 000000000..9ea9fe608
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/mmgen/metamodel_generator.rb
@@ -0,0 +1,20 @@
+require 'rgen/environment'
+require 'rgen/template_language'
+require 'rgen/ecore/ecore'
+require 'rgen/ecore/ecore_ext'
+require 'mmgen/mm_ext/ecore_mmgen_ext'
+
+module MMGen
+
+module MetamodelGenerator
+
+ def generateMetamodel(rootPackage, out_file)
+ tc = RGen::TemplateLanguage::DirectoryTemplateContainer.new(RGen::ECore, File.dirname(out_file))
+ tpl_path = File.dirname(__FILE__) + '/templates'
+ tc.load(tpl_path)
+ tc.expand('metamodel_generator::GenerateMetamodel', File.basename(out_file), :for => rootPackage)
+ end
+
+end
+
+end
\ No newline at end of file
diff --git a/lib/puppet/vendor/rgen/lib/mmgen/mm_ext/ecore_mmgen_ext.rb b/lib/puppet/vendor/rgen/lib/mmgen/mm_ext/ecore_mmgen_ext.rb
new file mode 100644
index 000000000..17dd81c69
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/mmgen/mm_ext/ecore_mmgen_ext.rb
@@ -0,0 +1,91 @@
+require 'rgen/util/name_helper'
+
+module RGen
+
+ module ECore
+
+ module EPackage::ClassModule
+ include RGen::Util::NameHelper
+
+ def moduleName
+ firstToUpper(name)
+ end
+
+ def qualifiedModuleName(rootPackage)
+ return moduleName unless eSuperPackage and self != rootPackage
+ eSuperPackage.qualifiedModuleName(rootPackage) + "::" + moduleName
+ end
+
+ def ancestorPackages
+ return [] unless eSuperPackage
+ [eSuperPackage] + eSuperPackage.ancestorPackages
+ end
+
+ def ownClasses
+ eClassifiers.select{|c| c.is_a?(EClass)}
+ end
+
+ def classesInGenerationOrdering
+ ownClasses + eSubpackages.collect{|s| s.classesInGenerationOrdering}.flatten
+ end
+
+ def needClassReorder?
+ classesInGenerationOrdering != inheritanceOrderClasses(classesInGenerationOrdering)
+ end
+
+ def allClassesSorted
+ inheritanceOrderClasses(classesInGenerationOrdering)
+ end
+
+ def inheritanceOrderClasses(cls)
+ sortArray = cls.dup
+ i1 = 0
+ while i1 < sortArray.size-1
+ again = false
+ for i2 in i1+1..sortArray.size-1
+ e2 = sortArray[i2]
+ if sortArray[i1].eSuperTypes.include?(e2)
+ sortArray.delete(e2)
+ sortArray.insert(i1,e2)
+ again = true
+ break
+ end
+ end
+ i1 += 1 unless again
+ end
+ sortArray
+ end
+ end
+
+ module EClassifier::ClassModule
+ include RGen::Util::NameHelper
+ def classifierName
+ firstToUpper(name)
+ end
+ def qualifiedClassifierName(rootPackage)
+ (ePackage ? ePackage.qualifiedModuleName(rootPackage) + "::" : "") + classifierName
+ end
+ def ancestorPackages
+ return [] unless ePackage
+ [ePackage] + ePackage.ancestorPackages
+ end
+ def qualifiedClassifierNameIfRequired(package)
+ if ePackage != package
+ commonSuper = (package.ancestorPackages & ancestorPackages).first
+ qualifiedClassifierName(commonSuper)
+ else
+ classifierName
+ end
+ end
+ end
+
+ module EAttribute::ClassModule
+ def RubyType
+ typeMap = {'float' => 'Float', 'int' => 'Integer'}
+ (self.getType && typeMap[self.getType.downcase]) || 'String'
+ end
+ end
+
+ end
+
+end
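
inheritanceOrderClasses is what lets the metamodel generator emit superclasses before the classes that inherit from them, so the generated Ruby file loads without forward references. A tiny sketch of the reordering, with invented class names and assuming rgen's lib directory is on the load path:

````
require 'rgen/ecore/ecore'
require 'mmgen/mm_ext/ecore_mmgen_ext'

base    = RGen::ECore::EClass.new(:name => "Base")
derived = RGen::ECore::EClass.new(:name => "Derived")
derived.addESuperTypes(base)

pkg = RGen::ECore::EPackage.new(:name => "Sample")
# the superclass is moved in front of its subclass
p pkg.inheritanceOrderClasses([derived, base]).collect{|c| c.name}   # => ["Base", "Derived"]
````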
diff --git a/lib/puppet/vendor/rgen/lib/mmgen/mmgen.rb b/lib/puppet/vendor/rgen/lib/mmgen/mmgen.rb
new file mode 100644
index 000000000..41a696b06
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/mmgen/mmgen.rb
@@ -0,0 +1,28 @@
+$:.unshift File.join(File.dirname(__FILE__),"..")
+
+require 'ea/xmi_ecore_instantiator'
+require 'mmgen/metamodel_generator'
+
+include MMGen::MetamodelGenerator
+
+unless ARGV.length >= 2
+ puts "Usage: mmgen.rb <xmi_class_model_file> <root package> (<module>)*"
+ exit
+else
+ file_name = ARGV.shift
+ root_package_name = ARGV.shift
+ modules = ARGV
+ out_file = file_name.gsub(/\.\w+$/,'.rb')
+ puts out_file
+end
+
+env = RGen::Environment.new
+File.open(file_name) { |f|
+ puts "instantiating ..."
+ XMIECoreInstantiator.new.instantiateECoreModel(env, f.read)
+}
+
+rootPackage = env.find(:class => RGen::ECore::EPackage, :name => root_package_name).first
+
+puts "generating ..."
+generateMetamodel(rootPackage, out_file)
diff --git a/lib/puppet/vendor/rgen/lib/mmgen/templates/annotations.tpl b/lib/puppet/vendor/rgen/lib/mmgen/templates/annotations.tpl
new file mode 100644
index 000000000..b9f5d1bfd
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/mmgen/templates/annotations.tpl
@@ -0,0 +1,37 @@
+<% define 'Annotations', :for => EPackage do %>
+ <% for a in eAnnotations %>
+ annotation <% expand 'AnnotationArgs', :for => a %>
+ <% end %>
+<% end %>
+
+<% define 'Annotations', :for => EClass do %>
+ <% for a in eAnnotations %>
+ annotation <% expand 'AnnotationArgs', :for => a %>
+ <% end %>
+<% end %>
+
+<% define 'Annotations', :for => EStructuralFeature do %>
+ <% oppositeAnnotations = (this.respond_to?(:eOpposite) && eOpposite && eOpposite.eAnnotations) || [] %>
+ <% if eAnnotations.size > 0 || oppositeAnnotations.size > 0 %>
+ do<%iinc%>
+ <% for a in eAnnotations %>
+ annotation <% expand 'AnnotationArgs', :for => a %>
+ <% end %>
+ <% for a in oppositeAnnotations %>
+ opposite_annotation <% expand 'AnnotationArgs', :for => a %>
+ <% end %><%idec%>
+ end<%nows%>
+ <% end %>
+<% end %>
+
+<% define 'AnnotationArgs', :for => EAnnotation do %>
+ <% if source.nil? %>
+ <% expand 'Details' %>
+ <% else %>
+ :source => "<%= source.to_s %>", :details => {<% expand 'Details' %>}<%nows%>
+ <% end %>
+<% end %>
+
+<% define 'Details', :for => EAnnotation do %>
+ <%= details.sort{|a,b| a.key<=>b.key}.collect{ |d| "\'" + d.key + "\' => \'"+ (d.value || "").gsub('\'','\\\'').to_s + "\'"}.join(', ') %><%nows%>
+<% end %>
\ No newline at end of file
diff --git a/lib/puppet/vendor/rgen/lib/mmgen/templates/metamodel_generator.tpl b/lib/puppet/vendor/rgen/lib/mmgen/templates/metamodel_generator.tpl
new file mode 100644
index 000000000..5e6877a74
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/mmgen/templates/metamodel_generator.tpl
@@ -0,0 +1,172 @@
+
+<% define 'GenerateMetamodel', :for => EPackage do |filename| %>
+ <% file filename do %>
+ require 'rgen/metamodel_builder'
+ <%nl%>
+ <% if needClassReorder? %>
+ <% expand 'GeneratePackagesOnly' %>
+ <% expand 'GenerateClassesReordered' %>
+ <% else %>
+ <% expand 'GeneratePackage' %>
+ <% end %>
+ <%nl%>
+ <% expand 'GenerateAssocs' %>
+ <% end %>
+<% end %>
+
+<% define 'GeneratePackage', :for => EPackage do %>
+ module <%= moduleName %><% iinc %>
+ extend RGen::MetamodelBuilder::ModuleExtension
+ include RGen::MetamodelBuilder::DataTypes
+ <% expand 'annotations::Annotations' %>
+ <%nl%>
+ <% expand 'EnumTypes' %>
+ <% for c in ownClasses %><%nl%>
+ <% expand 'ClassHeader', this, :for => c %><%iinc%>
+ <% if c.abstract %>abstract<% end %>
+ <% expand 'annotations::Annotations', :for => c %>
+ <% expand 'Attribute', this, :foreach => c.eAttributes %>
+ <%idec%>
+ end
+ <% end %><%nl%>
+ <% for p in eSubpackages %>
+ <%nl%><% expand 'GeneratePackage', :for => p %>
+ <% end %><%idec%>
+ end
+<% end %>
+
+<% define 'GenerateClassesReordered', :for => EPackage do %>
+ <% for c in allClassesSorted %><%nl%>
+ <% expand 'ClassHeaderFullyQualified', this, :for => c %><%iinc%>
+ <% if c.abstract %>abstract<% end %>
+ <% expand 'annotations::Annotations', :for => c %>
+ <% expand 'Attribute', this, :foreach => c.eAttributes %>
+ <%idec%>
+ end
+ <% end %><%nl%>
+<% end %>
+
+<% define 'GeneratePackagesOnly', :for => EPackage do %>
+ module <%= moduleName %><% iinc %>
+ extend RGen::MetamodelBuilder::ModuleExtension
+ include RGen::MetamodelBuilder::DataTypes
+ <% expand 'annotations::Annotations' %>
+ <%nl%>
+ <% expand 'EnumTypes' %>
+ <% for p in eSubpackages %>
+ <%nl%><% expand 'GeneratePackagesOnly', :for => p %>
+ <% end %><%idec%>
+ end
+<% end %>
+
+<% define 'Attribute', :for => EAttribute do |rootp| %>
+ <% if upperBound == 1%>has_attr<% else %>has_many_attr<% end %> '<%= name %>', <%nows%>
+ <% if eType.is_a?(EEnum) %><%nows%>
+ <%= eType.qualifiedClassifierName(rootp) %><%nows%>
+ <% else %><%nows%>
+ <%= eType && eType.instanceClass.to_s %><%nows%>
+ <% end %><%nows%>
+ <% for p in RGen::MetamodelBuilder::Intermediate::Attribute.properties %>
+ <% unless p == :name || (p == :upperBound && (upperBound == 1 || upperBound == -1)) ||
+ RGen::MetamodelBuilder::Intermediate::Attribute.default_value(p) == getGeneric(p) %>
+ , :<%=p%> => <%nows%>
+ <% if getGeneric(p).is_a?(String) %>
+ "<%= getGeneric(p) %>"<%nows%>
+ <% elsif getGeneric(p).is_a?(Symbol) %>
+ :<%= getGeneric(p) %><%nows%>
+ <% else %>
+ <%= getGeneric(p) %><%nows%>
+ <% end %>
+ <% end %>
+ <% end %>
+ <%ws%><% expand 'annotations::Annotations' %><%nl%>
+<% end %>
+
+<% define 'EnumTypes', :for => EPackage do %>
+ <% for enum in eClassifiers.select{|c| c.is_a?(EEnum)} %>
+ <%= enum.name %> = Enum.new(:name => '<%= enum.name %>', :literals =>[ <%nows%>
+ <%= enum.eLiterals.collect { |lit| ":"+(lit.name =~ /^\d|\W/ ? "'"+lit.name+"'" : lit.name) }.join(', ') %> ])
+ <% end %>
+<% end %>
+
+<% define 'GenerateAssocs', :for => EPackage do %>
+ <% refDone = {} %>
+ <% for ref in eAllClassifiers.select{|c| c.is_a?(EClass)}.eReferences %>
+ <% if !refDone[ref] && ref.eOpposite %>
+ <% ref = ref.eOpposite if ref.eOpposite.containment %>
+ <% refDone[ref] = refDone[ref.eOpposite] = true %>
+ <% if !ref.many && !ref.eOpposite.many %>
+ <% if ref.containment %>
+ <% expand 'Reference', "contains_one", this, :for => ref %>
+ <% else %>
+ <% expand 'Reference', "one_to_one", this, :for => ref %>
+ <% end %>
+ <% elsif !ref.many && ref.eOpposite.many %>
+ <% expand 'Reference', "many_to_one", this, :for => ref %>
+ <% elsif ref.many && !ref.eOpposite.many %>
+ <% if ref.containment %>
+ <% expand 'Reference', "contains_many", this, :for => ref %>
+ <% else %>
+ <% expand 'Reference', "one_to_many", this, :for => ref %>
+ <% end %>
+ <% elsif ref.many && ref.eOpposite.many %>
+ <% expand 'Reference', "many_to_many", this, :for => ref %>
+ <% end %>
+ <% expand 'annotations::Annotations', :for => ref %><%nl%>
+ <% elsif !refDone[ref] %>
+ <% refDone[ref] = true %>
+ <% if !ref.many %>
+ <% if ref.containment %>
+ <% expand 'Reference', "contains_one_uni", this, :for => ref %>
+ <% else %>
+ <% expand 'Reference', "has_one", this, :for => ref %>
+ <% end %>
+ <% elsif ref.many %>
+ <% if ref.containment %>
+ <% expand 'Reference', "contains_many_uni", this, :for => ref %>
+ <% else %>
+ <% expand 'Reference', "has_many", this, :for => ref %>
+ <% end %>
+ <% end %>
+ <% expand 'annotations::Annotations', :for => ref %><%nl%>
+ <% end %>
+ <% end %>
+<% end %>
+
+<% define 'Reference', :for => EReference do |cmd, rootpackage| %>
+ <%= eContainingClass.qualifiedClassifierName(rootpackage) %>.<%= cmd %> '<%= name %>', <%= eType && eType.qualifiedClassifierName(rootpackage) %><%nows%>
+ <% if eOpposite %><%nows%>
+ , '<%= eOpposite.name%>'<%nows%>
+ <% end %><%nows%>
+ <% pset = RGen::MetamodelBuilder::Intermediate::Reference.properties.reject{|p| p == :name || p == :upperBound || p == :containment} %>
+ <% for p in pset.reject{|p| RGen::MetamodelBuilder::Intermediate::Reference.default_value(p) == getGeneric(p)} %>
+ , :<%=p%> => <%=getGeneric(p)%><%nows%>
+ <% end %>
+ <% if eOpposite %>
+ <% for p in pset.reject{|p| RGen::MetamodelBuilder::Intermediate::Reference.default_value(p) == eOpposite.getGeneric(p)} %>
+ , :opposite_<%=p%> => <%=eOpposite.getGeneric(p)%><%nows%>
+ <% end %>
+ <% end %><%ws%>
+<% end %>
+
+<% define 'ClassHeader', :for => EClass do |rootp| %>
+ class <%= classifierName %> < <% nows %>
+ <% if eSuperTypes.size > 1 %><% nows %>
+ RGen::MetamodelBuilder::MMMultiple(<%= eSuperTypes.collect{|t| t.qualifiedClassifierNameIfRequired(rootp)}.join(', ') %>)
+ <% elsif eSuperTypes.size > 0 %><% nows %>
+ <%= eSuperTypes.first.qualifiedClassifierNameIfRequired(rootp) %>
+ <% else %><% nows %>
+ RGen::MetamodelBuilder::MMBase
+ <% end %>
+<% end %>
+
+<% define 'ClassHeaderFullyQualified', :for => EClass do |rootp| %>
+ class <%= qualifiedClassifierName(rootp) %> < <% nows %>
+ <% if eSuperTypes.size > 1 %><% nows %>
+ RGen::MetamodelBuilder::MMMultiple(<%= eSuperTypes.collect{|t| t.qualifiedClassifierName(rootp)}.join(', ') %>)
+ <% elsif eSuperTypes.size > 0 %><% nows %>
+ <%= eSuperTypes.first.qualifiedClassifierName(rootp) %>
+ <% else %><% nows %>
+ RGen::MetamodelBuilder::MMBase
+ <% end %>
+<% end %>
diff --git a/lib/puppet/vendor/rgen/lib/rgen/array_extensions.rb b/lib/puppet/vendor/rgen/lib/rgen/array_extensions.rb
new file mode 100644
index 000000000..d1d7beda0
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/rgen/array_extensions.rb
@@ -0,0 +1,45 @@
+# RGen Framework
+# (c) Martin Thiede, 2006
+
+require 'rgen/metamodel_builder'
+
+class Array
+
+ def >>(method)
+ compact.inject([]) { |r,e| r | ( (o=e.send(method)).is_a?(Array) ? o : [o] ) }
+ end
+
+ def method_missing(m, *args)
+
+ # This extension has the side effect that any method can be called on an
+ # empty array, with an empty array as the result. This behavior is required for
+ # navigating models.
+ #
+ # This is a problem for Hash[] called with an (empty) array of tuples.
+ # It calls to_hash expecting a Hash as the result. When it gets an array instead,
+ # it fails with an exception. Make sure it gets a NoMethodError, as it would without
+ # this extension, so that it can catch it and return an empty hash as expected.
+ #
+ # Similar problems exist for other Ruby built-in methods which are expected to fail.
+ #
+ return super unless (size == 0 &&
+ m != :to_hash && m != :to_str) ||
+ compact.any?{|e| e.is_a? RGen::MetamodelBuilder::MMBase}
+ # use an array to build the result in order to preserve element ordering
+ result = []
+ inResult = {}
+ compact.each do |e|
+ if e.is_a? RGen::MetamodelBuilder::MMBase
+ ((o=e.send(m)).is_a?(Array) ? o : [o] ).each do |v|
+ next if inResult[v.object_id]
+ inResult[v.object_id] = true
+ result << v
+ end
+ else
+ raise StandardError.new("Trying to call a method on an array element that is not an RGen MMBase")
+ end
+ end
+ result.compact
+ end
+
+end
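
The behaviour described in the comment above can be seen directly; a small sketch assuming rgen's lib directory is on the load path (eClassifiers here is just an arbitrary navigation-style method name):

````
require 'rgen/array_extensions'

# navigating over an empty collection yields an empty result instead of NoMethodError
p [].eClassifiers   # => []

# to_hash and to_str are excluded, so Ruby built-ins such as Hash[] still
# receive the NoMethodError they expect and behave as usual
p Hash[[]]          # => {}
````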
diff --git a/lib/puppet/vendor/rgen/lib/rgen/ecore/ecore.rb b/lib/puppet/vendor/rgen/lib/rgen/ecore/ecore.rb
new file mode 100644
index 000000000..251f60733
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/rgen/ecore/ecore.rb
@@ -0,0 +1,218 @@
+require 'rgen/metamodel_builder'
+
+module RGen
+ extend RGen::MetamodelBuilder::ModuleExtension
+
+ # This is the ECore metamodel described using the RGen::MetamodelBuilder language.
+ #
+ # Known differences to the Java/EMF implementation are:
+ # * Attributes can not be "many"
+ #
+ module ECore
+ extend RGen::MetamodelBuilder::ModuleExtension
+
+ class EObject < RGen::MetamodelBuilder::MMBase
+ end
+
+ class EModelElement < RGen::MetamodelBuilder::MMBase
+ end
+
+ class EAnnotation < RGen::MetamodelBuilder::MMMultiple(EModelElement, EObject)
+ has_attr 'source', String
+ end
+
+ class ENamedElement < EModelElement
+ has_attr 'name', String
+ end
+
+ class ETypedElement < ENamedElement
+ has_attr 'lowerBound', Integer, :defaultValueLiteral => "0"
+ has_attr 'ordered', Boolean, :defaultValueLiteral => "true"
+ has_attr 'unique', Boolean, :defaultValueLiteral => "true"
+ has_attr 'upperBound', Integer, :defaultValueLiteral => "1"
+ has_attr 'many', Boolean, :derived=>true
+ has_attr 'required', Boolean, :derived=>true
+ module ClassModule
+ def many_derived
+ upperBound > 1 || upperBound == -1
+ end
+ def required_derived
+ lowerBound > 0
+ end
+ end
+ end
+
+ class EStructuralFeature < ETypedElement
+ has_attr 'changeable', Boolean, :defaultValueLiteral => "true"
+ has_attr 'defaultValue', Object, :derived=>true
+ has_attr 'defaultValueLiteral', String
+ has_attr 'derived', Boolean, :defaultValueLiteral => "false"
+ has_attr 'transient', Boolean, :defaultValueLiteral => "false"
+ has_attr 'unsettable', Boolean, :defaultValueLiteral => "false"
+ has_attr 'volatile', Boolean, :defaultValueLiteral => "false"
+ module ClassModule
+ def defaultValue_derived
+ return nil if defaultValueLiteral.nil?
+ case eType
+ when EInt, ELong
+ defaultValueLiteral.to_i
+ when EFloat
+ defaultValueLiteral.to_f
+ when EEnum
+ defaultValueLiteral.to_sym
+ when EBoolean
+ defaultValueLiteral == "true"
+ when EString
+ defaultValueLiteral
+ else
+ raise "Unhandled type"
+ end
+ end
+ end
+ end
+
+ class EAttribute < EStructuralFeature
+ has_attr 'iD', Boolean, :defaultValueLiteral => "false"
+ end
+
+ class EClassifier < ENamedElement
+ has_attr 'defaultValue', Object, :derived=>true
+ has_attr 'instanceClass', Object, :derived=>true
+ has_attr 'instanceClassName', String
+ module ClassModule
+ def instanceClass_derived
+ map = {"java.lang.string" => "String",
+ "boolean" => "RGen::MetamodelBuilder::DataTypes::Boolean",
+ "int" => "Integer",
+ "long" => "RGen::MetamodelBuilder::DataTypes::Long"}
+ icn = instanceClassName
+ icn = "NilClass" if icn.nil?
+ icn = map[icn.downcase] if map[icn.downcase]
+ eval(icn)
+ end
+ end
+ end
+
+ class EDataType < EClassifier
+ has_attr 'serializable', Boolean
+ end
+
+ class EGenericType < EDataType
+ has_one 'eClassifier', EDataType
+ end
+
+ class ETypeArgument < EModelElement
+ has_one 'eClassifier', EDataType
+ end
+
+ class EEnum < EDataType
+ end
+
+ class EEnumLiteral < ENamedElement
+ # instance may point to a "singleton object" (e.g. a Symbol) representing the literal
+ # has_attr 'instance', Object, :eType=>:EEnumerator, :transient=>true
+ has_attr 'literal', String
+ has_attr 'value', Integer
+ end
+
+ # TODO: check if required
+ class EFactory < EModelElement
+ end
+
+ class EOperation < ETypedElement
+ end
+
+ class EPackage < ENamedElement
+ has_attr 'nsPrefix', String
+ has_attr 'nsURI', String
+ end
+
+ class EParameter < ETypedElement
+ end
+
+ class EReference < EStructuralFeature
+ has_attr 'container', Boolean, :derived=>true
+ has_attr 'containment', Boolean, :defaultValueLiteral => "false"
+ has_attr 'resolveProxies', Boolean, :defaultValueLiteral => "true"
+ end
+
+ class EStringToStringMapEntry < RGen::MetamodelBuilder::MMBase
+ has_attr 'key', String
+ has_attr 'value', String
+ end
+
+ class EClass < EClassifier
+ has_attr 'abstract', Boolean
+ has_attr 'interface', Boolean
+ has_one 'eIDAttribute', ECore::EAttribute, :derived=>true, :resolveProxies=>false
+
+ has_many 'eAllAttributes', ECore::EAttribute, :derived=>true
+ has_many 'eAllContainments', ECore::EReference, :derived=>true
+ has_many 'eAllOperations', ECore::EOperation, :derived=>true
+ has_many 'eAllReferences', ECore::EReference, :derived=>true
+ has_many 'eAllStructuralFeatures', ECore::EStructuralFeature, :derived=>true
+ has_many 'eAllSuperTypes', ECore::EClass, :derived=>true
+ has_many 'eAttributes', ECore::EAttribute, :derived=>true
+ has_many 'eReferences', ECore::EReference, :derived=>true
+
+ module ClassModule
+ def eAllAttributes_derived
+ eAttributes + eSuperTypes.eAllAttributes
+ end
+ def eAllContainments_derived
+ eReferences.select{|r| r.containment} + eSuperTypes.eAllContainments
+ end
+ def eAllReferences_derived
+ eReferences + eSuperTypes.eAllReferences
+ end
+ def eAllStructuralFeatures_derived
+ eStructuralFeatures + eSuperTypes.eAllStructuralFeatures
+ end
+ def eAllSuperTypes_derived
+ eSuperTypes + eSuperTypes.eAllSuperTypes
+ end
+ def eAttributes_derived
+ eStructuralFeatures.select{|f| f.is_a?(EAttribute)}
+ end
+ def eReferences_derived
+ eStructuralFeatures.select{|f| f.is_a?(EReference)}
+ end
+ end
+ end
+
+ # predefined datatypes
+
+ EString = EDataType.new(:name => "EString", :instanceClassName => "String")
+ EInt = EDataType.new(:name => "EInt", :instanceClassName => "Integer")
+ ELong = EDataType.new(:name => "ELong", :instanceClassName => "Long")
+ EBoolean = EDataType.new(:name => "EBoolean", :instanceClassName => "Boolean")
+ EFloat = EDataType.new(:name => "EFloat", :instanceClassName => "Float")
+ ERubyObject = EDataType.new(:name => "ERubyObject", :instanceClassName => "Object")
+ EJavaObject = EDataType.new(:name => "EJavaObject")
+ ERubyClass = EDataType.new(:name => "ERubyClass", :instanceClassName => "Class")
+ EJavaClass = EDataType.new(:name => "EJavaClass")
+
+ end
+
+ ECore::EModelElement.contains_many 'eAnnotations', ECore::EAnnotation, 'eModelElement', :resolveProxies=>false
+ ECore::EAnnotation.contains_many_uni 'details', ECore::EStringToStringMapEntry, :resolveProxies=>false
+ ECore::EAnnotation.contains_many_uni 'contents', ECore::EObject, :resolveProxies=>false
+ ECore::EAnnotation.has_many 'references', ECore::EObject
+ ECore::EPackage.contains_many 'eClassifiers', ECore::EClassifier, 'ePackage'
+ ECore::EPackage.contains_many 'eSubpackages', ECore::EPackage, 'eSuperPackage'
+ ECore::ETypedElement.has_one 'eType', ECore::EClassifier
+ ECore::EClass.contains_many 'eOperations', ECore::EOperation, 'eContainingClass', :resolveProxies=>false
+ ECore::EClass.contains_many 'eStructuralFeatures', ECore::EStructuralFeature, 'eContainingClass', :resolveProxies=>false
+ ECore::EClass.has_many 'eSuperTypes', ECore::EClass
+ ECore::EEnum.contains_many 'eLiterals', ECore::EEnumLiteral, 'eEnum', :resolveProxies=>false
+ ECore::EFactory.one_to_one 'ePackage', ECore::EPackage, 'eFactoryInstance', :lowerBound=>1, :transient=>true, :resolveProxies=>false
+ ECore::EOperation.contains_many 'eParameters', ECore::EParameter, 'eOperation', :resolveProxies=>false
+ ECore::EOperation.has_many 'eExceptions', ECore::EClassifier
+ ECore::EReference.has_one 'eOpposite', ECore::EReference
+
+ ECore::EAttribute.has_one 'eAttributeType', ECore::EDataType, :lowerBound=>1, :derived=>true
+ ECore::EReference.has_one 'eReferenceType', ECore::EClass, :lowerBound=>1, :derived=>true
+ ECore::EParameter.contains_one 'eGenericType', ECore::EGenericType, 'eParameter'
+ ECore::EGenericType.contains_many 'eTypeArguments', ECore::ETypeArgument, 'eGenericType'
+
+end
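
Attributes declared with :derived=>true have no stored value; RGen routes their getters to the corresponding *_derived methods defined in the ClassModule blocks above. A short sketch of that mechanism, using an invented attribute name:

````
require 'rgen/ecore/ecore'

count = RGen::ECore::EAttribute.new(:name => "count",
                                    :upperBound => -1,
                                    :eType => RGen::ECore::EInt,
                                    :defaultValueLiteral => "0")

puts count.many           # => true, computed by many_derived from upperBound
puts count.defaultValue   # => 0, parsed by defaultValue_derived using eType
````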
diff --git a/lib/puppet/vendor/rgen/lib/rgen/ecore/ecore_builder_methods.rb b/lib/puppet/vendor/rgen/lib/rgen/ecore/ecore_builder_methods.rb
new file mode 100644
index 000000000..8d78415b0
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/rgen/ecore/ecore_builder_methods.rb
@@ -0,0 +1,81 @@
+module RGen
+
+module ECore
+
+module ECoreBuilderMethods
+ def eAttr(name, type, argHash={}, &block)
+ eAttribute(name, {:eType => type}.merge(argHash), &block)
+ end
+
+ def eRef(name, type, argHash={}, &block)
+ eReference(name, {:eType => type}.merge(argHash), &block)
+ end
+
+ # create bidirectional reference at once
+ def eBiRef(name, type, oppositeName, argHash={})
+ raise BuilderError.new("eOpposite attribute not allowed for bidirectional references") \
+ if argHash[:eOpposite] || argHash[:opposite_eOpposite]
+ eReference(name, {:eType => type}.merge(argHash.reject{|k,v| k.to_s =~ /^opposite_/})) do
+ eReference oppositeName, {:eContainingClass => type, :eType => _context(2),
+ :as => :eOpposite, :eOpposite => _context(1)}.
+ merge(Hash[*(argHash.select{|k,v| k.to_s =~ /^opposite_/}.
+ collect{|p| [p[0].to_s.sub(/^opposite_/,"").to_sym, p[1]]}.flatten)])
+ end
+ end
+
+ # reference shortcuts
+
+ alias references_1 eRef
+ alias references_one eRef
+
+ def references_N(name, type, argHash={})
+ eRef(name, type, {:upperBound => -1}.merge(argHash))
+ end
+ alias references_many references_N
+
+ def references_1to1(name, type, oppositeName, argHash={})
+ eBiRef(name, type, oppositeName, {:upperBound => 1, :opposite_upperBound => 1}.merge(argHash))
+ end
+ alias references_one_to_one references_1to1
+
+ def references_1toN(name, type, oppositeName, argHash={})
+ eBiRef(name, type, oppositeName, {:upperBound => -1, :opposite_upperBound => 1}.merge(argHash))
+ end
+ alias references_one_to_many references_1toN
+
+ def references_Nto1(name, type, oppositeName, argHash={})
+ eBiRef(name, type, oppositeName, {:upperBound => 1, :opposite_upperBound => -1}.merge(argHash))
+ end
+ alias references_many_to_one references_Nto1
+
+ def references_MtoN(name, type, oppositeName, argHash={})
+ eBiRef(name, type, oppositeName, {:upperBound => -1, :opposite_upperBound => -1}.merge(argHash))
+ end
+ alias references_many_to_many references_MtoN
+
+ # containment reference shortcuts
+
+ def contains_1(name, type, argHash={})
+ references_1(name, type, {:containment => true}.merge(argHash))
+ end
+ alias contains_one contains_1
+
+ def contains_N(name, type, argHash={})
+ references_N(name, type, {:containment => true}.merge(argHash))
+ end
+ alias contains_many contains_N
+
+ def contains_1to1(name, type, oppositeName, argHash={})
+ references_1to1(name, type, oppositeName, {:containment => true}.merge(argHash))
+ end
+ alias contains_one_to_one contains_1to1
+
+ def contains_1toN(name, type, oppositeName, argHash={})
+ references_1toN(name, type, oppositeName, {:containment => true}.merge(argHash))
+ end
+ alias contains_one_to_many contains_1toN
+end
+
+end
+
+end
\ No newline at end of file
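The shortcut methods above all bottom out in plain eAttribute/eReference builder commands. As a rough, illustrative sketch only (assuming the surrounding ModelBuilder DSL context that provides eAttribute and eReference, with part_class and node_class standing for previously defined EClass builder elements), the expansions work out like this:

  # references_1toN "parts", part_class, "whole"
  #   is equivalent to
  # eBiRef("parts", part_class, "whole", :upperBound => -1, :opposite_upperBound => 1)
  #
  # contains_many "children", node_class
  #   is equivalent to
  # eReference("children", :eType => node_class, :containment => true, :upperBound => -1)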
diff --git a/lib/puppet/vendor/rgen/lib/rgen/ecore/ecore_ext.rb b/lib/puppet/vendor/rgen/lib/rgen/ecore/ecore_ext.rb
new file mode 100644
index 000000000..8c3441389
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/rgen/ecore/ecore_ext.rb
@@ -0,0 +1,69 @@
+require 'rgen/array_extensions'
+require 'rgen/ecore/ecore'
+
+module RGen
+ module ECore
+
+ # make super type reference bidirectional
+ EClass.many_to_many 'eSuperTypes', ECore::EClass, 'eSubTypes'
+
+ module EModelElement::ClassModule
+
+ def annotationValue(source, tag)
+ detail = eAnnotations.select{ |a| a.source == source }.details.find{ |d| d.key == tag }
+ detail && detail.value
+ end
+
+ end
+
+ module EPackage::ClassModule
+
+ def qualifiedName
+ if eSuperPackage
+ eSuperPackage.qualifiedName+"::"+name
+ else
+ name
+ end
+ end
+
+ def eAllClassifiers
+ eClassifiers + eSubpackages.eAllClassifiers
+ end
+ def eAllSubpackages
+ eSubpackages + eSubpackages.eAllSubpackages
+ end
+
+ def eClasses
+ eClassifiers.select{|c| c.is_a?(ECore::EClass)}
+ end
+
+ def eAllClasses
+ eClasses + eSubpackages.eAllClasses
+ end
+
+ def eDataTypes
+ eClassifiers.select{|c| c.is_a?(ECore::EDataType)}
+ end
+
+ def eAllDataTypes
+ eDataTypes + eSubpackages.eAllDataTypes
+ end
+ end
+
+ module EClass::ClassModule
+
+ def qualifiedName
+ if ePackage
+ ePackage.qualifiedName+"::"+name
+ else
+ name
+ end
+ end
+
+ def eAllSubTypes
+ eSubTypes + eSubTypes.eAllSubTypes
+ end
+
+ end
+ end
+end
diff --git a/lib/puppet/vendor/rgen/lib/rgen/ecore/ecore_interface.rb b/lib/puppet/vendor/rgen/lib/rgen/ecore/ecore_interface.rb
new file mode 100644
index 000000000..89ec32547
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/rgen/ecore/ecore_interface.rb
@@ -0,0 +1,47 @@
+module RGen
+
+module ECore
+
+# Mixin to provide access to the ECore model describing a Ruby class or module
+# built using MetamodelBuilder.
+# The module should be used to +extend+ a class or module, i.e. to make its
+# methods class methods.
+#
+module ECoreInterface
+
+ # This method will lazily build the ECore model element belonging to the calling
+ # class or module using RubyToECore.
+ # Alternatively, the ECore model element can be provided up front. This is used
+ # when the Ruby metamodel classes and modules are created from ECore.
+ #
+ def ecore
+ if defined?(@ecore)
+ @ecore
+ else
+ unless defined?(@@transformer)
+ require 'rgen/ecore/ruby_to_ecore'
+ @@transformer = RubyToECore.new
+ end
+ @@transformer.trans(self)
+ end
+ end
+
+ # This method can be used to clear the ecore cache after the metamodel classes
+ # or modules have been changed; the ecore model will be recreated on next access
+ # to the +ecore+ method.
+ # Beware: the ecore cache is global, i.e. it is shared by all metamodels.
+ #
+ def self.clear_ecore_cache
+ require 'rgen/ecore/ruby_to_ecore'
+ @@transformer = RubyToECore.new
+ end
+
+ def _set_ecore_internal(ecore) # :nodoc:
+ @ecore = ecore
+ end
+
+end
+
+end
+
+end
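This mixin is what makes the +ecore+ class method available on metamodel classes. A minimal usage sketch, assuming the vendored rgen lib directory is on the load path (the Person class is invented for illustration):

  require 'rgen/metamodel_builder'

  class Person < RGen::MetamodelBuilder::MMBase
    has_attr 'name', String
  end

  Person.ecore                                      # lazily built RGen::ECore::EClass
  Person.ecore.name                                 # => "Person"
  Person.ecore.eStructuralFeatures.collect(&:name)  # => ["name"]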
diff --git a/lib/puppet/vendor/rgen/lib/rgen/ecore/ecore_to_ruby.rb b/lib/puppet/vendor/rgen/lib/rgen/ecore/ecore_to_ruby.rb
new file mode 100644
index 000000000..c69bd38ed
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/rgen/ecore/ecore_to_ruby.rb
@@ -0,0 +1,167 @@
+require 'rgen/ecore/ecore'
+
+module RGen
+
+module ECore
+
+class ECoreToRuby
+
+ def initialize
+ @modules = {}
+ @classifiers = {}
+ @features_added = {}
+ @in_create_module = false
+ end
+
+ def create_module(epackage)
+ return @modules[epackage] if @modules[epackage]
+
+ top = (@in_create_module == false)
+ @in_create_module = true
+
+ m = Module.new do
+ extend RGen::MetamodelBuilder::ModuleExtension
+ end
+ @modules[epackage] = m
+
+ epackage.eSubpackages.each{|p| create_module(p)}
+ m._set_ecore_internal(epackage)
+
+ create_module(epackage.eSuperPackage).const_set(epackage.name, m) if epackage.eSuperPackage
+
+ # create classes only after all modules have been created
+ # otherwise classes may be created multiple times
+ if top
+ epackage.eAllClassifiers.each do |c|
+ if c.is_a?(RGen::ECore::EClass)
+ create_class(c)
+ elsif c.is_a?(RGen::ECore::EEnum)
+ create_enum(c)
+ end
+ end
+ @in_create_module = false
+ end
+ m
+ end
+
+ def create_class(eclass)
+ return @classifiers[eclass] if @classifiers[eclass]
+
+ c = Class.new(super_class(eclass)) do
+ abstract if eclass.abstract
+ class << self
+ attr_accessor :_ecore_to_ruby
+ end
+ end
+ class << eclass
+ attr_accessor :instanceClass
+ def instanceClassName
+ instanceClass.to_s
+ end
+ end
+ eclass.instanceClass = c
+ c::ClassModule.module_eval do
+ alias _method_missing method_missing
+ def method_missing(m, *args)
+ if self.class._ecore_to_ruby.add_features(self.class.ecore)
+ send(m, *args)
+ else
+ _method_missing(m, *args)
+ end
+ end
+ alias _respond_to respond_to?
+ def respond_to?(m, include_all=false)
+ self.class._ecore_to_ruby.add_features(self.class.ecore)
+ _respond_to(m)
+ end
+ end
+ @classifiers[eclass] = c
+ c._set_ecore_internal(eclass)
+ c._ecore_to_ruby = self
+
+ create_module(eclass.ePackage).const_set(eclass.name, c)
+ c
+ end
+
+ def create_enum(eenum)
+ return @classifiers[eenum] if @classifiers[eenum]
+
+ e = RGen::MetamodelBuilder::DataTypes::Enum.new(eenum.eLiterals.collect{|l| l.name.to_sym})
+ @classifiers[eenum] = e
+
+ create_module(eenum.ePackage).const_set(eenum.name, e)
+ e
+ end
+
+ class FeatureWrapper
+ def initialize(efeature, classifiers)
+ @efeature = efeature
+ @classifiers = classifiers
+ end
+ def value(prop)
+ return false if prop == :containment && @efeature.is_a?(RGen::ECore::EAttribute)
+ @efeature.send(prop)
+ end
+ def many?
+ @efeature.many
+ end
+ def reference?
+ @efeature.is_a?(RGen::ECore::EReference)
+ end
+ def opposite
+ @efeature.eOpposite
+ end
+ def impl_type
+ etype = @efeature.eType
+ if etype.is_a?(RGen::ECore::EClass) || etype.is_a?(RGen::ECore::EEnum)
+ @classifiers[etype]
+ else
+ ic = etype.instanceClass
+ if ic
+ ic
+ else
+ raise "unknown type: #{etype.name}"
+ end
+ end
+ end
+ end
+
+ def add_features(eclass)
+ return false if @features_added[eclass]
+ c = @classifiers[eclass]
+ eclass.eStructuralFeatures.each do |f|
+ w1 = FeatureWrapper.new(f, @classifiers)
+ w2 = FeatureWrapper.new(f.eOpposite, @classifiers) if f.is_a?(RGen::ECore::EReference) && f.eOpposite
+ c.module_eval do
+ if w1.many?
+ _build_many_methods(w1, w2)
+ else
+ _build_one_methods(w1, w2)
+ end
+ end
+ end
+ @features_added[eclass] = true
+ eclass.eSuperTypes.each do |t|
+ add_features(t)
+ end
+ true
+ end
+
+ def super_class(eclass)
+ super_types = eclass.eSuperTypes
+ case super_types.size
+ when 0
+ RGen::MetamodelBuilder::MMBase
+ when 1
+ create_class(super_types.first)
+ else
+ RGen::MetamodelBuilder::MMMultiple(*super_types.collect{|t| create_class(t)})
+ end
+ end
+
+end
+
+end
+
+end
+
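Going the other direction, ECoreToRuby turns an ECore model into Ruby metamodel classes. A minimal sketch; the package and class names are made up, and normally the EPackage would be loaded from an .ecore file rather than built in code:

  require 'rgen/ecore/ecore'
  require 'rgen/ecore/ecore_ext'
  require 'rgen/ecore/ecore_to_ruby'

  pkg = RGen::ECore::EPackage.new(:name => "Library")
  RGen::ECore::EClass.new(:name => "Book", :ePackage => pkg)

  mod = RGen::ECore::ECoreToRuby.new.create_module(pkg)
  mod::Book.new    # an instance of the dynamically created Ruby class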
diff --git a/lib/puppet/vendor/rgen/lib/rgen/ecore/ruby_to_ecore.rb b/lib/puppet/vendor/rgen/lib/rgen/ecore/ruby_to_ecore.rb
new file mode 100644
index 000000000..841b72f89
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/rgen/ecore/ruby_to_ecore.rb
@@ -0,0 +1,91 @@
+require 'rgen/transformer'
+require 'rgen/ecore/ecore'
+
+module RGen
+
+module ECore
+
+# This transformer creates an ECore model from Ruby classes built
+# by RGen::MetamodelBuilder.
+#
+class RubyToECore < Transformer
+
+ transform Class, :to => EClass, :if => :convert? do
+ { :name => name.gsub(/.*::(\w+)$/,'\1'),
+ :abstract => _abstract_class,
+ :interface => false,
+ :eStructuralFeatures => trans(_metamodel_description),
+ :ePackage => trans(name =~ /(.*)::\w+$/ ? eval($1) : nil),
+ :eSuperTypes => trans(superclasses),
+ :instanceClassName => name,
+ :eAnnotations => trans(_annotations)
+ }
+ end
+
+ method :superclasses do
+ if superclass.respond_to?(:multiple_superclasses) && superclass.multiple_superclasses
+ superclass.multiple_superclasses
+ else
+ [ superclass ]
+ end
+ end
+
+ transform Module, :to => EPackage, :if => :convert? do
+ @enumParentModule ||= {}
+ _constants = _constantOrder + (constants - _constantOrder)
+ _constants.select {|c| const_get(c).is_a?(MetamodelBuilder::DataTypes::Enum)}.
+ each {|c| @enumParentModule[const_get(c)] = @current_object}
+ { :name => name.gsub(/.*::(\w+)$/,'\1'),
+ :eClassifiers => trans(_constants.collect{|c| const_get(c)}.select{|c| c.is_a?(Class) ||
+ (c.is_a?(MetamodelBuilder::DataTypes::Enum) && c != MetamodelBuilder::DataTypes::Boolean) }),
+ :eSuperPackage => trans(name =~ /(.*)::\w+$/ ? eval($1) : nil),
+ :eSubpackages => trans(_constants.collect{|c| const_get(c)}.select{|c| c.is_a?(Module) && !c.is_a?(Class)}),
+ :eAnnotations => trans(_annotations)
+ }
+ end
+
+ method :convert? do
+ @current_object.respond_to?(:ecore) && @current_object != RGen::MetamodelBuilder::MMBase
+ end
+
+ transform MetamodelBuilder::Intermediate::Attribute, :to => EAttribute do
+ Hash[*MetamodelBuilder::Intermediate::Attribute.properties.collect{|p| [p, value(p)]}.flatten].merge({
+ :eType => (etype == :EEnumerable ? trans(impl_type) : RGen::ECore.const_get(etype)),
+ :eAnnotations => trans(annotations)
+ })
+ end
+
+ transform MetamodelBuilder::Intermediate::Reference, :to => EReference do
+ Hash[*MetamodelBuilder::Intermediate::Reference.properties.collect{|p| [p, value(p)]}.flatten].merge({
+ :eType => trans(impl_type),
+ :eOpposite => trans(opposite),
+ :eAnnotations => trans(annotations)
+ })
+ end
+
+ transform MetamodelBuilder::Intermediate::Annotation, :to => EAnnotation do
+ { :source => source,
+ :details => details.keys.collect do |k|
+ e = RGen::ECore::EStringToStringMapEntry.new
+ e.key = k
+ e.value = details[k]
+ e
+ end
+ }
+ end
+
+ transform MetamodelBuilder::DataTypes::Enum, :to => EEnum do
+ { :name => name,
+ :instanceClassName => @enumParentModule && @enumParentModule[@current_object] && @enumParentModule[@current_object].name+"::"+name,
+ :eLiterals => literals.collect do |l|
+ lit = RGen::ECore::EEnumLiteral.new
+ lit.name = l.to_s
+ lit
+ end }
+ end
+
+end
+
+end
+
+end
diff --git a/lib/puppet/vendor/rgen/lib/rgen/environment.rb b/lib/puppet/vendor/rgen/lib/rgen/environment.rb
new file mode 100644
index 000000000..cf88bd4ea
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/rgen/environment.rb
@@ -0,0 +1,129 @@
+module RGen
+
+# An Environment is used to hold model elements.
+#
+class Environment
+
+ def initialize
+ @elements = {}
+ @subClasses = {}
+ @subClassesUpdated = {}
+ @deleted = {}
+ @deletedClasses = {}
+ end
+
+ # Add a model element. Returns the environment so <code><<</code> can be chained.
+ #
+ def <<(el)
+ clazz = el.class
+ @elements[clazz] ||= []
+ @elements[clazz] << el
+ updateSubClasses(clazz)
+ self
+ end
+
+ # Removes a model element from the environment.
+ def delete(el)
+ @deleted[el] = true
+ @deletedClasses[el.class] = true
+ end
+
+ # Iterates each element
+ #
+ def each(&b)
+ removeDeleted
+ @elements.values.flatten.each(&b)
+ end
+
+ # Return the elements of the environment as an array
+ #
+ def elements
+ removeDeleted
+ @elements.values.flatten
+ end
+
+ # This method can be used to instantiate a class and automatically put it into
+ # the environment. The new instance is returned.
+ #
+ def new(clazz, *args)
+ obj = clazz.new(*args)
+ self << obj
+ obj
+ end
+
+ # Finds and returns model elements in the environment.
+ #
+ # The search description argument must be a hash specifying attribute/value pairs.
+ # Only model elements are returned which respond to the specified attribute methods
+ # and return the specified values as result of these attribute methods.
+ #
+ # As a special hash key :class can be used to look for model elements of a specific
+ # class. In this case an array of possible classes can optionally be given.
+ #
+ def find(desc)
+ removeDeleted
+ result = []
+ classes = desc[:class] if desc[:class] and desc[:class].is_a?(Array)
+ classes = [ desc[:class] ] if !classes and desc[:class]
+ if classes
+ hashKeys = classesWithSubClasses(classes)
+ else
+ hashKeys = @elements.keys
+ end
+ hashKeys.each do |clazz|
+ next unless @elements[clazz]
+ @elements[clazz].each do |e|
+ failed = false
+ desc.each_pair { |k,v|
+ failed = true if k != :class and ( !e.respond_to?(k) or e.send(k) != v )
+ }
+ result << e unless failed
+ end
+ end
+ result
+ end
+
+ private
+
+ def removeDeleted
+ @deletedClasses.keys.each do |c|
+ @elements[c].reject!{|e| @deleted[e]}
+ end
+ @deletedClasses.clear
+ @deleted.clear
+ end
+
+ def updateSubClasses(clazz)
+ return if @subClassesUpdated[clazz]
+ if clazz.respond_to?( :ecore )
+ superClasses = clazz.ecore.eAllSuperTypes.collect{|c| c.instanceClass}
+ else
+ superClasses = superclasses(clazz)
+ end
+ superClasses.each do |c|
+ next if c == Object
+ @subClasses[c] ||= []
+ @subClasses[c] << clazz
+ end
+ @subClassesUpdated[clazz] = true
+ end
+
+ def classesWithSubClasses(classes)
+ result = classes
+ classes.each do |c|
+ result += @subClasses[c] if @subClasses[c]
+ end
+ result.uniq
+ end
+
+ def superclasses(clazz)
+ if clazz == Object
+ []
+ else
+ superclasses(clazz.superclass) << clazz.superclass
+ end
+ end
+
+end
+
+end
\ No newline at end of file
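A small usage sketch of Environment; it works with arbitrary Ruby objects, not only RGen metamodel elements (the require path assumes the vendored lib directory is on the load path):

  require 'rgen/environment'

  Person = Struct.new(:name)

  env = RGen::Environment.new
  env << Person.new("alice")            # add an existing element
  bob = env.new(Person, "bob")          # instantiate and add in one step

  env.find(:class => Person, :name => "bob")   # => [bob]
  env.elements.size                            # => 2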
diff --git a/lib/puppet/vendor/rgen/lib/rgen/fragment/dump_file_cache.rb b/lib/puppet/vendor/rgen/lib/rgen/fragment/dump_file_cache.rb
new file mode 100644
index 000000000..3963aeb2b
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/rgen/fragment/dump_file_cache.rb
@@ -0,0 +1,63 @@
+module RGen
+
+module Fragment
+
+# Caches model fragments in Ruby dump files.
+#
+# Dump files are created per fragment file.
+#
+# The main goal is to support fast loading and joining of fragments. Therefore the cache
+# stores additional information which makes the joining process faster (adding to
+# environment, resolving references)
+#
+class DumpFileCache
+
+ # +cache_map+ must be an object responding to +load_data+ and +store_data+
+ # for loading or storing data associated with a file;
+ # this can be an instance of Util::FileCacheMap
+ def initialize(cache_map)
+ @cache_map = cache_map
+ end
+
+ # Note that the fragment must not be connected to other fragments by resolved references;
+ # unresolve the fragment if necessary.
+ def store(fragment)
+ fref = fragment.fragment_ref
+ # temporarily remove the reference to the fragment to avoid dumping the fragment
+ fref.fragment = nil
+ @cache_map.store_data(fragment.location,
+ Marshal.dump({
+ :root_elements => fragment.root_elements,
+ :elements => fragment.elements,
+ :index => fragment.index,
+ :unresolved_refs => fragment.unresolved_refs,
+ :fragment_ref => fref,
+ :data => fragment.data
+ }))
+ fref.fragment = fragment
+ end
+
+ def load(fragment)
+ dump = @cache_map.load_data(fragment.location)
+ return :invalid if dump == :invalid
+ header = Marshal.load(dump)
+ fragment.set_root_elements(header[:root_elements],
+ :elements => header[:elements],
+ :index => header[:index],
+ :unresolved_refs => header[:unresolved_refs])
+ fragment.data = header[:data]
+ if header[:fragment_ref]
+ fragment.fragment_ref = header[:fragment_ref]
+ fragment.fragment_ref.fragment = fragment
+ else
+ raise "no fragment_ref in fragment loaded from cache"
+ end
+ end
+
+end
+
+end
+
+end
+
+
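A sketch of the load_data/store_data contract the cache map has to fulfil. The in-memory cache map below is only a stand-in for illustration (in real use an instance of Util::FileCacheMap would be passed), and +fragment+ is assumed to be an unresolved ModelFragment obtained elsewhere:

  require 'rgen/fragment/dump_file_cache'

  # minimal in-memory stand-in for Util::FileCacheMap; any object with this
  # interface can serve as the cache map
  class MemoryCacheMap
    def initialize; @data = {}; end
    def store_data(location, data); @data[location] = data; end
    def load_data(location); @data.fetch(location, :invalid); end
  end

  cache = RGen::Fragment::DumpFileCache.new(MemoryCacheMap.new)
  cache.store(fragment)   # 'fragment': an unresolved ModelFragment (assumption)
  cache.load(fragment)    # returns :invalid when nothing usable is cached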
diff --git a/lib/puppet/vendor/rgen/lib/rgen/fragment/fragmented_model.rb b/lib/puppet/vendor/rgen/lib/rgen/fragment/fragmented_model.rb
new file mode 100644
index 000000000..155c767ff
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/rgen/fragment/fragmented_model.rb
@@ -0,0 +1,140 @@
+require 'rgen/instantiator/reference_resolver'
+
+module RGen
+
+module Fragment
+
+# A FragmentedModel represents a model which consists of fragments (ModelFragment).
+#
+# The main purpose of this class is to resolve references across fragments and
+# to keep the references consistent while fragments are added or removed.
+# This way it also plays an important role in keeping the model fragments consistent
+# and thus ModelFragment objects should only be accessed via this interface.
+# Overall unresolved references after the resolution step are also maintained.
+#
+# A FragmentedModel can also keep an RGen::Environment object up to date while fragments
+# are added or removed. The environment must be registered with the constructor.
+#
+# Reference resolution is based on arbitrary identifiers. The identifiers must be
+# provided in the fragments' indices. The FragmentedModel takes care to maintain
+# the overall index.
+#
+class FragmentedModel
+ attr_reader :fragments
+ attr_reader :environment
+
+ # Creates a fragmented model. Options:
+ #
+ # :env
+ # environment which will be updated as model elements are added and removed
+ #
+ def initialize(options={})
+ @environment = options[:env]
+ @fragments = []
+ @index = nil
+ @fragment_change_listeners = []
+ @fragment_index = {}
+ end
+
+ # Adds a proc which is called when a fragment is added or removed
+ # The proc receives the fragment and one of :added, :removed
+ #
+ def add_fragment_change_listener(listener)
+ @fragment_change_listeners << listener
+ end
+
+ def remove_fragment_change_listener(listener)
+ @fragment_change_listeners.delete(listener)
+ end
+
+ # Add a fragment.
+ #
+ def add_fragment(fragment)
+ invalidate_cache
+ @fragments << fragment
+ fragment.elements.each{|e| @environment << e} if @environment
+ @fragment_change_listeners.each{|l| l.call(fragment, :added)}
+ end
+
+ # Removes the fragment. The fragment will be unresolved using unresolve_fragment.
+ #
+ def remove_fragment(fragment)
+ raise "fragment not part of model" unless @fragments.include?(fragment)
+ invalidate_cache
+ @fragments.delete(fragment)
+ @fragment_index.delete(fragment)
+ unresolve_fragment(fragment)
+ fragment.elements.each{|e| @environment.delete(e)} if @environment
+ @fragment_change_listeners.each{|l| l.call(fragment, :removed)}
+ end
+
+ # Resolve references between fragments.
+ # It is assumed that references within fragments have already been resolved.
+ # This method can be called several times. It will update the overall unresolved references.
+ #
+ # Options:
+ #
+ # :fragment_provider:
+ # Only if a +fragment_provider+ is given can the resolve step be reverted later on
+ # by a call to unresolve_fragment. The fragment provider is a proc which receives a model
+ # element and must return the fragment in which the element is contained.
+ #
+ # :use_target_type:
+ # reference resolver uses the expected target type to narrow the set of possible targets
+ #
+ def resolve(options={})
+ local_index = index
+ @fragments.each do |f|
+ f.resolve_external(local_index, options)
+ end
+ end
+
+ # Remove all references between this fragment and all other fragments.
+ # The references will be replaced with unresolved references (MMProxy objects).
+ #
+ def unresolve_fragment(fragment)
+ fragment.unresolve_external
+ @fragments.each do |f|
+ if f != fragment
+ f.unresolve_external_fragment(fragment)
+ end
+ end
+ end
+
+ # Returns the overall unresolved references.
+ #
+ def unresolved_refs
+ @fragments.collect{|f| f.unresolved_refs}.flatten
+ end
+
+ # Returns the overall index.
+ # This is a Hash mapping identifiers to model elements accessible via the identifier.
+ #
+ def index
+ fragments.each do |f|
+ if !@fragment_index[f] || (@fragment_index[f].object_id != f.index.object_id)
+ @fragment_index[f] = f.index
+ invalidate_cache
+ end
+ end
+ return @index if @index
+ @index = {}
+ fragments.each do |f|
+ f.index.each do |i|
+ (@index[i[0]] ||= []) << i[1]
+ end
+ end
+ @index
+ end
+
+ private
+
+ def invalidate_cache
+ @index = nil
+ end
+
+end
+
+end
+
+end
diff --git a/lib/puppet/vendor/rgen/lib/rgen/fragment/model_fragment.rb b/lib/puppet/vendor/rgen/lib/rgen/fragment/model_fragment.rb
new file mode 100644
index 000000000..4e79f6126
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/rgen/fragment/model_fragment.rb
@@ -0,0 +1,289 @@
+require 'rgen/instantiator/reference_resolver'
+
+module RGen
+
+module Fragment
+
+# A model fragment is a list of root model elements associated with a location (e.g. a file).
+# It also stores a list of unresolved references as well as a list of unresolved references
+# which have been resolved. Using the latter, a fragment can undo reference resolution.
+#
+# Optionally, an arbitrary data object may be associated with the fragment. The data object
+# will also be stored in the cache.
+#
+# If an element within the fragment changes this must be indicated to the fragment by calling
+# +mark_changed+.
+#
+# Note: the fragment knows how to resolve references (+resolve_local+, +resolve_external+).
+# However considering a fragment a data structure, this functionality might be removed in the
+# future. Instead the fragment should be told about each resolution taking place. Use
+# method +mark_resolved+ for this purpose.
+#
+class ModelFragment
+ attr_reader :root_elements
+ attr_accessor :location, :fragment_ref, :data
+
+ # A FragmentRef serves as a single target object for elements which need to reference the
+ # fragment they are contained in. The FragmentRef references the fragment it is contained in.
+ # The FragmentRef is separate from the fragment itself to allow storing it in a marshal dump
+ # independently of the fragment.
+ #
+ class FragmentRef
+ attr_accessor :fragment
+ end
+
+ # A ResolvedReference wraps an unresolved reference after it has been resolved.
+ # It also holds the target element to which it has been resolved, i.e. with which the proxy
+ # object has been replaced.
+ #
+ class ResolvedReference
+ attr_reader :uref, :target
+ def initialize(uref, target)
+ @uref, @target = uref, target
+ end
+ end
+
+ # Create a model fragment
+ #
+ # :data
+ # data object associated with this fragment
+ #
+ # :identifier_provider
+ # identifier provider to be used when resolving references;
+ # it must be a proc which receives a model element and must return
+ # that element's identifier or nil if the element has no identifier
+ #
+ def initialize(location, options={})
+ @location = location
+ @fragment_ref = FragmentRef.new
+ @fragment_ref.fragment = self
+ @data = options[:data]
+ @resolved_refs = nil
+ @changed = false
+ @identifier_provider = options[:identifier_provider]
+ end
+
+ # Set the root elements, normally done by an instantiator.
+ #
+ # For optimization reasons the instantiator of the fragment may provide data explicitly which
+ # is normally derived by the fragment itself. In this case it is essential that this
+ # data is consistent with the fragment.
+ #
+ def set_root_elements(root_elements, options={})
+ @root_elements = root_elements
+ @elements = options[:elements]
+ @index = options[:index]
+ @unresolved_refs = options[:unresolved_refs]
+ @resolved_refs = nil
+ # new unresolved refs, reset removed_urefs
+ @removed_urefs = nil
+ @changed = false
+ end
+
+ # Must be called when any of the elements in this fragment has been changed
+ #
+ def mark_changed
+ @changed = true
+ @elements = nil
+ @index = nil
+ @unresolved_refs = nil
+ # unresolved refs will be recalculated, no need to keep removed_urefs
+ @removed_urefs = nil
+ @resolved_refs = :dirty
+ end
+
+ # Can be used to reset the change status to unchanged.
+ #
+ def mark_unchanged
+ @changed = false
+ end
+
+ # Indicates whether the fragment has been changed or not
+ #
+ def changed?
+ @changed
+ end
+
+ # Returns all elements within this fragment
+ #
+ def elements
+ return @elements if @elements
+ @elements = []
+ @root_elements.each do |e|
+ @elements << e
+ @elements.concat(e.eAllContents)
+ end
+ @elements
+ end
+
+ # Returns the index of the elements contained in this fragment.
+ #
+ def index
+ build_index unless @index
+ @index
+ end
+
+ # Returns all unresolved references within this fragment, i.e. references to MMProxy objects
+ #
+ def unresolved_refs
+ @unresolved_refs ||= collect_unresolved_refs
+ if @removed_urefs
+ @unresolved_refs -= @removed_urefs
+ @removed_urefs = nil
+ end
+ @unresolved_refs
+ end
+
+ # Builds the index of all elements within this fragment having an identifier.
+ # The index is an array of 2-element arrays holding the identifier and the element.
+ #
+ def build_index
+ raise "cannot build index without an identifier provider" unless @identifier_provider
+ @index = elements.collect { |e|
+ ident = @identifier_provider.call(e, nil)
+ ident && !ident.empty? ? [ident, e] : nil
+ }.compact
+ end
+
+ # Resolves local references (within this fragment) as far as possible
+ #
+ # Options:
+ #
+ # :use_target_type:
+ # reference resolver uses the expected target type to narrow the set of possible targets
+ #
+ def resolve_local(options={})
+ resolver = RGen::Instantiator::ReferenceResolver.new
+ index.each do |i|
+ resolver.add_identifier(i[0], i[1])
+ end
+ @unresolved_refs = resolver.resolve(unresolved_refs, :use_target_type => options[:use_target_type])
+ end
+
+ # Resolves references to external fragments using the external_index provided.
+ # The external index must be a Hash mapping identifiers uniquely to model elements.
+ #
+ # Options:
+ #
+ # :fragment_provider:
+ # If a +fragment_provider+ is given, the resolve step can be reverted later on
+ # by a call to unresolve_external or unresolve_external_fragment. The fragment provider
+ # is a proc which receives a model element and must return the fragment in which it is
+ # contained.
+ #
+ # :use_target_type:
+ # reference resolver uses the expected target type to narrow the set of possible targets
+ #
+ #
+ def resolve_external(external_index, options)
+ fragment_provider = options[:fragment_provider]
+ resolver = RGen::Instantiator::ReferenceResolver.new(
+ :identifier_resolver => proc {|ident| external_index[ident] })
+ if fragment_provider
+ @resolved_refs = {} if @resolved_refs.nil? || @resolved_refs == :dirty
+ on_resolve = proc { |ur, target|
+ target_fragment = fragment_provider.call(target)
+ target_fragment ||= :unknown
+ raise "can not resolve local reference in resolve_external, call resolve_local first" \
+ if target_fragment == self
+ @resolved_refs[target_fragment] ||= []
+ @resolved_refs[target_fragment] << ResolvedReference.new(ur, target)
+ }
+ @unresolved_refs = resolver.resolve(unresolved_refs, :on_resolve => on_resolve, :use_target_type => options[:use_target_type])
+ else
+ @unresolved_refs = resolver.resolve(unresolved_refs, :use_target_type => options[:use_target_type])
+ end
+ end
+
+ # Marks a particular unresolved reference +uref+ as resolved to +target+ in +target_fragment+.
+ #
+ def mark_resolved(uref, target_fragment, target)
+ @resolved_refs = {} if @resolved_refs.nil? || @resolved_refs == :dirty
+ target_fragment ||= :unknown
+ if target_fragment != self
+ @resolved_refs[target_fragment] ||= []
+ @resolved_refs[target_fragment] << ResolvedReference.new(uref, target)
+ end
+ @removed_urefs ||= []
+ @removed_urefs << uref
+ end
+
+ # Unresolve outgoing references to all external fragments, i.e. references which used to
+ # be represented by an unresolved reference from within this fragment.
+ # Note that there may be more references to external fragments due to references which
+ # were represented by unresolved references from within other fragments.
+ #
+ def unresolve_external
+ return if @resolved_refs.nil?
+ raise "can not unresolve, missing fragment information" if @resolved_refs == :dirty || @resolved_refs[:unknown]
+ rrefs = @resolved_refs.values.flatten
+ @resolved_refs = {}
+ unresolve_refs(rrefs)
+ end
+
+ # Like unresolve_external but only unresolve references to external fragment +fragment+
+ #
+ def unresolve_external_fragment(fragment)
+ return if @resolved_refs.nil?
+ raise "can not unresolve, missing fragment information" if @resolved_refs == :dirty || @resolved_refs[:unknown]
+ rrefs = @resolved_refs[fragment]
+ @resolved_refs.delete(fragment)
+ unresolve_refs(rrefs) if rrefs
+ end
+
+ private
+
+ # Turns resolved references +rrefs+ back into unresolved references
+ #
+ def unresolve_refs(rrefs)
+ # make sure any removed_urefs have been removed,
+ # otherwise they will be removed later even if this method actually re-added them
+ unresolved_refs
+ rrefs.each do |rr|
+ ur = rr.uref
+ refs = ur.element.getGeneric(ur.feature_name)
+ if refs.is_a?(Array)
+ index = refs.index(rr.target)
+ ur.element.removeGeneric(ur.feature_name, rr.target)
+ ur.element.addGeneric(ur.feature_name, ur.proxy, index)
+ else
+ ur.element.setGeneric(ur.feature_name, ur.proxy)
+ end
+ @unresolved_refs << ur
+ end
+ end
+
+ def collect_unresolved_refs
+ unresolved_refs = []
+ elements.each do |e|
+ each_reference_target(e) do |r, t|
+ if t.is_a?(RGen::MetamodelBuilder::MMProxy)
+ unresolved_refs <<
+ RGen::Instantiator::ReferenceResolver::UnresolvedReference.new(e, r.name, t)
+ end
+ end
+ end
+ unresolved_refs
+ end
+
+ def each_reference_target(element)
+ non_containment_references(element.class).each do |r|
+ element.getGenericAsArray(r.name).each do |t|
+ yield(r, t)
+ end
+ end
+ end
+
+ def non_containment_references(clazz)
+ @@non_containment_references_cache ||= {}
+ @@non_containment_references_cache[clazz] ||=
+ clazz.ecore.eAllReferences.select{|r| !r.containment}
+ end
+
+end
+
+end
+
+end
+
+
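Putting the two fragment classes together, a minimal end-to-end sketch (the Node metamodel class and the name-based identifier scheme are invented for illustration):

  require 'rgen/metamodel_builder'
  require 'rgen/environment'
  require 'rgen/fragment/fragmented_model'
  require 'rgen/fragment/model_fragment'

  class Node < RGen::MetamodelBuilder::MMBase
    has_attr 'name', String
  end

  env   = RGen::Environment.new
  model = RGen::Fragment::FragmentedModel.new(:env => env)

  # identifiers are simply the element names here
  ident = proc { |element, context| element.name if element.respond_to?(:name) }

  fragment = RGen::Fragment::ModelFragment.new("nodes.rb", :identifier_provider => ident)
  fragment.set_root_elements([Node.new(:name => "a"), Node.new(:name => "b")])
  fragment.resolve_local

  model.add_fragment(fragment)
  model.resolve              # resolve references across all fragments
  model.unresolved_refs      # => [] for this trivial model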
diff --git a/lib/puppet/vendor/rgen/lib/rgen/instantiator/abstract_instantiator.rb b/lib/puppet/vendor/rgen/lib/rgen/instantiator/abstract_instantiator.rb
new file mode 100644
index 000000000..8329a3851
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/rgen/instantiator/abstract_instantiator.rb
@@ -0,0 +1,66 @@
+module RGen
+
+module Instantiator
+
+class AbstractInstantiator
+
+ ResolverDescription = Struct.new(:from, :attribute, :block) # :nodoc:
+
+ class << self
+ attr_accessor :resolver_descs
+ end
+
+ def initialize(env)
+ @env = env
+ end
+
+ # Specifies that +attribute+ should be resolved. If +:class+ is specified,
+ # resolve +attribute+ only for objects of type class.
+ # The block must return the value to which the attribute should be assigned.
+ # The object for which the attribute is to be resolved will be accessible
+ # in the current context within the block.
+ #
+ def self.resolve(attribute, desc=nil, &block)
+ from = (desc.is_a?(Hash) && desc[:class])
+ self.resolver_descs ||= []
+ self.resolver_descs << ResolverDescription.new(from, attribute, block)
+ end
+
+ # Resolves +attribute+ to a model element which has attribute +:id+ set to the
+ # value currently in attribute +:src+
+ #
+ def self.resolve_by_id(attribute, desc)
+ id_attr = (desc.is_a?(Hash) && desc[:id])
+ src_attr = (desc.is_a?(Hash) && desc[:src])
+ raise StandardError.new("No id attribute given.") unless id_attr
+ resolve(attribute) do
+ @env.find(id_attr => @current_object.send(src_attr)).first
+ end
+ end
+
+ private
+
+ def method_missing(m, *args) #:nodoc:
+ if @current_object
+ @current_object.send(m)
+ else
+ super
+ end
+ end
+
+ def resolve
+ self.class.resolver_descs ||= []
+ self.class.resolver_descs.each { |desc|
+ @env.find(:class => desc.from).each { |e|
+ old_object, @current_object = @current_object, e
+ e.send("#{desc.attribute}=", instance_eval(&desc.block)) if e.respond_to?("#{desc.attribute}=")
+ @current_object = old_object
+ }
+ }
+ end
+
+end
+
+end
+
+end
\ No newline at end of file
diff --git a/lib/puppet/vendor/rgen/lib/rgen/instantiator/abstract_xml_instantiator.rb b/lib/puppet/vendor/rgen/lib/rgen/instantiator/abstract_xml_instantiator.rb
new file mode 100644
index 000000000..441950e73
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/rgen/instantiator/abstract_xml_instantiator.rb
@@ -0,0 +1,66 @@
+require 'nokogiri'
+
+class AbstractXMLInstantiator
+
+ class Visitor < Nokogiri::XML::SAX::Document
+
+ def initialize(inst, gcSuspendCount)
+ @instantiator = inst
+ @gcSuspendCount = gcSuspendCount
+ @namespaces = {}
+ end
+
+ def start_element_namespace(tag, attributes, prefix, uri, ns)
+ controlGC
+ ns.each{|n| @namespaces[n[0]] = n[1]}
+ attrs = attributes.collect{|a| [a.prefix ? a.prefix+":"+a.localname : a.localname, a.value]}
+ @instantiator.start_tag(prefix, tag, @namespaces, Hash[*(attrs.flatten)])
+ attrs.each { |pair| @instantiator.set_attribute(pair[0], pair[1]) }
+ end
+
+ def end_element_namespace(tag, prefix, uri)
+ @instantiator.end_tag(prefix, tag)
+ end
+
+ def characters(str)
+ @instantiator.text(str)
+ end
+
+ def controlGC
+ return unless @gcSuspendCount > 0
+ @gcCounter ||= 0
+ @gcCounter += 1
+ if @gcCounter == @gcSuspendCount
+ @gcCounter = 0
+ GC.enable
+ ObjectSpace.garbage_collect
+ GC.disable
+ end
+ end
+ end
+
+ # Parses str and calls start_tag, end_tag, set_attribute and text methods of a subclass.
+ #
+ # If gcSuspendCount is specified, the garbage collector will be disabled for that
+ # number of start or end tags. After that period it will clean up and then be disabled again.
+ # A value of about 1000 can significantly improve overall performance.
+ # The memory usage normally does not increase.
+ # Depending on the work done for every xml tag the value might have to be adjusted.
+ #
+ def instantiate(str, gcSuspendCount=0)
+ gcDisabledBefore = GC.disable
+ gcSuspendCount = 0 if gcDisabledBefore
+ begin
+ visitor = Visitor.new(self, gcSuspendCount)
+ parser = Nokogiri::XML::SAX::Parser.new(visitor)
+ parser.parse(str) do |ctx|
+ @parserContext = ctx
+ end
+ ensure
+ GC.enable unless gcDisabledBefore
+ end
+ end
+
+ def text(str)
+ end
+end
diff --git a/lib/puppet/vendor/rgen/lib/rgen/instantiator/default_xml_instantiator.rb b/lib/puppet/vendor/rgen/lib/rgen/instantiator/default_xml_instantiator.rb
new file mode 100644
index 000000000..e8d2571f8
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/rgen/instantiator/default_xml_instantiator.rb
@@ -0,0 +1,117 @@
+require 'rgen/instantiator/nodebased_xml_instantiator'
+
+module RGen
+
+module Instantiator
+
+# A default XML instantiator.
+# Derive your own instantiator from this class or use it as is.
+#
+class DefaultXMLInstantiator < NodebasedXMLInstantiator
+ include Util::NameHelper
+
+ NamespaceDescriptor = Struct.new(:prefix, :target)
+
+ class << self
+
+ def map_tag_ns(from, to, prefix="")
+ tag_ns_map[from] = NamespaceDescriptor.new(prefix, to)
+ end
+
+ def tag_ns_map # :nodoc:
+ @tag_ns_map ||={}
+ @tag_ns_map
+ end
+
+ end
+
+ def initialize(env, default_module, create_mm=false)
+ super(env)
+ @default_module = default_module
+ @create_mm = create_mm
+ end
+
+ def on_descent(node)
+ obj = new_object(node)
+ @env << obj unless obj.nil?
+ node.object = obj
+ node.attributes.each_pair { |k,v| set_attribute(node, k, v) }
+ end
+
+ def on_ascent(node)
+ node.children.each { |c| assoc_p2c(node, c) }
+ node.object.class.has_attr 'chardata', Object unless node.object.respond_to?(:chardata)
+ set_attribute(node, "chardata", node.chardata)
+ end
+
+ def class_name(str)
+ saneClassName(str)
+ end
+
+ def new_object(node)
+ ns_desc = self.class.tag_ns_map[node.namespace]
+ class_name = class_name(ns_desc.nil? ? node.qtag : ns_desc.prefix+node.tag)
+ mod = (ns_desc && ns_desc.target) || @default_module
+ build_on_error(NameError, :build_class, class_name, mod) do
+ mod.const_get(class_name).new
+ end
+ end
+
+ def build_class(name, mod)
+ mod.const_set(name, Class.new(RGen::MetamodelBuilder::MMBase))
+ end
+
+ def method_name(str)
+ saneMethodName(str)
+ end
+
+ def assoc_p2c(parent, child)
+ return unless parent.object && child.object
+ method_name = method_name(className(child.object))
+ build_on_error(NoMethodError, :build_p2c_assoc, parent, child, method_name) do
+ parent.object.addGeneric(method_name, child.object)
+ child.object.setGeneric("parent", parent.object)
+ end
+ end
+
+ def build_p2c_assoc(parent, child, method_name)
+ parent.object.class.has_many(method_name, child.object.class)
+ child.object.class.has_one("parent", RGen::MetamodelBuilder::MMBase)
+ end
+
+ def set_attribute(node, attr, value)
+ return unless node.object
+ build_on_error(NoMethodError, :build_attribute, node, attr, value) do
+ node.object.setGeneric(method_name(attr), value)
+ end
+ end
+
+ def build_attribute(node, attr, value)
+ node.object.class.has_attr(method_name(attr))
+ end
+
+ protected
+
+ # Helper method for implementing classes.
+ # This method yields the given block.
+ # If the metamodel should be created automatically (see constructor), it
+ # rescues +error+, calls +builder_method+ with +args+, and then
+ # yields the block again.
+ def build_on_error(error, builder_method, *args)
+ begin
+ yield
+ rescue error
+ if @create_mm
+ send(builder_method, *args)
+ yield
+ else
+ raise
+ end
+ end
+ end
+
+end
+
+end
+
+end
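A small sketch of the create-metamodel-on-the-fly mode; the XML vocabulary and the MyMM module are invented for illustration:

  require 'rgen/environment'
  require 'rgen/instantiator/default_xml_instantiator'

  env = RGen::Environment.new
  module MyMM; end     # metamodel classes are created in this module on demand

  inst = RGen::Instantiator::DefaultXMLInstantiator.new(env, MyMM, true)
  inst.instantiate(<<-XML)
    <library>
      <book title="RGen in Practice"/>
    </library>
  XML

  env.find(:class => MyMM::Book).first.title    # => "RGen in Practice"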
diff --git a/lib/puppet/vendor/rgen/lib/rgen/instantiator/ecore_xml_instantiator.rb b/lib/puppet/vendor/rgen/lib/rgen/instantiator/ecore_xml_instantiator.rb
new file mode 100644
index 000000000..bc911afd4
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/rgen/instantiator/ecore_xml_instantiator.rb
@@ -0,0 +1,169 @@
+require 'rgen/ecore/ecore'
+require 'rgen/instantiator/abstract_xml_instantiator'
+require 'rgen/array_extensions'
+
+class ECoreXMLInstantiator < AbstractXMLInstantiator
+
+ include RGen::ECore
+
+ INFO = 0
+ WARN = 1
+ ERROR = 2
+
+ def initialize(env, loglevel=ERROR)
+ @env = env
+ @rolestack = []
+ @elementstack = []
+ @element_by_id = {}
+ @loglevel = loglevel
+ end
+
+ def start_tag(prefix, tag, namespaces, attributes)
+ eRef = nil
+ if @elementstack.last
+ eRef = eAllReferences(@elementstack.last).find{|r|r.name == tag}
+ if eRef
+ if attributes["xsi:type"] && attributes["xsi:type"] =~ /ecore:(\w+)/
+ class_name = $1
+ attributes.delete("xsi:type")
+ else
+ class_name = eRef.eType.name
+ end
+ else
+ raise "Reference not found: #{tag} on #{@elementstack.last}"
+ end
+ else
+ class_name = tag
+ end
+
+ eClass = RGen::ECore.ecore.eClassifiers.find{|c| c.name == class_name}
+ if eClass
+ obj = RGen::ECore.const_get(class_name).new
+ if attributes["xmi:id"]
+ @element_by_id[attributes["xmi:id"]] = obj
+ attributes.delete("xmi:id")
+ end
+ if eRef
+ if eRef.many
+ @elementstack.last.addGeneric(eRef.name, obj)
+ else
+ @elementstack.last.setGeneric(eRef.name, obj)
+ end
+ end
+ @env << obj
+ @elementstack.push obj
+ else
+ log WARN, "Class not found: #{class_name}"
+ @elementstack.push nil
+ end
+
+ attributes.each_pair do |attr, value|
+ set_attribute_internal(attr, value)
+ end
+ end
+
+ def end_tag(prefix, tag)
+ @elementstack.pop
+ end
+
+ ResolverDescription = Struct.new(:object, :attribute, :value)
+
+ def set_attribute(attr, value)
+ # do nothing, already handled by start_tag/set_attribute_internal
+ end
+
+ def set_attribute_internal(attr, value)
+ return unless @elementstack.last
+ eFeat = eAllStructuralFeatures(@elementstack.last).find{|a| a.name == attr}
+ if eFeat.is_a?(EReference)
+ rd = ResolverDescription.new
+ rd.object = @elementstack.last
+ rd.attribute = attr
+ rd.value = value
+ @resolver_descs << rd
+ elsif eFeat
+ value = true if value == "true" && eFeat.eType == EBoolean
+ value = false if value == "false" && eFeat.eType == EBoolean
+ value = value.to_i if eFeat.eType == EInt || eFeat.eType == ELong
+ @elementstack.last.setGeneric(attr, value)
+ else
+ log WARN, "Feature not found: #{attr} on #{@elementstack.last}"
+ end
+ end
+
+ def instantiate(str)
+ @resolver_descs = []
+# puts "Instantiating ..."
+ super(str, 1000)
+ rootpackage = @env.find(:class => EPackage).first
+# puts "Resolving ..."
+ @resolver_descs.each do |rd|
+ refed = find_referenced(rootpackage, rd.value)
+ feature = eAllStructuralFeatures(rd.object).find{|f| f.name == rd.attribute}
+ raise StandardError.new("StructuralFeature not found: #{rd.attribute}") unless feature
+ if feature.many
+ rd.object.setGeneric(feature.name, refed)
+ else
+ rd.object.setGeneric(feature.name, refed.first)
+ end
+ end
+ end
+
+ def eAllReferences(element)
+ @eAllReferences ||= {}
+ @eAllReferences[element.class] ||= element.class.ecore.eAllReferences
+ end
+
+ def eAllAttributes(element)
+ @eAllAttributes ||= {}
+ @eAllAttributes[element.class] ||= element.class.ecore.eAllAttributes
+ end
+
+ def eAllStructuralFeatures(element)
+ @eAllStructuralFeatures ||= {}
+ @eAllStructuralFeatures[element.class] ||= element.class.ecore.eAllStructuralFeatures
+ end
+
+ def find_referenced(context, desc)
+ desc.split(/\s+/).collect do |r|
+ if r =~ /^#([^\/]+)$/
+ @element_by_id[$1]
+ elsif r =~ /^#\/\d*\/([\w\/]+)/
+ find_in_context(context, $1.split('/'))
+ elsif r =~ /#\/\/(\w+)$/
+ case $1
+ when "EString"; RGen::ECore::EString
+ when "EInt"; RGen::ECore::EInt
+ when "ELong"; RGen::ECore::ELong
+ when "EBoolean"; RGen::ECore::EBoolean
+ when "EFloat"; RGen::ECore::EFloat
+ when "EJavaObject"; RGen::ECore::EJavaObject
+ when "EJavaClass"; RGen::ECore::EJavaClass
+ end
+ end
+ end.compact
+ end
+
+ def find_in_context(context, desc_elements)
+ if context.is_a?(EPackage)
+ r = (context.eClassifiers + context.eSubpackages).find{|c| c.name == desc_elements.first}
+ elsif context.is_a?(EClass)
+ r = context.eStructuralFeatures.find{|s| s.name == desc_elements.first}
+ else
+ raise StandardError.new("Don't know how to find #{desc_elements.join('/')} in context #{context}")
+ end
+ if r
+ if desc_elements.size > 1
+ find_in_context(r, desc_elements[1..-1])
+ else
+ r
+ end
+ else
+ log WARN, "Can not follow path, element #{desc_elements.first} not found within #{context}(#{context.name})"
+ end
+ end
+
+ def log(level, msg)
+ puts %w(INFO WARN ERROR)[level] + ": " + msg if level >= @loglevel
+ end
+end
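Typical use is loading an EMF .ecore file into an Environment; the file name below is a placeholder for any .ecore file:

  require 'rgen/environment'
  require 'rgen/instantiator/ecore_xml_instantiator'

  env = RGen::Environment.new
  ECoreXMLInstantiator.new(env).instantiate(File.read("metamodel.ecore"))

  root_package = env.find(:class => RGen::ECore::EPackage).first
  root_package.eClassifiers.collect(&:name)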
diff --git a/lib/puppet/vendor/rgen/lib/rgen/instantiator/json_instantiator.rb b/lib/puppet/vendor/rgen/lib/rgen/instantiator/json_instantiator.rb
new file mode 100644
index 000000000..1aa4d951e
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/rgen/instantiator/json_instantiator.rb
@@ -0,0 +1,126 @@
+require 'rgen/instantiator/qualified_name_resolver'
+require 'rgen/instantiator/json_parser'
+
+module RGen
+
+module Instantiator
+
+# JsonInstantiator is used to create RGen models from JSON.
+#
+# Each JSON object needs to have an attribute "_class" which is used to find
+# the metamodel class to instantiate. The value of "_class" should be the
+# relative qualified class name within the root package as a string.
+#
+# If the option "short_class_names" is set to true, unqualified class names can be used.
+# In this case, metamodel classes are searched in the metamodel root package first.
+# If this search is not successful, all subpackages will be searched for the class name.
+#
+class JsonInstantiator
+
+ # Model elements will be created in environment +env+,
+ # classes are looked for in metamodel package module +mm+,
+ # +options+ include:
+ # short_class_names: if true, subpackages will be searched for unqualified class names (default: true)
+ # ignore_keys: an array of json object key names which are to be ignored (default: none)
+ #
+ # The options are also passed to the underlying QualifiedNameResolver.
+ #
+ def initialize(env, mm, options={})
+ @env = env
+ @mm = mm
+ @options = options
+ @short_class_names = !@options.has_key?(:short_class_names) || @options[:short_class_names]
+ @ignore_keys = @options[:ignore_keys] || []
+ @unresolvedReferences = []
+ @classes = {}
+ @classes_flat = {}
+ mm.ecore.eAllClasses.each do |c|
+ @classes[c.instanceClass.name.sub(mm.name+"::","")] = c
+ @classes_flat[c.name] = c
+ end
+ @parser = JsonParser.new(self)
+ end
+
+ # Creates the elements described by the json string +str+.
+ # Returns an array of ReferenceResolver::UnresolvedReference
+ # describing the references which could not be resolved
+ #
+ # Options:
+ # :root_elements: if an array is provided, it will be filled with the root elements
+ #
+ def instantiate(str, options={})
+ root = @parser.parse(str)
+ if options[:root_elements].is_a?(Array)
+ options[:root_elements].clear
+ root.each{|r| options[:root_elements] << r}
+ end
+ resolver = QualifiedNameResolver.new(root, @options)
+ resolver.resolveReferences(@unresolvedReferences)
+ end
+
+ def createObject(hash)
+ className = hash["_class"]
+ # hashes without a _class key are returned as is
+ return hash unless className
+ if @classes[className]
+ clazz = @classes[className].instanceClass
+ elsif @short_class_names && @classes_flat[className]
+ clazz = @classes_flat[className].instanceClass
+ else
+ raise "class not found: #{className}"
+ end
+ hash.delete("_class")
+ @ignore_keys.each do |k|
+ hash.delete(k)
+ end
+ urefs = []
+ hash.keys.each do |k|
+ f = eFeature(k, clazz)
+ hash[k] = [hash[k]] if f.many && !hash[k].is_a?(Array)
+ if f.is_a?(RGen::ECore::EReference) && !f.containment
+ if f.many
+ idents = hash[k]
+ hash[k] = idents.collect do |i|
+ proxy = RGen::MetamodelBuilder::MMProxy.new(i)
+ urefs << ReferenceResolver::UnresolvedReference.new(nil, k, proxy)
+ proxy
+ end
+ else
+ ident = hash[k]
+ ident = ident.first if ident.is_a?(Array)
+ proxy = RGen::MetamodelBuilder::MMProxy.new(ident)
+ hash[k] = proxy
+ urefs << ReferenceResolver::UnresolvedReference.new(nil, k, proxy)
+ end
+ elsif f.eType.is_a?(RGen::ECore::EEnum)
+ hash[k] = hash[k].to_sym
+ elsif f.eType.instanceClassName == "Float"
+ hash[k] = hash[k].to_f
+ end
+ end
+ obj = @env.new(clazz, hash)
+ urefs.each do |r|
+ r.element = obj
+ @unresolvedReferences << r
+ end
+ obj
+ end
+
+ private
+
+ def eFeature(name, clazz)
+ @eFeature ||= {}
+ @eFeature[clazz] ||= {}
+ unless @eFeature[clazz][name]
+ feature = clazz.ecore.eAllStructuralFeatures.find{|f| f.name == name}
+ raise "feature '#{name}' not found in class '#{clazz}'" unless feature
+ end
+ @eFeature[clazz][name] ||= feature
+ end
+
+end
+
+end
+
+end
+
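A minimal usage sketch; the MM package and its Person class are invented for illustration, and the vendored lib directory is assumed to be on the load path:

  require 'rgen/metamodel_builder'
  require 'rgen/ecore/ecore_ext'
  require 'rgen/environment'
  require 'rgen/instantiator/json_instantiator'

  module MM
    extend RGen::MetamodelBuilder::ModuleExtension
    class Person < RGen::MetamodelBuilder::MMBase
      has_attr 'name', String
    end
  end

  env  = RGen::Environment.new
  inst = RGen::Instantiator::JsonInstantiator.new(env, MM)
  inst.instantiate('[ {"_class": "Person", "name": "alice"} ]')   # returns any unresolved references

  env.find(:class => MM::Person).first.name    # => "alice"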
diff --git a/lib/puppet/vendor/rgen/lib/rgen/instantiator/json_parser.rb b/lib/puppet/vendor/rgen/lib/rgen/instantiator/json_parser.rb
new file mode 100644
index 000000000..02c04a0e8
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/rgen/instantiator/json_parser.rb
@@ -0,0 +1,331 @@
+#
+# DO NOT MODIFY!!!!
+# This file is automatically generated by racc 1.4.5
+# from racc grammer file "json_parser.y".
+#
+
+require 'racc/parser'
+
+
+
+module RGen
+
+module Instantiator
+
+
+class JsonParser < Racc::Parser
+
+module_eval <<'..end json_parser.y modeval..id3d5fb611e2', 'json_parser.y', 38
+
+ ParserToken = Struct.new(:line, :file, :value)
+
+ def initialize(instantiator)
+ @instantiator = instantiator
+ end
+
+ def parse(str, file=nil)
+ @q = []
+ line = 1
+
+ until str.empty?
+ case str
+ when /\A\n/
+ str = $'
+ line +=1
+ when /\A\s+/
+ str = $'
+ when /\A([-+]?\d+\.\d+)/
+ str = $'
+ @q << [:FLOAT, ParserToken.new(line, file, $1)]
+ when /\A([-+]?\d+)/
+ str = $'
+ @q << [:INTEGER, ParserToken.new(line, file, $1)]
+ when /\A"((?:[^"\\]|\\"|\\\\|\\[^"\\])*)"/
+ str = $'
+ sval = $1
+ sval.gsub!('\\\\','\\')
+ sval.gsub!('\\"','"')
+ sval.gsub!('\\n',"\n")
+ sval.gsub!('\\r',"\r")
+ sval.gsub!('\\t',"\t")
+ sval.gsub!('\\f',"\f")
+ sval.gsub!('\\b',"\b")
+ @q << [:STRING, ParserToken.new(line, file, sval)]
+ when /\A(\{|\}|\[|\]|,|:|true|false)/
+ str = $'
+ @q << [$1, ParserToken.new(line, file, $1)]
+ else
+ raise "parse error in line #{line} on "+str[0..20].inspect+"..."
+ end
+ end
+ @q.push [false, ParserToken.new(line, file, '$end')]
+ do_parse
+ end
+
+ def next_token
+ r = @q.shift
+ r
+ end
+
+..end json_parser.y modeval..id3d5fb611e2
+
+##### racc 1.4.5 generates ###
+
+racc_reduce_table = [
+ 0, 0, :racc_error,
+ 1, 14, :_reduce_1,
+ 3, 16, :_reduce_2,
+ 2, 16, :_reduce_3,
+ 1, 17, :_reduce_4,
+ 3, 17, :_reduce_5,
+ 3, 18, :_reduce_6,
+ 2, 18, :_reduce_7,
+ 1, 19, :_reduce_8,
+ 3, 19, :_reduce_9,
+ 3, 20, :_reduce_10,
+ 1, 15, :_reduce_11,
+ 1, 15, :_reduce_12,
+ 1, 15, :_reduce_13,
+ 1, 15, :_reduce_14,
+ 1, 15, :_reduce_15,
+ 1, 15, :_reduce_16,
+ 1, 15, :_reduce_17 ]
+
+racc_reduce_n = 18
+
+racc_shift_n = 29
+
+racc_action_table = [
+ 3, 16, 17, 7, 22, 8, 21, 10, 11, 1,
+ 2, 3, 12, 23, 7, 24, 8, 25, 10, 11,
+ 1, 2, 3, 20, 15, 7, 17, 8, nil, 10,
+ 11, 1, 2, 3, nil, nil, 7, nil, 8, nil,
+ 10, 11, 1, 2 ]
+
+racc_action_check = [
+ 0, 7, 7, 0, 15, 0, 14, 0, 0, 0,
+ 0, 3, 3, 17, 3, 18, 3, 19, 3, 3,
+ 3, 3, 20, 13, 4, 20, 25, 20, nil, 20,
+ 20, 20, 20, 23, nil, nil, 23, nil, 23, nil,
+ 23, 23, 23, 23 ]
+
+racc_action_pointer = [
+ -2, nil, nil, 9, 24, nil, nil, -5, nil, nil,
+ nil, nil, nil, 19, 3, 4, nil, 5, 9, 13,
+ 20, nil, nil, 31, nil, 19, nil, nil, nil ]
+
+racc_action_default = [
+ -18, -16, -17, -18, -18, -1, -11, -18, -13, -12,
+ -14, -15, -3, -4, -18, -18, -7, -18, -18, -8,
+ -18, -2, 29, -18, -6, -18, -5, -10, -9 ]
+
+racc_goto_table = [
+ 5, 18, 4, 14, nil, nil, nil, nil, nil, nil,
+ nil, nil, nil, nil, nil, nil, nil, nil, nil, 28,
+ 26, nil, nil, 27 ]
+
+racc_goto_check = [
+ 2, 6, 1, 4, nil, nil, nil, nil, nil, nil,
+ nil, nil, nil, nil, nil, nil, nil, nil, nil, 6,
+ 4, nil, nil, 2 ]
+
+racc_goto_pointer = [
+ nil, 2, 0, nil, 0, nil, -6, nil ]
+
+racc_goto_default = [
+ nil, nil, 13, 6, nil, 9, nil, 19 ]
+
+racc_token_table = {
+ false => 0,
+ Object.new => 1,
+ "[" => 2,
+ "]" => 3,
+ "," => 4,
+ "{" => 5,
+ "}" => 6,
+ :STRING => 7,
+ ":" => 8,
+ :INTEGER => 9,
+ :FLOAT => 10,
+ "true" => 11,
+ "false" => 12 }
+
+racc_use_result_var = true
+
+racc_nt_base = 13
+
+Racc_arg = [
+ racc_action_table,
+ racc_action_check,
+ racc_action_default,
+ racc_action_pointer,
+ racc_goto_table,
+ racc_goto_check,
+ racc_goto_default,
+ racc_goto_pointer,
+ racc_nt_base,
+ racc_reduce_table,
+ racc_token_table,
+ racc_shift_n,
+ racc_reduce_n,
+ racc_use_result_var ]
+
+Racc_token_to_s_table = [
+'$end',
+'error',
+'"["',
+'"]"',
+'","',
+'"{"',
+'"}"',
+'STRING',
+'":"',
+'INTEGER',
+'FLOAT',
+'"true"',
+'"false"',
+'$start',
+'json',
+'value',
+'array',
+'valueList',
+'object',
+'memberList',
+'member']
+
+Racc_debug_parser = false
+
+##### racc system variables end #####
+
+ # reduce 0 omitted
+
+module_eval <<'.,.,', 'json_parser.y', 4
+ def _reduce_1( val, _values, result )
+ result = val[0]
+ result
+ end
+.,.,
+
+module_eval <<'.,.,', 'json_parser.y', 6
+ def _reduce_2( val, _values, result )
+ result = val[1]
+ result
+ end
+.,.,
+
+module_eval <<'.,.,', 'json_parser.y', 7
+ def _reduce_3( val, _values, result )
+ result = []
+ result
+ end
+.,.,
+
+module_eval <<'.,.,', 'json_parser.y', 9
+ def _reduce_4( val, _values, result )
+ result = [ val[0] ]
+ result
+ end
+.,.,
+
+module_eval <<'.,.,', 'json_parser.y', 10
+ def _reduce_5( val, _values, result )
+ result = [ val[0] ] + val[2]
+ result
+ end
+.,.,
+
+module_eval <<'.,.,', 'json_parser.y', 12
+ def _reduce_6( val, _values, result )
+ result = @instantiator.createObject(val[1])
+ result
+ end
+.,.,
+
+module_eval <<'.,.,', 'json_parser.y', 13
+ def _reduce_7( val, _values, result )
+ result = nil
+ result
+ end
+.,.,
+
+module_eval <<'.,.,', 'json_parser.y', 15
+ def _reduce_8( val, _values, result )
+ result = val[0]
+ result
+ end
+.,.,
+
+module_eval <<'.,.,', 'json_parser.y', 16
+ def _reduce_9( val, _values, result )
+ result = val[0].merge(val[2])
+ result
+ end
+.,.,
+
+module_eval <<'.,.,', 'json_parser.y', 18
+ def _reduce_10( val, _values, result )
+ result = {val[0].value => val[2]}
+ result
+ end
+.,.,
+
+module_eval <<'.,.,', 'json_parser.y', 20
+ def _reduce_11( val, _values, result )
+ result = val[0]
+ result
+ end
+.,.,
+
+module_eval <<'.,.,', 'json_parser.y', 21
+ def _reduce_12( val, _values, result )
+ result = val[0]
+ result
+ end
+.,.,
+
+module_eval <<'.,.,', 'json_parser.y', 22
+ def _reduce_13( val, _values, result )
+ result = val[0].value
+ result
+ end
+.,.,
+
+module_eval <<'.,.,', 'json_parser.y', 23
+ def _reduce_14( val, _values, result )
+ result = val[0].value.to_i
+ result
+ end
+.,.,
+
+module_eval <<'.,.,', 'json_parser.y', 24
+ def _reduce_15( val, _values, result )
+ result = val[0].value.to_f
+ result
+ end
+.,.,
+
+module_eval <<'.,.,', 'json_parser.y', 25
+ def _reduce_16( val, _values, result )
+ result = true
+ result
+ end
+.,.,
+
+module_eval <<'.,.,', 'json_parser.y', 26
+ def _reduce_17( val, _values, result )
+ result = false
+ result
+ end
+.,.,
+
+ def _reduce_none( val, _values, result )
+ result
+ end
+
+end # class JsonParser
+
+
+end
+
+end
+
diff --git a/lib/puppet/vendor/rgen/lib/rgen/instantiator/json_parser.y b/lib/puppet/vendor/rgen/lib/rgen/instantiator/json_parser.y
new file mode 100644
index 000000000..7bdc46464
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/rgen/instantiator/json_parser.y
@@ -0,0 +1,94 @@
+class JsonParser
+
+rule
+
+ json: value { result = val[0] }
+
+ array: "[" valueList "]" { result = val[1] }
+ | "[" "]" { result = [] }
+
+ valueList: value { result = [ val[0] ] }
+ | value "," valueList { result = [ val[0] ] + val[2] }
+
+ object: "{" memberList "}" { result = @instantiator.createObject(val[1]) }
+ | "{" "}" { result = nil }
+
+ memberList: member { result = val[0] }
+ | member "," memberList { result = val[0].merge(val[2]) }
+
+ member: STRING ":" value { result = {val[0].value => val[2]} }
+
+ value: array { result = val[0] }
+ | object { result = val[0] }
+ | STRING { result = val[0].value }
+ | INTEGER { result = val[0].value.to_i }
+ | FLOAT { result = val[0].value.to_f }
+ | "true" { result = true }
+ | "false" { result = false }
+
+end
+
+---- header
+
+module RGen
+
+module Instantiator
+
+---- inner
+
+ ParserToken = Struct.new(:line, :file, :value)
+
+ def initialize(instantiator)
+ @instantiator = instantiator
+ end
+
+ def parse(str, file=nil)
+ @q = []
+ line = 1
+
+ until str.empty?
+ case str
+ when /\A\n/
+ str = $'
+ line +=1
+ when /\A\s+/
+ str = $'
+ when /\A([-+]?\d+\.\d+)/
+ str = $'
+ @q << [:FLOAT, ParserToken.new(line, file, $1)]
+ when /\A([-+]?\d+)/
+ str = $'
+ @q << [:INTEGER, ParserToken.new(line, file, $1)]
+ when /\A"((?:[^"\\]|\\"|\\\\|\\[^"\\])*)"/
+ str = $'
+ sval = $1
+ sval.gsub!('\\\\','\\')
+ sval.gsub!('\\"','"')
+ sval.gsub!('\\n',"\n")
+ sval.gsub!('\\r',"\r")
+ sval.gsub!('\\t',"\t")
+ sval.gsub!('\\f',"\f")
+ sval.gsub!('\\b',"\b")
+ @q << [:STRING, ParserToken.new(line, file, sval)]
+ when /\A(\{|\}|\[|\]|,|:|true|false)/
+ str = $'
+ @q << [$1, ParserToken.new(line, file, $1)]
+ else
+ raise "parse error in line #{line} on "+str[0..20].inspect+"..."
+ end
+ end
+ @q.push [false, ParserToken.new(line, file, '$end')]
+ do_parse
+ end
+
+ def next_token
+ r = @q.shift
+ r
+ end
+
+---- footer
+
+end
+
+end
+
diff --git a/lib/puppet/vendor/rgen/lib/rgen/instantiator/nodebased_xml_instantiator.rb b/lib/puppet/vendor/rgen/lib/rgen/instantiator/nodebased_xml_instantiator.rb
new file mode 100644
index 000000000..5abc05523
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/rgen/instantiator/nodebased_xml_instantiator.rb
@@ -0,0 +1,137 @@
+require 'rgen/metamodel_builder'
+require 'rgen/instantiator/abstract_instantiator'
+require 'nokogiri'
+
+module RGen
+
+module Instantiator
+
+class NodebasedXMLInstantiator < AbstractInstantiator
+
+ class << self
+
+ # The prune level is the number of parent/children associations which
+ # is kept when the instantiator ascends the XML tree.
+ # If the level is 2, information for the node's children and the children's
+ # children will be available as an XMLNodeDescriptor object.
+ # If the level is 0, no pruning will take place, i.e. all information
+ # is kept until the end of the instantiation process. The default is 0.
+ def set_prune_level(level)
+ @prune_level = level
+ end
+
+ def prune_level # :nodoc:
+ @prune_level ||= 0
+ end
+
+ end
+
+ class XMLNodeDescriptor
+ attr_reader :namespace, :qtag, :prefix, :tag, :parent, :attributes, :chardata
+ attr_accessor :object, :children
+
+ def initialize(ns, qtag, prefix, tag, parent, children, attributes)
+ @namespace, @qtag, @prefix, @tag, @parent, @children, @attributes =
+ ns, qtag, prefix, tag, parent, children, attributes
+ @parent.children << self if @parent
+ @chardata = []
+ end
+ end
+
+ class Visitor < Nokogiri::XML::SAX::Document
+ attr_reader :namespaces
+
+ def initialize(inst)
+ @instantiator = inst
+ @namespaces = {}
+ end
+
+ def start_element_namespace(tag, attributes, prefix, uri, ns)
+ ns.each{|n| @namespaces[n[0]] = n[1]}
+ attrs = {}
+ attributes.each{|a| attrs[a.prefix ? a.prefix+":"+a.localname : a.localname] = a.value}
+ qname = prefix ? prefix+":"+tag : tag
+ @instantiator.start_element(uri, qname, prefix, tag, attrs)
+ end
+
+ def end_element(name)
+ @instantiator.end_element
+ end
+
+ def characters(str)
+ @instantiator.on_chardata(str)
+ end
+ end
+
+ def initialize(env)
+ super
+ @env = env
+ @stack = []
+ end
+
+ def instantiate_file(file)
+ File.open(file) { |f| parse(f.read)}
+ resolve
+ end
+
+ def instantiate(text)
+ parse(text)
+ resolve
+ end
+
+ def parse(src)
+ @visitor = Visitor.new(self)
+ parser = Nokogiri::XML::SAX::Parser.new(@visitor)
+ parser.parse(src)
+ @visitor = nil
+ end
+
+ def start_element(ns, qtag, prefix, tag, attributes)
+ node = XMLNodeDescriptor.new(ns, qtag, prefix, tag, @stack[-1], [], attributes)
+ @stack.push node
+ on_descent(node)
+ end
+
+ def end_element
+ node = @stack.pop
+ on_ascent(node)
+ prune_children(node, self.class.prune_level - 1) if self.class.prune_level > 0
+ end
+
+ def on_chardata(str)
+ node = @stack.last
+ node.chardata << str
+ end
+
+ # This method is called when the XML parser goes down the tree.
+ # An XMLNodeDescriptor +node+ describes the current node.
+  # Implementing classes must override this method.
+ def on_descent(node)
+ raise "Overwrite this method !"
+ end
+
+ # This method is called when the XML parser goes up the tree.
+ # An XMLNodeDescriptor +node+ describes the current node.
+  # Implementing classes must override this method.
+ def on_ascent(node)
+ raise "Overwrite this method !"
+ end
+
+ def namespaces
+ @visitor.namespaces if @visitor
+ end
+
+ private
+
+ def prune_children(node, level)
+ if level == 0
+ node.children = nil
+ else
+ node.children.each { |c| prune_children(c, level-1) }
+ end
+ end
+end
+
+end
+
+end
diff --git a/lib/puppet/vendor/rgen/lib/rgen/instantiator/qualified_name_resolver.rb b/lib/puppet/vendor/rgen/lib/rgen/instantiator/qualified_name_resolver.rb
new file mode 100644
index 000000000..7fe1c7cc2
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/rgen/instantiator/qualified_name_resolver.rb
@@ -0,0 +1,97 @@
+require 'rgen/instantiator/reference_resolver'
+
+module RGen
+
+module Instantiator
+
+# This is a resolver resolving element identifiers which are qualified names.
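+#
+# A brief usage sketch (illustrative only; it assumes the root elements carry a 'name'
+# attribute and that +unresolvedReferences+ were collected elsewhere):
+#
+#   resolver = QualifiedNameResolver.new(rootElements, :separator => "::", :leadingSeparator => false)
+#   resolver.resolveReferences(unresolvedReferences)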
+class QualifiedNameResolver
+
+ attr_reader :nameAttribute
+ attr_reader :separator
+ attr_reader :leadingSeparator
+
+ def initialize(rootElements, options={})
+ @rootElements = rootElements
+ @nameAttribute = options[:nameAttribute] || "name"
+ @separator = options[:separator] || "/"
+ @leadingSeparator = options.has_key?(:leadingSeparator) ? options[:leadingSeparator] : true
+ @elementByQName = {}
+ @visitedQName = {}
+ @childReferences = {}
+ @resolverDelegate = ReferenceResolver.new(:identifier_resolver => method(:resolveIdentifier))
+ end
+
+ def resolveIdentifier(qualifiedName)
+ return @elementByQName[qualifiedName] if @elementByQName.has_key?(qualifiedName)
+ path = qualifiedName.split(separator).reject{|s| s == ""}
+ if path.size > 1
+ parentQName = (leadingSeparator ? separator : "") + path[0..-2].join(separator)
+ parents = resolveIdentifier(parentQName)
+ parents = [parents].compact unless parents.is_a?(Array)
+ children = parents.collect{|p| allNamedChildren(p)}.flatten
+ elsif path.size == 1
+ parentQName = ""
+ children = allRootNamedChildren
+ else
+ return @elementByQName[qualifiedName] = nil
+ end
+    # if the parent was already visited, all matching elements are in the hash
+ if !@visitedQName[parentQName]
+ children.each do |c|
+ name = c.send(nameAttribute)
+ if name
+ qname = parentQName + ((parentQName != "" || leadingSeparator) ? separator : "") + name
+ existing = @elementByQName[qname]
+ if existing
+ @elementByQName[qname] = [existing] unless existing.is_a?(Array)
+ @elementByQName[qname] << c
+ else
+ @elementByQName[qname] = c
+ end
+ end
+ end
+      # all named children of parent have been checked and hashed
+ @visitedQName[parentQName] = true
+ end
+ @elementByQName[qualifiedName] ||= nil
+ end
+
+ def resolveReferences(unresolvedReferences, problems=[])
+ @resolverDelegate.resolve(unresolvedReferences, :problems => problems)
+ end
+
+ private
+
+ def allNamedChildren(element)
+ childReferences(element.class).collect do |r|
+ element.getGenericAsArray(r.name).collect do |c|
+ if c.respond_to?(nameAttribute)
+ c
+ else
+ allNamedChildren(c)
+ end
+ end
+ end.flatten
+ end
+
+ def allRootNamedChildren
+ @rootElements.collect do |e|
+ if e.respond_to?(nameAttribute)
+ e
+ else
+ allNamedChildren(e)
+ end
+ end.flatten
+ end
+
+ def childReferences(clazz)
+ @childReferences[clazz] ||= clazz.ecore.eAllReferences.select{|r| r.containment}
+ end
+
+end
+
+end
+
+end
+
diff --git a/lib/puppet/vendor/rgen/lib/rgen/instantiator/reference_resolver.rb b/lib/puppet/vendor/rgen/lib/rgen/instantiator/reference_resolver.rb
new file mode 100644
index 000000000..87a4a834a
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/rgen/instantiator/reference_resolver.rb
@@ -0,0 +1,128 @@
+require 'rgen/instantiator/resolution_helper'
+
+module RGen
+
+module Instantiator
+
+# The ReferenceResolver can be used to resolve unresolved references, i.e. instances
+# of class UnresolvedReference
+#
+# There are two ways this can be used:
+# 1. add the identifiers and associated model elements upfront using +add_identifier+
+# 2. register an :identifier_resolver with the constructor, which will be invoked
+# for every unresolved identifier
+#
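+# A minimal sketch of the first variant (identifier and variable names are made up
+# for illustration):
+#
+#   resolver = ReferenceResolver.new
+#   resolver.add_identifier("someId", some_element)
+#   remaining = resolver.resolve(unresolved_refs, :problems => problems)
+#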
+class ReferenceResolver
+
+ # Instances of this class represent information about not yet resolved references.
+  # This consists of the +element+ and metamodel +feature_name+ which holds or is to hold the
+  # reference, and the +proxy+ object which is the placeholder for the reference.
+ # If the reference could not be resolved because the target type does not match the
+ # feature type, the flag +target_type_error+ will be set.
+ #
+ class UnresolvedReference
+ attr_reader :feature_name, :proxy
+ attr_accessor :element, :target_type_error
+ def initialize(element, feature_name, proxy)
+ @element = element
+ @feature_name = feature_name
+ @proxy = proxy
+ end
+ end
+
+ # Create a reference resolver, options:
+ #
+ # :identifier_resolver:
+  #   a proc which is called with an identifier and which should return the associated element;
+  #   in case the identifier is not unique, the proc may return multiple values
+ # default: lookup element in internal map
+ #
+ def initialize(options={})
+ @identifier_resolver = options[:identifier_resolver]
+ @identifier_map = {}
+ end
+
+  # Add an +identifier+ / +element+ pair which will be used for looking up unresolved identifiers
+ def add_identifier(ident, element)
+ map_entry = @identifier_map[ident]
+ if map_entry
+ if map_entry.is_a?(Array)
+ map_entry << element
+ else
+ @identifier_map[ident] = [map_entry, element]
+ end
+ else
+ @identifier_map[ident] = element
+ end
+ end
+
+ # Tries to resolve the given +unresolved_refs+. If resolution is successful, the proxy object
+ # will be removed, otherwise there will be an error description in the problems array.
+ # In case the resolved target element's type is not valid for the given feature, the
+ # +target_type_error+ flag will be set on the unresolved reference.
+ # Returns an array of the references which are still unresolved. Options:
+ #
+ # :problems
+ # an array to which problems will be appended
+ #
+ # :on_resolve
+  #   a proc which will be called for every successful resolution; it receives the unresolved
+  #   reference as well as the new target element
+ #
+ # :use_target_type
+ # use the expected target type to narrow the set of possible targets
+ # (i.e. ignore targets with wrong type)
+ #
+ # :failed_resolutions
+ # a Hash which will receive an entry for each failed resolution for which at least one
+ # target element was found (wrong target type, or target not unique).
+  #   the hash key is the uref, the hash value is the target element or the Array of target elements
+ #
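+  # An illustrative call (the option values here are chosen for demonstration only):
+  #
+  #   problems = []
+  #   remaining = resolver.resolve(unresolved_refs,
+  #     :problems => problems,
+  #     :use_target_type => true,
+  #     :on_resolve => lambda {|uref, target| puts "resolved #{uref.feature_name}"})
+  #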
+ def resolve(unresolved_refs, options={})
+ problems = options[:problems] || []
+ still_unresolved_refs = []
+ failed_resolutions = options[:failed_resolutions] || {}
+ unresolved_refs.each do |ur|
+ if @identifier_resolver
+ target = @identifier_resolver.call(ur.proxy.targetIdentifier)
+ else
+ target = @identifier_map[ur.proxy.targetIdentifier]
+ end
+ target = [target].compact unless target.is_a?(Array)
+ if options[:use_target_type]
+ feature = ur.element.class.ecore.eAllReferences.find{|r| r.name == ur.feature_name}
+ target = target.select{|e| e.is_a?(feature.eType.instanceClass)}
+ end
+ if target.size == 1
+ status = ResolutionHelper.set_uref_target(ur, target[0])
+ if status == :success
+ options[:on_resolve] && options[:on_resolve].call(ur, target[0])
+ elsif status == :type_error
+ ur.target_type_error = true
+ problems << type_error_message(target[0])
+ still_unresolved_refs << ur
+ failed_resolutions[ur] = target[0]
+ end
+ elsif target.size > 1
+ problems << "identifier #{ur.proxy.targetIdentifier} not uniq"
+ still_unresolved_refs << ur
+ failed_resolutions[ur] = target
+ else
+ problems << "identifier #{ur.proxy.targetIdentifier} not found"
+ still_unresolved_refs << ur
+ end
+ end
+ still_unresolved_refs
+ end
+
+ private
+
+ def type_error_message(target)
+ "invalid target type #{target.class}"
+ end
+
+end
+
+end
+
+end
diff --git a/lib/puppet/vendor/rgen/lib/rgen/instantiator/resolution_helper.rb b/lib/puppet/vendor/rgen/lib/rgen/instantiator/resolution_helper.rb
new file mode 100644
index 000000000..5452fec4c
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/rgen/instantiator/resolution_helper.rb
@@ -0,0 +1,47 @@
+module RGen
+module Instantiator
+
+module ResolutionHelper
+
+# sets the target of an unresolved reference in the model
+# returns :type_error if the target is of wrong type, otherwise :success
+#
+def self.set_uref_target(uref, target)
+ refs = uref.element.getGeneric(uref.feature_name)
+ if refs.is_a?(Array)
+ index = refs.index(uref.proxy)
+ uref.element.removeGeneric(uref.feature_name, uref.proxy)
+ begin
+ uref.element.addGeneric(uref.feature_name, target, index)
+ rescue StandardError => e
+ if is_type_error?(e)
+ uref.element.addGeneric(uref.feature_name, uref.proxy, index)
+ return :type_error
+ else
+ raise
+ end
+ end
+ else
+ begin
+ # this will replace the proxy
+ uref.element.setGeneric(uref.feature_name, target)
+ rescue StandardError => e
+ if is_type_error?(e)
+ return :type_error
+ else
+ raise
+ end
+ end
+ end
+ :success
+end
+
+def self.is_type_error?(e)
+ e.message =~ /Can not use a .* where a .* is expected/
+end
+
+end
+
+end
+end
+
diff --git a/lib/puppet/vendor/rgen/lib/rgen/instantiator/xmi11_instantiator.rb b/lib/puppet/vendor/rgen/lib/rgen/instantiator/xmi11_instantiator.rb
new file mode 100644
index 000000000..af9657b16
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/rgen/instantiator/xmi11_instantiator.rb
@@ -0,0 +1,168 @@
+require 'rgen/ecore/ecore'
+require 'rgen/instantiator/abstract_xml_instantiator'
+require 'rgen/array_extensions'
+
+class XMI11Instantiator < AbstractXMLInstantiator
+
+ include RGen::ECore
+
+ ResolverDescription = Struct.new(:object, :attribute, :value, :many)
+
+ INFO = 0
+ WARN = 1
+ ERROR = 2
+
+ def initialize(env, fix_map={}, loglevel=ERROR)
+ @env = env
+ @fix_map = fix_map
+ @loglevel = loglevel
+ @rolestack = []
+ @elementstack = []
+ end
+
+ def add_metamodel(ns, mod)
+ @ns_module_map ||={}
+ @ns_module_map[ns] = mod
+ end
+
+ def instantiate(str)
+ @resolver_descs = []
+ @element_by_id = {}
+ super(str, 1000)
+ @resolver_descs.each do |rd|
+ if rd.many
+ newval = rd.value.split(" ").collect{|v| @element_by_id[v]}
+ else
+ newval = @element_by_id[rd.value]
+ end
+ log WARN, "Could not resolve reference #{rd.attribute} on #{rd.object}" unless newval
+ begin
+ rd.object.setGeneric(rd.attribute,newval)
+ rescue Exception
+ log WARN, "Could not set reference #{rd.attribute} on #{rd.object}"
+ end
+ end
+ end
+
+ def start_tag(prefix, tag, namespaces, attributes)
+ if tag =~ /\w+\.(\w+)/
+ # XMI role
+ role_name = map_feature_name($1) || $1
+ eRef = @elementstack.last && eAllReferences(@elementstack.last).find{|r|r.name == role_name}
+ log WARN, "No reference found for #{role_name} on #{@elementstack.last}" unless eRef
+ @rolestack.push eRef
+ elsif attributes["xmi.idref"]
+ # reference
+ rd = ResolverDescription.new
+ rd.object = @elementstack.last
+ rd.attribute = @rolestack.last.name
+ rd.value = attributes["xmi.idref"]
+ rd.many = @rolestack.last.many
+ @resolver_descs << rd
+ @elementstack.push nil
+ else
+ # model element
+ value = map_tag(tag, attributes) || tag
+ if value.is_a?(String)
+ mod = @ns_module_map[namespaces[prefix]]
+ unless mod
+ log WARN, "Ignoring tag #{tag}"
+ return
+ end
+ value = mod.const_get(value).new
+ end
+ @env << value
+ eRef = @rolestack.last
+ if eRef && eRef.many
+ @elementstack.last.addGeneric(eRef.name, value)
+ elsif eRef
+ @elementstack.last.setGeneric(eRef.name, value)
+ end
+ @elementstack.push value
+ end
+ end
+
+ def end_tag(prefix, tag)
+ if tag =~ /\w+\.(\w+)/
+ @rolestack.pop
+ else
+ @elementstack.pop
+ end
+ end
+
+ def set_attribute(attr, value)
+ return unless @elementstack.last
+ if attr == "xmi.id"
+ @element_by_id[value] = @elementstack.last
+ else
+ attr_name = map_feature_name(attr) || attr
+ eFeat = eAllStructuralFeatures(@elementstack.last).find{|a| a.name == attr_name}
+ unless eFeat
+ log WARN, "No structural feature found for #{attr_name} on #{@elementstack.last}"
+ return
+ end
+ if eFeat.is_a?(RGen::ECore::EReference)
+ if map_feature_value(attr_name, value).is_a?(eFeat.eType.instanceClass)
+ @elementstack.last.setGeneric(attr_name, map_feature_value(attr_name, value))
+ else
+ rd = ResolverDescription.new
+ rd.object = @elementstack.last
+ rd.attribute = attr_name
+ rd.value = value
+ rd.many = eFeat.many
+ @resolver_descs << rd
+ end
+ else
+ value = map_feature_value(attr_name, value) || value
+ value = true if value == "true" && eFeat.eType == EBoolean
+ value = false if value == "false" && eFeat.eType == EBoolean
+ value = value.to_i if eFeat.eType == EInt || eFeat.eType == ELong
+ value = value.to_f if eFeat.eType == EFloat
+ value = value.to_sym if eFeat.eType.is_a?(EEnum)
+ @elementstack.last.setGeneric(attr_name, value)
+ end
+ end
+ end
+
+ private
+
+ def map_tag(tag, attributes)
+ tag_map = @fix_map[:tags] || {}
+ value = tag_map[tag]
+ if value.is_a?(Proc)
+ value.call(tag, attributes)
+ else
+ value
+ end
+ end
+
+ def map_feature_name(name)
+ name_map = @fix_map[:feature_names] || {}
+ name_map[name]
+ end
+
+ def map_feature_value(attr_name, value)
+ value_map = @fix_map[:feature_values] || {}
+ map = value_map[attr_name]
+ if map.is_a?(Hash)
+ map[value]
+ elsif map.is_a?(Proc)
+ map.call(value)
+ end
+ end
+
+ def log(level, msg)
+ puts %w(INFO WARN ERROR)[level] + ": " + msg if level >= @loglevel
+ end
+
+ def eAllReferences(element)
+ @eAllReferences ||= {}
+ @eAllReferences[element.class] ||= element.class.ecore.eAllReferences
+ end
+
+ def eAllStructuralFeatures(element)
+ @eAllStructuralFeatures ||= {}
+ @eAllStructuralFeatures[element.class] ||= element.class.ecore.eAllStructuralFeatures
+ end
+
+end
diff --git a/lib/puppet/vendor/rgen/lib/rgen/metamodel_builder.rb b/lib/puppet/vendor/rgen/lib/rgen/metamodel_builder.rb
new file mode 100644
index 000000000..6fd680f7a
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/rgen/metamodel_builder.rb
@@ -0,0 +1,224 @@
+# RGen Framework
+# (c) Martin Thiede, 2006
+
+require 'rgen/metamodel_builder/constant_order_helper'
+require 'rgen/metamodel_builder/builder_runtime'
+require 'rgen/metamodel_builder/builder_extensions'
+require 'rgen/metamodel_builder/module_extension'
+require 'rgen/metamodel_builder/data_types'
+require 'rgen/metamodel_builder/mm_multiple'
+require 'rgen/ecore/ecore_interface'
+
+module RGen
+
+# MetamodelBuilder can be used to create a metamodel, i.e. Ruby classes which
+# act as metamodel elements.
+#
+# To create a new metamodel element, create a Ruby class which inherits from
+# MetamodelBuilder::MMBase
+#
+# class Person < RGen::MetamodelBuilder::MMBase
+# end
+#
+# This way a couple of class methods are made available to the new class.
+# These methods can be used to:
+# * add attributes to the class
+# * add associations with other classes
+#
+# Here is an example:
+#
+# class Person < RGen::MetamodelBuilder::MMBase
+# has_attr 'name', String
+# has_attr 'age', Integer
+# end
+#
+# class House < RGen::MetamodelBuilder::MMBase
+# has_attr 'address' # String is default
+# end
+#
+# Person.many_to_many 'homes', House, 'inhabitants'
+#
+# See BuilderExtensions for details about the available class methods.
+#
+# =Attributes
+#
+# The example above creates two classes 'Person' and 'House'. Person has the attributes
+# 'name' and 'age', House has the attribute 'address'. The attributes can be
+# accessed on instances of the classes in the following way:
+#
+# p = Person.new
+# p.name = "MyName"
+# p.age = 22
+# p.name # => "MyName"
+# p.age # => 22
+#
+# Note that the class Person takes care of the type of its attributes. As
+# declared above, a 'name' can only be a String, an 'age' must be an Integer.
+# So the following would raise an exception:
+#
+# p.name = :myName # => exception: can not put a Symbol where a String is expected
+#
+# If the type of an attribute should be left undefined, use Object as type.
+#
+# =Associations
+#
+# Just as attributes show up as instance methods, associations bring their own
+# accessor methods. For the Person-to-House association this would be:
+#
+# h1 = House.new
+# h1.address = "Street1"
+# h2 = House.new
+# h2.address = "Street2"
+# p.addHomes(h1)
+# p.addHomes(h2)
+# p.removeHomes(h1)
+# p.homes # => [ h2 ]
+#
+# The Person-to-House association is _bidirectional_. This means that with the
+# addition of a House to a Person, the Person is also added to the House. Thus:
+#
+# h1.inhabitants # => []
+# h2.inhabitants # => [ p ]
+#
+# Note that the association is defined between two specific classes; instances of
+# different classes can not be added. Thus, the following would result in an
+# exception:
+#
+# p.addHomes(:justASymbol) # => exception: can not put a Symbol where a House is expected
+#
+# =ECore Metamodel description
+#
+# The class methods described above are used to create a Ruby representation of the metamodel
+# we have in mind in a very simple and easy way. We don't have to care about all the details
+# of a metamodel at this point (e.g. multiplicities, changeability, etc).
+#
+# At the same time however, an instance of the ECore metametamodel (i.e. an ECore based
+# description of our metamodel) is provided for all the Ruby classes and modules we create.
+# Since we did not provide the nitty-gritty details of the metamodel, defaults are used to
+# fully complete the ECore metamodel description.
+#
+# In order to access the ECore metamodel description, just call the +ecore+ method on a
+# Ruby class or module object belonging to your metamodel.
+#
+# Here is the example continued from above:
+#
+# Person.ecore.eAttributes.name # => ["name", "age"]
+# h2pRef = House.ecore.eReferences.first
+# h2pRef.eType # => Person
+# h2pRef.eOpposite.eType # => House
+# h2pRef.lowerBound # => 0
+# h2pRef.upperBound # => -1
+# h2pRef.many # => true
+# h2pRef.containment # => false
+#
+# Note that the use of array_extensions.rb is assumed here to make model navigation convenient.
+#
+# The following metamodel builder methods are supported, see individual method description
+# for details:
+#
+# Attributes:
+# * BuilderExtensions#has_attr
+#
+# Unidirectional references:
+# * BuilderExtensions#has_one
+# * BuilderExtensions#has_many
+# * BuilderExtensions#contains_one_uni
+# * BuilderExtensions#contains_many_uni
+#
+# Bidirectional references:
+# * BuilderExtensions#one_to_one
+# * BuilderExtensions#one_to_many
+# * BuilderExtensions#many_to_one
+# * BuilderExtensions#many_to_many
+# * BuilderExtensions#contains_one
+# * BuilderExtensions#contains_many
+#
+# Every builder command can optionally take a specification of further ECore properties.
+# Additional properties for Attributes and References are (with defaults in brackets):
+# * :ordered (true),
+# * :unique (true),
+# * :changeable (true),
+# * :volatile (false),
+# * :transient (false),
+# * :unsettable (false),
+# * :derived (false),
+# * :lowerBound (0),
+# * :resolveProxies (true) <i>references only</i>,
+#
+# Using these additional properties, the above example can be refined as follows:
+#
+# class Person < RGen::MetamodelBuilder::MMBase
+# has_attr 'name', String, :lowerBound => 1
+#   has_attr 'yearOfBirth', Integer
+# has_attr 'age', Integer, :derived => true
+# def age_derived
+# Time.now.year - yearOfBirth
+# end
+# end
+#
+# Person.many_to_many 'homes', House, 'inhabitants', :upperBound => 5
+#
+# Person.ecore.eReferences.find{|r| r.name == 'homes'}.upperBound # => 5
+#
+# This way we state that there must be a name for each person, we introduce a new attribute
+# 'yearOfBirth' and make 'age' a derived attribute. We also say that a person can
+# have at most 5 houses in our metamodel.
+#
+# ==Derived attributes and references
+#
+# If the attribute 'derived' of an attribute or reference is set to true, a method +attributeName_derived+
+# has to be provided. This method is called whenever the original attribute is accessed. The
+# original attribute can not be written if it is derived.
+#
+#
+module MetamodelBuilder
+
+ # Use this class as a start for new metamodel elements (i.e. Ruby classes)
+  # by inheriting from it.
+ #
+ # See MetamodelBuilder for an example.
+ class MMBase
+ include BuilderRuntime
+ include DataTypes
+ extend BuilderExtensions
+ extend ModuleExtension
+ extend RGen::ECore::ECoreInterface
+
+ def initialize(arg=nil)
+ raise StandardError.new("Class #{self.class} is abstract") if self.class._abstract_class
+ arg.each_pair { |k,v| setGeneric(k, v) } if arg.is_a?(Hash)
+ end
+
+ # Object#inspect causes problems on most models
+ def inspect
+ self.class.name
+ end
+
+ def self.method_added(m)
+ raise "Do not add methods to model classes directly, add them to the ClassModule instead"
+ end
+ end
+
+  # Instances of MMGeneric can be used as values of any attribute or reference
+ class MMGeneric
+    # empty implementation so we don't have to check if a value is an MMGeneric before setting the container
+ def _set_container(container, containing_feature_name)
+ end
+ end
+
+ # MMProxy objects can be used instead of real target elements in case references should be resolved later on
+ class MMProxy < MMGeneric
+    # The +targetIdentifier+ is an object identifying the element the proxy represents
+ attr_accessor :targetIdentifier
+ # +data+ is optional additional information to be associated with the proxy
+ attr_accessor :data
+
+ def initialize(ident=nil, data=nil)
+ @targetIdentifier = ident
+ @data = data
+ end
+ end
+
+end
+
+end
diff --git a/lib/puppet/vendor/rgen/lib/rgen/metamodel_builder/builder_extensions.rb b/lib/puppet/vendor/rgen/lib/rgen/metamodel_builder/builder_extensions.rb
new file mode 100644
index 000000000..dfa4a517e
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/rgen/metamodel_builder/builder_extensions.rb
@@ -0,0 +1,556 @@
+# RGen Framework
+# (c) Martin Thiede, 2006
+
+require 'erb'
+require 'rgen/metamodel_builder/intermediate/feature'
+
+module RGen
+
+module MetamodelBuilder
+
+# This module provides methods which can be used to setup a metamodel element.
+# The module is used to +extend+ MetamodelBuilder::MMBase, i.e. add the module's
+# methods as class methods.
+#
+# MetamodelBuilder::MMBase should be used as a start for new metamodel elements.
+# See MetamodelBuilder for an example.
+#
+module BuilderExtensions
+ include Util::NameHelper
+
+ class FeatureBlockEvaluator
+ def self.eval(block, props1, props2=nil)
+ return unless block
+ e = self.new(props1, props2)
+ e.instance_eval(&block)
+ end
+ def initialize(props1, props2)
+ @props1, @props2 = props1, props2
+ end
+ def annotation(hash)
+ @props1.annotations << Intermediate::Annotation.new(hash)
+ end
+ def opposite_annotation(hash)
+ raise "No opposite available" unless @props2
+ @props2.annotations << Intermediate::Annotation.new(hash)
+ end
+ end
+
+ # Add an attribute which can hold a single value.
+ # 'role' specifies the name which is used to access the attribute.
+ # 'target_class' specifies the type of objects which can be held by this attribute.
+  # If no target class is given, String will be the default.
+ #
+ # This class method adds the following instance methods, where 'role' is to be
+ # replaced by the given role name:
+ # class#role # getter
+ # class#role=(value) # setter
+ def has_attr(role, target_class=nil, raw_props={}, &block)
+ props = Intermediate::Attribute.new(target_class, _ownProps(raw_props).merge({
+ :name=>role, :upperBound=>1}))
+ raise "No opposite available" unless _oppositeProps(raw_props).empty?
+ FeatureBlockEvaluator.eval(block, props)
+ _build_internal(props)
+ end
+
+ # Add an attribute which can hold multiple values.
+ # 'role' specifies the name which is used to access the attribute.
+ # 'target_class' specifies the type of objects which can be held by this attribute.
+  # If no target class is given, String will be the default.
+ #
+ # This class method adds the following instance methods, where 'role' is to be
+ # replaced by the given role name:
+ # class#addRole(value, index=-1)
+ # class#removeRole(value)
+ # class#role # getter, returns an array
+ # class#role= # setter, sets multiple values at once
+ # Note that the first letter of the role name is turned into an uppercase
+ # for the add and remove methods.
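+  #
+  # A small sketch of the generated accessors (class and attribute names are made up):
+  #
+  #   class Person < RGen::MetamodelBuilder::MMBase
+  #     has_many_attr 'nickNames', String
+  #   end
+  #
+  #   p = Person.new
+  #   p.addNickNames("Jo")
+  #   p.nickNames            # => ["Jo"]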
+ def has_many_attr(role, target_class=nil, raw_props={}, &block)
+ props = Intermediate::Attribute.new(target_class, _setManyUpperBound(_ownProps(raw_props).merge({
+ :name=>role})))
+ raise "No opposite available" unless _oppositeProps(raw_props).empty?
+ FeatureBlockEvaluator.eval(block, props)
+ _build_internal(props)
+ end
+
+ # Add a single unidirectional association.
+ # 'role' specifies the name which is used to access the association.
+ # 'target_class' specifies the type of objects which can be held by this association.
+ #
+ # This class method adds the following instance methods, where 'role' is to be
+ # replaced by the given role name:
+ # class#role # getter
+ # class#role=(value) # setter
+ #
+ def has_one(role, target_class=nil, raw_props={}, &block)
+ props = Intermediate::Reference.new(target_class, _ownProps(raw_props).merge({
+ :name=>role, :upperBound=>1, :containment=>false}))
+ raise "No opposite available" unless _oppositeProps(raw_props).empty?
+ FeatureBlockEvaluator.eval(block, props)
+ _build_internal(props)
+ end
+
+  # Add a unidirectional _many_ association.
+ # 'role' specifies the name which is used to access the attribute.
+ # 'target_class' is optional and can be used to fix the type of objects which
+ # can be referenced by this association.
+ #
+ # This class method adds the following instance methods, where 'role' is to be
+ # replaced by the given role name:
+ # class#addRole(value, index=-1)
+ # class#removeRole(value)
+ # class#role # getter, returns an array
+ # Note that the first letter of the role name is turned into an uppercase
+ # for the add and remove methods.
+ #
+ def has_many(role, target_class=nil, raw_props={}, &block)
+ props = Intermediate::Reference.new(target_class, _setManyUpperBound(_ownProps(raw_props).merge({
+ :name=>role, :containment=>false})))
+ raise "No opposite available" unless _oppositeProps(raw_props).empty?
+ FeatureBlockEvaluator.eval(block, props)
+ _build_internal(props)
+ end
+
+ def contains_one_uni(role, target_class=nil, raw_props={}, &block)
+ props = Intermediate::Reference.new(target_class, _ownProps(raw_props).merge({
+ :name=>role, :upperBound=>1, :containment=>true}))
+ raise "No opposite available" unless _oppositeProps(raw_props).empty?
+ FeatureBlockEvaluator.eval(block, props)
+ _build_internal(props)
+ end
+
+ def contains_many_uni(role, target_class=nil, raw_props={}, &block)
+ props = Intermediate::Reference.new(target_class, _setManyUpperBound(_ownProps(raw_props).merge({
+ :name=>role, :containment=>true})))
+ raise "No opposite available" unless _oppositeProps(raw_props).empty?
+ FeatureBlockEvaluator.eval(block, props)
+ _build_internal(props)
+ end
+
+ # Add a bidirectional one-to-many association between two classes.
+  # The class this method is called on is referred to as _own_class_ in
+ # the following.
+ #
+ # Instances of own_class can use 'own_role' to access _many_ associated instances
+ # of type 'target_class'. Instances of 'target_class' can use 'target_role' to
+ # access _one_ associated instance of own_class.
+ #
+ # This class method adds the following instance methods where 'ownRole' and
+ # 'targetRole' are to be replaced by the given role names:
+ # own_class#addOwnRole(value, index=-1)
+ # own_class#removeOwnRole(value)
+ # own_class#ownRole
+ # target_class#targetRole
+ # target_class#targetRole=(value)
+ # Note that the first letter of the role name is turned into an uppercase
+ # for the add and remove methods.
+ #
+ # When an element is added/set on either side, this element also receives the element
+  # it is added to as a new element.
+ #
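+  # A sketch of a possible declaration and the resulting accessors (class and role
+  # names are illustrative only):
+  #
+  #   Company.one_to_many 'employees', Person, 'employer'
+  #
+  #   c = Company.new
+  #   p = Person.new
+  #   c.addEmployees(p)
+  #   p.employer             # => c
+  #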
+ def one_to_many(target_role, target_class, own_role, raw_props={}, &block)
+ props1 = Intermediate::Reference.new(target_class, _setManyUpperBound(_ownProps(raw_props).merge({
+ :name=>target_role, :containment=>false})))
+ props2 = Intermediate::Reference.new(self, _oppositeProps(raw_props).merge({
+ :name=>own_role, :upperBound=>1, :containment=>false}))
+ FeatureBlockEvaluator.eval(block, props1, props2)
+ _build_internal(props1, props2)
+ end
+
+ def contains_many(target_role, target_class, own_role, raw_props={}, &block)
+ props1 = Intermediate::Reference.new(target_class, _setManyUpperBound(_ownProps(raw_props).merge({
+ :name=>target_role, :containment=>true})))
+ props2 = Intermediate::Reference.new(self, _oppositeProps(raw_props).merge({
+ :name=>own_role, :upperBound=>1, :containment=>false}))
+ FeatureBlockEvaluator.eval(block, props1, props2)
+ _build_internal(props1, props2)
+ end
+
+ # This is the inverse of one_to_many provided for convenience.
+ def many_to_one(target_role, target_class, own_role, raw_props={}, &block)
+ props1 = Intermediate::Reference.new(target_class, _ownProps(raw_props).merge({
+ :name=>target_role, :upperBound=>1, :containment=>false}))
+ props2 = Intermediate::Reference.new(self, _setManyUpperBound(_oppositeProps(raw_props).merge({
+ :name=>own_role, :containment=>false})))
+ FeatureBlockEvaluator.eval(block, props1, props2)
+ _build_internal(props1, props2)
+ end
+
+ # Add a bidirectional many-to-many association between two classes.
+  # The class this method is called on is referred to as _own_class_ in
+ # the following.
+ #
+ # Instances of own_class can use 'own_role' to access _many_ associated instances
+ # of type 'target_class'. Instances of 'target_class' can use 'target_role' to
+ # access _many_ associated instances of own_class.
+ #
+ # This class method adds the following instance methods where 'ownRole' and
+ # 'targetRole' are to be replaced by the given role names:
+ # own_class#addOwnRole(value, index=-1)
+ # own_class#removeOwnRole(value)
+ # own_class#ownRole
+ # target_class#addTargetRole
+  #   target_class#removeTargetRole(value)
+ # target_class#targetRole
+ # Note that the first letter of the role name is turned into an uppercase
+ # for the add and remove methods.
+ #
+ # When an element is added on either side, this element also receives the element
+  # it is added to as a new element.
+ #
+ def many_to_many(target_role, target_class, own_role, raw_props={}, &block)
+ props1 = Intermediate::Reference.new(target_class, _setManyUpperBound(_ownProps(raw_props).merge({
+ :name=>target_role, :containment=>false})))
+ props2 = Intermediate::Reference.new(self, _setManyUpperBound(_oppositeProps(raw_props).merge({
+ :name=>own_role, :containment=>false})))
+ FeatureBlockEvaluator.eval(block, props1, props2)
+ _build_internal(props1, props2)
+ end
+
+ # Add a bidirectional one-to-one association between two classes.
+  # The class this method is called on is referred to as _own_class_ in
+ # the following.
+ #
+ # Instances of own_class can use 'own_role' to access _one_ associated instance
+ # of type 'target_class'. Instances of 'target_class' can use 'target_role' to
+ # access _one_ associated instance of own_class.
+ #
+ # This class method adds the following instance methods where 'ownRole' and
+ # 'targetRole' are to be replaced by the given role names:
+ # own_class#ownRole
+ # own_class#ownRole=(value)
+ # target_class#targetRole
+ # target_class#targetRole=(value)
+ #
+ # When an element is set on either side, this element also receives the element
+  # it is added to as the new element.
+ #
+ def one_to_one(target_role, target_class, own_role, raw_props={}, &block)
+ props1 = Intermediate::Reference.new(target_class, _ownProps(raw_props).merge({
+ :name=>target_role, :upperBound=>1, :containment=>false}))
+ props2 = Intermediate::Reference.new(self, _oppositeProps(raw_props).merge({
+ :name=>own_role, :upperBound=>1, :containment=>false}))
+ FeatureBlockEvaluator.eval(block, props1, props2)
+ _build_internal(props1, props2)
+ end
+
+ def contains_one(target_role, target_class, own_role, raw_props={}, &block)
+ props1 = Intermediate::Reference.new(target_class, _ownProps(raw_props).merge({
+ :name=>target_role, :upperBound=>1, :containment=>true}))
+ props2 = Intermediate::Reference.new(self, _oppositeProps(raw_props).merge({
+ :name=>own_role, :upperBound=>1, :containment=>false}))
+ FeatureBlockEvaluator.eval(block, props1, props2)
+ _build_internal(props1, props2)
+ end
+
+ def _metamodel_description # :nodoc:
+ @metamodel_description ||= []
+ end
+
+  def _add_metamodel_description(desc) # :nodoc:
+ @metamodel_description ||= []
+ @metamodelDescriptionByName ||= {}
+ @metamodel_description.delete(@metamodelDescriptionByName[desc.value(:name)])
+ @metamodel_description << desc
+ @metamodelDescriptionByName[desc.value(:name)] = desc
+ end
+
+ def abstract
+ @abstract = true
+ end
+
+ def _abstract_class
+ @abstract || false
+ end
+
+ def inherited(c)
+ c.send(:include, c.const_set(:ClassModule, Module.new))
+ MetamodelBuilder::ConstantOrderHelper.classCreated(c)
+ end
+
+ protected
+
+ # Central builder method
+ #
+ def _build_internal(props1, props2=nil)
+ _add_metamodel_description(props1)
+ if props1.many?
+ _build_many_methods(props1, props2)
+ else
+ _build_one_methods(props1, props2)
+ end
+ if props2
+ # this is a bidirectional reference
+ props1.opposite, props2.opposite = props2, props1
+ other_class = props1.impl_type
+ other_class._add_metamodel_description(props2)
+ raise "Internal error: second description must be a reference description" \
+ unless props2.reference?
+ if props2.many?
+ other_class._build_many_methods(props2, props1)
+ else
+ other_class._build_one_methods(props2, props1)
+ end
+ end
+ end
+
+ # To-One association methods
+ #
+ def _build_one_methods(props, other_props=nil)
+ name = props.value(:name)
+ other_role = other_props && other_props.value(:name)
+
+ if props.value(:derived)
+ build_derived_method(name, props, :one)
+ else
+ @@one_read_builder ||= ERB.new <<-CODE
+
+ def get<%= firstToUpper(name) %>
+ <% if !props.reference? && props.value(:defaultValueLiteral) %>
+ <% defVal = props.value(:defaultValueLiteral) %>
+ <% check_default_value_literal(defVal, props) %>
+ <% defVal = '"'+defVal+'"' if props.impl_type == String %>
+ <% defVal = ':'+defVal if props.impl_type.is_a?(DataTypes::Enum) && props.impl_type != DataTypes::Boolean %>
+ (defined? @<%= name %>) ? @<%= name %> : <%= defVal %>
+ <% else %>
+ @<%= name %>
+ <% end %>
+ end
+ <% if name != "class" %>
+ alias <%= name %> get<%= firstToUpper(name) %>
+ <% end %>
+
+ CODE
+ self::ClassModule.module_eval(@@one_read_builder.result(binding))
+ end
+
+ if props.value(:changeable)
+ @@one_write_builder ||= ERB.new <<-CODE
+
+ def set<%= firstToUpper(name) %>(val)
+ return if (defined? @<%= name %>) && val == @<%= name %>
+ <%= type_check_code("val", props) %>
+ oldval = @<%= name %>
+ @<%= name %> = val
+ <% if other_role %>
+ oldval._unregister<%= firstToUpper(other_role) %>(self) unless oldval.nil? || oldval.is_a?(MMGeneric)
+ val._register<%= firstToUpper(other_role) %>(self) unless val.nil? || val.is_a?(MMGeneric)
+ <% end %>
+ <% if props.reference? && props.value(:containment) %>
+ val._set_container(self, :<%= name %>) unless val.nil?
+ oldval._set_container(nil, nil) unless oldval.nil?
+ <% end %>
+ end
+ alias <%= name %>= set<%= firstToUpper(name) %>
+
+ def _register<%= firstToUpper(name) %>(val)
+ <% if other_role %>
+ @<%= name %>._unregister<%= firstToUpper(other_role) %>(self) unless @<%= name %>.nil? || @<%= name %>.is_a?(MMGeneric)
+ <% end %>
+ <% if props.reference? && props.value(:containment) %>
+ @<%= name %>._set_container(nil, nil) unless @<%= name %>.nil?
+ val._set_container(self, :<%= name %>) unless val.nil?
+ <% end %>
+ @<%= name %> = val
+ end
+
+ def _unregister<%= firstToUpper(name) %>(val)
+ <% if props.reference? && props.value(:containment) %>
+ @<%= name %>._set_container(nil, nil) unless @<%= name %>.nil?
+ <% end %>
+ @<%= name %> = nil
+ end
+
+ CODE
+ self::ClassModule.module_eval(@@one_write_builder.result(binding))
+
+ end
+ end
+
+ # To-Many association methods
+ #
+ def _build_many_methods(props, other_props=nil)
+ name = props.value(:name)
+ other_role = other_props && other_props.value(:name)
+
+ if props.value(:derived)
+ build_derived_method(name, props, :many)
+ else
+ @@many_read_builder ||= ERB.new <<-CODE
+
+ def get<%= firstToUpper(name) %>
+ ( @<%= name %> ? @<%= name %>.dup : [] )
+ end
+ <% if name != "class" %>
+ alias <%= name %> get<%= firstToUpper(name) %>
+ <% end %>
+
+ CODE
+ self::ClassModule.module_eval(@@many_read_builder.result(binding))
+ end
+
+ if props.value(:changeable)
+ @@many_write_builder ||= ERB.new <<-CODE
+
+ def add<%= firstToUpper(name) %>(val, index=-1)
+ @<%= name %> = [] unless @<%= name %>
+ return if val.nil? || (@<%= name %>.any?{|e| e.object_id == val.object_id} && (val.is_a?(MMBase) || val.is_a?(MMGeneric)))
+ <%= type_check_code("val", props) %>
+ @<%= name %>.insert(index, val)
+ <% if other_role %>
+ val._register<%= firstToUpper(other_role) %>(self) unless val.is_a?(MMGeneric)
+ <% end %>
+ <% if props.reference? && props.value(:containment) %>
+ val._set_container(self, :<%= name %>)
+ <% end %>
+ end
+
+ def remove<%= firstToUpper(name) %>(val)
+ @<%= name %> = [] unless @<%= name %>
+ @<%= name %>.each_with_index do |e,i|
+ if e.object_id == val.object_id
+ @<%= name %>.delete_at(i)
+ <% if props.reference? && props.value(:containment) %>
+ val._set_container(nil, nil)
+ <% end %>
+ <% if other_role %>
+ val._unregister<%= firstToUpper(other_role) %>(self) unless val.is_a?(MMGeneric)
+ <% end %>
+ return
+ end
+ end
+ end
+
+ def set<%= firstToUpper(name) %>(val)
+ return if val.nil?
+ raise _assignmentTypeError(self, val, Enumerable) unless val.is_a? Enumerable
+ get<%= firstToUpper(name) %>.each {|e|
+ remove<%= firstToUpper(name) %>(e)
+ }
+ val.each {|v|
+ add<%= firstToUpper(name) %>(v)
+ }
+ end
+ alias <%= name %>= set<%= firstToUpper(name) %>
+
+ def _register<%= firstToUpper(name) %>(val)
+ @<%= name %> = [] unless @<%= name %>
+ @<%= name %>.push val
+ <% if props.reference? && props.value(:containment) %>
+ val._set_container(self, :<%= name %>)
+ <% end %>
+ end
+
+ def _unregister<%= firstToUpper(name) %>(val)
+ @<%= name %>.delete val
+ <% if props.reference? && props.value(:containment) %>
+ val._set_container(nil, nil)
+ <% end %>
+ end
+
+ CODE
+ self::ClassModule.module_eval(@@many_write_builder.result(binding))
+ end
+
+ end
+
+ private
+
+ def build_derived_method(name, props, kind)
+ raise "Implement method #{name}_derived instead of method #{name}" \
+ if (public_instance_methods+protected_instance_methods+private_instance_methods).include?(name)
+ @@derived_builder ||= ERB.new <<-CODE
+
+ def get<%= firstToUpper(name) %>
+ raise "Derived feature requires public implementation of method <%= name %>_derived" \
+ unless respond_to?(:<%= name+"_derived" %>)
+ val = <%= name %>_derived
+ <% if kind == :many %>
+ raise _assignmentTypeError(self,val,Enumerable) unless val && val.is_a?(Enumerable)
+ val.each do |v|
+ <%= type_check_code("v", props) %>
+ end
+ <% else %>
+ <%= type_check_code("val", props) %>
+ <% end %>
+ val
+ end
+ <% if name != "class" %>
+ alias <%= name %> get<%= firstToUpper(name) %>
+ <% end %>
+ #TODO final_method :<%= name %>
+
+ CODE
+ self::ClassModule.module_eval(@@derived_builder.result(binding))
+ end
+
+ def check_default_value_literal(literal, props)
+ return if literal.nil? || props.impl_type == String
+ if props.impl_type == Integer || props.impl_type == RGen::MetamodelBuilder::DataTypes::Long
+ unless literal =~ /^\d+$/
+ raise StandardError.new("Property #{props.value(:name)} can not take value #{literal}, expected an Integer")
+ end
+ elsif props.impl_type == Float
+ unless literal =~ /^\d+\.\d+$/
+ raise StandardError.new("Property #{props.value(:name)} can not take value #{literal}, expected a Float")
+ end
+ elsif props.impl_type == RGen::MetamodelBuilder::DataTypes::Boolean
+ unless ["true", "false"].include?(literal)
+ raise StandardError.new("Property #{props.value(:name)} can not take value #{literal}, expected true or false")
+ end
+ elsif props.impl_type.is_a?(RGen::MetamodelBuilder::DataTypes::Enum)
+ unless props.impl_type.literals.include?(literal.to_sym)
+ raise StandardError.new("Property #{props.value(:name)} can not take value #{literal}, expected one of #{props.impl_type.literals_as_strings.join(', ')}")
+ end
+ else
+ raise StandardError.new("Unkown type "+props.impl_type.to_s)
+ end
+ end
+
+ def type_check_code(varname, props)
+ code = ""
+ if props.impl_type == RGen::MetamodelBuilder::DataTypes::Long
+ code << "unless #{varname}.nil? || #{varname}.is_a?(Integer) || #{varname}.is_a?(MMGeneric)"
+ code << "\n"
+ expected = "Integer"
+ elsif props.impl_type.is_a?(Class)
+ code << "unless #{varname}.nil? || #{varname}.is_a?(#{props.impl_type}) || #{varname}.is_a?(MMGeneric)"
+ code << " || #{varname}.is_a?(BigDecimal)" if props.impl_type == Float && defined?(BigDecimal)
+ code << "\n"
+ expected = props.impl_type.to_s
+ elsif props.impl_type.is_a?(RGen::MetamodelBuilder::DataTypes::Enum)
+ code << "unless #{varname}.nil? || [#{props.impl_type.literals_as_strings.join(',')}].include?(#{varname}) || #{varname}.is_a?(MMGeneric)\n"
+ expected = "["+props.impl_type.literals_as_strings.join(',')+"]"
+ else
+ raise StandardError.new("Unkown type "+props.impl_type.to_s)
+ end
+ code << "raise _assignmentTypeError(self,#{varname},\"#{expected}\")\n"
+ code << "end"
+ code
+ end
+
+ def _ownProps(props)
+ Hash[*(props.select{|k,v| !(k.to_s =~ /^opposite_/)}.flatten)]
+ end
+
+ def _oppositeProps(props)
+ r = {}
+ props.each_pair do |k,v|
+ if k.to_s =~ /^opposite_(.*)$/
+ r[$1.to_sym] = v
+ end
+ end
+ r
+ end
+
+ def _setManyUpperBound(props)
+ props[:upperBound] = -1 unless props[:upperBound].is_a?(Integer) && props[:upperBound] > 1
+ props
+ end
+
+end
+
+end
+
+end
diff --git a/lib/puppet/vendor/rgen/lib/rgen/metamodel_builder/builder_runtime.rb b/lib/puppet/vendor/rgen/lib/rgen/metamodel_builder/builder_runtime.rb
new file mode 100644
index 000000000..3ddbe3b5a
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/rgen/metamodel_builder/builder_runtime.rb
@@ -0,0 +1,174 @@
+# RGen Framework
+# (c) Martin Thiede, 2006
+
+require 'rgen/util/name_helper'
+
+module RGen
+
+module MetamodelBuilder
+
+# This module is mixed into MetamodelBuilder::MMBase.
+# The methods provided by this module are used by the methods generated
+# by the class methods of MetamodelBuilder::BuilderExtensions
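+#
+# For example, the generic accessors delegate to the generated role-specific methods.
+# A rough sketch, assuming a metamodel class Person with a 'name' attribute (borrowed
+# from the MetamodelBuilder documentation example):
+#
+#   p = Person.new
+#   p.setGeneric("name", "John")   # calls p.setName("John")
+#   p.getGeneric("name")           # => "John"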
+module BuilderRuntime
+ include Util::NameHelper
+
+ def is_a?(c)
+ return super unless c.const_defined?(:ClassModule)
+ kind_of?(c::ClassModule)
+ end
+
+ def addGeneric(role, value, index=-1)
+ send("add#{firstToUpper(role.to_s)}",value, index)
+ end
+
+ def removeGeneric(role, value)
+ send("remove#{firstToUpper(role.to_s)}",value)
+ end
+
+ def setGeneric(role, value)
+ send("set#{firstToUpper(role.to_s)}",value)
+ end
+
+ def hasManyMethods(role)
+ respond_to?("add#{firstToUpper(role.to_s)}")
+ end
+
+ def setOrAddGeneric(role, value)
+ if hasManyMethods(role)
+ addGeneric(role, value)
+ else
+ setGeneric(role, value)
+ end
+ end
+
+ def setNilOrRemoveGeneric(role, value)
+ if hasManyMethods(role)
+ removeGeneric(role, value)
+ else
+ setGeneric(role, nil)
+ end
+ end
+
+ def setNilOrRemoveAllGeneric(role)
+ if hasManyMethods(role)
+ setGeneric(role, [])
+ else
+ setGeneric(role, nil)
+ end
+ end
+
+ def getGeneric(role)
+ send("get#{firstToUpper(role.to_s)}")
+ end
+
+ def getGenericAsArray(role)
+ result = getGeneric(role)
+ result = [result].compact unless result.is_a?(Array)
+ result
+ end
+
+ def eIsSet(role)
+ eval("defined? @#{role}") != nil
+ end
+
+ def eUnset(role)
+ if respond_to?("add#{firstToUpper(role.to_s)}")
+ setGeneric(role, [])
+ else
+ setGeneric(role, nil)
+ end
+ remove_instance_variable("@#{role}")
+ end
+
+ def eContainer
+ @_container
+ end
+
+ def eContainingFeature
+ @_containing_feature_name
+ end
+
+ # returns the contained elements in no particular order
+ def eContents
+ if @_contained_elements
+ @_contained_elements.dup
+ else
+ []
+ end
+ end
+
+ # if a block is given, calls the block on every contained element in depth first order.
+ # if the block returns :prune, recursion will stop at this point.
+ #
+ # if no block is given builds and returns a list of all contained elements.
+ #
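+  # An illustrative traversal (element and class names are made up):
+  #
+  #   root.eAllContents do |e|
+  #     next :prune if e.is_a?(SkippedPart)  # don't descend below SkippedPart elements
+  #     puts e
+  #   end
+  #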
+ def eAllContents(&block)
+ if block
+ if @_contained_elements
+ @_contained_elements.each do |e|
+ res = block.call(e)
+ e.eAllContents(&block) if res != :prune
+ end
+ end
+ nil
+ else
+ result = []
+ if @_contained_elements
+ @_contained_elements.each do |e|
+ result << e
+ result.concat(e.eAllContents)
+ end
+ end
+ result
+ end
+ end
+
+ def disconnectContainer
+ eContainer.setNilOrRemoveGeneric(eContainingFeature, self) if eContainer
+ end
+
+ def _set_container(container, containing_feature_name)
+ # if a new container is set, make sure to disconnect from the old one.
+ # note that _set_container will never be called for the container and the role
+ # which are currently set because the accessor methods in BuilderExtensions
+ # block setting/adding a value which is already present.
+ # (it may be called for the same container with a different role, a different container
+ # with the same role and a different container with a different role, though)
+    # this ensures that disconnecting from the current container doesn't break
+ # a new connection which has just been set up in the accessor methods.
+ disconnectContainer if container
+ @_container._remove_contained_element(self) if @_container
+ container._add_contained_element(self) if container
+ @_container = container
+ @_containing_feature_name = containing_feature_name
+ end
+
+ def _add_contained_element(element)
+ @_contained_elements ||= []
+ @_contained_elements << element
+ end
+
+ def _remove_contained_element(element)
+ @_contained_elements.delete(element) if @_contained_elements
+ end
+
+ def _assignmentTypeError(target, value, expected)
+ text = ""
+ if target
+ targetId = target.class.name
+ targetId += "(" + target.name + ")" if target.respond_to?(:name) and target.name
+ text += "In #{targetId} : "
+ end
+ valueId = value.class.name
+ valueId += "(" + value.name + ")" if value.respond_to?(:name) and value.name
+ valueId += "(:" + value.to_s + ")" if value.is_a?(Symbol)
+ text += "Can not use a #{valueId} where a #{expected} is expected"
+ StandardError.new(text)
+ end
+
+end
+
+end
+
+end
diff --git a/lib/puppet/vendor/rgen/lib/rgen/metamodel_builder/constant_order_helper.rb b/lib/puppet/vendor/rgen/lib/rgen/metamodel_builder/constant_order_helper.rb
new file mode 100644
index 000000000..51c8034a0
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/rgen/metamodel_builder/constant_order_helper.rb
@@ -0,0 +1,89 @@
+module RGen
+
+module MetamodelBuilder
+
+# The purpose of the ConstantOrderHelper is to capture the definition order of RGen metamodel builder
+# classes, modules and enums. The problem is that Ruby doesn't seem to track the order of
+# constants being created in a module. However the order is important because it defines the order
+# of eClassifiers and eSubpackages in a EPackage.
+#
+# It would be helpful here if Ruby provided a +const_added+ callback, but so far it does not.
+#
+# The idea for capturing is that all events of creating a RGen class, module or enum are reported to the
+# ConstantOrderHelper singleton.
+# For classes and modules it tries to add their names to the parent's +_constantOrder+ array.
+# The parent module is derived from the class's or module's name. However, the new name is only added
+# if the respective parent module has a new constant (which is not yet in +_constantOrder+) which
+# points to the new class or module.
+# For enums it is a bit more complicated, because at the time the enum is created, the parent
+# module does not yet contain the constant to which the enum is assigned. Therefore, the enum is remembered
+# and an attempt is made to store it on the next event (class, module or enum) within the module which was
+# created last (i.e. the module which was last extended with ModuleExtension). If it can not be found in that
+# module, all parent modules of the last module are searched. This way it should also be entered correctly in
+# case it was defined outside of the last created module.
+# Note that an enum is not stored to the constant order array until another event occurs. That's why
+# it is possible that one enum is missing at the end. This needs to be taken care of by the ECore transformer.
+#
+# This way of capturing should be sufficient for the regular use cases of the RGen metamodel builder language.
+# However, it is possible to write code which messes this up, see unit tests for details.
+# In the worst case, the new classes, modules or enums will just not be found in a parent module and thus be ignored.
+#
+ConstantOrderHelper = Class.new do
+
+ def initialize
+ @currentModule = nil
+ @pendingEnum = nil
+ end
+
+ def classCreated(c)
+ handlePendingEnum
+ cont = containerModule(c)
+ name = (c.name || "").split("::").last
+ return unless cont.respond_to?(:_constantOrder) && !cont._constantOrder.include?(name)
+ cont._constantOrder << name
+ end
+
+ def moduleCreated(m)
+ handlePendingEnum
+ cont = containerModule(m)
+ name = (m.name || "").split("::").last
+ return unless cont.respond_to?(:_constantOrder) && !cont._constantOrder.include?(name)
+ cont._constantOrder << name
+ @currentModule = m
+ end
+
+ def enumCreated(e)
+ handlePendingEnum
+ @pendingEnum = e
+ end
+
+ private
+
+ def containerModule(m)
+ containerName = (m.name || "").split("::")[0..-2].join("::")
+ containerName.empty? ? nil : eval(containerName, TOPLEVEL_BINDING)
+ end
+
+ def handlePendingEnum
+ return unless @pendingEnum
+ m = @currentModule
+ while m
+ if m.respond_to?(:_constantOrder)
+ newConstants = m.constants - m._constantOrder
+ const = newConstants.find{|c| m.const_get(c).object_id == @pendingEnum.object_id}
+ if const
+ m._constantOrder << const.to_s
+ break
+ end
+ end
+ m = containerModule(m)
+ end
+ @pendingEnum = nil
+ end
+
+end.new
+
+end
+
+end
+
diff --git a/lib/puppet/vendor/rgen/lib/rgen/metamodel_builder/data_types.rb b/lib/puppet/vendor/rgen/lib/rgen/metamodel_builder/data_types.rb
new file mode 100644
index 000000000..17268f4e3
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/rgen/metamodel_builder/data_types.rb
@@ -0,0 +1,77 @@
+module RGen
+
+module MetamodelBuilder
+
+module DataTypes
+
+ # An enum object is used to describe possible attribute values within a
+ # MetamodelBuilder attribute definition. An attribute defined this way can only
+ # take the values specified when creating the Enum object.
+ # Literal values can only be symbols or true or false.
+ # Optionally a name may be specified for the enum object.
+ #
+ # Examples:
+ #
+ # Enum.new(:name => "AnimalEnum", :literals => [:cat, :dog])
+ # Enum.new(:literals => [:cat, :dog])
+ # Enum.new([:cat, :dog])
+ #
+ class Enum
+ attr_reader :name, :literals
+
+ # Creates a new named enum type object consisting of the elements passed as arguments.
+ def initialize(params)
+ MetamodelBuilder::ConstantOrderHelper.enumCreated(self)
+ if params.is_a?(Array)
+ @literals = params
+ @name = "anonymous"
+ elsif params.is_a?(Hash)
+ raise StandardError.new("Hash entry :literals is missing") unless params[:literals]
+ @literals = params[:literals]
+ @name = params[:name] || "anonymous"
+ else
+ raise StandardError.new("Pass an Array or a Hash")
+ end
+ end
+
+    # This method can be used to check if an object can be used as a value for
+    # variables having this enum object as their type.
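+    #
+    # For example (assuming the first Enum example above was assigned to a constant AnimalEnum):
+    #
+    #   AnimalEnum.validLiteral?(:cat)    # => true
+    #   AnimalEnum.validLiteral?(:horse)  # => false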
+ def validLiteral?(l)
+ literals.include?(l)
+ end
+
+ def literals_as_strings
+ literals.collect do |l|
+ if l.is_a?(Symbol)
+ if l.to_s =~ /^\d|\W/
+ ":'"+l.to_s+"'"
+ else
+ ":"+l.to_s
+ end
+ elsif l.is_a?(TrueClass) || l.is_a?(FalseClass)
+ l.to_s
+ else
+ raise StandardError.new("Literal values can only be symbols or true/false")
+ end
+ end
+ end
+
+ def to_s # :nodoc:
+ name
+ end
+ end
+
+ # Boolean is a predefined enum object having Ruby's true and false singletons
+ # as possible values.
+ Boolean = Enum.new(:name => "Boolean", :literals => [true, false])
+
+ # Long represents a 64-bit Integer
+ # This constant is merely a marker for keeping this information in the Ruby version of the metamodel,
+ # values of this type will always be instances of Integer or Bignum;
+ # Setting it to a string value ensures that it responds to "to_s" which is used in the metamodel generator
+ Long = "Long"
+end
+
+end
+
+end
diff --git a/lib/puppet/vendor/rgen/lib/rgen/metamodel_builder/intermediate/annotation.rb b/lib/puppet/vendor/rgen/lib/rgen/metamodel_builder/intermediate/annotation.rb
new file mode 100644
index 000000000..eaa1eee6e
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/rgen/metamodel_builder/intermediate/annotation.rb
@@ -0,0 +1,30 @@
+module RGen
+
+module MetamodelBuilder
+
+module Intermediate
+
+class Annotation
+ attr_reader :details, :source
+
+ def initialize(hash)
+ if hash[:source] || hash[:details]
+ restKeys = hash.keys - [:source, :details]
+ raise "Hash key #{restKeys.first} not allowed." unless restKeys.empty?
+ raise "Details not provided, key :details is missing" unless hash[:details]
+ raise "Details must be provided as a hash" unless hash[:details].is_a?(Hash)
+ @details = hash[:details]
+ @source = hash[:source]
+ else
+ raise "Details must be provided as a hash" unless hash.is_a?(Hash)
+ @details = hash
+ end
+ end
+
+end
+
+end
+
+end
+
+end
diff --git a/lib/puppet/vendor/rgen/lib/rgen/metamodel_builder/intermediate/feature.rb b/lib/puppet/vendor/rgen/lib/rgen/metamodel_builder/intermediate/feature.rb
new file mode 100644
index 000000000..ed319ea1d
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/rgen/metamodel_builder/intermediate/feature.rb
@@ -0,0 +1,168 @@
+require 'rgen/metamodel_builder/data_types'
+
+module RGen
+
+module MetamodelBuilder
+
+module Intermediate
+
+class Feature
+ attr_reader :etype, :impl_type
+
+ def value(prop)
+ @props[prop]
+ end
+
+ def annotations
+ @annotations ||= []
+ end
+
+ def many?
+ value(:upperBound) > 1 || value(:upperBound) == -1
+ end
+
+ def reference?
+ is_a?(Reference)
+ end
+
+ protected
+
+ def check(props)
+ @props.keys.each do |p|
+ kind = props[p]
+ raise StandardError.new("invalid property #{p}") unless kind
+ raise StandardError.new("property '#{p}' not set") if value(p).nil? && kind == :required
+ end
+ end
+
+end
+
+class Attribute < Feature
+
+ Properties = {
+ :name => :required,
+ :ordered => :required,
+ :unique => :required,
+ :changeable => :required,
+ :volatile => :required,
+ :transient => :required,
+ :unsettable => :required,
+ :derived => :required,
+ :lowerBound => :required,
+ :upperBound => :required,
+ :defaultValueLiteral => :optional
+ }
+
+ Defaults = {
+ :ordered => true,
+ :unique => true,
+ :changeable => true,
+ :volatile => false,
+ :transient => false,
+ :unsettable => false,
+ :derived => false,
+ :lowerBound => 0
+ }
+
+ Types = {
+ String => :EString,
+ Integer => :EInt,
+ RGen::MetamodelBuilder::DataTypes::Long => :ELong,
+ Float => :EFloat,
+ RGen::MetamodelBuilder::DataTypes::Boolean => :EBoolean,
+ Object => :ERubyObject,
+ Class => :ERubyClass
+ }
+
+ def self.default_value(prop)
+ Defaults[prop]
+ end
+
+ def self.properties
+ Properties.keys.sort{|a,b| a.to_s <=> b.to_s}
+ end
+
+ def initialize(type, props)
+ @props = Defaults.merge(props)
+ type ||= String
+ @etype = Types[type]
+ if @etype
+ @impl_type = type
+ elsif type.is_a?(RGen::MetamodelBuilder::DataTypes::Enum)
+ @etype = :EEnumerable
+ @impl_type = type
+ else
+ raise ArgumentError.new("invalid type '#{type}'")
+ end
+ if @props[:derived]
+ @props[:changeable] = false
+ @props[:volatile] = true
+ @props[:transient] = true
+ end
+ check(Properties)
+ end
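+
+ # Illustrative usage (a sketch; every other required property is taken from Defaults):
+ #
+ #   Attribute.new(String, :name => "title", :upperBound => 1)   # etype will be :EString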
+
+end
+
+class Reference < Feature
+ attr_accessor :opposite
+
+ Properties = {
+ :name => :required,
+ :ordered => :required,
+ :unique => :required,
+ :changeable => :required,
+ :volatile => :required,
+ :transient => :required,
+ :unsettable => :required,
+ :derived => :required,
+ :lowerBound => :required,
+ :upperBound => :required,
+ :resolveProxies => :required,
+ :containment => :required
+ }
+
+ Defaults = {
+ :ordered => true,
+ :unique => true,
+ :changeable => true,
+ :volatile => false,
+ :transient => false,
+ :unsettable => false,
+ :derived => false,
+ :lowerBound => 0,
+ :resolveProxies => true
+ }
+
+ def self.default_value(prop)
+ Defaults[prop]
+ end
+
+ def self.properties
+ Properties.keys.sort{|a,b| a.to_s <=> b.to_s}
+ end
+
+ def initialize(type, props)
+ @props = Defaults.merge(props)
+ if type.respond_to?(:_metamodel_description)
+ @etype = nil
+ @impl_type = type
+ else
+ raise ArgumentError.new("'#{type}' (#{type.class}) is not a MMBase in reference #{props[:name]}")
+ end
+ if @props[:derived]
+ @props[:changeable] = false
+ @props[:volatile] = true
+ @props[:transient] = true
+ end
+ check(Properties)
+ end
+
+end
+
+end
+
+end
+
+end
+
diff --git a/lib/puppet/vendor/rgen/lib/rgen/metamodel_builder/mm_multiple.rb b/lib/puppet/vendor/rgen/lib/rgen/metamodel_builder/mm_multiple.rb
new file mode 100644
index 000000000..8c4b402bf
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/rgen/metamodel_builder/mm_multiple.rb
@@ -0,0 +1,23 @@
+
+module RGen
+
+module MetamodelBuilder
+
+def self.MMMultiple(*superclasses)
+ c = Class.new(MMBase)
+ class << c
+ attr_reader :multiple_superclasses
+ end
+ c.instance_variable_set(:@multiple_superclasses, superclasses)
+ superclasses.collect{|sc| sc.ancestors}.flatten.
+ reject{|m| m.is_a?(Class)}.each do |arg|
+ c.instance_eval do
+ include arg
+ end
+ end
+ return c
+end
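+
+# Illustrative usage (a sketch; ClassA and ClassB stand for metamodel classes derived
+# from RGen::MetamodelBuilder::MMBase and defined elsewhere):
+#
+#   class ClassC < RGen::MetamodelBuilder::MMMultiple(ClassA, ClassB)
+#   end
+#
+#   ClassC.superclass.multiple_superclasses   # => [ClassA, ClassB]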
+
+end
+
+end
\ No newline at end of file
diff --git a/lib/puppet/vendor/rgen/lib/rgen/metamodel_builder/module_extension.rb b/lib/puppet/vendor/rgen/lib/rgen/metamodel_builder/module_extension.rb
new file mode 100644
index 000000000..16d61b0e0
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/rgen/metamodel_builder/module_extension.rb
@@ -0,0 +1,42 @@
+require 'rgen/ecore/ecore_interface'
+require 'rgen/metamodel_builder/intermediate/annotation'
+
+module RGen
+
+module MetamodelBuilder
+
+# This module is used to extend modules which should be
+# part of RGen metamodels
+module ModuleExtension
+ include RGen::ECore::ECoreInterface
+
+ def annotation(hash)
+ _annotations << Intermediate::Annotation.new(hash)
+ end
+
+ def _annotations
+ @_annotations ||= []
+ end
+
+ def _constantOrder
+ @_constantOrder ||= []
+ end
+
+ def final_method(m)
+ @final_methods ||= []
+ @final_methods << m
+ end
+
+ def method_added(m)
+ raise "Method #{m} can not be redefined" if @final_methods && @final_methods.include?(m)
+ end
+
+ def self.extended(m)
+ MetamodelBuilder::ConstantOrderHelper.moduleCreated(m)
+ end
+
+end
+
+end
+
+end
diff --git a/lib/puppet/vendor/rgen/lib/rgen/model_builder.rb b/lib/puppet/vendor/rgen/lib/rgen/model_builder.rb
new file mode 100644
index 000000000..f4643c8d7
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/rgen/model_builder.rb
@@ -0,0 +1,32 @@
+require 'rgen/model_builder/builder_context'
+require 'rgen/util/method_delegation'
+#require 'ruby-prof'
+
+module RGen
+
+module ModelBuilder
+
+ def self.build(package, env=nil, builderMethodsModule=nil, &block)
+ resolver = ReferenceResolver.new
+ bc = BuilderContext.new(package, builderMethodsModule, resolver, env)
+ contextModule = eval("Module.nesting", block.binding).first
+ Util::MethodDelegation.registerDelegate(bc, contextModule, "const_missing")
+ BuilderContext.currentBuilderContext = bc
+ begin
+ #RubyProf.start
+ bc.instance_eval(&block)
+ #prof = RubyProf.stop
+ #File.open("profile_flat.txt","w+") do |f|
+ # RubyProf::FlatPrinter.new(prof).print(f, 0)
+ # end
+ ensure
+ BuilderContext.currentBuilderContext = nil
+ end
+ Util::MethodDelegation.unregisterDelegate(bc, contextModule, "const_missing")
+ #puts "Resolving..."
+ resolver.resolve(bc.toplevelElements)
+ bc.toplevelElements
+ end
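+
+ # Illustrative usage (a hedged sketch, not taken from this library's documentation;
+ # "MyMetamodel" is a hypothetical metamodel module and the "package"/"element" commands
+ # assume metamodel classes named Package and Element, since commands are the class
+ # names with a lowercased first letter):
+ #
+ #   elements = RGen::ModelBuilder.build(MyMetamodel) do
+ #     package "p1" do
+ #       element "e1"
+ #     end
+ #   end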
+end
+
+end
diff --git a/lib/puppet/vendor/rgen/lib/rgen/model_builder/builder_context.rb b/lib/puppet/vendor/rgen/lib/rgen/model_builder/builder_context.rb
new file mode 100644
index 000000000..09de32a22
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/rgen/model_builder/builder_context.rb
@@ -0,0 +1,334 @@
+require 'rgen/ecore/ecore_ext'
+require 'rgen/model_builder/reference_resolver'
+
+module RGen
+
+module ModelBuilder
+
+class BuilderContext
+ attr_reader :toplevelElements
+
+ def initialize(package, extensionsModule, resolver, env=nil)
+ package = package.ecore unless package.is_a?(RGen::ECore::EPackage)
+ raise "First argument must be a metamodel package" \
+ unless package.is_a?(RGen::ECore::EPackage)
+ @rootPackage, @env = package, env
+ @commandResolver = CommandResolver.new(package, extensionsModule, self)
+ @package = @rootPackage
+ @resolver = resolver
+ @contextStack = []
+ @toplevelElements = []
+ @helperNames = {}
+ end
+
+ def const_missing_delegated(delegator, const)
+ ConstPathElement.new(const, self)
+ end
+
+ # in Ruby 1.9.0 and 1.9.1 #instance_eval looks up constants in the calling scope
+ # that's why const_missing needs to be prepared in BuilderContext, too
+ class << self
+ def currentBuilderContext=(bc)
+ @@currentBuilderContext = bc
+ end
+
+ def const_missing(name)
+ if @@currentBuilderContext
+ ConstPathElement.new(name, @@currentBuilderContext)
+ else
+ super
+ end
+ end
+ end
+
+ class CommandResolver
+ def initialize(rootPackage, extensionsModule, builderContext)
+ @extensionFactory = ExtensionContainerFactory.new(rootPackage, extensionsModule, builderContext)
+ @packageResolver = PackageResolver.new(rootPackage, @extensionFactory)
+ @resolveCommand = {}
+ end
+
+ def resolveCommand(cmd, parentPackage)
+ return @resolveCommand[[parentPackage, cmd]] if @resolveCommand.has_key?([parentPackage, cmd])
+ package = @packageResolver.packageByCommand(parentPackage, cmd)
+ result = nil
+ if package
+ extensionContainer = @extensionFactory.extensionContainer(package)
+ if extensionContainer.respond_to?(cmd)
+ result = extensionContainer
+ else
+ className = cmd.to_s[0..0].upcase + cmd.to_s[1..-1]
+ result = package.eClasses.find{|c| c.name == className}
+ end
+ end
+ @resolveCommand[[parentPackage, cmd]] = [package, result]
+ end
+ end
+
+ def method_missing(m, *args, &block)
+ package, classOrContainer = @commandResolver.resolveCommand(m, @package)
+ return super if package.nil?
+ return classOrContainer.send(m, *args, &block) if classOrContainer.is_a?(ExtensionContainerFactory::ExtensionContainer)
+ eClass = classOrContainer
+ nameArg, argHash = self.class.processArguments(args)
+ internalName = nameArg || argHash[:name]
+ argHash[:name] ||= nameArg if nameArg && self.class.hasNameAttribute(eClass)
+ resolverJobs, asRole, helperName = self.class.filterArgHash(argHash, eClass)
+ element = eClass.instanceClass.new(argHash)
+ @resolver.setElementName(element, internalName)
+ @env << element if @env
+ contextElement = @contextStack.last
+ if contextElement
+ self.class.associateWithContextElement(element, contextElement, asRole)
+ else
+ @toplevelElements << element
+ end
+ resolverJobs.each do |job|
+ job.receiver = element
+ job.namespace = contextElement
+ @resolver.addJob(job)
+ end
+ # process block
+ if block
+ @contextStack.push(element)
+ @package, oldPackage = package, @package
+ instance_eval(&block)
+ @package = oldPackage
+ @contextStack.pop
+ end
+ element
+ end
+
+ def _using(constPathElement, &block)
+ @package, oldPackage =
+ self.class.resolvePackage(@package, @rootPackage, constPathElement.constPath), @package
+ instance_eval(&block)
+ @package = oldPackage
+ end
+
+ def _context(depth=1)
+ @contextStack[-depth]
+ end
+
+ class ExtensionContainerFactory
+
+ class ExtensionContainer
+ def initialize(builderContext)
+ @builderContext = builderContext
+ end
+ def method_missing(m, *args, &block)
+ @builderContext.send(m, *args, &block)
+ end
+ end
+
+ def initialize(rootPackage, extensionsModule, builderContext)
+ @rootPackage, @extensionsModule, @builderContext = rootPackage, extensionsModule, builderContext
+ @extensionContainer = {}
+ end
+
+ def moduleForPackage(package)
+ qName = package.qualifiedName
+ rqName = @rootPackage.qualifiedName
+ raise "Package #{qName} is not contained within #{rqName}" unless qName.index(rqName) == 0
+ path = qName.sub(rqName,'').split('::')
+ path.shift if path.first == ""
+ mod = @extensionsModule
+ path.each do |p|
+ if mod && mod.const_defined?(p)
+ mod = mod.const_get(p)
+ else
+ mod = nil
+ break
+ end
+ end
+ mod
+ end
+
+ def extensionContainer(package)
+ return @extensionContainer[package] if @extensionContainer[package]
+ container = ExtensionContainer.new(@builderContext)
+ extensionModule = moduleForPackage(package)
+ container.extend(extensionModule) if extensionModule
+ @extensionContainer[package] = container
+ end
+ end
+
+ class PackageResolver
+ def initialize(rootPackage, extensionFactory)
+ @rootPackage = rootPackage
+ @extensionFactory = extensionFactory
+ @packageByCommand = {}
+ end
+
+ def packageByCommand(contextPackage, name)
+ return @packageByCommand[[contextPackage, name]] if @packageByCommand.has_key?([contextPackage, name])
+ if @extensionFactory.extensionContainer(contextPackage).respond_to?(name)
+ result = contextPackage
+ else
+ className = name.to_s[0..0].upcase + name.to_s[1..-1]
+ eClass = contextPackage.eClasses.find{|c| c.name == className}
+ if eClass
+ result = contextPackage
+ elsif contextPackage != @rootPackage
+ result = packageByCommand(contextPackage.eSuperPackage, name)
+ else
+ result = nil
+ end
+ end
+ @packageByCommand[[contextPackage, name]] = result
+ end
+ end
+
+ class ConstPathElement < Module
+ def initialize(name, builderContext, parent=nil)
+ @name = name.to_s
+ @builderContext = builderContext
+ @parent = parent
+ end
+
+ def const_missing(const)
+ ConstPathElement.new(const, @builderContext, self)
+ end
+
+ def method_missing(m, *args, &block)
+ @builderContext._using(self) do
+ send(m, *args, &block)
+ end
+ end
+
+ def constPath
+ if @parent
+ @parent.constPath << @name
+ else
+ [@name]
+ end
+ end
+ end
+
+ # helper methods put in the class object to be out of the way of
+ # method evaluation in the builder context
+ class << self
+ class PackageNotFoundException < Exception
+ end
+
+ def resolvePackage(contextPackage, rootPackage, path)
+ begin
+ return resolvePackageDownwards(contextPackage, path)
+ rescue PackageNotFoundException
+ if contextPackage.eSuperPackage && contextPackage != rootPackage
+ return resolvePackage(contextPackage.eSuperPackage, rootPackage, path)
+ else
+ raise
+ end
+ end
+ end
+
+ def resolvePackageDownwards(contextPackage, path)
+ first, *rest = path
+ package = contextPackage.eSubpackages.find{|p| p.name == first}
+ raise PackageNotFoundException.new("Could not resolve package: #{first} is not a subpackage of #{contextPackage.name}") unless package
+ if rest.empty?
+ package
+ else
+ resolvePackageDownwards(package, rest)
+ end
+ end
+
+ def processArguments(args)
+ unless (args.size == 2 && args.first.is_a?(String) && args.last.is_a?(Hash)) ||
+ (args.size == 1 && (args.first.is_a?(String) || args.first.is_a?(Hash))) ||
+ args.size == 0
+ raise "Provide a Hash to set feature values, " +
+ "optionally the first argument may be a String specifying " +
+ "the value of the \"name\" attribute."
+ end
+ if args.last.is_a?(Hash)
+ argHash = args.last
+ else
+ argHash = {}
+ end
+ nameArg = args.first if args.first.is_a?(String)
+ [nameArg, argHash]
+ end
+
+ def filterArgHash(argHash, eClass)
+ resolverJobs = []
+ asRole, helperName = nil, nil
+ refByName = {}
+ eAllReferences(eClass).each {|r| refByName[r.name] = r}
+ argHash.each_pair do |k,v|
+ if k == :as
+ asRole = v
+ argHash.delete(k)
+ elsif k == :name && !hasNameAttribute(eClass)
+ helperName = v
+ argHash.delete(k)
+ elsif v.is_a?(String)
+ ref = refByName[k.to_s]#eAllReferences(eClass).find{|r| r.name == k.to_s}
+ if ref
+ argHash.delete(k)
+ resolverJobs << ReferenceResolver::ResolverJob.new(nil, ref, nil, v)
+ end
+ elsif v.is_a?(Array)
+ ref = refByName[k.to_s] #eAllReferences(eClass).find{|r| r.name == k.to_s}
+ ref && v.dup.each do |e|
+ if e.is_a?(String)
+ v.delete(e)
+ resolverJobs << ReferenceResolver::ResolverJob.new(nil, ref, nil, e)
+ end
+ end
+ end
+ end
+ [ resolverJobs, asRole, helperName ]
+ end
+
+ def hasNameAttribute(eClass)
+ @hasNameAttribute ||= {}
+ @hasNameAttribute[eClass] ||= eClass.eAllAttributes.any?{|a| a.name == "name"}
+ end
+
+ def eAllReferences(eClass)
+ @eAllReferences ||= {}
+ @eAllReferences[eClass] ||= eClass.eAllReferences
+ end
+
+ def containmentRefs(contextClass, eClass)
+ @containmentRefs ||= {}
+ @containmentRefs[[contextClass, eClass]] ||=
+ eAllReferences(contextClass).select do |r|
+ r.containment && (eClass.eAllSuperTypes << eClass).include?(r.eType)
+ end
+ end
+
+ def associateWithContextElement(element, contextElement, asRole)
+ return unless contextElement
+ contextClass = contextElement.class.ecore
+ if asRole
+ asRoleRef = eAllReferences(contextClass).find{|r| r.name == asRole.to_s}
+ raise "Context class #{contextClass.name} has no reference named #{asRole}" unless asRoleRef
+ ref = asRoleRef
+ else
+ possibleContainmentRefs = containmentRefs(contextClass, element.class.ecore)
+ if possibleContainmentRefs.size == 1
+ ref = possibleContainmentRefs.first
+ elsif possibleContainmentRefs.size == 0
+ raise "Context class #{contextClass.name} can not contain a #{element.class.ecore.name}"
+ else
+ raise "Context class #{contextClass.name} has several containment references to a #{element.class.ecore.name}." +
+ " Clearify using \":as => <role>\""
+ end
+ end
+ if ref.many
+ contextElement.addGeneric(ref.name, element)
+ else
+ contextElement.setGeneric(ref.name, element)
+ end
+ end
+
+ end
+
+end
+
+end
+
+end
diff --git a/lib/puppet/vendor/rgen/lib/rgen/model_builder/model_serializer.rb b/lib/puppet/vendor/rgen/lib/rgen/model_builder/model_serializer.rb
new file mode 100644
index 000000000..7799c4dc0
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/rgen/model_builder/model_serializer.rb
@@ -0,0 +1,225 @@
+require 'rgen/array_extensions'
+require 'rgen/ecore/ecore_ext'
+
+module RGen
+
+module ModelBuilder
+
+class ModelSerializer
+
+ def initialize(writable, rootPackage)
+ @writable = writable
+ @currentPackage = rootPackage
+ @qualifiedElementName = {}
+ @internalElementName = {}
+ @relativeQualifiedElementName = {}
+ end
+
+ def serialize(elements)
+ calcQualifiedElementNames(elements)
+ unifyQualifiedElementNames
+ elements = [elements] unless elements.is_a?(Enumerable)
+ elements.each do |e|
+ serializeElement(e)
+ end
+ end
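+
+ # Illustrative usage (a sketch; "MyMM" is a hypothetical metamodel module and
+ # "elements" are the toplevel elements to be written out as model builder code):
+ #
+ #   File.open("model.rb", "w") do |f|
+ #     ModelSerializer.new(f, MyMM.ecore).serialize(elements)
+ #   end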
+
+ private
+
+ def serializeElement(element, viaRef=nil, namePath=[], indent=0)
+ className = element.class.ecore.name
+ cmd = className[0..0].downcase+className[1..-1]
+ args = ["\"#{@internalElementName[element]}\""]
+ namePath = namePath + [@internalElementName[element]]
+ childs = []
+ eAllStructuralFeatures(element).each do |f|
+ next if f.derived
+ if f.is_a?(RGen::ECore::EAttribute)
+ next if f.name == "name" && element.name == @internalElementName[element]
+ val = element.getGeneric(f.name)
+ #puts f.defaultValue.inspect if f.name == "isRoot"
+ args << ":#{f.name} => #{serializeAttribute(val)}" unless val == f.defaultValue || val.nil?
+ elsif !f.containment
+ next if f.eOpposite && f.eOpposite == viaRef
+ val = element.getGeneric(f.name)
+ refString = serializeReference(element, f, val)
+ args << ":#{f.name} => #{refString}" if refString
+ else
+ cs = element.getGeneric(f.name)
+ refString = nil
+ if cs.is_a?(Array)
+ cs.compact!
+ rcs = cs.select{|c| serializeChild?(c, namePath)}
+ childs << [f, rcs] unless rcs.empty?
+ refString = serializeReference(element, f, cs-rcs)
+ else
+ if cs && serializeChild?(cs, namePath)
+ childs << [f, [cs]]
+ else
+ refString = serializeReference(element, f, cs)
+ end
+ end
+ args << ":#{f.name} => #{refString}" if refString
+ end
+ end
+
+ args << ":as => :#{viaRef.name}" if viaRef && containmentRefs(viaRef.eContainingClass, element.class.ecore).size > 1
+ cmd = elementPackage(element)+"."+cmd if elementPackage(element).size > 0
+ @writable.write " " * indent + cmd + " " + args.join(", ")
+ if childs.size > 0
+ @writable.write " do\n"
+ oldPackage, @currentPackage = @currentPackage, element.class.ecore.ePackage
+ childs.each do |pair|
+ f, cs = pair
+ cs.each {|c| serializeElement(c, f, namePath, indent+1) }
+ end
+ @currentPackage = oldPackage
+ @writable.write " " * indent + "end\n"
+ else
+ @writable.write "\n"
+ end
+ end
+
+ def serializeChild?(child, namePath)
+ @qualifiedElementName[child][0..-2] == namePath
+ end
+
+ def serializeAttribute(value)
+ if value.is_a?(String)
+ "\"#{value.gsub("\"","\\\"")}\""
+ elsif value.is_a?(Symbol)
+ ":#{value}"
+ elsif value.nil?
+ "nil"
+ else
+ value.to_s
+ end
+ end
+
+ def serializeReference(element, ref, value)
+ if value.is_a?(Array)
+ value = value.compact
+ value = value.select{|v| compareWithOppositeReference(ref, element, v) > 0} if ref.eOpposite
+ qualNames = value.collect do |v|
+ relativeQualifiedElementName(v, element).join(".")
+ end
+ !qualNames.empty? && ("[" + qualNames.collect { |v| "\"#{v}\"" }.join(", ") + "]")
+ elsif value && (!ref.eOpposite || compareWithOppositeReference(ref, element, value) > 0)
+ qualName = relativeQualifiedElementName(value, element).join(".")
+ ("\"#{qualName}\"")
+ end
+ end
+
+ # decide which part of a bidirectional reference gets serialized
+ def compareWithOppositeReference(ref, element, target)
+ result = 0
+ # first try to make the reference from the many side to the one side
+ result = -1 if ref.many && !ref.eOpposite.many
+ result = 1 if !ref.many && ref.eOpposite.many
+ return result if result != 0
+ # for 1:1 or many:many, prefer shorter references
+ result = relativeQualifiedElementName(element, target).size <=>
+ relativeQualifiedElementName(target, element).size
+ return result if result != 0
+ # there just needs to be a decision, use class name or object_id
+ result = element.class.name <=> target.class.name
+ return result if result != 0
+ element.object_id <=> target.object_id
+ end
+
+ def elementPackage(element)
+ @elementPackage ||= {}
+ return @elementPackage[element] if @elementPackage[element]
+ eNames = element.class.ecore.ePackage.qualifiedName.split("::")
+ rNames = @currentPackage.qualifiedName.split("::")
+ while eNames.first == rNames.first && !eNames.first.nil?
+ eNames.shift
+ rNames.shift
+ end
+ @elementPackage[element] = eNames.join("::")
+ end
+
+ def relativeQualifiedElementName(element, context)
+ return @relativeQualifiedElementName[[element, context]] if @relativeQualifiedElementName[[element, context]]
+ # elements which are not in the @qualifiedElementName Hash are not in the scope
+ # of this serialization and will be ignored
+ return [] if element.nil? || @qualifiedElementName[element].nil?
+ return [] if context.nil? || @qualifiedElementName[context].nil?
+ eNames = @qualifiedElementName[element].dup
+ cNames = @qualifiedElementName[context].dup
+ while eNames.first == cNames.first && eNames.size > 1
+ eNames.shift
+ cNames.shift
+ end
+ @relativeQualifiedElementName[[element, context]] = eNames
+ end
+
+ def calcQualifiedElementNames(elements, prefix=[], takenNames=[])
+ elements = [elements] unless elements.is_a?(Array)
+ elements.compact!
+ elements.each do |element|
+ qualifiedNamePath = prefix + [calcInternalElementName(element, takenNames)]
+ @qualifiedElementName[element] ||= []
+ @qualifiedElementName[element] << qualifiedNamePath
+ takenChildNames = []
+ eAllStructuralFeatures(element).each do |f|
+ if f.is_a?(RGen::ECore::EReference) && f.containment
+ childs = element.getGeneric(f.name)
+ calcQualifiedElementNames(childs, qualifiedNamePath, takenChildNames)
+ end
+ end
+ end
+ end
+
+ def unifyQualifiedElementNames
+ @qualifiedElementName.keys.each do |k|
+ @qualifiedElementName[k] = @qualifiedElementName[k].sort{|a,b| a.size <=> b.size}.first
+ end
+ end
+
+ def calcInternalElementName(element, takenNames)
+ return @internalElementName[element] if @internalElementName[element]
+ name = if element.respond_to?(:name) && element.name && !element.name.empty?
+ element.name
+ else
+ nextElementHelperName(element)
+ end
+ while takenNames.include?(name)
+ name = nextElementHelperName(element)
+ end
+ takenNames << name
+ @internalElementName[element] = name
+ end
+
+ def nextElementHelperName(element)
+ eClass = element.class.ecore
+ @nextElementNameId ||= {}
+ @nextElementNameId[eClass] ||= 1
+ result = "_#{eClass.name}#{@nextElementNameId[eClass]}"
+ @nextElementNameId[eClass] += 1
+ result
+ end
+
+ def eAllStructuralFeatures(element)
+ @eAllStructuralFeatures ||= {}
+ @eAllStructuralFeatures[element.class] ||= element.class.ecore.eAllStructuralFeatures
+ end
+
+ def eAllReferences(eClass)
+ @eAllReferences ||= {}
+ @eAllReferences[eClass] ||= eClass.eAllReferences
+ end
+
+ def containmentRefs(contextClass, eClass)
+ @containmentRefs ||= {}
+ @containmentRefs[[contextClass, eClass]] ||=
+ eAllReferences(contextClass).select do |r|
+ r.containment && (eClass.eAllSuperTypes << eClass).include?(r.eType)
+ end
+ end
+
+end
+
+end
+
+end
diff --git a/lib/puppet/vendor/rgen/lib/rgen/model_builder/reference_resolver.rb b/lib/puppet/vendor/rgen/lib/rgen/model_builder/reference_resolver.rb
new file mode 100644
index 000000000..cf870b0c2
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/rgen/model_builder/reference_resolver.rb
@@ -0,0 +1,156 @@
+require 'rgen/array_extensions'
+
+module RGen
+
+module ModelBuilder
+
+class ReferenceResolver
+ ResolverJob = Struct.new(:receiver, :reference, :namespace, :string)
+
+ class ResolverException < Exception
+ end
+
+ class ToplevelNamespace
+ def initialize(ns)
+ raise "Namespace must be an Enumerable" unless ns.is_a?(Enumerable)
+ @ns = ns
+ end
+ def elements
+ @ns
+ end
+ end
+
+ def initialize
+ @jobs = []
+ @elementName = {}
+ end
+
+ def addJob(job)
+ @jobs << job
+ end
+
+ def setElementName(element, name)
+ @elementName[element] = name
+ end
+
+ def resolve(ns=[])
+ @toplevelNamespace = ToplevelNamespace.new(ns)
+ (@jobs || []).each_with_index do |job, i|
+ target = resolveReference(job.namespace || @toplevelNamespace, job.string.split("."))
+ raise ResolverException.new("Can not resolve reference #{job.string}") unless target
+ if job.reference.many
+ job.receiver.addGeneric(job.reference.name, target)
+ else
+ job.receiver.setGeneric(job.reference.name, target)
+ end
+ end
+ end
+
+ private
+
+ # TODO: if a reference can not be fully resolved, but a prefix can be found,
+ # the exception reported is that its first path element can not be found on
+ # toplevel
+ def resolveReference(namespace, nameParts)
+ element = resolveReferenceDownwards(namespace, nameParts)
+ if element.nil? && parentNamespace(namespace)
+ element = resolveReference(parentNamespace(namespace), nameParts)
+ end
+ element
+ end
+
+ def resolveReferenceDownwards(namespace, nameParts)
+ firstPart, *restParts = nameParts
+ element = namespaceElementByName(namespace, firstPart)
+ return nil unless element
+ if restParts.size > 0
+ resolveReferenceDownwards(element, restParts)
+ else
+ element
+ end
+ end
+
+ def namespaceElementByName(namespace, name)
+ @namespaceElementsByName ||= {}
+ return @namespaceElementsByName[namespace][name] if @namespaceElementsByName[namespace]
+ hash = {}
+ namespaceElements(namespace).each do |e|
+ raise ResolverException.new("Multiple elements named #{elementName(e)} found in #{nsToS(namespace)}") if hash[elementName(e)]
+ hash[elementName(e)] = e if elementName(e)
+ end
+ @namespaceElementsByName[namespace] = hash
+ hash[name]
+ end
+
+ def parentNamespace(namespace)
+ if namespace.class.respond_to?(:ecore)
+ parents = elementParents(namespace)
+ raise ResolverException.new("Element #{nsToS(namespace)} has multiple parents") \
+ if parents.size > 1
+ parents.first || @toplevelNamespace
+ else
+ nil
+ end
+ end
+
+ def namespaceElements(namespace)
+ if namespace.is_a?(ToplevelNamespace)
+ namespace.elements
+ elsif namespace.class.respond_to?(:ecore)
+ elementChildren(namespace)
+ else
+ raise ResolverException.new("Element #{nsToS(namespace)} can not be used as a namespace")
+ end
+ end
+
+ def nsToS(namespace)
+ if namespace.is_a?(ToplevelNamespace)
+ "toplevel namespace"
+ else
+ result = namespace.class.name
+ result += ":\"#{elementName(namespace)}\"" if elementName(namespace)
+ result
+ end
+ end
+
+ def elementName(element)
+ @elementName[element]
+ end
+
+ def elementChildren(element)
+ @elementChildren ||= {}
+ return @elementChildren[element] if @elementChildren[element]
+ children = containmentRefs(element).collect do |r|
+ element.getGeneric(r.name)
+ end.flatten.compact
+ @elementChildren[element] = children
+ end
+
+ def elementParents(element)
+ @elementParents ||= {}
+ return @elementParents[element] if @elementParents[element]
+ parents = parentRefs(element).collect do |r|
+ element.getGeneric(r.name)
+ end.flatten.compact
+ @elementParents[element] = parents
+ end
+
+ def containmentRefs(element)
+ @containmentRefs ||= {}
+ @containmentRefs[element.class] ||= eAllReferences(element).select{|r| r.containment}
+ end
+
+ def parentRefs(element)
+ @parentRefs ||= {}
+ @parentRefs[element.class] ||= eAllReferences(element).select{|r| r.eOpposite && r.eOpposite.containment}
+ end
+
+ def eAllReferences(element)
+ @eAllReferences ||= {}
+ @eAllReferences[element.class] ||= element.class.ecore.eAllReferences
+ end
+end
+
+end
+
+end
\ No newline at end of file
diff --git a/lib/puppet/vendor/rgen/lib/rgen/serializer/json_serializer.rb b/lib/puppet/vendor/rgen/lib/rgen/serializer/json_serializer.rb
new file mode 100644
index 000000000..46e7bcf68
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/rgen/serializer/json_serializer.rb
@@ -0,0 +1,121 @@
+module RGen
+
+module Serializer
+
+class JsonSerializer
+
+ def initialize(writer, opts={})
+ @writer = writer
+ @elementIdentifiers = {}
+ @identAttrName = opts[:identAttrName] || "name"
+ @separator = opts[:separator] || "/"
+ @leadingSeparator = opts.has_key?(:leadingSeparator) ? opts[:leadingSeparator] : true
+ @featureFilter = opts[:featureFilter]
+ @identifierProvider = opts[:identifierProvider]
+ end
+
+ def elementIdentifier(element)
+ ident = @identifierProvider && @identifierProvider.call(element)
+ ident || (element.is_a?(RGen::MetamodelBuilder::MMProxy) && element.targetIdentifier) || qualifiedElementName(element)
+ end
+
+ # simple identifier calculation based on qualified names
+ # prerequisites:
+ # * containment relations must be bidirectional
+ # * local name stored in a single attribute +@identAttrName+ for all classes
+ #
+ def qualifiedElementName(element)
+ return @elementIdentifiers[element] if @elementIdentifiers[element]
+ localIdent = ((element.respond_to?(@identAttrName) && element.getGeneric(@identAttrName)) || "").strip
+ parentRef = element.class.ecore.eAllReferences.select{|r| r.eOpposite && r.eOpposite.containment}.first
+ parent = parentRef && element.getGeneric(parentRef.name)
+ if parent
+ if localIdent.size > 0
+ parentIdent = qualifiedElementName(parent)
+ result = parentIdent + @separator + localIdent
+ else
+ result = qualifiedElementName(parent)
+ end
+ else
+ result = (@leadingSeparator ? @separator : "") + localIdent
+ end
+ @elementIdentifiers[element] = result
+ end
+
+ def serialize(elements)
+ if elements.is_a?(Array)
+ write("[ ")
+ elements.each_with_index do |e, i|
+ serializeElement(e)
+ write(",\n") unless i == elements.size-1
+ end
+ write("]")
+ else
+ serializeElement(elements)
+ end
+ end
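+
+ # Illustrative usage (a minimal sketch; "model" stands for any RGen model element or
+ # array of elements, and StringIO is just one possible writer):
+ #
+ #   require 'stringio'
+ #   out = StringIO.new
+ #   JsonSerializer.new(out).serialize(model)
+ #   out.string   # the resulting JSON text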
+
+ def serializeElement(element, indent="")
+ write(indent + "{ \"_class\": \""+element.class.ecore.name+"\"")
+ element.class.ecore.eAllStructuralFeatures.each do |f|
+ next if f.derived
+ value = element.getGeneric(f.name)
+ unless value == [] || value.nil? ||
+ (f.is_a?(RGen::ECore::EReference) && f.eOpposite && f.eOpposite.containment) ||
+ (@featureFilter && !@featureFilter.call(f))
+ write(", ")
+ writeFeature(f, value, indent)
+ end
+ end
+ write(" }")
+ end
+
+ def writeFeature(feat, value, indent)
+ write("\""+feat.name+"\": ")
+ if feat.is_a?(RGen::ECore::EAttribute)
+ if value.is_a?(Array)
+ write("[ "+value.collect{|v| attributeValue(v, feat)}.join(", ")+" ]")
+ else
+ write(attributeValue(value, feat))
+ end
+ elsif !feat.containment
+ if value.is_a?(Array)
+ write("[ "+value.collect{|v| "\""+elementIdentifier(v)+"\""}.join(", ")+" ]")
+ else
+ write("\""+elementIdentifier(value)+"\"")
+ end
+ else
+ if value.is_a?(Array)
+ write("[ \n")
+ value.each_with_index do |v, i|
+ serializeElement(v, indent+" ")
+ write(",\n") unless i == value.size-1
+ end
+ write("]")
+ else
+ write("\n")
+ serializeElement(value, indent+" ")
+ end
+ end
+ end
+
+ def attributeValue(value, a)
+ if a.eType == RGen::ECore::EString || a.eType.is_a?(RGen::ECore::EEnum)
+ "\""+value.to_s.gsub('\\','\\\\\\\\').gsub('"','\\"').gsub("\n","\\n").gsub("\r","\\r").
+ gsub("\t","\\t").gsub("\f","\\f").gsub("\b","\\b")+"\""
+ else
+ value.to_s
+ end
+ end
+
+ private
+
+ def write(s)
+ @writer.write(s)
+ end
+end
+
+end
+
+end
+
diff --git a/lib/puppet/vendor/rgen/lib/rgen/serializer/opposite_reference_filter.rb b/lib/puppet/vendor/rgen/lib/rgen/serializer/opposite_reference_filter.rb
new file mode 100644
index 000000000..7a6f7235c
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/rgen/serializer/opposite_reference_filter.rb
@@ -0,0 +1,18 @@
+module RGen
+
+module Serializer
+
+# Filters references with an eOpposite:
+# 1. containment references are always preferred
+# 2. at a 1-to-n reference the 1-reference is always preferred
+# 3. otherwise the reference whose name sorts before the opposite's name is preferred
+#
+OppositeReferenceFilter = proc do |features|
+ features.reject{|f| f.is_a?(RGen::ECore::EReference) && !f.containment && f.eOpposite &&
+ (f.eOpposite.containment || (f.many && !f.eOpposite.many) || (!(!f.many && f.eOpposite.many) && (f.name < f.eOpposite.name)))}
+end
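+
+# Illustrative usage (a sketch; SomeClass stands for any RGen metamodel class):
+#
+#   features = SomeClass.ecore.eAllStructuralFeatures
+#   OppositeReferenceFilter.call(features)   # features without the redundant opposite references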
+
+end
+
+end
+
diff --git a/lib/puppet/vendor/rgen/lib/rgen/serializer/qualified_name_provider.rb b/lib/puppet/vendor/rgen/lib/rgen/serializer/qualified_name_provider.rb
new file mode 100644
index 000000000..1fcce1bc5
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/rgen/serializer/qualified_name_provider.rb
@@ -0,0 +1,47 @@
+module RGen
+
+module Serializer
+
+# simple identifier calculation based on qualified names.
+# as a prerequisite, elements must have a local name stored in a single attribute +attribute_name+.
+# there may be classes without the name attribute though and there may be elements without a
+# local name. in both cases the element will have the same qualified name as its container.
+#
+class QualifiedNameProvider
+
+ def initialize(options={})
+ @qualified_name_cache = {}
+ @attribute_name = options[:attribute_name] || "name"
+ @separator = options[:separator] || "/"
+ @leading_separator = options.has_key?(:leading_separator) ? options[:leading_separator] : true
+ end
+
+ def identifier(element)
+ if element.is_a?(RGen::MetamodelBuilder::MMProxy)
+ element.targetIdentifier
+ else
+ qualified_name(element)
+ end
+ end
+
+ def qualified_name(element)
+ return @qualified_name_cache[element] if @qualified_name_cache[element]
+ local_ident = ((element.respond_to?(@attribute_name) && element.getGeneric(@attribute_name)) || "").strip
+ parent = element.eContainer
+ if parent
+ if local_ident.size > 0
+ result = qualified_name(parent) + @separator + local_ident
+ else
+ result = qualified_name(parent)
+ end
+ else
+ result = (@leading_separator ? @separator : "") + local_ident
+ end
+ @qualified_name_cache[element] = result
+ end
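+
+ # Illustrative usage (a sketch; "element" stands for a model element within a
+ # containment hierarchy whose classes have a "name" attribute):
+ #
+ #   provider = QualifiedNameProvider.new(:separator => "::", :leading_separator => false)
+ #   provider.identifier(element)   # e.g. "outer::inner"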
+end
+
+end
+
+end
+
diff --git a/lib/puppet/vendor/rgen/lib/rgen/serializer/xmi11_serializer.rb b/lib/puppet/vendor/rgen/lib/rgen/serializer/xmi11_serializer.rb
new file mode 100644
index 000000000..f76f6c3c7
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/rgen/serializer/xmi11_serializer.rb
@@ -0,0 +1,116 @@
+require 'rgen/serializer/xml_serializer'
+
+module RGen
+
+module Serializer
+
+class XMI11Serializer < XMLSerializer
+
+ def initialize(file)
+ super
+ @namespacePrefix = ""
+ @contentLevelElements = []
+ end
+
+ def setNamespace(shortcut, url)
+ @namespaceShortcut = shortcut
+ @namespaceUrl = url
+ @namespacePrefix = shortcut+":"
+ end
+
+ def serialize(rootElement, headerInfo=nil)
+ attrs = []
+ attrs << ['xmi.version', "1.1"]
+ attrs << ['xmlns:'+@namespaceShortcut, @namespaceUrl] if @namespaceUrl
+ attrs << ['timestamp', Time.now.to_s]
+ startTag("XMI", attrs)
+ if headerInfo
+ startTag("XMI.header")
+ writeHeaderInfo(headerInfo)
+ endTag("XMI.header")
+ end
+ startTag("XMI.content")
+ @contentLevelElements = []
+ writeElement(rootElement)
+ # write remaining toplevel elements, each of which could have
+ # more toplevel elements as children
+ while @contentLevelElements.size > 0
+ writeElement(@contentLevelElements.shift)
+ end
+ endTag("XMI.content")
+ endTag("XMI")
+ end
+
+ def writeHeaderInfo(hash)
+ hash.each_pair do |k,v|
+ tag = "XMI." + k.to_s
+ startTag(tag)
+ if v.is_a?(Hash)
+ writeHeaderInfo(v)
+ else
+ writeText(v.to_s)
+ end
+ endTag(tag)
+ end
+ end
+
+ def writeElement(element)
+ tag = @namespacePrefix + element.class.ecore.name
+ attrs = attributeValues(element)
+ startTag(tag, attrs)
+ containmentReferences(element).each do |r|
+ roletag = @namespacePrefix + r.eContainingClass.name + "." + r.name
+ targets = element.getGeneric(r.name)
+ targets = [ targets ] unless targets.is_a?(Array)
+ targets.compact!
+ next if targets.empty?
+ startTag(roletag)
+ targets.each do |t|
+ if xmiLevel(t) == :content
+ @contentLevelElements << t
+ else
+ writeElement(t)
+ end
+ end
+ endTag(roletag)
+ end
+ endTag(tag)
+ end
+
+ def attributeValues(element)
+ result = [["xmi.id", xmiId(element)]]
+ eAllAttributes(element).select{|a| !a.derived}.each do |a|
+ val = element.getGeneric(a.name)
+ result << [a.name, val] unless val.nil? || val == ""
+ end
+ eAllReferences(element).each do |r|
+ next if r.derived
+ next if r.containment
+ next if r.eOpposite && r.eOpposite.containment && xmiLevel(element).nil?
+ next if r.eOpposite && r.many && !r.eOpposite.many
+ targetElements = element.getGenericAsArray(r.name)
+ targetElements.compact!
+ val = targetElements.collect{|te| xmiId(te)}.compact.join(' ')
+ result << [r.name, val] unless val == ""
+ end
+ result
+ end
+
+ def xmiId(element)
+ if element.respond_to?(:_xmi_id) && element._xmi_id
+ element._xmi_id.to_s
+ else
+ element.object_id.to_s
+ end
+ end
+
+ def xmiLevel(element)
+ return nil unless element.respond_to?(:_xmi_level)
+ element._xmi_level
+ end
+
+end
+
+end
+
+end
diff --git a/lib/puppet/vendor/rgen/lib/rgen/serializer/xmi20_serializer.rb b/lib/puppet/vendor/rgen/lib/rgen/serializer/xmi20_serializer.rb
new file mode 100644
index 000000000..bfe7d8a3d
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/rgen/serializer/xmi20_serializer.rb
@@ -0,0 +1,71 @@
+require 'rgen/serializer/xml_serializer'
+
+module RGen
+
+module Serializer
+
+class XMI20Serializer < XMLSerializer
+
+ def serialize(rootElement)
+ @referenceStrings = {}
+ buildReferenceStrings(rootElement, "#/")
+ addBuiltinReferenceStrings
+ attrs = attributeValues(rootElement)
+ attrs << ['xmi:version', "2.0"]
+ attrs << ['xmlns:xmi', "http://www.omg.org/XMI"]
+ attrs << ['xmlns:xsi', "http://www.w3.org/2001/XMLSchema-instance"]
+ attrs << ['xmlns:ecore', "http://www.eclipse.org/emf/2002/Ecore" ]
+ tag = "ecore:"+rootElement.class.ecore.name
+ startTag(tag, attrs)
+ writeComposites(rootElement)
+ endTag(tag)
+ end
+
+ def writeComposites(element)
+ eachReferencedElement(element, containmentReferences(element)) do |r,te|
+ attrs = attributeValues(te)
+ attrs << ['xsi:type', "ecore:"+te.class.ecore.name]
+ tag = r.name
+ startTag(tag, attrs)
+ writeComposites(te)
+ endTag(tag)
+ end
+ end
+
+ def attributeValues(element)
+ result = []
+ eAllAttributes(element).select{|a| !a.derived}.each do |a|
+ val = element.getGeneric(a.name)
+ result << [a.name, val] unless val.nil? || val == ""
+ end
+ eAllReferences(element).select{|r| !r.containment && !(r.eOpposite && r.eOpposite.containment) && !r.derived}.each do |r|
+ targetElements = element.getGenericAsArray(r.name)
+ val = targetElements.collect{|te| @referenceStrings[te]}.compact.join(' ')
+ result << [r.name, val] unless val.nil? || val == ""
+ end
+ result
+ end
+
+ def buildReferenceStrings(element, string)
+ @referenceStrings[element] = string
+ eachReferencedElement(element, containmentReferences(element)) do |r,te|
+ buildReferenceStrings(te, string+"/"+te.name) if te.respond_to?(:name)
+ end
+ end
+
+ def addBuiltinReferenceStrings
+ pre = "ecore:EDataType http://www.eclipse.org/emf/2002/Ecore"
+ @referenceStrings[RGen::ECore::EString] = pre+"#//EString"
+ @referenceStrings[RGen::ECore::EInt] = pre+"#//EInt"
+ @referenceStrings[RGen::ECore::ELong] = pre+"#//ELong"
+ @referenceStrings[RGen::ECore::EFloat] = pre+"#//EFloat"
+ @referenceStrings[RGen::ECore::EBoolean] = pre+"#//EBoolean"
+ @referenceStrings[RGen::ECore::EJavaObject] = pre+"#//EJavaObject"
+ @referenceStrings[RGen::ECore::EJavaClass] = pre+"#//EJavaClass"
+ end
+
+end
+
+end
+
+end
diff --git a/lib/puppet/vendor/rgen/lib/rgen/serializer/xml_serializer.rb b/lib/puppet/vendor/rgen/lib/rgen/serializer/xml_serializer.rb
new file mode 100644
index 000000000..c89da8ec1
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/rgen/serializer/xml_serializer.rb
@@ -0,0 +1,98 @@
+module RGen
+
+module Serializer
+
+class XMLSerializer
+
+ INDENT_SPACE = 2
+
+ def initialize(file)
+ @indent = 0
+ @lastStartTag = nil
+ @textContent = false
+ @file = file
+ end
+
+ def serialize(rootElement)
+ raise "Abstract class, overwrite method in subclass!"
+ end
+
+ def startTag(tag, attributes={})
+ @textContent = false
+ handleLastStartTag(false, true)
+ if attributes.is_a?(Hash)
+ attrString = attributes.keys.collect{|k| "#{k}=\"#{attributes[k]}\""}.join(" ")
+ else
+ attrString = attributes.collect{|pair| "#{pair[0]}=\"#{pair[1]}\""}.join(" ")
+ end
+ @lastStartTag = " "*@indent*INDENT_SPACE + "<#{tag} "+attrString
+ @indent += 1
+ end
+
+ def endTag(tag)
+ @indent -= 1
+ unless handleLastStartTag(true, true)
+ output " "*@indent*INDENT_SPACE unless @textContent
+ output "</#{tag}>\n"
+ end
+ @textContent = false
+ end
+
+ def writeText(text)
+ handleLastStartTag(false, false)
+ output "#{text}"
+ @textContent = true
+ end
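+
+ # Illustrative usage from within a subclass (a sketch, not from the original documentation):
+ #
+ #   startTag("node", [["name", "n1"]])
+ #   writeText("hello")
+ #   endTag("node")
+ #   # writes: <node name="n1">hello</node>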
+
+ protected
+
+ def eAllReferences(element)
+ @eAllReferences ||= {}
+ @eAllReferences[element.class] ||= element.class.ecore.eAllReferences
+ end
+
+ def eAllAttributes(element)
+ @eAllAttributes ||= {}
+ @eAllAttributes[element.class] ||= element.class.ecore.eAllAttributes
+ end
+
+ def eAllStructuralFeatures(element)
+ @eAllStructuralFeatures ||= {}
+ @eAllStructuralFeatures[element.class] ||= element.class.ecore.eAllStructuralFeatures
+ end
+
+ def eachReferencedElement(element, refs, &block)
+ refs.each do |r|
+ targetElements = element.getGeneric(r.name)
+ targetElements = [targetElements] unless targetElements.is_a?(Array)
+ targetElements.each do |te|
+ yield(r,te)
+ end
+ end
+ end
+
+ def containmentReferences(element)
+ @containmentReferences ||= {}
+ @containmentReferences[element.class] ||= eAllReferences(element).select{|r| r.containment}
+ end
+
+ private
+
+ def handleLastStartTag(close, newline)
+ return false unless @lastStartTag
+ output @lastStartTag
+ output close ? "/>" : ">"
+ output "\n" if newline
+ @lastStartTag = nil
+ true
+ end
+
+ def output(text)
+ @file.write(text)
+ end
+
+end
+
+end
+
+end
diff --git a/lib/puppet/vendor/rgen/lib/rgen/template_language.rb b/lib/puppet/vendor/rgen/lib/rgen/template_language.rb
new file mode 100644
index 000000000..05fe5cc47
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/rgen/template_language.rb
@@ -0,0 +1,297 @@
+# RGen Framework
+# (c) Martin Thiede, 2006
+
+require 'rgen/template_language/directory_template_container'
+require 'rgen/template_language/template_container'
+
+module RGen
+
+# The RGen template language has been designed to build complex generators.
+# It is very similar to the EXPAND language of the Java based
+# OpenArchitectureWare framework.
+#
+# =Templates
+#
+# The basic idea is to allow "templates" not only being template files
+# but smaller parts. Those parts can be expanded from other parts very
+# much like Ruby methods are called from other methods.
+# Thus the term "template" refers to such a part within a "template file".
+#
+# Template files used by the RGen template language should have a
+# filename with the postfix ".tpl". Those files can reside within (nested)
+# template file directories.
+#
+# As an example a template directory could look like the following:
+#
+# templates/root.tpl
+# templates/dbaccess/dbaccess.tpl
+# templates/dbaccess/schema.tpl
+# templates/headers/generic_headers.tpl
+# templates/headers/specific/component.tpl
+#
+# A template is always called for a <i>context object</i>. The context object
+# serves as the receiver of methods called within the template. Details are given
+# below.
+#
+#
+# =Defining Templates
+#
+# One or more templates can be defined in a template file using the +define+
+# keyword as in the following example:
+#
+# <% define 'GenerateDBAdapter', :for => DBDescription do |dbtype| %>
+# Content to be generated; use ERB syntax here
+# <% end %>
+#
+# The template definition takes three kinds of parameters:
+# 1. The name of the template within the template file as a String or Symbol
+# 2. An optional class object describing the class of context objects for which
+# this template is valid.
+# 3. An arbitrary number of template parameters
+# See RGen::TemplateLanguage::TemplateContainer for details about the syntax of +define+.
+#
+# Within a template, regular ERB syntax can be used. This is
+# * <code><%</code> and <code>%></code> are used to embed Ruby code
+# * <code><%=</code> and <code>%></code> are used to embed Ruby expressions with
+# the expression result being written to the template output
+# * <code><%#</code> and <code>%></code> are used for comments
+# All content not within these tags is written to the template output verbatim.
+# See below for details about output files and output formatting.
+#
+# All methods which are called from within the template are sent to the context
+# object.
+#
+# Experience shows that one easily forgets the +do+ at the end of the first
+# line of a template definition. This will result in an ERB parse error.
+#
+#
+# =Expanding Templates
+#
+# Templates are normally expanded from within other templates. The only
+# exception is the root template, which is expanded from the surrounding code.
+#
+# Template names can be specified in the following ways:
+# * Non qualified name: use the template with the given name in the current template file
+# * Relative qualified name: use the template within the template file specified by the relative path
+# * Absolute qualified name: use the template within the template file specified by the absolute path
+#
+# The +expand+ keyword is used to expand templates.
+#
+# Here are some examples:
+#
+# <% expand 'GenerateDBAdapter', dbtype, :for => dbDesc %>
+#
+# <i>Non qualified</i>. Must be called within the file where 'GenerateDBAdapter' is defined.
+# There is one template parameter passed in via variable +dbtype+.
+# The context object is provided in variable +dbDesc+.
+#
+# <% expand 'dbaccess::ExampleSQL' %>
+#
+# <i>Qualified with filename</i>. Must be called from a file in the same directory as 'dbaccess.tpl'
+# There are no parameters. The current context object will be used as the context
+# object for this template expansion.
+#
+# <% expand '../headers/generic_headers::CHeader', :foreach => modules %>
+#
+# <i>Relatively qualified</i>. Must be called from a location from which the file
+# 'generic_headers.tpl' is accessible via the relative path '../headers'.
+# The template is expanded for each module in +modules+ (which has to be an Array).
+# Each element of +modules+ will be the context object in turn.
+#
+# <% expand '/headers/generic_headers::CHeader', :foreach => modules %>
+#
+# Absolutely qualified: The same behaviour as before but with an absolute path from
+# the template directory root (which in this example is 'templates', see above)
+#
+# Sometimes it is necessary to generate some text (e.g. a ',') in between the single
+# template expansion results from a <code>:foreach</code> expansion. This can be achieved by
+# using the <code>:separator</code> keyword:
+#
+# <% expand 'ColumnName', :foreach => column, :separator => ', ' %>
+#
+# Note that the separator may also contain newline characters (\n). See below for
+# details about formatting.
+#
+#
+# =Formatting
+#
+# For many generator tools a formatting postprocess (e.g. using a pretty printer) is
+# required in order to make the output readable. However, depending on the kind of
+# generated output, such a tool might not be available.
+#
+# The RGen template language has been designed for generators which do not need a
+# postprocessing step. The basic idea is to eliminate all whitespace at the beginning
+# of template lines (the indentation that makes the _template_ readable) and output
+# newlines only after at least one character has been generated in the corresponding
+# line. This way there are no empty lines in the output and each line will start with
+# a non-whitespace character.
+#
+# Starting from this point one can add indentation and newlines as required by using
+# explicit formatting commands:
+# * <code><%nl%></code> (newline) starts a new line
+# * <code><%iinc%></code> (indentation increment) increases the current indentation
+# * <code><%idec%></code> (indentation decrement) decreases the current indentation
+# * <code><%nonl%></code> (no newline) ignore next newline
+# * <code><%nows%></code> (no whitespace) ignore next whitespace
+#
+# Indentation takes place for every new line in the output unless it is 0.
+# The initial indentation can be specified with a root +expand+ command by using
+# the <code>:indent</code> keyword.
+#
+# Here is an example:
+#
+# expand 'GenerateDBAdapter', dbtype, :for => dbDesc, :indent => 1
+#
+# Initial indentation defaults to 0. Normally <code><%iinc%></code> and
+# <code><%idec%></code> are used to change the indentation.
+# The current indentation is kept for expansion of subtemplates.
+#
+# The string which is used to realize one indentation step can be set using
+# DirectoryTemplateContainer#indentString or with the template language +file+ command.
+# The default is "   " (3 spaces); the indentation string given at a +file+ command
+# overwrites the container's default, which in turn overwrites the overall default.
+#
+# Note that commands to ignore whitespace and newlines are still useful if output
+# generated from multiple template lines should show up in one single output line.
+#
+# Here is an example of a template generating a C program:
+#
+# #include <stdio.h>
+# <%nl%>
+# int main() {<%iinc%>
+# printf("Hello World\n");
+# return 0;<%idec%>
+# }
+#
+# The result is:
+#
+# #include <stdio.h>
+#
+# int main() {
+# printf("Hello World\n");
+# return 0;
+# }
+#
+# Note that without the explicit formatting commands, the output generated from the
+# example above would not have any empty lines or whitespace at the beginning of lines.
+# This may seem like unnecessary extra work for the example above, which could also
+# have been generated by passing the template to the output verbatim.
+# However, in most cases templates will contain more template-specific indentation and
+# newlines (which should be eliminated) than formatting that should be visible in the
+# output.
+#
+# Here is a more realistic example for generating C function prototypes:
+#
+# <% define 'Prototype', :for => CFunction do %>
+# <%= getType.name %> <%= name %>(<%nows%>
+# <% expand 'Signature', :foreach => argument, :separator => ', ' %>);
+# <% end %>
+#
+# <% define 'Signature', :for => CFunctionArgument do %>
+# <%= getType.name %> <%= name%><%nows%>
+# <% end %>
+#
+# The result could look something like:
+#
+# void somefunc(int a, float b, int c);
+# int otherfunc(short x);
+#
+# In this example a separator is used to join the single arguments of the C functions.
+# Note that the template generating the argument type and name needs to contain
+# a <code><%nows%></code> if the result should consist of a single line.
+#
+# Here is one more example for generating C array initializations:
+#
+# <% define 'Array', :for => CArray do %>
+# <%= getType.name %> <%= name %>[<%= size %>] = {<%iinc%>
+# <% expand 'InitValue', :foreach => initvalue, :separator => ",\n" %><%nl%><%idec%>
+# };
+# <% end %>
+#
+# <% define 'InitValue', :for => PrimitiveInitValue do %>
+# <%= value %><%nows%>
+# <% end %>
+#
+# The result could look something like:
+#
+# int myArray[3] = {
+# 1,
+# 2,
+# 3
+# };
+#
+# Note that in this example, the separator contains a newline. The current indentation
+# will be applied to each single expansion result since it starts on a new line.
+#
+#
+# =Output Files
+#
+# Normally the generated content is to be written into one or more output files.
+# The RGen template language facilitates this by means of the +file+ keyword.
+#
+# When the +file+ keyword is used to define a block, all output generated
+# from template code within this block will be written to the specified file.
+# This includes output generated from template expansions.
+# Thus all output from templates expanded within this block is written to
+# the same file as long as those templates do not use the +file+ keyword to
+# define a new file context.
+#
+# Here is an example:
+#
+# <% file 'dbadapter/'+adapter.name+'.c' do %>
+# all content within this block will be written to the specified file
+# <% end %>
+#
+# Note that the filename itself can be calculated dynamically by an arbitrary
+# Ruby expression.
+#
+# The absolute position where the output file is created depends on the output
+# root directory passed to DirectoryTemplateContainer as described below.
+#
+# As a second argument, the +file+ command can take the indentation string which is
+# used to indent output lines (see Formatting).
+#
+# =Setting up the Generator
+#
+# Setting up the generator consists of 3 steps:
+# * Instantiate DirectoryTemplateContainer passing one or more metamodel(s) and the output
+# directory to the constructor.
+# * Load the templates into the template container
+# * Expand the root template to start generation
+#
+# Here is an example:
+#
+# module MyMM
+# # metaclasses are defined here, e.g. using RGen::MetamodelBuilder
+# end
+#
+# OUTPUT_DIR = File.dirname(__FILE__)+"/output"
+# TEMPLATES_DIR = File.dirname(__FILE__)+"/templates"
+#
+# tc = RGen::TemplateLanguage::DirectoryTemplateContainer.new(MyMM, OUTPUT_DIR)
+# tc.load(TEMPLATES_DIR)
+# # testModel should hold an instance of the metamodel class expected by the root template
+# # the following line starts generation
+# tc.expand('root::Root', :for => testModel, :indent => 1)
+#
+# The metamodel is the Ruby module which contains the metaclasses.
+# This information is required for the template container in order to resolve the
+# metamodel classes used within the template file.
+# If several metamodels shall be used, an array of modules can be passed instead
+# of a single module.
+#
+# The output path is prepended to the relative paths provided to the +file+
+# definitions in the template files.
+#
+# The template directory should contain template files as described above.
+#
+# Finally the generation process is started by calling +expand+ in the same way as it
+# is used from within templates.
+#
+# Also see the unit tests for more examples.
+#
+module TemplateLanguage
+
+end
+
+end
\ No newline at end of file
diff --git a/lib/puppet/vendor/rgen/lib/rgen/template_language/directory_template_container.rb b/lib/puppet/vendor/rgen/lib/rgen/template_language/directory_template_container.rb
new file mode 100644
index 000000000..5f355b6c4
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/rgen/template_language/directory_template_container.rb
@@ -0,0 +1,83 @@
+# RGen Framework
+# (c) Martin Thiede, 2006
+
+require 'rgen/template_language/template_container'
+require 'rgen/template_language/template_helper'
+
+module RGen
+
+module TemplateLanguage
+
+class DirectoryTemplateContainer
+ include TemplateHelper
+
+ def initialize(metamodel=nil, output_path=nil, parent=nil)
+ @containers = {}
+ @directoryContainers = {}
+ @parent = parent
+ @metamodel = metamodel
+ @output_path = output_path
+ end
+
+ def load(dir)
+ Dir.foreach(dir) { |f|
+ qf = dir+"/"+f
+ if !File.directory?(qf) && f =~ /^(.*)\.tpl$/
+ (@containers[$1] = TemplateContainer.new(@metamodel, @output_path, self,qf)).load
+ elsif File.directory?(qf) && f != "." && f != ".."
+ (@directoryContainers[f] = DirectoryTemplateContainer.new(@metamodel, @output_path, self)).load(qf)
+ end
+ }
+ end
+
+ def expand(template, *all_args)
+ if template =~ /^\//
+ if @parent
+ # pass to parent
+ @parent.expand(template, *all_args)
+ else
+ # this is root
+ _expand(template, *all_args)
+ end
+ elsif template =~ /^\.\.\/(.*)/
+ if @parent
+ # pass to parent
+ @parent.expand($1, *all_args)
+ else
+ raise "No parent directory for root"
+ end
+ else
+ _expand(template, *all_args)
+ end
+ end
+
+ # Set indentation string.
+ # Every generated line will be prepended with n times this string at an indentation level of n.
+ # Defaults to " " (3 spaces)
+ def indentString=(str)
+ @indentString = str
+ end
+
+ def indentString
+ @indentString || (@parent && @parent.indentString) || " "
+ end
+
+ private
+
+ def _expand(template, *all_args)
+ if template =~ /^\/?(\w+)::(\w.*)/
+ raise "Template not found: #{$1}" unless @containers[$1]
+ @containers[$1].expand($2, *all_args)
+ elsif template =~ /^\/?(\w+)\/(\w.*)/
+ raise "Template not found: #{$1}" unless @directoryContainers[$1]
+ @directoryContainers[$1].expand($2, *all_args)
+ else
+ raise "Invalid template name: #{template}"
+ end
+ end
+
+end
+
+end
+
+end
\ No newline at end of file
diff --git a/lib/puppet/vendor/rgen/lib/rgen/template_language/output_handler.rb b/lib/puppet/vendor/rgen/lib/rgen/template_language/output_handler.rb
new file mode 100644
index 000000000..e6cd692a1
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/rgen/template_language/output_handler.rb
@@ -0,0 +1,87 @@
+# RGen Framework
+# (c) Martin Thiede, 2006
+
+module RGen
+
+module TemplateLanguage
+
+ class OutputHandler
+ attr_writer :indent
+ attr_accessor :noIndentNextLine
+
+ def initialize(indent=0, indentString=" ", mode=:explicit)
+ self.mode = mode
+ @indent = indent
+ @indentString = indentString
+ @state = :wait_for_nonws
+ @output = ""
+ end
+
+ # ERB will call this method for every string s which is part of the
+ # template file in between %> and <%. If s contains a newline, it will
+ # call this method for every part of s which is terminated by a \n
+ #
+ def concat(s)
+ return @output.concat(s) if s.is_a? OutputHandler
+ #puts [object_id, noIndentNextLine, @state, @output.to_s, s].inspect
+ s = s.to_str.gsub(/^[\t ]*\r?\n/,'') if @ignoreNextNL
+ s = s.to_str.gsub(/^\s+/,'') if @ignoreNextWS
+ @ignoreNextNL = @ignoreNextWS = false if s =~ /\S/
+ if @mode == :direct
+ @output.concat(s)
+ elsif @mode == :explicit
+ while s.size > 0
+ if @state == :wait_for_nl
+ if s =~ /\A([^\r\n]*\r?\n)(.*)/m
+ rest = $2
+ @output.concat($1.gsub(/[\t ]+(?=\r|\n)/,''))
+ s = rest || ""
+ @state = :wait_for_nonws
+ else
+ @output.concat(s)
+ s = ""
+ end
+ elsif @state == :wait_for_nonws
+ if s =~ /\A\s*(\S+.*)/m
+ s = $1 || ""
+ if !@noIndentNextLine && !(@output.to_s.size > 0 && @output.to_s[-1] != "\n"[0])
+ @output.concat(@indentString * @indent)
+ else
+ @noIndentNextLine = false
+ end
+ @state = :wait_for_nl
+ else
+ s = ""
+ end
+ end
+ end
+ end
+ end
+ alias << concat
+
+ def to_str
+ @output
+ end
+ alias to_s to_str
+
+ def direct_concat(s)
+ @output.concat(s)
+ end
+
+ def ignoreNextNL
+ @ignoreNextNL = true
+ end
+
+ def ignoreNextWS
+ @ignoreNextWS = true
+ end
+
+ def mode=(m)
+ raise StandardError.new("Unknown mode: #{m}") unless [:direct, :explicit].include?(m)
+ @mode = m
+ end
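+
+ # A minimal sketch of the default :explicit mode (indent level 1 and a two-space
+ # indentation string are arbitrary example values):
+ #
+ #   out = OutputHandler.new(1, "  ")
+ #   out << "   hello \n"
+ #   out.to_s   # => "  hello\n"  (leading whitespace replaced by the indentation,
+ #              #     whitespace before the newline removed)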
+ end
+
+end
+
+end
\ No newline at end of file
diff --git a/lib/puppet/vendor/rgen/lib/rgen/template_language/template_container.rb b/lib/puppet/vendor/rgen/lib/rgen/template_language/template_container.rb
new file mode 100644
index 000000000..0a8331448
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/rgen/template_language/template_container.rb
@@ -0,0 +1,234 @@
+# RGen Framework
+# (c) Martin Thiede, 2006
+
+require 'erb'
+require 'fileutils'
+require 'rgen/template_language/output_handler'
+require 'rgen/template_language/template_helper'
+
+module RGen
+
+ module TemplateLanguage
+
+ class TemplateContainer
+ include TemplateHelper
+
+ def initialize(metamodels, output_path, parent, filename)
+ @templates = {}
+ @parent = parent
+ @filename = filename
+ @indent = 0
+ @output_path = output_path
+ @metamodels = metamodels
+ @metamodels = [ @metamodels ] unless @metamodels.is_a?(Array)
+ end
+
+ def load
+ File.open(@filename,"rb") do |f|
+ begin
+ @@metamodels = @metamodels
+ fileContent = f.read
+ _detectNewLinePattern(fileContent)
+ ERB.new(fileContent,nil,nil,'@output').result(binding)
+ rescue Exception => e
+ processAndRaise(e)
+ end
+ end
+ end
+
+ def expand(template, *all_args)
+ args, params = _splitArgsAndOptions(all_args)
+ if params.has_key?(:foreach)
+ raise StandardError.new("expand :foreach argument is not enumerable") \
+ unless params[:foreach].is_a?(Enumerable)
+ _expand_foreach(template, args, params)
+ else
+ _expand(template, args, params)
+ end
+ end
+
+ def evaluate(template, *all_args)
+ args, params = _splitArgsAndOptions(all_args)
+ raise StandardError.new(":foreach can not be used with evaluate") if params[:foreach]
+ _expand(template, args, params.merge({:_evalOnly => true}))
+ end
+
+ def this
+ @context
+ end
+
+ def method_missing(name, *args)
+ @context.send(name, *args)
+ end
+
+ def self.const_missing(name)
+ super unless @@metamodels
+ @@metamodels.each do |mm|
+ return mm.const_get(name) rescue NameError
+ end
+ super
+ end
+
+ private
+
+ def nonl
+ @output.ignoreNextNL
+ end
+
+ def nows
+ @output.ignoreNextWS
+ end
+
+ def nl
+ _direct_concat(@newLinePattern)
+ end
+
+ def ws
+ _direct_concat(" ")
+ end
+
+ def iinc
+ @indent += 1
+ @output.indent = @indent
+ end
+
+ def idec
+ @indent -= 1 if @indent > 0
+ @output.indent = @indent
+ end
+
+ TemplateDesc = Struct.new(:block, :local)
+
+ def define(template, params={}, &block)
+ _define(template, params, &block)
+ end
+
+ def define_local(template, params={}, &block)
+ _define(template, params.merge({:local => true}), &block)
+ end
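+
+ # Illustrative template fragment as it could appear in a .tpl file processed by
+ # this class (the ClassDef metamodel class, its "name" attribute and the
+ # "classes" collection are hypothetical):
+ #
+ #   <% define 'ClassHeader', :for => ClassDef do |prefix| %>
+ #     class <%= prefix %><%= name %>
+ #   <% end %>
+ #
+ #   <% expand 'ClassHeader', "My", :foreach => classes, :separator => "\n" %>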
+
+ def file(name, indentString=nil)
+ old_output, @output = @output, OutputHandler.new(@indent, indentString || @parent.indentString)
+ begin
+ yield
+ rescue Exception => e
+ processAndRaise(e)
+ end
+ path = ""
+ path += @output_path+"/" if @output_path
+ dirname = File.dirname(path+name)
+ FileUtils.makedirs(dirname) unless File.exist?(dirname)
+ File.open(path+name,"wb") { |f| f.write(@output) }
+ @output = old_output
+ end
+
+ # private private
+
+ def _define(template, params={}, &block)
+ @templates[template] ||= {}
+ cls = params[:for] || Object
+ @templates[template][cls] = TemplateDesc.new(block, params[:local])
+ end
+
+ def _expand_foreach(template, args, params)
+ sep = params[:separator]
+ params[:foreach].each_with_index {|e,i|
+ _direct_concat(sep.to_s) if sep && i > 0
+ output = _expand(template, args, params.merge({:for => e}))
+ }
+ end
+
+ LOCAL_TEMPLATE_REGEX = /^:*(\w+)$/
+
+ def _expand(template, args, params)
+ raise StandardError.new("expand :for argument evaluates to nil") if params.has_key?(:for) && params[:for].nil?
+ context = params[:for]
+ old_indent = @indent
+ @indent = params[:indent] || @indent
+ noIndentNextLine = params[:_noIndentNextLine] ||
+ (@output.is_a?(OutputHandler) && @output.noIndentNextLine) ||
+ (@output.to_s.size > 0 && @output.to_s[-1] != "\n"[0])
+ caller = params[:_caller] || self
+ old_context, @context = @context, context if context
+ local_output = nil
+ if template =~ LOCAL_TEMPLATE_REGEX
+ tplname = $1
+ raise StandardError.new("Template not found: #{$1}") unless @templates[tplname]
+ old_output, @output = @output, OutputHandler.new(@indent, @parent.indentString)
+ @output.noIndentNextLine = noIndentNextLine
+ _call_template(tplname, @context, args, caller == self)
+ old_output.noIndentNextLine = false if old_output.is_a?(OutputHandler) && !old_output.noIndentNextLine
+ local_output, @output = @output, old_output
+ else
+ local_output = @parent.expand(template, *(args.dup << {:for => @context, :indent => @indent, :_noIndentNextLine => noIndentNextLine, :_evalOnly => true, :_caller => caller}))
+ end
+ _direct_concat(local_output) unless params[:_evalOnly]
+ @context = old_context if old_context
+ @indent = old_indent
+ local_output.to_s
+ end
+
+ def processAndRaise(e, tpl=nil)
+ bt = e.backtrace.dup
+ e.backtrace.each_with_index do |t,i|
+ if t =~ /\(erb\):(\d+):/
+ bt[i] = "#{@filename}:#{$1}"
+ bt[i] += ":in '#{tpl}'" if tpl
+ break
+ end
+ end
+ raise e, e.to_s, bt
+ end
+
+ def _call_template(tpl, context, args, localCall)
+ found = false
+ @templates[tpl].each_pair do |key, value|
+ if context.is_a?(key)
+ templateDesc = @templates[tpl][key]
+ proc = templateDesc.block
+ arity = proc.arity
+ arity = 0 if arity == -1 # if no args are given
+ raise StandardError.new("Wrong number of arguments calling template #{tpl}: #{args.size} for #{arity} "+
+ "(Beware: Hashes as last arguments are taken as options and are ignored)") \
+ if arity != args.size
+ raise StandardError.new("Template can only be called locally: #{tpl}") \
+ if templateDesc.local && !localCall
+ begin
+ @@metamodels = @metamodels
+ proc.call(*args)
+ rescue Exception => e
+ processAndRaise(e, tpl)
+ end
+ found = true
+ end
+ end
+ raise StandardError.new("Template class not matching: #{tpl} for #{context.class.name}") unless found
+ end
+
+ def _direct_concat(s)
+ if @output.is_a? OutputHandler
+ @output.direct_concat(s)
+ else
+ @output << s
+ end
+ end
+ def _detectNewLinePattern(text)
+ tests = 0
+ rnOccurances = 0
+ text.scan(/(\r?)\n/) do |groups|
+ tests += 1
+ rnOccurances += 1 if groups[0] == "\r"
+ break if tests >= 10
+ end
+ if rnOccurances > (tests / 2)
+ @newLinePattern = "\r\n"
+ else
+ @newLinePattern = "\n"
+ end
+ end
+
+ end
+
+ end
+
+end
diff --git a/lib/puppet/vendor/rgen/lib/rgen/template_language/template_helper.rb b/lib/puppet/vendor/rgen/lib/rgen/template_language/template_helper.rb
new file mode 100644
index 000000000..5eb3a98a2
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/rgen/template_language/template_helper.rb
@@ -0,0 +1,26 @@
+# RGen Framework
+# (c) Martin Thiede, 2006
+
+module RGen
+
+module TemplateLanguage
+
+module TemplateHelper
+
+ private
+
+ def _splitArgsAndOptions(all)
+ if all[-1] and all[-1].is_a? Hash
+ args = all[0..-2] || []
+ options = all[-1]
+ else
+ args = all
+ options = {}
+ end
+ return args, options
+ end
+end
+
+end
+
+end
\ No newline at end of file
diff --git a/lib/puppet/vendor/rgen/lib/rgen/transformer.rb b/lib/puppet/vendor/rgen/lib/rgen/transformer.rb
new file mode 100644
index 000000000..097c089c5
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/rgen/transformer.rb
@@ -0,0 +1,475 @@
+require 'rgen/ecore/ecore'
+require 'rgen/ecore/ecore_ext'
+
+module RGen
+
+# The Transformer class can be used to specify model transformations.
+#
+# Model transformations take place between a <i>source model</i> (located in the <i>source
+# environment</i> being an instance of the <i>source metamodel</i>) and a <i>target model</i> (located
+# in the <i>target environment</i> being an instance of the <i>target metamodel</i>).
+# Normally a "model" consists of several model elements associated with each other.
+#
+# =Transformation Rules
+#
+# The transformation is specified within a subclass of Transformer.
+# Within the subclass, the Transformer.transform class method can be used to specify transformation
+# blocks for specific metamodel classes of the source metamodel.
+#
+# If there is no transformation rule for the current object's class, a rule for the superclass
+# is used instead. If there's no rule for the superclass, the class hierarchy is searched
+# this way until Object.
+#
+# Here is an example:
+#
+# class MyTransformer < RGen::Transformer
+#
+# transform InputClass, :to => OutputClass do
+# { :name => name, :otherClass => trans(otherClass) }
+# end
+#
+# transform OtherInputClass, :to => OtherOutputClass do
+# { :name => name }
+# end
+# end
+#
+# In this example a transformation rule is specified for model elements of class InputClass
+# as well as for elements of class OtherInputClass. The former is to be transformed into
+# an instance of OutputClass, the latter into an instance of OtherOutputClass.
+# Note that the Ruby class objects are used to specify the classes.
+#
+# =Transforming Attributes
+#
+# Besides the target class of a transformation, the attributes of the result object are
+# specified in the above example. This is done by providing a Ruby block with the call of
+# +transform+. Within this block arbitrary Ruby code may be placed, however the block
+# must return a hash. This hash object specifies the attribute assignment of the
+# result object using key/value pairs: The key must be a Symbol specifying the attribute
+# which is to be assigned by name, the value is the value that will be assigned.
+#
+# For convenience, the transformation block will be evaluated in the context of the
+# source model element which is currently being converted. This way it is possible to just
+# write <code>:name => name</code> in the example in order to assign the name of the source
+# object to the name attribute of the target object.
+#
+# =Transforming References
+#
+# When attributes of elements are references to other elements, those referenced
+# elements have to be transformed as well. As shown above, this can be done by calling
+# the Transformer#trans method. This method initiates a transformation of the element
+# or array of elements passed as parameter according to transformation rules specified
+# using +transform+. In fact the +trans+ method is the only way to start the transformation
+# at all.
+#
+# For convenience and performance reasons, the result of +trans+ is cached with respect
+# to the parameter object. I.e. calling trans on the same source object a second time will
+# return the same result object _without_ a second evaluation of the corresponding
+# transformation rules.
+#
+# This way the +trans+ method can be used to lookup the target element for some source
+# element without the need to locally store a reference to the target element. In addition
+# this can be useful if it is not clear if certain element has already been transformed
+# when it is required within some other transformation block. See example below.
+#
+# Special care has been taken to allow the transformation of elements which reference
+# each other cyclically. The key issue here is that the target element of some transformation
+# is created _before_ the transformation's block is evaluated, i.e. before the element's
+# attributes are set. Otherwise a call to +trans+ within the transformation's block
+# could lead to a +trans+ of the element itself.
+#
+# Here is an example:
+#
+# transform ModelAIn, :to => ModelAOut do
+# { :name => name, :modelB => trans(modelB) }
+# end
+#
+# transform ModelBIn, :to => ModelBOut do
+# { :name => name, :modelA => trans(modelA) }
+# end
+#
+# Note that in this case it does not matter if the transformation is initiated by calling
+# +trans+ with a ModelAIn element or ModelBIn element due to the caching feature described
+# above.
+#
+# =Transformer Methods
+#
+# When code in transformer blocks becomes more complex it might be useful to refactor
+# it into smaller methods. If regular Ruby methods within the Transformer subclass are
+# used for this purpose, it is necessary to know the source element being transformed.
+# This could be achieved by explicitly passing the +@current_object+ as parameter of the
+# method (see Transformer#trans).
+#
+# A more convenient way however is to define a special kind of method using the
+# Transformer.method class method. Those methods are evaluated within the context of the
+# current source element being transformed just the same as transformer blocks are.
+#
+# Here is an example:
+#
+# transform ModelIn, :to => ModelOut do
+# { :number => doubleNumber }
+# end
+#
+# method :doubleNumber do
+# number * 2;
+# end
+#
+# In this example the transformation assigns the 'number' attribute of the source element
+# multiplied by 2 to the target element. The multiplication is done in a dedicated method
+# called 'doubleNumber'. Note that the 'number' attribute of the source element is
+# accessed without an explicit reference to the source element as the method's body
+# evaluates in the source element's context.
+#
+# =Conditional Transformations
+#
+# Using the transformations as described above, all elements of the same class are
+# transformed the same way. Conditional transformations make it possible to transform elements
+# of the same class into elements of different target classes, as well as to apply different
+# transformations to the attributes.
+#
+# Conditional transformations are defined by specifying multiple transformer blocks for
+# the same source class and providing a condition with each block. Since it is important
+# to create the target object before evaluation of the transformation block (see above),
+# the conditions must also be evaluated separately _before_ the transformer block.
+#
+# Conditions are specified using transformer methods as described above. If the return
+# value is true, the corresponding block is used for transformation. If more than one
+# condition is true, only the first transformer block will be evaluated.
+#
+# If there is no rule with a condition evaluating to true, rules for superclasses will
+# be checked as described above.
+#
+# Here is an example:
+#
+# transform ModelIn, :to => ModelOut, :if => :largeNumber do
+# { :number => number * 2}
+# end
+#
+# transform ModelIn, :to => ModelOut, :if => :smallNumber do
+# { :number => number / 2 }
+# end
+#
+# method :largeNumber do
+# number > 1000
+# end
+#
+# method :smallNumber do
+# number < 500
+# end
+#
+# In this case the transformation of an element of class ModelIn depends on the value
+# of the element's 'number' attribute. If the value is greater than 1000, the first rule
+# as taken and the number is doubled. If the value is smaller than 500, the second rule
+# is taken and the number is divided by two.
+#
+# Note that it is up to the user to avoid cycles within the conditions. A cycle could
+# occur if the conditions are based on transformation target elements, i.e. if +trans+
+# is used within the condition to lookup or transform other elements.
+#
+# = Copy Transformations
+#
+# In some cases, transformations should just copy a model, either in the same metamodel
+# or in another metamodel with the same package/class structure. Sometimes, a transformation
+# is not exactly a copy, but a copy with slight modifications. Also in this case most
+# classes need to be copied verbatim.
+#
+# The class method Transformer.copy can be used to specify a copy rule for a single
+# metamodel class. If no target class is specified using the :to named parameter, the
+# target class will be the same as the source class (i.e. in the same metamodel).
+#
+# copy MM1::ClassA # copy within the same metamodel
+# copy MM1::ClassA, :to => MM2::ClassA
+#
+# The class method Transformer.copy_all can be used to specify copy rules for all classes
+# of a particular metamodel package. Again with :to, the target metamodel package may
+# be specified which must have the same package/class structure. If :to is omitted, the
+# target metamodel is the same as the source metamodel. In case that for some classes
+# specific transformation rules should be used instead of copy rules, exceptions may be
+# specified using the :except named parameter. +copy_all+ also provides an easy way to
+# copy (clone) a model within the same metamodel.
+#
+# copy_all MM1 # copy rules for the whole metamodel MM1,
+# # used to clone models of MM1
+#
+# copy_all MM1, :to => MM2, :except => %w( # copy rules for all classes of MM1 to
+# ClassA # equally named classes in MM2, except
+# Sub1::ClassB # "ClassA" and "Sub1::ClassB"
+# )
+#
+# If a specific class transformation is not an exact copy, the Transformer.transform method
+# should be used to actually specify the transformation. If this transformation is also
+# mostly a copy, the helper method Transformer#copy_features can be used to create the
+# transformation Hash required by the transform method. Any changes to this hash may be done
+# in a hash returned by a block given to +copy_features+. This hash will extend or overwrite
+# the default copy hash. In case a particular feature should not be part of the copy hash
+# (e.g. because it does not exist in the target metamodel), exceptions can be specified using
+# the :except named parameter. Here is an example:
+#
+# transform ClassA, :to => ClassAx do
+# copy_features :except => [:featA] do
+# { :featB => featA }
+# end
+# end
+#
+# In this example, ClassAx is a copy of ClassA except that feature "featA" in ClassA is renamed
+# into "featB" in ClassAx. Using +copy_features+ all features are copied except "featA". Then
+# "featB" of the target class is assigned the value of "featA" of the source class.
+#
+class Transformer
+
+ TransformationDescription = Struct.new(:block, :target) # :nodoc:
+
+ @@methods = {}
+ @@transformer_blocks = {}
+
+ def self._transformer_blocks # :nodoc:
+ @@transformer_blocks[self] ||= {}
+ end
+
+ def self._methods # :nodoc:
+ @@methods[self] ||= {}
+ end
+
+ # This class method is used to specify a transformation rule.
+ #
+ # The first argument specifies the class of elements for which this rule applies.
+ # The second argument must be a hash including the target class
+ # (as value of key ':to') and an optional condition (as value of key ':if').
+ #
+ # The target class is specified by passing the actual Ruby class object.
+ # The condition is either the name of a transformer method (see Transformer.method) as
+ # a symbol or a proc object. In either case the block is evaluated at transformation
+ # time and its result value determines if the rule applies.
+ #
+ def self.transform(from, desc=nil, &block)
+ to = (desc && desc.is_a?(Hash) && desc[:to])
+ condition = (desc && desc.is_a?(Hash) && desc[:if])
+ raise StandardError.new("No transformation target specified.") unless to
+ block_desc = TransformationDescription.new(block, to)
+ if condition
+ _transformer_blocks[from] ||= {}
+ raise StandardError.new("Multiple (non-conditional) transformations for class #{from.name}.") unless _transformer_blocks[from].is_a?(Hash)
+ _transformer_blocks[from][condition] = block_desc
+ else
+ raise StandardError.new("Multiple (non-conditional) transformations for class #{from.name}.") unless _transformer_blocks[from].nil?
+ _transformer_blocks[from] = block_desc
+ end
+ end
+
+ # This class method specifies that all objects of class +from+ are to be copied
+ # into an object of class +to+. If +to+ is omitted, +from+ is used as target class.
+ # The target class may also be specified using the :to => <class> hash notation.
+ # During copy, all attributes and references of the target object
+ # are set to their transformed counterparts of the source object.
+ #
+ def self.copy(from, to=nil)
+ raise StandardError.new("Specify target class either directly as second parameter or using :to => <class>") \
+ unless to.nil? || to.is_a?(Class) || (to.is_a?(Hash) && to[:to].is_a?(Class))
+ to = to[:to] if to.is_a?(Hash)
+ transform(from, :to => to || from) do
+ copy_features
+ end
+ end
+
+ # Create copy rules for all classes of metamodel package (module) +from+ and its subpackages.
+ # The target classes are the classes with the same name in the metamodel package
+ # specified using named parameter :to. If no target metamodel is specified, source
+ # and target classes will be the same.
+ # The named parameter :except can be used to specify classes by qualified name for which
+ # no copy rules should be created. Qualified names are relative to the metamodel package
+ # specified.
+ #
+ def self.copy_all(from, hash={})
+ to = hash[:to] || from
+ except = hash[:except]
+ fromDepth = from.ecore.qualifiedName.split("::").size
+ from.ecore.eAllClasses.each do |c|
+ path = c.qualifiedName.split("::")[fromDepth..-1]
+ next if except && except.include?(path.join("::"))
+ copy c.instanceClass, :to => path.inject(to){|m,c| m.const_get(c)}
+ end
+ end
+
+ # Define a transformer method for the current transformer class.
+ # In contrast to regular Ruby methods, a method defined this way executes in the
+ # context of the object currently being transformed.
+ #
+ def self.method(name, &block)
+ _methods[name.to_s] = block
+ end
+
+
+ # Creates a new transformer
+ # Optionally an input and output Environment can be specified.
+ # If an elementMap is provided (normally a Hash), this map will be used to look up
+ # and store transformation results. This way results can be predefined
+ # and it is possible to have several transformers work on the same result map.
+ #
+ def initialize(env_in=nil, env_out=nil, elementMap=nil)
+ @env_in = env_in
+ @env_out = env_out
+ @transformer_results = elementMap || {}
+ @transformer_jobs = []
+ end
+
+
+ # Transforms a given model element according to the rules specified by means of
+ # the Transformer.transform class method.
+ #
+ # The transformation result element is created in the output environment and returned.
+ # In addition, the result is cached, i.e. a second invocation with the same parameter
+ # object will return the same result object without any further evaluation of the
+ # transformation rules. Nil will be transformed into nil. Ruby "singleton" objects
+ # +true+, +false+, numerics and symbols will be returned without any change. Ruby strings
+ # will be duplicated with the result being cached.
+ #
+ # The transformation input can be given as:
+ # * a single object
+ # * an array each element of which is transformed in turn
+ # * a hash used as input to Environment#find with the result being transformed
+ #
+ def trans(obj)
+ if obj.is_a?(Hash)
+ raise StandardError.new("No input environment available to find model element.") unless @env_in
+ obj = @env_in.find(obj)
+ end
+ return nil if obj.nil?
+ return obj if obj.is_a?(TrueClass) or obj.is_a?(FalseClass) or obj.is_a?(Numeric) or obj.is_a?(Symbol)
+ return @transformer_results[obj] if @transformer_results[obj]
+ return @transformer_results[obj] = obj.dup if obj.is_a?(String)
+ return obj.collect{|o| trans(o)}.compact if obj.is_a? Array
+ raise StandardError.new("No transformer for class #{obj.class.name}") unless _transformerBlock(obj.class)
+ block_desc = _evaluateCondition(obj)
+ return nil unless block_desc
+ @transformer_results[obj] = _instantiateTargetClass(obj, block_desc.target)
+ # we will transform the properties later
+ @transformer_jobs << TransformerJob.new(self, obj, block_desc)
+ # if there have been jobs in the queue before, don't process them in this call
+ # this way calls to trans are not nested; a recursive implementation
+ # may cause a "Stack level too deep" exception for large models
+ return @transformer_results[obj] if @transformer_jobs.size > 1
+ # otherwise this is the first call of trans, process all jobs here
+ # more jobs will be added during job execution
+ while @transformer_jobs.size > 0
+ @transformer_jobs.first.execute
+ @transformer_jobs.shift
+ end
+ @transformer_results[obj]
+ end
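+
+ # An illustrative way to run such a transformation (MyTransformer and InputClass
+ # refer to the hypothetical example in the class documentation above; populating
+ # the input environment is omitted):
+ #
+ #   env_in, env_out = RGen::Environment.new, RGen::Environment.new
+ #   # ... add InputClass elements to env_in ...
+ #   MyTransformer.new(env_in, env_out).trans(env_in.find(:class => InputClass))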
+
+ # Create the hash required as return value of the block given to the Transformer.transform method.
+ # The hash will assign feature values of the source class to the features of the target class,
+ # assuming the features of both classes are the same. If the :except named parameter specifies
+ # an Array of symbols, the listed features are not copied by the hash. In order to easily manipulate
+ # the resulting hash, a block may be given which should also return a feature assignment hash. This
+ # hash should be created manually and will extend/overwrite the automatically created hash.
+ #
+ def copy_features(options={})
+ hash = {}
+ @current_object.class.ecore.eAllStructuralFeatures.each do |f|
+ next if f.derived
+ next if options[:except] && options[:except].include?(f.name.to_sym)
+ hash[f.name.to_sym] = trans(@current_object.send(f.name))
+ end
+ hash.merge!(yield) if block_given?
+ hash
+ end
+
+ def _transformProperties(obj, block_desc) #:nodoc:
+ old_object, @current_object = @current_object, obj
+ block_result = instance_eval(&block_desc.block)
+ raise StandardError.new("Transformer must return a hash") unless block_result.is_a? Hash
+ @current_object = old_object
+ _attributesFromHash(@transformer_results[obj], block_result)
+ end
+
+ class TransformerJob #:nodoc:
+ def initialize(transformer, obj, block_desc)
+ @transformer, @obj, @block_desc = transformer, obj, block_desc
+ end
+ def execute
+ @transformer._transformProperties(@obj, @block_desc)
+ end
+ end
+
+ # Each call which is not handled by the transformer object is passed to the object
+ # currently being transformed.
+ # If that object also does not respond to the call, it is treated as a transformer
+ # method call (see Transformer.method).
+ #
+ def method_missing(m, *args) #:nodoc:
+ if @current_object.respond_to?(m)
+ @current_object.send(m, *args)
+ else
+ _invokeMethod(m, *args)
+ end
+ end
+
+ private
+
+ # returns _transformer_blocks content for clazz or one of its superclasses
+ def _transformerBlock(clazz) # :nodoc:
+ block = self.class._transformer_blocks[clazz]
+ block = _transformerBlock(clazz.superclass) if block.nil? && clazz != Object
+ block
+ end
+
+ # returns the first TransformationDescription for clazz or one of its superclasses
+ # for which condition is true
+ def _evaluateCondition(obj, clazz=obj.class) # :nodoc:
+ tb = self.class._transformer_blocks[clazz]
+ block_description = nil
+ if tb.is_a?(TransformationDescription)
+ # non-conditional
+ block_description = tb
+ elsif tb
+ old_object, @current_object = @current_object, obj
+ tb.each_pair {|condition, block|
+ if condition.is_a?(Proc)
+ result = instance_eval(&condition)
+ elsif condition.is_a?(Symbol)
+ result = _invokeMethod(condition)
+ else
+ result = condition
+ end
+ if result
+ block_description = block
+ break
+ end
+ }
+ @current_object = old_object
+ end
+ block_description = _evaluateCondition(obj, clazz.superclass) if block_description.nil? && clazz != Object
+ block_description
+ end
+
+ def _instantiateTargetClass(obj, target_desc) # :nodoc:
+ old_object, @current_object = @current_object, obj
+ if target_desc.is_a?(Proc)
+ target_class = instance_eval(&target_desc)
+ elsif target_desc.is_a?(Symbol)
+ target_class = _invokeMethod(target_desc)
+ else
+ target_class = target_desc
+ end
+ @current_object = old_object
+ result = target_class.new
+ @env_out << result if @env_out
+ result
+ end
+
+ def _invokeMethod(m) # :nodoc:
+ raise StandardError.new("Method not found: #{m}") unless self.class._methods[m.to_s]
+ instance_eval(&self.class._methods[m.to_s])
+ end
+
+ def _attributesFromHash(obj, hash) # :nodoc:
+ hash.delete(:class)
+ hash.each_pair{|k,v|
+ obj.send("#{k}=", v)
+ }
+ obj
+ end
+
+end
+
+end
\ No newline at end of file
diff --git a/lib/puppet/vendor/rgen/lib/rgen/util/auto_class_creator.rb b/lib/puppet/vendor/rgen/lib/rgen/util/auto_class_creator.rb
new file mode 100644
index 000000000..1bdaa44f0
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/rgen/util/auto_class_creator.rb
@@ -0,0 +1,61 @@
+# RGen Framework
+# (c) Martin Thiede, 2006
+
+require 'rgen/metamodel_builder'
+
+module RGen
+
+module Util
+
+class Base
+ extend MetamodelBuilder
+ def initialize(env=nil)
+ env << self if env
+ end
+end
+
+class AutoCreatedClass < Base
+ def method_missing(m,*args)
+ return super unless self.class.parent.accEnabled
+ if m.to_s =~ /(.*)=$/
+ self.class.has_one($1)
+ send(m,args[0])
+ elsif args.size == 0
+ self.class.has_many(m)
+ send(m)
+ end
+ end
+end
+
+# will be "extended" to the auto created class
+module ParentAccess
+ def parent=(p)
+ @parent = p
+ end
+ def parent
+ @parent
+ end
+end
+
+module AutoClassCreator
+ attr_reader :accEnabled
+ def const_missing(className)
+ return super unless @accEnabled
+ module_eval("class "+className.to_s+" < RGen::AutoCreatedClass; end")
+ c = const_get(className)
+ c.extend(ParentAccess)
+ c.parent = self
+ c
+ end
+ def enableACC
+ @accEnabled = true
+ end
+ def disableACC
+ @accEnabled = false
+ end
+end
+
+end
+
+end
+
diff --git a/lib/puppet/vendor/rgen/lib/rgen/util/cached_glob.rb b/lib/puppet/vendor/rgen/lib/rgen/util/cached_glob.rb
new file mode 100644
index 000000000..c5718c8b6
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/rgen/util/cached_glob.rb
@@ -0,0 +1,67 @@
+module RGen
+
+module Util
+
+# WARNING: the mechanism of taking timestamps of directories in order to find out if the
+# content has changed doesn't work reliably across all kinds of filesystems
+#
+class CachedGlob
+
+ def initialize(dir_glob, file_glob)
+ @dir_glob = dir_glob
+ @file_glob = file_glob
+ @root_dirs = []
+ @dirs = {}
+ @files = {}
+ @timestamps = {}
+ end
+
+ # returns all files contained in directories matched by +dir_glob+ which match +file_glob+.
+ # +file_glob+ must be relative to +dir_glob+.
+ # dir_glob "*/a" with file_glob "**/*.txt" is basically equivalent with Dir.glob("*/a/**/*.txt")
+ # the idea is that the file glob will only be re-eavluated when the content of one of the
+ # directories matched by dir_glob has changed.
+ # this will only be faster than a normal Dir.glob if the number of dirs matched by dir_glob is
+ # relatively large and changes in files affect only a few of them at a time.
+ def glob
+ root_dirs = Dir.glob(@dir_glob)
+ (@root_dirs - root_dirs).each do |d|
+ remove_root_dir(d)
+ end
+ (@root_dirs & root_dirs).each do |d|
+ update_root_dir(d) if dir_changed?(d)
+ end
+ (root_dirs - @root_dirs).each do |d|
+ update_root_dir(d)
+ end
+ @root_dirs = root_dirs
+ @root_dirs.sort.collect{|d| @files[d]}.flatten
+ end
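+
+ # Illustrative usage (the glob patterns are the example ones from the comment above):
+ #
+ #   cg = CachedGlob.new("*/a", "**/*.txt")
+ #   files = cg.glob   # first call evaluates everything
+ #   files = cg.glob   # later calls only re-glob directories whose mtime changed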
+
+ private
+
+ def dir_changed?(dir)
+ @dirs[dir].any?{|d| File.mtime(d) != @timestamps[dir][d]}
+ end
+
+ def update_root_dir(dir)
+ @dirs[dir] = Dir.glob(dir+"/**/")
+ @files[dir] = Dir.glob(dir+"/"+@file_glob)
+ @timestamps[dir] = {}
+ @dirs[dir].each do |d|
+ @timestamps[dir][d] = File.mtime(d)
+ end
+ end
+
+ def remove_root_dir(dir)
+ @dirs.delete(dir)
+ @files.delete(dir)
+ @timestamps.delete(dir)
+ end
+
+end
+
+end
+
+end
+
diff --git a/lib/puppet/vendor/rgen/lib/rgen/util/file_cache_map.rb b/lib/puppet/vendor/rgen/lib/rgen/util/file_cache_map.rb
new file mode 100644
index 000000000..a8464d43b
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/rgen/util/file_cache_map.rb
@@ -0,0 +1,124 @@
+require 'digest'
+require 'fileutils'
+
+module RGen
+
+module Util
+
+# Implements a cache for storing and loading data associated with arbitrary files.
+# The data is stored in cache files within a subfolder of the folder where
+# the associated files exist.
+# The cache files are protected with a checksum and loaded data will be
+# invalid in case either the associated file or the cache file has changed.
+#
+class FileCacheMap
+ # optional program version info to be associated with the cache files
+ # if the program version changes, cached data will also be invalid
+ attr_accessor :version_info
+
+ # +cache_dir+ is the name of the subfolder where cache files are created
+ # +postfix+ is an extension appended to the original file name for
+ # creating the name of the cache file
+ def initialize(cache_dir, postfix)
+ @postfix = postfix
+ @cache_dir = cache_dir
+ end
+
+ # load data associated with file +key_path+
+ # returns :invalid in case either the associated file or the cache file has changed
+ #
+ # options:
+ # :invalidation_reasons:
+ # an array which will receive symbols indicating why the cache is invalid:
+ # :no_cachefile, :cachefile_corrupted, :keyfile_changed
+ #
+ def load_data(key_path, options={})
+ reasons = options[:invalidation_reasons] || []
+ cf = cache_file(key_path)
+ result = nil
+ begin
+ File.open(cf, "rb") do |f|
+ header = f.read(41)
+ if !header
+ reasons << :cachefile_corrupted
+ return :invalid
+ end
+ checksum = header[0..39]
+ data = f.read
+ if calc_sha1(data) == checksum
+ if calc_sha1_keydata(key_path) == data[0..39]
+ result = data[41..-1]
+ else
+ reasons << :keyfile_changed
+ result = :invalid
+ end
+ else
+ reasons << :cachefile_corrupted
+ result = :invalid
+ end
+ end
+ rescue Errno::ENOENT
+ reasons << :no_cachefile
+ result = :invalid
+ end
+ result
+ end
+
+ # store data +value_data+ associated with file +key_path+
+ def store_data(key_path, value_data)
+ data = calc_sha1_keydata(key_path) + "\n" + value_data
+ data = calc_sha1(data) + "\n" + data
+ cf = cache_file(key_path)
+ FileUtils.mkdir(File.dirname(cf)) rescue Errno::EEXIST
+ File.open(cf, "wb") do |f|
+ f.write(data)
+ end
+ end
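+
+ # Illustrative usage (cache directory, postfix and file name are arbitrary examples):
+ #
+ #   cache = FileCacheMap.new(".cache", ".bin")
+ #   cache.version_info = "my-tool-1.0"
+ #   cache.store_data("some/file.txt", serialized_data)
+ #   data = cache.load_data("some/file.txt")   # => serialized_data, or :invalid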
+
+ # remove cache files which are not associated with any file in +key_paths+
+ # will only remove files within +root_path+
+ def clean_unused(root_path, key_paths)
+ raise "key paths must be within root path" unless key_paths.all?{|p| p.index(root_path) == 0}
+ used_files = key_paths.collect{|p| cache_file(p)}
+ files = Dir[root_path+"/**/"+@cache_dir+"/*"+@postfix]
+ files.each do |f|
+ FileUtils.rm(f) unless used_files.include?(f)
+ end
+ end
+
+private
+
+ def cache_file(path)
+ File.dirname(path) + "/"+@cache_dir+"/" + File.basename(path) + @postfix
+ end
+
+ def calc_sha1(data)
+ sha1 = Digest::SHA1.new
+ sha1.update(data)
+ sha1.hexdigest
+ end
+
+ def keyData(path)
+ File.read(path)+@version_info.to_s
+ end
+
+ # this method is much faster than calling +keyData+ and putting the result in +calc_sha1+;
+ # the reason is probably that not as many big strings are created
+ def calc_sha1_keydata(path)
+ begin
+ sha1 = Digest::SHA1.new
+ sha1.file(path)
+ sha1.update(@version_info.to_s)
+ sha1.hexdigest
+ rescue Errno::ENOENT
+ "<missing_key_file>"
+ end
+ end
+
+end
+
+end
+
+end
+
+
diff --git a/lib/puppet/vendor/rgen/lib/rgen/util/file_change_detector.rb b/lib/puppet/vendor/rgen/lib/rgen/util/file_change_detector.rb
new file mode 100644
index 000000000..061802efc
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/rgen/util/file_change_detector.rb
@@ -0,0 +1,84 @@
+require 'digest'
+
+module RGen
+
+module Util
+
+# The FileChangeDetector detects changes in a set of files.
+# Changes are detected between successive calls to check_files with a given set of files.
+# Changes include files being added, removed or having changed their content.
+#
+class FileChangeDetector
+
+ FileInfo = Struct.new(:timestamp, :digest)
+
+ # Create a FileChangeDetector, options include:
+ #
+ # :file_added
+ # a proc which is called when a file is added, receives the filename
+ #
+ # :file_removed
+ # a proc which is called when a file is removed, receives the filename
+ #
+ # :file_changed
+ # a proc which is called when a file is changed, receives the filename
+ #
+ def initialize(options={})
+ @file_added = options[:file_added]
+ @file_removed = options[:file_removed]
+ @file_changed = options[:file_changed]
+ @file_info = {}
+ end
+
+ # Checks if any of the files has changed compared to the last call of check_files.
+ # When called for the first time on a new object, all files will be reported as being added.
+ #
+ def check_files(files)
+ files_before = @file_info.keys
+ used_files = {}
+ files.each do |file|
+ begin
+ if @file_info[file]
+ if @file_info[file].timestamp != File.mtime(file)
+ @file_info[file].timestamp = File.mtime(file)
+ digest = calc_digest(file)
+ if @file_info[file].digest != digest
+ @file_info[file].digest = digest
+ @file_changed && @file_changed.call(file)
+ end
+ end
+ else
+ @file_info[file] = FileInfo.new
+ @file_info[file].timestamp = File.mtime(file)
+ @file_info[file].digest = calc_digest(file)
+ @file_added && @file_added.call(file)
+ end
+ used_files[file] = true
+ # protect against missing files
+ rescue Errno::ENOENT
+ # used_files is not set and @file_info will be removed below
+ # notification hook hasn't been called yet since it comes after file accesses
+ end
+ end
+ files_before.each do |file|
+ if !used_files[file]
+ @file_info.delete(file)
+ @file_removed && @file_removed.call(file)
+ end
+ end
+ end
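+
+ # Illustrative usage (the glob and the procs are example callbacks):
+ #
+ #   detector = FileChangeDetector.new(
+ #     :file_added   => proc {|f| puts "added:   #{f}"},
+ #     :file_removed => proc {|f| puts "removed: #{f}"},
+ #     :file_changed => proc {|f| puts "changed: #{f}"})
+ #   detector.check_files(Dir.glob("lib/**/*.rb"))   # first call reports all files as added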
+
+ private
+
+ def calc_digest(file)
+ sha1 = Digest::SHA1.new
+ sha1.file(file)
+ sha1.hexdigest
+ end
+
+end
+
+end
+
+end
+
diff --git a/lib/puppet/vendor/rgen/lib/rgen/util/method_delegation.rb b/lib/puppet/vendor/rgen/lib/rgen/util/method_delegation.rb
new file mode 100644
index 000000000..7d540e944
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/rgen/util/method_delegation.rb
@@ -0,0 +1,114 @@
+module RGen
+
+module Util
+
+module MethodDelegation
+
+class << self
+
+ def registerDelegate(delegate, object, method)
+ method = method.to_sym
+ createDelegateStore(object)
+ if object._methodDelegates[method]
+ object._methodDelegates[method] << delegate
+ else
+ object._methodDelegates[method] = [delegate]
+ createDelegatingMethod(object, method)
+ end
+ end
+
+ def unregisterDelegate(delegate, object, method)
+ method = method.to_sym
+ return unless object.respond_to?(:_methodDelegates)
+ return unless object._methodDelegates[method]
+ object._methodDelegates[method].delete(delegate)
+ if object._methodDelegates[method].empty?
+ object._methodDelegates[method] = nil
+ removeDelegatingMethod(object, method)
+ removeDelegateStore(object)
+ end
+ end
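+
+ # Illustrative usage sketch (Tracer and obj are hypothetical; a delegate implements
+ # "<method>_delegated" and may throw :continue to fall through to the original method):
+ #
+ #   class Tracer
+ #     def hello_delegated(obj, *args)
+ #       puts "hello(#{args.inspect}) called"
+ #       throw :continue
+ #     end
+ #   end
+ #   MethodDelegation.registerDelegate(Tracer.new, obj, :hello)
+ #   obj.hello("world")   # prints the trace, then runs the original hello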
+
+ private
+
+ def createDelegateStore(object)
+ return if object.respond_to?(:_methodDelegates)
+ class << object
+ def _methodDelegates
+ @_methodDelegates ||= {}
+ end
+ end
+ end
+
+ def removeDelegateStore(object)
+ return unless object.respond_to?(:_methodDelegates)
+ class << object
+ remove_method(:_methodDelegates)
+ end
+ end
+
+ def createDelegatingMethod(object, method)
+ if hasMethod(object, method)
+ object.instance_eval <<-END
+ class << self
+ alias #{aliasMethodName(method)} #{method}
+ end
+ END
+ end
+
+ # define the delegating method
+ object.instance_eval <<-END
+ class << self
+ def #{method}(*args, &block)
+ @_methodDelegates[:#{method}].each do |d|
+ catch(:continue) do
+ return d.#{method}_delegated(self, *args, &block)
+ end
+ end
+ # if aliased method does not exist, we want an exception
+ #{aliasMethodName(method)}(*args, &block)
+ end
+ end
+ END
+ end
+
+ def removeDelegatingMethod(object, method)
+ if hasMethod(object, aliasMethodName(method))
+ # there is an aliased original, restore it
+ object.instance_eval <<-END
+ class << self
+ alias #{method} #{aliasMethodName(method)}
+ remove_method(:#{aliasMethodName(method)})
+ end
+ END
+ else
+ # just delete the delegating method
+ object.instance_eval <<-END
+ class << self
+ remove_method(:#{method})
+ end
+ END
+ end
+ end
+
+ def hasMethod(object, method)
+ # in Ruby 1.9, #methods returns symbols
+ if object.methods.first.is_a?(Symbol)
+ method = method.to_sym
+ else
+ method = method.to_s
+ end
+ object.methods.include?(method)
+ end
+
+ def aliasMethodName(method)
+ "#{method}_delegate_original"
+ end
+end
+
+end
+
+end
+
+end
+
diff --git a/lib/puppet/vendor/rgen/lib/rgen/util/model_comparator.rb b/lib/puppet/vendor/rgen/lib/rgen/util/model_comparator.rb
new file mode 100644
index 000000000..f17ebc722
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/rgen/util/model_comparator.rb
@@ -0,0 +1,68 @@
+require 'rgen/ecore/ecore'
+
+module RGen
+
+module Util
+
+module ModelComparator
+
+# This method compares two models for equality.
+# For this purpose, the identity of a model element is defined based on the identity
+# of all its attributes and referenced elements.
+# Arrays are sorted before comparison if possible (if <=> is provided).
+#
+def modelEqual?(a, b, featureIgnoreList=[])
+ @modelEqual_visited = {}
+ _modelEqual_internal(a, b, featureIgnoreList, [])
+end
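+
+# Illustrative usage (model_a and model_b are hypothetical root elements; "name" is
+# an example feature to ignore):
+#
+#   include RGen::Util::ModelComparator
+#   modelEqual?(model_a, model_b, ["name"])   # => true or false, differences are printed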
+
+def _modelEqual_internal(a, b, featureIgnoreList, path)
+ return true if @modelEqual_visited[[a,b]]
+ @modelEqual_visited[[a,b]] = true
+ return true if a.nil? && b.nil?
+ unless a.class == b.class
+ puts "#{path.inspect}\n Classes differ: #{a} vs. #{b}"
+ return false
+ end
+ if a.is_a?(Array)
+ unless a.size == b.size
+ puts "#{path.inspect}\n Array size differs: #{a.size} vs. #{b.size}"
+ return false
+ end
+ begin
+ # in Ruby 1.9 every object has the <=> operator but the default one returns
+ # nil and thus sorting won't work (ArgumentError)
+ as = a.sort
+ rescue ArgumentError, NoMethodError
+ as = a
+ end
+ begin
+ bs = b.sort
+ rescue ArgumentError, NoMethodError
+ bs = b
+ end
+ a.each_index do |i|
+ return false unless _modelEqual_internal(as[i], bs[i], featureIgnoreList, path+[i])
+ end
+ else
+ a.class.ecore.eAllStructuralFeatures.reject{|f| f.derived}.each do |feat|
+ next if featureIgnoreList.include?(feat.name)
+ if feat.eType.is_a?(RGen::ECore::EDataType)
+ unless a.getGeneric(feat.name) == b.getGeneric(feat.name)
+ puts "#{path.inspect}\n Value '#{feat.name}' differs: #{a.getGeneric(feat.name)} vs. #{b.getGeneric(feat.name)}"
+ return false
+ end
+ else
+ return false unless _modelEqual_internal(a.getGeneric(feat.name), b.getGeneric(feat.name), featureIgnoreList, path+["#{a.respond_to?(:name) && a.name}:#{feat.name}"])
+ end
+ end
+ end
+ return true
+end
+
+end
+
+end
+
+end
+
diff --git a/lib/puppet/vendor/rgen/lib/rgen/util/model_comparator_base.rb b/lib/puppet/vendor/rgen/lib/rgen/util/model_comparator_base.rb
new file mode 100644
index 000000000..bf8137f8c
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/rgen/util/model_comparator_base.rb
@@ -0,0 +1,142 @@
+require 'andand'
+
+module RGen
+
+module Util
+
+class ModelComparatorBase
+
+ CompareSpec = Struct.new(:identifier, :recurse, :filter, :ignore_features, :display_name, :sort)
+ INDENT = " "
+
+ class << self
+ attr_reader :compareSpecs
+
+ def compare_spec(clazz, hash)
+ @compareSpecs ||= {}
+ raise "Compare spec already defined for #{clazz}" if @compareSpecs[clazz]
+ spec = CompareSpec.new
+ hash.each_pair do |k,v|
+ spec.send("#{k}=",v)
+ end
+ @compareSpecs[clazz] = spec
+ end
+ end
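+
+ # Illustrative subclass (MyClass and its "name"/"uuid" features are hypothetical):
+ #
+ #   class MyComparator < ModelComparatorBase
+ #     compare_spec MyClass,
+ #       :identifier      => proc {|e| e.name},
+ #       :ignore_features => [:uuid]
+ #   end
+ #   puts MyComparator.new.compare(elements_a, elements_b)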
+
+ # compares two sets of elements
+ def compare(as, bs, recursive=true)
+ result = []
+ aById = as.select{|e| useElement?(e)}.inject({}){|r, e| r[elementIdentifier(e)] = e; r}
+ bById = bs.select{|e| useElement?(e)}.inject({}){|r, e| r[elementIdentifier(e)] = e; r}
+ onlyA = sortElements((aById.keys - bById.keys).collect{|id| aById[id]})
+ onlyB = sortElements((bById.keys - aById.keys).collect{|id| bById[id]})
+ aAndB = sortElementPairs((aById.keys & bById.keys).collect{|id| [aById[id], bById[id]]})
+ onlyA.each do |e|
+ result << "- #{elementDisplayName(e)}"
+ end
+ onlyB.each do |e|
+ result << "+ #{elementDisplayName(e)}"
+ end
+ if recursive
+ aAndB.each do |ab|
+ a, b = *ab
+ r = compareElements(a, b)
+ if r.size > 0
+ result << "#{elementDisplayName(a)}"
+ result += r.collect{|l| INDENT+l}
+ end
+ end
+ end
+ result
+ end
+
+ def sortElementPairs(pairs)
+ pairs.sort do |x,y|
+ a, b = x[0], y[0]
+ r = a.class.name <=> b.class.name
+ r = compareSpec(a).sort.call(a,b) if r == 0 && compareSpec(a) && compareSpec(a).sort
+ r
+ end
+ end
+
+ def sortElements(elements)
+ elements.sort do |a,b|
+ r = a.class.name <=> b.class.name
+ r = compareSpec(a).sort.call(a,b) if r == 0 && compareSpec(a) && compareSpec(a).sort
+ r
+ end
+ end
+
+ def elementDisplayName(e)
+ if compareSpec(e) && compareSpec(e).display_name
+ compareSpec(e).display_name.call(e)
+ else
+ elementIdentifier(e)
+ end
+ end
+
+ def compareElements(a, b)
+ result = []
+ if a.class != b.class
+ result << "Class: #{a.class} -> #{b.class}"
+ else
+ a.class.ecore.eAllStructuralFeatures.reject{|f| f.derived || compareSpec(a).andand.ignore_features.andand.include?(f.name.to_sym)}.each do |f|
+ va, vb = a.getGeneric(f.name), b.getGeneric(f.name)
+ if f.is_a?(RGen::ECore::EAttribute)
+ r = compareValues(f.name, va, vb)
+ result << r if r
+ else
+ va, vb = [va].compact, [vb].compact unless f.many
+ r = compare(va, vb, f.containment || compareSpec(a).andand.recurse.andand.include?(f.name.to_sym))
+ if r.size > 0
+ result << "[#{f.name}]"
+ result += r.collect{|l| INDENT+l}
+ end
+ end
+ end
+ end
+ result
+ end
+
+ def compareValues(name, val1, val2)
+ result = nil
+ result = "[#{name}] #{val1} -> #{val2}" if val1 != val2
+ result
+ end
+
+ def elementIdentifier(element)
+ cs = compareSpec(element)
+ if cs && cs.identifier
+ if cs.identifier.is_a?(Proc)
+ cs.identifier.call(element)
+ else
+ cs.identifier
+ end
+ else
+ if element.respond_to?(:name)
+ element.name
+ else
+ element.object_id
+ end
+ end
+ end
+
+ def useElement?(element)
+ cs = compareSpec(element)
+ !(cs && cs.filter) || cs.filter.call(element)
+ end
+
+ def compareSpec(element)
+ @compareSpec ||= {}
+ return @compareSpec[element.class] if @compareSpec[element.class]
+ return nil unless self.class.compareSpecs
+ key = self.class.compareSpecs.keys.find{|k| element.is_a?(k)}
+ @compareSpec[element.class] = self.class.compareSpecs[key]
+ end
+
+end
+
+end
+
+end
+
diff --git a/lib/puppet/vendor/rgen/lib/rgen/util/model_dumper.rb b/lib/puppet/vendor/rgen/lib/rgen/util/model_dumper.rb
new file mode 100644
index 000000000..20fc2d886
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/rgen/util/model_dumper.rb
@@ -0,0 +1,29 @@
+module RGen
+
+module Util
+
+module ModelDumper
+
+ def dump(obj=nil)
+ obj ||= self
+ if obj.is_a?(Array)
+ obj.collect {|o| dump(o)}.join("\n\n")
+ elsif obj.class.respond_to?(:ecore)
+ ([obj.to_s] +
+ obj.class.ecore.eAllStructuralFeatures.select{|f| !f.many}.collect { |a|
+ " #{a} => #{obj.getGeneric(a.name)}"
+ } +
+ obj.class.ecore.eAllStructuralFeatures.select{|f| f.many}.collect { |a|
+ " #{a} => [ #{obj.getGeneric(a.name).join(', ')} ]"
+ }).join("\n")
+ else
+ obj.to_s
+ end
+ end
+
+end
+
+end
+
+end
+
diff --git a/lib/puppet/vendor/rgen/lib/rgen/util/name_helper.rb b/lib/puppet/vendor/rgen/lib/rgen/util/name_helper.rb
new file mode 100644
index 000000000..2f827de0c
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/rgen/util/name_helper.rb
@@ -0,0 +1,42 @@
+# RGen Framework
+# (c) Martin Thiede, 2006
+
+module RGen
+
+module Util
+
+module NameHelper
+
+ def normalize(name)
+ name.gsub(/\W/,'_')
+ end
+
+ def className(object)
+ object.class.name =~ /::(\w+)$/; $1
+ end
+
+ def firstToUpper(str)
+ str[0..0].upcase + ( str[1..-1] || "" )
+ end
+
+ def firstToLower(str)
+ str[0..0].downcase + ( str[1..-1] || "" )
+ end
+
+ def saneClassName(str)
+ firstToUpper(normalize(str)).sub(/^Class$/, 'Clazz')
+ end
+
+ def saneMethodName(str)
+ firstToLower(normalize(str)).sub(/^class$/, 'clazz')
+ end
+
+ def camelize(str)
+ str.split(/[\W_]/).collect{|s| firstToUpper(s.downcase)}.join
+ end
+end
+
+end
+
+end
+
diff --git a/lib/puppet/vendor/rgen/lib/rgen/util/pattern_matcher.rb b/lib/puppet/vendor/rgen/lib/rgen/util/pattern_matcher.rb
new file mode 100644
index 000000000..0429cd154
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/rgen/util/pattern_matcher.rb
@@ -0,0 +1,329 @@
+module RGen
+
+module Util
+
+# A PatternMatcher can be used to find, insert and remove patterns on a given model.
+#
+# A pattern is specified by means of a block passed to the add_pattern method.
+# The block must take an Environment as first parameter and at least one model element
+# as connection point as further parameter. The pattern matches if it can be found
+# in a given environment and connected to the given connection point elements.
+#
+class PatternMatcher
+
+ Match = Struct.new(:root, :elements, :bound_values)
+ attr_accessor :debug
+
+ def initialize
+ @patterns = {}
+ @insert_mode = false
+ @debug = false
+ end
+
+ def add_pattern(name, &block)
+ raise "a pattern needs at least 2 block parameters: " +
+ "an RGen environment and a model element as connection point" \
+ unless block.arity >= 2
+ @patterns[name] = block
+ end
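+
+ # Illustrative usage sketch (the Child metamodel class, its features and
+ # parent_element are hypothetical; parent_element is the connection point,
+ # the remaining block parameter becomes a bindable pattern variable):
+ #
+ #   matcher = PatternMatcher.new
+ #   matcher.add_pattern(:child_of) do |env, parent, name|
+ #     env.new(Child, :name => name, :parent => parent)
+ #   end
+ #   match = matcher.find_pattern(env, :child_of, parent_element)
+ #   match.bound_values.first if match   # the name of the matching child, if any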
+
+ def find_pattern(env, name, *connection_points)
+ match = find_pattern_internal(env, name, *connection_points)
+ end
+
+ def insert_pattern(env, name, *connection_points)
+ @insert_mode = true
+ root = evaluate_pattern(name, env, connection_points)
+ @insert_mode = false
+ root
+ end
+
+ def remove_pattern(env, name, *connection_points)
+ match = find_pattern_internal(env, name, *connection_points)
+ if match
+ match.elements.each do |e|
+ disconnect_element(e)
+ env.delete(e)
+ end
+ match
+ else
+ nil
+ end
+ end
+
+ def lazy(&block)
+ if @insert_mode
+ block.call
+ else
+ Lazy.new(&block)
+ end
+ end
+
+ class Lazy < RGen::MetamodelBuilder::MMGeneric
+ def initialize(&block)
+ @block = block
+ end
+ def _eval
+ @block.call
+ end
+ end
+
+ private
+
+ class Proxy < RGen::MetamodelBuilder::MMProxy
+ attr_reader :_target
+ def initialize(target)
+ @_target = target
+ end
+ def method_missing(m, *args)
+ result = @_target.send(m, *args)
+ if result.is_a?(Array)
+ result.collect do |e|
+ if e.is_a?(RGen::MetamodelBuilder::MMBase)
+ Proxy.new(e)
+ else
+ e
+ end
+ end
+ else
+ if result.is_a?(RGen::MetamodelBuilder::MMBase)
+ Proxy.new(result)
+ else
+ result
+ end
+ end
+ end
+ end
+
+ class Bindable < RGen::MetamodelBuilder::MMGeneric
+ # by being an Enumerable, Bindables can be used for many-features as well
+ include Enumerable
+ def initialize
+ @bound = false
+ @value = nil
+ @many = false
+ end
+ def _bound?
+ @bound
+ end
+ def _many?
+ @many
+ end
+ def _bind(value)
+ @value = value
+ @bound = true
+ end
+ def _value
+ @value
+ end
+ def to_s
+ @value.to_s
+ end
+ # pretend this is an enumerable which contains itself, so the bindable can be
+ # inserted into many-features; when this is done, the bindable is marked as a many-bindable
+ def each
+ @many = true
+ yield(self)
+ end
+ def method_missing(m, *args)
+ raise "bindable not bound" unless _bound?
+ @value.send(m, *args)
+ end
+ end
+
+ TempEnv = RGen::Environment.new
+ class << TempEnv
+ def <<(el); end
+ end
+
+ def find_pattern_internal(env, name, *connection_points)
+ proxied_args = connection_points.collect{|a|
+ a.is_a?(RGen::MetamodelBuilder::MMBase) ? Proxy.new(a) : a }
+ bindables = create_bindables(name, connection_points)
+ pattern_root = evaluate_pattern(name, TempEnv, proxied_args+bindables)
+ candidates = candidates_via_connection_points(pattern_root, connection_points)
+ candidates ||= env.find(:class => pattern_root.class)
+ candidates.each do |e|
+ # create new bindables for every try, otherwise they could be bound to old values
+ bindables = create_bindables(name, connection_points)
+ pattern_root = evaluate_pattern(name, TempEnv, proxied_args+bindables)
+ matched = match(pattern_root, e)
+ return Match.new(e, matched, bindables.collect{|b| b._value}) if matched
+ end
+ nil
+ end
+
+ def create_bindables(pattern_name, connection_points)
+ (1..(num_pattern_variables(pattern_name) - connection_points.size)).collect{|i| Bindable.new}
+ end
+
+ def candidates_via_connection_points(pattern_root, connection_points)
+ @candidates_via_connection_points_refs ||= {}
+ refs = (@candidates_via_connection_points_refs[pattern_root.class] ||=
+ pattern_root.class.ecore.eAllReferences.reject{|r| r.derived || r.many || !r.eOpposite})
+ candidates = nil
+ refs.each do |r|
+ t = pattern_root.getGeneric(r.name)
+ cp = t.is_a?(Proxy) && connection_points.find{|cp| cp.object_id == t._target.object_id}
+ if cp
+ elements = cp.getGenericAsArray(r.eOpposite.name)
+ candidates = elements if candidates.nil? || elements.size < candidates.size
+ end
+ end
+ candidates
+ end
+
+ def match(pat_element, test_element)
+ visited = {}
+ check_later = []
+ return false unless match_internal(pat_element, test_element, visited, check_later)
+ while cl = check_later.shift
+ pv, tv = cl.lazy._eval, cl.value
+ if cl.feature.is_a?(RGen::ECore::EAttribute)
+ unless pv == tv
+ match_failed(cl.feature, "wrong attribute value (lazy): #{pv} vs. #{tv}")
+ return false
+ end
+ else
+ if pv.is_a?(Proxy)
+ unless pv._target.object_id == tv.object_id
+ match_failed(cl.feature, "wrong target object")
+ return false
+ end
+ else
+ unless (pv.nil? && tv.nil?) || (!pv.nil? && !tv.nil? && match_internal(pv, tv, visited, check_later))
+ return false
+ end
+ end
+ end
+ end
+ visited.keys
+ end
+
+ CheckLater = Struct.new(:feature, :lazy, :value)
+ def match_internal(pat_element, test_element, visited, check_later)
+ return true if visited[test_element]
+ visited[test_element] = true
+ unless pat_element.class == test_element.class
+ match_failed(nil, "wrong class: #{pat_element.class} vs #{test_element.class}")
+ return false
+ end
+ all_structural_features(pat_element).each do |f|
+ pat_values = pat_element.getGeneric(f.name)
+ # nil values must be kept to support size check with Bindables
+ pat_values = [ pat_values ] unless pat_values.is_a?(Array)
+ test_values = test_element.getGeneric(f.name)
+ test_values = [ test_values] unless test_values.is_a?(Array)
+ if pat_values.size == 1 && pat_values.first.is_a?(Bindable) && pat_values.first._many?
+ unless match_many_bindable(f, pat_values.first, test_values, visited, check_later)
+ return false
+ end
+ else
+ unless pat_values.size == test_values.size
+ match_failed(f, "wrong size #{pat_values.size} vs. #{test_values.size}")
+ return false
+ end
+ pat_values.each_with_index do |pv,i|
+ tv = test_values[i]
+ if pv.is_a?(Lazy)
+ check_later << CheckLater.new(f, pv, tv)
+ elsif pv.is_a?(Bindable)
+ if pv._bound?
+ unless pv._value == tv
+ match_failed(f, "value does not match bound value #{pv._value.class}:#{pv._value.object_id} vs. #{tv.class}:#{tv.object_id}")
+ return false
+ end
+ else
+ pv._bind(tv)
+ end
+ else
+ if f.is_a?(RGen::ECore::EAttribute)
+ unless pv == tv
+ match_failed(f, "wrong attribute value")
+ return false
+ end
+ else
+ if pv.is_a?(Proxy)
+ unless pv._target.object_id == tv.object_id
+ match_failed(f, "wrong target object")
+ return false
+ end
+ else
+ unless both_nil_or_match(pv, tv, visited, check_later)
+ return false
+ end
+ end
+ end
+ end
+ end
+ end
+ end
+ true
+ end
+
+ def match_many_bindable(f, bindable, test_values, visited, check_later)
+ if bindable._bound?
+ bindable._value.each_with_index do |pv,i|
+ tv = test_values[i]
+ if f.is_a?(RGen::ECore::EAttribute)
+ unless pv == tv
+ match_failed(f, "wrong attribute value")
+ return false
+ end
+ else
+ unless both_nil_or_match(pv, tv, visited, check_later)
+ return false
+ end
+ end
+ end
+ else
+ bindable._bind(test_values.dup)
+ end
+ true
+ end
+
+ def both_nil_or_match(pv, tv, visited, check_later)
+ (pv.nil? && tv.nil?) || (!pv.nil? && !tv.nil? && match_internal(pv, tv, visited, check_later))
+ end
+
+ def match_failed(f, msg)
+ puts "match failed #{f&&f.eContainingClass.name}##{f&&f.name}: #{msg}" if @debug
+ end
+
+ def num_pattern_variables(name)
+ prok = @patterns[name]
+ prok.arity - 1
+ end
+
+ def evaluate_pattern(name, env, connection_points)
+ prok = @patterns[name]
+ raise "unknown pattern #{name}" unless prok
+ raise "wrong number of arguments, expected #{prok.arity-1} connection points)" \
+ unless connection_points.size == prok.arity-1
+ prok.call(env, *connection_points)
+ end
+
+ def disconnect_element(element)
+ return if element.nil?
+ all_structural_features(element).each do |f|
+ if f.many
+ element.setGeneric(f.name, [])
+ else
+ element.setGeneric(f.name, nil)
+ end
+ end
+ end
+
+ def all_structural_features(element)
+ @all_structural_features ||= {}
+ return @all_structural_features[element.class] if @all_structural_features[element.class]
+ @all_structural_features[element.class] =
+ element.class.ecore.eAllStructuralFeatures.reject{|f| f.derived}
+ end
+
+end
+
+end
+
+end
+
diff --git a/lib/puppet/vendor/rgen/lib/transformers/ecore_to_uml13.rb b/lib/puppet/vendor/rgen/lib/transformers/ecore_to_uml13.rb
new file mode 100644
index 000000000..cbd832a35
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/transformers/ecore_to_uml13.rb
@@ -0,0 +1,79 @@
+require 'rgen/transformer'
+require 'rgen/ecore/ecore'
+require 'metamodels/uml13_metamodel'
+
+class ECoreToUML13 < RGen::Transformer
+ include RGen::ECore
+
+ def transform
+ trans(:class => EPackage)
+ trans(:class => EClass)
+ trans(:class => EEnum)
+ end
+
+ transform EPackage, :to => UML13::Package do
+ {:name => name,
+ :namespace => trans(eSuperPackage) || model,
+ :ownedElement => trans(eClassifiers.select{|c| c.is_a?(EClass)} + eSubpackages)
+ }
+ end
+
+ transform EClass, :to => UML13::Class do
+ {:name => name,
+ :namespace => trans(ePackage),
+ :feature => trans(eStructuralFeatures.select{|f| f.is_a?(EAttribute)} + eOperations),
+ :associationEnd => trans(eStructuralFeatures.select{|f| f.is_a?(EReference)}),
+ :generalization => eSuperTypes.collect { |st|
+ @env_out.new(UML13::Generalization, :parent => trans(st), :namespace => trans(ePackage) || model)
+ }
+ }
+ end
+
+ transform EEnum, :to => UML13::Class do
+ {:name => name,
+ :namespace => trans(ePackage),
+ :feature => trans(eLiterals)
+ }
+ end
+
+ transform EEnumLiteral, :to => UML13::Attribute do
+ {:name => name }
+ end
+
+ transform EAttribute, :to => UML13::Attribute do
+ _typemap = {"String" => "string", "Boolean" => "boolean", "Integer" => "int", "Float" => "float"}
+ {:name => name,
+ :taggedValue => [@env_out.new(UML13::TaggedValue, :tag => "type",
+ :value => _typemap[eType.instanceClassName] || eType.name)]
+ }
+ end
+
+ transform EReference, :to => UML13::AssociationEnd do
+ _otherAssocEnd = eOpposite ? trans(eOpposite) :
+ @env_out.new(UML13::AssociationEnd,
+ :type => trans(eType), :name => name, :multiplicity => createMultiplicity(@current_object),
+ :aggregation => :none, :isNavigable => true)
+ { :association => trans(@current_object).association || @env_out.new(UML13::Association,
+ :connection => [_otherAssocEnd], :namespace => trans(eContainingClass.ePackage) || model),
+ :name => eOpposite && eOpposite.name,
+ :multiplicity => eOpposite && createMultiplicity(eOpposite),
+ :aggregation => containment ? :composite : :none,
+ :isNavigable => !eOpposite.nil?
+ }
+ end
+
+ transform EOperation, :to => UML13::Operation do
+ {:name => name}
+ end
+
+ def createMultiplicity(ref)
+ @env_out.new(UML13::Multiplicity, :range => [
+ @env_out.new(UML13::MultiplicityRange,
+ :lower => ref.lowerBound.to_s.sub("-1","*"), :upper => ref.upperBound.to_s.sub("-1","*"))])
+ end
+
+ def model
+ @model ||= @env_out.new(UML13::Model, :name => "Model")
+ end
+
+end
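ECoreToUML13 follows the usual RGen transformer pattern: construct it with an input and an output Environment, then call `transform`. A minimal usage sketch, not part of the patch; the constructor call mirrors how UML13ToECore is driven in the tests further down, and `env_ecore` is assumed to already contain an ECore model:

````
require 'rgen/environment'
require 'transformers/ecore_to_uml13'

env_ecore = RGen::Environment.new   # assumed to be filled with EPackages/EClasses elsewhere
env_uml   = RGen::Environment.new
ECoreToUML13.new(env_ecore, env_uml).transform
packages  = env_uml.find(:class => UML13::Package)
````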
diff --git a/lib/puppet/vendor/rgen/lib/transformers/uml13_to_ecore.rb b/lib/puppet/vendor/rgen/lib/transformers/uml13_to_ecore.rb
new file mode 100644
index 000000000..1bc57ba62
--- /dev/null
+++ b/lib/puppet/vendor/rgen/lib/transformers/uml13_to_ecore.rb
@@ -0,0 +1,127 @@
+require 'metamodels/uml13_metamodel'
+require 'rgen/transformer'
+require 'rgen/ecore/ecore'
+require 'rgen/array_extensions'
+
+class UML13ToECore < RGen::Transformer
+ include RGen::ECore
+
+ # Options:
+ #
+ # :reference_filter:
+ # a proc which receives an AssociationEnd or a Dependency and should return
+ # true or false, depending on whether a reference should be created for it
+ # (see the usage sketch after this file)
+ #
+ def initialize(*args)
+ options = {}
+ if args.last.is_a?(Hash)
+ options = args.pop
+ end
+ @reference_filter = options[:reference_filter] || proc do |e|
+ if e.is_a?(UML13::AssociationEnd)
+ otherEnd = e.association.connection.find{|ae| ae != e}
+ otherEnd.name && otherEnd.name.size > 0
+ else
+ false
+ end
+ end
+ super(*args)
+ end
+
+ def transform
+ trans(:class => UML13::Class)
+ end
+
+ transform UML13::Model, :to => EPackage do
+ trans(ownedClassOrPackage)
+ { :name => name && name.strip }
+ end
+
+ transform UML13::Package, :to => EPackage do
+ trans(ownedClassOrPackage)
+ { :name => name && name.strip,
+ :eSuperPackage => trans(namespace.is_a?(UML13::Package) ? namespace : nil) }
+ end
+
+ method :ownedClassOrPackage do
+ ownedElement.select{|e| e.is_a?(UML13::Package) || e.is_a?(UML13::Class)}
+ end
+
+ transform UML13::Class, :to => EClass do
+ { :name => name && name.strip,
+ :abstract => isAbstract,
+ :ePackage => trans(namespace.is_a?(UML13::Package) ? namespace : nil),
+ :eStructuralFeatures => trans(feature.select{|f| f.is_a?(UML13::Attribute)} +
+ associationEnd + clientDependency),
+ :eOperations => trans(feature.select{|f| f.is_a?(UML13::Operation)}),
+ :eSuperTypes => trans(generalization.parent + clientDependency.select{|d| d.stereotype && d.stereotype.name == "implements"}.supplier),
+ :eAnnotations => createAnnotations(taggedValue) }
+ end
+
+ transform UML13::Interface, :to => EClass do
+ { :name => name && name.strip,
+ :abstract => isAbstract,
+ :ePackage => trans(namespace.is_a?(UML13::Package) ? namespace : nil),
+ :eStructuralFeatures => trans(feature.select{|f| f.is_a?(UML13::Attribute)} + associationEnd),
+ :eOperations => trans(feature.select{|f| f.is_a?(UML13::Operation)}),
+ :eSuperTypes => trans(generalization.parent)}
+ end
+
+ transform UML13::Attribute, :to => EAttribute do
+ { :name => name && name.strip, :eType => trans(getType),
+ :lowerBound => (multiplicity && multiplicity.range.first.lower &&
+ multiplicity.range.first.lower.to_i) || 0,
+ :upperBound => (multiplicity && multiplicity.range.first.upper &&
+ multiplicity.range.first.upper.gsub('*','-1').to_i) || 1,
+ :eAnnotations => createAnnotations(taggedValue) }
+ end
+
+ transform UML13::DataType, :to => EDataType do
+ { :name => name && name.strip,
+ :ePackage => trans(namespace.is_a?(UML13::Package) ? namespace : nil),
+ :eAnnotations => createAnnotations(taggedValue) }
+ end
+
+ transform UML13::Operation, :to => EOperation do
+ { :name => name && name.strip }
+ end
+
+ transform UML13::AssociationEnd, :to => EReference, :if => :isReference do
+ otherEnd = association.connection.find{|ae| ae != @current_object}
+ { :eType => trans(otherEnd.type),
+ :name => otherEnd.name && otherEnd.name.strip,
+ :eOpposite => trans(otherEnd),
+ :lowerBound => (otherEnd.multiplicity && otherEnd.multiplicity.range.first.lower &&
+ otherEnd.multiplicity.range.first.lower.to_i) || 0,
+ :upperBound => (otherEnd.multiplicity && otherEnd.multiplicity.range.first.upper &&
+ otherEnd.multiplicity.range.first.upper.gsub('*','-1').to_i) || 1,
+ :containment => (aggregation == :composite),
+ :eAnnotations => createAnnotations(association.taggedValue) }
+ end
+
+ transform UML13::Dependency, :to => EReference, :if => :isReference do
+ { :eType => trans(supplier.first),
+ :name => name,
+ :lowerBound => 0,
+ :upperBound => 1,
+ :containment => false,
+ :eAnnotations => createAnnotations(taggedValue)
+ }
+ end
+
+ method :isReference do
+ @reference_filter.call(@current_object)
+ end
+
+ def createAnnotations(taggedValues)
+ if taggedValues.size > 0
+ [ EAnnotation.new(:details => trans(taggedValues)) ]
+ else
+ []
+ end
+ end
+
+ transform UML13::TaggedValue, :to => EStringToStringMapEntry do
+ { :key => tag, :value => value}
+ end
+end
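As the EA tests below show, the transformer is driven as `UML13ToECore.new(env_in, env_out).transform`. The sketch that follows illustrates the `:reference_filter` option documented above; it is not part of the patch, `only_composites` is a made-up filter, and `env_uml` is assumed to already hold a UML 1.3 model:

````
env_uml   = RGen::Environment.new   # e.g. filled via EASupport.instantiateUML13FromXMI11
env_ecore = RGen::Environment.new
only_composites = proc do |e|
  e.is_a?(UML13::AssociationEnd) && e.aggregation == :composite
end
UML13ToECore.new(env_uml, env_ecore, :reference_filter => only_composites).transform
classes = env_ecore.find(:class => RGen::ECore::EClass)
````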
diff --git a/lib/puppet/vendor/rgen/test/array_extensions_test.rb b/lib/puppet/vendor/rgen/test/array_extensions_test.rb
new file mode 100644
index 000000000..e6caa9037
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/array_extensions_test.rb
@@ -0,0 +1,64 @@
+$:.unshift File.join(File.dirname(__FILE__),"..","lib")
+
+require 'test/unit'
+require 'rgen/array_extensions'
+
+class ArrayExtensionsTest < Test::Unit::TestCase
+
+ def test_element_methods
+ c = Struct.new("SomeClass",:name,:age)
+ a = []
+ a << c.new('MyName',33)
+ a << c.new('YourName',22)
+ assert_equal ["MyName", "YourName"], a >> :name
+ assert_raise NoMethodError do
+ a.name
+ end
+ assert_equal [33, 22], a>>:age
+ assert_raise NoMethodError do
+ a.age
+ end
+ # unfortunately, any method can be called on an empty array
+ assert_equal [], [].age
+ end
+
+ class MMBaseClass < RGen::MetamodelBuilder::MMBase
+ has_attr 'name'
+ has_attr 'age', Integer
+ end
+
+ def test_with_mmbase
+ e1 = MMBaseClass.new
+ e1.name = "MyName"
+ e1.age = 33
+ e2 = MMBaseClass.new
+ e2.name = "YourName"
+ e2.age = 22
+ a = [e1, e2]
+ assert_equal ["MyName", "YourName"], a >> :name
+ assert_equal ["MyName", "YourName"], a.name
+ assert_equal [33, 22], a>>:age
+ assert_equal [33, 22], a.age
+ # put something into the array that is not an MMBase
+ a << "not a MMBase"
+ # the dot operator will tell that there is something not a MMBase
+ assert_raise StandardError do
+ a.age
+ end
+ # the >> operator will try to call the method anyway
+ assert_raise NoMethodError do
+ a >> :age
+ end
+ end
+
+ def test_hash_square
+ assert_equal({}, Hash[[]])
+ end
+
+ def test_to_str_on_empty_array
+ assert_raise NoMethodError do
+ [].to_str
+ end
+ end
+
+end
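In short, the array extensions exercised above let an Array of metamodel elements act like a collection proxy: `array >> :attr` always attempts the element method, while the plain dot form only works when every element is an MMBase. A small sketch under those assumptions (the Person class is illustrative, not part of the patch):

````
require 'rgen/metamodel_builder'
require 'rgen/array_extensions'

class Person < RGen::MetamodelBuilder::MMBase
  has_attr 'name', String
end

people = [Person.new(:name => "a"), Person.new(:name => "b")]
people >> :name   # => ["a", "b"]
people.name       # => ["a", "b"]; raises if a non-MMBase element is mixed in
````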
diff --git a/lib/puppet/vendor/rgen/test/ea_instantiator_test.rb b/lib/puppet/vendor/rgen/test/ea_instantiator_test.rb
new file mode 100644
index 000000000..bafae8af2
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/ea_instantiator_test.rb
@@ -0,0 +1,35 @@
+$:.unshift File.join(File.dirname(__FILE__),"..","lib")
+
+require 'test/unit'
+require 'rgen/environment'
+require 'metamodels/uml13_metamodel'
+require 'ea_support/ea_support'
+require 'transformers/uml13_to_ecore'
+require 'testmodel/class_model_checker'
+require 'testmodel/object_model_checker'
+require 'testmodel/ecore_model_checker'
+
+class EAInstantiatorTest < Test::Unit::TestCase
+
+ include Testmodel::ClassModelChecker
+ include Testmodel::ObjectModelChecker
+ include Testmodel::ECoreModelChecker
+
+ MODEL_DIR = File.join(File.dirname(__FILE__),"testmodel")
+
+ def test_instantiator
+ envUML = RGen::Environment.new
+ EASupport.instantiateUML13FromXMI11(envUML, MODEL_DIR+"/ea_testmodel.xml")
+ checkClassModel(envUML)
+ checkObjectModel(envUML)
+ envECore = RGen::Environment.new
+ UML13ToECore.new(envUML, envECore).transform
+ checkECoreModel(envECore)
+ end
+
+ def test_partial
+ envUML = RGen::Environment.new
+ EASupport.instantiateUML13FromXMI11(envUML, MODEL_DIR+"/ea_testmodel_partial.xml")
+ checkClassModelPartial(envUML)
+ end
+end
\ No newline at end of file
diff --git a/lib/puppet/vendor/rgen/test/ea_serializer_test.rb b/lib/puppet/vendor/rgen/test/ea_serializer_test.rb
new file mode 100644
index 000000000..c5ecc2755
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/ea_serializer_test.rb
@@ -0,0 +1,23 @@
+$:.unshift File.join(File.dirname(__FILE__),"..","lib")
+
+require 'test/unit'
+require 'rgen/environment'
+require 'metamodels/uml13_metamodel'
+require 'ea_support/ea_support'
+require 'rgen/serializer/xmi11_serializer'
+
+class EASerializerTest < Test::Unit::TestCase
+
+ MODEL_DIR = File.join(File.dirname(__FILE__),"testmodel")
+ TEST_DIR = File.join(File.dirname(__FILE__),"ea_serializer_test")
+
+ def test_serializer
+ envUML = RGen::Environment.new
+ EASupport.instantiateUML13FromXMI11(envUML, MODEL_DIR+"/ea_testmodel.xml")
+ models = envUML.find(:class => UML13::Model)
+ assert_equal 1, models.size
+
+ EASupport.serializeUML13ToXMI11(envUML, MODEL_DIR+"/ea_testmodel_regenerated.xml")
+ end
+
+end
\ No newline at end of file
diff --git a/lib/puppet/vendor/rgen/test/ecore_self_test.rb b/lib/puppet/vendor/rgen/test/ecore_self_test.rb
new file mode 100644
index 000000000..ae523faa8
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/ecore_self_test.rb
@@ -0,0 +1,54 @@
+$:.unshift File.join(File.dirname(__FILE__),"..","lib")
+
+require 'test/unit'
+require 'rgen/ecore/ecore'
+require 'rgen/array_extensions'
+
+class ECoreSelfTest < Test::Unit::TestCase
+ include RGen::ECore
+
+ def test_simple
+ assert_equal \
+ %w(lowerBound ordered unique upperBound many required eType).sort,
+ ETypedElement.ecore.eStructuralFeatures.name.sort
+
+ assert_equal \
+ EClassifier.ecore,
+ ETypedElement.ecore.eStructuralFeatures.find{|f| f.name=="eType"}.eType
+ assert_equal %w(ENamedElement), ETypedElement.ecore.eSuperTypes.name
+
+ assert_equal \
+ EModelElement.ecore,
+ EModelElement.ecore.eStructuralFeatures.find{|f| f.name=="eAnnotations"}.eOpposite.eType
+
+ assert_equal \
+ %w(eType),
+ ETypedElement.ecore.eReferences.name
+
+ assert_equal \
+ %w(lowerBound ordered unique upperBound many required).sort,
+ ETypedElement.ecore.eAttributes.name.sort
+
+ assert RGen::ECore.ecore.is_a?(EPackage)
+ assert_equal "ECore", RGen::ECore.ecore.name
+ assert_equal "RGen", RGen::ECore.ecore.eSuperPackage.name
+ assert_equal %w(ECore), RGen.ecore.eSubpackages.name
+ assert_equal\
+ %w(EObject EModelElement EAnnotation ENamedElement ETypedElement
+ EStructuralFeature EAttribute EClassifier EDataType EEnum EEnumLiteral EFactory
+ EOperation EPackage EParameter EReference EStringToStringMapEntry EClass
+ ETypeArgument EGenericType).sort,
+ RGen::ECore.ecore.eClassifiers.name.sort
+
+ assert_equal "false", EAttribute.ecore.eAllAttributes.
+ find{|a|a.name == "derived"}.defaultValueLiteral
+ assert_equal false, EAttribute.ecore.eAllAttributes.
+ find{|a|a.name == "derived"}.defaultValue
+
+ assert_nil EAttribute.ecore.eAllAttributes.
+ find{|a|a.name == "defaultValueLiteral"}.defaultValueLiteral
+ assert_nil EAttribute.ecore.eAllAttributes.
+ find{|a|a.name == "defaultValueLiteral"}.defaultValue
+
+ end
+end
diff --git a/lib/puppet/vendor/rgen/test/environment_test.rb b/lib/puppet/vendor/rgen/test/environment_test.rb
new file mode 100644
index 000000000..aefd480e5
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/environment_test.rb
@@ -0,0 +1,90 @@
+$:.unshift File.join(File.dirname(__FILE__),"..","lib")
+
+require 'test/unit'
+require 'rgen/environment'
+require 'rgen/metamodel_builder'
+
+class EnvironmentTest < Test::Unit::TestCase
+
+ class Model
+ attr_accessor :name
+ end
+
+ class ModelSub < Model
+ end
+
+ class ClassSuperA < RGen::MetamodelBuilder::MMBase
+ end
+
+ class ClassSuperB < RGen::MetamodelBuilder::MMBase
+ end
+
+ class ClassC < RGen::MetamodelBuilder::MMMultiple(ClassSuperA, ClassSuperB)
+ has_attr 'name', String
+ end
+
+ class ClassSubD < ClassC
+ end
+
+ class ClassSubE < ClassC
+ end
+
+ def test_find_mmbase
+ env = RGen::Environment.new
+ mA1 = env.new(ClassSuperA)
+ mB1 = env.new(ClassSuperB)
+ mD1 = env.new(ClassSubD, :name => "mD1")
+ mD2 = env.new(ClassSubD, :name => "mD2")
+ mE = env.new(ClassSubE, :name => "mE")
+
+ resultA = env.find(:class => ClassSuperA)
+ assert_equal sortById([mA1,mD1,mD2,mE]), sortById(resultA)
+ resultNamedA = env.find(:class => ClassSuperA, :name => "mD1")
+ assert_equal sortById([mD1]), sortById(resultNamedA)
+
+ resultB = env.find(:class => ClassSuperB)
+ assert_equal sortById([mB1,mD1,mD2,mE]), sortById(resultB)
+ resultNamedB = env.find(:class => ClassSuperB, :name => "mD1")
+ assert_equal sortById([mD1]), sortById(resultNamedB)
+
+ resultC = env.find(:class => ClassC)
+ assert_equal sortById([mD1,mD2,mE]), sortById(resultC)
+
+ resultD = env.find(:class => ClassSubD)
+ assert_equal sortById([mD1,mD2]), sortById(resultD)
+ end
+
+ def test_find
+ m1 = Model.new
+ m1.name = "M1"
+ m2 = ModelSub.new
+ m2.name = "M2"
+ m3 = "justAString"
+ env = RGen::Environment.new << m1 << m2 << m3
+
+ result = env.find(:class => Model, :name => "M1")
+ assert result.is_a?(Array)
+ assert_equal 1, result.size
+ assert_equal m1, result.first
+
+ result = env.find(:class => Model)
+ assert result.is_a?(Array)
+ assert_equal sortById([m1,m2]), sortById(result)
+
+ result = env.find(:name => "M2")
+ assert result.is_a?(Array)
+ assert_equal 1, result.size
+ assert_equal m2, result.first
+
+ result = env.find(:class => [Model, String])
+ assert result.is_a?(Array)
+ assert_equal sortById([m1,m2,m3]), sortById(result)
+ end
+
+ private
+
+ def sortById(array)
+ array.sort{|a,b| a.object_id <=> b.object_id}
+ end
+
+end
diff --git a/lib/puppet/vendor/rgen/test/json_test.rb b/lib/puppet/vendor/rgen/test/json_test.rb
new file mode 100644
index 000000000..0c73b9723
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/json_test.rb
@@ -0,0 +1,171 @@
+$:.unshift File.join(File.dirname(__FILE__),"..","lib")
+
+require 'test/unit'
+require 'rgen/environment'
+require 'rgen/metamodel_builder'
+require 'rgen/serializer/json_serializer'
+require 'rgen/instantiator/json_instantiator'
+
+class JsonTest < Test::Unit::TestCase
+
+ module TestMM
+ extend RGen::MetamodelBuilder::ModuleExtension
+ class TestNode < RGen::MetamodelBuilder::MMBase
+ has_attr 'text', String
+ has_attr 'integer', Integer
+ has_attr 'float', Float
+ has_one 'other', TestNode
+ contains_many 'childs', TestNode, 'parent'
+ end
+ end
+
+ module TestMMData
+ extend RGen::MetamodelBuilder::ModuleExtension
+ # class "Data" exists in the standard Ruby namespace
+ class Data < RGen::MetamodelBuilder::MMBase
+ has_attr 'notTheBuiltin', String
+ end
+ end
+
+ module TestMMSubpackage
+ extend RGen::MetamodelBuilder::ModuleExtension
+ module SubPackage
+ extend RGen::MetamodelBuilder::ModuleExtension
+ class Data < RGen::MetamodelBuilder::MMBase
+ has_attr 'notTheBuiltin', String
+ end
+ class Data2 < RGen::MetamodelBuilder::MMBase
+ has_attr 'data2', String
+ end
+ end
+ end
+
+ class StringWriter < String
+ alias write concat
+ end
+
+ def test_json_serializer
+ testModel = TestMM::TestNode.new(:text => "some text", :childs => [
+ TestMM::TestNode.new(:text => "child")])
+
+ output = StringWriter.new
+ ser = RGen::Serializer::JsonSerializer.new(output)
+
+ assert_equal %q({ "_class": "TestNode", "text": "some text", "childs": [
+ { "_class": "TestNode", "text": "child" }] }), ser.serialize(testModel)
+ end
+
+ def test_json_instantiator
+ env = RGen::Environment.new
+ inst = RGen::Instantiator::JsonInstantiator.new(env, TestMM)
+ inst.instantiate(%q({ "_class": "TestNode", "text": "some text", "childs": [
+ { "_class": "TestNode", "text": "child" }] }))
+ root = env.find(:class => TestMM::TestNode, :text => "some text").first
+ assert_not_nil root
+ assert_equal 1, root.childs.size
+ assert_equal TestMM::TestNode, root.childs.first.class
+ assert_equal "child", root.childs.first.text
+ end
+
+ def test_json_serializer_escapes
+ testModel = TestMM::TestNode.new(:text => %Q(some " \\ \\" text \r xx \n xx \r\n xx \t xx \b xx \f))
+ output = StringWriter.new
+ ser = RGen::Serializer::JsonSerializer.new(output)
+
+ assert_equal %q({ "_class": "TestNode", "text": "some \" \\\\ \\\\\" text \r xx \n xx \r\n xx \t xx \b xx \f" }),
+ ser.serialize(testModel)
+ end
+
+ def test_json_instantiator_escapes
+ env = RGen::Environment.new
+ inst = RGen::Instantiator::JsonInstantiator.new(env, TestMM)
+ inst.instantiate(%q({ "_class": "TestNode", "text": "some \" \\\\ \\\\\" text \r xx \n xx \r\n xx \t xx \b xx \f" }))
+ assert_equal %Q(some " \\ \\" text \r xx \n xx \r\n xx \t xx \b xx \f), env.elements.first.text
+ end
+
+ def test_json_instantiator_escape_single_backslash
+ env = RGen::Environment.new
+ inst = RGen::Instantiator::JsonInstantiator.new(env, TestMM)
+ inst.instantiate(%q({ "_class": "TestNode", "text": "a single \\ will be just itself" }))
+ assert_equal %q(a single \\ will be just itself), env.elements.first.text
+ end
+
+ def test_json_serializer_integer
+ testModel = TestMM::TestNode.new(:integer => 7)
+ output = StringWriter.new
+ ser = RGen::Serializer::JsonSerializer.new(output)
+ assert_equal %q({ "_class": "TestNode", "integer": 7 }), ser.serialize(testModel)
+ end
+
+ def test_json_instantiator_integer
+ env = RGen::Environment.new
+ inst = RGen::Instantiator::JsonInstantiator.new(env, TestMM)
+ inst.instantiate(%q({ "_class": "TestNode", "integer": 7 }))
+ assert_equal 7, env.elements.first.integer
+ end
+
+ def test_json_serializer_float
+ testModel = TestMM::TestNode.new(:float => 1.23)
+ output = StringWriter.new
+ ser = RGen::Serializer::JsonSerializer.new(output)
+ assert_equal %q({ "_class": "TestNode", "float": 1.23 }), ser.serialize(testModel)
+ end
+
+ def test_json_instantiator_float
+ env = RGen::Environment.new
+ inst = RGen::Instantiator::JsonInstantiator.new(env, TestMM)
+ inst.instantiate(%q({ "_class": "TestNode", "float": 1.23 }))
+ assert_equal 1.23, env.elements.first.float
+ end
+
+ def test_json_instantiator_conflict_builtin
+ env = RGen::Environment.new
+ inst = RGen::Instantiator::JsonInstantiator.new(env, TestMMData)
+ inst.instantiate(%q({ "_class": "Data", "notTheBuiltin": "for sure" }))
+ assert_equal "for sure", env.elements.first.notTheBuiltin
+ end
+
+ def test_json_serializer_subpackage
+ testModel = TestMMSubpackage::SubPackage::Data2.new(:data2 => "xxx")
+ output = StringWriter.new
+ ser = RGen::Serializer::JsonSerializer.new(output)
+ assert_equal %q({ "_class": "Data2", "data2": "xxx" }), ser.serialize(testModel)
+ end
+
+ def test_json_instantiator_builtin_in_subpackage
+ env = RGen::Environment.new
+ inst = RGen::Instantiator::JsonInstantiator.new(env, TestMMSubpackage)
+ inst.instantiate(%q({ "_class": "Data", "notTheBuiltin": "for sure" }))
+ assert_equal "for sure", env.elements.first.notTheBuiltin
+ end
+
+ def test_json_instantiator_subpackage
+ env = RGen::Environment.new
+ inst = RGen::Instantiator::JsonInstantiator.new(env, TestMMSubpackage)
+ inst.instantiate(%q({ "_class": "Data2", "data2": "something" }))
+ assert_equal "something", env.elements.first.data2
+ end
+
+ def test_json_instantiator_subpackage_no_shortname_opt
+ env = RGen::Environment.new
+ inst = RGen::Instantiator::JsonInstantiator.new(env, TestMMSubpackage, :short_class_names => false)
+ assert_raise RuntimeError do
+ inst.instantiate(%q({ "_class": "Data2", "data2": "something" }))
+ end
+ end
+
+ def test_json_instantiator_references
+ env = RGen::Environment.new
+ inst = RGen::Instantiator::JsonInstantiator.new(env, TestMM, :nameAttribute => "text")
+ inst.instantiate(%q([
+ { "_class": "TestNode", "text": "A", "childs": [
+ { "_class": "TestNode", "text": "B" } ]},
+ { "_class": "TestNode", "text": "C", "other": "/A/B"}]
+ ))
+ nodeA = env.find(:class => TestMM::TestNode, :text => "A").first
+ nodeC = env.find(:class => TestMM::TestNode, :text => "C").first
+ assert_equal 1, nodeA.childs.size
+ assert_equal nodeA.childs[0], nodeC.other
+ end
+end
+
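Taken together, the tests above amount to a simple round trip. A condensed sketch, reusing the TestMM metamodel defined in this test file, with a throwaway writer standing in for its StringWriter helper:

````
env  = RGen::Environment.new
inst = RGen::Instantiator::JsonInstantiator.new(env, TestMM)
inst.instantiate(%q({ "_class": "TestNode", "text": "root", "childs": [
  { "_class": "TestNode", "text": "child" }] }))
root = env.find(:class => TestMM::TestNode, :text => "root").first

out = Class.new(String) { alias_method :write, :concat }.new  # anything responding to #write
RGen::Serializer::JsonSerializer.new(out).serialize(root)
# out now contains the same nested JSON document again
````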
diff --git a/lib/puppet/vendor/rgen/test/metamodel_builder_test.rb b/lib/puppet/vendor/rgen/test/metamodel_builder_test.rb
new file mode 100644
index 000000000..4f0308b44
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/metamodel_builder_test.rb
@@ -0,0 +1,1482 @@
+$:.unshift File.join(File.dirname(__FILE__),"..","lib")
+
+require 'test/unit'
+require 'rgen/metamodel_builder'
+require 'rgen/array_extensions'
+require 'bigdecimal'
+
+class MetamodelBuilderTest < Test::Unit::TestCase
+
+ module TestMetamodel
+ extend RGen::MetamodelBuilder::ModuleExtension
+
+ class SimpleClass < RGen::MetamodelBuilder::MMBase
+ KindType = RGen::MetamodelBuilder::DataTypes::Enum.new([:simple, :extended])
+ has_attr 'name' # default is String
+ has_attr 'stringWithDefault', String, :defaultValueLiteral => "xtest"
+ has_attr 'integerWithDefault', Integer, :defaultValueLiteral => "123"
+ has_attr 'longWithDefault', Long, :defaultValueLiteral => "1234567890"
+ has_attr 'floatWithDefault', Float, :defaultValueLiteral => "0.123"
+ has_attr 'boolWithDefault', Boolean, :defaultValueLiteral => "true"
+ has_attr 'anything', Object
+ has_attr 'allowed', RGen::MetamodelBuilder::DataTypes::Boolean
+ has_attr 'kind', KindType
+ has_attr 'kindWithDefault', KindType, :defaultValueLiteral => "extended"
+ end
+
+ class ManyAttrClass < RGen::MetamodelBuilder::MMBase
+ has_many_attr 'literals', String
+ has_many_attr 'bools', Boolean
+ has_many_attr 'integers', Integer
+ has_many_attr 'enums', RGen::MetamodelBuilder::DataTypes::Enum.new([:a, :b, :c])
+ has_many_attr 'limitTest', Integer, :upperBound => 2
+ end
+
+ class ClassA < RGen::MetamodelBuilder::MMBase
+ # metamodel accessors must work independent of the ==() method
+ module ClassModule
+ def ==(o)
+ o.is_a?(ClassA)
+ end
+ end
+ end
+
+ class ClassB < RGen::MetamodelBuilder::MMBase
+ end
+
+ class ClassC < RGen::MetamodelBuilder::MMBase
+ end
+
+ class HasOneTestClass < RGen::MetamodelBuilder::MMBase
+ has_one 'classA', ClassA
+ has_one 'classB', ClassB
+ end
+
+ class HasManyTestClass < RGen::MetamodelBuilder::MMBase
+ has_many 'classA', ClassA
+ end
+
+ class OneClass < RGen::MetamodelBuilder::MMBase
+ end
+ class ManyClass < RGen::MetamodelBuilder::MMBase
+ end
+ OneClass.one_to_many 'manyClasses', ManyClass, 'oneClass', :upperBound => 5
+
+ class AClassMM < RGen::MetamodelBuilder::MMBase
+ end
+ class BClassMM < RGen::MetamodelBuilder::MMBase
+ end
+ AClassMM.many_to_many 'bClasses', BClassMM, 'aClasses'
+
+ module SomePackage
+ extend RGen::MetamodelBuilder::ModuleExtension
+
+ class ClassA < RGen::MetamodelBuilder::MMBase
+ end
+
+ module SubPackage
+ extend RGen::MetamodelBuilder::ModuleExtension
+
+ class ClassB < RGen::MetamodelBuilder::MMBase
+ end
+ end
+ end
+
+ class OneClass2 < RGen::MetamodelBuilder::MMBase
+ end
+ class ManyClass2 < RGen::MetamodelBuilder::MMBase
+ end
+ ManyClass2.many_to_one 'oneClass', OneClass2, 'manyClasses'
+
+ class AClassOO < RGen::MetamodelBuilder::MMBase
+ end
+ class BClassOO < RGen::MetamodelBuilder::MMBase
+ end
+ AClassOO.one_to_one 'bClass', BClassOO, 'aClass'
+
+ class SomeSuperClass < RGen::MetamodelBuilder::MMBase
+ has_attr "name"
+ has_many "classAs", ClassA
+ end
+
+ class SomeSubClass < SomeSuperClass
+ has_attr "subname"
+ has_many "classBs", ClassB
+ end
+
+ class OtherSubClass < SomeSuperClass
+ has_attr "othersubname"
+ has_many "classCs", ClassC
+ end
+
+ class SubSubClass < RGen::MetamodelBuilder::MMMultiple(SomeSubClass, OtherSubClass)
+ has_attr "subsubname"
+ end
+
+ module AnnotatedModule
+ extend RGen::MetamodelBuilder::ModuleExtension
+
+ annotation "moduletag" => "modulevalue"
+
+ class AnnotatedClass < RGen::MetamodelBuilder::MMBase
+ annotation "sometag" => "somevalue", "othertag" => "othervalue"
+ annotation :source => "rgen/test", :details => {"thirdtag" => "thirdvalue"}
+
+ has_attr "boolAttr", Boolean do
+ annotation "attrtag" => "attrval"
+ annotation :source => "rgen/test2", :details => {"attrtag2" => "attrvalue2", "attrtag3" => "attrvalue3"}
+ end
+
+ has_many "others", AnnotatedClass do
+ annotation "reftag" => "refval"
+ annotation :source => "rgen/test3", :details => {"reftag2" => "refvalue2", "reftag3" => "refvalue3"}
+ end
+
+ many_to_many "m2m", AnnotatedClass, "m2mback" do
+ annotation "m2mtag" => "m2mval"
+ opposite_annotation "opposite_m2mtag" => "opposite_m2mval"
+ end
+ end
+
+ end
+
+ class AbstractClass < RGen::MetamodelBuilder::MMBase
+ abstract
+ end
+
+ class ContainedClass < RGen::MetamodelBuilder::MMBase
+ end
+
+ class ContainerClass < RGen::MetamodelBuilder::MMBase
+ contains_one_uni 'oneChildUni', ContainedClass
+ contains_one_uni 'oneChildUni2', ContainedClass
+ contains_one 'oneChild', ContainedClass, 'parentOne'
+ contains_one 'oneChild2', ContainedClass, 'parentOne2'
+ contains_many_uni 'manyChildUni', ContainedClass
+ contains_many_uni 'manyChildUni2', ContainedClass
+ contains_many 'manyChild', ContainedClass, 'parentMany'
+ contains_many 'manyChild2', ContainedClass, 'parentMany2'
+ end
+
+ class NestedContainerClass < ContainedClass
+ contains_one_uni 'oneChildUni', ContainedClass
+ end
+
+ class OppositeRefAssocA < RGen::MetamodelBuilder::MMBase
+ end
+ class OppositeRefAssocB < RGen::MetamodelBuilder::MMBase
+ end
+ OppositeRefAssocA.one_to_one 'bClass', OppositeRefAssocB, 'aClass'
+
+ end
+
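Before the individual cases, a compact sketch of the builder DSL in use, mirroring the definitions above (the Library, Book and Shelf names are illustrative only, not part of the patch):

````
module Library
  extend RGen::MetamodelBuilder::ModuleExtension

  class Book < RGen::MetamodelBuilder::MMBase
    has_attr 'title', String
  end
  class Shelf < RGen::MetamodelBuilder::MMBase
    contains_many 'books', Book, 'shelf'
  end
end

shelf = Library::Shelf.new
shelf.addBooks(Library::Book.new(:title => "RGen"))
shelf.books.title          # => ["RGen"]   (via rgen/array_extensions)
Library::Shelf.ecore.name  # => "Shelf"    (ECore reflection, as used throughout below)
````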
+ def mm
+ TestMetamodel
+ end
+
+ def test_has_attr
+ sc = mm::SimpleClass.new
+
+ assert_respond_to sc, :name
+ assert_respond_to sc, :name=
+ sc.name = "TestName"
+ assert_equal "TestName", sc.name
+ sc.name = nil
+ assert_equal nil, sc.name
+ err = assert_raise StandardError do
+ sc.name = 5
+ end
+ assert_match /In (\w+::)+SimpleClass : Can not use a Fixnum where a String is expected/, err.message
+ assert_equal "EString", mm::SimpleClass.ecore.eAttributes.find{|a| a.name=="name"}.eType.name
+
+ assert_equal "xtest", sc.stringWithDefault
+ assert_equal :extended, sc.kindWithDefault
+ assert_equal 123, sc.integerWithDefault
+ assert_equal 1234567890, sc.longWithDefault
+ assert_equal 0.123, sc.floatWithDefault
+ assert_equal true, sc.boolWithDefault
+
+ # setting nil should not make the default value appear on next read
+ sc.stringWithDefault = nil
+ assert_nil sc.stringWithDefault
+
+ sc.anything = :asymbol
+ assert_equal :asymbol, sc.anything
+ sc.anything = self # a class
+ assert_equal self, sc.anything
+
+ assert_respond_to sc, :allowed
+ assert_respond_to sc, :allowed=
+ sc.allowed = true
+ assert_equal true, sc.allowed
+ sc.allowed = false
+ assert_equal false, sc.allowed
+ sc.allowed = nil
+ assert_equal nil, sc.allowed
+ err = assert_raise StandardError do
+ sc.allowed = :someSymbol
+ end
+ assert_match /In (\w+::)+SimpleClass : Can not use a Symbol\(:someSymbol\) where a \[true,false\] is expected/, err.message
+ err = assert_raise StandardError do
+ sc.allowed = "a string"
+ end
+ assert_match /In (\w+::)+SimpleClass : Can not use a String where a \[true,false\] is expected/, err.message
+ assert_equal "EBoolean", mm::SimpleClass.ecore.eAttributes.find{|a| a.name=="allowed"}.eType.name
+
+ assert_respond_to sc, :kind
+ assert_respond_to sc, :kind=
+ sc.kind = :simple
+ assert_equal :simple, sc.kind
+ sc.kind = :extended
+ assert_equal :extended, sc.kind
+ sc.kind = nil
+ assert_equal nil, sc.kind
+ err = assert_raise StandardError do
+ sc.kind = :false
+ end
+ assert_match /In (\w+::)+SimpleClass : Can not use a Symbol\(:false\) where a \[:simple,:extended\] is expected/, err.message
+ err = assert_raise StandardError do
+ sc.kind = "a string"
+ end
+ assert_match /In (\w+::)+SimpleClass : Can not use a String where a \[:simple,:extended\] is expected/, err.message
+
+ enum = mm::SimpleClass.ecore.eAttributes.find{|a| a.name=="kind"}.eType
+ assert_equal ["extended", "simple"], enum.eLiterals.name.sort
+ end
+
+ def test_float
+ sc = mm::SimpleClass.new
+ sc.floatWithDefault = 7.89
+ assert_equal 7.89, sc.floatWithDefault
+ if BigDecimal.double_fig == 16
+ sc.floatWithDefault = 123456789012345678.0
+ # loss of precision
+ assert_equal "123456789012345680.0", sprintf("%.1f", sc.floatWithDefault)
+ end
+ sc.floatWithDefault = nil
+ sc.floatWithDefault = BigDecimal.new("123456789012345678.0")
+ assert sc.floatWithDefault.is_a?(BigDecimal)
+ assert_equal "123456789012345678.0", sc.floatWithDefault.to_s("F")
+
+ dump = Marshal.dump(sc)
+ sc2 = Marshal.load(dump)
+ assert sc2.floatWithDefault.is_a?(BigDecimal)
+ assert_equal "123456789012345678.0", sc2.floatWithDefault.to_s("F")
+ end
+
+ def test_long
+ sc = mm::SimpleClass.new
+ sc.longWithDefault = 5
+ assert_equal 5, sc.longWithDefault
+ sc.longWithDefault = 1234567890
+ assert_equal 1234567890, sc.longWithDefault
+ assert sc.longWithDefault.is_a?(Bignum)
+ assert sc.longWithDefault.is_a?(Integer)
+ err = assert_raise StandardError do
+ sc.longWithDefault = "a string"
+ end
+ assert_match /In (\w+::)+SimpleClass : Can not use a String where a Integer is expected/, err.message
+ end
+
+ def test_many_attr
+ o = mm::ManyAttrClass.new
+
+ assert_respond_to o, :literals
+ assert_respond_to o, :addLiterals
+ assert_respond_to o, :removeLiterals
+
+ err = assert_raise(StandardError) do
+ o.addLiterals(1)
+ end
+ assert_match /In (\w+::)+ManyAttrClass : Can not use a Fixnum where a String is expected/, err.message
+
+ assert_equal [], o.literals
+ o.addLiterals("a")
+ assert_equal ["a"], o.literals
+ o.addLiterals("b")
+ assert_equal ["a", "b"], o.literals
+ o.addLiterals("b")
+ assert_equal ["a", "b", "b"], o.literals
+ # attributes allow the same object several times
+ o.addLiterals(o.literals.first)
+ assert_equal ["a", "b", "b", "a"], o.literals
+ assert o.literals[0].object_id == o.literals[3].object_id
+ # removing works by object identity, so providing a new string won't delete an existing one
+ o.removeLiterals("a")
+ assert_equal ["a", "b", "b", "a"], o.literals
+ theA = o.literals.first
+ # each remove command removes only one element: remove first "a"
+ o.removeLiterals(theA)
+ assert_equal ["b", "b", "a"], o.literals
+ # remove second "a" (same object)
+ o.removeLiterals(theA)
+ assert_equal ["b", "b"], o.literals
+ o.removeLiterals(o.literals.first)
+ assert_equal ["b"], o.literals
+ o.removeLiterals(o.literals.first)
+ assert_equal [], o.literals
+
+ # setting multiple elements at a time
+ o.literals = ["a", "b", "c"]
+ assert_equal ["a", "b", "c"], o.literals
+ # can only take enumerables
+ err = assert_raise(StandardError) do
+ o.literals = 1
+ end
+ assert_match /In (\w+::)+ManyAttrClass : Can not use a Fixnum where a Enumerable is expected/, err.message
+
+ o.bools = [true, false, true, false]
+ assert_equal [true, false, true, false], o.bools
+
+ o.integers = [1, 2, 2, 3, 3]
+ assert_equal [1, 2, 2, 3, 3], o.integers
+
+ o.enums = [:a, :a, :b, :c, :c]
+ assert_equal [:a, :a, :b, :c, :c], o.enums
+
+ lit = mm::ManyAttrClass.ecore.eAttributes.find{|a| a.name == "literals"}
+ assert lit.is_a?(RGen::ECore::EAttribute)
+ assert lit.many
+
+ lim = mm::ManyAttrClass.ecore.eAttributes.find{|a| a.name == "limitTest"}
+ assert lim.many
+ assert_equal 2, lim.upperBound
+ end
+
+ def test_many_attr_insert
+ o = mm::ManyAttrClass.new
+ o.addLiterals("a")
+ o.addLiterals("b", 0)
+ o.addLiterals("c", 1)
+ assert_equal ["b", "c", "a"], o.literals
+ end
+
+ def test_has_one
+ sc = mm::HasOneTestClass.new
+ assert_respond_to sc, :classA
+ assert_respond_to sc, :classA=
+ ca = mm::ClassA.new
+ sc.classA = ca
+ assert_equal ca, sc.classA
+ sc.classA = nil
+ assert_equal nil, sc.classA
+
+ assert_respond_to sc, :classB
+ assert_respond_to sc, :classB=
+ cb = mm::ClassB.new
+ sc.classB = cb
+ assert_equal cb, sc.classB
+
+ err = assert_raise StandardError do
+ sc.classB = ca
+ end
+ assert_match /In (\w+::)+HasOneTestClass : Can not use a (\w+::)+ClassA where a (\w+::)+ClassB is expected/, err.message
+
+ assert_equal [], mm::ClassA.ecore.eReferences
+ assert_equal [], mm::ClassB.ecore.eReferences
+ assert_equal ["classA", "classB"].sort, mm::HasOneTestClass.ecore.eReferences.name.sort
+ assert_equal [], mm::HasOneTestClass.ecore.eReferences.select { |a| a.many == true }
+ assert_equal [], mm::HasOneTestClass.ecore.eAttributes
+ end
+
+ def test_has_many
+ o = mm::HasManyTestClass.new
+ ca1 = mm::ClassA.new
+ ca2 = mm::ClassA.new
+ ca3 = mm::ClassA.new
+ o.addClassA(ca1)
+ o.addClassA(ca2)
+ assert_equal [ca1, ca2], o.classA
+ # make sure we get a copy
+ o.classA.clear
+ assert_equal [ca1, ca2], o.classA
+ o.removeClassA(ca3)
+ assert_equal [ca1, ca2], o.classA
+ o.removeClassA(ca2)
+ assert_equal [ca1], o.classA
+ err = assert_raise StandardError do
+ o.addClassA(mm::ClassB.new)
+ end
+ assert_match /In (\w+::)+HasManyTestClass : Can not use a (\w+::)+ClassB where a (\w+::)+ClassA is expected/, err.message
+ assert_equal [], mm::HasManyTestClass.ecore.eReferences.select{|r| r.many == false}
+ assert_equal ["classA"], mm::HasManyTestClass.ecore.eReferences.select{|r| r.many == true}.name
+ end
+
+ def test_has_many_insert
+ o = mm::HasManyTestClass.new
+ ca1 = mm::ClassA.new
+ ca2 = mm::ClassA.new
+ ca3 = mm::ClassA.new
+ ca4 = mm::ClassA.new
+ ca5 = mm::ClassA.new
+ o.addClassA(ca1)
+ o.addClassA(ca2)
+ o.addClassA(ca3,0)
+ o.addClassA(ca4,1)
+ o.addGeneric("classA",ca5,2)
+ assert_equal [ca3, ca4, ca5, ca1, ca2], o.classA
+ end
+
+ def test_one_to_many
+ oc = mm::OneClass.new
+ assert_respond_to oc, :manyClasses
+ assert oc.manyClasses.empty?
+
+ mc = mm::ManyClass.new
+ assert_respond_to mc, :oneClass
+ assert_respond_to mc, :oneClass=
+ assert_nil mc.oneClass
+
+ # put the OneClass into the ManyClass
+ mc.oneClass = oc
+ assert_equal oc, mc.oneClass
+ assert oc.manyClasses.include?(mc)
+
+ # remove the OneClass from the ManyClass
+ mc.oneClass = nil
+ assert_equal nil, mc.oneClass
+ assert !oc.manyClasses.include?(mc)
+
+ # put the ManyClass into the OneClass
+ oc.addManyClasses mc
+ assert oc.manyClasses.include?(mc)
+ assert_equal oc, mc.oneClass
+
+ # remove the ManyClass from the OneClass
+ oc.removeManyClasses mc
+ assert !oc.manyClasses.include?(mc)
+ assert_equal nil, mc.oneClass
+
+ assert_equal [], mm::OneClass.ecore.eReferences.select{|r| r.many == false}
+ assert_equal ["manyClasses"], mm::OneClass.ecore.eReferences.select{|r| r.many == true}.name
+ assert_equal 5, mm::OneClass.ecore.eReferences.find{|r| r.many == true}.upperBound
+ assert_equal ["oneClass"], mm::ManyClass.ecore.eReferences.select{|r| r.many == false}.name
+ assert_equal [], mm::ManyClass.ecore.eReferences.select{|r| r.many == true}
+ end
+
+ def test_one_to_many_replace1
+ oc1 = mm::OneClass.new
+ oc2 = mm::OneClass.new
+ mc = mm::ManyClass.new
+
+ oc1.manyClasses = [mc]
+ assert_equal [mc], oc1.manyClasses
+ assert_equal [], oc2.manyClasses
+ assert_equal oc1, mc.oneClass
+
+ oc2.manyClasses = [mc]
+ assert_equal [mc], oc2.manyClasses
+ assert_equal [], oc1.manyClasses
+ assert_equal oc2, mc.oneClass
+ end
+
+ def test_one_to_many_replace2
+ oc = mm::OneClass.new
+ mc1 = mm::ManyClass.new
+ mc2 = mm::ManyClass.new
+
+ mc1.oneClass = oc
+ assert_equal [mc1], oc.manyClasses
+ assert_equal oc, mc1.oneClass
+ assert_equal nil, mc2.oneClass
+
+ mc2.oneClass = oc
+ assert_equal [mc1, mc2], oc.manyClasses
+ assert_equal oc, mc1.oneClass
+ assert_equal oc, mc2.oneClass
+ end
+
+ def test_one_to_many_insert
+ oc = mm::OneClass.new
+ mc1 = mm::ManyClass.new
+ mc2 = mm::ManyClass.new
+
+ oc.addManyClasses(mc1, 0)
+ oc.addManyClasses(mc2, 0)
+ assert_equal [mc2, mc1], oc.manyClasses
+ assert_equal oc, mc1.oneClass
+ assert_equal oc, mc2.oneClass
+ end
+
+ def test_one_to_many2
+ oc = mm::OneClass2.new
+ assert_respond_to oc, :manyClasses
+ assert oc.manyClasses.empty?
+
+ mc = mm::ManyClass2.new
+ assert_respond_to mc, :oneClass
+ assert_respond_to mc, :oneClass=
+ assert_nil mc.oneClass
+
+ # put the OneClass into the ManyClass
+ mc.oneClass = oc
+ assert_equal oc, mc.oneClass
+ assert oc.manyClasses.include?(mc)
+
+ # remove the OneClass from the ManyClass
+ mc.oneClass = nil
+ assert_equal nil, mc.oneClass
+ assert !oc.manyClasses.include?(mc)
+
+ # put the ManyClass into the OneClass
+ oc.addManyClasses mc
+ assert oc.manyClasses.include?(mc)
+ assert_equal oc, mc.oneClass
+
+ # remove the ManyClass from the OneClass
+ oc.removeManyClasses mc
+ assert !oc.manyClasses.include?(mc)
+ assert_equal nil, mc.oneClass
+
+ assert_equal [], mm::OneClass2.ecore.eReferences.select{|r| r.many == false}
+ assert_equal ["manyClasses"], mm::OneClass2.ecore.eReferences.select{|r| r.many == true}.name
+ assert_equal ["oneClass"], mm::ManyClass2.ecore.eReferences.select{|r| r.many == false}.name
+ assert_equal [], mm::ManyClass2.ecore.eReferences.select{|r| r.many == true}
+ end
+
+ def test_one_to_one
+ ac = mm::AClassOO.new
+ assert_respond_to ac, :bClass
+ assert_respond_to ac, :bClass=
+ assert_nil ac.bClass
+
+ bc = mm::BClassOO.new
+ assert_respond_to bc, :aClass
+ assert_respond_to bc, :aClass=
+ assert_nil bc.aClass
+
+ # put the AClass into the BClass
+ bc.aClass = ac
+ assert_equal ac, bc.aClass
+ assert_equal bc, ac.bClass
+
+ # remove the AClass from the BClass
+ bc.aClass = nil
+ assert_equal nil, bc.aClass
+ assert_equal nil, ac.bClass
+
+ # put the BClass into the AClass
+ ac.bClass = bc
+ assert_equal bc, ac.bClass
+ assert_equal ac, bc.aClass
+
+ # remove the BClass from the AClass
+ ac.bClass = nil
+ assert_equal nil, ac.bClass
+ assert_equal nil, bc.aClass
+
+ assert_equal ["bClass"], mm::AClassOO.ecore.eReferences.select{|r| r.many == false}.name
+ assert_equal [], mm::AClassOO.ecore.eReferences.select{|r| r.many == true}
+ assert_equal ["aClass"], mm::BClassOO.ecore.eReferences.select{|r| r.many == false}.name
+ assert_equal [], mm::BClassOO.ecore.eReferences.select{|r| r.many == true}
+ end
+
+ def test_one_to_one_replace
+ a = mm::AClassOO.new
+ b1 = mm::BClassOO.new
+ b2 = mm::BClassOO.new
+
+ a.bClass = b1
+ assert_equal b1, a.bClass
+ assert_equal a, b1.aClass
+ assert_equal nil, b2.aClass
+
+ a.bClass = b2
+ assert_equal b2, a.bClass
+ assert_equal nil, b1.aClass
+ assert_equal a, b2.aClass
+ end
+
+ def test_many_to_many
+
+ ac = mm::AClassMM.new
+ assert_respond_to ac, :bClasses
+ assert ac.bClasses.empty?
+
+ bc = mm::BClassMM.new
+ assert_respond_to bc, :aClasses
+ assert bc.aClasses.empty?
+
+ # put the AClass into the BClass
+ bc.addAClasses ac
+ assert bc.aClasses.include?(ac)
+ assert ac.bClasses.include?(bc)
+
+ # put something else into the BClass
+ err = assert_raise StandardError do
+ bc.addAClasses :notaaclass
+ end
+ assert_match /In (\w+::)+BClassMM : Can not use a Symbol\(:notaaclass\) where a (\w+::)+AClassMM is expected/, err.message
+
+ # remove the AClass from the BClass
+ bc.removeAClasses ac
+ assert !bc.aClasses.include?(ac)
+ assert !ac.bClasses.include?(bc)
+
+ # put the BClass into the AClass
+ ac.addBClasses bc
+ assert ac.bClasses.include?(bc)
+ assert bc.aClasses.include?(ac)
+
+ # put something else into the AClass
+ err = assert_raise StandardError do
+ ac.addBClasses :notabclass
+ end
+ assert_match /In (\w+::)+AClassMM : Can not use a Symbol\(:notabclass\) where a (\w+::)+BClassMM is expected/, err.message
+
+ # remove the BClass from the AClass
+ ac.removeBClasses bc
+ assert !ac.bClasses.include?(bc)
+ assert !bc.aClasses.include?(ac)
+
+ assert_equal [], mm::AClassMM.ecore.eReferences.select{|r| r.many == false}
+ assert_equal ["bClasses"], mm::AClassMM.ecore.eReferences.select{|r| r.many == true}.name
+ assert_equal [], mm::BClassMM.ecore.eReferences.select{|r| r.many == false}
+ assert_equal ["aClasses"], mm::BClassMM.ecore.eReferences.select{|r| r.many == true}.name
+ end
+
+ def test_many_to_many_insert
+ ac1 = mm::AClassMM.new
+ ac2 = mm::AClassMM.new
+ bc1= mm::BClassMM.new
+ bc2= mm::BClassMM.new
+
+ ac1.addBClasses(bc1)
+ ac1.addBClasses(bc2, 0)
+ ac2.addBClasses(bc1)
+ ac2.addBClasses(bc2, 0)
+
+ assert_equal [bc2, bc1], ac1.bClasses
+ assert_equal [bc2, bc1], ac2.bClasses
+ assert_equal [ac1, ac2], bc1.aClasses
+ assert_equal [ac1, ac2], bc2.aClasses
+ end
+
+ def test_inheritance
+ assert_equal ["name"], mm::SomeSuperClass.ecore.eAllAttributes.name
+ assert_equal ["classAs"], mm::SomeSuperClass.ecore.eAllReferences.name
+ assert_equal ["name", "subname"], mm::SomeSubClass.ecore.eAllAttributes.name.sort
+ assert_equal ["classAs", "classBs"], mm::SomeSubClass.ecore.eAllReferences.name.sort
+ assert_equal ["name", "othersubname"], mm::OtherSubClass.ecore.eAllAttributes.name.sort
+ assert_equal ["classAs", "classCs"], mm::OtherSubClass.ecore.eAllReferences.name.sort
+ assert mm::SomeSubClass.new.is_a?(mm::SomeSuperClass)
+ assert_equal ["name", "othersubname", "subname", "subsubname"], mm::SubSubClass.ecore.eAllAttributes.name.sort
+ assert_equal ["classAs", "classBs", "classCs"], mm::SubSubClass.ecore.eAllReferences.name.sort
+ assert mm::SubSubClass.new.is_a?(mm::SomeSuperClass)
+ assert mm::SubSubClass.new.is_a?(mm::SomeSubClass)
+ assert mm::SubSubClass.new.is_a?(mm::OtherSubClass)
+ end
+
+ def test_annotations
+ assert_equal 1, mm::AnnotatedModule.ecore.eAnnotations.size
+ anno = mm::AnnotatedModule.ecore.eAnnotations.first
+ checkAnnotation(anno, nil, {"moduletag" => "modulevalue"})
+
+ eClass = mm::AnnotatedModule::AnnotatedClass.ecore
+ assert_equal 2, eClass.eAnnotations.size
+ anno = eClass.eAnnotations.find{|a| a.source == "rgen/test"}
+ checkAnnotation(anno, "rgen/test", {"thirdtag" => "thirdvalue"})
+ anno = eClass.eAnnotations.find{|a| a.source == nil}
+ checkAnnotation(anno, nil, {"sometag" => "somevalue", "othertag" => "othervalue"})
+
+ eAttr = eClass.eAttributes.first
+ assert_equal 2, eAttr.eAnnotations.size
+ anno = eAttr.eAnnotations.find{|a| a.source == "rgen/test2"}
+ checkAnnotation(anno, "rgen/test2", {"attrtag2" => "attrvalue2", "attrtag3" => "attrvalue3"})
+ anno = eAttr.eAnnotations.find{|a| a.source == nil}
+ checkAnnotation(anno, nil, {"attrtag" => "attrval"})
+
+ eRef = eClass.eReferences.find{|r| !r.eOpposite}
+ assert_equal 2, eRef.eAnnotations.size
+ anno = eRef.eAnnotations.find{|a| a.source == "rgen/test3"}
+ checkAnnotation(anno, "rgen/test3", {"reftag2" => "refvalue2", "reftag3" => "refvalue3"})
+ anno = eRef.eAnnotations.find{|a| a.source == nil}
+ checkAnnotation(anno, nil, {"reftag" => "refval"})
+
+ eRef = eClass.eReferences.find{|r| r.eOpposite}
+ assert_equal 1, eRef.eAnnotations.size
+ anno = eRef.eAnnotations.first
+ checkAnnotation(anno, nil, {"m2mtag" => "m2mval"})
+ eRef = eRef.eOpposite
+ assert_equal 1, eRef.eAnnotations.size
+ anno = eRef.eAnnotations.first
+ checkAnnotation(anno, nil, {"opposite_m2mtag" => "opposite_m2mval"})
+ end
+
+ def checkAnnotation(anno, source, hash)
+ assert anno.is_a?(RGen::ECore::EAnnotation)
+ assert_equal source, anno.source
+ assert_equal hash.size, anno.details.size
+ hash.each_pair do |k, v|
+ detail = anno.details.find{|d| d.key == k}
+ assert detail.is_a?(RGen::ECore::EStringToStringMapEntry)
+ assert_equal v, detail.value
+ end
+ end
+
+ def test_ecore_identity
+ subPackage = mm::SomePackage::SubPackage.ecore
+ assert_equal subPackage.eClassifiers.first.object_id, mm::SomePackage::SubPackage::ClassB.ecore.object_id
+
+ somePackage = mm::SomePackage.ecore
+ assert_equal somePackage.eSubpackages.first.object_id, subPackage.object_id
+ end
+
+ def test_proxy
+ p = RGen::MetamodelBuilder::MMProxy.new("test")
+ assert_equal "test", p.targetIdentifier
+ p.targetIdentifier = 123
+ assert_equal 123, p.targetIdentifier
+ p.data = "additional info"
+ assert_equal "additional info", p.data
+ q = RGen::MetamodelBuilder::MMProxy.new("ident", "data")
+ assert_equal "data", q.data
+ end
+
+ def test_proxies_has_one
+ e = mm::HasOneTestClass.new
+ proxy = RGen::MetamodelBuilder::MMProxy.new
+ e.classA = proxy
+ assert_equal proxy, e.classA
+ a = mm::ClassA.new
+ # displace proxy
+ e.classA = a
+ assert_equal a, e.classA
+ # displace by proxy
+ e.classA = proxy
+ assert_equal proxy, e.classA
+ end
+
+ def test_proxies_has_many
+ e = mm::HasManyTestClass.new
+ proxy = RGen::MetamodelBuilder::MMProxy.new
+ e.addClassA(proxy)
+ assert_equal [proxy], e.classA
+ # again
+ e.addClassA(proxy)
+ assert_equal [proxy], e.classA
+ proxy2 = RGen::MetamodelBuilder::MMProxy.new
+ e.addClassA(proxy2)
+ assert_equal [proxy, proxy2], e.classA
+ e.removeClassA(proxy)
+ assert_equal [proxy2], e.classA
+ # again
+ e.removeClassA(proxy)
+ assert_equal [proxy2], e.classA
+ e.removeClassA(proxy2)
+ assert_equal [], e.classA
+ end
+
+ def test_proxies_one_to_one
+ ea = mm::AClassOO.new
+ eb = mm::BClassOO.new
+ proxy1 = RGen::MetamodelBuilder::MMProxy.new
+ proxy2 = RGen::MetamodelBuilder::MMProxy.new
+ ea.bClass = proxy1
+ eb.aClass = proxy2
+ assert_equal proxy1, ea.bClass
+ assert_equal proxy2, eb.aClass
+ # displace proxies
+ ea.bClass = eb
+ assert_equal eb, ea.bClass
+ assert_equal ea, eb.aClass
+ # displace by proxy
+ ea.bClass = proxy1
+ assert_equal proxy1, ea.bClass
+ assert_nil eb.aClass
+ end
+
+ def test_proxies_one_to_many
+ eo = mm::OneClass.new
+ em = mm::ManyClass.new
+ proxy1 = RGen::MetamodelBuilder::MMProxy.new
+ proxy2 = RGen::MetamodelBuilder::MMProxy.new
+ eo.addManyClasses(proxy1)
+ assert_equal [proxy1], eo.manyClasses
+ em.oneClass = proxy2
+ assert_equal proxy2, em.oneClass
+ # displace proxies at many side
+ # adding em will set em.oneClass to eo and displace the proxy from em.oneClass
+ eo.addManyClasses(em)
+ assert_equal [proxy1, em], eo.manyClasses
+ assert_equal eo, em.oneClass
+ eo.removeManyClasses(proxy1)
+ assert_equal [em], eo.manyClasses
+ assert_equal eo, em.oneClass
+ # displace by proxy
+ em.oneClass = proxy2
+ assert_equal [], eo.manyClasses
+ assert_equal proxy2, em.oneClass
+ # displace proxies at one side
+ em.oneClass = eo
+ assert_equal [em], eo.manyClasses
+ assert_equal eo, em.oneClass
+ end
+
+ def test_proxies_many_to_many
+ e1 = mm::AClassMM.new
+ e2 = mm::BClassMM.new
+ proxy1 = RGen::MetamodelBuilder::MMProxy.new
+ proxy2 = RGen::MetamodelBuilder::MMProxy.new
+ e1.addBClasses(proxy1)
+ e2.addAClasses(proxy2)
+ assert_equal [proxy1], e1.bClasses
+ assert_equal [proxy2], e2.aClasses
+ e1.addBClasses(e2)
+ assert_equal [proxy1, e2], e1.bClasses
+ assert_equal [proxy2, e1], e2.aClasses
+ e1.removeBClasses(proxy1)
+ e2.removeAClasses(proxy2)
+ assert_equal [e2], e1.bClasses
+ assert_equal [e1], e2.aClasses
+ end
+
+ # Multiplicity agnostic convenience methods
+
+ def test_genericAccess
+ e1 = mm::OneClass.new
+ e2 = mm::ManyClass.new
+ e3 = mm::OneClass.new
+ e4 = mm::ManyClass.new
+ # use on "many" feature
+ e1.setOrAddGeneric("manyClasses", e2)
+ assert_equal [e2], e1.manyClasses
+ assert_equal [e2], e1.getGeneric("manyClasses")
+ assert_equal [e2], e1.getGenericAsArray("manyClasses")
+ # use on "one" feature
+ e2.setOrAddGeneric("oneClass", e3)
+ assert_equal e3, e2.oneClass
+ assert_equal e3, e2.getGeneric("oneClass")
+ assert_equal [e3], e2.getGenericAsArray("oneClass")
+ assert_nil e4.getGeneric("oneClass")
+ assert_equal [], e4.getGenericAsArray("oneClass")
+ end
+
+ def test_setNilOrRemoveGeneric
+ e1 = mm::OneClass.new
+ e2 = mm::ManyClass.new
+ e3 = mm::OneClass.new
+ # use on "many" feature
+ e1.addManyClasses(e2)
+ assert_equal [e2], e1.manyClasses
+ e1.setNilOrRemoveGeneric("manyClasses", e2)
+ assert_equal [], e1.manyClasses
+ # use on "one" feature
+ e2.oneClass = e3
+ assert_equal e3, e2.oneClass
+ e2.setNilOrRemoveGeneric("oneClass", e3)
+ assert_nil e2.oneClass
+ end
+
+ def test_setNilOrRemoveAllGeneric
+ e1 = mm::OneClass.new
+ e2 = mm::ManyClass.new
+ e3 = mm::OneClass.new
+ e4 = mm::ManyClass.new
+ # use on "many" feature
+ e1.addManyClasses(e2)
+ e1.addManyClasses(e4)
+ assert_equal [e2, e4], e1.manyClasses
+ e1.setNilOrRemoveAllGeneric("manyClasses")
+ assert_equal [], e1.manyClasses
+ # use on "one" feature
+ e2.oneClass = e3
+ assert_equal e3, e2.oneClass
+ e2.setNilOrRemoveAllGeneric("oneClass")
+ assert_nil e2.oneClass
+ end
+
+ def test_abstract
+ err = assert_raise StandardError do
+ mm::AbstractClass.new
+ end
+ assert_match /Class (\w+::)+AbstractClass is abstract/, err.message
+ end
+
+ module BadDefaultValueLiteralContainer
+ Test1 = proc do
+ class BadClass < RGen::MetamodelBuilder::MMBase
+ has_attr 'integerWithDefault', Integer, :defaultValueLiteral => "1.1"
+ end
+ end
+ Test2 = proc do
+ class BadClass < RGen::MetamodelBuilder::MMBase
+ has_attr 'integerWithDefault', Integer, :defaultValueLiteral => "x"
+ end
+ end
+ Test3 = proc do
+ class BadClass < RGen::MetamodelBuilder::MMBase
+ has_attr 'boolWithDefault', Boolean, :defaultValueLiteral => "1"
+ end
+ end
+ Test4 = proc do
+ class BadClass < RGen::MetamodelBuilder::MMBase
+ has_attr 'floatWithDefault', Float, :defaultValueLiteral => "1"
+ end
+ end
+ Test5 = proc do
+ class BadClass < RGen::MetamodelBuilder::MMBase
+ has_attr 'floatWithDefault', Float, :defaultValueLiteral => "true"
+ end
+ end
+ Test6 = proc do
+ class BadClass < RGen::MetamodelBuilder::MMBase
+ kindType = RGen::MetamodelBuilder::DataTypes::Enum.new([:simple, :extended])
+ has_attr 'enumWithDefault', kindType, :defaultValueLiteral => "xxx"
+ end
+ end
+ Test7 = proc do
+ class BadClass < RGen::MetamodelBuilder::MMBase
+ kindType = RGen::MetamodelBuilder::DataTypes::Enum.new([:simple, :extended])
+ has_attr 'enumWithDefault', kindType, :defaultValueLiteral => "7"
+ end
+ end
+ Test8 = proc do
+ class BadClass < RGen::MetamodelBuilder::MMBase
+ has_attr 'longWithDefault', Integer, :defaultValueLiteral => "1.1"
+ end
+ end
+ end
+
+ def test_bad_default_value_literal
+ err = assert_raise StandardError do
+ BadDefaultValueLiteralContainer::Test1.call
+ end
+ assert_equal "Property integerWithDefault can not take value 1.1, expected an Integer", err.message
+ err = assert_raise StandardError do
+ BadDefaultValueLiteralContainer::Test2.call
+ end
+ assert_equal "Property integerWithDefault can not take value x, expected an Integer", err.message
+ err = assert_raise StandardError do
+ BadDefaultValueLiteralContainer::Test3.call
+ end
+ assert_equal "Property boolWithDefault can not take value 1, expected true or false", err.message
+ err = assert_raise StandardError do
+ BadDefaultValueLiteralContainer::Test4.call
+ end
+ assert_equal "Property floatWithDefault can not take value 1, expected a Float", err.message
+ err = assert_raise StandardError do
+ BadDefaultValueLiteralContainer::Test5.call
+ end
+ assert_equal "Property floatWithDefault can not take value true, expected a Float", err.message
+ err = assert_raise StandardError do
+ BadDefaultValueLiteralContainer::Test6.call
+ end
+ assert_equal "Property enumWithDefault can not take value xxx, expected one of :simple, :extended", err.message
+ err = assert_raise StandardError do
+ BadDefaultValueLiteralContainer::Test7.call
+ end
+ assert_equal "Property enumWithDefault can not take value 7, expected one of :simple, :extended", err.message
+ err = assert_raise StandardError do
+ BadDefaultValueLiteralContainer::Test8.call
+ end
+ assert_equal "Property longWithDefault can not take value 1.1, expected an Integer", err.message
+ end
+
+ def test_isset_set_to_nil
+ e = mm::SimpleClass.new
+ assert_respond_to e, :name
+ assert !e.eIsSet(:name)
+ assert !e.eIsSet("name")
+ e.name = nil
+ assert e.eIsSet(:name)
+ end
+
+ def test_isset_set_to_default
+ e = mm::SimpleClass.new
+ assert !e.eIsSet(:stringWithDefault)
+ # set the default value
+ e.name = "xtest"
+ assert e.eIsSet(:name)
+ end
+
+ def test_isset_many_add
+ e = mm::ManyAttrClass.new
+ assert_equal [], e.literals
+ assert !e.eIsSet(:literals)
+ e.addLiterals("x")
+ assert e.eIsSet(:literals)
+ end
+
+ def test_isset_many_remove
+ e = mm::ManyAttrClass.new
+ assert_equal [], e.literals
+ assert !e.eIsSet(:literals)
+ # removing a value which is not there
+ e.removeLiterals("x")
+ assert e.eIsSet(:literals)
+ end
+
+ def test_isset_ref
+ ac = mm::AClassOO.new
+ bc = mm::BClassOO.new
+ assert !bc.eIsSet(:aClass)
+ assert !ac.eIsSet(:bClass)
+ bc.aClass = ac
+ assert bc.eIsSet(:aClass)
+ assert ac.eIsSet(:bClass)
+ end
+
+ def test_isset_ref_many
+ ac = mm::AClassMM.new
+ bc = mm::BClassMM.new
+ assert !bc.eIsSet(:aClasses)
+ assert !ac.eIsSet(:bClasses)
+ bc.aClasses = [ac]
+ assert bc.eIsSet(:aClasses)
+ assert ac.eIsSet(:bClasses)
+ end
+
+ def test_unset_nil
+ e = mm::SimpleClass.new
+ e.name = nil
+ assert e.eIsSet(:name)
+ e.eUnset(:name)
+ assert !e.eIsSet(:name)
+ end
+
+ def test_unset_string
+ e = mm::SimpleClass.new
+ e.name = "someone"
+ assert e.eIsSet(:name)
+ e.eUnset(:name)
+ assert !e.eIsSet(:name)
+ end
+
+ def test_unset_ref
+ ac = mm::AClassOO.new
+ bc = mm::BClassOO.new
+ bc.aClass = ac
+ assert bc.eIsSet(:aClass)
+ assert ac.eIsSet(:bClass)
+ assert_equal bc, ac.bClass
+ bc.eUnset(:aClass)
+ assert_nil bc.aClass
+ assert_nil ac.bClass
+ assert !bc.eIsSet(:aClass)
+ # opposite ref is nil but still "set"
+ assert ac.eIsSet(:bClass)
+ end
+
+ def test_unset_ref_many
+ ac = mm::AClassMM.new
+ bc = mm::BClassMM.new
+ bc.aClasses = [ac]
+ assert bc.eIsSet(:aClasses)
+ assert ac.eIsSet(:bClasses)
+ assert_equal [bc], ac.bClasses
+ bc.eUnset(:aClasses)
+ assert_equal [], bc.aClasses
+ assert_equal [], ac.bClasses
+ assert !bc.eIsSet(:aClasses)
+ # opposite ref is empty but still "set"
+ assert ac.eIsSet(:bClasses)
+ end
+
+ def test_unset_marshal
+ e = mm::SimpleClass.new
+ e.name = "someone"
+ e.eUnset(:name)
+ e2 = Marshal.load(Marshal.dump(e))
+ assert e.object_id != e2.object_id
+ assert !e2.eIsSet(:name)
+ end
+
+ def test_container_one_uni
+ a = mm::ContainerClass.new
+ b = mm::ContainedClass.new
+ c = mm::ContainedClass.new
+ assert_equal [], a.eContents
+ assert_equal [], a.eAllContents
+ assert_nil b.eContainer
+ assert_nil b.eContainingFeature
+ a.oneChildUni = b
+ assert_equal a, b.eContainer
+ assert_equal :oneChildUni, b.eContainingFeature
+ assert_equal [b], a.eContents
+ assert_equal [b], a.eAllContents
+ a.oneChildUni = c
+ assert_nil b.eContainer
+ assert_nil b.eContainingFeature
+ assert_equal a, c.eContainer
+ assert_equal :oneChildUni, c.eContainingFeature
+ assert_equal [c], a.eContents
+ assert_equal [c], a.eAllContents
+ a.oneChildUni = nil
+ assert_nil c.eContainer
+ assert_nil c.eContainingFeature
+ assert_equal [], a.eContents
+ assert_equal [], a.eAllContents
+ end
+
+ def test_container_many_uni
+ a = mm::ContainerClass.new
+ b = mm::ContainedClass.new
+ c = mm::ContainedClass.new
+ assert_equal [], a.eContents
+ assert_equal [], a.eAllContents
+ a.addManyChildUni(b)
+ assert_equal a, b.eContainer
+ assert_equal :manyChildUni, b.eContainingFeature
+ assert_equal [b], a.eContents
+ assert_equal [b], a.eAllContents
+ a.addManyChildUni(c)
+ assert_equal a, c.eContainer
+ assert_equal :manyChildUni, c.eContainingFeature
+ assert_equal [b, c], a.eContents
+ assert_equal [b, c], a.eAllContents
+ a.removeManyChildUni(b)
+ assert_nil b.eContainer
+ assert_nil b.eContainingFeature
+ assert_equal a, c.eContainer
+ assert_equal :manyChildUni, c.eContainingFeature
+ assert_equal [c], a.eContents
+ assert_equal [c], a.eAllContents
+ a.removeManyChildUni(c)
+ assert_nil c.eContainer
+ assert_nil c.eContainingFeature
+ assert_equal [], a.eContents
+ assert_equal [], a.eAllContents
+ end
+
+ def test_container_one_bi
+ a = mm::ContainerClass.new
+ b = mm::ContainedClass.new
+ c = mm::ContainerClass.new
+ d = mm::ContainedClass.new
+ a.oneChild = b
+ assert_equal a, b.eContainer
+ assert_equal :oneChild, b.eContainingFeature
+ assert_equal [b], a.eContents
+ assert_equal [b], a.eAllContents
+ c.oneChild = d
+ assert_equal c, d.eContainer
+ assert_equal :oneChild, d.eContainingFeature
+ assert_equal [d], c.eContents
+ assert_equal [d], c.eAllContents
+ a.oneChild = d
+ assert_nil b.eContainer
+ assert_nil b.eContainingFeature
+ assert_equal a, d.eContainer
+ assert_equal :oneChild, d.eContainingFeature
+ assert_equal [d], a.eContents
+ assert_equal [d], a.eAllContents
+ assert_equal [], c.eContents
+ assert_equal [], c.eAllContents
+ end
+
+  def test_container_one_bi_rev
+ a = mm::ContainerClass.new
+ b = mm::ContainedClass.new
+ c = mm::ContainerClass.new
+ d = mm::ContainedClass.new
+ a.oneChild = b
+ assert_equal a, b.eContainer
+ assert_equal :oneChild, b.eContainingFeature
+ assert_equal [b], a.eContents
+ assert_equal [b], a.eAllContents
+ c.oneChild = d
+ assert_equal c, d.eContainer
+ assert_equal :oneChild, d.eContainingFeature
+ assert_equal [d], c.eContents
+ assert_equal [d], c.eAllContents
+ d.parentOne = a
+ assert_nil b.eContainer
+ assert_nil b.eContainingFeature
+ assert_equal a, d.eContainer
+ assert_equal :oneChild, d.eContainingFeature
+ assert_equal [d], a.eContents
+ assert_equal [d], a.eAllContents
+ assert_equal [], c.eContents
+ assert_equal [], c.eAllContents
+ end
+
+  def test_container_one_bi_nil
+ a = mm::ContainerClass.new
+ b = mm::ContainedClass.new
+ a.oneChild = b
+ assert_equal a, b.eContainer
+ assert_equal :oneChild, b.eContainingFeature
+ assert_equal [b], a.eContents
+ assert_equal [b], a.eAllContents
+ a.oneChild = nil
+ assert_nil b.eContainer
+ assert_nil b.eContainingFeature
+ assert_equal [], a.eContents
+ assert_equal [], a.eAllContents
+ end
+
+  def test_container_one_bi_nil_rev
+ a = mm::ContainerClass.new
+ b = mm::ContainedClass.new
+ a.oneChild = b
+ assert_equal a, b.eContainer
+ assert_equal :oneChild, b.eContainingFeature
+ assert_equal [b], a.eContents
+ assert_equal [b], a.eAllContents
+ b.parentOne = nil
+ assert_nil b.eContainer
+ assert_nil b.eContainingFeature
+ assert_equal [], a.eContents
+ assert_equal [], a.eAllContents
+ end
+
+ def test_container_many_bi
+ a = mm::ContainerClass.new
+ b = mm::ContainedClass.new
+ c = mm::ContainedClass.new
+ a.addManyChild(b)
+ a.addManyChild(c)
+ assert_equal a, b.eContainer
+ assert_equal :manyChild, b.eContainingFeature
+ assert_equal a, c.eContainer
+ assert_equal :manyChild, c.eContainingFeature
+ assert_equal [b, c], a.eContents
+ assert_equal [b, c], a.eAllContents
+ a.removeManyChild(b)
+ assert_nil b.eContainer
+ assert_nil b.eContainingFeature
+ assert_equal [c], a.eContents
+ assert_equal [c], a.eAllContents
+ end
+
+  def test_container_many_bi_steal
+ a = mm::ContainerClass.new
+ b = mm::ContainedClass.new
+ c = mm::ContainedClass.new
+ d = mm::ContainerClass.new
+ a.addManyChild(b)
+ a.addManyChild(c)
+ assert_equal a, b.eContainer
+ assert_equal :manyChild, b.eContainingFeature
+ assert_equal a, c.eContainer
+ assert_equal :manyChild, c.eContainingFeature
+ assert_equal [b, c], a.eContents
+ assert_equal [b, c], a.eAllContents
+ d.addManyChild(b)
+ assert_equal d, b.eContainer
+ assert_equal :manyChild, b.eContainingFeature
+ assert_equal [c], a.eContents
+ assert_equal [c], a.eAllContents
+ assert_equal [b], d.eContents
+ assert_equal [b], d.eAllContents
+ end
+
+  def test_container_many_bi_steal_rev
+ a = mm::ContainerClass.new
+ b = mm::ContainedClass.new
+ c = mm::ContainedClass.new
+ d = mm::ContainerClass.new
+ a.addManyChild(b)
+ a.addManyChild(c)
+ assert_equal a, b.eContainer
+ assert_equal :manyChild, b.eContainingFeature
+ assert_equal a, c.eContainer
+ assert_equal :manyChild, c.eContainingFeature
+ assert_equal [b, c], a.eContents
+ assert_equal [b, c], a.eAllContents
+ b.parentMany = d
+ assert_equal d, b.eContainer
+ assert_equal :manyChild, b.eContainingFeature
+ assert_equal [c], a.eContents
+ assert_equal [c], a.eAllContents
+ assert_equal [b], d.eContents
+ assert_equal [b], d.eAllContents
+ end
+
+ def test_all_contents
+ a = mm::ContainerClass.new
+ b = mm::NestedContainerClass.new
+ c = mm::ContainedClass.new
+ a.oneChildUni = b
+ b.oneChildUni = c
+ assert_equal [b, c], a.eAllContents
+ end
+
+ def test_all_contents_with_block
+ a = mm::ContainerClass.new
+ b = mm::NestedContainerClass.new
+ c = mm::ContainedClass.new
+ a.oneChildUni = b
+ b.oneChildUni = c
+ yielded = []
+ a.eAllContents do |e|
+ yielded << e
+ end
+ assert_equal [b, c], yielded
+ end
+
+ def test_all_contents_prune
+ a = mm::ContainerClass.new
+ b = mm::NestedContainerClass.new
+ c = mm::ContainedClass.new
+ a.oneChildUni = b
+ b.oneChildUni = c
+ yielded = []
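+    # returning :prune from the block (see below) keeps eAllContents from descending
+    # into the children of the element just yielded, so c (a child of b) is skipped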
+ a.eAllContents do |e|
+ yielded << e
+ :prune
+ end
+ assert_equal [b], yielded
+ end
+
+ def test_container_generic
+ a = mm::ContainerClass.new
+ assert_nothing_raised do
+ a.oneChild = RGen::MetamodelBuilder::MMGeneric.new
+ end
+ end
+
+ def test_opposite_assoc_on_first_write
+ ac = mm::OppositeRefAssocA.new
+ bc = mm::OppositeRefAssocB.new
+
+    # the 'aClass' and 'bClass' accessor methods have not been called before this point;
+    # test that on-demand metamodel building creates the opposite reference association on the first write
+ bc.aClass = ac
+ assert_equal ac, bc.aClass
+ assert_equal bc, ac.bClass
+ end
+
+ def test_clear_by_array_assignment
+ oc1 = mm::OneClass.new
+ mc1 = mm::ManyClass.new
+ mc2 = mm::ManyClass.new
+ mc3 = mm::ManyClass.new
+
+ oc1.manyClasses = [mc1, mc2]
+ assert_equal [mc1, mc2], oc1.manyClasses
+ oc1.manyClasses = []
+ assert_equal [], oc1.manyClasses
+ end
+
+ def test_clear_by_array_assignment_uni
+ a = mm::ContainerClass.new
+ b = mm::ContainedClass.new
+ c = mm::ContainedClass.new
+
+ a.manyChildUni = [b, c]
+ assert_equal [b, c], a.manyChildUni
+ a.manyChildUni = []
+ assert_equal [], a.manyChildUni
+ end
+
+ def test_disconnectContainer_one_uni
+ a = mm::ContainerClass.new
+ b = mm::ContainedClass.new
+ a.oneChildUni = b
+ b.disconnectContainer
+ assert_nil a.oneChildUni
+ end
+
+ def test_disconnectContainer_one
+ a = mm::ContainerClass.new
+ b = mm::ContainedClass.new
+ a.oneChild = b
+ b.disconnectContainer
+ assert_nil a.oneChild
+ assert_nil b.parentOne
+ end
+
+ def test_disconnectContainer_many_uni
+ a = mm::ContainerClass.new
+ b = mm::ContainedClass.new
+ c = mm::ContainedClass.new
+ a.addManyChildUni(b)
+ a.addManyChildUni(c)
+ b.disconnectContainer
+ assert_equal [c], a.manyChildUni
+ end
+
+ def test_disconnectContainer_many
+ a = mm::ContainerClass.new
+ b = mm::ContainedClass.new
+ c = mm::ContainedClass.new
+ a.addManyChild(b)
+ a.addManyChild(c)
+ b.disconnectContainer
+ assert_nil b.parentMany
+ assert_equal [c], a.manyChild
+ end
+
+ # Duplicate Containment Tests
+ #
+ # Testing that no element is contained in two different containers at a time.
+ # This must also work for uni-directional containments as well as
+ # for containments via different roles.
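+  #
+  # Illustrative sketch (hypothetical snippet, mirroring the uni-directional,
+  # single-role test below):
+  #   a1.oneChildUni = b
+  #   a2.oneChildUni = b   # => b moves to a2; a1.oneChildUni is now nil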
+
+ # here the bi-dir reference disconnects from the previous container
+ def test_duplicate_containment_bidir_samerole_one
+ a1 = mm::ContainerClass.new
+ a2 = mm::ContainerClass.new
+ b = mm::ContainedClass.new
+ a1.oneChild = b
+ a2.oneChild = b
+ assert_nil a1.oneChild
+ end
+
+ # here the bi-dir reference disconnects from the previous container
+ def test_duplicate_containment_bidir_samerole_many
+ a1 = mm::ContainerClass.new
+ a2 = mm::ContainerClass.new
+ b = mm::ContainedClass.new
+ a1.addManyChild(b)
+ a2.addManyChild(b)
+ assert_equal [], a1.manyChild
+ end
+
+ def test_duplicate_containment_unidir_samerole_one
+ a1 = mm::ContainerClass.new
+ a2 = mm::ContainerClass.new
+ b = mm::ContainedClass.new
+ a1.oneChildUni = b
+ a2.oneChildUni = b
+ assert_nil a1.oneChildUni
+ end
+
+ def test_duplicate_containment_unidir_samerole_many
+ a1 = mm::ContainerClass.new
+ a2 = mm::ContainerClass.new
+ b = mm::ContainedClass.new
+ a1.addManyChildUni(b)
+ a2.addManyChildUni(b)
+ assert_equal [], a1.manyChildUni
+ end
+
+ def test_duplicate_containment_bidir_otherrole_one
+ a1 = mm::ContainerClass.new
+ a2 = mm::ContainerClass.new
+ b = mm::ContainedClass.new
+ a1.oneChild = b
+ a2.oneChild2 = b
+ assert_nil a1.oneChild
+ end
+
+ def test_duplicate_containment_bidir_otherrole_many
+ a1 = mm::ContainerClass.new
+ a2 = mm::ContainerClass.new
+ b = mm::ContainedClass.new
+ a1.addManyChild(b)
+ a2.addManyChild2(b)
+ assert_equal [], a1.manyChild
+ end
+
+ def test_duplicate_containment_unidir_otherrole_one
+ a1 = mm::ContainerClass.new
+ a2 = mm::ContainerClass.new
+ b = mm::ContainedClass.new
+ a1.oneChildUni = b
+ a2.oneChildUni2 = b
+ assert_nil a1.oneChildUni
+ end
+
+ def test_duplicate_containment_unidir_otherrole_many
+ a1 = mm::ContainerClass.new
+ a2 = mm::ContainerClass.new
+ b = mm::ContainedClass.new
+ a1.addManyChildUni(b)
+ a2.addManyChildUni2(b)
+ assert_equal [], a1.manyChildUni
+ end
+
+end
diff --git a/lib/puppet/vendor/rgen/test/metamodel_from_ecore_test.rb b/lib/puppet/vendor/rgen/test/metamodel_from_ecore_test.rb
new file mode 100644
index 000000000..6617f652d
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/metamodel_from_ecore_test.rb
@@ -0,0 +1,57 @@
+$:.unshift File.join(File.dirname(__FILE__),"..","test")
+
+require 'metamodel_builder_test'
+require 'rgen/ecore/ecore_to_ruby'
+
+# this test suite runs all the tests of MetamodelBuilderTest with the TestMetamodel
+# replaced by the result of feeding its ecore model through ECoreToRuby
+#
+class MetamodelFromEcoreTest < MetamodelBuilderTest
+
+ # clone the ecore model, because it will be modified below
+ test_ecore = Marshal.load(Marshal.dump(TestMetamodel.ecore))
+ # some EEnum types are not hooked into the EPackage because they do not
+ # appear with a constant assignment in TestMetamodel
+ # fix this by explicitly assigning the ePackage
+ # also fix the name of anonymous enums
+ test_ecore.eClassifiers.find{|c| c.name == "SimpleClass"}.
+ eAttributes.select{|a| a.name == "kind" || a.name == "kindWithDefault"}.each{|a|
+ a.eType.name = "KindType"
+ a.eType.ePackage = test_ecore}
+ test_ecore.eClassifiers.find{|c| c.name == "ManyAttrClass"}.
+ eAttributes.select{|a| a.name == "enums"}.each{|a|
+ a.eType.name = "ABCEnum"
+ a.eType.ePackage = test_ecore}
+
+ MetamodelFromEcore = RGen::ECore::ECoreToRuby.new.create_module(test_ecore)
+
+ def mm
+ MetamodelFromEcore
+ end
+
+ # alternative implementation for dynamic variant
+ def test_bad_default_value_literal
+ package = RGen::ECore::EPackage.new(:name => "Package1", :eClassifiers => [
+ RGen::ECore::EClass.new(:name => "Class1", :eStructuralFeatures => [
+ RGen::ECore::EAttribute.new(:name => "value", :eType => RGen::ECore::EInt, :defaultValueLiteral => "x")])])
+ mod = RGen::ECore::ECoreToRuby.new.create_module(package)
+ obj = mod::Class1.new
+ # the error is raised only when the feature is lazily constructed
+ assert_raise StandardError do
+ obj.value
+ end
+ end
+
+ # define all the test methods explicitly in the subclass
+ # otherwise minitest is smart enough to run the tests only in the superclass context
+ MetamodelBuilderTest.instance_methods.select{|m| m.to_s =~ /^test_/}.each do |m|
+ next if instance_methods(false).include?(m)
+ module_eval <<-END
+ def #{m}
+ super
+ end
+ END
+ end
+
+end
+
diff --git a/lib/puppet/vendor/rgen/test/metamodel_order_test.rb b/lib/puppet/vendor/rgen/test/metamodel_order_test.rb
new file mode 100644
index 000000000..a81cad4b5
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/metamodel_order_test.rb
@@ -0,0 +1,131 @@
+$:.unshift File.join(File.dirname(__FILE__),"..","lib")
+
+require 'test/unit'
+require 'rgen/ecore/ecore'
+require 'rgen/array_extensions'
+
+class MetamodelOrderTest < Test::Unit::TestCase
+ include RGen::ECore
+
+ module TestMM1
+ extend RGen::MetamodelBuilder::ModuleExtension
+
+ class Class11 < RGen::MetamodelBuilder::MMBase
+ end
+
+ module Module11
+ extend RGen::MetamodelBuilder::ModuleExtension
+
+ DataType111 = RGen::MetamodelBuilder::DataTypes::Enum.new(:name => "DataType111" ,:literals => {:b => 1})
+ DataType112 = RGen::MetamodelBuilder::DataTypes::Enum.new(:name => "DataType112", :literals => {:b => 1})
+
+ class Class111 < RGen::MetamodelBuilder::MMBase
+ end
+
+ # anonymous classes won't be handled by the order helper, but will be in eClassifiers
+ Class112 = Class.new(RGen::MetamodelBuilder::MMBase)
+
+ # classes that are not MMBase won't be handled
+ class Class113
+ end
+
+ # modules that are not extended by the ModuleExtension are not handled
+ module Module111
+ end
+
+    # however, it can be extended later on
+ module Module112
+ # this one is not handled by the order helper since Module112 doesn't have the ModuleExtension yet
+ # however, it will be in eClassifiers
+ class Class1121 < RGen::MetamodelBuilder::MMBase
+ end
+ end
+ # this datatype must be in Module11 not Module112
+ DataType113 = RGen::MetamodelBuilder::DataTypes::Enum.new(:name => "DataType113", :literals => {:b => 1})
+
+ Module112.extend(RGen::MetamodelBuilder::ModuleExtension)
+ # this datatype must be in Module11 not Module112
+ DataType114 = RGen::MetamodelBuilder::DataTypes::Enum.new(:name => "DataType114", :literals => {:b => 1})
+ module Module112
+ # this one is handled because now Module112 is extended
+ class Class1122 < RGen::MetamodelBuilder::MMBase
+ end
+ end
+
+ DataType115 = RGen::MetamodelBuilder::DataTypes::Enum.new(:name => "DataType115", :literals => {:b => 1})
+ DataType116 = RGen::MetamodelBuilder::DataTypes::Enum.new(:name => "DataType116", :literals => {:b => 1})
+ end
+
+ DataType11 = RGen::MetamodelBuilder::DataTypes::Enum.new(:name => "DataType11", :literals => {:a => 1})
+
+ class Class12 < RGen::MetamodelBuilder::MMBase
+ end
+
+ class Class13 < RGen::MetamodelBuilder::MMBase
+ end
+ end
+
+ # datatypes outside of a module won't be handled
+ DataType1 = RGen::MetamodelBuilder::DataTypes::Enum.new(:name => "DataType1", :literals => {:b => 1})
+
+ # classes outside of a module won't be handled
+ class Class1 < RGen::MetamodelBuilder::MMBase
+ end
+
+ module TestMM2
+ extend RGen::MetamodelBuilder::ModuleExtension
+
+ TestMM1::Module11.extend(RGen::MetamodelBuilder::ModuleExtension)
+    # this is a critical case: because of the previous extension of Module11, which is in a
+    # different hierarchy, DataType21 is looked up in Module11 and its parents; since it is
+    # not found there, the definition is ignored for order calculation
+ DataType21 = RGen::MetamodelBuilder::DataTypes::Enum.new(:name => "DataType21", :literals => {:b => 1})
+
+ module Module21
+ extend RGen::MetamodelBuilder::ModuleExtension
+ end
+
+ module Module22
+ extend RGen::MetamodelBuilder::ModuleExtension
+ end
+
+ module Module23
+ extend RGen::MetamodelBuilder::ModuleExtension
+ end
+
+ # if there is no other class or module after the last datatype, it won't show up in _constantOrder
+ # however, the order of eClassifiers can still be reconstructed
+    # note that this cannot be tested if the test is executed as part of the whole test suite
+ # since there will be classes and modules created within other test files
+ DataType22 = RGen::MetamodelBuilder::DataTypes::Enum.new(:name => "DataType22", :literals => {:b => 1})
+ end
+
+ def test_constant_order
+ assert_equal ["Class11", "Module11", "DataType11", "Class12", "Class13"], TestMM1._constantOrder
+ assert_equal ["DataType111", "DataType112", "Class111", "DataType113", "Module112", "DataType114", "DataType115", "DataType116"], TestMM1::Module11._constantOrder
+ assert_equal ["Class1122"], TestMM1::Module11::Module112._constantOrder
+ if File.basename($0) == "metamodel_order_test.rb"
+ # this won't work if run in the whole test suite (see comment at DataType22)
+ assert_equal ["Module21", "Module22", "Module23"], TestMM2._constantOrder
+ end
+ end
+
+ def test_classifier_order
+    # eClassifiers also contains the ones which were ignored in order calculation; these are expected at the end
+ # (in an arbitrary order)
+ assert_equal ["Class11", "DataType11", "Class12", "Class13"], TestMM1.ecore.eClassifiers.name
+ assert_equal ["DataType111", "DataType112", "Class111", "DataType113", "DataType114", "DataType115", "DataType116", "Class112"], TestMM1::Module11.ecore.eClassifiers.name
+ assert_equal ["Class1122", "Class1121"], TestMM1::Module11::Module112.ecore.eClassifiers.name
+ # no classifiers in TestMM2._constantOrder, so the datatypes can appear in arbitrary order
+ assert_equal ["DataType21","DataType22"], TestMM2.ecore.eClassifiers.name.sort
+ end
+
+ def test_subpackage_order
+ assert_equal ["Module11"], TestMM1.ecore.eSubpackages.name
+ assert_equal ["Module112"], TestMM1::Module11.ecore.eSubpackages.name
+ assert_equal [], TestMM1::Module11::Module112.ecore.eSubpackages.name
+ assert_equal ["Module21", "Module22", "Module23"], TestMM2.ecore.eSubpackages.name
+ end
+end
+
+
diff --git a/lib/puppet/vendor/rgen/test/metamodel_roundtrip_test.rb b/lib/puppet/vendor/rgen/test/metamodel_roundtrip_test.rb
new file mode 100644
index 000000000..f0be7cf43
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/metamodel_roundtrip_test.rb
@@ -0,0 +1,98 @@
+$:.unshift File.join(File.dirname(__FILE__),"..","lib")
+
+require 'test/unit'
+require 'rgen/array_extensions'
+require 'rgen/util/model_comparator'
+require 'mmgen/metamodel_generator'
+require 'rgen/instantiator/ecore_xml_instantiator'
+require 'rgen/serializer/xmi20_serializer'
+
+class MetamodelRoundtripTest < Test::Unit::TestCase
+
+ TEST_DIR = File.dirname(__FILE__)+"/metamodel_roundtrip_test"
+
+ include MMGen::MetamodelGenerator
+ include RGen::Util::ModelComparator
+
+ module Regenerated
+ Inside = binding
+ end
+
+ def test_generator
+ require TEST_DIR+"/TestModel.rb"
+ outfile = TEST_DIR+"/TestModel_Regenerated.rb"
+ generateMetamodel(HouseMetamodel.ecore, outfile)
+
+ File.open(outfile) do |f|
+ eval(f.read, Regenerated::Inside)
+ end
+
+ assert modelEqual?(HouseMetamodel.ecore, Regenerated::HouseMetamodel.ecore, ["instanceClassName"])
+ end
+
+ module UMLRegenerated
+ Inside = binding
+ end
+
+ def test_generate_from_ecore
+ outfile = TEST_DIR+"/houseMetamodel_from_ecore.rb"
+
+ env = RGen::Environment.new
+ File.open(TEST_DIR+"/houseMetamodel.ecore") { |f|
+ ECoreXMLInstantiator.new(env).instantiate(f.read)
+ }
+ rootpackage = env.find(:class => RGen::ECore::EPackage).first
+ rootpackage.name = "HouseMetamodel"
+ generateMetamodel(rootpackage, outfile)
+
+ File.open(outfile) do |f|
+ eval(f.read, UMLRegenerated::Inside, "test_eval", 0)
+ end
+ end
+
+ def test_ecore_serializer
+ require TEST_DIR+"/TestModel.rb"
+ File.open(TEST_DIR+"/houseMetamodel_Regenerated.ecore","w") do |f|
+ ser = RGen::Serializer::XMI20Serializer.new(f)
+ ser.serialize(HouseMetamodel.ecore)
+ end
+ end
+
+ BuiltinTypesTestEcore = TEST_DIR+"/using_builtin_types.ecore"
+
+ def test_ecore_serializer_builtin_types
+ mm = RGen::ECore::EPackage.new(:name => "P1", :eClassifiers => [
+ RGen::ECore::EClass.new(:name => "C1", :eStructuralFeatures => [
+ RGen::ECore::EAttribute.new(:name => "a1", :eType => RGen::ECore::EString),
+ RGen::ECore::EAttribute.new(:name => "a2", :eType => RGen::ECore::EInt),
+ RGen::ECore::EAttribute.new(:name => "a3", :eType => RGen::ECore::ELong),
+ RGen::ECore::EAttribute.new(:name => "a4", :eType => RGen::ECore::EFloat),
+ RGen::ECore::EAttribute.new(:name => "a5", :eType => RGen::ECore::EBoolean)
+ ])
+ ])
+ outfile = TEST_DIR+"/using_builtin_types_serialized.ecore"
+ File.open(outfile, "w") do |f|
+ ser = RGen::Serializer::XMI20Serializer.new(f)
+ ser.serialize(mm)
+ end
+ assert_equal(File.read(BuiltinTypesTestEcore), File.read(outfile))
+ end
+
+ def test_ecore_instantiator_builtin_types
+ env = RGen::Environment.new
+ File.open(BuiltinTypesTestEcore) { |f|
+ ECoreXMLInstantiator.new(env).instantiate(f.read)
+ }
+ a1 = env.find(:class => RGen::ECore::EAttribute, :name => "a1").first
+ assert_equal(RGen::ECore::EString, a1.eType)
+ a2 = env.find(:class => RGen::ECore::EAttribute, :name => "a2").first
+ assert_equal(RGen::ECore::EInt, a2.eType)
+ a3 = env.find(:class => RGen::ECore::EAttribute, :name => "a3").first
+ assert_equal(RGen::ECore::ELong, a3.eType)
+ a4 = env.find(:class => RGen::ECore::EAttribute, :name => "a4").first
+ assert_equal(RGen::ECore::EFloat, a4.eType)
+ a5 = env.find(:class => RGen::ECore::EAttribute, :name => "a5").first
+ assert_equal(RGen::ECore::EBoolean, a5.eType)
+ end
+
+end
diff --git a/lib/puppet/vendor/rgen/test/metamodel_roundtrip_test/.gitignore b/lib/puppet/vendor/rgen/test/metamodel_roundtrip_test/.gitignore
new file mode 100644
index 000000000..817ba4948
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/metamodel_roundtrip_test/.gitignore
@@ -0,0 +1,2 @@
+using_builtin_types_serialized.ecore
+
diff --git a/lib/puppet/vendor/rgen/test/metamodel_roundtrip_test/TestModel.rb b/lib/puppet/vendor/rgen/test/metamodel_roundtrip_test/TestModel.rb
new file mode 100644
index 000000000..01342286f
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/metamodel_roundtrip_test/TestModel.rb
@@ -0,0 +1,70 @@
+require 'rgen/metamodel_builder'
+
+module HouseMetamodel
+ extend RGen::MetamodelBuilder::ModuleExtension
+ include RGen::MetamodelBuilder::DataTypes
+
+ SexEnum = Enum.new(:name => "SexEnum", :literals => [ :male, :female ])
+ # TODO: Datatypes
+ # AggregationKind = Enum.new([ :none, :aggregate, :composite ])
+
+ class MeetingPlace < RGen::MetamodelBuilder::MMBase
+ annotation :source => "testmodel", :details => { 'complexity' => '1', 'date_created' => '2006-07-12 08:40:46', 'date_modified' => '2006-07-12 08:44:02', 'ea_ntype' => '0', 'ea_stype' => 'Class', 'gentype' => 'Java', 'isSpecification' => 'false', 'package' => 'EAPK_A1B83D59_CAE1_422c_BA5F_D3624D7156AD', 'package_name' => 'HouseMetamodel', 'phase' => '1.0', 'status' => 'Proposed', 'style' => 'BackColor=-1;BorderColor=-1;BorderWidth=-1;FontColor=-1;VSwimLanes=0;HSwimLanes=0;BorderStyle=0;', 'tagged' => '0', 'version' => '1.0' }
+ end
+
+ class Person < RGen::MetamodelBuilder::MMBase
+ annotation 'complexity' => '1', 'date_created' => '2006-06-27 08:34:23', 'date_modified' => '2006-06-27 08:34:26', 'ea_ntype' => '0', 'ea_stype' => 'Class', 'gentype' => 'Java', 'isSpecification' => 'false', 'package' => 'EAPK_A1B83D59_CAE1_422c_BA5F_D3624D7156AD', 'package_name' => 'HouseMetamodel', 'phase' => '1.0', 'status' => 'Proposed', 'style' => 'BackColor=-1;BorderColor=-1;BorderWidth=-1;FontColor=-1;VSwimLanes=0;HSwimLanes=0;BorderStyle=0;', 'tagged' => '0', 'version' => '1.0'
+ has_attr 'sex', SexEnum
+ has_attr 'id', Long
+ has_many_attr 'nicknames', String
+ end
+
+ class House < RGen::MetamodelBuilder::MMBase
+ annotation 'complexity' => '1', 'date_created' => '2005-09-16 19:52:18', 'date_modified' => '2006-02-28 08:29:19', 'ea_ntype' => '0', 'ea_stype' => 'Class', 'gentype' => 'Java', 'isSpecification' => 'false', 'package' => 'EAPK_A1B83D59_CAE1_422c_BA5F_D3624D7156AD', 'package_name' => 'HouseMetamodel', 'phase' => '1.0', 'status' => 'Proposed', 'stereotype' => 'dummy', 'style' => 'BackColor=-1;BorderColor=-1;BorderWidth=-1;FontColor=-1;VSwimLanes=0;HSwimLanes=0;BorderStyle=0;', 'tagged' => '0', 'version' => '1.0'
+ has_attr 'size', Integer
+ has_attr 'module'
+ has_attr 'address', String, :changeable => false do
+ annotation 'collection' => 'false', 'containment' => 'Not Specified', 'derived' => '0', 'duplicates' => '0', 'ea_guid' => '{A8DF581B-9AC6-4f75-AB48-8FAEDFC6E068}', 'lowerBound' => '1', 'ordered' => '0', 'position' => '0', 'styleex' => 'volatile=0;', 'type' => 'String', 'upperBound' => '1'
+ end
+
+ end
+
+
+ module Rooms
+ extend RGen::MetamodelBuilder::ModuleExtension
+
+
+ class Room < RGen::MetamodelBuilder::MMBase
+ abstract
+ annotation 'complexity' => '1', 'date_created' => '2005-09-16 19:52:28', 'date_modified' => '2006-06-22 21:15:25', 'ea_ntype' => '0', 'ea_stype' => 'Class', 'gentype' => 'Java', 'isSpecification' => 'false', 'package' => 'EAPK_F9D8C6E3_4DAD_4aa2_AD47_D0ABA4E93E08', 'package_name' => 'Rooms', 'phase' => '1.0', 'status' => 'Proposed', 'style' => 'BackColor=-1;BorderColor=-1;BorderWidth=-1;FontColor=-1;VSwimLanes=0;HSwimLanes=0;BorderStyle=0;', 'tagged' => '0', 'version' => '1.0'
+ end
+
+ class Bathroom < Room
+ annotation 'complexity' => '1', 'date_created' => '2006-06-27 08:32:25', 'date_modified' => '2006-06-27 08:34:23', 'ea_ntype' => '0', 'ea_stype' => 'Class', 'gentype' => 'Java', 'isSpecification' => 'false', 'package' => 'EAPK_F9D8C6E3_4DAD_4aa2_AD47_D0ABA4E93E08', 'package_name' => 'Rooms', 'phase' => '1.0', 'status' => 'Proposed', 'style' => 'BackColor=-1;BorderColor=-1;BorderWidth=-1;FontColor=-1;VSwimLanes=0;HSwimLanes=0;BorderStyle=0;', 'tagged' => '0', 'version' => '1.0'
+ end
+
+ class Kitchen < RGen::MetamodelBuilder::MMMultiple(HouseMetamodel::MeetingPlace, Room)
+ annotation 'complexity' => '1', 'date_created' => '2005-11-30 19:26:13', 'date_modified' => '2006-06-22 21:15:34', 'ea_ntype' => '0', 'ea_stype' => 'Class', 'gentype' => 'Java', 'isSpecification' => 'false', 'package' => 'EAPK_F9D8C6E3_4DAD_4aa2_AD47_D0ABA4E93E08', 'package_name' => 'Rooms', 'phase' => '1.0', 'status' => 'Proposed', 'style' => 'BackColor=-1;BorderColor=-1;BorderWidth=-1;FontColor=-1;VSwimLanes=0;HSwimLanes=0;BorderStyle=0;', 'tagged' => '0', 'version' => '1.0'
+ end
+
+ end
+
+ module DependingOnRooms
+ extend RGen::MetamodelBuilder::ModuleExtension
+ class RoomSub < Rooms::Room
+ end
+ end
+end
+
+HouseMetamodel::Person.has_many 'home', HouseMetamodel::House do
+ annotation 'containment' => 'Unspecified'
+end
+HouseMetamodel::House.has_one 'bathroom', HouseMetamodel::Rooms::Bathroom, :lowerBound => 1, :transient => true
+HouseMetamodel::House.one_to_one 'kitchen', HouseMetamodel::Rooms::Kitchen, 'house', :lowerBound => 1, :opposite_lowerBound => 1 do
+ annotation 'containment' => 'Unspecified'
+ opposite_annotation 'containment' => 'Unspecified'
+end
+HouseMetamodel::House.contains_many 'room', HouseMetamodel::Rooms::Room, 'house', :lowerBound => 1 do
+ # only an opposite annotation
+ opposite_annotation 'containment' => 'Unspecified'
+end
diff --git a/lib/puppet/vendor/rgen/test/metamodel_roundtrip_test/houseMetamodel.ecore b/lib/puppet/vendor/rgen/test/metamodel_roundtrip_test/houseMetamodel.ecore
new file mode 100644
index 000000000..6f5c01b03
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/metamodel_roundtrip_test/houseMetamodel.ecore
@@ -0,0 +1,42 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<ecore:EPackage xmi:version="2.0"
+ xmlns:xmi="http://www.omg.org/XMI" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+ xmlns:ecore="http://www.eclipse.org/emf/2002/Ecore" name="HouseMetamodel">
+ <eClassifiers xsi:type="ecore:EClass" name="House">
+ <eAnnotations source="bla">
+ <details key="a" value="b"/>
+ </eAnnotations>
+ <eStructuralFeatures xsi:type="ecore:EAttribute" name="address" eType="#//String"
+ changeable="false"/>
+ <eStructuralFeatures xsi:type="ecore:EReference" name="bathroom" lowerBound="1"
+ eType="#//Rooms/Bathroom"/>
+ <eStructuralFeatures xsi:type="ecore:EReference" name="kitchen" lowerBound="1"
+ eType="#//Rooms/Kitchen" eOpposite="#//Rooms/Kitchen/house"/>
+ <eStructuralFeatures xsi:type="ecore:EReference" name="room" upperBound="-1" eType="#//Rooms/Room"
+ containment="true" eOpposite="#//Rooms/Room/house"/>
+ </eClassifiers>
+ <eClassifiers xsi:type="ecore:EClass" name="MeetingPlace"/>
+ <eClassifiers xsi:type="ecore:EClass" name="Person">
+ <eStructuralFeatures xsi:type="ecore:EReference" name="house" upperBound="-1"
+ eType="#//House"/>
+ <eStructuralFeatures xsi:type="ecore:EAttribute" name="sex" eType="#//SexEnum"/>
+ <eStructuralFeatures xsi:type="ecore:EAttribute" name="id" eType="ecore:EDataType http://www.eclipse.org/emf/2002/Ecore#//ELong"/>
+ <eStructuralFeatures xsi:type="ecore:EAttribute" name="nicknames" upperBound="-1" eType="#//String"/>
+ </eClassifiers>
+ <eClassifiers xsi:type="ecore:EDataType" name="String" instanceClassName="java.lang.String"/>
+ <eClassifiers xsi:type="ecore:EEnum" name="SexEnum">
+ <eLiterals name="male"/>
+ <eLiterals name="female"/>
+ </eClassifiers>
+ <eSubpackages name="Rooms">
+ <eClassifiers xsi:type="ecore:EClass" name="Room">
+ <eStructuralFeatures xsi:type="ecore:EReference" name="house" eType="#//House"
+ defaultValueLiteral="" eOpposite="#//House/room"/>
+ </eClassifiers>
+ <eClassifiers xsi:type="ecore:EClass" name="Bathroom" eSuperTypes="#//Rooms/Room"/>
+ <eClassifiers xsi:type="ecore:EClass" name="Kitchen" eSuperTypes="#//Rooms/Room #//MeetingPlace">
+ <eStructuralFeatures xsi:type="ecore:EReference" name="house" eType="#//House"
+ eOpposite="#//House/kitchen"/>
+ </eClassifiers>
+ </eSubpackages>
+</ecore:EPackage>
diff --git a/lib/puppet/vendor/rgen/test/metamodel_roundtrip_test/houseMetamodel_from_ecore.rb b/lib/puppet/vendor/rgen/test/metamodel_roundtrip_test/houseMetamodel_from_ecore.rb
new file mode 100644
index 000000000..a7a1104d6
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/metamodel_roundtrip_test/houseMetamodel_from_ecore.rb
@@ -0,0 +1,44 @@
+require 'rgen/metamodel_builder'
+
+module HouseMetamodel
+ extend RGen::MetamodelBuilder::ModuleExtension
+ include RGen::MetamodelBuilder::DataTypes
+
+ SexEnum = Enum.new(:name => 'SexEnum', :literals =>[ :male, :female ])
+
+ class House < RGen::MetamodelBuilder::MMBase
+ annotation :source => "bla", :details => {'a' => 'b'}
+ has_attr 'address', String, :changeable => false
+ end
+
+ class MeetingPlace < RGen::MetamodelBuilder::MMBase
+ end
+
+ class Person < RGen::MetamodelBuilder::MMBase
+ has_attr 'sex', HouseMetamodel::SexEnum
+ has_attr 'id', Long
+ has_many_attr 'nicknames', String
+ end
+
+
+ module Rooms
+ extend RGen::MetamodelBuilder::ModuleExtension
+ include RGen::MetamodelBuilder::DataTypes
+
+
+ class Room < RGen::MetamodelBuilder::MMBase
+ end
+
+ class Bathroom < Room
+ end
+
+ class Kitchen < RGen::MetamodelBuilder::MMMultiple(Room, HouseMetamodel::MeetingPlace)
+ end
+
+ end
+end
+
+HouseMetamodel::House.has_one 'bathroom', HouseMetamodel::Rooms::Bathroom, :lowerBound => 1
+HouseMetamodel::House.one_to_one 'kitchen', HouseMetamodel::Rooms::Kitchen, 'house', :lowerBound => 1
+HouseMetamodel::House.contains_many 'room', HouseMetamodel::Rooms::Room, 'house'
+HouseMetamodel::Person.has_many 'house', HouseMetamodel::House
diff --git a/lib/puppet/vendor/rgen/test/metamodel_roundtrip_test/using_builtin_types.ecore b/lib/puppet/vendor/rgen/test/metamodel_roundtrip_test/using_builtin_types.ecore
new file mode 100644
index 000000000..2f93239c4
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/metamodel_roundtrip_test/using_builtin_types.ecore
@@ -0,0 +1,9 @@
+<ecore:EPackage name="P1" xmi:version="2.0" xmlns:xmi="http://www.omg.org/XMI" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ecore="http://www.eclipse.org/emf/2002/Ecore">
+ <eClassifiers name="C1" xsi:type="ecore:EClass">
+ <eStructuralFeatures iD="false" changeable="true" derived="false" transient="false" unsettable="false" volatile="false" lowerBound="0" ordered="true" unique="true" upperBound="1" name="a1" eType="ecore:EDataType http://www.eclipse.org/emf/2002/Ecore#//EString" xsi:type="ecore:EAttribute"/>
+ <eStructuralFeatures iD="false" changeable="true" derived="false" transient="false" unsettable="false" volatile="false" lowerBound="0" ordered="true" unique="true" upperBound="1" name="a2" eType="ecore:EDataType http://www.eclipse.org/emf/2002/Ecore#//EInt" xsi:type="ecore:EAttribute"/>
+ <eStructuralFeatures iD="false" changeable="true" derived="false" transient="false" unsettable="false" volatile="false" lowerBound="0" ordered="true" unique="true" upperBound="1" name="a3" eType="ecore:EDataType http://www.eclipse.org/emf/2002/Ecore#//ELong" xsi:type="ecore:EAttribute"/>
+ <eStructuralFeatures iD="false" changeable="true" derived="false" transient="false" unsettable="false" volatile="false" lowerBound="0" ordered="true" unique="true" upperBound="1" name="a4" eType="ecore:EDataType http://www.eclipse.org/emf/2002/Ecore#//EFloat" xsi:type="ecore:EAttribute"/>
+ <eStructuralFeatures iD="false" changeable="true" derived="false" transient="false" unsettable="false" volatile="false" lowerBound="0" ordered="true" unique="true" upperBound="1" name="a5" eType="ecore:EDataType http://www.eclipse.org/emf/2002/Ecore#//EBoolean" xsi:type="ecore:EAttribute"/>
+ </eClassifiers>
+</ecore:EPackage>
diff --git a/lib/puppet/vendor/rgen/test/method_delegation_test.rb b/lib/puppet/vendor/rgen/test/method_delegation_test.rb
new file mode 100644
index 000000000..9b540ea2b
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/method_delegation_test.rb
@@ -0,0 +1,178 @@
+$:.unshift File.dirname(__FILE__) + "/../lib"
+
+require 'test/unit'
+require 'rgen/util/method_delegation'
+
+class MethodDelegationTest < Test::Unit::TestCase
+ include RGen
+
+ class TestDelegate
+ attr_accessor :mode, :callcount
+ def common_delegated(delegator)
+ @callcount ||= 0
+ @callcount += 1
+ case @mode
+ when :continue
+ throw :continue
+ when :delegatorId
+ delegator.object_id
+ when :return7
+ 7
+ end
+ end
+ alias to_s_delegated common_delegated
+ alias methodInSingleton_delegated common_delegated
+ alias class_delegated common_delegated
+ alias artificialMethod_delegated common_delegated
+ end
+
+ class ConstPathElement < Module
+ def self.const_missing_delegated(delegator, const)
+ ConstPathElement.new(const)
+ end
+ def initialize(name, parent=nil)
+ @name = name.to_s
+ @parent = parent
+ end
+ def const_missing(const)
+ ConstPathElement.new(const, self)
+ end
+ def to_s
+ if @parent
+ @parent.to_s+"::"+@name
+ else
+ @name
+ end
+ end
+ end
+
+ # missing: check with multiple params and block param
+
+ def test_method_defined_in_singleton
+ # delegator is an Array
+ delegator = []
+ # delegating method is a method defined in the singleton class
+ class << delegator
+ def methodInSingleton
+ "result from method in singleton"
+ end
+ end
+ checkDelegation(delegator, "methodInSingleton", "result from method in singleton")
+ end
+
+ def test_method_defined_in_class
+ # delegator is a String
+ delegator = "Delegator1"
+ checkDelegation(delegator, "to_s", "Delegator1")
+ end
+
+ def test_method_defined_in_superclass
+ # delegator is an instance of a new anonymous class
+ delegator = Class.new.new
+    # delegating method is +class+ which is defined in the superclass
+ checkDelegation(delegator, "class", delegator.class)
+ end
+
+ def test_new_method
+    # delegator is a String
+ delegator = "Delegator2"
+ # delegating method is a new method which does not exist on String
+ checkDelegation(delegator, "artificialMethod", delegator.object_id, true)
+ end
+
+ def test_const_missing
+ surroundingModule = Module.nesting.first
+ Util::MethodDelegation.registerDelegate(ConstPathElement, surroundingModule, "const_missing")
+
+ assert_equal "SomeArbitraryConst", SomeArbitraryConst.to_s
+ assert_equal "AnotherConst::A::B::C", AnotherConst::A::B::C.to_s
+
+ Util::MethodDelegation.unregisterDelegate(ConstPathElement, surroundingModule, "const_missing")
+ assert_raise NameError do
+ SomeArbitraryConst
+ end
+ end
+
+ def checkDelegation(delegator, method, originalResult, newMethod=false)
+ delegate1 = TestDelegate.new
+ delegate2 = TestDelegate.new
+
+ Util::MethodDelegation.registerDelegate(delegate1, delegator, method)
+ Util::MethodDelegation.registerDelegate(delegate2, delegator, method)
+
+ assert delegator.respond_to?(:_methodDelegates)
+ if newMethod
+ assert !delegator.respond_to?("#{method}_delegate_original".to_sym)
+ else
+ assert delegator.respond_to?("#{method}_delegate_original".to_sym)
+ end
+
+ # check delegator parameter
+ delegate1.mode = :delegatorId
+ assert_equal delegator.object_id, delegator.send(method)
+
+ delegate1.callcount = 0
+ delegate2.callcount = 0
+
+ delegate1.mode = :return7
+ # delegate1 returns a value
+ assert_equal 7, delegator.send(method)
+ assert_equal 1, delegate1.callcount
+ # delegate2 is not called
+ assert_equal 0, delegate2.callcount
+
+ delegate1.mode = :nothing
+ # delegate1 just exits and thus returns nil
+ assert_equal nil, delegator.send(method)
+ assert_equal 2, delegate1.callcount
+ # delegate2 is not called
+ assert_equal 0, delegate2.callcount
+
+ delegate1.mode = :continue
+ delegate2.mode = :return7
+ # delegate1 is called but continues
+ # delegate2 returns a value
+ assert_equal 7, delegator.send(method)
+ assert_equal 3, delegate1.callcount
+ assert_equal 1, delegate2.callcount
+
+ delegate1.mode = :continue
+ delegate2.mode = :continue
+    # both delegates continue; the original method returns its value
+ checkCallOriginal(delegator, method, originalResult, newMethod)
+ # both delegates are called though
+ assert_equal 4, delegate1.callcount
+ assert_equal 2, delegate2.callcount
+
+    # calling unregister with a non-existing method has no effect
+ Util::MethodDelegation.unregisterDelegate(delegate1, delegator, "xxx")
+ Util::MethodDelegation.unregisterDelegate(delegate1, delegator, method)
+
+ checkCallOriginal(delegator, method, originalResult, newMethod)
+ # delegate1 not called any more
+ assert_equal 4, delegate1.callcount
+ # delegate2 is still called
+ assert_equal 3, delegate2.callcount
+
+ Util::MethodDelegation.unregisterDelegate(delegate2, delegator, method)
+
+ checkCallOriginal(delegator, method, originalResult, newMethod)
+ # both delegates not called any more
+ assert_equal 4, delegate1.callcount
+ assert_equal 3, delegate2.callcount
+
+ # after all delegates were unregistered, singleton class should be clean
+ assert !delegator.respond_to?(:_methodDelegates)
+ end
+
+ def checkCallOriginal(delegator, method, originalResult, newMethod)
+ if newMethod
+ assert_raise NoMethodError do
+ result = delegator.send(method)
+ end
+ else
+ result = delegator.send(method)
+ assert_equal originalResult, result
+ end
+ end
+end
diff --git a/lib/puppet/vendor/rgen/test/model_builder/builder_context_test.rb b/lib/puppet/vendor/rgen/test/model_builder/builder_context_test.rb
new file mode 100644
index 000000000..7cb62b3d5
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/model_builder/builder_context_test.rb
@@ -0,0 +1,59 @@
+$:.unshift File.dirname(__FILE__)+"/../lib"
+
+require 'test/unit'
+require 'rgen/ecore/ecore'
+require 'rgen/model_builder/builder_context'
+
+class BuilderContextTest < Test::Unit::TestCase
+
+ module BuilderExtension1
+ module PackageA
+ def inPackAExt
+ 3
+ end
+ module PackageB
+ def inPackBExt
+ 5
+ end
+ end
+ end
+ end
+
+ class BuilderContext
+ def inBuilderContext
+ 7
+ end
+ end
+
+ def test_extensionContainerFactory
+ aboveRoot = RGen::ECore::EPackage.new(:name => "AboveRoot")
+ root = RGen::ECore::EPackage.new(:name => "Root", :eSuperPackage => aboveRoot)
+ packageA = RGen::ECore::EPackage.new(:name => "PackageA", :eSuperPackage => root)
+ packageB = RGen::ECore::EPackage.new(:name => "PackageB", :eSuperPackage => packageA)
+ packageC = RGen::ECore::EPackage.new(:name => "PackageBC", :eSuperPackage => packageA)
+
+ factory = RGen::ModelBuilder::BuilderContext::ExtensionContainerFactory.new(root, BuilderExtension1, BuilderContext.new)
+
+ assert_equal BuilderExtension1::PackageA, factory.moduleForPackage(packageA)
+
+ packAExt = factory.extensionContainer(packageA)
+ assert packAExt.respond_to?(:inPackAExt)
+ assert !packAExt.respond_to?(:inPackBExt)
+ assert_equal 3, packAExt.inPackAExt
+ assert_equal 7, packAExt.inBuilderContext
+
+ assert_equal BuilderExtension1::PackageA::PackageB, factory.moduleForPackage(packageB)
+
+ packBExt = factory.extensionContainer(packageB)
+ assert !packBExt.respond_to?(:inPackAExt)
+ assert packBExt.respond_to?(:inPackBExt)
+ assert_equal 5, packBExt.inPackBExt
+ assert_equal 7, packBExt.inBuilderContext
+
+ assert_raise RuntimeError do
+ # aboveRoot is not contained within root
+ assert_nil factory.moduleForPackage(aboveRoot)
+ end
+ assert_nil factory.moduleForPackage(packageC)
+ end
+end
\ No newline at end of file
diff --git a/lib/puppet/vendor/rgen/test/model_builder/builder_test.rb b/lib/puppet/vendor/rgen/test/model_builder/builder_test.rb
new file mode 100644
index 000000000..ce4225f94
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/model_builder/builder_test.rb
@@ -0,0 +1,242 @@
+$:.unshift File.dirname(__FILE__) + "/../lib"
+
+require 'test/unit'
+require 'rgen/ecore/ecore'
+require 'rgen/ecore/ecore_builder_methods'
+require 'rgen/environment'
+require 'rgen/model_builder'
+require 'model_builder/statemachine_metamodel'
+
+class ModelBuilderTest < Test::Unit::TestCase
+
+ def test_statemachine
+ result = RGen::ModelBuilder.build(StatemachineMetamodel) do
+ statemachine "Airconditioner" do
+ state "Off", :kind => :START
+ compositeState "On" do
+ state "Heating" do
+ transition :as => :outgoingTransition, :targetState => "Cooling",
+ :statemachine => "Airconditioner"
+ end
+ state "Cooling" do
+ end
+ end
+ transition :sourceState => "On.Cooling", :targetState => "On.Heating" do
+ _using Condition::TimeCondition do
+ timeCondition :as => :condition, :timeout => 100
+ end
+ Condition::TimeCondition.timeCondition :as => :condition, :timeout => 10
+ end
+ end
+ _using Condition do
+ statemachine "AirconExtension" do
+ s = state "StartState"
+ transition :sourceState => s, :targetState => "Airconditioner.Off"
+ end
+ end
+ end
+
+ assert result.is_a?(Array)
+ assert_equal 2, result.size
+
+ sm1 = result[0]
+ assert sm1.is_a?(StatemachineMetamodel::Statemachine)
+ assert_equal "Airconditioner", sm1.name
+
+ assert_equal 2, sm1.state.size
+ offState = sm1.state[0]
+ assert offState.is_a?(StatemachineMetamodel::State)
+ assert_equal "Off", offState.name
+ assert_equal :START, offState.kind
+
+ onState = sm1.state[1]
+ assert onState.is_a?(StatemachineMetamodel::CompositeState)
+ assert_equal "On", onState.name
+
+ assert_equal 2, onState.state.size
+ hState = onState.state[0]
+ assert hState.is_a?(StatemachineMetamodel::State)
+ assert_equal "Heating", hState.name
+
+ cState = onState.state[1]
+ assert cState.is_a?(StatemachineMetamodel::State)
+ assert_equal "Cooling", cState.name
+
+ assert_equal 1, hState.outgoingTransition.size
+ hOutTrans = hState.outgoingTransition[0]
+ assert hOutTrans.is_a?(StatemachineMetamodel::Transition)
+ assert_equal cState, hOutTrans.targetState
+ assert_equal sm1, hOutTrans.statemachine
+
+ assert_equal 1, hState.incomingTransition.size
+ hInTrans = hState.incomingTransition[0]
+ assert hInTrans.is_a?(StatemachineMetamodel::Transition)
+ assert_equal cState, hInTrans.sourceState
+ assert_equal sm1, hInTrans.statemachine
+
+ assert_equal 2, hInTrans.condition.size
+ assert hInTrans.condition[0].is_a?(StatemachineMetamodel::Condition::TimeCondition::TimeCondition)
+ assert_equal 100, hInTrans.condition[0].timeout
+ assert hInTrans.condition[1].is_a?(StatemachineMetamodel::Condition::TimeCondition::TimeCondition)
+ assert_equal 10, hInTrans.condition[1].timeout
+
+ sm2 = result[1]
+ assert sm2.is_a?(StatemachineMetamodel::Statemachine)
+ assert_equal "AirconExtension", sm2.name
+
+ assert_equal 1, sm2.state.size
+ sState = sm2.state[0]
+ assert sState.is_a?(StatemachineMetamodel::State)
+ assert_equal "StartState", sState.name
+
+ assert_equal 1, sState.outgoingTransition.size
+ assert sState.outgoingTransition[0].is_a?(StatemachineMetamodel::Transition)
+ assert_equal offState, sState.outgoingTransition[0].targetState
+ assert_equal sm2, sState.outgoingTransition[0].statemachine
+ end
+
+ def test_dynamic
+ numStates = 5
+ env = RGen::Environment.new
+ result = RGen::ModelBuilder.build(StatemachineMetamodel, env) do
+ sm = statemachine "SM#{numStates}" do
+ (1..numStates).each do |i|
+ state "State#{i}" do
+ transition :as => :outgoingTransition, :targetState => "State#{i < numStates ? i+1 : 1}",
+ :statemachine => sm
+ end
+ end
+ end
+ end
+ assert_equal 11, env.elements.size
+ assert_equal "SM5", result[0].name
+ state = result[0].state.first
+ assert_equal "State1", state.name
+ state = state.outgoingTransition.first.targetState
+ assert_equal "State2", state.name
+ state = state.outgoingTransition.first.targetState
+ assert_equal "State3", state.name
+ state = state.outgoingTransition.first.targetState
+ assert_equal "State4", state.name
+ state = state.outgoingTransition.first.targetState
+ assert_equal "State5", state.name
+ assert_equal result[0].state[0], state.outgoingTransition.first.targetState
+ end
+
+ def test_multiref
+ result = RGen::ModelBuilder.build(StatemachineMetamodel) do
+ a = transition
+ transition "b"
+ transition "c"
+ state :outgoingTransition => [a, "b", "c"]
+ end
+
+ assert result[0].is_a?(StatemachineMetamodel::Transition)
+ assert result[1].is_a?(StatemachineMetamodel::Transition)
+ assert !result[1].respond_to?(:name)
+ assert result[2].is_a?(StatemachineMetamodel::Transition)
+ assert !result[2].respond_to?(:name)
+ state = result[3]
+ assert state.is_a?(StatemachineMetamodel::State)
+ assert_equal result[0], state.outgoingTransition[0]
+ assert_equal result[1], state.outgoingTransition[1]
+ assert_equal result[2], state.outgoingTransition[2]
+ end
+
+ module TestMetamodel
+ extend RGen::MetamodelBuilder::ModuleExtension
+
+    # these classes have no 'name' attribute
+ class TestA < RGen::MetamodelBuilder::MMBase
+ end
+ class TestB < RGen::MetamodelBuilder::MMBase
+ end
+ class TestC < RGen::MetamodelBuilder::MMBase
+ end
+ TestA.contains_many 'testB', TestB, 'testA'
+ TestC.has_one 'testB', TestB
+ end
+
+ def test_helper_names
+ result = RGen::ModelBuilder.build(TestMetamodel) do
+ testA "_a" do
+ testB "_b"
+ end
+ testC :testB => "_a._b"
+ end
+ assert result[0].is_a?(TestMetamodel::TestA)
+ assert result[1].is_a?(TestMetamodel::TestC)
+ assert_equal result[0].testB[0], result[1].testB
+ end
+
+ def test_ecore
+ result = RGen::ModelBuilder.build(RGen::ECore, nil, RGen::ECore::ECoreBuilderMethods) do
+ ePackage "TestPackage1" do
+ eClass "TestClass1" do
+ eAttribute "attr1", :eType => RGen::ECore::EString
+ eAttr "attr2", RGen::ECore::EInt
+ eBiRef "biRef1", "TestClass2", "testClass1"
+ contains_1toN 'testClass2', "TestClass2", "tc1Parent"
+ end
+ eClass "TestClass2" do
+ eRef "ref1", "TestClass1"
+ end
+ end
+ end
+
+ assert result.is_a?(Array)
+ assert_equal 1, result.size
+ p1 = result.first
+
+ assert p1.is_a?(RGen::ECore::EPackage)
+ assert_equal "TestPackage1", p1.name
+
+ # TestClass1
+ class1 = p1.eClassifiers.find{|c| c.name == "TestClass1"}
+ assert_not_nil class1
+ assert class1.is_a?(RGen::ECore::EClass)
+
+ # TestClass1.attr1
+ attr1 = class1.eAllAttributes.find{|a| a.name == "attr1"}
+ assert_not_nil attr1
+ assert_equal RGen::ECore::EString, attr1.eType
+
+ # TestClass1.attr2
+ attr2 = class1.eAllAttributes.find{|a| a.name == "attr2"}
+ assert_not_nil attr2
+ assert_equal RGen::ECore::EInt, attr2.eType
+
+ # TestClass2
+ class2 = p1.eClassifiers.find{|c| c.name == "TestClass2"}
+ assert_not_nil class2
+ assert class2.is_a?(RGen::ECore::EClass)
+
+ # TestClass2.ref1
+ ref1 = class2.eAllReferences.find{|a| a.name == "ref1"}
+ assert_not_nil ref1
+ assert_equal class1, ref1.eType
+
+ # TestClass1.biRef1
+ biRef1 = class1.eAllReferences.find{|r| r.name == "biRef1"}
+ assert_not_nil biRef1
+ assert_equal class2, biRef1.eType
+ biRef1Opp = class2.eAllReferences.find {|r| r.name == "testClass1"}
+ assert_not_nil biRef1Opp
+ assert_equal class1, biRef1Opp.eType
+ assert_equal biRef1Opp, biRef1.eOpposite
+ assert_equal biRef1, biRef1Opp.eOpposite
+
+ # TestClass1.testClass2
+ tc2Ref = class1.eAllReferences.find{|r| r.name == "testClass2"}
+ assert_not_nil tc2Ref
+ assert_equal class2, tc2Ref.eType
+ assert tc2Ref.containment
+ assert_equal -1, tc2Ref.upperBound
+ tc2RefOpp = class2.eAllReferences.find{|r| r.name == "tc1Parent"}
+ assert_not_nil tc2RefOpp
+ assert_equal class1, tc2RefOpp.eType
+ assert !tc2RefOpp.containment
+ assert_equal 1, tc2RefOpp.upperBound
+ end
+
+end
\ No newline at end of file
diff --git a/lib/puppet/vendor/rgen/test/model_builder/ecore_original.rb b/lib/puppet/vendor/rgen/test/model_builder/ecore_original.rb
new file mode 100644
index 000000000..bf3a1a5a1
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/model_builder/ecore_original.rb
@@ -0,0 +1,163 @@
+ePackage "ecore", :nsPrefix => "ecore", :nsURI => "http://www.eclipse.org/emf/2002/Ecore" do
+ eClass "EAttribute", :eSuperTypes => ["EStructuralFeature"] do
+ eAttribute "iD"
+ eReference "eAttributeType", :changeable => false, :derived => true, :transient => true, :volatile => true, :lowerBound => 1, :eType => "EDataType"
+ end
+ eClass "EAnnotation", :eSuperTypes => ["EModelElement"] do
+ eAttribute "source"
+ eReference "details", :containment => true, :resolveProxies => false, :upperBound => -1, :eType => "EStringToStringMapEntry"
+ eReference "eModelElement", :resolveProxies => false, :eOpposite => "EModelElement.eAnnotations", :transient => true, :eType => "EModelElement"
+ eReference "contents", :containment => true, :resolveProxies => false, :upperBound => -1, :eType => "EObject"
+ eReference "references", :upperBound => -1, :eType => "EObject"
+ end
+ eClass "EClass", :eSuperTypes => ["EClassifier"] do
+ eAttribute "abstract"
+ eAttribute "interface"
+ eReference "eSuperTypes", :unsettable => true, :upperBound => -1, :eType => "EClass"
+ eReference "eOperations", :containment => true, :resolveProxies => false, :eOpposite => "EOperation.eContainingClass", :upperBound => -1, :eType => "EOperation"
+ eReference "eAllAttributes", :changeable => false, :derived => true, :transient => true, :volatile => true, :upperBound => -1, :eType => "EAttribute"
+ eReference "eAllReferences", :changeable => false, :derived => true, :transient => true, :volatile => true, :upperBound => -1, :eType => "EReference"
+ eReference "eReferences", :changeable => false, :derived => true, :transient => true, :volatile => true, :upperBound => -1, :eType => "EReference"
+ eReference "eAttributes", :changeable => false, :derived => true, :transient => true, :volatile => true, :upperBound => -1, :eType => "EAttribute"
+ eReference "eAllContainments", :changeable => false, :derived => true, :transient => true, :volatile => true, :upperBound => -1, :eType => "EReference"
+ eReference "eAllOperations", :changeable => false, :derived => true, :transient => true, :volatile => true, :upperBound => -1, :eType => "EOperation"
+ eReference "eAllStructuralFeatures", :changeable => false, :derived => true, :transient => true, :volatile => true, :upperBound => -1, :eType => "EStructuralFeature"
+ eReference "eAllSuperTypes", :changeable => false, :derived => true, :transient => true, :volatile => true, :upperBound => -1, :eType => "EClass"
+ eReference "eIDAttribute", :resolveProxies => false, :changeable => false, :derived => true, :transient => true, :volatile => true, :eType => "EAttribute"
+ eReference "eStructuralFeatures", :containment => true, :resolveProxies => false, :eOpposite => "EStructuralFeature.eContainingClass", :upperBound => -1, :eType => "EStructuralFeature"
+ eReference "eGenericSuperTypes", :containment => true, :resolveProxies => false, :unsettable => true, :upperBound => -1, :eType => "EGenericType"
+ eReference "eAllGenericSuperTypes", :changeable => false, :derived => true, :transient => true, :volatile => true, :upperBound => -1, :eType => "EGenericType"
+ end
+ eClass "EClassifier", :abstract => true, :eSuperTypes => ["ENamedElement"], :eSubTypes => ["EClass", "EDataType"] do
+ eAttribute "instanceClassName", :unsettable => true, :volatile => true
+ eAttribute "instanceClass", :changeable => false, :derived => true, :transient => true, :volatile => true
+ eAttribute "defaultValue", :changeable => false, :derived => true, :transient => true, :volatile => true, :eType => "EJavaObject"
+ eAttribute "instanceTypeName", :unsettable => true, :volatile => true
+ eReference "ePackage", :eOpposite => "EPackage.eClassifiers", :changeable => false, :transient => true, :eType => "EPackage"
+ eReference "eTypeParameters", :containment => true, :upperBound => -1, :eType => "ETypeParameter"
+ end
+ eClass "EDataType", :eSuperTypes => ["EClassifier"], :eSubTypes => ["EEnum"] do
+ eAttribute "serializable", :defaultValueLiteral => "true"
+ end
+ eClass "EEnum", :eSuperTypes => ["EDataType"] do
+ eReference "eLiterals", :containment => true, :resolveProxies => false, :eOpposite => "EEnumLiteral.eEnum", :upperBound => -1, :eType => "EEnumLiteral"
+ end
+ eClass "EEnumLiteral", :eSuperTypes => ["ENamedElement"] do
+ eAttribute "value"
+ eAttribute "instance", :transient => true, :eType => "EEnumerator"
+ eAttribute "literal"
+ eReference "eEnum", :resolveProxies => false, :eOpposite => "EEnum.eLiterals", :changeable => false, :transient => true, :eType => "EEnum"
+ end
+ eClass "EFactory", :eSuperTypes => ["EModelElement"] do
+ eReference "ePackage", :resolveProxies => false, :eOpposite => "EPackage.eFactoryInstance", :transient => true, :lowerBound => 1, :eType => "EPackage"
+ end
+ eClass "EModelElement", :abstract => true, :eSuperTypes => ["EObject"], :eSubTypes => ["EAnnotation", "EFactory", "ENamedElement"] do
+ eReference "eAnnotations", :containment => true, :resolveProxies => false, :eOpposite => "EAnnotation.eModelElement", :upperBound => -1, :eType => "EAnnotation"
+ end
+ eClass "ENamedElement", :abstract => true, :eSuperTypes => ["EModelElement"], :eSubTypes => ["EClassifier", "EEnumLiteral", "EPackage", "ETypedElement", "ETypeParameter"] do
+ eAttribute "name"
+ end
+ eClass "EObject", :eSubTypes => ["EModelElement", "EGenericType"]
+ eClass "EOperation", :eSuperTypes => ["ETypedElement"] do
+ eReference "eContainingClass", :resolveProxies => false, :eOpposite => "EClass.eOperations", :changeable => false, :transient => true, :eType => "EClass"
+ eReference "eTypeParameters", :containment => true, :upperBound => -1, :eType => "ETypeParameter"
+ eReference "eParameters", :containment => true, :resolveProxies => false, :eOpposite => "EParameter.eOperation", :upperBound => -1, :eType => "EParameter"
+ eReference "eExceptions", :unsettable => true, :upperBound => -1, :eType => "EClassifier"
+ eReference "eGenericExceptions", :containment => true, :resolveProxies => false, :unsettable => true, :upperBound => -1, :eType => "EGenericType"
+ end
+ eClass "EPackage", :eSuperTypes => ["ENamedElement"] do
+ eAttribute "nsURI"
+ eAttribute "nsPrefix"
+ eReference "eFactoryInstance", :resolveProxies => false, :eOpposite => "EFactory.ePackage", :transient => true, :lowerBound => 1, :eType => "EFactory"
+ eReference "eClassifiers", :containment => true, :eOpposite => "EClassifier.ePackage", :upperBound => -1, :eType => "EClassifier"
+ eReference "eSubpackages", :containment => true, :eOpposite => "eSuperPackage", :upperBound => -1, :eType => "EPackage"
+ eReference "eSuperPackage", :eOpposite => "eSubpackages", :changeable => false, :transient => true, :eType => "EPackage"
+ end
+ eClass "EParameter", :eSuperTypes => ["ETypedElement"] do
+ eReference "eOperation", :resolveProxies => false, :eOpposite => "EOperation.eParameters", :changeable => false, :transient => true, :eType => "EOperation"
+ end
+ eClass "EReference", :eSuperTypes => ["EStructuralFeature"] do
+ eAttribute "containment"
+ eAttribute "container", :changeable => false, :derived => true, :transient => true, :volatile => true
+ eAttribute "resolveProxies", :defaultValueLiteral => "true"
+ eReference "eOpposite", :eType => "EReference"
+ eReference "eReferenceType", :changeable => false, :derived => true, :transient => true, :volatile => true, :lowerBound => 1, :eType => "EClass"
+ eReference "eKeys", :upperBound => -1, :eType => "EAttribute"
+ end
+ eClass "EStructuralFeature", :abstract => true, :eSuperTypes => ["ETypedElement"], :eSubTypes => ["EAttribute", "EReference"] do
+ eAttribute "changeable", :defaultValueLiteral => "true"
+ eAttribute "volatile"
+ eAttribute "transient"
+ eAttribute "defaultValueLiteral"
+ eAttribute "defaultValue", :changeable => false, :derived => true, :transient => true, :volatile => true, :eType => "EJavaObject"
+ eAttribute "unsettable"
+ eAttribute "derived"
+ eReference "eContainingClass", :resolveProxies => false, :eOpposite => "EClass.eStructuralFeatures", :changeable => false, :transient => true, :eType => "EClass"
+ end
+ eClass "ETypedElement", :abstract => true, :eSuperTypes => ["ENamedElement"], :eSubTypes => ["EOperation", "EParameter", "EStructuralFeature"] do
+ eAttribute "ordered", :defaultValueLiteral => "true"
+ eAttribute "unique", :defaultValueLiteral => "true"
+ eAttribute "lowerBound"
+ eAttribute "upperBound", :defaultValueLiteral => "1"
+ eAttribute "many", :changeable => false, :derived => true, :transient => true, :volatile => true
+ eAttribute "required", :changeable => false, :derived => true, :transient => true, :volatile => true
+ eReference "eType", :unsettable => true, :volatile => true, :eType => "EClassifier"
+ eReference "eGenericType", :containment => true, :resolveProxies => false, :unsettable => true, :volatile => true, :eType => "EGenericType"
+ end
+ eDataType "EBigDecimal", :instanceClassName => "java.math.BigDecimal"
+ eDataType "EBigInteger", :instanceClassName => "java.math.BigInteger"
+ eDataType "EBoolean", :instanceClassName => "boolean"
+ eDataType "EBooleanObject", :instanceClassName => "java.lang.Boolean"
+ eDataType "EByte", :instanceClassName => "byte"
+ eDataType "EByteArray", :instanceClassName => "byte[]"
+ eDataType "EByteObject", :instanceClassName => "java.lang.Byte"
+ eDataType "EChar", :instanceClassName => "char"
+ eDataType "ECharacterObject", :instanceClassName => "java.lang.Character"
+ eDataType "EDate", :instanceClassName => "java.util.Date"
+ eDataType "EDiagnosticChain", :serializable => false, :instanceClassName => "org.eclipse.emf.common.util.DiagnosticChain"
+ eDataType "EDouble", :instanceClassName => "double"
+ eDataType "EDoubleObject", :instanceClassName => "java.lang.Double"
+ eDataType "EEList", :serializable => false, :instanceClassName => "org.eclipse.emf.common.util.EList" do
+ eTypeParameter "E"
+ end
+ eDataType "EEnumerator", :serializable => false, :instanceClassName => "org.eclipse.emf.common.util.Enumerator"
+ eDataType "EFeatureMap", :serializable => false, :instanceClassName => "org.eclipse.emf.ecore.util.FeatureMap"
+ eDataType "EFeatureMapEntry", :serializable => false, :instanceClassName => "org.eclipse.emf.ecore.util.FeatureMap$Entry"
+ eDataType "EFloat", :instanceClassName => "float"
+ eDataType "EFloatObject", :instanceClassName => "java.lang.Float"
+ eDataType "EInt", :instanceClassName => "int"
+ eDataType "EIntegerObject", :instanceClassName => "java.lang.Integer"
+ eDataType "EJavaClass", :instanceClassName => "java.lang.Class" do
+ eTypeParameter "T"
+ end
+ eDataType "EJavaObject", :instanceClassName => "java.lang.Object"
+ eDataType "ELong", :instanceClassName => "long"
+ eDataType "ELongObject", :instanceClassName => "java.lang.Long"
+ eDataType "EMap", :serializable => false, :instanceClassName => "java.util.Map" do
+ eTypeParameter "K"
+ eTypeParameter "V"
+ end
+ eDataType "EResource", :serializable => false, :instanceClassName => "org.eclipse.emf.ecore.resource.Resource"
+ eDataType "EResourceSet", :serializable => false, :instanceClassName => "org.eclipse.emf.ecore.resource.ResourceSet"
+ eDataType "EShort", :instanceClassName => "short"
+ eDataType "EShortObject", :instanceClassName => "java.lang.Short"
+ eDataType "EString", :instanceClassName => "java.lang.String"
+ eClass "EStringToStringMapEntry", :instanceClassName => "java.util.Map$Entry" do
+ eAttribute "key"
+ eAttribute "value"
+ end
+ eDataType "ETreeIterator", :serializable => false, :instanceClassName => "org.eclipse.emf.common.util.TreeIterator" do
+ eTypeParameter "E"
+ end
+ eClass "EGenericType", :eSuperTypes => ["EObject"] do
+ eReference "eUpperBound", :containment => true, :resolveProxies => false, :eType => "EGenericType"
+ eReference "eTypeArguments", :containment => true, :resolveProxies => false, :upperBound => -1, :eType => "EGenericType"
+ eReference "eRawType", :changeable => false, :derived => true, :transient => true, :lowerBound => 1, :eType => "EClassifier"
+ eReference "eLowerBound", :containment => true, :resolveProxies => false, :eType => "EGenericType"
+ eReference "eTypeParameter", :resolveProxies => false, :eType => "ETypeParameter"
+ eReference "eClassifier", :eType => "EClassifier"
+ end
+ eClass "ETypeParameter", :eSuperTypes => ["ENamedElement"] do
+ eReference "eBounds", :containment => true, :resolveProxies => false, :upperBound => -1, :eType => "EGenericType"
+ end
+end
diff --git a/lib/puppet/vendor/rgen/test/model_builder/ecore_original_regenerated.rb b/lib/puppet/vendor/rgen/test/model_builder/ecore_original_regenerated.rb
new file mode 100644
index 000000000..1eda80f6c
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/model_builder/ecore_original_regenerated.rb
@@ -0,0 +1,163 @@
+ePackage "ecore", :nsPrefix => "ecore", :nsURI => "http://www.eclipse.org/emf/2002/Ecore" do
+ eClass "EAttribute" do
+ eAttribute "iD"
+ eReference "eAttributeType", :changeable => false, :derived => true, :transient => true, :volatile => true, :lowerBound => 1
+ end
+ eClass "EAnnotation" do
+ eAttribute "source"
+ eReference "details", :containment => true, :resolveProxies => false, :upperBound => -1
+ eReference "eModelElement", :resolveProxies => false, :transient => true
+ eReference "contents", :containment => true, :resolveProxies => false, :upperBound => -1
+ eReference "references", :upperBound => -1
+ end
+ eClass "EClass" do
+ eAttribute "abstract"
+ eAttribute "interface"
+ eReference "eSuperTypes", :unsettable => true, :upperBound => -1
+ eReference "eOperations", :containment => true, :resolveProxies => false, :upperBound => -1
+ eReference "eAllAttributes", :changeable => false, :derived => true, :transient => true, :volatile => true, :upperBound => -1
+ eReference "eAllReferences", :changeable => false, :derived => true, :transient => true, :volatile => true, :upperBound => -1
+ eReference "eReferences", :changeable => false, :derived => true, :transient => true, :volatile => true, :upperBound => -1
+ eReference "eAttributes", :changeable => false, :derived => true, :transient => true, :volatile => true, :upperBound => -1
+ eReference "eAllContainments", :changeable => false, :derived => true, :transient => true, :volatile => true, :upperBound => -1
+ eReference "eAllOperations", :changeable => false, :derived => true, :transient => true, :volatile => true, :upperBound => -1
+ eReference "eAllStructuralFeatures", :changeable => false, :derived => true, :transient => true, :volatile => true, :upperBound => -1
+ eReference "eAllSuperTypes", :changeable => false, :derived => true, :transient => true, :volatile => true, :upperBound => -1
+ eReference "eIDAttribute", :resolveProxies => false, :changeable => false, :derived => true, :transient => true, :volatile => true
+ eReference "eStructuralFeatures", :containment => true, :resolveProxies => false, :upperBound => -1
+ eReference "eGenericSuperTypes", :containment => true, :resolveProxies => false, :unsettable => true, :upperBound => -1
+ eReference "eAllGenericSuperTypes", :changeable => false, :derived => true, :transient => true, :volatile => true, :upperBound => -1
+ end
+ eClass "EClassifier", :abstract => true do
+ eAttribute "instanceClassName", :unsettable => true, :volatile => true
+ eAttribute "instanceClass", :changeable => false, :derived => true, :transient => true, :volatile => true
+ eAttribute "defaultValue", :changeable => false, :derived => true, :transient => true, :volatile => true
+ eAttribute "instanceTypeName", :unsettable => true, :volatile => true
+ eReference "ePackage", :changeable => false, :transient => true
+ eReference "eTypeParameters", :containment => true, :upperBound => -1
+ end
+ eClass "EDataType" do
+ eAttribute "serializable", :defaultValueLiteral => "true"
+ end
+ eClass "EEnum" do
+ eReference "eLiterals", :containment => true, :resolveProxies => false, :upperBound => -1
+ end
+ eClass "EEnumLiteral" do
+ eAttribute "value"
+ eAttribute "instance", :transient => true
+ eAttribute "literal"
+ eReference "eEnum", :resolveProxies => false, :changeable => false, :transient => true
+ end
+ eClass "EFactory" do
+ eReference "ePackage", :resolveProxies => false, :transient => true, :lowerBound => 1
+ end
+ eClass "EModelElement", :abstract => true do
+ eReference "eAnnotations", :containment => true, :resolveProxies => false, :upperBound => -1
+ end
+ eClass "ENamedElement", :abstract => true do
+ eAttribute "name"
+ end
+ eClass "EObject"
+ eClass "EOperation" do
+ eReference "eContainingClass", :resolveProxies => false, :changeable => false, :transient => true
+ eReference "eTypeParameters", :containment => true, :upperBound => -1
+ eReference "eParameters", :containment => true, :resolveProxies => false, :upperBound => -1
+ eReference "eExceptions", :unsettable => true, :upperBound => -1
+ eReference "eGenericExceptions", :containment => true, :resolveProxies => false, :unsettable => true, :upperBound => -1
+ end
+ eClass "EPackage" do
+ eAttribute "nsURI"
+ eAttribute "nsPrefix"
+ eReference "eFactoryInstance", :resolveProxies => false, :transient => true, :lowerBound => 1
+ eReference "eClassifiers", :containment => true, :upperBound => -1
+ eReference "eSubpackages", :containment => true, :upperBound => -1
+ eReference "eSuperPackage", :changeable => false, :transient => true
+ end
+ eClass "EParameter" do
+ eReference "eOperation", :resolveProxies => false, :changeable => false, :transient => true
+ end
+ eClass "EReference" do
+ eAttribute "containment"
+ eAttribute "container", :changeable => false, :derived => true, :transient => true, :volatile => true
+ eAttribute "resolveProxies", :defaultValueLiteral => "true"
+ eReference "eOpposite"
+ eReference "eReferenceType", :changeable => false, :derived => true, :transient => true, :volatile => true, :lowerBound => 1
+ eReference "eKeys", :upperBound => -1
+ end
+ eClass "EStructuralFeature", :abstract => true do
+ eAttribute "changeable", :defaultValueLiteral => "true"
+ eAttribute "volatile"
+ eAttribute "transient"
+ eAttribute "defaultValueLiteral"
+ eAttribute "defaultValue", :changeable => false, :derived => true, :transient => true, :volatile => true
+ eAttribute "unsettable"
+ eAttribute "derived"
+ eReference "eContainingClass", :resolveProxies => false, :changeable => false, :transient => true
+ end
+ eClass "ETypedElement", :abstract => true do
+ eAttribute "ordered", :defaultValueLiteral => "true"
+ eAttribute "unique", :defaultValueLiteral => "true"
+ eAttribute "lowerBound"
+ eAttribute "upperBound", :defaultValueLiteral => "1"
+ eAttribute "many", :changeable => false, :derived => true, :transient => true, :volatile => true
+ eAttribute "required", :changeable => false, :derived => true, :transient => true, :volatile => true
+ eReference "eType", :unsettable => true, :volatile => true
+ eReference "eGenericType", :containment => true, :resolveProxies => false, :unsettable => true, :volatile => true
+ end
+ eDataType "EBigDecimal", :instanceClassName => "java.math.BigDecimal"
+ eDataType "EBigInteger", :instanceClassName => "java.math.BigInteger"
+ eDataType "EBoolean", :instanceClassName => "boolean"
+ eDataType "EBooleanObject", :instanceClassName => "java.lang.Boolean"
+ eDataType "EByte", :instanceClassName => "byte"
+ eDataType "EByteArray", :instanceClassName => "byte[]"
+ eDataType "EByteObject", :instanceClassName => "java.lang.Byte"
+ eDataType "EChar", :instanceClassName => "char"
+ eDataType "ECharacterObject", :instanceClassName => "java.lang.Character"
+ eDataType "EDate", :instanceClassName => "java.util.Date"
+ eDataType "EDiagnosticChain", :serializable => false, :instanceClassName => "org.eclipse.emf.common.util.DiagnosticChain"
+ eDataType "EDouble", :instanceClassName => "double"
+ eDataType "EDoubleObject", :instanceClassName => "java.lang.Double"
+ eDataType "EEList", :serializable => false, :instanceClassName => "org.eclipse.emf.common.util.EList" do
+ eTypeParameter "E"
+ end
+ eDataType "EEnumerator", :serializable => false, :instanceClassName => "org.eclipse.emf.common.util.Enumerator"
+ eDataType "EFeatureMap", :serializable => false, :instanceClassName => "org.eclipse.emf.ecore.util.FeatureMap"
+ eDataType "EFeatureMapEntry", :serializable => false, :instanceClassName => "org.eclipse.emf.ecore.util.FeatureMap$Entry"
+ eDataType "EFloat", :instanceClassName => "float"
+ eDataType "EFloatObject", :instanceClassName => "java.lang.Float"
+ eDataType "EInt", :instanceClassName => "int"
+ eDataType "EIntegerObject", :instanceClassName => "java.lang.Integer"
+ eDataType "EJavaClass", :instanceClassName => "java.lang.Class" do
+ eTypeParameter "T"
+ end
+ eDataType "EJavaObject", :instanceClassName => "java.lang.Object"
+ eDataType "ELong", :instanceClassName => "long"
+ eDataType "ELongObject", :instanceClassName => "java.lang.Long"
+ eDataType "EMap", :serializable => false, :instanceClassName => "java.util.Map" do
+ eTypeParameter "K"
+ eTypeParameter "V"
+ end
+ eDataType "EResource", :serializable => false, :instanceClassName => "org.eclipse.emf.ecore.resource.Resource"
+ eDataType "EResourceSet", :serializable => false, :instanceClassName => "org.eclipse.emf.ecore.resource.ResourceSet"
+ eDataType "EShort", :instanceClassName => "short"
+ eDataType "EShortObject", :instanceClassName => "java.lang.Short"
+ eDataType "EString", :instanceClassName => "java.lang.String"
+ eClass "EStringToStringMapEntry", :instanceClassName => "java.util.Map$Entry" do
+ eAttribute "key"
+ eAttribute "value"
+ end
+ eDataType "ETreeIterator", :serializable => false, :instanceClassName => "org.eclipse.emf.common.util.TreeIterator" do
+ eTypeParameter "E"
+ end
+ eClass "EGenericType" do
+ eReference "eUpperBound", :containment => true, :resolveProxies => false
+ eReference "eTypeArguments", :containment => true, :resolveProxies => false, :upperBound => -1
+ eReference "eRawType", :changeable => false, :derived => true, :transient => true, :lowerBound => 1
+ eReference "eLowerBound", :containment => true, :resolveProxies => false
+ eReference "eTypeParameter", :resolveProxies => false
+ eReference "eClassifier"
+ end
+ eClass "ETypeParameter" do
+ eReference "eBounds", :containment => true, :resolveProxies => false, :upperBound => -1
+ end
+end
diff --git a/lib/puppet/vendor/rgen/test/model_builder/reference_resolver_test.rb b/lib/puppet/vendor/rgen/test/model_builder/reference_resolver_test.rb
new file mode 100644
index 000000000..099335340
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/model_builder/reference_resolver_test.rb
@@ -0,0 +1,156 @@
+$:.unshift File.dirname(__FILE__)+"/../lib"
+
+require 'test/unit'
+require 'rgen/metamodel_builder'
+require 'rgen/model_builder/reference_resolver'
+
+class ReferenceResolverTest < Test::Unit::TestCase
+
+ class ClassA < RGen::MetamodelBuilder::MMBase
+ has_attr "name"
+ end
+
+ class ClassB < RGen::MetamodelBuilder::MMBase
+ has_attr "name"
+ end
+
+ class ClassC < RGen::MetamodelBuilder::MMBase
+ has_attr "name"
+ end
+
+ ClassA.contains_many 'childB', ClassB, 'parentA'
+ ClassB.contains_many 'childC', ClassC, 'parentB'
+ ClassA.has_one 'refC', ClassC
+ ClassB.has_one 'refC', ClassC
+ ClassC.has_many 'refCs', ClassC
+ ClassC.has_one 'refA', ClassA
+ ClassC.has_one 'refB', ClassB
+
+ def testModel
+ a1 = ClassA.new(:name => "a1")
+ a2 = ClassA.new(:name => "a2")
+ b1 = ClassB.new(:name => "b1", :parentA => a1)
+ b2 = ClassB.new(:name => "b2", :parentA => a1)
+ c1 = ClassC.new(:name => "c1", :parentB => b1)
+ c2 = ClassC.new(:name => "c2", :parentB => b1)
+ c3 = ClassC.new(:name => "c3", :parentB => b1)
+ [a1, a2, b1, b2, c1, c2, c3]
+ end
+
+ def setElementNames(resolver, elements)
+ elements.each do |e|
+ resolver.setElementName(e, e.name)
+ end
+ end
+
+ def createJob(hash)
+ raise "Invalid arguments" unless \
+ hash.is_a?(Hash) && (hash.keys & [:receiver, :reference, :namespace, :string]).size == 4
+ RGen::ModelBuilder::ReferenceResolver::ResolverJob.new(
+ hash[:receiver], hash[:reference], hash[:namespace], hash[:string])
+ end
+
+ def test_resolve_same_namespace
+ a1, a2, b1, b2, c1, c2, c3 = testModel
+
+ toplevelNamespace = [a1, a2]
+ resolver = RGen::ModelBuilder::ReferenceResolver.new
+ setElementNames(resolver, [a1, a2, b1, b2, c1, c2, c3])
+ resolver.addJob(createJob(
+ :receiver => c2,
+ :reference => ClassC.ecore.eReferences.find{|r| r.name == "refCs"},
+ :namespace => b1,
+ :string => "c1"))
+ resolver.addJob(createJob(
+ :receiver => b2,
+ :reference => ClassB.ecore.eReferences.find{|r| r.name == "refC"},
+ :namespace => a1,
+ :string => "b1.c1"))
+ resolver.addJob(createJob(
+ :receiver => a2,
+ :reference => ClassA.ecore.eReferences.find{|r| r.name == "refC"},
+ :namespace => nil,
+ :string => "a1.b1.c1"))
+ resolver.resolve(toplevelNamespace)
+
+ assert_equal [c1], c2.refCs
+ assert_equal c1, b2.refC
+ assert_equal c1, a2.refC
+ end
+
+ def test_resolve_parent_namespace
+ a1, a2, b1, b2, c1, c2, c3 = testModel
+
+ toplevelNamespace = [a1, a2]
+ resolver = RGen::ModelBuilder::ReferenceResolver.new
+ setElementNames(resolver, [a1, a2, b1, b2, c1, c2, c3])
+ resolver.addJob(createJob(
+ :receiver => c2,
+ :reference => ClassC.ecore.eReferences.find{|r| r.name == "refA"},
+ :namespace => b1,
+ :string => "a1"))
+ resolver.addJob(createJob(
+ :receiver => c2,
+ :reference => ClassC.ecore.eReferences.find{|r| r.name == "refB"},
+ :namespace => b1,
+ :string => "b1"))
+ resolver.addJob(createJob(
+ :receiver => c2,
+ :reference => ClassC.ecore.eReferences.find{|r| r.name == "refCs"},
+ :namespace => b1,
+ :string => "b1.c1"))
+ resolver.addJob(createJob(
+ :receiver => c2,
+ :reference => ClassC.ecore.eReferences.find{|r| r.name == "refCs"},
+ :namespace => b1,
+ :string => "a1.b1.c3"))
+ resolver.resolve(toplevelNamespace)
+
+ assert_equal a1, c2.refA
+ assert_equal b1, c2.refB
+ assert_equal [c1, c3], c2.refCs
+ end
+
+ def test_resolve_faulty
+ a1, a2, b1, b2, c1, c2, c3 = testModel
+
+ toplevelNamespace = [a1, a2]
+ resolver = RGen::ModelBuilder::ReferenceResolver.new
+ setElementNames(resolver, [a1, a2, b1, b2, c1, c2, c3])
+ resolver.addJob(createJob(
+ :receiver => c2,
+ :reference => ClassC.ecore.eReferences.find{|r| r.name == "refCs"},
+ :namespace => b1,
+ :string => "b1.c5"))
+ assert_raise RGen::ModelBuilder::ReferenceResolver::ResolverException do
+ resolver.resolve(toplevelNamespace)
+ end
+ end
+
+ def test_ambiguous_prefix
+ a = ClassA.new(:name => "name1")
+ b1 = ClassB.new(:name => "name1", :parentA => a)
+ b2 = ClassB.new(:name => "target", :parentA => a)
+ c1 = ClassC.new(:name => "name21", :parentB => b1)
+ c2 = ClassC.new(:name => "name22", :parentB => b1)
+
+ toplevelNamespace = [a]
+ resolver = RGen::ModelBuilder::ReferenceResolver.new
+ setElementNames(resolver, [a, b1, b2, c1, c2])
+ resolver.addJob(createJob(
+ :receiver => c2,
+ :reference => ClassC.ecore.eReferences.find{|r| r.name == "refCs"},
+ :namespace => b1,
+ :string => "name1.name1.name21"))
+ resolver.addJob(createJob(
+ :receiver => c2,
+ :reference => ClassC.ecore.eReferences.find{|r| r.name == "refB"},
+ :namespace => b1,
+ :string => "name1.target"))
+ resolver.resolve(toplevelNamespace)
+
+ assert_equal [c1], c2.refCs
+ assert_equal b2, c2.refB
+ end
+
+end
\ No newline at end of file
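Illustrative sketch (not part of the diff above): the test registers qualified names and queues resolver jobs explicitly; distilled to its essentials, and reusing the ClassA/ClassB/ClassC metamodel and the testModel elements (a1, a2, b1, b2, c1, c2, c3) defined in the test, the flow looks roughly like this.
````
# Illustrative only: assumes the ClassA/B/C metamodel and testModel elements above.
resolver = RGen::ModelBuilder::ReferenceResolver.new

# Every element that can be addressed by name is registered first.
[a1, a2, b1, b2, c1, c2, c3].each { |e| resolver.setElementName(e, e.name) }

# A job records: receiver element, ecore reference to set, namespace to search
# from, and the (possibly dotted) name string to resolve.
resolver.addJob(RGen::ModelBuilder::ReferenceResolver::ResolverJob.new(
  c2,
  ClassC.ecore.eReferences.find { |r| r.name == "refCs" },
  b1,
  "c1"))

# Resolution starts from the toplevel namespace; afterwards c2.refCs == [c1].
resolver.resolve([a1, a2])
````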
diff --git a/lib/puppet/vendor/rgen/test/model_builder/serializer_test.rb b/lib/puppet/vendor/rgen/test/model_builder/serializer_test.rb
new file mode 100644
index 000000000..46eab3d05
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/model_builder/serializer_test.rb
@@ -0,0 +1,94 @@
+$:.unshift File.dirname(__FILE__) + "/../lib"
+
+require 'test/unit'
+require 'rgen/ecore/ecore'
+
+# The following would also influence other tests...
+#
+#module RGen::ECore
+# class EGenericType < EObject
+# contains_many_uni 'eTypeArguments', EGenericType
+# end
+# class ETypeParameter < ENamedElement
+# end
+# class EClassifier
+# contains_many_uni 'eTypeParameters', ETypeParameter
+# end
+# class ETypedElement
+# has_one 'eGenericType', EGenericType
+# end
+#end
+#
+#RGen::ECore::ECoreInterface.clear_ecore_cache
+#RGen::ECore::EString.ePackage = RGen::ECore.ecore
+
+require 'rgen/environment'
+require 'rgen/model_builder/model_serializer'
+require 'rgen/instantiator/ecore_xml_instantiator'
+require 'rgen/model_builder'
+require 'model_builder/statemachine_metamodel'
+
+class ModelSerializerTest < Test::Unit::TestCase
+ def test_ecore_internal
+ File.open(File.dirname(__FILE__)+"/ecore_internal.rb","w") do |f|
+ serializer = RGen::ModelBuilder::ModelSerializer.new(f, RGen::ECore.ecore)
+ serializer.serialize(RGen::ECore.ecore)
+ end
+ end
+
+ def test_roundtrip
+ model = %{\
+statemachine "Airconditioner" do
+ state "Off", :kind => :START
+ compositeState "On" do
+ state "Heating"
+ state "Cooling"
+ state "Dumm"
+ end
+ transition "_Transition1", :sourceState => "On.Cooling", :targetState => "On.Heating"
+ transition "_Transition2", :sourceState => "On.Heating", :targetState => "On.Cooling"
+end
+}
+ check_roundtrip(StatemachineMetamodel, model)
+ end
+
+ module AmbiguousRoleMM
+ extend RGen::MetamodelBuilder::ModuleExtension
+ class A < RGen::MetamodelBuilder::MMBase
+ end
+ class B < RGen::MetamodelBuilder::MMBase
+ end
+ class C < B
+ end
+ A.contains_many 'role1', B, 'back1'
+ A.contains_many 'role2', B, 'back2'
+ end
+
+ def test_roundtrip_ambiguous_role
+ model = %{\
+a "_A1" do
+ b "_B1", :as => :role1
+ b "_B2", :as => :role2
+ c "_C1", :as => :role2
+end
+}
+ check_roundtrip(AmbiguousRoleMM, model)
+ end
+
+ private
+
+ def build_model(mm, model)
+ RGen::ModelBuilder.build(mm) do
+ eval(model)
+ end
+ end
+
+ def check_roundtrip(mm, model)
+ sm = build_model(mm, model)
+ f = StringIO.new
+ serializer = RGen::ModelBuilder::ModelSerializer.new(f, mm.ecore)
+ serializer.serialize(sm)
+ assert_equal model, f.string
+ end
+
+end
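Illustrative sketch (not part of the diff above): check_roundtrip builds a model from builder-DSL text, serializes it back, and compares the two strings. The same idea, written out directly and assuming the requires from the test file and StatemachineMetamodel are loaded:
````
require 'stringio'

# Build a model via the builder DSL (the same API the test drives through eval).
sm = RGen::ModelBuilder.build(StatemachineMetamodel) do
  statemachine "Airconditioner" do
    state "Off", :kind => :START
  end
end

# Serialize it back to builder-DSL text against the metamodel's ecore model.
out = StringIO.new
RGen::ModelBuilder::ModelSerializer.new(out, StatemachineMetamodel.ecore).serialize(sm)
puts out.string   # should reproduce the DSL that built the model
````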
diff --git a/lib/puppet/vendor/rgen/test/model_builder/statemachine_metamodel.rb b/lib/puppet/vendor/rgen/test/model_builder/statemachine_metamodel.rb
new file mode 100644
index 000000000..f7a42cb0e
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/model_builder/statemachine_metamodel.rb
@@ -0,0 +1,42 @@
+# a test metamodel used by the following tests
+module StatemachineMetamodel
+ extend RGen::MetamodelBuilder::ModuleExtension
+
+ module Condition
+ extend RGen::MetamodelBuilder::ModuleExtension
+
+ class Condition < RGen::MetamodelBuilder::MMBase
+ end
+
+ module TimeCondition
+ extend RGen::MetamodelBuilder::ModuleExtension
+
+ class TimeCondition < Condition
+ has_attr 'timeout', Integer
+ end
+ end
+ end
+
+ class Statemachine < RGen::MetamodelBuilder::MMBase
+ has_attr 'name'
+ end
+
+ class State < RGen::MetamodelBuilder::MMBase
+ has_attr 'name'
+ has_attr 'kind', RGen::MetamodelBuilder::DataTypes::Enum.new([:START])
+ end
+
+ class CompositeState < State
+ has_attr 'name'
+ contains_many 'state', State, 'compositeState'
+ end
+
+ class Transition < RGen::MetamodelBuilder::MMBase
+ many_to_one 'sourceState', State, 'outgoingTransition'
+ many_to_one 'targetState', State, 'incomingTransition'
+ has_many 'condition', Condition::Condition
+ end
+
+ Statemachine.contains_many 'state', State, 'statemachine'
+ Statemachine.contains_many 'transition', Transition, 'statemachine'
+end
diff --git a/lib/puppet/vendor/rgen/test/model_builder/test_model/statemachine1.rb b/lib/puppet/vendor/rgen/test/model_builder/test_model/statemachine1.rb
new file mode 100644
index 000000000..9f44b7018
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/model_builder/test_model/statemachine1.rb
@@ -0,0 +1,23 @@
+statemachine "Airconditioner" do
+ state "Off", :kind => :START
+ compositeState "On" do
+ state "Heating" do
+ transition :as => :outgoingTransition, :targetState => "Cooling",
+ :statemachine => "Airconditioner"
+ end
+ state "Cooling" do
+ end
+ end
+ transition :sourceState => "On.Cooling", :targetState => "On.Heating" do
+ _using Condition::TimeCondition do
+ timeCondition :as => :condition, :timeout => 100
+ end
+ Condition::TimeCondition.timeCondition :as => :condition, :timeout => 10
+ end
+end
+_using Condition do
+ statemachine "AirconExtension" do
+ s = state "StartState"
+ transition :sourceState => s, :targetState => "Airconditioner.Off"
+ end
+end
diff --git a/lib/puppet/vendor/rgen/test/model_builder_test.rb b/lib/puppet/vendor/rgen/test/model_builder_test.rb
new file mode 100644
index 000000000..a8e550c88
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/model_builder_test.rb
@@ -0,0 +1,6 @@
+$:.unshift File.dirname(__FILE__) + "/../lib"
+
+require 'model_builder/builder_test'
+require 'model_builder/serializer_test'
+require 'model_builder/builder_context_test'
+require 'model_builder/reference_resolver_test'
diff --git a/lib/puppet/vendor/rgen/test/model_fragment_test.rb b/lib/puppet/vendor/rgen/test/model_fragment_test.rb
new file mode 100644
index 000000000..deadf587d
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/model_fragment_test.rb
@@ -0,0 +1,30 @@
+$:.unshift File.join(File.dirname(__FILE__),"..","lib")
+
+require 'test/unit'
+require 'rgen/metamodel_builder'
+require 'rgen/fragment/model_fragment'
+
+class ModelFragmentTest < Test::Unit::TestCase
+
+module TestMetamodel
+ extend RGen::MetamodelBuilder::ModuleExtension
+
+ class SimpleClass < RGen::MetamodelBuilder::MMBase
+ has_attr 'name', String
+ contains_many 'subclass', SimpleClass, 'parent'
+ end
+end
+
+def test_elements
+ root = TestMetamodel::SimpleClass.new(:name => "parent",
+ :subclass => [TestMetamodel::SimpleClass.new(:name => "child")])
+
+ frag = RGen::Fragment::ModelFragment.new("location")
+ frag.set_root_elements([root])
+
+ assert_equal 2, frag.elements.size
+end
+
+end
+
+
diff --git a/lib/puppet/vendor/rgen/test/output_handler_test.rb b/lib/puppet/vendor/rgen/test/output_handler_test.rb
new file mode 100644
index 000000000..f00a8af71
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/output_handler_test.rb
@@ -0,0 +1,58 @@
+$:.unshift File.join(File.dirname(__FILE__),"..","lib")
+
+require 'test/unit'
+require 'rgen/template_language/output_handler'
+
+class MetamodelBuilderTest < Test::Unit::TestCase
+ def test_direct_nl
+ h = RGen::TemplateLanguage::OutputHandler.new
+ h.mode = :direct
+ h << "Test"
+ h.ignoreNextNL
+ h << "\nContent"
+ assert_equal "TestContent", h.to_s
+ end
+ def test_direct_ws
+ h = RGen::TemplateLanguage::OutputHandler.new
+ h.mode = :direct
+ h << "Test"
+ h.ignoreNextWS
+ h << " \n Content"
+ assert_equal "TestContent", h.to_s
+ end
+ def test_explicit_indent
+ h = RGen::TemplateLanguage::OutputHandler.new
+ h.mode = :explicit
+ h.indent = 1
+ h << "Start"
+ h << " \n "
+ h << "Test"
+ h << " \n \n Content"
+ assert_equal " Start\n Test\n Content", h.to_s
+ end
+ def test_explicit_endswithws
+ h = RGen::TemplateLanguage::OutputHandler.new
+ h.mode = :explicit
+ h.indent = 1
+ h << "Start \n\n"
+ assert_equal " Start\n", h.to_s
+ end
+ def test_performance
+ h = RGen::TemplateLanguage::OutputHandler.new
+ h.mode = :explicit
+ h.indent = 1
+ line = (1..50).collect{|w| "someword"}.join(" ")+"\n"
+ # repeat more often to make performance differences visible
+ 20.times do
+ h << line
+ end
+ end
+ def test_indent_string
+ h = RGen::TemplateLanguage::OutputHandler.new(1, "\t", :explicit)
+ h << "Start"
+ h << " \n "
+ h << "Test"
+ h << " \n \n Content"
+ assert_equal "\tStart\n\tTest\n\tContent", h.to_s
+ end
+end
\ No newline at end of file
diff --git a/lib/puppet/vendor/rgen/test/qualified_name_provider_test.rb b/lib/puppet/vendor/rgen/test/qualified_name_provider_test.rb
new file mode 100644
index 000000000..c59cfce47
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/qualified_name_provider_test.rb
@@ -0,0 +1,48 @@
+$:.unshift File.join(File.dirname(__FILE__),"..","lib")
+
+require 'test/unit'
+require 'rgen/metamodel_builder'
+require 'rgen/serializer/qualified_name_provider'
+
+class QualifiedNameProviderTest < Test::Unit::TestCase
+
+ class AbstractTestNode < RGen::MetamodelBuilder::MMBase
+ contains_many 'children', AbstractTestNode, "parent"
+ end
+
+ class NamedNode < AbstractTestNode
+ has_attr 'n', String
+ end
+
+ class UnnamedNode < AbstractTestNode
+ end
+
+ def test_simple
+ root = NamedNode.new(:n => "root", :children => [
+ NamedNode.new(:n => "a", :children => [
+ NamedNode.new(:n => "a1")
+ ]),
+ UnnamedNode.new(:children => [
+ NamedNode.new(:n => "b1")
+ ])
+ ])
+
+ qnp = RGen::Serializer::QualifiedNameProvider.new(:attribute_name => "n")
+
+ assert_equal "/root", qnp.identifier(root)
+ assert_equal "/root/a", qnp.identifier(root.children[0])
+ assert_equal "/root/a/a1", qnp.identifier(root.children[0].children[0])
+ assert_equal "/root", qnp.identifier(root.children[1])
+ assert_equal "/root/b1", qnp.identifier(root.children[1].children[0])
+ end
+
+ def test_unnamed_root
+ root = UnnamedNode.new
+
+ qnp = RGen::Serializer::QualifiedNameProvider.new(:attribute_name => "n")
+
+ assert_equal "/", qnp.identifier(root)
+ end
+
+end
+
diff --git a/lib/puppet/vendor/rgen/test/qualified_name_resolver_test.rb b/lib/puppet/vendor/rgen/test/qualified_name_resolver_test.rb
new file mode 100644
index 000000000..f14929226
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/qualified_name_resolver_test.rb
@@ -0,0 +1,102 @@
+$:.unshift File.join(File.dirname(__FILE__),"..","lib")
+
+require 'test/unit'
+require 'rgen/metamodel_builder'
+require 'rgen/instantiator/qualified_name_resolver'
+
+class QualifiedNameResolverTest < Test::Unit::TestCase
+
+ class TestNode < RGen::MetamodelBuilder::MMBase
+ has_attr 'name', String
+ has_one 'nextSibling', TestNode
+ contains_many 'children', TestNode, "parent"
+ end
+
+ class TestNode2 < RGen::MetamodelBuilder::MMBase
+ has_attr 'cname', String
+ has_one 'nextSibling', TestNode2
+ contains_many 'children', TestNode2, "parent"
+ end
+
+ class TestNode3 < RGen::MetamodelBuilder::MMBase
+ has_attr 'name', String
+ contains_one 'child', TestNode3, "parent"
+ end
+
+ def testModel
+ [TestNode.new(:name => "Root1", :children => [
+ TestNode.new(:name => "Sub11"),
+ TestNode.new(:name => "Sub12", :children => [
+ TestNode.new(:name => "Sub121")])]),
+ TestNode.new(:name => "Root2", :children => [
+ TestNode.new(:name => "Sub21", :children => [
+ TestNode.new(:name => "Sub211")])]),
+ TestNode.new(:name => "Root3"),
+ TestNode.new(:name => "Root3")
+ ]
+ end
+
+ def testModel2
+ [TestNode2.new(:cname => "Root1", :children => [
+ TestNode2.new(:cname => "Sub11")])]
+ end
+
+ def testModel3
+ [TestNode3.new(:name => "Root1", :child =>
+ TestNode3.new(:name => "Sub11", :child =>
+ TestNode3.new(:name => "Sub111")))]
+ end
+
+ def test_customNameAttribute
+ model = testModel2
+ res = RGen::Instantiator::QualifiedNameResolver.new(model, :nameAttribute => "cname")
+ assert_equal model[0], res.resolveIdentifier("/Root1")
+ assert_equal model[0].children[0], res.resolveIdentifier("/Root1/Sub11")
+ end
+
+ def test_customSeparator
+ model = testModel
+ res = RGen::Instantiator::QualifiedNameResolver.new(model, :separator => "|")
+ assert_equal model[0], res.resolveIdentifier("|Root1")
+ assert_nil res.resolveIdentifier("/Root1")
+ assert_equal model[0].children[0], res.resolveIdentifier("|Root1|Sub11")
+ end
+
+ def test_noLeadingSeparator
+ model = testModel
+ res = RGen::Instantiator::QualifiedNameResolver.new(model, :leadingSeparator => false)
+ assert_equal model[0], res.resolveIdentifier("Root1")
+ assert_nil res.resolveIdentifier("/Root1")
+ assert_equal model[0].children[0], res.resolveIdentifier("Root1/Sub11")
+ end
+
+ def test_resolve
+ model = testModel
+ res = RGen::Instantiator::QualifiedNameResolver.new(model)
+ assert_equal model[0], res.resolveIdentifier("/Root1")
+ # again
+ assert_equal model[0], res.resolveIdentifier("/Root1")
+ assert_equal model[0].children[0], res.resolveIdentifier("/Root1/Sub11")
+ # again
+ assert_equal model[0].children[0], res.resolveIdentifier("/Root1/Sub11")
+ assert_equal model[0].children[1], res.resolveIdentifier("/Root1/Sub12")
+ assert_equal model[0].children[1].children[0], res.resolveIdentifier("/Root1/Sub12/Sub121")
+ assert_equal model[1], res.resolveIdentifier("/Root2")
+ assert_equal model[1].children[0], res.resolveIdentifier("/Root2/Sub21")
+ assert_equal model[1].children[0].children[0], res.resolveIdentifier("/Root2/Sub21/Sub211")
+ # duplicate name yields two result elements
+ assert_equal [model[2], model[3]], res.resolveIdentifier("/Root3")
+ assert_equal nil, res.resolveIdentifier("/RootX")
+ assert_equal nil, res.resolveIdentifier("/Root1/SubX")
+ end
+
+ def test_oneChild
+ model = testModel3
+ res = RGen::Instantiator::QualifiedNameResolver.new(model)
+ assert_equal model[0], res.resolveIdentifier("/Root1")
+ assert_equal model[0].child, res.resolveIdentifier("/Root1/Sub11")
+ assert_equal model[0].child.child, res.resolveIdentifier("/Root1/Sub11/Sub111")
+ end
+
+end
+
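Illustrative sketch (not part of the diff above): a condensed usage example for the resolver exercised in the test, reusing its TestNode class; the option values shown are the defaults the tests imply, stated here as assumptions rather than documented behavior.
````
nodes = [TestNode.new(:name => "Root1", :children => [TestNode.new(:name => "Sub11")])]

res = RGen::Instantiator::QualifiedNameResolver.new(nodes,
  :nameAttribute    => "name",   # attribute used to build qualified names
  :separator        => "/",      # separator between name segments
  :leadingSeparator => true)     # identifiers start with the separator

res.resolveIdentifier("/Root1/Sub11")   # => the Sub11 element
res.resolveIdentifier("/Root1/SubX")    # => nil (unknown identifiers resolve to nil)
````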
diff --git a/lib/puppet/vendor/rgen/test/reference_resolver_test.rb b/lib/puppet/vendor/rgen/test/reference_resolver_test.rb
new file mode 100644
index 000000000..b3dd58d73
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/reference_resolver_test.rb
@@ -0,0 +1,117 @@
+$:.unshift File.join(File.dirname(__FILE__),"..","lib")
+
+require 'test/unit'
+require 'rgen/metamodel_builder'
+require 'rgen/instantiator/reference_resolver'
+
+class ReferenceResolverTest < Test::Unit::TestCase
+
+ class TestNode < RGen::MetamodelBuilder::MMBase
+ has_attr 'name', String
+ has_one 'other', TestNode
+ has_many 'others', TestNode
+ end
+
+ class TestNode2 < RGen::MetamodelBuilder::MMBase
+ has_attr 'name', String
+ end
+
+ def test_identifier_resolver
+ nodeA, nodeB, nodeC, unresolved_refs = create_model
+ resolver = RGen::Instantiator::ReferenceResolver.new(
+ :identifier_resolver => proc do |ident|
+ {:a => nodeA, :b => nodeB, :c => nodeC}[ident]
+ end)
+ urefs = resolver.resolve(unresolved_refs)
+ check_model(nodeA, nodeB, nodeC)
+ assert urefs.empty?
+ end
+
+ def test_add_identifier
+ nodeA, nodeB, nodeC, unresolved_refs = create_model
+ resolver = RGen::Instantiator::ReferenceResolver.new
+ resolver.add_identifier(:a, nodeA)
+ resolver.add_identifier(:b, nodeB)
+ resolver.add_identifier(:c, nodeC)
+ urefs = resolver.resolve(unresolved_refs)
+ check_model(nodeA, nodeB, nodeC)
+ assert urefs.empty?
+ end
+
+ def test_problems
+ nodeA, nodeB, nodeC, unresolved_refs = create_model
+ resolver = RGen::Instantiator::ReferenceResolver.new(
+ :identifier_resolver => proc do |ident|
+ {:a => [nodeA, nodeB], :c => nodeC}[ident]
+ end)
+ problems = []
+ urefs = resolver.resolve(unresolved_refs, :problems => problems)
+ assert_equal ["identifier b not found", "identifier a not uniq"], problems
+ assert_equal 2, urefs.size
+ assert urefs.all?{|ur| !ur.target_type_error}
+ end
+
+ def test_on_resolve_proc
+ nodeA, nodeB, nodeC, unresolved_refs = create_model
+ resolver = RGen::Instantiator::ReferenceResolver.new
+ resolver.add_identifier(:a, nodeA)
+ resolver.add_identifier(:b, nodeB)
+ resolver.add_identifier(:c, nodeC)
+ data = []
+ resolver.resolve(unresolved_refs,
+ :on_resolve => proc {|ur, e| data << [ ur, e ]})
+ assert data[0][0].is_a?(RGen::Instantiator::ReferenceResolver::UnresolvedReference)
+ assert_equal nodeA, data[0][0].element
+ assert_equal "other", data[0][0].feature_name
+ assert_equal :b, data[0][0].proxy.targetIdentifier
+ assert_equal nodeB, data[0][1]
+ end
+
+ def test_target_type_error
+ nodeA, nodeB, nodeC, unresolved_refs = create_model
+ resolver = RGen::Instantiator::ReferenceResolver.new(
+ :identifier_resolver => proc do |ident|
+ {:a => TestNode2.new, :b => TestNode2.new, :c => nodeC}[ident]
+ end)
+ problems = []
+ urefs = resolver.resolve(unresolved_refs, :problems => problems)
+ assert_equal 2, problems.size
+ assert problems[0] =~ /invalid target type .*TestNode2/
+ assert problems[1] =~ /invalid target type .*TestNode2/
+ assert_equal 2, urefs.uniq.size
+ assert urefs[0].target_type_error
+ assert urefs[1].target_type_error
+ assert urefs.any?{|ur| ur.proxy.object_id == nodeA.other.object_id}
+ assert urefs.any?{|ur| ur.proxy.object_id == nodeB.others[0].object_id}
+ end
+
+ private
+
+ def create_model
+ nodeA = TestNode.new(:name => "NodeA")
+ nodeB = TestNode.new(:name => "NodeB")
+ nodeC = TestNode.new(:name => "NodeC")
+ bProxy = RGen::MetamodelBuilder::MMProxy.new(:b)
+ nodeA.other = bProxy
+ aProxy = RGen::MetamodelBuilder::MMProxy.new(:a)
+ cProxy = RGen::MetamodelBuilder::MMProxy.new(:c)
+ nodeB.others = [aProxy, cProxy]
+ unresolved_refs = [
+ RGen::Instantiator::ReferenceResolver::UnresolvedReference.new(nodeA, "other", bProxy),
+ RGen::Instantiator::ReferenceResolver::UnresolvedReference.new(nodeB, "others", aProxy),
+ RGen::Instantiator::ReferenceResolver::UnresolvedReference.new(nodeB, "others", cProxy)
+ ]
+ return nodeA, nodeB, nodeC, unresolved_refs
+ end
+
+ def check_model(nodeA, nodeB, nodeC)
+ assert_equal nodeB, nodeA.other
+ assert_equal [], nodeA.others
+ assert_equal nil, nodeB.other
+ assert_equal [nodeA, nodeC], nodeB.others
+ assert_equal nil, nodeC.other
+ assert_equal [], nodeC.others
+ end
+
+end
+
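Illustrative sketch (not part of the diff above): the proxy-resolution flow tested here in minimal form, reusing the TestNode class defined in the test.
````
nodeA = TestNode.new(:name => "NodeA")
nodeB = TestNode.new(:name => "NodeB")

# References that could not be resolved at instantiation time hold MMProxy
# objects carrying a target identifier.
proxy = RGen::MetamodelBuilder::MMProxy.new(:b)
nodeA.other = proxy
unresolved = [
  RGen::Instantiator::ReferenceResolver::UnresolvedReference.new(nodeA, "other", proxy)
]

resolver = RGen::Instantiator::ReferenceResolver.new
resolver.add_identifier(:b, nodeB)   # identifier -> target element

problems = []
still_open = resolver.resolve(unresolved, :problems => problems)
# Afterwards nodeA.other == nodeB; references that cannot be resolved remain in
# still_open and a message per failure is appended to problems.
````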
diff --git a/lib/puppet/vendor/rgen/test/rgen_test.rb b/lib/puppet/vendor/rgen/test/rgen_test.rb
new file mode 100644
index 000000000..68f1ac565
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/rgen_test.rb
@@ -0,0 +1,26 @@
+$:.unshift File.dirname(__FILE__)
+
+require 'test/unit'
+
+require 'array_extensions_test'
+require 'ea_instantiator_test'
+require 'ecore_self_test'
+require 'environment_test'
+require 'metamodel_builder_test'
+require 'metamodel_roundtrip_test'
+require 'output_handler_test'
+require 'template_language_test'
+require 'transformer_test'
+require 'xml_instantiator_test'
+require 'ea_serializer_test'
+require 'model_builder_test'
+require 'method_delegation_test'
+require 'json_test'
+require 'reference_resolver_test'
+require 'qualified_name_resolver_test'
+require 'metamodel_order_test'
+require 'metamodel_from_ecore_test'
+require 'util_test'
+require 'model_fragment_test'
+require 'qualified_name_provider_test'
+
diff --git a/lib/puppet/vendor/rgen/test/template_language_test.rb b/lib/puppet/vendor/rgen/test/template_language_test.rb
new file mode 100644
index 000000000..a1f14dda8
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/template_language_test.rb
@@ -0,0 +1,163 @@
+$:.unshift File.join(File.dirname(__FILE__),"..","lib")
+
+require 'test/unit'
+require 'rgen/template_language'
+require 'rgen/metamodel_builder'
+
+class TemplateContainerTest < Test::Unit::TestCase
+
+ TEMPLATES_DIR = File.dirname(__FILE__)+"/template_language_test/templates"
+ OUTPUT_DIR = File.dirname(__FILE__)+"/template_language_test"
+
+ module MyMM
+
+ class Chapter
+ attr_reader :title
+ def initialize(title)
+ @title = title
+ end
+ end
+
+ class Document
+ attr_reader :title, :authors, :chapters
+ attr_accessor :sampleArray
+ def initialize(title)
+ @title = title
+ @chapters = []
+ @authors = []
+ end
+ end
+
+ class Author
+ attr_reader :name, :email
+ def initialize(name, email)
+ @name, @email = name, email
+ end
+ end
+
+ end
+
+ module CCodeMM
+ class CArray < RGen::MetamodelBuilder::MMBase
+ has_attr 'name'
+ has_attr 'size', Integer
+ has_attr 'type'
+ end
+ class PrimitiveInitValue < RGen::MetamodelBuilder::MMBase
+ has_attr 'value', Integer
+ end
+ CArray.has_many 'initvalue', PrimitiveInitValue
+ end
+
+ TEST_MODEL = MyMM::Document.new("SomeDocument")
+ TEST_MODEL.authors << MyMM::Author.new("Martin", "martin@somewhe.re")
+ TEST_MODEL.authors << MyMM::Author.new("Otherguy", "other@somewhereel.se")
+ TEST_MODEL.chapters << MyMM::Chapter.new("Intro")
+ TEST_MODEL.chapters << MyMM::Chapter.new("MainPart")
+ TEST_MODEL.chapters << MyMM::Chapter.new("Summary")
+ TEST_MODEL.sampleArray = CCodeMM::CArray.new(:name => "myArray", :type => "int", :size => 5,
+ :initvalue => (1..5).collect { |v| CCodeMM::PrimitiveInitValue.new(:value => v) })
+
+ def test_with_model
+ tc = RGen::TemplateLanguage::DirectoryTemplateContainer.new([MyMM, CCodeMM], OUTPUT_DIR)
+ tc.load(TEMPLATES_DIR)
+ File.delete(OUTPUT_DIR+"/testout.txt") if File.exists? OUTPUT_DIR+"/testout.txt"
+ tc.expand('root::Root', :for => TEST_MODEL, :indent => 1)
+ result = expected = ""
+ File.open(OUTPUT_DIR+"/testout.txt") {|f| result = f.read}
+ File.open(OUTPUT_DIR+"/expected_result1.txt") {|f| expected = f.read}
+ assert_equal expected, result
+ end
+
+ def test_immediate_result
+ tc = RGen::TemplateLanguage::DirectoryTemplateContainer.new([MyMM, CCodeMM], OUTPUT_DIR)
+ tc.load(TEMPLATES_DIR)
+ expected = ""
+ File.open(OUTPUT_DIR+"/expected_result2.txt","rb") {|f| expected = f.read}
+ assert_equal expected, tc.expand('code/array::ArrayDefinition', :for => TEST_MODEL.sampleArray).to_s
+ end
+
+ def test_indent_string
+ tc = RGen::TemplateLanguage::DirectoryTemplateContainer.new([MyMM, CCodeMM], OUTPUT_DIR)
+ tc.load(TEMPLATES_DIR)
+ tc.indentString = " " # 2 spaces instead of 3 (default)
+ tc.expand('indent_string_test::IndentStringTest', :for => :dummy)
+ File.open(OUTPUT_DIR+"/indentStringTestDefaultIndent.out","rb") do |f|
+ assert_equal " <- your default here\r\n", f.read
+ end
+ File.open(OUTPUT_DIR+"/indentStringTestTabIndent.out","rb") do |f|
+ assert_equal "\t<- tab\r\n", f.read
+ end
+ end
+
+ def test_null_context
+ tc = RGen::TemplateLanguage::DirectoryTemplateContainer.new([MyMM, CCodeMM], OUTPUT_DIR)
+ tc.load(TEMPLATES_DIR)
+ assert_raise StandardError do
+ # the template must raise an exception because it calls expand :for => nil
+ tc.expand('null_context_test::NullContextTestBad', :for => :dummy)
+ end
+ assert_raise StandardError do
+ # the template must raise an exception because it calls expand :foreach => nil
+ tc.expand('null_context_test::NullContextTestBad2', :for => :dummy)
+ end
+ assert_nothing_raised do
+ tc.expand('null_context_test::NullContextTestOk', :for => :dummy)
+ end
+ end
+
+ def test_no_indent
+ tc = RGen::TemplateLanguage::DirectoryTemplateContainer.new([MyMM, CCodeMM], OUTPUT_DIR)
+ tc.load(TEMPLATES_DIR)
+ assert_equal " xxx<---\r\n xxx<---\r\n xxx<---\r\n xxx<---\r\n", tc.expand('no_indent_test/test::Test', :for => :dummy)
+ end
+
+ def test_no_indent2
+ tc = RGen::TemplateLanguage::DirectoryTemplateContainer.new([MyMM, CCodeMM], OUTPUT_DIR)
+ tc.load(TEMPLATES_DIR)
+ assert_equal " return xxxx;\r\n", tc.expand("no_indent_test/test2::Test", :for => :dummy)
+ end
+
+ def test_no_indent3
+ tc = RGen::TemplateLanguage::DirectoryTemplateContainer.new([MyMM, CCodeMM], OUTPUT_DIR)
+ tc.load(TEMPLATES_DIR)
+ assert_equal " l1<---\r\n l2\r\n\r\n", tc.expand("no_indent_test/test3::Test", :for => :dummy)
+ end
+
+ def test_template_resolution
+ tc = RGen::TemplateLanguage::DirectoryTemplateContainer.new([MyMM, CCodeMM], OUTPUT_DIR)
+ tc.load(TEMPLATES_DIR)
+ assert_equal "Sub1\r\nSub1 in sub1\r\n", tc.expand('template_resolution_test/test::Test', :for => :dummy)
+ assert_equal "Sub1\r\nSub1\r\nSub1 in sub1\r\n", tc.expand('template_resolution_test/sub1::Test', :for => :dummy)
+ end
+
+ def test_evaluate
+ tc = RGen::TemplateLanguage::DirectoryTemplateContainer.new([MyMM, CCodeMM], OUTPUT_DIR)
+ tc.load(TEMPLATES_DIR)
+ assert_equal "xx1xxxx2xxxx3xxxx4xx\r\n", tc.expand('evaluate_test/test::Test', :for => :dummy)
+ end
+
+ def test_define_local
+ tc = RGen::TemplateLanguage::DirectoryTemplateContainer.new([MyMM, CCodeMM], OUTPUT_DIR)
+ tc.load(TEMPLATES_DIR)
+ assert_equal "Local1\r\n", tc.expand('define_local_test/test::Test', :for => :dummy)
+ assert_raise StandardError do
+ tc.expand('define_local_test/test::TestForbidden', :for => :dummy)
+ end
+ end
+
+ def test_no_backslash_r
+ tc = RGen::TemplateLanguage::DirectoryTemplateContainer.new([MyMM, CCodeMM], OUTPUT_DIR)
+ tc.load(TEMPLATES_DIR)
+ expected = ""
+ File.open(OUTPUT_DIR+"/expected_result3.txt") {|f| expected = f.read}
+ assert_equal expected, tc.expand('no_backslash_r_test::Test', :for => :dummy).to_s
+ end
+
+ def test_callback_indent
+ tc = RGen::TemplateLanguage::DirectoryTemplateContainer.new([MyMM, CCodeMM], OUTPUT_DIR)
+ tc.load(TEMPLATES_DIR)
+ assert_equal("|before callback\r\n |in callback\r\n|after callback\r\n |after iinc\r\n",
+ tc.expand('callback_indent_test/a::caller', :for => :dummy))
+ end
+end
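Illustrative sketch (not part of the diff above): the container setup repeated in each test boils down to three steps; names are taken from the test file itself.
````
# 1. Create a container over the metamodel modules; output files go to OUTPUT_DIR.
tc = RGen::TemplateLanguage::DirectoryTemplateContainer.new([MyMM, CCodeMM], OUTPUT_DIR)

# 2. Load the templates found below the templates directory.
tc.load(TEMPLATES_DIR)

# 3. Expand a template for a context object; templates using 'file' write into
#    OUTPUT_DIR, while plain templates return their expansion as a string.
text = tc.expand('code/array::ArrayDefinition', :for => TEST_MODEL.sampleArray).to_s
````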
diff --git a/lib/puppet/vendor/rgen/test/template_language_test/expected_result1.txt b/lib/puppet/vendor/rgen/test/template_language_test/expected_result1.txt
new file mode 100644
index 000000000..551d273a5
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/template_language_test/expected_result1.txt
@@ -0,0 +1,29 @@
+ Document: SomeDocument
+
+ by Martin, EMail: martin(at)somewhe.re and Otherguy, EMail: other(at)somewhereel.se
+
+ Index:
+ 1 Intro in SomeDocument
+ 2 MainPart in SomeDocument
+ 3 Summary in SomeDocument
+
+ ----------------
+
+ Chapters in one line:
+ *** Intro ***, *** MainPart ***, *** Summary ***
+
+ Chapters each in one line:
+ *** Intro ***,
+ *** MainPart ***,
+ *** Summary ***
+
+ Here are some code examples:
+ int myArray[5] = {
+ 1,
+ 2,
+ 3,
+ 4,
+ 5
+ };
+ Text from Root
+ Text from Root
diff --git a/lib/puppet/vendor/rgen/test/template_language_test/expected_result2.txt b/lib/puppet/vendor/rgen/test/template_language_test/expected_result2.txt
new file mode 100644
index 000000000..b16ec5237
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/template_language_test/expected_result2.txt
@@ -0,0 +1,9 @@
+int myArray[5] = {
+ 1,
+ 2,
+ 3,
+ 4,
+ 5
+};
+Text from Root
+Text from Root
diff --git a/lib/puppet/vendor/rgen/test/template_language_test/expected_result3.txt b/lib/puppet/vendor/rgen/test/template_language_test/expected_result3.txt
new file mode 100644
index 000000000..79e6f8d1c
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/template_language_test/expected_result3.txt
@@ -0,0 +1,4 @@
+This file was created on Linux and does not contain \r before \n
+The next blank line is done by the "nl" command which shall only add a \n, no \r:
+
+END
diff --git a/lib/puppet/vendor/rgen/test/template_language_test/indentStringTestDefaultIndent.out b/lib/puppet/vendor/rgen/test/template_language_test/indentStringTestDefaultIndent.out
new file mode 100644
index 000000000..6cf7396e1
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/template_language_test/indentStringTestDefaultIndent.out
@@ -0,0 +1 @@
+ <- your default here
diff --git a/lib/puppet/vendor/rgen/test/template_language_test/indentStringTestTabIndent.out b/lib/puppet/vendor/rgen/test/template_language_test/indentStringTestTabIndent.out
new file mode 100644
index 000000000..f70482c3b
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/template_language_test/indentStringTestTabIndent.out
@@ -0,0 +1 @@
+ <- tab
diff --git a/lib/puppet/vendor/rgen/test/template_language_test/templates/callback_indent_test/a.tpl b/lib/puppet/vendor/rgen/test/template_language_test/templates/callback_indent_test/a.tpl
new file mode 100644
index 000000000..30fd9d6df
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/template_language_test/templates/callback_indent_test/a.tpl
@@ -0,0 +1,12 @@
+<% define 'caller', :for => Object do %>
+ |before callback
+ <% expand 'b::do_callback' %>
+ |after callback
+ <%iinc%>
+ |after iinc
+<% end %>
+
+<% define 'callback', :for => Object do %>
+ |in callback
+<% end %>
+
diff --git a/lib/puppet/vendor/rgen/test/template_language_test/templates/callback_indent_test/b.tpl b/lib/puppet/vendor/rgen/test/template_language_test/templates/callback_indent_test/b.tpl
new file mode 100644
index 000000000..293edd87e
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/template_language_test/templates/callback_indent_test/b.tpl
@@ -0,0 +1,5 @@
+<% define 'do_callback', :for => Object do %>
+ <%iinc%>
+ <% expand 'a::callback' %>
+ <%idec%>
+<% end %>
diff --git a/lib/puppet/vendor/rgen/test/template_language_test/templates/code/array.tpl b/lib/puppet/vendor/rgen/test/template_language_test/templates/code/array.tpl
new file mode 100644
index 000000000..78d3a8b0f
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/template_language_test/templates/code/array.tpl
@@ -0,0 +1,11 @@
+<% define 'ArrayDefinition', :for => CArray do %>
+ <%= getType %> <%= name %>[<%= size %>] = {<%iinc%>
+ <% expand 'InitValue', :foreach => initvalue, :separator => ",\r\n" %><%nl%><%idec%>
+ };
+ <% expand '../root::TextFromRoot' %>
+ <% expand '/root::TextFromRoot' %>
+<% end %>
+
+<% define 'InitValue', :for => PrimitiveInitValue do %>
+ <%= value %><%nows%>
+<% end %>
diff --git a/lib/puppet/vendor/rgen/test/template_language_test/templates/content/author.tpl b/lib/puppet/vendor/rgen/test/template_language_test/templates/content/author.tpl
new file mode 100644
index 000000000..8c03422b1
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/template_language_test/templates/content/author.tpl
@@ -0,0 +1,7 @@
+<% define 'Author', :for => Author do %>
+ <% expand 'SubAuthor' %>
+<% end %>
+
+<% define 'SubAuthor', :for => Author do %>
+ <%= name %>, EMail: <%= email.sub('@','(at)') %><%nows%>
+<% end %>
diff --git a/lib/puppet/vendor/rgen/test/template_language_test/templates/content/chapter.tpl b/lib/puppet/vendor/rgen/test/template_language_test/templates/content/chapter.tpl
new file mode 100644
index 000000000..53ada2239
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/template_language_test/templates/content/chapter.tpl
@@ -0,0 +1,5 @@
+<% define 'Root', :for => Chapter do %>
+ *** <%= title %> ***<%nows%>
+
+
+<% end %>
\ No newline at end of file
diff --git a/lib/puppet/vendor/rgen/test/template_language_test/templates/define_local_test/local.tpl b/lib/puppet/vendor/rgen/test/template_language_test/templates/define_local_test/local.tpl
new file mode 100644
index 000000000..5e4aad74b
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/template_language_test/templates/define_local_test/local.tpl
@@ -0,0 +1,8 @@
+<% define 'CallLocal1', :for => Object do %>
+ <% expand 'Local1' %>
+<% end %>
+
+<% define_local 'Local1', :for => Object do %>
+ Local1
+<% end %>
+
diff --git a/lib/puppet/vendor/rgen/test/template_language_test/templates/define_local_test/test.tpl b/lib/puppet/vendor/rgen/test/template_language_test/templates/define_local_test/test.tpl
new file mode 100644
index 000000000..8511132e9
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/template_language_test/templates/define_local_test/test.tpl
@@ -0,0 +1,8 @@
+<% define 'Test', :for => Object do %>
+ <% expand 'local::CallLocal1' %>
+<% end %>
+
+<% define 'TestForbidden', :for => Object do %>
+ <% expand 'local::Local1' %>
+<% end %>
+
\ No newline at end of file
diff --git a/lib/puppet/vendor/rgen/test/template_language_test/templates/evaluate_test/test.tpl b/lib/puppet/vendor/rgen/test/template_language_test/templates/evaluate_test/test.tpl
new file mode 100644
index 000000000..7cc20b2e6
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/template_language_test/templates/evaluate_test/test.tpl
@@ -0,0 +1,7 @@
+<% define 'Test', :for => Object do %>
+ <%= [1,2,3,4].collect{|n| evaluate 'Eval', :for => n}.join %>
+<% end %>
+
+<% define 'Eval', :for => Object do %>
+ xx<%= this %>xx<%nows%>
+<% end %>
diff --git a/lib/puppet/vendor/rgen/test/template_language_test/templates/indent_string_test.tpl b/lib/puppet/vendor/rgen/test/template_language_test/templates/indent_string_test.tpl
new file mode 100644
index 000000000..a57840cc8
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/template_language_test/templates/indent_string_test.tpl
@@ -0,0 +1,12 @@
+<% define 'IndentStringTest', :for => Object do %>
+ <% file 'indentStringTestDefaultIndent.out' do %>
+ <%iinc%>
+ <- your default here
+ <%idec%>
+ <% end %>
+ <% file 'indentStringTestTabIndent.out', "\t" do %>
+ <%iinc%>
+ <- tab
+ <%idec%>
+ <% end %>
+<% end %>
\ No newline at end of file
diff --git a/lib/puppet/vendor/rgen/test/template_language_test/templates/index/c/cmod.tpl b/lib/puppet/vendor/rgen/test/template_language_test/templates/index/c/cmod.tpl
new file mode 100644
index 000000000..411d68c4b
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/template_language_test/templates/index/c/cmod.tpl
@@ -0,0 +1 @@
+<% define 'cmod' do %>Module C is special !<% end %>
\ No newline at end of file
diff --git a/lib/puppet/vendor/rgen/test/template_language_test/templates/index/chapter.tpl b/lib/puppet/vendor/rgen/test/template_language_test/templates/index/chapter.tpl
new file mode 100644
index 000000000..7396f611c
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/template_language_test/templates/index/chapter.tpl
@@ -0,0 +1,3 @@
+<% define 'Root' do |idx, doc| %>
+ <%= idx%> <%= title %> in <%= doc.title %>
+<% end %>
\ No newline at end of file
diff --git a/lib/puppet/vendor/rgen/test/template_language_test/templates/no_backslash_r_test.tpl b/lib/puppet/vendor/rgen/test/template_language_test/templates/no_backslash_r_test.tpl
new file mode 100644
index 000000000..3c06203d9
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/template_language_test/templates/no_backslash_r_test.tpl
@@ -0,0 +1,5 @@
+<% define 'Test', :for => Object do %>
+This file was created on Linux and does not contain \r before \n
+The next blank line is done by the "nl" command which shall only add a \n, no \r:
+<%nl%>END
+<% end %>
\ No newline at end of file
diff --git a/lib/puppet/vendor/rgen/test/template_language_test/templates/no_indent_test/no_indent.tpl b/lib/puppet/vendor/rgen/test/template_language_test/templates/no_indent_test/no_indent.tpl
new file mode 100644
index 000000000..9bdacde57
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/template_language_test/templates/no_indent_test/no_indent.tpl
@@ -0,0 +1,3 @@
+<% define 'NoIndent', :for => Object do %>
+ <---<%nows%>
+<% end %>
diff --git a/lib/puppet/vendor/rgen/test/template_language_test/templates/no_indent_test/sub1/no_indent.tpl b/lib/puppet/vendor/rgen/test/template_language_test/templates/no_indent_test/sub1/no_indent.tpl
new file mode 100644
index 000000000..9bdacde57
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/template_language_test/templates/no_indent_test/sub1/no_indent.tpl
@@ -0,0 +1,3 @@
+<% define 'NoIndent', :for => Object do %>
+ <---<%nows%>
+<% end %>
diff --git a/lib/puppet/vendor/rgen/test/template_language_test/templates/no_indent_test/test.tpl b/lib/puppet/vendor/rgen/test/template_language_test/templates/no_indent_test/test.tpl
new file mode 100644
index 000000000..0acec2774
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/template_language_test/templates/no_indent_test/test.tpl
@@ -0,0 +1,24 @@
+<% define 'Test', :for => Object do %>
+ <%iinc%>
+ xxx<% expand 'NoIndent1' %>
+ xxx<% expand 'NoIndent2' %>
+ xxx<% expand 'NoIndent3' %>
+ xxx<% expand 'NoIndent4' %>
+ <%idec%>
+<% end %>
+
+<% define 'NoIndent1', :for => Object do %>
+ <---<%nows%>
+<% end %>
+
+<% define 'NoIndent2', :for => Object do %>
+ <% expand 'NoIndent1' %>
+<% end %>
+
+<% define 'NoIndent3', :for => Object do %>
+ <% expand 'no_indent::NoIndent' %>
+<% end %>
+
+<% define 'NoIndent4', :for => Object do %>
+ <% expand 'sub1/no_indent::NoIndent' %>
+<% end %>
diff --git a/lib/puppet/vendor/rgen/test/template_language_test/templates/no_indent_test/test2.tpl b/lib/puppet/vendor/rgen/test/template_language_test/templates/no_indent_test/test2.tpl
new file mode 100644
index 000000000..63cf99263
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/template_language_test/templates/no_indent_test/test2.tpl
@@ -0,0 +1,13 @@
+<% define 'Test', :for => Object do %>
+<%iinc%><%iinc%>
+return <% expand 'Call1' %>;
+<%idec%><%idec%>
+<% end %>
+
+<% define 'Call1', :for => Object do %>
+ x<% expand 'Call2' %><%nows%>
+<% end %>
+
+<% define 'Call2', :for => Object do %>
+ xxx<%nows%>
+<% end %>
\ No newline at end of file
diff --git a/lib/puppet/vendor/rgen/test/template_language_test/templates/no_indent_test/test3.tpl b/lib/puppet/vendor/rgen/test/template_language_test/templates/no_indent_test/test3.tpl
new file mode 100644
index 000000000..8c4c3eea5
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/template_language_test/templates/no_indent_test/test3.tpl
@@ -0,0 +1,10 @@
+<% define 'Test', :for => Object do %>
+<%iinc%>
+ l1<% expand 'Call1' %>
+<%idec%>
+<% end %>
+
+<% define 'Call1', :for => Object do %>
+ <---
+ l2
+<% end %>
diff --git a/lib/puppet/vendor/rgen/test/template_language_test/templates/null_context_test.tpl b/lib/puppet/vendor/rgen/test/template_language_test/templates/null_context_test.tpl
new file mode 100644
index 000000000..856b77109
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/template_language_test/templates/null_context_test.tpl
@@ -0,0 +1,17 @@
+<% define 'NullContextTestBad', :for => Object do %>
+ <%# this must raise an exception %>
+ <% expand 'Callee', :for => nil %>
+<% end %>
+
+<% define 'NullContextTestBad2', :for => Object do %>
+ <%# this must raise an exception %>
+ <% expand 'Callee', :foreach => nil %>
+<% end %>
+
+<% define 'NullContextTestOk', :for => Object do %>
+ <%# however this is ok %>
+ <% expand 'Callee' %>
+<% end %>
+
+<% define 'Callee', :for => Object do %>
+<% end %>
\ No newline at end of file
diff --git a/lib/puppet/vendor/rgen/test/template_language_test/templates/root.tpl b/lib/puppet/vendor/rgen/test/template_language_test/templates/root.tpl
new file mode 100644
index 000000000..8b70e31a4
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/template_language_test/templates/root.tpl
@@ -0,0 +1,31 @@
+<% define 'Root' do %>
+ <% file 'testout.txt' do %>
+ Document: <%= title %>
+ <%nl%>
+ <%iinc%>
+ by <% expand 'content/author::Author', :foreach => authors, :separator => ' and ' %>
+ <%idec%>
+ <%nl%>
+ Index:<%iinc%>
+ <% for c in chapters %>
+ <% nr = (nr || 0); nr += 1 %>
+ <% expand 'index/chapter::Root', nr, this, :for => c %>
+ <% end %><%idec%>
+ <%nl%>
+ ----------------
+ <%nl%>
+ Chapters in one line:
+ <% expand 'content/chapter::Root', :foreach => chapters, :separator => ", " %><%nl%>
+ <%nl%>
+ Chapters each in one line:
+ <% expand 'content/chapter::Root', :foreach => chapters, :separator => ",\r\n" %><%nl%>
+ <%nl%>
+ Here are some code examples:
+ <% expand 'code/array::ArrayDefinition', :for => sampleArray %>
+ <% end %>
+<% end %>
+
+<% define 'TextFromRoot' do %>
+ Text from Root
+<% end %>
+
diff --git a/lib/puppet/vendor/rgen/test/template_language_test/templates/template_resolution_test/sub1.tpl b/lib/puppet/vendor/rgen/test/template_language_test/templates/template_resolution_test/sub1.tpl
new file mode 100644
index 000000000..3c7a5cfa2
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/template_language_test/templates/template_resolution_test/sub1.tpl
@@ -0,0 +1,9 @@
+<% define 'Sub1', :for => Object do %>
+ Sub1
+<% end %>
+
+<% define 'Test', :for => Object do %>
+ <% expand 'Sub1' %>
+ <% expand 'sub1::Sub1' %>
+ <% expand 'sub1/sub1::Sub1' %>
+<% end %>
diff --git a/lib/puppet/vendor/rgen/test/template_language_test/templates/template_resolution_test/sub1/sub1.tpl b/lib/puppet/vendor/rgen/test/template_language_test/templates/template_resolution_test/sub1/sub1.tpl
new file mode 100644
index 000000000..1b6dc9700
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/template_language_test/templates/template_resolution_test/sub1/sub1.tpl
@@ -0,0 +1,3 @@
+<% define 'Sub1', :for => Object do %>
+ Sub1 in sub1
+<% end %>
\ No newline at end of file
diff --git a/lib/puppet/vendor/rgen/test/template_language_test/templates/template_resolution_test/test.tpl b/lib/puppet/vendor/rgen/test/template_language_test/templates/template_resolution_test/test.tpl
new file mode 100644
index 000000000..b1a970a01
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/template_language_test/templates/template_resolution_test/test.tpl
@@ -0,0 +1,4 @@
+<% define 'Test', :for => Object do %>
+ <% expand 'sub1::Sub1' %>
+ <% expand 'sub1/sub1::Sub1' %>
+<% end %>
\ No newline at end of file
diff --git a/lib/puppet/vendor/rgen/test/template_language_test/testout.txt b/lib/puppet/vendor/rgen/test/template_language_test/testout.txt
new file mode 100644
index 000000000..551d273a5
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/template_language_test/testout.txt
@@ -0,0 +1,29 @@
+ Document: SomeDocument
+
+ by Martin, EMail: martin(at)somewhe.re and Otherguy, EMail: other(at)somewhereel.se
+
+ Index:
+ 1 Intro in SomeDocument
+ 2 MainPart in SomeDocument
+ 3 Summary in SomeDocument
+
+ ----------------
+
+ Chapters in one line:
+ *** Intro ***, *** MainPart ***, *** Summary ***
+
+ Chapters each in one line:
+ *** Intro ***,
+ *** MainPart ***,
+ *** Summary ***
+
+ Here are some code examples:
+ int myArray[5] = {
+ 1,
+ 2,
+ 3,
+ 4,
+ 5
+ };
+ Text from Root
+ Text from Root
diff --git a/lib/puppet/vendor/rgen/test/testmodel/class_model_checker.rb b/lib/puppet/vendor/rgen/test/testmodel/class_model_checker.rb
new file mode 100644
index 000000000..0bebbdc6b
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/testmodel/class_model_checker.rb
@@ -0,0 +1,119 @@
+require 'metamodels/uml13_metamodel'
+require 'metamodels/uml13_metamodel_ext'
+
+module Testmodel
+
+# Checks the UML Class model elements from the example model
+#
+module ClassModelChecker
+
+ def checkClassModel(envUML)
+
+ # check main package
+ mainPackage = envUML.find(:class => UML13::Package, :name => "HouseMetamodel").first
+ assert_not_nil mainPackage
+
+ # check Rooms package
+ subs = mainPackage.ownedElement.select{|e| e.is_a?(UML13::Package)}
+ assert_equal 1, subs.size
+ roomsPackage = subs.first
+ assert_equal "Rooms", roomsPackage.name
+
+ # check main package classes
+ classes = mainPackage.ownedElement.select{|e| e.is_a?(UML13::Class)}
+ assert_equal 3, classes.size
+ houseClass = classes.find{|c| c.name == "House"}
+ personClass = classes.find{|c| c.name == "Person"}
+ meetingPlaceClass = classes.find{|c| c.name == "MeetingPlace"}
+ cookingPlaceInterface = mainPackage.ownedElement.find{|e| e.is_a?(UML13::Interface) && e.name == "CookingPlace"}
+ assert_not_nil houseClass
+ assert_not_nil personClass
+ assert_not_nil meetingPlaceClass
+ assert_not_nil cookingPlaceInterface
+
+ # check Rooms package classes
+ classes = roomsPackage.ownedElement.select{|e| e.is_a?(UML13::Class)}
+ assert_equal 3, classes.size
+ roomClass = classes.find{|c| c.name == "Room"}
+ kitchenClass = classes.find{|c| c.name == "Kitchen"}
+ bathroomClass = classes.find{|c| c.name == "Bathroom"}
+ assert_not_nil roomClass
+ assert_not_nil kitchenClass
+ assert_not_nil bathroomClass
+
+ # check Room inheritance
+ assert_equal 2, roomClass.specialization.child.size
+ assert_not_nil roomClass.specialization.child.find{|c| c.name == "Kitchen"}
+ assert_not_nil roomClass.specialization.child.find{|c| c.name == "Bathroom"}
+ assert_equal 2, kitchenClass.generalization.parent.size
+ assert_equal roomClass.object_id, kitchenClass.generalization.parent.find{|c| c.name == "Room"}.object_id
+ assert_equal meetingPlaceClass.object_id, kitchenClass.generalization.parent.find{|c| c.name == "MeetingPlace"}.object_id
+ assert_equal 1, bathroomClass.generalization.parent.size
+ assert_equal roomClass.object_id, bathroomClass.generalization.parent.first.object_id
+ assert_not_nil kitchenClass.clientDependency.find{|d| d.stereotype.name == "implements"}
+ assert_equal cookingPlaceInterface.object_id, kitchenClass.clientDependency.supplier.find{|c| c.name == "CookingPlace"}.object_id
+ assert_equal kitchenClass.object_id, cookingPlaceInterface.supplierDependency.client.find{|c| c.name == "Kitchen"}.object_id
+
+ # check House-Room "part of" association
+ assert_equal 1, houseClass.localCompositeEnd.size
+ roomEnd = houseClass.localCompositeEnd.first.otherEnd
+ assert_equal UML13::Association, roomEnd.association.class
+ assert_equal roomClass.object_id, roomEnd.type.object_id
+ assert_equal "room", roomEnd.name
+ assert_equal UML13::Multiplicity, roomEnd.multiplicity.class
+ assert_equal "1", roomEnd.multiplicity.range.first.lower
+ assert_equal "*", roomEnd.multiplicity.range.first.upper
+
+ assert_equal 1, roomClass.remoteCompositeEnd.size
+ assert_equal houseClass.object_id, roomClass.remoteCompositeEnd.first.type.object_id
+ assert_equal "house", roomClass.remoteCompositeEnd.first.name
+
+ # check House OUT associations
+ assert_equal 2, houseClass.remoteNavigableEnd.size
+ bathEnd = houseClass.remoteNavigableEnd.find{|e| e.name == "bathroom"}
+    kitchenEnd = houseClass.remoteNavigableEnd.find{|e| e.name == "kitchen"}
+ assert_not_nil bathEnd
+ assert_not_nil kitchenEnd
+ assert_equal UML13::Association, bathEnd.association.class
+ assert_equal UML13::Association, kitchenEnd.association.class
+ assert_equal "1", kitchenEnd.multiplicity.range.first.lower
+ assert_equal "1", kitchenEnd.multiplicity.range.first.upper
+
+ # check House IN associations
+ assert_equal 3, houseClass.localNavigableEnd.size
+ homeEnd = houseClass.localNavigableEnd.find{|e| e.name == "home"}
+ assert_not_nil homeEnd
+ assert_equal UML13::Association, homeEnd.association.class
+ assert_equal "0", homeEnd.multiplicity.range.first.lower
+ assert_equal "*", homeEnd.multiplicity.range.first.upper
+
+ # check House all associations
+ assert_equal 4, houseClass.associationEnd.size
+ end
+
+ def checkClassModelPartial(envUML)
+ # HouseMetamodel package is not part of the partial export
+ mainPackage = envUML.find(:class => UML13::Package, :name => "HouseMetamodel").first
+ assert_nil mainPackage
+
+ roomsPackage = envUML.find(:class => UML13::Package, :name => "Rooms").first
+ assert_not_nil roomsPackage
+
+ roomClass = envUML.find(:class => UML13::Class, :name => "Room").first
+ assert_not_nil roomClass
+
+ # House is created from an EAStub
+ houseClass = roomClass.remoteCompositeEnd.first.type
+ assert_not_nil houseClass
+ assert_equal "House", houseClass.name
+ # House is not in a package since it's just a stub
+ assert houseClass.namespace.nil?
+
+ # in the partial model, House has only 3 (not 4) associations
+ # since the fourth class (Person) is not in Rooms package
+ assert_equal 3, houseClass.associationEnd.size
+ end
+
+end
+
+end
\ No newline at end of file
diff --git a/lib/puppet/vendor/rgen/test/testmodel/ea_testmodel.eap b/lib/puppet/vendor/rgen/test/testmodel/ea_testmodel.eap
new file mode 100644
index 000000000..737b49cdf
Binary files /dev/null and b/lib/puppet/vendor/rgen/test/testmodel/ea_testmodel.eap differ
diff --git a/lib/puppet/vendor/rgen/test/testmodel/ea_testmodel.xml b/lib/puppet/vendor/rgen/test/testmodel/ea_testmodel.xml
new file mode 100644
index 000000000..b2bee1593
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/testmodel/ea_testmodel.xml
@@ -0,0 +1,1029 @@
+<?xml version="1.0" encoding="windows-1252"?>
+<XMI xmi.version="1.1" xmlns:UML="omg.org/UML1.3" timestamp="2007-06-14 08:25:40">
+ <XMI.header>
+ <XMI.documentation>
+ <XMI.exporter>Enterprise Architect</XMI.exporter>
+ <XMI.exporterVersion>2.5</XMI.exporterVersion>
+ </XMI.documentation>
+ </XMI.header>
+ <XMI.content>
+ <UML:Model name="EA Model" xmi.id="MX_EAID_B216CC15_6D9C_4c10_9015_96740F924D1C">
+ <UML:Namespace.ownedElement>
+ <UML:Class name="EARootClass" xmi.id="EAID_11111111_5487_4080_A7F4_41526CB0AA00" isRoot="true" isLeaf="false" isAbstract="false"/>
+ <UML:Package name="Views" xmi.id="EAPK_B216CC15_6D9C_4c10_9015_96740F924D1C" isRoot="true" isLeaf="false" isAbstract="false" visibility="public">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="ea_package_id" value="1"/>
+ <UML:TaggedValue tag="iscontrolled" value="FALSE"/>
+ <UML:TaggedValue tag="isprotected" value="FALSE"/>
+ <UML:TaggedValue tag="usedtd" value="FALSE"/>
+ <UML:TaggedValue tag="logxml" value="FALSE"/>
+ </UML:ModelElement.taggedValue>
+ <UML:Namespace.ownedElement>
+ <UML:Collaboration xmi.id="EAID_B216CC15_6D9C_4c10_9015_96740F924D1C_Collaboration" name="Collaborations">
+ <UML:Namespace.ownedElement>
+ <UML:ClassifierRole name="HouseMetamodel" xmi.id="EAID_A1B83D59_CAE1_422c_BA5F_D3624D7156AD" visibility="public" base="EAID_11111111_5487_4080_A7F4_41526CB0AA00">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="isSpecification" value="false"/>
+ <UML:TaggedValue tag="ea_stype" value="Package"/>
+ <UML:TaggedValue tag="ea_ntype" value="0"/>
+ <UML:TaggedValue tag="version" value="1.0"/>
+ <UML:TaggedValue tag="package" value="EAPK_B216CC15_6D9C_4c10_9015_96740F924D1C"/>
+ <UML:TaggedValue tag="date_created" value="2006-06-26 08:41:49"/>
+ <UML:TaggedValue tag="date_modified" value="2006-06-26 08:41:49"/>
+ <UML:TaggedValue tag="gentype" value="Java"/>
+ <UML:TaggedValue tag="tagged" value="0"/>
+ <UML:TaggedValue tag="package2" value="EAID_A1B83D59_CAE1_422c_BA5F_D3624D7156AD"/>
+ <UML:TaggedValue tag="package_name" value="Views"/>
+ <UML:TaggedValue tag="phase" value="1.0"/>
+ <UML:TaggedValue tag="complexity" value="1"/>
+ <UML:TaggedValue tag="status" value="Proposed"/>
+ <UML:TaggedValue tag="style" value="BackColor=-1;BorderColor=-1;BorderWidth=-1;FontColor=-1;VSwimLanes=0;HSwimLanes=0;BorderStyle=0;"/>
+ </UML:ModelElement.taggedValue>
+ </UML:ClassifierRole>
+ <UML:ClassifierRole name="HouseExampleModel" xmi.id="EAID_06C9C958_C14A_41f4_89A9_6873CCED37A7" visibility="public" base="EAID_11111111_5487_4080_A7F4_41526CB0AA00">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="isSpecification" value="false"/>
+ <UML:TaggedValue tag="ea_stype" value="Package"/>
+ <UML:TaggedValue tag="ea_ntype" value="0"/>
+ <UML:TaggedValue tag="version" value="1.0"/>
+ <UML:TaggedValue tag="package" value="EAPK_B216CC15_6D9C_4c10_9015_96740F924D1C"/>
+ <UML:TaggedValue tag="date_created" value="2006-07-20 18:34:43"/>
+ <UML:TaggedValue tag="date_modified" value="2006-07-20 18:34:43"/>
+ <UML:TaggedValue tag="gentype" value="Java"/>
+ <UML:TaggedValue tag="tagged" value="0"/>
+ <UML:TaggedValue tag="package2" value="EAID_06C9C958_C14A_41f4_89A9_6873CCED37A7"/>
+ <UML:TaggedValue tag="package_name" value="Views"/>
+ <UML:TaggedValue tag="phase" value="1.0"/>
+ <UML:TaggedValue tag="complexity" value="1"/>
+ <UML:TaggedValue tag="status" value="Proposed"/>
+ <UML:TaggedValue tag="style" value="BackColor=-1;BorderColor=-1;BorderWidth=-1;FontColor=-1;VSwimLanes=0;HSwimLanes=0;BorderStyle=0;"/>
+ </UML:ModelElement.taggedValue>
+ </UML:ClassifierRole>
+ </UML:Namespace.ownedElement>
+ <UML:Collaboration.interaction/>
+ </UML:Collaboration>
+ <UML:Package name="HouseMetamodel" xmi.id="EAPK_A1B83D59_CAE1_422c_BA5F_D3624D7156AD" isRoot="true" isLeaf="false" isAbstract="false" visibility="public">
+ <UML:ModelElement.name>HouseMetamodel</UML:ModelElement.name>
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="created" value="2006-06-26 00:00:00"/>
+ <UML:TaggedValue tag="modified" value="2006-06-26 00:00:00"/>
+ <UML:TaggedValue tag="iscontrolled" value="FALSE"/>
+ <UML:TaggedValue tag="isprotected" value="FALSE"/>
+ <UML:TaggedValue tag="usedtd" value="FALSE"/>
+ <UML:TaggedValue tag="logxml" value="FALSE"/>
+ <UML:TaggedValue tag="ea_package_id" value="44"/>
+ <UML:TaggedValue tag="phase" value="1.0"/>
+ <UML:TaggedValue tag="status" value="Proposed"/>
+ <UML:TaggedValue tag="complexity" value="1"/>
+ <UML:TaggedValue tag="ea_stype" value="Public"/>
+ </UML:ModelElement.taggedValue>
+ <UML:Namespace.ownedElement>
+ <UML:Class name="House" xmi.id="EAID_436D81AD_A9B0_44d8_8AD1_86BB0808DA32" visibility="public" namespace="EAPK_A1B83D59_CAE1_422c_BA5F_D3624D7156AD" isRoot="false" isLeaf="false" isAbstract="false" isActive="false">
+ <UML:ModelElement.stereotype>
+ <UML:Stereotype name="dummy"/>
+ </UML:ModelElement.stereotype>
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="isSpecification" value="false"/>
+ <UML:TaggedValue tag="ea_stype" value="Class"/>
+ <UML:TaggedValue tag="ea_ntype" value="0"/>
+ <UML:TaggedValue tag="version" value="1.0"/>
+ <UML:TaggedValue tag="package" value="EAPK_A1B83D59_CAE1_422c_BA5F_D3624D7156AD"/>
+ <UML:TaggedValue tag="date_created" value="2005-09-16 19:52:18"/>
+ <UML:TaggedValue tag="date_modified" value="2006-02-28 08:29:19"/>
+ <UML:TaggedValue tag="gentype" value="Java"/>
+ <UML:TaggedValue tag="tagged" value="0"/>
+ <UML:TaggedValue tag="package_name" value="HouseMetamodel"/>
+ <UML:TaggedValue tag="phase" value="1.0"/>
+ <UML:TaggedValue tag="complexity" value="1"/>
+ <UML:TaggedValue tag="status" value="Proposed"/>
+ <UML:TaggedValue tag="stereotype" value="dummy"/>
+ <UML:TaggedValue tag="style" value="BackColor=-1;BorderColor=-1;BorderWidth=-1;FontColor=-1;VSwimLanes=0;HSwimLanes=0;BorderStyle=0;"/>
+ </UML:ModelElement.taggedValue>
+ <UML:Classifier.feature>
+ <UML:Attribute name="address" visibility="private" ownerScope="instance" changeable="none" targetScope="instance">
+ <UML:Attribute.initialValue>
+ <UML:Expression/>
+ </UML:Attribute.initialValue>
+ <UML:StructuralFeature.type>
+ <UML:Classifier xmi.idref="eaxmiid0"/>
+ </UML:StructuralFeature.type>
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="type" value="String"/>
+ <UML:TaggedValue tag="derived" value="0"/>
+ <UML:TaggedValue tag="containment" value="Not Specified"/>
+ <UML:TaggedValue tag="ordered" value="0"/>
+ <UML:TaggedValue tag="collection" value="false"/>
+ <UML:TaggedValue tag="position" value="0"/>
+ <UML:TaggedValue tag="lowerBound" value="1"/>
+ <UML:TaggedValue tag="upperBound" value="1"/>
+ <UML:TaggedValue tag="duplicates" value="0"/>
+ <UML:TaggedValue tag="ea_guid" value="{A8DF581B-9AC6-4f75-AB48-8FAEDFC6E068}"/>
+ <UML:TaggedValue tag="styleex" value="volatile=0;"/>
+ </UML:ModelElement.taggedValue>
+ </UML:Attribute>
+ </UML:Classifier.feature>
+ </UML:Class>
+ <UML:Association xmi.id="EAID_161BF4FA_8F48_441f_AF1F_D36014D57A1F" visibility="public" isRoot="false" isLeaf="false" isAbstract="false">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="style" value="3"/>
+ <UML:TaggedValue tag="ea_type" value="Association"/>
+ <UML:TaggedValue tag="direction" value="Source -&gt; Destination"/>
+ <UML:TaggedValue tag="linemode" value="3"/>
+ <UML:TaggedValue tag="seqno" value="0"/>
+ <UML:TaggedValue tag="headStyle" value="0"/>
+ <UML:TaggedValue tag="lineStyle" value="0"/>
+ <UML:TaggedValue tag="privatedata5" value="SX=8;SY=5;EX=8;EY=5;"/>
+ <UML:TaggedValue tag="virtualInheritance" value="0"/>
+ </UML:ModelElement.taggedValue>
+ <UML:Association.connection>
+ <UML:AssociationEnd visibility="public" aggregation="none" isOrdered="false" isNavigable="false" type="EAID_436D81AD_A9B0_44d8_8AD1_86BB0808DA32">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="containment" value="Unspecified"/>
+ </UML:ModelElement.taggedValue>
+ </UML:AssociationEnd>
+ <UML:AssociationEnd visibility="public" multiplicity="1" name="bathroom" aggregation="none" isOrdered="false" isNavigable="true" type="EAID_244CC9E5_1F81_4164_9215_C048EC679543">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="containment" value="Unspecified"/>
+ </UML:ModelElement.taggedValue>
+ </UML:AssociationEnd>
+ </UML:Association.connection>
+ </UML:Association>
+ <UML:Association xmi.id="EAID_9F0FF178_300F_45f4_B31F_9633AF770E44" visibility="public" isRoot="false" isLeaf="false" isAbstract="false">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="style" value="3"/>
+ <UML:TaggedValue tag="ea_type" value="Association"/>
+ <UML:TaggedValue tag="direction" value="Bi-Directional"/>
+ <UML:TaggedValue tag="linemode" value="3"/>
+ <UML:TaggedValue tag="seqno" value="0"/>
+ <UML:TaggedValue tag="headStyle" value="0"/>
+ <UML:TaggedValue tag="lineStyle" value="0"/>
+ <UML:TaggedValue tag="privatedata5" value="SX=6;SY=-12;EX=-44;EY=-9;EDGE=1;"/>
+ <UML:TaggedValue tag="virtualInheritance" value="0"/>
+ </UML:ModelElement.taggedValue>
+ <UML:Association.connection>
+ <UML:AssociationEnd visibility="public" multiplicity="1" name="kitchen" aggregation="none" isOrdered="false" isNavigable="true" type="EAID_9CE44C59_37E4_4117_8C05_F87C4DC33529">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="containment" value="Unspecified"/>
+ </UML:ModelElement.taggedValue>
+ </UML:AssociationEnd>
+ <UML:AssociationEnd visibility="public" multiplicity="1" name="house" aggregation="none" isOrdered="false" isNavigable="true" type="EAID_436D81AD_A9B0_44d8_8AD1_86BB0808DA32">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="containment" value="Unspecified"/>
+ </UML:ModelElement.taggedValue>
+ </UML:AssociationEnd>
+ </UML:Association.connection>
+ </UML:Association>
+ <UML:Association xmi.id="EAID_A9112D1C_DCB6_4040_B466_D5F560898B33" visibility="public" isRoot="false" isLeaf="false" isAbstract="false">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="style" value="3"/>
+ <UML:TaggedValue tag="ea_type" value="Association"/>
+ <UML:TaggedValue tag="direction" value="Source -&gt; Destination"/>
+ <UML:TaggedValue tag="linemode" value="3"/>
+ <UML:TaggedValue tag="seqno" value="0"/>
+ <UML:TaggedValue tag="headStyle" value="0"/>
+ <UML:TaggedValue tag="lineStyle" value="0"/>
+ <UML:TaggedValue tag="virtualInheritance" value="0"/>
+ </UML:ModelElement.taggedValue>
+ <UML:Association.connection>
+ <UML:AssociationEnd visibility="public" aggregation="none" isOrdered="false" isNavigable="false" type="EAID_2F1D9CC6_F38F_4a34_B3C4_B92915108384">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="containment" value="Unspecified"/>
+ </UML:ModelElement.taggedValue>
+ </UML:AssociationEnd>
+ <UML:AssociationEnd visibility="public" multiplicity="0..*" name="home" aggregation="none" isOrdered="false" isNavigable="true" type="EAID_436D81AD_A9B0_44d8_8AD1_86BB0808DA32">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="containment" value="Unspecified"/>
+ </UML:ModelElement.taggedValue>
+ </UML:AssociationEnd>
+ </UML:Association.connection>
+ </UML:Association>
+ <UML:Association xmi.id="EAID_F15A2B25_0DA9_4203_8FC9_25645610B5E5" visibility="public" isRoot="false" isLeaf="false" isAbstract="false">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="style" value="3"/>
+ <UML:TaggedValue tag="ea_type" value="Aggregation"/>
+ <UML:TaggedValue tag="direction" value="Source -&gt; Destination"/>
+ <UML:TaggedValue tag="linemode" value="3"/>
+ <UML:TaggedValue tag="seqno" value="0"/>
+ <UML:TaggedValue tag="subtype" value="Strong"/>
+ <UML:TaggedValue tag="headStyle" value="0"/>
+ <UML:TaggedValue tag="lineStyle" value="0"/>
+ <UML:TaggedValue tag="privatedata5" value="SX=-2;SY=-1;EX=-2;EY=-1;"/>
+ <UML:TaggedValue tag="virtualInheritance" value="0"/>
+ </UML:ModelElement.taggedValue>
+ <UML:Association.connection>
+ <UML:AssociationEnd visibility="public" multiplicity="1..*" name="room" aggregation="none" isOrdered="false" isNavigable="false" type="EAID_14DB5E54_CD7B_4c84_998C_44960049D7E0">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="containment" value="Unspecified"/>
+ </UML:ModelElement.taggedValue>
+ </UML:AssociationEnd>
+ <UML:AssociationEnd visibility="public" multiplicity="1" name="house" aggregation="composite" isOrdered="false" isNavigable="true" type="EAID_436D81AD_A9B0_44d8_8AD1_86BB0808DA32">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="containment" value="Unspecified"/>
+ </UML:ModelElement.taggedValue>
+ </UML:AssociationEnd>
+ </UML:Association.connection>
+ </UML:Association>
+ <UML:Collaboration xmi.id="EAID_A1B83D59_CAE1_422c_BA5F_D3624D7156AD_Collaboration" name="Collaborations">
+ <UML:Namespace.ownedElement>
+ <UML:ClassifierRole name="Rooms" xmi.id="EAID_F9D8C6E3_4DAD_4aa2_AD47_D0ABA4E93E08" visibility="public" base="EAID_11111111_5487_4080_A7F4_41526CB0AA00">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="isSpecification" value="false"/>
+ <UML:TaggedValue tag="ea_stype" value="Package"/>
+ <UML:TaggedValue tag="ea_ntype" value="0"/>
+ <UML:TaggedValue tag="version" value="1.0"/>
+ <UML:TaggedValue tag="package" value="EAPK_A1B83D59_CAE1_422c_BA5F_D3624D7156AD"/>
+ <UML:TaggedValue tag="date_created" value="2006-06-23 08:28:49"/>
+ <UML:TaggedValue tag="date_modified" value="2006-06-23 08:28:49"/>
+ <UML:TaggedValue tag="gentype" value="Java"/>
+ <UML:TaggedValue tag="tagged" value="0"/>
+ <UML:TaggedValue tag="package2" value="EAID_F9D8C6E3_4DAD_4aa2_AD47_D0ABA4E93E08"/>
+ <UML:TaggedValue tag="package_name" value="HouseMetamodel"/>
+ <UML:TaggedValue tag="phase" value="1.0"/>
+ <UML:TaggedValue tag="complexity" value="1"/>
+ <UML:TaggedValue tag="status" value="Proposed"/>
+ <UML:TaggedValue tag="style" value="BackColor=-1;BorderColor=-1;BorderWidth=-1;FontColor=-1;VSwimLanes=0;HSwimLanes=0;BorderStyle=0;"/>
+ </UML:ModelElement.taggedValue>
+ </UML:ClassifierRole>
+ </UML:Namespace.ownedElement>
+ <UML:Collaboration.interaction/>
+ </UML:Collaboration>
+ <UML:Class name="Person" xmi.id="EAID_2F1D9CC6_F38F_4a34_B3C4_B92915108384" visibility="public" namespace="EAPK_A1B83D59_CAE1_422c_BA5F_D3624D7156AD" isRoot="false" isLeaf="false" isAbstract="false" isActive="false">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="isSpecification" value="false"/>
+ <UML:TaggedValue tag="ea_stype" value="Class"/>
+ <UML:TaggedValue tag="ea_ntype" value="0"/>
+ <UML:TaggedValue tag="version" value="1.0"/>
+ <UML:TaggedValue tag="package" value="EAPK_A1B83D59_CAE1_422c_BA5F_D3624D7156AD"/>
+ <UML:TaggedValue tag="date_created" value="2006-06-27 08:34:23"/>
+ <UML:TaggedValue tag="date_modified" value="2006-06-27 08:34:26"/>
+ <UML:TaggedValue tag="gentype" value="Java"/>
+ <UML:TaggedValue tag="tagged" value="0"/>
+ <UML:TaggedValue tag="package_name" value="HouseMetamodel"/>
+ <UML:TaggedValue tag="phase" value="1.0"/>
+ <UML:TaggedValue tag="complexity" value="1"/>
+ <UML:TaggedValue tag="status" value="Proposed"/>
+ <UML:TaggedValue tag="style" value="BackColor=-1;BorderColor=-1;BorderWidth=-1;FontColor=-1;VSwimLanes=0;HSwimLanes=0;BorderStyle=0;"/>
+ </UML:ModelElement.taggedValue>
+ </UML:Class>
+ <UML:Comment xmi.id="EAID_9D7C072D_2F74_4348_B1D9_BA3CFA72FC6F" visibility="public" namespace="EAPK_A1B83D59_CAE1_422c_BA5F_D3624D7156AD">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="isSpecification" value="false"/>
+ <UML:TaggedValue tag="ea_stype" value="Note"/>
+ <UML:TaggedValue tag="ea_ntype" value="1"/>
+ <UML:TaggedValue tag="version" value="1.0"/>
+ <UML:TaggedValue tag="package" value="EAPK_A1B83D59_CAE1_422c_BA5F_D3624D7156AD"/>
+ <UML:TaggedValue tag="date_created" value="2006-07-07 08:08:53"/>
+ <UML:TaggedValue tag="date_modified" value="2006-07-07 08:09:48"/>
+ <UML:TaggedValue tag="gentype" value="&lt;none&gt;"/>
+ <UML:TaggedValue tag="tagged" value="0"/>
+ <UML:TaggedValue tag="package_name" value="HouseMetamodel"/>
+ <UML:TaggedValue tag="phase" value="1.0"/>
+ <UML:TaggedValue tag="complexity" value="1"/>
+ <UML:TaggedValue tag="documentation" value="this aggregation is navigable from room to house (EA does not show it)"/>
+ <UML:TaggedValue tag="status" value="Proposed"/>
+ <UML:TaggedValue tag="relatedlinks" value="idref1=EAID_F15A2B25_0DA9_4203_8FC9_25645610B5E5;"/>
+ <UML:TaggedValue tag="style" value="BackColor=-1;BorderColor=-1;BorderWidth=-1;FontColor=-1;VSwimLanes=0;HSwimLanes=0;BorderStyle=0;"/>
+ </UML:ModelElement.taggedValue>
+ </UML:Comment>
+ <UML:Class name="MeetingPlace" xmi.id="EAID_CB9E84D4_9064_49d0_BF1E_EE53C05853EF" visibility="public" namespace="EAPK_A1B83D59_CAE1_422c_BA5F_D3624D7156AD" isRoot="false" isLeaf="false" isAbstract="false" isActive="false">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="isSpecification" value="false"/>
+ <UML:TaggedValue tag="ea_stype" value="Class"/>
+ <UML:TaggedValue tag="ea_ntype" value="0"/>
+ <UML:TaggedValue tag="version" value="1.0"/>
+ <UML:TaggedValue tag="package" value="EAPK_A1B83D59_CAE1_422c_BA5F_D3624D7156AD"/>
+ <UML:TaggedValue tag="date_created" value="2006-07-12 08:40:46"/>
+ <UML:TaggedValue tag="date_modified" value="2007-06-14 08:22:21"/>
+ <UML:TaggedValue tag="gentype" value="Java"/>
+ <UML:TaggedValue tag="tagged" value="0"/>
+ <UML:TaggedValue tag="package_name" value="HouseMetamodel"/>
+ <UML:TaggedValue tag="phase" value="1.0"/>
+ <UML:TaggedValue tag="complexity" value="1"/>
+ <UML:TaggedValue tag="status" value="Proposed"/>
+ <UML:TaggedValue tag="style" value="BackColor=-1;BorderColor=-1;BorderWidth=-1;FontColor=-1;VSwimLanes=0;HSwimLanes=0;BorderStyle=0;"/>
+ </UML:ModelElement.taggedValue>
+ </UML:Class>
+ <UML:Generalization subtype="EAID_9CE44C59_37E4_4117_8C05_F87C4DC33529" supertype="EAID_CB9E84D4_9064_49d0_BF1E_EE53C05853EF" xmi.id="EAID_E824691D_2AE7_4c9c_8408_881A2B85516F" visibility="public">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="style" value="3"/>
+ <UML:TaggedValue tag="ea_type" value="Generalization"/>
+ <UML:TaggedValue tag="direction" value="Source -&gt; Destination"/>
+ <UML:TaggedValue tag="linemode" value="3"/>
+ <UML:TaggedValue tag="seqno" value="0"/>
+ <UML:TaggedValue tag="headStyle" value="0"/>
+ <UML:TaggedValue tag="lineStyle" value="0"/>
+ <UML:TaggedValue tag="src_visibility" value="Public"/>
+ <UML:TaggedValue tag="src_isOrdered" value="false"/>
+ <UML:TaggedValue tag="src_isNavigable" value="false"/>
+ <UML:TaggedValue tag="dst_visibility" value="Public"/>
+ <UML:TaggedValue tag="dst_isOrdered" value="false"/>
+ <UML:TaggedValue tag="dst_isNavigable" value="true"/>
+ <UML:TaggedValue tag="$ea_xref_property" value="$XREFPROP=$XID={8AAF19F7-EC93-4dda-ADD1-69A07CC671D7}$XID;$NAM=CustomProperties$NAM;$TYP=connector property$TYP;$VIS=Public$VIS;$DES=@PROP=@NAME=isSubstitutable@ENDNAME;@TYPE=boolean@ENDTYPE;@VALU=@ENDVALU;@PRMT=@ENDPRMT;@ENDPROP;$DES;$CLT={E824691D-2AE7-4c9c-8408-881A2B85516F}$CLT;$SUP=&lt;none&gt;$SUP;$ENDXREF;"/>
+ <UML:TaggedValue tag="privatedata5" value="SX=-24;SY=-16;"/>
+ </UML:ModelElement.taggedValue>
+ </UML:Generalization>
+ <UML:Interface name="CookingPlace" xmi.id="EAID_E21E535F_E300_4405_A917_28FFB4B2DB6E" visibility="public" namespace="EAPK_A1B83D59_CAE1_422c_BA5F_D3624D7156AD" isRoot="false" isLeaf="false" isAbstract="true">
+ <UML:ModelElement.stereotype>
+ <UML:Stereotype name="interface"/>
+ </UML:ModelElement.stereotype>
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="isSpecification" value="false"/>
+ <UML:TaggedValue tag="ea_stype" value="Interface"/>
+ <UML:TaggedValue tag="ea_ntype" value="0"/>
+ <UML:TaggedValue tag="version" value="1.0"/>
+ <UML:TaggedValue tag="package" value="EAPK_A1B83D59_CAE1_422c_BA5F_D3624D7156AD"/>
+ <UML:TaggedValue tag="date_created" value="2007-06-14 08:22:21"/>
+ <UML:TaggedValue tag="date_modified" value="2007-06-14 08:23:36"/>
+ <UML:TaggedValue tag="gentype" value="Java"/>
+ <UML:TaggedValue tag="tagged" value="0"/>
+ <UML:TaggedValue tag="package_name" value="HouseMetamodel"/>
+ <UML:TaggedValue tag="phase" value="1.0"/>
+ <UML:TaggedValue tag="complexity" value="1"/>
+ <UML:TaggedValue tag="status" value="Proposed"/>
+ <UML:TaggedValue tag="stereotype" value="interface"/>
+ <UML:TaggedValue tag="style" value="BackColor=-1;BorderColor=-1;BorderWidth=-1;FontColor=-1;VSwimLanes=0;HSwimLanes=0;BorderStyle=0;"/>
+ </UML:ModelElement.taggedValue>
+ </UML:Interface>
+ <UML:Dependency client="EAID_9CE44C59_37E4_4117_8C05_F87C4DC33529" supplier="EAID_E21E535F_E300_4405_A917_28FFB4B2DB6E" xmi.id="EAID_9613E9DD_8709_4e9e_92D8_2F6CA9835851" visibility="public">
+ <UML:ModelElement.stereotype>
+ <UML:Stereotype name="implements"/>
+ </UML:ModelElement.stereotype>
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="style" value="3"/>
+ <UML:TaggedValue tag="ea_type" value="Dependency"/>
+ <UML:TaggedValue tag="direction" value="Source -&gt; Destination"/>
+ <UML:TaggedValue tag="linemode" value="3"/>
+ <UML:TaggedValue tag="seqno" value="0"/>
+ <UML:TaggedValue tag="stereotype" value="implements"/>
+ <UML:TaggedValue tag="headStyle" value="0"/>
+ <UML:TaggedValue tag="lineStyle" value="0"/>
+ <UML:TaggedValue tag="conditional" value="«implements»"/>
+ <UML:TaggedValue tag="src_visibility" value="Public"/>
+ <UML:TaggedValue tag="src_aggregation" value="0"/>
+ <UML:TaggedValue tag="src_isOrdered" value="false"/>
+ <UML:TaggedValue tag="src_isNavigable" value="false"/>
+ <UML:TaggedValue tag="src_containment" value="Unspecified"/>
+ <UML:TaggedValue tag="dst_visibility" value="Public"/>
+ <UML:TaggedValue tag="dst_aggregation" value="0"/>
+ <UML:TaggedValue tag="dst_isOrdered" value="false"/>
+ <UML:TaggedValue tag="dst_isNavigable" value="true"/>
+ <UML:TaggedValue tag="dst_containment" value="Unspecified"/>
+ <UML:TaggedValue tag="virtualInheritance" value="0"/>
+ <UML:TaggedValue tag="mb" value="«implements»"/>
+ </UML:ModelElement.taggedValue>
+ </UML:Dependency>
+ <UML:Package name="Rooms" xmi.id="EAPK_F9D8C6E3_4DAD_4aa2_AD47_D0ABA4E93E08" isRoot="true" isLeaf="false" isAbstract="false" visibility="public">
+ <UML:ModelElement.name>Rooms</UML:ModelElement.name>
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="created" value="2006-06-23 00:00:00"/>
+ <UML:TaggedValue tag="modified" value="2006-06-23 00:00:00"/>
+ <UML:TaggedValue tag="iscontrolled" value="FALSE"/>
+ <UML:TaggedValue tag="isprotected" value="FALSE"/>
+ <UML:TaggedValue tag="usedtd" value="FALSE"/>
+ <UML:TaggedValue tag="logxml" value="FALSE"/>
+ <UML:TaggedValue tag="phase" value="1.0"/>
+ <UML:TaggedValue tag="status" value="Proposed"/>
+ <UML:TaggedValue tag="complexity" value="1"/>
+ <UML:TaggedValue tag="ea_stype" value="Public"/>
+ </UML:ModelElement.taggedValue>
+ <UML:Namespace.ownedElement>
+ <UML:Class name="Room" xmi.id="EAID_14DB5E54_CD7B_4c84_998C_44960049D7E0" visibility="public" namespace="EAPK_F9D8C6E3_4DAD_4aa2_AD47_D0ABA4E93E08" isRoot="false" isLeaf="false" isAbstract="false" isActive="false">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="isSpecification" value="false"/>
+ <UML:TaggedValue tag="ea_stype" value="Class"/>
+ <UML:TaggedValue tag="ea_ntype" value="0"/>
+ <UML:TaggedValue tag="version" value="1.0"/>
+ <UML:TaggedValue tag="package" value="EAPK_F9D8C6E3_4DAD_4aa2_AD47_D0ABA4E93E08"/>
+ <UML:TaggedValue tag="date_created" value="2005-09-16 19:52:28"/>
+ <UML:TaggedValue tag="date_modified" value="2006-06-22 21:15:25"/>
+ <UML:TaggedValue tag="gentype" value="Java"/>
+ <UML:TaggedValue tag="tagged" value="0"/>
+ <UML:TaggedValue tag="package_name" value="Rooms"/>
+ <UML:TaggedValue tag="phase" value="1.0"/>
+ <UML:TaggedValue tag="complexity" value="1"/>
+ <UML:TaggedValue tag="status" value="Proposed"/>
+ <UML:TaggedValue tag="style" value="BackColor=-1;BorderColor=-1;BorderWidth=-1;FontColor=-1;VSwimLanes=0;HSwimLanes=0;BorderStyle=0;"/>
+ </UML:ModelElement.taggedValue>
+ <UML:Classifier.feature>
+ <UML:Operation name="enter" visibility="public" ownerScope="instance" isQuery="false" concurrency="sequential">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="type" value="void"/>
+ <UML:TaggedValue tag="const" value="false"/>
+ <UML:TaggedValue tag="synchronised" value="0"/>
+ <UML:TaggedValue tag="concurrency" value="Sequential"/>
+ <UML:TaggedValue tag="position" value="0"/>
+ <UML:TaggedValue tag="returnarray" value="0"/>
+ <UML:TaggedValue tag="pure" value="0"/>
+ <UML:TaggedValue tag="ea_guid" value="{A8C01177-4523-44c4-87F7-F2B8F9F3C915}"/>
+ </UML:ModelElement.taggedValue>
+ <UML:BehavioralFeature.parameter>
+ <UML:Parameter kind="return" visibility="public">
+ <UML:Parameter.type>
+ <UML:Classifier xmi.idref="eaxmiid1"/>
+ </UML:Parameter.type>
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="pos" value="0"/>
+ <UML:TaggedValue tag="type" value="void"/>
+ <UML:TaggedValue tag="const" value="0"/>
+ <UML:TaggedValue tag="ea_guid" value="{A8C01177-4523-44c4-87F7-F2B8F9F3C915}"/>
+ </UML:ModelElement.taggedValue>
+ <UML:Parameter.defaultValue>
+ <UML:Expression/>
+ </UML:Parameter.defaultValue>
+ </UML:Parameter>
+ </UML:BehavioralFeature.parameter>
+ </UML:Operation>
+ </UML:Classifier.feature>
+ </UML:Class>
+ <UML:Generalization subtype="EAID_244CC9E5_1F81_4164_9215_C048EC679543" supertype="EAID_14DB5E54_CD7B_4c84_998C_44960049D7E0" xmi.id="EAID_03052911_3625_4472_8C6B_5E1742FED753" visibility="public">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="style" value="3"/>
+ <UML:TaggedValue tag="ea_type" value="Generalization"/>
+ <UML:TaggedValue tag="direction" value="Source -&gt; Destination"/>
+ <UML:TaggedValue tag="linemode" value="3"/>
+ <UML:TaggedValue tag="seqno" value="0"/>
+ <UML:TaggedValue tag="headStyle" value="0"/>
+ <UML:TaggedValue tag="lineStyle" value="0"/>
+ <UML:TaggedValue tag="src_visibility" value="Public"/>
+ <UML:TaggedValue tag="src_isOrdered" value="false"/>
+ <UML:TaggedValue tag="src_isNavigable" value="false"/>
+ <UML:TaggedValue tag="dst_visibility" value="Public"/>
+ <UML:TaggedValue tag="dst_isOrdered" value="false"/>
+ <UML:TaggedValue tag="dst_isNavigable" value="true"/>
+ <UML:TaggedValue tag="$ea_xref_property" value="$XREFPROP=$XID={DF243A69-1029-4c4c-A1A0-C821A1DF9623}$XID;$NAM=CustomProperties$NAM;$TYP=connector property$TYP;$VIS=Public$VIS;$DES=@PROP=@NAME=isSubstitutable@ENDNAME;@TYPE=boolean@ENDTYPE;@VALU=@ENDVALU;@PRMT=@ENDPRMT;@ENDPROP;$DES;$CLT={03052911-3625-4472-8C6B-5E1742FED753}$CLT;$SUP=&lt;none&gt;$SUP;$ENDXREF;"/>
+ </UML:ModelElement.taggedValue>
+ </UML:Generalization>
+ <UML:Generalization subtype="EAID_9CE44C59_37E4_4117_8C05_F87C4DC33529" supertype="EAID_14DB5E54_CD7B_4c84_998C_44960049D7E0" xmi.id="EAID_C3D18D92_3A5C_4112_9958_926A259BAE24" visibility="public">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="style" value="3"/>
+ <UML:TaggedValue tag="ea_type" value="Generalization"/>
+ <UML:TaggedValue tag="direction" value="Source -&gt; Destination"/>
+ <UML:TaggedValue tag="linemode" value="3"/>
+ <UML:TaggedValue tag="seqno" value="0"/>
+ <UML:TaggedValue tag="headStyle" value="0"/>
+ <UML:TaggedValue tag="lineStyle" value="0"/>
+ <UML:TaggedValue tag="src_visibility" value="Public"/>
+ <UML:TaggedValue tag="src_isOrdered" value="false"/>
+ <UML:TaggedValue tag="src_isNavigable" value="false"/>
+ <UML:TaggedValue tag="dst_visibility" value="Public"/>
+ <UML:TaggedValue tag="dst_isOrdered" value="false"/>
+ <UML:TaggedValue tag="dst_isNavigable" value="true"/>
+ <UML:TaggedValue tag="$ea_xref_property" value="$XREFPROP=$XID={DB1C706A-470A-45ed-89DA-601821419545}$XID;$NAM=CustomProperties$NAM;$TYP=connector property$TYP;$VIS=Public$VIS;$DES=@PROP=@NAME=isSubstitutable@ENDNAME;@TYPE=boolean@ENDTYPE;@VALU=@ENDVALU;@PRMT=@ENDPRMT;@ENDPROP;$DES;$CLT={C3D18D92-3A5C-4112-9958-926A259BAE24}$CLT;$SUP=&lt;none&gt;$SUP;$ENDXREF;"/>
+ <UML:TaggedValue tag="privatedata5" value="EX=37;EY=3;"/>
+ </UML:ModelElement.taggedValue>
+ </UML:Generalization>
+ <UML:Class name="Kitchen" xmi.id="EAID_9CE44C59_37E4_4117_8C05_F87C4DC33529" visibility="public" namespace="EAPK_F9D8C6E3_4DAD_4aa2_AD47_D0ABA4E93E08" isRoot="false" isLeaf="false" isAbstract="false" isActive="false">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="isSpecification" value="false"/>
+ <UML:TaggedValue tag="ea_stype" value="Class"/>
+ <UML:TaggedValue tag="ea_ntype" value="0"/>
+ <UML:TaggedValue tag="version" value="1.0"/>
+ <UML:TaggedValue tag="package" value="EAPK_F9D8C6E3_4DAD_4aa2_AD47_D0ABA4E93E08"/>
+ <UML:TaggedValue tag="date_created" value="2005-11-30 19:26:13"/>
+ <UML:TaggedValue tag="date_modified" value="2006-06-22 21:15:34"/>
+ <UML:TaggedValue tag="gentype" value="Java"/>
+ <UML:TaggedValue tag="tagged" value="0"/>
+ <UML:TaggedValue tag="package_name" value="Rooms"/>
+ <UML:TaggedValue tag="phase" value="1.0"/>
+ <UML:TaggedValue tag="complexity" value="1"/>
+ <UML:TaggedValue tag="status" value="Proposed"/>
+ <UML:TaggedValue tag="style" value="BackColor=-1;BorderColor=-1;BorderWidth=-1;FontColor=-1;VSwimLanes=0;HSwimLanes=0;BorderStyle=0;"/>
+ </UML:ModelElement.taggedValue>
+ </UML:Class>
+ <UML:Class name="Bathroom" xmi.id="EAID_244CC9E5_1F81_4164_9215_C048EC679543" visibility="public" namespace="EAPK_F9D8C6E3_4DAD_4aa2_AD47_D0ABA4E93E08" isRoot="false" isLeaf="false" isAbstract="false" isActive="false">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="isSpecification" value="false"/>
+ <UML:TaggedValue tag="ea_stype" value="Class"/>
+ <UML:TaggedValue tag="ea_ntype" value="0"/>
+ <UML:TaggedValue tag="version" value="1.0"/>
+ <UML:TaggedValue tag="package" value="EAPK_F9D8C6E3_4DAD_4aa2_AD47_D0ABA4E93E08"/>
+ <UML:TaggedValue tag="date_created" value="2006-06-27 08:32:25"/>
+ <UML:TaggedValue tag="date_modified" value="2006-06-27 08:34:23"/>
+ <UML:TaggedValue tag="gentype" value="Java"/>
+ <UML:TaggedValue tag="tagged" value="0"/>
+ <UML:TaggedValue tag="package_name" value="Rooms"/>
+ <UML:TaggedValue tag="phase" value="1.0"/>
+ <UML:TaggedValue tag="complexity" value="1"/>
+ <UML:TaggedValue tag="status" value="Proposed"/>
+ <UML:TaggedValue tag="style" value="BackColor=-1;BorderColor=-1;BorderWidth=-1;FontColor=-1;VSwimLanes=0;HSwimLanes=0;BorderStyle=0;"/>
+ </UML:ModelElement.taggedValue>
+ </UML:Class>
+ </UML:Namespace.ownedElement>
+ </UML:Package>
+ </UML:Namespace.ownedElement>
+ </UML:Package>
+ <UML:Package name="HouseExampleModel" xmi.id="EAPK_06C9C958_C14A_41f4_89A9_6873CCED37A7" isRoot="true" isLeaf="false" isAbstract="false" visibility="public">
+ <UML:ModelElement.name>HouseExampleModel</UML:ModelElement.name>
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="created" value="2006-07-20 00:00:00"/>
+ <UML:TaggedValue tag="modified" value="2006-07-20 00:00:00"/>
+ <UML:TaggedValue tag="iscontrolled" value="FALSE"/>
+ <UML:TaggedValue tag="isprotected" value="FALSE"/>
+ <UML:TaggedValue tag="usedtd" value="FALSE"/>
+ <UML:TaggedValue tag="logxml" value="FALSE"/>
+ <UML:TaggedValue tag="ea_package_id" value="46"/>
+ <UML:TaggedValue tag="phase" value="1.0"/>
+ <UML:TaggedValue tag="status" value="Proposed"/>
+ <UML:TaggedValue tag="complexity" value="1"/>
+ <UML:TaggedValue tag="ea_stype" value="Public"/>
+ </UML:ModelElement.taggedValue>
+ <UML:Namespace.ownedElement>
+ <UML:Collaboration xmi.id="EAID_06C9C958_C14A_41f4_89A9_6873CCED37A7_Collaboration" name="Collaborations">
+ <UML:Namespace.ownedElement>
+ <UML:ClassifierRole name="SomeonesHouse" xmi.id="EAID_3D59084A_BBC6_4d6b_B8E5_9A764D27563B" visibility="public" base="EAID_11111111_5487_4080_A7F4_41526CB0AA00">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="isSpecification" value="false"/>
+ <UML:TaggedValue tag="ea_stype" value="Object"/>
+ <UML:TaggedValue tag="ea_ntype" value="0"/>
+ <UML:TaggedValue tag="version" value="1.0"/>
+ <UML:TaggedValue tag="classifier" value="EAID_436D81AD_A9B0_44d8_8AD1_86BB0808DA32"/>
+ <UML:TaggedValue tag="package" value="EAPK_06C9C958_C14A_41f4_89A9_6873CCED37A7"/>
+ <UML:TaggedValue tag="classname" value="House"/>
+ <UML:TaggedValue tag="date_created" value="2006-07-20 18:35:22"/>
+ <UML:TaggedValue tag="date_modified" value="2006-07-21 19:34:53"/>
+ <UML:TaggedValue tag="gentype" value="Java"/>
+ <UML:TaggedValue tag="tagged" value="0"/>
+ <UML:TaggedValue tag="package_name" value="HouseExampleModel"/>
+ <UML:TaggedValue tag="phase" value="1.0"/>
+ <UML:TaggedValue tag="complexity" value="1"/>
+ <UML:TaggedValue tag="status" value="Proposed"/>
+ <UML:TaggedValue tag="style" value="BackColor=-1;BorderColor=-1;BorderWidth=-1;FontColor=-1;VSwimLanes=0;HSwimLanes=0;BorderStyle=0;"/>
+ </UML:ModelElement.taggedValue>
+ </UML:ClassifierRole>
+ <UML:AssociationRole xmi.id="EAID_AFD66AB6_337B_49c5_9FA9_0CFAFEB77AA7" visibility="public" isRoot="false" isLeaf="false" isAbstract="false">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="style" value="3"/>
+ <UML:TaggedValue tag="ea_type" value="Association"/>
+ <UML:TaggedValue tag="direction" value="Unspecified"/>
+ <UML:TaggedValue tag="linemode" value="3"/>
+ <UML:TaggedValue tag="seqno" value="0"/>
+ <UML:TaggedValue tag="headStyle" value="0"/>
+ <UML:TaggedValue tag="lineStyle" value="0"/>
+ <UML:TaggedValue tag="src_visibility" value="Public"/>
+ <UML:TaggedValue tag="src_aggregation" value="0"/>
+ <UML:TaggedValue tag="src_isOrdered" value="false"/>
+ <UML:TaggedValue tag="src_isNavigable" value="true"/>
+ <UML:TaggedValue tag="src_containment" value="Unspecified"/>
+ <UML:TaggedValue tag="dst_visibility" value="Public"/>
+ <UML:TaggedValue tag="dst_name" value="home"/>
+ <UML:TaggedValue tag="dst_aggregation" value="0"/>
+ <UML:TaggedValue tag="dst_isOrdered" value="false"/>
+ <UML:TaggedValue tag="dst_isNavigable" value="true"/>
+ <UML:TaggedValue tag="dst_containment" value="Unspecified"/>
+ <UML:TaggedValue tag="virtualInheritance" value="0"/>
+ </UML:ModelElement.taggedValue>
+ <UML:Association.connection>
+ <UML:AssociationEndRole visibility="public" aggregation="none" isOrdered="false" isNavigable="true" type="EAID_960E757D_3D81_43e2_BA2E_5F7341421EA5">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="containment" value="Unspecified"/>
+ </UML:ModelElement.taggedValue>
+ </UML:AssociationEndRole>
+ <UML:AssociationEndRole visibility="public" name="home" aggregation="none" isOrdered="false" isNavigable="true" type="EAID_3D59084A_BBC6_4d6b_B8E5_9A764D27563B">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="containment" value="Unspecified"/>
+ </UML:ModelElement.taggedValue>
+ </UML:AssociationEndRole>
+ </UML:Association.connection>
+ </UML:AssociationRole>
+ <UML:ClassifierRole name="GreenRoom" xmi.id="EAID_597D270D_65EB_4941_908B_A42419BD7C3F" visibility="public" base="EAID_11111111_5487_4080_A7F4_41526CB0AA00">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="isSpecification" value="false"/>
+ <UML:TaggedValue tag="ea_stype" value="Object"/>
+ <UML:TaggedValue tag="ea_ntype" value="0"/>
+ <UML:TaggedValue tag="version" value="1.0"/>
+ <UML:TaggedValue tag="classifier" value="EAID_14DB5E54_CD7B_4c84_998C_44960049D7E0"/>
+ <UML:TaggedValue tag="package" value="EAPK_06C9C958_C14A_41f4_89A9_6873CCED37A7"/>
+ <UML:TaggedValue tag="classname" value="Room"/>
+ <UML:TaggedValue tag="date_created" value="2006-07-20 18:35:32"/>
+ <UML:TaggedValue tag="date_modified" value="2006-07-21 19:34:11"/>
+ <UML:TaggedValue tag="gentype" value="Java"/>
+ <UML:TaggedValue tag="tagged" value="0"/>
+ <UML:TaggedValue tag="package_name" value="HouseExampleModel"/>
+ <UML:TaggedValue tag="phase" value="1.0"/>
+ <UML:TaggedValue tag="complexity" value="1"/>
+ <UML:TaggedValue tag="status" value="Proposed"/>
+ <UML:TaggedValue tag="style" value="BackColor=-1;BorderColor=-1;BorderWidth=-1;FontColor=-1;VSwimLanes=0;HSwimLanes=0;BorderStyle=0;"/>
+ </UML:ModelElement.taggedValue>
+ </UML:ClassifierRole>
+ <UML:ClassifierRole name="YellowRoom" xmi.id="EAID_5D770C43_247F_41e6_BC35_93C5A9204E76" visibility="public" base="EAID_11111111_5487_4080_A7F4_41526CB0AA00">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="isSpecification" value="false"/>
+ <UML:TaggedValue tag="ea_stype" value="Object"/>
+ <UML:TaggedValue tag="ea_ntype" value="0"/>
+ <UML:TaggedValue tag="version" value="1.0"/>
+ <UML:TaggedValue tag="classifier" value="EAID_14DB5E54_CD7B_4c84_998C_44960049D7E0"/>
+ <UML:TaggedValue tag="package" value="EAPK_06C9C958_C14A_41f4_89A9_6873CCED37A7"/>
+ <UML:TaggedValue tag="classname" value="Room"/>
+ <UML:TaggedValue tag="date_created" value="2006-07-21 19:22:58"/>
+ <UML:TaggedValue tag="date_modified" value="2006-07-21 19:34:27"/>
+ <UML:TaggedValue tag="gentype" value="Java"/>
+ <UML:TaggedValue tag="tagged" value="0"/>
+ <UML:TaggedValue tag="package_name" value="HouseExampleModel"/>
+ <UML:TaggedValue tag="phase" value="1.0"/>
+ <UML:TaggedValue tag="complexity" value="1"/>
+ <UML:TaggedValue tag="status" value="Proposed"/>
+ <UML:TaggedValue tag="style" value="BackColor=-1;BorderColor=-1;BorderWidth=-1;FontColor=-1;VSwimLanes=0;HSwimLanes=0;BorderStyle=0;"/>
+ </UML:ModelElement.taggedValue>
+ </UML:ClassifierRole>
+ <UML:ClassifierRole name="HotRoom" xmi.id="EAID_9907C98A_7E63_47dc_8A4D_ED020B68C41D" visibility="public" base="EAID_11111111_5487_4080_A7F4_41526CB0AA00">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="isSpecification" value="false"/>
+ <UML:TaggedValue tag="ea_stype" value="Object"/>
+ <UML:TaggedValue tag="ea_ntype" value="0"/>
+ <UML:TaggedValue tag="version" value="1.0"/>
+ <UML:TaggedValue tag="classifier" value="EAID_9CE44C59_37E4_4117_8C05_F87C4DC33529"/>
+ <UML:TaggedValue tag="package" value="EAPK_06C9C958_C14A_41f4_89A9_6873CCED37A7"/>
+ <UML:TaggedValue tag="classname" value="Kitchen"/>
+ <UML:TaggedValue tag="date_created" value="2006-07-21 19:27:21"/>
+ <UML:TaggedValue tag="date_modified" value="2006-07-21 19:33:56"/>
+ <UML:TaggedValue tag="gentype" value="Java"/>
+ <UML:TaggedValue tag="tagged" value="0"/>
+ <UML:TaggedValue tag="package_name" value="HouseExampleModel"/>
+ <UML:TaggedValue tag="phase" value="1.0"/>
+ <UML:TaggedValue tag="complexity" value="1"/>
+ <UML:TaggedValue tag="status" value="Proposed"/>
+ <UML:TaggedValue tag="style" value="BackColor=-1;BorderColor=-1;BorderWidth=-1;FontColor=-1;VSwimLanes=0;HSwimLanes=0;BorderStyle=0;"/>
+ </UML:ModelElement.taggedValue>
+ </UML:ClassifierRole>
+ <UML:ClassifierRole name="Someone" xmi.id="EAID_960E757D_3D81_43e2_BA2E_5F7341421EA5" visibility="public" base="EAID_11111111_5487_4080_A7F4_41526CB0AA00">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="isSpecification" value="false"/>
+ <UML:TaggedValue tag="ea_stype" value="Object"/>
+ <UML:TaggedValue tag="ea_ntype" value="0"/>
+ <UML:TaggedValue tag="version" value="1.0"/>
+ <UML:TaggedValue tag="classifier" value="EAID_2F1D9CC6_F38F_4a34_B3C4_B92915108384"/>
+ <UML:TaggedValue tag="package" value="EAPK_06C9C958_C14A_41f4_89A9_6873CCED37A7"/>
+ <UML:TaggedValue tag="classname" value="Person"/>
+ <UML:TaggedValue tag="date_created" value="2006-07-21 19:29:45"/>
+ <UML:TaggedValue tag="date_modified" value="2006-07-21 19:35:00"/>
+ <UML:TaggedValue tag="gentype" value="Java"/>
+ <UML:TaggedValue tag="tagged" value="0"/>
+ <UML:TaggedValue tag="package_name" value="HouseExampleModel"/>
+ <UML:TaggedValue tag="phase" value="1.0"/>
+ <UML:TaggedValue tag="complexity" value="1"/>
+ <UML:TaggedValue tag="status" value="Proposed"/>
+ <UML:TaggedValue tag="style" value="BackColor=-1;BorderColor=-1;BorderWidth=-1;FontColor=-1;VSwimLanes=0;HSwimLanes=0;BorderStyle=0;"/>
+ </UML:ModelElement.taggedValue>
+ </UML:ClassifierRole>
+ <UML:ClassifierRole name="WetRoom" xmi.id="EAID_783325F8_9AA7_4836_A0D9_DDA9103CDC71" visibility="public" base="EAID_11111111_5487_4080_A7F4_41526CB0AA00">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="isSpecification" value="false"/>
+ <UML:TaggedValue tag="ea_stype" value="Object"/>
+ <UML:TaggedValue tag="ea_ntype" value="0"/>
+ <UML:TaggedValue tag="version" value="1.0"/>
+ <UML:TaggedValue tag="classifier" value="EAID_244CC9E5_1F81_4164_9215_C048EC679543"/>
+ <UML:TaggedValue tag="package" value="EAPK_06C9C958_C14A_41f4_89A9_6873CCED37A7"/>
+ <UML:TaggedValue tag="classname" value="Bathroom"/>
+ <UML:TaggedValue tag="date_created" value="2006-07-21 19:31:18"/>
+ <UML:TaggedValue tag="date_modified" value="2006-07-21 19:34:02"/>
+ <UML:TaggedValue tag="gentype" value="Java"/>
+ <UML:TaggedValue tag="tagged" value="0"/>
+ <UML:TaggedValue tag="package_name" value="HouseExampleModel"/>
+ <UML:TaggedValue tag="phase" value="1.0"/>
+ <UML:TaggedValue tag="complexity" value="1"/>
+ <UML:TaggedValue tag="status" value="Proposed"/>
+ <UML:TaggedValue tag="style" value="BackColor=-1;BorderColor=-1;BorderWidth=-1;FontColor=-1;VSwimLanes=0;HSwimLanes=0;BorderStyle=0;"/>
+ </UML:ModelElement.taggedValue>
+ </UML:ClassifierRole>
+ </UML:Namespace.ownedElement>
+ <UML:Collaboration.interaction/>
+ </UML:Collaboration>
+ <UML:Association xmi.id="EAID_3F0E9949_A749_4958_9A1B_40E3F0E12393" visibility="public" isRoot="false" isLeaf="false" isAbstract="false">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="style" value="3"/>
+ <UML:TaggedValue tag="ea_type" value="Aggregation"/>
+ <UML:TaggedValue tag="direction" value="Source -&gt; Destination"/>
+ <UML:TaggedValue tag="linemode" value="3"/>
+ <UML:TaggedValue tag="seqno" value="0"/>
+ <UML:TaggedValue tag="subtype" value="Strong"/>
+ <UML:TaggedValue tag="headStyle" value="0"/>
+ <UML:TaggedValue tag="lineStyle" value="0"/>
+ <UML:TaggedValue tag="virtualInheritance" value="0"/>
+ </UML:ModelElement.taggedValue>
+ <UML:Association.connection>
+ <UML:AssociationEnd visibility="public" name="room" aggregation="none" isOrdered="false" isNavigable="false" type="EAID_597D270D_65EB_4941_908B_A42419BD7C3F">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="containment" value="Unspecified"/>
+ </UML:ModelElement.taggedValue>
+ </UML:AssociationEnd>
+ <UML:AssociationEnd visibility="public" aggregation="composite" isOrdered="false" isNavigable="true" type="EAID_3D59084A_BBC6_4d6b_B8E5_9A764D27563B">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="containment" value="Unspecified"/>
+ </UML:ModelElement.taggedValue>
+ </UML:AssociationEnd>
+ </UML:Association.connection>
+ </UML:Association>
+ <UML:Association xmi.id="EAID_9DDBD985_D9D5_408c_A8F4_40085F2E8E5B" visibility="public" isRoot="false" isLeaf="false" isAbstract="false">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="style" value="3"/>
+ <UML:TaggedValue tag="ea_type" value="Aggregation"/>
+ <UML:TaggedValue tag="direction" value="Source -&gt; Destination"/>
+ <UML:TaggedValue tag="linemode" value="3"/>
+ <UML:TaggedValue tag="seqno" value="0"/>
+ <UML:TaggedValue tag="subtype" value="Strong"/>
+ <UML:TaggedValue tag="headStyle" value="0"/>
+ <UML:TaggedValue tag="lineStyle" value="0"/>
+ <UML:TaggedValue tag="virtualInheritance" value="0"/>
+ </UML:ModelElement.taggedValue>
+ <UML:Association.connection>
+ <UML:AssociationEnd visibility="public" name="room" aggregation="none" isOrdered="false" isNavigable="false" type="EAID_9907C98A_7E63_47dc_8A4D_ED020B68C41D">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="containment" value="Unspecified"/>
+ </UML:ModelElement.taggedValue>
+ </UML:AssociationEnd>
+ <UML:AssociationEnd visibility="public" aggregation="composite" isOrdered="false" isNavigable="true" type="EAID_3D59084A_BBC6_4d6b_B8E5_9A764D27563B">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="containment" value="Unspecified"/>
+ </UML:ModelElement.taggedValue>
+ </UML:AssociationEnd>
+ </UML:Association.connection>
+ </UML:Association>
+ <UML:Association xmi.id="EAID_D775B2C1_C7AF_4a19_A4D5_7B3AC816664C" visibility="public" isRoot="false" isLeaf="false" isAbstract="false">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="style" value="3"/>
+ <UML:TaggedValue tag="ea_type" value="Aggregation"/>
+ <UML:TaggedValue tag="direction" value="Source -&gt; Destination"/>
+ <UML:TaggedValue tag="linemode" value="3"/>
+ <UML:TaggedValue tag="seqno" value="0"/>
+ <UML:TaggedValue tag="subtype" value="Strong"/>
+ <UML:TaggedValue tag="headStyle" value="0"/>
+ <UML:TaggedValue tag="lineStyle" value="0"/>
+ <UML:TaggedValue tag="virtualInheritance" value="0"/>
+ </UML:ModelElement.taggedValue>
+ <UML:Association.connection>
+ <UML:AssociationEnd visibility="public" name="room" aggregation="none" isOrdered="false" isNavigable="false" type="EAID_5D770C43_247F_41e6_BC35_93C5A9204E76">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="containment" value="Unspecified"/>
+ </UML:ModelElement.taggedValue>
+ </UML:AssociationEnd>
+ <UML:AssociationEnd visibility="public" aggregation="composite" isOrdered="false" isNavigable="true" type="EAID_3D59084A_BBC6_4d6b_B8E5_9A764D27563B">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="containment" value="Unspecified"/>
+ </UML:ModelElement.taggedValue>
+ </UML:AssociationEnd>
+ </UML:Association.connection>
+ </UML:Association>
+ <UML:Association xmi.id="EAID_FF2ECD20_22FF_4734_9DB4_8316A98DDC16" visibility="public" isRoot="false" isLeaf="false" isAbstract="false">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="style" value="3"/>
+ <UML:TaggedValue tag="ea_type" value="Aggregation"/>
+ <UML:TaggedValue tag="direction" value="Source -&gt; Destination"/>
+ <UML:TaggedValue tag="linemode" value="3"/>
+ <UML:TaggedValue tag="seqno" value="0"/>
+ <UML:TaggedValue tag="subtype" value="Strong"/>
+ <UML:TaggedValue tag="headStyle" value="0"/>
+ <UML:TaggedValue tag="lineStyle" value="0"/>
+ <UML:TaggedValue tag="privatedata5" value="SX=36;SY=-9;EX=45;EY=-9;"/>
+ <UML:TaggedValue tag="virtualInheritance" value="0"/>
+ </UML:ModelElement.taggedValue>
+ <UML:Association.connection>
+ <UML:AssociationEnd visibility="public" name="room" aggregation="none" isOrdered="false" isNavigable="false" type="EAID_783325F8_9AA7_4836_A0D9_DDA9103CDC71">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="containment" value="Unspecified"/>
+ </UML:ModelElement.taggedValue>
+ </UML:AssociationEnd>
+ <UML:AssociationEnd visibility="public" aggregation="composite" isOrdered="false" isNavigable="true" type="EAID_3D59084A_BBC6_4d6b_B8E5_9A764D27563B">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="containment" value="Unspecified"/>
+ </UML:ModelElement.taggedValue>
+ </UML:AssociationEnd>
+ </UML:Association.connection>
+ </UML:Association>
+ </UML:Namespace.ownedElement>
+ </UML:Package>
+ </UML:Namespace.ownedElement>
+ </UML:Package>
+ <UML:DataType xmi.id="eaxmiid0" name="String" visibility="private" isRoot="false" isLeaf="false" isAbstract="false"/>
+ <UML:DataType xmi.id="eaxmiid1" name="void" visibility="private" isRoot="false" isLeaf="false" isAbstract="false"/>
+ </UML:Namespace.ownedElement>
+ </UML:Model>
+ <UML:DiagramElement geometry="Left=260;Top=103;Right=359;Bottom=173;" subject="EAID_436D81AD_A9B0_44d8_8AD1_86BB0808DA32" seqno="7" style="DUID=CDDEF3BC;LBL=;">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="diagram_guid" value="EAID_2FBE7B3F_756C_46a0_90EE_93429FDA065D"/>
+ </UML:ModelElement.taggedValue>
+ </UML:DiagramElement>
+ <UML:DiagramElement geometry="Left=472;Top=163;Right=562;Bottom=233;" subject="EAID_2F1D9CC6_F38F_4a34_B3C4_B92915108384" seqno="3" style="DUID=12DDDFF9;LBL=;">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="diagram_guid" value="EAID_2FBE7B3F_756C_46a0_90EE_93429FDA065D"/>
+ </UML:ModelElement.taggedValue>
+ </UML:DiagramElement>
+ <UML:DiagramElement geometry="Left=415;Top=252;Right=530;Bottom=323;" subject="EAID_9D7C072D_2F74_4348_B1D9_BA3CFA72FC6F" seqno="8" style="DUID=FAD2A1B5;LBL=;">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="diagram_guid" value="EAID_2FBE7B3F_756C_46a0_90EE_93429FDA065D"/>
+ </UML:ModelElement.taggedValue>
+ </UML:DiagramElement>
+ <UML:DiagramElement geometry="Left=21;Top=241;Right=111;Bottom=311;" subject="EAID_CB9E84D4_9064_49d0_BF1E_EE53C05853EF" seqno="2" style="DUID=E58486A8;LBL=;">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="diagram_guid" value="EAID_2FBE7B3F_756C_46a0_90EE_93429FDA065D"/>
+ </UML:ModelElement.taggedValue>
+ </UML:DiagramElement>
+ <UML:DiagramElement geometry="Left=108;Top=132;Right=198;Bottom=222;" subject="EAID_E21E535F_E300_4405_A917_28FFB4B2DB6E" seqno="1" style="DUID=1CE36E20;">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="diagram_guid" value="EAID_2FBE7B3F_756C_46a0_90EE_93429FDA065D"/>
+ </UML:ModelElement.taggedValue>
+ </UML:DiagramElement>
+ <UML:DiagramElement geometry="Left=265;Top=280;Right=355;Bottom=350;" subject="EAID_14DB5E54_CD7B_4c84_998C_44960049D7E0" seqno="6" style="DUID=BFDAC143;LBL=;">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="diagram_guid" value="EAID_2FBE7B3F_756C_46a0_90EE_93429FDA065D"/>
+ </UML:ModelElement.taggedValue>
+ </UML:DiagramElement>
+ <UML:DiagramElement geometry="Left=145;Top=402;Right=235;Bottom=472;" subject="EAID_9CE44C59_37E4_4117_8C05_F87C4DC33529" seqno="5" style="DUID=BDEA24A9;LBL=;">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="diagram_guid" value="EAID_2FBE7B3F_756C_46a0_90EE_93429FDA065D"/>
+ </UML:ModelElement.taggedValue>
+ </UML:DiagramElement>
+ <UML:DiagramElement geometry="Left=394;Top=412;Right=484;Bottom=482;" subject="EAID_244CC9E5_1F81_4164_9215_C048EC679543" seqno="4" style="DUID=5DECEEC9;LBL=;">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="diagram_guid" value="EAID_2FBE7B3F_756C_46a0_90EE_93429FDA065D"/>
+ </UML:ModelElement.taggedValue>
+ </UML:DiagramElement>
+ <UML:DiagramElement geometry="Left=242;Top=137;Right=530;Bottom=187;" subject="EAID_3D59084A_BBC6_4d6b_B8E5_9A764D27563B" seqno="6" style="DUID=3D79E0A4;">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="diagram_guid" value="EAID_2DBA38A4_BB85_46dc_A5ED_C43AD3050EC4"/>
+ </UML:ModelElement.taggedValue>
+ </UML:DiagramElement>
+ <UML:DiagramElement geometry="Left=145;Top=288;Right=235;Bottom=338;" subject="EAID_597D270D_65EB_4941_908B_A42419BD7C3F" seqno="5" style="DUID=E585EE32;">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="diagram_guid" value="EAID_2DBA38A4_BB85_46dc_A5ED_C43AD3050EC4"/>
+ </UML:ModelElement.taggedValue>
+ </UML:DiagramElement>
+ <UML:DiagramElement geometry="Left=279;Top=289;Right=369;Bottom=339;" subject="EAID_5D770C43_247F_41e6_BC35_93C5A9204E76" seqno="4" style="DUID=DFC93F21;">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="diagram_guid" value="EAID_2DBA38A4_BB85_46dc_A5ED_C43AD3050EC4"/>
+ </UML:ModelElement.taggedValue>
+ </UML:DiagramElement>
+ <UML:DiagramElement geometry="Left=414;Top=289;Right=504;Bottom=339;" subject="EAID_9907C98A_7E63_47dc_8A4D_ED020B68C41D" seqno="3" style="DUID=EE2397B7;">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="diagram_guid" value="EAID_2DBA38A4_BB85_46dc_A5ED_C43AD3050EC4"/>
+ </UML:ModelElement.taggedValue>
+ </UML:DiagramElement>
+ <UML:DiagramElement geometry="Left=46;Top=136;Right=136;Bottom=186;" subject="EAID_960E757D_3D81_43e2_BA2E_5F7341421EA5" seqno="2" style="DUID=31760B86;">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="diagram_guid" value="EAID_2DBA38A4_BB85_46dc_A5ED_C43AD3050EC4"/>
+ </UML:ModelElement.taggedValue>
+ </UML:DiagramElement>
+ <UML:DiagramElement geometry="Left=543;Top=289;Right=633;Bottom=339;" subject="EAID_783325F8_9AA7_4836_A0D9_DDA9103CDC71" seqno="1" style="DUID=81FBAF85;">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="diagram_guid" value="EAID_2DBA38A4_BB85_46dc_A5ED_C43AD3050EC4"/>
+ </UML:ModelElement.taggedValue>
+ </UML:DiagramElement>
+ <UML:DiagramElement geometry="SX=8;SY=5;EX=8;EY=5;EDGE=3;$LLB=;LLT=;LMT=;LMB=;LRT=CX=61:CY=15:OX=0:OY=0:HDN=0:BLD=0:ITA=0:UND=0:CLR=-1:ALN=0:DIR=0:;LRB=CX=16:CY=15:OX=0:OY=0:HDN=0:BLD=0:ITA=0:UND=0:CLR=-1:ALN=0:DIR=0:;" subject="EAID_161BF4FA_8F48_441f_AF1F_D36014D57A1F" style="Mode=3;EOID=5DECEEC9;SOID=CDDEF3BC;">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="diagram_guid" value="EAID_2FBE7B3F_756C_46a0_90EE_93429FDA065D"/>
+ <UML:TaggedValue tag="hidden" value="0"/>
+ </UML:ModelElement.taggedValue>
+ </UML:DiagramElement>
+ <UML:DiagramElement geometry="SX=6;SY=-12;EX=-44;EY=-9;EDGE=1;$LLB=CX=16:CY=15:OX=13:OY=-3:HDN=0:BLD=0:ITA=0:UND=0:CLR=-1:ALN=0:DIR=0:;LLT=CX=49:CY=15:OX=0:OY=-14:HDN=0:BLD=0:ITA=0:UND=0:CLR=-1:ALN=0:DIR=0:;LMT=;LMB=;LRT=CX=44:CY=15:OX=-10:OY=-1:HDN=0:BLD=0:ITA=0:UND=0:CLR=-1:ALN=0:DIR=0:;LRB=CX=16:CY=15:OX=-23:OY=-4:HDN=0:BLD=0:ITA=0:UND=0:CLR=-1:ALN=0:DIR=0:;" subject="EAID_9F0FF178_300F_45f4_B31F_9633AF770E44" style="Mode=3;EOID=CDDEF3BC;SOID=BDEA24A9;">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="diagram_guid" value="EAID_2FBE7B3F_756C_46a0_90EE_93429FDA065D"/>
+ <UML:TaggedValue tag="hidden" value="0"/>
+ </UML:ModelElement.taggedValue>
+ </UML:DiagramElement>
+ <UML:DiagramElement geometry="EDGE=4;$LLB=;LLT=;LMT=;LMB=;LRT=CX=43:CY=15:OX=0:OY=0:HDN=0:BLD=0:ITA=0:UND=0:CLR=-1:ALN=0:DIR=0:;LRB=CX=26:CY=15:OX=0:OY=0:HDN=0:BLD=0:ITA=0:UND=0:CLR=-1:ALN=0:DIR=0:;" subject="EAID_A9112D1C_DCB6_4040_B466_D5F560898B33" style="Mode=3;EOID=CDDEF3BC;SOID=12DDDFF9;">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="diagram_guid" value="EAID_2FBE7B3F_756C_46a0_90EE_93429FDA065D"/>
+ <UML:TaggedValue tag="hidden" value="0"/>
+ </UML:ModelElement.taggedValue>
+ </UML:DiagramElement>
+ <UML:DiagramElement geometry="SX=-2;SY=-1;EX=-2;EY=-1;EDGE=1;$LLB=CX=26:CY=15:OX=0:OY=0:HDN=0:BLD=0:ITA=0:UND=0:CLR=-1:ALN=0:DIR=0:;LLT=CX=40:CY=15:OX=-12:OY=-3:HDN=0:BLD=0:ITA=0:UND=0:CLR=-1:ALN=0:DIR=0:;LMT=;LMB=;LRT=CX=44:CY=15:OX=-6:OY=-11:HDN=0:BLD=0:ITA=0:UND=0:CLR=-1:ALN=0:DIR=0:;LRB=CX=16:CY=15:OX=-6:OY=-6:HDN=0:BLD=0:ITA=0:UND=0:CLR=-1:ALN=0:DIR=0:;" subject="EAID_F15A2B25_0DA9_4203_8FC9_25645610B5E5" style="Mode=3;EOID=CDDEF3BC;SOID=BFDAC143;">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="diagram_guid" value="EAID_2FBE7B3F_756C_46a0_90EE_93429FDA065D"/>
+ <UML:TaggedValue tag="hidden" value="0"/>
+ </UML:ModelElement.taggedValue>
+ </UML:DiagramElement>
+ <UML:DiagramElement geometry="EDGE=4;$LLB=;LLT=;LMT=;LMB=;LRT=CX=43:CY=15:OX=0:OY=0:HDN=0:BLD=0:ITA=0:UND=0:CLR=-1:ALN=0:DIR=0:;LRB=CX=26:CY=15:OX=0:OY=0:HDN=0:BLD=0:ITA=0:UND=0:CLR=-1:ALN=0:DIR=0:;" subject="EAID_A9112D1C_DCB6_4040_B466_D5F560898B33" style="Mode=3;EOID=CDDEF3BC;SOID=12DDDFF9;">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="diagram_guid" value="EAID_2FBE7B3F_756C_46a0_90EE_93429FDA065D"/>
+ <UML:TaggedValue tag="hidden" value="0"/>
+ </UML:ModelElement.taggedValue>
+ </UML:DiagramElement>
+ <UML:DiagramElement geometry="SX=-24;SY=-16;EDGE=4;$LLB=;LLT=;LMT=;LMB=;LRT=;LRB=;" subject="EAID_E824691D_2AE7_4c9c_8408_881A2B85516F" style="Mode=3;EOID=E58486A8;SOID=BDEA24A9;">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="diagram_guid" value="EAID_2FBE7B3F_756C_46a0_90EE_93429FDA065D"/>
+ <UML:TaggedValue tag="hidden" value="0"/>
+ </UML:ModelElement.taggedValue>
+ </UML:DiagramElement>
+ <UML:DiagramElement geometry="EDGE=1;$LLB=;LLT=;LMT=;LMB=CX=77:CY=15:OX=0:OY=0:HDN=0:BLD=0:ITA=0:UND=0:CLR=-1:ALN=0:DIR=0:;LRT=;LRB=;" subject="EAID_9613E9DD_8709_4e9e_92D8_2F6CA9835851" style="Mode=3;EOID=1CE36E20;SOID=BDEA24A9;">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="diagram_guid" value="EAID_2FBE7B3F_756C_46a0_90EE_93429FDA065D"/>
+ <UML:TaggedValue tag="hidden" value="0"/>
+ </UML:ModelElement.taggedValue>
+ </UML:DiagramElement>
+ <UML:DiagramElement geometry="SX=-2;SY=-1;EX=-2;EY=-1;EDGE=1;$LLB=CX=26:CY=15:OX=0:OY=0:HDN=0:BLD=0:ITA=0:UND=0:CLR=-1:ALN=0:DIR=0:;LLT=CX=40:CY=15:OX=-12:OY=-3:HDN=0:BLD=0:ITA=0:UND=0:CLR=-1:ALN=0:DIR=0:;LMT=;LMB=;LRT=CX=44:CY=15:OX=-6:OY=-11:HDN=0:BLD=0:ITA=0:UND=0:CLR=-1:ALN=0:DIR=0:;LRB=CX=16:CY=15:OX=-6:OY=-6:HDN=0:BLD=0:ITA=0:UND=0:CLR=-1:ALN=0:DIR=0:;" subject="EAID_F15A2B25_0DA9_4203_8FC9_25645610B5E5" style="Mode=3;EOID=CDDEF3BC;SOID=BFDAC143;">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="diagram_guid" value="EAID_2FBE7B3F_756C_46a0_90EE_93429FDA065D"/>
+ <UML:TaggedValue tag="hidden" value="0"/>
+ </UML:ModelElement.taggedValue>
+ </UML:DiagramElement>
+ <UML:DiagramElement geometry="EDGE=1;$LLB=;LLT=;LMT=;LMB=;LRT=;LRB=;" subject="EAID_03052911_3625_4472_8C6B_5E1742FED753" style="Mode=3;EOID=BFDAC143;SOID=5DECEEC9;">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="diagram_guid" value="EAID_2FBE7B3F_756C_46a0_90EE_93429FDA065D"/>
+ <UML:TaggedValue tag="hidden" value="0"/>
+ </UML:ModelElement.taggedValue>
+ </UML:DiagramElement>
+ <UML:DiagramElement geometry="EX=37;EY=3;EDGE=1;$LLB=;LLT=;LMT=;LMB=;LRT=;LRB=;" subject="EAID_C3D18D92_3A5C_4112_9958_926A259BAE24" style="Mode=3;EOID=BFDAC143;SOID=BDEA24A9;">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="diagram_guid" value="EAID_2FBE7B3F_756C_46a0_90EE_93429FDA065D"/>
+ <UML:TaggedValue tag="hidden" value="0"/>
+ </UML:ModelElement.taggedValue>
+ </UML:DiagramElement>
+ <UML:DiagramElement geometry="EDGE=1;$LLB=;LLT=;LMT=;LMB=CX=77:CY=15:OX=0:OY=0:HDN=0:BLD=0:ITA=0:UND=0:CLR=-1:ALN=0:DIR=0:;LRT=;LRB=;" subject="EAID_9613E9DD_8709_4e9e_92D8_2F6CA9835851" style="Mode=3;EOID=1CE36E20;SOID=BDEA24A9;">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="diagram_guid" value="EAID_2FBE7B3F_756C_46a0_90EE_93429FDA065D"/>
+ <UML:TaggedValue tag="hidden" value="0"/>
+ </UML:ModelElement.taggedValue>
+ </UML:DiagramElement>
+ <UML:DiagramElement geometry="SX=6;SY=-12;EX=-44;EY=-9;EDGE=1;$LLB=CX=16:CY=15:OX=13:OY=-3:HDN=0:BLD=0:ITA=0:UND=0:CLR=-1:ALN=0:DIR=0:;LLT=CX=49:CY=15:OX=0:OY=-14:HDN=0:BLD=0:ITA=0:UND=0:CLR=-1:ALN=0:DIR=0:;LMT=;LMB=;LRT=CX=44:CY=15:OX=-10:OY=-1:HDN=0:BLD=0:ITA=0:UND=0:CLR=-1:ALN=0:DIR=0:;LRB=CX=16:CY=15:OX=-23:OY=-4:HDN=0:BLD=0:ITA=0:UND=0:CLR=-1:ALN=0:DIR=0:;" subject="EAID_9F0FF178_300F_45f4_B31F_9633AF770E44" style="Mode=3;EOID=CDDEF3BC;SOID=BDEA24A9;">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="diagram_guid" value="EAID_2FBE7B3F_756C_46a0_90EE_93429FDA065D"/>
+ <UML:TaggedValue tag="hidden" value="0"/>
+ </UML:ModelElement.taggedValue>
+ </UML:DiagramElement>
+ <UML:DiagramElement geometry="EX=37;EY=3;EDGE=1;$LLB=;LLT=;LMT=;LMB=;LRT=;LRB=;" subject="EAID_C3D18D92_3A5C_4112_9958_926A259BAE24" style="Mode=3;EOID=BFDAC143;SOID=BDEA24A9;">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="diagram_guid" value="EAID_2FBE7B3F_756C_46a0_90EE_93429FDA065D"/>
+ <UML:TaggedValue tag="hidden" value="0"/>
+ </UML:ModelElement.taggedValue>
+ </UML:DiagramElement>
+ <UML:DiagramElement geometry="SX=-24;SY=-16;EDGE=4;$LLB=;LLT=;LMT=;LMB=;LRT=;LRB=;" subject="EAID_E824691D_2AE7_4c9c_8408_881A2B85516F" style="Mode=3;EOID=E58486A8;SOID=BDEA24A9;">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="diagram_guid" value="EAID_2FBE7B3F_756C_46a0_90EE_93429FDA065D"/>
+ <UML:TaggedValue tag="hidden" value="0"/>
+ </UML:ModelElement.taggedValue>
+ </UML:DiagramElement>
+ <UML:DiagramElement geometry="EDGE=1;$LLB=;LLT=;LMT=;LMB=;LRT=;LRB=;" subject="EAID_03052911_3625_4472_8C6B_5E1742FED753" style="Mode=3;EOID=BFDAC143;SOID=5DECEEC9;">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="diagram_guid" value="EAID_2FBE7B3F_756C_46a0_90EE_93429FDA065D"/>
+ <UML:TaggedValue tag="hidden" value="0"/>
+ </UML:ModelElement.taggedValue>
+ </UML:DiagramElement>
+ <UML:DiagramElement geometry="SX=8;SY=5;EX=8;EY=5;EDGE=3;$LLB=;LLT=;LMT=;LMB=;LRT=CX=61:CY=15:OX=0:OY=0:HDN=0:BLD=0:ITA=0:UND=0:CLR=-1:ALN=0:DIR=0:;LRB=CX=16:CY=15:OX=0:OY=0:HDN=0:BLD=0:ITA=0:UND=0:CLR=-1:ALN=0:DIR=0:;" subject="EAID_161BF4FA_8F48_441f_AF1F_D36014D57A1F" style="Mode=3;EOID=5DECEEC9;SOID=CDDEF3BC;">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="diagram_guid" value="EAID_2FBE7B3F_756C_46a0_90EE_93429FDA065D"/>
+ <UML:TaggedValue tag="hidden" value="0"/>
+ </UML:ModelElement.taggedValue>
+ </UML:DiagramElement>
+ <UML:DiagramElement geometry="EDGE=1;$LLB=;LLT=CX=40:CY=15:OX=0:OY=0:HDN=0:BLD=0:ITA=0:UND=0:CLR=-1:ALN=0:DIR=0:;LMT=;LMB=;LRT=;LRB=;" subject="EAID_3F0E9949_A749_4958_9A1B_40E3F0E12393" style="Mode=3;EOID=3D79E0A4;SOID=E585EE32;">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="diagram_guid" value="EAID_2DBA38A4_BB85_46dc_A5ED_C43AD3050EC4"/>
+ <UML:TaggedValue tag="hidden" value="0"/>
+ </UML:ModelElement.taggedValue>
+ </UML:DiagramElement>
+ <UML:DiagramElement geometry="EDGE=1;$LLB=;LLT=CX=40:CY=15:OX=0:OY=0:HDN=0:BLD=0:ITA=0:UND=0:CLR=-1:ALN=0:DIR=0:;LMT=;LMB=;LRT=;LRB=;" subject="EAID_9DDBD985_D9D5_408c_A8F4_40085F2E8E5B" style="Mode=3;EOID=3D79E0A4;SOID=EE2397B7;">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="diagram_guid" value="EAID_2DBA38A4_BB85_46dc_A5ED_C43AD3050EC4"/>
+ <UML:TaggedValue tag="hidden" value="0"/>
+ </UML:ModelElement.taggedValue>
+ </UML:DiagramElement>
+ <UML:DiagramElement geometry="EDGE=2;$LLB=;LLT=;LMT=;LMB=;LRT=CX=43:CY=15:OX=0:OY=0:HDN=0:BLD=0:ITA=0:UND=0:CLR=-1:ALN=0:DIR=0:;LRB=;" subject="EAID_AFD66AB6_337B_49c5_9FA9_0CFAFEB77AA7" style="Mode=3;EOID=3D79E0A4;SOID=31760B86;">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="diagram_guid" value="EAID_2DBA38A4_BB85_46dc_A5ED_C43AD3050EC4"/>
+ <UML:TaggedValue tag="hidden" value="0"/>
+ </UML:ModelElement.taggedValue>
+ </UML:DiagramElement>
+ <UML:DiagramElement geometry="EDGE=1;$LLB=;LLT=CX=40:CY=15:OX=0:OY=0:HDN=0:BLD=0:ITA=0:UND=0:CLR=-1:ALN=0:DIR=0:;LMT=;LMB=;LRT=;LRB=;" subject="EAID_D775B2C1_C7AF_4a19_A4D5_7B3AC816664C" style="Mode=3;EOID=3D79E0A4;SOID=DFC93F21;">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="diagram_guid" value="EAID_2DBA38A4_BB85_46dc_A5ED_C43AD3050EC4"/>
+ <UML:TaggedValue tag="hidden" value="0"/>
+ </UML:ModelElement.taggedValue>
+ </UML:DiagramElement>
+ <UML:DiagramElement geometry="SX=36;SY=-9;EX=45;EY=-9;EDGE=1;$LLB=;LLT=CX=40:CY=15:OX=-17:OY=-1:HDN=0:BLD=0:ITA=0:UND=0:CLR=-1:ALN=0:DIR=0:;LMT=;LMB=;LRT=;LRB=;" subject="EAID_FF2ECD20_22FF_4734_9DB4_8316A98DDC16" style="Mode=3;EOID=3D79E0A4;SOID=81FBAF85;">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="diagram_guid" value="EAID_2DBA38A4_BB85_46dc_A5ED_C43AD3050EC4"/>
+ <UML:TaggedValue tag="hidden" value="0"/>
+ </UML:ModelElement.taggedValue>
+ </UML:DiagramElement>
+ <UML:DiagramElement geometry="EDGE=1;$LLB=;LLT=CX=40:CY=15:OX=0:OY=0:HDN=0:BLD=0:ITA=0:UND=0:CLR=-1:ALN=0:DIR=0:;LMT=;LMB=;LRT=;LRB=;" subject="EAID_3F0E9949_A749_4958_9A1B_40E3F0E12393" style="Mode=3;EOID=3D79E0A4;SOID=E585EE32;">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="diagram_guid" value="EAID_2DBA38A4_BB85_46dc_A5ED_C43AD3050EC4"/>
+ <UML:TaggedValue tag="hidden" value="0"/>
+ </UML:ModelElement.taggedValue>
+ </UML:DiagramElement>
+ <UML:DiagramElement geometry="EDGE=1;$LLB=;LLT=CX=40:CY=15:OX=0:OY=0:HDN=0:BLD=0:ITA=0:UND=0:CLR=-1:ALN=0:DIR=0:;LMT=;LMB=;LRT=;LRB=;" subject="EAID_D775B2C1_C7AF_4a19_A4D5_7B3AC816664C" style="Mode=3;EOID=3D79E0A4;SOID=DFC93F21;">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="diagram_guid" value="EAID_2DBA38A4_BB85_46dc_A5ED_C43AD3050EC4"/>
+ <UML:TaggedValue tag="hidden" value="0"/>
+ </UML:ModelElement.taggedValue>
+ </UML:DiagramElement>
+ <UML:DiagramElement geometry="EDGE=1;$LLB=;LLT=CX=40:CY=15:OX=0:OY=0:HDN=0:BLD=0:ITA=0:UND=0:CLR=-1:ALN=0:DIR=0:;LMT=;LMB=;LRT=;LRB=;" subject="EAID_9DDBD985_D9D5_408c_A8F4_40085F2E8E5B" style="Mode=3;EOID=3D79E0A4;SOID=EE2397B7;">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="diagram_guid" value="EAID_2DBA38A4_BB85_46dc_A5ED_C43AD3050EC4"/>
+ <UML:TaggedValue tag="hidden" value="0"/>
+ </UML:ModelElement.taggedValue>
+ </UML:DiagramElement>
+ <UML:DiagramElement geometry="EDGE=2;$LLB=;LLT=;LMT=;LMB=;LRT=CX=43:CY=15:OX=0:OY=0:HDN=0:BLD=0:ITA=0:UND=0:CLR=-1:ALN=0:DIR=0:;LRB=;" subject="EAID_AFD66AB6_337B_49c5_9FA9_0CFAFEB77AA7" style="Mode=3;EOID=3D79E0A4;SOID=31760B86;">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="diagram_guid" value="EAID_2DBA38A4_BB85_46dc_A5ED_C43AD3050EC4"/>
+ <UML:TaggedValue tag="hidden" value="0"/>
+ </UML:ModelElement.taggedValue>
+ </UML:DiagramElement>
+ <UML:DiagramElement geometry="SX=36;SY=-9;EX=45;EY=-9;EDGE=1;$LLB=;LLT=CX=40:CY=15:OX=-17:OY=-1:HDN=0:BLD=0:ITA=0:UND=0:CLR=-1:ALN=0:DIR=0:;LMT=;LMB=;LRT=;LRB=;" subject="EAID_FF2ECD20_22FF_4734_9DB4_8316A98DDC16" style="Mode=3;EOID=3D79E0A4;SOID=81FBAF85;">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="diagram_guid" value="EAID_2DBA38A4_BB85_46dc_A5ED_C43AD3050EC4"/>
+ <UML:TaggedValue tag="hidden" value="0"/>
+ </UML:ModelElement.taggedValue>
+ </UML:DiagramElement>
+ </XMI.content>
+ <XMI.difference/>
+ <XMI.extensions xmi.extender="Enterprise Architect 2.5"/>
+</XMI>
diff --git a/lib/puppet/vendor/rgen/test/testmodel/ea_testmodel_partial.xml b/lib/puppet/vendor/rgen/test/testmodel/ea_testmodel_partial.xml
new file mode 100644
index 000000000..3ad796eb5
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/testmodel/ea_testmodel_partial.xml
@@ -0,0 +1,317 @@
+<?xml version="1.0" encoding="windows-1252"?>
+<XMI xmi.version="1.1" xmlns:UML="omg.org/UML1.3" timestamp="2007-06-10 12:35:22">
+ <XMI.header>
+ <XMI.documentation>
+ <XMI.exporter>Enterprise Architect</XMI.exporter>
+ <XMI.exporterVersion>2.5</XMI.exporterVersion>
+ </XMI.documentation>
+ </XMI.header>
+ <XMI.content>
+ <UML:Model name="EA Model" xmi.id="MX_EAID_F9D8C6E3_4DAD_4aa2_AD47_D0ABA4E93E08">
+ <UML:Namespace.ownedElement>
+ <UML:Class name="EARootClass" xmi.id="EAID_11111111_5487_4080_A7F4_41526CB0AA00" isRoot="true" isLeaf="false" isAbstract="false"/>
+ <UML:Package name="Rooms" xmi.id="EAPK_F9D8C6E3_4DAD_4aa2_AD47_D0ABA4E93E08" isRoot="true" isLeaf="false" isAbstract="false" visibility="public">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="created" value="2006-06-23 00:00:00"/>
+ <UML:TaggedValue tag="modified" value="2006-06-23 00:00:00"/>
+ <UML:TaggedValue tag="iscontrolled" value="FALSE"/>
+ <UML:TaggedValue tag="isprotected" value="FALSE"/>
+ <UML:TaggedValue tag="usedtd" value="FALSE"/>
+ <UML:TaggedValue tag="logxml" value="FALSE"/>
+ <UML:TaggedValue tag="phase" value="1.0"/>
+ <UML:TaggedValue tag="status" value="Proposed"/>
+ <UML:TaggedValue tag="complexity" value="1"/>
+ <UML:TaggedValue tag="ea_stype" value="Public"/>
+ </UML:ModelElement.taggedValue>
+ <UML:Namespace.ownedElement>
+ <UML:Class name="Room" xmi.id="EAID_14DB5E54_CD7B_4c84_998C_44960049D7E0" visibility="public" namespace="EAPK_F9D8C6E3_4DAD_4aa2_AD47_D0ABA4E93E08" isRoot="false" isLeaf="false" isAbstract="false" isActive="false">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="isSpecification" value="false"/>
+ <UML:TaggedValue tag="ea_stype" value="Class"/>
+ <UML:TaggedValue tag="ea_ntype" value="0"/>
+ <UML:TaggedValue tag="version" value="1.0"/>
+ <UML:TaggedValue tag="package" value="EAPK_F9D8C6E3_4DAD_4aa2_AD47_D0ABA4E93E08"/>
+ <UML:TaggedValue tag="date_created" value="2005-09-16 19:52:28"/>
+ <UML:TaggedValue tag="date_modified" value="2006-06-22 21:15:25"/>
+ <UML:TaggedValue tag="gentype" value="Java"/>
+ <UML:TaggedValue tag="tagged" value="0"/>
+ <UML:TaggedValue tag="package_name" value="Rooms"/>
+ <UML:TaggedValue tag="phase" value="1.0"/>
+ <UML:TaggedValue tag="complexity" value="1"/>
+ <UML:TaggedValue tag="status" value="Proposed"/>
+ <UML:TaggedValue tag="style" value="BackColor=-1;BorderColor=-1;BorderWidth=-1;FontColor=-1;VSwimLanes=0;HSwimLanes=0;BorderStyle=0;"/>
+ </UML:ModelElement.taggedValue>
+ <UML:Classifier.feature>
+ <UML:Operation name="enter" visibility="public" ownerScope="instance" isQuery="false" concurrency="sequential">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="type" value="void"/>
+ <UML:TaggedValue tag="const" value="false"/>
+ <UML:TaggedValue tag="synchronised" value="0"/>
+ <UML:TaggedValue tag="concurrency" value="Sequential"/>
+ <UML:TaggedValue tag="position" value="0"/>
+ <UML:TaggedValue tag="returnarray" value="0"/>
+ <UML:TaggedValue tag="pure" value="0"/>
+ <UML:TaggedValue tag="ea_guid" value="{A8C01177-4523-44c4-87F7-F2B8F9F3C915}"/>
+ </UML:ModelElement.taggedValue>
+ <UML:BehavioralFeature.parameter>
+ <UML:Parameter kind="return" visibility="public">
+ <UML:Parameter.type>
+ <UML:Classifier xmi.idref="eaxmiid0"/>
+ </UML:Parameter.type>
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="pos" value="0"/>
+ <UML:TaggedValue tag="type" value="void"/>
+ <UML:TaggedValue tag="const" value="0"/>
+ <UML:TaggedValue tag="ea_guid" value="{A8C01177-4523-44c4-87F7-F2B8F9F3C915}"/>
+ </UML:ModelElement.taggedValue>
+ <UML:Parameter.defaultValue>
+ <UML:Expression/>
+ </UML:Parameter.defaultValue>
+ </UML:Parameter>
+ </UML:BehavioralFeature.parameter>
+ </UML:Operation>
+ </UML:Classifier.feature>
+ </UML:Class>
+ <UML:Association xmi.id="EAID_F15A2B25_0DA9_4203_8FC9_25645610B5E5" visibility="public" isRoot="false" isLeaf="false" isAbstract="false">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="style" value="3"/>
+ <UML:TaggedValue tag="ea_type" value="Aggregation"/>
+ <UML:TaggedValue tag="direction" value="Source -&gt; Destination"/>
+ <UML:TaggedValue tag="linemode" value="3"/>
+ <UML:TaggedValue tag="seqno" value="0"/>
+ <UML:TaggedValue tag="subtype" value="Strong"/>
+ <UML:TaggedValue tag="headStyle" value="0"/>
+ <UML:TaggedValue tag="lineStyle" value="0"/>
+ <UML:TaggedValue tag="privatedata5" value="SX=-2;SY=-1;EX=-2;EY=-1;"/>
+ <UML:TaggedValue tag="virtualInheritance" value="0"/>
+ </UML:ModelElement.taggedValue>
+ <UML:Association.connection>
+ <UML:AssociationEnd visibility="public" multiplicity="1..*" name="room" aggregation="none" isOrdered="false" isNavigable="false" type="EAID_14DB5E54_CD7B_4c84_998C_44960049D7E0">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="containment" value="Unspecified"/>
+ </UML:ModelElement.taggedValue>
+ </UML:AssociationEnd>
+ <UML:AssociationEnd visibility="public" multiplicity="1" name="house" aggregation="composite" isOrdered="false" isNavigable="true" type="EAID_436D81AD_A9B0_44d8_8AD1_86BB0808DA32">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="containment" value="Unspecified"/>
+ </UML:ModelElement.taggedValue>
+ </UML:AssociationEnd>
+ </UML:Association.connection>
+ </UML:Association>
+ <UML:Generalization subtype="EAID_244CC9E5_1F81_4164_9215_C048EC679543" supertype="EAID_14DB5E54_CD7B_4c84_998C_44960049D7E0" xmi.id="EAID_03052911_3625_4472_8C6B_5E1742FED753" visibility="public">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="style" value="3"/>
+ <UML:TaggedValue tag="ea_type" value="Generalization"/>
+ <UML:TaggedValue tag="direction" value="Source -&gt; Destination"/>
+ <UML:TaggedValue tag="linemode" value="3"/>
+ <UML:TaggedValue tag="seqno" value="0"/>
+ <UML:TaggedValue tag="headStyle" value="0"/>
+ <UML:TaggedValue tag="lineStyle" value="0"/>
+ <UML:TaggedValue tag="src_visibility" value="Public"/>
+ <UML:TaggedValue tag="src_isOrdered" value="false"/>
+ <UML:TaggedValue tag="src_isNavigable" value="false"/>
+ <UML:TaggedValue tag="dst_visibility" value="Public"/>
+ <UML:TaggedValue tag="dst_isOrdered" value="false"/>
+ <UML:TaggedValue tag="dst_isNavigable" value="true"/>
+ <UML:TaggedValue tag="$ea_xref_property" value="$XREFPROP=$XID={DF243A69-1029-4c4c-A1A0-C821A1DF9623}$XID;$NAM=CustomProperties$NAM;$TYP=connector property$TYP;$VIS=Public$VIS;$DES=@PROP=@NAME=isSubstitutable@ENDNAME;@TYPE=boolean@ENDTYPE;@VALU=@ENDVALU;@PRMT=@ENDPRMT;@ENDPROP;$DES;$CLT={03052911-3625-4472-8C6B-5E1742FED753}$CLT;$SUP=&lt;none&gt;$SUP;$ENDXREF;"/>
+ </UML:ModelElement.taggedValue>
+ </UML:Generalization>
+ <UML:Generalization subtype="EAID_9CE44C59_37E4_4117_8C05_F87C4DC33529" supertype="EAID_14DB5E54_CD7B_4c84_998C_44960049D7E0" xmi.id="EAID_C3D18D92_3A5C_4112_9958_926A259BAE24" visibility="public">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="style" value="3"/>
+ <UML:TaggedValue tag="ea_type" value="Generalization"/>
+ <UML:TaggedValue tag="direction" value="Source -&gt; Destination"/>
+ <UML:TaggedValue tag="linemode" value="3"/>
+ <UML:TaggedValue tag="seqno" value="0"/>
+ <UML:TaggedValue tag="headStyle" value="0"/>
+ <UML:TaggedValue tag="lineStyle" value="0"/>
+ <UML:TaggedValue tag="src_visibility" value="Public"/>
+ <UML:TaggedValue tag="src_isOrdered" value="false"/>
+ <UML:TaggedValue tag="src_isNavigable" value="false"/>
+ <UML:TaggedValue tag="dst_visibility" value="Public"/>
+ <UML:TaggedValue tag="dst_isOrdered" value="false"/>
+ <UML:TaggedValue tag="dst_isNavigable" value="true"/>
+ <UML:TaggedValue tag="$ea_xref_property" value="$XREFPROP=$XID={DB1C706A-470A-45ed-89DA-601821419545}$XID;$NAM=CustomProperties$NAM;$TYP=connector property$TYP;$VIS=Public$VIS;$DES=@PROP=@NAME=isSubstitutable@ENDNAME;@TYPE=boolean@ENDTYPE;@VALU=@ENDVALU;@PRMT=@ENDPRMT;@ENDPROP;$DES;$CLT={C3D18D92-3A5C-4112-9958-926A259BAE24}$CLT;$SUP=&lt;none&gt;$SUP;$ENDXREF;"/>
+ <UML:TaggedValue tag="privatedata5" value="EX=37;EY=3;"/>
+ </UML:ModelElement.taggedValue>
+ </UML:Generalization>
+ <UML:Class name="Kitchen" xmi.id="EAID_9CE44C59_37E4_4117_8C05_F87C4DC33529" visibility="public" namespace="EAPK_F9D8C6E3_4DAD_4aa2_AD47_D0ABA4E93E08" isRoot="false" isLeaf="false" isAbstract="false" isActive="false">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="isSpecification" value="false"/>
+ <UML:TaggedValue tag="ea_stype" value="Class"/>
+ <UML:TaggedValue tag="ea_ntype" value="0"/>
+ <UML:TaggedValue tag="version" value="1.0"/>
+ <UML:TaggedValue tag="package" value="EAPK_F9D8C6E3_4DAD_4aa2_AD47_D0ABA4E93E08"/>
+ <UML:TaggedValue tag="date_created" value="2005-11-30 19:26:13"/>
+ <UML:TaggedValue tag="date_modified" value="2006-06-22 21:15:34"/>
+ <UML:TaggedValue tag="gentype" value="Java"/>
+ <UML:TaggedValue tag="tagged" value="0"/>
+ <UML:TaggedValue tag="package_name" value="Rooms"/>
+ <UML:TaggedValue tag="phase" value="1.0"/>
+ <UML:TaggedValue tag="complexity" value="1"/>
+ <UML:TaggedValue tag="status" value="Proposed"/>
+ <UML:TaggedValue tag="style" value="BackColor=-1;BorderColor=-1;BorderWidth=-1;FontColor=-1;VSwimLanes=0;HSwimLanes=0;BorderStyle=0;"/>
+ </UML:ModelElement.taggedValue>
+ </UML:Class>
+ <UML:Association xmi.id="EAID_9F0FF178_300F_45f4_B31F_9633AF770E44" visibility="public" isRoot="false" isLeaf="false" isAbstract="false">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="style" value="3"/>
+ <UML:TaggedValue tag="ea_type" value="Association"/>
+ <UML:TaggedValue tag="direction" value="Bi-Directional"/>
+ <UML:TaggedValue tag="linemode" value="3"/>
+ <UML:TaggedValue tag="seqno" value="0"/>
+ <UML:TaggedValue tag="headStyle" value="0"/>
+ <UML:TaggedValue tag="lineStyle" value="0"/>
+ <UML:TaggedValue tag="privatedata5" value="SX=-39;SY=-10;EX=-43;EY=-10;EDGE=1;"/>
+ <UML:TaggedValue tag="virtualInheritance" value="0"/>
+ </UML:ModelElement.taggedValue>
+ <UML:Association.connection>
+ <UML:AssociationEnd visibility="public" multiplicity="1" name="kitchen" aggregation="none" isOrdered="false" isNavigable="true" type="EAID_9CE44C59_37E4_4117_8C05_F87C4DC33529">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="containment" value="Unspecified"/>
+ </UML:ModelElement.taggedValue>
+ </UML:AssociationEnd>
+ <UML:AssociationEnd visibility="public" multiplicity="1" name="house" aggregation="none" isOrdered="false" isNavigable="true" type="EAID_436D81AD_A9B0_44d8_8AD1_86BB0808DA32">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="containment" value="Unspecified"/>
+ </UML:ModelElement.taggedValue>
+ </UML:AssociationEnd>
+ </UML:Association.connection>
+ </UML:Association>
+ <UML:Generalization subtype="EAID_9CE44C59_37E4_4117_8C05_F87C4DC33529" supertype="EAID_CB9E84D4_9064_49d0_BF1E_EE53C05853EF" xmi.id="EAID_E824691D_2AE7_4c9c_8408_881A2B85516F" visibility="public">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="style" value="3"/>
+ <UML:TaggedValue tag="ea_type" value="Generalization"/>
+ <UML:TaggedValue tag="direction" value="Source -&gt; Destination"/>
+ <UML:TaggedValue tag="linemode" value="3"/>
+ <UML:TaggedValue tag="seqno" value="0"/>
+ <UML:TaggedValue tag="headStyle" value="0"/>
+ <UML:TaggedValue tag="lineStyle" value="0"/>
+ <UML:TaggedValue tag="src_visibility" value="Public"/>
+ <UML:TaggedValue tag="src_isOrdered" value="false"/>
+ <UML:TaggedValue tag="src_isNavigable" value="false"/>
+ <UML:TaggedValue tag="dst_visibility" value="Public"/>
+ <UML:TaggedValue tag="dst_isOrdered" value="false"/>
+ <UML:TaggedValue tag="dst_isNavigable" value="true"/>
+ <UML:TaggedValue tag="$ea_xref_property" value="$XREFPROP=$XID={8AAF19F7-EC93-4dda-ADD1-69A07CC671D7}$XID;$NAM=CustomProperties$NAM;$TYP=connector property$TYP;$VIS=Public$VIS;$DES=@PROP=@NAME=isSubstitutable@ENDNAME;@TYPE=boolean@ENDTYPE;@VALU=@ENDVALU;@PRMT=@ENDPRMT;@ENDPROP;$DES;$CLT={E824691D-2AE7-4c9c-8408-881A2B85516F}$CLT;$SUP=&lt;none&gt;$SUP;$ENDXREF;"/>
+ <UML:TaggedValue tag="privatedata5" value="SX=-24;SY=-16;"/>
+ </UML:ModelElement.taggedValue>
+ </UML:Generalization>
+ <UML:Class name="Bathroom" xmi.id="EAID_244CC9E5_1F81_4164_9215_C048EC679543" visibility="public" namespace="EAPK_F9D8C6E3_4DAD_4aa2_AD47_D0ABA4E93E08" isRoot="false" isLeaf="false" isAbstract="false" isActive="false">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="isSpecification" value="false"/>
+ <UML:TaggedValue tag="ea_stype" value="Class"/>
+ <UML:TaggedValue tag="ea_ntype" value="0"/>
+ <UML:TaggedValue tag="version" value="1.0"/>
+ <UML:TaggedValue tag="package" value="EAPK_F9D8C6E3_4DAD_4aa2_AD47_D0ABA4E93E08"/>
+ <UML:TaggedValue tag="date_created" value="2006-06-27 08:32:25"/>
+ <UML:TaggedValue tag="date_modified" value="2006-06-27 08:34:23"/>
+ <UML:TaggedValue tag="gentype" value="Java"/>
+ <UML:TaggedValue tag="tagged" value="0"/>
+ <UML:TaggedValue tag="package_name" value="Rooms"/>
+ <UML:TaggedValue tag="phase" value="1.0"/>
+ <UML:TaggedValue tag="complexity" value="1"/>
+ <UML:TaggedValue tag="status" value="Proposed"/>
+ <UML:TaggedValue tag="style" value="BackColor=-1;BorderColor=-1;BorderWidth=-1;FontColor=-1;VSwimLanes=0;HSwimLanes=0;BorderStyle=0;"/>
+ </UML:ModelElement.taggedValue>
+ </UML:Class>
+ <UML:Association xmi.id="EAID_161BF4FA_8F48_441f_AF1F_D36014D57A1F" visibility="public" isRoot="false" isLeaf="false" isAbstract="false">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="style" value="3"/>
+ <UML:TaggedValue tag="ea_type" value="Association"/>
+ <UML:TaggedValue tag="direction" value="Source -&gt; Destination"/>
+ <UML:TaggedValue tag="linemode" value="3"/>
+ <UML:TaggedValue tag="seqno" value="0"/>
+ <UML:TaggedValue tag="headStyle" value="0"/>
+ <UML:TaggedValue tag="lineStyle" value="0"/>
+ <UML:TaggedValue tag="privatedata5" value="SX=8;SY=5;EX=8;EY=5;"/>
+ <UML:TaggedValue tag="virtualInheritance" value="0"/>
+ </UML:ModelElement.taggedValue>
+ <UML:Association.connection>
+ <UML:AssociationEnd visibility="public" aggregation="none" isOrdered="false" isNavigable="false" type="EAID_436D81AD_A9B0_44d8_8AD1_86BB0808DA32">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="containment" value="Unspecified"/>
+ </UML:ModelElement.taggedValue>
+ </UML:AssociationEnd>
+ <UML:AssociationEnd visibility="public" multiplicity="1" name="bathroom" aggregation="none" isOrdered="false" isNavigable="true" type="EAID_244CC9E5_1F81_4164_9215_C048EC679543">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="containment" value="Unspecified"/>
+ </UML:ModelElement.taggedValue>
+ </UML:AssociationEnd>
+ </UML:Association.connection>
+ </UML:Association>
+ </UML:Namespace.ownedElement>
+ </UML:Package>
+ <UML:DataType xmi.id="eaxmiid0" name="void" visibility="private" isRoot="false" isLeaf="false" isAbstract="false"/>
+ </UML:Namespace.ownedElement>
+ </UML:Model>
+ <UML:DiagramElement geometry="Left=265;Top=280;Right=355;Bottom=350;" subject="EAID_14DB5E54_CD7B_4c84_998C_44960049D7E0" seqno="5" style="DUID=BFDAC143;LBL=;">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="diagram_guid" value="EAID_2FBE7B3F_756C_46a0_90EE_93429FDA065D"/>
+ </UML:ModelElement.taggedValue>
+ </UML:DiagramElement>
+ <UML:DiagramElement geometry="Left=145;Top=402;Right=235;Bottom=472;" subject="EAID_9CE44C59_37E4_4117_8C05_F87C4DC33529" seqno="4" style="DUID=BDEA24A9;LBL=;">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="diagram_guid" value="EAID_2FBE7B3F_756C_46a0_90EE_93429FDA065D"/>
+ </UML:ModelElement.taggedValue>
+ </UML:DiagramElement>
+ <UML:DiagramElement geometry="Left=394;Top=412;Right=484;Bottom=482;" subject="EAID_244CC9E5_1F81_4164_9215_C048EC679543" seqno="3" style="DUID=5DECEEC9;LBL=;">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="diagram_guid" value="EAID_2FBE7B3F_756C_46a0_90EE_93429FDA065D"/>
+ </UML:ModelElement.taggedValue>
+ </UML:DiagramElement>
+ <UML:DiagramElement geometry="SX=-2;SY=-1;EX=-2;EY=-1;EDGE=1;$LLB=CX=26:CY=15:OX=0:OY=0:HDN=0:BLD=0:ITA=0:UND=0:CLR=-1:ALN=0:DIR=0:;LLT=CX=40:CY=15:OX=-12:OY=-3:HDN=0:BLD=0:ITA=0:UND=0:CLR=-1:ALN=0:DIR=0:;LMT=;LMB=;LRT=CX=44:CY=15:OX=-6:OY=-11:HDN=0:BLD=0:ITA=0:UND=0:CLR=-1:ALN=0:DIR=0:;LRB=CX=16:CY=15:OX=-6:OY=-6:HDN=0:BLD=0:ITA=0:UND=0:CLR=-1:ALN=0:DIR=0:;" subject="EAID_F15A2B25_0DA9_4203_8FC9_25645610B5E5" style="Mode=3;EOID=CDDEF3BC;SOID=BFDAC143;">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="diagram_guid" value="EAID_2FBE7B3F_756C_46a0_90EE_93429FDA065D"/>
+ <UML:TaggedValue tag="hidden" value="0"/>
+ </UML:ModelElement.taggedValue>
+ </UML:DiagramElement>
+ <UML:DiagramElement geometry="EDGE=1;$LLB=;LLT=;LMT=;LMB=;LRT=;LRB=;" subject="EAID_03052911_3625_4472_8C6B_5E1742FED753" style="Mode=3;EOID=BFDAC143;SOID=5DECEEC9;">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="diagram_guid" value="EAID_2FBE7B3F_756C_46a0_90EE_93429FDA065D"/>
+ <UML:TaggedValue tag="hidden" value="0"/>
+ </UML:ModelElement.taggedValue>
+ </UML:DiagramElement>
+ <UML:DiagramElement geometry="EX=37;EY=3;EDGE=1;$LLB=;LLT=;LMT=;LMB=;LRT=;LRB=;" subject="EAID_C3D18D92_3A5C_4112_9958_926A259BAE24" style="Mode=3;EOID=BFDAC143;SOID=BDEA24A9;">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="diagram_guid" value="EAID_2FBE7B3F_756C_46a0_90EE_93429FDA065D"/>
+ <UML:TaggedValue tag="hidden" value="0"/>
+ </UML:ModelElement.taggedValue>
+ </UML:DiagramElement>
+ <UML:DiagramElement geometry="SX=-44;SY=-9;EX=-44;EY=-9;EDGE=1;$LLB=CX=16:CY=15:OX=13:OY=-3:HDN=0:BLD=0:ITA=0:UND=0:CLR=-1:ALN=0:DIR=0:;LLT=CX=49:CY=15:OX=0:OY=-14:HDN=0:BLD=0:ITA=0:UND=0:CLR=-1:ALN=0:DIR=0:;LMT=;LMB=;LRT=CX=44:CY=15:OX=-10:OY=-1:HDN=0:BLD=0:ITA=0:UND=0:CLR=-1:ALN=0:DIR=0:;LRB=CX=16:CY=15:OX=-23:OY=-4:HDN=0:BLD=0:ITA=0:UND=0:CLR=-1:ALN=0:DIR=0:;" subject="EAID_9F0FF178_300F_45f4_B31F_9633AF770E44" style="Mode=3;EOID=CDDEF3BC;SOID=BDEA24A9;">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="diagram_guid" value="EAID_2FBE7B3F_756C_46a0_90EE_93429FDA065D"/>
+ <UML:TaggedValue tag="hidden" value="0"/>
+ </UML:ModelElement.taggedValue>
+ </UML:DiagramElement>
+ <UML:DiagramElement geometry="EX=37;EY=3;EDGE=1;$LLB=;LLT=;LMT=;LMB=;LRT=;LRB=;" subject="EAID_C3D18D92_3A5C_4112_9958_926A259BAE24" style="Mode=3;EOID=BFDAC143;SOID=BDEA24A9;">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="diagram_guid" value="EAID_2FBE7B3F_756C_46a0_90EE_93429FDA065D"/>
+ <UML:TaggedValue tag="hidden" value="0"/>
+ </UML:ModelElement.taggedValue>
+ </UML:DiagramElement>
+ <UML:DiagramElement geometry="SX=-24;SY=-16;EDGE=1;$LLB=;LLT=;LMT=;LMB=;LRT=;LRB=;" subject="EAID_E824691D_2AE7_4c9c_8408_881A2B85516F" style="Mode=3;EOID=E58486A8;SOID=BDEA24A9;">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="diagram_guid" value="EAID_2FBE7B3F_756C_46a0_90EE_93429FDA065D"/>
+ <UML:TaggedValue tag="hidden" value="0"/>
+ </UML:ModelElement.taggedValue>
+ </UML:DiagramElement>
+ <UML:DiagramElement geometry="EDGE=1;$LLB=;LLT=;LMT=;LMB=;LRT=;LRB=;" subject="EAID_03052911_3625_4472_8C6B_5E1742FED753" style="Mode=3;EOID=BFDAC143;SOID=5DECEEC9;">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="diagram_guid" value="EAID_2FBE7B3F_756C_46a0_90EE_93429FDA065D"/>
+ <UML:TaggedValue tag="hidden" value="0"/>
+ </UML:ModelElement.taggedValue>
+ </UML:DiagramElement>
+ <UML:DiagramElement geometry="SX=8;SY=5;EX=8;EY=5;EDGE=3;$LLB=;LLT=;LMT=;LMB=;LRT=CX=61:CY=15:OX=0:OY=0:HDN=0:BLD=0:ITA=0:UND=0:CLR=-1:ALN=0:DIR=0:;LRB=CX=16:CY=15:OX=0:OY=0:HDN=0:BLD=0:ITA=0:UND=0:CLR=-1:ALN=0:DIR=0:;" subject="EAID_161BF4FA_8F48_441f_AF1F_D36014D57A1F" style="Mode=3;EOID=5DECEEC9;SOID=CDDEF3BC;">
+ <UML:ModelElement.taggedValue>
+ <UML:TaggedValue tag="diagram_guid" value="EAID_2FBE7B3F_756C_46a0_90EE_93429FDA065D"/>
+ <UML:TaggedValue tag="hidden" value="0"/>
+ </UML:ModelElement.taggedValue>
+ </UML:DiagramElement>
+ </XMI.content>
+ <XMI.difference/>
+ <XMI.extensions xmi.extender="Enterprise Architect 2.5">
+ <EAStub xmi.id="EAID_436D81AD_A9B0_44d8_8AD1_86BB0808DA32" name="House" UMLType="Class"/>
+ <EAStub xmi.id="EAID_CB9E84D4_9064_49d0_BF1E_EE53C05853EF" name="MeetingPlace" UMLType="Class"/>
+ </XMI.extensions>
+</XMI>
diff --git a/lib/puppet/vendor/rgen/test/testmodel/ecore_model_checker.rb b/lib/puppet/vendor/rgen/test/testmodel/ecore_model_checker.rb
new file mode 100644
index 000000000..3fe1f1e6c
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/testmodel/ecore_model_checker.rb
@@ -0,0 +1,101 @@
+require 'rgen/ecore/ecore'
+
+module Testmodel
+
+# Checks the ECore model elements created by transforming the
+# UML Class model elements of the example model
+#
+module ECoreModelChecker
+ include RGen::ECore
+
+ def checkECoreModel(env)
+
+ # check main package
+ mainPackage = env.elements.select {|e| e.is_a? EPackage and e.name == "HouseMetamodel"}.first
+ assert_not_nil mainPackage
+
+ # check Rooms package
+ assert mainPackage.eSubpackages.is_a?(Array)
+ assert_equal 1, mainPackage.eSubpackages.size
+ assert mainPackage.eSubpackages[0].is_a?(EPackage)
+ roomsPackage = mainPackage.eSubpackages[0]
+ assert_equal "Rooms", roomsPackage.name
+
+ # check main package classes
+ assert mainPackage.eClassifiers.is_a?(Array)
+ assert_equal 4, mainPackage.eClassifiers.size
+ assert mainPackage.eClassifiers.all?{|c| c.is_a?(EClass)}
+ houseClass = mainPackage.eClassifiers.select{|c| c.name == "House"}.first
+ personClass = mainPackage.eClassifiers.select{|c| c.name == "Person"}.first
+ meetingPlaceClass = mainPackage.eClassifiers.select{|c| c.name == "MeetingPlace"}.first
+ cookingPlaceInterface = mainPackage.eClassifiers.select{|c| c.name == "CookingPlace"}.first
+ assert_not_nil houseClass
+ assert_not_nil personClass
+ assert_not_nil meetingPlaceClass
+ assert_not_nil cookingPlaceInterface
+
+ # check Rooms package classes
+ assert roomsPackage.eClassifiers.is_a?(Array)
+ assert_equal 3, roomsPackage.eClassifiers.size
+ assert roomsPackage.eClassifiers.all?{|c| c.is_a?(EClass)}
+ roomClass = roomsPackage.eClassifiers.select{|c| c.name == "Room"}.first
+ kitchenClass = roomsPackage.eClassifiers.select{|c| c.name == "Kitchen"}.first
+ bathroomClass = roomsPackage.eClassifiers.select{|c| c.name == "Bathroom"}.first
+ assert_not_nil roomClass
+ assert_not_nil kitchenClass
+ assert_not_nil bathroomClass
+
+ # check Room inheritance
+ assert kitchenClass.eSuperTypes.is_a?(Array)
+ assert_equal 3, kitchenClass.eSuperTypes.size
+ assert_equal roomClass.object_id, kitchenClass.eSuperTypes.select{|c| c.name == "Room"}.first.object_id
+ assert_equal meetingPlaceClass.object_id, kitchenClass.eSuperTypes.select{|c| c.name == "MeetingPlace"}.first.object_id
+ assert_equal cookingPlaceInterface.object_id, kitchenClass.eSuperTypes.select{|c| c.name == "CookingPlace"}.first.object_id
+ assert bathroomClass.eSuperTypes.is_a?(Array)
+ assert_equal 1, bathroomClass.eSuperTypes.size
+ assert_equal roomClass.object_id, bathroomClass.eSuperTypes[0].object_id
+
+ # check House-Room "part of" association
+ assert houseClass.eAllContainments.eType.is_a?(Array)
+ assert_equal 1, houseClass.eAllContainments.eType.size
+ roomRef = houseClass.eAllContainments.first
+ assert_equal roomClass.object_id, roomRef.eType.object_id
+ assert_equal "room", roomRef.name
+ assert_equal 1, roomRef.lowerBound
+ assert_equal(-1, roomRef.upperBound)
+ assert_not_nil roomRef.eOpposite
+ assert_equal houseClass.object_id, roomRef.eOpposite.eType.object_id
+
+ partOfRefs = roomClass.eReferences.select{|r| r.eOpposite && r.eOpposite.containment}
+ assert_equal 1, partOfRefs.size
+ assert_equal houseClass.object_id, partOfRefs.first.eType.object_id
+ assert_equal "house", partOfRefs.first.name
+ assert_equal roomRef.object_id, partOfRefs.first.eOpposite.object_id
+
+ # check House OUT associations
+ assert houseClass.eReferences.is_a?(Array)
+ assert_equal 3, houseClass.eReferences.size
+ bathRef = houseClass.eReferences.find {|e| e.name == "bathroom"}
+ kitchenRef = houseClass.eReferences.find {|e| e.name == "kitchen"}
+ roomRef = houseClass.eReferences.find {|e| e.name == "room"}
+ assert_not_nil bathRef
+ assert_nil bathRef.eOpposite
+ assert_not_nil kitchenRef
+ assert_not_nil roomRef
+ assert_equal 1, kitchenRef.lowerBound
+ assert_equal 1, kitchenRef.upperBound
+ assert_equal 1, roomRef.lowerBound
+ assert_equal(-1, roomRef.upperBound)
+
+ # check House IN associations
+ houseInRefs = env.find(:class => EReference, :eType => houseClass)
+ assert_equal 3, houseInRefs.size
+ homeEnd = houseInRefs.find{|e| e.name == "home"}
+ assert_not_nil homeEnd
+ assert_equal 0, homeEnd.lowerBound
+ assert_equal(-1, homeEnd.upperBound)
+
+ end
+end
+
+end
\ No newline at end of file
diff --git a/lib/puppet/vendor/rgen/test/testmodel/manual_testmodel.xml b/lib/puppet/vendor/rgen/test/testmodel/manual_testmodel.xml
new file mode 100644
index 000000000..d45a0a611
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/testmodel/manual_testmodel.xml
@@ -0,0 +1,22 @@
+<TestModel xmlns:MNS="testmodel.org/myNamespace" >
+ <MNS:House>
+ before kitchen
+ <MNS:Room id="1" name="Kitchen" />
+ after kitchen
+ <MNS:Room id="2" name="TomsRoom">within toms room</MNS:Room>
+ after toms room
+ </MNS:House>
+ <Person name="Tom" room="2">
+ <Parents>
+ <Person name="Kate">
+ <Parents>
+ <Person name="Maria" />
+ <Person name="Will" />
+ </Parents>
+ </Person>
+ </Parents>
+ </Person>
+ <MULTI-PART-NAME>
+ <INSIDE_MULTI_PART/>
+ </MULTI-PART-NAME>
+</TestModel>
diff --git a/lib/puppet/vendor/rgen/test/testmodel/object_model_checker.rb b/lib/puppet/vendor/rgen/test/testmodel/object_model_checker.rb
new file mode 100644
index 000000000..415a8d3b2
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/testmodel/object_model_checker.rb
@@ -0,0 +1,67 @@
+require 'metamodels/uml13_metamodel'
+require 'metamodels/uml13_metamodel_ext'
+
+module Testmodel
+
+# Checks the UML Object model elements from the example model
+#
+module ObjectModelChecker
+
+ # convenient extension for this test only
+ module UML13::ClassifierRole::ClassModule
+ def classname
+ taggedValue.find{|tv| tv.tag == "classname"}.value
+ end
+ end
+
+ def checkObjectModel(envUML)
+
+ # check main package
+ mainPackage = envUML.find(:class => UML13::Package, :name => "HouseExampleModel").first
+ assert_not_nil mainPackage
+
+ eaRootCollaboration = mainPackage.ownedElement.find{|e| e.is_a?(UML13::Collaboration) && e.name == "Collaborations"}
+ assert_not_nil eaRootCollaboration
+
+ # check main package objects
+ objects = eaRootCollaboration.ownedElement.select{|e| e.is_a?(UML13::ClassifierRole)}
+ assert_equal 6, objects.size
+
+ someone = objects.find{|o| o.name == "Someone"}
+ assert_equal "Person", someone.classname
+
+ someonesHouse = objects.find{|o| o.name == "SomeonesHouse"}
+ assert_equal "House", someonesHouse.classname
+
+ greenRoom = objects.find{|o| o.name == "GreenRoom"}
+ assert_equal "Room", greenRoom.classname
+
+ yellowRoom = objects.find{|o| o.name == "YellowRoom"}
+ assert_equal "Room", yellowRoom.classname
+
+ hotRoom = objects.find{|o| o.name == "HotRoom"}
+ assert_equal "Kitchen", hotRoom.classname
+
+ wetRoom = objects.find{|o| o.name == "WetRoom"}
+ assert_equal "Bathroom", wetRoom.classname
+
+ # Someone to SomeonesHouse
+ assert someone.associationEnd.otherEnd.getType.is_a?(Array)
+ assert_equal 1, someone.associationEnd.otherEnd.getType.size
+ houseEnd = someone.associationEnd.otherEnd[0]
+ assert_equal someonesHouse.object_id, houseEnd.getType.object_id
+ assert_equal "home", houseEnd.name
+
+ # SomeonesHouse to its rooms
+ assert someonesHouse.localCompositeEnd.otherEnd.is_a?(Array)
+ assert_equal 4, someonesHouse.localCompositeEnd.otherEnd.size
+ assert someonesHouse.localCompositeEnd.otherEnd.all?{|e| e.name == "room"}
+ assert_not_nil someonesHouse.localCompositeEnd.otherEnd.getType.find{|o| o == yellowRoom}
+ assert_not_nil someonesHouse.localCompositeEnd.otherEnd.getType.find{|o| o == greenRoom}
+ assert_not_nil someonesHouse.localCompositeEnd.otherEnd.getType.find{|o| o == hotRoom}
+ assert_not_nil someonesHouse.localCompositeEnd.otherEnd.getType.find{|o| o == wetRoom}
+
+ end
+end
+
+end
\ No newline at end of file
diff --git a/lib/puppet/vendor/rgen/test/transformer_test.rb b/lib/puppet/vendor/rgen/test/transformer_test.rb
new file mode 100644
index 000000000..afbf5fba0
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/transformer_test.rb
@@ -0,0 +1,254 @@
+$:.unshift File.join(File.dirname(__FILE__),"..","lib")
+$:.unshift File.join(File.dirname(__FILE__),"..","test")
+
+require 'test/unit'
+require 'rgen/transformer'
+require 'rgen/environment'
+require 'rgen/util/model_comparator'
+require 'metamodels/uml13_metamodel'
+require 'testmodel/class_model_checker'
+
+class TransformerTest < Test::Unit::TestCase
+
+ class ModelIn
+ attr_accessor :name
+ end
+
+ class ModelInSub < ModelIn
+ end
+
+ class ModelAIn
+ attr_accessor :name
+ attr_accessor :modelB
+ end
+
+ class ModelBIn
+ attr_accessor :name
+ attr_accessor :modelA
+ end
+
+ class ModelCIn
+ attr_accessor :number
+ end
+
+ class ModelOut
+ attr_accessor :name
+ end
+
+ class ModelAOut
+ attr_accessor :name
+ attr_accessor :modelB
+ end
+
+ class ModelBOut
+ attr_accessor :name
+ attr_accessor :modelA
+ end
+
+ class ModelCOut
+ attr_accessor :number
+ end
+
+ class MyTransformer < RGen::Transformer
+ attr_reader :modelInTrans_count
+ attr_reader :modelAInTrans_count
+ attr_reader :modelBInTrans_count
+
+ transform ModelIn, :to => ModelOut do
+ # arbitrary ruby code may be placed before the hash creating the output element
+ @modelInTrans_count ||= 0; @modelInTrans_count += 1
+ { :name => name }
+ end
+
+ transform ModelAIn, :to => ModelAOut do
+ @modelAInTrans_count ||= 0; @modelAInTrans_count += 1
+ { :name => name, :modelB => trans(modelB) }
+ end
+
+ transform ModelBIn, :to => ModelBOut do
+ @modelBInTrans_count ||= 0; @modelBInTrans_count += 1
+ { :name => name, :modelA => trans(modelA) }
+ end
+
+ transform ModelCIn, :to => ModelCOut, :if => :largeNumber do
+ # a method can be called anywhere in a transformer block
+ { :number => duplicateNumber }
+ end
+
+ transform ModelCIn, :to => ModelCOut, :if => :smallNumber do
+ { :number => number / 2 }
+ end
+
+ method :largeNumber do
+ number > 1000
+ end
+
+ method :smallNumber do
+ number < 500
+ end
+
+ method :duplicateNumber do
+ number * 2;
+ end
+
+ end
+
+ class MyTransformer2 < RGen::Transformer
+ # check that subclasses are independent (i.e. do not share the rules)
+ transform ModelIn, :to => ModelOut do
+ { :name => name }
+ end
+ end
+
+ def test_transformer
+ from = ModelIn.new
+ from.name = "TestName"
+ env_out = RGen::Environment.new
+ t = MyTransformer.new(:env_in, env_out)
+ assert t.trans(from).is_a?(ModelOut)
+ assert_equal "TestName", t.trans(from).name
+ assert_equal 1, env_out.elements.size
+ assert_equal env_out.elements.first, t.trans(from)
+ assert_equal 1, t.modelInTrans_count
+ end
+
+ def test_transformer_chain
+ from = ModelIn.new
+ from.name = "Test1"
+ from2 = ModelIn.new
+ from2.name = "Test2"
+ from3 = ModelIn.new
+ from3.name = "Test3"
+ env_out = RGen::Environment.new
+ elementMap = {}
+ t1 = MyTransformer.new(:env_in, env_out, elementMap)
+ assert t1.trans(from).is_a?(ModelOut)
+ assert_equal "Test1", t1.trans(from).name
+ assert_equal 1, t1.modelInTrans_count
+ # modifying the element map means that subsequent calls to +trans+ will be affected
+ assert_equal( {from => t1.trans(from)}, elementMap )
+ elementMap.merge!({from2 => :dummy})
+ assert_equal :dummy, t1.trans(from2)
+ # second transformer based on the element map of the first
+ t2 = MyTransformer.new(:env_in, env_out, elementMap)
+ # second transformer returns same objects
+ assert_equal t1.trans(from).object_id, t2.trans(from).object_id
+ assert_equal :dummy, t2.trans(from2)
+ # and no transformer rule is evaluated at this point
+ assert_equal nil, t2.modelInTrans_count
+ # now transform a new object in second transformer
+ assert t2.trans(from3).is_a?(ModelOut)
+ assert_equal "Test3", t2.trans(from3).name
+ assert_equal 1, t2.modelInTrans_count
+ # the first transformer returns the same object without evaluation of a transformer rule
+ assert_equal t1.trans(from3).object_id, t2.trans(from3).object_id
+ assert_equal 1, t1.modelInTrans_count
+ end
+
+ def test_transformer_subclass
+ from = ModelInSub.new
+ from.name = "TestName"
+ t = MyTransformer.new
+ assert t.trans(from).is_a?(ModelOut)
+ assert_equal "TestName", t.trans(from).name
+ assert_equal 1, t.modelInTrans_count
+ end
+
+ def test_transformer_array
+ froms = [ModelIn.new, ModelIn.new]
+ froms[0].name = "M1"
+ froms[1].name = "M2"
+ env_out = RGen::Environment.new
+ t = MyTransformer.new(:env_in, env_out)
+ assert t.trans(froms).is_a?(Array)
+ assert t.trans(froms)[0].is_a?(ModelOut)
+ assert_equal "M1", t.trans(froms)[0].name
+ assert t.trans(froms)[1].is_a?(ModelOut)
+ assert_equal "M2", t.trans(froms)[1].name
+ assert_equal 2, env_out.elements.size
+ assert (t.trans(froms)-env_out.elements).empty?
+ assert_equal 2, t.modelInTrans_count
+ end
+
+ def test_transformer_cyclic
+ # setup a cyclic dependency between fromA and fromB
+ fromA = ModelAIn.new
+ fromB = ModelBIn.new
+ fromA.modelB = fromB
+ fromA.name = "ModelA"
+ fromB.modelA = fromA
+ fromB.name = "ModelB"
+ env_out = RGen::Environment.new
+ t = MyTransformer.new(:env_in, env_out)
+ # check that trans resolves the cycle correctly (no endless loop)
+ # both elements, fromA and fromB, will be transformed together with whichever
+ # of them is transformed first
+ assert t.trans(fromA).is_a?(ModelAOut)
+ assert_equal "ModelA", t.trans(fromA).name
+ assert t.trans(fromA).modelB.is_a?(ModelBOut)
+ assert_equal "ModelB", t.trans(fromA).modelB.name
+ assert_equal t.trans(fromA), t.trans(fromA).modelB.modelA
+ assert_equal t.trans(fromB), t.trans(fromA).modelB
+ assert_equal 2, env_out.elements.size
+ assert (env_out.elements - [t.trans(fromA), t.trans(fromB)]).empty?
+ assert_equal 1, t.modelAInTrans_count
+ assert_equal 1, t.modelBInTrans_count
+ end
+
+ def test_transformer_conditional
+ froms = [ModelCIn.new, ModelCIn.new, ModelCIn.new]
+ froms[0].number = 100
+ froms[1].number = 1000
+ froms[2].number = 2000
+
+ env_out = RGen::Environment.new
+ t = MyTransformer.new(:env_in, env_out)
+
+ assert t.trans(froms).is_a?(Array)
+ assert_equal 2, t.trans(froms).size
+
+ # this one matched the smallNumber rule
+ assert t.trans(froms[0]).is_a?(ModelCOut)
+ assert_equal 50, t.trans(froms[0]).number
+
+ # this one did not match any rule
+ assert t.trans(froms[1]).nil?
+
+ # this one matched the largeNumber rule
+ assert t.trans(froms[2]).is_a?(ModelCOut)
+ assert_equal 4000, t.trans(froms[2]).number
+
+ # elements in environment are the same as the ones returned
+ assert_equal 2, env_out.elements.size
+ assert (t.trans(froms)-env_out.elements).empty?
+ end
+
+ class CopyTransformer < RGen::Transformer
+ include UML13
+ def transform
+ trans(:class => UML13::Package)
+ end
+ UML13.ecore.eClassifiers.each do |c|
+ copy c.instanceClass
+ end
+ end
+
+ MODEL_DIR = File.join(File.dirname(__FILE__),"testmodel")
+
+ include Testmodel::ClassModelChecker
+ include RGen::Util::ModelComparator
+
+ def test_copyTransformer
+ envIn = RGen::Environment.new
+ envOut = RGen::Environment.new
+
+ EASupport.instantiateUML13FromXMI11(envIn, MODEL_DIR+"/ea_testmodel.xml")
+
+ CopyTransformer.new(envIn, envOut).transform
+ checkClassModel(envOut)
+ assert modelEqual?(
+ envIn.find(:class => UML13::Model).first,
+ envOut.find(:class => UML13::Model).first)
+ end
+
+end
diff --git a/lib/puppet/vendor/rgen/test/util/file_cache_map_test.rb b/lib/puppet/vendor/rgen/test/util/file_cache_map_test.rb
new file mode 100644
index 000000000..fcb4e32f2
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/util/file_cache_map_test.rb
@@ -0,0 +1,99 @@
+$:.unshift(File.dirname(__FILE__)+"/../../lib")
+
+require 'test/unit'
+require 'fileutils'
+require 'rgen/util/file_cache_map'
+
+class FileCacheMapTest < Test::Unit::TestCase
+
+ TestDir = File.dirname(__FILE__)+"/file_cache_map_test/testdir"
+
+ def setup
+ FileUtils.rm_r(Dir[TestDir+"/*"])
+ # * doesn't include dot files
+ FileUtils.rm_r(Dir[TestDir+"/.cache"])
+ @cm = RGen::Util::FileCacheMap.new(".cache", ".test")
+ end
+
+ def test_nocache
+ reasons = []
+ assert_equal(:invalid, @cm.load_data(TestDir+"/fileA", :invalidation_reasons => reasons))
+ assert_equal [:no_cachefile], reasons
+ end
+
+ def test_storeload
+ keyFile = TestDir+"/fileA"
+ File.open(keyFile, "w") {|f| f.write("somedata")}
+ @cm.store_data(keyFile, "valuedata")
+ assert(File.exist?(TestDir+"/.cache/fileA.test"))
+ assert_equal("valuedata", @cm.load_data(keyFile))
+ end
+
+ def test_storeload_subdir
+ keyFile = TestDir+"/subdir/fileA"
+ FileUtils.mkdir(TestDir+"/subdir")
+ File.open(keyFile, "w") {|f| f.write("somedata")}
+ @cm.store_data(keyFile, "valuedata")
+ assert(File.exist?(TestDir+"/subdir/.cache/fileA.test"))
+ assert_equal("valuedata", @cm.load_data(keyFile))
+ end
+
+ def test_storeload_postfix
+ keyFile = TestDir+"/fileB.txt"
+ File.open(keyFile, "w") {|f| f.write("somedata")}
+ @cm.store_data(keyFile, "valuedata")
+ assert(File.exist?(TestDir+"/.cache/fileB.txt.test"))
+ assert_equal("valuedata", @cm.load_data(keyFile))
+ end
+
+ def test_storeload_empty
+ keyFile = TestDir+"/fileA"
+ File.open(keyFile, "w") {|f| f.write("")}
+ @cm.store_data(keyFile, "valuedata")
+ assert(File.exist?(TestDir+"/.cache/fileA.test"))
+ assert_equal("valuedata", @cm.load_data(keyFile))
+ end
+
+ def test_corruptcache
+ keyFile = TestDir+"/fileA"
+ File.open(keyFile, "w") {|f| f.write("somedata")}
+ @cm.store_data(keyFile, "valuedata")
+ File.open(TestDir+"/.cache/fileA.test","a") {|f| f.write("more data")}
+ reasons = []
+ assert_equal(:invalid, @cm.load_data(keyFile, :invalidation_reasons => reasons))
+ assert_equal [:cachefile_corrupted], reasons
+ end
+
+ def test_changedcontent
+ keyFile = TestDir+"/fileA"
+ File.open(keyFile, "w") {|f| f.write("somedata")}
+ @cm.store_data(keyFile, "valuedata")
+ File.open(keyFile, "a") {|f| f.write("more data")}
+ reasons = []
+ assert_equal(:invalid, @cm.load_data(keyFile, :invalidation_reasons => reasons))
+ assert_equal [:keyfile_changed], reasons
+ end
+
+ def test_versioninfo
+ keyFile = TestDir+"/fileA"
+ File.open(keyFile, "w") {|f| f.write("somedata")}
+ @cm.version_info = "123"
+ @cm.store_data(keyFile, "valuedata")
+ assert(File.exist?(TestDir+"/.cache/fileA.test"))
+ assert_equal("valuedata", @cm.load_data(keyFile))
+ end
+
+ def test_changed_version
+ keyFile = TestDir+"/fileA"
+ File.open(keyFile, "w") {|f| f.write("somedata")}
+ @cm.version_info = "123"
+ @cm.store_data(keyFile, "valuedata")
+ @cm.version_info = "456"
+ reasons = []
+ assert_equal(:invalid, @cm.load_data(keyFile, :invalidation_reasons => reasons))
+ assert_equal [:keyfile_changed], reasons
+ end
+
+end
+
+
diff --git a/lib/puppet/vendor/rgen/test/util/file_cache_map_test/testdir/.gitignore b/lib/puppet/vendor/rgen/test/util/file_cache_map_test/testdir/.gitignore
new file mode 100644
index 000000000..bad49a678
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/util/file_cache_map_test/testdir/.gitignore
@@ -0,0 +1,3 @@
+.cache
+fileA
+
diff --git a/lib/puppet/vendor/rgen/test/util/pattern_matcher_test.rb b/lib/puppet/vendor/rgen/test/util/pattern_matcher_test.rb
new file mode 100644
index 000000000..156856492
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/util/pattern_matcher_test.rb
@@ -0,0 +1,97 @@
+$:.unshift(File.dirname(__FILE__)+"/../../lib")
+
+require 'test/unit'
+require 'rgen/environment'
+require 'rgen/metamodel_builder'
+require 'rgen/model_builder'
+require 'rgen/util/pattern_matcher'
+
+class PatternMatcherTest < Test::Unit::TestCase
+
+module TestMM
+ extend RGen::MetamodelBuilder::ModuleExtension
+
+ class Node < RGen::MetamodelBuilder::MMBase
+ has_attr 'name', String
+ contains_many 'children', Node, 'parent'
+ end
+end
+
+def modelA
+ env = RGen::Environment.new
+ RGen::ModelBuilder.build(TestMM, env) do
+ node "A" do
+ node "AA"
+ end
+ node "B" do
+ node "B1"
+ node "B2"
+ node "B3"
+ end
+ node "C" do
+ node "C1"
+ node "C2"
+ end
+ node "D" do
+ node "DD"
+ end
+ end
+ env
+end
+
+def test_simple
+ matcher = RGen::Util::PatternMatcher.new
+ matcher.add_pattern("simple") do |env, c|
+ TestMM::Node.new(:name => "A", :children => [
+ TestMM::Node.new(:name => "AA")])
+ end
+ matcher.add_pattern("bad") do |env, c|
+ TestMM::Node.new(:name => "X")
+ end
+ env = modelA
+
+ match = matcher.find_pattern(env, "simple")
+ assert_not_nil match
+ assert_equal "A", match.root.name
+ assert_equal env.find(:class => TestMM::Node, :name => "A").first.object_id, match.root.object_id
+ assert_equal 2, match.elements.size
+ assert_equal [nil], match.bound_values
+
+ assert_nil matcher.find_pattern(env, "bad")
+end
+
+def test_value_binding
+ matcher = RGen::Util::PatternMatcher.new
+ matcher.add_pattern("single_child") do |env, name, child|
+ TestMM::Node.new(:name => name, :children => [ child ])
+ end
+ matcher.add_pattern("double_child") do |env, name, child1, child2|
+ TestMM::Node.new(:name => name, :children => [ child1, child2 ])
+ end
+ matcher.add_pattern("child_pattern") do |env, child_name|
+ TestMM::Node.new(:name => "A", :children => [
+ TestMM::Node.new(:name => child_name)])
+ end
+ env = modelA
+
+ match = matcher.find_pattern(env, "single_child")
+ assert_not_nil match
+ assert_equal "A", match.root.name
+ assert_equal "AA", match.bound_values[1].name
+
+ match = matcher.find_pattern(env, "single_child", "D")
+ assert_not_nil match
+ assert_equal "D", match.root.name
+ assert_equal "DD", match.bound_values[0].name
+
+ match = matcher.find_pattern(env, "double_child")
+ assert_not_nil match
+ assert_equal "C", match.root.name
+
+ match = matcher.find_pattern(env, "child_pattern")
+ assert_not_nil match
+ assert_equal ["AA"], match.bound_values
+end
+
+end
+
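A note on the value-binding tests above (an interpretation of the API as exercised there, not an authoritative description): every block parameter after env acts as a pattern variable; arguments passed to find_pattern fix the corresponding variables, any remaining ones are left free and bound against the matched model, and those bindings come back, in parameter order, in match.bound_values. For example:

    # "single_child" is declared with |env, name, child| in the test above
    matcher.find_pattern(env, "single_child")       # name and child both free
    #   => root "A"; bound_values carries the bound name and the "AA" child node
    matcher.find_pattern(env, "single_child", "D")  # name fixed to "D"
    #   => root "D"; bound_values == [<Node "DD">]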
diff --git a/lib/puppet/vendor/rgen/test/util_test.rb b/lib/puppet/vendor/rgen/test/util_test.rb
new file mode 100644
index 000000000..d36e366db
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/util_test.rb
@@ -0,0 +1,5 @@
+$:.unshift File.dirname(__FILE__)
+
+require 'util/file_cache_map_test'
+require 'util/pattern_matcher_test'
+
diff --git a/lib/puppet/vendor/rgen/test/xml_instantiator_test.rb b/lib/puppet/vendor/rgen/test/xml_instantiator_test.rb
new file mode 100644
index 000000000..145058250
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/xml_instantiator_test.rb
@@ -0,0 +1,160 @@
+$:.unshift File.join(File.dirname(__FILE__),"..","lib")
+
+require 'test/unit'
+require 'rgen/instantiator/default_xml_instantiator'
+require 'rgen/environment'
+require 'rgen/util/model_dumper'
+require 'xml_instantiator_test/simple_xmi_ecore_instantiator'
+require 'xml_instantiator_test/simple_ecore_model_checker'
+
+module EmptyMM
+end
+
+module DefaultMM
+ module MNS
+ class Room < RGen::MetamodelBuilder::MMBase; end
+ end
+ class Person < RGen::MetamodelBuilder::MMBase; end
+ Person.one_to_one 'personalRoom', MNS::Room, 'inhabitant'
+end
+
+class XMLInstantiatorTest < Test::Unit::TestCase
+
+ XML_DIR = File.join(File.dirname(__FILE__),"testmodel")
+
+ include RGen::Util::ModelDumper
+
+ class MyInstantiator < RGen::Instantiator::DefaultXMLInstantiator
+
+ map_tag_ns "testmodel.org/myNamespace", DefaultMM::MNS
+
+ def class_name(str)
+ camelize(str)
+ end
+
+# resolve :type do
+# @env.find(:xmi_id => getType).first
+# end
+
+ resolve_by_id :personalRoom, :id => :getId, :src => :room
+
+ end
+
+ class PruneTestInstantiator < RGen::Instantiator::NodebasedXMLInstantiator
+ attr_reader :max_depth
+
+ set_prune_level 2
+
+ def initialize(env)
+ super(env)
+ @max_depth = 0
+ end
+
+ def on_descent(node)
+ end
+
+ def on_ascent(node)
+ calc_max_depth(node, 0)
+ end
+
+ def calc_max_depth(node, offset)
+ if node.children.nil? || node.children.size == 0
+ @max_depth = offset if offset > @max_depth
+ else
+ node.children.each do |c|
+ calc_max_depth(c, offset+1)
+ end
+ end
+ end
+ end
+
+ module PruneTestMM
+ end
+
+ def test_pruning
+ env = RGen::Environment.new
+
+ # prune level 2 is set in the class body
+ inst = PruneTestInstantiator.new(env)
+ inst.instantiate_file(File.join(XML_DIR,"manual_testmodel.xml"))
+ assert_equal 2, inst.max_depth
+
+ PruneTestInstantiator.set_prune_level(0)
+ inst = PruneTestInstantiator.new(env)
+ inst.instantiate_file(File.join(XML_DIR,"manual_testmodel.xml"))
+ assert_equal 5, inst.max_depth
+
+ PruneTestInstantiator.set_prune_level(1)
+ inst = PruneTestInstantiator.new(env)
+ inst.instantiate_file(File.join(XML_DIR,"manual_testmodel.xml"))
+ assert_equal 1, inst.max_depth
+ end
+
+ def test_custom
+ env = RGen::Environment.new
+ inst = MyInstantiator.new(env, DefaultMM, true)
+ inst.instantiate_file(File.join(XML_DIR,"manual_testmodel.xml"))
+
+ house = env.find(:class => DefaultMM::MNS::House).first
+ assert_not_nil house
+ assert_equal 2, house.room.size
+
+ rooms = env.find(:class => DefaultMM::MNS::Room)
+ assert_equal 2, rooms.size
+ assert_equal 0, (house.room - rooms).size
+ rooms.each {|r| assert r.parent == house}
+ tomsRoom = rooms.select{|r| r.name == "TomsRoom"}.first
+ assert_not_nil tomsRoom
+
+ persons = env.find(:class => DefaultMM::Person)
+ assert_equal 4, persons.size
+ tom = persons.select{|p| p.name == "Tom"}.first
+ assert_not_nil tom
+
+ assert tom.personalRoom == tomsRoom
+
+ mpns = env.find(:class => DefaultMM::MultiPartName)
+ assert mpns.first.respond_to?("insideMultiPart")
+ end
+
+ def test_default
+ env = RGen::Environment.new
+ inst = RGen::Instantiator::DefaultXMLInstantiator.new(env, EmptyMM, true)
+ inst.instantiate_file(File.join(XML_DIR,"manual_testmodel.xml"))
+
+ house = env.find(:class => EmptyMM::MNS_House).first
+ assert_not_nil house
+ assert_equal 2, house.mNS_Room.size
+ assert_equal "before kitchen", remove_whitespace_elements(house.chardata)[0].strip
+ assert_equal "after kitchen", remove_whitespace_elements(house.chardata)[1].strip
+ assert_equal "after toms room", remove_whitespace_elements(house.chardata)[2].strip
+
+ rooms = env.find(:class => EmptyMM::MNS_Room)
+ assert_equal 2, rooms.size
+ assert_equal 0, (house.mNS_Room - rooms).size
+ rooms.each {|r| assert r.parent == house}
+ tomsRoom = rooms.select{|r| r.name == "TomsRoom"}.first
+ assert_not_nil tomsRoom
+ assert_equal "within toms room", remove_whitespace_elements(tomsRoom.chardata)[0]
+
+ persons = env.find(:class => EmptyMM::Person)
+ assert_equal 4, persons.size
+ tom = persons.select{|p| p.name == "Tom"}.first
+ assert_not_nil tom
+ end
+
+ def remove_whitespace_elements(elements)
+ elements.reject{|e| e.strip == ""}
+ end
+
+ include SimpleECoreModelChecker
+
+ def test_simle_xmi_ecore_instantiator
+ envECore = RGen::Environment.new
+ File.open(XML_DIR+"/ea_testmodel.xml") { |f|
+ SimpleXMIECoreInstantiator.new.instantiateECoreModel(envECore, f.read)
+ }
+ checkECoreModel(envECore)
+ end
+
+end
diff --git a/lib/puppet/vendor/rgen/test/xml_instantiator_test/simple_ecore_model_checker.rb b/lib/puppet/vendor/rgen/test/xml_instantiator_test/simple_ecore_model_checker.rb
new file mode 100644
index 000000000..986af3309
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/xml_instantiator_test/simple_ecore_model_checker.rb
@@ -0,0 +1,94 @@
+require 'rgen/ecore/ecore'
+
+# This "light" version of the ECore model checker is used to check the
+# model produced by the XMLInstantiatorTest only.
+#
+module SimpleECoreModelChecker
+ include RGen::ECore
+
+ def checkECoreModel(env)
+
+ # check main package
+ mainPackage = env.elements.select {|e| e.is_a? EPackage and e.name == "HouseMetamodel"}.first
+ assert_not_nil mainPackage
+
+ # check Rooms package
+ assert mainPackage.eSubpackages.is_a?(Array)
+ assert_equal 1, mainPackage.eSubpackages.size
+ assert mainPackage.eSubpackages[0].is_a?(EPackage)
+ roomsPackage = mainPackage.eSubpackages[0]
+ assert_equal "Rooms", roomsPackage.name
+
+ # check main package classes
+ assert mainPackage.eClassifiers.is_a?(Array)
+ assert_equal 3, mainPackage.eClassifiers.size
+ assert mainPackage.eClassifiers.all?{|c| c.is_a?(EClass)}
+ houseClass = mainPackage.eClassifiers.select{|c| c.name == "House"}.first
+ personClass = mainPackage.eClassifiers.select{|c| c.name == "Person"}.first
+ meetingPlaceClass = mainPackage.eClassifiers.select{|c| c.name == "MeetingPlace"}.first
+ assert_not_nil houseClass
+ assert_not_nil personClass
+ assert_not_nil meetingPlaceClass
+
+ # check Rooms package classes
+ assert roomsPackage.eClassifiers.is_a?(Array)
+ assert_equal 3, roomsPackage.eClassifiers.size
+ assert roomsPackage.eClassifiers.all?{|c| c.is_a?(EClass)}
+ roomClass = roomsPackage.eClassifiers.select{|c| c.name == "Room"}.first
+ kitchenClass = roomsPackage.eClassifiers.select{|c| c.name == "Kitchen"}.first
+ bathroomClass = roomsPackage.eClassifiers.select{|c| c.name == "Bathroom"}.first
+ assert_not_nil roomClass
+ assert_not_nil kitchenClass
+ assert_not_nil bathroomClass
+
+ # check Room inheritance
+ assert kitchenClass.eSuperTypes.is_a?(Array)
+ assert_equal 2, kitchenClass.eSuperTypes.size
+ assert_equal roomClass.object_id, kitchenClass.eSuperTypes.select{|c| c.name == "Room"}.first.object_id
+ assert_equal meetingPlaceClass.object_id, kitchenClass.eSuperTypes.select{|c| c.name == "MeetingPlace"}.first.object_id
+ assert bathroomClass.eSuperTypes.is_a?(Array)
+ assert_equal 1, bathroomClass.eSuperTypes.size
+ assert_equal roomClass.object_id, bathroomClass.eSuperTypes[0].object_id
+
+ # check House-Room "part of" association
+ assert houseClass.eAllContainments.eType.is_a?(Array)
+ assert_equal 1, houseClass.eAllContainments.eType.size
+ roomRef = houseClass.eAllContainments.first
+ assert_equal roomClass.object_id, roomRef.eType.object_id
+ assert_equal "room", roomRef.name
+ assert_equal 1, roomRef.lowerBound
+ assert_equal(-1, roomRef.upperBound)
+ assert_not_nil roomRef.eOpposite
+ assert_equal houseClass.object_id, roomRef.eOpposite.eType.object_id
+
+ partOfRefs = roomClass.eReferences.select{|r| r.eOpposite && r.eOpposite.containment}
+ assert_equal 1, partOfRefs.size
+ assert_equal houseClass.object_id, partOfRefs.first.eType.object_id
+ assert_equal "house", partOfRefs.first.name
+ assert_equal roomRef.object_id, partOfRefs.first.eOpposite.object_id
+
+ # check House OUT associations
+ assert houseClass.eReferences.is_a?(Array)
+ assert_equal 3, houseClass.eReferences.size
+ bathRef = houseClass.eReferences.find {|e| e.name == "bathroom"}
+ kitchenRef = houseClass.eReferences.find {|e| e.name == "kitchen"}
+ roomRef = houseClass.eReferences.find {|e| e.name == "room"}
+ assert_not_nil bathRef
+ assert_nil bathRef.eOpposite
+ assert_not_nil kitchenRef
+ assert_not_nil roomRef
+ assert_equal 1, kitchenRef.lowerBound
+ assert_equal 1, kitchenRef.upperBound
+ assert_equal 1, roomRef.lowerBound
+ assert_equal(-1, roomRef.upperBound)
+
+ # check House IN associations
+ houseInRefs = env.find(:class => EReference, :eType => houseClass)
+ assert_equal 3, houseInRefs.size
+ homeEnd = houseInRefs.find{|e| e.name == "home"}
+ assert_not_nil homeEnd
+ assert_equal 0, homeEnd.lowerBound
+ assert_equal(-1, homeEnd.upperBound)
+
+ end
+end
diff --git a/lib/puppet/vendor/rgen/test/xml_instantiator_test/simple_xmi_ecore_instantiator.rb b/lib/puppet/vendor/rgen/test/xml_instantiator_test/simple_xmi_ecore_instantiator.rb
new file mode 100644
index 000000000..33cf97759
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/xml_instantiator_test/simple_xmi_ecore_instantiator.rb
@@ -0,0 +1,53 @@
+require 'rgen/instantiator/default_xml_instantiator'
+require 'rgen/environment'
+require 'rgen/ecore/ecore'
+require 'xml_instantiator_test/simple_xmi_metamodel'
+
+# SimpleXMIECoreInstantiator demonstrates the usage of the DefaultXMLInstantiator.
+# It can be used to instantiate an ECore model from an XMI description
+# produced by Enterprise Architect.
+#
+# Note however, that this is *not* the recommended way to read an EA model.
+# See EAInstantiatorTest for the clean way to do this.
+#
+# This example shows how arbitrary XML content can be used to instantiate
+# an implicit metamodel. The resulting model is transformed into a simple
+# ECore model.
+#
+# See XMLInstantiatorTest for an example of how to use this class.
+#
+class SimpleXMIECoreInstantiator < RGen::Instantiator::DefaultXMLInstantiator
+
+ map_tag_ns "omg.org/UML1.3", SimpleXMIMetaModel::UML
+
+ resolve_by_id :typeClass, :src => :type, :id => :xmi_id
+ resolve_by_id :subtypeClass, :src => :subtype, :id => :xmi_id
+ resolve_by_id :supertypeClass, :src => :supertype, :id => :xmi_id
+
+ def initialize
+ @envXMI = RGen::Environment.new
+ super(@envXMI, SimpleXMIMetaModel, true)
+ end
+
+ def new_object(node)
+ if node.tag == "EAStub"
+ class_name = saneClassName(node.attributes["UMLType"])
+ mod = XMIMetaModel::UML
+ build_on_error(NameError, :build_class, class_name, mod) do
+ mod.const_get(class_name).new
+ end
+ else
+ super
+ end
+ end
+
+ # This method does the actual work.
+ def instantiateECoreModel(envOut, str)
+ instantiate(str)
+
+ require 'xml_instantiator_test/simple_xmi_to_ecore'
+
+ SimpleXmiToECore.new(@envXMI,envOut).transform
+ end
+
+end
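A usage sketch of the flow described in the header comment (this simply mirrors test_simle_xmi_ecore_instantiator earlier in this patch; the file path is the one used there):

    env_ecore = RGen::Environment.new
    File.open("testmodel/ea_testmodel.xml") do |f|
      SimpleXMIECoreInstantiator.new.instantiateECoreModel(env_ecore, f.read)
    end
    # env_ecore now holds the ECore model transformed from the EA XMI input
    packages = env_ecore.find(:class => RGen::ECore::EPackage)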
diff --git a/lib/puppet/vendor/rgen/test/xml_instantiator_test/simple_xmi_metamodel.rb b/lib/puppet/vendor/rgen/test/xml_instantiator_test/simple_xmi_metamodel.rb
new file mode 100644
index 000000000..38bc0ceb4
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/xml_instantiator_test/simple_xmi_metamodel.rb
@@ -0,0 +1,49 @@
+# This is an extension of the implicit metamodel created by the
+# DefaultXMLInstantiator when it reads an Enterprise Architect
+# XMI file.
+#
+module SimpleXMIMetaModel
+
+ module UML
+ include RGen::MetamodelBuilder
+
+ class Classifier_feature < MMBase
+ end
+
+ class ClassifierRole < MMBase
+ end
+
+ class Clazz < ClassifierRole
+ end
+
+ class Interface < ClassifierRole
+ end
+
+ class Operation < MMBase
+ end
+
+ class Generalization < MMBase
+ end
+
+ class ModelElement_stereotype < MMBase
+ end
+
+ class AssociationEnd < MMBase
+ module ClassModule
+ def otherEnd
+ parent.associationEnd.find{|ae| ae != self}
+ end
+ end
+ end
+
+ class AssociationEndRole < MMBase
+ end
+
+ ClassifierRole.one_to_many 'associationEnds', AssociationEnd, 'typeClass'
+ ClassifierRole.one_to_many 'associationEndRoles', AssociationEndRole, 'typeClass'
+ Clazz.one_to_many 'generalizationsAsSubtype', Generalization, 'subtypeClass'
+ Clazz.one_to_many 'generalizationsAsSupertype', Generalization, 'supertypeClass'
+
+ end
+
+end
diff --git a/lib/puppet/vendor/rgen/test/xml_instantiator_test/simple_xmi_to_ecore.rb b/lib/puppet/vendor/rgen/test/xml_instantiator_test/simple_xmi_to_ecore.rb
new file mode 100644
index 000000000..d95477562
--- /dev/null
+++ b/lib/puppet/vendor/rgen/test/xml_instantiator_test/simple_xmi_to_ecore.rb
@@ -0,0 +1,75 @@
+require 'rgen/transformer'
+require 'rgen/ecore/ecore'
+require 'rgen/array_extensions'
+require 'xml_instantiator_test/simple_xmi_metamodel'
+
+class SimpleXmiToECore < RGen::Transformer
+ include RGen::ECore
+
+ class MapHelper
+ def initialize(keyMethod,valueMethod,elements)
+ @keyMethod, @valueMethod, @elements = keyMethod, valueMethod, elements
+ end
+ def [](key)
+ return @elements.select{|e| e.send(@keyMethod) == key}.first.send(@valueMethod) rescue NoMethodError
+ nil
+ end
+ end
+
+ class TaggedValueHelper < MapHelper
+ def initialize(element)
+ super('tag','value',element.modelElement_taggedValue.taggedValue)
+ end
+ end
+
+ # Do the actual transformation.
+ # Input and output environment have to be provided to the transformer constructor.
+ def transform
+ trans(:class => SimpleXMIMetaModel::UML::Clazz)
+ end
+
+ transform SimpleXMIMetaModel::UML::Package, :to => EPackage do
+ { :name => name,
+ :eSuperPackage => trans(parent.parent.is_a?(SimpleXMIMetaModel::UML::Package) ? parent.parent : nil) }
+ end
+
+ transform SimpleXMIMetaModel::UML::Clazz, :to => EClass do
+ { :name => name,
+ :ePackage => trans(parent.parent.is_a?(SimpleXMIMetaModel::UML::Package) ? parent.parent : nil),
+ :eStructuralFeatures => trans(classifier_feature.attribute + associationEnds),
+ :eOperations => trans(classifier_feature.operation),
+ :eSuperTypes => trans(generalizationsAsSubtype.supertypeClass),
+ :eAnnotations => [ EAnnotation.new(:details => trans(modelElement_taggedValue.taggedValue)) ] }
+ end
+
+ transform SimpleXMIMetaModel::UML::TaggedValue, :to => EStringToStringMapEntry do
+ { :key => tag, :value => value}
+ end
+
+ transform SimpleXMIMetaModel::UML::Attribute, :to => EAttribute do
+ typemap = { "String" => EString, "boolean" => EBoolean, "int" => EInt, "long" => ELong, "float" => EFloat }
+ tv = TaggedValueHelper.new(@current_object)
+ { :name => name, :eType => typemap[tv['type']],
+ :eAnnotations => [ EAnnotation.new(:details => trans(modelElement_taggedValue.taggedValue)) ] }
+ end
+
+ transform SimpleXMIMetaModel::UML::Operation, :to => EOperation do
+ { :name => name }
+ end
+
+ transform SimpleXMIMetaModel::UML::AssociationEnd, :to => EReference, :if => :isReference do
+ { :eType => trans(otherEnd.typeClass),
+ :name => otherEnd.name,
+ :eOpposite => trans(otherEnd),
+ :lowerBound => (otherEnd.multiplicity || '0').split('..').first.to_i,
+ :upperBound => (otherEnd.multiplicity || '1').split('..').last.gsub('*','-1').to_i,
+ :containment => (aggregation == 'composite'),
+ :eAnnotations => [ EAnnotation.new(:details => trans(modelElement_taggedValue.taggedValue)) ] }
+ end
+
+ method :isReference do
+ otherEnd.isNavigable == 'true' ||
+ # composite associations are bidirectional
+ aggregation == 'composite' || otherEnd.aggregation == 'composite'
+ end
+end
diff --git a/lib/puppet/vendor/safe_yaml/PUPPET_README.md b/lib/puppet/vendor/safe_yaml/PUPPET_README.md
new file mode 100644
index 000000000..55be8867c
--- /dev/null
+++ b/lib/puppet/vendor/safe_yaml/PUPPET_README.md
@@ -0,0 +1,6 @@
+SafeYAML
+=============================================
+
+safe_yaml version 0.9.2
+
+Copied from https://github.com/dtao/safe_yaml/tree/c5591b790472f7db413aaa28716f86ddf4929f48
diff --git a/lib/puppet/vendor/semantic/PUPPET_README.md b/lib/puppet/vendor/semantic/PUPPET_README.md
new file mode 100644
index 000000000..dc9d4ae62
--- /dev/null
+++ b/lib/puppet/vendor/semantic/PUPPET_README.md
@@ -0,0 +1,6 @@
+Semantic
+========
+
+Semantic 0.0.1
+
+Copied from https://github.com/puppetlabs/semantic/tree/6d17d5283e0868cfc3d74d1e9d37b572e1739b6e
diff --git a/spec/fixtures/integration/node/environment/sitedir2/00_a.pp b/spec/fixtures/integration/node/environment/sitedir2/00_a.pp
new file mode 100644
index 000000000..508992fc3
--- /dev/null
+++ b/spec/fixtures/integration/node/environment/sitedir2/00_a.pp
@@ -0,0 +1,2 @@
+class a {}
+$a = 10
\ No newline at end of file
diff --git a/spec/fixtures/integration/node/environment/sitedir2/02_folder/01_b.pp b/spec/fixtures/integration/node/environment/sitedir2/02_folder/01_b.pp
new file mode 100644
index 000000000..25e339b4c
--- /dev/null
+++ b/spec/fixtures/integration/node/environment/sitedir2/02_folder/01_b.pp
@@ -0,0 +1,6 @@
+class b {}
+
+# if the files are evaluated in the wrong order, the file 'b' has a reference
+# to $a (set in file 'a') and with strict variable lookup should raise an error
+# and fail this test.
+$b = $a # error if $a not set in strict mode
diff --git a/spec/fixtures/integration/node/environment/sitedir2/03_c.pp b/spec/fixtures/integration/node/environment/sitedir2/03_c.pp
new file mode 100644
index 000000000..33393065c
--- /dev/null
+++ b/spec/fixtures/integration/node/environment/sitedir2/03_c.pp
@@ -0,0 +1 @@
+$c = $a + $b
\ No newline at end of file
diff --git a/spec/fixtures/integration/node/environment/sitedir2/04_include.pp b/spec/fixtures/integration/node/environment/sitedir2/04_include.pp
new file mode 100644
index 000000000..0187959f0
--- /dev/null
+++ b/spec/fixtures/integration/node/environment/sitedir2/04_include.pp
@@ -0,0 +1,2 @@
+include a, b
+notify { "variables": message => "a: $a, b: $b c: $c" }
diff --git a/spec/fixtures/releases/jamtur01-apache/manifests/vhost.pp b/spec/fixtures/releases/jamtur01-apache/manifests/vhost.pp
index 2fe6ed204..a210f3557 100644
--- a/spec/fixtures/releases/jamtur01-apache/manifests/vhost.pp
+++ b/spec/fixtures/releases/jamtur01-apache/manifests/vhost.pp
@@ -1,15 +1,15 @@
define apache::vhost( $port, $docroot, $ssl=true, $template='apache/vhost-default.conf.erb', $priority, $serveraliases = '' ) {
include apache
$vdir = $operatingsystem? {
'ubuntu' => '/etc/apache2/sites-enabled/',
default => '/etc/httpd/conf.d',
}
file{"${vdir}/${priority}-${name}":
content => template($template),
owner => 'root',
group => 'root',
- mode => '777',
+ mode => '0777',
require => Package['httpd'],
notify => Service['httpd'],
}
}
diff --git a/spec/fixtures/unit/indirector/hiera/global.yaml b/spec/fixtures/unit/indirector/hiera/global.yaml
new file mode 100644
index 000000000..0853e0ec1
--- /dev/null
+++ b/spec/fixtures/unit/indirector/hiera/global.yaml
@@ -0,0 +1,10 @@
+---
+integer: 3000
+string: 'apache'
+hash:
+ user: 'Hightower'
+ group: 'admin'
+ mode: '0644'
+array:
+ - '0.ntp.puppetlabs.com'
+ - '1.ntp.puppetlabs.com'
diff --git a/spec/fixtures/unit/indirector/hiera/invalid.yaml b/spec/fixtures/unit/indirector/hiera/invalid.yaml
new file mode 100644
index 000000000..9a84fa87c
--- /dev/null
+++ b/spec/fixtures/unit/indirector/hiera/invalid.yaml
@@ -0,0 +1 @@
+{ invalid:
diff --git a/spec/fixtures/unit/parser/functions/create_resources/foo/manifests/init.pp b/spec/fixtures/unit/parser/functions/create_resources/foo/manifests/init.pp
new file mode 100644
index 000000000..1772d0bae
--- /dev/null
+++ b/spec/fixtures/unit/parser/functions/create_resources/foo/manifests/init.pp
@@ -0,0 +1,3 @@
+class foo {
+ create_resources('foo::wrongdefine', {'blah'=>{'one'=>'two'}})
+}
diff --git a/spec/fixtures/unit/parser/functions/create_resources/foo/manifests/wrongdefine.pp b/spec/fixtures/unit/parser/functions/create_resources/foo/manifests/wrongdefine.pp
new file mode 100644
index 000000000..c933d89ec
--- /dev/null
+++ b/spec/fixtures/unit/parser/functions/create_resources/foo/manifests/wrongdefine.pp
@@ -0,0 +1,3 @@
+define foo::wrongdefine($one){
+ $foo = $one,
+}
diff --git a/spec/fixtures/unit/parser/lexer/argumentdefaults.pp b/spec/fixtures/unit/parser/lexer/argumentdefaults.pp
index eac9dd757..a296c2fb0 100644
--- a/spec/fixtures/unit/parser/lexer/argumentdefaults.pp
+++ b/spec/fixtures/unit/parser/lexer/argumentdefaults.pp
@@ -1,14 +1,14 @@
# $Id$
-define testargs($file, $mode = 755) {
+define testargs($file, $mode = '0755') {
file { $file: ensure => file, mode => $mode }
}
testargs { "testingname":
file => "/tmp/argumenttest1"
}
testargs { "testingother":
file => "/tmp/argumenttest2",
- mode => 644
+ mode => '0644'
}
diff --git a/spec/fixtures/unit/parser/lexer/casestatement.pp b/spec/fixtures/unit/parser/lexer/casestatement.pp
index 66ecd72b9..c17242791 100644
--- a/spec/fixtures/unit/parser/lexer/casestatement.pp
+++ b/spec/fixtures/unit/parser/lexer/casestatement.pp
@@ -1,65 +1,65 @@
# $Id$
$var = "value"
case $var {
"nope": {
- file { "/tmp/fakefile": mode => 644, ensure => file }
+ file { "/tmp/fakefile": mode => '0644', ensure => file }
}
"value": {
- file { "/tmp/existsfile": mode => 755, ensure => file }
+ file { "/tmp/existsfile": mode => '0755', ensure => file }
}
}
$ovar = "yayness"
case $ovar {
"fooness": {
- file { "/tmp/nostillexistsfile": mode => 644, ensure => file }
+ file { "/tmp/nostillexistsfile": mode => '0644', ensure => file }
}
"booness", "yayness": {
case $var {
"nep": {
- file { "/tmp/noexistsfile": mode => 644, ensure => file }
+ file { "/tmp/noexistsfile": mode => '0644', ensure => file }
}
"value": {
- file { "/tmp/existsfile2": mode => 755, ensure => file }
+ file { "/tmp/existsfile2": mode => '0755', ensure => file }
}
}
}
}
case $ovar {
"fooness": {
- file { "/tmp/nostillexistsfile": mode => 644, ensure => file }
+ file { "/tmp/nostillexistsfile": mode => '0644', ensure => file }
}
default: {
- file { "/tmp/existsfile3": mode => 755, ensure => file }
+ file { "/tmp/existsfile3": mode => '0755', ensure => file }
}
}
$bool = true
case $bool {
true: {
- file { "/tmp/existsfile4": mode => 755, ensure => file }
+ file { "/tmp/existsfile4": mode => '0755', ensure => file }
}
}
$yay = yay
$a = yay
$b = boo
case $yay {
- $a: { file { "/tmp/existsfile5": mode => 755, ensure => file } }
- $b: { file { "/tmp/existsfile5": mode => 644, ensure => file } }
- default: { file { "/tmp/existsfile5": mode => 711, ensure => file } }
+ $a: { file { "/tmp/existsfile5": mode => '0755', ensure => file } }
+ $b: { file { "/tmp/existsfile5": mode => '0644', ensure => file } }
+ default: { file { "/tmp/existsfile5": mode => '0711', ensure => file } }
}
$regexvar = "exists regex"
case $regexvar {
- "no match": { file { "/tmp/existsfile6": mode => 644, ensure => file } }
- /(.*) regex$/: { file { "/tmp/${1}file6": mode => 755, ensure => file } }
- default: { file { "/tmp/existsfile6": mode => 711, ensure => file } }
+ "no match": { file { "/tmp/existsfile6": mode => '0644', ensure => file } }
+ /(.*) regex$/: { file { "/tmp/${1}file6": mode => '0755', ensure => file } }
+ default: { file { "/tmp/existsfile6": mode => '0711', ensure => file } }
}
diff --git a/spec/fixtures/unit/parser/lexer/classheirarchy.pp b/spec/fixtures/unit/parser/lexer/classheirarchy.pp
index 36619d8b9..5a51a8229 100644
--- a/spec/fixtures/unit/parser/lexer/classheirarchy.pp
+++ b/spec/fixtures/unit/parser/lexer/classheirarchy.pp
@@ -1,15 +1,15 @@
# $Id$
class base {
- file { "/tmp/classheir1": ensure => file, mode => 755 }
+ file { "/tmp/classheir1": ensure => file, mode => '0755' }
}
class sub1 inherits base {
- file { "/tmp/classheir2": ensure => file, mode => 755 }
+ file { "/tmp/classheir2": ensure => file, mode => '0755' }
}
class sub2 inherits base {
- file { "/tmp/classheir3": ensure => file, mode => 755 }
+ file { "/tmp/classheir3": ensure => file, mode => '0755' }
}
include sub1, sub2
diff --git a/spec/fixtures/unit/parser/lexer/classincludes.pp b/spec/fixtures/unit/parser/lexer/classincludes.pp
index bd5b44ed7..0b1bb07a8 100644
--- a/spec/fixtures/unit/parser/lexer/classincludes.pp
+++ b/spec/fixtures/unit/parser/lexer/classincludes.pp
@@ -1,17 +1,17 @@
# $Id$
class base {
- file { "/tmp/classincludes1": ensure => file, mode => 755 }
+ file { "/tmp/classincludes1": ensure => file, mode => '0755' }
}
class sub1 inherits base {
- file { "/tmp/classincludes2": ensure => file, mode => 755 }
+ file { "/tmp/classincludes2": ensure => file, mode => '0755' }
}
class sub2 inherits base {
- file { "/tmp/classincludes3": ensure => file, mode => 755 }
+ file { "/tmp/classincludes3": ensure => file, mode => '0755' }
}
$sub = "sub2"
include sub1, $sub
diff --git a/spec/fixtures/unit/parser/lexer/classpathtest.pp b/spec/fixtures/unit/parser/lexer/classpathtest.pp
index 580333369..fae7cadbf 100644
--- a/spec/fixtures/unit/parser/lexer/classpathtest.pp
+++ b/spec/fixtures/unit/parser/lexer/classpathtest.pp
@@ -1,11 +1,11 @@
# $Id$
define mytype {
- file { "/tmp/classtest": ensure => file, mode => 755 }
+ file { "/tmp/classtest": ensure => file, mode => '0755' }
}
class testing {
mytype { "componentname": }
}
include testing
diff --git a/spec/fixtures/unit/parser/lexer/collection_override.pp b/spec/fixtures/unit/parser/lexer/collection_override.pp
index b1b39ab16..f1063bf97 100644
--- a/spec/fixtures/unit/parser/lexer/collection_override.pp
+++ b/spec/fixtures/unit/parser/lexer/collection_override.pp
@@ -1,8 +1,8 @@
@file {
"/tmp/collection":
content => "whatever"
}
File<| |> {
- mode => 0600
+ mode => '0600'
}
diff --git a/spec/fixtures/unit/parser/lexer/componentrequire.pp b/spec/fixtures/unit/parser/lexer/componentrequire.pp
index a61d2050c..8d67ab161 100644
--- a/spec/fixtures/unit/parser/lexer/componentrequire.pp
+++ b/spec/fixtures/unit/parser/lexer/componentrequire.pp
@@ -1,8 +1,8 @@
define testfile($mode) {
file { $name: mode => $mode, ensure => present }
}
-testfile { "/tmp/testing_component_requires2": mode => 755 }
+testfile { "/tmp/testing_component_requires2": mode => '0755' }
-file { "/tmp/testing_component_requires1": mode => 755, ensure => present,
+file { "/tmp/testing_component_requires1": mode => '0755', ensure => present,
require => Testfile["/tmp/testing_component_requires2"] }
diff --git a/spec/fixtures/unit/parser/lexer/deepclassheirarchy.pp b/spec/fixtures/unit/parser/lexer/deepclassheirarchy.pp
index 249e6334d..8e477f7b7 100644
--- a/spec/fixtures/unit/parser/lexer/deepclassheirarchy.pp
+++ b/spec/fixtures/unit/parser/lexer/deepclassheirarchy.pp
@@ -1,23 +1,23 @@
# $Id$
class base {
- file { "/tmp/deepclassheir1": ensure => file, mode => 755 }
+ file { "/tmp/deepclassheir1": ensure => file, mode => '0755' }
}
class sub1 inherits base {
- file { "/tmp/deepclassheir2": ensure => file, mode => 755 }
+ file { "/tmp/deepclassheir2": ensure => file, mode => '0755' }
}
class sub2 inherits sub1 {
- file { "/tmp/deepclassheir3": ensure => file, mode => 755 }
+ file { "/tmp/deepclassheir3": ensure => file, mode => '0755' }
}
class sub3 inherits sub2 {
- file { "/tmp/deepclassheir4": ensure => file, mode => 755 }
+ file { "/tmp/deepclassheir4": ensure => file, mode => '0755' }
}
class sub4 inherits sub3 {
- file { "/tmp/deepclassheir5": ensure => file, mode => 755 }
+ file { "/tmp/deepclassheir5": ensure => file, mode => '0755' }
}
include sub4
diff --git a/spec/fixtures/unit/parser/lexer/defineoverrides.pp b/spec/fixtures/unit/parser/lexer/defineoverrides.pp
index c68b139e3..bc2b5647e 100644
--- a/spec/fixtures/unit/parser/lexer/defineoverrides.pp
+++ b/spec/fixtures/unit/parser/lexer/defineoverrides.pp
@@ -1,17 +1,17 @@
# $Id$
$file = "/tmp/defineoverrides1"
define myfile($mode) {
file { $name: ensure => file, mode => $mode }
}
class base {
- myfile { $file: mode => 644 }
+ myfile { $file: mode => '0644' }
}
class sub inherits base {
- Myfile[$file] { mode => 755, } # test the end-comma
+ Myfile[$file] { mode => '0755', } # test the end-comma
}
include sub
diff --git a/spec/fixtures/unit/parser/lexer/filecreate.pp b/spec/fixtures/unit/parser/lexer/filecreate.pp
index d7972c234..b8edcd540 100644
--- a/spec/fixtures/unit/parser/lexer/filecreate.pp
+++ b/spec/fixtures/unit/parser/lexer/filecreate.pp
@@ -1,11 +1,11 @@
# $Id$
file {
- "/tmp/createatest": ensure => file, mode => 755;
- "/tmp/createbtest": ensure => file, mode => 755
+ "/tmp/createatest": ensure => file, mode => '0755';
+ "/tmp/createbtest": ensure => file, mode => '0755'
}
file {
"/tmp/createctest": ensure => file;
"/tmp/createdtest": ensure => file;
}
diff --git a/spec/fixtures/unit/parser/lexer/ifexpression.pp b/spec/fixtures/unit/parser/lexer/ifexpression.pp
index 29a637291..131530dcb 100644
--- a/spec/fixtures/unit/parser/lexer/ifexpression.pp
+++ b/spec/fixtures/unit/parser/lexer/ifexpression.pp
@@ -1,12 +1,12 @@
$one = 1
$two = 2
if ($one < $two) and (($two < 3) or ($two == 2)) {
notice("True!")
}
if "test regex" =~ /(.*) regex/ {
file {
- "/tmp/${1}iftest": ensure => file, mode => 0755
+ "/tmp/${1}iftest": ensure => file, mode => '0755'
}
}
diff --git a/spec/fixtures/unit/parser/lexer/implicititeration.pp b/spec/fixtures/unit/parser/lexer/implicititeration.pp
index 6f34cb29c..75719e985 100644
--- a/spec/fixtures/unit/parser/lexer/implicititeration.pp
+++ b/spec/fixtures/unit/parser/lexer/implicititeration.pp
@@ -1,15 +1,15 @@
# $Id$
$files = ["/tmp/iterationatest", "/tmp/iterationbtest"]
-file { $files: ensure => file, mode => 755 }
+file { $files: ensure => file, mode => '0755' }
file { ["/tmp/iterationctest", "/tmp/iterationdtest"]:
ensure => file,
- mode => 755
+ mode => '0755'
}
file {
- ["/tmp/iterationetest", "/tmp/iterationftest"]: ensure => file, mode => 755;
- ["/tmp/iterationgtest", "/tmp/iterationhtest"]: ensure => file, mode => 755;
+ ["/tmp/iterationetest", "/tmp/iterationftest"]: ensure => file, mode => '0755';
+ ["/tmp/iterationgtest", "/tmp/iterationhtest"]: ensure => file, mode => '0755';
}
diff --git a/spec/fixtures/unit/parser/lexer/multipleinstances.pp b/spec/fixtures/unit/parser/lexer/multipleinstances.pp
index 2f9b3c2e8..bc0cdee24 100644
--- a/spec/fixtures/unit/parser/lexer/multipleinstances.pp
+++ b/spec/fixtures/unit/parser/lexer/multipleinstances.pp
@@ -1,7 +1,7 @@
# $Id$
file {
- "/tmp/multipleinstancesa": ensure => file, mode => 755;
- "/tmp/multipleinstancesb": ensure => file, mode => 755;
- "/tmp/multipleinstancesc": ensure => file, mode => 755;
+ "/tmp/multipleinstancesa": ensure => file, mode => '0755';
+ "/tmp/multipleinstancesb": ensure => file, mode => '0755';
+ "/tmp/multipleinstancesc": ensure => file, mode => '0755';
}
diff --git a/spec/fixtures/unit/parser/lexer/multisubs.pp b/spec/fixtures/unit/parser/lexer/multisubs.pp
index bcec69e2a..079edf336 100644
--- a/spec/fixtures/unit/parser/lexer/multisubs.pp
+++ b/spec/fixtures/unit/parser/lexer/multisubs.pp
@@ -1,13 +1,13 @@
class base {
- file { "/tmp/multisubtest": content => "base", mode => 644 }
+ file { "/tmp/multisubtest": content => "base", mode => '0644' }
}
class sub1 inherits base {
- File["/tmp/multisubtest"] { mode => 755 }
+ File["/tmp/multisubtest"] { mode => '0755' }
}
class sub2 inherits base {
File["/tmp/multisubtest"] { content => sub2 }
}
include sub1, sub2
diff --git a/spec/fixtures/unit/parser/lexer/namevartest.pp b/spec/fixtures/unit/parser/lexer/namevartest.pp
index dbee1c356..d080cb467 100644
--- a/spec/fixtures/unit/parser/lexer/namevartest.pp
+++ b/spec/fixtures/unit/parser/lexer/namevartest.pp
@@ -1,9 +1,9 @@
define filetest($mode, $ensure = file) {
file { $name:
mode => $mode,
ensure => $ensure
}
}
-filetest { "/tmp/testfiletest": mode => 644}
-filetest { "/tmp/testdirtest": mode => 755, ensure => directory}
+filetest { "/tmp/testfiletest": mode => '0644'}
+filetest { "/tmp/testdirtest": mode => '0755', ensure => directory}
diff --git a/spec/fixtures/unit/parser/lexer/simpledefaults.pp b/spec/fixtures/unit/parser/lexer/simpledefaults.pp
index 63d199a68..bbe61e25c 100644
--- a/spec/fixtures/unit/parser/lexer/simpledefaults.pp
+++ b/spec/fixtures/unit/parser/lexer/simpledefaults.pp
@@ -1,5 +1,5 @@
# $Id$
-File { mode => 755 }
+File { mode => '0755' }
file { "/tmp/defaulttest": ensure => file }
diff --git a/spec/fixtures/unit/pops/parser/lexer/argumentdefaults.pp b/spec/fixtures/unit/pops/parser/lexer/argumentdefaults.pp
index eac9dd757..29c7227ea 100644
--- a/spec/fixtures/unit/pops/parser/lexer/argumentdefaults.pp
+++ b/spec/fixtures/unit/pops/parser/lexer/argumentdefaults.pp
@@ -1,14 +1,14 @@
# $Id$
define testargs($file, $mode = 755) {
file { $file: ensure => file, mode => $mode }
}
testargs { "testingname":
file => "/tmp/argumenttest1"
}
testargs { "testingother":
file => "/tmp/argumenttest2",
- mode => 644
+ mode => '0644'
}
diff --git a/spec/fixtures/unit/pops/parser/lexer/casestatement.pp b/spec/fixtures/unit/pops/parser/lexer/casestatement.pp
index 66ecd72b9..c17242791 100644
--- a/spec/fixtures/unit/pops/parser/lexer/casestatement.pp
+++ b/spec/fixtures/unit/pops/parser/lexer/casestatement.pp
@@ -1,65 +1,65 @@
# $Id$
$var = "value"
case $var {
"nope": {
- file { "/tmp/fakefile": mode => 644, ensure => file }
+ file { "/tmp/fakefile": mode => '0644', ensure => file }
}
"value": {
- file { "/tmp/existsfile": mode => 755, ensure => file }
+ file { "/tmp/existsfile": mode => '0755', ensure => file }
}
}
$ovar = "yayness"
case $ovar {
"fooness": {
- file { "/tmp/nostillexistsfile": mode => 644, ensure => file }
+ file { "/tmp/nostillexistsfile": mode => '0644', ensure => file }
}
"booness", "yayness": {
case $var {
"nep": {
- file { "/tmp/noexistsfile": mode => 644, ensure => file }
+ file { "/tmp/noexistsfile": mode => '0644', ensure => file }
}
"value": {
- file { "/tmp/existsfile2": mode => 755, ensure => file }
+ file { "/tmp/existsfile2": mode => '0755', ensure => file }
}
}
}
}
case $ovar {
"fooness": {
- file { "/tmp/nostillexistsfile": mode => 644, ensure => file }
+ file { "/tmp/nostillexistsfile": mode => '0644', ensure => file }
}
default: {
- file { "/tmp/existsfile3": mode => 755, ensure => file }
+ file { "/tmp/existsfile3": mode => '0755', ensure => file }
}
}
$bool = true
case $bool {
true: {
- file { "/tmp/existsfile4": mode => 755, ensure => file }
+ file { "/tmp/existsfile4": mode => '0755', ensure => file }
}
}
$yay = yay
$a = yay
$b = boo
case $yay {
- $a: { file { "/tmp/existsfile5": mode => 755, ensure => file } }
- $b: { file { "/tmp/existsfile5": mode => 644, ensure => file } }
- default: { file { "/tmp/existsfile5": mode => 711, ensure => file } }
+ $a: { file { "/tmp/existsfile5": mode => '0755', ensure => file } }
+ $b: { file { "/tmp/existsfile5": mode => '0644', ensure => file } }
+ default: { file { "/tmp/existsfile5": mode => '0711', ensure => file } }
}
$regexvar = "exists regex"
case $regexvar {
- "no match": { file { "/tmp/existsfile6": mode => 644, ensure => file } }
- /(.*) regex$/: { file { "/tmp/${1}file6": mode => 755, ensure => file } }
- default: { file { "/tmp/existsfile6": mode => 711, ensure => file } }
+ "no match": { file { "/tmp/existsfile6": mode => '0644', ensure => file } }
+ /(.*) regex$/: { file { "/tmp/${1}file6": mode => '0755', ensure => file } }
+ default: { file { "/tmp/existsfile6": mode => '0711', ensure => file } }
}
diff --git a/spec/fixtures/unit/pops/parser/lexer/classheirarchy.pp b/spec/fixtures/unit/pops/parser/lexer/classheirarchy.pp
index 36619d8b9..5a51a8229 100644
--- a/spec/fixtures/unit/pops/parser/lexer/classheirarchy.pp
+++ b/spec/fixtures/unit/pops/parser/lexer/classheirarchy.pp
@@ -1,15 +1,15 @@
# $Id$
class base {
- file { "/tmp/classheir1": ensure => file, mode => 755 }
+ file { "/tmp/classheir1": ensure => file, mode => '0755' }
}
class sub1 inherits base {
- file { "/tmp/classheir2": ensure => file, mode => 755 }
+ file { "/tmp/classheir2": ensure => file, mode => '0755' }
}
class sub2 inherits base {
- file { "/tmp/classheir3": ensure => file, mode => 755 }
+ file { "/tmp/classheir3": ensure => file, mode => '0755' }
}
include sub1, sub2
diff --git a/spec/fixtures/unit/pops/parser/lexer/classincludes.pp b/spec/fixtures/unit/pops/parser/lexer/classincludes.pp
index bd5b44ed7..0b1bb07a8 100644
--- a/spec/fixtures/unit/pops/parser/lexer/classincludes.pp
+++ b/spec/fixtures/unit/pops/parser/lexer/classincludes.pp
@@ -1,17 +1,17 @@
# $Id$
class base {
- file { "/tmp/classincludes1": ensure => file, mode => 755 }
+ file { "/tmp/classincludes1": ensure => file, mode => '0755' }
}
class sub1 inherits base {
- file { "/tmp/classincludes2": ensure => file, mode => 755 }
+ file { "/tmp/classincludes2": ensure => file, mode => '0755' }
}
class sub2 inherits base {
- file { "/tmp/classincludes3": ensure => file, mode => 755 }
+ file { "/tmp/classincludes3": ensure => file, mode => '0755' }
}
$sub = "sub2"
include sub1, $sub
diff --git a/spec/fixtures/unit/pops/parser/lexer/classpathtest.pp b/spec/fixtures/unit/pops/parser/lexer/classpathtest.pp
index 580333369..fae7cadbf 100644
--- a/spec/fixtures/unit/pops/parser/lexer/classpathtest.pp
+++ b/spec/fixtures/unit/pops/parser/lexer/classpathtest.pp
@@ -1,11 +1,11 @@
# $Id$
define mytype {
- file { "/tmp/classtest": ensure => file, mode => 755 }
+ file { "/tmp/classtest": ensure => file, mode => '0755' }
}
class testing {
mytype { "componentname": }
}
include testing
diff --git a/spec/fixtures/unit/pops/parser/lexer/collection_override.pp b/spec/fixtures/unit/pops/parser/lexer/collection_override.pp
index b1b39ab16..f1063bf97 100644
--- a/spec/fixtures/unit/pops/parser/lexer/collection_override.pp
+++ b/spec/fixtures/unit/pops/parser/lexer/collection_override.pp
@@ -1,8 +1,8 @@
@file {
"/tmp/collection":
content => "whatever"
}
File<| |> {
- mode => 0600
+ mode => '0600'
}
diff --git a/spec/fixtures/unit/pops/parser/lexer/componentrequire.pp b/spec/fixtures/unit/pops/parser/lexer/componentrequire.pp
index a61d2050c..8d67ab161 100644
--- a/spec/fixtures/unit/pops/parser/lexer/componentrequire.pp
+++ b/spec/fixtures/unit/pops/parser/lexer/componentrequire.pp
@@ -1,8 +1,8 @@
define testfile($mode) {
file { $name: mode => $mode, ensure => present }
}
-testfile { "/tmp/testing_component_requires2": mode => 755 }
+testfile { "/tmp/testing_component_requires2": mode => '0755' }
-file { "/tmp/testing_component_requires1": mode => 755, ensure => present,
+file { "/tmp/testing_component_requires1": mode => '0755', ensure => present,
require => Testfile["/tmp/testing_component_requires2"] }
diff --git a/spec/fixtures/unit/pops/parser/lexer/deepclassheirarchy.pp b/spec/fixtures/unit/pops/parser/lexer/deepclassheirarchy.pp
index 249e6334d..8e477f7b7 100644
--- a/spec/fixtures/unit/pops/parser/lexer/deepclassheirarchy.pp
+++ b/spec/fixtures/unit/pops/parser/lexer/deepclassheirarchy.pp
@@ -1,23 +1,23 @@
# $Id$
class base {
- file { "/tmp/deepclassheir1": ensure => file, mode => 755 }
+ file { "/tmp/deepclassheir1": ensure => file, mode => '0755' }
}
class sub1 inherits base {
- file { "/tmp/deepclassheir2": ensure => file, mode => 755 }
+ file { "/tmp/deepclassheir2": ensure => file, mode => '0755' }
}
class sub2 inherits sub1 {
- file { "/tmp/deepclassheir3": ensure => file, mode => 755 }
+ file { "/tmp/deepclassheir3": ensure => file, mode => '0755' }
}
class sub3 inherits sub2 {
- file { "/tmp/deepclassheir4": ensure => file, mode => 755 }
+ file { "/tmp/deepclassheir4": ensure => file, mode => '0755' }
}
class sub4 inherits sub3 {
- file { "/tmp/deepclassheir5": ensure => file, mode => 755 }
+ file { "/tmp/deepclassheir5": ensure => file, mode => '0755' }
}
include sub4
diff --git a/spec/fixtures/unit/pops/parser/lexer/defineoverrides.pp b/spec/fixtures/unit/pops/parser/lexer/defineoverrides.pp
index c68b139e3..bc2b5647e 100644
--- a/spec/fixtures/unit/pops/parser/lexer/defineoverrides.pp
+++ b/spec/fixtures/unit/pops/parser/lexer/defineoverrides.pp
@@ -1,17 +1,17 @@
# $Id$
$file = "/tmp/defineoverrides1"
define myfile($mode) {
file { $name: ensure => file, mode => $mode }
}
class base {
- myfile { $file: mode => 644 }
+ myfile { $file: mode => '0644' }
}
class sub inherits base {
- Myfile[$file] { mode => 755, } # test the end-comma
+ Myfile[$file] { mode => '0755', } # test the end-comma
}
include sub
diff --git a/spec/fixtures/unit/pops/parser/lexer/filecreate.pp b/spec/fixtures/unit/pops/parser/lexer/filecreate.pp
index d7972c234..b8edcd540 100644
--- a/spec/fixtures/unit/pops/parser/lexer/filecreate.pp
+++ b/spec/fixtures/unit/pops/parser/lexer/filecreate.pp
@@ -1,11 +1,11 @@
# $Id$
file {
- "/tmp/createatest": ensure => file, mode => 755;
- "/tmp/createbtest": ensure => file, mode => 755
+ "/tmp/createatest": ensure => file, mode => '0755';
+ "/tmp/createbtest": ensure => file, mode => '0755'
}
file {
"/tmp/createctest": ensure => file;
"/tmp/createdtest": ensure => file;
}
diff --git a/spec/fixtures/unit/pops/parser/lexer/ifexpression.pp b/spec/fixtures/unit/pops/parser/lexer/ifexpression.pp
index 29a637291..131530dcb 100644
--- a/spec/fixtures/unit/pops/parser/lexer/ifexpression.pp
+++ b/spec/fixtures/unit/pops/parser/lexer/ifexpression.pp
@@ -1,12 +1,12 @@
$one = 1
$two = 2
if ($one < $two) and (($two < 3) or ($two == 2)) {
notice("True!")
}
if "test regex" =~ /(.*) regex/ {
file {
- "/tmp/${1}iftest": ensure => file, mode => 0755
+ "/tmp/${1}iftest": ensure => file, mode => '0755'
}
}
diff --git a/spec/fixtures/unit/pops/parser/lexer/implicititeration.pp b/spec/fixtures/unit/pops/parser/lexer/implicititeration.pp
index 6f34cb29c..75719e985 100644
--- a/spec/fixtures/unit/pops/parser/lexer/implicititeration.pp
+++ b/spec/fixtures/unit/pops/parser/lexer/implicititeration.pp
@@ -1,15 +1,15 @@
# $Id$
$files = ["/tmp/iterationatest", "/tmp/iterationbtest"]
-file { $files: ensure => file, mode => 755 }
+file { $files: ensure => file, mode => '0755' }
file { ["/tmp/iterationctest", "/tmp/iterationdtest"]:
ensure => file,
- mode => 755
+ mode => '0755'
}
file {
- ["/tmp/iterationetest", "/tmp/iterationftest"]: ensure => file, mode => 755;
- ["/tmp/iterationgtest", "/tmp/iterationhtest"]: ensure => file, mode => 755;
+ ["/tmp/iterationetest", "/tmp/iterationftest"]: ensure => file, mode => '0755';
+ ["/tmp/iterationgtest", "/tmp/iterationhtest"]: ensure => file, mode => '0755';
}
diff --git a/spec/fixtures/unit/pops/parser/lexer/multipleinstances.pp b/spec/fixtures/unit/pops/parser/lexer/multipleinstances.pp
index 2f9b3c2e8..bc0cdee24 100644
--- a/spec/fixtures/unit/pops/parser/lexer/multipleinstances.pp
+++ b/spec/fixtures/unit/pops/parser/lexer/multipleinstances.pp
@@ -1,7 +1,7 @@
# $Id$
file {
- "/tmp/multipleinstancesa": ensure => file, mode => 755;
- "/tmp/multipleinstancesb": ensure => file, mode => 755;
- "/tmp/multipleinstancesc": ensure => file, mode => 755;
+ "/tmp/multipleinstancesa": ensure => file, mode => '0755';
+ "/tmp/multipleinstancesb": ensure => file, mode => '0755';
+ "/tmp/multipleinstancesc": ensure => file, mode => '0755';
}
diff --git a/spec/fixtures/unit/pops/parser/lexer/multisubs.pp b/spec/fixtures/unit/pops/parser/lexer/multisubs.pp
index bcec69e2a..079edf336 100644
--- a/spec/fixtures/unit/pops/parser/lexer/multisubs.pp
+++ b/spec/fixtures/unit/pops/parser/lexer/multisubs.pp
@@ -1,13 +1,13 @@
class base {
- file { "/tmp/multisubtest": content => "base", mode => 644 }
+ file { "/tmp/multisubtest": content => "base", mode => '0644' }
}
class sub1 inherits base {
- File["/tmp/multisubtest"] { mode => 755 }
+ File["/tmp/multisubtest"] { mode => '0755' }
}
class sub2 inherits base {
File["/tmp/multisubtest"] { content => sub2 }
}
include sub1, sub2
diff --git a/spec/fixtures/unit/pops/parser/lexer/namevartest.pp b/spec/fixtures/unit/pops/parser/lexer/namevartest.pp
index dbee1c356..d080cb467 100644
--- a/spec/fixtures/unit/pops/parser/lexer/namevartest.pp
+++ b/spec/fixtures/unit/pops/parser/lexer/namevartest.pp
@@ -1,9 +1,9 @@
define filetest($mode, $ensure = file) {
file { $name:
mode => $mode,
ensure => $ensure
}
}
-filetest { "/tmp/testfiletest": mode => 644}
-filetest { "/tmp/testdirtest": mode => 755, ensure => directory}
+filetest { "/tmp/testfiletest": mode => '0644'}
+filetest { "/tmp/testdirtest": mode => '0755', ensure => directory}
diff --git a/spec/fixtures/unit/pops/parser/lexer/simpledefaults.pp b/spec/fixtures/unit/pops/parser/lexer/simpledefaults.pp
index 63d199a68..bbe61e25c 100644
--- a/spec/fixtures/unit/pops/parser/lexer/simpledefaults.pp
+++ b/spec/fixtures/unit/pops/parser/lexer/simpledefaults.pp
@@ -1,5 +1,5 @@
# $Id$
-File { mode => 755 }
+File { mode => '0755' }
file { "/tmp/defaulttest": ensure => file }
diff --git a/spec/fixtures/unit/provider/package/gem/gem-list-single-package b/spec/fixtures/unit/provider/package/gem/gem-list-single-package
new file mode 100644
index 000000000..41576e3e6
--- /dev/null
+++ b/spec/fixtures/unit/provider/package/gem/gem-list-single-package
@@ -0,0 +1,4 @@
+
+*** REMOTE GEMS ***
+
+bundler (1.6.2)
diff --git a/spec/fixtures/unit/type/user/authorized_keys b/spec/fixtures/unit/type/user/authorized_keys
index 265173ef4..dd1807e56 100644
--- a/spec/fixtures/unit/type/user/authorized_keys
+++ b/spec/fixtures/unit/type/user/authorized_keys
@@ -1,5 +1,5 @@
# fixture for testing ssh key purging
-ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDTXvM7AslzjNUYrPLiNVBsF5VnqL2RmqrkzscdVdHzVxvieNwmLGeUkg8EfXPiz7j5F/Lr0J8oItTCWzyN2KmM+DhUMjvP4AbELO/VYbnVrZICRiUNYSO3EN9/uapKAuiev88d7ynbonCU0VZoTPg/ug4OondOrLCtcGri5ltF+mausGfAYiFAQVEWqXV+1tyejoawJ884etb3n4ilpsrH9JK6AtOkEWVD3TDrNi29O1mQQ/Cn88g472zAJ+DhsIn+iehtfX5nmOtDNN/1t1bGMIBzkSYEAYwUiRJbRXvbobT7qKZQPA3dh0m8AYQS5/hd4/c4pmlxL8kgr24SnBY5 key1 keyname1
+ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDTXvM7AslzjNUYrPLiNVBsF5VnqL2RmqrkzscdVdHzVxvieNwmLGeUkg8EfXPiz7j5F/Lr0J8oItTCWzyN2KmM+DhUMjvP4AbELO/VYbnVrZICRiUNYSO3EN9/uapKAuiev88d7ynbonCU0VZoTPg/ug4OondOrLCtcGri5ltF+mausGfAYiFAQVEWqXV+1tyejoawJ884etb3n4ilpsrH9JK6AtOkEWVD3TDrNi29O1mQQ/Cn88g472zAJ+DhsIn+iehtfX5nmOtDNN/1t1bGMIBzkSYEAYwUiRJbRXvbobT7qKZQPA3dh0m8AYQS5/hd4/c4pmlxL8kgr24SnBY5 key1 name
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDTXvM7AslzjNUYrPLiNVBsF5VnqL2RmqrkzscdVdHzVxvieNwmLGeUkg8EfXPiz7j5F/Lr0J8oItTCWzyN2KmM+DhUMjvP4AbELO/VYbnVrZICRiUNYSO3EN9/uapKAuiev88d7ynbonCU0VZoTPg/ug4OondOrLCtcGri5ltF+mausGfAYiFAQVEWqXV+1tyejoawJ884etb3n4ilpsrH9JK6AtOkEWVD3TDrNi29O1mQQ/Cn88g472zAJ+DhsIn+iehtfX5nmOtDNN/1t1bGMIBzkSYEAYwUiRJbRXvbobT7qKZQPA3dh0m8AYQS5/hd4/c4pmlxL8kgr24SnBY5 keyname2
#ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDTXvM7AslzjNUYrPLiNVBsF5VnqL2RmqrkzscdVdHzVxvieNwmLGeUkg8EfXPiz7j5F/Lr0J8oItTCWzyN2KmM+DhUMjvP4AbELO/VYbnVrZICRiUNYSO3EN9/uapKAuiev88d7ynbonCU0VZoTPg/ug4OondOrLCtcGri5ltF+mausGfAYiFAQVEWqXV+1tyejoawJ884etb3n4ilpsrH9JK6AtOkEWVD3TDrNi29O1mQQ/Cn88g472zAJ+DhsIn+iehtfX5nmOtDNN/1t1bGMIBzkSYEAYwUiRJbRXvbobT7qKZQPA3dh0m8AYQS5/hd4/c4pmlxL8kgr24SnBY5 keyname3
diff --git a/spec/integration/agent/logging_spec.rb b/spec/integration/agent/logging_spec.rb
index c686397c8..1c4394a6e 100755
--- a/spec/integration/agent/logging_spec.rb
+++ b/spec/integration/agent/logging_spec.rb
@@ -1,178 +1,182 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet'
require 'puppet/daemon'
require 'puppet/application/agent'
# The command line flags affecting #20900 and #20919:
#
# --onetime
# --daemonize
# --no-daemonize
# --logdest
# --verbose
# --debug
# (no flags) (-)
#
# d and nd (--daemonize and --no-daemonize) are mutually exclusive
#
# Combinations without logdest, verbose or debug:
#
# --onetime --daemonize
# --onetime --no-daemonize
# --onetime
# --daemonize
# --no-daemonize
# -
#
# 6 cases X [--logdest=console, --logdest=syslog, --logdest=/some/file, <nothing added>]
# = 24 cases to test
#
# X [--verbose, --debug, <nothing added>]
# = 72 cases to test
#
# Expectations of behavior are defined in the expected_loggers, expected_level methods,
# so adapting to a change in logging behavior should hopefully be mostly a matter of
# adjusting the logic in those methods to define new behavior.
#
# Note that this test does not have anything to say about what happens to logging after
# daemonizing.
describe 'agent logging' do
ONETIME = '--onetime'
DAEMONIZE = '--daemonize'
NO_DAEMONIZE = '--no-daemonize'
LOGDEST_FILE = '--logdest=/dev/null/foo'
LOGDEST_SYSLOG = '--logdest=syslog'
LOGDEST_CONSOLE = '--logdest=console'
VERBOSE = '--verbose'
DEBUG = '--debug'
DEFAULT_LOG_LEVEL = :notice
INFO_LEVEL = :info
DEBUG_LEVEL = :debug
CONSOLE = :console
SYSLOG = :syslog
EVENTLOG = :eventlog
FILE = :file
ONETIME_DAEMONIZE_ARGS = [
[ONETIME],
[ONETIME, DAEMONIZE],
[ONETIME, NO_DAEMONIZE],
[DAEMONIZE],
[NO_DAEMONIZE],
[],
]
LOG_DEST_ARGS = [LOGDEST_FILE, LOGDEST_SYSLOG, LOGDEST_CONSOLE, nil]
LOG_LEVEL_ARGS = [VERBOSE, DEBUG, nil]
shared_examples "an agent" do |argv, expected|
before(:each) do
# Don't actually run the agent, bypassing cert checks, forking and the puppet run itself
Puppet::Application::Agent.any_instance.stubs(:run_command)
end
def double_of_bin_puppet_agent_call(argv)
argv.unshift('agent')
command_line = Puppet::Util::CommandLine.new('puppet', argv)
command_line.execute
end
if Puppet.features.microsoft_windows? && argv.include?(DAEMONIZE)
it "should exit on a platform which cannot daemonize if the --daemonize flag is set" do
expect { double_of_bin_puppet_agent_call(argv) }.to raise_error(SystemExit)
end
else
it "when evoked with #{argv}, logs to #{expected[:loggers].inspect} at level #{expected[:level]}" do
+ if Facter.value(:kernelmajversion).to_f < 6.0
+ pending("requires win32-eventlog gem upgrade to 0.6.2 on Windows 2003")
+ end
+
# This logger is created by the Puppet::Settings object which creates and
# applies a catalog to ensure that configuration files and users are in
# place.
#
# It's not something we are specifically testing here since it occurs
# regardless of user flags.
Puppet::Util::Log.expects(:newdestination).with(instance_of(Puppet::Transaction::Report)).at_least_once
expected[:loggers].each do |logclass|
Puppet::Util::Log.expects(:newdestination).with(logclass).at_least_once
end
double_of_bin_puppet_agent_call(argv)
Puppet::Util::Log.level.should == expected[:level]
end
end
end
def self.no_log_dest_set_in(argv)
([LOGDEST_SYSLOG, LOGDEST_CONSOLE, LOGDEST_FILE] & argv).empty?
end
def self.verbose_or_debug_set_in_argv(argv)
!([VERBOSE, DEBUG] & argv).empty?
end
def self.log_dest_is_set_to(argv, log_dest)
argv.include?(log_dest)
end
# @param argv Array of commandline flags
# @return Set<Symbol> of expected loggers
def self.expected_loggers(argv)
loggers = Set.new
loggers << CONSOLE if verbose_or_debug_set_in_argv(argv)
loggers << 'console' if log_dest_is_set_to(argv, LOGDEST_CONSOLE)
loggers << '/dev/null/foo' if log_dest_is_set_to(argv, LOGDEST_FILE)
if Puppet.features.microsoft_windows?
# an explicit call to --logdest syslog on windows is swallowed silently with no
# logger created (see #suitable() of the syslog Puppet::Util::Log::Destination subclass)
# however Puppet::Util::Log.newdestination('syslog') does get called...so we have
# to set an expectation
loggers << 'syslog' if log_dest_is_set_to(argv, LOGDEST_SYSLOG)
loggers << EVENTLOG if no_log_dest_set_in(argv)
else
# posix
loggers << 'syslog' if log_dest_is_set_to(argv, LOGDEST_SYSLOG)
loggers << SYSLOG if no_log_dest_set_in(argv)
end
return loggers
end
# @param argv Array of commandline flags
# @return Symbol of the expected log level
def self.expected_level(argv)
case
when argv.include?(VERBOSE) then INFO_LEVEL
when argv.include?(DEBUG) then DEBUG_LEVEL
else DEFAULT_LOG_LEVEL
end
end
# @param argv Array of commandline flags
# @return Hash of expected loggers and the expected log level
def self.with_expectations_based_on(argv)
{
:loggers => expected_loggers(argv),
:level => expected_level(argv),
}
end
# For running a single spec (by line number): rspec -l150 spec/integration/agent/logging_spec.rb
# debug_argv = []
# it_should_behave_like( "an agent", [debug_argv], with_expectations_based_on([debug_argv]))
ONETIME_DAEMONIZE_ARGS.each do |onetime_daemonize_args|
LOG_LEVEL_ARGS.each do |log_level_args|
LOG_DEST_ARGS.each do |log_dest_args|
argv = (onetime_daemonize_args + [log_level_args, log_dest_args]).flatten.compact
describe "for #{argv}" do
it_should_behave_like( "an agent", argv, with_expectations_based_on(argv))
end
end
end
end
end
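To make the expectation logic concrete, two worked cases derived from the expected_loggers and expected_level methods above (assuming a POSIX platform, where --daemonize does not cause an exit):

    argv = ['--onetime', '--verbose']     # no --logdest flag
    expected_loggers(argv)   # => Set[:console, :syslog]  (console from --verbose, syslog because no destination was set)
    expected_level(argv)     # => :info

    argv = ['--daemonize', '--debug', '--logdest=/dev/null/foo']
    expected_loggers(argv)   # => Set[:console, '/dev/null/foo']
    expected_level(argv)     # => :debug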
diff --git a/spec/integration/application/doc_spec.rb b/spec/integration/application/doc_spec.rb
index 77fc38625..040dde72d 100755
--- a/spec/integration/application/doc_spec.rb
+++ b/spec/integration/application/doc_spec.rb
@@ -1,56 +1,57 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet_spec/files'
require 'puppet/application/doc'
describe Puppet::Application::Doc do
include PuppetSpec::Files
it "should not generate an error when module dir overlaps parent of site.pp (#4798)",
:if => (Puppet.features.rdoc1? and not Puppet.features.microsoft_windows?) do
begin
# Note: the directory structure below is more complex than it
# needs to be, but it's representative of the directory structure
# used in bug #4798.
old_dir = Dir.getwd # Note: can't use chdir with a block because it will generate bogus warnings
tmpdir = tmpfile('doc_spec')
Dir.mkdir(tmpdir)
Dir.chdir(tmpdir)
site_file = 'site.pp'
File.open(site_file, 'w') do |f|
f.puts '# A comment'
end
modules_dir = 'modules'
Dir.mkdir(modules_dir)
rt_dir = File.join(modules_dir, 'rt')
Dir.mkdir(rt_dir)
manifests_dir = File.join(rt_dir, 'manifests')
Dir.mkdir(manifests_dir)
rt_file = File.join(manifests_dir, 'rt.pp')
File.open(rt_file, 'w') do |f|
f.puts '# A class'
f.puts 'class foo { }'
f.puts '# A definition'
f.puts 'define bar { }'
end
puppet = Puppet::Application[:doc]
- Puppet[:modulepath] = modules_dir
- Puppet[:manifest] = site_file
puppet.options[:mode] = :rdoc
- expect { puppet.run_command }.to exit_with 0
+ env = Puppet::Node::Environment.create(:rdoc, [modules_dir], site_file)
+ Puppet.override(:current_environment => env) do
+ expect { puppet.run_command }.to exit_with 0
+ end
Puppet::FileSystem.exist?('doc').should be_true
ensure
Dir.chdir(old_dir)
end
end
it "should respect the -o option" do
puppetdoc = Puppet::Application[:doc]
puppetdoc.command_line.stubs(:args).returns(['foo', '-o', 'bar'])
puppetdoc.parse_options
puppetdoc.options[:outputdir].should == 'bar'
end
end
diff --git a/spec/integration/configurer_spec.rb b/spec/integration/configurer_spec.rb
index 2e0c7370e..be1f34ca3 100755
--- a/spec/integration/configurer_spec.rb
+++ b/spec/integration/configurer_spec.rb
@@ -1,81 +1,67 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet/configurer'
describe Puppet::Configurer do
include PuppetSpec::Files
- describe "when downloading plugins" do
- it "should use the :pluginsignore setting, split on whitespace, for ignoring remote files" do
- Puppet.settings.stubs(:use)
- resource = Puppet::Type.type(:notify).new :name => "yay"
- Puppet::Type.type(:file).expects(:new).at_most(2).with do |args|
- args[:ignore] == Puppet[:pluginsignore].split(/\s+/)
- end.returns resource
-
- configurer = Puppet::Configurer.new
- configurer.stubs(:download_plugins?).returns true
- configurer.download_plugins(Puppet::Node::Environment.remote(:testing))
- end
- end
-
describe "when running" do
before(:each) do
@catalog = Puppet::Resource::Catalog.new("testing", Puppet.lookup(:environments).get(Puppet[:environment]))
@catalog.add_resource(Puppet::Type.type(:notify).new(:title => "testing"))
# Make sure we don't try to persist the local state after the transaction ran,
# because it will fail during test (the state file is in a nonexistent directory)
# and we need the transaction to be successful to be able to produce a summary report
@catalog.host_config = false
@configurer = Puppet::Configurer.new
end
it "should send a transaction report with valid data" do
@configurer.stubs(:save_last_run_summary)
Puppet::Transaction::Report.indirection.expects(:save).with do |report, x|
report.time.class == Time and report.logs.length > 0
end
Puppet[:report] = true
@configurer.run :catalog => @catalog
end
it "should save a correct last run summary" do
report = Puppet::Transaction::Report.new("apply")
Puppet::Transaction::Report.indirection.stubs(:save)
Puppet[:lastrunfile] = tmpfile("lastrunfile")
Puppet.settings.setting(:lastrunfile).mode = 0666
Puppet[:report] = true
# We only record integer seconds in the timestamp, and truncate
# backwards, so don't use a more accurate timestamp in the test.
# --daniel 2011-03-07
t1 = Time.now.tv_sec
@configurer.run :catalog => @catalog, :report => report
t2 = Time.now.tv_sec
# the sticky bit only applies to directories on Windows
file_mode = Puppet.features.microsoft_windows? ? '666' : '100666'
Puppet::FileSystem.stat(Puppet[:lastrunfile]).mode.to_s(8).should == file_mode
summary = nil
File.open(Puppet[:lastrunfile], "r") do |fd|
summary = YAML.load(fd.read)
end
summary.should be_a(Hash)
%w{time changes events resources}.each do |key|
summary.should be_key(key)
end
summary["time"].should be_key("notify")
summary["time"]["last_run"].should be_between(t1, t2)
end
end
end
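
For reference, the last-run summary the second example asserts on is plain YAML written to the file named by the lastrunfile setting. A hedged sketch of inspecting it after an agent run; the keys are taken from the expectations above:

````
require 'yaml'
require 'puppet'

summary = YAML.load(File.read(Puppet[:lastrunfile]))

summary.keys                 # includes "time", "changes", "events", "resources"
summary["time"]["last_run"]  # integer epoch seconds of the last run
````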
diff --git a/spec/integration/defaults_spec.rb b/spec/integration/defaults_spec.rb
index 8c8432b6f..734785230 100755
--- a/spec/integration/defaults_spec.rb
+++ b/spec/integration/defaults_spec.rb
@@ -1,315 +1,341 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet/defaults'
require 'puppet/rails'
describe "Puppet defaults" do
+
+ describe "when default_manifest is set" do
+ it "returns ./manifests by default" do
+ expect(Puppet[:default_manifest]).to eq('./manifests')
+ end
+
+ it "errors when $environment is part of the value" do
+ expect {
+ Puppet[:default_manifest] = '/$environment/manifest.pp'
+ }.to raise_error Puppet::Settings::ValidationError, /cannot interpolate.*\$environment/
+ end
+ end
+
+ describe "when disable_per_environment_manifest is set" do
+ it "returns false by default" do
+ expect(Puppet[:disable_per_environment_manifest]).to eq(false)
+ end
+
+ it "errors when set to true and default_manifest is not an absolute path" do
+ expect {
+ Puppet[:default_manifest] = './some/relative/manifest.pp'
+ Puppet[:disable_per_environment_manifest] = true
+ }.to raise_error Puppet::Settings::ValidationError, /'default_manifest' setting must be.*absolute/
+ end
+ end
+
describe "when setting the :factpath" do
it "should add the :factpath to Facter's search paths" do
Facter.expects(:search).with("/my/fact/path")
Puppet.settings[:factpath] = "/my/fact/path"
end
end
describe "when setting the :certname" do
it "should fail if the certname is not downcased" do
expect { Puppet.settings[:certname] = "Host.Domain.Com" }.to raise_error(ArgumentError)
end
end
describe "when setting :node_name_value" do
it "should default to the value of :certname" do
Puppet.settings[:certname] = 'blargle'
Puppet.settings[:node_name_value].should == 'blargle'
end
end
describe "when setting the :node_name_fact" do
it "should fail when also setting :node_name_value" do
lambda do
Puppet.settings[:node_name_value] = "some value"
Puppet.settings[:node_name_fact] = "some_fact"
end.should raise_error("Cannot specify both the node_name_value and node_name_fact settings")
end
it "should not fail when using the default for :node_name_value" do
lambda do
Puppet.settings[:node_name_fact] = "some_fact"
end.should_not raise_error
end
end
describe "when :certdnsnames is set" do
it "should not fail" do
expect { Puppet[:certdnsnames] = 'fred:wilma' }.to_not raise_error
end
it "should warn the value is ignored" do
Puppet.expects(:warning).with {|msg| msg =~ /CVE-2011-3872/ }
Puppet[:certdnsnames] = 'fred:wilma'
end
end
describe "when setting the :catalog_format" do
it "should log a deprecation notice" do
Puppet.expects(:deprecation_warning)
Puppet.settings[:catalog_format] = 'marshal'
end
it "should copy the value to :preferred_serialization_format" do
Puppet.settings[:catalog_format] = 'marshal'
Puppet.settings[:preferred_serialization_format].should == 'marshal'
end
end
it "should have a clientyamldir setting" do
Puppet.settings[:clientyamldir].should_not be_nil
end
it "should have different values for the yamldir and clientyamldir" do
Puppet.settings[:yamldir].should_not == Puppet.settings[:clientyamldir]
end
it "should have a client_datadir setting" do
Puppet.settings[:client_datadir].should_not be_nil
end
it "should have different values for the server_datadir and client_datadir" do
Puppet.settings[:server_datadir].should_not == Puppet.settings[:client_datadir]
end
# See #1232
it "should not specify a user or group for the clientyamldir" do
Puppet.settings.setting(:clientyamldir).owner.should be_nil
Puppet.settings.setting(:clientyamldir).group.should be_nil
end
it "should use the service user and group for the yamldir" do
Puppet.settings.stubs(:service_user_available?).returns true
Puppet.settings.stubs(:service_group_available?).returns true
Puppet.settings.setting(:yamldir).owner.should == Puppet.settings[:user]
Puppet.settings.setting(:yamldir).group.should == Puppet.settings[:group]
end
it "should specify that the host private key should be owned by the service user" do
Puppet.settings.stubs(:service_user_available?).returns true
Puppet.settings.setting(:hostprivkey).owner.should == Puppet.settings[:user]
end
it "should specify that the host certificate should be owned by the service user" do
Puppet.settings.stubs(:service_user_available?).returns true
Puppet.settings.setting(:hostcert).owner.should == Puppet.settings[:user]
end
[:modulepath, :factpath].each do |setting|
it "should configure '#{setting}' not to be a file setting, so multi-directory settings are acceptable" do
Puppet.settings.setting(setting).should be_instance_of(Puppet::Settings::PathSetting)
end
end
describe "on a Unix-like platform it", :as_platform => :posix do
it "should add /usr/sbin and /sbin to the path if they're not there" do
Puppet::Util.withenv("PATH" => "/usr/bin#{File::PATH_SEPARATOR}/usr/local/bin") do
Puppet.settings[:path] = "none" # this causes it to ignore the setting
ENV["PATH"].split(File::PATH_SEPARATOR).should be_include("/usr/sbin")
ENV["PATH"].split(File::PATH_SEPARATOR).should be_include("/sbin")
end
end
end
describe "on a Windows-like platform it", :as_platform => :windows do
it "should not add anything" do
path = "c:\\windows\\system32#{File::PATH_SEPARATOR}c:\\windows"
Puppet::Util.withenv("PATH" => path) do
Puppet.settings[:path] = "none" # this causes it to ignore the setting
ENV["PATH"].should == path
end
end
end
it "should default to pson for the preferred serialization format" do
Puppet.settings.value(:preferred_serialization_format).should == "pson"
end
describe "when enabling storeconfigs" do
before do
Puppet::Resource::Catalog.indirection.stubs(:cache_class=)
Puppet::Node::Facts.indirection.stubs(:cache_class=)
Puppet::Node.indirection.stubs(:cache_class=)
Puppet.features.stubs(:rails?).returns true
end
it "should set the Catalog cache class to :store_configs" do
Puppet::Resource::Catalog.indirection.expects(:cache_class=).with(:store_configs)
Puppet.settings[:storeconfigs] = true
end
it "should not set the Catalog cache class to :store_configs if asynchronous storeconfigs is enabled" do
Puppet::Resource::Catalog.indirection.expects(:cache_class=).with(:store_configs).never
Puppet.settings[:async_storeconfigs] = true
Puppet.settings[:storeconfigs] = true
end
it "should set the Facts cache class to :store_configs" do
Puppet::Node::Facts.indirection.expects(:cache_class=).with(:store_configs)
Puppet.settings[:storeconfigs] = true
end
it "does not change the Node cache" do
Puppet::Node.indirection.expects(:cache_class=).never
Puppet.settings[:storeconfigs] = true
end
end
describe "when enabling asynchronous storeconfigs" do
before do
Puppet::Resource::Catalog.indirection.stubs(:cache_class=)
Puppet::Node::Facts.indirection.stubs(:cache_class=)
Puppet::Node.indirection.stubs(:cache_class=)
Puppet.features.stubs(:rails?).returns true
end
it "should set storeconfigs to true" do
Puppet.settings[:async_storeconfigs] = true
Puppet.settings[:storeconfigs].should be_true
end
it "should set the Catalog cache class to :queue" do
Puppet::Resource::Catalog.indirection.expects(:cache_class=).with(:queue)
Puppet.settings[:async_storeconfigs] = true
end
it "should set the Facts cache class to :store_configs" do
Puppet::Node::Facts.indirection.expects(:cache_class=).with(:store_configs)
Puppet.settings[:storeconfigs] = true
end
it "does not change the Node cache" do
Puppet::Node.indirection.expects(:cache_class=).never
Puppet.settings[:storeconfigs] = true
end
end
describe "when enabling thin storeconfigs" do
before do
Puppet::Resource::Catalog.indirection.stubs(:cache_class=)
Puppet::Node::Facts.indirection.stubs(:cache_class=)
Puppet::Node.indirection.stubs(:cache_class=)
Puppet.features.stubs(:rails?).returns true
end
it "should set storeconfigs to true" do
Puppet.settings[:thin_storeconfigs] = true
Puppet.settings[:storeconfigs].should be_true
end
end
it "should have a setting for determining the configuration version and should default to an empty string" do
Puppet.settings[:config_version].should == ""
end
describe "when enabling reports" do
it "should use the default server value when report server is unspecified" do
Puppet.settings[:server] = "server"
Puppet.settings[:report_server].should == "server"
end
it "should use the default masterport value when report port is unspecified" do
Puppet.settings[:masterport] = "1234"
Puppet.settings[:report_port].should == "1234"
end
it "should use report_port when set" do
Puppet.settings[:masterport] = "1234"
Puppet.settings[:report_port] = "5678"
Puppet.settings[:report_port].should == "5678"
end
end
it "should have a :caname setting that defaults to the cert name" do
Puppet.settings[:certname] = "foo"
Puppet.settings[:ca_name].should == "Puppet CA: foo"
end
it "should have a 'prerun_command' that defaults to the empty string" do
Puppet.settings[:prerun_command].should == ""
end
it "should have a 'postrun_command' that defaults to the empty string" do
Puppet.settings[:postrun_command].should == ""
end
it "should have a 'certificate_revocation' setting that defaults to true" do
Puppet.settings[:certificate_revocation].should be_true
end
it "should have an http_compression setting that defaults to false" do
Puppet.settings[:http_compression].should be_false
end
describe "reportdir" do
subject { Puppet.settings[:reportdir] }
it { should == "#{Puppet[:vardir]}/reports" }
end
describe "reporturl" do
subject { Puppet.settings[:reporturl] }
it { should == "http://localhost:3000/reports/upload" }
end
describe "when configuring color" do
subject { Puppet.settings[:color] }
it { should == "ansi" }
end
describe "daemonize" do
it "should default to true", :unless => Puppet.features.microsoft_windows? do
Puppet.settings[:daemonize].should == true
end
describe "on Windows", :if => Puppet.features.microsoft_windows? do
it "should default to false" do
Puppet.settings[:daemonize].should == false
end
it "should raise an error if set to true" do
expect { Puppet.settings[:daemonize] = true }.to raise_error(/Cannot daemonize on Windows/)
end
end
end
describe "diff" do
it "should default to 'diff' on POSIX", :unless => Puppet.features.microsoft_windows? do
Puppet.settings[:diff].should == 'diff'
end
it "should default to '' on Windows", :if => Puppet.features.microsoft_windows? do
Puppet.settings[:diff].should == ''
end
end
describe "when configuring hiera" do
it "should have a hiera_config setting" do
Puppet.settings[:hiera_config].should_not be_nil
end
end
describe "when configuring the data_binding terminus" do
it "should have a data_binding_terminus setting" do
Puppet.settings[:data_binding_terminus].should_not be_nil
end
it "should be set to hiera by default" do
Puppet.settings[:data_binding_terminus].should == :hiera
end
end
describe "agent_catalog_run_lockfile" do
it "(#2888) is not a file setting so it is absent from the Settings catalog" do
Puppet.settings.setting(:agent_catalog_run_lockfile).should_not be_a_kind_of Puppet::Settings::FileSetting
Puppet.settings.setting(:agent_catalog_run_lockfile).should be_a Puppet::Settings::StringSetting
end
end
end
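
The two settings introduced at the top of this spec interact. A sketch of the validation rules, assuming this branch's settings code; the error messages in the comments are paraphrased:

````
require 'puppet'

Puppet[:default_manifest]                   # => './manifests' by default
Puppet[:disable_per_environment_manifest]   # => false by default

# $environment cannot be interpolated into default_manifest.
begin
  Puppet[:default_manifest] = '/$environment/manifest.pp'
rescue Puppet::Settings::ValidationError => e
  # "cannot interpolate $environment ..."
end

# When per-environment manifests are disabled, default_manifest must be absolute.
begin
  Puppet[:default_manifest] = './some/relative/manifest.pp'
  Puppet[:disable_per_environment_manifest] = true
rescue Puppet::Settings::ValidationError => e
  # "'default_manifest' setting must be ... absolute ..."
end
````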
diff --git a/spec/integration/environments/default_manifest_spec.rb b/spec/integration/environments/default_manifest_spec.rb
new file mode 100644
index 000000000..6d4564037
--- /dev/null
+++ b/spec/integration/environments/default_manifest_spec.rb
@@ -0,0 +1,274 @@
+require 'spec_helper'
+
+module EnvironmentsDefaultManifestsSpec
+describe "default manifests" do
+ FS = Puppet::FileSystem
+
+ shared_examples_for "puppet with default_manifest settings" do
+ let(:confdir) { Puppet[:confdir] }
+ let(:environmentpath) { File.expand_path("envdir", confdir) }
+
+ context "relative default" do
+ let(:testingdir) { File.join(environmentpath, "testing") }
+
+ before(:each) do
+ FileUtils.mkdir_p(testingdir)
+ end
+
+ it "reads manifest from ./manifest of a basic directory environment" do
+ manifestsdir = File.join(testingdir, "manifests")
+ FileUtils.mkdir_p(manifestsdir)
+
+ File.open(File.join(manifestsdir, "site.pp"), "w") do |f|
+ f.puts("notify { 'ManifestFromRelativeDefault': }")
+ end
+
+ File.open(File.join(confdir, "puppet.conf"), "w") do |f|
+ f.puts("environmentpath=#{environmentpath}")
+ end
+
+ expect(a_catalog_compiled_for_environment('testing')).to(
+ include_resource('Notify[ManifestFromRelativeDefault]')
+ )
+ end
+ end
+
+ context "set absolute" do
+ let(:testingdir) { File.join(environmentpath, "testing") }
+
+ before(:each) do
+ FileUtils.mkdir_p(testingdir)
+ end
+
+ it "reads manifest from an absolute default_manifest" do
+ manifestsdir = File.expand_path("manifests", confdir)
+ FileUtils.mkdir_p(manifestsdir)
+
+ File.open(File.join(confdir, "puppet.conf"), "w") do |f|
+ f.puts(<<-EOF)
+ environmentpath=#{environmentpath}
+ default_manifest=#{manifestsdir}
+ EOF
+ end
+
+ File.open(File.join(manifestsdir, "site.pp"), "w") do |f|
+ f.puts("notify { 'ManifestFromAbsoluteDefaultManifest': }")
+ end
+
+ expect(a_catalog_compiled_for_environment('testing')).to(
+ include_resource('Notify[ManifestFromAbsoluteDefaultManifest]')
+ )
+ end
+
+ it "reads manifest from directory environment manifest when environment.conf manifest set" do
+ default_manifestsdir = File.expand_path("manifests", confdir)
+ File.open(File.join(confdir, "puppet.conf"), "w") do |f|
+ f.puts(<<-EOF)
+ environmentpath=#{environmentpath}
+ default_manifest=#{default_manifestsdir}
+ EOF
+ end
+
+ manifestsdir = File.join(testingdir, "special_manifests")
+ FileUtils.mkdir_p(manifestsdir)
+
+ File.open(File.join(manifestsdir, "site.pp"), "w") do |f|
+ f.puts("notify { 'ManifestFromEnvironmentConfManifest': }")
+ end
+
+ File.open(File.join(testingdir, "environment.conf"), "w") do |f|
+ f.puts("manifest=./special_manifests")
+ end
+
+ expect(a_catalog_compiled_for_environment('testing')).to(
+ include_resource('Notify[ManifestFromEnvironmentConfManifest]')
+ )
+ expect(Puppet[:default_manifest]).to eq(default_manifestsdir)
+ end
+
+ it "ignores manifests in the local ./manifests if default_manifest specifies another directory" do
+ default_manifestsdir = File.expand_path("manifests", confdir)
+ FileUtils.mkdir_p(default_manifestsdir)
+
+ File.open(File.join(confdir, "puppet.conf"), "w") do |f|
+ f.puts(<<-EOF)
+ environmentpath=#{environmentpath}
+ default_manifest=#{default_manifestsdir}
+ EOF
+ end
+
+ File.open(File.join(default_manifestsdir, "site.pp"), "w") do |f|
+ f.puts("notify { 'ManifestFromAbsoluteDefaultManifest': }")
+ end
+
+ implicit_manifestsdir = File.join(testingdir, "manifests")
+ FileUtils.mkdir_p(implicit_manifestsdir)
+
+ File.open(File.join(implicit_manifestsdir, "site.pp"), "w") do |f|
+ f.puts("notify { 'ManifestFromImplicitRelativeEnvironmentManifestDirectory': }")
+ end
+
+ expect(a_catalog_compiled_for_environment('testing')).to(
+ include_resource('Notify[ManifestFromAbsoluteDefaultManifest]')
+ )
+ end
+
+ it "raises an exception if default_manifest has $environment in it" do
+ File.open(File.join(confdir, "puppet.conf"), "w") do |f|
+ f.puts(<<-EOF)
+ environmentpath=#{environmentpath}
+ default_manifest=/foo/$environment
+ EOF
+ end
+
+ expect { Puppet.initialize_settings }.to raise_error(Puppet::Settings::ValidationError, /cannot interpolate.*\$environment.*in.*default_manifest/)
+ end
+ end
+
+ context "with disable_per_environment_manifest true" do
+ let(:manifestsdir) { File.expand_path("manifests", confdir) }
+ let(:testingdir) { File.join(environmentpath, "testing") }
+
+ before(:each) do
+ FileUtils.mkdir_p(testingdir)
+ end
+
+ before(:each) do
+ FileUtils.mkdir_p(manifestsdir)
+
+ File.open(File.join(confdir, "puppet.conf"), "w") do |f|
+ f.puts(<<-EOF)
+ environmentpath=#{environmentpath}
+ default_manifest=#{manifestsdir}
+ disable_per_environment_manifest=true
+ EOF
+ end
+
+ File.open(File.join(manifestsdir, "site.pp"), "w") do |f|
+ f.puts("notify { 'ManifestFromAbsoluteDefaultManifest': }")
+ end
+ end
+
+ it "reads manifest from the default manifest setting" do
+ expect(a_catalog_compiled_for_environment('testing')).to(
+ include_resource('Notify[ManifestFromAbsoluteDefaultManifest]')
+ )
+ end
+
+ it "refuses to compile if environment.conf specifies a different manifest" do
+ File.open(File.join(testingdir, "environment.conf"), "w") do |f|
+ f.puts("manifest=./special_manifests")
+ end
+
+ expect { a_catalog_compiled_for_environment('testing') }.to(
+ raise_error(Puppet::Error, /disable_per_environment_manifest.*environment.conf.*manifest.*conflict/)
+ )
+ end
+
+ it "reads manifest from default_manifest setting when environment.conf has manifest set if setting equals default_manifest setting" do
+ File.open(File.join(testingdir, "environment.conf"), "w") do |f|
+ f.puts("manifest=#{manifestsdir}")
+ end
+
+ expect(a_catalog_compiled_for_environment('testing')).to(
+ include_resource('Notify[ManifestFromAbsoluteDefaultManifest]')
+ )
+ end
+
+ it "logs errors if environment.conf specifies a different manifest" do
+ File.open(File.join(testingdir, "environment.conf"), "w") do |f|
+ f.puts("manifest=./special_manifests")
+ end
+
+ Puppet.initialize_settings
+ expect(Puppet[:environmentpath]).to eq(environmentpath)
+ environment = Puppet.lookup(:environments).get('testing')
+ expect(environment.manifest).to eq(manifestsdir)
+ expect(@logs.first.to_s).to match(%r{disable_per_environment_manifest.*is true, but.*environment.*at #{testingdir}.*has.*environment.conf.*manifest.*#{testingdir}/special_manifests})
+ end
+
+ it "raises an error if default_manifest is not absolute" do
+ File.open(File.join(confdir, "puppet.conf"), "w") do |f|
+ f.puts(<<-EOF)
+ environmentpath=#{environmentpath}
+ default_manifest=./relative
+ disable_per_environment_manifest=true
+ EOF
+ end
+
+ expect { Puppet.initialize_settings }.to raise_error(Puppet::Settings::ValidationError, /default_manifest.*must be.*absolute.*when.*disable_per_environment_manifest.*true/)
+ end
+ end
+
+ context "in legacy environments" do
+ let(:environmentpath) { '' }
+ let(:manifestsdir) { File.expand_path("default_manifests", confdir) }
+ let(:legacy_manifestsdir) { File.expand_path('manifests', confdir) }
+
+ before(:each) do
+ FileUtils.mkdir_p(manifestsdir)
+
+ File.open(File.join(confdir, "puppet.conf"), "w") do |f|
+ f.puts(<<-EOF)
+ default_manifest=#{manifestsdir}
+ disable_per_environment_manifest=true
+ manifest=#{legacy_manifestsdir}
+ EOF
+ end
+
+ File.open(File.join(manifestsdir, "site.pp"), "w") do |f|
+ f.puts("notify { 'ManifestFromAbsoluteDefaultManifest': }")
+ end
+ end
+
+ it "has no effect on compilation" do
+ FileUtils.mkdir_p(legacy_manifestsdir)
+
+ File.open(File.join(legacy_manifestsdir, "site.pp"), "w") do |f|
+ f.puts("notify { 'ManifestFromLegacy': }")
+ end
+
+ expect(a_catalog_compiled_for_environment('testing')).to(
+ include_resource('Notify[ManifestFromLegacy]')
+ )
+ end
+ end
+ end
+
+ describe 'using future parser' do
+ before :each do
+ Puppet[:parser] = 'future'
+ end
+ it_behaves_like 'puppet with default_manifest settings'
+ end
+
+ describe 'using current parser' do
+ before :each do
+ Puppet[:parser] = 'current'
+ end
+ it_behaves_like 'puppet with default_manifest settings'
+ end
+
+ RSpec::Matchers.define :include_resource do |expected|
+ match do |actual|
+ actual.resources.map(&:ref).include?(expected)
+ end
+
+ def failure_message_for_should
+ "expected #{@actual.resources.map(&:ref)} to include #{expected}"
+ end
+
+ def failure_message_for_should_not
+ "expected #{@actual.resources.map(&:ref)} not to include #{expected}"
+ end
+ end
+
+ def a_catalog_compiled_for_environment(envname)
+ Puppet.initialize_settings
+ expect(Puppet[:environmentpath]).to eq(environmentpath)
+ node = Puppet::Node.new('testnode', :environment => 'testing')
+ expect(node.environment).to eq(Puppet.lookup(:environments).get('testing'))
+ Puppet::Parser::Compiler.compile(node)
+ end
+end
+end
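
In terms of on-disk configuration, the scenarios above boil down to a confdir-wide default_manifest plus an optional per-environment override in environment.conf. A sketch of that layout; the paths are assumptions:

````
require 'fileutils'

confdir         = '/tmp/puppet-confdir'          # illustrative
environmentpath = File.join(confdir, 'envdir')
testingdir      = File.join(environmentpath, 'testing')

FileUtils.mkdir_p(File.join(confdir, 'manifests'))
FileUtils.mkdir_p(File.join(testingdir, 'special_manifests'))

# Global default: every environment without its own manifest uses this one.
File.write(File.join(confdir, 'puppet.conf'), <<-EOF)
  environmentpath=#{environmentpath}
  default_manifest=#{File.join(confdir, 'manifests')}
EOF

# Per-environment override; with disable_per_environment_manifest=true this
# combination refuses to compile, as the examples above assert.
File.write(File.join(testingdir, 'environment.conf'),
           "manifest=./special_manifests\n")
````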
diff --git a/spec/integration/faces/documentation_spec.rb b/spec/integration/faces/documentation_spec.rb
index bd1f8008a..803d61599 100755
--- a/spec/integration/faces/documentation_spec.rb
+++ b/spec/integration/faces/documentation_spec.rb
@@ -1,62 +1,58 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet/face'
describe "documentation of faces" do
it "should generate global help" do
help = nil
expect { help = Puppet::Face[:help, :current].help }.not_to raise_error
help.should be_an_instance_of String
help.length.should be > 200
end
########################################################################
# Can we actually generate documentation for the face, and the actions it
# has? This avoids situations where the ERB template turns out to have a
# bug in it, triggered in something the user might do.
context "face help messages" do
- # we need to set a bunk module path here, because without doing so,
- # the autoloader will try to use it before it is initialized.
- Puppet[:modulepath] = "/dev/null"
-
Puppet::Face.faces.sort.each do |face_name|
# REVISIT: We should walk all versions of the face here...
let :help do Puppet::Face[:help, :current] end
context "generating help" do
it "for #{face_name}" do
expect {
text = help.help(face_name)
text.should be_an_instance_of String
text.length.should be > 100
}.not_to raise_error
end
Puppet::Face[face_name, :current].actions.sort.each do |action_name|
it "for #{face_name}.#{action_name}" do
expect {
text = help.help(face_name, action_name)
text.should be_an_instance_of String
text.length.should be > 100
}.not_to raise_error
end
end
end
########################################################################
# Ensure that we have authorship and copyright information in *our* faces;
# if you apply this to third party faces you might well be disappointed.
context "licensing of Puppet Labs face '#{face_name}'" do
subject { Puppet::Face[face_name, :current] }
its :license do should =~ /Apache\s*2/ end
its :copyright do should =~ /Puppet Labs/ end
# REVISIT: This is less than ideal, I think, but right now I am more
# comfortable watching us ship with some copyright than without any; we
# can redress that when it becomes appropriate. --daniel 2011-04-27
its :copyright do should =~ /20\d{2}/ end
end
end
end
end
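
The API exercised here is small: the help face renders global help, or help for one face or one of its actions, as a string. A minimal sketch; the face and action names are illustrative:

````
require 'puppet/face'

help = Puppet::Face[:help, :current]

help.help                     # global help text
help.help(:module)            # help for one face
help.help(:module, :list)     # help for one action of that face
````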
diff --git a/spec/integration/file_bucket/file_spec.rb b/spec/integration/file_bucket/file_spec.rb
index 2c411fdf7..f0dbecaa3 100644
--- a/spec/integration/file_bucket/file_spec.rb
+++ b/spec/integration/file_bucket/file_spec.rb
@@ -1,44 +1,65 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet/file_bucket/file'
describe Puppet::FileBucket::File do
describe "#indirection" do
before :each do
# Never connect to the network, no matter what
described_class.indirection.terminus(:rest).class.any_instance.stubs(:find)
end
describe "when running the master application" do
before :each do
Puppet::Application[:master].setup_terminuses
end
{
"md5/d41d8cd98f00b204e9800998ecf8427e" => :file,
"https://puppetmaster:8140/production/file_bucket_file/md5/d41d8cd98f00b204e9800998ecf8427e" => :file,
}.each do |key, terminus|
it "should use the #{terminus} terminus when requesting #{key.inspect}" do
described_class.indirection.terminus(terminus).class.any_instance.expects(:find)
described_class.indirection.find(key)
end
end
end
describe "when running another application" do
{
"md5/d41d8cd98f00b204e9800998ecf8427e" => :file,
"https://puppetmaster:8140/production/file_bucket_file/md5/d41d8cd98f00b204e9800998ecf8427e" => :rest,
}.each do |key, terminus|
it "should use the #{terminus} terminus when requesting #{key.inspect}" do
described_class.indirection.terminus(terminus).class.any_instance.expects(:find)
described_class.indirection.find(key)
end
end
end
end
+
+ describe "saving binary files" do
+ describe "on Ruby 1.8.7", :if => RUBY_VERSION.match(/^1\.8/) do
+ let(:binary) { "\xD1\xF2\r\n\x81NuSc\x00" }
+
+ it "does not error when the same contents are saved twice" do
+ bucket_file = Puppet::FileBucket::File.new(binary)
+ Puppet::FileBucket::File.indirection.save(bucket_file, bucket_file.name)
+ Puppet::FileBucket::File.indirection.save(bucket_file, bucket_file.name)
+ end
+ end
+ describe "on Ruby 1.9+", :if => RUBY_VERSION.match(/^1\.9|^2/) do
+ let(:binary) { "\xD1\xF2\r\n\x81NuSc\x00".force_encoding(Encoding::ASCII_8BIT) }
+
+ it "does not error when the same contents are saved twice" do
+ bucket_file = Puppet::FileBucket::File.new(binary)
+ Puppet::FileBucket::File.indirection.save(bucket_file, bucket_file.name)
+ Puppet::FileBucket::File.indirection.save(bucket_file, bucket_file.name)
+ end
+ end
+ end
end
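
The new examples pin down an encoding detail: on Ruby 1.9+, binary bucket contents should be handed over as an ASCII-8BIT string so that saving identical contents twice stays idempotent. A sketch, assuming a locally reachable filebucket terminus:

````
require 'puppet/file_bucket/file'

binary = "\xD1\xF2\r\n\x81NuSc\x00".force_encoding(Encoding::ASCII_8BIT)

bucket_file = Puppet::FileBucket::File.new(binary)
Puppet::FileBucket::File.indirection.save(bucket_file, bucket_file.name)
Puppet::FileBucket::File.indirection.save(bucket_file, bucket_file.name) # no error
````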
diff --git a/spec/integration/indirector/catalog/compiler_spec.rb b/spec/integration/indirector/catalog/compiler_spec.rb
index 1e7f17298..11c448575 100755
--- a/spec/integration/indirector/catalog/compiler_spec.rb
+++ b/spec/integration/indirector/catalog/compiler_spec.rb
@@ -1,67 +1,65 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet/resource/catalog'
Puppet::Resource::Catalog.indirection.terminus(:compiler)
describe Puppet::Resource::Catalog::Compiler do
before do
Facter.stubs(:value).returns "something"
@catalog = Puppet::Resource::Catalog.new("testing", Puppet::Node::Environment::NONE)
@catalog.add_resource(@one = Puppet::Resource.new(:file, "/one"))
@catalog.add_resource(@two = Puppet::Resource.new(:file, "/two"))
end
- after { Puppet.settings.clear }
-
it "should remove virtual resources when filtering" do
@one.virtual = true
Puppet::Resource::Catalog.indirection.terminus.filter(@catalog).resource_refs.should == [ @two.ref ]
end
it "should not remove exported resources when filtering" do
@one.exported = true
Puppet::Resource::Catalog.indirection.terminus.filter(@catalog).resource_refs.sort.should == [ @one.ref, @two.ref ]
end
it "should remove virtual exported resources when filtering" do
@one.exported = true
@one.virtual = true
Puppet::Resource::Catalog.indirection.terminus.filter(@catalog).resource_refs.should == [ @two.ref ]
end
it "should filter out virtual resources when finding a catalog" do
Puppet[:node_terminus] = :memory
Puppet::Node.indirection.save(Puppet::Node.new("mynode"))
Puppet::Resource::Catalog.indirection.terminus.stubs(:extract_facts_from_request)
Puppet::Resource::Catalog.indirection.terminus.stubs(:compile).returns(@catalog)
@one.virtual = true
Puppet::Resource::Catalog.indirection.find("mynode").resource_refs.should == [ @two.ref ]
end
it "should not filter out exported resources when finding a catalog" do
Puppet[:node_terminus] = :memory
Puppet::Node.indirection.save(Puppet::Node.new("mynode"))
Puppet::Resource::Catalog.indirection.terminus.stubs(:extract_facts_from_request)
Puppet::Resource::Catalog.indirection.terminus.stubs(:compile).returns(@catalog)
@one.exported = true
Puppet::Resource::Catalog.indirection.find("mynode").resource_refs.sort.should == [ @one.ref, @two.ref ]
end
it "should filter out virtual exported resources when finding a catalog" do
Puppet[:node_terminus] = :memory
Puppet::Node.indirection.save(Puppet::Node.new("mynode"))
Puppet::Resource::Catalog.indirection.terminus.stubs(:extract_facts_from_request)
Puppet::Resource::Catalog.indirection.terminus.stubs(:compile).returns(@catalog)
@one.exported = true
@one.virtual = true
Puppet::Resource::Catalog.indirection.find("mynode").resource_refs.should == [ @two.ref ]
end
end
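
The filtering rules asserted here fit in a few lines. A hedged sketch using the same calls as the spec:

````
require 'puppet'
require 'puppet/resource/catalog'

Puppet::Resource::Catalog.indirection.terminus(:compiler)

catalog = Puppet::Resource::Catalog.new("testing", Puppet::Node::Environment::NONE)
catalog.add_resource(one = Puppet::Resource.new(:file, "/one"))
catalog.add_resource(two = Puppet::Resource.new(:file, "/two"))

one.virtual = true   # virtual (and virtual exported) resources are filtered out;
                     # plain exported resources are kept

filtered = Puppet::Resource::Catalog.indirection.terminus.filter(catalog)
filtered.resource_refs  # => ["File[/two]"]
````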
diff --git a/spec/integration/indirector/catalog/queue_spec.rb b/spec/integration/indirector/catalog/queue_spec.rb
index fe359236f..00f3f2e53 100755
--- a/spec/integration/indirector/catalog/queue_spec.rb
+++ b/spec/integration/indirector/catalog/queue_spec.rb
@@ -1,57 +1,55 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet/resource/catalog'
describe "Puppet::Resource::Catalog::Queue" do
before do
Puppet::Resource::Catalog.indirection.terminus(:queue)
@catalog = Puppet::Resource::Catalog.new("foo", Puppet::Node::Environment::NONE)
@one = Puppet::Resource.new(:file, "/one")
@two = Puppet::Resource.new(:file, "/two")
@catalog.add_resource(@one, @two)
@catalog.add_edge(@one, @two)
Puppet[:trace] = true
end
- after { Puppet.settings.clear }
-
it "should render catalogs to pson and publish them via the queue client when catalogs are saved" do
terminus = Puppet::Resource::Catalog.indirection.terminus(:queue)
client = mock 'client'
terminus.stubs(:client).returns client
client.expects(:publish_message).with(:catalog, @catalog.to_pson)
request = Puppet::Indirector::Request.new(:catalog, :save, "foo", @catalog)
terminus.save(request)
end
it "should intern catalog messages when they are passed via a subscription" do
client = mock 'client'
Puppet::Resource::Catalog::Queue.stubs(:client).returns client
pson = @catalog.to_pson
client.expects(:subscribe).with(:catalog).yields(pson)
Puppet.expects(:err).never
result = []
Puppet::Resource::Catalog::Queue.subscribe do |catalog|
result << catalog
end
catalog = result.shift
catalog.should be_instance_of(Puppet::Resource::Catalog)
catalog.resource(:file, "/one").should be_instance_of(Puppet::Resource)
catalog.resource(:file, "/two").should be_instance_of(Puppet::Resource)
catalog.should be_edge(catalog.resource(:file, "/one"), catalog.resource(:file, "/two"))
end
end
diff --git a/spec/integration/indirector/facts/facter_spec.rb b/spec/integration/indirector/facts/facter_spec.rb
index c71ff0937..b552431f9 100644
--- a/spec/integration/indirector/facts/facter_spec.rb
+++ b/spec/integration/indirector/facts/facter_spec.rb
@@ -1,22 +1,22 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet_spec/compiler'
describe Puppet::Node::Facts::Facter do
include PuppetSpec::Compiler
it "preserves case in fact values" do
Facter.add(:downcase_test) do
setcode do
"AaBbCc"
end
end
- Facter.stubs(:clear)
+ Facter.stubs(:reset)
cat = compile_to_catalog('notify { $downcase_test: }',
Puppet::Node.indirection.find('foo'))
cat.resource("Notify[AaBbCc]").should be
end
end
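
For context, the fact added in this example uses the ordinary Facter custom-fact API. A minimal sketch; the fact name and value are illustrative:

````
require 'facter'

Facter.add(:downcase_test) do
  setcode { "AaBbCc" }
end

Facter.value(:downcase_test)  # => "AaBbCc"  (case is preserved end to end)
````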
diff --git a/spec/integration/indirector/file_content/file_server_spec.rb b/spec/integration/indirector/file_content/file_server_spec.rb
index 103a028a2..ee0db17a9 100755
--- a/spec/integration/indirector/file_content/file_server_spec.rb
+++ b/spec/integration/indirector/file_content/file_server_spec.rb
@@ -1,89 +1,89 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet/indirector/file_content/file_server'
require 'shared_behaviours/file_server_terminus'
require 'puppet_spec/files'
describe Puppet::Indirector::FileContent::FileServer, " when finding files" do
it_should_behave_like "Puppet::Indirector::FileServerTerminus"
include PuppetSpec::Files
before do
@terminus = Puppet::Indirector::FileContent::FileServer.new
@test_class = Puppet::FileServing::Content
Puppet::FileServing::Configuration.instance_variable_set(:@configuration, nil)
end
it "should find plugin file content in the environment specified in the request" do
path = tmpfile("file_content_with_env")
Dir.mkdir(path)
modpath = File.join(path, "mod")
FileUtils.mkdir_p(File.join(modpath, "lib"))
file = File.join(modpath, "lib", "file.rb")
File.open(file, "wb") { |f| f.write "1\r\n" }
Puppet.settings[:modulepath] = "/no/such/file"
env = Puppet::Node::Environment.create(:foo, [path])
result = Puppet::FileServing::Content.indirection.search("plugins", :environment => env, :recurse => true)
result.should_not be_nil
result.length.should == 2
result.map {|x| x.should be_instance_of(Puppet::FileServing::Content) }
result.find {|x| x.relative_path == 'file.rb' }.content.should == "1\r\n"
end
it "should find file content in modules" do
path = tmpfile("file_content")
Dir.mkdir(path)
modpath = File.join(path, "mymod")
FileUtils.mkdir_p(File.join(modpath, "files"))
file = File.join(modpath, "files", "myfile")
File.open(file, "wb") { |f| f.write "1\r\n" }
- Puppet.settings[:modulepath] = path
+ env = Puppet::Node::Environment.create(:foo, [path])
- result = Puppet::FileServing::Content.indirection.find("modules/mymod/myfile")
+ result = Puppet::FileServing::Content.indirection.find("modules/mymod/myfile", :environment => env)
result.should_not be_nil
result.should be_instance_of(Puppet::FileServing::Content)
result.content.should == "1\r\n"
end
it "should find file content in files when node name expansions are used" do
Puppet::FileSystem.stubs(:exist?).returns true
Puppet::FileSystem.stubs(:exist?).with(Puppet[:fileserverconfig]).returns(true)
@path = tmpfile("file_server_testing")
Dir.mkdir(@path)
subdir = File.join(@path, "mynode")
Dir.mkdir(subdir)
File.open(File.join(subdir, "myfile"), "wb") { |f| f.write "1\r\n" }
# Use a real mount, so the integration is a bit deeper.
@mount1 = Puppet::FileServing::Configuration::Mount::File.new("one")
@mount1.stubs(:allowed?).returns true
@mount1.path = File.join(@path, "%h")
@parser = stub 'parser', :changed? => false
@parser.stubs(:parse).returns("one" => @mount1)
Puppet::FileServing::Configuration::Parser.stubs(:new).returns(@parser)
path = File.join(@path, "myfile")
result = Puppet::FileServing::Content.indirection.find("one/myfile", :environment => "foo", :node => "mynode")
result.should_not be_nil
result.should be_instance_of(Puppet::FileServing::Content)
result.content.should == "1\r\n"
end
end
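
The pattern the first two examples move to is passing an explicit environment to the indirection instead of mutating Puppet[:modulepath]. A sketch with an assumed module path:

````
require 'puppet'
require 'puppet/file_serving/content'

# An anonymous environment over an assumed module path.
env = Puppet::Node::Environment.create(:foo, ['/tmp/modules'])

result = Puppet::FileServing::Content.indirection.find(
  "modules/mymod/myfile", :environment => env)
result.content if result
````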
diff --git a/spec/integration/node/environment_spec.rb b/spec/integration/node/environment_spec.rb
index f8a7ace7d..797e4105c 100755
--- a/spec/integration/node/environment_spec.rb
+++ b/spec/integration/node/environment_spec.rb
@@ -1,109 +1,125 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet_spec/files'
require 'puppet_spec/scope'
require 'matchers/resource'
describe Puppet::Node::Environment do
include PuppetSpec::Files
include Matchers::Resource
def a_module_in(name, dir)
Dir.mkdir(dir)
moddir = File.join(dir, name)
Dir.mkdir(moddir)
moddir
end
it "should be able to return each module from its environment with the environment, name, and path set correctly" do
base = tmpfile("env_modules")
Dir.mkdir(base)
dirs = []
mods = {}
%w{1 2}.each do |num|
dir = File.join(base, "dir#{num}")
dirs << dir
mods["mod#{num}"] = a_module_in("mod#{num}", dir)
end
environment = Puppet::Node::Environment.create(:foo, dirs)
environment.modules.each do |mod|
mod.environment.should == environment
mod.path.should == mods[mod.name]
end
end
it "should not yield the same module from different module paths" do
base = tmpfile("env_modules")
Dir.mkdir(base)
dirs = []
%w{1 2}.each do |num|
dir = File.join(base, "dir#{num}")
dirs << dir
a_module_in("mod", dir)
end
environment = Puppet::Node::Environment.create(:foo, dirs)
mods = environment.modules
mods.length.should == 1
mods[0].path.should == File.join(base, "dir1", "mod")
end
shared_examples_for "the environment's initial import" do |settings|
it "a manifest referring to a directory invokes parsing of all its files in sorted order" do
settings.each do |name, value|
Puppet[name] = value
end
# fixture has three files 00_a.pp, 01_b.pp, and 02_c.pp. The 'b' file
# depends on 'a' being evaluated first. The 'c' file is empty (to ensure
# empty things do not break the directory import).
#
dirname = my_fixture('sitedir')
# Set the manifest to the directory to make it parse and combine them when compiling
node = Puppet::Node.new('testnode',
:environment => Puppet::Node::Environment.create(:testing, [], dirname))
catalog = Puppet::Parser::Compiler.compile(node)
expect(catalog).to have_resource('Class[A]')
expect(catalog).to have_resource('Class[B]')
expect(catalog).to have_resource('Notify[variables]').with_parameter(:message, "a: 10, b: 10")
end
end
+ shared_examples_for "the environment's initial import in the future" do |settings|
+ it "a manifest referring to a directory invokes recursive parsing of all its files in sorted order" do
+ settings.each do |name, value|
+ Puppet[name] = value
+ end
+
+ # fixture has three files 00_a.pp, 01_b.pp, and 02_c.pp. The 'b' file
+ # depends on 'a' being evaluated first. The 'c' file is empty (to ensure
+ # empty things do not break the directory import).
+ #
+ dirname = my_fixture('sitedir2')
+
+ # Set the manifest to the directory to make it parse and combine them when compiling
+ node = Puppet::Node.new('testnode',
+ :environment => Puppet::Node::Environment.create(:testing, [], dirname))
+
+ catalog = Puppet::Parser::Compiler.compile(node)
+
+ expect(catalog).to have_resource('Class[A]')
+ expect(catalog).to have_resource('Class[B]')
+ expect(catalog).to have_resource('Notify[variables]').with_parameter(:message, "a: 10, b: 10 c: 20")
+ end
+ end
+
describe 'using classic parser' do
it_behaves_like "the environment's initial import",
:parser => 'current',
# fixture uses variables that are set in a particular order (this ensures
# that files are parsed and combined in the right order or an error will
# be raised if 'b' is evaluated before 'a').
:strict_variables => true
end
describe 'using future parser' do
it_behaves_like "the environment's initial import",
:parser => 'future',
- :evaluator => 'future',
# Turned off because currently future parser turns on the binder which
# causes lookup of facts that are uninitialized and it will fail with
# errors for 'osfamily' etc. This can be turned back on when the binder
# is taken out of the equation.
:strict_variables => false
-
- context 'and evaluator current' do
- it_behaves_like "the environment's initial import",
- :parser => 'future',
- :evaluator => 'current',
- :strict_variables => false
- end
end
end
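
Both shared example groups rely on the same behaviour: when an environment's manifest is a directory, every .pp file in it is parsed and combined in sorted file-name order. A sketch with an assumed directory:

````
require 'puppet'

env  = Puppet::Node::Environment.create(:testing, [], '/tmp/sitedir')  # assumed path
node = Puppet::Node.new('testnode', :environment => env)

# 00_a.pp is evaluated before 01_b.pp, so 'b' can safely reference variables
# that 'a' sets; empty files such as 02_c.pp are tolerated.
catalog = Puppet::Parser::Compiler.compile(node)
catalog.resources.map(&:ref)
````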
diff --git a/spec/integration/parser/catalog_spec.rb b/spec/integration/parser/catalog_spec.rb
index e37eb591a..39aeb394e 100644
--- a/spec/integration/parser/catalog_spec.rb
+++ b/spec/integration/parser/catalog_spec.rb
@@ -1,125 +1,125 @@
require 'spec_helper'
require 'matchers/include_in_order'
require 'puppet_spec/compiler'
require 'puppet/indirector/catalog/compiler'
describe "A catalog" do
include PuppetSpec::Compiler
shared_examples_for "when compiled" do
context "when transmitted to the agent" do
it "preserves the order in which the resources are added to the catalog" do
resources_in_declaration_order = ["Class[First]",
"Second[position]",
"Class[Third]",
"Fourth[position]"]
master_catalog, agent_catalog = master_and_agent_catalogs_for(<<-EOM)
define fourth() { }
class third { }
define second() {
fourth { "position": }
}
class first {
second { "position": }
class { "third": }
}
include first
EOM
expect(resources_in(master_catalog)).
to include_in_order(*resources_in_declaration_order)
expect(resources_in(agent_catalog)).
to include_in_order(*resources_in_declaration_order)
end
it "does not contain unrealized, virtual resources" do
virtual_resources = ["Unrealized[unreal]", "Class[Unreal]"]
master_catalog, agent_catalog = master_and_agent_catalogs_for(<<-EOM)
class unreal { }
define unrealized() { }
class real {
@unrealized { "unreal": }
@class { "unreal": }
}
include real
EOM
expect(resources_in(master_catalog)).to_not include(*virtual_resources)
expect(resources_in(agent_catalog)).to_not include(*virtual_resources)
end
it "does not contain unrealized, exported resources" do
exported_resources = ["Unrealized[unreal]", "Class[Unreal]"]
master_catalog, agent_catalog = master_and_agent_catalogs_for(<<-EOM)
class unreal { }
define unrealized() { }
class real {
@@unrealized { "unreal": }
@@class { "unreal": }
}
include real
EOM
expect(resources_in(master_catalog)).to_not include(*exported_resources)
expect(resources_in(agent_catalog)).to_not include(*exported_resources)
end
end
+ end
+
+ describe 'using classic parser' do
+ before :each do
+ Puppet[:parser] = 'current'
+ end
+ it_behaves_like 'when compiled' do
+ end
it "compiles resource creation from appended array as two separate resources" do
# moved here from acceptance test "jeff_append_to_array.rb"
master_catalog = master_catalog_for(<<-EOM)
class parent {
$arr1 = [ "parent array element" ]
}
class parent::child inherits parent {
$arr1 += ["child array element"]
notify { $arr1: }
}
include parent::child
EOM
expect(resources_in(master_catalog)).to include('Notify[parent array element]', 'Notify[child array element]')
end
end
- describe 'using classic parser' do
- before :each do
- Puppet[:parser] = 'current'
- end
- it_behaves_like 'when compiled' do
- end
- end
-
describe 'using future parser' do
before :each do
Puppet[:parser] = 'future'
end
it_behaves_like 'when compiled' do
end
end
def master_catalog_for(manifest)
master_catalog = Puppet::Resource::Catalog::Compiler.new.filter(compile_to_catalog(manifest))
end
def master_and_agent_catalogs_for(manifest)
- master_catalog = Puppet::Resource::Catalog::Compiler.new.filter(compile_to_catalog(manifest))
+ compiler = Puppet::Resource::Catalog::Compiler.new
+ master_catalog = compiler.filter(compile_to_catalog(manifest))
agent_catalog = Puppet::Resource::Catalog.convert_from(:pson, master_catalog.render(:pson))
-
[master_catalog, agent_catalog]
end
def resources_in(catalog)
catalog.resources.map(&:ref)
end
end
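
The master/agent comparison above hinges on a PSON round trip. A minimal sketch of that trip on a hand-built catalog; the catalog contents are illustrative:

````
require 'puppet'
require 'puppet/resource/catalog'

catalog = Puppet::Resource::Catalog.new("example", Puppet::Node::Environment::NONE)
catalog.add_resource(Puppet::Resource.new(:notify, "hello"))

wire  = catalog.render(:pson)                                # what the master sends
again = Puppet::Resource::Catalog.convert_from(:pson, wire)  # what the agent builds

again.resources.map(&:ref)  # includes "Notify[hello]"
````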
diff --git a/spec/integration/parser/class_spec.rb b/spec/integration/parser/class_spec.rb
new file mode 100644
index 000000000..9f63eb083
--- /dev/null
+++ b/spec/integration/parser/class_spec.rb
@@ -0,0 +1,37 @@
+require 'spec_helper'
+require 'puppet_spec/language'
+
+describe "Class expressions" do
+ extend PuppetSpec::Language
+
+ before :each do
+ Puppet[:parser] = 'future'
+ end
+
+ produces(
+ "class hi { }" => '!defined(Class[Hi])',
+
+ "class hi { } include hi" => 'defined(Class[Hi])',
+ "include(hi) class hi { }" => 'defined(Class[Hi])',
+
+ "class hi { } class { hi: }" => 'defined(Class[Hi])',
+ "class { hi: } class hi { }" => 'defined(Class[Hi])',
+
+ "class bye { } class hi inherits bye { } include hi" => 'defined(Class[Hi]) and defined(Class[Bye])')
+
+ produces(<<-EXAMPLE => 'defined(Notify[foo]) and defined(Notify[bar]) and !defined(Notify[foo::bar])')
+ class bar { notify { 'bar': } }
+ class foo::bar { notify { 'foo::bar': } }
+ class foo inherits bar { notify { 'foo': } }
+
+ include foo
+ EXAMPLE
+
+ produces(<<-EXAMPLE => 'defined(Notify[foo]) and defined(Notify[bar]) and !defined(Notify[foo::bar])')
+ class bar { notify { 'bar': } }
+ class foo::bar { notify { 'foo::bar': } }
+ class foo inherits ::bar { notify { 'foo': } }
+
+ include foo
+ EXAMPLE
+end
diff --git a/spec/integration/parser/collector_spec.rb b/spec/integration/parser/collector_spec.rb
index 49ce74583..55ac66a5b 100755
--- a/spec/integration/parser/collector_spec.rb
+++ b/spec/integration/parser/collector_spec.rb
@@ -1,117 +1,276 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet_spec/compiler'
require 'puppet/parser/collector'
describe Puppet::Parser::Collector do
include PuppetSpec::Compiler
def expect_the_message_to_be(expected_messages, code, node = Puppet::Node.new('the node'))
catalog = compile_to_catalog(code, node)
messages = catalog.resources.find_all { |resource| resource.type == 'Notify' }.
collect { |notify| notify[:message] }
messages.should include(*expected_messages)
end
- it "matches on title" do
- expect_the_message_to_be(["the message"], <<-MANIFEST)
- @notify { "testing": message => "the message" }
+ shared_examples_for "virtual resource collection" do
+ it "matches everything when no query given" do
+ expect_the_message_to_be(["the other message", "the message"], <<-MANIFEST)
+ @notify { "testing": message => "the message" }
+ @notify { "other": message => "the other message" }
- Notify <| title == "testing" |>
- MANIFEST
- end
+ Notify <| |>
+ MANIFEST
+ end
- it "matches on other parameters" do
- expect_the_message_to_be(["the message"], <<-MANIFEST)
- @notify { "testing": message => "the message" }
- @notify { "other testing": message => "the wrong message" }
+ it "matches regular resources " do
+ expect_the_message_to_be(["changed", "changed"], <<-MANIFEST)
+ notify { "testing": message => "the message" }
+ notify { "other": message => "the other message" }
- Notify <| message == "the message" |>
- MANIFEST
- end
+ Notify <| |> { message => "changed" }
+ MANIFEST
+ end
- it "allows criteria to be combined with 'and'" do
- expect_the_message_to_be(["the message"], <<-MANIFEST)
- @notify { "testing": message => "the message" }
- @notify { "other": message => "the message" }
+ it "matches on tags" do
+ expect_the_message_to_be(["wanted"], <<-MANIFEST)
+ @notify { "testing": tag => ["one"], message => "wanted" }
+ @notify { "other": tag => ["two"], message => "unwanted" }
- Notify <| title == "testing" and message == "the message" |>
- MANIFEST
- end
+ Notify <| tag == one |>
+ MANIFEST
+ end
- it "allows criteria to be combined with 'or'" do
- expect_the_message_to_be(["the message", "other message"], <<-MANIFEST)
- @notify { "testing": message => "the message" }
- @notify { "other": message => "other message" }
- @notify { "yet another": message => "different message" }
+ it "matches on title" do
+ expect_the_message_to_be(["the message"], <<-MANIFEST)
+ @notify { "testing": message => "the message" }
- Notify <| title == "testing" or message == "other message" |>
- MANIFEST
- end
+ Notify <| title == "testing" |>
+ MANIFEST
+ end
- it "allows criteria to be combined with 'or'" do
- expect_the_message_to_be(["the message", "other message"], <<-MANIFEST)
- @notify { "testing": message => "the message" }
- @notify { "other": message => "other message" }
- @notify { "yet another": message => "different message" }
+ it "matches on other parameters" do
+ expect_the_message_to_be(["the message"], <<-MANIFEST)
+ @notify { "testing": message => "the message" }
+ @notify { "other testing": message => "the wrong message" }
- Notify <| title == "testing" or message == "other message" |>
- MANIFEST
- end
+ Notify <| message == "the message" |>
+ MANIFEST
+ end
- it "allows criteria to be grouped with parens" do
- expect_the_message_to_be(["the message", "different message"], <<-MANIFEST)
- @notify { "testing": message => "different message", withpath => true }
- @notify { "other": message => "the message" }
- @notify { "yet another": message => "the message", withpath => true }
+ it "matches against elements of an array valued parameter" do
+ expect_the_message_to_be([["the", "message"]], <<-MANIFEST)
+ @notify { "testing": message => ["the", "message"] }
+ @notify { "other testing": message => ["not", "here"] }
- Notify <| (title == "testing" or message == "the message") and withpath == true |>
- MANIFEST
- end
+ Notify <| message == "message" |>
+ MANIFEST
+ end
- it "does not do anything if nothing matches" do
- expect_the_message_to_be([], <<-MANIFEST)
- @notify { "testing": message => "different message" }
+ it "allows criteria to be combined with 'and'" do
+ expect_the_message_to_be(["the message"], <<-MANIFEST)
+ @notify { "testing": message => "the message" }
+ @notify { "other": message => "the message" }
- Notify <| title == "does not exist" |>
- MANIFEST
- end
+ Notify <| title == "testing" and message == "the message" |>
+ MANIFEST
+ end
- it "excludes items with inequalities" do
- expect_the_message_to_be(["good message"], <<-MANIFEST)
- @notify { "testing": message => "good message" }
- @notify { "the wrong one": message => "bad message" }
+ it "allows criteria to be combined with 'or'" do
+ expect_the_message_to_be(["the message", "other message"], <<-MANIFEST)
+ @notify { "testing": message => "the message" }
+ @notify { "other": message => "other message" }
+ @notify { "yet another": message => "different message" }
- Notify <| title != "the wrong one" |>
- MANIFEST
- end
+ Notify <| title == "testing" or message == "other message" |>
+ MANIFEST
+ end
- context "issue #10963" do
- it "collects with override when inside a class" do
- expect_the_message_to_be(["overridden message"], <<-MANIFEST)
- @notify { "testing": message => "original message" }
+ it "allows criteria to be combined with 'or'" do
+ expect_the_message_to_be(["the message", "other message"], <<-MANIFEST)
+ @notify { "testing": message => "the message" }
+ @notify { "other": message => "other message" }
+ @notify { "yet another": message => "different message" }
- include collector_test
- class collector_test {
- Notify <| |> {
- message => "overridden message"
- }
- }
+ Notify <| title == "testing" or message == "other message" |>
+ MANIFEST
+ end
+
+ it "allows criteria to be grouped with parens" do
+ expect_the_message_to_be(["the message", "different message"], <<-MANIFEST)
+ @notify { "testing": message => "different message", withpath => true }
+ @notify { "other": message => "the message" }
+ @notify { "yet another": message => "the message", withpath => true }
+
+ Notify <| (title == "testing" or message == "the message") and withpath == true |>
MANIFEST
end
- it "collects with override when inside a define" do
- expect_the_message_to_be(["overridden message"], <<-MANIFEST)
- @notify { "testing": message => "original message" }
+ it "does not do anything if nothing matches" do
+ expect_the_message_to_be([], <<-MANIFEST)
+ @notify { "testing": message => "different message" }
+
+ Notify <| title == "does not exist" |>
+ MANIFEST
+ end
+
+ it "excludes items with inequalities" do
+ expect_the_message_to_be(["good message"], <<-MANIFEST)
+ @notify { "testing": message => "good message" }
+ @notify { "the wrong one": message => "bad message" }
+
+ Notify <| title != "the wrong one" |>
+ MANIFEST
+ end
+
+ it "does not exclude resources with unequal arrays" do
+ expect_the_message_to_be(["message", ["not this message", "or this one"]], <<-MANIFEST)
+ @notify { "testing": message => "message" }
+ @notify { "the wrong one": message => ["not this message", "or this one"] }
+
+ Notify <| message != "not this message" |>
+ MANIFEST
+ end
+
+ it "does not exclude tags with inequalities" do
+ expect_the_message_to_be(["wanted message", "the way it works"], <<-MANIFEST)
+ @notify { "testing": tag => ["wanted"], message => "wanted message" }
+ @notify { "other": tag => ["why"], message => "the way it works" }
+
+ Notify <| tag != "why" |>
+ MANIFEST
+ end
+
+ it "does not collect classes" do
+ node = Puppet::Node.new('the node')
+ expect do
+ catalog = compile_to_catalog(<<-MANIFEST, node)
+ class theclass {
+ @notify { "testing": message => "good message" }
+ }
+ Class <| |>
+ MANIFEST
+ end.to raise_error(/Classes cannot be collected/)
+ end
+
+ context "overrides" do
+ it "modifies an existing array" do
+ expect_the_message_to_be([["original message", "extra message"]], <<-MANIFEST)
+ @notify { "testing": message => ["original message"] }
- collector_test { testing: }
- define collector_test() {
Notify <| |> {
- message => "overridden message"
+ message +> "extra message"
}
- }
- MANIFEST
+ MANIFEST
+ end
+
+ it "converts a scalar to an array" do
+ expect_the_message_to_be([["original message", "extra message"]], <<-MANIFEST)
+ @notify { "testing": message => "original message" }
+
+ Notify <| |> {
+ message +> "extra message"
+ }
+ MANIFEST
+ end
+
+ it "collects with override when inside a class (#10963)" do
+ expect_the_message_to_be(["overridden message"], <<-MANIFEST)
+ @notify { "testing": message => "original message" }
+
+ include collector_test
+ class collector_test {
+ Notify <| |> {
+ message => "overridden message"
+ }
+ }
+ MANIFEST
+ end
+
+ it "collects with override when inside a define (#10963)" do
+ expect_the_message_to_be(["overridden message"], <<-MANIFEST)
+ @notify { "testing": message => "original message" }
+
+ collector_test { testing: }
+ define collector_test() {
+ Notify <| |> {
+ message => "overridden message"
+ }
+ }
+ MANIFEST
+ end
+
+ # Catches a regression in implemented behavior; this is not to be taken as the desired behavior,
+ # but it has been this way for a long time.
+ it "collects and overrides user defined resources immediately (before queue is evaluated)" do
+ expect_the_message_to_be(["overridden"], <<-MANIFEST)
+ define foo($message) {
+ notify { "testing": message => $message }
+ }
+ foo { test: message => 'given' }
+ Foo <| |> { message => 'overridden' }
+ MANIFEST
+ end
+
+ # Catches a regression in implemented behavior; this is not to be taken as the desired behavior,
+ # but it has been this way for a long time.
+ it "collects and overrides user defined resources immediately (virtual resources not queued)" do
+ expect_the_message_to_be(["overridden"], <<-MANIFEST)
+ define foo($message) {
+ @notify { "testing": message => $message }
+ }
+ foo { test: message => 'given' }
+ Notify <| |> # must be collected or the assertion does not find it
+ Foo <| |> { message => 'overridden' }
+ MANIFEST
+ end
+
+ # Catches a regression in implemented behavior; this is not to be taken as the desired behavior,
+ # but it has been this way for a long time.
+ # Note the difference from the plain (non +>) case, where the override takes effect
+ it "collects and overrides user defined resources with +>" do
+ expect_the_message_to_be([["given", "overridden"]], <<-MANIFEST)
+ define foo($message) {
+ notify { "$name": message => $message }
+ }
+ foo { test: message => ['given'] }
+ Notify <| |> { message +> ['overridden'] }
+ MANIFEST
+ end
+
+ it "collects and overrides virtual resources multiple times using multiple collects" do
+ expect_the_message_to_be(["overridden2"], <<-MANIFEST)
+ @notify { "testing": message => "original" }
+ Notify <| |> { message => 'overridden1' }
+ Notify <| |> { message => 'overridden2' }
+ MANIFEST
+ end
+
+ it "collects and overrides non virtual resources multiple times using multiple collects" do
+ expect_the_message_to_be(["overridden2"], <<-MANIFEST)
+ notify { "testing": message => "original" }
+ Notify <| |> { message => 'overridden1' }
+ Notify <| |> { message => 'overridden2' }
+ MANIFEST
+ end
+
end
end
+
+ describe "in the current parser" do
+ before :each do
+ Puppet[:parser] = 'current'
+ end
+
+ it_behaves_like "virtual resource collection"
+ end
+
+ describe "in the future parser" do
+ before :each do
+ Puppet[:parser] = 'future'
+ end
+
+ it_behaves_like "virtual resource collection"
+ end
end
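
As a compact reference for the semantics these shared examples pin down, here is a sketch using the same compile_to_catalog helper the spec gets from PuppetSpec::Compiler; the manifest text is illustrative:

````
require 'puppet_spec/compiler'
include PuppetSpec::Compiler

catalog = compile_to_catalog(<<-MANIFEST)
  # virtual resources are only realized once something collects them
  @notify { "testing": message => ["original message"] }

  # an empty query collects every Notify; inside the block, => replaces a
  # parameter and +> appends to it
  Notify <| |> {
    message +> "extra message"
  }
MANIFEST

catalog.resource("Notify[testing]")[:message]
# => ["original message", "extra message"], matching the override examples above
````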
diff --git a/spec/integration/parser/compiler_spec.rb b/spec/integration/parser/compiler_spec.rb
index 6ce0b7d67..e6069d834 100755
--- a/spec/integration/parser/compiler_spec.rb
+++ b/spec/integration/parser/compiler_spec.rb
@@ -1,541 +1,525 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet/parser/parser_factory'
require 'puppet_spec/compiler'
require 'matchers/resource'
describe "Puppet::Parser::Compiler" do
include PuppetSpec::Compiler
include Matchers::Resource
before :each do
@node = Puppet::Node.new "testnode"
@scope_resource = stub 'scope_resource', :builtin? => true, :finish => nil, :ref => 'Class[main]'
@scope = stub 'scope', :resource => @scope_resource, :source => mock("source")
end
- after do
- Puppet.settings.clear
- end
-
- # shared because tests are invoked both for classic and future parser
- #
- shared_examples_for "the compiler" do
- it "should be able to determine the configuration version from a local version control repository" do
- pending("Bug #14071 about semantics of Puppet::Util::Execute on Windows", :if => Puppet.features.microsoft_windows?) do
- # This should always work, because we should always be
- # in the puppet repo when we run this.
- version = %x{git rev-parse HEAD}.chomp
+ it "should be able to determine the configuration version from a local version control repository" do
+ pending("Bug #14071 about semantics of Puppet::Util::Execute on Windows", :if => Puppet.features.microsoft_windows?) do
+ # This should always work, because we should always be
+ # in the puppet repo when we run this.
+ version = %x{git rev-parse HEAD}.chomp
- Puppet.settings[:config_version] = 'git rev-parse HEAD'
+ Puppet.settings[:config_version] = 'git rev-parse HEAD'
- @parser = Puppet::Parser::ParserFactory.parser "development"
- @compiler = Puppet::Parser::Compiler.new(@node)
+ @parser = Puppet::Parser::ParserFactory.parser "development"
+ @compiler = Puppet::Parser::Compiler.new(@node)
- @compiler.catalog.version.should == version
- end
+ @compiler.catalog.version.should == version
end
+ end
+
+ it "should not create duplicate resources when a class is referenced both directly and indirectly by the node classifier (4792)" do
+ Puppet[:code] = <<-PP
+ class foo
+ {
+ notify { foo_notify: }
+ include bar
+ }
+ class bar
+ {
+ notify { bar_notify: }
+ }
+ PP
+
+ @node.stubs(:classes).returns(['foo', 'bar'])
+
+ catalog = Puppet::Parser::Compiler.compile(@node)
+
+ catalog.resource("Notify[foo_notify]").should_not be_nil
+ catalog.resource("Notify[bar_notify]").should_not be_nil
+ end
- it "should not create duplicate resources when a class is referenced both directly and indirectly by the node classifier (4792)" do
+ describe "when resolving class references" do
+ it "should favor local scope, even if there's an included class in topscope" do
Puppet[:code] = <<-PP
- class foo
- {
- notify { foo_notify: }
- include bar
+ class experiment {
+ class baz {
+ }
+ notify {"x" : require => Class[Baz] }
}
- class bar
- {
- notify { bar_notify: }
+ class baz {
}
+ include baz
+ include experiment
+ include experiment::baz
PP
- @node.stubs(:classes).returns(['foo', 'bar'])
+ catalog = Puppet::Parser::Compiler.compile(Puppet::Node.new("mynode"))
- catalog = Puppet::Parser::Compiler.compile(@node)
+ notify_resource = catalog.resource( "Notify[x]" )
- catalog.resource("Notify[foo_notify]").should_not be_nil
- catalog.resource("Notify[bar_notify]").should_not be_nil
+ notify_resource[:require].title.should == "Experiment::Baz"
end
- describe "when resolving class references" do
- it "should favor local scope, even if there's an included class in topscope" do
- Puppet[:code] = <<-PP
- class experiment {
- class baz {
- }
- notify {"x" : require => Class[Baz] }
- }
+ it "should favor local scope, even if there's an unincluded class in topscope" do
+ Puppet[:code] = <<-PP
+ class experiment {
class baz {
}
- include baz
- include experiment
- include experiment::baz
- PP
-
- catalog = Puppet::Parser::Compiler.compile(Puppet::Node.new("mynode"))
+ notify {"x" : require => Class[Baz] }
+ }
+ class baz {
+ }
+ include experiment
+ include experiment::baz
+ PP
- notify_resource = catalog.resource( "Notify[x]" )
+ catalog = Puppet::Parser::Compiler.compile(Puppet::Node.new("mynode"))
- notify_resource[:require].title.should == "Experiment::Baz"
- end
+ notify_resource = catalog.resource( "Notify[x]" )
- it "should favor local scope, even if there's an unincluded class in topscope" do
- Puppet[:code] = <<-PP
- class experiment {
- class baz {
+ notify_resource[:require].title.should == "Experiment::Baz"
+ end
+ end
+ describe "(ticket #13349) when explicitly specifying top scope" do
+ ["class {'::bar::baz':}", "include ::bar::baz"].each do |include|
+ describe "with #{include}" do
+ it "should find the top level class" do
+ Puppet[:code] = <<-MANIFEST
+ class { 'foo::test': }
+ class foo::test {
+ #{include}
}
- notify {"x" : require => Class[Baz] }
- }
- class baz {
- }
- include experiment
- include experiment::baz
- PP
-
- catalog = Puppet::Parser::Compiler.compile(Puppet::Node.new("mynode"))
+ class bar::baz {
+ notify { 'good!': }
+ }
+ class foo::bar::baz {
+ notify { 'bad!': }
+ }
+ MANIFEST
- notify_resource = catalog.resource( "Notify[x]" )
+ catalog = Puppet::Parser::Compiler.compile(Puppet::Node.new("mynode"))
- notify_resource[:require].title.should == "Experiment::Baz"
- end
- end
- describe "(ticket #13349) when explicitly specifying top scope" do
- ["class {'::bar::baz':}", "include ::bar::baz"].each do |include|
- describe "with #{include}" do
- it "should find the top level class" do
- Puppet[:code] = <<-MANIFEST
- class { 'foo::test': }
- class foo::test {
- #{include}
- }
- class bar::baz {
- notify { 'good!': }
- }
- class foo::bar::baz {
- notify { 'bad!': }
- }
- MANIFEST
-
- catalog = Puppet::Parser::Compiler.compile(Puppet::Node.new("mynode"))
-
- catalog.resource("Class[Bar::Baz]").should_not be_nil
- catalog.resource("Notify[good!]").should_not be_nil
- catalog.resource("Class[Foo::Bar::Baz]").should be_nil
- catalog.resource("Notify[bad!]").should be_nil
- end
+ catalog.resource("Class[Bar::Baz]").should_not be_nil
+ catalog.resource("Notify[good!]").should_not be_nil
+ catalog.resource("Class[Foo::Bar::Baz]").should be_nil
+ catalog.resource("Notify[bad!]").should be_nil
end
end
end
+ end
- it "should recompute the version after input files are re-parsed" do
- Puppet[:code] = 'class foo { }'
- Time.stubs(:now).returns(1)
- node = Puppet::Node.new('mynode')
- Puppet::Parser::Compiler.compile(node).version.should == 1
- Time.stubs(:now).returns(2)
- Puppet::Parser::Compiler.compile(node).version.should == 1 # no change because files didn't change
- Puppet::Resource::TypeCollection.any_instance.stubs(:stale?).returns(true).then.returns(false) # pretend change
- Puppet::Parser::Compiler.compile(node).version.should == 2
- end
-
- ['class', 'define', 'node'].each do |thing|
- it "should not allow '#{thing}' inside evaluated conditional constructs" do
- Puppet[:code] = <<-PP
- if true {
- #{thing} foo {
- }
- notify { decoy: }
- }
- PP
-
- begin
- Puppet::Parser::Compiler.compile(Puppet::Node.new("mynode"))
- raise "compilation should have raised Puppet::Error"
- rescue Puppet::Error => e
- e.message.should =~ /at line 2/
- end
- end
- end
+ it "should recompute the version after input files are re-parsed" do
+ Puppet[:code] = 'class foo { }'
+ Time.stubs(:now).returns(1)
+ node = Puppet::Node.new('mynode')
+ Puppet::Parser::Compiler.compile(node).version.should == 1
+ Time.stubs(:now).returns(2)
+ Puppet::Parser::Compiler.compile(node).version.should == 1 # no change because files didn't change
+ Puppet::Resource::TypeCollection.any_instance.stubs(:stale?).returns(true).then.returns(false) # pretend change
+ Puppet::Parser::Compiler.compile(node).version.should == 2
+ end
- it "should not allow classes inside unevaluated conditional constructs" do
+ ['class', 'define', 'node'].each do |thing|
+ it "should not allow '#{thing}' inside evaluated conditional constructs" do
Puppet[:code] = <<-PP
- if false {
- class foo {
+ if true {
+ #{thing} foo {
}
+ notify { decoy: }
}
PP
- lambda { Puppet::Parser::Compiler.compile(Puppet::Node.new("mynode")) }.should raise_error(Puppet::Error)
+ begin
+ Puppet::Parser::Compiler.compile(Puppet::Node.new("mynode"))
+ raise "compilation should have raised Puppet::Error"
+ rescue Puppet::Error => e
+ e.message.should =~ /at line 2/
+ end
end
+ end
- describe "when defining relationships" do
- def extract_name(ref)
- ref.sub(/File\[(\w+)\]/, '\1')
- end
+ it "should not allow classes inside unevaluated conditional constructs" do
+ Puppet[:code] = <<-PP
+ if false {
+ class foo {
+ }
+ }
+ PP
- let(:node) { Puppet::Node.new('mynode') }
- let(:code) do
- <<-MANIFEST
- file { [a,b,c]:
- mode => 0644,
- }
- file { [d,e]:
- mode => 0755,
- }
- MANIFEST
- end
- let(:expected_relationships) { [] }
- let(:expected_subscriptions) { [] }
+ lambda { Puppet::Parser::Compiler.compile(Puppet::Node.new("mynode")) }.should raise_error(Puppet::Error)
+ end
- before :each do
- Puppet[:code] = code
- end
+ describe "when defining relationships" do
+ def extract_name(ref)
+ ref.sub(/File\[(\w+)\]/, '\1')
+ end
- after :each do
- catalog = Puppet::Parser::Compiler.compile(node)
+ let(:node) { Puppet::Node.new('mynode') }
+ let(:code) do
+ <<-MANIFEST
+ file { [a,b,c]:
+ mode => '0644',
+ }
+ file { [d,e]:
+ mode => '0755',
+ }
+ MANIFEST
+ end
+ let(:expected_relationships) { [] }
+ let(:expected_subscriptions) { [] }
- resources = catalog.resources.select { |res| res.type == 'File' }
+ before :each do
+ Puppet[:code] = code
+ end
- actual_relationships, actual_subscriptions = [:before, :notify].map do |relation|
- resources.map do |res|
- dependents = Array(res[relation])
- dependents.map { |ref| [res.title, extract_name(ref)] }
- end.inject(&:concat)
- end
+ after :each do
+ catalog = Puppet::Parser::Compiler.compile(node)
- actual_relationships.should =~ expected_relationships
- actual_subscriptions.should =~ expected_subscriptions
+ resources = catalog.resources.select { |res| res.type == 'File' }
+
+ actual_relationships, actual_subscriptions = [:before, :notify].map do |relation|
+ resources.map do |res|
+ dependents = Array(res[relation])
+ dependents.map { |ref| [res.title, extract_name(ref)] }
+ end.inject(&:concat)
end
- it "should create a relationship" do
- code << "File[a] -> File[b]"
+ actual_relationships.should =~ expected_relationships
+ actual_subscriptions.should =~ expected_subscriptions
+ end
- expected_relationships << ['a','b']
- end
+ it "should create a relationship" do
+ code << "File[a] -> File[b]"
- it "should create a subscription" do
- code << "File[a] ~> File[b]"
+ expected_relationships << ['a','b']
+ end
- expected_subscriptions << ['a', 'b']
- end
+ it "should create a subscription" do
+ code << "File[a] ~> File[b]"
- it "should create relationships using title arrays" do
- code << "File[a,b] -> File[c,d]"
+ expected_subscriptions << ['a', 'b']
+ end
- expected_relationships.concat [
- ['a', 'c'],
- ['b', 'c'],
- ['a', 'd'],
- ['b', 'd'],
- ]
- end
+ it "should create relationships using title arrays" do
+ code << "File[a,b] -> File[c,d]"
- it "should create relationships using collection expressions" do
- code << "File <| mode == 0644 |> -> File <| mode == 0755 |>"
-
- expected_relationships.concat [
- ['a', 'd'],
- ['b', 'd'],
- ['c', 'd'],
- ['a', 'e'],
- ['b', 'e'],
- ['c', 'e'],
- ]
- end
+ expected_relationships.concat [
+ ['a', 'c'],
+ ['b', 'c'],
+ ['a', 'd'],
+ ['b', 'd'],
+ ]
+ end
- it "should create relationships using resource names" do
- code << "'File[a]' -> 'File[b]'"
+ it "should create relationships using collection expressions" do
+ code << "File <| mode == 0644 |> -> File <| mode == 0755 |>"
+
+ expected_relationships.concat [
+ ['a', 'd'],
+ ['b', 'd'],
+ ['c', 'd'],
+ ['a', 'e'],
+ ['b', 'e'],
+ ['c', 'e'],
+ ]
+ end
- expected_relationships << ['a', 'b']
- end
+ it "should create relationships using resource names" do
+ code << "'File[a]' -> 'File[b]'"
- it "should create relationships using variables" do
- code << <<-MANIFEST
- $var = File[a]
- $var -> File[b]
- MANIFEST
+ expected_relationships << ['a', 'b']
+ end
- expected_relationships << ['a', 'b']
- end
+ it "should create relationships using variables" do
+ code << <<-MANIFEST
+ $var = File[a]
+ $var -> File[b]
+ MANIFEST
- it "should create relationships using case statements" do
- code << <<-MANIFEST
- $var = 10
- case $var {
- 10: {
- file { s1: }
- }
- 12: {
- file { s2: }
- }
+ expected_relationships << ['a', 'b']
+ end
+
+ it "should create relationships using case statements" do
+ code << <<-MANIFEST
+ $var = 10
+ case $var {
+ 10: {
+ file { s1: }
}
- ->
- case $var + 2 {
- 10: {
- file { t1: }
- }
- 12: {
- file { t2: }
- }
+ 12: {
+ file { s2: }
}
- MANIFEST
+ }
+ ->
+ case $var + 2 {
+ 10: {
+ file { t1: }
+ }
+ 12: {
+ file { t2: }
+ }
+ }
+ MANIFEST
- expected_relationships << ['s1', 't2']
- end
+ expected_relationships << ['s1', 't2']
+ end
- it "should create relationships using array members" do
- code << <<-MANIFEST
- $var = [ [ [ File[a], File[b] ] ] ]
- $var[0][0][0] -> $var[0][0][1]
- MANIFEST
+ it "should create relationships using array members" do
+ code << <<-MANIFEST
+ $var = [ [ [ File[a], File[b] ] ] ]
+ $var[0][0][0] -> $var[0][0][1]
+ MANIFEST
- expected_relationships << ['a', 'b']
- end
+ expected_relationships << ['a', 'b']
+ end
- it "should create relationships using hash members" do
- code << <<-MANIFEST
- $var = {'foo' => {'bar' => {'source' => File[a], 'target' => File[b]}}}
- $var[foo][bar][source] -> $var[foo][bar][target]
- MANIFEST
+ it "should create relationships using hash members" do
+ code << <<-MANIFEST
+ $var = {'foo' => {'bar' => {'source' => File[a], 'target' => File[b]}}}
+ $var[foo][bar][source] -> $var[foo][bar][target]
+ MANIFEST
- expected_relationships << ['a', 'b']
- end
+ expected_relationships << ['a', 'b']
+ end
- it "should create relationships using resource declarations" do
- code << "file { l: } -> file { r: }"
+ it "should create relationships using resource declarations" do
+ code << "file { l: } -> file { r: }"
- expected_relationships << ['l', 'r']
- end
+ expected_relationships << ['l', 'r']
+ end
- it "should chain relationships" do
- code << "File[a] -> File[b] ~> File[c] <- File[d] <~ File[e]"
+ it "should chain relationships" do
+ code << "File[a] -> File[b] ~> File[c] <- File[d] <~ File[e]"
- expected_relationships << ['a', 'b'] << ['d', 'c']
- expected_subscriptions << ['b', 'c'] << ['e', 'd']
- end
+ expected_relationships << ['a', 'b'] << ['d', 'c']
+ expected_subscriptions << ['b', 'c'] << ['e', 'd']
end
+ end
- context 'when working with immutable node data' do
- context 'and have opted in to immutable_node_data' do
- before :each do
- Puppet[:immutable_node_data] = true
- end
+ context 'when working with immutable node data' do
+ context 'and have opted in to immutable_node_data' do
+ before :each do
+ Puppet[:immutable_node_data] = true
+ end
- def node_with_facts(facts)
- Puppet[:facts_terminus] = :memory
- Puppet::Node::Facts.indirection.save(Puppet::Node::Facts.new("testing", facts))
- node = Puppet::Node.new("testing")
- node.fact_merge
- node
- end
+ def node_with_facts(facts)
+ Puppet[:facts_terminus] = :memory
+ Puppet::Node::Facts.indirection.save(Puppet::Node::Facts.new("testing", facts))
+ node = Puppet::Node.new("testing")
+ node.fact_merge
+ node
+ end
- matcher :fail_compile_with do |node, message_regex|
- match do |manifest|
- @error = nil
- begin
- compile_to_catalog(manifest, node)
- false
- rescue Puppet::Error => e
- @error = e
- message_regex.match(e.message)
- end
+ matcher :fail_compile_with do |node, message_regex|
+ match do |manifest|
+ @error = nil
+ begin
+ PuppetSpec::Compiler.compile_to_catalog(manifest, node)
+ false
+ rescue Puppet::Error => e
+ @error = e
+ message_regex.match(e.message)
end
+ end
- failure_message_for_should do
- if @error
- "failed with #{@error}\n#{@error.backtrace}"
- else
- "did not fail"
- end
+ failure_message_for_should do
+ if @error
+ "failed with #{@error}\n#{@error.backtrace}"
+ else
+ "did not fail"
end
end
+ end
- it 'should make $facts available' do
- node = node_with_facts('the_facts' => 'straight')
+ it 'should make $facts available' do
+ node = node_with_facts('the_facts' => 'straight')
- catalog = compile_to_catalog(<<-MANIFEST, node)
- notify { 'test': message => $facts[the_facts] }
- MANIFEST
+ catalog = compile_to_catalog(<<-MANIFEST, node)
+ notify { 'test': message => $facts[the_facts] }
+ MANIFEST
- catalog.resource("Notify[test]")[:message].should == "straight"
- end
+ catalog.resource("Notify[test]")[:message].should == "straight"
+ end
- it 'should make $facts reserved' do
- node = node_with_facts('the_facts' => 'straight')
+ it 'should make $facts reserved' do
+ node = node_with_facts('the_facts' => 'straight')
- expect('$facts = {}').to fail_compile_with(node, /assign to a reserved variable name: 'facts'/)
- expect('class a { $facts = {} } include a').to fail_compile_with(node, /assign to a reserved variable name: 'facts'/)
- end
+ expect('$facts = {}').to fail_compile_with(node, /assign to a reserved variable name: 'facts'/)
+ expect('class a { $facts = {} } include a').to fail_compile_with(node, /assign to a reserved variable name: 'facts'/)
+ end
- it 'should make $facts immutable' do
- node = node_with_facts('string' => 'value', 'array' => ['string'], 'hash' => { 'a' => 'string' }, 'number' => 1, 'boolean' => true)
+ it 'should make $facts immutable' do
+ node = node_with_facts('string' => 'value', 'array' => ['string'], 'hash' => { 'a' => 'string' }, 'number' => 1, 'boolean' => true)
- expect('$i=inline_template("<% @facts[%q{new}] = 2 %>")').to fail_compile_with(node, /frozen Hash/i)
- expect('$i=inline_template("<% @facts[%q{string}].chop! %>")').to fail_compile_with(node, /frozen String/i)
+ expect('$i=inline_template("<% @facts[%q{new}] = 2 %>")').to fail_compile_with(node, /frozen Hash/i)
+ expect('$i=inline_template("<% @facts[%q{string}].chop! %>")').to fail_compile_with(node, /frozen String/i)
- expect('$i=inline_template("<% @facts[%q{array}][0].chop! %>")').to fail_compile_with(node, /frozen String/i)
- expect('$i=inline_template("<% @facts[%q{array}][1] = 2 %>")').to fail_compile_with(node, /frozen Array/i)
+ expect('$i=inline_template("<% @facts[%q{array}][0].chop! %>")').to fail_compile_with(node, /frozen String/i)
+ expect('$i=inline_template("<% @facts[%q{array}][1] = 2 %>")').to fail_compile_with(node, /frozen Array/i)
- expect('$i=inline_template("<% @facts[%q{hash}][%q{a}].chop! %>")').to fail_compile_with(node, /frozen String/i)
- expect('$i=inline_template("<% @facts[%q{hash}][%q{b}] = 2 %>")').to fail_compile_with(node, /frozen Hash/i)
- end
+ expect('$i=inline_template("<% @facts[%q{hash}][%q{a}].chop! %>")').to fail_compile_with(node, /frozen String/i)
+ expect('$i=inline_template("<% @facts[%q{hash}][%q{b}] = 2 %>")').to fail_compile_with(node, /frozen Hash/i)
+ end
- it 'should make $facts available even if there are no facts' do
- Puppet[:facts_terminus] = :memory
- node = Puppet::Node.new("testing2")
- node.fact_merge
+ it 'should make $facts available even if there are no facts' do
+ Puppet[:facts_terminus] = :memory
+ node = Puppet::Node.new("testing2")
+ node.fact_merge
- catalog = compile_to_catalog(<<-MANIFEST, node)
- notify { 'test': message => $facts }
- MANIFEST
+ catalog = compile_to_catalog(<<-MANIFEST, node)
+ notify { 'test': message => $facts }
+ MANIFEST
- expect(catalog).to have_resource("Notify[test]").with_parameter(:message, {})
- end
+ expect(catalog).to have_resource("Notify[test]").with_parameter(:message, {})
end
+ end
- context 'and have not opted in to immutable_node_data' do
- before :each do
- Puppet[:immutable_node_data] = false
- end
+ context 'and have not opted in to immutable_node_data' do
+ before :each do
+ Puppet[:immutable_node_data] = false
+ end
- it 'should not make $facts available' do
- Puppet[:facts_terminus] = :memory
- facts = Puppet::Node::Facts.new("testing", 'the_facts' => 'straight')
- Puppet::Node::Facts.indirection.save(facts)
- node = Puppet::Node.new("testing")
- node.fact_merge
+ it 'should not make $facts available' do
+ Puppet[:facts_terminus] = :memory
+ facts = Puppet::Node::Facts.new("testing", 'the_facts' => 'straight')
+ Puppet::Node::Facts.indirection.save(facts)
+ node = Puppet::Node.new("testing")
+ node.fact_merge
- catalog = compile_to_catalog(<<-MANIFEST, node)
- notify { 'test': message => "An $facts space" }
- MANIFEST
+ catalog = compile_to_catalog(<<-MANIFEST, node)
+ notify { 'test': message => "An $facts space" }
+ MANIFEST
- catalog.resource("Notify[test]")[:message].should == "An space"
- end
+ catalog.resource("Notify[test]")[:message].should == "An space"
end
end
+ end
- context 'when working with the trusted data hash' do
- context 'and have opted in to trusted_node_data' do
- before :each do
- Puppet[:trusted_node_data] = true
- end
+ context 'when working with the trusted data hash' do
+ context 'and have opted in to trusted_node_data' do
+ before :each do
+ Puppet[:trusted_node_data] = true
+ end
- it 'should make $trusted available' do
- node = Puppet::Node.new("testing")
- node.trusted_data = { "data" => "value" }
+ it 'should make $trusted available' do
+ node = Puppet::Node.new("testing")
+ node.trusted_data = { "data" => "value" }
- catalog = compile_to_catalog(<<-MANIFEST, node)
- notify { 'test': message => $trusted[data] }
- MANIFEST
+ catalog = compile_to_catalog(<<-MANIFEST, node)
+ notify { 'test': message => $trusted[data] }
+ MANIFEST
- catalog.resource("Notify[test]")[:message].should == "value"
- end
+ catalog.resource("Notify[test]")[:message].should == "value"
+ end
- it 'should not allow assignment to $trusted' do
- node = Puppet::Node.new("testing")
- node.trusted_data = { "data" => "value" }
-
- expect do
- catalog = compile_to_catalog(<<-MANIFEST, node)
- $trusted = 'changed'
- notify { 'test': message => $trusted == 'changed' }
- MANIFEST
- catalog.resource("Notify[test]")[:message].should == true
- end.to raise_error(Puppet::Error, /Attempt to assign to a reserved variable name: 'trusted'/)
- end
+ it 'should not allow assignment to $trusted' do
+ node = Puppet::Node.new("testing")
+ node.trusted_data = { "data" => "value" }
- it 'should not allow addition to $trusted hash' do
- node = Puppet::Node.new("testing")
- node.trusted_data = { "data" => "value" }
-
- expect do
- catalog = compile_to_catalog(<<-MANIFEST, node)
- $trusted['extra'] = 'added'
- notify { 'test': message => $trusted['extra'] == 'added' }
- MANIFEST
- catalog.resource("Notify[test]")[:message].should == true
- # different errors depending on regular or future parser
- end.to raise_error(Puppet::Error, /(can't modify frozen [hH]ash)|(Illegal attempt to assign)/)
- end
-
- it 'should not allow addition to $trusted hash via Ruby inline template' do
- node = Puppet::Node.new("testing")
- node.trusted_data = { "data" => "value" }
-
- expect do
- catalog = compile_to_catalog(<<-MANIFEST, node)
- $dummy = inline_template("<% @trusted['extra'] = 'added' %> lol")
- notify { 'test': message => $trusted['extra'] == 'added' }
- MANIFEST
- catalog.resource("Notify[test]")[:message].should == true
- end.to raise_error(Puppet::Error, /can't modify frozen [hH]ash/)
- end
+ expect do
+ catalog = compile_to_catalog(<<-MANIFEST, node)
+ $trusted = 'changed'
+ notify { 'test': message => $trusted == 'changed' }
+ MANIFEST
+ catalog.resource("Notify[test]")[:message].should == true
+ end.to raise_error(Puppet::Error, /Attempt to assign to a reserved variable name: 'trusted'/)
end
- context 'and have not opted in to trusted_node_data' do
- before :each do
- Puppet[:trusted_node_data] = false
- end
-
- it 'should not make $trusted available' do
- node = Puppet::Node.new("testing")
- node.trusted_data = { "data" => "value" }
+ it 'should not allow addition to $trusted hash' do
+ node = Puppet::Node.new("testing")
+ node.trusted_data = { "data" => "value" }
+ expect do
catalog = compile_to_catalog(<<-MANIFEST, node)
- notify { 'test': message => $trusted == undef }
+ $trusted['extra'] = 'added'
+ notify { 'test': message => $trusted['extra'] == 'added' }
MANIFEST
-
catalog.resource("Notify[test]")[:message].should == true
- end
+ # different errors depending on regular or future parser
+ end.to raise_error(Puppet::Error, /(can't modify frozen [hH]ash)|(Illegal attempt to assign)/)
+ end
- it 'should allow assignment to $trusted' do
- node = Puppet::Node.new("testing")
+ it 'should not allow addition to $trusted hash via Ruby inline template' do
+ node = Puppet::Node.new("testing")
+ node.trusted_data = { "data" => "value" }
+ expect do
catalog = compile_to_catalog(<<-MANIFEST, node)
- $trusted = 'changed'
- notify { 'test': message => $trusted == 'changed' }
+ $dummy = inline_template("<% @trusted['extra'] = 'added' %> lol")
+ notify { 'test': message => $trusted['extra'] == 'added' }
MANIFEST
-
catalog.resource("Notify[test]")[:message].should == true
- end
+ end.to raise_error(Puppet::Error, /can't modify frozen [hH]ash/)
end
end
- context 'when evaluating collection' do
- it 'matches on container inherited tags' do
- Puppet[:code] = <<-MANIFEST
- class xport_test {
- tag 'foo_bar'
- @notify { 'nbr1':
- message => 'explicitly tagged',
- tag => 'foo_bar'
- }
+ context 'and have not opted in to trusted_node_data' do
+ before :each do
+ Puppet[:trusted_node_data] = false
+ end
- @notify { 'nbr2':
- message => 'implicitly tagged'
- }
+ it 'should not make $trusted available' do
+ node = Puppet::Node.new("testing")
+ node.trusted_data = { "data" => "value" }
- Notify <| tag == 'foo_bar' |> {
- message => 'overridden'
- }
- }
- include xport_test
+ catalog = compile_to_catalog(<<-MANIFEST, node)
+ notify { 'test': message => $trusted == undef }
MANIFEST
- catalog = Puppet::Parser::Compiler.compile(Puppet::Node.new("mynode"))
+ catalog.resource("Notify[test]")[:message].should == true
+ end
+
+ it 'should allow assignment to $trusted' do
+ node = Puppet::Node.new("testing")
+
+ catalog = compile_to_catalog(<<-MANIFEST, node)
+ $trusted = 'changed'
+ notify { 'test': message => $trusted == 'changed' }
+ MANIFEST
- expect(catalog).to have_resource("Notify[nbr1]").with_parameter(:message, 'overridden')
- expect(catalog).to have_resource("Notify[nbr2]").with_parameter(:message, 'overridden')
+ catalog.resource("Notify[test]")[:message].should == true
end
end
end
- describe 'using classic parser' do
- before :each do
- Puppet[:parser] = 'current'
- end
- it_behaves_like 'the compiler' do
+ context 'when evaluating collection' do
+ it 'matches on container inherited tags' do
+ Puppet[:code] = <<-MANIFEST
+ class xport_test {
+ tag 'foo_bar'
+ @notify { 'nbr1':
+ message => 'explicitly tagged',
+ tag => 'foo_bar'
+ }
+
+ @notify { 'nbr2':
+ message => 'implicitly tagged'
+ }
+
+ Notify <| tag == 'foo_bar' |> {
+ message => 'overridden'
+ }
+ }
+ include xport_test
+ MANIFEST
+
+ catalog = Puppet::Parser::Compiler.compile(Puppet::Node.new("mynode"))
+
+ expect(catalog).to have_resource("Notify[nbr1]").with_parameter(:message, 'overridden')
+ expect(catalog).to have_resource("Notify[nbr2]").with_parameter(:message, 'overridden')
end
end
end
diff --git a/spec/integration/parser/conditionals_spec.rb b/spec/integration/parser/conditionals_spec.rb
new file mode 100644
index 000000000..82d950a06
--- /dev/null
+++ b/spec/integration/parser/conditionals_spec.rb
@@ -0,0 +1,117 @@
+require 'spec_helper'
+require 'puppet_spec/compiler'
+require 'matchers/resource'
+
+describe "Evaluation of Conditionals" do
+ include PuppetSpec::Compiler
+ include Matchers::Resource
+
+ shared_examples_for "a catalog built with conditionals" do
+ it "evaluates an if block correctly" do
+ catalog = compile_to_catalog(<<-CODE)
+ if( 1 == 1) {
+ notify { 'if': }
+ } elsif(2 == 2) {
+ notify { 'elsif': }
+ } else {
+ notify { 'else': }
+ }
+ CODE
+ expect(catalog).to have_resource("Notify[if]")
+ end
+
+ it "evaluates elsif block" do
+ catalog = compile_to_catalog(<<-CODE)
+ if( 1 == 3) {
+ notify { 'if': }
+ } elsif(2 == 2) {
+ notify { 'elsif': }
+ } else {
+ notify { 'else': }
+ }
+ CODE
+ expect(catalog).to have_resource("Notify[elsif]")
+ end
+
+ it "reaches the else clause if no expressions match" do
+ catalog = compile_to_catalog(<<-CODE)
+ if( 1 == 2) {
+ notify { 'if': }
+ } elsif(2 == 3) {
+ notify { 'elsif': }
+ } else {
+ notify { 'else': }
+ }
+ CODE
+ expect(catalog).to have_resource("Notify[else]")
+ end
+
+ it "evalutes false to false" do
+ catalog = compile_to_catalog(<<-CODE)
+ if false {
+ } else {
+ notify { 'false': }
+ }
+ CODE
+ expect(catalog).to have_resource("Notify[false]")
+ end
+
+ it "evaluates the string 'false' as true" do
+ catalog = compile_to_catalog(<<-CODE)
+ if 'false' {
+ notify { 'true': }
+ } else {
+ notify { 'false': }
+ }
+ CODE
+ expect(catalog).to have_resource("Notify[true]")
+ end
+
+ it "evaluates undefined variables as false" do
+ catalog = compile_to_catalog(<<-CODE)
+ if $undef_var {
+ } else {
+ notify { 'undef': }
+ }
+ CODE
+ expect(catalog).to have_resource("Notify[undef]")
+ end
+ end
+
+ context "current parser" do
+ before(:each) do
+ Puppet[:parser] = 'current'
+ end
+
+ it_behaves_like "a catalog built with conditionals"
+
+ it "evaluates empty string as false" do
+ catalog = compile_to_catalog(<<-CODE)
+ if '' {
+ notify { 'true': }
+ } else {
+ notify { 'empty': }
+ }
+ CODE
+ expect(catalog).to have_resource("Notify[empty]")
+ end
+ end
+
+ context "future parser" do
+ before(:each) do
+ Puppet[:parser] = 'future'
+ end
+ it_behaves_like "a catalog built with conditionals"
+
+ it "evaluates empty string as true" do
+ catalog = compile_to_catalog(<<-CODE)
+ if '' {
+ notify { 'true': }
+ } else {
+ notify { 'empty': }
+ }
+ CODE
+ expect(catalog).to have_resource("Notify[true]")
+ end
+ end
+end
diff --git a/spec/integration/parser/future_compiler_spec.rb b/spec/integration/parser/future_compiler_spec.rb
index 7fcd7aaaa..d0fcfcdec 100644
--- a/spec/integration/parser/future_compiler_spec.rb
+++ b/spec/integration/parser/future_compiler_spec.rb
@@ -1,399 +1,750 @@
require 'spec_helper'
require 'puppet/pops'
require 'puppet/parser/parser_factory'
require 'puppet_spec/compiler'
require 'puppet_spec/pops'
require 'puppet_spec/scope'
require 'matchers/resource'
require 'rgen/metamodel_builder'
# Test compilation using the future evaluator
describe "Puppet::Parser::Compiler" do
include PuppetSpec::Compiler
include Matchers::Resource
before :each do
Puppet[:parser] = 'future'
end
describe "the compiler when using future parser and evaluator" do
it "should be able to determine the configuration version from a local version control repository" do
pending("Bug #14071 about semantics of Puppet::Util::Execute on Windows", :if => Puppet.features.microsoft_windows?) do
# This should always work, because we should always be
# in the puppet repo when we run this.
version = %x{git rev-parse HEAD}.chomp
Puppet.settings[:config_version] = 'git rev-parse HEAD'
compiler = Puppet::Parser::Compiler.new(Puppet::Node.new("testnode"))
compiler.catalog.version.should == version
end
end
it "should not create duplicate resources when a class is referenced both directly and indirectly by the node classifier (4792)" do
node = Puppet::Node.new("testnodex")
node.classes = ['foo', 'bar']
catalog = compile_to_catalog(<<-PP, node)
class foo
{
notify { foo_notify: }
include bar
}
class bar
{
notify { bar_notify: }
}
PP
catalog = Puppet::Parser::Compiler.compile(node)
expect(catalog).to have_resource("Notify[foo_notify]")
expect(catalog).to have_resource("Notify[bar_notify]")
end
it 'applies defaults for defines with qualified names (PUP-2302)' do
catalog = compile_to_catalog(<<-CODE)
define my::thing($msg = 'foo') { notify {'check_me': message => $msg } }
My::Thing { msg => 'evoe' }
my::thing { 'name': }
CODE
expect(catalog).to have_resource("Notify[check_me]").with_parameter(:message, "evoe")
end
+ it 'applies defaults from dynamic scopes (3x and future with reverted PUP-867)' do
+ catalog = compile_to_catalog(<<-CODE)
+ class a {
+ Notify { message => "defaulted" }
+ include b
+ notify { bye: }
+ }
+ class b { notify { hi: } }
+
+ include a
+ CODE
+ expect(catalog).to have_resource("Notify[hi]").with_parameter(:message, "defaulted")
+ expect(catalog).to have_resource("Notify[bye]").with_parameter(:message, "defaulted")
+ end
+
+ it 'gets default from inherited class (PUP-867)' do
+ catalog = compile_to_catalog(<<-CODE)
+ class a {
+ Notify { message => "defaulted" }
+ include c
+ notify { bye: }
+ }
+ class b { Notify { message => "inherited" } }
+ class c inherits b { notify { hi: } }
+
+ include a
+ CODE
+
+ expect(catalog).to have_resource("Notify[hi]").with_parameter(:message, "inherited")
+ expect(catalog).to have_resource("Notify[bye]").with_parameter(:message, "defaulted")
+ end
+
+ it 'looks up default parameter values from inherited class (PUP-2532)' do
+ catalog = compile_to_catalog(<<-CODE)
+ class a {
+ Notify { message => "defaulted" }
+ include c
+ notify { bye: }
+ }
+ class b { Notify { message => "inherited" } }
+ class c inherits b { notify { hi: } }
+
+ include a
+ notify {hi_test: message => Notify[hi][message] }
+ notify {bye_test: message => Notify[bye][message] }
+ CODE
+
+ expect(catalog).to have_resource("Notify[hi_test]").with_parameter(:message, "inherited")
+ expect(catalog).to have_resource("Notify[bye_test]").with_parameter(:message, "defaulted")
+ end
+
+ it 'does not allow override of class parameters using a resource override expression' do
+ expect do
+ compile_to_catalog(<<-CODE)
+ Class[a] { x => 2}
+ CODE
+ end.to raise_error(/Resource Override can only.*got: Class\[a\].*/)
+ end
+
describe "when resolving class references" do
- it "should favor local scope, even if there's an included class in topscope" do
+ it "should not favor local scope (with class included in topscope)" do
catalog = compile_to_catalog(<<-PP)
class experiment {
class baz {
}
notify {"x" : require => Class[Baz] }
+ notify {"y" : require => Class[Experiment::Baz] }
}
class baz {
}
include baz
include experiment
include experiment::baz
PP
- expect(catalog).to have_resource("Notify[x]").with_parameter(:require, be_resource("Class[Experiment::Baz]"))
+ expect(catalog).to have_resource("Notify[x]").with_parameter(:require, be_resource("Class[Baz]"))
+ expect(catalog).to have_resource("Notify[y]").with_parameter(:require, be_resource("Class[Experiment::Baz]"))
end
- it "should favor local scope, even if there's an unincluded class in topscope" do
+ it "should not favor local scope, (with class not included in topscope)" do
catalog = compile_to_catalog(<<-PP)
class experiment {
class baz {
}
notify {"x" : require => Class[Baz] }
+ notify {"y" : require => Class[Experiment::Baz] }
}
class baz {
}
include experiment
include experiment::baz
PP
- expect(catalog).to have_resource("Notify[x]").with_parameter(:require, be_resource("Class[Experiment::Baz]"))
+ expect(catalog).to have_resource("Notify[x]").with_parameter(:require, be_resource("Class[Baz]"))
+ expect(catalog).to have_resource("Notify[y]").with_parameter(:require, be_resource("Class[Experiment::Baz]"))
end
end
describe "(ticket #13349) when explicitly specifying top scope" do
["class {'::bar::baz':}", "include ::bar::baz"].each do |include|
describe "with #{include}" do
it "should find the top level class" do
catalog = compile_to_catalog(<<-MANIFEST)
class { 'foo::test': }
class foo::test {
#{include}
}
class bar::baz {
notify { 'good!': }
}
class foo::bar::baz {
notify { 'bad!': }
}
MANIFEST
expect(catalog).to have_resource("Class[Bar::Baz]")
expect(catalog).to have_resource("Notify[good!]")
expect(catalog).to_not have_resource("Class[Foo::Bar::Baz]")
expect(catalog).to_not have_resource("Notify[bad!]")
end
end
end
end
it "should recompute the version after input files are re-parsed" do
Puppet[:code] = 'class foo { }'
Time.stubs(:now).returns(1)
node = Puppet::Node.new('mynode')
Puppet::Parser::Compiler.compile(node).version.should == 1
Time.stubs(:now).returns(2)
Puppet::Parser::Compiler.compile(node).version.should == 1 # no change because files didn't change
Puppet::Resource::TypeCollection.any_instance.stubs(:stale?).returns(true).then.returns(false) # pretend change
Puppet::Parser::Compiler.compile(node).version.should == 2
end
['define', 'class', 'node'].each do |thing|
it "'#{thing}' is not allowed inside evaluated conditional constructs" do
expect do
compile_to_catalog(<<-PP)
if true {
#{thing} foo {
}
notify { decoy: }
}
PP
end.to raise_error(Puppet::Error, /Classes, definitions, and nodes may only appear at toplevel/)
end
it "'#{thing}' is not allowed inside un-evaluated conditional constructs" do
expect do
compile_to_catalog(<<-PP)
if false {
#{thing} foo {
}
notify { decoy: }
}
PP
end.to raise_error(Puppet::Error, /Classes, definitions, and nodes may only appear at toplevel/)
end
end
describe "relationships can be formed" do
def extract_name(ref)
ref.sub(/File\[(\w+)\]/, '\1')
end
def assert_creates_relationships(relationship_code, expectations)
base_manifest = <<-MANIFEST
file { [a,b,c]:
- mode => 0644,
+ mode => '0644',
}
file { [d,e]:
- mode => 0755,
+ mode => '0755',
}
MANIFEST
catalog = compile_to_catalog(base_manifest + relationship_code)
resources = catalog.resources.select { |res| res.type == 'File' }
actual_relationships, actual_subscriptions = [:before, :notify].map do |relation|
resources.map do |res|
dependents = Array(res[relation])
dependents.map { |ref| [res.title, extract_name(ref)] }
end.inject(&:concat)
end
actual_relationships.should =~ (expectations[:relationships] || [])
actual_subscriptions.should =~ (expectations[:subscriptions] || [])
end
it "of regular type" do
assert_creates_relationships("File[a] -> File[b]",
:relationships => [['a','b']])
end
it "of subscription type" do
assert_creates_relationships("File[a] ~> File[b]",
:subscriptions => [['a', 'b']])
end
it "between multiple resources expressed as resource with multiple titles" do
assert_creates_relationships("File[a,b] -> File[c,d]",
:relationships => [['a', 'c'],
['b', 'c'],
['a', 'd'],
['b', 'd']])
end
it "between collection expressions" do
assert_creates_relationships("File <| mode == 0644 |> -> File <| mode == 0755 |>",
:relationships => [['a', 'd'],
['b', 'd'],
['c', 'd'],
['a', 'e'],
['b', 'e'],
['c', 'e']])
end
it "between resources expressed as Strings" do
assert_creates_relationships("'File[a]' -> 'File[b]'",
:relationships => [['a', 'b']])
end
it "between resources expressed as variables" do
assert_creates_relationships(<<-MANIFEST, :relationships => [['a', 'b']])
$var = File[a]
$var -> File[b]
MANIFEST
end
it "between resources expressed as case statements" do
assert_creates_relationships(<<-MANIFEST, :relationships => [['s1', 't2']])
$var = 10
case $var {
10: {
file { s1: }
}
12: {
file { s2: }
}
}
->
case $var + 2 {
10: {
file { t1: }
}
12: {
file { t2: }
}
}
MANIFEST
end
it "using deep access in array" do
assert_creates_relationships(<<-MANIFEST, :relationships => [['a', 'b']])
$var = [ [ [ File[a], File[b] ] ] ]
$var[0][0][0] -> $var[0][0][1]
MANIFEST
end
it "using deep access in hash" do
assert_creates_relationships(<<-MANIFEST, :relationships => [['a', 'b']])
$var = {'foo' => {'bar' => {'source' => File[a], 'target' => File[b]}}}
$var[foo][bar][source] -> $var[foo][bar][target]
MANIFEST
end
it "using resource declarations" do
assert_creates_relationships("file { l: } -> file { r: }", :relationships => [['l', 'r']])
end
it "between entries in a chain of relationships" do
assert_creates_relationships("File[a] -> File[b] ~> File[c] <- File[d] <~ File[e]",
:relationships => [['a', 'b'], ['d', 'c']],
:subscriptions => [['b', 'c'], ['e', 'd']])
end
end
context "when dealing with variable references" do
it 'an initial underscore in a variable name is ok' do
catalog = compile_to_catalog(<<-MANIFEST)
class a { $_a = 10}
include a
notify { 'test': message => $a::_a }
MANIFEST
expect(catalog).to have_resource("Notify[test]").with_parameter(:message, 10)
end
it 'an initial underscore is not ok elsewhere than in the last segment' do
expect do
catalog = compile_to_catalog(<<-MANIFEST)
class a { $_a = 10}
include a
notify { 'test': message => $_a::_a }
MANIFEST
end.to raise_error(/Illegal variable name/)
end
it 'a missing variable as default value becomes undef' do
+ # strict variables not on
catalog = compile_to_catalog(<<-MANIFEST)
- class a ($b=$x) { notify {$b: message=>'meh'} }
+ class a ($b=$x) { notify {test: message=>"yes ${undef == $b}" } }
include a
MANIFEST
- expect(catalog).to have_resource("Notify[undef]").with_parameter(:message, "meh")
+ expect(catalog).to have_resource("Notify[test]").with_parameter(:message, "yes true")
end
end
context 'when working with the trusted data hash' do
context 'and have opted in to hashed_node_data' do
before :each do
Puppet[:trusted_node_data] = true
end
it 'should make $trusted available' do
node = Puppet::Node.new("testing")
node.trusted_data = { "data" => "value" }
catalog = compile_to_catalog(<<-MANIFEST, node)
notify { 'test': message => $trusted[data] }
MANIFEST
expect(catalog).to have_resource("Notify[test]").with_parameter(:message, "value")
end
it 'should not allow assignment to $trusted' do
node = Puppet::Node.new("testing")
node.trusted_data = { "data" => "value" }
expect do
compile_to_catalog(<<-MANIFEST, node)
$trusted = 'changed'
notify { 'test': message => $trusted == 'changed' }
MANIFEST
end.to raise_error(Puppet::Error, /Attempt to assign to a reserved variable name: 'trusted'/)
end
end
context 'and have not opted in to hashed_node_data' do
before :each do
Puppet[:trusted_node_data] = false
end
it 'should not make $trusted available' do
node = Puppet::Node.new("testing")
node.trusted_data = { "data" => "value" }
catalog = compile_to_catalog(<<-MANIFEST, node)
notify { 'test': message => ($trusted == undef) }
MANIFEST
expect(catalog).to have_resource("Notify[test]").with_parameter(:message, true)
end
it 'should allow assignment to $trusted' do
catalog = compile_to_catalog(<<-MANIFEST)
$trusted = 'changed'
notify { 'test': message => $trusted == 'changed' }
MANIFEST
expect(catalog).to have_resource("Notify[test]").with_parameter(:message, true)
end
end
end
+
+ context 'when using typed parameters in definition' do
+ it 'accepts type compliant arguments' do
+ catalog = compile_to_catalog(<<-MANIFEST)
+ define foo(String $x) { }
+ foo { 'test': x =>'say friend' }
+ MANIFEST
+ expect(catalog).to have_resource("Foo[test]").with_parameter(:x, 'say friend')
+ end
+
+ it 'accepts anything when parameters are untyped' do
+ expect do
+ catalog = compile_to_catalog(<<-MANIFEST)
+ define foo($a, $b, $c) { }
+ foo { 'test': a => String, b=>10, c=>undef }
+ MANIFEST
+ end.to_not raise_error()
+ end
+
+ it 'denies non type compliant arguments' do
+ expect do
+ catalog = compile_to_catalog(<<-MANIFEST)
+ define foo(Integer $x) { }
+ foo { 'test': x =>'say friend' }
+ MANIFEST
+ end.to raise_error(/type Integer, got String/)
+ end
+
+ it 'denies non type compliant default argument' do
+ expect do
+ catalog = compile_to_catalog(<<-MANIFEST)
+ define foo(Integer $x = 'pow') { }
+ foo { 'test': }
+ MANIFEST
+ end.to raise_error(/type Integer, got String/)
+ end
+
+ it 'accepts a Resource as a Type' do
+ catalog = compile_to_catalog(<<-MANIFEST)
+ define foo(Type[Bar] $x) {
+ notify { 'test': message => $x[text] }
+ }
+ define bar($text) { }
+ bar { 'joke': text => 'knock knock' }
+ foo { 'test': x => Bar[joke] }
+ MANIFEST
+ expect(catalog).to have_resource("Notify[test]").with_parameter(:message, 'knock knock')
+ end
+ end
+
+ context 'when using typed parameters in class' do
+ it 'accepts type compliant arguments' do
+ catalog = compile_to_catalog(<<-MANIFEST)
+ class foo(String $x) { }
+ class { 'foo': x =>'say friend' }
+ MANIFEST
+ expect(catalog).to have_resource("Class[Foo]").with_parameter(:x, 'say friend')
+ end
+
+ it 'accepts anything when parameters are untyped' do
+ expect do
+ catalog = compile_to_catalog(<<-MANIFEST)
+ class foo($a, $b, $c) { }
+ class { 'foo': a => String, b=>10, c=>undef }
+ MANIFEST
+ end.to_not raise_error()
+ end
+
+ it 'denies non type compliant arguments' do
+ expect do
+ catalog = compile_to_catalog(<<-MANIFEST)
+ class foo(Integer $x) { }
+ class { 'foo': x =>'say friend' }
+ MANIFEST
+ end.to raise_error(/type Integer, got String/)
+ end
+
+ it 'denies non type compliant default argument' do
+ expect do
+ catalog = compile_to_catalog(<<-MANIFEST)
+ class foo(Integer $x = 'pow') { }
+ class { 'foo': }
+ MANIFEST
+ end.to raise_error(/type Integer, got String/)
+ end
+
+ it 'accepts a Resource as a Type' do
+ catalog = compile_to_catalog(<<-MANIFEST)
+ class foo(Type[Bar] $x) {
+ notify { 'test': message => $x[text] }
+ }
+ define bar($text) { }
+ bar { 'joke': text => 'knock knock' }
+ class { 'foo': x => Bar[joke] }
+ MANIFEST
+ expect(catalog).to have_resource("Notify[test]").with_parameter(:message, 'knock knock')
+ end
+ end
+
+ context 'when using typed parameters in lambdas' do
+ it 'accepts type compliant arguments' do
+ catalog = compile_to_catalog(<<-MANIFEST)
+ with('value') |String $x| { notify { "$x": } }
+ MANIFEST
+ expect(catalog).to have_resource("Notify[value]")
+ end
+
+ it 'handles an array as a single argument' do
+ catalog = compile_to_catalog(<<-MANIFEST)
+ with(['value', 'second']) |$x| { notify { "${x[0]} ${x[1]}": } }
+ MANIFEST
+ expect(catalog).to have_resource("Notify[value second]")
+ end
+
+ it 'denies when missing required arguments' do
+ expect do
+ compile_to_catalog(<<-MANIFEST)
+ with(1) |$x, $y| { }
+ MANIFEST
+ end.to raise_error(/Parameter \$y is required but no value was given/m)
+ end
+
+ it 'accepts anything when parameters are untyped' do
+ catalog = compile_to_catalog(<<-MANIFEST)
+ ['value', 1, true, undef].each |$x| { notify { "value: $x": } }
+ MANIFEST
+
+ expect(catalog).to have_resource("Notify[value: value]")
+ expect(catalog).to have_resource("Notify[value: 1]")
+ expect(catalog).to have_resource("Notify[value: true]")
+ expect(catalog).to have_resource("Notify[value: ]")
+ end
+
+ it 'accepts type-compliant, slurped arguments' do
+ catalog = compile_to_catalog(<<-MANIFEST)
+ with(1, 2) |Integer *$x| { notify { "${$x[0] + $x[1]}": } }
+ MANIFEST
+ expect(catalog).to have_resource("Notify[3]")
+ end
+
+ it 'denies non-type-compliant arguments' do
+ expect do
+ compile_to_catalog(<<-MANIFEST)
+ with(1) |String $x| { }
+ MANIFEST
+ end.to raise_error(/expected.*String.*actual.*Integer/m)
+ end
+
+ it 'denies non-type-compliant, slurped arguments' do
+ expect do
+ compile_to_catalog(<<-MANIFEST)
+ with(1, "hello") |Integer *$x| { }
+ MANIFEST
+ end.to raise_error(/called with mis-matched arguments.*expected.*Integer.*actual.*Integer, String/m)
+ end
+
+ it 'denies non-type-compliant default argument' do
+ expect do
+ compile_to_catalog(<<-MANIFEST)
+ with(1) |$x, String $defaulted = 1| { notify { "${$x + $defaulted}": }}
+ MANIFEST
+ end.to raise_error(/expected.*Any.*String.*actual.*Integer.*Integer/m)
+ end
+
+ it 'raises an error when a default argument value is an incorrect type and there are no arguments passed' do
+ expect do
+ compile_to_catalog(<<-MANIFEST)
+ with() |String $defaulted = 1| {}
+ MANIFEST
+ end.to raise_error(/expected.*String.*actual.*Integer/m)
+ end
+
+ it 'raises an error when the default argument for a slurped parameter is an incorrect type' do
+ expect do
+ compile_to_catalog(<<-MANIFEST)
+ with() |String *$defaulted = 1| {}
+ MANIFEST
+ end.to raise_error(/expected.*String.*actual.*Integer/m)
+ end
+
+ it 'allows using an array as the default slurped value' do
+ catalog = compile_to_catalog(<<-MANIFEST)
+ with() |String *$defaulted = [hi]| { notify { $defaulted[0]: } }
+ MANIFEST
+
+ expect(catalog).to have_resource('Notify[hi]')
+ end
+
+ it 'allows using a value of the type as the default slurped value' do
+ catalog = compile_to_catalog(<<-MANIFEST)
+ with() |String *$defaulted = hi| { notify { $defaulted[0]: } }
+ MANIFEST
+
+ expect(catalog).to have_resource('Notify[hi]')
+ end
+
+ it 'allows specifying the type of a slurped parameter as an array' do
+ catalog = compile_to_catalog(<<-MANIFEST)
+ with() |Array[String] *$defaulted = hi| { notify { $defaulted[0]: } }
+ MANIFEST
+
+ expect(catalog).to have_resource('Notify[hi]')
+ end
+
+ it 'raises an error when the number of default values does not match the parameter\'s size specification' do
+ expect do
+ compile_to_catalog(<<-MANIFEST)
+ with() |Array[String, 2] *$defaulted = hi| { }
+ MANIFEST
+ end.to raise_error(/expected.*arg count \{2,\}.*actual.*arg count \{1\}/m)
+ end
+
+ it 'raises an error when the number of passed values does not match the parameter\'s size specification' do
+ expect do
+ compile_to_catalog(<<-MANIFEST)
+ with(hi) |Array[String, 2] *$passed| { }
+ MANIFEST
+ end.to raise_error(/expected.*arg count \{2,\}.*actual.*arg count \{1\}/m)
+ end
+
+ it 'matches when the number of arguments passed for a slurp parameter match the size specification' do
+ catalog = compile_to_catalog(<<-MANIFEST)
+ with(hi, bye) |Array[String, 2] *$passed| {
+ $passed.each |$n| { notify { $n: } }
+ }
+ MANIFEST
+
+ expect(catalog).to have_resource('Notify[hi]')
+ expect(catalog).to have_resource('Notify[bye]')
+ end
+
+ it 'raises an error when the number of allowed slurp parameters exceeds the size constraint' do
+ expect do
+ compile_to_catalog(<<-MANIFEST)
+ with(hi, bye) |Array[String, 1, 1] *$passed| { }
+ MANIFEST
+ end.to raise_error(/expected.*arg count \{1\}.*actual.*arg count \{2\}/m)
+ end
+
+ it 'allows passing slurped arrays by specifying an array of arrays' do
+ catalog = compile_to_catalog(<<-MANIFEST)
+ with([hi], [bye]) |Array[Array[String, 1, 1]] *$passed| {
+ notify { $passed[0][0]: }
+ notify { $passed[1][0]: }
+ }
+ MANIFEST
+
+ expect(catalog).to have_resource('Notify[hi]')
+ expect(catalog).to have_resource('Notify[bye]')
+ end
+
+ it 'raises an error when a required argument follows an optional one' do
+ expect do
+ compile_to_catalog(<<-MANIFEST)
+ with() |$y = first, $x, Array[String, 1] *$passed = bye| {}
+ MANIFEST
+ end.to raise_error(/Parameter \$x is required/)
+ end
+
+ it 'raises an error when the minimum size of a slurped argument makes it required and it follows an optional argument' do
+ expect do
+ compile_to_catalog(<<-MANIFEST)
+ with() |$x = first, Array[String, 1] *$passed| {}
+ MANIFEST
+ end.to raise_error(/Parameter \$passed is required/)
+ end
+
+ it 'allows slurped arguments with a minimum size of 0 after an optional argument' do
+ catalog = compile_to_catalog(<<-MANIFEST)
+ with() |$x = first, Array[String, 0] *$passed| {
+ notify { $x: }
+ }
+ MANIFEST
+
+ expect(catalog).to have_resource('Notify[first]')
+ end
+
+ it 'accepts a Resource as a Type' do
+ catalog = compile_to_catalog(<<-MANIFEST)
+ define bar($text) { }
+ bar { 'joke': text => 'knock knock' }
+
+ with(Bar[joke]) |Type[Bar] $joke| { notify { "${joke[text]}": } }
+ MANIFEST
+ expect(catalog).to have_resource("Notify[knock knock]")
+ end
+ end
end
context 'when evaluating collection' do
it 'matches on container inherited tags' do
Puppet[:code] = <<-MANIFEST
class xport_test {
tag('foo_bar')
@notify { 'nbr1':
message => 'explicitly tagged',
tag => 'foo_bar'
}
@notify { 'nbr2':
message => 'implicitly tagged'
}
Notify <| tag == 'foo_bar' |> {
message => 'overridden'
}
}
include xport_test
MANIFEST
catalog = Puppet::Parser::Compiler.compile(Puppet::Node.new("mynode"))
expect(catalog).to have_resource("Notify[nbr1]").with_parameter(:message, 'overridden')
expect(catalog).to have_resource("Notify[nbr2]").with_parameter(:message, 'overridden')
end
end
end
diff --git a/spec/integration/parser/node_spec.rb b/spec/integration/parser/node_spec.rb
new file mode 100644
index 000000000..7ce58f152
--- /dev/null
+++ b/spec/integration/parser/node_spec.rb
@@ -0,0 +1,185 @@
+require 'spec_helper'
+require 'puppet_spec/compiler'
+require 'matchers/resource'
+
+describe 'node statements' do
+ include PuppetSpec::Compiler
+ include Matchers::Resource
+
+ shared_examples_for 'nodes' do
+ it 'selects a node where the name is just a number' do
+ # Future parser doesn't allow a number in this position
+ catalog = compile_to_catalog(<<-MANIFEST, Puppet::Node.new("5"))
+ node 5 { notify { 'matched': } }
+ MANIFEST
+
+ expect(catalog).to have_resource('Notify[matched]')
+ end
+
+ it 'selects the node with a matching name' do
+ catalog = compile_to_catalog(<<-MANIFEST, Puppet::Node.new("nodename"))
+ node noden {}
+ node nodename { notify { matched: } }
+ node name {}
+ MANIFEST
+
+ expect(catalog).to have_resource('Notify[matched]')
+ end
+
+ it 'prefers a node with a literal name over one with a regex' do
+ catalog = compile_to_catalog(<<-MANIFEST, Puppet::Node.new("nodename"))
+ node /noden.me/ { notify { ignored: } }
+ node nodename { notify { matched: } }
+ MANIFEST
+
+ expect(catalog).to have_resource('Notify[matched]')
+ end
+
+ it 'selects a node where one of the names matches' do
+ catalog = compile_to_catalog(<<-MANIFEST, Puppet::Node.new("nodename"))
+ node different, nodename, other { notify { matched: } }
+ MANIFEST
+
+ expect(catalog).to have_resource('Notify[matched]')
+ end
+
+ it 'arbitrarily selects one of the matching nodes' do
+ catalog = compile_to_catalog(<<-MANIFEST, Puppet::Node.new("nodename"))
+ node /not/ { notify { 'is not matched': } }
+ node /name.*/ { notify { 'could be matched': } }
+ node /na.e/ { notify { 'could also be matched': } }
+ MANIFEST
+
+ expect([catalog.resource('Notify[could be matched]'), catalog.resource('Notify[could also be matched]')].compact).to_not be_empty
+ end
+
+ it 'selects a node where one of the names matches with a mixture of literals and regex' do
+ catalog = compile_to_catalog(<<-MANIFEST, Puppet::Node.new("nodename"))
+ node different, /name/, other { notify { matched: } }
+ MANIFEST
+
+ expect(catalog).to have_resource('Notify[matched]')
+ end
+
+ it 'errors when two nodes with regexes collide after some regex syntax is removed' do
+ expect do
+ compile_to_catalog(<<-MANIFEST)
+ node /a.*(c)?/ { }
+ node 'a.c' { }
+ MANIFEST
+ end.to raise_error(Puppet::Error, /Node 'a.c' is already defined/)
+ end
+
+ it 'provides captures from the regex in the node body' do
+ catalog = compile_to_catalog(<<-MANIFEST, Puppet::Node.new("nodename"))
+ node /(.*)/ { notify { "$1": } }
+ MANIFEST
+
+ expect(catalog).to have_resource('Notify[nodename]')
+ end
+
+ it 'selects the node with the matching regex' do
+ catalog = compile_to_catalog(<<-MANIFEST, Puppet::Node.new("nodename"))
+ node /node.*/ { notify { matched: } }
+ MANIFEST
+
+ expect(catalog).to have_resource('Notify[matched]')
+ end
+
+ it 'selects a node that is a literal string' do
+ catalog = compile_to_catalog(<<-MANIFEST, Puppet::Node.new("node.name"))
+ node 'node.name' { notify { matched: } }
+ MANIFEST
+
+ expect(catalog).to have_resource('Notify[matched]')
+ end
+
+ it 'selects a node that is a prefix of the agent name' do
+ Puppet[:strict_hostname_checking] = false
+ catalog = compile_to_catalog(<<-MANIFEST, Puppet::Node.new("node.name.com"))
+ node 'node.name' { notify { matched: } }
+ MANIFEST
+
+ expect(catalog).to have_resource('Notify[matched]')
+ end
+
+ it 'does not treat regex symbols as a regex inside a string literal' do
+ catalog = compile_to_catalog(<<-MANIFEST, Puppet::Node.new("nodexname"))
+ node 'node.name' { notify { 'not matched': } }
+ node 'nodexname' { notify { 'matched': } }
+ MANIFEST
+
+ expect(catalog).to have_resource('Notify[matched]')
+ end
+
+ it 'errors when two nodes have the same name' do
+ expect do
+ compile_to_catalog(<<-MANIFEST)
+ node name { }
+ node 'name' { }
+ MANIFEST
+ end.to raise_error(Puppet::Error, /Node 'name' is already defined/)
+ end
+ end
+
+ describe 'using classic parser' do
+ before :each do
+ Puppet[:parser] = 'current'
+ end
+
+ it_behaves_like 'nodes'
+
+ it 'includes the inherited nodes of the matching node' do
+ catalog = compile_to_catalog(<<-MANIFEST, Puppet::Node.new("nodename"))
+ node notmatched1 { notify { inherited: } }
+ node nodename inherits notmatched1 { notify { matched: } }
+ node notmatched2 { notify { ignored: } }
+ MANIFEST
+
+ expect(catalog).to have_resource('Notify[matched]')
+ expect(catalog).to have_resource('Notify[inherited]')
+ end
+
+ it 'raises a deprecation warning for node inheritance in the 3x parser' do
+ Puppet.expects(:warning).at_least_once
+ Puppet.expects(:warning).with(regexp_matches(/Deprecation notice\: Node inheritance is not supported in Puppet >= 4\.0\.0/))
+
+ catalog = compile_to_catalog(<<-MANIFEST, Puppet::Node.new("1.2.3.4"))
+ node default {}
+ node '1.2.3.4' inherits default { }
+ MANIFEST
+ end
+ end
+
+ describe 'using future parser' do
+ before :each do
+ Puppet[:parser] = 'future'
+ end
+
+ it_behaves_like 'nodes'
+
+ it 'is unable to parse a name that is an invalid number' do
+ expect do
+ compile_to_catalog('node 5name {} ')
+ end.to raise_error(Puppet::Error, /Illegal number/)
+ end
+
+ it 'parses a node name that is dotted numbers' do
+ catalog = compile_to_catalog(<<-MANIFEST, Puppet::Node.new("1.2.3.4"))
+ node 1.2.3.4 { notify { matched: } }
+ MANIFEST
+
+ expect(catalog).to have_resource('Notify[matched]')
+ end
+
+ it 'raises error for node inheritance' do
+ expect do
+ compile_to_catalog(<<-MANIFEST, Puppet::Node.new("nodename"))
+ node default {}
+ node nodename inherits default { }
+ MANIFEST
+ end.to raise_error(/Node inheritance is not supported in Puppet >= 4\.0\.0/)
+ end
+
+ end
+end
diff --git a/spec/integration/parser/resource_expressions_spec.rb b/spec/integration/parser/resource_expressions_spec.rb
new file mode 100644
index 000000000..c753f5a91
--- /dev/null
+++ b/spec/integration/parser/resource_expressions_spec.rb
@@ -0,0 +1,286 @@
+require 'spec_helper'
+require 'puppet_spec/language'
+
+describe "Puppet resource expressions" do
+ extend PuppetSpec::Language
+
+ describe "future parser" do
+ before :each do
+ Puppet[:parser] = 'future'
+ end
+
+ produces(
+ "$a = notify
+ $b = example
+ $c = { message => hello }
+ @@Resource[$a] {
+ $b:
+ * => $c
+ }
+ realize(Resource[$a, $b])
+ " => "Notify[example][message] == 'hello'")
+
+
+ context "resource titles" do
+ produces(
+ "notify { thing: }" => "defined(Notify[thing])",
+ "$x = thing notify { $x: }" => "defined(Notify[thing])",
+
+ "notify { [thing]: }" => "defined(Notify[thing])",
+ "$x = [thing] notify { $x: }" => "defined(Notify[thing])",
+
+ "notify { [[nested, array]]: }" => "defined(Notify[nested]) and defined(Notify[array])",
+ "$x = [[nested, array]] notify { $x: }" => "defined(Notify[nested]) and defined(Notify[array])",
+
+ "notify { []: }" => [], # this asserts nothing added
+ "$x = [] notify { $x: }" => [], # this asserts nothing added
+
+ "notify { default: }" => "!defined(Notify['default'])", # nothing created because this is just a local default
+ "$x = default notify { $x: }" => "!defined(Notify['default'])")
+
+ fails(
+ "notify { '': }" => /Empty string title/,
+ "$x = '' notify { $x: }" => /Empty string title/,
+
+ "notify { 1: }" => /Illegal title type.*Expected String, got Integer/,
+ "$x = 1 notify { $x: }" => /Illegal title type.*Expected String, got Integer/,
+
+ "notify { [1]: }" => /Illegal title type.*Expected String, got Integer/,
+ "$x = [1] notify { $x: }" => /Illegal title type.*Expected String, got Integer/,
+
+ "notify { 3.0: }" => /Illegal title type.*Expected String, got Float/,
+ "$x = 3.0 notify { $x: }" => /Illegal title type.*Expected String, got Float/,
+
+ "notify { [3.0]: }" => /Illegal title type.*Expected String, got Float/,
+ "$x = [3.0] notify { $x: }" => /Illegal title type.*Expected String, got Float/,
+
+ "notify { true: }" => /Illegal title type.*Expected String, got Boolean/,
+ "$x = true notify { $x: }" => /Illegal title type.*Expected String, got Boolean/,
+
+ "notify { [true]: }" => /Illegal title type.*Expected String, got Boolean/,
+ "$x = [true] notify { $x: }" => /Illegal title type.*Expected String, got Boolean/,
+
+ "notify { [false]: }" => /Illegal title type.*Expected String, got Boolean/,
+ "$x = [false] notify { $x: }" => /Illegal title type.*Expected String, got Boolean/,
+
+ "notify { undef: }" => /Missing title.*undef/,
+ "$x = undef notify { $x: }" => /Missing title.*undef/,
+
+ "notify { [undef]: }" => /Missing title.*undef/,
+ "$x = [undef] notify { $x: }" => /Missing title.*undef/,
+
+ "notify { {nested => hash}: }" => /Illegal title type.*Expected String, got Hash/,
+ "$x = {nested => hash} notify { $x: }" => /Illegal title type.*Expected String, got Hash/,
+
+ "notify { [{nested => hash}]: }" => /Illegal title type.*Expected String, got Hash/,
+ "$x = [{nested => hash}] notify { $x: }" => /Illegal title type.*Expected String, got Hash/,
+
+ "notify { /regexp/: }" => /Illegal title type.*Expected String, got Regexp/,
+ "$x = /regexp/ notify { $x: }" => /Illegal title type.*Expected String, got Regexp/,
+
+ "notify { [/regexp/]: }" => /Illegal title type.*Expected String, got Regexp/,
+ "$x = [/regexp/] notify { $x: }" => /Illegal title type.*Expected String, got Regexp/,
+
+ "notify { [dupe, dupe]: }" => /The title 'dupe' has already been used/,
+ "notify { dupe:; dupe: }" => /The title 'dupe' has already been used/,
+ "notify { [dupe]:; dupe: }" => /The title 'dupe' has already been used/,
+ "notify { [default, default]:}" => /The title 'default' has already been used/,
+ "notify { default:; default:}" => /The title 'default' has already been used/,
+ "notify { [default]:; default:}" => /The title 'default' has already been used/)
+ end
+
+ context "type names" do
+ produces( "notify { testing: }" => "defined(Notify[testing])")
+ produces( "$a = notify; Resource[$a] { testing: }" => "defined(Notify[testing])")
+ produces( "Resource['notify'] { testing: }" => "defined(Notify[testing])")
+ produces( "Resource[sprintf('%s', 'notify')] { testing: }" => "defined(Notify[testing])")
+ produces( "$a = ify; Resource[\"not$a\"] { testing: }" => "defined(Notify[testing])")
+
+ produces( "Notify { testing: }" => "defined(Notify[testing])")
+ produces( "Resource[Notify] { testing: }" => "defined(Notify[testing])")
+ produces( "Resource['Notify'] { testing: }" => "defined(Notify[testing])")
+
+ produces( "class a { notify { testing: } } class { a: }" => "defined(Notify[testing])")
+ produces( "class a { notify { testing: } } Class { a: }" => "defined(Notify[testing])")
+ produces( "class a { notify { testing: } } Resource['class'] { a: }" => "defined(Notify[testing])")
+
+ produces( "define a::b { notify { testing: } } a::b { title: }" => "defined(Notify[testing])")
+ produces( "define a::b { notify { testing: } } A::B { title: }" => "defined(Notify[testing])")
+ produces( "define a::b { notify { testing: } } Resource['a::b'] { title: }" => "defined(Notify[testing])")
+
+ fails( "'class' { a: }" => /Illegal Resource Type expression.*got String/)
+ fails( "'' { testing: }" => /Illegal Resource Type expression.*got String/)
+ fails( "1 { testing: }" => /Illegal Resource Type expression.*got Integer/)
+ fails( "3.0 { testing: }" => /Illegal Resource Type expression.*got Float/)
+ fails( "true { testing: }" => /Illegal Resource Type expression.*got Boolean/)
+ fails( "'not correct' { testing: }" => /Illegal Resource Type expression.*got String/)
+
+ fails( "Notify[hi] { testing: }" => /Illegal Resource Type expression.*got Notify\['hi'\]/)
+ fails( "[Notify, File] { testing: }" => /Illegal Resource Type expression.*got Array\[Type\[Resource\]\]/)
+
+ fails( "define a::b { notify { testing: } } 'a::b' { title: }" => /Illegal Resource Type expression.*got String/)
+
+ fails( "Does::Not::Exist { title: }" => /Invalid resource type does::not::exist/)
+ end
+
+ context "local defaults" do
+ produces(
+ "notify { example:; default: message => defaulted }" => "Notify[example][message] == 'defaulted'",
+ "notify { example: message => specific; default: message => defaulted }" => "Notify[example][message] == 'specific'",
+ "notify { example: message => undef; default: message => defaulted }" => "Notify[example][message] == undef",
+ "notify { [example, other]: ; default: message => defaulted }" => "Notify[example][message] == 'defaulted' and Notify[other][message] == 'defaulted'",
+ "notify { [example, default]: message => set; other: }" => "Notify[example][message] == 'set' and Notify[other][message] == 'set'")
+ end
+
+ context "order of evaluation" do
+ fails("notify { hi: message => value; bye: message => Notify[hi][message] }" => /Resource not found: Notify\['hi'\]/)
+
+ produces("notify { hi: message => (notify { param: message => set }); bye: message => Notify[param][message] }" => "defined(Notify[hi]) and Notify[bye][message] == 'set'")
+ fails("notify { bye: message => Notify[param][message]; hi: message => (notify { param: message => set }) }" => /Resource not found: Notify\['param'\]/)
+ end
+
+ context "parameters" do
+ produces(
+ "notify { title: message => set }" => "Notify[title][message] == 'set'",
+ "$x = set notify { title: message => $x }" => "Notify[title][message] == 'set'",
+
+ "notify { title: *=> { message => set } }" => "Notify[title][message] == 'set'",
+
+ "$x = { message => set } notify { title: * => $x }" => "Notify[title][message] == 'set'",
+
+ # picks up defaults
+ "$x = { owner => the_x }
+ $y = { mode => '0666' }
+ $t = '/tmp/x'
+ file {
+ default:
+ * => $x;
+ $t:
+ path => '/somewhere',
+ * => $y }" => "File[$t][mode] == '0666' and File[$t][owner] == 'the_x' and File[$t][path] == '/somewhere'",
+
+ # explicit wins over default - no error
+ "$x = { owner => the_x, mode => '0777' }
+ $y = { mode => '0666' }
+ $t = '/tmp/x'
+ file {
+ default:
+ * => $x;
+ $t:
+ path => '/somewhere',
+ * => $y }" => "File[$t][mode] == '0666' and File[$t][owner] == 'the_x' and File[$t][path] == '/somewhere'")
+
+ fails("notify { title: unknown => value }" => /Invalid parameter unknown/)
+
+ # this really needs to be a better error message.
+ fails("notify { title: * => { hash => value }, message => oops }" => /Invalid parameter hash/)
+
+ # should this be a better error message?
+ fails("notify { title: message => oops, * => { hash => value } }" => /Invalid parameter hash/)
+
+ fails("notify { title: * => { unknown => value } }" => /Invalid parameter unknown/)
+ fails("
+ $x = { mode => '0666' }
+ $y = { owner => the_y }
+ $t = '/tmp/x'
+ file { $t:
+ * => $x,
+ * => $y }" => /Unfolding of attributes from Hash can only be used once per resource body/)
+ end
+
+ context "virtual" do
+ produces(
+ "@notify { example: }" => "!defined(Notify[example])",
+
+ "@notify { example: }
+ realize(Notify[example])" => "defined(Notify[example])",
+
+ "@notify { virtual: message => set }
+ notify { real:
+ message => Notify[virtual][message] }" => "Notify[real][message] == 'set'")
+ end
+
+ context "exported" do
+ produces(
+ "@@notify { example: }" => "!defined(Notify[example])",
+ "@@notify { example: } realize(Notify[example])" => "defined(Notify[example])",
+ "@@notify { exported: message => set } notify { real: message => Notify[exported][message] }" => "Notify[real][message] == 'set'")
+ end
+ end
+
+ describe "current parser" do
+ before :each do
+ Puppet[:parser] = 'current'
+ end
+
+ produces(
+ "notify { thing: }" => ["Notify[thing]"],
+ "$x = thing notify { $x: }" => ["Notify[thing]"],
+
+ "notify { [thing]: }" => ["Notify[thing]"],
+ "$x = [thing] notify { $x: }" => ["Notify[thing]"],
+
+ "notify { [[nested, array]]: }" => ["Notify[nested]", "Notify[array]"],
+ "$x = [[nested, array]] notify { $x: }" => ["Notify[nested]", "Notify[array]"],
+
+ # deprecate?
+ "notify { 1: }" => ["Notify[1]"],
+ "$x = 1 notify { $x: }" => ["Notify[1]"],
+
+ # deprecate?
+ "notify { [1]: }" => ["Notify[1]"],
+ "$x = [1] notify { $x: }" => ["Notify[1]"],
+
+ # deprecate?
+ "notify { 3.0: }" => ["Notify[3.0]"],
+ "$x = 3.0 notify { $x: }" => ["Notify[3.0]"],
+
+ # deprecate?
+ "notify { [3.0]: }" => ["Notify[3.0]"],
+ "$x = [3.0] notify { $x: }" => ["Notify[3.0]"])
+
+ # :(
+ fails( "notify { true: }" => /Syntax error/)
+ produces("$x = true notify { $x: }" => ["Notify[true]"])
+
+ # this makes no sense given the [false] case
+ produces(
+ "notify { [true]: }" => ["Notify[true]"],
+ "$x = [true] notify { $x: }" => ["Notify[true]"])
+
+ # *sigh*
+ fails(
+ "notify { false: }" => /Syntax error/,
+ "$x = false notify { $x: }" => /No title provided and :notify is not a valid resource reference/,
+
+ "notify { [false]: }" => /No title provided and :notify is not a valid resource reference/,
+ "$x = [false] notify { $x: }" => /No title provided and :notify is not a valid resource reference/)
+
+ # works for variable value, not for literal. deprecate?
+ fails("notify { undef: }" => /Syntax error/)
+ produces(
+ "$x = undef notify { $x: }" => ["Notify[undef]"],
+
+ # deprecate?
+ "notify { [undef]: }" => ["Notify[undef]"],
+ "$x = [undef] notify { $x: }" => ["Notify[undef]"])
+
+ fails("notify { {nested => hash}: }" => /Syntax error/)
+ #produces("$x = {nested => hash} notify { $x: }" => ["Notify[{nested => hash}]"]) #it is created, but isn't possible to reference the resource. deprecate?
+ #produces("notify { [{nested => hash}]: }" => ["Notify[{nested => hash}]"]) #it is created, but isn't possible to reference the resource. deprecate?
+ #produces("$x = [{nested => hash}] notify { $x: }" => ["Notify[{nested => hash}]"]) #it is created, but isn't possible to reference the resource. deprecate?
+
+ fails(
+ "notify { /regexp/: }" => /Syntax error/,
+ "$x = /regexp/ notify { $x: }" => /Syntax error/,
+
+ "notify { [/regexp/]: }" => /Syntax error/,
+ "$x = [/regexp/] notify { $x: }" => /Syntax error/,
+
+ "notify { default: }" => /Syntax error/,
+ "$x = default notify { $x: }" => /Syntax error/,
+
+ "notify { [default]: }" => /Syntax error/,
+ "$x = [default] notify { $x: }" => /Syntax error/)
+ end
+end
diff --git a/spec/integration/parser/ruby_manifest_spec.rb b/spec/integration/parser/ruby_manifest_spec.rb
index d0bd5f0e7..29e9c6379 100755
--- a/spec/integration/parser/ruby_manifest_spec.rb
+++ b/spec/integration/parser/ruby_manifest_spec.rb
@@ -1,123 +1,119 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'tempfile'
require 'puppet_spec/files'
describe "Pure ruby manifests" do
include PuppetSpec::Files
before do
@test_dir = tmpdir('ruby_manifest_test')
end
- after do
- Puppet.settings.clear
- end
-
def write_file(name, contents)
path = File.join(@test_dir, name)
File.open(path, "w") { |f| f.write(contents) }
path
end
def compile(contents)
Puppet[:code] = contents
Dir.chdir(@test_dir) do
Puppet::Parser::Compiler.compile(Puppet::Node.new("mynode"))
end
end
it "should allow classes" do
write_file('foo.rb', ["hostclass 'one' do notify('one_notify') end",
"hostclass 'two' do notify('two_notify') end"].join("\n"))
catalog = compile("import 'foo'\ninclude one")
catalog.resource("Notify[one_notify]").should_not be_nil
catalog.resource("Notify[two_notify]").should be_nil
end
it "should allow defines" do
write_file('foo.rb', 'define "bar", :arg do notify("bar_#{@name}_#{@arg}") end')
catalog = compile("import 'foo'\nbar { instance: arg => 'xyz' }")
catalog.resource("Notify[bar_instance_xyz]").should_not be_nil
catalog.resource("Bar[instance]").should_not be_nil
end
it "should allow node declarations" do
write_file('foo.rb', "node 'mynode' do notify('mynode') end")
catalog = compile("import 'foo'")
node_declaration = catalog.resource("Notify[mynode]")
node_declaration.should_not be_nil
node_declaration.title.should == 'mynode'
end
it "should allow access to the environment" do
write_file('foo.rb', ["hostclass 'bar' do",
" if environment.is_a? Puppet::Node::Environment",
" notify('success')",
" end",
"end"].join("\n"))
compile("import 'foo'\ninclude bar").resource("Notify[success]").should_not be_nil
end
it "should allow creation of resources of built-in types" do
write_file('foo.rb', "hostclass 'bar' do file 'test_file', :owner => 'root', :mode => '644' end")
catalog = compile("import 'foo'\ninclude bar")
file = catalog.resource("File[test_file]")
file.should be_a(Puppet::Resource)
file.type.should == 'File'
file.title.should == 'test_file'
file.exported.should_not be
file.virtual.should_not be
file[:owner].should == 'root'
file[:mode].should == '644'
file[:stage].should be_nil # TODO: is this correct behavior?
end
it "should allow calling user-defined functions" do
write_file('foo.rb', "hostclass 'bar' do user_func 'name', :arg => 'xyz' end")
catalog = compile(['define user_func($arg) { notify {"n_$arg": } }',
'import "foo"',
'include bar'].join("\n"))
catalog.resource("Notify[n_xyz]").should_not be_nil
catalog.resource("User_func[name]").should_not be_nil
end
it "should be properly cached for multiple compiles" do
# Note: we can't test this by calling compile() twice, because
# that sets Puppet[:code], which clears out all cached
# environments.
Puppet[:filetimeout] = 1000
write_file('foo.rb', "hostclass 'bar' do notify('success') end")
Puppet[:code] = "import 'foo'\ninclude bar"
# Compile the catalog and check it
catalog = Dir.chdir(@test_dir) do
Puppet::Parser::Compiler.compile(Puppet::Node.new("mynode"))
end
catalog.resource("Notify[success]").should_not be_nil
# Secretly change the file to make it invalid. This change
# shouldn't be noticed because we've set a high
# Puppet[:filetimeout].
write_file('foo.rb', "raise 'should not be executed'")
# Compile the catalog a second time and make sure it's still ok.
catalog = Dir.chdir(@test_dir) do
Puppet::Parser::Compiler.compile(Puppet::Node.new("mynode"))
end
catalog.resource("Notify[success]").should_not be_nil
end
it "should be properly reloaded when stale" do
Puppet[:filetimeout] = -1 # force stale check to happen all the time
write_file('foo.rb', "hostclass 'bar' do notify('version1') end")
catalog = compile("import 'foo'\ninclude bar")
catalog.resource("Notify[version1]").should_not be_nil
sleep 1 # so that timestamp will change forcing file reload
write_file('foo.rb', "hostclass 'bar' do notify('version2') end")
catalog = compile("import 'foo'\ninclude bar")
catalog.resource("Notify[version1]").should be_nil
catalog.resource("Notify[version2]").should_not be_nil
end
end
diff --git a/spec/integration/parser/scope_spec.rb b/spec/integration/parser/scope_spec.rb
index 1fd84f421..c10caa7f0 100644
--- a/spec/integration/parser/scope_spec.rb
+++ b/spec/integration/parser/scope_spec.rb
@@ -1,798 +1,741 @@
require 'spec_helper'
require 'puppet_spec/compiler'
describe "Two step scoping for variables" do
include PuppetSpec::Compiler
def expect_the_message_to_be(message, node = Puppet::Node.new('the node'))
catalog = compile_to_catalog(yield, node)
catalog.resource('Notify', 'something')[:message].should == message
end
before :each do
Puppet.expects(:deprecation_warning).never
end
context 'using current parser' do
describe "using plussignment to change in a new scope" do
it "does not change a string in the parent scope" do
# Expects to be able to concatenate string using +=
expect_the_message_to_be('top_msg') do <<-MANIFEST
$var = "top_msg"
class override {
$var += "override"
include foo
}
class foo {
notify { 'something': message => $var, }
}
include override
MANIFEST
end
end
end
end
context 'using future parser' do
before(:each) do
Puppet[:parser] = 'future'
end
- describe "using plussignment to change in a new scope" do
- it "does not change a string in the parent scope" do
- # Expects to be able to concatenate string using +=
+ describe "using unsupported operators" do
+ it "issues an error for +=" do
expect do
- catalog = compile_to_catalog(<<-MANIFEST, Puppet::Node.new('the node'))
- $var = "top_msg"
- class override {
- $var += "override"
- include foo
- }
- class foo {
- notify { 'something': message => $var, }
+ catalog = compile_to_catalog(<<-MANIFEST)
+ $var = ["top_msg"]
+ node default {
+ $var += ["override"]
}
+ MANIFEST
+ end.to raise_error(/The operator '\+=' is no longer supported/)
+ end
- include override
+ it "issues an error for -=" do
+ expect do
+ catalog = compile_to_catalog(<<-MANIFEST)
+ $var = ["top_msg"]
+ node default {
+ $var -= ["top_msg"]
+ }
MANIFEST
- end.to raise_error(/The value 'top_msg' cannot be converted to Numeric/)
+ end.to raise_error(/The operator '-=' is no longer supported/)
end
end
it "when using a template ignores the dynamic value of the var when using the @varname syntax" do
expect_the_message_to_be('node_msg') do <<-MANIFEST
node default {
$var = "node_msg"
include foo
}
class foo {
$var = "foo_msg"
include bar
}
class bar {
notify { 'something': message => inline_template("<%= @var %>"), }
}
MANIFEST
end
end
it "when using a template gets the var from an inherited class when using the @varname syntax" do
expect_the_message_to_be('Barbamama') do <<-MANIFEST
node default {
$var = "node_msg"
include bar_bamama
include foo
}
class bar_bamama {
$var = "Barbamama"
}
class foo {
$var = "foo_msg"
include bar
}
class bar inherits bar_bamama {
notify { 'something': message => inline_template("<%= @var %>"), }
}
MANIFEST
end
end
it "when using a template ignores the dynamic var when it is not present in an inherited class" do
expect_the_message_to_be('node_msg') do <<-MANIFEST
node default {
$var = "node_msg"
include bar_bamama
include foo
}
class bar_bamama {
}
class foo {
$var = "foo_msg"
include bar
}
class bar inherits bar_bamama {
notify { 'something': message => inline_template("<%= @var %>"), }
}
MANIFEST
end
end
end
shared_examples_for "the scope" do
describe "fully qualified variable names" do
it "keeps nodescope separate from topscope" do
expect_the_message_to_be('topscope') do <<-MANIFEST
$c = "topscope"
node default {
$c = "nodescope"
notify { 'something': message => $::c }
}
MANIFEST
end
end
end
describe "when colliding class and variable names" do
it "finds a topscope variable with the same name as a class" do
expect_the_message_to_be('topscope') do <<-MANIFEST
$c = "topscope"
class c { }
node default {
include c
notify { 'something': message => $c }
}
MANIFEST
end
end
it "finds a node scope variable with the same name as a class" do
expect_the_message_to_be('nodescope') do <<-MANIFEST
class c { }
node default {
$c = "nodescope"
include c
notify { 'something': message => $c }
}
MANIFEST
end
end
it "finds a class variable when the class collides with a nodescope variable" do
expect_the_message_to_be('class') do <<-MANIFEST
class c { $b = "class" }
node default {
$c = "nodescope"
include c
notify { 'something': message => $c::b }
}
MANIFEST
end
end
it "finds a class variable when the class collides with a topscope variable" do
expect_the_message_to_be('class') do <<-MANIFEST
$c = "topscope"
class c { $b = "class" }
node default {
include c
notify { 'something': message => $::c::b }
}
MANIFEST
end
end
end
describe "when using shadowing and inheritance" do
- it "finds value define in the inherited node" do
- expect_the_message_to_be('parent_msg') do <<-MANIFEST
- $var = "top_msg"
- node parent {
- $var = "parent_msg"
- }
- node default inherits parent {
- include foo
- }
- class foo {
- notify { 'something': message => $var, }
- }
- MANIFEST
- end
- end
-
- it "finds top scope when the class is included before the node defines the var" do
- expect_the_message_to_be('top_msg') do <<-MANIFEST
- $var = "top_msg"
- node parent {
- include foo
- }
- node default inherits parent {
- $var = "default_msg"
- }
- class foo {
- notify { 'something': message => $var, }
- }
- MANIFEST
- end
- end
-
- it "finds top scope when the class is included before the node defines the var" do
- expect_the_message_to_be('top_msg') do <<-MANIFEST
- $var = "top_msg"
- node parent {
- include foo
- }
- node default inherits parent {
- $var = "default_msg"
- }
- class foo {
- notify { 'something': message => $var, }
- }
- MANIFEST
- end
- end
-
it "finds values in its local scope" do
expect_the_message_to_be('local_msg') do <<-MANIFEST
node default {
include baz
}
class foo {
}
class bar inherits foo {
$var = "local_msg"
notify { 'something': message => $var, }
}
class baz {
include bar
}
MANIFEST
end
end
it "finds values in its inherited scope" do
expect_the_message_to_be('foo_msg') do <<-MANIFEST
node default {
include baz
}
class foo {
$var = "foo_msg"
}
class bar inherits foo {
notify { 'something': message => $var, }
}
class baz {
include bar
}
MANIFEST
end
end
it "prefers values in its local scope over values in the inherited scope" do
expect_the_message_to_be('local_msg') do <<-MANIFEST
include bar
class foo {
$var = "inherited"
}
class bar inherits foo {
$var = "local_msg"
notify { 'something': message => $var, }
}
MANIFEST
end
end
it "finds a qualified variable by following parent scopes of the specified scope" do
expect_the_message_to_be("from node") do <<-MANIFEST
class c {
notify { 'something': message => "$a::b" }
}
class a { }
node default {
$b = "from node"
include a
include c
}
MANIFEST
end
end
it "finds values in its inherited scope when the inherited class is qualified to the top" do
expect_the_message_to_be('foo_msg') do <<-MANIFEST
node default {
include baz
}
class foo {
$var = "foo_msg"
}
class bar inherits ::foo {
notify { 'something': message => $var, }
}
class baz {
include bar
}
MANIFEST
end
end
it "prefers values in its local scope over values in the inherited scope when the inherited class is fully qualified" do
expect_the_message_to_be('local_msg') do <<-MANIFEST
include bar
class foo {
$var = "inherited"
}
class bar inherits ::foo {
$var = "local_msg"
notify { 'something': message => $var, }
}
MANIFEST
end
end
it "finds values in top scope when the inherited class is qualified to the top" do
expect_the_message_to_be('top msg') do <<-MANIFEST
$var = "top msg"
class foo {
}
class bar inherits ::foo {
notify { 'something': message => $var, }
}
include bar
MANIFEST
end
end
it "finds values in its inherited scope when the inherited class is a nested class that shadows another class at the top" do
expect_the_message_to_be('inner baz') do <<-MANIFEST
node default {
include foo::bar
}
class baz {
$var = "top baz"
}
class foo {
class baz {
$var = "inner baz"
}
- class bar inherits baz {
+ class bar inherits foo::baz {
notify { 'something': message => $var, }
}
}
MANIFEST
end
end
it "finds values in its inherited scope when the inherited class is qualified to a nested class and qualified to the top" do
expect_the_message_to_be('top baz') do <<-MANIFEST
node default {
include foo::bar
}
class baz {
$var = "top baz"
}
class foo {
class baz {
$var = "inner baz"
}
class bar inherits ::baz {
notify { 'something': message => $var, }
}
}
MANIFEST
end
end
it "finds values in its inherited scope when the inherited class is qualified" do
expect_the_message_to_be('foo_msg') do <<-MANIFEST
node default {
include bar
}
class foo {
class baz {
$var = "foo_msg"
}
}
class bar inherits foo::baz {
notify { 'something': message => $var, }
}
MANIFEST
end
end
it "prefers values in its inherited scope over those in the node (with intermediate inclusion)" do
expect_the_message_to_be('foo_msg') do <<-MANIFEST
node default {
$var = "node_msg"
include baz
}
class foo {
$var = "foo_msg"
}
class bar inherits foo {
notify { 'something': message => $var, }
}
class baz {
include bar
}
MANIFEST
end
end
it "prefers values in its inherited scope over those in the node (without intermediate inclusion)" do
expect_the_message_to_be('foo_msg') do <<-MANIFEST
node default {
$var = "node_msg"
include bar
}
class foo {
$var = "foo_msg"
}
class bar inherits foo {
notify { 'something': message => $var, }
}
MANIFEST
end
end
it "prefers values in its inherited scope over those from where it is included" do
expect_the_message_to_be('foo_msg') do <<-MANIFEST
node default {
include baz
}
class foo {
$var = "foo_msg"
}
class bar inherits foo {
notify { 'something': message => $var, }
}
class baz {
$var = "baz_msg"
include bar
}
MANIFEST
end
end
it "does not used variables from classes included in the inherited scope" do
expect_the_message_to_be('node_msg') do <<-MANIFEST
node default {
$var = "node_msg"
include bar
}
class quux {
$var = "quux_msg"
}
class foo inherits quux {
}
class baz {
include foo
}
class bar inherits baz {
notify { 'something': message => $var, }
}
MANIFEST
end
end
it "does not use a variable from a scope lexically enclosing it" do
expect_the_message_to_be('node_msg') do <<-MANIFEST
node default {
$var = "node_msg"
include other::bar
}
class other {
$var = "other_msg"
class bar {
notify { 'something': message => $var, }
}
}
MANIFEST
end
end
it "finds values in its node scope" do
expect_the_message_to_be('node_msg') do <<-MANIFEST
node default {
$var = "node_msg"
include baz
}
class foo {
}
class bar inherits foo {
notify { 'something': message => $var, }
}
class baz {
include bar
}
MANIFEST
end
end
it "finds values in its top scope" do
expect_the_message_to_be('top_msg') do <<-MANIFEST
$var = "top_msg"
node default {
include baz
}
class foo {
}
class bar inherits foo {
notify { 'something': message => $var, }
}
class baz {
include bar
}
MANIFEST
end
end
it "prefers variables from the node over those in the top scope" do
expect_the_message_to_be('node_msg') do <<-MANIFEST
$var = "top_msg"
node default {
$var = "node_msg"
include foo
}
class foo {
notify { 'something': message => $var, }
}
MANIFEST
end
end
it "finds top scope variables referenced inside a defined type" do
expect_the_message_to_be('top_msg') do <<-MANIFEST
$var = "top_msg"
node default {
foo { "testing": }
}
define foo() {
notify { 'something': message => $var, }
}
MANIFEST
end
end
it "finds node scope variables referenced inside a defined type" do
expect_the_message_to_be('node_msg') do <<-MANIFEST
$var = "top_msg"
node default {
$var = "node_msg"
foo { "testing": }
}
define foo() {
notify { 'something': message => $var, }
}
MANIFEST
end
end
end
describe "in situations that used to have dynamic lookup" do
it "ignores the dynamic value of the var" do
expect_the_message_to_be('node_msg') do <<-MANIFEST
node default {
$var = "node_msg"
include foo
}
class baz {
$var = "baz_msg"
include bar
}
class foo inherits baz {
}
class bar {
notify { 'something': message => $var, }
}
MANIFEST
end
end
it "finds nil when the only set variable is in the dynamic scope" do
expect_the_message_to_be(nil) do <<-MANIFEST
node default {
include baz
}
class foo {
}
class bar inherits foo {
notify { 'something': message => $var, }
}
class baz {
$var = "baz_msg"
include bar
}
MANIFEST
end
end
it "ignores the value in the dynamic scope for a defined type" do
expect_the_message_to_be('node_msg') do <<-MANIFEST
node default {
$var = "node_msg"
include foo
}
class foo {
$var = "foo_msg"
bar { "testing": }
}
define bar() {
notify { 'something': message => $var, }
}
MANIFEST
end
end
it "when using a template ignores the dynamic value of the var when using scope.lookupvar" do
expect_the_message_to_be('node_msg') do <<-MANIFEST
node default {
$var = "node_msg"
include foo
}
class foo {
$var = "foo_msg"
include bar
}
class bar {
notify { 'something': message => inline_template("<%= scope.lookupvar('var') %>"), }
}
MANIFEST
end
end
end
- describe "using plussignment to change in a new scope" do
-
- it "does not change an array in the parent scope" do
- expect_the_message_to_be('top_msg') do <<-MANIFEST
- $var = ["top_msg"]
- class override {
- $var += ["override"]
- include foo
- }
- class foo {
- notify { 'something': message => $var, }
- }
-
- include override
- MANIFEST
- end
- end
-
- it "concatenates two arrays" do
- expect_the_message_to_be(['top_msg', 'override']) do <<-MANIFEST
- $var = ["top_msg"]
- class override {
- $var += ["override"]
- notify { 'something': message => $var, }
- }
-
- include override
- MANIFEST
- end
- end
-
- it "leaves an array of arrays unflattened" do
- expect_the_message_to_be([['top_msg'], ['override']]) do <<-MANIFEST
- $var = [["top_msg"]]
- class override {
- $var += [["override"]]
- notify { 'something': message => $var, }
- }
-
- include override
- MANIFEST
- end
- end
-
- it "does not change a hash in the parent scope" do
- expect_the_message_to_be({"key"=>"top_msg"}) do <<-MANIFEST
- $var = { "key" => "top_msg" }
- class override {
- $var += { "other" => "override" }
- include foo
- }
- class foo {
- notify { 'something': message => $var, }
- }
-
- include override
- MANIFEST
- end
- end
-
- it "replaces a value of a key in the hash instead of merging the values" do
- expect_the_message_to_be({"key"=>"override"}) do <<-MANIFEST
- $var = { "key" => "top_msg" }
- class override {
- $var += { "key" => "override" }
- notify { 'something': message => $var, }
- }
-
- include override
- MANIFEST
- end
- end
- end
-
describe "when using an enc" do
it "places enc parameters in top scope" do
enc_node = Puppet::Node.new("the node", { :parameters => { "var" => 'from_enc' } })
expect_the_message_to_be('from_enc', enc_node) do <<-MANIFEST
notify { 'something': message => $var, }
MANIFEST
end
end
it "does not allow the enc to specify an existing top scope var" do
enc_node = Puppet::Node.new("the_node", { :parameters => { "var" => 'from_enc' } })
expect {
compile_to_catalog("$var = 'top scope'", enc_node)
}.to raise_error(
Puppet::Error,
/Cannot reassign variable var at line 1(\:6)? on node the_node/
)
end
it "evaluates enc classes in top scope when there is no node" do
enc_node = Puppet::Node.new("the node", { :classes => ['foo'], :parameters => { "var" => 'from_enc' } })
expect_the_message_to_be('from_enc', enc_node) do <<-MANIFEST
class foo {
notify { 'something': message => $var, }
}
MANIFEST
end
end
- it "evaluates enc classes in the node scope when there is a matching node" do
- enc_node = Puppet::Node.new("the_node", { :classes => ['foo'] })
-
- expect_the_message_to_be('from matching node', enc_node) do <<-MANIFEST
- node inherited {
- $var = "from inherited"
- }
+ it "overrides enc variables from a node scope var" do
+ enc_node = Puppet::Node.new("the_node", { :classes => ['foo'], :parameters => { 'enc_var' => 'Set from ENC.' } })
- node the_node inherits inherited {
- $var = "from matching node"
+ expect_the_message_to_be('ENC overridden in node', enc_node) do <<-MANIFEST
+ node the_node {
+ $enc_var = "ENC overridden in node"
}
class foo {
- notify { 'something': message => $var, }
+ notify { 'something': message => $enc_var, }
}
MANIFEST
end
end
end
end
describe 'using classic parser' do
before :each do
Puppet[:parser] = 'current'
end
- it_behaves_like 'the scope' do
+
+ it_behaves_like 'the scope'
+
+ it "finds value define in the inherited node" do
+ expect_the_message_to_be('parent_msg') do <<-MANIFEST
+ $var = "top_msg"
+ node parent {
+ $var = "parent_msg"
+ }
+ node default inherits parent {
+ include foo
+ }
+ class foo {
+ notify { 'something': message => $var, }
+ }
+ MANIFEST
+ end
+ end
+
+ it "finds top scope when the class is included before the node defines the var" do
+ expect_the_message_to_be('top_msg') do <<-MANIFEST
+ $var = "top_msg"
+ node parent {
+ include foo
+ }
+ node default inherits parent {
+ $var = "default_msg"
+ }
+ class foo {
+ notify { 'something': message => $var, }
+ }
+ MANIFEST
+ end
+ end
+
+ it "finds top scope when the class is included before the node defines the var" do
+ expect_the_message_to_be('top_msg') do <<-MANIFEST
+ $var = "top_msg"
+ node parent {
+ include foo
+ }
+ node default inherits parent {
+ $var = "default_msg"
+ }
+ class foo {
+ notify { 'something': message => $var, }
+ }
+ MANIFEST
+ end
+ end
+
+ it "evaluates enc classes in the node scope when there is a matching node" do
+ enc_node = Puppet::Node.new("the_node", { :classes => ['foo'] })
+
+ expect_the_message_to_be('from matching node', enc_node) do <<-MANIFEST
+ node inherited {
+ $var = "from inherited"
+ }
+
+ node the_node inherits inherited {
+ $var = "from matching node"
+ }
+
+ class foo {
+ notify { 'something': message => $var, }
+ }
+ MANIFEST
+ end
end
end
describe 'using future parser' do
before :each do
Puppet[:parser] = 'future'
end
- it_behaves_like 'the scope' do
- end
- end
+ it_behaves_like 'the scope'
+ end
end
-
diff --git a/spec/integration/provider/cron/crontab_spec.rb b/spec/integration/provider/cron/crontab_spec.rb
index a88eb1c76..e75523cac 100644
--- a/spec/integration/provider/cron/crontab_spec.rb
+++ b/spec/integration/provider/cron/crontab_spec.rb
@@ -1,252 +1,241 @@
#!/usr/bin/env ruby
require 'spec_helper'
require 'puppet/file_bucket/dipper'
+require 'puppet_spec/compiler'
describe Puppet::Type.type(:cron).provider(:crontab), '(integration)', :unless => Puppet.features.microsoft_windows? do
include PuppetSpec::Files
+ include PuppetSpec::Compiler
before :each do
Puppet::Type.type(:cron).stubs(:defaultprovider).returns described_class
Puppet::FileBucket::Dipper.any_instance.stubs(:backup) # Don't backup to filebucket
# I don't want to execute anything
described_class.stubs(:filetype).returns Puppet::Util::FileType::FileTypeFlat
described_class.stubs(:default_target).returns crontab_user1
# I don't want to stub Time.now to get a static header because I don't know
# where Time.now is used elsewhere, so just go with a very simple header
described_class.stubs(:header).returns "# HEADER: some simple\n# HEADER: header\n"
FileUtils.cp(my_fixture('crontab_user1'), crontab_user1)
FileUtils.cp(my_fixture('crontab_user2'), crontab_user2)
end
after :each do
described_class.clear
end
let :crontab_user1 do
tmpfile('cron_integration_specs')
end
let :crontab_user2 do
tmpfile('cron_integration_specs')
end
- def run_in_catalog(*resources)
- catalog = Puppet::Resource::Catalog.new
- catalog.host_config = false
- resources.each do |resource|
- resource.expects(:err).never
- catalog.add_resource(resource)
- end
-
- # the resources are not properly contained and generated resources
- # will end up with dangling edges without this stubbing:
- catalog.stubs(:container_of).returns resources[0]
- catalog.apply
- end
-
def expect_output(fixture_name)
File.read(crontab_user1).should == File.read(my_fixture(fixture_name))
end
describe "when managing a cron entry" do
it "should be able to purge unmanaged entries" do
- resource = Puppet::Type.type(:cron).new(
- :name => 'only managed entry',
- :ensure => :present,
- :command => '/bin/true',
- :target => crontab_user1,
- :user => crontab_user1
- )
- resources = Puppet::Type.type(:resources).new(
- :name => 'cron',
- :purge => 'true'
- )
- run_in_catalog(resource, resources)
+ apply_with_error_check(<<-MANIFEST)
+ cron {
+ 'only managed entry':
+ ensure => 'present',
+ command => '/bin/true',
+ target => '#{crontab_user1}',
+ }
+ resources { 'cron': purge => 'true' }
+ MANIFEST
expect_output('purged')
end
describe "with ensure absent" do
it "should do nothing if entry already absent" do
- resource = Puppet::Type.type(:cron).new(
- :name => 'no_such_entry',
- :ensure => :absent,
- :target => crontab_user1,
- :user => crontab_user1
- )
- run_in_catalog(resource)
+ apply_with_error_check(<<-MANIFEST)
+ cron {
+ 'no_such_entry':
+ ensure => 'absent',
+ target => '#{crontab_user1}',
+ }
+ MANIFEST
expect_output('crontab_user1')
end
it "should remove the resource from crontab if present" do
- resource = Puppet::Type.type(:cron).new(
- :name => 'My daily failure',
- :ensure => :absent,
- :target => crontab_user1,
- :user => crontab_user1
- )
- run_in_catalog(resource)
+ apply_with_error_check(<<-MANIFEST)
+ cron {
+ 'My daily failure':
+ ensure => 'absent',
+ target => '#{crontab_user1}',
+ }
+ MANIFEST
expect_output('remove_named_resource')
end
it "should remove a matching cronentry if present" do
- resource = Puppet::Type.type(:cron).new(
- :name => 'no_such_named_resource_in_crontab',
- :ensure => :absent,
- :minute => [ '17-19', '22' ],
- :hour => [ '0-23/2' ],
- :weekday => 'Tue',
- :command => '/bin/unnamed_regular_command',
- :target => crontab_user1,
- :user => crontab_user1
- )
- run_in_catalog(resource)
+ apply_with_error_check(<<-MANIFEST)
+ cron {
+ 'no_such_named_resource_in_crontab':
+ ensure => absent,
+ minute => [ '17-19', '22' ],
+ hour => [ '0-23/2' ],
+ weekday => 'Tue',
+ command => '/bin/unnamed_regular_command',
+ target => '#{crontab_user1}',
+ }
+ MANIFEST
expect_output('remove_unnamed_resource')
end
end
describe "with ensure present" do
context "and no command specified" do
it "should work if the resource is already present" do
- resource = Puppet::Type.type(:cron).new(
- :name => 'My daily failure',
- :special => 'daily',
- :target => crontab_user1,
- :user => crontab_user1
- )
- run_in_catalog(resource)
+ apply_with_error_check(<<-MANIFEST)
+ cron {
+ 'My daily failure':
+ special => 'daily',
+ target => '#{crontab_user1}',
+ }
+ MANIFEST
expect_output('crontab_user1')
end
it "should fail if the resource needs creating" do
- resource = Puppet::Type.type(:cron).new(
- :name => 'Entirely new resource',
- :special => 'daily',
- :target => crontab_user1,
- :user => crontab_user1
- )
- resource.expects(:err).with(regexp_matches(/no command/))
- run_in_catalog(resource)
+ manifest = <<-MANIFEST
+ cron {
+ 'Entirely new resource':
+ special => 'daily',
+ target => '#{crontab_user1}',
+ }
+ MANIFEST
+ apply_compiled_manifest(manifest) do |res|
+ if res.ref == 'Cron[Entirely new resource]'
+ res.expects(:err).with(regexp_matches(/no command/))
+ else
+ res.expects(:err).never
+ end
+ end
end
end
it "should do nothing if entry already present" do
- resource = Puppet::Type.type(:cron).new(
- :name => 'My daily failure',
- :special => 'daily',
- :command => '/bin/false',
- :target => crontab_user1,
- :user => crontab_user1
- )
- run_in_catalog(resource)
+ apply_with_error_check(<<-MANIFEST)
+ cron {
+ 'My daily failure':
+ special => 'daily',
+ command => '/bin/false',
+ target => '#{crontab_user1}',
+ }
+ MANIFEST
expect_output('crontab_user1')
end
it "should work correctly when managing 'target' but not 'user'" do
- resource = Puppet::Type.type(:cron).new(
- :name => 'My daily failure',
- :special => 'daily',
- :command => '/bin/false',
- :target => crontab_user1
- )
- run_in_catalog(resource)
+ apply_with_error_check(<<-MANIFEST)
+ cron {
+ 'My daily failure':
+ special => 'daily',
+ command => '/bin/false',
+ target => '#{crontab_user1}',
+ }
+ MANIFEST
expect_output('crontab_user1')
end
it "should do nothing if a matching entry already present" do
- resource = Puppet::Type.type(:cron).new(
- :name => 'no_such_named_resource_in_crontab',
- :ensure => :present,
- :minute => [ '17-19', '22' ],
- :hour => [ '0-23/2' ],
- :command => '/bin/unnamed_regular_command',
- :target => crontab_user1,
- :user => crontab_user1
- )
- run_in_catalog(resource)
+ apply_with_error_check(<<-MANIFEST)
+ cron {
+ 'no_such_named_resource_in_crontab':
+ ensure => present,
+ minute => [ '17-19', '22' ],
+ hour => [ '0-23/2' ],
+ command => '/bin/unnamed_regular_command',
+ target => '#{crontab_user1}',
+ }
+ MANIFEST
expect_output('crontab_user1')
end
it "should add a new normal entry if currently absent" do
- resource = Puppet::Type.type(:cron).new(
- :name => 'new entry',
- :ensure => :present,
- :minute => '12',
- :weekday => 'Tue',
- :command => '/bin/new',
- :environment => [
- 'MAILTO=""',
- 'SHELL=/bin/bash'
- ],
- :target => crontab_user1,
- :user => crontab_user1
- )
- run_in_catalog(resource)
+ apply_with_error_check(<<-MANIFEST)
+ cron {
+ 'new entry':
+ ensure => present,
+ minute => '12',
+ weekday => 'Tue',
+ command => '/bin/new',
+ environment => [
+ 'MAILTO=""',
+ 'SHELL=/bin/bash'
+ ],
+ target => '#{crontab_user1}',
+ }
+ MANIFEST
expect_output('create_normal_entry')
end
it "should add a new special entry if currently absent" do
- resource = Puppet::Type.type(:cron).new(
- :name => 'new special entry',
- :ensure => :present,
- :special => 'reboot',
- :command => 'echo "Booted" 1>&2',
- :environment => 'MAILTO=bob@company.com',
- :target => crontab_user1,
- :user => crontab_user1
- )
- run_in_catalog(resource)
+ apply_with_error_check(<<-MANIFEST)
+ cron {
+ 'new special entry':
+ ensure => present,
+ special => 'reboot',
+ command => 'echo "Booted" 1>&2',
+ environment => 'MAILTO=bob@company.com',
+ target => '#{crontab_user1}',
+ }
+ MANIFEST
expect_output('create_special_entry')
end
it "should change existing entry if out of sync" do
- resource = Puppet::Type.type(:cron).new(
- :name => 'Monthly job',
- :ensure => :present,
- :special => 'monthly',
-# :minute => ['22'],
- :command => '/usr/bin/monthly',
- :environment => [],
- :target => crontab_user1,
- :user => crontab_user1
- )
- run_in_catalog(resource)
+ apply_with_error_check(<<-MANIFEST)
+ cron {
+ 'Monthly job':
+ ensure => present,
+ special => 'monthly',
+ #minute => ['22'],
+ command => '/usr/bin/monthly',
+ environment => [],
+ target => '#{crontab_user1}',
+ }
+ MANIFEST
expect_output('modify_entry')
end
it "should change a special schedule to numeric if requested" do
- resource = Puppet::Type.type(:cron).new(
- :name => 'My daily failure',
- :special => 'absent',
- :command => '/bin/false',
- :target => crontab_user1,
- :user => crontab_user1
- )
- run_in_catalog(resource)
+ apply_with_error_check(<<-MANIFEST)
+ cron {
+ 'My daily failure':
+ special => 'absent',
+ command => '/bin/false',
+ target => '#{crontab_user1}',
+ }
+ MANIFEST
expect_output('unspecialized')
end
it "should not try to move an entry from one file to another" do
# force the parsedfile provider to also parse user1's crontab
- random_resource = Puppet::Type.type(:cron).new(
- :name => 'foo',
- :ensure => :absent,
- :target => crontab_user1,
- :user => crontab_user1
- )
- resource = Puppet::Type.type(:cron).new(
- :name => 'My daily failure',
- :special => 'daily',
- :command => "/bin/false",
- :target => crontab_user2,
- :user => crontab_user2
- )
- run_in_catalog(resource)
+ apply_with_error_check(<<-MANIFEST)
+ cron {
+ 'foo':
+ ensure => absent,
+ target => '#{crontab_user1}';
+ 'My daily failure':
+ special => 'daily',
+ command => "/bin/false",
+ target => '#{crontab_user2}',
+ }
+ MANIFEST
File.read(crontab_user1).should == File.read(my_fixture('moved_cronjob_input1'))
File.read(crontab_user2).should == File.read(my_fixture('moved_cronjob_input2'))
end
end
end
end
diff --git a/spec/integration/ssl/certificate_authority_spec.rb b/spec/integration/ssl/certificate_authority_spec.rb
index acbc38cbc..3cf494afa 100755
--- a/spec/integration/ssl/certificate_authority_spec.rb
+++ b/spec/integration/ssl/certificate_authority_spec.rb
@@ -1,137 +1,163 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet/ssl/certificate_authority'
describe Puppet::SSL::CertificateAuthority, :unless => Puppet.features.microsoft_windows? do
include PuppetSpec::Files
let(:ca) { @ca }
before do
dir = tmpdir("ca_integration_testing")
Puppet.settings[:confdir] = dir
Puppet.settings[:vardir] = dir
Puppet.settings[:group] = Process.gid
Puppet::SSL::Host.ca_location = :local
# this has the side-effect of creating the various directories that we need
@ca = Puppet::SSL::CertificateAuthority.new
end
it "should be able to generate a new host certificate" do
ca.generate("newhost")
Puppet::SSL::Certificate.indirection.find("newhost").should be_instance_of(Puppet::SSL::Certificate)
end
it "should be able to revoke a host certificate" do
ca.generate("newhost")
ca.revoke("newhost")
expect { ca.verify("newhost") }.to raise_error(Puppet::SSL::CertificateAuthority::CertificateVerificationError, "certificate revoked")
end
describe "when signing certificates" do
it "should save the signed certificate" do
host = certificate_request_for("luke.madstop.com")
ca.sign("luke.madstop.com")
Puppet::SSL::Certificate.indirection.find("luke.madstop.com").should be_instance_of(Puppet::SSL::Certificate)
end
it "should be able to sign multiple certificates" do
host = certificate_request_for("luke.madstop.com")
other = certificate_request_for("other.madstop.com")
ca.sign("luke.madstop.com")
ca.sign("other.madstop.com")
Puppet::SSL::Certificate.indirection.find("other.madstop.com").should be_instance_of(Puppet::SSL::Certificate)
Puppet::SSL::Certificate.indirection.find("luke.madstop.com").should be_instance_of(Puppet::SSL::Certificate)
end
it "should save the signed certificate to the :signeddir" do
host = certificate_request_for("luke.madstop.com")
ca.sign("luke.madstop.com")
client_cert = File.join(Puppet[:signeddir], "luke.madstop.com.pem")
File.read(client_cert).should == Puppet::SSL::Certificate.indirection.find("luke.madstop.com").content.to_s
end
it "should save valid certificates" do
host = certificate_request_for("luke.madstop.com")
ca.sign("luke.madstop.com")
unless ssl = Puppet::Util::which('openssl')
pending "No ssl available"
else
ca_cert = Puppet[:cacert]
client_cert = File.join(Puppet[:signeddir], "luke.madstop.com.pem")
output = %x{openssl verify -CAfile #{ca_cert} #{client_cert}}
$CHILD_STATUS.should == 0
end
end
it "should verify proof of possession when signing certificates" do
host = certificate_request_for("luke.madstop.com")
csr = host.certificate_request
wrong_key = Puppet::SSL::Key.new(host.name)
wrong_key.generate
csr.content.public_key = wrong_key.content.public_key
# The correct key has to be removed so we can save the incorrect one
Puppet::SSL::CertificateRequest.indirection.destroy(host.name)
Puppet::SSL::CertificateRequest.indirection.save(csr)
expect {
ca.sign(host.name)
}.to raise_error(
Puppet::SSL::CertificateAuthority::CertificateSigningError,
"CSR contains a public key that does not correspond to the signing key"
)
end
end
+ describe "when revoking certificate" do
+ it "should work for one certificate" do
+ certificate_request_for("luke.madstop.com")
+
+ ca.sign("luke.madstop.com")
+ ca.revoke("luke.madstop.com")
+
+ expect { ca.verify("luke.madstop.com") }.to raise_error(
+ Puppet::SSL::CertificateAuthority::CertificateVerificationError,
+ "certificate revoked"
+ )
+ end
+
+ it "should work for several certificates" do
+ 3.times.each do |c|
+ certificate_request_for("luke.madstop.com")
+ ca.sign("luke.madstop.com")
+ ca.destroy("luke.madstop.com")
+ end
+ ca.revoke("luke.madstop.com")
+
+ ca.crl.content.revoked.map { |r| r.serial }.should == [2,3,4] # ca has serial 1
+ end
+
+ end
+
it "allows autosigning certificates concurrently", :unless => Puppet::Util::Platform.windows? do
Puppet[:autosign] = true
hosts = (0..4).collect { |i| certificate_request_for("host#{i}") }
run_in_parallel(5) do |i|
ca.autosign(Puppet::SSL::CertificateRequest.indirection.find(hosts[i].name))
end
certs = hosts.collect { |host| Puppet::SSL::Certificate.indirection.find(host.name).content }
serial_numbers = certs.collect(&:serial)
serial_numbers.sort.should == [2, 3, 4, 5, 6] # serial 1 is the ca certificate
end
def certificate_request_for(hostname)
key = Puppet::SSL::Key.new(hostname)
key.generate
host = Puppet::SSL::Host.new(hostname)
host.key = key
host.generate_certificate_request
host
end
def run_in_parallel(number)
children = []
number.times do |i|
children << Kernel.fork do
yield i
end
end
children.each { |pid| Process.wait(pid) }
end
end
diff --git a/spec/integration/ssl/certificate_request_spec.rb b/spec/integration/ssl/certificate_request_spec.rb
index 4a035d532..eeb29da79 100755
--- a/spec/integration/ssl/certificate_request_spec.rb
+++ b/spec/integration/ssl/certificate_request_spec.rb
@@ -1,54 +1,48 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet/ssl/certificate_request'
describe Puppet::SSL::CertificateRequest do
include PuppetSpec::Files
before do
# Get a safe temporary file
dir = tmpdir("csr_integration_testing")
- Puppet.settings.clear
-
Puppet.settings[:confdir] = dir
Puppet.settings[:vardir] = dir
Puppet.settings[:group] = Process.gid
Puppet::SSL::Host.ca_location = :none
@csr = Puppet::SSL::CertificateRequest.new("luke.madstop.com")
@key = OpenSSL::PKey::RSA.new(512)
# This is necessary so the terminus instances don't lie around.
Puppet::SSL::CertificateRequest.indirection.termini.clear
end
- after do
- Puppet.settings.clear
- end
-
it "should be able to generate CSRs" do
@csr.generate(@key)
end
it "should be able to save CSRs" do
Puppet::SSL::CertificateRequest.indirection.save(@csr)
end
it "should be able to find saved certificate requests via the Indirector" do
@csr.generate(@key)
Puppet::SSL::CertificateRequest.indirection.save(@csr)
Puppet::SSL::CertificateRequest.indirection.find("luke.madstop.com").should be_instance_of(Puppet::SSL::CertificateRequest)
end
it "should save the completely CSR when saving" do
@csr.generate(@key)
Puppet::SSL::CertificateRequest.indirection.save(@csr)
Puppet::SSL::CertificateRequest.indirection.find("luke.madstop.com").content.to_s.should == @csr.content.to_s
end
end
diff --git a/spec/integration/ssl/certificate_revocation_list_spec.rb b/spec/integration/ssl/certificate_revocation_list_spec.rb
index 06a69a741..ec344926b 100755
--- a/spec/integration/ssl/certificate_revocation_list_spec.rb
+++ b/spec/integration/ssl/certificate_revocation_list_spec.rb
@@ -1,37 +1,35 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet/ssl/certificate_revocation_list'
describe Puppet::SSL::CertificateRevocationList do
include PuppetSpec::Files
before do
# Get a safe temporary file
dir = tmpdir("ca_integration_testing")
Puppet.settings[:confdir] = dir
Puppet.settings[:vardir] = dir
Puppet.settings[:group] = Process.gid
Puppet::SSL::Host.ca_location = :local
end
after {
Puppet::SSL::Host.ca_location = :none
- Puppet.settings.clear
-
# This is necessary so the terminus instances don't lie around.
Puppet::SSL::Host.indirection.termini.clear
}
it "should be able to read in written out CRLs with no revoked certificates" do
ca = Puppet::SSL::CertificateAuthority.new
raise "CRL not created" unless Puppet::FileSystem.exist?(Puppet[:hostcrl])
crl = Puppet::SSL::CertificateRevocationList.new("crl_int_testing")
crl.read(Puppet[:hostcrl])
end
end
diff --git a/spec/integration/ssl/host_spec.rb b/spec/integration/ssl/host_spec.rb
index fbb108db7..18f0d17fc 100755
--- a/spec/integration/ssl/host_spec.rb
+++ b/spec/integration/ssl/host_spec.rb
@@ -1,84 +1,82 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet/ssl/host'
describe Puppet::SSL::Host do
include PuppetSpec::Files
before do
# Get a safe temporary file
dir = tmpdir("host_integration_testing")
Puppet.settings[:confdir] = dir
Puppet.settings[:vardir] = dir
Puppet.settings[:group] = Process.gid
Puppet::SSL::Host.ca_location = :local
@host = Puppet::SSL::Host.new("luke.madstop.com")
@ca = Puppet::SSL::CertificateAuthority.new
end
after {
Puppet::SSL::Host.ca_location = :none
-
- Puppet.settings.clear
}
it "should be considered a CA host if its name is equal to 'ca'" do
Puppet::SSL::Host.new(Puppet::SSL::CA_NAME).should be_ca
end
describe "when managing its key" do
it "should be able to generate and save a key" do
@host.generate_key
end
it "should save the key such that the Indirector can find it" do
@host.generate_key
Puppet::SSL::Key.indirection.find(@host.name).content.to_s.should == @host.key.to_s
end
it "should save the private key into the :privatekeydir" do
@host.generate_key
File.read(File.join(Puppet.settings[:privatekeydir], "luke.madstop.com.pem")).should == @host.key.to_s
end
end
describe "when managing its certificate request" do
it "should be able to generate and save a certificate request" do
@host.generate_certificate_request
end
it "should save the certificate request such that the Indirector can find it" do
@host.generate_certificate_request
Puppet::SSL::CertificateRequest.indirection.find(@host.name).content.to_s.should == @host.certificate_request.to_s
end
it "should save the private certificate request into the :privatekeydir" do
@host.generate_certificate_request
File.read(File.join(Puppet.settings[:requestdir], "luke.madstop.com.pem")).should == @host.certificate_request.to_s
end
end
describe "when the CA host" do
it "should never store its key in the :privatekeydir" do
Puppet.settings.use(:main, :ssl, :ca)
@ca = Puppet::SSL::Host.new(Puppet::SSL::Host.ca_name)
@ca.generate_key
Puppet::FileSystem.exist?(File.join(Puppet[:privatekeydir], "ca.pem")).should be_false
end
end
it "should pass the verification of its own SSL store", :unless => Puppet.features.microsoft_windows? do
@host.generate
@ca = Puppet::SSL::CertificateAuthority.new
@ca.sign(@host.name)
@host.ssl_store.verify(@host.certificate.content).should be_true
end
end
diff --git a/spec/integration/transaction_spec.rb b/spec/integration/transaction_spec.rb
index fc1fff228..35557b8f2 100755
--- a/spec/integration/transaction_spec.rb
+++ b/spec/integration/transaction_spec.rb
@@ -1,346 +1,362 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet/transaction'
describe Puppet::Transaction do
include PuppetSpec::Files
before do
Puppet::Util::Storage.stubs(:store)
end
def mk_catalog(*resources)
catalog = Puppet::Resource::Catalog.new(Puppet::Node.new("mynode"))
resources.each { |res| catalog.add_resource res }
catalog
end
def usr_bin_touch(path)
Puppet.features.microsoft_windows? ? "#{ENV['windir']}/system32/cmd.exe /c \"type NUL >> \"#{path}\"\"" : "/usr/bin/touch #{path}"
end
def touch(path)
Puppet.features.microsoft_windows? ? "cmd.exe /c \"type NUL >> \"#{path}\"\"" : "touch #{path}"
end
it "should not apply generated resources if the parent resource fails" do
catalog = Puppet::Resource::Catalog.new
resource = Puppet::Type.type(:file).new :path => make_absolute("/foo/bar"), :backup => false
catalog.add_resource resource
child_resource = Puppet::Type.type(:file).new :path => make_absolute("/foo/bar/baz"), :backup => false
resource.expects(:eval_generate).returns([child_resource])
transaction = Puppet::Transaction.new(catalog, nil, Puppet::Graph::RandomPrioritizer.new)
resource.expects(:retrieve).raises "this is a failure"
resource.stubs(:err)
child_resource.expects(:retrieve).never
transaction.evaluate
end
it "should not apply virtual resources" do
catalog = Puppet::Resource::Catalog.new
resource = Puppet::Type.type(:file).new :path => make_absolute("/foo/bar"), :backup => false
resource.virtual = true
catalog.add_resource resource
transaction = Puppet::Transaction.new(catalog, nil, Puppet::Graph::RandomPrioritizer.new)
resource.expects(:evaluate).never
transaction.evaluate
end
it "should apply exported resources" do
catalog = Puppet::Resource::Catalog.new
path = tmpfile("exported_files")
resource = Puppet::Type.type(:file).new :path => path, :backup => false, :ensure => :file
resource.exported = true
catalog.add_resource resource
catalog.apply
Puppet::FileSystem.exist?(path).should be_true
end
it "should not apply virtual exported resources" do
catalog = Puppet::Resource::Catalog.new
resource = Puppet::Type.type(:file).new :path => make_absolute("/foo/bar"), :backup => false
resource.exported = true
resource.virtual = true
catalog.add_resource resource
transaction = Puppet::Transaction.new(catalog, nil, Puppet::Graph::RandomPrioritizer.new)
resource.expects(:evaluate).never
transaction.evaluate
end
it "should not apply device resources on normal host" do
catalog = Puppet::Resource::Catalog.new
resource = Puppet::Type.type(:interface).new :name => "FastEthernet 0/1"
catalog.add_resource resource
transaction = Puppet::Transaction.new(catalog, nil, Puppet::Graph::RandomPrioritizer.new)
transaction.for_network_device = false
transaction.expects(:apply).never.with(resource, nil)
transaction.evaluate
transaction.resource_status(resource).should be_skipped
end
it "should not apply host resources on device" do
catalog = Puppet::Resource::Catalog.new
resource = Puppet::Type.type(:file).new :path => make_absolute("/foo/bar"), :backup => false
catalog.add_resource resource
transaction = Puppet::Transaction.new(catalog, nil, Puppet::Graph::RandomPrioritizer.new)
transaction.for_network_device = true
transaction.expects(:apply).never.with(resource, nil)
transaction.evaluate
transaction.resource_status(resource).should be_skipped
end
it "should apply device resources on device" do
catalog = Puppet::Resource::Catalog.new
resource = Puppet::Type.type(:interface).new :name => "FastEthernet 0/1"
catalog.add_resource resource
transaction = Puppet::Transaction.new(catalog, nil, Puppet::Graph::RandomPrioritizer.new)
transaction.for_network_device = true
transaction.expects(:apply).with(resource, nil)
transaction.evaluate
transaction.resource_status(resource).should_not be_skipped
end
it "should apply resources appliable on host and device on a device" do
catalog = Puppet::Resource::Catalog.new
resource = Puppet::Type.type(:schedule).new :name => "test"
catalog.add_resource resource
transaction = Puppet::Transaction.new(catalog, nil, Puppet::Graph::RandomPrioritizer.new)
transaction.for_network_device = true
transaction.expects(:apply).with(resource, nil)
transaction.evaluate
transaction.resource_status(resource).should_not be_skipped
end
# Verify that one component requiring another causes the contained
# resources in the requiring component to get refreshed.
it "should propagate events from a contained resource through its container to its dependent container's contained resources" do
transaction = nil
file = Puppet::Type.type(:file).new :path => tmpfile("event_propagation"), :ensure => :present
execfile = File.join(tmpdir("exec_event"), "exectestingness2")
exec = Puppet::Type.type(:exec).new :command => touch(execfile), :path => ENV['PATH']
catalog = mk_catalog(file)
fcomp = Puppet::Type.type(:component).new(:name => "Foo[file]")
catalog.add_resource fcomp
catalog.add_edge(fcomp, file)
ecomp = Puppet::Type.type(:component).new(:name => "Foo[exec]")
catalog.add_resource ecomp
catalog.add_resource exec
catalog.add_edge(ecomp, exec)
ecomp[:subscribe] = Puppet::Resource.new(:foo, "file")
exec[:refreshonly] = true
exec.expects(:refresh)
catalog.apply
end
# Make sure that multiple subscriptions get triggered.
it "should propagate events to all dependent resources" do
path = tmpfile("path")
file1 = tmpfile("file1")
file2 = tmpfile("file2")
file = Puppet::Type.type(:file).new(
:path => path,
:ensure => "file"
)
exec1 = Puppet::Type.type(:exec).new(
:path => ENV["PATH"],
:command => touch(file1),
:refreshonly => true,
:subscribe => Puppet::Resource.new(:file, path)
)
exec2 = Puppet::Type.type(:exec).new(
:path => ENV["PATH"],
:command => touch(file2),
:refreshonly => true,
:subscribe => Puppet::Resource.new(:file, path)
)
catalog = mk_catalog(file, exec1, exec2)
catalog.apply
Puppet::FileSystem.exist?(file1).should be_true
Puppet::FileSystem.exist?(file2).should be_true
end
+ it "should apply no resources whatsoever if a pre_run_check fails" do
+ path = tmpfile("path")
+ file = Puppet::Type.type(:file).new(
+ :path => path,
+ :ensure => "file"
+ )
+ notify = Puppet::Type.type(:notify).new(
+ :title => "foo"
+ )
+ notify.expects(:pre_run_check).raises(Puppet::Error, "fail for testing")
+
+ catalog = mk_catalog(file, notify)
+ catalog.apply
+ Puppet::FileSystem.exist?(path).should_not be_true
+ end
+
it "should not let one failed refresh result in other refreshes failing" do
path = tmpfile("path")
newfile = tmpfile("file")
file = Puppet::Type.type(:file).new(
:path => path,
:ensure => "file"
)
exec1 = Puppet::Type.type(:exec).new(
:path => ENV["PATH"],
:command => touch(File.expand_path("/this/cannot/possibly/exist")),
:logoutput => true,
:refreshonly => true,
:subscribe => file,
:title => "one"
)
exec2 = Puppet::Type.type(:exec).new(
:path => ENV["PATH"],
:command => touch(newfile),
:logoutput => true,
:refreshonly => true,
:subscribe => [file, exec1],
:title => "two"
)
exec1.stubs(:err)
catalog = mk_catalog(file, exec1, exec2)
catalog.apply
Puppet::FileSystem.exist?(newfile).should be_true
end
it "should still trigger skipped resources" do
catalog = mk_catalog
catalog.add_resource(*Puppet::Type.type(:schedule).mkdefaultschedules)
Puppet[:ignoreschedules] = false
file = Puppet::Type.type(:file).new(
:name => tmpfile("file"),
:ensure => "file",
:backup => false
)
fname = tmpfile("exec")
exec = Puppet::Type.type(:exec).new(
:name => touch(fname),
:path => Puppet.features.microsoft_windows? ? "#{ENV['windir']}/system32" : "/usr/bin:/bin",
:schedule => "monthly",
:subscribe => Puppet::Resource.new("file", file.name)
)
catalog.add_resource(file, exec)
# Run it once
catalog.apply
Puppet::FileSystem.exist?(fname).should be_true
# Now remove it, so it can get created again
Puppet::FileSystem.unlink(fname)
file[:content] = "some content"
catalog.apply
Puppet::FileSystem.exist?(fname).should be_true
# Now remove it, so it can get created again
Puppet::FileSystem.unlink(fname)
# And tag our exec
exec.tag("testrun")
# And our file, so it runs
file.tag("norun")
Puppet[:tags] = "norun"
file[:content] = "totally different content"
catalog.apply
Puppet::FileSystem.exist?(fname).should be_true
end
it "should not attempt to evaluate resources with failed dependencies" do
exec = Puppet::Type.type(:exec).new(
:command => "#{File.expand_path('/bin/mkdir')} /this/path/cannot/possibly/exist",
:title => "mkdir"
)
file1 = Puppet::Type.type(:file).new(
:title => "file1",
:path => tmpfile("file1"),
:require => exec,
:ensure => :file
)
file2 = Puppet::Type.type(:file).new(
:title => "file2",
:path => tmpfile("file2"),
:require => file1,
:ensure => :file
)
catalog = mk_catalog(exec, file1, file2)
catalog.apply
Puppet::FileSystem.exist?(file1[:path]).should be_false
Puppet::FileSystem.exist?(file2[:path]).should be_false
end
it "should not trigger subscribing resources on failure" do
file1 = tmpfile("file1")
file2 = tmpfile("file2")
create_file1 = Puppet::Type.type(:exec).new(
:command => usr_bin_touch(file1)
)
exec = Puppet::Type.type(:exec).new(
:command => "#{File.expand_path('/bin/mkdir')} /this/path/cannot/possibly/exist",
:title => "mkdir",
:notify => create_file1
)
create_file2 = Puppet::Type.type(:exec).new(
:command => usr_bin_touch(file2),
:subscribe => exec
)
catalog = mk_catalog(exec, create_file1, create_file2)
catalog.apply
Puppet::FileSystem.exist?(file1).should be_false
Puppet::FileSystem.exist?(file2).should be_false
end
# #801 -- resources only checked in noop should be rescheduled immediately.
it "should immediately reschedule noop resources" do
Puppet::Type.type(:schedule).mkdefaultschedules
resource = Puppet::Type.type(:notify).new(:name => "mymessage", :noop => true)
catalog = Puppet::Resource::Catalog.new
catalog.add_resource resource
trans = catalog.apply
trans.resource_harness.should be_scheduled(resource)
end
end
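The new "pre_run_check" spec above exercises the transaction behaviour where a single failing pre-run check aborts the whole run before any resource is applied. A minimal sketch, assuming the Puppet::Type DSL, of a hypothetical type that opts into that check; the type name, parameter, and error message below are illustrative only and not part of this diff:
````
# Hypothetical type for illustration; any resource instance can define
# pre_run_check. The transaction calls pre_run_check on every resource
# before applying anything, and a raised Puppet::Error aborts the run.
Puppet::Type.newtype(:precheck_example) do
  newparam(:name, :namevar => true)

  def pre_run_check
    raise Puppet::Error, "precondition not met; refusing to apply catalog"
  end
end
````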
diff --git a/spec/integration/type/file_spec.rb b/spec/integration/type/file_spec.rb
index ed8a9768f..d1862e241 100755
--- a/spec/integration/type/file_spec.rb
+++ b/spec/integration/type/file_spec.rb
@@ -1,1377 +1,1378 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet_spec/files'
if Puppet.features.microsoft_windows?
require 'puppet/util/windows'
class WindowsSecurity
extend Puppet::Util::Windows::Security
end
end
describe Puppet::Type.type(:file), :uses_checksums => true do
include PuppetSpec::Files
let(:catalog) { Puppet::Resource::Catalog.new }
let(:path) do
# we create a directory first so backups of :path that are stored in
# the same directory will also be removed after the tests
parent = tmpdir('file_spec')
File.join(parent, 'file_testing')
end
let(:dir) do
# we create a directory first so backups of :path that are stored in
# the same directory will also be removed after the tests
parent = tmpdir('file_spec')
File.join(parent, 'dir_testing')
end
if Puppet.features.posix?
def set_mode(mode, file)
File.chmod(mode, file)
end
def get_mode(file)
Puppet::FileSystem.lstat(file).mode
end
def get_owner(file)
Puppet::FileSystem.lstat(file).uid
end
def get_group(file)
Puppet::FileSystem.lstat(file).gid
end
else
class SecurityHelper
extend Puppet::Util::Windows::Security
end
def set_mode(mode, file)
SecurityHelper.set_mode(mode, file)
end
def get_mode(file)
SecurityHelper.get_mode(file)
end
def get_owner(file)
SecurityHelper.get_owner(file)
end
def get_group(file)
SecurityHelper.get_group(file)
end
def get_aces_for_path_by_sid(path, sid)
SecurityHelper.get_aces_for_path_by_sid(path, sid)
end
end
before do
# stub this to not try to create state.yaml
Puppet::Util::Storage.stubs(:store)
end
it "should not attempt to manage files that do not exist if no means of creating the file is specified" do
source = tmpfile('source')
- catalog.add_resource described_class.new :path => source, :mode => 0755
+ catalog.add_resource described_class.new :path => source, :mode => '0755'
status = catalog.apply.report.resource_statuses["File[#{source}]"]
status.should_not be_failed
status.should_not be_changed
Puppet::FileSystem.exist?(source).should be_false
end
describe "when ensure is absent" do
it "should remove the file if present" do
FileUtils.touch(path)
catalog.add_resource(described_class.new(:path => path, :ensure => :absent, :backup => :false))
report = catalog.apply.report
report.resource_statuses["File[#{path}]"].should_not be_failed
Puppet::FileSystem.exist?(path).should be_false
end
it "should do nothing if file is not present" do
catalog.add_resource(described_class.new(:path => path, :ensure => :absent, :backup => :false))
report = catalog.apply.report
report.resource_statuses["File[#{path}]"].should_not be_failed
Puppet::FileSystem.exist?(path).should be_false
end
# issue #14599
it "should not fail if parts of path aren't directories" do
FileUtils.touch(path)
catalog.add_resource(described_class.new(:path => File.join(path,'no_such_file'), :ensure => :absent, :backup => :false))
report = catalog.apply.report
report.resource_statuses["File[#{File.join(path,'no_such_file')}]"].should_not be_failed
end
end
describe "when setting permissions" do
it "should set the owner" do
target = tmpfile_with_contents('target', '')
owner = get_owner(target)
catalog.add_resource described_class.new(
:name => target,
:owner => owner
)
catalog.apply
get_owner(target).should == owner
end
it "should set the group" do
target = tmpfile_with_contents('target', '')
group = get_group(target)
catalog.add_resource described_class.new(
:name => target,
:group => group
)
catalog.apply
get_group(target).should == group
end
describe "when setting mode" do
describe "for directories" do
let(:target) { tmpdir('dir_mode') }
it "should set executable bits for newly created directories" do
catalog.add_resource described_class.new(:path => target, :ensure => :directory, :mode => 0600)
catalog.apply
(get_mode(target) & 07777).should == 0700
end
it "should set executable bits for existing readable directories" do
set_mode(0600, target)
- catalog.add_resource described_class.new(:path => target, :ensure => :directory, :mode => 0644)
+ catalog.add_resource described_class.new(:path => target, :ensure => :directory, :mode => '0644')
catalog.apply
(get_mode(target) & 07777).should == 0755
end
it "should not set executable bits for unreadable directories" do
begin
catalog.add_resource described_class.new(:path => target, :ensure => :directory, :mode => 0300)
catalog.apply
(get_mode(target) & 07777).should == 0300
ensure
# so we can cleanup
set_mode(0700, target)
end
end
it "should set user, group, and other executable bits" do
catalog.add_resource described_class.new(:path => target, :ensure => :directory, :mode => 0664)
catalog.apply
(get_mode(target) & 07777).should == 0775
end
it "should set executable bits when overwriting a non-executable file" do
target_path = tmpfile_with_contents('executable', '')
set_mode(0444, target_path)
catalog.add_resource described_class.new(:path => target_path, :ensure => :directory, :mode => 0666, :backup => false)
catalog.apply
(get_mode(target_path) & 07777).should == 0777
File.should be_directory(target_path)
end
end
describe "for files" do
it "should not set executable bits" do
catalog.add_resource described_class.new(:path => path, :ensure => :file, :mode => 0666)
catalog.apply
(get_mode(path) & 07777).should == 0666
end
it "should not set executable bits when replacing an executable directory (#10365)" do
pending("bug #10365")
FileUtils.mkdir(path)
set_mode(0777, path)
catalog.add_resource described_class.new(:path => path, :ensure => :file, :mode => 0666, :backup => false, :force => true)
catalog.apply
(get_mode(path) & 07777).should == 0666
end
end
describe "for links", :if => described_class.defaultprovider.feature?(:manages_symlinks) do
let(:link) { tmpfile('link_mode') }
describe "when managing links" do
let(:link_target) { tmpfile('target') }
before :each do
FileUtils.touch(link_target)
File.chmod(0444, link_target)
Puppet::FileSystem.symlink(link_target, link)
end
it "should not set the executable bit on the link nor the target" do
catalog.add_resource described_class.new(:path => link, :ensure => :link, :mode => 0666, :target => link_target, :links => :manage)
catalog.apply
(Puppet::FileSystem.stat(link).mode & 07777).should == 0666
(Puppet::FileSystem.lstat(link_target).mode & 07777).should == 0444
end
it "should ignore dangling symlinks (#6856)" do
File.delete(link_target)
catalog.add_resource described_class.new(:path => link, :ensure => :link, :mode => 0666, :target => link_target, :links => :manage)
catalog.apply
Puppet::FileSystem.exist?(link).should be_false
end
it "should create a link to the target if ensure is omitted" do
FileUtils.touch(link_target)
catalog.add_resource described_class.new(:path => link, :target => link_target)
catalog.apply
Puppet::FileSystem.exist?(link).should be_true
Puppet::FileSystem.lstat(link).ftype.should == 'link'
Puppet::FileSystem.readlink(link).should == link_target
end
end
describe "when following links" do
it "should ignore dangling symlinks (#6856)" do
target = tmpfile('dangling')
FileUtils.touch(target)
Puppet::FileSystem.symlink(target, link)
File.delete(target)
catalog.add_resource described_class.new(:path => path, :source => link, :mode => 0600, :links => :follow)
catalog.apply
end
describe "to a directory" do
let(:link_target) { tmpdir('dir_target') }
before :each do
File.chmod(0600, link_target)
Puppet::FileSystem.symlink(link_target, link)
end
after :each do
File.chmod(0750, link_target)
end
describe "that is readable" do
it "should set the executable bits when creating the destination (#10315)" do
catalog.add_resource described_class.new(:path => path, :source => link, :mode => 0666, :links => :follow)
catalog.apply
File.should be_directory(path)
(get_mode(path) & 07777).should == 0777
end
it "should set the executable bits when overwriting the destination (#10315)" do
FileUtils.touch(path)
catalog.add_resource described_class.new(:path => path, :source => link, :mode => 0666, :links => :follow, :backup => false)
catalog.apply
File.should be_directory(path)
(get_mode(path) & 07777).should == 0777
end
end
describe "that is not readable" do
before :each do
set_mode(0300, link_target)
end
# so we can cleanup
after :each do
set_mode(0700, link_target)
end
it "should set executable bits when creating the destination (#10315)" do
catalog.add_resource described_class.new(:path => path, :source => link, :mode => 0666, :links => :follow)
catalog.apply
File.should be_directory(path)
(get_mode(path) & 07777).should == 0777
end
it "should set executable bits when overwriting the destination" do
FileUtils.touch(path)
catalog.add_resource described_class.new(:path => path, :source => link, :mode => 0666, :links => :follow, :backup => false)
catalog.apply
File.should be_directory(path)
(get_mode(path) & 07777).should == 0777
end
end
end
describe "to a file" do
let(:link_target) { tmpfile('file_target') }
before :each do
FileUtils.touch(link_target)
Puppet::FileSystem.symlink(link_target, link)
end
it "should create the file, not a symlink (#2817, #10315)" do
catalog.add_resource described_class.new(:path => path, :source => link, :mode => 0600, :links => :follow)
catalog.apply
File.should be_file(path)
(get_mode(path) & 07777).should == 0600
end
it "should overwrite the file" do
FileUtils.touch(path)
catalog.add_resource described_class.new(:path => path, :source => link, :mode => 0600, :links => :follow)
catalog.apply
File.should be_file(path)
(get_mode(path) & 07777).should == 0600
end
end
describe "to a link to a directory" do
let(:real_target) { tmpdir('real_target') }
let(:target) { tmpfile('target') }
before :each do
File.chmod(0666, real_target)
# link -> target -> real_target
Puppet::FileSystem.symlink(real_target, target)
Puppet::FileSystem.symlink(target, link)
end
after :each do
File.chmod(0750, real_target)
end
describe "when following all links" do
it "should create the destination and apply executable bits (#10315)" do
catalog.add_resource described_class.new(:path => path, :source => link, :mode => 0600, :links => :follow)
catalog.apply
File.should be_directory(path)
(get_mode(path) & 07777).should == 0700
end
it "should overwrite the destination and apply executable bits" do
FileUtils.mkdir(path)
catalog.add_resource described_class.new(:path => path, :source => link, :mode => 0600, :links => :follow)
catalog.apply
File.should be_directory(path)
(get_mode(path) & 0111).should == 0100
end
end
end
end
end
end
end
describe "when writing files" do
with_digest_algorithms do
it "should backup files to a filebucket when one is configured" do
filebucket = Puppet::Type.type(:filebucket).new :path => tmpfile("filebucket"), :name => "mybucket"
file = described_class.new :path => path, :backup => "mybucket", :content => "foo"
catalog.add_resource file
catalog.add_resource filebucket
File.open(file[:path], "w") { |f| f.write("bar") }
d = digest(IO.binread(file[:path]))
catalog.apply
filebucket.bucket.getfile(d).should == "bar"
end
it "should backup files in the local directory when a backup string is provided" do
file = described_class.new :path => path, :backup => ".bak", :content => "foo"
catalog.add_resource file
File.open(file[:path], "w") { |f| f.puts "bar" }
catalog.apply
backup = file[:path] + ".bak"
Puppet::FileSystem.exist?(backup).should be_true
File.read(backup).should == "bar\n"
end
it "should fail if no backup can be performed" do
dir = tmpdir("backups")
file = described_class.new :path => File.join(dir, "testfile"), :backup => ".bak", :content => "foo"
catalog.add_resource file
File.open(file[:path], 'w') { |f| f.puts "bar" }
# Create a directory where the backup should be so that writing to it fails
Dir.mkdir(File.join(dir, "testfile.bak"))
Puppet::Util::Log.stubs(:newmessage)
catalog.apply
File.read(file[:path]).should == "bar\n"
end
it "should not backup symlinks", :if => described_class.defaultprovider.feature?(:manages_symlinks) do
link = tmpfile("link")
dest1 = tmpfile("dest1")
dest2 = tmpfile("dest2")
bucket = Puppet::Type.type(:filebucket).new :path => tmpfile("filebucket"), :name => "mybucket"
file = described_class.new :path => link, :target => dest2, :ensure => :link, :backup => "mybucket"
catalog.add_resource file
catalog.add_resource bucket
File.open(dest1, "w") { |f| f.puts "whatever" }
Puppet::FileSystem.symlink(dest1, link)
d = digest(File.read(file[:path]))
catalog.apply
Puppet::FileSystem.readlink(link).should == dest2
Puppet::FileSystem.exist?(bucket[:path]).should be_false
end
it "should backup directories to the local filesystem by copying the whole directory" do
file = described_class.new :path => path, :backup => ".bak", :content => "foo", :force => true
catalog.add_resource file
Dir.mkdir(path)
otherfile = File.join(path, "foo")
File.open(otherfile, "w") { |f| f.print "yay" }
catalog.apply
backup = "#{path}.bak"
FileTest.should be_directory(backup)
File.read(File.join(backup, "foo")).should == "yay"
end
it "should backup directories to filebuckets by backing up each file separately" do
bucket = Puppet::Type.type(:filebucket).new :path => tmpfile("filebucket"), :name => "mybucket"
file = described_class.new :path => tmpfile("bucket_backs"), :backup => "mybucket", :content => "foo", :force => true
catalog.add_resource file
catalog.add_resource bucket
Dir.mkdir(file[:path])
foofile = File.join(file[:path], "foo")
barfile = File.join(file[:path], "bar")
File.open(foofile, "w") { |f| f.print "fooyay" }
File.open(barfile, "w") { |f| f.print "baryay" }
food = digest(File.read(foofile))
bard = digest(File.read(barfile))
catalog.apply
bucket.bucket.getfile(food).should == "fooyay"
bucket.bucket.getfile(bard).should == "baryay"
end
end
end
describe "when recursing" do
def build_path(dir)
Dir.mkdir(dir)
File.chmod(0750, dir)
@dirs = [dir]
@files = []
%w{one two}.each do |subdir|
fdir = File.join(dir, subdir)
Dir.mkdir(fdir)
File.chmod(0750, fdir)
@dirs << fdir
%w{three}.each do |file|
ffile = File.join(fdir, file)
@files << ffile
File.open(ffile, "w") { |f| f.puts "test #{file}" }
File.chmod(0640, ffile)
end
end
end
it "should be able to recurse over a nonexistent file" do
@file = described_class.new(
:name => path,
:mode => 0644,
:recurse => true,
:backup => false
)
catalog.add_resource @file
lambda { @file.eval_generate }.should_not raise_error
end
it "should be able to recursively set properties on existing files" do
path = tmpfile("file_integration_tests")
build_path(path)
file = described_class.new(
:name => path,
:mode => 0644,
:recurse => true,
:backup => false
)
catalog.add_resource file
catalog.apply
@dirs.should_not be_empty
@dirs.each do |path|
(get_mode(path) & 007777).should == 0755
end
@files.should_not be_empty
@files.each do |path|
(get_mode(path) & 007777).should == 0644
end
end
it "should be able to recursively make links to other files", :if => described_class.defaultprovider.feature?(:manages_symlinks) do
source = tmpfile("file_link_integration_source")
build_path(source)
dest = tmpfile("file_link_integration_dest")
@file = described_class.new(:name => dest, :target => source, :recurse => true, :ensure => :link, :backup => false)
catalog.add_resource @file
catalog.apply
@dirs.each do |path|
link_path = path.sub(source, dest)
Puppet::FileSystem.lstat(link_path).should be_directory
end
@files.each do |path|
link_path = path.sub(source, dest)
Puppet::FileSystem.lstat(link_path).ftype.should == "link"
end
end
it "should be able to recursively copy files" do
source = tmpfile("file_source_integration_source")
build_path(source)
dest = tmpfile("file_source_integration_dest")
@file = described_class.new(:name => dest, :source => source, :recurse => true, :backup => false)
catalog.add_resource @file
catalog.apply
@dirs.each do |path|
newpath = path.sub(source, dest)
Puppet::FileSystem.lstat(newpath).should be_directory
end
@files.each do |path|
newpath = path.sub(source, dest)
Puppet::FileSystem.lstat(newpath).ftype.should == "file"
end
end
it "should not recursively manage files managed by a more specific explicit file" do
dir = tmpfile("recursion_vs_explicit_1")
subdir = File.join(dir, "subdir")
file = File.join(subdir, "file")
FileUtils.mkdir_p(subdir)
File.open(file, "w") { |f| f.puts "" }
base = described_class.new(:name => dir, :recurse => true, :backup => false, :mode => "755")
sub = described_class.new(:name => subdir, :recurse => true, :backup => false, :mode => "644")
catalog.add_resource base
catalog.add_resource sub
catalog.apply
(get_mode(file) & 007777).should == 0644
end
it "should recursively manage files even if there is an explicit file whose name is a prefix of the managed file" do
managed = File.join(path, "file")
generated = File.join(path, "file_with_a_name_starting_with_the_word_file")
managed_mode = 0700
FileUtils.mkdir_p(path)
FileUtils.touch(managed)
FileUtils.touch(generated)
catalog.add_resource described_class.new(:name => path, :recurse => true, :backup => false, :mode => managed_mode)
catalog.add_resource described_class.new(:name => managed, :recurse => true, :backup => false, :mode => "644")
catalog.apply
(get_mode(generated) & 007777).should == managed_mode
end
describe "when recursing remote directories" do
describe "when sourceselect first" do
describe "for a directory" do
it "should recursively copy the first directory that exists" do
one = File.expand_path('thisdoesnotexist')
two = tmpdir('two')
FileUtils.mkdir_p(File.join(two, 'three'))
FileUtils.touch(File.join(two, 'three', 'four'))
catalog.add_resource Puppet::Type.newfile(
:path => path,
:ensure => :directory,
:backup => false,
:recurse => true,
:sourceselect => :first,
:source => [one, two]
)
catalog.apply
File.should be_directory(path)
Puppet::FileSystem.exist?(File.join(path, 'one')).should be_false
Puppet::FileSystem.exist?(File.join(path, 'three', 'four')).should be_true
end
it "should recursively copy an empty directory" do
one = File.expand_path('thisdoesnotexist')
two = tmpdir('two')
three = tmpdir('three')
file_in_dir_with_contents(three, 'a', '')
catalog.add_resource Puppet::Type.newfile(
:path => path,
:ensure => :directory,
:backup => false,
:recurse => true,
:sourceselect => :first,
:source => [one, two, three]
)
catalog.apply
File.should be_directory(path)
Puppet::FileSystem.exist?(File.join(path, 'a')).should be_false
end
it "should only recurse one level" do
one = tmpdir('one')
FileUtils.mkdir_p(File.join(one, 'a', 'b'))
FileUtils.touch(File.join(one, 'a', 'b', 'c'))
two = tmpdir('two')
FileUtils.mkdir_p(File.join(two, 'z'))
FileUtils.touch(File.join(two, 'z', 'y'))
catalog.add_resource Puppet::Type.newfile(
:path => path,
:ensure => :directory,
:backup => false,
:recurse => true,
:recurselimit => 1,
:sourceselect => :first,
:source => [one, two]
)
catalog.apply
Puppet::FileSystem.exist?(File.join(path, 'a')).should be_true
Puppet::FileSystem.exist?(File.join(path, 'a', 'b')).should be_false
Puppet::FileSystem.exist?(File.join(path, 'z')).should be_false
end
end
describe "for a file" do
it "should copy the first file that exists" do
one = File.expand_path('thisdoesnotexist')
two = tmpfile_with_contents('two', 'yay')
three = tmpfile_with_contents('three', 'no')
catalog.add_resource Puppet::Type.newfile(
:path => path,
:ensure => :file,
:backup => false,
:sourceselect => :first,
:source => [one, two, three]
)
catalog.apply
File.read(path).should == 'yay'
end
it "should copy an empty file" do
one = File.expand_path('thisdoesnotexist')
two = tmpfile_with_contents('two', '')
three = tmpfile_with_contents('three', 'no')
catalog.add_resource Puppet::Type.newfile(
:path => path,
:ensure => :file,
:backup => false,
:sourceselect => :first,
:source => [one, two, three]
)
catalog.apply
File.read(path).should == ''
end
end
end
describe "when sourceselect all" do
describe "for a directory" do
it "should recursively copy all sources from the first valid source" do
dest = tmpdir('dest')
one = tmpdir('one')
two = tmpdir('two')
three = tmpdir('three')
four = tmpdir('four')
file_in_dir_with_contents(one, 'a', one)
file_in_dir_with_contents(two, 'a', two)
file_in_dir_with_contents(two, 'b', two)
file_in_dir_with_contents(three, 'a', three)
file_in_dir_with_contents(three, 'c', three)
obj = Puppet::Type.newfile(
:path => dest,
:ensure => :directory,
:backup => false,
:recurse => true,
:sourceselect => :all,
:source => [one, two, three, four]
)
catalog.add_resource obj
catalog.apply
File.read(File.join(dest, 'a')).should == one
File.read(File.join(dest, 'b')).should == two
File.read(File.join(dest, 'c')).should == three
end
it "should only recurse one level from each valid source" do
one = tmpdir('one')
FileUtils.mkdir_p(File.join(one, 'a', 'b'))
FileUtils.touch(File.join(one, 'a', 'b', 'c'))
two = tmpdir('two')
FileUtils.mkdir_p(File.join(two, 'z'))
FileUtils.touch(File.join(two, 'z', 'y'))
obj = Puppet::Type.newfile(
:path => path,
:ensure => :directory,
:backup => false,
:recurse => true,
:recurselimit => 1,
:sourceselect => :all,
:source => [one, two]
)
catalog.add_resource obj
catalog.apply
Puppet::FileSystem.exist?(File.join(path, 'a')).should be_true
Puppet::FileSystem.exist?(File.join(path, 'a', 'b')).should be_false
Puppet::FileSystem.exist?(File.join(path, 'z')).should be_true
Puppet::FileSystem.exist?(File.join(path, 'z', 'y')).should be_false
end
end
end
end
end
describe "when generating resources" do
before do
source = tmpdir("generating_in_catalog_source")
s1 = file_in_dir_with_contents(source, "one", "uno")
s2 = file_in_dir_with_contents(source, "two", "dos")
@file = described_class.new(
:name => path,
:source => source,
:recurse => true,
:backup => false
)
catalog.add_resource @file
end
it "should add each generated resource to the catalog" do
catalog.apply do |trans|
catalog.resource(:file, File.join(path, "one")).must be_a(described_class)
catalog.resource(:file, File.join(path, "two")).must be_a(described_class)
end
end
it "should have an edge to each resource in the relationship graph" do
catalog.apply do |trans|
one = catalog.resource(:file, File.join(path, "one"))
catalog.relationship_graph.should be_edge(@file, one)
two = catalog.resource(:file, File.join(path, "two"))
catalog.relationship_graph.should be_edge(@file, two)
end
end
end
describe "when copying files" do
it "should be able to copy files with pound signs in their names (#285)" do
source = tmpfile_with_contents("filewith#signs", "foo")
dest = tmpfile("destwith#signs")
catalog.add_resource described_class.new(:name => dest, :source => source)
catalog.apply
File.read(dest).should == "foo"
end
it "should be able to copy files with spaces in their names" do
dest = tmpfile("destwith spaces")
source = tmpfile_with_contents("filewith spaces", "foo")
expected_mode = 0755
Puppet::FileSystem.chmod(expected_mode, source)
catalog.add_resource described_class.new(:path => dest, :source => source)
catalog.apply
File.read(dest).should == "foo"
(Puppet::FileSystem.stat(dest).mode & 007777).should == expected_mode
end
it "should be able to copy individual files even if recurse has been specified" do
source = tmpfile_with_contents("source", "foo")
dest = tmpfile("dest")
catalog.add_resource described_class.new(:name => dest, :source => source, :recurse => true)
catalog.apply
File.read(dest).should == "foo"
end
end
it "should create a file with content if ensure is omitted" do
catalog.add_resource described_class.new(
:path => path,
:content => "this is some content, yo"
)
catalog.apply
File.read(path).should == "this is some content, yo"
end
it "should create files with content if both content and ensure are set" do
file = described_class.new(
:path => path,
:ensure => "file",
:content => "this is some content, yo"
)
catalog.add_resource file
catalog.apply
File.read(path).should == "this is some content, yo"
end
it "should delete files with sources but that are set for deletion" do
source = tmpfile_with_contents("source_source_with_ensure", "yay")
dest = tmpfile_with_contents("source_source_with_ensure", "boo")
file = described_class.new(
:path => dest,
:ensure => :absent,
:source => source,
:backup => false
)
catalog.add_resource file
catalog.apply
Puppet::FileSystem.exist?(dest).should be_false
end
describe "when sourcing" do
let(:source) { tmpfile_with_contents("source_default_values", "yay") }
it "should apply the source metadata values" do
set_mode(0770, source)
file = described_class.new(
:path => path,
:ensure => :file,
:source => source,
:backup => false
)
catalog.add_resource file
catalog.apply
get_owner(path).should == get_owner(source)
get_group(path).should == get_group(source)
(get_mode(path) & 07777).should == 0770
end
it "should override the default metadata values" do
set_mode(0770, source)
file = described_class.new(
:path => path,
:ensure => :file,
:source => source,
:backup => false,
:mode => 0440
)
catalog.add_resource file
catalog.apply
(get_mode(path) & 07777).should == 0440
end
describe "on Windows systems", :if => Puppet.features.microsoft_windows? do
def expects_sid_granted_full_access_explicitly(path, sid)
- inherited_ace = Windows::Security::INHERITED_ACE
+ inherited_ace = Puppet::Util::Windows::AccessControlEntry::INHERITED_ACE
aces = get_aces_for_path_by_sid(path, sid)
aces.should_not be_empty
aces.each do |ace|
- ace.mask.should == Windows::File::FILE_ALL_ACCESS
+ ace.mask.should == Puppet::Util::Windows::File::FILE_ALL_ACCESS
(ace.flags & inherited_ace).should_not == inherited_ace
end
end
def expects_system_granted_full_access_explicitly(path)
expects_sid_granted_full_access_explicitly(path, @sids[:system])
end
def expects_at_least_one_inherited_ace_grants_full_access(path, sid)
- inherited_ace = Windows::Security::INHERITED_ACE
+ inherited_ace = Puppet::Util::Windows::AccessControlEntry::INHERITED_ACE
aces = get_aces_for_path_by_sid(path, sid)
aces.should_not be_empty
aces.any? do |ace|
- ace.mask == Windows::File::FILE_ALL_ACCESS &&
+ ace.mask == Puppet::Util::Windows::File::FILE_ALL_ACCESS &&
(ace.flags & inherited_ace) == inherited_ace
end.should be_true
end
def expects_at_least_one_inherited_system_ace_grants_full_access(path)
expects_at_least_one_inherited_ace_grants_full_access(path, @sids[:system])
end
it "should provide valid default values when ACLs are not supported" do
Puppet::Util::Windows::Security.stubs(:supports_acl?).returns(false)
Puppet::Util::Windows::Security.stubs(:supports_acl?).with(source).returns false
file = described_class.new(
:path => path,
:ensure => :file,
:source => source,
:backup => false
)
catalog.add_resource file
catalog.apply
get_owner(path).should =~ /^S\-1\-5\-.*$/
get_group(path).should =~ /^S\-1\-0\-0.*$/
get_mode(path).should == 0644
end
describe "when processing SYSTEM ACEs" do
before do
@sids = {
- :current_user => Puppet::Util::Windows::Security.name_to_sid(Sys::Admin.get_login),
+ :current_user => Puppet::Util::Windows::SID.name_to_sid(Puppet::Util::Windows::ADSI::User.current_user_name),
:system => Win32::Security::SID::LocalSystem,
- :admin => Puppet::Util::Windows::Security.name_to_sid("Administrator"),
- :guest => Puppet::Util::Windows::Security.name_to_sid("Guest"),
+ :admin => Puppet::Util::Windows::SID.name_to_sid("Administrator"),
+ :guest => Puppet::Util::Windows::SID.name_to_sid("Guest"),
:users => Win32::Security::SID::BuiltinUsers,
:power_users => Win32::Security::SID::PowerUsers,
:none => Win32::Security::SID::Nobody
}
end
describe "on files" do
before :each do
@file = described_class.new(
:path => path,
:ensure => :file,
:source => source,
:backup => false
)
catalog.add_resource @file
end
describe "when source permissions are ignored" do
before :each do
@file[:source_permissions] = :ignore
end
it "preserves the inherited SYSTEM ACE" do
catalog.apply
expects_at_least_one_inherited_system_ace_grants_full_access(path)
end
end
describe "when permissions are insync?" do
it "preserves the explicit SYSTEM ACE" do
FileUtils.touch(path)
sd = Puppet::Util::Windows::Security.get_security_descriptor(path)
sd.protect = true
sd.owner = @sids[:none]
sd.group = @sids[:none]
Puppet::Util::Windows::Security.set_security_descriptor(source, sd)
Puppet::Util::Windows::Security.set_security_descriptor(path, sd)
catalog.apply
expects_system_granted_full_access_explicitly(path)
end
end
describe "when permissions are not insync?" do
before :each do
@file[:owner] = 'None'
@file[:group] = 'None'
end
it "replaces inherited SYSTEM ACEs with an uninherited one for an existing file" do
FileUtils.touch(path)
expects_at_least_one_inherited_system_ace_grants_full_access(path)
catalog.apply
expects_system_granted_full_access_explicitly(path)
end
it "replaces inherited SYSTEM ACEs for a new file with an uninherited one" do
catalog.apply
expects_system_granted_full_access_explicitly(path)
end
end
describe "created with SYSTEM as the group" do
before :each do
@file[:owner] = @sids[:users]
@file[:group] = @sids[:system]
@file[:mode] = 0644
catalog.apply
end
it "should allow the user to explicitly set the mode to 4" do
system_aces = get_aces_for_path_by_sid(path, @sids[:system])
system_aces.should_not be_empty
system_aces.each do |ace|
- ace.mask.should == Windows::File::FILE_GENERIC_READ
+ ace.mask.should == Puppet::Util::Windows::File::FILE_GENERIC_READ
end
end
it "prepends SYSTEM ace when changing group from system to power users" do
@file[:group] = @sids[:power_users]
catalog.apply
system_aces = get_aces_for_path_by_sid(path, @sids[:system])
system_aces.size.should == 1
end
end
describe "with :links set to :follow" do
it "should not fail to apply" do
# at minimum, we need an owner and/or group
@file[:owner] = @sids[:users]
@file[:links] = :follow
catalog.apply do |transaction|
if transaction.any_failed?
pretty_transaction_error(transaction)
end
end
end
end
end
describe "on directories" do
before :each do
@directory = described_class.new(
:path => dir,
:ensure => :directory
)
catalog.add_resource @directory
end
def grant_everyone_full_access(path)
sd = Puppet::Util::Windows::Security.get_security_descriptor(path)
sd.dacl.allow(
'S-1-1-0', #everyone
- Windows::File::FILE_ALL_ACCESS,
- Windows::File::OBJECT_INHERIT_ACE | Windows::File::CONTAINER_INHERIT_ACE)
+ Puppet::Util::Windows::File::FILE_ALL_ACCESS,
+ Puppet::Util::Windows::AccessControlEntry::OBJECT_INHERIT_ACE |
+ Puppet::Util::Windows::AccessControlEntry::CONTAINER_INHERIT_ACE)
Puppet::Util::Windows::Security.set_security_descriptor(path, sd)
end
after :each do
grant_everyone_full_access(dir)
end
describe "when source permissions are ignored" do
before :each do
@directory[:source_permissions] = :ignore
end
it "preserves the inherited SYSTEM ACE" do
catalog.apply
expects_at_least_one_inherited_system_ace_grants_full_access(dir)
end
end
describe "when permissions are insync?" do
it "preserves the explicit SYSTEM ACE" do
Dir.mkdir(dir)
source_dir = tmpdir('source_dir')
@directory[:source] = source_dir
sd = Puppet::Util::Windows::Security.get_security_descriptor(source_dir)
sd.protect = true
sd.owner = @sids[:none]
sd.group = @sids[:none]
Puppet::Util::Windows::Security.set_security_descriptor(source_dir, sd)
Puppet::Util::Windows::Security.set_security_descriptor(dir, sd)
catalog.apply
expects_system_granted_full_access_explicitly(dir)
end
end
describe "when permissions are not insync?" do
before :each do
@directory[:owner] = 'None'
@directory[:group] = 'None'
@directory[:mode] = 0444
end
it "replaces inherited SYSTEM ACEs with an uninherited one for an existing directory" do
FileUtils.mkdir(dir)
expects_at_least_one_inherited_system_ace_grants_full_access(dir)
catalog.apply
expects_system_granted_full_access_explicitly(dir)
end
it "replaces inherited SYSTEM ACEs with an uninherited one for an existing directory" do
catalog.apply
expects_system_granted_full_access_explicitly(dir)
end
describe "created with SYSTEM as the group" do
before :each do
@directory[:owner] = @sids[:users]
@directory[:group] = @sids[:system]
@directory[:mode] = 0644
catalog.apply
end
it "should allow the user to explicitly set the mode to 4" do
system_aces = get_aces_for_path_by_sid(dir, @sids[:system])
system_aces.should_not be_empty
system_aces.each do |ace|
# unlike files, Puppet sets execute bit on directories that are readable
- ace.mask.should == Windows::File::FILE_GENERIC_READ | Windows::File::FILE_GENERIC_EXECUTE
+ ace.mask.should == Puppet::Util::Windows::File::FILE_GENERIC_READ | Puppet::Util::Windows::File::FILE_GENERIC_EXECUTE
end
end
it "prepends SYSTEM ace when changing group from system to power users" do
@directory[:group] = @sids[:power_users]
catalog.apply
system_aces = get_aces_for_path_by_sid(dir, @sids[:system])
system_aces.size.should == 1
end
end
describe "with :links set to :follow" do
it "should not fail to apply" do
# at minimum, we need an owner and/or group
@directory[:owner] = @sids[:users]
@directory[:links] = :follow
catalog.apply do |transaction|
if transaction.any_failed?
pretty_transaction_error(transaction)
end
end
end
end
end
end
end
end
end
describe "when purging files" do
before do
sourcedir = tmpdir("purge_source")
destdir = tmpdir("purge_dest")
sourcefile = File.join(sourcedir, "sourcefile")
@copiedfile = File.join(destdir, "sourcefile")
@localfile = File.join(destdir, "localfile")
@purgee = File.join(destdir, "to_be_purged")
File.open(@localfile, "w") { |f| f.print "oldtest" }
File.open(sourcefile, "w") { |f| f.print "funtest" }
# this file should get removed
File.open(@purgee, "w") { |f| f.print "footest" }
lfobj = Puppet::Type.newfile(
:title => "localfile",
:path => @localfile,
:content => "rahtest",
:ensure => :file,
:backup => false
)
destobj = Puppet::Type.newfile(
:title => "destdir",
:path => destdir,
:source => sourcedir,
:backup => false,
:purge => true,
:recurse => true
)
catalog.add_resource lfobj, destobj
catalog.apply
end
it "should still copy remote files" do
File.read(@copiedfile).should == 'funtest'
end
it "should not purge managed, local files" do
File.read(@localfile).should == 'rahtest'
end
it "should purge files that are neither remote nor otherwise managed" do
Puppet::FileSystem.exist?(@purgee).should be_false
end
end
describe "when using validate_cmd" do
it "should fail the file resource if command fails" do
catalog.add_resource(described_class.new(:path => path, :content => "foo", :validate_cmd => "/usr/bin/env false"))
Puppet::Util::Execution.expects(:execute).with("/usr/bin/env false", {:combine => true, :failonfail => true}).raises(Puppet::ExecutionFailure, "Failed")
report = catalog.apply.report
report.resource_statuses["File[#{path}]"].should be_failed
Puppet::FileSystem.exist?(path).should be_false
end
it "should succeed the file resource if command succeeds" do
catalog.add_resource(described_class.new(:path => path, :content => "foo", :validate_cmd => "/usr/bin/env true"))
Puppet::Util::Execution.expects(:execute).with("/usr/bin/env true", {:combine => true, :failonfail => true}).returns ''
report = catalog.apply.report
report.resource_statuses["File[#{path}]"].should_not be_failed
Puppet::FileSystem.exist?(path).should be_true
end
end
def tmpfile_with_contents(name, contents)
file = tmpfile(name)
File.open(file, "w") { |f| f.write contents }
file
end
def file_in_dir_with_contents(dir, name, contents)
full_name = File.join(dir, name)
File.open(full_name, "w") { |f| f.write contents }
full_name
end
def pretty_transaction_error(transaction)
report = transaction.report
status_failures = report.resource_statuses.values.select { |r| r.failed? }
status_fail_msg = status_failures.
collect(&:events).
flatten.
select { |event| event.status == 'failure' }.
collect { |event| "#{event.resource}: #{event.message}" }.join("; ")
raise "Got #{status_failures.length} failure(s) while applying: #{status_fail_msg}"
end
end
diff --git a/spec/integration/type/nagios_spec.rb b/spec/integration/type/nagios_spec.rb
index 818b61649..9610daefa 100644
--- a/spec/integration/type/nagios_spec.rb
+++ b/spec/integration/type/nagios_spec.rb
@@ -1,80 +1,71 @@
#!/usr/bin/env ruby
require 'spec_helper'
require 'puppet/file_bucket/dipper'
describe "Nagios file creation" do
include PuppetSpec::Files
+ let(:initial_mode) { 0600 }
+
before :each do
FileUtils.touch(target_file)
- File.chmod(0600, target_file)
+ Puppet::FileSystem.chmod(initial_mode, target_file)
Puppet::FileBucket::Dipper.any_instance.stubs(:backup) # Don't backup to filebucket
end
let :target_file do
tmpfile('nagios_integration_specs')
end
# Copied from the crontab integration spec.
#
# @todo This should probably live in the PuppetSpec module instead.
def run_in_catalog(*resources)
catalog = Puppet::Resource::Catalog.new
catalog.host_config = false
resources.each do |resource|
resource.expects(:err).never
catalog.add_resource(resource)
end
# the resources are not properly contained and generated resources
# will end up with dangling edges without this stubbing:
catalog.stubs(:container_of).returns resources[0]
catalog.apply
end
- # These three helpers are from file_spec.rb
- #
- # @todo Define those centrally as well?
- def get_mode(file)
- Puppet::FileSystem.stat(file).mode
- end
-
context "when creating a nagios config file" do
context "which is not managed" do
it "should choose the file mode if requested" do
resource = Puppet::Type.type(:nagios_host).new(
:name => 'spechost',
:use => 'spectemplate',
:ensure => 'present',
:target => target_file,
:mode => '0640'
)
run_in_catalog(resource)
- # sticky bit only applies to directories in Windows
- mode = Puppet.features.microsoft_windows? ? "640" : "100640"
- ( "%o" % get_mode(target_file) ).should == mode
+ expect_file_mode(target_file, "640")
end
end
context "which is managed" do
- it "should not the mode" do
+ it "should not override the mode" do
file_res = Puppet::Type.type(:file).new(
:name => target_file,
:ensure => :present
)
nag_res = Puppet::Type.type(:nagios_host).new(
:name => 'spechost',
:use => 'spectemplate',
:ensure => :present,
:target => target_file,
:mode => '0640'
)
run_in_catalog(file_res, nag_res)
- ( "%o" % get_mode(target_file) ).should_not == "100640"
+ expect_file_mode(target_file, initial_mode.to_s(8))
end
end
-
end
-
end
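Both the reworked nagios spec above and the new sshkey spec below call an expect_file_mode helper from the shared spec support code. Its implementation is not part of this diff; the following is a plausible sketch (an assumption, not the verbatim helper), consistent with the inline Windows/POSIX mode handling it replaces in the nagios spec:
````
# Assumed shape of the shared helper the specs call (not shown in this diff).
# On POSIX a regular file's stat mode carries the 100xxx file-type prefix,
# which the removed nagios assertion previously handled inline.
def expect_file_mode(file, mode)
  actual_mode = "%o" % Puppet::FileSystem.stat(file).mode
  target_mode = Puppet.features.microsoft_windows? ? mode : "100#{mode}"
  actual_mode.should == target_mode
end
````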
diff --git a/spec/integration/type/sshkey_spec.rb b/spec/integration/type/sshkey_spec.rb
new file mode 100644
index 000000000..d1b1e01c7
--- /dev/null
+++ b/spec/integration/type/sshkey_spec.rb
@@ -0,0 +1,22 @@
+#! /usr/bin/env ruby
+require 'spec_helper'
+require 'puppet_spec/files'
+require 'puppet_spec/compiler'
+
+describe Puppet::Type.type(:sshkey), '(integration)', :unless => Puppet.features.microsoft_windows? do
+ include PuppetSpec::Files
+ include PuppetSpec::Compiler
+
+ let(:target) { tmpfile('ssh_known_hosts') }
+ let(:manifest) { "sshkey { 'test':
+ ensure => 'present',
+ type => 'rsa',
+ key => 'TESTKEY',
+ target => '#{target}' }"
+ }
+
+ it "should create a new known_hosts file with mode 0644" do
+ apply_compiled_manifest(manifest)
+ expect_file_mode(target, "644")
+ end
+end
diff --git a/spec/integration/type/tidy_spec.rb b/spec/integration/type/tidy_spec.rb
index 9c044d703..7dbefb6ca 100755
--- a/spec/integration/type/tidy_spec.rb
+++ b/spec/integration/type/tidy_spec.rb
@@ -1,31 +1,34 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet_spec/files'
require 'puppet/file_bucket/dipper'
describe Puppet::Type.type(:tidy) do
include PuppetSpec::Files
before do
Puppet::Util::Storage.stubs(:store)
end
# Testing #355.
it "should be able to remove dead links", :if => Puppet.features.manages_symlinks? do
dir = tmpfile("tidy_link_testing")
link = File.join(dir, "link")
target = tmpfile("no_such_file_tidy_link_testing")
Dir.mkdir(dir)
Puppet::FileSystem.symlink(target, link)
tidy = Puppet::Type.type(:tidy).new :path => dir, :recurse => true
catalog = Puppet::Resource::Catalog.new
catalog.add_resource(tidy)
+ # avoid crude failures because of nil resources that result
+ # from implicit containment and lacking containers
+ catalog.stubs(:container_of).returns tidy
catalog.apply
Puppet::FileSystem.symlink?(link).should be_false
end
end
diff --git a/spec/integration/type/user_spec.rb b/spec/integration/type/user_spec.rb
index c4d4879af..4724fe9d5 100644
--- a/spec/integration/type/user_spec.rb
+++ b/spec/integration/type/user_spec.rb
@@ -1,31 +1,36 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet_spec/files'
require 'puppet_spec/compiler'
describe Puppet::Type.type(:user), '(integration)', :unless => Puppet.features.microsoft_windows? do
include PuppetSpec::Files
include PuppetSpec::Compiler
context "when set to purge ssh keys from a file" do
- let(:tempfile) { file_containing('user_spec', "# comment\nssh-rsa KEY-DATA key-name") }
+ let(:tempfile) { file_containing('user_spec', "# comment\nssh-rsa KEY-DATA key-name\nssh-rsa KEY-DATA key name\n") }
# must use an existing user, or the generated key resource
# will fail on account of an invalid user for the key
# - root should be a safe default
let(:manifest) { "user { 'root': purge_ssh_keys => '#{tempfile}' }" }
it "should purge authorized ssh keys" do
apply_compiled_manifest(manifest)
File.read(tempfile).should_not =~ /key-name/
end
+ it "should purge keys with spaces in the comment string" do
+ apply_compiled_manifest(manifest)
+ File.read(tempfile).should_not =~ /key name/
+ end
+
context "with other prefetching resources evaluated first" do
let(:manifest) { "host { 'test': before => User[root] } user { 'root': purge_ssh_keys => '#{tempfile}' }" }
it "should purge authorized ssh keys" do
apply_compiled_manifest(manifest)
File.read(tempfile).should_not =~ /key-name/
end
end
end
end
diff --git a/spec/integration/util/autoload_spec.rb b/spec/integration/util/autoload_spec.rb
index bfe8b67d2..c352fea9e 100755
--- a/spec/integration/util/autoload_spec.rb
+++ b/spec/integration/util/autoload_spec.rb
@@ -1,107 +1,107 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet/util/autoload'
require 'fileutils'
class AutoloadIntegrator
@things = []
def self.newthing(name)
@things << name
end
def self.thing?(name)
@things.include? name
end
def self.clear
@things.clear
end
end
require 'puppet_spec/files'
describe Puppet::Util::Autoload do
include PuppetSpec::Files
def with_file(name, *path)
path = File.join(*path)
# Now create a file to load
File.open(path, "w") { |f|
f.puts "\nAutoloadIntegrator.newthing(:#{name.to_s})\n"
}
yield
File.delete(path)
end
def with_loader(name, path)
dir = tmpfile(name + path)
$LOAD_PATH << dir
Dir.mkdir(dir)
rbdir = File.join(dir, path.to_s)
Dir.mkdir(rbdir)
loader = Puppet::Util::Autoload.new(name, path)
yield rbdir, loader
Dir.rmdir(rbdir)
Dir.rmdir(dir)
$LOAD_PATH.pop
AutoloadIntegrator.clear
end
it "should make instances available by the loading class" do
loader = Puppet::Util::Autoload.new("foo", "bar")
Puppet::Util::Autoload["foo"].should == loader
end
it "should not fail when asked to load a missing file" do
Puppet::Util::Autoload.new("foo", "bar").load(:eh).should be_false
end
it "should load and return true when it successfully loads a file" do
with_loader("foo", "bar") { |dir,loader|
with_file(:mything, dir, "mything.rb") {
loader.load(:mything).should be_true
loader.class.should be_loaded("bar/mything")
AutoloadIntegrator.should be_thing(:mything)
}
}
end
it "should consider a file loaded when asked for the name without an extension" do
with_loader("foo", "bar") { |dir,loader|
with_file(:noext, dir, "noext.rb") {
loader.load(:noext)
loader.class.should be_loaded("bar/noext")
}
}
end
it "should consider a file loaded when asked for the name with an extension" do
with_loader("foo", "bar") { |dir,loader|
with_file(:noext, dir, "withext.rb") {
loader.load(:withext)
loader.class.should be_loaded("bar/withext.rb")
}
}
end
it "should be able to load files directly from modules" do
## modulepath can't be used until after app settings are initialized, so we need to simulate that:
Puppet.settings.expects(:app_defaults_initialized?).returns(true).at_least_once
modulepath = tmpfile("autoload_module_testing")
libdir = File.join(modulepath, "mymod", "lib", "foo")
FileUtils.mkdir_p(libdir)
file = File.join(libdir, "plugin.rb")
- Puppet[:modulepath] = modulepath
-
- with_loader("foo", "foo") do |dir, loader|
- with_file(:plugin, file.split("/")) do
- loader.load(:plugin)
- loader.class.should be_loaded("foo/plugin.rb")
+ Puppet.override(:environments => Puppet::Environments::Static.new(Puppet::Node::Environment.create(:production, [modulepath]))) do
+ with_loader("foo", "foo") do |dir, loader|
+ with_file(:plugin, file.split("/")) do
+ loader.load(:plugin)
+ loader.class.should be_loaded("foo/plugin.rb")
+ end
end
end
end
end
diff --git a/spec/integration/util/rdoc/parser_spec.rb b/spec/integration/util/rdoc/parser_spec.rb
index d3bbef45c..2fdaf8019 100755
--- a/spec/integration/util/rdoc/parser_spec.rb
+++ b/spec/integration/util/rdoc/parser_spec.rb
@@ -1,261 +1,268 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet/util/rdoc'
describe "RDoc::Parser" do
require 'puppet_spec/files'
include PuppetSpec::Files
let(:document_all) { false }
let(:tmp_dir) { tmpdir('rdoc_parser_tmp') }
let(:doc_dir) { File.join(tmp_dir, 'doc') }
let(:manifests_dir) { File.join(tmp_dir, 'manifests') }
let(:modules_dir) { File.join(tmp_dir, 'modules') }
let(:modules_and_manifests) do
{
:site => [
File.join(manifests_dir, 'site.pp'),
<<-EOF
# The test class comment
class test {
# The virtual resource comment
@notify { virtual: }
# The a_notify_resource comment
notify { a_notify_resource:
message => "a_notify_resource message"
}
}
# The includes_another class comment
class includes_another {
include another
}
# The requires_another class comment
class requires_another {
require another
}
# node comment
node foo {
include test
$a_var = "var_value"
realize Notify[virtual]
notify { bar: }
}
EOF
],
:module_readme => [
File.join(modules_dir, 'a_module', 'README'),
<<-EOF
The a_module README docs.
EOF
],
:module_init => [
File.join(modules_dir, 'a_module', 'manifests', 'init.pp'),
<<-EOF
# The a_module class comment
class a_module {}
class another {}
EOF
],
:module_type => [
File.join(modules_dir, 'a_module', 'manifests', 'a_type.pp'),
<<-EOF
# The a_type type comment
define a_module::a_type() {}
EOF
],
:module_plugin => [
File.join(modules_dir, 'a_module', 'lib', 'puppet', 'type', 'a_plugin.rb'),
<<-EOF
# The a_plugin type comment
Puppet::Type.newtype(:a_plugin) do
@doc = "Not presented"
end
EOF
],
:module_function => [
File.join(modules_dir, 'a_module', 'lib', 'puppet', 'parser', 'a_function.rb'),
<<-EOF
# The a_function function comment
module Puppet::Parser::Functions
newfunction(:a_function, :type => :rvalue) do
return
end
end
EOF
],
:module_fact => [
File.join(modules_dir, 'a_module', 'lib', 'facter', 'a_fact.rb'),
<<-EOF
# The a_fact fact comment
Facter.add("a_fact") do
end
EOF
],
}
end
def write_file(file, content)
FileUtils.mkdir_p(File.dirname(file))
File.open(file, 'w') do |f|
f.puts(content)
end
end
def prepare_manifests_and_modules
modules_and_manifests.each do |key,array|
write_file(*array)
end
end
def file_exists_and_matches_content(file, *content_patterns)
Puppet::FileSystem.exist?(file).should(be_true, "Cannot find #{file}")
content_patterns.each do |pattern|
content = File.read(file)
content.should match(pattern)
end
end
def some_file_exists_with_matching_content(glob, *content_patterns)
Dir.glob(glob).select do |f|
contents = File.read(f)
content_patterns.all? { |p| p.match(contents) }
end.should_not(be_empty, "Could not match #{content_patterns} in any of the files found in #{glob}")
end
+ around(:each) do |example|
+ env = Puppet::Node::Environment.create(:doc_test_env, [modules_dir], manifests_dir)
+ Puppet.override({:environments => Puppet::Environments::Static.new(env), :current_environment => env}) do
+ example.run
+ end
+ end
+
before :each do
prepare_manifests_and_modules
Puppet.settings[:document_all] = document_all
Puppet.settings[:modulepath] = modules_dir
Puppet::Util::RDoc.rdoc(doc_dir, [modules_dir, manifests_dir])
end
module RdocTesters
def has_module_rdoc(module_name, *other_test_patterns)
file_exists_and_matches_content(module_path(module_name), /Module:? +#{module_name}/i, *other_test_patterns)
end
def has_node_rdoc(module_name, node_name, *other_test_patterns)
file_exists_and_matches_content(node_path(module_name, node_name), /#{node_name}/, /node comment/, *other_test_patterns)
end
def has_defined_type(module_name, type_name)
file_exists_and_matches_content(module_path(module_name), /#{type_name}.*?\(\s*\)/m, "The .*?#{type_name}.*? type comment")
end
def has_class_rdoc(module_name, class_name, *other_test_patterns)
file_exists_and_matches_content(class_path(module_name, class_name), /#{class_name}.*? class comment/, *other_test_patterns)
end
def has_plugin_rdoc(module_name, type, name)
file_exists_and_matches_content(plugin_path(module_name, type, name), /The .*?#{name}.*?\s*#{type} comment/m, /Type.*?#{type}/m)
end
end
shared_examples_for :an_rdoc_site do
it "documents the __site__ module" do
has_module_rdoc("__site__")
end
it "documents the __site__::test class" do
has_class_rdoc("__site__", "test")
end
it "documents the __site__::foo node" do
has_node_rdoc("__site__", "foo")
end
it "documents the a_module module" do
has_module_rdoc("a_module", /The .*?a_module.*? .*?README.*?docs/m)
end
it "documents the a_module::a_module class" do
has_class_rdoc("a_module", "a_module")
end
it "documents the a_module::a_type defined type" do
has_defined_type("a_module", "a_type")
end
it "documents the a_module::a_plugin type" do
has_plugin_rdoc("a_module", :type, 'a_plugin')
end
it "documents the a_module::a_function function" do
has_plugin_rdoc("a_module", :function, 'a_function')
end
it "documents the a_module::a_fact fact" do
has_plugin_rdoc("a_module", :fact, 'a_fact')
end
it "documents included classes" do
has_class_rdoc("__site__", "includes_another", /Included.*?another/m)
end
end
shared_examples_for :an_rdoc1_site do
it "documents required classes" do
has_class_rdoc("__site__", "requires_another", /Required Classes.*?another/m)
end
it "documents realized resources" do
has_node_rdoc("__site__", "foo", /Realized Resources.*?Notify\[virtual\]/m)
end
it "documents global variables" do
has_node_rdoc("__site__", "foo", /Global Variables.*?a_var.*?=.*?var_value/m)
end
describe "when document_all is true" do
let(:document_all) { true }
it "documents virtual resource declarations" do
has_class_rdoc("__site__", "test", /Resources.*?Notify\[virtual\]/m, /The virtual resource comment/)
end
it "documents resources" do
has_class_rdoc("__site__", "test", /Resources.*?Notify\[a_notify_resource\]/m, /message => "a_notify_resource message"/, /The a_notify_resource comment/)
end
end
end
describe "rdoc1 support", :if => Puppet.features.rdoc1? do
def module_path(module_name); "#{doc_dir}/classes/#{module_name}.html" end
def node_path(module_name, node_name); "#{doc_dir}/nodes/**/*.html" end
def class_path(module_name, class_name); "#{doc_dir}/classes/#{module_name}/#{class_name}.html" end
def plugin_path(module_name, type, name); "#{doc_dir}/plugins/#{name}.html" end
include RdocTesters
def has_node_rdoc(module_name, node_name, *other_test_patterns)
some_file_exists_with_matching_content(node_path(module_name, node_name), /#{node_name}/, /node comment/, *other_test_patterns)
end
it_behaves_like :an_rdoc_site
it_behaves_like :an_rdoc1_site
it "references nodes and classes in the __site__ module" do
file_exists_and_matches_content("#{doc_dir}/classes/__site__.html", /Node.*__site__::foo/, /Class.*__site__::test/)
end
it "references functions, facts, and type plugins in the a_module module" do
file_exists_and_matches_content("#{doc_dir}/classes/a_module.html", /a_function/, /a_fact/, /a_plugin/, /Class.*a_module::a_module/)
end
end
describe "rdoc2 support", :if => !Puppet.features.rdoc1? do
def module_path(module_name); "#{doc_dir}/#{module_name}.html" end
def node_path(module_name, node_name); "#{doc_dir}/#{module_name}/__nodes__/#{node_name}.html" end
def class_path(module_name, class_name); "#{doc_dir}/#{module_name}/#{class_name}.html" end
def plugin_path(module_name, type, name); "#{doc_dir}/#{module_name}/__#{type}s__.html" end
include RdocTesters
it_behaves_like :an_rdoc_site
end
end
diff --git a/spec/integration/util/windows/process_spec.rb b/spec/integration/util/windows/process_spec.rb
index 6dc54d228..60eae3443 100644
--- a/spec/integration/util/windows/process_spec.rb
+++ b/spec/integration/util/windows/process_spec.rb
@@ -1,22 +1,34 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'facter'
describe "Puppet::Util::Windows::Process", :if => Puppet.features.microsoft_windows? do
describe "as an admin" do
it "should have the SeCreateSymbolicLinkPrivilege necessary to create symlinks on Vista / 2008+",
:if => Facter.value(:kernelmajversion).to_f >= 6.0 && Puppet.features.microsoft_windows? do
# this is a bit of a lame duck test since it requires running user to be admin
# a better integration test would create a new user with the privilege and verify
Puppet::Util::Windows::User.should be_admin
Puppet::Util::Windows::Process.process_privilege_symlink?.should be_true
end
it "should not have the SeCreateSymbolicLinkPrivilege necessary to create symlinks on 2003 and earlier",
:if => Facter.value(:kernelmajversion).to_f < 6.0 && Puppet.features.microsoft_windows? do
Puppet::Util::Windows::User.should be_admin
Puppet::Util::Windows::Process.process_privilege_symlink?.should be_false
end
+
+ it "should be able to lookup a standard Windows process privilege" do
+ Puppet::Util::Windows::Process.lookup_privilege_value('SeShutdownPrivilege') do |luid|
+ luid.should_not be_nil
+ luid.should be_instance_of(Puppet::Util::Windows::Process::LUID)
+ end
+ end
+
+ it "should raise an error for an unknown privilege name" do
+ fail_msg = /LookupPrivilegeValue\(, foo, .*\): A specified privilege does not exist/
+ expect { Puppet::Util::Windows::Process.lookup_privilege_value('foo') }.to raise_error(Puppet::Util::Windows::Error, fail_msg)
+ end
end
end
diff --git a/spec/integration/util/windows/security_spec.rb b/spec/integration/util/windows/security_spec.rb
index fa0eadc0d..7f7aa7cb6 100755
--- a/spec/integration/util/windows/security_spec.rb
+++ b/spec/integration/util/windows/security_spec.rb
@@ -1,863 +1,864 @@
#!/usr/bin/env ruby
require 'spec_helper'
-require 'puppet/util/adsi'
-
if Puppet.features.microsoft_windows?
class WindowsSecurityTester
require 'puppet/util/windows/security'
include Puppet::Util::Windows::Security
end
end
describe "Puppet::Util::Windows::Security", :if => Puppet.features.microsoft_windows? do
include PuppetSpec::Files
before :all do
@sids = {
- :current_user => Puppet::Util::Windows::Security.name_to_sid(Sys::Admin.get_login),
+ :current_user => Puppet::Util::Windows::SID.name_to_sid(Puppet::Util::Windows::ADSI::User.current_user_name),
:system => Win32::Security::SID::LocalSystem,
- :admin => Puppet::Util::Windows::Security.name_to_sid("Administrator"),
+ :admin => Puppet::Util::Windows::SID.name_to_sid("Administrator"),
:administrators => Win32::Security::SID::BuiltinAdministrators,
- :guest => Puppet::Util::Windows::Security.name_to_sid("Guest"),
+ :guest => Puppet::Util::Windows::SID.name_to_sid("Guest"),
:users => Win32::Security::SID::BuiltinUsers,
:power_users => Win32::Security::SID::PowerUsers,
:none => Win32::Security::SID::Nobody,
:everyone => Win32::Security::SID::Everyone
}
# The TCP/IP NetBIOS Helper service (aka 'lmhosts') has ended up
# disabled on some VMs for reasons we couldn't track down. This
# condition causes tests which rely on resolving UNC style paths
# (like \\localhost) to fail with unhelpful error messages.
# Put a check for this upfront to aid debug should this strike again.
service = Puppet::Type.type(:service).new(:name => 'lmhosts')
- service.provider.status.should == :running
+ expect(service.provider.status).to eq(:running), 'lmhosts service is not running'
end
let (:sids) { @sids }
let (:winsec) { WindowsSecurityTester.new }
+ let (:klass) { Puppet::Util::Windows::File }
def set_group_depending_on_current_user(path)
if sids[:current_user] == sids[:system]
# if the current user is SYSTEM, by setting the group to
# guest, SYSTEM is automagically given full control, so instead
# override that behavior with SYSTEM as group and a specific mode
winsec.set_group(sids[:system], path)
mode = winsec.get_mode(path)
winsec.set_mode(mode & ~WindowsSecurityTester::S_IRWXG, path)
else
winsec.set_group(sids[:guest], path)
end
end
def grant_everyone_full_access(path)
sd = winsec.get_security_descriptor(path)
everyone = 'S-1-1-0'
- inherit = WindowsSecurityTester::OBJECT_INHERIT_ACE | WindowsSecurityTester::CONTAINER_INHERIT_ACE
- sd.dacl.allow(everyone, Windows::File::FILE_ALL_ACCESS, inherit)
+ inherit = Puppet::Util::Windows::AccessControlEntry::OBJECT_INHERIT_ACE | Puppet::Util::Windows::AccessControlEntry::CONTAINER_INHERIT_ACE
+ sd.dacl.allow(everyone, klass::FILE_ALL_ACCESS, inherit)
winsec.set_security_descriptor(path, sd)
end
shared_examples_for "only child owner" do
it "should allow child owner" do
winsec.set_owner(sids[:guest], parent)
winsec.set_group(sids[:current_user], parent)
winsec.set_mode(0700, parent)
check_delete(path)
end
it "should deny parent owner" do
winsec.set_owner(sids[:guest], path)
winsec.set_group(sids[:current_user], path)
winsec.set_mode(0700, path)
lambda { check_delete(path) }.should raise_error(Errno::EACCES)
end
it "should deny group" do
winsec.set_owner(sids[:guest], path)
winsec.set_group(sids[:current_user], path)
winsec.set_mode(0700, path)
lambda { check_delete(path) }.should raise_error(Errno::EACCES)
end
it "should deny other" do
winsec.set_owner(sids[:guest], path)
winsec.set_group(sids[:current_user], path)
winsec.set_mode(0700, path)
lambda { check_delete(path) }.should raise_error(Errno::EACCES)
end
end
shared_examples_for "a securable object" do
describe "on a volume that doesn't support ACLs" do
[:owner, :group, :mode].each do |p|
it "should return nil #{p}" do
winsec.stubs(:supports_acl?).returns false
winsec.send("get_#{p}", path).should be_nil
end
end
end
describe "on a volume that supports ACLs" do
describe "for a normal user" do
before :each do
Puppet.features.stubs(:root?).returns(false)
end
after :each do
winsec.set_mode(WindowsSecurityTester::S_IRWXU, parent)
winsec.set_mode(WindowsSecurityTester::S_IRWXU, path) if Puppet::FileSystem.exist?(path)
end
describe "#supports_acl?" do
%w[c:/ c:\\ c:/windows/system32 \\\\localhost\\C$ \\\\127.0.0.1\\C$\\foo].each do |path|
it "should accept #{path}" do
winsec.should be_supports_acl(path)
end
end
it "should raise an exception if it cannot get volume information" do
expect {
winsec.supports_acl?('foobar')
}.to raise_error(Puppet::Error, /Failed to get volume information/)
end
end
describe "#owner=" do
it "should allow setting to the current user" do
winsec.set_owner(sids[:current_user], path)
end
it "should raise an exception when setting to a different user" do
lambda { winsec.set_owner(sids[:guest], path) }.should raise_error(Puppet::Error, /This security ID may not be assigned as the owner of this object./)
end
end
describe "#owner" do
it "it should not be empty" do
winsec.get_owner(path).should_not be_empty
end
it "should raise an exception if an invalid path is provided" do
lambda { winsec.get_owner("c:\\doesnotexist.txt") }.should raise_error(Puppet::Error, /The system cannot find the file specified./)
end
end
describe "#group=" do
it "should allow setting to a group the current owner is a member of" do
winsec.set_group(sids[:users], path)
end
# Unlike unix, if the user has permission to WRITE_OWNER, which the file owner has by default,
# then they can set the primary group to a group that the user does not belong to.
it "should allow setting to a group the current owner is not a member of" do
winsec.set_group(sids[:power_users], path)
end
end
describe "#group" do
it "should not be empty" do
winsec.get_group(path).should_not be_empty
end
it "should raise an exception if an invalid path is provided" do
lambda { winsec.get_group("c:\\doesnotexist.txt") }.should raise_error(Puppet::Error, /The system cannot find the file specified./)
end
end
it "should preserve inherited full control for SYSTEM when setting owner and group" do
# new file has SYSTEM
system_aces = winsec.get_aces_for_path_by_sid(path, sids[:system])
system_aces.should_not be_empty
# when running under SYSTEM account, multiple ACEs come back
# so we only care that we have at least one of these
system_aces.any? do |ace|
- ace.mask == Windows::File::FILE_ALL_ACCESS
+ ace.mask == klass::FILE_ALL_ACCESS
end.should be_true
# changing the owner/group will no longer make the SD protected
winsec.set_group(sids[:power_users], path)
winsec.set_owner(sids[:administrators], path)
system_aces.find do |ace|
- ace.mask == Windows::File::FILE_ALL_ACCESS && ace.inherited?
+ ace.mask == klass::FILE_ALL_ACCESS && ace.inherited?
end.should_not be_nil
end
describe "#mode=" do
(0000..0700).step(0100) do |mode|
it "should enforce mode #{mode.to_s(8)}" do
winsec.set_mode(mode, path)
check_access(mode, path)
end
end
it "should round-trip all 128 modes that do not require deny ACEs" do
0.upto(1).each do |s|
0.upto(7).each do |u|
0.upto(u).each do |g|
0.upto(g).each do |o|
# if user is superset of group, and group superset of other, then
# no deny ace is required, and mode can be converted to win32
# access mask, and back to mode without loss of information
# (provided the owner and group are not the same)
next if ((u & g) != g) or ((g & o) != o)
mode = (s << 9 | u << 6 | g << 3 | o << 0)
winsec.set_mode(mode, path)
winsec.get_mode(path).to_s(8).should == mode.to_s(8)
end
end
end
end
end
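# Worked example of the superset condition above: mode 0754 has u=7 (rwx),
# g=5 (r-x), o=4 (r--); since u & g == g and g & o == o it needs no deny ACE
# and round-trips. Mode 0745 (g=4, o=5) fails g & o == o, so it is skipped
# here because representing it requires a deny ACE.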
it "should preserve full control for SYSTEM when setting mode" do
# new file has SYSTEM
system_aces = winsec.get_aces_for_path_by_sid(path, sids[:system])
system_aces.should_not be_empty
# when running under SYSTEM account, multiple ACEs come back
# so we only care that we have at least one of these
system_aces.any? do |ace|
- ace.mask == WindowsSecurityTester::FILE_ALL_ACCESS
+ ace.mask == klass::FILE_ALL_ACCESS
end.should be_true
# changing the mode will make the SD protected
winsec.set_group(sids[:none], path)
winsec.set_mode(0600, path)
# and should have a non-inherited SYSTEM ACE(s)
system_aces = winsec.get_aces_for_path_by_sid(path, sids[:system])
system_aces.each do |ace|
- ace.mask.should == Windows::File::FILE_ALL_ACCESS && ! ace.inherited?
+ ace.mask.should == klass::FILE_ALL_ACCESS && ! ace.inherited?
end
end
describe "for modes that require deny aces" do
it "should map everyone to group and owner" do
winsec.set_mode(0426, path)
winsec.get_mode(path).to_s(8).should == "666"
end
it "should combine user and group modes when owner and group sids are equal" do
winsec.set_group(winsec.get_owner(path), path)
winsec.set_mode(0410, path)
winsec.get_mode(path).to_s(8).should == "550"
end
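# Arithmetic behind the two expectations above: 0426 folds the "everyone"
# bits (6) into owner (4 | 6 = 6) and group (2 | 6 = 6), giving 666; 0410
# with owner == group ORs the user and group bits (4 | 1 = 5) for both,
# giving 550.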
end
describe "for read-only objects" do
before :each do
winsec.set_group(sids[:none], path)
winsec.set_mode(0600, path)
- winsec.add_attributes(path, WindowsSecurityTester::FILE_ATTRIBUTE_READONLY)
- (winsec.get_attributes(path) & WindowsSecurityTester::FILE_ATTRIBUTE_READONLY).should be_nonzero
+ Puppet::Util::Windows::File.add_attributes(path, klass::FILE_ATTRIBUTE_READONLY)
+ (Puppet::Util::Windows::File.get_attributes(path) & klass::FILE_ATTRIBUTE_READONLY).should be_nonzero
end
it "should make them writable if any sid has write permission" do
winsec.set_mode(WindowsSecurityTester::S_IWUSR, path)
- (winsec.get_attributes(path) & WindowsSecurityTester::FILE_ATTRIBUTE_READONLY).should == 0
+ (Puppet::Util::Windows::File.get_attributes(path) & klass::FILE_ATTRIBUTE_READONLY).should == 0
end
it "should leave them read-only if no sid has write permission and should allow full access for SYSTEM" do
winsec.set_mode(WindowsSecurityTester::S_IRUSR | WindowsSecurityTester::S_IXGRP, path)
- (winsec.get_attributes(path) & WindowsSecurityTester::FILE_ATTRIBUTE_READONLY).should be_nonzero
+ (Puppet::Util::Windows::File.get_attributes(path) & klass::FILE_ATTRIBUTE_READONLY).should be_nonzero
system_aces = winsec.get_aces_for_path_by_sid(path, sids[:system])
# when running under SYSTEM account, and set_group / set_owner hasn't been called
# SYSTEM full access will be restored
system_aces.any? do |ace|
- ace.mask == Windows::File::FILE_ALL_ACCESS
+ ace.mask == klass::FILE_ALL_ACCESS
end.should be_true
end
end
it "should raise an exception if an invalid path is provided" do
lambda { winsec.set_mode(sids[:guest], "c:\\doesnotexist.txt") }.should raise_error(Puppet::Error, /The system cannot find the file specified./)
end
end
describe "#mode" do
it "should report when extra aces are encounted" do
sd = winsec.get_security_descriptor(path)
(544..547).each do |rid|
- sd.dacl.allow("S-1-5-32-#{rid}", WindowsSecurityTester::STANDARD_RIGHTS_ALL)
+ sd.dacl.allow("S-1-5-32-#{rid}", klass::STANDARD_RIGHTS_ALL)
end
winsec.set_security_descriptor(path, sd)
mode = winsec.get_mode(path)
(mode & WindowsSecurityTester::S_IEXTRA).should == WindowsSecurityTester::S_IEXTRA
end
it "should return deny aces" do
sd = winsec.get_security_descriptor(path)
- sd.dacl.deny(sids[:guest], WindowsSecurityTester::FILE_GENERIC_WRITE)
+ sd.dacl.deny(sids[:guest], klass::FILE_GENERIC_WRITE)
winsec.set_security_descriptor(path, sd)
guest_aces = winsec.get_aces_for_path_by_sid(path, sids[:guest])
guest_aces.find do |ace|
- ace.type == WindowsSecurityTester::ACCESS_DENIED_ACE_TYPE
+ ace.type == Puppet::Util::Windows::AccessControlEntry::ACCESS_DENIED_ACE_TYPE
end.should_not be_nil
end
it "should skip inherit-only ace" do
sd = winsec.get_security_descriptor(path)
dacl = Puppet::Util::Windows::AccessControlList.new
dacl.allow(
- sids[:current_user], WindowsSecurityTester::STANDARD_RIGHTS_ALL | WindowsSecurityTester::SPECIFIC_RIGHTS_ALL
+ sids[:current_user], klass::STANDARD_RIGHTS_ALL | klass::SPECIFIC_RIGHTS_ALL
)
dacl.allow(
sids[:everyone],
- WindowsSecurityTester::FILE_GENERIC_READ,
- WindowsSecurityTester::INHERIT_ONLY_ACE | WindowsSecurityTester::OBJECT_INHERIT_ACE
+ klass::FILE_GENERIC_READ,
+ Puppet::Util::Windows::AccessControlEntry::INHERIT_ONLY_ACE | Puppet::Util::Windows::AccessControlEntry::OBJECT_INHERIT_ACE
)
winsec.set_security_descriptor(path, sd)
(winsec.get_mode(path) & WindowsSecurityTester::S_IRWXO).should == 0
end
it "should raise an exception if an invalid path is provided" do
lambda { winsec.get_mode("c:\\doesnotexist.txt") }.should raise_error(Puppet::Error, /The system cannot find the file specified./)
end
end
describe "inherited access control entries" do
it "should be absent when the access control list is protected, and should not remove SYSTEM" do
winsec.set_mode(WindowsSecurityTester::S_IRWXU, path)
mode = winsec.get_mode(path)
[ WindowsSecurityTester::S_IEXTRA,
WindowsSecurityTester::S_ISYSTEM_MISSING ].each do |flag|
(mode & flag).should_not == flag
end
end
it "should be present when the access control list is unprotected" do
# add a bunch of aces to the parent with permission to add children
- allow = WindowsSecurityTester::STANDARD_RIGHTS_ALL | WindowsSecurityTester::SPECIFIC_RIGHTS_ALL
- inherit = WindowsSecurityTester::OBJECT_INHERIT_ACE | WindowsSecurityTester::CONTAINER_INHERIT_ACE
+ allow = klass::STANDARD_RIGHTS_ALL | klass::SPECIFIC_RIGHTS_ALL
+ inherit = Puppet::Util::Windows::AccessControlEntry::OBJECT_INHERIT_ACE | Puppet::Util::Windows::AccessControlEntry::CONTAINER_INHERIT_ACE
sd = winsec.get_security_descriptor(parent)
sd.dacl.allow(
"S-1-1-0", #everyone
allow,
inherit
)
(544..547).each do |rid|
sd.dacl.allow(
"S-1-5-32-#{rid}",
- WindowsSecurityTester::STANDARD_RIGHTS_ALL,
+ klass::STANDARD_RIGHTS_ALL,
inherit
)
end
winsec.set_security_descriptor(parent, sd)
# unprotect child, it should inherit from parent
winsec.set_mode(WindowsSecurityTester::S_IRWXU, path, false)
(winsec.get_mode(path) & WindowsSecurityTester::S_IEXTRA).should == WindowsSecurityTester::S_IEXTRA
end
end
end
describe "for an administrator", :if => Puppet.features.root? do
before :each do
+ is_dir = Puppet::FileSystem.directory?(path)
winsec.set_mode(WindowsSecurityTester::S_IRWXU | WindowsSecurityTester::S_IRWXG, path)
set_group_depending_on_current_user(path)
winsec.set_owner(sids[:guest], path)
- lambda { File.open(path, 'r') }.should raise_error(Errno::EACCES)
+ expected_error = RUBY_VERSION =~ /^2\./ && is_dir ? Errno::EISDIR : Errno::EACCES
+ lambda { File.open(path, 'r') }.should raise_error(expected_error)
end
after :each do
if Puppet::FileSystem.exist?(path)
winsec.set_owner(sids[:current_user], path)
winsec.set_mode(WindowsSecurityTester::S_IRWXU, path)
end
end
describe "#owner=" do
it "should accept a user sid" do
winsec.set_owner(sids[:admin], path)
winsec.get_owner(path).should == sids[:admin]
end
it "should accept a group sid" do
winsec.set_owner(sids[:power_users], path)
winsec.get_owner(path).should == sids[:power_users]
end
it "should raise an exception if an invalid sid is provided" do
lambda { winsec.set_owner("foobar", path) }.should raise_error(Puppet::Error, /Failed to convert string SID/)
end
it "should raise an exception if an invalid path is provided" do
lambda { winsec.set_owner(sids[:guest], "c:\\doesnotexist.txt") }.should raise_error(Puppet::Error, /The system cannot find the file specified./)
end
end
describe "#group=" do
it "should accept a group sid" do
winsec.set_group(sids[:power_users], path)
winsec.get_group(path).should == sids[:power_users]
end
it "should accept a user sid" do
winsec.set_group(sids[:admin], path)
winsec.get_group(path).should == sids[:admin]
end
it "should combine owner and group rights when they are the same sid" do
winsec.set_owner(sids[:power_users], path)
winsec.set_group(sids[:power_users], path)
winsec.set_mode(0610, path)
winsec.get_owner(path).should == sids[:power_users]
winsec.get_group(path).should == sids[:power_users]
# note group execute permission added to user ace, and then group rwx value
# reflected to match
# Exclude missing system ace, since that's not relevant
(winsec.get_mode(path) & 0777).to_s(8).should == "770"
end
it "should raise an exception if an invalid sid is provided" do
lambda { winsec.set_group("foobar", path) }.should raise_error(Puppet::Error, /Failed to convert string SID/)
end
it "should raise an exception if an invalid path is provided" do
lambda { winsec.set_group(sids[:guest], "c:\\doesnotexist.txt") }.should raise_error(Puppet::Error, /The system cannot find the file specified./)
end
end
describe "when the sid is NULL" do
it "should retrieve an empty owner sid"
it "should retrieve an empty group sid"
end
describe "when the sid refers to a deleted trustee" do
it "should retrieve the user sid" do
sid = nil
- user = Puppet::Util::ADSI::User.create("delete_me_user")
+ user = Puppet::Util::Windows::ADSI::User.create("delete_me_user")
user.commit
begin
- sid = Sys::Admin::get_user(user.name).sid
+ sid = Puppet::Util::Windows::ADSI::User.new(user.name).sid.to_s
winsec.set_owner(sid, path)
winsec.set_mode(WindowsSecurityTester::S_IRWXU, path)
ensure
- Puppet::Util::ADSI::User.delete(user.name)
+ Puppet::Util::Windows::ADSI::User.delete(user.name)
end
winsec.get_owner(path).should == sid
winsec.get_mode(path).should == WindowsSecurityTester::S_IRWXU
end
it "should retrieve the group sid" do
sid = nil
- group = Puppet::Util::ADSI::Group.create("delete_me_group")
+ group = Puppet::Util::Windows::ADSI::Group.create("delete_me_group")
group.commit
begin
- sid = Sys::Admin::get_group(group.name).sid
+ sid = Puppet::Util::Windows::ADSI::Group.new(group.name).sid.to_s
winsec.set_group(sid, path)
winsec.set_mode(WindowsSecurityTester::S_IRWXG, path)
ensure
- Puppet::Util::ADSI::Group.delete(group.name)
+ Puppet::Util::Windows::ADSI::Group.delete(group.name)
end
winsec.get_group(path).should == sid
winsec.get_mode(path).should == WindowsSecurityTester::S_IRWXG
end
end
describe "#mode" do
it "should deny all access when the DACL is empty, including SYSTEM" do
sd = winsec.get_security_descriptor(path)
# don't allow inherited aces to affect the test
protect = true
new_sd = Puppet::Util::Windows::SecurityDescriptor.new(sd.owner, sd.group, [], protect)
winsec.set_security_descriptor(path, new_sd)
winsec.get_mode(path).should == WindowsSecurityTester::S_ISYSTEM_MISSING
end
# REMIND: ruby crashes when trying to set a NULL DACL
# it "should allow all when it is nil" do
# winsec.set_owner(sids[:current_user], path)
# winsec.open_file(path, WindowsSecurityTester::READ_CONTROL | WindowsSecurityTester::WRITE_DAC) do |handle|
# winsec.set_security_info(handle, WindowsSecurityTester::DACL_SECURITY_INFORMATION | WindowsSecurityTester::PROTECTED_DACL_SECURITY_INFORMATION, nil)
# end
# winsec.get_mode(path).to_s(8).should == "777"
# end
end
describe "when the parent directory" do
before :each do
winsec.set_owner(sids[:current_user], parent)
winsec.set_owner(sids[:current_user], path)
winsec.set_mode(0777, path, false)
end
describe "is writable and executable" do
describe "and sticky bit is set" do
it "should allow child owner" do
winsec.set_owner(sids[:guest], parent)
winsec.set_group(sids[:current_user], parent)
winsec.set_mode(01700, parent)
check_delete(path)
end
it "should allow parent owner" do
winsec.set_owner(sids[:current_user], parent)
winsec.set_group(sids[:guest], parent)
winsec.set_mode(01700, parent)
winsec.set_owner(sids[:current_user], path)
winsec.set_group(sids[:guest], path)
winsec.set_mode(0700, path)
check_delete(path)
end
it "should deny group" do
winsec.set_owner(sids[:guest], parent)
winsec.set_group(sids[:current_user], parent)
winsec.set_mode(01770, parent)
winsec.set_owner(sids[:guest], path)
winsec.set_group(sids[:current_user], path)
winsec.set_mode(0700, path)
lambda { check_delete(path) }.should raise_error(Errno::EACCES)
end
it "should deny other" do
winsec.set_owner(sids[:guest], parent)
winsec.set_group(sids[:current_user], parent)
winsec.set_mode(01777, parent)
winsec.set_owner(sids[:guest], path)
winsec.set_group(sids[:current_user], path)
winsec.set_mode(0700, path)
lambda { check_delete(path) }.should raise_error(Errno::EACCES)
end
end
describe "and sticky bit is not set" do
it "should allow child owner" do
winsec.set_owner(sids[:guest], parent)
winsec.set_group(sids[:current_user], parent)
winsec.set_mode(0700, parent)
check_delete(path)
end
it "should allow parent owner" do
winsec.set_owner(sids[:current_user], parent)
winsec.set_group(sids[:guest], parent)
winsec.set_mode(0700, parent)
winsec.set_owner(sids[:current_user], path)
winsec.set_group(sids[:guest], path)
winsec.set_mode(0700, path)
check_delete(path)
end
it "should allow group" do
winsec.set_owner(sids[:guest], parent)
winsec.set_group(sids[:current_user], parent)
winsec.set_mode(0770, parent)
winsec.set_owner(sids[:guest], path)
winsec.set_group(sids[:current_user], path)
winsec.set_mode(0700, path)
check_delete(path)
end
it "should allow other" do
winsec.set_owner(sids[:guest], parent)
winsec.set_group(sids[:current_user], parent)
winsec.set_mode(0777, parent)
winsec.set_owner(sids[:guest], path)
winsec.set_group(sids[:current_user], path)
winsec.set_mode(0700, path)
check_delete(path)
end
end
end
describe "is not writable" do
before :each do
winsec.set_group(sids[:current_user], parent)
winsec.set_mode(0555, parent)
end
it_behaves_like "only child owner"
end
describe "is not executable" do
before :each do
winsec.set_group(sids[:current_user], parent)
winsec.set_mode(0666, parent)
end
it_behaves_like "only child owner"
end
end
end
end
end
describe "file" do
let (:parent) do
tmpdir('win_sec_test_file')
end
let (:path) do
path = File.join(parent, 'childfile')
File.new(path, 'w').close
path
end
after :each do
# allow temp files to be cleaned up
grant_everyone_full_access(parent)
end
it_behaves_like "a securable object" do
def check_access(mode, path)
if (mode & WindowsSecurityTester::S_IRUSR).nonzero?
check_read(path)
else
lambda { check_read(path) }.should raise_error(Errno::EACCES)
end
if (mode & WindowsSecurityTester::S_IWUSR).nonzero?
check_write(path)
else
lambda { check_write(path) }.should raise_error(Errno::EACCES)
end
if (mode & WindowsSecurityTester::S_IXUSR).nonzero?
lambda { check_execute(path) }.should raise_error(Errno::ENOEXEC)
else
lambda { check_execute(path) }.should raise_error(Errno::EACCES)
end
end
def check_read(path)
File.open(path, 'r').close
end
def check_write(path)
File.open(path, 'w').close
end
def check_execute(path)
Kernel.exec(path)
end
def check_delete(path)
File.delete(path)
end
end
describe "locked files" do
let (:explorer) { File.join(Dir::WINDOWS, "explorer.exe") }
it "should get the owner" do
winsec.get_owner(explorer).should match /^S-1-5-/
end
it "should get the group" do
winsec.get_group(explorer).should match /^S-1-5-/
end
it "should get the mode" do
winsec.get_mode(explorer).should == (WindowsSecurityTester::S_IRWXU | WindowsSecurityTester::S_IRWXG | WindowsSecurityTester::S_IEXTRA)
end
end
end
describe "directory" do
let (:parent) do
tmpdir('win_sec_test_dir')
end
let (:path) do
path = File.join(parent, 'childdir')
Dir.mkdir(path)
path
end
after :each do
# allow temp files to be cleaned up
grant_everyone_full_access(parent)
end
it_behaves_like "a securable object" do
def check_access(mode, path)
if (mode & WindowsSecurityTester::S_IRUSR).nonzero?
check_read(path)
else
lambda { check_read(path) }.should raise_error(Errno::EACCES)
end
if (mode & WindowsSecurityTester::S_IWUSR).nonzero?
check_write(path)
else
lambda { check_write(path) }.should raise_error(Errno::EACCES)
end
if (mode & WindowsSecurityTester::S_IXUSR).nonzero?
check_execute(path)
else
lambda { check_execute(path) }.should raise_error(Errno::EACCES)
end
end
def check_read(path)
Dir.entries(path)
end
def check_write(path)
Dir.mkdir(File.join(path, "subdir"))
end
def check_execute(path)
Dir.chdir(path) {}
end
def check_delete(path)
Dir.rmdir(path)
end
end
describe "inheritable aces" do
it "should be applied to child objects" do
mode640 = WindowsSecurityTester::S_IRUSR | WindowsSecurityTester::S_IWUSR | WindowsSecurityTester::S_IRGRP
winsec.set_mode(mode640, path)
newfile = File.join(path, "newfile.txt")
File.new(newfile, "w").close
newdir = File.join(path, "newdir")
Dir.mkdir(newdir)
[newfile, newdir].each do |p|
mode = winsec.get_mode(p)
(mode & 07777).to_s(8).should == mode640.to_s(8)
end
end
end
end
context "security descriptor" do
let(:path) { tmpfile('sec_descriptor') }
let(:read_execute) { 0x201FF }
let(:synchronize) { 0x100000 }
before :each do
FileUtils.touch(path)
end
it "preserves aces for other users" do
dacl = Puppet::Util::Windows::AccessControlList.new
sids_in_dacl = [sids[:current_user], sids[:users]]
sids_in_dacl.each do |sid|
dacl.allow(sid, read_execute)
end
sd = Puppet::Util::Windows::SecurityDescriptor.new(sids[:guest], sids[:guest], dacl, true)
winsec.set_security_descriptor(path, sd)
aces = winsec.get_security_descriptor(path).dacl.to_a
aces.map(&:sid).should == sids_in_dacl
aces.map(&:mask).all? { |mask| mask == read_execute }.should be_true
end
it "changes the sid for all aces that were assigned to the old owner" do
sd = winsec.get_security_descriptor(path)
sd.owner.should_not == sids[:guest]
sd.dacl.allow(sd.owner, read_execute)
sd.dacl.allow(sd.owner, synchronize)
sd.owner = sids[:guest]
winsec.set_security_descriptor(path, sd)
dacl = winsec.get_security_descriptor(path).dacl
aces = dacl.find_all { |ace| ace.sid == sids[:guest] }
# only non-inherited aces will be reassigned to guest, so
# make sure we find at least the two we added
aces.size.should >= 2
end
it "preserves INHERIT_ONLY_ACEs" do
# inherit only aces can only be set on directories
dir = tmpdir('inheritonlyace')
inherit_flags = Puppet::Util::Windows::AccessControlEntry::INHERIT_ONLY_ACE |
Puppet::Util::Windows::AccessControlEntry::OBJECT_INHERIT_ACE |
Puppet::Util::Windows::AccessControlEntry::CONTAINER_INHERIT_ACE
sd = winsec.get_security_descriptor(dir)
- sd.dacl.allow(sd.owner, Windows::File::FILE_ALL_ACCESS, inherit_flags)
+ sd.dacl.allow(sd.owner, klass::FILE_ALL_ACCESS, inherit_flags)
winsec.set_security_descriptor(dir, sd)
sd = winsec.get_security_descriptor(dir)
winsec.set_owner(sids[:guest], dir)
sd = winsec.get_security_descriptor(dir)
sd.dacl.find do |ace|
ace.sid == sids[:guest] && ace.inherit_only?
end.should_not be_nil
end
it "allows deny ACEs with inheritance" do
# inheritance can only be set on directories
dir = tmpdir('denyaces')
inherit_flags = Puppet::Util::Windows::AccessControlEntry::OBJECT_INHERIT_ACE |
Puppet::Util::Windows::AccessControlEntry::CONTAINER_INHERIT_ACE
sd = winsec.get_security_descriptor(dir)
- sd.dacl.deny(sids[:guest], Windows::File::FILE_ALL_ACCESS, inherit_flags)
+ sd.dacl.deny(sids[:guest], klass::FILE_ALL_ACCESS, inherit_flags)
winsec.set_security_descriptor(dir, sd)
sd = winsec.get_security_descriptor(dir)
sd.dacl.find do |ace|
ace.sid == sids[:guest] && ace.flags != 0
end.should_not be_nil
end
context "when managing mode" do
it "removes aces for sids that are neither the owner nor group" do
# add a guest ace, it's never owner or group
sd = winsec.get_security_descriptor(path)
sd.dacl.allow(sids[:guest], read_execute)
winsec.set_security_descriptor(path, sd)
# setting the mode, it should remove extra aces
winsec.set_mode(0770, path)
# make sure it's gone
dacl = winsec.get_security_descriptor(path).dacl
aces = dacl.find_all { |ace| ace.sid == sids[:guest] }
aces.should be_empty
end
end
end
end
diff --git a/spec/integration/util/windows/user_spec.rb b/spec/integration/util/windows/user_spec.rb
index 0435b2cdc..4e873b34c 100755
--- a/spec/integration/util/windows/user_spec.rb
+++ b/spec/integration/util/windows/user_spec.rb
@@ -1,59 +1,125 @@
#! /usr/bin/env ruby
require 'spec_helper'
describe "Puppet::Util::Windows::User", :if => Puppet.features.microsoft_windows? do
describe "2003 without UAC" do
before :each do
Facter.stubs(:value).with(:kernelmajversion).returns("5.2")
end
it "should be an admin if user's token contains the Administrators SID" do
Puppet::Util::Windows::User.expects(:check_token_membership).returns(true)
- Win32::Security.expects(:elevated_security?).never
+ Puppet::Util::Windows::Process.expects(:elevated_security?).never
Puppet::Util::Windows::User.should be_admin
end
it "should not be an admin if user's token doesn't contain the Administrators SID" do
Puppet::Util::Windows::User.expects(:check_token_membership).returns(false)
- Win32::Security.expects(:elevated_security?).never
+ Puppet::Util::Windows::Process.expects(:elevated_security?).never
Puppet::Util::Windows::User.should_not be_admin
end
it "should raise an exception if we can't check token membership" do
- Puppet::Util::Windows::User.expects(:check_token_membership).raises(Win32::Security::Error, "Access denied.")
- Win32::Security.expects(:elevated_security?).never
+ Puppet::Util::Windows::User.expects(:check_token_membership).raises(Puppet::Util::Windows::Error, "Access denied.")
+ Puppet::Util::Windows::Process.expects(:elevated_security?).never
- lambda { Puppet::Util::Windows::User.admin? }.should raise_error(Win32::Security::Error, /Access denied./)
+ lambda { Puppet::Util::Windows::User.admin? }.should raise_error(Puppet::Util::Windows::Error, /Access denied./)
end
end
describe "2008 with UAC" do
before :each do
Facter.stubs(:value).with(:kernelmajversion).returns("6.0")
end
it "should be an admin if user is running with elevated privileges" do
- Win32::Security.stubs(:elevated_security?).returns(true)
+ Puppet::Util::Windows::Process.stubs(:elevated_security?).returns(true)
Puppet::Util::Windows::User.expects(:check_token_membership).never
Puppet::Util::Windows::User.should be_admin
end
it "should not be an admin if user is not running with elevated privileges" do
- Win32::Security.stubs(:elevated_security?).returns(false)
+ Puppet::Util::Windows::Process.stubs(:elevated_security?).returns(false)
Puppet::Util::Windows::User.expects(:check_token_membership).never
Puppet::Util::Windows::User.should_not be_admin
end
it "should raise an exception if the process fails to open the process token" do
- Win32::Security.stubs(:elevated_security?).raises(Win32::Security::Error, "Access denied.")
+ Puppet::Util::Windows::Process.stubs(:elevated_security?).raises(Puppet::Util::Windows::Error, "Access denied.")
Puppet::Util::Windows::User.expects(:check_token_membership).never
- lambda { Puppet::Util::Windows::User.admin? }.should raise_error(Win32::Security::Error, /Access denied./)
+ lambda { Puppet::Util::Windows::User.admin? }.should raise_error(Puppet::Util::Windows::Error, /Access denied./)
+ end
+ end
+
+ describe "module function" do
+ let(:username) { 'fabio' }
+ let(:bad_password) { 'goldilocks' }
+ let(:logon_fail_msg) { /Failed to logon user "fabio": Logon failure: unknown user name or bad password./ }
+
+ def expect_logon_failure_error(&block)
+ expect {
+ yield
+ }.to raise_error { |error|
+ expect(error).to be_a(Puppet::Util::Windows::Error)
+ # http://msdn.microsoft.com/en-us/library/windows/desktop/ms681385(v=vs.85).aspx
+ # ERROR_LOGON_FAILURE 1326
+ expect(error.code).to eq(1326)
+ }
+ end
+
+ describe "load_profile" do
+ it "should raise an error when provided with an incorrect username and password" do
+ expect_logon_failure_error {
+ Puppet::Util::Windows::User.load_profile(username, bad_password)
+ }
+ end
+
+ it "should raise an error when provided with an incorrect username and nil password" do
+ expect_logon_failure_error {
+ Puppet::Util::Windows::User.load_profile(username, nil)
+ }
+ end
+ end
+
+ describe "logon_user" do
+ it "should raise an error when provided with an incorrect username and password" do
+ expect_logon_failure_error {
+ Puppet::Util::Windows::User.logon_user(username, bad_password)
+ }
+ end
+
+ it "should raise an error when provided with an incorrect username and nil password" do
+ expect_logon_failure_error {
+ Puppet::Util::Windows::User.logon_user(username, nil)
+ }
+ end
+ end
+
+ describe "password_is?" do
+ it "should return false given an incorrect username and password" do
+ Puppet::Util::Windows::User.password_is?(username, bad_password).should be_false
+ end
+
+ it "should return false given an incorrect username and nil password" do
+ Puppet::Util::Windows::User.password_is?(username, nil).should be_false
+ end
+
+ it "should return false given a nil username and an incorrect password" do
+ Puppet::Util::Windows::User.password_is?(nil, bad_password).should be_false
+ end
+ end
+
+ describe "check_token_membership" do
+ it "should not raise an error" do
+ # added just to call an FFI code path on all platforms
+ lambda { Puppet::Util::Windows::User.check_token_membership }.should_not raise_error
+ end
end
end
end
diff --git a/spec/integration/util_spec.rb b/spec/integration/util_spec.rb
index d8d96aad8..84f155123 100755
--- a/spec/integration/util_spec.rb
+++ b/spec/integration/util_spec.rb
@@ -1,111 +1,111 @@
#!/usr/bin/env ruby
require 'spec_helper'
describe Puppet::Util do
include PuppetSpec::Files
describe "#execute" do
it "should properly allow stdout and stderr to share a file" do
command = "ruby -e '(1..10).each {|i| (i%2==0) ? $stdout.puts(i) : $stderr.puts(i)}'"
Puppet::Util::Execution.execute(command, :combine => true).split.should =~ [*'1'..'10']
end
it "should return output and set $CHILD_STATUS" do
command = "ruby -e 'puts \"foo\"; exit 42'"
output = Puppet::Util::Execution.execute(command, {:failonfail => false})
output.should == "foo\n"
$CHILD_STATUS.exitstatus.should == 42
end
it "should raise an error if non-zero exit status is returned" do
command = "ruby -e 'exit 43'"
expect { Puppet::Util::Execution.execute(command) }.to raise_error(Puppet::ExecutionFailure, /Execution of '#{command}' returned 43: /)
$CHILD_STATUS.exitstatus.should == 43
end
it "replace_file should preserve original ACEs from existing replaced file on Windows",
:if => Puppet.features.microsoft_windows? do
file = tmpfile("somefile")
FileUtils.touch(file)
admins = 'S-1-5-32-544'
dacl = Puppet::Util::Windows::AccessControlList.new
- dacl.allow(admins, Windows::File::FILE_ALL_ACCESS)
+ dacl.allow(admins, Puppet::Util::Windows::File::FILE_ALL_ACCESS)
protect = true
expected_sd = Puppet::Util::Windows::SecurityDescriptor.new(admins, admins, dacl, protect)
Puppet::Util::Windows::Security.set_security_descriptor(file, expected_sd)
ignored_mode = 0644
Puppet::Util.replace_file(file, ignored_mode) do |temp_file|
ignored_sd = Puppet::Util::Windows::Security.get_security_descriptor(temp_file.path)
users = 'S-1-5-11'
- ignored_sd.dacl.allow(users, Windows::File::FILE_GENERIC_READ)
+ ignored_sd.dacl.allow(users, Puppet::Util::Windows::File::FILE_GENERIC_READ)
Puppet::Util::Windows::Security.set_security_descriptor(temp_file.path, ignored_sd)
end
replaced_sd = Puppet::Util::Windows::Security.get_security_descriptor(file)
replaced_sd.dacl.should == expected_sd.dacl
end
it "replace_file should use reasonable default ACEs on a new file on Windows",
:if => Puppet.features.microsoft_windows? do
dir = tmpdir('DACL_playground')
protected_sd = Puppet::Util::Windows::Security.get_security_descriptor(dir)
protected_sd.protect = true
Puppet::Util::Windows::Security.set_security_descriptor(dir, protected_sd)
sibling_path = File.join(dir, 'sibling_file')
FileUtils.touch(sibling_path)
expected_sd = Puppet::Util::Windows::Security.get_security_descriptor(sibling_path)
new_file_path = File.join(dir, 'new_file')
ignored_mode = nil
Puppet::Util.replace_file(new_file_path, ignored_mode) { |tmp_file| }
new_sd = Puppet::Util::Windows::Security.get_security_descriptor(new_file_path)
new_sd.dacl.should == expected_sd.dacl
end
end
it "replace_file should work with filenames that include - and . (PUP-1389)", :if => Puppet.features.microsoft_windows? do
expected_content = 'some content'
dir = tmpdir('ReplaceFile_playground')
destination_file = File.join(dir, 'some-file.xml')
Puppet::Util.replace_file(destination_file, nil) do |temp_file|
temp_file.open
temp_file.write(expected_content)
end
actual_content = File.read(destination_file)
actual_content.should == expected_content
end
it "replace_file should work with filenames that include special characters (PUP-1389)", :if => Puppet.features.microsoft_windows? do
expected_content = 'some content'
dir = tmpdir('ReplaceFile_playground')
# http://www.fileformat.info/info/unicode/char/00e8/index.htm
# dest_name = "somèfile.xml"
dest_name = "som\u00E8file.xml"
destination_file = File.join(dir, dest_name)
Puppet::Util.replace_file(destination_file, nil) do |temp_file|
temp_file.open
temp_file.write(expected_content)
end
actual_content = File.read(destination_file)
actual_content.should == expected_content
end
end
diff --git a/spec/lib/matchers/resource.rb b/spec/lib/matchers/resource.rb
index 4498f959e..fc305d0d2 100644
--- a/spec/lib/matchers/resource.rb
+++ b/spec/lib/matchers/resource.rb
@@ -1,67 +1,68 @@
module Matchers; module Resource
extend RSpec::Matchers::DSL
matcher :be_resource do |expected_resource|
@params = {}
match do |actual_resource|
matched = true
failures = []
if actual_resource.ref != expected_resource
matched = false
failures << "expected #{expected_resource} but was #{actual_resource.ref}"
end
@params.each do |name, value|
case value
when RSpec::Matchers::DSL::Matcher
if !value.matches?(actual_resource[name])
matched = false
failures << "expected #{name} to match '#{value.description}' but was '#{actual_resource[name]}'"
end
else
if actual_resource[name] != value
matched = false
failures << "expected #{name} to be '#{value}' but was '#{actual_resource[name]}'"
end
end
end
@mismatch = failures.join("\n")
matched
end
chain :with_parameter do |name, value|
@params[name] = value
end
def failure_message_for_should
@mismatch
end
end
module_function :be_resource
matcher :have_resource do |expected_resource|
@params = {}
@matcher = Matchers::Resource.be_resource(expected_resource)
match do |actual_catalog|
@mismatch = ""
if resource = actual_catalog.resource(expected_resource)
@matcher.matches?(resource)
else
@mismatch = "expected #{@actual.to_dot} to include #{@expected[0]}"
false
end
end
chain :with_parameter do |name, value|
@matcher.with_parameter(name, value)
end
def failure_message_for_should
@mismatch.empty? ? @matcher.failure_message_for_should : @mismatch
end
end
+ module_function :have_resource
end; end
diff --git a/spec/lib/puppet_spec/compiler.rb b/spec/lib/puppet_spec/compiler.rb
index cf4851807..4757705aa 100644
--- a/spec/lib/puppet_spec/compiler.rb
+++ b/spec/lib/puppet_spec/compiler.rb
@@ -1,36 +1,49 @@
module PuppetSpec::Compiler
+ module_function
+
def compile_to_catalog(string, node = Puppet::Node.new('foonode'))
Puppet[:code] = string
- Puppet::Parser::Compiler.compile(node)
+ # see lib/puppet/indirector/catalog/compiler.rb#filter
+ Puppet::Parser::Compiler.compile(node).filter { |r| r.virtual? }
end
def compile_to_ral(manifest)
catalog = compile_to_catalog(manifest)
ral = catalog.to_ral
ral.finalize
ral
end
def compile_to_relationship_graph(manifest, prioritizer = Puppet::Graph::SequentialPrioritizer.new)
ral = compile_to_ral(manifest)
graph = Puppet::Graph::RelationshipGraph.new(prioritizer)
graph.populate_from(ral)
graph
end
def apply_compiled_manifest(manifest, prioritizer = Puppet::Graph::SequentialPrioritizer.new)
- transaction = Puppet::Transaction.new(compile_to_ral(manifest),
+ catalog = compile_to_ral(manifest)
+ if block_given?
+ catalog.resources.each { |res| yield res }
+ end
+ transaction = Puppet::Transaction.new(catalog,
Puppet::Transaction::Report.new("apply"),
prioritizer)
transaction.evaluate
transaction.report.finalize_report
transaction
end
+ def apply_with_error_check(manifest)
+ apply_compiled_manifest(manifest) do |res|
+ res.expects(:err).never
+ end
+ end
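# Hypothetical usage of the helper above (illustration only): apply a manifest
# and fail the example if any resource logs an error during the transaction:
#   apply_with_error_check("notify { 'hello': }")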
+
def order_resources_traversed_in(relationships)
order_seen = []
relationships.traverse { |resource| order_seen << resource.ref }
order_seen
end
end
diff --git a/spec/lib/puppet_spec/files.rb b/spec/lib/puppet_spec/files.rb
index 1e1076b91..312c4fc95 100755
--- a/spec/lib/puppet_spec/files.rb
+++ b/spec/lib/puppet_spec/files.rb
@@ -1,78 +1,88 @@
require 'fileutils'
require 'tempfile'
require 'tmpdir'
require 'pathname'
# A support module for testing files.
module PuppetSpec::Files
def self.cleanup
$global_tempfiles ||= []
while path = $global_tempfiles.pop do
begin
Dir.unstub(:entries)
FileUtils.rm_rf path, :secure => true
rescue Errno::ENOENT
# nothing to do
end
end
end
def make_absolute(path) PuppetSpec::Files.make_absolute(path) end
def self.make_absolute(path)
path = File.expand_path(path)
path[0] = 'c' if Puppet.features.microsoft_windows?
path
end
def tmpfile(name, dir = nil) PuppetSpec::Files.tmpfile(name, dir) end
def self.tmpfile(name, dir = nil)
# Generate a temporary file, just for the name...
source = dir ? Tempfile.new(name, dir) : Tempfile.new(name)
path = source.path
source.close!
record_tmp(File.expand_path(path))
path
end
def file_containing(name, contents) PuppetSpec::Files.file_containing(name, contents) end
def self.file_containing(name, contents)
file = tmpfile(name)
File.open(file, 'wb') { |f| f.write(contents) }
file
end
def tmpdir(name) PuppetSpec::Files.tmpdir(name) end
def self.tmpdir(name)
dir = Dir.mktmpdir(name)
record_tmp(dir)
dir
end
def dir_containing(name, contents_hash) PuppetSpec::Files.dir_containing(name, contents_hash) end
def self.dir_containing(name, contents_hash)
dir_contained_in(tmpdir(name), contents_hash)
end
def self.dir_contained_in(dir, contents_hash)
contents_hash.each do |k,v|
if v.is_a?(Hash)
Dir.mkdir(tmp = File.join(dir,k))
dir_contained_in(tmp, v)
else
file = File.join(dir, k)
File.open(file, 'wb') {|f| f.write(v) }
end
end
dir
end
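# Illustrative usage (not part of the patch): a nested hash maps to a
# directory tree, e.g.
#   dir_containing('modules', 'foo' => { 'manifests' => { 'init.pp' => 'class foo {}' } })
# yields a temp dir containing foo/manifests/init.pp with that content.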
def self.record_tmp(tmp)
# ...record it for cleanup,
$global_tempfiles ||= []
$global_tempfiles << tmp
end
+
+ def expect_file_mode(file, mode)
+ actual_mode = "%o" % Puppet::FileSystem.stat(file).mode
+ target_mode = if Puppet.features.microsoft_windows?
+ mode
+ else
+ "10" + "%04i" % mode.to_i
+ end
+ actual_mode.should == target_mode
+ end
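# Illustrative usage (not part of the patch): on POSIX a regular file with
# permissions 0644 reports a stat mode of 0100644, so
#   expect_file_mode(file, '644')
# compares "100644" against "10" + "0644".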
end
diff --git a/spec/lib/puppet_spec/language.rb b/spec/lib/puppet_spec/language.rb
new file mode 100644
index 000000000..197c4b827
--- /dev/null
+++ b/spec/lib/puppet_spec/language.rb
@@ -0,0 +1,74 @@
+require 'puppet_spec/compiler'
+require 'matchers/resource'
+
+module PuppetSpec::Language
+ extend RSpec::Matchers::DSL
+
+ def produces(expectations)
+ calledFrom = caller
+ expectations.each do |manifest, resources|
+ it "evaluates #{manifest} to produce #{resources}" do
+ begin
+ case resources
+ when String
+ node = Puppet::Node.new('specification')
+ Puppet[:code] = manifest
+ compiler = Puppet::Parser::Compiler.new(node)
+ evaluator = Puppet::Pops::Parser::EvaluatingParser.new()
+
+ # see lib/puppet/indirector/catalog/compiler.rb#filter
+ catalog = compiler.compile.filter { |r| r.virtual? }
+
+ compiler.send(:instance_variable_set, :@catalog, catalog)
+
+ expect(evaluator.evaluate_string(compiler.topscope, resources)).to eq(true)
+ when Array
+ catalog = PuppetSpec::Compiler.compile_to_catalog(manifest)
+
+ if resources.empty?
+ base_resources = ["Class[Settings]", "Class[main]", "Stage[main]"]
+ expect(catalog.resources.collect(&:ref) - base_resources).to eq([])
+ else
+ resources.each do |reference|
+ if reference.is_a?(Array)
+ matcher = Matchers::Resource.have_resource(reference[0])
+ reference[1].each do |name, value|
+ matcher = matcher.with_parameter(name, value)
+ end
+ else
+ matcher = Matchers::Resource.have_resource(reference)
+ end
+
+ expect(catalog).to matcher
+ end
+ end
+ else
+ raise "Unsupported creates specification: #{resources.inspect}"
+ end
+ rescue Puppet::Error, RSpec::Expectations::ExpectationNotMetError => e
+ # provide the backtrace from the caller, or it is close to impossible to find some originators
+ e.set_backtrace(calledFrom)
+ raise
+ end
+
+ end
+ end
+ end
+
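# Hypothetical usage of produces (illustration only), from an example group
# that extends PuppetSpec::Language:
#   produces("notify { 'hi': }" => ["Notify[hi]"])
#   produces("$x = 2 + 2" => "$x == 4")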
+ def fails(expectations)
+ calledFrom = caller
+ expectations.each do |manifest, pattern|
+ it "fails to evaluate #{manifest} with message #{pattern}" do
+ begin
+ expect do
+ PuppetSpec::Compiler.compile_to_catalog(manifest)
+ end.to raise_error(Puppet::Error, pattern)
+ rescue RSpec::Expectations::ExpectationNotMetError => e
+ # provide the backtrace from the caller, or it is close to impossible to find some originators
+ e.set_backtrace(calledFrom)
+ raise
+ end
+ end
+ end
+ end
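# Hypothetical usage of fails (illustration only):
#   fails("notify { 'dup': } notify { 'dup': }" => /Duplicate declaration/)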
+end
diff --git a/spec/lib/puppet_spec/matchers.rb b/spec/lib/puppet_spec/matchers.rb
index 448bd1811..c01d8e89d 100644
--- a/spec/lib/puppet_spec/matchers.rb
+++ b/spec/lib/puppet_spec/matchers.rb
@@ -1,120 +1,147 @@
require 'stringio'
########################################################################
# Backward compatibility for Jenkins outdated environment.
module RSpec
module Matchers
module BlockAliases
alias_method :to, :should unless method_defined? :to
alias_method :to_not, :should_not unless method_defined? :to_not
alias_method :not_to, :should_not unless method_defined? :not_to
end
end
end
########################################################################
# Custom matchers...
RSpec::Matchers.define :have_matching_element do |expected|
match do |actual|
actual.any? { |item| item =~ expected }
end
end
RSpec::Matchers.define :exit_with do |expected|
actual = nil
match do |block|
begin
block.call
rescue SystemExit => e
actual = e.status
end
actual and actual == expected
end
failure_message_for_should do |block|
"expected exit with code #{expected} but " +
(actual.nil? ? " exit was not called" : "we exited with #{actual} instead")
end
failure_message_for_should_not do |block|
"expected that exit would not be called with #{expected}"
end
description do
"expect exit with #{expected}"
end
end
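# Illustrative usage (not part of the patch):
#   expect { exit 2 }.to exit_with(2)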
-class HavePrintedMatcher
- attr_accessor :expected, :actual
- def initialize(expected)
- case expected
+RSpec::Matchers.define :have_printed do |expected|
+
+ case expected
when String, Regexp
- @expected = expected
+ expected = expected
else
- @expected = expected.to_s
+ expected = expected.to_s
+ end
+
+ chain :and_exit_with do |code|
+ @expected_exit_code = code
+ end
+
+ define_method :matches_exit_code? do |actual|
+ @expected_exit_code.nil? || @expected_exit_code == actual
+ end
+
+ define_method :matches_output? do |actual|
+ return false unless actual
+ case expected
+ when String
+ actual.include?(expected)
+ when Regexp
+ expected.match(actual)
+ else
+ raise ArgumentError, "No idea how to match a #{actual.class.name}"
end
end
- def matches?(block)
+ match do |block|
+ $stderr = $stdout = StringIO.new
+ $stdout.set_encoding('UTF-8') if $stdout.respond_to?(:set_encoding)
+
begin
- $stderr = $stdout = StringIO.new
- $stdout.set_encoding('UTF-8') if $stdout.respond_to?(:set_encoding)
block.call
+ rescue SystemExit => e
+ raise unless @expected_exit_code
+ @actual_exit_code = e.status
+ ensure
$stdout.rewind
@actual = $stdout.read
- ensure
+
$stdout = STDOUT
$stderr = STDERR
end
- if @actual then
- case @expected
- when String
- @actual.include? @expected
- when Regexp
- @expected.match @actual
- end
- else
- false
- end
+ matches_output?(@actual) && matches_exit_code?(@actual_exit_code)
end
- def failure_message_for_should
- if @actual.nil? then
- "expected #{@expected.inspect}, but nothing was printed"
+ failure_message_for_should do |actual|
+ if actual.nil? then
+ "expected #{expected.inspect}, but nothing was printed"
else
- "expected #{@expected.inspect} to be printed; got:\n#{@actual}"
+ if !@expected_exit_code.nil? && matches_output?(actual)
+ "expected exit with code #{@expected_exit_code} but " +
+ (@actual_exit_code.nil? ? " exit was not called" : "exited with #{@actual_exit_code} instead")
+ else
+ "expected #{expected.inspect} to be printed; got:\n#{actual}"
+ end
end
end
- def failure_message_for_should_not
- "expected #{@expected.inspect} to not be printed; got:\n#{@actual}"
+ failure_message_for_should_not do |actual|
+ if @expected_exit_code && matches_exit_code?(@actual_exit_code)
+ "expected exit code to not be #{@actual_exit_code}"
+ else
+ "expected #{expected.inspect} to not be printed; got:\n#{actual}"
+ end
end
- def description
- "expect #{@expected.inspect} to be printed"
+ description do
+ "expect #{expected.inspect} to be printed" + (@expected_exit_code.nil ? '' : " with exit code #{@expected_exit_code}")
end
end
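# Illustrative usage of the rewritten matcher (not part of the patch); the new
# :and_exit_with chain covers output and exit status in one expectation:
#   expect { puts 'done'; exit 0 }.to have_printed('done').and_exit_with(0)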
-def have_printed(what)
- HavePrintedMatcher.new(what)
-end
-
RSpec::Matchers.define :equal_attributes_of do |expected|
match do |actual|
actual.instance_variables.all? do |attr|
actual.instance_variable_get(attr) == expected.instance_variable_get(attr)
end
end
end
+RSpec::Matchers.define :equal_resource_attributes_of do |expected|
+ match do |actual|
+ actual.keys.all? do |attr|
+ actual[attr] == expected[attr]
+ end
+ end
+end
+
RSpec::Matchers.define :be_one_of do |*expected|
match do |actual|
expected.include? actual
end
failure_message_for_should do |actual|
"expected #{actual.inspect} to be one of #{expected.map(&:inspect).join(' or ')}"
end
end
diff --git a/spec/lib/puppet_spec/module_tool/stub_source.rb b/spec/lib/puppet_spec/module_tool/stub_source.rb
index 5cee57c10..8b5f8bbe1 100644
--- a/spec/lib/puppet_spec/module_tool/stub_source.rb
+++ b/spec/lib/puppet_spec/module_tool/stub_source.rb
@@ -1,133 +1,136 @@
module PuppetSpec
module ModuleTool
class StubSource < Semantic::Dependency::Source
def inspect; "Stub Source"; end
def host
"http://nowhe.re"
end
def fetch(name)
available_releases[name.tr('/', '-')].values
end
def available_releases
return @available_releases if defined? @available_releases
@available_releases = {
'puppetlabs-java' => {
'10.0.0' => { 'puppetlabs/stdlib' => '4.1.0' },
},
'puppetlabs-stdlib' => {
'4.1.0' => {},
},
'pmtacceptance-stdlib' => {
"4.1.0" => {},
"3.2.0" => {},
"3.1.0" => {},
"3.0.0" => {},
"2.6.0" => {},
"2.5.1" => {},
"2.5.0" => {},
"2.4.0" => {},
"2.3.2" => {},
"2.3.1" => {},
"2.3.0" => {},
"2.2.1" => {},
"2.2.0" => {},
"2.1.3" => {},
"2.0.0" => {},
"1.1.0" => {},
"1.0.0" => {},
},
'pmtacceptance-keystone' => {
'3.0.0-rc2' => { "pmtacceptance/mysql" => ">=0.6.1 <1.0.0", "pmtacceptance/stdlib" => ">= 2.5.0" },
'3.0.0-rc1' => { "pmtacceptance/mysql" => ">=0.6.1 <1.0.0", "pmtacceptance/stdlib" => ">= 2.5.0" },
'2.2.0' => { "pmtacceptance/mysql" => ">=0.6.1 <1.0.0", "pmtacceptance/stdlib" => ">= 2.5.0" },
'2.2.0-rc1' => { "pmtacceptance/mysql" => ">=0.6.1 <1.0.0", "pmtacceptance/stdlib" => ">= 2.5.0" },
'2.1.0' => { "pmtacceptance/mysql" => ">=0.6.1 <1.0.0", "pmtacceptance/stdlib" => ">= 2.5.0" },
'2.0.0' => { "pmtacceptance/mysql" => ">= 0.6.1" },
'1.2.0' => { "pmtacceptance/mysql" => ">= 0.5.0" },
'1.1.1' => { "pmtacceptance/mysql" => ">= 0.5.0" },
'1.1.0' => { "pmtacceptance/mysql" => ">= 0.5.0" },
'1.0.1' => { "pmtacceptance/mysql" => ">= 0.5.0" },
'1.0.0' => { "pmtacceptance/mysql" => ">= 0.5.0" },
'0.2.0' => { "pmtacceptance/mysql" => ">= 0.5.0" },
'0.1.0' => { "pmtacceptance/mysql" => ">= 0.3.0" },
},
'pmtacceptance-mysql' => {
"2.1.0" => { "pmtacceptance/stdlib" => ">= 2.2.1" },
"2.0.1" => { "pmtacceptance/stdlib" => ">= 2.2.1" },
"2.0.0" => { "pmtacceptance/stdlib" => ">= 2.2.1" },
"2.0.0-rc5" => { "pmtacceptance/stdlib" => ">= 2.2.1" },
"2.0.0-rc4" => { "pmtacceptance/stdlib" => ">= 2.2.1" },
"2.0.0-rc3" => { "pmtacceptance/stdlib" => ">= 2.2.1" },
"2.0.0-rc2" => { "pmtacceptance/stdlib" => ">= 2.2.1" },
"2.0.0-rc1" => { "pmtacceptance/stdlib" => ">= 2.2.1" },
"1.0.0" => { "pmtacceptance/stdlib" => ">= 2.2.1" },
"0.9.0" => { "pmtacceptance/stdlib" => ">= 2.2.1" },
"0.8.1" => { "pmtacceptance/stdlib" => ">= 2.2.1" },
"0.8.0" => { "pmtacceptance/stdlib" => ">= 2.2.1" },
"0.7.1" => { "pmtacceptance/stdlib" => ">= 2.2.1" },
"0.7.0" => { "pmtacceptance/stdlib" => ">= 2.2.1" },
"0.6.1" => { "pmtacceptance/stdlib" => ">= 2.2.1" },
"0.6.0" => { "pmtacceptance/stdlib" => ">= 2.2.1" },
"0.5.0" => { "pmtacceptance/stdlib" => ">= 2.2.1" },
"0.4.0" => {},
"0.3.0" => {},
"0.2.0" => {},
},
'pmtacceptance-apache' => {
"0.10.0" => { "pmtacceptance/stdlib" => ">= 2.4.0" },
"0.9.0" => { "pmtacceptance/stdlib" => ">= 2.4.0" },
"0.8.1" => { "pmtacceptance/stdlib" => ">= 2.2.1" },
"0.8.0" => { "pmtacceptance/stdlib" => ">= 2.2.1" },
"0.7.0" => { "pmtacceptance/stdlib" => ">= 2.2.1" },
"0.6.0" => { "pmtacceptance/stdlib" => ">= 2.2.1" },
"0.5.0-rc1" => { "pmtacceptance/stdlib" => ">= 2.2.1" },
"0.4.0" => { "pmtacceptance/stdlib" => ">= 2.2.1" },
"0.3.0" => { "pmtacceptance/stdlib" => ">= 2.2.1" },
"0.2.2" => { "pmtacceptance/stdlib" => ">= 2.2.1" },
"0.2.1" => { "pmtacceptance/stdlib" => ">= 2.2.1" },
"0.2.0" => { "pmtacceptance/stdlib" => ">= 2.2.1" },
"0.1.1" => { "pmtacceptance/stdlib" => ">= 2.2.1" },
"0.0.4" => {},
"0.0.3" => {},
"0.0.2" => {},
"0.0.1" => {},
},
'pmtacceptance-bacula' => {
"0.0.3" => { "pmtacceptance/stdlib" => ">= 2.2.0", "pmtacceptance/mysql" => ">= 1.0.0" },
"0.0.2" => { "pmtacceptance/stdlib" => ">= 2.2.0", "pmtacceptance/mysql" => ">= 0.0.1" },
"0.0.1" => { "pmtacceptance/stdlib" => ">= 2.2.0" },
},
+ 'puppetlabs-oneversion' => {
+ "0.0.1" => {}
+ }
}
@available_releases.each do |name, versions|
versions.each do |version, deps|
deps, metadata = deps.partition { |k,v| k.is_a? String }
dependencies = Hash[deps.map { |k, v| [ k.tr('/', '-'), v ] }]
versions[version] = create_release(name, version, dependencies).tap do |release|
release.meta_def(:prepare) { }
release.meta_def(:install) { |x| @install_dir = x.to_s }
release.meta_def(:install_dir) { @install_dir }
release.meta_def(:metadata) do
metadata = Hash[metadata].merge(
:name => name,
:version => version,
:source => '', # GRR, Puppet!
:author => '', # GRR, Puppet!
:license => '', # GRR, Puppet!
:dependencies => dependencies.map do |dep, range|
{ :name => dep, :version_requirement => range }
end
)
Hash[metadata.map { |k,v| [ k.to_s, v ] }]
end
end
end
end
end
end
end
end
diff --git a/spec/unit/indirector/data_binding/hiera_spec.rb b/spec/shared_behaviours/hiera_indirections.rb
similarity index 71%
copy from spec/unit/indirector/data_binding/hiera_spec.rb
copy to spec/shared_behaviours/hiera_indirections.rb
index 7ab3b27fc..fe6dc35ad 100644
--- a/spec/unit/indirector/data_binding/hiera_spec.rb
+++ b/spec/shared_behaviours/hiera_indirections.rb
@@ -1,114 +1,99 @@
-require 'spec_helper'
-require 'puppet/indirector/data_binding/hiera'
-
-describe Puppet::DataBinding::Hiera do
+shared_examples_for "Hiera indirection" do |test_klass, fixture_dir|
include PuppetSpec::Files
def write_hiera_config(config_file, datadir)
File.open(config_file, 'w') do |f|
f.write("---
:yaml:
:datadir: #{datadir}
:hierarchy: ['global', 'invalid']
:logger: 'noop'
:backends: ['yaml']
")
end
end
def request(key)
Puppet::Indirector::Request.new(:hiera, :find, key, nil)
end
before do
hiera_config_file = tmpfile("hiera.yaml")
Puppet.settings[:hiera_config] = hiera_config_file
- write_hiera_config(hiera_config_file, my_fixture_dir)
+ write_hiera_config(hiera_config_file, fixture_dir)
end
after do
- Puppet::DataBinding::Hiera.instance_variable_set(:@hiera, nil)
- end
-
- it "should have documentation" do
- Puppet::DataBinding::Hiera.doc.should_not be_nil
+ test_klass.instance_variable_set(:@hiera, nil)
end
- it "should be registered with the data_binding indirection" do
- indirection = Puppet::Indirector::Indirection.instance(:data_binding)
- Puppet::DataBinding::Hiera.indirection.should equal(indirection)
- end
-
- it "should have its name set to :hiera" do
- Puppet::DataBinding::Hiera.name.should == :hiera
- end
it "should be the default data_binding terminus" do
Puppet.settings[:data_binding_terminus].should == :hiera
end
it "should raise an error if we don't have the hiera feature" do
Puppet.features.expects(:hiera?).returns(false)
- lambda { Puppet::DataBinding::Hiera.new }.should raise_error RuntimeError,
+ lambda { test_klass.new }.should raise_error RuntimeError,
"Hiera terminus not supported without hiera library"
end
describe "the behavior of the hiera_config method", :if => Puppet.features.hiera? do
it "should override the logger and set it to puppet" do
- Puppet::DataBinding::Hiera.hiera_config[:logger].should == "puppet"
+ test_klass.hiera_config[:logger].should == "puppet"
end
context "when the Hiera configuration file does not exist" do
let(:path) { File.expand_path('/doesnotexist') }
before do
Puppet.settings[:hiera_config] = path
end
it "should log a warning" do
Puppet.expects(:warning).with(
"Config file #{path} not found, using Hiera defaults")
- Puppet::DataBinding::Hiera.hiera_config
+ test_klass.hiera_config
end
it "should only configure the logger and set it to puppet" do
Puppet.expects(:warning).with(
"Config file #{path} not found, using Hiera defaults")
- Puppet::DataBinding::Hiera.hiera_config.should == { :logger => 'puppet' }
+ test_klass.hiera_config.should == { :logger => 'puppet' }
end
end
end
describe "the behavior of the find method", :if => Puppet.features.hiera? do
- let(:data_binder) { Puppet::DataBinding::Hiera.new }
+ let(:data_binder) { test_klass.new }
it "should support looking up an integer" do
data_binder.find(request("integer")).should == 3000
end
it "should support looking up a string" do
data_binder.find(request("string")).should == 'apache'
end
it "should support looking up an array" do
data_binder.find(request("array")).should == [
'0.ntp.puppetlabs.com',
'1.ntp.puppetlabs.com',
]
end
it "should support looking up a hash" do
data_binder.find(request("hash")).should == {
'user' => 'Hightower',
'group' => 'admin',
'mode' => '0644'
}
end
it "raises a data binding error if hiera cannot parse the yaml data" do
expect do
data_binder.find(request('invalid'))
end.to raise_error(Puppet::DataBinding::LookupError)
end
end
end
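For reference, a spec that consumes the "Hiera indirection" shared behaviour above would look roughly like the sketch below. The it_behaves_like call is standard RSpec; the fixture directory argument and the assumption that spec_helper loads files under spec/shared_behaviours are illustrative, not taken from the patch.
# Hypothetical consumer of the "Hiera indirection" shared examples above.
require 'spec_helper'
require 'puppet/indirector/data_binding/hiera'

describe Puppet::DataBinding::Hiera do
  # The datadir below is a placeholder; real callers pass their own fixture directory.
  it_behaves_like "Hiera indirection", Puppet::DataBinding::Hiera, '/path/to/yaml/fixtures'
end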
diff --git a/spec/shared_behaviours/iterative_functions.rb b/spec/shared_behaviours/iterative_functions.rb
new file mode 100644
index 000000000..1910a3c80
--- /dev/null
+++ b/spec/shared_behaviours/iterative_functions.rb
@@ -0,0 +1,69 @@
+
+shared_examples_for 'all iterative functions hash handling' do |func|
+ it 'passes a hash entry as an array of the key and value' do
+ catalog = compile_to_catalog(<<-MANIFEST)
+ {a=>1}.#{func} |$v| { notify { "${v[0]} ${v[1]}": } }
+ MANIFEST
+
+ catalog.resource(:notify, "a 1").should_not be_nil
+ end
+end
+
+shared_examples_for 'all iterative functions argument checks' do |func|
+
+ it 'raises an error when used against an unsupported type' do
+ expect do
+ compile_to_catalog(<<-MANIFEST)
+ 3.14.#{func} |$k, $v| { }
+ MANIFEST
+ end.to raise_error(Puppet::Error, /must be something enumerable/)
+ end
+
+ it 'raises an error when called with any parameters besides a block' do
+ expect do
+ compile_to_catalog(<<-MANIFEST)
+ [1].#{func}(1) |$v| { }
+ MANIFEST
+ end.to raise_error(Puppet::Error, /mis-matched arguments.*expected.*arg count \{2\}.*actual.*arg count \{3\}/m)
+ end
+
+ it 'raises an error when called without a block' do
+ expect do
+ compile_to_catalog(<<-MANIFEST)
+ [1].#{func}()
+ MANIFEST
+ end.to raise_error(Puppet::Error, /mis-matched arguments.*expected.*arg count \{2\}.*actual.*arg count \{1\}/m)
+ end
+
+ it 'raises an error when called with something that is not a block' do
+ expect do
+ compile_to_catalog(<<-MANIFEST)
+ [1].#{func}(1)
+ MANIFEST
+ end.to raise_error(Puppet::Error, /mis-matched arguments.*expected.*Callable.*actual(?!Callable\)).*/m)
+ end
+
+ it 'raises an error when called with a block with too many required parameters' do
+ expect do
+ compile_to_catalog(<<-MANIFEST)
+ [1].#{func}() |$v1, $v2, $v3| { }
+ MANIFEST
+ end.to raise_error(Puppet::Error, /mis-matched arguments.*expected.*arg count \{2\}.*actual.*Callable\[Any, Any, Any\]/m)
+ end
+
+ it 'raises an error when called with a block with too few parameters' do
+ expect do
+ compile_to_catalog(<<-MANIFEST)
+ [1].#{func}() | | { }
+ MANIFEST
+ end.to raise_error(Puppet::Error, /mis-matched arguments.*expected.*arg count \{2\}.*actual.*Callable\[0, 0\]/m)
+ end
+
+ it 'does not raise an error when called with a block with too many but optional arguments' do
+ expect do
+ compile_to_catalog(<<-MANIFEST)
+ [1].#{func}() |$v1, $v2, $v3=extra| { }
+ MANIFEST
+ end.to_not raise_error
+ end
+end
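A function spec would pull the iterative-function shared examples in roughly as follows; the 'each' function name and the PuppetSpec::Compiler include (which is assumed to provide the compile_to_catalog helper the shared examples call) are illustrative assumptions.
# Hypothetical consumer of the iterative-function shared examples above.
require 'spec_helper'
require 'puppet_spec/compiler'

describe 'an iterative function such as each' do
  include PuppetSpec::Compiler  # assumed to provide compile_to_catalog

  it_behaves_like 'all iterative functions hash handling', 'each'
  it_behaves_like 'all iterative functions argument checks', 'each'
end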
diff --git a/spec/unit/application/apply_spec.rb b/spec/unit/application/apply_spec.rb
index de4784fbe..ec6afee66 100755
--- a/spec/unit/application/apply_spec.rb
+++ b/spec/unit/application/apply_spec.rb
@@ -1,458 +1,460 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet/application/apply'
require 'puppet/file_bucket/dipper'
require 'puppet/configurer'
require 'fileutils'
describe Puppet::Application::Apply do
before :each do
@apply = Puppet::Application[:apply]
Puppet::Util::Log.stubs(:newdestination)
Puppet[:reports] = "none"
end
after :each do
Puppet::Node::Facts.indirection.reset_terminus_class
Puppet::Node::Facts.indirection.cache_class = nil
Puppet::Node.indirection.reset_terminus_class
Puppet::Node.indirection.cache_class = nil
end
[:debug,:loadclasses,:test,:verbose,:use_nodes,:detailed_exitcodes,:catalog, :write_catalog_summary].each do |option|
it "should declare handle_#{option} method" do
@apply.should respond_to("handle_#{option}".to_sym)
end
it "should store argument value when calling handle_#{option}" do
@apply.options.expects(:[]=).with(option, 'arg')
@apply.send("handle_#{option}".to_sym, 'arg')
end
end
it "should set the code to the provided code when :execute is used" do
@apply.options.expects(:[]=).with(:code, 'arg')
@apply.send("handle_execute".to_sym, 'arg')
end
describe "when applying options" do
it "should set the log destination with --logdest" do
Puppet::Log.expects(:newdestination).with("console")
@apply.handle_logdest("console")
end
it "should set the setdest options to true" do
@apply.options.expects(:[]=).with(:setdest,true)
@apply.handle_logdest("console")
end
end
describe "during setup" do
before :each do
Puppet::Log.stubs(:newdestination)
Puppet::FileBucket::Dipper.stubs(:new)
STDIN.stubs(:read)
Puppet::Transaction::Report.indirection.stubs(:cache_class=)
end
describe "with --test" do
it "should call setup_test" do
@apply.options[:test] = true
@apply.expects(:setup_test)
@apply.setup
end
it "should set options[:verbose] to true" do
@apply.setup_test
@apply.options[:verbose].should == true
end
it "should set options[:show_diff] to true" do
Puppet.settings.override_default(:show_diff, false)
@apply.setup_test
Puppet[:show_diff].should == true
end
it "should set options[:detailed_exitcodes] to true" do
@apply.setup_test
@apply.options[:detailed_exitcodes].should == true
end
end
it "should set console as the log destination if logdest option wasn't provided" do
Puppet::Log.expects(:newdestination).with(:console)
@apply.setup
end
it "should set INT trap" do
Signal.expects(:trap).with(:INT)
@apply.setup
end
it "should set log level to debug if --debug was passed" do
@apply.options[:debug] = true
@apply.setup
Puppet::Log.level.should == :debug
end
it "should set log level to info if --verbose was passed" do
@apply.options[:verbose] = true
@apply.setup
Puppet::Log.level.should == :info
end
it "should print puppet config if asked to in Puppet config" do
Puppet.settings.stubs(:print_configs?).returns true
Puppet.settings.expects(:print_configs).returns true
expect { @apply.setup }.to exit_with 0
end
it "should exit after printing puppet config if asked to in Puppet config" do
Puppet.settings.stubs(:print_configs?).returns(true)
expect { @apply.setup }.to exit_with 1
end
it "should tell the report handler to cache locally as yaml" do
Puppet::Transaction::Report.indirection.expects(:cache_class=).with(:yaml)
@apply.setup
end
it "configures a profiler when profiling is enabled" do
Puppet[:profile] = true
@apply.setup
- expect(Puppet::Util::Profiler.current).to be_a(Puppet::Util::Profiler::WallClock)
+ expect(Puppet::Util::Profiler.current).to satisfy do |ps|
+ ps.any? {|p| p.is_a? Puppet::Util::Profiler::WallClock }
+ end
end
it "does not have a profiler if profiling is disabled" do
Puppet[:profile] = false
@apply.setup
- expect(Puppet::Util::Profiler.current).to eq(Puppet::Util::Profiler::NONE)
+ expect(Puppet::Util::Profiler.current.length).to be 0
end
it "should set default_file_terminus to `file_server` to be local" do
@apply.app_defaults[:default_file_terminus].should == :file_server
end
end
describe "when executing" do
it "should dispatch to 'apply' if it was called with 'apply'" do
@apply.options[:catalog] = "foo"
@apply.expects(:apply)
@apply.run_command
end
it "should dispatch to main otherwise" do
@apply.stubs(:options).returns({})
@apply.expects(:main)
@apply.run_command
end
describe "the main command" do
include PuppetSpec::Files
before :each do
Puppet[:prerun_command] = ''
Puppet[:postrun_command] = ''
Puppet::Node::Facts.indirection.terminus_class = :memory
Puppet::Node::Facts.indirection.cache_class = :memory
Puppet::Node.indirection.terminus_class = :memory
Puppet::Node.indirection.cache_class = :memory
@facts = Puppet::Node::Facts.new(Puppet[:node_name_value])
Puppet::Node::Facts.indirection.save(@facts)
@node = Puppet::Node.new(Puppet[:node_name_value])
Puppet::Node.indirection.save(@node)
@catalog = Puppet::Resource::Catalog.new("testing", Puppet.lookup(:environments).get(Puppet[:environment]))
@catalog.stubs(:to_ral).returns(@catalog)
Puppet::Resource::Catalog.indirection.stubs(:find).returns(@catalog)
STDIN.stubs(:read)
@transaction = stub('transaction')
@catalog.stubs(:apply).returns(@transaction)
Puppet::Util::Storage.stubs(:load)
Puppet::Configurer.any_instance.stubs(:save_last_run_summary) # to prevent it from trying to write files
end
after :each do
Puppet::Node::Facts.indirection.reset_terminus_class
Puppet::Node::Facts.indirection.cache_class = nil
end
around :each do |example|
Puppet.override(:current_environment =>
Puppet::Node::Environment.create(:production, [])) do
example.run
end
end
it "should set the code to run from --code" do
@apply.options[:code] = "code to run"
Puppet.expects(:[]=).with(:code,"code to run")
expect { @apply.main }.to exit_with 0
end
it "should set the code to run from STDIN if no arguments" do
@apply.command_line.stubs(:args).returns([])
STDIN.stubs(:read).returns("code to run")
Puppet.expects(:[]=).with(:code,"code to run")
expect { @apply.main }.to exit_with 0
end
it "should raise an error if a file is passed on command line and the file does not exist" do
noexist = tmpfile('noexist.pp')
@apply.command_line.stubs(:args).returns([noexist])
lambda { @apply.main }.should raise_error(RuntimeError, "Could not find file #{noexist}")
end
it "should set the manifest to the first file and warn other files will be skipped" do
manifest = tmpfile('starwarsIV')
FileUtils.touch(manifest)
@apply.command_line.stubs(:args).returns([manifest, 'starwarsI', 'starwarsII'])
expect { @apply.main }.to exit_with 0
msg = @logs.find {|m| m.message =~ /Only one file can be applied per run/ }
msg.message.should == 'Only one file can be applied per run. Skipping starwarsI, starwarsII'
msg.level.should == :warning
end
it "should raise an error if we can't find the node" do
Puppet::Node.indirection.expects(:find).returns(nil)
lambda { @apply.main }.should raise_error(RuntimeError, /Could not find node/)
end
it "should load custom classes if loadclasses" do
@apply.options[:loadclasses] = true
classfile = tmpfile('classfile')
File.open(classfile, 'w') { |c| c.puts 'class' }
Puppet[:classfile] = classfile
@node.expects(:classes=).with(['class'])
expect { @apply.main }.to exit_with 0
end
it "should compile the catalog" do
Puppet::Resource::Catalog.indirection.expects(:find).returns(@catalog)
expect { @apply.main }.to exit_with 0
end
it "should transform the catalog to ral" do
@catalog.expects(:to_ral).returns(@catalog)
expect { @apply.main }.to exit_with 0
end
it "should finalize the catalog" do
@catalog.expects(:finalize)
expect { @apply.main }.to exit_with 0
end
it "should not save the classes or resource file by default" do
@catalog.expects(:write_class_file).never
@catalog.expects(:write_resource_file).never
expect { @apply.main }.to exit_with 0
end
it "should save the classes and resources files when requested" do
@apply.options[:write_catalog_summary] = true
@catalog.expects(:write_class_file).once
@catalog.expects(:write_resource_file).once
expect { @apply.main }.to exit_with 0
end
it "should call the prerun and postrun commands on a Configurer instance" do
Puppet::Configurer.any_instance.expects(:execute_prerun_command).returns(true)
Puppet::Configurer.any_instance.expects(:execute_postrun_command).returns(true)
expect { @apply.main }.to exit_with 0
end
it "should apply the catalog" do
@catalog.expects(:apply).returns(stub_everything('transaction'))
expect { @apply.main }.to exit_with 0
end
it "should save the last run summary" do
Puppet[:noop] = false
report = Puppet::Transaction::Report.new("apply")
Puppet::Transaction::Report.stubs(:new).returns(report)
Puppet::Configurer.any_instance.expects(:save_last_run_summary).with(report)
expect { @apply.main }.to exit_with 0
end
describe "when using node_name_fact" do
before :each do
@facts = Puppet::Node::Facts.new(Puppet[:node_name_value], 'my_name_fact' => 'other_node_name')
Puppet::Node::Facts.indirection.save(@facts)
@node = Puppet::Node.new('other_node_name')
Puppet::Node.indirection.save(@node)
Puppet[:node_name_fact] = 'my_name_fact'
end
it "should set the facts name based on the node_name_fact" do
expect { @apply.main }.to exit_with 0
@facts.name.should == 'other_node_name'
end
it "should set the node_name_value based on the node_name_fact" do
expect { @apply.main }.to exit_with 0
Puppet[:node_name_value].should == 'other_node_name'
end
it "should merge in our node the loaded facts" do
@facts.values.merge!('key' => 'value')
expect { @apply.main }.to exit_with 0
@node.parameters['key'].should == 'value'
end
it "should raise an error if we can't find the facts" do
Puppet::Node::Facts.indirection.expects(:find).returns(nil)
lambda { @apply.main }.should raise_error
end
end
describe "with detailed_exitcodes" do
before :each do
@apply.options[:detailed_exitcodes] = true
end
it "should exit with report's computed exit status" do
Puppet[:noop] = false
Puppet::Transaction::Report.any_instance.stubs(:exit_status).returns(666)
expect { @apply.main }.to exit_with 666
end
it "should exit with report's computed exit status, even if --noop is set" do
Puppet[:noop] = true
Puppet::Transaction::Report.any_instance.stubs(:exit_status).returns(666)
expect { @apply.main }.to exit_with 666
end
it "should always exit with 0 if option is disabled" do
Puppet[:noop] = false
report = stub 'report', :exit_status => 666
@transaction.stubs(:report).returns(report)
expect { @apply.main }.to exit_with 0
end
it "should always exit with 0 if --noop" do
Puppet[:noop] = true
report = stub 'report', :exit_status => 666
@transaction.stubs(:report).returns(report)
expect { @apply.main }.to exit_with 0
end
end
end
describe "the 'apply' command" do
# We want this memoized, and to be able to adjust the content, so we
# have to do it ourselves.
def temporary_catalog(content = '"something"')
@tempfile = Tempfile.new('catalog.pson')
@tempfile.write(content)
@tempfile.close
@tempfile.path
end
it "should read the catalog in from disk if a file name is provided" do
@apply.options[:catalog] = temporary_catalog
catalog = Puppet::Resource::Catalog.new("testing", Puppet::Node::Environment::NONE)
Puppet::Resource::Catalog.stubs(:convert_from).with(:pson,'"something"').returns(catalog)
@apply.apply
end
it "should read the catalog in from stdin if '-' is provided" do
@apply.options[:catalog] = "-"
$stdin.expects(:read).returns '"something"'
catalog = Puppet::Resource::Catalog.new("testing", Puppet::Node::Environment::NONE)
Puppet::Resource::Catalog.stubs(:convert_from).with(:pson,'"something"').returns(catalog)
@apply.apply
end
it "should deserialize the catalog from the default format" do
@apply.options[:catalog] = temporary_catalog
Puppet::Resource::Catalog.stubs(:default_format).returns :rot13_piglatin
catalog = Puppet::Resource::Catalog.new("testing", Puppet::Node::Environment::NONE)
Puppet::Resource::Catalog.stubs(:convert_from).with(:rot13_piglatin,'"something"').returns(catalog)
@apply.apply
end
it "should fail helpfully if deserializing fails" do
@apply.options[:catalog] = temporary_catalog('something syntactically invalid')
lambda { @apply.apply }.should raise_error(Puppet::Error)
end
it "should convert plain data structures into a catalog if deserialization does not do so" do
@apply.options[:catalog] = temporary_catalog
Puppet::Resource::Catalog.stubs(:convert_from).with(:pson,'"something"').returns({:foo => "bar"})
catalog = Puppet::Resource::Catalog.new("testing", Puppet::Node::Environment::NONE)
Puppet::Resource::Catalog.expects(:pson_create).with({:foo => "bar"}).returns(catalog)
@apply.apply
end
it "should convert the catalog to a RAL catalog and use a Configurer instance to apply it" do
@apply.options[:catalog] = temporary_catalog
catalog = Puppet::Resource::Catalog.new("testing", Puppet::Node::Environment::NONE)
Puppet::Resource::Catalog.stubs(:convert_from).with(:pson,'"something"').returns catalog
catalog.expects(:to_ral).returns "mycatalog"
configurer = stub 'configurer'
Puppet::Configurer.expects(:new).returns configurer
configurer.expects(:run).
with(:catalog => "mycatalog", :pluginsync => false)
@apply.apply
end
end
end
describe "apply_catalog" do
it "should call the configurer with the catalog" do
catalog = "I am a catalog"
Puppet::Configurer.any_instance.expects(:run).
with(:catalog => catalog, :pluginsync => false)
@apply.send(:apply_catalog, catalog)
end
end
end
diff --git a/spec/unit/application/doc_spec.rb b/spec/unit/application/doc_spec.rb
index 8e43907ca..2f8624abd 100755
--- a/spec/unit/application/doc_spec.rb
+++ b/spec/unit/application/doc_spec.rb
@@ -1,340 +1,344 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet/application/doc'
require 'puppet/util/reference'
require 'puppet/util/rdoc'
describe Puppet::Application::Doc do
before :each do
@doc = Puppet::Application[:doc]
@doc.stubs(:puts)
@doc.preinit
Puppet::Util::Log.stubs(:newdestination)
end
it "should declare an other command" do
@doc.should respond_to(:other)
end
it "should declare a rdoc command" do
@doc.should respond_to(:rdoc)
end
it "should declare a fallback for unknown options" do
@doc.should respond_to(:handle_unknown)
end
it "should declare a preinit block" do
@doc.should respond_to(:preinit)
end
describe "in preinit" do
it "should set references to []" do
@doc.preinit
@doc.options[:references].should == []
end
it "should init mode to text" do
@doc.preinit
@doc.options[:mode].should == :text
end
it "should init format to to_markdown" do
@doc.preinit
@doc.options[:format].should == :to_markdown
end
end
describe "when handling options" do
[:all, :outputdir, :verbose, :debug, :charset].each do |option|
it "should declare handle_#{option} method" do
@doc.should respond_to("handle_#{option}".to_sym)
end
it "should store argument value when calling handle_#{option}" do
@doc.options.expects(:[]=).with(option, 'arg')
@doc.send("handle_#{option}".to_sym, 'arg')
end
end
it "should store the format if valid" do
Puppet::Util::Reference.stubs(:method_defined?).with('to_format').returns(true)
@doc.handle_format('format')
@doc.options[:format].should == 'to_format'
end
it "should raise an error if the format is not valid" do
Puppet::Util::Reference.stubs(:method_defined?).with('to_format').returns(false)
expect { @doc.handle_format('format') }.to raise_error(RuntimeError, /Invalid output format/)
end
it "should store the mode if valid" do
Puppet::Util::Reference.stubs(:modes).returns(stub('mode', :include? => true))
@doc.handle_mode('mode')
@doc.options[:mode].should == :mode
end
it "should store the mode if :rdoc" do
Puppet::Util::Reference.modes.stubs(:include?).with('rdoc').returns(false)
@doc.handle_mode('rdoc')
@doc.options[:mode].should == :rdoc
end
it "should raise an error if the mode is not valid" do
Puppet::Util::Reference.modes.stubs(:include?).with('unknown').returns(false)
expect { @doc.handle_mode('unknown') }.to raise_error(RuntimeError, /Invalid output mode/)
end
it "should list all references on list and exit" do
reference = stubs 'reference'
ref = stubs 'ref'
Puppet::Util::Reference.stubs(:references).returns([reference])
Puppet::Util::Reference.expects(:reference).with(reference).returns(ref)
ref.expects(:doc)
expect { @doc.handle_list(nil) }.to exit_with 0
end
it "should add reference to references list with --reference" do
@doc.options[:references] = [:ref1]
@doc.handle_reference('ref2')
@doc.options[:references].should == [:ref1,:ref2]
end
end
describe "during setup" do
before :each do
Puppet::Log.stubs(:newdestination)
@doc.command_line.stubs(:args).returns([])
end
it "should default to rdoc mode if there are command line arguments" do
@doc.command_line.stubs(:args).returns(["1"])
@doc.stubs(:setup_rdoc)
@doc.setup
@doc.options[:mode].should == :rdoc
end
it "should call setup_rdoc in rdoc mode" do
@doc.options[:mode] = :rdoc
@doc.expects(:setup_rdoc)
@doc.setup
end
it "should call setup_reference if not rdoc" do
@doc.options[:mode] = :test
@doc.expects(:setup_reference)
@doc.setup
end
describe "configuring logging" do
before :each do
Puppet::Util::Log.stubs(:newdestination)
end
describe "with --debug" do
before do
@doc.options[:debug] = true
end
it "should set log level to debug" do
@doc.setup
Puppet::Util::Log.level.should == :debug
end
it "should set log destination to console" do
Puppet::Util::Log.expects(:newdestination).with(:console)
@doc.setup
end
end
describe "with --verbose" do
before do
@doc.options[:verbose] = true
end
it "should set log level to info" do
@doc.setup
Puppet::Util::Log.level.should == :info
end
it "should set log destination to console" do
Puppet::Util::Log.expects(:newdestination).with(:console)
@doc.setup
end
end
describe "without --debug or --verbose" do
before do
@doc.options[:debug] = false
@doc.options[:verbose] = false
end
it "should set log level to warning" do
@doc.setup
Puppet::Util::Log.level.should == :warning
end
it "should set log destination to console" do
Puppet::Util::Log.expects(:newdestination).with(:console)
@doc.setup
end
end
end
describe "in non-rdoc mode" do
it "should get all non-dynamic reference if --all" do
@doc.options[:all] = true
static = stub 'static', :dynamic? => false
dynamic = stub 'dynamic', :dynamic? => true
Puppet::Util::Reference.stubs(:reference).with(:static).returns(static)
Puppet::Util::Reference.stubs(:reference).with(:dynamic).returns(dynamic)
Puppet::Util::Reference.stubs(:references).returns([:static,:dynamic])
@doc.setup_reference
@doc.options[:references].should == [:static]
end
it "should default to :type if no references" do
@doc.setup_reference
@doc.options[:references].should == [:type]
end
end
describe "in rdoc mode" do
describe "when there are unknown args" do
it "should expand --modulepath if any" do
@doc.unknown_args = [ { :opt => "--modulepath", :arg => "path" } ]
Puppet.settings.stubs(:handlearg)
@doc.setup_rdoc
@doc.unknown_args[0][:arg].should == File.expand_path('path')
end
it "should expand --manifestdir if any" do
@doc.unknown_args = [ { :opt => "--manifestdir", :arg => "path" } ]
Puppet.settings.stubs(:handlearg)
@doc.setup_rdoc
@doc.unknown_args[0][:arg].should == File.expand_path('path')
end
it "should give them to Puppet.settings" do
@doc.unknown_args = [ { :opt => :option, :arg => :argument } ]
Puppet.settings.expects(:handlearg).with(:option,:argument)
@doc.setup_rdoc
end
end
it "should operate in master run_mode" do
@doc.class.run_mode.name.should == :master
@doc.setup_rdoc
end
end
end
describe "when running" do
describe "in rdoc mode" do
include PuppetSpec::Files
let(:envdir) { tmpdir('env') }
let(:modules) { File.join(envdir, "modules") }
+ let(:modules2) { File.join(envdir, "modules2") }
let(:manifests) { File.join(envdir, "manifests") }
before :each do
@doc.manifest = false
Puppet.stubs(:info)
Puppet[:trace] = false
Puppet[:modulepath] = modules
Puppet[:manifestdir] = manifests
@doc.options[:all] = false
@doc.options[:outputdir] = 'doc'
@doc.options[:charset] = nil
Puppet.settings.stubs(:define_settings)
Puppet::Util::RDoc.stubs(:rdoc)
@doc.command_line.stubs(:args).returns([])
end
around(:each) do |example|
FileUtils.mkdir_p(modules)
- @env = Puppet::Node::Environment.create(Puppet[:environment].to_sym, [modules], "#{manifests}/site.pp")
- Puppet.override(:environments => Puppet::Environments::Static.new(@env)) do
+ env = Puppet::Node::Environment.create(Puppet[:environment].to_sym, [modules], "#{manifests}/site.pp")
+ Puppet.override({:environments => Puppet::Environments::Static.new(env), :current_environment => env}) do
example.run
end
end
it "should set document_all on --all" do
@doc.options[:all] = true
Puppet.settings.expects(:[]=).with(:document_all, true)
- expect { @doc.rdoc }.to exit_with 0
+ expect { @doc.rdoc }.to exit_with(0)
end
it "should call Puppet::Util::RDoc.rdoc in full mode" do
Puppet::Util::RDoc.expects(:rdoc).with('doc', [modules, manifests], nil)
- expect { @doc.rdoc }.to exit_with 0
+ expect { @doc.rdoc }.to exit_with(0)
end
it "should call Puppet::Util::RDoc.rdoc with a charset if --charset has been provided" do
@doc.options[:charset] = 'utf-8'
Puppet::Util::RDoc.expects(:rdoc).with('doc', [modules, manifests], "utf-8")
- expect { @doc.rdoc }.to exit_with 0
+ expect { @doc.rdoc }.to exit_with(0)
end
it "should call Puppet::Util::RDoc.rdoc in full mode with outputdir set to doc if no --outputdir" do
@doc.options[:outputdir] = false
Puppet::Util::RDoc.expects(:rdoc).with('doc', [modules, manifests], nil)
- expect { @doc.rdoc }.to exit_with 0
+ expect { @doc.rdoc }.to exit_with(0)
end
it "should call Puppet::Util::RDoc.manifestdoc in manifest mode" do
@doc.manifest = true
Puppet::Util::RDoc.expects(:manifestdoc)
- expect { @doc.rdoc }.to exit_with 0
+ expect { @doc.rdoc }.to exit_with(0)
end
it "should get modulepath and manifestdir values from the environment" do
- @env.expects(:modulepath).returns(['envmodules1','envmodules2'])
- @env.expects(:manifest).returns('envmanifests/site.pp')
-
- Puppet::Util::RDoc.expects(:rdoc).with('doc', ['envmodules1','envmodules2','envmanifests'], nil)
-
- expect { @doc.rdoc }.to exit_with 0
+ FileUtils.mkdir_p(modules)
+ FileUtils.mkdir_p(modules2)
+ env = Puppet::Node::Environment.create(Puppet[:environment].to_sym,
+ [modules, modules2],
+ "envmanifests/site.pp")
+ Puppet.override({:environments => Puppet::Environments::Static.new(env), :current_environment => env}) do
+ Puppet::Util::RDoc.stubs(:rdoc).with('doc', [modules.to_s, modules2.to_s, env.manifest.to_s], nil)
+ expect { @doc.rdoc }.to exit_with(0)
+ end
end
end
describe "in the other modes" do
it "should get reference in given format" do
reference = stub 'reference'
@doc.options[:mode] = :none
@doc.options[:references] = [:ref]
Puppet::Util::Reference.expects(:reference).with(:ref).returns(reference)
@doc.options[:format] = :format
@doc.stubs(:exit)
reference.expects(:send).with { |format,contents| format == :format }.returns('doc')
@doc.other
end
end
end
end
diff --git a/spec/unit/application/master_spec.rb b/spec/unit/application/master_spec.rb
index 4d3167ce8..ef9c1b174 100755
--- a/spec/unit/application/master_spec.rb
+++ b/spec/unit/application/master_spec.rb
@@ -1,355 +1,363 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet/application/master'
require 'puppet/daemon'
require 'puppet/network/server'
describe Puppet::Application::Master, :unless => Puppet.features.microsoft_windows? do
before :each do
@master = Puppet::Application[:master]
@daemon = stub_everything 'daemon'
Puppet::Daemon.stubs(:new).returns(@daemon)
Puppet::Util::Log.stubs(:newdestination)
Puppet::Node.indirection.stubs(:terminus_class=)
Puppet::Node.indirection.stubs(:cache_class=)
Puppet::Node::Facts.indirection.stubs(:terminus_class=)
Puppet::Node::Facts.indirection.stubs(:cache_class=)
Puppet::Transaction::Report.indirection.stubs(:terminus_class=)
Puppet::Resource::Catalog.indirection.stubs(:terminus_class=)
Puppet::SSL::Host.stubs(:ca_location=)
end
it "should operate in master run_mode" do
@master.class.run_mode.name.should equal(:master)
end
it "should declare a main command" do
@master.should respond_to(:main)
end
it "should declare a compile command" do
@master.should respond_to(:compile)
end
it "should declare a preinit block" do
@master.should respond_to(:preinit)
end
describe "during preinit" do
before :each do
@master.stubs(:trap)
end
it "should catch INT" do
@master.stubs(:trap).with { |arg,block| arg == :INT }
@master.preinit
end
end
[:debug,:verbose].each do |option|
it "should declare handle_#{option} method" do
@master.should respond_to("handle_#{option}".to_sym)
end
it "should store argument value when calling handle_#{option}" do
@master.options.expects(:[]=).with(option, 'arg')
@master.send("handle_#{option}".to_sym, 'arg')
end
end
describe "when applying options" do
before do
@master.command_line.stubs(:args).returns([])
end
it "should set the log destination with --logdest" do
Puppet::Log.expects(:newdestination).with("console")
@master.handle_logdest("console")
end
it "should put the setdest options to true" do
@master.options.expects(:[]=).with(:setdest,true)
@master.handle_logdest("console")
end
it "should parse the log destination from ARGV" do
@master.command_line.stubs(:args).returns(%w{--logdest /my/file})
Puppet::Util::Log.expects(:newdestination).with("/my/file")
@master.parse_options
end
it "should support dns alt names from ARGV" do
Puppet.settings.initialize_global_settings(["--dns_alt_names", "foo,bar,baz"])
@master.preinit
@master.parse_options
Puppet[:dns_alt_names].should == "foo,bar,baz"
end
end
describe "during setup" do
before :each do
Puppet::Log.stubs(:newdestination)
Puppet.stubs(:settraps)
Puppet::SSL::CertificateAuthority.stubs(:instance)
Puppet::SSL::CertificateAuthority.stubs(:ca?)
Puppet.settings.stubs(:use)
@master.options.stubs(:[]).with(any_parameters)
end
it "should abort stating that the master is not supported on Windows" do
Puppet.features.stubs(:microsoft_windows?).returns(true)
expect { @master.setup }.to raise_error(Puppet::Error, /Puppet master is not supported on Microsoft Windows/)
end
- it "should set log level to debug if --debug was passed" do
- @master.options.stubs(:[]).with(:debug).returns(true)
- @master.setup
- Puppet::Log.level.should == :debug
- end
-
- it "should set log level to info if --verbose was passed" do
- @master.options.stubs(:[]).with(:verbose).returns(true)
- @master.setup
- Puppet::Log.level.should == :info
- end
-
- it "should set console as the log destination if no --logdest and --daemonize" do
- @master.stubs(:[]).with(:daemonize).returns(:false)
-
- Puppet::Log.expects(:newdestination).with(:syslog)
-
- @master.setup
- end
+ describe "setting up logging" do
+ it "sets the log level" do
+ @master.expects(:set_log_level)
+ @master.setup
+ end
- it "should set syslog as the log destination if no --logdest and not --daemonize" do
- Puppet::Log.expects(:newdestination).with(:syslog)
+ describe "when the log destination is not explicitly configured" do
+ before do
+ @master.options.stubs(:[]).with(:setdest).returns false
+ end
- @master.setup
- end
+ it "logs to the console when --compile is given" do
+ @master.options.stubs(:[]).with(:node).returns "default"
+ Puppet::Util::Log.expects(:newdestination).with(:console)
+ @master.setup
+ end
- it "should set syslog as the log destination if --rack" do
- @master.options.stubs(:[]).with(:rack).returns(:true)
+ it "logs to the console when the master is not daemonized or run with rack" do
+ Puppet::Util::Log.expects(:newdestination).with(:console)
+ Puppet[:daemonize] = false
+ @master.options.stubs(:[]).with(:rack).returns(false)
+ @master.setup
+ end
- Puppet::Log.expects(:newdestination).with(:syslog)
+ it "logs to syslog when the master is daemonized" do
+ Puppet::Util::Log.expects(:newdestination).with(:console).never
+ Puppet::Util::Log.expects(:newdestination).with(:syslog)
+ Puppet[:daemonize] = true
+ @master.options.stubs(:[]).with(:rack).returns(false)
+ @master.setup
+ end
- @master.setup
+ it "logs to syslog when the master is run with rack" do
+ Puppet::Util::Log.expects(:newdestination).with(:console).never
+ Puppet::Util::Log.expects(:newdestination).with(:syslog)
+ Puppet[:daemonize] = false
+ @master.options.stubs(:[]).with(:rack).returns(true)
+ @master.setup
+ end
+ end
end
it "should print puppet config if asked to in Puppet config" do
Puppet.settings.stubs(:print_configs?).returns(true)
Puppet.settings.expects(:print_configs).returns(true)
expect { @master.setup }.to exit_with 0
end
it "should exit after printing puppet config if asked to in Puppet config" do
Puppet.settings.stubs(:print_configs?).returns(true)
expect { @master.setup }.to exit_with 1
end
it "should tell Puppet.settings to use :main,:ssl,:master and :metrics category" do
Puppet.settings.expects(:use).with(:main,:master,:ssl,:metrics)
@master.setup
end
describe "with no ca" do
it "should set the ca_location to none" do
Puppet::SSL::Host.expects(:ca_location=).with(:none)
@master.setup
end
end
describe "with a ca configured" do
before :each do
Puppet::SSL::CertificateAuthority.stubs(:ca?).returns(true)
end
it "should set the ca_location to local" do
Puppet::SSL::Host.expects(:ca_location=).with(:local)
@master.setup
end
it "should tell Puppet.settings to use :ca category" do
Puppet.settings.expects(:use).with(:ca)
@master.setup
end
it "should instantiate the CertificateAuthority singleton" do
Puppet::SSL::CertificateAuthority.expects(:instance)
@master.setup
end
end
end
describe "when running" do
before do
@master.preinit
end
it "should dispatch to compile if called with --compile" do
@master.options[:node] = "foo"
@master.expects(:compile)
@master.run_command
end
it "should dispatch to main otherwise" do
@master.options[:node] = nil
@master.expects(:main)
@master.run_command
end
describe "the compile command" do
before do
Puppet[:manifest] = "site.pp"
Puppet.stubs(:err)
@master.stubs(:puts)
end
it "should compile a catalog for the specified node" do
@master.options[:node] = "foo"
Puppet::Resource::Catalog.indirection.expects(:find).with("foo").returns Puppet::Resource::Catalog.new
expect { @master.compile }.to exit_with 0
end
it "should convert the catalog to a pure-resource catalog and use 'PSON::pretty_generate' to pretty-print the catalog" do
catalog = Puppet::Resource::Catalog.new
PSON.stubs(:pretty_generate)
Puppet::Resource::Catalog.indirection.expects(:find).returns catalog
catalog.expects(:to_resource).returns("rescat")
@master.options[:node] = "foo"
PSON.expects(:pretty_generate).with('rescat', :allow_nan => true, :max_nesting => false)
expect { @master.compile }.to exit_with 0
end
it "should exit with error code 30 if no catalog can be found" do
@master.options[:node] = "foo"
Puppet::Resource::Catalog.indirection.expects(:find).returns nil
Puppet.expects(:log_exception)
expect { @master.compile }.to exit_with 30
end
it "should exit with error code 30 if there's a failure" do
@master.options[:node] = "foo"
Puppet::Resource::Catalog.indirection.expects(:find).raises ArgumentError
Puppet.expects(:log_exception)
expect { @master.compile }.to exit_with 30
end
end
describe "the main command" do
before :each do
@master.preinit
@server = stub_everything 'server'
Puppet::Network::Server.stubs(:new).returns(@server)
@app = stub_everything 'app'
Puppet::SSL::Host.stubs(:localhost)
Puppet::SSL::CertificateAuthority.stubs(:ca?)
Process.stubs(:uid).returns(1000)
Puppet.stubs(:service)
Puppet[:daemonize] = false
Puppet.stubs(:notice)
Puppet.stubs(:start)
end
it "should create a Server" do
Puppet::Network::Server.expects(:new)
@master.main
end
it "should give the server to the daemon" do
@daemon.expects(:server=).with(@server)
@master.main
end
it "should generate a SSL cert for localhost" do
Puppet::SSL::Host.expects(:localhost)
@master.main
end
it "should make sure to *only* hit the CA for data" do
Puppet::SSL::CertificateAuthority.stubs(:ca?).returns(true)
Puppet::SSL::Host.expects(:ca_location=).with(:only)
@master.main
end
it "should drop privileges if running as root" do
Puppet.features.stubs(:root?).returns true
Puppet::Util.expects(:chuser)
@master.main
end
it "should daemonize if needed" do
Puppet[:daemonize] = true
@daemon.expects(:daemonize)
@master.main
end
it "should start the service" do
@daemon.expects(:start)
@master.main
end
describe "with --rack", :if => Puppet.features.rack? do
before do
require 'puppet/network/http/rack'
Puppet::Network::HTTP::Rack.stubs(:new).returns(@app)
end
it "it should not start a daemon" do
@master.options.stubs(:[]).with(:rack).returns(:true)
@daemon.expects(:start).never
@master.main
end
it "it should return the app" do
@master.options.stubs(:[]).with(:rack).returns(:true)
app = @master.main
app.should equal(@app)
end
end
end
end
end
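Taken together, the reworked logging examples above pin down destination selection along these lines; this is only a sketch inferred from the expectations, not the actual Puppet::Application::Master code.
# Sketch of the log-destination behaviour the new examples describe.
def setup_logs
  set_log_level

  unless options[:setdest]
    if options[:node]                         # --compile was given
      Puppet::Util::Log.newdestination(:console)
    elsif Puppet[:daemonize] || options[:rack]
      Puppet::Util::Log.newdestination(:syslog)
    else
      Puppet::Util::Log.newdestination(:console)
    end
  end
end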
diff --git a/spec/unit/application/resource_spec.rb b/spec/unit/application/resource_spec.rb
index f2e2596ca..029aa466d 100755
--- a/spec/unit/application/resource_spec.rb
+++ b/spec/unit/application/resource_spec.rb
@@ -1,214 +1,209 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet/application/resource'
describe Puppet::Application::Resource do
include PuppetSpec::Files
before :each do
@resource_app = Puppet::Application[:resource]
Puppet::Util::Log.stubs(:newdestination)
Puppet::Resource.indirection.stubs(:terminus_class=)
end
describe "in preinit" do
it "should init extra_params to empty array" do
@resource_app.preinit
@resource_app.extra_params.should == []
end
-
- it "should load Facter facts" do
- Facter.expects(:loadfacts).once
- @resource_app.preinit
- end
end
describe "when handling options" do
[:debug, :verbose, :edit].each do |option|
it "should store argument value when calling handle_#{option}" do
@resource_app.options.expects(:[]=).with(option, 'arg')
@resource_app.send("handle_#{option}".to_sym, 'arg')
end
end
it "should set options[:host] to given host" do
@resource_app.handle_host(:whatever)
@resource_app.host.should == :whatever
end
it "should load a display all types with types option" do
type1 = stub_everything 'type1', :name => :type1
type2 = stub_everything 'type2', :name => :type2
Puppet::Type.stubs(:loadall)
Puppet::Type.stubs(:eachtype).multiple_yields(type1,type2)
@resource_app.expects(:puts).with(['type1','type2'])
expect { @resource_app.handle_types(nil) }.to exit_with 0
end
it "should add param to extra_params list" do
@resource_app.extra_params = [ :param1 ]
@resource_app.handle_param("whatever")
@resource_app.extra_params.should == [ :param1, :whatever ]
end
it "should get a parameter in the printed data if extra_params are passed" do
tty = stub("tty", :tty? => true )
path = tmpfile('testfile')
command_line = Puppet::Util::CommandLine.new("puppet", [ 'resource', 'file', path ], tty )
@resource_app.stubs(:command_line).returns command_line
# provider is a parameter that should always be available
@resource_app.extra_params = [ :provider ]
expect { @resource_app.main }.to have_printed /provider\s+=>/
end
end
describe "during setup" do
before :each do
Puppet::Log.stubs(:newdestination)
end
it "should set console as the log destination" do
Puppet::Log.expects(:newdestination).with(:console)
@resource_app.setup
end
it "should set log level to debug if --debug was passed" do
@resource_app.options.stubs(:[]).with(:debug).returns(true)
@resource_app.setup
Puppet::Log.level.should == :debug
end
it "should set log level to info if --verbose was passed" do
@resource_app.options.stubs(:[]).with(:debug).returns(false)
@resource_app.options.stubs(:[]).with(:verbose).returns(true)
@resource_app.setup
Puppet::Log.level.should == :info
end
end
describe "when running" do
before :each do
@type = stub_everything 'type', :properties => []
@resource_app.command_line.stubs(:args).returns(['mytype'])
Puppet::Type.stubs(:type).returns(@type)
@res = stub_everything "resource"
@res.stubs(:prune_parameters).returns(@res)
@report = stub_everything "report"
end
it "should raise an error if no type is given" do
@resource_app.command_line.stubs(:args).returns([])
lambda { @resource_app.main }.should raise_error(RuntimeError, "You must specify the type to display")
end
it "should raise an error when editing a remote host" do
@resource_app.options.stubs(:[]).with(:edit).returns(true)
@resource_app.host = 'host'
lambda { @resource_app.main }.should raise_error(RuntimeError, "You cannot edit a remote host")
end
it "should raise an error if the type is not found" do
Puppet::Type.stubs(:type).returns(nil)
lambda { @resource_app.main }.should raise_error(RuntimeError, 'Could not find type mytype')
end
describe "with a host" do
before :each do
@resource_app.stubs(:puts)
@resource_app.host = 'host'
Puppet::Resource.indirection.stubs(:find ).never
Puppet::Resource.indirection.stubs(:search).never
Puppet::Resource.indirection.stubs(:save ).never
end
it "should search for resources" do
@resource_app.command_line.stubs(:args).returns(['type'])
Puppet::Resource.indirection.expects(:search).with('https://host:8139/production/resources/type/', {}).returns([])
@resource_app.main
end
it "should describe the given resource" do
@resource_app.command_line.stubs(:args).returns(['type', 'name'])
Puppet::Resource.indirection.expects(:find).with('https://host:8139/production/resources/type/name').returns(@res)
@resource_app.main
end
it "should add given parameters to the object" do
@resource_app.command_line.stubs(:args).returns(['type','name','param=temp'])
Puppet::Resource.indirection.expects(:save).
with(@res, 'https://host:8139/production/resources/type/name').
returns([@res, @report])
Puppet::Resource.expects(:new).with('type', 'name', :parameters => {'param' => 'temp'}).returns(@res)
@resource_app.main
end
end
describe "without a host" do
before :each do
@resource_app.stubs(:puts)
@resource_app.host = nil
Puppet::Resource.indirection.stubs(:find ).never
Puppet::Resource.indirection.stubs(:search).never
Puppet::Resource.indirection.stubs(:save ).never
end
it "should search for resources" do
Puppet::Resource.indirection.expects(:search).with('mytype/', {}).returns([])
@resource_app.main
end
it "should describe the given resource" do
@resource_app.command_line.stubs(:args).returns(['type','name'])
Puppet::Resource.indirection.expects(:find).with('type/name').returns(@res)
@resource_app.main
end
it "should add given parameters to the object" do
@resource_app.command_line.stubs(:args).returns(['type','name','param=temp'])
Puppet::Resource.indirection.expects(:save).with(@res, 'type/name').returns([@res, @report])
Puppet::Resource.expects(:new).with('type', 'name', :parameters => {'param' => 'temp'}).returns(@res)
@resource_app.main
end
end
end
describe "when handling file type" do
before :each do
Facter.stubs(:loadfacts)
@resource_app.preinit
end
it "should raise an exception if no file specified" do
@resource_app.command_line.stubs(:args).returns(['file'])
lambda { @resource_app.main }.should raise_error(RuntimeError, /Listing all file instances is not supported/)
end
it "should output a file resource when given a file path" do
path = File.expand_path('/etc')
res = Puppet::Type.type(:file).new(:path => path).to_resource
Puppet::Resource.indirection.expects(:find).returns(res)
@resource_app.command_line.stubs(:args).returns(['file', path])
@resource_app.expects(:puts).with do |args|
args.should =~ /file \{ '#{Regexp.escape(path)}'/m
end
@resource_app.main
end
end
end
diff --git a/spec/unit/configurer/downloader_factory_spec.rb b/spec/unit/configurer/downloader_factory_spec.rb
new file mode 100755
index 000000000..fae919918
--- /dev/null
+++ b/spec/unit/configurer/downloader_factory_spec.rb
@@ -0,0 +1,96 @@
+#! /usr/bin/env ruby
+require 'spec_helper'
+require 'puppet/configurer'
+
+describe Puppet::Configurer::DownloaderFactory do
+ let(:factory) { Puppet::Configurer::DownloaderFactory.new }
+ let(:environment) { Puppet::Node::Environment.create(:myenv, []) }
+
+ let(:plugin_downloader) do
+ factory.create_plugin_downloader(environment)
+ end
+
+ let(:facts_downloader) do
+ factory.create_plugin_facts_downloader(environment)
+ end
+
+ def ignores_source_permissions(downloader)
+ expect(downloader.file[:source_permissions]).to eq(:ignore)
+ end
+
+ def uses_source_permissions(downloader)
+ expect(downloader.file[:source_permissions]).to eq(:use)
+ end
+
+ context "when creating a plugin downloader for modules" do
+ it 'is named "plugin"' do
+ expect(plugin_downloader.name).to eq('plugin')
+ end
+
+ it 'downloads files into Puppet[:plugindest]' do
+ plugindest = File.expand_path("/tmp/pdest")
+ Puppet[:plugindest] = plugindest
+
+ expect(plugin_downloader.file[:path]).to eq(plugindest)
+ end
+
+ it 'downloads files from Puppet[:pluginsource]' do
+ Puppet[:pluginsource] = 'puppet:///myotherplugins'
+
+ expect(plugin_downloader.file[:source]).to eq([Puppet[:pluginsource]])
+ end
+
+ it 'ignores files from Puppet[:pluginsignore]' do
+ Puppet[:pluginsignore] = 'pignore'
+
+ expect(plugin_downloader.file[:ignore]).to eq(['pignore'])
+ end
+
+ it 'splits Puppet[:pluginsignore] on whitespace' do
+ Puppet[:pluginsignore] = ".svn CVS .git"
+
+ expect(plugin_downloader.file[:ignore]).to eq(%w[.svn CVS .git])
+ end
+
+ it "ignores source permissions" do
+ ignores_source_permissions(plugin_downloader)
+ end
+ end
+
+ context "when creating a plugin downloader for external facts" do
+ it 'is named "pluginfacts"' do
+ expect(facts_downloader.name).to eq('pluginfacts')
+ end
+
+ it 'downloads files into Puppet[:pluginfactdest]' do
+ plugindest = File.expand_path("/tmp/pdest")
+ Puppet[:pluginfactdest] = plugindest
+
+ expect(facts_downloader.file[:path]).to eq(plugindest)
+ end
+
+ it 'downloads files from Puppet[:pluginfactsource]' do
+ Puppet[:pluginfactsource] = 'puppet:///myotherfacts'
+
+ expect(facts_downloader.file[:source]).to eq([Puppet[:pluginfactsource]])
+ end
+
+ it 'ignores files from Puppet[:pluginsignore]' do
+ Puppet[:pluginsignore] = 'pignore'
+
+ expect(facts_downloader.file[:ignore]).to eq(['pignore'])
+ end
+
+ context "on POSIX", :if => Puppet.features.posix? do
+ it "uses source permissions" do
+ uses_source_permissions(facts_downloader)
+ end
+ end
+
+ context "on Windows", :if => Puppet.features.microsoft_windows? do
+ it "ignores source permissions during external fact pluginsync" do
+ ignores_source_permissions(facts_downloader)
+ end
+ end
+ end
+end
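The expectations above imply a factory along these lines, reusing the Puppet::Configurer::Downloader constructor exercised in downloader_spec.rb further down; treat it as an inferred sketch rather than the shipped implementation.
# Inferred sketch of Puppet::Configurer::DownloaderFactory.
class Puppet::Configurer::DownloaderFactory
  def create_plugin_downloader(environment)
    Puppet::Configurer::Downloader.new('plugin',
                                       Puppet[:plugindest],
                                       Puppet[:pluginsource],
                                       Puppet[:pluginsignore],
                                       environment)
  end

  def create_plugin_facts_downloader(environment)
    # POSIX agents preserve source permissions for external facts; Windows ignores them.
    source_permissions = Puppet.features.microsoft_windows? ? :ignore : :use
    Puppet::Configurer::Downloader.new('pluginfacts',
                                       Puppet[:pluginfactdest],
                                       Puppet[:pluginfactsource],
                                       Puppet[:pluginsignore],
                                       environment,
                                       source_permissions)
  end
end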
diff --git a/spec/unit/configurer/downloader_spec.rb b/spec/unit/configurer/downloader_spec.rb
index 6003fd10f..71b88bb51 100755
--- a/spec/unit/configurer/downloader_spec.rb
+++ b/spec/unit/configurer/downloader_spec.rb
@@ -1,243 +1,222 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet/configurer/downloader'
describe Puppet::Configurer::Downloader do
require 'puppet_spec/files'
include PuppetSpec::Files
let(:path) { Puppet[:plugindest] }
let(:source) { 'puppet://puppet/plugins' }
it "should require a name" do
lambda { Puppet::Configurer::Downloader.new }.should raise_error(ArgumentError)
end
it "should require a path and a source at initialization" do
lambda { Puppet::Configurer::Downloader.new("name") }.should raise_error(ArgumentError)
end
it "should set the name, path and source appropriately" do
dler = Puppet::Configurer::Downloader.new("facts", "path", "source")
dler.name.should == "facts"
dler.path.should == "path"
dler.source.should == "source"
end
def downloader(options = {})
options[:name] ||= "facts"
options[:path] ||= path
options[:source_permissions] ||= :ignore
Puppet::Configurer::Downloader.new(options[:name], options[:path], source, options[:ignore], options[:environment], options[:source_permissions])
end
def generate_file_resource(options = {})
dler = downloader(options)
dler.file
end
describe "when creating the file that does the downloading" do
it "should create a file instance with the right path and source" do
file = generate_file_resource(:path => path, :source => source)
expect(file[:path]).to eq(path)
expect(file[:source]).to eq([source])
end
it "should tag the file with the downloader name" do
name = "mydownloader"
file = generate_file_resource(:name => name)
expect(file[:tag]).to eq([name])
end
it "should always recurse" do
file = generate_file_resource
expect(file[:recurse]).to be_true
end
it "should always purge" do
file = generate_file_resource
expect(file[:purge]).to be_true
end
it "should never be in noop" do
file = generate_file_resource
expect(file[:noop]).to be_false
end
it "should set source_permissions to ignore by default" do
file = generate_file_resource
expect(file[:source_permissions]).to eq(:ignore)
end
it "should allow source_permissions to be overridden" do
file = generate_file_resource(:source_permissions => :use)
expect(file[:source_permissions]).to eq(:use)
end
describe "on POSIX", :as_platform => :posix do
it "should always set the owner to the current UID" do
Process.expects(:uid).returns 51
file = generate_file_resource(:path => '/path')
expect(file[:owner]).to eq(51)
end
it "should always set the group to the current GID" do
Process.expects(:gid).returns 61
file = generate_file_resource(:path => '/path')
expect(file[:group]).to eq(61)
end
end
describe "on Windows", :as_platform => :windows do
it "should omit the owner" do
file = generate_file_resource(:path => 'C:/path')
expect(file[:owner]).to be_nil
end
it "should omit the group" do
file = generate_file_resource(:path => 'C:/path')
expect(file[:group]).to be_nil
end
end
it "should always force the download" do
file = generate_file_resource
expect(file[:force]).to be_true
end
it "should never back up when downloading" do
file = generate_file_resource
expect(file[:backup]).to be_false
end
it "should support providing an 'ignore' parameter" do
file = generate_file_resource(:ignore => '.svn')
expect(file[:ignore]).to eq(['.svn'])
end
it "should split the 'ignore' parameter on whitespace" do
file = generate_file_resource(:ignore => '.svn CVS')
expect(file[:ignore]).to eq(['.svn', 'CVS'])
end
end
describe "when creating the catalog to do the downloading" do
before do
@path = make_absolute("/download/path")
@dler = Puppet::Configurer::Downloader.new("foo", @path, make_absolute("source"))
end
it "should create a catalog and add the file to it" do
catalog = @dler.catalog
catalog.resources.size.should == 1
catalog.resources.first.class.should == Puppet::Type::File
catalog.resources.first.name.should == @path
end
it "should specify that it is not managing a host catalog" do
@dler.catalog.host_config.should == false
end
end
describe "when downloading" do
before do
@dl_name = tmpfile("downloadpath")
source_name = tmpfile("source")
File.open(source_name, 'w') {|f| f.write('hola mundo') }
env = Puppet::Node::Environment.remote('foo')
@dler = Puppet::Configurer::Downloader.new("foo", @dl_name, source_name, Puppet[:pluginsignore], env)
end
it "should not skip downloaded resources when filtering on tags" do
Puppet[:tags] = 'maytag'
@dler.evaluate
Puppet::FileSystem.exist?(@dl_name).should be_true
end
it "should log that it is downloading" do
Puppet.expects(:info)
- Timeout.stubs(:timeout)
-
- @dler.evaluate
- end
-
- it "should set a timeout for the download using the `configtimeout` setting" do
- Puppet[:configtimeout] = 50
- Timeout.expects(:timeout).with(50)
-
- @dler.evaluate
- end
-
- it "should apply the catalog within the timeout block" do
- catalog = mock 'catalog'
- @dler.expects(:catalog).returns(catalog)
-
- Timeout.expects(:timeout).yields
-
- catalog.expects(:apply)
@dler.evaluate
end
it "should return all changed file paths" do
trans = mock 'transaction'
catalog = mock 'catalog'
@dler.expects(:catalog).returns(catalog)
catalog.expects(:apply).yields(trans)
- Timeout.expects(:timeout).yields
-
resource = mock 'resource'
resource.expects(:[]).with(:path).returns "/changed/file"
trans.expects(:changed?).returns([resource])
@dler.evaluate.should == %w{/changed/file}
end
it "should yield the resources if a block is given" do
trans = mock 'transaction'
catalog = mock 'catalog'
@dler.expects(:catalog).returns(catalog)
catalog.expects(:apply).yields(trans)
- Timeout.expects(:timeout).yields
-
resource = mock 'resource'
resource.expects(:[]).with(:path).returns "/changed/file"
trans.expects(:changed?).returns([resource])
yielded = nil
@dler.evaluate { |r| yielded = r }
yielded.should == resource
end
it "should catch and log exceptions" do
Puppet.expects(:err)
- Timeout.stubs(:timeout).raises(Puppet::Error, "testing")
+ # The downloader creates a new catalog for each apply, and the catalog is really the
+ # only object that can be stubbed for the purpose of generating a puppet error
+ Puppet::Resource::Catalog.any_instance.stubs(:apply).raises(Puppet::Error, "testing")
lambda { @dler.evaluate }.should_not raise_error
end
end
end
diff --git a/spec/unit/configurer/plugin_handler_spec.rb b/spec/unit/configurer/plugin_handler_spec.rb
index 8093fe708..a80236888 100755
--- a/spec/unit/configurer/plugin_handler_spec.rb
+++ b/spec/unit/configurer/plugin_handler_spec.rb
@@ -1,39 +1,39 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet/configurer'
require 'puppet/configurer/plugin_handler'
-class PluginHandlerTester
- include Puppet::Configurer::PluginHandler
- attr_accessor :environment
-end
-
describe Puppet::Configurer::PluginHandler do
- before do
- @pluginhandler = PluginHandlerTester.new
+ let(:factory) { Puppet::Configurer::DownloaderFactory.new }
+ let(:pluginhandler) { Puppet::Configurer::PluginHandler.new(factory) }
+ let(:environment) { Puppet::Node::Environment.create(:myenv, []) }
+ before :each do
# PluginHandler#load_plugin has an extra-strong rescue clause
# this mock is to make sure that we don't silently ignore errors
Puppet.expects(:err).never
end
- it "should use an Agent Downloader, with the name, source, destination, ignore, and environment set correctly, to download plugins when downloading is enabled" do
- environment = Puppet::Node::Environment.create(:myenv, [])
- Puppet.features.stubs(:external_facts?).returns(:true)
- plugindest = File.expand_path("/tmp/pdest")
- Puppet[:pluginsource] = "psource"
- Puppet[:plugindest] = plugindest
- Puppet[:pluginsignore] = "pignore"
- Puppet[:pluginfactsource] = "psource"
- Puppet[:pluginfactdest] = plugindest
+ it "downloads plugins and facts" do
+ Puppet.features.stubs(:external_facts?).returns(true)
+
+ plugin_downloader = stub('plugin-downloader', :evaluate => [])
+ facts_downloader = stub('facts-downloader', :evaluate => [])
+
+ factory.expects(:create_plugin_downloader).returns(plugin_downloader)
+ factory.expects(:create_plugin_facts_downloader).returns(facts_downloader)
+
+ pluginhandler.download_plugins(environment)
+ end
+
+ it "skips facts if not enabled" do
+ Puppet.features.stubs(:external_facts?).returns(false)
- downloader = mock 'downloader'
- Puppet::Configurer::Downloader.expects(:new).with("pluginfacts", plugindest, "psource", "pignore", environment, :use).returns downloader
- Puppet::Configurer::Downloader.expects(:new).with("plugin", plugindest, "psource", "pignore", environment).returns downloader
+ plugin_downloader = stub('plugin-downloader', :evaluate => [])
- downloader.stubs(:evaluate).returns([])
- downloader.expects(:evaluate).twice
+ factory.expects(:create_plugin_downloader).returns(plugin_downloader)
+ factory.expects(:create_plugin_facts_downloader).never
- @pluginhandler.download_plugins(environment)
+ pluginhandler.download_plugins(environment)
end
end
diff --git a/spec/unit/configurer_spec.rb b/spec/unit/configurer_spec.rb
index ecbedec28..093990591 100755
--- a/spec/unit/configurer_spec.rb
+++ b/spec/unit/configurer_spec.rb
@@ -1,684 +1,680 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet/configurer'
describe Puppet::Configurer do
before do
Puppet.settings.stubs(:use).returns(true)
@agent = Puppet::Configurer.new
@agent.stubs(:init_storage)
Puppet::Util::Storage.stubs(:store)
Puppet[:server] = "puppetmaster"
Puppet[:report] = true
end
- it "should include the Plugin Handler module" do
- Puppet::Configurer.ancestors.should be_include(Puppet::Configurer::PluginHandler)
- end
-
it "should include the Fact Handler module" do
Puppet::Configurer.ancestors.should be_include(Puppet::Configurer::FactHandler)
end
describe "when executing a pre-run hook" do
it "should do nothing if the hook is set to an empty string" do
Puppet.settings[:prerun_command] = ""
Puppet::Util.expects(:exec).never
@agent.execute_prerun_command
end
it "should execute any pre-run command provided via the 'prerun_command' setting" do
Puppet.settings[:prerun_command] = "/my/command"
Puppet::Util::Execution.expects(:execute).with(["/my/command"]).raises(Puppet::ExecutionFailure, "Failed")
@agent.execute_prerun_command
end
it "should fail if the command fails" do
Puppet.settings[:prerun_command] = "/my/command"
Puppet::Util::Execution.expects(:execute).with(["/my/command"]).raises(Puppet::ExecutionFailure, "Failed")
@agent.execute_prerun_command.should be_false
end
end
describe "when executing a post-run hook" do
it "should do nothing if the hook is set to an empty string" do
Puppet.settings[:postrun_command] = ""
Puppet::Util.expects(:exec).never
@agent.execute_postrun_command
end
it "should execute any post-run command provided via the 'postrun_command' setting" do
Puppet.settings[:postrun_command] = "/my/command"
Puppet::Util::Execution.expects(:execute).with(["/my/command"]).raises(Puppet::ExecutionFailure, "Failed")
@agent.execute_postrun_command
end
it "should fail if the command fails" do
Puppet.settings[:postrun_command] = "/my/command"
Puppet::Util::Execution.expects(:execute).with(["/my/command"]).raises(Puppet::ExecutionFailure, "Failed")
@agent.execute_postrun_command.should be_false
end
end
describe "when executing a catalog run" do
before do
Puppet.settings.stubs(:use).returns(true)
@agent.stubs(:download_plugins)
Puppet::Node::Facts.indirection.terminus_class = :memory
@facts = Puppet::Node::Facts.new(Puppet[:node_name_value])
Puppet::Node::Facts.indirection.save(@facts)
@catalog = Puppet::Resource::Catalog.new("tester", Puppet::Node::Environment.remote(Puppet[:environment].to_sym))
@catalog.stubs(:to_ral).returns(@catalog)
Puppet::Resource::Catalog.indirection.terminus_class = :rest
Puppet::Resource::Catalog.indirection.stubs(:find).returns(@catalog)
@agent.stubs(:send_report)
@agent.stubs(:save_last_run_summary)
Puppet::Util::Log.stubs(:close_all)
end
after :all do
Puppet::Node::Facts.indirection.reset_terminus_class
Puppet::Resource::Catalog.indirection.reset_terminus_class
end
it "should initialize storage" do
Puppet::Util::Storage.expects(:load)
@agent.run
end
it "downloads plugins when told" do
@agent.expects(:download_plugins)
@agent.run(:pluginsync => true)
end
it "does not download plugins when told" do
@agent.expects(:download_plugins).never
@agent.run(:pluginsync => false)
end
it "should carry on when it can't fetch its node definition" do
error = Net::HTTPError.new(400, 'dummy server communication error')
Puppet::Node.indirection.expects(:find).raises(error)
@agent.run.should == 0
end
it "applies a cached catalog when it can't connect to the master" do
error = Errno::ECONNREFUSED.new('Connection refused - connect(2)')
Puppet::Node.indirection.expects(:find).raises(error)
Puppet::Resource::Catalog.indirection.expects(:find).with(anything, has_entry(:ignore_cache => true)).raises(error)
Puppet::Resource::Catalog.indirection.expects(:find).with(anything, has_entry(:ignore_terminus => true)).returns(@catalog)
@agent.run.should == 0
end
it "should initialize a transaction report if one is not provided" do
report = Puppet::Transaction::Report.new("apply")
Puppet::Transaction::Report.expects(:new).returns report
@agent.run
end
it "should respect node_name_fact when setting the host on a report" do
Puppet[:node_name_fact] = 'my_name_fact'
@facts.values = {'my_name_fact' => 'node_name_from_fact'}
report = Puppet::Transaction::Report.new("apply")
@agent.run(:report => report)
report.host.should == 'node_name_from_fact'
end
it "should pass the new report to the catalog" do
report = Puppet::Transaction::Report.new("apply")
Puppet::Transaction::Report.stubs(:new).returns report
@catalog.expects(:apply).with{|options| options[:report] == report}
@agent.run
end
it "should use the provided report if it was passed one" do
report = Puppet::Transaction::Report.new("apply")
@catalog.expects(:apply).with {|options| options[:report] == report}
@agent.run(:report => report)
end
it "should set the report as a log destination" do
report = Puppet::Transaction::Report.new("apply")
report.expects(:<<).with(instance_of(Puppet::Util::Log)).at_least_once
@agent.run(:report => report)
end
it "should retrieve the catalog" do
@agent.expects(:retrieve_catalog)
@agent.run
end
it "should log a failure and do nothing if no catalog can be retrieved" do
@agent.expects(:retrieve_catalog).returns nil
Puppet.expects(:err).with "Could not retrieve catalog; skipping run"
@agent.run
end
it "should apply the catalog with all options to :run" do
@agent.expects(:retrieve_catalog).returns @catalog
@catalog.expects(:apply).with { |args| args[:one] == true }
@agent.run :one => true
end
it "should accept a catalog and use it instead of retrieving a different one" do
@agent.expects(:retrieve_catalog).never
@catalog.expects(:apply)
@agent.run :one => true, :catalog => @catalog
end
it "should benchmark how long it takes to apply the catalog" do
@agent.expects(:benchmark).with(:notice, "Finished catalog run")
@agent.expects(:retrieve_catalog).returns @catalog
@catalog.expects(:apply).never # because we're not yielding
@agent.run
end
it "should execute post-run hooks after the run" do
@agent.expects(:execute_postrun_command)
@agent.run
end
it "should send the report" do
report = Puppet::Transaction::Report.new("apply", nil, "test", "aaaa")
Puppet::Transaction::Report.expects(:new).returns(report)
@agent.expects(:send_report).with(report)
report.environment.should == "test"
report.transaction_uuid.should == "aaaa"
@agent.run
end
it "should send the transaction report even if the catalog could not be retrieved" do
@agent.expects(:retrieve_catalog).returns nil
report = Puppet::Transaction::Report.new("apply", nil, "test", "aaaa")
Puppet::Transaction::Report.expects(:new).returns(report)
@agent.expects(:send_report).with(report)
report.environment.should == "test"
report.transaction_uuid.should == "aaaa"
@agent.run
end
it "should send the transaction report even if there is a failure" do
@agent.expects(:retrieve_catalog).raises "whatever"
report = Puppet::Transaction::Report.new("apply", nil, "test", "aaaa")
Puppet::Transaction::Report.expects(:new).returns(report)
@agent.expects(:send_report).with(report)
report.environment.should == "test"
report.transaction_uuid.should == "aaaa"
@agent.run.should be_nil
end
it "should remove the report as a log destination when the run is finished" do
report = Puppet::Transaction::Report.new("apply")
Puppet::Transaction::Report.expects(:new).returns(report)
@agent.run
Puppet::Util::Log.destinations.should_not include(report)
end
it "should return the report exit_status as the result of the run" do
report = Puppet::Transaction::Report.new("apply")
Puppet::Transaction::Report.expects(:new).returns(report)
report.expects(:exit_status).returns(1234)
@agent.run.should == 1234
end
it "should send the transaction report even if the pre-run command fails" do
report = Puppet::Transaction::Report.new("apply")
Puppet::Transaction::Report.expects(:new).returns(report)
Puppet.settings[:prerun_command] = "/my/command"
Puppet::Util::Execution.expects(:execute).with(["/my/command"]).raises(Puppet::ExecutionFailure, "Failed")
@agent.expects(:send_report).with(report)
@agent.run.should be_nil
end
it "should include the pre-run command failure in the report" do
report = Puppet::Transaction::Report.new("apply")
Puppet::Transaction::Report.expects(:new).returns(report)
Puppet.settings[:prerun_command] = "/my/command"
Puppet::Util::Execution.expects(:execute).with(["/my/command"]).raises(Puppet::ExecutionFailure, "Failed")
@agent.run.should be_nil
report.logs.find { |x| x.message =~ /Could not run command from prerun_command/ }.should be
end
it "should send the transaction report even if the post-run command fails" do
report = Puppet::Transaction::Report.new("apply")
Puppet::Transaction::Report.expects(:new).returns(report)
Puppet.settings[:postrun_command] = "/my/command"
Puppet::Util::Execution.expects(:execute).with(["/my/command"]).raises(Puppet::ExecutionFailure, "Failed")
@agent.expects(:send_report).with(report)
@agent.run.should be_nil
end
it "should include the post-run command failure in the report" do
report = Puppet::Transaction::Report.new("apply")
Puppet::Transaction::Report.expects(:new).returns(report)
Puppet.settings[:postrun_command] = "/my/command"
Puppet::Util::Execution.expects(:execute).with(["/my/command"]).raises(Puppet::ExecutionFailure, "Failed")
report.expects(:<<).with { |log| log.message.include?("Could not run command from postrun_command") }
@agent.run.should be_nil
end
it "should execute post-run command even if the pre-run command fails" do
Puppet.settings[:prerun_command] = "/my/precommand"
Puppet.settings[:postrun_command] = "/my/postcommand"
Puppet::Util::Execution.expects(:execute).with(["/my/precommand"]).raises(Puppet::ExecutionFailure, "Failed")
Puppet::Util::Execution.expects(:execute).with(["/my/postcommand"])
@agent.run.should be_nil
end
it "should finalize the report" do
report = Puppet::Transaction::Report.new("apply")
Puppet::Transaction::Report.expects(:new).returns(report)
report.expects(:finalize_report)
@agent.run
end
it "should not apply the catalog if the pre-run command fails" do
report = Puppet::Transaction::Report.new("apply")
Puppet::Transaction::Report.expects(:new).returns(report)
Puppet.settings[:prerun_command] = "/my/command"
Puppet::Util::Execution.expects(:execute).with(["/my/command"]).raises(Puppet::ExecutionFailure, "Failed")
@catalog.expects(:apply).never()
@agent.expects(:send_report)
@agent.run.should be_nil
end
it "should apply the catalog, send the report, and return nil if the post-run command fails" do
report = Puppet::Transaction::Report.new("apply")
Puppet::Transaction::Report.expects(:new).returns(report)
Puppet.settings[:postrun_command] = "/my/command"
Puppet::Util::Execution.expects(:execute).with(["/my/command"]).raises(Puppet::ExecutionFailure, "Failed")
@catalog.expects(:apply)
@agent.expects(:send_report)
@agent.run.should be_nil
end
it "should refetch the catalog if the server specifies a new environment in the catalog" do
@catalog.stubs(:environment).returns("second_env")
@agent.expects(:retrieve_catalog).returns(@catalog).twice
@agent.run
end
it "should change the environment setting if the server specifies a new environment in the catalog" do
@catalog.stubs(:environment).returns("second_env")
@agent.run
@agent.environment.should == "second_env"
end
it "should fix the report if the server specifies a new environment in the catalog" do
report = Puppet::Transaction::Report.new("apply", nil, "test", "aaaa")
Puppet::Transaction::Report.expects(:new).returns(report)
@agent.expects(:send_report).with(report)
@catalog.stubs(:environment).returns("second_env")
@agent.stubs(:retrieve_catalog).returns(@catalog)
@agent.run
report.environment.should == "second_env"
end
it "should clear the global caches" do
$env_module_directories = false
@agent.run
$env_module_directories.should == nil
end
describe "when not using a REST terminus for catalogs" do
it "should not pass any facts when retrieving the catalog" do
Puppet::Resource::Catalog.indirection.terminus_class = :compiler
@agent.expects(:facts_for_uploading).never
Puppet::Resource::Catalog.indirection.expects(:find).with { |name, options|
options[:facts].nil?
}.returns @catalog
@agent.run
end
end
describe "when using a REST terminus for catalogs" do
it "should pass the prepared facts and the facts format as arguments when retrieving the catalog" do
Puppet::Resource::Catalog.indirection.terminus_class = :rest
@agent.expects(:facts_for_uploading).returns(:facts => "myfacts", :facts_format => :foo)
Puppet::Resource::Catalog.indirection.expects(:find).with { |name, options|
options[:facts] == "myfacts" and options[:facts_format] == :foo
}.returns @catalog
@agent.run
end
end
end
describe "when sending a report" do
include PuppetSpec::Files
before do
Puppet.settings.stubs(:use).returns(true)
@configurer = Puppet::Configurer.new
Puppet[:lastrunfile] = tmpfile('last_run_file')
@report = Puppet::Transaction::Report.new("apply")
Puppet[:reports] = "none"
end
it "should print a report summary if configured to do so" do
Puppet.settings[:summarize] = true
@report.expects(:summary).returns "stuff"
@configurer.expects(:puts).with("stuff")
@configurer.send_report(@report)
end
it "should not print a report summary if not configured to do so" do
Puppet.settings[:summarize] = false
@configurer.expects(:puts).never
@configurer.send_report(@report)
end
it "should save the report if reporting is enabled" do
Puppet.settings[:report] = true
Puppet::Transaction::Report.indirection.expects(:save).with(@report, nil, instance_of(Hash))
@configurer.send_report(@report)
end
it "should not save the report if reporting is disabled" do
Puppet.settings[:report] = false
Puppet::Transaction::Report.indirection.expects(:save).with(@report, nil, instance_of(Hash)).never
@configurer.send_report(@report)
end
it "should save the last run summary if reporting is enabled" do
Puppet.settings[:report] = true
@configurer.expects(:save_last_run_summary).with(@report)
@configurer.send_report(@report)
end
it "should save the last run summary if reporting is disabled" do
Puppet.settings[:report] = false
@configurer.expects(:save_last_run_summary).with(@report)
@configurer.send_report(@report)
end
it "should log but not fail if saving the report fails" do
Puppet.settings[:report] = true
Puppet::Transaction::Report.indirection.expects(:save).raises("whatever")
Puppet.expects(:err)
lambda { @configurer.send_report(@report) }.should_not raise_error
end
end
describe "when saving the summary report file" do
include PuppetSpec::Files
before do
Puppet.settings.stubs(:use).returns(true)
@configurer = Puppet::Configurer.new
@report = stub 'report', :raw_summary => {}
Puppet[:lastrunfile] = tmpfile('last_run_file')
end
it "should write the last run file" do
@configurer.save_last_run_summary(@report)
Puppet::FileSystem.exist?(Puppet[:lastrunfile]).should be_true
end
it "should write the raw summary as yaml" do
@report.expects(:raw_summary).returns("summary")
@configurer.save_last_run_summary(@report)
File.read(Puppet[:lastrunfile]).should == YAML.dump("summary")
end
it "should log but not fail if saving the last run summary fails" do
# The mock will raise an exception on any method used. This should
# simulate a nice hard failure from the underlying OS for us.
fh = Class.new(Object) do
def method_missing(*args)
raise "failed to do #{args[0]}"
end
end.new
Puppet::Util.expects(:replace_file).yields(fh)
Puppet.expects(:err)
expect { @configurer.save_last_run_summary(@report) }.to_not raise_error
end
it "should create the last run file with the correct mode" do
Puppet.settings.setting(:lastrunfile).expects(:mode).returns('664')
@configurer.save_last_run_summary(@report)
if Puppet::Util::Platform.windows?
require 'puppet/util/windows/security'
mode = Puppet::Util::Windows::Security.get_mode(Puppet[:lastrunfile])
else
mode = Puppet::FileSystem.stat(Puppet[:lastrunfile]).mode
end
(mode & 0777).should == 0664
end
it "should report invalid last run file permissions" do
Puppet.settings.setting(:lastrunfile).expects(:mode).returns('892')
Puppet.expects(:err).with(regexp_matches(/Could not save last run local report.*892 is invalid/))
@configurer.save_last_run_summary(@report)
end
end
describe "when requesting a node" do
it "uses the transaction uuid in the request" do
Puppet::Node.indirection.expects(:find).with(anything, has_entries(:transaction_uuid => anything)).twice
@agent.run
end
end
describe "when retrieving a catalog" do
before do
Puppet.settings.stubs(:use).returns(true)
@agent.stubs(:facts_for_uploading).returns({})
@catalog = Puppet::Resource::Catalog.new
# this is the default when using a Configurer instance
Puppet::Resource::Catalog.indirection.stubs(:terminus_class).returns :rest
@agent.stubs(:convert_catalog).returns @catalog
end
describe "and configured to only retrieve a catalog from the cache" do
before do
Puppet.settings[:use_cached_catalog] = true
end
it "should first look in the cache for a catalog" do
Puppet::Resource::Catalog.indirection.expects(:find).with { |name, options| options[:ignore_terminus] == true }.returns @catalog
Puppet::Resource::Catalog.indirection.expects(:find).with { |name, options| options[:ignore_cache] == true }.never
@agent.retrieve_catalog({}).should == @catalog
end
it "should compile a new catalog if none is found in the cache" do
Puppet::Resource::Catalog.indirection.expects(:find).with { |name, options| options[:ignore_terminus] == true }.returns nil
Puppet::Resource::Catalog.indirection.expects(:find).with { |name, options| options[:ignore_cache] == true }.returns @catalog
@agent.retrieve_catalog({}).should == @catalog
end
end
it "should use the Catalog class to get its catalog" do
Puppet::Resource::Catalog.indirection.expects(:find).returns @catalog
@agent.retrieve_catalog({})
end
it "should use its node_name_value to retrieve the catalog" do
Facter.stubs(:value).returns "eh"
Puppet.settings[:node_name_value] = "myhost.domain.com"
Puppet::Resource::Catalog.indirection.expects(:find).with { |name, options| name == "myhost.domain.com" }.returns @catalog
@agent.retrieve_catalog({})
end
it "should default to returning a catalog retrieved directly from the server, skipping the cache" do
Puppet::Resource::Catalog.indirection.expects(:find).with { |name, options| options[:ignore_cache] == true }.returns @catalog
@agent.retrieve_catalog({}).should == @catalog
end
it "should log and return the cached catalog when no catalog can be retrieved from the server" do
Puppet::Resource::Catalog.indirection.expects(:find).with { |name, options| options[:ignore_cache] == true }.returns nil
Puppet::Resource::Catalog.indirection.expects(:find).with { |name, options| options[:ignore_terminus] == true }.returns @catalog
Puppet.expects(:notice)
@agent.retrieve_catalog({}).should == @catalog
end
it "should not look in the cache for a catalog if one is returned from the server" do
Puppet::Resource::Catalog.indirection.expects(:find).with { |name, options| options[:ignore_cache] == true }.returns @catalog
Puppet::Resource::Catalog.indirection.expects(:find).with { |name, options| options[:ignore_terminus] == true }.never
@agent.retrieve_catalog({}).should == @catalog
end
it "should return the cached catalog when retrieving the remote catalog throws an exception" do
Puppet::Resource::Catalog.indirection.expects(:find).with { |name, options| options[:ignore_cache] == true }.raises "eh"
Puppet::Resource::Catalog.indirection.expects(:find).with { |name, options| options[:ignore_terminus] == true }.returns @catalog
@agent.retrieve_catalog({}).should == @catalog
end
it "should log and return nil if no catalog can be retrieved from the server and :usecacheonfailure is disabled" do
Puppet[:usecacheonfailure] = false
Puppet::Resource::Catalog.indirection.expects(:find).with { |name, options| options[:ignore_cache] == true }.returns nil
Puppet.expects(:warning)
@agent.retrieve_catalog({}).should be_nil
end
it "should return nil if no cached catalog is available and no catalog can be retrieved from the server" do
Puppet::Resource::Catalog.indirection.expects(:find).with { |name, options| options[:ignore_cache] == true }.returns nil
Puppet::Resource::Catalog.indirection.expects(:find).with { |name, options| options[:ignore_terminus] == true }.returns nil
@agent.retrieve_catalog({}).should be_nil
end
it "should convert the catalog before returning" do
Puppet::Resource::Catalog.indirection.stubs(:find).returns @catalog
@agent.expects(:convert_catalog).with { |cat, dur| cat == @catalog }.returns "converted catalog"
@agent.retrieve_catalog({}).should == "converted catalog"
end
it "should return nil if there is an error while retrieving the catalog" do
Puppet::Resource::Catalog.indirection.expects(:find).at_least_once.raises "eh"
@agent.retrieve_catalog({}).should be_nil
end
end
describe "when converting the catalog" do
before do
Puppet.settings.stubs(:use).returns(true)
@catalog = Puppet::Resource::Catalog.new
@oldcatalog = stub 'old_catalog', :to_ral => @catalog
end
it "should convert the catalog to a RAL-formed catalog" do
@oldcatalog.expects(:to_ral).returns @catalog
@agent.convert_catalog(@oldcatalog, 10).should equal(@catalog)
end
it "should finalize the catalog" do
@catalog.expects(:finalize)
@agent.convert_catalog(@oldcatalog, 10)
end
it "should record the passed retrieval time with the RAL catalog" do
@catalog.expects(:retrieval_duration=).with 10
@agent.convert_catalog(@oldcatalog, 10)
end
it "should write the RAL catalog's class file" do
@catalog.expects(:write_class_file)
@agent.convert_catalog(@oldcatalog, 10)
end
it "should write the RAL catalog's resource file" do
@catalog.expects(:write_resource_file)
@agent.convert_catalog(@oldcatalog, 10)
end
end
end
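
The failing file handle in the last-run-summary spec above relies on a plain Ruby idiom: an object whose method_missing raises, so any call on it looks like a hard failure from the underlying OS. A self-contained sketch of the same idiom is below; FailingHandle and the choice of IOError are illustrative, not taken from the patch.

    class FailingHandle
      # any method call on this object raises, simulating a failed write/flush/close
      def method_missing(name, *args)
        raise IOError, "failed to do #{name}"
      end
    end

    begin
      FailingHandle.new.write("summary")   # behaves like a disk write that failed
    rescue IOError => e
      puts "logged instead of crashing: #{e.message}"
    end
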
diff --git a/spec/unit/defaults_spec.rb b/spec/unit/defaults_spec.rb
index f86283a64..0d141d91c 100644
--- a/spec/unit/defaults_spec.rb
+++ b/spec/unit/defaults_spec.rb
@@ -1,44 +1,74 @@
require 'spec_helper'
require 'puppet/settings'
describe "Defaults" do
describe ".default_diffargs" do
describe "on AIX" do
before(:each) do
Facter.stubs(:value).with(:kernel).returns("AIX")
end
describe "on 5.3" do
before(:each) do
Facter.stubs(:value).with(:kernelmajversion).returns("5300")
end
it "should be empty" do
Puppet.default_diffargs.should == ""
end
end
[ "",
nil,
"6300",
"7300",
].each do |kernel_version|
describe "on kernel version #{kernel_version.inspect}" do
before(:each) do
Facter.stubs(:value).with(:kernelmajversion).returns(kernel_version)
end
it "should be '-u'" do
Puppet.default_diffargs.should == "-u"
end
end
end
end
describe "on everything else" do
before(:each) do
Facter.stubs(:value).with(:kernel).returns("NOT_AIX")
end
it "should be '-u'" do
Puppet.default_diffargs.should == "-u"
end
end
end
+
+ describe 'cfacter' do
+
+ before :each do
+ Facter.reset
+ end
+
+ it 'should default to false' do
+ Puppet.settings[:cfacter].should be_false
+ end
+
+ it 'should raise an error if cfacter is not installed' do
+ Puppet.features.stubs(:cfacter?).returns false
+ lambda { Puppet.settings[:cfacter] = true }.should raise_exception ArgumentError, 'cfacter version 0.2.0 or later is not installed.'
+ end
+
+ it 'should raise an error if facter has already evaluated facts' do
+ Facter[:facterversion]
+ Puppet.features.stubs(:cfacter?).returns true
+ lambda { Puppet.settings[:cfacter] = true }.should raise_exception ArgumentError, 'facter has already evaluated facts.'
+ end
+
+ it 'should initialize cfacter when set to true' do
+ Puppet.features.stubs(:cfacter?).returns true
+ CFacter = mock
+ CFacter.stubs(:initialize)
+ Puppet.settings[:cfacter] = true
+ end
+
+ end
end
diff --git a/spec/unit/face/certificate_request_spec.rb b/spec/unit/face/certificate_request_spec.rb
deleted file mode 100755
index 60917254f..000000000
--- a/spec/unit/face/certificate_request_spec.rb
+++ /dev/null
@@ -1,7 +0,0 @@
-#! /usr/bin/env ruby
-require 'spec_helper'
-require 'puppet/face'
-
-describe Puppet::Face[:certificate_request, '0.0.1'] do
- it "should actually have some tests..."
-end
diff --git a/spec/unit/face/certificate_revocation_list_spec.rb b/spec/unit/face/certificate_revocation_list_spec.rb
deleted file mode 100755
index b833fb718..000000000
--- a/spec/unit/face/certificate_revocation_list_spec.rb
+++ /dev/null
@@ -1,7 +0,0 @@
-#! /usr/bin/env ruby
-require 'spec_helper'
-require 'puppet/face'
-
-describe Puppet::Face[:certificate_revocation_list, '0.0.1'] do
- it "should actually have some tests..."
-end
diff --git a/spec/unit/face/config_spec.rb b/spec/unit/face/config_spec.rb
index f42844827..2c43b33b7 100755
--- a/spec/unit/face/config_spec.rb
+++ b/spec/unit/face/config_spec.rb
@@ -1,146 +1,147 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet/face'
module PuppetFaceSpecs
describe Puppet::Face[:config, '0.0.1'] do
FS = Puppet::FileSystem
it "prints a single setting without the name" do
Puppet[:trace] = true
expect { subject.print("trace") }.to have_printed('true')
end
it "prints multiple settings with the names" do
Puppet[:trace] = true
Puppet[:syslogfacility] = "file"
expect { subject.print("trace", "syslogfacility") }.to have_printed(<<-OUTPUT)
trace = true
syslogfacility = file
OUTPUT
end
it "prints the setting from the selected section" do
Puppet.settings.parse_config(<<-CONF)
[other]
syslogfacility = file
CONF
expect { subject.print("syslogfacility", :section => "other") }.to have_printed('file')
end
it "defaults to all when no arguments are given" do
subject.expects(:puts).times(Puppet.settings.to_a.length)
subject.print
end
it "prints out all of the settings when asked for 'all'" do
subject.expects(:puts).times(Puppet.settings.to_a.length)
subject.print('all')
end
shared_examples_for :config_printing_a_section do |section|
def add_section_option(args, section)
args << { :section => section } if section
args
end
it "prints directory env settings for an env that exists" do
FS.overlay(
FS::MemoryFile.a_directory(File.expand_path("/dev/null/environments"), [
FS::MemoryFile.a_directory("production", [
FS::MemoryFile.a_missing_file("environment.conf"),
]),
])
) do
args = "environmentpath","manifest","modulepath","environment","basemodulepath"
expect { subject.print(*add_section_option(args, section)) }.to have_printed(<<-OUTPUT)
environmentpath = #{File.expand_path("/dev/null/environments")}
manifest = #{File.expand_path("/dev/null/environments/production/manifests")}
modulepath = #{File.expand_path("/dev/null/environments/production/modules")}#{File::PATH_SEPARATOR}#{File.expand_path("/some/base")}
environment = production
basemodulepath = #{File.expand_path("/some/base")}
OUTPUT
end
end
it "interpolates settings in environment.conf" do
FS.overlay(
FS::MemoryFile.a_directory(File.expand_path("/dev/null/environments"), [
FS::MemoryFile.a_directory("production", [
FS::MemoryFile.a_regular_file_containing("environment.conf", <<-CONTENT),
modulepath=/custom/modules#{File::PATH_SEPARATOR}$basemodulepath
CONTENT
]),
])
) do
args = "environmentpath","manifest","modulepath","environment","basemodulepath"
expect { subject.print(*add_section_option(args, section)) }.to have_printed(<<-OUTPUT)
environmentpath = #{File.expand_path("/dev/null/environments")}
manifest = #{File.expand_path("/dev/null/environments/production/manifests")}
modulepath = #{File.expand_path("/custom/modules")}#{File::PATH_SEPARATOR}#{File.expand_path("/some/base")}
environment = production
basemodulepath = #{File.expand_path("/some/base")}
OUTPUT
end
end
it "prints the default configured env settings for an env that does not exist" do
+ pending "This case no longer exists because Application will throw an error about the non-existent environment before we even get here"
Puppet[:environment] = 'doesnotexist'
FS.overlay(
FS::MemoryFile.a_directory(File.expand_path("/dev/null/environments"), [
FS::MemoryFile.a_missing_file("doesnotexist")
])
) do
args = "environmentpath","manifest","modulepath","environment","basemodulepath"
expect { subject.print(*add_section_option(args, section)) }.to have_printed(<<-OUTPUT)
environmentpath = #{File.expand_path("/dev/null/environments")}
manifest = no_manifest
modulepath =
environment = doesnotexist
basemodulepath = #{File.expand_path("/some/base")}
OUTPUT
end
end
end
context "when printing environment settings" do
before(:each) do
Puppet.settings.stubs(:global_defaults_initialized?).returns(:true)
end
context "from main section" do
before(:each) do
Puppet.settings.parse_config(<<-CONF)
[main]
environmentpath=$confdir/environments
basemodulepath=/some/base
CONF
end
- it_behaves_like :config_printing_a_section
+ it_behaves_like :config_printing_a_section, nil
end
context "from master section" do
before(:each) do
Puppet.settings.parse_config(<<-CONF)
[master]
environmentpath=$confdir/environments
basemodulepath=/some/base
CONF
Puppet.settings.stubs(:global_defaults_initialized?).returns(:true)
end
it_behaves_like :config_printing_a_section, :master
end
end
end
end
diff --git a/spec/unit/face/key_spec.rb b/spec/unit/face/key_spec.rb
deleted file mode 100755
index 6d9519825..000000000
--- a/spec/unit/face/key_spec.rb
+++ /dev/null
@@ -1,7 +0,0 @@
-#! /usr/bin/env ruby
-require 'spec_helper'
-require 'puppet/face'
-
-describe Puppet::Face[:key, '0.0.1'] do
- it "should actually have some tests..."
-end
diff --git a/spec/unit/face/module/build_spec.rb b/spec/unit/face/module/build_spec.rb
index 02bbb7e9d..72ba5e1f1 100644
--- a/spec/unit/face/module/build_spec.rb
+++ b/spec/unit/face/module/build_spec.rb
@@ -1,69 +1,69 @@
require 'spec_helper'
require 'puppet/face'
require 'puppet/module_tool'
describe "puppet module build" do
subject { Puppet::Face[:module, :current] }
describe "when called without any options" do
it "if current directory is a module root should call builder with it" do
Dir.expects(:pwd).returns('/a/b/c')
Puppet::ModuleTool.expects(:find_module_root).with('/a/b/c').returns('/a/b/c')
Puppet::ModuleTool.expects(:set_option_defaults).returns({})
Puppet::ModuleTool::Applications::Builder.expects(:run).with('/a/b/c', {})
subject.build
end
it "if parent directory of current dir is a module root should call builder with it" do
Dir.expects(:pwd).returns('/a/b/c')
Puppet::ModuleTool.expects(:find_module_root).with('/a/b/c').returns('/a/b')
Puppet::ModuleTool.expects(:set_option_defaults).returns({})
Puppet::ModuleTool::Applications::Builder.expects(:run).with('/a/b', {})
subject.build
end
it "if current directory or parents contain no module root, should return exception" do
Dir.expects(:pwd).returns('/a/b/c')
Puppet::ModuleTool.expects(:find_module_root).returns(nil)
- expect { subject.build }.to raise_error RuntimeError, "Unable to find module root at /a/b/c or parent directories"
+ expect { subject.build }.to raise_error RuntimeError, "Unable to find metadata.json or Modulefile in module root /a/b/c or parent directories. See <http://links.puppetlabs.com/modulefile> for required file format."
end
end
describe "when called with a path" do
it "if path is a module root should call builder with it" do
Puppet::ModuleTool.expects(:is_module_root?).with('/a/b/c').returns(true)
Puppet::ModuleTool.expects(:set_option_defaults).returns({})
Puppet::ModuleTool::Applications::Builder.expects(:run).with('/a/b/c', {})
subject.build('/a/b/c')
end
it "if path is not a module root should raise exception" do
Puppet::ModuleTool.expects(:is_module_root?).with('/a/b/c').returns(false)
- expect { subject.build('/a/b/c') }.to raise_error RuntimeError, "Unable to find module root at /a/b/c"
+ expect { subject.build('/a/b/c') }.to raise_error RuntimeError, "Unable to find metadata.json or Modulefile in module root /a/b/c. See <http://links.puppetlabs.com/modulefile> for required file format."
end
end
describe "with options" do
it "should pass through options to builder when provided" do
Puppet::ModuleTool.stubs(:is_module_root?).returns(true)
Puppet::ModuleTool.expects(:set_option_defaults).returns({})
Puppet::ModuleTool::Applications::Builder.expects(:run).with('/a/b/c', {:modulepath => '/x/y/z'})
subject.build('/a/b/c', :modulepath => '/x/y/z')
end
end
describe "inline documentation" do
subject { Puppet::Face[:module, :current].get_action :build }
its(:summary) { should =~ /build.*module/im }
its(:description) { should =~ /build.*module/im }
its(:returns) { should =~ /pathname/i }
its(:examples) { should_not be_empty }
%w{ license copyright summary description returns examples }.each do |doc|
context "of the" do
its(doc.to_sym) { should_not =~ /(FIXME|REVISIT|TODO)/ }
end
end
end
end
diff --git a/spec/unit/face/parser_spec.rb b/spec/unit/face/parser_spec.rb
index 9b1b07a47..e9ba07a2d 100644
--- a/spec/unit/face/parser_spec.rb
+++ b/spec/unit/face/parser_spec.rb
@@ -1,73 +1,111 @@
require 'spec_helper'
require 'puppet_spec/files'
require 'puppet/face'
describe Puppet::Face[:parser, :current] do
include PuppetSpec::Files
let(:parser) { Puppet::Face[:parser, :current] }
- context "from an interactive terminal" do
- before :each do
- from_an_interactive_terminal
+ context "validate" do
+ context "from an interactive terminal" do
+ before :each do
+ from_an_interactive_terminal
+ end
+
+ it "validates the configured site manifest when no files are given" do
+ manifest = file_containing('site.pp', "{ invalid =>")
+
+ configured_environment = Puppet::Node::Environment.create(:default, [], manifest)
+ Puppet.override(:current_environment => configured_environment) do
+ expect { parser.validate() }.to exit_with(1)
+ end
+ end
+
+ it "validates the given file" do
+ manifest = file_containing('site.pp', "{ invalid =>")
+
+ expect { parser.validate(manifest) }.to exit_with(1)
+ end
+
+ it "runs error free when there are no validation errors" do
+ manifest = file_containing('site.pp', "notify { valid: }")
+
+ parser.validate(manifest)
+ end
+
+ it "reports missing files" do
+ expect do
+ parser.validate("missing.pp")
+ end.to raise_error(Puppet::Error, /One or more file\(s\) specified did not exist.*missing\.pp/m)
+ end
+
+ it "parses supplied manifest files in the context of a directory environment" do
+ manifest = file_containing('test.pp', "{ invalid =>")
+
+ env = Puppet::Node::Environment.create(:special, [])
+ env_loader = Puppet::Environments::Static.new(env)
+ Puppet.override({:environments => env_loader, :current_environment => env}) do
+ expect { parser.validate(manifest) }.to exit_with(1)
+ end
+
+ expect(@logs.join).to match(/environment special.*Syntax error at '\{'/)
+ end
+
end
- it "validates the configured site manifest when no files are given" do
- manifest = file_containing('site.pp', "{ invalid =>")
+ it "validates the contents of STDIN when no files given and STDIN is not a tty" do
+ from_a_piped_input_of("{ invalid =>")
- configured_environment = Puppet::Node::Environment.create(:default, [], manifest)
- Puppet.override(:current_environment => configured_environment) do
+ Puppet.override(:current_environment => Puppet::Node::Environment.create(:special, [])) do
expect { parser.validate() }.to exit_with(1)
end
end
+ end
- it "validates the given file" do
- manifest = file_containing('site.pp', "{ invalid =>")
-
- expect { parser.validate(manifest) }.to exit_with(1)
+ context "dump" do
+ it "prints the AST of the passed expression" do
+ expect(parser.dump({ :e => 'notice hi' })).to eq("(invoke notice hi)\n")
end
- it "runs error free when there are no validation errors" do
- manifest = file_containing('site.pp', "notify { valid: }")
+ it "prints the AST of the code read from the passed files" do
+ first_manifest = file_containing('site.pp', "notice hi")
+ second_manifest = file_containing('site2.pp', "notice bye")
- parser.validate(manifest)
+ output = parser.dump(first_manifest, second_manifest)
+
+ expect(output).to match(/site\.pp.*\(invoke notice hi\)/)
+ expect(output).to match(/site2\.pp.*\(invoke notice bye\)/)
end
- it "reports missing files" do
- expect do
- parser.validate("missing.pp")
- end.to raise_error(Puppet::Error, /One or more file\(s\) specified did not exist.*missing\.pp/m)
+ it "informs the user of files that don't exist" do
+ expect(parser.dump('does_not_exist_here.pp')).to match(/did not exist:\s*does_not_exist_here\.pp/m)
end
- it "parses supplied manifest files in the context of a directory environment" do
- manifest = file_containing('test.pp', "{ invalid =>")
+ it "prints the AST of STDIN when no files given and STDIN is not a tty" do
+ from_a_piped_input_of("notice hi")
- env_loader = Puppet::Environments::Static.new(
- Puppet::Node::Environment.create(:special, [])
- )
- Puppet.override(:environments => env_loader) do
- Puppet[:environment] = 'special'
- expect { parser.validate(manifest) }.to exit_with(1)
+ Puppet.override(:current_environment => Puppet::Node::Environment.create(:special, [])) do
+ expect(parser.dump()).to eq("(invoke notice hi)\n")
end
-
- expect(@logs.join).to match(/environment special.*Syntax error at '\{'/)
end
- end
-
- it "validates the contents of STDIN when no files given and STDIN is not a tty" do
- from_a_piped_input_of("{ invalid =>")
+ it "logs an error if the input cannot be parsed" do
+ output = parser.dump({ :e => '{ invalid =>' })
- expect { parser.validate() }.to exit_with(1)
+ expect(output).to eq("")
+ expect(@logs[0].message).to eq("Syntax error at end of file")
+ expect(@logs[0].level).to eq(:err)
+ end
end
def from_an_interactive_terminal
STDIN.stubs(:tty?).returns(true)
end
def from_a_piped_input_of(contents)
STDIN.stubs(:tty?).returns(false)
STDIN.stubs(:read).returns(contents)
end
end
diff --git a/spec/unit/face/report_spec.rb b/spec/unit/face/report_spec.rb
deleted file mode 100755
index 6fcb49a64..000000000
--- a/spec/unit/face/report_spec.rb
+++ /dev/null
@@ -1,7 +0,0 @@
-#! /usr/bin/env ruby
-require 'spec_helper'
-require 'puppet/face'
-
-describe Puppet::Face[:report, '0.0.1'] do
- it "should actually have some tests..."
-end
diff --git a/spec/unit/face/resource_spec.rb b/spec/unit/face/resource_spec.rb
deleted file mode 100755
index 031d78116..000000000
--- a/spec/unit/face/resource_spec.rb
+++ /dev/null
@@ -1,7 +0,0 @@
-#! /usr/bin/env ruby
-require 'spec_helper'
-require 'puppet/face'
-
-describe Puppet::Face[:resource, '0.0.1'] do
- it "should actually have some tests..."
-end
diff --git a/spec/unit/face/resource_type_spec.rb b/spec/unit/face/resource_type_spec.rb
deleted file mode 100755
index 5713a01fb..000000000
--- a/spec/unit/face/resource_type_spec.rb
+++ /dev/null
@@ -1,7 +0,0 @@
-#! /usr/bin/env ruby
-require 'spec_helper'
-require 'puppet/face'
-
-describe Puppet::Face[:resource_type, '0.0.1'] do
- it "should actually have some tests..."
-end
diff --git a/spec/unit/file_bucket/file_spec.rb b/spec/unit/file_bucket/file_spec.rb
index 370ba1eeb..500e81685 100755
--- a/spec/unit/file_bucket/file_spec.rb
+++ b/spec/unit/file_bucket/file_spec.rb
@@ -1,76 +1,76 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet/file_bucket/file'
describe Puppet::FileBucket::File, :uses_checksums => true do
include PuppetSpec::Files
# this is the default from spec_helper, but it keeps getting reset at odd times
let(:bucketdir) { Puppet[:bucketdir] = tmpdir('bucket') }
it "defaults to serializing to `:s`" do
expect(Puppet::FileBucket::File.default_format).to eq(:s)
end
it "accepts s and pson" do
- expect(Puppet::FileBucket::File.supported_formats).to include(:s, :pson)
+ expect(Puppet::FileBucket::File.supported_formats).to include(:s, :pson)
end
describe "making round trips through network formats" do
with_digest_algorithms do
it "can make a round trip through `s`" do
file = Puppet::FileBucket::File.new(plaintext)
tripped = Puppet::FileBucket::File.convert_from(:s, file.render)
expect(tripped.contents).to eq(plaintext)
end
it "can make a round trip through `pson`" do
file = Puppet::FileBucket::File.new(plaintext)
tripped = Puppet::FileBucket::File.convert_from(:pson, file.render(:pson))
expect(tripped.contents).to eq(plaintext)
end
end
end
it "should require contents to be a string" do
- expect { Puppet::FileBucket::File.new(5) }.to raise_error(ArgumentError, /contents must be a String, got a Fixnum$/)
+ expect { Puppet::FileBucket::File.new(5) }.to raise_error(ArgumentError, /contents must be a String or Pathname, got a Fixnum$/)
end
it "should complain about options other than :bucket_path" do
expect {
Puppet::FileBucket::File.new('5', :crazy_option => 'should not be passed')
}.to raise_error(ArgumentError, /Unknown option\(s\): crazy_option/)
end
with_digest_algorithms do
it "it uses #{metadata[:digest_algorithm]} as the configured digest algorithm" do
file = Puppet::FileBucket::File.new(plaintext)
file.contents.should == plaintext
file.checksum_type.should == digest_algorithm
file.checksum.should == "{#{digest_algorithm}}#{checksum}"
file.name.should == "#{digest_algorithm}/#{checksum}"
end
end
describe "when using back-ends" do
it "should redirect using Puppet::Indirector" do
Puppet::Indirector::Indirection.instance(:file_bucket_file).model.should equal(Puppet::FileBucket::File)
end
it "should have a :save instance method" do
Puppet::FileBucket::File.indirection.should respond_to(:save)
end
end
it "should convert the contents to PSON" do
Puppet.expects(:deprecation_warning).with('Serializing Puppet::FileBucket::File objects to pson is deprecated.')
Puppet::FileBucket::File.new("file contents").to_pson.should == '{"contents":"file contents"}'
end
it "should load from PSON" do
Puppet.expects(:deprecation_warning).with('Deserializing Puppet::FileBucket::File objects from pson is deprecated. Upgrade to a newer version.')
Puppet::FileBucket::File.from_pson({"contents"=>"file contents"}).contents.should == "file contents"
end
end
diff --git a/spec/unit/file_system/tempfile_spec.rb b/spec/unit/file_system/tempfile_spec.rb
deleted file mode 100644
index eb13b0406..000000000
--- a/spec/unit/file_system/tempfile_spec.rb
+++ /dev/null
@@ -1,48 +0,0 @@
-require 'spec_helper'
-
-describe Puppet::FileSystem::Tempfile do
- it "makes the name of the file available" do
- Puppet::FileSystem::Tempfile.open('foo') do |file|
- expect(file.path).to match(/foo/)
- end
- end
-
- it "provides a writeable file" do
- Puppet::FileSystem::Tempfile.open('foo') do |file|
- file.write("stuff")
- file.flush
-
- expect(Puppet::FileSystem.read(file.path)).to eq("stuff")
- end
- end
-
- it "returns the value of the block" do
- the_value = Puppet::FileSystem::Tempfile.open('foo') do |file|
- "my value"
- end
-
- expect(the_value).to eq("my value")
- end
-
- it "unlinks the temporary file" do
- filename = Puppet::FileSystem::Tempfile.open('foo') do |file|
- file.path
- end
-
- expect(Puppet::FileSystem.exist?(filename)).to be_false
- end
-
- it "unlinks the temporary file even if the block raises an error" do
- filename = nil
-
- begin
- Puppet::FileSystem::Tempfile.open('foo') do |file|
- filename = file.path
- raise "error!"
- end
- rescue
- end
-
- expect(Puppet::FileSystem.exist?(filename)).to be_false
- end
-end
diff --git a/spec/unit/file_system/uniquefile_spec.rb b/spec/unit/file_system/uniquefile_spec.rb
new file mode 100644
index 000000000..1268581fa
--- /dev/null
+++ b/spec/unit/file_system/uniquefile_spec.rb
@@ -0,0 +1,184 @@
+require 'spec_helper'
+
+describe Puppet::FileSystem::Uniquefile do
+ it "makes the name of the file available" do
+ Puppet::FileSystem::Uniquefile.open_tmp('foo') do |file|
+ expect(file.path).to match(/foo/)
+ end
+ end
+
+ it "provides a writeable file" do
+ Puppet::FileSystem::Uniquefile.open_tmp('foo') do |file|
+ file.write("stuff")
+ file.flush
+
+ expect(Puppet::FileSystem.read(file.path)).to eq("stuff")
+ end
+ end
+
+ it "returns the value of the block" do
+ the_value = Puppet::FileSystem::Uniquefile.open_tmp('foo') do |file|
+ "my value"
+ end
+
+ expect(the_value).to eq("my value")
+ end
+
+ it "unlinks the temporary file" do
+ filename = Puppet::FileSystem::Uniquefile.open_tmp('foo') do |file|
+ file.path
+ end
+
+ expect(Puppet::FileSystem.exist?(filename)).to be_false
+ end
+
+ it "unlinks the temporary file even if the block raises an error" do
+ filename = nil
+
+ begin
+ Puppet::FileSystem::Uniquefile.open_tmp('foo') do |file|
+ filename = file.path
+ raise "error!"
+ end
+ rescue
+ end
+
+ expect(Puppet::FileSystem.exist?(filename)).to be_false
+ end
+
+
+ context "Ruby 1.9.3 Tempfile tests" do
+ # the remaining tests in this file are ported directly from the ruby 1.9.3 source,
+ # since most of this file was ported from there
+ # see: https://github.com/ruby/ruby/blob/v1_9_3_547/test/test_tempfile.rb
+
+ def tempfile(*args, &block)
+ t = Puppet::FileSystem::Uniquefile.new(*args, &block)
+ @tempfile = (t unless block)
+ end
+
+ after(:each) do
+ if @tempfile
+ @tempfile.close!
+ end
+ end
+
+ it "creates tempfiles" do
+ t = tempfile("foo")
+ path = t.path
+ t.write("hello world")
+ t.close
+ expect(File.read(path)).to eq("hello world")
+ end
+
+ it "saves in tmpdir by default" do
+ t = tempfile("foo")
+ expect(Dir.tmpdir).to eq(File.dirname(t.path))
+ end
+
+ it "saves in given directory" do
+ subdir = File.join(Dir.tmpdir, "tempfile-test-#{rand}")
+ Dir.mkdir(subdir)
+ begin
+ tempfile = Tempfile.new("foo", subdir)
+ tempfile.close
+ begin
+ expect(subdir).to eq(File.dirname(tempfile.path))
+ ensure
+ tempfile.unlink
+ end
+ ensure
+ Dir.rmdir(subdir)
+ end
+ end
+
+ it "supports basename" do
+ t = tempfile("foo")
+ expect(File.basename(t.path)).to match(/^foo/)
+ end
+
+ it "supports basename with suffix" do
+ t = tempfile(["foo", ".txt"])
+ expect(File.basename(t.path)).to match(/^foo/)
+ expect(File.basename(t.path)).to match(/\.txt$/)
+ end
+
+ it "supports unlink" do
+ t = tempfile("foo")
+ path = t.path
+ t.close
+ expect(File.exist?(path)).to eq(true)
+ t.unlink
+ expect(File.exist?(path)).to eq(false)
+ expect(t.path).to eq(nil)
+ end
+
+ it "supports closing" do
+ t = tempfile("foo")
+ expect(t.closed?).to eq(false)
+ t.close
+ expect(t.closed?).to eq(true)
+ end
+
+ it "supports closing and unlinking via boolean argument" do
+ t = tempfile("foo")
+ path = t.path
+ t.close(true)
+ expect(t.closed?).to eq(true)
+ expect(t.path).to eq(nil)
+ expect(File.exist?(path)).to eq(false)
+ end
+
+ context "on unix platforms", :unless => Puppet.features.microsoft_windows? do
+ it "close doesn't unlink if already unlinked" do
+ t = tempfile("foo")
+ path = t.path
+ t.unlink
+ File.open(path, "w").close
+ begin
+ t.close(true)
+ expect(File.exist?(path)).to eq(true)
+ ensure
+ File.unlink(path) rescue nil
+ end
+ end
+ end
+
+ it "supports close!" do
+ t = tempfile("foo")
+ path = t.path
+ t.close!
+ expect(t.closed?).to eq(true)
+ expect(t.path).to eq(nil)
+ expect(File.exist?(path)).to eq(false)
+ end
+
+ context "on unix platforms", :unless => Puppet.features.microsoft_windows? do
+ it "close! doesn't unlink if already unlinked" do
+ t = tempfile("foo")
+ path = t.path
+ t.unlink
+ File.open(path, "w").close
+ begin
+ t.close!
+ expect(File.exist?(path)).to eq(true)
+ ensure
+ File.unlink(path) rescue nil
+ end
+ end
+ end
+
+ it "close does not make path nil" do
+ t = tempfile("foo")
+ t.close
+ expect(t.path.nil?).to eq(false)
+ end
+
+ it "close flushes buffer" do
+ t = tempfile("foo")
+ t.write("hello")
+ t.close
+ expect(File.size(t.path)).to eq(5)
+ end
+ end
+end
diff --git a/spec/unit/forge/errors_spec.rb b/spec/unit/forge/errors_spec.rb
index 057bf014a..fc1ac1a0e 100644
--- a/spec/unit/forge/errors_spec.rb
+++ b/spec/unit/forge/errors_spec.rb
@@ -1,82 +1,80 @@
require 'spec_helper'
require 'puppet/forge'
describe Puppet::Forge::Errors do
describe 'SSLVerifyError' do
subject { Puppet::Forge::Errors::SSLVerifyError }
let(:exception) { subject.new(:uri => 'https://fake.com:1111') }
it 'should return a valid single line error' do
exception.message.should == 'Unable to verify the SSL certificate at https://fake.com:1111'
end
it 'should return a valid multiline error' do
exception.multiline.should == <<-EOS.chomp
Could not connect via HTTPS to https://fake.com:1111
Unable to verify the SSL certificate
The certificate may not be signed by a valid CA
The CA bundle included with OpenSSL may not be valid or up to date
EOS
end
end
describe 'CommunicationError' do
subject { Puppet::Forge::Errors::CommunicationError }
let(:socket_exception) { SocketError.new('There was a problem') }
let(:exception) { subject.new(:uri => 'http://fake.com:1111', :original => socket_exception) }
it 'should return a valid single line error' do
exception.message.should == 'Unable to connect to the server at http://fake.com:1111. Detail: There was a problem.'
end
it 'should return a valid multiline error' do
exception.multiline.should == <<-EOS.chomp
Could not connect to http://fake.com:1111
There was a network communications problem
The error we caught said 'There was a problem'
Check your network connection and try again
EOS
end
end
describe 'ResponseError' do
subject { Puppet::Forge::Errors::ResponseError }
let(:response) { stub(:body => '{}', :code => '404', :message => "not found") }
context 'without message' do
let(:exception) { subject.new(:uri => 'http://fake.com:1111', :response => response, :input => 'user/module') }
it 'should return a valid single line error' do
- exception.message.should == 'Could not execute operation for \'user/module\'. Detail: 404 not found.'
+ exception.message.should == 'Request to Puppet Forge failed. Detail: 404 not found.'
end
it 'should return a valid multiline error' do
exception.multiline.should == <<-eos.chomp
-Could not execute operation for 'user/module'
+Request to Puppet Forge failed.
The server being queried was http://fake.com:1111
The HTTP response we received was '404 not found'
- Check the author and module names are correct.
eos
end
end
context 'with message' do
let(:exception) { subject.new(:uri => 'http://fake.com:1111', :response => response, :input => 'user/module', :message => 'no such module') }
it 'should return a valid single line error' do
- exception.message.should == 'Could not execute operation for \'user/module\'. Detail: no such module / 404 not found.'
+ exception.message.should == 'Request to Puppet Forge failed. Detail: no such module / 404 not found.'
end
it 'should return a valid multiline error' do
exception.multiline.should == <<-eos.chomp
-Could not execute operation for 'user/module'
+Request to Puppet Forge failed.
The server being queried was http://fake.com:1111
The HTTP response we received was '404 not found'
The message we received said 'no such module'
- Check the author and module names are correct.
eos
end
end
end
end
diff --git a/spec/unit/forge/module_release_spec.rb b/spec/unit/forge/module_release_spec.rb
index fbf5e157a..b763f0833 100644
--- a/spec/unit/forge/module_release_spec.rb
+++ b/spec/unit/forge/module_release_spec.rb
@@ -1,131 +1,220 @@
# encoding: utf-8
require 'spec_helper'
require 'puppet/forge'
require 'net/http'
require 'puppet/module_tool'
describe Puppet::Forge::ModuleRelease do
let(:agent) { "Test/1.0" }
let(:repository) { Puppet::Forge::Repository.new('http://fake.com', agent) }
let(:ssl_repository) { Puppet::Forge::Repository.new('https://fake.com', agent) }
-
- let(:release_json) do
- <<-EOF
- {
- "uri": "/v3/releases/puppetlabs-stdlib-4.1.0",
- "module": {
- "uri": "/v3/modules/puppetlabs-stdlib",
- "name": "stdlib",
- "owner": {
- "uri": "/v3/users/puppetlabs",
- "username": "puppetlabs",
- "gravatar_id": "fdd009b7c1ec96e088b389f773e87aec"
- }
- },
- "version": "4.1.0",
- "metadata": {
- "types": [ ],
- "license": "Apache 2.0",
- "checksums": { },
- "version": "4.1.0",
- "description": "Standard Library for Puppet Modules",
- "source": "git://github.com/puppetlabs/puppetlabs-stdlib.git",
- "project_page": "https://github.com/puppetlabs/puppetlabs-stdlib",
- "summary": "Puppet Module Standard Library",
- "dependencies": [
-
- ],
- "author": "puppetlabs",
- "name": "puppetlabs-stdlib"
- },
- "tags": [
- "puppetlabs",
- "library",
- "stdlib",
- "standard",
- "stages"
- ],
- "file_uri": "/v3/files/puppetlabs-stdlib-4.1.0.tar.gz",
- "file_size": 67586,
- "file_md5": "bbf919d7ee9d278d2facf39c25578bf8",
- "downloads": 610751,
- "readme": "",
- "changelog": "",
- "license": "",
- "created_at": "2013-05-13 08:31:19 -0700",
- "updated_at": "2013-05-13 08:31:19 -0700",
- "deleted_at": null
- }
- EOF
- end
-
+ let(:api_version) { "v3" }
+ let(:module_author) { "puppetlabs" }
+ let(:module_name) { "stdlib" }
+ let(:module_version) { "4.1.0" }
+ let(:module_full_name) { "#{module_author}-#{module_name}" }
+ let(:module_full_name_versioned) { "#{module_full_name}-#{module_version}" }
+ let(:module_md5) { "bbf919d7ee9d278d2facf39c25578bf8" }
+ let(:uri) { " "}
let(:release) { Puppet::Forge::ModuleRelease.new(ssl_repository, JSON.parse(release_json)) }
let(:mock_file) {
mock_io = StringIO.new
mock_io.stubs(:path).returns('/dev/null')
mock_io
}
let(:mock_dir) { '/tmp' }
- def mock_digest_file_with_md5(md5)
- Digest::MD5.stubs(:file).returns(stub(:hexdigest => md5))
- end
-
- describe '#prepare' do
- before :each do
- release.stubs(:tmpfile).returns(mock_file)
- release.stubs(:tmpdir).returns(mock_dir)
+ shared_examples 'a module release' do
+ def mock_digest_file_with_md5(md5)
+ Digest::MD5.stubs(:file).returns(stub(:hexdigest => md5))
end
- it 'should call sub methods with correct params' do
- release.expects(:download).with('/v3/files/puppetlabs-stdlib-4.1.0.tar.gz', mock_file)
- release.expects(:validate_checksum).with(mock_file, 'bbf919d7ee9d278d2facf39c25578bf8')
- release.expects(:unpack).with(mock_file, mock_dir)
+ describe '#prepare' do
+ before :each do
+ release.stubs(:tmpfile).returns(mock_file)
+ release.stubs(:tmpdir).returns(mock_dir)
+ end
+
+ it 'should call sub methods with correct params' do
+ release.expects(:download).with("/#{api_version}/files/#{module_full_name_versioned}.tar.gz", mock_file)
+ release.expects(:validate_checksum).with(mock_file, module_md5)
+ release.expects(:unpack).with(mock_file, mock_dir)
- release.prepare
+ release.prepare
+ end
end
- end
- describe '#tmpfile' do
+ describe '#tmpfile' do
- # This is impossible to test under Ruby 1.8.x, but should also occur there.
- it 'should be opened in binary mode', :unless => RUBY_VERSION >= '1.8.7' do
- Puppet::Forge::Cache.stubs(:base_path).returns(Dir.tmpdir)
- release.send(:tmpfile).binmode?.should be_true
+ # This is impossible to test under Ruby 1.8.x, but should also occur there.
+ it 'should be opened in binary mode', :unless => RUBY_VERSION >= '1.8.7' do
+ Puppet::Forge::Cache.stubs(:base_path).returns(Dir.tmpdir)
+ release.send(:tmpfile).binmode?.should be_true
+ end
end
- end
- describe '#download' do
- it 'should call make_http_request with correct params' do
- # valid URI comes from file_uri in JSON blob above
- ssl_repository.expects(:make_http_request).with('/v3/files/puppetlabs-stdlib-4.1.0.tar.gz', mock_file).returns(mock_file)
+ describe '#download' do
+ it 'should call make_http_request with correct params' do
+ # valid URI comes from file_uri in JSON blob above
+ ssl_repository.expects(:make_http_request).with("/#{api_version}/files/#{module_full_name_versioned}.tar.gz", mock_file).returns(stub(:body => '{}', :code => '200'))
+
+ release.send(:download, "/#{api_version}/files/#{module_full_name_versioned}.tar.gz", mock_file)
+ end
- release.send(:download, '/v3/files/puppetlabs-stdlib-4.1.0.tar.gz', mock_file)
+ it 'should raise a response error when it receives an error from forge' do
+ ssl_repository.stubs(:make_http_request).returns(stub(:body => '{"errors": ["error"]}', :code => '500', :message => 'server error'))
+ expect { release.send(:download, "/some/path", mock_file)}. to raise_error Puppet::Forge::Errors::ResponseError
+ end
end
- end
- describe '#verify_checksum' do
- it 'passes md5 check when valid' do
- # valid hash comes from file_md5 in JSON blob above
- mock_digest_file_with_md5('bbf919d7ee9d278d2facf39c25578bf8')
+ describe '#verify_checksum' do
+ it 'passes md5 check when valid' do
+ # valid hash comes from file_md5 in JSON blob above
+ mock_digest_file_with_md5(module_md5)
- release.send(:validate_checksum, mock_file, 'bbf919d7ee9d278d2facf39c25578bf8')
+ release.send(:validate_checksum, mock_file, module_md5)
+ end
+
+ it 'fails md5 check when invalid' do
+ mock_digest_file_with_md5('ffffffffffffffffffffffffffffffff')
+
+ expect { release.send(:validate_checksum, mock_file, module_md5) }.to raise_error(RuntimeError, /did not match expected checksum/)
+ end
end
- it 'fails md5 check when invalid' do
- mock_digest_file_with_md5('ffffffffffffffffffffffffffffffff')
+ describe '#unpack' do
+ it 'should call unpacker with correct params' do
+ Puppet::ModuleTool::Applications::Unpacker.expects(:unpack).with(mock_file.path, mock_dir).returns(true)
+
+ release.send(:unpack, mock_file, mock_dir)
+ end
+ end
+ end
- expect { release.send(:validate_checksum, mock_file, 'bbf919d7ee9d278d2facf39c25578bf8') }.to raise_error(RuntimeError, /did not match expected checksum/)
+ context 'standard forge module' do
+ let(:release_json) do %Q{
+ {
+ "uri": "/#{api_version}/releases/#{module_full_name_versioned}",
+ "module": {
+ "uri": "/#{api_version}/modules/#{module_full_name}",
+ "name": "#{module_name}",
+ "owner": {
+ "uri": "/#{api_version}/users/#{module_author}",
+ "username": "#{module_author}",
+ "gravatar_id": "fdd009b7c1ec96e088b389f773e87aec"
+ }
+ },
+ "version": "#{module_version}",
+ "metadata": {
+ "types": [ ],
+ "license": "Apache 2.0",
+ "checksums": { },
+ "version": "#{module_version}",
+ "description": "Standard Library for Puppet Modules",
+ "source": "git://github.com/puppetlabs/puppetlabs-stdlib.git",
+ "project_page": "https://github.com/puppetlabs/puppetlabs-stdlib",
+ "summary": "Puppet Module Standard Library",
+ "dependencies": [
+
+ ],
+ "author": "#{module_author}",
+ "name": "#{module_full_name}"
+ },
+ "tags": [
+ "puppetlabs",
+ "library",
+ "stdlib",
+ "standard",
+ "stages"
+ ],
+ "file_uri": "/#{api_version}/files/#{module_full_name_versioned}.tar.gz",
+ "file_size": 67586,
+ "file_md5": "#{module_md5}",
+ "downloads": 610751,
+ "readme": "",
+ "changelog": "",
+ "license": "",
+ "created_at": "2013-05-13 08:31:19 -0700",
+ "updated_at": "2013-05-13 08:31:19 -0700",
+ "deleted_at": null
+ }
+ }
end
+
+ it_behaves_like 'a module release'
end
- describe '#unpack' do
- it 'should call unpacker with correct params' do
- Puppet::ModuleTool::Applications::Unpacker.expects(:unpack).with(mock_file.path, mock_dir).returns(true)
+ context 'forge module with no dependencies field' do
+ let(:release_json) do %Q{
+ {
+ "uri": "/#{api_version}/releases/#{module_full_name_versioned}",
+ "module": {
+ "uri": "/#{api_version}/modules/#{module_full_name}",
+ "name": "#{module_name}",
+ "owner": {
+ "uri": "/#{api_version}/users/#{module_author}",
+ "username": "#{module_author}",
+ "gravatar_id": "fdd009b7c1ec96e088b389f773e87aec"
+ }
+ },
+ "version": "#{module_version}",
+ "metadata": {
+ "types": [ ],
+ "license": "Apache 2.0",
+ "checksums": { },
+ "version": "#{module_version}",
+ "description": "Standard Library for Puppet Modules",
+ "source": "git://github.com/puppetlabs/puppetlabs-stdlib.git",
+ "project_page": "https://github.com/puppetlabs/puppetlabs-stdlib",
+ "summary": "Puppet Module Standard Library",
+ "author": "#{module_author}",
+ "name": "#{module_full_name}"
+ },
+ "tags": [
+ "puppetlabs",
+ "library",
+ "stdlib",
+ "standard",
+ "stages"
+ ],
+ "file_uri": "/#{api_version}/files/#{module_full_name_versioned}.tar.gz",
+ "file_size": 67586,
+ "file_md5": "#{module_md5}",
+ "downloads": 610751,
+ "readme": "",
+ "changelog": "",
+ "license": "",
+ "created_at": "2013-05-13 08:31:19 -0700",
+ "updated_at": "2013-05-13 08:31:19 -0700",
+ "deleted_at": null
+ }
+ }
+ end
+
+ it_behaves_like 'a module release'
+ end
- release.send(:unpack, mock_file, mock_dir)
+ context 'forge module with the minimal set of fields' do
+ let(:release_json) do %Q{
+ {
+ "uri": "/#{api_version}/releases/#{module_full_name_versioned}",
+ "module": {
+ "uri": "/#{api_version}/modules/#{module_full_name}",
+ "name": "#{module_name}"
+ },
+ "metadata": {
+ "version": "#{module_version}",
+ "name": "#{module_full_name}"
+ },
+ "file_uri": "/#{api_version}/files/#{module_full_name_versioned}.tar.gz",
+ "file_size": 67586,
+ "file_md5": "#{module_md5}"
+ }
+ }
end
+
+ it_behaves_like 'a module release'
end
end
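
The restructuring in this file wraps the existing assertions in an RSpec shared example group so the same checks run against several release_json payloads. A minimal, generic sketch of the pattern (group and key names here are illustrative, not taken from this patch):

  describe 'release payload handling' do
    shared_examples 'a release' do
      it 'exposes a version' do
        expect(release['version']).to_not be_nil
      end
    end

    context 'with full metadata' do
      let(:release) { { 'version' => '1.0.0', 'dependencies' => [] } }
      it_behaves_like 'a release'
    end

    context 'with minimal metadata' do
      let(:release) { { 'version' => '1.0.0' } }
      it_behaves_like 'a release'
    end
  end

Each context supplies its own let definitions, and the shared group picks them up when it_behaves_like re-runs the examples.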
diff --git a/spec/unit/forge/repository_spec.rb b/spec/unit/forge/repository_spec.rb
index 04b10b166..4ac77ecb8 100644
--- a/spec/unit/forge/repository_spec.rb
+++ b/spec/unit/forge/repository_spec.rb
@@ -1,122 +1,230 @@
# encoding: utf-8
require 'spec_helper'
require 'net/http'
require 'puppet/forge/repository'
require 'puppet/forge/cache'
require 'puppet/forge/errors'
describe Puppet::Forge::Repository do
let(:agent) { "Test/1.0" }
let(:repository) { Puppet::Forge::Repository.new('http://fake.com', agent) }
let(:ssl_repository) { Puppet::Forge::Repository.new('https://fake.com', agent) }
it "retrieve accesses the cache" do
path = '/module/foo.tar.gz'
repository.cache.expects(:retrieve)
repository.retrieve(path)
end
it "retrieve merges forge URI and path specified" do
host = 'http://fake.com/test'
path = '/module/foo.tar.gz'
uri = [ host, path ].join('')
repository = Puppet::Forge::Repository.new(host, agent)
repository.cache.expects(:retrieve).with(uri)
repository.retrieve(path)
end
describe "making a request" do
before :each do
proxy_settings_of("proxy", 1234)
end
it "returns the result object from the request" do
result = "#{Object.new}"
performs_an_http_request result do |http|
http.expects(:request).with(responds_with(:path, "the_path"))
end
repository.make_http_request("the_path").should == result
end
it 'returns the result object from a request with ssl' do
result = "#{Object.new}"
performs_an_https_request result do |http|
http.expects(:request).with(responds_with(:path, "the_path"))
end
ssl_repository.make_http_request("the_path").should == result
end
it 'return a valid exception when there is an SSL verification problem' do
performs_an_https_request "#{Object.new}" do |http|
http.expects(:request).with(responds_with(:path, "the_path")).raises OpenSSL::SSL::SSLError.new("certificate verify failed")
end
expect { ssl_repository.make_http_request("the_path") }.to raise_error Puppet::Forge::Errors::SSLVerifyError, 'Unable to verify the SSL certificate at https://fake.com'
end
it 'return a valid exception when there is a communication problem' do
performs_an_http_request "#{Object.new}" do |http|
http.expects(:request).with(responds_with(:path, "the_path")).raises SocketError
end
expect { repository.make_http_request("the_path") }.
to raise_error Puppet::Forge::Errors::CommunicationError,
'Unable to connect to the server at http://fake.com. Detail: SocketError.'
end
it "sets the user agent for the request" do
path = 'the_path'
request = repository.get_request_object(path)
request['User-Agent'].should =~ /\b#{agent}\b/
request['User-Agent'].should =~ /\bPuppet\b/
request['User-Agent'].should =~ /\bRuby\b/
end
+ it "Does not set Authorization header by default" do
+ Puppet.features.stubs(:pe_license?).returns(false)
+ Puppet[:forge_authorization] = nil
+ request = repository.get_request_object("the_path")
+ request['Authorization'].should == nil
+ end
+
+ it "Sets Authorization header from config" do
+ token = 'bearer some token'
+ Puppet[:forge_authorization] = token
+ request = repository.get_request_object("the_path")
+ request['Authorization'].should == token
+ end
+
it "escapes the received URI" do
unescaped_uri = "héllo world !! ç à"
performs_an_http_request do |http|
http.expects(:request).with(responds_with(:path, URI.escape(unescaped_uri)))
end
repository.make_http_request(unescaped_uri)
end
def performs_an_http_request(result = nil, &block)
http = mock("http client")
yield http
proxy_class = mock("http proxy class")
proxy = mock("http proxy")
proxy_class.expects(:new).with("fake.com", 80).returns(proxy)
proxy.expects(:start).yields(http).returns(result)
- Net::HTTP.expects(:Proxy).with("proxy", 1234).returns(proxy_class)
+ Net::HTTP.expects(:Proxy).with("proxy", 1234, nil, nil).returns(proxy_class)
end
def performs_an_https_request(result = nil, &block)
http = mock("http client")
yield http
proxy_class = mock("http proxy class")
proxy = mock("http proxy")
proxy_class.expects(:new).with("fake.com", 443).returns(proxy)
proxy.expects(:start).yields(http).returns(result)
proxy.expects(:use_ssl=).with(true)
proxy.expects(:cert_store=)
proxy.expects(:verify_mode=).with(OpenSSL::SSL::VERIFY_PEER)
- Net::HTTP.expects(:Proxy).with("proxy", 1234).returns(proxy_class)
+ Net::HTTP.expects(:Proxy).with("proxy", 1234, nil, nil).returns(proxy_class)
+ end
+ end
+
+ describe "making a request against an authentiated proxy" do
+ before :each do
+ authenticated_proxy_settings_of("proxy", 1234, 'user1', 'password')
+ end
+
+ it "returns the result object from the request" do
+ result = "#{Object.new}"
+
+ performs_an_authenticated_http_request result do |http|
+ http.expects(:request).with(responds_with(:path, "the_path"))
+ end
+
+ repository.make_http_request("the_path").should == result
+ end
+
+ it 'returns the result object from a request with ssl' do
+ result = "#{Object.new}"
+ performs_an_authenticated_https_request result do |http|
+ http.expects(:request).with(responds_with(:path, "the_path"))
+ end
+
+ ssl_repository.make_http_request("the_path").should == result
+ end
+
+ it 'return a valid exception when there is an SSL verification problem' do
+ performs_an_authenticated_https_request "#{Object.new}" do |http|
+ http.expects(:request).with(responds_with(:path, "the_path")).raises OpenSSL::SSL::SSLError.new("certificate verify failed")
+ end
+
+ expect { ssl_repository.make_http_request("the_path") }.to raise_error Puppet::Forge::Errors::SSLVerifyError, 'Unable to verify the SSL certificate at https://fake.com'
+ end
+
+ it 'return a valid exception when there is a communication problem' do
+ performs_an_authenticated_http_request "#{Object.new}" do |http|
+ http.expects(:request).with(responds_with(:path, "the_path")).raises SocketError
+ end
+
+ expect { repository.make_http_request("the_path") }.
+ to raise_error Puppet::Forge::Errors::CommunicationError,
+ 'Unable to connect to the server at http://fake.com. Detail: SocketError.'
+ end
+
+ it "sets the user agent for the request" do
+ path = 'the_path'
+
+ request = repository.get_request_object(path)
+
+ request['User-Agent'].should =~ /\b#{agent}\b/
+ request['User-Agent'].should =~ /\bPuppet\b/
+ request['User-Agent'].should =~ /\bRuby\b/
+ end
+
+ it "escapes the received URI" do
+ unescaped_uri = "héllo world !! ç à"
+ performs_an_authenticated_http_request do |http|
+ http.expects(:request).with(responds_with(:path, URI.escape(unescaped_uri)))
+ end
+
+ repository.make_http_request(unescaped_uri)
+ end
+
+ def performs_an_authenticated_http_request(result = nil, &block)
+ http = mock("http client")
+ yield http
+
+ proxy_class = mock("http proxy class")
+ proxy = mock("http proxy")
+ proxy_class.expects(:new).with("fake.com", 80).returns(proxy)
+ proxy.expects(:start).yields(http).returns(result)
+ Net::HTTP.expects(:Proxy).with("proxy", 1234, "user1", "password").returns(proxy_class)
+ end
+
+ def performs_an_authenticated_https_request(result = nil, &block)
+ http = mock("http client")
+ yield http
+
+ proxy_class = mock("http proxy class")
+ proxy = mock("http proxy")
+ proxy_class.expects(:new).with("fake.com", 443).returns(proxy)
+ proxy.expects(:start).yields(http).returns(result)
+ proxy.expects(:use_ssl=).with(true)
+ proxy.expects(:cert_store=)
+ proxy.expects(:verify_mode=).with(OpenSSL::SSL::VERIFY_PEER)
+ Net::HTTP.expects(:Proxy).with("proxy", 1234, "user1", "password").returns(proxy_class)
end
end
def proxy_settings_of(host, port)
Puppet[:http_proxy_host] = host
Puppet[:http_proxy_port] = port
end
+
+ def authenticated_proxy_settings_of(host, port, user, password)
+ Puppet[:http_proxy_host] = host
+ Puppet[:http_proxy_port] = port
+ Puppet[:http_proxy_user] = user
+ Puppet[:http_proxy_password] = password
+ end
end
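
The new expectations pass proxy credentials straight through to Net::HTTP.Proxy, which accepts (address, port, user, password) and returns a proxy-aware subclass of Net::HTTP; when no credentials are configured the last two arguments are simply nil, matching the stubs above. A standalone sketch under assumed placeholder host names:

  require 'net/http'
  require 'openssl'

  # Build a proxy-aware HTTP class with basic proxy credentials.
  proxied = Net::HTTP.Proxy('proxy.example.com', 3128, 'user1', 'password')

  http = proxied.new('forgeapi.example.com', 443)
  http.use_ssl = true
  http.verify_mode = OpenSSL::SSL::VERIFY_PEER

  # All requests made through this connection traverse the proxy.
  http.start do |conn|
    response = conn.request(Net::HTTP::Get.new('/v3/modules?query=stdlib'))
    puts response.code
  end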
diff --git a/spec/unit/forge_spec.rb b/spec/unit/forge_spec.rb
index 96bf8d3be..eb7c56a3e 100644
--- a/spec/unit/forge_spec.rb
+++ b/spec/unit/forge_spec.rb
@@ -1,136 +1,172 @@
require 'spec_helper'
require 'puppet/forge'
require 'net/http'
require 'puppet/module_tool'
describe Puppet::Forge do
let(:http_response) do
<<-EOF
{
"pagination": {
"limit": 1,
"offset": 0,
"first": "/v3/modules?limit=1&offset=0",
"previous": null,
"current": "/v3/modules?limit=1&offset=0",
"next": null,
"total": 1832
},
"results": [
{
"uri": "/v3/modules/puppetlabs-bacula",
"name": "bacula",
"downloads": 640274,
"created_at": "2011-05-24 18:34:58 -0700",
"updated_at": "2013-12-03 15:24:20 -0800",
"owner": {
"uri": "/v3/users/puppetlabs",
"username": "puppetlabs",
"gravatar_id": "fdd009b7c1ec96e088b389f773e87aec"
},
"current_release": {
"uri": "/v3/releases/puppetlabs-bacula-0.0.2",
"module": {
"uri": "/v3/modules/puppetlabs-bacula",
"name": "bacula",
"owner": {
"uri": "/v3/users/puppetlabs",
"username": "puppetlabs",
"gravatar_id": "fdd009b7c1ec96e088b389f773e87aec"
}
},
"version": "0.0.2",
"metadata": {
"types": [],
"license": "Apache 2.0",
"checksums": { },
"version": "0.0.2",
"source": "git://github.com/puppetlabs/puppetlabs-bacula.git",
"project_page": "https://github.com/puppetlabs/puppetlabs-bacula",
"summary": "bacula",
"dependencies": [ ],
"author": "puppetlabs",
"name": "puppetlabs-bacula"
},
"tags": [
"backup",
"bacula"
],
"file_uri": "/v3/files/puppetlabs-bacula-0.0.2.tar.gz",
"file_size": 67586,
"file_md5": "bbf919d7ee9d278d2facf39c25578bf8",
"downloads": 565041,
"readme": "",
"changelog": "",
"license": "",
"created_at": "2013-05-13 08:31:19 -0700",
"updated_at": "2013-05-13 08:31:19 -0700",
"deleted_at": null
},
"releases": [
{
"uri": "/v3/releases/puppetlabs-bacula-0.0.2",
"version": "0.0.2"
},
{
"uri": "/v3/releases/puppetlabs-bacula-0.0.1",
"version": "0.0.1"
}
],
"homepage_url": "https://github.com/puppetlabs/puppetlabs-bacula",
"issues_url": "https://projects.puppetlabs.com/projects/bacula/issues"
}
]
}
EOF
end
let(:search_results) do
JSON.parse(http_response)['results'].map do |hash|
hash.merge(
"author" => "puppetlabs",
"name" => "bacula",
"tag_list" => ["backup", "bacula"],
"full_name" => "puppetlabs/bacula",
"version" => "0.0.2",
"project_url" => "https://github.com/puppetlabs/puppetlabs-bacula",
"desc" => "bacula"
)
end
end
let(:forge) { Puppet::Forge.new }
def repository_responds_with(response)
Puppet::Forge::Repository.any_instance.stubs(:make_http_request).returns(response)
end
it "returns a list of matches from the forge when there are matches for the search term" do
repository_responds_with(stub(:body => http_response, :code => '200'))
forge.search('bacula').should == search_results
end
+ context "when module_groups are defined" do
+ let(:release_response) do
+ releases = JSON.parse(http_response)
+ releases['results'] = []
+ JSON.dump(releases)
+ end
+
+ before :each do
+ repository_responds_with(stub(:body => release_response, :code => '200')).with {|uri| uri =~ /module_groups=foo/}
+ Puppet[:module_groups] = "foo"
+ end
+
+ it "passes module_groups with search" do
+ forge.search('bacula')
+ end
+
+ it "passes module_groups with fetch" do
+ forge.fetch('puppetlabs-bacula')
+ end
+ end
+
context "when the connection to the forge fails" do
before :each do
repository_responds_with(stub(:body => '{}', :code => '404', :message => "not found"))
end
it "raises an error for search" do
- expect { forge.search('bacula') }.to raise_error Puppet::Forge::Errors::ResponseError, "Could not execute operation for 'bacula'. Detail: 404 not found."
+ expect { forge.search('bacula') }.to raise_error Puppet::Forge::Errors::ResponseError, "Request to Puppet Forge failed. Detail: 404 not found."
end
it "raises an error for fetch" do
- expect { forge.fetch('puppetlabs/bacula') }.to raise_error Puppet::Forge::Errors::ResponseError, "Could not execute operation for 'puppetlabs/bacula'. Detail: 404 not found."
+ expect { forge.fetch('puppetlabs/bacula') }.to raise_error Puppet::Forge::Errors::ResponseError, "Request to Puppet Forge failed. Detail: 404 not found."
end
end
context "when the API responds with an error" do
before :each do
repository_responds_with(stub(:body => '{"error":"invalid module"}', :code => '410', :message => "Gone"))
end
it "raises an error for fetch" do
- expect { forge.fetch('puppetlabs/bacula') }.to raise_error Puppet::Forge::Errors::ResponseError, "Could not execute operation for 'puppetlabs/bacula'. Detail: 410 Gone."
+ expect { forge.fetch('puppetlabs/bacula') }.to raise_error Puppet::Forge::Errors::ResponseError, "Request to Puppet Forge failed. Detail: 410 Gone."
+ end
+ end
+
+ context "when the forge returns a module with unparseable dependencies" do
+ before :each do
+ response = JSON.parse(http_response)
+ release = response['results'][0]['current_release']
+ release['metadata']['dependencies'] = [{'name' => 'broken-garbage >= 1.0.0', 'version_requirement' => 'banana'}]
+ response['results'] = [release]
+ repository_responds_with(stub(:body => JSON.dump(response), :code => '200'))
+ end
+
+ it "ignores modules with unparseable dependencies" do
+ result = nil
+ expect { result = forge.fetch('puppetlabs/bacula') }.to_not raise_error
+ expect(result).to be_empty
end
end
end
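
The module_groups examples above rely on Mocha's block form of with, which constrains a stub to calls whose arguments satisfy the block; here it is used to assert that module_groups ends up in the request URI. A generic sketch (the client double is hypothetical):

  it "only stubs URIs that carry the module_groups parameter" do
    client = mock('client')
    client.stubs(:get).with { |uri| uri.include?('module_groups=foo') }.returns('filtered')

    expect(client.get('/v3/modules?module_groups=foo')).to eq('filtered')
  end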
diff --git a/spec/unit/functions/assert_type_spec.rb b/spec/unit/functions/assert_type_spec.rb
index d47f47e30..13b353401 100644
--- a/spec/unit/functions/assert_type_spec.rb
+++ b/spec/unit/functions/assert_type_spec.rb
@@ -1,59 +1,78 @@
require 'spec_helper'
require 'puppet/pops'
require 'puppet/loaders'
describe 'the assert_type function' do
after(:all) { Puppet::Pops::Loaders.clear }
around(:each) do |example|
loaders = Puppet::Pops::Loaders.new(Puppet::Node::Environment.create(:testing, []))
Puppet.override({:loaders => loaders}, "test-example") do
example.run
end
end
let(:func) do
Puppet.lookup(:loaders).puppet_system_loader.load(:function, 'assert_type')
end
it 'asserts compliant type by returning the value' do
expect(func.call({}, type(String), 'hello world')).to eql('hello world')
end
it 'accepts type given as a String' do
expect(func.call({}, 'String', 'hello world')).to eql('hello world')
end
it 'asserts non compliant type by raising an error' do
expect do
func.call({}, type(Integer), 'hello world')
end.to raise_error(Puppet::ParseError, /does not match actual/)
end
it 'checks that first argument is a type' do
expect do
func.call({}, 10, 10)
end.to raise_error(ArgumentError, Regexp.new(Regexp.escape(
"function 'assert_type' called with mis-matched arguments
expected one of:
- assert_type(Type type, Optional[Object] value) - arg count {2}
- assert_type(String type_string, Optional[Object] value) - arg count {2}
+ assert_type(Type type, Any value, Callable[Type, Type] block {0,1}) - arg count {2,3}
+ assert_type(String type_string, Any value, Callable[Type, Type] block {0,1}) - arg count {2,3}
actual:
assert_type(Integer, Integer) - arg count {2}")))
end
it 'allows the second arg to be undef/nil' do
expect do
func.call({}, optional(String), nil)
- end.to_not raise_error(ArgumentError)
+ end.to_not raise_error
+ end
+
+ it 'can be called with a callable that receives a specific type' do
+ expected, actual = func.call({}, optional(String), 1, create_callable_2_args_unit)
+ expect(expected.to_s).to eql('Optional[String]')
+ expect(actual.to_s).to eql('Integer[1, 1]')
end
def optional(type_ref)
Puppet::Pops::Types::TypeFactory.optional(type(type_ref))
end
def type(type_ref)
Puppet::Pops::Types::TypeFactory.type_of(type_ref)
end
+
+ def create_callable_2_args_unit()
+ Puppet::Functions.create_function(:func) do
+ dispatch :func do
+ param 'Type', 'expected'
+ param 'Type', 'actual'
+ end
+
+ def func(expected, actual)
+ [expected, actual]
+ end
+ end.new({}, nil)
+ end
end
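
The create_callable_2_args_unit helper above builds an anonymous 4.x function on the fly with Puppet::Functions.create_function and a dispatch block. The same API can be exercised standalone; a minimal sketch (the 'double' function is illustrative, not part of this patch):

  f = Puppet::Functions.create_function('double') do
    dispatch :double do
      param 'Integer', 'x'
    end

    def double(x)
      x * 2
    end
  end

  func = f.new({}, nil)   # closure scope, loader
  func.call({}, 21)       # => 42, scope hash is the first call argument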
diff --git a/spec/unit/parser/methods/each_spec.rb b/spec/unit/functions/each_spec.rb
similarity index 80%
rename from spec/unit/parser/methods/each_spec.rb
rename to spec/unit/functions/each_spec.rb
index 5e9ce4e0c..d6651cf5a 100644
--- a/spec/unit/parser/methods/each_spec.rb
+++ b/spec/unit/functions/each_spec.rb
@@ -1,91 +1,111 @@
require 'puppet'
require 'spec_helper'
require 'puppet_spec/compiler'
-require 'rubygems'
+
+require 'shared_behaviours/iterative_functions'
describe 'the each method' do
include PuppetSpec::Compiler
before :each do
Puppet[:parser] = 'future'
end
context "should be callable as" do
it 'each on an array selecting each value' do
catalog = compile_to_catalog(<<-MANIFEST)
$a = [1,2,3]
$a.each |$v| {
file { "/file_$v": ensure => present }
}
MANIFEST
catalog.resource(:file, "/file_1")['ensure'].should == 'present'
catalog.resource(:file, "/file_2")['ensure'].should == 'present'
catalog.resource(:file, "/file_3")['ensure'].should == 'present'
end
+
it 'each on an array selecting each value - function call style' do
catalog = compile_to_catalog(<<-MANIFEST)
$a = [1,2,3]
each ($a) |$index, $v| {
file { "/file_$v": ensure => present }
}
MANIFEST
catalog.resource(:file, "/file_1")['ensure'].should == 'present'
catalog.resource(:file, "/file_2")['ensure'].should == 'present'
catalog.resource(:file, "/file_3")['ensure'].should == 'present'
end
it 'each on an array with index' do
catalog = compile_to_catalog(<<-MANIFEST)
$a = [present, absent, present]
$a.each |$k,$v| {
file { "/file_${$k+1}": ensure => $v }
}
MANIFEST
catalog.resource(:file, "/file_1")['ensure'].should == 'present'
catalog.resource(:file, "/file_2")['ensure'].should == 'absent'
catalog.resource(:file, "/file_3")['ensure'].should == 'present'
end
it 'each on a hash selecting entries' do
catalog = compile_to_catalog(<<-MANIFEST)
$a = {'a'=>'present','b'=>'absent','c'=>'present'}
$a.each |$e| {
file { "/file_${e[0]}": ensure => $e[1] }
}
MANIFEST
catalog.resource(:file, "/file_a")['ensure'].should == 'present'
catalog.resource(:file, "/file_b")['ensure'].should == 'absent'
catalog.resource(:file, "/file_c")['ensure'].should == 'present'
end
+
it 'each on a hash selecting key and value' do
catalog = compile_to_catalog(<<-MANIFEST)
$a = {'a'=>present,'b'=>absent,'c'=>present}
$a.each |$k, $v| {
file { "/file_$k": ensure => $v }
}
MANIFEST
catalog.resource(:file, "/file_a")['ensure'].should == 'present'
catalog.resource(:file, "/file_b")['ensure'].should == 'absent'
catalog.resource(:file, "/file_c")['ensure'].should == 'present'
end
+
+ it 'each on a hash selecting key and value (using captures-last parameter)' do
+ catalog = compile_to_catalog(<<-MANIFEST)
+ $a = {'a'=>present,'b'=>absent,'c'=>present}
+ $a.each |*$kv| {
+ file { "/file_${kv[0]}": ensure => $kv[1] }
+ }
+ MANIFEST
+
+ catalog.resource(:file, "/file_a")['ensure'].should == 'present'
+ catalog.resource(:file, "/file_b")['ensure'].should == 'absent'
+ catalog.resource(:file, "/file_c")['ensure'].should == 'present'
+ end
end
+
context "should produce receiver" do
it 'each checking produced value using single expression' do
catalog = compile_to_catalog(<<-MANIFEST)
$a = [1, 3, 2]
$b = $a.each |$x| { "unwanted" }
file { "/file_${b[1]}":
ensure => present
}
MANIFEST
catalog.resource(:file, "/file_3")['ensure'].should == 'present'
end
end
+ it_should_behave_like 'all iterative functions argument checks', 'each'
+ it_should_behave_like 'all iterative functions hash handling', 'each'
+
end
diff --git a/spec/unit/parser/functions/epp_spec.rb b/spec/unit/functions/epp_spec.rb
similarity index 56%
rename from spec/unit/parser/functions/epp_spec.rb
rename to spec/unit/functions/epp_spec.rb
index b88d3da8f..382fd9548 100644
--- a/spec/unit/parser/functions/epp_spec.rb
+++ b/spec/unit/functions/epp_spec.rb
@@ -1,103 +1,155 @@
require 'spec_helper'
describe "the epp function" do
include PuppetSpec::Files
before :all do
Puppet::Parser::Functions.autoloader.loadall
end
before :each do
Puppet[:parser] = 'future'
end
let :node do Puppet::Node.new('localhost') end
let :compiler do Puppet::Parser::Compiler.new(node) end
let :scope do Puppet::Parser::Scope.new(compiler) end
context "when accessing scope variables as $ variables" do
it "looks up the value from the scope" do
scope["what"] = "are belong"
eval_template("all your base <%= $what %> to us").should == "all your base are belong to us"
end
it "get nil accessing a variable that does not exist" do
eval_template("<%= $kryptonite == undef %>").should == "true"
end
it "get nil accessing a variable that is undef" do
- scope['undef_var'] = :undef
+ scope['undef_var'] = nil
eval_template("<%= $undef_var == undef %>").should == "true"
end
it "gets shadowed variable if args are given" do
scope['phantom'] = 'of the opera'
eval_template_with_args("<%= $phantom == dragos %>", 'phantom' => 'dragos').should == "true"
end
+ it "can use values from the enclosing scope for defaults" do
+ scope['phantom'] = 'of the opera'
+ eval_template("<%- |$phantom = $phantom| -%><%= $phantom %>").should == "of the opera"
+ end
+
+ it "uses the default value if the given value is undef/nil" do
+ eval_template_with_args("<%- |$phantom = 'inside your mind'| -%><%= $phantom %>", 'phantom' => nil).should == "inside your mind"
+ end
+
it "gets shadowed variable if args are given and parameters are specified" do
scope['x'] = 'wrong one'
- eval_template_with_args("<%-( $x )-%><%= $x == correct %>", 'x' => 'correct').should == "true"
+ eval_template_with_args("<%- |$x| -%><%= $x == correct %>", 'x' => 'correct').should == "true"
end
it "raises an error if required variable is not given" do
scope['x'] = 'wrong one'
- expect {
+ expect do
eval_template_with_args("<%-| $x |-%><%= $x == correct %>", 'y' => 'correct')
- }.to raise_error(/no value given for required parameters x/)
+ end.to raise_error(/no value given for required parameters x/)
end
it "raises an error if too many arguments are given" do
scope['x'] = 'wrong one'
- expect {
+ expect do
eval_template_with_args("<%-| $x |-%><%= $x == correct %>", 'x' => 'correct', 'y' => 'surplus')
- }.to raise_error(/Too many arguments: 2 for 1/)
+ end.to raise_error(/Too many arguments: 2 for 1/)
+ end
+ end
+
+ context "when given an empty template" do
+ it "allows the template file to be empty" do
+ expect(eval_template("")).to eq("")
+ end
+
+ it "allows the template to have empty body after parameters" do
+ expect(eval_template_with_args("<%-|$x|%>", 'x'=>1)).to eq("")
end
end
+ context "when using typed parameters" do
+ it "allows a passed value that matches the parameter's type" do
+ expect(eval_template_with_args("<%-|String $x|-%><%= $x == correct %>", 'x' => 'correct')).to eq("true")
+ end
+
+ it "does not allow slurped parameters" do
+ expect do
+ eval_template_with_args("<%-|*$x|-%><%= $x %>", 'x' => 'incorrect')
+ end.to raise_error(/'captures rest' - not supported in an Epp Template/)
+ end
+
+ it "raises an error when the passed value does not match the parameter's type" do
+ expect do
+ eval_template_with_args("<%-|Integer $x|-%><%= $x %>", 'x' => 'incorrect')
+ end.to raise_error(/expected.*Integer.*actual.*String/m)
+ end
+
+ it "raises an error when the default value does not match the parameter's type" do
+ expect do
+ eval_template("<%-|Integer $x = 'nope'|-%><%= $x %>")
+ end.to raise_error(/expected.*Integer.*actual.*String/m)
+ end
+
+ it "allows an parameter to default to undef" do
+ expect(eval_template("<%-|Optional[Integer] $x = undef|-%><%= $x == undef %>")).to eq("true")
+ end
+ end
+
+
# although never a problem with epp
it "is not interfered with by having a variable named 'string' (#14093)" do
scope['string'] = "this output should not be seen"
eval_template("some text that is static").should == "some text that is static"
end
it "has access to a variable named 'string' (#14093)" do
scope['string'] = "the string value"
eval_template("string was: <%= $string %>").should == "string was: the string value"
end
describe 'when loading from modules' do
include PuppetSpec::Files
it 'an epp template is found' do
modules_dir = dir_containing('modules', {
'testmodule' => {
'templates' => {
'the_x.epp' => 'The x is <%= $x %>'
}
}})
Puppet.override({:current_environment => (env = Puppet::Node::Environment.create(:testload, [ modules_dir ]))}, "test") do
node.environment = env
- expect(scope.function_epp([ 'testmodule/the_x.epp', { 'x' => '3'} ])).to eql("The x is 3")
+ expect(epp_function.call(scope, 'testmodule/the_x.epp', { 'x' => '3'} )).to eql("The x is 3")
end
end
end
def eval_template_with_args(content, args_hash)
file_path = tmpdir('epp_spec_content')
filename = File.join(file_path, "template.epp")
File.open(filename, "w+") { |f| f.write(content) }
Puppet::Parser::Files.stubs(:find_template).returns(filename)
- scope.function_epp(['template', args_hash])
+ epp_function.call(scope, 'template', args_hash)
end
def eval_template(content)
file_path = tmpdir('epp_spec_content')
filename = File.join(file_path, "template.epp")
File.open(filename, "w+") { |f| f.write(content) }
Puppet::Parser::Files.stubs(:find_template).returns(filename)
- scope.function_epp(['template'])
+ epp_function.call(scope, 'template')
+ end
+
+ def epp_function()
+ epp_func = scope.compiler.loaders.public_environment_loader.load(:function, 'epp')
end
end
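
These specs now resolve the 4.x epp implementation through the loader API instead of the old scope.function_epp helper; the same pattern appears again for inline_epp below. The lookup performed by the helpers above is roughly:

  # Load the 'epp' function from the environment loaders and invoke it with a scope.
  epp = scope.compiler.loaders.public_environment_loader.load(:function, 'epp')
  epp.call(scope, 'testmodule/the_x.epp', { 'x' => '3' })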
diff --git a/spec/unit/parser/methods/filter_spec.rb b/spec/unit/functions/filter_spec.rb
similarity index 54%
rename from spec/unit/parser/methods/filter_spec.rb
rename to spec/unit/functions/filter_spec.rb
index c9fed31d2..e904c6751 100644
--- a/spec/unit/parser/methods/filter_spec.rb
+++ b/spec/unit/functions/filter_spec.rb
@@ -1,135 +1,131 @@
require 'puppet'
require 'spec_helper'
require 'puppet_spec/compiler'
+require 'matchers/resource'
-require 'unit/parser/methods/shared'
+require 'shared_behaviours/iterative_functions'
describe 'the filter method' do
include PuppetSpec::Compiler
+ include Matchers::Resource
before :each do
Puppet[:parser] = 'future'
end
it 'should filter on an array (all berries)' do
catalog = compile_to_catalog(<<-MANIFEST)
$a = ['strawberry','blueberry','orange']
$a.filter |$x|{ $x =~ /berry$/}.each |$v|{
file { "/file_$v": ensure => present }
}
MANIFEST
- catalog.resource(:file, "/file_strawberry")['ensure'].should == 'present'
- catalog.resource(:file, "/file_blueberry")['ensure'].should == 'present'
+ expect(catalog).to have_resource("File[/file_strawberry]").with_parameter(:ensure, 'present')
+ expect(catalog).to have_resource("File[/file_blueberry]").with_parameter(:ensure, 'present')
end
it 'should filter on enumerable type (Integer)' do
catalog = compile_to_catalog(<<-MANIFEST)
$a = Integer[1,10]
$a.filter |$x|{ $x % 3 == 0}.each |$v|{
file { "/file_$v": ensure => present }
}
MANIFEST
- catalog.resource(:file, "/file_3")['ensure'].should == 'present'
- catalog.resource(:file, "/file_6")['ensure'].should == 'present'
- catalog.resource(:file, "/file_9")['ensure'].should == 'present'
+ expect(catalog).to have_resource("File[/file_3]").with_parameter(:ensure, 'present')
+ expect(catalog).to have_resource("File[/file_6]").with_parameter(:ensure, 'present')
+ expect(catalog).to have_resource("File[/file_9]").with_parameter(:ensure, 'present')
end
it 'should filter on enumerable type (Integer) using two args index/value' do
catalog = compile_to_catalog(<<-MANIFEST)
$a = Integer[10,18]
$a.filter |$i, $x|{ $i % 3 == 0}.each |$v|{
file { "/file_$v": ensure => present }
}
MANIFEST
- catalog.resource(:file, "/file_10")['ensure'].should == 'present'
- catalog.resource(:file, "/file_13")['ensure'].should == 'present'
- catalog.resource(:file, "/file_16")['ensure'].should == 'present'
+ expect(catalog).to have_resource("File[/file_10]").with_parameter(:ensure, 'present')
+ expect(catalog).to have_resource("File[/file_13]").with_parameter(:ensure, 'present')
+ expect(catalog).to have_resource("File[/file_16]").with_parameter(:ensure, 'present')
end
it 'should produce an array when acting on an array' do
catalog = compile_to_catalog(<<-MANIFEST)
$a = ['strawberry','blueberry','orange']
$b = $a.filter |$x|{ $x =~ /berry$/}
file { "/file_${b[0]}": ensure => present }
file { "/file_${b[1]}": ensure => present }
MANIFEST
- catalog.resource(:file, "/file_strawberry")['ensure'].should == 'present'
- catalog.resource(:file, "/file_blueberry")['ensure'].should == 'present'
+ expect(catalog).to have_resource("File[/file_strawberry]").with_parameter(:ensure, 'present')
+ expect(catalog).to have_resource("File[/file_blueberry]").with_parameter(:ensure, 'present')
end
it 'can filter array using index and value' do
catalog = compile_to_catalog(<<-MANIFEST)
$a = ['strawberry','blueberry','orange']
$b = $a.filter |$index, $x|{ $index == 0 or $index ==2}
file { "/file_${b[0]}": ensure => present }
file { "/file_${b[1]}": ensure => present }
MANIFEST
- catalog.resource(:file, "/file_strawberry")['ensure'].should == 'present'
- catalog.resource(:file, "/file_orange")['ensure'].should == 'present'
+ expect(catalog).to have_resource("File[/file_strawberry]").with_parameter(:ensure, 'present')
+ expect(catalog).to have_resource("File[/file_orange]").with_parameter(:ensure, 'present')
+ end
+
+ it 'can filter array using index and value (using captures-rest)' do
+ catalog = compile_to_catalog(<<-MANIFEST)
+ $a = ['strawberry','blueberry','orange']
+ $b = $a.filter |*$ix|{ $ix[0] == 0 or $ix[0] ==2}
+ file { "/file_${b[0]}": ensure => present }
+ file { "/file_${b[1]}": ensure => present }
+ MANIFEST
+
+ expect(catalog).to have_resource("File[/file_strawberry]").with_parameter(:ensure, 'present')
+ expect(catalog).to have_resource("File[/file_orange]").with_parameter(:ensure, 'present')
end
it 'filters on a hash (all berries) by key' do
catalog = compile_to_catalog(<<-MANIFEST)
$a = {'strawberry'=>'red','blueberry'=>'blue','orange'=>'orange'}
$a.filter |$x|{ $x[0] =~ /berry$/}.each |$v|{
file { "/file_${v[0]}": ensure => present }
}
MANIFEST
- catalog.resource(:file, "/file_strawberry")['ensure'].should == 'present'
- catalog.resource(:file, "/file_blueberry")['ensure'].should == 'present'
+ expect(catalog).to have_resource("File[/file_strawberry]").with_parameter(:ensure, 'present')
+ expect(catalog).to have_resource("File[/file_blueberry]").with_parameter(:ensure, 'present')
end
it 'should produce a hash when acting on a hash' do
catalog = compile_to_catalog(<<-MANIFEST)
$a = {'strawberry'=>'red','blueberry'=>'blue','orange'=>'orange'}
$b = $a.filter |$x|{ $x[0] =~ /berry$/}
file { "/file_${b['strawberry']}": ensure => present }
file { "/file_${b['blueberry']}": ensure => present }
file { "/file_${b['orange']}": ensure => present }
MANIFEST
- catalog.resource(:file, "/file_red")['ensure'].should == 'present'
- catalog.resource(:file, "/file_blue")['ensure'].should == 'present'
- catalog.resource(:file, "/file_")['ensure'].should == 'present'
+ expect(catalog).to have_resource("File[/file_red]").with_parameter(:ensure, 'present')
+ expect(catalog).to have_resource("File[/file_blue]").with_parameter(:ensure, 'present')
+ expect(catalog).to have_resource("File[/file_]").with_parameter(:ensure, 'present')
end
it 'filters on a hash (all berries) by value' do
catalog = compile_to_catalog(<<-MANIFEST)
$a = {'strawb'=>'red berry','blueb'=>'blue berry','orange'=>'orange fruit'}
$a.filter |$x|{ $x[1] =~ /berry$/}.each |$v|{
file { "/file_${v[0]}": ensure => present }
}
MANIFEST
- catalog.resource(:file, "/file_strawb")['ensure'].should == 'present'
- catalog.resource(:file, "/file_blueb")['ensure'].should == 'present'
- end
-
- context 'filter checks arguments and' do
- it 'raises an error when block has more than 2 argument' do
- expect do
- compile_to_catalog(<<-MANIFEST)
- [1].filter |$indexm, $x, $yikes|{ }
- MANIFEST
- end.to raise_error(Puppet::Error, /block must define at most two parameters/)
- end
-
- it 'raises an error when block has fewer than 1 argument' do
- expect do
- compile_to_catalog(<<-MANIFEST)
- [1].filter || { }
- MANIFEST
- end.to raise_error(Puppet::Error, /block must define at least one parameter/)
- end
+ expect(catalog).to have_resource("File[/file_strawb]").with_parameter(:ensure, 'present')
+ expect(catalog).to have_resource("File[/file_blueb]").with_parameter(:ensure, 'present')
end
it_should_behave_like 'all iterative functions argument checks', 'filter'
it_should_behave_like 'all iterative functions hash handling', 'filter'
end
diff --git a/spec/unit/parser/functions/inline_epp_spec.rb b/spec/unit/functions/inline_epp_spec.rb
similarity index 81%
rename from spec/unit/parser/functions/inline_epp_spec.rb
rename to spec/unit/functions/inline_epp_spec.rb
index 44b24528b..36328c2dc 100644
--- a/spec/unit/parser/functions/inline_epp_spec.rb
+++ b/spec/unit/functions/inline_epp_spec.rb
@@ -1,82 +1,97 @@
require 'spec_helper'
describe "the inline_epp function" do
include PuppetSpec::Files
before :all do
Puppet::Parser::Functions.autoloader.loadall
end
before :each do
Puppet[:parser] = 'future'
end
let :node do Puppet::Node.new('localhost') end
let :compiler do Puppet::Parser::Compiler.new(node) end
let :scope do Puppet::Parser::Scope.new(compiler) end
context "when accessing scope variables as $ variables" do
it "looks up the value from the scope" do
scope["what"] = "are belong"
eval_template("all your base <%= $what %> to us").should == "all your base are belong to us"
end
it "get nil accessing a variable that does not exist" do
eval_template("<%= $kryptonite == undef %>").should == "true"
end
it "get nil accessing a variable that is undef" do
scope['undef_var'] = :undef
eval_template("<%= $undef_var == undef %>").should == "true"
end
it "gets shadowed variable if args are given" do
scope['phantom'] = 'of the opera'
eval_template_with_args("<%= $phantom == dragos %>", 'phantom' => 'dragos').should == "true"
end
it "gets shadowed variable if args are given and parameters are specified" do
scope['x'] = 'wrong one'
eval_template_with_args("<%-| $x |-%><%= $x == correct %>", 'x' => 'correct').should == "true"
end
it "raises an error if required variable is not given" do
scope['x'] = 'wrong one'
expect {
eval_template_with_args("<%-| $x |-%><%= $x == correct %>", 'y' => 'correct')
}.to raise_error(/no value given for required parameters x/)
end
it "raises an error if too many arguments are given" do
scope['x'] = 'wrong one'
expect {
eval_template_with_args("<%-| $x |-%><%= $x == correct %>", 'x' => 'correct', 'y' => 'surplus')
}.to raise_error(/Too many arguments: 2 for 1/)
end
end
+ context "when given an empty template" do
+ it "allows the template file to be empty" do
+ expect(eval_template("")).to eq("")
+ end
+
+ it "allows the template to have empty body after parameters" do
+ expect(eval_template_with_args("<%-|$x|%>", 'x'=>1)).to eq("")
+ end
+ end
+
it "renders a block expression" do
- eval_template_with_args("<%= {($x) $x + 1} %>", 'x' => 2).should == "3"
+ eval_template_with_args("<%= { $y = $x $x + 1} %>", 'x' => 2).should == "3"
end
# although never a problem with epp
it "is not interfered with by having a variable named 'string' (#14093)" do
scope['string'] = "this output should not be seen"
eval_template("some text that is static").should == "some text that is static"
end
it "has access to a variable named 'string' (#14093)" do
scope['string'] = "the string value"
eval_template("string was: <%= $string %>").should == "string was: the string value"
end
def eval_template_with_args(content, args_hash)
- scope.function_inline_epp([content, args_hash])
+ epp_function.call(scope, content, args_hash)
end
def eval_template(content)
- scope.function_inline_epp([content])
+ epp_function.call(scope, content)
end
+
+ def epp_function()
+ epp_func = scope.compiler.loaders.public_environment_loader.load(:function, 'inline_epp')
+ end
+
end
diff --git a/spec/unit/functions/map_spec.rb b/spec/unit/functions/map_spec.rb
new file mode 100644
index 000000000..e1b09cf24
--- /dev/null
+++ b/spec/unit/functions/map_spec.rb
@@ -0,0 +1,169 @@
+require 'puppet'
+require 'spec_helper'
+require 'puppet_spec/compiler'
+require 'matchers/resource'
+
+require 'shared_behaviours/iterative_functions'
+
+describe 'the map method' do
+ include PuppetSpec::Compiler
+ include Matchers::Resource
+
+ before :each do
+ Puppet[:parser] = "future"
+ end
+
+ context "using future parser" do
+ it 'map on an array (multiplying each value by 2)' do
+ catalog = compile_to_catalog(<<-MANIFEST)
+ $a = [1,2,3]
+ $a.map |$x|{ $x*2}.each |$v|{
+ file { "/file_$v": ensure => present }
+ }
+ MANIFEST
+
+ expect(catalog).to have_resource("File[/file_2]").with_parameter(:ensure, 'present')
+ expect(catalog).to have_resource("File[/file_4]").with_parameter(:ensure, 'present')
+ expect(catalog).to have_resource("File[/file_6]").with_parameter(:ensure, 'present')
+ end
+
+ it 'map on an enumerable type (multiplying each value by 2)' do
+ catalog = compile_to_catalog(<<-MANIFEST)
+ $a = Integer[1,3]
+ $a.map |$x|{ $x*2}.each |$v|{
+ file { "/file_$v": ensure => present }
+ }
+ MANIFEST
+
+ expect(catalog).to have_resource("File[/file_2]").with_parameter(:ensure, 'present')
+ expect(catalog).to have_resource("File[/file_4]").with_parameter(:ensure, 'present')
+ expect(catalog).to have_resource("File[/file_6]").with_parameter(:ensure, 'present')
+ end
+
+ it 'map on an integer (multiply each by 3)' do
+ catalog = compile_to_catalog(<<-MANIFEST)
+ 3.map |$x|{ $x*3}.each |$v|{
+ file { "/file_$v": ensure => present }
+ }
+ MANIFEST
+
+ expect(catalog).to have_resource("File[/file_0]").with_parameter(:ensure, 'present')
+ expect(catalog).to have_resource("File[/file_3]").with_parameter(:ensure, 'present')
+ expect(catalog).to have_resource("File[/file_6]").with_parameter(:ensure, 'present')
+ end
+
+ it 'map on a string' do
+ catalog = compile_to_catalog(<<-MANIFEST)
+ $a = {a=>x, b=>y}
+ "ab".map |$x|{$a[$x]}.each |$v|{
+ file { "/file_$v": ensure => present }
+ }
+ MANIFEST
+
+ expect(catalog).to have_resource("File[/file_x]").with_parameter(:ensure, 'present')
+ expect(catalog).to have_resource("File[/file_y]").with_parameter(:ensure, 'present')
+ end
+
+ it 'map on an array (multiplying value by 10 in even index position)' do
+ catalog = compile_to_catalog(<<-MANIFEST)
+ $a = [1,2,3]
+ $a.map |$i, $x|{ if $i % 2 == 0 {$x} else {$x*10}}.each |$v|{
+ file { "/file_$v": ensure => present }
+ }
+ MANIFEST
+
+ expect(catalog).to have_resource("File[/file_1]").with_parameter(:ensure, 'present')
+ expect(catalog).to have_resource("File[/file_20]").with_parameter(:ensure, 'present')
+ expect(catalog).to have_resource("File[/file_3]").with_parameter(:ensure, 'present')
+ end
+
+ it 'map on a hash selecting keys' do
+ catalog = compile_to_catalog(<<-MANIFEST)
+ $a = {'a'=>1,'b'=>2,'c'=>3}
+ $a.map |$x|{ $x[0]}.each |$k|{
+ file { "/file_$k": ensure => present }
+ }
+ MANIFEST
+
+ expect(catalog).to have_resource("File[/file_a]").with_parameter(:ensure, 'present')
+ expect(catalog).to have_resource("File[/file_b]").with_parameter(:ensure, 'present')
+ expect(catalog).to have_resource("File[/file_c]").with_parameter(:ensure, 'present')
+ end
+
+ it 'map on a hash selecting keys - using two block parameters' do
+ catalog = compile_to_catalog(<<-MANIFEST)
+ $a = {'a'=>1,'b'=>2,'c'=>3}
+ $a.map |$k,$v|{ file { "/file_$k": ensure => present }
+ }
+ MANIFEST
+
+ expect(catalog).to have_resource("File[/file_a]").with_parameter(:ensure, 'present')
+ expect(catalog).to have_resource("File[/file_b]").with_parameter(:ensure, 'present')
+ expect(catalog).to have_resource("File[/file_c]").with_parameter(:ensure, 'present')
+ end
+
+ it 'map on a hash using captures-last parameter' do
+ catalog = compile_to_catalog(<<-MANIFEST)
+ $a = {'a'=>present,'b'=>absent,'c'=>present}
+ $a.map |*$kv|{ file { "/file_${kv[0]}": ensure => $kv[1] } }
+ MANIFEST
+
+ expect(catalog).to have_resource("File[/file_a]").with_parameter(:ensure, 'present')
+ expect(catalog).to have_resource("File[/file_b]").with_parameter(:ensure, 'absent')
+ expect(catalog).to have_resource("File[/file_c]").with_parameter(:ensure, 'present')
+ end
+
+ it 'each on a hash selecting value' do
+ catalog = compile_to_catalog(<<-MANIFEST)
+ $a = {'a'=>1,'b'=>2,'c'=>3}
+ $a.map |$x|{ $x[1]}.each |$k|{ file { "/file_$k": ensure => present } }
+ MANIFEST
+
+ expect(catalog).to have_resource("File[/file_1]").with_parameter(:ensure, 'present')
+ expect(catalog).to have_resource("File[/file_2]").with_parameter(:ensure, 'present')
+ expect(catalog).to have_resource("File[/file_3]").with_parameter(:ensure, 'present')
+ end
+
+ it 'each on a hash selecting value - using two block parameters' do
+ catalog = compile_to_catalog(<<-MANIFEST)
+ $a = {'a'=>1,'b'=>2,'c'=>3}
+ $a.map |$k,$v|{ file { "/file_$v": ensure => present } }
+ MANIFEST
+
+ expect(catalog).to have_resource("File[/file_1]").with_parameter(:ensure, 'present')
+ expect(catalog).to have_resource("File[/file_2]").with_parameter(:ensure, 'present')
+ expect(catalog).to have_resource("File[/file_3]").with_parameter(:ensure, 'present')
+ end
+
+ context "handles data type corner cases" do
+ it "map gets values that are false" do
+ catalog = compile_to_catalog(<<-MANIFEST)
+ $a = [false,false]
+ $a.map |$x| { $x }.each |$i, $v| {
+ file { "/file_$i.$v": ensure => present }
+ }
+ MANIFEST
+
+ expect(catalog).to have_resource("File[/file_0.false]").with_parameter(:ensure, 'present')
+ expect(catalog).to have_resource("File[/file_1.false]").with_parameter(:ensure, 'present')
+ end
+
+ it "map gets values that are nil" do
+ Puppet::Parser::Functions.newfunction(:nil_array, :type => :rvalue) do |args|
+ [nil]
+ end
+ catalog = compile_to_catalog(<<-MANIFEST)
+ $a = nil_array()
+ $a.map |$x| { $x }.each |$i, $v| {
+ file { "/file_$i.$v": ensure => present }
+ }
+ MANIFEST
+
+ expect(catalog).to have_resource("File[/file_0.]").with_parameter(:ensure, 'present')
+ end
+ end
+
+ it_should_behave_like 'all iterative functions argument checks', 'map'
+ it_should_behave_like 'all iterative functions hash handling', 'map'
+ end
+end
diff --git a/spec/unit/functions/match_spec.rb b/spec/unit/functions/match_spec.rb
new file mode 100644
index 000000000..f4e2e383b
--- /dev/null
+++ b/spec/unit/functions/match_spec.rb
@@ -0,0 +1,57 @@
+require 'spec_helper'
+require 'puppet/pops'
+require 'puppet/loaders'
+
+describe 'the match function' do
+
+ before(:all) do
+ loaders = Puppet::Pops::Loaders.new(Puppet::Node::Environment.create(:testing, []))
+ Puppet.push_context({:loaders => loaders}, "test-examples")
+ end
+
+ after(:all) do
+ Puppet::Pops::Loaders.clear
+ Puppet.pop_context()
+ end
+
+ let(:func) do
+ Puppet.lookup(:loaders).puppet_system_loader.load(:function, 'match')
+ end
+
+ let(:type_parser) { Puppet::Pops::Types::TypeParser.new }
+
+
+ it 'matches string and regular expression without captures' do
+ expect(func.call({}, 'abc123', /[a-z]+[1-9]+/)).to eql(['abc123'])
+ end
+
+ it 'matches string and regular expression with captures' do
+ expect(func.call({}, 'abc123', /([a-z]+)([1-9]+)/)).to eql(['abc123', 'abc', '123'])
+ end
+
+ it 'produces nil if match is not found' do
+ expect(func.call({}, 'abc123', /([x]+)([6]+)/)).to be_nil
+ end
+
+ [ 'Pattern[/([a-z]+)([1-9]+)/]', # regexp
+ 'Pattern["([a-z]+)([1-9]+)"]', # string
+ 'Regexp[/([a-z]+)([1-9]+)/]', # regexp type
+ 'Pattern[/x9/, /([a-z]+)([1-9]+)/]', # regexp, first found matches
+ ].each do |pattern|
+ it "matches string and type #{pattern} with captures" do
+ expect(func.call({}, 'abc123', type(pattern))).to eql(['abc123', 'abc', '123'])
+ end
+ end
+
+ it 'matches an array of strings and yields a map of the result' do
+ expect(func.call({}, ['abc123', '2a', 'xyz2'], /([a-z]+)[1-9]+/)).to eql([['abc123', 'abc'], nil, ['xyz2', 'xyz']])
+ end
+
+ it 'raises error if Regexp type without regexp is used' do
+ expect{func.call({}, 'abc123', type('Regexp'))}.to raise_error(ArgumentError, /Given Regexp Type has no regular expression/)
+ end
+
+ def type(s)
+ Puppet::Pops::Types::TypeParser.new.parse(s)
+ end
+end
diff --git a/spec/unit/parser/methods/reduce_spec.rb b/spec/unit/functions/reduce_spec.rb
similarity index 66%
rename from spec/unit/parser/methods/reduce_spec.rb
rename to spec/unit/functions/reduce_spec.rb
index 4f0c14e5e..032f6ccc4 100644
--- a/spec/unit/parser/methods/reduce_spec.rb
+++ b/spec/unit/functions/reduce_spec.rb
@@ -1,78 +1,96 @@
require 'puppet'
require 'spec_helper'
require 'puppet_spec/compiler'
+require 'matchers/resource'
+require 'shared_behaviours/iterative_functions'
describe 'the reduce method' do
include PuppetSpec::Compiler
+ include Matchers::Resource
before :all do
# enable switching back
@saved_parser = Puppet[:parser]
# These tests only work with future parser
end
after :all do
# switch back to original
Puppet[:parser] = @saved_parser
end
before :each do
node = Puppet::Node.new("floppy", :environment => 'production')
@compiler = Puppet::Parser::Compiler.new(node)
@scope = Puppet::Parser::Scope.new(@compiler)
@topscope = @scope.compiler.topscope
@scope.parent = @topscope
Puppet[:parser] = 'future'
end
context "should be callable as" do
it 'reduce on an array' do
catalog = compile_to_catalog(<<-MANIFEST)
$a = [1,2,3]
$b = $a.reduce |$memo, $x| { $memo + $x }
file { "/file_$b": ensure => present }
MANIFEST
- catalog.resource(:file, "/file_6")['ensure'].should == 'present'
+ expect(catalog).to have_resource("File[/file_6]").with_parameter(:ensure, 'present')
+ end
+
+ it 'reduce on an array with captures rest in lambda' do
+ catalog = compile_to_catalog(<<-MANIFEST)
+ $a = [1,2,3]
+ $b = $a.reduce |*$mx| { $mx[0] + $mx[1] }
+ file { "/file_$b": ensure => present }
+ MANIFEST
+
+ expect(catalog).to have_resource("File[/file_6]").with_parameter(:ensure, 'present')
end
it 'reduce on enumerable type' do
catalog = compile_to_catalog(<<-MANIFEST)
$a = Integer[1,3]
$b = $a.reduce |$memo, $x| { $memo + $x }
file { "/file_$b": ensure => present }
MANIFEST
- catalog.resource(:file, "/file_6")['ensure'].should == 'present'
+ expect(catalog).to have_resource("File[/file_6]").with_parameter(:ensure, 'present')
end
it 'reduce on an array with start value' do
catalog = compile_to_catalog(<<-MANIFEST)
$a = [1,2,3]
$b = $a.reduce(4) |$memo, $x| { $memo + $x }
file { "/file_$b": ensure => present }
MANIFEST
- catalog.resource(:file, "/file_10")['ensure'].should == 'present'
+ expect(catalog).to have_resource("File[/file_10]").with_parameter(:ensure, 'present')
end
+
it 'reduce on a hash' do
catalog = compile_to_catalog(<<-MANIFEST)
$a = {a=>1, b=>2, c=>3}
$start = [ignored, 4]
$b = $a.reduce |$memo, $x| {['sum', $memo[1] + $x[1]] }
file { "/file_${$b[0]}_${$b[1]}": ensure => present }
MANIFEST
- catalog.resource(:file, "/file_sum_6")['ensure'].should == 'present'
+ expect(catalog).to have_resource("File[/file_sum_6]").with_parameter(:ensure, 'present')
end
+
it 'reduce on a hash with start value' do
catalog = compile_to_catalog(<<-MANIFEST)
$a = {a=>1, b=>2, c=>3}
$start = ['ignored', 4]
$b = $a.reduce($start) |$memo, $x| { ['sum', $memo[1] + $x[1]] }
file { "/file_${$b[0]}_${$b[1]}": ensure => present }
MANIFEST
- catalog.resource(:file, "/file_sum_10")['ensure'].should == 'present'
+ expect(catalog).to have_resource("File[/file_sum_10]").with_parameter(:ensure, 'present')
end
end
+
+ it_should_behave_like 'all iterative functions argument checks', 'reduce'
+
end
diff --git a/spec/unit/parser/methods/slice_spec.rb b/spec/unit/functions/slice_spec.rb
similarity index 55%
rename from spec/unit/parser/methods/slice_spec.rb
rename to spec/unit/functions/slice_spec.rb
index 1de1dd0f1..945cae5c7 100644
--- a/spec/unit/parser/methods/slice_spec.rb
+++ b/spec/unit/functions/slice_spec.rb
@@ -1,135 +1,148 @@
require 'puppet'
require 'spec_helper'
require 'puppet_spec/compiler'
-require 'rubygems'
+require 'matchers/resource'
describe 'methods' do
include PuppetSpec::Compiler
+ include Matchers::Resource
before :all do
# enable switching back
@saved_parser = Puppet[:parser]
# These tests only work with future parser
Puppet[:parser] = 'future'
end
after :all do
# switch back to original
Puppet[:parser] = @saved_parser
end
before :each do
node = Puppet::Node.new("floppy", :environment => 'production')
@compiler = Puppet::Parser::Compiler.new(node)
@scope = Puppet::Parser::Scope.new(@compiler)
@topscope = @scope.compiler.topscope
@scope.parent = @topscope
Puppet[:parser] = 'future'
end
context "should be callable on array as" do
it 'slice with explicit parameters' do
catalog = compile_to_catalog(<<-MANIFEST)
$a = [1, present, 2, absent, 3, present]
$a.slice(2) |$k,$v| {
file { "/file_${$k}": ensure => $v }
}
MANIFEST
- catalog.resource(:file, "/file_1")['ensure'].should == 'present'
- catalog.resource(:file, "/file_2")['ensure'].should == 'absent'
- catalog.resource(:file, "/file_3")['ensure'].should == 'present'
+ expect(catalog).to have_resource("File[/file_1]").with_parameter(:ensure, 'present')
+ expect(catalog).to have_resource("File[/file_2]").with_parameter(:ensure, 'absent')
+ expect(catalog).to have_resource("File[/file_3]").with_parameter(:ensure, 'present')
+ end
+
+ it 'slice with captures last' do
+ catalog = compile_to_catalog(<<-MANIFEST)
+ $a = [1, present, 2, absent, 3, present]
+ $a.slice(2) |*$kv| {
+ file { "/file_${$kv[0]}": ensure => $kv[1] }
+ }
+ MANIFEST
+
+ expect(catalog).to have_resource("File[/file_1]").with_parameter(:ensure, 'present')
+ expect(catalog).to have_resource("File[/file_2]").with_parameter(:ensure, 'absent')
+ expect(catalog).to have_resource("File[/file_3]").with_parameter(:ensure, 'present')
end
it 'slice with one parameter' do
catalog = compile_to_catalog(<<-MANIFEST)
$a = [1, present, 2, absent, 3, present]
$a.slice(2) |$k| {
file { "/file_${$k[0]}": ensure => $k[1] }
}
MANIFEST
- catalog.resource(:file, "/file_1")['ensure'].should == 'present'
- catalog.resource(:file, "/file_2")['ensure'].should == 'absent'
- catalog.resource(:file, "/file_3")['ensure'].should == 'present'
+ expect(catalog).to have_resource("File[/file_1]").with_parameter(:ensure, 'present')
+ expect(catalog).to have_resource("File[/file_2]").with_parameter(:ensure, 'absent')
+ expect(catalog).to have_resource("File[/file_3]").with_parameter(:ensure, 'present')
end
it 'slice with shorter last slice' do
catalog = compile_to_catalog(<<-MANIFEST)
$a = [1, present, 2, present, 3, absent]
$a.slice(4) |$a, $b, $c, $d| {
file { "/file_$a.$c": ensure => $b }
}
MANIFEST
- catalog.resource(:file, "/file_1.2")['ensure'].should == 'present'
- catalog.resource(:file, "/file_3.")['ensure'].should == 'absent'
+ expect(catalog).to have_resource("File[/file_1.2]").with_parameter(:ensure, 'present')
+ expect(catalog).to have_resource("File[/file_3.]").with_parameter(:ensure, 'absent')
end
end
context "should be callable on hash as" do
it 'slice with explicit parameters, missing are empty' do
catalog = compile_to_catalog(<<-MANIFEST)
$a = {1=>present, 2=>present, 3=>absent}
$a.slice(2) |$a,$b| {
file { "/file_${a[0]}.${b[0]}": ensure => $a[1] }
}
MANIFEST
- catalog.resource(:file, "/file_1.2")['ensure'].should == 'present'
- catalog.resource(:file, "/file_3.")['ensure'].should == 'absent'
+ expect(catalog).to have_resource("File[/file_1.2]").with_parameter(:ensure, 'present')
+ expect(catalog).to have_resource("File[/file_3.]").with_parameter(:ensure, 'absent')
end
end
context "should be callable on enumerable types as" do
it 'slice with integer range' do
catalog = compile_to_catalog(<<-MANIFEST)
$a = Integer[1,4]
$a.slice(2) |$a,$b| {
file { "/file_${a}.${b}": ensure => present }
}
MANIFEST
- catalog.resource(:file, "/file_1.2")['ensure'].should == 'present'
- catalog.resource(:file, "/file_3.4")['ensure'].should == 'present'
+ expect(catalog).to have_resource("File[/file_1.2]").with_parameter(:ensure, 'present')
+ expect(catalog).to have_resource("File[/file_3.4]").with_parameter(:ensure, 'present')
end
it 'slice with integer' do
catalog = compile_to_catalog(<<-MANIFEST)
4.slice(2) |$a,$b| {
file { "/file_${a}.${b}": ensure => present }
}
MANIFEST
- catalog.resource(:file, "/file_0.1")['ensure'].should == 'present'
- catalog.resource(:file, "/file_2.3")['ensure'].should == 'present'
+ expect(catalog).to have_resource("File[/file_0.1]").with_parameter(:ensure, 'present')
+ expect(catalog).to have_resource("File[/file_2.3]").with_parameter(:ensure, 'present')
end
it 'slice with string' do
catalog = compile_to_catalog(<<-MANIFEST)
'abcd'.slice(2) |$a,$b| {
file { "/file_${a}.${b}": ensure => present }
}
MANIFEST
- catalog.resource(:file, "/file_a.b")['ensure'].should == 'present'
- catalog.resource(:file, "/file_c.d")['ensure'].should == 'present'
+ expect(catalog).to have_resource("File[/file_a.b]").with_parameter(:ensure, 'present')
+ expect(catalog).to have_resource("File[/file_c.d]").with_parameter(:ensure, 'present')
end
end
context "when called without a block" do
it "should produce an array with the result" do
catalog = compile_to_catalog(<<-MANIFEST)
$a = [1, present, 2, absent, 3, present]
$a.slice(2).each |$k| {
file { "/file_${$k[0]}": ensure => $k[1] }
}
MANIFEST
- catalog.resource(:file, "/file_1")['ensure'].should == 'present'
- catalog.resource(:file, "/file_2")['ensure'].should == 'absent'
- catalog.resource(:file, "/file_3")['ensure'].should == 'present'
-
+ expect(catalog).to have_resource("File[/file_1]").with_parameter(:ensure, 'present')
+ expect(catalog).to have_resource("File[/file_2]").with_parameter(:ensure, 'absent')
+ expect(catalog).to have_resource("File[/file_3]").with_parameter(:ensure, 'present')
end
end
end
diff --git a/spec/unit/functions/with_spec.rb b/spec/unit/functions/with_spec.rb
new file mode 100644
index 000000000..952b14412
--- /dev/null
+++ b/spec/unit/functions/with_spec.rb
@@ -0,0 +1,35 @@
+require 'spec_helper'
+
+require 'puppet_spec/compiler'
+require 'matchers/resource'
+
+describe 'the with function' do
+ include PuppetSpec::Compiler
+ include Matchers::Resource
+
+ before :each do
+ Puppet[:parser] = 'future'
+ end
+
+ it 'calls a lambda passing no arguments' do
+ expect(compile_to_catalog("with() || { notify { testing: } }")).to have_resource('Notify[testing]')
+ end
+
+ it 'calls a lambda passing a single argument' do
+ expect(compile_to_catalog('with(1) |$x| { notify { "testing$x": } }')).to have_resource('Notify[testing1]')
+ end
+
+ it 'calls a lambda passing more than one argument' do
+ expect(compile_to_catalog('with(1, 2) |*$x| { notify { "testing${x[0]}, ${x[1]}": } }')).to have_resource('Notify[testing1, 2]')
+ end
+
+ it 'passes a type reference to a lambda' do
+ expect(compile_to_catalog('notify { test: message => "data" } with(Notify[test]) |$x| { notify { "${x[message]}": } }')).to have_resource('Notify[data]')
+ end
+
+ it 'errors when not given enough arguments for the lambda' do
+ expect do
+ compile_to_catalog('with(1) |$x, $y| { }')
+ end.to raise_error(/Parameter \$y is required but no value was given/m)
+ end
+end
diff --git a/spec/unit/functions4_spec.rb b/spec/unit/functions4_spec.rb
index a5404cc1e..6c31b75c2 100644
--- a/spec/unit/functions4_spec.rb
+++ b/spec/unit/functions4_spec.rb
@@ -1,671 +1,670 @@
require 'spec_helper'
require 'puppet/pops'
require 'puppet/loaders'
require 'puppet_spec/pops'
require 'puppet_spec/scope'
module FunctionAPISpecModule
class TestDuck
end
class TestFunctionLoader < Puppet::Pops::Loader::StaticLoader
def initialize
@functions = {}
end
def add_function(name, function)
typed_name = Puppet::Pops::Loader::Loader::TypedName.new(:function, name)
entry = Puppet::Pops::Loader::Loader::NamedEntry.new(typed_name, function, __FILE__)
@functions[typed_name] = entry
end
# override StaticLoader
def load_constant(typed_name)
@functions[typed_name]
end
end
end
describe 'the 4x function api' do
include FunctionAPISpecModule
include PuppetSpec::Pops
include PuppetSpec::Scope
let(:loader) { FunctionAPISpecModule::TestFunctionLoader.new }
it 'allows a simple function to be created without dispatch declaration' do
f = Puppet::Functions.create_function('min') do
def min(x,y)
x <= y ? x : y
end
end
# the produced result is a Class inheriting from Function
expect(f.class).to be(Class)
expect(f.superclass).to be(Puppet::Functions::Function)
# and this class had the given name (not a real Ruby class name)
expect(f.name).to eql('min')
end
it 'refuses to create functions that are not based on the Function class' do
expect do
Puppet::Functions.create_function('testing', Object) {}
end.to raise_error(ArgumentError, 'Functions must be based on Puppet::Pops::Functions::Function. Got Object')
end
it 'a function without arguments can be defined and called without dispatch declaration' do
f = create_noargs_function_class()
func = f.new(:closure_scope, :loader)
expect(func.call({})).to eql(10)
end
it 'an error is raised when calling a no arguments function with arguments' do
f = create_noargs_function_class()
func = f.new(:closure_scope, :loader)
expect{func.call({}, 'surprise')}.to raise_error(ArgumentError, "function 'test' called with mis-matched arguments
expected:
test() - arg count {0}
actual:
test(String) - arg count {1}")
end
it 'a simple function can be called' do
f = create_min_function_class()
# TODO: Bogus parameters, not yet used
func = f.new(:closure_scope, :loader)
expect(func.is_a?(Puppet::Functions::Function)).to be_true
expect(func.call({}, 10,20)).to eql(10)
end
it 'an error is raised if called with too few arguments' do
f = create_min_function_class()
# TODO: Bogus parameters, not yet used
func = f.new(:closure_scope, :loader)
expect(func.is_a?(Puppet::Functions::Function)).to be_true
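# The signature shown in the error depends on the Ruby version: 1.8 cannot introspect
# method parameter names, so the message falls back to a count-only form such as 'Any{2}'.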
signature = if RUBY_VERSION =~ /^1\.8/
- 'Object{2}'
+ 'Any{2}'
else
- 'Object x, Object y'
+ 'Any x, Any y'
end
expect do
func.call({}, 10)
end.to raise_error(ArgumentError, "function 'min' called with mis-matched arguments
expected:
min(#{signature}) - arg count {2}
actual:
min(Integer) - arg count {1}")
end
it 'an error is raised if called with too many arguments' do
f = create_min_function_class()
# TODO: Bogus parameters, not yet used
func = f.new(:closure_scope, :loader)
expect(func.is_a?(Puppet::Functions::Function)).to be_true
signature = if RUBY_VERSION =~ /^1\.8/
- 'Object{2}'
+ 'Any{2}'
else
- 'Object x, Object y'
+ 'Any x, Any y'
end
expect do
func.call({}, 10, 10, 10)
end.to raise_error(ArgumentError, Regexp.new(Regexp.escape(
"function 'min' called with mis-matched arguments
expected:
min(#{signature}) - arg count {2}
actual:
min(Integer, Integer, Integer) - arg count {3}")))
end
it 'an error is raised if simple function-name and method are not matched' do
expect do
f = create_badly_named_method_function_class()
end.to raise_error(ArgumentError, /Function Creation Error, cannot create a default dispatcher for function 'mix', no method with this name found/)
end
it 'the implementation separates dispatchers for different functions' do
# this tests that meta programming / construction puts class attributes in the correct class
f1 = create_min_function_class()
f2 = create_max_function_class()
d1 = f1.dispatcher
d2 = f2.dispatcher
expect(d1).to_not eql(d2)
expect(d1.dispatchers[0]).to_not eql(d2.dispatchers[0])
end
context 'when using regular dispatch' do
it 'a function can be created using dispatch and called' do
f = create_min_function_class_using_dispatch()
func = f.new(:closure_scope, :loader)
expect(func.call({}, 3,4)).to eql(3)
end
it 'an error is raised with reference to given parameter names when called with mis-matched arguments' do
f = create_min_function_class_using_dispatch()
# TODO: Bogus parameters, not yet used
func = f.new(:closure_scope, :loader)
expect(func.is_a?(Puppet::Functions::Function)).to be_true
expect do
func.call({}, 10, 10, 10)
end.to raise_error(ArgumentError, Regexp.new(Regexp.escape(
"function 'min' called with mis-matched arguments
expected:
min(Numeric a, Numeric b) - arg count {2}
actual:
min(Integer, Integer, Integer) - arg count {3}")))
end
it 'an error includes optional indicators and count for last element' do
f = create_function_with_optionals_and_varargs()
# TODO: Bogus parameters, not yet used
func = f.new(:closure_scope, :loader)
expect(func.is_a?(Puppet::Functions::Function)).to be_true
signature = if RUBY_VERSION =~ /^1\.8/
- 'Object{2,}'
+ 'Any{2,}'
else
- 'Object x, Object y, Object a?, Object b?, Object c{0,}'
+ 'Any x, Any y, Any a?, Any b?, Any c{0,}'
end
expect do
func.call({}, 10)
end.to raise_error(ArgumentError,
"function 'min' called with mis-matched arguments
expected:
min(#{signature}) - arg count {2,}
actual:
min(Integer) - arg count {1}")
end
it 'an error includes optional indicators and count for last element when defined via dispatch' do
f = create_function_with_optionals_and_varargs_via_dispatch()
# TODO: Bogus parameters, not yet used
func = f.new(:closure_scope, :loader)
expect(func.is_a?(Puppet::Functions::Function)).to be_true
expect do
func.call({}, 10)
end.to raise_error(ArgumentError,
"function 'min' called with mis-matched arguments
expected:
min(Numeric x, Numeric y, Numeric a?, Numeric b?, Numeric c{0,}) - arg count {2,}
actual:
min(Integer) - arg count {1}")
end
it 'a function can be created using dispatch to two methods and called' do
f = create_min_function_class_dispatching_to_two_methods()
func = f.new(:closure_scope, :loader)
expect(func.call({}, 3,4)).to eql(3)
expect(func.call({}, 'Apple', 'Banana')).to eql('Apple')
end
it 'an error is raised with reference to multiple methods when called with mis-matched arguments' do
f = create_min_function_class_dispatching_to_two_methods()
# TODO: Bogus parameters, not yet used
func = f.new(:closure_scope, :loader)
expect(func.is_a?(Puppet::Functions::Function)).to be_true
expect do
func.call({}, 10, 10, 10)
end.to raise_error(ArgumentError,
"function 'min' called with mis-matched arguments
expected one of:
min(Numeric a, Numeric b) - arg count {2}
min(String s1, String s2) - arg count {2}
actual:
min(Integer, Integer, Integer) - arg count {3}")
end
context 'can use injection' do
before :all do
injector = Puppet::Pops::Binder::Injector.create('test') do
bind.name('a_string').to('evoe')
bind.name('an_int').to(42)
end
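# These bindings ('a_string' => 'evoe', 'an_int' => 42) are what the injected attributes
# and parameters in the helper functions below resolve to.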
Puppet.push_context({:injector => injector}, "injector for testing function API")
end
after :all do
Puppet.pop_context()
end
it 'attributes can be injected' do
f1 = create_function_with_class_injection()
f = f1.new(:closure_scope, :loader)
expect(f.test_attr2()).to eql("evoe")
expect(f.serial().produce(nil)).to eql(42)
expect(f.test_attr().class.name).to eql("FunctionAPISpecModule::TestDuck")
end
it 'parameters can be injected and woven with regular dispatch' do
f1 = create_function_with_param_injection_regular()
f = f1.new(:closure_scope, :loader)
expect(f.call(nil, 10, 20)).to eql("evoe! 10, and 20 < 42 = true")
expect(f.call(nil, 50, 20)).to eql("evoe! 50, and 20 < 42 = false")
end
end
context 'when requesting a type' do
it 'responds with a Callable for a single signature' do
tf = Puppet::Pops::Types::TypeFactory
fc = create_min_function_class_using_dispatch()
t = fc.dispatcher.to_type
expect(t.class).to be(Puppet::Pops::Types::PCallableType)
expect(t.param_types.class).to be(Puppet::Pops::Types::PTupleType)
expect(t.param_types.types).to eql([tf.numeric(), tf.numeric()])
expect(t.block_type).to be_nil
end
it 'responds with a Variant[Callable...] for multiple signatures' do
tf = Puppet::Pops::Types::TypeFactory
fc = create_min_function_class_dispatching_to_two_methods()
t = fc.dispatcher.to_type
expect(t.class).to be(Puppet::Pops::Types::PVariantType)
expect(t.types.size).to eql(2)
t1 = t.types[0]
expect(t1.param_types.class).to be(Puppet::Pops::Types::PTupleType)
expect(t1.param_types.types).to eql([tf.numeric(), tf.numeric()])
expect(t1.block_type).to be_nil
t2 = t.types[1]
expect(t2.param_types.class).to be(Puppet::Pops::Types::PTupleType)
expect(t2.param_types.types).to eql([tf.string(), tf.string()])
expect(t2.block_type).to be_nil
end
end
context 'supports lambdas' do
it 'such that, a required block can be defined and given as an argument' do
# use a Function as callable
the_callable = create_min_function_class().new(:closure_scope, :loader)
the_function = create_function_with_required_block_all_defaults().new(:closure_scope, :loader)
result = the_function.call({}, 10, the_callable)
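# the helper's test method returns the block it receives, so the result is the callable itself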
expect(result).to be(the_callable)
end
it 'such that, a missing required block when called raises an error' do
# use a Function as callable
the_function = create_function_with_required_block_all_defaults().new(:closure_scope, :loader)
expect do
the_function.call({}, 10)
end.to raise_error(ArgumentError,
"function 'test' called with mis-matched arguments
expected:
test(Integer x, Callable block) - arg count {2}
actual:
test(Integer) - arg count {1}")
end
it 'such that, an optional block can be defined and given as an argument' do
# use a Function as callable
the_callable = create_min_function_class().new(:closure_scope, :loader)
the_function = create_function_with_optional_block_all_defaults().new(:closure_scope, :loader)
result = the_function.call({}, 10, the_callable)
expect(result).to be(the_callable)
end
it 'such that, an optional block can be omitted when called and gets the value nil' do
# use a Function as callable
the_function = create_function_with_optional_block_all_defaults().new(:closure_scope, :loader)
expect(the_function.call({}, 10)).to be_nil
end
end
context 'provides signature information' do
it 'about capture rest (varargs)' do
fc = create_function_with_optionals_and_varargs
signatures = fc.signatures
expect(signatures.size).to eql(1)
signature = signatures[0]
expect(signature.last_captures_rest?).to be_true
end
it 'about optional and required parameters' do
fc = create_function_with_optionals_and_varargs
signature = fc.signatures[0]
expect(signature.args_range).to eql( [2, Puppet::Pops::Types::INFINITY ] )
expect(signature.infinity?(signature.args_range[1])).to be_true
end
it 'about block not being allowed' do
fc = create_function_with_optionals_and_varargs
signature = fc.signatures[0]
expect(signature.block_range).to eql( [ 0, 0 ] )
expect(signature.block_type).to be_nil
end
it 'about required block' do
fc = create_function_with_required_block_all_defaults
signature = fc.signatures[0]
expect(signature.block_range).to eql( [ 1, 1 ] )
expect(signature.block_type).to_not be_nil
end
it 'about optional block' do
fc = create_function_with_optional_block_all_defaults
signature = fc.signatures[0]
expect(signature.block_range).to eql( [ 0, 1 ] )
expect(signature.block_type).to_not be_nil
end
it 'about the type' do
fc = create_function_with_optional_block_all_defaults
signature = fc.signatures[0]
expect(signature.type.class).to be(Puppet::Pops::Types::PCallableType)
end
# conditional on Ruby 1.8.7 which does not do parameter introspection
if Method.method_defined?(:parameters)
it 'about parameter names obtained from ruby introspection' do
fc = create_min_function_class
signature = fc.signatures[0]
expect(signature.parameter_names).to eql(['x', 'y'])
end
end
it 'about parameter names specified with dispatch' do
fc = create_min_function_class_using_dispatch
signature = fc.signatures[0]
expect(signature.parameter_names).to eql(['a', 'b'])
end
it 'about block_name when it is *not* given in the definition' do
# neither type, nor name
fc = create_function_with_required_block_all_defaults
signature = fc.signatures[0]
expect(signature.block_name).to eql('block')
# no name given, only type
fc = create_function_with_required_block_given_type
signature = fc.signatures[0]
expect(signature.block_name).to eql('block')
end
it 'about block_name when it *is* given in the definition' do
# only the name is given; the type defaults to any callable
fc = create_function_with_required_block_default_type
signature = fc.signatures[0]
expect(signature.block_name).to eql('the_block')
# both type and name are given
fc = create_function_with_required_block_fully_specified
signature = fc.signatures[0]
expect(signature.block_name).to eql('the_block')
end
end
context 'supports calling other functions' do
before(:all) do
Puppet.push_context( {:loaders => Puppet::Pops::Loaders.new(Puppet::Node::Environment.create(:testing, []))})
end
after(:all) do
Puppet.pop_context()
end
it 'such that, other functions are callable by name' do
fc = Puppet::Functions.create_function(:test) do
def test()
# Call a function available in the puppet system
call_function('assert_type', 'Integer', 10)
end
end
# instantiate the function the same way the loader does
f = fc.new(:closure_scope, Puppet.lookup(:loaders).puppet_system_loader)
expect(f.call({})).to eql(10)
end
it 'such that, calling a non existing function raises an error' do
fc = Puppet::Functions.create_function(:test) do
def test()
# Call a function not available in the puppet system
call_function('no_such_function', 'Integer', 'hello')
end
end
# instantiate the function the same way the loader does
f = fc.new(:closure_scope, Puppet.lookup(:loaders).puppet_system_loader)
expect{f.call({})}.to raise_error(ArgumentError, "Function test(): cannot call function 'no_such_function' - not found")
end
end
context 'supports calling ruby functions with lambda from puppet' do
before(:all) do
Puppet.push_context( {:loaders => Puppet::Pops::Loaders.new(Puppet::Node::Environment.create(:testing, []))})
end
after(:all) do
Puppet.pop_context()
end
before(:each) do
Puppet[:strict_variables] = true
# These must be set since there is 3x logic that triggers on these even if the tests are explicit
# about selection of parser and evaluator
#
Puppet[:parser] = 'future'
- Puppet[:evaluator] = 'future'
# Puppetx cannot be loaded until the correct parser has been set (injector is turned off otherwise)
require 'puppetx'
end
- let(:parser) { Puppet::Pops::Parser::EvaluatingParser::Transitional.new }
+ let(:parser) { Puppet::Pops::Parser::EvaluatingParser.new }
let(:node) { 'node.example.com' }
let(:scope) { s = create_test_scope_for_node(node); s }
it 'function with required block can be called' do
# construct ruby function to call
fc = Puppet::Functions.create_function('testing::test') do
dispatch :test do
param 'Integer', 'x'
# block called 'the_block', and using "all_callables"
required_block_param #(all_callables(), 'the_block')
end
def test(x, block)
# call the block with x
block.call(closure_scope, x)
end
end
# add the function to the loader (as if it had been loaded from somewhere)
the_loader = loader()
f = fc.new({}, the_loader)
loader.add_function('testing::test', f)
# evaluate a puppet call
source = "testing::test(10) |$x| { $x+1 }"
program = parser.parse_string(source, __FILE__)
Puppet::Pops::Adapters::LoaderAdapter.adapt(program.model).loader = the_loader
expect(parser.evaluate(scope, program)).to eql(11)
end
end
end
def create_noargs_function_class
f = Puppet::Functions.create_function('test') do
def test()
10
end
end
end
def create_min_function_class
f = Puppet::Functions.create_function('min') do
def min(x,y)
x <= y ? x : y
end
end
end
def create_max_function_class
f = Puppet::Functions.create_function('max') do
def max(x,y)
x >= y ? x : y
end
end
end
def create_badly_named_method_function_class
f = Puppet::Functions.create_function('mix') do
def mix_up(x,y)
x <= y ? x : y
end
end
end
def create_min_function_class_using_dispatch
f = Puppet::Functions.create_function('min') do
dispatch :min do
param 'Numeric', 'a'
param 'Numeric', 'b'
end
def min(x,y)
x <= y ? x : y
end
end
end
def create_min_function_class_dispatching_to_two_methods
f = Puppet::Functions.create_function('min') do
dispatch :min do
param 'Numeric', 'a'
param 'Numeric', 'b'
end
dispatch :min_s do
param 'String', 's1'
param 'String', 's2'
end
def min(x,y)
x <= y ? x : y
end
def min_s(x,y)
cmp = (x.downcase <=> y.downcase)
cmp <= 0 ? x : y
end
end
end
def create_function_with_optionals_and_varargs
f = Puppet::Functions.create_function('min') do
def min(x,y,a=1, b=1, *c)
x <= y ? x : y
end
end
end
def create_function_with_optionals_and_varargs_via_dispatch
f = Puppet::Functions.create_function('min') do
dispatch :min do
param 'Numeric', 'x'
param 'Numeric', 'y'
param 'Numeric', 'a'
param 'Numeric', 'b'
param 'Numeric', 'c'
arg_count 2, :default
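# arg_count constrains the accepted number of arguments for this dispatch: at least two,
# with the maximum left at its default (open-ended here, as the matching error text '{2,}' shows)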
end
def min(x,y,a=1, b=1, *c)
x <= y ? x : y
end
end
end
def create_function_with_class_injection
f = Puppet::Functions.create_function('test', Puppet::Functions::InternalFunction) do
attr_injected Puppet::Pops::Types::TypeFactory.type_of(FunctionAPISpecModule::TestDuck), :test_attr
attr_injected Puppet::Pops::Types::TypeFactory.string(), :test_attr2, "a_string"
attr_injected_producer Puppet::Pops::Types::TypeFactory.integer(), :serial, "an_int"
def test(x,y,a=1, b=1, *c)
x <= y ? x : y
end
end
end
def create_function_with_param_injection_regular
f = Puppet::Functions.create_function('test', Puppet::Functions::InternalFunction) do
attr_injected Puppet::Pops::Types::TypeFactory.type_of(FunctionAPISpecModule::TestDuck), :test_attr
attr_injected Puppet::Pops::Types::TypeFactory.string(), :test_attr2, "a_string"
attr_injected_producer Puppet::Pops::Types::TypeFactory.integer(), :serial, "an_int"
dispatch :test do
injected_param Puppet::Pops::Types::TypeFactory.string, 'x', 'a_string'
injected_producer_param Puppet::Pops::Types::TypeFactory.integer, 'y', 'an_int'
param 'Scalar', 'a'
param 'Scalar', 'b'
end
def test(x,y,a,b)
y_produced = y.produce(nil)
"#{x}! #{a}, and #{b} < #{y_produced} = #{ !!(a < y_produced && b < y_produced)}"
end
end
end
def create_function_with_required_block_all_defaults
f = Puppet::Functions.create_function('test') do
dispatch :test do
param 'Integer', 'x'
# use defaults, any callable, name is 'block'
required_block_param
end
def test(x, block)
# returns the block to make it easy to test what it got when called
block
end
end
end
def create_function_with_required_block_default_type
f = Puppet::Functions.create_function('test') do
dispatch :test do
param 'Integer', 'x'
# default type (any callable), name explicitly set to 'the_block'
required_block_param 'the_block'
end
def test(x, block)
# returns the block to make it easy to test what it got when called
block
end
end
end
def create_function_with_required_block_given_type
f = Puppet::Functions.create_function('test') do
dispatch :test do
param 'Integer', 'x'
required_block_param
end
def test(x, block)
# returns the block to make it easy to test what it got when called
block
end
end
end
def create_function_with_required_block_fully_specified
f = Puppet::Functions.create_function('test') do
dispatch :test do
param 'Integer', 'x'
# explicitly specify both the type ('Callable') and the name ('the_block')
required_block_param('Callable', 'the_block')
end
def test(x, block)
# returns the block to make it easy to test what it got when called
block
end
end
end
def create_function_with_optional_block_all_defaults
f = Puppet::Functions.create_function('test') do
dispatch :test do
param 'Integer', 'x'
# use defaults, any callable, name is 'block'
optional_block_param
end
def test(x, block=nil)
# returns the block to make it easy to test what it got when called
# a default of nil must be used or the call will fail with a missing parameter
block
end
end
end
end
diff --git a/spec/unit/indirector/catalog/compiler_spec.rb b/spec/unit/indirector/catalog/compiler_spec.rb
index 7d92aa80f..4aaecd664 100755
--- a/spec/unit/indirector/catalog/compiler_spec.rb
+++ b/spec/unit/indirector/catalog/compiler_spec.rb
@@ -1,274 +1,272 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet/indirector/catalog/compiler'
require 'puppet/rails'
describe Puppet::Resource::Catalog::Compiler do
before do
- require 'puppet/rails'
Puppet::Rails.stubs(:init)
Facter.stubs(:to_hash).returns({})
- Facter.stubs(:value).returns(Facter::Util::Fact.new("something"))
end
describe "when initializing" do
before do
Puppet.expects(:version).returns(1)
Facter.expects(:value).with('fqdn').returns("my.server.com")
Facter.expects(:value).with('ipaddress').returns("my.ip.address")
end
it "should gather data about itself" do
Puppet::Resource::Catalog::Compiler.new
end
it "should cache the server metadata and reuse it" do
Puppet[:node_terminus] = :memory
Puppet::Node.indirection.save(Puppet::Node.new("node1"))
Puppet::Node.indirection.save(Puppet::Node.new("node2"))
compiler = Puppet::Resource::Catalog::Compiler.new
compiler.stubs(:compile)
compiler.find(Puppet::Indirector::Request.new(:catalog, :find, 'node1', nil, :node => 'node1'))
compiler.find(Puppet::Indirector::Request.new(:catalog, :find, 'node2', nil, :node => 'node2'))
end
end
describe "when finding catalogs" do
before do
Facter.stubs(:value).returns("whatever")
@compiler = Puppet::Resource::Catalog::Compiler.new
@name = "me"
@node = Puppet::Node.new @name
@node.stubs(:merge)
Puppet::Node.indirection.stubs(:find).returns @node
@request = Puppet::Indirector::Request.new(:catalog, :find, @name, nil, :node => @name)
end
it "should directly use provided nodes for a local request" do
Puppet::Node.indirection.expects(:find).never
@compiler.expects(:compile).with(@node)
@request.stubs(:options).returns(:use_node => @node)
@request.stubs(:remote?).returns(false)
@compiler.find(@request)
end
it "rejects a provided node if the request is remote" do
@request.stubs(:options).returns(:use_node => @node)
@request.stubs(:remote?).returns(true)
expect {
@compiler.find(@request)
}.to raise_error Puppet::Error, /invalid option use_node/i
end
it "should use the authenticated node name if no request key is provided" do
@request.stubs(:key).returns(nil)
Puppet::Node.indirection.expects(:find).with(@name, anything).returns(@node)
@compiler.expects(:compile).with(@node)
@compiler.find(@request)
end
it "should use the provided node name by default" do
@request.expects(:key).returns "my_node"
Puppet::Node.indirection.expects(:find).with("my_node", anything).returns @node
@compiler.expects(:compile).with(@node)
@compiler.find(@request)
end
it "should fail if no node is passed and none can be found" do
Puppet::Node.indirection.stubs(:find).with(@name, anything).returns(nil)
proc { @compiler.find(@request) }.should raise_error(ArgumentError)
end
it "should fail intelligently when searching for a node raises an exception" do
Puppet::Node.indirection.stubs(:find).with(@name, anything).raises "eh"
proc { @compiler.find(@request) }.should raise_error(Puppet::Error)
end
it "should pass the found node to the compiler for compiling" do
Puppet::Node.indirection.expects(:find).with(@name, anything).returns(@node)
config = mock 'config'
Puppet::Parser::Compiler.expects(:compile).with(@node)
@compiler.find(@request)
end
it "should extract and save any facts from the request" do
Puppet::Node.indirection.expects(:find).with(@name, anything).returns @node
@compiler.expects(:extract_facts_from_request).with(@request)
Puppet::Parser::Compiler.stubs(:compile)
@compiler.find(@request)
end
it "requires `facts_format` option if facts are passed in" do
facts = Puppet::Node::Facts.new("mynode", :afact => "avalue")
request = Puppet::Indirector::Request.new(:catalog, :find, "mynode", nil, :facts => facts)
expect {
@compiler.find(request)
}.to raise_error ArgumentError, /no fact format provided for mynode/
end
it "rejects facts in the request from a different node" do
facts = Puppet::Node::Facts.new("differentnode", :afact => "avalue")
request = Puppet::Indirector::Request.new(
:catalog, :find, "mynode", nil, :facts => facts, :facts_format => "unused"
)
expect {
@compiler.find(request)
}.to raise_error Puppet::Error, /fact definition for the wrong node/i
end
it "should return the results of compiling as the catalog" do
Puppet::Node.indirection.stubs(:find).returns(@node)
config = mock 'config'
result = mock 'result'
Puppet::Parser::Compiler.expects(:compile).returns result
@compiler.find(@request).should equal(result)
end
end
describe "when extracting facts from the request" do
before do
Puppet::Node::Facts.indirection.terminus_class = :memory
Facter.stubs(:value).returns "something"
@compiler = Puppet::Resource::Catalog::Compiler.new
@facts = Puppet::Node::Facts.new('hostname', "fact" => "value", "architecture" => "i386")
end
def a_request_that_contains(facts)
request = Puppet::Indirector::Request.new(:catalog, :find, "hostname", nil)
request.options[:facts_format] = "pson"
request.options[:facts] = CGI.escape(facts.render(:pson))
request
end
it "should do nothing if no facts are provided" do
request = Puppet::Indirector::Request.new(:catalog, :find, "hostname", nil)
request.options[:facts] = nil
@compiler.extract_facts_from_request(request).should be_nil
end
it "deserializes the facts and timestamps them" do
@facts.timestamp = Time.parse('2010-11-01')
request = a_request_that_contains(@facts)
now = Time.parse('2010-11-02')
Time.stubs(:now).returns(now)
facts = @compiler.extract_facts_from_request(request)
facts.timestamp.should == now
end
it "should convert the facts into a fact instance and save it" do
request = a_request_that_contains(@facts)
options = {
:environment => request.environment,
:transaction_uuid => request.options[:transaction_uuid],
}
Puppet::Node::Facts.indirection.expects(:save).with(equals(@facts), nil, options)
@compiler.extract_facts_from_request(request)
end
end
describe "when finding nodes" do
it "should look node information up via the Node class with the provided key" do
Facter.stubs(:value).returns("whatever")
node = Puppet::Node.new('node')
compiler = Puppet::Resource::Catalog::Compiler.new
request = Puppet::Indirector::Request.new(:catalog, :find, "me", nil)
compiler.stubs(:compile)
Puppet::Node.indirection.expects(:find).with("me", anything).returns(node)
compiler.find(request)
end
it "should pass the transaction_uuid to the node indirection" do
uuid = '793ff10d-89f8-4527-a645-3302cbc749f3'
node = Puppet::Node.new("thing")
compiler = Puppet::Resource::Catalog::Compiler.new
compiler.stubs(:compile)
request = Puppet::Indirector::Request.new(:catalog, :find, "thing",
nil, :transaction_uuid => uuid)
Puppet::Node.indirection.expects(:find).with(
"thing",
has_entries(:transaction_uuid => uuid)
).returns(node)
compiler.find(request)
end
end
describe "after finding nodes" do
before do
Puppet.expects(:version).returns(1)
Facter.expects(:value).with('fqdn').returns("my.server.com")
Facter.expects(:value).with('ipaddress').returns("my.ip.address")
@compiler = Puppet::Resource::Catalog::Compiler.new
@node = Puppet::Node.new("me")
@request = Puppet::Indirector::Request.new(:catalog, :find, "me", nil)
@compiler.stubs(:compile)
Puppet::Node.indirection.stubs(:find).with("me", anything).returns(@node)
end
it "should add the server's Puppet version to the node's parameters as 'serverversion'" do
@node.expects(:merge).with { |args| args["serverversion"] == "1" }
@compiler.find(@request)
end
it "should add the server's fqdn to the node's parameters as 'servername'" do
@node.expects(:merge).with { |args| args["servername"] == "my.server.com" }
@compiler.find(@request)
end
it "should add the server's IP address to the node's parameters as 'serverip'" do
@node.expects(:merge).with { |args| args["serverip"] == "my.ip.address" }
@compiler.find(@request)
end
end
describe "when filtering resources" do
before :each do
Facter.stubs(:value)
@compiler = Puppet::Resource::Catalog::Compiler.new
@catalog = stub_everything 'catalog'
@catalog.stubs(:respond_to?).with(:filter).returns(true)
end
it "should delegate to the catalog instance filtering" do
@catalog.expects(:filter)
@compiler.filter(@catalog)
end
it "should filter out virtual resources" do
resource = mock 'resource', :virtual? => true
@catalog.stubs(:filter).yields(resource)
@compiler.filter(@catalog)
end
it "should return the same catalog if it doesn't support filtering" do
@catalog.stubs(:respond_to?).with(:filter).returns(false)
@compiler.filter(@catalog).should == @catalog
end
it "should return the filtered catalog" do
catalog = stub 'filtered catalog'
@catalog.stubs(:filter).returns(catalog)
@compiler.filter(@catalog).should == catalog
end
end
end
diff --git a/spec/unit/indirector/catalog/static_compiler_spec.rb b/spec/unit/indirector/catalog/static_compiler_spec.rb
index 31314b0b6..556bc7943 100644
--- a/spec/unit/indirector/catalog/static_compiler_spec.rb
+++ b/spec/unit/indirector/catalog/static_compiler_spec.rb
@@ -1,226 +1,236 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet/indirector/catalog/static_compiler'
require 'puppet/file_serving/metadata'
require 'puppet/file_serving/content'
require 'yaml'
describe Puppet::Resource::Catalog::StaticCompiler do
before :all do
@num_file_resources = 10
end
before :each do
Facter.stubs(:loadfacts)
Facter.stubs(:to_hash).returns({})
Facter.stubs(:value)
end
+ around(:each) do |example|
+ Puppet.override({
+ :current_environment => Puppet::Node::Environment.create(:app, []),
+ },
+ "Ensure we are using an environment other than root"
+ ) do
+ example.run
+ end
+ end
+
let(:request) do
Puppet::Indirector::Request.new(:the_indirection_named_foo,
:find,
"the-node-named-foo",
:environment => "production")
end
describe "#find" do
it "returns a catalog" do
subject.find(request).should be_a_kind_of(Puppet::Resource::Catalog)
end
it "returns nil if there is no compiled catalog" do
subject.expects(:compile).returns(nil)
subject.find(request).should be_nil
end
describe "a catalog with file resources containing source parameters with puppet:// URIs" do
it "filters file resource source URI's to checksums" do
stub_the_compiler
resource_catalog = subject.find(request)
resource_catalog.resources.each do |resource|
next unless resource.type == "File"
resource[:content].should == "{md5}361fadf1c712e812d198c4cab5712a79"
resource[:source].should be_nil
end
end
it "does not modify file resources with non-puppet:// URI's" do
uri = "/this/is/not/a/puppet/uri.txt"
stub_the_compiler(:source => uri)
resource_catalog = subject.find(request)
resource_catalog.resources.each do |resource|
next unless resource.type == "File"
resource[:content].should be_nil
resource[:source].should == uri
end
end
it "copies the owner, group and mode from the fileserer" do
stub_the_compiler
resource_catalog = subject.find(request)
resource_catalog.resources.each do |resource|
next unless resource.type == "File"
resource[:owner].should == 0
resource[:group].should == 0
resource[:mode].should == 420
end
end
end
end
describe "(#15193) when storing content to the filebucket" do
it "explicitly uses the indirection method" do
# We expect the content to be retrieved from the FileServer ...
fake_content = mock('FileServer Content')
fake_content.expects(:content).returns("HELLO WORLD")
# Mock the FileBucket to behave as if the file content does not exist.
# NOTE, we're simulating the first call returning false, indicating the
# file is not present, then all subsequent calls returning true. This
# mocked behavior is intended to replicate the real behavior of the same
# file being stored to the filebucket multiple times.
Puppet::FileBucket::File.indirection.
expects(:find).times(@num_file_resources).
returns(false).then.returns(true)
Puppet::FileServing::Content.indirection.
expects(:find).once.
returns(fake_content)
# Once retrieved from the FileServer, we expect the file to be stored into
# the FileBucket only once. All of the file resources in the fake
# catalog have the same content.
Puppet::FileBucket::File.indirection.expects(:save).once.with do |file|
file.contents == "HELLO WORLD"
end
# Obtain the Static Catalog
subject.stubs(:compile).returns(build_catalog)
resource_catalog = subject.find(request)
# Ensure all of the file resources were filtered
resource_catalog.resources.each do |resource|
next unless resource.type == "File"
resource[:content].should == "{md5}361fadf1c712e812d198c4cab5712a79"
resource[:source].should be_nil
end
end
end
# Spec helper methods
def stub_the_compiler(options = {:stub_methods => [:store_content]})
# Build a resource catalog suitable for specifying the behavior of the
# static compiler.
compiler = mock('indirection terminus compiler')
compiler.stubs(:find).returns(build_catalog(options))
subject.stubs(:compiler).returns(compiler)
# Mock the store content method to prevent copying the contents to the
# file bucket.
(options[:stub_methods] || []).each do |mthd|
subject.stubs(mthd)
end
end
def build_catalog(options = {})
options = options.dup
options[:source] ||= 'puppet:///modules/mymodule/config_file.txt'
options[:request] ||= request
# Build a catalog suitable for the static compiler to operate on
catalog = Puppet::Resource::Catalog.new("#{options[:request].key}", Puppet::Node::Environment.remote(:testing))
# Mock out the fileserver metadata; otherwise converting the catalog to a static
fake_fileserver_metadata = fileserver_metadata(options)
# Stub the call to the FileServer metadata API so we don't have to have
# a real fileserver initialized for testing.
Puppet::FileServing::Metadata.
indirection.stubs(:find).with do |uri, opts|
expect(uri).to eq options[:source].sub('puppet:///','')
expect(opts[:links]).to eq :manage
expect(opts[:environment]).to eq "testing"
end.returns(fake_fileserver_metadata)
# I want a resource that all the file resources require and another
# that requires them.
resources = Array.new
resources << Puppet::Resource.new("notify", "alpha")
resources << Puppet::Resource.new("notify", "omega")
# Create some File resources with source parameters.
1.upto(@num_file_resources) do |idx|
parameters = {
:ensure => 'file',
:source => options[:source],
:require => "Notify[alpha]",
:before => "Notify[omega]"
}
# The static compiler does not operate on a RAL catalog, so we're
# using Puppet::Resource to produce a resource catalog.
agnostic_path = File.expand_path("/tmp/file_#{idx}.txt") # Windows Friendly
rsrc = Puppet::Resource.new("file", agnostic_path, :parameters => parameters)
rsrc.file = 'site.pp'
rsrc.line = idx
resources << rsrc
end
resources.each do |rsrc|
catalog.add_resource(rsrc)
end
# Return the resource catalog
catalog
end
describe "(#22744) when filtering resources" do
let(:catalog) { stub_everything 'catalog' }
it "should delegate to the catalog instance filtering" do
catalog.expects(:filter)
subject.filter(catalog)
end
it "should filter out virtual resources" do
resource = mock 'resource', :virtual? => true
catalog.stubs(:filter).yields(resource)
subject.filter(catalog)
end
it "should return the same catalog if it doesn't support filtering" do
catalog.stubs(:respond_to?).with(:filter)
subject.filter(catalog).should == catalog
end
it "should return the filtered catalog" do
filtered_catalog = stub 'filtered catalog'
catalog.stubs(:filter).returns(filtered_catalog)
subject.filter(catalog).should == filtered_catalog
end
end
def fileserver_metadata(options = {})
yaml = <<EOFILESERVERMETADATA
--- !ruby/object:Puppet::FileServing::Metadata
checksum: "{md5}361fadf1c712e812d198c4cab5712a79"
checksum_type: md5
destination:
expiration: #{Time.now + 1800}
ftype: file
group: 0
links: !ruby/sym manage
mode: 420
owner: 0
path: #{File.expand_path('/etc/puppet/modules/mymodule/files/config_file.txt')}
source: #{options[:source]}
stat_method: !ruby/sym lstat
EOFILESERVERMETADATA
# Return a deserialized metadata object suitable for returning from a stub.
YAML.load(yaml)
end
end
diff --git a/spec/unit/indirector/data_binding/hiera_spec.rb b/spec/unit/indirector/data_binding/hiera_spec.rb
index 7ab3b27fc..12572b174 100644
--- a/spec/unit/indirector/data_binding/hiera_spec.rb
+++ b/spec/unit/indirector/data_binding/hiera_spec.rb
@@ -1,114 +1,19 @@
require 'spec_helper'
require 'puppet/indirector/data_binding/hiera'
describe Puppet::DataBinding::Hiera do
- include PuppetSpec::Files
-
- def write_hiera_config(config_file, datadir)
- File.open(config_file, 'w') do |f|
- f.write("---
- :yaml:
- :datadir: #{datadir}
- :hierarchy: ['global', 'invalid']
- :logger: 'noop'
- :backends: ['yaml']
- ")
- end
- end
-
- def request(key)
- Puppet::Indirector::Request.new(:hiera, :find, key, nil)
- end
-
- before do
- hiera_config_file = tmpfile("hiera.yaml")
- Puppet.settings[:hiera_config] = hiera_config_file
- write_hiera_config(hiera_config_file, my_fixture_dir)
- end
-
- after do
- Puppet::DataBinding::Hiera.instance_variable_set(:@hiera, nil)
- end
-
it "should have documentation" do
Puppet::DataBinding::Hiera.doc.should_not be_nil
end
it "should be registered with the data_binding indirection" do
indirection = Puppet::Indirector::Indirection.instance(:data_binding)
Puppet::DataBinding::Hiera.indirection.should equal(indirection)
end
it "should have its name set to :hiera" do
Puppet::DataBinding::Hiera.name.should == :hiera
end
- it "should be the default data_binding terminus" do
- Puppet.settings[:data_binding_terminus].should == :hiera
- end
-
- it "should raise an error if we don't have the hiera feature" do
- Puppet.features.expects(:hiera?).returns(false)
- lambda { Puppet::DataBinding::Hiera.new }.should raise_error RuntimeError,
- "Hiera terminus not supported without hiera library"
- end
-
- describe "the behavior of the hiera_config method", :if => Puppet.features.hiera? do
- it "should override the logger and set it to puppet" do
- Puppet::DataBinding::Hiera.hiera_config[:logger].should == "puppet"
- end
- context "when the Hiera configuration file does not exist" do
- let(:path) { File.expand_path('/doesnotexist') }
-
- before do
- Puppet.settings[:hiera_config] = path
- end
-
- it "should log a warning" do
- Puppet.expects(:warning).with(
- "Config file #{path} not found, using Hiera defaults")
- Puppet::DataBinding::Hiera.hiera_config
- end
-
- it "should only configure the logger and set it to puppet" do
- Puppet.expects(:warning).with(
- "Config file #{path} not found, using Hiera defaults")
- Puppet::DataBinding::Hiera.hiera_config.should == { :logger => 'puppet' }
- end
- end
- end
-
- describe "the behavior of the find method", :if => Puppet.features.hiera? do
-
- let(:data_binder) { Puppet::DataBinding::Hiera.new }
-
- it "should support looking up an integer" do
- data_binder.find(request("integer")).should == 3000
- end
-
- it "should support looking up a string" do
- data_binder.find(request("string")).should == 'apache'
- end
-
- it "should support looking up an array" do
- data_binder.find(request("array")).should == [
- '0.ntp.puppetlabs.com',
- '1.ntp.puppetlabs.com',
- ]
- end
-
- it "should support looking up a hash" do
- data_binder.find(request("hash")).should == {
- 'user' => 'Hightower',
- 'group' => 'admin',
- 'mode' => '0644'
- }
- end
-
- it "raises a data binding error if hiera cannot parse the yaml data" do
- expect do
- data_binder.find(request('invalid'))
- end.to raise_error(Puppet::DataBinding::LookupError)
- end
- end
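+ # The lookup behaviour removed above is now covered by the shared 'Hiera indirection' examples below.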
+ it_should_behave_like "Hiera indirection", Puppet::DataBinding::Hiera, my_fixture_dir
end
diff --git a/spec/unit/indirector/facts/facter_spec.rb b/spec/unit/indirector/facts/facter_spec.rb
index cf6dad908..4417ede54 100755
--- a/spec/unit/indirector/facts/facter_spec.rb
+++ b/spec/unit/indirector/facts/facter_spec.rb
@@ -1,203 +1,170 @@
#! /usr/bin/env ruby
require 'spec_helper'
-
require 'puppet/indirector/facts/facter'
-module PuppetNodeFactsFacter
+module NodeFactsFacterSpec
describe Puppet::Node::Facts::Facter do
FS = Puppet::FileSystem
it "should be a subclass of the Code terminus" do
Puppet::Node::Facts::Facter.superclass.should equal(Puppet::Indirector::Code)
end
it "should have documentation" do
Puppet::Node::Facts::Facter.doc.should_not be_nil
end
it "should be registered with the configuration store indirection" do
indirection = Puppet::Indirector::Indirection.instance(:facts)
Puppet::Node::Facts::Facter.indirection.should equal(indirection)
end
it "should have its name set to :facter" do
Puppet::Node::Facts::Facter.name.should == :facter
end
- describe "when reloading Facter" do
- before do
- @facter_class = Puppet::Node::Facts::Facter
- Facter.stubs(:clear)
- Facter.stubs(:load)
- Facter.stubs(:loadfacts)
- end
-
- it "should clear Facter" do
- Facter.expects(:clear)
- @facter_class.reload_facter
- end
-
- it "should load all Facter facts" do
- Facter.expects(:loadfacts)
- @facter_class.reload_facter
- end
- end
-end
-
-describe Puppet::Node::Facts::Facter do
before :each do
Puppet::Node::Facts::Facter.stubs(:reload_facter)
@facter = Puppet::Node::Facts::Facter.new
Facter.stubs(:to_hash).returns({})
@name = "me"
@request = stub 'request', :key => @name
@environment = stub 'environment'
@request.stubs(:environment).returns(@environment)
@request.environment.stubs(:modules).returns([])
+ @request.environment.stubs(:modulepath).returns([])
end
- describe Puppet::Node::Facts::Facter, " when finding facts" do
- it "should reset and load facts" do
- clear = sequence 'clear'
- Puppet::Node::Facts::Facter.expects(:reload_facter).in_sequence(clear)
- Puppet::Node::Facts::Facter.expects(:load_fact_plugins).in_sequence(clear)
+ describe 'when finding facts' do
+ it 'should reset facts' do
+ reset = sequence 'reset'
+ Facter.expects(:reset).in_sequence(reset)
+ Puppet::Node::Facts::Facter.expects(:setup_search_paths).in_sequence(reset)
+ @facter.find(@request)
+ end
+
+ it 'should include external facts when feature is present' do
+ reset = sequence 'reset'
+ Puppet.features.stubs(:external_facts?).returns true
+ Facter.expects(:reset).in_sequence(reset)
+ Puppet::Node::Facts::Facter.expects(:setup_external_search_paths).in_sequence(reset)
+ Puppet::Node::Facts::Facter.expects(:setup_search_paths).in_sequence(reset)
@facter.find(@request)
end
- it "should include external facts when feature is present" do
- clear = sequence 'clear'
- Puppet.features.stubs(:external_facts?).returns(:true)
- Puppet::Node::Facts::Facter.expects(:setup_external_facts).in_sequence(clear)
- Puppet::Node::Facts::Facter.expects(:reload_facter).in_sequence(clear)
- Puppet::Node::Facts::Facter.expects(:load_fact_plugins).in_sequence(clear)
+ it 'should not include external facts when feature is not present' do
+ reset = sequence 'reset'
+ Puppet.features.stubs(:external_facts?).returns false
+ Facter.expects(:reset).in_sequence(reset)
+ Puppet::Node::Facts::Facter.expects(:setup_search_paths).in_sequence(reset)
@facter.find(@request)
end
it "should return a Facts instance" do
@facter.find(@request).should be_instance_of(Puppet::Node::Facts)
end
it "should return a Facts instance with the provided key as the name" do
@facter.find(@request).name.should == @name
end
it "should return the Facter facts as the values in the Facts instance" do
Facter.expects(:to_hash).returns("one" => "two")
facts = @facter.find(@request)
facts.values["one"].should == "two"
end
it "should add local facts" do
facts = Puppet::Node::Facts.new("foo")
Puppet::Node::Facts.expects(:new).returns facts
facts.expects(:add_local_facts)
@facter.find(@request)
end
it "should convert facts into strings when stringify_facts is true" do
Puppet[:stringify_facts] = true
facts = Puppet::Node::Facts.new("foo")
Puppet::Node::Facts.expects(:new).returns facts
facts.expects(:stringify)
@facter.find(@request)
end
it "should sanitize facts when stringify_facts is false" do
Puppet[:stringify_facts] = false
facts = Puppet::Node::Facts.new("foo")
Puppet::Node::Facts.expects(:new).returns facts
facts.expects(:sanitize)
@facter.find(@request)
end
end
- describe Puppet::Node::Facts::Facter, " when saving facts" do
-
- it "should fail" do
- proc { @facter.save(@facts) }.should raise_error(Puppet::DevError)
- end
- end
-
- describe Puppet::Node::Facts::Facter, " when destroying facts" do
-
- it "should fail" do
- proc { @facter.destroy(@facts) }.should raise_error(Puppet::DevError)
- end
+ it 'should fail when saving facts' do
+ proc { @facter.save(@facts) }.should raise_error(Puppet::DevError)
end
- it "should skip files when asked to load a directory" do
- FileTest.expects(:directory?).with("myfile").returns false
-
- Puppet::Node::Facts::Facter.load_facts_in_dir("myfile")
+ it 'should fail when destroying facts' do
+ proc { @facter.destroy(@facts) }.should raise_error(Puppet::DevError)
end
- it "should load each ruby file when asked to load a directory" do
- FileTest.expects(:directory?).with("mydir").returns true
- Dir.expects(:chdir).with("mydir").yields
+ describe 'when setting up search paths' do
+ let(:factpath1) { File.expand_path 'one' }
+ let(:factpath2) { File.expand_path 'two' }
+ let(:factpath) { [factpath1, factpath2].join(File::PATH_SEPARATOR) }
+ let(:modulepath) { File.expand_path 'module/foo' }
+ let(:modulelibfacter) { File.expand_path 'module/foo/lib/facter' }
+ let(:modulepluginsfacter) { File.expand_path 'module/foo/plugins/facter' }
- Dir.expects(:glob).with("*.rb").returns %w{a.rb b.rb}
+ before :each do
+ FileTest.expects(:directory?).with(factpath1).returns true
+ FileTest.expects(:directory?).with(factpath2).returns true
+ @request.environment.stubs(:modulepath).returns [modulepath]
+ Dir.expects(:glob).with("#{modulepath}/*/lib/facter").returns [modulelibfacter]
+ Dir.expects(:glob).with("#{modulepath}/*/plugins/facter").returns [modulepluginsfacter]
- Puppet::Node::Facts::Facter.expects(:load).with File.join('.', 'a.rb')
- Puppet::Node::Facts::Facter.expects(:load).with File.join('.', 'b.rb')
-
- Puppet::Node::Facts::Facter.load_facts_in_dir("mydir")
- end
+ Puppet[:factpath] = factpath
+ end
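+ # setup_search_paths should hand module lib/facter and plugins/facter directories (when they
+ # exist) to Facter.search ahead of the configured factpath entries, as the examples below expect.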
- it "should include pluginfactdest when loading external facts",
- :if => (Puppet.features.external_facts? and not Puppet.features.microsoft_windows?) do
- Puppet[:pluginfactdest] = "/plugin/dest"
- @facter.find(@request)
- Facter.search_external_path.include?("/plugin/dest")
- end
+ it 'should skip files' do
+ FileTest.expects(:directory?).with(modulelibfacter).returns false
+ FileTest.expects(:directory?).with(modulepluginsfacter).returns false
+ Facter.expects(:search).with(factpath1, factpath2)
+ Puppet::Node::Facts::Facter.setup_search_paths @request
+ end
- it "should include pluginfactdest when loading external facts",
- :if => (Puppet.features.external_facts? and Puppet.features.microsoft_windows?) do
- Puppet[:pluginfactdest] = "/plugin/dest"
- @facter.find(@request)
- Facter.search_external_path.include?("C:/plugin/dest")
+ it 'should add directories' do
+ FileTest.expects(:directory?).with(modulelibfacter).returns true
+ FileTest.expects(:directory?).with(modulepluginsfacter).returns true
+ Facter.expects(:search).with(modulelibfacter, modulepluginsfacter, factpath1, factpath2)
+ Puppet::Node::Facts::Facter.setup_search_paths @request
+ end
end
- describe "when loading fact plugins from disk" do
- let(:one) { File.expand_path("one") }
- let(:two) { File.expand_path("two") }
-
- it "should load each directory in the Fact path" do
- Puppet[:factpath] = [one, two].join(File::PATH_SEPARATOR)
-
- Puppet::Node::Facts::Facter.expects(:load_facts_in_dir).with(one)
- Puppet::Node::Facts::Facter.expects(:load_facts_in_dir).with(two)
+ describe 'when setting up external search paths', :if => Puppet.features.external_facts? do
+ let(:pluginfactdest) { File.expand_path 'plugin/dest' }
+ let(:modulepath) { File.expand_path 'module/foo' }
+ let(:modulefactsd) { File.expand_path 'module/foo/facts.d' }
- Puppet::Node::Facts::Facter.load_fact_plugins
+ before :each do
+ FileTest.expects(:directory?).with(pluginfactdest).returns true
+ mod = Puppet::Module.new('foo', modulepath, @request.environment)
+ @request.environment.stubs(:modules).returns [mod]
+ Puppet[:pluginfactdest] = pluginfactdest
end
- it "should load all facts from the modules" do
- Puppet::Node::Facts::Facter.stubs(:load_facts_in_dir)
-
- Dir.stubs(:glob).returns []
- Dir.expects(:glob).with("#{one}/*/lib/facter").returns %w{oneA oneB}
- Dir.expects(:glob).with("#{two}/*/lib/facter").returns %w{twoA twoB}
-
- Puppet::Node::Facts::Facter.expects(:load_facts_in_dir).with("oneA")
- Puppet::Node::Facts::Facter.expects(:load_facts_in_dir).with("oneB")
- Puppet::Node::Facts::Facter.expects(:load_facts_in_dir).with("twoA")
- Puppet::Node::Facts::Facter.expects(:load_facts_in_dir).with("twoB")
-
- FS.overlay(FS::MemoryFile.a_directory(one), FS::MemoryFile.a_directory(two)) do
- Puppet.override(:current_environment => Puppet::Node::Environment.create(:testing, [one, two], "")) do
- Puppet::Node::Facts::Facter.load_fact_plugins
- end
- end
+ it 'should skip files' do
+ File.expects(:directory?).with(modulefactsd).returns false
+ Facter.expects(:search_external).with [pluginfactdest]
+ Puppet::Node::Facts::Facter.setup_external_search_paths @request
end
- it "should include module plugin facts when present", :if => Puppet.features.external_facts? do
- mod = Puppet::Module.new("mymodule", "#{one}/mymodule", @request.environment)
- @request.environment.stubs(:modules).returns([mod])
- @facter.find(@request)
- Facter.search_external_path.include?("#{one}/mymodule/facts.d")
+ it 'should add directories' do
+ File.expects(:directory?).with(modulefactsd).returns true
+ Facter.expects(:search_external).with [modulefactsd, pluginfactdest]
+ Puppet::Node::Facts::Facter.setup_external_search_paths @request
end
end
end
end
diff --git a/spec/unit/indirector/hiera_spec.rb b/spec/unit/indirector/hiera_spec.rb
new file mode 100644
index 000000000..f74fd29aa
--- /dev/null
+++ b/spec/unit/indirector/hiera_spec.rb
@@ -0,0 +1,17 @@
+require 'spec_helper'
+require 'puppet/data_binding'
+require 'puppet/indirector/hiera'
+require 'hiera/backend'
+
+describe Puppet::Indirector::Hiera do
+
+ module Testing
+ module DataBinding
+ class Hiera < Puppet::Indirector::Hiera
+ end
+ end
+ end
+
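+ # The minimal concrete subclass defined above gives the shared 'Hiera indirection'
+ # examples a real terminus class to run against.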
+ it_should_behave_like "Hiera indirection", Testing::DataBinding::Hiera, my_fixture_dir
+end
+
diff --git a/spec/unit/indirector/request_spec.rb b/spec/unit/indirector/request_spec.rb
index 3c11664d3..e3edfd670 100755
--- a/spec/unit/indirector/request_spec.rb
+++ b/spec/unit/indirector/request_spec.rb
@@ -1,605 +1,603 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'matchers/json'
require 'puppet/indirector/request'
require 'puppet/util/pson'
describe Puppet::Indirector::Request do
include JSONMatchers
describe "when registering the document type" do
it "should register its document type with JSON" do
PSON.registered_document_types["IndirectorRequest"].should equal(Puppet::Indirector::Request)
end
end
describe "when initializing" do
it "should always convert the indirection name to a symbol" do
Puppet::Indirector::Request.new("ind", :method, "mykey", nil).indirection_name.should == :ind
end
it "should use provided value as the key if it is a string" do
Puppet::Indirector::Request.new(:ind, :method, "mykey", nil).key.should == "mykey"
end
it "should use provided value as the key if it is a symbol" do
Puppet::Indirector::Request.new(:ind, :method, :mykey, nil).key.should == :mykey
end
it "should use the name of the provided instance as its key if an instance is provided as the key instead of a string" do
instance = mock 'instance', :name => "mykey"
request = Puppet::Indirector::Request.new(:ind, :method, nil, instance)
request.key.should == "mykey"
request.instance.should equal(instance)
end
it "should support options specified as a hash" do
expect { Puppet::Indirector::Request.new(:ind, :method, :key, nil, :one => :two) }.to_not raise_error
end
it "should support nil options" do
expect { Puppet::Indirector::Request.new(:ind, :method, :key, nil, nil) }.to_not raise_error
end
it "should support unspecified options" do
expect { Puppet::Indirector::Request.new(:ind, :method, :key, nil) }.to_not raise_error
end
it "should use an empty options hash if nil was provided" do
Puppet::Indirector::Request.new(:ind, :method, :key, nil, nil).options.should == {}
end
it "should default to a nil node" do
Puppet::Indirector::Request.new(:ind, :method, :key, nil).node.should be_nil
end
it "should set its node attribute if provided in the options" do
Puppet::Indirector::Request.new(:ind, :method, :key, nil, :node => "foo.com").node.should == "foo.com"
end
it "should default to a nil ip" do
Puppet::Indirector::Request.new(:ind, :method, :key, nil).ip.should be_nil
end
it "should set its ip attribute if provided in the options" do
Puppet::Indirector::Request.new(:ind, :method, :key, nil, :ip => "192.168.0.1").ip.should == "192.168.0.1"
end
it "should default to being unauthenticated" do
Puppet::Indirector::Request.new(:ind, :method, :key, nil).should_not be_authenticated
end
it "should set be marked authenticated if configured in the options" do
Puppet::Indirector::Request.new(:ind, :method, :key, nil, :authenticated => "eh").should be_authenticated
end
it "should keep its options as a hash even if a node is specified" do
Puppet::Indirector::Request.new(:ind, :method, :key, nil, :node => "eh").options.should be_instance_of(Hash)
end
it "should keep its options as a hash even if another option is specified" do
Puppet::Indirector::Request.new(:ind, :method, :key, nil, :foo => "bar").options.should be_instance_of(Hash)
end
it "should treat options other than :ip, :node, and :authenticated as options rather than attributes" do
Puppet::Indirector::Request.new(:ind, :method, :key, nil, :server => "bar").options[:server].should == "bar"
end
it "should normalize options to use symbols as keys" do
Puppet::Indirector::Request.new(:ind, :method, :key, nil, "foo" => "bar").options[:foo].should == "bar"
end
describe "and the request key is a URI" do
let(:file) { File.expand_path("/my/file with spaces") }
describe "and the URI is a 'file' URI" do
before do
@request = Puppet::Indirector::Request.new(:ind, :method, "#{URI.unescape(Puppet::Util.path_to_uri(file).to_s)}", nil)
end
it "should set the request key to the unescaped full file path" do
@request.key.should == file
end
it "should not set the protocol" do
@request.protocol.should be_nil
end
it "should not set the port" do
@request.port.should be_nil
end
it "should not set the server" do
@request.server.should be_nil
end
end
it "should set the protocol to the URI scheme" do
Puppet::Indirector::Request.new(:ind, :method, "http://host/stuff", nil).protocol.should == "http"
end
it "should set the server if a server is provided" do
Puppet::Indirector::Request.new(:ind, :method, "http://host/stuff", nil).server.should == "host"
end
it "should set the server and port if both are provided" do
Puppet::Indirector::Request.new(:ind, :method, "http://host:543/stuff", nil).port.should == 543
end
it "should default to the masterport if the URI scheme is 'puppet'" do
Puppet[:masterport] = "321"
Puppet::Indirector::Request.new(:ind, :method, "puppet://host/stuff", nil).port.should == 321
end
it "should use the provided port if the URI scheme is not 'puppet'" do
Puppet::Indirector::Request.new(:ind, :method, "http://host/stuff", nil).port.should == 80
end
it "should set the request key to the unescaped key part path from the URI" do
Puppet::Indirector::Request.new(:ind, :method, "http://host/environment/terminus/stuff with spaces", nil).key.should == "stuff with spaces"
end
it "should set the :uri attribute to the full URI" do
Puppet::Indirector::Request.new(:ind, :method, "http:///stu ff", nil).uri.should == 'http:///stu ff'
end
it "should not parse relative URI" do
Puppet::Indirector::Request.new(:ind, :method, "foo/bar", nil).uri.should be_nil
end
it "should not parse opaque URI" do
Puppet::Indirector::Request.new(:ind, :method, "mailto:joe", nil).uri.should be_nil
end
end
it "should allow indication that it should not read a cached instance" do
Puppet::Indirector::Request.new(:ind, :method, :key, nil, :ignore_cache => true).should be_ignore_cache
end
it "should default to not ignoring the cache" do
Puppet::Indirector::Request.new(:ind, :method, :key, nil).should_not be_ignore_cache
end
it "should allow indication that it should not not read an instance from the terminus" do
Puppet::Indirector::Request.new(:ind, :method, :key, nil, :ignore_terminus => true).should be_ignore_terminus
end
it "should default to not ignoring the terminus" do
Puppet::Indirector::Request.new(:ind, :method, :key, nil).should_not be_ignore_terminus
end
end
it "should look use the Indirection class to return the appropriate indirection" do
ind = mock 'indirection'
Puppet::Indirector::Indirection.expects(:instance).with(:myind).returns ind
request = Puppet::Indirector::Request.new(:myind, :method, :key, nil)
request.indirection.should equal(ind)
end
it "should use its indirection to look up the appropriate model" do
ind = mock 'indirection'
Puppet::Indirector::Indirection.expects(:instance).with(:myind).returns ind
request = Puppet::Indirector::Request.new(:myind, :method, :key, nil)
ind.expects(:model).returns "mymodel"
request.model.should == "mymodel"
end
it "should fail intelligently when asked to find a model but the indirection cannot be found" do
Puppet::Indirector::Indirection.expects(:instance).with(:myind).returns nil
request = Puppet::Indirector::Request.new(:myind, :method, :key, nil)
expect { request.model }.to raise_error(ArgumentError)
end
it "should have a method for determining if the request is plural or singular" do
Puppet::Indirector::Request.new(:myind, :method, :key, nil).should respond_to(:plural?)
end
it "should be considered plural if the method is 'search'" do
Puppet::Indirector::Request.new(:myind, :search, :key, nil).should be_plural
end
it "should not be considered plural if the method is not 'search'" do
Puppet::Indirector::Request.new(:myind, :find, :key, nil).should_not be_plural
end
it "should use its uri, if it has one, as its string representation" do
Puppet::Indirector::Request.new(:myind, :find, "foo://bar/baz", nil).to_s.should == "foo://bar/baz"
end
it "should use its indirection name and key, if it has no uri, as its string representation" do
Puppet::Indirector::Request.new(:myind, :find, "key", nil) == "/myind/key"
end
it "should be able to return the URI-escaped key" do
Puppet::Indirector::Request.new(:myind, :find, "my key", nil).escaped_key.should == URI.escape("my key")
end
it "should set its environment to an environment instance when a string is specified as its environment" do
env = Puppet::Node::Environment.create(:foo, [])
Puppet.override(:environments => Puppet::Environments::Static.new(env)) do
Puppet::Indirector::Request.new(:myind, :find, "my key", nil, :environment => "foo").environment.should == env
end
end
it "should use any passed in environment instances as its environment" do
env = Puppet::Node::Environment.create(:foo, [])
Puppet::Indirector::Request.new(:myind, :find, "my key", nil, :environment => env).environment.should equal(env)
end
- it "should use the configured environment when none is provided" do
+ it "should use the current environment when none is provided" do
configured = Puppet::Node::Environment.create(:foo, [])
Puppet[:environment] = "foo"
- Puppet.override(:environments => Puppet::Environments::Static.new(configured)) do
- Puppet::Indirector::Request.new(:myind, :find, "my key", nil).environment.should == configured
- end
+ expect(Puppet::Indirector::Request.new(:myind, :find, "my key", nil).environment).to eq(Puppet.lookup(:current_environment))
end
it "should support converting its options to a hash" do
Puppet::Indirector::Request.new(:myind, :find, "my key", nil ).should respond_to(:to_hash)
end
it "should include all of its attributes when its options are converted to a hash" do
Puppet::Indirector::Request.new(:myind, :find, "my key", nil, :node => 'foo').to_hash[:node].should == 'foo'
end
describe "when building a query string from its options" do
def a_request_with_options(options)
Puppet::Indirector::Request.new(:myind, :find, "my key", nil, options)
end
def the_parsed_query_string_from(request)
CGI.parse(request.query_string.sub(/^\?/, ''))
end
it "should return an empty query string if there are no options" do
request = a_request_with_options(nil)
request.query_string.should == ""
end
it "should return an empty query string if the options are empty" do
request = a_request_with_options({})
request.query_string.should == ""
end
it "should prefix the query string with '?'" do
request = a_request_with_options(:one => "two")
request.query_string.should =~ /^\?/
end
it "should include all options in the query string, separated by '&'" do
request = a_request_with_options(:one => "two", :three => "four")
the_parsed_query_string_from(request).should == {
"one" => ["two"],
"three" => ["four"]
}
end
it "should ignore nil options" do
request = a_request_with_options(:one => "two", :three => nil)
the_parsed_query_string_from(request).should == {
"one" => ["two"]
}
end
it "should convert 'true' option values into strings" do
request = a_request_with_options(:one => true)
the_parsed_query_string_from(request).should == {
"one" => ["true"]
}
end
it "should convert 'false' option values into strings" do
request = a_request_with_options(:one => false)
the_parsed_query_string_from(request).should == {
"one" => ["false"]
}
end
it "should convert to a string all option values that are integers" do
request = a_request_with_options(:one => 50)
the_parsed_query_string_from(request).should == {
"one" => ["50"]
}
end
it "should convert to a string all option values that are floating point numbers" do
request = a_request_with_options(:one => 1.2)
the_parsed_query_string_from(request).should == {
"one" => ["1.2"]
}
end
it "should CGI-escape all option values that are strings" do
request = a_request_with_options(:one => "one two")
the_parsed_query_string_from(request).should == {
"one" => ["one two"]
}
end
it "should convert an array of values into multiple entries for the same key" do
request = a_request_with_options(:one => %w{one two})
the_parsed_query_string_from(request).should == {
"one" => ["one", "two"]
}
end
it "should convert an array of values into a single yaml entry when in legacy mode" do
Puppet[:legacy_query_parameter_serialization] = true
request = a_request_with_options(:one => %w{one two})
the_parsed_query_string_from(request).should == {
"one" => ["--- \n - one\n - two"]
}
end
it "should stringify simple data types inside an array" do
request = a_request_with_options(:one => ['one', nil])
the_parsed_query_string_from(request).should == {
"one" => ["one"]
}
end
it "should error if an array contains another array" do
request = a_request_with_options(:one => ['one', ["not allowed"]])
expect { request.query_string }.to raise_error(ArgumentError)
end
it "should error if an array contains illegal data" do
request = a_request_with_options(:one => ['one', { :not => "allowed" }])
expect { request.query_string }.to raise_error(ArgumentError)
end
it "should convert to a string and CGI-escape all option values that are symbols" do
request = a_request_with_options(:one => :"sym bol")
the_parsed_query_string_from(request).should == {
"one" => ["sym bol"]
}
end
it "should fail if options other than booleans or strings are provided" do
request = a_request_with_options(:one => { :one => :two })
expect { request.query_string }.to raise_error(ArgumentError)
end
end
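# A rough sketch (assumed, not Puppet's own implementation) of the
# query-string expansion exercised above: nil values are dropped, scalars are
# stringified and CGI-escaped, and array values become repeated key=value
# pairs. Nested structures, which the examples expect to raise, are not
# handled here.
require 'cgi'

def sketch_query_string(options)
  return "" if options.nil? || options.empty?
  pairs = options.flat_map do |key, value|
    next [] if value.nil?
    Array(value).map { |element| "#{key}=#{CGI.escape(element.to_s)}" }
  end
  "?" + pairs.join("&")
end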
describe "when converting to json" do
before do
@request = Puppet::Indirector::Request.new(:facts, :find, "foo", nil)
end
it "should produce a hash with the document_type set to 'request'" do
@request.should set_json_document_type_to("IndirectorRequest")
end
it "should set the 'key'" do
@request.should set_json_attribute("key").to("foo")
end
it "should include an attribute for its indirection name" do
@request.should set_json_attribute("type").to("facts")
end
it "should include a 'method' attribute set to its method" do
@request.should set_json_attribute("method").to("find")
end
it "should add all attributes under the 'attributes' attribute" do
@request.ip = "127.0.0.1"
@request.should set_json_attribute("attributes", "ip").to("127.0.0.1")
end
it "should add all options under the 'attributes' attribute" do
@request.options["opt"] = "value"
PSON.parse(@request.to_pson)["data"]['attributes']['opt'].should == "value"
end
it "should include the instance if provided" do
facts = Puppet::Node::Facts.new("foo")
@request.instance = facts
PSON.parse(@request.to_pson)["data"]['instance'].should be_instance_of(Hash)
end
end
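# Sketched shape (assumed from the expectations above, not taken from Puppet's
# serializer) of the document these examples check for: an envelope with a
# document_type and a data hash carrying the key, type, method, and attributes.
sketched_request_document = {
  "document_type" => "IndirectorRequest",
  "data" => {
    "key"        => "foo",
    "type"       => "facts",
    "method"     => "find",
    "attributes" => { "ip" => "127.0.0.1", "opt" => "value" }
  }
}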
describe "when converting from json" do
before do
@request = Puppet::Indirector::Request.new(:facts, :find, "foo", nil)
@klass = Puppet::Indirector::Request
@format = Puppet::Network::FormatHandler.format('pson')
end
def from_json(json)
@format.intern(Puppet::Indirector::Request, json)
end
it "should set the 'key'" do
from_json(@request.to_pson).key.should == "foo"
end
it "should fail if no key is provided" do
json = PSON.parse(@request.to_pson)
json['data'].delete("key")
expect { from_json(json.to_pson) }.to raise_error(ArgumentError)
end
it "should set its indirector name" do
from_json(@request.to_pson).indirection_name.should == :facts
end
it "should fail if no type is provided" do
json = PSON.parse(@request.to_pson)
json['data'].delete("type")
expect { from_json(json.to_pson) }.to raise_error(ArgumentError)
end
it "should set its method" do
from_json(@request.to_pson).method.should == "find"
end
it "should fail if no method is provided" do
json = PSON.parse(@request.to_pson)
json['data'].delete("method")
expect { from_json(json.to_pson) }.to raise_error(ArgumentError)
end
it "should initialize with all attributes and options" do
@request.ip = "127.0.0.1"
@request.options["opt"] = "value"
result = from_json(@request.to_pson)
result.options[:opt].should == "value"
result.ip.should == "127.0.0.1"
end
it "should set its instance as an instance if one is provided" do
facts = Puppet::Node::Facts.new("foo")
@request.instance = facts
result = from_json(@request.to_pson)
result.instance.should be_instance_of(Puppet::Node::Facts)
end
end
context '#do_request' do
before :each do
@request = Puppet::Indirector::Request.new(:myind, :find, "my key", nil)
end
context 'when not using SRV records' do
before :each do
Puppet.settings[:use_srv_records] = false
end
it "yields the request with the default server and port when no server or port were specified on the original request" do
count = 0
rval = @request.do_request(:puppet, 'puppet.example.com', '90210') do |got|
count += 1
got.server.should == 'puppet.example.com'
got.port.should == '90210'
'Block return value'
end
count.should == 1
rval.should == 'Block return value'
end
end
context 'when using SRV records' do
before :each do
Puppet.settings[:use_srv_records] = true
Puppet.settings[:srv_domain] = 'example.com'
end
it "yields the request with the original server and port unmodified" do
@request.server = 'puppet.example.com'
@request.port = '90210'
count = 0
rval = @request.do_request do |got|
count += 1
got.server.should == 'puppet.example.com'
got.port.should == '90210'
'Block return value'
end
count.should == 1
rval.should == 'Block return value'
end
context "when SRV returns servers" do
before :each do
@dns_mock = mock('dns')
Resolv::DNS.expects(:new).returns(@dns_mock)
@port = 7205
@host = '_x-puppet._tcp.example.com'
@srv_records = [Resolv::DNS::Resource::IN::SRV.new(0, 0, @port, @host)]
@dns_mock.expects(:getresources).
with("_x-puppet._tcp.#{Puppet.settings[:srv_domain]}", Resolv::DNS::Resource::IN::SRV).
returns(@srv_records)
end
it "yields a request using the server and port from the SRV record" do
count = 0
rval = @request.do_request do |got|
count += 1
got.server.should == '_x-puppet._tcp.example.com'
got.port.should == 7205
@block_return
end
count.should == 1
rval.should == @block_return
end
it "should fall back to the default server when the block raises a SystemCallError" do
count = 0
second_pass = nil
rval = @request.do_request(:puppet, 'puppet', 8140) do |got|
count += 1
if got.server == '_x-puppet._tcp.example.com' then
raise SystemCallError, "example failure"
else
second_pass = got
end
@block_return
end
second_pass.server.should == 'puppet'
second_pass.port.should == 8140
count.should == 2
rval.should == @block_return
end
end
end
end
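# An illustrative sketch of the fall-back behaviour tested above; the method
# name and argument shapes are assumptions, not Puppet's API. Each
# SRV-discovered server/port pair is tried in turn, and a SystemCallError from
# the block causes a final retry against the configured defaults.
def sketch_do_request(srv_candidates, default_server, default_port)
  srv_candidates.each do |server, port|
    begin
      return yield(server, port)
    rescue SystemCallError
      next
    end
  end
  yield(default_server, default_port)
end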
describe "#remote?" do
def request(options = {})
Puppet::Indirector::Request.new('node', 'find', 'localhost', nil, options)
end
it "should not be unless node or ip is set" do
request.should_not be_remote
end
it "should be remote if node is set" do
request(:node => 'example.com').should be_remote
end
it "should be remote if ip is set" do
request(:ip => '127.0.0.1').should be_remote
end
it "should be remote if node and ip are set" do
request(:node => 'example.com', :ip => '127.0.0.1').should be_remote
end
end
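# A one-line sketch of the predicate exercised above (assumed shape, not the
# actual implementation): a request counts as remote when either a node name
# or an ip address was supplied with it.
def sketch_remote?(node, ip)
  !node.nil? || !ip.nil?
end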
end
diff --git a/spec/unit/indirector/resource/ral_spec.rb b/spec/unit/indirector/resource/ral_spec.rb
index 8b42f6881..36ba5f1c2 100755
--- a/spec/unit/indirector/resource/ral_spec.rb
+++ b/spec/unit/indirector/resource/ral_spec.rb
@@ -1,142 +1,147 @@
#! /usr/bin/env ruby
require 'spec_helper'
describe "Puppet::Resource::Ral" do
it "is deprecated on the network, but still allows requests" do
Puppet.expects(:deprecation_warning)
expect(Puppet::Resource::Ral.new.allow_remote_requests?).to eq(true)
end
describe "find" do
before do
@request = stub 'request', :key => "user/root"
end
it "should find an existing instance" do
my_resource = stub "my user resource"
wrong_instance = stub "wrong user", :name => "bob"
my_instance = stub "my user", :name => "root", :to_resource => my_resource
require 'puppet/type/user'
Puppet::Type::User.expects(:instances).returns([ wrong_instance, my_instance, wrong_instance ])
Puppet::Resource::Ral.new.find(@request).should == my_resource
end
+ it "should produce Puppet::Error instead of ArgumentError" do
+ @bad_request = stub 'thiswillcauseanerror', :key => "thiswill/causeanerror"
+ expect { Puppet::Resource::Ral.new.find(@bad_request) }.to raise_error(Puppet::Error)
+ end
+
it "if there is no instance, it should create one" do
wrong_instance = stub "wrong user", :name => "bob"
root = mock "Root User"
root_resource = mock "Root Resource"
require 'puppet/type/user'
Puppet::Type::User.expects(:instances).returns([ wrong_instance, wrong_instance ])
Puppet::Type::User.expects(:new).with(has_entry(:name => "root")).returns(root)
root.expects(:to_resource).returns(root_resource)
result = Puppet::Resource::Ral.new.find(@request)
result.should == root_resource
end
end
describe "search" do
before do
@request = stub 'request', :key => "user/", :options => {}
end
it "should convert ral resources into regular resources" do
my_resource = stub "my user resource"
my_instance = stub "my user", :name => "root", :to_resource => my_resource
require 'puppet/type/user'
Puppet::Type::User.expects(:instances).returns([ my_instance ])
Puppet::Resource::Ral.new.search(@request).should == [my_resource]
end
it "should filter results by name if there's a name in the key" do
my_resource = stub "my user resource"
my_resource.stubs(:to_resource).returns(my_resource)
my_resource.stubs(:[]).with(:name).returns("root")
wrong_resource = stub "wrong resource"
wrong_resource.stubs(:to_resource).returns(wrong_resource)
wrong_resource.stubs(:[]).with(:name).returns("bad")
my_instance = stub "my user", :to_resource => my_resource
wrong_instance = stub "wrong user", :to_resource => wrong_resource
@request = stub 'request', :key => "user/root", :options => {}
require 'puppet/type/user'
Puppet::Type::User.expects(:instances).returns([ my_instance, wrong_instance ])
Puppet::Resource::Ral.new.search(@request).should == [my_resource]
end
it "should filter results by query parameters" do
wrong_resource = stub "my user resource"
wrong_resource.stubs(:to_resource).returns(wrong_resource)
wrong_resource.stubs(:[]).with(:name).returns("root")
my_resource = stub "wrong resource"
my_resource.stubs(:to_resource).returns(my_resource)
my_resource.stubs(:[]).with(:name).returns("bob")
my_instance = stub "my user", :to_resource => my_resource
wrong_instance = stub "wrong user", :to_resource => wrong_resource
@request = stub 'request', :key => "user/", :options => {:name => "bob"}
require 'puppet/type/user'
Puppet::Type::User.expects(:instances).returns([ my_instance, wrong_instance ])
Puppet::Resource::Ral.new.search(@request).should == [my_resource]
end
it "should return sorted results" do
a_resource = stub "alice resource"
a_resource.stubs(:to_resource).returns(a_resource)
a_resource.stubs(:title).returns("alice")
b_resource = stub "bob resource"
b_resource.stubs(:to_resource).returns(b_resource)
b_resource.stubs(:title).returns("bob")
a_instance = stub "alice user", :to_resource => a_resource
b_instance = stub "bob user", :to_resource => b_resource
@request = stub 'request', :key => "user/", :options => {}
require 'puppet/type/user'
Puppet::Type::User.expects(:instances).returns([ b_instance, a_instance ])
Puppet::Resource::Ral.new.search(@request).should == [a_resource, b_resource]
end
end
describe "save" do
before do
@rebuilt_res = stub 'rebuilt instance'
@ral_res = stub 'ral resource', :to_resource => @rebuilt_res
@instance = stub 'instance', :to_ral => @ral_res
@request = stub 'request', :key => "user/", :instance => @instance
@catalog = stub 'catalog'
@report = stub 'report'
@transaction = stub 'transaction', :report => @report
Puppet::Resource::Catalog.stubs(:new).returns(@catalog)
@catalog.stubs(:apply).returns(@transaction)
@catalog.stubs(:add_resource)
end
it "should apply a new catalog with a ral object in it" do
Puppet::Resource::Catalog.expects(:new).returns(@catalog)
@catalog.expects(:add_resource).with(@ral_res)
@catalog.expects(:apply).returns(@transaction)
Puppet::Resource::Ral.new.save(@request).should
end
it "should return a regular resource that used to be the ral resource" do
Puppet::Resource::Ral.new.save(@request).should == [@rebuilt_res, @report]
end
end
end
diff --git a/spec/unit/indirector/resource_type/parser_spec.rb b/spec/unit/indirector/resource_type/parser_spec.rb
index cf81428e4..fdbab84e7 100755
--- a/spec/unit/indirector/resource_type/parser_spec.rb
+++ b/spec/unit/indirector/resource_type/parser_spec.rb
@@ -1,249 +1,254 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet/indirector/resource_type/parser'
require 'puppet_spec/files'
describe Puppet::Indirector::ResourceType::Parser do
include PuppetSpec::Files
+ let(:environmentpath) { tmpdir("envs") }
+ let(:modulepath) { "#{environmentpath}/test/modules" }
+ let(:environment) { Puppet::Node::Environment.create(:test, [modulepath]) }
before do
@terminus = Puppet::Indirector::ResourceType::Parser.new
@request = Puppet::Indirector::Request.new(:resource_type, :find, "foo", nil)
+ @request.environment = environment
@krt = @request.environment.known_resource_types
end
it "should be registered with the resource_type indirection" do
Puppet::Indirector::Terminus.terminus_class(:resource_type, :parser).should equal(Puppet::Indirector::ResourceType::Parser)
end
describe "when finding" do
it "should return any found type from the request's environment" do
type = Puppet::Resource::Type.new(:hostclass, "foo")
@request.environment.known_resource_types.add(type)
@terminus.find(@request).should == type
end
it "should attempt to load the type if none is found in memory" do
- dir = tmpdir("find_a_type")
- FileUtils.mkdir_p(dir)
- Puppet[:modulepath] = dir
+ FileUtils.mkdir_p(modulepath)
# Make a new request, since we've reset the env
- @request = Puppet::Indirector::Request.new(:resource_type, :find, "foo::bar", nil)
+ request = Puppet::Indirector::Request.new(:resource_type, :find, "foo::bar", nil)
+ request.environment = environment
- manifest_path = File.join(dir, "foo", "manifests")
+ manifest_path = File.join(modulepath, "foo", "manifests")
FileUtils.mkdir_p(manifest_path)
File.open(File.join(manifest_path, "bar.pp"), "w") { |f| f.puts "class foo::bar {}" }
- result = @terminus.find(@request)
+ result = @terminus.find(request)
result.should be_instance_of(Puppet::Resource::Type)
result.name.should == "foo::bar"
end
it "should return nil if no type can be found" do
@terminus.find(@request).should be_nil
end
it "should prefer definitions to nodes" do
type = @krt.add(Puppet::Resource::Type.new(:hostclass, "foo"))
node = @krt.add(Puppet::Resource::Type.new(:node, "foo"))
@terminus.find(@request).should == type
end
end
describe "when searching" do
describe "when the search key is a wildcard" do
before do
@request.key = "*"
end
it "should use the request's environment's list of known resource types" do
@request.environment.known_resource_types.expects(:hostclasses).returns({})
@terminus.search(@request)
end
it "should return all results if '*' is provided as the search string" do
type = @krt.add(Puppet::Resource::Type.new(:hostclass, "foo"))
node = @krt.add(Puppet::Resource::Type.new(:node, "bar"))
define = @krt.add(Puppet::Resource::Type.new(:definition, "baz"))
result = @terminus.search(@request)
result.should be_include(type)
result.should be_include(node)
result.should be_include(define)
end
it "should return all known types" do
type = @krt.add(Puppet::Resource::Type.new(:hostclass, "foo"))
node = @krt.add(Puppet::Resource::Type.new(:node, "bar"))
define = @krt.add(Puppet::Resource::Type.new(:definition, "baz"))
result = @terminus.search(@request)
result.should be_include(type)
result.should be_include(node)
result.should be_include(define)
end
it "should not return the 'main' class" do
main = @krt.add(Puppet::Resource::Type.new(:hostclass, ""))
# So there is a return value
foo = @krt.add(Puppet::Resource::Type.new(:hostclass, "foo"))
@terminus.search(@request).should_not be_include(main)
end
it "should return nil if no types can be found" do
@terminus.search(@request).should be_nil
end
it "should load all resource types from all search paths" do
dir = tmpdir("searching_in_all")
first = File.join(dir, "first")
second = File.join(dir, "second")
FileUtils.mkdir_p(first)
FileUtils.mkdir_p(second)
- Puppet[:modulepath] = "#{first}#{File::PATH_SEPARATOR}#{second}"
+ environment = Puppet::Node::Environment.create(:test, [first, second])
# Make a new request, since we've reset the env
- @request = Puppet::Indirector::Request.new(:resource_type, :search, "*", nil)
+ request = Puppet::Indirector::Request.new(:resource_type, :search, "*", nil)
+ request.environment = environment
onepath = File.join(first, "one", "manifests")
FileUtils.mkdir_p(onepath)
twopath = File.join(first, "two", "manifests")
FileUtils.mkdir_p(twopath)
File.open(File.join(onepath, "oneklass.pp"), "w") { |f| f.puts "class one::oneklass {}" }
File.open(File.join(twopath, "twoklass.pp"), "w") { |f| f.puts "class two::twoklass {}" }
- result = @terminus.search(@request)
+ result = @terminus.search(request)
result.find { |t| t.name == "one::oneklass" }.should be_instance_of(Puppet::Resource::Type)
result.find { |t| t.name == "two::twoklass" }.should be_instance_of(Puppet::Resource::Type)
end
context "when specifying a 'kind' parameter" do
before :each do
@klass = @krt.add(Puppet::Resource::Type.new(:hostclass, "foo"))
@node = @krt.add(Puppet::Resource::Type.new(:node, "bar"))
@define = @krt.add(Puppet::Resource::Type.new(:definition, "baz"))
end
it "should raise an error if you pass an invalid kind filter" do
@request.options[:kind] = "i bet you don't have a kind called this"
expect {
@terminus.search(@request)
}.to raise_error(ArgumentError, /Unrecognized kind filter/)
end
it "should support filtering for only hostclass results" do
@request.options[:kind] = "class"
result = @terminus.search(@request)
result.should be_include(@klass)
result.should_not be_include(@node)
result.should_not be_include(@define)
end
it "should support filtering for only node results" do
@request.options[:kind] = "node"
result = @terminus.search(@request)
result.should_not be_include(@klass)
result.should be_include(@node)
result.should_not be_include(@define)
end
it "should support filtering for only definition results" do
@request.options[:kind] = "defined_type"
result = @terminus.search(@request)
result.should_not be_include(@klass)
result.should_not be_include(@node)
result.should be_include(@define)
end
end
end
context "when the search string is not a wildcard" do
it "should treat any search string as a regex" do
@request.key = "a"
foo = @krt.add(Puppet::Resource::Type.new(:hostclass, "foo"))
bar = @krt.add(Puppet::Resource::Type.new(:hostclass, "bar"))
baz = @krt.add(Puppet::Resource::Type.new(:hostclass, "baz"))
result = @terminus.search(@request)
result.should be_include(bar)
result.should be_include(baz)
result.should_not be_include(foo)
end
it "should support kind filtering with a regex" do
@request.key = "foo"
@request.options[:kind] = "class"
foobar = @krt.add(Puppet::Resource::Type.new(:hostclass, "foobar"))
foobaz = @krt.add(Puppet::Resource::Type.new(:hostclass, "foobaz"))
foobam = @krt.add(Puppet::Resource::Type.new(:definition, "foobam"))
fooball = @krt.add(Puppet::Resource::Type.new(:node, "fooball"))
result = @terminus.search(@request)
result.should be_include(foobar)
result.should be_include(foobaz)
result.should_not be_include(foobam)
result.should_not be_include(fooball)
end
it "should fail if a provided search string is not a valid regex" do
@request.key = "*foo*"
# Add one instance so we don't just get an empty array
@krt.add(Puppet::Resource::Type.new(:hostclass, "foo"))
lambda { @terminus.search(@request) }.should raise_error(ArgumentError)
end
end
it "should not return the 'main' class" do
main = @krt.add(Puppet::Resource::Type.new(:hostclass, ""))
# So there is a return value
foo = @krt.add(Puppet::Resource::Type.new(:hostclass, "foo"))
@terminus.search(@request).should_not be_include(main)
end
it "should return nil if no types can be found" do
@terminus.search(@request).should be_nil
end
it "should load all resource types from all search paths" do
dir = tmpdir("searching_in_all")
first = File.join(dir, "first")
second = File.join(dir, "second")
FileUtils.mkdir_p(first)
FileUtils.mkdir_p(second)
- Puppet[:modulepath] = "#{first}#{File::PATH_SEPARATOR}#{second}"
+ environment = Puppet::Node::Environment.create(:test, [first,second])
# Make a new request, since we've reset the env
- @request = Puppet::Indirector::Request.new(:resource_type, :search, "*", nil)
+ request = Puppet::Indirector::Request.new(:resource_type, :search, "*", nil)
+ request.environment = environment
onepath = File.join(first, "one", "manifests")
FileUtils.mkdir_p(onepath)
twopath = File.join(first, "two", "manifests")
FileUtils.mkdir_p(twopath)
File.open(File.join(onepath, "oneklass.pp"), "w") { |f| f.puts "class one::oneklass {}" }
File.open(File.join(twopath, "twoklass.pp"), "w") { |f| f.puts "class two::twoklass {}" }
- result = @terminus.search(@request)
+ result = @terminus.search(request)
result.find { |t| t.name == "one::oneklass" }.should be_instance_of(Puppet::Resource::Type)
result.find { |t| t.name == "two::twoklass" }.should be_instance_of(Puppet::Resource::Type)
end
end
end
diff --git a/spec/unit/indirector/rest_spec.rb b/spec/unit/indirector/rest_spec.rb
index 5c9065fec..7a97708d3 100755
--- a/spec/unit/indirector/rest_spec.rb
+++ b/spec/unit/indirector/rest_spec.rb
@@ -1,559 +1,589 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet/indirector'
require 'puppet/indirector/errors'
require 'puppet/indirector/rest'
HTTP_ERROR_CODES = [300, 400, 500]
# Just one from each category since the code makes no real distinctions
shared_examples_for "a REST terminus method" do |terminus_method|
describe "when talking to an older master" do
it "should set backward compatibility settings" do
response.stubs(:[]).with(Puppet::Network::HTTP::HEADER_PUPPET_VERSION).returns nil
terminus.send(terminus_method, request)
Puppet[:report_serialization_format].should == 'yaml'
Puppet[:legacy_query_parameter_serialization].should == true
end
end
describe "when talking to a 3.3.1 master" do
it "should not set backward compatibility settings" do
response.stubs(:[]).with(Puppet::Network::HTTP::HEADER_PUPPET_VERSION).returns "3.3.1"
terminus.send(terminus_method, request)
Puppet[:report_serialization_format].should == 'pson'
Puppet[:legacy_query_parameter_serialization].should == false
end
end
HTTP_ERROR_CODES.each do |code|
describe "when the response code is #{code}" do
let(:response) { mock_response(code, 'error messaged!!!') }
it "raises an http error with the body of the response" do
expect {
terminus.send(terminus_method, request)
}.to raise_error(Net::HTTPError, "Error #{code} on SERVER: #{response.body}")
end
it "does not attempt to deserialize the response" do
model.expects(:convert_from).never
expect {
terminus.send(terminus_method, request)
}.to raise_error(Net::HTTPError)
end
# I'm not sure what this means or if it's used
it "if the body is empty raises an http error with the response header" do
response.stubs(:body).returns ""
response.stubs(:message).returns "fhqwhgads"
expect {
terminus.send(terminus_method, request)
}.to raise_error(Net::HTTPError, "Error #{code} on SERVER: #{response.message}")
end
describe "and the body is compressed" do
it "raises an http error with the decompressed body of the response" do
uncompressed_body = "why"
compressed_body = Zlib::Deflate.deflate(uncompressed_body)
response = mock_response(code, compressed_body, 'text/plain', 'deflate')
connection.expects(http_method).returns(response)
expect {
terminus.send(terminus_method, request)
}.to raise_error(Net::HTTPError, "Error #{code} on SERVER: #{uncompressed_body}")
end
end
end
end
end
shared_examples_for "a deserializing terminus method" do |terminus_method|
describe "when the response has no content-type" do
let(:response) { mock_response(200, "body", nil, nil) }
it "raises an error" do
expect {
terminus.send(terminus_method, request)
}.to raise_error(RuntimeError, "No content type in http response; cannot parse")
end
end
it "doesn't catch errors in deserialization" do
model.expects(:convert_from).raises(Puppet::Error, "Whoa there")
expect { terminus.send(terminus_method, request) }.to raise_error(Puppet::Error, "Whoa there")
end
end
describe Puppet::Indirector::REST do
before :all do
class Puppet::TestModel
extend Puppet::Indirector
indirects :test_model
attr_accessor :name, :data
def initialize(name = "name", data = '')
@name = name
@data = data
end
def self.convert_from(format, string)
new('', string)
end
def self.convert_from_multiple(format, string)
string.split(',').collect { |s| convert_from(format, s) }
end
def to_data_hash
{ 'name' => @name, 'data' => @data }
end
def ==(other)
other.is_a? Puppet::TestModel and other.name == name and other.data == data
end
end
# The subclass must not be all caps even though the superclass is
class Puppet::TestModel::Rest < Puppet::Indirector::REST
end
Puppet::TestModel.indirection.terminus_class = :rest
end
after :all do
Puppet::TestModel.indirection.delete
# Remove the class, unlinking it from the rest of the system.
Puppet.send(:remove_const, :TestModel)
end
let(:terminus_class) { Puppet::TestModel::Rest }
let(:terminus) { Puppet::TestModel.indirection.terminus(:rest) }
let(:indirection) { Puppet::TestModel.indirection }
let(:model) { Puppet::TestModel }
+ around(:each) do |example|
+ Puppet.override(:current_environment => Puppet::Node::Environment.create(:production, [])) do
+ example.run
+ end
+ end
+
def mock_response(code, body, content_type='text/plain', encoding=nil)
obj = stub('http 200 ok', :code => code.to_s, :body => body)
obj.stubs(:[]).with('content-type').returns(content_type)
obj.stubs(:[]).with('content-encoding').returns(encoding)
obj.stubs(:[]).with(Puppet::Network::HTTP::HEADER_PUPPET_VERSION).returns(Puppet.version)
obj
end
def find_request(key, options={})
Puppet::Indirector::Request.new(:test_model, :find, key, nil, options)
end
def head_request(key, options={})
Puppet::Indirector::Request.new(:test_model, :head, key, nil, options)
end
def search_request(key, options={})
Puppet::Indirector::Request.new(:test_model, :search, key, nil, options)
end
def delete_request(key, options={})
Puppet::Indirector::Request.new(:test_model, :destroy, key, nil, options)
end
def save_request(key, instance, options={})
Puppet::Indirector::Request.new(:test_model, :save, key, instance, options)
end
it "should have a method for specifying what setting a subclass should use to retrieve its server" do
terminus_class.should respond_to(:use_server_setting)
end
it "should use any specified setting to pick the server" do
terminus_class.expects(:server_setting).returns :inventory_server
Puppet[:inventory_server] = "myserver"
terminus_class.server.should == "myserver"
end
it "should default to :server for the server setting" do
terminus_class.expects(:server_setting).returns nil
Puppet[:server] = "myserver"
terminus_class.server.should == "myserver"
end
it "should have a method for specifying what setting a subclass should use to retrieve its port" do
terminus_class.should respond_to(:use_port_setting)
end
it "should use any specified setting to pick the port" do
terminus_class.expects(:port_setting).returns :ca_port
Puppet[:ca_port] = "321"
terminus_class.port.should == 321
end
it "should default to :port for the port setting" do
terminus_class.expects(:port_setting).returns nil
Puppet[:masterport] = "543"
terminus_class.port.should == 543
end
it 'should default to :puppet for the srv_service' do
Puppet::Indirector::REST.srv_service.should == :puppet
end
describe "when creating an HTTP client" do
it "should use the class's server and port if the indirection request provides neither" do
@request = stub 'request', :key => "foo", :server => nil, :port => nil
terminus.class.expects(:port).returns 321
terminus.class.expects(:server).returns "myserver"
Puppet::Network::HttpPool.expects(:http_instance).with("myserver", 321).returns "myconn"
terminus.network(@request).should == "myconn"
end
it "should use the server from the indirection request if one is present" do
@request = stub 'request', :key => "foo", :server => "myserver", :port => nil
terminus.class.stubs(:port).returns 321
Puppet::Network::HttpPool.expects(:http_instance).with("myserver", 321).returns "myconn"
terminus.network(@request).should == "myconn"
end
it "should use the port from the indirection request if one is present" do
@request = stub 'request', :key => "foo", :server => nil, :port => 321
terminus.class.stubs(:server).returns "myserver"
Puppet::Network::HttpPool.expects(:http_instance).with("myserver", 321).returns "myconn"
terminus.network(@request).should == "myconn"
end
end
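# A hedged sketch of the selection logic these examples pin down; the helper
# name is illustrative and not part of Puppet. The server and port carried on
# the request win, with the terminus class's configured values as fall-backs.
def sketch_pick_connection(request, terminus_class)
  server = request.server || terminus_class.server
  port   = request.port   || terminus_class.port
  [server, port]
end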
describe "#find" do
let(:http_method) { :get }
let(:response) { mock_response(200, 'body') }
let(:connection) { stub('mock http connection', :get => response, :verify_callback= => nil) }
let(:request) { find_request('foo') }
before :each do
terminus.stubs(:network).returns(connection)
end
it_behaves_like 'a REST terminus method', :find
it_behaves_like 'a deserializing terminus method', :find
describe "with a long set of parameters" do
it "calls post on the connection with the query params in the body" do
params = {}
'aa'.upto('zz') do |s|
params[s] = 'foo'
end
# The request special-cases this parameter, and it
# won't be passed on to the server, so we remove it here
# to avoid a failure.
params.delete('ip')
request = find_request('whoa', params)
connection.expects(:post).with do |uri, body|
body.split("&").sort == params.map {|key,value| "#{key}=#{value}"}.sort
end.returns(mock_response(200, 'body'))
terminus.find(request)
end
end
describe "with no parameters" do
it "calls get on the connection" do
request = find_request('foo bar')
connection.expects(:get).with('/production/test_model/foo%20bar?', anything).returns(mock_response('200', 'response body'))
terminus.find(request).should == model.new('foo bar', 'response body')
end
end
it "returns nil on 404" do
response = mock_response('404', nil)
connection.expects(:get).returns(response)
terminus.find(request).should == nil
end
it 'raises no warning for a 404 (when not asked to do so)' do
response = mock_response('404', 'this is the notfound you are looking for')
connection.expects(:get).returns(response)
expected_message = 'Find /production/test_model/foo? resulted in 404 with the message: this is the notfound you are looking for'
expect{terminus.find(request)}.to_not raise_error()
end
context 'when fail_on_404 is used in request' do
- let(:request) { find_request('foo', :fail_on_404 => true) }
-
it 'raises an error for a 404 when asked to do so' do
+ request = find_request('foo', :fail_on_404 => true)
response = mock_response('404', 'this is the notfound you are looking for')
connection.expects(:get).returns(response)
- expected_message = [
- 'Find /production/test_model/foo?fail_on_404=true',
- 'resulted in 404 with the message: this is the notfound you are looking for'].join( ' ')
+
+ expect do
+ terminus.find(request)
+ end.to raise_error(
+ Puppet::Error,
+ 'Find /production/test_model/foo?fail_on_404=true resulted in 404 with the message: this is the notfound you are looking for')
+ end
+
+ it 'truncates the URI when it is very long' do
+ request = find_request('foo', :fail_on_404 => true, :long_param => ('A' * 100) + 'B')
+ response = mock_response('404', 'this is the notfound you are looking for')
+ connection.expects(:get).returns(response)
+
+ expect do
+ terminus.find(request)
+ end.to raise_error(
+ Puppet::Error,
+ /\/production\/test_model\/foo.*long_param=A+\.\.\..*resulted in 404 with the message/)
+ end
+
+ it 'does not truncate the URI when logging debug information' do
+ Puppet.debug = true
+ request = find_request('foo', :fail_on_404 => true, :long_param => ('A' * 100) + 'B')
+ response = mock_response('404', 'this is the notfound you are looking for')
+ connection.expects(:get).returns(response)
+
expect do
terminus.find(request)
- end.to raise_error(Puppet::Error, expected_message)
+ end.to raise_error(
+ Puppet::Error,
+ /\/production\/test_model\/foo.*long_param=A+B.*resulted in 404 with the message/)
end
end
it "asks the model to deserialize the response body and sets the name on the resulting object to the find key" do
connection.expects(:get).returns response
model.expects(:convert_from).with(response['content-type'], response.body).returns(
model.new('overwritten', 'decoded body')
)
terminus.find(request).should == model.new('foo', 'decoded body')
end
it "doesn't require the model to support name=" do
connection.expects(:get).returns response
instance = model.new('name', 'decoded body')
model.expects(:convert_from).with(response['content-type'], response.body).returns(instance)
instance.expects(:respond_to?).with(:name=).returns(false)
instance.expects(:name=).never
terminus.find(request).should == model.new('name', 'decoded body')
end
it "provides an Accept header containing the list of supported formats joined with commas" do
connection.expects(:get).with(anything, has_entry("Accept" => "supported, formats")).returns(response)
terminus.model.expects(:supported_formats).returns %w{supported formats}
terminus.find(request)
end
it "adds an Accept-Encoding header" do
terminus.expects(:add_accept_encoding).returns({"accept-encoding" => "gzip"})
connection.expects(:get).with(anything, has_entry("accept-encoding" => "gzip")).returns(response)
terminus.find(request)
end
it "uses only the mime-type from the content-type header when asking the model to deserialize" do
response = mock_response('200', 'mydata', "text/plain; charset=utf-8")
connection.expects(:get).returns(response)
model.expects(:convert_from).with("text/plain", "mydata").returns "myobject"
terminus.find(request).should == "myobject"
end
it "decompresses the body before passing it to the model for deserialization" do
uncompressed_body = "Why hello there"
compressed_body = Zlib::Deflate.deflate(uncompressed_body)
response = mock_response('200', compressed_body, 'text/plain', 'deflate')
connection.expects(:get).returns(response)
model.expects(:convert_from).with("text/plain", uncompressed_body).returns "myobject"
terminus.find(request).should == "myobject"
end
end
describe "#head" do
let(:http_method) { :head }
let(:response) { mock_response(200, nil) }
let(:connection) { stub('mock http connection', :head => response, :verify_callback= => nil) }
let(:request) { head_request('foo') }
before :each do
terminus.stubs(:network).returns(connection)
end
it_behaves_like 'a REST terminus method', :head
it "returns true if there was a successful http response" do
connection.expects(:head).returns mock_response('200', nil)
terminus.head(request).should == true
end
it "returns false on a 404 response" do
connection.expects(:head).returns mock_response('404', nil)
terminus.head(request).should == false
end
end
describe "#search" do
let(:http_method) { :get }
let(:response) { mock_response(200, 'data1,data2,data3') }
let(:connection) { stub('mock http connection', :get => response, :verify_callback= => nil) }
let(:request) { search_request('foo') }
before :each do
terminus.stubs(:network).returns(connection)
end
it_behaves_like 'a REST terminus method', :search
it_behaves_like 'a deserializing terminus method', :search
it "should call the GET http method on a network connection" do
connection.expects(:get).with('/production/test_models/foo', has_key('Accept')).returns mock_response(200, 'data3, data4')
terminus.search(request)
end
it "returns an empty list on 404" do
response = mock_response('404', nil)
connection.expects(:get).returns(response)
terminus.search(request).should == []
end
it "asks the model to deserialize the response body into multiple instances" do
terminus.search(request).should == [model.new('', 'data1'), model.new('', 'data2'), model.new('', 'data3')]
end
it "should provide an Accept header containing the list of supported formats joined with commas" do
connection.expects(:get).with(anything, has_entry("Accept" => "supported, formats")).returns(mock_response(200, ''))
terminus.model.expects(:supported_formats).returns %w{supported formats}
terminus.search(request)
end
it "should return an empty array if serialization returns nil" do
model.stubs(:convert_from_multiple).returns nil
terminus.search(request).should == []
end
end
describe "#destroy" do
let(:http_method) { :delete }
let(:response) { mock_response(200, 'body') }
let(:connection) { stub('mock http connection', :delete => response, :verify_callback= => nil) }
let(:request) { delete_request('foo') }
before :each do
terminus.stubs(:network).returns(connection)
end
it_behaves_like 'a REST terminus method', :destroy
it_behaves_like 'a deserializing terminus method', :destroy
it "should call the DELETE http method on a network connection" do
connection.expects(:delete).with('/production/test_model/foo', has_key('Accept')).returns(response)
terminus.destroy(request)
end
it "should fail if any options are provided, since DELETE apparently does not support query options" do
request = delete_request('foo', :one => "two", :three => "four")
expect { terminus.destroy(request) }.to raise_error(ArgumentError)
end
it "should deserialize and return the http response" do
connection.expects(:delete).returns response
terminus.destroy(request).should == model.new('', 'body')
end
it "returns nil on 404" do
response = mock_response('404', nil)
connection.expects(:delete).returns(response)
terminus.destroy(request).should == nil
end
it "should provide an Accept header containing the list of supported formats joined with commas" do
connection.expects(:delete).with(anything, has_entry("Accept" => "supported, formats")).returns(response)
terminus.model.expects(:supported_formats).returns %w{supported formats}
terminus.destroy(request)
end
end
describe "#save" do
let(:http_method) { :put }
let(:response) { mock_response(200, 'body') }
let(:connection) { stub('mock http connection', :put => response, :verify_callback= => nil) }
let(:instance) { model.new('the thing', 'some contents') }
let(:request) { save_request(instance.name, instance) }
before :each do
terminus.stubs(:network).returns(connection)
end
it_behaves_like 'a REST terminus method', :save
it "should call the PUT http method on a network connection" do
connection.expects(:put).with('/production/test_model/the%20thing', anything, has_key("Content-Type")).returns response
terminus.save(request)
end
it "should fail if any options are provided, since PUT apparently does not support query options" do
request = save_request(instance.name, instance, :one => "two", :three => "four")
expect { terminus.save(request) }.to raise_error(ArgumentError)
end
it "should serialize the instance using the default format and pass the result as the body of the request" do
instance.expects(:render).returns "serial_instance"
connection.expects(:put).with(anything, "serial_instance", anything).returns response
terminus.save(request)
end
it "returns nil on 404" do
response = mock_response('404', nil)
connection.expects(:put).returns(response)
terminus.save(request).should == nil
end
it "returns nil" do
connection.expects(:put).returns response
terminus.save(request).should be_nil
end
it "should provide an Accept header containing the list of supported formats joined with commas" do
connection.expects(:put).with(anything, anything, has_entry("Accept" => "supported, formats")).returns(response)
instance.expects(:render).returns('')
model.expects(:supported_formats).returns %w{supported formats}
instance.expects(:mime).returns "supported"
terminus.save(request)
end
it "should provide a Content-Type header containing the mime-type of the sent object" do
instance.expects(:mime).returns "mime"
connection.expects(:put).with(anything, anything, has_entry('Content-Type' => "mime")).returns(response)
terminus.save(request)
end
end
context 'dealing with SRV settings' do
[
:destroy,
:find,
:head,
:save,
:search
].each do |method|
it "##{method} passes the SRV service, and fall-back server & port to the request's do_request method" do
request = Puppet::Indirector::Request.new(:indirection, method, 'key', nil)
stub_response = mock_response('200', 'body')
request.expects(:do_request).with(terminus.class.srv_service, terminus.class.server, terminus.class.port).returns(stub_response)
terminus.send(method, request)
end
end
end
end
diff --git a/spec/unit/interface/face_collection_spec.rb b/spec/unit/interface/face_collection_spec.rb
index 0562c933e..89928b70a 100755
--- a/spec/unit/interface/face_collection_spec.rb
+++ b/spec/unit/interface/face_collection_spec.rb
@@ -1,212 +1,212 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'tmpdir'
require 'puppet/interface'
describe Puppet::Interface::FaceCollection do
# To prevent conflicts with other specs that use faces, we must save and restore global state.
# Because there are specs that do 'describe Puppet::Face[...]', we must restore the same objects otherwise
# the 'subject' of the specs will differ.
before :all do
# Save FaceCollection's global state
- faces = subject.instance_variable_get(:@faces)
+ faces = described_class.instance_variable_get(:@faces)
@faces = faces.dup
faces.each do |k, v|
@faces[k] = v.dup
end
- @faces_loaded = subject.instance_variable_get(:@loaded)
+ @faces_loaded = described_class.instance_variable_get(:@loaded)
# Save the already required face files
@required = []
$".each do |path|
@required << path if path =~ /face\/.*\.rb$/
end
# Save Autoload's global state
@loaded = Puppet::Util::Autoload.instance_variable_get(:@loaded).dup
end
after :all do
# Restore global state
subject.instance_variable_set :@faces, @faces
subject.instance_variable_set :@loaded, @faces_loaded
$".delete_if { |path| path =~ /face\/.*\.rb$/ }
@required.each { |path| $".push path unless $".include? path }
Puppet::Util::Autoload.instance_variable_set(:@loaded, @loaded)
end
before :each do
# Before each test, clear the faces
subject.instance_variable_get(:@faces).clear
subject.instance_variable_set(:@loaded, false)
Puppet::Util::Autoload.instance_variable_get(:@loaded).clear
$".delete_if { |path| path =~ /face\/.*\.rb$/ }
end
describe "::[]" do
before :each do
subject.instance_variable_get("@faces")[:foo][SemVer.new('0.0.1')] = 10
end
it "should return the face with the given name" do
subject["foo", '0.0.1'].should == 10
end
it "should attempt to load the face if it isn't found" do
subject.expects(:require).once.with('puppet/face/bar')
subject.expects(:require).once.with('puppet/face/0.0.1/bar')
subject["bar", '0.0.1']
end
it "should attempt to load the default face for the specified version :current" do
subject.expects(:require).with('puppet/face/fozzie')
subject['fozzie', :current]
end
it "should return true if the face specified is registered" do
subject.instance_variable_get("@faces")[:foo][SemVer.new('0.0.1')] = 10
subject["foo", '0.0.1'].should == 10
end
it "should attempt to require the face if it is not registered" do
subject.expects(:require).with do |file|
subject.instance_variable_get("@faces")[:bar][SemVer.new('0.0.1')] = true
file == 'puppet/face/bar'
end
subject["bar", '0.0.1'].should be_true
end
it "should return false if the face is not registered" do
subject.stubs(:require).returns(true)
subject["bar", '0.0.1'].should be_false
end
it "should return false if the face file itself is missing" do
subject.stubs(:require).
raises(LoadError, 'no such file to load -- puppet/face/bar').then.
raises(LoadError, 'no such file to load -- puppet/face/0.0.1/bar')
subject["bar", '0.0.1'].should be_false
end
it "should register the version loaded by `:current` as `:current`" do
subject.expects(:require).with do |file|
subject.instance_variable_get("@faces")[:huzzah]['2.0.1'] = :huzzah_face
file == 'puppet/face/huzzah'
end
subject["huzzah", :current]
subject.instance_variable_get("@faces")[:huzzah][:current].should == :huzzah_face
end
context "with something on disk" do
it "should register the version loaded from `puppet/face/{name}` as `:current`" do
subject["huzzah", '2.0.1'].should be
subject["huzzah", :current].should be
Puppet::Face[:huzzah, '2.0.1'].should == Puppet::Face[:huzzah, :current]
end
it "should index :current when the code was pre-required" do
subject.instance_variable_get("@faces")[:huzzah].should_not be_key :current
require 'puppet/face/huzzah'
subject[:huzzah, :current].should be_true
end
end
it "should not cause an invalid face to be enumerated later" do
subject[:there_is_no_face, :current].should be_false
subject.faces.should_not include :there_is_no_face
end
end
describe "::get_action_for_face" do
it "should return an action on the current face" do
Puppet::Face::FaceCollection.get_action_for_face(:huzzah, :bar, :current).
should be_an_instance_of Puppet::Interface::Action
end
it "should return an action on an older version of a face" do
action = Puppet::Face::FaceCollection.
get_action_for_face(:huzzah, :obsolete, :current)
action.should be_an_instance_of Puppet::Interface::Action
action.face.version.should == SemVer.new('1.0.0')
end
it "should load the full older version of a face" do
action = Puppet::Face::FaceCollection.
get_action_for_face(:huzzah, :obsolete, :current)
action.face.version.should == SemVer.new('1.0.0')
action.face.should be_action :obsolete_in_core
end
it "should not add obsolete actions to the current version" do
action = Puppet::Face::FaceCollection.
get_action_for_face(:huzzah, :obsolete, :current)
action.face.version.should == SemVer.new('1.0.0')
action.face.should be_action :obsolete_in_core
current = Puppet::Face[:huzzah, :current]
current.version.should == SemVer.new('2.0.1')
current.should_not be_action :obsolete_in_core
current.should_not be_action :obsolete
end
end
describe "::register" do
it "should store the face by name" do
face = Puppet::Face.new(:my_face, '0.0.1')
subject.register(face)
subject.instance_variable_get("@faces").should == {
:my_face => { face.version => face }
}
end
end
describe "::underscorize" do
faulty = [1, "23foo", "#foo", "$bar", "sturm und drang", :"sturm und drang"]
valid = {
"Foo" => :foo,
:Foo => :foo,
"foo_bar" => :foo_bar,
:foo_bar => :foo_bar,
"foo-bar" => :foo_bar,
:"foo-bar" => :foo_bar,
"foo_bar23" => :foo_bar23,
:foo_bar23 => :foo_bar23,
}
valid.each do |input, expect|
it "should map #{input.inspect} to #{expect.inspect}" do
result = subject.underscorize(input)
result.should == expect
end
end
faulty.each do |input|
it "should fail when presented with #{input.inspect} (#{input.class})" do
expect { subject.underscorize(input) }.
to raise_error ArgumentError, /not a valid face name/
end
end
end
context "faulty faces" do
before :each do
$:.unshift "#{PuppetSpec::FIXTURE_DIR}/faulty_face"
end
after :each do
$:.delete_if {|x| x == "#{PuppetSpec::FIXTURE_DIR}/faulty_face"}
end
it "should not die if a face has a syntax error" do
subject.faces.should be_include :help
subject.faces.should_not be_include :syntax
@logs.should_not be_empty
@logs.first.message.should =~ /syntax error/
end
end
end
diff --git a/spec/unit/module_tool/applications/builder_spec.rb b/spec/unit/module_tool/applications/builder_spec.rb
index b9d7d3d79..291473db9 100644
--- a/spec/unit/module_tool/applications/builder_spec.rb
+++ b/spec/unit/module_tool/applications/builder_spec.rb
@@ -1,88 +1,416 @@
require 'spec_helper'
+require 'puppet/file_system'
require 'puppet/module_tool/applications'
require 'puppet_spec/modules'
describe Puppet::ModuleTool::Applications::Builder do
include PuppetSpec::Files
let(:path) { tmpdir("working_dir") }
let(:module_name) { 'mymodule-mytarball' }
let(:version) { '0.0.1' }
let(:release_name) { "#{module_name}-#{version}" }
let(:tarball) { File.join(path, 'pkg', release_name) + ".tar.gz" }
let(:builder) { Puppet::ModuleTool::Applications::Builder.new(path) }
shared_examples "a packagable module" do
def target_exists?(file)
File.exist?(File.join(path, "pkg", "#{module_name}-#{version}", file))
end
- it "packages the module in a tarball named after the module" do
+ def build
tarrer = mock('tarrer')
Puppet::ModuleTool::Tar.expects(:instance).returns(tarrer)
Dir.expects(:chdir).with(File.join(path, 'pkg')).yields
tarrer.expects(:pack).with(release_name, tarball)
builder.run
end
+
+ def create_regular_files
+ Puppet::FileSystem.touch(File.join(path, '.dotfile'))
+ Puppet::FileSystem.touch(File.join(path, 'file.foo'))
+ Puppet::FileSystem.touch(File.join(path, 'REVISION'))
+ Puppet::FileSystem.touch(File.join(path, '~file'))
+ Puppet::FileSystem.touch(File.join(path, '#file'))
+ Puppet::FileSystem.mkpath(File.join(path, 'pkg'))
+ Puppet::FileSystem.mkpath(File.join(path, 'coverage'))
+ Puppet::FileSystem.mkpath(File.join(path, 'sub'))
+ Puppet::FileSystem.touch(File.join(path, 'sub/.dotfile'))
+ Puppet::FileSystem.touch(File.join(path, 'sub/file.foo'))
+ Puppet::FileSystem.touch(File.join(path, 'sub/REVISION'))
+ Puppet::FileSystem.touch(File.join(path, 'sub/~file'))
+ Puppet::FileSystem.touch(File.join(path, 'sub/#file'))
+ Puppet::FileSystem.mkpath(File.join(path, 'sub/pkg'))
+ Puppet::FileSystem.mkpath(File.join(path, 'sub/coverage'))
+ end
+
+ def create_symlinks
+ Puppet::FileSystem.touch(File.join(path, 'symlinkedfile'))
+ Puppet::FileSystem.symlink(File.join(path, 'symlinkedfile'), File.join(path, 'symlinkfile'))
+ end
+
+ def create_ignored_files
+ Puppet::FileSystem.touch(File.join(path, 'gitignored.foo'))
+ Puppet::FileSystem.mkpath(File.join(path, 'gitdirectory/sub'))
+ Puppet::FileSystem.touch(File.join(path, 'gitdirectory/gitartifact'))
+ Puppet::FileSystem.touch(File.join(path, 'gitdirectory/gitimportantfile'))
+ Puppet::FileSystem.touch(File.join(path, 'gitdirectory/sub/artifact'))
+ Puppet::FileSystem.touch(File.join(path, 'pmtignored.foo'))
+ Puppet::FileSystem.mkpath(File.join(path, 'pmtdirectory/sub'))
+ Puppet::FileSystem.touch(File.join(path, 'pmtdirectory/pmtimportantfile'))
+ Puppet::FileSystem.touch(File.join(path, 'pmtdirectory/pmtartifact'))
+ Puppet::FileSystem.touch(File.join(path, 'pmtdirectory/sub/artifact'))
+ end
+
+ def create_pmtignore_file
+ Puppet::FileSystem.open(File.join(path, '.pmtignore'), 0600, 'w') do |f|
+ f << <<-PMTIGNORE
+pmtignored.*
+pmtdirectory/sub/**
+pmtdirectory/pmt*
+!pmtimportantfile
+PMTIGNORE
+ end
+ end
+
+ def create_gitignore_file
+ Puppet::FileSystem.open(File.join(path, '.gitignore'), 0600, 'w') do |f|
+ f << <<-GITIGNORE
+gitignored.*
+gitdirectory/sub/**
+gitdirectory/git*
+!gitimportantfile
+GITIGNORE
+ end
+ end
+
+ def create_symlink_gitignore_file
+ Puppet::FileSystem.open(File.join(path, '.gitignore'), 0600, 'w') do |f|
+ f << <<-GITIGNORE
+symlinkfile
+ GITIGNORE
+ end
+ end
+
+ shared_examples "regular files are present" do
+ it "has metadata" do
+ expect(target_exists?('metadata.json')).to eq true
+ end
+
+ it "has checksums" do
+ expect(target_exists?('checksums.json')).to eq true
+ end
+
+ it "copies regular files" do
+ expect(target_exists?('file.foo')).to eq true
+ end
+ end
+
+ shared_examples "default artifacts are removed in module dir but not in subdirs" do
+ it "ignores dotfiles" do
+ expect(target_exists?('.dotfile')).to eq false
+ expect(target_exists?('sub/.dotfile')).to eq true
+ end
+
+ it "does not have .gitignore" do
+ expect(target_exists?('.gitignore')).to eq false
+ end
+
+ it "does not have .pmtignore" do
+ expect(target_exists?('.pmtignore')).to eq false
+ end
+
+ it "does not have pkg" do
+ expect(target_exists?('pkg')).to eq false
+ expect(target_exists?('sub/pkg')).to eq true
+ end
+
+ it "does not have coverage" do
+ expect(target_exists?('coverage')).to eq false
+ expect(target_exists?('sub/coverage')).to eq true
+ end
+
+ it "does not have REVISION" do
+ expect(target_exists?('REVISION')).to eq false
+ expect(target_exists?('sub/REVISION')).to eq true
+ end
+
+ it "does not have ~files" do
+ expect(target_exists?('~file')).to eq false
+ expect(target_exists?('sub/~file')).to eq true
+ end
+
+ it "does not have #files" do
+ expect(target_exists?('#file')).to eq false
+ expect(target_exists?('sub/#file')).to eq true
+ end
+ end
+
+ shared_examples "gitignored files are present" do
+ it "leaves regular files" do
+ expect(target_exists?('gitignored.foo')).to eq true
+ end
+
+ it "leaves directories" do
+ expect(target_exists?('gitdirectory')).to eq true
+ end
+
+ it "leaves files in directories" do
+ expect(target_exists?('gitdirectory/gitartifact')).to eq true
+ end
+
+ it "leaves exceptional files" do
+ expect(target_exists?('gitdirectory/gitimportantfile')).to eq true
+ end
+
+ it "leaves subdirectories" do
+ expect(target_exists?('gitdirectory/sub')).to eq true
+ end
+
+ it "leaves files in subdirectories" do
+ expect(target_exists?('gitdirectory/sub/artifact')).to eq true
+ end
+ end
+
+ shared_examples "gitignored files are not present" do
+ it "ignores regular files" do
+ expect(target_exists?('gitignored.foo')).to eq false
+ end
+
+ it "ignores directories" do
+ expect(target_exists?('gitdirectory')).to eq true
+ end
+
+ it "ignores files in directories" do
+ expect(target_exists?('gitdirectory/gitartifact')).to eq false
+ end
+
+ it "copies exceptional files" do
+ expect(target_exists?('gitdirectory/gitimportantfile')).to eq true
+ end
+
+ it "ignores subdirectories" do
+ expect(target_exists?('gitdirectory/sub')).to eq false
+ end
+
+ it "ignores files in subdirectories" do
+ expect(target_exists?('gitdirectory/sub/artifact')).to eq false
+ end
+ end
+
+ shared_examples "pmtignored files are present" do
+ it "leaves regular files" do
+ expect(target_exists?('pmtignored.foo')).to eq true
+ end
+
+ it "leaves directories" do
+ expect(target_exists?('pmtdirectory')).to eq true
+ end
+
+ it "ignores files in directories" do
+ expect(target_exists?('pmtdirectory/pmtartifact')).to eq true
+ end
+
+ it "leaves exceptional files" do
+ expect(target_exists?('pmtdirectory/pmtimportantfile')).to eq true
+ end
+
+ it "leaves subdirectories" do
+ expect(target_exists?('pmtdirectory/sub')).to eq true
+ end
+
+ it "leaves files in subdirectories" do
+ expect(target_exists?('pmtdirectory/sub/artifact')).to eq true
+ end
+ end
+
+ shared_examples "pmtignored files are not present" do
+ it "ignores regular files" do
+ expect(target_exists?('pmtignored.foo')).to eq false
+ end
+
+ it "ignores directories" do
+ expect(target_exists?('pmtdirectory')).to eq true
+ end
+
+ it "copies exceptional files" do
+ expect(target_exists?('pmtdirectory/pmtimportantfile')).to eq true
+ end
+
+ it "ignores files in directories" do
+ expect(target_exists?('pmtdirectory/pmtartifact')).to eq false
+ end
+
+ it "ignores subdirectories" do
+ expect(target_exists?('pmtdirectory/sub')).to eq false
+ end
+
+ it "ignores files in subdirectories" do
+ expect(target_exists?('pmtdirectory/sub/artifact')).to eq false
+ end
+ end
+
+ context "with no ignore files" do
+ before :each do
+ create_regular_files
+ create_ignored_files
+
+ build
+ end
+
+ it_behaves_like "regular files are present"
+ it_behaves_like "default artifacts are removed in module dir but not in subdirs"
+ it_behaves_like "pmtignored files are present"
+ it_behaves_like "gitignored files are present"
+ end
+
+ context "with .gitignore file" do
+ before :each do
+ create_regular_files
+ create_ignored_files
+ create_gitignore_file
+
+ build
+ end
+
+ it_behaves_like "regular files are present"
+ it_behaves_like "default artifacts are removed in module dir but not in subdirs"
+ it_behaves_like "pmtignored files are present"
+ it_behaves_like "gitignored files are not present"
+ end
+
+ context "with .pmtignore file" do
+ before :each do
+ create_regular_files
+ create_ignored_files
+ create_pmtignore_file
+
+ build
+ end
+
+ it_behaves_like "regular files are present"
+ it_behaves_like "default artifacts are removed in module dir but not in subdirs"
+ it_behaves_like "gitignored files are present"
+ it_behaves_like "pmtignored files are not present"
+ end
+
+ context "with .pmtignore and .gitignore file" do
+ before :each do
+ create_regular_files
+ create_ignored_files
+ create_pmtignore_file
+ create_gitignore_file
+
+ build
+ end
+
+ it_behaves_like "regular files are present"
+ it_behaves_like "default artifacts are removed in module dir but not in subdirs"
+ it_behaves_like "gitignored files are present"
+ it_behaves_like "pmtignored files are not present"
+ end
+
+ context "with unignored symlinks", :if => Puppet.features.manages_symlinks? do
+ before :each do
+ create_regular_files
+ create_symlinks
+ create_ignored_files
+ end
+
+ it "give an error about symlinks" do
+ expect { builder.run }.to raise_error
+ end
+ end
+
+ context "with .gitignore file and ignored symlinks", :if => Puppet.features.manages_symlinks? do
+ before :each do
+ create_regular_files
+ create_symlinks
+ create_ignored_files
+ create_symlink_gitignore_file
+ end
+
+ it "does not give an error about symlinks" do
+ expect { build }.not_to raise_error
+ end
+ end
end
context 'with metadata.json' do
before :each do
File.open(File.join(path, 'metadata.json'), 'w') do |f|
f.puts({
"name" => "#{module_name}",
"version" => "#{version}",
"source" => "http://github.com/testing/#{module_name}",
"author" => "testing",
"license" => "Apache License Version 2.0",
"summary" => "Puppet testing module",
"description" => "This module can be used for basic testing",
"project_page" => "http://github.com/testing/#{module_name}"
}.to_json)
end
end
it_behaves_like "a packagable module"
+
+ it "does not package with a symlink", :if => Puppet.features.manages_symlinks? do
+ FileUtils.touch(File.join(path, 'tempfile'))
+ Puppet::FileSystem.symlink(File.join(path, 'tempfile'), File.join(path, 'tempfile2'))
+
+ expect {
+ builder.run
+ }.to raise_error Puppet::ModuleTool::Errors::ModuleToolError, /symlinks/i
+ end
+
+ it "does not package with a symlink in a subdir", :if => Puppet.features.manages_symlinks? do
+ FileUtils.mkdir(File.join(path, 'manifests'))
+ FileUtils.touch(File.join(path, 'manifests/tempfile.pp'))
+ Puppet::FileSystem.symlink(File.join(path, 'manifests/tempfile.pp'), File.join(path, 'manifests/tempfile2.pp'))
+
+ expect {
+ builder.run
+ }.to raise_error Puppet::ModuleTool::Errors::ModuleToolError, /symlinks/i
+ end
end
context 'with metadata.json containing checksums' do
before :each do
File.open(File.join(path, 'metadata.json'), 'w') do |f|
f.puts({
"name" => "#{module_name}",
"version" => "#{version}",
"source" => "http://github.com/testing/#{module_name}",
"author" => "testing",
"license" => "Apache License Version 2.0",
"summary" => "Puppet testing module",
"description" => "This module can be used for basic testing",
"project_page" => "http://github.com/testing/#{module_name}",
"checksums" => {"README.md" => "deadbeef"}
}.to_json)
end
end
it_behaves_like "a packagable module"
end
-
context 'with Modulefile' do
before :each do
File.open(File.join(path, 'Modulefile'), 'w') do |f|
f.write <<-MODULEFILE
name '#{module_name}'
version '#{version}'
source 'http://github.com/testing/#{module_name}'
author 'testing'
license 'Apache License Version 2.0'
summary 'Puppet testing module'
description 'This module can be used for basic testing'
project_page 'http://github.com/testing/#{module_name}'
MODULEFILE
end
end
it_behaves_like "a packagable module"
end
end
diff --git a/spec/unit/module_tool/applications/unpacker_spec.rb b/spec/unit/module_tool/applications/unpacker_spec.rb
index 39b0c261f..81557df99 100644
--- a/spec/unit/module_tool/applications/unpacker_spec.rb
+++ b/spec/unit/module_tool/applications/unpacker_spec.rb
@@ -1,34 +1,74 @@
require 'spec_helper'
require 'json'
require 'puppet/module_tool/applications'
+require 'puppet/file_system'
require 'puppet_spec/modules'
describe Puppet::ModuleTool::Applications::Unpacker do
include PuppetSpec::Files
let(:target) { tmpdir("unpacker") }
let(:module_name) { 'myusername-mytarball' }
let(:filename) { tmpdir("module") + "/module.tar.gz" }
let(:working_dir) { tmpdir("working_dir") }
before :each do
Puppet.settings[:module_working_dir] = working_dir
end
it "should attempt to untar file to temporary location" do
untar = mock('Tar')
untar.expects(:unpack).with(filename, anything()) do |src, dest, _|
FileUtils.mkdir(File.join(dest, 'extractedmodule'))
File.open(File.join(dest, 'extractedmodule', 'metadata.json'), 'w+') do |file|
file.puts JSON.generate('name' => module_name, 'version' => '1.0.0')
end
true
end
Puppet::ModuleTool::Tar.expects(:instance).returns(untar)
Puppet::ModuleTool::Applications::Unpacker.run(filename, :target_dir => target)
File.should be_directory(File.join(target, 'mytarball'))
end
+
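+ # The next two examples simulate a tarball whose extracted contents
+ # include a symlink; unpacking should warn about symlinks rather than abort.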
+ it "should warn about symlinks", :if => Puppet.features.manages_symlinks? do
+ untar = mock('Tar')
+ untar.expects(:unpack).with(filename, anything()) do |src, dest, _|
+ FileUtils.mkdir(File.join(dest, 'extractedmodule'))
+ File.open(File.join(dest, 'extractedmodule', 'metadata.json'), 'w+') do |file|
+ file.puts JSON.generate('name' => module_name, 'version' => '1.0.0')
+ end
+ FileUtils.touch(File.join(dest, 'extractedmodule/tempfile'))
+ Puppet::FileSystem.symlink(File.join(dest, 'extractedmodule/tempfile'), File.join(dest, 'extractedmodule/tempfile2'))
+ true
+ end
+
+ Puppet::ModuleTool::Tar.expects(:instance).returns(untar)
+ Puppet.expects(:warning).with(regexp_matches(/symlinks/i))
+
+ Puppet::ModuleTool::Applications::Unpacker.run(filename, :target_dir => target)
+ File.should be_directory(File.join(target, 'mytarball'))
+ end
+
+ it "should warn about symlinks in subdirectories", :if => Puppet.features.manages_symlinks? do
+ untar = mock('Tar')
+ untar.expects(:unpack).with(filename, anything()) do |src, dest, _|
+ FileUtils.mkdir(File.join(dest, 'extractedmodule'))
+ File.open(File.join(dest, 'extractedmodule', 'metadata.json'), 'w+') do |file|
+ file.puts JSON.generate('name' => module_name, 'version' => '1.0.0')
+ end
+ FileUtils.mkdir(File.join(dest, 'extractedmodule/manifests'))
+ FileUtils.touch(File.join(dest, 'extractedmodule/manifests/tempfile'))
+ Puppet::FileSystem.symlink(File.join(dest, 'extractedmodule/manifests/tempfile'), File.join(dest, 'extractedmodule/manifests/tempfile2'))
+ true
+ end
+
+ Puppet::ModuleTool::Tar.expects(:instance).returns(untar)
+ Puppet.expects(:warning).with(regexp_matches(/symlinks/i))
+
+ Puppet::ModuleTool::Applications::Unpacker.run(filename, :target_dir => target)
+ File.should be_directory(File.join(target, 'mytarball'))
+ end
end
diff --git a/spec/unit/module_tool/applications/upgrader_spec.rb b/spec/unit/module_tool/applications/upgrader_spec.rb
index 2c5652fe0..382e45a75 100644
--- a/spec/unit/module_tool/applications/upgrader_spec.rb
+++ b/spec/unit/module_tool/applications/upgrader_spec.rb
@@ -1,313 +1,324 @@
require 'spec_helper'
require 'puppet/module_tool/applications'
require 'puppet_spec/module_tool/shared_functions'
require 'puppet_spec/module_tool/stub_source'
require 'semver'
describe Puppet::ModuleTool::Applications::Upgrader do
include PuppetSpec::ModuleTool::SharedFunctions
include PuppetSpec::Files
before do
FileUtils.mkdir_p(primary_dir)
FileUtils.mkdir_p(secondary_dir)
end
let(:vardir) { tmpdir('upgrader') }
let(:primary_dir) { File.join(vardir, "primary") }
let(:secondary_dir) { File.join(vardir, "secondary") }
let(:remote_source) { PuppetSpec::ModuleTool::StubSource.new }
let(:environment) do
Puppet.lookup(:current_environment).override_with(
:vardir => vardir,
:modulepath => [ primary_dir, secondary_dir ]
)
end
before do
Semantic::Dependency.clear_sources
installer = Puppet::ModuleTool::Applications::Upgrader.any_instance
installer.stubs(:module_repository).returns(remote_source)
end
def upgrader(name, options = {})
Puppet::ModuleTool.set_option_defaults(options)
Puppet::ModuleTool::Applications::Upgrader.new(name, options)
end
describe '#run' do
let(:module) { 'pmtacceptance-stdlib' }
def options
{ :environment => environment }
end
let(:application) { upgrader(self.module, options) }
subject { application.run }
it 'fails if the module is not already installed' do
subject.should include :result => :failure
subject[:error].should include :oneline => "Could not upgrade '#{self.module}'; module is not installed"
end
context 'for an installed module' do
+ context 'with only one version' do
+ before { preinstall('puppetlabs-oneversion', '0.0.1') }
+ let(:module) { 'puppetlabs-oneversion' }
+
+ it 'declines to upgrade' do
+ subject.should include :result => :noop
+ subject[:error][:multiline].should =~ /already the latest version/
+ end
+ end
+
context 'without dependencies' do
before { preinstall('pmtacceptance-stdlib', '1.0.0') }
context 'without options' do
it 'properly upgrades the module to the greatest version' do
subject.should include :result => :success
graph_should_include 'pmtacceptance-stdlib', v('1.0.0') => v('4.1.0')
end
end
context 'with version range' do
def options
super.merge(:version => '3.x')
end
context 'not matching the installed version' do
it 'properly upgrades the module to the greatest version within that range' do
subject.should include :result => :success
graph_should_include 'pmtacceptance-stdlib', v('1.0.0') => v('3.2.0')
end
end
context 'matching the installed version' do
context 'with more recent version' do
before { preinstall('pmtacceptance-stdlib', '3.0.0')}
it 'properly upgrades the module to the greatest version within that range' do
subject.should include :result => :success
graph_should_include 'pmtacceptance-stdlib', v('3.0.0') => v('3.2.0')
end
end
context 'without more recent version' do
before { preinstall('pmtacceptance-stdlib', '3.2.0')}
context 'without options' do
it 'declines to upgrade' do
subject.should include :result => :noop
+ subject[:error][:multiline].should =~ /already the latest version/
end
end
context 'with --force' do
def options
super.merge(:force => true)
end
it 'overwrites the installed module with the greatest version matching that range' do
subject.should include :result => :success
graph_should_include 'pmtacceptance-stdlib', v('3.2.0') => v('3.2.0')
end
end
end
end
end
end
context 'that is depended upon' do
# pmtacceptance-keystone depends on pmtacceptance-mysql >=0.6.1 <1.0.0
before { preinstall('pmtacceptance-keystone', '2.1.0') }
before { preinstall('pmtacceptance-mysql', '0.9.0') }
let(:module) { 'pmtacceptance-mysql' }
context 'and out of date' do
before { preinstall('pmtacceptance-mysql', '0.8.0') }
it 'properly upgrades to the greatest version matching the dependency' do
subject.should include :result => :success
graph_should_include 'pmtacceptance-mysql', v('0.8.0') => v('0.9.0')
end
end
context 'and up to date' do
it 'declines to upgrade' do
subject.should include :result => :failure
end
end
context 'when specifying a violating version range' do
def options
super.merge(:version => '2.1.0')
end
it 'fails to upgrade the module' do
# TODO: More helpful error message?
subject.should include :result => :failure
subject[:error].should include :oneline => "Could not upgrade '#{self.module}' (v0.9.0 -> v2.1.0); no version satisfies all dependencies"
end
context 'using --force' do
def options
super.merge(:force => true)
end
it 'overwrites the installed module with the specified version' do
subject.should include :result => :success
graph_should_include 'pmtacceptance-mysql', v('0.9.0') => v('2.1.0')
end
end
end
end
context 'with local changes' do
before { preinstall('pmtacceptance-stdlib', '1.0.0') }
before do
release = application.send(:installed_modules)['pmtacceptance-stdlib']
mark_changed(release.mod.path)
end
it 'fails to upgrade' do
subject.should include :result => :failure
subject[:error].should include :oneline => "Could not upgrade '#{self.module}'; module has had changes made locally"
end
context 'with --ignore-changes' do
def options
super.merge(:ignore_changes => true)
end
it 'overwrites the installed module with the greatest version matching that range' do
subject.should include :result => :success
graph_should_include 'pmtacceptance-stdlib', v('1.0.0') => v('4.1.0')
end
end
end
context 'with dependencies' do
context 'that are unsatisfied' do
def options
super.merge(:version => '0.1.1')
end
before { preinstall('pmtacceptance-apache', '0.0.3') }
let(:module) { 'pmtacceptance-apache' }
it 'upgrades the module and installs the missing dependencies' do
subject.should include :result => :success
graph_should_include 'pmtacceptance-apache', v('0.0.3') => v('0.1.1')
graph_should_include 'pmtacceptance-stdlib', nil => v('4.1.0'), :action => :install
end
end
context 'with older major versions' do
# pmtacceptance-apache 0.0.4 has no dependency on pmtacceptance-stdlib
# the next available version (0.1.1) and all subsequent versions depend on pmtacceptance-stdlib >= 2.2.1
before { preinstall('pmtacceptance-apache', '0.0.3') }
before { preinstall('pmtacceptance-stdlib', '1.0.0') }
let(:module) { 'pmtacceptance-apache' }
it 'refuses to upgrade the installed dependency to a new major version, but upgrades the module to the greatest compatible version' do
subject.should include :result => :success
graph_should_include 'pmtacceptance-apache', v('0.0.3') => v('0.0.4')
end
context 'using --ignore_dependencies' do
def options
super.merge(:ignore_dependencies => true)
end
it 'upgrades the module to the greatest available version' do
subject.should include :result => :success
graph_should_include 'pmtacceptance-apache', v('0.0.3') => v('0.10.0')
end
end
end
context 'with satisfying major versions' do
before { preinstall('pmtacceptance-apache', '0.0.3') }
before { preinstall('pmtacceptance-stdlib', '2.0.0') }
let(:module) { 'pmtacceptance-apache' }
it 'upgrades the module and its dependencies to their greatest compatible versions' do
subject.should include :result => :success
graph_should_include 'pmtacceptance-apache', v('0.0.3') => v('0.10.0')
graph_should_include 'pmtacceptance-stdlib', v('2.0.0') => v('2.6.0')
end
end
context 'with satisfying versions' do
before { preinstall('pmtacceptance-apache', '0.0.3') }
before { preinstall('pmtacceptance-stdlib', '2.4.0') }
let(:module) { 'pmtacceptance-apache' }
it 'upgrades the module to the greatest available version' do
subject.should include :result => :success
graph_should_include 'pmtacceptance-apache', v('0.0.3') => v('0.10.0')
graph_should_include 'pmtacceptance-stdlib', nil
end
end
context 'with current versions' do
before { preinstall('pmtacceptance-apache', '0.0.3') }
before { preinstall('pmtacceptance-stdlib', '2.6.0') }
let(:module) { 'pmtacceptance-apache' }
it 'upgrades the module to the greatest available version' do
subject.should include :result => :success
graph_should_include 'pmtacceptance-apache', v('0.0.3') => v('0.10.0')
graph_should_include 'pmtacceptance-stdlib', nil
end
end
context 'with shared dependencies' do
# bacula 0.0.3 depends on stdlib >= 2.2.0 and pmtacceptance/mysql >= 1.0.0
# bacula 0.0.2 depends on stdlib >= 2.2.0 and pmtacceptance/mysql >= 0.0.1
# bacula 0.0.1 depends on stdlib >= 2.2.0
# keystone 2.1.0 depends on pmtacceptance/stdlib >= 2.5.0 and pmtacceptance/mysql >=0.6.1 <1.0.0
before { preinstall('pmtacceptance-bacula', '0.0.1') }
before { preinstall('pmtacceptance-mysql', '0.9.0') }
before { preinstall('pmtacceptance-keystone', '2.1.0') }
let(:module) { 'pmtacceptance-bacula' }
it 'upgrades the module to the greatest version compatible with all other installed modules' do
subject.should include :result => :success
graph_should_include 'pmtacceptance-bacula', v('0.0.1') => v('0.0.2')
end
context 'using --force' do
def options
super.merge(:force => true)
end
it 'upgrades the module to the greatest version available' do
subject.should include :result => :success
graph_should_include 'pmtacceptance-bacula', v('0.0.1') => v('0.0.3')
end
end
end
context 'in other modulepath directories' do
before { preinstall('pmtacceptance-apache', '0.0.3') }
before { preinstall('pmtacceptance-stdlib', '1.0.0', :into => secondary_dir) }
let(:module) { 'pmtacceptance-apache' }
context 'with older major versions' do
it 'upgrades the module to the greatest version compatible with the installed modules' do
subject.should include :result => :success
graph_should_include 'pmtacceptance-apache', v('0.0.3') => v('0.0.4')
graph_should_include 'pmtacceptance-stdlib', nil
end
end
context 'with satisfying major versions' do
before { preinstall('pmtacceptance-stdlib', '2.0.0', :into => secondary_dir) }
it 'upgrades the module and its dependencies to their greatest compatible versions, in-place' do
subject.should include :result => :success
graph_should_include 'pmtacceptance-apache', v('0.0.3') => v('0.10.0')
graph_should_include 'pmtacceptance-stdlib', v('2.0.0') => v('2.6.0'), :path => secondary_dir
end
end
end
end
end
end
end
diff --git a/spec/unit/module_tool/installed_modules_spec.rb b/spec/unit/module_tool/installed_modules_spec.rb
new file mode 100644
index 000000000..b9492d986
--- /dev/null
+++ b/spec/unit/module_tool/installed_modules_spec.rb
@@ -0,0 +1,49 @@
+require 'spec_helper'
+require 'puppet/module_tool/installed_modules'
+require 'puppet_spec/modules'
+
+describe Puppet::ModuleTool::InstalledModules do
+ include PuppetSpec::Files
+
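+ # Run every example against a throwaway environment whose modulepath
+ # lives in a tmpdir, injected via Puppet.override.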
+ around do |example|
+ dir = tmpdir("deep_path")
+
+ FileUtils.mkdir_p(@modpath = File.join(dir, "modpath"))
+
+ @env = Puppet::Node::Environment.create(:env, [@modpath])
+ Puppet.override(:current_environment => @env) do
+ example.run
+ end
+ end
+
+ it 'works when given a semantic version' do
+ mod = PuppetSpec::Modules.create('goodsemver', @modpath, :metadata => {:version => '1.2.3'})
+ installed = described_class.new(@env)
+ expect(installed.modules["puppetlabs-#{mod.name}"].version).to eq(Semantic::Version.parse('1.2.3'))
+ end
+
+ it 'defaults when not given a semantic version' do
+ mod = PuppetSpec::Modules.create('badsemver', @modpath, :metadata => {:version => 'banana'})
+ Puppet.expects(:warning).with(regexp_matches(/Semantic Version/))
+ installed = described_class.new(@env)
+ expect(installed.modules["puppetlabs-#{mod.name}"].version).to eq(Semantic::Version.parse('0.0.0'))
+ end
+
+ it 'defaults when not given a full semantic version' do
+ mod = PuppetSpec::Modules.create('badsemver', @modpath, :metadata => {:version => '1.2'})
+ Puppet.expects(:warning).with(regexp_matches(/Semantic Version/))
+ installed = described_class.new(@env)
+ expect(installed.modules["puppetlabs-#{mod.name}"].version).to eq(Semantic::Version.parse('0.0.0'))
+ end
+
+ it 'still works if there is an invalid version in one of the modules' do
+ mod1 = PuppetSpec::Modules.create('badsemver', @modpath, :metadata => {:version => 'banana'})
+ mod2 = PuppetSpec::Modules.create('goodsemver', @modpath, :metadata => {:version => '1.2.3'})
+ mod3 = PuppetSpec::Modules.create('notquitesemver', @modpath, :metadata => {:version => '1.2'})
+ Puppet.expects(:warning).with(regexp_matches(/Semantic Version/)).twice
+ installed = described_class.new(@env)
+ expect(installed.modules["puppetlabs-#{mod1.name}"].version).to eq(Semantic::Version.parse('0.0.0'))
+ expect(installed.modules["puppetlabs-#{mod2.name}"].version).to eq(Semantic::Version.parse('1.2.3'))
+ expect(installed.modules["puppetlabs-#{mod3.name}"].version).to eq(Semantic::Version.parse('0.0.0'))
+ end
+end
diff --git a/spec/unit/module_tool/metadata_spec.rb b/spec/unit/module_tool/metadata_spec.rb
index 3b925c644..fce9c8f8d 100644
--- a/spec/unit/module_tool/metadata_spec.rb
+++ b/spec/unit/module_tool/metadata_spec.rb
@@ -1,233 +1,301 @@
require 'spec_helper'
require 'puppet/module_tool'
describe Puppet::ModuleTool::Metadata do
let(:data) { {} }
let(:metadata) { Puppet::ModuleTool::Metadata.new }
+ describe 'property lookups' do
+ subject { metadata }
+
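+ # Smoke test: each reader below should be callable on a fresh Metadata
+ # instance without raising; the values are exercised elsewhere in this file.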
+ %w[ name version author summary license source project_page issues_url
+ dependencies dashed_name release_name description ].each do |prop|
+ describe "##{prop}" do
+ it "responds to the property" do
+ subject.send(prop)
+ end
+ end
+ end
+ end
+
describe "#update" do
subject { metadata.update(data) }
context "with a valid name" do
let(:data) { { 'name' => 'billgates-mymodule' } }
it "extracts the author name from the name field" do
subject.to_hash['author'].should == 'billgates'
end
it "extracts a module name from the name field" do
subject.module_name.should == 'mymodule'
end
context "and existing author" do
before { metadata.update('author' => 'foo') }
it "avoids overwriting the existing author" do
subject.to_hash['author'].should == 'foo'
end
end
end
context "with a valid name and author" do
let(:data) { { 'name' => 'billgates-mymodule', 'author' => 'foo' } }
it "use the author name from the author field" do
subject.to_hash['author'].should == 'foo'
end
context "and preexisting author" do
before { metadata.update('author' => 'bar') }
it "avoids overwriting the existing author" do
subject.to_hash['author'].should == 'foo'
end
end
end
context "with an invalid name" do
context "(short module name)" do
let(:data) { { 'name' => 'mymodule' } }
it "raises an exception" do
expect { subject }.to raise_error(ArgumentError, "Invalid 'name' field in metadata.json: the field must be a namespaced module name")
end
end
context "(missing namespace)" do
let(:data) { { 'name' => '/mymodule' } }
it "raises an exception" do
expect { subject }.to raise_error(ArgumentError, "Invalid 'name' field in metadata.json: the field must be a namespaced module name")
end
end
context "(missing module name)" do
let(:data) { { 'name' => 'namespace/' } }
it "raises an exception" do
expect { subject }.to raise_error(ArgumentError, "Invalid 'name' field in metadata.json: the field must be a namespaced module name")
end
end
context "(invalid namespace)" do
let(:data) { { 'name' => "dolla'bill$-mymodule" } }
it "raises an exception" do
expect { subject }.to raise_error(ArgumentError, "Invalid 'name' field in metadata.json: the namespace contains non-alphanumeric characters")
end
end
context "(non-alphanumeric module name)" do
let(:data) { { 'name' => "dollabils-fivedolla'" } }
it "raises an exception" do
expect { subject }.to raise_error(ArgumentError, "Invalid 'name' field in metadata.json: the module name contains non-alphanumeric (or underscore) characters")
end
end
context "(module name starts with a number)" do
let(:data) { { 'name' => "dollabills-5dollars" } }
it "raises an exception" do
expect { subject }.to raise_error(ArgumentError, "Invalid 'name' field in metadata.json: the module name must begin with a letter")
end
end
end
context "with an invalid version" do
let(:data) { { 'version' => '3.0' } }
it "raises an exception" do
expect { subject }.to raise_error(ArgumentError, "Invalid 'version' field in metadata.json: version string cannot be parsed as a valid Semantic Version")
end
end
context "with a valid source" do
context "which is a GitHub URL" do
context "with a scheme" do
before { metadata.update('source' => 'https://github.com/billgates/amazingness') }
it "predicts a default project_page" do
subject.to_hash['project_page'].should == 'https://github.com/billgates/amazingness'
end
it "predicts a default issues_url" do
subject.to_hash['issues_url'].should == 'https://github.com/billgates/amazingness/issues'
end
end
context "without a scheme" do
before { metadata.update('source' => 'github.com/billgates/amazingness') }
it "predicts a default project_page" do
subject.to_hash['project_page'].should == 'https://github.com/billgates/amazingness'
end
it "predicts a default issues_url" do
subject.to_hash['issues_url'].should == 'https://github.com/billgates/amazingness/issues'
end
end
end
context "which is not a GitHub URL" do
before { metadata.update('source' => 'https://notgithub.com/billgates/amazingness') }
it "does not predict a default project_page" do
subject.to_hash['project_page'].should be nil
end
it "does not predict a default issues_url" do
subject.to_hash['issues_url'].should be nil
end
end
context "which is not a URL" do
before { metadata.update('source' => 'my brain') }
it "does not predict a default project_page" do
subject.to_hash['project_page'].should be nil
end
it "does not predict a default issues_url" do
subject.to_hash['issues_url'].should be nil
end
end
end
+ context "with a valid dependency" do
+ let(:data) { {'dependencies' => [{'name' => 'puppetlabs-goodmodule'}] }}
+
+ it "adds the dependency" do
+ subject.dependencies.size.should == 1
+ end
+ end
+
+ context "with a invalid dependency name" do
+ let(:data) { {'dependencies' => [{'name' => 'puppetlabsbadmodule'}] }}
+
+ it "raises an exception" do
+ expect { subject }.to raise_error(ArgumentError)
+ end
+ end
+
+ context "with a valid dependency version range" do
+ let(:data) { {'dependencies' => [{'name' => 'puppetlabs-badmodule', 'version_requirement' => '>= 2.0.0'}] }}
+
+ it "adds the dependency" do
+ subject.dependencies.size.should == 1
+ end
+ end
+
+ context "with a invalid version range" do
+ let(:data) { {'dependencies' => [{'name' => 'puppetlabsbadmodule', 'version_requirement' => '>= banana'}] }}
+
+ it "raises an exception" do
+ expect { subject }.to raise_error(ArgumentError)
+ end
+ end
+
+ context "with duplicate dependencies" do
+ let(:data) { {'dependencies' => [{'name' => 'puppetlabs-dupmodule', 'version_requirement' => '1.0.0'},
+ {'name' => 'puppetlabs-dupmodule', 'version_requirement' => '0.0.1'}] }
+ }
+
+ it "raises an exception" do
+ expect { subject }.to raise_error(ArgumentError)
+ end
+ end
+
+ context "adding a duplicate dependency" do
+ let(:data) { {'dependencies' => [{'name' => 'puppetlabs-origmodule', 'version_requirement' => '1.0.0'}] }}
+
+ it "with a different version raises an exception" do
+ metadata.add_dependency('puppetlabs-origmodule', '>= 0.0.1')
+ expect { subject }.to raise_error(ArgumentError)
+ end
+
+ it "with the same version does not add another dependency" do
+ metadata.add_dependency('puppetlabs-origmodule', '1.0.0')
+ subject.dependencies.size.should == 1
+ end
+ end
end
describe '#dashed_name' do
it 'returns nil in the absence of a module name' do
expect(metadata.update('version' => '1.0.0').release_name).to be_nil
end
it 'returns a hyphenated string containing namespace and module name' do
data = metadata.update('name' => 'foo-bar')
data.dashed_name.should == 'foo-bar'
end
it 'properly handles slash-separated names' do
data = metadata.update('name' => 'foo/bar')
data.dashed_name.should == 'foo-bar'
end
it 'is unaffected by author name' do
data = metadata.update('name' => 'foo/bar', 'author' => 'me')
data.dashed_name.should == 'foo-bar'
end
end
describe '#release_name' do
it 'returns nil in the absence of a module name' do
expect(metadata.update('version' => '1.0.0').release_name).to be_nil
end
it 'returns nil in the absence of a version' do
expect(metadata.update('name' => 'foo/bar').release_name).to be_nil
end
it 'returns a hyphenated string containing module name and version' do
data = metadata.update('name' => 'foo/bar', 'version' => '1.0.0')
data.release_name.should == 'foo-bar-1.0.0'
end
it 'is unaffected by author name' do
data = metadata.update('name' => 'foo/bar', 'version' => '1.0.0', 'author' => 'me')
data.release_name.should == 'foo-bar-1.0.0'
end
end
describe "#to_hash" do
subject { metadata.to_hash }
- its(:keys) do
- subject.sort.should == %w[ name version author summary license source issues_url project_page dependencies ].sort
+ it "contains the default set of keys" do
+ subject.keys.sort.should == %w[ name version author summary license source issues_url project_page dependencies ].sort
end
describe "['license']" do
it "defaults to Apache 2" do
subject['license'].should == "Apache 2.0"
end
end
describe "['dependencies']" do
- it "defaults to an empty Array" do
- subject['dependencies'].should == []
+ it "defaults to an empty set" do
+ subject['dependencies'].should == Set.new
end
end
context "when updated with non-default data" do
subject { metadata.update('license' => 'MIT', 'non-standard' => 'yup').to_hash }
it "overrides the defaults" do
subject['license'].should == 'MIT'
end
it 'contains unanticipated values' do
subject['non-standard'].should == 'yup'
end
end
end
end
diff --git a/spec/unit/module_tool/tar/mini_spec.rb b/spec/unit/module_tool/tar/mini_spec.rb
index ef030611b..179952741 100644
--- a/spec/unit/module_tool/tar/mini_spec.rb
+++ b/spec/unit/module_tool/tar/mini_spec.rb
@@ -1,59 +1,60 @@
require 'spec_helper'
require 'puppet/module_tool'
describe Puppet::ModuleTool::Tar::Mini, :if => (Puppet.features.minitar? and Puppet.features.zlib?) do
let(:sourcefile) { '/the/module.tar.gz' }
let(:destdir) { File.expand_path '/the/dest/dir' }
let(:sourcedir) { '/the/src/dir' }
let(:destfile) { '/the/dest/file.tar.gz' }
let(:minitar) { described_class.new }
it "unpacks a tar file" do
unpacks_the_entry(:file_start, 'thefile')
minitar.unpack(sourcefile, destdir, 'uid')
end
it "does not allow an absolute path" do
unpacks_the_entry(:file_start, '/thefile')
expect {
minitar.unpack(sourcefile, destdir, 'uid')
}.to raise_error(Puppet::ModuleTool::Errors::InvalidPathInPackageError,
"Attempt to install file into \"/thefile\" under \"#{destdir}\"")
end
it "does not allow a file to be written outside the destination directory" do
unpacks_the_entry(:file_start, '../../thefile')
expect {
minitar.unpack(sourcefile, destdir, 'uid')
}.to raise_error(Puppet::ModuleTool::Errors::InvalidPathInPackageError,
"Attempt to install file into \"#{File.expand_path('/the/thefile')}\" under \"#{destdir}\"")
end
it "does not allow a directory to be written outside the destination directory" do
unpacks_the_entry(:dir, '../../thedir')
expect {
minitar.unpack(sourcefile, destdir, 'uid')
}.to raise_error(Puppet::ModuleTool::Errors::InvalidPathInPackageError,
"Attempt to install file into \"#{File.expand_path('/the/thedir')}\" under \"#{destdir}\"")
end
it "packs a tar file" do
writer = mock('GzipWriter')
Zlib::GzipWriter.expects(:open).with(destfile).yields(writer)
Archive::Tar::Minitar.expects(:pack).with(sourcedir, writer)
minitar.pack(sourcedir, destfile)
end
def unpacks_the_entry(type, name)
reader = mock('GzipReader')
Zlib::GzipReader.expects(:open).with(sourcefile).yields(reader)
- Archive::Tar::Minitar.expects(:unpack).with(reader, destdir).yields(type, name, nil)
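+ # unpack now receives only the entries that survive find_valid_files filtering.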
+ minitar.expects(:find_valid_files).with(reader).returns([name])
+ Archive::Tar::Minitar.expects(:unpack).with(reader, destdir, [name]).yields(type, name, nil)
end
end
diff --git a/spec/unit/network/authentication_spec.rb b/spec/unit/network/authentication_spec.rb
index 8f3653cad..5e2f2de87 100755
--- a/spec/unit/network/authentication_spec.rb
+++ b/spec/unit/network/authentication_spec.rb
@@ -1,100 +1,104 @@
#! /usr/bin/env ruby
require 'spec_helper'
load 'puppet/network/authentication.rb'
class AuthenticationTest
include Puppet::Network::Authentication
end
describe Puppet::Network::Authentication do
subject { AuthenticationTest.new }
let(:now) { Time.now }
let(:cert) { Puppet::SSL::Certificate.new('cert') }
let(:host) { stub 'host', :certificate => cert }
# this is necessary since the logger is a class variable, and it needs to be stubbed
def reload_module
load 'puppet/network/authentication.rb'
end
describe "when warning about upcoming expirations" do
before do
Puppet::SSL::CertificateAuthority.stubs(:ca?).returns(false)
Puppet::FileSystem.stubs(:exist?).returns(false)
end
it "should check the expiration of the CA certificate" do
ca = stub 'ca', :host => host
Puppet::SSL::CertificateAuthority.stubs(:ca?).returns(true)
Puppet::SSL::CertificateAuthority.stubs(:instance).returns(ca)
cert.expects(:near_expiration?).returns(false)
subject.warn_if_near_expiration
end
context "when examining the local host" do
before do
Puppet::SSL::Host.stubs(:localhost).returns(host)
Puppet::FileSystem.stubs(:exist?).with(Puppet[:hostcert]).returns(true)
end
it "should not load the localhost certificate if the local CA certificate is missing" do
# Redmine-21869: Infinite recursion occurs if CA cert is missing.
Puppet::FileSystem.stubs(:exist?).with(Puppet[:localcacert]).returns(false)
host.unstub(:certificate)
host.expects(:certificate).never
subject.warn_if_near_expiration
end
it "should check the expiration of the localhost certificate if the local CA certificate is present" do
Puppet::FileSystem.stubs(:exist?).with(Puppet[:localcacert]).returns(true)
cert.expects(:near_expiration?).returns(false)
subject.warn_if_near_expiration
end
end
it "should check the expiration of any certificates passed in as arguments" do
cert.expects(:near_expiration?).twice.returns(false)
subject.warn_if_near_expiration(cert, cert)
end
it "should accept instances of OpenSSL::X509::Certificate" do
raw_cert = stub 'cert'
raw_cert.stubs(:is_a?).with(OpenSSL::X509::Certificate).returns(true)
Puppet::SSL::Certificate.stubs(:from_instance).with(raw_cert).returns(cert)
cert.expects(:near_expiration?).returns(false)
subject.warn_if_near_expiration(raw_cert)
end
it "should use a rate-limited logger for expiration warnings that uses `runinterval` as its interval" do
Puppet::Util::Log::RateLimitedLogger.expects(:new).with(Puppet[:runinterval])
reload_module
end
context "in the logs" do
let(:logger) { stub 'logger' }
before do
Puppet::Util::Log::RateLimitedLogger.stubs(:new).returns(logger)
reload_module
cert.stubs(:near_expiration?).returns(true)
cert.stubs(:expiration).returns(now)
cert.stubs(:unmunged_name).returns('foo')
end
+ after(:all) do
+ reload_module
+ end
+
it "should log a warning if a certificate's expiration is near" do
logger.expects(:warning)
subject.warn_if_near_expiration(cert)
end
it "should use the certificate's unmunged name in the message" do
logger.expects(:warning).with { |message| message.include? 'foo' }
subject.warn_if_near_expiration(cert)
end
it "should show certificate's expiration date in the message using ISO 8601 format" do
logger.expects(:warning).with { |message| message.include? now.strftime('%Y-%m-%dT%H:%M:%S%Z') }
subject.warn_if_near_expiration(cert)
end
end
end
end
diff --git a/spec/unit/network/http/api/v2/environments_spec.rb b/spec/unit/network/http/api/v2/environments_spec.rb
index 6c6d7a581..993e55011 100644
--- a/spec/unit/network/http/api/v2/environments_spec.rb
+++ b/spec/unit/network/http/api/v2/environments_spec.rb
@@ -1,42 +1,63 @@
require 'spec_helper'
require 'puppet/node/environment'
require 'puppet/network/http'
require 'matchers/json'
describe Puppet::Network::HTTP::API::V2::Environments do
include JSONMatchers
it "responds with all of the available environments" do
environment = Puppet::Node::Environment.create(:production, ["/first", "/second"], '/manifests')
loader = Puppet::Environments::Static.new(environment)
handler = Puppet::Network::HTTP::API::V2::Environments.new(loader)
response = Puppet::Network::HTTP::MemoryResponse.new
handler.call(Puppet::Network::HTTP::Request.from_hash(:headers => { 'accept' => 'application/json' }), response)
expect(response.code).to eq(200)
expect(response.type).to eq("application/json")
expect(JSON.parse(response.body)).to eq({
"search_paths" => loader.search_paths,
"environments" => {
"production" => {
"settings" => {
"modulepath" => [File.expand_path("/first"), File.expand_path("/second")],
- "manifest" => File.expand_path("/manifests")
+ "manifest" => File.expand_path("/manifests"),
+ "environment_timeout" => 0,
+ "config_version" => ""
}
}
}
})
end
- it "the response conforms to the environments schema" do
+ it "the response conforms to the environments schema for unlimited timeout" do
+ conf_stub = stub 'conf_stub'
+ conf_stub.expects(:environment_timeout).returns(1.0 / 0.0)
environment = Puppet::Node::Environment.create(:production, [])
- handler = Puppet::Network::HTTP::API::V2::Environments.new(Puppet::Environments::Static.new(environment))
+ env_loader = Puppet::Environments::Static.new(environment)
+ env_loader.expects(:get_conf).with(:production).returns(conf_stub)
+ handler = Puppet::Network::HTTP::API::V2::Environments.new(env_loader)
response = Puppet::Network::HTTP::MemoryResponse.new
handler.call(Puppet::Network::HTTP::Request.from_hash(:headers => { 'accept' => 'application/json' }), response)
expect(response.body).to validate_against('api/schemas/environments.json')
end
+
+ it "the response conforms to the environments schema for integer timeout" do
+ conf_stub = stub 'conf_stub'
+ conf_stub.expects(:environment_timeout).returns(1)
+ environment = Puppet::Node::Environment.create(:production, [])
+ env_loader = Puppet::Environments::Static.new(environment)
+ env_loader.expects(:get_conf).with(:production).returns(conf_stub)
+ handler = Puppet::Network::HTTP::API::V2::Environments.new(env_loader)
+ response = Puppet::Network::HTTP::MemoryResponse.new
+
+ handler.call(Puppet::Network::HTTP::Request.from_hash(:headers => { 'accept' => 'application/json' }), response)
+
+ expect(response.body).to validate_against('api/schemas/environments.json')
+ end
+
end
diff --git a/spec/unit/network/http/connection_spec.rb b/spec/unit/network/http/connection_spec.rb
old mode 100644
new mode 100755
index a5e6f64ae..ef6ca65d6
--- a/spec/unit/network/http/connection_spec.rb
+++ b/spec/unit/network/http/connection_spec.rb
@@ -1,271 +1,306 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet/network/http/connection'
require 'puppet/network/authentication'
describe Puppet::Network::HTTP::Connection do
let (:host) { "me" }
let (:port) { 54321 }
subject { Puppet::Network::HTTP::Connection.new(host, port, :verify => Puppet::SSL::Validator.no_validator) }
let (:httpok) { Net::HTTPOK.new('1.1', 200, '') }
context "when providing HTTP connections" do
- after do
- Puppet::Network::HTTP::Connection.instance_variable_set("@ssl_host", nil)
- end
-
context "when initializing http instances" do
- before :each do
- # All of the cert stuff is tested elsewhere
- Puppet::Network::HTTP::Connection.stubs(:cert_setup)
- end
-
it "should return an http instance created with the passed host and port" do
- http = subject.send(:connection)
- http.should be_an_instance_of Net::HTTP
- http.address.should == host
- http.port.should == port
+ conn = Puppet::Network::HTTP::Connection.new(host, port, :verify => Puppet::SSL::Validator.no_validator)
+
+ expect(conn.address).to eq(host)
+ expect(conn.port).to eq(port)
end
it "should enable ssl on the http instance by default" do
- http = subject.send(:connection)
- http.should be_use_ssl
- end
+ conn = Puppet::Network::HTTP::Connection.new(host, port, :verify => Puppet::SSL::Validator.no_validator)
- it "can set ssl using an option" do
- Puppet::Network::HTTP::Connection.new(host, port, :use_ssl => false, :verify => Puppet::SSL::Validator.no_validator).send(:connection).should_not be_use_ssl
- Puppet::Network::HTTP::Connection.new(host, port, :use_ssl => true, :verify => Puppet::SSL::Validator.no_validator).send(:connection).should be_use_ssl
+ expect(conn).to be_use_ssl
end
- context "proxy and timeout settings should propagate" do
- subject { Puppet::Network::HTTP::Connection.new(host, port, :verify => Puppet::SSL::Validator.no_validator).send(:connection) }
- before :each do
- Puppet[:http_proxy_host] = "myhost"
- Puppet[:http_proxy_port] = 432
- Puppet[:configtimeout] = 120
- end
+ it "can disable ssl using an option" do
+ conn = Puppet::Network::HTTP::Connection.new(host, port, :use_ssl => false, :verify => Puppet::SSL::Validator.no_validator)
- its(:open_timeout) { should == Puppet[:configtimeout] }
- its(:read_timeout) { should == Puppet[:configtimeout] }
- its(:proxy_address) { should == Puppet[:http_proxy_host] }
- its(:proxy_port) { should == Puppet[:http_proxy_port] }
+ expect(conn).to_not be_use_ssl
end
- it "should not set a proxy if the value is 'none'" do
- Puppet[:http_proxy_host] = 'none'
- subject.send(:connection).proxy_address.should be_nil
+ it "can enable ssl using an option" do
+ conn = Puppet::Network::HTTP::Connection.new(host, port, :use_ssl => true, :verify => Puppet::SSL::Validator.no_validator)
+
+ expect(conn).to be_use_ssl
end
it "should raise Puppet::Error when invalid options are specified" do
expect { Puppet::Network::HTTP::Connection.new(host, port, :invalid_option => nil) }.to raise_error(Puppet::Error, 'Unrecognized option(s): :invalid_option')
end
end
end
context "when methods that accept a block are called with a block" do
let (:host) { "my_server" }
let (:port) { 8140 }
let (:subject) { Puppet::Network::HTTP::Connection.new(host, port, :use_ssl => false, :verify => Puppet::SSL::Validator.no_validator) }
before :each do
httpok.stubs(:body).returns ""
# This stubbing relies a bit more on knowledge of the internals of Net::HTTP
# than I would prefer, but it works on ruby 1.8.7 and 1.9.3, and it seems
# valuable enough to have tests for blocks that this is probably warranted.
socket = stub_everything("socket")
TCPSocket.stubs(:open).returns(socket)
Net::HTTP::Post.any_instance.stubs(:exec).returns("")
Net::HTTP::Head.any_instance.stubs(:exec).returns("")
Net::HTTP::Get.any_instance.stubs(:exec).returns("")
Net::HTTPResponse.stubs(:read_new).returns(httpok)
end
[:request_get, :request_head, :request_post].each do |method|
context "##{method}" do
it "should yield to the block" do
block_executed = false
subject.send(method, "/foo", {}) do |response|
block_executed = true
end
block_executed.should == true
end
end
end
end
- context "when validating HTTPS requests" do
+ class ConstantErrorValidator
+ def initialize(args)
+ @fails_with = args[:fails_with]
+ @error_string = args[:error_string] || ""
+ @peer_certs = args[:peer_certs] || []
+ end
+
+ def setup_connection(connection)
+ connection.stubs(:start).raises(OpenSSL::SSL::SSLError.new(@fails_with))
+ end
+
+ def peer_certs
+ @peer_certs
+ end
+
+ def verify_errors
+ [@error_string]
+ end
+ end
+
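+ # Test-double validator for the success path: no connection setup, no
+ # verify errors, and a single peer certificate.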
+ class NoProblemsValidator
+ def initialize(cert)
+ @cert = cert
+ end
+
+ def setup_connection(connection)
+ end
+
+ def peer_certs
+ [@cert]
+ end
+
+ def verify_errors
+ []
+ end
+ end
+
+ shared_examples_for 'ssl verifier' do
include PuppetSpec::Files
let (:host) { "my_server" }
let (:port) { 8140 }
it "should provide a useful error message when one is available and certificate validation fails", :unless => Puppet.features.microsoft_windows? do
connection = Puppet::Network::HTTP::Connection.new(
host, port,
:verify => ConstantErrorValidator.new(:fails_with => 'certificate verify failed',
:error_string => 'shady looking signature'))
expect do
connection.get('request')
end.to raise_error(Puppet::Error, "certificate verify failed: [shady looking signature]")
end
it "should provide a helpful error message when hostname was not match with server certificate", :unless => Puppet.features.microsoft_windows? do
Puppet[:confdir] = tmpdir('conf')
connection = Puppet::Network::HTTP::Connection.new(
- host, port,
- :verify => ConstantErrorValidator.new(
- :fails_with => 'hostname was not match with server certificate',
- :peer_certs => [Puppet::SSL::CertificateAuthority.new.generate(
- 'not_my_server', :dns_alt_names => 'foo,bar,baz')]))
+ host, port,
+ :verify => ConstantErrorValidator.new(
+ :fails_with => 'hostname was not match with server certificate',
+ :peer_certs => [Puppet::SSL::CertificateAuthority.new.generate(
+ 'not_my_server', :dns_alt_names => 'foo,bar,baz')]))
expect do
connection.get('request')
end.to raise_error(Puppet::Error) do |error|
error.message =~ /Server hostname 'my_server' did not match server certificate; expected one of (.+)/
$1.split(', ').should =~ %w[DNS:foo DNS:bar DNS:baz DNS:not_my_server not_my_server]
end
end
it "should pass along the error message otherwise" do
connection = Puppet::Network::HTTP::Connection.new(
host, port,
:verify => ConstantErrorValidator.new(:fails_with => 'some other message'))
expect do
connection.get('request')
end.to raise_error(/some other message/)
end
it "should check all peer certificates for upcoming expiration", :unless => Puppet.features.microsoft_windows? do
Puppet[:confdir] = tmpdir('conf')
cert = Puppet::SSL::CertificateAuthority.new.generate(
'server', :dns_alt_names => 'foo,bar,baz')
connection = Puppet::Network::HTTP::Connection.new(
host, port,
:verify => NoProblemsValidator.new(cert))
+ Net::HTTP.any_instance.stubs(:start)
Net::HTTP.any_instance.stubs(:request).returns(httpok)
connection.expects(:warn_if_near_expiration).with(cert)
connection.get('request')
end
+ end
- class ConstantErrorValidator
- def initialize(args)
- @fails_with = args[:fails_with]
- @error_string = args[:error_string] || ""
- @peer_certs = args[:peer_certs] || []
- end
+ context "when using single use HTTPS connections" do
+ it_behaves_like 'ssl verifier' do
+ end
+ end
- def setup_connection(connection)
- connection.stubs(:request).with do
- true
- end.raises(OpenSSL::SSL::SSLError.new(@fails_with))
+ context "when using persistent HTTPS connections" do
+ around :each do |example|
+ pool = Puppet::Network::HTTP::Pool.new
+ Puppet.override(:http_pool => pool) do
+ example.run
end
+ pool.close
+ end
- def peer_certs
- @peer_certs
- end
+ it_behaves_like 'ssl verifier' do
+ end
+ end
- def verify_errors
- [@error_string]
- end
+ context "when response is a redirect" do
+ let (:site) { Puppet::Network::HTTP::Site.new('http', 'my_server', 8140) }
+ let (:other_site) { Puppet::Network::HTTP::Site.new('http', 'redirected', 9292) }
+ let (:other_path) { "other-path" }
+ let (:verify) { Puppet::SSL::Validator.no_validator }
+ let (:subject) { Puppet::Network::HTTP::Connection.new(site.host, site.port, :use_ssl => false, :verify => verify) }
+ let (:httpredirection) do
+ response = Net::HTTPFound.new('1.1', 302, 'Moved Temporarily')
+ response['location'] = "#{other_site.addr}/#{other_path}"
+ response.stubs(:read_body).returns("This resource has moved")
+ response
end
- class NoProblemsValidator
- def initialize(cert)
- @cert = cert
- end
+ def create_connection(site, options)
+ options[:use_ssl] = site.use_ssl?
+ Puppet::Network::HTTP::Connection.new(site.host, site.port, options)
+ end
- def setup_connection(connection)
- end
+ it "should redirect to the final resource location" do
+ http = stub('http')
+ http.stubs(:request).returns(httpredirection).then.returns(httpok)
- def peer_certs
- [@cert]
- end
+ seq = sequence('redirection')
+ pool = Puppet.lookup(:http_pool)
+ pool.expects(:with_connection).with(site, anything).yields(http).in_sequence(seq)
+ pool.expects(:with_connection).with(other_site, anything).yields(http).in_sequence(seq)
- def verify_errors
- []
- end
+ conn = create_connection(site, :verify => verify)
+ conn.get('/foo')
end
- end
- context "when response is a redirect" do
- let (:other_host) { "redirected" }
- let (:other_port) { 9292 }
- let (:other_path) { "other-path" }
- let (:subject) { Puppet::Network::HTTP::Connection.new("my_server", 8140, :use_ssl => false, :verify => Puppet::SSL::Validator.no_validator) }
- let (:httpredirection) { Net::HTTPFound.new('1.1', 302, 'Moved Temporarily') }
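+ # Helper: stubs the shared pool so the connection to `site` yields an HTTP
+ # stub returning the 302 above; returns the pool for further expectations.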
+ def expects_redirection(conn, &block)
+ http = stub('http')
+ http.stubs(:request).returns(httpredirection)
- before :each do
- httpredirection['location'] = "http://#{other_host}:#{other_port}/#{other_path}"
- httpredirection.stubs(:read_body).returns("This resource has moved")
+ pool = Puppet.lookup(:http_pool)
+ pool.expects(:with_connection).with(site, anything).yields(http)
+ pool
+ end
- socket = stub_everything("socket")
- TCPSocket.stubs(:open).returns(socket)
+ def expects_limit_exceeded(conn)
+ expect {
+ conn.get('/')
+ }.to raise_error(Puppet::Network::HTTP::RedirectionLimitExceededException)
+ end
- Net::HTTP::Get.any_instance.stubs(:exec).returns("")
- Net::HTTP::Post.any_instance.stubs(:exec).returns("")
+ it "should not redirect when the limit is 0" do
+ conn = create_connection(site, :verify => verify, :redirect_limit => 0)
+
+ pool = expects_redirection(conn)
+ pool.expects(:with_connection).with(other_site, anything).never
+
+ expects_limit_exceeded(conn)
end
- it "should redirect to the final resource location" do
- httpok.stubs(:read_body).returns(:body)
- Net::HTTPResponse.stubs(:read_new).returns(httpredirection).then.returns(httpok)
+ it "should redirect only once" do
+ conn = create_connection(site, :verify => verify, :redirect_limit => 1)
+
+ pool = expects_redirection(conn)
+ pool.expects(:with_connection).with(other_site, anything).once
- subject.get("/foo").body.should == :body
- subject.port.should == other_port
- subject.address.should == other_host
+ expects_limit_exceeded(conn)
end
- it "should raise an error after too many redirections" do
- Net::HTTPResponse.stubs(:read_new).returns(httpredirection)
+ it "should raise an exception when the redirect limit is exceeded" do
+ conn = create_connection(site, :verify => verify, :redirect_limit => 3)
- expect {
- subject.get("/foo")
- }.to raise_error(Puppet::Network::HTTP::RedirectionLimitExceededException)
+ pool = expects_redirection(conn)
+ pool.expects(:with_connection).with(other_site, anything).times(3)
+
+ expects_limit_exceeded(conn)
end
end
it "allows setting basic auth on get requests" do
expect_request_with_basic_auth
subject.get('/path', nil, :basic_auth => { :user => 'user', :password => 'password' })
end
it "allows setting basic auth on post requests" do
expect_request_with_basic_auth
subject.post('/path', 'data', nil, :basic_auth => { :user => 'user', :password => 'password' })
end
it "allows setting basic auth on head requests" do
expect_request_with_basic_auth
subject.head('/path', nil, :basic_auth => { :user => 'user', :password => 'password' })
end
it "allows setting basic auth on delete requests" do
expect_request_with_basic_auth
subject.delete('/path', nil, :basic_auth => { :user => 'user', :password => 'password' })
end
it "allows setting basic auth on put requests" do
expect_request_with_basic_auth
subject.put('/path', 'data', nil, :basic_auth => { :user => 'user', :password => 'password' })
end
def expect_request_with_basic_auth
Net::HTTP.any_instance.expects(:request).with do |request|
expect(request['authorization']).to match(/^Basic/)
end.returns(httpok)
end
end
diff --git a/spec/unit/network/http/factory_spec.rb b/spec/unit/network/http/factory_spec.rb
new file mode 100755
index 000000000..107ededcd
--- /dev/null
+++ b/spec/unit/network/http/factory_spec.rb
@@ -0,0 +1,82 @@
+#! /usr/bin/env ruby
+require 'spec_helper'
+require 'puppet/network/http'
+
+describe Puppet::Network::HTTP::Factory do
+ before :each do
+ Puppet::SSL::Key.indirection.terminus_class = :memory
+ Puppet::SSL::CertificateRequest.indirection.terminus_class = :memory
+ end
+
+ let(:site) { Puppet::Network::HTTP::Site.new('https', 'www.example.com', 443) }
+ def create_connection(site)
+ factory = Puppet::Network::HTTP::Factory.new
+
+ factory.create_connection(site)
+ end
+
+ it 'creates a connection for the site' do
+ conn = create_connection(site)
+
+ expect(conn.use_ssl?).to be_true
+ expect(conn.address).to eq(site.host)
+ expect(conn.port).to eq(site.port)
+ end
+
+ it 'creates a connection that has not yet been started' do
+ conn = create_connection(site)
+
+ expect(conn).to_not be_started
+ end
+
+ it 'creates a connection supporting at least HTTP 1.1' do
+ conn = create_connection(site)
+
+ expect(conn.class.version_1_1? || conn.class.version_1_2?).to be_true
+ end
+
+ context "proxy settings" do
+ let(:proxy_host) { 'myhost' }
+ let(:proxy_port) { 432 }
+
+ it "should not set a proxy if the value is 'none'" do
+ Puppet[:http_proxy_host] = 'none'
+ conn = create_connection(site)
+
+ expect(conn.proxy_address).to be_nil
+ end
+
+ it 'sets proxy_address' do
+ Puppet[:http_proxy_host] = proxy_host
+ conn = create_connection(site)
+
+ expect(conn.proxy_address).to eq(proxy_host)
+ end
+
+ it 'sets proxy address and port' do
+ Puppet[:http_proxy_host] = proxy_host
+ Puppet[:http_proxy_port] = proxy_port
+ conn = create_connection(site)
+
+ expect(conn.proxy_port).to eq(proxy_port)
+ end
+
+ context 'socket timeouts' do
+ let(:timeout) { 5 }
+
+ it 'sets open timeout' do
+ Puppet[:configtimeout] = timeout
+ conn = create_connection(site)
+
+ expect(conn.open_timeout).to eq(timeout)
+ end
+
+ it 'sets read timeout' do
+ Puppet[:configtimeout] = timeout
+ conn = create_connection(site)
+
+ expect(conn.read_timeout).to eq(timeout)
+ end
+ end
+ end
+end
diff --git a/spec/unit/network/http/handler_spec.rb b/spec/unit/network/http/handler_spec.rb
index 345818b4a..25df9d270 100755
--- a/spec/unit/network/http/handler_spec.rb
+++ b/spec/unit/network/http/handler_spec.rb
@@ -1,222 +1,232 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet/indirector_testing'
require 'puppet/network/authorization'
require 'puppet/network/authentication'
require 'puppet/network/http'
describe Puppet::Network::HTTP::Handler do
before :each do
Puppet::IndirectorTesting.indirection.terminus_class = :memory
end
let(:indirection) { Puppet::IndirectorTesting.indirection }
def a_request(method = "HEAD", path = "/production/#{indirection.name}/unknown")
{
:accept_header => "pson",
:content_type_header => "text/yaml",
:http_method => method,
:path => path,
:params => {},
:client_cert => nil,
:headers => {},
:body => nil
}
end
let(:handler) { TestingHandler.new() }
describe "the HTTP Handler" do
def respond(text)
lambda { |req, res| res.respond_with(200, "text/plain", text) }
end
it "hands the request to the first route that matches the request path" do
handler = TestingHandler.new(
Puppet::Network::HTTP::Route.path(%r{^/foo}).get(respond("skipped")),
Puppet::Network::HTTP::Route.path(%r{^/vtest}).get(respond("used")),
Puppet::Network::HTTP::Route.path(%r{^/vtest/foo}).get(respond("ignored")))
req = a_request("GET", "/vtest/foo")
res = {}
handler.process(req, res)
expect(res[:body]).to eq("used")
end
it "raises an error if multiple routes with the same path regex are registered" do
expect do
handler = TestingHandler.new(
Puppet::Network::HTTP::Route.path(%r{^/foo}).get(respond("ignored")),
Puppet::Network::HTTP::Route.path(%r{^/foo}).post(respond("also ignored")))
end.to raise_error(ArgumentError)
end
it "raises an HTTP not found error if no routes match" do
handler = TestingHandler.new
req = a_request("GET", "/vtest/foo")
res = {}
handler.process(req, res)
res_body = JSON(res[:body])
expect(res[:content_type_header]).to eq("application/json")
expect(res_body["issue_kind"]).to eq("HANDLER_NOT_FOUND")
expect(res_body["message"]).to eq("Not Found: No route for GET /vtest/foo")
expect(res[:status]).to eq(404)
end
it "returns a structured error response with a stacktrace when the server encounters an internal error" do
handler = TestingHandler.new(
Puppet::Network::HTTP::Route.path(/.*/).get(lambda { |_, _| raise Exception.new("the sky is falling!")}))
req = a_request("GET", "/vtest/foo")
res = {}
handler.process(req, res)
res_body = JSON(res[:body])
expect(res[:content_type_header]).to eq("application/json")
expect(res_body["issue_kind"]).to eq(Puppet::Network::HTTP::Issues::RUNTIME_ERROR.to_s)
expect(res_body["message"]).to eq("Server Error: the sky is falling!")
expect(res_body["stacktrace"].is_a?(Array) && !res_body["stacktrace"].empty?).to be_true
expect(res_body["stacktrace"][0]).to match("spec/unit/network/http/handler_spec.rb")
expect(res[:status]).to eq(500)
end
end
describe "when processing a request" do
let(:response) do
{ :status => 200 }
end
before do
handler.stubs(:check_authorization)
handler.stubs(:warn_if_near_expiration)
end
- it "should check the client certificate for upcoming expiration" do
- request = a_request
- cert = mock 'cert'
- handler.expects(:client_cert).returns(cert).with(request)
- handler.expects(:warn_if_near_expiration).with(cert)
-
- handler.process(request, response)
- end
-
it "should setup a profiler when the puppet-profiling header exists" do
request = a_request
request[:headers][Puppet::Network::HTTP::HEADER_ENABLE_PROFILING.downcase] = "true"
- handler.process(request, response)
+ p = HandlerTestProfiler.new
+
+ Puppet::Util::Profiler.expects(:add_profiler).with { |profiler|
+ profiler.is_a? Puppet::Util::Profiler::WallClock
+ }.returns(p)
- Puppet::Util::Profiler.current.should be_kind_of(Puppet::Util::Profiler::WallClock)
+ Puppet::Util::Profiler.expects(:remove_profiler).with { |profiler|
+ profiler == p
+ }
+
+ handler.process(request, response)
end
it "should not setup profiler when the profile parameter is missing" do
request = a_request
request[:params] = { }
- handler.process(request, response)
+ Puppet::Util::Profiler.expects(:add_profiler).never
- Puppet::Util::Profiler.current.should == Puppet::Util::Profiler::NONE
+ handler.process(request, response)
end
it "should raise an error if the request is formatted in an unknown format" do
handler.stubs(:content_type_header).returns "unknown format"
lambda { handler.request_format(request) }.should raise_error
end
it "should still find the correct format if content type contains charset information" do
request = Puppet::Network::HTTP::Request.new({ 'content-type' => "text/plain; charset=UTF-8" },
{}, 'GET', '/', nil)
request.format.should == "s"
end
it "should deserialize YAML parameters" do
params = {'my_param' => [1,2,3].to_yaml}
decoded_params = handler.send(:decode_params, params)
decoded_params.should == {:my_param => [1,2,3]}
end
it "should ignore tags on YAML parameters" do
params = {'my_param' => "--- !ruby/object:Array {}"}
decoded_params = handler.send(:decode_params, params)
decoded_params[:my_param].should be_a(Hash)
end
end
describe "when resolving node" do
it "should use a look-up from the ip address" do
Resolv.expects(:getname).with("1.2.3.4").returns("host.domain.com")
handler.resolve_node(:ip => "1.2.3.4")
end
it "should return the look-up result" do
Resolv.stubs(:getname).with("1.2.3.4").returns("host.domain.com")
handler.resolve_node(:ip => "1.2.3.4").should == "host.domain.com"
end
it "should return the ip address if resolving fails" do
Resolv.stubs(:getname).with("1.2.3.4").raises(RuntimeError, "no such host")
handler.resolve_node(:ip => "1.2.3.4").should == "1.2.3.4"
end
end
class TestingHandler
include Puppet::Network::HTTP::Handler
def initialize(* routes)
register(routes)
end
def set_content_type(response, format)
response[:content_type_header] = format
end
def set_response(response, body, status = 200)
response[:body] = body
response[:status] = status
end
def http_method(request)
request[:http_method]
end
def path(request)
request[:path]
end
def params(request)
request[:params]
end
def client_cert(request)
request[:client_cert]
end
def body(request)
request[:body]
end
def headers(request)
request[:headers] || {}
end
end
+
+ class HandlerTestProfiler
+ def start(metric, description)
+ end
+
+ def finish(context, metric, description)
+ end
+
+ def shutdown()
+ end
+ end
end
diff --git a/spec/unit/network/http/nocache_pool_spec.rb b/spec/unit/network/http/nocache_pool_spec.rb
new file mode 100755
index 000000000..69e2d2e9a
--- /dev/null
+++ b/spec/unit/network/http/nocache_pool_spec.rb
@@ -0,0 +1,43 @@
+#! /usr/bin/env ruby
+require 'spec_helper'
+
+require 'puppet/network/http'
+require 'puppet/network/http/connection'
+
+describe Puppet::Network::HTTP::NoCachePool do
+ let(:site) { Puppet::Network::HTTP::Site.new('https', 'rubygems.org', 443) }
+ let(:verify) { stub('verify', :setup_connection => nil) }
+
+ it 'yields a connection' do
+ http = stub('http')
+
+ factory = Puppet::Network::HTTP::Factory.new
+ factory.stubs(:create_connection).returns(http)
+ pool = Puppet::Network::HTTP::NoCachePool.new(factory)
+
+ expect { |b|
+ pool.with_connection(site, verify, &b)
+ }.to yield_with_args(http)
+ end
+
+ it 'yields a new connection each time' do
+ http1 = stub('http1')
+ http2 = stub('http2')
+
+ factory = Puppet::Network::HTTP::Factory.new
+ factory.stubs(:create_connection).returns(http1).then.returns(http2)
+ pool = Puppet::Network::HTTP::NoCachePool.new(factory)
+
+ expect { |b|
+ pool.with_connection(site, verify, &b)
+ }.to yield_with_args(http1)
+
+ expect { |b|
+ pool.with_connection(site, verify, &b)
+ }.to yield_with_args(http2)
+ end
+
+ it 'has a close method' do
+ Puppet::Network::HTTP::NoCachePool.new.close
+ end
+end
diff --git a/spec/unit/network/http/pool_spec.rb b/spec/unit/network/http/pool_spec.rb
new file mode 100755
index 000000000..aef100953
--- /dev/null
+++ b/spec/unit/network/http/pool_spec.rb
@@ -0,0 +1,269 @@
+#! /usr/bin/env ruby
+require 'spec_helper'
+
+require 'openssl'
+require 'puppet/network/http'
+require 'puppet/network/http_pool'
+
+describe Puppet::Network::HTTP::Pool do
+ before :each do
+ Puppet::SSL::Key.indirection.terminus_class = :memory
+ Puppet::SSL::CertificateRequest.indirection.terminus_class = :memory
+ end
+
+ let(:site) do
+ Puppet::Network::HTTP::Site.new('https', 'rubygems.org', 443)
+ end
+
+ let(:different_site) do
+ Puppet::Network::HTTP::Site.new('https', 'github.com', 443)
+ end
+
+ let(:verify) do
+ stub('verify', :setup_connection => nil)
+ end
+
+ def create_pool
+ Puppet::Network::HTTP::Pool.new
+ end
+
+ def create_pool_with_connections(site, *connections)
+ pool = Puppet::Network::HTTP::Pool.new
+ connections.each do |conn|
+ pool.release(site, conn)
+ end
+ pool
+ end
+
+ def create_pool_with_expired_connections(site, *connections)
+ # setting keepalive timeout to -1 ensures any newly added
+ # connections have already expired
+ pool = Puppet::Network::HTTP::Pool.new(-1)
+ connections.each do |conn|
+ pool.release(site, conn)
+ end
+ pool
+ end
+
+ def create_connection(site)
+ stub(site.addr, :started? => false, :start => nil, :finish => nil, :use_ssl? => true, :verify_mode => OpenSSL::SSL::VERIFY_PEER)
+ end
+
+ context 'when yielding a connection' do
+ it 'yields a connection' do
+ conn = create_connection(site)
+ pool = create_pool_with_connections(site, conn)
+
+ expect { |b|
+ pool.with_connection(site, verify, &b)
+ }.to yield_with_args(conn)
+ end
+
+ it 'returns the connection to the pool' do
+ conn = create_connection(site)
+ pool = create_pool
+ pool.release(site, conn)
+
+ pool.with_connection(site, verify) { |c| }
+
+ expect(pool.pool[site].first.connection).to eq(conn)
+ end
+
+ it 'can yield multiple connections to the same site' do
+ lru_conn = create_connection(site)
+ mru_conn = create_connection(site)
+ pool = create_pool_with_connections(site, lru_conn, mru_conn)
+
+ pool.with_connection(site, verify) do |a|
+ expect(a).to eq(mru_conn)
+
+ pool.with_connection(site, verify) do |b|
+ expect(b).to eq(lru_conn)
+ end
+ end
+ end
+
+ it 'propagates exceptions' do
+ conn = create_connection(site)
+ pool = create_pool
+ pool.release(site, conn)
+
+ expect {
+ pool.with_connection(site, verify) do |c|
+ raise IOError, 'connection reset'
+ end
+ }.to raise_error(IOError, 'connection reset')
+ end
+
+ it 'does not re-cache connections when an error occurs' do
+ # we're not distinguishing between network errors that would
+ # suggest we close the socket, and other errors
+ conn = create_connection(site)
+ pool = create_pool
+ pool.release(site, conn)
+
+ pool.expects(:release).with(site, conn).never
+
+ pool.with_connection(site, verify) do |c|
+ raise IOError, 'connection reset'
+ end rescue nil
+ end
+
+ context 'when releasing connections' do
+ it 'releases HTTP connections' do
+ conn = create_connection(site)
+ conn.expects(:use_ssl?).returns(false)
+
+ pool = create_pool_with_connections(site, conn)
+ pool.expects(:release).with(site, conn)
+
+ pool.with_connection(site, verify) {|c| }
+ end
+
+ it 'releases secure HTTPS connections' do
+ conn = create_connection(site)
+ conn.expects(:use_ssl?).returns(true)
+ conn.expects(:verify_mode).returns(OpenSSL::SSL::VERIFY_PEER)
+
+ pool = create_pool_with_connections(site, conn)
+ pool.expects(:release).with(site, conn)
+
+ pool.with_connection(site, verify) {|c| }
+ end
+
+ it 'closes insecure HTTPS connections' do
+ conn = create_connection(site)
+ conn.expects(:use_ssl?).returns(true)
+ conn.expects(:verify_mode).returns(OpenSSL::SSL::VERIFY_NONE)
+
+ pool = create_pool_with_connections(site, conn)
+
+ pool.expects(:release).with(site, conn).never
+
+ pool.with_connection(site, verify) {|c| }
+ end
+ end
+ end
+
+ context 'when borrowing' do
+ it 'returns a new connection if the pool is empty' do
+ conn = create_connection(site)
+ pool = create_pool
+ pool.factory.expects(:create_connection).with(site).returns(conn)
+
+ expect(pool.borrow(site, verify)).to eq(conn)
+ end
+
+ it 'returns a matching connection' do
+ conn = create_connection(site)
+ pool = create_pool_with_connections(site, conn)
+
+ pool.factory.expects(:create_connection).never
+
+ expect(pool.borrow(site, verify)).to eq(conn)
+ end
+
+ it 'returns a new connection if there are no matching sites' do
+ different_conn = create_connection(different_site)
+ pool = create_pool_with_connections(different_site, different_conn)
+
+ conn = create_connection(site)
+ pool.factory.expects(:create_connection).with(site).returns(conn)
+
+ expect(pool.borrow(site, verify)).to eq(conn)
+ end
+
+ it 'returns started connections' do
+ conn = create_connection(site)
+ conn.expects(:start)
+
+ pool = create_pool
+ pool.factory.expects(:create_connection).with(site).returns(conn)
+
+ expect(pool.borrow(site, verify)).to eq(conn)
+ end
+
+ it "doesn't start a cached connection" do
+ conn = create_connection(site)
+ conn.expects(:start).never
+
+ pool = create_pool_with_connections(site, conn)
+ pool.borrow(site, verify)
+ end
+
+ it 'returns the most recently used connection from the pool' do
+ least_recently_used = create_connection(site)
+ most_recently_used = create_connection(site)
+
+ pool = create_pool_with_connections(site, least_recently_used, most_recently_used)
+ expect(pool.borrow(site, verify)).to eq(most_recently_used)
+ end
+
+ it 'finishes expired connections' do
+ conn = create_connection(site)
+ conn.expects(:finish)
+
+ pool = create_pool_with_expired_connections(site, conn)
+ pool.factory.expects(:create_connection => stub('conn', :start => nil))
+
+ pool.borrow(site, verify)
+ end
+
+ it 'logs an exception if it fails to close an expired connection' do
+ Puppet.expects(:log_exception).with(is_a(IOError), "Failed to close connection for #{site}: read timeout")
+
+ conn = create_connection(site)
+ conn.expects(:finish).raises(IOError, 'read timeout')
+
+ pool = create_pool_with_expired_connections(site, conn)
+ pool.factory.expects(:create_connection => stub('open_conn', :start => nil))
+
+ pool.borrow(site, verify)
+ end
+ end
+
+ context 'when releasing a connection' do
+ it 'adds the connection to an empty pool' do
+ conn = create_connection(site)
+
+ pool = create_pool
+ pool.release(site, conn)
+
+ expect(pool.pool[site].first.connection).to eq(conn)
+ end
+
+ it 'adds the connection to a pool with a connection for the same site' do
+ pool = create_pool
+ pool.release(site, create_connection(site))
+ pool.release(site, create_connection(site))
+
+ expect(pool.pool[site].count).to eq(2)
+ end
+
+ it 'adds the connection to a pool with a connection for a different site' do
+ pool = create_pool
+ pool.release(site, create_connection(site))
+ pool.release(different_site, create_connection(different_site))
+
+ expect(pool.pool[site].count).to eq(1)
+ expect(pool.pool[different_site].count).to eq(1)
+ end
+ end
+
+ context 'when closing' do
+ it 'clears the pool' do
+ pool = create_pool
+ pool.close
+
+ expect(pool.pool).to be_empty
+ end
+
+ it 'closes all cached connections' do
+ conn = create_connection(site)
+ conn.expects(:finish)
+
+ pool = create_pool_with_connections(site, conn)
+ pool.close
+ end
+ end
+end
diff --git a/spec/unit/network/http/rack/rest_spec.rb b/spec/unit/network/http/rack/rest_spec.rb
index 165b6ceb9..c35b789a2 100755
--- a/spec/unit/network/http/rack/rest_spec.rb
+++ b/spec/unit/network/http/rack/rest_spec.rb
@@ -1,316 +1,316 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet/network/http/rack' if Puppet.features.rack?
require 'puppet/network/http/rack/rest'
describe "Puppet::Network::HTTP::RackREST", :if => Puppet.features.rack? do
it "should include the Puppet::Network::HTTP::Handler module" do
Puppet::Network::HTTP::RackREST.ancestors.should be_include(Puppet::Network::HTTP::Handler)
end
describe "when serving a request" do
before :all do
@model_class = stub('indirected model class')
Puppet::Indirector::Indirection.stubs(:model).with(:foo).returns(@model_class)
- @handler = Puppet::Network::HTTP::RackREST.new(:handler => :foo)
end
before :each do
@response = Rack::Response.new
+ @handler = Puppet::Network::HTTP::RackREST.new(:handler => :foo)
end
def mk_req(uri, opts = {})
env = Rack::MockRequest.env_for(uri, opts)
Rack::Request.new(env)
end
let(:minimal_certificate) do
cert = OpenSSL::X509::Certificate.new
cert.version = 2
cert.serial = 0
cert.not_before = Time.now
cert.not_after = Time.now + 3600
cert.public_key = OpenSSL::PKey::RSA.new(512)
cert.subject = OpenSSL::X509::Name.parse("/CN=testing")
cert
end
describe "#headers" do
it "should return the headers (parsed from env with prefix 'HTTP_')" do
req = mk_req('/', {'HTTP_Accept' => 'myaccept',
'HTTP_X_Custom_Header' => 'mycustom',
'NOT_HTTP_foo' => 'not an http header'})
@handler.headers(req).should == {"accept" => 'myaccept',
"x-custom-header" => 'mycustom',
"content-type" => nil }
end
end
describe "and using the HTTP Handler interface" do
it "should return the CONTENT_TYPE parameter as the content type header" do
req = mk_req('/', 'CONTENT_TYPE' => 'mycontent')
@handler.headers(req)['content-type'].should == "mycontent"
end
it "should use the REQUEST_METHOD as the http method" do
req = mk_req('/', :method => 'MYMETHOD')
@handler.http_method(req).should == "MYMETHOD"
end
it "should return the request path as the path" do
req = mk_req('/foo/bar')
@handler.path(req).should == "/foo/bar"
end
it "should return the request body as the body" do
req = mk_req('/foo/bar', :input => 'mybody')
@handler.body(req).should == "mybody"
end
it "should return the an Puppet::SSL::Certificate instance as the client_cert" do
req = mk_req('/foo/bar', 'SSL_CLIENT_CERT' => minimal_certificate.to_pem)
expect(@handler.client_cert(req).content.to_pem).to eq(minimal_certificate.to_pem)
end
it "returns nil when SSL_CLIENT_CERT is empty" do
req = mk_req('/foo/bar', 'SSL_CLIENT_CERT' => '')
@handler.client_cert(req).should be_nil
end
it "should set the response's content-type header when setting the content type" do
@header = mock 'header'
@response.expects(:header).returns @header
@header.expects(:[]=).with('Content-Type', "mytype")
@handler.set_content_type(@response, "mytype")
end
it "should set the status and write the body when setting the response for a request" do
@response.expects(:status=).with(400)
@response.expects(:write).with("mybody")
@handler.set_response(@response, "mybody", 400)
end
describe "when result is a File" do
before :each do
stat = stub 'stat', :size => 100
@file = stub 'file', :stat => stat, :path => "/tmp/path"
@file.stubs(:is_a?).with(File).returns(true)
end
it "should set the Content-Length header as a string" do
@response.expects(:[]=).with("Content-Length", '100')
@handler.set_response(@response, @file, 200)
end
it "should return a RackFile adapter as body" do
@response.expects(:body=).with { |val| val.is_a?(Puppet::Network::HTTP::RackREST::RackFile) }
@handler.set_response(@response, @file, 200)
end
end
it "should ensure the body has been read on success" do
req = mk_req('/production/report/foo', :method => 'PUT')
req.body.expects(:read).at_least_once
Puppet::Transaction::Report.stubs(:save)
@handler.process(req, @response)
end
it "should ensure the body has been partially read on failure" do
req = mk_req('/production/report/foo')
req.body.expects(:read).with(1)
@handler.stubs(:headers).raises(Exception)
@handler.process(req, @response)
end
end
describe "and determining the request parameters" do
it "should include the HTTP request parameters, with the keys as symbols" do
req = mk_req('/?foo=baz&bar=xyzzy')
result = @handler.params(req)
result[:foo].should == "baz"
result[:bar].should == "xyzzy"
end
it "should return multi-values params as an array of the values" do
req = mk_req('/?foo=baz&foo=xyzzy')
result = @handler.params(req)
result[:foo].should == ["baz", "xyzzy"]
end
it "should return parameters from the POST body" do
req = mk_req("/", :method => 'POST', :input => 'foo=baz&bar=xyzzy')
result = @handler.params(req)
result[:foo].should == "baz"
result[:bar].should == "xyzzy"
end
it "should not return multi-valued params in a POST body as an array of values" do
req = mk_req("/", :method => 'POST', :input => 'foo=baz&foo=xyzzy')
result = @handler.params(req)
result[:foo].should be_one_of("baz", "xyzzy")
end
it "should CGI-decode the HTTP parameters" do
encoding = CGI.escape("foo bar")
req = mk_req("/?foo=#{encoding}")
result = @handler.params(req)
result[:foo].should == "foo bar"
end
it "should convert the string 'true' to the boolean" do
req = mk_req("/?foo=true")
result = @handler.params(req)
result[:foo].should be_true
end
it "should convert the string 'false' to the boolean" do
req = mk_req("/?foo=false")
result = @handler.params(req)
result[:foo].should be_false
end
it "should convert integer arguments to Integers" do
req = mk_req("/?foo=15")
result = @handler.params(req)
result[:foo].should == 15
end
it "should convert floating point arguments to Floats" do
req = mk_req("/?foo=1.5")
result = @handler.params(req)
result[:foo].should == 1.5
end
it "should YAML-load and CGI-decode values that are YAML-encoded" do
escaping = CGI.escape(YAML.dump(%w{one two}))
req = mk_req("/?foo=#{escaping}")
result = @handler.params(req)
result[:foo].should == %w{one two}
end
it "should not allow the client to set the node via the query string" do
req = mk_req("/?node=foo")
@handler.params(req)[:node].should be_nil
end
it "should not allow the client to set the IP address via the query string" do
req = mk_req("/?ip=foo")
@handler.params(req)[:ip].should be_nil
end
it "should pass the client's ip address to model find" do
req = mk_req("/", 'REMOTE_ADDR' => 'ipaddress')
@handler.params(req)[:ip].should == "ipaddress"
end
it "should set 'authenticated' to false if no certificate is present" do
req = mk_req('/')
@handler.params(req)[:authenticated].should be_false
end
end
describe "with pre-validated certificates" do
it "should retrieve the hostname by finding the CN given in :ssl_client_header, in the format returned by Apache (RFC2253)" do
Puppet[:ssl_client_header] = "myheader"
req = mk_req('/', "myheader" => "O=Foo\\, Inc,CN=host.domain.com")
@handler.params(req)[:node].should == "host.domain.com"
end
it "should retrieve the hostname by finding the CN given in :ssl_client_header, in the format returned by nginx" do
Puppet[:ssl_client_header] = "myheader"
req = mk_req('/', "myheader" => "/CN=host.domain.com")
@handler.params(req)[:node].should == "host.domain.com"
end
it "should retrieve the hostname by finding the CN given in :ssl_client_header, ignoring other fields" do
Puppet[:ssl_client_header] = "myheader"
req = mk_req('/', "myheader" => 'ST=Denial,CN=host.domain.com,O=Domain\\, Inc.')
@handler.params(req)[:node].should == "host.domain.com"
end
it "should use the :ssl_client_header to determine the parameter for checking whether the host certificate is valid" do
Puppet[:ssl_client_header] = "certheader"
Puppet[:ssl_client_verify_header] = "myheader"
req = mk_req('/', "myheader" => "SUCCESS", "certheader" => "CN=host.domain.com")
@handler.params(req)[:authenticated].should be_true
end
it "should consider the host unauthenticated if the validity parameter does not contain 'SUCCESS'" do
Puppet[:ssl_client_header] = "certheader"
Puppet[:ssl_client_verify_header] = "myheader"
req = mk_req('/', "myheader" => "whatever", "certheader" => "CN=host.domain.com")
@handler.params(req)[:authenticated].should be_false
end
it "should consider the host unauthenticated if no certificate information is present" do
Puppet[:ssl_client_header] = "certheader"
Puppet[:ssl_client_verify_header] = "myheader"
req = mk_req('/', "myheader" => nil, "certheader" => "CN=host.domain.com")
@handler.params(req)[:authenticated].should be_false
end
it "should resolve the node name with an ip address look-up if no certificate is present" do
Puppet[:ssl_client_header] = "myheader"
req = mk_req('/', "myheader" => nil)
@handler.expects(:resolve_node).returns("host.domain.com")
@handler.params(req)[:node].should == "host.domain.com"
end
it "should resolve the node name with an ip address look-up if a certificate without a CN is present" do
Puppet[:ssl_client_header] = "myheader"
req = mk_req('/', "myheader" => "O=no CN")
@handler.expects(:resolve_node).returns("host.domain.com")
@handler.params(req)[:node].should == "host.domain.com"
end
it "should not allow authentication via the verify header if there is no CN available" do
Puppet[:ssl_client_header] = "dn_header"
Puppet[:ssl_client_verify_header] = "verify_header"
req = mk_req('/', "dn_header" => "O=no CN", "verify_header" => 'SUCCESS')
@handler.expects(:resolve_node).returns("host.domain.com")
@handler.params(req)[:authenticated].should be_false
end
end
end
end
describe Puppet::Network::HTTP::RackREST::RackFile do
before(:each) do
stat = stub 'stat', :size => 100
@file = stub 'file', :stat => stat, :path => "/tmp/path"
@rackfile = Puppet::Network::HTTP::RackREST::RackFile.new(@file)
end
it "should have an each method" do
@rackfile.should be_respond_to(:each)
end
it "should yield file chunks by chunks" do
@file.expects(:read).times(3).with(8192).returns("1", "2", nil)
i = 1
@rackfile.each do |chunk|
chunk.to_i.should == i
i += 1
end
end
it "should have a close method" do
@rackfile.should be_respond_to(:close)
end
it "should delegate close to File close" do
@file.expects(:close)
@rackfile.close
end
end
diff --git a/spec/unit/network/http/session_spec.rb b/spec/unit/network/http/session_spec.rb
new file mode 100755
index 000000000..4eba67d7d
--- /dev/null
+++ b/spec/unit/network/http/session_spec.rb
@@ -0,0 +1,43 @@
+#! /usr/bin/env ruby
+require 'spec_helper'
+
+require 'puppet/network/http'
+
+describe Puppet::Network::HTTP::Session do
+ let(:connection) { stub('connection') }
+
+ def create_session(connection, expiration_time = nil)
+ expiration_time ||= Time.now + 60 * 60
+
+ Puppet::Network::HTTP::Session.new(connection, expiration_time)
+ end
+
+ it 'provides access to its connection' do
+ session = create_session(connection)
+
+ session.connection.should == connection
+ end
+
+ it 'expires a connection whose expiration time is in the past' do
+ now = Time.now
+ past = now - 1
+
+ session = create_session(connection, past)
+ session.expired?(now).should be_true
+ end
+
+ it 'expires a connection whose expiration time is now' do
+ now = Time.now
+
+ session = create_session(connection, now)
+ session.expired?(now).should be_true
+ end
+
+ it 'does not expire a connection whose expiration time is in the future' do
+ now = Time.now
+ future = now + 1
+
+ session = create_session(connection, future)
+ session.expired?(now).should be_false
+ end
+end
diff --git a/spec/unit/network/http/site_spec.rb b/spec/unit/network/http/site_spec.rb
new file mode 100755
index 000000000..06fcbf83d
--- /dev/null
+++ b/spec/unit/network/http/site_spec.rb
@@ -0,0 +1,90 @@
+#! /usr/bin/env ruby
+require 'spec_helper'
+
+require 'puppet/network/http'
+
+describe Puppet::Network::HTTP::Site do
+ let(:scheme) { 'https' }
+ let(:host) { 'rubygems.org' }
+ let(:port) { 443 }
+
+ def create_site(scheme, host, port)
+ Puppet::Network::HTTP::Site.new(scheme, host, port)
+ end
+
+ it 'accepts scheme, host, and port' do
+ site = create_site(scheme, host, port)
+
+ expect(site.scheme).to eq(scheme)
+ expect(site.host).to eq(host)
+ expect(site.port).to eq(port)
+ end
+
+ it 'generates an external URI string' do
+ site = create_site(scheme, host, port)
+
+ expect(site.addr).to eq("https://rubygems.org:443")
+ end
+
+ it 'considers sites to be different when the scheme is different' do
+ https_site = create_site('https', host, port)
+ http_site = create_site('http', host, port)
+
+ expect(https_site).to_not eq(http_site)
+ end
+
+ it 'considers sites to be different when the host is different' do
+ rubygems_site = create_site(scheme, 'rubygems.org', port)
+ github_site = create_site(scheme, 'github.com', port)
+
+ expect(rubygems_site).to_not eq(github_site)
+ end
+
+ it 'considers sites to be different when the port is different' do
+ site_443 = create_site(scheme, host, 443)
+ site_80 = create_site(scheme, host, 80)
+
+ expect(site_443).to_not eq(site_80)
+ end
+
+ it 'compares values when determining equality' do
+ site = create_site(scheme, host, port)
+
+ sites = {}
+ sites[site] = site
+
+ another_site = create_site(scheme, host, port)
+
+ expect(sites.include?(another_site)).to be_true
+ end
+
+ it 'computes the same hash code for equivalent objects' do
+ site = create_site(scheme, host, port)
+ same_site = create_site(scheme, host, port)
+
+ expect(site.hash).to eq(same_site.hash)
+ end
+
+ it 'uses ssl with https' do
+ site = create_site('https', host, port)
+
+ expect(site).to be_use_ssl
+ end
+
+ it 'does not use ssl with http' do
+ site = create_site('http', host, port)
+
+ expect(site).to_not be_use_ssl
+ end
+
+ it 'moves to a new URI location' do
+ site = create_site('http', 'host1', 80)
+
+ uri = URI.parse('https://host2:443/some/where/else')
+ new_site = site.move_to(uri)
+
+ expect(new_site.scheme).to eq('https')
+ expect(new_site.host).to eq('host2')
+ expect(new_site.port).to eq(443)
+ end
+end
diff --git a/spec/unit/network/http/webrick_spec.rb b/spec/unit/network/http/webrick_spec.rb
index 17f61e339..edeb439a9 100755
--- a/spec/unit/network/http/webrick_spec.rb
+++ b/spec/unit/network/http/webrick_spec.rb
@@ -1,271 +1,271 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet/network/http'
require 'puppet/network/http/webrick'
describe Puppet::Network::HTTP::WEBrick, "after initializing" do
it "should not be listening" do
Puppet::Network::HTTP::WEBrick.new.should_not be_listening
end
end
describe Puppet::Network::HTTP::WEBrick do
include PuppetSpec::Files
let(:address) { '127.0.0.1' }
let(:port) { 31337 }
let(:server) do
s = Puppet::Network::HTTP::WEBrick.new
s.stubs(:setup_logger).returns(Hash.new)
s.stubs(:setup_ssl).returns(Hash.new)
s
end
let(:mock_webrick) do
stub('webrick',
:[] => {},
:listeners => [],
:status => :Running,
:mount => nil,
:start => nil,
:shutdown => nil)
end
before :each do
WEBrick::HTTPServer.stubs(:new).returns(mock_webrick)
end
describe "when turning on listening" do
it "should fail if already listening" do
server.listen(address, port)
expect { server.listen(address, port) }.to raise_error(RuntimeError, /server is already listening/)
end
it "should tell webrick to listen on the specified address and port" do
WEBrick::HTTPServer.expects(:new).with(
has_entries(:Port => 31337, :BindAddress => "127.0.0.1")
).returns(mock_webrick)
server.listen(address, port)
end
it "should not perform reverse lookups" do
WEBrick::HTTPServer.expects(:new).with(
has_entry(:DoNotReverseLookup => true)
).returns(mock_webrick)
BasicSocket.expects(:do_not_reverse_lookup=).with(true)
server.listen(address, port)
end
it "should configure a logger for webrick" do
server.expects(:setup_logger).returns(:Logger => :mylogger)
WEBrick::HTTPServer.expects(:new).with {|args|
args[:Logger] == :mylogger
}.returns(mock_webrick)
server.listen(address, port)
end
it "should configure SSL for webrick" do
server.expects(:setup_ssl).returns(:Ssl => :testing, :Other => :yay)
WEBrick::HTTPServer.expects(:new).with {|args|
args[:Ssl] == :testing and args[:Other] == :yay
}.returns(mock_webrick)
server.listen(address, port)
end
it "should be listening" do
server.listen(address, port)
server.should be_listening
end
describe "when the REST protocol is requested" do
it "should register the REST handler at /" do
# We don't care about the options here.
mock_webrick.expects(:mount).with("/", Puppet::Network::HTTP::WEBrickREST, anything)
server.listen(address, port)
end
end
end
describe "when turning off listening" do
it "should fail unless listening" do
expect { server.unlisten }.to raise_error(RuntimeError, /server is not listening/)
end
it "should order webrick server to stop" do
mock_webrick.expects(:shutdown)
server.listen(address, port)
server.unlisten
end
it "should no longer be listening" do
server.listen(address, port)
server.unlisten
server.should_not be_listening
end
end
describe "when configuring an http logger" do
let(:server) { Puppet::Network::HTTP::WEBrick.new }
before :each do
Puppet.settings.stubs(:use)
@filehandle = stub 'handle', :fcntl => nil, :sync= => nil
File.stubs(:open).returns @filehandle
end
it "should use the settings for :main, :ssl, and :application" do
Puppet.settings.expects(:use).with(:main, :ssl, :application)
server.setup_logger
end
- it "should use the masterlog if the run_mode is master" do
+ it "should use the masterhttplog if the run_mode is master" do
Puppet.run_mode.stubs(:master?).returns(true)
log = make_absolute("/master/log")
Puppet[:masterhttplog] = log
File.expects(:open).with(log, "a+").returns @filehandle
server.setup_logger
end
it "should use the httplog if the run_mode is not master" do
Puppet.run_mode.stubs(:master?).returns(false)
log = make_absolute("/other/log")
Puppet[:httplog] = log
File.expects(:open).with(log, "a+").returns @filehandle
server.setup_logger
end
describe "and creating the logging filehandle" do
it "should set the close-on-exec flag if supported" do
if defined? Fcntl::FD_CLOEXEC
@filehandle.expects(:fcntl).with(Fcntl::F_SETFD, Fcntl::FD_CLOEXEC)
else
@filehandle.expects(:fcntl).never
end
server.setup_logger
end
it "should sync the filehandle" do
@filehandle.expects(:sync=).with(true)
server.setup_logger
end
end
it "should create a new WEBrick::Log instance with the open filehandle" do
WEBrick::Log.expects(:new).with(@filehandle)
server.setup_logger
end
it "should set debugging if the current loglevel is :debug" do
Puppet::Util::Log.expects(:level).returns :debug
WEBrick::Log.expects(:new).with { |handle, debug| debug == WEBrick::Log::DEBUG }
server.setup_logger
end
it "should return the logger as the main log" do
logger = mock 'logger'
WEBrick::Log.expects(:new).returns logger
server.setup_logger[:Logger].should == logger
end
it "should return the logger as the access log using both the Common and Referer log format" do
logger = mock 'logger'
WEBrick::Log.expects(:new).returns logger
server.setup_logger[:AccessLog].should == [
[logger, WEBrick::AccessLog::COMMON_LOG_FORMAT],
[logger, WEBrick::AccessLog::REFERER_LOG_FORMAT]
]
end
end
describe "when configuring ssl" do
let(:server) { Puppet::Network::HTTP::WEBrick.new }
let(:localcacert) { make_absolute("/ca/crt") }
let(:ssl_server_ca_auth) { make_absolute("/ca/ssl_server_auth_file") }
let(:key) { stub 'key', :content => "mykey" }
let(:cert) { stub 'cert', :content => "mycert" }
let(:host) { stub 'host', :key => key, :certificate => cert, :name => "yay", :ssl_store => "mystore" }
before :each do
Puppet::SSL::Certificate.indirection.stubs(:find).with('ca').returns cert
Puppet::SSL::Host.stubs(:localhost).returns host
end
it "should use the key from the localhost SSL::Host instance" do
Puppet::SSL::Host.expects(:localhost).returns host
host.expects(:key).returns key
server.setup_ssl[:SSLPrivateKey].should == "mykey"
end
it "should configure the certificate" do
server.setup_ssl[:SSLCertificate].should == "mycert"
end
it "should fail if no CA certificate can be found" do
Puppet::SSL::Certificate.indirection.stubs(:find).with('ca').returns nil
expect { server.setup_ssl }.to raise_error(Puppet::Error, /Could not find CA certificate/)
end
it "should specify the path to the CA certificate" do
Puppet.settings[:hostcrl] = 'false'
Puppet.settings[:localcacert] = localcacert
server.setup_ssl[:SSLCACertificateFile].should == localcacert
end
it "should specify the path to the CA certificate" do
Puppet.settings[:hostcrl] = 'false'
Puppet.settings[:localcacert] = localcacert
Puppet.settings[:ssl_server_ca_auth] = ssl_server_ca_auth
server.setup_ssl[:SSLCACertificateFile].should == ssl_server_ca_auth
end
it "should start ssl immediately" do
server.setup_ssl[:SSLStartImmediately].should be_true
end
it "should enable ssl" do
server.setup_ssl[:SSLEnable].should be_true
end
it "should reject SSLv2" do
server.setup_ssl[:SSLOptions].should == OpenSSL::SSL::OP_NO_SSLv2
end
it "should configure the verification method as 'OpenSSL::SSL::VERIFY_PEER'" do
server.setup_ssl[:SSLVerifyClient].should == OpenSSL::SSL::VERIFY_PEER
end
it "should add an x509 store" do
host.expects(:ssl_store).returns "mystore"
server.setup_ssl[:SSLCertificateStore].should == "mystore"
end
it "should set the certificate name to 'nil'" do
server.setup_ssl[:SSLCertName].should be_nil
end
end
end
diff --git a/spec/unit/network/http_pool_spec.rb b/spec/unit/network/http_pool_spec.rb
index d8b84232e..a9c5783f2 100755
--- a/spec/unit/network/http_pool_spec.rb
+++ b/spec/unit/network/http_pool_spec.rb
@@ -1,95 +1,98 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet/network/http_pool'
describe Puppet::Network::HttpPool do
before :each do
Puppet::SSL::Key.indirection.terminus_class = :memory
Puppet::SSL::CertificateRequest.indirection.terminus_class = :memory
end
describe "when managing http instances" do
-
it "should return an http instance created with the passed host and port" do
http = Puppet::Network::HttpPool.http_instance("me", 54321)
http.should be_an_instance_of Puppet::Network::HTTP::Connection
http.address.should == 'me'
http.port.should == 54321
end
it "should support using an alternate http client implementation" do
begin
class FooClient
def initialize(host, port, options = {})
@host = host
@port = port
end
attr_reader :host, :port
end
orig_class = Puppet::Network::HttpPool.http_client_class
Puppet::Network::HttpPool.http_client_class = FooClient
http = Puppet::Network::HttpPool.http_instance("me", 54321)
http.should be_an_instance_of FooClient
http.host.should == 'me'
http.port.should == 54321
ensure
Puppet::Network::HttpPool.http_client_class = orig_class
end
end
it "should enable ssl on the http instance by default" do
Puppet::Network::HttpPool.http_instance("me", 54321).should be_use_ssl
end
it "can set ssl using an option" do
Puppet::Network::HttpPool.http_instance("me", 54321, false).should_not be_use_ssl
Puppet::Network::HttpPool.http_instance("me", 54321, true).should be_use_ssl
end
-
describe 'peer verification' do
def setup_standard_ssl_configuration
ca_cert_file = File.expand_path('/path/to/ssl/certs/ca_cert.pem')
Puppet[:ssl_client_ca_auth] = ca_cert_file
Puppet::FileSystem.stubs(:exist?).with(ca_cert_file).returns(true)
end
def setup_standard_hostcert
host_cert_file = File.expand_path('/path/to/ssl/certs/host_cert.pem')
Puppet::FileSystem.stubs(:exist?).with(host_cert_file).returns(true)
Puppet[:hostcert] = host_cert_file
end
def setup_standard_ssl_host
cert = stub('cert', :content => 'real_cert')
key = stub('key', :content => 'real_key')
host = stub('host', :certificate => cert, :key => key, :ssl_store => stub('store'))
Puppet::SSL::Host.stubs(:localhost).returns(host)
end
before do
setup_standard_ssl_configuration
setup_standard_hostcert
setup_standard_ssl_host
end
- it 'can enable peer verification' do
- Puppet::Network::HttpPool.http_instance("me", 54321, true, true).send(:connection).verify_mode.should == OpenSSL::SSL::VERIFY_PEER
+ it 'enables peer verification by default' do
+ response = Net::HTTPOK.new('1.1', 200, 'body')
+ conn = Puppet::Network::HttpPool.http_instance("me", 54321, true)
+ conn.expects(:execute_request).with { |http, request| expect(http.verify_mode).to eq(OpenSSL::SSL::VERIFY_PEER) }.returns(response)
+ conn.get('/')
end
it 'can disable peer verification' do
- Puppet::Network::HttpPool.http_instance("me", 54321, true, false).send(:connection).verify_mode.should == OpenSSL::SSL::VERIFY_NONE
+ response = Net::HTTPOK.new('1.1', 200, 'body')
+ conn = Puppet::Network::HttpPool.http_instance("me", 54321, true, false)
+ conn.expects(:execute_request).with { |http, request| expect(http.verify_mode).to eq(OpenSSL::SSL::VERIFY_NONE) }.returns(response)
+ conn.get('/')
end
end
it "should not cache http instances" do
Puppet::Network::HttpPool.http_instance("me", 54321).
should_not equal(Puppet::Network::HttpPool.http_instance("me", 54321))
end
end
-
end
diff --git a/spec/unit/network/http_spec.rb b/spec/unit/network/http_spec.rb
new file mode 100755
index 000000000..4a149d3a8
--- /dev/null
+++ b/spec/unit/network/http_spec.rb
@@ -0,0 +1,10 @@
+#! /usr/bin/env ruby
+require 'spec_helper'
+require 'puppet/network/http'
+
+describe Puppet::Network::HTTP do
+ it 'defines an http_pool context' do
+ pool = Puppet.lookup(:http_pool)
+ expect(pool).to be_a(Puppet::Network::HTTP::NoCachePool)
+ end
+end
diff --git a/spec/unit/node/environment_spec.rb b/spec/unit/node/environment_spec.rb
index a0688559e..3df5d329c 100755
--- a/spec/unit/node/environment_spec.rb
+++ b/spec/unit/node/environment_spec.rb
@@ -1,527 +1,581 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'tmpdir'
require 'puppet/node/environment'
require 'puppet/util/execution'
require 'puppet_spec/modules'
require 'puppet/parser/parser_factory'
describe Puppet::Node::Environment do
let(:env) { Puppet::Node::Environment.new("testing") }
include PuppetSpec::Files
after do
Puppet::Node::Environment.clear
end
shared_examples_for 'the environment' do
it "should use the filetimeout for the ttl for the module list" do
Puppet::Node::Environment.attr_ttl(:modules).should == Integer(Puppet[:filetimeout])
end
it "should use the default environment if no name is provided while initializing an environment" do
Puppet[:environment] = "one"
Puppet::Node::Environment.new.name.should == :one
end
it "should treat environment instances as singletons" do
Puppet::Node::Environment.new("one").should equal(Puppet::Node::Environment.new("one"))
end
it "should treat an environment specified as names or strings as equivalent" do
Puppet::Node::Environment.new(:one).should equal(Puppet::Node::Environment.new("one"))
end
it "should return its name when converted to a string" do
Puppet::Node::Environment.new(:one).to_s.should == "one"
end
it "should just return any provided environment if an environment is provided as the name" do
one = Puppet::Node::Environment.new(:one)
Puppet::Node::Environment.new(one).should equal(one)
end
describe "equality" do
it "works as a hash key" do
base = Puppet::Node::Environment.create(:first, ["modules"], "manifests")
same = Puppet::Node::Environment.create(:first, ["modules"], "manifests")
different = Puppet::Node::Environment.create(:first, ["different"], "manifests")
hash = {}
hash[base] = "base env"
hash[same] = "same env"
hash[different] = "different env"
expect(hash[base]).to eq("same env")
expect(hash[different]).to eq("different env")
expect(hash).to have(2).item
end
it "is equal when name, modules, and manifests are the same" do
base = Puppet::Node::Environment.create(:base, ["modules"], "manifests")
different_name = Puppet::Node::Environment.create(:different, base.full_modulepath, base.manifest)
expect(base).to_not eq("not an environment")
expect(base).to eq(base)
expect(base.hash).to eq(base.hash)
expect(base.override_with(:modulepath => ["different"])).to_not eq(base)
expect(base.override_with(:modulepath => ["different"]).hash).to_not eq(base.hash)
expect(base.override_with(:manifest => "different")).to_not eq(base)
expect(base.override_with(:manifest => "different").hash).to_not eq(base.hash)
expect(different_name).to_not eq(base)
expect(different_name.hash).to_not eq(base.hash)
end
end
describe "overriding an existing environment" do
let(:original_path) { [tmpdir('original')] }
let(:new_path) { [tmpdir('new')] }
let(:environment) { Puppet::Node::Environment.create(:overridden, original_path, 'orig.pp', '/config/script') }
it "overrides modulepath" do
overridden = environment.override_with(:modulepath => new_path)
expect(overridden).to_not be_equal(environment)
expect(overridden.name).to eq(:overridden)
expect(overridden.manifest).to eq(File.expand_path('orig.pp'))
expect(overridden.modulepath).to eq(new_path)
expect(overridden.config_version).to eq('/config/script')
end
it "overrides manifest" do
overridden = environment.override_with(:manifest => 'new.pp')
expect(overridden).to_not be_equal(environment)
expect(overridden.name).to eq(:overridden)
expect(overridden.manifest).to eq(File.expand_path('new.pp'))
expect(overridden.modulepath).to eq(original_path)
expect(overridden.config_version).to eq('/config/script')
end
it "overrides config_version" do
overridden = environment.override_with(:config_version => '/new/script')
expect(overridden).to_not be_equal(environment)
expect(overridden.name).to eq(:overridden)
expect(overridden.manifest).to eq(File.expand_path('orig.pp'))
expect(overridden.modulepath).to eq(original_path)
expect(overridden.config_version).to eq('/new/script')
end
end
describe "watching a file" do
let(:filename) { "filename" }
it "accepts a File" do
file = tmpfile(filename)
env.known_resource_types.expects(:watch_file).with(file.to_s)
env.watch_file(file)
end
it "accepts a String" do
env.known_resource_types.expects(:watch_file).with(filename)
env.watch_file(filename)
end
end
describe "when managing known resource types" do
before do
@collection = Puppet::Resource::TypeCollection.new(env)
env.stubs(:perform_initial_import).returns(Puppet::Parser::AST::Hostclass.new(''))
end
it "should create a resource type collection if none exists" do
Puppet::Resource::TypeCollection.expects(:new).with(env).returns @collection
env.known_resource_types.should equal(@collection)
end
it "should reuse any existing resource type collection" do
env.known_resource_types.should equal(env.known_resource_types)
end
it "should perform the initial import when creating a new collection" do
env.expects(:perform_initial_import).returns(Puppet::Parser::AST::Hostclass.new(''))
env.known_resource_types
end
it "should return the same collection even if stale if it's the same thread" do
Puppet::Resource::TypeCollection.stubs(:new).returns @collection
env.known_resource_types.stubs(:stale?).returns true
env.known_resource_types.should equal(@collection)
end
it "should generate a new TypeCollection if the current one requires reparsing" do
old_type_collection = env.known_resource_types
old_type_collection.stubs(:require_reparse?).returns true
env.check_for_reparse
new_type_collection = env.known_resource_types
new_type_collection.should be_a Puppet::Resource::TypeCollection
new_type_collection.should_not equal(old_type_collection)
end
end
it "should validate the modulepath directories" do
real_file = tmpdir('moduledir')
path = %W[/one /two #{real_file}].join(File::PATH_SEPARATOR)
Puppet[:modulepath] = path
env.modulepath.should == [real_file]
end
it "should prefix the value of the 'PUPPETLIB' environment variable to the module path if present" do
first_puppetlib = tmpdir('puppetlib1')
second_puppetlib = tmpdir('puppetlib2')
first_moduledir = tmpdir('moduledir1')
second_moduledir = tmpdir('moduledir2')
Puppet::Util.withenv("PUPPETLIB" => [first_puppetlib, second_puppetlib].join(File::PATH_SEPARATOR)) do
Puppet[:modulepath] = [first_moduledir, second_moduledir].join(File::PATH_SEPARATOR)
env.modulepath.should == [first_puppetlib, second_puppetlib, first_moduledir, second_moduledir]
end
end
+ it "does not register conflicting_manifest_settings? when not using directory environments" do
+ expect(Puppet::Node::Environment.create(:directory, [], '/some/non/default/manifest.pp').conflicting_manifest_settings?).to be_false
+ end
+
+ describe "when operating in the context of directory environments" do
+ before(:each) do
+ Puppet[:environmentpath] = "$confdir/environments"
+ Puppet[:default_manifest] = "/default/manifests/site.pp"
+ end
+
+ it "has no conflicting_manifest_settings? when disable_per_environment_manifest is false" do
+ expect(Puppet::Node::Environment.create(:directory, [], '/some/non/default/manifest.pp').conflicting_manifest_settings?).to be_false
+ end
+
+ context "when disable_per_environment_manifest is true" do
+ let(:config) { mock('config') }
+ let(:global_modulepath) { ["/global/modulepath"] }
+ let(:envconf) { Puppet::Settings::EnvironmentConf.new("/some/direnv", config, global_modulepath) }
+
+ before(:each) do
+ Puppet[:disable_per_environment_manifest] = true
+ end
+
+ def assert_manifest_conflict(expectation, envconf_manifest_value)
+ config.expects(:setting).with(:manifest).returns(
+ mock('setting', :value => envconf_manifest_value)
+ )
+ environment = Puppet::Node::Environment.create(:directory, [], '/default/manifests/site.pp')
+ loader = Puppet::Environments::Static.new(environment)
+ loader.stubs(:get_conf).returns(envconf)
+
+ Puppet.override(:environments => loader) do
+ expect(environment.conflicting_manifest_settings?).to eq(expectation)
+ end
+ end
+
+ it "has conflicting_manifest_settings when environment.conf manifest was set" do
+ assert_manifest_conflict(true, '/some/envconf/manifest/site.pp')
+ end
+
+ it "does not have conflicting_manifest_settings when environment.conf manifest is empty" do
+ assert_manifest_conflict(false, '')
+ end
+
+ it "does not have conflicting_manifest_settings when environment.conf manifest is nil" do
+ assert_manifest_conflict(false, nil)
+ end
+
+ it "does not have conflicting_manifest_settings when environment.conf manifest is an exact, uninterpolated match of default_manifest" do
+ assert_manifest_conflict(false, '/default/manifests/site.pp')
+ end
+ end
+ end
+
describe "when modeling a specific environment" do
it "should have a method for returning the environment name" do
Puppet::Node::Environment.new("testing").name.should == :testing
end
it "should provide an array-like accessor method for returning any environment-specific setting" do
env.should respond_to(:[])
end
it "obtains its core values from the puppet settings instance as a legacy env" do
Puppet.settings.parse_config(<<-CONF)
[testing]
manifest = /some/manifest
modulepath = /some/modulepath
config_version = /some/script
CONF
env = Puppet::Node::Environment.new("testing")
expect(env.full_modulepath).to eq([File.expand_path('/some/modulepath')])
expect(env.manifest).to eq(File.expand_path('/some/manifest'))
expect(env.config_version).to eq('/some/script')
end
it "should ask the Puppet settings instance for the setting qualified with the environment name" do
Puppet.settings.parse_config(<<-CONF)
[testing]
server = myval
CONF
env[:server].should == "myval"
end
it "should be able to return an individual module that exists in its module path" do
env.stubs(:modules).returns [Puppet::Module.new('one', "/one", mock("env"))]
mod = env.module('one')
mod.should be_a(Puppet::Module)
mod.name.should == 'one'
end
it "should not return a module if the module doesn't exist" do
env.stubs(:modules).returns [Puppet::Module.new('one', "/one", mock("env"))]
env.module('two').should be_nil
end
it "should return nil if asked for a module that does not exist in its path" do
modpath = tmpdir('modpath')
env = Puppet::Node::Environment.create(:testing, [modpath])
env.module("one").should be_nil
end
describe "module data" do
before do
dir = tmpdir("deep_path")
@first = File.join(dir, "first")
@second = File.join(dir, "second")
Puppet[:modulepath] = "#{@first}#{File::PATH_SEPARATOR}#{@second}"
FileUtils.mkdir_p(@first)
FileUtils.mkdir_p(@second)
end
describe "#modules_by_path" do
it "should return an empty list if there are no modules" do
env.modules_by_path.should == {
@first => [],
@second => []
}
end
it "should include modules even if they exist in multiple dirs in the modulepath" do
modpath1 = File.join(@first, "foo")
FileUtils.mkdir_p(modpath1)
modpath2 = File.join(@second, "foo")
FileUtils.mkdir_p(modpath2)
env.modules_by_path.should == {
@first => [Puppet::Module.new('foo', modpath1, env)],
@second => [Puppet::Module.new('foo', modpath2, env)]
}
end
it "should ignore modules with invalid names" do
FileUtils.mkdir_p(File.join(@first, 'foo'))
FileUtils.mkdir_p(File.join(@first, 'foo2'))
FileUtils.mkdir_p(File.join(@first, 'foo-bar'))
FileUtils.mkdir_p(File.join(@first, 'foo_bar'))
FileUtils.mkdir_p(File.join(@first, 'foo=bar'))
FileUtils.mkdir_p(File.join(@first, 'foo bar'))
FileUtils.mkdir_p(File.join(@first, 'foo.bar'))
FileUtils.mkdir_p(File.join(@first, '-foo'))
FileUtils.mkdir_p(File.join(@first, 'foo-'))
FileUtils.mkdir_p(File.join(@first, 'foo--bar'))
env.modules_by_path[@first].collect{|mod| mod.name}.sort.should == %w{foo foo-bar foo2 foo_bar}
end
end
describe "#module_requirements" do
it "should return a list of what modules depend on other modules" do
PuppetSpec::Modules.create(
'foo',
@first,
:metadata => {
:author => 'puppetlabs',
:dependencies => [{ 'name' => 'puppetlabs/bar', "version_requirement" => ">= 1.0.0" }]
}
)
PuppetSpec::Modules.create(
'bar',
@second,
:metadata => {
:author => 'puppetlabs',
:dependencies => [{ 'name' => 'puppetlabs/foo', "version_requirement" => "<= 2.0.0" }]
}
)
PuppetSpec::Modules.create(
'baz',
@first,
:metadata => {
:author => 'puppetlabs',
:dependencies => [{ 'name' => 'puppetlabs-bar', "version_requirement" => "3.0.0" }]
}
)
PuppetSpec::Modules.create(
'alpha',
@first,
:metadata => {
:author => 'puppetlabs',
:dependencies => [{ 'name' => 'puppetlabs/bar', "version_requirement" => "~3.0.0" }]
}
)
env.module_requirements.should == {
'puppetlabs/alpha' => [],
'puppetlabs/foo' => [
{
"name" => "puppetlabs/bar",
"version" => "9.9.9",
"version_requirement" => "<= 2.0.0"
}
],
'puppetlabs/bar' => [
{
"name" => "puppetlabs/alpha",
"version" => "9.9.9",
"version_requirement" => "~3.0.0"
},
{
"name" => "puppetlabs/baz",
"version" => "9.9.9",
"version_requirement" => "3.0.0"
},
{
"name" => "puppetlabs/foo",
"version" => "9.9.9",
"version_requirement" => ">= 1.0.0"
}
],
'puppetlabs/baz' => []
}
end
end
describe ".module_by_forge_name" do
it "should find modules by forge_name" do
mod = PuppetSpec::Modules.create(
'baz',
@first,
:metadata => {:author => 'puppetlabs'},
:environment => env
)
env.module_by_forge_name('puppetlabs/baz').should == mod
end
it "should not find modules with same name by the wrong author" do
mod = PuppetSpec::Modules.create(
'baz',
@first,
:metadata => {:author => 'sneakylabs'},
:environment => env
)
env.module_by_forge_name('puppetlabs/baz').should == nil
end
it "should return nil when the module can't be found" do
env.module_by_forge_name('ima/nothere').should be_nil
end
end
describe ".modules" do
it "should return an empty list if there are no modules" do
env.modules.should == []
end
it "should return a module named for every directory in each module path" do
%w{foo bar}.each do |mod_name|
FileUtils.mkdir_p(File.join(@first, mod_name))
end
%w{bee baz}.each do |mod_name|
FileUtils.mkdir_p(File.join(@second, mod_name))
end
env.modules.collect{|mod| mod.name}.sort.should == %w{foo bar bee baz}.sort
end
it "should remove duplicates" do
FileUtils.mkdir_p(File.join(@first, 'foo'))
FileUtils.mkdir_p(File.join(@second, 'foo'))
env.modules.collect{|mod| mod.name}.sort.should == %w{foo}
end
it "should ignore modules with invalid names" do
FileUtils.mkdir_p(File.join(@first, 'foo'))
FileUtils.mkdir_p(File.join(@first, 'foo2'))
FileUtils.mkdir_p(File.join(@first, 'foo-bar'))
FileUtils.mkdir_p(File.join(@first, 'foo_bar'))
FileUtils.mkdir_p(File.join(@first, 'foo=bar'))
FileUtils.mkdir_p(File.join(@first, 'foo bar'))
env.modules.collect{|mod| mod.name}.sort.should == %w{foo foo-bar foo2 foo_bar}
end
it "should create modules with the correct environment" do
FileUtils.mkdir_p(File.join(@first, 'foo'))
env.modules.each {|mod| mod.environment.should == env }
end
end
end
end
describe "when performing initial import" do
def parser_and_environment(name)
env = Puppet::Node::Environment.new(name)
parser = Puppet::Parser::ParserFactory.parser(env)
Puppet::Parser::ParserFactory.stubs(:parser).returns(parser)
[parser, env]
end
it "should set the parser's string to the 'code' setting and parse if code is available" do
Puppet[:code] = "my code"
parser, env = parser_and_environment('testing')
parser.expects(:string=).with "my code"
parser.expects(:parse)
env.instance_eval { perform_initial_import }
end
it "should set the parser's file to the 'manifest' setting and parse if no code is available and the manifest is available" do
filename = tmpfile('myfile')
Puppet[:manifest] = filename
parser, env = parser_and_environment('testing')
parser.expects(:file=).with filename
parser.expects(:parse)
env.instance_eval { perform_initial_import }
end
it "should pass the manifest file to the parser even if it does not exist on disk" do
filename = tmpfile('myfile')
Puppet[:code] = ""
Puppet[:manifest] = filename
parser, env = parser_and_environment('testing')
parser.expects(:file=).with(filename).once
parser.expects(:parse).once
env.instance_eval { perform_initial_import }
end
it "should fail helpfully if there is an error importing" do
Puppet::FileSystem.stubs(:exist?).returns true
parser, env = parser_and_environment('testing')
parser.expects(:file=).once
parser.expects(:parse).raises ArgumentError
expect do
env.known_resource_types
end.to raise_error(Puppet::Error)
end
it "should not do anything if the ignore_import settings is set" do
Puppet[:ignoreimport] = true
parser, env = parser_and_environment('testing')
parser.expects(:string=).never
parser.expects(:file=).never
parser.expects(:parse).never
env.instance_eval { perform_initial_import }
end
it "should mark the type collection as needing a reparse when there is an error parsing" do
parser, env = parser_and_environment('testing')
parser.expects(:parse).raises Puppet::ParseError.new("Syntax error at ...")
expect do
env.known_resource_types
end.to raise_error(Puppet::Error, /Syntax error at .../)
env.known_resource_types.require_reparse?.should be_true
end
end
end
describe 'with classic parser' do
before :each do
Puppet[:parser] = 'current'
end
it_behaves_like 'the environment'
end
describe 'with future parser' do
before :each do
Puppet[:parser] = 'future'
end
it_behaves_like 'the environment'
end
describe '#current' do
it 'should return the current context' do
env = Puppet::Node::Environment.new(:test)
Puppet::Context.any_instance.expects(:lookup).with(:current_environment).returns(env)
Puppet.expects(:deprecation_warning).once
Puppet::Node::Environment.current.should equal(env)
end
end
end
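The module_requirements examples earlier in this spec match "version_requirement" strings such as ">= 1.0.0" and "<= 2.0.0" against installed module versions. As a minimal standalone sketch of that kind of range check, using Ruby's stdlib Gem::Requirement rather than Puppet's own module and version classes:

require 'rubygems' # Gem::Requirement and Gem::Version ship with Ruby

# Hypothetical data shaped like the metadata in the spec above.
requirements = {
  'puppetlabs/bar' => '>= 1.0.0',
  'puppetlabs/foo' => '<= 2.0.0'
}
installed = { 'puppetlabs/bar' => '9.9.9', 'puppetlabs/foo' => '9.9.9' }

requirements.each do |name, range|
  satisfied = Gem::Requirement.new(range).satisfied_by?(Gem::Version.new(installed[name]))
  puts "#{name} #{range} against #{installed[name]}: #{satisfied}"
end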
diff --git a/spec/unit/node_spec.rb b/spec/unit/node_spec.rb
index 85ec6e127..2de2b8279 100755
--- a/spec/unit/node_spec.rb
+++ b/spec/unit/node_spec.rb
@@ -1,323 +1,321 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'matchers/json'
describe Puppet::Node do
include JSONMatchers
let(:environment) { Puppet::Node::Environment.create(:bar, []) }
let(:env_loader) { Puppet::Environments::Static.new(environment) }
it "should register its document type as Node" do
PSON.registered_document_types["Node"].should equal(Puppet::Node)
end
describe "when managing its environment" do
it "should use any set environment" do
Puppet.override(:environments => env_loader) do
Puppet::Node.new("foo", :environment => "bar").environment.should == environment
end
end
it "should support providing an actual environment instance" do
Puppet::Node.new("foo", :environment => environment).environment.name.should == :bar
end
it "should determine its environment from its parameters if no environment is set" do
Puppet.override(:environments => env_loader) do
Puppet::Node.new("foo", :parameters => {"environment" => :bar}).environment.should == environment
end
end
it "should use the configured environment if no environment is provided" do
Puppet[:environment] = environment.name.to_s
Puppet.override(:environments => env_loader) do
Puppet::Node.new("foo").environment.should == environment
end
end
it "should allow the environment to be set after initialization" do
node = Puppet::Node.new("foo")
node.environment = :bar
node.environment.name.should == :bar
end
it "should allow its environment to be set by parameters after initialization" do
node = Puppet::Node.new("foo")
node.parameters["environment"] = :bar
node.environment.name.should == :bar
end
end
it "can survive a round-trip through YAML" do
facts = Puppet::Node::Facts.new("hello", "one" => "c", "two" => "b")
node = Puppet::Node.new("hello",
:environment => 'kjhgrg',
:classes => ['erth', 'aiu'],
:parameters => {"hostname"=>"food"}
)
new_node = Puppet::Node.convert_from('yaml', node.render('yaml'))
new_node.environment.should == node.environment
new_node.parameters.should == node.parameters
new_node.classes.should == node.classes
new_node.name.should == node.name
end
it "can round-trip through pson" do
facts = Puppet::Node::Facts.new("hello", "one" => "c", "two" => "b")
node = Puppet::Node.new("hello",
:environment => 'kjhgrg',
:classes => ['erth', 'aiu'],
:parameters => {"hostname"=>"food"}
)
new_node = Puppet::Node.convert_from('pson', node.render('pson'))
new_node.environment.should == node.environment
new_node.parameters.should == node.parameters
new_node.classes.should == node.classes
new_node.name.should == node.name
end
it "validates against the node json schema", :unless => Puppet.features.microsoft_windows? do
facts = Puppet::Node::Facts.new("hello", "one" => "c", "two" => "b")
node = Puppet::Node.new("hello",
:environment => 'kjhgrg',
:classes => ['erth', 'aiu'],
:parameters => {"hostname"=>"food"}
)
expect(node.to_pson).to validate_against('api/schemas/node.json')
end
it "when missing optional parameters validates against the node json schema", :unless => Puppet.features.microsoft_windows? do
facts = Puppet::Node::Facts.new("hello", "one" => "c", "two" => "b")
node = Puppet::Node.new("hello",
:environment => 'kjhgrg'
)
expect(node.to_pson).to validate_against('api/schemas/node.json')
end
describe "when converting to json" do
before do
@node = Puppet::Node.new("mynode")
end
it "should provide its name" do
@node.should set_json_attribute('name').to("mynode")
end
it "should produce a hash with the document_type set to 'Node'" do
@node.should set_json_document_type_to("Node")
end
it "should include the classes if set" do
@node.classes = %w{a b c}
@node.should set_json_attribute("classes").to(%w{a b c})
end
it "should not include the classes if there are none" do
@node.should_not set_json_attribute('classes')
end
it "should include parameters if set" do
@node.parameters = {"a" => "b", "c" => "d"}
@node.should set_json_attribute('parameters').to({"a" => "b", "c" => "d"})
end
it "should not include the parameters if there are none" do
@node.should_not set_json_attribute('parameters')
end
it "should include the environment" do
@node.environment = "production"
@node.should set_json_attribute('environment').to('production')
end
end
describe "when converting from json" do
before do
@node = Puppet::Node.new("mynode")
@format = Puppet::Network::FormatHandler.format('pson')
end
def from_json(json)
@format.intern(Puppet::Node, json)
end
it "should set its name" do
Puppet::Node.should read_json_attribute('name').from(@node.to_pson).as("mynode")
end
it "should include the classes if set" do
@node.classes = %w{a b c}
Puppet::Node.should read_json_attribute('classes').from(@node.to_pson).as(%w{a b c})
end
it "should include parameters if set" do
@node.parameters = {"a" => "b", "c" => "d"}
Puppet::Node.should read_json_attribute('parameters').from(@node.to_pson).as({"a" => "b", "c" => "d"})
end
- it "should include the environment" do
- Puppet.override(:environments => env_loader) do
- @node.environment = environment
- Puppet::Node.should read_json_attribute('environment').from(@node.to_pson).as(environment)
- end
+ it "deserializes environment to environment_name as a string" do
+ @node.environment = environment
+ Puppet::Node.should read_json_attribute('environment_name').from(@node.to_pson).as('bar')
end
end
end
describe Puppet::Node, "when initializing" do
before do
@node = Puppet::Node.new("testnode")
end
it "should set the node name" do
@node.name.should == "testnode"
end
it "should not allow nil node names" do
proc { Puppet::Node.new(nil) }.should raise_error(ArgumentError)
end
it "should default to an empty parameter hash" do
@node.parameters.should == {}
end
it "should default to an empty class array" do
@node.classes.should == []
end
it "should note its creation time" do
@node.time.should be_instance_of(Time)
end
it "should accept parameters passed in during initialization" do
params = {"a" => "b"}
@node = Puppet::Node.new("testing", :parameters => params)
@node.parameters.should == params
end
it "should accept classes passed in during initialization" do
classes = %w{one two}
@node = Puppet::Node.new("testing", :classes => classes)
@node.classes.should == classes
end
it "should always return classes as an array" do
@node = Puppet::Node.new("testing", :classes => "myclass")
@node.classes.should == ["myclass"]
end
end
describe Puppet::Node, "when merging facts" do
before do
@node = Puppet::Node.new("testnode")
Puppet::Node::Facts.indirection.stubs(:find).with(@node.name, instance_of(Hash)).returns(Puppet::Node::Facts.new(@node.name, "one" => "c", "two" => "b"))
end
it "should fail intelligently if it cannot find facts" do
Puppet::Node::Facts.indirection.expects(:find).with(@node.name, instance_of(Hash)).raises "foo"
lambda { @node.fact_merge }.should raise_error(Puppet::Error)
end
it "should prefer parameters already set on the node over facts from the node" do
@node = Puppet::Node.new("testnode", :parameters => {"one" => "a"})
@node.fact_merge
@node.parameters["one"].should == "a"
end
it "should add passed parameters to the parameter list" do
@node = Puppet::Node.new("testnode", :parameters => {"one" => "a"})
@node.fact_merge
@node.parameters["two"].should == "b"
end
it "should accept arbitrary parameters to merge into its parameters" do
@node = Puppet::Node.new("testnode", :parameters => {"one" => "a"})
@node.merge "two" => "three"
@node.parameters["two"].should == "three"
end
it "should add the environment to the list of parameters" do
Puppet[:environment] = "one"
@node = Puppet::Node.new("testnode", :environment => "one")
@node.merge "two" => "three"
@node.parameters["environment"].should == "one"
end
it "should not set the environment if it is already set in the parameters" do
Puppet[:environment] = "one"
@node = Puppet::Node.new("testnode", :environment => "one")
@node.merge "environment" => "two"
@node.parameters["environment"].should == "two"
end
end
describe Puppet::Node, "when indirecting" do
it "should default to the 'plain' node terminus" do
Puppet::Node.indirection.reset_terminus_class
Puppet::Node.indirection.terminus_class.should == :plain
end
end
describe Puppet::Node, "when generating the list of names to search through" do
before do
@node = Puppet::Node.new("foo.domain.com", :parameters => {"hostname" => "yay", "domain" => "domain.com"})
end
it "should return an array of names" do
@node.names.should be_instance_of(Array)
end
describe "and the node name is fully qualified" do
it "should contain an entry for each part of the node name" do
@node.names.should be_include("foo.domain.com")
@node.names.should be_include("foo.domain")
@node.names.should be_include("foo")
end
end
it "should include the node's fqdn" do
@node.names.should be_include("yay.domain.com")
end
it "should combine and include the node's hostname and domain if no fqdn is available" do
@node.names.should be_include("yay.domain.com")
end
it "should contain an entry for each name available by stripping a segment of the fqdn" do
@node.parameters["fqdn"] = "foo.deep.sub.domain.com"
@node.names.should be_include("foo.deep.sub.domain")
@node.names.should be_include("foo.deep.sub")
end
describe "and :node_name is set to 'cert'" do
before do
Puppet[:strict_hostname_checking] = false
Puppet[:node_name] = "cert"
end
it "should use the passed-in key as the first value" do
@node.names[0].should == "foo.domain.com"
end
describe "and strict hostname checking is enabled" do
it "should only use the passed-in key" do
Puppet[:strict_hostname_checking] = true
@node.names.should == ["foo.domain.com"]
end
end
end
describe "and :node_name is set to 'facter'" do
before do
Puppet[:strict_hostname_checking] = false
Puppet[:node_name] = "facter"
end
it "should use the node's 'hostname' fact as the first value" do
@node.names[0].should == "yay"
end
end
end
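The node specs above serialize a node through YAML and PSON and then compare name, classes, parameters and environment on the way back out. A minimal standalone sketch of that round-trip pattern, using only Ruby's stdlib YAML and JSON rather than Puppet's format handlers:

require 'yaml'
require 'json'

# A plain-hash stand-in for the node attributes the specs compare.
node = {
  'name'       => 'hello',
  'classes'    => ['erth', 'aiu'],
  'parameters' => { 'hostname' => 'food' }
}

from_yaml = YAML.load(YAML.dump(node))
raise 'YAML round trip changed the data' unless from_yaml == node

from_json = JSON.parse(JSON.generate(node))
raise 'JSON round trip changed the data' unless from_json == node

puts 'round trips preserved name, classes and parameters'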
diff --git a/spec/unit/parser/compiler_spec.rb b/spec/unit/parser/compiler_spec.rb
index d1e855a20..0ca01f521 100755
--- a/spec/unit/parser/compiler_spec.rb
+++ b/spec/unit/parser/compiler_spec.rb
@@ -1,903 +1,910 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet_spec/compiler'
class CompilerTestResource
attr_accessor :builtin, :virtual, :evaluated, :type, :title
def initialize(type, title)
@type = type
@title = title
end
def [](attr)
return nil if attr == :stage
:main
end
def ref
"#{type.to_s.capitalize}[#{title}]"
end
def evaluated?
@evaluated
end
def builtin_type?
@builtin
end
def virtual?
@virtual
end
def class?
false
end
def stage?
false
end
def evaluate
end
def file
"/fake/file/goes/here"
end
def line
"42"
end
end
describe Puppet::Parser::Compiler do
include PuppetSpec::Files
def resource(type, title)
Puppet::Parser::Resource.new(type, title, :scope => @scope)
end
before :each do
# Push me faster, I wanna go back in time! (Specifically, freeze time
# across the test since we have a bunch of version == timestamp code
# hidden away in the implementation and we keep losing the race.)
# --daniel 2011-04-21
now = Time.now
Time.stubs(:now).returns(now)
environment = Puppet::Node::Environment.create(:testing, [])
@node = Puppet::Node.new("testnode",
:facts => Puppet::Node::Facts.new("facts", {}),
:environment => environment)
@known_resource_types = environment.known_resource_types
@compiler = Puppet::Parser::Compiler.new(@node)
@scope = Puppet::Parser::Scope.new(@compiler, :source => stub('source'))
@scope_resource = Puppet::Parser::Resource.new(:file, "/my/file", :scope => @scope)
@scope.resource = @scope_resource
end
it "should have a class method that compiles, converts, and returns a catalog" do
compiler = stub 'compiler'
Puppet::Parser::Compiler.expects(:new).with(@node).returns compiler
catalog = stub 'catalog'
compiler.expects(:compile).returns catalog
converted_catalog = stub 'converted_catalog'
catalog.expects(:to_resource).returns converted_catalog
Puppet::Parser::Compiler.compile(@node).should equal(converted_catalog)
end
it "should fail intelligently when a class-level compile fails" do
Puppet::Parser::Compiler.expects(:new).raises ArgumentError
lambda { Puppet::Parser::Compiler.compile(@node) }.should raise_error(Puppet::Error)
end
it "should use the node's environment as its environment" do
@compiler.environment.should equal(@node.environment)
end
+ it "fails if the node's environment has conflicting manifest settings" do
+ conflicted_environment = Puppet::Node::Environment.create(:testing, [], '/some/environment.conf/manifest.pp')
+ conflicted_environment.stubs(:conflicting_manifest_settings?).returns(true)
+ @node.environment = conflicted_environment
+ expect { Puppet::Parser::Compiler.compile(@node) }.to raise_error(Puppet::Error, /disable_per_environment_manifest.*true.*environment.conf.*manifest.*conflict/)
+ end
+
it "should include the resource type collection helper" do
Puppet::Parser::Compiler.ancestors.should be_include(Puppet::Resource::TypeCollectionHelper)
end
it "should be able to return a class list containing all added classes" do
@compiler.add_class ""
@compiler.add_class "one"
@compiler.add_class "two"
@compiler.classlist.sort.should == %w{one two}.sort
end
it "should clear the global caches before compile" do
compiler = stub 'compiler'
Puppet::Parser::Compiler.expects(:new).with(@node).returns compiler
catalog = stub 'catalog'
compiler.expects(:compile).returns catalog
catalog.expects(:to_resource)
$known_resource_types = "rspec"
$env_module_directories = "rspec"
Puppet::Parser::Compiler.compile(@node)
$known_resource_types = nil
$env_module_directories = nil
end
describe "when initializing" do
it "should set its node attribute" do
@compiler.node.should equal(@node)
end
it "should detect when ast nodes are absent" do
@compiler.ast_nodes?.should be_false
end
it "should detect when ast nodes are present" do
@known_resource_types.expects(:nodes?).returns true
@compiler.ast_nodes?.should be_true
end
it "should copy the known_resource_types version to the catalog" do
@compiler.catalog.version.should == @known_resource_types.version
end
it "should copy any node classes into the class list" do
node = Puppet::Node.new("mynode")
node.classes = %w{foo bar}
compiler = Puppet::Parser::Compiler.new(node)
compiler.classlist.should =~ ['foo', 'bar']
end
it "should transform node class hashes into a class list" do
node = Puppet::Node.new("mynode")
node.classes = {'foo'=>{'one'=>'p1'}, 'bar'=>{'two'=>'p2'}}
compiler = Puppet::Parser::Compiler.new(node)
compiler.classlist.should =~ ['foo', 'bar']
end
it "should add a 'main' stage to the catalog" do
@compiler.catalog.resource(:stage, :main).should be_instance_of(Puppet::Parser::Resource)
end
end
describe "when managing scopes" do
it "should create a top scope" do
@compiler.topscope.should be_instance_of(Puppet::Parser::Scope)
end
it "should be able to create new scopes" do
@compiler.newscope(@compiler.topscope).should be_instance_of(Puppet::Parser::Scope)
end
it "should set the parent scope of the new scope to be the passed-in parent" do
scope = mock 'scope'
newscope = @compiler.newscope(scope)
newscope.parent.should equal(scope)
end
it "should set the parent scope of the new scope to its topscope if the parent passed in is nil" do
scope = mock 'scope'
newscope = @compiler.newscope(nil)
newscope.parent.should equal(@compiler.topscope)
end
end
describe "when compiling" do
def compile_methods
[:set_node_parameters, :evaluate_main, :evaluate_ast_node, :evaluate_node_classes, :evaluate_generators, :fail_on_unevaluated,
:finish, :store, :extract, :evaluate_relationships]
end
# Stub all of the main compile methods except the ones we're specifically interested in.
def compile_stub(*except)
(compile_methods - except).each { |m| @compiler.stubs(m) }
end
it "should set node parameters as variables in the top scope" do
params = {"a" => "b", "c" => "d"}
@node.stubs(:parameters).returns(params)
compile_stub(:set_node_parameters)
@compiler.compile
@compiler.topscope['a'].should == "b"
@compiler.topscope['c'].should == "d"
end
it "should set the client and server versions on the catalog" do
params = {"clientversion" => "2", "serverversion" => "3"}
@node.stubs(:parameters).returns(params)
compile_stub(:set_node_parameters)
@compiler.compile
@compiler.catalog.client_version.should == "2"
@compiler.catalog.server_version.should == "3"
end
it "should evaluate the main class if it exists" do
compile_stub(:evaluate_main)
main_class = @known_resource_types.add Puppet::Resource::Type.new(:hostclass, "")
main_class.expects(:evaluate_code).with { |r| r.is_a?(Puppet::Parser::Resource) }
@compiler.topscope.expects(:source=).with(main_class)
@compiler.compile
end
it "should create a new, empty 'main' if no main class exists" do
compile_stub(:evaluate_main)
@compiler.compile
@known_resource_types.find_hostclass([""], "").should be_instance_of(Puppet::Resource::Type)
end
it "should add an edge between the main stage and main class" do
@compiler.compile
(stage = @compiler.catalog.resource(:stage, "main")).should be_instance_of(Puppet::Parser::Resource)
(klass = @compiler.catalog.resource(:class, "")).should be_instance_of(Puppet::Parser::Resource)
@compiler.catalog.edge?(stage, klass).should be_true
end
it "should evaluate all added collections" do
colls = []
# And when the collections fail to evaluate.
colls << mock("coll1-false")
colls << mock("coll2-false")
colls.each { |c| c.expects(:evaluate).returns(false) }
@compiler.add_collection(colls[0])
@compiler.add_collection(colls[1])
compile_stub(:evaluate_generators)
@compiler.compile
end
it "should ignore builtin resources" do
resource = resource(:file, "testing")
@compiler.add_resource(@scope, resource)
resource.expects(:evaluate).never
@compiler.compile
end
it "should evaluate unevaluated resources" do
resource = CompilerTestResource.new(:file, "testing")
@compiler.add_resource(@scope, resource)
# We have to now mark the resource as evaluated
resource.expects(:evaluate).with { |*whatever| resource.evaluated = true }
@compiler.compile
end
it "should not evaluate already-evaluated resources" do
resource = resource(:file, "testing")
resource.stubs(:evaluated?).returns true
@compiler.add_resource(@scope, resource)
resource.expects(:evaluate).never
@compiler.compile
end
it "should evaluate unevaluated resources created by evaluating other resources" do
resource = CompilerTestResource.new(:file, "testing")
@compiler.add_resource(@scope, resource)
resource2 = CompilerTestResource.new(:file, "other")
# We have to now mark the resource as evaluated
resource.expects(:evaluate).with { |*whatever| resource.evaluated = true; @compiler.add_resource(@scope, resource2) }
resource2.expects(:evaluate).with { |*whatever| resource2.evaluated = true }
@compiler.compile
end
describe "when finishing" do
before do
@compiler.send(:evaluate_main)
@catalog = @compiler.catalog
end
def add_resource(name, parent = nil)
resource = Puppet::Parser::Resource.new "file", name, :scope => @scope
@compiler.add_resource(@scope, resource)
@catalog.add_edge(parent, resource) if parent
resource
end
it "should call finish() on all resources" do
# Add a resource that does respond to :finish
resource = Puppet::Parser::Resource.new "file", "finish", :scope => @scope
resource.expects(:finish)
@compiler.add_resource(@scope, resource)
# And one that does not
dnf_resource = stub_everything "dnf", :ref => "File[dnf]", :type => "file"
@compiler.add_resource(@scope, dnf_resource)
@compiler.send(:finish)
end
it "should call finish() in add_resource order" do
resources = sequence('resources')
resource1 = add_resource("finish1")
resource1.expects(:finish).in_sequence(resources)
resource2 = add_resource("finish2")
resource2.expects(:finish).in_sequence(resources)
@compiler.send(:finish)
end
it "should add each container's metaparams to its contained resources" do
main = @catalog.resource(:class, :main)
main[:noop] = true
resource1 = add_resource("meh", main)
@compiler.send(:finish)
resource1[:noop].should be_true
end
it "should add metaparams recursively" do
main = @catalog.resource(:class, :main)
main[:noop] = true
resource1 = add_resource("meh", main)
resource2 = add_resource("foo", resource1)
@compiler.send(:finish)
resource2[:noop].should be_true
end
it "should prefer metaparams from immediate parents" do
main = @catalog.resource(:class, :main)
main[:noop] = true
resource1 = add_resource("meh", main)
resource2 = add_resource("foo", resource1)
resource1[:noop] = false
@compiler.send(:finish)
resource2[:noop].should be_false
end
it "should merge tags downward" do
main = @catalog.resource(:class, :main)
main.tag("one")
resource1 = add_resource("meh", main)
resource1.tag "two"
resource2 = add_resource("foo", resource1)
@compiler.send(:finish)
resource2.tags.should be_include("one")
resource2.tags.should be_include("two")
end
it "should work if only middle resources have metaparams set" do
main = @catalog.resource(:class, :main)
resource1 = add_resource("meh", main)
resource1[:noop] = true
resource2 = add_resource("foo", resource1)
@compiler.send(:finish)
resource2[:noop].should be_true
end
end
it "should return added resources in add order" do
resource1 = resource(:file, "yay")
@compiler.add_resource(@scope, resource1)
resource2 = resource(:file, "youpi")
@compiler.add_resource(@scope, resource2)
@compiler.resources.should == [resource1, resource2]
end
it "should add resources that do not conflict with existing resources" do
resource = resource(:file, "yay")
@compiler.add_resource(@scope, resource)
@compiler.catalog.should be_vertex(resource)
end
it "should fail to add resources that conflict with existing resources" do
path = make_absolute("/foo")
file1 = resource(:file, path)
file2 = resource(:file, path)
@compiler.add_resource(@scope, file1)
lambda { @compiler.add_resource(@scope, file2) }.should raise_error(Puppet::Resource::Catalog::DuplicateResourceError)
end
it "should add an edge from the scope resource to the added resource" do
resource = resource(:file, "yay")
@compiler.add_resource(@scope, resource)
@compiler.catalog.should be_edge(@scope.resource, resource)
end
it "should not add non-class resources that don't specify a stage to the 'main' stage" do
main = @compiler.catalog.resource(:stage, :main)
resource = resource(:file, "foo")
@compiler.add_resource(@scope, resource)
@compiler.catalog.should_not be_edge(main, resource)
end
it "should not add any parent-edges to stages" do
stage = resource(:stage, "other")
@compiler.add_resource(@scope, stage)
@scope.resource = resource(:class, "foo")
@compiler.catalog.edge?(@scope.resource, stage).should be_false
end
it "should not attempt to add stages to other stages" do
other_stage = resource(:stage, "other")
second_stage = resource(:stage, "second")
@compiler.add_resource(@scope, other_stage)
@compiler.add_resource(@scope, second_stage)
second_stage[:stage] = "other"
@compiler.catalog.edge?(other_stage, second_stage).should be_false
end
it "should have a method for looking up resources" do
resource = resource(:yay, "foo")
@compiler.add_resource(@scope, resource)
@compiler.findresource("Yay[foo]").should equal(resource)
end
it "should be able to look resources up by type and title" do
resource = resource(:yay, "foo")
@compiler.add_resource(@scope, resource)
@compiler.findresource("Yay", "foo").should equal(resource)
end
it "should not evaluate virtual defined resources" do
resource = resource(:file, "testing")
resource.virtual = true
@compiler.add_resource(@scope, resource)
resource.expects(:evaluate).never
@compiler.compile
end
end
describe "when evaluating collections" do
it "should evaluate each collection" do
2.times { |i|
coll = mock 'coll%s' % i
@compiler.add_collection(coll)
# This is the hard part -- we have to emulate the fact that
# collections delete themselves if they are done evaluating.
coll.expects(:evaluate).with do
@compiler.delete_collection(coll)
end
}
@compiler.compile
end
it "should not fail when there are unevaluated resource collections that do not refer to specific resources" do
coll = stub 'coll', :evaluate => false
coll.expects(:resources).returns(nil)
@compiler.add_collection(coll)
lambda { @compiler.compile }.should_not raise_error
end
it "should fail when there are unevaluated resource collections that refer to a specific resource" do
coll = stub 'coll', :evaluate => false
coll.expects(:resources).returns(:something)
@compiler.add_collection(coll)
lambda { @compiler.compile }.should raise_error Puppet::ParseError, 'Failed to realize virtual resources something'
end
it "should fail when there are unevaluated resource collections that refer to multiple specific resources" do
coll = stub 'coll', :evaluate => false
coll.expects(:resources).returns([:one, :two])
@compiler.add_collection(coll)
lambda { @compiler.compile }.should raise_error Puppet::ParseError, 'Failed to realize virtual resources one, two'
end
end
describe "when evaluating relationships" do
it "should evaluate each relationship with its catalog" do
dep = stub 'dep'
dep.expects(:evaluate).with(@compiler.catalog)
@compiler.add_relationship dep
@compiler.evaluate_relationships
end
end
describe "when told to evaluate missing classes" do
it "should fail if there's no source listed for the scope" do
scope = stub 'scope', :source => nil
proc { @compiler.evaluate_classes(%w{one two}, scope) }.should raise_error(Puppet::DevError)
end
it "should raise an error if a class is not found" do
@scope.expects(:find_hostclass).with("notfound", {:assume_fqname => false}).returns(nil)
lambda{ @compiler.evaluate_classes(%w{notfound}, @scope) }.should raise_error(Puppet::Error, /Could not find class/)
end
it "should raise an error when it can't find class" do
klasses = {'foo'=>nil}
@node.classes = klasses
@compiler.topscope.expects(:find_hostclass).with('foo', {:assume_fqname => false}).returns(nil)
lambda{ @compiler.compile }.should raise_error(Puppet::Error, /Could not find class foo for testnode/)
end
end
describe "when evaluating found classes" do
before do
Puppet.settings[:data_binding_terminus] = "none"
@class = stub 'class', :name => "my::class"
@scope.stubs(:find_hostclass).with("myclass", {:assume_fqname => false}).returns(@class)
@resource = stub 'resource', :ref => "Class[myclass]", :type => "file"
end
it "should evaluate each class" do
@compiler.catalog.stubs(:tag)
@class.expects(:ensure_in_catalog).with(@scope)
@scope.stubs(:class_scope).with(@class)
@compiler.evaluate_classes(%w{myclass}, @scope)
end
describe "and the classes are specified as a hash with parameters" do
before do
@node.classes = {}
@ast_obj = Puppet::Parser::AST::String.new(:value => 'foo')
end
# Define the given class with default parameters
def define_class(name, parameters)
@node.classes[name] = parameters
klass = Puppet::Resource::Type.new(:hostclass, name, :arguments => {'p1' => @ast_obj, 'p2' => @ast_obj})
@compiler.topscope.known_resource_types.add klass
end
def compile
@catalog = @compiler.compile
end
it "should record which classes are evaluated" do
classes = {'foo'=>{}, 'bar::foo'=>{}, 'bar'=>{}}
classes.each { |c, params| define_class(c, params) }
compile()
classes.each { |name, p| @catalog.classes.should include(name) }
end
it "should provide default values for parameters that have no values specified" do
define_class('foo', {})
compile()
@catalog.resource(:class, 'foo')['p1'].should == "foo"
end
it "should use any provided values" do
define_class('foo', {'p1' => 'real_value'})
compile()
@catalog.resource(:class, 'foo')['p1'].should == "real_value"
end
it "should support providing some but not all values" do
define_class('foo', {'p1' => 'real_value'})
compile()
@catalog.resource(:class, 'Foo')['p1'].should == "real_value"
@catalog.resource(:class, 'Foo')['p2'].should == "foo"
end
it "should ensure each node class is in catalog and has appropriate tags" do
klasses = ['bar::foo']
@node.classes = klasses
ast_obj = Puppet::Parser::AST::String.new(:value => 'foo')
klasses.each do |name|
klass = Puppet::Resource::Type.new(:hostclass, name, :arguments => {'p1' => ast_obj, 'p2' => ast_obj})
@compiler.topscope.known_resource_types.add klass
end
catalog = @compiler.compile
r2 = catalog.resources.detect {|r| r.title == 'Bar::Foo' }
r2.tags.should == Puppet::Util::TagSet.new(['bar::foo', 'class', 'bar', 'foo'])
end
end
it "should fail if required parameters are missing" do
klass = {'foo'=>{'a'=>'one'}}
@node.classes = klass
klass = Puppet::Resource::Type.new(:hostclass, 'foo', :arguments => {'a' => nil, 'b' => nil})
@compiler.topscope.known_resource_types.add klass
lambda { @compiler.compile }.should raise_error(Puppet::ParseError, "Must pass b to Class[Foo]")
end
it "should fail if invalid parameters are passed" do
klass = {'foo'=>{'3'=>'one'}}
@node.classes = klass
klass = Puppet::Resource::Type.new(:hostclass, 'foo', :arguments => {})
@compiler.topscope.known_resource_types.add klass
lambda { @compiler.compile }.should raise_error(Puppet::ParseError, "Invalid parameter 3 on Class[Foo]")
end
it "should ensure class is in catalog without params" do
@node.classes = klasses = {'foo'=>nil}
foo = Puppet::Resource::Type.new(:hostclass, 'foo')
@compiler.topscope.known_resource_types.add foo
catalog = @compiler.compile
catalog.classes.should include 'foo'
end
it "should not evaluate the resources created for found classes unless asked" do
@compiler.catalog.stubs(:tag)
@resource.expects(:evaluate).never
@class.expects(:ensure_in_catalog).returns(@resource)
@scope.stubs(:class_scope).with(@class)
@compiler.evaluate_classes(%w{myclass}, @scope)
end
it "should immediately evaluate the resources created for found classes when asked" do
@compiler.catalog.stubs(:tag)
@resource.expects(:evaluate)
@class.expects(:ensure_in_catalog).returns(@resource)
@scope.stubs(:class_scope).with(@class)
@compiler.evaluate_classes(%w{myclass}, @scope, false)
end
it "should skip classes that have already been evaluated" do
@compiler.catalog.stubs(:tag)
- @scope.stubs(:class_scope).with(@class).returns("something")
+ @scope.stubs(:class_scope).with(@class).returns(@scope)
@compiler.expects(:add_resource).never
@resource.expects(:evaluate).never
Puppet::Parser::Resource.expects(:new).never
@compiler.evaluate_classes(%w{myclass}, @scope, false)
end
it "should skip classes previously evaluated with different capitalization" do
@compiler.catalog.stubs(:tag)
@scope.stubs(:find_hostclass).with("MyClass",{:assume_fqname => false}).returns(@class)
- @scope.stubs(:class_scope).with(@class).returns("something")
+ @scope.stubs(:class_scope).with(@class).returns(@scope)
@compiler.expects(:add_resource).never
@resource.expects(:evaluate).never
Puppet::Parser::Resource.expects(:new).never
@compiler.evaluate_classes(%w{MyClass}, @scope, false)
end
end
describe "when evaluating AST nodes with no AST nodes present" do
it "should do nothing" do
@compiler.expects(:ast_nodes?).returns(false)
@compiler.known_resource_types.expects(:nodes).never
Puppet::Parser::Resource.expects(:new).never
@compiler.send(:evaluate_ast_node)
end
end
describe "when evaluating AST nodes with AST nodes present" do
before do
@compiler.known_resource_types.stubs(:nodes?).returns true
# Set some names for our test
@node.stubs(:names).returns(%w{a b c})
@compiler.known_resource_types.stubs(:node).with("a").returns(nil)
@compiler.known_resource_types.stubs(:node).with("b").returns(nil)
@compiler.known_resource_types.stubs(:node).with("c").returns(nil)
# It should check this last, of course.
@compiler.known_resource_types.stubs(:node).with("default").returns(nil)
end
it "should fail if the named node cannot be found" do
proc { @compiler.send(:evaluate_ast_node) }.should raise_error(Puppet::ParseError)
end
it "should evaluate the first node class matching the node name" do
node_class = stub 'node', :name => "c", :evaluate_code => nil
@compiler.known_resource_types.stubs(:node).with("c").returns(node_class)
node_resource = stub 'node resource', :ref => "Node[c]", :evaluate => nil, :type => "node"
node_class.expects(:ensure_in_catalog).returns(node_resource)
@compiler.compile
end
it "should match the default node if no matching node can be found" do
node_class = stub 'node', :name => "default", :evaluate_code => nil
@compiler.known_resource_types.stubs(:node).with("default").returns(node_class)
node_resource = stub 'node resource', :ref => "Node[default]", :evaluate => nil, :type => "node"
node_class.expects(:ensure_in_catalog).returns(node_resource)
@compiler.compile
end
it "should evaluate the node resource immediately rather than using lazy evaluation" do
node_class = stub 'node', :name => "c"
@compiler.known_resource_types.stubs(:node).with("c").returns(node_class)
node_resource = stub 'node resource', :ref => "Node[c]", :type => "node"
node_class.expects(:ensure_in_catalog).returns(node_resource)
node_resource.expects(:evaluate)
@compiler.send(:evaluate_ast_node)
end
end
describe "when evaluating node classes" do
include PuppetSpec::Compiler
describe "when provided classes in array format" do
let(:node) { Puppet::Node.new('someone', :classes => ['something']) }
describe "when the class exists" do
it "should succeed if the class is already included" do
manifest = <<-MANIFEST
class something {}
include something
MANIFEST
catalog = compile_to_catalog(manifest, node)
catalog.resource('Class', 'Something').should_not be_nil
end
it "should evaluate the class without parameters if it's not already included" do
manifest = "class something {}"
catalog = compile_to_catalog(manifest, node)
catalog.resource('Class', 'Something').should_not be_nil
end
end
it "should fail if the class doesn't exist" do
expect { compile_to_catalog('', node) }.to raise_error(Puppet::Error, /Could not find class something/)
end
end
describe "when provided classes in hash format" do
describe "for classes without parameters" do
let(:node) { Puppet::Node.new('someone', :classes => {'something' => {}}) }
describe "when the class exists" do
it "should succeed if the class is already included" do
manifest = <<-MANIFEST
class something {}
include something
MANIFEST
catalog = compile_to_catalog(manifest, node)
catalog.resource('Class', 'Something').should_not be_nil
end
it "should evaluate the class if it's not already included" do
manifest = <<-MANIFEST
class something {}
MANIFEST
catalog = compile_to_catalog(manifest, node)
catalog.resource('Class', 'Something').should_not be_nil
end
end
it "should fail if the class doesn't exist" do
expect { compile_to_catalog('', node) }.to raise_error(Puppet::Error, /Could not find class something/)
end
end
describe "for classes with parameters" do
let(:node) { Puppet::Node.new('someone', :classes => {'something' => {'configuron' => 'defrabulated'}}) }
describe "when the class exists" do
it "should fail if the class is already included" do
manifest = <<-MANIFEST
class something($configuron=frabulated) {}
include something
MANIFEST
expect { compile_to_catalog(manifest, node) }.to raise_error(Puppet::Error, /Class\[Something\] is already declared/)
end
it "should evaluate the class if it's not already included" do
manifest = <<-MANIFEST
class something($configuron=frabulated) {}
MANIFEST
catalog = compile_to_catalog(manifest, node)
resource = catalog.resource('Class', 'Something')
resource['configuron'].should == 'defrabulated'
end
end
it "should fail if the class doesn't exist" do
expect { compile_to_catalog('', node) }.to raise_error(Puppet::Error, /Could not find class something/)
end
end
end
end
describe "when managing resource overrides" do
before do
@override = stub 'override', :ref => "File[/foo]", :type => "my"
@resource = resource(:file, "/foo")
end
it "should be able to store overrides" do
lambda { @compiler.add_override(@override) }.should_not raise_error
end
it "should apply overrides to the appropriate resources" do
@compiler.add_resource(@scope, @resource)
@resource.expects(:merge).with(@override)
@compiler.add_override(@override)
@compiler.compile
end
it "should accept overrides before the related resource has been created" do
@resource.expects(:merge).with(@override)
# First store the override
@compiler.add_override(@override)
# Then the resource
@compiler.add_resource(@scope, @resource)
# And compile, so they get resolved
@compiler.compile
end
it "should fail if the compile is finished and resource overrides have not been applied" do
@compiler.add_override(@override)
lambda { @compiler.compile }.should raise_error Puppet::ParseError, 'Could not find resource(s) File[/foo] for overriding'
end
end
end
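Several of the "when finishing" examples above check that container metaparams such as noop, along with tags, flow down onto contained resources, with the nearest container winning. A minimal standalone sketch of that downward merge over a plain parent/child tree; this is an illustration, not the compiler's actual finish() code:

# Each resource is a hash carrying its own :params and :tags; :children models containment.
def propagate(resource, inherited_params = {}, inherited_tags = [])
  # A resource's own params win over anything inherited from containers above it.
  resource[:params] = inherited_params.merge(resource[:params])
  resource[:tags]   = resource[:tags] | inherited_tags
  resource[:children].each { |child| propagate(child, resource[:params], resource[:tags]) }
  resource
end

leaf = { :params => {}, :tags => ['two'], :children => [] }
main = { :params => { :noop => true }, :tags => ['one'], :children => [leaf] }

propagate(main)
puts leaf[:params][:noop].inspect  # => true, inherited from the containing class
puts leaf[:tags].sort.inspect      # => ["one", "two"], tags merged downward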
diff --git a/spec/unit/parser/eparser_adapter_spec.rb b/spec/unit/parser/eparser_adapter_spec.rb
deleted file mode 100644
index 173cfb783..000000000
--- a/spec/unit/parser/eparser_adapter_spec.rb
+++ /dev/null
@@ -1,407 +0,0 @@
-#! /usr/bin/env ruby
-require 'spec_helper'
-require 'puppet/parser/e_parser_adapter'
-
-describe Puppet::Parser do
-
- Puppet::Parser::AST
-
- before :each do
- @known_resource_types = Puppet::Resource::TypeCollection.new("development")
- @classic_parser = Puppet::Parser::Parser.new "development"
- @parser = Puppet::Parser::EParserAdapter.new(@classic_parser)
- @classic_parser.stubs(:known_resource_types).returns @known_resource_types
- @true_ast = Puppet::Parser::AST::Boolean.new :value => true
- end
-
- it "should require an environment at initialization" do
- expect {
- Puppet::Parser::EParserAdapter.new
- }.to raise_error(ArgumentError, /wrong number of arguments/)
- end
-
- describe "when parsing append operator" do
-
- it "should not raise syntax errors" do
- expect { @parser.parse("$var += something") }.to_not raise_error
- end
-
- it "should raise syntax error on incomplete syntax " do
- expect {
- @parser.parse("$var += ")
- }.to raise_error(Puppet::ParseError, /Syntax error at end of file/)
- end
-
- it "should create ast::VarDef with append=true" do
- vardef = @parser.parse("$var += 2").code[0]
- vardef.should be_a(Puppet::Parser::AST::VarDef)
- vardef.append.should == true
- end
-
- it "should work with arrays too" do
- vardef = @parser.parse("$var += ['test']").code[0]
- vardef.should be_a(Puppet::Parser::AST::VarDef)
- vardef.append.should == true
- end
-
- end
-
- describe "when parsing selector" do
- it "should support hash access on the left hand side" do
- expect { @parser.parse("$h = { 'a' => 'b' } $a = $h['a'] ? { 'b' => 'd', default => undef }") }.to_not raise_error
- end
- end
-
- describe "parsing 'unless'" do
- it "should create the correct ast objects" do
- Puppet::Parser::AST::Not.expects(:new).with { |h| h[:value].is_a?(Puppet::Parser::AST::Boolean) }
- @parser.parse("unless false { $var = 1 }")
- end
-
- it "should not raise an error with empty statements" do
- expect { @parser.parse("unless false { }") }.to_not raise_error
- end
-
- #test for bug #13296
- it "should not override 'unless' as a parameter inside resources" do
- lambda { @parser.parse("exec {'/bin/echo foo': unless => '/usr/bin/false',}") }.should_not raise_error
- end
- end
-
- describe "when parsing parameter names" do
- Puppet::Parser::Lexer::KEYWORDS.sort_tokens.each do |keyword|
- it "should allow #{keyword} as a keyword" do
- lambda { @parser.parse("exec {'/bin/echo foo': #{keyword} => '/usr/bin/false',}") }.should_not raise_error
- end
- end
- end
-
- describe "when parsing 'if'" do
- it "not, it should create the correct ast objects" do
- Puppet::Parser::AST::Not.expects(:new).with { |h| h[:value].is_a?(Puppet::Parser::AST::Boolean) }
- @parser.parse("if ! true { $var = 1 }")
- end
-
- it "boolean operation, it should create the correct ast objects" do
- Puppet::Parser::AST::BooleanOperator.expects(:new).with {
- |h| h[:rval].is_a?(Puppet::Parser::AST::Boolean) and h[:lval].is_a?(Puppet::Parser::AST::Boolean) and h[:operator]=="or"
- }
- @parser.parse("if true or true { $var = 1 }")
-
- end
-
- it "comparison operation, it should create the correct ast objects" do
- Puppet::Parser::AST::ComparisonOperator.expects(:new).with {
- |h| h[:lval].is_a?(Puppet::Parser::AST::Name) and h[:rval].is_a?(Puppet::Parser::AST::Name) and h[:operator]=="<"
- }
- @parser.parse("if 1 < 2 { $var = 1 }")
-
- end
-
- end
-
- describe "when parsing if complex expressions" do
- it "should create a correct ast tree" do
- aststub = stub_everything 'ast'
- Puppet::Parser::AST::ComparisonOperator.expects(:new).with {
- |h| h[:rval].is_a?(Puppet::Parser::AST::Name) and h[:lval].is_a?(Puppet::Parser::AST::Name) and h[:operator]==">"
- }.returns(aststub)
- Puppet::Parser::AST::ComparisonOperator.expects(:new).with {
- |h| h[:rval].is_a?(Puppet::Parser::AST::Name) and h[:lval].is_a?(Puppet::Parser::AST::Name) and h[:operator]=="=="
- }.returns(aststub)
- Puppet::Parser::AST::BooleanOperator.expects(:new).with {
- |h| h[:rval]==aststub and h[:lval]==aststub and h[:operator]=="and"
- }
- @parser.parse("if (1 > 2) and (1 == 2) { $var = 1 }")
- end
-
- it "should raise an error on incorrect expression" do
- expect {
- @parser.parse("if (1 > 2 > ) or (1 == 2) { $var = 1 }")
- }.to raise_error(Puppet::ParseError, /Syntax error at '\)'/)
- end
-
- end
-
- describe "when parsing resource references" do
-
- it "should not raise syntax errors" do
- expect { @parser.parse('exec { test: param => File["a"] }') }.to_not raise_error
- end
-
- it "should not raise syntax errors with multiple references" do
- expect { @parser.parse('exec { test: param => File["a","b"] }') }.to_not raise_error
- end
-
- it "should create an ast::ResourceReference" do
- # NOTE: In egrammar, type and name are unified immediately to lower case whereas the regular grammar
- # keeps the UC name in some contexts - it gets downcased later as the name of the type is in lower case.
- #
- Puppet::Parser::AST::ResourceReference.expects(:new).with { |arg|
- arg[:line]==1 and arg[:pos] ==25 and arg[:type]=="file" and arg[:title].is_a?(Puppet::Parser::AST::ASTArray)
- }
- @parser.parse('exec { test: command => File["a","b"] }')
- end
- end
-
- describe "when parsing resource overrides" do
-
- it "should not raise syntax errors" do
- expect { @parser.parse('Resource["title"] { param => value }') }.to_not raise_error
- end
-
- it "should not raise syntax errors with multiple overrides" do
- expect { @parser.parse('Resource["title1","title2"] { param => value }') }.to_not raise_error
- end
-
- it "should create an ast::ResourceOverride" do
- ro = @parser.parse('Resource["title1","title2"] { param => value }').code[0]
- ro.should be_a(Puppet::Parser::AST::ResourceOverride)
- ro.line.should == 1
- ro.object.should be_a(Puppet::Parser::AST::ResourceReference)
- ro.parameters[0].should be_a(Puppet::Parser::AST::ResourceParam)
- end
-
- end
-
- describe "when parsing if statements" do
-
- it "should not raise errors with empty if" do
- expect { @parser.parse("if true { }") }.to_not raise_error
- end
-
- it "should not raise errors with empty else" do
- expect { @parser.parse("if false { notice('if') } else { }") }.to_not raise_error
- end
-
- it "should not raise errors with empty if and else" do
- expect { @parser.parse("if false { } else { }") }.to_not raise_error
- end
-
- it "should create a nop node for empty branch" do
- Puppet::Parser::AST::Nop.expects(:new).twice
- @parser.parse("if true { }")
- end
-
- it "should create a nop node for empty else branch" do
- Puppet::Parser::AST::Nop.expects(:new)
- @parser.parse("if true { notice('test') } else { }")
- end
-
- it "should build a chain of 'ifs' if there's an 'elsif'" do
- expect { @parser.parse(<<-PP) }.to_not raise_error
- if true { notice('test') } elsif true {} else { }
- PP
- end
-
- end
-
- describe "when parsing function calls" do
- it "should not raise errors with no arguments" do
- expect { @parser.parse("tag()") }.to_not raise_error
- end
-
- it "should not raise errors with rvalue function with no args" do
- expect { @parser.parse("$a = template()") }.to_not raise_error
- end
-
- it "should not raise errors with arguments" do
- expect { @parser.parse("notice(1)") }.to_not raise_error
- end
-
- it "should not raise errors with multiple arguments" do
- expect { @parser.parse("notice(1,2)") }.to_not raise_error
- end
-
- it "should not raise errors with multiple arguments and a trailing comma" do
- expect { @parser.parse("notice(1,2,)") }.to_not raise_error
- end
-
- end
-
- describe "when parsing arrays" do
- it "should parse an array" do
- expect { @parser.parse("$a = [1,2]") }.to_not raise_error
- end
-
- it "should not raise errors with a trailing comma" do
- expect { @parser.parse("$a = [1,2,]") }.to_not raise_error
- end
-
- it "should accept an empty array" do
- expect { @parser.parse("$var = []\n") }.to_not raise_error
- end
- end
-
- describe "when parsing classes" do
- before :each do
- @krt = Puppet::Resource::TypeCollection.new("development")
- @classic_parser = Puppet::Parser::Parser.new "development"
- @parser = Puppet::Parser::EParserAdapter.new(@classic_parser)
- @classic_parser.stubs(:known_resource_types).returns @krt
- end
-
- it "should not create new classes" do
- @parser.parse("class foobar {}").code[0].should be_a(Puppet::Parser::AST::Hostclass)
- @krt.hostclass("foobar").should be_nil
- end
-
- it "should correctly set the parent class when one is provided" do
- @parser.parse("class foobar inherits yayness {}").code[0].instantiate('')[0].parent.should == "yayness"
- end
-
- it "should correctly set the parent class for multiple classes at a time" do
- statements = @parser.parse("class foobar inherits yayness {}\nclass boo inherits bar {}").code
- statements[0].instantiate('')[0].parent.should == "yayness"
- statements[1].instantiate('')[0].parent.should == "bar"
- end
-
- it "should define the code when some is provided" do
- @parser.parse("class foobar { $var = val }").code[0].code.should_not be_nil
- end
-
- it "should accept parameters with trailing comma" do
- @parser.parse("file { '/example': ensure => file, }").should be
- end
-
- it "should accept parametrized classes with trailing comma" do
- @parser.parse("class foobar ($var1 = 0,) { $var = val }").code[0].code.should_not be_nil
- end
-
- it "should define parameters when provided" do
- foobar = @parser.parse("class foobar($biz,$baz) {}").code[0].instantiate('')[0]
- foobar.arguments.should == {"biz" => nil, "baz" => nil}
- end
- end
-
- describe "when parsing resources" do
- before :each do
- @krt = Puppet::Resource::TypeCollection.new("development")
- @classic_parser = Puppet::Parser::Parser.new "development"
- @parser = Puppet::Parser::EParserAdapter.new(@classic_parser)
- @classic_parser.stubs(:known_resource_types).returns @krt
- end
-
- it "should be able to parse class resources" do
- @krt.add(Puppet::Resource::Type.new(:hostclass, "foobar", :arguments => {"biz" => nil}))
- expect { @parser.parse("class { foobar: biz => stuff }") }.to_not raise_error
- end
-
- it "should correctly mark exported resources as exported" do
- @parser.parse("@@file { '/file': }").code[0].exported.should be_true
- end
-
- it "should correctly mark virtual resources as virtual" do
- @parser.parse("@file { '/file': }").code[0].virtual.should be_true
- end
- end
-
- describe "when parsing nodes" do
- it "should be able to parse a node with a single name" do
- node = @parser.parse("node foo { }").code[0]
- node.should be_a Puppet::Parser::AST::Node
- node.names.length.should == 1
- node.names[0].value.should == "foo"
- end
-
- it "should be able to parse a node with two names" do
- node = @parser.parse("node foo, bar { }").code[0]
- node.should be_a Puppet::Parser::AST::Node
- node.names.length.should == 2
- node.names[0].value.should == "foo"
- node.names[1].value.should == "bar"
- end
-
- it "should be able to parse a node with three names" do
- node = @parser.parse("node foo, bar, baz { }").code[0]
- node.should be_a Puppet::Parser::AST::Node
- node.names.length.should == 3
- node.names[0].value.should == "foo"
- node.names[1].value.should == "bar"
- node.names[2].value.should == "baz"
- end
- end
-
- it "should fail if trying to collect defaults" do
- expect {
- @parser.parse("@Port { protocols => tcp }")
- }.to raise_error(Puppet::ParseError, /Defaults are not virtualizable/)
- end
-
- context "when parsing collections" do
- it "should parse basic collections" do
- @parser.parse("Port <| |>").code.
- should be_all {|x| x.is_a? Puppet::Parser::AST::Collection }
- end
-
- it "should parse fully qualified collections" do
- @parser.parse("Port::Range <| |>").code.
- should be_all {|x| x.is_a? Puppet::Parser::AST::Collection }
- end
- end
-
- it "should not assign to a fully qualified variable" do
- expect {
- @parser.parse("$one::two = yay")
- }.to raise_error(Puppet::ParseError, /Cannot assign to variables in other namespaces/)
- end
-
- it "should parse assignment of undef" do
- tree = @parser.parse("$var = undef")
- tree.code.children[0].should be_an_instance_of Puppet::Parser::AST::VarDef
- tree.code.children[0].value.should be_an_instance_of Puppet::Parser::AST::Undef
- end
-
- it "should treat classes as case insensitive" do
- @classic_parser.known_resource_types.import_ast(@parser.parse("class yayness {}"), '')
- @classic_parser.known_resource_types.hostclass('yayness').
- should == @classic_parser.find_hostclass("", "YayNess")
- end
-
- it "should treat defines as case insensitive" do
- @classic_parser.known_resource_types.import_ast(@parser.parse("define funtest {}"), '')
- @classic_parser.known_resource_types.hostclass('funtest').
- should == @classic_parser.find_hostclass("", "fUntEst")
- end
- context "when parsing method calls" do
- it "should parse method call with one param lambda" do
- expect { @parser.parse("$a.each |$a|{ debug $a }") }.to_not raise_error
- end
- it "should parse method call with two param lambda" do
- expect { @parser.parse("$a.each |$a,$b|{ debug $a }") }.to_not raise_error
- end
- it "should parse method call with two param lambda and default value" do
- expect { @parser.parse("$a.each |$a,$b=1|{ debug $a }") }.to_not raise_error
- end
- it "should parse method call without lambda (statement)" do
- expect { @parser.parse("$a.each") }.to_not raise_error
- end
- it "should parse method call without lambda (expression)" do
- expect { @parser.parse("$x = $a.each + 1") }.to_not raise_error
- end
- context "a receiver expression of type" do
- it "variable should be allowed" do
- expect { @parser.parse("$a.each") }.to_not raise_error
- end
- it "hasharrayaccess should be allowed" do
- expect { @parser.parse("$a[0][1].each") }.to_not raise_error
- end
- it "quoted text should be allowed" do
- expect { @parser.parse("\"monkey\".each") }.to_not raise_error
- expect { @parser.parse("'monkey'.each") }.to_not raise_error
- end
- it "selector text should be allowed" do
- expect { @parser.parse("$a ? { 'banana'=>[1,2,3]}.each") }.to_not raise_error
- end
- it "function call should be allowed" do
- expect { @parser.parse("duh(1,2,3).each") }.to_not raise_error
- end
- it "method call should be allowed" do
- expect { @parser.parse("$a.foo.bar") }.to_not raise_error
- end
- it "chained method calls with lambda should be allowed" do
- expect { @parser.parse("$a.foo||{}.bar||{}") }.to_not raise_error
- end
- end
- end
-end
diff --git a/spec/unit/parser/files_spec.rb b/spec/unit/parser/files_spec.rb
index 3eb43b276..020ce740b 100755
--- a/spec/unit/parser/files_spec.rb
+++ b/spec/unit/parser/files_spec.rb
@@ -1,149 +1,168 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet/parser/files'
describe Puppet::Parser::Files do
include PuppetSpec::Files
let(:environment) { Puppet::Node::Environment.create(:testing, []) }
before do
@basepath = make_absolute("/somepath")
end
+ describe "when searching for files" do
+ it "should return fully-qualified files directly" do
+ Puppet::Parser::Files.expects(:modulepath).never
+ Puppet::Parser::Files.find_file(@basepath + "/my/file", environment).should == @basepath + "/my/file"
+ end
+
+ it "should return the first found file" do
+ mod = mock 'module'
+ mod.expects(:file).returns("/one/mymod/files/myfile")
+ environment.expects(:module).with("mymod").returns mod
+
+ Puppet::Parser::Files.find_file("mymod/myfile", environment).should == "/one/mymod/files/myfile"
+ end
+
+ it "should return nil if template is not found" do
+ Puppet::Parser::Files.find_file("foomod/myfile", environment).should be_nil
+ end
+ end
+
describe "when searching for templates" do
it "should return fully-qualified templates directly" do
Puppet::Parser::Files.expects(:modulepath).never
Puppet::Parser::Files.find_template(@basepath + "/my/template", environment).should == @basepath + "/my/template"
end
it "should return the template from the first found module" do
mod = mock 'module'
mod.expects(:template).returns("/one/mymod/templates/mytemplate")
environment.expects(:module).with("mymod").returns mod
Puppet::Parser::Files.find_template("mymod/mytemplate", environment).should == "/one/mymod/templates/mytemplate"
end
it "should return the file in the templatedir if it exists" do
Puppet[:templatedir] = "/my/templates"
Puppet[:modulepath] = "/one:/two"
File.stubs(:directory?).returns(true)
Puppet::FileSystem.stubs(:exist?).returns(true)
Puppet::Parser::Files.find_template("mymod/mytemplate", environment).should == File.join(Puppet[:templatedir], "mymod/mytemplate")
end
it "should not raise an error if no valid templatedir exists and the template exists in a module" do
mod = mock 'module'
mod.expects(:template).returns("/one/mymod/templates/mytemplate")
environment.expects(:module).with("mymod").returns mod
Puppet::Parser::Files.stubs(:templatepath).with(environment).returns(nil)
Puppet::Parser::Files.find_template("mymod/mytemplate", environment).should == "/one/mymod/templates/mytemplate"
end
it "should return unqualified templates if they exist in the template dir" do
Puppet::FileSystem.stubs(:exist?).returns true
Puppet::Parser::Files.stubs(:templatepath).with(environment).returns(["/my/templates"])
Puppet::Parser::Files.find_template("mytemplate", environment).should == "/my/templates/mytemplate"
end
it "should only return templates if they actually exist" do
Puppet::FileSystem.expects(:exist?).with("/my/templates/mytemplate").returns true
Puppet::Parser::Files.stubs(:templatepath).with(environment).returns(["/my/templates"])
Puppet::Parser::Files.find_template("mytemplate", environment).should == "/my/templates/mytemplate"
end
it "should return nil when asked for a template that doesn't exist" do
Puppet::FileSystem.expects(:exist?).with("/my/templates/mytemplate").returns false
Puppet::Parser::Files.stubs(:templatepath).with(environment).returns(["/my/templates"])
Puppet::Parser::Files.find_template("mytemplate", environment).should be_nil
end
it "should accept relative templatedirs" do
Puppet::FileSystem.stubs(:exist?).returns true
Puppet[:templatedir] = "my/templates"
File.expects(:directory?).with(File.expand_path("my/templates")).returns(true)
Puppet::Parser::Files.find_template("mytemplate", environment).should == File.expand_path("my/templates/mytemplate")
end
it "should use the environment templatedir if no module is found and an environment is specified" do
Puppet::FileSystem.stubs(:exist?).returns true
Puppet::Parser::Files.stubs(:templatepath).with(environment).returns(["/myenv/templates"])
Puppet::Parser::Files.find_template("mymod/mytemplate", environment).should == "/myenv/templates/mymod/mytemplate"
end
it "should use first dir from environment templatedir if no module is found and an environment is specified" do
Puppet::FileSystem.stubs(:exist?).returns true
Puppet::Parser::Files.stubs(:templatepath).with(environment).returns(["/myenv/templates", "/two/templates"])
Puppet::Parser::Files.find_template("mymod/mytemplate", environment).should == "/myenv/templates/mymod/mytemplate"
end
it "should use a valid dir when templatedir is a path for unqualified templates and the first dir contains template" do
Puppet::Parser::Files.stubs(:templatepath).returns(["/one/templates", "/two/templates"])
Puppet::FileSystem.expects(:exist?).with("/one/templates/mytemplate").returns(true)
Puppet::Parser::Files.find_template("mytemplate", environment).should == "/one/templates/mytemplate"
end
it "should use a valid dir when templatedir is a path for unqualified templates and only second dir contains template" do
Puppet::Parser::Files.stubs(:templatepath).returns(["/one/templates", "/two/templates"])
Puppet::FileSystem.expects(:exist?).with("/one/templates/mytemplate").returns(false)
Puppet::FileSystem.expects(:exist?).with("/two/templates/mytemplate").returns(true)
Puppet::Parser::Files.find_template("mytemplate", environment).should == "/two/templates/mytemplate"
end
it "should use the node environment if specified" do
mod = mock 'module'
environment.expects(:module).with("mymod").returns mod
mod.expects(:template).returns("/my/modules/mymod/templates/envtemplate")
Puppet::Parser::Files.find_template("mymod/envtemplate", environment).should == "/my/modules/mymod/templates/envtemplate"
end
it "should return nil if no template can be found" do
Puppet::Parser::Files.find_template("foomod/envtemplate", environment).should be_nil
end
end
describe "when searching for manifests" do
it "should ignore invalid modules" do
mod = mock 'module'
environment.expects(:module).with("mymod").raises(Puppet::Module::InvalidName, "name is invalid")
Puppet.expects(:value).with(:modulepath).never
Dir.stubs(:glob).returns %w{foo}
Puppet::Parser::Files.find_manifests_in_modules("mymod/init.pp", environment)
end
end
describe "when searching for manifests in a module" do
def a_module_in_environment(env, name)
mod = Puppet::Module.new(name, "/one/#{name}", env)
env.stubs(:module).with(name).returns mod
mod.stubs(:match_manifests).with("init.pp").returns(["/one/#{name}/manifests/init.pp"])
end
it "returns no files when no module is found" do
module_name, files = Puppet::Parser::Files.find_manifests_in_modules("not_here_module/foo", environment)
expect(files).to be_empty
expect(module_name).to be_nil
end
it "should return the name of the module and the manifests from the first found module" do
a_module_in_environment(environment, "mymod")
Puppet::Parser::Files.find_manifests_in_modules("mymod/init.pp", environment).should ==
["mymod", ["/one/mymod/manifests/init.pp"]]
end
it "does not find the module when it is a different environment" do
different_env = Puppet::Node::Environment.create(:different, [])
a_module_in_environment(environment, "mymod")
Puppet::Parser::Files.find_manifests_in_modules("mymod/init.pp", different_env).should_not include("mymod")
end
end
end
diff --git a/spec/unit/parser/functions/contain_spec.rb b/spec/unit/parser/functions/contain_spec.rb
index 3150e0c8e..2a5aa57c7 100644
--- a/spec/unit/parser/functions/contain_spec.rb
+++ b/spec/unit/parser/functions/contain_spec.rb
@@ -1,185 +1,236 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet_spec/compiler'
require 'puppet/parser/functions'
require 'matchers/containment_matchers'
+require 'matchers/resource'
require 'matchers/include_in_order'
+require 'unit/parser/functions/shared'
+
describe 'The "contain" function' do
include PuppetSpec::Compiler
include ContainmentMatchers
+ include Matchers::Resource
it "includes the class" do
catalog = compile_to_catalog(<<-MANIFEST)
class contained {
notify { "contained": }
}
class container {
contain contained
}
include container
MANIFEST
expect(catalog.classes).to include("contained")
end
+ it "includes the class when using a fully qualified anchored name" do
+ catalog = compile_to_catalog(<<-MANIFEST)
+ class contained {
+ notify { "contained": }
+ }
+
+ class container {
+ contain ::contained
+ }
+
+ include container
+ MANIFEST
+
+ expect(catalog.classes).to include("contained")
+ end
+
+ it "ensures that the edge is with the correct class" do
+ catalog = compile_to_catalog(<<-MANIFEST)
+ class outer {
+ class named { }
+ contain named
+ }
+
+ class named { }
+
+ include named
+ include outer
+ MANIFEST
+
+ expect(catalog).to have_resource("Class[Named]")
+ expect(catalog).to have_resource("Class[Outer]")
+ expect(catalog).to have_resource("Class[Outer::Named]")
+ expect(catalog).to contain_class("outer::named").in("outer")
+ end
+
it "makes the class contained in the current class" do
catalog = compile_to_catalog(<<-MANIFEST)
class contained {
notify { "contained": }
}
class container {
contain contained
}
include container
MANIFEST
expect(catalog).to contain_class("contained").in("container")
end
it "can contain multiple classes" do
catalog = compile_to_catalog(<<-MANIFEST)
class a {
notify { "a": }
}
class b {
notify { "b": }
}
class container {
contain a, b
}
include container
MANIFEST
expect(catalog).to contain_class("a").in("container")
expect(catalog).to contain_class("b").in("container")
end
context "when containing a class in multiple classes" do
it "creates a catalog with all containment edges" do
catalog = compile_to_catalog(<<-MANIFEST)
class contained {
notify { "contained": }
}
class container {
contain contained
}
class another {
contain contained
}
include container
include another
MANIFEST
expect(catalog).to contain_class("contained").in("container")
expect(catalog).to contain_class("contained").in("another")
end
it "and there are no dependencies applies successfully" do
manifest = <<-MANIFEST
class contained {
notify { "contained": }
}
class container {
contain contained
}
class another {
contain contained
}
include container
include another
MANIFEST
expect { apply_compiled_manifest(manifest) }.not_to raise_error
end
it "and there are explicit dependencies on the containing class causes a dependency cycle" do
manifest = <<-MANIFEST
class contained {
notify { "contained": }
}
class container {
contain contained
}
class another {
contain contained
}
include container
include another
Class["container"] -> Class["another"]
MANIFEST
expect { apply_compiled_manifest(manifest) }.to raise_error(
Puppet::Error,
/Found 1 dependency cycle/
)
end
end
it "does not create duplicate edges" do
catalog = compile_to_catalog(<<-MANIFEST)
class contained {
notify { "contained": }
}
class container {
contain contained
contain contained
}
include container
MANIFEST
contained = catalog.resource("Class", "contained")
container = catalog.resource("Class", "container")
expect(catalog.edges_between(container, contained)).to have(1).item
end
context "when a containing class has a dependency order" do
it "the contained class is applied in that order" do
catalog = compile_to_relationship_graph(<<-MANIFEST)
class contained {
notify { "contained": }
}
class container {
contain contained
}
class first {
notify { "first": }
}
class last {
notify { "last": }
}
include container, first, last
Class["first"] -> Class["container"] -> Class["last"]
MANIFEST
expect(order_resources_traversed_in(catalog)).to include_in_order(
"Notify[first]", "Notify[contained]", "Notify[last]"
)
end
end
+
+ describe "When the future parser is in use" do
+ require 'puppet/pops'
+ before(:each) do
+ Puppet[:parser] = 'future'
+ compiler = Puppet::Parser::Compiler.new(Puppet::Node.new("foo"))
+ @scope = Puppet::Parser::Scope.new(compiler)
+ end
+
+ it_should_behave_like 'all functions transforming relative to absolute names', :function_contain
+ it_should_behave_like 'an inclusion function, regardless of the type of class reference,', :contain
+ end
end
diff --git a/spec/unit/parser/functions/create_resources_spec.rb b/spec/unit/parser/functions/create_resources_spec.rb
index 79ed02f22..3e7bd8015 100755
--- a/spec/unit/parser/functions/create_resources_spec.rb
+++ b/spec/unit/parser/functions/create_resources_spec.rb
@@ -1,206 +1,213 @@
require 'puppet'
require 'spec_helper'
require 'puppet_spec/compiler'
describe 'function for dynamically creating resources' do
include PuppetSpec::Compiler
before :each do
node = Puppet::Node.new("floppy", :environment => 'production')
@compiler = Puppet::Parser::Compiler.new(node)
@scope = Puppet::Parser::Scope.new(@compiler)
@topscope = @scope.compiler.topscope
@scope.parent = @topscope
Puppet::Parser::Functions.function(:create_resources)
end
it "should exist" do
Puppet::Parser::Functions.function(:create_resources).should == "function_create_resources"
end
it 'should require two or three arguments' do
expect { @scope.function_create_resources(['foo']) }.to raise_error(ArgumentError, 'create_resources(): Wrong number of arguments given (1 for minimum 2)')
expect { @scope.function_create_resources(['foo', 'bar', 'blah', 'baz']) }.to raise_error(ArgumentError, 'create_resources(): wrong number of arguments (4; must be 2 or 3)')
end
it 'should require second argument to be a hash' do
expect { @scope.function_create_resources(['foo','bar']) }.to raise_error(ArgumentError, 'create_resources(): second argument must be a hash')
end
it 'should require optional third argument to be a hash' do
expect { @scope.function_create_resources(['foo',{},'foo']) }.to raise_error(ArgumentError, 'create_resources(): third argument, if provided, must be a hash')
end
describe 'when creating native types' do
it 'empty hash should not cause resources to be added' do
noop_catalog = compile_to_catalog("create_resources('file', {})")
empty_catalog = compile_to_catalog("")
noop_catalog.resources.size.should == empty_catalog.resources.size
end
it 'should be able to add' do
catalog = compile_to_catalog("create_resources('file', {'/etc/foo'=>{'ensure'=>'present'}})")
catalog.resource(:file, "/etc/foo")['ensure'].should == 'present'
end
it 'should be able to add virtual resources' do
catalog = compile_to_catalog("create_resources('@file', {'/etc/foo'=>{'ensure'=>'present'}})\nrealize(File['/etc/foo'])")
catalog.resource(:file, "/etc/foo")['ensure'].should == 'present'
end
it 'should be able to add exported resources' do
- catalog = compile_to_catalog("create_resources('@@file', {'/etc/foo'=>{'ensure'=>'present'}})")
+ catalog = compile_to_catalog("create_resources('@@file', {'/etc/foo'=>{'ensure'=>'present'}}) realize(File['/etc/foo'])")
catalog.resource(:file, "/etc/foo")['ensure'].should == 'present'
catalog.resource(:file, "/etc/foo").exported.should == true
end
it 'should accept multiple types' do
catalog = compile_to_catalog("create_resources('notify', {'foo'=>{'message'=>'one'}, 'bar'=>{'message'=>'two'}})")
catalog.resource(:notify, "foo")['message'].should == 'one'
catalog.resource(:notify, "bar")['message'].should == 'two'
end
it 'should fail to add non-existing type' do
expect do
@scope.function_create_resources(['create-resource-foo', { 'foo' => {} }])
end.to raise_error(ArgumentError, /Invalid resource type create-resource-foo/)
end
it 'should be able to add edges' do
rg = compile_to_relationship_graph("notify { test: }\n create_resources('notify', {'foo'=>{'require'=>'Notify[test]'}})")
test = rg.vertices.find { |v| v.title == 'test' }
foo = rg.vertices.find { |v| v.title == 'foo' }
test.must be
foo.must be
rg.path_between(test,foo).should be
end
it 'should account for default values' do
catalog = compile_to_catalog("create_resources('file', {'/etc/foo'=>{'ensure'=>'present'}, '/etc/baz'=>{'group'=>'food'}}, {'group' => 'bar'})")
catalog.resource(:file, "/etc/foo")['group'].should == 'bar'
catalog.resource(:file, "/etc/baz")['group'].should == 'food'
end
end
describe 'when dynamically creating resource types' do
it 'should be able to create defined resource types' do
catalog = compile_to_catalog(<<-MANIFEST)
define foocreateresource($one) {
notify { $name: message => $one }
}
create_resources('foocreateresource', {'blah'=>{'one'=>'two'}})
MANIFEST
catalog.resource(:notify, "blah")['message'].should == 'two'
end
it 'should fail if defines are missing params' do
expect {
compile_to_catalog(<<-MANIFEST)
define foocreateresource($one) {
notify { $name: message => $one }
}
create_resources('foocreateresource', {'blah'=>{}})
MANIFEST
}.to raise_error(Puppet::Error, 'Must pass one to Foocreateresource[blah] on node foonode')
end
it 'should be able to add multiple defines' do
catalog = compile_to_catalog(<<-MANIFEST)
define foocreateresource($one) {
notify { $name: message => $one }
}
create_resources('foocreateresource', {'blah'=>{'one'=>'two'}, 'blaz'=>{'one'=>'three'}})
MANIFEST
catalog.resource(:notify, "blah")['message'].should == 'two'
catalog.resource(:notify, "blaz")['message'].should == 'three'
end
it 'should be able to add edges' do
rg = compile_to_relationship_graph(<<-MANIFEST)
define foocreateresource($one) {
notify { $name: message => $one }
}
notify { test: }
create_resources('foocreateresource', {'blah'=>{'one'=>'two', 'require' => 'Notify[test]'}})
MANIFEST
test = rg.vertices.find { |v| v.title == 'test' }
blah = rg.vertices.find { |v| v.title == 'blah' }
test.must be
blah.must be
rg.path_between(test,blah).should be
end
it 'should account for default values' do
catalog = compile_to_catalog(<<-MANIFEST)
define foocreateresource($one) {
notify { $name: message => $one }
}
create_resources('foocreateresource', {'blah'=>{}}, {'one' => 'two'})
MANIFEST
catalog.resource(:notify, "blah")['message'].should == 'two'
end
end
describe 'when creating classes' do
it 'should be able to create classes' do
catalog = compile_to_catalog(<<-MANIFEST)
class bar($one) {
notify { test: message => $one }
}
create_resources('class', {'bar'=>{'one'=>'two'}})
MANIFEST
catalog.resource(:notify, "test")['message'].should == 'two'
catalog.resource(:class, "bar").should_not be_nil
end
it 'should fail to create non-existing classes' do
expect do
compile_to_catalog(<<-MANIFEST)
create_resources('class', {'blah'=>{'one'=>'two'}})
MANIFEST
end.to raise_error(Puppet::Error, 'Could not find declared class blah at line 1 on node foonode')
end
it 'should be able to add edges' do
rg = compile_to_relationship_graph(<<-MANIFEST)
class bar($one) {
notify { test: message => $one }
}
notify { tester: }
create_resources('class', {'bar'=>{'one'=>'two', 'require' => 'Notify[tester]'}})
MANIFEST
test = rg.vertices.find { |v| v.title == 'test' }
tester = rg.vertices.find { |v| v.title == 'tester' }
test.must be
tester.must be
rg.path_between(tester,test).should be
end
it 'should account for default values' do
catalog = compile_to_catalog(<<-MANIFEST)
class bar($one) {
notify { test: message => $one }
}
create_resources('class', {'bar'=>{}}, {'one' => 'two'})
MANIFEST
catalog.resource(:notify, "test")['message'].should == 'two'
catalog.resource(:class, "bar").should_not be_nil
end
+
+ it 'should fail with a correct error message if the syntax of an imported file is incorrect' do
+ expect{
+ Puppet[:modulepath] = my_fixture_dir
+ compile_to_catalog('include foo')
+ }.to raise_error(Puppet::Error, /Syntax error at.*/)
+ end
end
end
diff --git a/spec/unit/parser/functions/digest_spec.rb b/spec/unit/parser/functions/digest_spec.rb
new file mode 100755
index 000000000..e3c0762d4
--- /dev/null
+++ b/spec/unit/parser/functions/digest_spec.rb
@@ -0,0 +1,31 @@
+#!/usr/bin/env rspec
+require 'spec_helper'
+
+describe "the digest function", :uses_checksums => true do
+ before :all do
+ Puppet::Parser::Functions.autoloader.loadall
+ end
+
+ before :each do
+ n = Puppet::Node.new('unnamed')
+ c = Puppet::Parser::Compiler.new(n)
+ @scope = Puppet::Parser::Scope.new(c)
+ end
+
+ it "should exist" do
+ Puppet::Parser::Functions.function("digest").should == "function_digest"
+ end
+
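+ # Note (assumption): with_digest_algorithms is a helper defined in the spec
+ # support code; it is expected to run the enclosed examples once per supported
+ # digest algorithm, providing matching `plaintext` and `checksum` values.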
+ with_digest_algorithms do
+ it "should use the proper digest function" do
+ result = @scope.function_digest([plaintext])
+ result.should(eql( checksum ))
+ end
+
+ it "should only accept one parameter" do
+ expect do
+ @scope.function_digest(['foo', 'bar'])
+ end.to raise_error(ArgumentError)
+ end
+ end
+end
diff --git a/spec/unit/parser/functions/file_spec.rb b/spec/unit/parser/functions/file_spec.rb
index 34d12e2f2..c5f157300 100755
--- a/spec/unit/parser/functions/file_spec.rb
+++ b/spec/unit/parser/functions/file_spec.rb
@@ -1,55 +1,98 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet_spec/files'
describe "the 'file' function" do
include PuppetSpec::Files
before :all do
Puppet::Parser::Functions.autoloader.loadall
end
let :node do Puppet::Node.new('localhost') end
let :compiler do Puppet::Parser::Compiler.new(node) end
let :scope do Puppet::Parser::Scope.new(compiler) end
- it "should exist" do
- Puppet::Parser::Functions.function("file").should == "function_file"
- end
-
def with_file_content(content)
path = tmpfile('file-function')
file = File.new(path, 'w')
file.sync = true
file.print content
yield path
end
it "should read a file" do
with_file_content('file content') do |name|
scope.function_file([name]).should == "file content"
end
end
- it "should return the first file if given two files" do
+ it "should read a file from a module path" do
+ with_file_content('file content') do |name|
+ mod = mock 'module'
+ mod.stubs(:file).with('myfile').returns(name)
+ compiler.environment.stubs(:module).with('mymod').returns(mod)
+
+ scope.function_file(['mymod/myfile']).should == 'file content'
+ end
+ end
+
+ it "should return the first file if given two files with absolute paths" do
with_file_content('one') do |one|
with_file_content('two') do |two|
scope.function_file([one, two]).should == "one"
end
end
end
+ it "should return the first file if given two files with module paths" do
+ with_file_content('one') do |one|
+ with_file_content('two') do |two|
+ mod = mock 'module'
+ compiler.environment.expects(:module).with('mymod').returns(mod)
+ mod.expects(:file).with('one').returns(one)
+ mod.stubs(:file).with('two').returns(two)
+
+ scope.function_file(['mymod/one','mymod/two']).should == 'one'
+ end
+ end
+ end
+
+ it "should return the first file if given two files with mixed paths, absolute first" do
+ with_file_content('one') do |one|
+ with_file_content('two') do |two|
+ mod = mock 'module'
+ compiler.environment.stubs(:module).with('mymod').returns(mod)
+ mod.stubs(:file).with('two').returns(two)
+
+ scope.function_file([one,'mymod/two']).should == 'one'
+ end
+ end
+ end
+
+ it "should return the first file if given two files with mixed paths, module first" do
+ with_file_content('one') do |one|
+ with_file_content('two') do |two|
+ mod = mock 'module'
+ compiler.environment.expects(:module).with('mymod').returns(mod)
+ mod.stubs(:file).with('two').returns(two)
+
+ scope.function_file(['mymod/two',one]).should == 'two'
+ end
+ end
+ end
+
it "should not fail when some files are absent" do
expect {
with_file_content('one') do |one|
scope.function_file([make_absolute("/should-not-exist"), one]).should == 'one'
end
}.to_not raise_error
end
it "should fail when all files are absent" do
expect {
scope.function_file([File.expand_path('one')])
}.to raise_error(Puppet::ParseError, /Could not find any files/)
end
end
diff --git a/spec/unit/parser/functions/include_spec.rb b/spec/unit/parser/functions/include_spec.rb
index c1a5cbd5c..3fa0da35d 100755
--- a/spec/unit/parser/functions/include_spec.rb
+++ b/spec/unit/parser/functions/include_spec.rb
@@ -1,51 +1,65 @@
#! /usr/bin/env ruby
require 'spec_helper'
+require 'unit/parser/functions/shared'
describe "the 'include' function" do
before :all do
Puppet::Parser::Functions.autoloader.loadall
end
before :each do
@compiler = Puppet::Parser::Compiler.new(Puppet::Node.new("foo"))
@scope = Puppet::Parser::Scope.new(@compiler)
end
it "should exist" do
Puppet::Parser::Functions.function("include").should == "function_include"
end
it "should include a single class" do
inc = "foo"
@compiler.expects(:evaluate_classes).with {|klasses,parser,lazy| klasses == [inc]}.returns([inc])
@scope.function_include(["foo"])
end
it "should include multiple classes" do
inc = ["foo","bar"]
@compiler.expects(:evaluate_classes).with {|klasses,parser,lazy| klasses == inc}.returns(inc)
@scope.function_include(["foo","bar"])
end
it "should include multiple classes passed in an array" do
inc = ["foo","bar"]
@compiler.expects(:evaluate_classes).with {|klasses,parser,lazy| klasses == inc}.returns(inc)
@scope.function_include([["foo","bar"]])
end
it "should flatten nested arrays" do
inc = ["foo","bar","baz"]
@compiler.expects(:evaluate_classes).with {|klasses,parser,lazy| klasses == inc}.returns(inc)
@scope.function_include([["foo","bar"],"baz"])
end
it "should not lazily evaluate the included class" do
@compiler.expects(:evaluate_classes).with {|klasses,parser,lazy| lazy == false}.returns("foo")
@scope.function_include(["foo"])
end
it "should raise if the class is not found" do
@scope.stubs(:source).returns(true)
- expect { @scope.function_include(["nosuchclass"]) }.to raise_error Puppet::Error
+ expect { @scope.function_include(["nosuchclass"]) }.to raise_error(Puppet::Error)
+ end
+
+ describe "When the future parser is in use" do
+ require 'puppet/pops'
+ require 'puppet_spec/compiler'
+ include PuppetSpec::Compiler
+
+ before(:each) do
+ Puppet[:parser] = 'future'
+ end
+
+ it_should_behave_like 'all functions transforming relative to absolute names', :function_include
+ it_should_behave_like 'an inclusion function, regardless of the type of class reference,', :include
end
end
diff --git a/spec/unit/parser/functions/realize_spec.rb b/spec/unit/parser/functions/realize_spec.rb
index 9f53f5a76..79e5eb155 100755
--- a/spec/unit/parser/functions/realize_spec.rb
+++ b/spec/unit/parser/functions/realize_spec.rb
@@ -1,53 +1,61 @@
-#! /usr/bin/env ruby
require 'spec_helper'
+require 'matchers/resource'
+require 'puppet_spec/compiler'
describe "the realize function" do
- before :all do
- Puppet::Parser::Functions.autoloader.loadall
- end
+ include Matchers::Resource
+ include PuppetSpec::Compiler
- before :each do
- @collector = stub_everything 'collector'
- node = Puppet::Node.new('localhost')
- @compiler = Puppet::Parser::Compiler.new(node)
- @scope = Puppet::Parser::Scope.new(@compiler)
- @compiler.stubs(:add_collection).with(@collector)
- end
+ it "realizes a single, referenced resource" do
+ catalog = compile_to_catalog(<<-EOM)
+ @notify { testing: }
+ realize(Notify[testing])
+ EOM
- it "should exist" do
- Puppet::Parser::Functions.function("realize").should == "function_realize"
+ expect(catalog).to have_resource("Notify[testing]")
end
- it "should create a Collector when called" do
-
- Puppet::Parser::Collector.expects(:new).returns(@collector)
+ it "realizes multiple resources" do
+ catalog = compile_to_catalog(<<-EOM)
+ @notify { testing: }
+ @notify { other: }
+ realize(Notify[testing], Notify[other])
+ EOM
- @scope.function_realize(["test"])
+ expect(catalog).to have_resource("Notify[testing]")
+ expect(catalog).to have_resource("Notify[other]")
end
- it "should assign the passed-in resources to the collector" do
- Puppet::Parser::Collector.stubs(:new).returns(@collector)
+ it "realizes resources provided in arrays" do
+ catalog = compile_to_catalog(<<-EOM)
+ @notify { testing: }
+ @notify { other: }
+ realize([Notify[testing], [Notify[other]]])
+ EOM
- @collector.expects(:resources=).with(["test"])
-
- @scope.function_realize(["test"])
+ expect(catalog).to have_resource("Notify[testing]")
+ expect(catalog).to have_resource("Notify[other]")
end
- it "should flatten the resources assigned to the collector" do
- Puppet::Parser::Collector.stubs(:new).returns(@collector)
-
- @collector.expects(:resources=).with(["test"])
-
- @scope.function_realize([["test"]])
+ it "fails when the resource does not exist" do
+ expect do
+ compile_to_catalog(<<-EOM)
+ realize(Notify[missing])
+ EOM
+ end.to raise_error(Puppet::Error, /Failed to realize/)
end
- it "should let the compiler know this collector" do
- Puppet::Parser::Collector.stubs(:new).returns(@collector)
- @collector.stubs(:resources=).with(["test"])
-
- @compiler.expects(:add_collection).with(@collector)
-
- @scope.function_realize(["test"])
+ it "fails when no parameters given" do
+ expect do
+ compile_to_catalog(<<-EOM)
+ realize()
+ EOM
+ end.to raise_error(Puppet::Error, /Wrong number of arguments/)
end
+ it "silently does nothing when an empty array of resources is given" do
+ compile_to_catalog(<<-EOM)
+ realize([])
+ EOM
+ end
end
diff --git a/spec/unit/parser/functions/require_spec.rb b/spec/unit/parser/functions/require_spec.rb
index 72c3f9f5f..f0b4fcc28 100755
--- a/spec/unit/parser/functions/require_spec.rb
+++ b/spec/unit/parser/functions/require_spec.rb
@@ -1,61 +1,75 @@
#! /usr/bin/env ruby
require 'spec_helper'
+require 'unit/parser/functions/shared'
describe "the require function" do
before :all do
Puppet::Parser::Functions.autoloader.loadall
end
before :each do
@catalog = stub 'catalog'
node = Puppet::Node.new('localhost')
compiler = Puppet::Parser::Compiler.new(node)
@scope = Puppet::Parser::Scope.new(compiler)
@scope.stubs(:findresource)
@klass = stub 'class', :name => "myclass"
@scope.stubs(:find_hostclass).returns(@klass)
@resource = Puppet::Parser::Resource.new(:file, "/my/file", :scope => @scope, :source => "source")
@scope.stubs(:resource).returns @resource
end
it "should exist" do
Puppet::Parser::Functions.function("require").should == "function_require"
end
it "should delegate to the 'include' puppet function" do
- @scope.expects(:function_include).with(["myclass"])
+ @scope.compiler.expects(:evaluate_classes).with(["myclass"], @scope, false)
@scope.function_require(["myclass"])
end
- it "should set the 'require' prarameter on the resource to a resource reference" do
- @scope.stubs(:function_include)
+ it "should set the 'require' parameter on the resource to a resource reference" do
+ @scope.compiler.stubs(:evaluate_classes)
@scope.function_require(["myclass"])
@resource["require"].should be_instance_of(Array)
@resource["require"][0].should be_instance_of(Puppet::Resource)
end
it "should lookup the absolute class path" do
- @scope.stubs(:function_include)
+ @scope.compiler.stubs(:evaluate_classes)
@scope.expects(:find_hostclass).with("myclass").returns(@klass)
@klass.expects(:name).returns("myclass")
@scope.function_require(["myclass"])
end
it "should append the required class to the require parameter" do
- @scope.stubs(:function_include)
+ @scope.compiler.stubs(:evaluate_classes)
one = Puppet::Resource.new(:file, "/one")
@resource[:require] = one
@scope.function_require(["myclass"])
@resource[:require].should be_include(one)
@resource[:require].detect { |r| r.to_s == "Class[Myclass]" }.should be_instance_of(Puppet::Resource)
end
+
+ describe "When the future parser is in use" do
+ require 'puppet/pops'
+ require 'puppet_spec/compiler'
+ include PuppetSpec::Compiler
+
+ before(:each) do
+ Puppet[:parser] = 'future'
+ end
+
+ it_should_behave_like 'all functions transforming relative to absolute names', :function_require
+ it_should_behave_like 'an inclusion function, regardless of the type of class reference,', :require
+ end
end
diff --git a/spec/unit/parser/functions/search_spec.rb b/spec/unit/parser/functions/search_spec.rb
index b2c042b04..54054bd6a 100755
--- a/spec/unit/parser/functions/search_spec.rb
+++ b/spec/unit/parser/functions/search_spec.rb
@@ -1,23 +1,28 @@
#! /usr/bin/env ruby
require 'spec_helper'
describe "the 'search' function" do
before :all do
Puppet::Parser::Functions.autoloader.loadall
end
let :node do Puppet::Node.new('localhost') end
let :compiler do Puppet::Parser::Compiler.new(node) end
let :scope do Puppet::Parser::Scope.new(compiler) end
it "should exist" do
Puppet::Parser::Functions.function("search").should == "function_search"
end
it "should invoke #add_namespace on the scope for all inputs" do
scope.expects(:add_namespace).with("where")
scope.expects(:add_namespace).with("what")
scope.expects(:add_namespace).with("who")
scope.function_search(["where", "what", "who"])
end
+
+ it "is deprecated" do
+ Puppet.expects(:deprecation_warning).with("The 'search' function is deprecated. See http://links.puppetlabs.com/search-function-deprecation")
+ scope.function_search(['wat'])
+ end
end
diff --git a/spec/unit/parser/functions/shared.rb b/spec/unit/parser/functions/shared.rb
new file mode 100644
index 000000000..f5adcd811
--- /dev/null
+++ b/spec/unit/parser/functions/shared.rb
@@ -0,0 +1,82 @@
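+# These shared example groups are pulled into individual function specs via
+# it_should_behave_like, passing either the scope method (e.g. :function_include)
+# or the bare function name (e.g. :include) as the parameter.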
+shared_examples_for 'all functions transforming relative to absolute names' do |func_method|
+
+ it 'transforms relative names to absolute' do
+ @scope.compiler.expects(:evaluate_classes).with(["::myclass"], @scope, false)
+ @scope.send(func_method, ["myclass"])
+ end
+
+ it 'accepts a Class[name] type' do
+ @scope.compiler.expects(:evaluate_classes).with(["::myclass"], @scope, false)
+ @scope.send(func_method, [Puppet::Pops::Types::TypeFactory.host_class('myclass')])
+ end
+
+ it 'accepts a Resource[class, name] type' do
+ @scope.compiler.expects(:evaluate_classes).with(["::myclass"], @scope, false)
+ @scope.send(func_method, [Puppet::Pops::Types::TypeFactory.resource('class', 'myclass')])
+ end
+
+ it 'raises an error for unspecific Class' do
+ expect {
+ @scope.send(func_method, [Puppet::Pops::Types::TypeFactory.host_class()])
+ }.to raise_error(ArgumentError, /Cannot use an unspecific Class\[\] Type/)
+ end
+
+ it 'raises an error for Resource that is not of class type' do
+ expect {
+ @scope.send(func_method, [Puppet::Pops::Types::TypeFactory.resource('file')])
+ }.to raise_error(ArgumentError, /Cannot use a Resource\[file\] where a Resource\['class', name\] is expected/)
+ end
+
+ it 'raises an error for Resource that is unspecific' do
+ expect {
+ @scope.send(func_method, [Puppet::Pops::Types::TypeFactory.resource()])
+ }.to raise_error(ArgumentError, /Cannot use an unspecific Resource\[\] where a Resource\['class', name\] is expected/)
+ end
+
+ it 'raises an error for Resource[class] that is unspecific' do
+ expect {
+ @scope.send(func_method, [Puppet::Pops::Types::TypeFactory.resource('class')])
+ }.to raise_error(ArgumentError, /Cannot use an unspecific Resource\['class'\] where a Resource\['class', name\] is expected/)
+ end
+
+end
+
+shared_examples_for 'an inclusion function, regardless of the type of class reference,' do |function|
+
+ it "and #{function} a class absolutely, even when a relative namespaced class of the same name is present" do
+ catalog = compile_to_catalog(<<-MANIFEST)
+ class foo {
+ class bar { }
+ #{function} bar
+ }
+ class bar { }
+ include foo
+ MANIFEST
+ expect(catalog.classes).to include('foo','bar')
+ end
+
+ it "and #{function} a class absolutely by Class['type'] reference" do
+ catalog = compile_to_catalog(<<-MANIFEST)
+ class foo {
+ class bar { }
+ #{function} Class['bar']
+ }
+ class bar { }
+ include foo
+ MANIFEST
+ expect(catalog.classes).to include('foo','bar')
+ end
+
+ it "and #{function} a class absolutely by Resource['type','title'] reference" do
+ catalog = compile_to_catalog(<<-MANIFEST)
+ class foo {
+ class bar { }
+ #{function} Resource['class','bar']
+ }
+ class bar { }
+ include foo
+ MANIFEST
+ expect(catalog.classes).to include('foo','bar')
+ end
+
+end
diff --git a/spec/unit/parser/functions_spec.rb b/spec/unit/parser/functions_spec.rb
index cf8aecc87..3c6266752 100755
--- a/spec/unit/parser/functions_spec.rb
+++ b/spec/unit/parser/functions_spec.rb
@@ -1,132 +1,132 @@
#! /usr/bin/env ruby
require 'spec_helper'
describe Puppet::Parser::Functions do
def callable_functions_from(mod)
Class.new { include mod }.new
end
let(:function_module) { Puppet::Parser::Functions.environment_module(Puppet.lookup(:current_environment)) }
let(:environment) { Puppet::Node::Environment.create(:myenv, []) }
before do
Puppet::Parser::Functions.reset
end
it "should have a method for returning an environment-specific module" do
Puppet::Parser::Functions.environment_module(environment).should be_instance_of(Module)
end
describe "when calling newfunction" do
it "should create the function in the environment module" do
Puppet::Parser::Functions.newfunction("name", :type => :rvalue) { |args| }
function_module.should be_method_defined :function_name
end
it "should warn if the function already exists" do
Puppet::Parser::Functions.newfunction("name", :type => :rvalue) { |args| }
Puppet.expects(:warning)
Puppet::Parser::Functions.newfunction("name", :type => :rvalue) { |args| }
end
it "should raise an error if the function type is not correct" do
expect { Puppet::Parser::Functions.newfunction("name", :type => :unknown) { |args| } }.to raise_error Puppet::DevError, "Invalid statement type :unknown"
end
it "instruments the function to profile the execution" do
messages = []
- Puppet::Util::Profiler.current = Puppet::Util::Profiler::WallClock.new(proc { |msg| messages << msg }, "id")
+ Puppet::Util::Profiler.add_profiler(Puppet::Util::Profiler::WallClock.new(proc { |msg| messages << msg }, "id"))
Puppet::Parser::Functions.newfunction("name", :type => :rvalue) { |args| }
callable_functions_from(function_module).function_name([])
messages.first.should =~ /Called name/
end
end
describe "when calling function to test function existence" do
it "should return false if the function doesn't exist" do
Puppet::Parser::Functions.autoloader.stubs(:load)
Puppet::Parser::Functions.function("name").should be_false
end
it "should return its name if the function exists" do
Puppet::Parser::Functions.newfunction("name", :type => :rvalue) { |args| }
Puppet::Parser::Functions.function("name").should == "function_name"
end
it "should try to autoload the function if it doesn't exist yet" do
Puppet::Parser::Functions.autoloader.expects(:load)
Puppet::Parser::Functions.function("name")
end
it "combines functions from the root with those from the current environment" do
Puppet.override(:current_environment => Puppet.lookup(:root_environment)) do
Puppet::Parser::Functions.newfunction("onlyroot", :type => :rvalue) do |args|
end
end
Puppet.override(:current_environment => Puppet::Node::Environment.create(:other, [])) do
Puppet::Parser::Functions.newfunction("other_env", :type => :rvalue) do |args|
end
expect(Puppet::Parser::Functions.function("onlyroot")).to eq("function_onlyroot")
expect(Puppet::Parser::Functions.function("other_env")).to eq("function_other_env")
end
expect(Puppet::Parser::Functions.function("other_env")).to be_false
end
end
describe "when calling function to test arity" do
let(:function_module) { Puppet::Parser::Functions.environment_module(Puppet.lookup(:current_environment)) }
it "should raise an error if the function is called with too many arguments" do
Puppet::Parser::Functions.newfunction("name", :arity => 2) { |args| }
expect { callable_functions_from(function_module).function_name([1,2,3]) }.to raise_error ArgumentError
end
it "should raise an error if the function is called with too few arguments" do
Puppet::Parser::Functions.newfunction("name", :arity => 2) { |args| }
expect { callable_functions_from(function_module).function_name([1]) }.to raise_error ArgumentError
end
it "should not raise an error if the function is called with correct number of arguments" do
Puppet::Parser::Functions.newfunction("name", :arity => 2) { |args| }
expect { callable_functions_from(function_module).function_name([1,2]) }.to_not raise_error
end
it "should raise an error if the variable arg function is called with too few arguments" do
Puppet::Parser::Functions.newfunction("name", :arity => -3) { |args| }
expect { callable_functions_from(function_module).function_name([1]) }.to raise_error ArgumentError
end
it "should not raise an error if the variable arg function is called with correct number of arguments" do
Puppet::Parser::Functions.newfunction("name", :arity => -3) { |args| }
expect { callable_functions_from(function_module).function_name([1,2]) }.to_not raise_error
end
it "should not raise an error if the variable arg function is called with more number of arguments" do
Puppet::Parser::Functions.newfunction("name", :arity => -3) { |args| }
expect { callable_functions_from(function_module).function_name([1,2,3]) }.to_not raise_error
end
end
describe "::arity" do
it "returns the given arity of a function" do
Puppet::Parser::Functions.newfunction("name", :arity => 4) { |args| }
Puppet::Parser::Functions.arity(:name).should == 4
end
it "returns -1 if no arity is given" do
Puppet::Parser::Functions.newfunction("name") { |args| }
Puppet::Parser::Functions.arity(:name).should == -1
end
end
end
diff --git a/spec/unit/parser/lexer_spec.rb b/spec/unit/parser/lexer_spec.rb
index 62234e214..f0f10e9f3 100755
--- a/spec/unit/parser/lexer_spec.rb
+++ b/spec/unit/parser/lexer_spec.rb
@@ -1,868 +1,877 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet/parser/lexer'
# This is a special matcher to easily match lexer output
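# Each expected entry is either nil (accept any token), a bare token-name symbol,
# or a [name, value] pair; an actual token's value may be given directly or
# wrapped in a Hash under :value.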
RSpec::Matchers.define :be_like do |*expected|
match do |actual|
expected.zip(actual).all? { |e,a| !e or a[0] == e or (e.is_a? Array and a[0] == e[0] and (a[1] == e[1] or (a[1].is_a?(Hash) and a[1][:value] == e[1]))) }
end
end
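# Placeholder for "any token here" in be_like expectations (nil entries are skipped).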
__ = nil
def tokens_scanned_from(s)
lexer = Puppet::Parser::Lexer.new
lexer.string = s
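# drop the trailing end-of-stream token that fullscan appends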
lexer.fullscan[0..-2]
end
describe Puppet::Parser::Lexer do
describe "when reading strings" do
before { @lexer = Puppet::Parser::Lexer.new }
it "should increment the line count for every carriage return in the string" do
@lexer.line = 10
@lexer.string = "this\nis\natest'"
@lexer.slurpstring("'")
@lexer.line.should == 12
end
it "should not increment the line count for escapes in the string" do
@lexer.line = 10
@lexer.string = "this\\nis\\natest'"
@lexer.slurpstring("'")
@lexer.line.should == 10
end
it "should not think the terminator is escaped, when preceeded by an even number of backslashes" do
@lexer.line = 10
@lexer.string = "here\nis\nthe\nstring\\\\'with\nextra\njunk"
@lexer.slurpstring("'")
@lexer.line.should == 13
end
{
'r' => "\r",
'n' => "\n",
't' => "\t",
's' => " "
}.each do |esc, expected_result|
it "should recognize \\#{esc} sequence" do
@lexer.string = "\\#{esc}'"
@lexer.slurpstring("'")[0].should == expected_result
end
end
end
end
describe Puppet::Parser::Lexer::Token do
before do
@token = Puppet::Parser::Lexer::Token.new(%r{something}, :NAME)
end
[:regex, :name, :string, :skip, :incr_line, :skip_text, :accumulate].each do |param|
it "should have a #{param.to_s} reader" do
@token.should be_respond_to(param)
end
it "should have a #{param.to_s} writer" do
@token.should be_respond_to(param.to_s + "=")
end
end
end
describe Puppet::Parser::Lexer::Token, "when initializing" do
it "should create a regex if the first argument is a string" do
Puppet::Parser::Lexer::Token.new("something", :NAME).regex.should == %r{something}
end
it "should set the string if the first argument is one" do
Puppet::Parser::Lexer::Token.new("something", :NAME).string.should == "something"
end
it "should set the regex if the first argument is one" do
Puppet::Parser::Lexer::Token.new(%r{something}, :NAME).regex.should == %r{something}
end
end
describe Puppet::Parser::Lexer::TokenList do
before do
@list = Puppet::Parser::Lexer::TokenList.new
end
it "should have a method for retrieving tokens by the name" do
token = @list.add_token :name, "whatever"
@list[:name].should equal(token)
end
it "should have a method for retrieving string tokens by the string" do
token = @list.add_token :name, "whatever"
@list.lookup("whatever").should equal(token)
end
it "should add tokens to the list when directed" do
token = @list.add_token :name, "whatever"
@list[:name].should equal(token)
end
it "should have a method for adding multiple tokens at once" do
@list.add_tokens "whatever" => :name, "foo" => :bar
@list[:name].should_not be_nil
@list[:bar].should_not be_nil
end
it "should fail to add tokens sharing a name with an existing token" do
@list.add_token :name, "whatever"
expect { @list.add_token :name, "whatever" }.to raise_error(ArgumentError)
end
it "should set provided options on tokens being added" do
token = @list.add_token :name, "whatever", :skip_text => true
token.skip_text.should == true
end
it "should define any provided blocks as a :convert method" do
token = @list.add_token(:name, "whatever") do "foo" end
token.convert.should == "foo"
end
it "should store all string tokens in the :string_tokens list" do
one = @list.add_token(:name, "1")
@list.string_tokens.should be_include(one)
end
it "should store all regex tokens in the :regex_tokens list" do
one = @list.add_token(:name, %r{one})
@list.regex_tokens.should be_include(one)
end
it "should not store string tokens in the :regex_tokens list" do
one = @list.add_token(:name, "1")
@list.regex_tokens.should_not be_include(one)
end
it "should not store regex tokens in the :string_tokens list" do
one = @list.add_token(:name, %r{one})
@list.string_tokens.should_not be_include(one)
end
it "should sort the string tokens inversely by length when asked" do
one = @list.add_token(:name, "1")
two = @list.add_token(:other, "12")
@list.sort_tokens
@list.string_tokens.should == [two, one]
end
end
describe Puppet::Parser::Lexer::TOKENS do
before do
@lexer = Puppet::Parser::Lexer.new
end
{
:LBRACK => '[',
:RBRACK => ']',
:LBRACE => '{',
:RBRACE => '}',
:LPAREN => '(',
:RPAREN => ')',
:EQUALS => '=',
:ISEQUAL => '==',
:GREATEREQUAL => '>=',
:GREATERTHAN => '>',
:LESSTHAN => '<',
:LESSEQUAL => '<=',
:NOTEQUAL => '!=',
:NOT => '!',
:COMMA => ',',
:DOT => '.',
:COLON => ':',
:AT => '@',
:LLCOLLECT => '<<|',
:RRCOLLECT => '|>>',
:LCOLLECT => '<|',
:RCOLLECT => '|>',
:SEMIC => ';',
:QMARK => '?',
:BACKSLASH => '\\',
:FARROW => '=>',
:PARROW => '+>',
:APPENDS => '+=',
:PLUS => '+',
:MINUS => '-',
:DIV => '/',
:TIMES => '*',
:LSHIFT => '<<',
:RSHIFT => '>>',
:MATCH => '=~',
:NOMATCH => '!~',
:IN_EDGE => '->',
:OUT_EDGE => '<-',
:IN_EDGE_SUB => '~>',
:OUT_EDGE_SUB => '<~',
}.each do |name, string|
it "should have a token named #{name.to_s}" do
Puppet::Parser::Lexer::TOKENS[name].should_not be_nil
end
it "should match '#{string}' for the token #{name.to_s}" do
Puppet::Parser::Lexer::TOKENS[name].string.should == string
end
end
{
"case" => :CASE,
"class" => :CLASS,
"default" => :DEFAULT,
"define" => :DEFINE,
"import" => :IMPORT,
"if" => :IF,
"elsif" => :ELSIF,
"else" => :ELSE,
"inherits" => :INHERITS,
"node" => :NODE,
"and" => :AND,
"or" => :OR,
"undef" => :UNDEF,
"false" => :FALSE,
"true" => :TRUE,
"in" => :IN,
"unless" => :UNLESS,
}.each do |string, name|
it "should have a keyword named #{name.to_s}" do
Puppet::Parser::Lexer::KEYWORDS[name].should_not be_nil
end
it "should have the keyword for #{name.to_s} set to #{string}" do
Puppet::Parser::Lexer::KEYWORDS[name].string.should == string
end
end
# These tokens' strings don't matter, just that the tokens exist.
[:STRING, :DQPRE, :DQMID, :DQPOST, :BOOLEAN, :NAME, :NUMBER, :COMMENT, :MLCOMMENT, :RETURN, :SQUOTE, :DQUOTE, :VARIABLE].each do |name|
it "should have a token named #{name.to_s}" do
Puppet::Parser::Lexer::TOKENS[name].should_not be_nil
end
end
end
describe Puppet::Parser::Lexer::TOKENS[:CLASSREF] do
before { @token = Puppet::Parser::Lexer::TOKENS[:CLASSREF] }
it "should match against single upper-case alpha-numeric terms" do
@token.regex.should =~ "One"
end
it "should match against upper-case alpha-numeric terms separated by double colons" do
@token.regex.should =~ "One::Two"
end
it "should match against many upper-case alpha-numeric terms separated by double colons" do
@token.regex.should =~ "One::Two::Three::Four::Five"
end
it "should match against upper-case alpha-numeric terms prefixed by double colons" do
@token.regex.should =~ "::One"
end
end
describe Puppet::Parser::Lexer::TOKENS[:NAME] do
before { @token = Puppet::Parser::Lexer::TOKENS[:NAME] }
it "should match against lower-case alpha-numeric terms" do
@token.regex.should =~ "one-two"
end
it "should return itself and the value if the matched term is not a keyword" do
Puppet::Parser::Lexer::KEYWORDS.expects(:lookup).returns(nil)
- @token.convert(stub("lexer"), "myval").should == [Puppet::Parser::Lexer::TOKENS[:NAME], "myval"]
+ lexer = stub("lexer")
+ @token.convert(lexer, "myval").should == [Puppet::Parser::Lexer::TOKENS[:NAME], "myval"]
end
it "should return the keyword token and the value if the matched term is a keyword" do
keyword = stub 'keyword', :name => :testing
Puppet::Parser::Lexer::KEYWORDS.expects(:lookup).returns(keyword)
@token.convert(stub("lexer"), "myval").should == [keyword, "myval"]
end
it "should return the BOOLEAN token and 'true' if the matched term is the string 'true'" do
keyword = stub 'keyword', :name => :TRUE
Puppet::Parser::Lexer::KEYWORDS.expects(:lookup).returns(keyword)
@token.convert(stub('lexer'), "true").should == [Puppet::Parser::Lexer::TOKENS[:BOOLEAN], true]
end
it "should return the BOOLEAN token and 'false' if the matched term is the string 'false'" do
keyword = stub 'keyword', :name => :FALSE
Puppet::Parser::Lexer::KEYWORDS.expects(:lookup).returns(keyword)
@token.convert(stub('lexer'), "false").should == [Puppet::Parser::Lexer::TOKENS[:BOOLEAN], false]
end
it "should match against lower-case alpha-numeric terms separated by double colons" do
@token.regex.should =~ "one::two"
end
it "should match against many lower-case alpha-numeric terms separated by double colons" do
@token.regex.should =~ "one::two::three::four::five"
end
it "should match against lower-case alpha-numeric terms prefixed by double colons" do
@token.regex.should =~ "::one"
end
it "should match against nested terms starting with numbers" do
@token.regex.should =~ "::1one::2two::3three"
end
end
describe Puppet::Parser::Lexer::TOKENS[:NUMBER] do
before do
@token = Puppet::Parser::Lexer::TOKENS[:NUMBER]
@regex = @token.regex
end
it "should match against numeric terms" do
@regex.should =~ "2982383139"
end
it "should match against float terms" do
@regex.should =~ "29823.235"
end
it "should match against hexadecimal terms" do
@regex.should =~ "0xBEEF0023"
end
it "should match against float with exponent terms" do
@regex.should =~ "10e23"
end
it "should match against float terms with negative exponents" do
@regex.should =~ "10e-23"
end
it "should match against float terms with fractional parts and exponent" do
@regex.should =~ "1.234e23"
end
it "should return the NAME token and the value" do
@token.convert(stub("lexer"), "myval").should == [Puppet::Parser::Lexer::TOKENS[:NAME], "myval"]
end
end
describe Puppet::Parser::Lexer::TOKENS[:COMMENT] do
before { @token = Puppet::Parser::Lexer::TOKENS[:COMMENT] }
it "should match against lines starting with '#'" do
@token.regex.should =~ "# this is a comment"
end
it "should be marked to get skipped" do
@token.skip?.should be_true
end
it "should be marked to accumulate" do
@token.accumulate?.should be_true
end
it "'s block should return the comment without the #" do
@token.convert(@lexer,"# this is a comment")[1].should == "this is a comment"
end
end
describe Puppet::Parser::Lexer::TOKENS[:MLCOMMENT] do
before do
@token = Puppet::Parser::Lexer::TOKENS[:MLCOMMENT]
@lexer = stub 'lexer', :line => 0
end
it "should match against lines enclosed with '/*' and '*/'" do
@token.regex.should =~ "/* this is a comment */"
end
it "should match multiple lines enclosed with '/*' and '*/'" do
@token.regex.should =~ """/*
this is a comment
*/"""
end
it "should increase the lexer current line number by the amount of lines spanned by the comment" do
@lexer.expects(:line=).with(2)
@token.convert(@lexer, "1\n2\n3")
end
it "should not greedily match comments" do
match = @token.regex.match("/* first */ word /* second */")
match[1].should == " first "
end
it "should be marked to accumulate" do
@token.accumulate?.should be_true
end
it "'s block should return the comment without the comment marks" do
@lexer.stubs(:line=).with(0)
@token.convert(@lexer,"/* this is a comment */")[1].should == "this is a comment"
end
end
describe Puppet::Parser::Lexer::TOKENS[:RETURN] do
before { @token = Puppet::Parser::Lexer::TOKENS[:RETURN] }
it "should match against carriage returns" do
@token.regex.should =~ "\n"
end
it "should be marked to initiate text skipping" do
@token.skip_text.should be_true
end
it "should be marked to increment the line" do
@token.incr_line.should be_true
end
end
shared_examples_for "handling `-` in standard variable names" do |prefix|
# Watch out - a regex might match a *prefix* on these, not just the whole
# word, so make sure you don't have false positive or negative results based
# on that.
legal = %w{f foo f::b foo::b f::bar foo::bar 3 foo3 3foo}
illegal = %w{f- f-o -f f::-o f::o- f::o-o}
["", "::"].each do |global_scope|
legal.each do |name|
var = prefix + global_scope + name
it "should accept #{var.inspect} as a valid variable name" do
(subject.regex.match(var) || [])[0].should == var
end
end
illegal.each do |name|
var = prefix + global_scope + name
it "when `variable_with_dash` is disabled it should NOT accept #{var.inspect} as a valid variable name" do
Puppet[:allow_variables_with_dashes] = false
(subject.regex.match(var) || [])[0].should_not == var
end
it "when `variable_with_dash` is enabled it should NOT accept #{var.inspect} as a valid variable name" do
Puppet[:allow_variables_with_dashes] = true
(subject.regex.match(var) || [])[0].should_not == var
end
end
end
end
describe Puppet::Parser::Lexer::TOKENS[:DOLLAR_VAR] do
its(:skip_text) { should be_false }
its(:incr_line) { should be_false }
it_should_behave_like "handling `-` in standard variable names", '$'
end
describe Puppet::Parser::Lexer::TOKENS[:VARIABLE] do
its(:skip_text) { should be_false }
its(:incr_line) { should be_false }
it_should_behave_like "handling `-` in standard variable names", ''
end
describe "the horrible deprecation / compatibility variables with dashes" do
NamesWithDashes = %w{f- f-o -f f::-o f::o- f::o-o}
{ Puppet::Parser::Lexer::TOKENS[:DOLLAR_VAR_WITH_DASH] => '$',
Puppet::Parser::Lexer::TOKENS[:VARIABLE_WITH_DASH] => ''
}.each do |token, prefix|
describe token do
its(:skip_text) { should be_false }
its(:incr_line) { should be_false }
context "when compatibly is disabled" do
before :each do Puppet[:allow_variables_with_dashes] = false end
Puppet::Parser::Lexer::TOKENS.each do |name, value|
it "should be unacceptable after #{name}" do
token.acceptable?(:after => name).should be_false
end
end
# Yes, this should still *match*, just not be acceptable.
NamesWithDashes.each do |name|
["", "::"].each do |global_scope|
var = prefix + global_scope + name
it "should match #{var.inspect}" do
subject.regex.match(var).to_a.should == [var]
end
end
end
end
context "when compatibility is enabled" do
before :each do Puppet[:allow_variables_with_dashes] = true end
it "should be acceptable after DQPRE" do
token.acceptable?(:after => :DQPRE).should be_true
end
NamesWithDashes.each do |name|
["", "::"].each do |global_scope|
var = prefix + global_scope + name
it "should match #{var.inspect}" do
subject.regex.match(var).to_a.should == [var]
end
end
end
end
end
end
context "deprecation warnings" do
before :each do Puppet[:allow_variables_with_dashes] = true end
it "should match a top level variable" do
Puppet.expects(:deprecation_warning).once
tokens_scanned_from('$foo-bar').should == [
[:VARIABLE, { :value => 'foo-bar', :line => 1 }]
]
end
it "does not warn about a variable without a dash" do
Puppet.expects(:deprecation_warning).never
tokens_scanned_from('$c').should == [
[:VARIABLE, { :value => "c", :line => 1 }]
]
end
it "does not warn about referencing a class name that contains a dash" do
Puppet.expects(:deprecation_warning).never
tokens_scanned_from('foo-bar').should == [
[:NAME, { :value => "foo-bar", :line => 1 }]
]
end
it "warns about reference to variable" do
Puppet.expects(:deprecation_warning).once
tokens_scanned_from('$::foo-bar::baz-quux').should == [
[:VARIABLE, { :value => "::foo-bar::baz-quux", :line => 1 }]
]
end
it "warns about reference to variable interpolated in a string" do
Puppet.expects(:deprecation_warning).once
tokens_scanned_from('"$::foo-bar::baz-quux"').should == [
[:DQPRE, { :value => "", :line => 1 }],
[:VARIABLE, { :value => "::foo-bar::baz-quux", :line => 1 }],
[:DQPOST, { :value => "", :line => 1 }],
]
end
it "warns about reference to variable interpolated in a string as an expression" do
Puppet.expects(:deprecation_warning).once
tokens_scanned_from('"${::foo-bar::baz-quux}"').should == [
[:DQPRE, { :value => "", :line => 1 }],
[:VARIABLE, { :value => "::foo-bar::baz-quux", :line => 1 }],
[:DQPOST, { :value => "", :line => 1 }],
]
end
end
end
describe Puppet::Parser::Lexer,"when lexing strings" do
{
%q{'single quoted string')} => [[:STRING,'single quoted string']],
%q{"double quoted string"} => [[:STRING,'double quoted string']],
%q{'single quoted string with an escaped "\\'"'} => [[:STRING,'single quoted string with an escaped "\'"']],
%q{'single quoted string with an escaped "\$"'} => [[:STRING,'single quoted string with an escaped "\$"']],
%q{'single quoted string with an escaped "\."'} => [[:STRING,'single quoted string with an escaped "\."']],
%q{'single quoted string with an escaped "\r\n"'} => [[:STRING,'single quoted string with an escaped "\r\n"']],
%q{'single quoted string with an escaped "\n"'} => [[:STRING,'single quoted string with an escaped "\n"']],
%q{'single quoted string with an escaped "\\\\"'} => [[:STRING,'single quoted string with an escaped "\\\\"']],
%q{"string with an escaped '\\"'"} => [[:STRING,"string with an escaped '\"'"]],
%q{"string with an escaped '\\$'"} => [[:STRING,"string with an escaped '$'"]],
%Q{"string with a line ending with a backslash: \\\nfoo"} => [[:STRING,"string with a line ending with a backslash: foo"]],
%q{"string with $v (but no braces)"} => [[:DQPRE,"string with "],[:VARIABLE,'v'],[:DQPOST,' (but no braces)']],
%q["string with ${v} in braces"] => [[:DQPRE,"string with "],[:VARIABLE,'v'],[:DQPOST,' in braces']],
%q["string with ${qualified::var} in braces"] => [[:DQPRE,"string with "],[:VARIABLE,'qualified::var'],[:DQPOST,' in braces']],
%q{"string with $v and $v (but no braces)"} => [[:DQPRE,"string with "],[:VARIABLE,"v"],[:DQMID," and "],[:VARIABLE,"v"],[:DQPOST," (but no braces)"]],
%q["string with ${v} and ${v} in braces"] => [[:DQPRE,"string with "],[:VARIABLE,"v"],[:DQMID," and "],[:VARIABLE,"v"],[:DQPOST," in braces"]],
%q["string with ${'a nested single quoted string'} inside it."] => [[:DQPRE,"string with "],[:STRING,'a nested single quoted string'],[:DQPOST,' inside it.']],
%q["string with ${['an array ',$v2]} in it."] => [[:DQPRE,"string with "],:LBRACK,[:STRING,"an array "],:COMMA,[:VARIABLE,"v2"],:RBRACK,[:DQPOST," in it."]],
%q{a simple "scanner" test} => [[:NAME,"a"],[:NAME,"simple"], [:STRING,"scanner"],[:NAME,"test"]],
%q{a simple 'single quote scanner' test} => [[:NAME,"a"],[:NAME,"simple"], [:STRING,"single quote scanner"],[:NAME,"test"]],
%q{a harder 'a $b \c"'} => [[:NAME,"a"],[:NAME,"harder"], [:STRING,'a $b \c"']],
%q{a harder "scanner test"} => [[:NAME,"a"],[:NAME,"harder"], [:STRING,"scanner test"]],
%q{a hardest "scanner \"test\""} => [[:NAME,"a"],[:NAME,"hardest"],[:STRING,'scanner "test"']],
%Q{a hardestest "scanner \\"test\\"\n"} => [[:NAME,"a"],[:NAME,"hardestest"],[:STRING,%Q{scanner "test"\n}]],
%q{function("call")} => [[:NAME,"function"],[:LPAREN,"("],[:STRING,'call'],[:RPAREN,")"]],
%q["string with ${(3+5)/4} nested math."] => [[:DQPRE,"string with "],:LPAREN,[:NAME,"3"],:PLUS,[:NAME,"5"],:RPAREN,:DIV,[:NAME,"4"],[:DQPOST," nested math."]],
%q["$$$$"] => [[:STRING,"$$$$"]],
%q["$variable"] => [[:DQPRE,""],[:VARIABLE,"variable"],[:DQPOST,""]],
%q["$var$other"] => [[:DQPRE,""],[:VARIABLE,"var"],[:DQMID,""],[:VARIABLE,"other"],[:DQPOST,""]],
%q["foo$bar$"] => [[:DQPRE,"foo"],[:VARIABLE,"bar"],[:DQPOST,"$"]],
%q["foo$$bar"] => [[:DQPRE,"foo$"],[:VARIABLE,"bar"],[:DQPOST,""]],
%q[""] => [[:STRING,""]],
%q["123 456 789 0"] => [[:STRING,"123 456 789 0"]],
%q["${123} 456 $0"] => [[:DQPRE,""],[:VARIABLE,"123"],[:DQMID," 456 "],[:VARIABLE,"0"],[:DQPOST,""]],
%q["$foo::::bar"] => [[:DQPRE,""],[:VARIABLE,"foo"],[:DQPOST,"::::bar"]]
}.each { |src,expected_result|
it "should handle #{src} correctly" do
tokens_scanned_from(src).should be_like(*expected_result)
end
}
end
describe Puppet::Parser::Lexer::TOKENS[:DOLLAR_VAR] do
before { @token = Puppet::Parser::Lexer::TOKENS[:DOLLAR_VAR] }
it "should match against alpha words prefixed with '$'" do
@token.regex.should =~ '$this_var'
end
it "should return the VARIABLE token and the variable name stripped of the '$'" do
@token.convert(stub("lexer"), "$myval").should == [Puppet::Parser::Lexer::TOKENS[:VARIABLE], "myval"]
end
end
describe Puppet::Parser::Lexer::TOKENS[:REGEX] do
before { @token = Puppet::Parser::Lexer::TOKENS[:REGEX] }
it "should match against any expression enclosed in //" do
@token.regex.should =~ '/this is a regex/'
end
it 'should not match if there is \n in the regex' do
@token.regex.should_not =~ "/this is \n a regex/"
end
describe "when scanning" do
it "should not consider escaped slashes to be the end of a regex" do
tokens_scanned_from("$x =~ /this \\/ foo/").should be_like(__,__,[:REGEX,%r{this / foo}])
end
it "should not lex chained division as a regex" do
tokens_scanned_from("$x = $a/$b/$c").collect { |name, data| name }.should_not be_include( :REGEX )
end
it "should accept a regular expression after NODE" do
tokens_scanned_from("node /www.*\.mysite\.org/").should be_like(__,[:REGEX,Regexp.new("www.*\.mysite\.org")])
end
it "should accept regular expressions in a CASE" do
s = %q{case $variable {
"something": {$othervar = 4096 / 2}
/regex/: {notice("this notably sucks")}
}
}
tokens_scanned_from(s).should be_like(
:CASE,:VARIABLE,:LBRACE,:STRING,:COLON,:LBRACE,:VARIABLE,:EQUALS,:NAME,:DIV,:NAME,:RBRACE,[:REGEX,/regex/],:COLON,:LBRACE,:NAME,:LPAREN,:STRING,:RPAREN,:RBRACE,:RBRACE
)
end
end
it "should return the REGEX token and a Regexp" do
@token.convert(stub("lexer"), "/myregex/").should == [Puppet::Parser::Lexer::TOKENS[:REGEX], Regexp.new(/myregex/)]
end
end
describe Puppet::Parser::Lexer, "when lexing comments" do
before { @lexer = Puppet::Parser::Lexer.new }
it "should accumulate token in munge_token" do
token = stub 'token', :skip => true, :accumulate? => true, :incr_line => nil, :skip_text => false
token.stubs(:convert).with(@lexer, "# this is a comment").returns([token, " this is a comment"])
@lexer.munge_token(token, "# this is a comment")
@lexer.munge_token(token, "# this is a comment")
@lexer.getcomment.should == " this is a comment\n this is a comment\n"
end
it "should add a new comment stack level on LBRACE" do
@lexer.string = "{"
@lexer.expects(:commentpush)
@lexer.fullscan
end
it "should add a new comment stack level on LPAREN" do
@lexer.string = "("
@lexer.expects(:commentpush)
@lexer.fullscan
end
it "should pop the current comment on RPAREN" do
@lexer.string = ")"
@lexer.expects(:commentpop)
@lexer.fullscan
end
it "should return the current comments on getcomment" do
@lexer.string = "# comment"
@lexer.fullscan
@lexer.getcomment.should == "comment\n"
end
it "should discard the previous comments on blank line" do
@lexer.string = "# 1\n\n# 2"
@lexer.fullscan
@lexer.getcomment.should == "2\n"
end
it "should skip whitespace before lexing the next token after a non-token" do
tokens_scanned_from("/* 1\n\n */ \ntest").should be_like([:NAME, "test"])
end
it "should not return comments seen after the current line" do
@lexer.string = "# 1\n\n# 2"
@lexer.fullscan
@lexer.getcomment(1).should == ""
end
it "should return a comment seen before the current line" do
@lexer.string = "# 1\n# 2"
@lexer.fullscan
@lexer.getcomment(2).should == "1\n2\n"
end
end
# FIXME: We need to rewrite all of these tests, but I just don't want to take the time right now.
describe "Puppet::Parser::Lexer in the old tests" do
before { @lexer = Puppet::Parser::Lexer.new }
it "should do simple lexing" do
{
%q{\\} => [[:BACKSLASH,"\\"]],
%q{simplest scanner test} => [[:NAME,"simplest"],[:NAME,"scanner"],[:NAME,"test"]],
%Q{returned scanner test\n} => [[:NAME,"returned"],[:NAME,"scanner"],[:NAME,"test"]]
}.each { |source,expected|
tokens_scanned_from(source).should be_like(*expected)
}
end
it "should fail usefully" do
expect { tokens_scanned_from('^') }.to raise_error(RuntimeError)
end
it "should fail if the string is not set" do
expect { @lexer.fullscan }.to raise_error(Puppet::LexError)
end
it "should correctly identify keywords" do
tokens_scanned_from("case").should be_like([:CASE, "case"])
end
it "should correctly parse class references" do
%w{Many Different Words A Word}.each { |t| tokens_scanned_from(t).should be_like([:CLASSREF,t])}
end
# #774
it "should correctly parse namespaced class refernces token" do
%w{Foo ::Foo Foo::Bar ::Foo::Bar}.each { |t| tokens_scanned_from(t).should be_like([:CLASSREF, t]) }
end
it "should correctly parse names" do
%w{this is a bunch of names}.each { |t| tokens_scanned_from(t).should be_like([:NAME,t]) }
end
it "should correctly parse names with numerals" do
%w{1name name1 11names names11}.each { |t| tokens_scanned_from(t).should be_like([:NAME,t]) }
end
it "should correctly parse empty strings" do
expect { tokens_scanned_from('$var = ""') }.to_not raise_error
end
it "should correctly parse virtual resources" do
tokens_scanned_from("@type {").should be_like([:AT, "@"], [:NAME, "type"], [:LBRACE, "{"])
end
it "should correctly deal with namespaces" do
@lexer.string = %{class myclass}
@lexer.fullscan
@lexer.namespace.should == "myclass"
@lexer.namepop
@lexer.namespace.should == ""
@lexer.string = "class base { class sub { class more"
@lexer.fullscan
@lexer.namespace.should == "base::sub::more"
@lexer.namepop
@lexer.namespace.should == "base::sub"
end
it "should not put class instantiation on the namespace" do
@lexer.string = "class base { class sub { class { mode"
@lexer.fullscan
@lexer.namespace.should == "base::sub"
end
it "should correctly handle fully qualified names" do
@lexer.string = "class base { class sub::more {"
@lexer.fullscan
@lexer.namespace.should == "base::sub::more"
@lexer.namepop
@lexer.namespace.should == "base"
end
it "should correctly lex variables" do
["$variable", "$::variable", "$qualified::variable", "$further::qualified::variable"].each do |string|
tokens_scanned_from(string).should be_like([:VARIABLE,string.sub(/^\$/,'')])
end
end
it "should end variables at `-`" do
tokens_scanned_from('$hyphenated-variable').
should be_like [:VARIABLE, "hyphenated"], [:MINUS, '-'], [:NAME, 'variable']
end
it "should not include whitespace in a variable" do
tokens_scanned_from("$foo bar").should_not be_like([:VARIABLE, "foo bar"])
end
it "should not include excess colons in a variable" do
tokens_scanned_from("$foo::::bar").should_not be_like([:VARIABLE, "foo::::bar"])
end
end
+describe 'Puppet::Parser::Lexer handles reserved words' do
+ ['function', 'private', 'attr', 'type'].each do |reserved_bare_word|
+ it "by delivering '#{reserved_bare_word}' as a bare word" do
+ expect(tokens_scanned_from(reserved_bare_word)).to eq([[:NAME, {:value=>reserved_bare_word, :line => 1}]])
+ end
+ end
+end
+
describe "Puppet::Parser::Lexer in the old tests when lexing example files" do
my_fixtures('*.pp') do |file|
it "should correctly lex #{file}" do
lexer = Puppet::Parser::Lexer.new
lexer.file = file
expect { lexer.fullscan }.to_not raise_error
end
end
end
describe "when trying to lex a non-existent file" do
include PuppetSpec::Files
it "should return an empty list of tokens" do
lexer = Puppet::Parser::Lexer.new
lexer.file = nofile = tmpfile('lexer')
Puppet::FileSystem.exist?(nofile).should == false
lexer.fullscan.should == [[false,false]]
end
end
diff --git a/spec/unit/parser/methods/map_spec.rb b/spec/unit/parser/methods/map_spec.rb
deleted file mode 100644
index 7f8e79789..000000000
--- a/spec/unit/parser/methods/map_spec.rb
+++ /dev/null
@@ -1,184 +0,0 @@
-require 'puppet'
-require 'spec_helper'
-require 'puppet_spec/compiler'
-
-require 'unit/parser/methods/shared'
-
-describe 'the map method' do
- include PuppetSpec::Compiler
-
- before :each do
- Puppet[:parser] = "future"
- end
-
- context "using future parser" do
- it 'map on an array (multiplying each value by 2)' do
- catalog = compile_to_catalog(<<-MANIFEST)
- $a = [1,2,3]
- $a.map |$x|{ $x*2}.each |$v|{
- file { "/file_$v": ensure => present }
- }
- MANIFEST
-
- catalog.resource(:file, "/file_2")['ensure'].should == 'present'
- catalog.resource(:file, "/file_4")['ensure'].should == 'present'
- catalog.resource(:file, "/file_6")['ensure'].should == 'present'
- end
-
- it 'map on an enumerable type (multiplying each value by 2)' do
- catalog = compile_to_catalog(<<-MANIFEST)
- $a = Integer[1,3]
- $a.map |$x|{ $x*2}.each |$v|{
- file { "/file_$v": ensure => present }
- }
- MANIFEST
-
- catalog.resource(:file, "/file_2")['ensure'].should == 'present'
- catalog.resource(:file, "/file_4")['ensure'].should == 'present'
- catalog.resource(:file, "/file_6")['ensure'].should == 'present'
- end
-
- it 'map on an integer (multiply each by 3)' do
- catalog = compile_to_catalog(<<-MANIFEST)
- 3.map |$x|{ $x*3}.each |$v|{
- file { "/file_$v": ensure => present }
- }
- MANIFEST
-
- catalog.resource(:file, "/file_0")['ensure'].should == 'present'
- catalog.resource(:file, "/file_3")['ensure'].should == 'present'
- catalog.resource(:file, "/file_6")['ensure'].should == 'present'
- end
-
- it 'map on a string' do
- catalog = compile_to_catalog(<<-MANIFEST)
- $a = {a=>x, b=>y}
- "ab".map |$x|{$a[$x]}.each |$v|{
- file { "/file_$v": ensure => present }
- }
- MANIFEST
-
- catalog.resource(:file, "/file_x")['ensure'].should == 'present'
- catalog.resource(:file, "/file_y")['ensure'].should == 'present'
- end
-
- it 'map on an array (multiplying value by 10 in even index position)' do
- catalog = compile_to_catalog(<<-MANIFEST)
- $a = [1,2,3]
- $a.map |$i, $x|{ if $i % 2 == 0 {$x} else {$x*10}}.each |$v|{
- file { "/file_$v": ensure => present }
- }
- MANIFEST
-
- catalog.resource(:file, "/file_1")['ensure'].should == 'present'
- catalog.resource(:file, "/file_20")['ensure'].should == 'present'
- catalog.resource(:file, "/file_3")['ensure'].should == 'present'
- end
-
- it 'map on a hash selecting keys' do
- catalog = compile_to_catalog(<<-MANIFEST)
- $a = {'a'=>1,'b'=>2,'c'=>3}
- $a.map |$x|{ $x[0]}.each |$k|{
- file { "/file_$k": ensure => present }
- }
- MANIFEST
-
- catalog.resource(:file, "/file_a")['ensure'].should == 'present'
- catalog.resource(:file, "/file_b")['ensure'].should == 'present'
- catalog.resource(:file, "/file_c")['ensure'].should == 'present'
- end
-
- it 'map on a hash selecting keys - using two block parameters' do
- catalog = compile_to_catalog(<<-MANIFEST)
- $a = {'a'=>1,'b'=>2,'c'=>3}
- $a.map |$k,$v|{ file { "/file_$k": ensure => present }
- }
- MANIFEST
-
- catalog.resource(:file, "/file_a")['ensure'].should == 'present'
- catalog.resource(:file, "/file_b")['ensure'].should == 'present'
- catalog.resource(:file, "/file_c")['ensure'].should == 'present'
- end
-
- it 'each on a hash selecting value' do
- catalog = compile_to_catalog(<<-MANIFEST)
- $a = {'a'=>1,'b'=>2,'c'=>3}
- $a.map |$x|{ $x[1]}.each |$k|{ file { "/file_$k": ensure => present } }
- MANIFEST
-
- catalog.resource(:file, "/file_1")['ensure'].should == 'present'
- catalog.resource(:file, "/file_2")['ensure'].should == 'present'
- catalog.resource(:file, "/file_3")['ensure'].should == 'present'
- end
-
- it 'each on a hash selecting value - using two bloc parameters' do
- catalog = compile_to_catalog(<<-MANIFEST)
- $a = {'a'=>1,'b'=>2,'c'=>3}
- $a.map |$k,$v|{ file { "/file_$v": ensure => present } }
- MANIFEST
-
- catalog.resource(:file, "/file_1")['ensure'].should == 'present'
- catalog.resource(:file, "/file_2")['ensure'].should == 'present'
- catalog.resource(:file, "/file_3")['ensure'].should == 'present'
- end
-
- context "handles data type corner cases" do
- it "map gets values that are false" do
- catalog = compile_to_catalog(<<-MANIFEST)
- $a = [false,false]
- $a.map |$x| { $x }.each |$i, $v| {
- file { "/file_$i.$v": ensure => present }
- }
- MANIFEST
-
- catalog.resource(:file, "/file_0.false")['ensure'].should == 'present'
- catalog.resource(:file, "/file_1.false")['ensure'].should == 'present'
- end
-
- it "map gets values that are nil" do
- Puppet::Parser::Functions.newfunction(:nil_array, :type => :rvalue) do |args|
- [nil]
- end
- catalog = compile_to_catalog(<<-MANIFEST)
- $a = nil_array()
- $a.map |$x| { $x }.each |$i, $v| {
- file { "/file_$i.$v": ensure => present }
- }
- MANIFEST
-
- catalog.resource(:file, "/file_0.")['ensure'].should == 'present'
- end
-
- it "map gets values that are undef" do
- catalog = compile_to_catalog(<<-MANIFEST)
- $a = [$does_not_exist]
- $a.map |$x = "something"| { $x }.each |$i, $v| {
- file { "/file_$i.$v": ensure => present }
- }
- MANIFEST
- catalog.resource(:file, "/file_0.something")['ensure'].should == 'present'
- end
- end
-
- context 'map checks arguments and' do
- it 'raises an error when block has more than 2 argument' do
- expect do
- compile_to_catalog(<<-MANIFEST)
- [1].map |$index, $x, $yikes|{ }
- MANIFEST
- end.to raise_error(Puppet::Error, /block must define at most two parameters/)
- end
-
- it 'raises an error when block has fewer than 1 argument' do
- expect do
- compile_to_catalog(<<-MANIFEST)
- [1].map || { }
- MANIFEST
- end.to raise_error(Puppet::Error, /block must define at least one parameter/)
- end
- end
-
- it_should_behave_like 'all iterative functions argument checks', 'map'
- it_should_behave_like 'all iterative functions hash handling', 'map'
- end
-end
diff --git a/spec/unit/parser/methods/shared.rb b/spec/unit/parser/methods/shared.rb
deleted file mode 100644
index 42cfd2359..000000000
--- a/spec/unit/parser/methods/shared.rb
+++ /dev/null
@@ -1,45 +0,0 @@
-
-shared_examples_for 'all iterative functions hash handling' do |func|
- it 'passes a hash entry as an array of the key and value' do
- catalog = compile_to_catalog(<<-MANIFEST)
- {a=>1}.#{func} |$v| { notify { "${v[0]} ${v[1]}": } }
- MANIFEST
-
- catalog.resource(:notify, "a 1").should_not be_nil
- end
-end
-
-shared_examples_for 'all iterative functions argument checks' do |func|
-
- it 'raises an error when used against an unsupported type' do
- expect do
- compile_to_catalog(<<-MANIFEST)
- 3.14.#{func} |$v| { }
- MANIFEST
- end.to raise_error(Puppet::Error, /must be something enumerable/)
- end
-
- it 'raises an error when called with any parameters besides a block' do
- expect do
- compile_to_catalog(<<-MANIFEST)
- [1].#{func}(1) |$v| { }
- MANIFEST
- end.to raise_error(Puppet::Error, /Wrong number of arguments/)
- end
-
- it 'raises an error when called without a block' do
- expect do
- compile_to_catalog(<<-MANIFEST)
- [1].#{func}()
- MANIFEST
- end.to raise_error(Puppet::Error, /Wrong number of arguments/)
- end
-
- it 'raises an error when called without a block' do
- expect do
- compile_to_catalog(<<-MANIFEST)
- [1].#{func}(1)
- MANIFEST
- end.to raise_error(Puppet::Error, /must be a parameterized block/)
- end
-end
diff --git a/spec/unit/parser/type_loader_spec.rb b/spec/unit/parser/type_loader_spec.rb
index 659ffa942..5454528a7 100755
--- a/spec/unit/parser/type_loader_spec.rb
+++ b/spec/unit/parser/type_loader_spec.rb
@@ -1,225 +1,224 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet/parser/type_loader'
require 'puppet/parser/parser_factory'
-require 'puppet/parser/e_parser_adapter'
require 'puppet_spec/modules'
require 'puppet_spec/files'
describe Puppet::Parser::TypeLoader do
include PuppetSpec::Modules
include PuppetSpec::Files
let(:empty_hostclass) { Puppet::Parser::AST::Hostclass.new('') }
let(:loader) { Puppet::Parser::TypeLoader.new(:myenv) }
it "should support an environment" do
loader = Puppet::Parser::TypeLoader.new(:myenv)
loader.environment.name.should == :myenv
end
it "should delegate its known resource types to its environment" do
loader.known_resource_types.should be_instance_of(Puppet::Resource::TypeCollection)
end
describe "when loading names from namespaces" do
it "should do nothing if the name to import is an empty string" do
loader.try_load_fqname(:hostclass, "").should be_nil
end
it "should attempt to import each generated name" do
loader.expects(:import_from_modules).with("foo/bar").returns([])
loader.expects(:import_from_modules).with("foo").returns([])
loader.try_load_fqname(:hostclass, "foo::bar")
end
it "should attempt to load each possible name going from most to least specific" do
path_order = sequence('path')
['foo/bar/baz', 'foo/bar', 'foo'].each do |path|
Puppet::Parser::Files.expects(:find_manifests_in_modules).with(path, anything).returns([nil, []]).in_sequence(path_order)
end
loader.try_load_fqname(:hostclass, 'foo::bar::baz')
end
end
describe "when importing" do
let(:stub_parser) { stub 'Parser', :file= => nil, :parse => empty_hostclass }
before(:each) do
Puppet::Parser::ParserFactory.stubs(:parser).with(anything).returns(stub_parser)
end
it "should return immediately when imports are being ignored" do
Puppet::Parser::Files.expects(:find_manifests_in_modules).never
Puppet[:ignoreimport] = true
loader.import("foo", "/path").should be_nil
end
it "should find all manifests matching the file or pattern" do
Puppet::Parser::Files.expects(:find_manifests_in_modules).with("myfile", anything).returns ["modname", %w{one}]
loader.import("myfile", "/path")
end
it "should pass the environment when looking for files" do
Puppet::Parser::Files.expects(:find_manifests_in_modules).with(anything, loader.environment).returns ["modname", %w{one}]
loader.import("myfile", "/path")
end
it "should fail if no files are found" do
Puppet::Parser::Files.expects(:find_manifests_in_modules).returns [nil, []]
lambda { loader.import("myfile", "/path") }.should raise_error(Puppet::ImportError)
end
it "should parse each found file" do
Puppet::Parser::Files.expects(:find_manifests_in_modules).returns ["modname", [make_absolute("/one")]]
loader.expects(:parse_file).with(make_absolute("/one")).returns(Puppet::Parser::AST::Hostclass.new(''))
loader.import("myfile", "/path")
end
it "should not attempt to import files that have already been imported" do
loader = Puppet::Parser::TypeLoader.new(:myenv)
Puppet::Parser::Files.expects(:find_manifests_in_modules).twice.returns ["modname", %w{/one}]
loader.import("myfile", "/path").should_not be_empty
loader.import("myfile", "/path").should be_empty
end
end
describe "when importing all" do
before do
@base = tmpdir("base")
# Create two module path directories
@modulebase1 = File.join(@base, "first")
FileUtils.mkdir_p(@modulebase1)
@modulebase2 = File.join(@base, "second")
FileUtils.mkdir_p(@modulebase2)
Puppet[:modulepath] = "#{@modulebase1}#{File::PATH_SEPARATOR}#{@modulebase2}"
end
def mk_module(basedir, name)
PuppetSpec::Modules.create(name, basedir)
end
# We have to pass the base path so that we can
# write to modules that are in the second search path
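# For example, mk_manifests(@modulebase1, @module1, "puppet", %w{a a/b}) writes
# "#{@modulebase1}/one/manifests/a.pp" and "#{@modulebase1}/one/manifests/a/b.pp",
# containing "class one::a {}" and "class one::a::b {}" respectively.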
def mk_manifests(base, mod, type, files)
exts = {"ruby" => ".rb", "puppet" => ".pp"}
files.collect do |file|
name = mod.name + "::" + file.gsub("/", "::")
path = File.join(base, mod.name, "manifests", file + exts[type])
FileUtils.mkdir_p(File.split(path)[0])
# write out the class
if type == "ruby"
File.open(path, "w") { |f| f.print "hostclass '#{name}' do\nend" }
else
File.open(path, "w") { |f| f.print "class #{name} {}" }
end
name
end
end
it "should load all puppet manifests from all modules in the specified environment" do
@module1 = mk_module(@modulebase1, "one")
@module2 = mk_module(@modulebase2, "two")
mk_manifests(@modulebase1, @module1, "puppet", %w{a b})
mk_manifests(@modulebase2, @module2, "puppet", %w{c d})
loader.import_all
loader.environment.known_resource_types.hostclass("one::a").should be_instance_of(Puppet::Resource::Type)
loader.environment.known_resource_types.hostclass("one::b").should be_instance_of(Puppet::Resource::Type)
loader.environment.known_resource_types.hostclass("two::c").should be_instance_of(Puppet::Resource::Type)
loader.environment.known_resource_types.hostclass("two::d").should be_instance_of(Puppet::Resource::Type)
end
it "should load all ruby manifests from all modules in the specified environment" do
Puppet.expects(:deprecation_warning).at_least(1)
@module1 = mk_module(@modulebase1, "one")
@module2 = mk_module(@modulebase2, "two")
mk_manifests(@modulebase1, @module1, "ruby", %w{a b})
mk_manifests(@modulebase2, @module2, "ruby", %w{c d})
loader.import_all
loader.environment.known_resource_types.hostclass("one::a").should be_instance_of(Puppet::Resource::Type)
loader.environment.known_resource_types.hostclass("one::b").should be_instance_of(Puppet::Resource::Type)
loader.environment.known_resource_types.hostclass("two::c").should be_instance_of(Puppet::Resource::Type)
loader.environment.known_resource_types.hostclass("two::d").should be_instance_of(Puppet::Resource::Type)
end
it "should not load manifests from duplicate modules later in the module path" do
@module1 = mk_module(@modulebase1, "one")
# duplicate
@module2 = mk_module(@modulebase2, "one")
mk_manifests(@modulebase1, @module1, "puppet", %w{a})
mk_manifests(@modulebase2, @module2, "puppet", %w{c})
loader.import_all
loader.environment.known_resource_types.hostclass("one::c").should be_nil
end
it "should load manifests from subdirectories" do
@module1 = mk_module(@modulebase1, "one")
mk_manifests(@modulebase1, @module1, "puppet", %w{a a/b a/b/c})
loader.import_all
loader.environment.known_resource_types.hostclass("one::a::b").should be_instance_of(Puppet::Resource::Type)
loader.environment.known_resource_types.hostclass("one::a::b::c").should be_instance_of(Puppet::Resource::Type)
end
it "should skip modules that don't have manifests" do
@module1 = mk_module(@modulebase1, "one")
@module2 = mk_module(@modulebase2, "two")
mk_manifests(@modulebase2, @module2, "ruby", %w{c d})
loader.import_all
loader.environment.known_resource_types.hostclass("one::a").should be_nil
loader.environment.known_resource_types.hostclass("two::c").should be_instance_of(Puppet::Resource::Type)
loader.environment.known_resource_types.hostclass("two::d").should be_instance_of(Puppet::Resource::Type)
end
end
describe "when parsing a file" do
it "should create a new parser instance for each file using the current environment" do
parser = stub 'Parser', :file= => nil, :parse => empty_hostclass
Puppet::Parser::ParserFactory.expects(:parser).twice.with(loader.environment).returns(parser)
loader.parse_file("/my/file")
loader.parse_file("/my/other_file")
end
it "should assign the parser its file and parse" do
parser = mock 'parser'
Puppet::Parser::ParserFactory.expects(:parser).with(loader.environment).returns(parser)
parser.expects(:file=).with("/my/file")
parser.expects(:parse).returns(empty_hostclass)
loader.parse_file("/my/file")
end
end
it "should be able to add classes to the current resource type collection" do
file = tmpfile("simple_file.pp")
File.open(file, "w") { |f| f.puts "class foo {}" }
loader.import(File.basename(file), File.dirname(file))
loader.known_resource_types.hostclass("foo").should be_instance_of(Puppet::Resource::Type)
end
end
diff --git a/spec/unit/pops/benchmark_spec.rb b/spec/unit/pops/benchmark_spec.rb
index 03c2e743d..462c03947 100644
--- a/spec/unit/pops/benchmark_spec.rb
+++ b/spec/unit/pops/benchmark_spec.rb
@@ -1,142 +1,142 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet/pops'
require 'puppet_spec/pops'
require 'puppet_spec/scope'
require 'rgen/environment'
require 'rgen/metamodel_builder'
require 'rgen/serializer/json_serializer'
require 'rgen/instantiator/json_instantiator'
describe "Benchmark", :benchmark => true do
include PuppetSpec::Pops
include PuppetSpec::Scope
def code
'if true
{
$a = 10 + 10
}
else
{
$a = "interpolate ${foo} and stuff"
}
' end
class StringWriter < String
alias write concat
end
class MyJSonSerializer < RGen::Serializer::JsonSerializer
def attributeValue(value, a)
x = super
puts "#{a.eType} value: <<#{value}>> serialize: <<#{x}>>"
x
end
end
def json_dump(model)
output = StringWriter.new
ser = MyJSonSerializer.new(output)
ser.serialize(model)
output
end
def json_load(string)
env = RGen::Environment.new
inst = RGen::Instantiator::JsonInstantiator.new(env, Puppet::Pops::Model)
inst.instantiate(string)
end
it "transformer", :profile => true do
parser = Puppet::Pops::Parser::Parser.new()
model = parser.parse_string(code).current
transformer = Puppet::Pops::Model::AstTransformer.new()
m = Benchmark.measure { 10000.times { transformer.transform(model) }}
puts "Transformer: #{m}"
end
it "validator", :profile => true do
parser = Puppet::Pops::Parser::EvaluatingParser.new()
model = parser.parse_string(code)
m = Benchmark.measure { 100000.times { parser.assert_and_report(model) }}
puts "Validator: #{m}"
end
it "parse transform", :profile => true do
parser = Puppet::Pops::Parser::Parser.new()
transformer = Puppet::Pops::Model::AstTransformer.new()
m = Benchmark.measure { 10000.times { transformer.transform(parser.parse_string(code).current) }}
puts "Parse and transform: #{m}"
end
it "parser0", :profile => true do
parser = Puppet::Parser::Parser.new('test')
m = Benchmark.measure { 10000.times { parser.parse(code) }}
puts "Parser 0: #{m}"
end
it "parser1", :profile => true do
parser = Puppet::Pops::Parser::EvaluatingParser.new()
m = Benchmark.measure { 10000.times { parser.parse_string(code) }}
puts "Parser1: #{m}"
end
it "marshal1", :profile => true do
parser = Puppet::Pops::Parser::EvaluatingParser.new()
model = parser.parse_string(code).current
dumped = Marshal.dump(model)
m = Benchmark.measure { 10000.times { Marshal.load(dumped) }}
puts "Marshal1: #{m}"
end
it "rgenjson", :profile => true do
parser = Puppet::Pops::Parser::EvaluatingParser.new()
model = parser.parse_string(code).current
dumped = json_dump(model)
m = Benchmark.measure { 10000.times { json_load(dumped) }}
puts "RGen Json: #{m}"
end
it "lexer2", :profile => true do
lexer = Puppet::Pops::Parser::Lexer2.new
m = Benchmark.measure {10000.times {lexer.string = code; lexer.fullscan }}
puts "Lexer2: #{m}"
end
it "lexer1", :profile => true do
lexer = Puppet::Pops::Parser::Lexer.new
m = Benchmark.measure {10000.times {lexer.string = code; lexer.fullscan }}
puts "Pops Lexer: #{m}"
end
it "lexer0", :profile => true do
lexer = Puppet::Parser::Lexer.new
m = Benchmark.measure {10000.times {lexer.string = code; lexer.fullscan }}
puts "Original Lexer: #{m}"
end
context "Measure Evaluator" do
- let(:parser) { Puppet::Pops::Parser::EvaluatingParser::Transitional.new }
+ let(:parser) { Puppet::Pops::Parser::EvaluatingParser.new }
let(:node) { 'node.example.com' }
let(:scope) { s = create_test_scope_for_node(node); s }
it "evaluator", :profile => true do
# Do the loop in puppet code since it otherwise drowns in setup
puppet_loop =
'Integer[0, 1000].each |$i| { if true
{
$a = 10 + 10
}
else
{
$a = "interpolate ${foo} and stuff"
}}
'
# parse once, only measure the evaluation
model = parser.parse_string(puppet_loop, __FILE__)
m = Benchmark.measure { parser.evaluate(create_test_scope_for_node(node), model) }
puts("Evaluator: #{m}")
end
end
end
diff --git a/spec/unit/pops/binder/bindings_composer_spec.rb b/spec/unit/pops/binder/bindings_composer_spec.rb
index 93bc44722..d8e4b6f9c 100644
--- a/spec/unit/pops/binder/bindings_composer_spec.rb
+++ b/spec/unit/pops/binder/bindings_composer_spec.rb
@@ -1,64 +1,66 @@
require 'spec_helper'
require 'puppet/pops'
require 'puppet_spec/pops'
describe 'BinderComposer' do
include PuppetSpec::Pops
def config_dir(config_name)
my_fixture(config_name)
end
let(:acceptor) { Puppet::Pops::Validation::Acceptor.new() }
let(:diag) { Puppet::Pops::Binder::Config::DiagnosticProducer.new(acceptor) }
let(:issues) { Puppet::Pops::Binder::Config::Issues }
let(:node) { Puppet::Node.new('localhost') }
let(:compiler) { Puppet::Parser::Compiler.new(node)}
let(:scope) { Puppet::Parser::Scope.new(compiler) }
let(:parser) { Puppet::Pops::Parser::Parser.new() }
let(:factory) { Puppet::Pops::Binder::BindingsFactory }
before(:each) do
Puppet[:binder] = true
end
it 'should load default config if no config file exists' do
diagnostics = diag
composer = Puppet::Pops::Binder::BindingsComposer.new()
composer.compose(scope)
end
context "when loading a complete configuration with modules" do
let(:config_directory) { config_dir('ok') }
it 'should load everything without errors' do
Puppet.settings[:confdir] = config_directory
Puppet.settings[:libdir] = File.join(config_directory, 'lib')
- Puppet.settings[:modulepath] = File.join(config_directory, 'modules')
- # this ensure the binder is active at the right time
- # (issues with getting a /dev/null path for "confdir" / "libdir")
- raise "Binder not active" unless scope.compiler.is_binder_active?
- diagnostics = diag
- composer = Puppet::Pops::Binder::BindingsComposer.new()
- the_scope = scope
- the_scope['fqdn'] = 'localhost'
- the_scope['environment'] = 'production'
- layered_bindings = composer.compose(scope)
- # puts Puppet::Pops::Binder::BindingsModelDumper.new().dump(layered_bindings)
- binder = Puppet::Pops::Binder::Binder.new(layered_bindings)
- injector = Puppet::Pops::Binder::Injector.new(binder)
- expect(injector.lookup(scope, 'awesome_x')).to be == 'golden'
- expect(injector.lookup(scope, 'good_x')).to be == 'golden'
- expect(injector.lookup(scope, 'rotten_x')).to be == nil
- expect(injector.lookup(scope, 'the_meaning_of_life')).to be == 42
- expect(injector.lookup(scope, 'has_funny_hat')).to be == 'the pope'
- expect(injector.lookup(scope, 'all your base')).to be == 'are belong to us'
- expect(injector.lookup(scope, 'env_meaning_of_life')).to be == 'production thinks it is 42'
- expect(injector.lookup(scope, '::quick::brown::fox')).to be == 'echo: quick brown fox'
+ Puppet.override(:environments => Puppet::Environments::Static.new(Puppet::Node::Environment.create(:production, [File.join(config_directory, 'modules')]))) do
+ # this ensure the binder is active at the right time
+ # (issues with getting a /dev/null path for "confdir" / "libdir")
+ raise "Binder not active" unless scope.compiler.is_binder_active?
+
+ diagnostics = diag
+ composer = Puppet::Pops::Binder::BindingsComposer.new()
+ the_scope = scope
+ the_scope['fqdn'] = 'localhost'
+ the_scope['environment'] = 'production'
+ layered_bindings = composer.compose(scope)
+ # puts Puppet::Pops::Binder::BindingsModelDumper.new().dump(layered_bindings)
+ binder = Puppet::Pops::Binder::Binder.new(layered_bindings)
+ injector = Puppet::Pops::Binder::Injector.new(binder)
+ expect(injector.lookup(scope, 'awesome_x')).to be == 'golden'
+ expect(injector.lookup(scope, 'good_x')).to be == 'golden'
+ expect(injector.lookup(scope, 'rotten_x')).to be == nil
+ expect(injector.lookup(scope, 'the_meaning_of_life')).to be == 42
+ expect(injector.lookup(scope, 'has_funny_hat')).to be == 'the pope'
+ expect(injector.lookup(scope, 'all your base')).to be == 'are belong to us'
+ expect(injector.lookup(scope, 'env_meaning_of_life')).to be == 'production thinks it is 42'
+ expect(injector.lookup(scope, '::quick::brown::fox')).to be == 'echo: quick brown fox'
+ end
end
end
# TODO: test error conditions (see BinderConfigChecker for what to test)
-end
\ No newline at end of file
+end
diff --git a/spec/unit/pops/binder/injector_spec.rb b/spec/unit/pops/binder/injector_spec.rb
index 32eba3260..b62ceaeb9 100644
--- a/spec/unit/pops/binder/injector_spec.rb
+++ b/spec/unit/pops/binder/injector_spec.rb
@@ -1,784 +1,786 @@
require 'spec_helper'
require 'puppet/pops'
module InjectorSpecModule
def injector(binder)
Puppet::Pops::Binder::Injector.new(binder)
end
def factory
Puppet::Pops::Binder::BindingsFactory
end
def test_layer_with_empty_bindings
factory.named_layer('test-layer', factory.named_bindings('test').model)
end
def test_layer_with_bindings(*bindings)
factory.named_layer('test-layer', *bindings)
end
def null_scope()
nil
end
def type_calculator
Puppet::Pops::Types::TypeCalculator
end
def type_factory
Puppet::Pops::Types::TypeFactory
end
# Returns a binder
#
def configured_binder
b = Puppet::Pops::Binder::Binder.new()
b
end
class TestDuck
end
class Daffy < TestDuck
end
class AngryDuck < TestDuck
# Supports assisted inject, returning a Donald duck as the default impl of Duck
def self.inject(injector, scope, binding, *args)
Donald.new()
end
end
class Donald < AngryDuck
end
class ArneAnka < AngryDuck
attr_reader :label
def initialize()
@label = 'A Swedish angry cartoon duck'
end
end
class ScroogeMcDuck < TestDuck
attr_reader :fortune
# Supports assisted inject, returning a ScroogeMcDuck with a fortune of 1, or of the first arg in args.
# Note that when injected (via instance producer, or implicit assisted inject), the inject method
# always wins.
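# For example, with no binding present, injector.lookup(scope, type_factory.ruby(ScroogeMcDuck))
# ends up calling ScroogeMcDuck.inject(injector, scope, binding) with no extra args and so
# returns a duck whose fortune is 1 (see the assisted inject examples further down).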
def self.inject(injector, scope, binding, *args)
self.new(args[0].nil? ? 1 : args[0])
end
def initialize(fortune)
@fortune = fortune
end
end
class NamedDuck < TestDuck
attr_reader :name
def initialize(name)
@name = name
end
end
# Test custom producer that on each produce returns a duck that is twice as rich as its predecessor
class ScroogeProducer < Puppet::Pops::Binder::Producers::Producer
attr_reader :next_capital
def initialize
@next_capital = 100
end
def produce(scope)
ScroogeMcDuck.new(@next_capital *= 2)
end
end
end
describe 'Injector' do
include InjectorSpecModule
let(:bindings) { factory.named_bindings('test') }
let(:scope) { null_scope()}
- let(:duck_type) { type_factory.ruby(InjectorSpecModule::TestDuck) }
-
let(:binder) { Puppet::Pops::Binder::Binder }
let(:lbinder) do
binder.new(layered_bindings)
end
+ def duck_type
+ # create distinct instances
+ type_factory.ruby(InjectorSpecModule::TestDuck)
+ end
let(:layered_bindings) { factory.layered_bindings(test_layer_with_bindings(bindings.model)) }
context 'When created' do
it 'should not raise an error if binder is configured' do
expect { injector(lbinder) }.to_not raise_error
end
it 'should create an empty injector given an empty binder' do
expect { binder.new(layered_bindings) }.to_not raise_exception
end
it "should be possible to reference the TypeCalculator" do
injector(lbinder).type_calculator.is_a?(Puppet::Pops::Types::TypeCalculator).should == true
end
it "should be possible to reference the KeyFactory" do
injector(lbinder).key_factory.is_a?(Puppet::Pops::Binder::KeyFactory).should == true
end
it "can be created using a model" do
bindings.bind.name('a_string').to('42')
injector = Puppet::Pops::Binder::Injector.create_from_model(layered_bindings)
injector.lookup(scope, 'a_string').should == '42'
end
it 'can be created using a block' do
injector = Puppet::Pops::Binder::Injector.create('test') do
bind.name('a_string').to('42')
end
injector.lookup(scope, 'a_string').should == '42'
end
it 'can be created using a hash' do
injector = Puppet::Pops::Binder::Injector.create_from_hash('test', 'a_string' => '42')
injector.lookup(scope, 'a_string').should == '42'
end
it 'can be created using an overriding injector with block' do
injector = Puppet::Pops::Binder::Injector.create('test') do
bind.name('a_string').to('42')
end
injector2 = injector.override('override') do
bind.name('a_string').to('43')
end
injector.lookup(scope, 'a_string').should == '42'
injector2.lookup(scope, 'a_string').should == '43'
end
it 'can be created using an overriding injector with hash' do
injector = Puppet::Pops::Binder::Injector.create_from_hash('test', 'a_string' => '42')
injector2 = injector.override_with_hash('override', 'a_string' => '43')
injector.lookup(scope, 'a_string').should == '42'
injector2.lookup(scope, 'a_string').should == '43'
end
it "can be created using an overriding injector with a model" do
injector = Puppet::Pops::Binder::Injector.create_from_hash('test', 'a_string' => '42')
bindings.bind.name('a_string').to('43')
injector2 = injector.override_with_model(layered_bindings)
injector.lookup(scope, 'a_string').should == '42'
injector2.lookup(scope, 'a_string').should == '43'
end
end
context "When looking up objects" do
it 'lookup(scope, name) finds bound object of type Data with given name' do
bindings.bind().name('a_string').to('42')
injector(lbinder).lookup(scope, 'a_string').should == '42'
end
context 'a block transforming the result can be given' do
it 'that transforms a found value given scope and value' do
bindings.bind().name('a_string').to('42')
injector(lbinder).lookup(scope, 'a_string') {|zcope, val| val + '42' }.should == '4242'
end
it 'that transforms a found value given only value' do
bindings.bind().name('a_string').to('42')
injector(lbinder).lookup(scope, 'a_string') {|val| val + '42' }.should == '4242'
end
it 'that produces a default value when entry is missing' do
bindings.bind().name('a_string').to('42')
injector(lbinder).lookup(scope, 'a_non_existing_string') {|val| val ? (raise Error, "Should not happen") : '4242' }.should == '4242'
end
end
context "and class is not bound" do
it "assisted inject kicks in for classes with zero args constructor" do
duck_type = type_factory.ruby(InjectorSpecModule::Daffy)
injector = injector(lbinder)
injector.lookup(scope, duck_type).is_a?(InjectorSpecModule::Daffy).should == true
injector.lookup_producer(scope, duck_type).produce(scope).is_a?(InjectorSpecModule::Daffy).should == true
end
it "assisted inject produces same instance on lookup but not on lookup producer" do
duck_type = type_factory.ruby(InjectorSpecModule::Daffy)
injector = injector(lbinder)
d1 = injector.lookup(scope, duck_type)
d2 = injector.lookup(scope, duck_type)
d1.equal?(d2).should == true
d1 = injector.lookup_producer(scope, duck_type).produce(scope)
d2 = injector.lookup_producer(scope, duck_type).produce(scope)
d1.equal?(d2).should == false
end
it "assisted inject kicks in for classes with a class inject method" do
duck_type = type_factory.ruby(InjectorSpecModule::ScroogeMcDuck)
injector = injector(lbinder)
# Do not pass any arguments, the ScroogeMcDuck :inject method should pick 1 by default
# This tests zero args passed
injector.lookup(scope, duck_type).fortune.should == 1
injector.lookup_producer(scope, duck_type).produce(scope).fortune.should == 1
end
it "assisted inject selects the inject method if it exists over a zero args constructor" do
injector = injector(lbinder)
duck_type = type_factory.ruby(InjectorSpecModule::AngryDuck)
injector.lookup(scope, duck_type).is_a?(InjectorSpecModule::Donald).should == true
injector.lookup_producer(scope, duck_type).produce(scope).is_a?(InjectorSpecModule::Donald).should == true
end
it "assisted inject selects the zero args constructor if injector is from a superclass" do
injector = injector(lbinder)
duck_type = type_factory.ruby(InjectorSpecModule::ArneAnka)
injector.lookup(scope, duck_type).is_a?(InjectorSpecModule::ArneAnka).should == true
injector.lookup_producer(scope, duck_type).produce(scope).is_a?(InjectorSpecModule::ArneAnka).should == true
end
end
context "and multiple layers are in use" do
it "a higher layer shadows anything in a lower layer" do
bindings1 = factory.named_bindings('test1')
bindings1.bind().name('a_string').to('bad stuff')
lower_layer = factory.named_layer('lower-layer', bindings1.model)
bindings2 = factory.named_bindings('test2')
bindings2.bind().name('a_string').to('good stuff')
higher_layer = factory.named_layer('higher-layer', bindings2.model)
injector = injector(binder.new(factory.layered_bindings(higher_layer, lower_layer)))
injector.lookup(scope,'a_string').should == 'good stuff'
end
it "a higher layer may not shadow a lower layer binding that is final" do
bindings1 = factory.named_bindings('test1')
bindings1.bind().final.name('a_string').to('required stuff')
lower_layer = factory.named_layer('lower-layer', bindings1.model)
bindings2 = factory.named_bindings('test2')
bindings2.bind().name('a_string').to('contraband')
higher_layer = factory.named_layer('higher-layer', bindings2.model)
expect {
injector = injector(binder.new(factory.layered_bindings(higher_layer, lower_layer)))
}.to raise_error(/Override of final binding not allowed/)
end
end
context "and dealing with Data types" do
let(:lbinder) { binder.new(layered_bindings) }
it "should treat all data as same type w.r.t. key" do
bindings.bind().name('a_string').to('42')
bindings.bind().name('an_int').to(43)
bindings.bind().name('a_float').to(3.14)
bindings.bind().name('a_boolean').to(true)
bindings.bind().name('an_array').to([1,2,3])
bindings.bind().name('a_hash').to({'a'=>1,'b'=>2,'c'=>3})
injector = injector(lbinder)
injector.lookup(scope,'a_string').should == '42'
injector.lookup(scope,'an_int').should == 43
injector.lookup(scope,'a_float').should == 3.14
injector.lookup(scope,'a_boolean').should == true
injector.lookup(scope,'an_array').should == [1,2,3]
injector.lookup(scope,'a_hash').should == {'a'=>1,'b'=>2,'c'=>3}
end
it "should provide type-safe lookup of given type/name" do
bindings.bind().string().name('a_string').to('42')
bindings.bind().integer().name('an_int').to(43)
bindings.bind().float().name('a_float').to(3.14)
bindings.bind().boolean().name('a_boolean').to(true)
bindings.bind().array_of_data().name('an_array').to([1,2,3])
bindings.bind().hash_of_data().name('a_hash').to({'a'=>1,'b'=>2,'c'=>3})
injector = injector(lbinder)
# Check lookup using implied Data type
injector.lookup(scope,'a_string').should == '42'
injector.lookup(scope,'an_int').should == 43
injector.lookup(scope,'a_float').should == 3.14
injector.lookup(scope,'a_boolean').should == true
injector.lookup(scope,'an_array').should == [1,2,3]
injector.lookup(scope,'a_hash').should == {'a'=>1,'b'=>2,'c'=>3}
# Check lookup using expected type
injector.lookup(scope,type_factory.string(), 'a_string').should == '42'
injector.lookup(scope,type_factory.integer(), 'an_int').should == 43
injector.lookup(scope,type_factory.float(),'a_float').should == 3.14
injector.lookup(scope,type_factory.boolean(),'a_boolean').should == true
injector.lookup(scope,type_factory.array_of_data(),'an_array').should == [1,2,3]
injector.lookup(scope,type_factory.hash_of_data(),'a_hash').should == {'a'=>1,'b'=>2,'c'=>3}
# Check lookup using wrong type
expect { injector.lookup(scope,type_factory.integer(), 'a_string')}.to raise_error(/Type error/)
expect { injector.lookup(scope,type_factory.string(), 'an_int')}.to raise_error(/Type error/)
expect { injector.lookup(scope,type_factory.string(),'a_float')}.to raise_error(/Type error/)
expect { injector.lookup(scope,type_factory.string(),'a_boolean')}.to raise_error(/Type error/)
expect { injector.lookup(scope,type_factory.string(),'an_array')}.to raise_error(/Type error/)
expect { injector.lookup(scope,type_factory.string(),'a_hash')}.to raise_error(/Type error/)
end
end
end
context "When looking up producer" do
it 'the value is produced by calling produce(scope)' do
bindings.bind().name('a_string').to('42')
injector(lbinder).lookup_producer(scope, 'a_string').produce(scope).should == '42'
end
context 'a block transforming the result can be given' do
it 'that transforms a found value given scope and producer' do
bindings.bind().name('a_string').to('42')
injector(lbinder).lookup_producer(scope, 'a_string') {|zcope, p| p.produce(zcope) + '42' }.should == '4242'
end
it 'that transforms a found value given only producer' do
bindings.bind().name('a_string').to('42')
injector(lbinder).lookup_producer(scope, 'a_string') {|p| p.produce(scope) + '42' }.should == '4242'
end
it 'that can produce a default value when entry is not found' do
bindings.bind().name('a_string').to('42')
injector(lbinder).lookup_producer(scope, 'a_non_existing_string') {|p| p ? (raise Error,"Should not happen") : '4242' }.should == '4242'
end
end
end
context "When dealing with singleton vs. non singleton" do
it "should produce the same instance when producer is a singleton" do
bindings.bind().name('a_string').to('42')
injector = injector(lbinder)
a = injector.lookup(scope, 'a_string')
b = injector.lookup(scope, 'a_string')
a.equal?(b).should == true
end
it "should produce different instances when producer is a non singleton producer" do
bindings.bind().name('a_string').to_series_of('42')
injector = injector(lbinder)
a = injector.lookup(scope, 'a_string')
b = injector.lookup(scope, 'a_string')
a.should == '42'
b.should == '42'
a.equal?(b).should == false
end
end
context "When using the lookup producer" do
it "should lookup again to produce a value" do
bindings.bind().name('a_string').to_lookup_of('another_string')
bindings.bind().name('another_string').to('hello')
injector(lbinder).lookup(scope, 'a_string').should == 'hello'
end
it "should produce nil if looked up key does not exist" do
bindings.bind().name('a_string').to_lookup_of('non_existing')
injector(lbinder).lookup(scope, 'a_string').should == nil
end
it "should report an error if lookup loop is detected" do
bindings.bind().name('a_string').to_lookup_of('a_string')
expect { injector(lbinder).lookup(scope, 'a_string') }.to raise_error(/Lookup loop/)
end
end
context "When using the hash lookup producer" do
it "should lookup a key in looked up hash" do
data_hash = type_factory.hash_of_data()
bindings.bind().name('a_string').to_hash_lookup_of(data_hash, 'a_hash', 'huey')
bindings.bind().name('a_hash').to({'huey' => 'red', 'dewey' => 'blue', 'louie' => 'green'})
injector(lbinder).lookup(scope, 'a_string').should == 'red'
end
it "should produce nil if looked up entry does not exist" do
data_hash = type_factory.hash_of_data()
bindings.bind().name('a_string').to_hash_lookup_of(data_hash, 'non_existing_entry', 'huey')
bindings.bind().name('a_hash').to({'huey' => 'red', 'dewey' => 'blue', 'louie' => 'green'})
injector(lbinder).lookup(scope, 'a_string').should == nil
end
end
context "When using the first found producer" do
it "should lookup until it finds a value, but not further" do
bindings.bind().name('a_string').to_first_found('b_string', 'c_string', 'g_string')
bindings.bind().name('c_string').to('hello')
bindings.bind().name('g_string').to('Oh, mrs. Smith...')
injector(lbinder).lookup(scope, 'a_string').should == 'hello'
end
it "should lookup until it finds a value using mix of type and name, but not further" do
bindings.bind().name('a_string').to_first_found('b_string', [type_factory.string, 'c_string'], 'g_string')
bindings.bind().name('c_string').to('hello')
bindings.bind().name('g_string').to('Oh, mrs. Smith...')
injector(lbinder).lookup(scope, 'a_string').should == 'hello'
end
end
context "When producing instances" do
it "should lookup an instance of a class without arguments" do
bindings.bind().type(duck_type).name('the_duck').to(InjectorSpecModule::Daffy)
injector(lbinder).lookup(scope, duck_type, 'the_duck').is_a?(InjectorSpecModule::Daffy).should == true
end
it "should lookup an instance of a class with arguments" do
bindings.bind().type(duck_type).name('the_duck').to(InjectorSpecModule::ScroogeMcDuck, 1234)
injector = injector(lbinder)
the_duck = injector.lookup(scope, duck_type, 'the_duck')
the_duck.is_a?(InjectorSpecModule::ScroogeMcDuck).should == true
the_duck.fortune.should == 1234
end
it "singleton producer should not be recreated between lookups" do
bindings.bind().type(duck_type).name('the_duck').to_producer(InjectorSpecModule::ScroogeProducer)
injector = injector(lbinder)
the_duck = injector.lookup(scope, duck_type, 'the_duck')
the_duck.is_a?(InjectorSpecModule::ScroogeMcDuck).should == true
the_duck.fortune.should == 200
# singleton, do it again to get next value in series - it is the producer that is a singleton
# not the produced value
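# e.g. the fortunes observed below are 200, 400 and then 800, because the single
# ScroogeProducer instance keeps doubling @next_capital on every produce(scope).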
the_duck = injector.lookup(scope, duck_type, 'the_duck')
the_duck.is_a?(InjectorSpecModule::ScroogeMcDuck).should == true
the_duck.fortune.should == 400
duck_producer = injector.lookup_producer(scope, duck_type, 'the_duck')
duck_producer.produce(scope).fortune.should == 800
end
it "series of producers should recreate producer on each lookup and lookup_producer" do
bindings.bind().type(duck_type).name('the_duck').to_producer_series(InjectorSpecModule::ScroogeProducer)
injector = injector(lbinder)
duck_producer = injector.lookup_producer(scope, duck_type, 'the_duck')
duck_producer.produce(scope).fortune().should == 200
duck_producer.produce(scope).fortune().should == 400
# series, each lookup gets a new producer (initialized to produce 200)
duck_producer = injector.lookup_producer(scope, duck_type, 'the_duck')
duck_producer.produce(scope).fortune().should == 200
duck_producer.produce(scope).fortune().should == 400
injector.lookup(scope, duck_type, 'the_duck').fortune().should == 200
injector.lookup(scope, duck_type, 'the_duck').fortune().should == 200
end
end
context "When working with multibind" do
context "of hash kind" do
it "a multibind produces contributed items keyed by their bound key-name" do
hash_of_duck = type_factory.hash_of(duck_type)
multibind_id = "ducks"
bindings.multibind(multibind_id).type(hash_of_duck).name('donalds_nephews')
bindings.bind.in_multibind(multibind_id).type(duck_type).name('nephew1').to(InjectorSpecModule::NamedDuck, 'Huey')
bindings.bind.in_multibind(multibind_id).type(duck_type).name('nephew2').to(InjectorSpecModule::NamedDuck, 'Dewey')
bindings.bind.in_multibind(multibind_id).type(duck_type).name('nephew3').to(InjectorSpecModule::NamedDuck, 'Louie')
injector = injector(lbinder)
the_ducks = injector.lookup(scope, hash_of_duck, "donalds_nephews")
the_ducks.size.should == 3
the_ducks['nephew1'].name.should == 'Huey'
the_ducks['nephew2'].name.should == 'Dewey'
the_ducks['nephew3'].name.should == 'Louie'
end
it "is an error to not bind contribution with a name" do
hash_of_duck = type_factory.hash_of(duck_type)
multibind_id = "ducks"
bindings.multibind(multibind_id).type(hash_of_duck).name('donalds_nephews')
# missing name
bindings.bind.in_multibind(multibind_id).type(duck_type).to(InjectorSpecModule::NamedDuck, 'Huey')
bindings.bind.in_multibind(multibind_id).type(duck_type).to(InjectorSpecModule::NamedDuck, 'Dewey')
expect {
the_ducks = injector(lbinder).lookup(scope, hash_of_duck, "donalds_nephews")
}.to raise_error(/must have a name/)
end
it "is an error to bind with duplicate key when using default (priority) conflict resolution" do
hash_of_duck = type_factory.hash_of(duck_type)
multibind_id = "ducks"
bindings.multibind(multibind_id).type(hash_of_duck).name('donalds_nephews')
# duplicate key 'foo'
bindings.bind.in_multibind(multibind_id).type(duck_type).name('foo').to(InjectorSpecModule::NamedDuck, 'Huey')
bindings.bind.in_multibind(multibind_id).type(duck_type).name('foo').to(InjectorSpecModule::NamedDuck, 'Dewey')
expect {
the_ducks = injector(lbinder).lookup(scope, hash_of_duck, "donalds_nephews")
}.to raise_error(/Duplicate key/)
end
it "is not an error to bind with duplicate key when using (ignore) conflict resolution" do
hash_of_duck = type_factory.hash_of(duck_type)
multibind_id = "ducks"
bindings.multibind(multibind_id).type(hash_of_duck).name('donalds_nephews').producer_options(:conflict_resolution => :ignore)
bindings.bind.in_multibind(multibind_id).type(duck_type).name('foo').to(InjectorSpecModule::NamedDuck, 'Huey')
bindings.bind.in_multibind(multibind_id).type(duck_type).name('foo').to(InjectorSpecModule::NamedDuck, 'Dewey')
expect {
the_ducks = injector(lbinder).lookup(scope, hash_of_duck, "donalds_nephews")
- }.to_not raise_error(/Duplicate key/)
+ }.to_not raise_error
end
it "should produce detailed type error message" do
hash_of_integer = type_factory.hash_of(type_factory.integer())
multibind_id = "ints"
mb = bindings.multibind(multibind_id).type(hash_of_integer).name('donalds_family')
bindings.bind.in_multibind(multibind_id).name('nephew').to('Huey')
expect { ducks = injector(lbinder).lookup(scope, 'donalds_family')
}.to raise_error(%r{expected: Integer, got: String})
end
it "should be possible to combine hash multibind contributions with append on conflict" do
# This case uses a multibind of individual strings, but combines them
# into an array bound to a hash key
# (There are other ways to do this - e.g. have the multibind lookup a multibind
# of array type to which nephews are contributed).
#
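# With :conflict_resolution => :append each repeated key collects its contributions
# into an array, e.g. 'nephews' => ['Huey', 'Dewey', 'Louie'] in the expectation below.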
hash_of_data = type_factory.hash_of_data()
multibind_id = "ducks"
mb = bindings.multibind(multibind_id).type(hash_of_data).name('donalds_family')
mb.producer_options(:conflict_resolution => :append)
bindings.bind.in_multibind(multibind_id).name('nephews').to('Huey')
bindings.bind.in_multibind(multibind_id).name('nephews').to('Dewey')
bindings.bind.in_multibind(multibind_id).name('nephews').to('Louie')
bindings.bind.in_multibind(multibind_id).name('uncles').to('Scrooge McDuck')
bindings.bind.in_multibind(multibind_id).name('uncles').to('Ludwig Von Drake')
ducks = injector(lbinder).lookup(scope, 'donalds_family')
ducks['nephews'].should == ['Huey', 'Dewey', 'Louie']
ducks['uncles'].should == ['Scrooge McDuck', 'Ludwig Von Drake']
end
it "should be possible to combine hash multibind contributions with append, flat, and uniq, on conflict" do
# This case uses a multibind of individual strings, but combines them
# into an array bound to a hash key
# (There are other ways to do this - e.g. have the multibind lookup a multibind
# of array type to which nephews are contributed).
#
hash_of_data = type_factory.hash_of_data()
multibind_id = "ducks"
mb = bindings.multibind(multibind_id).type(hash_of_data).name('donalds_family')
mb.producer_options(:conflict_resolution => :append, :flatten => true, :uniq => true)
bindings.bind.in_multibind(multibind_id).name('nephews').to('Huey')
bindings.bind.in_multibind(multibind_id).name('nephews').to('Huey')
bindings.bind.in_multibind(multibind_id).name('nephews').to('Dewey')
bindings.bind.in_multibind(multibind_id).name('nephews').to(['Huey', ['Louie'], 'Dewey'])
bindings.bind.in_multibind(multibind_id).name('uncles').to('Scrooge McDuck')
bindings.bind.in_multibind(multibind_id).name('uncles').to('Ludwig Von Drake')
ducks = injector(lbinder).lookup(scope, 'donalds_family')
ducks['nephews'].should == ['Huey', 'Dewey', 'Louie']
ducks['uncles'].should == ['Scrooge McDuck', 'Ludwig Von Drake']
end
it "should fail attempts to append, perform uniq or flatten on type incompatible multibind hash" do
hash_of_integer = type_factory.hash_of(type_factory.integer())
ids = ["ducks1", "ducks2", "ducks3"]
- mb = bindings.multibind(ids[0]).type(hash_of_integer).name('broken_family0')
+ mb = bindings.multibind(ids[0]).type(hash_of_integer.copy).name('broken_family0')
mb.producer_options(:conflict_resolution => :append)
- mb = bindings.multibind(ids[1]).type(hash_of_integer).name('broken_family1')
+ mb = bindings.multibind(ids[1]).type(hash_of_integer.copy).name('broken_family1')
mb.producer_options(:flatten => :true)
- mb = bindings.multibind(ids[2]).type(hash_of_integer).name('broken_family2')
+ mb = bindings.multibind(ids[2]).type(hash_of_integer.copy).name('broken_family2')
mb.producer_options(:uniq => :true)
injector = injector(binder.new(factory.layered_bindings(test_layer_with_bindings(bindings.model))))
expect { injector.lookup(scope, 'broken_family0')}.to raise_error(/:conflict_resolution => :append/)
expect { injector.lookup(scope, 'broken_family1')}.to raise_error(/:flatten/)
expect { injector.lookup(scope, 'broken_family2')}.to raise_error(/:uniq/)
end
it "a higher priority contribution is selected when resolution is :priority" do
hash_of_duck = type_factory.hash_of(duck_type)
multibind_id = "ducks"
bindings.multibind(multibind_id).type(hash_of_duck).name('donalds_nephews')
mb1 = bindings.bind.in_multibind(multibind_id)
pending 'priority based on layers not added, and priority on category removed'
mb1.type(duck_type).name('nephew').to(InjectorSpecModule::NamedDuck, 'Huey')
mb2 = bindings.bind.in_multibind(multibind_id)
mb2.type(duck_type).name('nephew').to(InjectorSpecModule::NamedDuck, 'Dewey')
binder.define_layers(layered_bindings)
injector(binder).lookup(scope, hash_of_duck, "donalds_nephews")['nephew'].name.should == 'Huey'
end
it "a higher priority contribution wins when resolution is :merge" do
# THIS TEST MAY DEPEND ON HASH ORDER SINCE PRIORITY BASED ON CATEGORY IS REMOVED
hash_of_data = type_factory.hash_of_data()
multibind_id = "hashed_ducks"
bindings.multibind(multibind_id).type(hash_of_data).name('donalds_nephews').producer_options(:conflict_resolution => :merge)
mb1 = bindings.bind.in_multibind(multibind_id)
mb1.name('nephew').to({'name' => 'Huey', 'is' => 'winner'})
mb2 = bindings.bind.in_multibind(multibind_id)
mb2.name('nephew').to({'name' => 'Dewey', 'is' => 'looser', 'has' => 'cap'})
the_ducks = injector(binder.new(layered_bindings)).lookup(scope, "donalds_nephews");
the_ducks['nephew']['name'].should == 'Huey'
the_ducks['nephew']['is'].should == 'winner'
the_ducks['nephew']['has'].should == 'cap'
end
end
context "of array kind" do
it "an array multibind produces contributed items, names are allowed but ignored" do
array_of_duck = type_factory.array_of(duck_type)
multibind_id = "ducks"
bindings.multibind(multibind_id).type(array_of_duck).name('donalds_nephews')
# one with name (ignored, expect no error)
bindings.bind.in_multibind(multibind_id).type(duck_type).name('nephew1').to(InjectorSpecModule::NamedDuck, 'Huey')
# two without name
bindings.bind.in_multibind(multibind_id).type(duck_type).to(InjectorSpecModule::NamedDuck, 'Dewey')
bindings.bind.in_multibind(multibind_id).type(duck_type).to(InjectorSpecModule::NamedDuck, 'Louie')
the_ducks = injector(lbinder).lookup(scope, array_of_duck, "donalds_nephews")
the_ducks.size.should == 3
the_ducks.collect {|d| d.name }.sort.should == ['Dewey', 'Huey', 'Louie']
end
it "should be able to make result contain only unique entries" do
# This case uses a multibind of individual strings, and combines them
# into an array of unique values
#
array_of_data = type_factory.array_of_data()
multibind_id = "ducks"
mb = bindings.multibind(multibind_id).type(array_of_data).name('donalds_family')
# turn off priority on named entries so no conflict is triggered, since all additions have the same precedence
# (alternatively, the default for unnamed entries could have been used by adding the entries without names).
mb.producer_options(:priority_on_named => false, :uniq => true)
bindings.bind.in_multibind(multibind_id).name('nephews').to('Huey')
bindings.bind.in_multibind(multibind_id).name('nephews').to('Dewey')
bindings.bind.in_multibind(multibind_id).name('nephews').to('Dewey') # duplicate
bindings.bind.in_multibind(multibind_id).name('nephews').to('Louie')
bindings.bind.in_multibind(multibind_id).name('nephews').to('Louie') # duplicate
bindings.bind.in_multibind(multibind_id).name('nephews').to('Louie') # duplicate
ducks = injector(lbinder).lookup(scope, 'donalds_family')
ducks.should == ['Huey', 'Dewey', 'Louie']
end
it "should be able to contribute elements and arrays of elements and flatten 1 level" do
# This case uses a multibind of individual strings and arrays, and combines them
# into an array flattened one level
#
array_of_string = type_factory.array_of(type_factory.string())
multibind_id = "ducks"
mb = bindings.multibind(multibind_id).type(array_of_string).name('donalds_family')
# flatten one level
mb.producer_options(:flatten => 1)
bindings.bind.in_multibind(multibind_id).to('Huey')
bindings.bind.in_multibind(multibind_id).to('Dewey')
bindings.bind.in_multibind(multibind_id).to('Louie')
bindings.bind.in_multibind(multibind_id).to(['Huey', 'Dewey', 'Louie'])
ducks = injector(lbinder).lookup(scope, 'donalds_family')
ducks.should == ['Huey', 'Dewey', 'Louie', 'Huey', 'Dewey', 'Louie']
end
it "should produce detailed type error message" do
array_of_integer = type_factory.array_of(type_factory.integer())
multibind_id = "ints"
mb = bindings.multibind(multibind_id).type(array_of_integer).name('donalds_family')
bindings.bind.in_multibind(multibind_id).to('Huey')
expect { ducks = injector(lbinder).lookup(scope, 'donalds_family')
}.to raise_error(%r{expected: Integer, or Array\[Integer\], got: String})
end
end
context "When using multibind in multibind" do
it "a hash multibind can be contributed to another" do
hash_of_data = type_factory.hash_of_data()
mb1_id = 'data1'
mb2_id = 'data2'
top = bindings.multibind(mb1_id).type(hash_of_data).name("top")
detail = bindings.multibind(mb2_id).type(hash_of_data).name("detail").in_multibind(mb1_id)
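# the nested 'detail' multibind contributes its resulting hash as the entry named 'detail' in the 'top' hash (see expectation below)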
bindings.bind.in_multibind(mb1_id).name('a').to(10)
bindings.bind.in_multibind(mb1_id).name('b').to(20)
bindings.bind.in_multibind(mb2_id).name('a').to(30)
bindings.bind.in_multibind(mb2_id).name('b').to(40)
expect( injector(lbinder).lookup(scope, "top") ).to eql({'detail' => {'a' => 30, 'b' => 40}, 'a' => 10, 'b' => 20})
end
end
context "When looking up entries requiring evaluation" do
let(:node) { Puppet::Node.new('localhost') }
let(:compiler) { Puppet::Parser::Compiler.new(node)}
let(:scope) { Puppet::Parser::Scope.new(compiler) }
let(:parser) { Puppet::Pops::Parser::Parser.new() }
it "should be possible to lookup a concatenated string" do
scope['duck'] = 'Donald Fauntleroy Duck'
expr = parser.parse_string('"Hello $duck"').current()
bindings.bind.name('the_duck').to(expr)
injector(lbinder).lookup(scope, 'the_duck').should == 'Hello Donald Fauntleroy Duck'
end
it "should be possible to post process lookup with a puppet lambda" do
model = parser.parse_string('fake() |$value| {$value + 1 }').current
bindings.bind.name('an_int').to(42).producer_options( :transformer => model.body.lambda)
injector(lbinder).lookup(scope, 'an_int').should == 43
end
it "should be possible to post process lookup with a ruby proc" do
transformer = lambda {|scope, value| value + 1 }
bindings.bind.name('an_int').to(42).producer_options( :transformer => transformer)
injector(lbinder).lookup(scope, 'an_int').should == 43
end
end
end
context "When there are problems with configuration" do
let(:lbinder) { binder.new(layered_bindings) }
it "reports error for surfacing abstract bindings" do
bindings.bind.abstract.name('an_int')
expect{injector(lbinder).lookup(scope, 'an_int') }.to raise_error(/The abstract binding .* was not overridden/)
end
it "does not report error for abstract binding that is ovrridden" do
bindings.bind.abstract.name('an_int')
bindings.bind.override.name('an_int').to(142)
expect{ injector(lbinder).lookup(scope, 'an_int') }.to_not raise_error
end
it "reports error for overriding binding that does not override" do
bindings.bind.override.name('an_int').to(42)
expect{injector(lbinder).lookup(scope, 'an_int') }.to raise_error(/Binding with unresolved 'override' detected/)
end
it "reports error for binding without producer" do
bindings.bind.name('an_int')
expect{injector(lbinder).lookup(scope, 'an_int') }.to raise_error(/Binding without producer/)
end
end
end
\ No newline at end of file
diff --git a/spec/unit/pops/evaluator/access_ops_spec.rb b/spec/unit/pops/evaluator/access_ops_spec.rb
index d0965ad5d..0fa4779a0 100644
--- a/spec/unit/pops/evaluator/access_ops_spec.rb
+++ b/spec/unit/pops/evaluator/access_ops_spec.rb
@@ -1,441 +1,441 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet/pops'
require 'puppet/pops/evaluator/evaluator_impl'
require 'puppet/pops/types/type_factory'
# relative to this spec file (./) does not work as this file is loaded by rspec
require File.join(File.dirname(__FILE__), '/evaluator_rspec_helper')
describe 'Puppet::Pops::Evaluator::EvaluatorImpl/AccessOperator' do
include EvaluatorRspecHelper
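# convenience helpers that build Integer and Float range types via the Pops TypeFactory;
# the Integer/Float parameterization tests below compare evaluated expressions against these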
def range(from, to)
Puppet::Pops::Types::TypeFactory.range(from, to)
end
def float_range(from, to)
Puppet::Pops::Types::TypeFactory.float_range(from, to)
end
context 'The evaluator when operating on a String' do
it 'can get a single character using a single key index to []' do
expect(evaluate(literal('abc')[1])).to eql('b')
end
it 'can get the last character using the key -1 in []' do
expect(evaluate(literal('abc')[-1])).to eql('c')
end
it 'can get a substring by giving two keys' do
expect(evaluate(literal('abcd')[1,2])).to eql('bc')
# flattens keys
expect(evaluate(literal('abcd')[[1,2]])).to eql('bc')
end
it 'produces empty string for a substring out of range' do
expect(evaluate(literal('abc')[100])).to eql('')
end
it 'raises an error if arity is wrong for []' do
expect{evaluate(literal('abc')[])}.to raise_error(/String supports \[\] with one or two arguments\. Got 0/)
expect{evaluate(literal('abc')[1,2,3])}.to raise_error(/String supports \[\] with one or two arguments\. Got 3/)
end
end
context 'The evaluator when operating on an Array' do
it 'is tested with the correct assumptions' do
expect(literal([1,2,3])[1].current.is_a?(Puppet::Pops::Model::AccessExpression)).to eql(true)
end
it 'can get an element using a single key index to []' do
expect(evaluate(literal([1,2,3])[1])).to eql(2)
end
it 'can get the last element using the key -1 in []' do
expect(evaluate(literal([1,2,3])[-1])).to eql(3)
end
it 'can get a slice of elements using two keys' do
expect(evaluate(literal([1,2,3,4])[1,2])).to eql([2,3])
# flattens keys
expect(evaluate(literal([1,2,3,4])[[1,2]])).to eql([2,3])
end
it 'produces nil for a missing entry' do
expect(evaluate(literal([1,2,3])[100])).to eql(nil)
end
it 'raises an error if arity is wrong for []' do
expect{evaluate(literal([1,2,3,4])[])}.to raise_error(/Array supports \[\] with one or two arguments\. Got 0/)
expect{evaluate(literal([1,2,3,4])[1,2,3])}.to raise_error(/Array supports \[\] with one or two arguments\. Got 3/)
end
end
context 'The evaluator when operating on a Hash' do
it 'can get a single element giving a single key to []' do
expect(evaluate(literal({'a'=>1,'b'=>2,'c'=>3})['b'])).to eql(2)
end
it 'can lookup an array' do
expect(evaluate(literal({[1]=>10,[2]=>20})[[2]])).to eql(20)
end
it 'produces nil for a missing key' do
expect(evaluate(literal({'a'=>1,'b'=>2,'c'=>3})['x'])).to eql(nil)
end
it 'can get multiple elements by giving multiple keys to []' do
expect(evaluate(literal({'a'=>1,'b'=>2,'c'=>3, 'd'=>4})['b', 'd'])).to eql([2, 4])
end
it 'compacts the result when using multiple keys' do
expect(evaluate(literal({'a'=>1,'b'=>2,'c'=>3, 'd'=>4})['b', 'x'])).to eql([2])
end
it 'produces an empty array if all of the multiple given keys are missing' do
expect(evaluate(literal({'a'=>1,'b'=>2,'c'=>3, 'd'=>4})['x', 'y'])).to eql([])
end
it 'raises an error if arity is wrong for []' do
expect{evaluate(literal({'a'=>1,'b'=>2,'c'=>3})[])}.to raise_error(/Hash supports \[\] with one or more arguments\. Got 0/)
end
end
context "When applied to a type it" do
let(:types) { Puppet::Pops::Types::TypeFactory }
# Integer
#
it 'produces an Integer[from, to]' do
expr = fqr('Integer')[1, 3]
expect(evaluate(expr)).to eql(range(1,3))
# arguments are flattened
expr = fqr('Integer')[[1, 3]]
expect(evaluate(expr)).to eql(range(1,3))
end
it 'produces an Integer[1]' do
expr = fqr('Integer')[1]
expect(evaluate(expr)).to eql(range(1,1))
end
it 'produces an Integer[from, <from]' do
expr = fqr('Integer')[1,0]
expect(evaluate(expr)).to eql(range(1,0))
end
it 'produces an error for Integer[] if there are more than 2 keys' do
expr = fqr('Integer')[1,2,3]
expect { evaluate(expr)}.to raise_error(/with one or two arguments/)
end
# Float
#
it 'produces a Float[from, to]' do
expr = fqr('Float')[1, 3]
expect(evaluate(expr)).to eql(float_range(1.0,3.0))
# arguments are flattened
expr = fqr('Float')[[1, 3]]
expect(evaluate(expr)).to eql(float_range(1.0,3.0))
end
it 'produces a Float[1.0]' do
expr = fqr('Float')[1.0]
expect(evaluate(expr)).to eql(float_range(1.0,1.0))
end
it 'produces a Float[1]' do
expr = fqr('Float')[1]
expect(evaluate(expr)).to eql(float_range(1.0,1.0))
end
it 'produces a Float[from, <from]' do
expr = fqr('Float')[1.0,0.0]
expect(evaluate(expr)).to eql(float_range(1.0,0.0))
end
it 'produces an error for Float[] if there are more than 2 keys' do
expr = fqr('Float')[1,2,3]
expect { evaluate(expr)}.to raise_error(/with one or two arguments/)
end
# Hash Type
#
it 'produces a Hash[Scalar,String] from the expression Hash[String]' do
expr = fqr('Hash')[fqr('String')]
expect(evaluate(expr)).to be_the_type(types.hash_of(types.string, types.scalar))
# arguments are flattened
expr = fqr('Hash')[[fqr('String')]]
expect(evaluate(expr)).to be_the_type(types.hash_of(types.string, types.scalar))
end
it 'produces a Hash[String,String] from the expression Hash[String, String]' do
expr = fqr('Hash')[fqr('String'), fqr('String')]
expect(evaluate(expr)).to be_the_type(types.hash_of(types.string, types.string))
end
it 'produces a Hash[Scalar,String] from the expression Hash[Integer][String]' do
expr = fqr('Hash')[fqr('Integer')][fqr('String')]
expect(evaluate(expr)).to be_the_type(types.hash_of(types.string, types.scalar))
end
it "gives an error if parameter is not a type" do
expr = fqr('Hash')['String']
expect { evaluate(expr)}.to raise_error(/Hash-Type\[\] arguments must be types/)
end
# Array Type
#
it 'produces an Array[String] from the expression Array[String]' do
expr = fqr('Array')[fqr('String')]
expect(evaluate(expr)).to be_the_type(types.array_of(types.string))
# arguments are flattened
expr = fqr('Array')[[fqr('String')]]
expect(evaluate(expr)).to be_the_type(types.array_of(types.string))
end
it 'produces an Array[String] from the expression Array[Integer][String]' do
expr = fqr('Array')[fqr('Integer')][fqr('String')]
expect(evaluate(expr)).to be_the_type(types.array_of(types.string))
end
it 'produces a size constrained Array when the last two arguments specify this' do
expr = fqr('Array')[fqr('String'), 1]
expected_t = types.array_of(String)
types.constrain_size(expected_t, 1, :default)
expect(evaluate(expr)).to be_the_type(expected_t)
expr = fqr('Array')[fqr('String'), 1, 2]
expected_t = types.array_of(String)
types.constrain_size(expected_t, 1, 2)
expect(evaluate(expr)).to be_the_type(expected_t)
end
it "Array parameterization gives an error if parameter is not a type" do
expr = fqr('Array')['String']
expect { evaluate(expr)}.to raise_error(/Array-Type\[\] arguments must be types/)
end
# Tuple Type
#
it 'produces a Tuple[String] from the expression Tuple[String]' do
expr = fqr('Tuple')[fqr('String')]
expect(evaluate(expr)).to be_the_type(types.tuple(String))
# arguments are flattened
expr = fqr('Tuple')[[fqr('String')]]
expect(evaluate(expr)).to be_the_type(types.tuple(String))
end
it "Tuple parameterization gives an error if parameter is not a type" do
expr = fqr('Tuple')['String']
- expect { evaluate(expr)}.to raise_error(/Tuple-Type, Cannot use String where Abstract-Type is expected/)
+ expect { evaluate(expr)}.to raise_error(/Tuple-Type, Cannot use String where Any-Type is expected/)
end
it 'produces a varargs Tuple when the last two arguments specify size constraint' do
expr = fqr('Tuple')[fqr('String'), 1]
expected_t = types.tuple(String)
types.constrain_size(expected_t, 1, :default)
expect(evaluate(expr)).to be_the_type(expected_t)
expr = fqr('Tuple')[fqr('String'), 1, 2]
expected_t = types.tuple(String)
types.constrain_size(expected_t, 1, 2)
expect(evaluate(expr)).to be_the_type(expected_t)
end
# Pattern Type
#
it 'creates a PPatternType instance when applied to a Pattern' do
regexp_expr = fqr('Pattern')['foo']
expect(evaluate(regexp_expr)).to eql(Puppet::Pops::Types::TypeFactory.pattern('foo'))
end
# Regexp Type
#
it 'creates a PRegexpType instance when applied to a Regexp' do
regexp_expr = fqr('Regexp')['foo']
expect(evaluate(regexp_expr)).to eql(Puppet::Pops::Types::TypeFactory.regexp('foo'))
# arguments are flattened
regexp_expr = fqr('Regexp')[['foo']]
expect(evaluate(regexp_expr)).to eql(Puppet::Pops::Types::TypeFactory.regexp('foo'))
end
# Class
#
it 'produces a specific class from Class[classname]' do
expr = fqr('Class')[fqn('apache')]
expect(evaluate(expr)).to be_the_type(types.host_class('apache'))
expr = fqr('Class')[literal('apache')]
expect(evaluate(expr)).to be_the_type(types.host_class('apache'))
end
it 'produces an array of Class when args are in an array' do
# arguments are flattened
expr = fqr('Class')[[fqn('apache')]]
expect(evaluate(expr)[0]).to be_the_type(types.host_class('apache'))
end
it 'produces undef for Class if arg is undef' do
# arguments are flattened
expr = fqr('Class')[nil]
expect(evaluate(expr)).to be_nil
end
it 'produces empty array for Class if arg is [undef]' do
# arguments are flattened
expr = fqr('Class')[[]]
expect(evaluate(expr)).to be_eql([])
expr = fqr('Class')[[nil]]
expect(evaluate(expr)).to be_eql([])
end
it 'raises an error if access is given no keys' do
expr = fqr('Class')[fqn('apache')][]
expect { evaluate(expr) }.to raise_error(/Evaluation Error: Class\[apache\]\[\] accepts 1 or more arguments\. Got 0/)
end
it 'produces a collection of classes when multiple class names are given' do
expr = fqr('Class')[fqn('apache'), literal('nginx')]
result = evaluate(expr)
expect(result[0]).to be_the_type(types.host_class('apache'))
expect(result[1]).to be_the_type(types.host_class('nginx'))
end
it 'removes leading :: in class name' do
expr = fqr('Class')['::evoe']
expect(evaluate(expr)).to be_the_type(types.host_class('evoe'))
end
it 'raises error if the name is not a valid name' do
expr = fqr('Class')['fail-whale']
expect { evaluate(expr) }.to raise_error(/Illegal name/)
end
it 'downcases capitalized class names' do
expr = fqr('Class')['My::Class']
expect(evaluate(expr)).to be_the_type(types.host_class('my::class'))
end
it 'gives an error if no keys are given as argument' do
expr = fqr('Class')[]
expect {evaluate(expr)}.to raise_error(/Evaluation Error: Class\[\] accepts 1 or more arguments. Got 0/)
end
it 'produces an empty array if the keys reduce to empty array' do
expr = fqr('Class')[literal([[],[]])]
expect(evaluate(expr)).to be_eql([])
end
# Resource
it 'produces a specific resource type from Resource[type]' do
expr = fqr('Resource')[fqr('File')]
expect(evaluate(expr)).to be_the_type(types.resource('File'))
expr = fqr('Resource')[literal('File')]
expect(evaluate(expr)).to be_the_type(types.resource('File'))
end
it 'does not allow the type to be specified in an array' do
# arguments are flattened
expr = fqr('Resource')[[fqr('File')]]
expect{evaluate(expr)}.to raise_error(Puppet::ParseError, /must be a resource type or a String/)
end
it 'produces a specific resource reference type from File[title]' do
expr = fqr('File')[literal('/tmp/x')]
expect(evaluate(expr)).to be_the_type(types.resource('File', '/tmp/x'))
end
it 'produces a collection of specific resource references when multiple titles are used' do
# Using a resource type
expr = fqr('File')[literal('x'),literal('y')]
result = evaluate(expr)
expect(result[0]).to be_the_type(types.resource('File', 'x'))
expect(result[1]).to be_the_type(types.resource('File', 'y'))
# Using generic resource
expr = fqr('Resource')[fqr('File'), literal('x'),literal('y')]
result = evaluate(expr)
expect(result[0]).to be_the_type(types.resource('File', 'x'))
expect(result[1]).to be_the_type(types.resource('File', 'y'))
end
it 'produces undef for Resource if arg is undef' do
# arguments are flattened
expr = fqr('File')[nil]
expect(evaluate(expr)).to be_nil
end
it 'gives an error if no keys are given as argument to Resource' do
expr = fqr('Resource')[]
expect {evaluate(expr)}.to raise_error(/Evaluation Error: Resource\[\] accepts 1 or more arguments. Got 0/)
end
it 'produces an empty array if the type is given, and keys reduce to empty array for Resource' do
expr = fqr('Resource')[fqr('File'),literal([[],[]])]
expect(evaluate(expr)).to be_eql([])
end
it 'gives an error if no keys are given as argument to a specific Resource type' do
expr = fqr('File')[]
expect {evaluate(expr)}.to raise_error(/Evaluation Error: File\[\] accepts 1 or more arguments. Got 0/)
end
it 'produces an empty array if the keys reduce to empty array for a specific Resource type' do
expr = fqr('File')[literal([[],[]])]
expect(evaluate(expr)).to be_eql([])
end
it 'gives an error if resource is not found' do
expr = fqr('File')[fqn('x')][fqn('y')]
expect {evaluate(expr)}.to raise_error(/Resource not found: File\['x'\]/)
end
# Type Type
#
it 'creates a Type instance when applied to a Type' do
type_expr = fqr('Type')[fqr('Integer')]
tf = Puppet::Pops::Types::TypeFactory
expect(evaluate(type_expr)).to eql(tf.type_type(tf.integer))
# arguments are flattened
type_expr = fqr('Type')[[fqr('Integer')]]
expect(evaluate(type_expr)).to eql(tf.type_type(tf.integer))
end
# Ruby Type
#
it 'creates a Ruby Type instance when applied to a Ruby Type' do
- type_expr = fqr('Ruby')['String']
+ type_expr = fqr('Runtime')['ruby','String']
tf = Puppet::Pops::Types::TypeFactory
expect(evaluate(type_expr)).to eql(tf.ruby_type('String'))
# arguments are flattened
- type_expr = fqr('Ruby')[['String']]
+ type_expr = fqr('Runtime')[['ruby', 'String']]
expect(evaluate(type_expr)).to eql(tf.ruby_type('String'))
end
end
matcher :be_the_type do |type|
calc = Puppet::Pops::Types::TypeCalculator.new
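# a value matches the expected type when the two are mutually assignable, i.e. equivalent types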
match do |actual|
calc.assignable?(actual, type) && calc.assignable?(type, actual)
end
failure_message_for_should do |actual|
"expected #{type.to_s}, but was #{actual.to_s}"
end
end
end
diff --git a/spec/unit/pops/evaluator/comparison_ops_spec.rb b/spec/unit/pops/evaluator/comparison_ops_spec.rb
index 29938542e..e3564195c 100644
--- a/spec/unit/pops/evaluator/comparison_ops_spec.rb
+++ b/spec/unit/pops/evaluator/comparison_ops_spec.rb
@@ -1,256 +1,259 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet/pops'
require 'puppet/pops/evaluator/evaluator_impl'
# relative to this spec file (./) does not work as this file is loaded by rspec
require File.join(File.dirname(__FILE__), '/evaluator_rspec_helper')
describe 'Puppet::Pops::Evaluator::EvaluatorImpl' do
include EvaluatorRspecHelper
context "When the evaluator performs comparisons" do
context "of string values" do
it "'a' == 'a' == true" do; evaluate(literal('a') == literal('a')).should == true ; end
it "'a' == 'b' == false" do; evaluate(literal('a') == literal('b')).should == false ; end
it "'a' != 'a' == false" do; evaluate(literal('a').ne(literal('a'))).should == false ; end
it "'a' != 'b' == true" do; evaluate(literal('a').ne(literal('b'))).should == true ; end
it "'a' < 'b' == true" do; evaluate(literal('a') < literal('b')).should == true ; end
it "'a' < 'a' == false" do; evaluate(literal('a') < literal('a')).should == false ; end
it "'b' < 'a' == false" do; evaluate(literal('b') < literal('a')).should == false ; end
it "'a' <= 'b' == true" do; evaluate(literal('a') <= literal('b')).should == true ; end
it "'a' <= 'a' == true" do; evaluate(literal('a') <= literal('a')).should == true ; end
it "'b' <= 'a' == false" do; evaluate(literal('b') <= literal('a')).should == false ; end
it "'a' > 'b' == false" do; evaluate(literal('a') > literal('b')).should == false ; end
it "'a' > 'a' == false" do; evaluate(literal('a') > literal('a')).should == false ; end
it "'b' > 'a' == true" do; evaluate(literal('b') > literal('a')).should == true ; end
it "'a' >= 'b' == false" do; evaluate(literal('a') >= literal('b')).should == false ; end
it "'a' >= 'a' == true" do; evaluate(literal('a') >= literal('a')).should == true ; end
it "'b' >= 'a' == true" do; evaluate(literal('b') >= literal('a')).should == true ; end
context "with mixed case" do
it "'a' == 'A' == true" do; evaluate(literal('a') == literal('A')).should == true ; end
it "'a' != 'A' == false" do; evaluate(literal('a').ne(literal('A'))).should == false ; end
it "'a' > 'A' == false" do; evaluate(literal('a') > literal('A')).should == false ; end
it "'a' >= 'A' == true" do; evaluate(literal('a') >= literal('A')).should == true ; end
it "'A' < 'a' == false" do; evaluate(literal('A') < literal('a')).should == false ; end
it "'A' <= 'a' == true" do; evaluate(literal('A') <= literal('a')).should == true ; end
end
end
context "of integer values" do
it "1 == 1 == true" do; evaluate(literal(1) == literal(1)).should == true ; end
it "1 == 2 == false" do; evaluate(literal(1) == literal(2)).should == false ; end
it "1 != 1 == false" do; evaluate(literal(1).ne(literal(1))).should == false ; end
it "1 != 2 == true" do; evaluate(literal(1).ne(literal(2))).should == true ; end
it "1 < 2 == true" do; evaluate(literal(1) < literal(2)).should == true ; end
it "1 < 1 == false" do; evaluate(literal(1) < literal(1)).should == false ; end
it "2 < 1 == false" do; evaluate(literal(2) < literal(1)).should == false ; end
it "1 <= 2 == true" do; evaluate(literal(1) <= literal(2)).should == true ; end
it "1 <= 1 == true" do; evaluate(literal(1) <= literal(1)).should == true ; end
it "2 <= 1 == false" do; evaluate(literal(2) <= literal(1)).should == false ; end
it "1 > 2 == false" do; evaluate(literal(1) > literal(2)).should == false ; end
it "1 > 1 == false" do; evaluate(literal(1) > literal(1)).should == false ; end
it "2 > 1 == true" do; evaluate(literal(2) > literal(1)).should == true ; end
it "1 >= 2 == false" do; evaluate(literal(1) >= literal(2)).should == false ; end
it "1 >= 1 == true" do; evaluate(literal(1) >= literal(1)).should == true ; end
it "2 >= 1 == true" do; evaluate(literal(2) >= literal(1)).should == true ; end
end
context "of mixed value types" do
it "1 == 1.0 == true" do; evaluate(literal(1) == literal(1.0)).should == true ; end
it "1 < 1.1 == true" do; evaluate(literal(1) < literal(1.1)).should == true ; end
it "'1' < 1.1 == true" do; evaluate(literal('1') < literal(1.1)).should == true ; end
it "1.0 == 1 == true" do; evaluate(literal(1.0) == literal(1)).should == true ; end
it "1.0 < 2 == true" do; evaluate(literal(1.0) < literal(2)).should == true ; end
it "'1.0' < 1.1 == true" do; evaluate(literal('1.0') < literal(1.1)).should == true ; end
it "'1.0' < 'a' == true" do; evaluate(literal('1.0') < literal('a')).should == true ; end
it "'1.0' < '' == true" do; evaluate(literal('1.0') < literal('')).should == true ; end
it "'1.0' < ' ' == true" do; evaluate(literal('1.0') < literal(' ')).should == true ; end
it "'a' > '1.0' == true" do; evaluate(literal('a') > literal('1.0')).should == true ; end
end
context "of regular expressions" do
it "/.*/ == /.*/ == true" do; evaluate(literal(/.*/) == literal(/.*/)).should == true ; end
it "/.*/ != /a.*/ == true" do; evaluate(literal(/.*/).ne(literal(/a.*/))).should == true ; end
end
context "of booleans" do
- it "true == true == true" do; evaluate(literal(true) == literal(true)).should == true ; end;
- it "false == false == true" do; evaluate(literal(false) == literal(false)).should == true ; end;
- it "true == false != true" do; evaluate(literal(true) == literal(false)).should == false ; end;
- it "false == '' == false" do; evaluate(literal(false) == literal('')).should == false ; end;
+ it "true == true == true" do; evaluate(literal(true) == literal(true)).should == true ; end;
+ it "false == false == true" do; evaluate(literal(false) == literal(false)).should == true ; end;
+ it "true == false != true" do; evaluate(literal(true) == literal(false)).should == false ; end;
+ it "false == '' == false" do; evaluate(literal(false) == literal('')).should == false ; end;
+ it "undef == '' == false" do; evaluate(literal(:undef) == literal('')).should == false ; end;
+ it "undef == undef == true" do; evaluate(literal(:undef) == literal(:undef)).should == true ; end;
+ it "nil == undef == true" do; evaluate(literal(nil) == literal(:undef)).should == true ; end;
end
context "of collections" do
it "[1,2,3] == [1,2,3] == true" do
evaluate(literal([1,2,3]) == literal([1,2,3])).should == true
evaluate(literal([1,2,3]).ne(literal([1,2,3]))).should == false
evaluate(literal([1,2,4]) == literal([1,2,3])).should == false
evaluate(literal([1,2,4]).ne(literal([1,2,3]))).should == true
end
it "{'a'=>1, 'b'=>2} == {'a'=>1, 'b'=>2} == true" do
evaluate(literal({'a'=>1, 'b'=>2}) == literal({'a'=>1, 'b'=>2})).should == true
evaluate(literal({'a'=>1, 'b'=>2}).ne(literal({'a'=>1, 'b'=>2}))).should == false
evaluate(literal({'a'=>1, 'b'=>2}) == literal({'x'=>1, 'b'=>2})).should == false
evaluate(literal({'a'=>1, 'b'=>2}).ne(literal({'x'=>1, 'b'=>2}))).should == true
end
end
context "of non comparable types" do
# TODO: Change the exception type
it "false < true == error" do; expect { evaluate(literal(true) < literal(false))}.to raise_error(Puppet::ParseError); end
it "false <= true == error" do; expect { evaluate(literal(true) <= literal(false))}.to raise_error(Puppet::ParseError); end
it "false > true == error" do; expect { evaluate(literal(true) > literal(false))}.to raise_error(Puppet::ParseError); end
it "false >= true == error" do; expect { evaluate(literal(true) >= literal(false))}.to raise_error(Puppet::ParseError); end
it "/a/ < /b/ == error" do; expect { evaluate(literal(/a/) < literal(/b/))}.to raise_error(Puppet::ParseError); end
it "/a/ <= /b/ == error" do; expect { evaluate(literal(/a/) <= literal(/b/))}.to raise_error(Puppet::ParseError); end
it "/a/ > /b/ == error" do; expect { evaluate(literal(/a/) > literal(/b/))}.to raise_error(Puppet::ParseError); end
it "/a/ >= /b/ == error" do; expect { evaluate(literal(/a/) >= literal(/b/))}.to raise_error(Puppet::ParseError); end
it "[1,2,3] < [1,2,3] == error" do
expect{ evaluate(literal([1,2,3]) < literal([1,2,3]))}.to raise_error(Puppet::ParseError)
end
it "[1,2,3] > [1,2,3] == error" do
expect{ evaluate(literal([1,2,3]) > literal([1,2,3]))}.to raise_error(Puppet::ParseError)
end
it "[1,2,3] >= [1,2,3] == error" do
expect{ evaluate(literal([1,2,3]) >= literal([1,2,3]))}.to raise_error(Puppet::ParseError)
end
it "[1,2,3] <= [1,2,3] == error" do
expect{ evaluate(literal([1,2,3]) <= literal([1,2,3]))}.to raise_error(Puppet::ParseError)
end
it "{'a'=>1, 'b'=>2} < {'a'=>1, 'b'=>2} == error" do
expect{ evaluate(literal({'a'=>1, 'b'=>2}) < literal({'a'=>1, 'b'=>2}))}.to raise_error(Puppet::ParseError)
end
it "{'a'=>1, 'b'=>2} > {'a'=>1, 'b'=>2} == error" do
expect{ evaluate(literal({'a'=>1, 'b'=>2}) > literal({'a'=>1, 'b'=>2}))}.to raise_error(Puppet::ParseError)
end
it "{'a'=>1, 'b'=>2} <= {'a'=>1, 'b'=>2} == error" do
expect{ evaluate(literal({'a'=>1, 'b'=>2}) <= literal({'a'=>1, 'b'=>2}))}.to raise_error(Puppet::ParseError)
end
it "{'a'=>1, 'b'=>2} >= {'a'=>1, 'b'=>2} == error" do
expect{ evaluate(literal({'a'=>1, 'b'=>2}) >= literal({'a'=>1, 'b'=>2}))}.to raise_error(Puppet::ParseError)
end
end
end
context "When the evaluator performs Regular Expression matching" do
it "'a' =~ /.*/ == true" do; evaluate(literal('a') =~ literal(/.*/)).should == true ; end
it "'a' =~ '.*' == true" do; evaluate(literal('a') =~ literal(".*")).should == true ; end
it "'a' !~ /b.*/ == true" do; evaluate(literal('a').mne(literal(/b.*/))).should == true ; end
it "'a' !~ 'b.*' == true" do; evaluate(literal('a').mne(literal("b.*"))).should == true ; end
it "'a' =~ Pattern['.*'] == true" do
evaluate(literal('a') =~ fqr('Pattern')[literal(".*")]).should == true
end
it "$a = Pattern['.*']; 'a' =~ $a == true" do
expr = block(var('a').set(fqr('Pattern')['foo']), literal('foo') =~ var('a'))
evaluate(expr).should == true
end
it 'should fail if LHS is not a string' do
expect { evaluate(literal(666) =~ literal(/6/))}.to raise_error(Puppet::ParseError)
end
end
context "When evaluator evaluates the 'in' operator" do
it "should find elements in an array" do
evaluate(literal(1).in(literal([1,2,3]))).should == true
evaluate(literal(4).in(literal([1,2,3]))).should == false
end
it "should find keys in a hash" do
evaluate(literal('a').in(literal({'x'=>1, 'a'=>2, 'y'=> 3}))).should == true
evaluate(literal('z').in(literal({'x'=>1, 'a'=>2, 'y'=> 3}))).should == false
end
it "should find substrings in a string" do
evaluate(literal('ana').in(literal('bananas'))).should == true
evaluate(literal('xxx').in(literal('bananas'))).should == false
end
it "should find substrings in a string (regexp)" do
evaluate(literal(/ana/).in(literal('bananas'))).should == true
evaluate(literal(/xxx/).in(literal('bananas'))).should == false
end
it "should find substrings in a string (ignoring case)" do
evaluate(literal('ANA').in(literal('bananas'))).should == true
evaluate(literal('ana').in(literal('BANANAS'))).should == true
evaluate(literal('xxx').in(literal('BANANAS'))).should == false
end
it "should find sublists in a list" do
evaluate(literal([2,3]).in(literal([1,[2,3],4]))).should == true
evaluate(literal([2,4]).in(literal([1,[2,3],4]))).should == false
end
it "should find sublists in a list (case insensitive)" do
evaluate(literal(['a','b']).in(literal(['A',['A','B'],'C']))).should == true
evaluate(literal(['x','y']).in(literal(['A',['A','B'],'C']))).should == false
end
it "should find keys in a hash" do
evaluate(literal('a').in(literal({'a' => 10, 'b' => 20}))).should == true
evaluate(literal('x').in(literal({'a' => 10, 'b' => 20}))).should == false
end
it "should find keys in a hash (case insensitive)" do
evaluate(literal('A').in(literal({'a' => 10, 'b' => 20}))).should == true
evaluate(literal('X').in(literal({'a' => 10, 'b' => 20}))).should == false
end
it "should find keys in a hash (regexp)" do
evaluate(literal(/xxx/).in(literal({'abcxxxabc' => 10, 'xyz' => 20}))).should == true
evaluate(literal(/yyy/).in(literal({'abcxxxabc' => 10, 'xyz' => 20}))).should == false
end
it "should find numbers as numbers" do
evaluate(literal(15).in(literal([1,0xf,2]))).should == true
end
it "should find numbers as strings" do
evaluate(literal(15).in(literal([1, '0xf',2]))).should == true
evaluate(literal('15').in(literal([1, 0xf,2]))).should == true
end
it "should not find numbers embedded in strings, nor digits in numbers" do
evaluate(literal(15).in(literal([1, '115', 2]))).should == false
evaluate(literal(1).in(literal([11, 111, 2]))).should == false
evaluate(literal('1').in(literal([11, 111, 2]))).should == false
end
it 'should find an entry with compatible type in an Array' do
evaluate(fqr('Array')[fqr('Integer')].in(literal(['a', [1,2,3], 'b']))).should == true
evaluate(fqr('Array')[fqr('Integer')].in(literal(['a', [1,2,'not integer'], 'b']))).should == false
end
it 'should find an entry with compatible type in a Hash' do
evaluate(fqr('Integer').in(literal({1 => 'a', 'a' => 'b'}))).should == true
evaluate(fqr('Integer').in(literal({'a' => 'a', 'a' => 'b'}))).should == false
end
end
end
diff --git a/spec/unit/pops/evaluator/evaluating_parser_spec.rb b/spec/unit/pops/evaluator/evaluating_parser_spec.rb
index 210b55b20..5e80e4076 100644
--- a/spec/unit/pops/evaluator/evaluating_parser_spec.rb
+++ b/spec/unit/pops/evaluator/evaluating_parser_spec.rb
@@ -1,1080 +1,1300 @@
require 'spec_helper'
require 'puppet/pops'
require 'puppet/pops/evaluator/evaluator_impl'
+require 'puppet/loaders'
require 'puppet_spec/pops'
require 'puppet_spec/scope'
require 'puppet/parser/e4_parser_adapter'
# relative to this spec file (./) does not work as this file is loaded by rspec
#require File.join(File.dirname(__FILE__), '/evaluator_rspec_helper')
describe 'Puppet::Pops::Evaluator::EvaluatorImpl' do
include PuppetSpec::Pops
include PuppetSpec::Scope
before(:each) do
Puppet[:strict_variables] = true
- # These must be set since the is 3x logic that triggers on these even if the tests are explicit
- # about selection of parser and evaluator
+ # These must be set since the 3x logic switches some behaviors on these even if the tests explicitly
+ # use the 4x parser and evaluator.
#
Puppet[:parser] = 'future'
- Puppet[:evaluator] = 'future'
+
# Puppetx cannot be loaded until the correct parser has been set (injector is turned off otherwise)
require 'puppetx'
+
+ # Tests need a known configuration of node/scope/compiler since they parse and evaluate
+ # snippets the way the compiler would evaluate them, but without the overhead of compiling a complete
+ # catalog for each tested expression.
+ #
+ @parser = Puppet::Pops::Parser::EvaluatingParser.new
+ @node = Puppet::Node.new('node.example.com')
+ @node.environment = Puppet::Node::Environment.create(:testing, [])
+ @compiler = Puppet::Parser::Compiler.new(@node)
+ @scope = Puppet::Parser::Scope.new(@compiler)
+ @scope.source = Puppet::Resource::Type.new(:node, 'node.example.com')
+ @scope.parent = @compiler.topscope
end
- let(:parser) { Puppet::Pops::Parser::EvaluatingParser::Transitional.new }
- let(:node) { 'node.example.com' }
- let(:scope) { s = create_test_scope_for_node(node); s }
+ let(:parser) { @parser }
+ let(:scope) { @scope }
types = Puppet::Pops::Types::TypeFactory
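# shorthand for the type factory, used by the table-driven tests below when the expected result is a type instance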
context "When evaluator evaluates literals" do
{
"1" => 1,
"010" => 8,
"0x10" => 16,
"3.14" => 3.14,
"0.314e1" => 3.14,
"31.4e-1" => 3.14,
"'1'" => '1',
"'banana'" => 'banana',
'"banana"' => 'banana',
"banana" => 'banana',
"banana::split" => 'banana::split',
"false" => false,
"true" => true,
"Array" => types.array_of_data(),
"/.*/" => /.*/
}.each do |source, result|
it "should parse and evaluate the expression '#{source}' to #{result}" do
parser.evaluate_string(scope, source, __FILE__).should == result
end
end
end
context "When the evaluator evaluates Lists and Hashes" do
{
"[]" => [],
"[1,2,3]" => [1,2,3],
"[1,[2.0, 2.1, [2.2]],[3.0, 3.1]]" => [1,[2.0, 2.1, [2.2]],[3.0, 3.1]],
"[2 + 2]" => [4],
"[1,2,3] == [1,2,3]" => true,
"[1,2,3] != [2,3,4]" => true,
"[1,2,3] == [2,2,3]" => false,
"[1,2,3] != [1,2,3]" => false,
"[1,2,3][2]" => 3,
"[1,2,3] + [4,5]" => [1,2,3,4,5],
"[1,2,3] + [[4,5]]" => [1,2,3,[4,5]],
"[1,2,3] + 4" => [1,2,3,4],
"[1,2,3] << [4,5]" => [1,2,3,[4,5]],
"[1,2,3] << {'a' => 1, 'b'=>2}" => [1,2,3,{'a' => 1, 'b'=>2}],
"[1,2,3] << 4" => [1,2,3,4],
"[1,2,3,4] - [2,3]" => [1,4],
"[1,2,3,4] - [2,5]" => [1,3,4],
"[1,2,3,4] - 2" => [1,3,4],
"[1,2,3,[2],4] - 2" => [1,3,[2],4],
"[1,2,3,[2,3],4] - [[2,3]]" => [1,2,3,4],
"[1,2,3,3,2,4,2,3] - [2,3]" => [1,4],
"[1,2,3,['a',1],['b',2]] - {'a' => 1, 'b'=>2}" => [1,2,3],
"[1,2,3,{'a'=>1,'b'=>2}] - [{'a' => 1, 'b'=>2}]" => [1,2,3],
}.each do |source, result|
it "should parse and evaluate the expression '#{source}' to #{result}" do
parser.evaluate_string(scope, source, __FILE__).should == result
end
end
{
"[1,2,3] + {'a' => 1, 'b'=>2}" => [1,2,3,['a',1],['b',2]],
}.each do |source, result|
it "should parse and evaluate the expression '#{source}' to #{result}" do
# This test must be done with match_array since the order of the hash
# is undefined and Ruby 1.8.7 and 1.9.3 produce different results.
expect(parser.evaluate_string(scope, source, __FILE__)).to match_array(result)
end
end
{
"[1,2,3][a]" => :error,
}.each do |source, result|
it "should parse and raise error for '#{source}'" do
expect { parser.evaluate_string(scope, source, __FILE__) }.to raise_error(Puppet::ParseError)
end
end
{
"{}" => {},
"{'a'=>1,'b'=>2}" => {'a'=>1,'b'=>2},
"{'a'=>1,'b'=>{'x'=>2.1,'y'=>2.2}}" => {'a'=>1,'b'=>{'x'=>2.1,'y'=>2.2}},
"{'a'=> 2 + 2}" => {'a'=> 4},
"{'a'=> 1, 'b'=>2} == {'a'=> 1, 'b'=>2}" => true,
"{'a'=> 1, 'b'=>2} != {'x'=> 1, 'b'=>2}" => true,
"{'a'=> 1, 'b'=>2} == {'a'=> 2, 'b'=>3}" => false,
"{'a'=> 1, 'b'=>2} != {'a'=> 1, 'b'=>2}" => false,
"{a => 1, b => 2}[b]" => 2,
"{2+2 => sum, b => 2}[4]" => 'sum',
"{'a'=>1, 'b'=>2} + {'c'=>3}" => {'a'=>1,'b'=>2,'c'=>3},
"{'a'=>1, 'b'=>2} + {'b'=>3}" => {'a'=>1,'b'=>3},
"{'a'=>1, 'b'=>2} + ['c', 3, 'b', 3]" => {'a'=>1,'b'=>3, 'c'=>3},
"{'a'=>1, 'b'=>2} + [['c', 3], ['b', 3]]" => {'a'=>1,'b'=>3, 'c'=>3},
"{'a'=>1, 'b'=>2} - {'b' => 3}" => {'a'=>1},
"{'a'=>1, 'b'=>2, 'c'=>3} - ['b', 'c']" => {'a'=>1},
"{'a'=>1, 'b'=>2, 'c'=>3} - 'c'" => {'a'=>1, 'b'=>2},
}.each do |source, result|
it "should parse and evaluate the expression '#{source}' to #{result}" do
parser.evaluate_string(scope, source, __FILE__).should == result
end
end
{
"{'a' => 1, 'b'=>2} << 1" => :error,
}.each do |source, result|
it "should parse and raise error for '#{source}'" do
expect { parser.evaluate_string(scope, source, __FILE__) }.to raise_error(Puppet::ParseError)
end
end
end
context "When the evaluator perform comparisons" do
{
"'a' == 'a'" => true,
"'a' == 'b'" => false,
"'a' != 'a'" => false,
"'a' != 'b'" => true,
"'a' < 'b' " => true,
"'a' < 'a' " => false,
"'b' < 'a' " => false,
"'a' <= 'b'" => true,
"'a' <= 'a'" => true,
"'b' <= 'a'" => false,
"'a' > 'b' " => false,
"'a' > 'a' " => false,
"'b' > 'a' " => true,
"'a' >= 'b'" => false,
"'a' >= 'a'" => true,
"'b' >= 'a'" => true,
"'a' == 'A'" => true,
"'a' != 'A'" => false,
"'a' > 'A'" => false,
"'a' >= 'A'" => true,
"'A' < 'a'" => false,
"'A' <= 'a'" => true,
"1 == 1" => true,
"1 == 2" => false,
"1 != 1" => false,
"1 != 2" => true,
"1 < 2 " => true,
"1 < 1 " => false,
"2 < 1 " => false,
"1 <= 2" => true,
"1 <= 1" => true,
"2 <= 1" => false,
"1 > 2 " => false,
"1 > 1 " => false,
"2 > 1 " => true,
"1 >= 2" => false,
"1 >= 1" => true,
"2 >= 1" => true,
"1 == 1.0 " => true,
"1 < 1.1 " => true,
"'1' < 1.1" => true,
"1.0 == 1 " => true,
"1.0 < 2 " => true,
"1.0 < 'a'" => true,
"'1.0' < 1.1" => true,
"'1.0' < 'a'" => true,
"'1.0' < '' " => true,
"'1.0' < ' '" => true,
"'a' > '1.0'" => true,
"/.*/ == /.*/ " => true,
"/.*/ != /a.*/" => true,
"true == true " => true,
"false == false" => true,
"true == false" => false,
}.each do |source, result|
it "should parse and evaluate the expression '#{source}' to #{result}" do
parser.evaluate_string(scope, source, __FILE__).should == result
end
end
{
"'a' =~ /.*/" => true,
"'a' =~ '.*'" => true,
"/.*/ != /a.*/" => true,
"'a' !~ /b.*/" => true,
"'a' !~ 'b.*'" => true,
'$x = a; a =~ "$x.*"' => true,
"a =~ Pattern['a.*']" => true,
- "a =~ Regexp['a.*']" => true,
+ "a =~ Regexp['a.*']" => false, # String is not subtype of Regexp. PUP-957
"$x = /a.*/ a =~ $x" => true,
"$x = Pattern['a.*'] a =~ $x" => true,
"1 =~ Integer" => true,
"1 !~ Integer" => false,
"[1,2,3] =~ Array[Integer[1,10]]" => true,
}.each do |source, result|
it "should parse and evaluate the expression '#{source}' to #{result}" do
parser.evaluate_string(scope, source, __FILE__).should == result
end
end
{
"666 =~ /6/" => :error,
"[a] =~ /a/" => :error,
"{a=>1} =~ /a/" => :error,
"/a/ =~ /a/" => :error,
"Array =~ /A/" => :error,
}.each do |source, result|
it "should parse and raise error for '#{source}'" do
expect { parser.evaluate_string(scope, source, __FILE__) }.to raise_error(Puppet::ParseError)
end
end
{
"1 in [1,2,3]" => true,
"4 in [1,2,3]" => false,
"a in {x=>1, a=>2}" => true,
"z in {x=>1, a=>2}" => false,
"ana in bananas" => true,
"xxx in bananas" => false,
"/ana/ in bananas" => true,
"/xxx/ in bananas" => false,
"ANA in bananas" => false, # ANA is a type, not a String
+ "String[1] in bananas" => false, # Philosophically true though :-)
"'ANA' in bananas" => true,
"ana in 'BANANAS'" => true,
"/ana/ in 'BANANAS'" => false,
"/ANA/ in 'BANANAS'" => true,
"xxx in 'BANANAS'" => false,
"[2,3] in [1,[2,3],4]" => true,
"[2,4] in [1,[2,3],4]" => false,
"[a,b] in ['A',['A','B'],'C']" => true,
"[x,y] in ['A',['A','B'],'C']" => false,
"a in {a=>1}" => true,
"x in {a=>1}" => false,
"'A' in {a=>1}" => true,
"'X' in {a=>1}" => false,
"a in {'A'=>1}" => true,
"x in {'A'=>1}" => false,
"/xxx/ in {'aaaxxxbbb'=>1}" => true,
"/yyy/ in {'aaaxxxbbb'=>1}" => false,
"15 in [1, 0xf]" => true,
"15 in [1, '0xf']" => true,
"'15' in [1, 0xf]" => true,
"15 in [1, 115]" => false,
"1 in [11, '111']" => false,
"'1' in [11, '111']" => false,
"Array[Integer] in [2, 3]" => false,
"Array[Integer] in [2, [3, 4]]" => true,
"Array[Integer] in [2, [a, 4]]" => false,
"Integer in { 2 =>'a'}" => true,
"Integer[5,10] in [1,5,3]" => true,
"Integer[5,10] in [1,2,3]" => false,
"Integer in {'a'=>'a'}" => false,
"Integer in {'a'=>1}" => false,
}.each do |source, result|
it "should parse and evaluate the expression '#{source}' to #{result}" do
parser.evaluate_string(scope, source, __FILE__).should == result
end
end
{
- 'Object' => ['Data', 'Scalar', 'Numeric', 'Integer', 'Float', 'Boolean', 'String', 'Pattern', 'Collection',
+ "if /(ana)/ in bananas {$1}" => 'ana',
+ "if /(xyz)/ in bananas {$1} else {$1}" => nil,
+ "$a = bananas =~ /(ana)/; $b = /(xyz)/ in bananas; $1" => 'ana',
+ "$a = xyz =~ /(xyz)/; $b = /(ana)/ in bananas; $1" => 'ana',
+ "if /p/ in [pineapple, bananas] {$0}" => 'p',
+ "if /b/ in [pineapple, bananas] {$0}" => 'b',
+ }.each do |source, result|
+ it "sets match variables for a regexp search using in such that '#{source}' produces '#{result}'" do
+ parser.evaluate_string(scope, source, __FILE__).should == result
+ end
+ end
+
+ {
+ 'Any' => ['Data', 'Scalar', 'Numeric', 'Integer', 'Float', 'Boolean', 'String', 'Pattern', 'Collection',
'Array', 'Hash', 'CatalogEntry', 'Resource', 'Class', 'Undef', 'File', 'NotYetKnownResourceType'],
# Note, Data > Collection is false (so not included)
'Data' => ['Scalar', 'Numeric', 'Integer', 'Float', 'Boolean', 'String', 'Pattern', 'Array', 'Hash',],
'Scalar' => ['Numeric', 'Integer', 'Float', 'Boolean', 'String', 'Pattern'],
'Numeric' => ['Integer', 'Float'],
'CatalogEntry' => ['Class', 'Resource', 'File', 'NotYetKnownResourceType'],
'Integer[1,10]' => ['Integer[2,3]'],
}.each do |general, specials|
specials.each do |special |
it "should compute that #{general} > #{special}" do
parser.evaluate_string(scope, "#{general} > #{special}", __FILE__).should == true
end
it "should compute that #{special} < #{general}" do
parser.evaluate_string(scope, "#{special} < #{general}", __FILE__).should == true
end
it "should compute that #{general} != #{special}" do
parser.evaluate_string(scope, "#{special} != #{general}", __FILE__).should == true
end
end
end
{
'Integer[1,10] > Integer[2,3]' => true,
'Integer[1,10] == Integer[2,3]' => false,
'Integer[1,10] > Integer[0,5]' => false,
'Integer[1,10] > Integer[1,10]' => false,
'Integer[1,10] >= Integer[1,10]' => true,
'Integer[1,10] == Integer[1,10]' => true,
}.each do |source, result|
it "should parse and evaluate the integer range comparison expression '#{source}' to #{result}" do
parser.evaluate_string(scope, source, __FILE__).should == result
end
end
end
context "When the evaluator performs arithmetic" do
context "on Integers" do
{ "2+2" => 4,
"2 + 2" => 4,
"7 - 3" => 4,
"6 * 3" => 18,
"6 / 3" => 2,
"6 % 3" => 0,
"10 % 3" => 1,
"-(6/3)" => -2,
"-6/3 " => -2,
"8 >> 1" => 4,
"8 << 1" => 16,
}.each do |source, result|
it "should parse and evaluate the expression '#{source}' to #{result}" do
parser.evaluate_string(scope, source, __FILE__).should == result
end
end
context "on Floats" do
{
"2.2 + 2.2" => 4.4,
"7.7 - 3.3" => 4.4,
"6.1 * 3.1" => 18.91,
"6.6 / 3.3" => 2.0,
"-(6.0/3.0)" => -2.0,
"-6.0/3.0 " => -2.0,
}.each do |source, result|
it "should parse and evaluate the expression '#{source}' to #{result}" do
parser.evaluate_string(scope, source, __FILE__).should == result
end
end
{
"3.14 << 2" => :error,
"3.14 >> 2" => :error,
"6.6 % 3.3" => 0.0,
"10.0 % 3.0" => 1.0,
}.each do |source, result|
it "should parse and raise error for '#{source}'" do
expect { parser.evaluate_string(scope, source, __FILE__) }.to raise_error(Puppet::ParseError)
end
end
end
context "on strings requiring boxing to Numeric" do
{
"'2' + '2'" => 4,
+ "'-2' + '2'" => 0,
+ "'- 2' + '2'" => 0,
+ '"-\t 2" + "2"' => 0,
+ "'+2' + '2'" => 4,
+ "'+ 2' + '2'" => 4,
"'2.2' + '2.2'" => 4.4,
+ "'-2.2' + '2.2'" => 0.0,
"'0xF7' + '010'" => 0xFF,
"'0xF7' + '0x8'" => 0xFF,
"'0367' + '010'" => 0xFF,
"'012.3' + '010'" => 20.3,
+ "'-0x2' + '0x4'" => 2,
+ "'+0x2' + '0x4'" => 6,
+ "'-02' + '04'" => 2,
+ "'+02' + '04'" => 6,
}.each do |source, result|
it "should parse and evaluate the expression '#{source}' to #{result}" do
parser.evaluate_string(scope, source, __FILE__).should == result
end
end
{
"'0888' + '010'" => :error,
"'0xWTF' + '010'" => :error,
"'0x12.3' + '010'" => :error,
"'0x12.3' + '010'" => :error,
+ '"-\n 2" + "2"' => :error,
+ '"-\v 2" + "2"' => :error,
+ '"-2\n" + "2"' => :error,
+ '"-2\n " + "2"' => :error,
}.each do |source, result|
it "should parse and raise error for '#{source}'" do
expect { parser.evaluate_string(scope, source, __FILE__) }.to raise_error(Puppet::ParseError)
end
end
end
end
end # arithmetic
context "When the evaluator evaluates assignment" do
{
"$a = 5" => 5,
"$a = 5; $a" => 5,
"$a = 5; $b = 6; $a" => 5,
"$a = $b = 5; $a == $b" => true,
- "$a = [1,2,3]; [x].map |$x| { $a += x; $a }" => [[1,2,3,'x']],
- "$a = [a,x,c]; [x].map |$x| { $a -= x; $a }" => [['a','c']],
}.each do |source, result|
it "should parse and evaluate the expression '#{source}' to #{result}" do
parser.evaluate_string(scope, source, __FILE__).should == result
end
end
{
- "[a,b,c] = [1,2,3]; $a == 1 and $b == 2 and $c == 3" => :error,
- "[a,b,c] = {b=>2,c=>3,a=>1}; $a == 1 and $b == 2 and $c == 3" => :error,
- "$a = [1,2,3]; [x].collect |$x| { [a] += x; $a }" => :error,
- "$a = [a,x,c]; [x].collect |$x| { [a] -= x; $a }" => :error,
+ "[a,b,c] = [1,2,3]" => /attempt to assign to 'an Array Expression'/,
+ "[a,b,c] = {b=>2,c=>3,a=>1}" => /attempt to assign to 'an Array Expression'/,
}.each do |source, result|
- it "should parse and evaluate the expression '#{source}' to #{result}" do
- expect { parser.evaluate_string(scope, source, __FILE__)}.to raise_error(Puppet::ParseError)
+ it "should parse and evaluate the expression '#{source}' to error with #{result}" do
+ expect { parser.evaluate_string(scope, source, __FILE__)}.to raise_error(Puppet::ParseError, result)
end
end
end
context "When the evaluator evaluates conditionals" do
{
"if true {5}" => 5,
"if false {5}" => nil,
"if false {2} else {5}" => 5,
"if false {2} elsif true {5}" => 5,
"if false {2} elsif false {5}" => nil,
"unless false {5}" => 5,
"unless true {5}" => nil,
"unless true {2} else {5}" => 5,
"$a = if true {5} $a" => 5,
"$a = if false {5} $a" => nil,
"$a = if false {2} else {5} $a" => 5,
"$a = if false {2} elsif true {5} $a" => 5,
"$a = if false {2} elsif false {5} $a" => nil,
"$a = unless false {5} $a" => 5,
"$a = unless true {5} $a" => nil,
"$a = unless true {2} else {5} $a" => 5,
}.each do |source, result|
it "should parse and evaluate the expression '#{source}' to #{result}" do
parser.evaluate_string(scope, source, __FILE__).should == result
end
end
{
"case 1 { 1 : { yes } }" => 'yes',
"case 2 { 1,2,3 : { yes} }" => 'yes',
"case 2 { 1,3 : { no } 2: { yes} }" => 'yes',
"case 2 { 1,3 : { no } 5: { no } default: { yes }}" => 'yes',
"case 2 { 1,3 : { no } 5: { no } }" => nil,
"case 'banana' { 1,3 : { no } /.*ana.*/: { yes } }" => 'yes',
"case 'banana' { /.*(ana).*/: { $1 } }" => 'ana',
"case [1] { Array : { yes } }" => 'yes',
"case [1] {
Array[String] : { no }
Array[Integer]: { yes }
}" => 'yes',
"case 1 {
Integer : { yes }
Type[Integer] : { no } }" => 'yes',
"case Integer {
Integer : { no }
Type[Integer] : { yes } }" => 'yes',
+ # supports unfold
+ "case ringo {
+ *[paul, john, ringo, george] : { 'beatle' } }" => 'beatle',
}.each do |source, result|
it "should parse and evaluate the expression '#{source}' to #{result}" do
parser.evaluate_string(scope, source, __FILE__).should == result
end
end
{
"2 ? { 1 => no, 2 => yes}" => 'yes',
- "3 ? { 1 => no, 2 => no}" => nil,
"3 ? { 1 => no, 2 => no, default => yes }" => 'yes',
- "3 ? { 1 => no, default => yes, 3 => no }" => 'yes',
+ "3 ? { 1 => no, default => yes, 3 => no }" => 'no',
+ "3 ? { 1 => no, 3 => no, default => yes }" => 'no',
+ "4 ? { 1 => no, default => yes, 3 => no }" => 'yes',
+ "4 ? { 1 => no, 3 => no, default => yes }" => 'yes',
"'banana' ? { /.*(ana).*/ => $1 }" => 'ana',
"[2] ? { Array[String] => yes, Array => yes}" => 'yes',
+ "ringo ? *[paul, john, ringo, george] => 'beatle'" => 'beatle',
}.each do |source, result|
it "should parse and evaluate the expression '#{source}' to #{result}" do
parser.evaluate_string(scope, source, __FILE__).should == result
end
end
+
+ it 'fails if a selector does not match' do
+ expect{parser.evaluate_string(scope, "2 ? 3 => 4")}.to raise_error(/No matching entry for selector parameter with value '2'/)
+ end
+ end
+
+ context "When evaluator evaluated unfold" do
+ {
+ "*[1,2,3]" => [1,2,3],
+ "*1" => [1],
+ "*'a'" => ['a']
+ }.each do |source, result|
+ it "should parse and evaluate the expression '#{source}' to #{result}" do
+ parser.evaluate_string(scope, source, __FILE__).should == result
+ end
+ end
+
+ it "should parse and evaluate the expression '*{a=>10, b=>20} to [['a',10],['b',20]]" do
+ result = parser.evaluate_string(scope, '*{a=>10, b=>20}', __FILE__)
+ expect(result).to include(['a', 10])
+ expect(result).to include(['b', 20])
+ end
+
end
context "When evaluator performs [] operations" do
{
"[1,2,3][0]" => 1,
"[1,2,3][2]" => 3,
"[1,2,3][3]" => nil,
"[1,2,3][-1]" => 3,
"[1,2,3][-2]" => 2,
"[1,2,3][-4]" => nil,
"[1,2,3,4][0,2]" => [1,2],
"[1,2,3,4][1,3]" => [2,3,4],
"[1,2,3,4][-2,2]" => [3,4],
"[1,2,3,4][-3,2]" => [2,3],
"[1,2,3,4][3,5]" => [4],
"[1,2,3,4][5,2]" => [],
"[1,2,3,4][0,-1]" => [1,2,3,4],
"[1,2,3,4][0,-2]" => [1,2,3],
"[1,2,3,4][0,-4]" => [1],
"[1,2,3,4][0,-5]" => [],
"[1,2,3,4][-5,2]" => [1],
"[1,2,3,4][-5,-3]" => [1,2],
"[1,2,3,4][-6,-3]" => [1,2],
"[1,2,3,4][2,-3]" => [],
+ "[1,*[2,3],4]" => [1,2,3,4],
+ "[1,*[2,3],4][1]" => 2,
}.each do |source, result|
it "should parse and evaluate the expression '#{source}' to #{result}" do
parser.evaluate_string(scope, source, __FILE__).should == result
end
end
{
"{a=>1, b=>2, c=>3}[a]" => 1,
"{a=>1, b=>2, c=>3}[c]" => 3,
"{a=>1, b=>2, c=>3}[x]" => nil,
"{a=>1, b=>2, c=>3}[c,b]" => [3,2],
"{a=>1, b=>2, c=>3}[a,b,c]" => [1,2,3],
"{a=>{b=>{c=>'it works'}}}[a][b][c]" => 'it works',
"$a = {undef => 10} $a[free_lunch]" => nil,
"$a = {undef => 10} $a[undef]" => 10,
"$a = {undef => 10} $a[$a[free_lunch]]" => 10,
"$a = {} $a[free_lunch] == undef" => true,
}.each do |source, result|
it "should parse and evaluate the expression '#{source}' to #{result}" do
parser.evaluate_string(scope, source, __FILE__).should == result
end
end
{
"'abc'[0]" => 'a',
"'abc'[2]" => 'c',
"'abc'[-1]" => 'c',
"'abc'[-2]" => 'b',
"'abc'[-3]" => 'a',
"'abc'[-4]" => '',
"'abc'[3]" => '',
"abc[0]" => 'a',
"abc[2]" => 'c',
"abc[-1]" => 'c',
"abc[-2]" => 'b',
"abc[-3]" => 'a',
"abc[-4]" => '',
"abc[3]" => '',
"'abcd'[0,2]" => 'ab',
"'abcd'[1,3]" => 'bcd',
"'abcd'[-2,2]" => 'cd',
"'abcd'[-3,2]" => 'bc',
"'abcd'[3,5]" => 'd',
"'abcd'[5,2]" => '',
"'abcd'[0,-1]" => 'abcd',
"'abcd'[0,-2]" => 'abc',
"'abcd'[0,-4]" => 'a',
"'abcd'[0,-5]" => '',
"'abcd'[-5,2]" => 'a',
"'abcd'[-5,-3]" => 'ab',
"'abcd'[-6,-3]" => 'ab',
"'abcd'[2,-3]" => '',
}.each do |source, result|
it "should parse and evaluate the expression '#{source}' to #{result}" do
parser.evaluate_string(scope, source, __FILE__).should == result
end
end
# Type operations (full set tested by tests covering type calculator)
{
"Array[Integer]" => types.array_of(types.integer),
"Array[Integer,1]" => types.constrain_size(types.array_of(types.integer),1, :default),
"Array[Integer,1,2]" => types.constrain_size(types.array_of(types.integer),1, 2),
"Array[Integer,Integer[1,2]]" => types.constrain_size(types.array_of(types.integer),1, 2),
"Array[Integer,Integer[1]]" => types.constrain_size(types.array_of(types.integer),1, :default),
"Hash[Integer,Integer]" => types.hash_of(types.integer, types.integer),
"Hash[Integer,Integer,1]" => types.constrain_size(types.hash_of(types.integer, types.integer),1, :default),
"Hash[Integer,Integer,1,2]" => types.constrain_size(types.hash_of(types.integer, types.integer),1, 2),
"Hash[Integer,Integer,Integer[1,2]]" => types.constrain_size(types.hash_of(types.integer, types.integer),1, 2),
"Hash[Integer,Integer,Integer[1]]" => types.constrain_size(types.hash_of(types.integer, types.integer),1, :default),
"Resource[File]" => types.resource('File'),
"Resource['File']" => types.resource(types.resource('File')),
"File[foo]" => types.resource('file', 'foo'),
"File[foo, bar]" => [types.resource('file', 'foo'), types.resource('file', 'bar')],
"Pattern[a, /b/, Pattern[c], Regexp[d]]" => types.pattern('a', 'b', 'c', 'd'),
"String[1,2]" => types.constrain_size(types.string,1, 2),
"String[Integer[1,2]]" => types.constrain_size(types.string,1, 2),
"String[Integer[1]]" => types.constrain_size(types.string,1, :default),
}.each do |source, result|
it "should parse and evaluate the expression '#{source}' to #{result}" do
parser.evaluate_string(scope, source, __FILE__).should == result
end
end
# LHS where [] not supported, and missing key(s)
{
"Array[]" => :error,
"'abc'[]" => :error,
"Resource[]" => :error,
"File[]" => :error,
"String[]" => :error,
"1[]" => :error,
"3.14[]" => :error,
"/.*/[]" => :error,
"$a=[1] $a[]" => :error,
}.each do |source, result|
it "should parse and evaluate the expression '#{source}' to #{result}" do
expect { parser.evaluate_string(scope, source, __FILE__)}.to raise_error(/Syntax error/)
end
end
# Errors when wrong number/type of keys are used
{
"Array[0]" => 'Array-Type[] arguments must be types. Got Fixnum',
"Hash[0]" => 'Hash-Type[] arguments must be types. Got Fixnum',
"Hash[Integer, 0]" => 'Hash-Type[] arguments must be types. Got Fixnum',
"Array[Integer,1,2,3]" => 'Array-Type[] accepts 1 to 3 arguments. Got 4',
"Array[Integer,String]" => "A Type's size constraint arguments must be a single Integer type, or 1-2 integers (or default). Got a String-Type",
"Hash[Integer,String, 1,2,3]" => 'Hash-Type[] accepts 1 to 4 arguments. Got 5',
"'abc'[x]" => "The value 'x' cannot be converted to Numeric",
"'abc'[1.0]" => "A String[] cannot use Float where Integer is expected",
"'abc'[1,2,3]" => "String supports [] with one or two arguments. Got 3",
"Resource[0]" => 'First argument to Resource[] must be a resource type or a String. Got Fixnum',
"Resource[a, 0]" => 'Error creating type specialization of a Resource-Type, Cannot use Fixnum where String is expected',
"File[0]" => 'Error creating type specialization of a File-Type, Cannot use Fixnum where String is expected',
"String[a]" => "A Type's size constraint arguments must be a single Integer type, or 1-2 integers (or default). Got a String",
"Pattern[0]" => 'Error creating type specialization of a Pattern-Type, Cannot use Fixnum where String or Regexp or Pattern-Type or Regexp-Type is expected',
"Regexp[0]" => 'Error creating type specialization of a Regexp-Type, Cannot use Fixnum where String or Regexp is expected',
"Regexp[a,b]" => 'A Regexp-Type[] accepts 1 argument. Got 2',
"true[0]" => "Operator '[]' is not applicable to a Boolean",
"1[0]" => "Operator '[]' is not applicable to an Integer",
"3.14[0]" => "Operator '[]' is not applicable to a Float",
"/.*/[0]" => "Operator '[]' is not applicable to a Regexp",
"[1][a]" => "The value 'a' cannot be converted to Numeric",
"[1][0.0]" => "An Array[] cannot use Float where Integer is expected",
"[1]['0.0']" => "An Array[] cannot use Float where Integer is expected",
"[1,2][1, 0.0]" => "An Array[] cannot use Float where Integer is expected",
"[1,2][1.0, -1]" => "An Array[] cannot use Float where Integer is expected",
"[1,2][1, -1.0]" => "An Array[] cannot use Float where Integer is expected",
}.each do |source, result|
it "should parse and evaluate the expression '#{source}' to #{result}" do
expect { parser.evaluate_string(scope, source, __FILE__)}.to raise_error(Regexp.new(Regexp.quote(result)))
end
end
context "on catalog types" do
it "[n] gets resource parameter [n]" do
source = "notify { 'hello': message=>'yo'} Notify[hello][message]"
parser.evaluate_string(scope, source, __FILE__).should == 'yo'
end
it "[n] gets class parameter [n]" do
source = "class wonka($produces='chocolate'){ }
include wonka
Class[wonka][produces]"
# This is more complicated since it needs to run like 3.x and do an import_ast
adapted_parser = Puppet::Parser::E4ParserAdapter.new
adapted_parser.file = __FILE__
ast = adapted_parser.parse(source)
- scope.known_resource_types.import_ast(ast, '')
- ast.code.safeevaluate(scope).should == 'chocolate'
+ Puppet.override({:global_scope => scope}, "test") do
+ scope.known_resource_types.import_ast(ast, '')
+ ast.code.safeevaluate(scope).should == 'chocolate'
+ end
end
# Resource default and override expressions and resource parameter access with []
{
+ # Properties
"notify { id: message=>explicit} Notify[id][message]" => "explicit",
"Notify { message=>by_default} notify {foo:} Notify[foo][message]" => "by_default",
"notify {foo:} Notify[foo]{message =>by_override} Notify[foo][message]" => "by_override",
+ # Parameters
+ "notify { id: withpath=>explicit} Notify[id][withpath]" => "explicit",
+ "Notify { withpath=>by_default } notify { foo: } Notify[foo][withpath]" => "by_default",
+ "notify {foo:}
+ Notify[foo]{withpath=>by_override}
+ Notify[foo][withpath]" => "by_override",
+ # Metaparameters
"notify { foo: tag => evoe} Notify[foo][tag]" => "evoe",
- # Does not produce the defaults for tag
+ # Does not produce the defaults for tag parameter (title, type or names of scopes)
"notify { foo: } Notify[foo][tag]" => nil,
+ # But a default may be specified on the type
+ "Notify { tag=>by_default } notify { foo: } Notify[foo][tag]" => "by_default",
+ "Notify { tag=>by_default }
+ notify { foo: }
+ Notify[foo]{ tag=>by_override }
+ Notify[foo][tag]" => "by_override",
}.each do |source, result|
it "should parse and evaluate the expression '#{source}' to #{result}" do
parser.evaluate_string(scope, source, __FILE__).should == result
end
end
+ # Virtual and realized resource default and overridden resource parameter access with []
+ {
+ # Properties
+ "@notify { id: message=>explicit } Notify[id][message]" => "explicit",
+ "@notify { id: message=>explicit }
+ realize Notify[id]
+ Notify[id][message]" => "explicit",
+ "Notify { message=>by_default } @notify { id: } Notify[id][message]" => "by_default",
+ "Notify { message=>by_default }
+ @notify { id: tag=>thisone }
+ Notify <| tag == thisone |>;
+ Notify[id][message]" => "by_default",
+ "@notify { id: } Notify[id]{message=>by_override} Notify[id][message]" => "by_override",
+ # Parameters
+ "@notify { id: withpath=>explicit } Notify[id][withpath]" => "explicit",
+ "Notify { withpath=>by_default }
+ @notify { id: }
+ Notify[id][withpath]" => "by_default",
+ "@notify { id: }
+ realize Notify[id]
+ Notify[id]{withpath=>by_override}
+ Notify[id][withpath]" => "by_override",
+ # Metaparameters
+ "@notify { id: tag=>explicit } Notify[id][tag]" => "explicit",
+ }.each do |source, result|
+ it "parses and evaluates virtual and realized resources in the expression '#{source}' to #{result}" do
+ expect(parser.evaluate_string(scope, source, __FILE__)).to eq(result)
+ end
+ end
+
+ # Exported resource attributes
+ {
+ "@@notify { id: message=>explicit } Notify[id][message]" => "explicit",
+ "@@notify { id: message=>explicit, tag=>thisone }
+ Notify <<| tag == thisone |>>
+ Notify[id][message]" => "explicit",
+ }.each do |source, result|
+ it "parses and evaluates exported resources in the expression '#{source}' to #{result}" do
+ expect(parser.evaluate_string(scope, source, __FILE__)).to eq(result)
+ end
+ end
+
# Resource default and override expressions and resource parameter access error conditions
{
"notify { xid: message=>explicit} Notify[id][message]" => /Resource not found/,
"notify { id: message=>explicit} Notify[id][mustard]" => /does not have a parameter called 'mustard'/,
# NOTE: these meta-esque parameters are not recognized as such
"notify { id: message=>explicit} Notify[id][title]" => /does not have a parameter called 'title'/,
"notify { id: message=>explicit} Notify[id]['type']" => /does not have a parameter called 'type'/,
+ "notify { id: message=>explicit } Notify[id]{message=>override}" => /'message' is already set on Notify\[id\]/
}.each do |source, result|
it "should parse '#{source}' and raise error matching #{result}" do
expect { parser.evaluate_string(scope, source, __FILE__)}.to raise_error(result)
end
end
context 'with errors' do
{ "Class['fail-whale']" => /Illegal name/,
"Class[0]" => /An Integer cannot be used where a String is expected/,
"Class[/.*/]" => /A Regexp cannot be used where a String is expected/,
"Class[4.1415]" => /A Float cannot be used where a String is expected/,
"Class[Integer]" => /An Integer-Type cannot be used where a String is expected/,
"Class[File['tmp']]" => /A File\['tmp'\] Resource-Reference cannot be used where a String is expected/,
}.each do | source, error_pattern|
it "an error is flagged for '#{source}'" do
expect { parser.evaluate_string(scope, source, __FILE__)}.to raise_error(error_pattern)
end
end
end
end
# end [] operations
end
context "When the evaluator performs boolean operations" do
{
"true and true" => true,
"false and true" => false,
"true and false" => false,
"false and false" => false,
"true or true" => true,
"false or true" => true,
"true or false" => true,
"false or false" => false,
"! true" => false,
"!! true" => true,
"!! false" => false,
"! 'x'" => false,
- "! ''" => true,
+ "! ''" => false,
"! undef" => true,
"! [a]" => false,
"! []" => false,
"! {a=>1}" => false,
"! {}" => false,
"true and false and '0xwtf' + 1" => false,
"false or true or '0xwtf' + 1" => true,
}.each do |source, result|
it "should parse and evaluate the expression '#{source}' to #{result}" do
parser.evaluate_string(scope, source, __FILE__).should == result
end
end
{
"false || false || '0xwtf' + 1" => :error,
}.each do |source, result|
it "should parse and raise error for '#{source}'" do
expect { parser.evaluate_string(scope, source, __FILE__) }.to raise_error(Puppet::ParseError)
end
end
end
context "When evaluator performs operations on literal undef" do
it "computes non existing hash lookup as undef" do
parser.evaluate_string(scope, "{a => 1}[b] == undef", __FILE__).should == true
parser.evaluate_string(scope, "undef == {a => 1}[b]", __FILE__).should == true
end
end
context "When evaluator performs calls" do
+
let(:populate) do
parser.evaluate_string(scope, "$a = 10 $b = [1,2,3]")
end
{
'sprintf( "x%iy", $a )' => "x10y",
+ # unfolds
+ 'sprintf( *["x%iy", $a] )' => "x10y",
'"x%iy".sprintf( $a )' => "x10y",
'$b.reduce |$memo,$x| { $memo + $x }' => 6,
'reduce($b) |$memo,$x| { $memo + $x }' => 6,
}.each do |source, result|
it "should parse and evaluate the expression '#{source}' to #{result}" do
populate
parser.evaluate_string(scope, source, __FILE__).should == result
end
end
{
'"value is ${a*2} yo"' => :error,
}.each do |source, result|
it "should parse and raise error for '#{source}'" do
expect { parser.evaluate_string(scope, source, __FILE__) }.to raise_error(Puppet::ParseError)
end
end
it "provides location information on error in unparenthesized call logic" do
expect{parser.evaluate_string(scope, "include non_existing_class", __FILE__)}.to raise_error(Puppet::ParseError, /line 1\:1/)
end
+
+ it 'defaults can be given in a lambda and used only when arg is missing' do
+ env_loader = @compiler.loaders.public_environment_loader
+ fc = Puppet::Functions.create_function(:test) do
+ dispatch :test do
+ param 'Integer', 'count'
+ required_block_param
+ end
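+ # Illustrative note (not part of the original change): [].fill(10, 0, count) yields
+ # `count` copies of 10, so the block below is called with that many arguments and the
+ # lambda default is used only for parameters that receive no argument.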
+ def test(count, block)
+ block.call({}, *[].fill(10, 0, count))
+ end
+ end
+ the_func = fc.new({}, env_loader)
+ env_loader.add_entry(:function, 'test', the_func, __FILE__)
+ expect(parser.evaluate_string(scope, "test(1) |$x, $y=20| { $x + $y}")).to eql(30)
+ expect(parser.evaluate_string(scope, "test(2) |$x, $y=20| { $x + $y}")).to eql(20)
+ end
+
+ it 'a given undef does not select the default value' do
+ env_loader = @compiler.loaders.public_environment_loader
+ fc = Puppet::Functions.create_function(:test) do
+ dispatch :test do
+ param 'Any', 'lambda_arg'
+ required_block_param
+ end
+ def test(lambda_arg, block)
+ block.call({}, lambda_arg)
+ end
+ end
+ the_func = fc.new({}, env_loader)
+ env_loader.add_entry(:function, 'test', the_func, __FILE__)
+
+ expect(parser.evaluate_string(scope, "test(undef) |$x=20| { $x == undef}")).to eql(true)
+ end
+
+ it 'a given undef is given as nil' do
+ env_loader = @compiler.loaders.public_environment_loader
+ fc = Puppet::Functions.create_function(:assert_no_undef) do
+ dispatch :assert_no_undef do
+ param 'Any', 'x'
+ end
+
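+ # Illustrative note (not part of the original change): raises if x, or any element,
+ # key, or value of x, is the legacy :undef symbol; the expectations below pass
+ # because an evaluated undef arrives in Ruby code as nil.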
+ def assert_no_undef(x)
+ case x
+ when Array
+ return unless x.include?(:undef)
+ when Hash
+ return unless x.keys.include?(:undef) || x.values.include?(:undef)
+ else
+ return unless x == :undef
+ end
+ raise "contains :undef"
+ end
+ end
+
+ the_func = fc.new({}, env_loader)
+ env_loader.add_entry(:function, 'assert_no_undef', the_func, __FILE__)
+
+ expect{parser.evaluate_string(scope, "assert_no_undef(undef)")}.to_not raise_error()
+ expect{parser.evaluate_string(scope, "assert_no_undef([undef])")}.to_not raise_error()
+ expect{parser.evaluate_string(scope, "assert_no_undef({undef => 1})")}.to_not raise_error()
+ expect{parser.evaluate_string(scope, "assert_no_undef({1 => undef})")}.to_not raise_error()
+ end
end
context "When evaluator performs string interpolation" do
let(:populate) do
parser.evaluate_string(scope, "$a = 10 $b = [1,2,3]")
end
{
'"value is $a yo"' => "value is 10 yo",
'"value is \$a yo"' => "value is $a yo",
'"value is ${a} yo"' => "value is 10 yo",
'"value is \${a} yo"' => "value is ${a} yo",
'"value is ${$a} yo"' => "value is 10 yo",
'"value is ${$a*2} yo"' => "value is 20 yo",
'"value is ${sprintf("x%iy",$a)} yo"' => "value is x10y yo",
'"value is ${"x%iy".sprintf($a)} yo"' => "value is x10y yo",
'"value is ${[1,2,3]} yo"' => "value is [1, 2, 3] yo",
'"value is ${/.*/} yo"' => "value is /.*/ yo",
'$x = undef "value is $x yo"' => "value is yo",
'$x = default "value is $x yo"' => "value is default yo",
'$x = Array[Integer] "value is $x yo"' => "value is Array[Integer] yo",
'"value is ${Array[Integer]} yo"' => "value is Array[Integer] yo",
}.each do |source, result|
it "should parse and evaluate the expression '#{source}' to #{result}" do
populate
parser.evaluate_string(scope, source, __FILE__).should == result
end
end
it "should parse and evaluate an interpolation of a hash" do
source = '"value is ${{a=>1,b=>2}} yo"'
# This test accepts two alternative results because converting a hash to a string
# does not guarantee key order
hashstr = {'a' => 1, 'b' => 2}.to_s
alt_results = ["value is {a => 1, b => 2} yo", "value is {b => 2, a => 1} yo" ]
populate
parse_result = parser.evaluate_string(scope, source, __FILE__)
alt_results.include?(parse_result).should == true
end
it 'should accept a variable with leading underscore when used directly' do
source = '$_x = 10; "$_x"'
expect(parser.evaluate_string(scope, source, __FILE__)).to eql('10')
end
it 'should accept a variable with leading underscore when used as an expression' do
source = '$_x = 10; "${_x}"'
expect(parser.evaluate_string(scope, source, __FILE__)).to eql('10')
end
{
'"value is ${a*2} yo"' => :error,
}.each do |source, result|
it "should parse and raise error for '#{source}'" do
expect { parser.evaluate_string(scope, source, __FILE__) }.to raise_error(Puppet::ParseError)
end
end
end
context "When evaluating variables" do
context "that are non existing an error is raised for" do
it "unqualified variable" do
expect { parser.evaluate_string(scope, "$quantum_gravity", __FILE__) }.to raise_error(/Unknown variable/)
end
it "qualified variable" do
expect { parser.evaluate_string(scope, "$quantum_gravity::graviton", __FILE__) }.to raise_error(/Unknown variable/)
end
end
it "a lex error should be raised for '$foo::::bar'" do
expect { parser.evaluate_string(scope, "$foo::::bar") }.to raise_error(Puppet::LexError, /Illegal fully qualified name at line 1:7/)
end
{ '$a = $0' => nil,
'$a = $1' => nil,
}.each do |source, value|
it "it is ok to reference numeric unassigned variables '#{source}'" do
parser.evaluate_string(scope, source, __FILE__).should == value
end
end
{ '$00 = 0' => /must be a decimal value/,
'$0xf = 0' => /must be a decimal value/,
'$0777 = 0' => /must be a decimal value/,
'$123a = 0' => /must be a decimal value/,
}.each do |source, error_pattern|
it "should raise an error for '#{source}'" do
expect { parser.evaluate_string(scope, source, __FILE__) }.to raise_error(error_pattern)
end
end
context "an initial underscore in the last segment of a var name is allowed" do
{ '$_a = 1' => 1,
'$__a = 1' => 1,
}.each do |source, value|
it "as in this example '#{source}'" do
parser.evaluate_string(scope, source, __FILE__).should == value
end
end
end
end
context "When evaluating relationships" do
it 'should form a relation with File[a] -> File[b]' do
source = "File[a] -> File[b]"
parser.evaluate_string(scope, source, __FILE__)
scope.compiler.should have_relationship(['File', 'a', '->', 'File', 'b'])
end
it 'should form a relation with resource -> resource' do
source = "notify{a:} -> notify{b:}"
parser.evaluate_string(scope, source, __FILE__)
scope.compiler.should have_relationship(['Notify', 'a', '->', 'Notify', 'b'])
end
it 'should form a relation with [File[a], File[b]] -> [File[x], File[y]]' do
source = "[File[a], File[b]] -> [File[x], File[y]]"
parser.evaluate_string(scope, source, __FILE__)
scope.compiler.should have_relationship(['File', 'a', '->', 'File', 'x'])
scope.compiler.should have_relationship(['File', 'b', '->', 'File', 'x'])
scope.compiler.should have_relationship(['File', 'a', '->', 'File', 'y'])
scope.compiler.should have_relationship(['File', 'b', '->', 'File', 'y'])
end
it 'should tolerate (eliminate) duplicates in operands' do
source = "[File[a], File[a]] -> File[x]"
parser.evaluate_string(scope, source, __FILE__)
scope.compiler.should have_relationship(['File', 'a', '->', 'File', 'x'])
scope.compiler.relationships.size.should == 1
end
it 'should form a relation with <-' do
source = "File[a] <- File[b]"
parser.evaluate_string(scope, source, __FILE__)
scope.compiler.should have_relationship(['File', 'b', '->', 'File', 'a'])
end
it 'should form a relation with <~' do
source = "File[a] <~ File[b]"
parser.evaluate_string(scope, source, __FILE__)
scope.compiler.should have_relationship(['File', 'b', '~>', 'File', 'a'])
end
end
context "When evaluating heredoc" do
it "evaluates plain heredoc" do
src = "@(END)\nThis is\nheredoc text\nEND\n"
parser.evaluate_string(scope, src).should == "This is\nheredoc text\n"
end
it "parses heredoc with margin" do
src = [
"@(END)",
" This is",
" heredoc text",
" | END",
""
].join("\n")
parser.evaluate_string(scope, src).should == "This is\nheredoc text\n"
end
it "parses heredoc with margin and right newline trim" do
src = [
"@(END)",
" This is",
" heredoc text",
" |- END",
""
].join("\n")
parser.evaluate_string(scope, src).should == "This is\nheredoc text"
end
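# Illustrative note: in @(END/t) the /t flag enables only the \t escape, so \n stays
# literal text, which is what the expected result below asserts.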
it "parses escape specification" do
src = <<-CODE
@(END/t)
Tex\\tt\\n
|- END
CODE
parser.evaluate_string(scope, src).should == "Tex\tt\\n"
end
it "parses syntax checked specification" do
src = <<-CODE
@(END:json)
["foo", "bar"]
|- END
CODE
parser.evaluate_string(scope, src).should == '["foo", "bar"]'
end
it "parses syntax checked specification with error and reports it" do
src = <<-CODE
@(END:json)
['foo', "bar"]
|- END
CODE
expect { parser.evaluate_string(scope, src)}.to raise_error(/Cannot parse invalid JSON string/)
end
- it "parses interpolated heredoc epression" do
+ it "parses interpolated heredoc expression" do
src = <<-CODE
$name = 'Fjodor'
@("END")
Hello $name
|- END
CODE
parser.evaluate_string(scope, src).should == "Hello Fjodor"
end
end
context "Handles Deprecations and Discontinuations" do
- around(:each) do |example|
- Puppet.override({:loaders => Puppet::Pops::Loaders.new(Puppet::Node::Environment.create(:testing, []))}, 'test') do
- example.run
- end
- end
-
it 'of import statements' do
source = "\nimport foo"
# Error references line 2:1 where the discontinued 'import' statement begins
# Set file to nil to make it easier to match with line number (no file name in output)
expect { parser.evaluate_string(scope, source) }.to raise_error(/'import' has been discontinued.*line 2:1/)
end
end
context "Detailed Error messages are reported" do
it 'for illegal type references' do
source = '1+1 { "title": }'
# Error references position 2 at the '+' (the '1+1' expression is not a valid resource type)
# Set file to nil to make it easier to match with line number (no file name in output)
- expect { parser.parse_string(source, nil) }.to raise_error(/Expression is not valid as a resource.*line 1:5/)
+ expect { parser.evaluate_string(scope, source) }.to raise_error(
+ /Illegal Resource Type expression, expected result to be a type name, or untitled Resource.*line 1:2/)
end
it 'for non r-value producing <| |>' do
expect { parser.parse_string("$a = File <| |>", nil) }.to raise_error(/A Virtual Query does not produce a value at line 1:6/)
end
it 'for non r-value producing <<| |>>' do
expect { parser.parse_string("$a = File <<| |>>", nil) }.to raise_error(/An Exported Query does not produce a value at line 1:6/)
end
it 'for non r-value producing define' do
Puppet.expects(:err).with("Invalid use of expression. A 'define' expression does not produce a value at line 1:6")
Puppet.expects(:err).with("Classes, definitions, and nodes may only appear at toplevel or inside other classes at line 1:6")
expect { parser.parse_string("$a = define foo { }", nil) }.to raise_error(/2 errors/)
end
it 'for non r-value producing class' do
Puppet.expects(:err).with("Invalid use of expression. A Host Class Definition does not produce a value at line 1:6")
Puppet.expects(:err).with("Classes, definitions, and nodes may only appear at toplevel or inside other classes at line 1:6")
expect { parser.parse_string("$a = class foo { }", nil) }.to raise_error(/2 errors/)
end
it 'for unclosed quote with indication of start position of string' do
source = <<-SOURCE.gsub(/^ {6}/,'')
$a = "xx
yyy
SOURCE
# The first char after the opening " is reported as the error position.
expect { parser.parse_string(source) }.to raise_error(/Unclosed quote after '"' followed by 'xx\\nyy\.\.\.' at line 1:7/)
end
it 'for multiple errors with a summary exception' do
Puppet.expects(:err).with("Invalid use of expression. A Node Definition does not produce a value at line 1:6")
Puppet.expects(:err).with("Classes, definitions, and nodes may only appear at toplevel or inside other classes at line 1:6")
expect { parser.parse_string("$a = node x { }",nil) }.to raise_error(/2 errors/)
end
it 'for a bad hostname' do
expect {
parser.parse_string("node 'macbook+owned+by+name' { }", nil)
}.to raise_error(/The hostname 'macbook\+owned\+by\+name' contains illegal characters.*at line 1:6/)
end
it 'for a hostname with interpolation' do
source = <<-SOURCE.gsub(/^ {6}/,'')
$name = 'fred'
node "macbook-owned-by$name" { }
SOURCE
expect {
parser.parse_string(source, nil)
}.to raise_error(/An interpolated expression is not allowed in a hostname of a node at line 2:23/)
end
end
+ context 'does not leak variables' do
+ it 'local variables are gone when lambda ends' do
+ source = <<-SOURCE
+ [1,2,3].each |$x| { $y = $x}
+ $a = $y
+ SOURCE
+ expect do
+ parser.evaluate_string(scope, source)
+ end.to raise_error(/Unknown variable: 'y'/)
+ end
+
+ it 'lambda parameters are gone when lambda ends' do
+ source = <<-SOURCE
+ [1,2,3].each |$x| { $y = $x}
+ $a = $x
+ SOURCE
+ expect do
+ parser.evaluate_string(scope, source)
+ end.to raise_error(/Unknown variable: 'x'/)
+ end
+
+ it 'does not leak match variables' do
+ source = <<-SOURCE
+ if 'xyz' =~ /(x)(y)(z)/ { notice $2 }
+ case 'abc' {
+ /(a)(b)(c)/ : { $x = $2 }
+ }
+ "-$x-$2-"
+ SOURCE
+ expect(parser.evaluate_string(scope, source)).to eq('-b--')
+ end
+ end
+
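# Custom matcher backing the relationship expectations above, e.g. (taken from the specs earlier in this file):
#   parser.evaluate_string(scope, "File[a] -> File[b]", __FILE__)
#   scope.compiler.should have_relationship(['File', 'a', '->', 'File', 'b'])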
matcher :have_relationship do |expected|
calc = Puppet::Pops::Types::TypeCalculator.new
match do |compiler|
op_name = {'->' => :relationship, '~>' => :subscription}
compiler.relationships.any? do | relation |
relation.source.type == expected[0] &&
relation.source.title == expected[1] &&
relation.type == op_name[expected[2]] &&
relation.target.type == expected[3] &&
relation.target.title == expected[4]
end
end
failure_message_for_should do |actual|
"Relationship #{expected[0]}[#{expected[1]}] #{expected[2]} #{expected[3]}[#{expected[4]}] but was unknown to compiler"
end
end
end
diff --git a/spec/unit/pops/evaluator/logical_ops_spec.rb b/spec/unit/pops/evaluator/logical_ops_spec.rb
index e5cdd1f93..d6a179e03 100644
--- a/spec/unit/pops/evaluator/logical_ops_spec.rb
+++ b/spec/unit/pops/evaluator/logical_ops_spec.rb
@@ -1,90 +1,90 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet/pops'
require 'puppet/pops/evaluator/evaluator_impl'
# requiring relative to this spec file (./) does not work because this file is loaded by rspec
require File.join(File.dirname(__FILE__), '/evaluator_rspec_helper')
describe 'Puppet::Pops::Evaluator::EvaluatorImpl' do
include EvaluatorRspecHelper
context "When the evaluator performs boolean operations" do
context "using operator AND" do
it "true && true == true" do
evaluate(literal(true).and(literal(true))).should == true
end
it "false && true == false" do
evaluate(literal(false).and(literal(true))).should == false
end
it "true && false == false" do
evaluate(literal(true).and(literal(false))).should == false
end
it "false && false == false" do
evaluate(literal(false).and(literal(false))).should == false
end
end
context "using operator OR" do
it "true || true == true" do
evaluate(literal(true).or(literal(true))).should == true
end
it "false || true == true" do
evaluate(literal(false).or(literal(true))).should == true
end
it "true || false == true" do
evaluate(literal(true).or(literal(false))).should == true
end
it "false || false == false" do
evaluate(literal(false).or(literal(false))).should == false
end
end
context "using operator NOT" do
it "!false == true" do
evaluate(literal(false).not()).should == true
end
it "!true == false" do
evaluate(literal(true).not()).should == false
end
end
context "on values requiring boxing to Boolean" do
it "'x' == true" do
evaluate(literal('x').not()).should == false
end
- it "'' == false" do
- evaluate(literal('').not()).should == true
+ it "'' == true" do
+ evaluate(literal('').not()).should == false
end
it ":undef == false" do
evaluate(literal(:undef).not()).should == true
end
end
context "connectives should stop when truth is obtained" do
it "true && false && error == false (and no failure)" do
evaluate(literal(false).and(literal('0xwtf') + literal(1)).and(literal(true))).should == false
end
it "false || true || error == true (and no failure)" do
evaluate(literal(true).or(literal('0xwtf') + literal(1)).or(literal(false))).should == true
end
it "false || false || error == error (false positive test)" do
# TODO: Change the exception type
expect {evaluate(literal(true).and(literal('0xwtf') + literal(1)).or(literal(false)))}.to raise_error(Puppet::ParseError)
end
end
end
end
diff --git a/spec/unit/pops/evaluator/variables_spec.rb b/spec/unit/pops/evaluator/variables_spec.rb
index fe93842c4..6c1e9f821 100644
--- a/spec/unit/pops/evaluator/variables_spec.rb
+++ b/spec/unit/pops/evaluator/variables_spec.rb
@@ -1,194 +1,89 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet/pops'
require 'puppet/pops/evaluator/evaluator_impl'
# This file contains basic testing of variable references and assignments
# using a top scope and a local scope.
# It does not test variables and named scopes.
#
# requiring relative to this spec file (./) does not work because this file is loaded by rspec
require File.join(File.dirname(__FILE__), '/evaluator_rspec_helper')
describe 'Puppet::Pops::Impl::EvaluatorImpl' do
include EvaluatorRspecHelper
context "When the evaluator deals with variables" do
context "it should handle" do
it "simple assignment and dereference" do
evaluate_l(block( var('a').set(literal(2)+literal(2)), var('a'))).should == 4
end
it "local scope shadows top scope" do
top_scope_block = block( var('a').set(literal(2)+literal(2)), var('a'))
local_scope_block = block( var('a').set(var('a') + literal(2)), var('a'))
evaluate_l(top_scope_block, local_scope_block).should == 6
end
it "shadowed in local does not affect parent scope" do
top_scope_block = block( var('a').set(literal(2)+literal(2)), var('a'))
local_scope_block = block( var('a').set(var('a') + literal(2)), var('a'))
top_scope_again = var('a')
evaluate_l(top_scope_block, local_scope_block, top_scope_again).should == 4
end
it "access to global names works in top scope" do
top_scope_block = block( var('a').set(literal(2)+literal(2)), var('::a'))
evaluate_l(top_scope_block).should == 4
end
it "access to global names works in local scope" do
top_scope_block = block( var('a').set(literal(2)+literal(2)))
local_scope_block = block( var('a').set(var('::a')+literal(2)), var('::a'))
evaluate_l(top_scope_block, local_scope_block).should == 6
end
it "can not change a variable value in same scope" do
expect { evaluate_l(block(var('a').set(10), var('a').set(20))) }.to raise_error(/Cannot reassign variable a/)
end
- context "-= operations" do
- # Also see collections_ops_spec.rb where delete via - is fully tested, here only the
- # the -= operation itself is tested (there are many combinations)
- #
- it 'deleting from non existing value produces :undef, nil -= ?' do
- top_scope_block = var('b').set([1,2,3])
- local_scope_block = block(var('a').minus_set([4]), fqn('a').var)
- evaluate_l(top_scope_block, local_scope_block).should == :undef
- end
-
- it 'deletes from a list' do
- top_scope_block = var('a').set([1,2,3])
- local_scope_block = block(var('a').minus_set([2]), fqn('a').var())
- evaluate_l(top_scope_block, local_scope_block).should == [1,3]
- end
-
- it 'deletes from a hash' do
- top_scope_block = var('a').set({'a'=>1,'b'=>2,'c'=>3})
- local_scope_block = block(var('a').minus_set('b'), fqn('a').var())
- evaluate_l(top_scope_block, local_scope_block).should == {'a'=>1,'c'=>3}
- end
- end
-
- context "+= operations" do
- # Also see collections_ops_spec.rb where concatenation via + is fully tested
- it "appending to non existing value, nil += []" do
- top_scope_block = var('b').set([1,2,3])
- local_scope_block = var('a').plus_set([4])
- evaluate_l(top_scope_block, local_scope_block).should == [4]
- end
-
- context "appending to list" do
- it "from list, [] += []" do
- top_scope_block = var('a').set([1,2,3])
- local_scope_block = block(var('a').plus_set([4]), fqn('a').var())
- evaluate_l(top_scope_block, local_scope_block).should == [1,2,3,4]
- end
-
- it "from hash, [] += {a=>b}" do
- top_scope_block = var('a').set([1,2,3])
- local_scope_block = block(var('a').plus_set({'a' => 1, 'b'=>2}), fqn('a').var())
- evaluate_l(top_scope_block, local_scope_block).should satisfy {|result|
- # hash in 1.8.7 is not insertion order preserving, hence this hoop
- result == [1,2,3,['a',1],['b',2]] || result == [1,2,3,['b',2],['a',1]]
- }
- end
-
- it "from single value, [] += x" do
- top_scope_block = var('a').set([1,2,3])
- local_scope_block = block(var('a').plus_set(4), fqn('a').var())
- evaluate_l(top_scope_block, local_scope_block).should == [1,2,3,4]
- end
-
- it "from embedded list, [] += [[x]]" do
- top_scope_block = var('a').set([1,2,3])
- local_scope_block = block(var('a').plus_set([[4,5]]), fqn('a').var())
- evaluate_l(top_scope_block, local_scope_block).should == [1,2,3,[4,5]]
- end
- end
-
- context "appending to hash" do
- it "from hash, {a=>b} += {x=>y}" do
- top_scope_block = var('a').set({'a' => 1, 'b' => 2})
- local_scope_block = block(var('a').plus_set({'c' => 3}), fqn('a').var())
- evaluate_l(top_scope_block, local_scope_block) do |scope|
- # Assert no change to top scope hash
- scope['a'].should == {'a' =>1, 'b'=> 2}
- end.should == {'a' => 1, 'b' => 2, 'c' => 3}
- end
-
- it "from list, {a=>b} += ['x', y]" do
- top_scope_block = var('a').set({'a' => 1, 'b' => 2})
- local_scope_block = block(var('a').plus_set(['c', 3]), fqn('a').var())
- evaluate_l(top_scope_block, local_scope_block) do |scope|
- # Assert no change to top scope hash
- scope['a'].should == {'a' =>1, 'b'=> 2}
- end.should == {'a' => 1, 'b' => 2, 'c' => 3}
- end
-
- it "with overwrite from hash, {a=>b} += {a=>c}" do
- top_scope_block = var('a').set({'a' => 1, 'b' => 2})
- local_scope_block = block(var('a').plus_set({'b' => 4, 'c' => 3}),fqn('a').var())
- evaluate_l(top_scope_block, local_scope_block) do |scope|
- # Assert no change to top scope hash
- scope['a'].should == {'a' =>1, 'b'=> 2}
- end.should == {'a' => 1, 'b' => 4, 'c' => 3}
- end
-
- it "with overwrite from list, {a=>b} += ['a', c]" do
- top_scope_block = var('a').set({'a' => 1, 'b' => 2})
- local_scope_block = block(var('a').plus_set(['b', 4, 'c', 3]), fqn('a').var())
- evaluate_l(top_scope_block, local_scope_block) do |scope|
- # Assert no change to topscope hash
- scope['a'].should == {'a' =>1, 'b'=> 2}
- end.should == {'a' => 1, 'b' => 4, 'c' => 3}
- end
-
- it "from odd length array - error" do
- top_scope_block = var('a').set({'a' => 1, 'b' => 2})
- local_scope_block = var('a').plus_set(['b', 4, 'c'])
- expect { evaluate_l(top_scope_block, local_scope_block) }.to raise_error(/Append assignment \+= failed with error: odd number of arguments for Hash/)
- end
- end
- end
-
context "access to numeric variables" do
it "without a match" do
evaluate_l(block(literal(2) + literal(2),
[var(0), var(1), var(2), var(3)])).should == [nil, nil, nil, nil]
end
it "after a match" do
evaluate_l(block(literal('abc') =~ literal(/(a)(b)(c)/),
[var(0), var(1), var(2), var(3)])).should == ['abc', 'a', 'b', 'c']
end
it "after a failed match" do
evaluate_l(block(literal('abc') =~ literal(/(x)(y)(z)/),
[var(0), var(1), var(2), var(3)])).should == [nil, nil, nil, nil]
end
it "a failed match does not alter previous match" do
evaluate_l(block(
literal('abc') =~ literal(/(a)(b)(c)/),
literal('abc') =~ literal(/(x)(y)(z)/),
[var(0), var(1), var(2), var(3)])).should == ['abc', 'a', 'b', 'c']
end
it "a new match completely shadows previous match" do
evaluate_l(block(
literal('abc') =~ literal(/(a)(b)(c)/),
literal('abc') =~ literal(/(a)bc/),
[var(0), var(1), var(2), var(3)])).should == ['abc', 'a', nil, nil]
end
it "after a match with variable referencing a non existing group" do
evaluate_l(block(literal('abc') =~ literal(/(a)(b)(c)/),
[var(0), var(1), var(2), var(3), var(4)])).should == ['abc', 'a', 'b', 'c', nil]
end
end
end
end
end
diff --git a/spec/unit/pops/issues_spec.rb b/spec/unit/pops/issues_spec.rb
index 93f093d61..82dce1b44 100644
--- a/spec/unit/pops/issues_spec.rb
+++ b/spec/unit/pops/issues_spec.rb
@@ -1,26 +1,196 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet/pops'
describe "Puppet::Pops::Issues" do
include Puppet::Pops::Issues
it "should have an issue called NAME_WITH_HYPHEN" do
x = Puppet::Pops::Issues::NAME_WITH_HYPHEN
x.class.should == Puppet::Pops::Issues::Issue
x.issue_code.should == :NAME_WITH_HYPHEN
end
it "should should format a message that requires an argument" do
x = Puppet::Pops::Issues::NAME_WITH_HYPHEN
x.format(:name => 'Boo-Hoo',
:label => Puppet::Pops::Model::ModelLabelProvider.new,
:semantic => "dummy"
).should == "A String may not have a name containing a hyphen. The name 'Boo-Hoo' is not legal"
end
it "should should format a message that does not require an argument" do
x = Puppet::Pops::Issues::NOT_TOP_LEVEL
x.format().should == "Classes, definitions, and nodes may only appear at toplevel or inside other classes"
end
+
+end
+
+describe "Puppet::Pops::IssueReporter" do
+
+ let(:acceptor) { Puppet::Pops::Validation::Acceptor.new }
+
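+ # Illustrative note (not part of the original change): a minimal stand-in for a
+ # positioned source element; it only needs to respond to :line and :pos for the
+ # Diagnostic instances built below.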
+ def fake_positioned(number)
+ stub("positioned_#{number}", :line => number, :pos => number)
+ end
+
+ def diagnostic(severity, number)
+ Puppet::Pops::Validation::Diagnostic.new(
+ severity,
+ Puppet::Pops::Issues::Issue.new(number) { "#{severity}#{number}" },
+ "#{severity}file",
+ fake_positioned(number))
+ end
+
+ def warning(number)
+ diagnostic(:warning, number)
+ end
+
+ def deprecation(number)
+ diagnostic(:deprecation, number)
+ end
+
+ def error(number)
+ diagnostic(:error, number)
+ end
+
+ context "given warnings" do
+
+ before(:each) do
+ acceptor.accept( warning(1) )
+ acceptor.accept( deprecation(1) )
+ end
+
+ it "emits warnings if told to emit them" do
+ Puppet.expects(:warning).twice.with(regexp_matches(/warning1|deprecation1/))
+ Puppet::Pops::IssueReporter.assert_and_report(acceptor, { :emit_warnings => true })
+ end
+
+ it "does not emit warnings if not told to emit them" do
+ Puppet.expects(:warning).never
+ Puppet::Pops::IssueReporter.assert_and_report(acceptor, {})
+ end
+
+ it "emits no warnings if :max_warnings is 0" do
+ acceptor.accept( warning(2) )
+ Puppet[:max_warnings] = 0
+ Puppet.expects(:warning).once.with(regexp_matches(/deprecation1/))
+ Puppet::Pops::IssueReporter.assert_and_report(acceptor, { :emit_warnings => true })
+ end
+
+ it "emits no more than 1 warning if :max_warnings is 1" do
+ acceptor.accept( warning(2) )
+ acceptor.accept( warning(3) )
+ Puppet[:max_warnings] = 1
+ Puppet.expects(:warning).twice.with(regexp_matches(/warning1|deprecation1/))
+ Puppet::Pops::IssueReporter.assert_and_report(acceptor, { :emit_warnings => true })
+ end
+
+ it "does not emit more deprecations warnings than the max deprecation warnings" do
+ acceptor.accept( deprecation(2) )
+ Puppet[:max_deprecations] = 0
+ Puppet.expects(:warning).once.with(regexp_matches(/warning1/))
+ Puppet::Pops::IssueReporter.assert_and_report(acceptor, { :emit_warnings => true })
+ end
+
+ it "does not emit deprecation warnings, but does emit regular warnings if disable_warnings includes deprecations" do
+ Puppet[:disable_warnings] = 'deprecations'
+ Puppet.expects(:warning).once.with(regexp_matches(/warning1/))
+ Puppet::Pops::IssueReporter.assert_and_report(acceptor, { :emit_warnings => true })
+ end
+ end
+
+ context "given errors" do
+ it "logs nothing, but raises the given :message if :emit_errors is repressing error logging" do
+ acceptor.accept( error(1) )
+ Puppet.expects(:err).never
+ expect do
+ Puppet::Pops::IssueReporter.assert_and_report(acceptor, { :emit_errors => false, :message => 'special'})
+ end.to raise_error(Puppet::ParseError, 'special')
+ end
+
+ it "prefixes :message if a single error is raised" do
+ acceptor.accept( error(1) )
+ Puppet.expects(:err).never
+ expect do
+ Puppet::Pops::IssueReporter.assert_and_report(acceptor, { :message => 'special'})
+ end.to raise_error(Puppet::ParseError, /special error1/)
+ end
+
+ it "logs nothing and raises immediately if there is only one error" do
+ acceptor.accept( error(1) )
+ Puppet.expects(:err).never
+ expect do
+ Puppet::Pops::IssueReporter.assert_and_report(acceptor, { })
+ end.to raise_error(Puppet::ParseError, /error1/)
+ end
+
+ it "logs nothing and raises immediately if there are multiple errors but max_errors is 0" do
+ acceptor.accept( error(1) )
+ acceptor.accept( error(2) )
+ Puppet[:max_errors] = 0
+ Puppet.expects(:err).never
+ expect do
+ Puppet::Pops::IssueReporter.assert_and_report(acceptor, { })
+ end.to raise_error(Puppet::ParseError, /error1/)
+ end
+
+ it "logs the :message if there is more than one allowed error" do
+ acceptor.accept( error(1) )
+ acceptor.accept( error(2) )
+ Puppet.expects(:err).times(3).with(regexp_matches(/error1|error2|special/))
+ expect do
+ Puppet::Pops::IssueReporter.assert_and_report(acceptor, { :message => 'special'})
+ end.to raise_error(Puppet::ParseError, /Giving up/)
+ end
+
+ it "emits accumulated errors before raising a 'giving up' message if there are more errors than allowed" do
+ acceptor.accept( error(1) )
+ acceptor.accept( error(2) )
+ acceptor.accept( error(3) )
+ Puppet[:max_errors] = 2
+ Puppet.expects(:err).times(2).with(regexp_matches(/error1|error2/))
+ expect do
+ Puppet::Pops::IssueReporter.assert_and_report(acceptor, { })
+ end.to raise_error(Puppet::ParseError, /3 errors.*Giving up/)
+ end
+
+ it "emits accumulated errors before raising a 'giving up' message if there are multiple errors but fewer than limits" do
+ acceptor.accept( error(1) )
+ acceptor.accept( error(2) )
+ acceptor.accept( error(3) )
+ Puppet[:max_errors] = 4
+ Puppet.expects(:err).times(3).with(regexp_matches(/error[123]/))
+ expect do
+ Puppet::Pops::IssueReporter.assert_and_report(acceptor, { })
+ end.to raise_error(Puppet::ParseError, /3 errors.*Giving up/)
+ end
+
+ it "emits errors regardless of disable_warnings setting" do
+ acceptor.accept( error(1) )
+ acceptor.accept( error(2) )
+ Puppet[:disable_warnings] = 'deprecations'
+ Puppet.expects(:err).times(2).with(regexp_matches(/error1|error2/))
+ expect do
+ Puppet::Pops::IssueReporter.assert_and_report(acceptor, { })
+ end.to raise_error(Puppet::ParseError, /Giving up/)
+ end
+ end
+
+ context "given both" do
+
+ it "logs warnings and errors" do
+ acceptor.accept( warning(1) )
+ acceptor.accept( error(1) )
+ acceptor.accept( error(2) )
+ acceptor.accept( error(3) )
+ acceptor.accept( deprecation(1) )
+ Puppet[:max_errors] = 2
+ Puppet.expects(:warning).twice.with(regexp_matches(/warning1|deprecation1/))
+ Puppet.expects(:err).times(2).with(regexp_matches(/error[123]/))
+ expect do
+ Puppet::Pops::IssueReporter.assert_and_report(acceptor, { :emit_warnings => true })
+ end.to raise_error(Puppet::ParseError, /3 errors.*2 warnings.*Giving up/)
+ end
+ end
end
diff --git a/spec/unit/pops/loaders/dependency_loader_spec.rb b/spec/unit/pops/loaders/dependency_loader_spec.rb
index dbea5b208..cbdefe897 100644
--- a/spec/unit/pops/loaders/dependency_loader_spec.rb
+++ b/spec/unit/pops/loaders/dependency_loader_spec.rb
@@ -1,44 +1,61 @@
require 'spec_helper'
require 'puppet_spec/files'
require 'puppet/pops'
require 'puppet/loaders'
describe 'dependency loader' do
include PuppetSpec::Files
let(:static_loader) { Puppet::Pops::Loader::StaticLoader.new() }
let(:loaders) { Puppet::Pops::Loaders.new(Puppet::Node::Environment.create(:testing, [])) }
describe 'FileBased module loader' do
it 'loading something in the global name space raises an error' do
module_dir = dir_containing('testmodule', {
'lib' => { 'puppet' => { 'functions' => { 'testmodule' => {
'foo.rb' => 'Puppet::Functions.create_function("foo") { def foo; end; }'
}}}}})
module_loader = Puppet::Pops::Loader::ModuleLoaders::FileBased.new(static_loader, loaders, 'testmodule', module_dir, 'test1')
dep_loader = Puppet::Pops::Loader::DependencyLoader.new(static_loader, 'test-dep', [module_loader])
expect do
dep_loader.load_typed(typed_name(:function, 'testmodule::foo')).value
end.to raise_error(ArgumentError, /produced mis-matched name, expected 'testmodule::foo', got foo/)
end
it 'can load something in a qualified name space' do
module_dir = dir_containing('testmodule', {
'lib' => { 'puppet' => { 'functions' => { 'testmodule' => {
'foo.rb' => 'Puppet::Functions.create_function("testmodule::foo") { def foo; end; }'
}}}}})
module_loader = Puppet::Pops::Loader::ModuleLoaders::FileBased.new(static_loader, loaders, 'testmodule', module_dir, 'test1')
dep_loader = Puppet::Pops::Loader::DependencyLoader.new(static_loader, 'test-dep', [module_loader])
function = dep_loader.load_typed(typed_name(:function, 'testmodule::foo')).value
expect(function.class.name).to eq('testmodule::foo')
expect(function.is_a?(Puppet::Functions::Function)).to eq(true)
end
+
+ it 'can load something in a qualified name space more than once' do
+ module_dir = dir_containing('testmodule', {
+ 'lib' => { 'puppet' => { 'functions' => { 'testmodule' => {
+ 'foo.rb' => 'Puppet::Functions.create_function("testmodule::foo") { def foo; end; }'
+ }}}}})
+ module_loader = Puppet::Pops::Loader::ModuleLoaders::FileBased.new(static_loader, loaders, 'testmodule', module_dir, 'test1')
+ dep_loader = Puppet::Pops::Loader::DependencyLoader.new(static_loader, 'test-dep', [module_loader])
+
+ function = dep_loader.load_typed(typed_name(:function, 'testmodule::foo')).value
+ expect(function.class.name).to eq('testmodule::foo')
+ expect(function.is_a?(Puppet::Functions::Function)).to eq(true)
+
+ function = dep_loader.load_typed(typed_name(:function, 'testmodule::foo')).value
+ expect(function.class.name).to eq('testmodule::foo')
+ expect(function.is_a?(Puppet::Functions::Function)).to eq(true)
+ end
end
def typed_name(type, name)
Puppet::Pops::Loader::Loader::TypedName.new(type, name)
end
end
diff --git a/spec/unit/pops/loaders/loader_paths_spec.rb b/spec/unit/pops/loaders/loader_paths_spec.rb
index 2929fe7f8..2cd565a8c 100644
--- a/spec/unit/pops/loaders/loader_paths_spec.rb
+++ b/spec/unit/pops/loaders/loader_paths_spec.rb
@@ -1,66 +1,55 @@
require 'spec_helper'
require 'puppet_spec/files'
require 'puppet/pops'
require 'puppet/loaders'
describe 'loader paths' do
include PuppetSpec::Files
- before(:each) { Puppet[:biff] = true }
let(:static_loader) { Puppet::Pops::Loader::StaticLoader.new() }
let(:unused_loaders) { nil }
describe 'the relative_path_for_types method' do
it 'produces paths to load in precedence order' do
module_dir = dir_containing('testmodule', {
'functions' => {},
'lib' => {
'puppet' => {
'functions' => {},
- 'parser' => {
- 'functions' => {},
- }
}}})
module_loader = Puppet::Pops::Loader::ModuleLoaders::FileBased.new(static_loader, unused_loaders, 'testmodule', module_dir, 'test1')
effective_paths = Puppet::Pops::Loader::LoaderPaths.relative_paths_for_type(:function, module_loader)
expect(effective_paths.collect(&:generic_path)).to eq([
- File.join(module_dir, 'lib', 'puppet', 'functions'), # 4x functions
- File.join(module_dir, 'lib', 'puppet','parser', 'functions') # 3x functions
+ File.join(module_dir, 'lib', 'puppet', 'functions')
])
end
it 'module loader has smart-paths that prune unavailable paths' do
module_dir = dir_containing('testmodule', {'lib' => {'puppet' => {'functions' => {'foo.rb' => 'Puppet::Functions.create_function("testmodule::foo") { def foo; end; }' }}}})
module_loader = Puppet::Pops::Loader::ModuleLoaders::FileBased.new(static_loader, unused_loaders, 'testmodule', module_dir, 'test1')
effective_paths = module_loader.smart_paths.effective_paths(:function)
expect(effective_paths.size).to be_eql(1)
expect(effective_paths[0].generic_path).to be_eql(File.join(module_dir, 'lib', 'puppet', 'functions'))
- expect(module_loader.path_index.size).to be_eql(1)
- expect(module_loader.path_index.include?(File.join(module_dir, 'lib', 'puppet', 'functions', 'foo.rb'))).to be(true)
end
it 'all function smart-paths produce entries if they exist' do
module_dir = dir_containing('testmodule', {
'lib' => {
'puppet' => {
'functions' => {'foo4x.rb' => 'ignored in this test'},
- 'parser' => {
- 'functions' => {'foo3x.rb' => 'ignored in this test'},
- }
}}})
module_loader = Puppet::Pops::Loader::ModuleLoaders::FileBased.new(static_loader, unused_loaders, 'testmodule', module_dir, 'test1')
effective_paths = module_loader.smart_paths.effective_paths(:function)
- expect(effective_paths.size).to eq(2)
- expect(module_loader.path_index.size).to eq(2)
+ expect(effective_paths.size).to eq(1)
+ expect(module_loader.path_index.size).to eq(1)
path_index = module_loader.path_index
- expect(path_index.include?(File.join(module_dir, 'lib', 'puppet', 'functions', 'foo4x.rb'))).to eq(true)
- expect(path_index.include?(File.join(module_dir, 'lib', 'puppet', 'parser', 'functions', 'foo3x.rb'))).to eq(true)
+ expect(path_index).to include(File.join(module_dir, 'lib', 'puppet', 'functions', 'foo4x.rb'))
end
end
end
diff --git a/spec/unit/pops/loaders/loaders_spec.rb b/spec/unit/pops/loaders/loaders_spec.rb
index daa9c716c..831236698 100644
--- a/spec/unit/pops/loaders/loaders_spec.rb
+++ b/spec/unit/pops/loaders/loaders_spec.rb
@@ -1,105 +1,125 @@
require 'spec_helper'
require 'puppet_spec/files'
require 'puppet/pops'
require 'puppet/loaders'
+describe 'loader helper classes' do
+ it 'NamedEntry holds values and is frozen' do
+ ne = Puppet::Pops::Loader::Loader::NamedEntry.new('name', 'value', 'origin')
+ expect(ne.frozen?).to be_true
+ expect(ne.typed_name).to eql('name')
+ expect(ne.origin).to eq('origin')
+ expect(ne.value).to eq('value')
+ end
+
+ it 'TypedName holds values and is frozen' do
+ tn = Puppet::Pops::Loader::Loader::TypedName.new(:function, '::foo::bar')
+ expect(tn.frozen?).to be_true
+ expect(tn.type).to eq(:function)
+ expect(tn.name_parts).to eq(['foo', 'bar'])
+ expect(tn.name).to eq('foo::bar')
+ expect(tn.qualified).to be_true
+ end
+end
+
describe 'loaders' do
include PuppetSpec::Files
let(:module_without_metadata) { File.join(config_dir('wo_metadata_module'), 'modules') }
let(:module_with_metadata) { File.join(config_dir('single_module'), 'modules') }
let(:dependent_modules_with_metadata) { config_dir('dependent_modules_with_metadata') }
let(:empty_test_env) { environment_for() }
# Loaders caches the puppet_system_loader, so it must be reset between tests
before(:each) { Puppet::Pops::Loaders.clear() }
it 'creates a puppet_system loader' do
loaders = Puppet::Pops::Loaders.new(empty_test_env)
expect(loaders.puppet_system_loader()).to be_a(Puppet::Pops::Loader::ModuleLoaders::FileBased)
end
it 'creates an environment loader' do
loaders = Puppet::Pops::Loaders.new(empty_test_env)
expect(loaders.public_environment_loader()).to be_a(Puppet::Pops::Loader::SimpleEnvironmentLoader)
expect(loaders.public_environment_loader().to_s).to eql("(SimpleEnvironmentLoader 'environment:*test*')")
expect(loaders.private_environment_loader()).to be_a(Puppet::Pops::Loader::DependencyLoader)
expect(loaders.private_environment_loader().to_s).to eql("(DependencyLoader 'environment' [])")
end
- it 'can load 3x system functions' do
- Puppet[:biff] = true
- loaders = Puppet::Pops::Loaders.new(empty_test_env)
- puppet_loader = loaders.puppet_system_loader()
-
- function = puppet_loader.load_typed(typed_name(:function, 'sprintf')).value
-
- expect(function.class.name).to eq('sprintf')
- expect(function).to be_a(Puppet::Functions::Function)
- end
-
it 'can load a function using a qualified or unqualified name from a module with metadata' do
loaders = Puppet::Pops::Loaders.new(environment_for(module_with_metadata))
modulea_loader = loaders.public_loader_for_module('modulea')
unqualified_function = modulea_loader.load_typed(typed_name(:function, 'rb_func_a')).value
qualified_function = modulea_loader.load_typed(typed_name(:function, 'modulea::rb_func_a')).value
expect(unqualified_function).to be_a(Puppet::Functions::Function)
expect(qualified_function).to be_a(Puppet::Functions::Function)
expect(unqualified_function.class.name).to eq('rb_func_a')
expect(qualified_function.class.name).to eq('modulea::rb_func_a')
end
it 'can load a function with a qualified name from module without metadata' do
loaders = Puppet::Pops::Loaders.new(environment_for(module_without_metadata))
moduleb_loader = loaders.public_loader_for_module('moduleb')
function = moduleb_loader.load_typed(typed_name(:function, 'moduleb::rb_func_b')).value
expect(function).to be_a(Puppet::Functions::Function)
expect(function.class.name).to eq('moduleb::rb_func_b')
end
it 'cannot load an unqualified function from a module without metadata' do
loaders = Puppet::Pops::Loaders.new(environment_for(module_without_metadata))
moduleb_loader = loaders.public_loader_for_module('moduleb')
expect(moduleb_loader.load_typed(typed_name(:function, 'rb_func_b'))).to be_nil
end
it 'makes all other modules visible to a module without metadata' do
env = environment_for(module_with_metadata, module_without_metadata)
loaders = Puppet::Pops::Loaders.new(env)
moduleb_loader = loaders.private_loader_for_module('moduleb')
function = moduleb_loader.load_typed(typed_name(:function, 'moduleb::rb_func_b')).value
expect(function.call({})).to eql("I am modulea::rb_func_a() + I am moduleb::rb_func_b()")
end
it 'makes dependent modules visible to a module with metadata' do
env = environment_for(dependent_modules_with_metadata)
loaders = Puppet::Pops::Loaders.new(env)
moduleb_loader = loaders.private_loader_for_module('user')
function = moduleb_loader.load_typed(typed_name(:function, 'user::caller')).value
expect(function.call({})).to eql("usee::callee() was told 'passed value' + I am user::caller()")
end
+ it 'can load a function more than once from modules' do
+ env = environment_for(dependent_modules_with_metadata)
+ loaders = Puppet::Pops::Loaders.new(env)
+
+ moduleb_loader = loaders.private_loader_for_module('user')
+ function = moduleb_loader.load_typed(typed_name(:function, 'user::caller')).value
+ expect(function.call({})).to eql("usee::callee() was told 'passed value' + I am user::caller()")
+
+ function = moduleb_loader.load_typed(typed_name(:function, 'user::caller')).value
+ expect(function.call({})).to eql("usee::callee() was told 'passed value' + I am user::caller()")
+ end
+
def environment_for(*module_paths)
Puppet::Node::Environment.create(:'*test*', module_paths, '')
end
def typed_name(type, name)
Puppet::Pops::Loader::Loader::TypedName.new(type, name)
end
def config_dir(config_name)
my_fixture(config_name)
end
end
diff --git a/spec/unit/pops/loaders/module_loaders_spec.rb b/spec/unit/pops/loaders/module_loaders_spec.rb
index 9d0c1a86d..058e765de 100644
--- a/spec/unit/pops/loaders/module_loaders_spec.rb
+++ b/spec/unit/pops/loaders/module_loaders_spec.rb
@@ -1,119 +1,90 @@
require 'spec_helper'
require 'puppet_spec/files'
require 'puppet/pops'
require 'puppet/loaders'
describe 'FileBased module loader' do
include PuppetSpec::Files
let(:static_loader) { Puppet::Pops::Loader::StaticLoader.new() }
let(:loaders) { Puppet::Pops::Loaders.new(Puppet::Node::Environment.create(:testing, [])) }
it 'can load a 4x function API ruby function in global name space' do
module_dir = dir_containing('testmodule', {
'lib' => {
'puppet' => {
'functions' => {
'foo4x.rb' => <<-CODE
Puppet::Functions.create_function(:foo4x) do
def foo4x()
'yay'
end
end
CODE
}
}
}
})
module_loader = Puppet::Pops::Loader::ModuleLoaders::FileBased.new(static_loader, loaders, 'testmodule', module_dir, 'test1')
function = module_loader.load_typed(typed_name(:function, 'foo4x')).value
expect(function.class.name).to eq('foo4x')
expect(function.is_a?(Puppet::Functions::Function)).to eq(true)
end
it 'can load a 4x function API ruby function in qualified name space' do
module_dir = dir_containing('testmodule', {
'lib' => {
'puppet' => {
'functions' => {
'testmodule' => {
'foo4x.rb' => <<-CODE
Puppet::Functions.create_function('testmodule::foo4x') do
def foo4x()
'yay'
end
end
CODE
}
}
}
}})
module_loader = Puppet::Pops::Loader::ModuleLoaders::FileBased.new(static_loader, loaders, 'testmodule', module_dir, 'test1')
function = module_loader.load_typed(typed_name(:function, 'testmodule::foo4x')).value
expect(function.class.name).to eq('testmodule::foo4x')
expect(function.is_a?(Puppet::Functions::Function)).to eq(true)
end
it 'makes parent loader win over entries in child' do
module_dir = dir_containing('testmodule', {
'lib' => { 'puppet' => { 'functions' => { 'testmodule' => {
'foo.rb' => <<-CODE
Puppet::Functions.create_function('testmodule::foo') do
def foo()
'yay'
end
end
CODE
}}}}})
module_loader = Puppet::Pops::Loader::ModuleLoaders::FileBased.new(static_loader, loaders, 'testmodule', module_dir, 'test1')
module_dir2 = dir_containing('testmodule2', {
'lib' => { 'puppet' => { 'functions' => { 'testmodule2' => {
'foo.rb' => <<-CODE
raise "should not get here"
CODE
}}}}})
module_loader2 = Puppet::Pops::Loader::ModuleLoaders::FileBased.new(module_loader, loaders, 'testmodule2', module_dir2, 'test2')
function = module_loader2.load_typed(typed_name(:function, 'testmodule::foo')).value
expect(function.class.name).to eq('testmodule::foo')
expect(function.is_a?(Puppet::Functions::Function)).to eq(true)
end
- context 'when delegating 3x to 4x' do
- before(:each) { Puppet[:biff] = true }
-
- it 'can load a 3x function API ruby function in global name space' do
- module_dir = dir_containing('testmodule', {
- 'lib' => {
- 'puppet' => {
- 'parser' => {
- 'functions' => {
- 'foo3x.rb' => <<-CODE
- Puppet::Parser::Functions::newfunction(
- :foo3x, :type => :rvalue,
- :arity => 1
- ) do |args|
- args[0]
- end
- CODE
- }
- }
- }
- }})
-
- module_loader = Puppet::Pops::Loader::ModuleLoaders::FileBased.new(static_loader, loaders, 'testmodule', module_dir, 'test1')
- function = module_loader.load_typed(typed_name(:function, 'foo3x')).value
- expect(function.class.name).to eq('foo3x')
- expect(function.is_a?(Puppet::Functions::Function)).to eq(true)
- end
- end
-
def typed_name(type, name)
Puppet::Pops::Loader::Loader::TypedName.new(type, name)
end
end
diff --git a/spec/unit/pops/loaders/static_loader_spec.rb b/spec/unit/pops/loaders/static_loader_spec.rb
index e1a73273e..3d85f4522 100644
--- a/spec/unit/pops/loaders/static_loader_spec.rb
+++ b/spec/unit/pops/loaders/static_loader_spec.rb
@@ -1,46 +1,52 @@
require 'spec_helper'
require 'puppet/pops'
require 'puppet/loaders'
describe 'the static loader' do
it 'has no parent' do
expect(Puppet::Pops::Loader::StaticLoader.new.parent).to be(nil)
end
it 'identifies itself in string form' do
expect(Puppet::Pops::Loader::StaticLoader.new.to_s).to be_eql('(StaticLoader)')
end
it 'supports the Loader API' do
# It may produce things later; this just tests that calls work as they should - for now all lookups are nil.
loader = Puppet::Pops::Loader::StaticLoader.new()
a_typed_name = typed_name(:function, 'foo')
expect(loader[a_typed_name]).to be(nil)
expect(loader.load_typed(a_typed_name)).to be(nil)
expect(loader.find(a_typed_name)).to be(nil)
end
context 'provides access to logging functions' do
let(:loader) { loader = Puppet::Pops::Loader::StaticLoader.new() }
# Ensure all logging functions produce output
before(:each) { Puppet::Util::Log.level = :debug }
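# Illustrative usage (mirrors the expectations below) for any level, e.g. :notice:
#   loader.load(:function, :notice).call({}, 'yay').to_s   # => 'yay'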
Puppet::Util::Log.levels.each do |level|
it "defines the function #{level.to_s}" do
expect(loader.load(:function, level).class.name).to eql(level.to_s)
end
it "and #{level.to_s} can be called" do
expect(loader.load(:function, level).call({}, 'yay').to_s).to eql('yay')
end
it "uses the evaluator to format output" do
expect(loader.load(:function, level).call({}, ['yay', 'surprise']).to_s).to eql('[yay, surprise]')
end
+
+ it 'outputs name of source (scope) by passing it to the Log utility' do
+ the_scope = {}
+ Puppet::Util::Log.any_instance.expects(:source=).with(the_scope)
+ loader.load(:function, level).call(the_scope, 'x')
+ end
end
end
def typed_name(type, name)
Puppet::Pops::Loader::Loader::TypedName.new(type, name)
end
end
\ No newline at end of file
diff --git a/spec/unit/pops/parser/epp_parser_spec.rb b/spec/unit/pops/parser/epp_parser_spec.rb
index 0db4ba7d9..fb32b9ba4 100644
--- a/spec/unit/pops/parser/epp_parser_spec.rb
+++ b/spec/unit/pops/parser/epp_parser_spec.rb
@@ -1,86 +1,115 @@
require 'spec_helper'
require 'puppet/pops'
require File.join(File.dirname(__FILE__), '/../factory_rspec_helper')
module EppParserRspecHelper
include FactoryRspecHelper
def parse(code)
parser = Puppet::Pops::Parser::EppParser.new()
parser.parse_string(code)
end
end
describe "epp parser" do
include EppParserRspecHelper
it "should instantiate an epp parser" do
parser = Puppet::Pops::Parser::EppParser.new()
parser.class.should == Puppet::Pops::Parser::EppParser
end
it "should parse a code string and return a program with epp" do
parser = Puppet::Pops::Parser::EppParser.new()
model = parser.parse_string("Nothing to see here, move along...").current
model.class.should == Puppet::Pops::Model::Program
model.body.class.should == Puppet::Pops::Model::LambdaExpression
model.body.body.class.should == Puppet::Pops::Model::EppExpression
end
context "when facing bad input it reports" do
it "unbalanced tags" do
expect { dump(parse("<% missing end tag")) }.to raise_error(/Unbalanced/)
end
it "abrupt end" do
expect { dump(parse("dum di dum di dum <%")) }.to raise_error(/Unbalanced/)
end
it "nested epp tags" do
expect { dump(parse("<% $a = 10 <% $b = 20 %>%>")) }.to raise_error(/Syntax error/)
end
it "nested epp expression tags" do
expect { dump(parse("<%= 1+1 <%= 2+2 %>%>")) }.to raise_error(/Syntax error/)
end
it "rendering sequence of expressions" do
expect { dump(parse("<%= 1 2 3 %>")) }.to raise_error(/Syntax error/)
end
end
context "handles parsing of" do
it "text (and nothing else)" do
- dump(parse("Hello World")).should == "(lambda (epp (block (render-s 'Hello World'))))"
+ dump(parse("Hello World")).should == [
+ "(lambda (epp (block",
+ " (render-s 'Hello World')",
+ ")))"].join("\n")
end
it "template parameters" do
- dump(parse("<%|$x|%>Hello World")).should == "(lambda (parameters x) (epp (block (render-s 'Hello World'))))"
+ dump(parse("<%|$x|%>Hello World")).should == [
+ "(lambda (parameters x) (epp (block",
+ " (render-s 'Hello World')",
+ ")))"].join("\n")
end
it "template parameters with default" do
- dump(parse("<%|$x='cigar'|%>Hello World")).should == "(lambda (parameters (= x 'cigar')) (epp (block (render-s 'Hello World'))))"
+ dump(parse("<%|$x='cigar'|%>Hello World")).should == [
+ "(lambda (parameters (= x 'cigar')) (epp (block",
+ " (render-s 'Hello World')",
+ ")))"].join("\n")
end
it "template parameters with and without default" do
- dump(parse("<%|$x='cigar', $y|%>Hello World")).should == "(lambda (parameters (= x 'cigar') y) (epp (block (render-s 'Hello World'))))"
+ dump(parse("<%|$x='cigar', $y|%>Hello World")).should == [
+ "(lambda (parameters (= x 'cigar') y) (epp (block",
+ " (render-s 'Hello World')",
+ ")))"].join("\n")
end
it "template parameters + additional setup" do
- dump(parse("<%|$x| $y = 10 %>Hello World")).should == "(lambda (parameters x) (epp (block (= $y 10) (render-s 'Hello World'))))"
+ dump(parse("<%|$x| $y = 10 %>Hello World")).should == [
+ "(lambda (parameters x) (epp (block",
+ " (= $y 10)",
+ " (render-s 'Hello World')",
+ ")))"].join("\n")
end
it "comments" do
- dump(parse("<%#($x='cigar', $y)%>Hello World")).should == "(lambda (epp (block (render-s 'Hello World'))))"
+ dump(parse("<%#($x='cigar', $y)%>Hello World")).should == [
+ "(lambda (epp (block",
+ " (render-s 'Hello World')",
+ ")))"
+ ].join("\n")
end
it "verbatim epp tags" do
- dump(parse("<%% contemplating %%>Hello World")).should == "(lambda (epp (block (render-s '<% contemplating %>Hello World'))))"
+ dump(parse("<%% contemplating %%>Hello World")).should == [
+ "(lambda (epp (block",
+ " (render-s '<% contemplating %>Hello World')",
+ ")))"
+ ].join("\n")
end
it "expressions" do
- dump(parse("We all live in <%= 3.14 - 2.14 %> world")).should ==
- "(lambda (epp (block (render-s 'We all live in ') (render (- 3.14 2.14)) (render-s ' world'))))"
+ dump(parse("We all live in <%= 3.14 - 2.14 %> world")).should == [
+ "(lambda (epp (block",
+ " (render-s 'We all live in ')",
+ " (render (- 3.14 2.14))",
+ " (render-s ' world')",
+ ")))"
+ ].join("\n")
end
end
end
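As a hedged standalone sketch (outside RSpec, using only the classes this spec already exercises), the same EPP parse can be driven and inspected directly:

  require 'puppet'
  require 'puppet/pops'

  parser = Puppet::Pops::Parser::EppParser.new
  model  = parser.parse_string('Hello <%= 1 + 1 %> world').current

  # A parsed template is a Program whose body is a lambda wrapping an EppExpression,
  # matching the structural assertions at the top of this spec.
  model.class            # => Puppet::Pops::Model::Program
  model.body.class       # => Puppet::Pops::Model::LambdaExpression
  model.body.body.class  # => Puppet::Pops::Model::EppExpression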
diff --git a/spec/unit/pops/parser/evaluating_parser_spec.rb b/spec/unit/pops/parser/evaluating_parser_spec.rb
index 8448a5af3..7645b9bc2 100644
--- a/spec/unit/pops/parser/evaluating_parser_spec.rb
+++ b/spec/unit/pops/parser/evaluating_parser_spec.rb
@@ -1,90 +1,89 @@
require 'spec_helper'
require 'puppet/pops'
require 'puppet_spec/pops'
require 'puppet_spec/scope'
describe 'The Evaluating Parser' do
include PuppetSpec::Pops
include PuppetSpec::Scope
let(:acceptor) { Puppet::Pops::Validation::Acceptor.new() }
- let(:diag) { Puppet::Pops::Binder::Hiera2::DiagnosticProducer.new(acceptor) }
let(:scope) { s = create_test_scope_for_node(node); s }
let(:node) { 'node.example.com' }
def quote(x)
Puppet::Pops::Parser::EvaluatingParser.quote(x)
end
def evaluator()
Puppet::Pops::Parser::EvaluatingParser.new()
end
def evaluate(s)
evaluator.evaluate(scope, quote(s))
end
def test(x)
evaluator.evaluate_string(scope, quote(x)).should == x
end
def test_interpolate(x, y)
scope['a'] = 'expansion'
evaluator.evaluate_string(scope, quote(x)).should == y
end
context 'when evaluating' do
it 'should produce an empty string with no change' do
test('')
end
it 'should produce a normal string with no change' do
test('A normal string')
end
it 'should produce a string with newlines with no change' do
test("A\nnormal\nstring")
end
it 'should produce a string with escaped newlines with no change' do
test("A\\nnormal\\nstring")
end
it 'should produce a string containing quotes without change' do
test('This " should remain untouched')
end
it 'should produce a string containing escaped quotes without change' do
test('This \" should remain untouched')
end
it 'should expand ${a} variables' do
test_interpolate('This ${a} was expanded', 'This expansion was expanded')
end
it 'should expand quoted ${a} variables' do
test_interpolate('This "${a}" was expanded', 'This "expansion" was expanded')
end
it 'should not expand escaped ${a}' do
test_interpolate('This \${a} was not expanded', 'This ${a} was not expanded')
end
it 'should expand $a variables' do
test_interpolate('This $a was expanded', 'This expansion was expanded')
end
it 'should expand quoted $a variables' do
test_interpolate('This "$a" was expanded', 'This "expansion" was expanded')
end
it 'should not expand escaped $a' do
test_interpolate('This \$a was not expanded', 'This $a was not expanded')
end
it 'should produce a single space from a \s' do
test_interpolate("\\s", ' ')
end
end
end
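A hedged long-hand variant of the interpolation helpers above, as it would read if written directly in this spec; `scope`, `quote`, and `evaluator` come from the lets and methods defined at the top of the file:

  # Illustrative sketch only: mirrors test_interpolate('This $a was expanded', ...).
  it 'expands $a inside a double quoted string' do
    scope['a'] = 'expansion'
    src = quote('This $a was expanded')        # wraps the source in double quotes
    evaluator.evaluate_string(scope, src).should == 'This expansion was expanded'
  end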
diff --git a/spec/unit/pops/parser/lexer2_spec.rb b/spec/unit/pops/parser/lexer2_spec.rb
index 8ccdc2630..b9a0a916b 100644
--- a/spec/unit/pops/parser/lexer2_spec.rb
+++ b/spec/unit/pops/parser/lexer2_spec.rb
@@ -1,447 +1,464 @@
require 'spec_helper'
require 'matchers/match_tokens2'
require 'puppet/pops'
require 'puppet/pops/parser/lexer2'
module EgrammarLexer2Spec
def tokens_scanned_from(s)
lexer = Puppet::Pops::Parser::Lexer2.new
lexer.string = s
tokens = lexer.fullscan[0..-2]
end
def epp_tokens_scanned_from(s)
lexer = Puppet::Pops::Parser::Lexer2.new
lexer.string = s
tokens = lexer.fullscan_epp[0..-2]
end
end
describe 'Lexer2' do
include EgrammarLexer2Spec
{
- :LBRACK => '[',
+ :LISTSTART => '[',
:RBRACK => ']',
:LBRACE => '{',
:RBRACE => '}',
:LPAREN => '(',
:RPAREN => ')',
:EQUALS => '=',
:ISEQUAL => '==',
:GREATEREQUAL => '>=',
:GREATERTHAN => '>',
:LESSTHAN => '<',
:LESSEQUAL => '<=',
:NOTEQUAL => '!=',
:NOT => '!',
:COMMA => ',',
:DOT => '.',
:COLON => ':',
:AT => '@',
:LLCOLLECT => '<<|',
:RRCOLLECT => '|>>',
:LCOLLECT => '<|',
:RCOLLECT => '|>',
:SEMIC => ';',
:QMARK => '?',
:OTHER => '\\',
:FARROW => '=>',
:PARROW => '+>',
:APPENDS => '+=',
:DELETES => '-=',
:PLUS => '+',
:MINUS => '-',
:DIV => '/',
:TIMES => '*',
:LSHIFT => '<<',
:RSHIFT => '>>',
:MATCH => '=~',
:NOMATCH => '!~',
:IN_EDGE => '->',
:OUT_EDGE => '<-',
:IN_EDGE_SUB => '~>',
:OUT_EDGE_SUB => '<~',
:PIPE => '|',
}.each do |name, string|
it "should lex a token named #{name.to_s}" do
tokens_scanned_from(string).should match_tokens2(name)
end
end
+ it "should lex [ in position after non whitespace as LBRACK" do
+ tokens_scanned_from("a[").should match_tokens2(:NAME, :LBRACK)
+ end
+
{
"case" => :CASE,
"class" => :CLASS,
"default" => :DEFAULT,
"define" => :DEFINE,
# "import" => :IMPORT, # done as a function in egrammar
"if" => :IF,
"elsif" => :ELSIF,
"else" => :ELSE,
"inherits" => :INHERITS,
"node" => :NODE,
"and" => :AND,
"or" => :OR,
"undef" => :UNDEF,
"false" => :BOOLEAN,
"true" => :BOOLEAN,
"in" => :IN,
"unless" => :UNLESS,
}.each do |string, name|
it "should lex a keyword from '#{string}'" do
tokens_scanned_from(string).should match_tokens2(name)
end
end
# TODO: Complete with all edge cases
[ 'A', 'A::B', '::A', '::A::B',].each do |string|
it "should lex a CLASSREF on the form '#{string}'" do
tokens_scanned_from(string).should match_tokens2([:CLASSREF, string])
end
end
# TODO: Complete with all edge cases
[ 'a', 'a::b', '::a', '::a::b',].each do |string|
it "should lex a NAME on the form '#{string}'" do
tokens_scanned_from(string).should match_tokens2([:NAME, string])
end
end
[ 'a-b', 'a--b', 'a-b-c', '_x'].each do |string|
it "should lex a BARE WORD STRING on the form '#{string}'" do
tokens_scanned_from(string).should match_tokens2([:WORD, string])
end
end
[ '_x::y', 'x::_y'].each do |string|
it "should consider the bare word '#{string}' to be a bad NAME" do
expect {
tokens_scanned_from(string)
}.to raise_error(/Illegal fully qualified name/)
end
end
{ '-a' => [:MINUS, :NAME],
'--a' => [:MINUS, :MINUS, :NAME],
'a-' => [:NAME, :MINUS],
'a- b' => [:NAME, :MINUS, :NAME],
'a--' => [:NAME, :MINUS, :MINUS],
'a-$3' => [:NAME, :MINUS, :VARIABLE],
}.each do |source, expected|
it "should lex leading and trailing hyphens from #{source}" do
tokens_scanned_from(source).should match_tokens2(*expected)
end
end
{ 'false'=>false, 'true'=>true}.each do |string, value|
it "should lex a BOOLEAN on the form '#{string}'" do
tokens_scanned_from(string).should match_tokens2([:BOOLEAN, value])
end
end
[ '0', '1', '2982383139'].each do |string|
it "should lex a decimal integer NUMBER on the form '#{string}'" do
tokens_scanned_from(string).should match_tokens2([:NUMBER, string])
end
end
{ ' 1' => '1', '1 ' => '1', ' 1 ' => '1'}.each do |string, value|
it "should lex a NUMBER with surrounding space '#{string}'" do
tokens_scanned_from(string).should match_tokens2([:NUMBER, value])
end
end
[ '0.0', '0.1', '0.2982383139', '29823.235', '10e23', '10e-23', '1.234e23'].each do |string|
it "should lex a decimal floating point NUMBER on the form '#{string}'" do
tokens_scanned_from(string).should match_tokens2([:NUMBER, string])
end
end
[ '00', '01', '0123', '0777'].each do |string|
it "should lex an octal integer NUMBER on the form '#{string}'" do
tokens_scanned_from(string).should match_tokens2([:NUMBER, string])
end
end
[ '0x0', '0x1', '0xa', '0xA', '0xabcdef', '0xABCDEF'].each do |string|
it "should lex an hex integer NUMBER on the form '#{string}'" do
tokens_scanned_from(string).should match_tokens2([:NUMBER, string])
end
end
{ "''" => '',
"'a'" => 'a',
"'a\\'b'" =>"a'b",
"'a\\rb'" =>"a\\rb",
"'a\\nb'" =>"a\\nb",
"'a\\tb'" =>"a\\tb",
"'a\\sb'" =>"a\\sb",
"'a\\$b'" =>"a\\$b",
"'a\\\"b'" =>"a\\\"b",
"'a\\\\b'" =>"a\\b",
"'a\\\\'" =>"a\\",
}.each do |source, expected|
it "should lex a single quoted STRING on the form #{source}" do
tokens_scanned_from(source).should match_tokens2([:STRING, expected])
end
end
+ { "''" => [2, ""],
+ "'a'" => [3, "a"],
+ "'a\\'b'" => [6, "a'b"],
+ }.each do |source, expected|
+ it "should lex a single quoted STRING on the form #{source} as having length #{expected[0]}" do
+ length, value = expected
+ tokens_scanned_from(source).should match_tokens2([:STRING, value, {:line => 1, :pos=>1, :length=> length}])
+ end
+ end
+
{ '""' => '',
'"a"' => 'a',
'"a\'b"' => "a'b",
}.each do |source, expected|
it "should lex a double quoted STRING on the form #{source}" do
tokens_scanned_from(source).should match_tokens2([:STRING, expected])
end
end
{ '"a$x b"' => [[:DQPRE, 'a', {:line => 1, :pos=>1, :length=>2 }],
[:VARIABLE, 'x', {:line => 1, :pos=>3, :length=>2 }],
[:DQPOST, ' b', {:line => 1, :pos=>5, :length=>3 }]],
'"a$x.b"' => [[:DQPRE, 'a', {:line => 1, :pos=>1, :length=>2 }],
[:VARIABLE, 'x', {:line => 1, :pos=>3, :length=>2 }],
[:DQPOST, '.b', {:line => 1, :pos=>5, :length=>3 }]],
'"$x.b"' => [[:DQPRE, '', {:line => 1, :pos=>1, :length=>1 }],
[:VARIABLE, 'x', {:line => 1, :pos=>2, :length=>2 }],
[:DQPOST, '.b', {:line => 1, :pos=>4, :length=>3 }]],
'"a$x"' => [[:DQPRE, 'a', {:line => 1, :pos=>1, :length=>2 }],
[:VARIABLE, 'x', {:line => 1, :pos=>3, :length=>2 }],
[:DQPOST, '', {:line => 1, :pos=>5, :length=>1 }]],
}.each do |source, expected|
it "should lex an interpolated variable 'x' from #{source}" do
tokens_scanned_from(source).should match_tokens2(*expected)
end
end
{ '"$"' => '$',
'"a$"' => 'a$',
'"a$%b"' => "a$%b",
'"a$$"' => "a$$",
'"a$$%"' => "a$$%",
}.each do |source, expected|
it "should lex interpolation including false starts #{source}" do
tokens_scanned_from(source).should match_tokens2([:STRING, expected])
end
end
it "differentiates between foo[x] and foo [x] (whitespace)" do
tokens_scanned_from("$a[1]").should match_tokens2(:VARIABLE, :LBRACK, :NUMBER, :RBRACK)
- tokens_scanned_from("$a [1]").should match_tokens2(:VARIABLE, :LBRACK, :NUMBER, :RBRACK)
+ tokens_scanned_from("$a [1]").should match_tokens2(:VARIABLE, :LISTSTART, :NUMBER, :RBRACK)
tokens_scanned_from("a[1]").should match_tokens2(:NAME, :LBRACK, :NUMBER, :RBRACK)
tokens_scanned_from("a [1]").should match_tokens2(:NAME, :LISTSTART, :NUMBER, :RBRACK)
tokens_scanned_from(" if \n\r\t\nif if ").should match_tokens2(:IF, :IF, :IF)
end
it "skips whitepsace" do
tokens_scanned_from(" if if if ").should match_tokens2(:IF, :IF, :IF)
tokens_scanned_from(" if \n\r\t\nif if ").should match_tokens2(:IF, :IF, :IF)
end
it "skips single line comments" do
tokens_scanned_from("if # comment\nif").should match_tokens2(:IF, :IF)
end
["if /* comment */\nif",
"if /* comment\n */\nif",
"if /*\n comment\n */\nif",
].each do |source|
it "skips multi line comments" do
tokens_scanned_from(source).should match_tokens2(:IF, :IF)
end
end
{ "=~" => [:MATCH, "=~ /./"],
"!~" => [:NOMATCH, "!~ /./"],
"," => [:COMMA, ", /./"],
"(" => [:LPAREN, "( /./"],
- "[" => [:LBRACK, "[ /./"],
+ "[" => [:LISTSTART, "[ /./"],
+ "[" => [[:NAME, :LBRACK], "a[ /./"],
+ "[" => [[:NAME, :LISTSTART], "a [ /./"],
"{" => [:LBRACE, "{ /./"],
"+" => [:PLUS, "+ /./"],
"-" => [:MINUS, "- /./"],
"*" => [:TIMES, "* /./"],
";" => [:SEMIC, "; /./"],
}.each do |token, entry|
it "should lex regexp after '#{token}'" do
- tokens_scanned_from(entry[1]).should match_tokens2(entry[0], :REGEX)
+ expected = [entry[0], :REGEX].flatten
+ tokens_scanned_from(entry[1]).should match_tokens2(*expected)
end
end
it "should lex a simple expression" do
tokens_scanned_from('1 + 1').should match_tokens2([:NUMBER, '1'], :PLUS, [:NUMBER, '1'])
end
{ "1" => ["1 /./", [:NUMBER, :DIV, :DOT, :DIV]],
"'a'" => ["'a' /./", [:STRING, :DIV, :DOT, :DIV]],
"true" => ["true /./", [:BOOLEAN, :DIV, :DOT, :DIV]],
"false" => ["false /./", [:BOOLEAN, :DIV, :DOT, :DIV]],
"/./" => ["/./ /./", [:REGEX, :DIV, :DOT, :DIV]],
"a" => ["a /./", [:NAME, :DIV, :DOT, :DIV]],
"A" => ["A /./", [:CLASSREF, :DIV, :DOT, :DIV]],
")" => [") /./", [:RPAREN, :DIV, :DOT, :DIV]],
"]" => ["] /./", [:RBRACK, :DIV, :DOT, :DIV]],
"|>" => ["|> /./", [:RCOLLECT, :DIV, :DOT, :DIV]],
"|>>" => ["|>> /./", [:RRCOLLECT, :DIV, :DOT, :DIV]],
'"a$a"' => ['"a$a" /./', [:DQPRE, :VARIABLE, :DQPOST, :DIV, :DOT, :DIV]],
}.each do |token, entry|
it "should not lex regexp after '#{token}'" do
tokens_scanned_from(entry[ 0 ]).should match_tokens2(*entry[ 1 ])
end
end
it 'should lex assignment' do
tokens_scanned_from("$a = 10").should match_tokens2([:VARIABLE, "a"], :EQUALS, [:NUMBER, '10'])
end
# TODO: Tricky, and heredoc not supported yet
# it "should not lex regexp after heredoc" do
# tokens_scanned_from("1 / /./").should match_tokens2(:NUMBER, :DIV, :REGEX)
# end
it "should lex regexp at beginning of input" do
tokens_scanned_from(" /./").should match_tokens2(:REGEX)
end
it "should lex regexp right of div" do
tokens_scanned_from("1 / /./").should match_tokens2(:NUMBER, :DIV, :REGEX)
end
context 'when lexer lexes heredoc' do
it 'lexes tag, syntax and escapes, margin and right trim' do
code = <<-CODE
@(END:syntax/t)
Tex\\tt\\n
|- END
CODE
tokens_scanned_from(code).should match_tokens2([:HEREDOC, 'syntax'], :SUBLOCATE, [:STRING, "Tex\tt\\n"])
end
it 'lexes "tag", syntax and escapes, margin, right trim and interpolation' do
code = <<-CODE
@("END":syntax/t)
Tex\\tt\\n$var After
|- END
CODE
tokens_scanned_from(code).should match_tokens2(
[:HEREDOC, 'syntax'],
:SUBLOCATE,
[:DQPRE, "Tex\tt\\n"],
[:VARIABLE, "var"],
[:DQPOST, " After"]
)
end
end
it 'should support unicode characters' do
code = <<-CODE
"x\\u2713y"
CODE
if Puppet::Pops::Parser::Locator::RUBYVER < Puppet::Pops::Parser::Locator::RUBY_1_9_3
# Ruby 1.8.7 reports the multibyte char as several octal characters
tokens_scanned_from(code).should match_tokens2([:STRING, "x\342\234\223y"])
else
# >= Ruby 1.9.3 reports \u
tokens_scanned_from(code).should match_tokens2([:STRING, "x\u2713y"])
end
end
context 'when lexing epp' do
it 'epp can contain just text' do
code = <<-CODE
This is just text
CODE
epp_tokens_scanned_from(code).should match_tokens2(:EPP_START, [:RENDER_STRING, " This is just text\n"])
end
it 'epp can contain text with interpolated rendered expressions' do
code = <<-CODE
This is <%= $x %> just text
CODE
epp_tokens_scanned_from(code).should match_tokens2(
:EPP_START,
[:RENDER_STRING, " This is "],
[:RENDER_EXPR, nil],
[:VARIABLE, "x"],
[:EPP_END, "%>"],
[:RENDER_STRING, " just text\n"]
)
end
it 'epp can contain text with trimmed interpolated rendered expressions' do
code = <<-CODE
This is <%= $x -%> just text
CODE
epp_tokens_scanned_from(code).should match_tokens2(
:EPP_START,
[:RENDER_STRING, " This is "],
[:RENDER_EXPR, nil],
[:VARIABLE, "x"],
[:EPP_END_TRIM, "-%>"],
[:RENDER_STRING, "just text\n"]
)
end
it 'epp can contain text with expressions that are not rendered' do
code = <<-CODE
This is <% $x=10 %> just text
CODE
epp_tokens_scanned_from(code).should match_tokens2(
:EPP_START,
[:RENDER_STRING, " This is "],
[:VARIABLE, "x"],
:EQUALS,
[:NUMBER, "10"],
[:RENDER_STRING, " just text\n"]
)
end
it 'epp can skip leading space in tail text' do
code = <<-CODE
This is <% $x=10 -%>
just text
CODE
epp_tokens_scanned_from(code).should match_tokens2(
:EPP_START,
[:RENDER_STRING, " This is "],
[:VARIABLE, "x"],
:EQUALS,
[:NUMBER, "10"],
[:RENDER_STRING, "just text\n"]
)
end
it 'epp can skip comments' do
code = <<-CODE
This is <% $x=10 -%>
<%# This is an epp comment -%>
just text
CODE
epp_tokens_scanned_from(code).should match_tokens2(
:EPP_START,
[:RENDER_STRING, " This is "],
[:VARIABLE, "x"],
:EQUALS,
[:NUMBER, "10"],
[:RENDER_STRING, "just text\n"]
)
end
it 'epp can escape epp tags' do
code = <<-CODE
This is <% $x=10 -%>
<%% this is escaped epp %%>
CODE
epp_tokens_scanned_from(code).should match_tokens2(
:EPP_START,
[:RENDER_STRING, " This is "],
[:VARIABLE, "x"],
:EQUALS,
[:NUMBER, "10"],
[:RENDER_STRING, "<% this is escaped epp %>\n"]
)
end
end
end
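As a hedged standalone sketch using the same calls as the tokens_scanned_from helper above, the lexer can be driven directly to inspect a token stream:

  require 'puppet'
  require 'puppet/pops'
  require 'puppet/pops/parser/lexer2'

  lexer = Puppet::Pops::Parser::Lexer2.new
  lexer.string = '$a = [1, 2]'

  # fullscan returns token tuples; drop the trailing end-of-input marker,
  # exactly as tokens_scanned_from does with [0..-2].
  lexer.fullscan[0..-2].each { |t| puts t[0] }  # e.g. :VARIABLE, :EQUALS, :LISTSTART, :NUMBER, ...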
diff --git a/spec/unit/pops/parser/lexer_spec.rb b/spec/unit/pops/parser/lexer_spec.rb
deleted file mode 100755
index 40d9b3e51..000000000
--- a/spec/unit/pops/parser/lexer_spec.rb
+++ /dev/null
@@ -1,840 +0,0 @@
-#! /usr/bin/env ruby
-require 'spec_helper'
-
-require 'puppet/pops'
-
-# This is a special matcher to match easily lexer output
-RSpec::Matchers.define :be_like do |*expected|
- match do |actual|
- diffable
- expected.zip(actual).all? { |e,a| !e or a[0] == e or (e.is_a? Array and a[0] == e[0] and (a[1] == e[1] or (a[1].is_a?(Hash) and a[1][:value] == e[1]))) }
- end
-end
-__ = nil
-
-module EgrammarLexerSpec
- def self.tokens_scanned_from(s)
- lexer = Puppet::Pops::Parser::Lexer.new
- lexer.string = s
- tokens = lexer.fullscan[0..-2]
- tokens.map do |t|
- key = t[0]
- options = t[1]
- if options[:locator]
- # unresolved locations needs to be resolved for tests that check positioning
- [key,
- options[:locator].to_location_hash(
- options[:offset],
- options[:end_offset]).merge({:value => options[:value]}) ]
- else
- t
- end
- end
- end
-end
-
-describe Puppet::Pops::Parser::Lexer do
- include EgrammarLexerSpec
-
- describe "when reading strings" do
- before { @lexer = Puppet::Pops::Parser::Lexer.new }
-
- it "should increment the line count for every carriage return in the string" do
- @lexer.string = "'this\nis\natest'"
- @lexer.fullscan[0..-2]
-
- line = @lexer.line
- line.should == 3
- end
-
- it "should not increment the line count for escapes in the string" do
- @lexer.string = "'this\\nis\\natest'"
- @lexer.fullscan[0..-2]
-
- @lexer.line.should == 1
- end
-
- it "should not think the terminator is escaped, when preceeded by an even number of backslashes" do
- @lexer.string = "'here\nis\nthe\nstring\\\\'with\nextra\njunk"
- @lexer.fullscan[0..-2]
-
- @lexer.line.should == 6
- end
-
- {
- 'r' => "\r",
- 'n' => "\n",
- 't' => "\t",
- 's' => " "
- }.each do |esc, expected_result|
- it "should recognize \\#{esc} sequence" do
- @lexer.string = "\\#{esc}'"
- @lexer.slurpstring("'")[0].should == expected_result
- end
- end
- end
-end
-
-describe Puppet::Pops::Parser::Lexer::Token, "when initializing" do
- it "should create a regex if the first argument is a string" do
- Puppet::Pops::Parser::Lexer::Token.new("something", :NAME).regex.should == %r{something}
- end
-
- it "should set the string if the first argument is one" do
- Puppet::Pops::Parser::Lexer::Token.new("something", :NAME).string.should == "something"
- end
-
- it "should set the regex if the first argument is one" do
- Puppet::Pops::Parser::Lexer::Token.new(%r{something}, :NAME).regex.should == %r{something}
- end
-end
-
-describe Puppet::Pops::Parser::Lexer::TokenList do
- before do
- @list = Puppet::Pops::Parser::Lexer::TokenList.new
- end
-
- it "should have a method for retrieving tokens by the name" do
- token = @list.add_token :name, "whatever"
- @list[:name].should equal(token)
- end
-
- it "should have a method for retrieving string tokens by the string" do
- token = @list.add_token :name, "whatever"
- @list.lookup("whatever").should equal(token)
- end
-
- it "should add tokens to the list when directed" do
- token = @list.add_token :name, "whatever"
- @list[:name].should equal(token)
- end
-
- it "should have a method for adding multiple tokens at once" do
- @list.add_tokens "whatever" => :name, "foo" => :bar
- @list[:name].should_not be_nil
- @list[:bar].should_not be_nil
- end
-
- it "should fail to add tokens sharing a name with an existing token" do
- @list.add_token :name, "whatever"
- expect { @list.add_token :name, "whatever" }.to raise_error(ArgumentError)
- end
-
- it "should set provided options on tokens being added" do
- token = @list.add_token :name, "whatever", :skip_text => true
- token.skip_text.should == true
- end
-
- it "should define any provided blocks as a :convert method" do
- token = @list.add_token(:name, "whatever") do "foo" end
- token.convert.should == "foo"
- end
-
- it "should store all string tokens in the :string_tokens list" do
- one = @list.add_token(:name, "1")
- @list.string_tokens.should be_include(one)
- end
-
- it "should store all regex tokens in the :regex_tokens list" do
- one = @list.add_token(:name, %r{one})
- @list.regex_tokens.should be_include(one)
- end
-
- it "should not store string tokens in the :regex_tokens list" do
- one = @list.add_token(:name, "1")
- @list.regex_tokens.should_not be_include(one)
- end
-
- it "should not store regex tokens in the :string_tokens list" do
- one = @list.add_token(:name, %r{one})
- @list.string_tokens.should_not be_include(one)
- end
-
- it "should sort the string tokens inversely by length when asked" do
- one = @list.add_token(:name, "1")
- two = @list.add_token(:other, "12")
- @list.sort_tokens
- @list.string_tokens.should == [two, one]
- end
-end
-
-describe Puppet::Pops::Parser::Lexer::TOKENS do
- before do
- @lexer = Puppet::Pops::Parser::Lexer.new
- end
-
- {
- :LBRACK => '[',
- :RBRACK => ']',
-# :LBRACE => '{',
-# :RBRACE => '}',
- :LPAREN => '(',
- :RPAREN => ')',
- :EQUALS => '=',
- :ISEQUAL => '==',
- :GREATEREQUAL => '>=',
- :GREATERTHAN => '>',
- :LESSTHAN => '<',
- :LESSEQUAL => '<=',
- :NOTEQUAL => '!=',
- :NOT => '!',
- :COMMA => ',',
- :DOT => '.',
- :COLON => ':',
- :AT => '@',
- :LLCOLLECT => '<<|',
- :RRCOLLECT => '|>>',
- :LCOLLECT => '<|',
- :RCOLLECT => '|>',
- :SEMIC => ';',
- :QMARK => '?',
- :BACKSLASH => '\\',
- :FARROW => '=>',
- :PARROW => '+>',
- :APPENDS => '+=',
- :DELETES => '-=',
- :PLUS => '+',
- :MINUS => '-',
- :DIV => '/',
- :TIMES => '*',
- :LSHIFT => '<<',
- :RSHIFT => '>>',
- :MATCH => '=~',
- :NOMATCH => '!~',
- :IN_EDGE => '->',
- :OUT_EDGE => '<-',
- :IN_EDGE_SUB => '~>',
- :OUT_EDGE_SUB => '<~',
- :PIPE => '|',
- }.each do |name, string|
- it "should have a token named #{name.to_s}" do
- Puppet::Pops::Parser::Lexer::TOKENS[name].should_not be_nil
- end
-
- it "should match '#{string}' for the token #{name.to_s}" do
- Puppet::Pops::Parser::Lexer::TOKENS[name].string.should == string
- end
- end
-
- {
- "case" => :CASE,
- "class" => :CLASS,
- "default" => :DEFAULT,
- "define" => :DEFINE,
-# "import" => :IMPORT, # done as a function in egrammar
- "if" => :IF,
- "elsif" => :ELSIF,
- "else" => :ELSE,
- "inherits" => :INHERITS,
- "node" => :NODE,
- "and" => :AND,
- "or" => :OR,
- "undef" => :UNDEF,
- "false" => :FALSE,
- "true" => :TRUE,
- "in" => :IN,
- "unless" => :UNLESS,
- }.each do |string, name|
- it "should have a keyword named #{name.to_s}" do
- Puppet::Pops::Parser::Lexer::KEYWORDS[name].should_not be_nil
- end
-
- it "should have the keyword for #{name.to_s} set to #{string}" do
- Puppet::Pops::Parser::Lexer::KEYWORDS[name].string.should == string
- end
- end
-
- # These tokens' strings don't matter, just that the tokens exist.
- [:STRING, :DQPRE, :DQMID, :DQPOST, :BOOLEAN, :NAME, :NUMBER, :COMMENT, :MLCOMMENT,
- :LBRACE, :RBRACE,
- :RETURN, :SQUOTE, :DQUOTE, :VARIABLE].each do |name|
- it "should have a token named #{name.to_s}" do
- Puppet::Pops::Parser::Lexer::TOKENS[name].should_not be_nil
- end
- end
-end
-
-describe Puppet::Pops::Parser::Lexer::TOKENS[:CLASSREF] do
- before { @token = Puppet::Pops::Parser::Lexer::TOKENS[:CLASSREF] }
-
- it "should match against single upper-case alpha-numeric terms" do
- @token.regex.should =~ "One"
- end
-
- it "should match against upper-case alpha-numeric terms separated by double colons" do
- @token.regex.should =~ "One::Two"
- end
-
- it "should match against many upper-case alpha-numeric terms separated by double colons" do
- @token.regex.should =~ "One::Two::Three::Four::Five"
- end
-
- it "should match against upper-case alpha-numeric terms prefixed by double colons" do
- @token.regex.should =~ "::One"
- end
-end
-
-describe Puppet::Pops::Parser::Lexer::TOKENS[:NAME] do
- before { @token = Puppet::Pops::Parser::Lexer::TOKENS[:NAME] }
-
- it "should match against lower-case alpha-numeric terms" do
- @token.regex.should =~ "one-two"
- end
-
- it "should return itself and the value if the matched term is not a keyword" do
- Puppet::Pops::Parser::Lexer::KEYWORDS.expects(:lookup).returns(nil)
- @token.convert(stub("lexer"), "myval").should == [Puppet::Pops::Parser::Lexer::TOKENS[:NAME], "myval"]
- end
-
- it "should return the keyword token and the value if the matched term is a keyword" do
- keyword = stub 'keyword', :name => :testing
- Puppet::Pops::Parser::Lexer::KEYWORDS.expects(:lookup).returns(keyword)
- @token.convert(stub("lexer"), "myval").should == [keyword, "myval"]
- end
-
- it "should return the BOOLEAN token and 'true' if the matched term is the string 'true'" do
- keyword = stub 'keyword', :name => :TRUE
- Puppet::Pops::Parser::Lexer::KEYWORDS.expects(:lookup).returns(keyword)
- @token.convert(stub('lexer'), "true").should == [Puppet::Pops::Parser::Lexer::TOKENS[:BOOLEAN], true]
- end
-
- it "should return the BOOLEAN token and 'false' if the matched term is the string 'false'" do
- keyword = stub 'keyword', :name => :FALSE
- Puppet::Pops::Parser::Lexer::KEYWORDS.expects(:lookup).returns(keyword)
- @token.convert(stub('lexer'), "false").should == [Puppet::Pops::Parser::Lexer::TOKENS[:BOOLEAN], false]
- end
-
- it "should match against lower-case alpha-numeric terms separated by double colons" do
- @token.regex.should =~ "one::two"
- end
-
- it "should match against many lower-case alpha-numeric terms separated by double colons" do
- @token.regex.should =~ "one::two::three::four::five"
- end
-
- it "should match against lower-case alpha-numeric terms prefixed by double colons" do
- @token.regex.should =~ "::one"
- end
-
- it "should match against nested terms starting with numbers" do
- @token.regex.should =~ "::1one::2two::3three"
- end
-end
-
-describe Puppet::Pops::Parser::Lexer::TOKENS[:NUMBER] do
- before do
- @token = Puppet::Pops::Parser::Lexer::TOKENS[:NUMBER]
- @regex = @token.regex
- end
-
- it "should match against numeric terms" do
- @regex.should =~ "2982383139"
- end
-
- it "should match against float terms" do
- @regex.should =~ "29823.235"
- end
-
- it "should match against hexadecimal terms" do
- @regex.should =~ "0xBEEF0023"
- end
-
- it "should match against float with exponent terms" do
- @regex.should =~ "10e23"
- end
-
- it "should match against float terms with negative exponents" do
- @regex.should =~ "10e-23"
- end
-
- it "should match against float terms with fractional parts and exponent" do
- @regex.should =~ "1.234e23"
- end
-end
-
-describe Puppet::Pops::Parser::Lexer::TOKENS[:COMMENT] do
- before { @token = Puppet::Pops::Parser::Lexer::TOKENS[:COMMENT] }
-
- it "should match against lines starting with '#'" do
- @token.regex.should =~ "# this is a comment"
- end
-
- it "should be marked to get skipped" do
- @token.skip?.should be_true
- end
-
- it "'s block should return the comment without any text" do
- # This is a silly test, the original tested that the comments was processed, but
- # all comments are skipped anyway, and never collected for documentation.
- #
- @token.convert(@lexer,"# this is a comment")[1].should == ""
- end
-end
-
-describe Puppet::Pops::Parser::Lexer::TOKENS[:MLCOMMENT] do
- before do
- @token = Puppet::Pops::Parser::Lexer::TOKENS[:MLCOMMENT]
- @lexer = stub 'lexer', :line => 0
- end
-
- it "should match against lines enclosed with '/*' and '*/'" do
- @token.regex.should =~ "/* this is a comment */"
- end
-
- it "should match multiple lines enclosed with '/*' and '*/'" do
- @token.regex.should =~ """/*
- this is a comment
- */"""
- end
-
-# # TODO: REWRITE THIS TEST TO NOT BE BASED ON INTERNALS
-# it "should increase the lexer current line number by the amount of lines spanned by the comment" do
-# @lexer.expects(:line=).with(2)
-# @token.convert(@lexer, "1\n2\n3")
-# end
-
- it "should not greedily match comments" do
- match = @token.regex.match("/* first */ word /* second */")
- match[1].should == " first "
- end
-
- it "'s block should return the comment without the comment marks" do
- # This is a silly test, the original tested that the comments was processed, but
- # all comments are skipped anyway, and never collected for documentation.
- #
- @lexer.stubs(:line=).with(0)
-
- @token.convert(@lexer,"/* this is a comment */")[1].should == ""
- end
-end
-
-describe Puppet::Pops::Parser::Lexer::TOKENS[:RETURN] do
- before { @token = Puppet::Pops::Parser::Lexer::TOKENS[:RETURN] }
-
- it "should match against carriage returns" do
- @token.regex.should =~ "\n"
- end
-
- it "should be marked to initiate text skipping" do
- @token.skip_text.should be_true
- end
-end
-
-shared_examples_for "handling `-` in standard variable names for egrammar" do |prefix|
- # Watch out - a regex might match a *prefix* on these, not just the whole
- # word, so make sure you don't have false positive or negative results based
- # on that.
- legal = %w{f foo f::b foo::b f::bar foo::bar 3 foo3 3foo}
- illegal = %w{f- f-o -f f::-o f::o- f::o-o}
-
- ["", "::"].each do |global_scope|
- legal.each do |name|
- var = prefix + global_scope + name
- it "should accept #{var.inspect} as a valid variable name" do
- (subject.regex.match(var) || [])[0].should == var
- end
- end
-
- illegal.each do |name|
- var = prefix + global_scope + name
- it "when `variable_with_dash` is disabled it should NOT accept #{var.inspect} as a valid variable name" do
- Puppet[:allow_variables_with_dashes] = false
- (subject.regex.match(var) || [])[0].should_not == var
- end
-
- it "when `variable_with_dash` is enabled it should NOT accept #{var.inspect} as a valid variable name" do
- Puppet[:allow_variables_with_dashes] = true
- (subject.regex.match(var) || [])[0].should_not == var
- end
- end
- end
-end
-
-describe Puppet::Pops::Parser::Lexer::TOKENS[:DOLLAR_VAR] do
- its(:skip_text) { should be_false }
-
- it_should_behave_like "handling `-` in standard variable names for egrammar", '$'
-end
-
-describe Puppet::Pops::Parser::Lexer::TOKENS[:VARIABLE] do
- its(:skip_text) { should be_false }
-
- it_should_behave_like "handling `-` in standard variable names for egrammar", ''
-end
-
-describe "the horrible deprecation / compatibility variables with dashes" do
-
- context "deprecation warnings" do
- before :each do Puppet[:allow_variables_with_dashes] = true end
-
- it "does not warn about a variable without a dash" do
- Puppet.expects(:deprecation_warning).never
-
- EgrammarLexerSpec.tokens_scanned_from('$c').should == [
- [:VARIABLE, {:value=>"c", :line=>1, :pos=>1, :offset=>0, :length=>2}]
- ]
- end
-
- it "does not warn about referencing a class name that contains a dash" do
- Puppet.expects(:deprecation_warning).never
-
- EgrammarLexerSpec.tokens_scanned_from('foo-bar').should == [
- [:NAME, {:value=>"foo-bar", :line=>1, :pos=>1, :offset=>0, :length=>7}]
- ]
- end
- end
-end
-
-
-describe Puppet::Pops::Parser::Lexer,"when lexing strings" do
- {
- %q{'single quoted string')} => [[:STRING,'single quoted string']],
- %q{"double quoted string"} => [[:STRING,'double quoted string']],
- %q{'single quoted string with an escaped "\\'"'} => [[:STRING,'single quoted string with an escaped "\'"']],
- %q{'single quoted string with an escaped "\$"'} => [[:STRING,'single quoted string with an escaped "\$"']],
- %q{'single quoted string with an escaped "\."'} => [[:STRING,'single quoted string with an escaped "\."']],
- %q{'single quoted string with an escaped "\r\n"'} => [[:STRING,'single quoted string with an escaped "\r\n"']],
- %q{'single quoted string with an escaped "\n"'} => [[:STRING,'single quoted string with an escaped "\n"']],
- %q{'single quoted string with an escaped "\\\\"'} => [[:STRING,'single quoted string with an escaped "\\\\"']],
- %q{"string with an escaped '\\"'"} => [[:STRING,"string with an escaped '\"'"]],
- %q{"string with an escaped '\\$'"} => [[:STRING,"string with an escaped '$'"]],
- %Q{"string with a line ending with a backslash: \\\nfoo"} => [[:STRING,"string with a line ending with a backslash: foo"]],
- %q{"string with $v (but no braces)"} => [[:DQPRE,"string with "],[:VARIABLE,'v'],[:DQPOST,' (but no braces)']],
- %q["string with ${v} in braces"] => [[:DQPRE,"string with "],[:VARIABLE,'v'],[:DQPOST,' in braces']],
- %q["string with ${qualified::var} in braces"] => [[:DQPRE,"string with "],[:VARIABLE,'qualified::var'],[:DQPOST,' in braces']],
- %q{"string with $v and $v (but no braces)"} => [[:DQPRE,"string with "],[:VARIABLE,"v"],[:DQMID," and "],[:VARIABLE,"v"],[:DQPOST," (but no braces)"]],
- %q["string with ${v} and ${v} in braces"] => [[:DQPRE,"string with "],[:VARIABLE,"v"],[:DQMID," and "],[:VARIABLE,"v"],[:DQPOST," in braces"]],
- %q["string with ${'a nested single quoted string'} inside it."] => [[:DQPRE,"string with "],[:STRING,'a nested single quoted string'],[:DQPOST,' inside it.']],
- %q["string with ${['an array ',$v2]} in it."] => [[:DQPRE,"string with "],:LBRACK,[:STRING,"an array "],:COMMA,[:VARIABLE,"v2"],:RBRACK,[:DQPOST," in it."]],
- %q{a simple "scanner" test} => [[:NAME,"a"],[:NAME,"simple"], [:STRING,"scanner"],[:NAME,"test"]],
- %q{a simple 'single quote scanner' test} => [[:NAME,"a"],[:NAME,"simple"], [:STRING,"single quote scanner"],[:NAME,"test"]],
- %q{a harder 'a $b \c"'} => [[:NAME,"a"],[:NAME,"harder"], [:STRING,'a $b \c"']],
- %q{a harder "scanner test"} => [[:NAME,"a"],[:NAME,"harder"], [:STRING,"scanner test"]],
- %q{a hardest "scanner \"test\""} => [[:NAME,"a"],[:NAME,"hardest"],[:STRING,'scanner "test"']],
- %Q{a hardestest "scanner \\"test\\"\n"} => [[:NAME,"a"],[:NAME,"hardestest"],[:STRING,%Q{scanner "test"\n}]],
- %q{function("call")} => [[:NAME,"function"],[:LPAREN,"("],[:STRING,'call'],[:RPAREN,")"]],
- %q["string with ${(3+5)/4} nested math."] => [[:DQPRE,"string with "],:LPAREN,[:NAME,"3"],:PLUS,[:NAME,"5"],:RPAREN,:DIV,[:NAME,"4"],[:DQPOST," nested math."]],
- %q["$$$$"] => [[:STRING,"$$$$"]],
- %q["$variable"] => [[:DQPRE,""],[:VARIABLE,"variable"],[:DQPOST,""]],
- %q["$var$other"] => [[:DQPRE,""],[:VARIABLE,"var"],[:DQMID,""],[:VARIABLE,"other"],[:DQPOST,""]],
- %q["foo$bar$"] => [[:DQPRE,"foo"],[:VARIABLE,"bar"],[:DQPOST,"$"]],
- %q["foo$$bar"] => [[:DQPRE,"foo$"],[:VARIABLE,"bar"],[:DQPOST,""]],
- %q[""] => [[:STRING,""]],
- %q["123 456 789 0"] => [[:STRING,"123 456 789 0"]],
- %q["${123} 456 $0"] => [[:DQPRE,""],[:VARIABLE,"123"],[:DQMID," 456 "],[:VARIABLE,"0"],[:DQPOST,""]],
- %q["$foo::::bar"] => [[:DQPRE,""],[:VARIABLE,"foo"],[:DQPOST,"::::bar"]],
- # Keyword variables
- %q["$true"] => [[:DQPRE,""],[:VARIABLE, "true"],[:DQPOST,""]],
- %q["$false"] => [[:DQPRE,""],[:VARIABLE, "false"],[:DQPOST,""]],
- %q["$if"] => [[:DQPRE,""],[:VARIABLE, "if"],[:DQPOST,""]],
- %q["$case"] => [[:DQPRE,""],[:VARIABLE, "case"],[:DQPOST,""]],
- %q["$unless"] => [[:DQPRE,""],[:VARIABLE, "unless"],[:DQPOST,""]],
- %q["$undef"] => [[:DQPRE,""],[:VARIABLE, "undef"],[:DQPOST,""]],
- # Expressions
- %q["${true}"] => [[:DQPRE,""],[:BOOLEAN, true],[:DQPOST,""]],
- %q["${false}"] => [[:DQPRE,""],[:BOOLEAN, false],[:DQPOST,""]],
- %q["${undef}"] => [[:DQPRE,""],:UNDEF,[:DQPOST,""]],
- %q["${if true {false}}"] => [[:DQPRE,""],:IF,[:BOOLEAN, true], :LBRACE, [:BOOLEAN, false], :RBRACE, [:DQPOST,""]],
- %q["${unless true {false}}"] => [[:DQPRE,""],:UNLESS,[:BOOLEAN, true], :LBRACE, [:BOOLEAN, false], :RBRACE, [:DQPOST,""]],
- %q["${case true {true:{false}}}"] => [
- [:DQPRE,""],:CASE,[:BOOLEAN, true], :LBRACE, [:BOOLEAN, true], :COLON, :LBRACE, [:BOOLEAN, false],
- :RBRACE, :RBRACE, [:DQPOST,""]],
- %q[{ "${a}" => 1 }] => [ :LBRACE, [:DQPRE,""], [:VARIABLE,"a"], [:DQPOST,""], :FARROW, [:NAME,"1"], :RBRACE ],
- }.each { |src,expected_result|
- it "should handle #{src} correctly" do
- EgrammarLexerSpec.tokens_scanned_from(src).should be_like(*expected_result)
- end
- }
-end
-
-describe Puppet::Pops::Parser::Lexer::TOKENS[:DOLLAR_VAR] do
- before { @token = Puppet::Pops::Parser::Lexer::TOKENS[:DOLLAR_VAR] }
-
- it "should match against alpha words prefixed with '$'" do
- @token.regex.should =~ '$this_var'
- end
-
- it "should return the VARIABLE token and the variable name stripped of the '$'" do
- @token.convert(stub("lexer"), "$myval").should == [Puppet::Pops::Parser::Lexer::TOKENS[:VARIABLE], "myval"]
- end
-end
-
-describe Puppet::Pops::Parser::Lexer::TOKENS[:REGEX] do
- before { @token = Puppet::Pops::Parser::Lexer::TOKENS[:REGEX] }
-
- it "should match against any expression enclosed in //" do
- @token.regex.should =~ '/this is a regex/'
- end
-
- it 'should not match if there is \n in the regex' do
- @token.regex.should_not =~ "/this is \n a regex/"
- end
-
- describe "when scanning" do
- it "should not consider escaped slashes to be the end of a regex" do
- EgrammarLexerSpec.tokens_scanned_from("$x =~ /this \\/ foo/").should be_like(__,__,[:REGEX,%r{this / foo}])
- end
-
- it "should not lex chained division as a regex" do
- EgrammarLexerSpec.tokens_scanned_from("$x = $a/$b/$c").collect { |name, data| name }.should_not be_include( :REGEX )
- end
-
- it "should accept a regular expression after NODE" do
- EgrammarLexerSpec.tokens_scanned_from("node /www.*\.mysite\.org/").should be_like(__,[:REGEX,Regexp.new("www.*\.mysite\.org")])
- end
-
- it "should accept regular expressions in a CASE" do
- s = %q{case $variable {
- "something": {$othervar = 4096 / 2}
- /regex/: {notice("this notably sucks")}
- }
- }
- EgrammarLexerSpec.tokens_scanned_from(s).should be_like(
- :CASE,:VARIABLE,:LBRACE,:STRING,:COLON,:LBRACE,:VARIABLE,:EQUALS,:NAME,:DIV,:NAME,:RBRACE,[:REGEX,/regex/],:COLON,:LBRACE,:NAME,:LPAREN,:STRING,:RPAREN,:RBRACE,:RBRACE
- )
- end
- end
-
- it "should return the REGEX token and a Regexp" do
- @token.convert(stub("lexer"), "/myregex/").should == [Puppet::Pops::Parser::Lexer::TOKENS[:REGEX], Regexp.new(/myregex/)]
- end
-end
-
-describe Puppet::Pops::Parser::Lexer, "when lexing comments" do
- before { @lexer = Puppet::Pops::Parser::Lexer.new }
-
- it "should skip whitespace before lexing the next token after a non-token" do
- EgrammarLexerSpec.tokens_scanned_from("/* 1\n\n */ \ntest").should be_like([:NAME, "test"])
- end
-end
-
-# FIXME: We need to rewrite all of these tests, but I just don't want to take the time right now.
-describe "Puppet::Pops::Parser::Lexer in the old tests" do
- before { @lexer = Puppet::Pops::Parser::Lexer.new }
-
- it "should do simple lexing" do
- {
- %q{\\} => [[:BACKSLASH,"\\"]],
- %q{simplest scanner test} => [[:NAME,"simplest"],[:NAME,"scanner"],[:NAME,"test"]],
- %Q{returned scanner test\n} => [[:NAME,"returned"],[:NAME,"scanner"],[:NAME,"test"]]
- }.each { |source,expected|
- EgrammarLexerSpec.tokens_scanned_from(source).should be_like(*expected)
- }
- end
-
- it "should fail usefully" do
- expect { EgrammarLexerSpec.tokens_scanned_from('^') }.to raise_error(RuntimeError)
- end
-
- it "should fail if the string is not set" do
- expect { @lexer.fullscan }.to raise_error(Puppet::LexError)
- end
-
- it "should correctly identify keywords" do
- EgrammarLexerSpec.tokens_scanned_from("case").should be_like([:CASE, "case"])
- end
-
- it "should correctly parse class references" do
- %w{Many Different Words A Word}.each { |t| EgrammarLexerSpec.tokens_scanned_from(t).should be_like([:CLASSREF,t])}
- end
-
- # #774
- it "should correctly parse namespaced class refernces token" do
- %w{Foo ::Foo Foo::Bar ::Foo::Bar}.each { |t| EgrammarLexerSpec.tokens_scanned_from(t).should be_like([:CLASSREF, t]) }
- end
-
- it "should correctly parse names" do
- %w{this is a bunch of names}.each { |t| EgrammarLexerSpec.tokens_scanned_from(t).should be_like([:NAME,t]) }
- end
-
- it "should correctly parse names with numerals" do
- %w{1name name1 11names names11}.each { |t| EgrammarLexerSpec.tokens_scanned_from(t).should be_like([:NAME,t]) }
- end
-
- it "should correctly parse empty strings" do
- expect { EgrammarLexerSpec.tokens_scanned_from('$var = ""') }.to_not raise_error
- end
-
- it "should correctly parse virtual resources" do
- EgrammarLexerSpec.tokens_scanned_from("@type {").should be_like([:AT, "@"], [:NAME, "type"], [:LBRACE, "{"])
- end
-
- it "should correctly deal with namespaces" do
- @lexer.string = %{class myclass}
- @lexer.fullscan
- @lexer.namespace.should == "myclass"
-
- @lexer.namepop
- @lexer.namespace.should == ""
-
- @lexer.string = "class base { class sub { class more"
- @lexer.fullscan
- @lexer.namespace.should == "base::sub::more"
-
- @lexer.namepop
- @lexer.namespace.should == "base::sub"
- end
-
- it "should not put class instantiation on the namespace" do
- @lexer.string = "class base { class sub { class { mode"
- @lexer.fullscan
- @lexer.namespace.should == "base::sub"
- end
-
- it "should correctly handle fully qualified names" do
- @lexer.string = "class base { class sub::more {"
- @lexer.fullscan
- @lexer.namespace.should == "base::sub::more"
-
- @lexer.namepop
- @lexer.namespace.should == "base"
- end
-
- it "should correctly lex variables" do
- ["$variable", "$::variable", "$qualified::variable", "$further::qualified::variable"].each do |string|
- EgrammarLexerSpec.tokens_scanned_from(string).should be_like([:VARIABLE,string.sub(/^\$/,'')])
- end
- end
-
- it "should end variables at `-`" do
- EgrammarLexerSpec.tokens_scanned_from('$hyphenated-variable').
- should be_like([:VARIABLE, "hyphenated"], [:MINUS, '-'], [:NAME, 'variable'])
- end
-
- it "should not include whitespace in a variable" do
- EgrammarLexerSpec.tokens_scanned_from("$foo bar").should_not be_like([:VARIABLE, "foo bar"])
- end
- it "should not include excess colons in a variable" do
- EgrammarLexerSpec.tokens_scanned_from("$foo::::bar").should_not be_like([:VARIABLE, "foo::::bar"])
- end
-end
-
-describe "Puppet::Pops::Parser::Lexer in the old tests when lexing example files" do
- my_fixtures('*.pp') do |file|
- it "should correctly lex #{file}" do
- lexer = Puppet::Pops::Parser::Lexer.new
- lexer.file = file
- expect { lexer.fullscan }.to_not raise_error
- end
- end
-end
-
-describe "when trying to lex a non-existent file" do
- include PuppetSpec::Files
-
- it "should return an empty list of tokens" do
- lexer = Puppet::Pops::Parser::Lexer.new
- lexer.file = nofile = tmpfile('lexer')
- Puppet::FileSystem.exist?(nofile).should == false
-
- lexer.fullscan.should == [[false,false]]
- end
-end
-
-describe "when string quotes are not closed" do
- it "should report with message including an \" opening quote" do
- expect { EgrammarLexerSpec.tokens_scanned_from('$var = "') }.to raise_error(/after '"'/)
- end
-
- it "should report with message including an \' opening quote" do
- expect { EgrammarLexerSpec.tokens_scanned_from('$var = \'') }.to raise_error(/after "'"/)
- end
-
- it "should report <eof> if immediately followed by eof" do
- expect { EgrammarLexerSpec.tokens_scanned_from('$var = "') }.to raise_error(/followed by '<eof>'/)
- end
-
- it "should report max 5 chars following quote" do
- expect { EgrammarLexerSpec.tokens_scanned_from('$var = "123456') }.to raise_error(/followed by '12345...'/)
- end
-
- it "should escape control chars" do
- expect { EgrammarLexerSpec.tokens_scanned_from('$var = "12\n3456') }.to raise_error(/followed by '12\\n3...'/)
- end
-
- it "should resport position of opening quote" do
- expect { EgrammarLexerSpec.tokens_scanned_from('$var = "123456') }.to raise_error(/at line 1:8/)
- expect { EgrammarLexerSpec.tokens_scanned_from('$var = "123456') }.to raise_error(/at line 1:9/)
- end
-end
-
-describe "when lexing number, bad input should not go unpunished" do
- it "should slap bad octal as such" do
- expect { EgrammarLexerSpec.tokens_scanned_from('$var = 0778') }.to raise_error(/Not a valid octal/)
- end
-
- it "should slap bad hex as such" do
- expect { EgrammarLexerSpec.tokens_scanned_from('$var = 0xFG') }.to raise_error(/Not a valid hex/)
- expect { EgrammarLexerSpec.tokens_scanned_from('$var = 0xfg') }.to raise_error(/Not a valid hex/)
- end
- # Note, bad decimals are probably impossible to enter, as they are not recognized as complete numbers, instead,
- # the error will be something else, depending on what follows some initial digit.
- #
-end
-
-describe "when lexing interpolation detailed positioning should be correct" do
- it "should correctly position a string without interpolation" do
- EgrammarLexerSpec.tokens_scanned_from('"not interpolated"').should be_like(
- [:STRING, {:value=>"not interpolated", :line=>1, :offset=>0, :pos=>1, :length=>18}])
- end
-
- it "should correctly position a string with false start in interpolation" do
- EgrammarLexerSpec.tokens_scanned_from('"not $$$ rpolated"').should be_like(
- [:STRING, {:value=>"not $$$ rpolated", :line=>1, :offset=>0, :pos=>1, :length=>18}])
- end
-
- it "should correctly position pre-mid-end interpolation " do
- EgrammarLexerSpec.tokens_scanned_from('"pre $x mid $y end"').should be_like(
- [:DQPRE, {:value=>"pre ", :line=>1, :offset=>0, :pos=>1, :length=>6}],
- [:VARIABLE, {:value=>"x", :line=>1, :offset=>6, :pos=>7, :length=>1}],
- [:DQMID, {:value=>" mid ", :line=>1, :offset=>7, :pos=>8, :length=>6}],
- [:VARIABLE, {:value=>"y", :line=>1, :offset=>13, :pos=>14, :length=>1}],
- [:DQPOST, {:value=>" end", :line=>1, :offset=>14, :pos=>15, :length=>5}]
- )
- end
-
- it "should correctly position pre-mid-end interpolation using ${} " do
- EgrammarLexerSpec.tokens_scanned_from('"pre ${x} mid ${y} end"').should be_like(
- [:DQPRE, {:value=>"pre ", :line=>1, :offset=>0, :pos=>1, :length=>7}],
- [:VARIABLE, {:value=>"x", :line=>1, :offset=>7, :pos=>8, :length=>1}],
- [:DQMID, {:value=>" mid ", :line=>1, :offset=>8, :pos=>9, :length=>8}],
- [:VARIABLE, {:value=>"y", :line=>1, :offset=>16, :pos=>17, :length=>1}],
- [:DQPOST, {:value=>" end", :line=>1, :offset=>17, :pos=>18, :length=>6}]
- )
- end
-
- it "should correctly position pre-end interpolation using ${} with f call" do
- EgrammarLexerSpec.tokens_scanned_from('"pre ${x()} end"').should be_like(
- [:DQPRE, {:value=>"pre ", :line=>1, :offset=>0, :pos=>1, :length=>7}],
- [:NAME, {:value=>"x", :line=>1, :offset=>7, :pos=>8, :length=>1}],
- [:LPAREN, {:value=>"(", :line=>1, :offset=>8, :pos=>9, :length=>1}],
- [:RPAREN, {:value=>")", :line=>1, :offset=>9, :pos=>10, :length=>1}],
- [:DQPOST, {:value=>" end", :line=>1, :offset=>10, :pos=>11, :length=>6}]
- )
- end
-
- it "should correctly position pre-end interpolation using ${} with $x" do
- EgrammarLexerSpec.tokens_scanned_from('"pre ${$x} end"').should be_like(
- [:DQPRE, {:value=>"pre ", :line=>1, :offset=>0, :pos=>1, :length=>7}],
- [:VARIABLE, {:value=>"x", :line=>1, :offset=>7, :pos=>8, :length=>2}],
- [:DQPOST, {:value=>" end", :line=>1, :offset=>9, :pos=>10, :length=>6}]
- )
- end
-
- it "should correctly position pre-end interpolation across lines" do
- EgrammarLexerSpec.tokens_scanned_from(%Q["pre ${\n$x} end"]).should be_like(
- [:DQPRE, {:value=>"pre ", :line=>1, :offset=>0, :pos=>1, :length=>7}],
- [:VARIABLE, {:value=>"x", :line=>2, :offset=>8, :pos=>1, :length=>2}],
- [:DQPOST, {:value=>" end", :line=>2, :offset=>10, :pos=>3, :length=>6}]
- )
- end
-
- it "should correctly position interpolation across lines when strings have embedded newlines" do
- EgrammarLexerSpec.tokens_scanned_from(%Q["pre \n\n${$x}\n mid$y"]).should be_like(
- [:DQPRE, {:value=>"pre \n\n", :line=>1, :offset=>0, :pos=>1, :length=>9}],
- [:VARIABLE, {:value=>"x", :line=>3, :offset=>9, :pos=>3, :length=>2}],
- [:DQMID, {:value=>"\n mid", :line=>3, :offset=>11, :pos=>5, :length=>7}],
- [:VARIABLE, {:value=>"y", :line=>4, :offset=>18, :pos=>6, :length=>1}]
- )
- end
-end
diff --git a/spec/unit/pops/parser/parse_basic_expressions_spec.rb b/spec/unit/pops/parser/parse_basic_expressions_spec.rb
index f5aeb2a29..2190e54bb 100644
--- a/spec/unit/pops/parser/parse_basic_expressions_spec.rb
+++ b/spec/unit/pops/parser/parse_basic_expressions_spec.rb
@@ -1,273 +1,278 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet/pops'
# relative to this spec file (./) does not work as this file is loaded by rspec
require File.join(File.dirname(__FILE__), '/parser_rspec_helper')
describe "egrammar parsing basic expressions" do
include ParserRspecHelper
context "When the parser parses arithmetic" do
context "with Integers" do
it "$a = 2 + 2" do; dump(parse("$a = 2 + 2")).should == "(= $a (+ 2 2))" ; end
it "$a = 7 - 3" do; dump(parse("$a = 7 - 3")).should == "(= $a (- 7 3))" ; end
it "$a = 6 * 3" do; dump(parse("$a = 6 * 3")).should == "(= $a (* 6 3))" ; end
it "$a = 6 / 3" do; dump(parse("$a = 6 / 3")).should == "(= $a (/ 6 3))" ; end
it "$a = 6 % 3" do; dump(parse("$a = 6 % 3")).should == "(= $a (% 6 3))" ; end
it "$a = -(6/3)" do; dump(parse("$a = -(6/3)")).should == "(= $a (- (/ 6 3)))" ; end
it "$a = -6/3" do; dump(parse("$a = -6/3")).should == "(= $a (/ (- 6) 3))" ; end
it "$a = 8 >> 1 " do; dump(parse("$a = 8 >> 1")).should == "(= $a (>> 8 1))" ; end
it "$a = 8 << 1 " do; dump(parse("$a = 8 << 1")).should == "(= $a (<< 8 1))" ; end
end
context "with Floats" do
it "$a = 2.2 + 2.2" do; dump(parse("$a = 2.2 + 2.2")).should == "(= $a (+ 2.2 2.2))" ; end
it "$a = 7.7 - 3.3" do; dump(parse("$a = 7.7 - 3.3")).should == "(= $a (- 7.7 3.3))" ; end
it "$a = 6.1 * 3.1" do; dump(parse("$a = 6.1 - 3.1")).should == "(= $a (- 6.1 3.1))" ; end
it "$a = 6.6 / 3.3" do; dump(parse("$a = 6.6 / 3.3")).should == "(= $a (/ 6.6 3.3))" ; end
it "$a = -(6.0/3.0)" do; dump(parse("$a = -(6.0/3.0)")).should == "(= $a (- (/ 6.0 3.0)))" ; end
it "$a = -6.0/3.0" do; dump(parse("$a = -6.0/3.0")).should == "(= $a (/ (- 6.0) 3.0))" ; end
it "$a = 3.14 << 2" do; dump(parse("$a = 3.14 << 2")).should == "(= $a (<< 3.14 2))" ; end
it "$a = 3.14 >> 2" do; dump(parse("$a = 3.14 >> 2")).should == "(= $a (>> 3.14 2))" ; end
end
context "with hex and octal Integer values" do
it "$a = 0xAB + 0xCD" do; dump(parse("$a = 0xAB + 0xCD")).should == "(= $a (+ 0xAB 0xCD))" ; end
it "$a = 0777 - 0333" do; dump(parse("$a = 0777 - 0333")).should == "(= $a (- 0777 0333))" ; end
end
context "with strings requiring boxing to Numeric" do
# Test that numbers in string form do not turn into numbers
it "$a = '2' + '2'" do; dump(parse("$a = '2' + '2'")).should == "(= $a (+ '2' '2'))" ; end
it "$a = '2.2' + '0.2'" do; dump(parse("$a = '2.2' + '0.2'")).should == "(= $a (+ '2.2' '0.2'))" ; end
it "$a = '0xab' + '0xcd'" do; dump(parse("$a = '0xab' + '0xcd'")).should == "(= $a (+ '0xab' '0xcd'))" ; end
it "$a = '0777' + '0333'" do; dump(parse("$a = '0777' + '0333'")).should == "(= $a (+ '0777' '0333'))" ; end
end
context "precedence should be correct" do
it "$a = 1 + 2 * 3" do; dump(parse("$a = 1 + 2 * 3")).should == "(= $a (+ 1 (* 2 3)))"; end
it "$a = 1 + 2 % 3" do; dump(parse("$a = 1 + 2 % 3")).should == "(= $a (+ 1 (% 2 3)))"; end
it "$a = 1 + 2 / 3" do; dump(parse("$a = 1 + 2 / 3")).should == "(= $a (+ 1 (/ 2 3)))"; end
it "$a = 1 + 2 << 3" do; dump(parse("$a = 1 + 2 << 3")).should == "(= $a (<< (+ 1 2) 3))"; end
it "$a = 1 + 2 >> 3" do; dump(parse("$a = 1 + 2 >> 3")).should == "(= $a (>> (+ 1 2) 3))"; end
end
context "parentheses alter precedence" do
it "$a = (1 + 2) * 3" do; dump(parse("$a = (1 + 2) * 3")).should == "(= $a (* (+ 1 2) 3))"; end
it "$a = (1 + 2) / 3" do; dump(parse("$a = (1 + 2) / 3")).should == "(= $a (/ (+ 1 2) 3))"; end
end
end
context "When the evaluator performs boolean operations" do
context "using operators AND OR NOT" do
it "$a = true and true" do; dump(parse("$a = true and true")).should == "(= $a (&& true true))"; end
it "$a = true or true" do; dump(parse("$a = true or true")).should == "(= $a (|| true true))" ; end
it "$a = !true" do; dump(parse("$a = !true")).should == "(= $a (! true))" ; end
end
context "precedence should be correct" do
it "$a = false or true and true" do
dump(parse("$a = false or true and true")).should == "(= $a (|| false (&& true true)))"
end
it "$a = (false or true) and true" do
dump(parse("$a = (false or true) and true")).should == "(= $a (&& (|| false true) true))"
end
it "$a = !true or true and true" do
dump(parse("$a = !false or true and true")).should == "(= $a (|| (! false) (&& true true)))"
end
end
# Possibly change to check of literal expressions
context "on values requiring boxing to Boolean" do
it "'x' == true" do
dump(parse("! 'x'")).should == "(! 'x')"
end
it "'' == false" do
dump(parse("! ''")).should == "(! '')"
end
it ":undef == false" do
dump(parse("! undef")).should == "(! :undef)"
end
end
end
context "When parsing comparisons" do
context "of string values" do
it "$a = 'a' == 'a'" do; dump(parse("$a = 'a' == 'a'")).should == "(= $a (== 'a' 'a'))" ; end
it "$a = 'a' != 'a'" do; dump(parse("$a = 'a' != 'a'")).should == "(= $a (!= 'a' 'a'))" ; end
it "$a = 'a' < 'b'" do; dump(parse("$a = 'a' < 'b'")).should == "(= $a (< 'a' 'b'))" ; end
it "$a = 'a' > 'b'" do; dump(parse("$a = 'a' > 'b'")).should == "(= $a (> 'a' 'b'))" ; end
it "$a = 'a' <= 'b'" do; dump(parse("$a = 'a' <= 'b'")).should == "(= $a (<= 'a' 'b'))" ; end
it "$a = 'a' >= 'b'" do; dump(parse("$a = 'a' >= 'b'")).should == "(= $a (>= 'a' 'b'))" ; end
end
context "of integer values" do
it "$a = 1 == 1" do; dump(parse("$a = 1 == 1")).should == "(= $a (== 1 1))" ; end
it "$a = 1 != 1" do; dump(parse("$a = 1 != 1")).should == "(= $a (!= 1 1))" ; end
it "$a = 1 < 2" do; dump(parse("$a = 1 < 2")).should == "(= $a (< 1 2))" ; end
it "$a = 1 > 2" do; dump(parse("$a = 1 > 2")).should == "(= $a (> 1 2))" ; end
it "$a = 1 <= 2" do; dump(parse("$a = 1 <= 2")).should == "(= $a (<= 1 2))" ; end
it "$a = 1 >= 2" do; dump(parse("$a = 1 >= 2")).should == "(= $a (>= 1 2))" ; end
end
context "of regular expressions (parse errors)" do
# Not supported in concrete syntax
it "$a = /.*/ == /.*/" do
dump(parse("$a = /.*/ == /.*/")).should == "(= $a (== /.*/ /.*/))"
end
it "$a = /.*/ != /a.*/" do
dump(parse("$a = /.*/ != /.*/")).should == "(= $a (!= /.*/ /.*/))"
end
end
end
context "When parsing Regular Expression matching" do
it "$a = 'a' =~ /.*/" do; dump(parse("$a = 'a' =~ /.*/")).should == "(= $a (=~ 'a' /.*/))" ; end
it "$a = 'a' =~ '.*'" do; dump(parse("$a = 'a' =~ '.*'")).should == "(= $a (=~ 'a' '.*'))" ; end
it "$a = 'a' !~ /b.*/" do; dump(parse("$a = 'a' !~ /b.*/")).should == "(= $a (!~ 'a' /b.*/))" ; end
it "$a = 'a' !~ 'b.*'" do; dump(parse("$a = 'a' !~ 'b.*'")).should == "(= $a (!~ 'a' 'b.*'))" ; end
end
+ context "When parsing unfold" do
+ it "$a = *[1,2]" do; dump(parse("$a = *[1,2]")).should == "(= $a (unfold ([] 1 2)))" ; end
+ it "$a = *1" do; dump(parse("$a = *1")).should == "(= $a (unfold 1))" ; end
+ end
+
context "When parsing Lists" do
it "$a = []" do
dump(parse("$a = []")).should == "(= $a ([]))"
end
it "$a = [1]" do
dump(parse("$a = [1]")).should == "(= $a ([] 1))"
end
it "$a = [1,2,3]" do
dump(parse("$a = [1,2,3]")).should == "(= $a ([] 1 2 3))"
end
it "[...[...[]]] should create nested arrays without trouble" do
dump(parse("$a = [1,[2.0, 2.1, [2.2]],[3.0, 3.1]]")).should == "(= $a ([] 1 ([] 2.0 2.1 ([] 2.2)) ([] 3.0 3.1)))"
end
it "$a = [2 + 2]" do
dump(parse("$a = [2+2]")).should == "(= $a ([] (+ 2 2)))"
end
it "$a [1,2,3] == [1,2,3]" do
dump(parse("$a = [1,2,3] == [1,2,3]")).should == "(= $a (== ([] 1 2 3) ([] 1 2 3)))"
end
end
context "When parsing indexed access" do
it "$a = $b[2]" do
dump(parse("$a = $b[2]")).should == "(= $a (slice $b 2))"
end
it "$a = [1, 2, 3][2]" do
dump(parse("$a = [1,2,3][2]")).should == "(= $a (slice ([] 1 2 3) 2))"
end
it "$a = {'a' => 1, 'b' => 2}['b']" do
dump(parse("$a = {'a'=>1,'b' =>2}[b]")).should == "(= $a (slice ({} ('a' 1) ('b' 2)) b))"
end
end
context "When parsing assignments" do
it "Should allow simple assignment" do
dump(parse("$a = 10")).should == "(= $a 10)"
end
it "Should allow append assignment" do
dump(parse("$a += 10")).should == "(+= $a 10)"
end
it "Should allow without assignment" do
dump(parse("$a -= 10")).should == "(-= $a 10)"
end
it "Should allow chained assignment" do
dump(parse("$a = $b = 10")).should == "(= $a (= $b 10))"
end
it "Should allow chained assignment with expressions" do
dump(parse("$a = 1 + ($b = 10)")).should == "(= $a (+ 1 (= $b 10)))"
end
end
context "When parsing Hashes" do
it "should create a Hash when evaluating a LiteralHash" do
dump(parse("$a = {'a'=>1,'b'=>2}")).should == "(= $a ({} ('a' 1) ('b' 2)))"
end
it "$a = {...{...{}}} should create nested hashes without trouble" do
dump(parse("$a = {'a'=>1,'b'=>{'x'=>2.1,'y'=>2.2}}")).should == "(= $a ({} ('a' 1) ('b' ({} ('x' 2.1) ('y' 2.2)))))"
end
it "$a = {'a'=> 2 + 2} should evaluate values in entries" do
dump(parse("$a = {'a'=>2+2}")).should == "(= $a ({} ('a' (+ 2 2))))"
end
it "$a = {'a'=> 1, 'b'=>2} == {'a'=> 1, 'b'=>2}" do
dump(parse("$a = {'a'=>1,'b'=>2} == {'a'=>1,'b'=>2}")).should == "(= $a (== ({} ('a' 1) ('b' 2)) ({} ('a' 1) ('b' 2))))"
end
it "$a = {'a'=> 1, 'b'=>2} != {'x'=> 1, 'y'=>3}" do
dump(parse("$a = {'a'=>1,'b'=>2} != {'a'=>1,'b'=>2}")).should == "(= $a (!= ({} ('a' 1) ('b' 2)) ({} ('a' 1) ('b' 2))))"
end
end
context "When parsing the 'in' operator" do
it "with integer in a list" do
dump(parse("$a = 1 in [1,2,3]")).should == "(= $a (in 1 ([] 1 2 3)))"
end
it "with string key in a hash" do
dump(parse("$a = 'a' in {'x'=>1, 'a'=>2, 'y'=> 3}")).should == "(= $a (in 'a' ({} ('x' 1) ('a' 2) ('y' 3))))"
end
it "with substrings of a string" do
dump(parse("$a = 'ana' in 'bananas'")).should == "(= $a (in 'ana' 'bananas'))"
end
it "with sublist in a list" do
dump(parse("$a = [2,3] in [1,2,3]")).should == "(= $a (in ([] 2 3) ([] 1 2 3)))"
end
end
context "When parsing string interpolation" do
it "should interpolate a bare word as a variable name, \"${var}\"" do
dump(parse("$a = \"$var\"")).should == "(= $a (cat '' (str $var) ''))"
end
it "should interpolate a variable in a text expression, \"${$var}\"" do
dump(parse("$a = \"${$var}\"")).should == "(= $a (cat '' (str $var) ''))"
end
it "should interpolate a variable, \"yo${var}yo\"" do
dump(parse("$a = \"yo${var}yo\"")).should == "(= $a (cat 'yo' (str $var) 'yo'))"
end
it "should interpolate any expression in a text expression, \"${$var*2}\"" do
dump(parse("$a = \"yo${$var+2}yo\"")).should == "(= $a (cat 'yo' (str (+ $var 2)) 'yo'))"
end
it "should not interpolate names as variable in expression, \"${notvar*2}\"" do
dump(parse("$a = \"yo${notvar+2}yo\"")).should == "(= $a (cat 'yo' (str (+ notvar 2)) 'yo'))"
end
it "should interpolate name as variable in access expression, \"${var[0]}\"" do
dump(parse("$a = \"yo${var[0]}yo\"")).should == "(= $a (cat 'yo' (str (slice $var 0)) 'yo'))"
end
it "should interpolate name as variable in method call, \"${var.foo}\"" do
dump(parse("$a = \"yo${$var.foo}yo\"")).should == "(= $a (cat 'yo' (str (call-method (. $var foo))) 'yo'))"
end
it "should interpolate name as variable in method call, \"${var.foo}\"" do
dump(parse("$a = \"yo${var.foo}yo\"")).should == "(= $a (cat 'yo' (str (call-method (. $var foo))) 'yo'))"
dump(parse("$a = \"yo${var.foo.bar}yo\"")).should == "(= $a (cat 'yo' (str (call-method (. (call-method (. $var foo)) bar))) 'yo'))"
end
end
end
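# Notation note: in the dumps above, a double-quoted string becomes a
# concatenation node, (cat <segment> ...), and each interpolated expression is
# wrapped in (str <expr>) to mark its conversion to a string; for example
# "a${x}b" dumps as (cat 'a' (str $x) 'b').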
diff --git a/spec/unit/pops/parser/parse_calls_spec.rb b/spec/unit/pops/parser/parse_calls_spec.rb
index 115c160d6..ee80544f5 100644
--- a/spec/unit/pops/parser/parse_calls_spec.rb
+++ b/spec/unit/pops/parser/parse_calls_spec.rb
@@ -1,101 +1,104 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet/pops'
# relative to this spec file (./) does not work as this file is loaded by rspec
require File.join(File.dirname(__FILE__), '/parser_rspec_helper')
describe "egrammar parsing function calls" do
include ParserRspecHelper
context "When parsing calls as statements" do
context "in top level scope" do
it "foo()" do
dump(parse("foo()")).should == "(invoke foo)"
end
it "notice bar" do
dump(parse("notice bar")).should == "(invoke notice bar)"
end
it "notice(bar)" do
dump(parse("notice bar")).should == "(invoke notice bar)"
end
it "foo(bar)" do
dump(parse("foo(bar)")).should == "(invoke foo bar)"
end
it "foo(bar,)" do
dump(parse("foo(bar,)")).should == "(invoke foo bar)"
end
it "foo(bar, fum,)" do
dump(parse("foo(bar,fum,)")).should == "(invoke foo bar fum)"
end
it "notice fqdn_rand(30)" do
dump(parse("notice fqdn_rand(30)")).should == '(invoke notice (call fqdn_rand 30))'
end
end
context "in nested scopes" do
it "if true { foo() }" do
dump(parse("if true {foo()}")).should == "(if true\n (then (invoke foo)))"
end
it "if true { notice bar}" do
dump(parse("if true {notice bar}")).should == "(if true\n (then (invoke notice bar)))"
end
end
end
context "When parsing calls as expressions" do
it "$a = foo()" do
dump(parse("$a = foo()")).should == "(= $a (call foo))"
end
it "$a = foo(bar)" do
dump(parse("$a = foo()")).should == "(= $a (call foo))"
end
# # For regular grammar where a bare word can not be a "statement"
# it "$a = foo bar # illegal, must have parentheses" do
# expect { dump(parse("$a = foo bar"))}.to raise_error(Puppet::ParseError)
# end
# For egrammar where a bare word can be a "statement"
it "$a = foo bar # illegal, must have parentheses" do
- dump(parse("$a = foo bar")).should == "(block (= $a foo) bar)"
+ dump(parse("$a = foo bar")).should == "(block\n (= $a foo)\n bar\n)"
end
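# The dumper now renders nested Block expressions with one statement per line,
# which is why block expectations here and in the other parser specs are
# written as arrays of lines joined with "\n".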
context "in nested scopes" do
it "if true { $a = foo() }" do
dump(parse("if true { $a = foo()}")).should == "(if true\n (then (= $a (call foo))))"
end
it "if true { $a= foo(bar)}" do
dump(parse("if true {$a = foo(bar)}")).should == "(if true\n (then (= $a (call foo bar))))"
end
end
end
context "When parsing method calls" do
it "$a.foo" do
dump(parse("$a.foo")).should == "(call-method (. $a foo))"
end
it "$a.foo || { }" do
dump(parse("$a.foo || { }")).should == "(call-method (. $a foo) (lambda ()))"
end
it "$a.foo |$x| { }" do
dump(parse("$a.foo |$x|{ }")).should == "(call-method (. $a foo) (lambda (parameters x) ()))"
end
it "$a.foo |$x|{ }" do
- dump(parse("$a.foo |$x|{ $b = $x}")).should ==
- "(call-method (. $a foo) (lambda (parameters x) (block (= $b $x))))"
+ dump(parse("$a.foo |$x|{ $b = $x}")).should == [
+ "(call-method (. $a foo) (lambda (parameters x) (block",
+ " (= $b $x)",
+ ")))"
+ ].join("\n")
end
end
end
diff --git a/spec/unit/pops/parser/parse_conditionals_spec.rb b/spec/unit/pops/parser/parse_conditionals_spec.rb
index b8b8d9c8e..591b20e97 100644
--- a/spec/unit/pops/parser/parse_conditionals_spec.rb
+++ b/spec/unit/pops/parser/parse_conditionals_spec.rb
@@ -1,150 +1,157 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet/pops'
# relative to this spec file (./) does not work as this file is loaded by rspec
require File.join(File.dirname(__FILE__), '/parser_rspec_helper')
describe "egrammar parsing conditionals" do
include ParserRspecHelper
context "When parsing if statements" do
it "if true { $a = 10 }" do
dump(parse("if true { $a = 10 }")).should == "(if true\n (then (= $a 10)))"
end
it "if true { $a = 10 } else {$a = 20}" do
dump(parse("if true { $a = 10 } else {$a = 20}")).should ==
["(if true",
" (then (= $a 10))",
" (else (= $a 20)))"].join("\n")
end
it "if true { $a = 10 } elsif false { $a = 15} else {$a = 20}" do
dump(parse("if true { $a = 10 } elsif false { $a = 15} else {$a = 20}")).should ==
["(if true",
" (then (= $a 10))",
" (else (if false",
" (then (= $a 15))",
" (else (= $a 20)))))"].join("\n")
end
it "if true { $a = 10 $b = 10 } else {$a = 20}" do
- dump(parse("if true { $a = 10 $b = 20} else {$a = 20}")).should ==
- ["(if true",
- " (then (block (= $a 10) (= $b 20)))",
- " (else (= $a 20)))"].join("\n")
+ dump(parse("if true { $a = 10 $b = 20} else {$a = 20}")).should == [
+ "(if true",
+ " (then (block",
+ " (= $a 10)",
+ " (= $b 20)",
+ " ))",
+ " (else (= $a 20)))"
+ ].join("\n")
end
it "allows a parenthesized conditional expression" do
dump(parse("if (true) { 10 }")).should == "(if true\n (then 10))"
end
it "allows a parenthesized elsif conditional expression" do
dump(parse("if true { 10 } elsif (false) { 20 }")).should ==
["(if true",
" (then 10)",
" (else (if false",
" (then 20))))"].join("\n")
end
end
context "When parsing unless statements" do
it "unless true { $a = 10 }" do
dump(parse("unless true { $a = 10 }")).should == "(unless true\n (then (= $a 10)))"
end
it "unless true { $a = 10 } else {$a = 20}" do
dump(parse("unless true { $a = 10 } else {$a = 20}")).should ==
["(unless true",
" (then (= $a 10))",
" (else (= $a 20)))"].join("\n")
end
it "allows a parenthesized conditional expression" do
dump(parse("unless (true) { 10 }")).should == "(unless true\n (then 10))"
end
it "unless true { $a = 10 } elsif false { $a = 15} else {$a = 20} # is illegal" do
expect { parse("unless true { $a = 10 } elsif false { $a = 15} else {$a = 20}")}.to raise_error(Puppet::ParseError)
end
end
context "When parsing selector expressions" do
it "$a = $b ? banana => fruit " do
dump(parse("$a = $b ? banana => fruit")).should ==
"(= $a (? $b (banana => fruit)))"
end
it "$a = $b ? { banana => fruit}" do
dump(parse("$a = $b ? { banana => fruit }")).should ==
"(= $a (? $b (banana => fruit)))"
end
it "does not fail on a trailing blank line" do
dump(parse("$a = $b ? { banana => fruit }\n\n")).should ==
"(= $a (? $b (banana => fruit)))"
end
it "$a = $b ? { banana => fruit, grape => berry }" do
dump(parse("$a = $b ? {banana => fruit, grape => berry}")).should ==
"(= $a (? $b (banana => fruit) (grape => berry)))"
end
it "$a = $b ? { banana => fruit, grape => berry, default => wat }" do
dump(parse("$a = $b ? {banana => fruit, grape => berry, default => wat}")).should ==
"(= $a (? $b (banana => fruit) (grape => berry) (:default => wat)))"
end
it "$a = $b ? { default => wat, banana => fruit, grape => berry, }" do
dump(parse("$a = $b ? {default => wat, banana => fruit, grape => berry}")).should ==
"(= $a (? $b (:default => wat) (banana => fruit) (grape => berry)))"
end
end
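# In the dumped selector (and in the case expressions below), the bare word
# default denotes the special default match and is rendered as :default,
# distinguishing it from an ordinary bare-word key such as banana.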
context "When parsing case statements" do
it "case $a { a : {}}" do
dump(parse("case $a { a : {}}")).should ==
["(case $a",
" (when (a) (then ())))"
].join("\n")
end
it "allows a parenthesized value expression" do
dump(parse("case ($a) { a : {}}")).should ==
["(case $a",
" (when (a) (then ())))"
].join("\n")
end
it "case $a { /.*/ : {}}" do
dump(parse("case $a { /.*/ : {}}")).should ==
["(case $a",
" (when (/.*/) (then ())))"
].join("\n")
end
it "case $a { a, b : {}}" do
dump(parse("case $a { a, b : {}}")).should ==
["(case $a",
" (when (a b) (then ())))"
].join("\n")
end
it "case $a { a, b : {} default : {}}" do
dump(parse("case $a { a, b : {} default : {}}")).should ==
["(case $a",
" (when (a b) (then ()))",
" (when (:default) (then ())))"
].join("\n")
end
it "case $a { a : {$b = 10 $c = 20}}" do
dump(parse("case $a { a : {$b = 10 $c = 20}}")).should ==
["(case $a",
- " (when (a) (then (block (= $b 10) (= $c 20)))))"
+ " (when (a) (then (block",
+ " (= $b 10)",
+ " (= $c 20)",
+ " ))))"
].join("\n")
end
end
end
diff --git a/spec/unit/pops/parser/parse_containers_spec.rb b/spec/unit/pops/parser/parse_containers_spec.rb
index 57d6efee9..a05c5975d 100644
--- a/spec/unit/pops/parser/parse_containers_spec.rb
+++ b/spec/unit/pops/parser/parse_containers_spec.rb
@@ -1,206 +1,261 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet/pops'
# relative to this spec file (./) does not work as this file is loaded by rspec
require File.join(File.dirname(__FILE__), '/parser_rspec_helper')
describe "egrammar parsing containers" do
include ParserRspecHelper
context "When parsing file scope" do
it "$a = 10 $b = 20" do
- dump(parse("$a = 10 $b = 20")).should == "(block (= $a 10) (= $b 20))"
+ dump(parse("$a = 10 $b = 20")).should == [
+ "(block",
+ " (= $a 10)",
+ " (= $b 20)",
+ ")"
+ ].join("\n")
end
it "$a = 10" do
dump(parse("$a = 10")).should == "(= $a 10)"
end
end
context "When parsing class" do
it "class foo {}" do
dump(parse("class foo {}")).should == "(class foo ())"
end
it "class foo { class bar {} }" do
- dump(parse("class foo { class bar {}}")).should == "(class foo (block (class foo::bar ())))"
+ dump(parse("class foo { class bar {}}")).should == [
+ "(class foo (block",
+ " (class foo::bar ())",
+ "))"
+ ].join("\n")
end
it "class foo::bar {}" do
dump(parse("class foo::bar {}")).should == "(class foo::bar ())"
end
it "class foo inherits bar {}" do
dump(parse("class foo inherits bar {}")).should == "(class foo (inherits bar) ())"
end
it "class foo($a) {}" do
dump(parse("class foo($a) {}")).should == "(class foo (parameters a) ())"
end
it "class foo($a, $b) {}" do
dump(parse("class foo($a, $b) {}")).should == "(class foo (parameters a b) ())"
end
it "class foo($a, $b=10) {}" do
dump(parse("class foo($a, $b=10) {}")).should == "(class foo (parameters a (= b 10)) ())"
end
it "class foo($a, $b) inherits belgo::bar {}" do
dump(parse("class foo($a, $b) inherits belgo::bar{}")).should == "(class foo (inherits belgo::bar) (parameters a b) ())"
end
it "class foo {$a = 10 $b = 20}" do
- dump(parse("class foo {$a = 10 $b = 20}")).should == "(class foo (block (= $a 10) (= $b 20)))"
+ dump(parse("class foo {$a = 10 $b = 20}")).should == [
+ "(class foo (block",
+ " (= $a 10)",
+ " (= $b 20)",
+ "))"
+ ].join("\n")
end
context "it should handle '3x weirdness'" do
it "class class {} # a class named 'class'" do
# Not so much weird as confusing that it is possible to name a class 'class'. Can have
# a very confusing effect when resolving relative names, getting the global hardwired "Class"
# instead of some foo::class etc.
# This is allowed in 3.x.
expect {
dump(parse("class class {}")).should == "(class class ())"
}.to raise_error(/not a valid classname/)
end
it "class default {} # a class named 'default'" do
# The weirdness here is that a class can inherit 'default' but not declare a class called default.
# (It will work with relative names i.e. foo::default though). The whole idea with keywords as
# names is flawed to begin with - it is generally just a very bad idea.
expect { dump(parse("class default {}")).should == "(class default ())" }.to raise_error(Puppet::ParseError)
end
it "class foo::default {} # a nested name 'default'" do
dump(parse("class foo::default {}")).should == "(class foo::default ())"
end
it "class class inherits default {} # inherits default", :broken => true do
expect {
dump(parse("class class inherits default {}")).should == "(class class (inherits default) ())"
}.to raise_error(/not a valid classname/)
end
it "class class inherits default {} # inherits default" do
# TODO: See previous test marked as :broken=>true, it is actually this test (result) that is wacky,
# this is because a class is named at parse time (since class evaluation is lazy, the model must have the
# full class name for nested classes - only, it gets this wrong when a class is named "class" - or at least
# I think it is wrong.)
#
expect {
dump(parse("class class inherits default {}")).should == "(class class::class (inherits default) ())"
}.to raise_error(/not a valid classname/)
end
it "class foo inherits class" do
expect {
dump(parse("class foo inherits class {}")).should == "(class foo (inherits class) ())"
}.to raise_error(/not a valid classname/)
end
end
+
+ context 'it should allow keywords as attribute names' do
+ ['and', 'case', 'class', 'default', 'define', 'else', 'elsif', 'if', 'in', 'inherits', 'node', 'or',
+ 'undef', 'unless', 'type', 'attr', 'function', 'private'].each do |keyword|
+ it "such as #{keyword}" do
+ expect {parse("class x ($#{keyword}){} class { x: #{keyword} => 1 }")}.to_not raise_error
+ end
+ end
+ end
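# Each generated example above parses a manifest of the following shape
# (shown here with 'unless' substituted for the keyword, purely as an
# illustration):
#
#   class x ($unless) {}  class { x: unless => 1 }
#
# i.e. the keyword must be accepted both as a class parameter name and as an
# attribute name; a matching context for 'define' appears further down.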
+
end
context "When the parser parses define" do
it "define foo {}" do
dump(parse("define foo {}")).should == "(define foo ())"
end
it "class foo { define bar {}}" do
- dump(parse("class foo {define bar {}}")).should == "(class foo (block (define foo::bar ())))"
+ dump(parse("class foo {define bar {}}")).should == [
+ "(class foo (block",
+ " (define foo::bar ())",
+ "))"
+ ].join("\n")
end
it "define foo { define bar {}}" do
# This is illegal, but handled as part of validation
- dump(parse("define foo { define bar {}}")).should == "(define foo (block (define bar ())))"
+ dump(parse("define foo { define bar {}}")).should == [
+ "(define foo (block",
+ " (define bar ())",
+ "))"
+ ].join("\n")
end
it "define foo::bar {}" do
dump(parse("define foo::bar {}")).should == "(define foo::bar ())"
end
it "define foo($a) {}" do
dump(parse("define foo($a) {}")).should == "(define foo (parameters a) ())"
end
it "define foo($a, $b) {}" do
dump(parse("define foo($a, $b) {}")).should == "(define foo (parameters a b) ())"
end
it "define foo($a, $b=10) {}" do
dump(parse("define foo($a, $b=10) {}")).should == "(define foo (parameters a (= b 10)) ())"
end
it "define foo {$a = 10 $b = 20}" do
- dump(parse("define foo {$a = 10 $b = 20}")).should == "(define foo (block (= $a 10) (= $b 20)))"
+ dump(parse("define foo {$a = 10 $b = 20}")).should == [
+ "(define foo (block",
+ " (= $a 10)",
+ " (= $b 20)",
+ "))"
+ ].join("\n")
end
context "it should handle '3x weirdness'" do
it "define class {} # a define named 'class'" do
# This is weird because Class already exists, and instantiating this define will probably not
# work
expect {
dump(parse("define class {}")).should == "(define class ())"
}.to raise_error(/not a valid classname/)
end
it "define default {} # a define named 'default'" do
# Check unwanted ability to define 'default'.
# The expression below is not allowed (which is good).
#
expect { dump(parse("define default {}")).should == "(define default ())"}.to raise_error(Puppet::ParseError)
end
end
+
+ context 'it should allow keywords as attribute names' do
+ ['and', 'case', 'class', 'default', 'define', 'else', 'elsif', 'if', 'in', 'inherits', 'node', 'or',
+ 'undef', 'unless', 'type', 'attr', 'function', 'private'].each do |keyword|
+ it "such as #{keyword}" do
+ expect {parse("define x ($#{keyword}){} x { y: #{keyword} => 1 }")}.to_not raise_error
+ end
+ end
+ end
end
context "When parsing node" do
it "node foo {}" do
dump(parse("node foo {}")).should == "(node (matches 'foo') ())"
end
+ it "node foo, {} # trailing comma" do
+ dump(parse("node foo, {}")).should == "(node (matches 'foo') ())"
+ end
+
it "node kermit.example.com {}" do
dump(parse("node kermit.example.com {}")).should == "(node (matches 'kermit.example.com') ())"
end
it "node kermit . example . com {}" do
dump(parse("node kermit . example . com {}")).should == "(node (matches 'kermit.example.com') ())"
end
it "node foo, x::bar, default {}" do
dump(parse("node foo, x::bar, default {}")).should == "(node (matches 'foo' 'x::bar' :default) ())"
end
it "node 'foo' {}" do
dump(parse("node 'foo' {}")).should == "(node (matches 'foo') ())"
end
it "node foo inherits x::bar {}" do
dump(parse("node foo inherits x::bar {}")).should == "(node (matches 'foo') (parent 'x::bar') ())"
end
it "node foo inherits 'bar' {}" do
dump(parse("node foo inherits 'bar' {}")).should == "(node (matches 'foo') (parent 'bar') ())"
end
it "node foo inherits default {}" do
dump(parse("node foo inherits default {}")).should == "(node (matches 'foo') (parent :default) ())"
end
it "node /web.*/ {}" do
dump(parse("node /web.*/ {}")).should == "(node (matches /web.*/) ())"
end
it "node /web.*/, /do\.wop.*/, and.so.on {}" do
dump(parse("node /web.*/, /do\.wop.*/, 'and.so.on' {}")).should == "(node (matches /web.*/ /do\.wop.*/ 'and.so.on') ())"
end
it "node wat inherits /apache.*/ {}" do
dump(parse("node wat inherits /apache.*/ {}")).should == "(node (matches 'wat') (parent /apache.*/) ())"
end
it "node foo inherits bar {$a = 10 $b = 20}" do
- dump(parse("node foo inherits bar {$a = 10 $b = 20}")).should == "(node (matches 'foo') (parent 'bar') (block (= $a 10) (= $b 20)))"
+ dump(parse("node foo inherits bar {$a = 10 $b = 20}")).should == [
+ "(node (matches 'foo') (parent 'bar') (block",
+ " (= $a 10)",
+ " (= $b 20)",
+ "))"
+ ].join("\n")
end
end
end
diff --git a/spec/unit/pops/parser/parse_resource_spec.rb b/spec/unit/pops/parser/parse_resource_spec.rb
index ee7e13445..cf7e7cb01 100644
--- a/spec/unit/pops/parser/parse_resource_spec.rb
+++ b/spec/unit/pops/parser/parse_resource_spec.rb
@@ -1,242 +1,324 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet/pops'
# relative to this spec file (./) does not work as this file is loaded by rspec
require File.join(File.dirname(__FILE__), '/parser_rspec_helper')
describe "egrammar parsing resource declarations" do
include ParserRspecHelper
context "When parsing regular resource" do
- it "file { 'title': }" do
- dump(parse("file { 'title': }")).should == [
- "(resource file",
- " ('title'))"
- ].join("\n")
- end
+ ["File", "file"].each do |word|
+ it "#{word} { 'title': }" do
+ dump(parse("#{word} { 'title': }")).should == [
+ "(resource file",
+ " ('title'))"
+ ].join("\n")
+ end
- it "file { 'title': path => '/somewhere', mode => 0777}" do
- dump(parse("file { 'title': path => '/somewhere', mode => 0777}")).should == [
- "(resource file",
- " ('title'",
- " (path => '/somewhere')",
- " (mode => 0777)))"
- ].join("\n")
- end
+ it "#{word} { 'title': path => '/somewhere', mode => '0777'}" do
+ dump(parse("#{word} { 'title': path => '/somewhere', mode => '0777'}")).should == [
+ "(resource file",
+ " ('title'",
+ " (path => '/somewhere')",
+ " (mode => '0777')))"
+ ].join("\n")
+ end
- it "file { 'title': path => '/somewhere', }" do
- dump(parse("file { 'title': path => '/somewhere', }")).should == [
- "(resource file",
- " ('title'",
- " (path => '/somewhere')))"
- ].join("\n")
- end
+ it "#{word} { 'title': path => '/somewhere', }" do
+ dump(parse("#{word} { 'title': path => '/somewhere', }")).should == [
+ "(resource file",
+ " ('title'",
+ " (path => '/somewhere')))"
+ ].join("\n")
+ end
- it "file { 'title': , }" do
- dump(parse("file { 'title': , }")).should == [
- "(resource file",
- " ('title'))"
- ].join("\n")
- end
+ it "#{word} { 'title': , }" do
+ dump(parse("#{word} { 'title': , }")).should == [
+ "(resource file",
+ " ('title'))"
+ ].join("\n")
+ end
- it "file { 'title': ; }" do
- dump(parse("file { 'title': ; }")).should == [
- "(resource file",
- " ('title'))"
- ].join("\n")
- end
+ it "#{word} { 'title': ; }" do
+ dump(parse("#{word} { 'title': ; }")).should == [
+ "(resource file",
+ " ('title'))"
+ ].join("\n")
+ end
- it "file { 'title': ; 'other_title': }" do
- dump(parse("file { 'title': ; 'other_title': }")).should == [
- "(resource file",
- " ('title')",
- " ('other_title'))"
- ].join("\n")
- end
+ it "#{word} { 'title': ; 'other_title': }" do
+ dump(parse("#{word} { 'title': ; 'other_title': }")).should == [
+ "(resource file",
+ " ('title')",
+ " ('other_title'))"
+ ].join("\n")
+ end
- it "file { 'title1': path => 'x'; 'title2': path => 'y'}" do
- dump(parse("file { 'title1': path => 'x'; 'title2': path => 'y'}")).should == [
- "(resource file",
- " ('title1'",
- " (path => 'x'))",
- " ('title2'",
- " (path => 'y')))",
- ].join("\n")
+ # PUP-2898, trailing ';'
+ it "#{word} { 'title': ; 'other_title': ; }" do
+ dump(parse("#{word} { 'title': ; 'other_title': ; }")).should == [
+ "(resource file",
+ " ('title')",
+ " ('other_title'))"
+ ].join("\n")
+ end
+
+ it "#{word} { 'title1': path => 'x'; 'title2': path => 'y'}" do
+ dump(parse("#{word} { 'title1': path => 'x'; 'title2': path => 'y'}")).should == [
+ "(resource file",
+ " ('title1'",
+ " (path => 'x'))",
+ " ('title2'",
+ " (path => 'y')))",
+ ].join("\n")
+ end
+
+ it "#{word} { title: * => {mode => '0777'} }" do
+ dump(parse("#{word} { title: * => {mode => '0777'}}")).should == [
+ "(resource file",
+ " (title",
+ " (* => ({} (mode '0777')))))"
+ ].join("\n")
+ end
end
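# Both spellings are exercised above because the parser accepts a capitalized
# or a lower-case bare word as the resource type name; both normalize to
# lower-case 'file' in the dumped model, so one set of expectations is shared
# by iterating over ["File", "file"].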
end
- context "When parsing resource defaults" do
+ context "When parsing (type based) resource defaults" do
it "File { }" do
dump(parse("File { }")).should == "(resource-defaults file)"
end
- it "File { mode => 0777 }" do
- dump(parse("File { mode => 0777}")).should == [
+ it "File { mode => '0777' }" do
+ dump(parse("File { mode => '0777'}")).should == [
"(resource-defaults file",
- " (mode => 0777))"
+ " (mode => '0777'))"
+ ].join("\n")
+ end
+
+ it "File { * => {mode => '0777'} } (even if validated to be illegal)" do
+ dump(parse("File { * => {mode => '0777'}}")).should == [
+ "(resource-defaults file",
+ " (* => ({} (mode '0777'))))"
].join("\n")
end
end
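# The '* =>' form (attribute splat) supplies a hash of attribute names to
# values in one operation; the dumper renders it as (* => ({} ...)). The
# parser accepts it even in positions that later validation rejects, which is
# what the "(even if validated to be illegal)" examples document.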
context "When parsing resource override" do
it "File['x'] { }" do
dump(parse("File['x'] { }")).should == "(override (slice file 'x'))"
end
it "File['x'] { x => 1 }" do
- dump(parse("File['x'] { x => 1}")).should == "(override (slice file 'x')\n (x => 1))"
+ dump(parse("File['x'] { x => 1}")).should == [
+ "(override (slice file 'x')",
+ " (x => 1))"
+ ].join("\n")
end
+
it "File['x', 'y'] { x => 1 }" do
- dump(parse("File['x', 'y'] { x => 1}")).should == "(override (slice file ('x' 'y'))\n (x => 1))"
+ dump(parse("File['x', 'y'] { x => 1}")).should == [
+ "(override (slice file ('x' 'y'))",
+ " (x => 1))"
+ ].join("\n")
end
it "File['x'] { x => 1, y => 2 }" do
- dump(parse("File['x'] { x => 1, y=> 2}")).should == "(override (slice file 'x')\n (x => 1)\n (y => 2))"
+ dump(parse("File['x'] { x => 1, y=> 2}")).should == [
+ "(override (slice file 'x')",
+ " (x => 1)",
+ " (y => 2))"
+ ].join("\n")
end
it "File['x'] { x +> 1 }" do
- dump(parse("File['x'] { x +> 1}")).should == "(override (slice file 'x')\n (x +> 1))"
+ dump(parse("File['x'] { x +> 1}")).should == [
+ "(override (slice file 'x')",
+ " (x +> 1))"
+ ].join("\n")
+ end
+
+ it "File['x'] { * => {mode => '0777'} } (even if validated to be illegal)" do
+ dump(parse("File['x'] { * => {mode => '0777'}}")).should == [
+ "(override (slice file 'x')",
+ " (* => ({} (mode '0777'))))"
+ ].join("\n")
end
end
context "When parsing virtual and exported resources" do
- it "@@file { 'title': }" do
+ it "parses exported @@file { 'title': }" do
dump(parse("@@file { 'title': }")).should == "(exported-resource file\n ('title'))"
end
- it "@file { 'title': }" do
+ it "parses virtual @file { 'title': }" do
dump(parse("@file { 'title': }")).should == "(virtual-resource file\n ('title'))"
end
- it "@file { mode => 0777 }" do
- # Defaults are not virtualizeable
- expect {
- dump(parse("@file { mode => 0777 }")).should == ""
- }.to raise_error(Puppet::ParseError, /Defaults are not virtualizable/)
+ it "nothing before the title colon is a syntax error" do
+ expect do
+ parse("@file {: mode => '0777' }")
+ end.to raise_error(/Syntax error/)
+ end
+
+ it "raises error for user error; not a resource" do
+ # The expression results in VIRTUAL, CALL FUNCTION('file', HASH) since the resource body has
+ # no title.
+ expect do
+ parse("@file { mode => '0777' }")
+ end.to raise_error(/Virtual \(@\) can only be applied to a Resource Expression/)
+ end
+
+ it "parses global defaults with @ (even if validated to be illegal)" do
+ dump(parse("@File { mode => '0777' }")).should == [
+ "(virtual-resource-defaults file",
+ " (mode => '0777'))"
+ ].join("\n")
+ end
+
+ it "parses global defaults with @@ (even if validated to be illegal)" do
+ dump(parse("@@File { mode => '0777' }")).should == [
+ "(exported-resource-defaults file",
+ " (mode => '0777'))"
+ ].join("\n")
+ end
+
+ it "parses override with @ (even if validated to be illegal)" do
+ dump(parse("@File[foo] { mode => '0777' }")).should == [
+ "(virtual-override (slice file foo)",
+ " (mode => '0777'))"
+ ].join("\n")
+ end
+
+ it "parses override combined with @@ (even if validated to be illegal)" do
+ dump(parse("@@File[foo] { mode => '0777' }")).should == [
+ "(exported-override (slice file foo)",
+ " (mode => '0777'))"
+ ].join("\n")
end
end
context "When parsing class resource" do
it "class { 'cname': }" do
dump(parse("class { 'cname': }")).should == [
"(resource class",
" ('cname'))"
].join("\n")
end
it "@class { 'cname': }" do
dump(parse("@class { 'cname': }")).should == [
"(virtual-resource class",
" ('cname'))"
].join("\n")
end
it "@@class { 'cname': }" do
dump(parse("@@class { 'cname': }")).should == [
"(exported-resource class",
" ('cname'))"
].join("\n")
end
it "class { 'cname': x => 1, y => 2}" do
dump(parse("class { 'cname': x => 1, y => 2}")).should == [
"(resource class",
" ('cname'",
" (x => 1)",
" (y => 2)))"
].join("\n")
end
it "class { 'cname1': x => 1; 'cname2': y => 2}" do
dump(parse("class { 'cname1': x => 1; 'cname2': y => 2}")).should == [
"(resource class",
" ('cname1'",
" (x => 1))",
" ('cname2'",
" (y => 2)))",
].join("\n")
end
end
context "reported issues in 3.x" do
it "should not screw up on brackets in title of resource #19632" do
dump(parse('notify { "thisisa[bug]": }')).should == [
"(resource notify",
" ('thisisa[bug]'))",
].join("\n")
end
end
context "When parsing Relationships" do
it "File[a] -> File[b]" do
dump(parse("File[a] -> File[b]")).should == "(-> (slice file a) (slice file b))"
end
it "File[a] <- File[b]" do
dump(parse("File[a] <- File[b]")).should == "(<- (slice file a) (slice file b))"
end
it "File[a] ~> File[b]" do
dump(parse("File[a] ~> File[b]")).should == "(~> (slice file a) (slice file b))"
end
it "File[a] <~ File[b]" do
dump(parse("File[a] <~ File[b]")).should == "(<~ (slice file a) (slice file b))"
end
it "Should chain relationships" do
dump(parse("a -> b -> c")).should ==
"(-> (-> a b) c)"
end
it "Should chain relationships" do
dump(parse("File[a] -> File[b] ~> File[c] <- File[d] <~ File[e]")).should ==
"(<~ (<- (~> (-> (slice file a) (slice file b)) (slice file c)) (slice file d)) (slice file e))"
end
it "should create relationships between collects" do
dump(parse("File <| mode == 0644 |> -> File <| mode == 0755 |>")).should ==
"(-> (collect file\n (<| |> (== mode 0644))) (collect file\n (<| |> (== mode 0755))))"
end
end
context "When parsing collection" do
context "of virtual resources" do
it "File <| |>" do
dump(parse("File <| |>")).should == "(collect file\n (<| |>))"
end
end
context "of exported resources" do
it "File <<| |>>" do
dump(parse("File <<| |>>")).should == "(collect file\n (<<| |>>))"
end
end
context "queries are parsed with correct precedence" do
it "File <| tag == 'foo' |>" do
dump(parse("File <| tag == 'foo' |>")).should == "(collect file\n (<| |> (== tag 'foo')))"
end
- it "File <| tag == 'foo' and mode != 0777 |>" do
- dump(parse("File <| tag == 'foo' and mode != 0777 |>")).should == "(collect file\n (<| |> (&& (== tag 'foo') (!= mode 0777))))"
+ it "File <| tag == 'foo' and mode != '0777' |>" do
+ dump(parse("File <| tag == 'foo' and mode != '0777' |>")).should == "(collect file\n (<| |> (&& (== tag 'foo') (!= mode '0777'))))"
end
- it "File <| tag == 'foo' or mode != 0777 |>" do
- dump(parse("File <| tag == 'foo' or mode != 0777 |>")).should == "(collect file\n (<| |> (|| (== tag 'foo') (!= mode 0777))))"
+ it "File <| tag == 'foo' or mode != '0777' |>" do
+ dump(parse("File <| tag == 'foo' or mode != '0777' |>")).should == "(collect file\n (<| |> (|| (== tag 'foo') (!= mode '0777'))))"
end
- it "File <| tag == 'foo' or tag == 'bar' and mode != 0777 |>" do
- dump(parse("File <| tag == 'foo' or tag == 'bar' and mode != 0777 |>")).should ==
- "(collect file\n (<| |> (|| (== tag 'foo') (&& (== tag 'bar') (!= mode 0777)))))"
+ it "File <| tag == 'foo' or tag == 'bar' and mode != '0777' |>" do
+ dump(parse("File <| tag == 'foo' or tag == 'bar' and mode != '0777' |>")).should ==
+ "(collect file\n (<| |> (|| (== tag 'foo') (&& (== tag 'bar') (!= mode '0777')))))"
end
- it "File <| (tag == 'foo' or tag == 'bar') and mode != 0777 |>" do
- dump(parse("File <| (tag == 'foo' or tag == 'bar') and mode != 0777 |>")).should ==
- "(collect file\n (<| |> (&& (|| (== tag 'foo') (== tag 'bar')) (!= mode 0777))))"
+ it "File <| (tag == 'foo' or tag == 'bar') and mode != '0777' |>" do
+ dump(parse("File <| (tag == 'foo' or tag == 'bar') and mode != '0777' |>")).should ==
+ "(collect file\n (<| |> (&& (|| (== tag 'foo') (== tag 'bar')) (!= mode '0777'))))"
end
end
end
end
diff --git a/spec/unit/pops/parser/parser_spec.rb b/spec/unit/pops/parser/parser_spec.rb
index fb44a08c9..23f537eeb 100644
--- a/spec/unit/pops/parser/parser_spec.rb
+++ b/spec/unit/pops/parser/parser_spec.rb
@@ -1,17 +1,33 @@
require 'spec_helper'
require 'puppet/pops'
describe Puppet::Pops::Parser::Parser do
it "should instantiate a parser" do
parser = Puppet::Pops::Parser::Parser.new()
parser.class.should == Puppet::Pops::Parser::Parser
end
it "should parse a code string and return a model" do
parser = Puppet::Pops::Parser::Parser.new()
model = parser.parse_string("$a = 10").current
model.class.should == Puppet::Pops::Model::Program
model.body.class.should == Puppet::Pops::Model::AssignmentExpression
end
+ it "should accept empty input and return a model" do
+ parser = Puppet::Pops::Parser::Parser.new()
+ model = parser.parse_string("").current
+ model.class.should == Puppet::Pops::Model::Program
+ model.body.class.should == Puppet::Pops::Model::Nop
+ end
+
+ it "should accept empty input containing only comments and report location at end of input" do
+ parser = Puppet::Pops::Parser::Parser.new()
+ model = parser.parse_string("# comment\n").current
+ model.class.should == Puppet::Pops::Model::Program
+ model.body.class.should == Puppet::Pops::Model::Nop
+ adapter = Puppet::Pops::Adapters::SourcePosAdapter.adapt(model.body)
+ expect(adapter.offset).to eq(10)
+ expect(adapter.length).to eq(0)
+ end
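# The offsets above follow from the input itself: "# comment\n" is 10
# characters long, so a Nop body located at end of input reports offset 10
# and length 0 (an empty region just past the last character).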
end
diff --git a/spec/unit/pops/parser/parsing_typed_parameters_spec.rb b/spec/unit/pops/parser/parsing_typed_parameters_spec.rb
new file mode 100644
index 000000000..678f6523c
--- /dev/null
+++ b/spec/unit/pops/parser/parsing_typed_parameters_spec.rb
@@ -0,0 +1,72 @@
+require 'spec_helper'
+
+require 'puppet/pops'
+require 'puppet/pops/evaluator/evaluator_impl'
+require 'puppet_spec/pops'
+require 'puppet_spec/scope'
+require 'puppet/parser/e4_parser_adapter'
+
+
+# relative to this spec file (./) does not work as this file is loaded by rspec
+#require File.join(File.dirname(__FILE__), '/evaluator_rspec_helper')
+
+describe 'Puppet::Pops::Evaluator::EvaluatorImpl' do
+ include PuppetSpec::Pops
+ include PuppetSpec::Scope
+ before(:each) do
+
+ # These must be set since there is 3x logic that triggers on these even if the tests are explicit
+ # about selection of parser and evaluator
+ #
+ Puppet[:parser] = 'future'
+ end
+
+ let(:parser) { Puppet::Pops::Parser::EvaluatingParser.new }
+
+ context "captures-rest parameter" do
+ it 'is allowed in lambda when placed last' do
+ source = <<-CODE
+ foo() |$a, *$b| { $a + $b[0] }
+ CODE
+ expect do
+ parser.parse_string(source, __FILE__)
+ end.to_not raise_error()
+ end
+
+ it 'allows a type annotation' do
+ source = <<-CODE
+ foo() |$a, Integer *$b| { $a + $b[0] }
+ CODE
+ expect do
+ parser.parse_string(source, __FILE__)
+ end.to_not raise_error()
+ end
+
+ it 'is not allowed in lambda except last' do
+ source = <<-CODE
+ foo() |*$a, $b| { $a + $b[0] }
+ CODE
+ expect do
+ parser.parse_string(source, __FILE__)
+ end.to raise_error(Puppet::ParseError, /Parameter \$a is not last, and has 'captures rest'/)
+ end
+
+ it 'is not allowed in define' do
+ source = <<-CODE
+ define foo(*$a) { }
+ CODE
+ expect do
+ parser.parse_string(source, __FILE__)
+ end.to raise_error(Puppet::ParseError, /Parameter \$a has 'captures rest' - not supported in a 'define'/)
+ end
+
+ it 'is not allowed in class' do
+ source = <<-CODE
+ class foo(*$a) { }
+ CODE
+ expect do
+ parser.parse_string(source, __FILE__)
+ end.to raise_error(Puppet::ParseError, /Parameter \$a has 'captures rest' - not supported in a Host Class Definition/)
+ end
+ end
+end
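# A captures-rest parameter (*$param) collects all remaining arguments into an
# array and is only legal as the last parameter of a lambda; defines and host
# classes reject it, as the error-message expectations above show. A minimal
# sketch of the accepted shape, using a hypothetical function foo:
#
#   foo() |$first, *$rest| { }   # arguments after the first are gathered into $rest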
diff --git a/spec/unit/pops/transformer/transform_calls_spec.rb b/spec/unit/pops/transformer/transform_calls_spec.rb
index f58c79e1e..dba0d560f 100644
--- a/spec/unit/pops/transformer/transform_calls_spec.rb
+++ b/spec/unit/pops/transformer/transform_calls_spec.rb
@@ -1,115 +1,115 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet/pops'
# relative to this spec file (./) does not work as this file is loaded by rspec
require File.join(File.dirname(__FILE__), '/transformer_rspec_helper')
describe "transformation to Puppet AST for function calls" do
include TransformerRspecHelper
context "When transforming calls as statements" do
context "in top level scope" do
it "foo()" do
astdump(parse("foo()")).should == "(invoke foo)"
end
it "notice bar" do
astdump(parse("notice bar")).should == "(invoke notice bar)"
end
end
context "in nested scopes" do
it "if true { foo() }" do
astdump(parse("if true {foo()}")).should == "(if true\n (then (invoke foo)))"
end
it "if true { notice bar}" do
astdump(parse("if true {notice bar}")).should == "(if true\n (then (invoke notice bar)))"
end
end
context "in general" do
{
"require bar" => '(invoke require bar)',
"realize bar" => '(invoke realize bar)',
"contain bar" => '(invoke contain bar)',
"include bar" => '(invoke include bar)',
+ "tag bar" => '(invoke tag bar)',
"info bar" => '(invoke info bar)',
"notice bar" => '(invoke notice bar)',
"error bar" => '(invoke error bar)',
"warning bar" => '(invoke warning bar)',
"debug bar" => '(invoke debug bar)',
"fail bar" => '(invoke fail bar)',
"notice {a => 1}" => "(invoke notice ({} ('a' 1)))",
"notice 1,2,3" => "(invoke notice 1 2 3)",
"notice(1,2,3)" => "(invoke notice 1 2 3)",
}.each do |source, result|
it "should transform #{source} to #{result}" do
astdump(parse(source)).should == result
end
end
{
"foo bar" => '(block foo bar)',
- "tag bar" => '(block tag bar)',
"tag" => 'tag',
}.each do |source, result|
it "should not transform #{source}, and instead produce #{result}" do
astdump(parse(source)).should == result
end
end
end
end
context "When transforming calls as expressions" do
it "$a = foo()" do
astdump(parse("$a = foo()")).should == "(= $a (call foo))"
end
it "$a = foo(bar)" do
astdump(parse("$a = foo()")).should == "(= $a (call foo))"
end
# For egrammar where a bare word can be a "statement"
it "$a = foo bar # assignment followed by bare word is ok in egrammar" do
astdump(parse("$a = foo bar")).should == "(block (= $a foo) bar)"
end
context "in nested scopes" do
it "if true { $a = foo() }" do
astdump(parse("if true { $a = foo()}")).should == "(if true\n (then (= $a (call foo))))"
end
it "if true { $a= foo(bar)}" do
astdump(parse("if true {$a = foo(bar)}")).should == "(if true\n (then (= $a (call foo bar))))"
end
end
end
context "When transforming method calls" do
it "$a.foo" do
astdump(parse("$a.foo")).should == "(call-method (. $a foo))"
end
it "$a.foo || { }" do
astdump(parse("$a.foo || { }")).should == "(call-method (. $a foo) (lambda ()))"
end
it "$a.foo ||{[]} # check transformation to block with empty array" do
astdump(parse("$a.foo || {[]}")).should == "(call-method (. $a foo) (lambda (block ([]))))"
end
it "$a.foo |$x| { }" do
astdump(parse("$a.foo |$x| { }")).should == "(call-method (. $a foo) (lambda (parameters x) ()))"
end
it "$a.foo |$x| { $b = $x}" do
astdump(parse("$a.foo |$x| { $b = $x}")).should ==
"(call-method (. $a foo) (lambda (parameters x) (block (= $b $x))))"
end
end
end
diff --git a/spec/unit/pops/transformer/transform_resource_spec.rb b/spec/unit/pops/transformer/transform_resource_spec.rb
deleted file mode 100644
index 251ef2f75..000000000
--- a/spec/unit/pops/transformer/transform_resource_spec.rb
+++ /dev/null
@@ -1,185 +0,0 @@
-#! /usr/bin/env ruby
-require 'spec_helper'
-require 'puppet/pops'
-
-# relative to this spec file (./) does not work as this file is loaded by rspec
-require File.join(File.dirname(__FILE__), '/transformer_rspec_helper')
-
-describe "transformation to Puppet AST for resource declarations" do
- include TransformerRspecHelper
-
- context "When transforming regular resource" do
- it "file { 'title': }" do
- astdump(parse("file { 'title': }")).should == [
- "(resource file",
- " ('title'))"
- ].join("\n")
- end
-
- it "file { 'title': ; 'other_title': }" do
- astdump(parse("file { 'title': ; 'other_title': }")).should == [
- "(resource file",
- " ('title')",
- " ('other_title'))"
- ].join("\n")
- end
-
- it "file { 'title': path => '/somewhere', mode => 0777}" do
- astdump(parse("file { 'title': path => '/somewhere', mode => 0777}")).should == [
- "(resource file",
- " ('title'",
- " (path => '/somewhere')",
- " (mode => 0777)))"
- ].join("\n")
- end
-
- it "file { 'title1': path => 'x'; 'title2': path => 'y'}" do
- astdump(parse("file { 'title1': path => 'x'; 'title2': path => 'y'}")).should == [
- "(resource file",
- " ('title1'",
- " (path => 'x'))",
- " ('title2'",
- " (path => 'y')))",
- ].join("\n")
- end
- end
-
- context "When transforming resource defaults" do
- it "File { }" do
- astdump(parse("File { }")).should == "(resource-defaults file)"
- end
-
- it "File { mode => 0777 }" do
- astdump(parse("File { mode => 0777}")).should == [
- "(resource-defaults file",
- " (mode => 0777))"
- ].join("\n")
- end
- end
-
- context "When transforming resource override" do
- it "File['x'] { }" do
- astdump(parse("File['x'] { }")).should == "(override (slice file 'x'))"
- end
-
- it "File['x'] { x => 1 }" do
- astdump(parse("File['x'] { x => 1}")).should == "(override (slice file 'x')\n (x => 1))"
- end
-
- it "File['x', 'y'] { x => 1 }" do
- astdump(parse("File['x', 'y'] { x => 1}")).should == "(override (slice file ('x' 'y'))\n (x => 1))"
- end
-
- it "File['x'] { x => 1, y => 2 }" do
- astdump(parse("File['x'] { x => 1, y=> 2}")).should == "(override (slice file 'x')\n (x => 1)\n (y => 2))"
- end
-
- it "File['x'] { x +> 1 }" do
- astdump(parse("File['x'] { x +> 1}")).should == "(override (slice file 'x')\n (x +> 1))"
- end
- end
-
- context "When transforming virtual and exported resources" do
- it "@@file { 'title': }" do
- astdump(parse("@@file { 'title': }")).should == "(exported-resource file\n ('title'))"
- end
-
- it "@file { 'title': }" do
- astdump(parse("@file { 'title': }")).should == "(virtual-resource file\n ('title'))"
- end
- end
-
- context "When transforming class resource" do
- it "class { 'cname': }" do
- astdump(parse("class { 'cname': }")).should == [
- "(resource class",
- " ('cname'))"
- ].join("\n")
- end
-
- it "class { 'cname': x => 1, y => 2}" do
- astdump(parse("class { 'cname': x => 1, y => 2}")).should == [
- "(resource class",
- " ('cname'",
- " (x => 1)",
- " (y => 2)))"
- ].join("\n")
- end
-
- it "class { 'cname1': x => 1; 'cname2': y => 2}" do
- astdump(parse("class { 'cname1': x => 1; 'cname2': y => 2}")).should == [
- "(resource class",
- " ('cname1'",
- " (x => 1))",
- " ('cname2'",
- " (y => 2)))",
- ].join("\n")
- end
- end
-
- context "When transforming Relationships" do
- it "File[a] -> File[b]" do
- astdump(parse("File[a] -> File[b]")).should == "(-> (slice file a) (slice file b))"
- end
-
- it "File[a] <- File[b]" do
- astdump(parse("File[a] <- File[b]")).should == "(<- (slice file a) (slice file b))"
- end
-
- it "File[a] ~> File[b]" do
- astdump(parse("File[a] ~> File[b]")).should == "(~> (slice file a) (slice file b))"
- end
-
- it "File[a] <~ File[b]" do
- astdump(parse("File[a] <~ File[b]")).should == "(<~ (slice file a) (slice file b))"
- end
-
- it "Should chain relationships" do
- astdump(parse("a -> b -> c")).should ==
- "(-> (-> a b) c)"
- end
-
- it "Should chain relationships" do
- astdump(parse("File[a] -> File[b] ~> File[c] <- File[d] <~ File[e]")).should ==
- "(<~ (<- (~> (-> (slice file a) (slice file b)) (slice file c)) (slice file d)) (slice file e))"
- end
- end
-
- context "When transforming collection" do
- context "of virtual resources" do
- it "File <| |>" do
- astdump(parse("File <| |>")).should == "(collect file\n (<| |>))"
- end
- end
-
- context "of exported resources" do
- it "File <<| |>>" do
- astdump(parse("File <<| |>>")).should == "(collect file\n (<<| |>>))"
- end
- end
-
- context "queries are parsed with correct precedence" do
- it "File <| tag == 'foo' |>" do
- astdump(parse("File <| tag == 'foo' |>")).should == "(collect file\n (<| |> (== tag 'foo')))"
- end
-
- it "File <| tag == 'foo' and mode != 0777 |>" do
- astdump(parse("File <| tag == 'foo' and mode != 0777 |>")).should == "(collect file\n (<| |> (&& (== tag 'foo') (!= mode 0777))))"
- end
-
- it "File <| tag == 'foo' or mode != 0777 |>" do
- astdump(parse("File <| tag == 'foo' or mode != 0777 |>")).should == "(collect file\n (<| |> (|| (== tag 'foo') (!= mode 0777))))"
- end
-
- it "File <| tag == 'foo' or tag == 'bar' and mode != 0777 |>" do
- astdump(parse("File <| tag == 'foo' or tag == 'bar' and mode != 0777 |>")).should ==
- "(collect file\n (<| |> (|| (== tag 'foo') (&& (== tag 'bar') (!= mode 0777)))))"
- end
-
- it "File <| (tag == 'foo' or tag == 'bar') and mode != 0777 |>" do
- astdump(parse("File <| (tag == 'foo' or tag == 'bar') and mode != 0777 |>")).should ==
- "(collect file\n (<| |> (&& (|| (== tag 'foo') (== tag 'bar')) (!= mode 0777))))"
- end
- end
- end
-end
diff --git a/spec/unit/pops/types/type_calculator_spec.rb b/spec/unit/pops/types/type_calculator_spec.rb
index 3c4ea77ca..0bd475263 100644
--- a/spec/unit/pops/types/type_calculator_spec.rb
+++ b/spec/unit/pops/types/type_calculator_spec.rb
@@ -1,1603 +1,1800 @@
require 'spec_helper'
require 'puppet/pops'
describe 'The type calculator' do
let(:calculator) { Puppet::Pops::Types::TypeCalculator.new() }
def range_t(from, to)
t = Puppet::Pops::Types::PIntegerType.new
t.from = from
t.to = to
t
end
def pattern_t(*patterns)
Puppet::Pops::Types::TypeFactory.pattern(*patterns)
end
def regexp_t(pattern)
Puppet::Pops::Types::TypeFactory.regexp(pattern)
end
def string_t(*strings)
Puppet::Pops::Types::TypeFactory.string(*strings)
end
def callable_t(*params)
Puppet::Pops::Types::TypeFactory.callable(*params)
end
def all_callables_t(*params)
Puppet::Pops::Types::TypeFactory.all_callables()
end
def with_block_t(callable_t, *params)
Puppet::Pops::Types::TypeFactory.with_block(callable_t, *params)
end
def with_optional_block_t(callable_t, *params)
Puppet::Pops::Types::TypeFactory.with_optional_block(callable_t, *params)
end
def enum_t(*strings)
Puppet::Pops::Types::TypeFactory.enum(*strings)
end
def variant_t(*types)
Puppet::Pops::Types::TypeFactory.variant(*types)
end
def integer_t()
Puppet::Pops::Types::TypeFactory.integer()
end
def array_t(t)
Puppet::Pops::Types::TypeFactory.array_of(t)
end
def hash_t(k,v)
Puppet::Pops::Types::TypeFactory.hash_of(v, k)
end
def data_t()
Puppet::Pops::Types::TypeFactory.data()
end
def factory()
Puppet::Pops::Types::TypeFactory
end
def collection_t()
Puppet::Pops::Types::TypeFactory.collection()
end
def tuple_t(*types)
Puppet::Pops::Types::TypeFactory.tuple(*types)
end
def struct_t(type_hash)
Puppet::Pops::Types::TypeFactory.struct(type_hash)
end
- def optional_object_t
- Puppet::Pops::Types::TypeFactory.optional_object()
+ def object_t
+ Puppet::Pops::Types::TypeFactory.any()
+ end
+
+ def unit_t
+ # Cannot be created via factory, the type is private to the type system
+ Puppet::Pops::Types::PUnitType.new
end
def types
Puppet::Pops::Types
end
shared_context "types_setup" do
+ # Do not include the special type Unit in this list
def all_types
- [ Puppet::Pops::Types::PObjectType,
+ [ Puppet::Pops::Types::PAnyType,
Puppet::Pops::Types::PNilType,
Puppet::Pops::Types::PDataType,
Puppet::Pops::Types::PScalarType,
Puppet::Pops::Types::PStringType,
Puppet::Pops::Types::PNumericType,
Puppet::Pops::Types::PIntegerType,
Puppet::Pops::Types::PFloatType,
Puppet::Pops::Types::PRegexpType,
Puppet::Pops::Types::PBooleanType,
Puppet::Pops::Types::PCollectionType,
Puppet::Pops::Types::PArrayType,
Puppet::Pops::Types::PHashType,
- Puppet::Pops::Types::PRubyType,
+ Puppet::Pops::Types::PRuntimeType,
Puppet::Pops::Types::PHostClassType,
Puppet::Pops::Types::PResourceType,
Puppet::Pops::Types::PPatternType,
Puppet::Pops::Types::PEnumType,
Puppet::Pops::Types::PVariantType,
Puppet::Pops::Types::PStructType,
Puppet::Pops::Types::PTupleType,
Puppet::Pops::Types::PCallableType,
+ Puppet::Pops::Types::PType,
+ Puppet::Pops::Types::POptionalType,
+ Puppet::Pops::Types::PDefaultType,
]
end
def scalar_types
# PVariantType is also scalar, if its types are all Scalar
[
Puppet::Pops::Types::PScalarType,
Puppet::Pops::Types::PStringType,
Puppet::Pops::Types::PNumericType,
Puppet::Pops::Types::PIntegerType,
Puppet::Pops::Types::PFloatType,
Puppet::Pops::Types::PRegexpType,
Puppet::Pops::Types::PBooleanType,
Puppet::Pops::Types::PPatternType,
Puppet::Pops::Types::PEnumType,
]
end
def numeric_types
# PVariantType is also numeric, if its types are all numeric
[
Puppet::Pops::Types::PNumericType,
Puppet::Pops::Types::PIntegerType,
Puppet::Pops::Types::PFloatType,
]
end
def string_types
# PVariantType is also string type, if its types are all compatible
[
Puppet::Pops::Types::PStringType,
Puppet::Pops::Types::PPatternType,
Puppet::Pops::Types::PEnumType,
]
end
def collection_types
# PVariantType is also a collection type, if its types are all collections
[
Puppet::Pops::Types::PCollectionType,
Puppet::Pops::Types::PHashType,
Puppet::Pops::Types::PArrayType,
Puppet::Pops::Types::PStructType,
Puppet::Pops::Types::PTupleType,
]
end
def data_compatible_types
result = scalar_types
result << Puppet::Pops::Types::PDataType
result << array_t(types::PDataType.new)
result << types::TypeFactory.hash_of_data
result << Puppet::Pops::Types::PNilType
tmp = tuple_t(types::PDataType.new)
result << (tmp)
tmp.size_type = range_t(0, nil)
result
end
def type_from_class(c)
c.is_a?(Class) ? c.new : c
end
end
context 'when inferring ruby' do
it 'fixnum translates to PIntegerType' do
calculator.infer(1).class.should == Puppet::Pops::Types::PIntegerType
end
it 'large fixnum (or bignum depending on architecture) translates to PIntegerType' do
calculator.infer(2**33).class.should == Puppet::Pops::Types::PIntegerType
end
it 'float translates to PFloatType' do
calculator.infer(1.3).class.should == Puppet::Pops::Types::PFloatType
end
it 'string translates to PStringType' do
calculator.infer('foo').class.should == Puppet::Pops::Types::PStringType
end
it 'inferred string type knows the string value' do
t = calculator.infer('foo')
t.class.should == Puppet::Pops::Types::PStringType
t.values.should == ['foo']
end
it 'boolean true translates to PBooleanType' do
calculator.infer(true).class.should == Puppet::Pops::Types::PBooleanType
end
it 'boolean false translates to PBooleanType' do
calculator.infer(false).class.should == Puppet::Pops::Types::PBooleanType
end
it 'regexp translates to PRegexpType' do
calculator.infer(/^a regular expression$/).class.should == Puppet::Pops::Types::PRegexpType
end
it 'nil translates to PNilType' do
calculator.infer(nil).class.should == Puppet::Pops::Types::PNilType
end
- it ':undef translates to PNilType' do
- calculator.infer(:undef).class.should == Puppet::Pops::Types::PNilType
+ it ':undef translates to PRuntimeType' do
+ calculator.infer(:undef).class.should == Puppet::Pops::Types::PRuntimeType
end
- it 'an instance of class Foo translates to PRubyType[Foo]' do
+ it 'an instance of class Foo translates to PRuntimeType[ruby, Foo]' do
class Foo
end
t = calculator.infer(Foo.new)
- t.class.should == Puppet::Pops::Types::PRubyType
- t.ruby_class.should == 'Foo'
+ t.class.should == Puppet::Pops::Types::PRuntimeType
+ t.runtime.should == :ruby
+ t.runtime_type_name.should == 'Foo'
end
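# That is, an arbitrary Ruby object now infers to a runtime-parameterized
# type rather than the old Ruby[Foo]; in type notation this reads roughly as
# Runtime[ruby, 'Foo'].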
context 'array' do
it 'translates to PArrayType' do
calculator.infer([1,2]).class.should == Puppet::Pops::Types::PArrayType
end
it 'with fixnum values translates to PArrayType[PIntegerType]' do
calculator.infer([1,2]).element_type.class.should == Puppet::Pops::Types::PIntegerType
end
it 'with 32 and 64 bit integer values translates to PArrayType[PIntegerType]' do
calculator.infer([1,2**33]).element_type.class.should == Puppet::Pops::Types::PIntegerType
end
it 'Range of integer values are computed' do
t = calculator.infer([-3,0,42]).element_type
t.class.should == Puppet::Pops::Types::PIntegerType
t.from.should == -3
t.to.should == 42
end
it "Compound string values are computed" do
t = calculator.infer(['a','b', 'c']).element_type
t.class.should == Puppet::Pops::Types::PStringType
t.values.should == ['a', 'b', 'c']
end
it 'with fixnum and float values translates to PArrayType[PNumericType]' do
calculator.infer([1,2.0]).element_type.class.should == Puppet::Pops::Types::PNumericType
end
it 'with fixnum and string values translates to PArrayType[PScalarType]' do
calculator.infer([1,'two']).element_type.class.should == Puppet::Pops::Types::PScalarType
end
it 'with float and string values translates to PArrayType[PScalarType]' do
calculator.infer([1.0,'two']).element_type.class.should == Puppet::Pops::Types::PScalarType
end
it 'with fixnum, float, and string values translates to PArrayType[PScalarType]' do
calculator.infer([1, 2.0,'two']).element_type.class.should == Puppet::Pops::Types::PScalarType
end
it 'with fixnum and regexp values translates to PArrayType[PScalarType]' do
calculator.infer([1, /two/]).element_type.class.should == Puppet::Pops::Types::PScalarType
end
it 'with string and regexp values translates to PArrayType[PScalarType]' do
calculator.infer(['one', /two/]).element_type.class.should == Puppet::Pops::Types::PScalarType
end
- it 'with string and symbol values translates to PArrayType[PObjectType]' do
- calculator.infer(['one', :two]).element_type.class.should == Puppet::Pops::Types::PObjectType
+ it 'with string and symbol values translates to PArrayType[PAnyType]' do
+ calculator.infer(['one', :two]).element_type.class.should == Puppet::Pops::Types::PAnyType
end
it 'with fixnum and nil values translates to PArrayType[PIntegerType]' do
calculator.infer([1, nil]).element_type.class.should == Puppet::Pops::Types::PIntegerType
end
it 'with arrays of string values translates to PArrayType[PArrayType[PStringType]]' do
et = calculator.infer([['first' 'array'], ['second','array']])
et.class.should == Puppet::Pops::Types::PArrayType
et = et.element_type
et.class.should == Puppet::Pops::Types::PArrayType
et = et.element_type
et.class.should == Puppet::Pops::Types::PStringType
end
it 'with array of string values and array of fixnums translates to PArrayType[PArrayType[PScalarType]]' do
et = calculator.infer([['first' 'array'], [1,2]])
et.class.should == Puppet::Pops::Types::PArrayType
et = et.element_type
et.class.should == Puppet::Pops::Types::PArrayType
et = et.element_type
et.class.should == Puppet::Pops::Types::PScalarType
end
it 'with hashes of string values translates to PArrayType[PHashType[PStringType]]' do
et = calculator.infer([{:first => 'first', :second => 'second' }, {:first => 'first', :second => 'second' }])
et.class.should == Puppet::Pops::Types::PArrayType
et = et.element_type
et.class.should == Puppet::Pops::Types::PHashType
et = et.element_type
et.class.should == Puppet::Pops::Types::PStringType
end
it 'with hash of string values and hash of fixnums translates to PArrayType[PHashType[PScalarType]]' do
et = calculator.infer([{:first => 'first', :second => 'second' }, {:first => 1, :second => 2 }])
et.class.should == Puppet::Pops::Types::PArrayType
et = et.element_type
et.class.should == Puppet::Pops::Types::PHashType
et = et.element_type
et.class.should == Puppet::Pops::Types::PScalarType
end
end
context 'hash' do
it 'translates to PHashType' do
calculator.infer({:first => 1, :second => 2}).class.should == Puppet::Pops::Types::PHashType
end
- it 'with symbolic keys translates to PHashType[PRubyType[Symbol],value]' do
+ it 'with symbolic keys translates to PHashType[PRuntimeType[ruby, Symbol], value]' do
k = calculator.infer({:first => 1, :second => 2}).key_type
- k.class.should == Puppet::Pops::Types::PRubyType
- k.ruby_class.should == 'Symbol'
+ k.class.should == Puppet::Pops::Types::PRuntimeType
+ k.runtime.should == :ruby
+ k.runtime_type_name.should == 'Symbol'
end
- it 'with string keys translates to PHashType[PStringType,value]' do
+ it 'with string keys translates to PHashType[PStringType, value]' do
calculator.infer({'first' => 1, 'second' => 2}).key_type.class.should == Puppet::Pops::Types::PStringType
end
- it 'with fixnum values translates to PHashType[key,PIntegerType]' do
+ it 'with fixnum values translates to PHashType[key, PIntegerType]' do
calculator.infer({:first => 1, :second => 2}).element_type.class.should == Puppet::Pops::Types::PIntegerType
end
end
end
context 'patterns' do
it "constructs a PPatternType" do
t = pattern_t('a(b)c')
t.class.should == Puppet::Pops::Types::PPatternType
t.patterns.size.should == 1
t.patterns[0].class.should == Puppet::Pops::Types::PRegexpType
t.patterns[0].pattern.should == 'a(b)c'
t.patterns[0].regexp.match('abc')[1].should == 'b'
end
it "constructs a PStringType with multiple strings" do
t = string_t('a', 'b', 'c', 'abc')
t.values.should == ['a', 'b', 'c', 'abc']
end
end
# Deal with cases not covered by computing common type
context 'when computing common type' do
it 'computes given resource type commonality' do
r1 = Puppet::Pops::Types::PResourceType.new()
r1.type_name = 'File'
r2 = Puppet::Pops::Types::PResourceType.new()
r2.type_name = 'File'
calculator.string(calculator.common_type(r1, r2)).should == "File"
r2 = Puppet::Pops::Types::PResourceType.new()
r2.type_name = 'File'
r2.title = '/tmp/foo'
calculator.string(calculator.common_type(r1, r2)).should == "File"
r1 = Puppet::Pops::Types::PResourceType.new()
r1.type_name = 'File'
r1.title = '/tmp/foo'
calculator.string(calculator.common_type(r1, r2)).should == "File['/tmp/foo']"
r1 = Puppet::Pops::Types::PResourceType.new()
r1.type_name = 'File'
r1.title = '/tmp/bar'
calculator.string(calculator.common_type(r1, r2)).should == "File"
r2 = Puppet::Pops::Types::PResourceType.new()
r2.type_name = 'Package'
r2.title = 'apache'
calculator.string(calculator.common_type(r1, r2)).should == "Resource"
end
it 'computes given hostclass type commonality' do
r1 = Puppet::Pops::Types::PHostClassType.new()
r1.class_name = 'foo'
r2 = Puppet::Pops::Types::PHostClassType.new()
r2.class_name = 'foo'
calculator.string(calculator.common_type(r1, r2)).should == "Class[foo]"
r2 = Puppet::Pops::Types::PHostClassType.new()
r2.class_name = 'bar'
calculator.string(calculator.common_type(r1, r2)).should == "Class"
r2 = Puppet::Pops::Types::PHostClassType.new()
calculator.string(calculator.common_type(r1, r2)).should == "Class"
r1 = Puppet::Pops::Types::PHostClassType.new()
calculator.string(calculator.common_type(r1, r2)).should == "Class"
end
it 'computes pattern commonality' do
t1 = pattern_t('abc')
t2 = pattern_t('xyz')
common_t = calculator.common_type(t1,t2)
common_t.class.should == Puppet::Pops::Types::PPatternType
common_t.patterns.map { |pr| pr.pattern }.should == ['abc', 'xyz']
calculator.string(common_t).should == "Pattern[/abc/, /xyz/]"
end
it 'computes enum commonality to value set sum' do
t1 = enum_t('a', 'b', 'c')
t2 = enum_t('x', 'y', 'z')
common_t = calculator.common_type(t1, t2)
common_t.should == enum_t('a', 'b', 'c', 'x', 'y', 'z')
end
it 'computes variant commonality to type union where added types are not sub-types' do
a_t1 = integer_t()
a_t2 = enum_t('b')
v_a = variant_t(a_t1, a_t2)
b_t1 = enum_t('a')
v_b = variant_t(b_t1)
common_t = calculator.common_type(v_a, v_b)
common_t.class.should == Puppet::Pops::Types::PVariantType
Set.new(common_t.types).should == Set.new([a_t1, a_t2, b_t1])
end
it 'computes variant commonality to type union where added types are sub-types' do
a_t1 = integer_t()
a_t2 = string_t()
v_a = variant_t(a_t1, a_t2)
b_t1 = enum_t('a')
v_b = variant_t(b_t1)
common_t = calculator.common_type(v_a, v_b)
common_t.class.should == Puppet::Pops::Types::PVariantType
Set.new(common_t.types).should == Set.new([a_t1, a_t2])
end
context "of callables" do
it 'incompatible instances => generic callable' do
t1 = callable_t(String)
t2 = callable_t(Integer)
common_t = calculator.common_type(t1, t2)
expect(common_t.class).to be(Puppet::Pops::Types::PCallableType)
expect(common_t.param_types).to be_nil
expect(common_t.block_type).to be_nil
end
- it 'compatible instances => the least specific' do
+ it 'compatible instances => the most specific' do
t1 = callable_t(String)
scalar_t = Puppet::Pops::Types::PScalarType.new
t2 = callable_t(scalar_t)
common_t = calculator.common_type(t1, t2)
expect(common_t.class).to be(Puppet::Pops::Types::PCallableType)
expect(common_t.param_types.class).to be(Puppet::Pops::Types::PTupleType)
- expect(common_t.param_types.types).to eql([scalar_t])
+ expect(common_t.param_types.types).to eql([string_t])
expect(common_t.block_type).to be_nil
end
it 'block_type is included in the check (incompatible block)' do
t1 = with_block_t(callable_t(String), String)
t2 = with_block_t(callable_t(String), Integer)
common_t = calculator.common_type(t1, t2)
expect(common_t.class).to be(Puppet::Pops::Types::PCallableType)
expect(common_t.param_types).to be_nil
expect(common_t.block_type).to be_nil
end
it 'block_type is included in the check (compatible block)' do
t1 = with_block_t(callable_t(String), String)
scalar_t = Puppet::Pops::Types::PScalarType.new
t2 = with_block_t(callable_t(String), scalar_t)
common_t = calculator.common_type(t1, t2)
expect(common_t.param_types.class).to be(Puppet::Pops::Types::PTupleType)
expect(common_t.block_type).to eql(callable_t(scalar_t))
end
end
end
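# Illustrative sketch, assuming the factory helpers used in this spec: common_type
# generalizes two types to their nearest shared supertype, e.g.:
#   calculator.common_type(enum_t('a', 'b'), enum_t('x', 'y'))   # => Enum['a', 'b', 'x', 'y']
#   calculator.common_type(string_t, integer_t).class            # => Puppet::Pops::Types::PScalarType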
context 'computes assignability' do
include_context "types_setup"
- context "for Object, such that" do
- it 'all types are assignable to Object' do
- t = Puppet::Pops::Types::PObjectType.new()
+ context 'for Unit, such that' do
+ it 'all types are assignable to Unit' do
+ t = Puppet::Pops::Types::PUnitType.new()
+ all_types.each { |t2| t2.new.should be_assignable_to(t) }
+ end
+
+ it 'Unit is assignable to all other types' do
+ t = Puppet::Pops::Types::PUnitType.new()
+ all_types.each { |t2| t.should be_assignable_to(t2.new) }
+ end
+
+ it 'Unit is assignable to Unit' do
+ t = Puppet::Pops::Types::PUnitType.new()
+ t2 = Puppet::Pops::Types::PUnitType.new()
+ t.should be_assignable_to(t2)
+ end
+ end
+
+ context "for Any, such that" do
+ it 'all types are assignable to Any' do
+ t = Puppet::Pops::Types::PAnyType.new()
all_types.each { |t2| t2.new.should be_assignable_to(t) }
end
- it 'Object is not assignable to anything but Object' do
- tested_types = all_types() - [Puppet::Pops::Types::PObjectType]
- t = Puppet::Pops::Types::PObjectType.new()
+ it 'Any is not assignable to anything but Any' do
+ tested_types = all_types() - [Puppet::Pops::Types::PAnyType]
+ t = Puppet::Pops::Types::PAnyType.new()
tested_types.each { |t2| t.should_not be_assignable_to(t2.new) }
end
end
context "for Data, such that" do
it 'all scalars + array and hash are assignable to Data' do
t = Puppet::Pops::Types::PDataType.new()
data_compatible_types.each { |t2|
type_from_class(t2).should be_assignable_to(t)
}
end
it 'a Variant of scalar, hash, or array is assignable to Data' do
t = Puppet::Pops::Types::PDataType.new()
data_compatible_types.each { |t2| variant_t(type_from_class(t2)).should be_assignable_to(t) }
end
it 'Data is not assignable to any of its subtypes' do
t = Puppet::Pops::Types::PDataType.new()
types_to_test = data_compatible_types - [Puppet::Pops::Types::PDataType]
types_to_test.each {|t2| t.should_not be_assignable_to(type_from_class(t2)) }
end
it 'Data is not assignable to a Variant of Data subtype' do
t = Puppet::Pops::Types::PDataType.new()
types_to_test = data_compatible_types - [Puppet::Pops::Types::PDataType]
types_to_test.each { |t2| t.should_not be_assignable_to(variant_t(type_from_class(t2))) }
end
it 'Data is not assignable to any disjunct type' do
- tested_types = all_types - [Puppet::Pops::Types::PObjectType, Puppet::Pops::Types::PDataType] - scalar_types
+ tested_types = all_types - [Puppet::Pops::Types::PAnyType, Puppet::Pops::Types::PDataType] - scalar_types
t = Puppet::Pops::Types::PDataType.new()
tested_types.each {|t2| t.should_not be_assignable_to(t2.new) }
end
end
context "for Scalar, such that" do
it "all scalars are assignable to Scalar" do
t = Puppet::Pops::Types::PScalarType.new()
scalar_types.each {|t2| t2.new.should be_assignable_to(t) }
end
it 'Scalar is not assignable to any of its subtypes' do
t = Puppet::Pops::Types::PScalarType.new()
types_to_test = scalar_types - [Puppet::Pops::Types::PScalarType]
types_to_test.each {|t2| t.should_not be_assignable_to(t2.new) }
end
it 'Scalar is not assignable to any disjunct type' do
- tested_types = all_types - [Puppet::Pops::Types::PObjectType, Puppet::Pops::Types::PDataType] - scalar_types
+ tested_types = all_types - [Puppet::Pops::Types::PAnyType, Puppet::Pops::Types::PDataType] - scalar_types
t = Puppet::Pops::Types::PScalarType.new()
tested_types.each {|t2| t.should_not be_assignable_to(t2.new) }
end
end
context "for Numeric, such that" do
it "all numerics are assignable to Numeric" do
t = Puppet::Pops::Types::PNumericType.new()
numeric_types.each {|t2| t2.new.should be_assignable_to(t) }
end
it 'Numeric is not assignable to any of its subtypes' do
t = Puppet::Pops::Types::PNumericType.new()
types_to_test = numeric_types - [Puppet::Pops::Types::PNumericType]
types_to_test.each {|t2| t.should_not be_assignable_to(t2.new) }
end
it 'Numeric is not assignable to any disjunct type' do
tested_types = all_types - [
- Puppet::Pops::Types::PObjectType,
+ Puppet::Pops::Types::PAnyType,
Puppet::Pops::Types::PDataType,
Puppet::Pops::Types::PScalarType,
] - numeric_types
t = Puppet::Pops::Types::PNumericType.new()
tested_types.each {|t2| t.should_not be_assignable_to(t2.new) }
end
end
context "for Collection, such that" do
it "all collections are assignable to Collection" do
t = Puppet::Pops::Types::PCollectionType.new()
collection_types.each {|t2| t2.new.should be_assignable_to(t) }
end
it 'Collection is not assignable to any of its subtypes' do
t = Puppet::Pops::Types::PCollectionType.new()
types_to_test = collection_types - [Puppet::Pops::Types::PCollectionType]
types_to_test.each {|t2| t.should_not be_assignable_to(t2.new) }
end
it 'Collection is not assignable to any disjunct type' do
- tested_types = all_types - [Puppet::Pops::Types::PObjectType] - collection_types
+ tested_types = all_types - [Puppet::Pops::Types::PAnyType] - collection_types
t = Puppet::Pops::Types::PCollectionType.new()
tested_types.each {|t2| t.should_not be_assignable_to(t2.new) }
end
end
context "for Array, such that" do
it "Array is not assignable to non Array based Collection type" do
t = Puppet::Pops::Types::PArrayType.new()
tested_types = collection_types - [
Puppet::Pops::Types::PCollectionType,
Puppet::Pops::Types::PArrayType,
Puppet::Pops::Types::PTupleType]
tested_types.each {|t2| t.should_not be_assignable_to(t2.new) }
end
it 'Array is not assignable to any disjunct type' do
tested_types = all_types - [
- Puppet::Pops::Types::PObjectType,
+ Puppet::Pops::Types::PAnyType,
Puppet::Pops::Types::PDataType] - collection_types
t = Puppet::Pops::Types::PArrayType.new()
tested_types.each {|t2| t.should_not be_assignable_to(t2.new) }
end
end
context "for Hash, such that" do
it "Hash is not assignable to any other Collection type" do
t = Puppet::Pops::Types::PHashType.new()
tested_types = collection_types - [
Puppet::Pops::Types::PCollectionType,
Puppet::Pops::Types::PStructType,
Puppet::Pops::Types::PHashType]
tested_types.each {|t2| t.should_not be_assignable_to(t2.new) }
end
it 'Hash is not assignable to any disjunct type' do
tested_types = all_types - [
- Puppet::Pops::Types::PObjectType,
+ Puppet::Pops::Types::PAnyType,
Puppet::Pops::Types::PDataType] - collection_types
t = Puppet::Pops::Types::PHashType.new()
tested_types.each {|t2| t.should_not be_assignable_to(t2.new) }
end
end
context "for Tuple, such that" do
it "Tuple is not assignable to any other non Array based Collection type" do
t = Puppet::Pops::Types::PTupleType.new()
tested_types = collection_types - [
Puppet::Pops::Types::PCollectionType,
Puppet::Pops::Types::PTupleType,
Puppet::Pops::Types::PArrayType]
tested_types.each {|t2| t.should_not be_assignable_to(t2.new) }
end
it 'Tuple is not assignable to any disjunct type' do
tested_types = all_types - [
- Puppet::Pops::Types::PObjectType,
+ Puppet::Pops::Types::PAnyType,
Puppet::Pops::Types::PDataType] - collection_types
t = Puppet::Pops::Types::PTupleType.new()
tested_types.each {|t2| t.should_not be_assignable_to(t2.new) }
end
end
context "for Struct, such that" do
it "Struct is not assignable to any other non Hashed based Collection type" do
t = Puppet::Pops::Types::PStructType.new()
tested_types = collection_types - [
Puppet::Pops::Types::PCollectionType,
Puppet::Pops::Types::PStructType,
Puppet::Pops::Types::PHashType]
tested_types.each {|t2| t.should_not be_assignable_to(t2.new) }
end
it 'Struct is not assignable to any disjunct type' do
tested_types = all_types - [
- Puppet::Pops::Types::PObjectType,
+ Puppet::Pops::Types::PAnyType,
Puppet::Pops::Types::PDataType] - collection_types
t = Puppet::Pops::Types::PStructType.new()
tested_types.each {|t2| t.should_not be_assignable_to(t2.new) }
end
end
context "for Callable, such that" do
it "Callable is not assignable to any disjunct type" do
t = Puppet::Pops::Types::PCallableType.new()
tested_types = all_types - [
Puppet::Pops::Types::PCallableType,
- Puppet::Pops::Types::PObjectType]
+ Puppet::Pops::Types::PAnyType]
tested_types.each {|t2| t.should_not be_assignable_to(t2.new) }
end
end
it 'should recognize mapped ruby types' do
{ Integer => Puppet::Pops::Types::PIntegerType.new,
Fixnum => Puppet::Pops::Types::PIntegerType.new,
Bignum => Puppet::Pops::Types::PIntegerType.new,
Float => Puppet::Pops::Types::PFloatType.new,
Numeric => Puppet::Pops::Types::PNumericType.new,
NilClass => Puppet::Pops::Types::PNilType.new,
TrueClass => Puppet::Pops::Types::PBooleanType.new,
FalseClass => Puppet::Pops::Types::PBooleanType.new,
String => Puppet::Pops::Types::PStringType.new,
Regexp => Puppet::Pops::Types::PRegexpType.new,
Array => Puppet::Pops::Types::TypeFactory.array_of_data(),
Hash => Puppet::Pops::Types::TypeFactory.hash_of_data()
}.each do |ruby_type, puppet_type |
ruby_type.should be_assignable_to(puppet_type)
end
end
context 'when dealing with integer ranges' do
it 'should accept an equal range' do
calculator.assignable?(range_t(2,5), range_t(2,5)).should == true
end
it 'should accept an equal reverse range' do
calculator.assignable?(range_t(2,5), range_t(5,2)).should == true
end
it 'should accept a narrower range' do
calculator.assignable?(range_t(2,10), range_t(3,5)).should == true
end
it 'should accept a narrower reverse range' do
calculator.assignable?(range_t(2,10), range_t(5,3)).should == true
end
it 'should reject a wider range' do
calculator.assignable?(range_t(3,5), range_t(2,10)).should == false
end
it 'should reject a wider reverse range' do
calculator.assignable?(range_t(3,5), range_t(10,2)).should == false
end
it 'should reject a partially overlapping range' do
calculator.assignable?(range_t(3,5), range_t(2,4)).should == false
calculator.assignable?(range_t(3,5), range_t(4,6)).should == false
end
it 'should reject a partially overlapping reverse range' do
calculator.assignable?(range_t(3,5), range_t(4,2)).should == false
calculator.assignable?(range_t(3,5), range_t(6,4)).should == false
end
end
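# Illustrative sketch, assuming the range_t helper used in this spec: assignable?(t1, t2)
# holds when every value of t2 is also a value of t1, so a wider integer range accepts
# a narrower one but not the reverse, e.g.:
#   calculator.assignable?(range_t(2, 10), range_t(3, 5))   # => true
#   calculator.assignable?(range_t(3, 5), range_t(2, 10))   # => false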
context 'when dealing with patterns' do
it 'should accept a string matching a pattern' do
p_t = pattern_t('abc')
p_s = string_t('XabcY')
calculator.assignable?(p_t, p_s).should == true
end
it 'should accept a regexp matching a pattern' do
p_t = pattern_t(/abc/)
p_s = string_t('XabcY')
calculator.assignable?(p_t, p_s).should == true
end
it 'should accept a pattern matching a pattern' do
p_t = pattern_t(pattern_t('abc'))
p_s = string_t('XabcY')
calculator.assignable?(p_t, p_s).should == true
end
it 'should accept a regexp type matching a pattern' do
p_t = pattern_t(regexp_t('abc'))
p_s = string_t('XabcY')
calculator.assignable?(p_t, p_s).should == true
end
it 'should accept a string matching all patterns' do
p_t = pattern_t('abc', 'ab', 'c')
p_s = string_t('XabcY')
calculator.assignable?(p_t, p_s).should == true
end
it 'should accept multiple strings if they all match any patterns' do
p_t = pattern_t('X', 'Y', 'abc')
p_s = string_t('Xa', 'aY', 'abc')
calculator.assignable?(p_t, p_s).should == true
end
it 'should reject a string not matching any patterns' do
p_t = pattern_t('abc', 'ab', 'c')
p_s = string_t('XqqqY')
calculator.assignable?(p_t, p_s).should == false
end
it 'should reject multiple strings if not all match any patterns' do
p_t = pattern_t('abc', 'ab', 'c', 'q')
p_s = string_t('X', 'Y', 'Z')
calculator.assignable?(p_t, p_s).should == false
end
it 'should accept enum matching patterns as instanceof' do
enum = enum_t('XS', 'S', 'M', 'L', 'XL', 'XXL')
pattern = pattern_t('S', 'M', 'L')
calculator.assignable?(pattern, enum).should == true
end
+ it 'pattern should accept a variant where all variants are acceptable' do
+ pattern = pattern_t(/^\w+$/)
+ calculator.assignable?(pattern, variant_t(string_t('a'), string_t('b'))).should == true
+ end
+
+ end
+
+ context 'when dealing with enums' do
+ it 'should accept a string with matching content' do
+ calculator.assignable?(enum_t('a', 'b'), string_t('a')).should == true
+ calculator.assignable?(enum_t('a', 'b'), string_t('b')).should == true
+ calculator.assignable?(enum_t('a', 'b'), string_t('c')).should == false
+ end
+
+ it 'should accept an enum with matching enum' do
+ calculator.assignable?(enum_t('a', 'b'), enum_t('a', 'b')).should == true
+ calculator.assignable?(enum_t('a', 'b'), enum_t('a')).should == true
+ calculator.assignable?(enum_t('a', 'b'), enum_t('c')).should == false
+ end
+
+ it 'enum should accept a variant where all variants are acceptable' do
+ enum = enum_t('a', 'b')
+ calculator.assignable?(enum, variant_t(string_t('a'), string_t('b'))).should == true
+ end
end
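# Illustrative sketch, assuming the enum_t and string_t helpers used in this spec:
# an Enum accepts a String type only when every one of its string values is among
# the enum's values, e.g.:
#   calculator.assignable?(enum_t('a', 'b'), string_t('a'))   # => true
#   calculator.assignable?(enum_t('a', 'b'), string_t('c'))   # => false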
context 'when dealing with tuples' do
+ it 'matches empty tuples' do
+ tuple1 = tuple_t()
+ tuple2 = tuple_t()
+
+ calculator.assignable?(tuple1, tuple2).should == true
+ calculator.assignable?(tuple2, tuple1).should == true
+ end
+
+ it 'accepts an empty tuple as assignable to a tuple with a min size of 0' do
+ tuple1 = tuple_t(Object)
+ factory.constrain_size(tuple1, 0, :default)
+ tuple2 = tuple_t()
+
+ calculator.assignable?(tuple1, tuple2).should == true
+ calculator.assignable?(tuple2, tuple1).should == false
+ end
+
it 'should accept matching tuples' do
tuple1 = tuple_t(1,2)
tuple2 = tuple_t(Integer,Integer)
calculator.assignable?(tuple1, tuple2).should == true
calculator.assignable?(tuple2, tuple1).should == true
end
it 'should accept matching tuples where one is more general than the other' do
tuple1 = tuple_t(1,2)
tuple2 = tuple_t(Numeric,Numeric)
calculator.assignable?(tuple1, tuple2).should == false
calculator.assignable?(tuple2, tuple1).should == true
end
it 'should accept ranged tuples' do
tuple1 = tuple_t(1)
factory.constrain_size(tuple1, 5, 5)
tuple2 = tuple_t(Integer,Integer, Integer, Integer, Integer)
calculator.assignable?(tuple1, tuple2).should == true
calculator.assignable?(tuple2, tuple1).should == true
end
it 'should reject ranged tuples when ranges do not match' do
tuple1 = tuple_t(1)
factory.constrain_size(tuple1, 4, 5)
tuple2 = tuple_t(Integer,Integer, Integer, Integer, Integer)
calculator.assignable?(tuple1, tuple2).should == true
calculator.assignable?(tuple2, tuple1).should == false
end
it 'should reject ranged tuples when ranges do not match (using infinite upper bound)' do
tuple1 = tuple_t(1)
factory.constrain_size(tuple1, 4, :default)
tuple2 = tuple_t(Integer,Integer, Integer, Integer, Integer)
calculator.assignable?(tuple1, tuple2).should == true
calculator.assignable?(tuple2, tuple1).should == false
end
it 'should accept matching tuples with optional entries by repeating last' do
tuple1 = tuple_t(1,2)
factory.constrain_size(tuple1, 0, :default)
tuple2 = tuple_t(Numeric,Numeric)
factory.constrain_size(tuple2, 0, :default)
calculator.assignable?(tuple1, tuple2).should == false
calculator.assignable?(tuple2, tuple1).should == true
end
it 'should accept matching tuples with optional entries' do
tuple1 = tuple_t(Integer, Integer, String)
factory.constrain_size(tuple1, 1, 3)
array2 = factory.constrain_size(array_t(Integer),2,2)
calculator.assignable?(tuple1, array2).should == true
factory.constrain_size(tuple1, 3, 3)
calculator.assignable?(tuple1, array2).should == false
end
it 'should accept matching array' do
tuple1 = tuple_t(1,2)
array = array_t(Integer)
factory.constrain_size(array, 2, 2)
calculator.assignable?(tuple1, array).should == true
calculator.assignable?(array, tuple1).should == true
end
+
+ it 'should accept empty array when tuple allows min of 0' do
+ tuple1 = tuple_t(Integer)
+ factory.constrain_size(tuple1, 0, 1)
+
+ array = array_t(Integer)
+ factory.constrain_size(array, 0, 0)
+
+ calculator.assignable?(tuple1, array).should == true
+ calculator.assignable?(array, tuple1).should == false
+ end
end
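# Illustrative sketch, assuming the tuple_t, array_t, and constrain_size helpers used
# in this spec: constrain_size relaxes a tuple's arity, so Tuple[Integer, 0, 1] also
# accepts an empty Array[Integer, 0, 0], e.g.:
#   t = tuple_t(Integer)
#   factory.constrain_size(t, 0, 1)
#   calculator.assignable?(t, factory.constrain_size(array_t(Integer), 0, 0))   # => true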
context 'when dealing with structs' do
it 'should accept matching structs' do
struct1 = struct_t({'a'=>Integer, 'b'=>Integer})
struct2 = struct_t({'a'=>Integer, 'b'=>Integer})
calculator.assignable?(struct1, struct2).should == true
calculator.assignable?(struct2, struct1).should == true
end
it 'should accept matching structs where one is more general than the other' do
struct1 = struct_t({'a'=>Integer, 'b'=>Integer})
struct2 = struct_t({'a'=>Numeric, 'b'=>Numeric})
calculator.assignable?(struct1, struct2).should == false
calculator.assignable?(struct2, struct1).should == true
end
it 'should accept matching hash' do
struct1 = struct_t({'a'=>Integer, 'b'=>Integer})
non_empty_string = string_t()
non_empty_string.size_type = range_t(1, nil)
hsh = hash_t(non_empty_string, Integer)
factory.constrain_size(hsh, 2, 2)
calculator.assignable?(struct1, hsh).should == true
calculator.assignable?(hsh, struct1).should == true
end
end
it 'should recognize ruby type inheritance' do
class Foo
end
class Bar < Foo
end
fooType = calculator.infer(Foo.new)
barType = calculator.infer(Bar.new)
calculator.assignable?(fooType, fooType).should == true
calculator.assignable?(Foo, fooType).should == true
calculator.assignable?(fooType, barType).should == true
calculator.assignable?(Foo, barType).should == true
calculator.assignable?(barType, fooType).should == false
calculator.assignable?(Bar, fooType).should == false
end
it "should allow host class with same name" do
hc1 = Puppet::Pops::Types::TypeFactory.host_class('the_name')
hc2 = Puppet::Pops::Types::TypeFactory.host_class('the_name')
calculator.assignable?(hc1, hc2).should == true
end
it "should allow host class with name assigned to hostclass without name" do
hc1 = Puppet::Pops::Types::TypeFactory.host_class()
hc2 = Puppet::Pops::Types::TypeFactory.host_class('the_name')
calculator.assignable?(hc1, hc2).should == true
end
it "should reject host classes with different names" do
hc1 = Puppet::Pops::Types::TypeFactory.host_class('the_name')
hc2 = Puppet::Pops::Types::TypeFactory.host_class('another_name')
calculator.assignable?(hc1, hc2).should == false
end
it "should reject host classes without name assigned to host class with name" do
hc1 = Puppet::Pops::Types::TypeFactory.host_class('the_name')
hc2 = Puppet::Pops::Types::TypeFactory.host_class()
calculator.assignable?(hc1, hc2).should == false
end
it "should allow resource with same type_name and title" do
r1 = Puppet::Pops::Types::TypeFactory.resource('file', 'foo')
r2 = Puppet::Pops::Types::TypeFactory.resource('file', 'foo')
calculator.assignable?(r1, r2).should == true
end
it "should allow more specific resource assignment" do
r1 = Puppet::Pops::Types::TypeFactory.resource()
r2 = Puppet::Pops::Types::TypeFactory.resource('file')
calculator.assignable?(r1, r2).should == true
r2 = Puppet::Pops::Types::TypeFactory.resource('file', '/tmp/foo')
calculator.assignable?(r1, r2).should == true
r1 = Puppet::Pops::Types::TypeFactory.resource('file')
calculator.assignable?(r1, r2).should == true
end
it "should reject less specific resource assignment" do
r1 = Puppet::Pops::Types::TypeFactory.resource('file', '/tmp/foo')
r2 = Puppet::Pops::Types::TypeFactory.resource('file')
calculator.assignable?(r1, r2).should == false
r2 = Puppet::Pops::Types::TypeFactory.resource()
calculator.assignable?(r1, r2).should == false
end
end
context 'when testing if x is instance of type t' do
include_context "types_setup"
- it 'should consider undef to be instance of Object and NilType' do
+ it 'should consider undef to be instance of Any, NilType, and Optional' do
calculator.instance?(Puppet::Pops::Types::PNilType.new(), nil).should == true
- calculator.instance?(Puppet::Pops::Types::PObjectType.new(), nil).should == true
+ calculator.instance?(Puppet::Pops::Types::PAnyType.new(), nil).should == true
+ calculator.instance?(Puppet::Pops::Types::POptionalType.new(), nil).should == true
end
- it 'should not consider undef to be an instance of any other type than Object and NilType and Data' do
- types_to_test = all_types - [
- Puppet::Pops::Types::PObjectType,
+ it 'all types should be (ruby) instances of PAnyType' do
+ all_types.each do |t|
+ t.new.is_a?(Puppet::Pops::Types::PAnyType).should == true
+ end
+ end
+
+ it "should consider :undef to be instance of Runtime['ruby', 'Symbol]" do
+ calculator.instance?(Puppet::Pops::Types::PRuntimeType.new(:runtime => :ruby, :runtime_type_name => 'Symbol'), :undef).should == true
+ end
+
+ it 'should not consider undef to be an instance of any other type than Any, NilType, Data, and Optional' do
+ types_to_test = all_types - [
+ Puppet::Pops::Types::PAnyType,
Puppet::Pops::Types::PNilType,
- Puppet::Pops::Types::PDataType]
+ Puppet::Pops::Types::PDataType,
+ Puppet::Pops::Types::POptionalType,
+ ]
types_to_test.each {|t| calculator.instance?(t.new, nil).should == false }
types_to_test.each {|t| calculator.instance?(t.new, :undef).should == false }
end
+ it 'should consider default to be instance of Default and Any' do
+ calculator.instance?(Puppet::Pops::Types::PDefaultType.new(), :default).should == true
+ calculator.instance?(Puppet::Pops::Types::PAnyType.new(), :default).should == true
+ end
+
+ it 'should not consider "default" to be an instance of anything but Default, and Any' do
+ types_to_test = all_types - [
+ Puppet::Pops::Types::PAnyType,
+ Puppet::Pops::Types::PDefaultType,
+ ]
+
+ types_to_test.each {|t| calculator.instance?(t.new, :default).should == false }
+ end
+
it 'should consider fixnum instanceof PIntegerType' do
calculator.instance?(Puppet::Pops::Types::PIntegerType.new(), 1).should == true
end
it 'should consider fixnum instanceof Fixnum' do
calculator.instance?(Fixnum, 1).should == true
end
it 'should consider integer in range' do
range = range_t(0,10)
calculator.instance?(range, 1).should == true
calculator.instance?(range, 10).should == true
calculator.instance?(range, -1).should == false
calculator.instance?(range, 11).should == false
end
it 'should consider string in length range' do
range = factory.constrain_size(string_t, 1,3)
calculator.instance?(range, 'a').should == true
calculator.instance?(range, 'abc').should == true
calculator.instance?(range, '').should == false
calculator.instance?(range, 'abcd').should == false
end
it 'should consider array in length range' do
range = factory.constrain_size(array_t(integer_t), 1,3)
calculator.instance?(range, [1]).should == true
calculator.instance?(range, [1,2,3]).should == true
calculator.instance?(range, []).should == false
calculator.instance?(range, [1,2,3,4]).should == false
end
it 'should consider hash in length range' do
range = factory.constrain_size(hash_t(integer_t, integer_t), 1,2)
calculator.instance?(range, {1=>1}).should == true
calculator.instance?(range, {1=>1, 2=>2}).should == true
calculator.instance?(range, {}).should == false
calculator.instance?(range, {1=>1, 2=>2, 3=>3}).should == false
end
it 'should consider collection in length range for array ' do
range = factory.constrain_size(collection_t, 1,3)
calculator.instance?(range, [1]).should == true
calculator.instance?(range, [1,2,3]).should == true
calculator.instance?(range, []).should == false
calculator.instance?(range, [1,2,3,4]).should == false
end
it 'should consider collection in length range for hash' do
range = factory.constrain_size(collection_t, 1,2)
calculator.instance?(range, {1=>1}).should == true
calculator.instance?(range, {1=>1, 2=>2}).should == true
calculator.instance?(range, {}).should == false
calculator.instance?(range, {1=>1, 2=>2, 3=>3}).should == false
end
it 'should consider string matching enum as instanceof' do
enum = enum_t('XS', 'S', 'M', 'L', 'XL', '0')
calculator.instance?(enum, 'XS').should == true
calculator.instance?(enum, 'S').should == true
calculator.instance?(enum, 'XXL').should == false
calculator.instance?(enum, '').should == false
calculator.instance?(enum, '0').should == true
calculator.instance?(enum, 0).should == false
end
it 'should consider array[string] as instance of Array[Enum] when strings are instance of Enum' do
enum = enum_t('XS', 'S', 'M', 'L', 'XL', '0')
array = array_t(enum)
calculator.instance?(array, ['XS', 'S', 'XL']).should == true
calculator.instance?(array, ['XS', 'S', 'XXL']).should == false
end
it 'should consider array[mixed] as instance of Variant[mixed] when mixed types are listed in Variant' do
enum = enum_t('XS', 'S', 'M', 'L', 'XL')
sizes = range_t(30, 50)
array = array_t(variant_t(enum, sizes))
calculator.instance?(array, ['XS', 'S', 30, 50]).should == true
calculator.instance?(array, ['XS', 'S', 'XXL']).should == false
calculator.instance?(array, ['XS', 'S', 29]).should == false
end
it 'should consider array[seq] as instance of Tuple[seq] when elements of seq are instance of' do
tuple = tuple_t(Integer, String, Float)
calculator.instance?(tuple, [1, 'a', 3.14]).should == true
calculator.instance?(tuple, [1.2, 'a', 3.14]).should == false
calculator.instance?(tuple, [1, 1, 3.14]).should == false
calculator.instance?(tuple, [1, 'a', 1]).should == false
end
it 'should consider hash[cont] as instance of Struct[cont-t]' do
struct = struct_t({'a'=>Integer, 'b'=>String, 'c'=>Float})
calculator.instance?(struct, {'a'=>1, 'b'=>'a', 'c'=>3.14}).should == true
calculator.instance?(struct, {'a'=>1.2, 'b'=>'a', 'c'=>3.14}).should == false
calculator.instance?(struct, {'a'=>1, 'b'=>1, 'c'=>3.14}).should == false
calculator.instance?(struct, {'a'=>1, 'b'=>'a', 'c'=>1}).should == false
end
context 'and t is Data' do
it 'undef should be considered instance of Data' do
- calculator.instance?(data_t, :undef).should == true
+ calculator.instance?(data_t, nil).should == true
end
it 'other symbols should not be considered instance of Data' do
calculator.instance?(data_t, :love).should == false
end
it 'an empty array should be considered instance of Data' do
calculator.instance?(data_t, []).should == true
end
it 'an empty hash should be considered instance of Data' do
calculator.instance?(data_t, {}).should == true
end
it 'a hash with nil/undef data should be considered instance of Data' do
calculator.instance?(data_t, {'a' => nil}).should == true
- calculator.instance?(data_t, {'a' => :undef}).should == true
end
- it 'a hash with nil/undef key should not considered instance of Data' do
+ it 'a hash with nil/default key should not be considered instance of Data' do
calculator.instance?(data_t, {nil => 10}).should == false
- calculator.instance?(data_t, {:undef => 10}).should == false
+ calculator.instance?(data_t, {:default => 10}).should == false
end
- it 'an array with undef entries should be considered instance of Data' do
- calculator.instance?(data_t, [:undef]).should == true
+ it 'an array with nil entries should be considered instance of Data' do
calculator.instance?(data_t, [nil]).should == true
end
- it 'an array with undef / data entries should be considered instance of Data' do
- calculator.instance?(data_t, [1, :undef, 'a']).should == true
+ it 'an array with nil + data entries should be considered instance of Data' do
calculator.instance?(data_t, [1, nil, 'a']).should == true
end
end
context "and t is something Callable" do
it 'a Closure should be considered a Callable' do
factory = Puppet::Pops::Model::Factory
params = [factory.PARAM('a')]
the_block = factory.LAMBDA(params,factory.literal(42))
the_closure = Puppet::Pops::Evaluator::Closure.new(:fake_evaluator, the_block, :fake_scope)
expect(calculator.instance?(all_callables_t, the_closure)).to be_true
- # TODO: lambdas are currently unttypes, anything can be given if arg count is correct
- expect(calculator.instance?(callable_t(optional_object_t), the_closure)).to be_true
- # Arg count is wrong
- expect(calculator.instance?(callable_t(optional_object_t, optional_object_t), the_closure)).to be_false
+ expect(calculator.instance?(callable_t(object_t), the_closure)).to be_true
+ expect(calculator.instance?(callable_t(object_t, object_t), the_closure)).to be_false
end
it 'a Function instance should be considered a Callable' do
fc = Puppet::Functions.create_function(:foo) do
dispatch :foo do
param 'String', 'a'
end
def foo(a)
a
end
end
f = fc.new(:closure_scope, :loader)
# Any callable
expect(calculator.instance?(all_callables_t, f)).to be_true
# Callable[String]
expect(calculator.instance?(callable_t(String), f)).to be_true
end
end
end
context 'when converting a ruby class' do
it 'should yield \'PIntegerType\' for Integer, Fixnum, and Bignum' do
[Integer,Fixnum,Bignum].each do |c|
calculator.type(c).class.should == Puppet::Pops::Types::PIntegerType
end
end
it 'should yield \'PFloatType\' for Float' do
calculator.type(Float).class.should == Puppet::Pops::Types::PFloatType
end
it 'should yield \'PBooleanType\' for FalseClass and TrueClass' do
[FalseClass,TrueClass].each do |c|
calculator.type(c).class.should == Puppet::Pops::Types::PBooleanType
end
end
it 'should yield \'PNilType\' for NilClass' do
calculator.type(NilClass).class.should == Puppet::Pops::Types::PNilType
end
it 'should yield \'PStringType\' for String' do
calculator.type(String).class.should == Puppet::Pops::Types::PStringType
end
it 'should yield \'PRegexpType\' for Regexp' do
calculator.type(Regexp).class.should == Puppet::Pops::Types::PRegexpType
end
it 'should yield \'PArrayType[PDataType]\' for Array' do
t = calculator.type(Array)
t.class.should == Puppet::Pops::Types::PArrayType
t.element_type.class.should == Puppet::Pops::Types::PDataType
end
it 'should yield \'PHashType[PScalarType,PDataType]\' for Hash' do
t = calculator.type(Hash)
t.class.should == Puppet::Pops::Types::PHashType
t.key_type.class.should == Puppet::Pops::Types::PScalarType
t.element_type.class.should == Puppet::Pops::Types::PDataType
end
end
context 'when representing the type as string' do
it 'should yield \'Type\' for PType' do
calculator.string(Puppet::Pops::Types::PType.new()).should == 'Type'
end
- it 'should yield \'Object\' for PObjectType' do
- calculator.string(Puppet::Pops::Types::PObjectType.new()).should == 'Object'
+ it 'should yield \'Any\' for PAnyType' do
+ calculator.string(Puppet::Pops::Types::PAnyType.new()).should == 'Any'
end
it 'should yield \'Scalar\' for PScalarType' do
calculator.string(Puppet::Pops::Types::PScalarType.new()).should == 'Scalar'
end
it 'should yield \'Boolean\' for PBooleanType' do
calculator.string(Puppet::Pops::Types::PBooleanType.new()).should == 'Boolean'
end
it 'should yield \'Data\' for PDataType' do
calculator.string(Puppet::Pops::Types::PDataType.new()).should == 'Data'
end
it 'should yield \'Numeric\' for PNumericType' do
calculator.string(Puppet::Pops::Types::PNumericType.new()).should == 'Numeric'
end
it 'should yield \'Integer\' and from/to for PIntegerType' do
int_T = Puppet::Pops::Types::PIntegerType
calculator.string(int_T.new()).should == 'Integer'
int = int_T.new()
int.from = 1
int.to = 1
calculator.string(int).should == 'Integer[1, 1]'
int = int_T.new()
int.from = 1
int.to = 2
calculator.string(int).should == 'Integer[1, 2]'
int = int_T.new()
int.from = nil
int.to = 2
calculator.string(int).should == 'Integer[default, 2]'
int = int_T.new()
int.from = 2
int.to = nil
calculator.string(int).should == 'Integer[2, default]'
end
it 'should yield \'Float\' for PFloatType' do
calculator.string(Puppet::Pops::Types::PFloatType.new()).should == 'Float'
end
it 'should yield \'Regexp\' for PRegexpType' do
calculator.string(Puppet::Pops::Types::PRegexpType.new()).should == 'Regexp'
end
it 'should yield \'Regexp[/pat/]\' for parameterized PRegexpType' do
t = Puppet::Pops::Types::PRegexpType.new()
t.pattern = ('a/b')
calculator.string(Puppet::Pops::Types::PRegexpType.new()).should == 'Regexp'
end
it 'should yield \'String\' for PStringType' do
calculator.string(Puppet::Pops::Types::PStringType.new()).should == 'String'
end
it 'should yield \'String\' for PStringType with multiple values' do
calculator.string(string_t('a', 'b', 'c')).should == 'String'
end
it 'should yield \'String\' and from/to for PStringType' do
string_T = Puppet::Pops::Types::PStringType
calculator.string(factory.constrain_size(string_T.new(), 1,1)).should == 'String[1, 1]'
calculator.string(factory.constrain_size(string_T.new(), 1,2)).should == 'String[1, 2]'
calculator.string(factory.constrain_size(string_T.new(), :default, 2)).should == 'String[default, 2]'
calculator.string(factory.constrain_size(string_T.new(), 2, :default)).should == 'String[2, default]'
end
it 'should yield \'Array[Integer]\' for PArrayType[PIntegerType]' do
t = Puppet::Pops::Types::PArrayType.new()
t.element_type = Puppet::Pops::Types::PIntegerType.new()
calculator.string(t).should == 'Array[Integer]'
end
it 'should yield \'Collection\' and from/to for PCollectionType' do
col = collection_t()
calculator.string(factory.constrain_size(col.copy, 1,1)).should == 'Collection[1, 1]'
calculator.string(factory.constrain_size(col.copy, 1,2)).should == 'Collection[1, 2]'
calculator.string(factory.constrain_size(col.copy, :default, 2)).should == 'Collection[default, 2]'
calculator.string(factory.constrain_size(col.copy, 2, :default)).should == 'Collection[2, default]'
end
it 'should yield \'Array\' and from/to for PArrayType' do
arr = array_t(string_t)
calculator.string(factory.constrain_size(arr.copy, 1,1)).should == 'Array[String, 1, 1]'
calculator.string(factory.constrain_size(arr.copy, 1,2)).should == 'Array[String, 1, 2]'
calculator.string(factory.constrain_size(arr.copy, :default, 2)).should == 'Array[String, default, 2]'
calculator.string(factory.constrain_size(arr.copy, 2, :default)).should == 'Array[String, 2, default]'
end
it 'should yield \'Tuple[Integer]\' for PTupleType[PIntegerType]' do
t = Puppet::Pops::Types::PTupleType.new()
t.addTypes(Puppet::Pops::Types::PIntegerType.new())
calculator.string(t).should == 'Tuple[Integer]'
end
it 'should yield \'Tuple[T, T,..]\' for PTupleType[T, T, ...]' do
t = Puppet::Pops::Types::PTupleType.new()
t.addTypes(Puppet::Pops::Types::PIntegerType.new())
t.addTypes(Puppet::Pops::Types::PIntegerType.new())
t.addTypes(Puppet::Pops::Types::PStringType.new())
calculator.string(t).should == 'Tuple[Integer, Integer, String]'
end
it 'should yield \'Tuple\' and from/to for PTupleType' do
tuple_t = tuple_t(string_t)
calculator.string(factory.constrain_size(tuple_t.copy, 1,1)).should == 'Tuple[String, 1, 1]'
calculator.string(factory.constrain_size(tuple_t.copy, 1,2)).should == 'Tuple[String, 1, 2]'
calculator.string(factory.constrain_size(tuple_t.copy, :default, 2)).should == 'Tuple[String, default, 2]'
calculator.string(factory.constrain_size(tuple_t.copy, 2, :default)).should == 'Tuple[String, 2, default]'
end
it 'should yield \'Struct\' and details for PStructType' do
struct_t = struct_t({'a'=>Integer, 'b'=>String})
s = calculator.string(struct_t)
# Ruby 1.8.7 - no one likes you...
(s == "Struct[{'a'=>Integer, 'b'=>String}]" || s == "Struct[{'b'=>String, 'a'=>Integer}]").should == true
struct_t = struct_t({})
calculator.string(struct_t).should == "Struct"
end
it 'should yield \'Hash[String, Integer]\' for PHashType[PStringType, PIntegerType]' do
t = Puppet::Pops::Types::PHashType.new()
t.key_type = Puppet::Pops::Types::PStringType.new()
t.element_type = Puppet::Pops::Types::PIntegerType.new()
calculator.string(t).should == 'Hash[String, Integer]'
end
it 'should yield \'Hash\' and from/to for PHashType' do
hsh = hash_t(string_t, string_t)
calculator.string(factory.constrain_size(hsh.copy, 1,1)).should == 'Hash[String, String, 1, 1]'
calculator.string(factory.constrain_size(hsh.copy, 1,2)).should == 'Hash[String, String, 1, 2]'
calculator.string(factory.constrain_size(hsh.copy, :default, 2)).should == 'Hash[String, String, default, 2]'
calculator.string(factory.constrain_size(hsh.copy, 2, :default)).should == 'Hash[String, String, 2, default]'
end
it "should yield 'Class' for a PHostClassType" do
t = Puppet::Pops::Types::PHostClassType.new()
calculator.string(t).should == 'Class'
end
it "should yield 'Class[x]' for a PHostClassType[x]" do
t = Puppet::Pops::Types::PHostClassType.new()
t.class_name = 'x'
calculator.string(t).should == 'Class[x]'
end
it "should yield 'Resource' for a PResourceType" do
t = Puppet::Pops::Types::PResourceType.new()
calculator.string(t).should == 'Resource'
end
it 'should yield \'File\' for a PResourceType[\'File\']' do
t = Puppet::Pops::Types::PResourceType.new()
t.type_name = 'File'
calculator.string(t).should == 'File'
end
it "should yield 'File['/tmp/foo']' for a PResourceType['File', '/tmp/foo']" do
t = Puppet::Pops::Types::PResourceType.new()
t.type_name = 'File'
t.title = '/tmp/foo'
calculator.string(t).should == "File['/tmp/foo']"
end
it "should yield 'Enum[s,...]' for a PEnumType[s,...]" do
t = enum_t('a', 'b', 'c')
calculator.string(t).should == "Enum['a', 'b', 'c']"
end
it "should yield 'Pattern[/pat/,...]' for a PPatternType['pat',...]" do
t = pattern_t('a')
t2 = pattern_t('a', 'b', 'c')
calculator.string(t).should == "Pattern[/a/]"
calculator.string(t2).should == "Pattern[/a/, /b/, /c/]"
end
it "should escape special characters in the string for a PPatternType['pat',...]" do
t = pattern_t('a/b')
calculator.string(t).should == "Pattern[/a\\/b/]"
end
it "should yield 'Variant[t1,t2,...]' for a PVariantType[t1, t2,...]" do
t1 = string_t()
t2 = integer_t()
t3 = pattern_t('a')
t = variant_t(t1, t2, t3)
calculator.string(t).should == "Variant[String, Integer, Pattern[/a/]]"
end
it "should yield 'Callable' for generic callable" do
expect(calculator.string(all_callables_t)).to eql("Callable")
end
it "should yield 'Callable[0,0]' for callable without params" do
expect(calculator.string(callable_t)).to eql("Callable[0, 0]")
end
it "should yield 'Callable[t,t]' for callable with typed parameters" do
expect(calculator.string(callable_t(String, Integer))).to eql("Callable[String, Integer]")
end
- it "should yield 'Callable[t,min.max]' for callable with size constraint (infinite max)" do
+ it "should yield 'Callable[t,min,max]' for callable with size constraint (infinite max)" do
expect(calculator.string(callable_t(String, 0))).to eql("Callable[String, 0, default]")
end
- it "should yield 'Callable[t,min.max]' for callable with size constraint (capped max)" do
+ it "should yield 'Callable[t,min,max]' for callable with size constraint (capped max)" do
expect(calculator.string(callable_t(String, 0, 3))).to eql("Callable[String, 0, 3]")
end
+ it "should yield 'Callable[min,max]' callable with size > 0" do
+ expect(calculator.string(callable_t(0, 0))).to eql("Callable[0, 0]")
+ expect(calculator.string(callable_t(0, 1))).to eql("Callable[0, 1]")
+ expect(calculator.string(callable_t(0, :default))).to eql("Callable[0, default]")
+ end
+
it "should yield 'Callable[Callable]' for callable with block" do
expect(calculator.string(callable_t(all_callables_t))).to eql("Callable[0, 0, Callable]")
expect(calculator.string(callable_t(string_t, all_callables_t))).to eql("Callable[String, Callable]")
expect(calculator.string(callable_t(string_t, 1,1, all_callables_t))).to eql("Callable[String, 1, 1, Callable]")
end
+ it "should yield Unit for a Unit type" do
+ expect(calculator.string(unit_t)).to eql('Unit')
+ end
end
context 'when processing meta type' do
it 'should infer PType as the type of all other types' do
ptype = Puppet::Pops::Types::PType
calculator.infer(Puppet::Pops::Types::PNilType.new() ).is_a?(ptype).should() == true
calculator.infer(Puppet::Pops::Types::PDataType.new() ).is_a?(ptype).should() == true
calculator.infer(Puppet::Pops::Types::PScalarType.new() ).is_a?(ptype).should() == true
calculator.infer(Puppet::Pops::Types::PStringType.new() ).is_a?(ptype).should() == true
calculator.infer(Puppet::Pops::Types::PNumericType.new() ).is_a?(ptype).should() == true
calculator.infer(Puppet::Pops::Types::PIntegerType.new() ).is_a?(ptype).should() == true
calculator.infer(Puppet::Pops::Types::PFloatType.new() ).is_a?(ptype).should() == true
calculator.infer(Puppet::Pops::Types::PRegexpType.new() ).is_a?(ptype).should() == true
calculator.infer(Puppet::Pops::Types::PBooleanType.new() ).is_a?(ptype).should() == true
calculator.infer(Puppet::Pops::Types::PCollectionType.new()).is_a?(ptype).should() == true
calculator.infer(Puppet::Pops::Types::PArrayType.new() ).is_a?(ptype).should() == true
calculator.infer(Puppet::Pops::Types::PHashType.new() ).is_a?(ptype).should() == true
- calculator.infer(Puppet::Pops::Types::PRubyType.new() ).is_a?(ptype).should() == true
+ calculator.infer(Puppet::Pops::Types::PRuntimeType.new() ).is_a?(ptype).should() == true
calculator.infer(Puppet::Pops::Types::PHostClassType.new() ).is_a?(ptype).should() == true
calculator.infer(Puppet::Pops::Types::PResourceType.new() ).is_a?(ptype).should() == true
calculator.infer(Puppet::Pops::Types::PEnumType.new() ).is_a?(ptype).should() == true
calculator.infer(Puppet::Pops::Types::PPatternType.new() ).is_a?(ptype).should() == true
calculator.infer(Puppet::Pops::Types::PVariantType.new() ).is_a?(ptype).should() == true
calculator.infer(Puppet::Pops::Types::PTupleType.new() ).is_a?(ptype).should() == true
calculator.infer(Puppet::Pops::Types::POptionalType.new() ).is_a?(ptype).should() == true
calculator.infer(Puppet::Pops::Types::PCallableType.new() ).is_a?(ptype).should() == true
end
it 'should yield the string form Type[T] for the inferred type of all other types' do
ptype = Puppet::Pops::Types::PType
calculator.string(calculator.infer(Puppet::Pops::Types::PNilType.new() )).should == "Type[Undef]"
calculator.string(calculator.infer(Puppet::Pops::Types::PDataType.new() )).should == "Type[Data]"
calculator.string(calculator.infer(Puppet::Pops::Types::PScalarType.new() )).should == "Type[Scalar]"
calculator.string(calculator.infer(Puppet::Pops::Types::PStringType.new() )).should == "Type[String]"
calculator.string(calculator.infer(Puppet::Pops::Types::PNumericType.new() )).should == "Type[Numeric]"
calculator.string(calculator.infer(Puppet::Pops::Types::PIntegerType.new() )).should == "Type[Integer]"
calculator.string(calculator.infer(Puppet::Pops::Types::PFloatType.new() )).should == "Type[Float]"
calculator.string(calculator.infer(Puppet::Pops::Types::PRegexpType.new() )).should == "Type[Regexp]"
calculator.string(calculator.infer(Puppet::Pops::Types::PBooleanType.new() )).should == "Type[Boolean]"
calculator.string(calculator.infer(Puppet::Pops::Types::PCollectionType.new())).should == "Type[Collection]"
calculator.string(calculator.infer(Puppet::Pops::Types::PArrayType.new() )).should == "Type[Array[?]]"
calculator.string(calculator.infer(Puppet::Pops::Types::PHashType.new() )).should == "Type[Hash[?, ?]]"
- calculator.string(calculator.infer(Puppet::Pops::Types::PRubyType.new() )).should == "Type[Ruby[?]]"
+ calculator.string(calculator.infer(Puppet::Pops::Types::PRuntimeType.new() )).should == "Type[Runtime[?, ?]]"
calculator.string(calculator.infer(Puppet::Pops::Types::PHostClassType.new() )).should == "Type[Class]"
calculator.string(calculator.infer(Puppet::Pops::Types::PResourceType.new() )).should == "Type[Resource]"
calculator.string(calculator.infer(Puppet::Pops::Types::PEnumType.new() )).should == "Type[Enum]"
calculator.string(calculator.infer(Puppet::Pops::Types::PVariantType.new() )).should == "Type[Variant]"
calculator.string(calculator.infer(Puppet::Pops::Types::PPatternType.new() )).should == "Type[Pattern]"
calculator.string(calculator.infer(Puppet::Pops::Types::PTupleType.new() )).should == "Type[Tuple]"
calculator.string(calculator.infer(Puppet::Pops::Types::POptionalType.new() )).should == "Type[Optional]"
calculator.string(calculator.infer(Puppet::Pops::Types::PCallableType.new() )).should == "Type[Callable]"
+
+ calculator.infer(Puppet::Pops::Types::PResourceType.new(:type_name => 'foo::fee::fum')).to_s.should == "Type[Foo::Fee::Fum]"
+ calculator.string(calculator.infer(Puppet::Pops::Types::PResourceType.new(:type_name => 'foo::fee::fum'))).should == "Type[Foo::Fee::Fum]"
+ calculator.infer(Puppet::Pops::Types::PResourceType.new(:type_name => 'Foo::Fee::Fum')).to_s.should == "Type[Foo::Fee::Fum]"
end
it "computes the common type of PType's type parameter" do
int_t = Puppet::Pops::Types::PIntegerType.new()
string_t = Puppet::Pops::Types::PStringType.new()
calculator.string(calculator.infer([int_t])).should == "Array[Type[Integer], 1, 1]"
calculator.string(calculator.infer([int_t, string_t])).should == "Array[Type[Scalar], 2, 2]"
end
it 'should infer PType as the type of ruby classes' do
class Foo
end
[Object, Numeric, Integer, Fixnum, Bignum, Float, String, Regexp, Array, Hash, Foo].each do |c|
calculator.infer(c).is_a?(Puppet::Pops::Types::PType).should() == true
end
end
it 'should infer PType as the type of PType (meta regression short-circuit)' do
calculator.infer(Puppet::Pops::Types::PType.new()).is_a?(Puppet::Pops::Types::PType).should() == true
end
it 'computes instance? to be true if parameterized and type match' do
int_t = Puppet::Pops::Types::PIntegerType.new()
type_t = Puppet::Pops::Types::TypeFactory.type_type(int_t)
type_type_t = Puppet::Pops::Types::TypeFactory.type_type(type_t)
calculator.instance?(type_type_t, type_t).should == true
end
it 'computes instance? to be false if parameterized and type do not match' do
int_t = Puppet::Pops::Types::PIntegerType.new()
string_t = Puppet::Pops::Types::PStringType.new()
type_t = Puppet::Pops::Types::TypeFactory.type_type(int_t)
type_t2 = Puppet::Pops::Types::TypeFactory.type_type(string_t)
type_type_t = Puppet::Pops::Types::TypeFactory.type_type(type_t)
# i.e. Type[Integer] =~ Type[Type[Integer]] # false
calculator.instance?(type_type_t, type_t2).should == false
end
it 'computes instance? to be true if unparameterized and matched against a type[?]' do
int_t = Puppet::Pops::Types::PIntegerType.new()
type_t = Puppet::Pops::Types::TypeFactory.type_type(int_t)
calculator.instance?(Puppet::Pops::Types::PType.new, type_t).should == true
end
end
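# Illustrative sketch, assuming the TypeFactory API exercised in this spec: the
# inferred type of a type is the meta type PType, so Type[Integer] is an instance of
# Type[Type[Integer]], e.g.:
#   int_t  = Puppet::Pops::Types::PIntegerType.new
#   type_t = Puppet::Pops::Types::TypeFactory.type_type(int_t)
#   calculator.instance?(Puppet::Pops::Types::TypeFactory.type_type(type_t), type_t)   # => true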
context "when asking for an enumerable " do
it "should produce an enumerable for an Integer range that is not infinite" do
t = Puppet::Pops::Types::PIntegerType.new()
t.from = 1
t.to = 10
calculator.enumerable(t).respond_to?(:each).should == true
end
it "should not produce an enumerable for an Integer range that has an infinite side" do
t = Puppet::Pops::Types::PIntegerType.new()
t.from = nil
t.to = 10
calculator.enumerable(t).should == nil
t = Puppet::Pops::Types::PIntegerType.new()
t.from = 1
t.to = nil
calculator.enumerable(t).should == nil
end
it "all but Integer range are not enumerable" do
[Object, Numeric, Float, String, Regexp, Array, Hash].each do |t|
calculator.enumerable(calculator.type(t)).should == nil
end
end
end
context "when dealing with different types of inference" do
it "an instance specific inference is produced by infer" do
calculator.infer(['a','b']).element_type.values.should == ['a', 'b']
end
it "a generic inference is produced using infer_generic" do
calculator.infer_generic(['a','b']).element_type.values.should == []
end
it "a generic result is created by generalize! given an instance specific result for an Array" do
generic = calculator.infer(['a','b'])
generic.element_type.values.should == ['a', 'b']
calculator.generalize!(generic)
generic.element_type.values.should == []
end
it "a generic result is created by generalize! given an instance specific result for a Hash" do
generic = calculator.infer({'a' =>1,'b' => 2})
generic.key_type.values.sort.should == ['a', 'b']
generic.element_type.from.should == 1
generic.element_type.to.should == 2
calculator.generalize!(generic)
generic.key_type.values.should == []
generic.element_type.from.should == nil
generic.element_type.to.should == nil
end
it "does not reduce by combining types when using infer_set" do
element_type = calculator.infer(['a','b',1,2]).element_type
element_type.class.should == Puppet::Pops::Types::PScalarType
inferred_type = calculator.infer_set(['a','b',1,2])
inferred_type.class.should == Puppet::Pops::Types::PTupleType
element_types = inferred_type.types
element_types[0].class.should == Puppet::Pops::Types::PStringType
element_types[1].class.should == Puppet::Pops::Types::PStringType
element_types[2].class.should == Puppet::Pops::Types::PIntegerType
element_types[3].class.should == Puppet::Pops::Types::PIntegerType
end
it "does not reduce by combining types when using infer_set and values are undef" do
element_type = calculator.infer(['a',nil]).element_type
element_type.class.should == Puppet::Pops::Types::PStringType
inferred_type = calculator.infer_set(['a',nil])
inferred_type.class.should == Puppet::Pops::Types::PTupleType
element_types = inferred_type.types
element_types[0].class.should == Puppet::Pops::Types::PStringType
element_types[1].class.should == Puppet::Pops::Types::PNilType
end
end
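# Illustrative sketch, assuming the calculator set up in this spec: infer reduces an
# array's elements to a single common element type, while infer_set keeps one type
# per element, e.g.:
#   calculator.infer(['a', 'b', 1, 2]).element_type.class   # => Puppet::Pops::Types::PScalarType
#   calculator.infer_set(['a', 'b', 1, 2]).class            # => Puppet::Pops::Types::PTupleType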
+ context 'when determining callability' do
+ context 'and given is exact' do
+ it 'with callable' do
+ required = callable_t(string_t)
+ given = callable_t(string_t)
+ calculator.callable?(required, given).should == true
+ end
+
+ it 'with args tuple' do
+ required = callable_t(string_t)
+ given = tuple_t(string_t)
+ calculator.callable?(required, given).should == true
+ end
+
+ it 'with args tuple having a block' do
+ required = callable_t(string_t, callable_t(string_t))
+ given = tuple_t(string_t, callable_t(string_t))
+ calculator.callable?(required, given).should == true
+ end
+
+ it 'with args array' do
+ required = callable_t(string_t)
+ given = array_t(string_t)
+ factory.constrain_size(given, 1, 1)
+ calculator.callable?(required, given).should == true
+ end
+ end
+
+ context 'and given is more generic' do
+ it 'with callable' do
+ required = callable_t(string_t)
+ given = callable_t(object_t)
+ calculator.callable?(required, given).should == true
+ end
+
+ it 'with args tuple' do
+ required = callable_t(string_t)
+ given = tuple_t(object_t)
+ calculator.callable?(required, given).should == false
+ end
+
+ it 'with args tuple having a block' do
+ required = callable_t(string_t, callable_t(string_t))
+ given = tuple_t(string_t, callable_t(object_t))
+ calculator.callable?(required, given).should == true
+ end
+
+ it 'with args tuple having a block with captures rest' do
+ required = callable_t(string_t, callable_t(string_t))
+ given = tuple_t(string_t, callable_t(object_t, 0, :default))
+ calculator.callable?(required, given).should == true
+ end
+ end
+
+ context 'and given is more specific' do
+ it 'with callable' do
+ required = callable_t(object_t)
+ given = callable_t(string_t)
+ calculator.callable?(required, given).should == false
+ end
+
+ it 'with args tuple' do
+ required = callable_t(object_t)
+ given = tuple_t(string_t)
+ calculator.callable?(required, given).should == true
+ end
+
+ it 'with args tuple having a block' do
+ required = callable_t(string_t, callable_t(object_t))
+ given = tuple_t(string_t, callable_t(string_t))
+ calculator.callable?(required, given).should == false
+ end
+
+ it 'with args tuple having a block with captures rest' do
+ required = callable_t(string_t, callable_t(object_t))
+ given = tuple_t(string_t, callable_t(string_t, 0, :default))
+ calculator.callable?(required, given).should == false
+ end
+ end
+ end
+
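# Illustrative sketch, assuming the callable_t and object_t helpers used in this spec:
# callable?(required, given) treats the given callable contravariantly in its
# parameters, so a block accepting Object can be passed where Callable[String] is
# required, but not vice versa, e.g.:
#   calculator.callable?(callable_t(string_t), callable_t(object_t))   # => true
#   calculator.callable?(callable_t(object_t), callable_t(string_t))   # => false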
matcher :be_assignable_to do |type|
calc = Puppet::Pops::Types::TypeCalculator.new
match do |actual|
calc.assignable?(type, actual)
end
failure_message_for_should do |actual|
"#{calc.string(actual)} should be assignable to #{calc.string(type)}"
end
failure_message_for_should_not do |actual|
"#{calc.string(actual)} is assignable to #{calc.string(type)} when it should not"
end
end
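# Usage note: the custom matcher above delegates to TypeCalculator#assignable?, so
# `x.should be_assignable_to(t)` is equivalent to `calculator.assignable?(t, x).should == true`.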
end
diff --git a/spec/unit/pops/types/type_factory_spec.rb b/spec/unit/pops/types/type_factory_spec.rb
index a5b949640..cca19a75c 100644
--- a/spec/unit/pops/types/type_factory_spec.rb
+++ b/spec/unit/pops/types/type_factory_spec.rb
@@ -1,276 +1,281 @@
require 'spec_helper'
require 'puppet/pops'
describe 'The type factory' do
context 'when creating' do
it 'integer() returns PIntegerType' do
Puppet::Pops::Types::TypeFactory.integer().class().should == Puppet::Pops::Types::PIntegerType
end
it 'float() returns PFloatType' do
Puppet::Pops::Types::TypeFactory.float().class().should == Puppet::Pops::Types::PFloatType
end
it 'string() returns PStringType' do
Puppet::Pops::Types::TypeFactory.string().class().should == Puppet::Pops::Types::PStringType
end
it 'boolean() returns PBooleanType' do
Puppet::Pops::Types::TypeFactory.boolean().class().should == Puppet::Pops::Types::PBooleanType
end
it 'pattern() returns PPatternType' do
Puppet::Pops::Types::TypeFactory.pattern().class().should == Puppet::Pops::Types::PPatternType
end
it 'regexp() returns PRegexpType' do
Puppet::Pops::Types::TypeFactory.regexp().class().should == Puppet::Pops::Types::PRegexpType
end
it 'enum() returns PEnumType' do
Puppet::Pops::Types::TypeFactory.enum().class().should == Puppet::Pops::Types::PEnumType
end
it 'variant() returns PVariantType' do
Puppet::Pops::Types::TypeFactory.variant().class().should == Puppet::Pops::Types::PVariantType
end
it 'scalar() returns PScalarType' do
Puppet::Pops::Types::TypeFactory.scalar().class().should == Puppet::Pops::Types::PScalarType
end
it 'data() returns PDataType' do
Puppet::Pops::Types::TypeFactory.data().class().should == Puppet::Pops::Types::PDataType
end
it 'optional() returns POptionalType' do
Puppet::Pops::Types::TypeFactory.optional().class().should == Puppet::Pops::Types::POptionalType
end
it 'collection() returns PCollectionType' do
Puppet::Pops::Types::TypeFactory.collection().class().should == Puppet::Pops::Types::PCollectionType
end
it 'catalog_entry() returns PCatalogEntryType' do
Puppet::Pops::Types::TypeFactory.catalog_entry().class().should == Puppet::Pops::Types::PCatalogEntryType
end
it 'struct() returns PStructType' do
Puppet::Pops::Types::TypeFactory.struct().class().should == Puppet::Pops::Types::PStructType
end
it 'tuple() returns PTupleType' do
Puppet::Pops::Types::TypeFactory.tuple().class().should == Puppet::Pops::Types::PTupleType
end
it 'undef() returns PNilType' do
Puppet::Pops::Types::TypeFactory.undef().class().should == Puppet::Pops::Types::PNilType
end
+ it 'default() returns PDefaultType' do
+ Puppet::Pops::Types::TypeFactory.default().class().should == Puppet::Pops::Types::PDefaultType
+ end
+
it 'range(from, to) returns PIntegerType' do
t = Puppet::Pops::Types::TypeFactory.range(1,2)
t.class().should == Puppet::Pops::Types::PIntegerType
t.from.should == 1
t.to.should == 2
end
it 'range(default, default) returns PIntegerType' do
t = Puppet::Pops::Types::TypeFactory.range(:default,:default)
t.class().should == Puppet::Pops::Types::PIntegerType
t.from.should == nil
t.to.should == nil
end
it 'float_range(from, to) returns PFloatType' do
t = Puppet::Pops::Types::TypeFactory.float_range(1.0, 2.0)
t.class().should == Puppet::Pops::Types::PFloatType
t.from.should == 1.0
t.to.should == 2.0
end
it 'float_range(default, default) returns PFloatType' do
t = Puppet::Pops::Types::TypeFactory.float_range(:default, :default)
t.class().should == Puppet::Pops::Types::PFloatType
t.from.should == nil
t.to.should == nil
end
it 'resource() creates a generic PResourceType' do
pr = Puppet::Pops::Types::TypeFactory.resource()
pr.class().should == Puppet::Pops::Types::PResourceType
pr.type_name.should == nil
end
it 'resource(x) creates a PResourceType[x]' do
pr = Puppet::Pops::Types::TypeFactory.resource('x')
pr.class().should == Puppet::Pops::Types::PResourceType
pr.type_name.should == 'x'
end
it 'host_class() creates a generic PHostClassType' do
hc = Puppet::Pops::Types::TypeFactory.host_class()
hc.class().should == Puppet::Pops::Types::PHostClassType
hc.class_name.should == nil
end
it 'host_class(x) creates a PHostClassType[x]' do
hc = Puppet::Pops::Types::TypeFactory.host_class('x')
hc.class().should == Puppet::Pops::Types::PHostClassType
hc.class_name.should == 'x'
end
it 'host_class(::x) creates a PHostClassType[x]' do
hc = Puppet::Pops::Types::TypeFactory.host_class('::x')
hc.class().should == Puppet::Pops::Types::PHostClassType
hc.class_name.should == 'x'
end
it 'array_of(fixnum) returns PArrayType[PIntegerType]' do
at = Puppet::Pops::Types::TypeFactory.array_of(1)
at.class().should == Puppet::Pops::Types::PArrayType
at.element_type.class.should == Puppet::Pops::Types::PIntegerType
end
it 'array_of(PIntegerType) returns PArrayType[PIntegerType]' do
at = Puppet::Pops::Types::TypeFactory.array_of(Puppet::Pops::Types::PIntegerType.new())
at.class().should == Puppet::Pops::Types::PArrayType
at.element_type.class.should == Puppet::Pops::Types::PIntegerType
end
it 'array_of_data returns PArrayType[PDataType]' do
at = Puppet::Pops::Types::TypeFactory.array_of_data
at.class().should == Puppet::Pops::Types::PArrayType
at.element_type.class.should == Puppet::Pops::Types::PDataType
end
it 'hash_of_data returns PHashType[PScalarType,PDataType]' do
ht = Puppet::Pops::Types::TypeFactory.hash_of_data
ht.class().should == Puppet::Pops::Types::PHashType
ht.key_type.class.should == Puppet::Pops::Types::PScalarType
ht.element_type.class.should == Puppet::Pops::Types::PDataType
end
- it 'ruby(1) returns PRubyType[\'Fixnum\']' do
+ it 'ruby(1) returns PRuntimeType[ruby, \'Fixnum\']' do
ht = Puppet::Pops::Types::TypeFactory.ruby(1)
- ht.class().should == Puppet::Pops::Types::PRubyType
- ht.ruby_class.should == 'Fixnum'
+ ht.class().should == Puppet::Pops::Types::PRuntimeType
+ ht.runtime.should == :ruby
+ ht.runtime_type_name.should == 'Fixnum'
end
it 'a size constrained collection can be created from array' do
t = Puppet::Pops::Types::TypeFactory.array_of_data()
Puppet::Pops::Types::TypeFactory.constrain_size(t, 1,2).should == t
t.size_type.class.should == Puppet::Pops::Types::PIntegerType
t.size_type.from.should == 1
t.size_type.to.should == 2
end
it 'a size constrained collection can be created from hash' do
t = Puppet::Pops::Types::TypeFactory.hash_of_data()
Puppet::Pops::Types::TypeFactory.constrain_size(t, 1,2).should == t
t.size_type.class.should == Puppet::Pops::Types::PIntegerType
t.size_type.from.should == 1
t.size_type.to.should == 2
end
context 'callable types' do
it 'the callable methods produces a Callable' do
t = Puppet::Pops::Types::TypeFactory.callable()
expect(t.class).to be(Puppet::Pops::Types::PCallableType)
expect(t.param_types.class).to be(Puppet::Pops::Types::PTupleType)
expect(t.param_types.types).to be_empty
expect(t.block_type).to be_nil
end
it 'callable method with types produces the corresponding Tuple for parameters and generated names' do
tf = Puppet::Pops::Types::TypeFactory
t = tf.callable(tf.integer, tf.string)
expect(t.class).to be(Puppet::Pops::Types::PCallableType)
expect(t.param_types.class).to be(Puppet::Pops::Types::PTupleType)
expect(t.param_types.types).to eql([tf.integer, tf.string])
expect(t.block_type).to be_nil
end
it 'callable accepts min range to be given' do
tf = Puppet::Pops::Types::TypeFactory
t = tf.callable(tf.integer, tf.string, 1)
expect(t.class).to be(Puppet::Pops::Types::PCallableType)
expect(t.param_types.class).to be(Puppet::Pops::Types::PTupleType)
expect(t.param_types.size_type.from).to eql(1)
expect(t.param_types.size_type.to).to be_nil
end
it 'callable accepts max range to be given' do
tf = Puppet::Pops::Types::TypeFactory
t = tf.callable(tf.integer, tf.string, 1, 3)
expect(t.class).to be(Puppet::Pops::Types::PCallableType)
expect(t.param_types.class).to be(Puppet::Pops::Types::PTupleType)
expect(t.param_types.size_type.from).to eql(1)
expect(t.param_types.size_type.to).to eql(3)
end
it 'callable accepts max range to be given as :default' do
tf = Puppet::Pops::Types::TypeFactory
t = tf.callable(tf.integer, tf.string, 1, :default)
expect(t.class).to be(Puppet::Pops::Types::PCallableType)
expect(t.param_types.class).to be(Puppet::Pops::Types::PTupleType)
expect(t.param_types.size_type.from).to eql(1)
expect(t.param_types.size_type.to).to be_nil
end
it 'the all_callables method produces a Callable matching any Callable' do
t = Puppet::Pops::Types::TypeFactory.all_callables()
expect(t.class).to be(Puppet::Pops::Types::PCallableType)
expect(t.param_types).to be_nil
expect(t.block_type).to be_nil
end
it 'with block are created by placing a Callable last' do
block_t = Puppet::Pops::Types::TypeFactory.callable(String)
t = Puppet::Pops::Types::TypeFactory.callable(String, block_t)
expect(t.block_type).to be(block_t)
end
it 'min size constraint can be used with a block last' do
block_t = Puppet::Pops::Types::TypeFactory.callable(String)
t = Puppet::Pops::Types::TypeFactory.callable(String, 1, block_t)
expect(t.block_type).to be(block_t)
expect(t.param_types.size_type.from).to eql(1)
expect(t.param_types.size_type.to).to be_nil
end
it 'min, max size constraint can be used with a block last' do
block_t = Puppet::Pops::Types::TypeFactory.callable(String)
t = Puppet::Pops::Types::TypeFactory.callable(String, 1, 3, block_t)
expect(t.block_type).to be(block_t)
expect(t.param_types.size_type.from).to eql(1)
expect(t.param_types.size_type.to).to eql(3)
end
it 'the with_block method decorates a Callable with a block_type' do
t = Puppet::Pops::Types::TypeFactory.callable()
t2 = Puppet::Pops::Types::TypeFactory.with_block(t)
block_t = t2.block_type
# given t is returned after mutation
expect(t2).to be(t)
expect(block_t.class).to be(Puppet::Pops::Types::PCallableType)
expect(block_t.param_types.class).to be(Puppet::Pops::Types::PTupleType)
expect(block_t.param_types.types).to be_empty
expect(block_t.block_type).to be_nil
end
it 'the with_optional_block method decorates a Callable with an optional block_type' do
t = Puppet::Pops::Types::TypeFactory.callable()
t2 = Puppet::Pops::Types::TypeFactory.with_optional_block(t)
opt_t = t2.block_type
expect(opt_t.class).to be(Puppet::Pops::Types::POptionalType)
block_t = opt_t.optional_type
# given t is returned after mutation
expect(t2).to be(t)
expect(block_t.class).to be(Puppet::Pops::Types::PCallableType)
expect(block_t.param_types.class).to be(Puppet::Pops::Types::PTupleType)
expect(block_t.param_types.types).to be_empty
expect(block_t.block_type).to be_nil
end
end
end
end
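The callable examples above build Callable types through TypeFactory. A minimal sketch of the same constructors used directly, assuming this revision's in-tree API (all names are taken from the spec above):
````
require 'puppet/pops'

tf = Puppet::Pops::Types::TypeFactory

# Callable with String and Integer parameters, between 1 and "default" (unbounded) of them
t = tf.callable(tf.string, tf.integer, 1, :default)
puts t.param_types.types.size        # => 2
puts t.param_types.size_type.from    # => 1
puts t.param_types.size_type.to.nil? # => true, i.e. no upper bound

# a trailing Callable is taken as the block type rather than as a parameter
block_t = tf.callable(tf.string)
t2 = tf.callable(tf.string, block_t)
puts t2.block_type.equal?(block_t)   # => true
````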
diff --git a/spec/unit/pops/types/type_parser_spec.rb b/spec/unit/pops/types/type_parser_spec.rb
index 95595e55d..b67b7b6cd 100644
--- a/spec/unit/pops/types/type_parser_spec.rb
+++ b/spec/unit/pops/types/type_parser_spec.rb
@@ -1,219 +1,240 @@
require 'spec_helper'
require 'puppet/pops'
describe Puppet::Pops::Types::TypeParser do
extend RSpec::Matchers::DSL
let(:parser) { Puppet::Pops::Types::TypeParser.new }
- let(:types) { Puppet::Pops::Types::TypeFactory }
+ let(:types) { Puppet::Pops::Types::TypeFactory }
+
it "rejects a puppet expression" do
expect { parser.parse("1 + 1") }.to raise_error(Puppet::ParseError, /The expression <1 \+ 1> is not a valid type specification/)
end
it "rejects a empty type specification" do
expect { parser.parse("") }.to raise_error(Puppet::ParseError, /The expression <> is not a valid type specification/)
end
it "rejects an invalid type simple type" do
expect { parser.parse("notAType") }.to raise_error(Puppet::ParseError, /The expression <notAType> is not a valid type specification/)
end
it "rejects an unknown parameterized type" do
expect { parser.parse("notAType[Integer]") }.to raise_error(Puppet::ParseError,
/The expression <notAType\[Integer\]> is not a valid type specification/)
end
it "rejects an unknown type parameter" do
expect { parser.parse("Array[notAType]") }.to raise_error(Puppet::ParseError,
/The expression <Array\[notAType\]> is not a valid type specification/)
end
[
- 'Object', 'Data', 'CatalogEntry', 'Boolean', 'Scalar', 'Undef', 'Numeric',
+ 'Any', 'Data', 'CatalogEntry', 'Boolean', 'Scalar', 'Undef', 'Numeric', 'Default'
].each do |name|
it "does not support parameterizing unparameterized type <#{name}>" do
expect { parser.parse("#{name}[Integer]") }.to raise_unparameterized_error_for(name)
end
end
it "parses a simple, unparameterized type into the type object" do
- expect(the_type_parsed_from(types.object)).to be_the_type(types.object)
+ expect(the_type_parsed_from(types.any)).to be_the_type(types.any)
expect(the_type_parsed_from(types.integer)).to be_the_type(types.integer)
expect(the_type_parsed_from(types.float)).to be_the_type(types.float)
expect(the_type_parsed_from(types.string)).to be_the_type(types.string)
expect(the_type_parsed_from(types.boolean)).to be_the_type(types.boolean)
expect(the_type_parsed_from(types.pattern)).to be_the_type(types.pattern)
expect(the_type_parsed_from(types.data)).to be_the_type(types.data)
expect(the_type_parsed_from(types.catalog_entry)).to be_the_type(types.catalog_entry)
expect(the_type_parsed_from(types.collection)).to be_the_type(types.collection)
expect(the_type_parsed_from(types.tuple)).to be_the_type(types.tuple)
expect(the_type_parsed_from(types.struct)).to be_the_type(types.struct)
expect(the_type_parsed_from(types.optional)).to be_the_type(types.optional)
+ expect(the_type_parsed_from(types.default)).to be_the_type(types.default)
end
it "interprets an unparameterized Array as an Array of Data" do
expect(parser.parse("Array")).to be_the_type(types.array_of_data)
end
it "interprets an unparameterized Hash as a Hash of Scalar to Data" do
expect(parser.parse("Hash")).to be_the_type(types.hash_of_data)
end
it "interprets a parameterized Hash[t] as a Hash of Scalar to t" do
expect(parser.parse("Hash[Integer]")).to be_the_type(types.hash_of(types.integer))
end
it "parses a parameterized type into the type object" do
parameterized_array = types.array_of(types.integer)
parameterized_hash = types.hash_of(types.integer, types.boolean)
expect(the_type_parsed_from(parameterized_array)).to be_the_type(parameterized_array)
expect(the_type_parsed_from(parameterized_hash)).to be_the_type(parameterized_hash)
end
it "parses a size constrained collection using capped range" do
parameterized_array = types.array_of(types.integer)
types.constrain_size(parameterized_array, 1,2)
parameterized_hash = types.hash_of(types.integer, types.boolean)
types.constrain_size(parameterized_hash, 1,2)
expect(the_type_parsed_from(parameterized_array)).to be_the_type(parameterized_array)
expect(the_type_parsed_from(parameterized_hash)).to be_the_type(parameterized_hash)
end
it "parses a size constrained collection with open range" do
parameterized_array = types.array_of(types.integer)
types.constrain_size(parameterized_array, 1,:default)
parameterized_hash = types.hash_of(types.integer, types.boolean)
types.constrain_size(parameterized_hash, 1,:default)
expect(the_type_parsed_from(parameterized_array)).to be_the_type(parameterized_array)
expect(the_type_parsed_from(parameterized_hash)).to be_the_type(parameterized_hash)
end
it "parses optional type" do
opt_t = types.optional(Integer)
expect(the_type_parsed_from(opt_t)).to be_the_type(opt_t)
end
it "parses tuple type" do
tuple_t = types.tuple(Integer, String)
expect(the_type_parsed_from(tuple_t)).to be_the_type(tuple_t)
end
it "parses tuple type with occurence constraint" do
tuple_t = types.tuple(Integer, String)
types.constrain_size(tuple_t, 2, 5)
expect(the_type_parsed_from(tuple_t)).to be_the_type(tuple_t)
end
it "parses struct type" do
struct_t = types.struct({'a'=>Integer, 'b'=>String})
expect(the_type_parsed_from(struct_t)).to be_the_type(struct_t)
end
+ describe "handles parsing of patterns and regexp" do
+ { 'Pattern[/([a-z]+)([1-9]+)/]' => [:pattern, [/([a-z]+)([1-9]+)/]],
+ 'Pattern["([a-z]+)([1-9]+)"]' => [:pattern, [/([a-z]+)([1-9]+)/]],
+ 'Regexp[/([a-z]+)([1-9]+)/]' => [:regexp, [/([a-z]+)([1-9]+)/]],
+ 'Pattern[/x9/, /([a-z]+)([1-9]+)/]' => [:pattern, [/x9/, /([a-z]+)([1-9]+)/]],
+ }.each do |source, type|
+ it "such that the source '#{source}' yields the type #{type.to_s}" do
+ expect(parser.parse(source)).to be_the_type(Puppet::Pops::Types::TypeFactory.send(type[0], *type[1]))
+ end
+ end
+ end
+
it "rejects an collection spec with the wrong number of parameters" do
expect { parser.parse("Array[Integer, 1,2,3]") }.to raise_the_parameter_error("Array", "1 to 3", 4)
expect { parser.parse("Hash[Integer, Integer, 1,2,3]") }.to raise_the_parameter_error("Hash", "1 to 4", 5)
end
it "interprets anything that is not a built in type to be a resource type" do
expect(parser.parse("File")).to be_the_type(types.resource('file'))
end
it "parses a resource type with title" do
expect(parser.parse("File['/tmp/foo']")).to be_the_type(types.resource('file', '/tmp/foo'))
end
it "parses a resource type using 'Resource[type]' form" do
expect(parser.parse("Resource[File]")).to be_the_type(types.resource('file'))
end
it "parses a resource type with title using 'Resource[type, title]'" do
expect(parser.parse("Resource[File, '/tmp/foo']")).to be_the_type(types.resource('file', '/tmp/foo'))
end
it "parses a host class type" do
expect(parser.parse("Class")).to be_the_type(types.host_class())
end
it "parses a parameterized host class type" do
expect(parser.parse("Class[foo::bar]")).to be_the_type(types.host_class('foo::bar'))
end
it 'parses an integer range' do
expect(parser.parse("Integer[1,2]")).to be_the_type(types.range(1,2))
end
it 'parses a float range' do
expect(parser.parse("Float[1.0,2.0]")).to be_the_type(types.float_range(1.0,2.0))
end
it 'parses a collection size range' do
expect(parser.parse("Collection[1,2]")).to be_the_type(types.constrain_size(types.collection,1,2))
end
it 'parses a type type' do
expect(parser.parse("Type[Integer]")).to be_the_type(types.type_type(types.integer))
end
it 'parses a ruby type' do
- expect(parser.parse("Ruby['Integer']")).to be_the_type(types.ruby_type('Integer'))
+ expect(parser.parse("Runtime[ruby, 'Integer']")).to be_the_type(types.ruby_type('Integer'))
end
it 'parses a callable type' do
expect(parser.parse("Callable")).to be_the_type(types.all_callables())
end
it 'parses a parameterized callable type' do
expect(parser.parse("Callable[String, Integer]")).to be_the_type(types.callable(String, Integer))
end
it 'parses a parameterized callable type with min/max' do
expect(parser.parse("Callable[String, Integer, 1, default]")).to be_the_type(types.callable(String, Integer, 1, :default))
end
it 'parses a parameterized callable type with block' do
expect(parser.parse("Callable[String, Callable[Boolean]]")).to be_the_type(types.callable(String, types.callable(true)))
end
- it 'parses a parameterized callable type with only min/max' do
+ it 'parses a parameterized callable type with 0 min/max' do
t = parser.parse("Callable[0,0]")
expect(t).to be_the_type(types.callable())
expect(t.param_types.types).to be_empty
end
+ it 'parses a parameterized callable type with >0 min/max' do
+ t = parser.parse("Callable[0,1]")
+ expect(t).to be_the_type(types.callable(0,1))
+ # Contains a Unit type to indicate "called with what you accept"
+ expect(t.param_types.types[0]).to be_the_type(Puppet::Pops::Types::PUnitType.new())
+ end
+
matcher :be_the_type do |type|
calc = Puppet::Pops::Types::TypeCalculator.new
match do |actual|
calc.assignable?(actual, type) && calc.assignable?(type, actual)
end
failure_message_for_should do |actual|
"expected #{calc.string(type)}, but was #{calc.string(actual)}"
end
end
def raise_the_parameter_error(type, required, given)
raise_error(Puppet::ParseError, /#{type} requires #{required}, #{given} provided/)
end
def raise_type_error_for(type_name)
raise_error(Puppet::ParseError, /Unknown type <#{type_name}>/)
end
def raise_unparameterized_error_for(type_name)
raise_error(Puppet::ParseError, /Not a parameterized type <#{type_name}>/)
end
def the_type_parsed_from(type)
parser.parse(the_type_spec_for(type))
end
def the_type_spec_for(type)
calc = Puppet::Pops::Types::TypeCalculator.new
calc.string(type)
end
end
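The helpers the_type_parsed_from and the_type_spec_for round-trip a type through TypeCalculator#string and TypeParser#parse, and be_the_type compares types by mutual assignability. A minimal standalone sketch of that round trip, assuming the same puppet/pops API:
````
require 'puppet/pops'

parser = Puppet::Pops::Types::TypeParser.new
calc   = Puppet::Pops::Types::TypeCalculator.new

t = parser.parse('Hash[Integer]')  # a Hash of Scalar keys to Integer values, per the spec above
puts calc.string(t)                # the canonical string form used by the_type_spec_for

# be_the_type considers two types equal when each is assignable to the other
u = parser.parse(calc.string(t))
puts calc.assignable?(t, u) && calc.assignable?(u, t)  # => true
````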
diff --git a/spec/unit/pops/validator/validator_spec.rb b/spec/unit/pops/validator/validator_spec.rb
index 1d865de5f..31defdd6b 100644
--- a/spec/unit/pops/validator/validator_spec.rb
+++ b/spec/unit/pops/validator/validator_spec.rb
@@ -1,68 +1,184 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet/pops'
require 'puppet_spec/pops'
# relative to this spec file (./) does not work as this file is loaded by rspec
require File.join(File.dirname(__FILE__), '../parser/parser_rspec_helper')
-describe "validating 3x" do
+describe "validating 4x" do
include ParserRspecHelper
include PuppetSpec::Pops
let(:acceptor) { Puppet::Pops::Validation::Acceptor.new() }
- let(:validator) { Puppet::Pops::Validation::ValidatorFactory_3_1.new().validator(acceptor) }
+ let(:validator) { Puppet::Pops::Validation::ValidatorFactory_4_0.new().validator(acceptor) }
def validate(model)
validator.validate(model)
acceptor
end
it 'should raise error for illegal names' do
pending "validation was too strict, now too relaxed - validation missing"
expect(validate(fqn('Aaa'))).to have_issue(Puppet::Pops::Issues::ILLEGAL_NAME)
expect(validate(fqn('AAA'))).to have_issue(Puppet::Pops::Issues::ILLEGAL_NAME)
end
it 'should raise error for illegal variable names' do
- pending "validation was too strict, now too relaxed - validation missing"
- expect(validate(fqn('Aaa').var())).to have_issue(Puppet::Pops::Issues::ILLEGAL_NAME)
- expect(validate(fqn('AAA').var())).to have_issue(Puppet::Pops::Issues::ILLEGAL_NAME)
+ expect(validate(fqn('Aaa').var())).to have_issue(Puppet::Pops::Issues::ILLEGAL_VAR_NAME)
+ expect(validate(fqn('AAA').var())).to have_issue(Puppet::Pops::Issues::ILLEGAL_VAR_NAME)
+ expect(validate(fqn('aaa::_aaa').var())).to have_issue(Puppet::Pops::Issues::ILLEGAL_VAR_NAME)
end
- it 'should raise error for -= assignment' do
- expect(validate(fqn('aaa').minus_set(2))).to have_issue(Puppet::Pops::Issues::UNSUPPORTED_OPERATOR)
+ it 'should not raise error for variable name with underscore first in first name segment' do
+ expect(validate(fqn('_aa').var())).to_not have_issue(Puppet::Pops::Issues::ILLEGAL_VAR_NAME)
+ expect(validate(fqn('::_aa').var())).to_not have_issue(Puppet::Pops::Issues::ILLEGAL_VAR_NAME)
end
-end
+ context 'for non productive expressions' do
+ [ '1',
+ '3.14',
+ "'a'",
+ '"a"',
+ '"${$a=10}"', # interpolation with side effect
+ 'false',
+ 'true',
+ 'default',
+ 'undef',
+ '[1,2,3]',
+ '{a=>10}',
+ 'if 1 {2}',
+ 'if 1 {2} else {3}',
+ 'if 1 {2} elsif 3 {4}',
+ 'unless 1 {2}',
+ 'unless 1 {2} else {3}',
+ '1 ? 2 => 3',
+ '1 ? { 2 => 3}',
+ '-1',
+ '-foo()', # unary minus on productive
+ '1+2',
+ '1<2',
+ '(1<2)',
+ '!true',
+ '!foo()', # not on productive
+ '$a',
+ '$a[1]',
+ 'name',
+ 'Type',
+ 'Type[foo]'
+ ].each do |expr|
+ it "produces error for non productive: #{expr}" do
+ source = "#{expr}; $a = 10"
+ expect(validate(parse(source))).to have_issue(Puppet::Pops::Issues::IDEM_EXPRESSION_NOT_LAST)
+ end
-describe "validating 4x" do
- include ParserRspecHelper
- include PuppetSpec::Pops
+ it "does not produce error when last for non productive: #{expr}" do
+ source = " $a = 10; #{expr}"
+ expect(validate(parse(source))).to_not have_issue(Puppet::Pops::Issues::IDEM_EXPRESSION_NOT_LAST)
+ end
+ end
- let(:acceptor) { Puppet::Pops::Validation::Acceptor.new() }
- let(:validator) { Puppet::Pops::Validation::ValidatorFactory_4_0.new().validator(acceptor) }
+ [
+ 'if 1 {$a = 1}',
+ 'if 1 {2} else {$a=1}',
+ 'if 1 {2} elsif 3 {$a=1}',
+ 'unless 1 {$a=1}',
+ 'unless 1 {2} else {$a=1}',
+ '$a = 1 ? 2 => 3',
+ '$a = 1 ? { 2 => 3}',
+ 'Foo[a] -> Foo[b]',
+ '($a=1)',
+ 'foo()',
+ '$a.foo()'
+ ].each do |expr|
- def validate(model)
- validator.validate(model)
- acceptor
+ it "does not produce error when for productive: #{expr}" do
+ source = "#{expr}; $x = 1"
+ expect(validate(parse(source))).to_not have_issue(Puppet::Pops::Issues::IDEM_EXPRESSION_NOT_LAST)
+ end
+ end
+
+ ['class', 'define', 'node'].each do |type|
+ it "flags non productive expression last in #{type}" do
+ source = <<-SOURCE
+ #{type} nope {
+ 1
+ }
+ end
+ SOURCE
+ expect(validate(parse(source))).to have_issue(Puppet::Pops::Issues::IDEM_NOT_ALLOWED_LAST)
+ end
+ end
end
- it 'should raise error for illegal names' do
- pending "validation was too strict, now too relaxed - validation missing"
- expect(validate(fqn('Aaa'))).to have_issue(Puppet::Pops::Issues::ILLEGAL_NAME)
- expect(validate(fqn('AAA'))).to have_issue(Puppet::Pops::Issues::ILLEGAL_NAME)
+ context 'for reserved words' do
+ ['function', 'private', 'type', 'attr'].each do |word|
+ it "produces an error for the word '#{word}'" do
+ source = "$a = #{word}"
+ expect(validate(parse(source))).to have_issue(Puppet::Pops::Issues::RESERVED_WORD)
+ end
+ end
end
- it 'should raise error for illegal variable names' do
- expect(validate(fqn('Aaa').var())).to have_issue(Puppet::Pops::Issues::ILLEGAL_VAR_NAME)
- expect(validate(fqn('AAA').var())).to have_issue(Puppet::Pops::Issues::ILLEGAL_VAR_NAME)
- expect(validate(fqn('aaa::_aaa').var())).to have_issue(Puppet::Pops::Issues::ILLEGAL_VAR_NAME)
+ context 'for reserved type names' do
+ [# type/Type is a reserved name but results in a syntax error because it is a keyword in lower case form
+ 'any',
+ 'unit',
+ 'scalar',
+ 'boolean',
+ 'numeric',
+ 'integer',
+ 'float',
+ 'collection',
+ 'array',
+ 'hash',
+ 'tuple',
+ 'struct',
+ 'variant',
+ 'optional',
+ 'enum',
+ 'regexp',
+ 'pattern',
+ 'runtime',
+ ].each do |name|
+
+ it "produces an error for 'class #{name}'" do
+ source = "class #{name} {}"
+ expect(validate(parse(source))).to have_issue(Puppet::Pops::Issues::RESERVED_TYPE_NAME)
+ end
+
+ it "produces an error for 'define #{name}'" do
+ source = "define #{name} {}"
+ expect(validate(parse(source))).to have_issue(Puppet::Pops::Issues::RESERVED_TYPE_NAME)
+ end
+ end
end
- it 'should not raise error for variable name with underscore first in first name segment' do
- expect(validate(fqn('_aa').var())).to_not have_issue(Puppet::Pops::Issues::ILLEGAL_VAR_NAME)
- expect(validate(fqn('::_aa').var())).to_not have_issue(Puppet::Pops::Issues::ILLEGAL_VAR_NAME)
+ context 'for reserved parameter names' do
+ ['name', 'title'].each do |word|
+ it "produces an error when $#{word} is used as a parameter in a class" do
+ source = "class x ($#{word}) {}"
+ expect(validate(parse(source))).to have_issue(Puppet::Pops::Issues::RESERVED_PARAMETER)
+ end
+
+ it "produces an error when $#{word} is used as a parameter in a define" do
+ source = "define x ($#{word}) {}"
+ expect(validate(parse(source))).to have_issue(Puppet::Pops::Issues::RESERVED_PARAMETER)
+ end
+ end
+
+ end
+
+ context 'for numeric parameter names' do
+ ['1', '0x2', '03'].each do |word|
+ it "produces an error when $#{word} is used as a parameter in a class" do
+ source = "class x ($#{word}) {}"
+ expect(validate(parse(source))).to have_issue(Puppet::Pops::Issues::ILLEGAL_NUMERIC_PARAMETER)
+ end
+ end
end
+ def parse(source)
+ Puppet::Pops::Parser::Parser.new().parse_string(source)
+ end
end
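The rewritten spec drives the 4x validation pipeline: parse a source string, run the ValidatorFactory_4_0 validator over the model, and inspect the acceptor. A minimal sketch of that pipeline, assuming this revision's parser and validation classes; the reserved-parameter example source is taken from the spec above:
````
require 'puppet/pops'

# $title is a reserved parameter name, flagged by the spec above
source = 'class x ($title) {}'
model  = Puppet::Pops::Parser::Parser.new.parse_string(source)

acceptor  = Puppet::Pops::Validation::Acceptor.new
validator = Puppet::Pops::Validation::ValidatorFactory_4_0.new.validator(acceptor)
validator.validate(model)
# the acceptor now holds the collected diagnostics; the spec's have_issue matcher
# (from PuppetSpec::Pops) looks them up, here expecting Issues::RESERVED_PARAMETER
````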
diff --git a/spec/unit/provider/exec/posix_spec.rb b/spec/unit/provider/exec/posix_spec.rb
index 7c4982fcc..02c338b69 100755
--- a/spec/unit/provider/exec/posix_spec.rb
+++ b/spec/unit/provider/exec/posix_spec.rb
@@ -1,211 +1,219 @@
#! /usr/bin/env ruby
require 'spec_helper'
-describe Puppet::Type.type(:exec).provider(:posix) do
+describe Puppet::Type.type(:exec).provider(:posix), :if => Puppet.features.posix? do
include PuppetSpec::Files
def make_exe
cmdpath = tmpdir('cmdpath')
exepath = tmpfile('my_command', cmdpath)
- exepath = exepath + ".exe" if Puppet.features.microsoft_windows?
FileUtils.touch(exepath)
File.chmod(0755, exepath)
exepath
end
- let(:resource) { Puppet::Type.type(:exec).new(:title => File.expand_path('/foo'), :provider => :posix) }
+ let(:resource) { Puppet::Type.type(:exec).new(:title => '/foo', :provider => :posix) }
let(:provider) { described_class.new(resource) }
describe "#validatecmd" do
it "should fail if no path is specified and the command is not fully qualified" do
expect { provider.validatecmd("foo") }.to raise_error(
Puppet::Error,
"'foo' is not qualified and no path was specified. Please qualify the command or specify a path."
)
end
it "should pass if a path is given" do
provider.resource[:path] = ['/bogus/bin']
provider.validatecmd("../foo")
end
it "should pass if command is fully qualifed" do
provider.resource[:path] = ['/bogus/bin']
- provider.validatecmd(File.expand_path("/bin/blah/foo"))
+ provider.validatecmd("/bin/blah/foo")
end
end
describe "#run" do
describe "when the command is an absolute path" do
let(:command) { tmpfile('foo') }
it "should fail if the command doesn't exist" do
expect { provider.run(command) }.to raise_error(ArgumentError, "Could not find command '#{command}'")
end
it "should fail if the command isn't a file" do
FileUtils.mkdir(command)
FileUtils.chmod(0755, command)
expect { provider.run(command) }.to raise_error(ArgumentError, "'#{command}' is a directory, not a file")
end
it "should fail if the command isn't executable" do
FileUtils.touch(command)
File.stubs(:executable?).with(command).returns(false)
expect { provider.run(command) }.to raise_error(ArgumentError, "'#{command}' is not executable")
end
end
describe "when the command is a relative path" do
it "should execute the command if it finds it in the path and is executable" do
command = make_exe
provider.resource[:path] = [File.dirname(command)]
filename = File.basename(command)
- Puppet::Util::Execution.expects(:execute).with { |cmdline, arguments| (cmdline == filename) && (arguments.is_a? Hash) }.returns(Puppet::Util::Execution::ProcessOutput.new('', 0))
+ Puppet::Util::Execution.expects(:execute).with(filename, instance_of(Hash)).returns(Puppet::Util::Execution::ProcessOutput.new('', 0))
provider.run(filename)
end
it "should fail if the command isn't in the path" do
resource[:path] = ["/fake/path"]
expect { provider.run('foo') }.to raise_error(ArgumentError, "Could not find command 'foo'")
end
it "should fail if the command is in the path but not executable" do
command = make_exe
File.chmod(0644, command)
FileTest.stubs(:executable?).with(command).returns(false)
resource[:path] = [File.dirname(command)]
filename = File.basename(command)
expect { provider.run(filename) }.to raise_error(ArgumentError, "Could not find command '#{filename}'")
end
end
it "should not be able to execute shell builtins" do
provider.resource[:path] = ['/bogus/bin']
expect { provider.run("cd ..") }.to raise_error(ArgumentError, "Could not find command 'cd'")
end
+ it "does not override the user when it is already the requested user" do
+ Etc.stubs(:getpwuid).returns(Struct::Passwd.new('testing'))
+ provider.resource[:user] = 'testing'
+ command = make_exe
+
+ Puppet::Util::Execution.expects(:execute).with(anything(), has_entry(:uid, nil)).returns(Puppet::Util::Execution::ProcessOutput.new('', 0))
+
+ provider.run(command)
+ end
+
it "should execute the command if the command given includes arguments or subcommands" do
provider.resource[:path] = ['/bogus/bin']
command = make_exe
- Puppet::Util::Execution.expects(:execute).with { |cmdline, arguments| (cmdline == "#{command} bar --sillyarg=true --blah") && (arguments.is_a? Hash) }.returns(Puppet::Util::Execution::ProcessOutput.new('', 0))
+ Puppet::Util::Execution.expects(:execute).with("#{command} bar --sillyarg=true --blah", instance_of(Hash)).returns(Puppet::Util::Execution::ProcessOutput.new('', 0))
+
provider.run("#{command} bar --sillyarg=true --blah")
end
it "should fail if quoted command doesn't exist" do
provider.resource[:path] = ['/bogus/bin']
- command = "#{File.expand_path('/foo')} bar --sillyarg=true --blah"
+ command = "/foo bar --sillyarg=true --blah"
expect { provider.run(%Q["#{command}"]) }.to raise_error(ArgumentError, "Could not find command '#{command}'")
end
it "should warn if you're overriding something in environment" do
provider.resource[:environment] = ['WHATEVER=/something/else', 'WHATEVER=/foo']
command = make_exe
- Puppet::Util::Execution.expects(:execute).with { |cmdline, arguments| (cmdline == command) && (arguments.is_a? Hash) }.returns(Puppet::Util::Execution::ProcessOutput.new('', 0))
+ Puppet::Util::Execution.expects(:execute).with(command, instance_of(Hash)).returns(Puppet::Util::Execution::ProcessOutput.new('', 0))
+
provider.run(command)
+
@logs.map {|l| "#{l.level}: #{l.message}" }.should == ["warning: Overriding environment setting 'WHATEVER' with '/foo'"]
end
it "should set umask before execution if umask parameter is in use" do
provider.resource[:umask] = '0027'
Puppet::Util.expects(:withumask).with(0027)
provider.run(provider.resource[:command])
end
- describe "posix locale settings", :unless => Puppet.features.microsoft_windows? do
+ describe "posix locale settings" do
# a sentinel value that we can use to emulate what locale environment variables might be set to on an international
# system.
lang_sentinel_value = "en_US.UTF-8"
# a temporary hash that contains sentinel values for each of the locale environment variables that we override in
# "exec"
locale_sentinel_env = {}
Puppet::Util::POSIX::LOCALE_ENV_VARS.each { |var| locale_sentinel_env[var] = lang_sentinel_value }
command = "/bin/echo $%s"
it "should not override user's locale during execution" do
# we'll do this once without any sentinel values, to give us a little more test coverage
orig_env = {}
Puppet::Util::POSIX::LOCALE_ENV_VARS.each { |var| orig_env[var] = ENV[var] if ENV[var] }
orig_env.keys.each do |var|
output, status = provider.run(command % var)
output.strip.should == orig_env[var]
end
# now, once more... but with our sentinel values
Puppet::Util.withenv(locale_sentinel_env) do
Puppet::Util::POSIX::LOCALE_ENV_VARS.each do |var|
output, status = provider.run(command % var)
output.strip.should == locale_sentinel_env[var]
end
end
end
it "should respect locale overrides in user's 'environment' configuration" do
provider.resource[:environment] = ['LANG=C', 'LC_ALL=C']
output, status = provider.run(command % 'LANG')
output.strip.should == 'C'
output, status = provider.run(command % 'LC_ALL')
output.strip.should == 'C'
end
end
- describe "posix user-related environment vars", :unless => Puppet.features.microsoft_windows? do
+ describe "posix user-related environment vars" do
# a temporary hash that contains sentinel values for each of the user-related environment variables that we
# are expected to unset during an "exec"
user_sentinel_env = {}
Puppet::Util::POSIX::USER_ENV_VARS.each { |var| user_sentinel_env[var] = "Abracadabra" }
command = "/bin/echo $%s"
it "should unset user-related environment vars during execution" do
# first we set up a temporary execution environment with sentinel values for the user-related environment vars
# that we care about.
Puppet::Util.withenv(user_sentinel_env) do
# with this environment, we loop over the vars in question
Puppet::Util::POSIX::USER_ENV_VARS.each do |var|
# ensure that our temporary environment is set up as we expect
ENV[var].should == user_sentinel_env[var]
# run an "exec" via the provider and ensure that it unsets the vars
output, status = provider.run(command % var)
output.strip.should == ""
# ensure that after the exec, our temporary env is still intact
ENV[var].should == user_sentinel_env[var]
end
end
end
it "should respect overrides to user-related environment vars in caller's 'environment' configuration" do
sentinel_value = "Abracadabra"
# set the "environment" property of the resource, populating it with a hash containing sentinel values for
# each of the user-related posix environment variables
provider.resource[:environment] = Puppet::Util::POSIX::USER_ENV_VARS.collect { |var| "#{var}=#{sentinel_value}"}
# loop over the posix user-related environment variables
Puppet::Util::POSIX::USER_ENV_VARS.each do |var|
# run an 'exec' to get the value of each variable
output, status = provider.run(command % var)
# ensure that it matches our expected sentinel value
output.strip.should == sentinel_value
end
end
-
-
end
-
-
end
end
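Several hunks above replace Mocha's block-form argument checks (expects(...).with { |a, b| ... }) with parameter matchers such as instance_of, has_entry, and anything. A small hypothetical spec (not part of the patch) contrasting the two styles, assuming the repository's spec_helper, which wires in Mocha:
````
require 'spec_helper'

describe 'mocha argument matching styles' do
  let(:runner) { mock('runner') } # hypothetical collaborator, for illustration only

  it 'checks arguments inside a block (the style being replaced)' do
    runner.expects(:run).with { |cmd, opts| cmd == 'echo hi' && opts.is_a?(Hash) }.returns('')
    runner.run('echo hi', {})
  end

  it 'uses parameter matchers (the style the hunks switch to)' do
    runner.expects(:run).with('echo hi', instance_of(Hash)).returns('')
    runner.run('echo hi', :uid => nil)
  end

  it 'ignores one argument and constrains a single hash entry' do
    runner.expects(:run).with(anything, has_entry(:uid, nil)).returns('')
    runner.run('whatever', :uid => nil, :gid => 5)
  end
end
````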
diff --git a/spec/unit/provider/exec/shell_spec.rb b/spec/unit/provider/exec/shell_spec.rb
index 0f0faa594..932f46b6a 100755
--- a/spec/unit/provider/exec/shell_spec.rb
+++ b/spec/unit/provider/exec/shell_spec.rb
@@ -1,53 +1,53 @@
#! /usr/bin/env ruby
require 'spec_helper'
describe Puppet::Type.type(:exec).provider(:shell), :unless => Puppet.features.microsoft_windows? do
- let :resource do Puppet::Resource.new(:exec, 'foo') end
- let :provider do described_class.new(resource) end
+ let(:resource) { Puppet::Type.type(:exec).new(:title => 'foo', :provider => 'shell') }
+ let(:provider) { described_class.new(resource) }
describe "#run" do
it "should be able to run builtin shell commands" do
output, status = provider.run("if [ 1 = 1 ]; then echo 'blah'; fi")
status.exitstatus.should == 0
output.should == "blah\n"
end
it "should be able to run commands with single quotes in them" do
output, status = provider.run("echo 'foo bar'")
status.exitstatus.should == 0
output.should == "foo bar\n"
end
it "should be able to run commands with double quotes in them" do
output, status = provider.run('echo "foo bar"')
status.exitstatus.should == 0
output.should == "foo bar\n"
end
it "should be able to run multiple commands separated by a semicolon" do
output, status = provider.run("echo 'foo' ; echo 'bar'")
status.exitstatus.should == 0
output.should == "foo\nbar\n"
end
it "should be able to read values from the environment parameter" do
resource[:environment] = "FOO=bar"
output, status = provider.run("echo $FOO")
status.exitstatus.should == 0
output.should == "bar\n"
end
it "#14060: should interpolate inside the subshell, not outside it" do
resource[:environment] = "foo=outer"
output, status = provider.run("foo=inner; echo \"foo is $foo\"")
status.exitstatus.should == 0
output.should == "foo is inner\n"
end
end
describe "#validatecmd" do
it "should always return true because builtins don't need path or to be fully qualified" do
provider.validatecmd('whateverdoesntmatter').should == true
end
end
end
diff --git a/spec/unit/provider/file/windows_spec.rb b/spec/unit/provider/file/windows_spec.rb
index a0e9d3a4e..f6d7ef0e7 100755
--- a/spec/unit/provider/file/windows_spec.rb
+++ b/spec/unit/provider/file/windows_spec.rb
@@ -1,154 +1,154 @@
#! /usr/bin/env ruby
require 'spec_helper'
if Puppet.features.microsoft_windows?
require 'puppet/util/windows'
class WindowsSecurity
extend Puppet::Util::Windows::Security
end
end
describe Puppet::Type.type(:file).provider(:windows), :if => Puppet.features.microsoft_windows? do
include PuppetSpec::Files
let(:path) { tmpfile('windows_file_spec') }
let(:resource) { Puppet::Type.type(:file).new :path => path, :mode => 0777, :provider => described_class.name }
let(:provider) { resource.provider }
let(:sid) { 'S-1-1-50' }
let(:account) { 'quinn' }
describe "#mode" do
it "should return a string with the higher-order bits stripped away" do
FileUtils.touch(path)
WindowsSecurity.set_mode(0644, path)
provider.mode.should == '644'
end
it "should return absent if the file doesn't exist" do
provider.mode.should == :absent
end
end
describe "#mode=" do
it "should chmod the file to the specified value" do
FileUtils.touch(path)
WindowsSecurity.set_mode(0644, path)
provider.mode = '0755'
provider.mode.should == '755'
end
it "should pass along any errors encountered" do
expect do
provider.mode = '644'
end.to raise_error(Puppet::Error, /failed to set mode/)
end
end
describe "#id2name" do
it "should return the name of the user identified by the sid" do
- Puppet::Util::Windows::Security.expects(:valid_sid?).with(sid).returns(true)
- Puppet::Util::Windows::Security.expects(:sid_to_name).with(sid).returns(account)
+ Puppet::Util::Windows::SID.expects(:valid_sid?).with(sid).returns(true)
+ Puppet::Util::Windows::SID.expects(:sid_to_name).with(sid).returns(account)
provider.id2name(sid).should == account
end
it "should return the argument if it's already a name" do
- Puppet::Util::Windows::Security.expects(:valid_sid?).with(account).returns(false)
- Puppet::Util::Windows::Security.expects(:sid_to_name).never
+ Puppet::Util::Windows::SID.expects(:valid_sid?).with(account).returns(false)
+ Puppet::Util::Windows::SID.expects(:sid_to_name).never
provider.id2name(account).should == account
end
it "should return nil if the user doesn't exist" do
- Puppet::Util::Windows::Security.expects(:valid_sid?).with(sid).returns(true)
- Puppet::Util::Windows::Security.expects(:sid_to_name).with(sid).returns(nil)
+ Puppet::Util::Windows::SID.expects(:valid_sid?).with(sid).returns(true)
+ Puppet::Util::Windows::SID.expects(:sid_to_name).with(sid).returns(nil)
provider.id2name(sid).should == nil
end
end
describe "#name2id" do
it "should delegate to name_to_sid" do
- Puppet::Util::Windows::Security.expects(:name_to_sid).with(account).returns(sid)
+ Puppet::Util::Windows::SID.expects(:name_to_sid).with(account).returns(sid)
provider.name2id(account).should == sid
end
end
describe "#owner" do
it "should return the sid of the owner if the file does exist" do
FileUtils.touch(resource[:path])
provider.stubs(:get_owner).with(resource[:path]).returns(sid)
provider.owner.should == sid
end
it "should return absent if the file doesn't exist" do
provider.owner.should == :absent
end
end
describe "#owner=" do
it "should set the owner to the specified value" do
provider.expects(:set_owner).with(sid, resource[:path])
provider.owner = sid
end
it "should propagate any errors encountered when setting the owner" do
provider.stubs(:set_owner).raises(ArgumentError)
expect {
provider.owner = sid
}.to raise_error(Puppet::Error, /Failed to set owner/)
end
end
describe "#group" do
it "should return the sid of the group if the file does exist" do
FileUtils.touch(resource[:path])
provider.stubs(:get_group).with(resource[:path]).returns(sid)
provider.group.should == sid
end
it "should return absent if the file doesn't exist" do
provider.group.should == :absent
end
end
describe "#group=" do
it "should set the group to the specified value" do
provider.expects(:set_group).with(sid, resource[:path])
provider.group = sid
end
it "should propagate any errors encountered when setting the group" do
provider.stubs(:set_group).raises(ArgumentError)
expect {
provider.group = sid
}.to raise_error(Puppet::Error, /Failed to set group/)
end
end
describe "when validating" do
{:owner => 'foo', :group => 'foo', :mode => 0777}.each do |k,v|
it "should fail if the filesystem doesn't support ACLs and we're managing #{k}" do
described_class.any_instance.stubs(:supports_acl?).returns false
expect {
Puppet::Type.type(:file).new :path => path, k => v
}.to raise_error(Puppet::Error, /Can only manage owner, group, and mode on filesystems that support Windows ACLs, such as NTFS/)
end
end
it "should not fail if the filesystem doesn't support ACLs and we're not managing permissions" do
described_class.any_instance.stubs(:supports_acl?).returns false
Puppet::Type.type(:file).new :path => path
end
end
end
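The hunks above only change which namespace is stubbed: the SID helpers moved from Puppet::Util::Windows::Security to Puppet::Util::Windows::SID. A minimal sketch of the relocated calls, assuming a Windows host where puppet/util/windows loads; the account name and SID value are illustrative:
````
require 'puppet/util/windows'

name = 'Administrators'                                # illustrative account name
sid  = Puppet::Util::Windows::SID.name_to_sid(name)    # e.g. 'S-1-5-32-544'
puts Puppet::Util::Windows::SID.valid_sid?(sid)        # => true
puts Puppet::Util::Windows::SID.sid_to_name(sid)       # resolves back to a name
````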
diff --git a/spec/unit/provider/group/windows_adsi_spec.rb b/spec/unit/provider/group/windows_adsi_spec.rb
index a7de859da..7c9366f72 100644
--- a/spec/unit/provider/group/windows_adsi_spec.rb
+++ b/spec/unit/provider/group/windows_adsi_spec.rb
@@ -1,168 +1,168 @@
#!/usr/bin/env ruby
require 'spec_helper'
-describe Puppet::Type.type(:group).provider(:windows_adsi) do
+describe Puppet::Type.type(:group).provider(:windows_adsi), :if => Puppet.features.microsoft_windows? do
let(:resource) do
Puppet::Type.type(:group).new(
:title => 'testers',
:provider => :windows_adsi
)
end
let(:provider) { resource.provider }
let(:connection) { stub 'connection' }
before :each do
- Puppet::Util::ADSI.stubs(:computer_name).returns('testcomputername')
- Puppet::Util::ADSI.stubs(:connect).returns connection
+ Puppet::Util::Windows::ADSI.stubs(:computer_name).returns('testcomputername')
+ Puppet::Util::Windows::ADSI.stubs(:connect).returns connection
end
describe ".instances" do
it "should enumerate all groups" do
names = ['group1', 'group2', 'group3']
stub_groups = names.map{|n| stub(:name => n)}
connection.stubs(:execquery).with('select name from win32_group where localaccount = "TRUE"').returns stub_groups
described_class.instances.map(&:name).should =~ names
end
end
- describe "group type :members property helpers", :if => Puppet.features.microsoft_windows? do
+ describe "group type :members property helpers" do
let(:user1) { stub(:account => 'user1', :domain => '.', :to_s => 'user1sid') }
let(:user2) { stub(:account => 'user2', :domain => '.', :to_s => 'user2sid') }
before :each do
- Puppet::Util::Windows::Security.stubs(:name_to_sid_object).with('user1').returns(user1)
- Puppet::Util::Windows::Security.stubs(:name_to_sid_object).with('user2').returns(user2)
+ Puppet::Util::Windows::SID.stubs(:name_to_sid_object).with('user1').returns(user1)
+ Puppet::Util::Windows::SID.stubs(:name_to_sid_object).with('user2').returns(user2)
end
describe "#members_insync?" do
it "should return false when current is nil" do
provider.members_insync?(nil, ['user2']).should be_false
end
it "should return false when should is nil" do
provider.members_insync?(['user1'], nil).should be_false
end
it "should return false for differing lists of members" do
provider.members_insync?(['user1'], ['user2']).should be_false
provider.members_insync?(['user1'], []).should be_false
provider.members_insync?([], ['user2']).should be_false
end
it "should return true for same lists of members" do
provider.members_insync?(['user1', 'user2'], ['user1', 'user2']).should be_true
end
it "should return true for same lists of unordered members" do
provider.members_insync?(['user1', 'user2'], ['user2', 'user1']).should be_true
end
it "should return true for same lists of members irrespective of duplicates" do
provider.members_insync?(['user1', 'user2', 'user2'], ['user2', 'user1', 'user1']).should be_true
end
end
describe "#members_to_s" do
it "should return an empty string on non-array input" do
[Object.new, {}, 1, :symbol, ''].each do |input|
provider.members_to_s(input).should be_empty
end
end
it "should return an empty string on empty or nil users" do
provider.members_to_s([]).should be_empty
provider.members_to_s(nil).should be_empty
end
it "should return a user string like DOMAIN\\USER" do
provider.members_to_s(['user1']).should == '.\user1'
end
it "should return a user string like DOMAIN\\USER,DOMAIN2\\USER2" do
provider.members_to_s(['user1', 'user2']).should == '.\user1,.\user2'
end
end
end
describe "when managing members" do
it "should be able to provide a list of members" do
provider.group.stubs(:members).returns ['user1', 'user2', 'user3']
provider.members.should =~ ['user1', 'user2', 'user3']
end
- it "should be able to set group members", :if => Puppet.features.microsoft_windows? do
+ it "should be able to set group members" do
provider.group.stubs(:members).returns ['user1', 'user2']
member_sids = [
stub(:account => 'user1', :domain => 'testcomputername'),
stub(:account => 'user2', :domain => 'testcomputername'),
stub(:account => 'user3', :domain => 'testcomputername'),
]
provider.group.stubs(:member_sids).returns(member_sids[0..1])
- Puppet::Util::Windows::Security.expects(:name_to_sid_object).with('user2').returns(member_sids[1])
- Puppet::Util::Windows::Security.expects(:name_to_sid_object).with('user3').returns(member_sids[2])
+ Puppet::Util::Windows::SID.expects(:name_to_sid_object).with('user2').returns(member_sids[1])
+ Puppet::Util::Windows::SID.expects(:name_to_sid_object).with('user3').returns(member_sids[2])
provider.group.expects(:remove_member_sids).with(member_sids[0])
provider.group.expects(:add_member_sids).with(member_sids[2])
provider.members = ['user2', 'user3']
end
end
describe 'when creating groups' do
it "should be able to create a group" do
resource[:members] = ['user1', 'user2']
group = stub 'group'
- Puppet::Util::ADSI::Group.expects(:create).with('testers').returns group
+ Puppet::Util::Windows::ADSI::Group.expects(:create).with('testers').returns group
create = sequence('create')
group.expects(:commit).in_sequence(create)
group.expects(:set_members).with(['user1', 'user2']).in_sequence(create)
provider.create
end
it 'should not create a group if a user by the same name exists' do
- Puppet::Util::ADSI::Group.expects(:create).with('testers').raises( Puppet::Error.new("Cannot create group if user 'testers' exists.") )
+ Puppet::Util::Windows::ADSI::Group.expects(:create).with('testers').raises( Puppet::Error.new("Cannot create group if user 'testers' exists.") )
expect{ provider.create }.to raise_error( Puppet::Error,
/Cannot create group if user 'testers' exists./ )
end
it 'should commit a newly created group' do
provider.group.expects( :commit )
provider.flush
end
end
it "should be able to test whether a group exists" do
- Puppet::Util::ADSI.stubs(:sid_uri_safe).returns(nil)
- Puppet::Util::ADSI.stubs(:connect).returns stub('connection')
+ Puppet::Util::Windows::ADSI.stubs(:sid_uri_safe).returns(nil)
+ Puppet::Util::Windows::ADSI.stubs(:connect).returns stub('connection')
provider.should be_exists
- Puppet::Util::ADSI.stubs(:connect).returns nil
+ Puppet::Util::Windows::ADSI.stubs(:connect).returns nil
provider.should_not be_exists
end
it "should be able to delete a group" do
connection.expects(:Delete).with('group', 'testers')
provider.delete
end
- it "should report the group's SID as gid", :if => Puppet.features.microsoft_windows? do
- Puppet::Util::Windows::Security.expects(:name_to_sid).with('testers').returns('S-1-5-32-547')
+ it "should report the group's SID as gid" do
+ Puppet::Util::Windows::SID.expects(:name_to_sid).with('testers').returns('S-1-5-32-547')
provider.gid.should == 'S-1-5-32-547'
end
it "should fail when trying to manage the gid property" do
provider.expects(:fail).with { |msg| msg =~ /gid is read-only/ }
provider.send(:gid=, 500)
end
- it "should prefer the domain component from the resolved SID", :if => Puppet.features.microsoft_windows? do
+ it "should prefer the domain component from the resolved SID" do
provider.members_to_s(['.\Administrators']).should == 'BUILTIN\Administrators'
end
end
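Similarly, the ADSI helpers used here now live under Puppet::Util::Windows::ADSI. A minimal sketch of the group calls the spec stubs (create, commit, set_members), assuming a Windows host; note it would really create a local group if run:
````
require 'puppet/util/windows'

# same call sequence the spec stubs above; 'testers' and the member names come from the spec
group = Puppet::Util::Windows::ADSI::Group.create('testers')
group.commit
group.set_members(['user1', 'user2'])
````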
diff --git a/spec/unit/provider/package/gem_spec.rb b/spec/unit/provider/package/gem_spec.rb
index f382335cd..23aa2798d 100755
--- a/spec/unit/provider/package/gem_spec.rb
+++ b/spec/unit/provider/package/gem_spec.rb
@@ -1,177 +1,187 @@
#! /usr/bin/env ruby
require 'spec_helper'
provider_class = Puppet::Type.type(:package).provider(:gem)
describe provider_class do
let(:resource) do
Puppet::Type.type(:package).new(
:name => 'myresource',
:ensure => :installed
)
end
let(:provider) do
provider = provider_class.new
provider.resource = resource
provider
end
before :each do
resource.provider = provider
end
describe "when installing" do
it "should use the path to the gem" do
provider_class.stubs(:command).with(:gemcmd).returns "/my/gem"
provider.expects(:execute).with { |args| args[0] == "/my/gem" }.returns ""
provider.install
end
it "should specify that the gem is being installed" do
provider.expects(:execute).with { |args| args[1] == "install" }.returns ""
provider.install
end
it "should specify that documentation should not be included" do
provider.expects(:execute).with { |args| args[2] == "--no-rdoc" }.returns ""
provider.install
end
it "should specify that RI should not be included" do
provider.expects(:execute).with { |args| args[3] == "--no-ri" }.returns ""
provider.install
end
it "should specify the package name" do
provider.expects(:execute).with { |args| args[4] == "myresource" }.returns ""
provider.install
end
it "should not append install_options by default" do
provider.expects(:execute).with { |args| args.length == 5 }.returns ""
provider.install
end
it "should allow setting an install_options parameter" do
resource[:install_options] = [ '--force', {'--bindir' => '/usr/bin' } ]
provider.expects(:execute).with { |args| args[5] == '--force' && args[6] == '--bindir=/usr/bin' }.returns ''
provider.install
end
describe "when a source is specified" do
describe "as a normal file" do
it "should use the file name instead of the gem name" do
resource[:source] = "/my/file"
provider.expects(:execute).with { |args| args[2] == "/my/file" }.returns ""
provider.install
end
end
describe "as a file url" do
it "should use the file name instead of the gem name" do
resource[:source] = "file:///my/file"
provider.expects(:execute).with { |args| args[2] == "/my/file" }.returns ""
provider.install
end
end
describe "as a puppet url" do
it "should fail" do
resource[:source] = "puppet://my/file"
lambda { provider.install }.should raise_error(Puppet::Error)
end
end
describe "as a non-file and non-puppet url" do
it "should treat the source as a gem repository" do
resource[:source] = "http://host/my/file"
provider.expects(:execute).with { |args| args[2..4] == ["--source", "http://host/my/file", "myresource"] }.returns ""
provider.install
end
end
describe "with an invalid uri" do
it "should fail" do
URI.expects(:parse).raises(ArgumentError)
resource[:source] = "http:::::uppet:/:/my/file"
lambda { provider.install }.should raise_error(Puppet::Error)
end
end
end
end
describe "#latest" do
it "should return a single value for 'latest'" do
#gemlist is used for retrieving both local and remote version numbers, and there are cases
# (particularly local) where it makes sense for it to return an array. That doesn't make
# sense for '#latest', though.
provider.class.expects(:gemlist).with({ :justme => 'myresource'}).returns({
:name => 'myresource',
:ensure => ["3.0"],
:provider => :gem,
})
provider.latest.should == "3.0"
end
it "should list from the specified source repository" do
resource[:source] = "http://foo.bar.baz/gems"
provider.class.expects(:gemlist).
with({:justme => 'myresource', :source => "http://foo.bar.baz/gems"}).
returns({
:name => 'myresource',
:ensure => ["3.0"],
:provider => :gem,
})
provider.latest.should == "3.0"
end
end
describe "#instances" do
before do
provider_class.stubs(:command).with(:gemcmd).returns "/my/gem"
end
it "should return an empty array when no gems installed" do
provider_class.expects(:execute).with(%w{/my/gem list --local}).returns("\n")
provider_class.instances.should == []
end
it "should return ensure values as an array of installed versions" do
provider_class.expects(:execute).with(%w{/my/gem list --local}).returns <<-HEREDOC.gsub(/ /, '')
systemu (1.2.0)
vagrant (0.8.7, 0.6.9)
HEREDOC
provider_class.instances.map {|p| p.properties}.should == [
{:ensure => ["1.2.0"], :provider => :gem, :name => 'systemu'},
{:ensure => ["0.8.7", "0.6.9"], :provider => :gem, :name => 'vagrant'}
]
end
it "should ignore platform specifications" do
provider_class.expects(:execute).with(%w{/my/gem list --local}).returns <<-HEREDOC.gsub(/ /, '')
systemu (1.2.0)
nokogiri (1.6.1 ruby java x86-mingw32 x86-mswin32-60, 1.4.4.1 x86-mswin32)
HEREDOC
provider_class.instances.map {|p| p.properties}.should == [
{:ensure => ["1.2.0"], :provider => :gem, :name => 'systemu'},
{:ensure => ["1.6.1", "1.4.4.1"], :provider => :gem, :name => 'nokogiri'}
]
end
it "should not fail when an unmatched line is returned" do
provider_class.expects(:execute).with(%w{/my/gem list --local}).
returns(File.read(my_fixture('line-with-1.8.5-warning')))
provider_class.instances.map {|p| p.properties}.
should == [{:provider=>:gem, :ensure=>["0.3.2"], :name=>"columnize"},
{:provider=>:gem, :ensure=>["1.1.3"], :name=>"diff-lcs"},
{:provider=>:gem, :ensure=>["0.0.1"], :name=>"metaclass"},
{:provider=>:gem, :ensure=>["0.10.5"], :name=>"mocha"},
{:provider=>:gem, :ensure=>["0.8.7"], :name=>"rake"},
{:provider=>:gem, :ensure=>["2.9.0"], :name=>"rspec-core"},
{:provider=>:gem, :ensure=>["2.9.1"], :name=>"rspec-expectations"},
{:provider=>:gem, :ensure=>["2.9.0"], :name=>"rspec-mocks"},
{:provider=>:gem, :ensure=>["0.9.0"], :name=>"rubygems-bundler"},
{:provider=>:gem, :ensure=>["1.11.3.3"], :name=>"rvm"}]
end
end
+
+ describe "listing gems" do
+ describe "searching for a single package" do
+ it "searches for an exact match" do
+ provider_class.expects(:execute).with(includes('^bundler$')).returns(File.read(my_fixture('gem-list-single-package')))
+ expected = {:name => 'bundler', :ensure => %w[1.6.2], :provider => :gem}
+ expect(provider_class.gemlist({:justme => 'bundler'})).to eq(expected)
+ end
+ end
+ end
end
diff --git a/spec/unit/provider/package/openbsd_spec.rb b/spec/unit/provider/package/openbsd_spec.rb
index 8d4f079fe..712f9cfda 100755
--- a/spec/unit/provider/package/openbsd_spec.rb
+++ b/spec/unit/provider/package/openbsd_spec.rb
@@ -1,312 +1,369 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'stringio'
provider_class = Puppet::Type.type(:package).provider(:openbsd)
describe provider_class do
let(:package) { Puppet::Type.type(:package).new(:name => 'bash', :provider => 'openbsd') }
let(:provider) { provider_class.new(package) }
def expect_read_from_pkgconf(lines)
pkgconf = stub(:readlines => lines)
Puppet::FileSystem.expects(:exist?).with('/etc/pkg.conf').returns(true)
File.expects(:open).with('/etc/pkg.conf', 'rb').returns(pkgconf)
end
def expect_pkgadd_with_source(source)
provider.expects(:pkgadd).with do |fullname|
ENV.should_not be_key('PKG_PATH')
- fullname.should == source
+ fullname.should == [source]
end
end
def expect_pkgadd_with_env_and_name(source, &block)
ENV.should_not be_key('PKG_PATH')
provider.expects(:pkgadd).with do |fullname|
ENV.should be_key('PKG_PATH')
ENV['PKG_PATH'].should == source
- fullname.should == provider.resource[:name]
+ fullname.should == [provider.resource[:name]]
end
provider.expects(:execpipe).with(['/bin/pkg_info', '-I', provider.resource[:name]]).yields('')
yield
ENV.should_not be_key('PKG_PATH')
end
+ describe 'provider features' do
+ it { should be_installable }
+ it { should be_install_options }
+ it { should be_uninstallable }
+ it { should be_uninstall_options }
+ it { should be_upgradeable }
+ it { should be_versionable }
+ end
+
before :each do
# Stub some provider methods to avoid needing the actual software
# installed, so we can test on whatever platform we want.
provider_class.stubs(:command).with(:pkginfo).returns('/bin/pkg_info')
provider_class.stubs(:command).with(:pkgadd).returns('/bin/pkg_add')
provider_class.stubs(:command).with(:pkgdelete).returns('/bin/pkg_delete')
end
- context "::instances" do
+ context "#instances" do
it "should return nil if execution failed" do
provider_class.expects(:execpipe).raises(Puppet::ExecutionFailure, 'wawawa')
provider_class.instances.should be_nil
end
it "should return the empty set if no packages are listed" do
provider_class.expects(:execpipe).with(%w{/bin/pkg_info -a}).yields(StringIO.new(''))
provider_class.instances.should be_empty
end
it "should return all packages when invoked" do
fixture = File.read(my_fixture('pkginfo.list'))
provider_class.expects(:execpipe).with(%w{/bin/pkg_info -a}).yields(fixture)
provider_class.instances.map(&:name).sort.should ==
%w{bash bzip2 expat gettext libiconv lzo openvpn python vim wget}.sort
end
it "should return all flavors if set" do
fixture = File.read(my_fixture('pkginfo_flavors.list'))
provider_class.expects(:execpipe).with(%w{/bin/pkg_info -a}).yields(fixture)
instances = provider_class.instances.map {|p| {:name => p.get(:name),
:ensure => p.get(:ensure), :flavor => p.get(:flavor)}}
instances.size.should == 2
- instances[0].should == {:name => 'bash', :ensure => '3.1.17', :flavor => 'static'}
+ instances[0].should == {:name => 'bash', :ensure => '3.1.17', :flavor => 'static'}
instances[1].should == {:name => 'vim', :ensure => '7.0.42', :flavor => 'no_x11'}
end
end
context "#install" do
it "should fail if the resource doesn't have a source" do
Puppet::FileSystem.expects(:exist?).with('/etc/pkg.conf').returns(false)
expect {
provider.install
}.to raise_error(Puppet::Error, /must specify a package source/)
end
it "should fail if /etc/pkg.conf exists, but is not readable" do
Puppet::FileSystem.expects(:exist?).with('/etc/pkg.conf').returns(true)
File.expects(:open).with('/etc/pkg.conf', 'rb').raises(Errno::EACCES)
expect {
provider.install
}.to raise_error(Errno::EACCES, /Permission denied/)
end
it "should fail if /etc/pkg.conf exists, but there is no installpath" do
expect_read_from_pkgconf([])
expect {
provider.install
}.to raise_error(Puppet::Error, /No valid installpath found in \/etc\/pkg\.conf and no source was set/)
end
it "should install correctly when given a directory-unlike source" do
- source = '/whatever.pkg'
+ source = '/whatever.tgz'
provider.resource[:source] = source
expect_pkgadd_with_source(source)
provider.install
end
it "should install correctly when given a directory-like source" do
source = '/whatever/'
provider.resource[:source] = source
expect_pkgadd_with_env_and_name(source) do
provider.install
end
end
it "should install correctly when given a CDROM installpath" do
dir = '/mnt/cdrom/5.2/packages/amd64/'
expect_read_from_pkgconf(["installpath = #{dir}"])
expect_pkgadd_with_env_and_name(dir) do
provider.install
end
end
it "should install correctly when given a ftp mirror" do
url = 'ftp://your.ftp.mirror/pub/OpenBSD/5.2/packages/amd64/'
expect_read_from_pkgconf(["installpath = #{url}"])
expect_pkgadd_with_env_and_name(url) do
provider.install
end
end
it "should set the resource's source parameter" do
url = 'ftp://your.ftp.mirror/pub/OpenBSD/5.2/packages/amd64/'
expect_read_from_pkgconf(["installpath = #{url}"])
expect_pkgadd_with_env_and_name(url) do
provider.install
end
provider.resource[:source].should == url
end
it "should strip leading whitespace in installpath" do
dir = '/one/'
lines = ["# Notice the extra spaces after the ='s\n",
"installpath = #{dir}\n",
"# And notice how each line ends with a newline\n"]
expect_read_from_pkgconf(lines)
expect_pkgadd_with_env_and_name(dir) do
provider.install
end
end
it "should not require spaces around the equals" do
dir = '/one/'
lines = ["installpath=#{dir}"]
expect_read_from_pkgconf(lines)
expect_pkgadd_with_env_and_name(dir) do
provider.install
end
end
it "should be case-insensitive" do
dir = '/one/'
lines = ["INSTALLPATH = #{dir}"]
expect_read_from_pkgconf(lines)
expect_pkgadd_with_env_and_name(dir) do
provider.install
end
end
it "should ignore unknown keywords" do
dir = '/one/'
lines = ["foo = bar\n",
"installpath = #{dir}\n"]
expect_read_from_pkgconf(lines)
expect_pkgadd_with_env_and_name(dir) do
provider.install
end
end
it "should preserve trailing spaces" do
dir = '/one/ '
lines = ["installpath = #{dir}"]
expect_read_from_pkgconf(lines)
expect_pkgadd_with_source(dir)
provider.install
end
it "should append installpath" do
urls = ["ftp://your.ftp.mirror/pub/OpenBSD/5.2/packages/amd64/",
"http://another.ftp.mirror/pub/OpenBSD/5.2/packages/amd64/"]
lines = ["installpath = #{urls[0]}\n",
"installpath += #{urls[1]}\n"]
expect_read_from_pkgconf(lines)
expect_pkgadd_with_env_and_name(urls.join(":")) do
provider.install
end
end
it "should handle append on first installpath" do
url = "ftp://your.ftp.mirror/pub/OpenBSD/5.2/packages/amd64/"
lines = ["installpath += #{url}\n"]
expect_read_from_pkgconf(lines)
expect_pkgadd_with_env_and_name(url) do
provider.install
end
end
%w{ installpath installpath= installpath+=}.each do |line|
it "should reject '#{line}'" do
expect_read_from_pkgconf([line])
expect {
provider.install
}.to raise_error(Puppet::Error, /No valid installpath found in \/etc\/pkg\.conf and no source was set/)
end
end
+
+ it 'should use install_options as Array' do
+ provider.resource[:source] = '/tma1/'
+ provider.resource[:install_options] = ['-r', '-z']
+ provider.expects(:pkgadd).with(['-r', '-z', 'bash'])
+ provider.install
+ end
+ end
+
+ context "#latest" do
+ before do
+ provider.resource[:source] = '/tmp/tcsh.tgz'
+ provider.resource[:name] = 'tcsh'
+ provider.stubs(:pkginfo).with('tcsh')
+ end
+
+ it "should return the ensure value if the package is already installed" do
+ provider.stubs(:properties).returns({:ensure => '4.2.45'})
+ provider.stubs(:pkginfo).with('-Q', 'tcsh')
+ provider.latest.should == '4.2.45'
+ end
+
+ it "should recognize a new version" do
+ pkginfo_query = 'tcsh-6.18.01p1'
+ provider.stubs(:pkginfo).with('-Q', 'tcsh').returns(pkginfo_query)
+ provider.latest.should == '6.18.01p1'
+ end
+
+ it "should recognize a newer version" do
+ provider.stubs(:properties).returns({:ensure => '1.6.8'})
+ pkginfo_query = 'tcsh-1.6.10'
+ provider.stubs(:pkginfo).with('-Q', 'tcsh').returns(pkginfo_query)
+ provider.latest.should == '1.6.10'
+ end
+
+ it "should recognize a package that is already the newest" do
+ pkginfo_query = 'tcsh-6.18.01p0 (installed)'
+ provider.stubs(:pkginfo).with('-Q', 'tcsh').returns(pkginfo_query)
+ provider.latest.should == '6.18.01p0'
+ end
end
context "#get_version" do
it "should return nil if execution fails" do
provider.expects(:execpipe).raises(Puppet::ExecutionFailure, 'wawawa')
provider.get_version.should be_nil
end
it "should return the package version if in the output" do
- fixture = File.read(my_fixture('pkginfo.list'))
- provider.expects(:execpipe).with(%w{/bin/pkg_info -I bash}).yields(fixture)
+ output = 'bash-3.1.17 GNU Bourne Again Shell'
+ provider.expects(:execpipe).with(%w{/bin/pkg_info -I bash}).yields(output)
provider.get_version.should == '3.1.17'
end
it "should return the empty string if the package is not present" do
provider.resource[:name] = 'zsh'
provider.expects(:execpipe).with(%w{/bin/pkg_info -I zsh}).yields(StringIO.new(''))
provider.get_version.should == ''
end
end
context "#query" do
it "should return the installed version if present" do
fixture = File.read(my_fixture('pkginfo.detail'))
provider.expects(:pkginfo).with('bash').returns(fixture)
provider.query.should == { :ensure => '3.1.17' }
end
it "should return nothing if not present" do
provider.resource[:name] = 'zsh'
provider.expects(:pkginfo).with('zsh').returns('')
provider.query.should be_nil
end
end
context "#install_options" do
it "should return nill by default" do
provider.install_options.should be_nil
end
it "should return install_options when set" do
provider.resource[:install_options] = ['-n']
provider.resource[:install_options].should == ['-n']
end
it "should return multiple install_options when set" do
provider.resource[:install_options] = ['-L', '/opt/puppet']
provider.resource[:install_options].should == ['-L', '/opt/puppet']
end
it 'should return install_options when set as hash' do
provider.resource[:install_options] = { '-Darch' => 'vax' }
provider.install_options.should == ['-Darch=vax']
end
end
-
+
context "#uninstall_options" do
it "should return nill by default" do
provider.uninstall_options.should be_nil
end
it "should return uninstall_options when set" do
provider.resource[:uninstall_options] = ['-n']
provider.resource[:uninstall_options].should == ['-n']
end
it "should return multiple uninstall_options when set" do
provider.resource[:uninstall_options] = ['-q', '-c']
provider.resource[:uninstall_options].should == ['-q', '-c']
end
it 'should return uninstall_options when set as hash' do
provider.resource[:uninstall_options] = { '-Dbaddepend' => '1' }
provider.uninstall_options.should == ['-Dbaddepend=1']
end
end
-
+
context "#uninstall" do
describe 'when uninstalling' do
it 'should use erase to purge' do
provider.expects(:pkgdelete).with('-c', '-q', 'bash')
provider.purge
end
end
+
+ describe 'with uninstall_options' do
+ it 'should use uninstall_options as Array' do
+ provider.resource[:uninstall_options] = ['-q', '-c']
+ provider.expects(:pkgdelete).with(['-q', '-c'], 'bash')
+ provider.uninstall
+ end
+ end
end
end
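
The new install_options / uninstall_options examples above expect hash-style options such as `{'-Darch' => 'vax'}` to be flattened into `-Darch=vax` style switches while plain string options pass through untouched. A minimal sketch of that flattening, assuming a free-standing helper rather than the provider's actual code:

````
# Hypothetical helper (not the provider's real implementation): flatten an
# install_options/uninstall_options array, turning hash entries into
# "key=value" switches and leaving plain strings as they are.
def join_options(options)
  return nil if options.nil?
  options.flat_map do |opt|
    opt.is_a?(Hash) ? opt.map { |key, value| "#{key}=#{value}" } : opt
  end
end

join_options(['-r', { '-Darch' => 'vax' }])  # => ["-r", "-Darch=vax"]
join_options(nil)                            # => nil
````
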
diff --git a/spec/unit/provider/package/pacman_spec.rb b/spec/unit/provider/package/pacman_spec.rb
index 789fd88fb..aca7c5d64 100755
--- a/spec/unit/provider/package/pacman_spec.rb
+++ b/spec/unit/provider/package/pacman_spec.rb
@@ -1,295 +1,314 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'stringio'
-provider = Puppet::Type.type(:package).provider(:pacman)
-describe provider do
+describe Puppet::Type.type(:package).provider(:pacman) do
let(:no_extra_options) { { :failonfail => true, :combine => true, :custom_environment => {} } }
let(:executor) { Puppet::Util::Execution }
let(:resolver) { Puppet::Util }
+ let(:resource) { Puppet::Type.type(:package).new(:name => 'package', :provider => 'pacman') }
+ let(:provider) { described_class.new(resource) }
+
before do
resolver.stubs(:which).with('/usr/bin/pacman').returns('/usr/bin/pacman')
- provider.stubs(:which).with('/usr/bin/pacman').returns('/usr/bin/pacman')
+ described_class.stubs(:which).with('/usr/bin/pacman').returns('/usr/bin/pacman')
resolver.stubs(:which).with('/usr/bin/yaourt').returns('/usr/bin/yaourt')
- provider.stubs(:which).with('/usr/bin/yaourt').returns('/usr/bin/yaourt')
- @resource = Puppet::Type.type(:package).new(:name => 'package')
- @provider = provider.new(@resource)
+ described_class.stubs(:which).with('/usr/bin/yaourt').returns('/usr/bin/yaourt')
end
describe "when installing" do
before do
- @provider.stubs(:query).returns({
+ provider.stubs(:query).returns({
:ensure => '1.0'
})
end
- it "should call pacman to install the right package quietly" do
-
- if @provider.yaourt?
- args = ['/usr/bin/yaourt', '--noconfirm', '-S', @resource[:name]]
- else
- args = ['/usr/bin/pacman', '--noconfirm', '--noprogressbar', '-Sy', @resource[:name]]
- end
-
- executor.
- expects(:execute).
- at_least_once.
- with(args, no_extra_options).
- returns ''
+ it "should call pacman to install the right package quietly when yaourt is not installed" do
+ provider.stubs(:yaourt?).returns(false)
+ args = ['--noconfirm', '--noprogressbar', '-Sy', resource[:name]]
+ provider.expects(:pacman).at_least_once.with(*args).returns ''
+ provider.install
+ end
- @provider.install
+ it "should call yaourt to install the right package quietly when yaourt is installed" do
+ provider.stubs(:yaourt?).returns(true)
+ args = ['--noconfirm', '-S', resource[:name]]
+ provider.expects(:yaourt).at_least_once.with(*args).returns ''
+ provider.install
end
it "should raise an ExecutionFailure if the installation failed" do
executor.stubs(:execute).returns("")
- @provider.expects(:query).returns(nil)
+ provider.expects(:query).returns(nil)
- lambda { @provider.install }.should raise_exception(Puppet::ExecutionFailure)
+ lambda { provider.install }.should raise_exception(Puppet::ExecutionFailure)
end
- context "when :source is specified" do
- before :each do
- @install = sequence("install")
+ describe "and install_options are given" do
+ before do
+ resource[:install_options] = ['-x', {'--arg' => 'value'}]
+ end
+
+ it "should call pacman to install the right package quietly when yaourt is not installed" do
+ provider.stubs(:yaourt?).returns(false)
+ args = ['--noconfirm', '--noprogressbar', '-x', '--arg=value', '-Sy', resource[:name]]
+ provider.expects(:pacman).at_least_once.with(*args).returns ''
+ provider.install
end
+ it "should call yaourt to install the right package quietly when yaourt is installed" do
+ provider.stubs(:yaourt?).returns(true)
+ args = ['--noconfirm', '-x', '--arg=value', '-S', resource[:name]]
+ provider.expects(:yaourt).at_least_once.with(*args).returns ''
+ provider.install
+ end
+ end
+
+ context "when :source is specified" do
+ let(:install_seq) { sequence("install") }
+
context "recognizable by pacman" do
%w{
/some/package/file
http://some.package.in/the/air
ftp://some.package.in/the/air
}.each do |source|
it "should install #{source} directly" do
- @resource[:source] = source
+ resource[:source] = source
executor.expects(:execute).
with(all_of(includes("-Sy"), includes("--noprogressbar")), no_extra_options).
- in_sequence(@install).
+ in_sequence(install_seq).
returns("")
executor.expects(:execute).
with(all_of(includes("-U"), includes(source)), no_extra_options).
- in_sequence(@install).
+ in_sequence(install_seq).
returns("")
- @provider.install
+ provider.install
end
end
end
context "as a file:// URL" do
+ let(:actual_file_path) { "/some/package/file" }
+
before do
- @package_file = "file:///some/package/file"
- @actual_file_path = "/some/package/file"
- @resource[:source] = @package_file
+ resource[:source] = "file:///some/package/file"
end
it "should install from the path segment of the URL" do
executor.expects(:execute).
with(all_of(includes("-Sy"),
includes("--noprogressbar"),
includes("--noconfirm")),
no_extra_options).
- in_sequence(@install).
+ in_sequence(install_seq).
returns("")
executor.expects(:execute).
- with(all_of(includes("-U"), includes(@actual_file_path)), no_extra_options).
- in_sequence(@install).
+ with(all_of(includes("-U"), includes(actual_file_path)), no_extra_options).
+ in_sequence(install_seq).
returns("")
- @provider.install
+ provider.install
end
end
context "as a puppet URL" do
before do
- @resource[:source] = "puppet://server/whatever"
+ resource[:source] = "puppet://server/whatever"
end
it "should fail" do
- lambda { @provider.install }.should raise_error(Puppet::Error)
+ lambda { provider.install }.should raise_error(Puppet::Error)
end
end
context "as a malformed URL" do
before do
- @resource[:source] = "blah://"
+ resource[:source] = "blah://"
end
it "should fail" do
- lambda { @provider.install }.should raise_error(Puppet::Error)
+ lambda { provider.install }.should raise_error(Puppet::Error)
end
end
end
end
describe "when updating" do
it "should call install" do
- @provider.expects(:install).returns("install return value")
- @provider.update.should == "install return value"
+ provider.expects(:install).returns("install return value")
+ provider.update.should == "install return value"
end
end
describe "when uninstalling" do
it "should call pacman to remove the right package quietly" do
- executor.
- expects(:execute).
- with(["/usr/bin/pacman", "--noconfirm", "--noprogressbar", "-R", @resource[:name]], no_extra_options).
- returns ""
+ args = ["/usr/bin/pacman", "--noconfirm", "--noprogressbar", "-R", resource[:name]]
+ executor.expects(:execute).with(args, no_extra_options).returns ""
+ provider.uninstall
+ end
- @provider.uninstall
+ it "adds any uninstall_options" do
+ resource[:uninstall_options] = ['-x', {'--arg' => 'value'}]
+ args = ["/usr/bin/pacman", "--noconfirm", "--noprogressbar", "-x", "--arg=value", "-R", resource[:name]]
+ executor.expects(:execute).with(args, no_extra_options).returns ""
+ provider.uninstall
end
end
describe "when querying" do
it "should query pacman" do
executor.
expects(:execute).
- with(["/usr/bin/pacman", "-Qi", @resource[:name]], no_extra_options)
- @provider.query
+ with(["/usr/bin/pacman", "-Qi", resource[:name]], no_extra_options)
+ provider.query
end
it "should return the version" do
query_output = <<EOF
Name : package
Version : 1.01.3-2
URL : http://www.archlinux.org/pacman/
Licenses : GPL
Groups : base
Provides : None
Depends On : bash libarchive>=2.7.1 libfetch>=2.25 pacman-mirrorlist
Optional Deps : fakeroot: for makepkg usage as normal user
curl: for rankmirrors usage
Required By : None
Conflicts With : None
Replaces : None
Installed Size : 2352.00 K
Packager : Dan McGee <dan@archlinux.org>
Architecture : i686
Build Date : Sat 22 Jan 2011 03:56:41 PM EST
Install Date : Thu 27 Jan 2011 06:45:49 AM EST
Install Reason : Explicitly installed
Install Script : Yes
Description : A library-based package manager with dependency support
EOF
executor.expects(:execute).returns(query_output)
- @provider.query.should == {:ensure => "1.01.3-2"}
+ provider.query.should == {:ensure => "1.01.3-2"}
end
it "should return a nil if the package isn't found" do
executor.expects(:execute).returns("")
- @provider.query.should be_nil
+ provider.query.should be_nil
end
it "should return a hash indicating that the package is missing on error" do
executor.expects(:execute).raises(Puppet::ExecutionFailure.new("ERROR!"))
- @provider.query.should == {
+ provider.query.should == {
:ensure => :purged,
:status => 'missing',
- :name => @resource[:name],
+ :name => resource[:name],
:error => 'ok',
}
end
end
describe "when fetching a package list" do
it "should retrieve installed packages" do
- provider.expects(:execpipe).with(["/usr/bin/pacman", '-Q'])
- provider.installedpkgs
+ described_class.expects(:execpipe).with(["/usr/bin/pacman", '-Q'])
+ described_class.installedpkgs
end
it "should retrieve installed package groups" do
- provider.expects(:execpipe).with(["/usr/bin/pacman", '-Qg'])
- provider.installedgroups
+ described_class.expects(:execpipe).with(["/usr/bin/pacman", '-Qg'])
+ described_class.installedgroups
end
it "should return installed packages with their versions" do
- provider.expects(:execpipe).yields(StringIO.new("package1 1.23-4\npackage2 2.00\n"))
- packages = provider.installedpkgs
+ described_class.expects(:execpipe).yields(StringIO.new("package1 1.23-4\npackage2 2.00\n"))
+ packages = described_class.installedpkgs
packages.length.should == 2
packages[0].properties.should == {
:provider => :pacman,
:ensure => '1.23-4',
:name => 'package1'
}
packages[1].properties.should == {
:provider => :pacman,
:ensure => '2.00',
:name => 'package2'
}
end
it "should return installed groups with a dummy version" do
- provider.expects(:execpipe).yields(StringIO.new("group1 pkg1\ngroup1 pkg2"))
- groups = provider.installedgroups
+ described_class.expects(:execpipe).yields(StringIO.new("group1 pkg1\ngroup1 pkg2"))
+ groups = described_class.installedgroups
groups.length.should == 1
groups[0].properties.should == {
:provider => :pacman,
:ensure => '1',
:name => 'group1'
}
end
it "should return nil on error" do
- provider.expects(:execpipe).twice.raises(Puppet::ExecutionFailure.new("ERROR!"))
- provider.instances.should be_nil
+ described_class.expects(:execpipe).twice.raises(Puppet::ExecutionFailure.new("ERROR!"))
+ described_class.instances.should be_nil
end
it "should warn on invalid input" do
- provider.expects(:execpipe).yields(StringIO.new("blah"))
- provider.expects(:warning).with("Failed to match line blah")
- provider.installedpkgs == []
+ described_class.expects(:execpipe).yields(StringIO.new("blah"))
+ described_class.expects(:warning).with("Failed to match line blah")
+ described_class.installedpkgs == []
end
end
describe "when determining the latest version" do
it "should refresh package list" do
get_latest_version = sequence("get_latest_version")
executor.
expects(:execute).
in_sequence(get_latest_version).
with(['/usr/bin/pacman', '-Sy'], no_extra_options)
executor.
stubs(:execute).
in_sequence(get_latest_version).
returns("")
- @provider.latest
+ provider.latest
end
it "should get query pacman for the latest version" do
get_latest_version = sequence("get_latest_version")
executor.
stubs(:execute).
in_sequence(get_latest_version)
executor.
expects(:execute).
in_sequence(get_latest_version).
- with(['/usr/bin/pacman', '-Sp', '--print-format', '%v', @resource[:name]], no_extra_options).
+ with(['/usr/bin/pacman', '-Sp', '--print-format', '%v', resource[:name]], no_extra_options).
returns("")
- @provider.latest
+ provider.latest
end
it "should return the version number from pacman" do
executor.
expects(:execute).
at_least_once().
returns("1.00.2-3\n")
- @provider.latest.should == "1.00.2-3"
+ provider.latest.should == "1.00.2-3"
end
end
end
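
The rewritten pacman examples assert a consistent argument layout: the base quiet flags first, then any install_options, then the operation switch and the package name, with yaourt used in place of pacman when it is available. A rough sketch of that ordering, using illustrative names only:

````
# Hypothetical sketch of the argument ordering the pacman install examples
# check: base flags, then install_options, then the operation and package.
def pacman_install_args(name, install_options, yaourt_available)
  base      = yaourt_available ? ['--noconfirm'] : ['--noconfirm', '--noprogressbar']
  operation = yaourt_available ? '-S' : '-Sy'
  base + Array(install_options) + [operation, name]
end

pacman_install_args('package', ['-x', '--arg=value'], false)
# => ["--noconfirm", "--noprogressbar", "-x", "--arg=value", "-Sy", "package"]
````
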
diff --git a/spec/unit/provider/package/windows/package_spec.rb b/spec/unit/provider/package/windows/package_spec.rb
index 7466be1e9..632fa13a6 100755
--- a/spec/unit/provider/package/windows/package_spec.rb
+++ b/spec/unit/provider/package/windows/package_spec.rb
@@ -1,126 +1,141 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet/provider/package/windows/package'
describe Puppet::Provider::Package::Windows::Package do
subject { described_class }
let(:hklm) { 'HKEY_LOCAL_MACHINE' }
let(:hkcu) { 'HKEY_CURRENT_USER' }
let(:path) { 'Software\Microsoft\Windows\CurrentVersion\Uninstall' }
let(:key) { mock('key', :name => "#{hklm}\\#{path}\\Google") }
let(:package) { mock('package') }
context '::each' do
it 'should generate an empty enumeration' do
subject.expects(:with_key)
subject.to_a.should be_empty
end
it 'should yield each package it finds' do
subject.expects(:with_key).yields(key, {})
Puppet::Provider::Package::Windows::MsiPackage.expects(:from_registry).with('Google', {}).returns(package)
yielded = nil
subject.each do |pkg|
yielded = pkg
end
yielded.should == package
end
end
- context '::with_key' do
+ context '::with_key', :if => Puppet.features.microsoft_windows? do
it 'should search HKLM (64 & 32) and HKCU (64 & 32)' do
seq = sequence('reg')
subject.expects(:open).with(hklm, path, subject::KEY64 | subject::KEY_READ).in_sequence(seq)
subject.expects(:open).with(hklm, path, subject::KEY32 | subject::KEY_READ).in_sequence(seq)
subject.expects(:open).with(hkcu, path, subject::KEY64 | subject::KEY_READ).in_sequence(seq)
subject.expects(:open).with(hkcu, path, subject::KEY32 | subject::KEY_READ).in_sequence(seq)
subject.with_key { |key, values| }
end
- it 'should ignore file not found exceptions', :if => Puppet.features.microsoft_windows? do
- ex = Puppet::Util::Windows::Error.new('Failed to open registry key', Windows::Error::ERROR_FILE_NOT_FOUND)
+ it 'should ignore file not found exceptions' do
+ ex = Puppet::Util::Windows::Error.new('Failed to open registry key', Puppet::Util::Windows::Error::ERROR_FILE_NOT_FOUND)
# make sure we don't stop after the first exception
subject.expects(:open).times(4).raises(ex)
keys = []
subject.with_key { |key, values| keys << key }
keys.should be_empty
end
- it 'should raise other types of exceptions', :if => Puppet.features.microsoft_windows? do
- ex = Puppet::Util::Windows::Error.new('Failed to open registry key', Windows::Error::ERROR_ACCESS_DENIED)
+ it 'should raise other types of exceptions' do
+ ex = Puppet::Util::Windows::Error.new('Failed to open registry key', Puppet::Util::Windows::Error::ERROR_ACCESS_DENIED)
subject.expects(:open).raises(ex)
expect {
subject.with_key{ |key, values| }
- }.to raise_error(Puppet::Error, /Access is denied/)
+ }.to raise_error(Puppet::Util::Windows::Error, /Access is denied/)
end
end
context '::installer_class' do
it 'should require the source parameter' do
expect {
subject.installer_class({})
}.to raise_error(Puppet::Error, /The source parameter is required when using the Windows provider./)
end
context 'MSI' do
let (:klass) { Puppet::Provider::Package::Windows::MsiPackage }
it 'should accept source ending in .msi' do
subject.installer_class({:source => 'foo.msi'}).should == klass
end
it 'should accept quoted source ending in .msi' do
subject.installer_class({:source => '"foo.msi"'}).should == klass
end
it 'should accept source case insensitively' do
subject.installer_class({:source => '"foo.MSI"'}).should == klass
end
it 'should reject source containing msi in the name' do
expect {
subject.installer_class({:source => 'mymsi.txt'})
}.to raise_error(Puppet::Error, /Don't know how to install 'mymsi.txt'/)
end
end
context 'Unknown' do
it 'should reject packages it does not know about' do
expect {
subject.installer_class({:source => 'basram'})
}.to raise_error(Puppet::Error, /Don't know how to install 'basram'/)
end
end
end
+ context '::munge' do
+ it 'should shell quote strings with spaces and fix forward slashes' do
+ subject.munge('c:/windows/the thing').should == '"c:\windows\the thing"'
+ end
+ it 'should leave properly formatted paths alone' do
+ subject.munge('c:\windows\thething').should == 'c:\windows\thething'
+ end
+ end
+
+ context '::replace_forward_slashes' do
+ it 'should replace forward with back slashes' do
+ subject.replace_forward_slashes('c:/windows/thing/stuff').should == 'c:\windows\thing\stuff'
+ end
+ end
+
context '::quote' do
it 'should shell quote strings with spaces' do
subject.quote('foo bar').should == '"foo bar"'
end
it 'should shell quote strings with spaces and quotes' do
subject.quote('"foo bar" baz').should == '"\"foo bar\" baz"'
end
it 'should not shell quote strings without spaces' do
subject.quote('"foobar"').should == '"foobar"'
end
end
it 'should implement instance methods' do
pkg = subject.new('orca', '5.0')
pkg.name.should == 'orca'
pkg.version.should == '5.0'
end
end
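
The added ::munge and ::replace_forward_slashes examples describe a two-step normalisation of Windows package sources: forward slashes become backslashes, and the result is shell-quoted only when it contains a space. A small sketch of that behaviour, written as stand-alone methods rather than the class's own:

````
# Hypothetical stand-alone sketch of the munge behaviour exercised above.
def replace_forward_slashes(value)
  value.tr('/', '\\')                      # c:/windows -> c:\windows
end

def quote(value)
  return value unless value.include?(' ')  # only quote when a space is present
  %Q["#{value.gsub('"') { '\"' }}"]        # escape embedded quotes, wrap in quotes
end

def munge(value)
  quote(replace_forward_slashes(value))
end

munge('c:/windows/the thing')  # => '"c:\windows\the thing"'
munge('c:\windows\thething')   # => 'c:\windows\thething'
````
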
diff --git a/spec/unit/provider/package/yum_spec.rb b/spec/unit/provider/package/yum_spec.rb
index d7130ee77..20a523be9 100755
--- a/spec/unit/provider/package/yum_spec.rb
+++ b/spec/unit/provider/package/yum_spec.rb
@@ -1,307 +1,308 @@
#! /usr/bin/env ruby
require 'spec_helper'
provider_class = Puppet::Type.type(:package).provider(:yum)
describe provider_class do
let(:name) { 'mypackage' }
let(:resource) do
Puppet::Type.type(:package).new(
:name => name,
:ensure => :installed,
:provider => 'yum'
)
end
let(:provider) do
provider = provider_class.new
provider.resource = resource
provider
end
before do
provider.stubs(:yum).returns 'yum'
provider.stubs(:rpm).returns 'rpm'
provider.stubs(:get).with(:version).returns '1'
provider.stubs(:get).with(:release).returns '1'
provider.stubs(:get).with(:arch).returns 'i386'
end
describe 'provider features' do
it { should be_versionable }
it { should be_install_options }
it { should be_virtual_packages }
end
# provider should respond to the following methods
[:install, :latest, :update, :purge, :install_options].each do |method|
it "should have a(n) #{method}" do
provider.should respond_to(method)
end
end
describe 'when installing' do
before(:each) do
Puppet::Util.stubs(:which).with("rpm").returns("/bin/rpm")
provider.stubs(:which).with("rpm").returns("/bin/rpm")
Puppet::Util::Execution.expects(:execute).with(["/bin/rpm", "--version"], {:combine => true, :custom_environment => {}, :failonfail => true}).returns("4.10.1\n").at_most_once
end
it 'should call yum install for :installed' do
resource.stubs(:should).with(:ensure).returns :installed
provider.expects(:yum).with('-d', '0', '-e', '0', '-y', :list, name)
provider.expects(:yum).with('-d', '0', '-e', '0', '-y', :install, name)
provider.install
end
it 'should use :install to update' do
provider.expects(:install)
provider.update
end
it 'should be able to set version' do
version = '1.2'
resource[:ensure] = version
provider.expects(:yum).with('-d', '0', '-e', '0', '-y', :list, name)
provider.expects(:yum).with('-d', '0', '-e', '0', '-y', :install, "#{name}-#{version}")
provider.stubs(:query).returns :ensure => version
provider.install
end
it 'should be able to downgrade' do
current_version = '1.2'
version = '1.0'
resource[:ensure] = '1.0'
provider.expects(:yum).with('-d', '0', '-e', '0', '-y', :downgrade, "#{name}-#{version}")
provider.stubs(:query).returns(:ensure => current_version).then.returns(:ensure => version)
provider.install
end
it 'should accept install options' do
resource[:ensure] = :installed
resource[:install_options] = ['-t', {'-x' => 'expackage'}]
+ provider.expects(:yum).with('-d', '0', '-e', '0', '-y', ['-t', '-x=expackage'], :list, name)
provider.expects(:yum).with('-d', '0', '-e', '0', '-y', ['-t', '-x=expackage'], :install, name)
provider.install
end
it 'should allow virtual packages' do
resource[:ensure] = :installed
resource[:allow_virtual] = true
provider.expects(:yum).with('-d', '0', '-e', '0', '-y', :list, name).never
provider.expects(:yum).with('-d', '0', '-e', '0', '-y', :install, name)
provider.install
end
end
describe 'when uninstalling' do
it 'should use erase to purge' do
provider.expects(:yum).with('-y', :erase, name)
provider.purge
end
end
it 'should be versionable' do
provider.should be_versionable
end
describe 'determining the latest version available for a package' do
it "passes the value of enablerepo install_options when querying" do
resource[:install_options] = [
{'--enablerepo' => 'contrib'},
{'--enablerepo' => 'centosplus'},
]
provider.stubs(:properties).returns({:ensure => '3.4.5'})
described_class.expects(:latest_package_version).with(name, ['contrib', 'centosplus'], [])
provider.latest
end
it "passes the value of disablerepo install_options when querying" do
resource[:install_options] = [
{'--disablerepo' => 'updates'},
{'--disablerepo' => 'centosplus'},
]
provider.stubs(:properties).returns({:ensure => '3.4.5'})
described_class.expects(:latest_package_version).with(name, [], ['updates', 'centosplus'])
provider.latest
end
describe 'and a newer version is not available' do
before :each do
described_class.stubs(:latest_package_version).with(name, [], []).returns nil
end
it 'raises an error if the package is not installed' do
provider.stubs(:properties).returns({:ensure => :absent})
expect {
provider.latest
}.to raise_error(Puppet::DevError, 'Tried to get latest on a missing package')
end
it 'returns version of the currently installed package' do
provider.stubs(:properties).returns({:ensure => '3.4.5'})
provider.latest.should == '3.4.5'
end
end
describe 'and a newer version is available' do
let(:latest_version) do
{
:name => name,
:epoch => '1',
:version => '2.3.4',
:release => '5',
:arch => 'i686',
}
end
it 'includes the epoch in the version string' do
described_class.stubs(:latest_package_version).with(name, [], []).returns(latest_version)
provider.latest.should == '1:2.3.4-5'
end
end
end
describe "lazy loading of latest package versions" do
before { described_class.clear }
after { described_class.clear }
let(:mypackage_version) do
{
:name => name,
:epoch => '1',
:version => '2.3.4',
:release => '5',
:arch => 'i686',
}
end
let(:mypackage_newerversion) do
{
:name => name,
:epoch => '1',
:version => '4.5.6',
:release => '7',
:arch => 'i686',
}
end
let(:latest_versions) { {name => [mypackage_version]} }
let(:enabled_versions) { {name => [mypackage_newerversion]} }
it "returns the version hash if the package was found" do
described_class.expects(:fetch_latest_versions).with([], []).once.returns(latest_versions)
version = described_class.latest_package_version(name, [], [])
expect(version).to eq(mypackage_version)
end
it "is nil if the package was not found in the query" do
described_class.expects(:fetch_latest_versions).with([], []).once.returns(latest_versions)
version = described_class.latest_package_version('nopackage', [], [])
expect(version).to be_nil
end
it "caches the package list and reuses that for subsequent queries" do
described_class.expects(:fetch_latest_versions).with([], []).once.returns(latest_versions)
2.times {
version = described_class.latest_package_version(name, [], [])
expect(version).to eq mypackage_version
}
end
it "caches separate lists for each combination of 'enablerepo' and 'disablerepo'" do
described_class.expects(:fetch_latest_versions).with([], []).once.returns(latest_versions)
described_class.expects(:fetch_latest_versions).with(['enabled'], ['disabled']).once.returns(enabled_versions)
2.times {
version = described_class.latest_package_version(name, [], [])
expect(version).to eq mypackage_version
}
2.times {
version = described_class.latest_package_version(name, ['enabled'], ['disabled'])
expect(version).to eq(mypackage_newerversion)
}
end
end
describe "querying for the latest version of all packages" do
let(:yumhelper_single_arch) do
<<-YUMHELPER_OUTPUT
* base: centos.tcpdiag.net
* extras: centos.mirrors.hoobly.com
* updates: mirrors.arsc.edu
_pkg nss-tools 0 3.14.3 4.el6_4 x86_64
_pkg pixman 0 0.26.2 5.el6_4 x86_64
_pkg myresource 0 1.2.3.4 5.el4 noarch
_pkg mysummaryless 0 1.2.3.4 5.el4 noarch
YUMHELPER_OUTPUT
end
let(:yumhelper_multi_arch) do
yumhelper_single_arch + <<-YUMHELPER_OUTPUT
_pkg nss-tools 0 3.14.3 4.el6_4 i386
_pkg pixman 0 0.26.2 5.el6_4 i386
YUMHELPER_OUTPUT
end
it "creates an entry for each line that's prefixed with '_pkg'" do
described_class.expects(:python).with([described_class::YUMHELPER]).returns(yumhelper_single_arch)
entries = described_class.fetch_latest_versions([], [])
expect(entries.keys).to include 'nss-tools'
expect(entries.keys).to include 'pixman'
expect(entries.keys).to include 'myresource'
expect(entries.keys).to include 'mysummaryless'
end
it "creates an entry for each package name and architecture" do
described_class.expects(:python).with([described_class::YUMHELPER]).returns(yumhelper_single_arch)
entries = described_class.fetch_latest_versions([], [])
expect(entries.keys).to include 'nss-tools.x86_64'
expect(entries.keys).to include 'pixman.x86_64'
expect(entries.keys).to include 'myresource.noarch'
expect(entries.keys).to include 'mysummaryless.noarch'
end
it "stores multiple entries if a package is build for multiple architectures" do
described_class.expects(:python).with([described_class::YUMHELPER]).returns(yumhelper_multi_arch)
entries = described_class.fetch_latest_versions([], [])
expect(entries.keys).to include 'nss-tools.x86_64'
expect(entries.keys).to include 'pixman.x86_64'
expect(entries.keys).to include 'nss-tools.i386'
expect(entries.keys).to include 'pixman.i386'
expect(entries['nss-tools']).to have(2).items
expect(entries['pixman']).to have(2).items
end
it "passes the repos to enable to the helper" do
described_class.expects(:python).with do |script, *args|
expect(script).to eq described_class::YUMHELPER
expect(args).to eq %w[-e updates -e centosplus]
end.returns('')
described_class.fetch_latest_versions(['updates', 'centosplus'], [])
end
it "passes the repos to disable to the helper" do
described_class.expects(:python).with do |script, *args|
expect(script).to eq described_class::YUMHELPER
expect(args).to eq %w[-d updates -d centosplus]
end.returns('')
described_class.fetch_latest_versions([], ['updates', 'centosplus'])
end
it 'passes a combination of repos to the helper' do
described_class.expects(:python).with do |script, *args|
expect(script).to eq described_class::YUMHELPER
expect(args).to eq %w[-e os -e contrib -d updates -d centosplus]
end.returns('')
described_class.fetch_latest_versions(['os', 'contrib'], ['updates', 'centosplus'])
end
end
end
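
The "lazy loading of latest package versions" examples above pin down a caching contract: the expensive fetch runs once per combination of enabled and disabled repos, the result is reused for later lookups, and `clear` empties the cache. A compact sketch of that contract, independent of the provider's real implementation:

````
# Hypothetical sketch of per-repo-combination caching as described by the
# yum "lazy loading" examples: one fetch per [enablerepo, disablerepo] pair.
class LatestVersionCache
  def initialize(&fetcher)   # fetcher returns {package_name => [version_hash, ...]}
    @fetcher = fetcher
    @cache   = {}
  end

  def latest_package_version(name, enablerepo, disablerepo)
    key = [enablerepo, disablerepo]
    @cache[key] ||= @fetcher.call(enablerepo, disablerepo)
    versions = @cache[key][name]
    versions && versions.first
  end

  def clear
    @cache = {}
  end
end
````
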
diff --git a/spec/unit/provider/parsedfile_spec.rb b/spec/unit/provider/parsedfile_spec.rb
index f8a1773de..b814bc7ee 100755
--- a/spec/unit/provider/parsedfile_spec.rb
+++ b/spec/unit/provider/parsedfile_spec.rb
@@ -1,228 +1,228 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet_spec/files'
require 'puppet'
require 'puppet/provider/parsedfile'
Puppet::Type.newtype(:parsedfile_type) do
newparam(:name)
newproperty(:target)
end
# Most of the tests for this are still in test/ral/provider/parsedfile.rb.
describe Puppet::Provider::ParsedFile do
# The ParsedFile provider class is meant to be used as an abstract base class
# but also stores a lot of state within the singleton class. To avoid
# sharing data between classes we construct an anonymous class that inherits
# the ParsedFile provider instead of directly working with the ParsedFile
# provider itself.
let(:parsed_type) do
Puppet::Type.type(:parsedfile_type)
end
let!(:provider) { parsed_type.provide(:parsedfile_provider, :parent => described_class) }
describe "when looking up records loaded from disk" do
it "should return nil if no records have been loaded" do
provider.record?("foo").should be_nil
end
end
describe "when generating a list of instances" do
it "should return an instance for each record parsed from all of the registered targets" do
provider.expects(:targets).returns %w{/one /two}
provider.stubs(:skip_record?).returns false
one = [:uno1, :uno2]
two = [:dos1, :dos2]
provider.expects(:prefetch_target).with("/one").returns one
provider.expects(:prefetch_target).with("/two").returns two
results = []
(one + two).each do |inst|
results << inst.to_s + "_instance"
provider.expects(:new).with(inst).returns(results[-1])
end
provider.instances.should == results
end
it "should ignore target when retrieve fails" do
provider.expects(:targets).returns %w{/one /two /three}
provider.stubs(:skip_record?).returns false
provider.expects(:retrieve).with("/one").returns [
{:name => 'target1_record1'},
{:name => 'target1_record2'}
]
provider.expects(:retrieve).with("/two").raises Puppet::Util::FileType::FileReadError, "some error"
provider.expects(:retrieve).with("/three").returns [
{:name => 'target3_record1'},
{:name => 'target3_record2'}
]
Puppet.expects(:err).with('Could not prefetch parsedfile_type provider \'parsedfile_provider\' target \'/two\': some error. Treating as empty')
provider.expects(:new).with(:name => 'target1_record1', :on_disk => true, :target => '/one', :ensure => :present).returns 'r1'
provider.expects(:new).with(:name => 'target1_record2', :on_disk => true, :target => '/one', :ensure => :present).returns 'r2'
provider.expects(:new).with(:name => 'target3_record1', :on_disk => true, :target => '/three', :ensure => :present).returns 'r3'
provider.expects(:new).with(:name => 'target3_record2', :on_disk => true, :target => '/three', :ensure => :present).returns 'r4'
provider.instances.should == %w{r1 r2 r3 r4}
end
it "should skip specified records" do
provider.expects(:targets).returns %w{/one}
provider.expects(:skip_record?).with(:uno).returns false
provider.expects(:skip_record?).with(:dos).returns true
one = [:uno, :dos]
provider.expects(:prefetch_target).returns one
provider.expects(:new).with(:uno).returns "eh"
provider.expects(:new).with(:dos).never
provider.instances
end
end
describe "when matching resources to existing records" do
let(:first_resource) { stub(:one, :name => :one) }
let(:second_resource) { stub(:two, :name => :two) }
let(:resources) {{:one => first_resource, :two => second_resource}}
it "returns a resource if the record name matches the resource name" do
record = {:name => :one}
provider.resource_for_record(record, resources).should be first_resource
end
it "doesn't return a resource if the record name doesn't match any resource names" do
record = {:name => :three}
provider.resource_for_record(record, resources).should be_nil
end
end
describe "when flushing a file's records to disk" do
before do
# This way we start with some @records, like we would in real life.
provider.stubs(:retrieve).returns []
provider.default_target = "/foo/bar"
provider.initvars
provider.prefetch
@filetype = Puppet::Util::FileType.filetype(:flat).new("/my/file")
- Puppet::Util::FileType.filetype(:flat).stubs(:new).with("/my/file").returns @filetype
+ Puppet::Util::FileType.filetype(:flat).stubs(:new).with("/my/file",nil).returns @filetype
@filetype.stubs(:write)
end
it "should back up the file being written if the filetype can be backed up" do
@filetype.expects(:backup)
provider.flush_target("/my/file")
end
it "should not try to back up the file if the filetype cannot be backed up" do
@filetype = Puppet::Util::FileType.filetype(:ram).new("/my/file")
Puppet::Util::FileType.filetype(:flat).expects(:new).returns @filetype
@filetype.stubs(:write)
provider.flush_target("/my/file")
end
it "should not back up the file more than once between calls to 'prefetch'" do
@filetype.expects(:backup).once
provider.flush_target("/my/file")
provider.flush_target("/my/file")
end
it "should back the file up again once the file has been reread" do
@filetype.expects(:backup).times(2)
provider.flush_target("/my/file")
provider.prefetch
provider.flush_target("/my/file")
end
end
describe "when flushing multiple files" do
describe "and an error is encountered" do
it "the other file does not fail" do
provider.stubs(:backup_target)
bad_file = 'broken'
good_file = 'writable'
bad_writer = mock 'bad'
bad_writer.expects(:write).raises(Exception, "Failed to write to bad file")
good_writer = mock 'good'
good_writer.expects(:write).returns(nil)
provider.stubs(:target_object).with(bad_file).returns(bad_writer)
provider.stubs(:target_object).with(good_file).returns(good_writer)
bad_resource = parsed_type.new(:name => 'one', :target => bad_file)
good_resource = parsed_type.new(:name => 'two', :target => good_file)
expect {
bad_resource.flush
}.to raise_error(Exception, "Failed to write to bad file")
good_resource.flush
end
end
end
end
describe "A very basic provider based on ParsedFile" do
include PuppetSpec::Files
let(:input_text) { File.read(my_fixture('simple.txt')) }
let(:target) { tmpfile('parsedfile_spec') }
let(:provider) do
example_provider_class = Class.new(Puppet::Provider::ParsedFile)
example_provider_class.default_target = target
# Setup some record rules
example_provider_class.instance_eval do
text_line :text, :match => %r{.}
end
example_provider_class.initvars
example_provider_class.prefetch
# evade a race between multiple invocations of the header method
example_provider_class.stubs(:header).
returns("# HEADER As added by puppet.\n")
example_provider_class
end
context "writing file contents back to disk" do
it "should not change anything except from adding a header" do
input_records = provider.parse(input_text)
provider.to_file(input_records).
should match provider.header + input_text
end
end
context "rewriting a file containing a native header" do
let(:regex) { %r/^# HEADER.*third party\.\n/ }
let(:input_records) { provider.parse(input_text) }
before :each do
provider.stubs(:native_header_regex).returns(regex)
end
it "should move the native header to the top" do
provider.to_file(input_records).should_not match /\A#{provider.header}/
end
context "and dropping native headers found in input" do
before :each do
provider.stubs(:drop_native_header).returns(true)
end
it "should not include the native header in the output" do
provider.to_file(input_records).should_not match regex
end
end
end
end
diff --git a/spec/unit/provider/scheduled_task/win32_taskscheduler_spec.rb b/spec/unit/provider/scheduled_task/win32_taskscheduler_spec.rb
index 4a950dad3..3d37956c5 100644
--- a/spec/unit/provider/scheduled_task/win32_taskscheduler_spec.rb
+++ b/spec/unit/provider/scheduled_task/win32_taskscheduler_spec.rb
@@ -1,1574 +1,1574 @@
#! /usr/bin/env ruby
require 'spec_helper'
-require 'win32/taskscheduler' if Puppet.features.microsoft_windows?
+require 'puppet/util/windows/taskscheduler' if Puppet.features.microsoft_windows?
shared_examples_for "a trigger that handles start_date and start_time" do
let(:trigger) do
described_class.new(
:name => 'Shared Test Task',
:command => 'C:\Windows\System32\notepad.exe'
).translate_hash_to_trigger(trigger_hash)
end
before :each do
Win32::TaskScheduler.any_instance.stubs(:save)
end
describe 'the given start_date' do
before :each do
trigger_hash['start_time'] = '00:00'
end
def date_component
{
'start_year' => trigger['start_year'],
'start_month' => trigger['start_month'],
'start_day' => trigger['start_day']
}
end
it 'should be able to be specified in ISO 8601 calendar date format' do
trigger_hash['start_date'] = '2011-12-31'
date_component.should == {
'start_year' => 2011,
'start_month' => 12,
'start_day' => 31
}
end
it 'should fail if before 1753-01-01' do
trigger_hash['start_date'] = '1752-12-31'
expect { date_component }.to raise_error(
Puppet::Error,
'start_date must be on or after 1753-01-01'
)
end
it 'should succeed if on 1753-01-01' do
trigger_hash['start_date'] = '1753-01-01'
date_component.should == {
'start_year' => 1753,
'start_month' => 1,
'start_day' => 1
}
end
it 'should succeed if after 1753-01-01' do
trigger_hash['start_date'] = '1753-01-02'
date_component.should == {
'start_year' => 1753,
'start_month' => 1,
'start_day' => 2
}
end
end
describe 'the given start_time' do
before :each do
trigger_hash['start_date'] = '2011-12-31'
end
def time_component
{
'start_hour' => trigger['start_hour'],
'start_minute' => trigger['start_minute']
}
end
it 'should be able to be specified as a 24-hour "hh:mm"' do
trigger_hash['start_time'] = '17:13'
time_component.should == {
'start_hour' => 17,
'start_minute' => 13
}
end
it 'should be able to be specified as a 12-hour "hh:mm am"' do
trigger_hash['start_time'] = '3:13 am'
time_component.should == {
'start_hour' => 3,
'start_minute' => 13
}
end
it 'should be able to be specified as a 12-hour "hh:mm pm"' do
trigger_hash['start_time'] = '3:13 pm'
time_component.should == {
'start_hour' => 15,
'start_minute' => 13
}
end
end
end
describe Puppet::Type.type(:scheduled_task).provider(:win32_taskscheduler), :if => Puppet.features.microsoft_windows? do
before :each do
Puppet::Type.type(:scheduled_task).stubs(:defaultprovider).returns(described_class)
end
describe 'when retrieving' do
before :each do
@mock_task = mock
@mock_task.responds_like(Win32::TaskScheduler.new)
described_class.any_instance.stubs(:task).returns(@mock_task)
Win32::TaskScheduler.stubs(:new).returns(@mock_task)
end
let(:resource) { Puppet::Type.type(:scheduled_task).new(:name => 'Test Task', :command => 'C:\Windows\System32\notepad.exe') }
describe 'the triggers for a task' do
describe 'with only one trigger' do
before :each do
@mock_task.expects(:trigger_count).returns(1)
end
it 'should handle a single daily trigger' do
@mock_task.expects(:trigger).with(0).returns({
'trigger_type' => Win32::TaskScheduler::TASK_TIME_TRIGGER_DAILY,
'start_year' => 2011,
'start_month' => 9,
'start_day' => 12,
'start_hour' => 13,
'start_minute' => 20,
'flags' => 0,
'type' => { 'days_interval' => 2 },
})
resource.provider.trigger.should == {
'start_date' => '2011-9-12',
'start_time' => '13:20',
'schedule' => 'daily',
'every' => '2',
'enabled' => true,
'index' => 0,
}
end
it 'should handle a single weekly trigger' do
scheduled_days_of_week = Win32::TaskScheduler::MONDAY |
Win32::TaskScheduler::WEDNESDAY |
Win32::TaskScheduler::FRIDAY |
Win32::TaskScheduler::SUNDAY
@mock_task.expects(:trigger).with(0).returns({
'trigger_type' => Win32::TaskScheduler::TASK_TIME_TRIGGER_WEEKLY,
'start_year' => 2011,
'start_month' => 9,
'start_day' => 12,
'start_hour' => 13,
'start_minute' => 20,
'flags' => 0,
'type' => {
'weeks_interval' => 2,
'days_of_week' => scheduled_days_of_week
}
})
resource.provider.trigger.should == {
'start_date' => '2011-9-12',
'start_time' => '13:20',
'schedule' => 'weekly',
'every' => '2',
'on' => ['sun', 'mon', 'wed', 'fri'],
'enabled' => true,
'index' => 0,
}
end
it 'should handle a single monthly date-based trigger' do
scheduled_months = Win32::TaskScheduler::JANUARY |
Win32::TaskScheduler::FEBRUARY |
Win32::TaskScheduler::AUGUST |
Win32::TaskScheduler::SEPTEMBER |
Win32::TaskScheduler::DECEMBER
# 1 3 5 15 'last'
scheduled_days = 1 | 1 << 2 | 1 << 4 | 1 << 14 | 1 << 31
@mock_task.expects(:trigger).with(0).returns({
'trigger_type' => Win32::TaskScheduler::TASK_TIME_TRIGGER_MONTHLYDATE,
'start_year' => 2011,
'start_month' => 9,
'start_day' => 12,
'start_hour' => 13,
'start_minute' => 20,
'flags' => 0,
'type' => {
'months' => scheduled_months,
'days' => scheduled_days
}
})
resource.provider.trigger.should == {
'start_date' => '2011-9-12',
'start_time' => '13:20',
'schedule' => 'monthly',
'months' => [1, 2, 8, 9, 12],
'on' => [1, 3, 5, 15, 'last'],
'enabled' => true,
'index' => 0,
}
end
it 'should handle a single monthly day-of-week-based trigger' do
scheduled_months = Win32::TaskScheduler::JANUARY |
Win32::TaskScheduler::FEBRUARY |
Win32::TaskScheduler::AUGUST |
Win32::TaskScheduler::SEPTEMBER |
Win32::TaskScheduler::DECEMBER
scheduled_days_of_week = Win32::TaskScheduler::MONDAY |
Win32::TaskScheduler::WEDNESDAY |
Win32::TaskScheduler::FRIDAY |
Win32::TaskScheduler::SUNDAY
@mock_task.expects(:trigger).with(0).returns({
'trigger_type' => Win32::TaskScheduler::TASK_TIME_TRIGGER_MONTHLYDOW,
'start_year' => 2011,
'start_month' => 9,
'start_day' => 12,
'start_hour' => 13,
'start_minute' => 20,
'flags' => 0,
'type' => {
'months' => scheduled_months,
'weeks' => Win32::TaskScheduler::FIRST_WEEK,
'days_of_week' => scheduled_days_of_week
}
})
resource.provider.trigger.should == {
'start_date' => '2011-9-12',
'start_time' => '13:20',
'schedule' => 'monthly',
'months' => [1, 2, 8, 9, 12],
'which_occurrence' => 'first',
'day_of_week' => ['sun', 'mon', 'wed', 'fri'],
'enabled' => true,
'index' => 0,
}
end
it 'should handle a single one-time trigger' do
@mock_task.expects(:trigger).with(0).returns({
'trigger_type' => Win32::TaskScheduler::TASK_TIME_TRIGGER_ONCE,
'start_year' => 2011,
'start_month' => 9,
'start_day' => 12,
'start_hour' => 13,
'start_minute' => 20,
'flags' => 0,
})
resource.provider.trigger.should == {
'start_date' => '2011-9-12',
'start_time' => '13:20',
'schedule' => 'once',
'enabled' => true,
'index' => 0,
}
end
end
it 'should handle multiple triggers' do
@mock_task.expects(:trigger_count).returns(3)
@mock_task.expects(:trigger).with(0).returns({
'trigger_type' => Win32::TaskScheduler::TASK_TIME_TRIGGER_ONCE,
'start_year' => 2011,
'start_month' => 10,
'start_day' => 13,
'start_hour' => 14,
'start_minute' => 21,
'flags' => 0,
})
@mock_task.expects(:trigger).with(1).returns({
'trigger_type' => Win32::TaskScheduler::TASK_TIME_TRIGGER_ONCE,
'start_year' => 2012,
'start_month' => 11,
'start_day' => 14,
'start_hour' => 15,
'start_minute' => 22,
'flags' => 0,
})
@mock_task.expects(:trigger).with(2).returns({
'trigger_type' => Win32::TaskScheduler::TASK_TIME_TRIGGER_ONCE,
'start_year' => 2013,
'start_month' => 12,
'start_day' => 15,
'start_hour' => 16,
'start_minute' => 23,
'flags' => 0,
})
resource.provider.trigger.should =~ [
{
'start_date' => '2011-10-13',
'start_time' => '14:21',
'schedule' => 'once',
'enabled' => true,
'index' => 0,
},
{
'start_date' => '2012-11-14',
'start_time' => '15:22',
'schedule' => 'once',
'enabled' => true,
'index' => 1,
},
{
'start_date' => '2013-12-15',
'start_time' => '16:23',
'schedule' => 'once',
'enabled' => true,
'index' => 2,
}
]
end
it 'should skip triggers Win32::TaskScheduler cannot handle' do
@mock_task.expects(:trigger_count).returns(3)
@mock_task.expects(:trigger).with(0).returns({
'trigger_type' => Win32::TaskScheduler::TASK_TIME_TRIGGER_ONCE,
'start_year' => 2011,
'start_month' => 10,
'start_day' => 13,
'start_hour' => 14,
'start_minute' => 21,
'flags' => 0,
})
@mock_task.expects(:trigger).with(1).raises(
Win32::TaskScheduler::Error.new('Unhandled trigger type!')
)
@mock_task.expects(:trigger).with(2).returns({
'trigger_type' => Win32::TaskScheduler::TASK_TIME_TRIGGER_ONCE,
'start_year' => 2013,
'start_month' => 12,
'start_day' => 15,
'start_hour' => 16,
'start_minute' => 23,
'flags' => 0,
})
resource.provider.trigger.should =~ [
{
'start_date' => '2011-10-13',
'start_time' => '14:21',
'schedule' => 'once',
'enabled' => true,
'index' => 0,
},
{
'start_date' => '2013-12-15',
'start_time' => '16:23',
'schedule' => 'once',
'enabled' => true,
'index' => 2,
}
]
end
it 'should skip trigger types Puppet does not handle' do
@mock_task.expects(:trigger_count).returns(3)
@mock_task.expects(:trigger).with(0).returns({
'trigger_type' => Win32::TaskScheduler::TASK_TIME_TRIGGER_ONCE,
'start_year' => 2011,
'start_month' => 10,
'start_day' => 13,
'start_hour' => 14,
'start_minute' => 21,
'flags' => 0,
})
@mock_task.expects(:trigger).with(1).returns({
'trigger_type' => Win32::TaskScheduler::TASK_EVENT_TRIGGER_AT_LOGON,
})
@mock_task.expects(:trigger).with(2).returns({
'trigger_type' => Win32::TaskScheduler::TASK_TIME_TRIGGER_ONCE,
'start_year' => 2013,
'start_month' => 12,
'start_day' => 15,
'start_hour' => 16,
'start_minute' => 23,
'flags' => 0,
})
resource.provider.trigger.should =~ [
{
'start_date' => '2011-10-13',
'start_time' => '14:21',
'schedule' => 'once',
'enabled' => true,
'index' => 0,
},
{
'start_date' => '2013-12-15',
'start_time' => '16:23',
'schedule' => 'once',
'enabled' => true,
'index' => 2,
}
]
end
end
it 'should get the working directory from the working_directory on the task' do
@mock_task.expects(:working_directory).returns('C:\Windows\System32')
resource.provider.working_dir.should == 'C:\Windows\System32'
end
it 'should get the command from the application_name on the task' do
@mock_task.expects(:application_name).returns('C:\Windows\System32\notepad.exe')
resource.provider.command.should == 'C:\Windows\System32\notepad.exe'
end
it 'should get the command arguments from the parameters on the task' do
@mock_task.expects(:parameters).returns('these are my arguments')
resource.provider.arguments.should == 'these are my arguments'
end
it 'should get the user from the account_information on the task' do
@mock_task.expects(:account_information).returns('this is my user')
resource.provider.user.should == 'this is my user'
end
describe 'whether the task is enabled' do
it 'should report tasks with the disabled bit set as disabled' do
@mock_task.stubs(:flags).returns(Win32::TaskScheduler::DISABLED)
resource.provider.enabled.should == :false
end
it 'should report tasks without the disabled bit set as enabled' do
@mock_task.stubs(:flags).returns(~Win32::TaskScheduler::DISABLED)
resource.provider.enabled.should == :true
end
it 'should not consider triggers for determining if the task is enabled' do
@mock_task.stubs(:flags).returns(~Win32::TaskScheduler::DISABLED)
@mock_task.stubs(:trigger_count).returns(1)
@mock_task.stubs(:trigger).with(0).returns({
'trigger_type' => Win32::TaskScheduler::TASK_TIME_TRIGGER_ONCE,
'start_year' => 2011,
'start_month' => 10,
'start_day' => 13,
'start_hour' => 14,
'start_minute' => 21,
'flags' => Win32::TaskScheduler::TASK_TRIGGER_FLAG_DISABLED,
})
resource.provider.enabled.should == :true
end
end
end
describe '#exists?' do
before :each do
@mock_task = mock
@mock_task.responds_like(Win32::TaskScheduler.new)
described_class.any_instance.stubs(:task).returns(@mock_task)
Win32::TaskScheduler.stubs(:new).returns(@mock_task)
end
let(:resource) { Puppet::Type.type(:scheduled_task).new(:name => 'Test Task', :command => 'C:\Windows\System32\notepad.exe') }
it "should delegate to Win32::TaskScheduler using the resource's name" do
@mock_task.expects(:exists?).with('Test Task').returns(true)
resource.provider.exists?.should == true
end
end
describe '#clear_task' do
before :each do
@mock_task = mock
@new_mock_task = mock
@mock_task.responds_like(Win32::TaskScheduler.new)
@new_mock_task.responds_like(Win32::TaskScheduler.new)
Win32::TaskScheduler.stubs(:new).returns(@mock_task, @new_mock_task)
described_class.any_instance.stubs(:exists?).returns(false)
end
let(:resource) { Puppet::Type.type(:scheduled_task).new(:name => 'Test Task', :command => 'C:\Windows\System32\notepad.exe') }
it 'should clear the cached task object' do
resource.provider.task.should == @mock_task
resource.provider.task.should == @mock_task
resource.provider.clear_task
resource.provider.task.should == @new_mock_task
end
it 'should clear the cached list of triggers for the task' do
@mock_task.stubs(:trigger_count).returns(1)
@mock_task.stubs(:trigger).with(0).returns({
'trigger_type' => Win32::TaskScheduler::TASK_TIME_TRIGGER_ONCE,
'start_year' => 2011,
'start_month' => 10,
'start_day' => 13,
'start_hour' => 14,
'start_minute' => 21,
'flags' => 0,
})
@new_mock_task.stubs(:trigger_count).returns(1)
@new_mock_task.stubs(:trigger).with(0).returns({
'trigger_type' => Win32::TaskScheduler::TASK_TIME_TRIGGER_ONCE,
'start_year' => 2012,
'start_month' => 11,
'start_day' => 14,
'start_hour' => 15,
'start_minute' => 22,
'flags' => 0,
})
mock_task_trigger = {
'start_date' => '2011-10-13',
'start_time' => '14:21',
'schedule' => 'once',
'enabled' => true,
'index' => 0,
}
resource.provider.trigger.should == mock_task_trigger
resource.provider.trigger.should == mock_task_trigger
resource.provider.clear_task
resource.provider.trigger.should == {
'start_date' => '2012-11-14',
'start_time' => '15:22',
'schedule' => 'once',
'enabled' => true,
'index' => 0,
}
end
end
describe '.instances' do
it 'should use the list of .job files to construct the list of scheduled_tasks' do
job_files = ['foo.job', 'bar.job', 'baz.job']
Win32::TaskScheduler.any_instance.stubs(:tasks).returns(job_files)
job_files.each do |job|
job = File.basename(job, '.job')
described_class.expects(:new).with(:provider => :win32_taskscheduler, :name => job)
end
described_class.instances
end
end
describe '#user_insync?', :if => Puppet.features.microsoft_windows? do
let(:resource) { described_class.new(:name => 'foobar', :command => 'C:\Windows\System32\notepad.exe') }
it 'should consider the user as in sync if the name matches' do
- Puppet::Util::Windows::Security.expects(:name_to_sid).with('joe').twice.returns('SID A')
+ Puppet::Util::Windows::SID.expects(:name_to_sid).with('joe').twice.returns('SID A')
resource.should be_user_insync('joe', ['joe'])
end
it 'should consider the user as in sync if the current user is fully qualified' do
- Puppet::Util::Windows::Security.expects(:name_to_sid).with('joe').returns('SID A')
- Puppet::Util::Windows::Security.expects(:name_to_sid).with('MACHINE\joe').returns('SID A')
+ Puppet::Util::Windows::SID.expects(:name_to_sid).with('joe').returns('SID A')
+ Puppet::Util::Windows::SID.expects(:name_to_sid).with('MACHINE\joe').returns('SID A')
resource.should be_user_insync('MACHINE\joe', ['joe'])
end
it 'should consider a current user of the empty string to be the same as the system user' do
- Puppet::Util::Windows::Security.expects(:name_to_sid).with('system').twice.returns('SYSTEM SID')
+ Puppet::Util::Windows::SID.expects(:name_to_sid).with('system').twice.returns('SYSTEM SID')
resource.should be_user_insync('', ['system'])
end
it 'should consider different users as being different' do
- Puppet::Util::Windows::Security.expects(:name_to_sid).with('joe').returns('SID A')
- Puppet::Util::Windows::Security.expects(:name_to_sid).with('bob').returns('SID B')
+ Puppet::Util::Windows::SID.expects(:name_to_sid).with('joe').returns('SID A')
+ Puppet::Util::Windows::SID.expects(:name_to_sid).with('bob').returns('SID B')
resource.should_not be_user_insync('joe', ['bob'])
end
end
describe '#trigger_insync?' do
let(:resource) { described_class.new(:name => 'foobar', :command => 'C:\Windows\System32\notepad.exe') }
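# Current and desired triggers must match as complete sets; an extra trigger on either side means out of sync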
it 'should not consider any extra current triggers as in sync' do
current = [
{'start_date' => '2011-09-12', 'start_time' => '15:15', 'schedule' => 'once'},
{'start_date' => '2012-10-13', 'start_time' => '16:16', 'schedule' => 'once'}
]
desired = {'start_date' => '2011-09-12', 'start_time' => '15:15', 'schedule' => 'once'}
resource.should_not be_trigger_insync(current, desired)
end
it 'should not consider any extra desired triggers as in sync' do
current = {'start_date' => '2011-09-12', 'start_time' => '15:15', 'schedule' => 'once'}
desired = [
{'start_date' => '2011-09-12', 'start_time' => '15:15', 'schedule' => 'once'},
{'start_date' => '2012-10-13', 'start_time' => '16:16', 'schedule' => 'once'}
]
resource.should_not be_trigger_insync(current, desired)
end
it 'should consider triggers to be in sync if the sets of current and desired triggers are equal' do
current = [
{'start_date' => '2011-09-12', 'start_time' => '15:15', 'schedule' => 'once'},
{'start_date' => '2012-10-13', 'start_time' => '16:16', 'schedule' => 'once'}
]
desired = [
{'start_date' => '2011-09-12', 'start_time' => '15:15', 'schedule' => 'once'},
{'start_date' => '2012-10-13', 'start_time' => '16:16', 'schedule' => 'once'}
]
resource.should be_trigger_insync(current, desired)
end
end
describe '#triggers_same?' do
let(:provider) { described_class.new(:name => 'foobar', :command => 'C:\Windows\System32\notepad.exe') }
it "should not consider a disabled 'current' trigger to be the same" do
current = {'schedule' => 'once', 'enabled' => false}
desired = {'schedule' => 'once'}
provider.should_not be_triggers_same(current, desired)
end
it 'should not consider triggers with different schedules to be the same' do
current = {'schedule' => 'once'}
desired = {'schedule' => 'weekly'}
provider.should_not be_triggers_same(current, desired)
end
describe 'comparing daily triggers' do
it "should consider 'desired' triggers not specifying 'every' to have the same value as the 'current' trigger" do
current = {'schedule' => 'daily', 'start_date' => '2011-09-12', 'start_time' => '15:30', 'every' => 3}
desired = {'schedule' => 'daily', 'start_date' => '2011-09-12', 'start_time' => '15:30'}
provider.should be_triggers_same(current, desired)
end
it "should consider different 'start_dates' as different triggers" do
current = {'schedule' => 'daily', 'start_date' => '2011-09-12', 'start_time' => '15:30', 'every' => 3}
desired = {'schedule' => 'daily', 'start_date' => '2012-09-12', 'start_time' => '15:30', 'every' => 3}
provider.should_not be_triggers_same(current, desired)
end
it "should consider different 'start_times' as different triggers" do
current = {'schedule' => 'daily', 'start_date' => '2011-09-12', 'start_time' => '15:30', 'every' => 3}
desired = {'schedule' => 'daily', 'start_date' => '2011-09-12', 'start_time' => '15:31', 'every' => 3}
provider.should_not be_triggers_same(current, desired)
end
it 'should not consider differences in date formatting to be different triggers' do
current = {'schedule' => 'weekly', 'start_date' => '2011-09-12', 'start_time' => '15:30', 'every' => 3}
desired = {'schedule' => 'weekly', 'start_date' => '2011-9-12', 'start_time' => '15:30', 'every' => 3}
provider.should be_triggers_same(current, desired)
end
it 'should not consider differences in time formatting to be different triggers' do
current = {'schedule' => 'weekly', 'start_date' => '2011-09-12', 'start_time' => '5:30', 'every' => 3}
desired = {'schedule' => 'weekly', 'start_date' => '2011-09-12', 'start_time' => '05:30', 'every' => 3}
provider.should be_triggers_same(current, desired)
end
it "should consider different 'every' as different triggers" do
current = {'schedule' => 'daily', 'start_date' => '2011-09-12', 'start_time' => '15:30', 'every' => 3}
desired = {'schedule' => 'daily', 'start_date' => '2011-09-12', 'start_time' => '15:30', 'every' => 1}
provider.should_not be_triggers_same(current, desired)
end
it 'should consider triggers that are the same as being the same' do
trigger = {'schedule' => 'weekly', 'start_date' => '2011-09-12', 'start_time' => '01:30', 'every' => 1}
provider.should be_triggers_same(trigger, trigger)
end
end
describe 'comparing one-time triggers' do
it "should consider different 'start_dates' as different triggers" do
current = {'schedule' => 'daily', 'start_date' => '2011-09-12', 'start_time' => '15:30'}
desired = {'schedule' => 'daily', 'start_date' => '2012-09-12', 'start_time' => '15:30'}
provider.should_not be_triggers_same(current, desired)
end
it "should consider different 'start_times' as different triggers" do
current = {'schedule' => 'daily', 'start_date' => '2011-09-12', 'start_time' => '15:30'}
desired = {'schedule' => 'daily', 'start_date' => '2011-09-12', 'start_time' => '15:31'}
provider.should_not be_triggers_same(current, desired)
end
it 'should not consider differences in date formatting to be different triggers' do
current = {'schedule' => 'weekly', 'start_date' => '2011-09-12', 'start_time' => '15:30'}
desired = {'schedule' => 'weekly', 'start_date' => '2011-9-12', 'start_time' => '15:30'}
provider.should be_triggers_same(current, desired)
end
it 'should not consider differences in time formatting to be different triggers' do
current = {'schedule' => 'weekly', 'start_date' => '2011-09-12', 'start_time' => '1:30'}
desired = {'schedule' => 'weekly', 'start_date' => '2011-09-12', 'start_time' => '01:30'}
provider.should be_triggers_same(current, desired)
end
it 'should consider triggers that are the same as being the same' do
trigger = {'schedule' => 'weekly', 'start_date' => '2011-09-12', 'start_time' => '01:30'}
provider.should be_triggers_same(trigger, trigger)
end
end
describe 'comparing monthly date-based triggers' do
it "should consider 'desired' triggers not specifying 'months' to have the same value as the 'current' trigger" do
current = {'schedule' => 'monthly', 'start_date' => '2011-09-12', 'start_time' => '15:30', 'months' => [3], 'on' => [1,'last']}
desired = {'schedule' => 'monthly', 'start_date' => '2011-09-12', 'start_time' => '15:30', 'on' => [1, 'last']}
provider.should be_triggers_same(current, desired)
end
it "should consider different 'start_dates' as different triggers" do
current = {'schedule' => 'monthly', 'start_date' => '2011-09-12', 'start_time' => '15:30', 'months' => [1, 2], 'on' => [1, 3, 5, 7]}
desired = {'schedule' => 'monthly', 'start_date' => '2011-10-12', 'start_time' => '15:30', 'months' => [1, 2], 'on' => [1, 3, 5, 7]}
provider.should_not be_triggers_same(current, desired)
end
it "should consider different 'start_times' as different triggers" do
current = {'schedule' => 'monthly', 'start_date' => '2011-09-12', 'start_time' => '15:30', 'months' => [1, 2], 'on' => [1, 3, 5, 7]}
desired = {'schedule' => 'monthly', 'start_date' => '2011-09-12', 'start_time' => '22:30', 'months' => [1, 2], 'on' => [1, 3, 5, 7]}
provider.should_not be_triggers_same(current, desired)
end
it 'should not consider differences in date formatting to be different triggers' do
current = {'schedule' => 'monthly', 'start_date' => '2011-09-12', 'start_time' => '15:30', 'months' => [1, 2], 'on' => [1, 3, 5, 7]}
desired = {'schedule' => 'monthly', 'start_date' => '2011-9-12', 'start_time' => '15:30', 'months' => [1, 2], 'on' => [1, 3, 5, 7]}
provider.should be_triggers_same(current, desired)
end
it 'should not consider differences in time formatting to be different triggers' do
current = {'schedule' => 'monthly', 'start_date' => '2011-09-12', 'start_time' => '5:30', 'months' => [1, 2], 'on' => [1, 3, 5, 7]}
desired = {'schedule' => 'monthly', 'start_date' => '2011-09-12', 'start_time' => '05:30', 'months' => [1, 2], 'on' => [1, 3, 5, 7]}
provider.should be_triggers_same(current, desired)
end
it "should consider different 'months' as different triggers" do
current = {'schedule' => 'monthly', 'start_date' => '2011-09-12', 'start_time' => '15:30', 'months' => [1, 2], 'on' => [1, 3, 5, 7]}
desired = {'schedule' => 'monthly', 'start_date' => '2011-09-12', 'start_time' => '15:30', 'months' => [1], 'on' => [1, 3, 5, 7]}
provider.should_not be_triggers_same(current, desired)
end
it "should consider different 'on' as different triggers" do
current = {'schedule' => 'monthly', 'start_date' => '2011-09-12', 'start_time' => '15:30', 'months' => [1, 2], 'on' => [1, 3, 5, 7]}
desired = {'schedule' => 'monthly', 'start_date' => '2011-09-12', 'start_time' => '15:30', 'months' => [1, 2], 'on' => [1, 5, 7]}
provider.should_not be_triggers_same(current, desired)
end
it 'should consider triggers that are the same as being the same' do
trigger = {'schedule' => 'monthly', 'start_date' => '2011-09-12', 'start_time' => '15:30', 'months' => [1, 2], 'on' => [1, 3, 5, 7]}
provider.should be_triggers_same(trigger, trigger)
end
end
describe 'comparing monthly day-of-week-based triggers' do
it "should consider 'desired' triggers not specifying 'months' to have the same value as the 'current' trigger" do
current = {
'schedule' => 'monthly',
'start_date' => '2011-09-12',
'start_time' => '15:30',
'months' => [3],
'which_occurrence' => 'first',
'day_of_week' => ['mon', 'tues', 'sat']
}
desired = {
'schedule' => 'monthly',
'start_date' => '2011-09-12',
'start_time' => '15:30',
'which_occurrence' => 'first',
'day_of_week' => ['mon', 'tues', 'sat']
}
provider.should be_triggers_same(current, desired)
end
it "should consider different 'start_dates' as different triggers" do
current = {
'schedule' => 'monthly',
'start_date' => '2011-09-12',
'start_time' => '15:30',
'months' => [3],
'which_occurrence' => 'first',
'day_of_week' => ['mon', 'tues', 'sat']
}
desired = {
'schedule' => 'monthly',
'start_date' => '2011-10-12',
'start_time' => '15:30',
'months' => [3],
'which_occurrence' => 'first',
'day_of_week' => ['mon', 'tues', 'sat']
}
provider.should_not be_triggers_same(current, desired)
end
it "should consider different 'start_times' as different triggers" do
current = {
'schedule' => 'monthly',
'start_date' => '2011-09-12',
'start_time' => '15:30',
'months' => [3],
'which_occurrence' => 'first',
'day_of_week' => ['mon', 'tues', 'sat']
}
desired = {
'schedule' => 'monthly',
'start_date' => '2011-09-12',
'start_time' => '22:30',
'months' => [3],
'which_occurrence' => 'first',
'day_of_week' => ['mon', 'tues', 'sat']
}
provider.should_not be_triggers_same(current, desired)
end
it "should consider different 'months' as different triggers" do
current = {
'schedule' => 'monthly',
'start_date' => '2011-09-12',
'start_time' => '15:30',
'months' => [3],
'which_occurrence' => 'first',
'day_of_week' => ['mon', 'tues', 'sat']
}
desired = {
'schedule' => 'monthly',
'start_date' => '2011-09-12',
'start_time' => '15:30',
'months' => [3, 5, 7, 9],
'which_occurrence' => 'first',
'day_of_week' => ['mon', 'tues', 'sat']
}
provider.should_not be_triggers_same(current, desired)
end
it "should consider different 'which_occurrence' as different triggers" do
current = {
'schedule' => 'monthly',
'start_date' => '2011-09-12',
'start_time' => '15:30',
'months' => [3],
'which_occurrence' => 'first',
'day_of_week' => ['mon', 'tues', 'sat']
}
desired = {
'schedule' => 'monthly',
'start_date' => '2011-09-12',
'start_time' => '15:30',
'months' => [3],
'which_occurrence' => 'last',
'day_of_week' => ['mon', 'tues', 'sat']
}
provider.should_not be_triggers_same(current, desired)
end
it "should consider different 'day_of_week' as different triggers" do
current = {
'schedule' => 'monthly',
'start_date' => '2011-09-12',
'start_time' => '15:30',
'months' => [3],
'which_occurrence' => 'first',
'day_of_week' => ['mon', 'tues', 'sat']
}
desired = {
'schedule' => 'monthly',
'start_date' => '2011-09-12',
'start_time' => '15:30',
'months' => [3],
'which_occurrence' => 'first',
'day_of_week' => ['fri']
}
provider.should_not be_triggers_same(current, desired)
end
it 'should consider triggers that are the same as being the same' do
trigger = {
'schedule' => 'monthly',
'start_date' => '2011-09-12',
'start_time' => '15:30',
'months' => [3],
'which_occurrence' => 'first',
'day_of_week' => ['mon', 'tues', 'sat']
}
provider.should be_triggers_same(trigger, trigger)
end
end
describe 'comparing weekly triggers' do
it "should consider 'desired' triggers not specifying 'day_of_week' to have the same value as the 'current' trigger" do
current = {'schedule' => 'weekly', 'start_date' => '2011-09-12', 'start_time' => '15:30', 'every' => 3, 'day_of_week' => ['mon', 'wed', 'fri']}
desired = {'schedule' => 'weekly', 'start_date' => '2011-09-12', 'start_time' => '15:30', 'every' => 3}
provider.should be_triggers_same(current, desired)
end
it "should consider different 'start_dates' as different triggers" do
current = {'schedule' => 'weekly', 'start_date' => '2011-09-12', 'start_time' => '15:30', 'every' => 3, 'day_of_week' => ['mon', 'wed', 'fri']}
desired = {'schedule' => 'weekly', 'start_date' => '2011-10-12', 'start_time' => '15:30', 'every' => 3, 'day_of_week' => ['mon', 'wed', 'fri']}
provider.should_not be_triggers_same(current, desired)
end
it "should consider different 'start_times' as different triggers" do
current = {'schedule' => 'weekly', 'start_date' => '2011-09-12', 'start_time' => '15:30', 'every' => 3, 'day_of_week' => ['mon', 'wed', 'fri']}
desired = {'schedule' => 'weekly', 'start_date' => '2011-09-12', 'start_time' => '22:30', 'every' => 3, 'day_of_week' => ['mon', 'wed', 'fri']}
provider.should_not be_triggers_same(current, desired)
end
it 'should not consider differences in date formatting to be different triggers' do
current = {'schedule' => 'weekly', 'start_date' => '2011-09-12', 'start_time' => '15:30', 'every' => 3, 'day_of_week' => ['mon', 'wed', 'fri']}
desired = {'schedule' => 'weekly', 'start_date' => '2011-9-12', 'start_time' => '15:30', 'every' => 3, 'day_of_week' => ['mon', 'wed', 'fri']}
provider.should be_triggers_same(current, desired)
end
it 'should not consider differences in time formatting to be different triggers' do
current = {'schedule' => 'weekly', 'start_date' => '2011-09-12', 'start_time' => '1:30', 'every' => 3, 'day_of_week' => ['mon', 'wed', 'fri']}
desired = {'schedule' => 'weekly', 'start_date' => '2011-09-12', 'start_time' => '01:30', 'every' => 3, 'day_of_week' => ['mon', 'wed', 'fri']}
provider.should be_triggers_same(current, desired)
end
it "should consider different 'every' as different triggers" do
current = {'schedule' => 'weekly', 'start_date' => '2011-09-12', 'start_time' => '15:30', 'every' => 1, 'day_of_week' => ['mon', 'wed', 'fri']}
desired = {'schedule' => 'weekly', 'start_date' => '2011-09-12', 'start_time' => '15:30', 'every' => 3, 'day_of_week' => ['mon', 'wed', 'fri']}
provider.should_not be_triggers_same(current, desired)
end
it "should consider different 'day_of_week' as different triggers" do
current = {'schedule' => 'weekly', 'start_date' => '2011-09-12', 'start_time' => '15:30', 'every' => 3, 'day_of_week' => ['mon', 'wed', 'fri']}
desired = {'schedule' => 'weekly', 'start_date' => '2011-09-12', 'start_time' => '15:30', 'every' => 3, 'day_of_week' => ['fri']}
provider.should_not be_triggers_same(current, desired)
end
it 'should consider triggers that are the same as being the same' do
trigger = {'schedule' => 'weekly', 'start_date' => '2011-09-12', 'start_time' => '15:30', 'every' => 3, 'day_of_week' => ['mon', 'wed', 'fri']}
provider.should be_triggers_same(trigger, trigger)
end
end
end
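# normalized_date strips leading zeros and normalized_time converts to 24-hour {h}:{mm}, so purely cosmetic formatting differences don't register as trigger changes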
describe '#normalized_date' do
it 'should format the date without leading zeros' do
described_class.normalized_date('2011-01-01').should == '2011-1-1'
end
end
describe '#normalized_time' do
it 'should format the time as {24h}:{minutes}' do
described_class.normalized_time('8:37 PM').should == '20:37'
end
end
describe '#translate_hash_to_trigger' do
before :each do
@puppet_trigger = {
'start_date' => '2011-1-1',
'start_time' => '01:10'
}
end
let(:provider) { described_class.new(:name => 'Test Task', :command => 'C:\Windows\System32\notepad.exe') }
let(:trigger) { provider.translate_hash_to_trigger(@puppet_trigger) }
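# translate_hash_to_trigger converts the puppet-style trigger hash into the structure expected by Win32::TaskScheduler#trigger=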
describe 'when given a one-time trigger' do
before :each do
@puppet_trigger['schedule'] = 'once'
end
it 'should set the trigger_type to Win32::TaskScheduler::ONCE' do
trigger['trigger_type'].should == Win32::TaskScheduler::ONCE
end
it 'should not set a type' do
trigger.should_not be_has_key('type')
end
it "should require 'start_date'" do
@puppet_trigger.delete('start_date')
expect { trigger }.to raise_error(
Puppet::Error,
/Must specify 'start_date' when defining a one-time trigger/
)
end
it "should require 'start_time'" do
@puppet_trigger.delete('start_time')
expect { trigger }.to raise_error(
Puppet::Error,
/Must specify 'start_time' when defining a trigger/
)
end
it_behaves_like "a trigger that handles start_date and start_time" do
let(:trigger_hash) {{'schedule' => 'once' }}
end
end
describe 'when given a daily trigger' do
before :each do
@puppet_trigger['schedule'] = 'daily'
end
it "should default 'every' to 1" do
trigger['type']['days_interval'].should == 1
end
it "should use the specified value for 'every'" do
@puppet_trigger['every'] = 5
trigger['type']['days_interval'].should == 5
end
it "should default 'start_date' to 'today'" do
@puppet_trigger.delete('start_date')
today = Time.now
trigger['start_year'].should == today.year
trigger['start_month'].should == today.month
trigger['start_day'].should == today.day
end
it_behaves_like "a trigger that handles start_date and start_time" do
let(:trigger_hash) {{'schedule' => 'daily', 'every' => 1}}
end
end
describe 'when given a weekly trigger' do
before :each do
@puppet_trigger['schedule'] = 'weekly'
end
it "should default 'every' to 1" do
trigger['type']['weeks_interval'].should == 1
end
it "should use the specified value for 'every'" do
@puppet_trigger['every'] = 4
trigger['type']['weeks_interval'].should == 4
end
it "should default 'day_of_week' to be every day of the week" do
trigger['type']['days_of_week'].should == Win32::TaskScheduler::MONDAY |
Win32::TaskScheduler::TUESDAY |
Win32::TaskScheduler::WEDNESDAY |
Win32::TaskScheduler::THURSDAY |
Win32::TaskScheduler::FRIDAY |
Win32::TaskScheduler::SATURDAY |
Win32::TaskScheduler::SUNDAY
end
it "should use the specified value for 'day_of_week'" do
@puppet_trigger['day_of_week'] = ['mon', 'wed', 'fri']
trigger['type']['days_of_week'].should == Win32::TaskScheduler::MONDAY |
Win32::TaskScheduler::WEDNESDAY |
Win32::TaskScheduler::FRIDAY
end
it "should default 'start_date' to 'today'" do
@puppet_trigger.delete('start_date')
today = Time.now
trigger['start_year'].should == today.year
trigger['start_month'].should == today.month
trigger['start_day'].should == today.day
end
it_behaves_like "a trigger that handles start_date and start_time" do
let(:trigger_hash) {{'schedule' => 'weekly', 'every' => 1, 'day_of_week' => 'mon'}}
end
end
shared_examples_for 'a monthly schedule' do
it "should default 'months' to be every month" do
trigger['type']['months'].should == Win32::TaskScheduler::JANUARY |
Win32::TaskScheduler::FEBRUARY |
Win32::TaskScheduler::MARCH |
Win32::TaskScheduler::APRIL |
Win32::TaskScheduler::MAY |
Win32::TaskScheduler::JUNE |
Win32::TaskScheduler::JULY |
Win32::TaskScheduler::AUGUST |
Win32::TaskScheduler::SEPTEMBER |
Win32::TaskScheduler::OCTOBER |
Win32::TaskScheduler::NOVEMBER |
Win32::TaskScheduler::DECEMBER
end
it "should use the specified value for 'months'" do
@puppet_trigger['months'] = [2, 8]
trigger['type']['months'].should == Win32::TaskScheduler::FEBRUARY |
Win32::TaskScheduler::AUGUST
end
end
describe 'when given a monthly date-based trigger' do
before :each do
@puppet_trigger['schedule'] = 'monthly'
@puppet_trigger['on'] = [7, 14]
end
it_behaves_like 'a monthly schedule'
it "should not allow 'which_occurrence' to be specified" do
@puppet_trigger['which_occurrence'] = 'first'
expect {trigger}.to raise_error(
Puppet::Error,
/Neither 'day_of_week' nor 'which_occurrence' can be specified when creating a monthly date-based trigger/
)
end
it "should not allow 'day_of_week' to be specified" do
@puppet_trigger['day_of_week'] = 'mon'
expect {trigger}.to raise_error(
Puppet::Error,
/Neither 'day_of_week' nor 'which_occurrence' can be specified when creating a monthly date-based trigger/
)
end
it "should require 'on'" do
@puppet_trigger.delete('on')
expect {trigger}.to raise_error(
Puppet::Error,
/Don't know how to create a 'monthly' schedule with the options: schedule, start_date, start_time/
)
end
it "should default 'start_date' to 'today'" do
@puppet_trigger.delete('start_date')
today = Time.now
trigger['start_year'].should == today.year
trigger['start_month'].should == today.month
trigger['start_day'].should == today.day
end
it_behaves_like "a trigger that handles start_date and start_time" do
let(:trigger_hash) {{'schedule' => 'monthly', 'months' => 1, 'on' => 1}}
end
end
describe 'when given a monthly day-of-week-based trigger' do
before :each do
@puppet_trigger['schedule'] = 'monthly'
@puppet_trigger['which_occurrence'] = 'first'
@puppet_trigger['day_of_week'] = 'mon'
end
it_behaves_like 'a monthly schedule'
it "should not allow 'on' to be specified" do
@puppet_trigger['on'] = 15
expect {trigger}.to raise_error(
Puppet::Error,
/Neither 'day_of_week' nor 'which_occurrence' can be specified when creating a monthly date-based trigger/
)
end
it "should require 'which_occurrence'" do
@puppet_trigger.delete('which_occurrence')
expect {trigger}.to raise_error(
Puppet::Error,
/which_occurrence must be specified when creating a monthly day-of-week based trigger/
)
end
it "should require 'day_of_week'" do
@puppet_trigger.delete('day_of_week')
expect {trigger}.to raise_error(
Puppet::Error,
/day_of_week must be specified when creating a monthly day-of-week based trigger/
)
end
it "should default 'start_date' to 'today'" do
@puppet_trigger.delete('start_date')
today = Time.now
trigger['start_year'].should == today.year
trigger['start_month'].should == today.month
trigger['start_day'].should == today.day
end
it_behaves_like "a trigger that handles start_date and start_time" do
let(:trigger_hash) {{'schedule' => 'monthly', 'months' => 1, 'which_occurrence' => 'first', 'day_of_week' => 'mon'}}
end
end
end
describe '#validate_trigger' do
let(:provider) { described_class.new(:name => 'Test Task', :command => 'C:\Windows\System32\notepad.exe') }
it 'should succeed if all passed triggers translate from hashes to triggers' do
triggers_to_validate = [
{'schedule' => 'once', 'start_date' => '2011-09-13', 'start_time' => '13:50'},
{'schedule' => 'weekly', 'start_date' => '2011-09-13', 'start_time' => '13:50', 'day_of_week' => 'mon'}
]
provider.validate_trigger(triggers_to_validate).should == true
end
it 'should propagate the exception from translate_hash_to_trigger when it fails' do
triggers_to_validate = [
{'schedule' => 'once', 'start_date' => '2011-09-13', 'start_time' => '13:50'},
{'schedule' => 'monthly', 'this is invalid' => true}
]
expect {provider.validate_trigger(triggers_to_validate)}.to raise_error(
Puppet::Error,
/#{Regexp.escape("Unknown trigger option(s): ['this is invalid']")}/
)
end
end
describe '#flush' do
let(:resource) do
Puppet::Type.type(:scheduled_task).new(
:name => 'Test Task',
:command => 'C:\Windows\System32\notepad.exe',
:ensure => @ensure
)
end
before :each do
@mock_task = mock
@mock_task.responds_like(Win32::TaskScheduler.new)
@mock_task.stubs(:exists?).returns(true)
@mock_task.stubs(:activate)
Win32::TaskScheduler.stubs(:new).returns(@mock_task)
@command = 'C:\Windows\System32\notepad.exe'
end
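# flush should only persist the task (via save) when ensure is :present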
describe 'when :ensure is :present' do
before :each do
@ensure = :present
end
it 'should save the task' do
@mock_task.expects(:save)
resource.provider.flush
end
it 'should fail if the command is not specified' do
resource = Puppet::Type.type(:scheduled_task).new(
:name => 'Test Task',
:ensure => @ensure
)
expect { resource.provider.flush }.to raise_error(
Puppet::Error,
'Parameter command is required.'
)
end
end
describe 'when :ensure is :absent' do
before :each do
@ensure = :absent
@mock_task.stubs(:activate)
end
it 'should not save the task if :ensure is :absent' do
@mock_task.expects(:save).never
resource.provider.flush
end
it 'should not fail if the command is not specified' do
@mock_task.stubs(:save)
resource = Puppet::Type.type(:scheduled_task).new(
:name => 'Test Task',
:ensure => @ensure
)
resource.provider.flush
end
end
end
describe 'property setter methods' do
let(:resource) do
Puppet::Type.type(:scheduled_task).new(
:name => 'Test Task',
:command => 'C:\dummy_task.exe'
)
end
before :each do
@mock_task = mock
@mock_task.responds_like(Win32::TaskScheduler.new)
@mock_task.stubs(:exists?).returns(true)
@mock_task.stubs(:activate)
Win32::TaskScheduler.stubs(:new).returns(@mock_task)
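# Each setter below is expected to write through to the corresponding attribute on the underlying Win32::TaskScheduler task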
end
describe '#command=' do
it 'should set the application_name on the task' do
@mock_task.expects(:application_name=).with('C:\Windows\System32\notepad.exe')
resource.provider.command = 'C:\Windows\System32\notepad.exe'
end
end
describe '#arguments=' do
it 'should set the parameters on the task' do
@mock_task.expects(:parameters=).with(['/some /arguments /here'])
resource.provider.arguments = ['/some /arguments /here']
end
end
describe '#working_dir=' do
it 'should set the working_directory on the task' do
@mock_task.expects(:working_directory=).with('C:\Windows\System32')
resource.provider.working_dir = 'C:\Windows\System32'
end
end
describe '#enabled=' do
it 'should set the disabled flag if the task should be disabled' do
@mock_task.stubs(:flags).returns(0)
@mock_task.expects(:flags=).with(Win32::TaskScheduler::DISABLED)
resource.provider.enabled = :false
end
it 'should clear the disabled flag if the task should be enabled' do
@mock_task.stubs(:flags).returns(Win32::TaskScheduler::DISABLED)
@mock_task.expects(:flags=).with(0)
resource.provider.enabled = :true
end
end
describe '#trigger=' do
let(:resource) do
Puppet::Type.type(:scheduled_task).new(
:name => 'Test Task',
:command => 'C:\Windows\System32\notepad.exe',
:trigger => @trigger
)
end
before :each do
@mock_task = mock
@mock_task.responds_like(Win32::TaskScheduler.new)
@mock_task.stubs(:exists?).returns(true)
@mock_task.stubs(:activate)
Win32::TaskScheduler.stubs(:new).returns(@mock_task)
end
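# trigger= should delete stale or duplicate current triggers by index and add any desired triggers missing from the system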
it 'should not consider all duplicate current triggers in sync with a single desired trigger' do
@trigger = {'schedule' => 'once', 'start_date' => '2011-09-15', 'start_time' => '15:10'}
current_triggers = [
{'schedule' => 'once', 'start_date' => '2011-09-15', 'start_time' => '15:10', 'index' => 0},
{'schedule' => 'once', 'start_date' => '2011-09-15', 'start_time' => '15:10', 'index' => 1},
{'schedule' => 'once', 'start_date' => '2011-09-15', 'start_time' => '15:10', 'index' => 2},
]
resource.provider.stubs(:trigger).returns(current_triggers)
@mock_task.expects(:delete_trigger).with(1)
@mock_task.expects(:delete_trigger).with(2)
resource.provider.trigger = @trigger
end
it 'should remove triggers not defined in the resource' do
@trigger = {'schedule' => 'once', 'start_date' => '2011-09-15', 'start_time' => '15:10'}
current_triggers = [
{'schedule' => 'once', 'start_date' => '2011-09-15', 'start_time' => '15:10', 'index' => 0},
{'schedule' => 'once', 'start_date' => '2012-09-15', 'start_time' => '15:10', 'index' => 1},
{'schedule' => 'once', 'start_date' => '2013-09-15', 'start_time' => '15:10', 'index' => 2},
]
resource.provider.stubs(:trigger).returns(current_triggers)
@mock_task.expects(:delete_trigger).with(1)
@mock_task.expects(:delete_trigger).with(2)
resource.provider.trigger = @trigger
end
it 'should add triggers defined in the resource, but not found on the system' do
@trigger = [
{'schedule' => 'once', 'start_date' => '2011-09-15', 'start_time' => '15:10'},
{'schedule' => 'once', 'start_date' => '2012-09-15', 'start_time' => '15:10'},
{'schedule' => 'once', 'start_date' => '2013-09-15', 'start_time' => '15:10'},
]
current_triggers = [
{'schedule' => 'once', 'start_date' => '2011-09-15', 'start_time' => '15:10', 'index' => 0},
]
resource.provider.stubs(:trigger).returns(current_triggers)
@mock_task.expects(:trigger=).with(resource.provider.translate_hash_to_trigger(@trigger[1]))
@mock_task.expects(:trigger=).with(resource.provider.translate_hash_to_trigger(@trigger[2]))
resource.provider.trigger = @trigger
end
end
describe '#user=', :if => Puppet.features.microsoft_windows? do
before :each do
@mock_task = mock
@mock_task.responds_like(Win32::TaskScheduler.new)
@mock_task.stubs(:exists?).returns(true)
@mock_task.stubs(:activate)
Win32::TaskScheduler.stubs(:new).returns(@mock_task)
end
it 'should use nil for user and password when setting the user to the SYSTEM account' do
- Puppet::Util::Windows::Security.stubs(:name_to_sid).with('system').returns('SYSTEM SID')
+ Puppet::Util::Windows::SID.stubs(:name_to_sid).with('system').returns('SYSTEM SID')
resource = Puppet::Type.type(:scheduled_task).new(
:name => 'Test Task',
:command => 'C:\dummy_task.exe',
:user => 'system'
)
@mock_task.expects(:set_account_information).with(nil, nil)
resource.provider.user = 'system'
end
it 'should use the specified user and password when setting the user to anything other than SYSTEM' do
- Puppet::Util::Windows::Security.stubs(:name_to_sid).with('my_user_name').returns('SID A')
+ Puppet::Util::Windows::SID.stubs(:name_to_sid).with('my_user_name').returns('SID A')
resource = Puppet::Type.type(:scheduled_task).new(
:name => 'Test Task',
:command => 'C:\dummy_task.exe',
:user => 'my_user_name',
:password => 'my password'
)
@mock_task.expects(:set_account_information).with('my_user_name', 'my password')
resource.provider.user = 'my_user_name'
end
end
end
describe '#create' do
let(:resource) do
Puppet::Type.type(:scheduled_task).new(
:name => 'Test Task',
:enabled => @enabled,
:command => @command,
:arguments => @arguments,
:working_dir => @working_dir,
:trigger => { 'schedule' => 'once', 'start_date' => '2011-09-27', 'start_time' => '17:00' }
)
end
before :each do
@enabled = :true
@command = 'C:\Windows\System32\notepad.exe'
@arguments = '/a /list /of /arguments'
@working_dir = 'C:\Windows\Some\Directory'
@mock_task = mock
@mock_task.responds_like(Win32::TaskScheduler.new)
@mock_task.stubs(:exists?).returns(true)
@mock_task.stubs(:activate)
@mock_task.stubs(:application_name=)
@mock_task.stubs(:parameters=)
@mock_task.stubs(:working_directory=)
@mock_task.stubs(:set_account_information)
@mock_task.stubs(:flags)
@mock_task.stubs(:flags=)
@mock_task.stubs(:trigger_count).returns(0)
@mock_task.stubs(:trigger=)
@mock_task.stubs(:save)
Win32::TaskScheduler.stubs(:new).returns(@mock_task)
described_class.any_instance.stubs(:sync_triggers)
end
it 'should set the command' do
resource.provider.expects(:command=).with(@command)
resource.provider.create
end
it 'should set the arguments' do
resource.provider.expects(:arguments=).with(@arguments)
resource.provider.create
end
it 'should set the working_dir' do
resource.provider.expects(:working_dir=).with(@working_dir)
resource.provider.create
end
it "should set the user" do
resource.provider.expects(:user=).with(:system)
resource.provider.create
end
it 'should set the enabled property' do
resource.provider.expects(:enabled=)
resource.provider.create
end
it 'should sync triggers' do
resource.provider.expects(:trigger=)
resource.provider.create
end
end
end
diff --git a/spec/unit/provider/service/openbsd_spec.rb b/spec/unit/provider/service/openbsd_spec.rb
old mode 100644
new mode 100755
index e7ba4a4db..ad1e50d18
--- a/spec/unit/provider/service/openbsd_spec.rb
+++ b/spec/unit/provider/service/openbsd_spec.rb
@@ -1,232 +1,256 @@
#!/usr/bin/env ruby
#
# Unit testing for the OpenBSD service provider
require 'spec_helper'
provider_class = Puppet::Type.type(:service).provider(:openbsd)
describe provider_class do
before :each do
Puppet::Type.type(:service).stubs(:defaultprovider).returns described_class
Facter.stubs(:value).with(:operatingsystem).returns :openbsd
end
let :rcscripts do
[
'/etc/rc.d/apmd',
'/etc/rc.d/aucat',
'/etc/rc.d/cron',
'/etc/rc.d/puppetd'
]
end
describe "#instances" do
it "should have an instances method" do
described_class.should respond_to :instances
end
it "should list all available services" do
File.expects(:directory?).with('/etc/rc.d').returns true
Dir.expects(:glob).with('/etc/rc.d/*').returns rcscripts
rcscripts.each do |script|
File.expects(:executable?).with(script).returns true
end
described_class.instances.map(&:name).should == [
'apmd',
'aucat',
'cron',
'puppetd'
]
end
end
describe "#start" do
it "should use the supplied start command if specified" do
provider = described_class.new(Puppet::Type.type(:service).new(:name => 'sshd', :start => '/bin/foo'))
provider.expects(:execute).with(['/bin/foo'], :failonfail => true, :override_locale => false, :squelch => false, :combine => true)
provider.start
end
it "should start the service otherwise" do
provider = described_class.new(Puppet::Type.type(:service).new(:name => 'sshd'))
provider.expects(:execute).with(['/etc/rc.d/sshd', '-f', :start], :failonfail => true, :override_locale => false, :squelch => false, :combine => true)
provider.expects(:search).with('sshd').returns('/etc/rc.d/sshd')
provider.start
end
end
describe "#stop" do
it "should use the supplied stop command if specified" do
provider = described_class.new(Puppet::Type.type(:service).new(:name => 'sshd', :stop => '/bin/foo'))
provider.expects(:execute).with(['/bin/foo'], :failonfail => true, :override_locale => false, :squelch => false, :combine => true)
provider.stop
end
it "should stop the service otherwise" do
provider = described_class.new(Puppet::Type.type(:service).new(:name => 'sshd'))
provider.expects(:execute).with(['/etc/rc.d/sshd', :stop], :failonfail => true, :override_locale => false, :squelch => false, :combine => true)
provider.expects(:search).with('sshd').returns('/etc/rc.d/sshd')
provider.stop
end
end
describe "#status" do
it "should use the status command from the resource" do
provider = described_class.new(Puppet::Type.type(:service).new(:name => 'sshd', :status => '/bin/foo'))
provider.expects(:execute).with(['/etc/rc.d/sshd', :status], :failonfail => false, :override_locale => false, :squelch => false, :combine => true).never
provider.expects(:execute).with(['/bin/foo'], :failonfail => false, :override_locale => false, :squelch => false, :combine => true)
provider.status
end
it "should return :stopped when status command returns with a non-zero exitcode" do
provider = described_class.new(Puppet::Type.type(:service).new(:name => 'sshd', :status => '/bin/foo'))
provider.expects(:execute).with(['/etc/rc.d/sshd', :status], :failonfail => false, :override_locale => false, :squelch => false, :combine => true).never
provider.expects(:execute).with(['/bin/foo'], :failonfail => false, :override_locale => false, :squelch => false, :combine => true)
$CHILD_STATUS.stubs(:exitstatus).returns 3
provider.status.should == :stopped
end
it "should return :running when status command returns with a zero exitcode" do
provider = described_class.new(Puppet::Type.type(:service).new(:name => 'sshd', :status => '/bin/foo'))
provider.expects(:execute).with(['/etc/rc.d/sshd', :status], :failonfail => false, :override_locale => false, :squelch => false, :combine => true).never
provider.expects(:execute).with(['/bin/foo'], :failonfail => false, :override_locale => false, :squelch => false, :combine => true)
$CHILD_STATUS.stubs(:exitstatus).returns 0
provider.status.should == :running
end
end
describe "#restart" do
it "should use the supplied restart command if specified" do
provider = described_class.new(Puppet::Type.type(:service).new(:name => 'sshd', :restart => '/bin/foo'))
provider.expects(:execute).with(['/etc/rc.d/sshd', '-f', :restart], :failonfail => true, :override_locale => false, :squelch => false, :combine => true).never
provider.expects(:execute).with(['/bin/foo'], :failonfail => true, :override_locale => false, :squelch => false, :combine => true)
provider.restart
end
it "should restart the service with rc-service restart if hasrestart is true" do
provider = described_class.new(Puppet::Type.type(:service).new(:name => 'sshd', :hasrestart => true))
provider.expects(:execute).with(['/etc/rc.d/sshd', '-f', :restart], :failonfail => true, :override_locale => false, :squelch => false, :combine => true)
provider.expects(:search).with('sshd').returns('/etc/rc.d/sshd')
provider.restart
end
it "should restart the service with rc-service stop/start if hasrestart is false" do
provider = described_class.new(Puppet::Type.type(:service).new(:name => 'sshd', :hasrestart => false))
provider.expects(:execute).with(['/etc/rc.d/sshd', '-f', :restart], :failonfail => true, :override_locale => false, :squelch => false, :combine => true).never
provider.expects(:execute).with(['/etc/rc.d/sshd', :stop], :failonfail => true, :override_locale => false, :squelch => false, :combine => true)
provider.expects(:execute).with(['/etc/rc.d/sshd', '-f', :start], :failonfail => true, :override_locale => false, :squelch => false, :combine => true)
provider.expects(:search).with('sshd').returns('/etc/rc.d/sshd')
provider.restart
end
end
describe "#parse_rc_line" do
it "can parse a flag line with a known value" do
output = described_class.parse_rc_line('daemon_flags=')
output.should eq('')
end
it "can parse a flag line with a flag is wrapped in single quotes" do
output = described_class.parse_rc_line('daemon_flags=\'\'')
output.should eq('\'\'')
end
it "can parse a flag line with a flag is wrapped in double quotes" do
output = described_class.parse_rc_line('daemon_flags=""')
output.should eq('')
end
it "can parse a flag line with a trailing comment" do
output = described_class.parse_rc_line('daemon_flags="-d" # bees')
output.should eq('-d')
end
it "can parse a flag line with a bare word" do
output = described_class.parse_rc_line('daemon_flags=YES')
output.should eq('YES')
end
it "can parse a flag line with a flag that contains an equals" do
output = described_class.parse_rc_line('daemon_flags="-Dbla -tmpdir=foo"')
output.should eq('-Dbla -tmpdir=foo')
end
end
describe "#pkg_scripts" do
it "can retrieve the package_scripts array from rc.conf.local" do
provider = described_class.new(Puppet::Type.type(:service).new(:name => 'cupsd'))
provider.expects(:load_rcconf_local_array).returns ['pkg_scripts="dbus_daemon cupsd"']
expect(provider.pkg_scripts).to match_array(['dbus_daemon', 'cupsd'])
end
it "returns an empty array when no pkg_scripts line is found" do
provider = described_class.new(Puppet::Type.type(:service).new(:name => 'cupsd'))
provider.expects(:load_rcconf_local_array).returns ["#\n#\n#"]
expect(provider.pkg_scripts).to match_array([])
end
end
describe "#pkg_scripts_append" do
it "can append to the package_scripts array and return the result" do
provider = described_class.new(Puppet::Type.type(:service).new(:name => 'cupsd'))
provider.expects(:load_rcconf_local_array).returns ['pkg_scripts="dbus_daemon"']
- expect(provider.pkg_scripts_append).to match_array(['dbus_daemon', 'cupsd'])
+ provider.pkg_scripts_append.should === ['dbus_daemon', 'cupsd']
end
it "should not duplicate the script name" do
provider = described_class.new(Puppet::Type.type(:service).new(:name => 'cupsd'))
provider.expects(:load_rcconf_local_array).returns ['pkg_scripts="cupsd dbus_daemon"']
- expect(provider.pkg_scripts_append).to match_array(['dbus_daemon', 'cupsd'])
+ provider.pkg_scripts_append.should === ['cupsd', 'dbus_daemon']
end
end
describe "#pkg_scripts_remove" do
it "can append to the package_scripts array and return the result" do
provider = described_class.new(Puppet::Type.type(:service).new(:name => 'cupsd'))
provider.expects(:load_rcconf_local_array).returns ['pkg_scripts="dbus_daemon cupsd"']
expect(provider.pkg_scripts_remove).to match_array(['dbus_daemon'])
end
it "should not remove the script from the array unless its needed" do
provider = described_class.new(Puppet::Type.type(:service).new(:name => 'cupsd'))
provider.expects(:load_rcconf_local_array).returns ['pkg_scripts="dbus_daemon"']
expect(provider.pkg_scripts_remove).to match_array(['dbus_daemon'])
end
end
describe "#set_content_flags" do
it "can create the necessary content where none is provided" do
content = []
provider = described_class.new(Puppet::Type.type(:service).new(:name => 'cupsd'))
provider.set_content_flags(content,'-d').should match_array(['cupsd_flags="-d"'])
end
it "can modify the existing content" do
content = ['cupsd_flags="-f"']
provider = described_class.new(Puppet::Type.type(:service).new(:name => 'cupsd'))
output = provider.set_content_flags(content,"-d")
output.should match_array(['cupsd_flags="-d"'])
end
+
+ it "does not set empty flags for package scripts" do
+ content = []
+ provider = described_class.new(Puppet::Type.type(:service).new(:name => 'cupsd'))
+ provider.expects(:in_base?).returns(false)
+ output = provider.set_content_flags(content,'')
+ output.should match_array([nil])
+ end
+
+ it "does set empty flags for base scripts" do
+ content = []
+ provider = described_class.new(Puppet::Type.type(:service).new(:name => 'ntpd'))
+ provider.expects(:in_base?).returns(true)
+ output = provider.set_content_flags(content,'')
+ output.should match_array(['ntpd_flags=""'])
+ end
end
describe "#remove_content_flags" do
it "can remove the flags line from the requested content" do
content = ['cupsd_flags="-d"']
provider = described_class.new(Puppet::Type.type(:service).new(:name => 'cupsd'))
output = provider.remove_content_flags(content)
output.should_not match_array(['cupsd_flags="-d"'])
end
end
describe "#set_content_scripts" do
it "should append to the list of scripts" do
content = ['pkg_scripts="dbus_daemon"']
scripts = ['dbus_daemon','cupsd']
provider = described_class.new(Puppet::Type.type(:service).new(:name => 'cupsd'))
provider.set_content_scripts(content,scripts).should match_array(['pkg_scripts="dbus_daemon cupsd"'])
end
end
+
+ describe "#in_base?" do
+ it "should true if in base" do
+ File.stubs(:readlines).with('/etc/rc.conf').returns(['sshd_flags=""'])
+ provider = described_class.new(Puppet::Type.type(:service).new(:name => 'sshd'))
+ provider.in_base?.should be_true
+ end
+ end
end
diff --git a/spec/unit/provider/service/upstart_spec.rb b/spec/unit/provider/service/upstart_spec.rb
index 06be70acb..d31511626 100755
--- a/spec/unit/provider/service/upstart_spec.rb
+++ b/spec/unit/provider/service/upstart_spec.rb
@@ -1,522 +1,533 @@
#! /usr/bin/env ruby
require 'spec_helper'
describe Puppet::Type.type(:service).provider(:upstart) do
let(:manual) { "\nmanual" }
let(:start_on_default_runlevels) { "\nstart on runlevel [2,3,4,5]" }
let(:provider_class) { Puppet::Type.type(:service).provider(:upstart) }
def given_contents_of(file, content)
File.open(file, 'w') do |file|
file.write(content)
end
end
def then_contents_of(file)
File.open(file).read
end
def lists_processes_as(output)
Puppet::Util::Execution.stubs(:execpipe).with("/sbin/initctl list").yields(output)
provider_class.stubs(:which).with("/sbin/initctl").returns("/sbin/initctl")
end
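# given_contents_of/then_contents_of round-trip the fixture files on disk; lists_processes_as stubs the initctl output consumed by #instances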
+ it "should be the default provider on Ubuntu" do
+ Facter.expects(:value).with(:operatingsystem).returns("Ubuntu")
+ described_class.default?.should be_true
+ end
+
describe "excluding services" do
it "ignores tty and serial on Redhat systems" do
Facter.stubs(:value).with(:osfamily).returns('RedHat')
expect(described_class.excludes).to include 'serial'
expect(described_class.excludes).to include 'tty'
end
end
describe "#instances" do
it "should be able to find all instances" do
lists_processes_as("rc stop/waiting\nssh start/running, process 712")
provider_class.instances.map {|provider| provider.name}.should =~ ["rc","ssh"]
end
it "should attach the interface name for network interfaces" do
lists_processes_as("network-interface (eth0)")
provider_class.instances.first.name.should == "network-interface INTERFACE=eth0"
end
it "should attach the job name for network interface security" do
processes = "network-interface-security (network-interface/eth0)"
provider_class.stubs(:execpipe).yields(processes)
provider_class.instances.first.name.should == "network-interface-security JOB=network-interface/eth0"
end
it "should not find excluded services" do
- processes = "wait-for-state stop/waiting\nportmap-wait start/running\nidmapd-mounting stop/waiting\nstartpar-bridge start/running"
+ processes = "wait-for-state stop/waiting"
+ processes += "\nportmap-wait start/running"
+ processes += "\nidmapd-mounting stop/waiting"
+ processes += "\nstartpar-bridge start/running"
+ processes += "\ncryptdisks-udev stop/waiting"
+ processes += "\nstatd-mounting stop/waiting"
+ processes += "\ngssd-mounting stop/waiting"
provider_class.stubs(:execpipe).yields(processes)
provider_class.instances.should be_empty
end
end
describe "#search" do
it "searches through paths to find a matching conf file" do
File.stubs(:directory?).returns(true)
Puppet::FileSystem.stubs(:exist?).returns(false)
Puppet::FileSystem.expects(:exist?).with("/etc/init/foo-bar.conf").returns(true)
resource = Puppet::Type.type(:service).new(:name => "foo-bar", :provider => :upstart)
provider = provider_class.new(resource)
provider.initscript.should == "/etc/init/foo-bar.conf"
end
it "searches for just the name of a compound named service" do
File.stubs(:directory?).returns(true)
Puppet::FileSystem.stubs(:exist?).returns(false)
Puppet::FileSystem.expects(:exist?).with("/etc/init/network-interface.conf").returns(true)
resource = Puppet::Type.type(:service).new(:name => "network-interface INTERFACE=lo", :provider => :upstart)
provider = provider_class.new(resource)
provider.initscript.should == "/etc/init/network-interface.conf"
end
end
describe "#status" do
it "should use the default status command if none is specified" do
resource = Puppet::Type.type(:service).new(:name => "foo", :provider => :upstart)
provider = provider_class.new(resource)
provider.stubs(:is_upstart?).returns(true)
provider.expects(:status_exec).with(["foo"]).returns("foo start/running, process 1000")
Process::Status.any_instance.stubs(:exitstatus).returns(0)
provider.status.should == :running
end
it "should properly handle services with 'start' in their name" do
resource = Puppet::Type.type(:service).new(:name => "foostartbar", :provider => :upstart)
provider = provider_class.new(resource)
provider.stubs(:is_upstart?).returns(true)
provider.expects(:status_exec).with(["foostartbar"]).returns("foostartbar stop/waiting")
Process::Status.any_instance.stubs(:exitstatus).returns(0)
provider.status.should == :stopped
end
end
describe "inheritance" do
let :resource do
resource = Puppet::Type.type(:service).new(:name => "foo", :provider => :upstart)
end
let :provider do
provider = provider_class.new(resource)
end
describe "when upstart job" do
before(:each) do
provider.stubs(:is_upstart?).returns(true)
end
["start", "stop"].each do |command|
it "should return the #{command}cmd of its parent provider" do
provider.send("#{command}cmd".to_sym).should == [provider.command(command.to_sym), resource.name]
end
end
it "should return nil for the statuscmd" do
provider.statuscmd.should be_nil
end
end
end
describe "should be enableable" do
let :resource do
Puppet::Type.type(:service).new(:name => "foo", :provider => :upstart)
end
let :provider do
provider_class.new(resource)
end
let :init_script do
PuppetSpec::Files.tmpfile("foo.conf")
end
let :over_script do
PuppetSpec::Files.tmpfile("foo.override")
end
let :disabled_content do
"\t # \t start on\nother file stuff"
end
let :multiline_disabled do
"# \t start on other file stuff (\n" +
"# more stuff ( # )))))inline comment\n" +
"# finishing up )\n" +
"# and done )\n" +
"this line shouldn't be touched\n"
end
let :multiline_disabled_bad do
"# \t start on other file stuff (\n" +
"# more stuff ( # )))))inline comment\n" +
"# finishing up )\n" +
"# and done )\n" +
"# this is a comment i want to be a comment\n" +
"this line shouldn't be touched\n"
end
let :multiline_enabled_bad do
" \t start on other file stuff (\n" +
" more stuff ( # )))))inline comment\n" +
" finishing up )\n" +
" and done )\n" +
"# this is a comment i want to be a comment\n" +
"this line shouldn't be touched\n"
end
let :multiline_enabled do
" \t start on other file stuff (\n" +
" more stuff ( # )))))inline comment\n" +
" finishing up )\n" +
" and done )\n" +
"this line shouldn't be touched\n"
end
let :multiline_enabled_standalone do
" \t start on other file stuff (\n" +
" more stuff ( # )))))inline comment\n" +
" finishing up )\n" +
" and done )\n"
end
let :enabled_content do
"\t \t start on\nother file stuff"
end
let :content do
"just some text"
end
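# The fixtures above cover enabled ('start on ...') and disabled ('#start on ...') jobs, multiline 'start on' stanzas, and unrelated comments that must be left untouched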
describe "Upstart version < 0.6.7" do
before(:each) do
provider.stubs(:is_upstart?).returns(true)
provider.stubs(:upstart_version).returns("0.6.5")
provider.stubs(:search).returns(init_script)
end
[:enabled?,:enable,:disable].each do |enableable|
it "should respond to #{enableable}" do
provider.should respond_to(enableable)
end
end
describe "when enabling" do
it "should open and uncomment the '#start on' line" do
given_contents_of(init_script, disabled_content)
provider.enable
then_contents_of(init_script).should == enabled_content
end
it "should add a 'start on' line if none exists" do
given_contents_of(init_script, "this is a file")
provider.enable
then_contents_of(init_script).should == "this is a file" + start_on_default_runlevels
end
it "should handle multiline 'start on' stanzas" do
given_contents_of(init_script, multiline_disabled)
provider.enable
then_contents_of(init_script).should == multiline_enabled
end
it "should leave not 'start on' comments alone" do
given_contents_of(init_script, multiline_disabled_bad)
provider.enable
then_contents_of(init_script).should == multiline_enabled_bad
end
end
describe "when disabling" do
it "should open and comment the 'start on' line" do
given_contents_of(init_script, enabled_content)
provider.disable
then_contents_of(init_script).should == "#" + enabled_content
end
it "should handle multiline 'start on' stanzas" do
given_contents_of(init_script, multiline_enabled)
provider.disable
then_contents_of(init_script).should == multiline_disabled
end
end
describe "when checking whether it is enabled" do
it "should consider 'start on ...' to be enabled" do
given_contents_of(init_script, enabled_content)
provider.enabled?.should == :true
end
it "should consider '#start on ...' to be disabled" do
given_contents_of(init_script, disabled_content)
provider.enabled?.should == :false
end
it "should consider no start on line to be disabled" do
given_contents_of(init_script, content)
provider.enabled?.should == :false
end
end
end
describe "Upstart version < 0.9.0" do
before(:each) do
provider.stubs(:is_upstart?).returns(true)
provider.stubs(:upstart_version).returns("0.7.0")
provider.stubs(:search).returns(init_script)
end
[:enabled?,:enable,:disable].each do |enableable|
it "should respond to #{enableable}" do
provider.should respond_to(enableable)
end
end
describe "when enabling" do
it "should open and uncomment the '#start on' line" do
given_contents_of(init_script, disabled_content)
provider.enable
then_contents_of(init_script).should == enabled_content
end
it "should add a 'start on' line if none exists" do
given_contents_of(init_script, "this is a file")
provider.enable
then_contents_of(init_script).should == "this is a file" + start_on_default_runlevels
end
it "should handle multiline 'start on' stanzas" do
given_contents_of(init_script, multiline_disabled)
provider.enable
then_contents_of(init_script).should == multiline_enabled
end
it "should remove manual stanzas" do
given_contents_of(init_script, multiline_enabled + manual)
provider.enable
then_contents_of(init_script).should == multiline_enabled
end
it "should leave not 'start on' comments alone" do
given_contents_of(init_script, multiline_disabled_bad)
provider.enable
then_contents_of(init_script).should == multiline_enabled_bad
end
end
describe "when disabling" do
it "should add a manual stanza" do
given_contents_of(init_script, enabled_content)
provider.disable
then_contents_of(init_script).should == enabled_content + manual
end
it "should remove manual stanzas before adding new ones" do
given_contents_of(init_script, multiline_enabled + manual + "\n" + multiline_enabled)
provider.disable
then_contents_of(init_script).should == multiline_enabled + "\n" + multiline_enabled + manual
end
it "should handle multiline 'start on' stanzas" do
given_contents_of(init_script, multiline_enabled)
provider.disable
then_contents_of(init_script).should == multiline_enabled + manual
end
end
describe "when checking whether it is enabled" do
describe "with no manual stanza" do
it "should consider 'start on ...' to be enabled" do
given_contents_of(init_script, enabled_content)
provider.enabled?.should == :true
end
it "should consider '#start on ...' to be disabled" do
given_contents_of(init_script, disabled_content)
provider.enabled?.should == :false
end
it "should consider no start on line to be disabled" do
given_contents_of(init_script, content)
provider.enabled?.should == :false
end
end
describe "with manual stanza" do
it "should consider 'start on ...' to be disabled if there is a trailing manual stanza" do
given_contents_of(init_script, enabled_content + manual + "\nother stuff")
provider.enabled?.should == :false
end
it "should consider two start on lines with a manual in the middle to be enabled" do
given_contents_of(init_script, enabled_content + manual + "\n" + enabled_content)
provider.enabled?.should == :true
end
end
end
end
describe "Upstart version > 0.9.0" do
before(:each) do
provider.stubs(:is_upstart?).returns(true)
provider.stubs(:upstart_version).returns("0.9.5")
provider.stubs(:search).returns(init_script)
provider.stubs(:overscript).returns(over_script)
end
[:enabled?,:enable,:disable].each do |enableable|
it "should respond to #{enableable}" do
provider.should respond_to(enableable)
end
end
describe "when enabling" do
it "should add a 'start on' line if none exists" do
given_contents_of(init_script, "this is a file")
provider.enable
then_contents_of(init_script).should == "this is a file"
then_contents_of(over_script).should == start_on_default_runlevels
end
it "should handle multiline 'start on' stanzas" do
given_contents_of(init_script, multiline_disabled)
provider.enable
then_contents_of(init_script).should == multiline_disabled
then_contents_of(over_script).should == start_on_default_runlevels
end
it "should remove any manual stanzas from the override file" do
given_contents_of(over_script, manual)
given_contents_of(init_script, enabled_content)
provider.enable
then_contents_of(init_script).should == enabled_content
then_contents_of(over_script).should == ""
end
it "should copy existing start on from conf file if conf file is disabled" do
given_contents_of(init_script, multiline_enabled_standalone + manual)
provider.enable
then_contents_of(init_script).should == multiline_enabled_standalone + manual
then_contents_of(over_script).should == multiline_enabled_standalone
end
it "should leave not 'start on' comments alone" do
given_contents_of(init_script, multiline_disabled_bad)
given_contents_of(over_script, "")
provider.enable
then_contents_of(init_script).should == multiline_disabled_bad
then_contents_of(over_script).should == start_on_default_runlevels
end
end
describe "when disabling" do
it "should add a manual stanza to the override file" do
given_contents_of(init_script, enabled_content)
provider.disable
then_contents_of(init_script).should == enabled_content
then_contents_of(over_script).should == manual
end
it "should handle multiline 'start on' stanzas" do
given_contents_of(init_script, multiline_enabled)
provider.disable
then_contents_of(init_script).should == multiline_enabled
then_contents_of(over_script).should == manual
end
end
describe "when checking whether it is enabled" do
describe "with no override file" do
it "should consider 'start on ...' to be enabled" do
given_contents_of(init_script, enabled_content)
provider.enabled?.should == :true
end
it "should consider '#start on ...' to be disabled" do
given_contents_of(init_script, disabled_content)
provider.enabled?.should == :false
end
it "should consider no start on line to be disabled" do
given_contents_of(init_script, content)
provider.enabled?.should == :false
end
end
describe "with override file" do
it "should consider 'start on ...' to be disabled if there is manual in override file" do
given_contents_of(init_script, enabled_content)
given_contents_of(over_script, manual + "\nother stuff")
provider.enabled?.should == :false
end
it "should consider '#start on ...' to be enabled if there is a start on in the override file" do
given_contents_of(init_script, disabled_content)
given_contents_of(over_script, "start on stuff")
provider.enabled?.should == :true
end
end
end
end
end
end
diff --git a/spec/unit/provider/ssh_authorized_key/parsed_spec.rb b/spec/unit/provider/ssh_authorized_key/parsed_spec.rb
index d0cd4e850..2e88c57df 100755
--- a/spec/unit/provider/ssh_authorized_key/parsed_spec.rb
+++ b/spec/unit/provider/ssh_authorized_key/parsed_spec.rb
@@ -1,255 +1,261 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'shared_behaviours/all_parsedfile_providers'
require 'puppet_spec/files'
provider_class = Puppet::Type.type(:ssh_authorized_key).provider(:parsed)
describe provider_class, :unless => Puppet.features.microsoft_windows? do
include PuppetSpec::Files
before :each do
@keyfile = tmpfile('authorized_keys')
@provider_class = provider_class
@provider_class.initvars
@provider_class.any_instance.stubs(:target).returns @keyfile
@user = 'random_bob'
Puppet::Util.stubs(:uid).with(@user).returns 12345
end
def mkkey(args)
args[:target] = @keyfile
args[:user] = @user
resource = Puppet::Type.type(:ssh_authorized_key).new(args)
key = @provider_class.new(resource)
args.each do |p,v|
key.send(p.to_s + "=", v)
end
key
end
def genkey(key)
@provider_class.stubs(:filetype).returns(Puppet::Util::FileType::FileTypeRam)
File.stubs(:chown)
File.stubs(:chmod)
Puppet::Util::SUIDManager.stubs(:asuser).yields
key.flush
@provider_class.target_object(@keyfile).read
end
it_should_behave_like "all parsedfile providers", provider_class
it "should be able to generate a basic authorized_keys file" do
key = mkkey(:name => "Just_Testing",
:key => "AAAAfsfddsjldjgksdflgkjsfdlgkj",
:type => "ssh-dss",
:ensure => :present,
:options => [:absent]
)
genkey(key).should == "ssh-dss AAAAfsfddsjldjgksdflgkjsfdlgkj Just_Testing\n"
end
it "should be able to generate an authorized_keys file with options" do
key = mkkey(:name => "root@localhost",
:key => "AAAAfsfddsjldjgksdflgkjsfdlgkj",
:type => "ssh-rsa",
:ensure => :present,
:options => ['from="192.168.1.1"', "no-pty", "no-X11-forwarding"]
)
genkey(key).should == "from=\"192.168.1.1\",no-pty,no-X11-forwarding ssh-rsa AAAAfsfddsjldjgksdflgkjsfdlgkj root@localhost\n"
end
it "should parse comments" do
result = [{ :record_type => :comment, :line => "# hello" }]
@provider_class.parse("# hello\n").should == result
end
it "should parse comments with leading whitespace" do
result = [{ :record_type => :comment, :line => " # hello" }]
@provider_class.parse(" # hello\n").should == result
end
it "should skip over lines with only whitespace" do
result = [{ :record_type => :comment, :line => "#before" },
{ :record_type => :blank, :line => " " },
{ :record_type => :comment, :line => "#after" }]
@provider_class.parse("#before\n \n#after\n").should == result
end
it "should skip over completely empty lines" do
result = [{ :record_type => :comment, :line => "#before"},
{ :record_type => :blank, :line => ""},
{ :record_type => :comment, :line => "#after"}]
@provider_class.parse("#before\n\n#after\n").should == result
end
it "should be able to parse name if it includes whitespace" do
@provider_class.parse_line('ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQC7pHZ1XRj3tXbFpPFhMGU1bVwz7jr13zt/wuE+pVIJA8GlmHYuYtIxHPfDHlkixdwLachCpSQUL9NbYkkRFRn9m6PZ7125ohE4E4m96QS6SGSQowTiRn4Lzd9LV38g93EMHjPmEkdSq7MY4uJEd6DUYsLvaDYdIgBiLBIWPA3OrQ== fancy user')[:name].should == 'fancy user'
@provider_class.parse_line('from="host1.reductlivelabs.com,host.reductivelabs.com",command="/usr/local/bin/run",ssh-pty ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQC7pHZ1XRj3tXbFpPFhMGU1bVwz7jr13zt/wuE+pVIJA8GlmHYuYtIxHPfDHlkixdwLachCpSQUL9NbYkkRFRn9m6PZ7125ohE4E4m96QS6SGSQowTiRn4Lzd9LV38g93EMHjPmEkdSq7MY4uJEd6DUYsLvaDYdIgBiLBIWPA3OrQ== fancy user')[:name].should == 'fancy user'
end
it "should be able to parse options containing commas via its parse_options method" do
options = %w{from="host1.reductlivelabs.com,host.reductivelabs.com" command="/usr/local/bin/run" ssh-pty}
optionstr = options.join(", ")
@provider_class.parse_options(optionstr).should == options
end
+ it "should parse quoted options" do
+ line = 'command="/usr/local/bin/mybin \"$SSH_ORIGINAL_COMMAND\"" ssh-rsa xxx mykey'
+
+ @provider_class.parse(line)[0][:options][0].should == 'command="/usr/local/bin/mybin \"$SSH_ORIGINAL_COMMAND\""'
+ end
+
it "should use '' as name for entries that lack a comment" do
line = "ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAut8aOSxenjOqF527dlsdHWV4MNoAsX14l9M297+SQXaQ5Z3BedIxZaoQthkDALlV/25A1COELrg9J2MqJNQc8Xe9XQOIkBQWWinUlD/BXwoOTWEy8C8zSZPHZ3getMMNhGTBO+q/O+qiJx3y5cA4MTbw2zSxukfWC87qWwcZ64UUlegIM056vPsdZWFclS9hsROVEa57YUMrehQ1EGxT4Z5j6zIopufGFiAPjZigq/vqgcAqhAKP6yu4/gwO6S9tatBeEjZ8fafvj1pmvvIplZeMr96gHE7xS3pEEQqnB3nd4RY7AF6j9kFixnsytAUO7STPh/M3pLiVQBN89TvWPQ=="
@provider_class.parse(line)[0][:name].should == ""
end
{
# ssh-keygen -t dsa -b 1024
'ssh-dss' => 'AAAAB3NzaC1kc3MAAACBANGTefWMXS780qLMMgysq3GNMKzg55LXZODif6Tqv1vtTh4Wuk3J5X5u644jTyNdAIn1RiBI9MnwnZMZ6nXKvucMcMQWMibYS9W2MhkRj3oqsLWMMsdGXJL18SWM5A6oC3oIRC4JHJZtkm0OctR2trKxmX+MGhdCd+Xpsh9CNK8XAAAAFQD4olFiwv+QQUFdaZbWUy1CLEG9xQAAAIByCkXKgoriZF8bQ0OX1sKuR69M/6n5ngmQGVBKB7BQkpUjbK/OggB6iJgst5utKkDcaqYRnrTYG9q3jJ/flv7yYePuoSreS0nCMMx9gpEYuq+7Sljg9IecmN/IHrNd9qdYoASy5iuROQMvEZM7KFHA8vBv0tWdBOsp4hZKyiL1DAAAAIEAjkZlOps9L+cD/MTzxDj7toYYypdLOvjlcPBaglkPZoFZ0MAKTI0zXlVX1cWAnkd0Yfo4EpP+6XAjlZkod+QXKXM4Tb4PnR34ASMeU6sEjM61Na24S7JD3gpPKataFU/oH3hzXsBdK2ttKYmoqvf61h32IA/3Z5PjCCD9pPLPpAY',
# ssh-keygen -t rsa -b 2048
'ssh-rsa' => 'AAAAB3NzaC1yc2EAAAADAQABAAABAQDYtEaWa1mlxaAh9vtiz6RCVKDiJHDY15nsqqWU7F7A1+U1498+sWDyRDkZ8vXWQpzyOMBzBSHIxhsprlKhkjomy8BuJP+bHDBIKx4zgSFDrklrPIf467Iuug8J0qqDLxO4rOOjeAiLEyC0t2ZGnsTEea+rmat0bJ2cv3g5L4gH/OFz2pI4ZLp1HGN83ipl5UH8CjXQKwo3Db1E3WJCqKgszVX0Z4/qjnBRxFMoqky/1mGb/mX1eoT9JyQ8OhU9uENZOShkksSpgUqjlrjpj0Yd14hBlnE3M18pE4ivxjzectA/XRKNZaxOL1YREtU8sXusAwmlEY4aJ64aR0JrXfgx',
# ssh-keygen -t ecdsa -b 256
'ecdsa-sha2-nistp256' => 'AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBO5PfBf0c2jAuqD+Lj3j+SuXOXNT2uqESLVOn5jVQfEF9GzllOw+CMOpUvV1CiOOn+F1ET15vcsfmD7z05WUTA=',
# ssh-keygen -t ecdsa -b 384
'ecdsa-sha2-nistp384' => 'AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJIfxNoVK4FX3RuMlkHOwwxXwAh6Fqx5uAp4ftXrJ+64qYuIzb+/zSAkJV698Sre1b1lb0G4LyDdVAvXwaYK9kN25vy8umV3WdfZeHKXJGCcrplMCbbOERWARlpiPNEblg==',
# ssh-keygen -t ecdsa -b 521
'ecdsa-sha2-nistp521' => 'AAAAE2VjZHNhLXNoYTItbmlzdHA1MjEAAAAIbmlzdHA1MjEAAACFBADLK+u12xwB0JOwpmaxYXv8KnPK4p+SE2405qoo+vpAQ569fMwPMgKzltd770amdeuFogw/MJu17PN9LDdrD3o0uwHMjWee6TpHQDkuEetaxiou6K0WAzgbxx9QsY0MsJgXf1BuMLqdK+xT183wOSXwwumv99G7T32dOJZ5tYrH0y4XMw==',
# ssh-keygen -t ed25519
'ssh-ed25519' => 'AAAAC3NzaC1lZDI1NTE5AAAAIBWvu7D1KHBPaNXQcEuBsp48+JyPelXAq8ds6K5Du9gd',
}.each_pair do |keytype, keydata|
it "should be able to parse a #{keytype} key entry" do
comment = 'sample_key'
record = @provider_class.parse_line("#{keytype} #{keydata} #{comment}")
record.should_not be_nil
record[:name].should == comment
record[:key].should == keydata
record[:type].should == keytype
end
end
end
describe provider_class, :unless => Puppet.features.microsoft_windows? do
before :each do
@resource = Puppet::Type.type(:ssh_authorized_key).new(:name => "foo", :user => "random_bob")
@provider = provider_class.new(@resource)
provider_class.stubs(:filetype).returns(Puppet::Util::FileType::FileTypeRam)
Puppet::Util::SUIDManager.stubs(:asuser).yields
provider_class.initvars
end
describe "when flushing" do
before :each do
# Stub file and directory operations
Dir.stubs(:mkdir)
File.stubs(:chmod)
File.stubs(:chown)
end
describe "and both a user and a target have been specified" do
before :each do
Puppet::Util.stubs(:uid).with("random_bob").returns 12345
@resource[:user] = "random_bob"
target = "/tmp/.ssh_dir/place_to_put_authorized_keys"
@resource[:target] = target
end
it "should create the directory" do
Puppet::FileSystem.stubs(:exist?).with("/tmp/.ssh_dir").returns false
Dir.expects(:mkdir).with("/tmp/.ssh_dir", 0700)
@provider.flush
end
it "should absolutely not chown the directory to the user" do
uid = Puppet::Util.uid("random_bob")
File.expects(:chown).never
@provider.flush
end
it "should absolutely not chown the key file to the user" do
uid = Puppet::Util.uid("random_bob")
File.expects(:chown).never
@provider.flush
end
it "should chmod the key file to 0600" do
File.expects(:chmod).with(0600, "/tmp/.ssh_dir/place_to_put_authorized_keys")
@provider.flush
end
end
describe "and a user has been specified with no target" do
before :each do
@resource[:user] = "nobody"
#
# I'd like to use random_bob here and something like
#
# File.stubs(:expand_path).with("~random_bob/.ssh").returns "/users/r/random_bob/.ssh"
#
# but mocha objects strenuously to stubbing File.expand_path
# so I'm left with using nobody.
@dir = File.expand_path("~nobody/.ssh")
end
it "should create the directory if it doesn't exist" do
Puppet::FileSystem.stubs(:exist?).with(@dir).returns false
Dir.expects(:mkdir).with(@dir,0700)
@provider.flush
end
it "should not create or chown the directory if it already exist" do
Puppet::FileSystem.stubs(:exist?).with(@dir).returns false
Dir.expects(:mkdir).never
@provider.flush
end
it "should absolutely not chown the directory to the user if it creates it" do
Puppet::FileSystem.stubs(:exist?).with(@dir).returns false
Dir.stubs(:mkdir).with(@dir,0700)
uid = Puppet::Util.uid("nobody")
File.expects(:chown).never
@provider.flush
end
it "should not create or chown the directory if it already exist" do
Puppet::FileSystem.stubs(:exist?).with(@dir).returns false
Dir.expects(:mkdir).never
File.expects(:chown).never
@provider.flush
end
it "should absolutely not chown the key file to the user" do
uid = Puppet::Util.uid("nobody")
File.expects(:chown).never
@provider.flush
end
it "should chmod the key file to 0600" do
File.expects(:chmod).with(0600, File.expand_path("~nobody/.ssh/authorized_keys"))
@provider.flush
end
end
describe "and a target has been specified with no user" do
it "should raise an error" do
@resource = Puppet::Type.type(:ssh_authorized_key).new(:name => "foo", :target => "/tmp/.ssh_dir/place_to_put_authorized_keys")
@provider = provider_class.new(@resource)
proc { @provider.flush }.should raise_error
end
end
describe "and an invalid user has been specified with no target" do
it "should catch an exception and raise a Puppet error" do
@resource[:user] = "thisusershouldnotexist"
lambda { @provider.flush }.should raise_error(Puppet::Error)
end
end
end
end
diff --git a/spec/unit/provider/user/user_role_add_spec.rb b/spec/unit/provider/user/user_role_add_spec.rb
index 8dd48e767..42cc4995e 100755
--- a/spec/unit/provider/user/user_role_add_spec.rb
+++ b/spec/unit/provider/user/user_role_add_spec.rb
@@ -1,335 +1,357 @@
require 'spec_helper'
require 'puppet_spec/files'
require 'tempfile'
describe Puppet::Type.type(:user).provider(:user_role_add), :unless => Puppet.features.microsoft_windows? do
include PuppetSpec::Files
let(:resource) { Puppet::Type.type(:user).new(:name => 'myuser', :managehome => false, :allowdupe => false) }
let(:provider) { described_class.new(resource) }
before do
resource.stubs(:should).returns "fakeval"
resource.stubs(:should).with(:keys).returns Hash.new
resource.stubs(:[]).returns "fakeval"
end
describe "#command" do
before do
klass = stub("provider")
klass.stubs(:command).with(:foo).returns("userfoo")
klass.stubs(:command).with(:role_foo).returns("rolefoo")
provider.stubs(:class).returns(klass)
end
it "should use the command if not a role and ensure!=role" do
provider.stubs(:is_role?).returns(false)
provider.stubs(:exists?).returns(false)
resource.stubs(:[]).with(:ensure).returns(:present)
provider.class.stubs(:foo)
provider.command(:foo).should == "userfoo"
end
it "should use the role command when a role" do
provider.stubs(:is_role?).returns(true)
provider.command(:foo).should == "rolefoo"
end
it "should use the role command when !exists and ensure=role" do
provider.stubs(:is_role?).returns(false)
provider.stubs(:exists?).returns(false)
resource.stubs(:[]).with(:ensure).returns(:role)
provider.command(:foo).should == "rolefoo"
end
end
describe "#transition" do
it "should return the type set to whatever is passed in" do
provider.expects(:command).with(:modify).returns("foomod")
provider.transition("bar").include?("type=bar")
end
end
describe "#create" do
before do
provider.stubs(:password=)
end
it "should use the add command when the user is not a role" do
provider.stubs(:is_role?).returns(false)
provider.expects(:addcmd).returns("useradd")
provider.expects(:run).at_least_once
provider.create
end
it "should use transition(normal) when the user is a role" do
provider.stubs(:is_role?).returns(true)
provider.expects(:transition).with("normal")
provider.expects(:run)
provider.create
end
it "should set password age rules" do
resource = Puppet::Type.type(:user).new :name => "myuser", :password_min_age => 5, :password_max_age => 10, :provider => :user_role_add
provider = described_class.new(resource)
provider.stubs(:user_attributes)
provider.stubs(:execute)
provider.expects(:execute).with { |cmd, *args| args == ["-n", 5, "-x", 10, "myuser"] }
provider.create
end
end
describe "#destroy" do
it "should use the delete command if the user exists and is not a role" do
provider.stubs(:exists?).returns(true)
provider.stubs(:is_role?).returns(false)
provider.expects(:deletecmd)
provider.expects(:run)
provider.destroy
end
it "should use the delete command if the user is a role" do
provider.stubs(:exists?).returns(true)
provider.stubs(:is_role?).returns(true)
provider.expects(:deletecmd)
provider.expects(:run)
provider.destroy
end
end
describe "#create_role" do
it "should use the transition(role) if the user exists" do
provider.stubs(:exists?).returns(true)
provider.stubs(:is_role?).returns(false)
provider.expects(:transition).with("role")
provider.expects(:run)
provider.create_role
end
it "should use the add command when role doesn't exists" do
provider.stubs(:exists?).returns(false)
provider.expects(:addcmd)
provider.expects(:run)
provider.create_role
end
end
describe "with :allow_duplicates" do
before do
resource.stubs(:allowdupe?).returns true
provider.stubs(:is_role?).returns(false)
provider.stubs(:execute)
resource.stubs(:system?).returns false
provider.expects(:execute).with { |args| args.include?("-o") }
end
it "should add -o when the user is being created" do
provider.stubs(:password=)
provider.create
end
it "should add -o when the uid is being modified" do
provider.uid = 150
end
end
[:roles, :auths, :profiles].each do |val|
context "#send" do
describe "when getting #{val}" do
it "should get the user_attributes" do
provider.expects(:user_attributes)
provider.send(val)
end
it "should get the #{val} attribute" do
attributes = mock("attributes")
attributes.expects(:[]).with(val)
provider.stubs(:user_attributes).returns(attributes)
provider.send(val)
end
end
end
end
describe "#keys" do
it "should get the user_attributes" do
provider.expects(:user_attributes)
provider.keys
end
it "should call removed_managed_attributes" do
provider.stubs(:user_attributes).returns({ :type => "normal", :foo => "something" })
provider.expects(:remove_managed_attributes)
provider.keys
end
it "should removed managed attribute (type, auths, roles, etc)" do
provider.stubs(:user_attributes).returns({ :type => "normal", :foo => "something" })
provider.keys.should == { :foo => "something" }
end
end
describe "#add_properties" do
it "should call build_keys_cmd" do
resource.stubs(:should).returns ""
resource.expects(:should).with(:keys).returns({ :foo => "bar" })
provider.expects(:build_keys_cmd).returns([])
provider.add_properties
end
it "should add the elements of the keys hash to an array" do
resource.stubs(:should).returns ""
resource.expects(:should).with(:keys).returns({ :foo => "bar"})
provider.add_properties.must == ["-K", "foo=bar"]
end
end
describe "#build_keys_cmd" do
it "should build cmd array with keypairs seperated by -K ending with user" do
provider.build_keys_cmd({"foo" => "bar", "baz" => "boo"}).should eql(["-K", "foo=bar", "-K", "baz=boo"])
end
end
describe "#keys=" do
before do
provider.stubs(:is_role?).returns(false)
end
it "should run a command" do
provider.expects(:run)
provider.keys=({})
end
it "should build the command" do
resource.stubs(:[]).with(:name).returns("someuser")
provider.stubs(:command).returns("usermod")
provider.expects(:build_keys_cmd).returns(["-K", "foo=bar"])
provider.expects(:run).with(["usermod", "-K", "foo=bar", "someuser"], "modify attribute key pairs")
provider.keys=({})
end
end
describe "#password" do
before do
@array = mock "array"
end
it "should readlines of /etc/shadow" do
File.expects(:readlines).with("/etc/shadow").returns([])
provider.password
end
it "should reject anything that doesn't start with alpha numerics" do
@array.expects(:reject).returns([])
File.stubs(:readlines).with("/etc/shadow").returns(@array)
provider.password
end
it "should collect splitting on ':'" do
@array.stubs(:reject).returns(@array)
@array.expects(:collect).returns([])
File.stubs(:readlines).with("/etc/shadow").returns(@array)
provider.password
end
it "should find the matching user" do
resource.stubs(:[]).with(:name).returns("username")
@array.stubs(:reject).returns(@array)
@array.stubs(:collect).returns([["username", "hashedpassword"], ["someoneelse", "theirpassword"]])
File.stubs(:readlines).with("/etc/shadow").returns(@array)
provider.password.must == "hashedpassword"
end
it "should get the right password" do
resource.stubs(:[]).with(:name).returns("username")
File.stubs(:readlines).with("/etc/shadow").returns(["#comment", " nonsense", " ", "username:hashedpassword:stuff:foo:bar:::", "other:pword:yay:::"])
provider.password.must == "hashedpassword"
end
end
describe "#password=" do
let(:path) { tmpfile('etc-shadow') }
before :each do
provider.stubs(:target_file_path).returns(path)
end
def write_fixture(content)
File.open(path, 'w') { |f| f.print(content) }
end
it "should update the target user" do
write_fixture <<FIXTURE
fakeval:seriously:15315:0:99999:7:::
FIXTURE
provider.password = "totally"
File.read(path).should =~ /^fakeval:totally:/
end
it "should only update the target user" do
Date.expects(:today).returns Date.new(2011,12,07)
write_fixture <<FIXTURE
before:seriously:15315:0:99999:7:::
fakeval:seriously:15315:0:99999:7:::
fakevalish:seriously:15315:0:99999:7:::
after:seriously:15315:0:99999:7:::
FIXTURE
provider.password = "totally"
File.read(path).should == <<EOT
before:seriously:15315:0:99999:7:::
fakeval:totally:15315:0:99999:7:::
fakevalish:seriously:15315:0:99999:7:::
after:seriously:15315:0:99999:7:::
EOT
end
# This preserves the current semantics, but is it right? --daniel 2012-02-05
it "should do nothing if the target user is missing" do
fixture = <<FIXTURE
before:seriously:15315:0:99999:7:::
fakevalish:seriously:15315:0:99999:7:::
after:seriously:15315:0:99999:7:::
FIXTURE
write_fixture fixture
provider.password = "totally"
File.read(path).should == fixture
end
it "should update the lastchg field" do
Date.expects(:today).returns Date.new(2013,5,12) # 15837 days after 1970-01-01
write_fixture <<FIXTURE
before:seriously:15315:0:99999:7:::
fakeval:seriously:15629:0:99999:7:::
fakevalish:seriously:15315:0:99999:7:::
after:seriously:15315:0:99999:7:::
FIXTURE
provider.password = "totally"
File.read(path).should == <<EOT
before:seriously:15315:0:99999:7:::
fakeval:totally:15837:0:99999:7:::
fakevalish:seriously:15315:0:99999:7:::
after:seriously:15315:0:99999:7:::
EOT
end
end
describe "#shadow_entry" do
it "should return the line for the right user" do
File.stubs(:readlines).returns(["someuser:!:10:5:20:7:1::\n", "fakeval:*:20:10:30:7:2::\n", "testuser:*:30:15:40:7:3::\n"])
- provider.shadow_entry.should == ["fakeval", "*", "20", "10", "30", "7", "2"]
+ provider.shadow_entry.should == ["fakeval", "*", "20", "10", "30", "7", "2", "", ""]
end
end
describe "#password_max_age" do
it "should return a maximum age number" do
File.stubs(:readlines).returns(["fakeval:NP:12345:0:50::::\n"])
provider.password_max_age.should == "50"
end
it "should return -1 for no maximum" do
File.stubs(:readlines).returns(["fakeval:NP:12345::::::\n"])
provider.password_max_age.should == -1
end
+
+ it "should return -1 for no maximum when failed attempts are present" do
+ File.stubs(:readlines).returns(["fakeval:NP:12345::::::3\n"])
+ provider.password_max_age.should == -1
+ end
+ end
+
+ describe "#password_min_age" do
+ it "should return a minimum age number" do
+ File.stubs(:readlines).returns(["fakeval:NP:12345:10:50::::\n"])
+ provider.password_min_age.should == "10"
+ end
+
+ it "should return -1 for no minimum" do
+ File.stubs(:readlines).returns(["fakeval:NP:12345::::::\n"])
+ provider.password_min_age.should == -1
+ end
+
+ it "should return -1 for no minimum when failed attempts are present" do
+ File.stubs(:readlines).returns(["fakeval:NP:12345::::::3\n"])
+ provider.password_min_age.should == -1
+ end
end
end
diff --git a/spec/unit/provider/user/windows_adsi_spec.rb b/spec/unit/provider/user/windows_adsi_spec.rb
index 8d3ed1d0a..84aa8a74c 100755
--- a/spec/unit/provider/user/windows_adsi_spec.rb
+++ b/spec/unit/provider/user/windows_adsi_spec.rb
@@ -1,173 +1,173 @@
#!/usr/bin/env ruby
require 'spec_helper'
-describe Puppet::Type.type(:user).provider(:windows_adsi) do
+describe Puppet::Type.type(:user).provider(:windows_adsi), :if => Puppet.features.microsoft_windows? do
let(:resource) do
Puppet::Type.type(:user).new(
:title => 'testuser',
:comment => 'Test J. User',
:provider => :windows_adsi
)
end
let(:provider) { resource.provider }
let(:connection) { stub 'connection' }
before :each do
- Puppet::Util::ADSI.stubs(:computer_name).returns('testcomputername')
- Puppet::Util::ADSI.stubs(:connect).returns connection
+ Puppet::Util::Windows::ADSI.stubs(:computer_name).returns('testcomputername')
+ Puppet::Util::Windows::ADSI.stubs(:connect).returns connection
end
describe ".instances" do
it "should enumerate all users" do
names = ['user1', 'user2', 'user3']
stub_users = names.map{|n| stub(:name => n)}
connection.stubs(:execquery).with('select name from win32_useraccount where localaccount = "TRUE"').returns(stub_users)
described_class.instances.map(&:name).should =~ names
end
end
- it "should provide access to a Puppet::Util::ADSI::User object" do
- provider.user.should be_a(Puppet::Util::ADSI::User)
+ it "should provide access to a Puppet::Util::Windows::ADSI::User object" do
+ provider.user.should be_a(Puppet::Util::Windows::ADSI::User)
end
describe "when managing groups" do
it 'should return the list of groups as a comma-separated list' do
provider.user.stubs(:groups).returns ['group1', 'group2', 'group3']
provider.groups.should == 'group1,group2,group3'
end
it "should return absent if there are no groups" do
provider.user.stubs(:groups).returns []
provider.groups.should == ''
end
it 'should be able to add a user to a set of groups' do
resource[:membership] = :minimum
provider.user.expects(:set_groups).with('group1,group2', true)
provider.groups = 'group1,group2'
resource[:membership] = :inclusive
provider.user.expects(:set_groups).with('group1,group2', false)
provider.groups = 'group1,group2'
end
end
describe "when creating a user" do
it "should create the user on the system and set its other properties" do
resource[:groups] = ['group1', 'group2']
resource[:membership] = :inclusive
resource[:comment] = 'a test user'
resource[:home] = 'C:\Users\testuser'
user = stub 'user'
- Puppet::Util::ADSI::User.expects(:create).with('testuser').returns user
+ Puppet::Util::Windows::ADSI::User.expects(:create).with('testuser').returns user
user.stubs(:groups).returns(['group2', 'group3'])
create = sequence('create')
user.expects(:password=).in_sequence(create)
user.expects(:commit).in_sequence(create)
user.expects(:set_groups).with('group1,group2', false).in_sequence(create)
user.expects(:[]=).with('Description', 'a test user')
user.expects(:[]=).with('HomeDirectory', 'C:\Users\testuser')
provider.create
end
- it "should load the profile if managehome is set", :if => Puppet.features.microsoft_windows? do
+ it "should load the profile if managehome is set" do
resource[:password] = '0xDeadBeef'
resource[:managehome] = true
user = stub_everything 'user'
- Puppet::Util::ADSI::User.expects(:create).with('testuser').returns user
+ Puppet::Util::Windows::ADSI::User.expects(:create).with('testuser').returns user
Puppet::Util::Windows::User.expects(:load_profile).with('testuser', '0xDeadBeef')
provider.create
end
it "should set a user's password" do
provider.user.expects(:password=).with('plaintextbad')
provider.password = "plaintextbad"
end
it "should test a valid user password" do
resource[:password] = 'plaintext'
provider.user.expects(:password_is?).with('plaintext').returns true
provider.password.should == 'plaintext'
end
it "should test a bad user password" do
resource[:password] = 'plaintext'
provider.user.expects(:password_is?).with('plaintext').returns false
provider.password.should == :absent
end
it 'should not create a user if a group by the same name exists' do
- Puppet::Util::ADSI::User.expects(:create).with('testuser').raises( Puppet::Error.new("Cannot create user if group 'testuser' exists.") )
+ Puppet::Util::Windows::ADSI::User.expects(:create).with('testuser').raises( Puppet::Error.new("Cannot create user if group 'testuser' exists.") )
expect{ provider.create }.to raise_error( Puppet::Error,
/Cannot create user if group 'testuser' exists./ )
end
end
it 'should be able to test whether a user exists' do
- Puppet::Util::ADSI.stubs(:sid_uri_safe).returns(nil)
- Puppet::Util::ADSI.stubs(:connect).returns stub('connection')
+ Puppet::Util::Windows::ADSI.stubs(:sid_uri_safe).returns(nil)
+ Puppet::Util::Windows::ADSI.stubs(:connect).returns stub('connection')
provider.should be_exists
- Puppet::Util::ADSI.stubs(:connect).returns nil
+ Puppet::Util::Windows::ADSI.stubs(:connect).returns nil
provider.should_not be_exists
end
it 'should be able to delete a user' do
connection.expects(:Delete).with('user', 'testuser')
provider.delete
end
- it 'should delete the profile if managehome is set', :if => Puppet.features.microsoft_windows? do
+ it 'should delete the profile if managehome is set' do
resource[:managehome] = true
sid = 'S-A-B-C'
- Puppet::Util::Windows::Security.expects(:name_to_sid).with('testuser').returns(sid)
- Puppet::Util::ADSI::UserProfile.expects(:delete).with(sid)
+ Puppet::Util::Windows::SID.expects(:name_to_sid).with('testuser').returns(sid)
+ Puppet::Util::Windows::ADSI::UserProfile.expects(:delete).with(sid)
connection.expects(:Delete).with('user', 'testuser')
provider.delete
end
it "should commit the user when flushed" do
provider.user.expects(:commit)
provider.flush
end
- it "should return the user's SID as uid", :if => Puppet.features.microsoft_windows? do
- Puppet::Util::Windows::Security.expects(:name_to_sid).with('testuser').returns('S-1-5-21-1362942247-2130103807-3279964888-1111')
+ it "should return the user's SID as uid" do
+ Puppet::Util::Windows::SID.expects(:name_to_sid).with('testuser').returns('S-1-5-21-1362942247-2130103807-3279964888-1111')
provider.uid.should == 'S-1-5-21-1362942247-2130103807-3279964888-1111'
end
it "should fail when trying to manage the uid property" do
provider.expects(:fail).with { |msg| msg =~ /uid is read-only/ }
provider.send(:uid=, 500)
end
[:gid, :shell].each do |prop|
it "should fail when trying to manage the #{prop} property" do
provider.expects(:fail).with { |msg| msg =~ /No support for managing property #{prop}/ }
provider.send("#{prop}=", 'foo')
end
end
end
diff --git a/spec/unit/reports/store_spec.rb b/spec/unit/reports/store_spec.rb
index 7866571a1..7f94f7d1b 100755
--- a/spec/unit/reports/store_spec.rb
+++ b/spec/unit/reports/store_spec.rb
@@ -1,76 +1,60 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet/reports'
require 'time'
require 'pathname'
require 'tempfile'
require 'fileutils'
processor = Puppet::Reports.report(:store)
describe processor do
describe "#process" do
include PuppetSpec::Files
before :each do
Puppet[:reportdir] = File.join(tmpdir('reports'), 'reports')
@report = YAML.load_file(File.join(PuppetSpec::FIXTURE_DIR, 'yaml/report2.6.x.yaml')).extend processor
end
it "should create a report directory for the client if one doesn't exist" do
@report.process
File.should be_directory(File.join(Puppet[:reportdir], @report.host))
end
it "should write the report to the file in YAML" do
Time.stubs(:now).returns(Time.parse("2011-01-06 12:00:00 UTC"))
@report.process
File.read(File.join(Puppet[:reportdir], @report.host, "201101061200.yaml")).should == @report.to_yaml
end
- it "should write to the report directory in the correct sequence" do
- # By doing things in this sequence we should protect against race
- # conditions
- Time.stubs(:now).returns(Time.parse("2011-01-06 12:00:00 UTC"))
- writeseq = sequence("write")
- file = mock "file"
- Tempfile.expects(:new).in_sequence(writeseq).returns(file)
- file.expects(:chmod).in_sequence(writeseq).with(0640)
- file.expects(:print).with(@report.to_yaml).in_sequence(writeseq)
- file.expects(:close).in_sequence(writeseq)
- file.stubs(:path).returns(File.join(Dir.tmpdir, "foo123"))
- FileUtils.expects(:mv).in_sequence(writeseq).with(File.join(Dir.tmpdir, "foo123"), File.join(Puppet[:reportdir], @report.host, "201101061200.yaml"))
- @report.process
- end
-
it "rejects invalid hostnames" do
@report.host = ".."
Puppet::FileSystem.expects(:exist?).never
- Tempfile.expects(:new).never
expect { @report.process }.to raise_error(ArgumentError, /Invalid node/)
end
end
describe "::destroy" do
it "rejects invalid hostnames" do
Puppet::FileSystem.expects(:unlink).never
expect { processor.destroy("..") }.to raise_error(ArgumentError, /Invalid node/)
end
end
describe "::validate_host" do
['..', 'hello/', '/hello', 'he/llo', 'hello/..', '.'].each do |node|
it "rejects #{node.inspect}" do
expect { processor.validate_host(node) }.to raise_error(ArgumentError, /Invalid node/)
end
end
['.hello', 'hello.', '..hi', 'hi..'].each do |node|
it "accepts #{node.inspect}" do
processor.validate_host(node)
end
end
end
end
diff --git a/spec/unit/resource/catalog_spec.rb b/spec/unit/resource/catalog_spec.rb
index 741f60ef3..80933dc41 100755
--- a/spec/unit/resource/catalog_spec.rb
+++ b/spec/unit/resource/catalog_spec.rb
@@ -1,889 +1,884 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet_spec/compiler'
require 'matchers/json'
describe Puppet::Resource::Catalog, "when compiling" do
include JSONMatchers
include PuppetSpec::Files
before do
@basepath = make_absolute("/somepath")
# stub this to not try to create state.yaml
Puppet::Util::Storage.stubs(:store)
end
# audit only resources are unmanaged
# as are resources without properties with should values
it "should write its managed resources' types, namevars" do
catalog = Puppet::Resource::Catalog.new("host")
resourcefile = tmpfile('resourcefile')
Puppet[:resourcefile] = resourcefile
res = Puppet::Type.type('file').new(:title => File.expand_path('/tmp/sam'), :ensure => 'present')
res.file = 'site.pp'
res.line = 21
res2 = Puppet::Type.type('exec').new(:title => 'bob', :command => "#{File.expand_path('/bin/rm')} -rf /")
res2.file = File.expand_path('/modules/bob/manifests/bob.pp')
res2.line = 42
res3 = Puppet::Type.type('file').new(:title => File.expand_path('/tmp/susan'), :audit => 'all')
res3.file = 'site.pp'
res3.line = 63
res4 = Puppet::Type.type('file').new(:title => File.expand_path('/tmp/lilly'))
res4.file = 'site.pp'
res4.line = 84
comp_res = Puppet::Type.type('component').new(:title => 'Class[Main]')
catalog.add_resource(res, res2, res3, res4, comp_res)
catalog.write_resource_file
File.readlines(resourcefile).map(&:chomp).should =~ [
"file[#{File.expand_path('/tmp/sam')}]",
"exec[#{File.expand_path('/bin/rm')} -rf /]"
]
end
it "should log an error if unable to write to the resource file" do
catalog = Puppet::Resource::Catalog.new("host")
Puppet[:resourcefile] = File.expand_path('/not/writable/file')
catalog.add_resource(Puppet::Type.type('file').new(:title => File.expand_path('/tmp/foo')))
catalog.write_resource_file
@logs.size.should == 1
@logs.first.message.should =~ /Could not create resource file/
@logs.first.level.should == :err
end
it "should be able to write its list of classes to the class file" do
@catalog = Puppet::Resource::Catalog.new("host")
@catalog.add_class "foo", "bar"
Puppet[:classfile] = File.expand_path("/class/file")
fh = mock 'filehandle'
File.expects(:open).with(Puppet[:classfile], "w").yields fh
fh.expects(:puts).with "foo\nbar"
@catalog.write_class_file
end
it "should have a client_version attribute" do
@catalog = Puppet::Resource::Catalog.new("host")
@catalog.client_version = 5
@catalog.client_version.should == 5
end
it "should have a server_version attribute" do
@catalog = Puppet::Resource::Catalog.new("host")
@catalog.server_version = 5
@catalog.server_version.should == 5
end
describe "when compiling" do
it "should accept tags" do
config = Puppet::Resource::Catalog.new("mynode")
config.tag("one")
config.should be_tagged("one")
end
it "should accept multiple tags at once" do
config = Puppet::Resource::Catalog.new("mynode")
config.tag("one", "two")
config.should be_tagged("one")
config.should be_tagged("two")
end
it "should convert all tags to strings" do
config = Puppet::Resource::Catalog.new("mynode")
config.tag("one", :two)
config.should be_tagged("one")
config.should be_tagged("two")
end
it "should tag with both the qualified name and the split name" do
config = Puppet::Resource::Catalog.new("mynode")
config.tag("one::two")
config.should be_tagged("one")
config.should be_tagged("one::two")
end
it "should accept classes" do
config = Puppet::Resource::Catalog.new("mynode")
config.add_class("one")
config.classes.should == %w{one}
config.add_class("two", "three")
config.classes.should == %w{one two three}
end
it "should tag itself with passed class names" do
config = Puppet::Resource::Catalog.new("mynode")
config.add_class("one")
config.should be_tagged("one")
end
end
describe "when converting to a RAL catalog" do
before do
@original = Puppet::Resource::Catalog.new("mynode")
@original.tag(*%w{one two three})
@original.add_class *%w{four five six}
@top = Puppet::Resource.new :class, 'top'
@topobject = Puppet::Resource.new :file, @basepath+'/topobject'
@middle = Puppet::Resource.new :class, 'middle'
@middleobject = Puppet::Resource.new :file, @basepath+'/middleobject'
@bottom = Puppet::Resource.new :class, 'bottom'
@bottomobject = Puppet::Resource.new :file, @basepath+'/bottomobject'
@resources = [@top, @topobject, @middle, @middleobject, @bottom, @bottomobject]
@original.add_resource(*@resources)
@original.add_edge(@top, @topobject)
@original.add_edge(@top, @middle)
@original.add_edge(@middle, @middleobject)
@original.add_edge(@middle, @bottom)
@original.add_edge(@bottom, @bottomobject)
@catalog = @original.to_ral
end
it "should add all resources as RAL instances" do
@resources.each do |resource|
# Warning: a failure here will result in "global resource iteration is
# deprecated" being raised, because the rspec rendering to get the
# result tries to call `each` on the resource, and that raises.
@catalog.resource(resource.ref).must be_a_kind_of(Puppet::Type)
end
end
it "should copy the tag list to the new catalog" do
@catalog.tags.sort.should == @original.tags.sort
end
it "should copy the class list to the new catalog" do
@catalog.classes.should == @original.classes
end
it "should duplicate the original edges" do
@original.edges.each do |edge|
@catalog.edge?(@catalog.resource(edge.source.ref), @catalog.resource(edge.target.ref)).should be_true
end
end
it "should set itself as the catalog for each converted resource" do
@catalog.vertices.each { |v| v.catalog.object_id.should equal(@catalog.object_id) }
end
# This tests #931.
it "should not lose track of resources whose names vary" do
changer = Puppet::Resource.new :file, @basepath+'/test/', :parameters => {:ensure => :directory}
config = Puppet::Resource::Catalog.new('test')
config.add_resource(changer)
config.add_resource(@top)
config.add_edge(@top, changer)
catalog = config.to_ral
catalog.resource("File[#{@basepath}/test/]").must equal(catalog.resource("File[#{@basepath}/test]"))
end
after do
# Remove all resource instances.
@catalog.clear(true)
end
end
describe "when filtering" do
before :each do
@original = Puppet::Resource::Catalog.new("mynode")
@original.tag(*%w{one two three})
@original.add_class *%w{four five six}
@r1 = stub_everything 'r1', :ref => "File[/a]"
@r1.stubs(:respond_to?).with(:ref).returns(true)
@r1.stubs(:copy_as_resource).returns(@r1)
@r1.stubs(:is_a?).with(Puppet::Resource).returns(true)
@r2 = stub_everything 'r2', :ref => "File[/b]"
@r2.stubs(:respond_to?).with(:ref).returns(true)
@r2.stubs(:copy_as_resource).returns(@r2)
@r2.stubs(:is_a?).with(Puppet::Resource).returns(true)
@resources = [@r1,@r2]
@original.add_resource(@r1,@r2)
end
it "should transform the catalog to a resource catalog" do
@original.expects(:to_catalog).with { |h,b| h == :to_resource }
@original.filter
end
it "should scan each catalog resource in turn and apply filtering block" do
@resources.each { |r| r.expects(:test?) }
@original.filter do |r|
r.test?
end
end
it "should filter out resources which produce true when the filter block is evaluated" do
@original.filter do |r|
r == @r1
end.resource("File[/a]").should be_nil
end
it "should not consider edges against resources that were filtered out" do
@original.add_edge(@r1,@r2)
@original.filter do |r|
r == @r1
end.edge?(@r1,@r2).should_not be
end
end
describe "when functioning as a resource container" do
before do
@catalog = Puppet::Resource::Catalog.new("host")
@one = Puppet::Type.type(:notify).new :name => "one"
@two = Puppet::Type.type(:notify).new :name => "two"
@dupe = Puppet::Type.type(:notify).new :name => "one"
end
it "should provide a method to add one or more resources" do
@catalog.add_resource @one, @two
@catalog.resource(@one.ref).must equal(@one)
@catalog.resource(@two.ref).must equal(@two)
end
it "should add resources to the relationship graph if it exists" do
relgraph = @catalog.relationship_graph
@catalog.add_resource @one
relgraph.should be_vertex(@one)
end
it "should set itself as the resource's catalog if it is not a relationship graph" do
@one.expects(:catalog=).with(@catalog)
@catalog.add_resource @one
end
it "should make all vertices available by resource reference" do
@catalog.add_resource(@one)
@catalog.resource(@one.ref).must equal(@one)
@catalog.vertices.find { |r| r.ref == @one.ref }.must equal(@one)
end
it "tracks the container through edges" do
@catalog.add_resource(@two)
@catalog.add_resource(@one)
@catalog.add_edge(@one, @two)
@catalog.container_of(@two).must == @one
end
it "a resource without a container is contained in nil" do
@catalog.add_resource(@one)
@catalog.container_of(@one).must be_nil
end
it "should canonize how resources are referred to during retrieval when both type and title are provided" do
@catalog.add_resource(@one)
@catalog.resource("notify", "one").must equal(@one)
end
it "should canonize how resources are referred to during retrieval when just the title is provided" do
@catalog.add_resource(@one)
@catalog.resource("notify[one]", nil).must equal(@one)
end
describe 'with a duplicate resource' do
def resource_at(type, name, file, line)
resource = Puppet::Resource.new(type, name)
resource.file = file
resource.line = line
Puppet::Type.type(type).new(resource)
end
let(:orig) { resource_at(:notify, 'duplicate-title', '/path/to/orig/file', 42) }
let(:dupe) { resource_at(:notify, 'duplicate-title', '/path/to/dupe/file', 314) }
it "should print the locations of the original duplicated resource" do
@catalog.add_resource(orig)
expect { @catalog.add_resource(dupe) }.to raise_error { |error|
error.should be_a Puppet::Resource::Catalog::DuplicateResourceError
error.message.should match %r[Duplicate declaration: Notify\[duplicate-title\] is already declared]
error.message.should match %r[in file /path/to/orig/file:42]
error.message.should match %r[cannot redeclare]
error.message.should match %r[at /path/to/dupe/file:314]
}
end
end
it "should remove all resources when asked" do
@catalog.add_resource @one
@catalog.add_resource @two
@one.expects :remove
@two.expects :remove
@catalog.clear(true)
end
it "should support a mechanism for finishing resources" do
@one.expects :finish
@two.expects :finish
@catalog.add_resource @one
@catalog.add_resource @two
@catalog.finalize
end
it "should make default resources when finalizing" do
@catalog.expects(:make_default_resources)
@catalog.finalize
end
it "should add default resources to the catalog upon creation" do
@catalog.make_default_resources
@catalog.resource(:schedule, "daily").should_not be_nil
end
it "should optionally support an initialization block and should finalize after such blocks" do
@one.expects :finish
@two.expects :finish
config = Puppet::Resource::Catalog.new("host") do |conf|
conf.add_resource @one
conf.add_resource @two
end
end
it "should inform the resource that it is the resource's catalog" do
@one.expects(:catalog=).with(@catalog)
@catalog.add_resource @one
end
it "should be able to find resources by reference" do
@catalog.add_resource @one
@catalog.resource(@one.ref).must equal(@one)
end
it "should be able to find resources by reference or by type/title tuple" do
@catalog.add_resource @one
@catalog.resource("notify", "one").must equal(@one)
end
it "should have a mechanism for removing resources" do
@catalog.add_resource(@one)
@catalog.resource(@one.ref).must be
@catalog.vertex?(@one).must be_true
@catalog.remove_resource(@one)
@catalog.resource(@one.ref).must be_nil
@catalog.vertex?(@one).must be_false
end
it "should have a method for creating aliases for resources" do
@catalog.add_resource @one
@catalog.alias(@one, "other")
@catalog.resource("notify", "other").must equal(@one)
end
it "should ignore conflicting aliases that point to the aliased resource" do
@catalog.alias(@one, "other")
lambda { @catalog.alias(@one, "other") }.should_not raise_error
end
it "should create aliases for isomorphic resources whose names do not match their titles" do
resource = Puppet::Type::File.new(:title => "testing", :path => @basepath+"/something")
@catalog.add_resource(resource)
@catalog.resource(:file, @basepath+"/something").must equal(resource)
end
it "should not create aliases for non-isomorphic resources whose names do not match their titles" do
resource = Puppet::Type.type(:exec).new(:title => "testing", :command => "echo", :path => %w{/bin /usr/bin /usr/local/bin})
@catalog.add_resource(resource)
# Yay, I've already got a 'should' method
@catalog.resource(:exec, "echo").object_id.should == nil.object_id
end
# This test is the same as the previous, but the behaviour should be explicit.
it "should alias using the class name from the resource reference, not the resource class name" do
@catalog.add_resource @one
@catalog.alias(@one, "other")
@catalog.resource("notify", "other").must equal(@one)
end
it "should fail to add an alias if the aliased name already exists" do
@catalog.add_resource @one
proc { @catalog.alias @two, "one" }.should raise_error(ArgumentError)
end
it "should not fail when a resource has duplicate aliases created" do
@catalog.add_resource @one
proc { @catalog.alias @one, "one" }.should_not raise_error
end
it "should not create aliases that point back to the resource" do
@catalog.alias(@one, "one")
@catalog.resource(:notify, "one").must be_nil
end
it "should be able to look resources up by their aliases" do
@catalog.add_resource @one
@catalog.alias @one, "two"
@catalog.resource(:notify, "two").must equal(@one)
end
it "should remove resource aliases when the target resource is removed" do
@catalog.add_resource @one
@catalog.alias(@one, "other")
@one.expects :remove
@catalog.remove_resource(@one)
@catalog.resource("notify", "other").must be_nil
end
it "should add an alias for the namevar when the title and name differ on isomorphic resource types" do
resource = Puppet::Type.type(:file).new :path => @basepath+"/something", :title => "other", :content => "blah"
resource.expects(:isomorphic?).returns(true)
@catalog.add_resource(resource)
@catalog.resource(:file, "other").must equal(resource)
@catalog.resource(:file, @basepath+"/something").ref.should == resource.ref
end
it "should not add an alias for the namevar when the title and name differ on non-isomorphic resource types" do
resource = Puppet::Type.type(:file).new :path => @basepath+"/something", :title => "other", :content => "blah"
resource.expects(:isomorphic?).returns(false)
@catalog.add_resource(resource)
@catalog.resource(:file, resource.title).must equal(resource)
# We can't use .should here, because the resources respond to that method.
raise "Aliased non-isomorphic resource" if @catalog.resource(:file, resource.name)
end
it "should provide a method to create additional resources that also registers the resource" do
args = {:name => "/yay", :ensure => :file}
resource = stub 'file', :ref => "File[/yay]", :catalog= => @catalog, :title => "/yay", :[] => "/yay"
Puppet::Type.type(:file).expects(:new).with(args).returns(resource)
@catalog.create_resource :file, args
@catalog.resource("File[/yay]").must equal(resource)
end
describe "when adding resources with multiple namevars" do
before :each do
Puppet::Type.newtype(:multiple) do
newparam(:color, :namevar => true)
newparam(:designation, :namevar => true)
def self.title_patterns
[ [
/^(\w+) (\w+)$/,
[
[:color, lambda{|x| x}],
[:designation, lambda{|x| x}]
]
] ]
end
end
end
it "should add an alias using the uniqueness key" do
@resource = Puppet::Type.type(:multiple).new(:title => "some resource", :color => "red", :designation => "5")
@catalog.add_resource(@resource)
@catalog.resource(:multiple, "some resource").must == @resource
@catalog.resource("Multiple[some resource]").must == @resource
@catalog.resource("Multiple[red 5]").must == @resource
end
it "should conflict with a resource with the same uniqueness key" do
@resource = Puppet::Type.type(:multiple).new(:title => "some resource", :color => "red", :designation => "5")
@other = Puppet::Type.type(:multiple).new(:title => "another resource", :color => "red", :designation => "5")
@catalog.add_resource(@resource)
expect { @catalog.add_resource(@other) }.to raise_error(ArgumentError, /Cannot alias Multiple\[another resource\] to \["red", "5"\].*resource \["Multiple", "red", "5"\] already declared/)
end
it "should conflict when its uniqueness key matches another resource's title" do
path = make_absolute("/tmp/foo")
@resource = Puppet::Type.type(:file).new(:title => path)
@other = Puppet::Type.type(:file).new(:title => "another file", :path => path)
@catalog.add_resource(@resource)
expect { @catalog.add_resource(@other) }.to raise_error(ArgumentError, /Cannot alias File\[another file\] to \["#{Regexp.escape(path)}"\].*resource \["File", "#{Regexp.escape(path)}"\] already declared/)
end
it "should conflict when its uniqueness key matches the uniqueness key derived from another resource's title" do
@resource = Puppet::Type.type(:multiple).new(:title => "red leader")
@other = Puppet::Type.type(:multiple).new(:title => "another resource", :color => "red", :designation => "leader")
@catalog.add_resource(@resource)
expect { @catalog.add_resource(@other) }.to raise_error(ArgumentError, /Cannot alias Multiple\[another resource\] to \["red", "leader"\].*resource \["Multiple", "red", "leader"\] already declared/)
end
end
end
describe "when applying" do
before :each do
@catalog = Puppet::Resource::Catalog.new("host")
@transaction = Puppet::Transaction.new(@catalog, nil, Puppet::Graph::RandomPrioritizer.new)
Puppet::Transaction.stubs(:new).returns(@transaction)
@transaction.stubs(:evaluate)
@transaction.stubs(:for_network_device=)
Puppet.settings.stubs(:use)
end
it "should create and evaluate a transaction" do
@transaction.expects(:evaluate)
@catalog.apply
end
it "should return the transaction" do
@catalog.apply.should equal(@transaction)
end
it "should yield the transaction if a block is provided" do
@catalog.apply do |trans|
trans.should equal(@transaction)
end
end
it "should default to being a host catalog" do
@catalog.host_config.should be_true
end
it "should be able to be set to a non-host_config" do
@catalog.host_config = false
@catalog.host_config.should be_false
end
it "should pass supplied tags on to the transaction" do
@transaction.expects(:tags=).with(%w{one two})
@catalog.apply(:tags => %w{one two})
end
it "should set ignoreschedules on the transaction if specified in apply()" do
@transaction.expects(:ignoreschedules=).with(true)
@catalog.apply(:ignoreschedules => true)
end
describe "host catalogs" do
# super() doesn't work in the setup method for some reason
before do
@catalog.host_config = true
Puppet::Util::Storage.stubs(:store)
end
it "should initialize the state database before applying a catalog" do
Puppet::Util::Storage.expects(:load)
# Short-circuit the apply, so we know we're loading before the transaction
Puppet::Transaction.expects(:new).raises ArgumentError
proc { @catalog.apply }.should raise_error(ArgumentError)
end
it "should sync the state database after applying" do
Puppet::Util::Storage.expects(:store)
@transaction.stubs :any_failed? => false
@catalog.apply
end
- after { Puppet.settings.clear }
end
describe "non-host catalogs" do
before do
@catalog.host_config = false
end
it "should never send reports" do
Puppet[:report] = true
Puppet[:summarize] = true
@catalog.apply
end
it "should never modify the state database" do
Puppet::Util::Storage.expects(:load).never
Puppet::Util::Storage.expects(:store).never
@catalog.apply
end
- after { Puppet.settings.clear }
end
end
describe "when creating a relationship graph" do
before do
@catalog = Puppet::Resource::Catalog.new("host")
end
it "should get removed when the catalog is cleaned up" do
@catalog.relationship_graph.expects(:clear)
@catalog.clear
@catalog.instance_variable_get("@relationship_graph").should be_nil
end
end
describe "when writing dot files" do
before do
@catalog = Puppet::Resource::Catalog.new("host")
@name = :test
@file = File.join(Puppet[:graphdir], @name.to_s + ".dot")
end
it "should only write when it is a host catalog" do
File.expects(:open).with(@file).never
@catalog.host_config = false
Puppet[:graph] = true
@catalog.write_graph(@name)
end
- after do
- Puppet.settings.clear
- end
end
describe "when indirecting" do
before do
@real_indirection = Puppet::Resource::Catalog.indirection
@indirection = stub 'indirection', :name => :catalog
end
it "should use the value of the 'catalog_terminus' setting to determine its terminus class" do
# Puppet only checks the terminus setting the first time you ask
# so this returns the object to the clean state
# at the expense of making this test less pure
Puppet::Resource::Catalog.indirection.reset_terminus_class
Puppet.settings[:catalog_terminus] = "rest"
Puppet::Resource::Catalog.indirection.terminus_class.should == :rest
end
it "should allow the terminus class to be set manually" do
Puppet::Resource::Catalog.indirection.terminus_class = :rest
Puppet::Resource::Catalog.indirection.terminus_class.should == :rest
end
after do
@real_indirection.reset_terminus_class
end
end
describe "when converting to yaml" do
before do
@catalog = Puppet::Resource::Catalog.new("me")
@catalog.add_edge("one", "two")
end
it "should be able to be dumped to yaml" do
YAML.dump(@catalog).should be_instance_of(String)
end
end
describe "when converting from yaml" do
before do
@catalog = Puppet::Resource::Catalog.new("me")
@catalog.add_edge("one", "two")
text = YAML.dump(@catalog)
@newcatalog = YAML.load(text)
end
it "should get converted back to a catalog" do
@newcatalog.should be_instance_of(Puppet::Resource::Catalog)
end
it "should have all vertices" do
@newcatalog.vertex?("one").should be_true
@newcatalog.vertex?("two").should be_true
end
it "should have all edges" do
@newcatalog.edge?("one", "two").should be_true
end
end
end
describe Puppet::Resource::Catalog, "when converting a resource catalog to pson" do
include JSONMatchers
include PuppetSpec::Compiler
it "should validate an empty catalog against the schema" do
empty_catalog = compile_to_catalog("")
expect(empty_catalog.to_pson).to validate_against('api/schemas/catalog.json')
end
it "should validate a noop catalog against the schema" do
noop_catalog = compile_to_catalog("create_resources('file', {})")
expect(noop_catalog.to_pson).to validate_against('api/schemas/catalog.json')
end
it "should validate a single resource catalog against the schema" do
catalog = compile_to_catalog("create_resources('file', {'/etc/foo'=>{'ensure'=>'present'}})")
expect(catalog.to_pson).to validate_against('api/schemas/catalog.json')
end
it "should validate a virtual resource catalog against the schema" do
catalog = compile_to_catalog("create_resources('@file', {'/etc/foo'=>{'ensure'=>'present'}})\nrealize(File['/etc/foo'])")
expect(catalog.to_pson).to validate_against('api/schemas/catalog.json')
end
it "should validate a single exported resource catalog against the schema" do
catalog = compile_to_catalog("create_resources('@@file', {'/etc/foo'=>{'ensure'=>'present'}})")
expect(catalog.to_pson).to validate_against('api/schemas/catalog.json')
end
it "should validate a two resource catalog against the schema" do
catalog = compile_to_catalog("create_resources('notify', {'foo'=>{'message'=>'one'}, 'bar'=>{'message'=>'two'}})")
expect(catalog.to_pson).to validate_against('api/schemas/catalog.json')
end
it "should validate a two parameter class catalog against the schema" do
catalog = compile_to_catalog(<<-MANIFEST)
class multi_param_class ($one, $two) {
notify {'foo':
message => "One is $one, two is $two",
}
}
class {'multi_param_class':
one => 'hello',
two => 'world',
}
MANIFEST
expect(catalog.to_pson).to validate_against('api/schemas/catalog.json')
end
end
describe Puppet::Resource::Catalog, "when converting to pson" do
before do
@catalog = Puppet::Resource::Catalog.new("myhost")
end
def pson_output_should
@catalog.class.expects(:pson_create).with { |hash| yield hash }.returns(:something)
end
# LAK:NOTE For all of these tests, we convert back to the resource so we can
# trap the actual data structure then.
it "should set its document_type to 'Catalog'" do
pson_output_should { |hash| hash['document_type'] == "Catalog" }
PSON.parse @catalog.to_pson
end
it "should set its data as a hash" do
pson_output_should { |hash| hash['data'].is_a?(Hash) }
PSON.parse @catalog.to_pson
end
[:name, :version, :classes].each do |param|
it "should set its #{param} to the #{param} of the resource" do
@catalog.send(param.to_s + "=", "testing") unless @catalog.send(param)
pson_output_should { |hash| hash['data'][param.to_s].should == @catalog.send(param) }
PSON.parse @catalog.to_pson
end
end
it "should convert its resources to a PSON-encoded array and store it as the 'resources' data" do
one = stub 'one', :to_pson_data_hash => "one_resource", :ref => "Foo[one]"
two = stub 'two', :to_pson_data_hash => "two_resource", :ref => "Foo[two]"
@catalog.add_resource(one)
@catalog.add_resource(two)
# TODO this should really guarantee sort order
PSON.parse(@catalog.to_pson,:create_additions => false)['data']['resources'].sort.should == ["one_resource", "two_resource"].sort
end
it "should convert its edges to a PSON-encoded array and store it as the 'edges' data" do
one = stub 'one', :to_pson_data_hash => "one_resource", :ref => 'Foo[one]'
two = stub 'two', :to_pson_data_hash => "two_resource", :ref => 'Foo[two]'
three = stub 'three', :to_pson_data_hash => "three_resource", :ref => 'Foo[three]'
@catalog.add_edge(one, two)
@catalog.add_edge(two, three)
@catalog.edges_between(one, two )[0].expects(:to_pson_data_hash).returns "one_two_pson"
@catalog.edges_between(two, three)[0].expects(:to_pson_data_hash).returns "two_three_pson"
PSON.parse(@catalog.to_pson,:create_additions => false)['data']['edges'].sort.should == %w{one_two_pson two_three_pson}.sort
end
end
describe Puppet::Resource::Catalog, "when converting from pson" do
before do
@data = {
'name' => "myhost"
}
@pson = {
'document_type' => 'Puppet::Resource::Catalog',
'data' => @data,
'metadata' => {}
}
end
it "should create it with the provided name" do
@data['version'] = 50
@data['tags'] = %w{one two}
@data['classes'] = %w{one two}
@data['edges'] = [Puppet::Relationship.new("File[/foo]", "File[/bar]",
:event => "one",
:callback => "refresh").to_data_hash]
@data['resources'] = [Puppet::Resource.new(:file, "/foo").to_data_hash,
Puppet::Resource.new(:file, "/bar").to_data_hash]
catalog = PSON.parse @pson.to_pson
expect(catalog.name).to eq('myhost')
expect(catalog.version).to eq(@data['version'])
expect(catalog).to be_tagged("one")
expect(catalog).to be_tagged("two")
expect(catalog.classes).to eq(@data['classes'])
expect(catalog.resources.collect(&:ref)).to eq(["File[/foo]", "File[/bar]"])
expect(catalog.edges.collect(&:event)).to eq(["one"])
expect(catalog.edges[0].source).to eq(catalog.resource(:file, "/foo"))
expect(catalog.edges[0].target).to eq(catalog.resource(:file, "/bar"))
end
it "should fail if the source resource cannot be found" do
@data['edges'] = [Puppet::Relationship.new("File[/missing]", "File[/bar]").to_data_hash]
@data['resources'] = [Puppet::Resource.new(:file, "/bar").to_data_hash]
expect { PSON.parse @pson.to_pson }.to raise_error(ArgumentError, /Could not find relationship source/)
end
it "should fail if the target resource cannot be found" do
@data['edges'] = [Puppet::Relationship.new("File[/bar]", "File[/missing]").to_data_hash]
@data['resources'] = [Puppet::Resource.new(:file, "/bar").to_data_hash]
expect { PSON.parse @pson.to_pson }.to raise_error(ArgumentError, /Could not find relationship target/)
end
end
diff --git a/spec/unit/resource_spec.rb b/spec/unit/resource_spec.rb
index 73c27c1fe..e954a2179 100755
--- a/spec/unit/resource_spec.rb
+++ b/spec/unit/resource_spec.rb
@@ -1,979 +1,979 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet/resource'
describe Puppet::Resource do
include PuppetSpec::Files
let(:basepath) { make_absolute("/somepath") }
let(:environment) { Puppet::Node::Environment.create(:testing, []) }
[:catalog, :file, :line].each do |attr|
it "should have an #{attr} attribute" do
resource = Puppet::Resource.new("file", "/my/file")
resource.should respond_to(attr)
resource.should respond_to(attr.to_s + "=")
end
end
it "should have a :title attribute" do
Puppet::Resource.new(:user, "foo").title.should == "foo"
end
it "should require the type and title" do
expect { Puppet::Resource.new }.to raise_error(ArgumentError)
end
it "should canonize types to capitalized strings" do
Puppet::Resource.new(:user, "foo").type.should == "User"
end
it "should canonize qualified types so all strings are capitalized" do
Puppet::Resource.new("foo::bar", "foo").type.should == "Foo::Bar"
end
it "should tag itself with its type" do
Puppet::Resource.new("file", "/f").should be_tagged("file")
end
it "should tag itself with its title if the title is a valid tag" do
Puppet::Resource.new("user", "bar").should be_tagged("bar")
end
it "should not tag itself with its title if the title is a not valid tag" do
Puppet::Resource.new("file", "/bar").should_not be_tagged("/bar")
end
it "should allow setting of attributes" do
Puppet::Resource.new("file", "/bar", :file => "/foo").file.should == "/foo"
Puppet::Resource.new("file", "/bar", :exported => true).should be_exported
end
it "should set its type to 'Class' and its title to the passed title if the passed type is :component and the title has no square brackets in it" do
ref = Puppet::Resource.new(:component, "foo")
ref.type.should == "Class"
ref.title.should == "Foo"
end
it "should interpret the title as a reference and assign appropriately if the type is :component and the title contains square brackets" do
ref = Puppet::Resource.new(:component, "foo::bar[yay]")
ref.type.should == "Foo::Bar"
ref.title.should == "yay"
end
it "should set the type to 'Class' if it is nil and the title contains no square brackets" do
ref = Puppet::Resource.new(nil, "yay")
ref.type.should == "Class"
ref.title.should == "Yay"
end
it "should interpret the title as a reference and assign appropriately if the type is nil and the title contains square brackets" do
ref = Puppet::Resource.new(nil, "foo::bar[yay]")
ref.type.should == "Foo::Bar"
ref.title.should == "yay"
end
it "should interpret the title as a reference and assign appropriately if the type is nil and the title contains nested square brackets" do
ref = Puppet::Resource.new(nil, "foo::bar[baz[yay]]")
ref.type.should == "Foo::Bar"
ref.title.should =="baz[yay]"
end
it "should interpret the type as a reference and assign appropriately if the title is nil and the type contains square brackets" do
ref = Puppet::Resource.new("foo::bar[baz]")
ref.type.should == "Foo::Bar"
ref.title.should =="baz"
end
it "should be able to extract its information from a Puppet::Type instance" do
ral = Puppet::Type.type(:file).new :path => basepath+"/foo"
ref = Puppet::Resource.new(ral)
ref.type.should == "File"
ref.title.should == basepath+"/foo"
end
it "should fail if the title is nil and the type is not a valid resource reference string" do
expect { Puppet::Resource.new("resource-spec-foo") }.to raise_error(ArgumentError)
end
it 'should fail if strict is set and type does not exist' do
expect { Puppet::Resource.new('resource-spec-foo', 'title', {:strict=>true}) }.to raise_error(ArgumentError, 'Invalid resource type resource-spec-foo')
end
it 'should fail if strict is set and class does not exist' do
expect { Puppet::Resource.new('Class', 'resource-spec-foo', {:strict=>true}) }.to raise_error(ArgumentError, 'Could not find declared class resource-spec-foo')
end
it "should fail if the title is a hash and the type is not a valid resource reference string" do
expect { Puppet::Resource.new({:type => "resource-spec-foo", :title => "bar"}) }.
to raise_error ArgumentError, /Puppet::Resource.new does not take a hash/
end
it "should be taggable" do
Puppet::Resource.ancestors.should be_include(Puppet::Util::Tagging)
end
it "should have an 'exported' attribute" do
resource = Puppet::Resource.new("file", "/f")
resource.exported = true
resource.exported.should == true
resource.should be_exported
end
describe "and munging its type and title" do
describe "when modeling a builtin resource" do
it "should be able to find the resource type" do
Puppet::Resource.new("file", "/my/file").resource_type.should equal(Puppet::Type.type(:file))
end
it "should set its type to the capitalized type name" do
Puppet::Resource.new("file", "/my/file").type.should == "File"
end
end
describe "when modeling a defined resource" do
describe "that exists" do
before do
@type = Puppet::Resource::Type.new(:definition, "foo::bar")
environment.known_resource_types.add @type
end
it "should set its type to the capitalized type name" do
Puppet::Resource.new("foo::bar", "/my/file", :environment => environment).type.should == "Foo::Bar"
end
it "should be able to find the resource type" do
Puppet::Resource.new("foo::bar", "/my/file", :environment => environment).resource_type.should equal(@type)
end
it "should set its title to the provided title" do
Puppet::Resource.new("foo::bar", "/my/file", :environment => environment).title.should == "/my/file"
end
end
describe "that does not exist" do
it "should set its resource type to the capitalized resource type name" do
Puppet::Resource.new("foo::bar", "/my/file").type.should == "Foo::Bar"
end
end
end
describe "when modeling a node" do
# Life's easier with nodes, because they can't be qualified.
it "should set its type to 'Node' and its title to the provided title" do
node = Puppet::Resource.new("node", "foo")
node.type.should == "Node"
node.title.should == "foo"
end
end
describe "when modeling a class" do
it "should set its type to 'Class'" do
Puppet::Resource.new("class", "foo").type.should == "Class"
end
describe "that exists" do
before do
@type = Puppet::Resource::Type.new(:hostclass, "foo::bar")
environment.known_resource_types.add @type
end
it "should set its title to the capitalized, fully qualified resource type" do
Puppet::Resource.new("class", "foo::bar", :environment => environment).title.should == "Foo::Bar"
end
it "should be able to find the resource type" do
Puppet::Resource.new("class", "foo::bar", :environment => environment).resource_type.should equal(@type)
end
end
describe "that does not exist" do
it "should set its type to 'Class' and its title to the capitalized provided name" do
klass = Puppet::Resource.new("class", "foo::bar")
klass.type.should == "Class"
klass.title.should == "Foo::Bar"
end
end
describe "and its name is set to the empty string" do
it "should set its title to :main" do
Puppet::Resource.new("class", "").title.should == :main
end
describe "and a class exists whose name is the empty string" do # this was a bit tough to track down
it "should set its title to :main" do
@type = Puppet::Resource::Type.new(:hostclass, "")
environment.known_resource_types.add @type
Puppet::Resource.new("class", "", :environment => environment).title.should == :main
end
end
end
describe "and its name is set to :main" do
it "should set its title to :main" do
Puppet::Resource.new("class", :main).title.should == :main
end
describe "and a class exists whose name is the empty string" do # this was a bit tough to track down
it "should set its title to :main" do
@type = Puppet::Resource::Type.new(:hostclass, "")
environment.known_resource_types.add @type
Puppet::Resource.new("class", :main, :environment => environment).title.should == :main
end
end
end
end
end
it "should return nil when looking up resource types that don't exist" do
Puppet::Resource.new("foobar", "bar").resource_type.should be_nil
end
it "should not fail when an invalid parameter is used and strict mode is disabled" do
type = Puppet::Resource::Type.new(:definition, "foobar")
environment.known_resource_types.add type
resource = Puppet::Resource.new("foobar", "/my/file", :environment => environment)
resource[:yay] = true
end
it "should be considered equivalent to another resource if their type and title match and no parameters are set" do
Puppet::Resource.new("file", "/f").should == Puppet::Resource.new("file", "/f")
end
it "should be considered equivalent to another resource if their type, title, and parameters are equal" do
Puppet::Resource.new("file", "/f", :parameters => {:foo => "bar"}).should == Puppet::Resource.new("file", "/f", :parameters => {:foo => "bar"})
end
it "should not be considered equivalent to another resource if their type and title match but parameters are different" do
Puppet::Resource.new("file", "/f", :parameters => {:fee => "baz"}).should_not == Puppet::Resource.new("file", "/f", :parameters => {:foo => "bar"})
end
it "should not be considered equivalent to a non-resource" do
Puppet::Resource.new("file", "/f").should_not == "foo"
end
it "should not be considered equivalent to another resource if their types do not match" do
Puppet::Resource.new("file", "/f").should_not == Puppet::Resource.new("exec", "/f")
end
it "should not be considered equivalent to another resource if their titles do not match" do
Puppet::Resource.new("file", "/foo").should_not == Puppet::Resource.new("file", "/f")
end
describe "when setting default parameters" do
let(:foo_node) { Puppet::Node.new('foo', :environment => environment) }
let(:compiler) { Puppet::Parser::Compiler.new(foo_node) }
let(:scope) { Puppet::Parser::Scope.new(compiler) }
def ast_string(value)
Puppet::Parser::AST::String.new({:value => value})
end
it "should fail when asked to set default values and it is not a parser resource" do
environment.known_resource_types.add(
Puppet::Resource::Type.new(:definition, "default_param", :arguments => {"a" => ast_string("default")})
)
resource = Puppet::Resource.new("default_param", "name", :environment => environment)
lambda { resource.set_default_parameters(scope) }.should raise_error(Puppet::DevError)
end
it "should evaluate and set any default values when no value is provided" do
environment.known_resource_types.add(
Puppet::Resource::Type.new(:definition, "default_param", :arguments => {"a" => ast_string("a_default_value")})
)
resource = Puppet::Parser::Resource.new("default_param", "name", :scope => scope)
resource.set_default_parameters(scope)
resource["a"].should == "a_default_value"
end
it "should skip attributes with no default value" do
environment.known_resource_types.add(
Puppet::Resource::Type.new(:definition, "no_default_param", :arguments => {"a" => ast_string("a_default_value")})
)
resource = Puppet::Parser::Resource.new("no_default_param", "name", :scope => scope)
lambda { resource.set_default_parameters(scope) }.should_not raise_error
end
it "should return the list of default parameters set" do
environment.known_resource_types.add(
Puppet::Resource::Type.new(:definition, "default_param", :arguments => {"a" => ast_string("a_default_value")})
)
resource = Puppet::Parser::Resource.new("default_param", "name", :scope => scope)
resource.set_default_parameters(scope).should == ["a"]
end
describe "when the resource type is :hostclass" do
let(:environment_name) { "testing env" }
let(:fact_values) { { :a => 1 } }
let(:port) { Puppet::Parser::AST::String.new(:value => '80') }
let(:apache) { Puppet::Resource::Type.new(:hostclass, 'apache', :arguments => { 'port' => port }) }
before do
environment.known_resource_types.add(apache)
scope.stubs(:host).returns('host')
scope.stubs(:environment).returns(environment)
scope.stubs(:facts).returns(Puppet::Node::Facts.new("facts", fact_values))
end
context "when no value is provided" do
before(:each) do
Puppet[:binder] = true
end
let(:resource) do
Puppet::Parser::Resource.new("class", "apache", :scope => scope)
end
it "should query the data_binding terminus using a namespaced key" do
Puppet::DataBinding.indirection.expects(:find).with(
'apache::port', all_of(has_key(:environment), has_key(:variables)))
resource.set_default_parameters(scope)
end
it "should use the value from the data_binding terminus" do
Puppet::DataBinding.indirection.expects(:find).returns('443')
resource.set_default_parameters(scope)
resource[:port].should == '443'
end
it "should use the default value if the data_binding terminus returns nil" do
Puppet::DataBinding.indirection.expects(:find).returns(nil)
resource.set_default_parameters(scope)
resource[:port].should == '80'
end
it "should fail with error message about data binding on a hiera failure" do
Puppet::DataBinding.indirection.expects(:find).raises(Puppet::DataBinding::LookupError, 'Forgettabotit')
expect {
resource.set_default_parameters(scope)
}.to raise_error(Puppet::Error, /Error from DataBinding 'hiera' while looking up 'apache::port':.*Forgettabotit/)
end
end
context "when a value is provided" do
let(:port_parameter) do
Puppet::Parser::Resource::Param.new(
{ :name => 'port', :value => '8080' }
)
end
let(:resource) do
Puppet::Parser::Resource.new("class", "apache", :scope => scope,
:parameters => [port_parameter])
end
it "should not query the data_binding terminus" do
Puppet::DataBinding.indirection.expects(:find).never
resource.set_default_parameters(scope)
end
it "should not query the injector" do
# enable the injector
Puppet[:binder] = true
compiler.injector.expects(:find).never
resource.set_default_parameters(scope)
end
it "should use the value provided" do
Puppet::DataBinding.indirection.expects(:find).never
resource.set_default_parameters(scope).should == []
resource[:port].should == '8080'
end
end
end
end
describe "when validating all required parameters are present" do
it "should be able to validate that all required parameters are present" do
environment.known_resource_types.add(
Puppet::Resource::Type.new(:definition, "required_param", :arguments => {"a" => nil})
)
lambda { Puppet::Resource.new("required_param", "name", :environment => environment).validate_complete }.should raise_error(Puppet::ParseError)
end
it "should not fail when all required parameters are present" do
environment.known_resource_types.add(
Puppet::Resource::Type.new(:definition, "no_required_param")
)
resource = Puppet::Resource.new("no_required_param", "name", :environment => environment)
resource["a"] = "meh"
lambda { resource.validate_complete }.should_not raise_error
end
it "should not validate against builtin types" do
lambda { Puppet::Resource.new("file", "/bar").validate_complete }.should_not raise_error
end
end
describe "when referring to a resource with name canonicalization" do
it "should canonicalize its own name" do
res = Puppet::Resource.new("file", "/path/")
res.uniqueness_key.should == ["/path"]
res.ref.should == "File[/path/]"
end
end
describe "when running in strict mode" do
it "should be strict" do
Puppet::Resource.new("file", "/path", :strict => true).should be_strict
end
it "should fail if invalid parameters are used" do
expect { Puppet::Resource.new("file", "/path", :strict => true, :parameters => {:nosuchparam => "bar"}) }.to raise_error
end
it "should fail if the resource type cannot be resolved" do
expect { Puppet::Resource.new("nosuchtype", "/path", :strict => true) }.to raise_error
end
end
describe "when managing parameters" do
before do
@resource = Puppet::Resource.new("file", "/my/file")
end
it "should correctly detect when provided parameters are not valid for builtin types" do
Puppet::Resource.new("file", "/my/file").should_not be_valid_parameter("foobar")
end
it "should correctly detect when provided parameters are valid for builtin types" do
Puppet::Resource.new("file", "/my/file").should be_valid_parameter("mode")
end
it "should correctly detect when provided parameters are not valid for defined resource types" do
type = Puppet::Resource::Type.new(:definition, "foobar")
environment.known_resource_types.add type
Puppet::Resource.new("foobar", "/my/file", :environment => environment).should_not be_valid_parameter("myparam")
end
it "should correctly detect when provided parameters are valid for defined resource types" do
type = Puppet::Resource::Type.new(:definition, "foobar", :arguments => {"myparam" => nil})
environment.known_resource_types.add type
Puppet::Resource.new("foobar", "/my/file", :environment => environment).should be_valid_parameter("myparam")
end
it "should allow setting and retrieving of parameters" do
@resource[:foo] = "bar"
@resource[:foo].should == "bar"
end
it "should allow setting of parameters at initialization" do
Puppet::Resource.new("file", "/my/file", :parameters => {:foo => "bar"})[:foo].should == "bar"
end
it "should canonicalize retrieved parameter names to treat symbols and strings equivalently" do
@resource[:foo] = "bar"
@resource["foo"].should == "bar"
end
it "should canonicalize set parameter names to treat symbols and strings equivalently" do
@resource["foo"] = "bar"
@resource[:foo].should == "bar"
end
it "should set the namevar when asked to set the name" do
resource = Puppet::Resource.new("user", "bob")
Puppet::Type.type(:user).stubs(:key_attributes).returns [:myvar]
resource[:name] = "bob"
resource[:myvar].should == "bob"
end
it "should return the namevar when asked to return the name" do
resource = Puppet::Resource.new("user", "bob")
Puppet::Type.type(:user).stubs(:key_attributes).returns [:myvar]
resource[:myvar] = "test"
resource[:name].should == "test"
end
it "should be able to set the name for non-builtin types" do
resource = Puppet::Resource.new(:foo, "bar")
resource[:name] = "eh"
expect { resource[:name] = "eh" }.to_not raise_error
end
it "should be able to return the name for non-builtin types" do
resource = Puppet::Resource.new(:foo, "bar")
resource[:name] = "eh"
resource[:name].should == "eh"
end
it "should be able to iterate over parameters" do
@resource[:foo] = "bar"
@resource[:fee] = "bare"
params = {}
@resource.each do |key, value|
params[key] = value
end
params.should == {:foo => "bar", :fee => "bare"}
end
it "should include Enumerable" do
@resource.class.ancestors.should be_include(Enumerable)
end
it "should have a method for testing whether a parameter is included" do
@resource[:foo] = "bar"
@resource.should be_has_key(:foo)
@resource.should_not be_has_key(:eh)
end
it "should have a method for providing the list of parameters" do
@resource[:foo] = "bar"
@resource[:bar] = "foo"
keys = @resource.keys
keys.should be_include(:foo)
keys.should be_include(:bar)
end
it "should have a method for providing the number of parameters" do
@resource[:foo] = "bar"
@resource.length.should == 1
end
it "should have a method for deleting parameters" do
@resource[:foo] = "bar"
@resource.delete(:foo)
@resource[:foo].should be_nil
end
it "should have a method for testing whether the parameter list is empty" do
@resource.should be_empty
@resource[:foo] = "bar"
@resource.should_not be_empty
end
it "should be able to produce a hash of all existing parameters" do
@resource[:foo] = "bar"
@resource[:fee] = "yay"
hash = @resource.to_hash
hash[:foo].should == "bar"
hash[:fee].should == "yay"
end
it "should not provide direct access to the internal parameters hash when producing a hash" do
hash = @resource.to_hash
hash[:foo] = "bar"
@resource[:foo].should be_nil
end
it "should use the title as the namevar to the hash if no namevar is present" do
resource = Puppet::Resource.new("user", "bob")
Puppet::Type.type(:user).stubs(:key_attributes).returns [:myvar]
resource.to_hash[:myvar].should == "bob"
end
it "should set :name to the title if :name is not present for non-builtin types" do
krt = Puppet::Resource::TypeCollection.new("myenv")
krt.add Puppet::Resource::Type.new(:definition, :foo)
resource = Puppet::Resource.new :foo, "bar"
resource.stubs(:known_resource_types).returns krt
resource.to_hash[:name].should == "bar"
end
end
describe "when serializing a native type" do
before do
@resource = Puppet::Resource.new("file", "/my/file")
@resource["one"] = "test"
@resource["two"] = "other"
end
it "should produce an equivalent yaml object" do
text = @resource.render('yaml')
newresource = Puppet::Resource.convert_from('yaml', text)
- newresource.should equal_attributes_of @resource
+ newresource.should equal_resource_attributes_of @resource
end
end
describe "when serializing a defined type" do
before do
type = Puppet::Resource::Type.new(:definition, "foo::bar")
environment.known_resource_types.add type
@resource = Puppet::Resource.new('foo::bar', 'xyzzy', :environment => environment)
@resource['one'] = 'test'
@resource['two'] = 'other'
@resource.resource_type
end
it "doesn't include transient instance variables (#4506)" do
expect(@resource.to_yaml_properties).to_not include :@rstype
end
it "produces an equivalent yaml object" do
text = @resource.render('yaml')
newresource = Puppet::Resource.convert_from('yaml', text)
- newresource.should equal_attributes_of @resource
+ newresource.should equal_resource_attributes_of @resource
end
end
describe "when converting to a RAL resource" do
it "should use the resource type's :new method to create the resource if the resource is of a builtin type" do
resource = Puppet::Resource.new("file", basepath+"/my/file")
result = resource.to_ral
result.must be_instance_of(Puppet::Type.type(:file))
result[:path].should == basepath+"/my/file"
end
it "should convert to a component instance if the resource type is not of a builtin type" do
resource = Puppet::Resource.new("foobar", "somename")
result = resource.to_ral
result.must be_instance_of(Puppet::Type.type(:component))
result.title.should == "Foobar[somename]"
end
end
describe "when converting to puppet code" do
before do
@resource = Puppet::Resource.new("one::two", "/my/file",
:parameters => {
:noop => true,
:foo => %w{one two},
:ensure => 'present',
}
)
end
it "should align, sort and add trailing commas to attributes with ensure first" do
@resource.to_manifest.should == <<-HEREDOC.gsub(/^\s{8}/, '').gsub(/\n$/, '')
one::two { '/my/file':
ensure => 'present',
foo => ['one', 'two'],
noop => 'true',
}
HEREDOC
end
end
describe "when converting to pson" do
def pson_output_should
@resource.class.expects(:pson_create).with { |hash| yield hash }
end
it "should include the pson util module" do
Puppet::Resource.singleton_class.ancestors.should be_include(Puppet::Util::Pson)
end
# LAK:NOTE For all of these tests, we convert back to the resource so we can
# capture the actual data structure that gets produced.
it "should set its type to the provided type" do
Puppet::Resource.from_data_hash(PSON.parse(Puppet::Resource.new("File", "/foo").to_pson)).type.should == "File"
end
it "should set its title to the provided title" do
Puppet::Resource.from_data_hash(PSON.parse(Puppet::Resource.new("File", "/foo").to_pson)).title.should == "/foo"
end
it "should include all tags from the resource" do
resource = Puppet::Resource.new("File", "/foo")
resource.tag("yay")
Puppet::Resource.from_data_hash(PSON.parse(resource.to_pson)).tags.should == resource.tags
end
it "should include the file if one is set" do
resource = Puppet::Resource.new("File", "/foo")
resource.file = "/my/file"
Puppet::Resource.from_data_hash(PSON.parse(resource.to_pson)).file.should == "/my/file"
end
it "should include the line if one is set" do
resource = Puppet::Resource.new("File", "/foo")
resource.line = 50
Puppet::Resource.from_data_hash(PSON.parse(resource.to_pson)).line.should == 50
end
it "should include the 'exported' value if one is set" do
resource = Puppet::Resource.new("File", "/foo")
resource.exported = true
Puppet::Resource.from_data_hash(PSON.parse(resource.to_pson)).exported?.should be_true
end
it "should set 'exported' to false if no value is set" do
resource = Puppet::Resource.new("File", "/foo")
Puppet::Resource.from_data_hash(PSON.parse(resource.to_pson)).exported?.should be_false
end
it "should set all of its parameters as the 'parameters' entry" do
resource = Puppet::Resource.new("File", "/foo")
resource[:foo] = %w{bar eh}
resource[:fee] = %w{baz}
result = Puppet::Resource.from_data_hash(PSON.parse(resource.to_pson))
result["foo"].should == %w{bar eh}
result["fee"].should == %w{baz}
end
it "should serialize relationships as reference strings" do
resource = Puppet::Resource.new("File", "/foo")
resource[:requires] = Puppet::Resource.new("File", "/bar")
result = Puppet::Resource.from_data_hash(PSON.parse(resource.to_pson))
result[:requires].should == "File[/bar]"
end
it "should serialize multiple relationships as arrays of reference strings" do
resource = Puppet::Resource.new("File", "/foo")
resource[:requires] = [Puppet::Resource.new("File", "/bar"), Puppet::Resource.new("File", "/baz")]
result = Puppet::Resource.from_data_hash(PSON.parse(resource.to_pson))
result[:requires].should == [ "File[/bar]", "File[/baz]" ]
end
end
describe "when converting from pson" do
def pson_result_should
Puppet::Resource.expects(:new).with { |hash| yield hash }
end
before do
@data = {
'type' => "file",
'title' => basepath+"/yay",
}
end
it "should set its type to the provided type" do
Puppet::Resource.from_data_hash(@data).type.should == "File"
end
it "should set its title to the provided title" do
Puppet::Resource.from_data_hash(@data).title.should == basepath+"/yay"
end
it "should tag the resource with any provided tags" do
@data['tags'] = %w{foo bar}
resource = Puppet::Resource.from_data_hash(@data)
resource.tags.should be_include("foo")
resource.tags.should be_include("bar")
end
it "should set its file to the provided file" do
@data['file'] = "/foo/bar"
Puppet::Resource.from_data_hash(@data).file.should == "/foo/bar"
end
it "should set its line to the provided line" do
@data['line'] = 50
Puppet::Resource.from_data_hash(@data).line.should == 50
end
it "should 'exported' to true if set in the pson data" do
@data['exported'] = true
Puppet::Resource.from_data_hash(@data).exported.should be_true
end
it "should 'exported' to false if not set in the pson data" do
Puppet::Resource.from_data_hash(@data).exported.should be_false
end
it "should fail if no title is provided" do
@data.delete('title')
expect { Puppet::Resource.from_data_hash(@data) }.to raise_error(ArgumentError)
end
it "should fail if no type is provided" do
@data.delete('type')
expect { Puppet::Resource.from_data_hash(@data) }.to raise_error(ArgumentError)
end
it "should set each of the provided parameters" do
@data['parameters'] = {'foo' => %w{one two}, 'fee' => %w{three four}}
resource = Puppet::Resource.from_data_hash(@data)
resource['foo'].should == %w{one two}
resource['fee'].should == %w{three four}
end
it "should convert single-value array parameters to normal values" do
@data['parameters'] = {'foo' => %w{one}}
resource = Puppet::Resource.from_data_hash(@data)
resource['foo'].should == %w{one}
end
end
it "implements copy_as_resource" do
resource = Puppet::Resource.new("file", "/my/file")
resource.copy_as_resource.should == resource
end
describe "because it is an indirector model" do
it "should include Puppet::Indirector" do
Puppet::Resource.should be_is_a(Puppet::Indirector)
end
it "should have a default terminus" do
Puppet::Resource.indirection.terminus_class.should be
end
it "should have a name" do
Puppet::Resource.new("file", "/my/file").name.should == "File//my/file"
end
end
describe "when resolving resources with a catalog" do
it "should resolve all resources using the catalog" do
catalog = mock 'catalog'
resource = Puppet::Resource.new("foo::bar", "yay")
resource.catalog = catalog
catalog.expects(:resource).with("Foo::Bar[yay]").returns(:myresource)
resource.resolve.should == :myresource
end
end
describe "when generating the uniqueness key" do
it "should include all of the key_attributes in alphabetical order by attribute name" do
Puppet::Type.type(:file).stubs(:key_attributes).returns [:myvar, :owner, :path]
Puppet::Type.type(:file).stubs(:title_patterns).returns(
[ [ /(.*)/, [ [:path, lambda{|x| x} ] ] ] ]
)
res = Puppet::Resource.new("file", "/my/file", :parameters => {:owner => 'root', :content => 'hello'})
res.uniqueness_key.should == [ nil, 'root', '/my/file']
end
end
describe '#parse_title' do
describe 'with a composite namevar' do
before do
Puppet::Type.newtype(:composite) do
newparam(:name)
newparam(:value)
# Configure two title patterns to match a title separated with either
# a colon or an exclamation point. The first capture
# will be used for the :name param, and the second capture will be
# used for the :value param.
def self.title_patterns
identity = lambda {|x| x }
reverse = lambda {|x| x.reverse }
[
[
/^(.*?):(.*?)$/,
[
[:name, identity],
[:value, identity],
]
],
[
/^(.*?)!(.*?)$/,
[
[:name, reverse],
[:value, reverse],
]
],
]
end
end
end
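# Note: with the two patterns above, a title such as 'name:value' matches the
# first pattern and the identity lambdas assign the captures directly, while
# 'name!value' falls through to the second pattern, whose reverse lambdas
# reverse each capture; the examples below exercise both cases.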
describe "with no matching title patterns" do
subject { Puppet::Resource.new(:composite, 'unmatching title') }
it "should raise an exception if no title patterns match" do
expect do
subject.to_hash
end.to raise_error(Puppet::Error, /No set of title patterns matched/)
end
end
describe "with a matching title pattern" do
subject { Puppet::Resource.new(:composite, 'matching:title') }
it "should not raise an exception if there was a match" do
expect do
subject.to_hash
end.to_not raise_error
end
it "should set the resource parameters from the parsed title values" do
h = subject.to_hash
h[:name].should == 'matching'
h[:value].should == 'title'
end
end
describe "and multiple title patterns" do
subject { Puppet::Resource.new(:composite, 'matching!title') }
it "should use the first title pattern that matches" do
h = subject.to_hash
h[:name].should == 'gnihctam'
h[:value].should == 'eltit'
end
end
end
end
describe "#prune_parameters" do
before do
Puppet.newtype('blond') do
newproperty(:ensure)
newproperty(:height)
newproperty(:weight)
newproperty(:sign)
newproperty(:friends)
newparam(:admits_to_dying_hair)
newparam(:admits_to_age)
newparam(:name)
end
end
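# Note: the examples below expect prune_parameters to drop all parameters and
# any property whose value is nil, 'absent', or empty, while keeping :ensure
# and anything named in :parameters_to_include.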
it "should strip all parameters and strip properties that are nil, empty or absent except for ensure" do
resource = Puppet::Resource.new("blond", "Bambi", :parameters => {
:ensure => 'absent',
:height => '',
:weight => 'absent',
:friends => [],
:admits_to_age => true,
:admits_to_dying_hair => false
})
pruned_resource = resource.prune_parameters
pruned_resource.should == Puppet::Resource.new("blond", "Bambi", :parameters => {:ensure => 'absent'})
end
it "should leave parameters alone if in parameters_to_include" do
resource = Puppet::Resource.new("blond", "Bambi", :parameters => {
:admits_to_age => true,
:admits_to_dying_hair => false
})
pruned_resource = resource.prune_parameters(:parameters_to_include => [:admits_to_dying_hair])
pruned_resource.should == Puppet::Resource.new("blond", "Bambi", :parameters => {:admits_to_dying_hair => false})
end
it "should leave properties if not nil, absent or empty" do
resource = Puppet::Resource.new("blond", "Bambi", :parameters => {
:ensure => 'silly',
:height => '7 ft 5 in',
:friends => ['Oprah'],
})
pruned_resource = resource.prune_parameters
pruned_resource.should ==
Puppet::Resource.new("blond", "Bambi", :parameters => {
:ensure => 'silly',
:height => '7 ft 5 in',
:friends => ['Oprah'],
})
end
end
end
diff --git a/spec/unit/settings/autosign_setting_spec.rb b/spec/unit/settings/autosign_setting_spec.rb
index 0c8184c8a..0dbfe4ecb 100644
--- a/spec/unit/settings/autosign_setting_spec.rb
+++ b/spec/unit/settings/autosign_setting_spec.rb
@@ -1,103 +1,103 @@
require 'spec_helper'
require 'puppet/settings'
require 'puppet/settings/autosign_setting'
describe Puppet::Settings::AutosignSetting do
let(:settings) do
s = stub('settings')
s.stubs(:[]).with(:mkusers).returns true
s.stubs(:[]).with(:user).returns 'puppet'
s.stubs(:[]).with(:group).returns 'puppet'
s.stubs(:[]).with(:manage_internal_file_permissions).returns true
s
end
let(:setting) { described_class.new(:name => 'autosign', :section => 'section', :settings => settings, :desc => "test") }
it "is of type :file" do
expect(setting.type).to eq :file
end
describe "when munging the setting" do
it "passes boolean values through" do
expect(setting.munge(true)).to eq true
expect(setting.munge(false)).to eq false
end
it "converts nil to false" do
expect(setting.munge(nil)).to eq false
end
it "munges string 'true' to boolean true" do
expect(setting.munge('true')).to eq true
end
it "munges string 'false' to boolean false" do
expect(setting.munge('false')).to eq false
end
it "passes absolute paths through" do
path = File.expand_path('/path/to/autosign.conf')
expect(setting.munge(path)).to eq path
end
it "fails if given anything else" do
cases = [1.0, 'sometimes', 'relative/autosign.conf']
cases.each do |invalid|
expect {
setting.munge(invalid)
}.to raise_error Puppet::Settings::ValidationError, /Invalid autosign value/
end
end
end
describe "setting additional setting values" do
it "can set the file mode" do
setting.mode = '0664'
expect(setting.mode).to eq '0664'
end
it "can set the file owner" do
setting.owner = 'service'
expect(setting.owner).to eq 'puppet'
end
it "can set the file group" do
setting.group = 'service'
expect(setting.group).to eq 'puppet'
end
end
describe "converting the setting to a resource" do
it "converts the file path to a file resource" do
path = File.expand_path('/path/to/autosign.conf')
- settings.stubs(:value).with('autosign').returns(path)
+ settings.stubs(:value).with('autosign', nil, false).returns(path)
Puppet::FileSystem.stubs(:exist?).with(path).returns true
Puppet.stubs(:features).returns(stub(:root? => true, :microsoft_windows? => false))
setting.mode = '0664'
setting.owner = 'service'
setting.group = 'service'
resource = setting.to_resource
expect(resource.title).to eq path
expect(resource[:ensure]).to eq :file
expect(resource[:mode]).to eq '664'
expect(resource[:owner]).to eq 'puppet'
expect(resource[:group]).to eq 'puppet'
end
it "returns nil when the setting is a boolean" do
- settings.stubs(:value).with('autosign').returns 'true'
+ settings.stubs(:value).with('autosign', nil, false).returns 'true'
setting.mode = '0664'
setting.owner = 'service'
setting.group = 'service'
expect(setting.to_resource).to be_nil
end
end
end
diff --git a/spec/unit/settings/environment_conf_spec.rb b/spec/unit/settings/environment_conf_spec.rb
index 6a8a6689e..e4a492ae8 100644
--- a/spec/unit/settings/environment_conf_spec.rb
+++ b/spec/unit/settings/environment_conf_spec.rb
@@ -1,51 +1,118 @@
require 'spec_helper'
require 'puppet/settings/environment_conf.rb'
describe Puppet::Settings::EnvironmentConf do
+ def setup_environment_conf(config, conf_hash)
+ conf_hash.each do |setting,value|
+ config.expects(:setting).with(setting).returns(
+ mock('setting', :value => value)
+ )
+ end
+ end
+
context "with config" do
- let(:config) { stub(:config) }
+ let(:config) { stub('config') }
let(:envconf) { Puppet::Settings::EnvironmentConf.new("/some/direnv", config, ["/global/modulepath"]) }
it "reads a modulepath from config and does not include global_module_path" do
- config.expects(:setting).with(:modulepath).returns(
- mock('setting', :value => '/some/modulepath')
- )
+ setup_environment_conf(config, :modulepath => '/some/modulepath')
+
expect(envconf.modulepath).to eq(File.expand_path('/some/modulepath'))
end
it "reads a manifest from config" do
- config.expects(:setting).with(:manifest).returns(
- mock('setting', :value => '/some/manifest')
- )
+ setup_environment_conf(config, :manifest => '/some/manifest')
+
expect(envconf.manifest).to eq(File.expand_path('/some/manifest'))
end
it "reads a config_version from config" do
- config.expects(:setting).with(:config_version).returns(
- mock('setting', :value => '/some/version.sh')
- )
+ setup_environment_conf(config, :config_version => '/some/version.sh')
+
expect(envconf.config_version).to eq(File.expand_path('/some/version.sh'))
end
+ it "read an environment_timeout from config" do
+ setup_environment_conf(config, :environment_timeout => '3m')
+
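+ # '3m' is interpreted as 3 minutes, i.e. 180 seconds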
+ expect(envconf.environment_timeout).to eq(180)
+ end
+
+ it "can retrieve raw settings" do
+ setup_environment_conf(config, :manifest => 'manifest.pp')
+
+ expect(envconf.raw_setting(:manifest)).to eq('manifest.pp')
+ end
end
context "without config" do
let(:envconf) { Puppet::Settings::EnvironmentConf.new("/some/direnv", nil, ["/global/modulepath"]) }
it "returns a default modulepath when config has none, with global_module_path" do
expect(envconf.modulepath).to eq(
[File.expand_path('/some/direnv/modules'),
File.expand_path('/global/modulepath')].join(File::PATH_SEPARATOR)
)
end
it "returns a default manifest when config has none" do
expect(envconf.manifest).to eq(File.expand_path('/some/direnv/manifests'))
end
it "returns nothing for config_version when config has none" do
expect(envconf.config_version).to be_nil
end
+
+ it "returns a defult of 0 for environment_timeout when config has none" do
+ expect(envconf.environment_timeout).to eq(0)
+ end
+
+ it "can still retrieve raw setting" do
+ expect(envconf.raw_setting(:manifest)).to be_nil
+ end
+ end
+
+ describe "with disable_per_environment_manifest" do
+
+ let(:config) { stub('config') }
+ let(:envconf) { Puppet::Settings::EnvironmentConf.new("/some/direnv", config, ["/global/modulepath"]) }
+
+ context "set true" do
+
+ before(:each) do
+ Puppet[:default_manifest] = File.expand_path('/default/manifest')
+ Puppet[:disable_per_environment_manifest] = true
+ end
+
+ it "ignores environment.conf manifest" do
+ setup_environment_conf(config, :manifest => '/some/manifest.pp')
+
+ expect(envconf.manifest).to eq(File.expand_path('/default/manifest'))
+ end
+
+ it "logs error when environment.conf has manifest set" do
+ setup_environment_conf(config, :manifest => '/some/manifest.pp')
+
+ envconf.manifest
+ expect(@logs.first.to_s).to match(/disable_per_environment_manifest.*true.*environment.conf.*does not match the default_manifest/)
+ end
+
+ it "does not log an error when environment.conf does not have a manifest set" do
+ setup_environment_conf(config, :manifest => nil)
+
+ expect(envconf.manifest).to eq(File.expand_path('/default/manifest'))
+ expect(@logs).to be_empty
+ end
+ end
+
+ it "uses environment.conf when false" do
+ setup_environment_conf(config, :manifest => '/some/manifest.pp')
+
+ Puppet[:default_manifest] = File.expand_path('/default/manifest')
+ Puppet[:disable_per_environment_manifest] = false
+
+ expect(envconf.manifest).to eq(File.expand_path('/some/manifest.pp'))
+ end
end
end
diff --git a/spec/unit/settings/file_setting_spec.rb b/spec/unit/settings/file_setting_spec.rb
index b31d0ccb3..c77cc5981 100755
--- a/spec/unit/settings/file_setting_spec.rb
+++ b/spec/unit/settings/file_setting_spec.rb
@@ -1,297 +1,298 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet/settings'
require 'puppet/settings/file_setting'
describe Puppet::Settings::FileSetting do
FileSetting = Puppet::Settings::FileSetting
include PuppetSpec::Files
describe "when controlling permissions" do
def settings(wanted_values = {})
real_values = {
:user => 'root',
:group => 'root',
:mkusers => false,
:service_user_available? => false,
:service_group_available? => false
}.merge(wanted_values)
settings = mock("settings")
settings.stubs(:[]).with(:user).returns real_values[:user]
settings.stubs(:[]).with(:group).returns real_values[:group]
settings.stubs(:[]).with(:mkusers).returns real_values[:mkusers]
settings.stubs(:service_user_available?).returns real_values[:service_user_available?]
settings.stubs(:service_group_available?).returns real_values[:service_group_available?]
settings
end
context "owner" do
it "can always be root" do
settings = settings(:user => "the_service", :mkusers => true)
setting = FileSetting.new(:settings => settings, :owner => "root", :desc => "a setting")
setting.owner.should == "root"
end
it "is the service user if we are making users" do
settings = settings(:user => "the_service", :mkusers => true, :service_user_available? => false)
setting = FileSetting.new(:settings => settings, :owner => "service", :desc => "a setting")
setting.owner.should == "the_service"
end
it "is the service user if the user is available on the system" do
settings = settings(:user => "the_service", :mkusers => false, :service_user_available? => true)
setting = FileSetting.new(:settings => settings, :owner => "service", :desc => "a setting")
setting.owner.should == "the_service"
end
it "is root when the setting specifies service and the user is not available on the system" do
settings = settings(:user => "the_service", :mkusers => false, :service_user_available? => false)
setting = FileSetting.new(:settings => settings, :owner => "service", :desc => "a setting")
setting.owner.should == "root"
end
it "is unspecified when no specific owner is wanted" do
FileSetting.new(:settings => settings(), :desc => "a setting").owner.should be_nil
end
it "does not allow other owners" do
expect { FileSetting.new(:settings => settings(), :desc => "a setting", :name => "testing", :default => "the default", :owner => "invalid") }.
to raise_error(FileSetting::SettingError, /The :owner parameter for the setting 'testing' must be either 'root' or 'service'/)
end
end
context "group" do
it "is unspecified when no specific group is wanted" do
setting = FileSetting.new(:settings => settings(), :desc => "a setting")
setting.group.should be_nil
end
it "is root if root is requested" do
settings = settings(:group => "the_group")
setting = FileSetting.new(:settings => settings, :group => "root", :desc => "a setting")
setting.group.should == "root"
end
it "is the service group if we are making users" do
settings = settings(:group => "the_service", :mkusers => true)
setting = FileSetting.new(:settings => settings, :group => "service", :desc => "a setting")
setting.group.should == "the_service"
end
it "is the service user if the group is available on the system" do
settings = settings(:group => "the_service", :mkusers => false, :service_group_available? => true)
setting = FileSetting.new(:settings => settings, :group => "service", :desc => "a setting")
setting.group.should == "the_service"
end
it "is unspecified when the setting specifies service and the group is not available on the system" do
settings = settings(:group => "the_service", :mkusers => false, :service_group_available? => false)
setting = FileSetting.new(:settings => settings, :group => "service", :desc => "a setting")
setting.group.should be_nil
end
it "does not allow other groups" do
expect { FileSetting.new(:settings => settings(), :group => "invalid", :name => 'testing', :desc => "a setting") }.
to raise_error(FileSetting::SettingError, /The :group parameter for the setting 'testing' must be either 'root' or 'service'/)
end
end
end
it "should be able to be converted into a resource" do
FileSetting.new(:settings => mock("settings"), :desc => "eh").should respond_to(:to_resource)
end
describe "when being converted to a resource" do
before do
@basepath = make_absolute("/somepath")
@settings = mock 'settings'
@file = Puppet::Settings::FileSetting.new(:settings => @settings, :desc => "eh", :name => :myfile, :section => "mysect")
@file.stubs(:create_files?).returns true
- @settings.stubs(:value).with(:myfile).returns @basepath
+ @settings.stubs(:value).with(:myfile, nil, false).returns @basepath
end
it "should return :file as its type" do
@file.type.should == :file
end
it "should skip non-existent files if 'create_files' is not enabled" do
@file.expects(:create_files?).returns false
@file.expects(:type).returns :file
Puppet::FileSystem.expects(:exist?).with(@basepath).returns false
@file.to_resource.should be_nil
end
it "should manage existent files even if 'create_files' is not enabled" do
@file.expects(:create_files?).returns false
@file.expects(:type).returns :file
+ Puppet::FileSystem.stubs(:exist?)
Puppet::FileSystem.expects(:exist?).with(@basepath).returns true
@file.to_resource.should be_instance_of(Puppet::Resource)
end
describe "on POSIX systems", :if => Puppet.features.posix? do
it "should skip files in /dev" do
- @settings.stubs(:value).with(:myfile).returns "/dev/file"
+ @settings.stubs(:value).with(:myfile, nil, false).returns "/dev/file"
@file.to_resource.should be_nil
end
end
it "should skip files whose paths are not strings" do
- @settings.stubs(:value).with(:myfile).returns :foo
+ @settings.stubs(:value).with(:myfile, nil, false).returns :foo
@file.to_resource.should be_nil
end
it "should return a file resource with the path set appropriately" do
resource = @file.to_resource
resource.type.should == "File"
resource.title.should == @basepath
end
it "should fully qualified returned files if necessary (#795)" do
- @settings.stubs(:value).with(:myfile).returns "myfile"
+ @settings.stubs(:value).with(:myfile, nil, false).returns "myfile"
path = File.expand_path('myfile')
@file.to_resource.title.should == path
end
it "should set the mode on the file if a mode is provided as an octal number" do
@file.mode = 0755
@file.to_resource[:mode].should == '755'
end
it "should set the mode on the file if a mode is provided as a string" do
@file.mode = '0755'
@file.to_resource[:mode].should == '755'
end
it "should not set the mode on a the file if manage_internal_file_permissions is disabled" do
Puppet[:manage_internal_file_permissions] = false
@file.stubs(:mode).returns(0755)
@file.to_resource[:mode].should == nil
end
it "should set the owner if running as root and the owner is provided" do
Puppet.features.expects(:root?).returns true
Puppet.features.stubs(:microsoft_windows?).returns false
@file.stubs(:owner).returns "foo"
@file.to_resource[:owner].should == "foo"
end
it "should not set the owner if manage_internal_file_permissions is disabled" do
Puppet[:manage_internal_file_permissions] = false
Puppet.features.stubs(:root?).returns true
@file.stubs(:owner).returns "foo"
@file.to_resource[:owner].should == nil
end
it "should set the group if running as root and the group is provided" do
Puppet.features.expects(:root?).returns true
Puppet.features.stubs(:microsoft_windows?).returns false
@file.stubs(:group).returns "foo"
@file.to_resource[:group].should == "foo"
end
it "should not set the group if manage_internal_file_permissions is disabled" do
Puppet[:manage_internal_file_permissions] = false
Puppet.features.stubs(:root?).returns true
@file.stubs(:group).returns "foo"
@file.to_resource[:group].should == nil
end
it "should not set owner if not running as root" do
Puppet.features.expects(:root?).returns false
Puppet.features.stubs(:microsoft_windows?).returns false
@file.stubs(:owner).returns "foo"
@file.to_resource[:owner].should be_nil
end
it "should not set group if not running as root" do
Puppet.features.expects(:root?).returns false
Puppet.features.stubs(:microsoft_windows?).returns false
@file.stubs(:group).returns "foo"
@file.to_resource[:group].should be_nil
end
describe "on Microsoft Windows systems" do
before :each do
Puppet.features.stubs(:microsoft_windows?).returns true
end
it "should not set owner" do
@file.stubs(:owner).returns "foo"
@file.to_resource[:owner].should be_nil
end
it "should not set group" do
@file.stubs(:group).returns "foo"
@file.to_resource[:group].should be_nil
end
end
it "should set :ensure to the file type" do
@file.expects(:type).returns :directory
@file.to_resource[:ensure].should == :directory
end
it "should set the loglevel to :debug" do
@file.to_resource[:loglevel].should == :debug
end
it "should set the backup to false" do
@file.to_resource[:backup].should be_false
end
it "should tag the resource with the settings section" do
@file.expects(:section).returns "mysect"
@file.to_resource.should be_tagged("mysect")
end
it "should tag the resource with the setting name" do
@file.to_resource.should be_tagged("myfile")
end
it "should tag the resource with 'settings'" do
@file.to_resource.should be_tagged("settings")
end
it "should set links to 'follow'" do
@file.to_resource[:links].should == :follow
end
end
describe "#munge" do
it 'does not expand the path of the special value :memory: so we can set dblocation to an in-memory database' do
filesetting = FileSetting.new(:settings => mock("settings"), :desc => "eh")
filesetting.munge(':memory:').should == ':memory:'
end
end
end
diff --git a/spec/unit/settings/priority_setting_spec.rb b/spec/unit/settings/priority_setting_spec.rb
index d51e39dc4..62cad5def 100755
--- a/spec/unit/settings/priority_setting_spec.rb
+++ b/spec/unit/settings/priority_setting_spec.rb
@@ -1,66 +1,66 @@
#!/usr/bin/env ruby
require 'spec_helper'
require 'puppet/settings'
require 'puppet/settings/priority_setting'
require 'puppet/util/platform'
describe Puppet::Settings::PrioritySetting do
let(:setting) { described_class.new(:settings => mock('settings'), :desc => "test") }
it "is of type :priority" do
setting.type.should == :priority
end
describe "when munging the setting" do
it "passes nil through" do
setting.munge(nil).should be_nil
end
it "returns the same value if given an integer" do
setting.munge(5).should == 5
end
it "returns an integer if given a decimal string" do
setting.munge('12').should == 12
end
it "returns a negative integer if given a negative integer string" do
setting.munge('-5').should == -5
end
it "fails if given anything else" do
[ 'foo', 'realtime', true, 8.3, [] ].each do |value|
expect {
setting.munge(value)
}.to raise_error(Puppet::Settings::ValidationError)
end
end
describe "on a Unix-like platform it", :unless => Puppet::Util::Platform.windows? do
it "parses high, normal, low, and idle priorities" do
{
'high' => -10,
'normal' => 0,
'low' => 10,
'idle' => 19
}.each do |value, converted_value|
setting.munge(value).should == converted_value
end
end
end
describe "on a Windows-like platform it", :if => Puppet::Util::Platform.windows? do
it "parses high, normal, low, and idle priorities" do
{
- 'high' => Process::HIGH_PRIORITY_CLASS,
- 'normal' => Process::NORMAL_PRIORITY_CLASS,
- 'low' => Process::BELOW_NORMAL_PRIORITY_CLASS,
- 'idle' => Process::IDLE_PRIORITY_CLASS
+ 'high' => Puppet::Util::Windows::Process::HIGH_PRIORITY_CLASS,
+ 'normal' => Puppet::Util::Windows::Process::NORMAL_PRIORITY_CLASS,
+ 'low' => Puppet::Util::Windows::Process::BELOW_NORMAL_PRIORITY_CLASS,
+ 'idle' => Puppet::Util::Windows::Process::IDLE_PRIORITY_CLASS
}.each do |value, converted_value|
setting.munge(value).should == converted_value
end
end
end
end
end
diff --git a/spec/unit/settings_spec.rb b/spec/unit/settings_spec.rb
index 893db6090..0a1cee2bd 100755
--- a/spec/unit/settings_spec.rb
+++ b/spec/unit/settings_spec.rb
@@ -1,1845 +1,1883 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'ostruct'
require 'puppet/settings/errors'
describe Puppet::Settings do
include PuppetSpec::Files
let(:main_config_file_default_location) do
File.join(Puppet::Util::RunMode[:master].conf_dir, "puppet.conf")
end
let(:user_config_file_default_location) do
File.join(Puppet::Util::RunMode[:user].conf_dir, "puppet.conf")
end
describe "when specifying defaults" do
before do
@settings = Puppet::Settings.new
end
it "should start with no defined parameters" do
@settings.params.length.should == 0
end
it "should not allow specification of default values associated with a section as an array" do
expect {
@settings.define_settings(:section, :myvalue => ["defaultval", "my description"])
}.to raise_error
end
it "should not allow duplicate parameter specifications" do
@settings.define_settings(:section, :myvalue => { :default => "a", :desc => "b" })
lambda { @settings.define_settings(:section, :myvalue => { :default => "c", :desc => "d" }) }.should raise_error(ArgumentError)
end
it "should allow specification of default values associated with a section as a hash" do
@settings.define_settings(:section, :myvalue => {:default => "defaultval", :desc => "my description"})
end
it "should consider defined parameters to be valid" do
@settings.define_settings(:section, :myvalue => { :default => "defaultval", :desc => "my description" })
@settings.valid?(:myvalue).should be_true
end
it "should require a description when defaults are specified with a hash" do
lambda { @settings.define_settings(:section, :myvalue => {:default => "a value"}) }.should raise_error(ArgumentError)
end
it "should support specifying owner, group, and mode when specifying files" do
@settings.define_settings(:section, :myvalue => {:type => :file, :default => "/some/file", :owner => "service", :mode => "boo", :group => "service", :desc => "whatever"})
end
it "should support specifying a short name" do
@settings.define_settings(:section, :myvalue => {:default => "w", :desc => "b", :short => "m"})
end
it "should support specifying the setting type" do
@settings.define_settings(:section, :myvalue => {:default => "/w", :desc => "b", :type => :string})
@settings.setting(:myvalue).should be_instance_of(Puppet::Settings::StringSetting)
end
it "should fail if an invalid setting type is specified" do
lambda { @settings.define_settings(:section, :myvalue => {:default => "w", :desc => "b", :type => :foo}) }.should raise_error(ArgumentError)
end
it "should fail when short names conflict" do
@settings.define_settings(:section, :myvalue => {:default => "w", :desc => "b", :short => "m"})
lambda { @settings.define_settings(:section, :myvalue => {:default => "w", :desc => "b", :short => "m"}) }.should raise_error(ArgumentError)
end
end
describe "when initializing application defaults do" do
let(:default_values) do
values = {}
PuppetSpec::Settings::TEST_APP_DEFAULT_DEFINITIONS.keys.each do |key|
values[key] = 'default value'
end
values
end
before do
@settings = Puppet::Settings.new
@settings.define_settings(:main, PuppetSpec::Settings::TEST_APP_DEFAULT_DEFINITIONS)
end
it "should fail if the app defaults hash is missing any required values" do
incomplete_default_values = default_values.reject { |key, _| key == :confdir }
expect {
@settings.initialize_app_defaults(incomplete_default_values)
}.to raise_error(Puppet::Settings::SettingsError)
end
# ultimately I'd like to stop treating "run_mode" as a normal setting, because it has so many special
# case behaviors / uses. However, until that time... we need to make sure that our private run_mode=
# setter method gets properly called during app initialization.
it "sets the preferred run mode when initializing the app defaults" do
@settings.initialize_app_defaults(default_values.merge(:run_mode => :master))
@settings.preferred_run_mode.should == :master
end
end
describe "#call_hooks_deferred_to_application_initialization" do
let(:good_default) { "yay" }
let(:bad_default) { "$doesntexist" }
before(:each) do
@settings = Puppet::Settings.new
end
describe "when ignoring dependency interpolation errors" do
let(:options) { {:ignore_interpolation_dependency_errors => true} }
describe "if interpolation error" do
it "should not raise an error" do
hook_values = []
@settings.define_settings(:section, :badhook => {:default => bad_default, :desc => "boo", :call_hook => :on_initialize_and_write, :hook => lambda { |v| hook_values << v }})
expect do
@settings.send(:call_hooks_deferred_to_application_initialization, options)
end.to_not raise_error
end
end
describe "if no interpolation error" do
it "should not raise an error" do
hook_values = []
@settings.define_settings(:section, :goodhook => {:default => good_default, :desc => "boo", :call_hook => :on_initialize_and_write, :hook => lambda { |v| hook_values << v }})
expect do
@settings.send(:call_hooks_deferred_to_application_initialization, options)
end.to_not raise_error
end
end
end
describe "when not ignoring dependency interpolation errors" do
[ {}, {:ignore_interpolation_dependency_errors => false}].each do |options|
describe "if interpolation error" do
it "should raise an error" do
hook_values = []
@settings.define_settings(
:section,
:badhook => {
:default => bad_default,
:desc => "boo",
:call_hook => :on_initialize_and_write,
:hook => lambda { |v| hook_values << v }
}
)
expect do
@settings.send(:call_hooks_deferred_to_application_initialization, options)
end.to raise_error(Puppet::Settings::InterpolationError)
end
it "should contain the setting name in error message" do
hook_values = []
@settings.define_settings(
:section,
:badhook => {
:default => bad_default,
:desc => "boo",
:call_hook => :on_initialize_and_write,
:hook => lambda { |v| hook_values << v }
}
)
expect do
@settings.send(:call_hooks_deferred_to_application_initialization, options)
end.to raise_error(Puppet::Settings::InterpolationError, /badhook/)
end
end
describe "if no interpolation error" do
it "should not raise an error" do
hook_values = []
@settings.define_settings(
:section,
:goodhook => {
:default => good_default,
:desc => "boo",
:call_hook => :on_initialize_and_write,
:hook => lambda { |v| hook_values << v }
}
)
expect do
@settings.send(:call_hooks_deferred_to_application_initialization, options)
end.to_not raise_error
end
end
end
end
end
describe "when setting values" do
before do
@settings = Puppet::Settings.new
@settings.define_settings :main, :myval => { :default => "val", :desc => "desc" }
@settings.define_settings :main, :bool => { :type => :boolean, :default => true, :desc => "desc" }
end
it "should provide a method for setting values from other objects" do
@settings[:myval] = "something else"
@settings[:myval].should == "something else"
end
it "should support a getopt-specific mechanism for setting values" do
@settings.handlearg("--myval", "newval")
@settings[:myval].should == "newval"
end
it "should support a getopt-specific mechanism for turning booleans off" do
@settings.override_default(:bool, true)
@settings.handlearg("--no-bool", "")
@settings[:bool].should == false
end
it "should support a getopt-specific mechanism for turning booleans on" do
# Turn it off first
@settings.override_default(:bool, false)
@settings.handlearg("--bool", "")
@settings[:bool].should == true
end
it "should consider a cli setting with no argument to be a boolean" do
# Turn it off first
@settings.override_default(:bool, false)
@settings.handlearg("--bool")
@settings[:bool].should == true
end
it "should consider a cli setting with an empty string as an argument to be an empty argument, if the setting itself is not a boolean" do
@settings.override_default(:myval, "bob")
@settings.handlearg("--myval", "")
@settings[:myval].should == ""
end
it "should consider a cli setting with a boolean as an argument to be a boolean" do
# Turn it off first
@settings.override_default(:bool, false)
@settings.handlearg("--bool", "true")
@settings[:bool].should == true
end
it "should not consider a cli setting of a non boolean with a boolean as an argument to be a boolean" do
@settings.override_default(:myval, "bob")
@settings.handlearg("--no-myval", "")
@settings[:myval].should == ""
end
it "should flag string settings from the CLI" do
@settings.handlearg("--myval", "12")
@settings.set_by_cli?(:myval).should be_true
end
it "should flag bool settings from the CLI" do
@settings.handlearg("--bool")
@settings.set_by_cli?(:bool).should be_true
end
it "should not flag settings memory as from CLI" do
@settings[:myval] = "12"
@settings.set_by_cli?(:myval).should be_false
end
describe "setbycli" do
it "should generate a deprecation warning" do
Puppet.expects(:deprecation_warning).at_least(1)
@settings.setting(:myval).setbycli = true
end
it "should set the value" do
@settings[:myval] = "blah"
@settings.setting(:myval).setbycli = true
@settings.set_by_cli?(:myval).should be_true
end
it "should raise error if trying to unset value" do
@settings.handlearg("--myval", "blah")
expect do
@settings.setting(:myval).setbycli = nil
end.to raise_error(ArgumentError, /unset/)
end
end
it "should clear the cache when setting getopt-specific values" do
@settings.define_settings :mysection,
:one => { :default => "whah", :desc => "yay" },
:two => { :default => "$one yay", :desc => "bah" }
@settings.expects(:unsafe_flush_cache)
@settings[:two].should == "whah yay"
@settings.handlearg("--one", "else")
@settings[:two].should == "else yay"
end
it "should clear the cache when the preferred_run_mode is changed" do
@settings.expects(:flush_cache)
@settings.preferred_run_mode = :master
end
it "should not clear other values when setting getopt-specific values" do
@settings[:myval] = "yay"
@settings.handlearg("--no-bool", "")
@settings[:myval].should == "yay"
end
it "should clear the list of used sections" do
@settings.expects(:clearused)
@settings[:myval] = "yay"
end
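# :call_hook controls when a setting's :hook lambda fires: :on_write_only (the
# default), :on_define_and_write, or :on_initialize_and_write, as the specs below verify.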
describe "call_hook" do
Puppet::Settings::StringSetting.available_call_hook_values.each do |val|
describe "when :#{val}" do
describe "and definition invalid" do
it "should raise error if no hook defined" do
expect do
@settings.define_settings(:section, :hooker => {:default => "yay", :desc => "boo", :call_hook => val})
end.to raise_error(ArgumentError, /no :hook/)
end
it "should include the setting name in the error message" do
expect do
@settings.define_settings(:section, :hooker => {:default => "yay", :desc => "boo", :call_hook => val})
end.to raise_error(ArgumentError, /for :hooker/)
end
end
describe "and definition valid" do
before(:each) do
hook_values = []
@settings.define_settings(:section, :hooker => {:default => "yay", :desc => "boo", :call_hook => val, :hook => lambda { |v| hook_values << v }})
end
it "should call the hook when value written" do
@settings.setting(:hooker).expects(:handle).with("something").once
@settings[:hooker] = "something"
end
end
end
end
it "should have a default value of :on_write_only" do
@settings.define_settings(:section, :hooker => {:default => "yay", :desc => "boo", :hook => lambda { |v| hook_values << v }})
@settings.setting(:hooker).call_hook.should == :on_write_only
end
describe "when nil" do
it "should generate a warning" do
Puppet.expects(:warning)
@settings.define_settings(:section, :hooker => {:default => "yay", :desc => "boo", :call_hook => nil, :hook => lambda { |v| hook_values << v }})
end
it "should use default" do
@settings.define_settings(:section, :hooker => {:default => "yay", :desc => "boo", :call_hook => nil, :hook => lambda { |v| hook_values << v }})
@settings.setting(:hooker).call_hook.should == :on_write_only
end
end
describe "when invalid" do
it "should raise an error" do
expect do
@settings.define_settings(:section, :hooker => {:default => "yay", :desc => "boo", :call_hook => :foo, :hook => lambda { |v| hook_values << v }})
end.to raise_error(ArgumentError, /invalid.*call_hook/i)
end
end
describe "when :on_define_and_write" do
it "should call the hook at definition" do
hook_values = []
@settings.define_settings(:section, :hooker => {:default => "yay", :desc => "boo", :call_hook => :on_define_and_write, :hook => lambda { |v| hook_values << v }})
@settings.setting(:hooker).call_hook.should == :on_define_and_write
hook_values.should == %w{yay}
end
end
describe "when :on_initialize_and_write" do
before(:each) do
@hook_values = []
@settings.define_settings(:section, :hooker => {:default => "yay", :desc => "boo", :call_hook => :on_initialize_and_write, :hook => lambda { |v| @hook_values << v }})
end
it "should not call the hook at definition" do
@hook_values.should == []
@hook_values.should_not == %w{yay}
end
it "should call the hook at initialization" do
app_defaults = {}
Puppet::Settings::REQUIRED_APP_SETTINGS.each do |key|
app_defaults[key] = "foo"
end
app_defaults[:run_mode] = :user
@settings.define_settings(:main, PuppetSpec::Settings::TEST_APP_DEFAULT_DEFINITIONS)
@settings.setting(:hooker).expects(:handle).with("yay").once
@settings.initialize_app_defaults app_defaults
end
end
end
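# :call_on_define is the deprecated boolean form of :call_hook; per the specs
# below, true maps to :on_define_and_write and false to :on_write_only.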
describe "call_on_define" do
[true, false].each do |val|
describe "to #{val}" do
it "should generate a deprecation warning" do
Puppet.expects(:deprecation_warning)
values = []
@settings.define_settings(:section, :hooker => {:default => "yay", :desc => "boo", :call_on_define => val, :hook => lambda { |v| values << v }})
end
it "should should set call_hook" do
values = []
name = "hooker_#{val}".to_sym
@settings.define_settings(:section, name => {:default => "yay", :desc => "boo", :call_on_define => val, :hook => lambda { |v| values << v }})
@settings.setting(name).call_hook.should == :on_define_and_write if val
@settings.setting(name).call_hook.should == :on_write_only unless val
end
end
end
end
it "should call passed blocks when values are set" do
values = []
@settings.define_settings(:section, :hooker => {:default => "yay", :desc => "boo", :hook => lambda { |v| values << v }})
values.should == []
@settings[:hooker] = "something"
values.should == %w{something}
end
it "should call passed blocks when values are set via the command line" do
values = []
@settings.define_settings(:section, :hooker => {:default => "yay", :desc => "boo", :hook => lambda { |v| values << v }})
values.should == []
@settings.handlearg("--hooker", "yay")
values.should == %w{yay}
end
it "should provide an option to call passed blocks during definition" do
values = []
@settings.define_settings(:section, :hooker => {:default => "yay", :desc => "boo", :call_hook => :on_define_and_write, :hook => lambda { |v| values << v }})
values.should == %w{yay}
end
it "should pass the fully interpolated value to the hook when called on definition" do
values = []
@settings.define_settings(:section, :one => { :default => "test", :desc => "a" })
@settings.define_settings(:section, :hooker => {:default => "$one/yay", :desc => "boo", :call_hook => :on_define_and_write, :hook => lambda { |v| values << v }})
values.should == %w{test/yay}
end
it "should munge values using the setting-specific methods" do
@settings[:bool] = "false"
@settings[:bool].should == false
end
it "should prefer values set in ruby to values set on the cli" do
@settings[:myval] = "memarg"
@settings.handlearg("--myval", "cliarg")
@settings[:myval].should == "memarg"
end
it "should clear the list of environments" do
Puppet::Node::Environment.expects(:clear).at_least(1)
@settings[:myval] = "memarg"
end
it "should raise an error if we try to set a setting that hasn't been defined'" do
lambda{
@settings[:why_so_serious] = "foo"
}.should raise_error(ArgumentError, /unknown setting/)
end
it "allows overriding cli args based on the cli-set value" do
@settings.handlearg("--myval", "cliarg")
@settings.set_value(:myval, "modified #{@settings[:myval]}", :cli)
expect(@settings[:myval]).to eq("modified cliarg")
end
end
describe "when returning values" do
before do
@settings = Puppet::Settings.new
@settings.define_settings :section,
:config => { :type => :file, :default => "/my/file", :desc => "eh" },
:one => { :default => "ONE", :desc => "a" },
:two => { :default => "$one TWO", :desc => "b"},
:three => { :default => "$one $two THREE", :desc => "c"},
:four => { :default => "$two $three FOUR", :desc => "d"},
:five => { :default => nil, :desc => "e" }
Puppet::FileSystem.stubs(:exist?).returns true
end
describe "call_on_define" do
it "should generate a deprecation warning" do
Puppet.expects(:deprecation_warning)
@settings.define_settings(:section, :hooker => {:default => "yay", :desc => "boo", :hook => lambda { |v| hook_values << v }})
@settings.setting(:hooker).call_on_define
end
Puppet::Settings::StringSetting.available_call_hook_values.each do |val|
it "should match value for call_hook => :#{val}" do
hook_values = []
@settings.define_settings(:section, :hooker => {:default => "yay", :desc => "boo", :call_hook => val, :hook => lambda { |v| hook_values << v }})
@settings.setting(:hooker).call_on_define.should == @settings.setting(:hooker).call_hook_on_define?
end
end
end
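# Values are interpolated on read: "$one"-style references are resolved
# recursively against other settings, as the examples below verify.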
it "should provide a mechanism for returning set values" do
@settings[:one] = "other"
@settings[:one].should == "other"
end
it "setting a value to nil causes it to return to its default" do
default_values = { :one => "skipped value" }
[:logdir, :confdir, :vardir].each do |key|
default_values[key] = 'default value'
end
@settings.define_settings :main, PuppetSpec::Settings::TEST_APP_DEFAULT_DEFINITIONS
@settings.initialize_app_defaults(default_values)
@settings[:one] = "value will disappear"
@settings[:one] = nil
@settings[:one].should == "ONE"
end
it "should interpolate default values for other parameters into returned parameter values" do
@settings[:one].should == "ONE"
@settings[:two].should == "ONE TWO"
@settings[:three].should == "ONE ONE TWO THREE"
end
it "should interpolate default values that themselves need to be interpolated" do
@settings[:four].should == "ONE TWO ONE ONE TWO THREE FOUR"
end
it "should provide a method for returning uninterpolated values" do
@settings[:two] = "$one tw0"
@settings.uninterpolated_value(:two).should == "$one tw0"
@settings.uninterpolated_value(:four).should == "$two $three FOUR"
end
it "should interpolate set values for other parameters into returned parameter values" do
@settings[:one] = "on3"
@settings[:two] = "$one tw0"
@settings[:three] = "$one $two thr33"
@settings[:four] = "$one $two $three f0ur"
@settings[:one].should == "on3"
@settings[:two].should == "on3 tw0"
@settings[:three].should == "on3 on3 tw0 thr33"
@settings[:four].should == "on3 on3 tw0 on3 on3 tw0 thr33 f0ur"
end
it "should not cache interpolated values such that stale information is returned" do
@settings[:two].should == "ONE TWO"
@settings[:one] = "one"
@settings[:two].should == "one TWO"
end
it "should not cache values such that information from one environment is returned for another environment" do
text = "[env1]\none = oneval\n[env2]\none = twoval\n"
@settings.stubs(:read_file).returns(text)
@settings.send(:parse_config_files)
@settings.value(:one, "env1").should == "oneval"
@settings.value(:one, "env2").should == "twoval"
end
it "should have a run_mode that defaults to user" do
@settings.preferred_run_mode.should == :user
end
it "interpolates a boolean false without raising an error" do
@settings.define_settings(:section,
:trip_wire => { :type => :boolean, :default => false, :desc => "a trip wire" },
:tripping => { :default => '$trip_wire', :desc => "once tripped if interpolated was false" })
@settings[:tripping].should == "false"
end
describe "setbycli" do
it "should generate a deprecation warning" do
@settings.handlearg("--one", "blah")
Puppet.expects(:deprecation_warning)
@settings.setting(:one).setbycli
end
it "should be true" do
@settings.handlearg("--one", "blah")
@settings.setting(:one).setbycli.should be_true
end
end
end
describe "when choosing which value to return" do
before do
@settings = Puppet::Settings.new
@settings.define_settings :section,
:config => { :type => :file, :default => "/my/file", :desc => "a" },
:one => { :default => "ONE", :desc => "a" },
:two => { :default => "TWO", :desc => "b" }
Puppet::FileSystem.stubs(:exist?).returns true
@settings.preferred_run_mode = :agent
end
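# Precedence checked below: CLI values beat the config file, the run-mode
# section ([agent]) beats [main], and an environment section beats both.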
it "should return default values if no values have been set" do
@settings[:one].should == "ONE"
end
it "should return values set on the cli before values set in the configuration file" do
text = "[main]\none = fileval\n"
@settings.stubs(:read_file).returns(text)
@settings.handlearg("--one", "clival")
@settings.send(:parse_config_files)
@settings[:one].should == "clival"
end
it "should return values set in the mode-specific section before values set in the main section" do
text = "[main]\none = mainval\n[agent]\none = modeval\n"
@settings.stubs(:read_file).returns(text)
@settings.send(:parse_config_files)
@settings[:one].should == "modeval"
end
it "should not return values outside of its search path" do
text = "[other]\none = oval\n"
file = "/some/file"
@settings.stubs(:read_file).returns(text)
@settings.send(:parse_config_files)
@settings[:one].should == "ONE"
end
it "should return values in a specified environment" do
text = "[env]\none = envval\n"
@settings.stubs(:read_file).returns(text)
@settings.send(:parse_config_files)
@settings.value(:one, "env").should == "envval"
end
it 'should use the current environment for $environment' do
@settings.define_settings :main, :myval => { :default => "$environment/foo", :desc => "mydocs" }
@settings.value(:myval, "myenv").should == "myenv/foo"
end
it "should interpolate found values using the current environment" do
text = "[main]\none = mainval\n[myname]\none = nameval\ntwo = $one/two\n"
@settings.stubs(:read_file).returns(text)
@settings.send(:parse_config_files)
@settings.value(:two, "myname").should == "nameval/two"
end
it "should return values in a specified environment before values in the main or name sections" do
text = "[env]\none = envval\n[main]\none = mainval\n[myname]\none = nameval\n"
@settings.stubs(:read_file).returns(text)
@settings.send(:parse_config_files)
@settings.value(:one, "env").should == "envval"
end
context "when interpolating a dynamic environments setting" do
let(:dynamic_manifestdir) { "manifestdir=/somewhere/$environment/manifests" }
let(:environment) { "environment=anenv" }
before(:each) do
@settings.define_settings :main,
:manifestdir => { :default => "/manifests", :desc => "manifestdir setting" },
:environment => { :default => "production", :desc => "environment setting" }
end
it "interpolates default environment when no environment specified" do
text = <<-EOF
[main]
#{dynamic_manifestdir}
EOF
@settings.stubs(:read_file).returns(text)
@settings.send(:parse_config_files)
expect(@settings.value(:manifestdir)).to eq("/somewhere/production/manifests")
end
it "interpolates the set environment when no environment specified" do
text = <<-EOF
[main]
#{dynamic_manifestdir}
#{environment}
EOF
@settings.stubs(:read_file).returns(text)
@settings.send(:parse_config_files)
expect(@settings.value(:manifestdir)).to eq("/somewhere/anenv/manifests")
end
end
end
describe "when locating config files" do
before do
@settings = Puppet::Settings.new
end
describe "when root" do
it "should look for the main config file default location config settings haven't been overridden'" do
Puppet.features.stubs(:root?).returns(true)
Puppet::FileSystem.expects(:exist?).with(main_config_file_default_location).returns(false)
Puppet::FileSystem.expects(:exist?).with(user_config_file_default_location).never
@settings.send(:parse_config_files)
end
end
describe "when not root" do
it "should look for user config file default location if config settings haven't been overridden'" do
Puppet.features.stubs(:root?).returns(false)
seq = sequence "load config files"
Puppet::FileSystem.expects(:exist?).with(user_config_file_default_location).returns(false).in_sequence(seq)
@settings.send(:parse_config_files)
end
end
end
describe "when parsing its configuration" do
before do
@settings = Puppet::Settings.new
@settings.stubs(:service_user_available?).returns true
@settings.stubs(:service_group_available?).returns true
@file = make_absolute("/some/file")
@userconfig = make_absolute("/test/userconfigfile")
@settings.define_settings :section, :user => { :default => "suser", :desc => "doc" }, :group => { :default => "sgroup", :desc => "doc" }
@settings.define_settings :section,
:config => { :type => :file, :default => @file, :desc => "eh" },
:one => { :default => "ONE", :desc => "a" },
:two => { :default => "$one TWO", :desc => "b" },
:three => { :default => "$one $two THREE", :desc => "c" }
@settings.stubs(:user_config_file).returns(@userconfig)
Puppet::FileSystem.stubs(:exist?).with(@file).returns true
Puppet::FileSystem.stubs(:exist?).with(@userconfig).returns false
end
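# These specs feed puppet.conf text straight through a stubbed read_file and
# cover type coercion (booleans, integers), per-setting file metadata given in
# trailing braces (e.g. "myfile = /other/file {owner = service, mode = 644}"),
# and hooks fired for values set in the file.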
it "should not ignore the report setting" do
@settings.define_settings :section, :report => { :default => "false", :desc => "a" }
# This is needed in order to make sure we pass on windows
myfile = File.expand_path(@file)
@settings[:config] = myfile
text = <<-CONF
[puppetd]
report=true
CONF
Puppet::FileSystem.expects(:exist?).with(myfile).returns(true)
@settings.expects(:read_file).returns(text)
@settings.send(:parse_config_files)
@settings[:report].should be_true
end
it "should use its current ':config' value for the file to parse" do
myfile = make_absolute("/my/file") # do not stub expand_path here, as this leads to a stack overflow, when mocha tries to use it
@settings[:config] = myfile
Puppet::FileSystem.expects(:exist?).with(myfile).returns(true)
Puppet::FileSystem.expects(:read).with(myfile).returns "[main]"
@settings.send(:parse_config_files)
end
it "should not try to parse non-existent files" do
Puppet::FileSystem.expects(:exist?).with(@file).returns false
File.expects(:read).with(@file).never
@settings.send(:parse_config_files)
end
it "should return values set in the configuration file" do
text = "[main]
one = fileval
"
@settings.expects(:read_file).returns(text)
@settings.send(:parse_config_files)
@settings[:one].should == "fileval"
end
#484 - this should probably be in the regression area
it "should not throw an exception on unknown parameters" do
text = "[main]\nnosuchparam = mval\n"
@settings.expects(:read_file).returns(text)
lambda { @settings.send(:parse_config_files) }.should_not raise_error
end
it "should convert booleans in the configuration file into Ruby booleans" do
text = "[main]
one = true
two = false
"
@settings.expects(:read_file).returns(text)
@settings.send(:parse_config_files)
@settings[:one].should == true
@settings[:two].should == false
end
it "should convert integers in the configuration file into Ruby Integers" do
text = "[main]
one = 65
"
@settings.expects(:read_file).returns(text)
@settings.send(:parse_config_files)
@settings[:one].should == 65
end
it "should support specifying all metadata (owner, group, mode) in the configuration file" do
@settings.define_settings :section, :myfile => { :type => :file, :default => make_absolute("/myfile"), :desc => "a" }
otherfile = make_absolute("/other/file")
@settings.parse_config(<<-CONF)
[main]
myfile = #{otherfile} {owner = service, group = service, mode = 644}
CONF
@settings[:myfile].should == otherfile
@settings.metadata(:myfile).should == {:owner => "suser", :group => "sgroup", :mode => "644"}
end
it "should support specifying a single piece of metadata (owner, group, or mode) in the configuration file" do
@settings.define_settings :section, :myfile => { :type => :file, :default => make_absolute("/myfile"), :desc => "a" }
otherfile = make_absolute("/other/file")
@settings.parse_config(<<-CONF)
[main]
myfile = #{otherfile} {owner = service}
CONF
@settings[:myfile].should == otherfile
@settings.metadata(:myfile).should == {:owner => "suser"}
end
it "should support loading metadata (owner, group, or mode) from a run_mode section in the configuration file" do
default_values = {}
PuppetSpec::Settings::TEST_APP_DEFAULT_DEFINITIONS.keys.each do |key|
default_values[key] = 'default value'
end
@settings.define_settings :main, PuppetSpec::Settings::TEST_APP_DEFAULT_DEFINITIONS
@settings.define_settings :master, :myfile => { :type => :file, :default => make_absolute("/myfile"), :desc => "a" }
otherfile = make_absolute("/other/file")
text = "[master]
myfile = #{otherfile} {mode = 664}
"
@settings.expects(:read_file).returns(text)
# will start initialization as user
@settings.preferred_run_mode.should == :user
@settings.send(:parse_config_files)
# change app run_mode to master
@settings.initialize_app_defaults(default_values.merge(:run_mode => :master))
@settings.preferred_run_mode.should == :master
# initializing the app should have reloaded the metadata based on run_mode
@settings[:myfile].should == otherfile
@settings.metadata(:myfile).should == {:mode => "664"}
end
it "does not use the metadata from the same setting in a different section" do
default_values = {}
PuppetSpec::Settings::TEST_APP_DEFAULT_DEFINITIONS.keys.each do |key|
default_values[key] = 'default value'
end
file = make_absolute("/file")
default_mode = "0600"
@settings.define_settings :main, PuppetSpec::Settings::TEST_APP_DEFAULT_DEFINITIONS
@settings.define_settings :master, :myfile => { :type => :file, :default => file, :desc => "a", :mode => default_mode }
text = "[master]
myfile = #{file}/foo
[agent]
myfile = #{file} {mode = 664}
"
@settings.expects(:read_file).returns(text)
# will start initialization as user
@settings.preferred_run_mode.should == :user
@settings.send(:parse_config_files)
# change app run_mode to master
@settings.initialize_app_defaults(default_values.merge(:run_mode => :master))
@settings.preferred_run_mode.should == :master
# initializing the app should have reloaded the metadata based on run_mode
@settings[:myfile].should == "#{file}/foo"
@settings.metadata(:myfile).should == { :mode => default_mode }
end
it "should call hooks associated with values set in the configuration file" do
values = []
@settings.define_settings :section, :mysetting => {:default => "defval", :desc => "a", :hook => proc { |v| values << v }}
text = "[main]
mysetting = setval
"
@settings.expects(:read_file).returns(text)
@settings.send(:parse_config_files)
values.should == ["setval"]
end
it "should not call the same hook for values set multiple times in the configuration file" do
values = []
@settings.define_settings :section, :mysetting => {:default => "defval", :desc => "a", :hook => proc { |v| values << v }}
text = "[user]
mysetting = setval
[main]
mysetting = other
"
@settings.expects(:read_file).returns(text)
@settings.send(:parse_config_files)
values.should == ["setval"]
end
it "should pass the environment-specific value to the hook when one is available" do
values = []
@settings.define_settings :section, :mysetting => {:default => "defval", :desc => "a", :hook => proc { |v| values << v }}
@settings.define_settings :section, :environment => { :default => "yay", :desc => "a" }
@settings.define_settings :section, :environments => { :default => "yay,foo", :desc => "a" }
text = "[main]
mysetting = setval
[yay]
mysetting = other
"
@settings.expects(:read_file).returns(text)
@settings.send(:parse_config_files)
values.should == ["other"]
end
it "should pass the interpolated value to the hook when one is available" do
values = []
@settings.define_settings :section, :base => {:default => "yay", :desc => "a", :hook => proc { |v| values << v }}
@settings.define_settings :section, :mysetting => {:default => "defval", :desc => "a", :hook => proc { |v| values << v }}
text = "[main]
mysetting = $base/setval
"
@settings.expects(:read_file).returns(text)
@settings.send(:parse_config_files)
values.should == ["yay/setval"]
end
it "should allow hooks invoked at parse time to be deferred" do
hook_invoked = false
@settings.define_settings :section, :deferred => {:desc => '',
:hook => proc { |v| hook_invoked = true },
:call_hook => :on_initialize_and_write, }
@settings.define_settings(:main,
:logdir => { :type => :directory, :default => nil, :desc => "logdir" },
:confdir => { :type => :directory, :default => nil, :desc => "confdir" },
:vardir => { :type => :directory, :default => nil, :desc => "vardir" })
text = <<-EOD
[main]
deferred=$confdir/goose
EOD
@settings.stubs(:read_file).returns(text)
@settings.initialize_global_settings
hook_invoked.should be_false
@settings.initialize_app_defaults(:logdir => '/path/to/logdir', :confdir => '/path/to/confdir', :vardir => '/path/to/vardir')
hook_invoked.should be_true
@settings[:deferred].should eq(File.expand_path('/path/to/confdir/goose'))
end
it "does not require the value for a setting without a hook to resolve during global setup" do
@settings.define_settings :section, :can_cause_problems => {:desc => '' }
@settings.define_settings(:main,
:logdir => { :type => :directory, :default => nil, :desc => "logdir" },
:confdir => { :type => :directory, :default => nil, :desc => "confdir" },
:vardir => { :type => :directory, :default => nil, :desc => "vardir" })
text = <<-EOD
[main]
can_cause_problems=$confdir/goose
EOD
@settings.stubs(:read_file).returns(text)
@settings.initialize_global_settings
@settings.initialize_app_defaults(:logdir => '/path/to/logdir', :confdir => '/path/to/confdir', :vardir => '/path/to/vardir')
@settings[:can_cause_problems].should eq(File.expand_path('/path/to/confdir/goose'))
end
it "should allow empty values" do
@settings.define_settings :section, :myarg => { :default => "myfile", :desc => "a" }
text = "[main]
myarg =
"
@settings.stubs(:read_file).returns(text)
@settings.send(:parse_config_files)
@settings[:myarg].should == ""
end
describe "deprecations" do
let(:settings) { Puppet::Settings.new }
let(:app_defaults) {
{
:logdir => "/dev/null",
:confdir => "/dev/null",
:vardir => "/dev/null",
}
}
def assert_accessing_setting_is_deprecated(settings, setting)
Puppet.expects(:deprecation_warning).with("Accessing '#{setting}' as a setting is deprecated. See http://links.puppetlabs.com/env-settings-deprecations")
Puppet.expects(:deprecation_warning).with("Modifying '#{setting}' as a setting is deprecated. See http://links.puppetlabs.com/env-settings-deprecations")
settings[setting.intern] = apath = File.expand_path('foo')
expect(settings[setting.intern]).to eq(apath)
end
before(:each) do
settings.define_settings(:main, {
:logdir => { :default => 'a', :desc => 'a' },
:confdir => { :default => 'b', :desc => 'b' },
:vardir => { :default => 'c', :desc => 'c' },
})
end
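# :deprecated => :completely warns wherever the setting is set; :allowed_on_commandline
# warns for puppet.conf and in-code assignment but stays silent on the command line.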
context "complete" do
let(:completely_deprecated_settings) do
settings.define_settings(:main, {
:manifestdir => {
:default => 'foo',
:desc => 'a deprecated setting',
:deprecated => :completely,
}
})
settings
end
it "warns when set in puppet.conf" do
Puppet.expects(:deprecation_warning).with(regexp_matches(/manifestdir is deprecated\./), 'setting-manifestdir')
completely_deprecated_settings.parse_config(<<-CONF)
manifestdir='should warn'
CONF
completely_deprecated_settings.initialize_app_defaults(app_defaults)
end
it "warns when set on the commandline" do
Puppet.expects(:deprecation_warning).with(regexp_matches(/manifestdir is deprecated\./), 'setting-manifestdir')
args = ["--manifestdir", "/some/value"]
completely_deprecated_settings.send(:parse_global_options, args)
completely_deprecated_settings.initialize_app_defaults(app_defaults)
end
it "warns when set in code" do
assert_accessing_setting_is_deprecated(completely_deprecated_settings, 'manifestdir')
end
end
context "partial" do
let(:partially_deprecated_settings) do
settings.define_settings(:main, {
:modulepath => {
:default => 'foo',
:desc => 'a partially deprecated setting',
:deprecated => :allowed_on_commandline,
}
})
settings
end
it "warns for a deprecated setting allowed on the command line set in puppet.conf" do
Puppet.expects(:deprecation_warning).with(regexp_matches(/modulepath is deprecated in puppet\.conf/), 'puppet-conf-setting-modulepath')
partially_deprecated_settings.parse_config(<<-CONF)
modulepath='should warn'
CONF
partially_deprecated_settings.initialize_app_defaults(app_defaults)
end
it "does not warn when manifest is set on command line" do
Puppet.expects(:deprecation_warning).never
args = ["--modulepath", "/some/value"]
partially_deprecated_settings.send(:parse_global_options, args)
partially_deprecated_settings.initialize_app_defaults(app_defaults)
end
it "warns when set in code" do
assert_accessing_setting_is_deprecated(partially_deprecated_settings, 'modulepath')
end
end
end
end
describe "when there are multiple config files" do
let(:main_config_text) { "[main]\none = main\ntwo = main2" }
let(:user_config_text) { "[main]\none = user\n" }
let(:seq) { sequence "config_file_sequence" }
before :each do
@settings = Puppet::Settings.new
@settings.define_settings(:section,
{ :confdir => { :default => nil, :desc => "Conf dir" },
:config => { :default => "$confdir/puppet.conf", :desc => "Config" },
:one => { :default => "ONE", :desc => "a" },
:two => { :default => "TWO", :desc => "b" }, })
end
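# Which file wins: root reads the main (system) puppet.conf, non-root reads the
# per-user config, and explicitly setting confdir makes a non-root process read
# the main config instead.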
context "running non-root without explicit config file" do
before :each do
Puppet.features.stubs(:root?).returns(false)
Puppet::FileSystem.expects(:exist?).
with(user_config_file_default_location).
returns(true).in_sequence(seq)
@settings.expects(:read_file).
with(user_config_file_default_location).
returns(user_config_text).in_sequence(seq)
end
it "should return values from the user config file" do
@settings.send(:parse_config_files)
@settings[:one].should == "user"
end
it "should not return values from the main config file" do
@settings.send(:parse_config_files)
@settings[:two].should == "TWO"
end
end
context "running as root without explicit config file" do
before :each do
Puppet.features.stubs(:root?).returns(true)
Puppet::FileSystem.expects(:exist?).
with(main_config_file_default_location).
returns(true).in_sequence(seq)
@settings.expects(:read_file).
with(main_config_file_default_location).
returns(main_config_text).in_sequence(seq)
end
it "should return values from the main config file" do
@settings.send(:parse_config_files)
@settings[:one].should == "main"
end
it "should not return values from the user config file" do
@settings.send(:parse_config_files)
@settings[:two].should == "main2"
end
end
context "running with an explicit config file as a user (e.g. Apache + Passenger)" do
before :each do
Puppet.features.stubs(:root?).returns(false)
@settings[:confdir] = File.dirname(main_config_file_default_location)
Puppet::FileSystem.expects(:exist?).
with(main_config_file_default_location).
returns(true).in_sequence(seq)
@settings.expects(:read_file).
with(main_config_file_default_location).
returns(main_config_text).in_sequence(seq)
end
it "should return values from the main config file" do
@settings.send(:parse_config_files)
@settings[:one].should == "main"
end
it "should not return values from the user config file" do
@settings.send(:parse_config_files)
@settings[:two].should == "main2"
end
end
end
describe "when reparsing its configuration" do
before do
@file = make_absolute("/test/file")
@userconfig = make_absolute("/test/userconfigfile")
@settings = Puppet::Settings.new
@settings.define_settings :section,
:config => { :type => :file, :default => @file, :desc => "a" },
:one => { :default => "ONE", :desc => "a" },
:two => { :default => "$one TWO", :desc => "b" },
:three => { :default => "$one $two THREE", :desc => "c" }
Puppet::FileSystem.stubs(:exist?).with(@file).returns true
Puppet::FileSystem.stubs(:exist?).with(@userconfig).returns false
@settings.stubs(:user_config_file).returns(@userconfig)
end
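# reparse_config_files wraps the config in a Puppet::Util::WatchedFile and only
# re-parses when changed? reports true; CLI-set values survive a reparse.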
it "does not create the WatchedFile instance and should not parse if the file does not exist" do
Puppet::FileSystem.expects(:exist?).with(@file).returns false
Puppet::Util::WatchedFile.expects(:new).never
@settings.expects(:parse_config_files).never
@settings.reparse_config_files
end
context "and watched file exists" do
before do
@watched_file = Puppet::Util::WatchedFile.new(@file)
Puppet::Util::WatchedFile.expects(:new).with(@file).returns @watched_file
end
it "uses a WatchedFile instance to determine if the file has changed" do
@watched_file.expects(:changed?)
@settings.reparse_config_files
end
it "does not reparse if the file has not changed" do
@watched_file.expects(:changed?).returns false
@settings.expects(:parse_config_files).never
@settings.reparse_config_files
end
it "reparses if the file has changed" do
@watched_file.expects(:changed?).returns true
@settings.expects(:parse_config_files)
@settings.reparse_config_files
end
it "replaces in-memory values with on-file values" do
@watched_file.stubs(:changed?).returns(true)
@settings[:one] = "init"
# Now replace the value
text = "[main]\none = disk-replace\n"
@settings.stubs(:read_file).returns(text)
@settings.reparse_config_files
@settings[:one].should == "disk-replace"
end
end
it "should retain parameters set by cli when configuration files are reparsed" do
@settings.handlearg("--one", "clival")
text = "[main]\none = on-disk\n"
@settings.stubs(:read_file).returns(text)
@settings.send(:parse_config_files)
@settings[:one].should == "clival"
end
it "should remove in-memory values that are no longer set in the file" do
# Init the value
text = "[main]\none = disk-init\n"
@settings.expects(:read_file).returns(text)
@settings.send(:parse_config_files)
@settings[:one].should == "disk-init"
# Now replace the value
text = "[main]\ntwo = disk-replace\n"
@settings.expects(:read_file).returns(text)
@settings.send(:parse_config_files)
# The originally-overridden value should be replaced with the default
@settings[:one].should == "ONE"
# and we should now have the new value in memory
@settings[:two].should == "disk-replace"
end
it "should retain in-memory values if the file has a syntax error" do
# Init the value
text = "[main]\none = initial-value\n"
@settings.expects(:read_file).with(@file).returns(text)
@settings.send(:parse_config_files)
@settings[:one].should == "initial-value"
# Now replace the value with something bogus
text = "[main]\nkenny = killed-by-what-follows\n1 is 2, blah blah florp\n"
@settings.expects(:read_file).with(@file).returns(text)
@settings.send(:parse_config_files)
# The originally-overridden value should not be replaced with the default
@settings[:one].should == "initial-value"
# and we should not have the new value in memory
@settings[:kenny].should be_nil
end
end
it "should provide a method for creating a catalog of resources from its configuration" do
Puppet::Settings.new.should respond_to(:to_catalog)
end
describe "when creating a catalog" do
before do
@settings = Puppet::Settings.new
@settings.stubs(:service_user_available?).returns true
@prefix = Puppet.features.posix? ? "" : "C:"
end
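# Per the specs below, to_catalog turns each :file/:directory setting into a
# File resource, optionally restricted to the named sections.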
it "should add all file resources to the catalog if no sections have been specified" do
@settings.define_settings :main,
:maindir => { :type => :directory, :default => @prefix+"/maindir", :desc => "a"},
:seconddir => { :type => :directory, :default => @prefix+"/seconddir", :desc => "a"}
@settings.define_settings :other,
:otherdir => { :type => :directory, :default => @prefix+"/otherdir", :desc => "a" }
catalog = @settings.to_catalog
[@prefix+"/maindir", @prefix+"/seconddir", @prefix+"/otherdir"].each do |path|
catalog.resource(:file, path).should be_instance_of(Puppet::Resource)
end
end
it "should add only files in the specified sections if section names are provided" do
@settings.define_settings :main, :maindir => { :type => :directory, :default => @prefix+"/maindir", :desc => "a" }
@settings.define_settings :other, :otherdir => { :type => :directory, :default => @prefix+"/otherdir", :desc => "a" }
catalog = @settings.to_catalog(:main)
catalog.resource(:file, @prefix+"/otherdir").should be_nil
catalog.resource(:file, @prefix+"/maindir").should be_instance_of(Puppet::Resource)
end
it "should not try to add the same file twice" do
@settings.define_settings :main, :maindir => { :type => :directory, :default => @prefix+"/maindir", :desc => "a" }
@settings.define_settings :other, :otherdir => { :type => :directory, :default => @prefix+"/maindir", :desc => "a" }
lambda { @settings.to_catalog }.should_not raise_error
end
it "should ignore files whose :to_resource method returns nil" do
@settings.define_settings :main, :maindir => { :type => :directory, :default => @prefix+"/maindir", :desc => "a" }
@settings.setting(:maindir).expects(:to_resource).returns nil
Puppet::Resource::Catalog.any_instance.expects(:add_resource).never
@settings.to_catalog
end
describe "on Microsoft Windows" do
before :each do
Puppet.features.stubs(:root?).returns true
Puppet.features.stubs(:microsoft_windows?).returns true
@settings.define_settings :foo,
:mkusers => { :type => :boolean, :default => true, :desc => "e" },
:user => { :default => "suser", :desc => "doc" },
:group => { :default => "sgroup", :desc => "doc" }
@settings.define_settings :other,
:otherdir => { :type => :directory, :default => "/otherdir", :desc => "a", :owner => "service", :group => "service"}
@catalog = @settings.to_catalog
end
it "it should not add users and groups to the catalog" do
@catalog.resource(:user, "suser").should be_nil
@catalog.resource(:group, "sgroup").should be_nil
end
end
+ describe "adding default directory environment to the catalog" do
+ let(:tmpenv) { tmpdir("envs") }
+ let(:default_path) { "#{tmpenv}/environments" }
+ before(:each) do
+ @settings.define_settings :main,
+ :environment => { :default => "production", :desc => "env"},
+ :environmentpath => { :type => :path, :default => default_path, :desc => "envpath"}
+ end
+
+ it "adds if environmentpath exists" do
+ envpath = "#{tmpenv}/custom_envpath"
+ @settings[:environmentpath] = envpath
+ Dir.mkdir(envpath)
+ catalog = @settings.to_catalog
+ expect(catalog.resource_keys).to include(["File", "#{envpath}/production"])
+ end
+
+ it "adds the first directory of environmentpath" do
+ envdir = "#{tmpenv}/custom_envpath"
+ envpath = "#{envdir}#{File::PATH_SEPARATOR}/some/other/envdir"
+ @settings[:environmentpath] = envpath
+ Dir.mkdir(envdir)
+ catalog = @settings.to_catalog
+ expect(catalog.resource_keys).to include(["File", "#{envdir}/production"])
+ end
+
+ it "handles a non-existent environmentpath" do
+ catalog = @settings.to_catalog
+ expect(catalog.resource_keys).to be_empty
+ end
+
+ it "handles a default environmentpath" do
+ Dir.mkdir(default_path)
+ catalog = @settings.to_catalog
+ expect(catalog.resource_keys).to include(["File", "#{default_path}/production"])
+ end
+ end
+
describe "when adding users and groups to the catalog" do
before do
Puppet.features.stubs(:root?).returns true
Puppet.features.stubs(:microsoft_windows?).returns false
@settings.define_settings :foo,
:mkusers => { :type => :boolean, :default => true, :desc => "e" },
:user => { :default => "suser", :desc => "doc" },
:group => { :default => "sgroup", :desc => "doc" }
@settings.define_settings :other, :otherdir => {:type => :directory, :default => "/otherdir", :desc => "a", :owner => "service", :group => "service"}
@catalog = @settings.to_catalog
end
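# User and group resources are only added when a :mkusers setting exists and is
# true and the process is running as root (and not on Windows, per the specs above).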
it "should add each specified user and group to the catalog if :mkusers is a valid setting, is enabled, and we're running as root" do
@catalog.resource(:user, "suser").should be_instance_of(Puppet::Resource)
@catalog.resource(:group, "sgroup").should be_instance_of(Puppet::Resource)
end
it "should only add users and groups to the catalog from specified sections" do
@settings.define_settings :yay, :yaydir => { :type => :directory, :default => "/yaydir", :desc => "a", :owner => "service", :group => "service"}
catalog = @settings.to_catalog(:other)
catalog.resource(:user, "jane").should be_nil
catalog.resource(:group, "billy").should be_nil
end
it "should not add users or groups to the catalog if :mkusers not running as root" do
Puppet.features.stubs(:root?).returns false
catalog = @settings.to_catalog
catalog.resource(:user, "suser").should be_nil
catalog.resource(:group, "sgroup").should be_nil
end
it "should not add users or groups to the catalog if :mkusers is not a valid setting" do
Puppet.features.stubs(:root?).returns true
settings = Puppet::Settings.new
settings.define_settings :other, :otherdir => {:type => :directory, :default => "/otherdir", :desc => "a", :owner => "service", :group => "service"}
catalog = settings.to_catalog
catalog.resource(:user, "suser").should be_nil
catalog.resource(:group, "sgroup").should be_nil
end
it "should not add users or groups to the catalog if :mkusers is a valid setting but is disabled" do
@settings[:mkusers] = false
catalog = @settings.to_catalog
catalog.resource(:user, "suser").should be_nil
catalog.resource(:group, "sgroup").should be_nil
end
it "should not try to add users or groups to the catalog twice" do
@settings.define_settings :yay, :yaydir => {:type => :directory, :default => "/yaydir", :desc => "a", :owner => "service", :group => "service"}
# This would fail if users/groups were added twice
lambda { @settings.to_catalog }.should_not raise_error
end
it "should set :ensure to :present on each created user and group" do
@catalog.resource(:user, "suser")[:ensure].should == :present
@catalog.resource(:group, "sgroup")[:ensure].should == :present
end
it "should set each created user's :gid to the service group" do
@settings.to_catalog.resource(:user, "suser")[:gid].should == "sgroup"
end
it "should not attempt to manage the root user" do
Puppet.features.stubs(:root?).returns true
@settings.define_settings :foo, :foodir => {:type => :directory, :default => "/foodir", :desc => "a", :owner => "root", :group => "service"}
@settings.to_catalog.resource(:user, "root").should be_nil
end
end
end
it "should be able to be converted to a manifest" do
Puppet::Settings.new.should respond_to(:to_manifest)
end
describe "when being converted to a manifest" do
it "should produce a string with the code for each resource joined by two carriage returns" do
@settings = Puppet::Settings.new
@settings.define_settings :main,
:maindir => { :type => :directory, :default => "/maindir", :desc => "a"},
:seconddir => { :type => :directory, :default => "/seconddir", :desc => "a"}
main = stub 'main_resource', :ref => "File[/maindir]"
main.expects(:to_manifest).returns "maindir"
second = stub 'second_resource', :ref => "File[/seconddir]"
second.expects(:to_manifest).returns "seconddir"
@settings.setting(:maindir).expects(:to_resource).returns main
@settings.setting(:seconddir).expects(:to_resource).returns second
@settings.to_manifest.split("\n\n").sort.should == %w{maindir seconddir}
end
end
describe "when using sections of the configuration to manage the local host" do
before do
@settings = Puppet::Settings.new
@settings.stubs(:service_user_available?).returns true
@settings.stubs(:service_group_available?).returns true
@settings.define_settings :main, :noop => { :default => false, :desc => "", :type => :boolean }
@settings.define_settings :main,
:maindir => { :type => :directory, :default => make_absolute("/maindir"), :desc => "a" },
:seconddir => { :type => :directory, :default => make_absolute("/seconddir"), :desc => "a"}
@settings.define_settings :main, :user => { :default => "suser", :desc => "doc" }, :group => { :default => "sgroup", :desc => "doc" }
- @settings.define_settings :other, :otherdir => {:type => :directory, :default => make_absolute("/otherdir"), :desc => "a", :owner => "service", :group => "service", :mode => 0755}
+ @settings.define_settings :other, :otherdir => {:type => :directory, :default => make_absolute("/otherdir"), :desc => "a", :owner => "service", :group => "service", :mode => '0755'}
@settings.define_settings :third, :thirddir => { :type => :directory, :default => make_absolute("/thirddir"), :desc => "b"}
- @settings.define_settings :files, :myfile => {:type => :file, :default => make_absolute("/myfile"), :desc => "a", :mode => 0755}
+ @settings.define_settings :files, :myfile => {:type => :file, :default => make_absolute("/myfile"), :desc => "a", :mode => '0755'}
end
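# use() builds a catalog from the named sections, converts it to a RAL catalog,
# marks it as not a host catalog, and applies it; already-used sections are skipped.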
it "should provide a method that creates directories with the correct modes" do
Puppet::Util::SUIDManager.expects(:asuser).with("suser", "sgroup").yields
- Dir.expects(:mkdir).with(make_absolute("/otherdir"), 0755)
+ Dir.expects(:mkdir).with(make_absolute("/otherdir"), '0755')
@settings.mkdir(:otherdir)
end
it "should create a catalog with the specified sections" do
@settings.expects(:to_catalog).with(:main, :other).returns Puppet::Resource::Catalog.new("foo")
@settings.use(:main, :other)
end
it "should canonicalize the sections" do
@settings.expects(:to_catalog).with(:main, :other).returns Puppet::Resource::Catalog.new("foo")
@settings.use("main", "other")
end
it "should ignore sections that have already been used" do
@settings.expects(:to_catalog).with(:main).returns Puppet::Resource::Catalog.new("foo")
@settings.use(:main)
@settings.expects(:to_catalog).with(:other).returns Puppet::Resource::Catalog.new("foo")
@settings.use(:main, :other)
end
it "should convert the created catalog to a RAL catalog" do
@catalog = Puppet::Resource::Catalog.new("foo")
@settings.expects(:to_catalog).with(:main).returns @catalog
@catalog.expects(:to_ral).returns @catalog
@settings.use(:main)
end
it "should specify that it is not managing a host catalog" do
catalog = Puppet::Resource::Catalog.new("foo")
catalog.expects(:apply)
@settings.expects(:to_catalog).returns catalog
catalog.stubs(:to_ral).returns catalog
catalog.expects(:host_config=).with false
@settings.use(:main)
end
it "should support a method for re-using all currently used sections" do
@settings.expects(:to_catalog).with(:main, :third).times(2).returns Puppet::Resource::Catalog.new("foo")
@settings.use(:main, :third)
@settings.reuse
end
it "should fail with an appropriate message if any resources fail" do
@catalog = Puppet::Resource::Catalog.new("foo")
@catalog.stubs(:to_ral).returns @catalog
@settings.expects(:to_catalog).returns @catalog
@trans = mock("transaction")
@catalog.expects(:apply).yields(@trans)
@trans.expects(:any_failed?).returns(true)
resource = Puppet::Type.type(:notify).new(:title => 'failed')
status = Puppet::Resource::Status.new(resource)
event = Puppet::Transaction::Event.new(
:name => 'failure',
:status => 'failure',
:message => 'My failure')
status.add_event(event)
report = Puppet::Transaction::Report.new('apply')
report.add_resource_status(status)
@trans.expects(:report).returns report
@settings.expects(:raise).with(includes("My failure"))
@settings.use(:whatever)
end
end
describe "when dealing with printing configs" do
before do
@settings = Puppet::Settings.new
#these are the magic default values
@settings.stubs(:value).with(:configprint).returns("")
@settings.stubs(:value).with(:genconfig).returns(false)
@settings.stubs(:value).with(:genmanifest).returns(false)
@settings.stubs(:value).with(:environment).returns(nil)
end
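# print_configs? is driven by :configprint, :genconfig and :genmanifest; print_configs
# emits "name = value" lines for :configprint and full config/manifest dumps otherwise.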
describe "when checking print_config?" do
it "should return false when the :configprint, :genconfig and :genmanifest are not set" do
@settings.print_configs?.should be_false
end
it "should return true when :configprint has a value" do
@settings.stubs(:value).with(:configprint).returns("something")
@settings.print_configs?.should be_true
end
it "should return true when :genconfig has a value" do
@settings.stubs(:value).with(:genconfig).returns(true)
@settings.print_configs?.should be_true
end
it "should return true when :genmanifest has a value" do
@settings.stubs(:value).with(:genmanifest).returns(true)
@settings.print_configs?.should be_true
end
end
describe "when printing configs" do
describe "when :configprint has a value" do
it "should call print_config_options" do
@settings.stubs(:value).with(:configprint).returns("something")
@settings.expects(:print_config_options)
@settings.print_configs
end
it "should get the value of the option using the environment" do
@settings.stubs(:value).with(:configprint).returns("something")
@settings.stubs(:include?).with("something").returns(true)
@settings.expects(:value).with(:environment).returns("env")
@settings.expects(:value).with("something", "env").returns("foo")
@settings.stubs(:puts).with("foo")
@settings.print_configs
end
it "should print the value of the option" do
@settings.stubs(:value).with(:configprint).returns("something")
@settings.stubs(:include?).with("something").returns(true)
@settings.stubs(:value).with("something", nil).returns("foo")
@settings.expects(:puts).with("foo")
@settings.print_configs
end
it "should print the value pairs if there are multiple options" do
@settings.stubs(:value).with(:configprint).returns("bar,baz")
@settings.stubs(:include?).with("bar").returns(true)
@settings.stubs(:include?).with("baz").returns(true)
@settings.stubs(:value).with("bar", nil).returns("foo")
@settings.stubs(:value).with("baz", nil).returns("fud")
@settings.expects(:puts).with("bar = foo")
@settings.expects(:puts).with("baz = fud")
@settings.print_configs
end
it "should return true after printing" do
@settings.stubs(:value).with(:configprint).returns("something")
@settings.stubs(:include?).with("something").returns(true)
@settings.stubs(:value).with("something", nil).returns("foo")
@settings.stubs(:puts).with("foo")
@settings.print_configs.should be_true
end
it "should return false if a config param is not found" do
@settings.stubs :puts
@settings.stubs(:value).with(:configprint).returns("something")
@settings.stubs(:include?).with("something").returns(false)
@settings.print_configs.should be_false
end
end
describe "when genconfig is true" do
before do
@settings.stubs :puts
end
it "should call to_config" do
@settings.stubs(:value).with(:genconfig).returns(true)
@settings.expects(:to_config)
@settings.print_configs
end
it "should return true from print_configs" do
@settings.stubs(:value).with(:genconfig).returns(true)
@settings.stubs(:to_config)
@settings.print_configs.should be_true
end
end
describe "when genmanifest is true" do
before do
@settings.stubs :puts
end
it "should call to_config" do
@settings.stubs(:value).with(:genmanifest).returns(true)
@settings.expects(:to_manifest)
@settings.print_configs
end
it "should return true from print_configs" do
@settings.stubs(:value).with(:genmanifest).returns(true)
@settings.stubs(:to_manifest)
@settings.print_configs.should be_true
end
end
end
end
describe "when determining if the service user is available" do
let(:settings) do
settings = Puppet::Settings.new
settings.define_settings :main, :user => { :default => nil, :desc => "doc" }
settings
end
def a_user_type_for(username)
user = mock 'user'
Puppet::Type.type(:user).expects(:new).with { |args| args[:name] == username }.returns user
user
end
it "should return false if there is no user setting" do
settings.should_not be_service_user_available
end
it "should return false if the user provider says the user is missing" do
settings[:user] = "foo"
a_user_type_for("foo").expects(:exists?).returns false
settings.should_not be_service_user_available
end
it "should return true if the user provider says the user is present" do
settings[:user] = "foo"
a_user_type_for("foo").expects(:exists?).returns true
settings.should be_service_user_available
end
it "caches the result of determining if the user is present" do
settings[:user] = "foo"
a_user_type_for("foo").expects(:exists?).returns true
settings.should be_service_user_available
settings.should be_service_user_available
end
end
describe "when determining if the service group is available" do
let(:settings) do
settings = Puppet::Settings.new
settings.define_settings :main, :group => { :default => nil, :desc => "doc" }
settings
end
def a_group_type_for(groupname)
group = mock 'group'
Puppet::Type.type(:group).expects(:new).with { |args| args[:name] == groupname }.returns group
group
end
it "should return false if there is no group setting" do
settings.should_not be_service_group_available
end
it "should return false if the group provider says the group is missing" do
settings[:group] = "foo"
a_group_type_for("foo").expects(:exists?).returns false
settings.should_not be_service_group_available
end
it "should return true if the group provider says the group is present" do
settings[:group] = "foo"
a_group_type_for("foo").expects(:exists?).returns true
settings.should be_service_group_available
end
it "caches the result of determining if the group is present" do
settings[:group] = "foo"
a_group_type_for("foo").expects(:exists?).returns true
settings.should be_service_group_available
settings.should be_service_group_available
end
end
describe "when dealing with command-line options" do
let(:settings) { Puppet::Settings.new }
it "should get options from Puppet.settings.optparse_addargs" do
settings.expects(:optparse_addargs).returns([])
settings.send(:parse_global_options, [])
end
it "should add options to OptionParser" do
settings.stubs(:optparse_addargs).returns( [["--option","-o", "Funny Option", :NONE]])
settings.expects(:handlearg).with("--option", true)
settings.send(:parse_global_options, ["--option"])
end
it "should not die if it sees an unrecognized option, because the app/face may handle it later" do
expect { settings.send(:parse_global_options, ["--topuppet", "value"]) } .to_not raise_error
end
it "should not pass an unrecognized option to handleargs" do
settings.expects(:handlearg).with("--topuppet", "value").never
expect { settings.send(:parse_global_options, ["--topuppet", "value"]) } .to_not raise_error
end
it "should pass valid puppet settings options to handlearg even if they appear after an unrecognized option" do
settings.stubs(:optparse_addargs).returns( [["--option","-o", "Funny Option", :NONE]])
settings.expects(:handlearg).with("--option", true)
settings.send(:parse_global_options, ["--invalidoption", "--option"])
end
it "should transform boolean option to normal form" do
Puppet::Settings.clean_opt("--[no-]option", true).should == ["--option", true]
end
it "should transform boolean option to no- form" do
Puppet::Settings.clean_opt("--[no-]option", false).should == ["--no-option", false]
end
it "should set preferred run mode from --run_mode <foo> string without error" do
args = ["--run_mode", "master"]
settings.expects(:handlearg).with("--run_mode", "master").never
expect { settings.send(:parse_global_options, args) } .to_not raise_error
Puppet.settings.preferred_run_mode.should == :master
args.empty?.should == true
end
it "should set preferred run mode from --run_mode=<foo> string without error" do
args = ["--run_mode=master"]
settings.expects(:handlearg).with("--run_mode", "master").never
expect { settings.send(:parse_global_options, args) } .to_not raise_error
Puppet.settings.preferred_run_mode.should == :master
args.empty?.should == true
end
end
describe "default_certname" do
describe "using hostname and domainname" do
before :each do
Puppet::Settings.stubs(:hostname_fact).returns("testhostname")
Puppet::Settings.stubs(:domain_fact).returns("domain.test.")
end
it "should use both to generate fqdn" do
Puppet::Settings.default_certname.should =~ /testhostname\.domain\.test/
end
it "should remove trailing dots from fqdn" do
Puppet::Settings.default_certname.should == 'testhostname.domain.test'
end
end
describe "using just hostname" do
before :each do
Puppet::Settings.stubs(:hostname_fact).returns("testhostname")
Puppet::Settings.stubs(:domain_fact).returns("")
end
it "should use only hostname to generate fqdn" do
Puppet::Settings.default_certname.should == "testhostname"
end
it "should removing trailing dots from fqdn" do
Puppet::Settings.default_certname.should == "testhostname"
end
end
end
end
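The default_certname specs above pin down how the certname is assembled from facts: the hostname joined with the domain when one is present, with any trailing dot stripped, and the bare hostname otherwise. A small sketch under those assumptions (hostname and domain arguments stand in for the real fact lookups):

````
# Sketch only: fqdn assembly as exercised by the default_certname specs.
def default_certname(hostname, domain)
  fqdn = domain.nil? || domain.empty? ? hostname : "#{hostname}.#{domain}"
  fqdn.chomp('.') # "testhostname.domain.test." => "testhostname.domain.test"
end

default_certname("testhostname", "domain.test.") # => "testhostname.domain.test"
default_certname("testhostname", "")             # => "testhostname"
````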
diff --git a/spec/unit/ssl/certificate_authority_spec.rb b/spec/unit/ssl/certificate_authority_spec.rb
index ef5a86862..2881b0a1e 100755
--- a/spec/unit/ssl/certificate_authority_spec.rb
+++ b/spec/unit/ssl/certificate_authority_spec.rb
@@ -1,1107 +1,1130 @@
#! /usr/bin/env ruby
# encoding: ASCII-8BIT
require 'spec_helper'
require 'puppet/ssl/certificate_authority'
describe Puppet::SSL::CertificateAuthority do
after do
Puppet::SSL::CertificateAuthority.instance_variable_set(:@singleton_instance, nil)
- Puppet.settings.clearused
end
def stub_ca_host
@key = mock 'key'
@key.stubs(:content).returns "cakey"
@cacert = mock 'certificate'
@cacert.stubs(:content).returns "cacertificate"
@host = stub 'ssl_host', :key => @key, :certificate => @cacert, :name => Puppet::SSL::Host.ca_name
end
it "should have a class method for returning a singleton instance" do
Puppet::SSL::CertificateAuthority.should respond_to(:instance)
end
describe "when finding an existing instance" do
describe "and the host is a CA host and the run_mode is master" do
before do
Puppet[:ca] = true
Puppet.run_mode.stubs(:master?).returns true
@ca = mock('ca')
Puppet::SSL::CertificateAuthority.stubs(:new).returns @ca
end
it "should return an instance" do
Puppet::SSL::CertificateAuthority.instance.should equal(@ca)
end
it "should always return the same instance" do
Puppet::SSL::CertificateAuthority.instance.should equal(Puppet::SSL::CertificateAuthority.instance)
end
end
describe "and the host is not a CA host" do
it "should return nil" do
Puppet[:ca] = false
Puppet.run_mode.stubs(:master?).returns true
ca = mock('ca')
Puppet::SSL::CertificateAuthority.expects(:new).never
Puppet::SSL::CertificateAuthority.instance.should be_nil
end
end
describe "and the run_mode is not master" do
it "should return nil" do
Puppet[:ca] = true
Puppet.run_mode.stubs(:master?).returns false
ca = mock('ca')
Puppet::SSL::CertificateAuthority.expects(:new).never
Puppet::SSL::CertificateAuthority.instance.should be_nil
end
end
end
describe "when initializing" do
before do
Puppet.settings.stubs(:use)
Puppet::SSL::CertificateAuthority.any_instance.stubs(:setup)
end
it "should always set its name to the value of :certname" do
Puppet[:certname] = "ca_testing"
Puppet::SSL::CertificateAuthority.new.name.should == "ca_testing"
end
it "should create an SSL::Host instance whose name is the 'ca_name'" do
Puppet::SSL::Host.expects(:ca_name).returns "caname"
host = stub 'host'
Puppet::SSL::Host.expects(:new).with("caname").returns host
Puppet::SSL::CertificateAuthority.new
end
it "should use the :main, :ca, and :ssl settings sections" do
Puppet.settings.expects(:use).with(:main, :ssl, :ca)
Puppet::SSL::CertificateAuthority.new
end
it "should make sure the CA is set up" do
Puppet::SSL::CertificateAuthority.any_instance.expects(:setup)
Puppet::SSL::CertificateAuthority.new
end
end
describe "when setting itself up" do
it "should generate the CA certificate if it does not have one" do
Puppet.settings.stubs :use
host = stub 'host'
Puppet::SSL::Host.stubs(:new).returns host
host.expects(:certificate).returns nil
Puppet::SSL::CertificateAuthority.any_instance.expects(:generate_ca_certificate)
Puppet::SSL::CertificateAuthority.new
end
end
describe "when retrieving the certificate revocation list" do
before do
Puppet.settings.stubs(:use)
Puppet[:cacrl] = "/my/crl"
cert = stub("certificate", :content => "real_cert")
key = stub("key", :content => "real_key")
@host = stub 'host', :certificate => cert, :name => "hostname", :key => key
Puppet::SSL::CertificateAuthority.any_instance.stubs(:setup)
@ca = Puppet::SSL::CertificateAuthority.new
@ca.stubs(:host).returns @host
end
it "should return any found CRL instance" do
crl = mock 'crl'
Puppet::SSL::CertificateRevocationList.indirection.expects(:find).returns crl
@ca.crl.should equal(crl)
end
it "should create, generate, and save a new CRL instance of no CRL can be found" do
crl = Puppet::SSL::CertificateRevocationList.new("fakename")
Puppet::SSL::CertificateRevocationList.indirection.expects(:find).returns nil
Puppet::SSL::CertificateRevocationList.expects(:new).returns crl
crl.expects(:generate).with(@ca.host.certificate.content, @ca.host.key.content)
Puppet::SSL::CertificateRevocationList.indirection.expects(:save).with(crl)
@ca.crl.should equal(crl)
end
end
describe "when generating a self-signed CA certificate" do
before do
Puppet.settings.stubs(:use)
Puppet::SSL::CertificateAuthority.any_instance.stubs(:setup)
Puppet::SSL::CertificateAuthority.any_instance.stubs(:crl)
@ca = Puppet::SSL::CertificateAuthority.new
@host = stub 'host', :key => mock("key"), :name => "hostname", :certificate => mock('certificate')
Puppet::SSL::CertificateRequest.any_instance.stubs(:generate)
@ca.stubs(:host).returns @host
end
it "should create and store a password at :capass" do
Puppet[:capass] = File.expand_path("/path/to/pass")
Puppet::FileSystem.expects(:exist?).with(Puppet[:capass]).returns false
fh = StringIO.new
Puppet.settings.setting(:capass).expects(:open).with('w').yields fh
@ca.stubs(:sign)
@ca.generate_ca_certificate
expect(fh.string.length).to be > 18
end
it "should generate a key if one does not exist" do
@ca.stubs :generate_password
@ca.stubs :sign
@ca.host.expects(:key).returns nil
@ca.host.expects(:generate_key)
@ca.generate_ca_certificate
end
it "should create and sign a self-signed cert using the CA name" do
request = mock 'request'
Puppet::SSL::CertificateRequest.expects(:new).with(@ca.host.name).returns request
request.expects(:generate).with(@ca.host.key)
request.stubs(:request_extensions => [])
@ca.expects(:sign).with(@host.name, false, request)
@ca.stubs :generate_password
@ca.generate_ca_certificate
end
it "should generate its CRL" do
@ca.stubs :generate_password
@ca.stubs :sign
@ca.host.expects(:key).returns nil
@ca.host.expects(:generate_key)
@ca.expects(:crl)
@ca.generate_ca_certificate
end
end
describe "when signing" do
before do
Puppet.settings.stubs(:use)
Puppet::SSL::CertificateAuthority.any_instance.stubs(:password?).returns true
stub_ca_host
Puppet::SSL::Host.expects(:new).with(Puppet::SSL::Host.ca_name).returns @host
@ca = Puppet::SSL::CertificateAuthority.new
@name = "myhost"
@real_cert = stub 'realcert', :sign => nil
@cert = Puppet::SSL::Certificate.new(@name)
@cert.content = @real_cert
Puppet::SSL::Certificate.stubs(:new).returns @cert
Puppet::SSL::Certificate.indirection.stubs(:save)
# Stub out the factory
Puppet::SSL::CertificateFactory.stubs(:build).returns @cert.content
@request_content = stub "request content stub", :subject => OpenSSL::X509::Name.new([['CN', @name]]), :public_key => stub('public_key')
@request = stub 'request', :name => @name, :request_extensions => [], :subject_alt_names => [], :content => @request_content
@request_content.stubs(:verify).returns(true)
# And the inventory
@inventory = stub 'inventory', :add => nil
@ca.stubs(:inventory).returns @inventory
Puppet::SSL::CertificateRequest.indirection.stubs(:destroy)
end
describe "its own certificate" do
before do
@serial = 10
@ca.stubs(:next_serial).returns @serial
end
it "should not look up a certificate request for the host" do
Puppet::SSL::CertificateRequest.indirection.expects(:find).never
@ca.sign(@name, true, @request)
end
it "should use a certificate type of :ca" do
Puppet::SSL::CertificateFactory.expects(:build).with do |*args|
args[0].should == :ca
end.returns @cert.content
@ca.sign(@name, :ca, @request)
end
it "should pass the provided CSR as the CSR" do
Puppet::SSL::CertificateFactory.expects(:build).with do |*args|
args[1].should == @request
end.returns @cert.content
@ca.sign(@name, :ca, @request)
end
it "should use the provided CSR's content as the issuer" do
Puppet::SSL::CertificateFactory.expects(:build).with do |*args|
args[2].subject.to_s.should == "/CN=myhost"
end.returns @cert.content
@ca.sign(@name, :ca, @request)
end
it "should pass the next serial as the serial number" do
Puppet::SSL::CertificateFactory.expects(:build).with do |*args|
args[3].should == @serial
end.returns @cert.content
@ca.sign(@name, :ca, @request)
end
it "should sign the certificate request even if it contains alt names" do
@request.stubs(:subject_alt_names).returns %w[DNS:foo DNS:bar DNS:baz]
expect do
@ca.sign(@name, false, @request)
end.not_to raise_error
end
it "should save the resulting certificate" do
Puppet::SSL::Certificate.indirection.expects(:save).with(@cert)
@ca.sign(@name, :ca, @request)
end
end
describe "another host's certificate" do
before do
@serial = 10
@ca.stubs(:next_serial).returns @serial
Puppet::SSL::CertificateRequest.indirection.stubs(:find).with(@name).returns @request
Puppet::SSL::CertificateRequest.indirection.stubs :save
end
it "should use a certificate type of :server" do
Puppet::SSL::CertificateFactory.expects(:build).with do |*args|
args[0] == :server
end.returns @cert.content
@ca.sign(@name)
end
it "should use look up a CSR for the host in the :ca_file terminus" do
Puppet::SSL::CertificateRequest.indirection.expects(:find).with(@name).returns @request
@ca.sign(@name)
end
it "should fail if no CSR can be found for the host" do
Puppet::SSL::CertificateRequest.indirection.expects(:find).with(@name).returns nil
expect { @ca.sign(@name) }.to raise_error(ArgumentError)
end
it "should fail if an unknown request extension is present" do
@request.stubs :request_extensions => [{ "oid" => "bananas",
"value" => "delicious" }]
expect {
@ca.sign(@name)
}.to raise_error(/CSR has request extensions that are not permitted/)
end
it "should fail if the CSR contains alt names and they are not expected" do
@request.stubs(:subject_alt_names).returns %w[DNS:foo DNS:bar DNS:baz]
expect do
@ca.sign(@name, false)
end.to raise_error(Puppet::SSL::CertificateAuthority::CertificateSigningError, /CSR '#{@name}' contains subject alternative names \(.*?\), which are disallowed. Use `puppet cert --allow-dns-alt-names sign #{@name}` to sign this request./)
end
it "should not fail if the CSR does not contain alt names and they are expected" do
@request.stubs(:subject_alt_names).returns []
expect { @ca.sign(@name, true) }.to_not raise_error
end
it "should reject alt names by default" do
@request.stubs(:subject_alt_names).returns %w[DNS:foo DNS:bar DNS:baz]
expect do
@ca.sign(@name)
end.to raise_error(Puppet::SSL::CertificateAuthority::CertificateSigningError, /CSR '#{@name}' contains subject alternative names \(.*?\), which are disallowed. Use `puppet cert --allow-dns-alt-names sign #{@name}` to sign this request./)
end
it "should use the CA certificate as the issuer" do
Puppet::SSL::CertificateFactory.expects(:build).with do |*args|
args[2] == @cacert.content
end.returns @cert.content
signed = @ca.sign(@name)
end
it "should pass the next serial as the serial number" do
Puppet::SSL::CertificateFactory.expects(:build).with do |*args|
args[3] == @serial
end.returns @cert.content
@ca.sign(@name)
end
it "should sign the resulting certificate using its real key and a digest" do
digest = mock 'digest'
OpenSSL::Digest::SHA256.expects(:new).returns digest
key = stub 'key', :content => "real_key"
@ca.host.stubs(:key).returns key
@cert.content.expects(:sign).with("real_key", digest)
@ca.sign(@name)
end
it "should save the resulting certificate" do
Puppet::SSL::Certificate.indirection.stubs(:save).with(@cert)
@ca.sign(@name)
end
it "should remove the host's certificate request" do
Puppet::SSL::CertificateRequest.indirection.expects(:destroy).with(@name)
@ca.sign(@name)
end
it "should check the internal signing policies" do
@ca.expects(:check_internal_signing_policies).returns true
@ca.sign(@name)
end
end
context "#check_internal_signing_policies" do
before do
@serial = 10
@ca.stubs(:next_serial).returns @serial
Puppet::SSL::CertificateRequest.indirection.stubs(:find).with(@name).returns @request
@cert.stubs :save
end
it "should reject CSRs whose CN doesn't match the name for which we're signing them" do
# Shorten this so the test doesn't take too long
Puppet[:keylength] = 1024
key = Puppet::SSL::Key.new('the_certname')
key.generate
csr = Puppet::SSL::CertificateRequest.new('the_certname')
csr.generate(key)
expect do
@ca.check_internal_signing_policies('not_the_certname', csr, false)
end.to raise_error(
Puppet::SSL::CertificateAuthority::CertificateSigningError,
/common name "the_certname" does not match expected certname "not_the_certname"/
)
end
describe "when validating the CN" do
before :all do
Puppet[:keylength] = 1024
Puppet[:passfile] = '/f00'
@signing_key = Puppet::SSL::Key.new('my_signing_key')
@signing_key.generate
end
[
'completely_okay',
'sure, why not? :)',
'so+many(things)-are=allowed.',
'this"is#just&madness%you[see]',
'and even a (an?) \\!',
'waltz, nymph, for quick jigs vex bud.',
'{552c04ca-bb1b-11e1-874b-60334b04494e}'
].each do |name|
it "should accept #{name.inspect}" do
csr = Puppet::SSL::CertificateRequest.new(name)
csr.generate(@signing_key)
@ca.check_internal_signing_policies(name, csr, false)
end
end
[
'super/bad',
"not\neven\tkind\rof",
"ding\adong\a",
"hidden\b\b\b\b\b\bmessage",
"\xE2\x98\x83 :("
].each do |name|
it "should reject #{name.inspect}" do
# We aren't even allowed to make objects with these names, so let's
# stub that to simulate an invalid one coming from outside Puppet
Puppet::SSL::CertificateRequest.stubs(:validate_certname)
csr = Puppet::SSL::CertificateRequest.new(name)
csr.generate(@signing_key)
expect do
@ca.check_internal_signing_policies(name, csr, false)
end.to raise_error(
Puppet::SSL::CertificateAuthority::CertificateSigningError,
/subject contains unprintable or non-ASCII characters/
)
end
end
end
it "accepts numeric OIDs under the ppRegCertExt subtree" do
exts = [{ 'oid' => '1.3.6.1.4.1.34380.1.1.1',
'value' => '657e4780-4cf5-11e3-8f96-0800200c9a66'}]
@request.stubs(:request_extensions).returns exts
expect {
@ca.check_internal_signing_policies(@name, @request, false)
}.to_not raise_error
end
it "accepts short name OIDs under the ppRegCertExt subtree" do
exts = [{ 'oid' => 'pp_uuid',
'value' => '657e4780-4cf5-11e3-8f96-0800200c9a66'}]
@request.stubs(:request_extensions).returns exts
expect {
@ca.check_internal_signing_policies(@name, @request, false)
}.to_not raise_error
end
it "accepts OIDs under the ppPrivCertAttrs subtree" do
exts = [{ 'oid' => '1.3.6.1.4.1.34380.1.2.1',
'value' => 'private extension'}]
@request.stubs(:request_extensions).returns exts
expect {
@ca.check_internal_signing_policies(@name, @request, false)
}.to_not raise_error
end
it "should reject a critical extension that isn't on the whitelist" do
@request.stubs(:request_extensions).returns [{ "oid" => "banana",
"value" => "yumm",
"critical" => true }]
expect { @ca.check_internal_signing_policies(@name, @request, false) }.to raise_error(
Puppet::SSL::CertificateAuthority::CertificateSigningError,
/request extensions that are not permitted/
)
end
it "should reject a non-critical extension that isn't on the whitelist" do
@request.stubs(:request_extensions).returns [{ "oid" => "peach",
"value" => "meh",
"critical" => false }]
expect { @ca.check_internal_signing_policies(@name, @request, false) }.to raise_error(
Puppet::SSL::CertificateAuthority::CertificateSigningError,
/request extensions that are not permitted/
)
end
it "should reject non-whitelist extensions even if a valid extension is present" do
@request.stubs(:request_extensions).returns [{ "oid" => "peach",
"value" => "meh",
"critical" => false },
{ "oid" => "subjectAltName",
"value" => "DNS:foo",
"critical" => true }]
expect { @ca.check_internal_signing_policies(@name, @request, false) }.to raise_error(
Puppet::SSL::CertificateAuthority::CertificateSigningError,
/request extensions that are not permitted/
)
end
it "should reject a subjectAltName for a non-DNS value" do
@request.stubs(:subject_alt_names).returns ['DNS:foo', 'email:bar@example.com']
expect { @ca.check_internal_signing_policies(@name, @request, true) }.to raise_error(
Puppet::SSL::CertificateAuthority::CertificateSigningError,
/subjectAltName outside the DNS label space/
)
end
it "should reject a wildcard subject" do
@request.content.stubs(:subject).
returns(OpenSSL::X509::Name.new([["CN", "*.local"]]))
expect { @ca.check_internal_signing_policies('*.local', @request, false) }.to raise_error(
Puppet::SSL::CertificateAuthority::CertificateSigningError,
/subject contains a wildcard/
)
end
it "should reject a wildcard subjectAltName" do
@request.stubs(:subject_alt_names).returns ['DNS:foo', 'DNS:*.bar']
expect { @ca.check_internal_signing_policies(@name, @request, true) }.to raise_error(
Puppet::SSL::CertificateAuthority::CertificateSigningError,
/subjectAltName contains a wildcard/
)
end
end
it "should create a certificate instance with the content set to the newly signed x509 certificate" do
@serial = 10
@ca.stubs(:next_serial).returns @serial
Puppet::SSL::CertificateRequest.indirection.stubs(:find).with(@name).returns @request
Puppet::SSL::Certificate.indirection.stubs :save
Puppet::SSL::Certificate.expects(:new).with(@name).returns @cert
@ca.sign(@name)
end
it "should return the certificate instance" do
@ca.stubs(:next_serial).returns @serial
Puppet::SSL::CertificateRequest.indirection.stubs(:find).with(@name).returns @request
Puppet::SSL::Certificate.indirection.stubs :save
@ca.sign(@name).should equal(@cert)
end
it "should add the certificate to its inventory" do
@ca.stubs(:next_serial).returns @serial
@inventory.expects(:add).with(@cert)
Puppet::SSL::CertificateRequest.indirection.stubs(:find).with(@name).returns @request
Puppet::SSL::Certificate.indirection.stubs :save
@ca.sign(@name)
end
it "should have a method for triggering autosigning of available CSRs" do
@ca.should respond_to(:autosign)
end
describe "when autosigning certificates" do
let(:csr) { Puppet::SSL::CertificateRequest.new("host") }
describe "using the autosign setting" do
let(:autosign) { File.expand_path("/auto/sign") }
it "should do nothing if autosign is disabled" do
Puppet[:autosign] = false
@ca.expects(:sign).never
@ca.autosign(csr)
end
it "should do nothing if no autosign.conf exists" do
Puppet[:autosign] = autosign
non_existent_file = Puppet::FileSystem::MemoryFile.a_missing_file(autosign)
Puppet::FileSystem.overlay(non_existent_file) do
@ca.expects(:sign).never
@ca.autosign(csr)
end
end
describe "and autosign is enabled and the autosign.conf file exists" do
let(:store) { stub 'store', :allow => nil, :allowed? => false }
before do
Puppet[:autosign] = autosign
end
describe "when creating the AuthStore instance to verify autosigning" do
it "should create an AuthStore with each line in the configuration file allowed to be autosigned" do
Puppet::FileSystem.overlay(Puppet::FileSystem::MemoryFile.a_regular_file_containing(autosign, "one\ntwo\n")) do
Puppet::Network::AuthStore.stubs(:new).returns store
store.expects(:allow).with("one")
store.expects(:allow).with("two")
@ca.autosign(csr)
end
end
it "should reparse the autosign configuration on each call" do
Puppet::FileSystem.overlay(Puppet::FileSystem::MemoryFile.a_regular_file_containing(autosign, "one")) do
Puppet::Network::AuthStore.stubs(:new).times(2).returns store
@ca.autosign(csr)
@ca.autosign(csr)
end
end
it "should ignore comments" do
Puppet::FileSystem.overlay(Puppet::FileSystem::MemoryFile.a_regular_file_containing(autosign, "one\n#two\n")) do
Puppet::Network::AuthStore.stubs(:new).returns store
store.expects(:allow).with("one")
@ca.autosign(csr)
end
end
it "should ignore blank lines" do
Puppet::FileSystem.overlay(Puppet::FileSystem::MemoryFile.a_regular_file_containing(autosign, "one\n\n")) do
Puppet::Network::AuthStore.stubs(:new).returns store
store.expects(:allow).with("one")
@ca.autosign(csr)
end
end
end
end
end
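The autosign.conf tests above spell out how the file is consumed on every autosign call: each non-blank, non-comment line becomes an allow entry in a freshly built AuthStore. A hedged sketch of that parsing step, with `store` standing in for a Puppet::Network::AuthStore instance:

````
# Sketch only: build an allow-list from an autosign.conf-style file,
# skipping comments and blank lines as the specs above expect.
def load_autosign_entries(path, store)
  File.readlines(path).each do |line|
    entry = line.strip
    next if entry.empty? || entry.start_with?('#')
    store.allow(entry)
  end
  store
end
````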
describe "using the autosign command setting" do
let(:cmd) { File.expand_path('/autosign_cmd') }
let(:autosign_cmd) { mock 'autosign_command' }
let(:autosign_executable) { Puppet::FileSystem::MemoryFile.an_executable(cmd) }
before do
Puppet[:autosign] = cmd
Puppet::SSL::CertificateAuthority::AutosignCommand.stubs(:new).returns autosign_cmd
end
it "autosigns the CSR if the autosign command returned true" do
Puppet::FileSystem.overlay(autosign_executable) do
autosign_cmd.expects(:allowed?).with(csr).returns true
@ca.expects(:sign).with('host')
@ca.autosign(csr)
end
end
it "doesn't autosign the CSR if the autosign_command returned false" do
Puppet::FileSystem.overlay(autosign_executable) do
autosign_cmd.expects(:allowed?).with(csr).returns false
@ca.expects(:sign).never
@ca.autosign(csr)
end
end
end
end
end
describe "when managing certificate clients" do
before do
Puppet.settings.stubs(:use)
Puppet::SSL::CertificateAuthority.any_instance.stubs(:password?).returns true
stub_ca_host
Puppet::SSL::Host.expects(:new).returns @host
Puppet::SSL::CertificateAuthority.any_instance.stubs(:host).returns @host
@cacert = mock 'certificate'
@cacert.stubs(:content).returns "cacertificate"
@ca = Puppet::SSL::CertificateAuthority.new
end
it "should be able to list waiting certificate requests" do
req1 = stub 'req1', :name => "one"
req2 = stub 'req2', :name => "two"
Puppet::SSL::CertificateRequest.indirection.expects(:search).with("*").returns [req1, req2]
@ca.waiting?.should == %w{one two}
end
it "should delegate removing hosts to the Host class" do
Puppet::SSL::Host.expects(:destroy).with("myhost")
@ca.destroy("myhost")
end
it "should be able to verify certificates" do
@ca.should respond_to(:verify)
end
it "should list certificates as the sorted list of all existing signed certificates" do
cert1 = stub 'cert1', :name => "cert1"
cert2 = stub 'cert2', :name => "cert2"
Puppet::SSL::Certificate.indirection.expects(:search).with("*").returns [cert1, cert2]
@ca.list.should == %w{cert1 cert2}
end
it "should list the full certificates" do
cert1 = stub 'cert1', :name => "cert1"
cert2 = stub 'cert2', :name => "cert2"
Puppet::SSL::Certificate.indirection.expects(:search).with("*").returns [cert1, cert2]
@ca.list_certificates.should == [cert1, cert2]
end
describe "and printing certificates" do
it "should return nil if the certificate cannot be found" do
Puppet::SSL::Certificate.indirection.expects(:find).with("myhost").returns nil
@ca.print("myhost").should be_nil
end
it "should print certificates by calling :to_text on the host's certificate" do
cert1 = stub 'cert1', :name => "cert1", :to_text => "mytext"
Puppet::SSL::Certificate.indirection.expects(:find).with("myhost").returns cert1
@ca.print("myhost").should == "mytext"
end
end
describe "and fingerprinting certificates" do
before :each do
@cert = stub 'cert', :name => "cert", :fingerprint => "DIGEST"
Puppet::SSL::Certificate.indirection.stubs(:find).with("myhost").returns @cert
Puppet::SSL::CertificateRequest.indirection.stubs(:find).with("myhost")
end
it "should raise an error if the certificate or CSR cannot be found" do
Puppet::SSL::Certificate.indirection.expects(:find).with("myhost").returns nil
Puppet::SSL::CertificateRequest.indirection.expects(:find).with("myhost").returns nil
expect { @ca.fingerprint("myhost") }.to raise_error
end
it "should try to find a CSR if no certificate can be found" do
Puppet::SSL::Certificate.indirection.expects(:find).with("myhost").returns nil
Puppet::SSL::CertificateRequest.indirection.expects(:find).with("myhost").returns @cert
@cert.expects(:fingerprint)
@ca.fingerprint("myhost")
end
it "should delegate to the certificate fingerprinting" do
@cert.expects(:fingerprint)
@ca.fingerprint("myhost")
end
it "should propagate the digest algorithm to the certificate fingerprinting system" do
@cert.expects(:fingerprint).with(:digest)
@ca.fingerprint("myhost", :digest)
end
end
describe "and verifying certificates" do
let(:cacert) { File.expand_path("/ca/cert") }
before do
@store = stub 'store', :verify => true, :add_file => nil, :purpose= => nil, :add_crl => true, :flags= => nil
OpenSSL::X509::Store.stubs(:new).returns @store
@cert = stub 'cert', :content => "mycert"
Puppet::SSL::Certificate.indirection.stubs(:find).returns @cert
@crl = stub('crl', :content => "mycrl")
@ca.stubs(:crl).returns @crl
end
it "should fail if the host's certificate cannot be found" do
Puppet::SSL::Certificate.indirection.expects(:find).with("me").returns(nil)
expect { @ca.verify("me") }.to raise_error(ArgumentError)
end
it "should create an SSL Store to verify" do
OpenSSL::X509::Store.expects(:new).returns @store
@ca.verify("me")
end
it "should add the CA Certificate to the store" do
Puppet[:cacert] = cacert
@store.expects(:add_file).with cacert
@ca.verify("me")
end
it "should add the CRL to the store if the crl is enabled" do
@store.expects(:add_crl).with "mycrl"
@ca.verify("me")
end
it "should set the store purpose to OpenSSL::X509::PURPOSE_SSL_CLIENT" do
Puppet[:cacert] = cacert
@store.expects(:add_file).with cacert
@ca.verify("me")
end
it "should set the store flags to check the crl" do
@store.expects(:flags=).with OpenSSL::X509::V_FLAG_CRL_CHECK_ALL|OpenSSL::X509::V_FLAG_CRL_CHECK
@ca.verify("me")
end
it "should use the store to verify the certificate" do
@cert.expects(:content).returns "mycert"
@store.expects(:verify).with("mycert").returns true
@ca.verify("me")
end
it "should fail if the verification returns false" do
@cert.expects(:content).returns "mycert"
@store.expects(:verify).with("mycert").returns false
expect { @ca.verify("me") }.to raise_error
end
describe "certificate_is_alive?" do
it "should return false if verification fails" do
@cert.expects(:content).returns "mycert"
@store.expects(:verify).with("mycert").returns false
@ca.certificate_is_alive?(@cert).should be_false
end
it "should return true if verification passes" do
@cert.expects(:content).returns "mycert"
@store.expects(:verify).with("mycert").returns true
@ca.certificate_is_alive?(@cert).should be_true
end
it "should used a cached instance of the x509 store" do
OpenSSL::X509::Store.stubs(:new).returns(@store).once
@cert.expects(:content).returns "mycert"
@store.expects(:verify).with("mycert").returns true
@ca.certificate_is_alive?(@cert)
@ca.certificate_is_alive?(@cert)
end
end
end
describe "and revoking certificates" do
before do
@crl = mock 'crl'
@ca.stubs(:crl).returns @crl
@ca.stubs(:next_serial).returns 10
@real_cert = stub 'real_cert', :serial => 15
@cert = stub 'cert', :content => @real_cert
Puppet::SSL::Certificate.indirection.stubs(:find).returns @cert
end
it "should fail if the certificate revocation list is disabled" do
@ca.stubs(:crl).returns false
expect { @ca.revoke('ca_testing') }.to raise_error(ArgumentError)
end
it "should delegate the revocation to its CRL" do
@ca.crl.expects(:revoke)
@ca.revoke('host')
end
it "should get the serial number from the local certificate if it exists" do
@ca.crl.expects(:revoke).with { |serial, key| serial == 15 }
Puppet::SSL::Certificate.indirection.expects(:find).with("host").returns @cert
@ca.revoke('host')
end
it "should get the serial number from inventory if no local certificate exists" do
real_cert = stub 'real_cert', :serial => 15
cert = stub 'cert', :content => real_cert
Puppet::SSL::Certificate.indirection.expects(:find).with("host").returns nil
- @ca.inventory.expects(:serial).with("host").returns 16
+ @ca.inventory.expects(:serials).with("host").returns [16]
@ca.crl.expects(:revoke).with { |serial, key| serial == 16 }
@ca.revoke('host')
end
+ it "should revoke all serials matching a name" do
+ real_cert = stub 'real_cert', :serial => 15
+ cert = stub 'cert', :content => real_cert
+ Puppet::SSL::Certificate.indirection.expects(:find).with("host").returns nil
+
+ @ca.inventory.expects(:serials).with("host").returns [16, 20, 25]
+
+ @ca.crl.expects(:revoke).with { |serial, key| serial == 16 }
+ @ca.crl.expects(:revoke).with { |serial, key| serial == 20 }
+ @ca.crl.expects(:revoke).with { |serial, key| serial == 25 }
+ @ca.revoke('host')
+ end
+
+ it "should raise an error if no certificate match" do
+ real_cert = stub 'real_cert', :serial => 15
+ cert = stub 'cert', :content => real_cert
+ Puppet::SSL::Certificate.indirection.expects(:find).with("host").returns nil
+
+ @ca.inventory.expects(:serials).with("host").returns []
+
+ @ca.crl.expects(:revoke).never
+ expect { @ca.revoke('host') }.to raise_error
+ end
+
context "revocation by serial number (#16798)" do
it "revokes when given a lower case hexadecimal formatted string" do
@ca.crl.expects(:revoke).with { |serial, key| serial == 15 }
Puppet::SSL::Certificate.indirection.expects(:find).with("0xf").returns nil
@ca.revoke('0xf')
end
it "revokes when given an upper case hexadecimal formatted string" do
@ca.crl.expects(:revoke).with { |serial, key| serial == 15 }
Puppet::SSL::Certificate.indirection.expects(:find).with("0xF").returns nil
@ca.revoke('0xF')
end
it "handles very large serial numbers" do
bighex = '0x4000000000000000000000000000000000000000'
bighex_int = 365375409332725729550921208179070754913983135744
@ca.crl.expects(:revoke).with(bighex_int, anything)
Puppet::SSL::Certificate.indirection.expects(:find).with(bighex).returns nil
@ca.revoke(bighex)
end
end
end
it "should be able to generate a complete new SSL host" do
@ca.should respond_to(:generate)
end
end
end
require 'puppet/indirector/memory'
module CertificateAuthorityGenerateSpecs
describe "CertificateAuthority.generate" do
def expect_to_increment_serial_file
Puppet.settings.setting(:serial).expects(:exclusive_open)
end
def expect_to_sign_a_cert
expect_to_increment_serial_file
end
def expect_to_write_the_ca_password
Puppet.settings.setting(:capass).expects(:open).with('w')
end
def expect_ca_initialization
expect_to_write_the_ca_password
expect_to_sign_a_cert
end
INDIRECTED_CLASSES = [
Puppet::SSL::Certificate,
Puppet::SSL::CertificateRequest,
Puppet::SSL::CertificateRevocationList,
Puppet::SSL::Key,
]
INDIRECTED_CLASSES.each do |const|
class const::Memory < Puppet::Indirector::Memory
# @return Array of all the indirector's values
#
# This mirrors Puppet::Indirector::SslFile#search which returns all files
# in the directory.
def search(request)
return @instances.values
end
end
end
before do
Puppet::SSL::Inventory.stubs(:new).returns(stub("Inventory", :add => nil))
INDIRECTED_CLASSES.each { |const| const.indirection.terminus_class = :memory }
end
after do
INDIRECTED_CLASSES.each do |const|
const.indirection.terminus_class = :file
const.indirection.termini.clear
end
end
describe "when generating certificates" do
let(:ca) { Puppet::SSL::CertificateAuthority.new }
before do
expect_ca_initialization
end
it "should fail if a certificate already exists for the host" do
cert = Puppet::SSL::Certificate.new('pre.existing')
Puppet::SSL::Certificate.indirection.save(cert)
expect { ca.generate(cert.name) }.to raise_error(ArgumentError, /a certificate already exists/i)
end
describe "that do not yet exist" do
let(:cn) { "new.host" }
def expect_cert_does_not_exist(cn)
expect( Puppet::SSL::Certificate.indirection.find(cn) ).to be_nil
end
before do
expect_to_sign_a_cert
expect_cert_does_not_exist(cn)
end
it "should return the created certificate" do
cert = ca.generate(cn)
expect( cert ).to be_kind_of(Puppet::SSL::Certificate)
expect( cert.name ).to eq(cn)
end
it "should not have any subject_alt_names by default" do
cert = ca.generate(cn)
expect( cert.subject_alt_names ).to be_empty
end
it "should have subject_alt_names if passed dns_alt_names" do
cert = ca.generate(cn, :dns_alt_names => 'foo,bar')
expect( cert.subject_alt_names ).to match_array(["DNS:#{cn}",'DNS:foo','DNS:bar'])
end
context "if autosign is false" do
before do
Puppet[:autosign] = false
end
it "should still generate and explicitly sign the request" do
cert = nil
cert = ca.generate(cn)
expect(cert.name).to eq(cn)
end
end
context "if autosign is true (Redmine #6112)" do
def run_mode_must_be_master_for_autosign_to_be_attempted
Puppet.stubs(:run_mode).returns(Puppet::Util::RunMode[:master])
end
before do
Puppet[:autosign] = true
run_mode_must_be_master_for_autosign_to_be_attempted
Puppet::Util::Log.level = :info
end
it "should generate a cert without attempting to sign again" do
cert = ca.generate(cn)
expect(cert.name).to eq(cn)
expect(@logs.map(&:message)).to include("Autosigning #{cn}")
end
end
end
end
end
end
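The added revocation tests in this file change the behaviour when no signed certificate is found on disk: instead of a single inventory serial, every serial recorded for the name is revoked, and revocation fails if the inventory lists none. A minimal sketch of that flow; the argument names below are stand-ins for the real collaborators, not the actual method signature:

````
# Sketch only: revoke every serial known for a name, per the new specs above.
# `cert` is the locally stored certificate (or nil), `inventory` responds to
# #serials(name), and `crl` responds to #revoke(serial, key).
def revoke_all(name, cert, inventory, crl, ca_key)
  serials = cert ? [cert.content.serial] : inventory.serials(name)
  raise ArgumentError, "Could not find a serial number for #{name}" if serials.empty?
  serials.each { |serial| crl.revoke(serial, ca_key) }
end
````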
diff --git a/spec/unit/ssl/inventory_spec.rb b/spec/unit/ssl/inventory_spec.rb
index 6e4fbd340..879fd90d1 100755
--- a/spec/unit/ssl/inventory_spec.rb
+++ b/spec/unit/ssl/inventory_spec.rb
@@ -1,137 +1,150 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet/ssl/inventory'
describe Puppet::SSL::Inventory, :unless => Puppet.features.microsoft_windows? do
let(:cert_inventory) { File.expand_path("/inven/tory") }
before do
@class = Puppet::SSL::Inventory
end
describe "when initializing" do
it "should set its path to the inventory file" do
Puppet[:cert_inventory] = cert_inventory
@class.new.path.should == cert_inventory
end
end
describe "when managing an inventory" do
before do
Puppet[:cert_inventory] = cert_inventory
Puppet::FileSystem.stubs(:exist?).with(cert_inventory).returns true
@inventory = @class.new
@cert = mock 'cert'
end
describe "and creating the inventory file" do
it "re-adds all of the existing certificates" do
inventory_file = StringIO.new
Puppet.settings.setting(:cert_inventory).stubs(:open).yields(inventory_file)
cert1 = Puppet::SSL::Certificate.new("cert1")
cert1.content = stub 'cert1',
:serial => 2,
:not_before => Time.now,
:not_after => Time.now,
:subject => "/CN=smocking"
cert2 = Puppet::SSL::Certificate.new("cert2")
cert2.content = stub 'cert2',
:serial => 3,
:not_before => Time.now,
:not_after => Time.now,
:subject => "/CN=mocking bird"
Puppet::SSL::Certificate.indirection.expects(:search).with("*").returns [cert1, cert2]
@inventory.rebuild
expect(inventory_file.string).to match(/\/CN=smocking/)
expect(inventory_file.string).to match(/\/CN=mocking bird/)
end
end
describe "and adding a certificate" do
it "should use the Settings to write to the file" do
Puppet.settings.setting(:cert_inventory).expects(:open).with("a")
@inventory.add(@cert)
end
it "should add formatted certificate information to the end of the file" do
cert = Puppet::SSL::Certificate.new("mycert")
cert.content = @cert
fh = StringIO.new
Puppet.settings.setting(:cert_inventory).expects(:open).with("a").yields(fh)
@inventory.expects(:format).with(@cert).returns "myformat"
@inventory.add(@cert)
expect(fh.string).to eq("myformat")
end
end
describe "and formatting a certificate" do
before do
@cert = stub 'cert', :not_before => Time.now, :not_after => Time.now, :subject => "mycert", :serial => 15
end
it "should print the serial number as a 4 digit hex number in the first field" do
@inventory.format(@cert).split[0].should == "0x000f" # 15 in hex
end
it "should print the not_before date in '%Y-%m-%dT%H:%M:%S%Z' format in the second field" do
@cert.not_before.expects(:strftime).with('%Y-%m-%dT%H:%M:%S%Z').returns "before_time"
@inventory.format(@cert).split[1].should == "before_time"
end
it "should print the not_after date in '%Y-%m-%dT%H:%M:%S%Z' format in the third field" do
@cert.not_after.expects(:strftime).with('%Y-%m-%dT%H:%M:%S%Z').returns "after_time"
@inventory.format(@cert).split[2].should == "after_time"
end
it "should print the subject in the fourth field" do
@inventory.format(@cert).split[3].should == "mycert"
end
it "should add a carriage return" do
@inventory.format(@cert).should =~ /\n$/
end
it "should produce a line consisting of the serial number, start date, expiration date, and subject" do
# Just make sure our serial and subject bracket the lines.
@inventory.format(@cert).should =~ /^0x.+mycert$/
end
end
it "should be able to find a given host's serial number" do
@inventory.should respond_to(:serial)
end
describe "and finding a serial number" do
it "should return nil if the inventory file is missing" do
Puppet::FileSystem.expects(:exist?).with(cert_inventory).returns false
@inventory.serial(:whatever).should be_nil
end
it "should return the serial number from the line matching the provided name" do
File.expects(:readlines).with(cert_inventory).returns ["0x00f blah blah /CN=me\n", "0x001 blah blah /CN=you\n"]
@inventory.serial("me").should == 15
end
it "should return the number as an integer" do
File.expects(:readlines).with(cert_inventory).returns ["0x00f blah blah /CN=me\n", "0x001 blah blah /CN=you\n"]
@inventory.serial("me").should == 15
end
end
+
+ describe "and finding all serial numbers" do
+ it "should return nil if the inventory file is missing" do
+ Puppet::FileSystem.expects(:exist?).with(cert_inventory).returns false
+ @inventory.serials(:whatever).should be_empty
+ end
+
+ it "should return the all the serial numbers from the lines matching the provided name" do
+ File.expects(:readlines).with(cert_inventory).returns ["0x00f blah blah /CN=me\n", "0x001 blah blah /CN=you\n", "0x002 blah blah /CN=me\n"]
+
+ @inventory.serials("me").should == [15, 2]
+ end
+ end
end
end
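The new `serials` specs expect every inventory line whose subject matches the requested name to contribute its serial, parsed from the leading hex field, and an empty list when the inventory file does not exist. A sketch under those assumptions, with the on-disk format taken from the fixture lines used in the tests:

````
# Sketch only: collect all serials recorded for a name in an inventory file
# whose lines look like "0x00f <not_before> <not_after> /CN=name".
def serials(name, inventory_path)
  return [] unless File.exist?(inventory_path)
  File.readlines(inventory_path).map do |line|
    fields = line.split
    fields.first.to_i(16) if fields.last == "/CN=#{name}"
  end.compact
end

# serials("me", path) # => [15, 2] for the fixture lines in the specs above
````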
diff --git a/spec/unit/ssl/validator_spec.rb b/spec/unit/ssl/validator_spec.rb
index 2b8cfb0f9..ade1575dc 100644
--- a/spec/unit/ssl/validator_spec.rb
+++ b/spec/unit/ssl/validator_spec.rb
@@ -1,354 +1,353 @@
require 'spec_helper'
require 'puppet/ssl'
-require 'puppet/ssl/configuration'
describe Puppet::SSL::Validator::DefaultValidator do
let(:ssl_context) do
mock('OpenSSL::X509::StoreContext')
end
let(:ssl_configuration) do
Puppet::SSL::Configuration.new(
Puppet[:localcacert],
:ca_chain_file => Puppet[:ssl_client_ca_chain],
:ca_auth_file => Puppet[:ssl_client_ca_auth])
end
let(:ssl_host) do
stub('ssl_host',
:ssl_store => nil,
:certificate => stub('cert', :content => nil),
:key => stub('key', :content => nil))
end
subject do
described_class.new(ssl_configuration,
ssl_host)
end
before :each do
ssl_configuration.stubs(:read_file).
with(Puppet[:localcacert]).
returns(root_ca)
end
describe '#call' do
before :each do
ssl_context.stubs(:current_cert).returns(*cert_chain_in_callback_order)
ssl_context.stubs(:chain).returns(cert_chain)
end
context 'When pre-verification is not OK' do
context 'and the ssl_context is in an error state' do
before :each do
ssl_context.stubs(:error_string).returns("Something went wrong.")
end
it 'makes the error available via #verify_errors' do
subject.call(false, ssl_context)
msg_suffix = OpenSSL::X509::Certificate.new(root_ca).subject
subject.verify_errors.should == ["Something went wrong. for #{msg_suffix}"]
end
end
end
context 'When pre-verification is OK' do
context 'and the ssl_context is in an error state' do
before :each do
ssl_context.stubs(:error_string).returns("Something went wrong.")
end
it 'does not make the error available via #verify_errors' do
subject.call(true, ssl_context)
subject.verify_errors.should == []
end
end
context 'and the chain is valid' do
it 'is true for each CA certificate in the chain' do
(cert_chain.length - 1).times do
subject.call(true, ssl_context).should be_true
end
end
it 'is true for the SSL certificate ending the chain' do
(cert_chain.length - 1).times do
subject.call(true, ssl_context)
end
subject.call(true, ssl_context).should be_true
end
end
context 'and the chain is invalid' do
before :each do
ssl_configuration.stubs(:read_file).
with(Puppet[:localcacert]).
returns(agent_ca)
end
it 'is true for each CA certificate in the chain' do
(cert_chain.length - 1).times do
subject.call(true, ssl_context).should be_true
end
end
it 'is false for the SSL certificate ending the chain' do
(cert_chain.length - 1).times do
subject.call(true, ssl_context)
end
subject.call(true, ssl_context).should be_false
end
end
context 'an error is raised inside of #call' do
before :each do
ssl_context.expects(:current_cert).raises(StandardError, "BOOM!")
end
it 'is false' do
subject.call(true, ssl_context).should be_false
end
it 'makes the error available through #verify_errors' do
subject.call(true, ssl_context)
subject.verify_errors.should == ["BOOM!"]
end
end
end
end
describe '#setup_connection' do
it 'updates the connection for verification' do
subject.stubs(:ssl_certificates_are_present?).returns(true)
connection = mock('Net::HTTP')
connection.expects(:cert_store=).with(ssl_host.ssl_store)
connection.expects(:ca_file=).with(ssl_configuration.ca_auth_file)
connection.expects(:cert=).with(ssl_host.certificate.content)
connection.expects(:key=).with(ssl_host.key.content)
connection.expects(:verify_callback=).with(subject)
connection.expects(:verify_mode=).with(OpenSSL::SSL::VERIFY_PEER)
subject.setup_connection(connection)
end
it 'does not perform verification if certificate files are missing' do
subject.stubs(:ssl_certificates_are_present?).returns(false)
connection = mock('Net::HTTP')
connection.expects(:verify_mode=).with(OpenSSL::SSL::VERIFY_NONE)
subject.setup_connection(connection)
end
end
describe '#valid_peer?' do
before :each do
peer_certs = cert_chain_in_callback_order.map do |c|
Puppet::SSL::Certificate.from_instance(c)
end
subject.instance_variable_set(:@peer_certs, peer_certs)
end
context 'when the peer presents a valid chain' do
before :each do
subject.stubs(:has_authz_peer_cert).returns(true)
end
it 'is true' do
subject.valid_peer?.should be_true
end
end
context 'when the peer presents an invalid chain' do
before :each do
subject.stubs(:has_authz_peer_cert).returns(false)
end
it 'is false' do
subject.valid_peer?.should be_false
end
it 'makes a helpful error message available via #verify_errors' do
subject.valid_peer?
subject.verify_errors.should == [expected_authz_error_msg]
end
end
end
describe '#has_authz_peer_cert' do
context 'when the Root CA is listed as authorized' do
it 'returns true when the SSL cert is issued by the Master CA' do
subject.has_authz_peer_cert(cert_chain, [root_ca_cert]).should be_true
end
it 'returns true when the SSL cert is issued by the Agent CA' do
subject.has_authz_peer_cert(cert_chain_agent_ca, [root_ca_cert]).should be_true
end
end
context 'when the Master CA is listed as authorized' do
it 'returns true when the SSL cert is issued by the Master CA' do
subject.has_authz_peer_cert(cert_chain, [master_ca_cert]).should be_true
end
it 'returns false when the SSL cert is issued by the Agent CA' do
subject.has_authz_peer_cert(cert_chain_agent_ca, [master_ca_cert]).should be_false
end
end
context 'when the Agent CA is listed as authorized' do
it 'returns false when the SSL cert is issued by the Master CA' do
subject.has_authz_peer_cert(cert_chain, [agent_ca_cert]).should be_false
end
it 'returns true when the SSL cert is issued by the Agent CA' do
subject.has_authz_peer_cert(cert_chain_agent_ca, [agent_ca_cert]).should be_true
end
end
end
def root_ca
<<-ROOT_CA
-----BEGIN CERTIFICATE-----
MIICYDCCAcmgAwIBAgIJALf2Pk2HvtBzMA0GCSqGSIb3DQEBBQUAMEkxEDAOBgNV
BAMMB1Jvb3QgQ0ExGjAYBgNVBAsMEVNlcnZlciBPcGVyYXRpb25zMRkwFwYDVQQK
DBBFeGFtcGxlIE9yZywgTExDMB4XDTEzMDMzMDA1NTA0OFoXDTMzMDMyNTA1NTA0
OFowSTEQMA4GA1UEAwwHUm9vdCBDQTEaMBgGA1UECwwRU2VydmVyIE9wZXJhdGlv
bnMxGTAXBgNVBAoMEEV4YW1wbGUgT3JnLCBMTEMwgZ8wDQYJKoZIhvcNAQEBBQAD
gY0AMIGJAoGBAMGSpafR4lboYOPfPJC1wVHHl0gD49ZVRjOlJ9jidEUjBdFXK6SA
S1tecDv2G4tM1ANmfMKjZl0m+KaZ8O2oq0g6kxkq1Mg0eSNvlnEyehjmTLRzHC2i
a0biH2wMtCLzfAoXDKy4GPlciBPE9mup5I8Kien5s91t92tc7K8AJ8oBAgMBAAGj
UDBOMB0GA1UdDgQWBBQwTdZqjjXOIFK2hOM0bcOrnhQw2jAfBgNVHSMEGDAWgBQw
TdZqjjXOIFK2hOM0bcOrnhQw2jAMBgNVHRMEBTADAQH/MA0GCSqGSIb3DQEBBQUA
A4GBACs8EZRrzgzAlcKC1Tz8GYlNHQg0XhpbEDm+p2mOV//PuDD190O+UBpWxo9Q
rrkkx8En0wXQZJf6iH3hwewwHLOq5yXZKbJN+SmvJvRNL95Yhyy08Y9N65tJveE7
rPsNU/Tx19jHC87oXlmAePLI4IaUHXrWb7CRbY9TEcPdmj1R
-----END CERTIFICATE-----
ROOT_CA
end
def master_ca
<<-MASTER_CA
-----BEGIN CERTIFICATE-----
MIICljCCAf+gAwIBAgIBAjANBgkqhkiG9w0BAQUFADBJMRAwDgYDVQQDDAdSb290
IENBMRowGAYDVQQLDBFTZXJ2ZXIgT3BlcmF0aW9uczEZMBcGA1UECgwQRXhhbXBs
ZSBPcmcsIExMQzAeFw0xMzAzMzAwNTUwNDhaFw0zMzAzMjUwNTUwNDhaMH4xJDAi
BgNVBAMTG0ludGVybWVkaWF0ZSBDQSAobWFzdGVyLWNhKTEfMB0GCSqGSIb3DQEJ
ARYQdGVzdEBleGFtcGxlLm9yZzEZMBcGA1UEChMQRXhhbXBsZSBPcmcsIExMQzEa
MBgGA1UECxMRU2VydmVyIE9wZXJhdGlvbnMwXDANBgkqhkiG9w0BAQEFAANLADBI
AkEAvo/az3oR69SP92jGnUHMJLEyyD1Ui1BZ/rUABJcQTRQqn3RqtlfYePWZnUaZ
srKbXRS4q0w5Vqf1kx5w3q5tIwIDAQABo4GcMIGZMHkGA1UdIwRyMHCAFDBN1mqO
Nc4gUraE4zRtw6ueFDDaoU2kSzBJMRAwDgYDVQQDDAdSb290IENBMRowGAYDVQQL
DBFTZXJ2ZXIgT3BlcmF0aW9uczEZMBcGA1UECgwQRXhhbXBsZSBPcmcsIExMQ4IJ
ALf2Pk2HvtBzMA8GA1UdEwEB/wQFMAMBAf8wCwYDVR0PBAQDAgEGMA0GCSqGSIb3
DQEBBQUAA4GBACRfa1YPS7RQUuhYovGgV0VYqxuATC7WwdIRihVh5FceSXKgSIbz
BKmOBAy/KixEhpnHTbkpaJ0d9ITkvjMTmj3M5YMahKaQA5niVPckQPecMMd6jg9U
l1k75xLLIcrlsDYo3999KOSSchH2K7bLT7TuQ2okdP6FHWmeWmudewlu
-----END CERTIFICATE-----
MASTER_CA
end
def agent_ca
<<-AGENT_CA
-----BEGIN CERTIFICATE-----
MIIClTCCAf6gAwIBAgIBATANBgkqhkiG9w0BAQUFADBJMRAwDgYDVQQDDAdSb290
IENBMRowGAYDVQQLDBFTZXJ2ZXIgT3BlcmF0aW9uczEZMBcGA1UECgwQRXhhbXBs
ZSBPcmcsIExMQzAeFw0xMzAzMzAwNTUwNDhaFw0zMzAzMjUwNTUwNDhaMH0xIzAh
BgNVBAMTGkludGVybWVkaWF0ZSBDQSAoYWdlbnQtY2EpMR8wHQYJKoZIhvcNAQkB
FhB0ZXN0QGV4YW1wbGUub3JnMRkwFwYDVQQKExBFeGFtcGxlIE9yZywgTExDMRow
GAYDVQQLExFTZXJ2ZXIgT3BlcmF0aW9uczBcMA0GCSqGSIb3DQEBAQUAA0sAMEgC
QQDkEj/Msmi4hJImxP5+ocixMTHuYC1M1E2p4QcuzOkZYrfHf+5hJMcahfYhLiXU
jHBredOXhgSisHh6CLSb/rKzAgMBAAGjgZwwgZkweQYDVR0jBHIwcIAUME3Wao41
ziBStoTjNG3Dq54UMNqhTaRLMEkxEDAOBgNVBAMMB1Jvb3QgQ0ExGjAYBgNVBAsM
EVNlcnZlciBPcGVyYXRpb25zMRkwFwYDVQQKDBBFeGFtcGxlIE9yZywgTExDggkA
t/Y+TYe+0HMwDwYDVR0TAQH/BAUwAwEB/zALBgNVHQ8EBAMCAQYwDQYJKoZIhvcN
AQEFBQADgYEAujSj9rxIxJHEuuYXb15L30yxs9Tdvy4OCLiKdjvs9Z7gG8Pbutls
ooCwyYAkmzKVs/8cYjZJnvJrPEW1gFwqX7Xknp85Cfrl+/pQEPYq5sZVa5BIm9tI
0EvlDax/Hd28jI6Bgq5fsTECNl9GDGknCy7vwRZem0h+hI56lzR3pYE=
-----END CERTIFICATE-----
AGENT_CA
end
# Signed by the master CA (Good)
def master_issued_by_master_ca
<<-GOOD_SSL_CERT
-----BEGIN CERTIFICATE-----
MIICZzCCAhGgAwIBAgIBATANBgkqhkiG9w0BAQUFADB+MSQwIgYDVQQDExtJbnRl
cm1lZGlhdGUgQ0EgKG1hc3Rlci1jYSkxHzAdBgkqhkiG9w0BCQEWEHRlc3RAZXhh
bXBsZS5vcmcxGTAXBgNVBAoTEEV4YW1wbGUgT3JnLCBMTEMxGjAYBgNVBAsTEVNl
cnZlciBPcGVyYXRpb25zMB4XDTEzMDMzMDA1NTA0OFoXDTMzMDMyNTA1NTA0OFow
HjEcMBoGA1UEAwwTbWFzdGVyMS5leGFtcGxlLm9yZzBcMA0GCSqGSIb3DQEBAQUA
A0sAMEgCQQDACW8fryVZH0dC7vYUASonVBKYcILnKN2O9QX7RenZGN1TWek9LQxr
yQFDyp7WJ8jUw6nENGniLU8J+QSSxryjAgMBAAGjgdkwgdYwWwYDVR0jBFQwUqFN
pEswSTEQMA4GA1UEAwwHUm9vdCBDQTEaMBgGA1UECwwRU2VydmVyIE9wZXJhdGlv
bnMxGTAXBgNVBAoMEEV4YW1wbGUgT3JnLCBMTEOCAQIwDAYDVR0TAQH/BAIwADAL
BgNVHQ8EBAMCBaAwHQYDVR0lBBYwFAYIKwYBBQUHAwEGCCsGAQUFBwMCMD0GA1Ud
EQQ2MDSCE21hc3RlcjEuZXhhbXBsZS5vcmeCB21hc3RlcjGCBnB1cHBldIIMcHVw
cGV0bWFzdGVyMA0GCSqGSIb3DQEBBQUAA0EAo8PvgLrah6jQVs6YCBxOTn13PDip
fVbcRsFd0dtIr00N61bCqr6Fa0aRwy424gh6bVJTNmk2zoaH7r025dZRhw==
-----END CERTIFICATE-----
GOOD_SSL_CERT
end
# Signed by the agent CA, not the master CA (Rogue)
def master_issued_by_agent_ca
<<-BAD_SSL_CERT
-----BEGIN CERTIFICATE-----
MIICZjCCAhCgAwIBAgIBBDANBgkqhkiG9w0BAQUFADB9MSMwIQYDVQQDExpJbnRl
cm1lZGlhdGUgQ0EgKGFnZW50LWNhKTEfMB0GCSqGSIb3DQEJARYQdGVzdEBleGFt
cGxlLm9yZzEZMBcGA1UEChMQRXhhbXBsZSBPcmcsIExMQzEaMBgGA1UECxMRU2Vy
dmVyIE9wZXJhdGlvbnMwHhcNMTMwMzMwMDU1MDQ4WhcNMzMwMzI1MDU1MDQ4WjAe
MRwwGgYDVQQDDBNtYXN0ZXIxLmV4YW1wbGUub3JnMFwwDQYJKoZIhvcNAQEBBQAD
SwAwSAJBAPnCDnryLLXWepGLqsdBWlytfeakE/yijM8GlE/yT0SbpJInIhJR1N1A
0RskriHrxTU5qQEhd0RIja7K5o4NYksCAwEAAaOB2TCB1jBbBgNVHSMEVDBSoU2k
SzBJMRAwDgYDVQQDDAdSb290IENBMRowGAYDVQQLDBFTZXJ2ZXIgT3BlcmF0aW9u
czEZMBcGA1UECgwQRXhhbXBsZSBPcmcsIExMQ4IBATAMBgNVHRMBAf8EAjAAMAsG
A1UdDwQEAwIFoDAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwPQYDVR0R
BDYwNIITbWFzdGVyMS5leGFtcGxlLm9yZ4IHbWFzdGVyMYIGcHVwcGV0ggxwdXBw
ZXRtYXN0ZXIwDQYJKoZIhvcNAQEFBQADQQA841IzHLlnn4RIJ0/BOZ/16iWC1dNr
jV9bELC5OxeMNSsVXbFNeTHwbHEYjDg5dQ6eUkxPdBSMWBeQwe2Mw+xG
-----END CERTIFICATE-----
BAD_SSL_CERT
end
def cert_chain
[ master_issued_by_master_ca, master_ca, root_ca ].map do |pem|
OpenSSL::X509::Certificate.new(pem)
end
end
def cert_chain_agent_ca
[ master_issued_by_agent_ca, agent_ca, root_ca ].map do |pem|
OpenSSL::X509::Certificate.new(pem)
end
end
def cert_chain_in_callback_order
cert_chain.reverse
end
let :authz_error_prefix do
"The server presented a SSL certificate chain which does not include a CA listed in the ssl_client_ca_auth file. "
end
let :expected_authz_error_msg do
authz_ca_certs = ssl_configuration.ca_auth_certificates
msg = authz_error_prefix
msg << "Authorized Issuers: #{authz_ca_certs.collect {|c| c.subject}.join(', ')} "
msg << "Peer Chain: #{cert_chain.collect {|c| c.subject}.join(' => ')}"
msg
end
let :root_ca_cert do
OpenSSL::X509::Certificate.new(root_ca)
end
let :master_ca_cert do
OpenSSL::X509::Certificate.new(master_ca)
end
let :agent_ca_cert do
OpenSSL::X509::Certificate.new(agent_ca)
end
end
diff --git a/spec/unit/transaction/resource_harness_spec.rb b/spec/unit/transaction/resource_harness_spec.rb
index 5eeaf0ba4..7d9c6f439 100755
--- a/spec/unit/transaction/resource_harness_spec.rb
+++ b/spec/unit/transaction/resource_harness_spec.rb
@@ -1,478 +1,542 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet/transaction/resource_harness'
describe Puppet::Transaction::ResourceHarness do
include PuppetSpec::Files
before do
@mode_750 = Puppet.features.microsoft_windows? ? '644' : '750'
@mode_755 = Puppet.features.microsoft_windows? ? '644' : '755'
path = make_absolute("/my/file")
@transaction = Puppet::Transaction.new(Puppet::Resource::Catalog.new, nil, nil)
@resource = Puppet::Type.type(:file).new :path => path
@harness = Puppet::Transaction::ResourceHarness.new(@transaction)
@current_state = Puppet::Resource.new(:file, path)
@resource.stubs(:retrieve).returns @current_state
end
it "should accept a transaction at initialization" do
harness = Puppet::Transaction::ResourceHarness.new(@transaction)
harness.transaction.should equal(@transaction)
end
it "should delegate to the transaction for its relationship graph" do
@transaction.expects(:relationship_graph).returns "relgraph"
Puppet::Transaction::ResourceHarness.new(@transaction).relationship_graph.should == "relgraph"
end
describe "when evaluating a resource" do
it "produces a resource state that describes what happened with the resource" do
status = @harness.evaluate(@resource)
status.resource.should == @resource.ref
status.should_not be_failed
status.events.should be_empty
end
it "retrieves the current state of the resource" do
@resource.expects(:retrieve).returns @current_state
@harness.evaluate(@resource)
end
it "produces a failure status for the resource when an error occurs" do
the_message = "retrieve failed in testing"
@resource.expects(:retrieve).raises(ArgumentError.new(the_message))
status = @harness.evaluate(@resource)
status.should be_failed
events_to_hash(status.events).collect do |event|
{ :@status => event[:@status], :@message => event[:@message] }
end.should == [{ :@status => "failure", :@message => the_message }]
end
it "records the time it took to evaluate the resource" do
before = Time.now
status = @harness.evaluate(@resource)
after = Time.now
status.evaluation_time.should be <= after - before
end
end
def events_to_hash(events)
events.map do |event|
hash = {}
event.instance_variables.each do |varname|
hash[varname.to_sym] = event.instance_variable_get(varname)
end
hash
end
end
def make_stub_provider
stubProvider = Class.new(Puppet::Type)
stubProvider.instance_eval do
initvars
newparam(:name) do
desc "The name var"
isnamevar
end
newproperty(:foo) do
desc "A property that can be changed successfully"
def sync
end
def retrieve
:absent
end
def insync?(reference_value)
false
end
end
newproperty(:bar) do
desc "A property that raises an exception when you try to change it"
def sync
raise ZeroDivisionError.new('bar')
end
def retrieve
:absent
end
def insync?(reference_value)
false
end
end
newproperty(:baz) do
desc "A property that raises an Exception (not StandardError) when you try to change it"
def sync
raise Exception.new('baz')
end
def retrieve
:absent
end
def insync?(reference_value)
false
end
end
newproperty(:brillig) do
desc "A property that raises a StandardError exception when you test if it's insync?"
def sync
end
def retrieve
:absent
end
def insync?(reference_value)
raise ZeroDivisionError.new('brillig')
end
end
newproperty(:slithy) do
desc "A property that raises an Exception when you test if it's insync?"
def sync
end
def retrieve
:absent
end
def insync?(reference_value)
raise Exception.new('slithy')
end
end
end
stubProvider
end
context "interaction of ensure with other properties" do
def an_ensurable_resource_reacting_as(behaviors)
stub_type = Class.new(Puppet::Type)
stub_type.class_eval do
initvars
ensurable do
def sync
(@resource.behaviors[:on_ensure] || proc {}).call
end
def insync?(value)
@resource.behaviors[:ensure_insync?]
end
end
newparam(:name) do
desc "The name var"
isnamevar
end
newproperty(:prop) do
newvalue("new") do
#noop
end
def retrieve
"old"
end
end
attr_reader :behaviors
def initialize(options)
@behaviors = options.delete(:behaviors)
super
end
def exists?
@behaviors[:present?]
end
def present?(resource)
@behaviors[:present?]
end
def self.name
"Testing"
end
end
stub_type.new(:behaviors => behaviors,
:ensure => :present,
:name => "testing",
:prop => "new")
end
it "ensure errors means that the rest doesn't happen" do
resource = an_ensurable_resource_reacting_as(:ensure_insync? => false, :on_ensure => proc { raise StandardError }, :present? => true)
status = @harness.evaluate(resource)
expect(status.events.length).to eq(1)
expect(status.events[0].property).to eq('ensure')
expect(status.events[0].name.to_s).to eq('Testing_created')
expect(status.events[0].status).to eq('failure')
end
it "ensure fails completely means that the rest doesn't happen" do
resource = an_ensurable_resource_reacting_as(:ensure_insync? => false, :on_ensure => proc { raise Exception }, :present? => false)
expect do
@harness.evaluate(resource)
end.to raise_error(Exception)
@logs.first.message.should == "change from absent to present failed: Exception"
@logs.first.level.should == :err
end
it "ensure succeeds means that the rest doesn't happen" do
resource = an_ensurable_resource_reacting_as(:ensure_insync? => false, :on_ensure => proc { }, :present? => true)
status = @harness.evaluate(resource)
expect(status.events.length).to eq(1)
expect(status.events[0].property).to eq('ensure')
expect(status.events[0].name.to_s).to eq('Testing_created')
expect(status.events[0].status).to eq('success')
end
it "ensure is in sync means that the rest *does* happen" do
resource = an_ensurable_resource_reacting_as(:ensure_insync? => true, :present? => true)
status = @harness.evaluate(resource)
expect(status.events.length).to eq(1)
expect(status.events[0].property).to eq('prop')
expect(status.events[0].name.to_s).to eq('prop_changed')
expect(status.events[0].status).to eq('success')
end
it "ensure is in sync but resource not present, means that the rest doesn't happen" do
resource = an_ensurable_resource_reacting_as(:ensure_insync? => true, :present? => false)
status = @harness.evaluate(resource)
expect(status.events).to be_empty
end
end
describe "when a caught error occurs" do
before :each do
stub_provider = make_stub_provider
resource = stub_provider.new :name => 'name', :foo => 1, :bar => 2
resource.expects(:err).never
@status = @harness.evaluate(resource)
end
it "should record previous successful events" do
@status.events[0].property.should == 'foo'
@status.events[0].status.should == 'success'
end
it "should record a failure event" do
@status.events[1].property.should == 'bar'
@status.events[1].status.should == 'failure'
end
end
describe "when an Exception occurs during sync" do
before :each do
stub_provider = make_stub_provider
@resource = stub_provider.new :name => 'name', :baz => 1
@resource.expects(:err).never
end
it "should log and pass the exception through" do
lambda { @harness.evaluate(@resource) }.should raise_error(Exception, /baz/)
@logs.first.message.should == "change from absent to 1 failed: baz"
@logs.first.level.should == :err
end
end
describe "when a StandardError exception occurs during insync?" do
before :each do
stub_provider = make_stub_provider
@resource = stub_provider.new :name => 'name', :brillig => 1
@resource.expects(:err).never
end
it "should record a failure event" do
@status = @harness.evaluate(@resource)
@status.events[0].name.to_s.should == 'brillig_changed'
@status.events[0].property.should == 'brillig'
@status.events[0].status.should == 'failure'
end
end
describe "when an Exception occurs during insync?" do
before :each do
stub_provider = make_stub_provider
@resource = stub_provider.new :name => 'name', :slithy => 1
@resource.expects(:err).never
end
it "should log and pass the exception through" do
lambda { @harness.evaluate(@resource) }.should raise_error(Exception, /slithy/)
@logs.first.message.should == "change from absent to 1 failed: slithy"
@logs.first.level.should == :err
end
end
describe "when auditing" do
it "should not call insync? on parameters that are merely audited" do
stub_provider = make_stub_provider
resource = stub_provider.new :name => 'name', :audit => ['foo']
resource.property(:foo).expects(:insync?).never
status = @harness.evaluate(resource)
expect(status.events).to be_empty
end
it "should be able to audit a file's group" do # see bug #5710
test_file = tmpfile('foo')
File.open(test_file, 'w').close
resource = Puppet::Type.type(:file).new :path => test_file, :audit => ['group'], :backup => false
resource.expects(:err).never # make sure no exceptions get swallowed
status = @harness.evaluate(resource)
status.events.each do |event|
event.status.should != 'failure'
end
end
+
+ it "should not ignore microseconds when auditing a file's mtime" do
+ test_file = tmpfile('foo')
+ File.open(test_file, 'w').close
+ resource = Puppet::Type.type(:file).new :path => test_file, :audit => ['mtime'], :backup => false
+
+ # construct a property hash with nanosecond resolution as would be
+ # found on an ext4 file system
+ time_with_nsec_resolution = Time.at(1000, 123456.999)
+ current_from_filesystem = {:mtime => time_with_nsec_resolution}
+
+ # construct a property hash with a 1 microsecond difference from above
+ time_with_usec_resolution = Time.at(1000, 123457.000)
+ historical_from_state_yaml = {:mtime => time_with_usec_resolution}
+
+ # set up the sequence of stubs; yeah, this is pretty
+ # brittle, so this might need to be adjusted if the
+ # resource_harness logic changes
+ resource.expects(:retrieve).returns(current_from_filesystem)
+ Puppet::Util::Storage.stubs(:cache).with(resource).
+ returns(historical_from_state_yaml).then.
+ returns(current_from_filesystem).then.
+ returns(current_from_filesystem)
+
+ # there should be an audit change recorded, since the two
+ # timestamps differ by at least 1 microsecond
+ status = @harness.evaluate(resource)
+ status.events.should_not be_empty
+ status.events.each do |event|
+ event.message.should =~ /audit change: previously recorded/
+ end
+ end
+
+ it "should ignore nanoseconds when auditing a file's mtime" do
+ test_file = tmpfile('foo')
+ File.open(test_file, 'w').close
+ resource = Puppet::Type.type(:file).new :path => test_file, :audit => ['mtime'], :backup => false
+
+ # construct a property hash with nanosecond resolution as would be
+ # found on an ext4 file system
+ time_with_nsec_resolution = Time.at(1000, 123456.789)
+ current_from_filesystem = {:mtime => time_with_nsec_resolution}
+
+ # construct a property hash with the same timestamp as above,
+ # truncated to microseconds, as would be read back from state.yaml
+ time_with_usec_resolution = Time.at(1000, 123456.000)
+ historical_from_state_yaml = {:mtime => time_with_usec_resolution}
+
+ # set up the sequence of stubs; yeah, this is pretty
+ # brittle, so this might need to be adjusted if the
+ # resource_harness logic changes
+ resource.expects(:retrieve).returns(current_from_filesystem)
+ Puppet::Util::Storage.stubs(:cache).with(resource).
+ returns(historical_from_state_yaml).then.
+ returns(current_from_filesystem).then.
+ returns(current_from_filesystem)
+
+ # there should be no audit change recorded, despite the
+ # slight difference in the two timestamps
+ status = @harness.evaluate(resource)
+ status.events.each do |event|
+ event.message.should_not =~ /audit change: previously recorded/
+ end
+ end
end
describe "when applying changes" do
it "should not apply changes if allow_changes?() returns false" do
test_file = tmpfile('foo')
resource = Puppet::Type.type(:file).new :path => test_file, :backup => false, :ensure => :file
resource.expects(:err).never # make sure no exceptions get swallowed
@harness.expects(:allow_changes?).with(resource).returns false
status = @harness.evaluate(resource)
Puppet::FileSystem.exist?(test_file).should == false
end
end
describe "when determining whether the resource can be changed" do
before do
@resource.stubs(:purging?).returns true
@resource.stubs(:deleting?).returns true
end
it "should be true if the resource is not being purged" do
@resource.expects(:purging?).returns false
@harness.should be_allow_changes(@resource)
end
it "should be true if the resource is not being deleted" do
@resource.expects(:deleting?).returns false
@harness.should be_allow_changes(@resource)
end
it "should be true if the resource has no dependents" do
@harness.relationship_graph.expects(:dependents).with(@resource).returns []
@harness.should be_allow_changes(@resource)
end
it "should be true if all dependents are being deleted" do
dep = stub 'dependent', :deleting? => true
@harness.relationship_graph.expects(:dependents).with(@resource).returns [dep]
@resource.expects(:purging?).returns true
@harness.should be_allow_changes(@resource)
end
it "should be false if the resource's dependents are not being deleted" do
dep = stub 'dependent', :deleting? => false, :ref => "myres"
@resource.expects(:warning)
@harness.relationship_graph.expects(:dependents).with(@resource).returns [dep]
@harness.should_not be_allow_changes(@resource)
end
end
describe "when finding the schedule" do
before do
@catalog = Puppet::Resource::Catalog.new
@resource.catalog = @catalog
end
it "should warn and return nil if the resource has no catalog" do
@resource.catalog = nil
@resource.expects(:warning)
@harness.schedule(@resource).should be_nil
end
it "should return nil if the resource specifies no schedule" do
@harness.schedule(@resource).should be_nil
end
it "should fail if the named schedule cannot be found" do
@resource[:schedule] = "whatever"
@resource.expects(:fail)
@harness.schedule(@resource)
end
it "should return the named schedule if it exists" do
sched = Puppet::Type.type(:schedule).new(:name => "sched")
@catalog.add_resource(sched)
@resource[:schedule] = "sched"
@harness.schedule(@resource).to_s.should == sched.to_s
end
end
describe "when determining if a resource is scheduled" do
before do
@catalog = Puppet::Resource::Catalog.new
@resource.catalog = @catalog
end
it "should return true if 'ignoreschedules' is set" do
Puppet[:ignoreschedules] = true
@resource[:schedule] = "meh"
@harness.should be_scheduled(@resource)
end
it "should return true if the resource has no schedule set" do
@harness.should be_scheduled(@resource)
end
it "should return the result of matching the schedule with the cached 'checked' time if a schedule is set" do
t = Time.now
@harness.expects(:cached).with(@resource, :checked).returns(t)
sched = Puppet::Type.type(:schedule).new(:name => "sched")
@catalog.add_resource(sched)
@resource[:schedule] = "sched"
sched.expects(:match?).with(t.to_i).returns "feh"
@harness.scheduled?(@resource).should == "feh"
end
end
it "should be able to cache data in the Storage module" do
data = {}
Puppet::Util::Storage.expects(:cache).with(@resource).returns data
@harness.cache(@resource, :foo, "something")
data[:foo].should == "something"
end
it "should be able to retrieve data from the cache" do
data = {:foo => "other"}
Puppet::Util::Storage.expects(:cache).with(@resource).returns data
@harness.cached(@resource, :foo).should == "other"
end
end
diff --git a/spec/unit/transaction_spec.rb b/spec/unit/transaction_spec.rb
index 04ce08c4f..bf7820227 100755
--- a/spec/unit/transaction_spec.rb
+++ b/spec/unit/transaction_spec.rb
@@ -1,671 +1,722 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'matchers/include_in_order'
require 'puppet_spec/compiler'
require 'puppet/transaction'
require 'fileutils'
describe Puppet::Transaction do
include PuppetSpec::Files
include PuppetSpec::Compiler
def catalog_with_resource(resource)
catalog = Puppet::Resource::Catalog.new
catalog.add_resource(resource)
catalog
end
def transaction_with_resource(resource)
transaction = Puppet::Transaction.new(catalog_with_resource(resource), nil, Puppet::Graph::RandomPrioritizer.new)
transaction
end
before do
@basepath = make_absolute("/what/ever")
@transaction = Puppet::Transaction.new(Puppet::Resource::Catalog.new, nil, Puppet::Graph::RandomPrioritizer.new)
end
it "should be able to look resource status up by resource reference" do
resource = Puppet::Type.type(:notify).new :title => "foobar"
transaction = transaction_with_resource(resource)
transaction.evaluate
transaction.resource_status(resource.to_s).should be_changed
end
# This will basically only ever be used during testing.
it "should automatically create resource statuses if asked for a non-existent status" do
resource = Puppet::Type.type(:notify).new :title => "foobar"
- @transaction.resource_status(resource).should be_instance_of(Puppet::Resource::Status)
+ transaction = transaction_with_resource(resource)
+ transaction.resource_status(resource).should be_instance_of(Puppet::Resource::Status)
end
it "should add provided resource statuses to its report" do
resource = Puppet::Type.type(:notify).new :title => "foobar"
transaction = transaction_with_resource(resource)
transaction.evaluate
status = transaction.resource_status(resource)
transaction.report.resource_statuses[resource.to_s].should equal(status)
end
it "should not consider there to be failed resources if no statuses are marked failed" do
resource = Puppet::Type.type(:notify).new :title => "foobar"
transaction = transaction_with_resource(resource)
transaction.evaluate
transaction.should_not be_any_failed
end
it "should use the provided report object" do
report = Puppet::Transaction::Report.new("apply")
transaction = Puppet::Transaction.new(Puppet::Resource::Catalog.new, report, nil)
transaction.report.should == report
end
it "should create a report if none is provided" do
transaction = Puppet::Transaction.new(Puppet::Resource::Catalog.new, nil, nil)
transaction.report.should be_kind_of Puppet::Transaction::Report
end
describe "when initializing" do
it "should create an event manager" do
- @transaction = Puppet::Transaction.new(Puppet::Resource::Catalog.new, nil, nil)
- @transaction.event_manager.should be_instance_of(Puppet::Transaction::EventManager)
- @transaction.event_manager.transaction.should equal(@transaction)
+ transaction = Puppet::Transaction.new(Puppet::Resource::Catalog.new, nil, nil)
+ transaction.event_manager.should be_instance_of(Puppet::Transaction::EventManager)
+ transaction.event_manager.transaction.should equal(transaction)
end
it "should create a resource harness" do
- @transaction = Puppet::Transaction.new(Puppet::Resource::Catalog.new, nil, nil)
- @transaction.resource_harness.should be_instance_of(Puppet::Transaction::ResourceHarness)
- @transaction.resource_harness.transaction.should equal(@transaction)
+ transaction = Puppet::Transaction.new(Puppet::Resource::Catalog.new, nil, nil)
+ transaction.resource_harness.should be_instance_of(Puppet::Transaction::ResourceHarness)
+ transaction.resource_harness.transaction.should equal(transaction)
end
it "should set retrieval time on the report" do
catalog = Puppet::Resource::Catalog.new
report = Puppet::Transaction::Report.new("apply")
catalog.retrieval_duration = 5
report.expects(:add_times).with(:config_retrieval, 5)
transaction = Puppet::Transaction.new(catalog, report, nil)
end
end
describe "when evaluating a resource" do
- before do
- @catalog = Puppet::Resource::Catalog.new
- @resource = Puppet::Type.type(:file).new :path => @basepath
- @catalog.add_resource(@resource)
-
- @transaction = Puppet::Transaction.new(@catalog, nil, Puppet::Graph::RandomPrioritizer.new)
- @transaction.stubs(:skip?).returns false
- end
+ let(:resource) { Puppet::Type.type(:file).new :path => @basepath }
it "should process events" do
- @transaction.event_manager.expects(:process_events).with(@resource)
+ transaction = transaction_with_resource(resource)
- @transaction.evaluate
+ transaction.expects(:skip?).with(resource).returns false
+ transaction.event_manager.expects(:process_events).with(resource)
+
+ transaction.evaluate
end
describe "and the resource should be skipped" do
- before do
- @transaction.expects(:skip?).with(@resource).returns true
- end
-
it "should mark the resource's status as skipped" do
- @transaction.evaluate
- @transaction.resource_status(@resource).should be_skipped
+ transaction = transaction_with_resource(resource)
+
+ transaction.expects(:skip?).with(resource).returns true
+
+ transaction.evaluate
+ transaction.resource_status(resource).should be_skipped
end
end
end
describe "when applying a resource" do
before do
@catalog = Puppet::Resource::Catalog.new
@resource = Puppet::Type.type(:file).new :path => @basepath
@catalog.add_resource(@resource)
@status = Puppet::Resource::Status.new(@resource)
@transaction = Puppet::Transaction.new(@catalog, nil, Puppet::Graph::RandomPrioritizer.new)
@transaction.event_manager.stubs(:queue_events)
end
it "should use its resource harness to apply the resource" do
@transaction.resource_harness.expects(:evaluate).with(@resource)
@transaction.evaluate
end
it "should add the resulting resource status to its status list" do
@transaction.resource_harness.stubs(:evaluate).returns(@status)
@transaction.evaluate
@transaction.resource_status(@resource).should be_instance_of(Puppet::Resource::Status)
end
it "should queue any events added to the resource status" do
@transaction.resource_harness.stubs(:evaluate).returns(@status)
@status.expects(:events).returns %w{a b}
@transaction.event_manager.expects(:queue_events).with(@resource, ["a", "b"])
@transaction.evaluate
end
it "should log and skip any resources that cannot be applied" do
@resource.expects(:properties).raises ArgumentError
@transaction.evaluate
@transaction.report.resource_statuses[@resource.to_s].should be_failed
end
it "should report any_failed if any resources failed" do
@resource.expects(:properties).raises ArgumentError
@transaction.evaluate
expect(@transaction).to be_any_failed
end
end
describe "#unblock" do
let(:graph) { @transaction.relationship_graph }
let(:resource) { Puppet::Type.type(:notify).new(:name => 'foo') }
it "should calculate the number of blockers if it's not known" do
graph.add_vertex(resource)
3.times do |i|
other = Puppet::Type.type(:notify).new(:name => i.to_s)
graph.add_vertex(other)
graph.add_edge(other, resource)
end
graph.unblock(resource)
graph.blockers[resource].should == 2
end
it "should decrement the number of blockers if there are any" do
graph.blockers[resource] = 40
graph.unblock(resource)
graph.blockers[resource].should == 39
end
it "should warn if there are no blockers" do
vertex = stub('vertex')
vertex.expects(:warning).with "appears to have a negative number of dependencies"
graph.blockers[vertex] = 0
graph.unblock(vertex)
end
it "should return true if the resource is now unblocked" do
graph.blockers[resource] = 1
graph.unblock(resource).should == true
end
it "should return false if the resource is still blocked" do
graph.blockers[resource] = 2
graph.unblock(resource).should == false
end
end
describe "when traversing" do
let(:path) { tmpdir('eval_generate') }
let(:resource) { Puppet::Type.type(:file).new(:path => path, :recurse => true) }
before :each do
@transaction.catalog.add_resource(resource)
end
it "should yield the resource even if eval_generate is called" do
Puppet::Transaction::AdditionalResourceGenerator.any_instance.expects(:eval_generate).with(resource).returns true
yielded = false
@transaction.evaluate do |res|
yielded = true if res == resource
end
yielded.should == true
end
it "should prefetch the provider if necessary" do
@transaction.expects(:prefetch_if_necessary).with(resource)
@transaction.evaluate {}
end
it "traverses independent resources before dependent resources" do
dependent = Puppet::Type.type(:notify).new(:name => "hello", :require => resource)
@transaction.catalog.add_resource(dependent)
seen = []
@transaction.evaluate do |res|
seen << res
end
expect(seen).to include_in_order(resource, dependent)
end
it "traverses completely independent resources in the order they appear in the catalog" do
independent = Puppet::Type.type(:notify).new(:name => "hello")
@transaction.catalog.add_resource(independent)
seen = []
@transaction.evaluate do |res|
seen << res
end
expect(seen).to include_in_order(resource, independent)
end
it "should fail unsuitable resources and go on if it gets blocked" do
dependent = Puppet::Type.type(:notify).new(:name => "hello", :require => resource)
@transaction.catalog.add_resource(dependent)
resource.stubs(:suitable?).returns false
evaluated = []
@transaction.evaluate do |res|
evaluated << res
end
# We should have gone on to evaluate the children
evaluated.should == [dependent]
@transaction.resource_status(resource).should be_failed
end
end
describe "when generating resources before traversal" do
let(:catalog) { Puppet::Resource::Catalog.new }
let(:transaction) { Puppet::Transaction.new(catalog, nil, Puppet::Graph::RandomPrioritizer.new) }
let(:generator) { Puppet::Type.type(:notify).new :title => "generator" }
let(:generated) do
%w[a b c].map { |name| Puppet::Type.type(:notify).new(:name => name) }
end
before :each do
catalog.add_resource generator
generator.stubs(:generate).returns generated
+ # avoid crude failures because of nil resources that result
+ # from implicit containment and lacking containers
+ catalog.stubs(:container_of).returns generator
end
it "should call 'generate' on all created resources" do
generated.each { |res| res.expects(:generate) }
transaction.evaluate
end
it "should finish all resources" do
generated.each { |res| res.expects(:finish) }
transaction.evaluate
end
it "should copy all tags to the newly generated resources" do
generator.tag('one', 'two')
transaction.evaluate
generated.each do |res|
res.must be_tagged(*generator.tags)
end
end
end
+ describe "when performing pre-run checks" do
+ let(:resource) { Puppet::Type.type(:notify).new(:title => "spec") }
+ let(:transaction) { transaction_with_resource(resource) }
+ let(:spec_exception) { 'spec-exception' }
+
+ it "should invoke each resource's hook and apply the catalog after no failures" do
+ resource.expects(:pre_run_check)
+
+ transaction.evaluate
+ end
+
+ it "should abort the transaction on failure" do
+ resource.expects(:pre_run_check).raises(Puppet::Error, spec_exception)
+
+ expect { transaction.evaluate }.to raise_error(Puppet::Error, /Some pre-run checks failed/)
+ end
+
+ it "should log the resource-specific exception" do
+ resource.expects(:pre_run_check).raises(Puppet::Error, spec_exception)
+ resource.expects(:log_exception).with(responds_with(:message, spec_exception))
+
+ expect { transaction.evaluate }.to raise_error(Puppet::Error)
+ end
+ end
+
describe "when skipping a resource" do
before :each do
@resource = Puppet::Type.type(:notify).new :name => "foo"
@catalog = Puppet::Resource::Catalog.new
@resource.catalog = @catalog
@transaction = Puppet::Transaction.new(@catalog, nil, nil)
end
it "should skip resource with missing tags" do
@transaction.stubs(:missing_tags?).returns(true)
@transaction.should be_skip(@resource)
end
it "should skip unscheduled resources" do
@transaction.stubs(:scheduled?).returns(false)
@transaction.should be_skip(@resource)
end
it "should skip resources with failed dependencies" do
@transaction.stubs(:failed_dependencies?).returns(true)
@transaction.should be_skip(@resource)
end
it "should skip virtual resource" do
@resource.stubs(:virtual?).returns true
@transaction.should be_skip(@resource)
end
it "should skip device only resouce on normal host" do
@resource.stubs(:appliable_to_host?).returns false
@resource.stubs(:appliable_to_device?).returns true
@transaction.for_network_device = false
@transaction.should be_skip(@resource)
end
it "should not skip device only resouce on remote device" do
@resource.stubs(:appliable_to_host?).returns false
@resource.stubs(:appliable_to_device?).returns true
@transaction.for_network_device = true
@transaction.should_not be_skip(@resource)
end
it "should skip host resouce on device" do
@resource.stubs(:appliable_to_host?).returns true
@resource.stubs(:appliable_to_device?).returns false
@transaction.for_network_device = true
@transaction.should be_skip(@resource)
end
it "should not skip resouce available on both device and host when on device" do
@resource.stubs(:appliable_to_host?).returns true
@resource.stubs(:appliable_to_device?).returns true
@transaction.for_network_device = true
@transaction.should_not be_skip(@resource)
end
it "should not skip resouce available on both device and host when on host" do
@resource.stubs(:appliable_to_host?).returns true
@resource.stubs(:appliable_to_device?).returns true
@transaction.for_network_device = false
@transaction.should_not be_skip(@resource)
end
end
describe "when determining if tags are missing" do
before :each do
@resource = Puppet::Type.type(:notify).new :name => "foo"
@catalog = Puppet::Resource::Catalog.new
@resource.catalog = @catalog
@transaction = Puppet::Transaction.new(@catalog, nil, nil)
@transaction.stubs(:ignore_tags?).returns false
end
it "should not be missing tags if tags are being ignored" do
@transaction.expects(:ignore_tags?).returns true
@resource.expects(:tagged?).never
@transaction.should_not be_missing_tags(@resource)
end
it "should not be missing tags if the transaction tags are empty" do
@transaction.tags = []
@resource.expects(:tagged?).never
@transaction.should_not be_missing_tags(@resource)
end
it "should otherwise let the resource determine if it is missing tags" do
tags = ['one', 'two']
@transaction.tags = tags
@transaction.should be_missing_tags(@resource)
end
end
describe "when determining if a resource should be scheduled" do
before :each do
@resource = Puppet::Type.type(:notify).new :name => "foo"
@catalog = Puppet::Resource::Catalog.new
@catalog.add_resource(@resource)
@transaction = Puppet::Transaction.new(@catalog, nil, Puppet::Graph::RandomPrioritizer.new)
end
it "should always schedule resources if 'ignoreschedules' is set" do
@transaction.ignoreschedules = true
@transaction.resource_harness.expects(:scheduled?).never
@transaction.evaluate
@transaction.resource_status(@resource).should be_changed
end
it "should let the resource harness determine whether the resource should be scheduled" do
@transaction.resource_harness.expects(:scheduled?).with(@resource).returns "feh"
@transaction.evaluate
end
end
describe "when prefetching" do
let(:catalog) { Puppet::Resource::Catalog.new }
let(:transaction) { Puppet::Transaction.new(catalog, nil, nil) }
let(:resource) { Puppet::Type.type(:sshkey).new :title => "foo", :name => "bar", :type => :dsa, :key => "eh", :provider => :parsed }
let(:resource2) { Puppet::Type.type(:package).new :title => "blah", :provider => "apt" }
before :each do
catalog.add_resource resource
catalog.add_resource resource2
end
it "should match resources by name, not title" do
resource.provider.class.expects(:prefetch).with("bar" => resource)
transaction.prefetch_if_necessary(resource)
end
it "should not prefetch a provider which has already been prefetched" do
transaction.prefetched_providers[:sshkey][:parsed] = true
resource.provider.class.expects(:prefetch).never
transaction.prefetch_if_necessary(resource)
end
it "should mark the provider prefetched" do
resource.provider.class.stubs(:prefetch)
transaction.prefetch_if_necessary(resource)
transaction.prefetched_providers[:sshkey][:parsed].should be_true
end
it "should prefetch resources without a provider if prefetching the default provider" do
other = Puppet::Type.type(:sshkey).new :name => "other"
other.instance_variable_set(:@provider, nil)
catalog.add_resource other
resource.provider.class.expects(:prefetch).with('bar' => resource, 'other' => other)
transaction.prefetch_if_necessary(resource)
end
end
describe "during teardown" do
+ let(:catalog) { Puppet::Resource::Catalog.new }
+ let(:transaction) do
+ Puppet::Transaction.new(catalog, nil, Puppet::Graph::RandomPrioritizer.new)
+ end
+
+ let(:teardown_type) do
+ Puppet::Type.newtype(:teardown_test) do
+ newparam(:name) {}
+ end
+ end
+
before :each do
- @catalog = Puppet::Resource::Catalog.new
- @transaction = Puppet::Transaction.new(@catalog, nil, Puppet::Graph::RandomPrioritizer.new)
+ teardown_type.provide(:teardown_provider) do
+ class << self
+ attr_reader :result
+
+ def post_resource_eval
+ @result = 'passed'
+ end
+ end
+ end
end
it "should call ::post_resource_eval on provider classes that support it" do
- @resource = Puppet::Type.type(:notify).new :title => "foo"
- @catalog.add_resource @resource
+ resource = teardown_type.new(:title => "foo", :provider => :teardown_provider)
- # 'expects' will cause 'respond_to?(:post_resource_eval)' to return true
- @resource.provider.class.expects(:post_resource_eval)
- @transaction.evaluate
+ transaction = transaction_with_resource(resource)
+ transaction.evaluate
+
+ expect(resource.provider.class.result).to eq('passed')
end
it "should call ::post_resource_eval even if other providers' ::post_resource_eval fails" do
- @resource3 = Puppet::Type.type(:user).new :title => "bloo"
- @resource3.provider.class.stubs(:post_resource_eval).raises
- @resource4 = Puppet::Type.type(:notify).new :title => "blob"
- @resource4.provider.class.stubs(:post_resource_eval).raises
- @catalog.add_resource @resource3
- @catalog.add_resource @resource4
-
- # ruby's Set does not guarantee ordering, so both resource3 and resource4
- # need to expect post_resource_eval, rather than just the 'first' one.
- @resource3.provider.class.expects(:post_resource_eval)
- @resource4.provider.class.expects(:post_resource_eval)
+ teardown_type.provide(:always_fails) do
+ class << self
+ attr_reader :result
+
+ def post_resource_eval
+ @result = 'failed'
+ raise Puppet::Error, "This provider always fails"
+ end
+ end
+ end
- @transaction.evaluate
+ good_resource = teardown_type.new(:title => "bloo", :provider => :teardown_provider)
+ bad_resource = teardown_type.new(:title => "blob", :provider => :always_fails)
+
+ catalog.add_resource(bad_resource)
+ catalog.add_resource(good_resource)
+
+ transaction.evaluate
+
+ expect(good_resource.provider.class.result).to eq('passed')
+ expect(bad_resource.provider.class.result).to eq('failed')
end
it "should call ::post_resource_eval even if one of the resources fails" do
- @resource3 = Puppet::Type.type(:notify).new :title => "bloo"
- @resource3.stubs(:retrieve_resource).raises
- @catalog.add_resource @resource3
+ resource = teardown_type.new(:title => "foo", :provider => :teardown_provider)
+ resource.stubs(:retrieve_resource).raises
+ catalog.add_resource resource
- @resource3.provider.class.expects(:post_resource_eval)
+ resource.provider.class.expects(:post_resource_eval)
- @transaction.evaluate
+ transaction.evaluate
end
end
describe 'when checking application run state' do
before do
@catalog = Puppet::Resource::Catalog.new
@transaction = Puppet::Transaction.new(@catalog, nil, Puppet::Graph::RandomPrioritizer.new)
end
context "when stop is requested" do
before :each do
Puppet::Application.stubs(:stop_requested?).returns(true)
end
it 'should return true for :stop_processing?' do
@transaction.should be_stop_processing
end
it 'always evaluates non-host_config catalogs' do
@catalog.host_config = false
@transaction.should_not be_stop_processing
end
end
it 'should return false for :stop_processing? if Puppet::Application.stop_requested? is false' do
Puppet::Application.stubs(:stop_requested?).returns(false)
@transaction.stop_processing?.should be_false
end
describe 'within an evaluate call' do
before do
@resource = Puppet::Type.type(:notify).new :title => "foobar"
@catalog.add_resource @resource
@transaction.stubs(:add_dynamically_generated_resources)
end
it 'should stop processing if :stop_processing? is true' do
@transaction.stubs(:stop_processing?).returns(true)
@transaction.expects(:eval_resource).never
@transaction.evaluate
end
it 'should continue processing if :stop_processing? is false' do
@transaction.stubs(:stop_processing?).returns(false)
@transaction.expects(:eval_resource).returns(nil)
@transaction.evaluate
end
end
end
it "errors with a dependency cycle for a resource that requires itself" do
expect do
apply_compiled_manifest(<<-MANIFEST)
notify { cycle: require => Notify[cycle] }
MANIFEST
end.to raise_error(Puppet::Error, /Found 1 dependency cycle:.*\(Notify\[cycle\] => Notify\[cycle\]\)/m)
end
it "errors with a dependency cycle for a self-requiring resource also required by another resource" do
expect do
apply_compiled_manifest(<<-MANIFEST)
notify { cycle: require => Notify[cycle] }
notify { other: require => Notify[cycle] }
MANIFEST
end.to raise_error(Puppet::Error, /Found 1 dependency cycle:.*\(Notify\[cycle\] => Notify\[cycle\]\)/m)
end
it "errors with a dependency cycle for a resource that requires itself and another resource" do
expect do
apply_compiled_manifest(<<-MANIFEST)
notify { cycle:
require => [Notify[other], Notify[cycle]]
}
notify { other: }
MANIFEST
end.to raise_error(Puppet::Error, /Found 1 dependency cycle:.*\(Notify\[cycle\] => Notify\[cycle\]\)/m)
end
it "errors with a dependency cycle for a resource that is later modified to require itself" do
expect do
apply_compiled_manifest(<<-MANIFEST)
notify { cycle: }
Notify <| title == 'cycle' |> {
require => Notify[cycle]
}
MANIFEST
end.to raise_error(Puppet::Error, /Found 1 dependency cycle:.*\(Notify\[cycle\] => Notify\[cycle\]\)/m)
end
it "reports a changed resource with a successful run" do
transaction = apply_compiled_manifest("notify { one: }")
transaction.report.status.should == 'changed'
transaction.report.resource_statuses['Notify[one]'].should be_changed
end
describe "when interrupted" do
it "marks unprocessed resources as skipped" do
Puppet::Application.stop!
transaction = apply_compiled_manifest(<<-MANIFEST)
notify { a: } ->
notify { b: }
MANIFEST
transaction.report.resource_statuses['Notify[a]'].should be_skipped
transaction.report.resource_statuses['Notify[b]'].should be_skipped
end
end
end
describe Puppet::Transaction, " when determining tags" do
before do
@config = Puppet::Resource::Catalog.new
@transaction = Puppet::Transaction.new(@config, nil, nil)
end
it "should default to the tags specified in the :tags setting" do
Puppet[:tags] = "one"
@transaction.should be_tagged("one")
end
it "should split tags based on ','" do
Puppet[:tags] = "one,two"
@transaction.should be_tagged("one")
@transaction.should be_tagged("two")
end
it "should use any tags set after creation" do
Puppet[:tags] = ""
@transaction.tags = %w{one two}
@transaction.should be_tagged("one")
@transaction.should be_tagged("two")
end
it "should always convert assigned tags to an array" do
@transaction.tags = "one::two"
@transaction.should be_tagged("one::two")
end
it "should accept a comma-delimited string" do
@transaction.tags = "one, two"
@transaction.should be_tagged("one")
@transaction.should be_tagged("two")
end
it "should accept an empty string" do
@transaction.tags = "one, two"
@transaction.should be_tagged("one")
@transaction.tags = ""
@transaction.should_not be_tagged("one")
end
end
diff --git a/spec/unit/type/cron_spec.rb b/spec/unit/type/cron_spec.rb
index b4a853173..82f646290 100755
--- a/spec/unit/type/cron_spec.rb
+++ b/spec/unit/type/cron_spec.rb
@@ -1,543 +1,543 @@
#! /usr/bin/env ruby
require 'spec_helper'
describe Puppet::Type.type(:cron), :unless => Puppet.features.microsoft_windows? do
let(:simple_provider) do
@provider_class = described_class.provide(:simple) { mk_resource_methods }
@provider_class.stubs(:suitable?).returns true
@provider_class
end
before :each do
described_class.stubs(:defaultprovider).returns @provider_class
end
after :each do
described_class.unprovide(:simple)
end
it "should have :name be its namevar" do
described_class.key_attributes.should == [:name]
end
describe "when validating attributes" do
[:name, :provider].each do |param|
it "should have a #{param} parameter" do
described_class.attrtype(param).should == :param
end
end
[:command, :special, :minute, :hour, :weekday, :month, :monthday, :environment, :user, :target].each do |property|
it "should have a #{property} property" do
described_class.attrtype(property).should == :property
end
end
[:command, :minute, :hour, :weekday, :month, :monthday].each do |cronparam|
it "should have #{cronparam} of type CronParam" do
described_class.attrclass(cronparam).ancestors.should include CronParam
end
end
end
describe "when validating values" do
describe "ensure" do
it "should support present as a value for ensure" do
expect { described_class.new(:name => 'foo', :ensure => :present) }.to_not raise_error
end
it "should support absent as a value for ensure" do
expect { described_class.new(:name => 'foo', :ensure => :absent) }.to_not raise_error
end
it "should not support other values" do
expect { described_class.new(:name => 'foo', :ensure => :foo) }.to raise_error(Puppet::Error, /Invalid value/)
end
end
describe "command" do
it "should discard leading spaces" do
described_class.new(:name => 'foo', :command => " /bin/true")[:command].should_not match Regexp.new(" ")
end
it "should discard trailing spaces" do
described_class.new(:name => 'foo', :command => "/bin/true ")[:command].should_not match Regexp.new(" ")
end
end
describe "minute" do
it "should support absent" do
expect { described_class.new(:name => 'foo', :minute => 'absent') }.to_not raise_error
end
it "should support *" do
expect { described_class.new(:name => 'foo', :minute => '*') }.to_not raise_error
end
it "should translate absent to :absent" do
described_class.new(:name => 'foo', :minute => 'absent')[:minute].should == :absent
end
it "should translate * to :absent" do
described_class.new(:name => 'foo', :minute => '*')[:minute].should == :absent
end
it "should support valid single values" do
expect { described_class.new(:name => 'foo', :minute => '0') }.to_not raise_error
expect { described_class.new(:name => 'foo', :minute => '1') }.to_not raise_error
expect { described_class.new(:name => 'foo', :minute => '59') }.to_not raise_error
end
it "should not support non numeric characters" do
expect { described_class.new(:name => 'foo', :minute => 'z59') }.to raise_error(Puppet::Error, /z59 is not a valid minute/)
expect { described_class.new(:name => 'foo', :minute => '5z9') }.to raise_error(Puppet::Error, /5z9 is not a valid minute/)
expect { described_class.new(:name => 'foo', :minute => '59z') }.to raise_error(Puppet::Error, /59z is not a valid minute/)
end
it "should not support single values out of range" do
expect { described_class.new(:name => 'foo', :minute => '-1') }.to raise_error(Puppet::Error, /-1 is not a valid minute/)
expect { described_class.new(:name => 'foo', :minute => '60') }.to raise_error(Puppet::Error, /60 is not a valid minute/)
expect { described_class.new(:name => 'foo', :minute => '61') }.to raise_error(Puppet::Error, /61 is not a valid minute/)
expect { described_class.new(:name => 'foo', :minute => '120') }.to raise_error(Puppet::Error, /120 is not a valid minute/)
end
it "should support valid multiple values" do
expect { described_class.new(:name => 'foo', :minute => ['0','1','59'] ) }.to_not raise_error
expect { described_class.new(:name => 'foo', :minute => ['40','30','20'] ) }.to_not raise_error
expect { described_class.new(:name => 'foo', :minute => ['10','30','20'] ) }.to_not raise_error
end
it "should not support multiple values if at least one is invalid" do
# one invalid
expect { described_class.new(:name => 'foo', :minute => ['0','1','60'] ) }.to raise_error(Puppet::Error, /60 is not a valid minute/)
expect { described_class.new(:name => 'foo', :minute => ['0','120','59'] ) }.to raise_error(Puppet::Error, /120 is not a valid minute/)
expect { described_class.new(:name => 'foo', :minute => ['-1','1','59'] ) }.to raise_error(Puppet::Error, /-1 is not a valid minute/)
# two invalid
expect { described_class.new(:name => 'foo', :minute => ['0','61','62'] ) }.to raise_error(Puppet::Error, /(61|62) is not a valid minute/)
# all invalid
expect { described_class.new(:name => 'foo', :minute => ['-1','61','62'] ) }.to raise_error(Puppet::Error, /(-1|61|62) is not a valid minute/)
end
it "should support valid step syntax" do
expect { described_class.new(:name => 'foo', :minute => '*/2' ) }.to_not raise_error
expect { described_class.new(:name => 'foo', :minute => '10-16/2' ) }.to_not raise_error
end
it "should not support invalid steps" do
expect { described_class.new(:name => 'foo', :minute => '*/A' ) }.to raise_error(Puppet::Error, /\*\/A is not a valid minute/)
expect { described_class.new(:name => 'foo', :minute => '*/2A' ) }.to raise_error(Puppet::Error, /\*\/2A is not a valid minute/)
# As it turns out, cron does not complain about steps that exceed the valid range
# expect { described_class.new(:name => 'foo', :minute => '*/120' ) }.to raise_error(Puppet::Error, /is not a valid minute/)
end
end
describe "hour" do
it "should support absent" do
expect { described_class.new(:name => 'foo', :hour => 'absent') }.to_not raise_error
end
it "should support *" do
expect { described_class.new(:name => 'foo', :hour => '*') }.to_not raise_error
end
it "should translate absent to :absent" do
described_class.new(:name => 'foo', :hour => 'absent')[:hour].should == :absent
end
it "should translate * to :absent" do
described_class.new(:name => 'foo', :hour => '*')[:hour].should == :absent
end
it "should support valid single values" do
expect { described_class.new(:name => 'foo', :hour => '0') }.to_not raise_error
expect { described_class.new(:name => 'foo', :hour => '11') }.to_not raise_error
expect { described_class.new(:name => 'foo', :hour => '12') }.to_not raise_error
expect { described_class.new(:name => 'foo', :hour => '13') }.to_not raise_error
expect { described_class.new(:name => 'foo', :hour => '23') }.to_not raise_error
end
it "should not support non numeric characters" do
expect { described_class.new(:name => 'foo', :hour => 'z15') }.to raise_error(Puppet::Error, /z15 is not a valid hour/)
expect { described_class.new(:name => 'foo', :hour => '1z5') }.to raise_error(Puppet::Error, /1z5 is not a valid hour/)
expect { described_class.new(:name => 'foo', :hour => '15z') }.to raise_error(Puppet::Error, /15z is not a valid hour/)
end
it "should not support single values out of range" do
expect { described_class.new(:name => 'foo', :hour => '-1') }.to raise_error(Puppet::Error, /-1 is not a valid hour/)
expect { described_class.new(:name => 'foo', :hour => '24') }.to raise_error(Puppet::Error, /24 is not a valid hour/)
expect { described_class.new(:name => 'foo', :hour => '120') }.to raise_error(Puppet::Error, /120 is not a valid hour/)
end
it "should support valid multiple values" do
expect { described_class.new(:name => 'foo', :hour => ['0','1','23'] ) }.to_not raise_error
expect { described_class.new(:name => 'foo', :hour => ['5','16','14'] ) }.to_not raise_error
expect { described_class.new(:name => 'foo', :hour => ['16','13','9'] ) }.to_not raise_error
end
it "should not support multiple values if at least one is invalid" do
# one invalid
expect { described_class.new(:name => 'foo', :hour => ['0','1','24'] ) }.to raise_error(Puppet::Error, /24 is not a valid hour/)
expect { described_class.new(:name => 'foo', :hour => ['0','-1','5'] ) }.to raise_error(Puppet::Error, /-1 is not a valid hour/)
expect { described_class.new(:name => 'foo', :hour => ['-1','1','23'] ) }.to raise_error(Puppet::Error, /-1 is not a valid hour/)
# two invalid
expect { described_class.new(:name => 'foo', :hour => ['0','25','26'] ) }.to raise_error(Puppet::Error, /(25|26) is not a valid hour/)
# all invalid
expect { described_class.new(:name => 'foo', :hour => ['-1','24','120'] ) }.to raise_error(Puppet::Error, /(-1|24|120) is not a valid hour/)
end
it "should support valid step syntax" do
expect { described_class.new(:name => 'foo', :hour => '*/2' ) }.to_not raise_error
expect { described_class.new(:name => 'foo', :hour => '10-18/4' ) }.to_not raise_error
end
it "should not support invalid steps" do
expect { described_class.new(:name => 'foo', :hour => '*/A' ) }.to raise_error(Puppet::Error, /\*\/A is not a valid hour/)
expect { described_class.new(:name => 'foo', :hour => '*/2A' ) }.to raise_error(Puppet::Error, /\*\/2A is not a valid hour/)
# As it turns out, cron does not complain about steps that exceed the valid range
# expect { described_class.new(:name => 'foo', :hour => '*/26' ) }.to raise_error(Puppet::Error, /is not a valid hour/)
end
end
describe "weekday" do
it "should support absent" do
expect { described_class.new(:name => 'foo', :weekday => 'absent') }.to_not raise_error
end
it "should support *" do
expect { described_class.new(:name => 'foo', :weekday => '*') }.to_not raise_error
end
it "should translate absent to :absent" do
described_class.new(:name => 'foo', :weekday => 'absent')[:weekday].should == :absent
end
it "should translate * to :absent" do
described_class.new(:name => 'foo', :weekday => '*')[:weekday].should == :absent
end
it "should support valid numeric weekdays" do
expect { described_class.new(:name => 'foo', :weekday => '0') }.to_not raise_error
expect { described_class.new(:name => 'foo', :weekday => '1') }.to_not raise_error
expect { described_class.new(:name => 'foo', :weekday => '6') }.to_not raise_error
# According to http://www.manpagez.com/man/5/crontab 7 is also valid (Sunday)
expect { described_class.new(:name => 'foo', :weekday => '7') }.to_not raise_error
end
it "should support valid weekdays as words (long version)" do
expect { described_class.new(:name => 'foo', :weekday => 'Monday') }.to_not raise_error
expect { described_class.new(:name => 'foo', :weekday => 'Tuesday') }.to_not raise_error
expect { described_class.new(:name => 'foo', :weekday => 'Wednesday') }.to_not raise_error
expect { described_class.new(:name => 'foo', :weekday => 'Thursday') }.to_not raise_error
expect { described_class.new(:name => 'foo', :weekday => 'Friday') }.to_not raise_error
expect { described_class.new(:name => 'foo', :weekday => 'Saturday') }.to_not raise_error
expect { described_class.new(:name => 'foo', :weekday => 'Sunday') }.to_not raise_error
end
it "should support valid weekdays as words (3 character version)" do
expect { described_class.new(:name => 'foo', :weekday => 'Mon') }.to_not raise_error
expect { described_class.new(:name => 'foo', :weekday => 'Tue') }.to_not raise_error
expect { described_class.new(:name => 'foo', :weekday => 'Wed') }.to_not raise_error
expect { described_class.new(:name => 'foo', :weekday => 'Thu') }.to_not raise_error
expect { described_class.new(:name => 'foo', :weekday => 'Fri') }.to_not raise_error
expect { described_class.new(:name => 'foo', :weekday => 'Sat') }.to_not raise_error
expect { described_class.new(:name => 'foo', :weekday => 'Sun') }.to_not raise_error
end
it "should not support numeric values out of range" do
expect { described_class.new(:name => 'foo', :weekday => '-1') }.to raise_error(Puppet::Error, /-1 is not a valid weekday/)
expect { described_class.new(:name => 'foo', :weekday => '8') }.to raise_error(Puppet::Error, /8 is not a valid weekday/)
end
it "should not support invalid weekday names" do
expect { described_class.new(:name => 'foo', :weekday => 'Sar') }.to raise_error(Puppet::Error, /Sar is not a valid weekday/)
end
it "should support valid multiple values" do
expect { described_class.new(:name => 'foo', :weekday => ['0','1','6'] ) }.to_not raise_error
expect { described_class.new(:name => 'foo', :weekday => ['Mon','Wed','Friday'] ) }.to_not raise_error
end
it "should not support multiple values if at least one is invalid" do
# one invalid
expect { described_class.new(:name => 'foo', :weekday => ['0','1','8'] ) }.to raise_error(Puppet::Error, /8 is not a valid weekday/)
expect { described_class.new(:name => 'foo', :weekday => ['Mon','Fii','Sat'] ) }.to raise_error(Puppet::Error, /Fii is not a valid weekday/)
# two invalid
expect { described_class.new(:name => 'foo', :weekday => ['Mos','Fii','Sat'] ) }.to raise_error(Puppet::Error, /(Mos|Fii) is not a valid weekday/)
# all invalid
expect { described_class.new(:name => 'foo', :weekday => ['Mos','Fii','Saa'] ) }.to raise_error(Puppet::Error, /(Mos|Fii|Saa) is not a valid weekday/)
expect { described_class.new(:name => 'foo', :weekday => ['-1','8','11'] ) }.to raise_error(Puppet::Error, /(-1|8|11) is not a valid weekday/)
end
it "should support valid step syntax" do
expect { described_class.new(:name => 'foo', :weekday => '*/2' ) }.to_not raise_error
expect { described_class.new(:name => 'foo', :weekday => '0-4/2' ) }.to_not raise_error
end
it "should not support invalid steps" do
expect { described_class.new(:name => 'foo', :weekday => '*/A' ) }.to raise_error(Puppet::Error, /\*\/A is not a valid weekday/)
expect { described_class.new(:name => 'foo', :weekday => '*/2A' ) }.to raise_error(Puppet::Error, /\*\/2A is not a valid weekday/)
# As it turns out, cron does not complain about steps that exceed the valid range
# expect { described_class.new(:name => 'foo', :weekday => '*/9' ) }.to raise_error(Puppet::Error, /is not a valid weekday/)
end
end
describe "month" do
it "should support absent" do
expect { described_class.new(:name => 'foo', :month => 'absent') }.to_not raise_error
end
it "should support *" do
expect { described_class.new(:name => 'foo', :month => '*') }.to_not raise_error
end
it "should translate absent to :absent" do
described_class.new(:name => 'foo', :month => 'absent')[:month].should == :absent
end
it "should translate * to :absent" do
described_class.new(:name => 'foo', :month => '*')[:month].should == :absent
end
it "should support valid numeric values" do
expect { described_class.new(:name => 'foo', :month => '1') }.to_not raise_error
expect { described_class.new(:name => 'foo', :month => '12') }.to_not raise_error
end
it "should support valid months as words" do
expect { described_class.new(:name => 'foo', :month => 'January') }.to_not raise_error
expect { described_class.new(:name => 'foo', :month => 'February') }.to_not raise_error
expect { described_class.new(:name => 'foo', :month => 'March') }.to_not raise_error
expect { described_class.new(:name => 'foo', :month => 'April') }.to_not raise_error
expect { described_class.new(:name => 'foo', :month => 'May') }.to_not raise_error
expect { described_class.new(:name => 'foo', :month => 'June') }.to_not raise_error
expect { described_class.new(:name => 'foo', :month => 'July') }.to_not raise_error
expect { described_class.new(:name => 'foo', :month => 'August') }.to_not raise_error
expect { described_class.new(:name => 'foo', :month => 'September') }.to_not raise_error
expect { described_class.new(:name => 'foo', :month => 'October') }.to_not raise_error
expect { described_class.new(:name => 'foo', :month => 'November') }.to_not raise_error
expect { described_class.new(:name => 'foo', :month => 'December') }.to_not raise_error
end
it "should support valid months as words (3 character short version)" do
expect { described_class.new(:name => 'foo', :month => 'Jan') }.to_not raise_error
expect { described_class.new(:name => 'foo', :month => 'Feb') }.to_not raise_error
expect { described_class.new(:name => 'foo', :month => 'Mar') }.to_not raise_error
expect { described_class.new(:name => 'foo', :month => 'Apr') }.to_not raise_error
expect { described_class.new(:name => 'foo', :month => 'May') }.to_not raise_error
expect { described_class.new(:name => 'foo', :month => 'Jun') }.to_not raise_error
expect { described_class.new(:name => 'foo', :month => 'Jul') }.to_not raise_error
expect { described_class.new(:name => 'foo', :month => 'Aug') }.to_not raise_error
expect { described_class.new(:name => 'foo', :month => 'Sep') }.to_not raise_error
expect { described_class.new(:name => 'foo', :month => 'Oct') }.to_not raise_error
expect { described_class.new(:name => 'foo', :month => 'Nov') }.to_not raise_error
expect { described_class.new(:name => 'foo', :month => 'Dec') }.to_not raise_error
end
it "should not support numeric values out of range" do
expect { described_class.new(:name => 'foo', :month => '-1') }.to raise_error(Puppet::Error, /-1 is not a valid month/)
expect { described_class.new(:name => 'foo', :month => '0') }.to raise_error(Puppet::Error, /0 is not a valid month/)
expect { described_class.new(:name => 'foo', :month => '13') }.to raise_error(Puppet::Error, /13 is not a valid month/)
end
it "should not support words that are not valid months" do
expect { described_class.new(:name => 'foo', :month => 'Jal') }.to raise_error(Puppet::Error, /Jal is not a valid month/)
end
it "should not support single values out of range" do
expect { described_class.new(:name => 'foo', :month => '-1') }.to raise_error(Puppet::Error, /-1 is not a valid month/)
expect { described_class.new(:name => 'foo', :month => '60') }.to raise_error(Puppet::Error, /60 is not a valid month/)
expect { described_class.new(:name => 'foo', :month => '61') }.to raise_error(Puppet::Error, /61 is not a valid month/)
expect { described_class.new(:name => 'foo', :month => '120') }.to raise_error(Puppet::Error, /120 is not a valid month/)
end
it "should support valid multiple values" do
expect { described_class.new(:name => 'foo', :month => ['1','9','12'] ) }.to_not raise_error
expect { described_class.new(:name => 'foo', :month => ['Jan','March','Jul'] ) }.to_not raise_error
end
it "should not support multiple values if at least one is invalid" do
# one invalid
expect { described_class.new(:name => 'foo', :month => ['0','1','12'] ) }.to raise_error(Puppet::Error, /0 is not a valid month/)
expect { described_class.new(:name => 'foo', :month => ['1','13','10'] ) }.to raise_error(Puppet::Error, /13 is not a valid month/)
expect { described_class.new(:name => 'foo', :month => ['Jan','Feb','Jxx'] ) }.to raise_error(Puppet::Error, /Jxx is not a valid month/)
# two invalid
expect { described_class.new(:name => 'foo', :month => ['Jan','Fex','Jux'] ) }.to raise_error(Puppet::Error, /(Fex|Jux) is not a valid month/)
# all invalid
expect { described_class.new(:name => 'foo', :month => ['-1','0','13'] ) }.to raise_error(Puppet::Error, /(-1|0|13) is not a valid month/)
expect { described_class.new(:name => 'foo', :month => ['Jax','Fex','Aux'] ) }.to raise_error(Puppet::Error, /(Jax|Fex|Aux) is not a valid month/)
end
it "should support valid step syntax" do
expect { described_class.new(:name => 'foo', :month => '*/2' ) }.to_not raise_error
expect { described_class.new(:name => 'foo', :month => '1-12/3' ) }.to_not raise_error
end
it "should not support invalid steps" do
expect { described_class.new(:name => 'foo', :month => '*/A' ) }.to raise_error(Puppet::Error, /\*\/A is not a valid month/)
expect { described_class.new(:name => 'foo', :month => '*/2A' ) }.to raise_error(Puppet::Error, /\*\/2A is not a valid month/)
# As it turns out, cron does not complain about steps that exceed the valid range
# expect { described_class.new(:name => 'foo', :month => '*/13' ) }.to raise_error(Puppet::Error, /is not a valid month/)
end
end
describe "monthday" do
it "should support absent" do
expect { described_class.new(:name => 'foo', :monthday => 'absent') }.to_not raise_error
end
it "should support *" do
expect { described_class.new(:name => 'foo', :monthday => '*') }.to_not raise_error
end
it "should translate absent to :absent" do
described_class.new(:name => 'foo', :monthday => 'absent')[:monthday].should == :absent
end
it "should translate * to :absent" do
described_class.new(:name => 'foo', :monthday => '*')[:monthday].should == :absent
end
it "should support valid single values" do
expect { described_class.new(:name => 'foo', :monthday => '1') }.to_not raise_error
expect { described_class.new(:name => 'foo', :monthday => '30') }.to_not raise_error
expect { described_class.new(:name => 'foo', :monthday => '31') }.to_not raise_error
end
it "should not support non numeric characters" do
expect { described_class.new(:name => 'foo', :monthday => 'z23') }.to raise_error(Puppet::Error, /z23 is not a valid monthday/)
expect { described_class.new(:name => 'foo', :monthday => '2z3') }.to raise_error(Puppet::Error, /2z3 is not a valid monthday/)
expect { described_class.new(:name => 'foo', :monthday => '23z') }.to raise_error(Puppet::Error, /23z is not a valid monthday/)
end
it "should not support single values out of range" do
expect { described_class.new(:name => 'foo', :monthday => '-1') }.to raise_error(Puppet::Error, /-1 is not a valid monthday/)
expect { described_class.new(:name => 'foo', :monthday => '0') }.to raise_error(Puppet::Error, /0 is not a valid monthday/)
expect { described_class.new(:name => 'foo', :monthday => '32') }.to raise_error(Puppet::Error, /32 is not a valid monthday/)
end
it "should support valid multiple values" do
expect { described_class.new(:name => 'foo', :monthday => ['1','23','31'] ) }.to_not raise_error
expect { described_class.new(:name => 'foo', :monthday => ['31','23','1'] ) }.to_not raise_error
expect { described_class.new(:name => 'foo', :monthday => ['1','31','23'] ) }.to_not raise_error
end
it "should not support multiple values if at least one is invalid" do
# one invalid
expect { described_class.new(:name => 'foo', :monthday => ['1','23','32'] ) }.to raise_error(Puppet::Error, /32 is not a valid monthday/)
expect { described_class.new(:name => 'foo', :monthday => ['-1','12','23'] ) }.to raise_error(Puppet::Error, /-1 is not a valid monthday/)
expect { described_class.new(:name => 'foo', :monthday => ['13','32','30'] ) }.to raise_error(Puppet::Error, /32 is not a valid monthday/)
# two invalid
expect { described_class.new(:name => 'foo', :monthday => ['-1','0','23'] ) }.to raise_error(Puppet::Error, /(-1|0) is not a valid monthday/)
# all invalid
expect { described_class.new(:name => 'foo', :monthday => ['-1','0','32'] ) }.to raise_error(Puppet::Error, /(-1|0|32) is not a valid monthday/)
end
it "should support valid step syntax" do
expect { described_class.new(:name => 'foo', :monthday => '*/2' ) }.to_not raise_error
expect { described_class.new(:name => 'foo', :monthday => '10-16/2' ) }.to_not raise_error
end
it "should not support invalid steps" do
expect { described_class.new(:name => 'foo', :monthday => '*/A' ) }.to raise_error(Puppet::Error, /\*\/A is not a valid monthday/)
expect { described_class.new(:name => 'foo', :monthday => '*/2A' ) }.to raise_error(Puppet::Error, /\*\/2A is not a valid monthday/)
# As it turns out, cron does not complain about steps that exceed the valid range
# expect { described_class.new(:name => 'foo', :monthday => '*/32' ) }.to raise_error(Puppet::Error, /is not a valid monthday/)
end
end
describe "special" do
%w(reboot yearly annually monthly weekly daily midnight hourly).each do |value|
it "should support the value '#{value}'" do
- expect { described_class.new(:name => 'foo', :special => value ) }.to_not raise_error(Puppet::Error, /cannot specify both a special schedule and a value/)
+ expect { described_class.new(:name => 'foo', :special => value ) }.to_not raise_error
end
end
context "when combined with numeric schedule fields" do
context "which are 'absent'" do
[ %w(reboot yearly annually monthly weekly daily midnight hourly), :absent ].flatten.each { |value|
it "should accept the value '#{value}' for special" do
expect {
described_class.new(:name => 'foo', :minute => :absent, :special => value )
- }.to_not raise_error(Puppet::Error, /cannot specify both a special schedule and a value/)
+ }.to_not raise_error
end
}
end
context "which are not absent" do
%w(reboot yearly annually monthly weekly daily midnight hourly).each { |value|
it "should not accept the value '#{value}' for special" do
expect {
described_class.new(:name => 'foo', :minute => "1", :special => value )
}.to raise_error(Puppet::Error, /cannot specify both a special schedule and a value/)
end
}
it "should accept the 'absent' value for special" do
expect {
described_class.new(:name => 'foo', :minute => "1", :special => :absent )
- }.to_not raise_error(Puppet::Error, /cannot specify both a special schedule and a value/)
+ }.to_not raise_error
end
end
end
end
describe "environment" do
it "it should accept an :environment that looks like a path" do
expect do
described_class.new(:name => 'foo',:environment => 'PATH=/bin:/usr/bin:/usr/sbin')
end.to_not raise_error
end
it "should not accept environment variables that do not contain '='" do
expect do
described_class.new(:name => 'foo',:environment => 'INVALID')
end.to raise_error(Puppet::Error, /Invalid environment setting "INVALID"/)
end
it "should accept empty environment variables that do not contain '='" do
expect do
described_class.new(:name => 'foo',:environment => 'MAILTO=')
end.to_not raise_error
end
it "should accept 'absent'" do
expect do
described_class.new(:name => 'foo',:environment => 'absent')
end.to_not raise_error
end
end
end
describe "when autorequiring resources" do
before :each do
@user_bob = Puppet::Type.type(:user).new(:name => 'bob', :ensure => :present)
@user_alice = Puppet::Type.type(:user).new(:name => 'alice', :ensure => :present)
@catalog = Puppet::Resource::Catalog.new
@catalog.add_resource @user_bob, @user_alice
end
it "should autorequire the user" do
@resource = described_class.new(:name => 'dummy', :command => '/usr/bin/uptime', :user => 'alice')
@catalog.add_resource @resource
req = @resource.autorequire
req.size.should == 1
req[0].target.must == @resource
req[0].source.must == @user_alice
end
end
it "should not require a command when removing an entry" do
entry = described_class.new(:name => "test_entry", :ensure => :absent)
entry.value(:command).should == nil
end
it "should default to user => root if Etc.getpwuid(Process.uid) returns nil (#12357)" do
Etc.expects(:getpwuid).returns(nil)
entry = described_class.new(:name => "test_entry", :ensure => :present)
entry.value(:user).should eql "root"
end
end
diff --git a/spec/unit/type/exec_spec.rb b/spec/unit/type/exec_spec.rb
index 6d780b3ff..ec9847e91 100755
--- a/spec/unit/type/exec_spec.rb
+++ b/spec/unit/type/exec_spec.rb
@@ -1,763 +1,772 @@
#! /usr/bin/env ruby
require 'spec_helper'
describe Puppet::Type.type(:exec) do
include PuppetSpec::Files
def exec_tester(command, exitstatus = 0, rest = {})
Puppet.features.stubs(:root?).returns(true)
output = rest.delete(:output) || ''
output = Puppet::Util::Execution::ProcessOutput.new(output, exitstatus)
tries = rest[:tries] || 1
args = {
:name => command,
:path => @example_path,
:logoutput => false,
:loglevel => :err,
:returns => 0
}.merge(rest)
exec = Puppet::Type.type(:exec).new(args)
status = stub "process", :exitstatus => exitstatus
Puppet::Util::Execution.expects(:execute).times(tries).
with() { |*args|
args[0] == command &&
args[1][:override_locale] == false &&
args[1].has_key?(:custom_environment)
}.returns(output)
return exec
end
before do
@command = make_absolute('/bin/true whatever')
@executable = make_absolute('/bin/true')
@bogus_cmd = make_absolute('/bogus/cmd')
end
describe "when not stubbing the provider" do
before do
path = tmpdir('path')
ext = Puppet.features.microsoft_windows? ? '.exe' : ''
true_cmd = File.join(path, "true#{ext}")
false_cmd = File.join(path, "false#{ext}")
FileUtils.touch(true_cmd)
FileUtils.touch(false_cmd)
File.chmod(0755, true_cmd)
File.chmod(0755, false_cmd)
@example_path = [path]
end
it "should return :executed_command as its event" do
resource = Puppet::Type.type(:exec).new :command => @command
resource.parameter(:returns).event.name.should == :executed_command
end
describe "when execing" do
it "should use the 'execute' method to exec" do
exec_tester("true").refresh.should == :executed_command
end
it "should report a failure" do
expect { exec_tester('false', 1).refresh }.
to raise_error(Puppet::Error, /^false returned 1 instead of/)
end
it "should not report a failure if the exit status is specified in a returns array" do
expect { exec_tester("false", 1, :returns => [0, 1]).refresh }.to_not raise_error
end
it "should report a failure if the exit status is not specified in a returns array" do
expect { exec_tester('false', 1, :returns => [0, 100]).refresh }.
to raise_error(Puppet::Error, /^false returned 1 instead of/)
end
it "should log the output on success" do
output = "output1\noutput2\n"
exec_tester('false', 0, :output => output, :logoutput => true).refresh
output.split("\n").each do |line|
log = @logs.shift
log.level.should == :err
log.message.should == line
end
end
it "should log the output on failure" do
output = "output1\noutput2\n"
expect { exec_tester('false', 1, :output => output, :logoutput => true).refresh }.
to raise_error(Puppet::Error)
output.split("\n").each do |line|
log = @logs.shift
log.level.should == :err
log.message.should == line
end
end
end
describe "when logoutput=>on_failure is set" do
it "should log the output on failure" do
output = "output1\noutput2\n"
expect { exec_tester('false', 1, :output => output, :logoutput => :on_failure).refresh }.
to raise_error(Puppet::Error, /^false returned 1 instead of/)
output.split("\n").each do |line|
log = @logs.shift
log.level.should == :err
log.message.should == line
end
end
it "should log the output on failure when returns is specified as an array" do
output = "output1\noutput2\n"
expect {
exec_tester('false', 1, :output => output, :returns => [0, 100],
:logoutput => :on_failure).refresh
}.to raise_error(Puppet::Error, /^false returned 1 instead of/)
output.split("\n").each do |line|
log = @logs.shift
log.level.should == :err
log.message.should == line
end
end
it "shouldn't log the output on success" do
exec_tester('true', 0, :output => "a\nb\nc\n", :logoutput => :on_failure).refresh
@logs.should == []
end
end
it "shouldn't log the output on success when non-zero exit status is in a returns array" do
exec_tester("true", 100, :output => "a\n", :logoutput => :on_failure, :returns => [1, 100]).refresh
@logs.should == []
end
describe " when multiple tries are set," do
it "should repeat the command attempt 'tries' times on failure and produce an error" do
tries = 5
resource = exec_tester("false", 1, :tries => tries, :try_sleep => 0)
expect { resource.refresh }.to raise_error(Puppet::Error)
end
end
end
it "should be able to autorequire files mentioned in the command" do
foo = make_absolute('/bin/foo')
catalog = Puppet::Resource::Catalog.new
tmp = Puppet::Type.type(:file).new(:name => foo)
execer = Puppet::Type.type(:exec).new(:name => foo)
catalog.add_resource tmp
catalog.add_resource execer
dependencies = execer.autorequire(catalog)
dependencies.collect(&:to_s).should == [Puppet::Relationship.new(tmp, execer).to_s]
end
describe "when handling the path parameter" do
expect = %w{one two three four}
{ "an array" => expect,
"a path-separator delimited list" => expect.join(File::PATH_SEPARATOR),
"both array and path-separator delimited lists" => ["one", "two#{File::PATH_SEPARATOR}three", "four"],
}.each do |test, input|
it "should accept #{test}" do
type = Puppet::Type.type(:exec).new(:name => @command, :path => input)
type[:path].should == expect
end
end
describe "on platforms where path separator is not :" do
before :each do
@old_verbosity = $VERBOSE
$VERBOSE = nil
@old_separator = File::PATH_SEPARATOR
File::PATH_SEPARATOR = 'q'
end
after :each do
File::PATH_SEPARATOR = @old_separator
$VERBOSE = @old_verbosity
end
it "should use the path separator of the current platform" do
type = Puppet::Type.type(:exec).new(:name => @command, :path => "fooqbarqbaz")
type[:path].should == %w[foo bar baz]
end
end
end
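# Rough illustration (a hypothetical helper, not the exec type's real munging) of
# the path handling shown above: both arrays and PATH_SEPARATOR-delimited strings
# are flattened into a single array of directories.
def munge_path_sketch(value)
  Array(value).map { |element| element.split(File::PATH_SEPARATOR) }.flatten
end
# munge_path_sketch(%w{one two three four})          # => ["one", "two", "three", "four"]
# munge_path_sketch("one#{File::PATH_SEPARATOR}two") # => ["one", "two"]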
describe "when setting user" do
describe "on POSIX systems", :as_platform => :posix do
it "should fail if we are not root" do
Puppet.features.stubs(:root?).returns(false)
expect {
Puppet::Type.type(:exec).new(:name => '/bin/true whatever', :user => 'input')
}.to raise_error Puppet::Error, /Parameter user failed/
end
+ it "accepts the current user" do
+ Puppet.features.stubs(:root?).returns(false)
+ Etc.stubs(:getpwuid).returns(Struct::Passwd.new('input'))
+
+ type = Puppet::Type.type(:exec).new(:name => '/bin/true whatever', :user => 'input')
+
+ expect(type[:user]).to eq('input')
+ end
+
['one', 2, 'root', 4294967295, 4294967296].each do |value|
it "should accept '#{value}' as user if we are root" do
Puppet.features.stubs(:root?).returns(true)
type = Puppet::Type.type(:exec).new(:name => '/bin/true whatever', :user => value)
type[:user].should == value
end
end
end
describe "on Windows systems", :as_platform => :windows do
before :each do
Puppet.features.stubs(:root?).returns(true)
end
it "should reject user parameter" do
expect {
Puppet::Type.type(:exec).new(:name => 'c:\windows\notepad.exe', :user => 'input')
}.to raise_error Puppet::Error, /Unable to execute commands as other users on Windows/
end
end
end
describe "when setting group" do
shared_examples_for "exec[:group]" do
['one', 2, 'wheel', 4294967295, 4294967296].each do |value|
it "should accept '#{value}' without error or judgement" do
type = Puppet::Type.type(:exec).new(:name => @command, :group => value)
type[:group].should == value
end
end
end
describe "when running as root" do
before :each do Puppet.features.stubs(:root?).returns(true) end
it_behaves_like "exec[:group]"
end
describe "when not running as root" do
before :each do Puppet.features.stubs(:root?).returns(false) end
it_behaves_like "exec[:group]"
end
end
describe "when setting cwd" do
it_should_behave_like "all path parameters", :cwd, :array => false do
def instance(path)
# Specify shell provider so we don't have to care about command validation
Puppet::Type.type(:exec).new(:name => @executable, :cwd => path, :provider => :shell)
end
end
end
shared_examples_for "all exec command parameters" do |param|
{ "relative" => "example", "absolute" => "/bin/example" }.sort.each do |name, command|
describe "if command is #{name}" do
before :each do
@param = param
end
def test(command, valid)
if @param == :name then
instance = Puppet::Type.type(:exec).new()
else
instance = Puppet::Type.type(:exec).new(:name => @executable)
end
if valid then
instance.provider.expects(:validatecmd).returns(true)
else
instance.provider.expects(:validatecmd).raises(Puppet::Error, "from a stub")
end
instance[@param] = command
end
it "should work if the provider calls the command valid" do
expect { test(command, true) }.to_not raise_error
end
it "should fail if the provider calls the command invalid" do
expect { test(command, false) }.
to raise_error Puppet::Error, /Parameter #{@param} failed on Exec\[.*\]: from a stub/
end
end
end
end
shared_examples_for "all exec command parameters that take arrays" do |param|
describe "when given an array of inputs" do
before :each do
@test = Puppet::Type.type(:exec).new(:name => @executable)
end
it "should accept the array when all commands return valid" do
input = %w{one two three}
@test.provider.expects(:validatecmd).times(input.length).returns(true)
@test[param] = input
@test[param].should == input
end
it "should reject the array when any commands return invalid" do
input = %w{one two three}
@test.provider.expects(:validatecmd).with(input.first).returns(false)
input[1..-1].each do |cmd|
@test.provider.expects(:validatecmd).with(cmd).returns(true)
end
@test[param] = input
@test[param].should == input
end
it "should reject the array when all commands return invalid" do
input = %w{one two three}
@test.provider.expects(:validatecmd).times(input.length).returns(false)
@test[param] = input
@test[param].should == input
end
end
end
describe "when setting command" do
subject { described_class.new(:name => @command) }
it "fails when passed an Array" do
expect { subject[:command] = [] }.to raise_error Puppet::Error, /Command must be a String/
end
it "fails when passed a Hash" do
expect { subject[:command] = {} }.to raise_error Puppet::Error, /Command must be a String/
end
end
describe "when setting refresh" do
it_should_behave_like "all exec command parameters", :refresh
end
describe "for simple parameters" do
before :each do
@exec = Puppet::Type.type(:exec).new(:name => @executable)
end
describe "when setting environment" do
{ "single values" => "foo=bar",
"multiple values" => ["foo=bar", "baz=quux"],
}.each do |name, data|
it "should accept #{name}" do
@exec[:environment] = data
@exec[:environment].should == data
end
end
{ "single values" => "foo",
"only values" => ["foo", "bar"],
"any values" => ["foo=bar", "baz"]
}.each do |name, data|
it "should reject #{name} without assignment" do
expect { @exec[:environment] = data }.
to raise_error Puppet::Error, /Invalid environment setting/
end
end
end
describe "when setting timeout" do
[0, 0.1, 1, 10, 4294967295].each do |valid|
it "should accept '#{valid}' as valid" do
@exec[:timeout] = valid
@exec[:timeout].should == valid
end
it "should accept '#{valid}' in an array as valid" do
@exec[:timeout] = [valid]
@exec[:timeout].should == valid
end
end
['1/2', '', 'foo', '5foo'].each do |invalid|
it "should reject '#{invalid}' as invalid" do
expect { @exec[:timeout] = invalid }.
to raise_error Puppet::Error, /The timeout must be a number/
end
it "should reject '#{invalid}' in an array as invalid" do
expect { @exec[:timeout] = [invalid] }.
to raise_error Puppet::Error, /The timeout must be a number/
end
end
it "should fail if timeout is exceeded" do
ruby_path = Puppet::Util::Execution.ruby_path()
## Leaving this commented version in here because it fails on windows, due to what appears to be
## an assumption about hash iteration order in lib/puppet/type.rb#hash2resource, where
## resource[]= will overwrite the namevar with ":name" if the iteration is in the wrong order
#sleep_exec = Puppet::Type.type(:exec).new(:name => 'exec_spec sleep command', :command => "#{ruby_path} -e 'sleep 0.02'", :timeout => '0.01')
sleep_exec = Puppet::Type.type(:exec).new(:name => "#{ruby_path} -e 'sleep 0.02'", :timeout => '0.01')
expect { sleep_exec.refresh }.to raise_error Puppet::Error, "Command exceeded timeout"
end
it "should convert timeout to a float" do
command = make_absolute('/bin/false')
resource = Puppet::Type.type(:exec).new :command => command, :timeout => "12"
resource[:timeout].should be_a(Float)
resource[:timeout].should == 12.0
end
it "should munge negative timeouts to 0.0" do
command = make_absolute('/bin/false')
resource = Puppet::Type.type(:exec).new :command => command, :timeout => "-12.0"
resource.parameter(:timeout).value.should be_a(Float)
resource.parameter(:timeout).value.should == 0.0
end
end
describe "when setting tries" do
[1, 10, 4294967295].each do |valid|
it "should accept '#{valid}' as valid" do
@exec[:tries] = valid
@exec[:tries].should == valid
end
if "REVISIT: too much test log spam" == "a good thing" then
it "should accept '#{valid}' in an array as valid" do
pending "inconsistent, but this is not supporting arrays, unlike timeout"
@exec[:tries] = [valid]
@exec[:tries].should == valid
end
end
end
[-3.5, -1, 0, 0.2, '1/2', '1_000_000', '+12', '', 'foo'].each do |invalid|
it "should reject '#{invalid}' as invalid" do
expect { @exec[:tries] = invalid }.
to raise_error Puppet::Error, /Tries must be an integer/
end
if "REVISIT: too much test log spam" == "a good thing" then
it "should reject '#{invalid}' in an array as invalid" do
pending "inconsistent, but this is not supporting arrays, unlike timeout"
expect { @exec[:tries] = [invalid] }.
to raise_error Puppet::Error, /Tries must be an integer/
end
end
end
end
describe "when setting try_sleep" do
[0, 0.2, 1, 10, 4294967295].each do |valid|
it "should accept '#{valid}' as valid" do
@exec[:try_sleep] = valid
@exec[:try_sleep].should == valid
end
if "REVISIT: too much test log spam" == "a good thing" then
it "should accept '#{valid}' in an array as valid" do
pending "inconsistent, but this is not supporting arrays, unlike timeout"
@exec[:try_sleep] = [valid]
@exec[:try_sleep].should == valid
end
end
end
{ -3.5 => "cannot be a negative number",
-1 => "cannot be a negative number",
'1/2' => 'must be a number',
'1_000_000' => 'must be a number',
'+12' => 'must be a number',
'' => 'must be a number',
'foo' => 'must be a number',
}.each do |invalid, error|
it "should reject '#{invalid}' as invalid" do
expect { @exec[:try_sleep] = invalid }.
to raise_error Puppet::Error, /try_sleep #{error}/
end
if "REVISIT: too much test log spam" == "a good thing" then
it "should reject '#{invalid}' in an array as invalid" do
pending "inconsistent, but this is not supporting arrays, unlike timeout"
expect { @exec[:try_sleep] = [invalid] }.
to raise_error Puppet::Error, /try_sleep #{error}/
end
end
end
end
describe "when setting refreshonly" do
[:true, :false].each do |value|
it "should accept '#{value}'" do
@exec[:refreshonly] = value
@exec[:refreshonly].should == value
end
end
[1, 0, "1", "0", "yes", "y", "no", "n"].each do |value|
it "should reject '#{value}'" do
expect { @exec[:refreshonly] = value }.
to raise_error(Puppet::Error,
/Invalid value #{value.inspect}\. Valid values are true, false/
)
end
end
end
end
describe "when setting creates" do
it_should_behave_like "all path parameters", :creates, :array => true do
def instance(path)
# Specify shell provider so we don't have to care about command validation
Puppet::Type.type(:exec).new(:name => @executable, :creates => path, :provider => :shell)
end
end
end
describe "when setting unless" do
it_should_behave_like "all exec command parameters", :unless
it_should_behave_like "all exec command parameters that take arrays", :unless
end
describe "when setting onlyif" do
it_should_behave_like "all exec command parameters", :onlyif
it_should_behave_like "all exec command parameters that take arrays", :onlyif
end
describe "#check" do
before :each do
@test = Puppet::Type.type(:exec).new(:name => @executable)
end
describe ":refreshonly" do
{ :true => false, :false => true }.each do |input, result|
it "should return '#{result}' when given '#{input}'" do
@test[:refreshonly] = input
@test.check_all_attributes.should == result
end
end
end
describe ":creates" do
before :each do
@exist = tmpfile('exist')
FileUtils.touch(@exist)
@unexist = tmpfile('unexist')
end
context "with a single item" do
it "should run when the item does not exist" do
@test[:creates] = @unexist
@test.check_all_attributes.should == true
end
it "should not run when the item exists" do
@test[:creates] = @exist
@test.check_all_attributes.should == false
end
end
context "with an array with one item" do
it "should run when the item does not exist" do
@test[:creates] = [@unexist]
@test.check_all_attributes.should == true
end
it "should not run when the item exists" do
@test[:creates] = [@exist]
@test.check_all_attributes.should == false
end
end
context "with an array with multiple items" do
it "should run when all items do not exist" do
@test[:creates] = [@unexist] * 3
@test.check_all_attributes.should == true
end
it "should not run when one item exists" do
@test[:creates] = [@unexist, @exist, @unexist]
@test.check_all_attributes.should == false
end
it "should not run when all items exist" do
@test[:creates] = [@exist] * 3
@test.check_all_attributes.should == false
end
end
end
{ :onlyif => { :pass => false, :fail => true },
:unless => { :pass => true, :fail => false },
}.each do |param, sense|
describe ":#{param}" do
before :each do
@pass = make_absolute("/magic/pass")
@fail = make_absolute("/magic/fail")
@pass_status = stub('status', :exitstatus => sense[:pass] ? 0 : 1)
@fail_status = stub('status', :exitstatus => sense[:fail] ? 0 : 1)
@test.provider.stubs(:checkexe).returns(true)
[true, false].each do |check|
@test.provider.stubs(:run).with(@pass, check).
returns(['test output', @pass_status])
@test.provider.stubs(:run).with(@fail, check).
returns(['test output', @fail_status])
end
end
context "with a single item" do
it "should run if the command exits non-zero" do
@test[param] = @fail
@test.check_all_attributes.should == true
end
it "should not run if the command exits zero" do
@test[param] = @pass
@test.check_all_attributes.should == false
end
end
context "with an array with a single item" do
it "should run if the command exits non-zero" do
@test[param] = [@fail]
@test.check_all_attributes.should == true
end
it "should not run if the command exits zero" do
@test[param] = [@pass]
@test.check_all_attributes.should == false
end
end
context "with an array with multiple items" do
it "should run if all the commands exits non-zero" do
@test[param] = [@fail] * 3
@test.check_all_attributes.should == true
end
it "should not run if one command exits zero" do
@test[param] = [@pass, @fail, @pass]
@test.check_all_attributes.should == false
end
it "should not run if all command exits zero" do
@test[param] = [@pass] * 3
@test.check_all_attributes.should == false
end
end
it "should emit output to debug" do
Puppet::Util::Log.level = :debug
@test[param] = @fail
@test.check_all_attributes.should == true
@logs.shift.message.should == "test output"
end
end
end
end
describe "#retrieve" do
before :each do
@exec_resource = Puppet::Type.type(:exec).new(:name => @bogus_cmd)
end
it "should return :notrun when check_all_attributes returns true" do
@exec_resource.stubs(:check_all_attributes).returns true
@exec_resource.retrieve[:returns].should == :notrun
end
it "should return default exit code 0 when check_all_attributes returns false" do
@exec_resource.stubs(:check_all_attributes).returns false
@exec_resource.retrieve[:returns].should == ['0']
end
it "should return the specified exit code when check_all_attributes returns false" do
@exec_resource.stubs(:check_all_attributes).returns false
@exec_resource[:returns] = 42
@exec_resource.retrieve[:returns].should == ["42"]
end
end
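# Rough sketch (hypothetical, not the exec type's real #retrieve) of the behaviour
# tested above: when the checks say the command needs to run, the current state is
# reported as :notrun; otherwise the declared exit codes are echoed back as strings
# so no change is detected.
def retrieve_sketch(needs_run, declared_returns = '0')
  needs_run ? :notrun : Array(declared_returns).map(&:to_s)
end
# retrieve_sketch(true)      # => :notrun
# retrieve_sketch(false, 42) # => ["42"]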
describe "#output" do
before :each do
@exec_resource = Puppet::Type.type(:exec).new(:name => @bogus_cmd)
end
it "should return the provider's run output" do
provider = stub 'provider'
status = stubs "process_status"
status.stubs(:exitstatus).returns("0")
provider.expects(:run).returns(["silly output", status])
@exec_resource.stubs(:provider).returns(provider)
@exec_resource.refresh
@exec_resource.output.should == 'silly output'
end
end
describe "#refresh" do
before :each do
@exec_resource = Puppet::Type.type(:exec).new(:name => @bogus_cmd)
end
it "should call provider run with the refresh parameter if it is set" do
myother_bogus_cmd = make_absolute('/myother/bogus/cmd')
provider = stub 'provider'
@exec_resource.stubs(:provider).returns(provider)
@exec_resource.stubs(:[]).with(:refresh).returns(myother_bogus_cmd)
provider.expects(:run).with(myother_bogus_cmd)
@exec_resource.refresh
end
it "should call provider run with the specified command if the refresh parameter is not set" do
provider = stub 'provider'
status = stubs "process_status"
status.stubs(:exitstatus).returns("0")
provider.expects(:run).with(@bogus_cmd).returns(["silly output", status])
@exec_resource.stubs(:provider).returns(provider)
@exec_resource.refresh
end
it "should not run the provider if check_all_attributes is false" do
@exec_resource.stubs(:check_all_attributes).returns false
provider = stub 'provider'
provider.expects(:run).never
@exec_resource.stubs(:provider).returns(provider)
@exec_resource.refresh
end
end
describe "relative and absolute commands vs path" do
let :type do Puppet::Type.type(:exec) end
let :rel do 'echo' end
let :abs do make_absolute('/bin/echo') end
let :path do make_absolute('/bin') end
it "should fail with relative command and no path" do
expect { type.new(:command => rel) }.
to raise_error Puppet::Error, /no path was specified/
end
it "should accept a relative command with a path" do
type.new(:command => rel, :path => path).must be
end
it "should accept an absolute command with no path" do
type.new(:command => abs).must be
end
it "should accept an absolute command with a path" do
type.new(:command => abs, :path => path).must be
end
end
describe "when providing a umask" do
it "should fail if an invalid umask is used" do
resource = Puppet::Type.type(:exec).new :command => @command
expect { resource[:umask] = '0028'}.to raise_error(Puppet::ResourceError, /umask specification is invalid/)
expect { resource[:umask] = '28' }.to raise_error(Puppet::ResourceError, /umask specification is invalid/)
end
end
end
diff --git a/spec/unit/type/file/mode_spec.rb b/spec/unit/type/file/mode_spec.rb
index 9936ebdbc..82bd5a09f 100755
--- a/spec/unit/type/file/mode_spec.rb
+++ b/spec/unit/type/file/mode_spec.rb
@@ -1,195 +1,220 @@
#! /usr/bin/env ruby
require 'spec_helper'
describe Puppet::Type.type(:file).attrclass(:mode) do
include PuppetSpec::Files
let(:path) { tmpfile('mode_spec') }
- let(:resource) { Puppet::Type.type(:file).new :path => path, :mode => 0644 }
+ let(:resource) { Puppet::Type.type(:file).new :path => path, :mode => '0644' }
let(:mode) { resource.property(:mode) }
describe "#validate" do
it "should accept values specified as integers" do
expect { mode.value = 0755 }.not_to raise_error
end
it "should accept values specified as octal numbers in strings" do
expect { mode.value = '0755' }.not_to raise_error
end
it "should accept valid symbolic strings" do
expect { mode.value = 'g+w,u-x' }.not_to raise_error
end
it "should not accept strings other than octal numbers" do
expect do
mode.value = 'readable please!'
end.to raise_error(Puppet::Error, /The file mode specification is invalid/)
end
end
describe "#munge" do
# This is sort of a redundant test, but its spec is important.
it "should return the value as a string" do
mode.munge('0644').should be_a(String)
end
it "should accept strings as arguments" do
mode.munge('0644').should == '644'
end
it "should accept symbolic strings as arguments and return them intact" do
mode.munge('u=rw,go=r').should == 'u=rw,go=r'
end
it "should accept integers are arguments" do
mode.munge(0644).should == '644'
end
end
describe "#dirmask" do
before :each do
Dir.mkdir(path)
end
it "should add execute bits corresponding to read bits for directories" do
mode.dirmask('0644').should == '755'
end
it "should not add an execute bit when there is no read bit" do
mode.dirmask('0600').should == '700'
end
it "should not add execute bits for files that aren't directories" do
resource[:path] = tmpfile('other_file')
mode.dirmask('0644').should == '0644'
end
end
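# A rough illustration (not the real Puppet::Type::File::Mode#dirmask) of the rule
# exercised above: for directories, every octal digit that has the read bit set
# also gets the execute bit, and the leading zero is dropped.
def dirmask_sketch(value)
  value.to_i(8).to_s(8).split('').map { |digit|
    n = digit.to_i
    (n & 4).zero? ? n : n | 1
  }.join
end
# dirmask_sketch('0644') # => "755"
# dirmask_sketch('0600') # => "700"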
describe "#insync?" do
it "should return true if the mode is correct" do
FileUtils.touch(path)
mode.must be_insync('644')
end
it "should return false if the mode is incorrect" do
FileUtils.touch(path)
mode.must_not be_insync('755')
end
it "should return true if the file is a link and we are managing links", :if => Puppet.features.manages_symlinks? do
Puppet::FileSystem.symlink('anything', path)
mode.must be_insync('644')
end
describe "with a symbolic mode" do
let(:resource_sym) { Puppet::Type.type(:file).new :path => path, :mode => 'u+w,g-w' }
let(:mode_sym) { resource_sym.property(:mode) }
it "should return true if the mode matches, regardless of other bits" do
FileUtils.touch(path)
mode_sym.must be_insync('644')
end
it "should return false if the mode requires 0's where there are 1's" do
FileUtils.touch(path)
mode_sym.must_not be_insync('624')
end
it "should return false if the mode requires 1's where there are 0's" do
FileUtils.touch(path)
mode_sym.must_not be_insync('044')
end
end
end
describe "#retrieve" do
it "should return absent if the resource doesn't exist" do
resource[:path] = File.expand_path("/does/not/exist")
mode.retrieve.should == :absent
end
it "should retrieve the directory mode from the provider" do
Dir.mkdir(path)
mode.expects(:dirmask).with('644').returns '755'
resource.provider.expects(:mode).returns '755'
mode.retrieve.should == '755'
end
it "should retrieve the file mode from the provider" do
FileUtils.touch(path)
mode.expects(:dirmask).with('644').returns '644'
resource.provider.expects(:mode).returns '644'
mode.retrieve.should == '644'
end
end
describe '#should_to_s' do
describe 'with a 3-digit mode' do
it 'returns a 4-digit mode with a leading zero' do
mode.should_to_s('755').should == '0755'
end
end
describe 'with a 4-digit mode' do
it 'returns the 4-digit mode when the first digit is a zero' do
mode.should_to_s('0755').should == '0755'
end
it 'returns the 4-digit mode when the first digit is not a zero' do
mode.should_to_s('1755').should == '1755'
end
end
end
describe '#is_to_s' do
describe 'with a 3-digit mode' do
it 'returns a 4-digit mode with a leading zero' do
mode.is_to_s('755').should == '0755'
end
end
describe 'with a 4-digit mode' do
it 'returns the 4-digit mode when the first digit is a zero' do
mode.is_to_s('0755').should == '0755'
end
it 'returns the 4-digit mode when the first digit is not a zero' do
mode.is_to_s('1755').should == '1755'
end
end
describe 'when passed :absent' do
it 'returns :absent' do
mode.is_to_s(:absent).should == :absent
end
end
end
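# Minimal sketch (hypothetical helper) of the display rule the #should_to_s and
# #is_to_s examples above share: octal mode strings are left-padded to four digits,
# anything else (such as :absent) is passed through untouched.
def display_mode_sketch(value)
  value.is_a?(String) && value =~ /\A[0-7]+\z/ ? "%04o" % value.to_i(8) : value
end
# display_mode_sketch('755')   # => "0755"
# display_mode_sketch('1755')  # => "1755"
# display_mode_sketch(:absent) # => :absent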
describe "#sync with a symbolic mode" do
let(:resource_sym) { Puppet::Type.type(:file).new :path => path, :mode => 'u+w,g-w' }
let(:mode_sym) { resource_sym.property(:mode) }
before { FileUtils.touch(path) }
it "changes only the requested bits" do
# lower nibble must be set to 4 for the sake of passing on Windows
Puppet::FileSystem.chmod(0464, path)
mode_sym.sync
stat = Puppet::FileSystem.stat(path)
(stat.mode & 0777).to_s(8).should == "644"
end
end
+
+ describe '#sync with a symbolic mode of +X for a file' do
+ let(:resource_sym) { Puppet::Type.type(:file).new :path => path, :mode => 'g+wX' }
+ let(:mode_sym) { resource_sym.property(:mode) }
+
+ before { FileUtils.touch(path) }
+
+ it 'does not change executable bit if no executable bit is set' do
+ Puppet::FileSystem.chmod(0644, path)
+
+ mode_sym.sync
+
+ stat = Puppet::FileSystem.stat(path)
+ (stat.mode & 0777).to_s(8).should == '664'
+ end
+
+ it 'does change executable bit if an executable bit is set' do
+ Puppet::FileSystem.chmod(0744, path)
+
+ mode_sym.sync
+
+ stat = Puppet::FileSystem.stat(path)
+ (stat.mode & 0777).to_s(8).should == '774'
+ end
+ end
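# Rough sketch (hypothetical, not Puppet's SymbolicFileMode implementation) of the
# 'g+wX' rule the two examples above exercise: group write is always added, but
# group execute is only added when some execute bit is already present.
def group_plus_wx_sketch(mode)
  result = mode | 0020
  result |= 0010 unless (mode & 0111).zero?
  result
end
# group_plus_wx_sketch(0644).to_s(8) # => "664"
# group_plus_wx_sketch(0744).to_s(8) # => "774"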
end
diff --git a/spec/unit/type/file/source_spec.rb b/spec/unit/type/file/source_spec.rb
index b6e97cd7a..ff192a5f4 100755
--- a/spec/unit/type/file/source_spec.rb
+++ b/spec/unit/type/file/source_spec.rb
@@ -1,555 +1,577 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'uri'
source = Puppet::Type.type(:file).attrclass(:source)
describe Puppet::Type.type(:file).attrclass(:source) do
include PuppetSpec::Files
before do
# Wow that's a messy interface to the resource.
@environment = "myenv"
@resource = stub 'resource', :[]= => nil, :property => nil, :catalog => stub("catalog", :dependent_data_expired? => false, :environment => @environment), :line => 0, :file => ''
@foobar = make_absolute("/foo/bar baz")
@feebooz = make_absolute("/fee/booz baz")
@foobar_uri = URI.unescape(Puppet::Util.path_to_uri(@foobar).to_s)
@feebooz_uri = URI.unescape(Puppet::Util.path_to_uri(@feebooz).to_s)
end
it "should be a subclass of Parameter" do
source.superclass.must == Puppet::Parameter
end
describe "#validate" do
let(:path) { tmpfile('file_source_validate') }
let(:resource) { Puppet::Type.type(:file).new(:path => path) }
it "should fail if the set values are not URLs" do
URI.expects(:parse).with('foo').raises RuntimeError
lambda { resource[:source] = %w{foo} }.must raise_error(Puppet::Error)
end
it "should fail if the URI is not a local file, file URI, or puppet URI" do
lambda { resource[:source] = %w{http://foo/bar} }.must raise_error(Puppet::Error, /Cannot use URLs of type 'http' as source for fileserving/)
end
it "should strip trailing forward slashes", :unless => Puppet.features.microsoft_windows? do
resource[:source] = "/foo/bar\\//"
resource[:source].should == %w{file:/foo/bar\\}
end
it "should strip trailing forward and backslashes", :if => Puppet.features.microsoft_windows? do
resource[:source] = "X:/foo/bar\\//"
resource[:source].should == %w{file:/X:/foo/bar}
end
it "should accept an array of sources" do
resource[:source] = %w{file:///foo/bar puppet://host:8140/foo/bar}
resource[:source].should == %w{file:///foo/bar puppet://host:8140/foo/bar}
end
it "should accept file path characters that are not valid in URI" do
resource[:source] = 'file:///foo bar'
end
it "should reject relative URI sources" do
lambda { resource[:source] = 'foo/bar' }.must raise_error(Puppet::Error)
end
it "should reject opaque sources" do
lambda { resource[:source] = 'mailto:foo@com' }.must raise_error(Puppet::Error)
end
it "should accept URI authority component" do
resource[:source] = 'file://host/foo'
resource[:source].should == %w{file://host/foo}
end
it "should accept when URI authority is absent" do
resource[:source] = 'file:///foo/bar'
resource[:source].should == %w{file:///foo/bar}
end
end
describe "#munge" do
let(:path) { tmpfile('file_source_munge') }
let(:resource) { Puppet::Type.type(:file).new(:path => path) }
it "should prefix file scheme to absolute paths" do
resource[:source] = path
resource[:source].should == [URI.unescape(Puppet::Util.path_to_uri(path).to_s)]
end
%w[file puppet].each do |scheme|
it "should not prefix valid #{scheme} URIs" do
resource[:source] = "#{scheme}:///foo bar"
resource[:source].should == ["#{scheme}:///foo bar"]
end
end
end
describe "when returning the metadata" do
before do
@metadata = stub 'metadata', :source= => nil
@resource.stubs(:[]).with(:links).returns :manage
@resource.stubs(:[]).with(:source_permissions)
end
it "should return already-available metadata" do
@source = source.new(:resource => @resource)
@source.metadata = "foo"
@source.metadata.should == "foo"
end
it "should return nil if no @should value is set and no metadata is available" do
@source = source.new(:resource => @resource)
@source.metadata.should be_nil
end
it "should collect its metadata using the Metadata class if it is not already set" do
@source = source.new(:resource => @resource, :value => @foobar)
Puppet::FileServing::Metadata.indirection.expects(:find).with do |uri, options|
expect(uri).to eq @foobar_uri
expect(options[:environment]).to eq @environment
expect(options[:links]).to eq :manage
end.returns @metadata
@source.metadata
end
it "should use the metadata from the first found source" do
metadata = stub 'metadata', :source= => nil
@source = source.new(:resource => @resource, :value => [@foobar, @feebooz])
options = {
:environment => @environment,
:links => :manage,
:source_permissions => nil
}
Puppet::FileServing::Metadata.indirection.expects(:find).with(@foobar_uri, options).returns nil
Puppet::FileServing::Metadata.indirection.expects(:find).with(@feebooz_uri, options).returns metadata
@source.metadata.should equal(metadata)
end
it "should store the found source as the metadata's source" do
metadata = mock 'metadata'
@source = source.new(:resource => @resource, :value => @foobar)
Puppet::FileServing::Metadata.indirection.expects(:find).with do |uri, options|
expect(uri).to eq @foobar_uri
expect(options[:environment]).to eq @environment
expect(options[:links]).to eq :manage
end.returns metadata
metadata.expects(:source=).with(@foobar_uri)
@source.metadata
end
it "should fail intelligently if an exception is encountered while querying for metadata" do
@source = source.new(:resource => @resource, :value => @foobar)
Puppet::FileServing::Metadata.indirection.expects(:find).with do |uri, options|
expect(uri).to eq @foobar_uri
expect(options[:environment]).to eq @environment
expect(options[:links]).to eq :manage
end.raises RuntimeError
@source.expects(:fail).raises ArgumentError
lambda { @source.metadata }.should raise_error(ArgumentError)
end
it "should fail if no specified sources can be found" do
@source = source.new(:resource => @resource, :value => @foobar)
Puppet::FileServing::Metadata.indirection.expects(:find).with do |uri, options|
expect(uri).to eq @foobar_uri
expect(options[:environment]).to eq @environment
expect(options[:links]).to eq :manage
end.returns nil
@source.expects(:fail).raises RuntimeError
lambda { @source.metadata }.should raise_error(RuntimeError)
end
end
it "should have a method for setting the desired values on the resource" do
source.new(:resource => @resource).must respond_to(:copy_source_values)
end
describe "when copying the source values" do
- before do
+ before :each do
@resource = Puppet::Type.type(:file).new :path => @foobar
@source = source.new(:resource => @resource)
@metadata = stub 'metadata', :owner => 100, :group => 200, :mode => "173", :checksum => "{md5}asdfasdf", :ftype => "file", :source => @foobar
@source.stubs(:metadata).returns @metadata
Puppet.features.stubs(:root?).returns true
end
+ it "should not issue a deprecation warning if the source mode value is a Numeric" do
+ @metadata.stubs(:mode).returns 0173
+ if Puppet::Util::Platform.windows?
+ Puppet.expects(:deprecation_warning).with(regexp_matches(/Copying owner\/mode\/group from the source file on Windows is deprecated/)).at_least_once
+ else
+ Puppet.expects(:deprecation_warning).never
+ end
+
+ @source.copy_source_values
+ end
+
+ it "should not issue a deprecation warning if the source mode value is a String" do
+ @metadata.stubs(:mode).returns "173"
+ if Puppet::Util::Platform.windows?
+ Puppet.expects(:deprecation_warning).with(regexp_matches(/Copying owner\/mode\/group from the source file on Windows is deprecated/)).at_least_once
+ else
+ Puppet.expects(:deprecation_warning).never
+ end
+
+ @source.copy_source_values
+ end
+
it "should fail if there is no metadata" do
@source.stubs(:metadata).returns nil
@source.expects(:devfail).raises ArgumentError
lambda { @source.copy_source_values }.should raise_error(ArgumentError)
end
it "should set :ensure to the file type" do
@metadata.stubs(:ftype).returns "file"
@source.copy_source_values
@resource[:ensure].must == :file
end
it "should not set 'ensure' if it is already set to 'absent'" do
@metadata.stubs(:ftype).returns "file"
@resource[:ensure] = :absent
@source.copy_source_values
@resource[:ensure].must == :absent
end
describe "and the source is a file" do
before do
@metadata.stubs(:ftype).returns "file"
Puppet.features.stubs(:microsoft_windows?).returns false
end
it "should copy the metadata's owner, group, checksum, and mode to the resource if they are not set on the resource" do
@source.copy_source_values
@resource[:owner].must == 100
@resource[:group].must == 200
@resource[:mode].must == "173"
# Metadata calls it checksum, we call it content.
@resource[:content].must == @metadata.checksum
end
it "should not copy the metadata's owner, group, checksum and mode to the resource if they are already set" do
@resource[:owner] = 1
@resource[:group] = 2
@resource[:mode] = 3
@resource[:content] = "foobar"
@source.copy_source_values
@resource[:owner].must == 1
@resource[:group].must == 2
@resource[:mode].must == "3"
@resource[:content].should_not == @metadata.checksum
end
describe "and puppet is not running as root" do
before do
Puppet.features.stubs(:root?).returns false
end
it "should not try to set the owner" do
@source.copy_source_values
@resource[:owner].should be_nil
end
it "should not try to set the group" do
@source.copy_source_values
@resource[:group].should be_nil
end
end
context "when source_permissions is `use_when_creating`" do
before :each do
@resource[:source_permissions] = "use_when_creating"
Puppet.features.expects(:root?).returns true
@source.stubs(:local?).returns(false)
end
context "when managing a new file" do
it "should copy owner and group from local sources" do
@source.stubs(:local?).returns true
@source.copy_source_values
@resource[:owner].must == 100
@resource[:group].must == 200
@resource[:mode].must == "173"
end
it "copies the remote owner" do
@source.copy_source_values
@resource[:owner].must == 100
end
it "copies the remote group" do
@source.copy_source_values
@resource[:group].must == 200
end
it "copies the remote mode" do
@source.copy_source_values
@resource[:mode].must == "173"
end
end
context "when managing an existing file" do
before :each do
Puppet::FileSystem.stubs(:exist?).with(@resource[:path]).returns(true)
end
it "should not copy owner, group or mode from local sources" do
@source.stubs(:local?).returns true
@source.copy_source_values
@resource[:owner].must be_nil
@resource[:group].must be_nil
@resource[:mode].must be_nil
end
it "preserves the local owner" do
@source.copy_source_values
@resource[:owner].must be_nil
end
it "preserves the local group" do
@source.copy_source_values
@resource[:group].must be_nil
end
it "preserves the local mode" do
@source.copy_source_values
@resource[:mode].must be_nil
end
end
end
context "when source_permissions is `ignore`" do
before :each do
@resource[:source_permissions] = "ignore"
@source.stubs(:local?).returns(false)
Puppet.features.expects(:root?).returns true
end
it "should not copy owner, group or mode from local sources" do
@source.stubs(:local?).returns true
@source.copy_source_values
@resource[:owner].must be_nil
@resource[:group].must be_nil
@resource[:mode].must be_nil
end
it "preserves the local owner" do
@source.copy_source_values
@resource[:owner].must be_nil
end
it "preserves the local group" do
@source.copy_source_values
@resource[:group].must be_nil
end
it "preserves the local mode" do
@source.copy_source_values
@resource[:mode].must be_nil
end
end
describe "on Windows" do
before :each do
Puppet.features.stubs(:microsoft_windows?).returns true
end
let(:deprecation_message) { "Copying owner/mode/group from the" <<
" source file on Windows is deprecated;" <<
" use source_permissions => ignore." }
it "should copy only mode from remote sources" do
@source.stubs(:local?).returns false
@source.copy_source_values
@resource[:owner].must be_nil
@resource[:group].must be_nil
@resource[:mode].must == "173"
end
it "should copy mode from remote sources" do
@source.stubs(:local?).returns false
@source.copy_source_values
@resource[:mode].must == "173"
end
it "should copy owner and group from local sources" do
@source.stubs(:local?).returns true
@source.copy_source_values
@resource[:owner].must == 100
@resource[:group].must == 200
@resource[:mode].must == "173"
end
it "should issue deprecation warning when copying metadata from remote sources when group, owner, and mode are unspecified" do
@source.stubs(:local?).returns false
Puppet.expects(:deprecation_warning).with(deprecation_message).at_least_once
@source.copy_source_values
end
it "should issue deprecation warning when copying metadata from remote sources if only user is unspecified" do
@source.stubs(:local?).returns false
Puppet.expects(:deprecation_warning).with(deprecation_message).at_least_once
@resource[:group] = 2
- @resource[:mode] = 3
+ @resource[:mode] = "0003"
@source.copy_source_values
end
it "should issue deprecation warning when copying metadata from remote sources if only group is unspecified" do
@source.stubs(:local?).returns false
Puppet.expects(:deprecation_warning).with(deprecation_message).at_least_once
@resource[:owner] = 1
- @resource[:mode] = 3
+ @resource[:mode] = "0003"
@source.copy_source_values
end
it "should issue deprecation warning when copying metadata from remote sources if only mode is unspecified" do
@source.stubs(:local?).returns false
Puppet.expects(:deprecation_warning).with(deprecation_message).at_least_once
@resource[:owner] = 1
@resource[:group] = 2
@source.copy_source_values
end
it "should not issue deprecation warning when copying metadata from remote sources if group, owner, and mode are all specified" do
@source.stubs(:local?).returns false
Puppet.expects(:deprecation_warning).with(deprecation_message).never
@resource[:owner] = 1
@resource[:group] = 2
- @resource[:mode] = 3
+ @resource[:mode] = "0003"
@source.copy_source_values
end
end
end
describe "and the source is a link" do
it "should set the target to the link destination" do
@metadata.stubs(:ftype).returns "link"
@metadata.stubs(:links).returns "manage"
@resource.stubs(:[])
@resource.stubs(:[]=)
@metadata.expects(:destination).returns "/path/to/symlink"
@resource.expects(:[]=).with(:target, "/path/to/symlink")
@source.copy_source_values
end
end
end
it "should have a local? method" do
source.new(:resource => @resource).must be_respond_to(:local?)
end
context "when accessing source properties" do
let(:catalog) { Puppet::Resource::Catalog.new }
let(:path) { tmpfile('file_resource') }
let(:resource) { Puppet::Type.type(:file).new(:path => path, :catalog => catalog) }
let(:sourcepath) { tmpfile('file_source') }
describe "for local sources" do
before :each do
FileUtils.touch(sourcepath)
end
describe "on POSIX systems", :if => Puppet.features.posix? do
['', "file:", "file://"].each do |prefix|
it "with prefix '#{prefix}' should be local" do
resource[:source] = "#{prefix}#{sourcepath}"
resource.parameter(:source).must be_local
end
it "should be able to return the metadata source full path" do
resource[:source] = "#{prefix}#{sourcepath}"
resource.parameter(:source).full_path.should == sourcepath
end
end
end
describe "on Windows systems", :if => Puppet.features.microsoft_windows? do
['', "file:/", "file:///"].each do |prefix|
it "should be local with prefix '#{prefix}'" do
resource[:source] = "#{prefix}#{sourcepath}"
resource.parameter(:source).must be_local
end
it "should be able to return the metadata source full path" do
resource[:source] = "#{prefix}#{sourcepath}"
resource.parameter(:source).full_path.should == sourcepath
end
it "should convert backslashes to forward slashes" do
resource[:source] = "#{prefix}#{sourcepath.gsub(/\\/, '/')}"
end
end
it "should be UNC with two slashes"
end
end
describe "for remote sources" do
let(:sourcepath) { "/path/to/source" }
let(:uri) { URI::Generic.build(:scheme => 'puppet', :host => 'server', :port => 8192, :path => sourcepath).to_s }
before(:each) do
metadata = Puppet::FileServing::Metadata.new(path, :source => uri, 'type' => 'file')
#metadata = stub('remote', :ftype => "file", :source => uri)
Puppet::FileServing::Metadata.indirection.stubs(:find).
with(uri,all_of(has_key(:environment), has_key(:links))).returns metadata
resource[:source] = uri
end
it "should not be local" do
resource.parameter(:source).should_not be_local
end
it "should be able to return the metadata source full path" do
resource.parameter(:source).full_path.should == "/path/to/source"
end
it "should be able to return the source server" do
resource.parameter(:source).server.should == "server"
end
it "should be able to return the source port" do
resource.parameter(:source).port.should == 8192
end
describe "which don't specify server or port" do
let(:uri) { "puppet:///path/to/source" }
it "should return the default source server" do
Puppet[:server] = "myserver"
resource.parameter(:source).server.should == "myserver"
end
it "should return the default source port" do
Puppet[:masterport] = 1234
resource.parameter(:source).port.should == 1234
end
end
end
end
end
diff --git a/spec/unit/type/file_spec.rb b/spec/unit/type/file_spec.rb
index 00b027ed7..7a4d02553 100755
--- a/spec/unit/type/file_spec.rb
+++ b/spec/unit/type/file_spec.rb
@@ -1,1504 +1,1504 @@
#! /usr/bin/env ruby
require 'spec_helper'
describe Puppet::Type.type(:file) do
include PuppetSpec::Files
let(:path) { tmpfile('file_testing') }
let(:file) { described_class.new(:path => path, :catalog => catalog) }
let(:provider) { file.provider }
let(:catalog) { Puppet::Resource::Catalog.new }
before do
Puppet.features.stubs("posix?").returns(true)
end
describe "the path parameter" do
describe "on POSIX systems", :if => Puppet.features.posix? do
it "should remove trailing slashes" do
file[:path] = "/foo/bar/baz/"
file[:path].should == "/foo/bar/baz"
end
it "should remove double slashes" do
file[:path] = "/foo/bar//baz"
file[:path].should == "/foo/bar/baz"
end
it "should remove triple slashes" do
file[:path] = "/foo/bar///baz"
file[:path].should == "/foo/bar/baz"
end
it "should remove trailing double slashes" do
file[:path] = "/foo/bar/baz//"
file[:path].should == "/foo/bar/baz"
end
it "should leave a single slash alone" do
file[:path] = "/"
file[:path].should == "/"
end
it "should accept and collapse a double-slash at the start of the path" do
file[:path] = "//tmp/xxx"
file[:path].should == '/tmp/xxx'
end
it "should accept and collapse a triple-slash at the start of the path" do
file[:path] = "///tmp/xxx"
file[:path].should == '/tmp/xxx'
end
end
describe "on Windows systems", :if => Puppet.features.microsoft_windows? do
it "should remove trailing slashes" do
file[:path] = "X:/foo/bar/baz/"
file[:path].should == "X:/foo/bar/baz"
end
it "should remove double slashes" do
file[:path] = "X:/foo/bar//baz"
file[:path].should == "X:/foo/bar/baz"
end
it "should remove trailing double slashes" do
file[:path] = "X:/foo/bar/baz//"
file[:path].should == "X:/foo/bar/baz"
end
it "should leave a drive letter with a slash alone" do
file[:path] = "X:/"
file[:path].should == "X:/"
end
it "should not accept a drive letter without a slash" do
expect { file[:path] = "X:" }.to raise_error(/File paths must be fully qualified/)
end
describe "when using UNC filenames", :if => Puppet.features.microsoft_windows? do
it "should remove trailing slashes" do
file[:path] = "//localhost/foo/bar/baz/"
file[:path].should == "//localhost/foo/bar/baz"
end
it "should remove double slashes" do
file[:path] = "//localhost/foo/bar//baz"
file[:path].should == "//localhost/foo/bar/baz"
end
it "should remove trailing double slashes" do
file[:path] = "//localhost/foo/bar/baz//"
file[:path].should == "//localhost/foo/bar/baz"
end
it "should remove a trailing slash from a sharename" do
file[:path] = "//localhost/foo/"
file[:path].should == "//localhost/foo"
end
it "should not modify a sharename" do
file[:path] = "//localhost/foo"
file[:path].should == "//localhost/foo"
end
end
end
end
describe "the backup parameter" do
[false, 'false', :false].each do |value|
it "should disable backup if the value is #{value.inspect}" do
file[:backup] = value
file[:backup].should == false
end
end
[true, 'true', '.puppet-bak'].each do |value|
it "should use .puppet-bak if the value is #{value.inspect}" do
file[:backup] = value
file[:backup].should == '.puppet-bak'
end
end
it "should use the provided value if it's any other string" do
file[:backup] = "over there"
file[:backup].should == "over there"
end
it "should fail if backup is set to anything else" do
expect do
file[:backup] = 97
end.to raise_error(Puppet::Error, /Invalid backup type 97/)
end
end
describe "the recurse parameter" do
it "should default to recursion being disabled" do
file[:recurse].should be_false
end
[true, "true", "inf", "remote"].each do |value|
it "should consider #{value} to enable recursion" do
file[:recurse] = value
file[:recurse].should be_true
end
end
it "should not allow numbers" do
expect { file[:recurse] = 10 }.to raise_error(
Puppet::Error, /Parameter recurse failed on File\[[^\]]+\]: Invalid recurse value 10/)
end
[false, "false"].each do |value|
it "should consider #{value} to disable recursion" do
file[:recurse] = value
file[:recurse].should be_false
end
end
end
describe "the recurselimit parameter" do
it "should accept integers" do
file[:recurselimit] = 12
file[:recurselimit].should == 12
end
it "should munge string numbers to number numbers" do
file[:recurselimit] = '12'
file[:recurselimit].should == 12
end
it "should fail if given a non-number" do
expect do
file[:recurselimit] = 'twelve'
end.to raise_error(Puppet::Error, /Invalid value "twelve"/)
end
end
describe "the replace parameter" do
[true, :true, :yes].each do |value|
it "should consider #{value} to be true" do
file[:replace] = value
file[:replace].should be_true
end
end
[false, :false, :no].each do |value|
it "should consider #{value} to be false" do
file[:replace] = value
file[:replace].should be_false
end
end
end
describe ".instances" do
it "should return an empty array" do
described_class.instances.should == []
end
end
describe "#bucket" do
it "should return nil if backup is off" do
file[:backup] = false
file.bucket.should == nil
end
it "should not return a bucket if using a file extension for backup" do
file[:backup] = '.backup'
file.bucket.should == nil
end
it "should return the default filebucket if using the 'puppet' filebucket" do
file[:backup] = 'puppet'
bucket = stub('bucket')
file.stubs(:default_bucket).returns bucket
file.bucket.should == bucket
end
it "should fail if using a remote filebucket and no catalog exists" do
file.catalog = nil
file[:backup] = 'my_bucket'
expect { file.bucket }.to raise_error(Puppet::Error, "Can not find filebucket for backups without a catalog")
end
it "should fail if the specified filebucket isn't in the catalog" do
file[:backup] = 'my_bucket'
expect { file.bucket }.to raise_error(Puppet::Error, "Could not find filebucket my_bucket specified in backup")
end
it "should use the specified filebucket if it is in the catalog" do
file[:backup] = 'my_bucket'
filebucket = Puppet::Type.type(:filebucket).new(:name => 'my_bucket')
catalog.add_resource(filebucket)
file.bucket.should == filebucket.bucket
end
end
describe "#asuser" do
before :each do
# Mocha won't let me just stub SUIDManager.asuser to yield and return,
# but it will do exactly that if we're not root.
Puppet.features.stubs(:root?).returns false
end
it "should return the desired owner if they can write to the parent directory" do
file[:owner] = 1001
FileTest.stubs(:writable?).with(File.dirname file[:path]).returns true
file.asuser.should == 1001
end
it "should return nil if the desired owner can't write to the parent directory" do
file[:owner] = 1001
FileTest.stubs(:writable?).with(File.dirname file[:path]).returns false
file.asuser.should == nil
end
it "should return nil if not managing owner" do
file.asuser.should == nil
end
end
describe "#exist?" do
it "should be considered existent if it can be stat'ed" do
file.expects(:stat).returns mock('stat')
file.must be_exist
end
it "should be considered nonexistent if it can not be stat'ed" do
file.expects(:stat).returns nil
file.must_not be_exist
end
end
describe "#eval_generate" do
before do
@graph = stub 'graph', :add_edge => nil
catalog.stubs(:relationship_graph).returns @graph
end
it "should recurse if recursion is enabled" do
resource = stub('resource', :[] => 'resource')
file.expects(:recurse).returns [resource]
file[:recurse] = true
file.eval_generate.should == [resource]
end
it "should not recurse if recursion is disabled" do
file.expects(:recurse).never
file[:recurse] = false
file.eval_generate.should == []
end
end
describe "#ancestors" do
it "should return the ancestors of the file, in ascending order" do
file = described_class.new(:path => make_absolute("/tmp/foo/bar/baz/qux"))
pieces = %W[#{make_absolute('/')} tmp foo bar baz]
ancestors = file.ancestors
ancestors.should_not be_empty
ancestors.reverse.each_with_index do |path,i|
path.should == File.join(*pieces[0..i])
end
end
end
describe "#flush" do
it "should flush all properties that respond to :flush" do
file[:source] = File.expand_path(__FILE__)
file.parameter(:source).expects(:flush)
file.flush
end
it "should reset its stat reference" do
FileUtils.touch(path)
stat1 = file.stat
file.stat.should equal(stat1)
file.flush
file.stat.should_not equal(stat1)
end
end
describe "#initialize" do
it "should remove a trailing slash from the title to create the path" do
title = File.expand_path("/abc/\n\tdef/")
file = described_class.new(:title => title)
file[:path].should == title
end
it "should set a desired 'ensure' value if none is set and 'content' is set" do
file = described_class.new(:path => path, :content => "/foo/bar")
file[:ensure].should == :file
end
it "should set a desired 'ensure' value if none is set and 'target' is set", :if => described_class.defaultprovider.feature?(:manages_symlinks) do
file = described_class.new(:path => path, :target => File.expand_path(__FILE__))
file[:ensure].should == :link
end
end
describe "#mark_children_for_purging" do
it "should set each child's ensure to absent" do
paths = %w[foo bar baz]
children = paths.inject({}) do |children,child|
children.merge child => described_class.new(:path => File.join(path, child), :ensure => :present)
end
file.mark_children_for_purging(children)
children.length.should == 3
children.values.each do |child|
child[:ensure].should == :absent
end
end
it "should skip children which have a source" do
child = described_class.new(:path => path, :ensure => :present, :source => File.expand_path(__FILE__))
file.mark_children_for_purging('foo' => child)
child[:ensure].should == :present
end
end
describe "#newchild" do
it "should create a new resource relative to the parent" do
child = file.newchild('bar')
child.must be_a(described_class)
child[:path].should == File.join(file[:path], 'bar')
end
{
:ensure => :present,
:recurse => true,
:recurselimit => 5,
:target => "some_target",
:source => File.expand_path("some_source"),
}.each do |param, value|
it "should omit the #{param} parameter", :if => described_class.defaultprovider.feature?(:manages_symlinks) do
# Make a new file, because we have to set the param at initialization
# or it wouldn't be copied regardless.
file = described_class.new(:path => path, param => value)
child = file.newchild('bar')
child[param].should_not == value
end
end
it "should copy all of the parent resource's 'should' values that were set at initialization" do
parent = described_class.new(:path => path, :owner => 'root', :group => 'wheel')
child = parent.newchild("my/path")
child[:owner].should == 'root'
child[:group].should == 'wheel'
end
it "should not copy default values to the new child" do
child = file.newchild("my/path")
child.original_parameters.should_not include(:backup)
end
it "should not copy values to the child which were set by the source" do
source = File.expand_path(__FILE__)
file[:source] = source
- metadata = stub 'metadata', :owner => "root", :group => "root", :mode => 0755, :ftype => "file", :checksum => "{md5}whatever", :source => source
+ metadata = stub 'metadata', :owner => "root", :group => "root", :mode => '0755', :ftype => "file", :checksum => "{md5}whatever", :source => source
file.parameter(:source).stubs(:metadata).returns metadata
file.parameter(:source).copy_source_values
file.class.expects(:new).with { |params| params[:group].nil? }
file.newchild("my/path")
end
end
describe "#purge?" do
it "should return false if purge is not set" do
file.must_not be_purge
end
it "should return true if purge is set to true" do
file[:purge] = true
file.must be_purge
end
it "should return false if purge is set to false" do
file[:purge] = false
file.must_not be_purge
end
end
describe "#recurse" do
before do
file[:recurse] = true
@metadata = Puppet::FileServing::Metadata
end
describe "and a source is set" do
it "should pass the already-discovered resources to recurse_remote" do
file[:source] = File.expand_path(__FILE__)
file.stubs(:recurse_local).returns(:foo => "bar")
file.expects(:recurse_remote).with(:foo => "bar").returns []
file.recurse
end
end
describe "and a target is set" do
it "should use recurse_link" do
file[:target] = File.expand_path(__FILE__)
file.stubs(:recurse_local).returns(:foo => "bar")
file.expects(:recurse_link).with(:foo => "bar").returns []
file.recurse
end
end
it "should use recurse_local if recurse is not remote" do
file.expects(:recurse_local).returns({})
file.recurse
end
it "should not use recurse_local if recurse is remote" do
file[:recurse] = :remote
file.expects(:recurse_local).never
file.recurse
end
it "should return the generated resources as an array sorted by file path" do
one = stub 'one', :[] => "/one"
two = stub 'two', :[] => "/one/two"
three = stub 'three', :[] => "/three"
file.expects(:recurse_local).returns(:one => one, :two => two, :three => three)
file.recurse.should == [one, two, three]
end
describe "and purging is enabled" do
before do
file[:purge] = true
end
it "should mark each file for removal" do
local = described_class.new(:path => path, :ensure => :present)
file.expects(:recurse_local).returns("local" => local)
file.recurse
local[:ensure].should == :absent
end
it "should not remove files that exist in the remote repository" do
file[:source] = File.expand_path(__FILE__)
file.expects(:recurse_local).returns({})
remote = described_class.new(:path => path, :source => File.expand_path(__FILE__), :ensure => :present)
file.expects(:recurse_remote).with { |hash| hash["remote"] = remote }
file.recurse
remote[:ensure].should_not == :absent
end
end
end
describe "#remove_less_specific_files" do
it "should remove any nested files that are already in the catalog" do
foo = described_class.new :path => File.join(file[:path], 'foo')
bar = described_class.new :path => File.join(file[:path], 'bar')
baz = described_class.new :path => File.join(file[:path], 'baz')
catalog.add_resource(foo)
catalog.add_resource(bar)
file.remove_less_specific_files([foo, bar, baz]).should == [baz]
end
end
describe "#remove_less_specific_files" do
it "should remove any nested files that are already in the catalog" do
foo = described_class.new :path => File.join(file[:path], 'foo')
bar = described_class.new :path => File.join(file[:path], 'bar')
baz = described_class.new :path => File.join(file[:path], 'baz')
catalog.add_resource(foo)
catalog.add_resource(bar)
file.remove_less_specific_files([foo, bar, baz]).should == [baz]
end
end
describe "#recurse?" do
it "should be true if recurse is true" do
file[:recurse] = true
file.must be_recurse
end
it "should be true if recurse is remote" do
file[:recurse] = :remote
file.must be_recurse
end
it "should be false if recurse is false" do
file[:recurse] = false
file.must_not be_recurse
end
end
describe "#recurse_link" do
before do
@first = stub 'first', :relative_path => "first", :full_path => "/my/first", :ftype => "directory"
@second = stub 'second', :relative_path => "second", :full_path => "/my/second", :ftype => "file"
@resource = stub 'file', :[]= => nil
end
it "should pass its target to the :perform_recursion method" do
file[:target] = "mylinks"
file.expects(:perform_recursion).with("mylinks").returns [@first]
file.stubs(:newchild).returns @resource
file.recurse_link({})
end
it "should ignore the recursively-found '.' file and configure the top-level file to create a directory" do
@first.stubs(:relative_path).returns "."
file[:target] = "mylinks"
file.expects(:perform_recursion).with("mylinks").returns [@first]
file.stubs(:newchild).never
file.expects(:[]=).with(:ensure, :directory)
file.recurse_link({})
end
it "should create a new child resource for each generated metadata instance's relative path that doesn't already exist in the children hash" do
file.expects(:perform_recursion).returns [@first, @second]
file.expects(:newchild).with(@first.relative_path).returns @resource
file.recurse_link("second" => @resource)
end
it "should not create a new child resource for paths that already exist in the children hash" do
file.expects(:perform_recursion).returns [@first]
file.expects(:newchild).never
file.recurse_link("first" => @resource)
end
it "should set the target to the full path of discovered file and set :ensure to :link if the file is not a directory", :if => described_class.defaultprovider.feature?(:manages_symlinks) do
file.stubs(:perform_recursion).returns [@first, @second]
file.recurse_link("first" => @resource, "second" => file)
file[:ensure].should == :link
file[:target].should == "/my/second"
end
it "should :ensure to :directory if the file is a directory" do
file.stubs(:perform_recursion).returns [@first, @second]
file.recurse_link("first" => file, "second" => @resource)
file[:ensure].should == :directory
end
it "should return a hash with both created and existing resources with the relative paths as the hash keys" do
file.expects(:perform_recursion).returns [@first, @second]
file.stubs(:newchild).returns file
file.recurse_link("second" => @resource).should == {"second" => @resource, "first" => file}
end
end
describe "#recurse_local" do
before do
@metadata = stub 'metadata', :relative_path => "my/file"
end
it "should pass its path to the :perform_recursion method" do
file.expects(:perform_recursion).with(file[:path]).returns [@metadata]
file.stubs(:newchild)
file.recurse_local
end
it "should return an empty hash if the recursion returns nothing" do
file.expects(:perform_recursion).returns nil
file.recurse_local.should == {}
end
it "should create a new child resource with each generated metadata instance's relative path" do
file.expects(:perform_recursion).returns [@metadata]
file.expects(:newchild).with(@metadata.relative_path).returns "fiebar"
file.recurse_local
end
it "should not create a new child resource for the '.' directory" do
@metadata.stubs(:relative_path).returns "."
file.expects(:perform_recursion).returns [@metadata]
file.expects(:newchild).never
file.recurse_local
end
it "should return a hash of the created resources with the relative paths as the hash keys" do
file.expects(:perform_recursion).returns [@metadata]
file.expects(:newchild).with("my/file").returns "fiebar"
file.recurse_local.should == {"my/file" => "fiebar"}
end
it "should set checksum_type to none if this file checksum is none" do
file[:checksum] = :none
Puppet::FileServing::Metadata.indirection.expects(:search).with { |path,params| params[:checksum_type] == :none }.returns [@metadata]
file.expects(:newchild).with("my/file").returns "fiebar"
file.recurse_local
end
end
describe "#recurse_remote", :uses_checksums => true do
let(:my) { File.expand_path('/my') }
before do
file[:source] = "puppet://foo/bar"
@first = Puppet::FileServing::Metadata.new(my, :relative_path => "first")
@second = Puppet::FileServing::Metadata.new(my, :relative_path => "second")
@first.stubs(:ftype).returns "directory"
@second.stubs(:ftype).returns "directory"
@parameter = stub 'property', :metadata= => nil
@resource = stub 'file', :[]= => nil, :parameter => @parameter
end
it "should pass its source to the :perform_recursion method" do
data = Puppet::FileServing::Metadata.new(File.expand_path("/whatever"), :relative_path => "foobar")
file.expects(:perform_recursion).with("puppet://foo/bar").returns [data]
file.stubs(:newchild).returns @resource
file.recurse_remote({})
end
it "should not recurse when the remote file is not a directory" do
data = Puppet::FileServing::Metadata.new(File.expand_path("/whatever"), :relative_path => ".")
data.stubs(:ftype).returns "file"
file.expects(:perform_recursion).with("puppet://foo/bar").returns [data]
file.expects(:newchild).never
file.recurse_remote({})
end
it "should set the source of each returned file to the searched-for URI plus the found relative path" do
@first.expects(:source=).with File.join("puppet://foo/bar", @first.relative_path)
file.expects(:perform_recursion).returns [@first]
file.stubs(:newchild).returns @resource
file.recurse_remote({})
end
it "should create a new resource for any relative file paths that do not already have a resource" do
file.stubs(:perform_recursion).returns [@first]
file.expects(:newchild).with("first").returns @resource
file.recurse_remote({}).should == {"first" => @resource}
end
it "should not create a new resource for any relative file paths that do already have a resource" do
file.stubs(:perform_recursion).returns [@first]
file.expects(:newchild).never
file.recurse_remote("first" => @resource)
end
it "should set the source of each resource to the source of the metadata" do
file.stubs(:perform_recursion).returns [@first]
@resource.stubs(:[]=)
@resource.expects(:[]=).with(:source, File.join("puppet://foo/bar", @first.relative_path))
file.recurse_remote("first" => @resource)
end
# LAK:FIXME This is a bug, but I can't think of a fix for it. Fortunately it's already
# filed, and when it's fixed, we'll just fix the whole flow.
with_digest_algorithms do
it "it should set the checksum type to #{metadata[:digest_algorithm]} if the remote file is a file" do
@first.stubs(:ftype).returns "file"
file.stubs(:perform_recursion).returns [@first]
@resource.stubs(:[]=)
@resource.expects(:[]=).with(:checksum, digest_algorithm.intern)
file.recurse_remote("first" => @resource)
end
end
it "should store the metadata in the source property for each resource so the source does not have to requery the metadata" do
file.stubs(:perform_recursion).returns [@first]
@resource.expects(:parameter).with(:source).returns @parameter
@parameter.expects(:metadata=).with(@first)
file.recurse_remote("first" => @resource)
end
it "should not create a new resource for the '.' file" do
@first.stubs(:relative_path).returns "."
file.stubs(:perform_recursion).returns [@first]
file.expects(:newchild).never
file.recurse_remote({})
end
it "should store the metadata in the main file's source property if the relative path is '.'" do
@first.stubs(:relative_path).returns "."
file.stubs(:perform_recursion).returns [@first]
file.parameter(:source).expects(:metadata=).with @first
file.recurse_remote("first" => @resource)
end
describe "and multiple sources are provided" do
let(:sources) do
h = {}
%w{/a /b /c /d}.each do |key|
h[key] = URI.unescape(Puppet::Util.path_to_uri(File.expand_path(key)).to_s)
end
h
end
describe "and :sourceselect is set to :first" do
it "should create file instances for the results for the first source to return any values" do
data = Puppet::FileServing::Metadata.new(File.expand_path("/whatever"), :relative_path => "foobar")
file[:source] = sources.keys.sort.map { |key| File.expand_path(key) }
file.expects(:perform_recursion).with(sources['/a']).returns nil
file.expects(:perform_recursion).with(sources['/b']).returns []
file.expects(:perform_recursion).with(sources['/c']).returns [data]
file.expects(:perform_recursion).with(sources['/d']).never
file.expects(:newchild).with("foobar").returns @resource
file.recurse_remote({})
end
end
describe "and :sourceselect is set to :all" do
before do
file[:sourceselect] = :all
end
it "should return every found file that is not in a previous source" do
klass = Puppet::FileServing::Metadata
file[:source] = abs_path = %w{/a /b /c /d}.map {|f| File.expand_path(f) }
file.stubs(:newchild).returns @resource
one = [klass.new(abs_path[0], :relative_path => "a")]
file.expects(:perform_recursion).with(sources['/a']).returns one
file.expects(:newchild).with("a").returns @resource
two = [klass.new(abs_path[1], :relative_path => "a"), klass.new(abs_path[1], :relative_path => "b")]
file.expects(:perform_recursion).with(sources['/b']).returns two
file.expects(:newchild).with("b").returns @resource
three = [klass.new(abs_path[2], :relative_path => "a"), klass.new(abs_path[2], :relative_path => "c")]
file.expects(:perform_recursion).with(sources['/c']).returns three
file.expects(:newchild).with("c").returns @resource
file.expects(:perform_recursion).with(sources['/d']).returns []
file.recurse_remote({})
end
end
end
end
describe "#perform_recursion" do
it "should use Metadata to do its recursion" do
Puppet::FileServing::Metadata.indirection.expects(:search)
file.perform_recursion(file[:path])
end
it "should use the provided path as the key to the search" do
Puppet::FileServing::Metadata.indirection.expects(:search).with { |key, options| key == "/foo" }
file.perform_recursion("/foo")
end
it "should return the results of the metadata search" do
Puppet::FileServing::Metadata.indirection.expects(:search).returns "foobar"
file.perform_recursion(file[:path]).should == "foobar"
end
it "should pass its recursion value to the search" do
file[:recurse] = true
Puppet::FileServing::Metadata.indirection.expects(:search).with { |key, options| options[:recurse] == true }
file.perform_recursion(file[:path])
end
it "should pass true if recursion is remote" do
file[:recurse] = :remote
Puppet::FileServing::Metadata.indirection.expects(:search).with { |key, options| options[:recurse] == true }
file.perform_recursion(file[:path])
end
it "should pass its recursion limit value to the search" do
file[:recurselimit] = 10
Puppet::FileServing::Metadata.indirection.expects(:search).with { |key, options| options[:recurselimit] == 10 }
file.perform_recursion(file[:path])
end
it "should configure the search to ignore or manage links" do
file[:links] = :manage
Puppet::FileServing::Metadata.indirection.expects(:search).with { |key, options| options[:links] == :manage }
file.perform_recursion(file[:path])
end
it "should pass its 'ignore' setting to the search if it has one" do
file[:ignore] = %w{.svn CVS}
Puppet::FileServing::Metadata.indirection.expects(:search).with { |key, options| options[:ignore] == %w{.svn CVS} }
file.perform_recursion(file[:path])
end
end
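# Illustrative sketch, not from the original spec: the Metadata indirection
# call that #perform_recursion is shown wrapping above. The path and values are
# throwaway examples; the option names are the ones asserted in these specs.
require 'puppet'
Puppet::FileServing::Metadata.indirection.search('/some/dir',
:recurse => true, :recurselimit => 10, :links => :manage, :ignore => %w{.svn CVS})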
describe "#remove_existing" do
it "should do nothing if the file doesn't exist" do
file.remove_existing(:file).should == false
end
it "should fail if it can't backup the file" do
file.stubs(:stat).returns stub('stat', :ftype => 'file')
file.stubs(:perform_backup).returns false
expect { file.remove_existing(:file) }.to raise_error(Puppet::Error, /Could not back up; will not replace/)
end
describe "backing up directories" do
it "should not backup directories if force is false" do
file[:force] = false
file.stubs(:stat).returns stub('stat', :ftype => 'directory')
file.expects(:perform_backup).never
file.remove_existing(:file).should == false
end
it "should backup directories if force is true" do
file[:force] = true
FileUtils.expects(:rmtree).with(file[:path])
file.stubs(:stat).returns stub('stat', :ftype => 'directory')
file.expects(:perform_backup).once.returns(true)
file.remove_existing(:file).should == true
end
end
it "should not do anything if the file is already the right type and not a link" do
file.stubs(:stat).returns stub('stat', :ftype => 'file')
file.remove_existing(:file).should == false
end
it "should not remove directories and should not invalidate the stat unless force is set" do
# Actually call stat to set @needs_stat to nil
file.stat
file.stubs(:stat).returns stub('stat', :ftype => 'directory')
file.remove_existing(:file)
file.instance_variable_get(:@stat).should == nil
@logs.should be_any {|log| log.level == :notice and log.message =~ /Not removing directory; use 'force' to override/}
end
it "should remove a directory if force is set" do
file[:force] = true
file.stubs(:stat).returns stub('stat', :ftype => 'directory')
FileUtils.expects(:rmtree).with(file[:path])
file.remove_existing(:file).should == true
end
it "should remove an existing file" do
file.stubs(:perform_backup).returns true
FileUtils.touch(path)
file.remove_existing(:directory).should == true
Puppet::FileSystem.exist?(file[:path]).should == false
end
it "should remove an existing link", :if => described_class.defaultprovider.feature?(:manages_symlinks) do
file.stubs(:perform_backup).returns true
target = tmpfile('link_target')
FileUtils.touch(target)
Puppet::FileSystem.symlink(target, path)
file[:target] = target
file.remove_existing(:directory).should == true
Puppet::FileSystem.exist?(file[:path]).should == false
end
it "should fail if the file is not a file, link, or directory" do
file.stubs(:stat).returns stub('stat', :ftype => 'socket')
expect { file.remove_existing(:file) }.to raise_error(Puppet::Error, /Could not back up files of type socket/)
end
it "should invalidate the existing stat of the file" do
# Actually call stat to set @needs_stat to nil
file.stat
file.stubs(:stat).returns stub('stat', :ftype => 'file')
Puppet::FileSystem.stubs(:unlink)
file.remove_existing(:directory).should == true
file.instance_variable_get(:@stat).should == :needs_stat
end
end
describe "#retrieve" do
it "should copy the source values if the 'source' parameter is set" do
file[:source] = File.expand_path('/foo/bar')
file.parameter(:source).expects(:copy_source_values)
file.retrieve
end
end
describe "#should_be_file?" do
it "should have a method for determining if the file should be a normal file" do
file.must respond_to(:should_be_file?)
end
it "should be a file if :ensure is set to :file" do
file[:ensure] = :file
file.must be_should_be_file
end
it "should be a file if :ensure is set to :present and the file exists as a normal file" do
file.stubs(:stat).returns(mock('stat', :ftype => "file"))
file[:ensure] = :present
file.must be_should_be_file
end
it "should not be a file if :ensure is set to something other than :file" do
file[:ensure] = :directory
file.must_not be_should_be_file
end
it "should not be a file if :ensure is set to :present and the file exists but is not a normal file" do
file.stubs(:stat).returns(mock('stat', :ftype => "directory"))
file[:ensure] = :present
file.must_not be_should_be_file
end
it "should be a file if :ensure is not set and :content is" do
file[:content] = "foo"
file.must be_should_be_file
end
it "should be a file if neither :ensure nor :content is set but the file exists as a normal file" do
file.stubs(:stat).returns(mock("stat", :ftype => "file"))
file.must be_should_be_file
end
it "should not be a file if neither :ensure nor :content is set but the file exists but not as a normal file" do
file.stubs(:stat).returns(mock("stat", :ftype => "directory"))
file.must_not be_should_be_file
end
end
describe "#stat", :if => described_class.defaultprovider.feature?(:manages_symlinks) do
before do
target = tmpfile('link_target')
FileUtils.touch(target)
Puppet::FileSystem.symlink(target, path)
file[:target] = target
file[:links] = :manage # so we always use :lstat
end
it "should stat the target if it is following links" do
file[:links] = :follow
file.stat.ftype.should == 'file'
end
it "should stat the link if is it not following links" do
file[:links] = :manage
file.stat.ftype.should == 'link'
end
it "should return nil if the file does not exist" do
file[:path] = make_absolute('/foo/bar/baz/non-existent')
file.stat.should be_nil
end
it "should return nil if the file cannot be stat'ed" do
dir = tmpfile('link_test_dir')
child = File.join(dir, 'some_file')
Dir.mkdir(dir)
File.chmod(0, dir)
file[:path] = child
file.stat.should be_nil
# chmod it back so we can clean it up
File.chmod(0777, dir)
end
it "should return nil if parts of path are no directories" do
regular_file = tmpfile('ENOTDIR_test')
FileUtils.touch(regular_file)
impossible_child = File.join(regular_file, 'some_file')
file[:path] = impossible_child
file.stat.should be_nil
end
it "should return the stat instance" do
file.stat.should be_a(File::Stat)
end
it "should cache the stat instance" do
file.stat.should equal(file.stat)
end
end
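# Illustrative sketch, not from the original spec: the :links handling
# exercised above, with a throwaway path. :follow stats the link target,
# :manage stats the link itself, and the result is cached until #flush.
require 'puppet'
link = Puppet::Type.type(:file).new(:path => '/tmp/some_link', :links => :follow)
link.stat && link.stat.ftype   # => 'file' when the target exists, nil when it does not
link[:links] = :manage
link.flush                     # discard the cached stat so the next call re-stats
link.stat && link.stat.ftype   # => 'link'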
describe "#write" do
describe "when validating the checksum" do
before { file.stubs(:validate_checksum?).returns(true) }
it "should fail if the checksum parameter and content checksums do not match" do
checksum = stub('checksum_parameter', :sum => 'checksum_b', :sum_file => 'checksum_b')
file.stubs(:parameter).with(:checksum).returns(checksum)
property = stub('content_property', :actual_content => "something", :length => "something".length, :write => 'checksum_a')
file.stubs(:property).with(:content).returns(property)
expect { file.write :NOTUSED }.to raise_error(Puppet::Error)
end
end
describe "when not validating the checksum" do
before { file.stubs(:validate_checksum?).returns(false) }
it "should not fail if the checksum property and content checksums do not match" do
checksum = stub('checksum_parameter', :sum => 'checksum_b')
file.stubs(:parameter).with(:checksum).returns(checksum)
property = stub('content_property', :actual_content => "something", :length => "something".length, :write => 'checksum_a')
file.stubs(:property).with(:content).returns(property)
expect { file.write :NOTUSED }.to_not raise_error
end
end
describe "when resource mode is supplied" do
before { file.stubs(:property_fix) }
context "and writing temporary files" do
before { file.stubs(:write_temporary_file?).returns(true) }
it "should convert symbolic mode to int" do
file[:mode] = 'oga=r'
Puppet::Util.expects(:replace_file).with(file[:path], 0444)
file.write :NOTUSED
end
it "should support int modes" do
file[:mode] = '0444'
Puppet::Util.expects(:replace_file).with(file[:path], 0444)
file.write :NOTUSED
end
end
context "and not writing temporary files" do
before { file.stubs(:write_temporary_file?).returns(false) }
it "should set a umask of 0" do
file[:mode] = 'oga=r'
Puppet::Util.expects(:withumask).with(0)
file.write :NOTUSED
end
it "should convert symbolic mode to int" do
file[:mode] = 'oga=r'
File.expects(:open).with(file[:path], anything, 0444)
file.write :NOTUSED
end
it "should support int modes" do
file[:mode] = '0444'
File.expects(:open).with(file[:path], anything, 0444)
file.write :NOTUSED
end
end
end
describe "when resource mode is not supplied" do
context "and content is supplied" do
it "should default to 0644 mode" do
file = described_class.new(:path => path, :content => "file content")
file.write :NOTUSED
expect(File.stat(file[:path]).mode & 0777).to eq(0644)
end
end
context "and no content is supplied" do
it "should use puppet's default umask of 022" do
file = described_class.new(:path => path)
umask_from_the_user = 0777
Puppet::Util.withumask(umask_from_the_user) do
file.write :NOTUSED
end
expect(File.stat(file[:path]).mode & 0777).to eq(0644)
end
end
end
end
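# Illustrative sketch, not from the original spec: the mode handling exercised
# above, with throwaway paths. Per these specs a symbolic mode such as 'oga=r'
# is written out as 0444, and omitting :mode while supplying :content yields
# 0644 under Puppet's default umask of 022.
require 'puppet'
readonly = Puppet::Type.type(:file).new(:path => '/tmp/readonly_example', :content => 'data', :mode => 'oga=r')
defaulted = Puppet::Type.type(:file).new(:path => '/tmp/default_mode_example', :content => 'data')
# applying readonly writes the file with mode 0444; applying defaulted writes it 0644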
describe "#fail_if_checksum_is_wrong" do
it "should fail if the checksum of the file doesn't match the expected one" do
expect do
file.instance_eval do
parameter(:checksum).stubs(:sum_file).returns('wrong!!')
fail_if_checksum_is_wrong(self[:path], 'anything!')
end
end.to raise_error(Puppet::Error, /File written to disk did not match checksum/)
end
it "should not fail if the checksum is correct" do
file.instance_eval do
parameter(:checksum).stubs(:sum_file).returns('anything!')
fail_if_checksum_is_wrong(self[:path], 'anything!').should == nil
end
end
it "should not fail if the checksum is absent" do
file.instance_eval do
parameter(:checksum).stubs(:sum_file).returns(nil)
fail_if_checksum_is_wrong(self[:path], 'anything!').should == nil
end
end
end
describe "#write_content" do
it "should delegate writing the file to the content property" do
io = stub('io')
file[:content] = "some content here"
file.property(:content).expects(:write).with(io)
file.send(:write_content, io)
end
end
describe "#write_temporary_file?" do
it "should be true if the file has specified content" do
file[:content] = 'some content'
file.send(:write_temporary_file?).should be_true
end
it "should be true if the file has specified source" do
file[:source] = File.expand_path('/tmp/foo')
file.send(:write_temporary_file?).should be_true
end
it "should be false if the file has neither content nor source" do
file.send(:write_temporary_file?).should be_false
end
end
describe "#property_fix" do
{
:mode => 0777,
:owner => 'joeuser',
:group => 'joeusers',
:seluser => 'seluser',
:selrole => 'selrole',
:seltype => 'seltype',
:selrange => 'selrange'
}.each do |name,value|
it "should sync the #{name} property if it's not in sync" do
file[name] = value
prop = file.property(name)
prop.expects(:retrieve)
prop.expects(:safe_insync?).returns false
prop.expects(:sync)
file.send(:property_fix)
end
end
end
describe "when autorequiring" do
describe "target" do
it "should require file resource when specified with the target property", :if => described_class.defaultprovider.feature?(:manages_symlinks) do
file = described_class.new(:path => File.expand_path("/foo"), :ensure => :directory)
link = described_class.new(:path => File.expand_path("/bar"), :ensure => :link, :target => File.expand_path("/foo"))
catalog.add_resource file
catalog.add_resource link
reqs = link.autorequire
reqs.size.must == 1
reqs[0].source.must == file
reqs[0].target.must == link
end
it "should require file resource when specified with the ensure property" do
file = described_class.new(:path => File.expand_path("/foo"), :ensure => :directory)
link = described_class.new(:path => File.expand_path("/bar"), :ensure => File.expand_path("/foo"))
catalog.add_resource file
catalog.add_resource link
reqs = link.autorequire
reqs.size.must == 1
reqs[0].source.must == file
reqs[0].target.must == link
end
it "should not require target if target is not managed", :if => described_class.defaultprovider.feature?(:manages_symlinks) do
link = described_class.new(:path => File.expand_path('/foo'), :ensure => :link, :target => '/bar')
catalog.add_resource link
link.autorequire.size.should == 0
end
end
describe "directories" do
it "should autorequire its parent directory" do
dir = described_class.new(:path => File.dirname(path))
catalog.add_resource file
catalog.add_resource dir
reqs = file.autorequire
reqs[0].source.must == dir
reqs[0].target.must == file
end
it "should autorequire its nearest ancestor directory" do
dir = described_class.new(:path => File.dirname(path))
grandparent = described_class.new(:path => File.dirname(File.dirname(path)))
catalog.add_resource file
catalog.add_resource dir
catalog.add_resource grandparent
reqs = file.autorequire
reqs.length.must == 1
reqs[0].source.must == dir
reqs[0].target.must == file
end
it "should not autorequire anything when there is no nearest ancestor directory" do
catalog.add_resource file
file.autorequire.should be_empty
end
it "should not autorequire its parent dir if its parent dir is itself" do
file[:path] = File.expand_path('/')
catalog.add_resource file
file.autorequire.should be_empty
end
describe "on Windows systems", :if => Puppet.features.microsoft_windows? do
describe "when using UNC filenames" do
it "should autorequire its parent directory" do
file[:path] = '//localhost/foo/bar/baz'
dir = described_class.new(:path => "//localhost/foo/bar")
catalog.add_resource file
catalog.add_resource dir
reqs = file.autorequire
reqs[0].source.must == dir
reqs[0].target.must == file
end
it "should autorequire its nearest ancestor directory" do
file = described_class.new(:path => "//localhost/foo/bar/baz/qux")
dir = described_class.new(:path => "//localhost/foo/bar/baz")
grandparent = described_class.new(:path => "//localhost/foo/bar")
catalog.add_resource file
catalog.add_resource dir
catalog.add_resource grandparent
reqs = file.autorequire
reqs.length.must == 1
reqs[0].source.must == dir
reqs[0].target.must == file
end
it "should not autorequire anything when there is no nearest ancestor directory" do
file = described_class.new(:path => "//localhost/foo/bar/baz/qux")
catalog.add_resource file
file.autorequire.should be_empty
end
it "should not autorequire its parent dir if its parent dir is itself" do
file = described_class.new(:path => "//localhost/foo")
catalog.add_resource file
file.autorequire.should be_empty
end
end
end
end
end
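# Illustrative sketch, not from the original spec: the directory autorequire
# behaviour exercised above, with throwaway paths. A managed file picks up an
# automatic dependency on its nearest managed ancestor directory.
require 'puppet'
catalog = Puppet::Resource::Catalog.new
dir = Puppet::Type.type(:file).new(:path => '/tmp/autoreq', :ensure => :directory)
leaf = Puppet::Type.type(:file).new(:path => '/tmp/autoreq/data.txt', :ensure => :file)
catalog.add_resource(dir)
catalog.add_resource(leaf)
leaf.autorequire.length   # => 1, an edge from dir to leaf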
describe "when managing links", :if => Puppet.features.manages_symlinks? do
require 'tempfile'
before :each do
Dir.mkdir(path)
@target = File.join(path, "target")
@link = File.join(path, "link")
target = described_class.new(
:ensure => :file, :path => @target,
:catalog => catalog, :content => 'yayness',
- :mode => 0644)
+ :mode => '0644')
catalog.add_resource target
@link_resource = described_class.new(
:ensure => :link, :path => @link,
:target => @target, :catalog => catalog,
- :mode => 0755)
+ :mode => '0755')
catalog.add_resource @link_resource
# to prevent the catalog from trying to write state.yaml
Puppet::Util::Storage.stubs(:store)
end
it "should preserve the original file mode and ignore the one set by the link" do
@link_resource[:links] = :manage # default
catalog.apply
# I convert them to strings so they display correctly if there's an error.
(Puppet::FileSystem.stat(@target).mode & 007777).to_s(8).should == '644'
end
it "should manage the mode of the followed link" do
pending("Windows cannot presently manage the mode when following symlinks",
:if => Puppet.features.microsoft_windows?) do
@link_resource[:links] = :follow
catalog.apply
(Puppet::FileSystem.stat(@target).mode & 007777).to_s(8).should == '755'
end
end
end
describe "when using source" do
before do
file[:source] = File.expand_path('/one')
end
Puppet::Type::File::ParameterChecksum.value_collection.values.reject {|v| v == :none}.each do |checksum_type|
describe "with checksum '#{checksum_type}'" do
before do
file[:checksum] = checksum_type
end
it 'should validate' do
expect { file.validate }.to_not raise_error
end
end
end
describe "with checksum 'none'" do
before do
file[:checksum] = :none
end
it 'should raise an exception when validating' do
expect { file.validate }.to raise_error(/You cannot specify source when using checksum 'none'/)
end
end
end
describe "when using content" do
before do
file[:content] = 'file contents'
end
(Puppet::Type::File::ParameterChecksum.value_collection.values - SOURCE_ONLY_CHECKSUMS).each do |checksum_type|
describe "with checksum '#{checksum_type}'" do
before do
file[:checksum] = checksum_type
end
it 'should validate' do
expect { file.validate }.to_not raise_error
end
end
end
SOURCE_ONLY_CHECKSUMS.each do |checksum_type|
describe "with checksum '#{checksum_type}'" do
it 'should raise an exception when validating' do
file[:checksum] = checksum_type
expect { file.validate }.to raise_error(/You cannot specify content when using checksum '#{checksum_type}'/)
end
end
end
end
describe "when auditing" do
before :each do
# to prevent the catalog from trying to write state.yaml
Puppet::Util::Storage.stubs(:store)
end
it "should not fail if creating a new file if group is not set" do
file = described_class.new(:path => path, :audit => 'all', :content => 'content')
catalog.add_resource(file)
report = catalog.apply.report
report.resource_statuses["File[#{path}]"].should_not be_failed
File.read(path).should == 'content'
end
it "should not log errors if creating a new file with ensure present and no content" do
file[:audit] = 'content'
file[:ensure] = 'present'
catalog.add_resource(file)
catalog.apply
Puppet::FileSystem.exist?(path).should be_true
@logs.should_not be_any {|l| l.level != :notice }
end
end
describe "when specifying both source and checksum" do
it 'should use the specified checksum when source is first' do
file[:source] = File.expand_path('/foo')
file[:checksum] = :md5lite
file[:checksum].should == :md5lite
end
it 'should use the specified checksum when source is last' do
file[:checksum] = :md5lite
file[:source] = File.expand_path('/foo')
file[:checksum].should == :md5lite
end
end
describe "when validating" do
[[:source, :target], [:source, :content], [:target, :content]].each do |prop1,prop2|
it "should fail if both #{prop1} and #{prop2} are specified" do
file[prop1] = prop1 == :source ? File.expand_path("prop1 value") : "prop1 value"
file[prop2] = "prop2 value"
expect do
file.validate
end.to raise_error(Puppet::Error, /You cannot specify more than one of/)
end
end
end
end
diff --git a/spec/unit/type/nagios_spec.rb b/spec/unit/type/nagios_spec.rb
index bc96c26d4..4bd1271c2 100755
--- a/spec/unit/type/nagios_spec.rb
+++ b/spec/unit/type/nagios_spec.rb
@@ -1,284 +1,293 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet/external/nagios'
describe "Nagios parser" do
NONESCAPED_SEMICOLON_COMMENT = <<-'EOL'
define host{
use linux-server ; Name of host template to use
host_name localhost
alias localhost
address 127.0.0.1
}
define command{
command_name notify-host-by-email
command_line /usr/bin/printf "%b" "***** Nagios *****\n\nNotification Type: $NOTIFICATIONTYPE$\nHost: $HOSTNAME$\nState: $HOSTSTATE$\nAddress: $HOSTADDRESS$\nInfo: $HOSTOUTPUT$\n\nDate/Time: $LONGDATETIME$\n" | /usr/bin/mail -s "** $NOTIFICATIONTYPE$ Host Alert: $HOSTNAME$ is $HOSTSTATE$ **" $CONTACTEMAIL$
}
EOL
LINE_COMMENT_SNIPPET = <<-'EOL'
# This is a comment starting at the beginning of a line
define command{
# This is a comment starting at the beginning of a line
command_name command_name
# This is a comment starting at the beginning of a line
## --PUPPET_NAME-- (called '_naginator_name' in the manifest) command_name
command_line command_line
# This is a comment starting at the beginning of a line
}
# This is a comment starting at the beginning of a line
EOL
LINE_COMMENT_SNIPPET2 = <<-'EOL'
define host{
use linux-server ; Name of host template to use
host_name localhost
alias localhost
address 127.0.0.1
}
define command{
command_name command_name2
command_line command_line2
}
EOL
UNKNOWN_NAGIOS_OBJECT_DEFINITION = <<-'EOL'
define command2{
command_name notify-host-by-email
command_line /usr/bin/printf "%b" "***** Nagios *****\n\nNotification Type: $NOTIFICATIONTYPE$\nHost: $HOSTNAME$\nState: $HOSTSTATE$\nAddress: $HOSTADDRESS$\nInfo: $HOSTOUTPUT$\n\nDate/Time: $LONGDATETIME$\n" | /usr/bin/mail -s "** $NOTIFICATIONTYPE$ Host Alert: $HOSTNAME$ is $HOSTSTATE$ **" $CONTACTEMAIL$
}
EOL
MISSING_CLOSING_CURLY_BRACKET = <<-'EOL'
define command{
command_name notify-host-by-email
command_line /usr/bin/printf "%b" "***** Nagios *****\n\nNotification Type: $NOTIFICATIONTYPE$\nHost: $HOSTNAME$\nState: $HOSTSTATE$\nAddress: $HOSTADDRESS$\nInfo: $HOSTOUTPUT$\n\nDate/Time: $LONGDATETIME$\n" | /usr/bin/mail -s "** $NOTIFICATIONTYPE$ Host Alert: $HOSTNAME$ is $HOSTSTATE$ **" $CONTACTEMAIL$
EOL
ESCAPED_SEMICOLON = <<-'EOL'
define command {
command_name nagios_table_size
command_line $USER3$/check_mysql_health --hostname localhost --username nagioschecks --password nagiosCheckPWD --mode sql --name "SELECT ROUND(Data_length/1024) as Data_kBytes from INFORMATION_SCHEMA.TABLES where TABLE_NAME=\"$ARG1$\"\;" --name2 "table size" --units kBytes -w $ARG2$ -c $ARG3$
}
EOL
POUND_SIGN_HASH_SYMBOL_NOT_IN_FIRST_COLUMN = <<-'EOL'
define command {
command_name notify-by-irc
command_line /usr/local/bin/riseup-nagios-client.pl "$HOSTNAME$ ($SERVICEDESC$) $NOTIFICATIONTYPE$ #$SERVICEATTEMPT$ $SERVICESTATETYPE$ $SERVICEEXECUTIONTIME$s $SERVICELATENCY$s $SERVICEOUTPUT$ $SERVICEPERFDATA$"
}
EOL
ANOTHER_ESCAPED_SEMICOLON = <<-EOL
define command {
\tcommand_line LC_ALL=en_US.UTF-8 /usr/lib/nagios/plugins/check_haproxy -u 'http://blah:blah@$HOSTADDRESS$:8080/haproxy?stats\\;csv'
\tcommand_name check_haproxy
}
EOL
it "should parse without error" do
parser = Nagios::Parser.new
expect {
results = parser.parse(NONESCAPED_SEMICOLON_COMMENT)
}.to_not raise_error
end
describe "when parsing a statement" do
parser = Nagios::Parser.new
results = parser.parse(NONESCAPED_SEMICOLON_COMMENT)
results.each do |obj|
it "should have the proper base type" do
obj.should be_a_kind_of(Nagios::Base)
end
end
end
it "should raise an error when an incorrect object definition is present" do
parser = Nagios::Parser.new
expect {
results = parser.parse(UNKNOWN_NAGIOS_OBJECT_DEFINITION)
}.to raise_error Nagios::Base::UnknownNagiosType
end
it "should raise an error when syntax is not correct" do
parser = Nagios::Parser.new
expect {
results = parser.parse(MISSING_CLOSING_CURLY_BRACKET)
}.to raise_error Nagios::Parser::SyntaxError
end
describe "when encoutering ';'" do
it "should not throw an exception" do
parser = Nagios::Parser.new
expect {
results = parser.parse(ESCAPED_SEMICOLON)
- }.to_not raise_error Nagios::Parser::SyntaxError
+ }.to_not raise_error
end
it "should ignore it if it is a comment" do
parser = Nagios::Parser.new
results = parser.parse(NONESCAPED_SEMICOLON_COMMENT)
results[0].use.should eql("linux-server")
end
it "should parse correctly if it is escaped" do
parser = Nagios::Parser.new
results = parser.parse(ESCAPED_SEMICOLON)
results[0].command_line.should eql("$USER3$/check_mysql_health --hostname localhost --username nagioschecks --password nagiosCheckPWD --mode sql --name \"SELECT ROUND(Data_length/1024) as Data_kBytes from INFORMATION_SCHEMA.TABLES where TABLE_NAME=\\\"$ARG1$\\\";\" --name2 \"table size\" --units kBytes -w $ARG2$ -c $ARG3$")
end
end
describe "when encountering '#'" do
it "should not throw an exception" do
parser = Nagios::Parser.new
expect {
results = parser.parse(POUND_SIGN_HASH_SYMBOL_NOT_IN_FIRST_COLUMN)
- }.to_not raise_error Nagios::Parser::SyntaxError
+ }.to_not raise_error
end
it "should ignore it at the beginning of a line" do
parser = Nagios::Parser.new
results = parser.parse(LINE_COMMENT_SNIPPET)
results[0].command_line.should eql("command_line")
end
it "should let it go anywhere else" do
parser = Nagios::Parser.new
results = parser.parse(POUND_SIGN_HASH_SYMBOL_NOT_IN_FIRST_COLUMN)
results[0].command_line.should eql("/usr/local/bin/riseup-nagios-client.pl \"$HOSTNAME$ ($SERVICEDESC$) $NOTIFICATIONTYPE$ \#$SERVICEATTEMPT$ $SERVICESTATETYPE$ $SERVICEEXECUTIONTIME$s $SERVICELATENCY$s $SERVICEOUTPUT$ $SERVICEPERFDATA$\"")
end
end
describe "when encountering ';' again" do
it "should not throw an exception" do
parser = Nagios::Parser.new
expect {
results = parser.parse(ANOTHER_ESCAPED_SEMICOLON)
- }.to_not raise_error Nagios::Parser::SyntaxError
+ }.to_not raise_error
end
it "should parse correctly" do
parser = Nagios::Parser.new
results = parser.parse(ANOTHER_ESCAPED_SEMICOLON)
results[0].command_line.should eql("LC_ALL=en_US.UTF-8 /usr/lib/nagios/plugins/check_haproxy -u 'http://blah:blah@$HOSTADDRESS$:8080/haproxy?stats;csv'")
end
end
it "should be idempotent" do
parser = Nagios::Parser.new
src = ANOTHER_ESCAPED_SEMICOLON.dup
results = parser.parse(src)
nagios_type = Nagios::Base.create(:command)
nagios_type.command_name = results[0].command_name
nagios_type.command_line = results[0].command_line
nagios_type.to_s.should eql(ANOTHER_ESCAPED_SEMICOLON)
end
end
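# Illustrative sketch, not from the original spec: the parser round-trip
# exercised above, on a throwaway definition. An unescaped ';' starts a comment
# that the parser drops, while an escaped '\;' survives as a literal semicolon.
require 'puppet/external/nagios'
parser = Nagios::Parser.new
results = parser.parse(<<-'EOL')
define command{
command_name example_check
command_line /usr/bin/true ; this trailing comment is discarded
}
EOL
results[0].command_name   # => "example_check"
results[0].command_line   # => "/usr/bin/true"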
describe "Nagios generator" do
it "should escape ';'" do
param = '$USER3$/check_mysql_health --hostname localhost --username nagioschecks --password nagiosCheckPWD --mode sql --name "SELECT ROUND(Data_length/1024) as Data_kBytes from INFORMATION_SCHEMA.TABLES where TABLE_NAME=\"$ARG1$\";" --name2 "table size" --units kBytes -w $ARG2$ -c $ARG3$'
nagios_type = Nagios::Base.create(:command)
nagios_type.command_line = param
nagios_type.to_s.should eql("define command {\n\tcommand_line $USER3$/check_mysql_health --hostname localhost --username nagioschecks --password nagiosCheckPWD --mode sql --name \"SELECT ROUND(Data_length/1024) as Data_kBytes from INFORMATION_SCHEMA.TABLES where TABLE_NAME=\\\"$ARG1$\\\"\\;\" --name2 \"table size\" --units kBytes -w $ARG2$ -c $ARG3$\n}\n")
end
it "should escape ';' if it is not already the case" do
param = "LC_ALL=en_US.UTF-8 /usr/lib/nagios/plugins/check_haproxy -u 'http://blah:blah@$HOSTADDRESS$:8080/haproxy?stats;csv'"
nagios_type = Nagios::Base.create(:command)
nagios_type.command_line = param
nagios_type.to_s.should eql("define command {\n\tcommand_line LC_ALL=en_US.UTF-8 /usr/lib/nagios/plugins/check_haproxy -u 'http://blah:blah@$HOSTADDRESS$:8080/haproxy?stats\\;csv'\n}\n")
end
it "should be idempotent" do
param = '$USER3$/check_mysql_health --hostname localhost --username nagioschecks --password nagiosCheckPWD --mode sql --name "SELECT ROUND(Data_length/1024) as Data_kBytes from INFORMATION_SCHEMA.TABLES where TABLE_NAME=\"$ARG1$\";" --name2 "table size" --units kBytes -w $ARG2$ -c $ARG3$'
nagios_type = Nagios::Base.create(:command)
nagios_type.command_line = param
parser = Nagios::Parser.new
results = parser.parse(nagios_type.to_s)
results[0].command_line.should eql(param)
end
+
+ it "should accept FixNum params and convert to string" do
+ param = 1
+ nagios_type = Nagios::Base.create(:serviceescalation)
+ nagios_type.first_notification = param
+ parser = Nagios::Parser.new
+ results = parser.parse(nagios_type.to_s)
+ results[0].first_notification.should eql(param.to_s)
+ end
end
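# Illustrative sketch, not from the original spec: the generator side exercised
# above. Setting parameters on a Nagios::Base object and calling #to_s emits a
# "define command { ... }" block, escaping any literal ';' as '\;' so the
# parser can read the result back unchanged.
require 'puppet/external/nagios'
nagios_type = Nagios::Base.create(:command)
nagios_type.command_name = 'example_check'
nagios_type.command_line = "/usr/bin/true; echo done"
puts nagios_type.to_s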
describe "Nagios resource types" do
Nagios::Base.eachtype do |name, nagios_type|
puppet_type = Puppet::Type.type("nagios_#{name}")
it "should have a valid type for #{name}" do
puppet_type.should_not be_nil
end
next unless puppet_type
describe puppet_type do
it "should be defined as a Puppet resource type" do
puppet_type.should_not be_nil
end
it "should have documentation" do
puppet_type.instance_variable_get("@doc").should_not == ""
end
it "should have #{nagios_type.namevar} as its key attribute" do
puppet_type.key_attributes.should == [nagios_type.namevar]
end
it "should have documentation for its #{nagios_type.namevar} parameter" do
puppet_type.attrclass(nagios_type.namevar).instance_variable_get("@doc").should_not be_nil
end
it "should have an ensure property" do
puppet_type.should be_validproperty(:ensure)
end
it "should have a target property" do
puppet_type.should be_validproperty(:target)
end
it "should have documentation for its target property" do
puppet_type.attrclass(:target).instance_variable_get("@doc").should_not be_nil
end
[ :owner, :group, :mode ].each do |fileprop|
it "should have a #{fileprop} parameter" do
puppet_type.parameters.should be_include(fileprop)
end
end
nagios_type.parameters.reject { |param| param == nagios_type.namevar or param.to_s =~ /^[0-9]/ }.each do |param|
it "should have a #{param} property" do
puppet_type.should be_validproperty(param)
end
it "should have documentation for its #{param} property" do
puppet_type.attrclass(param).instance_variable_get("@doc").should_not be_nil
end
end
nagios_type.parameters.find_all { |param| param.to_s =~ /^[0-9]/ }.each do |param|
it "should have not have a #{param} property" do
puppet_type.should_not be_validproperty(:param)
end
end
end
end
end
diff --git a/spec/unit/type/resources_spec.rb b/spec/unit/type/resources_spec.rb
index f08afd7ae..e985b9752 100755
--- a/spec/unit/type/resources_spec.rb
+++ b/spec/unit/type/resources_spec.rb
@@ -1,284 +1,318 @@
#! /usr/bin/env ruby
require 'spec_helper'
resources = Puppet::Type.type(:resources)
# There are still plenty of tests to port over from test/.
describe resources do
+
+ before :each do
+ described_class.reset_system_users_max_uid!
+ end
+
describe "when initializing" do
it "should fail if the specified resource type does not exist" do
Puppet::Type.stubs(:type).with { |x| x.to_s.downcase == "resources"}.returns resources
Puppet::Type.expects(:type).with("nosuchtype").returns nil
lambda { resources.new :name => "nosuchtype" }.should raise_error(Puppet::Error)
end
it "should not fail when the specified resource type exists" do
lambda { resources.new :name => "file" }.should_not raise_error
end
it "should set its :resource_type attribute" do
resources.new(:name => "file").resource_type.should == Puppet::Type.type(:file)
end
end
describe :purge do
let (:instance) { described_class.new(:name => 'file') }
it "defaults to false" do
instance[:purge].should be_false
end
it "can be set to false" do
instance[:purge] = 'false'
end
it "cannot be set to true for a resource type that does not accept ensure" do
instance.resource_type.stubs(:respond_to?).returns true
instance.resource_type.stubs(:validproperty?).returns false
expect { instance[:purge] = 'yes' }.to raise_error Puppet::Error
end
it "cannot be set to true for a resource type that does not have instances" do
instance.resource_type.stubs(:respond_to?).returns false
instance.resource_type.stubs(:validproperty?).returns true
expect { instance[:purge] = 'yes' }.to raise_error Puppet::Error
end
it "can be set to true for a resource type that has instances and can accept ensure" do
instance.resource_type.stubs(:respond_to?).returns true
instance.resource_type.stubs(:validproperty?).returns true
- expect { instance[:purge] = 'yes' }.not_to raise_error Puppet::Error
+ expect { instance[:purge] = 'yes' }.to_not raise_error
end
end
describe "#check_user purge behaviour" do
describe "with unless_system_user => true" do
before do
@res = Puppet::Type.type(:resources).new :name => :user, :purge => true, :unless_system_user => true
@res.catalog = Puppet::Resource::Catalog.new
+ Puppet::FileSystem.stubs(:exist?).with('/etc/login.defs').returns false
end
it "should never purge hardcoded system users" do
%w{root nobody bin noaccess daemon sys}.each do |sys_user|
@res.user_check(Puppet::Type.type(:user).new(:name => sys_user)).should be_false
end
end
it "should not purge system users if unless_system_user => true" do
user_hash = {:name => 'system_user', :uid => 125, :system => true}
user = Puppet::Type.type(:user).new(user_hash)
user.stubs(:retrieve_resource).returns Puppet::Resource.new("user", user_hash[:name], :parameters => user_hash)
@res.user_check(user).should be_false
end
- it "should purge manual users if unless_system_user => true" do
- user_hash = {:name => 'system_user', :uid => 525, :system => true}
+ it "should purge non-system users if unless_system_user => true" do
+ user_hash = {:name => 'system_user', :uid => described_class.system_users_max_uid + 1, :system => true}
user = Puppet::Type.type(:user).new(user_hash)
user.stubs(:retrieve_resource).returns Puppet::Resource.new("user", user_hash[:name], :parameters => user_hash)
@res.user_check(user).should be_true
end
- it "should purge system users over 500 if unless_system_user => 600" do
+ it "should not purge system users under 600 if unless_system_user => 600" do
res = Puppet::Type.type(:resources).new :name => :user, :purge => true, :unless_system_user => 600
res.catalog = Puppet::Resource::Catalog.new
- user_hash = {:name => 'system_user', :uid => 525, :system => true}
+ user_hash = {:name => 'system_user', :uid => 500, :system => true}
user = Puppet::Type.type(:user).new(user_hash)
user.stubs(:retrieve_resource).returns Puppet::Resource.new("user", user_hash[:name], :parameters => user_hash)
res.user_check(user).should be_false
end
end
- describe "with unless_uid" do
- describe "with a uid range" do
- before do
- @res = Puppet::Type.type(:resources).new :name => :user, :purge => true, :unless_uid => 10_000..20_000
+ %w(FreeBSD OpenBSD).each do |os|
+ describe "on #{os}" do
+ before :each do
+ Facter.stubs(:value).with(:kernel).returns(os)
+ Facter.stubs(:value).with(:operatingsystem).returns(os)
+ Facter.stubs(:value).with(:osfamily).returns(os)
+ Puppet::FileSystem.stubs(:exist?).with('/etc/login.defs').returns false
+ @res = Puppet::Type.type(:resources).new :name => :user, :purge => true, :unless_system_user => true
@res.catalog = Puppet::Resource::Catalog.new
end
- it "should purge uids that are not in a specified range" do
- user_hash = {:name => 'special_user', :uid => 25_000}
+ it "should not purge system users under 1000" do
+ user_hash = {:name => 'system_user', :uid => 999}
user = Puppet::Type.type(:user).new(user_hash)
user.stubs(:retrieve_resource).returns Puppet::Resource.new("user", user_hash[:name], :parameters => user_hash)
- @res.user_check(user).should be_true
+ @res.user_check(user).should be_false
end
- it "should not purge uids that are in a specified range" do
- user_hash = {:name => 'special_user', :uid => 15_000}
+ it "should purge users over 999" do
+ user_hash = {:name => 'system_user', :uid => 1000}
user = Puppet::Type.type(:user).new(user_hash)
user.stubs(:retrieve_resource).returns Puppet::Resource.new("user", user_hash[:name], :parameters => user_hash)
- @res.user_check(user).should be_false
+ @res.user_check(user).should be_true
end
end
+ end
+
+ describe 'with login.defs present' do
+ before :each do
+ Puppet::FileSystem.expects(:exist?).with('/etc/login.defs').returns true
+ Puppet::FileSystem.expects(:each_line).with('/etc/login.defs').yields(' UID_MIN 1234 # UID_MIN comment ')
+ @res = Puppet::Type.type(:resources).new :name => :user, :purge => true, :unless_system_user => true
+ @res.catalog = Puppet::Resource::Catalog.new
+ end
+
+ it 'should not purge a system user' do
+ user_hash = {:name => 'system_user', :uid => 1233}
+ user = Puppet::Type.type(:user).new(user_hash)
+ user.stubs(:retrieve_resource).returns Puppet::Resource.new("user", user_hash[:name], :parameters => user_hash)
+ @res.user_check(user).should be_false
+ end
+
+ it 'should purge a non-system user' do
+ user_hash = {:name => 'system_user', :uid => 1234}
+ user = Puppet::Type.type(:user).new(user_hash)
+ user.stubs(:retrieve_resource).returns Puppet::Resource.new("user", user_hash[:name], :parameters => user_hash)
+ @res.user_check(user).should be_true
+ end
+ end
- describe "with a uid range array" do
+ describe "with unless_uid" do
+ describe "with a uid array" do
before do
- @res = Puppet::Type.type(:resources).new :name => :user, :purge => true, :unless_uid => [10_000..15_000, 15_000..20_000]
+ @res = Puppet::Type.type(:resources).new :name => :user, :purge => true, :unless_uid => [15_000, 15_001, 15_002]
@res.catalog = Puppet::Resource::Catalog.new
end
- it "should purge uids that are not in a specified range array" do
+ it "should purge uids that are not in a specified array" do
user_hash = {:name => 'special_user', :uid => 25_000}
user = Puppet::Type.type(:user).new(user_hash)
user.stubs(:retrieve_resource).returns Puppet::Resource.new("user", user_hash[:name], :parameters => user_hash)
@res.user_check(user).should be_true
end
- it "should not purge uids that are in a specified range array" do
- user_hash = {:name => 'special_user', :uid => 15_000}
+ it "should not purge uids that are in a specified array" do
+ user_hash = {:name => 'special_user', :uid => 15000}
user = Puppet::Type.type(:user).new(user_hash)
user.stubs(:retrieve_resource).returns Puppet::Resource.new("user", user_hash[:name], :parameters => user_hash)
@res.user_check(user).should be_false
end
end
- describe "with a uid array" do
+ describe "with a single integer uid" do
before do
- @res = Puppet::Type.type(:resources).new :name => :user, :purge => true, :unless_uid => [15_000, 15_001, 15_002]
+ @res = Puppet::Type.type(:resources).new :name => :user, :purge => true, :unless_uid => 15_000
@res.catalog = Puppet::Resource::Catalog.new
end
- it "should purge uids that are not in a specified array" do
+ it "should purge uids that are not specified" do
user_hash = {:name => 'special_user', :uid => 25_000}
user = Puppet::Type.type(:user).new(user_hash)
user.stubs(:retrieve_resource).returns Puppet::Resource.new("user", user_hash[:name], :parameters => user_hash)
@res.user_check(user).should be_true
end
- it "should not purge uids that are in a specified array" do
- user_hash = {:name => 'special_user', :uid => 15000}
+ it "should not purge uids that are specified" do
+ user_hash = {:name => 'special_user', :uid => 15_000}
user = Puppet::Type.type(:user).new(user_hash)
user.stubs(:retrieve_resource).returns Puppet::Resource.new("user", user_hash[:name], :parameters => user_hash)
@res.user_check(user).should be_false
end
-
end
- describe "with a single uid" do
+ describe "with a single string uid" do
before do
- @res = Puppet::Type.type(:resources).new :name => :user, :purge => true, :unless_uid => 15_000
+ @res = Puppet::Type.type(:resources).new :name => :user, :purge => true, :unless_uid => '15000'
@res.catalog = Puppet::Resource::Catalog.new
end
it "should purge uids that are not specified" do
user_hash = {:name => 'special_user', :uid => 25_000}
user = Puppet::Type.type(:user).new(user_hash)
user.stubs(:retrieve_resource).returns Puppet::Resource.new("user", user_hash[:name], :parameters => user_hash)
@res.user_check(user).should be_true
end
it "should not purge uids that are specified" do
user_hash = {:name => 'special_user', :uid => 15_000}
user = Puppet::Type.type(:user).new(user_hash)
user.stubs(:retrieve_resource).returns Puppet::Resource.new("user", user_hash[:name], :parameters => user_hash)
@res.user_check(user).should be_false
end
end
describe "with a mixed uid array" do
before do
- @res = Puppet::Type.type(:resources).new :name => :user, :purge => true, :unless_uid => [10_000..15_000, 16_666]
+ @res = Puppet::Type.type(:resources).new :name => :user, :purge => true, :unless_uid => ['15000', 16_666]
@res.catalog = Puppet::Resource::Catalog.new
end
it "should not purge ids in the range" do
user_hash = {:name => 'special_user', :uid => 15_000}
user = Puppet::Type.type(:user).new(user_hash)
user.stubs(:retrieve_resource).returns Puppet::Resource.new("user", user_hash[:name], :parameters => user_hash)
@res.user_check(user).should be_false
end
it "should not purge specified ids" do
user_hash = {:name => 'special_user', :uid => 16_666}
user = Puppet::Type.type(:user).new(user_hash)
user.stubs(:retrieve_resource).returns Puppet::Resource.new("user", user_hash[:name], :parameters => user_hash)
@res.user_check(user).should be_false
end
it "should purge unspecified ids" do
user_hash = {:name => 'special_user', :uid => 17_000}
user = Puppet::Type.type(:user).new(user_hash)
user.stubs(:retrieve_resource).returns Puppet::Resource.new("user", user_hash[:name], :parameters => user_hash)
@res.user_check(user).should be_true
end
end
-
+
end
end
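# Illustrative sketch, not from the original spec: the purge guards exercised
# above, with throwaway uids. unless_uid whitelists specific ids (integers or
# strings), and unless_system_user protects users at or below the computed
# system_users_max_uid (taken from login.defs UID_MIN or a per-OS default).
require 'puppet'
purger = Puppet::Type.type(:resources).new(:name => :user, :purge => true, :unless_uid => ['15000', 16_666])
purger.catalog = Puppet::Resource::Catalog.new
# purger.user_check(user) returns true only for users whose uid is not
# protected; #generate then includes those users so they can be purged.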
describe "#generate" do
before do
@host1 = Puppet::Type.type(:host).new(:name => 'localhost', :ip => '127.0.0.1')
@catalog = Puppet::Resource::Catalog.new
end
describe "when dealing with non-purging resources" do
before do
@resources = Puppet::Type.type(:resources).new(:name => 'host')
end
it "should not generate any resource" do
@resources.generate.should be_empty
end
end
describe "when the catalog contains a purging resource" do
before do
@resources = Puppet::Type.type(:resources).new(:name => 'host', :purge => true)
@purgeable_resource = Puppet::Type.type(:host).new(:name => 'localhost', :ip => '127.0.0.1')
@catalog.add_resource @resources
end
it "should not generate a duplicate of that resource" do
Puppet::Type.type(:host).stubs(:instances).returns [@host1]
@catalog.add_resource @host1
@resources.generate.collect { |r| r.ref }.should_not include(@host1.ref)
end
it "should not include the skipped system users" do
res = Puppet::Type.type(:resources).new :name => :user, :purge => true
res.catalog = Puppet::Resource::Catalog.new
root = Puppet::Type.type(:user).new(:name => "root")
Puppet::Type.type(:user).expects(:instances).returns [ root ]
list = res.generate
names = list.collect { |r| r[:name] }
names.should_not be_include("root")
end
describe "when generating a purgeable resource" do
it "should be included in the generated resources" do
Puppet::Type.type(:host).stubs(:instances).returns [@purgeable_resource]
@resources.generate.collect { |r| r.ref }.should include(@purgeable_resource.ref)
end
end
describe "when the instance's do not have an ensure property" do
it "should not be included in the generated resources" do
@no_ensure_resource = Puppet::Type.type(:exec).new(:name => "#{File.expand_path('/usr/bin/env')} echo")
Puppet::Type.type(:host).stubs(:instances).returns [@no_ensure_resource]
@resources.generate.collect { |r| r.ref }.should_not include(@no_ensure_resource.ref)
end
end
describe "when the instance's ensure property does not accept absent" do
it "should not be included in the generated resources" do
@no_absent_resource = Puppet::Type.type(:service).new(:name => 'foobar')
Puppet::Type.type(:host).stubs(:instances).returns [@no_absent_resource]
@resources.generate.collect { |r| r.ref }.should_not include(@no_absent_resource.ref)
end
end
describe "when checking the instance fails" do
it "should not be included in the generated resources" do
@purgeable_resource = Puppet::Type.type(:host).new(:name => 'foobar')
Puppet::Type.type(:host).stubs(:instances).returns [@purgeable_resource]
@resources.expects(:check).with(@purgeable_resource).returns(false)
@resources.generate.collect { |r| r.ref }.should_not include(@purgeable_resource.ref)
end
end
end
end
end
diff --git a/spec/unit/type/user_spec.rb b/spec/unit/type/user_spec.rb
index 2e016d9ba..f5a351752 100755
--- a/spec/unit/type/user_spec.rb
+++ b/spec/unit/type/user_spec.rb
@@ -1,514 +1,514 @@
#! /usr/bin/env ruby
# encoding: UTF-8
require 'spec_helper'
describe Puppet::Type.type(:user) do
before :each do
@provider_class = described_class.provide(:simple) do
has_features :manages_expiry, :manages_password_age, :manages_passwords, :manages_solaris_rbac, :manages_shell
mk_resource_methods
def create; end
def delete; end
def exists?; get(:ensure) != :absent; end
def flush; end
def self.instances; []; end
end
described_class.stubs(:defaultprovider).returns @provider_class
end
it "should be able to create an instance" do
described_class.new(:name => "foo").should_not be_nil
end
it "should have an allows_duplicates feature" do
described_class.provider_feature(:allows_duplicates).should_not be_nil
end
it "should have a manages_homedir feature" do
described_class.provider_feature(:manages_homedir).should_not be_nil
end
it "should have a manages_passwords feature" do
described_class.provider_feature(:manages_passwords).should_not be_nil
end
it "should have a manages_solaris_rbac feature" do
described_class.provider_feature(:manages_solaris_rbac).should_not be_nil
end
it "should have a manages_expiry feature" do
described_class.provider_feature(:manages_expiry).should_not be_nil
end
it "should have a manages_password_age feature" do
described_class.provider_feature(:manages_password_age).should_not be_nil
end
it "should have a system_users feature" do
described_class.provider_feature(:system_users).should_not be_nil
end
it "should have a manages_shell feature" do
described_class.provider_feature(:manages_shell).should_not be_nil
end
describe :managehome do
let (:provider) { @provider_class.new(:name => 'foo', :ensure => :absent) }
let (:instance) { described_class.new(:name => 'foo', :provider => provider) }
it "defaults to false" do
instance[:managehome].should be_false
end
it "can be set to false" do
instance[:managehome] = 'false'
end
it "cannot be set to true for a provider that does not manage homedirs" do
provider.class.stubs(:manages_homedir?).returns false
expect { instance[:managehome] = 'yes' }.to raise_error(Puppet::Error, /can not manage home directories/)
end
it "can be set to true for a provider that does manage homedirs" do
provider.class.stubs(:manages_homedir?).returns true
instance[:managehome] = 'yes'
end
end
describe "instances" do
it "should delegate existence questions to its provider" do
@provider = @provider_class.new(:name => 'foo', :ensure => :absent)
instance = described_class.new(:name => "foo", :provider => @provider)
instance.exists?.should == false
@provider.set(:ensure => :present)
instance.exists?.should == true
end
end
properties = [:ensure, :uid, :gid, :home, :comment, :shell, :password, :password_min_age, :password_max_age, :groups, :roles, :auths, :profiles, :project, :keys, :expiry]
properties.each do |property|
it "should have a #{property} property" do
described_class.attrclass(property).ancestors.should be_include(Puppet::Property)
end
it "should have documentation for its #{property} property" do
described_class.attrclass(property).doc.should be_instance_of(String)
end
end
list_properties = [:groups, :roles, :auths]
list_properties.each do |property|
it "should have a list '#{property}'" do
described_class.attrclass(property).ancestors.should be_include(Puppet::Property::List)
end
end
it "should have an ordered list 'profiles'" do
described_class.attrclass(:profiles).ancestors.should be_include(Puppet::Property::OrderedList)
end
it "should have key values 'keys'" do
described_class.attrclass(:keys).ancestors.should be_include(Puppet::Property::KeyValue)
end
describe "when retrieving all current values" do
before do
@provider = @provider_class.new(:name => 'foo', :ensure => :present, :uid => 15, :gid => 15)
@user = described_class.new(:name => "foo", :uid => 10, :provider => @provider)
end
it "should return a hash containing values for all set properties" do
@user[:gid] = 10
values = @user.retrieve
[@user.property(:uid), @user.property(:gid)].each { |property| values.should be_include(property) }
end
it "should set all values to :absent if the user is absent" do
@user.property(:ensure).expects(:retrieve).returns :absent
@user.property(:uid).expects(:retrieve).never
@user.retrieve[@user.property(:uid)].should == :absent
end
it "should include the result of retrieving each property's current value if the user is present" do
@user.retrieve[@user.property(:uid)].should == 15
end
end
describe "when managing the ensure property" do
it "should support a :present value" do
expect { described_class.new(:name => 'foo', :ensure => :present) }.to_not raise_error
end
it "should support an :absent value" do
expect { described_class.new(:name => 'foo', :ensure => :absent) }.to_not raise_error
end
it "should call :create on the provider when asked to sync to the :present state" do
@provider = @provider_class.new(:name => 'foo', :ensure => :absent)
@provider.expects(:create)
described_class.new(:name => 'foo', :ensure => :present, :provider => @provider).parameter(:ensure).sync
end
it "should call :delete on the provider when asked to sync to the :absent state" do
@provider = @provider_class.new(:name => 'foo', :ensure => :present)
@provider.expects(:delete)
described_class.new(:name => 'foo', :ensure => :absent, :provider => @provider).parameter(:ensure).sync
end
describe "and determining the current state" do
it "should return :present when the provider indicates the user exists" do
@provider = @provider_class.new(:name => 'foo', :ensure => :present)
described_class.new(:name => 'foo', :ensure => :absent, :provider => @provider).parameter(:ensure).retrieve.should == :present
end
it "should return :absent when the provider indicates the user does not exist" do
@provider = @provider_class.new(:name => 'foo', :ensure => :absent)
described_class.new(:name => 'foo', :ensure => :present, :provider => @provider).parameter(:ensure).retrieve.should == :absent
end
end
end
describe "when managing the uid property" do
it "should convert number-looking strings into actual numbers" do
described_class.new(:name => 'foo', :uid => '50')[:uid].should == 50
end
it "should support UIDs as numbers" do
described_class.new(:name => 'foo', :uid => 50)[:uid].should == 50
end
it "should support :absent as a value" do
described_class.new(:name => 'foo', :uid => :absent)[:uid].should == :absent
end
end
describe "when managing the gid" do
it "should support :absent as a value" do
described_class.new(:name => 'foo', :gid => :absent)[:gid].should == :absent
end
it "should convert number-looking strings into actual numbers" do
described_class.new(:name => 'foo', :gid => '50')[:gid].should == 50
end
it "should support GIDs specified as integers" do
described_class.new(:name => 'foo', :gid => 50)[:gid].should == 50
end
it "should support groups specified by name" do
described_class.new(:name => 'foo', :gid => 'foo')[:gid].should == 'foo'
end
describe "when testing whether in sync" do
it "should return true if no 'should' values are set" do
# this is currently not the case because gid has no default value, so we would never even
# call insync? on that property
if param = described_class.new(:name => 'foo').parameter(:gid)
param.must be_safe_insync(500)
end
end
it "should return true if any of the specified groups are equal to the current integer" do
Puppet::Util.expects(:gid).with("foo").returns 300
Puppet::Util.expects(:gid).with("bar").returns 500
described_class.new(:name => 'baz', :gid => [ 'foo', 'bar' ]).parameter(:gid).must be_safe_insync(500)
end
it "should return false if none of the specified groups are equal to the current integer" do
Puppet::Util.expects(:gid).with("foo").returns 300
Puppet::Util.expects(:gid).with("bar").returns 500
described_class.new(:name => 'baz', :gid => [ 'foo', 'bar' ]).parameter(:gid).must_not be_safe_insync(700)
end
end
describe "when syncing" do
it "should use the first found, specified group as the desired value and send it to the provider" do
Puppet::Util.expects(:gid).with("foo").returns nil
Puppet::Util.expects(:gid).with("bar").returns 500
@provider = @provider_class.new(:name => 'foo')
resource = described_class.new(:name => 'foo', :provider => @provider, :gid => [ 'foo', 'bar' ])
@provider.expects(:gid=).with 500
resource.parameter(:gid).sync
end
end
end
describe "when managing groups" do
it "should support a singe group" do
expect { described_class.new(:name => 'foo', :groups => 'bar') }.to_not raise_error
end
it "should support multiple groups as an array" do
expect { described_class.new(:name => 'foo', :groups => [ 'bar' ]) }.to_not raise_error
expect { described_class.new(:name => 'foo', :groups => [ 'bar', 'baz' ]) }.to_not raise_error
end
it "should not support a comma separated list" do
expect { described_class.new(:name => 'foo', :groups => 'bar,baz') }.to raise_error(Puppet::Error, /Group names must be provided as an array/)
end
it "should not support an empty string" do
expect { described_class.new(:name => 'foo', :groups => '') }.to raise_error(Puppet::Error, /Group names must not be empty/)
end
describe "when testing is in sync" do
before :each do
# the useradd provider uses a single string to represent groups and so does Puppet::Property::List when converting to should values
@provider = @provider_class.new(:name => 'foo', :groups => 'a,b,e,f')
end
it "should not care about order" do
@property = described_class.new(:name => 'foo', :groups => [ 'a', 'c', 'b' ]).property(:groups)
@property.must be_safe_insync([ 'a', 'b', 'c' ])
@property.must be_safe_insync([ 'a', 'c', 'b' ])
@property.must be_safe_insync([ 'b', 'a', 'c' ])
@property.must be_safe_insync([ 'b', 'c', 'a' ])
@property.must be_safe_insync([ 'c', 'a', 'b' ])
@property.must be_safe_insync([ 'c', 'b', 'a' ])
end
it "should merge current value and desired value if membership minimal" do
@instance = described_class.new(:name => 'foo', :groups => [ 'a', 'c', 'b' ], :provider => @provider)
@instance[:membership] = :minimum
@instance[:groups].should == 'a,b,c,e,f'
end
it "should not treat a subset of groups insync if membership inclusive" do
@instance = described_class.new(:name => 'foo', :groups => [ 'a', 'c', 'b' ], :provider => @provider)
@instance[:membership] = :inclusive
@instance[:groups].should == 'a,b,c'
end
end
end
describe "when managing expiry" do
it "should fail if given an invalid date" do
expect { described_class.new(:name => 'foo', :expiry => "200-20-20") }.to raise_error(Puppet::Error, /Expiry dates must be YYYY-MM-DD/)
end
end
describe "when managing minimum password age" do
it "should accept a negative minimum age" do
expect { described_class.new(:name => 'foo', :password_min_age => '-1') }.to_not raise_error
end
it "should fail with an empty minimum age" do
expect { described_class.new(:name => 'foo', :password_min_age => '') }.to raise_error(Puppet::Error, /minimum age must be provided as a number/)
end
end
describe "when managing maximum password age" do
it "should accept a negative maximum age" do
expect { described_class.new(:name => 'foo', :password_max_age => '-1') }.to_not raise_error
end
it "should fail with an empty maximum age" do
expect { described_class.new(:name => 'foo', :password_max_age => '') }.to raise_error(Puppet::Error, /maximum age must be provided as a number/)
end
end
describe "when managing passwords" do
before do
@password = described_class.new(:name => 'foo', :password => 'mypass').parameter(:password)
end
it "should not include the password in the change log when adding the password" do
@password.change_to_s(:absent, "mypass").should_not be_include("mypass")
end
it "should not include the password in the change log when changing the password" do
@password.change_to_s("other", "mypass").should_not be_include("mypass")
end
it "should redact the password when displaying the old value" do
@password.is_to_s("currentpassword").should =~ /^\[old password hash redacted\]$/
end
it "should redact the password when displaying the new value" do
@password.should_to_s("newpassword").should =~ /^\[new password hash redacted\]$/
end
it "should fail if a ':' is included in the password" do
expect { described_class.new(:name => 'foo', :password => "some:thing") }.to raise_error(Puppet::Error, /Passwords cannot include ':'/)
end
it "should allow the value to be set to :absent" do
expect { described_class.new(:name => 'foo', :password => :absent) }.to_not raise_error
end
end
describe "when managing comment on Ruby 1.9", :if => String.method_defined?(:encode) do
it "should force value encoding to ASCII-8BIT" do
value = 'abcd™'
value.encoding.should == Encoding::UTF_8
user = described_class.new(:name => 'foo', :comment => value)
user[:comment].encoding.should == Encoding::ASCII_8BIT
user[:comment].should == value.force_encoding(Encoding::ASCII_8BIT)
end
end
describe "when manages_solaris_rbac is enabled" do
it "should support a :role value for ensure" do
expect { described_class.new(:name => 'foo', :ensure => :role) }.to_not raise_error
end
end
describe "when user has roles" do
it "should autorequire roles" do
testuser = described_class.new(:name => "testuser", :roles => ['testrole'] )
testrole = described_class.new(:name => "testrole")
config = Puppet::Resource::Catalog.new :testing do |conf|
[testuser, testrole].each { |resource| conf.add_resource resource }
end
Puppet::Type::User::ProviderDirectoryservice.stubs(:get_macosx_version_major).returns "10.5"
rel = testuser.autorequire[0]
rel.source.ref.should == testrole.ref
rel.target.ref.should == testuser.ref
end
end
describe "when setting shell" do
before :each do
@shell_provider_class = described_class.provide(:shell_manager) do
has_features :manages_shell
mk_resource_methods
def create; check_valid_shell; end
def shell=(value); check_valid_shell; end
def delete; end
def exists?; get(:ensure) != :absent; end
def flush; end
def self.instances; []; end
def check_valid_shell; end
end
described_class.stubs(:defaultprovider).returns @shell_provider_class
end
it "should call :check_valid_shell on the provider when changing shell value" do
@provider = @shell_provider_class.new(:name => 'foo', :shell => '/bin/bash', :ensure => :present)
@provider.expects(:check_valid_shell)
resource = described_class.new(:name => 'foo', :shell => '/bin/zsh', :provider => @provider)
Puppet::Util::Storage.stubs(:load)
Puppet::Util::Storage.stubs(:store)
catalog = Puppet::Resource::Catalog.new
catalog.add_resource resource
catalog.apply
end
it "should call :check_valid_shell on the provider when changing ensure from present to absent" do
@provider = @shell_provider_class.new(:name => 'foo', :shell => '/bin/bash', :ensure => :absent)
@provider.expects(:check_valid_shell)
resource = described_class.new(:name => 'foo', :shell => '/bin/zsh', :provider => @provider)
Puppet::Util::Storage.stubs(:load)
Puppet::Util::Storage.stubs(:store)
catalog = Puppet::Resource::Catalog.new
catalog.add_resource resource
catalog.apply
end
end
describe "when purging ssh keys" do
it "should not accept a keyfile with a relative path" do
expect {
described_class.new(:name => "a", :purge_ssh_keys => "keys")
}.to raise_error(Puppet::Error, /Paths to keyfiles must be absolute, not keys/)
end
context "with a home directory specified" do
it "should accept true" do
described_class.new(:name => "a", :home => "/tmp", :purge_ssh_keys => true)
end
it "should accept the ~ wildcard" do
described_class.new(:name => "a", :home => "/tmp", :purge_ssh_keys => "~/keys")
end
it "should accept the %h wildcard" do
described_class.new(:name => "a", :home => "/tmp", :purge_ssh_keys => "%h/keys")
end
it "raises when given a relative path" do
expect {
described_class.new(:name => "a", :home => "/tmp", :purge_ssh_keys => "keys")
}.to raise_error(Puppet::Error, /Paths to keyfiles must be absolute/)
end
end
context "with no home directory specified" do
it "should not accept true" do
expect {
described_class.new(:name => "a", :purge_ssh_keys => true)
}.to raise_error(Puppet::Error, /purge_ssh_keys can only be true for users with a defined home directory/)
end
it "should not accept the ~ wildcard" do
expect {
described_class.new(:name => "a", :purge_ssh_keys => "~/keys")
}.to raise_error(Puppet::Error, /meta character ~ or %h only allowed for users with a defined home directory/)
end
it "should not accept the %h wildcard" do
expect {
described_class.new(:name => "a", :purge_ssh_keys => "%h/keys")
}.to raise_error(Puppet::Error, /meta character ~ or %h only allowed for users with a defined home directory/)
end
end
context "with a valid parameter" do
let(:paths) do
[ "/dev/null", "/tmp/keyfile" ].map { |path| File.expand_path(path) }
end
subject do
res = described_class.new(:name => "test", :purge_ssh_keys => paths)
res.catalog = Puppet::Resource::Catalog.new
res
end
it "should not just return from generate" do
subject.expects :find_unmanaged_keys
subject.generate
end
it "should check each keyfile for readability" do
paths.each do |path|
File.expects(:readable?).with(path)
end
subject.generate
end
end
describe "generated keys" do
subject do
res = described_class.new(:name => "test_user_name", :purge_ssh_keys => purge_param)
res.catalog = Puppet::Resource::Catalog.new
res
end
context "when purging is disabled" do
let(:purge_param) { false }
its(:generate) { should be_empty }
end
context "when purging is enabled" do
let(:purge_param) { my_fixture('authorized_keys') }
let(:resources) { subject.generate }
it "should contain a resource for each key" do
names = resources.collect { |res| res.name }
- names.should include("keyname1")
+ names.should include("key1 name")
names.should include("keyname2")
end
it "should not include keys in comment lines" do
names = resources.collect { |res| res.name }
names.should_not include("keyname3")
end
it "should each have a value for the user property" do
resources.map { |res|
res[:user]
}.reject { |user_name|
user_name == "test_user_name"
}.should be_empty
end
end
end
end
end
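As a quick reference for the password handling tested above, here is a minimal sketch, not part of the patch, assuming the puppet library is loadable and the platform's default user provider supports the manages_passwords feature:
````
require 'puppet'
user     = Puppet::Type.type(:user).new(:name => 'foo', :password => 'mypass')
password = user.parameter(:password)
# Change-log strings never contain the literal value ...
puts password.change_to_s(:absent, 'mypass')   # message without 'mypass'
# ... and old/new values are shown redacted.
puts password.is_to_s('oldhash')       # => "[old password hash redacted]"
puts password.should_to_s('newhash')   # => "[new password hash redacted]"
````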
diff --git a/spec/unit/type/yumrepo_spec.rb b/spec/unit/type/yumrepo_spec.rb
old mode 100644
new mode 100755
index b97c60666..2246b7274
--- a/spec/unit/type/yumrepo_spec.rb
+++ b/spec/unit/type/yumrepo_spec.rb
@@ -1,251 +1,387 @@
require 'spec_helper'
require 'puppet'
shared_examples_for "a yumrepo parameter that can be absent" do |param|
it "can be set as :absent" do
described_class.new(:name => 'puppetlabs', param => :absent)
end
it "can be set as \"absent\"" do
described_class.new(:name => 'puppetlabs', param => 'absent')
end
end
+shared_examples_for "a yumrepo parameter that expects a natural value" do |param|
+ it "accepts a valid positive integer" do
+ instance = described_class.new(:name => 'puppetlabs', param => '12')
+ expect(instance[param]).to eq '12'
+ end
+ it "rejects invalid negative integer" do
+ expect {
+ described_class.new(
+ :name => 'puppetlabs',
+ param => '-12'
+ )
+ }.to raise_error(Puppet::ResourceError, /Parameter #{param} failed/)
+ end
+ it "rejects invalid non-integer" do
+ expect {
+ described_class.new(
+ :name => 'puppetlabs',
+ param => 'I\'m a six'
+ )
+ }.to raise_error(Puppet::ResourceError, /Parameter #{param} failed/)
+ end
+ it "rejects invalid string with integers inside" do
+ expect {
+ described_class.new(
+ :name => 'puppetlabs',
+ param => 'I\'m a 6'
+ )
+ }.to raise_error(Puppet::ResourceError, /Parameter #{param} failed/)
+ end
+end
+
shared_examples_for "a yumrepo parameter that expects a boolean parameter" do |param|
valid_values = %w[True False 0 1 No Yes]
valid_values.each do |value|
it "accepts a valid value of #{value}" do
instance = described_class.new(:name => 'puppetlabs', param => value)
expect(instance[param]).to eq value
end
it "accepts #{value} downcased to #{value.downcase}" do
instance = described_class.new(:name => 'puppetlabs', param => value.downcase)
expect(instance[param]).to eq value.downcase
end
+ it "fails on valid value #{value} contained in another value" do
+ expect {
+ described_class.new(
+ :name => 'puppetlabs',
+ param => "bla#{value}bla"
+ )
+ }.to raise_error(Puppet::ResourceError, /Parameter #{param} failed/)
+ end
end
it "rejects invalid boolean values" do
expect {
described_class.new(:name => 'puppetlabs', param => 'flase')
}.to raise_error(Puppet::ResourceError, /Parameter #{param} failed/)
end
end
shared_examples_for "a yumrepo parameter that accepts a single URL" do |param|
it "can accept a single URL" do
described_class.new(
:name => 'puppetlabs',
param => 'http://localhost/yumrepos'
)
end
it "fails if an invalid URL is provided" do
expect {
described_class.new(
:name => 'puppetlabs',
param => "that's no URL!"
)
}.to raise_error(Puppet::ResourceError, /Parameter #{param} failed/)
end
it "fails if a valid URL uses an invalid URI scheme" do
expect {
described_class.new(
:name => 'puppetlabs',
param => 'ldap://localhost/yumrepos'
)
}.to raise_error(Puppet::ResourceError, /Parameter #{param} failed/)
end
end
shared_examples_for "a yumrepo parameter that accepts multiple URLs" do |param|
it "can accept multiple URLs" do
described_class.new(
:name => 'puppetlabs',
param => 'http://localhost/yumrepos http://localhost/more-yumrepos'
)
end
it "fails if multiple URLs are given and one is invalid" do
expect {
described_class.new(
:name => 'puppetlabs',
param => "http://localhost/yumrepos That's no URL!"
)
}.to raise_error(Puppet::ResourceError, /Parameter #{param} failed/)
end
end
+shared_examples_for "a yumrepo parameter that accepts kMG units" do |param|
+ %w[k M G].each do |unit|
+ it "can accept an integer with #{unit} units" do
+ described_class.new(
+ :name => 'puppetlabs',
+ param => "123#{unit}"
+ )
+ end
+ end
+
+ it "fails if wrong unit passed" do
+ expect {
+ described_class.new(
+ :name => 'puppetlabs',
+ param => '123J'
+ )
+ }.to raise_error(Puppet::ResourceError, /Parameter #{param} failed/)
+ end
+end
+
describe Puppet::Type.type(:yumrepo) do
it "has :name as its namevar" do
expect(described_class.key_attributes).to eq [:name]
end
describe "validating" do
describe "name" do
it "is a valid parameter" do
instance = described_class.new(:name => 'puppetlabs')
expect(instance.name).to eq 'puppetlabs'
end
end
describe "target" do
it_behaves_like "a yumrepo parameter that can be absent", :target
end
describe "descr" do
it_behaves_like "a yumrepo parameter that can be absent", :descr
end
describe "mirrorlist" do
it_behaves_like "a yumrepo parameter that accepts a single URL", :mirrorlist
it_behaves_like "a yumrepo parameter that can be absent", :mirrorlist
end
describe "baseurl" do
it_behaves_like "a yumrepo parameter that can be absent", :baseurl
it_behaves_like "a yumrepo parameter that accepts a single URL", :baseurl
it_behaves_like "a yumrepo parameter that accepts multiple URLs", :baseurl
end
describe "enabled" do
it_behaves_like "a yumrepo parameter that expects a boolean parameter", :enabled
it_behaves_like "a yumrepo parameter that can be absent", :enabled
end
describe "gpgcheck" do
it_behaves_like "a yumrepo parameter that expects a boolean parameter", :gpgcheck
it_behaves_like "a yumrepo parameter that can be absent", :gpgcheck
end
describe "repo_gpgcheck" do
it_behaves_like "a yumrepo parameter that expects a boolean parameter", :repo_gpgcheck
it_behaves_like "a yumrepo parameter that can be absent", :repo_gpgcheck
end
describe "gpgkey" do
it_behaves_like "a yumrepo parameter that can be absent", :gpgkey
it_behaves_like "a yumrepo parameter that accepts a single URL", :gpgkey
it_behaves_like "a yumrepo parameter that accepts multiple URLs", :gpgkey
end
describe "include" do
it_behaves_like "a yumrepo parameter that can be absent", :include
it_behaves_like "a yumrepo parameter that accepts a single URL", :include
end
describe "exclude" do
it_behaves_like "a yumrepo parameter that can be absent", :exclude
end
describe "includepkgs" do
it_behaves_like "a yumrepo parameter that can be absent", :includepkgs
end
describe "enablegroups" do
it_behaves_like "a yumrepo parameter that expects a boolean parameter", :enablegroups
it_behaves_like "a yumrepo parameter that can be absent", :enablegroups
end
describe "failovermethod" do
%w[roundrobin priority].each do |value|
it "accepts a value of #{value}" do
described_class.new(:name => "puppetlabs", :failovermethod => value)
end
+ it "fails on valid value #{value} contained in another value" do
+ expect {
+ described_class.new(
+ :name => 'puppetlabs',
+ :failovermethod => "bla#{value}bla"
+ )
+ }.to raise_error(Puppet::ResourceError, /Parameter failovermethod failed/)
+ end
end
it "raises an error if an invalid value is given" do
expect {
described_class.new(:name => "puppetlabs", :failovermethod => "notavalidvalue")
}.to raise_error(Puppet::ResourceError, /Parameter failovermethod failed/)
end
it_behaves_like "a yumrepo parameter that can be absent", :failovermethod
end
describe "keepalive" do
it_behaves_like "a yumrepo parameter that expects a boolean parameter", :keepalive
it_behaves_like "a yumrepo parameter that can be absent", :keepalive
end
describe "http_caching" do
%w[packages all none].each do |value|
it "accepts a valid value of #{value}" do
described_class.new(:name => 'puppetlabs', :http_caching => value)
end
+ it "fails on valid value #{value} contained in another value" do
+ expect {
+ described_class.new(
+ :name => 'puppetlabs',
+ :http_caching => "bla#{value}bla"
+ )
+ }.to raise_error(Puppet::ResourceError, /Parameter http_caching failed/)
+ end
end
it "rejects invalid values" do
expect {
described_class.new(:name => 'puppetlabs', :http_caching => 'yes')
}.to raise_error(Puppet::ResourceError, /Parameter http_caching failed/)
end
it_behaves_like "a yumrepo parameter that can be absent", :http_caching
end
describe "timeout" do
it_behaves_like "a yumrepo parameter that can be absent", :timeout
+ it_behaves_like "a yumrepo parameter that expects a natural value", :timeout
end
describe "metadata_expire" do
it_behaves_like "a yumrepo parameter that can be absent", :metadata_expire
+ it_behaves_like "a yumrepo parameter that expects a natural value", :metadata_expire
+
+ it "accepts dhm units" do
+ %w[d h m].each do |unit|
+ described_class.new(
+ :name => 'puppetlabs',
+ :metadata_expire => "123#{unit}"
+ )
+ end
+ end
+
+ it "accepts never as value" do
+ described_class.new(:name => 'puppetlabs', :metadata_expire => 'never')
+ end
end
describe "protect" do
it_behaves_like "a yumrepo parameter that expects a boolean parameter", :protect
it_behaves_like "a yumrepo parameter that can be absent", :protect
end
describe "priority" do
it_behaves_like "a yumrepo parameter that can be absent", :priority
end
describe "proxy" do
it_behaves_like "a yumrepo parameter that can be absent", :proxy
+ it "accepts _none_" do
+ described_class.new(
+ :name => 'puppetlabs',
+ :proxy => "_none_"
+ )
+ end
it_behaves_like "a yumrepo parameter that accepts a single URL", :proxy
end
describe "proxy_username" do
it_behaves_like "a yumrepo parameter that can be absent", :proxy_username
end
describe "proxy_password" do
it_behaves_like "a yumrepo parameter that can be absent", :proxy_password
end
describe "s3_enabled" do
it_behaves_like "a yumrepo parameter that expects a boolean parameter", :s3_enabled
it_behaves_like "a yumrepo parameter that can be absent", :s3_enabled
end
describe "skip_if_unavailable" do
it_behaves_like "a yumrepo parameter that expects a boolean parameter", :skip_if_unavailable
it_behaves_like "a yumrepo parameter that can be absent", :skip_if_unavailable
end
describe "sslcacert" do
it_behaves_like "a yumrepo parameter that can be absent", :sslcacert
end
describe "sslverify" do
it_behaves_like "a yumrepo parameter that expects a boolean parameter", :sslverify
it_behaves_like "a yumrepo parameter that can be absent", :sslverify
end
describe "sslclientcert" do
it_behaves_like "a yumrepo parameter that can be absent", :sslclientcert
end
describe "sslclientkey" do
it_behaves_like "a yumrepo parameter that can be absent", :sslclientkey
end
describe "metalink" do
it_behaves_like "a yumrepo parameter that can be absent", :metalink
it_behaves_like "a yumrepo parameter that accepts a single URL", :metalink
end
+
+
+ describe "cost" do
+ it_behaves_like "a yumrepo parameter that can be absent", :cost
+ it_behaves_like "a yumrepo parameter that expects a natural value", :cost
+ end
+
+ describe "throttle" do
+ it_behaves_like "a yumrepo parameter that can be absent", :throttle
+ it_behaves_like "a yumrepo parameter that expects a natural value", :throttle
+ it_behaves_like "a yumrepo parameter that accepts kMG units", :throttle
+
+ it "accepts percentage as unit" do
+ described_class.new(
+ :name => 'puppetlabs',
+ :throttle => '123%'
+ )
+ end
+ end
+
+ describe "bandwidth" do
+ it_behaves_like "a yumrepo parameter that can be absent", :bandwidth
+ it_behaves_like "a yumrepo parameter that expects a natural value", :bandwidth
+ it_behaves_like "a yumrepo parameter that accepts kMG units", :bandwidth
+ end
+
+ describe "gpgcakey" do
+ it_behaves_like "a yumrepo parameter that can be absent", :gpgcakey
+ it_behaves_like "a yumrepo parameter that accepts a single URL", :gpgcakey
+ end
+
+ describe "retries" do
+ it_behaves_like "a yumrepo parameter that can be absent", :retries
+ it_behaves_like "a yumrepo parameter that expects a natural value", :retries
+ end
+
+ describe "mirrorlist_expire" do
+ it_behaves_like "a yumrepo parameter that can be absent", :mirrorlist_expire
+ it_behaves_like "a yumrepo parameter that expects a natural value", :mirrorlist_expire
+ end
end
end
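The new yumrepo parameters can be exercised directly. A minimal sketch, not part of the patch, assuming the puppet library from this branch (which adds bandwidth, throttle and retries) is on the load path:
````
require 'puppet'
yumrepo = Puppet::Type.type(:yumrepo)
# Valid: positive integers, optionally with a k/M/G suffix; throttle also
# accepts a percentage.
yumrepo.new(:name => 'puppetlabs', :bandwidth => '10M')
yumrepo.new(:name => 'puppetlabs', :throttle  => '75%')
# Invalid values are rejected when the resource is instantiated.
begin
  yumrepo.new(:name => 'puppetlabs', :retries => '-3')
rescue Puppet::ResourceError => e
  puts e.message   # "Parameter retries failed ..."
end
````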
diff --git a/spec/unit/type/zone_spec.rb b/spec/unit/type/zone_spec.rb
index e54902627..3497ae3a7 100755
--- a/spec/unit/type/zone_spec.rb
+++ b/spec/unit/type/zone_spec.rb
@@ -1,129 +1,172 @@
#! /usr/bin/env ruby
require 'spec_helper'
describe Puppet::Type.type(:zone) do
- let(:zone) { described_class.new(:name => 'dummy', :path => '/dummy', :provider => :solaris) }
+ let(:zone) { described_class.new(:name => 'dummy', :path => '/dummy', :provider => :solaris, :ip=>'if:1.2.3.4:2.3.4.5', :inherit=>'/', :dataset=>'tank') }
let(:provider) { zone.provider }
+ let(:ip) { zone.property(:ip) }
+ let(:inherit) { zone.property(:inherit) }
+ let(:dataset) { zone.property(:dataset) }
parameters = [:create_args, :install_args, :sysidcfg, :realhostname]
parameters.each do |parameter|
it "should have a #{parameter} parameter" do
described_class.attrclass(parameter).ancestors.should be_include(Puppet::Parameter)
end
end
properties = [:ip, :iptype, :autoboot, :pool, :shares, :inherit, :path]
properties.each do |property|
it "should have a #{property} property" do
described_class.attrclass(property).ancestors.should be_include(Puppet::Property)
end
end
+ describe "when trying to set a property that is empty" do
+ it "should verify that property.insync? of nil or :absent is true" do
+ [inherit, ip, dataset].each do |prop|
+ prop.stubs(:should).returns []
+ end
+ [inherit, ip, dataset].each do |prop|
+ prop.insync?(nil).should be_true
+ end
+ [inherit, ip, dataset].each do |prop|
+ prop.insync?(:absent).should be_true
+ end
+ end
+ end
+ describe "when trying to set a property that is non empty" do
+ it "should verify that property.insync? of nil or :absent is false" do
+ [inherit, ip, dataset].each do |prop|
+ prop.stubs(:should).returns ['a','b']
+ end
+ [inherit, ip, dataset].each do |prop|
+ prop.insync?(nil).should be_false
+ end
+ [inherit, ip, dataset].each do |prop|
+ prop.insync?(:absent).should be_false
+ end
+ end
+ end
+ describe "when trying to set a property that is non empty" do
+ it "insync? should return true or false depending on the current value, and new value" do
+ [inherit, ip, dataset].each do |prop|
+ prop.stubs(:should).returns ['a','b']
+ end
+ [inherit, ip, dataset].each do |prop|
+ prop.insync?(['b', 'a']).should be_true
+ end
+ [inherit, ip, dataset].each do |prop|
+ prop.insync?(['a']).should be_false
+ end
+ end
+ end
+
it "should be valid when only :path is given" do
described_class.new(:name => "dummy", :path => '/dummy', :provider => :solaris)
end
it "should be invalid when :ip is missing a \":\" and iptype is :shared" do
expect {
described_class.new(:name => "dummy", :ip => "if", :path => "/dummy", :provider => :solaris)
}.to raise_error(Puppet::Error, /ip must contain interface name and ip address separated by a ":"/)
end
it "should be invalid when :ip has a \":\" and iptype is :exclusive" do
expect {
described_class.new(:name => "dummy", :ip => "if:1.2.3.4", :iptype => :exclusive, :provider => :solaris)
}.to raise_error(Puppet::Error, /only interface may be specified when using exclusive IP stack/)
end
it "should be invalid when :ip has two \":\" and iptype is :exclusive" do
expect {
described_class.new(:name => "dummy", :ip => "if:1.2.3.4:2.3.4.5", :iptype => :exclusive, :provider => :solaris)
}.to raise_error(Puppet::Error, /only interface may be specified when using exclusive IP stack/)
end
it "should be valid when :iptype is :shared and using interface and ip" do
described_class.new(:name => "dummy", :path => "/dummy", :ip => "if:1.2.3.4", :provider => :solaris)
end
it "should be valid when :iptype is :shared and using interface, ip and default route" do
described_class.new(:name => "dummy", :path => "/dummy", :ip => "if:1.2.3.4:2.3.4.5", :provider => :solaris)
end
it "should be valid when :iptype is :exclusive and using interface" do
described_class.new(:name => "dummy", :path => "/dummy", :ip => "if", :iptype => :exclusive, :provider => :solaris)
end
it "should auto-require :dataset entries" do
fs = 'random-pool/some-zfs'
catalog = Puppet::Resource::Catalog.new
relationship_graph = Puppet::Graph::RelationshipGraph.new(Puppet::Graph::RandomPrioritizer.new)
zfs = Puppet::Type.type(:zfs).new(:name => fs)
catalog.add_resource zfs
zone = described_class.new(:name => "dummy",
:path => "/foo",
:ip => 'en1:1.0.0.0',
:dataset => fs,
:provider => :solaris)
catalog.add_resource zone
relationship_graph.populate_from(catalog)
relationship_graph.dependencies(zone).should == [zfs]
end
describe Puppet::Zone::StateMachine do
let (:sm) { Puppet::Zone::StateMachine.new }
before :each do
sm.insert_state :absent, :down => :destroy
sm.insert_state :configured, :up => :configure, :down => :uninstall
sm.insert_state :installed, :up => :install, :down => :stop
sm.insert_state :running, :up => :start
end
context ":insert_state" do
it "should insert state in correct order" do
sm.insert_state :dummy, :left => :right
sm.index(:dummy).should == 4
end
end
context ":alias_state" do
it "should alias state" do
sm.alias_state :dummy, :running
sm.name(:dummy).should == :running
end
end
context ":name" do
it "should get an aliased state correctly" do
sm.alias_state :dummy, :running
sm.name(:dummy).should == :running
end
it "should get an un aliased state correctly" do
sm.name(:dummy).should == :dummy
end
end
context ":index" do
it "should return the state index correctly" do
sm.insert_state :dummy, :left => :right
sm.index(:dummy).should == 4
end
end
context ":sequence" do
it "should correctly return the actions to reach state specified" do
sm.sequence(:absent, :running).map{|p|p[:up]}.should == [:configure,:install,:start]
end
it "should correctly return the actions to reach state specified(2)" do
sm.sequence(:running, :absent).map{|p|p[:down]}.should == [:stop, :uninstall, :destroy]
end
end
context ":cmp" do
it "should correctly compare state sequence values" do
sm.cmp?(:absent, :running).should == true
sm.cmp?(:running, :running).should == false
sm.cmp?(:running, :absent).should == false
end
end
end
end
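The sync behaviour covered by the new zone examples can be checked interactively. A minimal sketch, not part of the patch, assuming the puppet library is loadable; the :solaris provider is forced exactly as in the spec, so this does not need to run on Solaris:
````
require 'puppet'
zone = Puppet::Type.type(:zone).new(
  :name => 'dummy', :path => '/dummy', :provider => :solaris,
  :ip => 'if:1.2.3.4:2.3.4.5', :inherit => '/', :dataset => 'tank'
)
inherit = zone.property(:inherit)
# With a desired value present, a missing current value is out of sync;
# when nothing is desired, nil and :absent count as in sync.
puts inherit.insync?(:absent)   # => false
puts inherit.insync?(nil)       # => false
````
The examples above additionally establish that, for multi-valued zone properties, the comparison of current against desired values ignores ordering.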
diff --git a/spec/unit/type_spec.rb b/spec/unit/type_spec.rb
index 75a5a0768..8c250ee34 100755
--- a/spec/unit/type_spec.rb
+++ b/spec/unit/type_spec.rb
@@ -1,1094 +1,1114 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet_spec/compiler'
describe Puppet::Type, :unless => Puppet.features.microsoft_windows? do
include PuppetSpec::Files
include PuppetSpec::Compiler
it "should be Comparable" do
a = Puppet::Type.type(:notify).new(:name => "a")
b = Puppet::Type.type(:notify).new(:name => "b")
c = Puppet::Type.type(:notify).new(:name => "c")
[[a, b, c], [a, c, b], [b, a, c], [b, c, a], [c, a, b], [c, b, a]].each do |this|
this.sort.should == [a, b, c]
end
a.must be < b
a.must be < c
b.must be > a
b.must be < c
c.must be > a
c.must be > b
[a, b, c].each {|x| a.must be <= x }
[a, b, c].each {|x| c.must be >= x }
b.must be_between(a, c)
end
it "should consider a parameter to be valid if it is a valid parameter" do
Puppet::Type.type(:mount).should be_valid_parameter(:name)
end
it "should consider a parameter to be valid if it is a valid property" do
Puppet::Type.type(:mount).should be_valid_parameter(:fstype)
end
it "should consider a parameter to be valid if it is a valid metaparam" do
Puppet::Type.type(:mount).should be_valid_parameter(:noop)
end
it "should be able to retrieve a property by name" do
resource = Puppet::Type.type(:mount).new(:name => "foo", :fstype => "bar", :pass => 1, :ensure => :present)
resource.property(:fstype).must be_instance_of(Puppet::Type.type(:mount).attrclass(:fstype))
end
it "should be able to retrieve a parameter by name" do
resource = Puppet::Type.type(:mount).new(:name => "foo", :fstype => "bar", :pass => 1, :ensure => :present)
resource.parameter(:name).must be_instance_of(Puppet::Type.type(:mount).attrclass(:name))
end
it "should be able to retrieve a property by name using the :parameter method" do
resource = Puppet::Type.type(:mount).new(:name => "foo", :fstype => "bar", :pass => 1, :ensure => :present)
resource.parameter(:fstype).must be_instance_of(Puppet::Type.type(:mount).attrclass(:fstype))
end
it "should be able to retrieve all set properties" do
resource = Puppet::Type.type(:mount).new(:name => "foo", :fstype => "bar", :pass => 1, :ensure => :present)
props = resource.properties
props.should_not be_include(nil)
[:fstype, :ensure, :pass].each do |name|
props.should be_include(resource.parameter(name))
end
end
it "can retrieve all set parameters" do
resource = Puppet::Type.type(:mount).new(:name => "foo", :fstype => "bar", :pass => 1, :ensure => :present, :tag => 'foo')
params = resource.parameters_with_value
[:name, :provider, :ensure, :fstype, :pass, :dump, :target, :loglevel, :tag].each do |name|
params.should be_include(resource.parameter(name))
end
end
it "can not return any `nil` values when retrieving all set parameters" do
resource = Puppet::Type.type(:mount).new(:name => "foo", :fstype => "bar", :pass => 1, :ensure => :present, :tag => 'foo')
params = resource.parameters_with_value
params.should_not be_include(nil)
end
it "can return an iterator for all set parameters" do
resource = Puppet::Type.type(:notify).new(:name=>'foo',:message=>'bar',:tag=>'baz',:require=> "File['foo']")
params = [:name, :message, :withpath, :loglevel, :tag, :require]
resource.eachparameter { |param|
params.should be_include(param.to_s.to_sym)
}
end
it "should have a method for setting default values for resources" do
Puppet::Type.type(:mount).new(:name => "foo").must respond_to(:set_default)
end
it "should do nothing for attributes that have no defaults and no specified value" do
Puppet::Type.type(:mount).new(:name => "foo").parameter(:noop).should be_nil
end
it "should have a method for adding tags" do
Puppet::Type.type(:mount).new(:name => "foo").must respond_to(:tags)
end
it "should use the tagging module" do
Puppet::Type.type(:mount).ancestors.should be_include(Puppet::Util::Tagging)
end
it "should delegate to the tagging module when tags are added" do
resource = Puppet::Type.type(:mount).new(:name => "foo")
resource.stubs(:tag).with(:mount)
resource.expects(:tag).with(:tag1, :tag2)
resource.tags = [:tag1,:tag2]
end
it "should add the current type as tag" do
resource = Puppet::Type.type(:mount).new(:name => "foo")
resource.stubs(:tag)
resource.expects(:tag).with(:mount)
resource.tags = [:tag1,:tag2]
end
it "should have a method to know if the resource is exported" do
Puppet::Type.type(:mount).new(:name => "foo").must respond_to(:exported?)
end
it "should have a method to know if the resource is virtual" do
Puppet::Type.type(:mount).new(:name => "foo").must respond_to(:virtual?)
end
it "should consider its version to be zero if it has no catalog" do
Puppet::Type.type(:mount).new(:name => "foo").version.should == 0
end
it "reports the correct path even after path is used during setup of the type" do
Puppet::Type.newtype(:testing) do
newparam(:name) do
isnamevar
validate do |value|
path # forces the computation of the path
end
end
end
ral = compile_to_ral(<<-MANIFEST)
class something {
testing { something: }
}
include something
MANIFEST
ral.resource("Testing[something]").path.should == "/Stage[main]/Something/Testing[something]"
end
context "alias metaparam" do
it "creates a new name that can be used for resource references" do
ral = compile_to_ral(<<-MANIFEST)
notify { a: alias => c }
MANIFEST
expect(ral.resource("Notify[a]")).to eq(ral.resource("Notify[c]"))
end
end
context "resource attributes" do
let(:resource) {
resource = Puppet::Type.type(:mount).new(:name => "foo")
catalog = Puppet::Resource::Catalog.new
catalog.version = 50
catalog.add_resource resource
resource
}
it "should consider its version to be its catalog version" do
resource.version.should == 50
end
it "should have tags" do
expect(resource).to be_tagged("mount")
expect(resource).to be_tagged("foo")
end
it "should have a path" do
resource.path.should == "/Mount[foo]"
end
end
it "should consider its type to be the name of its class" do
Puppet::Type.type(:mount).new(:name => "foo").type.should == :mount
end
it "should use any provided noop value" do
Puppet::Type.type(:mount).new(:name => "foo", :noop => true).must be_noop
end
it "should use the global noop value if none is provided" do
Puppet[:noop] = true
Puppet::Type.type(:mount).new(:name => "foo").must be_noop
end
it "should not be noop if in a non-host_config catalog" do
resource = Puppet::Type.type(:mount).new(:name => "foo")
catalog = Puppet::Resource::Catalog.new
catalog.add_resource resource
resource.should_not be_noop
end
describe "when creating an event" do
before do
@resource = Puppet::Type.type(:mount).new :name => "foo"
end
it "should have the resource's reference as the resource" do
@resource.event.resource.should == "Mount[foo]"
end
it "should have the resource's log level as the default log level" do
@resource[:loglevel] = :warning
@resource.event.default_log_level.should == :warning
end
{:file => "/my/file", :line => 50}.each do |attr, value|
it "should set the #{attr}" do
@resource.stubs(attr).returns value
@resource.event.send(attr).should == value
end
end
it "should set the tags" do
@resource.tag("abc", "def")
@resource.event.should be_tagged("abc")
@resource.event.should be_tagged("def")
end
it "should allow specification of event attributes" do
@resource.event(:status => "noop").status.should == "noop"
end
end
describe "when creating a provider" do
before :each do
@type = Puppet::Type.newtype(:provider_test_type) do
newparam(:name) { isnamevar }
newparam(:foo)
newproperty(:bar)
end
end
after :each do
@type.provider_hash.clear
end
describe "when determining if instances of the type are managed" do
it "should not consider audit only resources to be managed" do
@type.new(:name => "foo", :audit => 'all').managed?.should be_false
end
it "should not consider resources with only parameters to be managed" do
@type.new(:name => "foo", :foo => 'did someone say food?').managed?.should be_false
end
it "should consider resources with any properties set to be managed" do
@type.new(:name => "foo", :bar => 'Let us all go there').managed?.should be_true
end
end
it "should have documentation for the 'provider' parameter if there are providers" do
@type.provide(:test_provider)
@type.paramdoc(:provider).should =~ /`provider_test_type`[\s\r]+resource/
end
it "should not have documentation for the 'provider' parameter if there are no providers" do
expect { @type.paramdoc(:provider) }.to raise_error(NoMethodError)
end
it "should create a subclass of Puppet::Provider for the provider" do
provider = @type.provide(:test_provider)
provider.ancestors.should include(Puppet::Provider)
end
it "should use a parent class if specified" do
parent_provider = @type.provide(:parent_provider)
child_provider = @type.provide(:child_provider, :parent => parent_provider)
child_provider.ancestors.should include(parent_provider)
end
it "should use a parent class if specified by name" do
parent_provider = @type.provide(:parent_provider)
child_provider = @type.provide(:child_provider, :parent => :parent_provider)
child_provider.ancestors.should include(parent_provider)
end
it "should raise an error when the parent class can't be found" do
expect {
@type.provide(:child_provider, :parent => :parent_provider)
}.to raise_error(Puppet::DevError, /Could not find parent provider.+parent_provider/)
end
it "should ensure its type has a 'provider' parameter" do
@type.provide(:test_provider)
@type.parameters.should include(:provider)
end
it "should remove a previously registered provider with the same name" do
old_provider = @type.provide(:test_provider)
new_provider = @type.provide(:test_provider)
old_provider.should_not equal(new_provider)
end
it "should register itself as a provider for the type" do
provider = @type.provide(:test_provider)
provider.should == @type.provider(:test_provider)
end
it "should create a provider when a provider with the same name previously failed" do
@type.provide(:test_provider) do
raise "failed to create this provider"
end rescue nil
provider = @type.provide(:test_provider)
provider.ancestors.should include(Puppet::Provider)
provider.should == @type.provider(:test_provider)
end
+
+ describe "with a parent class from another type" do
+ before :each do
+ @parent_type = Puppet::Type.newtype(:provider_parent_type) do
+ newparam(:name) { isnamevar }
+ end
+ @parent_provider = @parent_type.provide(:parent_provider)
+ end
+
+ it "should be created successfully" do
+ child_provider = @type.provide(:child_provider, :parent => @parent_provider)
+ child_provider.ancestors.should include(@parent_provider)
+ end
+
+ it "should be registered as a provider of the child type" do
+ child_provider = @type.provide(:child_provider, :parent => @parent_provider)
+ @type.providers.should include(:child_provider)
+ @parent_type.providers.should_not include(:child_provider)
+ end
+ end
end
describe "when choosing a default provider" do
it "should choose the provider with the highest specificity" do
# Make a fake type
type = Puppet::Type.newtype(:defaultprovidertest) do
newparam(:name) do end
end
basic = type.provide(:basic) {}
greater = type.provide(:greater) {}
basic.stubs(:specificity).returns 1
greater.stubs(:specificity).returns 2
type.defaultprovider.should equal(greater)
end
end
describe "when defining a parent on a newtype" do
it "prints a deprecation message" do
Puppet.expects(:deprecation_warning)
type = Puppet::Type.newtype(:test_with_parent, :parent => Puppet::Type) do
newparam(:name) do end
end
end
end
describe "when initializing" do
describe "and passed a Puppet::Resource instance" do
it "should set its title to the title of the resource if the resource type is equal to the current type" do
resource = Puppet::Resource.new(:mount, "/foo", :parameters => {:name => "/other"})
Puppet::Type.type(:mount).new(resource).title.should == "/foo"
end
it "should set its title to the resource reference if the resource type is not equal to the current type" do
resource = Puppet::Resource.new(:user, "foo")
Puppet::Type.type(:mount).new(resource).title.should == "User[foo]"
end
[:line, :file, :catalog, :exported, :virtual].each do |param|
it "should copy '#{param}' from the resource if present" do
resource = Puppet::Resource.new(:mount, "/foo")
resource.send(param.to_s + "=", "foo")
resource.send(param.to_s + "=", "foo")
Puppet::Type.type(:mount).new(resource).send(param).should == "foo"
end
end
it "should copy any tags from the resource" do
resource = Puppet::Resource.new(:mount, "/foo")
resource.tag "one", "two"
tags = Puppet::Type.type(:mount).new(resource).tags
tags.should be_include("one")
tags.should be_include("two")
end
it "should copy the resource's parameters as its own" do
resource = Puppet::Resource.new(:mount, "/foo", :parameters => {:atboot => :yes, :fstype => "boo"})
params = Puppet::Type.type(:mount).new(resource).to_hash
params[:fstype].should == "boo"
params[:atboot].should == :yes
end
end
describe "and passed a Hash" do
it "should extract the title from the hash" do
Puppet::Type.type(:mount).new(:title => "/yay").title.should == "/yay"
end
it "should work when hash keys are provided as strings" do
Puppet::Type.type(:mount).new("title" => "/yay").title.should == "/yay"
end
it "should work when hash keys are provided as symbols" do
Puppet::Type.type(:mount).new(:title => "/yay").title.should == "/yay"
end
it "should use the name from the hash as the title if no explicit title is provided" do
Puppet::Type.type(:mount).new(:name => "/yay").title.should == "/yay"
end
it "should use the Resource Type's namevar to determine how to find the name in the hash" do
yay = make_absolute('/yay')
Puppet::Type.type(:file).new(:path => yay).title.should == yay
end
[:catalog].each do |param|
it "should extract '#{param}' from the hash if present" do
Puppet::Type.type(:mount).new(:name => "/yay", param => "foo").send(param).should == "foo"
end
end
it "should use any remaining hash keys as its parameters" do
resource = Puppet::Type.type(:mount).new(:title => "/foo", :catalog => "foo", :atboot => :yes, :fstype => "boo")
resource[:fstype].must == "boo"
resource[:atboot].must == :yes
end
end
it "should fail if any invalid attributes have been provided" do
expect { Puppet::Type.type(:mount).new(:title => "/foo", :nosuchattr => "whatever") }.to raise_error(Puppet::Error, /Invalid parameter/)
end
context "when an attribute fails validation" do
it "should fail with Puppet::ResourceError when PuppetError raised" do
expect { Puppet::Type.type(:file).new(:title => "/foo", :source => "unknown:///") }.to raise_error(Puppet::ResourceError, /Parameter source failed on File\[.*foo\]/)
end
it "should fail with Puppet::ResourceError when ArgumentError raised" do
expect { Puppet::Type.type(:file).new(:title => "/foo", :mode => "abcdef") }.to raise_error(Puppet::ResourceError, /Parameter mode failed on File\[.*foo\]/)
end
it "should include the file/line in the error" do
Puppet::Type.type(:file).any_instance.stubs(:file).returns("example.pp")
Puppet::Type.type(:file).any_instance.stubs(:line).returns(42)
expect { Puppet::Type.type(:file).new(:title => "/foo", :source => "unknown:///") }.to raise_error(Puppet::ResourceError, /example.pp:42/)
end
end
it "should set its name to the resource's title if the resource does not have a :name or namevar parameter set" do
resource = Puppet::Resource.new(:mount, "/foo")
Puppet::Type.type(:mount).new(resource).name.should == "/foo"
end
it "should fail if no title, name, or namevar are provided" do
expect { Puppet::Type.type(:mount).new(:atboot => :yes) }.to raise_error(Puppet::Error)
end
it "should set the attributes in the order returned by the class's :allattrs method" do
Puppet::Type.type(:mount).stubs(:allattrs).returns([:name, :atboot, :noop])
resource = Puppet::Resource.new(:mount, "/foo", :parameters => {:name => "myname", :atboot => :yes, :noop => "whatever"})
set = []
Puppet::Type.type(:mount).any_instance.stubs(:newattr).with do |param, hash|
set << param
true
end.returns(stub_everything("a property"))
Puppet::Type.type(:mount).new(resource)
set[-1].should == :noop
set[-2].should == :atboot
end
it "should always set the name and then default provider before anything else" do
Puppet::Type.type(:mount).stubs(:allattrs).returns([:provider, :name, :atboot])
resource = Puppet::Resource.new(:mount, "/foo", :parameters => {:name => "myname", :atboot => :yes})
set = []
Puppet::Type.type(:mount).any_instance.stubs(:newattr).with do |param, hash|
set << param
true
end.returns(stub_everything("a property"))
Puppet::Type.type(:mount).new(resource)
set[0].should == :name
set[1].should == :provider
end
# This one is really hard to test :/
it "should set each default immediately if no value is provided" do
defaults = []
Puppet::Type.type(:service).any_instance.stubs(:set_default).with { |value| defaults << value; true }
Puppet::Type.type(:service).new :name => "whatever"
defaults[0].should == :provider
end
it "should retain a copy of the originally provided parameters" do
Puppet::Type.type(:mount).new(:name => "foo", :atboot => :yes, :noop => false).original_parameters.should == {:atboot => :yes, :noop => false}
end
it "should delete the name via the namevar from the originally provided parameters" do
Puppet::Type.type(:file).new(:name => make_absolute('/foo')).original_parameters[:path].should be_nil
end
context "when validating the resource" do
it "should call the type's validate method if present" do
Puppet::Type.type(:file).any_instance.expects(:validate)
Puppet::Type.type(:file).new(:name => make_absolute('/foo'))
end
it "should raise Puppet::ResourceError with resource name when Puppet::Error raised" do
expect do
Puppet::Type.type(:file).new(
:name => make_absolute('/foo'),
:source => "puppet:///",
:content => "foo"
)
end.to raise_error(Puppet::ResourceError, /Validation of File\[.*foo.*\]/)
end
it "should raise Puppet::ResourceError with manifest file and line on failure" do
Puppet::Type.type(:file).any_instance.stubs(:file).returns("example.pp")
Puppet::Type.type(:file).any_instance.stubs(:line).returns(42)
expect do
Puppet::Type.type(:file).new(
:name => make_absolute('/foo'),
:source => "puppet:///",
:content => "foo"
)
end.to raise_error(Puppet::ResourceError, /Validation.*example.pp:42/)
end
end
end
describe "when #finish is called on a type" do
let(:post_hook_type) do
Puppet::Type.newtype(:finish_test) do
newparam(:name) { isnamevar }
newparam(:post) do
def post_compile
raise "post_compile hook ran"
end
end
end
end
let(:post_hook_resource) do
post_hook_type.new(:name => 'foo',:post => 'fake_value')
end
it "should call #post_compile on parameters that implement it" do
expect { post_hook_resource.finish }.to raise_error(RuntimeError, "post_compile hook ran")
end
end
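For readers unfamiliar with the post_compile hook used just above, a minimal sketch, not part of the patch; the :finish_demo type name is hypothetical and only illustrates the mechanism:
````
require 'puppet'
# Hypothetical type: a parameter that defines post_compile has the hook
# invoked when #finish is called on the resource.
Puppet::Type.newtype(:finish_demo) do
  newparam(:name) { isnamevar }
  newparam(:post) do
    def post_compile
      Puppet.notice "post_compile hook ran"
    end
  end
end
resource = Puppet::Type.type(:finish_demo).new(:name => 'demo', :post => 'x')
resource.finish   # invokes post_compile on the :post parameter
````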
it "should have a class method for converting a hash into a Puppet::Resource instance" do
Puppet::Type.type(:mount).must respond_to(:hash2resource)
end
describe "when converting a hash to a Puppet::Resource instance" do
before do
@type = Puppet::Type.type(:mount)
end
it "should treat a :title key as the title of the resource" do
@type.hash2resource(:name => "/foo", :title => "foo").title.should == "foo"
end
it "should use the name from the hash as the title if no explicit title is provided" do
@type.hash2resource(:name => "foo").title.should == "foo"
end
it "should use the Resource Type's namevar to determine how to find the name in the hash" do
@type.stubs(:key_attributes).returns([ :myname ])
@type.hash2resource(:myname => "foo").title.should == "foo"
end
[:catalog].each do |attr|
it "should use any provided #{attr}" do
@type.hash2resource(:name => "foo", attr => "eh").send(attr).should == "eh"
end
end
it "should set all provided parameters on the resource" do
@type.hash2resource(:name => "foo", :fstype => "boo", :boot => "fee").to_hash.should == {:name => "foo", :fstype => "boo", :boot => "fee"}
end
it "should not set the title as a parameter on the resource" do
@type.hash2resource(:name => "foo", :title => "eh")[:title].should be_nil
end
it "should not set the catalog as a parameter on the resource" do
@type.hash2resource(:name => "foo", :catalog => "eh")[:catalog].should be_nil
end
it "should treat hash keys equivalently whether provided as strings or symbols" do
resource = @type.hash2resource("name" => "foo", "title" => "eh", "fstype" => "boo")
resource.title.should == "eh"
resource[:name].should == "foo"
resource[:fstype].should == "boo"
end
end
describe "when retrieving current property values" do
before do
@resource = Puppet::Type.type(:mount).new(:name => "foo", :fstype => "bar", :pass => 1, :ensure => :present)
@resource.property(:ensure).stubs(:retrieve).returns :absent
end
it "should fail if its provider is unsuitable" do
@resource = Puppet::Type.type(:mount).new(:name => "foo", :fstype => "bar", :pass => 1, :ensure => :present)
@resource.provider.class.expects(:suitable?).returns false
expect { @resource.retrieve_resource }.to raise_error(Puppet::Error)
end
it "should return a Puppet::Resource instance with its type and title set appropriately" do
result = @resource.retrieve_resource
result.should be_instance_of(Puppet::Resource)
result.type.should == "Mount"
result.title.should == "foo"
end
it "should set the name of the returned resource if its own name and title differ" do
@resource[:name] = "myname"
@resource.title = "other name"
@resource.retrieve_resource[:name].should == "myname"
end
it "should provide a value for all set properties" do
values = @resource.retrieve_resource
[:ensure, :fstype, :pass].each { |property| values[property].should_not be_nil }
end
it "should provide a value for 'ensure' even if no desired value is provided" do
@resource = Puppet::Type.type(:file).new(:path => make_absolute("/my/file/that/can't/exist"))
end
it "should not call retrieve on non-ensure properties if the resource is absent and should consider the property absent" do
@resource.property(:ensure).expects(:retrieve).returns :absent
@resource.property(:fstype).expects(:retrieve).never
@resource.retrieve_resource[:fstype].should == :absent
end
it "should include the result of retrieving each property's current value if the resource is present" do
@resource.property(:ensure).expects(:retrieve).returns :present
@resource.property(:fstype).expects(:retrieve).returns 15
@resource.retrieve_resource[:fstype].should == 15
end
end
describe "#to_resource" do
it "should return a Puppet::Resource that includes properties, parameters and tags" do
type_resource = Puppet::Type.type(:mount).new(
:ensure => :present,
:name => "foo",
:fstype => "bar",
:remounts => true
)
type_resource.tags = %w{bar baz}
# If it's not a property it's a parameter
type_resource.parameters[:remounts].should_not be_a(Puppet::Property)
type_resource.parameters[:fstype].is_a?(Puppet::Property).should be_true
type_resource.property(:ensure).expects(:retrieve).returns :present
type_resource.property(:fstype).expects(:retrieve).returns 15
resource = type_resource.to_resource
resource.should be_a Puppet::Resource
resource[:fstype].should == 15
resource[:remounts].should == :true
resource.tags.should == Puppet::Util::TagSet.new(%w{foo bar baz mount})
end
end
describe ".title_patterns" do
describe "when there's one namevar" do
before do
@type_class = Puppet::Type.type(:notify)
@type_class.stubs(:key_attributes).returns([:one])
end
it "should have a default pattern for when there's one namevar" do
patterns = @type_class.title_patterns
patterns.length.should == 1
patterns[0].length.should == 2
end
it "should have a regexp that captures the entire string" do
patterns = @type_class.title_patterns
string = "abc\n\tdef"
patterns[0][0] =~ string
$1.should == "abc\n\tdef"
end
end
end
describe "when in a catalog" do
before do
@catalog = Puppet::Resource::Catalog.new
@container = Puppet::Type.type(:component).new(:name => "container")
@one = Puppet::Type.type(:file).new(:path => make_absolute("/file/one"))
@two = Puppet::Type.type(:file).new(:path => make_absolute("/file/two"))
@catalog.add_resource @container
@catalog.add_resource @one
@catalog.add_resource @two
@catalog.add_edge @container, @one
@catalog.add_edge @container, @two
end
it "should have no parent if there is no in edge" do
@container.parent.should be_nil
end
it "should set its parent to its in edge" do
@one.parent.ref.should == @container.ref
end
after do
@catalog.clear(true)
end
end
it "should have a 'stage' metaparam" do
Puppet::Type.metaparamclass(:stage).should be_instance_of(Class)
end
describe "#suitable?" do
let(:type) { Puppet::Type.type(:file) }
let(:resource) { type.new :path => tmpfile('suitable') }
let(:provider) { resource.provider }
it "should be suitable if its type doesn't use providers" do
type.stubs(:paramclass).with(:provider).returns nil
resource.must be_suitable
end
it "should be suitable if it has a provider which is suitable" do
resource.must be_suitable
end
it "should not be suitable if it has a provider which is not suitable" do
provider.class.stubs(:suitable?).returns false
resource.should_not be_suitable
end
it "should be suitable if it does not have a provider and there is a default provider" do
resource.stubs(:provider).returns nil
resource.must be_suitable
end
it "should not be suitable if it doesn't have a provider and there is not default provider" do
resource.stubs(:provider).returns nil
type.stubs(:defaultprovider).returns nil
resource.should_not be_suitable
end
end
describe "::instances" do
after :each do Puppet::Type.rmtype(:type_spec_fake_type) end
let :type do
Puppet::Type.newtype(:type_spec_fake_type) do
newparam(:name) do
isnamevar
end
newproperty(:prop1) {}
end
Puppet::Type.type(:type_spec_fake_type)
end
it "should not fail if no suitable providers are found" do
type.provide(:fake1) do
confine :exists => '/no/such/file'
mk_resource_methods
end
expect { type.instances.should == [] }.to_not raise_error
end
context "with a default provider" do
before :each do
type.provide(:default) do
defaultfor :operatingsystem => Facter.value(:operatingsystem)
mk_resource_methods
class << self
attr_accessor :names
end
def self.instance(name)
new(:name => name, :ensure => :present)
end
def self.instances
@instances ||= names.collect { |name| instance(name.to_s) }
end
@names = [:one, :two]
end
end
it "should return only instances of the type" do
type.instances.should be_all {|x| x.is_a? type }
end
it "should return instances from the default provider" do
type.instances.map(&:name).should == ["one", "two"]
end
it "should return instances from all providers" do
type.provide(:fake1, :parent => :default) { @names = [:three, :four] }
type.instances.map(&:name).should == ["one", "two", "three", "four"]
end
it "should not return instances from unsuitable providers" do
type.provide(:fake1, :parent => :default) do
@names = [:three, :four]
confine :exists => "/no/such/file"
end
type.instances.map(&:name).should == ["one", "two"]
end
end
end
describe "::ensurable?" do
before :each do
class TestEnsurableType < Puppet::Type
def exists?; end
def create; end
def destroy; end
end
end
it "is true if the class has exists?, create, and destroy methods defined" do
TestEnsurableType.should be_ensurable
end
it "is false if exists? is not defined" do
TestEnsurableType.class_eval { remove_method(:exists?) }
TestEnsurableType.should_not be_ensurable
end
it "is false if create is not defined" do
TestEnsurableType.class_eval { remove_method(:create) }
TestEnsurableType.should_not be_ensurable
end
it "is false if destroy is not defined" do
TestEnsurableType.class_eval { remove_method(:destroy) }
TestEnsurableType.should_not be_ensurable
end
end
end
describe Puppet::Type::RelationshipMetaparam do
include PuppetSpec::Files
it "should be a subclass of Puppet::Parameter" do
Puppet::Type::RelationshipMetaparam.superclass.should equal(Puppet::Parameter)
end
it "should be able to produce a list of subclasses" do
Puppet::Type::RelationshipMetaparam.should respond_to(:subclasses)
end
describe "when munging relationships" do
before do
@path = File.expand_path('/foo')
@resource = Puppet::Type.type(:file).new :name => @path
@metaparam = Puppet::Type.metaparamclass(:require).new :resource => @resource
end
it "should accept Puppet::Resource instances" do
ref = Puppet::Resource.new(:file, @path)
@metaparam.munge(ref)[0].should equal(ref)
end
it "should turn any string into a Puppet::Resource" do
@metaparam.munge("File[/ref]")[0].should be_instance_of(Puppet::Resource)
end
end
it "should be able to validate relationships" do
Puppet::Type.metaparamclass(:require).new(:resource => mock("resource")).should respond_to(:validate_relationship)
end
describe 'if any specified resource is not in the catalog' do
let(:catalog) { mock 'catalog' }
let(:resource) do
stub 'resource',
:catalog => catalog,
:ref => 'resource',
:line= => nil,
:file= => nil
end
let(:param) { Puppet::Type.metaparamclass(:require).new(:resource => resource, :value => %w{Foo[bar] Class[test]}) }
before do
catalog.expects(:resource).with("Foo[bar]").returns "something"
catalog.expects(:resource).with("Class[Test]").returns nil
end
describe "and the resource doesn't have a file or line number" do
it "raises an error" do
expect { param.validate_relationship }.to raise_error do |error|
error.should be_a Puppet::ResourceError
error.message.should match %r[Class\[Test\]]
end
end
end
describe "and the resource has a file or line number" do
before do
resource.stubs(:line).returns '42'
resource.stubs(:file).returns '/hitchhikers/guide/to/the/galaxy'
end
it "raises an error with context" do
expect { param.validate_relationship }.to raise_error do |error|
error.should be_a Puppet::ResourceError
error.message.should match %r[Class\[Test\]]
error.message.should match %r["in /hitchhikers/guide/to/the/galaxy:42"]
end
end
end
end
end
describe Puppet::Type.metaparamclass(:audit) do
include PuppetSpec::Files
before do
@resource = Puppet::Type.type(:file).new :path => make_absolute('/foo')
end
it "should default to being nil" do
@resource[:audit].should be_nil
end
it "should specify all possible properties when asked to audit all properties" do
@resource[:audit] = :all
list = @resource.class.properties.collect { |p| p.name }
@resource[:audit].should == list
end
it "should accept the string 'all' to specify auditing all possible properties" do
@resource[:audit] = 'all'
list = @resource.class.properties.collect { |p| p.name }
@resource[:audit].should == list
end
it "should fail if asked to audit an invalid property" do
expect { @resource[:audit] = :foobar }.to raise_error(Puppet::Error)
end
it "should create an attribute instance for each auditable property" do
@resource[:audit] = :mode
@resource.parameter(:mode).should_not be_nil
end
it "should accept properties specified as a string" do
@resource[:audit] = "mode"
@resource.parameter(:mode).should_not be_nil
end
it "should not create attribute instances for parameters, only properties" do
@resource[:audit] = :noop
@resource.parameter(:noop).should be_nil
end
describe "when generating the uniqueness key" do
it "should include all of the key_attributes in alphabetical order by attribute name" do
Puppet::Type.type(:file).stubs(:key_attributes).returns [:path, :mode, :owner]
Puppet::Type.type(:file).stubs(:title_patterns).returns(
[ [ /(.*)/, [ [:path, lambda{|x| x} ] ] ] ]
)
myfile = make_absolute('/my/file')
res = Puppet::Type.type(:file).new( :title => myfile, :path => myfile, :owner => 'root', :content => 'hello' )
res.uniqueness_key.should == [ nil, 'root', myfile]
end
end
context "type attribute bracket methods" do
after :each do Puppet::Type.rmtype(:attributes) end
let :type do
Puppet::Type.newtype(:attributes) do
newparam(:name) {}
end
end
it "should work with parameters" do
type.newparam(:param) {}
instance = type.new(:name => 'test')
expect { instance[:param] = true }.to_not raise_error
expect { instance["param"] = true }.to_not raise_error
instance[:param].should == true
instance["param"].should == true
end
it "should work with meta-parameters" do
instance = type.new(:name => 'test')
expect { instance[:noop] = true }.to_not raise_error
expect { instance["noop"] = true }.to_not raise_error
instance[:noop].should == true
instance["noop"].should == true
end
it "should work with properties" do
type.newproperty(:property) {}
instance = type.new(:name => 'test')
expect { instance[:property] = true }.to_not raise_error
expect { instance["property"] = true }.to_not raise_error
instance.property(:property).must be
instance.should(:property).must be_true
end
it "should handle proprieties correctly" do
# Order of assignment is significant in this test.
props = {}
[:one, :two, :three].each {|prop| type.newproperty(prop) {} }
instance = type.new(:name => "test")
instance[:one] = "boo"
one = instance.property(:one)
instance.properties.must == [one]
instance[:three] = "rah"
three = instance.property(:three)
instance.properties.must == [one, three]
instance[:two] = "whee"
two = instance.property(:two)
instance.properties.must == [one, two, three]
end
it "newattr should handle required features correctly" do
Puppet::Util::Log.level = :debug
type.feature :feature1, "one"
type.feature :feature2, "two"
none = type.newproperty(:none) {}
one = type.newproperty(:one, :required_features => :feature1) {}
two = type.newproperty(:two, :required_features => [:feature1, :feature2]) {}
nope = type.provide(:nope) {}
maybe = type.provide(:maybe) { has_features :feature1 }
yep = type.provide(:yep) { has_features :feature1, :feature2 }
[nope, maybe, yep].each_with_index do |provider, i|
rsrc = type.new(:provider => provider.name, :name => "test#{i}",
:none => "a", :one => "b", :two => "c")
rsrc.should(:none).must be
if provider.declared_feature? :feature1
rsrc.should(:one).must be
else
rsrc.should(:one).must_not be
@logs.find {|l| l.message =~ /not managing attribute one/ }.should be
end
if provider.declared_feature? :feature2
rsrc.should(:two).must be
else
rsrc.should(:two).must_not be
@logs.find {|l| l.message =~ /not managing attribute two/ }.should be
end
end
end
end
end
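The fixture type used in the "::instances" examples above gives a compact picture of the type/provider DSL these specs exercise; the sketch below restates it as standalone code. The type and provider names are illustrative, and it assumes a loadable Puppet environment; the DSL calls themselves (newtype, newparam, newproperty, provide, defaultfor, mk_resource_methods) are the ones used in the fixture.
````
require 'puppet'

# Minimal type definition, mirroring the :type_spec_fake_type fixture above.
Puppet::Type.newtype(:example_thing) do
  newparam(:name) do
    isnamevar
  end
  newproperty(:prop1) {}
end

# A default provider whose self.instances feeds Type.instances, as the
# "::instances" examples expect.
Puppet::Type.type(:example_thing).provide(:default_provider) do
  defaultfor :operatingsystem => Facter.value(:operatingsystem)
  mk_resource_methods

  def self.instances
    [new(:name => 'one', :ensure => :present),
     new(:name => 'two', :ensure => :present)]
  end
end

Puppet::Type.type(:example_thing).instances.map(&:name)  # => ["one", "two"]
````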
diff --git a/spec/unit/util/colors_spec.rb b/spec/unit/util/colors_spec.rb
index 7407b628b..b0f791f82 100755
--- a/spec/unit/util/colors_spec.rb
+++ b/spec/unit/util/colors_spec.rb
@@ -1,83 +1,89 @@
#!/usr/bin/env ruby
require 'spec_helper'
describe Puppet::Util::Colors do
include Puppet::Util::Colors
let (:message) { 'a message' }
let (:color) { :black }
let (:subject) { self }
describe ".console_color" do
it { should respond_to :console_color }
it "should generate ANSI escape sequences" do
subject.console_color(color, message).should == "\e[0;30m#{message}\e[0m"
end
end
describe ".html_color" do
it { should respond_to :html_color }
it "should generate an HTML span element and style attribute" do
subject.html_color(color, message).should =~ /<span style=\"color: #FFA0A0\">#{message}<\/span>/
end
end
describe ".colorize" do
it { should respond_to :colorize }
context "ansicolor supported" do
before :each do
subject.stubs(:console_has_color?).returns(true)
end
it "should colorize console output" do
Puppet[:color] = true
subject.expects(:console_color).with(color, message)
subject.colorize(:black, message)
end
it "should not colorize unknown color schemes" do
Puppet[:color] = :thisisanunknownscheme
subject.colorize(:black, message).should == message
end
end
context "ansicolor not supported" do
before :each do
subject.stubs(:console_has_color?).returns(false)
end
it "should not colorize console output" do
Puppet[:color] = true
subject.expects(:console_color).never
subject.colorize(:black, message).should == message
end
it "should colorize html output" do
Puppet[:color] = :html
subject.expects(:html_color).with(color, message)
subject.colorize(color, message)
end
end
end
- describe "on Windows", :if => Puppet.features.microsoft_windows? do
- it "expects a trailing embedded NULL character in the wide string" do
- message = "hello"
+ context "on Windows in Ruby 1.x", :if => Puppet.features.microsoft_windows? && RUBY_VERSION =~ /^1./ do
+ it "should define WideConsole" do
+ expect(defined?(Puppet::Util::Colors::WideConsole)).to be_true
+ end
- console = Puppet::Util::Colors::WideConsole.new
- wstr, nchars = console.string_encode(message)
+ it "should define WideIO" do
+ expect(defined?(Puppet::Util::Colors::WideIO)).to be_true
+ end
+ end
- expect(nchars).to eq(message.length)
+ context "on Windows in Ruby 2.x", :if => Puppet.features.microsoft_windows? && RUBY_VERSION =~ /^2./ do
+ it "should not define WideConsole" do
+ expect(defined?(Puppet::Util::Colors::WideConsole)).to be_false
+ end
- expect(wstr.length).to eq(nchars + 1)
- expect(wstr[-1].ord).to be_zero
+ it "should not define WideIO" do
+ expect(defined?(Puppet::Util::Colors::WideIO)).to be_false
end
end
end
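For readers skimming the colour expectations above: the behaviour they pin down amounts to wrapping a message in an ANSI escape for the console and a styled span for HTML. The snippet below is a plain-Ruby illustration of those exact strings, not the Puppet implementation.
````
# Illustration only: the escape codes and span format come straight from the
# expectations in this spec (:black console => "\e[0;30m", reset => "\e[0m",
# :black HTML style => "color: #FFA0A0").
message = 'a message'

console = "\e[0;30m#{message}\e[0m"                           # console_color(:black, message)
html    = %Q{<span style="color: #FFA0A0">#{message}</span>}  # html_color(:black, message)

puts console.inspect
puts html
````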
diff --git a/spec/unit/util/command_line_spec.rb b/spec/unit/util/command_line_spec.rb
index 6ba8077c2..9eb61b077 100755
--- a/spec/unit/util/command_line_spec.rb
+++ b/spec/unit/util/command_line_spec.rb
@@ -1,188 +1,192 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet/face'
require 'puppet/util/command_line'
describe Puppet::Util::CommandLine do
include PuppetSpec::Files
context "#initialize" do
it "should pull off the first argument if it looks like a subcommand" do
command_line = Puppet::Util::CommandLine.new("puppet", %w{ client --help whatever.pp })
command_line.subcommand_name.should == "client"
command_line.args.should == %w{ --help whatever.pp }
end
it "should return nil if the first argument looks like a .pp file" do
command_line = Puppet::Util::CommandLine.new("puppet", %w{ whatever.pp })
command_line.subcommand_name.should == nil
command_line.args.should == %w{ whatever.pp }
end
it "should return nil if the first argument looks like a .rb file" do
command_line = Puppet::Util::CommandLine.new("puppet", %w{ whatever.rb })
command_line.subcommand_name.should == nil
command_line.args.should == %w{ whatever.rb }
end
it "should return nil if the first argument looks like a flag" do
command_line = Puppet::Util::CommandLine.new("puppet", %w{ --debug })
command_line.subcommand_name.should == nil
command_line.args.should == %w{ --debug }
end
it "should return nil if the first argument is -" do
command_line = Puppet::Util::CommandLine.new("puppet", %w{ - })
command_line.subcommand_name.should == nil
command_line.args.should == %w{ - }
end
it "should return nil if the first argument is --help" do
command_line = Puppet::Util::CommandLine.new("puppet", %w{ --help })
command_line.subcommand_name.should == nil
end
it "should return nil if there are no arguments" do
command_line = Puppet::Util::CommandLine.new("puppet", [])
command_line.subcommand_name.should == nil
command_line.args.should == []
end
it "should pick up changes to the array of arguments" do
args = %w{subcommand}
command_line = Puppet::Util::CommandLine.new("puppet", args)
args[0] = 'different_subcommand'
command_line.subcommand_name.should == 'different_subcommand'
end
end
context "#execute" do
%w{--version -V}.each do |arg|
it "should print the version and exit if #{arg} is given" do
expect do
described_class.new("puppet", [arg]).execute
- end.to have_printed(/^#{Puppet.version}$/)
+ end.to have_printed(/^#{Regexp.escape(Puppet.version)}$/)
end
end
end
describe "when dealing with puppet commands" do
it "should return the executable name if it is not puppet" do
command_line = Puppet::Util::CommandLine.new("puppetmasterd", [])
command_line.subcommand_name.should == "puppetmasterd"
end
describe "when the subcommand is not implemented" do
it "should find and invoke an executable with a hyphenated name" do
commandline = Puppet::Util::CommandLine.new("puppet", ['whatever', 'argument'])
Puppet::Util.expects(:which).with('puppet-whatever').
returns('/dev/null/puppet-whatever')
Kernel.expects(:exec).with('/dev/null/puppet-whatever', 'argument')
commandline.execute
end
describe "and an external implementation cannot be found" do
+ before :each do
+ Puppet::Util::CommandLine::UnknownSubcommand.any_instance.stubs(:console_has_color?).returns false
+ end
+
it "should abort and show the usage message" do
- commandline = Puppet::Util::CommandLine.new("puppet", ['whatever', 'argument'])
Puppet::Util.expects(:which).with('puppet-whatever').returns(nil)
+ commandline = Puppet::Util::CommandLine.new("puppet", ['whatever', 'argument'])
commandline.expects(:exec).never
expect {
commandline.execute
- }.to have_printed(/Unknown Puppet subcommand 'whatever'/)
+ }.to have_printed(/Unknown Puppet subcommand 'whatever'/).and_exit_with(1)
end
it "should abort and show the help message" do
- commandline = Puppet::Util::CommandLine.new("puppet", ['whatever', 'argument'])
Puppet::Util.expects(:which).with('puppet-whatever').returns(nil)
+ commandline = Puppet::Util::CommandLine.new("puppet", ['whatever', 'argument'])
commandline.expects(:exec).never
expect {
commandline.execute
- }.to have_printed(/See 'puppet help' for help on available puppet subcommands/)
+ }.to have_printed(/See 'puppet help' for help on available puppet subcommands/).and_exit_with(1)
end
%w{--version -V}.each do |arg|
it "should abort and display #{arg} information" do
- commandline = Puppet::Util::CommandLine.new("puppet", ['whatever', arg])
Puppet::Util.expects(:which).with('puppet-whatever').returns(nil)
+ commandline = Puppet::Util::CommandLine.new("puppet", ['whatever', arg])
commandline.expects(:exec).never
expect {
commandline.execute
- }.to have_printed(/^#{Puppet.version}$/)
+ }.to have_printed(%r[^#{Regexp.escape(Puppet.version)}$]).and_exit_with(1)
end
end
end
end
describe 'when loading commands' do
it "should deprecate the available_subcommands instance method" do
Puppet::Application.expects(:available_application_names)
Puppet.expects(:deprecation_warning).with("Puppet::Util::CommandLine#available_subcommands is deprecated; please use Puppet::Application.available_application_names instead.")
command_line = Puppet::Util::CommandLine.new("foo", %w{ client --help whatever.pp })
command_line.available_subcommands
end
it "should deprecate the available_subcommands class method" do
Puppet::Application.expects(:available_application_names)
Puppet.expects(:deprecation_warning).with("Puppet::Util::CommandLine.available_subcommands is deprecated; please use Puppet::Application.available_application_names instead.")
Puppet::Util::CommandLine.available_subcommands
end
end
describe 'when setting process priority' do
let(:command_line) do
Puppet::Util::CommandLine.new("puppet", %w{ agent })
end
before :each do
Puppet::Util::CommandLine::ApplicationSubcommand.any_instance.stubs(:run)
end
it 'should never set priority by default' do
Process.expects(:setpriority).never
command_line.execute
end
it 'should lower the process priority if one has been specified' do
Puppet[:priority] = 10
Process.expects(:setpriority).with(0, Process.pid, 10)
command_line.execute
end
it 'should warn if trying to raise priority, but not privileged user' do
Puppet[:priority] = -10
Process.expects(:setpriority).raises(Errno::EACCES, 'Permission denied')
Puppet.expects(:warning).with("Failed to set process priority to '-10'")
command_line.execute
end
it "should warn if the platform doesn't support `Process.setpriority`" do
Puppet[:priority] = 15
Process.expects(:setpriority).raises(NotImplementedError, 'NotImplementedError: setpriority() function is unimplemented on this machine')
Puppet.expects(:warning).with("Failed to set process priority to '15'")
command_line.execute
end
end
end
end
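Taken together, the "#initialize" examples above describe a simple rule for picking a subcommand; the sketch below summarises that rule as a usage example (it assumes a loaded Puppet environment, and the argument values are illustrative).
````
require 'puppet'
require 'puppet/util/command_line'

# The first argument becomes the subcommand unless it looks like a manifest
# (.pp), a Ruby file (.rb), a flag (starts with "-"), or is missing entirely.
cli = Puppet::Util::CommandLine.new("puppet", %w{agent --verbose})
cli.subcommand_name  # => "agent"
cli.args             # => ["--verbose"]

Puppet::Util::CommandLine.new("puppet", %w{site.pp}).subcommand_name  # => nil
Puppet::Util::CommandLine.new("puppet", %w{--debug}).subcommand_name  # => nil
````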
diff --git a/spec/unit/util/execution_spec.rb b/spec/unit/util/execution_spec.rb
index 7c6238f9f..6a4bee490 100755
--- a/spec/unit/util/execution_spec.rb
+++ b/spec/unit/util/execution_spec.rb
@@ -1,637 +1,630 @@
#! /usr/bin/env ruby
require 'spec_helper'
+require 'puppet/file_system/uniquefile'
describe Puppet::Util::Execution do
include Puppet::Util::Execution
- # utility method to help deal with some windows vs. unix differences
- def process_status(exitstatus)
- return exitstatus if Puppet.features.microsoft_windows?
-
- stub('child_status', :exitstatus => exitstatus)
- end
-
# utility methods to help us test some private methods without being quite so verbose
def call_exec_posix(command, arguments, stdin, stdout, stderr)
Puppet::Util::Execution.send(:execute_posix, command, arguments, stdin, stdout, stderr)
end
def call_exec_windows(command, arguments, stdin, stdout, stderr)
Puppet::Util::Execution.send(:execute_windows, command, arguments, stdin, stdout, stderr)
end
describe "execution methods" do
let(:pid) { 5501 }
let(:process_handle) { 0xDEADBEEF }
let(:thread_handle) { 0xCAFEBEEF }
let(:proc_info_stub) { stub 'processinfo', :process_handle => process_handle, :thread_handle => thread_handle, :process_id => pid}
let(:null_file) { Puppet.features.microsoft_windows? ? 'NUL' : '/dev/null' }
def stub_process_wait(exitstatus)
if Puppet.features.microsoft_windows?
Puppet::Util::Windows::Process.stubs(:wait_process).with(process_handle).returns(exitstatus)
- Process.stubs(:CloseHandle).with(process_handle)
- Process.stubs(:CloseHandle).with(thread_handle)
+ FFI::WIN32.stubs(:CloseHandle).with(process_handle)
+ FFI::WIN32.stubs(:CloseHandle).with(thread_handle)
else
Process.stubs(:waitpid2).with(pid).returns([pid, stub('child_status', :exitstatus => exitstatus)])
end
end
describe "#execute_posix (stubs)", :unless => Puppet.features.microsoft_windows? do
before :each do
# Most of the things this method does are bad to do during specs. :/
Kernel.stubs(:fork).returns(pid).yields
Process.stubs(:setsid)
Kernel.stubs(:exec)
Puppet::Util::SUIDManager.stubs(:change_user)
Puppet::Util::SUIDManager.stubs(:change_group)
# ensure that we don't really close anything!
(0..256).each {|n| IO.stubs(:new) }
$stdin.stubs(:reopen)
$stdout.stubs(:reopen)
$stderr.stubs(:reopen)
@stdin = File.open(null_file, 'r')
- @stdout = Tempfile.new('stdout')
+ @stdout = Puppet::FileSystem::Uniquefile.new('stdout')
@stderr = File.open(null_file, 'w')
# there is a danger here that ENV will be modified by exec_posix. Normally it would only affect the ENV
# of a forked process, but here, we're stubbing Kernel.fork, so the method has the ability to override the
# "real" ENV. To guard against this, we'll capture a snapshot of ENV before each test.
@saved_env = ENV.to_hash
# Now, we're going to effectively "mock" the magic ruby 'ENV' variable by creating a local definition of it
# inside of the module we're testing.
Puppet::Util::Execution::ENV = {}
end
after :each do
# And here we remove our "mock" version of 'ENV', which will allow us to validate that the real ENV has been
# left unharmed.
Puppet::Util::Execution.send(:remove_const, :ENV)
# capture the current environment and make sure it's the same as it was before the test
cur_env = ENV.to_hash
# we will get some fairly useless output if we just use the raw == operator on the hashes here, so we'll
# be a bit more explicit and laborious in the name of making the error more useful...
@saved_env.each_pair { |key,val| cur_env[key].should == val }
(cur_env.keys - @saved_env.keys).should == []
end
it "should fork a child process to execute the command" do
Kernel.expects(:fork).returns(pid).yields
Kernel.expects(:exec).with('test command')
call_exec_posix('test command', {}, @stdin, @stdout, @stderr)
end
it "should start a new session group" do
Process.expects(:setsid)
call_exec_posix('test command', {}, @stdin, @stdout, @stderr)
end
it "should permanently change to the correct user and group if specified" do
Puppet::Util::SUIDManager.expects(:change_group).with(55, true)
Puppet::Util::SUIDManager.expects(:change_user).with(50, true)
call_exec_posix('test command', {:uid => 50, :gid => 55}, @stdin, @stdout, @stderr)
end
it "should exit failure if there is a problem execing the command" do
Kernel.expects(:exec).with('test command').raises("failed to execute!")
Puppet::Util::Execution.stubs(:puts)
Puppet::Util::Execution.expects(:exit!).with(1)
call_exec_posix('test command', {}, @stdin, @stdout, @stderr)
end
it "should properly execute commands specified as arrays" do
Kernel.expects(:exec).with('test command', 'with', 'arguments')
call_exec_posix(['test command', 'with', 'arguments'], {:uid => 50, :gid => 55}, @stdin, @stdout, @stderr)
end
it "should properly execute string commands with embedded newlines" do
Kernel.expects(:exec).with("/bin/echo 'foo' ; \n /bin/echo 'bar' ;")
call_exec_posix("/bin/echo 'foo' ; \n /bin/echo 'bar' ;", {:uid => 50, :gid => 55}, @stdin, @stdout, @stderr)
end
it "should return the pid of the child process" do
call_exec_posix('test command', {}, @stdin, @stdout, @stderr).should == pid
end
end
describe "#execute_windows (stubs)", :if => Puppet.features.microsoft_windows? do
before :each do
Process.stubs(:create).returns(proc_info_stub)
stub_process_wait(0)
@stdin = File.open(null_file, 'r')
- @stdout = Tempfile.new('stdout')
+ @stdout = Puppet::FileSystem::Uniquefile.new('stdout')
@stderr = File.open(null_file, 'w')
end
it "should create a new process for the command" do
Process.expects(:create).with(
:command_line => "test command",
:startup_info => {:stdin => @stdin, :stdout => @stdout, :stderr => @stderr},
:close_handles => false
).returns(proc_info_stub)
call_exec_windows('test command', {}, @stdin, @stdout, @stderr)
end
it "should return the process info of the child process" do
call_exec_windows('test command', {}, @stdin, @stdout, @stderr).should == proc_info_stub
end
it "should quote arguments containing spaces if command is specified as an array" do
Process.expects(:create).with do |args|
args[:command_line] == '"test command" with some "arguments \"with spaces"'
end.returns(proc_info_stub)
call_exec_windows(['test command', 'with', 'some', 'arguments "with spaces'], {}, @stdin, @stdout, @stderr)
end
end
describe "#execute (stubs)" do
before :each do
stub_process_wait(0)
end
describe "when an execution stub is specified" do
before :each do
Puppet::Util::ExecutionStub.set do |command,args,stdin,stdout,stderr|
"execution stub output"
end
end
it "should call the block on the stub" do
Puppet::Util::Execution.execute("/usr/bin/run_my_execute_stub").should == "execution stub output"
end
it "should not actually execute anything" do
Puppet::Util::Execution.expects(:execute_posix).never
Puppet::Util::Execution.expects(:execute_windows).never
Puppet::Util::Execution.execute("/usr/bin/run_my_execute_stub")
end
end
describe "when setting up input and output files" do
include PuppetSpec::Files
let(:executor) { Puppet.features.microsoft_windows? ? 'execute_windows' : 'execute_posix' }
let(:rval) { Puppet.features.microsoft_windows? ? proc_info_stub : pid }
before :each do
Puppet::Util::Execution.stubs(:wait_for_output)
end
it "should set stdin to the stdinfile if specified" do
input = tmpfile('stdin')
FileUtils.touch(input)
Puppet::Util::Execution.expects(executor).with do |_,_,stdin,_,_|
stdin.path == input
end.returns(rval)
Puppet::Util::Execution.execute('test command', :stdinfile => input)
end
it "should set stdin to the null file if not specified" do
Puppet::Util::Execution.expects(executor).with do |_,_,stdin,_,_|
stdin.path == null_file
end.returns(rval)
Puppet::Util::Execution.execute('test command')
end
describe "when squelch is set" do
it "should set stdout and stderr to the null file" do
Puppet::Util::Execution.expects(executor).with do |_,_,_,stdout,stderr|
stdout.path == null_file and stderr.path == null_file
end.returns(rval)
Puppet::Util::Execution.execute('test command', :squelch => true)
end
end
describe "when squelch is not set" do
it "should set stdout to a temporary output file" do
- outfile = Tempfile.new('stdout')
- Tempfile.stubs(:new).returns(outfile)
+ outfile = Puppet::FileSystem::Uniquefile.new('stdout')
+ Puppet::FileSystem::Uniquefile.stubs(:new).returns(outfile)
Puppet::Util::Execution.expects(executor).with do |_,_,_,stdout,_|
stdout.path == outfile.path
end.returns(rval)
Puppet::Util::Execution.execute('test command', :squelch => false)
end
it "should set stderr to the same file as stdout if combine is true" do
- outfile = Tempfile.new('stdout')
- Tempfile.stubs(:new).returns(outfile)
+ outfile = Puppet::FileSystem::Uniquefile.new('stdout')
+ Puppet::FileSystem::Uniquefile.stubs(:new).returns(outfile)
Puppet::Util::Execution.expects(executor).with do |_,_,_,stdout,stderr|
stdout.path == outfile.path and stderr.path == outfile.path
end.returns(rval)
Puppet::Util::Execution.execute('test command', :squelch => false, :combine => true)
end
it "should set stderr to the null device if combine is false" do
- outfile = Tempfile.new('stdout')
- Tempfile.stubs(:new).returns(outfile)
+ outfile = Puppet::FileSystem::Uniquefile.new('stdout')
+ Puppet::FileSystem::Uniquefile.stubs(:new).returns(outfile)
Puppet::Util::Execution.expects(executor).with do |_,_,_,stdout,stderr|
stdout.path == outfile.path and stderr.path == null_file
end.returns(rval)
Puppet::Util::Execution.execute('test command', :squelch => false, :combine => false)
end
it "should combine stdout and stderr if combine is true" do
- outfile = Tempfile.new('stdout')
- Tempfile.stubs(:new).returns(outfile)
+ outfile = Puppet::FileSystem::Uniquefile.new('stdout')
+ Puppet::FileSystem::Uniquefile.stubs(:new).returns(outfile)
Puppet::Util::Execution.expects(executor).with do |_,_,_,stdout,stderr|
stdout.path == outfile.path and stderr.path == outfile.path
end.returns(rval)
Puppet::Util::Execution.execute('test command', :combine => true)
end
it "should default combine to true when no options are specified" do
- outfile = Tempfile.new('stdout')
- Tempfile.stubs(:new).returns(outfile)
+ outfile = Puppet::FileSystem::Uniquefile.new('stdout')
+ Puppet::FileSystem::Uniquefile.stubs(:new).returns(outfile)
Puppet::Util::Execution.expects(executor).with do |_,_,_,stdout,stderr|
stdout.path == outfile.path and stderr.path == outfile.path
end.returns(rval)
Puppet::Util::Execution.execute('test command')
end
it "should default combine to false when options are specified, but combine is not" do
- outfile = Tempfile.new('stdout')
- Tempfile.stubs(:new).returns(outfile)
+ outfile = Puppet::FileSystem::Uniquefile.new('stdout')
+ Puppet::FileSystem::Uniquefile.stubs(:new).returns(outfile)
Puppet::Util::Execution.expects(executor).with do |_,_,_,stdout,stderr|
stdout.path == outfile.path and stderr.path == null_file
end.returns(rval)
Puppet::Util::Execution.execute('test command', :failonfail => false)
end
it "should default combine to false when an empty hash of options is specified" do
- outfile = Tempfile.new('stdout')
- Tempfile.stubs(:new).returns(outfile)
+ outfile = Puppet::FileSystem::Uniquefile.new('stdout')
+ Puppet::FileSystem::Uniquefile.stubs(:new).returns(outfile)
Puppet::Util::Execution.expects(executor).with do |_,_,_,stdout,stderr|
stdout.path == outfile.path and stderr.path == null_file
end.returns(rval)
Puppet::Util::Execution.execute('test command', {})
end
end
end
describe "on Windows", :if => Puppet.features.microsoft_windows? do
it "should always close the process and thread handles" do
Puppet::Util::Execution.stubs(:execute_windows).returns(proc_info_stub)
Puppet::Util::Windows::Process.expects(:wait_process).with(process_handle).raises('whatever')
- Puppet::Util::Windows::Process.expects(:CloseHandle).with(thread_handle)
- Puppet::Util::Windows::Process.expects(:CloseHandle).with(process_handle)
+ FFI::WIN32.expects(:CloseHandle).with(thread_handle)
+ FFI::WIN32.expects(:CloseHandle).with(process_handle)
expect { Puppet::Util::Execution.execute('test command') }.to raise_error(RuntimeError)
end
it "should return the correct exit status even when exit status is greater than 256" do
real_exit_status = 3010
Puppet::Util::Execution.stubs(:execute_windows).returns(proc_info_stub)
stub_process_wait(real_exit_status)
$CHILD_STATUS.stubs(:exitstatus).returns(real_exit_status % 256) # The exitstatus is changed to be mod 256 so that ruby can fit it into 8 bits.
Puppet::Util::Execution.execute('test command', :failonfail => false).exitstatus.should == real_exit_status
end
end
end
describe "#execute (posix locale)", :unless => Puppet.features.microsoft_windows? do
before :each do
# there is a danger here that ENV will be modified by exec_posix. Normally it would only affect the ENV
# of a forked process, but, in some of the previous tests in this file we're stubbing Kernel.fork, which could
# allow the method to override the "real" ENV. This shouldn't be a problem for these tests because they are
# not stubbing Kernel.fork, but, better safe than sorry... so, to guard against this, we'll capture a snapshot
# of ENV before each test.
@saved_env = ENV.to_hash
end
after :each do
# capture the current environment and make sure it's the same as it was before the test
cur_env = ENV.to_hash
# we will get some fairly useless output if we just use the raw == operator on the hashes here, so we'll
# be a bit more explicit and laborious in the name of making the error more useful...
@saved_env.each_pair { |key,val| cur_env[key].should == val }
(cur_env.keys - @saved_env.keys).should == []
end
# build up a printf-style string that contains a command to get the value of an environment variable
# from the operating system. We can substitute into this with the names of the desired environment variables later.
get_env_var_cmd = 'echo $%s'
# a sentinel value that we can use to emulate what locale environment variables might be set to on an international
# system.
lang_sentinel_value = "en_US.UTF-8"
# a temporary hash that contains sentinel values for each of the locale environment variables that we override in
# "execute"
locale_sentinel_env = {}
Puppet::Util::POSIX::LOCALE_ENV_VARS.each { |var| locale_sentinel_env[var] = lang_sentinel_value }
it "should override the locale environment variables when :override_locale is not set (defaults to true)" do
# temporarily override the locale environment vars with a sentinel value, so that we can confirm that
# execute is actually setting them.
Puppet::Util.withenv(locale_sentinel_env) do
Puppet::Util::POSIX::LOCALE_ENV_VARS.each do |var|
# we expect that all of the POSIX vars will have been cleared except for LANG and LC_ALL
expected_value = (['LANG', 'LC_ALL'].include?(var)) ? "C" : ""
Puppet::Util::execute(get_env_var_cmd % var).strip.should == expected_value
end
end
end
it "should override the LANG environment variable when :override_locale is set to true" do
# temporarily override the locale environment vars with a sentinel value, so that we can confirm that
# execute is actually setting them.
Puppet::Util.withenv(locale_sentinel_env) do
Puppet::Util::POSIX::LOCALE_ENV_VARS.each do |var|
# we expect that all of the POSIX vars will have been cleared except for LANG and LC_ALL
expected_value = (['LANG', 'LC_ALL'].include?(var)) ? "C" : ""
Puppet::Util::execute(get_env_var_cmd % var, {:override_locale => true}).strip.should == expected_value
end
end
end
it "should *not* override the LANG environment variable when :override_locale is set to false" do
# temporarily override the locale environment vars with a sentinel value, so that we can confirm that
# execute is not setting them.
Puppet::Util.withenv(locale_sentinel_env) do
Puppet::Util::POSIX::LOCALE_ENV_VARS.each do |var|
Puppet::Util::execute(get_env_var_cmd % var, {:override_locale => false}).strip.should == lang_sentinel_value
end
end
end
it "should have restored the LANG and locale environment variables after execution" do
# we'll do this once without any sentinel values, to give us a little more test coverage
orig_env_vals = {}
Puppet::Util::POSIX::LOCALE_ENV_VARS.each do |var|
orig_env_vals[var] = ENV[var]
end
# now we can really execute any command--doesn't matter what it is...
Puppet::Util::execute(get_env_var_cmd % 'anything', {:override_locale => true})
# now we check and make sure the original environment was restored
Puppet::Util::POSIX::LOCALE_ENV_VARS.each do |var|
ENV[var].should == orig_env_vals[var]
end
# now, once more... but with our sentinel values
Puppet::Util.withenv(locale_sentinel_env) do
# now we can really execute any command--doesn't matter what it is...
Puppet::Util::execute(get_env_var_cmd % 'anything', {:override_locale => true})
# now we check and make sure the original environment was restored
Puppet::Util::POSIX::LOCALE_ENV_VARS.each do |var|
ENV[var].should == locale_sentinel_env[var]
end
end
end
end
describe "#execute (posix user env vars)", :unless => Puppet.features.microsoft_windows? do
# build up a printf-style string that contains a command to get the value of an environment variable
# from the operating system. We can substitute into this with the names of the desired environment variables later.
get_env_var_cmd = 'echo $%s'
# a sentinel value that we can use to emulate what locale environment variables might be set to on an international
# system.
user_sentinel_value = "Abracadabra"
# a temporary hash that contains sentinel values for each of the locale environment variables that we override in
# "execute"
user_sentinel_env = {}
Puppet::Util::POSIX::USER_ENV_VARS.each { |var| user_sentinel_env[var] = user_sentinel_value }
it "should unset user-related environment vars during execution" do
# first we set up a temporary execution environment with sentinel values for the user-related environment vars
# that we care about.
Puppet::Util.withenv(user_sentinel_env) do
# with this environment, we loop over the vars in question
Puppet::Util::POSIX::USER_ENV_VARS.each do |var|
# ensure that our temporary environment is set up as we expect
ENV[var].should == user_sentinel_env[var]
# run an "exec" via the provider and ensure that it unsets the vars
Puppet::Util::execute(get_env_var_cmd % var).strip.should == ""
# ensure that after the exec, our temporary env is still intact
ENV[var].should == user_sentinel_env[var]
end
end
end
it "should have restored the user-related environment variables after execution" do
# we'll do this once without any sentinel values, to give us a little more test coverage
orig_env_vals = {}
Puppet::Util::POSIX::USER_ENV_VARS.each do |var|
orig_env_vals[var] = ENV[var]
end
# now we can really execute any command--doesn't matter what it is...
Puppet::Util::execute(get_env_var_cmd % 'anything')
# now we check and make sure the original environment was restored
Puppet::Util::POSIX::USER_ENV_VARS.each do |var|
ENV[var].should == orig_env_vals[var]
end
# now, once more... but with our sentinel values
Puppet::Util.withenv(user_sentinel_env) do
# now we can really execute any command--doesn't matter what it is...
Puppet::Util::execute(get_env_var_cmd % 'anything')
# now we check and make sure the original environment was restored
Puppet::Util::POSIX::USER_ENV_VARS.each do |var|
ENV[var].should == user_sentinel_env[var]
end
end
end
end
describe "after execution" do
before :each do
stub_process_wait(0)
if Puppet.features.microsoft_windows?
Puppet::Util::Execution.stubs(:execute_windows).returns(proc_info_stub)
else
Puppet::Util::Execution.stubs(:execute_posix).returns(pid)
end
end
it "should wait for the child process to exit" do
Puppet::Util::Execution.stubs(:wait_for_output)
Puppet::Util::Execution.execute('test command')
end
it "should close the stdin/stdout/stderr files used by the child" do
stdin = mock 'file', :close
stdout = mock 'file', :close
stderr = mock 'file', :close
File.expects(:open).
times(3).
returns(stdin).
then.returns(stdout).
then.returns(stderr)
Puppet::Util::Execution.execute('test command', {:squelch => true, :combine => false})
end
it "should read and return the output if squelch is false" do
- stdout = Tempfile.new('test')
- Tempfile.stubs(:new).returns(stdout)
+ stdout = Puppet::FileSystem::Uniquefile.new('test')
+ Puppet::FileSystem::Uniquefile.stubs(:new).returns(stdout)
stdout.write("My expected command output")
Puppet::Util::Execution.execute('test command').should == "My expected command output"
end
it "should not read the output if squelch is true" do
- stdout = Tempfile.new('test')
- Tempfile.stubs(:new).returns(stdout)
+ stdout = Puppet::FileSystem::Uniquefile.new('test')
+ Puppet::FileSystem::Uniquefile.stubs(:new).returns(stdout)
stdout.write("My expected command output")
Puppet::Util::Execution.execute('test command', :squelch => true).should == ''
end
it "should delete the file used for output if squelch is false" do
- stdout = Tempfile.new('test')
+ stdout = Puppet::FileSystem::Uniquefile.new('test')
path = stdout.path
- Tempfile.stubs(:new).returns(stdout)
+ Puppet::FileSystem::Uniquefile.stubs(:new).returns(stdout)
Puppet::Util::Execution.execute('test command')
Puppet::FileSystem.exist?(path).should be_false
end
it "should not raise an error if the file is open" do
- stdout = Tempfile.new('test')
- Tempfile.stubs(:new).returns(stdout)
+ stdout = Puppet::FileSystem::Uniquefile.new('test')
+ Puppet::FileSystem::Uniquefile.stubs(:new).returns(stdout)
file = File.new(stdout.path, 'r')
Puppet::Util.execute('test command')
end
it "should raise an error if failonfail is true and the child failed" do
stub_process_wait(1)
expect {
subject.execute('fail command', :failonfail => true)
}.to raise_error(Puppet::ExecutionFailure, /Execution of 'fail command' returned 1/)
end
it "should not raise an error if failonfail is false and the child failed" do
stub_process_wait(1)
subject.execute('fail command', :failonfail => false)
end
it "should not raise an error if failonfail is true and the child succeeded" do
stub_process_wait(0)
subject.execute('fail command', :failonfail => true)
end
it "should not raise an error if failonfail is false and the child succeeded" do
stub_process_wait(0)
subject.execute('fail command', :failonfail => false)
end
it "should default failonfail to true when no options are specified" do
stub_process_wait(1)
expect {
subject.execute('fail command')
}.to raise_error(Puppet::ExecutionFailure, /Execution of 'fail command' returned 1/)
end
it "should default failonfail to false when options are specified, but failonfail is not" do
stub_process_wait(1)
subject.execute('fail command', { :combine => true })
end
it "should default failonfail to false when an empty hash of options is specified" do
stub_process_wait(1)
subject.execute('fail command', {})
end
it "should raise an error if a nil option is specified" do
expect {
Puppet::Util::Execution.execute('fail command', nil)
}.to raise_error(TypeError, /(can\'t convert|no implicit conversion of) nil into Hash/)
end
end
end
describe "#execpipe" do
it "should execute a string as a string" do
Puppet::Util::Execution.expects(:open).with('| echo hello 2>&1').returns('hello')
- $CHILD_STATUS.expects(:==).with(0).returns(true)
+ Puppet::Util::Execution.expects(:exitstatus).returns(0)
Puppet::Util::Execution.execpipe('echo hello').should == 'hello'
end
it "should print meaningful debug message for string argument" do
Puppet::Util::Execution.expects(:debug).with("Executing 'echo hello'")
Puppet::Util::Execution.expects(:open).with('| echo hello 2>&1').returns('hello')
- $CHILD_STATUS.expects(:==).with(0).returns(true)
+ Puppet::Util::Execution.expects(:exitstatus).returns(0)
Puppet::Util::Execution.execpipe('echo hello')
end
it "should print meaningful debug message for array argument" do
Puppet::Util::Execution.expects(:debug).with("Executing 'echo hello'")
Puppet::Util::Execution.expects(:open).with('| echo hello 2>&1').returns('hello')
- $CHILD_STATUS.expects(:==).with(0).returns(true)
+ Puppet::Util::Execution.expects(:exitstatus).returns(0)
Puppet::Util::Execution.execpipe(['echo','hello'])
end
it "should execute an array by pasting together with spaces" do
Puppet::Util::Execution.expects(:open).with('| echo hello 2>&1').returns('hello')
- $CHILD_STATUS.expects(:==).with(0).returns(true)
+ Puppet::Util::Execution.expects(:exitstatus).returns(0)
Puppet::Util::Execution.execpipe(['echo', 'hello']).should == 'hello'
end
it "should fail if asked to fail, and the child does" do
- Puppet::Util::Execution.stubs(:open).returns('error message')
- $CHILD_STATUS.expects(:==).with(0).returns(false)
+ Puppet::Util::Execution.stubs(:open).with('| echo hello 2>&1').returns('error message')
+ Puppet::Util::Execution.expects(:exitstatus).returns(1)
expect { Puppet::Util::Execution.execpipe('echo hello') }.
to raise_error Puppet::ExecutionFailure, /error message/
end
it "should not fail if asked not to fail, and the child does" do
Puppet::Util::Execution.stubs(:open).returns('error message')
- $CHILD_STATUS.stubs(:==).with(0).returns(false)
Puppet::Util::Execution.execpipe('echo hello', false).should == 'error message'
end
end
end
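The option-handling examples above pin down several defaults for Puppet::Util::Execution.execute; the snippet below is a usage summary based only on those expectations (the command is illustrative).
````
require 'puppet'

# Defaults asserted above: with no options hash, failonfail and combine are
# true; passing any options hash (even {}) makes both default to false.
output = Puppet::Util::Execution.execute('echo hello',
  :failonfail => true,        # raise Puppet::ExecutionFailure on non-zero exit
  :combine    => true,        # stderr shares the stdout capture file
  :squelch    => false,       # capture output instead of sending it to the null device
  :stdinfile  => '/dev/null') # file connected to the child's stdin

puts output
````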
diff --git a/spec/unit/util/feature_spec.rb b/spec/unit/util/feature_spec.rb
index aa8afbba6..e6d844533 100755
--- a/spec/unit/util/feature_spec.rb
+++ b/spec/unit/util/feature_spec.rb
@@ -1,94 +1,106 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet/util/feature'
describe Puppet::Util::Feature do
before do
@features = Puppet::Util::Feature.new("features")
@features.stubs(:warn)
end
it "should consider undefined features to be absent" do
@features.should_not be_defined_feature
end
it "should be able to add new features" do
@features.add(:myfeature) {}
@features.should respond_to(:myfeature?)
end
it "should call associated code when loading a feature" do
$loaded_feature = false
@features.add(:myfeature) { $loaded_feature = true}
$loaded_feature.should be_true
end
it "should consider a feature absent when the feature load fails" do
@features.add(:failer) { raise "foo" }
@features.should_not be_failer
end
it "should consider a feature to be absent when the feature load returns false" do
@features.add(:failer) { false }
@features.should_not be_failer
end
it "should consider a feature to be present when the feature load returns true" do
@features.add(:available) { true }
@features.should be_available
end
it "should cache the results of a feature load via code block" do
$loaded_feature = 0
@features.add(:myfeature) { $loaded_feature += 1 }
@features.myfeature?
@features.myfeature?
$loaded_feature.should == 1
end
it "should invalidate the cache for the feature when loading" do
# block defined features are evaluated at load time
@features.add(:myfeature) { false }
@features.should_not be_myfeature
# features with no block have deferred evaluation so an existing cached
# value would take precedence
@features.add(:myfeature)
@features.should be_myfeature
end
it "should support features with libraries" do
lambda { @features.add(:puppet, :libs => %w{puppet}) }.should_not raise_error
end
it "should consider a feature to be present if all of its libraries are present" do
@features.add(:myfeature, :libs => %w{foo bar})
@features.expects(:require).with("foo")
@features.expects(:require).with("bar")
@features.should be_myfeature
end
it "should log and consider a feature to be absent if any of its libraries are absent" do
@features.add(:myfeature, :libs => %w{foo bar})
@features.expects(:require).with("foo").raises(LoadError)
@features.stubs(:require).with("bar")
Puppet.expects(:debug)
@features.should_not be_myfeature
end
it "should change the feature to be present when its libraries become available" do
@features.add(:myfeature, :libs => %w{foo bar})
@features.expects(:require).twice().with("foo").raises(LoadError).then.returns(nil)
@features.stubs(:require).with("bar")
Puppet::Util::RubyGems::Source.stubs(:source).returns(Puppet::Util::RubyGems::OldGemsSource)
Puppet::Util::RubyGems::OldGemsSource.any_instance.expects(:clear_paths).times(3)
Puppet.expects(:debug)
@features.should_not be_myfeature
@features.should be_myfeature
end
+
+ it "should cache load failures when configured to do so" do
+ Puppet[:always_cache_features] = true
+
+ @features.add(:myfeature, :libs => %w{foo bar})
+ @features.expects(:require).with("foo").raises(LoadError)
+
+ @features.should_not be_myfeature
+ # second call would cause an expectation exception if 'require' was
+ # called a second time
+ @features.should_not be_myfeature
+ end
end
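The feature examples above cover both block-defined and library-defined features; the snippet below restates that API as a usage sketch (the feature and library names are made up).
````
require 'puppet/util/feature'

features = Puppet::Util::Feature.new("features")

# Block form: the block runs when the feature is added and its truthiness is cached.
features.add(:block_feature) { true }
features.block_feature?   # => true

# Library form: present only when every listed library can be required.
# With Puppet[:always_cache_features] = true, a failed require is cached
# instead of being retried on the next query.
features.add(:lib_feature, :libs => %w{some_lib another_lib})
features.lib_feature?      # => false unless both libraries load
````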
diff --git a/spec/unit/util/http_proxy_spec.rb b/spec/unit/util/http_proxy_spec.rb
index bc6b4d2b7..59f39c511 100644
--- a/spec/unit/util/http_proxy_spec.rb
+++ b/spec/unit/util/http_proxy_spec.rb
@@ -1,83 +1,125 @@
require 'uri'
require 'spec_helper'
require 'puppet/util/http_proxy'
describe Puppet::Util::HttpProxy do
- host, port = 'some.host', 1234
+ host, port, user, password = 'some.host', 1234, 'user1', 'pAssw0rd'
describe ".http_proxy_env" do
it "should return nil if no environment variables" do
subject.http_proxy_env.should == nil
end
it "should return a URI::HTTP object if http_proxy env variable is set" do
Puppet::Util.withenv('HTTP_PROXY' => host) do
subject.http_proxy_env.should == URI.parse(host)
end
end
it "should return a URI::HTTP object if HTTP_PROXY env variable is set" do
Puppet::Util.withenv('HTTP_PROXY' => host) do
subject.http_proxy_env.should == URI.parse(host)
end
end
it "should return a URI::HTTP object with .host and .port if URI is given" do
Puppet::Util.withenv('HTTP_PROXY' => "http://#{host}:#{port}") do
subject.http_proxy_env.should == URI.parse("http://#{host}:#{port}")
end
end
it "should return nil if proxy variable is malformed" do
Puppet::Util.withenv('HTTP_PROXY' => 'this is not a valid URI') do
subject.http_proxy_env.should == nil
end
end
end
describe ".http_proxy_host" do
it "should return nil if no proxy host in config or env" do
subject.http_proxy_host.should == nil
end
it "should return a proxy host if set in config" do
Puppet.settings[:http_proxy_host] = host
subject.http_proxy_host.should == host
end
it "should return nil if set to `none` in config" do
Puppet.settings[:http_proxy_host] = 'none'
subject.http_proxy_host.should == nil
end
it "uses environment variable before puppet settings" do
Puppet::Util.withenv('HTTP_PROXY' => "http://#{host}:#{port}") do
Puppet.settings[:http_proxy_host] = 'not.correct'
subject.http_proxy_host.should == host
end
end
end
describe ".http_proxy_port" do
it "should return a proxy port if set in environment" do
Puppet::Util.withenv('HTTP_PROXY' => "http://#{host}:#{port}") do
subject.http_proxy_port.should == port
end
end
it "should return a proxy port if set in config" do
Puppet.settings[:http_proxy_port] = port
subject.http_proxy_port.should == port
end
it "uses environment variable before puppet settings" do
Puppet::Util.withenv('HTTP_PROXY' => "http://#{host}:#{port}") do
Puppet.settings[:http_proxy_port] = 7456
subject.http_proxy_port.should == port
end
end
end
+ describe ".http_proxy_user" do
+ it "should return a proxy user if set in environment" do
+ Puppet::Util.withenv('HTTP_PROXY' => "http://#{user}:#{password}@#{host}:#{port}") do
+ subject.http_proxy_user.should == user
+ end
+ end
+
+ it "should return a proxy user if set in config" do
+ Puppet.settings[:http_proxy_user] = user
+ subject.http_proxy_user.should == user
+ end
+
+ it "should use environment variable before puppet settings" do
+ Puppet::Util.withenv('HTTP_PROXY' => "http://#{user}:#{password}@#{host}:#{port}") do
+ Puppet.settings[:http_proxy_user] = 'clownpants'
+ subject.http_proxy_user.should == user
+ end
+ end
+
+ end
+
+ describe ".http_proxy_password" do
+ it "should return a proxy password if set in environment" do
+ Puppet::Util.withenv('HTTP_PROXY' => "http://#{user}:#{password}@#{host}:#{port}") do
+ subject.http_proxy_password.should == password
+ end
+ end
+
+ it "should return a proxy password if set in config" do
+ Puppet.settings[:http_proxy_user] = user
+ Puppet.settings[:http_proxy_password] = password
+ subject.http_proxy_password.should == password
+ end
+
+ it "should use environment variable before puppet settings" do
+ Puppet::Util.withenv('HTTP_PROXY' => "http://#{user}:#{password}@#{host}:#{port}") do
+ Puppet.settings[:http_proxy_password] = 'clownpants'
+ subject.http_proxy_password.should == password
+ end
+ end
+
+ end
end
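The new user/password examples above follow the same precedence as host and port: values in the HTTP_PROXY URI win over the corresponding Puppet settings. A usage sketch of that precedence, reusing the spec's own sentinel values:
````
require 'puppet'
require 'puppet/util/http_proxy'

Puppet::Util.withenv('HTTP_PROXY' => 'http://user1:pAssw0rd@some.host:1234') do
  Puppet.settings[:http_proxy_host] = 'not.correct'   # ignored while the env var is set
  Puppet::Util::HttpProxy.http_proxy_host      # => "some.host"
  Puppet::Util::HttpProxy.http_proxy_port      # => 1234
  Puppet::Util::HttpProxy.http_proxy_user      # => "user1"
  Puppet::Util::HttpProxy.http_proxy_password  # => "pAssw0rd"
end
````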
diff --git a/spec/unit/util/log/destinations_spec.rb b/spec/unit/util/log/destinations_spec.rb
index a91236dba..a81eac631 100755
--- a/spec/unit/util/log/destinations_spec.rb
+++ b/spec/unit/util/log/destinations_spec.rb
@@ -1,183 +1,227 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'json'
require 'puppet/util/log'
describe Puppet::Util::Log.desttypes[:report] do
before do
@dest = Puppet::Util::Log.desttypes[:report]
end
it "should require a report at initialization" do
@dest.new("foo").report.should == "foo"
end
it "should send new messages to the report" do
report = mock 'report'
dest = @dest.new(report)
report.expects(:<<).with("my log")
dest.handle "my log"
end
end
describe Puppet::Util::Log.desttypes[:file] do
include PuppetSpec::Files
before do
File.stubs(:open) # prevent actually creating the file
- File.stubs(:chown) # prevent chown on non existing file from failing
+ File.stubs(:chown) # prevent chown on non existing file from failing
@class = Puppet::Util::Log.desttypes[:file]
end
it "should default to autoflush false" do
@class.new(tmpfile('log')).autoflush.should == true
end
describe "when matching" do
shared_examples_for "file destination" do
it "should match an absolute path" do
@class.match?(abspath).should be_true
end
it "should not match a relative path" do
@class.match?(relpath).should be_false
end
end
describe "on POSIX systems", :as_platform => :posix do
let (:abspath) { '/tmp/log' }
let (:relpath) { 'log' }
it_behaves_like "file destination"
end
describe "on Windows systems", :as_platform => :windows do
let (:abspath) { 'C:\\temp\\log.txt' }
let (:relpath) { 'log.txt' }
it_behaves_like "file destination"
end
end
end
describe Puppet::Util::Log.desttypes[:syslog] do
let (:klass) { Puppet::Util::Log.desttypes[:syslog] }
# these tests can only be run when syslog is present, because
# we can't stub the top-level Syslog module
describe "when syslog is available", :if => Puppet.features.syslog? do
before :each do
Syslog.stubs(:opened?).returns(false)
Syslog.stubs(:const_get).returns("LOG_KERN").returns(0)
Syslog.stubs(:open)
end
it "should open syslog" do
Syslog.expects(:open)
klass.new
end
it "should close syslog" do
Syslog.expects(:close)
dest = klass.new
dest.close
end
it "should send messages to syslog" do
syslog = mock 'syslog'
syslog.expects(:info).with("don't panic")
Syslog.stubs(:open).returns(syslog)
msg = Puppet::Util::Log.new(:level => :info, :message => "don't panic")
dest = klass.new
dest.handle(msg)
end
end
describe "when syslog is unavailable" do
it "should not be a suitable log destination" do
Puppet.features.stubs(:syslog?).returns(false)
klass.suitable?(:syslog).should be_false
end
end
end
describe Puppet::Util::Log.desttypes[:logstash_event] do
describe "when using structured log format with logstash_event schema" do
before :each do
@msg = Puppet::Util::Log.new(:level => :info, :message => "So long, and thanks for all the fish.", :source => "a dolphin")
end
it "format should fix the hash to have the correct structure" do
dest = described_class.new
result = dest.format(@msg)
result["version"].should == 1
result["level"].should == :info
result["message"].should == "So long, and thanks for all the fish."
result["source"].should == "a dolphin"
# timestamp should be within 10 seconds
Time.parse(result["@timestamp"]).should >= ( Time.now - 10 )
end
it "format returns a structure that can be converted to json" do
dest = described_class.new
hash = dest.format(@msg)
JSON.parse(hash.to_json)
end
it "handle should send the output to stdout" do
$stdout.expects(:puts).once
dest = described_class.new
dest.handle(@msg)
end
end
end
describe Puppet::Util::Log.desttypes[:console] do
let (:klass) { Puppet::Util::Log.desttypes[:console] }
describe "when color is available" do
before :each do
subject.stubs(:console_has_color?).returns(true)
end
it "should support color output" do
Puppet[:color] = true
subject.colorize(:red, 'version').should == "\e[0;31mversion\e[0m"
end
it "should withhold color output when not appropriate" do
Puppet[:color] = false
subject.colorize(:red, 'version').should == "version"
end
it "should handle multiple overlapping colors in a stack-like way" do
Puppet[:color] = true
vstring = subject.colorize(:red, 'version')
subject.colorize(:green, "(#{vstring})").should == "\e[0;32m(\e[0;31mversion\e[0;32m)\e[0m"
end
it "should handle resets in a stack-like way" do
Puppet[:color] = true
vstring = subject.colorize(:reset, 'version')
subject.colorize(:green, "(#{vstring})").should == "\e[0;32m(\e[mversion\e[0;32m)\e[0m"
end
it "should include the log message's source/context in the output when available" do
Puppet[:color] = false
$stdout.expects(:puts).with("Info: a hitchhiker: don't panic")
msg = Puppet::Util::Log.new(:level => :info, :message => "don't panic", :source => "a hitchhiker")
dest = klass.new
dest.handle(msg)
end
end
end
+
+
+describe ":eventlog", :if => Puppet::Util::Platform.windows? do
+ before do
+ if Facter.value(:kernelmajversion).to_f < 6.0
+ pending("requires win32-eventlog gem upgrade to 0.6.2 on Windows 2003")
+ end
+ end
+
+ let(:klass) { Puppet::Util::Log.desttypes[:eventlog] }
+
+ def expects_message_with_type(klass, level, eventlog_type, eventlog_id)
+ eventlog = stub('eventlog')
+ eventlog.expects(:report_event).with(has_entries(:source => "Puppet", :event_type => eventlog_type, :event_id => eventlog_id, :data => "a hitchhiker: don't panic"))
+ Win32::EventLog.stubs(:open).returns(eventlog)
+
+ msg = Puppet::Util::Log.new(:level => level, :message => "don't panic", :source => "a hitchhiker")
+ dest = klass.new
+ dest.handle(msg)
+ end
+
+ it "supports the eventlog feature" do
+ expect(Puppet.features.eventlog?).to be_true
+ end
+
+ it "logs to the Application event log" do
+ eventlog = stub('eventlog')
+ Win32::EventLog.expects(:open).with('Application').returns(stub('eventlog'))
+
+ klass.new
+ end
+
+ it "logs :debug level as an information type event" do
+ expects_message_with_type(klass, :debug, klass::EVENTLOG_INFORMATION_TYPE, 0x1)
+ end
+
+ it "logs :warning level as an warning type event" do
+ expects_message_with_type(klass, :warning, klass::EVENTLOG_WARNING_TYPE, 0x2)
+ end
+
+ it "logs :err level as an error type event" do
+ expects_message_with_type(klass, :err, klass::EVENTLOG_ERROR_TYPE, 0x3)
+ end
+end
diff --git a/spec/unit/util/logging_spec.rb b/spec/unit/util/logging_spec.rb
index 7eba80bca..abdae9189 100755
--- a/spec/unit/util/logging_spec.rb
+++ b/spec/unit/util/logging_spec.rb
@@ -1,167 +1,205 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet/util/logging'
class LoggingTester
include Puppet::Util::Logging
end
describe Puppet::Util::Logging do
before do
@logger = LoggingTester.new
end
Puppet::Util::Log.eachlevel do |level|
it "should have a method for sending '#{level}' logs" do
@logger.should respond_to(level)
end
end
it "should have a method for sending a log with a specified log level" do
@logger.expects(:to_s).returns "I'm a string!"
Puppet::Util::Log.expects(:create).with { |args| args[:source] == "I'm a string!" and args[:level] == "loglevel" and args[:message] == "mymessage" }
@logger.send_log "loglevel", "mymessage"
end
describe "when sending a log" do
it "should use the Log's 'create' entrance method" do
Puppet::Util::Log.expects(:create)
@logger.notice "foo"
end
it "should send itself converted to a string as the log source" do
@logger.expects(:to_s).returns "I'm a string!"
Puppet::Util::Log.expects(:create).with { |args| args[:source] == "I'm a string!" }
@logger.notice "foo"
end
it "should queue logs sent without a specified destination" do
Puppet::Util::Log.close_all
Puppet::Util::Log.expects(:queuemessage)
@logger.notice "foo"
end
it "should use the path of any provided resource type" do
resource = Puppet::Type.type(:host).new :name => "foo"
resource.expects(:path).returns "/path/to/host".to_sym
Puppet::Util::Log.expects(:create).with { |args| args[:source] == "/path/to/host" }
resource.notice "foo"
end
it "should use the path of any provided resource parameter" do
resource = Puppet::Type.type(:host).new :name => "foo"
param = resource.parameter(:name)
param.expects(:path).returns "/path/to/param".to_sym
Puppet::Util::Log.expects(:create).with { |args| args[:source] == "/path/to/param" }
param.notice "foo"
end
it "should send the provided argument as the log message" do
Puppet::Util::Log.expects(:create).with { |args| args[:message] == "foo" }
@logger.notice "foo"
end
it "should join any provided arguments into a single string for the message" do
Puppet::Util::Log.expects(:create).with { |args| args[:message] == "foo bar baz" }
@logger.notice ["foo", "bar", "baz"]
end
[:file, :line, :tags].each do |attr|
it "should include #{attr} if available" do
@logger.singleton_class.send(:attr_accessor, attr)
@logger.send(attr.to_s + "=", "myval")
Puppet::Util::Log.expects(:create).with { |args| args[attr] == "myval" }
@logger.notice "foo"
end
end
end
describe "when sending a deprecation warning" do
it "does not log a message when deprecation warnings are disabled" do
Puppet.expects(:[]).with(:disable_warnings).returns %w[deprecations]
@logger.expects(:warning).never
@logger.deprecation_warning 'foo'
end
it "logs the message with warn" do
@logger.expects(:warning).with do |msg|
msg =~ /^foo\n/
end
@logger.deprecation_warning 'foo'
end
it "only logs each offending line once" do
@logger.expects(:warning).with do |msg|
msg =~ /^foo\n/
end .once
5.times { @logger.deprecation_warning 'foo' }
end
it "ensures that deprecations from same origin are logged if their keys differ" do
@logger.expects(:warning).with(regexp_matches(/deprecated foo/)).times(5)
5.times { |i| @logger.deprecation_warning('deprecated foo', :key => "foo#{i}") }
end
it "does not duplicate deprecations for a given key" do
@logger.expects(:warning).with(regexp_matches(/deprecated foo/)).once
5.times { @logger.deprecation_warning('deprecated foo', :key => 'foo-msg') }
end
it "only logs the first 100 messages" do
(1..100).each { |i|
@logger.expects(:warning).with do |msg|
msg =~ /^#{i}\n/
end .once
# since the deprecation warning will only log each offending line once, we have to do some tomfoolery
# here in order to make it think each of these calls is coming from a unique call stack; we're basically
# mocking the method that it would normally use to find the call stack.
@logger.expects(:get_deprecation_offender).returns(["deprecation log count test ##{i}"])
@logger.deprecation_warning i
}
@logger.expects(:warning).with(101).never
@logger.deprecation_warning 101
end
end
+ describe "when sending a puppet_deprecation_warning" do
+ it "requires file and line or key options" do
+ expect do
+ @logger.puppet_deprecation_warning("foo")
+ end.to raise_error(Puppet::DevError, /Need either :file and :line, or :key/)
+ expect do
+ @logger.puppet_deprecation_warning("foo", :file => 'bar')
+ end.to raise_error(Puppet::DevError, /Need either :file and :line, or :key/)
+ expect do
+ @logger.puppet_deprecation_warning("foo", :key => 'akey')
+ @logger.puppet_deprecation_warning("foo", :file => 'afile', :line => 1)
+ end.to_not raise_error
+ end
+
+ it "warns with file and line" do
+ @logger.expects(:warning).with(regexp_matches(/deprecated foo.*afile:5/m))
+ @logger.puppet_deprecation_warning("deprecated foo", :file => 'afile', :line => 5)
+ end
+
+ it "warns keyed from file and line" do
+ @logger.expects(:warning).with(regexp_matches(/deprecated foo.*afile:5/m)).once
+ 5.times do
+ @logger.puppet_deprecation_warning("deprecated foo", :file => 'afile', :line => 5)
+ end
+ end
+
+ it "warns with separate key only once regardless of file and line" do
+ @logger.expects(:warning).with(regexp_matches(/deprecated foo.*afile:5/m)).once
+ @logger.puppet_deprecation_warning("deprecated foo", :key => 'some_key', :file => 'afile', :line => 5)
+ @logger.puppet_deprecation_warning("deprecated foo", :key => 'some_key', :file => 'bfile', :line => 3)
+ end
+
+ it "warns with key but no file and line" do
+ @logger.expects(:warning).with(regexp_matches(/deprecated foo.*unknown:unknown/m))
+ @logger.puppet_deprecation_warning("deprecated foo", :key => 'some_key')
+ end
+ end
+
describe "when formatting exceptions" do
it "should be able to format a chain of exceptions" do
exc3 = Puppet::Error.new("original")
exc3.set_backtrace(["1.rb:4:in `a'","2.rb:2:in `b'","3.rb:1"])
exc2 = Puppet::Error.new("second", exc3)
exc2.set_backtrace(["4.rb:8:in `c'","5.rb:1:in `d'","6.rb:3"])
exc1 = Puppet::Error.new("third", exc2)
exc1.set_backtrace(["7.rb:31:in `e'","8.rb:22:in `f'","9.rb:9"])
# whoa ugly
@logger.format_exception(exc1).should =~ /third
.*7\.rb:31:in `e'
.*8\.rb:22:in `f'
.*9\.rb:9
Wrapped exception:
second
.*4\.rb:8:in `c'
.*5\.rb:1:in `d'
.*6\.rb:3
Wrapped exception:
original
.*1\.rb:4:in `a'
.*2\.rb:2:in `b'
.*3\.rb:1/
end
end
end
diff --git a/spec/unit/util/pidlock_spec.rb b/spec/unit/util/pidlock_spec.rb
index 2ebe7dec8..fcef7aa31 100644
--- a/spec/unit/util/pidlock_spec.rb
+++ b/spec/unit/util/pidlock_spec.rb
@@ -1,182 +1,218 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet/util/pidlock'
describe Puppet::Util::Pidlock do
require 'puppet_spec/files'
include PuppetSpec::Files
before(:each) do
@lockfile = tmpfile("lock")
@lock = Puppet::Util::Pidlock.new(@lockfile)
end
describe "#lock" do
it "should not be locked at start" do
@lock.should_not be_locked
end
it "should not be mine at start" do
@lock.should_not be_mine
end
it "should become locked" do
@lock.lock
@lock.should be_locked
end
it "should become mine" do
@lock.lock
@lock.should be_mine
end
it "should be possible to lock multiple times" do
@lock.lock
lambda { @lock.lock }.should_not raise_error
end
it "should return true when locking" do
@lock.lock.should be_true
end
it "should return true if locked by me" do
@lock.lock
@lock.lock.should be_true
end
it "should create a lock file" do
@lock.lock
Puppet::FileSystem.exist?(@lockfile).should be_true
end
+ it 'should create an empty lock file even when pid is missing' do
+ Process.stubs(:pid).returns('')
+ @lock.lock
+ Puppet::FileSystem.exist?(@lock.file_path).should be_true
+ Puppet::FileSystem.read(@lock.file_path).should be_empty
+ end
+
+ it 'should replace an existing empty lockfile with a pid, given a subsequent lock call made against a valid pid' do
+ # empty pid results in empty lockfile
+ Process.stubs(:pid).returns('')
+ @lock.lock
+ Puppet::FileSystem.exist?(@lock.file_path).should be_true
+
+ # next lock call with valid pid kills existing empty lockfile
+ Process.stubs(:pid).returns(1234)
+ @lock.lock
+ Puppet::FileSystem.exist?(@lock.file_path).should be_true
+ Puppet::FileSystem.read(@lock.file_path).should == '1234'
+ end
+
it "should expose the lock file_path" do
@lock.file_path.should == @lockfile
end
end
describe "#unlock" do
it "should not be locked anymore" do
@lock.lock
@lock.unlock
@lock.should_not be_locked
end
it "should return false if not locked" do
@lock.unlock.should be_false
end
it "should return true if properly unlocked" do
@lock.lock
@lock.unlock.should be_true
end
it "should get rid of the lock file" do
@lock.lock
@lock.unlock
Puppet::FileSystem.exist?(@lockfile).should be_false
end
end
describe "#locked?" do
it "should return true if locked" do
@lock.lock
@lock.should be_locked
end
+
+ it "should remove the lockfile when pid is missing" do
+ Process.stubs(:pid).returns('')
+ @lock.lock
+ @lock.locked?.should be_false
+ Puppet::FileSystem.exist?(@lock.file_path).should be_false
+ end
+ end
+
+ describe '#lock_pid' do
+ it 'should return nil if the pid is empty' do
+ # fake pid to get empty lockfile
+ Process.stubs(:pid).returns('')
+ @lock.lock
+ @lock.lock_pid.should == nil
+ end
end
describe "with a stale lock" do
before(:each) do
# fake our pid to be 1234
Process.stubs(:pid).returns(1234)
# lock the file
@lock.lock
# fake our pid to be a different pid, to simulate someone else
# holding the lock
Process.stubs(:pid).returns(6789)
Process.stubs(:kill).with(0, 6789)
Process.stubs(:kill).with(0, 1234).raises(Errno::ESRCH)
end
it "should not be locked" do
@lock.should_not be_locked
end
describe "#lock" do
it "should clear stale locks" do
- @lock.locked?
+ @lock.locked?.should be_false
Puppet::FileSystem.exist?(@lockfile).should be_false
end
it "should replace with new locks" do
@lock.lock
Puppet::FileSystem.exist?(@lockfile).should be_true
@lock.lock_pid.should == 6789
@lock.should be_mine
@lock.should be_locked
end
end
describe "#unlock" do
it "should not be allowed" do
@lock.unlock.should be_false
end
it "should not remove the lock file" do
@lock.unlock
Puppet::FileSystem.exist?(@lockfile).should be_true
end
end
end
describe "with another process lock" do
before(:each) do
# fake our pid to be 1234
Process.stubs(:pid).returns(1234)
# lock the file
@lock.lock
# fake our pid to be a different pid, to simulate someone else
# holding the lock
Process.stubs(:pid).returns(6789)
Process.stubs(:kill).with(0, 6789)
Process.stubs(:kill).with(0, 1234)
end
it "should be locked" do
@lock.should be_locked
end
it "should not be mine" do
@lock.should_not be_mine
end
describe "#lock" do
it "should not be possible" do
@lock.lock.should be_false
end
it "should not overwrite the lock" do
@lock.lock
@lock.should_not be_mine
end
end
describe "#unlock" do
it "should not be possible" do
@lock.unlock.should be_false
end
it "should not remove the lock file" do
@lock.unlock
Puppet::FileSystem.exist?(@lockfile).should be_true
end
it "should still not be our lock" do
@lock.unlock
@lock.should_not be_mine
end
end
end
end
diff --git a/spec/unit/util/profiler/aggregate_spec.rb b/spec/unit/util/profiler/aggregate_spec.rb
new file mode 100644
index 000000000..8d38d4673
--- /dev/null
+++ b/spec/unit/util/profiler/aggregate_spec.rb
@@ -0,0 +1,59 @@
+require 'spec_helper'
+require 'puppet/util/profiler'
+require 'puppet/util/profiler/around_profiler'
+require 'puppet/util/profiler/aggregate'
+
+describe Puppet::Util::Profiler::Aggregate do
+ let(:logger) { AggregateSimpleLog.new }
+ let(:profiler) { Puppet::Util::Profiler::Aggregate.new(logger, nil) }
+ let(:profiler_mgr) do
+ p = Puppet::Util::Profiler::AroundProfiler.new
+ p.add_profiler(profiler)
+ p
+ end
+
+ it "tracks the aggregate counts and time for the hierarchy of metrics" do
+ profiler_mgr.profile("Looking up hiera data in production environment", ["function", "hiera_lookup", "production"]) { sleep 0.01 }
+ profiler_mgr.profile("Looking up hiera data in test environment", ["function", "hiera_lookup", "test"]) {}
+ profiler_mgr.profile("looking up stuff for compilation", ["compiler", "lookup"]) { sleep 0.01 }
+ profiler_mgr.profile("COMPILING ALL OF THE THINGS!", ["compiler", "compiling"]) {}
+
+ profiler.values["function"].count.should == 2
+ profiler.values["function"].time.should be > 0
+ profiler.values["function"]["hiera_lookup"].count.should == 2
+ profiler.values["function"]["hiera_lookup"]["production"].count.should == 1
+ profiler.values["function"]["hiera_lookup"]["test"].count.should == 1
+ profiler.values["function"].time.should be >= profiler.values["function"]["hiera_lookup"]["test"].time
+
+ profiler.values["compiler"].count.should == 2
+ profiler.values["compiler"].time.should be > 0
+ profiler.values["compiler"]["lookup"].count.should == 1
+ profiler.values["compiler"]["compiling"].count.should == 1
+ profiler.values["compiler"].time.should be >= profiler.values["compiler"]["lookup"].time
+
+ profiler.shutdown
+
+ logger.output.should =~ /function -> hiera_lookup: .*\(2 calls\)\nfunction -> hiera_lookup ->.*\(1 calls\)/
+ logger.output.should =~ /compiler: .*\(2 calls\)\ncompiler ->.*\(1 calls\)/
+ end
+
+ it "tolerates calls to `profile` that don't include a metric id" do
+ profiler_mgr.profile("yo") {}
+ end
+
+ it "supports both symbols and strings as components of a metric id" do
+ profiler_mgr.profile("yo", [:foo, "bar"]) {}
+ end
+
+ class AggregateSimpleLog
+ attr_reader :output
+
+ def initialize
+ @output = ""
+ end
+
+ def call(msg)
+ @output << msg << "\n"
+ end
+ end
+end
diff --git a/spec/unit/util/profiler/around_profiler_spec.rb b/spec/unit/util/profiler/around_profiler_spec.rb
new file mode 100644
index 000000000..0837395b5
--- /dev/null
+++ b/spec/unit/util/profiler/around_profiler_spec.rb
@@ -0,0 +1,61 @@
+require 'spec_helper'
+require 'puppet/util/profiler'
+
+describe Puppet::Util::Profiler::AroundProfiler do
+ let(:child) { TestAroundProfiler.new() }
+ let(:profiler) { Puppet::Util::Profiler::AroundProfiler.new }
+
+ before :each do
+ profiler.add_profiler(child)
+ end
+
+ it "returns the value of the profiled segment" do
+ retval = profiler.profile("Testing", ["testing"]) { "the return value" }
+
+ retval.should == "the return value"
+ end
+
+ it "propagates any errors raised in the profiled segment" do
+ expect do
+ profiler.profile("Testing", ["testing"]) { raise "a problem" }
+ end.to raise_error("a problem")
+ end
+
+ it "makes the description and the context available to the `start` and `finish` methods" do
+ profiler.profile("Testing", ["testing"]) { }
+
+ child.context.should == "Testing"
+ child.description.should == "Testing"
+ end
+
+ it "calls finish even when an error is raised" do
+ begin
+ profiler.profile("Testing", ["testing"]) { raise "a problem" }
+ rescue
+ child.context.should == "Testing"
+ end
+ end
+
+ it "supports multiple profilers" do
+ profiler2 = TestAroundProfiler.new
+ profiler.add_profiler(profiler2)
+ profiler.profile("Testing", ["testing"]) {}
+
+ child.context.should == "Testing"
+ profiler2.context.should == "Testing"
+ end
+
+ class TestAroundProfiler
+ attr_accessor :context, :description
+
+ def start(description, metric_id)
+ description
+ end
+
+ def finish(context, description, metric_id)
+ @context = context
+ @description = description
+ end
+ end
+end
+
diff --git a/spec/unit/util/profiler/logging_spec.rb b/spec/unit/util/profiler/logging_spec.rb
index 5316e5ae9..3f6a728dd 100644
--- a/spec/unit/util/profiler/logging_spec.rb
+++ b/spec/unit/util/profiler/logging_spec.rb
@@ -1,81 +1,70 @@
require 'spec_helper'
require 'puppet/util/profiler'
describe Puppet::Util::Profiler::Logging do
let(:logger) { SimpleLog.new }
let(:identifier) { "Profiling ID" }
- let(:profiler) { TestLoggingProfiler.new(logger, identifier) }
-
- it "returns the value of the profiled segment" do
- retval = profiler.profile("Testing") { "the return value" }
-
- retval.should == "the return value"
- end
-
- it "propogates any errors raised in the profiled segment" do
- expect do
- profiler.profile("Testing") { raise "a problem" }
- end.to raise_error("a problem")
+ let(:logging_profiler) { TestLoggingProfiler.new(logger, identifier) }
+ let(:profiler) do
+ p = Puppet::Util::Profiler::AroundProfiler.new
+ p.add_profiler(logging_profiler)
+ p
end
it "logs the explanation of the profile results" do
- profiler.profile("Testing") { }
+ profiler.profile("Testing", ["test"]) { }
logger.messages.first.should =~ /the explanation/
end
- it "logs results even when an error is raised" do
- begin
- profiler.profile("Testing") { raise "a problem" }
- rescue
- logger.messages.first.should =~ /the explanation/
- end
- end
-
it "describes the profiled segment" do
- profiler.profile("Tested measurement") { }
+ profiler.profile("Tested measurement", ["test"]) { }
logger.messages.first.should =~ /PROFILE \[#{identifier}\] \d Tested measurement/
end
it "indicates the order in which segments are profiled" do
- profiler.profile("Measurement") { }
- profiler.profile("Another measurement") { }
+ profiler.profile("Measurement", ["measurement"]) { }
+ profiler.profile("Another measurement", ["measurement"]) { }
logger.messages[0].should =~ /1 Measurement/
logger.messages[1].should =~ /2 Another measurement/
end
it "indicates the nesting of profiled segments" do
- profiler.profile("Measurement") { profiler.profile("Nested measurement") { } }
- profiler.profile("Another measurement") { profiler.profile("Another nested measurement") { } }
+ profiler.profile("Measurement", ["measurement1"]) do
+ profiler.profile("Nested measurement", ["measurement2"]) { }
+ end
+ profiler.profile("Another measurement", ["measurement1"]) do
+ profiler.profile("Another nested measurement", ["measurement2"]) { }
+ end
logger.messages[0].should =~ /1.1 Nested measurement/
logger.messages[1].should =~ /1 Measurement/
logger.messages[2].should =~ /2.1 Another nested measurement/
logger.messages[3].should =~ /2 Another measurement/
end
class TestLoggingProfiler < Puppet::Util::Profiler::Logging
- def start
+ def do_start(metric, description)
"the start"
end
- def finish(context)
- "the explanation of #{context}"
+ def do_finish(context, metric, description)
+ {:msg => "the explanation of #{context}"}
end
end
class SimpleLog
attr_reader :messages
def initialize
@messages = []
end
def call(msg)
@messages << msg
end
end
end
diff --git a/spec/unit/util/profiler/none_spec.rb b/spec/unit/util/profiler/none_spec.rb
deleted file mode 100644
index 0cabfef6f..000000000
--- a/spec/unit/util/profiler/none_spec.rb
+++ /dev/null
@@ -1,12 +0,0 @@
-require 'spec_helper'
-require 'puppet/util/profiler'
-
-describe Puppet::Util::Profiler::None do
- let(:profiler) { Puppet::Util::Profiler::None.new }
-
- it "returns the value of the profiled block" do
- retval = profiler.profile("Testing") { "the return value" }
-
- retval.should == "the return value"
- end
-end
diff --git a/spec/unit/util/profiler/wall_clock_spec.rb b/spec/unit/util/profiler/wall_clock_spec.rb
index 668f63221..1adcf0d61 100644
--- a/spec/unit/util/profiler/wall_clock_spec.rb
+++ b/spec/unit/util/profiler/wall_clock_spec.rb
@@ -1,13 +1,13 @@
require 'spec_helper'
require 'puppet/util/profiler'
describe Puppet::Util::Profiler::WallClock do
it "logs the number of seconds it took to execute the segment" do
profiler = Puppet::Util::Profiler::WallClock.new(nil, nil)
- message = profiler.finish(profiler.start)
+ message = profiler.do_finish(profiler.start(["foo", "bar"], "Testing"), ["foo", "bar"], "Testing")[:msg]
message.should =~ /took \d\.\d{4} seconds/
end
end
diff --git a/spec/unit/util/profiler_spec.rb b/spec/unit/util/profiler_spec.rb
new file mode 100644
index 000000000..c7fc48cb9
--- /dev/null
+++ b/spec/unit/util/profiler_spec.rb
@@ -0,0 +1,55 @@
+require 'spec_helper'
+require 'puppet/util/profiler'
+
+describe Puppet::Util::Profiler do
+ let(:profiler) { TestProfiler.new() }
+
+ it "supports adding profilers" do
+ subject.add_profiler(profiler)
+ subject.current[0].should == profiler
+ end
+
+ it "supports removing profilers" do
+ subject.add_profiler(profiler)
+ subject.remove_profiler(profiler)
+ subject.current.length.should == 0
+ end
+
+ it "supports clearing profiler list" do
+ subject.add_profiler(profiler)
+ subject.clear
+ subject.current.length.should == 0
+ end
+
+ it "supports profiling" do
+ subject.add_profiler(profiler)
+ subject.profile("hi", ["mymetric"]) {}
+ profiler.context[:metric_id].should == ["mymetric"]
+ profiler.context[:description].should == "hi"
+ profiler.description.should == "hi"
+ end
+
+ it "supports profiling without a metric id" do
+ subject.add_profiler(profiler)
+ subject.profile("hi") {}
+ profiler.context[:metric_id].should == nil
+ profiler.context[:description].should == "hi"
+ profiler.description.should == "hi"
+ end
+
+ class TestProfiler
+ attr_accessor :context, :metric, :description
+
+ def start(description, metric_id)
+ {:metric_id => metric_id,
+ :description => description}
+ end
+
+ def finish(context, description, metric_id)
+ @context = context
+ @metric_id = metric_id
+ @description = description
+ end
+ end
+end
+
diff --git a/spec/unit/util/queue_spec.rb b/spec/unit/util/queue_spec.rb
index d7ba57f85..48b98e8e3 100755
--- a/spec/unit/util/queue_spec.rb
+++ b/spec/unit/util/queue_spec.rb
@@ -1,95 +1,94 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet/util/queue'
-require 'spec/mocks'
def make_test_client_class(n)
c = Class.new do
class <<self
attr_accessor :name
def to_s
name
end
end
end
c.name = n
c
end
mod = Puppet::Util::Queue
client_classes = { :default => make_test_client_class('Bogus::Default'), :setup => make_test_client_class('Bogus::Setup') }
describe Puppet::Util::Queue do
before :all do
mod.register_queue_type(client_classes[:default], :default)
mod.register_queue_type(client_classes[:setup], :setup)
end
before :each do
@class = Class.new do
extend mod
end
end
after :all do
instances = mod.instance_hash(:queue_clients)
[:default, :setup, :bogus, :aardvark, :conflict, :test_a, :test_b].each{ |x| instances.delete(x) }
end
context 'when determining a type name from a class' do
it 'should handle a simple one-word class name' do
mod.queue_type_from_class(make_test_client_class('Foo')).should == :foo
end
it 'should handle a simple two-word class name' do
mod.queue_type_from_class(make_test_client_class('FooBar')).should == :foo_bar
end
it 'should handle a two-part class name with one terminating word' do
mod.queue_type_from_class(make_test_client_class('Foo::Bar')).should == :bar
end
it 'should handle a two-part class name with two terminating words' do
mod.queue_type_from_class(make_test_client_class('Foo::BarBah')).should == :bar_bah
end
end
context 'when registering a queue client class' do
c = make_test_client_class('Foo::Bogus')
it 'uses the proper default name logic when type is unspecified' do
mod.register_queue_type(c)
mod.queue_type_to_class(:bogus).should == c
end
it 'uses an explicit type name when provided' do
mod.register_queue_type(c, :aardvark)
mod.queue_type_to_class(:aardvark).should == c
end
it 'throws an exception when type names conflict' do
mod.register_queue_type( make_test_client_class('Conflict') )
lambda { mod.register_queue_type( c, :conflict) }.should raise_error
end
it 'handles multiple, non-conflicting registrations' do
a = make_test_client_class('TestA')
b = make_test_client_class('TestB')
mod.register_queue_type(a)
mod.register_queue_type(b)
mod.queue_type_to_class(:test_a).should == a
mod.queue_type_to_class(:test_b).should == b
end
it 'throws an exception when type name is unknown' do
lambda { mod.queue_type_to_class(:nope) }.should raise_error
end
end
context 'when determining client type' do
it 'returns client class based on the :queue_type setting' do
Puppet[:queue_type] = :myqueue
Puppet::Util::Queue.expects(:queue_type_to_class).with(:myqueue).returns "eh"
@class.client_class.should == "eh"
end
end
end
diff --git a/spec/unit/util/rdoc/parser_spec.rb b/spec/unit/util/rdoc/parser_spec.rb
index acc606b76..7ed2cffcc 100755
--- a/spec/unit/util/rdoc/parser_spec.rb
+++ b/spec/unit/util/rdoc/parser_spec.rb
@@ -1,600 +1,608 @@
#! /usr/bin/env ruby
require 'spec_helper'
describe "RDoc::Parser", :if => Puppet.features.rdoc1? do
before :all do
require 'puppet/resource/type_collection'
require 'puppet/util/rdoc/parser'
require 'puppet/util/rdoc/code_objects'
require 'rdoc/options'
require 'rdoc/rdoc'
end
include PuppetSpec::Files
before :each do
stub_file = stub('init.pp', :stat => stub())
# Ruby 1.8.7 needs the following call to be stubs and not expects
Puppet::FileSystem.stubs(:stat).with('init.pp').returns stub() # stub_file
@top_level = stub_everything 'toplevel', :file_relative_name => "init.pp"
@parser = RDoc::Parser.new(@top_level, "module/manifests/init.pp", nil, Options.instance, RDoc::Stats.new)
end
describe "when scanning files" do
+ around(:each) do |example|
+ Puppet.override({
+ :current_environment => Puppet::Node::Environment.create(:doc, [], '/somewhere/etc/manifests/site.pp')
+ },
+ "A fake current environment that the application would have established by now"
+ ) do
+
+ example.run
+ end
+ end
+
it "should parse puppet files with the puppet parser" do
@parser.stubs(:scan_top_level)
parser = stub 'parser'
Puppet::Parser::Parser.stubs(:new).returns(parser)
parser.expects(:parse).returns(Puppet::Parser::AST::Hostclass.new('')).at_least_once
parser.expects(:file=).with("module/manifests/init.pp")
parser.expects(:file=).with do |args|
args =~ /.*\/etc\/manifests\/site.pp/
end
@parser.scan
end
it "should scan the ast for Puppet files" do
parser = stub_everything 'parser'
Puppet::Parser::Parser.stubs(:new).returns(parser)
parser.expects(:parse).returns(Puppet::Parser::AST::Hostclass.new('')).at_least_once
@parser.expects(:scan_top_level)
@parser.scan
end
it "should return a PuppetTopLevel to RDoc" do
parser = stub_everything 'parser'
Puppet::Parser::Parser.stubs(:new).returns(parser)
parser.expects(:parse).returns(Puppet::Parser::AST::Hostclass.new('')).at_least_once
@parser.expects(:scan_top_level)
@parser.scan.should be_a(RDoc::PuppetTopLevel)
end
it "should scan the top level even if the file has already parsed" do
known_type = stub 'known_types'
- env = Puppet::Node::Environment.create(Puppet[:environment].to_sym, [])
+ env = Puppet.lookup(:current_environment)
env.stubs(:known_resource_types).returns(known_type)
known_type.expects(:watching_file?).with("module/manifests/init.pp").returns(true)
- Puppet.override(:environments => Puppet::Environments::Static.new(env)) do
-
- @parser.expects(:scan_top_level)
+ @parser.expects(:scan_top_level)
- @parser.scan
- end
+ @parser.scan
end
end
describe "when scanning top level entities" do
let(:environment) { Puppet::Node::Environment.create(:env, []) }
before :each do
@resource_type_collection = resource_type_collection = stub_everything('resource_type_collection')
environment.stubs(:known_resource_types).returns(@resource_type_collection)
@parser.stubs(:split_module).returns("module")
@topcontainer = stub_everything 'topcontainer'
@container = stub_everything 'container'
@module = stub_everything 'module'
@container.stubs(:add_module).returns(@module)
@parser.stubs(:get_class_or_module).returns([@container, "module"])
end
it "should read any present README as module documentation" do
FileTest.stubs(:readable?).with("module/README").returns(true)
FileTest.stubs(:readable?).with("module/README.rdoc").returns(false)
File.stubs(:open).returns("readme")
@parser.stubs(:parse_elements)
@module.expects(:add_comment).with("readme", "module/manifests/init.pp")
@parser.scan_top_level(@topcontainer, environment)
end
it "should read any present README.rdoc as module documentation" do
FileTest.stubs(:readable?).with("module/README.rdoc").returns(true)
FileTest.stubs(:readable?).with("module/README").returns(false)
File.stubs(:open).returns("readme")
@parser.stubs(:parse_elements)
@module.expects(:add_comment).with("readme", "module/manifests/init.pp")
@parser.scan_top_level(@topcontainer, environment)
end
it "should prefer README.rdoc over README as module documentation" do
FileTest.stubs(:readable?).with("module/README.rdoc").returns(true)
FileTest.stubs(:readable?).with("module/README").returns(true)
File.stubs(:open).with("module/README", "r").returns("readme")
File.stubs(:open).with("module/README.rdoc", "r").returns("readme.rdoc")
@parser.stubs(:parse_elements)
@module.expects(:add_comment).with("readme.rdoc", "module/manifests/init.pp")
@parser.scan_top_level(@topcontainer, environment)
end
it "should tell the container its module name" do
@parser.stubs(:parse_elements)
@topcontainer.expects(:module_name=).with("module")
@parser.scan_top_level(@topcontainer, environment)
end
it "should not document our toplevel if it isn't a valid module" do
@parser.stubs(:split_module).returns(nil)
@topcontainer.expects(:document_self=).with(false)
@parser.expects(:parse_elements).never
@parser.scan_top_level(@topcontainer, environment)
end
it "should set the module as global if we parse the global manifests (ie __site__ module)" do
@parser.stubs(:split_module).returns(RDoc::Parser::SITE)
@parser.stubs(:parse_elements)
@topcontainer.expects(:global=).with(true)
@parser.scan_top_level(@topcontainer, environment)
end
it "should attach this module container to the toplevel container" do
@parser.stubs(:parse_elements)
@container.expects(:add_module).with(RDoc::PuppetModule, "module").returns(@module)
@parser.scan_top_level(@topcontainer, environment)
end
it "should defer ast parsing to parse_elements for this module" do
@parser.expects(:parse_elements).with(@module, @resource_type_collection)
@parser.scan_top_level(@topcontainer, environment)
end
it "should defer plugins parsing to parse_plugins for this module" do
@parser.input_file_name = "module/lib/puppet/parser/function.rb"
@parser.expects(:parse_plugins).with(@module)
@parser.scan_top_level(@topcontainer, environment)
end
end
describe "when finding modules from filepath" do
let(:environment) {
Puppet::FileSystem.expects(:directory?).with("/path/to/modules").at_least_once.returns(true)
Puppet::Node::Environment.create(:env, ["/path/to/modules"])
}
it "should return the module name for modulized puppet manifests" do
File.stubs(:identical?).with("/path/to/modules", "/path/to/modules").returns(true)
@parser.split_module("/path/to/modules/mymodule/manifests/init.pp", environment).should == "mymodule"
end
it "should return <site> for manifests not under module path" do
File.stubs(:identical?).returns(false)
@parser.split_module("/path/to/manifests/init.pp", environment).should == RDoc::Parser::SITE
end
it "should handle windows paths with drive letters", :if => Puppet.features.microsoft_windows? && Puppet.features.rdoc1? do
@parser.split_module("C:/temp/init.pp", environment).should == RDoc::Parser::SITE
end
end
describe "when parsing AST elements" do
before :each do
@klass = stub_everything 'klass', :file => "module/manifests/init.pp", :name => "myclass", :type => :hostclass
@definition = stub_everything 'definition', :file => "module/manifests/init.pp", :type => :definition, :name => "mydef"
@node = stub_everything 'node', :file => "module/manifests/init.pp", :type => :node, :name => "mynode"
@resource_type_collection = resource_type_collection = Puppet::Resource::TypeCollection.new("env")
@parser.instance_eval { @known_resource_types = resource_type_collection }
@container = stub_everything 'container'
end
it "should document classes in the parsed file" do
@resource_type_collection.add_hostclass(@klass)
@parser.expects(:document_class).with("myclass", @klass, @container)
@parser.parse_elements(@container, @resource_type_collection)
end
it "should not document class parsed in an other file" do
@klass.stubs(:file).returns("/not/same/path/file.pp")
@resource_type_collection.add_hostclass(@klass)
@parser.expects(:document_class).with("myclass", @klass, @container).never
@parser.parse_elements(@container, @resource_type_collection)
end
it "should document vardefs for the main class" do
@klass.stubs(:name).returns :main
@resource_type_collection.add_hostclass(@klass)
code = stub 'code', :is_a? => false
@klass.stubs(:name).returns("")
@klass.stubs(:code).returns(code)
@parser.expects(:scan_for_vardef).with(@container, code)
@parser.parse_elements(@container, @resource_type_collection)
end
it "should document definitions in the parsed file" do
@resource_type_collection.add_definition(@definition)
@parser.expects(:document_define).with("mydef", @definition, @container)
@parser.parse_elements(@container, @resource_type_collection)
end
it "should not document definitions parsed in an other file" do
@definition.stubs(:file).returns("/not/same/path/file.pp")
@resource_type_collection.add_definition(@definition)
@parser.expects(:document_define).with("mydef", @definition, @container).never
@parser.parse_elements(@container, @resource_type_collection)
end
it "should document nodes in the parsed file" do
@resource_type_collection.add_node(@node)
@parser.expects(:document_node).with("mynode", @node, @container)
@parser.parse_elements(@container, @resource_type_collection)
end
it "should not document node parsed in an other file" do
@node.stubs(:file).returns("/not/same/path/file.pp")
@resource_type_collection.add_node(@node)
@parser.expects(:document_node).with("mynode", @node, @container).never
@parser.parse_elements(@container, @resource_type_collection)
end
end
describe "when documenting definition" do
before(:each) do
@define = stub_everything 'define', :arguments => [], :doc => "mydoc", :file => "file", :line => 42
@class = stub_everything 'class'
@parser.stubs(:get_class_or_module).returns([@class, "mydef"])
end
it "should register a RDoc method to the current container" do
@class.expects(:add_method).with { |m| m.name == "mydef"}
@parser.document_define("mydef", @define, @class)
end
it "should attach the documentation to this method" do
@class.expects(:add_method).with { |m| m.comment = "mydoc" }
@parser.document_define("mydef", @define, @class)
end
it "should produce a better error message on unhandled exception" do
@class.expects(:add_method).raises(ArgumentError)
lambda { @parser.document_define("mydef", @define, @class) }.should raise_error(Puppet::ParseError, /in file at line 42/)
end
it "should convert all definition parameter to string" do
arg = stub 'arg'
val = stub 'val'
@define.stubs(:arguments).returns({arg => val})
arg.expects(:to_s).returns("arg")
val.expects(:to_s).returns("val")
@parser.document_define("mydef", @define, @class)
end
end
describe "when documenting nodes" do
before :each do
@code = stub_everything 'code'
@node = stub_everything 'node', :doc => "mydoc", :parent => "parent", :code => @code, :file => "file", :line => 42
@rdoc_node = stub_everything 'rdocnode'
@class = stub_everything 'class'
@class.stubs(:add_node).returns(@rdoc_node)
end
it "should add a node to the current container" do
@class.expects(:add_node).with("mynode", "parent").returns(@rdoc_node)
@parser.document_node("mynode", @node, @class)
end
it "should associate the node documentation to the rdoc node" do
@rdoc_node.expects(:add_comment).with("mydoc", "file")
@parser.document_node("mynode", @node, @class)
end
it "should scan for include and require" do
@parser.expects(:scan_for_include_or_require).with(@rdoc_node, @code)
@parser.document_node("mynode", @node, @class)
end
it "should scan for variable definition" do
@parser.expects(:scan_for_vardef).with(@rdoc_node, @code)
@parser.document_node("mynode", @node, @class)
end
it "should scan for resources if needed" do
Puppet[:document_all] = true
@parser.expects(:scan_for_resource).with(@rdoc_node, @code)
@parser.document_node("mynode", @node, @class)
end
it "should produce a better error message on unhandled exception" do
@class.stubs(:add_node).raises(ArgumentError)
lambda { @parser.document_node("mynode", @node, @class) }.should raise_error(Puppet::ParseError, /in file at line 42/)
end
end
describe "when documenting classes" do
before :each do
@code = stub_everything 'code'
@class = stub_everything 'class', :doc => "mydoc", :parent => "parent", :code => @code, :file => "file", :line => 42
@rdoc_class = stub_everything 'rdoc-class'
@module = stub_everything 'class'
@module.stubs(:add_class).returns(@rdoc_class)
@parser.stubs(:get_class_or_module).returns([@module, "myclass"])
end
it "should add a class to the current container" do
@module.expects(:add_class).with(RDoc::PuppetClass, "myclass", "parent").returns(@rdoc_class)
@parser.document_class("mynode", @class, @module)
end
it "should set the superclass" do
@rdoc_class.expects(:superclass=).with("parent")
@parser.document_class("mynode", @class, @module)
end
it "should associate the node documentation to the rdoc class" do
@rdoc_class.expects(:add_comment).with("mydoc", "file")
@parser.document_class("mynode", @class, @module)
end
it "should scan for include and require" do
@parser.expects(:scan_for_include_or_require).with(@rdoc_class, @code)
@parser.document_class("mynode", @class, @module)
end
it "should scan for resources if needed" do
Puppet[:document_all] = true
@parser.expects(:scan_for_resource).with(@rdoc_class, @code)
@parser.document_class("mynode", @class, @module)
end
it "should produce a better error message on unhandled exception" do
@module.stubs(:add_class).raises(ArgumentError)
lambda { @parser.document_class("mynode", @class, @module) }.should raise_error(Puppet::ParseError, /in file at line 42/)
end
end
describe "when scanning for includes and requires" do
def create_stmt(name)
stmt_value = stub "#{name}_value", :to_s => "myclass"
Puppet::Parser::AST::Function.new(
:name => name,
:arguments => [stmt_value],
:doc => 'mydoc'
)
end
before(:each) do
@class = stub_everything 'class'
@code = stub_everything 'code'
@code.stubs(:is_a?).with(Puppet::Parser::AST::BlockExpression).returns(true)
end
it "should also scan mono-instruction code" do
@class.expects(:add_include).with { |i| i.is_a?(RDoc::Include) and i.name == "myclass" and i.comment == "mydoc" }
@parser.scan_for_include_or_require(@class, create_stmt("include"))
end
it "should register recursively includes to the current container" do
@code.stubs(:children).returns([ create_stmt("include") ])
@class.expects(:add_include)#.with { |i| i.is_a?(RDoc::Include) and i.name == "myclass" and i.comment == "mydoc" }
@parser.scan_for_include_or_require(@class, [@code])
end
it "should register requires to the current container" do
@code.stubs(:children).returns([ create_stmt("require") ])
@class.expects(:add_require).with { |i| i.is_a?(RDoc::Include) and i.name == "myclass" and i.comment == "mydoc" }
@parser.scan_for_include_or_require(@class, [@code])
end
end
describe "when scanning for realized virtual resources" do
def create_stmt
stmt_value = stub "resource_ref", :to_s => "File[\"/tmp/a\"]"
Puppet::Parser::AST::Function.new(
:name => 'realize',
:arguments => [stmt_value],
:doc => 'mydoc'
)
end
before(:each) do
@class = stub_everything 'class'
@code = stub_everything 'code'
@code.stubs(:is_a?).with(Puppet::Parser::AST::BlockExpression).returns(true)
end
it "should also scan mono-instruction code" do
@class.expects(:add_realize).with { |i| i.is_a?(RDoc::Include) and i.name == "File[\"/tmp/a\"]" and i.comment == "mydoc" }
@parser.scan_for_realize(@class,create_stmt)
end
it "should register recursively includes to the current container" do
@code.stubs(:children).returns([ create_stmt ])
@class.expects(:add_realize).with { |i| i.is_a?(RDoc::Include) and i.name == "File[\"/tmp/a\"]" and i.comment == "mydoc" }
@parser.scan_for_realize(@class, [@code])
end
end
describe "when scanning for variable definition" do
before :each do
@class = stub_everything 'class'
@stmt = stub_everything 'stmt', :name => "myvar", :value => "myvalue", :doc => "mydoc"
@stmt.stubs(:is_a?).with(Puppet::Parser::AST::BlockExpression).returns(false)
@stmt.stubs(:is_a?).with(Puppet::Parser::AST::VarDef).returns(true)
@code = stub_everything 'code'
@code.stubs(:is_a?).with(Puppet::Parser::AST::BlockExpression).returns(true)
end
it "should recursively register variables to the current container" do
@code.stubs(:children).returns([ @stmt ])
@class.expects(:add_constant).with { |i| i.is_a?(RDoc::Constant) and i.name == "myvar" and i.comment == "mydoc" }
@parser.scan_for_vardef(@class, [ @code ])
end
it "should also scan mono-instruction code" do
@class.expects(:add_constant).with { |i| i.is_a?(RDoc::Constant) and i.name == "myvar" and i.comment == "mydoc" }
@parser.scan_for_vardef(@class, @stmt)
end
end
describe "when scanning for resources" do
before :each do
@class = stub_everything 'class'
@stmt = Puppet::Parser::AST::Resource.new(
:type => "File",
:instances => Puppet::Parser::AST::BlockExpression.new(:children => [
Puppet::Parser::AST::ResourceInstance.new(
:title => Puppet::Parser::AST::Name.new(:value => "myfile"),
:parameters => Puppet::Parser::AST::BlockExpression.new(:children => [])
)
]),
:doc => 'mydoc'
)
@code = stub_everything 'code'
@code.stubs(:is_a?).with(Puppet::Parser::AST::BlockExpression).returns(true)
end
it "should register a PuppetResource to the current container" do
@code.stubs(:children).returns([ @stmt ])
@class.expects(:add_resource).with { |i| i.is_a?(RDoc::PuppetResource) and i.title == "myfile" and i.comment == "mydoc" }
@parser.scan_for_resource(@class, [ @code ])
end
it "should also scan mono-instruction code" do
@class.expects(:add_resource).with { |i| i.is_a?(RDoc::PuppetResource) and i.title == "myfile" and i.comment == "mydoc" }
@parser.scan_for_resource(@class, @stmt)
end
end
describe "when parsing plugins" do
before :each do
@container = stub 'container'
end
it "should delegate parsing custom facts to parse_facts" do
@parser = RDoc::Parser.new(@top_level, "module/manifests/lib/puppet/facter/test.rb", nil, Options.instance, RDoc::Stats.new)
@parser.expects(:parse_fact).with(@container)
@parser.parse_plugins(@container)
end
it "should delegate parsing plugins to parse_plugins" do
@parser = RDoc::Parser.new(@top_level, "module/manifests/lib/puppet/functions/test.rb", nil, Options.instance, RDoc::Stats.new)
@parser.expects(:parse_puppet_plugin).with(@container)
@parser.parse_plugins(@container)
end
end
describe "when parsing plugins" do
before :each do
@container = stub_everything 'container'
end
it "should add custom functions to the container" do
File.stubs(:open).yields("# documentation
module Puppet::Parser::Functions
newfunction(:myfunc, :type => :rvalue) do |args|
File.dirname(args[0])
end
end".split("\n"))
@container.expects(:add_plugin).with do |plugin|
plugin.comment == "documentation\n" #and
plugin.name == "myfunc"
end
@parser.parse_puppet_plugin(@container)
end
it "should add custom types to the container" do
File.stubs(:open).yields("# documentation
Puppet::Type.newtype(:mytype) do
end".split("\n"))
@container.expects(:add_plugin).with do |plugin|
plugin.comment == "documentation\n" #and
plugin.name == "mytype"
end
@parser.parse_puppet_plugin(@container)
end
end
describe "when parsing facts" do
before :each do
@container = stub_everything 'container'
File.stubs(:open).yields(["# documentation", "Facter.add('myfact') do", "confine :kernel => :linux", "end"])
end
it "should add facts to the container" do
@container.expects(:add_fact).with do |fact|
fact.comment == "documentation\n" and
fact.name == "myfact"
end
@parser.parse_fact(@container)
end
it "should add confine to the parsed facts" do
ourfact = nil
@container.expects(:add_fact).with do |fact|
ourfact = fact
true
end
@parser.parse_fact(@container)
ourfact.confine.should == { :type => "kernel", :value => ":linux" }
end
end
end
diff --git a/spec/unit/util/tagging_spec.rb b/spec/unit/util/tagging_spec.rb
index c2ebeaab3..53bb39d7a 100755
--- a/spec/unit/util/tagging_spec.rb
+++ b/spec/unit/util/tagging_spec.rb
@@ -1,162 +1,162 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet/util/tagging'
describe Puppet::Util::Tagging do
let(:tagger) { Object.new.extend(Puppet::Util::Tagging) }
it "should add tags to the returned tag list" do
tagger.tag("one")
expect(tagger.tags).to include("one")
end
it "should return a duplicate of the tag list, rather than the original" do
tagger.tag("one")
tags = tagger.tags
tags << "two"
expect(tagger.tags).to_not include("two")
end
it "should add all provided tags to the tag list" do
tagger.tag("one", "two")
expect(tagger.tags).to include("one")
expect(tagger.tags).to include("two")
end
it "should fail on tags containing '*' characters" do
expect { tagger.tag("bad*tag") }.to raise_error(Puppet::ParseError)
end
it "should fail on tags starting with '-' characters" do
expect { tagger.tag("-badtag") }.to raise_error(Puppet::ParseError)
end
it "should fail on tags containing ' ' characters" do
expect { tagger.tag("bad tag") }.to raise_error(Puppet::ParseError)
end
it "should allow alpha tags" do
expect { tagger.tag("good_tag") }.not_to raise_error
end
it "should allow tags containing '.' characters" do
- expect { tagger.tag("good.tag") }.to_not raise_error(Puppet::ParseError)
+ expect { tagger.tag("good.tag") }.to_not raise_error
end
it "should add qualified classes as tags" do
tagger.tag("one::two")
expect(tagger.tags).to include("one::two")
end
it "should add each part of qualified classes as tags" do
tagger.tag("one::two::three")
expect(tagger.tags).to include('one')
expect(tagger.tags).to include("two")
expect(tagger.tags).to include("three")
end
it "should indicate when the object is tagged with a provided tag" do
tagger.tag("one")
expect(tagger).to be_tagged("one")
end
it "should indicate when the object is not tagged with a provided tag" do
expect(tagger).to_not be_tagged("one")
end
it "should indicate when the object is tagged with any tag in an array" do
tagger.tag("one")
expect(tagger).to be_tagged("one","two","three")
end
it "should indicate when the object is not tagged with any tag in an array" do
tagger.tag("one")
expect(tagger).to_not be_tagged("two","three")
end
context "when tagging" do
it "converts symbols to strings" do
tagger.tag(:hello)
expect(tagger.tags).to include('hello')
end
it "downcases tags" do
tagger.tag(:HEllO)
tagger.tag("GooDByE")
expect(tagger).to be_tagged("hello")
expect(tagger).to be_tagged("goodbye")
end
it "accepts hyphenated tags" do
tagger.tag("my-tag")
expect(tagger).to be_tagged("my-tag")
end
end
context "when querying if tagged" do
it "responds true if queried on the entire set" do
tagger.tag("one", "two")
expect(tagger).to be_tagged("one", "two")
end
it "responds true if queried on a subset" do
tagger.tag("one", "two", "three")
expect(tagger).to be_tagged("two", "one")
end
it "responds true if queried on an overlapping but not fully contained set" do
tagger.tag("one", "two")
expect(tagger).to be_tagged("zero", "one")
end
it "responds false if queried on a disjoint set" do
tagger.tag("one", "two", "three")
expect(tagger).to_not be_tagged("five")
end
it "responds false if queried on the empty set" do
expect(tagger).to_not be_tagged
end
end
context "when assigning tags" do
it "splits a string on ','" do
tagger.tags = "one, two, three"
expect(tagger).to be_tagged("one")
expect(tagger).to be_tagged("two")
expect(tagger).to be_tagged("three")
end
it "protects against empty tags" do
expect { tagger.tags = "one,,two"}.to raise_error(/Invalid tag ''/)
end
it "takes an array of tags" do
tagger.tags = ["one", "two"]
expect(tagger).to be_tagged("one")
expect(tagger).to be_tagged("two")
end
it "removes any existing tags when reassigning" do
tagger.tags = "one, two"
tagger.tags = "three, four"
expect(tagger).to_not be_tagged("one")
expect(tagger).to_not be_tagged("two")
expect(tagger).to be_tagged("three")
expect(tagger).to be_tagged("four")
end
it "allows empty tags that are generated from :: separated tags" do
tagger.tags = "one::::two::three"
expect(tagger).to be_tagged("one")
expect(tagger).to be_tagged("")
expect(tagger).to be_tagged("two")
expect(tagger).to be_tagged("three")
end
end
end
diff --git a/spec/unit/util/windows/access_control_entry_spec.rb b/spec/unit/util/windows/access_control_entry_spec.rb
index b139b0d42..8d3f51c8a 100644
--- a/spec/unit/util/windows/access_control_entry_spec.rb
+++ b/spec/unit/util/windows/access_control_entry_spec.rb
@@ -1,67 +1,67 @@
#!/usr/bin/env ruby
require 'spec_helper'
require 'puppet/util/windows'
describe "Puppet::Util::Windows::AccessControlEntry", :if => Puppet.features.microsoft_windows? do
let(:klass) { Puppet::Util::Windows::AccessControlEntry }
let(:sid) { 'S-1-5-18' }
- let(:mask) { Windows::File::FILE_ALL_ACCESS }
+ let(:mask) { Puppet::Util::Windows::File::FILE_ALL_ACCESS }
it "creates an access allowed ace" do
ace = klass.new(sid, mask)
ace.type.should == klass::ACCESS_ALLOWED_ACE_TYPE
end
it "creates an access denied ace" do
ace = klass.new(sid, mask, 0, klass::ACCESS_DENIED_ACE_TYPE)
ace.type.should == klass::ACCESS_DENIED_ACE_TYPE
end
it "creates a non-inherited ace by default" do
ace = klass.new(sid, mask)
ace.should_not be_inherited
end
it "creates an inherited ace" do
ace = klass.new(sid, mask, klass::INHERITED_ACE)
ace.should be_inherited
end
it "creates a non-inherit-only ace by default" do
ace = klass.new(sid, mask)
ace.should_not be_inherit_only
end
it "creates an inherit-only ace" do
ace = klass.new(sid, mask, klass::INHERIT_ONLY_ACE)
ace.should be_inherit_only
end
context "when comparing aces" do
let(:ace1) { klass.new(sid, mask, klass::INHERIT_ONLY_ACE, klass::ACCESS_DENIED_ACE_TYPE) }
let(:ace2) { klass.new(sid, mask, klass::INHERIT_ONLY_ACE, klass::ACCESS_DENIED_ACE_TYPE) }
it "returns true if different objects have the same set of values" do
ace1.should == ace2
end
it "returns false if different objects have different sets of values" do
ace = klass.new(sid, mask)
ace.should_not == ace1
end
it "returns true when testing if two objects are eql?" do
ace1.eql?(ace2)
end
it "returns false when comparing object identity" do
ace1.should_not be_equal(ace2)
end
end
end
diff --git a/spec/unit/util/adsi_spec.rb b/spec/unit/util/windows/adsi_spec.rb
similarity index 52%
rename from spec/unit/util/adsi_spec.rb
rename to spec/unit/util/windows/adsi_spec.rb
index 491c4374b..f569d91e9 100755
--- a/spec/unit/util/adsi_spec.rb
+++ b/spec/unit/util/windows/adsi_spec.rb
@@ -1,437 +1,440 @@
#!/usr/bin/env ruby
require 'spec_helper'
-require 'puppet/util/adsi'
+require 'puppet/util/windows'
-describe Puppet::Util::ADSI do
+describe Puppet::Util::Windows::ADSI, :if => Puppet.features.microsoft_windows? do
let(:connection) { stub 'connection' }
before(:each) do
- Puppet::Util::ADSI.instance_variable_set(:@computer_name, 'testcomputername')
- Puppet::Util::ADSI.stubs(:connect).returns connection
+ Puppet::Util::Windows::ADSI.instance_variable_set(:@computer_name, 'testcomputername')
+ Puppet::Util::Windows::ADSI.stubs(:connect).returns connection
end
after(:each) do
- Puppet::Util::ADSI.instance_variable_set(:@computer_name, nil)
+ Puppet::Util::Windows::ADSI.instance_variable_set(:@computer_name, nil)
end
it "should generate the correct URI for a resource" do
- Puppet::Util::ADSI.uri('test', 'user').should == "WinNT://./test,user"
+ Puppet::Util::Windows::ADSI.uri('test', 'user').should == "WinNT://./test,user"
end
it "should be able to get the name of the computer" do
- Puppet::Util::ADSI.computer_name.should == 'testcomputername'
+ Puppet::Util::Windows::ADSI.computer_name.should == 'testcomputername'
end
it "should be able to provide the correct WinNT base URI for the computer" do
- Puppet::Util::ADSI.computer_uri.should == "WinNT://."
+ Puppet::Util::Windows::ADSI.computer_uri.should == "WinNT://."
end
it "should generate a fully qualified WinNT URI" do
- Puppet::Util::ADSI.computer_uri('testcomputername').should == "WinNT://testcomputername"
+ Puppet::Util::Windows::ADSI.computer_uri('testcomputername').should == "WinNT://testcomputername"
end
- describe ".sid_for_account", :if => Puppet.features.microsoft_windows? do
+ describe ".sid_for_account" do
it "should return nil if the account does not exist" do
- Puppet::Util::Windows::Security.expects(:name_to_sid).with('foobar').returns nil
+ Puppet::Util::Windows::SID.expects(:name_to_sid).with('foobar').returns nil
- Puppet::Util::ADSI.sid_for_account('foobar').should be_nil
+ Puppet::Util::Windows::ADSI.sid_for_account('foobar').should be_nil
end
it "should return a SID for a passed user or group name" do
- Puppet::Util::Windows::Security.expects(:name_to_sid).with('testers').returns 'S-1-5-32-547'
+ Puppet::Util::Windows::SID.expects(:name_to_sid).with('testers').returns 'S-1-5-32-547'
- Puppet::Util::ADSI.sid_for_account('testers').should == 'S-1-5-32-547'
+ Puppet::Util::Windows::ADSI.sid_for_account('testers').should == 'S-1-5-32-547'
end
it "should return a SID for a passed fully-qualified user or group name" do
- Puppet::Util::Windows::Security.expects(:name_to_sid).with('MACHINE\testers').returns 'S-1-5-32-547'
+ Puppet::Util::Windows::SID.expects(:name_to_sid).with('MACHINE\testers').returns 'S-1-5-32-547'
- Puppet::Util::ADSI.sid_for_account('MACHINE\testers').should == 'S-1-5-32-547'
+ Puppet::Util::Windows::ADSI.sid_for_account('MACHINE\testers').should == 'S-1-5-32-547'
end
end
- describe ".sid_uri", :if => Puppet.features.microsoft_windows? do
+ describe ".computer_name" do
+ it "should return a non-empty ComputerName string" do
+ Puppet::Util::Windows::ADSI.instance_variable_set(:@computer_name, nil)
+ Puppet::Util::Windows::ADSI.computer_name.should_not be_empty
+ end
+ end
+
+ describe ".sid_uri" do
it "should raise an error when the input is not a SID object" do
[Object.new, {}, 1, :symbol, '', nil].each do |input|
expect {
- Puppet::Util::ADSI.sid_uri(input)
+ Puppet::Util::Windows::ADSI.sid_uri(input)
}.to raise_error(Puppet::Error, /Must use a valid SID object/)
end
end
it "should return a SID uri for a well-known SID (SYSTEM)" do
sid = Win32::Security::SID.new('SYSTEM')
- Puppet::Util::ADSI.sid_uri(sid).should == 'WinNT://S-1-5-18'
+ Puppet::Util::Windows::ADSI.sid_uri(sid).should == 'WinNT://S-1-5-18'
end
end
- describe Puppet::Util::ADSI::User do
+ describe Puppet::Util::Windows::ADSI::User do
let(:username) { 'testuser' }
let(:domain) { 'DOMAIN' }
let(:domain_username) { "#{domain}\\#{username}"}
it "should generate the correct URI" do
- Puppet::Util::ADSI.stubs(:sid_uri_safe).returns(nil)
- Puppet::Util::ADSI::User.uri(username).should == "WinNT://./#{username},user"
+ Puppet::Util::Windows::ADSI.stubs(:sid_uri_safe).returns(nil)
+ Puppet::Util::Windows::ADSI::User.uri(username).should == "WinNT://./#{username},user"
end
it "should generate the correct URI for a user with a domain" do
- Puppet::Util::ADSI.stubs(:sid_uri_safe).returns(nil)
- Puppet::Util::ADSI::User.uri(username, domain).should == "WinNT://#{domain}/#{username},user"
+ Puppet::Util::Windows::ADSI.stubs(:sid_uri_safe).returns(nil)
+ Puppet::Util::Windows::ADSI::User.uri(username, domain).should == "WinNT://#{domain}/#{username},user"
end
it "should be able to parse a username without a domain" do
- Puppet::Util::ADSI::User.parse_name(username).should == [username, '.']
+ Puppet::Util::Windows::ADSI::User.parse_name(username).should == [username, '.']
end
it "should be able to parse a username with a domain" do
- Puppet::Util::ADSI::User.parse_name(domain_username).should == [username, domain]
+ Puppet::Util::Windows::ADSI::User.parse_name(domain_username).should == [username, domain]
end
it "should raise an error with a username that contains a /" do
expect {
- Puppet::Util::ADSI::User.parse_name("#{domain}/#{username}")
+ Puppet::Util::Windows::ADSI::User.parse_name("#{domain}/#{username}")
}.to raise_error(Puppet::Error, /Value must be in DOMAIN\\user style syntax/)
end
it "should be able to create a user" do
adsi_user = stub('adsi')
connection.expects(:Create).with('user', username).returns(adsi_user)
- Puppet::Util::ADSI::Group.expects(:exists?).with(username).returns(false)
+ Puppet::Util::Windows::ADSI::Group.expects(:exists?).with(username).returns(false)
- user = Puppet::Util::ADSI::User.create(username)
+ user = Puppet::Util::Windows::ADSI::User.create(username)
- user.should be_a(Puppet::Util::ADSI::User)
+ user.should be_a(Puppet::Util::Windows::ADSI::User)
user.native_user.should == adsi_user
end
it "should be able to check the existence of a user" do
- Puppet::Util::ADSI.stubs(:sid_uri_safe).returns(nil)
- Puppet::Util::ADSI.expects(:connect).with("WinNT://./#{username},user").returns connection
- Puppet::Util::ADSI::User.exists?(username).should be_true
+ Puppet::Util::Windows::ADSI.stubs(:sid_uri_safe).returns(nil)
+ Puppet::Util::Windows::ADSI.expects(:connect).with("WinNT://./#{username},user").returns connection
+ Puppet::Util::Windows::ADSI::User.exists?(username).should be_true
end
it "should be able to check the existence of a domain user" do
- Puppet::Util::ADSI.stubs(:sid_uri_safe).returns(nil)
- Puppet::Util::ADSI.expects(:connect).with("WinNT://#{domain}/#{username},user").returns connection
- Puppet::Util::ADSI::User.exists?(domain_username).should be_true
+ Puppet::Util::Windows::ADSI.stubs(:sid_uri_safe).returns(nil)
+ Puppet::Util::Windows::ADSI.expects(:connect).with("WinNT://#{domain}/#{username},user").returns connection
+ Puppet::Util::Windows::ADSI::User.exists?(domain_username).should be_true
end
- it "should be able to confirm the existence of a user with a well-known SID",
- :if => Puppet.features.microsoft_windows? do
+ it "should be able to confirm the existence of a user with a well-known SID" do
system_user = Win32::Security::SID::LocalSystem
# ensure that the underlying OS is queried here
- Puppet::Util::ADSI.unstub(:connect)
- Puppet::Util::ADSI::User.exists?(system_user).should be_true
+ Puppet::Util::Windows::ADSI.unstub(:connect)
+ Puppet::Util::Windows::ADSI::User.exists?(system_user).should be_true
end
- it "should return nil with an unknown SID",
- :if => Puppet.features.microsoft_windows? do
+ it "should return nil with an unknown SID" do
bogus_sid = 'S-1-2-3-4'
# ensure that the underlying OS is queried here
- Puppet::Util::ADSI.unstub(:connect)
- Puppet::Util::ADSI::User.exists?(bogus_sid).should be_false
+ Puppet::Util::Windows::ADSI.unstub(:connect)
+ Puppet::Util::Windows::ADSI::User.exists?(bogus_sid).should be_false
end
it "should be able to delete a user" do
connection.expects(:Delete).with('user', username)
- Puppet::Util::ADSI::User.delete(username)
+ Puppet::Util::Windows::ADSI::User.delete(username)
end
it "should return an enumeration of IADsUser wrapped objects" do
- Puppet::Util::ADSI.stubs(:sid_uri_safe).returns(nil)
+ Puppet::Util::Windows::ADSI.stubs(:sid_uri_safe).returns(nil)
name = 'Administrator'
wmi_users = [stub('WMI', :name => name)]
- Puppet::Util::ADSI.expects(:execquery).with('select name from win32_useraccount where localaccount = "TRUE"').returns(wmi_users)
+ Puppet::Util::Windows::ADSI.expects(:execquery).with('select name from win32_useraccount where localaccount = "TRUE"').returns(wmi_users)
native_user = stub('IADsUser')
homedir = "C:\\Users\\#{name}"
native_user.expects(:Get).with('HomeDirectory').returns(homedir)
- Puppet::Util::ADSI.expects(:connect).with("WinNT://./#{name},user").returns(native_user)
+ Puppet::Util::Windows::ADSI.expects(:connect).with("WinNT://./#{name},user").returns(native_user)
- users = Puppet::Util::ADSI::User.to_a
+ users = Puppet::Util::Windows::ADSI::User.to_a
users.length.should == 1
users[0].name.should == name
users[0]['HomeDirectory'].should == homedir
end
describe "an instance" do
let(:adsi_user) { stub('user', :objectSID => []) }
let(:sid) { stub(:account => username, :domain => 'testcomputername') }
- let(:user) { Puppet::Util::ADSI::User.new(username, adsi_user) }
+ let(:user) { Puppet::Util::Windows::ADSI::User.new(username, adsi_user) }
it "should provide its groups as a list of names" do
names = ["group1", "group2"]
groups = names.map { |name| mock('group', :Name => name) }
adsi_user.expects(:Groups).returns(groups)
user.groups.should =~ names
end
it "should be able to test whether a given password is correct" do
- Puppet::Util::ADSI::User.expects(:logon).with(username, 'pwdwrong').returns(false)
- Puppet::Util::ADSI::User.expects(:logon).with(username, 'pwdright').returns(true)
+ Puppet::Util::Windows::ADSI::User.expects(:logon).with(username, 'pwdwrong').returns(false)
+ Puppet::Util::Windows::ADSI::User.expects(:logon).with(username, 'pwdright').returns(true)
user.password_is?('pwdwrong').should be_false
user.password_is?('pwdright').should be_true
end
it "should be able to set a password" do
adsi_user.expects(:SetPassword).with('pwd')
adsi_user.expects(:SetInfo).at_least_once
flagname = "UserFlags"
fADS_UF_DONT_EXPIRE_PASSWD = 0x10000
adsi_user.expects(:Get).with(flagname).returns(0)
adsi_user.expects(:Put).with(flagname, fADS_UF_DONT_EXPIRE_PASSWD)
user.password = 'pwd'
end
- it "should generate the correct URI", :if => Puppet.features.microsoft_windows? do
- Puppet::Util::Windows::Security.stubs(:octet_string_to_sid_object).returns(sid)
+ it "should generate the correct URI" do
+ Puppet::Util::Windows::SID.stubs(:octet_string_to_sid_object).returns(sid)
user.uri.should == "WinNT://testcomputername/#{username},user"
end
- describe "when given a set of groups to which to add the user", :if => Puppet.features.microsoft_windows? do
+ describe "when given a set of groups to which to add the user" do
let(:groups_to_set) { 'group1,group2' }
before(:each) do
- Puppet::Util::Windows::Security.stubs(:octet_string_to_sid_object).returns(sid)
+ Puppet::Util::Windows::SID.stubs(:octet_string_to_sid_object).returns(sid)
user.expects(:groups).returns ['group2', 'group3']
end
describe "if membership is specified as inclusive" do
it "should add the user to those groups, and remove it from groups not in the list" do
group1 = stub 'group1'
group1.expects(:Add).with("WinNT://testcomputername/#{username},user")
group3 = stub 'group3'
group3.expects(:Remove).with("WinNT://testcomputername/#{username},user")
- Puppet::Util::ADSI.expects(:sid_uri).with(sid).returns("WinNT://testcomputername/#{username},user").twice
- Puppet::Util::ADSI.expects(:connect).with('WinNT://./group1,group').returns group1
- Puppet::Util::ADSI.expects(:connect).with('WinNT://./group3,group').returns group3
+ Puppet::Util::Windows::ADSI.expects(:sid_uri).with(sid).returns("WinNT://testcomputername/#{username},user").twice
+ Puppet::Util::Windows::ADSI.expects(:connect).with('WinNT://./group1,group').returns group1
+ Puppet::Util::Windows::ADSI.expects(:connect).with('WinNT://./group3,group').returns group3
user.set_groups(groups_to_set, false)
end
end
describe "if membership is specified as minimum" do
it "should add the user to the specified groups without affecting its other memberships" do
group1 = stub 'group1'
group1.expects(:Add).with("WinNT://testcomputername/#{username},user")
- Puppet::Util::ADSI.expects(:sid_uri).with(sid).returns("WinNT://testcomputername/#{username},user")
- Puppet::Util::ADSI.expects(:connect).with('WinNT://./group1,group').returns group1
+ Puppet::Util::Windows::ADSI.expects(:sid_uri).with(sid).returns("WinNT://testcomputername/#{username},user")
+ Puppet::Util::Windows::ADSI.expects(:connect).with('WinNT://./group1,group').returns group1
user.set_groups(groups_to_set, true)
end
end
end
end
end
- describe Puppet::Util::ADSI::Group do
+ describe Puppet::Util::Windows::ADSI::Group do
let(:groupname) { 'testgroup' }
describe "an instance" do
let(:adsi_group) { stub 'group' }
- let(:group) { Puppet::Util::ADSI::Group.new(groupname, adsi_group) }
+ let(:group) { Puppet::Util::Windows::ADSI::Group.new(groupname, adsi_group) }
let(:someone_sid){ stub(:account => 'someone', :domain => 'testcomputername')}
- it "should be able to add a member (deprecated)", :if => Puppet.features.microsoft_windows? do
- Puppet.expects(:deprecation_warning).with('Puppet::Util::ADSI::Group#add_members is deprecated; please use Puppet::Util::ADSI::Group#add_member_sids')
+ it "should be able to add a member (deprecated)" do
+ Puppet.expects(:deprecation_warning).with('Puppet::Util::Windows::ADSI::Group#add_members is deprecated; please use Puppet::Util::Windows::ADSI::Group#add_member_sids')
- Puppet::Util::Windows::Security.expects(:name_to_sid_object).with('someone').returns(someone_sid)
- Puppet::Util::ADSI.expects(:sid_uri).with(someone_sid).returns("WinNT://testcomputername/someone,user")
+ Puppet::Util::Windows::SID.expects(:name_to_sid_object).with('someone').returns(someone_sid)
+ Puppet::Util::Windows::ADSI.expects(:sid_uri).with(someone_sid).returns("WinNT://testcomputername/someone,user")
adsi_group.expects(:Add).with("WinNT://testcomputername/someone,user")
group.add_member('someone')
end
- it "should raise when adding a member that can't resolve to a SID (deprecated)", :if => Puppet.features.microsoft_windows? do
+ it "should raise when adding a member that can't resolve to a SID (deprecated)" do
expect {
group.add_member('foobar')
}.to raise_error(Puppet::Error, /Could not resolve username: foobar/)
end
- it "should be able to remove a member (deprecated)", :if => Puppet.features.microsoft_windows? do
- Puppet.expects(:deprecation_warning).with('Puppet::Util::ADSI::Group#remove_members is deprecated; please use Puppet::Util::ADSI::Group#remove_member_sids')
+ it "should be able to remove a member (deprecated)" do
+ Puppet.expects(:deprecation_warning).with('Puppet::Util::Windows::ADSI::Group#remove_members is deprecated; please use Puppet::Util::Windows::ADSI::Group#remove_member_sids')
- Puppet::Util::Windows::Security.expects(:name_to_sid_object).with('someone').returns(someone_sid)
- Puppet::Util::ADSI.expects(:sid_uri).with(someone_sid).returns("WinNT://testcomputername/someone,user")
+ Puppet::Util::Windows::SID.expects(:name_to_sid_object).with('someone').returns(someone_sid)
+ Puppet::Util::Windows::ADSI.expects(:sid_uri).with(someone_sid).returns("WinNT://testcomputername/someone,user")
adsi_group.expects(:Remove).with("WinNT://testcomputername/someone,user")
group.remove_member('someone')
end
- it "should raise when removing a member that can't resolve to a SID (deprecated)", :if => Puppet.features.microsoft_windows? do
+ it "should raise when removing a member that can't resolve to a SID (deprecated)" do
expect {
group.remove_member('foobar')
}.to raise_error(Puppet::Error, /Could not resolve username: foobar/)
end
- describe "should be able to use SID objects", :if => Puppet.features.microsoft_windows? do
- let(:system) { Puppet::Util::Windows::Security.name_to_sid_object('SYSTEM') }
+ describe "should be able to use SID objects" do
+ let(:system) { Puppet::Util::Windows::SID.name_to_sid_object('SYSTEM') }
it "to add a member" do
adsi_group.expects(:Add).with("WinNT://S-1-5-18")
group.add_member_sids(system)
end
it "to remove a member" do
adsi_group.expects(:Remove).with("WinNT://S-1-5-18")
group.remove_member_sids(system)
end
end
it "should provide its groups as a list of names" do
names = ['user1', 'user2']
users = names.map { |name| mock('user', :Name => name) }
adsi_group.expects(:Members).returns(users)
group.members.should =~ names
end
- it "should be able to add a list of users to a group", :if => Puppet.features.microsoft_windows? do
+ it "should be able to add a list of users to a group" do
names = ['DOMAIN\user1', 'user2']
sids = [
stub(:account => 'user1', :domain => 'DOMAIN'),
stub(:account => 'user2', :domain => 'testcomputername'),
stub(:account => 'user3', :domain => 'DOMAIN2'),
]
# use stubbed objectSid on member to return stubbed SID
- Puppet::Util::Windows::Security.expects(:octet_string_to_sid_object).with([0]).returns(sids[0])
- Puppet::Util::Windows::Security.expects(:octet_string_to_sid_object).with([1]).returns(sids[1])
+ Puppet::Util::Windows::SID.expects(:octet_string_to_sid_object).with([0]).returns(sids[0])
+ Puppet::Util::Windows::SID.expects(:octet_string_to_sid_object).with([1]).returns(sids[1])
- Puppet::Util::Windows::Security.expects(:name_to_sid_object).with('user2').returns(sids[1])
- Puppet::Util::Windows::Security.expects(:name_to_sid_object).with('DOMAIN2\user3').returns(sids[2])
+ Puppet::Util::Windows::SID.expects(:name_to_sid_object).with('user2').returns(sids[1])
+ Puppet::Util::Windows::SID.expects(:name_to_sid_object).with('DOMAIN2\user3').returns(sids[2])
- Puppet::Util::ADSI.expects(:sid_uri).with(sids[0]).returns("WinNT://DOMAIN/user1,user")
- Puppet::Util::ADSI.expects(:sid_uri).with(sids[2]).returns("WinNT://DOMAIN2/user3,user")
+ Puppet::Util::Windows::ADSI.expects(:sid_uri).with(sids[0]).returns("WinNT://DOMAIN/user1,user")
+ Puppet::Util::Windows::ADSI.expects(:sid_uri).with(sids[2]).returns("WinNT://DOMAIN2/user3,user")
members = names.each_with_index.map{|n,i| stub(:Name => n, :objectSID => [i])}
adsi_group.expects(:Members).returns members
adsi_group.expects(:Remove).with('WinNT://DOMAIN/user1,user')
adsi_group.expects(:Add).with('WinNT://DOMAIN2/user3,user')
group.set_members(['user2', 'DOMAIN2\user3'])
end
- it "should raise an error when a username does not resolve to a SID", :if => Puppet.features.microsoft_windows? do
+ it "should raise an error when a username does not resolve to a SID" do
expect {
adsi_group.expects(:Members).returns []
group.set_members(['foobar'])
}.to raise_error(Puppet::Error, /Could not resolve username: foobar/)
end
it "should generate the correct URI" do
- Puppet::Util::ADSI.stubs(:sid_uri_safe).returns(nil)
+ Puppet::Util::Windows::ADSI.stubs(:sid_uri_safe).returns(nil)
group.uri.should == "WinNT://./#{groupname},group"
end
end
it "should generate the correct URI" do
- Puppet::Util::ADSI.stubs(:sid_uri_safe).returns(nil)
- Puppet::Util::ADSI::Group.uri("people").should == "WinNT://./people,group"
+ Puppet::Util::Windows::ADSI.stubs(:sid_uri_safe).returns(nil)
+ Puppet::Util::Windows::ADSI::Group.uri("people").should == "WinNT://./people,group"
end
it "should be able to create a group" do
adsi_group = stub("adsi")
connection.expects(:Create).with('group', groupname).returns(adsi_group)
- Puppet::Util::ADSI::User.expects(:exists?).with(groupname).returns(false)
+ Puppet::Util::Windows::ADSI::User.expects(:exists?).with(groupname).returns(false)
- group = Puppet::Util::ADSI::Group.create(groupname)
+ group = Puppet::Util::Windows::ADSI::Group.create(groupname)
- group.should be_a(Puppet::Util::ADSI::Group)
+ group.should be_a(Puppet::Util::Windows::ADSI::Group)
group.native_group.should == adsi_group
end
it "should be able to confirm the existence of a group" do
- Puppet::Util::ADSI.stubs(:sid_uri_safe).returns(nil)
- Puppet::Util::ADSI.expects(:connect).with("WinNT://./#{groupname},group").returns connection
+ Puppet::Util::Windows::ADSI.stubs(:sid_uri_safe).returns(nil)
+ Puppet::Util::Windows::ADSI.expects(:connect).with("WinNT://./#{groupname},group").returns connection
- Puppet::Util::ADSI::Group.exists?(groupname).should be_true
+ Puppet::Util::Windows::ADSI::Group.exists?(groupname).should be_true
end
- it "should be able to confirm the existence of a group with a well-known SID",
- :if => Puppet.features.microsoft_windows? do
+ it "should be able to confirm the existence of a group with a well-known SID" do
service_group = Win32::Security::SID::Service
# ensure that the underlying OS is queried here
- Puppet::Util::ADSI.unstub(:connect)
- Puppet::Util::ADSI::Group.exists?(service_group).should be_true
+ Puppet::Util::Windows::ADSI.unstub(:connect)
+ Puppet::Util::Windows::ADSI::Group.exists?(service_group).should be_true
end
- it "should return nil with an unknown SID",
- :if => Puppet.features.microsoft_windows? do
+ it "should return nil with an unknown SID" do
bogus_sid = 'S-1-2-3-4'
# ensure that the underlying OS is queried here
- Puppet::Util::ADSI.unstub(:connect)
- Puppet::Util::ADSI::Group.exists?(bogus_sid).should be_false
+ Puppet::Util::Windows::ADSI.unstub(:connect)
+ Puppet::Util::Windows::ADSI::Group.exists?(bogus_sid).should be_false
end
it "should be able to delete a group" do
connection.expects(:Delete).with('group', groupname)
- Puppet::Util::ADSI::Group.delete(groupname)
+ Puppet::Util::Windows::ADSI::Group.delete(groupname)
end
it "should return an enumeration of IADsGroup wrapped objects" do
- Puppet::Util::ADSI.stubs(:sid_uri_safe).returns(nil)
+ Puppet::Util::Windows::ADSI.stubs(:sid_uri_safe).returns(nil)
name = 'Administrators'
wmi_groups = [stub('WMI', :name => name)]
- Puppet::Util::ADSI.expects(:execquery).with('select name from win32_group where localaccount = "TRUE"').returns(wmi_groups)
+ Puppet::Util::Windows::ADSI.expects(:execquery).with('select name from win32_group where localaccount = "TRUE"').returns(wmi_groups)
native_group = stub('IADsGroup')
native_group.expects(:Members).returns([stub(:Name => 'Administrator')])
- Puppet::Util::ADSI.expects(:connect).with("WinNT://./#{name},group").returns(native_group)
+ Puppet::Util::Windows::ADSI.expects(:connect).with("WinNT://./#{name},group").returns(native_group)
- groups = Puppet::Util::ADSI::Group.to_a
+ groups = Puppet::Util::Windows::ADSI::Group.to_a
groups.length.should == 1
groups[0].name.should == name
groups[0].members.should == ['Administrator']
end
end
- describe Puppet::Util::ADSI::UserProfile do
+ describe Puppet::Util::Windows::ADSI::UserProfile do
it "should be able to delete a user profile" do
connection.expects(:Delete).with("Win32_UserProfile.SID='S-A-B-C'")
- Puppet::Util::ADSI::UserProfile.delete('S-A-B-C')
+ Puppet::Util::Windows::ADSI::UserProfile.delete('S-A-B-C')
end
it "should warn on 2003" do
connection.expects(:Delete).raises(RuntimeError,
"Delete (WIN32OLERuntimeError)
OLE error code:80041010 in SWbemServicesEx
Invalid class
HRESULT error code:0x80020009
Exception occurred.")
Puppet.expects(:warning).with("Cannot delete user profile for 'S-A-B-C' prior to Vista SP1")
- Puppet::Util::ADSI::UserProfile.delete('S-A-B-C')
+ Puppet::Util::Windows::ADSI::UserProfile.delete('S-A-B-C')
end
end
end
diff --git a/spec/unit/util/windows/api_types_spec.rb b/spec/unit/util/windows/api_types_spec.rb
new file mode 100644
index 000000000..a1e1c76c9
--- /dev/null
+++ b/spec/unit/util/windows/api_types_spec.rb
@@ -0,0 +1,28 @@
+# encoding: UTF-8
+#!/usr/bin/env ruby
+
+require 'spec_helper'
+
+describe "FFI::MemoryPointer", :if => Puppet.features.microsoft_windows? do
+ context "read_wide_string" do
+ let (:string) { "foo_bar" }
+
+ it "should properly roundtrip a given string" do
+ read_string = nil
+ FFI::MemoryPointer.from_string_to_wide_string(string) do |ptr|
+ read_string = ptr.read_wide_string(string.length)
+ end
+
+ read_string.should == string
+ end
+
+ it "should return a given string in the default encoding" do
+ read_string = nil
+ FFI::MemoryPointer.from_string_to_wide_string(string) do |ptr|
+ read_string = ptr.read_wide_string(string.length)
+ end
+
+ read_string.encoding.should == Encoding.default_external
+ end
+ end
+end
diff --git a/spec/unit/util/windows/registry_spec.rb b/spec/unit/util/windows/registry_spec.rb
index 636ba0c93..ed52539a5 100755
--- a/spec/unit/util/windows/registry_spec.rb
+++ b/spec/unit/util/windows/registry_spec.rb
@@ -1,140 +1,141 @@
#! /usr/bin/env ruby
require 'spec_helper'
require 'puppet/util/windows'
-require 'puppet/util/windows/registry'
describe Puppet::Util::Windows::Registry, :if => Puppet::Util::Platform.windows? do
subject do
class TestRegistry
include Puppet::Util::Windows::Registry
end
TestRegistry.new
end
let(:name) { 'HKEY_LOCAL_MACHINE' }
let(:path) { 'Software\Microsoft' }
context "#root" do
it "should lookup the root hkey" do
subject.root(name).should be_instance_of(Win32::Registry::PredefinedKey)
end
it "should raise for unknown root keys" do
expect { subject.root('HKEY_BOGUS') }.to raise_error(Puppet::Error, /Invalid registry key/)
end
end
context "#open" do
let(:hkey) { mock 'hklm' }
let(:subkey) { stub 'subkey' }
before :each do
subject.stubs(:root).returns(hkey)
end
it "should yield the opened the subkey" do
hkey.expects(:open).with do |p, _|
p.should == path
end.yields(subkey)
yielded = nil
subject.open(name, path) {|reg| yielded = reg}
yielded.should == subkey
end
- [described_class::KEY64, described_class::KEY32].each do |access|
- it "should open the key for read access 0x#{access.to_s(16)}" do
- mode = described_class::KEY_READ | access
- hkey.expects(:open).with(path, mode)
+ if Puppet::Util::Platform.windows?
+ [described_class::KEY64, described_class::KEY32].each do |access|
+ it "should open the key for read access 0x#{access.to_s(16)}" do
+ mode = described_class::KEY_READ | access
+ hkey.expects(:open).with(path, mode)
- subject.open(name, path, mode) {|reg| }
+ subject.open(name, path, mode) {|reg| }
+ end
end
end
it "should default to KEY64" do
hkey.expects(:open).with(path, described_class::KEY_READ | described_class::KEY64)
subject.open(hkey, path) {|hkey| }
end
it "should raise for a path that doesn't exist" do
hkey.expects(:keyname).returns('HKEY_LOCAL_MACHINE')
hkey.expects(:open).raises(Win32::Registry::Error.new(2)) # file not found
expect do
subject.open(hkey, 'doesnotexist') {|hkey| }
end.to raise_error(Puppet::Error, /Failed to open registry key 'HKEY_LOCAL_MACHINE\\doesnotexist'/)
end
end
context "#values" do
let(:key) { stub('uninstall') }
it "should return each value's name and data" do
key.expects(:each_value).multiple_yields(
['string', 1, 'foo'], ['dword', 4, 0]
)
subject.values(key).should == { 'string' => 'foo', 'dword' => 0 }
end
it "should return an empty hash if there are no values" do
key.expects(:each_value)
subject.values(key).should == {}
end
context "when reading non-ASCII values" do
# registered trademark symbol
let(:data) do
str = [0xAE].pack("C")
str.force_encoding('US-ASCII')
str
end
def expects_registry_value(array)
key.expects(:each_value).multiple_yields(array)
subject.values(key).first[1]
end
# The win32console gem applies this regex to strip out ANSI escape
# sequences. If the registered trademark had encoding US-ASCII,
# the regex would fail with 'invalid byte sequence in US-ASCII'
def strip_ansi_escapes(value)
value.sub(/([^\e]*)?\e([\[\(])([0-9\;\=]*)([a-zA-Z@])(.*)/, '\5')
end
it "encodes REG_SZ according to the current code page" do
reg_value = ['string', Win32::Registry::REG_SZ, data]
value = expects_registry_value(reg_value)
strip_ansi_escapes(value)
end
it "encodes REG_EXPAND_SZ based on the current code page" do
reg_value = ['expand', Win32::Registry::REG_EXPAND_SZ, "%SYSTEMROOT%\\#{data}"]
value = expects_registry_value(reg_value)
strip_ansi_escapes(value)
end
it "encodes REG_MULTI_SZ based on the current code page" do
reg_value = ['multi', Win32::Registry::REG_MULTI_SZ, ["one#{data}", "two#{data}"]]
value = expects_registry_value(reg_value)
value.each { |str| strip_ansi_escapes(str) }
end
it "passes REG_DWORD through" do
reg_value = ['dword', Win32::Registry::REG_DWORD, '1']
value = expects_registry_value(reg_value)
Integer(value).should == 1
end
end
end
end
diff --git a/spec/unit/util/windows/sid_spec.rb b/spec/unit/util/windows/sid_spec.rb
index 770512188..2748f13c6 100755
--- a/spec/unit/util/windows/sid_spec.rb
+++ b/spec/unit/util/windows/sid_spec.rb
@@ -1,169 +1,166 @@
#!/usr/bin/env ruby
require 'spec_helper'
describe "Puppet::Util::Windows::SID", :if => Puppet.features.microsoft_windows? do
if Puppet.features.microsoft_windows?
require 'puppet/util/windows'
- class SIDTester
- include Puppet::Util::Windows::SID
- end
end
- let(:subject) { SIDTester.new }
+ let(:subject) { Puppet::Util::Windows::SID }
let(:sid) { Win32::Security::SID::LocalSystem }
let(:invalid_sid) { 'bogus' }
let(:unknown_sid) { 'S-0-0-0' }
let(:unknown_name) { 'chewbacca' }
context "#octet_string_to_sid_object" do
it "should properly convert an array of bytes for the local Administrator SID" do
host = '.'
username = 'Administrator'
admin = WIN32OLE.connect("WinNT://#{host}/#{username},user")
converted = subject.octet_string_to_sid_object(admin.objectSID)
converted.should == Win32::Security::SID.new(username, host)
converted.should be_an_instance_of Win32::Security::SID
end
it "should properly convert an array of bytes for a well-known SID" do
bytes = [1, 1, 0, 0, 0, 0, 0, 5, 18, 0, 0, 0]
converted = subject.octet_string_to_sid_object(bytes)
converted.should == Win32::Security::SID.new('SYSTEM')
converted.should be_an_instance_of Win32::Security::SID
end
it "should raise an error for non-array input" do
expect {
subject.octet_string_to_sid_object(invalid_sid)
}.to raise_error(Puppet::Error, /Octet string must be an array of bytes/)
end
it "should raise an error for an empty byte array" do
expect {
subject.octet_string_to_sid_object([])
}.to raise_error(Puppet::Error, /Octet string must be an array of bytes/)
end
it "should raise an error for a malformed byte array" do
expect {
invalid_octet = [1]
subject.octet_string_to_sid_object(invalid_octet)
- }.to raise_error(Win32::Security::SID::Error, /No mapping between account names and security IDs was done./)
+ }.to raise_error(SystemCallError, /No mapping between account names and security IDs was done./)
end
end
context "#name_to_sid" do
it "should return nil if the account does not exist" do
subject.name_to_sid(unknown_name).should be_nil
end
it "should accept unqualified account name" do
subject.name_to_sid('SYSTEM').should == sid
end
it "should be case-insensitive" do
subject.name_to_sid('SYSTEM').should == subject.name_to_sid('system')
end
it "should be leading and trailing whitespace-insensitive" do
subject.name_to_sid('SYSTEM').should == subject.name_to_sid(' SYSTEM ')
end
it "should accept domain qualified account names" do
subject.name_to_sid('NT AUTHORITY\SYSTEM').should == sid
end
it "should be the identity function for any sid" do
subject.name_to_sid(sid).should == sid
end
end
context "#name_to_sid_object" do
it "should return nil if the account does not exist" do
subject.name_to_sid_object(unknown_name).should be_nil
end
it "should return a Win32::Security::SID instance for any valid sid" do
subject.name_to_sid_object(sid).should be_an_instance_of(Win32::Security::SID)
end
it "should accept unqualified account name" do
subject.name_to_sid_object('SYSTEM').to_s.should == sid
end
it "should be case-insensitive" do
subject.name_to_sid_object('SYSTEM').should == subject.name_to_sid_object('system')
end
it "should be leading and trailing whitespace-insensitive" do
subject.name_to_sid_object('SYSTEM').should == subject.name_to_sid_object(' SYSTEM ')
end
it "should accept domain qualified account names" do
subject.name_to_sid_object('NT AUTHORITY\SYSTEM').to_s.should == sid
end
end
context "#sid_to_name" do
it "should return nil if given a sid for an account that doesn't exist" do
subject.sid_to_name(unknown_sid).should be_nil
end
it "should accept a sid" do
subject.sid_to_name(sid).should == "NT AUTHORITY\\SYSTEM"
end
end
context "#sid_ptr_to_string" do
it "should raise if given an invalid sid" do
expect {
subject.sid_ptr_to_string(nil)
}.to raise_error(Puppet::Error, /Invalid SID/)
end
it "should yield a valid sid pointer" do
string = nil
subject.string_to_sid_ptr(sid) do |ptr|
string = subject.sid_ptr_to_string(ptr)
end
string.should == sid
end
end
context "#string_to_sid_ptr" do
it "should yield sid_ptr" do
ptr = nil
subject.string_to_sid_ptr(sid) do |p|
ptr = p
end
ptr.should_not be_nil
end
it "should raise on an invalid sid" do
expect {
subject.string_to_sid_ptr(invalid_sid)
}.to raise_error(Puppet::Error, /Failed to convert string SID/)
end
end
context "#valid_sid?" do
it "should return true for a valid SID" do
subject.valid_sid?(sid).should be_true
end
it "should return false for an invalid SID" do
subject.valid_sid?(invalid_sid).should be_false
end
it "should raise if the conversion fails" do
subject.expects(:string_to_sid_ptr).with(sid).
- raises(Puppet::Util::Windows::Error.new("Failed to convert string SID: #{sid}", Windows::Error::ERROR_ACCESS_DENIED))
+ raises(Puppet::Util::Windows::Error.new("Failed to convert string SID: #{sid}", Puppet::Util::Windows::Error::ERROR_ACCESS_DENIED))
expect {
subject.string_to_sid_ptr(sid) {|ptr| }
}.to raise_error(Puppet::Util::Windows::Error, /Failed to convert string SID: #{sid}/)
end
end
end
diff --git a/spec/unit/util/windows/string_spec.rb b/spec/unit/util/windows/string_spec.rb
index 60f7e6449..5c6473e70 100644
--- a/spec/unit/util/windows/string_spec.rb
+++ b/spec/unit/util/windows/string_spec.rb
@@ -1,54 +1,58 @@
# encoding: UTF-8
#!/usr/bin/env ruby
require 'spec_helper'
require 'puppet/util/windows'
describe "Puppet::Util::Windows::String", :if => Puppet.features.microsoft_windows? do
UTF16_NULL = [0, 0]
def wide_string(str)
Puppet::Util::Windows::String.wide_string(str)
end
def converts_to_wide_string(string_value)
expected = string_value.encode(Encoding::UTF_16LE)
expected_bytes = expected.bytes.to_a + UTF16_NULL
wide_string(string_value).bytes.to_a.should == expected_bytes
end
context "wide_string" do
it "should return encoding of UTF-16LE" do
wide_string("bob").encoding.should == Encoding::UTF_16LE
end
it "should return valid encoding" do
wide_string("bob").valid_encoding?.should be_true
end
it "should convert an ASCII string" do
converts_to_wide_string("bob".encode(Encoding::US_ASCII))
end
it "should convert a UTF-8 string" do
converts_to_wide_string("bob".encode(Encoding::UTF_8))
end
it "should convert a UTF-16LE string" do
converts_to_wide_string("bob\u00E8".encode(Encoding::UTF_16LE))
end
it "should convert a UTF-16BE string" do
converts_to_wide_string("bob\u00E8".encode(Encoding::UTF_16BE))
end
it "should convert an UTF-32LE string" do
converts_to_wide_string("bob\u00E8".encode(Encoding::UTF_32LE))
end
it "should convert an UTF-32BE string" do
converts_to_wide_string("bob\u00E8".encode(Encoding::UTF_32BE))
end
+
+ it "should return a nil when given a nil" do
+ wide_string(nil).should == nil
+ end
end
end
diff --git a/spec/unit/util/zaml_spec.rb b/spec/unit/util/zaml_spec.rb
index 56c5cb719..b239b4a84 100755
--- a/spec/unit/util/zaml_spec.rb
+++ b/spec/unit/util/zaml_spec.rb
@@ -1,304 +1,308 @@
#! /usr/bin/env ruby
# encoding: UTF-8
#
# The above encoding line is a magic comment to set the default source encoding
# of this file for the Ruby interpreter. It must be on the first or second
# line of the file if an interpreter is in use. In Ruby 1.9 and later, the
# source encoding determines the encoding of String and Regexp objects created
# from this source file. This explicit encoding is important because otherwise
# Ruby will pick an encoding based on LANG or LC_CTYPE environment variables.
# These may be different from site to site so it's important for us to
# establish a consistent behavior. For more information on M17n please see:
# http://links.puppetlabs.com/understanding_m17n
require 'spec_helper'
require 'puppet/util/monkey_patches'
describe "Pure ruby yaml implementation" do
RSpec::Matchers.define :round_trip_through_yaml do
match do |object|
YAML.load(object.to_yaml) == object
end
end
RSpec::Matchers.define :be_equivalent_to do |expected_yaml|
match do |object|
object.to_yaml == expected_yaml and YAML.load(expected_yaml) == object
end
failure_message_for_should do |object|
if object.to_yaml != expected_yaml
"#{object} serialized to #{object.to_yaml}"
else
"#{expected_yaml} deserialized as #{YAML.load(expected_yaml)}"
end
end
end
{
7 => "--- 7",
3.14159 => "--- 3.14159",
"3.14159" => '--- "3.14159"',
"+3.14159" => '--- "+3.14159"',
"0x123abc" => '--- "0x123abc"',
"-0x123abc" => '--- "-0x123abc"',
"-0x123" => '--- "-0x123"',
"+0x123" => '--- "+0x123"',
"0x123.456" => '--- "0x123.456"',
'test' => "--- test",
[] => "--- []",
:symbol => "--- !ruby/sym symbol",
{:a => "A"} => "--- \n !ruby/sym a: A",
{:a => "x\ny"} => "--- \n !ruby/sym a: |-\n x\n y",
}.each do |data, serialized|
it "should convert the #{data.class} #{data.inspect} to yaml" do
data.should be_equivalent_to serialized
end
end
context Time do
def the_time_in(timezone)
Puppet::Util.withenv("TZ" => timezone) do
Time.local(2012, "dec", 11, 15, 59, 2)
end
end
def the_time_in_yaml_offset_by(offset)
"--- 2012-12-11 15:59:02.000000 #{offset}"
end
it "serializes a time in UTC" do
- pending("not supported on Windows", :if => Puppet.features.microsoft_windows? && RUBY_VERSION[0,3] == '1.8') do
+ bad_rubies =
+ RUBY_VERSION[0,3] == '1.8' ||
+ RUBY_VERSION[0,3] == '2.0' && RUBY_PLATFORM == 'i386-mingw32'
+
+ pending("not supported on Windows", :if => Puppet.features.microsoft_windows? && bad_rubies) do
the_time_in("Europe/London").should be_equivalent_to(the_time_in_yaml_offset_by("+00:00"))
end
end
it "serializes a time behind UTC" do
pending("not supported on Windows", :if => Puppet.features.microsoft_windows?) do
the_time_in("America/Chicago").should be_equivalent_to(the_time_in_yaml_offset_by("-06:00"))
end
end
it "serializes a time behind UTC that is not a complete hour (Bug #15496)" do
pending("not supported on Windows", :if => Puppet.features.microsoft_windows?) do
the_time_in("America/Caracas").should be_equivalent_to(the_time_in_yaml_offset_by("-04:30"))
end
end
it "serializes a time ahead of UTC" do
pending("not supported on Windows", :if => Puppet.features.microsoft_windows?) do
the_time_in("Europe/Berlin").should be_equivalent_to(the_time_in_yaml_offset_by("+01:00"))
end
end
it "serializes a time ahead of UTC that is not a complete hour" do
pending("not supported on Windows", :if => Puppet.features.microsoft_windows?) do
the_time_in("Asia/Kathmandu").should be_equivalent_to(the_time_in_yaml_offset_by("+05:45"))
end
end
it "serializes a time more than 12 hours ahead of UTC" do
pending("not supported on Windows", :if => Puppet.features.microsoft_windows?) do
the_time_in("Pacific/Kiritimati").should be_equivalent_to(the_time_in_yaml_offset_by("+14:00"))
end
end
it "should roundtrip Time.now" do
tm = Time.now
# yaml only emits 6 digits of precision, but on some systems with ruby 1.9
# the original time object may contain nanoseconds, which will cause
# the equality check to fail. So truncate the time object to only microsecs
tm = Time.at(tm.to_i, tm.usec)
tm.should round_trip_through_yaml
end
end
[
{ :a => "a:" },
{ :a => "a:", :b => "b:" },
["a:", "b:"],
{ :a => "/:", :b => "/:" },
{ :a => "a/:", :b => "a/:" },
{ :a => "\"" },
{ :a => {}.to_yaml },
{ :a => [].to_yaml },
{ :a => "".to_yaml },
{ :a => :a.to_yaml },
{ "a:" => "b" },
{ :a.to_yaml => "b" },
{ [1, 2, 3] => "b" },
{ "b:" => { "a" => [] } }
].each do |value|
it "properly escapes #{value.inspect}, which contains YAML characters" do
value.should round_trip_through_yaml
end
end
#
# Can't test for equality on raw objects
{
Object.new => "--- !ruby/object {}",
[Object.new] => "--- \n - !ruby/object {}",
{Object.new => Object.new} => "--- \n ? !ruby/object {}\n : !ruby/object {}"
}.each do |o,y|
it "should convert the #{o.class} #{o.inspect} to yaml" do
o.to_yaml.should == y
end
it "should produce yaml for the #{o.class} #{o.inspect} that can be reconstituted" do
lambda { YAML.load(o.to_yaml) }.should_not raise_error
end
end
it "should emit proper labels and backreferences for common objects" do
# Note: this test makes assumptions about the names ZAML chooses
# for labels.
x = [1, 2]
y = [3, 4]
z = [x, y, x, y]
z.should be_equivalent_to("--- \n - &id001\n - 1\n - 2\n - &id002\n - 3\n - 4\n - *id001\n - *id002")
end
it "should emit proper labels and backreferences for recursive objects" do
x = [1, 2]
x << x
x.to_yaml.should == "--- &id001\n \n - 1\n - 2\n - *id001"
x2 = YAML.load(x.to_yaml)
x2.should be_a(Array)
x2.length.should == 3
x2[0].should == 1
x2[1].should == 2
x2[2].should equal(x2)
end
# Note, many of these tests will pass on Ruby 1.8 but fail on 1.9 if the patch
# fix is not applied to Puppet or there's a regression. These version
# dependent failures are intentional since the string encoding behavior changed
# significantly in 1.9.
context "UTF-8 encoded String#to_yaml (Bug #11246)" do
# JJM All of these snowmen are different representations of the same
# UTF-8 encoded string.
let(:snowman) { 'Snowman: [☃]' }
let(:snowman_escaped) { "Snowman: [\xE2\x98\x83]" }
it "should serialize and deserialize to the same thing" do
snowman.should round_trip_through_yaml
end
it "should serialize and deserialize to a String compatible with a UTF-8 encoded Regexp" do
YAML.load(snowman.to_yaml).should =~ /☃/u
end
end
context "binary data" do
subject { "M\xC0\xDF\xE5tt\xF6" }
if String.method_defined?(:encoding)
def binary(str)
str.force_encoding('binary')
end
else
def binary(str)
str
end
end
it "should not explode encoding binary data" do
expect { subject.to_yaml }.not_to raise_error
end
it "should mark the binary data as binary" do
subject.to_yaml.should =~ /!binary/
end
it "should round-trip the data" do
yaml = subject.to_yaml
read = YAML.load(yaml)
binary(read).should == binary(subject)
end
[
"\xC0\xAE", # over-long UTF-8 '.' character
"\xC0\x80", # over-long NULL byte
"\xC0\xFF",
"\xC1\xAE",
"\xC1\x80",
"\xC1\xFF",
"\x80", # first continuation byte
"\xbf", # last continuation byte
# all possible continuation bytes in one shot
"\x80\x81\x82\x83\x84\x85\x86\x87\x88\x89\x8A\x8B\x8C\x8D\x8E\x8F" +
"\x90\x91\x92\x93\x94\x95\x96\x97\x98\x99\x9A\x9B\x9C\x9D\x9E\x9F" +
"\xA0\xA1\xA2\xA3\xA4\xA5\xA6\xA7\xA8\xA9\xAA\xAB\xAC\xAD\xAE\xAF" +
"\xB0\xB1\xB2\xB3\xB4\xB5\xB6\xB7\xB8\xB9\xBA\xBB\xBC\xBD\xBE\xBF",
# lonely start characters - first, all possible two byte sequences
"\xC0 \xC1 \xC2 \xC3 \xC4 \xC5 \xC6 \xC7 \xC8 \xC9 \xCA \xCB \xCC \xCD \xCE \xCF " +
"\xD0 \xD1 \xD2 \xD3 \xD4 \xD5 \xD6 \xD7 \xD8 \xD9 \xDA \xDB \xDC \xDD \xDE \xDF ",
# and so for three byte sequences, four, five, and six, as follow.
"\xE0 \xE1 \xE2 \xE3 \xE4 \xE5 \xE6 \xE7 \xE8 \xE9 \xEA \xEB \xEC \xED \xEE \xEF ",
"\xF0 \xF1 \xF2 \xF3 \xF4 \xF5 \xF6 \xF7 ",
"\xF8 \xF9 \xFA \xFB ",
"\xFC \xFD ",
# sequences with the last byte missing
"\xC0", "\xE0", "\xF0\x80\x80", "\xF8\x80\x80\x80", "\xFC\x80\x80\x80\x80",
"\xDF", "\xEF\xBF", "\xF7\xBF\xBF", "\xFB\xBF\xBF\xBF", "\xFD\xBF\xBF\xBF\xBF",
# impossible bytes
"\xFE", "\xFF", "\xFE\xFE\xFF\xFF",
# over-long '/' character
"\xC0\xAF",
"\xE0\x80\xAF",
"\xF0\x80\x80\xAF",
"\xF8\x80\x80\x80\xAF",
"\xFC\x80\x80\x80\x80\xAF",
# maximum overlong sequences
"\xc1\xbf",
"\xe0\x9f\xbf",
"\xf0\x8f\xbf\xbf",
"\xf8\x87\xbf\xbf\xbf",
"\xfc\x83\xbf\xbf\xbf\xbf",
# overlong NUL
"\xc0\x80",
"\xe0\x80\x80",
"\xf0\x80\x80\x80",
"\xf8\x80\x80\x80\x80",
"\xfc\x80\x80\x80\x80\x80",
].each do |input|
# It might seem like we should more correctly reject these sequences in
# the encoder, and I would personally agree, but the sad reality is that
# we do not distinguish binary and textual data in our language, and so we
# wind up with the same thing - a string - containing both.
#
# That leads to the position where we must treat these invalid sequences,
# which are both legitimate binary content, and illegitimate potential
# attacks on the system, as something that passes through correctly in
# a string. --daniel 2012-07-14
it "binary encode highly dubious non-compliant UTF-8 input #{input.inspect}" do
encoded = ZAML.dump(binary(input))
encoded.should =~ /!binary/
YAML.load(encoded).should == input
end
end
end
context "multi-line values" do
[
"none",
"one\n",
"two\n\n",
["one\n", "two"],
["two\n\n", "three"],
{ "\nkey" => "value" },
{ "key\n" => "value" },
{ "\nkey\n" => "value" },
{ "key\nkey" => "value" },
{ "\nkey\nkey" => "value" },
{ "key\nkey\n" => "value" },
{ "\nkey\nkey\n" => "value" },
].each do |input|
it "handles #{input.inspect} without corruption" do
input.should round_trip_through_yaml
end
end
end
end
diff --git a/tasks/benchmark.rake b/tasks/benchmark.rake
index 7456c5c0a..69c0fc27c 100644
--- a/tasks/benchmark.rake
+++ b/tasks/benchmark.rake
@@ -1,109 +1,144 @@
require 'benchmark'
require 'tmpdir'
require 'csv'
namespace :benchmark do
def generate_scenario_tasks(location, name)
desc File.read(File.join(location, 'description'))
task name => "#{name}:run"
+ # Load a BenchmarkerTask to handle config of the benchmark
+ task_handler_file = File.expand_path(File.join(location, 'benchmarker_task.rb'))
+ if File.exist?(task_handler_file)
+ require task_handler_file
+ run_args = BenchmarkerTask.run_args
+ else
+ run_args = []
+ end
namespace name do
task :setup do
ENV['ITERATIONS'] ||= '10'
ENV['SIZE'] ||= '100'
ENV['TARGET'] ||= Dir.mktmpdir(name)
ENV['TARGET'] = File.expand_path(ENV['TARGET'])
mkdir_p(ENV['TARGET'])
require File.expand_path(File.join(location, 'benchmarker.rb'))
@benchmark = Benchmarker.new(ENV['TARGET'], ENV['SIZE'].to_i)
end
task :generate => :setup do
@benchmark.generate
@benchmark.setup
end
desc "Run the #{name} scenario."
- task :run => :generate do
+ task :run, [*run_args] => :generate do |_, args|
format = if RUBY_VERSION =~ /^1\.8/
Benchmark::FMTSTR
else
Benchmark::FORMAT
end
-
report = []
+ details = []
Benchmark.benchmark(Benchmark::CAPTION, 10, format, "> total:", "> avg:") do |b|
times = []
ENV['ITERATIONS'].to_i.times do |i|
start_time = Time.now.to_i
times << b.report("Run #{i + 1}") do
- @benchmark.run
+ details << @benchmark.run(args)
end
report << [to_millis(start_time), to_millis(times.last.real), 200, true, name]
end
sum = times.inject(Benchmark::Tms.new, &:+)
[sum, sum / times.length]
end
write_csv("#{name}.samples",
%w{timestamp elapsed responsecode success name},
report)
+
+ # report details, if any were produced
+ if details[0].is_a?(Array) && details[0][0].is_a?(Benchmark::Tms)
+ # assume all entries are Tms if the first is
+ # turn each into a hash of label => tms (since labels are lost when doing arithmetic on Tms)
+ hashed = details.reduce([]) do |memo, measures|
+ memo << measures.reduce({}) {|memo2, measure| memo2[measure.label] = measure; memo2}
+ memo
+ end
+ # sum across all hashes
+ result = {}
+
+ hashed_totals = hashed.reduce {|memo, h| memo.merge(h) {|k, old, new| old + new }}
+ # average the totals
+ hashed_totals.keys.each {|k| hashed_totals[k] /= details.length }
+ min_width = 14
+ max_width = (hashed_totals.keys.map(&:length) << min_width).max
+ puts "\n"
+ puts sprintf("%2$*1$s %3$s", -max_width, 'Details (avg)', " user system total real")
+ puts "-" * (46 + max_width)
+ hashed_totals.sort.each {|k,v| puts sprintf("%2$*1$s %3$s", -max_width, k, v.format) }
+ end
end
desc "Profile a single run of the #{name} scenario."
- task :profile => :generate do
+ task :profile, [:warm_up_runs, *run_args] => :generate do |_, args|
+ warm_up_runs = (args[:warm_up_runs] || '0').to_i
+ warm_up_runs.times do
+ @benchmark.run(args)
+ end
+
require 'ruby-prof'
result = RubyProf.profile do
- @benchmark.run
+ @benchmark.run(args)
end
printer = RubyProf::CallTreePrinter.new(result)
File.open(File.join("callgrind.#{name}.#{Time.now.to_i}.trace"), "w") do |f|
printer.print(f)
end
end
def to_millis(seconds)
(seconds * 1000).round
end
def write_csv(file, header, data)
CSV.open(file, 'w') do |csv|
csv << header
data.each do |line|
csv << line
end
end
end
end
end
scenarios = []
Dir.glob('benchmarks/*') do |location|
name = File.basename(location)
scenarios << name
generate_scenario_tasks(location, File.basename(location))
end
namespace :all do
desc "Profile all of the scenarios. (#{scenarios.join(', ')})"
task :profile do
scenarios.each do |name|
sh "rake benchmark:#{name}:profile"
end
end
desc "Run all of the scenarios. (#{scenarios.join(', ')})"
task :run do
scenarios.each do |name|
sh "rake benchmark:#{name}:run"
end
end
end
end
diff --git a/tasks/parser.rake b/tasks/parser.rake
index 73167f685..c6993d602 100644
--- a/tasks/parser.rake
+++ b/tasks/parser.rake
@@ -1,5 +1,19 @@
-
-desc "Generate the parser"
+desc "Generate the 3.x 'current' parser"
task :gen_parser do
%x{racc -olib/puppet/parser/parser.rb lib/puppet/parser/grammar.ra}
end
+
+desc "Generate the 4.x 'future' parser"
+task :gen_eparser do
+ %x{racc -olib/puppet/pops/parser/eparser.rb lib/puppet/pops/parser/egrammar.ra}
+end
+
+desc "Generate the 4.x 'future' parser with egrammar.output"
+task :gen_eparser_output do
+ %x{racc -v -olib/puppet/pops/parser/eparser.rb lib/puppet/pops/parser/egrammar.ra}
+end
+
+desc "Generate the 4.x 'future' parser with debugging output"
+task :gen_eparser_debug do
+ %x{racc -t -olib/puppet/pops/parser/eparser.rb lib/puppet/pops/parser/egrammar.ra}
+end
diff --git a/tasks/yard.rake b/tasks/yard.rake
index d513be771..ad5ece076 100644
--- a/tasks/yard.rake
+++ b/tasks/yard.rake
@@ -1,59 +1,59 @@
begin
require 'yard'
namespace :doc do
desc "Clean up generated documentation"
task :clean do
rm_rf "doc"
end
desc "Generate public documentation pages for the API"
YARD::Rake::YardocTask.new(:api) do |t|
t.files = ['lib/**/*.rb']
t.options = %w{
--protected
--private
--verbose
--markup markdown
--readme README.md
--tag status
--transitive-tag status
--tag comment
--hide-tag comment
--tag dsl:"DSL"
--no-transitive-tag api
--template-path yardoc/templates
--files README_DEVELOPER.md,CO*.md,api/**/*.md
--api public
--api private
--hide-void-return
}
end
desc "Generate documentation pages for all of the code"
YARD::Rake::YardocTask.new(:all) do |t|
t.files = ['lib/**/*.rb']
t.options = %w{
--verbose
--markup markdown
--readme README.md
--tag status
--transitive-tag status
--tag comment
--hide-tag comment
--tag dsl:"DSL"
--no-transitive-tag api
--template-path yardoc/templates
--files README_DEVELOPER.md,CO*.md,api/**/*.md
--api public
--api private
--no-api
--hide-void-return
}
end
end
rescue LoadError => e
if verbose
- puts "Document generation not available without yard. #{e.message}"
+ STDERR.puts "Document generation not available without yard. #{e.message}"
end
end
diff --git a/util/README_UTIL.md b/util/README_UTIL.md
index e96747bda..e0b340c0a 100644
--- a/util/README_UTIL.md
+++ b/util/README_UTIL.md
@@ -1,76 +1,80 @@
Development Utilities
=====================
The scripts in this directory are utility scripts useful during development.
binary_search_specs.rb
----------------------
This script, written by Nick Lewis, is useful if you encounter a spec failure which only occurs when run in some sequence with other specs. If you have a spec which passes by itself, but fails when run with the full spec suite, this script will help track it down.
The puppet spec/spec_helper.rb checks for an environment variable LOG_SPEC_ORDER. If this is present, it will save the current order of the spec files to './spec_order.txt'.
This file is then used by binary_search_specs.rb so that:
$ ./util/binary_search_specs.rb spec/unit/foo_spec.rb
will begin bisecting runs before and after this spec until it narrows down to a candidate which seems to be effecting foo_spec.rb and causing it to fail.
+### with parallel-spec
+
+To reproduce the groups that the parallel task runs, run `be util/rspec_grouper 1000`. Then run each generated group file with `be util/rspec_runner <groupfile>`. If a group fails, rename that group file to spec_order.txt and run the binary search script, as sketched below.
+
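A rough end-to-end sketch of that workflow (assuming `be` is shorthand for `bundle exec`, and using `<groupfile>` as a placeholder for whichever group file rspec_grouper wrote):

```bash
# split the suite into groups of roughly 1000 specs, as the parallel task does
be util/rspec_grouper 1000

# run each generated group file until one of them fails
be util/rspec_runner <groupfile>

# bisect the failing group with the binary search script
cp <groupfile> spec_order.txt
./util/binary_search_specs.rb spec/unit/foo_spec.rb
```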
dev-puppet-master
-----------------
This script is very helpful for setting up a local puppet master daemon which you can then interrogate with other puppet 'app' calls such as puppet cert or puppet agent. I'm not sure who wrote it originally.
There are a few steps needed to get this configured properly.
* /etc/hosts needs a 'puppetmaster' added to its localhost entry
The dev-puppet-master script calls `puppet master` with --certname=puppetmaster, and this needs to resolve locally.
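For example (a sketch only; keep whatever else is already on your localhost line):

```
127.0.0.1   localhost puppetmaster
```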
You can execute the dev-puppet-master script with a name for the sandbox configuration directory (which will be placed in ~/test/master) or it will use 'default'.
* ./util/dev-puppet-master bar-env
(places conf and var info in ~/test/master/bar-env for instance)
You should now be able to do things like:
jpartlow@percival:~/work/puppet$ bundler exec puppet agent -t --server puppetmaster
Info: Creating a new SSL key for percival.corp.puppetlabs.net
Info: Caching certificate for ca
Info: Creating a new SSL certificate request for percival.corp.puppetlabs.net
Info: Certificate Request fingerprint (SHA256): 1B:DE:91:8C:AE:10:1B:18:0D:67:9D:4B:87:F1:26:19:6D:C6:37:35:F6:64:40:90:CF:FC:BE:8F:6F:C9:8D:D4
Info: Caching certificate for percival.corp.puppetlabs.net
Info: Caching certificate_revocation_list for ca
Info: Retrieving plugin
Info: Caching catalog for percival.corp.puppetlabs.net
Info: Applying configuration version '1374193823'
Info: Creating state file /home/jpartlow/.puppet/var/state/state.yaml
Notice: Finished catalog run in 0.04 seconds
For an agent run (or any other command that needs to contact the server), you must specify '--server puppetmaster'.
To check the puppetmaster's certs, you instead would need to specify the confdir/vardir:
jpartlow@percival:~/work/puppet$ bundler exec puppet cert list --all --confdir=~/test/master/default --vardir=~/test/master/default/
+ "percival.corp.puppetlabs.net" (SHA256) 0D:8D:A4:F1:19:E3:7A:62:ED:ED:21:B4:76:FE:04:47:50:01:20:4A:04:48:09:3A:1A:98:86:4A:08:8D:46:F0
+ "puppetmaster" (SHA256) B9:F5:06:F4:74:3B:15:CE:7C:7B:A6:38:83:0E:30:6A:6F:DC:F4:FD:FF:B1:A9:8A:35:12:90:10:26:46:C2:A6 (alt names: "DNS:percival.corp.puppetlabs.net", "DNS:puppet", "DNS:puppet.corp.puppetlabs.net", "DNS:puppetmaster")
### Curl
For simple cases of testing REST API via curl:
* edit ~/test/master/<name>/auth.conf and add `"allow *"` to `"path /"` (see the sample rule below)
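A minimal rule for that (a sketch against the stock path/auth/allow rule format; tighten it for anything beyond local testing):

```
path /
auth any
allow *
```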
Now you should be able to:
```bash
jpartlow@percival:~/work/puppet$ curl -k -H 'Accept: text/pson' https://puppetmaster:8140/main/resource/user/nobody
{"type":"User","title":"nobody","tags":["user","nobody"],"exported":false,"parameters":{"ensure":"present","home":"/nonexistent","uid":65534,"gid":65534,"comment":"nobody","shell":"/bin/sh","groups":[],"expiry":"absent","provider":"useradd","membership":"minimum","role_membership":"minimum","auth_membership":"minimum","profile_membership":"minimum","key_membership":"minimum","attribute_membership":"minimum","loglevel":"notice"}}
```
For more complex authorization cases you will need to reference the agent's keys:
```bash
jpartlow@percival:~/work/puppet$ curl -H 'Accept: text/pson' --cert `puppet agent --configprint hostcert` --key `be puppet agent --configprint hostprivkey` --cacert `be puppet agent --configprint localcacert` https://puppetmaster:8140/foo/node/percival.corp.puppetlabs.net
```
diff --git a/util/dev-puppet-master b/util/dev-puppet-master
index 92ec9c598..f2cfa32ec 100755
--- a/util/dev-puppet-master
+++ b/util/dev-puppet-master
@@ -1,23 +1,23 @@
#!/bin/sh
name="default"
if [ $# -gt 0 ]; then
name=$1
shift 1
fi
-
+
export dir="${HOME}/test/master/${name}"
mkdir -p ${dir}
if [ ! -f $dir/auth.conf ]; then
# Edit this file to change default puppet authorizations.
- cp ~/auth.conf "${dir}/auth.conf"
+ cp ./conf/auth.conf "${dir}/auth.conf"
fi
mkdir -p ${dir}/manifests
touch ${dir}/manifests/site.pp
# Work around Redmine #21908 where the master generates a warning if agent pluginsyncs
# and there isn't at least one module with a libdir.
mkdir -p ${dir}/modules/foo/lib
-bundle exec puppet master --no-daemonize --trace --autosign=true --debug --confdir=${dir} --vardir=${dir} --certname puppetmaster $@
+bundle exec puppet master --no-daemonize --autosign=true --confdir=${dir} --vardir=${dir} --certname puppetmaster $@