• Category Archives: Unix
  • RHEL Satellite 6 Installation & Configuration HOW-TO Part 1

    In the following blog posts I will publish tutorials on how to install and configure RHEL Satellite with all the bells and whistles possible. They will include detailed instructions on setting up each component and how to proceed from there.

    This post assumes a fully functional Red Hat Identity Management stack is present in the current network. The RHEL IdM solution will be used for all Linux clients deployed with Satellite. This includes DNS, NTP, users, groups, RBAC and HBAC capabilities. RHEL Satellite will manage PXE boot, DHCP and TFTP, making this a complete environment for a Linux client domain.

    This first post will be about the machine itself, how to set it up and what to do before starting this tutorial.

    First up, the machine hardware:

    Minimum requirements:

    • 2 CPU
    • 12 Gigabyte RAM
    • 4 Gigabyte SWAP
    • 64-bit CPU architecture

    Recommended requirements:

    • 4 CPU
    • 16+ Gigabyte RAM
    • 8+ Gigabyte SWAP
    • 64-bit CPU architecture


  • Systemd, the ugly.

    I have not yet voiced an opinion on Systemd, because it is new and therefore unproven technology. I first had to experience it all and see what's up with this new tool that is supposed to fix all the problems of init.

    One word: crap. It isn't up to the job. Init still rules my world; let me explain why.

    • I can’t reboot a host in trouble.

    I have no idea who ever thought coupling D-Bus to a system init program was a good idea, but out here in sysadmin land it is the worst decision ever. When I type reboot, I do not expect to see the following:


    What I expect is to see my host performing a reboot, taking emergency steps if some things fail. I do not expect to see my reboot canceled, forcing me to type the SysRq keys for "reboot even if system utterly broken", because this is usually a virtual machine. A virtual machine means I need to echo the commands to the kernel, which might or might not succeed.

    Link:  Reboot even if system utterly broken
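    For reference, the magic-SysRq fallback on a VM (where you cannot press Alt+SysRq on a physical keyboard) looks roughly like this. This is a sketch only, and it reboots the machine immediately, without syncing or unmounting filesystems:

```
# Enable the magic SysRq interface, then trigger an immediate reboot.
# WARNING: 'b' reboots instantly, without syncing or unmounting filesystems.
echo 1 > /proc/sys/kernel/sysrq
echo b > /proc/sysrq-trigger
```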





  • Linux workstation and DNS servers

    Dear Readers,

    I am now blessed with a full 100% Linux laptop at work. The laptop also travels between two or three clients with various network settings, all with their own DNS servers.

    Usually, DHCP takes care of that and you haven’t got a care in the world.

    Until now… I now have a client with a few VPNs that can be turned on and off for various tasks, and all of them push DNS settings to /etc/resolv.conf, which makes other domains unreachable.

    So the idea popped into my head: how to have different DNS servers for different domain names on my laptop?

    The answer was stupidly simple: install your own DNS server and use it as your resolver!

    See the following example:

    [root@laptop:/root] # cat /etc/resolv.conf
    domain first.domain.com
    search second.domain.com third.domain.com
    nameserver 127.0.0.1
    [root@laptop:/root] # tail -50 /etc/named.conf
    zone "." IN {
            type forward;
            forwarders { 192.0.2.1; };   // placeholder: the DNS server that DHCP gives you
    };
    zone "second.domain.com" IN {
            type forward;
            forwarders { 192.0.2.2; };   // placeholder: a nameserver for second.domain.com
    };
    zone "third.domain.com" IN {
            type forward;
            forwarders { 192.0.2.3; };   // placeholder: a nameserver for third.domain.com
    };

    Add more domains as you like. This solution will work for as many as eight of them, because /etc/resolv.conf only supports eight domain names in its search option.

    Of course you can add more than eight, but then you will always have to type the FQDN for domains nine and higher.
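    If you end up adding zones often, a tiny helper can stamp out the stanzas for you. This is a sketch only, with a made-up domain and an RFC 5737 placeholder address:

```shell
#!/bin/sh
# Hypothetical helper: print a BIND forward-zone stanza for a domain and a forwarder IP.
forward_zone() {
    printf 'zone "%s" IN {\n' "$1"
    printf '        type forward;\n'
    printf '        forwarders { %s; };\n' "$2"
    printf '};\n'
}

# Placeholder domain and address; substitute your own, then append to named.conf.
forward_zone "fourth.domain.com" "192.0.2.4"
```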


  • OpenStack and iSCSI

    Dear Readers,

    My own datacenter had to go into 'dark' mode because of imminent maintenance on the power source.

    This led me to shut down all VMs on my network, then the OpenStack compute nodes, and finally the physical machine (in that order).

    When the OpenStack nodes booted again, one isolated test compute/storage node named yggdrasil wouldn't boot a single VM… and to my horror, it ran the production IdM for all my hosts.

    That meant none of my hosts were able to find each other, and slowly all VMs dropped into a root shell because iSCSI couldn't find their hard drives… woe is me!

    It turns out I had run into two bugs at the same time.

    Bug 1: The OpenStack cluster didn't preserve its configured ACLs on the iSCSI device nodes.

    Bug 2: My LVM devices were ‘discovered’ by the hardware node, so I couldn’t use targetcli to re-add them.

    The path to discovering how this came to be.

    First hint: I saw that all the ACLs were missing, which is why I got permission-denied errors when trying to use iscsiadm to log in to targets.

    Since I had no comparison material, I had to find out the iSCSI initiator name first. Luckily it is saved in the file /etc/iscsi/initiatorname.iscsi.
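    Extracting the initiator name is a one-liner. A sketch with a made-up IQN, reading from a copy of the file:

```shell
#!/bin/sh
# Sample of what /etc/iscsi/initiatorname.iscsi typically contains (made-up IQN).
cat > /tmp/initiatorname.iscsi <<'EOF'
InitiatorName=iqn.1994-05.com.redhat:a1b2c3d4e5f6
EOF

# Strip the key so only the IQN itself remains.
IQN=$(sed -n 's/^InitiatorName=//p' /tmp/initiatorname.iscsi)
echo "$IQN"
```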

    So I used that name and targetcli to re-add all the ACLs. Now iscsiadm happily discovered a few nodes, but not all of them yet. Unfortunately, the discovered iSCSI drives were not part of my IdM server, so I had to continue debugging.

    Then I noticed something horrible: one of my commands had failed because the logical volume (LVM) was "in use" and therefore could not be added as a backstore from targetcli.

    It cost me several days of googling until I finally hit the right answer.

    lsblk …. YEP

    The logical volume was supposed to be a guest's hard disk, which contained another LVM configuration, and stupidly enough my hardware node had activated that LVM configuration.

    So I deactivated the LV with lvchange -a n /dev/system_vg/swap_lv.

    I was lucky that my hardware node uses root_vg and not system_vg, or the mess might have been bigger!

    And then I added a new filter entry, 'r|/dev/cinder-volume.*|', to my lvm.conf so this cannot happen again.
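    For completeness, the relevant lvm.conf fragment looks roughly like this; the regex is the one from this post, and the exact device path to reject depends on how your Cinder volume group is named:

```
# /etc/lvm/lvm.conf -- reject scanning of Cinder-backed block devices,
# so the hardware node never activates the guests' volume groups.
devices {
    filter = [ "r|/dev/cinder-volume.*|" ]
}
```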

    After this I could add the backing store, create the LUN and the ACL, and add it all to the portal; now iscsiadm would happily see all my LUNs. The others that were still missing had solved themselves automagically.
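    The repair sequence can be sketched in targetcli terms as follows; the backstore, target IQN and initiator name here are placeholders, not the ones from my setup:

```
# Re-add the LV as a block backstore (placeholder names throughout).
targetcli /backstores/block create name=vol1 dev=/dev/cinder-volumes/vol1
# Export it as a LUN under the target's TPG.
targetcli /iscsi/iqn.2003-01.org.linux-iscsi.host:target1/tpg1/luns create /backstores/block/vol1
# Re-create the ACL for the initiator found in /etc/iscsi/initiatorname.iscsi.
targetcli /iscsi/iqn.2003-01.org.linux-iscsi.host:target1/tpg1/acls create iqn.1994-05.com.redhat:a1b2c3d4e5f6
```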

    I booted my IdM and, lo and behold, everything came up again and my world started spinning.




  • What Satellite is still lacking in features that would make it even more excellent.

    In this post I am going to talk about what RHEL Satellite is still lacking and why that is a big deal. Implementing these would probably mean even heavier specs for the RHEL server that's running Satellite, but I still think they would be valuable add-ons.

    • Puppet Database (puppetdb).
    • Remote Execution (planned in 6.2)
    • Openscap maturity
    • Better Puppet modules management
    • Puppet updated to 4.0


    Using a Puppet database would make many things available that are now a pain to implement in Puppet / Satellite.

    I am talking about Puppet exported resources. If I want to use Satellite now to provision, say, a Nagios monitoring host, I can't use Puppet exported resources to allow for easy management of client hosts. I will have to either edit the Nagios host manually (ouch!) or query the Foreman database for which hosts exist and what properties I need to assign to complete my monitoring clients.

    PuppetDB stores all facts and the most recent report from its client nodes. It also stores 14 days of history (a few config changes), which means that functionality can be chosen to live inside Satellite or inside PuppetDB.
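    For the curious, the exported-resources pattern that PuppetDB would enable looks roughly like this; a sketch using the stock nagios_host type, with the 'generic-host' template as a placeholder:

```puppet
# On every monitored node: export a nagios_host resource (note the @@).
@@nagios_host { $::fqdn:
  ensure  => present,
  address => $::ipaddress,
  use     => 'generic-host',
}

# On the Nagios server: collect every nagios_host exported above.
Nagios_host <<| |>>
```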

    Remote Execution

    Remote execution needs little explaining, but after checking the RHEL Satellite 6.2 roadmap, I see remote execution is already planned, so the point became moot 🙂

    Openscap Maturity

    What I mean by this point is that the current OpenSCAP implementation in Satellite is a bit sloppy. It doesn't completely implement the OpenSCAP editor tools, and you need to know quite a lot about OpenSCAP to write your own client policies and assign them to hosts. Of course you can download other people's policies from the Internet and assign those, but then they aren't audited by your administration to make sure they fit company policy. Furthermore, OpenSCAP currently crashes on me, leaving all reports on Satellite's hard disk and not loaded into the GUI. Finding the faulty report is a pain, and I have no idea why it is rejected; browsing it with a text browser, it seems fine.

    Better puppet modules management.

    Currently Puppet modules are imported as-is and assigned to content views and lifecycle management. However, despite the version management of the content views, it is sometimes still unclear where Puppet errors (duplicate class definitions, strange variables that appear and disappear because of previous versions in other content views) come from. Whenever I encounter such an error, I try to push the most recent Puppet module version to all my content views, to be rid of old versions quickly.

    Update to Puppet 4.0

    Puppet 4.0 has many new features that are a boon to system administrators like myself.

    • Iteration over arrays / hashes
    • Bugs in Facter solved (primary Ethernet interface, anyone?)
    • Package collections. Read more about those on the Puppet blog
    • Better Git integration
    • Robot 10000 (r10k)
    • HEREDOC support
    • Better type support
    • Better performance on the Puppet server and clients (faster catalog compiles)
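    As a taste of the first bullet, Puppet 4 iteration lets you loop instead of abusing defined types; a sketch with made-up package names:

```puppet
# Puppet 4: iterate over an array to declare one package resource per entry.
['vim-enhanced', 'tmux', 'git'].each |String $pkg| {
  package { $pkg:
    ensure => installed,
  }
}
```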


  • Puppet & Satellite , the ENC explained.

    Satellite has the capability to make facts available to Puppet clients through its ENC.

    This process, however, is not documented well in Satellite's nor in Puppet's documentation.

    Take for example a class on the Puppet Forge that requires some input for optional couplings:


    This class has some optional parameters:

    # $ldap = {
    #   hostname      => 'ldap.example.com',
    #   ssl           => true,
    #   port          => '636',
    #   dn            => 'o=example',
    #   bind_dn       => "cn=admin,ou=system,o=example",
    #   bind_password => "admin123",
    #   admin_user    => "sysadmin",
    #   guest_user    => "guest"
    # }

    Many modules on the Puppet Forge have code like this: a hash with values that should be enabled.

    Puppet initiates who have not yet fully grasped Satellite might implement it as follows.

    Create a new class, for instance:

      class oliekoets_archiva {
        class { 'archiva':
          ldap => {
            hostname => 'ldap.oliekoets.nl',
            ssl      => true,
            dn       => 'cn=users,cn=accounts,dc=oliekoets,dc=nl',
          },
        }
      }

    This, however, is completely unnecessary in Satellite with the ENC.

    Within Satellite, override the archiva::ldap variable and change its type from 'string' to 'hash'.

    Now the hash type accepts any valid JSON or YAML input, as long as it validates to a legal Puppet object.

    Here we go:

    hostname:
        - ldap.oliekoets.nl
        - ldap2.oliekoets.nl
    ssl: true
    port: 636
    dn: cn=users,cn=accounts,dc=oliekoets,dc=nl

    Double-check by asking Satellite to display its YAML ENC:

        hostname:
            - ldap.oliekoets.nl
            - ldap2.oliekoets.nl
        ssl: true
        port: 636
        dn: cn=users,cn=accounts,dc=oliekoets,dc=nl

    And that's all there is to it.



  • TheForeman & OpenStack

    Quick note to boast about success!

    Although my business isn't all that big, I did find enough cash to finally start building Oliekoets' datacenter up to a full-fledged private cloud.

    Considering I have no wish yet to pay licensing fees of any kind, I implemented it all with freely available open-source software.

    CentOS, Foreman, Katello & OpenStack.

    Thanks all! You guys rock. Time to start building my virtual machines.



  • Puppet & Satellite part 2 , Server side facts!

    So today I am talking about server-side facts.

    These are facts a client cannot and does not gather by itself. They are defined on the Puppet server, which is usually the Satellite server running Foreman as well.

    Unfortunately, Puppet within Satellite does not allow the easy creation of server-side facts. You can set global & smart variables, which can be made static and thus provide semi-facts, but nothing really sticks out as convenient to use.

    So far, I haven’t had the need to create server-side facts because my classes would reconfigure themselves easily through smart variables that can be overridden for specific servers, host groups or even globally.


    • Facts can be declared in a Puppet class, but this would be a client-side fact.
    • Facts can also be declared as a global variable; this is a good example of a server-side fact.

    Server-side facts can include, for example: the hostname of your mail relay, the hostnames of your IdM servers, the NTP/DNS servers, or any other fact really. Some facts are already pushed through by Satellite itself: the lifecycle environment of a host, the Puppet environment, the organization, the host group of the host and a few others. These can be viewed by selecting a host and pushing the "YAML" button. All custom facts you include yourself by means of global variables are also present in this YAML display.
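    Consuming such a global variable in a class looks roughly like this; mail_relay is a hypothetical variable name you would have defined in Satellite yourself, not a built-in, and the file path is a placeholder:

```puppet
# 'mail_relay' is a hypothetical Satellite global variable, exposed by the
# ENC as a top-scope variable on every host.
class profile::mailclient {
  $relay = $::mail_relay

  # Placeholder file: writes the relay hostname somewhere your mail config can read it.
  file { '/etc/mail_relay.conf':
    ensure  => file,
    content => "${relay}\n",
  }
}
```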

    Lastly, facts can be declared from the Puppet master configuration directory. Since I have never used them, I don't know whether Foreman displays them inside the web interface or the hammer interface of the host.

    = Mark =

  • Puppet & Satellite part 1 , Facter!

    Puppet is a wonderful tool, and as Foreman/RH Satellite now supports Puppet out of the box, it is time to explore. For the purpose of this blog post, I shall assume
    RH Satellite == Foreman!

    The first thing to note is that Puppet is severely under-documented in the Red Hat and Foreman documentation. Little is provided, and even those little things create more questions than they answer. I will write a few blog posts on this very subject, of which this is post 1.

    First topic in this series: facts!

    The Foreman project adds facts to the Satellite server through Facter, which is part of Puppet. But what if these facts aren't complete? It's all nice and dandy to know standard stuff like a host's IP address, hostname and CPU, but what if we need to know, for instance, the names of the IdM servers for the NTP configuration that has been integrated into Satellite? You won't find those anywhere.

    First, let's apply a few logic rules to determine whether we need this new fact, or would rather declare it as a variable in some class::param construct.

    When to use a fact, in my opinion:

    – It never changes! This seems obvious, but I have seen administrators get this wrong; I have seen guys write Ruby code to change a fact almost every run! Don't do this, people: if something isn't solid, use a class::param and not a fact! (For instance, the system time 🙂 is not a fact, guys; it's completely variable! It changes as fast as the clock resolution of your computer!)

    A fact will be compiled either master-side or client-side. Facts created on the Puppet master might include, for instance, the NTP servers to use. Client-side facts might be dependent on the environment, like the NFS home server of your cloud environment. Production might have a different one than Development, but other than that it is static, so it can be put into a fact.
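    A client-side custom fact can be sketched as a Facter snippet like the following; the fact name and hostname are made-up, and real code would derive the value from the node's environment rather than hard-coding it:

```
# Custom client-side fact (sketch): expose the NFS home server for this node.
# 'nfs_homeserver' is a hypothetical fact name; the value here is a placeholder.
Facter.add(:nfs_homeserver) do
  setcode do
    'nfs01.example.com'
  end
end
```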

    Next post: Server-side facts.

  • RHEL Satellite access

    Today I am going to talk about (remote) access with Satellite.

    Apparently, there are a few things that you must know in order to get stuff working correctly.

    First of all: the Red Hat Identity Manager <-> Satellite coupling for user accounts.
    When you create the coupling as an external LDAP source in Satellite, by default users get put in the Anonymous group with very few rights within Satellite. Luckily, you can also provide a group base DN for Identity servers, which can then be used to assign groups in Satellite.


    So create a user group (for instance: Admins) in the Satellite user interface. Then, in the third tab (external groups), assign a coupling between a Red Hat Identity Manager (IdM) group and the local Admins group. The source, however, must be set to "External" instead of your identity server; I am not sure whether this is a bug or works as designed. Now users who log in with the correct LDAP group will automatically be added to the new Admins group in Satellite. Finally, you can assign rights (or even check the full admin checkbox) to the Admins user group, and remote access is done.

    Now a short paragraph about local web-interface access as the default admin account:

    When Satellite needs reconfiguring or reinstalling, Red Hat notes that the admin password gets reset to a default password and you will simply have to change it again. This is not entirely true. You can put the password of the default admin user in /etc/katello-installer/answers.katello-installer.yaml, but doing so is a security risk according to some people. I note that if you have root access, the security risk of this file is negligible, because you can simply run katello-installer without any arguments and it will print the admin password on the console after a successful completion.

    – Mark.