Discussion:
Why IP addresses and not hostnames?
Alan Bunch
11 years ago
First of all Thank You to all who have created and contributed to Icinga.
I'm just getting here and hope to be helpful in the future.

I have used Nagios for many years in smaller deployments, and in the past I
have never had to set the address property in a host definition. I have
always depended on DNS to resolve the address. I have found that all of the
command definitions in Icinga 2 use the address rather than the hostname.

I was wondering if someone was aware of the thinking behind this decision.
Please note that I am NOT QUESTIONING the decision. I am just interested
in the rationale behind it. I do my best not to hard-code IP addresses
anywhere but in DNS zone files, and was wondering if I was missing some
requirement of Icinga to have the addresses in the config files.

TIA
Alan
Russell Van Tassell
11 years ago
Simply put, if your DNS or network infrastructure starts to fail, it's best to
make sure your monitoring doesn't also fall over and render you even more
completely blind. It's a design / best-practices decision you don't have to
follow... you just better be 1000% sure you'll never see an instance where
your zone is corrupt or DoS'd to the point that your resolver fails to work.
--
Russell M. Van Tassell
russell-***@public.gmane.org

...
Michael Friedrich
11 years ago
That's just an attribute. If your address should be the FQDN instead, then
put it there. Or tell your service object that the address should be
overridden with host.name or any other custom setting resolved at runtime.

apply Service "blub" {
  vars.address = "$host.name$"   // runtime macro, resolved at check execution
  ...
}

The reason why we removed the hack provided by Nagios/Icinga 1.x is simple:
$HOSTADDRESS$ should resolve to the host attribute 'address'. If there
is no address attribute defined, the old core automagically uses the
HOSTNAME. This leads to funny debug sessions if the hostname is *not* an
FQDN. Users won't see the workaround hack and will question the address
macro. Trust me, I've seen that too many times doing Icinga support for
5+ years now.
Therefore there's a clear strategy with Icinga 2: use the proper macros
and stay safe. You'll get a context including a warning in the logs if your
macros do not resolve upon check execution.
And you can define default values in your CheckCommand if you put the FQDN
in a custom attribute, for instance.

object CheckCommand "my-check" {      // name is just a placeholder
  import ...
  vars.address = "$host.vars.fqdn$"   // from a host custom attribute
  //vars.address = "$fqdn$"           // service or host custom attribute
  ...
}

After all, you could also define your own commands and pass the $host.name$
macro as an FQDN. But that's not the recommended way for users and
beginners.

I myself don't trust the DNS system; caching and possible resolution
issues will hide or influence the real problem. I have had too much
trouble in my previous job in Vienna, also managing the ccTLD for .at ...

Kind regards,
Michael
...
c***@public.gmane.org
11 years ago
Using the FQDN instead of the IP address can lead to false positives in certain circumstances.

If you have a problem with DNS, it may appear that you have a problem with various hosts when they are actually fine. Using the IP address eliminates that problem.


Werner Flamme
11 years ago
Post by c***@public.gmane.org
Using the FQDN instead of the IP address can lead to false positives
in certain circumstances.
If you have a problem with DNS, it may appear that you have a problem
with various hosts when they are actually fine. Using the IP address
eliminates that problem.
That's why there is always a dnsmasq server running on my monitoring
host. The additional entries for the monitored hosts are scraped
together by a little awk routine, since the config is in the file system.
Getting them from a database should not be more complicated.
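
Just to illustrate the idea (this is not the actual script): assuming the
hosts are defined Icinga 2 style as object Host "name" { ... address =
"x.x.x.x" ... } in a single file such as /etc/icinga2/conf.d/hosts.conf,
with spaces around the "=", and that dnsmasq reads the result via
addn-hosts=/etc/hosts.monitoring, such a routine can be as small as

awk '
  /^object Host /         { gsub(/"/, "", $3); host = $3 }          # remember the host name
  /^[ \t]*address[ \t]*=/ { gsub(/"/, "", $3); print $3 "\t" host } # emit "IP<TAB>name"
' /etc/icinga2/conf.d/hosts.conf > /etc/hosts.monitoring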

Just my 2¢
Werner
--
Alan Bunch
11 years ago
...
This gets to the root of my concern. If I had hundreds or thousands of
hosts, maintaining IP addresses in two places would seem to be unworkable.
Really, maintaining two copies of anything at scale would seem to be a bad
idea. I completely agree and understand that monitoring needs to work
in the face of a DNS failure or DDoS attack.

How are others addressing this issue? DNS with an LDAP backend? A
database backend? Using that to generate or drive DNS and Icinga configs?
How often do you generate a new Icinga config?

I understand there are not necessarily "right" or "wrong" answers, just
different ways to solve the problems at hand.

How are you handling that concern?

Alan
Werner Flamme
11 years ago
...
Hi Alan,

we use 2 SOLID appliance boxes in a failover construction.
Unfortunately, they sometimes forget addresses of non-Windows hosts - we
even had the case that the zone files were emptied completely.

To make sure the main DNS works properly, I defined a service check that
tests DNS for each host:

check_dns -H $HOSTNAME$ -s $OUR_DNS_IP$ -a $HOSTADDRESS$

Now I can reach any host because of my own DNS, but get alarms for
non-working DNS.
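
In Icinga 2 syntax, the same idea could presumably be written as an apply
rule - assuming the ITL "dns" CheckCommand also exposes the plugin's -s
option as dns_server and that the Host object names are FQDNs; the
resolver IP below is only a placeholder:

apply Service "dns-resolves" {
  import "generic-service"
  check_command = "dns"

  vars.dns_lookup = host.name             // assumes the Host object name is the FQDN
  vars.dns_server = "192.0.2.53"          // placeholder: your own DNS server
  vars.dns_expected_answer = host.address // compare against the configured address

  assign where host.address               // only hosts that have an address set
}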

The config is changed quite often, since there is always one change or
another in the monitored landscape. And when the config is replicated to
the failover monitoring host, the new hosts file (/etc/hosts.monitoring)
is created from the hosts.cfg file.

Regards,
Werner
--
Michael Friedrich
11 years ago
...
Using Icinga 2, you can easily define a global host-to-service and
service-to-service dependency like so. Say that "global-dns-health" is
your service on host "global-dns-server", targeting DNS resolution.
If it fails, you don't want to be alerted about anything.

Something like

object Host "global-dns-server" {
import "generic-host"
address = "131.130.1.11"
}

object Service "global-dns-health" {
import "generic-service"
host_name = "global-dns-server"
check_command = "dns"
vars.dns_lookup = "www.univie.ac.at"
vars.dns_expected_answer = "131.130.70.8"
}

apply Dependency "global-dns-failure-host" to Host {
//import "..."
parent_host_name = "global-dns-server"
parent_service_name = "global-dns-health"

states = [ Up ]
disable_checks = true
disable_notifications = true

assign where match("*", host.name)
}


apply Dependency "global-dns-failure-service" to Service {
//import "..."
parent_host_name = "global-dns-server"
parent_service_name = "global-dns-health"

states = [ OK ]
disable_checks = true
disable_notifications = true

assign where match("*", service.name)
}


kind regards,
Michael

simon
11 years ago
...
I do something similar with NRPE:
command[dns]=/usr/lib/nagios/plugins/check_dns -s x.x.x.x -H `hostname -f` -a `hostname -i`

You have to keep IP address information in two places anyway. This check
deals with both.

As others have said, I would not want to use domain names in place of IP
addresses for host/service checks - not only because of false positives,
as those can be taken care of with proper dependency configuration, but
also because if DNS goes down, then many other checks cannot be executed.

With Icinga, I've pretty much adopted the mindset of "test-driven
development" from the software development world and applied it when
designing networks. So while it would seem that generating Icinga
configs from an authoritative DNS lookup is a cool idea, I have been
working the other way around - not sure if that suits other situations.
It's been working well for me thus far.

So when adding a host, I start with Icinga - even before provisioning
the hardware/VM or setting up the OS/services.
